Conceptual Flow Models | Ansys Innovation Courses

This lesson covers the fundamental aspects of flow through porous media, focusing on the definition of a porous medium, its characteristic parameters, and its applications. It delves into Darcy's law, explaining the relationship between pressure gradient and velocity in fluid flow. The lesson also discusses the different types of pores (macro, meso, and micro) and their impact on the flow mechanism. It further explains the concept of permeability, its unit (the darcy), and its relation to flow rate, viscosity, and pressure gradient. The lesson concludes by highlighting the importance of understanding these concepts for applications such as oil recovery and sequestration of chemicals.

Video Highlights
00:53 - Explanation of Darcy's law and its relation to pressure gradient and velocity
10:16 - Discussion of different types of pores and their impact on the flow mechanism
24:19 - Explanation of permeability and its unit, the darcy
28:15 - Discussion of the application of these concepts in various fields

Key Takeaways
- Darcy's law relates the pressure gradient to the velocity of fluid flow.
- The type of pores (macro, meso, and micro) significantly impacts the flow mechanism.
- Permeability, measured in darcys, together with the fluid's viscosity and the pressure gradient, determines the flow rate.
- These concepts find extensive applications in fields like oil recovery and sequestration of chemicals.
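The relationship stated by Darcy's law can be sketched numerically. The Python snippet below is a minimal illustration (the variable names and sample values are my own, not taken from the lesson): it computes the superficial (Darcy) velocity from a permeability given in darcys, a viscosity, and a pressure gradient.

```python
# Darcy's law: q = -(k / mu) * dP/dx, where q is the superficial
# velocity, k the permeability, and mu the dynamic viscosity.
# Sample values below are illustrative, not from the lesson.

DARCY = 9.869233e-13  # square meters per darcy

def darcy_velocity(k_darcy, mu, dp_dx):
    """Superficial velocity (m/s) for permeability k_darcy (darcys),
    viscosity mu (Pa*s), and pressure gradient dp_dx (Pa/m)."""
    return -(k_darcy * DARCY / mu) * dp_dx

# 1 darcy medium, water-like viscosity, 1 bar per meter pressure drop:
q = darcy_velocity(1.0, 1.0e-3, -1.0e5)
# q is about 9.87e-5 m/s, flowing toward decreasing pressure
```

The sign convention makes the velocity positive when the pressure decreases along x, which is the usual statement of the law.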
{"url":"https://innovationspace.ansys.com/courses/courses/introduction-to-flow-through-porous-media/lessons/conceptual-flow-models-lesson-2/","timestamp":"2024-11-08T21:43:51Z","content_type":"text/html","content_length":"173932","record_id":"<urn:uuid:e2742d82-1fb9-439e-9da0-9fc55226f4c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00746.warc.gz"}
f'(x) at a specific "x" on Prime

When using the template in CAS to type in f'(sin x) where x=pi/3, I get the expected 1/2. In Home, if 0 is stored in X, the same derivative evaluates as 0. Why doesn't using the "where X=pi/3" temporarily override the 0 that was stored in X to produce the expected 1/2?

11-30-2013, 12:35 AM
If you use the form SLOPE(SIN(X),pi/3), you get the correct answer.

11-30-2013, 01:06 AM
Thanks. How do I know when it is appropriate to use the "where X=" template? Is this meant mainly for CAS?

11-30-2013, 01:17 AM
Well, basically the ' derivative is the CAS function diff, which is primarily a symbolic operation, so you are going to have problems using it outside of CAS. Capital X is a reserved real-number variable, whereas small x is just a symbol that can take on a numeric value. SLOPE, on the other hand, is a non-CAS operation that is numeric in nature, hence the format where you assign a numeric value to the point where you want to find the derivative. You can still use SLOPE in CAS if you want, as well as in Home. Small-letter operations and functions like diff are typically CAS, whereas capital-lettered ones like SLOPE are non-CAS.

11-30-2013, 09:59 AM
Richard, by "using the template," I assume you mean you are using the derivative template on the math template key rather than typing "diff" on the entry line. True? If so, I don't see how you are able to add a "where X=pi/3" with the template. What exactly did you mean by that? I've been evaluating derivatives at specific points in CAS View. Have you found a better way using the templates? I was getting so many uppercase-lowercase errors that I just stay in CAS now unless I'm just doing simple "scientific calculator" math. If I want an approximate answer instead of a symbolic one, I use Shift+Enter instead of Enter rather than going back to Home View.

11-30-2013, 10:13 AM
Which is why I will use a non-CAS alternative such as SLOPE if it is available. It works without any drama anywhere, including programs, and with no worrying about upper case vs. lower case, quotes, local vs. global variables, etc. I find CAS to be most useful when seeking purely symbolic results.
Edited: 30 Nov 2013, 10:14 a.m.

11-30-2013, 10:30 AM
I used the template key, chose the "where f(x) =" choice, and filled the template in (choosing the template again to enter the function to differentiate), then filled in the "where X=" part. This way of doing it worked in CAS, but the "where X=" option would not override whatever had been stored in X (0, for example) in Home View. I could use the template for differentiating an expression in Home, and simply store my value of interest in X, and it would work. It seemed strange that the "where X=" template wouldn't temporarily override the stored X value. If I didn't recognize an answer as wrong, I would be misled by this procedure not working in Home.

11-30-2013, 10:44 AM
Sorry, I still don't understand. I am in CAS View and have the math template key pressed now (the key with "Units" and "C" on it), and there is no "f(x) = choice" kind of option, nor is there a "where X=" part. Precisely what key are you pressing to get this?

11-30-2013, 10:51 AM
You have to first enter the derivative template (fourth item on the first row), make the entries, then enter the where bar |, after which the where options will appear as you enter them.

11-30-2013, 10:51 AM
You are right there... the "where variable equals" option is the choice on the template menu immediately left of the differentiate choice. This works fine in CAS, but the "where variable equals whatever" option will not override whatever had been stored in X in Home, which I find strange, and misleading to a student who might try this in Home View... a correct answer for whatever had been stored in X will come up in Home instead.

11-30-2013, 11:10 AM
Got it. Thanks. By the way, I get exactly the same behavior as you do in Home View.

11-30-2013, 11:34 AM
Quote: When using the template in CAS to type in f'(sin x) where x=pi/3, I get the expected 1/2. In Home, if 0 is stored in X, the same derivative evaluates as 0. Why doesn't using the "where X=pi/3" temporarily override the 0 that was stored in X to produce the expected 1/2?

What exactly did you mean by f'(sin x)? I would interpret this as: f(x) is a predefined function, and we wish to evaluate the derivative of f(x) at sin(x). Perhaps you meant simply to take the derivative of f(x) = sin(x), and evaluate the result at x=pi/3? In CAS view, type:

f(x) := sin(x);
g := f';

Now, whether you are in HOME or CAS, you can simply type the call to g to get the correct result.

With respect to using the templates: as you have noticed, the behavior is different in CAS and HOME view. It's just how things were designed on the HP Prime. For symbolic manipulation and exact evaluation, use CAS view.

11-30-2013, 02:26 PM
Thanks for the discussion and working solution. I do hope for more intuitive consistency between Home and CAS. By offering the template option of "where variable equals", it implies that this has been offered to me as a handy tool to accomplish this sort of task. If it won't work in Home, it should perhaps be grayed out. In any case, consistency between CAS and Home should be a goal for intuitive use of this wonderful creation! I remember wondering with a friend while a senior in high school... 1971-1972... how wonderful it would be for such an instrument to be possible!
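The distinction the thread draws between the symbolic diff and the numeric SLOPE can be illustrated outside the calculator. The Python sketch below (not Prime code; the function name is mine) shows the kind of central-difference approximation a numeric slope function computes, evaluated at the thread's example point x = pi/3:

```python
import math

def slope(f, x, h=1e-6):
    # Central-difference approximation to f'(x): a purely numeric
    # operation, analogous in spirit to the Prime's non-CAS SLOPE.
    return (f(x + h) - f(x - h)) / (2 * h)

d = slope(math.sin, math.pi / 3)
# d is approximately cos(pi/3) = 0.5, the value the thread expects
```

Unlike a symbolic differentiator, this needs no notion of a free variable, which is why a numeric slope routine is immune to the uppercase-X/lowercase-x distinction discussed above.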
{"url":"https://archived.hpcalc.org/museumforum/thread-257361-post-257400.html#pid257400","timestamp":"2024-11-14T01:03:50Z","content_type":"application/xhtml+xml","content_length":"53797","record_id":"<urn:uuid:a21e958b-1f65-4483-8010-9bc6f7a2efa0>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00080.warc.gz"}
Triangle Congruence Worksheet Answer Key - Wordworksheet.com

Triangle Congruence Worksheet Answer Key. Useful for revision, classwork and homework. One reason so much attention is given to congruent triangles is that they underpin many geometry proofs. These worksheets ask students to determine whether the triangles in a pair are congruent, and to complete congruent-triangle proofs (e.g., congruent triangle proofs worksheet, page 3, #7). One sample problem: she draws each triangle so that two sides are 4 in.

Geometry worksheet: triangle congruence proofs. Determine the missing congruence property in a pair of triangles to confirm the theorem. Classify triangles as acute (all acute angles), right (one right angle), or obtuse, and draw two triangles that match each part of the Venn diagram below. The answer key is automatically generated and is placed on the second page.

If we enlarge one shape to make it larger or smaller, the shapes are said to be similar. The corresponding sides of similar shapes must be in the same proportion. Sample exercises: check whether two triangles PQR and STU are congruent; check whether two triangles PQR and CDE are congruent; check whether two triangles PQR and ABC are congruent.

A really nice activity for allowing students to understand the concepts of congruence of triangles. These worksheets, created by specialists, can be used regularly. In similar triangles, the ratios of the corresponding sides are equal. Lesson sequence: Skills Practice 4-2; Day 4, Isosceles and Equilateral Triangles; Day 5, Coordinate Triangle Proofs (D5 HW, pg. 22, #1); Day 6, Coordinate Triangle Proofs (D6 HW).

Implement this collection of PDF worksheets to introduce congruence of triangles. Classify: 1) obtuse isosceles, 2) obtuse scalene, 3) right isosceles. (The Gina Wilson "All Things Algebra" 2014 answer key, unit 5, is also referenced.) This tests the students' ability to understand congruent triangles. Members have exclusive facilities to download an individual worksheet, or an entire level.

Proving Triangles Congruent Worksheet Answer Key

Calculate the perimeter of triangle ABC; since this is a right-angled triangle, we can use Pythagoras! (Other topics in the series: algebra, finance, simple and compound interest review, solving linear equations.) Independent Practice 1 is a great activity for allowing students to grasp the ideas of similar and congruent figures; in Independent Practice 2, students classify figures as congruent or similar. The answers can be found below.

Key Phrases Relevant to Triangle Congruence, Part 1 Answer Key

Related worksheets: congruent triangles worksheet (doc); triangles worksheet, grade 5; Gina Wilson All Things Algebra answer key form; SSS triangle congruence worksheet, page 1; Kuta Software Infinite Geometry, Congruence and Triangles answer key; congruence and similarity questions with solutions; triangles, quadrilaterals and polygons worksheets (PDF).

Level 1 – Determine whether two triangles are congruent and give the reason.
Level 2 – Further questions on recognising congruency, ordered randomly.
Level 3 – Use your knowledge of congruent triangles to find lengths and angles.

This activity was designed for a high-school-level geometry class. If a second triangle is successfully formed, you will be asked whether the two are congruent. This download includes six different question sets for students to practice identifying the five major congruence types. Students can use math worksheets to master a math skill through practice, in a study group, or for peer tutoring. Use the buttons below to print, open, or download the PDF version of the worksheet. Each question set has 6 problems, and each set fits on a half sheet of paper to reduce the amount of paper needed when printing; the different versions make this versatile, so teachers may use it as an exit ticket or assessment instead of just independent practice.

A related construction shows how to draw the perpendicular bisector of a given line segment with compass and straightedge or ruler: this both bisects the segment and is perpendicular to it. (The perpendicular segment itself is often referred to as the altitude.) Make sure your work in each proof cites the postulate or theorem from the key on the last two pages. There is also a coloring activity for sixteen problems.

Attempt to prove these triangles congruent; if you cannot, due to a lack of information, it is time to take a detour.

Congruence of Triangles
• Congruent triangles are triangles that have the same size and shape. This means that the corresponding sides are equal and the corresponding angles are equal.
• In the diagrams above, the corresponding sides are a and d; b and e; c and f.
• The corresponding angles are x and s; y and t; z and u.

Learn CPCTC (corresponding parts of congruent triangles are congruent) with free interactive flashcards; choose from 31 different sets of CPCTC flashcards on Quizlet.

Check whether two triangles ABC and DEF are similar. Triangle PQR and triangle WXY are right triangles, because they both have a right angle. Note that if the three corresponding angles of two triangles are equal, the triangles are similar but not necessarily congruent; if three sides of one triangle are congruent to three sides of another, the triangles are congruent (SSS). Congruent figures superimpose on each other completely; for example, the top and bottom faces of a kaleidoscope are congruent.

The stepwise mechanism of these worksheets helps students become well versed with the concepts as they move on to more complex questions. Further topics: benefits of congruent triangle worksheets; angles of triangles and congruent triangles. On this page you can read or download the "Triangle Congruence Answer Key" (Gina Wilson) in PDF format. Circle the figure that is congruent to the first figure in the sequence. Learn about the properties used to identify the names of different regular and irregular polygons, such as triangles or hexagons. The similarity and congruence worksheets ask students to determine whether pairs of triangles are congruent in increasingly difficult geometrical situations; this work pack also comes with handy answer sheets to speed up marking.

Worksheet WS 4.4D (2004): Congruent Triangles #2. From the markings on the figures, decide whether the triangles are congruent by SSS or SAS.

Congruent triangles, KS3/KS4, non-calculator (PDF, 327.3 KB). Useful for revision, classwork and homework. A triangle congruence theorem like SSS states: if three sides in one triangle are congruent to three sides in another, the triangles are congruent. Answers may of course vary.
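The SSS criterion above is easy to state computationally: two triangles given by their side lengths are congruent exactly when the three lengths match as multisets, and similar when corresponding sides are in a common ratio. A short Python sketch (function names are mine, not from any worksheet):

```python
def congruent_sss(tri1, tri2, tol=1e-9):
    """SSS test: triangles given as triples of side lengths are
    congruent iff their sorted side lengths agree."""
    return all(abs(a - b) <= tol
               for a, b in zip(sorted(tri1), sorted(tri2)))

def similar_sss(tri1, tri2, tol=1e-9):
    """Similarity test: corresponding sides in the same proportion."""
    s1, s2 = sorted(tri1), sorted(tri2)
    k = s2[0] / s1[0]  # candidate scale factor
    return all(abs(b - k * a) <= tol for a, b in zip(s1, s2))

# A 3-4-5 triangle is congruent to a 5-3-4 triangle (same sides,
# listed in a different order), and similar, but not congruent,
# to a 6-8-10 triangle.
```

Sorting the sides first is what makes the check independent of which vertex is labelled first, mirroring how corresponding sides are matched up in a proof.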
{"url":"https://wordworksheet.com/triangle-congruence-worksheet-answer-key/","timestamp":"2024-11-05T16:47:43Z","content_type":"text/html","content_length":"80791","record_id":"<urn:uuid:e52f5b95-35c3-42a9-a053-24a015677c96>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00195.warc.gz"}
EViews Help: rgmprobs

Display regime probabilities for a switching regression equation.

eq_name.rgmprobs(options) [indices]

where eq_name is the name of an equation estimated using switching regression. The elements to display are given by the optional indices corresponding to the regimes (e.g., "1 2 3" or "2 3"). If indices is not provided, results for all of the regimes will be displayed.

Options:
type=arg (default="pred") — Type of regime probability to compute: one-step ahead predicted ("pred"), filtered ("filt"), smoothed ("smooth").
view=arg (default="graph") — Display format: multiple graphs ("graph"), single graph ("graph1"), sheet ("sheet"), summary ("summary").
prompt — Force the dialog to appear from within a program.
p — Print results.

Examples:

equation eq1.switchreg(type=markov) y c @nv ar(1) ar(2) ar(3)
eq1.rgmprobs

displays two graphs containing the one-step ahead regime probabilities for the Markov switching regression estimated in EQ1.

eq1.rgmprobs(type=filt) 2

displays the filtered probabilities for regime 2.

eq1.rgmprobs(type=smooth, view=graph1)

displays the smoothed probabilities for both regimes in a single graph.
{"url":"https://help.eviews.com/content/equationcmd-rgmprobs.html","timestamp":"2024-11-10T05:03:39Z","content_type":"application/xhtml+xml","content_length":"13470","record_id":"<urn:uuid:0faad656-647a-45ef-837e-4b40762a4991>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00792.warc.gz"}
Geb/category-theory reading group

Geb Reading Group Syllabus

This document is intended to summarize the background, especially in category theory and polynomial functors, required to understand how Geb works. It will serve as a guide to topics for the weekly reading group, and for attendees on how to prepare. Practically none of the following is required just to write code in Geb! But it is required to understand how the language works under the hood, and that would also enable a programmer to use more of it.

Learning outline

The two main threads of theory underlying Geb are category theory and polynomial functors. To some degree I am treating the former as the theory of (programming) languages and the latter as the theory of data structures, but in Geb they are intertwined, defined through mutual recursion: category theory defined by data structures, data structures given semantics by category theory, and "(programming) language" emerging as a notion of its own (broader than "category").

The big picture

The end goal of the language definition of Geb – the point at which the language itself is finished and all else is libraries – is what I have been calling "programming languages à la carte". That is a reference to "Data Types à la Carte", so the founding paper there (see the Bibliography) is one place you might start. It illustrates how to define data structures in ways which allow for more combinators than traditional explicitly-recursive ADTs. Geb aims to extend this notion to programming languages – defining them in terms of individual language features and combinators on languages themselves. That possibility emerges from founding category theory itself on polynomials.

Track: category theory

Fortunately, but not just coincidentally, the aspects of category theory underpinning Geb are mainly the foundational ones – the first few that are typically presented in books on category theory. One of my aims in the reading group will be to communicate these foundations in terms that will be most familiar to programmers – each of them has a clear analogue in, and application to, programming. Here is a rough order of topics I'd recommend.

• The axioms which define a category
• The notion of isomorphism
• Question: what aspects of programming languages could be expressed as categories?

The category of categories

• Functors
• Natural transformations
• Question: which programming language constructs could be viewed as functors? As natural transformations?

Slice categories and (co)presheaves

Slice categories are the standard way of expressing dependent types in category theory, so they are particularly relevant to programming with formal verification. They are also a step on the way to understanding the more general notion of (co)presheaf.

Adjunctions, universal properties, representable functors, and the Yoneda lemma

Riehl is a superb reference on these intertwined topics, but also quite dense. They are perhaps best learned by example, of which several follow below. Geb turns their use somewhat on its head: instead of defining "universal property" in terms of categories, Geb defines polynomials without explicit reference to (although with anticipation of) category theory, then universal properties in terms of polynomials, then categories in terms of universal properties. This sequence allows the intermediate definition of "programming language" in terms of universal properties, distinct from, though closely related to, that of "category".

Examples of universal properties

The following examples should help to absorb the notion of "universal property" itself, but are also precisely the ones which underlie Geb, and programming languages in general. As such, they also underlie category theory itself – they are the ones in terms of which category theory can be homoiconic (can write itself). For each of these examples, therefore, the question is: what programming language concept(s) do these universal properties correspond to?

• Terminal objects
• Initial objects
• Products
• Coproducts
• Hom-objects (exponentials)
• Initial algebras
• Terminal coalgebras
• Free monads
• Cofree comonads
• Equalizers
• Coequalizers
• Pullbacks
• Pushouts
• Limits
• Colimits
• Ends
• Coends
• Kan extensions

Track: polynomial functors

By far the most comprehensive, up-to-date reference that I know of on this topic is from the Topos Institute – a book which is still being updated at least as of the last few months, and an accompanying video series (with more relevant videos frequently produced). The book was originally called Polynomial Functors: A General Theory of Interaction; "General" has since been changed to "Mathematical". The book's website is in the Bibliography. The book is large and contains many examples and applications; many of the formulas and theorems have been coded in Geb's Idris implementation, and in many cases I've ported them to the dependently-typed context (that is, polynomial functors on slice categories). The most important thing to take from it is that there are many ways of viewing polynomial functors – if you can understand each and how to translate among them, then you will understand how to use them to write data structures, programming languages, category theory, and Geb in particular.
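As a concrete warm-up for the "Products" entry in the list above, here is the universal property of the binary product sketched in Python (the names are my own illustrative ones, not Geb's API): given maps f : C → A and g : C → B, there is a mediating map ⟨f, g⟩ : C → A × B whose composites with the projections recover f and g.

```python
# Universal property of the product, in Python. The product of A and
# B is modeled by the pair type, with the two projections; pair(f, g)
# builds the unique mediating morphism <f, g>.

def fst(p):
    # projection A x B -> A
    return p[0]

def snd(p):
    # projection A x B -> B
    return p[1]

def pair(f, g):
    # the mediating morphism <f, g> : C -> A x B,
    # satisfying fst(pair(f, g)(c)) == f(c) and
    #            snd(pair(f, g)(c)) == g(c) for all c
    return lambda c: (f(c), g(c))

f = lambda n: n + 1   # C -> A
g = lambda n: n * 2   # C -> B
h = pair(f, g)        # C -> A x B
```

In programming-language terms this is exactly the correspondence the syllabus asks about: products are record/tuple types, the projections are field accessors, and the mediating map is the constructor determined by its components.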
{"url":"https://research.anoma.net/t/geb-category-theory-reading-group/833","timestamp":"2024-11-09T12:25:05Z","content_type":"text/html","content_length":"32722","record_id":"<urn:uuid:86f43ee5-190e-40dc-b3f1-02ee934af49f>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00857.warc.gz"}
What is the density of the stone?

What is the density of stone?
Stone, crushed weighs 1.602 grams per cubic centimeter, or 1,602 kilograms per cubic meter; i.e., the density of crushed stone is 1,602 kg/m³. In Imperial or US customary units, that density is equal to 100.0096 pounds per cubic foot [lb/ft³], or 0.92601 ounces per cubic inch [oz/inch³].

Is #53 stone good for driveways?
#53 crushed limestone is commonly used for driveways and as a compacted base for asphalt, flagstone, and paver patios. It varies in size from dust to 1 1/2″, with approximate coverage of 100 sq ft per ton.

What size is #53 limestone?
1 1/2″ #53 limestone ranges from 1 1/2″ down to powder, weighs about 3,000 lbs per cubic yard, and packs well.

What is the density of ballast?
60 mm railway ballast: stone aggregate, 60 mm size, solid form, bulk density 1,520 to 1,680 kg/m³, water absorption 2.4%, material: stone.

What is the heaviest stone?
The heaviest rocks are those made up of dense, metallic minerals. Two of the heaviest or densest rocks are peridotite and gabbro, each with a density of between 3.0 and 3.4 grams per cubic centimeter. Interestingly, peridotite is the rock in which naturally occurring diamonds are found.

How dense is marble?
About 2.64 g/cm³. Among measured marbles, the calculated density varied from 2.52 g/cm³ to 2.64 g/cm³.

What size is #53 rock?
#53 is the same as #4 but with limestone dust for easy packing: 1″ to 2″ in size with half lime dust.

What is #53 crushed limestone?
#53 crushed limestone has 1″ material down to dust, making it great for compaction. It is widely used for driveways and as a sub-base for concrete slabs, flagstone, paver patios, or asphalt. Category: Decorative Stone, Sand & Aggregates.

What is the density of stone aggregate?
The approximate bulk density of aggregate commonly used in normal-weight concrete is between 1,200 and 1,750 kg/m³ (75 to 110 lb/ft³). The standard test method for determining the bulk density of aggregates is given in ASTM C 29 (AASHTO T 19).

What is the density of 3/4 stone?
In Imperial or US customary units, the density is equal to 84.03 pounds per cubic foot [lb/ft³], or 0.78 ounces per cubic inch [oz/inch³].

What rock is heavier than gold?
Most minerals have densities in the range of 2 to 4 g/cm³, but among the densest metals are gold at 19.32, iridium at 22.42, and platinum at 21.45.

How dense is basalt?
The physical properties of basalt reflect its relatively low silica content and typically high iron and magnesium content. The average density of basalt is 2.9 g/cm³, compared with a typical density for granite of 2.7 g/cm³.
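The quoted figures for crushed stone in the different unit systems are just unit conversions of the same density. A short Python check (the conversion factors are exact definitions; the 1,602 kg/m³ value is the one quoted above):

```python
# Converting the quoted density of crushed stone between unit systems.
KG_PER_LB = 0.45359237  # exact definition of the pound
M_PER_FT = 0.3048       # exact definition of the foot

rho = 1602.0  # kg/m^3, crushed stone (from the text)

lb_per_ft3 = rho / KG_PER_LB * M_PER_FT**3
g_per_cm3 = rho / 1000.0

# lb_per_ft3 comes out to about 100.01, matching the quoted
# 100.0096 lb/ft^3, and g_per_cm3 is 1.602, as stated.
```

The same arithmetic with the inch and ounce definitions reproduces the quoted 0.92601 oz/inch³.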
{"url":"https://www.nbccomedyplayground.com/what-is-the-density-of-the-stone/","timestamp":"2024-11-10T14:40:36Z","content_type":"text/html","content_length":"137173","record_id":"<urn:uuid:7d1733a6-ab76-45fb-8a98-50c572f88b62>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00477.warc.gz"}
GAMM spaghetti plots in R with ggplot - Dr. Mowinckel's

GAMM spaghetti plots in R with ggplot

Here we go! Stepping into the world of blogging through R and blogdown! This is both super exciting and slightly scary; I have not really shared code online like I plan to here before. But I guess we start somewhere. And we start with plotting.

A co-worker was working on some generalized additive mixed models (GAMM) through the R package mgcv. The analyses work very well, and the results were as expected. There are also some built-in plotting functions for the gamm output in R. However, they are in base R, and we all know base R is not the most beautiful plotting application. Also, we need a spaghetti plot in the background: the whole point of a GAMM is that we have repeated data, and showing those in the background is essential. This is not easily done in base R.

This is what we have:

library(tidyverse); library(mgcv)

## ── Attaching packages ──────────────────────────────────────────────────── tidyverse 1.3.0 ──
## ✓ ggplot2 3.3.0 ✓ purrr 0.3.4
## ✓ tibble 3.0.1 ✓ dplyr 0.8.5
## ✓ tidyr 1.0.3 ✓ stringr 1.4.0
## ✓ readr 1.3.1 ✓ forcats 0.5.0
## ── Conflicts ─────────────────────────────────────────────────────── tidyverse_conflicts() ──
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
## Loading required package: nlme
## Attaching package: 'nlme'
## The following object is masked from 'package:dplyr':
##     collapse
## This is mgcv 1.8-31. For overview type 'help("mgcv-package")'.

n.g <- 10
n <- n.g * 10 * 4  # 400 observations, matching the model summary below
dat <- gamSim(1, n = n, scale = 2)
## Gu & Wahba 4 term additive model
f <- dat$f

## simulate nested random effects....
fa <- as.factor(rep(1:10, rep(4 * n.g, 10)))
ra <- rep(rnorm(10), rep(4 * n.g, 10))
fb <- as.factor(rep(rep(1:4, rep(n.g, 4)), 10))
rb <- rep(rnorm(4), rep(n.g, 4))
for (i in 1:9) rb <- c(rb, rep(rnorm(4), rep(n.g, 4)))

## simulate auto-correlated errors within groups
e <- c()
for (i in 1:40) {
  eg <- rnorm(n.g, 0, sd(f))
  for (j in 2:n.g) eg[j] <- eg[j - 1] * 0.6 + eg[j]
  e <- c(e, eg)
}

dat$y <- f + ra + rb + e
dat$id <- fa; dat$fb <- fb

# Let's have a look at it
dat %>%
  ggplot(aes(x = x2, y = y)) +
  geom_line(alpha = .3, aes(group = id)) +
  geom_point(alpha = .3) +
  geom_smooth()

## `geom_smooth()` using method = 'loess' and formula 'y ~ x'

The data is not particularly well crafted, but it serves to make my point, I believe. geom_smooth runs a loess model for prediction, which, given this data, makes a lot of sense. But it's not what I want, and it won't take into account the repeated measures. Let's move on to the gamm models. I'll omit gam, because you can easily get GAMs in ggplot by running:

dat %>%
  ggplot(aes(x = x2, y = y)) +
  geom_line(alpha = .3, aes(group = id)) +
  geom_point(alpha = .3) +
  geom_smooth(method = "gam")

## `geom_smooth()` using formula 'y ~ s(x, bs = "cs")'

but that's definitely not what we want, despite the horrible data. We could even give it a formula, which might look really nice. Note that the formula will read x and y as the 'x' and 'y' set in the aes. So don't give it the actual names of the columns; use 'x' and 'y'.

dat %>%
  ggplot(aes(x = x2, y = y)) +
  geom_line(alpha = .3, aes(group = id)) +
  geom_point(alpha = .3) +
  geom_smooth(method = "gam", formula = y ~ s(x, bs = 'cr'))

The downside to this is that it only models the one term, and we want a fit that takes into account all our covariates but plots only the one prediction, holding the others constant. Trying to increase formula complexity in ggplot directly won't work; it's just not made for something so complex.
dat %>%
  ggplot(aes(x = x2, y = y)) +
  geom_line(alpha = .3, aes(group = id)) +
  geom_point(alpha = .3) +
  geom_smooth(method = "gam", formula = y ~ s(x, bs = 'cr') + s(group, bs = "re"))

## Warning in predict.gam(model, newdata = data_frame(x = xseq), se.fit = se, : not all required variables have been supplied in newdata!
## Warning: Computation failed in `stat_smooth()`:
## object 'group' not found

So, let's have a look at what we want to model. This particular example is complex, because it is the complex case that is hard to predict and plot. The simple ones will work with the above, providing a formula directly to geom_smooth. We're using smoothing splines on several predictors in this model, and including two random intercepts. But we are really only interested in the x2 smoothing spline; the others are just covariates of no interest.

b <- gamm(y ~ s(x0, bs = "cr") + s(x1, bs = "cr") + s(x2, bs = "cr") +
            s(x3, bs = "cr"),
          data = dat,
          random = list(id = ~1, fb = ~1))

# Output is a list of two
b %>% summary()

##     Length Class Mode
## lme 18     lme   list
## gam 31     gam   list

# Let's see the model summary
b$gam %>% summary()

## Family: gaussian
## Link function: identity
## Formula:
## y ~ s(x0, bs = "cr") + s(x1, bs = "cr") + s(x2, bs = "cr") +
##     s(x3, bs = "cr")
## Parametric coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   7.6878     0.5131   14.98   <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## Approximate significance of smooth terms:
##         edf Ref.df      F p-value
## s(x0) 2.760  2.760  5.136 0.00925 **
## s(x1) 1.938  1.938 61.438 < 2e-16 ***
## s(x2) 7.059  7.059 42.398 < 2e-16 ***
## s(x3) 1.000  1.000  0.185 0.66755
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## R-sq.(adj) = 0.351
## Scale est. = 13.963   n = 400

So I went googling, and in my frustration, there was very little help in what I found. mgcv's gamm output is different from other analyses, as it gives a list containing an lme and a gam model (which is really a GAMM model). So grabbing the smooth directly in ggplot will not work.
Furthermore, we want one single smoothed predictor while still taking into account the rest of the predictors. I even posted something on stackoverflow (the example data is horrendous!), and the answer pointed me to predict(). So that's what I did: I tried running predict() on the gamm output, but the resulting output was not what I wanted. predict() predicts based on all the predictors and covariates, and so we are not plotting the one smoothed predictor, but a jumble of predictors. The resulting 'smooth' is not pretty, and completely wrong.

```r
pred <- predict(b$gam, se.fit=T)

dat %>% ggplot(aes(x=x2, y=y)) + 
  geom_line(alpha=.3, aes(group=id)) + 
  geom_point(alpha=.3) + 
  geom_ribbon(aes(ymin=pred$fit-1.96*pred$se.fit,
                  ymax=pred$fit+1.96*pred$se.fit), alpha=0.2, fill="red") +
  geom_line(aes(y=pred$fit), col="blue", lwd=1)
```

Now, there are packages to help do this. For instance, visreg has a gamm plotting function. However, visreg requires attached data, or each variable existing in the global environment. I don't like working with attached data; in my opinion it makes the workflow less transparent, and it's hard to debug issues. If you forget to detach the data, you can spend hours trying to figure out why things aren't working, without noticing that the attached data may be messing things up.

What we need to do is predict over all the data, keeping the other covariates constant, while only varying the predictor of interest.

Right-o. If you're good at simulating data etc., this might be easy for you. I'm pretty terrible at it (for lack of practise, for sure). After trying for several days, without managing to make the plot I wanted, I tweeted my frustration.

my inability to plot partial predictions of #gamm models in #rstats #ggplot after 2 days of googling, stackoverflow and constant coding is infuriatingly frustrating! I'm not used to not grasping — Athanasia Mo Mowinckel @Drmowinckels@fosstodon.org (@DrMowinckels) February 27, 2018

And to the rescue comes twitter: a useR pointed me to itsadug.
The obscure package I just could not manage to find through all my googling. itsadug has a predict function where you can specify which predictor you want to predict on. If you give it the min, max and length of what you want to predict, it then generates data with all other predictors set to constants, for your convenient plotting. It was like magic, exactly what I was searching for. Now I could plot what I wanted, just like I wanted it!

```r
library(itsadug)
```

```
## Loading required package: plotfunctions
## Attaching package: 'plotfunctions'
## The following object is masked from 'package:ggplot2':
##     alpha
## Loaded package itsadug 2.4 (see 'help("itsadug")' ).
```

```r
# predict on x2
pred = get_predictions(b$gam, 
                       cond = list(x2 = seq(min(dat$x2, na.rm=T), 
                                            max(dat$x2, na.rm=T), 
                                            length.out = nrow(dat))), se=T)
```

```
## Summary:
## * x0 : numeric predictor; set to the value(s): 0.476351245073602. 
## * x1 : numeric predictor; set to the value(s): 0.514732652809471. 
## * x2 : numeric predictor; with 400 values ranging from 0.001315 to 0.999931. 
## * x3 : numeric predictor; set to the value(s): 0.477402933407575. 
## * NOTE : No random effects in the model to cancel.
```

```r
# add y to predicted data, because ggplot requires all main variables in both datasets
pred = pred %>% mutate(y=1)

dat %>% ggplot(aes(x=x2, y=y)) + 
  geom_line(alpha=.3, aes(group=id)) + 
  geom_point(alpha=.3) + 
  geom_ribbon(data=pred, alpha=.4, aes(ymin=fit-CI, ymax=fit+CI),
              show.legend = F, fill='forestgreen') +
  geom_line(data=pred, aes(y=fit), show.legend = F, color='forestgreen')
```

That's looking much nicer! Hurrah! I also made a convenience function for myself, to make the predicted data. This would only work for numeric predictors, and it's not particularly pretty coding, as it's using ugly eval parsing, but that's how I got it working for me.
```r
# Custom function that predicts on a single predictor, and adds the dependent with the value of 1
GammPredData = function(data, gamm.model, condition){
  eval(parse(text=paste0(
    "get_predictions(gamm.model, cond = list(", condition,
    "=seq(min(data[condition], na.rm=T), max(data[condition], na.rm=T), length.out = nrow(data)))) %>% as.data.frame() %>% mutate(",
    str_split(gamm.model$formula, " ")[[2]], "=1)")))
}

# Use the function to predict on x2. You can easily supply the other predictors too.
pred = GammPredData(dat, b$gam, "x2")
```

```
## Summary:
## * x0 : numeric predictor; set to the value(s): 0.476351245073602. 
## * x1 : numeric predictor; set to the value(s): 0.514732652809471. 
## * x2 : numeric predictor; with 400 values ranging from 0.001315 to 0.999931. 
## * x3 : numeric predictor; set to the value(s): 0.477402933407575. 
## * NOTE : No random effects in the model to cancel.
```

```r
dat %>% ggplot(aes(x=x2, y=y)) + 
  geom_line(alpha=.3, aes(group=id)) + 
  geom_point(alpha=.3) + 
  geom_ribbon(data=pred, alpha=.4, aes(ymin=fit-CI, ymax=fit+CI),
              show.legend = F, fill='forestgreen') +
  geom_line(data=pred, aes(y=fit), show.legend = F, color='forestgreen')
```

You might want to grab each predictor and plot them? That can be done also, and I'll use a combination of apply'es to do so, for convenience. I love apply'es…

```r
# Predictions
p = c("x0", "x1", "x2", "x3")

# we will be using facet_wrap; gather the data on the predictors, for a long data frame.
dat2 = dat %>% gather(Pred, x, p)
```

```
## Note: Using an external vector in selections is ambiguous.
## ℹ Use `all_of(p)` instead of `p` to silence this message.
## ℹ See <https://tidyselect.r-lib.org/reference/faq-external-vector.html>.
## This message is displayed once per session.
```

```r
preds = list() # prepare a list

# loop through the predictors
for(i in 1:length(p)){
  preds[[i]] = GammPredData(dat, b$gam, p[i]) %>% 
    select_("y", "CI", "fit", p[i])
  names(preds)[i] = p[i]
}
```

```
## Summary:
## * x0 : numeric predictor; with 400 values ranging from 0.013078 to 0.996077.
```
```
## * x1 : numeric predictor; set to the value(s): 0.514732652809471. 
## * x2 : numeric predictor; set to the value(s): 0.445692849811167. 
## * x3 : numeric predictor; set to the value(s): 0.477402933407575. 
## * NOTE : No random effects in the model to cancel.
## Warning: select_() is deprecated. 
## Please use select() instead
## The 'programming' vignette or the tidyeval book can help you
## to program with select() : https://tidyeval.tidyverse.org
## This warning is displayed once per session.
## Summary:
## * x0 : numeric predictor; set to the value(s): 0.476351245073602. 
## * x1 : numeric predictor; with 400 values ranging from 0.001837 to 0.999455. 
## * x2 : numeric predictor; set to the value(s): 0.445692849811167. 
## * x3 : numeric predictor; set to the value(s): 0.477402933407575. 
## * NOTE : No random effects in the model to cancel.
## Summary:
## * x0 : numeric predictor; set to the value(s): 0.476351245073602. 
## * x1 : numeric predictor; set to the value(s): 0.514732652809471. 
## * x2 : numeric predictor; with 400 values ranging from 0.001315 to 0.999931. 
## * x3 : numeric predictor; set to the value(s): 0.477402933407575. 
## * NOTE : No random effects in the model to cancel.
## Summary:
## * x0 : numeric predictor; set to the value(s): 0.476351245073602. 
## * x1 : numeric predictor; set to the value(s): 0.514732652809471. 
## * x2 : numeric predictor; set to the value(s): 0.445692849811167. 
## * x3 : numeric predictor; with 400 values ranging from 0.001642 to 0.996272. 
## * NOTE : No random effects in the model to cancel.
```

```r
# use bind_rows to make them into a large data frame; gather them, just like the data
preds = bind_rows(preds) %>% 
  gather(Pred, x, p)

dat2 %>% ggplot(aes(x=x, y=y)) + 
  geom_line(alpha=.3, aes(group=id)) + 
  geom_point(alpha=.3) + 
  geom_ribbon(data=preds, alpha=.4, aes(ymin=fit-CI, ymax=fit+CI, fill=Pred),
              show.legend = F) +
  geom_line(data=preds, aes(y=fit, color=Pred), show.legend = F) +
  facet_wrap(~Pred, scales="free")
```

And that is it.
My first blogpost, and I hope it is of some help. Happy International Women's Day!

For attribution, please cite this work as
Dr. Mowinckel (Mar 8, 2018) GAMM spaghetti plots in R with ggplot. Retrieved from https://drmowinckels.io/blog/2018/gamm-spaghetti-plots-in-r-with-ggplot/. DOI: https://www.doi.org/10.5281/

BibTeX citation
```
author = "Dr. Mowinckel",
title = "GAMM spaghetti plots in R with ggplot",
url = "https://drmowinckels.io/blog/2018/gamm-spaghetti-plots-in-r-with-ggplot/",
year = 2018,
doi = "https://www.doi.org/10.5281/zenodo.13256467",
updated = "Nov 7, 2024"
```
Motion Class 9 Notes, Science Chapter 8, Explanation, Question Answers
Motion | Class 9 Science Chapter 8 Notes, Explanation, Video and Question Answers
Motion CBSE Class 9 Science Chapter 8 – Complete explanation and notes of the chapter 'Motion'. Topics covered in the lesson are Rest and Motion, Acceleration, Types of Motion, Distance Time Graphs, Scalar and Vector Quantities, Velocity Time Graphs, Distance and Displacement, Derive Three Equations of Motion, Uniform & Non Uniform Motion, Circular Motion, Speed, Velocity. Given here is the complete explanation of the chapter, along with all the important questions; NCERT solutions to book questions have also been provided for the ease of students.
Class 9 Science Chapter 8 – Motion Topics to be Covered
Also See: CBSE Class 10 Science Syllabus 2019-2020 Session
Video Explanation of Chapter 8 Motion – Part 1
Motion Class 9 Science Chapter Introduction
Many common things around us are in motion: if you look around, you can observe air moving, the hands of a clock turning, day and night caused by the rotation of the Earth, and even the seasons caused by its motion around the Sun. So we are going to study in detail what exactly the term motion means.
To know more about motion or rest, we should first get familiar with the idea of a reference point, or stationary object. To comment on the state of any object, we have to consider a reference point, i.e. a stationary object in the surroundings. This reference point does not change its position.
For example: if we consider a pole and a car, the pole is stationary as it cannot change its position, but the car can. So, in order to say whether the car is at rest or in motion, we need to consider its motion with respect to the pole or any other stationary object. This stationary object can also be called a reference point.
Rest and motion
Let us take different examples to understand this.
Consider a car standing in front of a house. Say it is first at position A, and after some time it moves to position B. That means it has changed its position with respect to a stationary object, the house. If the car keeps standing at position A, it has not changed its position with respect to the house, which means it is at rest. So here we define the terms rest and motion.
Rest: when a body does not change its position with respect to its surroundings or a reference point.
Motion: when a body changes its position with respect to its surroundings or a reference point.
So, we can say that any object that is moving, or in motion, possesses the following characteristic.
Characteristics of a moving object
• The moving object changes its position with time
Now, the motion of a car can be seen easily, but to notice the movement of a clock's hour hand we need to watch carefully. This is because some motion is so fast that we can see it happening, while other motion is so slow that it cannot be seen clearly. We can elaborate:
Our wrist watch has three hands: the second, minute and hour hands. Of these, the second hand moves fast enough that its motion can be seen, but to see the motion of the hour and minute hands we have to keep track, as their motion is comparatively slow.
Types of Motion
We come across different types of motion:
• Linear
• Rotational
• Circular
• Vibratory
Let us explain them with examples:
• When we drive on a road, the roads are mostly straight, so the motion in which we move in a straight line is called linear motion.
• We all know the Earth rotates on its axis, which causes day and night; that movement is rotational motion, because the Earth is rotating on its axis. So we can define rotational motion as the motion of a body rotating about a fixed axis.
• You will encounter roundabouts on the road, and when driving you cannot go straight through one; you have to take the curved path. That is circular motion. So, when a body travels along a curved (circular) path, it is in circular motion.
• You may have played the guitar. When you pluck a string with your finger, the string starts vibrating and sound is produced. That motion is vibratory, as the sound is caused by the vibratory motion of the particles. So vibratory motion is when a body shows to and fro movements.
Scalar and vector physical quantities
We study many physical quantities, like distance, velocity and more. All these quantities are broadly classified under two categories, scalar or vector, depending upon whether they give complete information about both the magnitude (value) and the direction, or incomplete information, i.e. only the magnitude.
Scalar Quantities
The quantities that depend upon magnitude and not direction are called scalar quantities. They are represented by their own symbol.
For example: if you travel to Delhi to a relative's place and someone asks you about the distance, what do you reply? We say it was about a 250 km run from Chandigarh. You don't break it up into so many km towards east, then west; we just say 250 km. That is, we describe it only in magnitude and do not specify directions. So it is a scalar quantity.
Vector Quantities
The physical quantities that depend upon magnitude as well as direction. They are represented by putting an arrow over their symbol.
For example: if you travel a short, straight path in a specific direction, then you can say "I travelled 25 km towards east." In this case the direction is specified, so it falls in the category of vector quantities.
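The scalar/vector distinction above can be made concrete with a short Python sketch. The two legs and their bearings below are made-up illustrative numbers, not from the text: adding magnitudes gives the scalar distance, while adding the legs component-wise gives the vector displacement.

```python
import math

# Illustrative two-leg walk (hypothetical numbers): 3 km east, then 4 km north.
legs = [(3, 0), (4, 90)]  # (length in km, bearing in degrees measured from east)

# Distance is a scalar: simply add the magnitudes of the legs.
distance = sum(length for length, _ in legs)

# Displacement is a vector: add the legs component-wise, then take the magnitude.
dx = sum(l * math.cos(math.radians(b)) for l, b in legs)
dy = sum(l * math.sin(math.radians(b)) for l, b in legs)
displacement = math.hypot(dx, dy)

print(distance)       # 7 km of path walked
print(displacement)   # about 5 km in a straight line (the 3-4-5 triangle)
```

The distance (7 km) exceeds the displacement (about 5 km), just as the notes describe: magnitude alone versus magnitude together with direction.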
Class 9 Science Chapter wise Explanation │Chapter 1 Matter in Our Surroundings│Chapter 2 Is Matter around us Pure │Chapter 3 Atoms and Molecules│ │Chapter 4 The Structure of an Atom │Chapter 5 The Fundamental unit of life │Chapter 6 Tissues │
Distance & Displacement
Let's take an example to understand distance.
• Consider a boy going to school, then from school going to a market, then going to a friend's place to return his book, and then coming back home. Say he travels 2 km from home to school, then 1 km to the market, then 1 km to the friend's place, then 2 km to reach home. The total path travelled by the boy is 2+1+1+2 = 6 km. So, the sum total of the paths he travels in any direction gives us the distance. It can be defined as the total path covered by a body in any direction. It is a scalar quantity.
• Formula: distance = total path travelled
• Unit: metre (m)
• Bigger unit: kilometre (km)
• 1 km = 1000 m
Characteristics of distance
• It can never be zero: it is not possible for a body to move while the distance is zero
• If a body travels different paths, the total distance is calculated by simply adding the magnitudes of all the paths
• It is a scalar quantity
Let us understand displacement now.
Say there were three friends: Ram, Shyam and Geeta. Ram has to go to Geeta's place but he doesn't know the house, so he first goes to Shyam's place and then, with Shyam, goes to Geeta's home. In doing so he has to travel a long distance, first to Shyam's house and then to Geeta's. But if he knew Geeta's house, he would have taken a short route in a specific direction. That shortest route in a particular direction is called displacement.
• It is the shortest route travelled by a body in a given direction
• It is represented as S
• It is a vector quantity
• It can be zero: whenever a body starts from one point and returns to the same point, the displacement is zero
• If a body travels different paths, the total displacement is calculated by adding the different vectors
Let us do a problem on distance and displacement.
Uniform and non-uniform motion
Uniform motion
When a body covers equal distances in equal intervals of time, the body is said to have uniform motion.
Example: a train running at a speed of 90 km/hr. Say a train is travelling to Delhi and it covers the same distance in the same intervals of time:
│Distance │90 km│180 km│270 km│360 km│
│Time │1 hr │2 hr │3 hr │4 hr │
Here we see that the train travels an equal distance of 90 km in each hour, so it is said to be moving with uniform motion. The graph involved is given as:
Non-uniform motion
When a body covers unequal distances in equal intervals of time.
Example: when brakes are applied to a speeding car. Say a car travelling to Delhi covers 40 km in one hour, then due to heavy traffic on the road travels 25 km in the next hour, then stops for lunch in another hour, then speeds up and travels 50 km in the hour after that. It can be shown as:
│Distance │40 km│65 km│115 km │
│Time │1 hr │2 hr │3 hr │
So here, unequal distances are travelled in equal intervals of time. The graph drawn to show non-uniform motion is:
│Chapter 7 Diversity in Living Organisms │Chapter 8 Motion │Chapter 9 Force and Laws of Motion│ │Chapter 10 Gravitation │Chapter 11 Work and Energy│Chapter 12 Sound │
Speed
We often tell a driver to drive carefully or slowly. If he drives slowly we reach the destination late, and if he drives fast we reach early. This is all due to speed, which tells us how much distance we cover in a given time. Speed is defined as the distance travelled by a body with respect to time, in any direction.
• It is represented as V
• Formula: speed = distance/time
• Unit: m/s (m sec⁻¹); a bigger unit is km/hr
• It is a scalar quantity
The speed of any object at any instant of time can be measured with the help of a speedometer.
The distance travelled by a car, on the other hand, is measured by an odometer.
Speed is of three types:
• Uniform speed
• Non-uniform speed
• Average speed
Average Speed
To understand it, let's take an example: if we travel to Delhi in 4 hours and a friend asks about our speed, we reply 60 km/hr or 40 km/hr. This is average speed, since we don't reply that in the first hour it was 50 km/hr, in the next hour 45 km/hr, and so on. We actually calculate it by dividing the total distance to the destination by the total time taken to cover it.
Average speed is defined as the total distance travelled by a body divided by the total time taken to cover that distance.
Let's do problems based on average speed.
Video Explanation of Chapter 8 Motion – Part 2
Velocity
Speed gives us information about distance with respect to time but doesn't specify the direction, so we need a physical quantity that carries the information of speed and also specifies the direction. This is velocity.
It is the distance covered by a body per unit time in a given direction, or "speed in a given direction".
• Units: m/s or km/hr
• It is a vector quantity
• It is a complete quantity
Types of velocity
• Uniform velocity
• Non-uniform velocity
• Average velocity
Uniform Velocity
Suppose we are travelling to Delhi and we cover the same distance, say 60 km, in a specified direction in each hour; then the body is said to be travelling with uniform velocity.
Uniform velocity: when a body covers equal distances in equal intervals of time in a straight line (a given direction).
Non-uniform Velocity
If we are travelling to Delhi and, due to road repairs or any other reason, there is heavy traffic because of which we cannot keep to a straight-line direction and the speed does not remain the same, then the velocity is said to be non-uniform.
Non-uniform velocity: when a body covers unequal distances in equal intervals of time, or does not move in a straight line (a given direction).
Example: a car running towards north on a busy road with variable velocity.
Before studying average velocity, let's consider the concepts of initial and final velocity.
Suppose a scooter is at rest, and then we start it so that it is now moving at a speed of 20 m/sec. The velocity it possessed earlier is the initial velocity, and what it possesses now is its final velocity: earlier it was at rest, so zero, and now it moves at 20 m/sec, so the final velocity is 20 m/sec.
Similarly, if the scooter is moving at 20 m/sec and then its speed increases to 40 m/sec, its initial velocity is 20 m/sec and its final velocity is 40 m/sec.
In the same way, if the scooter is moving at 40 m/sec and then the brakes are suddenly applied and it stops, the initial velocity is 40 m/sec and the final velocity is zero, as it stops.
• Initial velocity is denoted by 'u'
• Final velocity is denoted by 'v'
Average velocity
It is the arithmetic mean of the initial velocity and the final velocity over a given time (for uniformly accelerated motion). Or, it can be defined as the total displacement per unit total time taken.
Acceleration
Whenever the velocity increases or decreases with time, another quantity comes into play: acceleration.
Acceleration is defined as the rate of change of velocity with time.
• Units: m/s² (m sec⁻²)
• It is a vector quantity
Please note:
• If velocity increases with time => positive acceleration
• If velocity decreases with time => negative acceleration, or retardation
Types of acceleration
• Uniform acceleration
• Non-uniform acceleration
• Retardation
Let's explain them.
Uniform Acceleration
Suppose we are travelling to Delhi from Chandigarh and our velocity increases in equal amounts: initially it was 50 m/sec, then it becomes 70 m/sec, then 90 m/sec, in equal intervals of time and in a straight line. Acceleration comes into play, as the velocity is undergoing a change.
Uniform acceleration: when the velocity of a body undergoes equal changes in equal intervals of time, in a straight line.
Example: a freely falling body.
Non-uniform Acceleration
Taking an example similar to uniform acceleration: if the increase or decrease in velocity is not regular and not in a specified direction, then it falls in the category of non-uniform acceleration.
Non-uniform acceleration: when the velocity undergoes unequal changes in equal intervals of time, and the direction may not be a straight line.
Example: a body moving with variable velocity.
Retardation
Acceleration is said to be retardation when the velocity decreases with time.
Example: slowing down of a car.
Newton's equations of motion
Newton gave three equations of motion that are followed universally. These equations are:
• v = u + at
• s = ut + ½at²
• v² = u² + 2as
Please keep in mind: in these equations v = final velocity, u = initial velocity, s = distance or displacement, a = acceleration and t = time.
Distance-time graph
In any of the examples we took above, if we plot distance versus time we get important information from the graph.
When the d-t graph is parallel to the time axis
The body is not changing its position with respect to time, which shows the body is stationary. If we take the markings on the axes, we see that at any time the distance covered is exactly the same.
So, the information that we get is:
• The body is at rest (stationary)
When the d-t graph is a straight line not parallel to the time axis
This shows the distance covered is directly proportional to time, which shows the body is moving with uniform speed.
If we look at the markings carefully, we observe that when the time is 2 sec the distance is 10 m, and when the time is 4 sec the distance is 20 m. We can say there is an equal increase in equal intervals of time.
So, the information we get is:
• The body is moving with uniform speed
When the d-t graph is not a straight line
If we look at the markings carefully, we observe that when the time is 2 sec the distance is 10 m, and when the time is 5 sec the distance is 20 m. We can say there is an unequal increase in equal intervals of time.
So, the information we get is:
• The body is moving with non-uniform speed.
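The three equations of motion above can be checked numerically. A minimal Python sketch (the values of u, a and t are illustrative, not from the chapter): compute v and s from the first two equations and verify that the third is consistent with them.

```python
def motion(u, a, t):
    """Equations of motion for uniform acceleration.

    u: initial velocity (m/s), a: acceleration (m/s^2), t: time (s).
    Returns the final velocity v and the displacement s.
    """
    v = u + a * t                  # first equation: v = u + at
    s = u * t + 0.5 * a * t * t    # second equation: s = ut + (1/2)at^2
    # third equation, v^2 - u^2 = 2as, must agree with the two above
    assert abs((v * v - u * u) - 2 * a * s) < 1e-9
    return v, s

# Illustrative values: u = 5 m/s, a = 2 m/s^2, t = 3 s
v, s = motion(5, 2, 3)
print(v, s)  # 11 m/s and 24.0 m
```

Because the third equation is derived from the first two, the internal check holds for any u, a and t you substitute.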
│Chapter 13 Why do we fall ill │Chapter 14 Natural Resources│ │Chapter 15 Improvement in Food Resources │ │
Velocity-Time Graph
In any of the examples we took above, if we plot velocity versus time, we get important information from the graph.
• Its slope represents acceleration
• The area enclosed under the graph gives displacement
When the v-t graph is parallel to the time axis
The body is not changing its velocity with respect to time, which shows the body is moving with the same velocity. If we take the markings on the axes, we see that at any time the velocity is exactly the same.
So, the information that we get is:
• The body is moving with constant velocity
• Acceleration = change in velocity/time = 0
• Displacement = area under the v-t graph: S = area of rectangle OBCD = L × B = 50 × 5 = 250 m
When the v-t graph is a straight line
This shows the velocity is directly proportional to time, which means the body starts from rest and its velocity increases at a uniform rate, i.e. the body is moving with uniform acceleration.
│Class 9th English Lessons │Class 9th English Mcq│Take Class 9 MCQs│ │Class 9th Hindi Lessons │Class 9th Hindi Mcq │Take Class 9 MCQs│ │Class 9th Science Lessons │Class 9th Science Mcq│ │
If we look at the markings carefully, we observe that when the time is 1 sec the velocity is 10 m/sec, and when the time is 2 sec the velocity is 20 m/sec. We can say there is an equal increase in equal intervals of time.
So, the information we get is:
• The body is moving with uniform acceleration
• The acceleration is equal to the slope
• Displacement = area of triangle: S = ½ × L × B = ½ × 6 × 50 = 150 m
When the v-t graph is a straight line but the body does not start from rest
This shows the velocity is directly proportional to time and the body is moving with uniform acceleration; but here the body is not starting from rest, it has already acquired a velocity of 10 m/sec.
If we look at the markings carefully, we observe that when the time is 1 sec the velocity is 10 m/sec, and when the time is 2 sec the velocity is 20 m/sec.
We can say there is an equal increase in equal intervals of time.
So, the information we get is:
• The body is moving with uniform acceleration, but does not start from rest
• The acceleration is equal to the slope: a = (v − u)/t
• Displacement = area of trapezium: S = ½ (OA + BC) × CO = ½ (10 + 40) × 4 = ½ × 50 × 4 = 100 m
Derivation of the equations of motion (graphically)
First equation: v = u + at
The slope of the v-t graph gives the acceleration: a = (v − u)/t, hence v = u + at.
Second equation: s = ut + ½at²
The displacement is the area under the v-t graph, i.e. the area of the trapezium:
S = ½ (sum of the two parallel sides) × height = ½ (u + v) × t
Substituting v = u + at gives S = ½ (u + u + at) × t = ut + ½at².
Third equation: v² − u² = 2aS
Starting again from S = ½ (u + v) × t and substituting t = (v − u)/a from the first equation, we get
S = ½ (v + u)(v − u)/a = (v² − u²)/2a
Or, v² − u² = 2aS. Hence proved.
Circular Motion
You must have encountered a roundabout while travelling on a road, or seen athletes running on a circular track. There the motion is not in a straight line; one has to move in circles. This is circular motion.
"The motion of a body around a circular path with uniform speed is called circular motion."
Please note: when a body revolves in a circular path it moves with uniform speed, but its direction changes continuously. Because of this we can say it has non-uniform velocity, and this non-uniform velocity gives rise to acceleration; therefore the motion is said to be accelerated motion.
Motion Class 9 Questions and Answers
1. A scooter acquires a velocity of 36 km/hour in 10 seconds just after the start. Calculate the acceleration of the scooter.
2. A moving train is brought to rest within 20 seconds by applying the brakes. Find the initial velocity if the retardation due to the brakes is 2 m/sec².
3. A body starts to slide over a horizontal surface with an initial velocity of 0.5 m/sec. Due to friction, its velocity decreases at the rate of 0.05 m/sec². How much time will the body take to stop?
4. A scooter moving with a speed of 10 m/sec is stopped by applying brakes which produce a uniform acceleration of -0.05 m/sec².
How much distance will it cover before it stops?
Motion Class 9 NCERT Book Solutions
Q 1. Why are the blades of a fan said to have accelerated motion?
A. When the blades of a fan rotate, they move in a circular path. Although they rotate with uniform speed, the direction of the blades continuously changes, so we can say they are rotating with non-uniform velocity. We also know that the rate of change of velocity is acceleration. So we can say the blades of a fan possess accelerated motion, as there is a continuous change of velocity with time.
Q. Joseph jogs from one end A to another end B of a straight 300 m road in 2 min 30 seconds and then turns around and jogs 100 m back to point C in another 1 minute. What are Joseph's average speeds and average velocities from (a) A to B (b) A to C?
Q. Abdul, while driving to school, computes the average speed for his route to be 20 km/hr. On his return trip along the same route, there is less traffic and his average speed is 30 km/hr. What is the average speed for Abdul's trip?
Q. A ball is gently dropped from a height of 20 m. If its velocity increases uniformly at the rate of 10 m/sec², with what velocity will it strike the ground? After what time will it strike the ground?
Q. An object has moved through a distance. Can it have zero displacement? If so, support your answer with an example.
Ans.: Yes, displacement can be zero even if the distance is not zero. Example: a child going to school in the morning and returning home after school covers a distance, but the displacement is zero.
Q. A farmer is moving along the boundary of a square field of side 10 m in 40 seconds. What will be the magnitude of displacement of the farmer at the end of 2 minutes and 20 seconds?
Q. Which of the following is true for displacement?
1. It can't be zero
2. Its magnitude is greater than the distance travelled by the object
Ans.: Both are false. Firstly, displacement can be zero (as well as positive or negative). Secondly, its magnitude is less than or equal to the distance covered.
Q. Distinguish between speed and velocity?
│ │Speed │Velocity │
│1.│It is distance travelled with respect to time in any direction. │It is distance travelled with respect to time in a given direction.│
│2.│Scalar quantity │Vector quantity │
│3.│Incomplete quantity │Complete quantity │
Q. Under what condition is the magnitude of the average velocity of an object equal to its average speed?
Ans.: The magnitude of the average velocity of an object is equal to its average speed when the motion is along a straight line (in one direction).
Q. What does the odometer measure?
Ans.: The odometer measures the distance covered.
Q. What does the path of an object look like when it is in uniform motion?
Ans.: The body will be displaced equally in equal intervals of time.
Q. During an experiment, a signal from a spaceship reached the ground station in 5 minutes. What was the distance of the spaceship from the ground station? The signal travels at the speed of light, i.e. 3 × 10⁸ m/sec.
Q. When will you say that a body is in uniform acceleration, and in non-uniform acceleration?
Ans.: Uniform acceleration: if the change in velocity is equal in equal intervals of time. Non-uniform acceleration: if the change in velocity is not equal in equal intervals of time.
Q. A bus decreases its speed from 80 km/hr to 60 km/hr in 5 sec. Find the acceleration.
Q. A train starting from a railway station and moving with uniform acceleration attains a speed of 40 km/hr in 10 minutes. Find the acceleration.
Q. What is the nature of the distance-time graph for uniform and non-uniform motion?
Ans.: For uniform motion it is a straight line; for non-uniform motion it is a curved line.
Q. What can you say about the motion of an object whose distance-time graph is a straight line parallel to the time axis?
Ans.: The body is not changing its position with respect to time, which shows the body is stationary. If we take the markings on the axes, we see that at any time the distance covered is exactly the same.
Q. What can you say about the motion of an object whose velocity-time graph is a straight line parallel to the time axis?
Ans.: The body is not changing its velocity with respect to time, which shows the body is moving with the same velocity. If we take the markings on the axes, we see that at any time the velocity is exactly the same.
So, the information that we get is:
• The body is moving with constant velocity
• Acceleration = change in velocity/time = 0
• Displacement = area under the v-t graph: S = area of rectangle OBCD = L × B = 50 × 5 = 250 m
Q. What is the quantity which is measured by the area occupied below the velocity-time graph?
Ans.: The area occupied below the velocity-time graph gives the displacement (distance covered) of the body.
Q. A bus starting from rest moves with a uniform acceleration of 0.1 m/sec² for 2 minutes. Find the speed acquired and the distance travelled.
Q. A train is travelling at a speed of 90 km/hr. Brakes are applied so as to produce a uniform acceleration of -0.5 m/sec². Find how far the train will go before it is brought to rest.
Q. A trolley, while going down an inclined plane, has an acceleration of 2 cm/sec². What will be its velocity 3 seconds after the start?
Q. A racing car has a uniform acceleration of 4 m/sec². What distance will it cover in 10 seconds after the start?
Q. A stone is thrown vertically upward with a velocity of 5 m/sec. If the acceleration of the stone during its motion is 10 m/sec² in the downward direction, what will be the height attained by the stone and how much time will it take to reach there?
Q. An athlete covers one round of a circular track of diameter 200 m in 40 seconds. What will be the distance covered and the displacement at the end of 2 minutes 30 seconds?
Q. A motor boat starting from rest on a lake accelerates in a straight line at a constant rate of 3 m/sec² for 8 seconds. How far does the boat travel during this time?
Q. An artificial satellite is moving in a circular orbit of radius 42,250 km.
Calculate its speed if it takes 24 hours to revolve around the earth.
Q. State which of the following situations are possible and give an example of each:
• An object with constant acceleration but zero velocity
• An object moving in a certain direction with acceleration in the perpendicular direction
Ans.: (a) When a body is projected upwards, its velocity at the highest point reached is zero, although it has a uniform acceleration. (b) When an object is moving in a circular path, its direction of motion is along the tangent to the circle, although the centripetal acceleration acts perpendicular to the direction of motion, towards the centre of the circle.
Q. The speed-time graph for a car is shown in the figure:
How far does the car travel in the first 4 seconds? Which part of the graph represents uniform motion?
Ans.: The area OABO gives the distance travelled in the first 4 seconds. Part CD represents uniform motion.
Q. A driver of a car travelling at 52 km/hr applies the brakes and accelerates uniformly in the opposite direction; the car stops in 5 seconds. Another driver going at 34 km/hr in another car applies his brakes slowly and stops in 10 seconds. Plot the speed versus time graphs for the two cars. Which of the two cars travels farther after the brakes are applied?
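Several of the numerical problems above can be checked with the standard equations of uniformly accelerated motion. A small Python sketch (not part of the original notes) that works the bus and train problems:

```python
# Equations of uniformly accelerated motion:
#   v = u + a*t           (final velocity)
#   s = u*t + 0.5*a*t^2   (distance covered)
#   v^2 = u^2 + 2*a*s     (velocity-distance relation)

def final_velocity(u, a, t):
    """Final velocity after time t, starting at velocity u with acceleration a."""
    return u + a * t

def distance(u, a, t):
    """Distance covered in time t, starting at velocity u with acceleration a."""
    return u * t + 0.5 * a * t ** 2

# Bus starting from rest, a = 0.1 m/sec2, t = 2 minutes = 120 s
print(final_velocity(0.0, 0.1, 120.0))  # ≈ 12 m/sec
print(distance(0.0, 0.1, 120.0))        # ≈ 720 m

# Train at 90 km/hr = 25 m/sec braking at -0.5 m/sec2 until rest:
# from v^2 = u^2 + 2*a*s with v = 0, the stopping distance is u^2 / (2*|a|)
print(25.0 ** 2 / (2 * 0.5))            # ≈ 625 m
```

The same two helper functions cover the trolley, racing car, and motor boat questions by substituting the given values of u, a, and t.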
There are some connections between aging notions, stochastic orders, and expected utilities. It is known that the DRHR (decreasing reversed hazard rate) aging notion can be characterized via the comparative statics result of risk aversion, and that the location-independent riskier order preserves monotonicity between risk premium and the Arrow–Pratt measure of risk aversion, and that the dispersive order preserves this monotonicity for the larger class of increasing utilities. Here, the aging notions ILR (increasing likelihood ratio), IFR (increasing failure rate), IGLR (increasing generalized likelihood ratio), and IGFR (increasing generalized failure rate) are characterized in terms of expected utilities. Based on these observations, we recover the closure properties of ILR, IFR, and DRHR under convolution, and of IGLR and IGFR under product, and investigate the closure properties of the dispersive order, location-independent riskier order, excess wealth order, the total time on test transform order under convolution, and the star order under product. We have some new findings.
The rise of Ethereum and other blockchains that support smart contracts has led to the creation of decentralized exchanges (DEXs), such as Uniswap, Balancer, Curve, mStable, and SushiSwap, which enable agents to trade cryptocurrencies without trusting a centralized authority. While traditional exchanges use order books to match and execute trades, DEXs are typically organized as constant function market makers (CFMMs). CFMMs accept or reject proposed trades based on the evaluation of a function that depends on the proposed trade and the current reserves of the DEX. For trades that involve only two assets, CFMMs are easy to understand, via two functions that give the quantity of one asset that must be tendered to receive a given quantity of the other, and vice versa. When more than two assets are being exchanged, it is harder to understand the landscape of possible trades. We observe that various problems of choosing a multi-asset trade can be formulated as convex optimization problems, and can therefore be reliably and efficiently solved. 1. Introduction In the past few years, several new financial exchanges have been implemented on blockchains, which are distributed and permissionless ledgers replicated across networks of computers. These decentralized exchanges (DEXs) enable agents to trade cryptocurrencies, i.e., digital currencies with account balances stored on a blockchain, without relying on a trusted third party to facilitate the exchange. DEXs have significant capital flowing through them; the four largest DEXs on the Ethereum blockchain (Curve Finance^1, Uniswap^2^3, SushiSwap^4, and Balancer (Martinelli and Mushegian, 2019)) have a collective trading volume of several billion dollars per day. Unlike traditional exchanges, DEXs typically do not use order books.
Instead, most DEXs (including Curve, Uniswap, SushiSwap, and Balancer) are organized as constant function market makers (CFMMs). A CFMM holds reserves of assets (cryptocurrencies), contributed by liquidity providers. Agents can offer or tender baskets of assets to the CFMM, in exchange for another basket of assets. If the trade is accepted, the tendered basket is added to the reserves, while the basket received by the agent is subtracted from the reserves. Each accepted trade incurs a small fee, which is distributed pro-rata among the liquidity providers. CFMMs use a single rule that determines whether or not a proposed trade is accepted. The rule is based on evaluating a trading function, which depends on the proposed trade and the current reserves of the CFMM. A proposed trade is accepted if the value of the trading function at the post-trade reserves (with a small correction for the trading fee) equals the value at the current reserves, i.e., the function is held constant. This condition is what gives CFMMs their name. One simple example of a trading function is the product^5^6, implemented by Uniswap^7 and SushiSwap^4; this CFMM accepts a trade only if it leaves the product of the reserves unchanged. Several other functions can be used, such as the sum or the geometric mean (which is used by Balancer^8). For trades involving just two assets, CFMMs are very simple to understand, via a scalar function that relates how much of one asset is required to receive an amount of the other, and vice versa. Thus the choice of a two-asset trade involves only one scalar quantity: how much you propose to tender (or, equivalently, how much you propose to receive). For general trades, in which many assets may be simultaneously exchanged, CFMMs are more difficult to reason about.
When multiple assets are tendered, there can be many baskets that can be tendered to receive a specific basket of assets, and vice versa, there are many choices of the received basket, given a fixed one that is tendered. Thus the choice of a multi-asset trade is more complex than just specifying an amount to tender or receive. In this case the trader may wish to tender and receive baskets that are most aligned with their preferences or utility (e.g., one that maximizes their risk-adjusted return). In all practical cases, including the ones mentioned above, the trading function is concave^9. In this paper we make use of this fact to formulate various multi-asset trading problems as convex optimization problems. Because convex optimization problems can be solved reliably and efficiently (in theory and in practice)^10, we can solve the formulated trading problems exactly. This gives a practical solution to the problem of choosing among many possible multi-asset trades: the trader articulates their objective and constraints, and a solution to this problem determines the baskets of assets to be tendered and received. We start by surveying related work in §1.1. In §2, we give a complete description of CFMMs, describing how agents may trade with a CFMM, as well as add or remove liquidity. In §3 we study some basic properties of CFMMs, many of which rely on the concavity of the trading function. In §4 we examine trades involving just two assets, and show how to understand them via two functions that give the amount of asset received for a given quantity of the tendered asset. Finally, in §5 we formulate the general multi-asset trading problem as a convex optimization problem, and give some specific examples. 1.1. Background and related work CFMMs are typically implemented on a blockchain: a decentralized, permissionless, and public ledger. The blockchain stores accounts, represented by cryptographic public keys, and associated balances of one or more cryptocurrencies.
A blockchain allows any two accounts to securely transact with each other without the need for a trusted third party or central institution, using public-key cryptography to verify their identities. Executing a transaction, which alters the state of the blockchain, costs the issuer a fee, typically paid out to the individuals providing computational power to the network. (This network fee depends on the amount of computation a transaction requires and is paid in addition to the CFMM trading fee mentioned above and described below.) Blockchains are highly tamper resistant: they are replicated across a network of computers, and kept in consensus via simple protocols that prevent invalid transactions such as double-spending of a coin. The consensus protocol operates on the level of blocks (bundles of transactions), which are verified by the network and chained together to form the ledger. Because the ledger is public, anyone in the world can view and verify all account balances and the entire record of transactions. The idea of a blockchain originated with a pseudonymously authored whitepaper that proposed Bitcoin, widely considered to be the first cryptocurrency^11. A cryptocurrency is a digital currency implemented on a blockchain. Every blockchain has its own native cryptocurrency, which is used to pay the network transaction fees (and can also be used as a standalone currency). A given blockchain may have several other cryptocurrencies implemented on it. These additional currencies are sometimes called tokens, to distinguish them from the base currency. There are thousands of tokens in circulation today, across various blockchains. Some, like the Uniswap token UNI, give holders rights over the governance of a protocol, while others, like USDC, are stablecoins, pegged to the market value of some external or real-world currency or commodity. Smart contracts. 
Modern blockchains, such as Ethereum^12^13, Polkadot^14, and Solana^15, allow anyone to deploy arbitrary stateful programs called smart contracts. A contract’s public functions can be invoked by anyone, via a transaction sent through the network and addressed to the contract. (The term ‘smart contract’ was coined in the 1990s, to refer to a set of promises between agents codified in a computer program^16.) Because creators are free to compose deployed contracts or remix them in their own applications, software ecosystems on these blockchains have developed rapidly. CFMMs are implemented using smart contracts, with functions for trading, adding liquidity, and removing liquidity. Their implementations are usually simple. For example, Uniswap v2 is implemented in just 200 lines of code. In addition to DEXs, many other financial applications have been deployed on blockchains, including lending protocols (e.g.,^17^18) and various derivatives (e.g.,^19^20). The collection of financial applications running on blockchains is known as decentralized finance, or DeFi for short. Exchange-traded funds. CFMMs have some similarities to exchange-traded funds (ETFs). A CFMM’s liquidity providers are analogous to an ETF’s authorized participants; adding liquidity to a CFMM is analogous to the creation of an ETF share, and subsequently removing liquidity is analogous to redemption. But while the list of authorized participants for an ETF is typically very small, anyone in the world can provide liquidity to a CFMM or trade with it. Comparison to order books. In an order book, trading a basket of multiple assets for another basket of multiple assets requires multiple separate trades. Each of these trades would entail the blockchain fee, increasing the total cost of trading to the trader. 
In addition, multiple trades cannot be done at the same time with an order book, exposing the trader to the risk that some of the trades go through while others do not, or that some of the trades will execute at unfavorable prices. In a CFMM, multiple asset baskets are exchanged in one trade, which either goes through as one group trade, or not at all, so the trader is not exposed to the risk of partial execution. Another advantage of CFMMs over order book exchanges is their efficiency of storage, since they do not need to store and maintain a limit order book, and their computational efficiency, since they only need to evaluate the trading function. Because users must pay for computation costs for each transaction, and these costs can often be nonnegligible in some blockchains, exchanges implementing CFMMs can often be much cheaper for users to interact with than those implementing order books. Previous work. Academic work on automated market makers began with the study of scoring rules within the statistics literature, e.g.,^21. Scoring rules furnish probabilities for baskets of events, which can be viewed as assets or tokens in a prediction market. The output probability from a scoring rule was first proposed as a pricing mechanism for a binary option (such as a prediction market) in^22. Unlike CFMMs, these early automated market makers were shown to be computationally complicated for users to interact with. For example, Chen^23 demonstrated that computing optimal arbitrage portfolios in logarithmic scoring rules (the most popular class of scoring rules) is #P-hard. The first CFMM on Ethereum (the most commonly used blockchain for smart contracts) was Uniswap^2^3. The first formal analysis of Uniswap was done in^24 and extended to general concave trading functions in^9. Evans^25 first proved that constant mean market makers could replicate a large set of portfolio value functions.
The converse result was later proven, providing a mechanism for constructing a trading function that replicates a given portfolio value function^26. Analyses of how fees^27^28 and trading function curvature^29^30^31 affect liquidity provider returns are also common in the literature. Finally, we note that there exist investigations of privacy in CFMMs^32, suitability of liquidity provider shares as a collateral asset^33, and the question of triangular arbitrage^34 in CFMMs. 1.2. Convex analysis and optimization Convex analysis. A function f : D \to {{\bf R}}, with D \subseteq {{\bf R}}^n, is convex if D is a convex set and f(\theta x + (1 - \theta)y) \leq \theta f(x) + (1 - \theta)f(y), for 0 \le \theta \le 1 and all x,y \in D. It is common to extend a convex function to an extended-valued function that maps {{\bf R}}^n to {{\bf R}}\cup \{\infty\}, with f(x)=+\infty for x \not\in D. A function f is concave if -f is convex^10 (Chap. 3). When f is differentiable, an equivalent characterization of convexity is f(z) \geq f(x)+ \nabla f(x)^T(z-x), for all z,x \in D. A differentiable function f is concave if and only if for all z,x\in D we have (1)f(z) \leq f(x)+ \nabla f(x)^T(z-x). The right hand side of this inequality is the first-order Taylor approximation of the function f at x, so this inequality states that for a concave function, the Taylor approximation is a global upper bound on the function. By adding (1) and the same inequality with x and z swapped, we obtain the inequality (2)(\nabla f(z)-\nabla f(x))^T (z-x) \leq 0, valid for any concave f and z,x\in D. This inequality states that for a concave function f, -\nabla f is a monotone operator^35. Convex optimization.
A convex optimization problem has the form \begin{array}{ll} \mbox{minimize} & f_0(x) \\ \mbox{subject to} & f_i(x) \leq 0, \quad i=1, \ldots, m\\ & g_i(x) = 0, \quad i=1, \ldots, p, \end{array} where x \in {{\bf R}}^n is the optimization variable, the objective function f_0 : D \to {{\bf R}} and inequality constraint functions f_i : D \to {{\bf R}} are convex, and the equality constraint functions g_i : {{\bf R}}^n \to {{\bf R}} are affine, i.e., have the form g_i(x) = a_i^T x + b_i for some a_i \in {{\bf R}}^n and b_i \in {{\bf R}}. (We assume the domains of the objective and inequality functions are the same for simplicity.) The goal is to find a solution of the problem, which is a value of x that minimizes the objective function, among all x satisfying the constraints f_i(x) \leq 0, i=1, \ldots, m, and g_i(x) = 0, i=1, \ldots, p^10 (Chap. 4). In the sequel we will refer to the problem of maximizing a concave function, subject to convex inequality constraints and affine equality constraints, as a convex optimization problem, since this problem is equivalent to minimizing -f_0 subject to the constraints. Convex optimization problems are notable because they have many applications, in a wide variety of fields, and because they can be solved reliably and efficiently^10. The list of applications of convex optimization is large and still growing. It has applications in vehicle control^36^37^38, finance^39^40, dynamic energy management^41, resource allocation^42, machine learning^43^44, inverse design of physical systems^45, circuit design^46^47, and many other fields. In practice, once a problem is formulated as a convex optimization problem, we can use off-the-shelf solvers (software implementations of numerical algorithms) to obtain solutions. Several solvers, such as OSQP^48, SCS^49, ECOS^50, and COSMO^51, are free and open source, while others, like MOSEK^52, are commercial.
These solvers can handle problems with thousands of variables in seconds or less, and millions of variables in minutes. Small to medium-size problems can be solved extremely quickly using embedded solvers^50^48^53 or code generation tools^54^55^56. For example, the aerospace and space transportation company SpaceX uses CVXGEN^54 to solve convex optimization problems in real-time when landing the first stages of its rockets^37. Domain-specific languages for convex optimization. Convex optimization problems are often specified using domain-specific languages (DSLs) for convex optimization, such as CVXPY^57^58 or JuMP^59, which compile high-level descriptions of problems into low-level standard forms required by solvers. The DSL then invokes a solver and retrieves a solution on the user’s behalf. DSLs vastly reduce the engineering effort required to get started with convex optimization, and in many cases are fast enough to be used in production. Using such DSLs, the convex optimization problems that we describe later can all be implemented in just a few lines of code that very closely parallel the mathematical specification of the problems. 2. Constant function market makers In this section we describe how CFMMs work. We consider a DEX with n>1 assets, labeled 1, \ldots, n, that implements a CFMM. Asset n is our numeraire, the asset we use to value and assign prices to the others. 2.1. CFMM state Reserve or pool. The DEX has some reserves of available assets, given by the vector R \in {{\bf R}}_+^n, where R_i is the quantity of asset i in the reserves. Liquidity provider share weights. The DEX maintains a table of all the liquidity providers, agents who have contributed assets to the reserves. The table includes weights representing the fraction of the reserves each liquidity provider has a claim to. We denote these weights as v_1, \ldots, v_N, where N is the number of liquidity providers. 
The weights are nonnegative and sum to one, i.e., v \geq 0, and \sum_{i=1}^N v_i = 1. The weights v_i and the number of liquidity providers N can change over time, with addition of new liquidity providers, or the deletion from the table of any liquidity provider whose weight is zero. State of the CFMM. The reserves R and liquidity provider weights v constitute the state of the DEX. The DEX state changes over time due to any of the three possible transactions: a trade (or exchange), adding liquidity, or removing liquidity. These transactions are described in §2.2 and §2.6. 2.2. Proposed trade A proposed trade (or proposed exchange) is initiated by an agent or trader, who proposes to trade or exchange one basket of assets for another. A proposed trade specifies the tender basket, with quantities given by \Delta \in {{\bf R}}_+^n, which is the basket of assets the trader proposes to give (or tender) to the DEX, and the received basket, the basket of assets the trader proposes to receive from the DEX in return, with quantities given by \Lambda \in {{\bf R}}_+^n. Here \Delta_i (\Lambda_i) denotes the amount of asset i that the trader proposes to tender to the DEX (receive from the DEX). In the sequel we will refer to the vectors that give the quantities, i.e., \Delta and \Lambda, as the tender and receive baskets, respectively. The proposed trade can either be rejected by the DEX, in which case its state does not change, or accepted, in which case the basket \Delta is transferred from the trader to the DEX, and the basket \Lambda is transferred from the DEX to the trader. The DEX reserves are updated as (3)R^+ = R + \Delta - \Lambda, where R^+ denotes the new reserves. A proposed trade is accepted or rejected based on a simple condition described in §2.3, which always ensures that R^+ \geq 0. Disjoint support of tender and receive baskets.
Intuition suggests that a trade would not include an asset in both the proposed tender and receive baskets, i.e., we should not have \Delta_i and \Lambda_i both positive. We will see later that while it is possible to include an asset in both baskets, it never makes sense to do so. This means that \Delta and \Lambda can be assumed to have disjoint support, i.e., we have \Delta_i \Lambda_i = 0 for each i. This allows us to define two disjoint sets of assets associated with a proposed or accepted trade: \mathcal T = \{ i \mid \Delta_i >0 \}, \qquad \mathcal R = \{ i \mid \Lambda_i >0 \}. Thus \mathcal T are the indices of assets the trader proposes to give to the DEX, in exchange for the assets with indices in \mathcal R. If j \not\in \mathcal T \cup \mathcal R, it means that the proposed trade does not involve asset j, i.e., \Delta_j = \Lambda_j = 0. Two-asset and multi-asset trades. A very common type of proposed trade involves only two assets, one that is tendered and one that is received, i.e., |\mathcal T| = |\mathcal R| = 1. Suppose \mathcal T= \{i\} and \mathcal R=\{j\}, with i\neq j. Then we have \Delta = \delta e_i and \Lambda = \lambda e_j, where e_i denotes the ith unit vector, and \lambda \geq 0 is the quantity of asset j the trader wishes to receive in exchange for the quantity \delta \geq 0 of asset i. (This is referred to as exchanging asset i for asset j.) When a trade involves more than two assets, it is called a multi-asset trade. We will study two-asset and multi-asset trades in §4 and §5, respectively. 2.3. Trading function Trade acceptance depends on both the proposed trade and the current reserves. A proposed trade (\Delta,\Lambda) is accepted only if (4)\varphi(R+\gamma\Delta-\Lambda) = \varphi(R), where \varphi: {{\bf R}}_+^n \to {{\bf R}} is the trading function associated with the CFMM, and the parameter \gamma \in (0,1] introduces a trading fee (when \gamma<1). The ‘constant function’ in the name CFMM refers to the acceptance condition (4).
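To make the acceptance rule (4) concrete, here is a small Python sketch of it for a weighted geometric mean trading function. This is our own illustration, not code from any DEX; the fee value γ = 0.997 used as the default is only an example, and the tolerance accounts for floating-point roundoff.

```python
import math

def geometric_mean(R, w):
    """Weighted geometric mean trading function: phi(R) = prod_i R_i^{w_i}."""
    return math.prod(r ** wi for r, wi in zip(R, w))

def accepts(R, delta, lam, w, gamma=0.997, tol=1e-9):
    """Acceptance condition (4): phi(R + gamma*delta - lam) == phi(R)."""
    R_new = [r + gamma * d - l for r, d, l in zip(R, delta, lam)]
    if any(r < 0 for r in R_new):          # reserves must stay nonnegative
        return False
    return abs(geometric_mean(R_new, w) - geometric_mean(R, w)) <= tol

# Two-asset constant product pool (w = [1/2, 1/2]) with reserves (100, 100).
# Tender 10 units of asset 1 with gamma = 1 (no fee); solving
# (100 + 10)(100 - lam) = 100*100 gives lam = 1000/110 ≈ 9.0909.
R, w = [100.0, 100.0], [0.5, 0.5]
lam = 100.0 - 100.0 * 100.0 / 110.0
print(accepts(R, [10.0, 0.0], [0.0, lam], w, gamma=1.0))   # True
print(accepts(R, [10.0, 0.0], [0.0, 20.0], w, gamma=1.0))  # False: asks too much
```

With γ < 1 the same tender buys strictly less of asset 2, which is the trading fee at work.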
We can interpret the trade acceptance condition as follows. If \gamma = 1, a proposed trade is accepted only if the quantity \varphi(R) does not change, i.e., \varphi(R^+)=\varphi(R). When \gamma < 1 (with typical values being very close to one), the proposed trade is accepted based on the devalued tendered basket \gamma \Delta. The reserves, however, are updated based on the full tendered basket \Delta as in (3). We will assume that the trading function \varphi is concave, increasing, and differentiable. Many existing CFMMs are associated with functions that satisfy the additional property of homogeneity, i.e., \varphi(\alpha R) = \alpha \varphi(R) for \alpha>0. 2.4. Trading function examples We mention some trading functions that are used in existing CFMMs. Linear and sum. The simplest trading function is linear, \varphi(R) = p^TR = p_1R_1+ \cdots + p_nR_n, with p>0, where p_i can be interpreted as the price of asset i. The trading condition (4) simplifies to \gamma p^T \Delta = p^T \Lambda. We interpret the righthand side as the total value of the received basket, at the prices given by p, and the lefthand side as the value of the tendered basket, discounted by the factor \gamma. A CFMM with p = \mathbf 1, i.e., all asset prices equal to one, is called a constant sum market maker. The CFMM mStable, which held assets that were each pegged to the same currency, was one of the earliest constant sum market makers. Geometric mean. Another choice of trading function is the (weighted) geometric mean, \varphi(R) = \prod_{i=1}^n R_i^{w_i}, where w > 0 and \mathbf 1^T w = 1. Like the linear and sum trading functions, the geometric mean is homogeneous. CFMMs that use the geometric mean are called constant mean market makers. The CFMMs Balancer^8, Uniswap^2 and SushiSwap^4 are examples of constant mean market makers. (Uniswap and SushiSwap use weights w_i = 1/n, and are sometimes called constant product market makers^60^9.) Other examples.
Another example combines the sum and geometric mean functions, \varphi(R) = (1 - \alpha) \mathbf 1^TR + \alpha \prod_{i=1}^n R_i^{w_i}, where \alpha \in [0,1] is a parameter, w \geq 0, and \mathbf 1^T w = 1. This trading function yields a CFMM that interpolates between a constant sum market (when \alpha=0) and a constant geometric mean market (when \alpha=1). Because it is a convex combination of the sum and geometric mean functions, which are themselves homogeneous, the resulting function is also homogeneous. The CFMM known as Curve^1 uses the closely related trading function \varphi(R) = \mathbf 1^TR - \alpha \prod_{i=1}^n R_i^{-1}, where \alpha > 0. Unlike the previous examples, this trading function is not homogeneous. 2.5. Prices and exchange rates In this section we introduce the concept of asset (reported) prices, based on a first order approximation of the trade acceptance condition (4). These prices inform how liquidity can be added and removed from the CFMM, as we will see in §2.6. Unscaled prices. We denote the gradient of the trading function as P=\nabla \varphi(R). We refer to P, which has positive entries since \varphi is increasing, as the vector of unscaled prices, (5)P_i = \nabla \varphi(R)_i = \frac{\partial \varphi}{\partial R_i}(R), \quad i=1,\ldots, n. To see why these numbers can be interpreted as prices, we approximate the exchange acceptance condition (4) using its first order Taylor approximation to get 0 = \varphi(R+\gamma \Delta - \Lambda) - \varphi(R) \approx \nabla \varphi(R)^T (\gamma \Delta - \Lambda) = P^T(\gamma \Delta - \Lambda), when \gamma \Delta - \Lambda is small, relative to R. We can express this approximation as (6)\gamma \sum_{i \in \mathcal T} P_i \Delta_i \approx \sum_{i \in \mathcal R} P_i \Lambda_i. The righthand side is the value of the received basket using the unscaled prices P_i.
The lefthand side is the value of the tendered basket using the unscaled prices P_i, discounted by the factor \gamma. The condition (6) is homogeneous in the prices, i.e., it is the same condition if we scale all prices by any positive constant. The reported prices (or just prices) of the assets are the prices relative to the price of the numeraire, which is asset n. The prices are p_i = \frac{P_i}{P_n}, \quad i=1, \dots, n. (The price of the numeraire is always 1.) In general the prices depend on the reserves R. (The one exception is with a linear trading function, in which the prices are constant.) In terms of prices, the condition (6) is (7)\gamma \sum_{i \in \mathcal T} p_i \Delta_i \approx \sum_{i \in \mathcal R} p_i \Lambda_i. We observe for future use that the prices for two values of the reserves R and \tilde R are the same if and only if (8)\nabla \varphi(\tilde R) = \alpha \nabla \varphi(R), for some \alpha >0. Geometric mean trading function prices. For the special case \varphi(R)=\prod_{i=1}^n R_i^{w_i}, with w_i >0 and \sum_{i=1}^n w_i=1, the unscaled prices are P = \nabla\varphi(R) = \varphi(R) (w_1R_1^{-1}, w_2R_2^{-1}, \dots, w_nR_n^{-1}), and the prices are (9)p_i = \frac{w_iR_n}{w_nR_i}, \quad i=1, \ldots, n. Exchange rates. In a two-asset trade with \Delta = \delta e_i and \Lambda = \lambda e_j, i.e., we are exchanging asset i for asset j, the exchange rate is E_{ij} = \gamma \frac{\nabla \varphi(R)_i}{\nabla \varphi(R)_j} = \gamma \frac{P_i}{P_j} = \gamma \frac{p_i}{p_j}. This is approximately how much of asset j you get for each unit of asset i, for a small trade. Note that E_{ij}E_{ji} = \gamma^2 < 1, when \gamma<1, i.e., round-trip trades lose value. These are first order approximations. We remind the reader that the various conditions described above are based on a first order Taylor approximation of the trade acceptance condition.
A proposed trade that satisfies (7) is not (quite) valid; it is merely close to valid when the proposed trade baskets are small compared to the reserves. This is similar to the midpoint price (average of bid and ask prices) in an order book; you cannot trade in either direction exactly at this price. Reserve value. The value of the reserves (using the prices p) is given by (10)V = p^T R = \frac{\nabla \varphi(R)^TR}{\nabla \varphi(R)_n}. When \varphi is homogeneous we can use the identity \nabla\varphi(R)^T R = \varphi(R) to express the reserves value as (11)V = p^T R = \frac{\varphi(R)}{\nabla \varphi(R)_n}. 2.6. Adding and removing liquidity In this section we describe how agents called liquidity providers can add or remove liquidity from the reserves. When an agent adds liquidity, she adds a basket \Psi \in {{\bf R}}_+^n to the reserves, resulting in the updated reserves R^+ = R+\Psi. When an agent removes liquidity, she removes a basket \Psi \in {{\bf R}}_+^n from the reserves, resulting in the updated reserves R^+ = R-\Psi. (We will see below that the condition for removing liquidity ensures that R^+ \geq 0.) Adding or removing liquidity also updates the liquidity provider share weights, as described below. Liquidity change condition. Adding or removing liquidity must be done in a way that preserves the asset prices. Using (8), this means we must have (12)\nabla \varphi(R^+) = \alpha \nabla \varphi(R), for some \alpha>0. (We will see later that \alpha>1 corresponds to removing liquidity, and \alpha<1 corresponds to adding liquidity.) This liquidity change condition is analogous to the trade exchange condition (4). We refer to \Psi as a valid liquidity change if this condition holds. The liquidity change condition (12) simplifies in some cases. For example, with a linear trading function the prices are constant, so any basket can be used to add liquidity, and any basket with \Psi \leq R can be removed.
(The constraint comes from the requirement R^+ \geq 0, the domain of \varphi.) Liquidity change condition for homogeneous trading function. Another simplification occurs when the trading function is homogeneous. For this case we have, for any \alpha >0, \nabla \varphi(\alpha R) = \nabla \varphi(R) (by taking the gradient of \varphi(\alpha R) = \alpha \varphi(R) with respect to R). This means that \Psi = \nu R, for \nu >0, is a valid liquidity change (provided \nu \leq 1 for liquidity removal). In words: you can add or remove liquidity by adding or removing a basket proportional to the current reserves. Liquidity provider share update. Let V = p^T R denote the value of the reserves before the liquidity change, and V^+=(p^+)^T R^+ = p^T R^+ the value after. The change in reserve value is V^+ - V = p^T \Psi when adding liquidity, and V^+ - V = -p^T\Psi when removing liquidity. Equivalently, p^T\Psi is the value of the basket a liquidity provider gives, when adding liquidity, or receives when removing liquidity. The fractional change in reserve value is (V^+-V)/V^+. When liquidity provider j adds or removes liquidity, all the share weights are adjusted pro-rata based on the change of value of the reserves, which is the value of the basket she adds or removes. The weights are adjusted to (13) v_i^+ = \begin{cases} v_i V/V^+ + (V^+ - V)/V^+ & i = j\\ v_iV/V^+ & i \ne j. \end{cases} Thus the weight of liquidity provider j is increased (decreased) by the fractional change in reserve value when she adds (removes) liquidity. These new weights are also nonnegative and sum to one. When \varphi is homogeneous and we add liquidity with the basket \Psi = \nu R, with \nu >0, we have V^+ = (1+\nu)p^TR, so V/V^+ = 1/(1+\nu), \qquad (V^+-V)/V^+ = \nu / (1+\nu). The weight updates for adding liquidity \Psi = \nu R are then v_i^+ = \begin{cases} (v_i+\nu)/(1+\nu) & i=j \\ v_i/(1+\nu) & i\neq j.
\end{cases} For removing liquidity with the basket \Psi = \nu R, we replace \nu with -\nu in the formulas above, along with the constraint \nu \le v_j. 2.7. Agents interacting with CFMMs Agents seeking to trade or add or remove liquidity make proposals. These proposals are accepted or not, depending on the acceptance conditions given above. A proposal can be rejected if another agent's proposed action is accepted (processed) before their proposed action, thus changing R and invalidating the acceptance condition. Slippage thresholds. One practical and common approach to mitigating this problem during trading is to allow agents to set a slippage threshold on the received basket. This slippage threshold, represented as some percentage 0 \le \eta \le 1, is simply a parameter that specifies how much slippage the agent is willing to tolerate without their trade failing. In this case, the agent presents some trade (\Delta, \Lambda) along with a threshold \eta, and the contract accepts the trade if there is some number \alpha satisfying \eta \le \alpha such that the trade (\Delta, \alpha\Lambda) can be accepted. In other words, the agent allows the contract to devalue the output basket by at most a factor of \eta. If no such value of \alpha exists, the trade fails. Maximal liquidity amounts. While setting slippage thresholds can help with reducing the risk of trades failing, another possible failure mode can occur during the addition of liquidity. A simple solution to this problem is that the liquidity provider specifies some basket \Psi to the CFMM contract, and the contract accepts the largest possible basket \Psi^- such that \Psi^- \le \Psi, returning the remaining amount, \Psi - \Psi^-, to the liquidity provider. In other words, \Psi can be seen as the maximal amount of liquidity a user is willing to provide. 3. Properties In this section we present some basic properties of CFMMs. 3.1.
Properties of trades If we replace the trading function \varphi with \tilde \varphi= h\circ \varphi, where h is concave, increasing, and differentiable, we obtain another concave increasing differentiable function. The associated CFMM has the same trade acceptance condition, the same prices, the same liquidity change condition, and the same liquidity provider share updates as the original CFMM. Maximum valid receive basket. Any valid trade satisfies \varphi(R+\gamma \Delta-\Lambda) = \varphi(R), so in particular R+\gamma \Delta -\Lambda \geq 0. Since we assume \Delta and \Lambda have non-overlapping support, it follows that \Lambda \le R. A valid trade cannot ask to receive more than is in the reserves. Non-overlapping support for valid tender and receive baskets. Here we show why a valid proposed trade with \Delta_k>0 and \Lambda_k>0 for some k does not make sense when \gamma<1, justifying our assumption that this never happens. Let (\tilde \Delta,\tilde \Lambda) be a proposed trade which coincides with (\Delta,\Lambda) except in the kth components, which we set to \tilde \Delta_k = \Delta_k - \tau/\gamma, \qquad \tilde \Lambda_k = \Lambda_k - \tau, where \tau = \min\{\gamma \Delta_k,\Lambda_k\} >0. Evidently \tilde \Delta \geq 0, \tilde \Lambda \geq 0, and R+\gamma \Delta - \Lambda = R+\gamma \tilde \Delta - \tilde \Lambda, so the proposed trade (\tilde \Delta,\tilde \Lambda) is also valid. If the trader proposes this trade instead of (\Delta,\Lambda), the net change in her assets is \tilde \Lambda- \tilde \Delta = \Lambda -\Delta + \left(\frac{1}{\gamma}-1\right) \tau e_k. The last vector on the right is zero in all entries except k, and positive in that entry. Thus the valid proposed trade (\tilde \Delta,\tilde \Lambda) has the same net effect as the trade (\Delta,\Lambda), except that the trader ends up with a positive amount more of the kth asset. Assuming the kth asset has value, we would always prefer this. Trades increase the function value.
For an accepted nonzero trade, we have \varphi(R^+) =\varphi(R+\Delta-\Lambda) > \varphi(R+\gamma\Delta-\Lambda) = \varphi(R), since \varphi is increasing and R+\Delta-\Lambda \ge R+\gamma\Delta-\Lambda, with at least one entry being strictly greater, whenever \gamma < 1. We can derive a stronger inequality using concavity of \varphi, which implies that \varphi(R+\gamma \Delta-\Lambda) \leq \varphi(R+\Delta-\Lambda) + (\gamma-1) \nabla \varphi(R+\Delta-\Lambda)^T \Delta. This can be re-arranged as \varphi(R^+) \geq \varphi(R) + (1-\gamma) (P^+)^T \Delta, where P^+ = \nabla \varphi(R^+) are the unscaled prices at the reserves R^+. This tells us the function value increases at least by (1-\gamma) times the value of the tendered basket at the unscaled prices. Trading cost is positive. Suppose (\Delta, \Lambda) is a valid trade. The net change in the trader's holdings is \Lambda-\Delta. We can interpret \delta = p^T (\Delta-\Lambda) as the decrease in value of the trader's holdings due to the proposed trade, evaluated at the current prices. We can interpret \delta as a trading cost, evaluated at the pre-trade prices, and now show it is positive. Since \varphi is concave, we have \varphi(R+\gamma \Delta -\Lambda) \leq \varphi(R) + \nabla \varphi(R)^T(\gamma \Delta - \Lambda). Using \varphi(R+\gamma \Delta -\Lambda) = \varphi(R), this implies 0 \leq \nabla \varphi(R)^T (\gamma \Delta - \Lambda) = P^T(\gamma \Delta-\Lambda). From this we obtain P^T(\Delta-\Lambda) = P^T(\gamma \Delta-\Lambda) + (1-\gamma) P^T \Delta \geq (1-\gamma) P^T \Delta. Dividing by P_n gives \delta \geq (1-\gamma) p^T \Delta. Thus the trading cost is always at least a factor (1-\gamma) of p^T\Delta, the total value of the tendered basket. The trading cost \delta is also the increase in the total reserve value, at the current prices.
So we can say that each trade increases the total reserve value, at the current prices, by at least (1-\gamma) times the value of the tendered basket. 3.2. Properties of liquidity changes Liquidity change condition interpretation. One natural interpretation of the liquidity change condition (12) is in terms of a simple optimization problem. We seek a basket \Psi that maximizes the post-change trading function value subject to a given total value of the basket at the current prices, (14)\begin{aligned} & \text{maximize} && \varphi(R^+)\\ & \text{subject to} && p^T (R^+-R) \le M. \end{aligned} Here the optimization variable is R^+\in {{\bf R}}_+^n, and M is the desired value of the basket \Psi at the current prices, for adding liquidity, or its negative, for removing liquidity. The optimality conditions for this convex optimization problem are p^T(R^+-R)\le M, \qquad \nabla \varphi(R^+) - \nu p =0, where \nu \ge 0 is a Lagrange multiplier. Using p = \nabla \varphi(R)/\nabla \varphi(R)_n, the second condition is \nabla \varphi(R^+) = \frac{\nu}{\nabla \varphi(R)_n} \nabla \varphi(R), which is (12) with \alpha = \nu/\nabla \varphi(R)_n. We can easily recover the basket \Psi from R^+ since \Psi = R^+ - R. Liquidity provision problem. When the trading function is homogeneous, it is easy to understand what baskets can be used to add or remove liquidity: they must be proportional to the current reserves. In other cases, it can be difficult to find an R^+ that satisfies (12). In the general case, however, the convex optimization problem (14) can be solved to find the basket \Psi that gives a valid liquidity change, with M denoting the total value of the added basket (when M>0) or removed basket (when M<0). Liquidity change and the gradient scale factor \alpha. Suppose that we add or remove liquidity. Since \varphi is concave, (2) tells us that (\nabla \varphi(R^+)-\nabla \varphi(R))^T (R^+-R) \leq 0.
Using \nabla \varphi(R^+) = \alpha \nabla \varphi(R), this becomes (\alpha-1) \nabla \varphi(R)^T (R^+-R) \leq 0. We have \nabla \varphi(R)>0. If we add liquidity, we have R^+ - R \geq 0 and R^+-R \neq 0, so \nabla \varphi(R)^T (R^+-R) >0. From the inequality above we conclude that \alpha<1. If we remove liquidity, a similar argument tells us that \alpha>1. 4. Two-asset trades Two-asset trades, sometimes called swaps, are some of the most common types of trades performed on DEXs. In this section, we show a number of interesting properties of trades in this common special case. 4.1. Exchange functions Suppose we exchange asset i for asset j, so \Delta = \delta e_i and \Lambda = \lambda e_j, with \delta\geq 0, \lambda \geq 0. The trade acceptance condition (4) is (15)\varphi(R+\gamma \delta e_i - \lambda e_j)=\varphi(R). The lefthand side is increasing in \delta and decreasing in \lambda, so for each value of \delta there is at most one valid value of \lambda, and for each value of \lambda, there is at most one valid value of \delta. In other words, the relation (15) between \delta and \lambda defines a one-to-one function. This means that two-asset trades are characterized by a single parameter, either \delta (how much is tendered) or \lambda (how much is received). Forward exchange function. Define F: {{\bf R}}_+ \to {{\bf R}}, where F(\delta) is the unique \lambda that satisfies (15). The function F is called the forward exchange function, since F(\delta) is how much of asset j you get if you exchange \delta of asset i. The forward exchange function F is increasing since \varphi is componentwise increasing, and nonnegative since F(0) = 0. We will now show that the function F is concave. Using the implicit function theorem on (15) with \lambda = F(\delta), we obtain (16)F'(\delta) = \gamma\frac{\nabla\varphi(R')_i}{\nabla\varphi(R')_j}, where we use R' = R + \gamma\delta e_i - F(\delta) e_j to simplify notation.
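As a quick numerical illustration of F, one can solve (15) for \lambda by bisection, since the left-hand side is decreasing in \lambda. The sketch below is not from the paper; the geometric-mean trading function, weights, reserves, and fee are illustrative assumptions:

```python
import math

def phi(R, w):
    # Geometric mean trading function: prod_k R_k^{w_k}
    return math.prod(r ** wk for r, wk in zip(R, w))

def forward_exchange(R, w, i, j, delta, gamma=0.997, tol=1e-12):
    """F(delta): the unique lam with phi(R + gamma*delta*e_i - lam*e_j) = phi(R)."""
    target = phi(R, w)
    Rp = list(R)
    Rp[i] += gamma * delta
    lo, hi = 0.0, R[j]  # a valid trade cannot receive more than the reserves
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        Rp[j] = R[j] - mid
        if phi(Rp, w) > target:  # left-hand side still too large: lam can grow
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative two-asset pool: weights (0.2, 0.8), reserves (1, 100)
R, w, gamma = [1.0, 100.0], [0.2, 0.8], 0.997
lam = forward_exchange(R, w, 0, 1, 0.01, gamma)
E12 = gamma * (w[0] * R[1]) / (w[1] * R[0])  # exchange rate gamma * p_1 / p_2
assert lam <= E12 * 0.01                     # concavity bound: F(delta) <= E_12 * delta
```

The bisection only uses evaluations of \varphi, so it applies to any concave increasing trading function; for the geometric mean the result can be checked against the closed-form expression for F given below.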
To show that F is concave, we will show that, for any nonnegative trade amounts \delta, \delta' \ge 0, the function F satisfies (17)F(\delta') \le F'(\delta)(\delta' - \delta) + F(\delta), which establishes that F is concave. We write R'' = R + \gamma \delta' e_i - F(\delta')e_j, and note that \varphi(R) = \varphi(R') = \varphi(R'') from the definition of F. Since \varphi is concave it satisfies \varphi(R'') \le \nabla \varphi(R')^T(R'' - R') + \varphi(R'), so \nabla \varphi(R')^T(R'' - R') \ge 0. Using the definitions of R'' and R', we have 0 \le \gamma(\delta' - \delta)\nabla \varphi(R')_i - (F(\delta') - F(\delta))\nabla\varphi(R')_j. Dividing by \nabla\varphi(R')_j and using (16), we obtain (17). Reverse exchange function. Define G: {{\bf R}}_+ \to {{\bf R}}\cup \{\infty\}, where G(\lambda) is the unique \delta that satisfies (15), or G(\lambda)=\infty if there is no such \delta. The function G is called the reverse exchange function, since G(\lambda) is how much of asset i you must exchange, to receive \lambda of asset j. In a similar way to the forward exchange function, the reverse exchange function is nonnegative and increasing, but this function is convex rather than concave. (This follows from a nearly identical proof.) Forward and reverse exchange functions are inverses. The forward and reverse exchange functions are inverses of each other, i.e., they satisfy G(F(\delta)) = \delta, \qquad F(G(\lambda)) = \lambda, when both functions are finite. Analogous functions for a limit order book market. There are analogous functions in a market that uses a limit order book. They are piecewise linear, where the slopes are the different prices of each order, while the distance between the kink points is equal to the size of each order. The associated functions have the same properties, i.e., they are increasing, inverses of each other, F is concave, and G is convex. Evaluating F and G. In some important special cases, we can express the functions F and G in a closed form.
For example, when the trading function is the sum function, they are F(\delta) = \min\{\gamma\delta, R_j\}, \qquad G(\lambda) = \begin{cases} \lambda/\gamma & \lambda \le R_j\\ +\infty & \text{otherwise}. \end{cases} When the trading function is the geometric mean, the functions are F(\delta) = R_j \left(1-\frac{R_i^{w_i/w_j}}{(R_i + \gamma \delta)^{w_i/w_j}}\right), \qquad G(\lambda) = \frac{R_i}{\gamma}\left(\frac{R_j^{w_j/w_i}}{(R_j - \lambda)^{w_j/w_i}} - 1\right), whenever \lambda < R_j, and G(\lambda) = \infty otherwise. On the other hand, when the forward and reverse trading functions F and G cannot be expressed analytically, we can use several methods to evaluate them numerically^61. To evaluate F(\delta), we fix \delta and solve for \lambda in (15). The lefthand side is a decreasing function of \lambda, so we can use simple bisection to solve this nonlinear equation. Newton's method can be used to achieve higher accuracy with fewer steps. Exploiting the concavity of \varphi, it can be shown that an undamped Newton iteration always converges to the solution. With superscripts denoting iteration, this is \lambda^{k+1} = \lambda^k + \frac{\varphi(R+\gamma \delta e_i - \lambda^k e_j) - \varphi(R)}{\nabla \varphi(R+\gamma \delta e_i - \lambda^k e_j)_j}, with starting point based on the exchange rate, \lambda^0 = \delta E_{ij} = \delta\frac{\gamma p_i}{p_j}. (It can be shown that the convergence is monotone decreasing.) We note that one of the largest CFMMs, Curve, uses a trading function that is not homogeneous and uses this method in production^1. Figure 1: Left. Forward exchange functions for two values of the reserves. Right. Reverse exchange functions for the same two values of the reserves. Slope at zero. Using (16), we see that F'(0^+) = E_{ij}, i.e., the one-sided derivative at 0 is exactly the exchange rate for assets i and j. Since F is concave, we have (18)F(\delta) \leq F'(0^+)\delta = E_{ij} \delta.
This tells us that the amount of asset j you will receive for trading \delta of asset i is no more than the amount predicted by the exchange rate. The one-sided derivative of the reverse exchange function G at 0 is G'(0^+) = \gamma^{-2} E_{ji} = 1/F'(0^+). The analog of the inequality (18) is (19)G(\lambda) \geq G'(0^+)\lambda = \gamma^{-2} E_{ji} \lambda, which states that the amount of asset i you need to tender to receive an amount of asset j is at least the amount predicted by the exchange rate. Figure 1 shows the forward and reverse exchange functions for a constant geometric mean market with two assets and weights w_1 = .2 and w_2 = .8, and \gamma =0.997. We show the functions for two values of the reserves: R=(1,100) and R=(0.1,10). The exchange rate is the same for both values of the reserves and equal to E_{12} = \gamma w_1R_2/(w_2R_1) \approx 24.9. 4.2. Exchanging multiples of two baskets Here we discuss a simple generalization of the two-asset trade, in which we tender and receive a multiple of fixed baskets. Thus, we have \Delta = \delta \tilde \Delta and \Lambda = \lambda \tilde \Lambda, where \lambda\geq 0 and \delta \geq 0 scale the fixed baskets \tilde \Delta and \tilde \Lambda. When \tilde \Delta = e_i and \tilde \Lambda= e_j, this reduces to the two-asset trade discussed above. The same analysis holds in this case as in the simple two-asset trade. We can introduce the forward and reverse functions F and G, which are inverses of each other. They are increasing, F is concave, G is convex, and they satisfy F(0)=G(0)=0. We have the inequality F(\delta) \leq E \delta, where E is the exchange rate for exchanging the basket \tilde \Delta for the basket \tilde \Lambda, given by E = \gamma \frac{\nabla \varphi(R)^T \tilde \Delta}{\nabla \varphi(R)^T \tilde \Lambda}. There is also an inequality analogous to (19), using this definition of the exchange rate. We mention two specific important examples in what follows. Liquidating assets.
Let \Delta\in {{\bf R}}_+^n denote a basket of assets we wish to liquidate, i.e., exchange for the numeraire. We can assume that \Delta_n=0. We then find the \alpha>0 for which (\Delta,\alpha e_n) is a valid trade, i.e., (20)\varphi(R+\gamma \Delta - \alpha e_n) = \varphi(R). We can interpret \alpha as the liquidation value of the basket \Delta. We can also show that the liquidation value is at most as large as the discounted value of the basket; i.e., \alpha \le \gamma p^T\Delta. To see this, apply (1) to the left hand side of (20), which gives, after cancelling \varphi(R) on both sides, \nabla\varphi(R)^T(\gamma \Delta - \alpha e_n) \ge 0. Rearranging, we find: \alpha \le \frac{\gamma\nabla\varphi(R)^T\Delta}{\nabla\varphi(R)_n} = \gamma p^T\Delta. Purchasing a basket. Let \Lambda \in {{\bf R}}_+^n denote a basket we wish to purchase using the numeraire. We find \alpha >0 for which (\alpha e_n,\Lambda) is a valid trade, i.e., \varphi(R+\gamma \alpha e_n - \Lambda) = \varphi(R). We interpret \alpha as the purchase cost of the basket \Lambda. It can be shown that \alpha \geq (1/\gamma) p^T \Lambda, i.e., the purchase cost is at least a factor 1/\gamma more than the value of the basket, at the current prices. This follows from a nearly identical argument to that of the liquidation value. Figure 2: Valid tendered baskets (\Delta_3, \Delta_4) for the received basket \Lambda = (2,4,0,0). 5. Multi-asset trades We have seen that two-asset trades are easy to understand; we choose the amount we wish to tender (or receive), and we can then find the amount we will receive (or tender). Multi-asset trades are more complex, because even for a fixed receive basket \Lambda, there are many tender baskets that are valid, and we face the question of which one we should use. The same is true when we fix the tendered basket \Delta: there are many baskets \Lambda we could receive, and we need to choose one.
More generally, we have the question of how to choose the proposed trade (\Delta,\Lambda). In the two-asset case, the choice is parametrized by a scalar, either \delta or \lambda. In the multi-asset case, there are more degrees of freedom. We consider an example with n=4, geometric mean trading function with weights w_i = 1/4 and fee \gamma = .997, with reserves R = (4, 5, 6, 7). We fix the received basket to be \Lambda = (2,4,0,0). There are many valid tendered baskets, which are shown in figure 2. The plot shows valid values of (\Delta_3, \Delta_4), since the first two components of \Delta are zero. 5.1. The general trade choice problem We formulate the problem of choosing (\Delta,\Lambda) as an optimization problem. The net change in holdings of the trader is \Lambda - \Delta. The trader judges a net change in holdings using a utility function U:{{\bf R}}^n \to {{\bf R}}\cup\{-\infty\}, where she prefers (\Delta,\Lambda) to (\tilde \Delta, \tilde \Lambda) if U(\Lambda - \Delta)> U(\tilde \Lambda - \tilde \Delta). The value -\infty is used to indicate that a change in holdings is unacceptable. We will assume that U is increasing and concave. (Increasing means that the trader would always prefer to have a larger net change than a smaller one, which comes from our assumption that all assets have value.) To choose a valid trade that maximizes utility, we solve the problem (21)\begin{aligned} & \text{maximize} && U(\Lambda - \Delta) \\ & \text{subject to} && \varphi(R+\gamma \Delta - \Lambda) = \varphi(R), \quad \Delta \geq 0, \quad \Lambda \geq 0, \end{aligned} with variables \Delta and \Lambda. Unfortunately the constraint \varphi(R+\gamma \Delta - \Lambda) = \varphi(R) is not convex (unless the trading function is linear), so this problem is not in general convex.
Instead we will solve its convex relaxation, where we change the equality constraint to an inequality to obtain the convex problem (22)\begin{aligned} & \text{maximize} && U(\Lambda - \Delta) \\ & \text{subject to} && \varphi(R+\gamma \Delta - \Lambda) \geq \varphi(R), \quad \Delta \geq 0, \quad \Lambda \geq 0, \end{aligned} which is readily solved. It is easy to show that any solution of (22) satisfies \varphi(R+\gamma \Delta - \Lambda) = \varphi(R), and so is also a solution of the problem (21). (If a solution satisfies \varphi(R+\gamma \Delta - \Lambda) > \varphi(R), we can decrease \Delta or increase \Lambda a bit, so as to remain feasible and increase the objective, a contradiction.) Thus we can (globally and efficiently) solve the non-convex problem (21) by solving the convex problem (22). No-trade condition. Assuming U(0) > - \infty, the solution to the problem (22) can be \Delta = \Lambda = 0, which means that trading does not increase the trader’s utility, i.e., the trader should not propose any trade. We can give simple conditions under which this happens for the case when U is differentiable. They are (23)\gamma p \leq \alpha\nabla U(0) \leq p, for some \alpha>0. We can interpret the set of prices p for which this is true, i.e., K = \{p \in {{\bf R}}^n_+ \mid \gamma p \le \alpha \nabla U(0) \le p ~ \text{for some} ~ \alpha > 0\}, as the no-trade cone for the utility function U. (It is easy to see that K is a convex polyhedral cone.) We interpret \nabla U(0) as the vector of marginal utilities to the trader, and p as the prices of the assets in the CFMM. For \gamma =1, the condition says that we do not trade when the marginal utility is a positive multiple of the current asset prices; if this does not hold, then the solution of the trading problem (22) is nonzero, i.e., the trader should trade to increase her utility. When \gamma<1, the trader will not trade when the prices are in K. 
To derive condition (23), we first derive the optimality conditions for the problem (22). We introduce the Lagrangian L(\Delta, \Lambda, \lambda, \omega, \kappa) = U(\Lambda -\Delta) + \lambda(\varphi(R+\gamma \Delta-\Lambda)-\varphi(R)) + \omega^T\Delta + \kappa^T\Lambda, where \lambda \in {{\bf R}}_+, \omega\in {{\bf R}}_+^n, and \kappa \in {{\bf R}}_+^n are dual variables or Lagrange multipliers for the constraints. The optimality conditions for (22) are feasibility, along with \nabla_\Delta L = 0, \qquad \nabla_\Lambda L = 0. The choice \Delta=0, \Lambda =0 is feasible, and satisfies this condition if \nabla_\Delta L(0,0,\lambda,\omega,\kappa) = 0, \qquad \nabla_\Lambda L(0,0,\lambda,\omega,\kappa) = 0. These are - \nabla U(0) + \lambda \gamma \nabla \varphi(R) + \omega =0, \qquad \nabla U(0) - \lambda \nabla \varphi(R) + \kappa=0, which we can write as \nabla U(0) \geq \lambda \gamma \nabla \varphi(R), \qquad \nabla U(0) \leq \lambda \nabla \varphi(R). Dividing these by \lambda P_n, we obtain (23) with \alpha = 1/(\lambda P_n). 5.2. Special cases Linear utility. When U(z) = \pi^T z, with \pi \geq 0, we can interpret \pi as the trader's private prices of the assets, i.e., the prices she values the assets at. From (23) we see that the trader will not trade if her private asset prices satisfy (24)\gamma p \le \alpha\pi \le p for some \alpha > 0. In the special case where \pi satisfies (\pi_2, \dots, \pi_n) = \lambda (p_2, \dots, p_n), for \lambda \ge 0, i.e., \pi is collinear with p except in the first entry, (24) is satisfied if and only if \lambda \gamma p_1 \le \pi_1 \le \lambda \gamma^{-1}p_1. If \lambda = 1, then this simplifies to the condition \gamma p_1 \le \pi_1 \le \gamma^{-1}p_1. (This will arise in an example we present below.) Markowitz trading.
Suppose the trader models the return r\in {{\bf R}}^n on the assets over some period of time as a random vector with mean \mathop{\bf E{}}r = \mu \in {{\bf R}}^n and covariance matrix \mathop{\bf E{}}(r-\mu)(r-\mu)^T =\Sigma \in {{\bf R}}^{n\times n}. If the trader holds a portfolio of assets z \in {{\bf R}}^n_+, the return is r^T z; the expected portfolio return is \mu^Tz and the variance of the portfolio return is z^T\Sigma z. In Markowitz trading, the trader maximizes the risk-adjusted return, defined as \mu^T z - \kappa z^T\Sigma z, where \kappa>0 is the risk-aversion parameter^62^63. This leads to the Markowitz trading problem (25)\begin{aligned} & \text{maximize} && \mu^T z - \kappa z^T \Sigma z\\ & \text{subject to} && z = z^\text{curr} - \Delta + \Lambda \\ &&&\varphi(R+\gamma \Delta - \Lambda) \geq \varphi(R)\\ &&& \Delta \geq 0, \quad \Lambda \geq 0, \end{aligned} with variables z, \Delta, \Lambda, where z^\text{curr} is the trader's current holdings of assets. This is the general problem (22) with concave utility function U(Z) = \mu^T(z^\text{curr} + Z) - \kappa (z^\text{curr} + Z)^T \Sigma (z^\text{curr} + Z). A well-known limitation of the Markowitz quadratic utility function U, i.e., the risk-adjusted return, is that it is not increasing for all Z, which implies that the trading function relaxation need not be tight. However, for any sensible choice of the parameters \mu and \Sigma, it is increasing for the values of Z found by solving the Markowitz problem (25), and the relaxation is tight. As a practical matter, if a solution of (25) does not satisfy the trading constraint, then the parameters are inappropriate. Expected utility trading. Here the trader models the returns r\in {{\bf R}}^n on the assets over some time interval as random, with some known distribution. The trader seeks to maximize the expected utility of the portfolio return, using a concave increasing utility function \psi: {{\bf R}}\to {{\bf R}} to introduce risk aversion.
(Thus we use the term utility function to refer to both the trading utility function U: {{\bf R}}_+^n \to {{\bf R}} and the portfolio return utility function \psi:{{\bf R}}\to {{\bf R}}, but the context should make it clear which is meant.) This leads to the problem (26)\begin{aligned} & \text{maximize} && \mathop{\bf E{}}\psi(r^Tz)\\ & \text{subject to} && z = z^\text{curr} - \Delta + \Lambda \\ &&&\varphi(R+\gamma \Delta - \Lambda) \geq \varphi(R)\\ &&& \Delta \geq 0, \quad \Lambda \geq 0, \end{aligned} where the expectation is over r. This is the general problem (22), with utility U(Z) = \mathop{\bf E{}}\psi (r^T (z^\text{curr}+Z)), which is concave and increasing. This problem can be solved using several methods. One simple approach is to replace the expectation with an empirical or sample average over some Monte Carlo samples of r, which leads to an approximate solution of (26). The problem can also be solved using standard methods for convex stochastic optimization, such as projected stochastic gradient methods. 5.3. Numerical examples In this section we give two numerical examples. Figure 3: Solutions \Lambda - \Delta for the linear utility maximization problem, as the private price for asset 1 is varied by the factor t from the CFMM price. The blue curve shows asset 1. Linear utility. Our first example involves a CFMM with 6 assets, geometric mean trading function with equal weights w_i = 1/6, and trading fee parameter \gamma = .9. (We intentionally use an unrealistically small value of \gamma so the no-trade condition is more evident.) We take reserves R = (1, 3, 2, 5, 7, 6). The corresponding prices are given by (9), p = (R_6/R_1, R_6/R_2, \ldots, 1) = (6, 2, 3, 6/5, 6/7, 1). We consider linear utility, with the trader's private prices given by \pi = (tp_1, p_2, \dots, p_n), where t is a parameter that we vary over the interval t \in [1/2,2].
For t=1, we have \pi=p, i.e., the CFMM prices and the trader's private prices are the same (and not surprisingly, the trader does not trade). As we vary t, we vary the trader's private price for asset 1 by up to a factor of two from the CFMM price. The family of optimal trades is shown in figure 3, as a function of the parameter t. We plot \Lambda - \Delta versus t, which shows assets in the tender basket as negative and the received basket as positive. The blue curve shows asset 1, which we tender when t is small, and receive when t is large. The no-trade region is clearly seen as the interval t\in [0.9,1.1]. Markowitz trading. Figure 4: Solutions \Lambda - \Delta for instances of an example Markowitz trading problem as the risk-aversion parameter \kappa is varied. Our second example uses nearly the same CFMM and reserves as the previous example, but with a more realistic trading fee parameter \gamma = .997. (This is a common choice of trading fee for many CFMMs.) We solve the Markowitz trading problem (25), with current holdings z^\text{curr} = (2.5, 1, .5, 2.5, 3, 1), mean return \mu = (-.01, .01, .03, .05, -.02, .02), and covariance \Sigma = V^TV/100, where the entries of V\in {{\bf R}}^{6\times 6} are drawn from the standard normal distribution. We solve the optimal trading problem for values of the risk aversion parameter \kappa varying between 10^{-2} and 10^{1}. (For all of these values, the trading constraint is tight.) These optimal trades are shown in figure 4. It is interesting to note that depending on the risk aversion, we either tender or receive assets 2 and 3. The CVXPY code for the Markowitz optimal trading problem is given below. In this snippet we assume that `mu`, `sigma`, `gamma`, `kappa`, `R`, and `z_curr` have been previously defined. Note that the code closely follows the mathematical description of the problem given in (25).
Conclusion We have provided a general description of CFMMs, outlining how users can interact with a CFMM through trading or adding and removing liquidity. We observe that many of the properties of CFMMs follow from concavity of the trading function. In the simple case where two assets are traded or exchanged, it suffices to specify the amount we wish to receive (or tender), which determines the amount we tender (receive), by simply evaluating a convex (concave) function. Multi-asset trades are more complex, since the set of valid trades is multi-dimensional, i.e., multiple tender or received baskets are possible. We formulate the problem of choosing from among these possible valid trades as a convex optimization problem, which can be globally and efficiently solved. Listing 1: Markowitz trading CVXPY code. The authors would like to acknowledge Shane Barratt for useful discussions. Guillermo Angeris is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1656518. Akshay Agrawal is supported by a Stanford Graduate Fellowship.
{"url":"https://baincapitalcrypto.com/insights/?authors=667","timestamp":"2024-11-07T04:29:08Z","content_type":"text/html","content_length":"186651","record_id":"<urn:uuid:881bf837-86f4-44cd-b389-385387597df8>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00552.warc.gz"}
B. Mirkin (2011) Core Concepts in Data Analysis: Summarization, Correlation, Visualization, London, Springer; Undergraduate Topics in Computer Science Series. Its draft is here. A review from ACM Computing Reviews is here. Here are some of my papers in PDF format (page numbering may not coincide with that of the published version). You will need Adobe® Acrobat® Reader™ in order to view them.
{"url":"https://www.dcs.bbk.ac.uk/~mirkin/publications.html","timestamp":"2024-11-05T06:18:17Z","content_type":"application/xml","content_length":"7379","record_id":"<urn:uuid:8c66c0dc-b4be-463f-8e78-5f6c6908170c>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00582.warc.gz"}
Initial and terminal objects In category theory, a branch of mathematics, an initial object of a category C is an object I in C such that for every object X in C, there exists precisely one morphism I → X. The dual notion is that of a terminal object (also called terminal element): T is terminal if for every object X in C there exists a single morphism X → T. Initial objects are also called coterminal or universal, and terminal objects are also called final. If an object is both initial and terminal, it is called a zero object or null object. A pointed category is one with a zero object. A strict initial object I is one for which every morphism into I is an isomorphism. • In the category of pointed sets (whose objects are non-empty sets together with a distinguished element; a morphism from (A,a) to (B,b) being a function ƒ : A → B with ƒ(a) = b), every singleton is a zero object. Similarly, in the category of pointed topological spaces, every singleton is a zero object. • In the category of semigroups, the empty semigroup is the unique initial object and any singleton semigroup is a terminal object. There are no zero objects. In the subcategory of monoids, however, every trivial monoid (consisting of only the identity element) is a zero object. • In the category of groups, any trivial group is a zero object. There are zero objects also for the category of abelian groups, the category of pseudo-rings Rng (where the zero ring is the zero object), the category of modules over a ring, and the category of vector spaces over a field; see zero object (algebra) for details. This is the origin of the term "zero object". • In the category of rings with unity and unity-preserving morphisms, the ring of integers Z is an initial object. The zero ring consisting only of a single element 0 = 1 is a terminal object. • In the category of fields, there are no initial or terminal objects. However, in the subcategory of fields of fixed characteristic, the prime field is an initial object.
• Any partially ordered set (P, ≤) can be interpreted as a category: the objects are the elements of P, and there is a single morphism from x to y if and only if x ≤ y. This category has an initial object if and only if P has a least element; it has a terminal object if and only if P has a greatest element.
• All monoids may be considered, in their own right, to be categories with a single object. In this sense, each monoid is a category that consists of one object and a collection of specific morphisms to itself. This one object is neither initial nor terminal unless the monoid is trivial, in which case it is both.
• In the category of graphs, the null graph, containing no vertices and no edges, is an initial object. If loops are permitted, then the graph with a single vertex and one loop is terminal. The category of simple graphs does not have a terminal object.
• Similarly, the category of all small categories with functors as morphisms has the empty category 0 (with no objects and no morphisms) as initial object and the terminal or trivial category 1 (with a single object and a single morphism) as terminal object.
• Any topological space X can be viewed as a category by taking the open sets as objects, with a single morphism between two open sets U and V if and only if U ⊂ V. The empty set is the initial object of this category, and X is the terminal object. This is a special case of the partially ordered set example above, with P := the set of open subsets.
• If X is a topological space (viewed as a category as above) and C is some small category, we can form the category of all contravariant functors from X to C, using natural transformations as morphisms. This category is called the category of presheaves on X with values in C. If C has an initial object c, then the constant functor which sends every open set to c is an initial object in the category of presheaves.
Similarly, if C has a terminal object, then the corresponding constant functor serves as a terminal presheaf.
• In the category of schemes, Spec(Z), the prime spectrum of the ring of integers, is a terminal object. The empty scheme (equal to the prime spectrum of the zero ring) is an initial object.
• If we fix a homomorphism ƒ : A → B of abelian groups, we can consider the category C consisting of all pairs (X, φ) where X is an abelian group and φ : X → A is a group homomorphism with ƒφ = 0. A morphism from the pair (X, φ) to the pair (Y, ψ) is defined to be a group homomorphism r : X → Y with the property ψr = φ. The kernel of ƒ is a terminal object in this category; this is nothing but a reformulation of the universal property of kernels. With an analogous construction, the cokernel of ƒ can be seen as an initial object of a suitable category.
• In the category of interpretations of an algebraic model, the initial object is the initial algebra, the interpretation that provides as many distinct objects as the model allows and no more.

Existence and uniqueness

Initial and terminal objects are not required to exist in a given category. However, if they do exist, they are essentially unique. Specifically, if I_1 and I_2 are two different initial objects, then there is a unique isomorphism between them. Moreover, if I is an initial object then any object isomorphic to I is also an initial object. The same is true for terminal objects.

For complete categories there is an existence theorem for initial objects. Specifically, a (locally small) complete category C has an initial object if and only if there exist a set I (not a proper class) and an I-indexed family (K_i) of objects of C such that for any object X of C there is at least one morphism K_i → X for some i ∈ I.

Equivalent formulations

Terminal objects in a category C may also be defined as limits of the unique empty diagram ∅ → C.
Since the empty category is vacuously a discrete category, a terminal object can be thought of as an empty product (a product is indeed the limit of the discrete diagram {X_i}, in general). Dually, an initial object is a colimit of the empty diagram ∅ → C and can be thought of as an empty coproduct or categorical sum.

It follows that any functor which preserves limits will take terminal objects to terminal objects, and any functor which preserves colimits will take initial objects to initial objects. For example, the initial object in any concrete category with free objects will be the free object generated by the empty set (since the free functor, being left adjoint to the forgetful functor to Set, preserves colimits).

Initial and terminal objects may also be characterized in terms of universal properties and adjoint functors. Let 1 be the discrete category with a single object (denoted by •), and let U : C → 1 be the unique (constant) functor to 1. Then:
• An initial object I in C is a universal morphism from • to U. The functor which sends • to I is left adjoint to U.
• A terminal object T in C is a universal morphism from U to •. The functor which sends • to T is right adjoint to U.

Relation to other categorical constructions

Many natural constructions in category theory can be formulated in terms of finding an initial or terminal object in a suitable category.

Other properties

• The endomorphism monoid of an initial or terminal object I is trivial: End(I) = Hom(I, I) = { id_I }.
• If a category C has a zero object 0, then for any pair of objects X and Y in C the unique composition X → 0 → Y is a zero morphism from X to Y.

References

• Adámek, Jiří; Herrlich, Horst; Strecker, George E. (1990). Abstract and Concrete Categories. The Joy of Cats (PDF). John Wiley & Sons. ISBN 0-471-60922-6. Zbl 0695.18001.
• Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical Foundations. Special Topics in Order, Topology, Algebra, and Sheaf Theory.
Encyclopedia of Mathematics and Its Applications 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001.
• Mac Lane, Saunders (1998). Categories for the Working Mathematician. Graduate Texts in Mathematics 5 (2nd ed.). Springer-Verlag. ISBN 0-387-98403-8. Zbl 0906.18001.

This article is based in part on PlanetMath's article on examples of initial and terminal objects. This article is issued from Wikipedia (version of 4/28/2016). The text is available under the Creative Commons Attribution/Share Alike license; additional terms may apply for the media files.
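As a concrete illustration of the partially ordered set example above, a finite poset can be searched for initial and terminal objects directly. The sketch below is illustrative only (the function names are my own); it treats "x divides y" as the order on the divisors of 6, so the least element 1 is initial and the greatest element 6 is terminal:

```python
def initial_objects(objects, leq):
    # In a poset viewed as a category there is a (unique) morphism x -> y
    # exactly when leq(x, y), so an initial object is a least element.
    return [x for x in objects if all(leq(x, y) for y in objects)]

def terminal_objects(objects, leq):
    # Dually, a terminal object is a greatest element.
    return [x for x in objects if all(leq(y, x) for y in objects)]

# The divisors of 6, ordered by divisibility.
divisors = [1, 2, 3, 6]
divides = lambda a, b: b % a == 0

print(initial_objects(divisors, divides))   # [1]
print(terminal_objects(divisors, divides))  # [6]
```

In a poset with no least (or greatest) element, the corresponding list comes back empty, matching the statement that initial and terminal objects need not exist.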
What our customers say... Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences: Step-by-step, Algebrator has made algebra as easy as memorizing the multiplication tables! It is just impossible that I would be doing so well academically, and feel so confident about myself, if not for this program! It changed my life! Kevin Woods, WI Algebrator is the best software I've used! I never thought I was going to learn the different formulas and rules used in math, but your software really made it easy. Thank you so much for creating it. Now I don't dread going to my algebra class. Thanks! May Sung, OK As a student I was an excellent maths student but due to scarcity of time I couldnt give attention to my daughters math education. It was an issue I could not resolve and then I came across this software. Algebrator was of immense help for her. She could now learn the basics of algebra. This was shown in her next term grades. D.H., Tennessee I failed Algebra at my local community college twice before I bought Algebrator. Third time was a charm though, got a B thanks to Algebrator. David Felton, MT No Problems, this new program is very easy to use and to understand. It is a good program, I wish you all the best. Thanks! Christy Roberts, TN Search phrases used on 2010-12-11: Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. 
Can you find yours among • function notations solver • square root of an absolute value function domain • calculator for dividing monomials • lgebra 2 practice tests • program ti83 factoring • online trig function graph calculator • eigenvalues TI84 • factorize quadratic expressions the step by step explanation • prentice hall math answer key for teachers • extended 9th grade math problems • pythagorean theory worksheets • mcdougal Algebra 2 answer key • math larson free downloadable • mathematical foiling • ti-83 trinomials with calculator • 6th grade math adding subtracting decimals • help algebra free 9th grade • math games with factoring trinomials • Scale Factor Problems Middle School • quadratic formula program for TI-84 • how to convert a decimal to fraction • how to explain probability to 4th grade • powerpoint LESSON PLAN ON INTERCEPTS OF A LINE • 6th grade ratio lessons • kumon answers • Advanced Algebra quizzes and Test by Scott, Foresman • baldor algebra • math algebra problem solver • solving for variables in rational expressions • 5th grade math partial difference • 'calculas 2' • integers worksheet • binomial equations • ged-books + links • example of question and answer of law and simplifying radicals • online calculator for algebra and graphing • pictographs, kids, worksheets • solving equations by substitution worksheet • printable variables worksheets • solve formula for specified variables • cheats for first in math • solving multivariable first order differential equations • softmath • algebrator free download • prentice hall mathematics book online • 10 grade taks release 2004 answers • how to solve fractions algebra • simultaneous equations in C language • solve algebra problems • grade 9 algebra • subtract integer • converting decimals to fractions worksheet • objective questions and answer in fourier series • roots of third order equations • permutation and combination worksheet • learn beginners algebra • lattice math worksheets • year 8 
algebra questions • add and subtract negative fractions worksheets • algebra worksheets positive and negative numbers • year 10 math algebra worksheet • Math + Rules Of GCM LCM • how to solve literal expressions • factoring cubed equations • MATH TRIVIA QUESTION AND ANSWER • clep test practise for calculus • free worksheet simplifying exponential expressions • examples on finding scale factors • simplifying radical functions • pie sign math • learn permutations and combinations • alegbra with pizzazz • elementary algebra problems • Inequalities and TI 84 • pre-algebra work • show me some math problems an 8th grader can work out • how to cube root on calculator • free 10th grade algebra help • 8th grade math balancing equations • simplify negative root • square root of an integer matlab • writing linear equations calculator • square root worksheets • practice sheets involving addition and subtraction of algebraic expressions • grade seven math worksheet answers
Vector Rotation (Shift Elements) VecRot {DescTools} R Documentation Vector Rotation (Shift Elements) Shift the elements of a vector in circular mode by k elements to the right (for positive k) or to the left (for negative k), such that the first element is at the (k+1)th position of the new vector and the last k elements are appended to the beginning. VecShift does not attach the superfluous elements on one side to the other, but fills the resulting gaps with NAs. VecRot(x, k = 1) VecShift(x, k = 1) x a vector of any type. k the number of elements to shift. The function will repeat the vector two times and select the appropriate number of elements from the required shift on. the shifted vector in the same dimensions as x. Andri Signorell <andri@signorell.net> See Also [, rep, lag VecRot(c(1,1,0,0,3,4,8), 3) VecRot(letters[1:10], 3) VecRot(letters[1:10], -3) VecShift(letters[1:10], 3) VecShift(letters[1:10], -3) version 0.99.55
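The Details section notes that the rotation is implemented by repeating the vector twice and selecting the appropriate window. The same idea can be sketched in Python for illustration (this is not the package's actual R source; the function name is my own):

```python
def vec_rot(x, k=1):
    # Repeat the sequence twice and take a length-n window, so the last
    # k elements wrap around to the front (a right rotation, like VecRot).
    n = len(x)
    k = k % n  # a negative k rotates left, matching VecRot's semantics
    doubled = list(x) * 2
    return doubled[n - k : 2 * n - k]

print(vec_rot([1, 1, 0, 0, 3, 4, 8], 3))  # [3, 4, 8, 1, 1, 0, 0]
```

This reproduces the first example above: the last three elements move to the front, and `vec_rot(x, -3)` shifts three positions to the left instead.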
BoTorch · Bayesian Optimization in PyTorch
Source code for botorch.acquisition.objective

#!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

r"""Objective Modules to be used with acquisition functions."""

from __future__ import annotations

import inspect
import warnings
from abc import ABC, abstractmethod
from typing import Callable, List, Optional

import torch
from botorch.posteriors.gpytorch import GPyTorchPosterior, scalarize_posterior
from botorch.utils import apply_constraints
from torch import Tensor
from torch.nn import Module


class AcquisitionObjective(Module, ABC):
    r"""Abstract base class for objectives."""


class ScalarizedObjective(AcquisitionObjective):
    r"""Affine objective to be used with analytic acquisition functions.

    For a Gaussian posterior at a single point (`q=1`) with mean `mu` and
    covariance matrix `Sigma`, this yields a single-output posterior with mean
    `weights^T * mu` and variance `weights^T Sigma weights`.

    Example for a model with two outcomes:

        >>> weights = torch.tensor([0.5, 0.25])
        >>> objective = ScalarizedObjective(weights)
        >>> EI = ExpectedImprovement(model, best_f=0.1, objective=objective)
    """

    def __init__(self, weights: Tensor, offset: float = 0.0) -> None:
        r"""Affine objective.

        Args:
            weights: A one-dimensional tensor with `m` elements representing the
                linear weights on the outputs.
            offset: An offset to be added to the posterior mean.
        """
        if weights.dim() != 1:
            raise ValueError("weights must be a one-dimensional tensor.")
        super().__init__()
        self.register_buffer("weights", weights)
        self.offset = offset

    def evaluate(self, Y: Tensor) -> Tensor:
        r"""Evaluate the objective on a set of outcomes.

        Args:
            Y: A `batch_shape x q x m`-dim tensor of outcomes.

        Returns:
            A `batch_shape x q`-dim tensor of objective values.
        """
        return self.offset + Y @ self.weights

    def forward(self, posterior: GPyTorchPosterior) -> GPyTorchPosterior:
        r"""Compute the posterior of the affine transformation.

        Args:
            posterior: A posterior with the same number of outputs as the
                elements in `self.weights`.

        Returns:
            A single-output posterior.
        """
        return scalarize_posterior(
            posterior=posterior, weights=self.weights, offset=self.offset
        )


class MCAcquisitionObjective(AcquisitionObjective):
    r"""Abstract base class for MC-based objectives."""

    @abstractmethod
    def forward(self, samples: Tensor, X: Optional[Tensor] = None) -> Tensor:
        r"""Evaluate the objective on the samples.

        Args:
            samples: A `sample_shape x batch_shape x q x m`-dim tensor of
                samples from a model posterior.
            X: A `batch_shape x q x d`-dim tensor of inputs. Relevant only if
                the objective depends on the inputs explicitly.

        Returns:
            A `sample_shape x batch_shape x q`-dim Tensor of objective values
            (assuming maximization).

        This method is usually not called directly, but via the objective's
        `__call__` method:

            >>> samples = sampler(posterior)
            >>> outcome = mc_obj(samples)
        """
        pass  # pragma: no cover


class IdentityMCObjective(MCAcquisitionObjective):
    r"""Trivial objective extracting the last dimension.

        >>> identity_objective = IdentityMCObjective()
        >>> samples = sampler(posterior)
        >>> objective = identity_objective(samples)
    """

    def forward(self, samples: Tensor, X: Optional[Tensor] = None) -> Tensor:
        return samples.squeeze(-1)


class LinearMCObjective(MCAcquisitionObjective):
    r"""Linear objective constructed from a weight tensor.

    For input `samples` and `mc_obj = LinearMCObjective(weights)`, this
    produces `mc_obj(samples) = sum_{i} weights[i] * samples[..., i]`.

    Example for a model with two outcomes:

        >>> weights = torch.tensor([0.75, 0.25])
        >>> linear_objective = LinearMCObjective(weights)
        >>> samples = sampler(posterior)
        >>> objective = linear_objective(samples)
    """

    def __init__(self, weights: Tensor) -> None:
        r"""Linear objective.

        Args:
            weights: A one-dimensional tensor with `m` elements representing the
                linear weights on the outputs.
        """
        if weights.dim() != 1:
            raise ValueError("weights must be a one-dimensional tensor.")
        super().__init__()
        self.register_buffer("weights", weights)

    def forward(self, samples: Tensor, X: Optional[Tensor] = None) -> Tensor:
        r"""Evaluate the linear objective on the samples.

        Args:
            samples: A `sample_shape x batch_shape x q x m`-dim tensor of
                samples from a model posterior.
            X: A `batch_shape x q x d`-dim tensor of inputs. Relevant only if
                the objective depends on the inputs explicitly.

        Returns:
            A `sample_shape x batch_shape x q`-dim tensor of objective values.
        """
        if samples.shape[-1] != self.weights.shape[-1]:
            raise RuntimeError("Output shape of samples not equal to that of weights")
        return torch.einsum("...m, m", [samples, self.weights])


class GenericMCObjective(MCAcquisitionObjective):
    r"""Objective generated from a generic callable.

    Allows to construct arbitrary MC-objective functions from a generic
    callable. In order to be able to use gradient-based acquisition function
    optimization, it should be possible to backpropagate through the callable.

        >>> generic_objective = GenericMCObjective(
        ...     lambda Y, X: torch.sqrt(Y).sum(dim=-1),
        ... )
        >>> samples = sampler(posterior)
        >>> objective = generic_objective(samples)
    """

    def __init__(self, objective: Callable[[Tensor, Optional[Tensor]], Tensor]) -> None:
        r"""Objective generated from a generic callable.

        Args:
            objective: A callable `f(samples, X)` mapping a
                `sample_shape x batch-shape x q x m`-dim Tensor `samples` and
                an optional `batch-shape x q x d`-dim Tensor `X` to a
                `sample_shape x batch-shape x q`-dim Tensor of objective values.
        """
        super().__init__()
        if len(inspect.signature(objective).parameters) == 1:
            warnings.warn(
                "The `objective` callable of `GenericMCObjective` is expected to "
                "take two arguments. Passing a callable that expects a single "
                "argument will result in an error in future versions.",
                DeprecationWarning,
            )

            def obj(samples: Tensor, X: Optional[Tensor] = None) -> Tensor:
                return objective(samples)

            self.objective = obj
        else:
            self.objective = objective

    def forward(self, samples: Tensor, X: Optional[Tensor] = None) -> Tensor:
        r"""Evaluate the objective on the samples.

        Args:
            samples: A `sample_shape x batch_shape x q x m`-dim tensor of
                samples from a model posterior.
            X: A `batch_shape x q x d`-dim tensor of inputs. Relevant only if
                the objective depends on the inputs explicitly.

        Returns:
            A `sample_shape x batch_shape x q`-dim Tensor of objective values
            (assuming maximization).
        """
        return self.objective(samples, X=X)


class ConstrainedMCObjective(GenericMCObjective):
    r"""Feasibility-weighted objective.

    An objective allowing to maximize some scalable objective on the model
    outputs subject to a number of constraints. Constraint feasibility is
    approximated by a sigmoid function:

        mc_acq(X) = (
            (objective(X) + infeasible_cost)
            * \prod_i (1 - sigmoid(constraint_i(X)))
        ) - infeasible_cost

    See `botorch.utils.objective.apply_constraints` for details on the
    constraint handling.

        >>> bound = 0.0
        >>> objective = lambda Y: Y[..., 0]
        >>> # apply non-negativity constraint on f(x)[1]
        >>> constraint = lambda Y: bound - Y[..., 1]
        >>> constrained_objective = ConstrainedMCObjective(objective, [constraint])
        >>> samples = sampler(posterior)
        >>> objective = constrained_objective(samples)
    """

    def __init__(
        self,
        objective: Callable[[Tensor, Optional[Tensor]], Tensor],
        constraints: List[Callable[[Tensor], Tensor]],
        infeasible_cost: float = 0.0,
        eta: float = 1e-3,
    ) -> None:
        r"""Feasibility-weighted objective.

        Args:
            objective: A callable `f(samples, X)` mapping a
                `sample_shape x batch-shape x q x m`-dim Tensor `samples` and
                an optional `batch-shape x q x d`-dim Tensor `X` to a
                `sample_shape x batch-shape x q`-dim Tensor of objective values.
            constraints: A list of callables, each mapping a Tensor of dimension
                `sample_shape x batch-shape x q x m` to a Tensor of dimension
                `sample_shape x batch-shape x q`, where negative values imply
                feasibility.
            infeasible_cost: The cost of a design if all associated samples are
                infeasible.
            eta: The temperature parameter of the sigmoid function approximating
                the constraint.
        """
        super().__init__(objective=objective)
        self.constraints = constraints
        self.eta = eta
        self.register_buffer("infeasible_cost", torch.as_tensor(infeasible_cost))

    def forward(self, samples: Tensor, X: Optional[Tensor] = None) -> Tensor:
        r"""Evaluate the feasibility-weighted objective on the samples.

        Args:
            samples: A `sample_shape x batch_shape x q x m`-dim tensor of
                samples from a model posterior.
            X: A `batch_shape x q x d`-dim tensor of inputs. Relevant only if
                the objective depends on the inputs explicitly.

        Returns:
            A `sample_shape x batch_shape x q`-dim Tensor of objective values
            weighted by feasibility (assuming maximization).
        """
        obj = super().forward(samples=samples)
        return apply_constraints(
            obj=obj,
            constraints=self.constraints,
            samples=samples,
            infeasible_cost=self.infeasible_cost,
            eta=self.eta,
        )
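The feasibility weighting used by ConstrainedMCObjective can be illustrated numerically without torch. The sketch below is plain Python with names of my own choosing; it applies the formula from the class docstring to scalar samples, adding the eta temperature described for the eta parameter:

```python
import math

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def feasibility_weighted(obj, constraint_vals, infeasible_cost=0.0, eta=1e-3):
    # mc_acq = (obj + M) * prod_i (1 - sigmoid(c_i / eta)) - M,
    # where c_i <= 0 means feasible and eta controls sharpness.
    weight = 1.0
    for c in constraint_vals:
        weight *= 1.0 - sigmoid(c / eta)
    return (obj + infeasible_cost) * weight - infeasible_cost

# A clearly feasible sample keeps its objective value; a clearly
# infeasible one is pushed down to -infeasible_cost.
print(feasibility_weighted(2.0, [-1.0], infeasible_cost=5.0))  # ~2.0
print(feasibility_weighted(2.0, [1.0], infeasible_cost=5.0))   # ~-5.0
```

With a small eta the sigmoid approaches a hard 0/1 indicator, so gradient-based optimization sees a smooth surrogate of the constrained objective.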
Peirce Quincuncial Peirce Quincuncial The Peirce Quincuncial projection is a conformal map projection that transforms the circle of the northern hemisphere into a square, and the southern hemisphere split into four triangles arranged around the square to form a quincunx. The resulting projection is a regular diamond shape or can be rotated to form a square. The resulting tile can be infinitely tessellated. Though this implementation defaults to a central meridian of 0, it is more common to use a central meridian of around 25 to optimise the distortions. Peirce's original published map from 1879 used a central meridian of approx -70. The diamond and square versions can be produced using the +shape=diamond and +shape=square options respectively. This implementation includes an alternative lateral projection which places hemispheres side-by-side (+shape=horizontal or +shape=vertical). Combined with a general oblique transformation, this can be used to produced a Grieger Triptychial projection (see example below). Classification Miscellaneous Available forms Forward spherical projection Defined area Global Alias peirce_q Domain 2D Input type Geodetic coordinates Output type Projected coordinates All parameters are optional.
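Putting the options above together, a projection string for the square variant with the commonly used central meridian of 25 might look like the following (a sketch; `+lon_0` and `+R` are standard PROJ parameters for the central meridian and sphere radius):

```
+proj=peirce_q +shape=square +lon_0=25 +R=6371000
```

The other variants are selected the same way, e.g. `+shape=diamond` or `+shape=horizontal`.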
Machine Learning: From Theory to Algorithms

A beginner's guide to understanding the basics of machine learning theory and algorithms.

1. Introduction

In this course, we will start with a broad overview of the field of machine learning and data mining, including definitions and important applications. We then narrow our focus to supervised learning, arguably the most important subfield of machine learning. After introducing the problem setup, we will discuss various popular optimization formulations for supervised learning problems, including linear models, support vector machines (SVMs), boosting, and neural networks (NNs). Next, we will study a number of important techniques for making learning algorithms more efficient and effective, such as regularization for preventing overfitting, cross-validation for selection of model hyperparameters, dimensionality reduction for dealing with high-dimensional data, and algorithms for scaling up learning to very large datasets. Finally, we will conclude with a brief discussion of some important recent developments in the field, such as online learning and feature engineering. Throughout the course, we will also touch upon basic concepts from probability theory and linear algebra that are relevant to machine learning.

2. What is Machine Learning?

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. In other words, machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being given explicit instructions to do so.

3. Types of Machine Learning

Machine learning can be classified in a number of ways, with the most popular distinction being between supervised and unsupervised learning, or between inductive and deductive learning.
Supervised learning is where you have input variables (x) and an output variable (y), and you use an algorithm to learn the mapping function from the input to the output. Y is usually a category, like "spam" or "not spam", or a real number, like "housing price".

Unsupervised learning is where you only have input data (x) and no corresponding output variables. The aim is to find some structure in the data, like groups of similar items.

In inductive learning, we learn from a set of training data and then make predictions on unseen data. This is the kind of learning that you typically think of as "machine learning". Deductive learning takes a different approach: we start with a set of rules which describe the relationships between different items of interest, and then use those rules to make predictions on new data.

4. Supervised Learning

In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output. The goal is to learn a function h : X → Y so that given an unseen observation x, h(x) can confidently predict the corresponding output y. Supervised learning problems are categorized by the type of output variable y. If y can take on only a finite set of values, then the problem is classification; otherwise it is regression.

5. Unsupervised Learning

Unsupervised learning is a type of machine learning algorithm that is used to find patterns in data. It is called unsupervised because the data is not labeled and the algorithm does not have a target to predict. There are two types of unsupervised learning algorithms: clustering and dimensionality reduction. Clustering algorithms group data points together based on similarity. Dimensionality reduction algorithms reduce the number of variables in the data by finding relationships between them.
Unsupervised learning algorithms are used for a variety of tasks, including recommender systems, anomaly detection, and data visualization.

6. Reinforcement Learning

Reinforcement learning is a type of machine learning in which an agent learns to take actions in an environment so as to maximize some notion of cumulative reward. The agent interacts with the environment over a number of discrete time steps, and at each time step it receives a state observation s_t and reward r_t. Based on these two things, the agent chooses an action a_t.

Reinforcement learning differs from supervised and unsupervised learning: the agent is never told the correct action directly, but it does receive evaluative feedback in the form of rewards, so it sits somewhere between the two. The distinguishing features of reinforcement learning are: 1) the focus on making decisions (i.e., the use of agents), 2) the use of delayed rewards, 3) the interaction with an environment over time (rather than batch-style learning), and 4) the focus on cumulative rewards (i.e., maximizing long-term rather than short-term performance).

7. Semi-Supervised Learning

Semi-supervised learning is a set of techniques used to learn from both labeled and unlabeled data. It is often desirable to have access to both types of data because acquiring labels can be expensive, while unlabeled data is often plentiful. For example, consider the task of classifying email as spam or non-spam. It is expensive to label a large set of emails by hand, but it is easy to collect a large set of them automatically. Semi-supervised learning algorithms aim to make use of both labeled and unlabeled data to improve the performance of supervised learning algorithms. They do this by utilizing the structure in the unlabeled data to help learn the correct labels for new data points. There are many different semi-supervised learning algorithms, but they can be broadly divided into two categories: transductive and inductive.
Transductive semi-supervised learning algorithms make predictions for each new data point based only on that point's similarity to other points in the dataset (that is, they are instance-based methods). Inductive semi-supervised learning algorithms, on the other hand, build a model using both labeled and unlabeled data that can then be used to make predictions for new data points (that is, they are model-based methods).

Images can often be categorized by their content (e.g., dog, cat, airplane), but determining image content manually is a costly process. Semi-supervised learning can be used to automatically classify images based on their content. This can be done by first labeling a small number of images by hand and then using those labels to train an inductive semi-supervised learning algorithm. The algorithm can then be used to label new images automatically.

8. Transfer Learning

In machine learning, transfer learning is a technique where knowledge learned in one task is applied to another similar task. For example, if a model trained on images of cats is used to identify dogs in new images, that would be an example of transfer learning. The goal of transfer learning is to improve the performance of a model on a new task by leveraging the knowledge learned on a different, but similar, task. Transfer learning can be used in a few different ways:

- Pre-trained models: A pre-trained model is a model that has been trained on a large dataset for a specific task. This model can then be used as the basis for a new model that is trained on a smaller dataset for a new task.
- Transferring weights: Another way to use transfer learning is to transfer the weights from one model to another. This can be done by initializing the weights of a new model with the weights of an existing model. The new model can then be trained on data for the new task.
- Fine-tuning: Fine-tuning is another way to use transfer learning.
With fine-tuning, a pre-trained model is first modified to fit the data for the new task. The model is then retrained on both the old dataset and the new dataset. This allows the model to learn both tasks simultaneously and improve performance on both.

9. Anomaly Detection

Anomaly detection is the task of identifying data points that don't conform to expected (normal) behaviour. It's often used in fraud detection, intrusion detection, fault detection, and system health monitoring. In this chapter we'll cover the basics of one-class Support Vector Machines (SVMs), a powerful tool for anomaly detection. We'll also discuss how to use Gaussian Mixture Models (GMMs) for more sophisticated anomaly detection.

10. Time Series Forecasting

Time series forecasting is the part of machine learning that deals with making predictions about the future. Time series data is data that is collected over time, such as daily stock prices or monthly weather temperatures. The goal of time series forecasting is to make predictions about the future based on past data. Time series forecasting is a difficult problem because there are many factors that can affect the future, such as seasonality, trends, and external events. To make accurate predictions, time series forecasters need to account for all of these factors.

There are many different algorithms that can be used for time series forecasting. The most popular is the autoregressive moving average (ARMA) model, a type of linear model that is very effective at predicting time series data. Other popular approaches include exponential smoothing (ETS) and artificial neural networks (ANNs). ETS models are a family of (generally nonlinear) models that are very effective at predicting time series data. ANN models are machine learning models that can approximate a very wide class of functions, making them effective at predicting time series data as well.
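As a minimal illustration of the ETS family mentioned in the final section, simple exponential smoothing can be written in a few lines. This is an illustrative sketch in plain Python (the function name is my own, not from any library):

```python
def simple_exp_smoothing(series, alpha=0.3):
    # Level update: level_t = alpha * y_t + (1 - alpha) * level_{t-1}.
    # The final level serves as the one-step-ahead forecast.
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

print(simple_exp_smoothing([10.0, 12.0, 11.0, 13.0, 12.0]))
```

With alpha close to 1 the forecast tracks the most recent observation; with alpha close to 0 it averages over a long history, which is one simple way of smoothing out noise around a slowly varying level.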
{"url":"https://reason.town/machine-learning-from-theory-to-algorithms/","timestamp":"2024-11-11T18:08:01Z","content_type":"text/html","content_length":"100423","record_id":"<urn:uuid:24562710-10c5-41a9-b65c-194d9456465b>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00832.warc.gz"}
Use the Student's t distribution to find t_c

a. Use the Student's t distribution to find t_c for a 0.99 confidence level when the sample size is 8. (Round your answer to three decimal places.)

b. Use the Student's t distribution to find t_c for a 0.90 confidence level when the sample size is 8. (Round your answer to three decimal places.)

c. Sketch the area under the standard normal curve over the indicated interval and find the specified area. (Round your answer to four decimal places.) The area to the right of z = 0.25 is ______.

d. Sketch the area under the standard normal curve over the indicated interval and find the specified area. (Round your answer to four decimal places.) The area between z = −1.38 and z = 1.98 is ______.
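These blanks can be checked numerically. The sketch below is one way to do it with only the Python standard library: the normal areas come from math.erf, and the t critical values come from integrating the t density with Simpson's rule and inverting by bisection. For a two-sided confidence level c with sample size n, the critical value is the quantile at (1 + c)/2 with n − 1 degrees of freedom:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def t_cdf(x, df, steps=4000):
    """Student's t CDF, integrating the density from 0 to |x| with Simpson's rule."""
    coef = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda t: coef * (1 + t * t / df) ** (-(df + 1) / 2)
    b = abs(x)
    h = b / steps
    s = pdf(0.0) + pdf(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * pdf(i * h)
    half = s * h / 3
    return 0.5 + half if x >= 0 else 0.5 - half

def t_ppf(p, df):
    """Invert the t CDF by bisection (valid for p in (0.5, 1))."""
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if t_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# a. and b.: two-sided critical values for n = 8, i.e. df = 7
print(round(t_ppf(0.995, 7), 3))                    # → 3.499
print(round(t_ppf(0.950, 7), 3))                    # → 1.895
# c. and d.: standard normal areas
print(round(1 - norm_cdf(0.25), 4))                 # → 0.4013
print(round(norm_cdf(1.98) - norm_cdf(-1.38), 4))   # → 0.8924
```

A statistics package (e.g. scipy.stats.t.ppf) gives the same values directly; the hand-rolled integration is just to keep the sketch dependency-free.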
{"url":"https://justaaa.com/statistics-and-probability/12364-use-the-students-t-distribution-to-find-tc-for","timestamp":"2024-11-09T06:57:14Z","content_type":"text/html","content_length":"41275","record_id":"<urn:uuid:d806f77a-7e57-4e67-8935-d0e1f1bc9b02>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00410.warc.gz"}
For engineering analysis it is always preferable to develop explicit equations that include symbols, but this is not always practical. In cases where the equations are too costly to develop, numerical methods can be used. As their name suggests, numerical methods use numerical calculations (i.e., numbers not symbols) to develop a unique solution to a differential equation. The solution is often in the form of a set of numbers, or a graph. This can then be used to analyze a particular design case. The solution is often returned quickly so that trial and error design techniques may be used. But, without a symbolic equation the system can be harder to understand and manipulate. This chapter focuses on techniques that can be used for numerically integrating systems of differential equations.
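As a concrete illustration of the numerical approach described above (my sketch, not from the original text): Euler's method integrates dx/dt = f(t, x) by repeatedly stepping along the local slope. For dx/dt = -x with x(0) = 1, the exact solution is e^(-t), so the numerical error is directly visible:

```python
import math

def euler(f, x0, t0, t_end, h):
    """Integrate dx/dt = f(t, x) with fixed step h, returning (times, values)."""
    ts, xs = [t0], [x0]
    while ts[-1] < t_end - 1e-12:
        t, x = ts[-1], xs[-1]
        xs.append(x + h * f(t, x))   # one Euler step along the slope
        ts.append(t + h)
    return ts, xs

# dx/dt = -x with x(0) = 1; the exact solution is exp(-t)
ts, xs = euler(lambda t, x: -x, x0=1.0, t0=0.0, t_end=1.0, h=0.001)
print(xs[-1], math.exp(-1.0))   # numerical vs exact
```

Shrinking h brings the numerical answer closer to the exact one at the cost of more steps, which is exactly the accuracy-versus-cost trade-off the paragraph describes.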
{"url":"https://engineeronadisk.com/V2/book_modelling/engineeronadisk-28.html","timestamp":"2024-11-02T14:45:51Z","content_type":"text/html","content_length":"2102","record_id":"<urn:uuid:61a3f58a-679f-455e-9660-3e45a27aefae>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00447.warc.gz"}
Radical expressions review sheet

Related topics: exponent solver Fun Algebra Worksheets how to convert 500 to decimal divide and simplify exponents factoring math equations formulas 3rd degree polynomial equations algebra algebra 1 how to solve equations by graphing factoring polynomials of the form x^2+bx+c maths printable worksheets ks3 trigonometry identity strategies algebra 2 mcdougal littell answers answers to addison-wesley conceptual physics mcdougal littell algebra 1 on line answers systems of nonlinear equations simple grade 9

UnrealBnoyir (Registered: 10.09.2003)
Posted: Wednesday 31st of Mar 09:56

I have trouble with radical expressions review sheet. I tried a lot to get someone who can assist me with this. I also looked out for a teacher to tutor me and work out my problems on long division, radicals and graphing equations. Though I found a few who could maybe explain my problem, I realized that I cannot find the money for them. I do not have much time too. My quiz is coming up in a little while. I am distressed. Can anyone assist me with this situation? I would really welcome any assistance or any advice.

Jahm Xjardx (Registered: 07.08.2005, From: Odense, Denmark)
Posted: Friday 02nd of Apr 08:34

I know how frustrating it can be if you are struggling with radical expressions review sheet. It’s a bit hard to help you out without more information of your requirements. But if you don’t want to pay for a tutor, then why not just use some piece of software and see what you think. There are numerous programs out there, but one you should think about would be Algebrator. It is pretty handy plus it is worth the money.

fveingal (Registered: 11.07.2001, From: Earth)
Posted: Friday 02nd of Apr 21:47

My parents could not afford my college fees, so I had to work in the evening, after my classes. Solving problems at the end of the day seemed to be difficult for me at those times. A colleague introduced Algebrator to me and since then I never had trouble solving my equations.

onaun (Registered: 29.09.2003, From: Cornwall, UK)
Posted: Saturday 03rd of Apr 14:07

I definitely will try Algebrator! I didn't know that there are software like that, but since it's very simple to use I can't wait to check it out! Does anyone know where I can find this software? I want to get it right now!

Dolknankey (Registered: 24.10.2003, From: Where the trout streams flow and the air is nice)
Posted: Sunday 04th of Apr 10:01

subtracting fractions, simplifying fractions and difference of cubes were a nightmare for me until I found Algebrator, which is really the best math program that I have ever come across. I have used it frequently through many algebra classes – Algebra 2, Remedial Algebra and Algebra 2. Simply typing in the algebra problem and clicking on Solve, Algebrator generates step-by-step solution to the problem, and my algebra homework would be ready. I really recommend the program.

fveingal (Registered: 11.07.2001, From: Earth)
Posted: Sunday 04th of Apr 18:29

It is really good if you think so. You can find the software here https://softmath.com/reviews-of-algebra-help.html.
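Side note for readers who want the underlying math rather than software: simplifying a radical like sqrt(72) just means factoring the largest perfect square out of the radicand. A small illustrative sketch (not from the thread):

```python
def simplify_sqrt(n):
    """Write sqrt(n) as a*sqrt(b) with b square-free.
    Returns the pair (a, b), e.g. sqrt(72) = 6*sqrt(2) -> (6, 2)."""
    if n < 0:
        raise ValueError("expected a non-negative radicand")
    a, b = 1, n
    f = 2
    while f * f <= b:
        while b % (f * f) == 0:   # pull each squared factor outside the root
            b //= f * f
            a *= f
        f += 1
    return a, b

print(simplify_sqrt(72))   # → (6, 2), i.e. sqrt(72) = 6*sqrt(2)
print(simplify_sqrt(50))   # → (5, 2), i.e. sqrt(50) = 5*sqrt(2)
```

When b comes back as 1 the radicand was a perfect square; when a comes back as 1 the radical was already in simplest form.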
{"url":"https://softmath.com/algebra-software/long-division/radical-expressions-review.html","timestamp":"2024-11-09T18:58:30Z","content_type":"text/html","content_length":"43624","record_id":"<urn:uuid:ac2f66a0-f9a7-46d6-a256-4c0300192e30>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00510.warc.gz"}
Setting up the problem

In Julia we define the problem as

# Define the problem
H = [1.0 0; 0 1];
f = [1.0; 1];
A = [1.0 2; 1 -1];
bupper = [1.0; 2; 3; 4];
blower = [-1.0; -2; -3; -4];
sense = zeros(Cint, 4);

sense determines the type of the constraints (more details are given here).

Note: When \(b_u\) and \(b_l\) have more elements than the number of rows in \(A\), the first elements in \(b_u\) and \(b_l\) are interpreted as simple bounds.

Calling DAQP

There are two ways of calling DAQP in Julia. The first way is through a quadprog call:

x,fval,exitflag,info = DAQP.quadprog(H,f,A,bupper,blower,sense);

This will solve the problem with default settings. A more flexible interface is also offered, where we first set up the problem and then solve it:

d = DAQP.Model();
x,fval,exitflag,info = DAQP.solve(d);

This allows us to reuse internal matrix factorizations if we want to solve a perturbed problem.

Changing settings

If we, for example, want to change the maximum number of iterations to 2000, we can do so by

DAQP.settings(d, Dict(:iter_limit => 2000))

A full list of available settings is provided here.

Using DAQP in JuMP

DAQP can also be interfaced to JuMP. The following code sets up and solves the problem considered above:

using DAQP
using JuMP

## Setup problem
model = Model(DAQP.Optimizer)
@variable(model, -1 <= x1 <= 1)
@variable(model, -2 <= x2 <= 2)
@objective(model, Min, 0.5*(x1^2 + x2^2) + x1 + x2)
@constraint(model, c1, -3 <= x1 + 2*x2 <= 3)
@constraint(model, c2, -4 <= x1 - x2 <= 4)

## Solve problem
{"url":"https://darnstrom.github.io/daqp/start/julia","timestamp":"2024-11-12T00:59:17Z","content_type":"text/html","content_length":"15999","record_id":"<urn:uuid:1efe0316-9072-4af4-ac90-857f5ded3c25>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00047.warc.gz"}
The Old Man

The Froth Line – Walking the Planck

There comes a point where clear, logical and pure scientific exploration for fundamental truth arrives at a Venn intersection with metaphysics and produces insight that simply surpasses all mathematical understanding.

The law of unintended consequences

In the 1970s, when computers were getting cheaper and more powerful (compared to my Curta), along came Benoit Mandelbrot, who said that things typically considered to be a mess or chaotic, like clouds or shorelines, actually had a degree of order. My interest was purely selfish at the time, and that was to figure out how to develop a stock trading application centered on his and other concepts around Fractals and Chaos to make money. While I was writing the trading application, the meme I used for development was to look at the boundary of the Fractal set as a Froth line – the chaotic place where land and sand mix, where all bets are off for any given element in transition. Questions like “will a chip in the froth end up on shore, or back in the sea after ‘n’ cycles”, “will it hang around in the froth line forever or not”, and “is there a ‘why’ that we can quantify and count on”. You get the idea.

So here we are, forty years later, with more time and less self; the graphics and logic that Benoit used to create and display his Fractal Mandelbrot set are old friends, but no longer attached to money. We are now free to explore without agenda or purpose – a holiday from education, the science is settled, and must haves. Unintended consequences. Who knew?

The Fractal Froth Line

Metaphorically, the froth line separates the sand from the sea as the interface where the waves lap or crash along the shoreline intersection. Up on the beach, there is no doubt that it’s sand. Out in the water, it’s clear that’s the sea. The Lawyer in you will hate this next part related to scale: At the intersection, where does the sea end and the sand start?
To test your definition, how long is the shoreline? You may well answer that it depends on your legal definition of scale and the elemental unit of dimension. For the Fractal Froth line, you’ll have a problem. Scale is meaningless. Every piece of shoreline is infinite at every scale. No matter how far you zoom in, it’s still shoreline in full detail, and its DNA still looks familiar and recognizable. You have no idea if you are at Mile 0 or Mile 10 million, and the shoreline is just as complex to infinity. An ant and an elephant walking the same shoreline on their own relative scale will tell similar stories to their offspring about the scenery and mapping of the bays and inlets along the way.

Walking the Planck

No, that’s not a typo. Read on. To recap, driven by recursion, fractals are images of dynamic systems – the pictures of Chaos. There are things about the universe we inherently know that seem a bit strange. It is essentially chaotic, full of surprises, of the nonlinear and the unpredictable. Without the Chaos and (strange and otherwise) attractors, it can’t evolve. At the same time, Fractal patterns are extremely familiar, since nature is full of fractals. For instance: coastlines, mountains, rivers, clouds, trees, plants, Climate, and so on. Geometrically and mathematically they co-exist in between our familiar dimensions. The Fractal Recursion loop in space logically sets up the concept of time. You need to wait for the output from the underlying algorithm to feed into the next iteration at the lowest quantum level in order to build the next step. So, is there a limit in our universe as to how fast any Fractal system can propagate to the next iteration in time? To think on that, take a moment to look at Planck’s constant on this side of the quantum limit of our understanding at the frontier of the known universe, where quantum uncertainty begins. In physics, the Planck time is the unit of time in the system of natural units known as Planck units.
It is the time required for light to travel, in a vacuum, a distance of 1 Planck length. Within that Planck timeframe and distance, we as observers can’t be sure what’s happened in the gap, or what the Master Magician is pulling off behind the scene. Werner Heisenberg figured that out in 1927 and published the Heisenberg uncertainty principle. Thus, our Fractal Natural Universe is essentially in a quantum stop-frame space-time movie, one Planck frame behind the unfolding now which gets pre-set in the Planck-width bow wave gap running in front of the universe, engaging and collapsing the endless quantum possibilities that unfold as the realized next recursive block of our particular fractal universe. If you think this is nuts (which well you might), know that the current search for the laws of physics, valid at the Planck length, is a part of the search for The Theory of Everything. It seems possible that the Planck gap has enough room for it all, recursively setting the hard limits of our physical space-time universe to this side of the gap, one step behind the now. So close logically, yet eternally out of touch to the mind’s eye.

The Science is never settled -the old man

There is a crack, a crack in everything / That’s how the light gets in – Leonard (Planck) Cohen 🙂

A Word on Enlightenment

Anyone who tells you they are enlightened is not even close. The first part of the problem is that to attempt to do so requires the DNA of language; – A Subject acting on a separate Object linked with a Verb. Language exists in, and can only be understood in, the separation it defines and reinforces. In that sense, Enlightened as a described idea is not of the moment, nor can it be. Enlightenment (remember that is just a word pointer trying to describe <“”>), is outside language and description, and can never be translated as a concept or place in our physical separation.
<“”> is of the infinite undifferentiated whole, unreachable to the I/Me/Self longing to go there as a place of inner peace. If you think you have been there, you haven’t. There is nothing to remember or bring back from the undifferentiated whole, nor could it ever make sense to anything remotely connected to the separate You/I/Me stuck in our space-time machinery. <“”> will never show up if you desire it, meditate for it, or demand it. In fact, any emotion will bolt you down to Space-Time and keep you on this side of the curtain. <“”> comes uninvited when you least expect it. You won’t know it in the moment, but when you drift back to thought, language and the daily grind, you will inwardly carry a profound calm and direct understanding about the connected universe that is always unfolding as it should. Regardless of why and where you have left to travel on this earth you will know this: Nothing is wrong; – nothing ever can be wrong. It’s the singular truth that passes through the filters of our earthly understanding – Live your life to the fullest without fear or regret, with the wind beneath your wings.

That’s why you are

-the old man and one of his young man toys
{"url":"https://theoldman.website/2016/06/","timestamp":"2024-11-03T18:14:38Z","content_type":"text/html","content_length":"41445","record_id":"<urn:uuid:98d757a2-ecd3-4152-807b-dd913fdf0131>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00855.warc.gz"}
Any one good at Math, I mean really good

Man I'm interwebbing and I came across this: e^(iπ) + 1 = 0, Euler's Formula. I have read this and done some googling. It ain't hitting me. Can someone explain this to me? (I am not super great at math but I pick things up pretty well)

Calculus 102?? I really forgot this stuff! Tell me which part of the proof you don't get?

Calculus 102?? I really forgot this stuff! Tell me which part of the proof you don't get?

The entire thing. Honestly I saw it on a Tee Shirt on Think Geek and I was curious. I pulled/am pulling a double at the NOC and I am seriously bored. CCNA later today but honestly if I see another circle with arrows on it I WILL die.

Ok, it's trying to prove that: e^(iπ) + 1 equals zero. I went through the wiki article to remember, and yeah I do remember a little now. It's important because it's one of the relations between "e" and trigonometric functions (sine, cosine, etc.). The mathematical proof you linked us to is using the Taylor series to substitute for "e" to the power x. He then substituted x for "i" times Pi. (i = the imaginary number, which is the square root of -1.) Then he removed the "series" notation and substituted for the product of it. Like when you have the series y = 1 + x (for x = 0...to infinity) you can substitute it by writing: y = (1 + 0) + (1 + 1) + (1 + 2) + ..... so the same idea he used here. Then he factored the numerator: (i times Pi) squared. You need to know all these equations; they're available in any calculus book, with proofs. They're not difficult, you need to work out a lot of problems, and you will be fine. Now the applications, don't ask. Plenty! All these notations are used heavily in electrical engineering. They're used in computing electrical signals, and power machines. They're also heavily used in probabilities. These formulas are the basis for all of electrical engineering.

Honestly I saw it on a Tee Shirt on Think Geek and I was curious. I pulled/am pulling a double at the NOC and I am seriously bored.
CCNA later today but honestly if I see another circle with arrows on it I WILL die.

Ok, you don't need that for CCNA or anything of that sort. You only need this in electrical engineering at college. Chances are you won't even use them on the job, even if you work as an electrical engineer. It's not difficult, if you started at college with calculus then you moved up, it's normal.

Ok, it's trying to prove that: e^(iπ) + 1 equals zero. I went through the wiki article to remember, and yeah I do remember a little now. It's important because it's one of the relations between "e" and trigonometric functions (sine, cosine, etc.). The mathematical proof you linked us to is using the Taylor series to substitute for "e" to the power x. He then substituted x for "i" times Pi. (i = the imaginary number, which is the square root of -1.) Then he removed the "series" notation and substituted for the product of it. Like when you have the series y = 1 + x (for x = 0...to infinity) you can substitute it by writing: y = (1 + 0) + (1 + 1) + (1 + 2) + ..... so the same idea he used here. Then he factored the numerator: (i times Pi) squared. You need to know all these equations; they're available in any calculus book, with proofs. They're not difficult, you need to work out a lot of problems, and you will be fine. Now the applications, don't ask. Plenty! All these notations are used heavily in electrical engineering. They're used in computing electrical signals, and power machines. They're also heavily used in probabilities. These formulas are the basis for all of electrical engineering.

That actually makes a lot of sense. Still a bit over my head, but I have been working for going on 16hrs...

Ok, you don't need that for CCNA or anything of that sort. You only need this in electrical engineering at college. Chances are you won't even use them on the job, even if you work as an electrical engineer.
It's not difficult, if you started at college with calculus then you moved up, it's normal.

I have to do a trig class, then a Stats class, and then technically I will have all the math I need for my AAS. But since I want to transfer to a 4 year as soon as I am done, I will try to complete Calc I-III and do Diff Eqs at the 4 year. My dad has his masters in Chemistry and a minor in math so he will be my tutor for that.

Honestly Calc I-IV were the reason why I was not going to do a CS degree, because I really don't care for math, and at the 4 year I want to go to, the failure rate for these classes is like 75% and I have a nice GPA.

Honestly Calc I-IV were the reason why I was not going to do a CS degree, because I really don't care for math, and at the 4 year I want to go to, the failure rate for these classes is like 75% and I have a nice GPA.

I wouldn't change my major because of one class or two. While the failure rate is high, there are a lot of people who can get full marks. Anybody can do it, and it doesn't really need super brains. It's just about working out problems in books, period! I recommend CS, you will enjoy it. And for the math classes, just invest your time in working out those problems at the end of each chapter, and you will be just fine.

I wouldn't change my major because of one class or two.

Or 4....

While the failure rate is high, there are a lot of people who can get full marks. Anybody can do it, and it doesn't really need super brains. It's just about working out problems in books, period! I recommend CS, you will enjoy it. And for the math classes, just invest your time in working out those problems at the end of each chapter, and you will be just fine.

Hopefully. Since it is a BSCS w/ a concentration in Business, I think it blends my interests perfectly. Plus BSCSB sounds really bad ass to say.

Or 4....

Hopefully. Since it is a BSCS w/ a concentration in Business, I think it blends my interests perfectly. Plus BSCSB sounds really bad ass to say.
lool @ BSCSB sounds bad ass

It won't matter a lot, few classes difference. Personally I'd choose computer engineering or science. Good luck

lool @ BSCSB sounds bad ass

It won't matter a lot, few classes difference. Personally I'd choose computer engineering or science. Good luck

Thanks. I passed the CCNA, and after I do S+/L+ I am going right for SCSA/SCNA, so watch out.

The PDF of the proof you linked to is basically a neat calculus trick -- in the proof they don't use the actual number zero, but both sides of the equation approach zero at i = infinite. The PDF says... "Assume that Euler's Formula is true, then assume the Taylor Series is true (which it is) and show that they are equal in the limit i -> infinite." Yes, the equation has actual use in trig.

dynamik:

Plus BSCSB sounds really bad ass to say.

Palindromes are cute

The proof uses infinite Taylor series expansion to show the equivalence of both sides (e^ix & cos x + i sin x); basically if you rearrange the terms (and provided the series converges) you can see that the two sides are equal. Another way you can see this is through complex analysis, in which you map e^ix and you can see it is just a circular function on the complex plane. And you can map out the same function using cos x + i sin x, which is valid for -infinity to +infinity.
(So when you substitute pi for x, you get the shown result.) Btw, that equation is frequently voted as one of the most beautiful equations by mathematicians.

The proof uses infinite Taylor series expansion to show the equivalence of both sides (e^ix & cos x + i sin x); basically if you rearrange the terms (and provided the series converges) you can see that the two sides are equal. Another way you can see this is through complex analysis, in which you map e^ix and you can see it is just a circular function on the complex plane. And you can map out the same function using cos x + i sin x, which is valid for -infinity to +infinity. (So when you substitute pi for x, you get the shown result.) Btw, that equation is frequently voted as one of the most beautiful equations by mathematicians.

I got with my dad this weekend and he explained it. Although he didn't know why it is so beautiful. He has a Masters in Chemistry and a minor in Math, but he was telling me it has been about 20 years since he has seen this thing.

I got with my dad this weekend and he explained it. Although he didn't know why it is so beautiful. He has a Masters in Chemistry and a minor in Math, but he was telling me it has been about 20 years since he has seen this thing.

Well, it is subjective, but IMHO it is because that equation illustrates some fundamental concepts in mathematics, yet shown in a simple, elegant equation:

- e (transcendental number)
- i (imaginary/complex number)
- pi (transcendental number)
- 0
- 1 (Unity)
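A quick numerical check of the identity discussed above (my own sketch, not from the thread): Python's cmath evaluates e^(iπ) directly, and a truncated Taylor series for e^x reproduces it, which is exactly the substitution the proof makes:

```python
import cmath
import math

# Direct evaluation: exp(i*pi) should be -1, so exp(i*pi) + 1 ≈ 0
direct = cmath.exp(1j * math.pi) + 1
print(abs(direct))   # tiny floating-point residue, essentially zero

# Taylor series: e^x = sum of x^n / n!, evaluated at x = i*pi
def exp_taylor(x, terms=30):
    return sum(x**n / math.factorial(n) for n in range(terms))

approx = exp_taylor(1j * math.pi) + 1
print(abs(approx))   # also essentially zero once enough terms are summed
```

With 30 terms the series remainder is far below floating-point precision, which is why the truncated sum and the direct evaluation agree.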
{"url":"https://community.infosecinstitute.com/discussion/48935/any-one-good-at-math-i-mean-really-good","timestamp":"2024-11-13T09:18:03Z","content_type":"text/html","content_length":"356233","record_id":"<urn:uuid:31e33507-afa6-414c-87ce-bdeb1ce79db4>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00304.warc.gz"}
Monte-Carlo Simulation | Windham Insights

Fortune tellers, palm readers, the Farmer’s Almanac, financial analysts. What do these things have in common? They all attempt to anticipate the future. While some use a crystal ball, the ridges of their subject’s palm, or sunspots, financial analysts use numeric models and mathematical techniques to generate simulations of their client’s financial future.

There are two kinds of forecasting models: deterministic models and stochastic models. Deterministic models assume a fixed relationship between the inputs and the output, while stochastic models depend on inputs that are influenced by chance. Deterministic models can be solved analytically via mathematical formulas, while stochastic models require numerical solutions. To solve stochastic models numerically, one must try various values for the model’s parameters and variables. When these variables come from a sequence of random numbers, the solution is called Monte-Carlo simulation.

Monte-Carlo simulation was originally introduced by the mathematicians John von Neumann and Stanislaw Ulam while working on the Manhattan Project at the Los Alamos National Laboratory. They invented a procedure of substituting a random sequence of numbers into equations to solve problems regarding the physics of nuclear explosions. The term Monte-Carlo was inspired by the gambling casinos in Monte Carlo.

Monte-Carlo simulation is performed with a sequence of numbers that are distributed uniformly, are independent of each other, and are random. Random sequences can be generated using mathematical techniques such as the mid-square method, but most spreadsheet software includes random-number generators. It is important to note that most financial analysis applications generate random variables that are not distributed uniformly. In order to perform a Monte-Carlo simulation, the sequence of uniformly distributed random numbers must be transformed into a sequence of normally distributed random numbers.
Applying Monte-Carlo Simulation to the Stock Market

Imagine we invest $100,000 in an S&P 500 index fund in which the dividends are then reinvested, and we want to predict the value of that investment 10 years from now. As previously mentioned, we start by generating a series of 10 random numbers that are uniformly distributed. However, we assume that the S&P’s returns are normally distributed, and we therefore need to transform our sequence. This transformation can be accomplished easily by applying the Central Limit Theorem.

The following table shows the relative frequencies of two variables, X and Y. It is clear to see that while neither X nor Y are normally distributed on their own, their average begins to approach a normal distribution. Therefore, we can create a sequence of random numbers that is normally distributed by taking the averages of many sequences of random, uniformly distributed numbers. Figure 1 shows the relative frequency of both sequences, and we can see from this graph that the average of the 30 uniformly distributed sequences approached a normal distribution.

The next step in Monte-Carlo simulation is to scale the normally distributed sequence so it has a standard deviation of one and a mean of zero. By dividing each observation by 0.05 (the theoretical standard deviation), and then subtracting 10 (the theoretical mean of this sequence) from each of its observations, we can achieve this transformation.

Next, we must rescale our sequence to reflect our assumptions about the mean return and standard deviation of the S&P 500. For this exercise, let us assume that the average return of the S&P 500 is 12% and its standard deviation is 20%. We rescale our standardized normal distribution by multiplying each observation by our assumption of 20% for the standard deviation, and adding to this value our assumption of 12% for its average return. Now we have a sequence of returns that we can then use to simulate our investment’s performance (column E).
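The averaging-and-standardizing steps above can be sketched in a few lines of Python. The exact theoretical values for an average of 30 uniform [0, 1] draws are mean 1/2 and standard deviation sqrt(1/360) ≈ 0.0527; the article works with rounded figures (0.05 and, after scaling, 10), and dividing by the standard deviation before subtracting the scaled mean, as the article does, is algebraically the same as subtracting the mean first:

```python
import random

random.seed(42)

def clt_normal():
    """Average 30 uniform draws: approximately normal by the CLT."""
    return sum(random.random() for _ in range(30)) / 30

MEAN = 0.5                     # theoretical mean of the average
STD = (1 / 12 / 30) ** 0.5     # theoretical std of the average, ~0.0527

# Generate, standardize (mean 0, std 1), then rescale to the S&P assumptions
raw = [clt_normal() for _ in range(10)]
standardized = [(x - MEAN) / STD for x in raw]
returns = [0.12 + 0.20 * z for z in standardized]   # mean 12%, std 20%
print(returns)
```

In practice one would draw normal variates directly, but this mirrors the article's spreadsheet construction step by step.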
To carry out the Monte-Carlo simulation, we then link (compound) the sequence of random returns and multiply the result by $100,000 (our investment) to derive an estimate of its value in 10 years. For example, the simulated returns in Table 4 yield a value of $333,810. We repeat the entire process, beginning with the generation and averaging of 30 random sequences, and proceed until we have generated a sufficiently large quantity of estimates. The distribution of these estimates is the solution to our problem. The figure below shows the frequency distribution of the terminal value of $100,000 over 10 years, resulting from 100 simulations.

While many problems can be solved using an analytical solution, Monte-Carlo simulation is far simpler for problems that are too complex to be described by equations. However, in order for Monte-Carlo simulation to produce a reliable result, it must be repeated enough times.
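Putting the whole procedure together: the sketch below draws normal returns directly with random.gauss (the CLT averaging above was only a way to manufacture such draws), compounds ("links") them over 10 years, and repeats many times. The return assumptions match the article; everything else is my illustration:

```python
import random

def terminal_value(investment, mean, std, years):
    """Compound one simulated path of annual returns."""
    value = investment
    for _ in range(years):
        value *= 1 + random.gauss(mean, std)
    return value

random.seed(1)
simulations = [terminal_value(100_000, 0.12, 0.20, 10) for _ in range(10_000)]
average = sum(simulations) / len(simulations)
print(f"average terminal value over 10,000 runs: ${average:,.0f}")
```

The analytical expectation is $100,000 × 1.12^10 ≈ $310,585, so the simulated average should land near that; the spread of the simulated distribution is the extra information the method provides.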
{"url":"https://insights.windhamlabs.com/general/monte-carlo-simulation","timestamp":"2024-11-05T15:55:55Z","content_type":"text/html","content_length":"399867","record_id":"<urn:uuid:21a7c8ca-e8cb-4d0d-98ba-c3693ef52a85>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00477.warc.gz"}
The Mossywell Web Site - Pixel Aspect Ratio

• Luminance and chroma sampling
• CRT over scan
• 16 : 9

1. Ratios are always expressed as width to height, unless expressed otherwise.
2. The ratio symbol “:” is sometimes replaced with a simple division, particularly when dealing with equations. Of course, the two are interchangeable.
3. I’ve tried to keep to the principle of “*” meaning multiply, and “x” meaning “by”, as in 640 by 480.
4. When writing equations, I sometimes include brackets for clarification even though they are not strictly necessary.

Before attempting to explain pixel aspect ratio, the origins of analogue TV standards need to be explained. The reason for this is simple: the digital standards were designed to be compatible with the older analogue standards. That is, they were designed such that it is possible, and indeed simple, to digitise analogue recordings. If they had been designed completely independently of the old analogue standards, this might not have been possible, or would have been extremely complex if it were. Unfortunately, this put the digital standards in a strait jacket, as they were constrained by some old and sometimes arcane analogue standards!

Firstly, let’s look at the analogue video standards. The two main analogue video transmission standards are:

• NTSC-M (or simply NTSC)
• PAL

There are others, but they are not dealt with in this site. Each standard is based on the principle of transmitting the video and audio on a carrier wave at different frequencies. The carrier wave is received by the receiver, the sound is separated from the video and the latter is sent to the cathode ray tube (assuming a CRT display). The audio signal is sent to an amplifier and comes out as sound. In the early days of black and white TV, the video signal only consisted of luminance information. That is, how bright or dark should we be at any moment in time. Of course the luminance value changed rapidly with respect to time.
How does the CRT display the signal on a black and white television? Basically it sends a beam rapidly across the screen from left to right (as you look at it) in a straight line. It then switches off (this is called “horizontal blanking”), returns the beam back to the left of the screen, moves down a tiny bit, switches back on and does the same again. If it is done fast enough, it is possible to fill the whole screen (called a “frame”) in a fraction of a second. When the beam gets to the bottom of the screen, it switches off (this is called “vertical blanking”), moves back to the top, switches back on and starts again.

Clearly, this begs the questions of how many lines there are on the screen and how fast the beam actually moves. The answer to this is where the differences between NTSC and PAL become apparent. PAL is much easier to explain, so let’s start with NTSC!

In the early days of NTSC, it was decided that a whole screen should be drawn at exactly half the local AC mains frequency and that there should be 525 lines. The AC mains frequency in North America was (and still is) 60 Hz. So, a whole screen would be drawn with a frequency of 30 Hz. The same rationale was used for PAL (which came after NTSC), that a whole screen should be drawn at exactly half the local AC mains frequency, but that there should be 625 lines, not 525. The AC mains frequency in Europe was (and still is) 50 Hz. So, a whole screen would be drawn with a frequency of 25 Hz.

Why use the A/C frequency? The mains frequency was chosen as it was a handy timing signal that already existed, and so it was easy for the television manufacturers to use this signal. In addition, using the mains frequency made it easier to avoid any interference signals with the mains itself.

Why half the AC frequency and not the whole AC frequency? To answer this, we first need to explain the term “interlacing”. (The explanation starts using a simple model and then homes in on what actually happens.
As a result, the early descriptions are deliberately over simplistic but become more realistic as the explanation moves on.)

Instead of drawing the top line on the screen, then the second, then the third and so on to the 525th line (625th line with PAL), the CRT draws the first line, then the third, then the fifth and so on down to the 525th line (625th line with PAL). (Later, when we get onto blanking, we discover that in fact the first few lines aren’t actually drawn at all.) It then blanks vertically, moves back to the top and draws the second line, then the fourth, then the sixth and so on to the 524th (624th with PAL) line. It then blanks back to the top and starts at the first line again. The consequence of this is that in effect, the CRT draws half the screen in one scan and then goes back to the top and fills in the gaps. Each half-screen is called a “field” and this method of drawing is called “interlacing”. Each frame is therefore made up of two fields.

It is now possible to answer the question “Why half the AC frequency and not the whole AC frequency?” Because the CRT is effectively moving from top to bottom every field, not every frame, and it does this 60 (50 with PAL) times a second, which matches the AC frequency perfectly.

Why interlace and not just draw all lines in one hit? Mainly to reduce the flicker that 30 frames per second would suffer from if drawn in one hit. Interlacing reduces this. As technology progressed, it became possible to draw each line in turn without interlacing and without flicker (usually by doubling the frame rate). This is called “progressive scan”, but it is not covered further in this site.

Notice that I used the terms “first line”, “fifth line” and so on instead of “line 1”, “line 5”? This is because the lines are numbered in the order that they are drawn in, NOT the order that they appear on the screen in. So in fact, the CRT draws lines 1 to 525 (625 with PAL) in sequence.
Also note that the line numbering does not necessarily start from 1. It is worth remembering that line scanning is just a series of electrical signals: there’s nothing intrinsic in the analogue specifications that actually numbers the lines. Numbering is just an artificial construct for our convenience, and it only really became important when we needed to count the lines for the purposes of converting them to a digital signal. One of the relevant standards, called “ITU-R BT.470” (http://www.itu.int/rec/recommendation.asp?type=folders&lang=e&parent=R-REC-BT.470), lists the line numbering as follows.

• NTSC:
  □ Field 1: 4 to 265 inclusive (262 lines)
  □ Field 2: 266 to 525 and then 1 to 3 inclusive (263 lines)
• PAL:
  □ Field 1: 1 to 312 inclusive (312 lines)
  □ Field 2: 313 to 625 inclusive (313 lines)

So in fact, NTSC doesn’t start at line 1 anyway! Also, remember that although this numbering scheme implies that the fields are not the same size, this is the digital world’s perspective of line numbering. In practice in the old black and white analogue world, each field was the same size:

• NTSC: 262.5 lines
• PAL: 312.5 lines

The half lines correctly imply that line drawing starts halfway through one of the lines. This is not only possible, but also makes sense when it is considered that the lines were not exactly horizontal: they are slightly tilted downhill from left to right, and so one of the lines starts halfway across the screen.

Blanking and Synchronisation

More important than worrying about where one field starts and the next stops, or about actual line numbers, is to ask whether all of the 525 lines (625 for PAL) are used for video and audio signals. To answer this we have to look at the horizontal and vertical blanking in more detail.

Horizontal Scan Frequency and Line Timings

Firstly we need to define the “horizontal scan frequency”. This is the frequency with which the CRT beam goes from left to right and back again horizontally.
(We already know the frequency that it does this vertically: 60 Hz for NTSC and 50 Hz for PAL.) We can calculate this for both NTSC and PAL easily.

For black and white NTSC, the CRT paints the whole screen 30 times a second and each screen consists of 525 lines. So, every second, it draws and moves the beam back to the left 30 x 525 times a second = 15750 Hz. For PAL, the CRT paints the whole screen 25 times a second and each screen consists of 625 lines, so every second, it draws and moves the beam back to the left 25 x 625 times a second = 15625 Hz. Note that the horizontal scan frequency is a consequence of the specification of having 525 lines (625 for PAL). That is, the specification came first and the horizontal scan frequency was simply a mathematical consequence of this.

Taking one over the frequency, we can also calculate the time taken for each line to be drawn. For NTSC, it is 1 / 15750 = approximately 63.492 us. For PAL, it is 1 / 15625 = 64 us.

So, we could simply tell the CRT “keep drawing lines 15750 times a second displaying whatever signal happens to come along as you’re doing it”. Seeing as we have defined in our standards that we paint 525 lines 30 times a second, one might think that this was a good idea. Unfortunately, electronic components come with a “tolerance”. This is a statement of how accurate the components are. For example, a resistor may be rated at 200 ohms. However, this does not mean that it is exactly 200 ohms, only that it is guaranteed to be 200 ohms plus or minus some small amount, for example 5%. (That is, it is between 190 and 210 ohms.) Similarly, we can’t guarantee that the CRT beam moves at exactly 15750 Hz, only that it is very very close to this. Now suppose that we let the CRT draw lines at what we think of as 15750 Hz, but which is in fact only 15749.999999 Hz, for example due to the tolerance of the electronic components. The picture will be skewed and will appear more skewed across the screen with time.
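The black-and-white scan frequencies and line times above are easy to check. Here is a quick sketch in Python (the variable names are my own, chosen for illustration):

```python
from fractions import Fraction

# Frames per second are half the mains frequency: 30 (NTSC), 25 (PAL).
ntsc_scan_hz = 30 * 525      # lines drawn (and retraced) per second = 15750
pal_scan_hz = 25 * 625       # = 15625

# Line time in microseconds is one over the scan frequency.
ntsc_line_us = Fraction(10**6, ntsc_scan_hz)   # ~63.492 us
pal_line_us = Fraction(10**6, pal_scan_hz)     # exactly 64 us

print(ntsc_scan_hz, pal_scan_hz, float(ntsc_line_us), pal_line_us)
```

Note that the PAL line time comes out as an exact 64 us, whereas the NTSC figure is a non-terminating decimal, which is why the text quotes it only approximately.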
So, to account for this, synchronisation pulses are used. A synchronisation pulse is a “tick” pulse (like a metronome) that is part of the transmission. In simple terms, this “synchronisation pulse” tells the CRT when it can start to draw a new line. There is also another kind of pulse that tells the CRT to stop drawing. This is, not surprisingly, called the “blanking pulse”. In practice, the synchronisation pulses and blanking pulses are separate but both form part of the video signal, though this is not important. The important point here is that as well as the actual picture signal, there are at least two other signals:

1. A horizontal synchronisation pulse.
2. A horizontal blanking pulse.

On top of this, there is a less-frequent vertical synchronisation pulse and a vertical blanking pulse. Seeing as both the horizontal blanking pulse and vertical blanking pulse tell the CRT to stop drawing, they share the same electrical properties. So, if we were to convert the video transmission into words, it might go something like:

• Start (CRT pointing top left)
• Horizontal Blank (Return to left hand side of screen)
• Start drawing when next non-blanking signal received (a “pulse”)
• Horizontal Blank (Return to left hand side of screen)
• Start drawing the variable signal now (the picture) to the right hand side of the screen
• Horizontal Blank (Return to left hand side of screen)
• Start drawing when next non-blanking signal received (a “pulse”)
• Horizontal Blank (Return to left hand side of screen)
• Start drawing the variable signal now (the picture) to the right hand side of the screen
• And so on for a whole field of information
• Horizontal Blank (Return to left hand side of screen)
• Start drawing when next non-blanking signal received (a “pulse”)
• Horizontal Blank (Return to left hand side of screen)
• Start drawing the variable signal now (the picture) to the right hand side of the screen
• Vertical Blank (Return to top of screen)
• Start drawing when next
non-blanking signal received (a “pulse”)
• Horizontal Blank (Return to left hand side of screen)
• Start drawing the variable signal now (the picture) to the right hand side of the screen
• And so on…

The important thing about the horizontal tick pulses is that they carry on even when a vertical blanking signal is being sent out. One way to think of this is like a drummer playing to a click track on a metronome. Even when the drummer isn’t playing, the click track continues, ensuring that the drummer next comes in at the right place.

The NTSC standard (and the PAL standard) take this into account, and so what they really specify is the number of horizontal synchronisation pulses per frame to be 525 (625 for PAL), not the number of visible lines to be drawn on the television screen. Some of the synchronisations (or “lines” as they are still called, even though we now know that lines aren’t necessarily drawn) are “wasted” on vertical blanking and indeed other control signals, which means that not all the lines are used for the actual video image. The period of time that the CRT is not drawing active picture is called the “vertical blanking period” in the case of moving back to the top, even though blanking isn’t the only thing going on in these intervals.

So how many lines are used for picture? The simple answer is as follows:

• PAL
  □ 25 lines of blanking, 287.5 lines of picture, 25 lines of blanking, 287.5 lines of picture and so on. Put another way, 8% of the lines are for vertical blanking and other controls in between fields.
• NTSC
  □ 20 lines of blanking, 242.5 lines of picture, 20 lines of blanking, 242.5 lines of picture and so on. Put another way, roughly 7.6% of the lines are for vertical blanking and other controls in between fields.

This is the “simple answer” because the half lines do change from field to field. In fact, NTSC uses a four-field sequence, during which time the actual line numbers that start and end the active picture vary slightly from field to field.
To make it even more complex, PAL uses an 8-field sequence. Therefore, it is best not to get bogged down with exactly which lines are active and which are not, and just stick with the averages:

• NTSC: 485 active lines
• PAL: 575 active lines

This was a very long explanation of vertical blanking! However, we now need to look at horizontal blanking. Whilst it is easy to nominate a certain number of lines for blanking and other control signals, doing the same for horizontal blanking is somewhat arbitrary. Therefore, it is left to the standards committees to decide how much time should be allowed to get the beam back to the left hand side of the screen (and other control signals) before drawing the next line. For PAL, this was defined as 12 us for blanking and 52 us for picture, which makes up the required total of 64 us for each line. For NTSC (in the days of black and white) this is defined as anything between 10.2 us and 11.4 us for blanking and the remainder for picture.

Let’s summarise what we have thus far:

• PAL: 15625 Hz horizontal scan frequency; 64 us per line (12 us blanking, 52 us picture); 575 active lines.
• NTSC (black and white): 15750 Hz horizontal scan frequency; approximately 63.492 us per line (10.2 us to 11.4 us blanking); 485 active lines.

Unfortunately, with the advent of colour, the NTSC horizontal scan frequency of 15750 Hz had a problem. (PAL did not run into this problem. It has been stated on some sites that PAL, which came after NTSC, was designed such that it deliberately did not run into this problem.) The challenge to the designers of the standard for transmitting a colour signal was how to do it in such a way that when a colour transmission was sent to a black and white television, it would display in black and white (as opposed to not displaying anything or displaying a scrambled picture) without interference. To do this, they had to use the same carrier frequency and leave the luminance and audio signals untouched. The new colour part of the signal, called the “chroma signal”, somehow had to be added to this signal without it interfering with either the luminance or the audio signals.
In the first attempts, they tried a “suck it and see” approach of simply selecting a frequency within the available bandwidth, squeezing the chroma signal in and seeing what happened. Unfortunately, it became evident that it was not possible to do this because the chroma signal kept on interfering with the audio signal.

Colour Horizontal Scan Frequency

Further study showed the problem to be to do with the horizontal scan frequency of 15750 Hz. It was calculated that if the chroma sub-frequency were an odd multiple of half the horizontal scan frequency, then interference would be greatly reduced. (The reasoning for this is outside the scope of this simple explanation!) After much experimentation and number-crunching with frame rates, horizontal scan frequencies and so on, it was concluded that a horizontal scan frequency of 4500000 / 286 Hz (roughly 15734.265734265734… Hz) and a chroma sub-carrier of roughly 3.579545454545… MHz would do the trick! These are derived as follows:

Horizontal scan frequency = (4.5 x 10^6) * 2 / (455 + 117) = 4500000 / 286 Hz
Chroma sub-carrier = (4.5 x 10^6) * 455 / (455 + 117) Hz

If the horizontal scan frequency were set to this, the chroma signal could be squeezed into the existing bandwidth with almost no noticeable effects. As 15734.265734265734… Hz is close to the original 15750 Hz, any difference should not be noticeable. Let’s see if this is the case: the number of lines had to be kept the same at 525, but we’re now only drawing 15734.265734265734… lines per second, so we’re only drawing (4500000 / 286) / 525 = 30000 / 1001 = 29.970029970029… frames per second. Originally, it was 30 frames per second, so the difference is tiny. Unfortunately, it is also significant, and it is explained further below!

The television manufacturers weren’t entirely happy with the new frame rate because it meant that the AC frequency could no longer be used as a clock signal. Therefore, they had to add new clocks to the televisions that “ticked” at 30000 / 1001 Hz and use this as a timing signal instead.
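The derivation above can be checked with exact rational arithmetic. This is a sketch of the calculation (the names are my own):

```python
from fractions import Fraction

ref = Fraction(4_500_000)        # the 4.5 MHz reference frequency
div = 455 + 117                  # = 572

colour_scan_hz = ref * 2 / div   # colour horizontal scan frequency
chroma_hz = ref * 455 / div      # chroma sub-carrier
frame_rate = colour_scan_hz / 525

print(float(colour_scan_hz))     # 15734.265734...
print(float(chroma_hz))          # 3579545.4545...
print(frame_rate)                # reduces to 30000/1001
```

Using `Fraction` keeps the repeating decimals exact, which is why `frame_rate` comes out as precisely 30000/1001 rather than a rounded 29.97.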
29.97 or 29.970029970029…?

It is often said that the NTSC frame rate is exactly 29.97. This, of course, does not equal 29.970029970029… but it is very close. Where does the 29.97 come from, and which is right? The simple answer is that 29.970029970029… is right and that 29.97 is an approximation. However, 29.97 is an approximation that is not just a rounding of 29.970029970029… It comes from the addition of time codes to the video.

Suppose I wanted to give a video to a colleague so that he could add a sound effect. I might say: “Start the sound effect at the bit where Arnie blasts the T1000 to smithereens”. This is not, however, the most accurate way to represent the point at which an event should happen! Therefore a system of numbering each frame was invented. The simplest way would be to start the counting at 0 (or some other integer) and simply count each frame one by one. So, I might say “start the sound effect at frame 16589483632”. If you have this information, along with the rate at which frames pass, you could add a sound effect without even needing the original video.

However, in practice a different system is used, called SMPTE. SMPTE simply counts hours, minutes, seconds and the frame number (starting from 0) in that second. So the first frame is 00:00:00:00 and the thirtieth frame is 00:00:00:29. Bearing in mind that in black and white NTSC, the next frame would start in the next second, the thirty-first frame would be 00:00:01:00, the thirty-second 00:00:01:01 and so on. This tying-in of the frame counter with a real-time clock has its advantages in that it is now possible to think of the frame count as a time line (which indeed it is). We easily know, for example, that the third frame in the eleventh minute is 00:11:00:02. Had we simply counted the frames from 0, we’d have to work this out with a calculator!

SMPTE for PAL is the same except that because there are only 25 frames per second, the last number in the time code only goes up to 24.
What about SMPTE time codes and colour NTSC? This is where the problem is. Suppose we have an NTSC colour video that lasts exactly one hour and which has a frame rate of 29.970029970029… Now suppose we use the same SMPTE time coding that we used for black and white. Firstly, how many frames do we have? 29.970029970029… x 60 x 60 = 107892 frames (rounding down to the nearest integer). Now let’s add the SMPTE time codes to these frames. We start at 00:00:00:00 and end with the last frame having a code of 00:59:56:11, whereas we’d like it to have a time code of 00:59:59:29.

This might not be a problem if we told a sound engineer “start the sound effect at time code 00:12:13:14”, but the engineer might be doing the sound effects without access to the video. Instead, they may be using their own accurate time source with the intention of “sticking the two together” at the end. (This is often how sound is added to video.) So if the engineer were to start the sound at time code 00:12:13:14 (using his time source), it would not be in the right place when the sound was added to the video.

Therefore, a variant of SMPTE was created called “drop-frame time code”. This does not, as the name implies, drop any frames! Instead, it skips certain time codes. In particular, it skips frame numbers 00 and 01, except where the minute is 00, 10, 20, 30, 40 or 50. So, the frame numbering sequence might go:

00:00:59:28, 00:00:59:29, 00:01:00:02, 00:01:00:03, 00:01:00:04…

And so on to:

00:09:59:28, 00:09:59:29, 00:10:00:00, 00:10:00:01, 00:10:00:02…

Taking a 10-minute section of video, we would skip 9 x 2 frame numbers = 18 frame numbers. Over a one hour section of video, we skip 108 frame numbers. These 108 skipped frame numbers bring the 107892 frames up to 108000, which is exactly what we want it to be! So, if we now apply the drop-frame time code to a sequence of video frames, we get exactly 107892 x 30 / 108000 frames per second = 29.97.
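The drop-frame rule described above (skip numbers 00 and 01 at each minute change, except every tenth minute) can be turned into a frame-count-to-time-code conversion. This is my own illustrative sketch, not code from any standard library:

```python
def drop_frame_timecode(frame):
    """Convert a 0-based frame count to an NTSC drop-frame SMPTE code,
    skipping frame numbers 00 and 01 at each minute change, except
    where the minute is a multiple of 10."""
    # 17982 frame numbers occupy each 10-minute block (18 are skipped),
    # and 1798 numbers occupy each ordinary ("dropped") minute.
    d, m = divmod(frame, 17982)
    frame += 18 * d                    # re-insert skips for whole 10-minute blocks
    if m >= 2:
        frame += 2 * ((m - 2) // 1798) # re-insert skips within this block
    ff = frame % 30
    ss = frame // 30 % 60
    mm = frame // 1800 % 60
    hh = frame // 108000
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(drop_frame_timecode(1800))    # first frame of minute 1: 00:01:00:02
print(drop_frame_timecode(107891))  # last frame of the hour: 00:59:59:29
```

Note how the 107892nd frame of an hour-long video (index 107891) now receives the code 00:59:59:29, exactly as the text requires.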
Of course, this assumes that the device that is playing back the video is playing it faithfully according to the time code and not with some prior knowledge of the “actual” frame rate. This raises two questions.

1. Because the frame numbers are dropped at a slightly irregular interval, how does this affect our sound engineer, assuming that 29.97 is the real frame rate?
2. How inaccurate is 29.97 frames per second?

The first is easy to answer: we know that with drop-frame SMPTE time code, the frame numbering drifts from real time but is then pulled back into line by the dropped frame numbers. The most that it is ever out is two frames. Therefore, when the sound is added, it will be at most two frames out, or approximately 1/15th of a second. This is deemed acceptable for most purposes.

The second is slightly more interesting, because we know that 29.97 is still an approximation to 29.970029970029… If we take a 24 hour video instead of 1 hour, the number of frames is 29.970029970029… x 60 x 60 x 24 = 2589410 (rounding down). However, assuming 29.97 to be the frame rate would give us 29.97 x 60 x 60 x 24 = 2589408 frames. The difference is two frames in 24 hours. Put another way, if we applied drop-frame SMPTE to the 2589410 frames, we’d incorrectly conclude that the 24 hours were up after 2589408 frames, and the last two would be “lost”, which in the case of most studios is at the outer limit of acceptability. (Most would want to resynchronise the time code with the video after 12 hours, but fortunately, there aren’t many unbroken video sequences that long!) Therefore, 29.97 is deemed “accurate enough” when it comes to time codes, though bear in mind that if we’re simply transmitting a video signal at one end and playing it back at the other, we’re not worried about time codes and both ends will be working at 29.970029970029… anyway, so no problems will be encountered.
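The two-frames-in-24-hours figure can be verified exactly. A short sketch (my own names again):

```python
from fractions import Fraction

real_rate = Fraction(30000, 1001)     # exact NTSC colour frame rate
approx_rate = Fraction(2997, 100)     # the 29.97 approximation
seconds_in_24h = 60 * 60 * 24

real_frames = real_rate * seconds_in_24h      # 2589410.589..., truncates to 2589410
approx_frames = approx_rate * seconds_in_24h  # exactly 2589408

print(int(real_frames) - int(approx_frames))  # the two "lost" frames
```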
Whilst talking about how exact we have to be, it is worth adding that because the equipment used to do all this is not as accurate as our theory, the specification allows the equipment to work within certain “tolerances”. The tolerances are as follows.

• Chroma sub-carrier: 3579545 Hz +- 10 Hz, or 3579535 Hz – 3579555 Hz.
• Horizontal scan frequency must therefore be within: 15734.2197 Hz – 15734.3076 Hz.
• Frame rate must therefore be within: 29.9699 – 29.9701.

Rather usefully, 29.97 and 29.970029970029… both fit within this allowed tolerance anyway, so the differences, from a practical point of view, are negligible. This site, however, still assumes an NTSC colour frame rate of precisely 29.970029970029… unless otherwise specified.

Let’s take another look at the effect this has on horizontal blanking and timings. To refresh, PAL is unaffected by the addition of colour and continued to work at a horizontal scan frequency of 15625 Hz, a line time of 64 us and an active picture time of 52 us. NTSC, however, now works at a horizontal scan frequency of 15734.265734265734… Hz. This equates to a line time of 63.55555555555555… us. Again, it is left to the standards committees to decide how much time should be allowed to get the beam back to the left hand side of the screen (and other control signals) before drawing the next line. For NTSC (colour) this was defined as 10.9 us for blanking and the remaining 52.65555555555… us for picture.

Let’s summarise all these values so far:

• PAL: 15625 Hz horizontal scan frequency; 64 us per line (12 us blanking, 52 us picture); 25 frames per second.
• NTSC (colour): 15734.265734… Hz horizontal scan frequency; 63.5555… us per line (10.9 us blanking, 52.6555… us picture); 30000 / 1001 frames per second.

Digitising Analogue Video

So, we now have an analogue signal that we understand and that we want to turn into digital data. How do we go about it? The first thing that we must be clear about is exactly what we are going to digitise. Every line? All of the line? Only the active picture? What about the half lines? This section answers those questions.
The first place to start is to say that there is no point in digitising the non-active picture (blanking and so on), because those parts only control how the CRT operates. However, in the digital domain, there is no CRT to control, so digitising non-active picture would be pointless. So, only the active picture component is digitised. However, as we know, some lines are only half long. What happens to them?

To recap the number of analogue lines in each of NTSC and PAL:

• PAL
  □ 25 lines of blanking, 287.5 lines of picture, 25 lines of blanking, 287.5 lines of picture and so on.
• NTSC
  □ 20 lines of blanking, 242.5 lines of picture, 20 lines of blanking, 242.5 lines of picture and so on.

Digitising half-lines would be a real headache for those that write standards, so the first thing that was agreed on is that half lines are treated as whole lines. Therefore, from the perspective of sampling, each PAL field has 288 lines and each NTSC field has 243 lines. This makes the frame sizes as follows.

• NTSC: 486 lines
• PAL: 576 lines

It makes huge sense for each line to be converted to a horizontal row of pixels, and indeed this is what is done. So, digitised PAL has 576 rows and digitised NTSC has 486 rows. What about the number of columns?

One of the many standards for sampling of analogue video, “ITU-R BT.470”, states that (paraphrased):

• For PAL with 576 active lines, an active sample line of 52 us which has been digitised must give a 4:3 display aspect ratio (“DAR”).
• For NTSC with 486 active lines, an active sample line of 52.65555555555… us which has been digitised must give a 4:3 display aspect ratio (“DAR”).

Notice that it doesn’t make any preconditions about how many pixels are used horizontally to achieve this, only that when the active sample line is sampled at a predetermined rate, a 4:3 aspect ratio must result. What about if the sample line is not 52 us for PAL (or 52.65555555555… us for NTSC)? What does the standard have to say about that?
Fortunately, the next few sections don’t need to worry about that, because they deal exclusively in the theoretical world of sample widths of 52 us for PAL and 52.65555555555… us for NTSC. Eventually, however, we will be forced to take this possibility into account. Let’s answer the question briefly for now, however. The consensus seems to be that the width of the display is adjusted on a pro rata basis, assuming that the pixel aspect ratio (“PAR”) doesn’t change. (Actually, most sites don’t seem to realise that they are making the assumption about the PAR not changing, but an analysis of the maths behind them shows that this assumption is indeed being made.)

A simple example: if the scan width of a PAL picture is 104 us, then the DAR will be 8:3. More generally,

• PAL:
  □ DAR = (w * 4) / (52 * 3)
  □ DAR = 4w / 156, where w is the scan width in microseconds.
• NTSC:
  □ DAR = (w * 4) / (52.65555555555… * 3)
  □ DAR = 120w / 4739, where w is the scan width in microseconds.

For now, let’s get back to standard scan widths. Suppose that as a result of sampling the active PAL picture at some unknown sample rate, we ended up with 1536 pixels. Clearly, if we have a pixel aspect ratio of exactly square (1:1), that is, if each pixel is square, we’ll end up with a DAR of 1536:576, which is certainly not the same as 4:3 and is therefore not ITU-R BT.470-6 compliant. Therefore, we need to define our pixels such that the picture scales to 4:3. If the pixel aspect ratio (“PAR”) is represented as X:Y, we can state that:

(1536 * X) / (576 * Y) = 4 / 3

If we arbitrarily state that Y = 1, we get:

(1536 * X) * 3 = 576 * 4, therefore X = (576 * 4) / (1536 * 3), therefore X = 0.5

So, in our fictitious example, our PAR is 0.5:1, or more conveniently written as 1:2. In English, the pixel is twice as high as it is wide.
Going back to our example to check that this does indeed give a 4:3 DAR: if we have pixels that are twice as tall as they are wide and 1536 pixels across, then that is the same DAR as 1536 * 1 : 576 * 2 = 4:3. Excellent – we have a system that is compliant with “ITU-R BT.470”! Notice that we defined the PAR, not the standard. The PAR was simply a consequence of the standard – you could say that we fudged our definition of the PAR to make the scan comply with the standard.

Let’s generalise this equation. If the PAR is X:1, then we can calculate X as follows:

X = (576 * 4) / (P * 3), or more simply X = 768 / P, where:
P is the number of pixels that represent the active PAL picture if a standard scan width of 52 us has been used.

This equation is for a height of 576 pixels. More generally for a height of H pixels, we can say that:

PAR = (H / P) * DAR, where:
H is the height of the active screen in pixels;
P is the number of pixels that represent the active PAL picture if a standard scan width of 52 us has been used;
DAR is the display aspect ratio of 4:3.

We’ll use this equation later. Let’s do the same for NTSC. Suppose, therefore, that as a result of sampling the active NTSC picture, we ended up with 972 pixels. Clearly, if we have a pixel aspect ratio of exactly square (1:1), we’ll end up with a DAR of 972:486, which is certainly not the same as 4:3! Again, we need to define our pixels such that the picture scales to 4:3. If the pixel aspect ratio (“PAR”) is represented as X:Y, we can state that:

(972 * X) / (486 * Y) = 4 / 3

If we arbitrarily state that Y = 1, we get:

(972 * X) * 3 = 486 * 4, therefore X = (486 * 4) / (972 * 3), therefore X = 0.666…

So, our PAR is 0.666…:1, or more conveniently written as 2:3. So, if each pixel is one and a half times as tall as it is wide, we’ll be compliant with “ITU-R BT.470”. Let’s generalise this equation.
If the PAR is X:1, then we can calculate X as follows:

X = (486 * 4) / (P * 3), or more simply X = 648 / P, where:
P is the number of pixels that represent the active NTSC picture if a standard scan width of 52.6555… us has been used.

In its general form:

PAR = (H / P) * DAR, where:
H is the height of the active screen in pixels;
P is the number of pixels that represent the active NTSC picture if a standard scan width of 52.6555… us has been used;
DAR is the display aspect ratio.

It’s the same as for PAL, which is to be expected.

Going back to the nasty situation of the scan width not being the standard length – what are the PARs then? Easy! Remember that when we were calculating the DAR, we assumed that the PAR stays the same even if the scan width changes – that is, more of the line is sampled and more pixels are output, leaving the PAR as it is. So, in fact the PAR is independent of the scan widths. In this case, we can generalise the statements about PAR to the following:

• NTSC: PAR = 648 / P, where P is the number of horizontal pixels in 52.6555… us of the active NTSC picture.
• PAL: PAR = 768 / P, where P is the number of horizontal pixels in 52 us of the active PAL picture.

PAR = (H / P) * DAR, where:
H is the height of the active screen in pixels;
P is the number of horizontal pixels in 52 us (PAL) or 52.6555… us (NTSC) of the active picture;
DAR is the display aspect ratio.

Notice now that we don’t need to make any statements about the total scan width, only the number of pixels in the standard scan width. If you want some “proof” that the total scan widths do indeed cancel each other out in the equation, then this is for you. We’ll take PAL as an example and start again from first principles. Let’s take a scan width of w; P is the number of horizontal pixels in 52 us of the picture. Let p (small p) be the total number of horizontal pixels on the picture.

p / P = w / 52
DAR = 4w / 156

What is the PAR?
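The general formula PAR = (H / P) * DAR, and the two fictitious examples above, can be sketched in a couple of lines (the function name is my own):

```python
from fractions import Fraction

def par(height, pixels_in_standard_width, dar=Fraction(4, 3)):
    """PAR = (H / P) * DAR, returned as the X in an X:1 pixel aspect ratio."""
    return Fraction(height, pixels_in_standard_width) * dar

print(par(576, 1536))   # PAL example:  1/2  (pixels twice as tall as wide)
print(par(486, 972))    # NTSC example: 2/3
```

Using exact fractions avoids the rounding that plagues PAR discussions; `par(576, 768)` comes out as exactly 1, a square pixel.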
It is:

(picture width / number of pixels in width) / (picture height / number of pixels in height)

(picture width / picture height) = DAR. So,

PAR = DAR / (number of pixels in width / number of pixels in height)
PAR = DAR / (p / 576)

Substituting p = Pw / 52:

PAR = DAR / ((Pw / 52) / 576) = DAR / (Pw / 29952)

Substituting DAR:

PAR = (4w / 156) / (Pw / 29952)
PAR = 119808w / 156Pw

We can cancel out the w (at last!):

PAR = 119808 / 156P
PAR = 768 / P

This proves that the PAR is dependent not on the scan width, but on the number of pixels that are in the standard 52 us of the horizontal line. The same proof can easily be applied to NTSC as well.

“Hold on”, I hear you say, “I’ve seen plenty of sites that show the PAR being related to the scan width”! Let’s test this by substituting little p for big P in our equation. This gives us for PAL:

PAR = 768w / 52p

Ah ha! We have now got the PAR in terms of the scan width, right? Well, yes and no. The number of pixels in the scan width equals the sample rate multiplied by the scan width. Formally:

p = fw, where f is the sample rate

If we substitute this into the PAR equation, we get:

PAR = 768w / 52fw, and again the w cancels out, giving
PAR = 768 / 52f

So again, we can’t really get the PAR to depend on the scan width if we continue to assume that increasing the sample width leaves the PAR alone (which is fairly obvious really), though we can get it to depend on either the number of pixels in the standard sample width or the sample rate (which of course depend on each other).

To summarise, we can express the PAR in terms of “P”, the number of pixels in the standard sample width:

• NTSC: PAR = 648 / P
• PAL: PAR = 768 / P

Or we can express it in terms of the sample frequency “f”, where f is the sample frequency in megahertz:
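The cancellation of w can also be demonstrated numerically: follow the long route above for two different scan widths and the same P, and the PAR comes out identical. A sketch (names are mine):

```python
from fractions import Fraction

def pal_par_long_route(P, w):
    """The long route: DAR from scan width w (in us), total pixels
    p = P*w/52, then PAR = DAR / (p / 576). The w should cancel."""
    dar = Fraction(4) * w / 156        # DAR = 4w / 156
    p = Fraction(P) * w / 52           # p = Pw / 52
    return dar / (p / 576)

print(pal_par_long_route(702, Fraction(52)))    # standard scan width
print(pal_par_long_route(702, Fraction(104)))   # double scan width, same PAR
```

Both calls print 128/117, matching the shortcut PAR = 768 / P.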
• NTSC: PAR = 648 / (52.6555… * f)
• PAL: PAR = 768 / (52 * f)

Expressed as fractions and in the more generic form that doesn’t assume the number of lines:

• NTSC: PAR = (H * DAR) * 90 / (4739 * f), assuming that a non-standard sample width doesn’t change the PAR, where:
  H is the height of the active screen in pixels;
  DAR is the display aspect ratio;
  f is the sample frequency.
• PAL: PAR = (H * DAR) / (52 * f), assuming that a non-standard sample width doesn’t change the PAR, where:
  H is the height of the active screen in pixels;
  DAR is the display aspect ratio;
  f is the sample frequency.

This last formula is really useful. For example, imagine we have an NTSC device that has a sample rate of 9 MHz. What is the PAR?

PAR = 486 * (4 / 3) / (52.6555… * 9) = 6480 / 4739

This is, in fact, the PAR of an NTSC SVCD.

All this time, the fact that we are assuming that a non-standard sample width doesn’t affect the PAR has been emphasised. Is this indeed what happens? Unfortunately, in some cases, the PAR is affected by the sample width and this assumption is therefore false. This doesn’t mean that the formulae above are invalid, simply that we must remember what assumptions were made when creating them. Later on, we will take into account the effect of a PAR that is altered as a result of the sample width when we look at capture cards. In fact, the equations for the PARs are slight modifications of the ones above, so all is not lost.

So, how many pixels do we get in the real world and what is the actual sample rate? Let’s look at these questions next. We can actually sample the analogue signal as often as we like. For example, we could sample a PAL signal once every 26 us. However, for an active line of 52 us, we only get two samples per active line, or two horizontal pixels per active line. This is unlikely to yield a very good picture! (We’d get a PAR of 384:1 – very short and fat pixels as well.)
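The frequency-based NTSC formula, PAR = (H * DAR) * 90 / (4739 * f), is easy to wrap up as a function. A sketch (the function name is my own), reproducing the SVCD example:

```python
from fractions import Fraction

def ntsc_par_from_sample_rate(height, f_mhz, dar=Fraction(4, 3)):
    """PAR = (H * DAR) * 90 / (4739 * f), where f is in MHz and
    90/4739 is the exact reciprocal of the 52.6555... us active line."""
    return Fraction(height) * dar * 90 / (4739 * Fraction(f_mhz))

print(ntsc_par_from_sample_rate(486, 9))   # NTSC SVCD: 6480/4739
```

The result, 6480/4739, is roughly 1.367, so SVCD pixels are noticeably wider than they are tall.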
If we increase the sample rate to, say, once every millionth of a microsecond (very fast indeed, in other words), we’ll end up with 52 million samples of the active line. As well as consuming vast processor and bandwidth resources, we also have a PAR problem, because our pixels will have a ratio of roughly 14.77:1000000! These would be unfeasibly thin pixels to implement in any system.

Therefore, the sample rate has to be one that gives us sufficient resolution, but doesn’t go “over the top”. Clearly, there’s no perfect answer, but most importantly, we want to pick a sample rate that everyone else is using. Therefore, we need a standard. Cue “ITU-R BT.601”! (http://www.itu.int/rec/recommendation.asp?type=items&lang=e&parent=R-REC-BT.601-5-199510-I)

This states that:
• For PAL with 625 total lines, we must end up with 864 horizontal samples.
• For NTSC with 525 total lines, we must end up with 858 horizontal samples.

This makes the sample rate easy to calculate, because with PAL, the total line length is 64 us, and for NTSC, it is 63.555… us. So, we can calculate the sample rate as follows:
• PAL: Sample rate is 864 / (64 * 10^-6) = 13,500,000 Hz.
• NTSC: Sample rate is 858 / (63.555… * 10^-6) = 13,500,000 Hz.

Magic! Both come out with the same sample rate. Of course, when the standards were written, this was by design, hence the numbers 864 and 858. So, the sample rate is 13.5 MHz.

This 13.5 MHz turns up just about everywhere when it comes to digitising analogue video signals. There are other sample rates relating to other standards, but the 13.5 MHz figure is by far the most widely used, and the best thing is that it is independent of NTSC or PAL.

Let’s quickly summarise where we are. We sample the video signal at 13.5 MHz, and we have formulae for calculating the PAR to give us a 4 : 3 DAR. Let’s put this to the test by digitising a PAL video signal whilst staying within “ITU-R BT.470” and “ITU-R BT.601”.
To get the PAR, we need to know the length of the active picture part of the line. This is 52 us, as already mentioned. So, if we sample that part of the line at 13.5 MHz, we get 13.5 x 10^6 * 52 x 10^-6 = 702 samples. The PAR is therefore (768/702):1 = 128 / 117 : 1 (roughly 1.094017… : 1). Therefore, in this case, the pixels are slightly wider than they are tall. The formula for the PAR could be re-written as:

PAR = (576 * 4 / 3) / (13.5 * 52) = 128 / 117 ~ 1.094017…

(It might have been easier simply to use the equation that relates the PAR to the frequency. However, taking the longer route here keeps our brains sharp!)

If we do the same for NTSC, we get 13.5 x 10^6 * 52.6555… x 10^-6 = 710.84999… samples. Now we have a problem: is this 710, 711 or indeed 710.84999… samples? The answer depends on the hardware doing the sampling. For now, let’s get the PAR of all three!

Firstly, 710 samples: PAR = 648 / 710 ~ 0.912676…
Next, 711 samples: PAR = 648 / 711 ~ 0.911392…
Finally, 710.84999… samples:
PAR = 648 / (13.5 * 52.6555…)
= 648 / (27 / 2 * (286 / 4.5 – 10.9))
= 648 / (27 / 2 * (28600 / 450 – 4905 / 450))
= 648 / (27 / 2 * 23695 / 450)
= 648 / (27 / 2 * 4739 / 90)
= 648 / (127953 / 180)
= 4320 / 4739
~ 0.911585…

To find out which one is “right” we need to get back to the real world! When it comes to sampling an analogue video signal, it has to be borne in mind that the video signal doesn’t start and stop abruptly. Also, allowance has to be made for small variations between individual pieces of equipment, due to the tolerance of the electronics. For example, one might be forgiven for manufacturing a PAL analogue to digital sampler that samples the active picture part of the line on the basis that it lasts 51.99999998 us! Therefore, the standards do allow for a range of active picture times. The ranges are as follows.
• NTSC.
The FCC (the US federal body that in this case gives legal backing to the ITU recommendations) allows the horizontal blanking period to be anything up to 18%, but no more, of the total line time. (The reference should be in this URL apparently, but I am unable to find it: http://www.atsc.org/standards/a_54a.pdf) In practice, it is possible to get this down to roughly 16.5%. These two extremes compare with the exact theoretical percentage of 10.9 / 63.555… = 17.15% approximately. If we convert these two extremes to actual times, we get active picture times, and thence samples and PARs, of:

16.5% --> 53.06888… us, 716.42999…, 0.904485…
18% --> 52.11555… us, 703.55999…, 0.921030…

• PAL. The horizontal blanking time is nominally 12 us, as already stated. However, in practice, it is defined as 12 +- 0.3 us. (Reference for this?) If we convert these two extremes to actual times, we get active picture times, and thence samples and PARs, of:

11.7 us --> 52.3 us, 706.05, 1.087742…
12.3 us --> 51.7 us, 697.95, 1.100365…

So, if we summarise these values for the PAR, we get (with the mathematically “exact” values in parentheses):
• NTSC: 0.904485… --> 0.921030… (0.911585…)
• PAL: 1.087742… --> 1.100365… (1.094017…)

Isn’t this a contradiction? I said earlier that the PAR is independent of the scan width, and now I’m showing that it changes with scan width. The answer is that in the previous section, where I said that the PAR does not change with scan width, I was referring to deliberate over (or under) sampling on the part of the A to D converter. Here, however, I am talking about the electronic tolerance that is allowed when attempting to sample to the exact scan widths of 52 us for PAL and 52.6555… us for NTSC. I do discuss deliberate over (and under) scanning later, when we learn that this is in fact what most video capture cards do!

When we are converting from one digital format to another, we need to know which PAR the source and destination are using.
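The tolerance arithmetic above can be reproduced mechanically. A quick sketch in Python, assuming 13.5 MHz sampling throughout and writing the 63.555… us total NTSC line exactly as 572/9:

```python
from fractions import Fraction

F = Fraction(27, 2)  # 13.5 MHz sample rate, in MHz

def pal_par(active_us):
    # samples = active time * rate; PAR = 768 / samples (768 = 576 * 4/3)
    return Fraction(768) / (F * active_us)

def ntsc_par(active_us):
    # as above, with 648 = 486 * 4/3
    return Fraction(648) / (F * active_us)

# PAL blanking of 12 +/- 0.3 us gives active widths of 52.3 and 51.7 us.
print(float(pal_par(Fraction("52.3"))))   # ~1.087742
print(float(pal_par(Fraction("51.7"))))   # ~1.100365

# NTSC: blanking between ~16.5% and 18% of the 63.555... us total line.
total = Fraction(572, 9)
print(float(ntsc_par(total * Fraction("0.835"))))  # ~0.904485
print(float(ntsc_par(total * Fraction("0.82"))))   # ~0.921030
```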
However, we clearly have a problem here because the standards that we have discussed so far allow a range of possible PARs. What should we do? Perhaps another standard is called for. :-)

SMPTE RP-187-1995 (http://www.smpte.org/shopping_cart/cart.cfm?function=add&productid=430) tries to resolve this by defining the PARs as follows:
• NTSC: 160 / 177 ~ 0.903955
• PAL: 1132 / 1035 ~ 1.093720

These values are certainly within the allowed tolerances, but the standard, unfortunately, has not been adopted. It has been suggested that this is because the values are too hard to adopt in practice. Anyone else want to have a go at a standard? :-)

When it comes to video sampling, there’s square, and then there’s square! To a mathematician, square means a quadrilateral with four right angles and four sides of equal length. However, to a member of the ITU standards committee, square can also mean a rectangle of dimensions 767 x 768! Why? “ITU-R BT.470” states that a “square pixel” will result if you sample the active picture of a PAL signal at exactly 14.75 MHz and scale to a 4:3 display, and the active picture of an NTSC signal at exactly 12.2727… MHz and scale to a 4:3 display. Are they mathematically right? Let’s find out.
• NTSC: 52.6555… * 12.2727… = 646.2272727... (646 45/198) samples
• PAL: 52 * 14.75 = 767 samples

Using our handy formulae, we can calculate the PARs as follows:
• NTSC: PAR = 648 / (646 45/198) : 1 = 4752 / 4739 : 1 = 4752 : 4739 ~ 1.002743 : 1
• PAL: PAR = 768 / 767 : 1 = 768 : 767 ~ 1.001304 : 1

As can be seen, neither of these so-called “square pixels” is actually square. However, both are probably near enough to “square” that the difference is not visible to the naked eye. (I speak for myself only here. If your name happens to be Colonel Steve Austin, you may think differently. :-) )

What effect does this have on our PARs?
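As an aside, this “square pixel” arithmetic is easy to check with exact fractions. A quick sketch (12.2727… MHz is written exactly as 135/11, and 52.6555… us as 4739/90):

```python
from fractions import Fraction

# ITU-R BT.470 "square pixel" sampling rates, in MHz.
ntsc_rate = Fraction(135, 11)     # 12.2727... MHz
pal_rate  = Fraction(59, 4)       # 14.75 MHz

ntsc_active = Fraction(4739, 90)  # 52.6555... us
pal_active  = Fraction(52)        # 52 us

ntsc_samples = ntsc_active * ntsc_rate
pal_samples  = pal_active * pal_rate

print(ntsc_samples)   # 14217/22, i.e. 646 5/22 (= 646 45/198)
print(pal_samples)    # 767

# PAR = 648 / samples for NTSC, 768 / samples for PAL.
print(Fraction(648) / ntsc_samples)   # 4752/4739 -- not quite square
print(Fraction(768) / pal_samples)    # 768/767   -- not quite square
```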
PARs using ITU-R BT.470 “Square Pixels”

Let’s go back to our mathematically “perfect” PARs:
• NTSC: 4320 / 4739
• PAL: 128 / 117

Now, we can adjust these PARs based on the new definition of “square”. This is not a particularly intuitive process (though interestingly, there are similarities between this and the Lorentz transformations of special relativity, whereby we adjust our frames of reference to give us new dimensions in the “real world”). To do this, we simply divide the “perfect” PARs by the ITU-R BT.470 “Square Pixels” PAR (you’ll have to take my word for this as it’s not obvious):
• NTSC: (4320 / 4739) / (4752 / 4739) = 10:11 ~ 0.909091 : 1
• PAL: (128 / 117) / (768 / 767) = 59:54 ~ 1.092593 : 1

Fortunately, both ratios are still within the allowable tolerance of PARs. Therefore, even though they are not strictly correct, because they are based on “square” pixels that aren’t actually square, they do give us PARs that are within the tolerances allowed in the various standards, and they are also simple ratios which should be easy to implement. It is written in various places that the PAR of NTSC is “exactly” 10:11 (X:Y), and that the PAR of PAL is “exactly” 59:54 (X:Y). IMHO, it is not worth getting too bogged down with the fact that this is only true if we define square as a 768:767 box. I believe that it is far more important that everyone is working to the same standards, so that at least we can all converse with each other happily. :-)

One final point about PARs. Remember that, apart from on displays, they don’t actually exist! OK, they exist as an abstract entity in the minds of people like us, but they don’t exist to a DVD player or a video capture card, for example. For instance, when digitizing the luminance values for a PAL video signal, we might end up with a value table something like (the “value” column is completely made up in this example):

Line 13: …

and so on for each line.
Notice that there is nothing in the encoding that explicitly states what the PAR is. This is indeed part of the problem in transferring data between digital sources: each source has its own “understanding” of what PAR it is working with. So, the table of information could be passed from one digital device to another and, because they work to their own built-in specifications, which say nothing about the PAR, the total image could easily become stretched or squashed in one or more dimensions. For this reason, it is up to us to know what the effective PAR is on the various devices that we are dealing with and mould the picture in such a way that the various devices all end up showing the picture with the same total aspect ratio. We need to look at how we do this in more detail.

Effect of PAR When Transferring

Let’s take a simple example. We have a PAL DV camera that we wish to edit on our computer and then burn the edited material onto a DVD for display on a plasma screen. In this scenario, there are 4 digital devices and a whole raft of analogue to digital and digital to analogue conversions. Let’s list them, using A and D for analogue and digital respectively. I’m assuming here that the computer is using a flat screen connected by a DVI cable, and that the plasma screen is connected by an RGB cable to the DVD player.
1. DV Camera (A) --> DV Tape (D)
2. DV Tape (D) --> AVI file on computer (D)
3. AVI file on computer (D) --> DVI Cable (D)
4. DVI Cable (D) --> Computer display (D)
5. [Edit of AVI file and conversion to MPEG VOB] (D)
6. MPEG VOB (D) --> DVD (D)
7. [DVD in DVD player]
8. DVD Player MPEG Decoder (D) --> MPEG Decoder output (A)
9. MPEG Decoder output (A) --> Scaler / Deinterlacer (D)
10. Scaler / Deinterlacer (D) --> RGB cable (A)
11. RGB cable (A) --> Plasma display (D)

I make that three A to D conversions and two D to A conversions – one of each of which occurs within the DVD player and which nothing can be done about*.
In an ideal world, there would be only one A to D conversion, at the DV tape stage. Imagine that throughout this whole process, the source (DV tape) and destination (plasma display) PARs are the same, and also that the number of horizontal and vertical pixels between the source and destination are the same. It doesn’t take much to work out that this is the ideal situation, because the image will appear on the destination display device with exactly the right aspect ratio (assuming, of course, that it is correct in the source). Not surprisingly, in the real world, this rarely happens, and we do need to consider the PAR and the number of horizontal and vertical pixels at the source and destination, and calculate from these what we need to do in the edit stage to get everything looking “perfect” on the destination display. The PAR has been covered ad nauseam. What about the horizontal and vertical pixels?

[* Actually, it is possible to take a direct digital feed from the MPEG decoder using an SDI interface and feed that directly to a scaler.]

In all the discussion about PARs, I didn’t once refer to the actual number of pixels on a horizontal line that are actually output. We know that we only need to sample the active picture and that the sample rate defines how many pixels this gives us. A reminder of the simple case of digitising an analogue PAL and NTSC picture of the standard heights, sampling only the active picture and staying within the various standards that we’ve mentioned:
• NTSC: 710.84999… samples
• PAL: 702 samples

Is this what is actually output? No! The ITU-R BT.601 standard actually states that we must output 720 pixels when digitising both NTSC and PAL. The reason for this will become clearer later on. As has been shown, when taking the various tolerances into account, there is a range of pixels that could represent the active picture.
These are approximately:
• NTSC: 703 – 717
• PAL: 697 – 707

Now the figure of 720 is looking rather handy, for a number of reasons.
1. It is greater than the maximum possible number of horizontal pixels for both NTSC and PAL. It therefore allows us to ensure that we have sampled the whole of the active picture.
2. It is a nice round number, and computers like dealing with nice round numbers! (OK, computers like dealing with binary, but humans like nice round numbers anyway!) In fact, probably most important is that it is divisible by 16. This matters because MPEG-2 encoders work in 16 x 16 blocks, so 720 pixels makes MPEG encoding much easier. The next mod 16 value down would be 704, which is too small for the PAL and NTSC standards, so 720 pixels is not really just a happy coincidence at all. (I was being flippant!)

This begs the question of what makes up the remaining pixels. The answer is that the standard doesn’t actually specify, so it is left to the specific implementation of the A to D converter and software (driver) to decide. Sometimes, the remainder is just black pixels. However, it would be equally plausible to scale the whole thing up, whilst leaving the PAR the same, so that the whole image fits into the 720 pixels. Another possibility is to scale horizontally, but leave the vertical as is. This, of course, messes up the PAR, but unfortunately this is also quite common – particularly with capture cards! If you know that this happens, and you know the amount of horizontal scaling, it is possible to account for it when editing the video. If you don’t know, what should you do? This is harder to answer because even if you take a reference picture, transfer it to the computer and count the pixels (looking, for example, for black at either end), you are at the mercy of the software that did the transfer in the first place.
Can you guarantee that it transferred the picture faithfully, or that the software that you are using is displaying the picture faithfully? There is a web site that assists in this department by playing a standard DVD and effectively counting the pixels. (http://www.doom9.org/index.html?/capture/capture_window.html) The best thing to do, therefore, is look up on the Internet what your particular A to D converter actually does and work from there – try to avoid guessing!

In all the discussion about PARs, I didn’t once refer to the actual number of vertical pixels that are actually output. In our minds, we might have assumed that for PAL, 576 lines are output and for NTSC, 486 lines are output. Is this right? Possibly. Nearly all PAL instances of A to D conversion output 576 lines if there are 576 active lines, but most (not all) instances of NTSC A to D conversion output 480, not 486, active lines. Why? Because, as I mentioned before, MPEG encoders work on 16 x 16 blocks, so it is very useful indeed if the height is divisible by 16. 576 is OK, but 486 is not. The next smallest mod 16 value is 480. Therefore, most A to D conversion crops to this size. Note that I said “crop” and not “scale”. Scaling 486 to 480 would be a nightmare and probably yield a fuzzy picture. It is far easier simply to lop off the top and bottom 3 rows and output the rest. This is indeed what is almost always done. (Some high-end A to D converters give all 486 rows.)

Capture cards make good examples of A to D converters because:
1. They are common, and so people can relate to them easily.
2. They do exactly what we’ve been discussing: they take an NTSC or PAL signal and convert it into digital data.

All capture cards capture at (or attempt to capture at) 13.5 MHz and capture (or attempt to capture) the standard active picture widths, right? Sadly, no, as has already been alluded to. If this were the case, life would be too simple! Most capture cards actually over-capture or under-capture.
That is, they capture more or less than the standard active picture widths. So, providing that they capture at exactly 13.5 MHz and over or under-sampling doesn’t affect the PAR, we can take the PARs to be:
• NTSC: 4320 / 4739
• PAL: 128 / 117

An example is called for. Suppose that we have a PAL video capture card that only captures 51.56 us of active picture instead of 52 us. This actually happens with the BT878 and the iuLabs universal WDM v3.1.28.36 driver. (http://www.doom9.org/index.html?/capture/introduction.html) How many horizontal pixels will be captured? 51.56 x 13.5 = 696 horizontal pixels, approximately. Because ITU-R BT.601 says that we must output 720 horizontal pixels, we would hope that the card will pad the image with 12 pixels either side, leaving the PAR alone. An ITU-R BT.601 compliant device reading this information will not know (or indeed care) that there has been any padding, because it will simply read the pixels and assume that the PAR is 128 : 117. This would, of course, give us the correct aspect ratio, and the resulting picture would have small black bars either side.

Again, this would make life far too simple, so what actually happens? Most capture cards ask the user prior to the capture what pixel output they require. On the surface, this sounds like a nice friendly question, but unfortunately it is highly dubious! The reason is that, if asked, I would prefer it to output exactly what had been scanned. So, my preferences would be (in order of preference):
1. 696 x 576
2. 720 x 576 (with black bars added either side)

In fact, either will do, as neither is particularly “harmful” to the PAR. It’ll never offer the first choice, however, because most card and driver combinations would hate to admit that they are in this case under-scanning. Well, that’s not so bad, is it? Sadly, yes. To see why, we need to look at horizontal resampling.

Horizontal Resampling (“Scaling”)

The driver of the A to D converter tries to be clever here.
If I ask for 720 x 576 pixels, it says “Ah – you want 720 pixels do you? In that case, I’ll resample the image horizontally, as in stretch it, to give you 720 pixels. No black bars for you, Sir!” You can scream and shout that you actually want your black bars, but “driver knows best”! We haven’t looked at horizontal resampling yet, but it doesn’t take much to work out that this does affect the PAR and invalidates our earlier assumption about a changing sample width not affecting the PAR, which consequently means that the capture card will have stretched (or shrunk) the image, albeit by a small amount. It is worth emphasising here that it is not the fact that the image has been over or under scanned that is the problem per se. It is simply that the card resamples the result of the A to D conversion to the format that you requested. How do we overcome this changing PAR? There are three solutions.
1. Request an output width that exactly matches what we know the card outputs, in this case 696 pixels, or failing that, request that it add black bars instead of resampling. Most cards don’t offer these options, which leaves solutions 2 and 3.
2. Resample the image using computer software to get back to the required PAR and, if necessary, add the black bars yourself.
3. Do nothing. This is not only the easiest option, but it may actually not be necessary, as the examples show.

Let’s take another example of capturing PAL using a sample rate of 13.5 MHz and a sample width of 53.333… us. How many pixels are captured? 53.333… * 13.5 = 720 pixels. Let’s also ask the card to output 720 x 576, which it should offer us. In this case, the card will not do any resampling because it has captured 720 pixels. Hurrah! One thing to note is that the card has, however, over-captured by 18 pixels. These 18 pixels are captured horizontal blanking data. However, the card will not worry about this and will show it as black.
Therefore, we will get black bars either side, but technically, these black bars are simply a result of over-capturing and not a post-capture addition of black bars. Either way, the PAR is not changed.

Now let’s take another example of capturing NTSC using a sample rate of 12.306394 MHz (this rate is explained in the next section) and a sample width of 52.80 us. How many pixels are captured? 52.80 * 12.306394 = 649.78 (approx.) pixels. Let’s also ask the card to output 640 x 480, which it should offer us. In this case, the card crops 6 lines vertically to get the resolution down to 480 pixels. We’d hope that it would also crop horizontally, but it doesn’t – it resamples the picture so that exactly 640 pixels are output. If we call the PAR before the resampling PARold and the PAR after the resampling PARnew, then we can say that:

PARnew = 649.78 / 640 * PARold

More generally:

PARnew = wf / Wnew * PARold, where:
PARnew is the PAR after the resampling;
w is the sample width in us;
f is the sample frequency;
Wnew is the new width in pixels;
PARold is the PAR prior to the resampling.

Notice that this is independent of whether it is NTSC or PAL, and it makes no statement about the number of active lines. However, our earlier equations related the PAR to the sample frequency, f. These assumed that the PAR had been unchanged by any resampling. Of course, for PARold this is true because the resampling has not yet taken place. So we can substitute in our equations for the PAR for NTSC and PAL. For PAL:

PARnew = w * f / Wnew * H * DAR / (52 * f) = (H / Wnew) * (w / 52) * DAR

Similarly for NTSC:

PARnew = w * f / Wnew * (H * DAR) * 90 / (4739 * f) = 90 * (H / Wnew) * (w / 4739) * DAR

PARnew = (H / Wnew) * (w / 52) * DAR, where:
PARnew is the PAR after the resampling;
H is the height of the active screen in pixels;
Wnew is the new width in pixels;
w is the sample width in us;
DAR is the display aspect ratio.
PARnew = 90 * (H / Wnew) * (w / 4739) * DAR, where:
PARnew is the PAR after the resampling;
H is the height of the active screen in pixels;
Wnew is the new width in pixels;
w is the sample width in us;
DAR is the display aspect ratio.

Now at last we have the PAR in terms of the sample width, w, and we are no longer assuming that the card does not adjust the PAR prior to outputting the pixels. We do, however, need to be careful about the meaning of “Wnew” if we are to use this equation both for a card that scales the picture to the requested output size and for one that adds black bars (or crops). Wnew is the width in pixels after any resampling (scaling) of the picture, but excluding any cropping or adding of black bars. So, a PAL device that had a sample width of 52 us, but which added black bars to get the horizontal size up to 720 pixels, would work as follows:

H = 576
Wnew = 702 (from 13.5 * 52), NOT 720
w = 52

Therefore PARnew = 576 / 702 * 52 / 52 * 4 / 3 = 128 / 117.

Therefore, the PAR is exactly what it was before the black bars were added. However, if instead the card resampled the picture to 720 pixels wide, we get:

PARnew = 576 / 720 * 52 / 52 * 4 / 3 = 128 / 120

so indeed it has changed from 128 : 117.

Before getting into the subject of converting from one format to another whilst maintaining an exact PAR, we need to look at devices that don’t sample at 13.5 MHz.

Non 13.5 MHz Devices – TV Cards

So far, we’ve taken 13.5 MHz to be a ubiquitous number. However, this is not always the case. A good example is a PAL TV tuner card running on a PC. (Sorry about all the PAL examples – it’s just that PAL is easier because there are fewer nasty fractions to deal with.) Let’s design such a PAL TV tuner card from scratch and see where it takes us. Firstly, what’s the target’s PAR? Well, being a computer monitor, it is exactly 1:1, i.e. square. We’re not talking about 767:768 square, we’re talking about “real” square.
We’ll also assume that the DAR is 4 : 3, which it almost always is. (1024 x 768, 800 x 600 and so on.) Finally, we’ll try to adhere to the standards described so far. Firstly, we’ll read the active 52 us of the PAL signal and sample it at 13.5 MHz. This gives us: 52 * 13.5 = 702 pixels wide x 576 pixels high. If we send this to a screen with a PAR of 1:1, we’ll get an image that is too tall. This is because PAL pixels, as we know, are slightly wider than they are tall. If we squeeze these pixels in on either side to get a 1:1 pixel, the image will stretch in height. So, what are the possibilities?

One thing we could do is resample the image horizontally in software before sending it to the screen, to force it into a 4:3 ratio. How much scaling? Required pixel width = 576 / 3 * 4 = 768. (This is an easy calculation because the pixels are square.) So, we need 768 pixels. Therefore, we need to resample the image horizontally by a factor of 768 / 702. It’s no coincidence that this is resampling by 128 / 117. Doing this will give us the required pixel and display aspect ratios. Any further resampling (for example to get full screen) is proportional and can be done in software. The downside is that this resampling has to be done “on the fly” by the TV card, which would be a processor intensive operation.

Another solution would be simply to change the sample rate so that it gives us 768 pixels “right out of the box”. That way, no horizontal resampling is required. What sample rate does this? All we have to do is increase the sample rate using the same proportions as earlier: 128 / 117.

New sample rate = 13.5 * 128 / 117 = 1728 / 117 = 192 / 13 ~ 14.769230… MHz

Problem solved! This is actually the sample rate that PAL TV tuner cards work to. Doing the same for an NTSC tuner card, we get:

New sample rate = 13.5 * 4320 / 4739 = 58320 / 4739 ~ 12.306394… MHz

Again, this is actually the sample rate that NTSC TV tuner cards work to.
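Both tuner rates can be checked in a couple of lines. A sketch using the exact fractions from above (the variable names are mine):

```python
from fractions import Fraction

base = Fraction(27, 2)  # 13.5 MHz

# Scale the sample rate by the reciprocal of the PAR so that the
# resulting pixels are genuinely square on a 4:3 computer display.
pal_tuner  = base * Fraction(128, 117)    # = 192/13 ~ 14.769230... MHz
ntsc_tuner = base * Fraction(4320, 4739)  # = 58320/4739 ~ 12.306394... MHz

# Sampling the active picture at these rates gives square-pixel widths:
print(pal_tuner * 52)                     # 768 pixels across 52 us
print(ntsc_tuner * Fraction(4739, 90))    # 648 pixels across 52.6555... us
```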
So, if we sample the picture at a rate that is different from 13.5 MHz, we will get the right number of pixels to display the picture at the correct 4 : 3 DAR. So far, we’ve dealt with the following types of device, for both NTSC and PAL:
• Theoretically “correct” A to D conversion of an analogue video signal.
• Video capture cards.
• PC TV tuner cards.

Notice that all three contain A to D converters. However, when it comes to transferring between digital devices, not all do an A to D conversion. (Actually, many do, but the conversion happens internally, with a D to A conversion again. An example of this is a DVD player. However, we will ignore the fact that this happens and simply treat the whole thing as a “black box” into which a signal goes and out comes a signal.) Some common examples of other types of “kit” are:
• DVDs
• DVD players
• DV cameras (combined A to D encoder and playback device)
• VCD (i.e. CD) players
• SVCD players

Notice that I deliberately listed DVDs and DVD players separately. This is because the whole process of manufacturing a DVD from the camera is effectively an A to D process, i.e. the camera (analogue) to an MPEG-2 stream on the DVD (digital). In practice, there is (or can be) more than one A to D conversion, but we do not need to worry about the details here. However, a DVD player takes an MPEG-2 stream (digital) and outputs some kind of analogue video signal, such as RGB (analogue). So, for a DVD player, we have a D to A conversion. (Again, there is actually more than one conversion, such as occurs between the MPEG-2 decoder chip and the scaler / deinterlacer chip. However, we can ignore these details.)

By comparison, what is often found on the Internet is a statement like “the sample rate of a DVD is 13.5 MHz”. Clearly, a DVD doesn’t sample anything – it’s just a shiny storage medium for data! The player also doesn’t sample anything, as it’s only fed with a digital signal.
(Again, this is not strictly true if we want to get into the innards of a DVD player, which we don’t!) What we can say is that the process of manufacturing a DVD involves sampling an analogue signal at 13.5 MHz, and that a DVD player will faithfully reproduce the analogue signal on the basis that the original encoding was done at 13.5 MHz. The distinction is somewhat pedantic, but it clarifies what is really meant by “the sample rate of a DVD is 13.5 MHz”. Most sites therefore don’t distinguish between “DVD” and “DVD player” and just say “DVD”. This site does the same, and assumes that DVD, DV players, VCD players and so on all faithfully reproduce the original analogue signal because they make an assumption about the original sample rate that was used.

Transferring Between Digital Devices – Examples

This is where it gets fun, because we have varying sample rates, PARs, horizontal lines and vertical lines! However, we also have the tools to work through methodically, always remembering that we want to preserve the aspect ratio of the source.

Example 1 – NTSC TV Tuner Card to NTSC SVCD

This example works through the process very laboriously. However, it puts to use everything learnt so far and makes subsequent calculations easier. Suppose that we have an NTSC TV tuner card that has an active sample width of 52.6555… us and that we want to make an NTSC SVCD out of it. The first thing to do is get more information about the source and destination. We look up on the Internet and find that the sample frequency of an SVCD is 9 MHz. We can immediately calculate the PAR of an SVCD using the equation that relates the PAR to the sample frequency:

PAR = (H * DAR) * 90 / (4739 * f)

PAR of SVCD = 648 / (52.6555… * 9) = 6480 / 4739

Next, we look up on the Internet how wide and high an SVCD is in pixels. The answer is 480 x 480. Let’s stop and think about what this means for a second.
The way I look at it is: if I started with an NTSC analogue signal and had to encode an SVCD out of it, what would I have to do? The first thing would be to find out what the sample rate is for SVCD. We have already found this out when calculating the PAR: it is 9 MHz. How much of the 52.6555… us is sampled when creating an SVCD? We need to look this up, and discover that it is 53.333… us (that is, it over-samples the active picture). How many pixels does this create horizontally? 9 x 53.333… = 480 pixels. What about vertically? Well, NTSC is 486 lines, and it comes as no surprise that the creation of an SVCD invariably crops 6 of the lines to give 480 vertically. So, in summary, for SVCD we have a sample rate of 9 MHz, giving a PAR of 6480 / 4739, a sampling width of 53.333… us, a total sampling window of 480 x 486, an output window of 480 x 480 and no scaling, so the PAR stays at 6480 / 4739. Phew! We shall assume that the SVCD player is faithful to these figures when it outputs the analogue video stream.

Now for the source TV tuner card. We know the sample frequency is 12.306394 MHz. Straight away we can calculate the PAR:

PAR = 648 / (52.6555… x 12.306394) = 1 (i.e. square pixels). (H * DAR was shown as 648.)

After looking up how much of the 52.6555… us active picture is sampled, we find that it is exactly all of it and no more, i.e. 52.6555… us. This gives us a horizontal pixel width of 52.6555… * 12.306394… = 648 pixels. However, NTSC cards usually resample the output to 640 pixels. What about vertically? Well, NTSC is 486 lines as ever, and the TV card, like nearly all cards, crops 6 lines, so it outputs 480 pixels vertically. This means that the output is 640 x 480, but the picture has been resampled, giving a revised PAR of (1 * 648) / (1 * 640) = 81 / 80.

Looking at the vertical size of the picture, we’re in luck because both the source and destination are the same. What do we have to do in the horizontal dimension? There are so many different ways of approaching this!
Here’s one of them: “Clearly”, we need to resample the picture before creating the SVCD. Let’s start by resampling it to get the PAR back to 1:1. What is the width in pixels we’ll be creating? We simply need to resample back to what the width in pixels was before the card resampled it down, i.e. 648 pixels. So, we resample the picture creating an output of 648 x 480. This gets the PAR back to 1:1. What if now we resample the picture again down to 480 pixels wide? This would give us the correct number of pixels for an SVCD. This would then be creating a PAR of 648 / 480 = 1.35 (remember that we started off with a PAR of 1). However, the required PAR for SVCD is 6480 / 4739 = 1.37 approximately (slightly wider than we got). So, simply resampling down to 480 will not do the trick. OK, let’s turn the question on its head and ask what we have to resample the 648 pixels down to, to get a PAR of 6480 / 4739? If we denote the number of pixels that we resample down to as X, we can say that: 648 / X = 6480 / 4739, because it is a simple ratio. Therefore, X = 473.9 So, let’s resample the 648 pixels down to 473.9 pixels (In the real world, it’ll have to be 474 pixels, but let’s stay hypothetical for now.) This gives us a PAR of 6480 / 4739 and an output of 473.9 x 480. Because we are now short of 6.1 pixels, we’ll have to add these ourselves. In summary, this is what we’ve done: 1. Start with a picture of 640 x 480 from our TV card. 2. Resample the picture up to 648 x 480 pixels. 3. Resample the picture down to 473.9 x 480 pixels. 4. Add 6.1 pixels to the output to give 480 x 480. Notice stages 2 and 3 resample the picture up and then down. Can the first resampling be skipped? Look at it like this: If at stage 2, we resampled it up to 12345678 x 480 pixels and then resampled it down to 473.9 pixels, would it make a difference? The answer is no, because every time you resample, the same total picture exists, it is simply being stretched one way and then the other.
Indeed, resampling twice in a row is bad because each scaling operation loses some information. Therefore, we can change the procedure to: 1. Start with a picture of 640 x 480 from our TV card. 2. Resample the picture down to 473.9 x 480 pixels. 3. Add 6.1 pixels to the output to give 480 x 480. In the real world, with integer pixels: 1. Start with a picture of 640 x 480 from our TV card. 2. Resample the picture down to 474 x 480 pixels. 3. Add 3 pixels either side to the output to give 480 x 480. Before leaving this example, the fact that we were able to remove the second stage in the calculation does beg the question of whether we care that the TV card itself resamples the image or not. This is a question that is often skipped or glanced over on many web sites. In this case, clearly not. However, PARs were a big concern when we talked about how to get the picture displaying correctly on a screen. Why is this? Simply put, if we have a sequence of events that goes something like: 1. Start with a picture X * Y. 2. Resample it down horizontally. 3. Resample it up horizontally. 4. Resample it up horizontally. 5. Resample it down horizontally. 6. Resample it up horizontally. 7. Add pixels to the left and right. … only the final resampling need be done. Simply squeezing and stretching the picture a few times doesn’t affect the final result. In the case of a computer monitor being the final destination, there is only one resampling stage that we need to worry about and that is the one of the card resampling the image down to 640 pixels wide. In the case of converting to an NTSC SVCD, however, there are subsequent resampling operations that take place, making the preceding one superfluous. Therefore, it is only the final resampling that we really need to worry about.
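The final three-step procedure can be checked numerically. A sketch (the 640-pixel input width and both PARs are the figures from this example; the function name is mine):

```python
from fractions import Fraction

def ntsc_card_to_svcd(width):
    """The example's final procedure: resample the 640-wide capture down to
    the width giving the SVCD PAR, then pad with black up to 480 pixels."""
    par_src = Fraction(81, 80)         # card resampled 648 -> 640 wide
    par_dst = Fraction(6480, 4739)     # NTSC SVCD
    exact = width * par_src / par_dst  # 473.9 exactly
    resampled = round(exact)           # 474 in whole pixels
    pad = 480 - resampled              # 6 pixels, i.e. 3 either side
    return exact, resampled, pad

print(ntsc_card_to_svcd(640))  # (Fraction(4739, 10), 474, 6)
```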
Example 2 – NTSC TV Tuner Card without scaling to NTSC SVCD This example is almost identical to the previous except that in this case, the TV tuner card crops rather than resamples from 648 to 640 pixels wide. Does this make a difference to the calculations? (Admittedly, we’ve lost a small amount of the picture, but let’s not worry about that – all we want is to keep the aspect ratio correct.) We start with a picture that is 640 x 480 and has a PAR of 1. What if we simply resample the picture down to 480 pixels wide? This would give us the correct number of pixels for an SVCD. However, we would then be creating a PAR of 640 / 480 = 1.333, but the required PAR is 1.367 approximately (slightly wider than we got). So, simply resampling down to 480 will not do the trick. OK, let’s turn the question on its head and ask what we have to resample the 640 pixels down to, to get a PAR of 6480 / 4739? If we denote the number of pixels that we resample down to as X, we can say 640 / X = 6480 / 4739, because it is a simple ratio. Therefore, X ≈ 468.05 So, in this case, we have to resample the picture down to 468.05, let’s say 468, being the nearest integer. This leaves us short of 12 pixels, so we’ll pad 6 pixels either side. In the real world, with integer pixels: 1. Start with a picture of 640 x 480 from our TV card. 2. Resample the picture down to 468 x 480 pixels. 3. Add 6 pixels either side to the output to give 480 x 480. Clearly, it does make a difference that the TV tuner card crops and does not scale the picture. Can we come up with a generic equation or procedure that takes either situation into account? Of course we can. In the first (TV card resamples) example, we resampled to: (648 * 4739) / 6480 pixels, and in the second (TV card crops) example, we resampled to: (640 * 4739) / 6480 pixels. The only difference is the 648 and the 640.
However, in the first example, we notice that: 640 = 648 * (80 / 81) = 648 / PAR of scaled picture Re-writing this equation, we get: 648 = 640 * PAR of scaled picture We can write this into the first example’s scaling, saying that we scale the picture to: (640 * PAR of scaled picture) * 4739 / 6480 pixels. In the second example, the PAR of the scaled picture was 1 because it wasn’t scaled, so we can also re-write the second example as: (640 * PAR of scaled picture) * 4739 / 6480 pixels. Bingo! We have two equations that are the same, so we now have a way of calculating the scaling that we have to do which is “independent of” (i.e. we don’t need to think hard about it – not mathematically independent) whether our capture card scales or crops. We need to rearrange the equation slightly, because 4739 / 6480 = 1 / (6480 / 4739) = 1 / PAR of NTSC SVCD. Therefore, the completely generic equation that defines the number of pixels “X” horizontally that we need to scale to is: X = (Wsrc * PAR of scaled picture) * (1 / PAR of destination), which is the same as: X = Wsrc * Fh, where: Wsrc is the actual width of the source after any pre-output cropping or scaling has happened; Fh is the factor by which we scale horizontally, and defined as: Fh = PARsrc / PARdst, where: PARdst is the PAR of the destination; PARsrc is the PAR of the source after any pre-output cropping or scaling has happened. Note: If PARsrc is the result of pre-output resampling, we can use the equation for PARnew, above, to work out what it is. Considering the hoops that we’ve had to jump through to get here, this is a remarkably simple equation! However, we have explained where it comes from and why it is an “exact” equation and not just an approximation. Wsrc will be obvious because it is usually printed on the outside of the box, or is in the software configuration! PARdst we can look up in a table (or work it out easily enough from the sample rate).
PARsrc is the trickiest one, but only just, because it requires that we know whether the card scales or crops the image and hence what the PAR is. Don’t forget that some cropping or adding of black pixels may still be required after the resampling. Example 3 – PAL DV to a computer monitor We should be able to put our new generic equation to good use now. Firstly, let’s find out about the source. We look up and find that it samples a standard PAL picture of 576 lines at 13.5 MHz, giving a PAR of 128 / 117. This is the PAR of the source before the DV camera has done any last minute scaling. Does it indeed do this? We look up the sample width and find it to be 53.333… us. So, the width horizontally is 13.5 * 53.333… = 720. The standards say that we must output 720 pixels horizontally and 576 vertically, which is rather handy because it means that we don’t have to crop or resample horizontally or vertically, we’ll just leave it as is. Therefore, the PAR is unchanged and we can say that PARsrc = 128 / 117. That’s the hard bit! Now the easy bit of looking up the PAR of the destination – the computer monitor, which of course is 1. So, we need to resample the image to the following number of pixels: 720 * ((128 / 117) / (1 / 1)) ~ 787.692… (Round it up to 788) So, we resample the picture to 788 x 576. On a computer monitor, we probably don’t need to do any last minute cropping or adding of black bars, but let’s suppose that the software we use insists on some predefined format such as 768 x 576; we would have to take off 20 pixels, and 10 off each side would make sense. Let’s check that this figure of 787.692… is right and suppose that we do indeed resample the picture to this width. What proportion of the screen contains active picture (remembering that the DV originally over sampled each active picture from 52 us to 53.333… us)? Clearly it is 52 / 53.333… (or 702 / 720 – whichever way you want to look at it).
So, the proportion of picture that is active in our new window is: 787.692… * 52 / 53.333… = 768 So, the DAR is 768 : 576 = 4 : 3. So, we have successfully got back to a 4 : 3 DAR. Note that in the real world, there’s usually no need to do this resampling because the software that we’re using to display the picture usually “knows” that we need a 1 : 1 PAR and so it resamples “on the fly”. However, it does depend on how the data were transferred from the DV camera. As an example, transferring uncompressed DV (about 6 times the size at which it is stored on the DV tape) results in Windows Media Player 10 not correcting for a 1 : 1 PAR monitor. However, transferring the data as a Type 2 AVI (same size as on the tape) results in Windows Media Player 10 correcting the picture to display correctly. (I did measure the “uncorrected” PAR and got PAR: 0.912, which is very close to the expected PAR of 0.914062.) The moral of the story is, don’t use WMP 10 to decide what the internal PAR of the file is! Example 4 – PAL DV to a PAL DVD We already know that the output of a PAL DV is not resampled and has a PAR of 128 / 117. What about the destination? The width of the picture in pixels is 720 and the PAR is 128 / 117. Again, the height of both the source and destination are 576, so we do not need to worry about that. We resample the picture to the following number of pixels: 720 * ((128 / 117) / (128 / 117)) = 720 So, we don’t actually need to resample the DV at all. What about cropping or adding black bars? As a result of not resampling, we’ll end up with a picture 720 x 576, which is exactly what we need for a PAL DVD, so no cropping or adding of black bars is needed either. This has to be one of the easiest transfers to do! Note that whilst you are editing the video on a computer, the aspect ratio will be wrong, of course, unless the software compensates on the screen (as long as it doesn’t adjust the video data itself). This does not matter, because the final output will be right for a PAL DVD.
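The generic horizontal equation X = Wsrc * (PARsrc / PARdst) derived earlier can be exercised against Examples 2–4 (a sketch; the PARs are the exact fractions used above):

```python
from fractions import Fraction

def target_width(w_src, par_src, par_dst):
    """X = Wsrc * Fh with Fh = PARsrc / PARdst (horizontal-only case)."""
    return w_src * par_src / par_dst

# Example 2: cropped 640-wide NTSC capture (PAR 1) -> NTSC SVCD (6480/4739)
print(float(target_width(640, Fraction(1), Fraction(6480, 4739))))  # ~468.05
# Example 3: PAL DV (PAR 128/117) -> square-pixel monitor (PAR 1)
print(float(target_width(720, Fraction(128, 117), Fraction(1))))    # ~787.69
# Example 4: PAL DV -> PAL DVD, identical PARs, so nothing to do
print(target_width(720, Fraction(128, 117), Fraction(128, 117)))    # 720
```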
Example 5 – PAL DVD to NTSC DVD This is getting into tiger country now, because this will involve scaling vertically to get the correct PAR. In general, vertical scaling has a noticeably adverse effect on the image. However, let’s not worry about that and carry on regardless. One word of caution: if resampling vertically, ensure that the source material is deinterlaced first. DVDs are generally (but not always) interlaced, and any attempt to scale an interlaced picture will be horrible! The source material has a size of 720 x 576 and a PAR of 128 : 117, and the destination has a size of 720 x 480 and a PAR of 4320 : 4739. Let’s first try to resample vertically and see where this takes us. So, firstly we resample vertically to get 480 lines, giving us 720 x 480 pixels. What does this do to the PAR? The new PAR can be calculated as follows: PAR = 128 / 117 * 480 / 576 = 320 : 351 (If you think that this should be 128 / 117 * 576 / 480, remember that resampling takes all the existing information and “overlays” the new sample on top of it, giving a new PAR.) What do we have to do horizontally to get the PAR to 4320 : 4739? Put another way, how many pixels do we have to resample to horizontally to get the PAR to 4320 : 4739? Let’s call this number X and do the sums. If we resample to X pixels wide, the new PAR is: (320 / 351) * (720 / X) But we want this to equal 4320 : 4739, so (320 / 351) * (720 / X) = 4320 / 4739 Therefore, X ≈ 720.076, call it 720. So, we don’t need to resample horizontally to get extremely close to the correct PAR! Let’s see if we can get a more generic equation out of this. Rearranging X in the above equation and re-substituting back in the 128 / 117 * 480 / 576 for the 320 / 351, we get: X = (720 * (128 / 117) * (480 / 576)) / (4320 / 4739) But, the 720 is actually Wsrc, 128 / 117 is PARsrc, 4320 / 4739 is PARdst, and 480 / 576 was the factor by which we scaled the image vertically.
So, we can write: X = Wsrc * Fh * Fv, where: Wsrc is the actual width of the source after any pre-output cropping or scaling has happened; Fh is the factor by which we scale horizontally, and defined as: Fh = PARsrc / PARdst, where: PARdst is the PAR of the destination; PARsrc is the PAR of the source after any pre-output cropping or scaling has happened. Note: If PARsrc is the result of pre-output resampling, we can use the equation for PARnew, above, to work out what it is. Fv is the factor by which we scale vertically, and defined as: Fv = Hdst / Hsrc, where: Hdst is the output height in pixels of the destination; Hsrc is the output height in pixels of the source. This is a more generic version of the equation in example 2. In example 2, Fv is 1 because we are not scaling vertically. When resampling in software, such as with VirtualDub, you are asked what the target sizes are that you are after as opposed to any PARs. So, taking this example again, we would tell VirtualDub that the target height was 480, because that is exactly what we need for NTSC. From this, we calculate Fv, which is 480 / 576. We look up or calculate the PARs of the source and destinations in a table and from that calculate Fh, which is (128 / 117) / (4320 / 4739). We know the width of the source is 720, so multiplying the three together, we get 720.076, which we round to 720. So, we tell VirtualDub to scale to 720 x 480. VirtualDub then re-samples the picture to this size and because the destination already has the correct PAR, no further cropping is required. Note that this gets the destination aspect ratio correct. Of course, this does not make it NTSC because the frame rate is still wrong. Software such as VirtualDub can correct the frame rate as well, though I have not tested how well it does it. Example 6 – PAL DVD to NTSC VCD Now we are really going to town!
Knowing that VCDs are much “smaller” than DVDs, it doesn’t take much to work out that this will involve resampling in both dimensions. Let’s go straight for the calculation. We look up in a table and find that the destination height is 240. The height of the source is 576. So, Fv is 240 / 576. We look up the PARs of the source and destinations in a table and from that calculate Fh, which is (128 / 117) / (4320 / 4739). The width of the source is 720, so X = 360.038 approximately. So, if we resample the picture to 360 x 240, we get the correct aspect ratio. However, in this case, the target size is 352 x 240. Therefore, we need to crop the picture by 8 pixels horizontally (4 on the left and 4 on the right) to get the correct size. Again, we also need to change the frame rate from 25 to 29.97 to get an NTSC VCD. Example 7 – PAL DV to PAL SVCD This should be an easier example again, but is worth doing to get comfortable with the equations. Therefore, we’re not going to look up pre-calculated values in tables this time, we’re going to work them out for ourselves. Some values we have to look up from the standards. These are: · PAL lines for both DV and SVCD: 576 · PAL DV sample frequency: 13.5 MHz · PAL SVCD sample frequency: 9 MHz · DV does not change the PAR through resampling prior to output Source – PAL DV: PAR = 576 * (4 : 3) / (52 * 13.5) = 128 : 117 Any pre-output resampling that affects the PAR? No. Output height: 576 Output width: 720 Destination – PAL SVCD: PAR = 576 * (4 : 3) / (52 * 9) = 64 : 39 Output height: 576 Output width: 480 The calculation: Fh = (128 / 117) / (64 / 39) = 2 / 3 Fv = 576 / 576 = 1 Wsrc = 720 X = 720 * 1 * 2 / 3 = 480 So, if we resample the DV output to 480 pixels horizontally, we will get the correct PAR for SVCD and we will output 480 x 576 pixels, which is rather handily the size we want anyway, so no cropping need be done. In nearly all cases, A to D devices do not sample the standard widths of 52 us for PAL and 52.6555… us for NTSC.
In fact, they nearly all change the scan widths so that the number of pixels output on the horizontal axis is exactly the number needed by the format’s standards. This means that the A to D device doesn’t need to worry about adding black bars or cropping the picture horizontally. It simply outputs all the pixels sampled, giving the required size horizontally. The main exceptions to this are capture cards, which resample (scale) the picture horizontally so that they output the correct number of horizontal pixels. This, of course, affects the PAR, which affects the resampling calculations. These formulae can be used to calculate the pixel aspect ratio from the height of the active screen, the display aspect ratio and the sample frequency. “Active screen” height here means the height before any vertical cropping has occurred. For NTSC it is usually 486 and for PAL it is usually 576. In all the examples used, the DAR is 4 : 3. For NTSC: PAR = (H * DAR) * 90 / (4739 * f) assuming that a non-standard sample width doesn’t change the PAR, where: H is the height of the active screen in pixels; DAR is the display aspect ratio; f is the sample frequency. For PAL: PAR = (H * DAR) / (52 * f) assuming that a non-standard sample width doesn’t change the PAR, where: H is the height of the active screen in pixels; DAR is the display aspect ratio; f is the sample frequency. These formulae can be used to calculate the pixel aspect ratio after it has undergone pre-output resampling (for example by a video capture card) from the height of the active screen, the new width of the picture in pixels, the sample width and the display aspect ratio. “Active screen” height here means the height before any vertical cropping has occurred. For NTSC it is usually 486 and for PAL it is usually 576. In all the examples used, the DAR is 4 : 3.
For PAL: PARnew = (H / Wnew) * (w / 52) * DAR, where: PARnew is the PAR after the resampling; H is the height of the active screen in pixels; Wnew is the new width in pixels; w is the sample width in us; DAR is the display aspect ratio. For NTSC: PARnew = 90 * (H / Wnew) * (w / 4739) * DAR, where: PARnew is the PAR after the resampling; H is the height of the active screen in pixels; Wnew is the new width in pixels; w is the sample width in us; DAR is the display aspect ratio. This formula can be used to calculate the width of picture in pixels that we need to resample to, to get the correct PAR, from the PARs of the source and destination and the output heights of the source and destination screens. X = Wsrc * Fh * Fv, where: Wsrc is the actual width of the source after any pre-output cropping or scaling has happened; Fh is the factor by which we scale horizontally, and defined as: Fh = PARsrc / PARdst, where: PARdst is the PAR of the destination; PARsrc is the PAR of the source after any pre-output cropping or scaling has happened. Note: If PARsrc is the result of pre-output resampling, we can use the equation for PARnew, above, to work out what it is. Fv is the factor by which we scale vertically, and defined as: Fv = Hdst / Hsrc, where: Hdst is the output height in pixels of the destination; Hsrc is the output height in pixels of the source. Table of PARs and Dimensions 1. These values are not particularly important if the device doesn't resample the picture prior to output, as would happen if it adds black bars (or crops) or it already outputs exactly the right number of pixels horizontally anyway. The obvious exception is for the TV cards listed.
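The three summary formulas can be collected into one small Python module and checked against the worked examples (a sketch; function and constant names are mine):

```python
from fractions import Fraction

NTSC_US = Fraction(4739, 90)  # NTSC active width, 52.6555... us
PAL_US = Fraction(52)         # PAL active width, 52 us

def par(h, dar, f_mhz, active_us):
    """PAR from active height, display aspect ratio and sample frequency."""
    return h * dar / (active_us * f_mhz)

def par_new(h, w_new, w_us, dar, active_us):
    """PAR after pre-output resampling to w_new pixels (sample window w_us)."""
    return Fraction(h, w_new) * (w_us / active_us) * dar

def scale_width(w_src, par_src, par_dst, h_src, h_dst):
    """X = Wsrc * Fh * Fv."""
    return w_src * (par_src / par_dst) * Fraction(h_dst, h_src)

pal_dv   = par(576, Fraction(4, 3), Fraction(27, 2), PAL_US)   # 128/117
ntsc_dvd = par(486, Fraction(4, 3), Fraction(27, 2), NTSC_US)  # 4320/4739
print(float(scale_width(720, pal_dv, ntsc_dvd, 576, 480)))     # Example 5: ~720.076
print(scale_width(720, pal_dv, par(576, Fraction(4, 3), 9, PAL_US), 576, 576))  # Example 7: 480
print(par_new(486, 640, NTSC_US, Fraction(4, 3), NTSC_US))     # resampling TV card: 81/80
```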
Clustering Affine Subspaces: Algorithms and Hardness We study a generalization of the famous k-center problem where each object is an affine subspace of dimension Δ, and give either the first or significantly improved algorithms and hardness results for many combinations of parameters. This generalization from points (Δ=0) is motivated by the analysis of incomplete data, a pervasive challenge in statistics: incomplete data objects in R^d can be modeled as affine subspaces. We give three algorithmic results for different values of k, under the assumption that all subspaces are axis-parallel, the main case of interest because of the correspondence to missing entries in data tables. 1) k=1: Two polynomial time approximation schemes which run in poly(Δ, 1/ε)nd time. 2) k=2: An O(Δ^1/4)-approximation algorithm which runs in poly(n, d, Δ) time. 3) General k: A polynomial time approximation scheme which runs in 2^O(Δk log k(1+1/ε^2))nd time. We also prove nearly matching hardness results; in both the general (not necessarily axis-parallel) case (for k ≥ 2) and in the axis-parallel case (for k ≥ 3), the running time of an approximation algorithm with any approximation ratio cannot be polynomial in even one of k and Δ, unless P = NP. Furthermore, assuming that the 3-SAT problem cannot be solved subexponentially, the dependence on both k and Δ must be exponential in the general case (in the axis-parallel case, only the dependence on k drops to 2^Ω(√k)). The simplicity of the first and the third algorithms suggests that they might actually be used in statistical applications. The second algorithm, which demonstrates a theoretical gap between the axis-parallel and general case for k=2, displays a strong connection between geometric clustering and classical coloring problems on graphs and hypergraphs, via a new Helly-type theorem.
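Not part of the thesis text, but to make the incomplete-data modelling concrete: a data row with missing entries corresponds to an axis-parallel flat, and the distance from a candidate centre to that flat is the Euclidean distance over the coordinates that are present (missing coordinates can always be matched for free). A minimal sketch:

```python
import math

def dist_to_flat(center, row):
    """Distance from `center` to the axis-parallel flat of all completions
    of `row`; None marks a missing coordinate (a free direction of the flat)."""
    return math.sqrt(sum((c - x) ** 2
                         for c, x in zip(center, row) if x is not None))

def one_center_cost(center, rows):
    """The k=1 objective: the farthest flat from the candidate centre."""
    return max(dist_to_flat(center, r) for r in rows)

rows = [(0.0, None, 2.0), (4.0, 1.0, None)]
print(one_center_cost((2.0, 1.0, 2.0), rows))  # max(2.0, 2.0) = 2.0
```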
Item Type: Thesis (Master's thesis) Subject Keywords: k-center; Clustering; Helly theorem; High dimension; Flats; Minimum Enclosing Ball; Incomplete data Degree Grantor: California Institute of Technology Division: Engineering and Applied Science Major Option: Computer Science Thesis Availability: Public (worldwide access) Research Advisor(s): • Schulman, Leonard J. Defense Date: 2012 Funders: Samsung Scholarship (grant number unspecified) Record Number: CaltechTHESIS:07052012-191337554 Persistent URL: https://resolver.caltech.edu/CaltechTHESIS:07052012-191337554 DOI: 10.7907/VF38-NT60 Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided. ID Code: 7171 Collection: CaltechTHESIS Deposited By: Euiwoong Lee Deposited On: 19 Jul 2012 23:46 Last Modified: 03 Oct 2019 23:56 Thesis Files: PDF (Complete Thesis) - Final Version
Dual power assignment via second Hamiltonian cycle A power assignment is an assignment of transmission power to each of the wireless nodes of a wireless network, so that the induced graph satisfies some desired properties. The cost of a power assignment is the sum of the assigned powers. In this paper, we consider the dual power assignment problem, in which each wireless node is assigned a high- or low-power level, so that the induced graph is strongly connected and the cost of the assignment is minimized. We improve the best known approximation ratio from [Formula presented]−[Formula presented]+ϵ≈1.617 to [Formula presented] ≈1.571. Moreover, we show that the algorithm of Khuller et al. [11] for the strongly connected spanning subgraph problem, which achieves an approximation ratio of 1.617, is a 1.522-approximation algorithm for symmetric directed graphs. The innovation of this paper is in achieving these results by using interesting conditions for the existence of a second Hamiltonian cycle. • Approximation algorithm • Computational geometry • Power assignment ASJC Scopus subject areas • Theoretical Computer Science • Computer Networks and Communications • Computational Theory and Mathematics • Applied Mathematics
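Not from the paper itself, but to illustrate the problem being approximated: a dual power assignment gives each node one of two radii, inducing a directed edge u → v whenever v lies within u's radius, and a feasible assignment must make that graph strongly connected. A brute-force sketch on a toy instance (all names are mine; exponential, for illustration only):

```python
from itertools import product
from math import dist

def induced_edges(points, levels, r_low, r_high):
    """Directed edges of the communication graph for a 0/1 power assignment."""
    r = [r_high if lv else r_low for lv in levels]
    return {(u, v) for u in range(len(points)) for v in range(len(points))
            if u != v and dist(points[u], points[v]) <= r[u]}

def strongly_connected(n, edges):
    def reach(start, es):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for a, b in es:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen
    # Strongly connected iff node 0 reaches everyone forwards and backwards.
    return (len(reach(0, edges)) == n
            and len(reach(0, {(b, a) for a, b in edges})) == n)

def cost(levels, c_low, c_high):
    return sum(c_high if lv else c_low for lv in levels)

# Three collinear nodes; low power (radius 1, cost 1) already suffices here.
pts = [(0, 0), (1, 0), (2, 0)]
best = min((cost(lv, 1, 2), lv) for lv in product([0, 1], repeat=3)
           if strongly_connected(3, induced_edges(pts, lv, 1.0, 2.0)))
print(best)  # (3, (0, 0, 0))
```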
Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board | Maharashtra Board Book Solution Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board: Welcome to the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board blog. In the Maharashtra Board Class 9 Maths curriculum, Chapter 5 covers various important concepts and formulas that students need to understand and apply to solve mathematical problems effectively. To assist students in their studies, we have created a comprehensive and detailed solutions PDF for Class 9 Maths Chapter 5. Maths 1 Digest Std 9 Maharashtra Board Class 9 Maths Chapter 1 Sets Chapter 1 Sets Practice Set 1.1 Chapter 1 Sets Practice Set 1.2 Chapter 1 Sets Practice Set 1.3 Chapter 1 Sets Practice Set 1.4 Maths Solution Class 9 Maharashtra Board Chapter 2 Real Numbers Chapter 2 Real Numbers Practice Set 2.1 Chapter 2 Real Numbers Practice Set 2.2 Chapter 2 Real Numbers Practice Set 2.3 Chapter 2 Real Numbers Practice Set 2.4 Chapter 2 Real Numbers Practice Set 2.5 Chapter 2 Real Numbers Problem Set 2 Maharashtra State Board 9th Maths Solution Chapter 3 Polynomials Chapter 3 Polynomials Practice Set 3.1 Chapter 3 Polynomials Practice Set 3.2 Chapter 3 Polynomials Practice Set 3.3 Chapter 3 Polynomials Practice Set 3.4 Chapter 3 Polynomials Practice Set 3.5 Chapter 3 Polynomials Practice Set 3.6 Chapter 3 Polynomials Problem Set 3 Class 9 Maths Solution Maharashtra Board Chapter 4 Ratio and Proportion Chapter 4 Ratio and Proportion Practice Set 4.1 Chapter 4 Ratio and Proportion Practice Set 4.2 Chapter 4 Ratio and Proportion Practice Set 4.3 Chapter 4 Ratio and Proportion Practice Set 4.4 Chapter 4 Ratio and Proportion Practice Set 4.5 Chapter 4 Ratio and Proportion Problem Set 4 9th Class Maths Solution Maharashtra Board Chapter 5 Linear Equations in Two Variables Chapter 5 Linear Equations in Two Variables Practice Set 5.1 Chapter 5 Linear Equations in Two
Variables Practice Set 5.2 Chapter 5 Linear Equations in Two Variables Problem Set 5 Maharashtra Board 9th Maths Solution Chapter 6 Financial Planning Chapter 6 Financial Planning Practice Set 6.1 Chapter 6 Financial Planning Practice Set 6.2 Chapter 6 Financial Planning Problem Set 6 Maharashtra Board Class 9 Maths Chapter 7 Statistics Chapter 7 Statistics Practice Set 7.1 Chapter 7 Statistics Practice Set 7.2 Chapter 7 Statistics Practice Set 7.3 Chapter 7 Statistics Practice Set 7.4 Chapter 7 Statistics Practice Set 7.5 Maharashtra State Board 9th Maths Solutions Part 2 Geometry Pdf Maths 2 Digest Std 9 Std 9 Maths Solutions Maharashtra State Board Chapter 1 Basic Concepts in Geometry Chapter 1 Basic Concepts in Geometry Practice Set 1.1 Chapter 1 Basic Concepts in Geometry Practice Set 1.2 Chapter 1 Basic Concepts in Geometry Practice Set 1.3 Chapter 1 Basic Concepts in Geometry Problem Set 1 Maharashtra Board Class 9 Maths Chapter 2 Parallel Lines Chapter 2 Parallel Lines Practice Set 2.1 Chapter 2 Parallel Lines Practice Set 2.2 Chapter 2 Parallel Lines Problem Set 2 Maharashtra Board Class 9 Maths Chapter 3 Triangles Chapter 3 Triangles Practice Set 3.1 Chapter 3 Triangles Practice Set 3.2 Chapter 3 Triangles Practice Set 3.3 Chapter 3 Triangles Practice Set 3.4 Chapter 3 Triangles Practice Set 3.5 Chapter 3 Triangles Problem Set 3 Maharashtra Board Class 9 Maths Chapter 4 Constructions of Triangles Chapter 4 Constructions of Triangles Practice Set 4.1 Chapter 4 Constructions of Triangles Practice Set 4.2 Chapter 4 Constructions of Triangles Practice Set 4.3 Chapter 4 Constructions of Triangles Problem Set 4 Maharashtra Board Class 9 Maths Chapter 5 Quadrilaterals Chapter 5 Quadrilaterals Practice Set 5.1 Chapter 5 Quadrilaterals Practice Set 5.2 Chapter 5 Quadrilaterals Practice Set 5.3 Chapter 5 Quadrilaterals Practice Set 5.4 Chapter 5 Quadrilaterals Practice Set 5.5 Chapter 5 Quadrilaterals Problem Set 5 Maharashtra Board Class 9 Maths Chapter 6 Circle 
Chapter 6 Circle Practice Set 6.1 Chapter 6 Circle Practice Set 6.2 Chapter 6 Circle Practice Set 6.3 Chapter 6 Circle Problem Set 6 Maharashtra Board Class 9 Maths Chapter 7 Co-ordinate Geometry Chapter 7 Co-ordinate Geometry Practice Set 7.1 Chapter 7 Co-ordinate Geometry Practice Set 7.2 Chapter 7 Co-ordinate Geometry Problem Set 7 Maharashtra Board Class 9 Maths Chapter 8 Trigonometry Chapter 8 Trigonometry Practice Set 8.1 Chapter 8 Trigonometry Practice Set 8.2 Chapter 8 Trigonometry Problem Set 8 Maharashtra Board Class 9 Maths Chapter 9 Surface Area and Volume Chapter 9 Surface Area and Volume Practice Set 9.1 Chapter 9 Surface Area and Volume Practice Set 9.2 Chapter 9 Surface Area and Volume Practice Set 9.3 Chapter 9 Surface Area and Volume Practice Set 9 This Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board contains step-by-step solutions for all the questions in the textbook, providing students with a valuable resource to enhance their understanding and improve their problem-solving skills. Whether you are a student looking for additional practice or a teacher searching for reliable reference material, this solution PDF is a valuable tool for mastering the concepts covered in Chapter 5 of the Class 9 Maths curriculum. Importance of having access to Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board Having access to the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board can be incredibly beneficial for both students and teachers. Here are a few reasons why: a. Comprehensive and step-by-step solutions: The Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board provides students with detailed explanations of how to solve each question in Chapter 5. This helps students understand the concepts and improve their problem-solving skills. b. Additional practice: The Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board offers students the opportunity to practice solving a wide range of questions. 
By attempting these questions and referring to the solutions, students can gain confidence in their abilities and enhance their understanding of the topics covered in Chapter 5. c. Reliable reference material: Teachers can use the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board as reliable reference material to assist their students in understanding the concepts covered in Chapter 5. It can be used to clarify doubts, provide additional examples, and offer extra practice resources. d. Time-saving: Instead of spending time trying to figure out the solutions to complex problems, students can refer to the solutions PDF and save precious time. This allows them to focus on understanding the concepts rather than getting stuck on difficult questions. In conclusion, the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board is a valuable resource that can greatly aid students in their studies and help them excel in their examinations. Whether you are a student looking for additional practice or a teacher seeking reliable reference material, this solutions PDF is a must-have tool to enhance your understanding and improve your problem-solving skills. Download Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board Maharashtra Board students can greatly benefit from the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board. Here’s why: a. As per the Maharashtra State Board curriculum: The Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board is specifically designed to align with the Maharashtra Board syllabus. This ensures that students are getting access to solutions that are relevant and accurate for their examinations. b. Familiarity with the exam pattern: The Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board provides students with a clear understanding of the exam pattern and format followed by the Maharashtra Board.
By practicing with these solutions, students can become familiar with the type of questions that are likely to appear in their exams.

c. Language clarity: The Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board is written clearly and concisely, using language that is easy to understand. This is particularly beneficial for Maharashtra Board students, as it helps them grasp the concepts more effectively.

d. Notes for revision: The solutions PDF also offers additional notes and explanations that can serve as helpful revision material for students. These notes can be used to reinforce the concepts learned in class and provide a quick way to revise before exams.

In summary, the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board is a valuable resource for Maharashtra Board students. It not only helps them understand and solve the questions in Chapter 5 but also prepares them for their exams by providing a clear understanding of the exam pattern. With its language clarity and revision notes, this solutions PDF is a must-have tool for students aiming to excel in their studies.

Where to find reliable and comprehensive Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board

Finding a reliable and comprehensive Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board is crucial for Maharashtra Board students. To ensure accuracy and reliability, it is recommended to refer to trusted educational websites and platforms. Some popular options include the official website of the Maharashtra State Board, reputable educational portals, and renowned publishers. These sources often provide solutions that are in line with the board syllabus and exam pattern. Additionally, they may offer comprehensive explanations and notes for better understanding and revision. Students need to verify the credibility and authenticity of the source before accessing and utilizing the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board.
By choosing reliable platforms, students can confidently rely on the solutions provided to enhance their learning experience and excel in their exams.

Download Class 9th Other Books Solution
- Maharashtra Board Class 9 Hindi Lokbharti Solution
- Maharashtra Board Class 9 English Kumarbharati Solution
- Maharashtra State Board Class 9 Marathi Aksharbharati Solution
- Maharashtra Board Class 9 Aamod Sanskrit Solution
- Maharashtra Board Class 9 Geography Solution
- Maths Part 1 Class 9 PDF Maharashtra Board Textbook
- Maths Part 2 Class 9 Pdf Maharashtra Board Textbook
- Maths Practical Book Class 9 Maharashtra Board Solutions
- Maharashtra Board English Workbook Class 9 Solutions

Benefits of using a well-structured PDF for exam preparation

Using a well-structured PDF for exam preparation offers several benefits for the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board. Firstly, it provides convenience and accessibility, as students can easily access the solutions anytime and anywhere through their electronic devices. Moreover, a well-structured PDF allows students to organize their study materials efficiently, easily navigate through topics, and quickly find specific solutions or explanations. It also helps in better time management during revisions, as students can focus on specific areas that need more practice. Additionally, a well-structured PDF provides a comprehensive overview of the entire chapter, aiding in a better understanding of concepts and their applications. Overall, using a well-structured PDF can greatly enhance the exam preparation process and help students achieve better results.
Download Class 9th Books PDF Maharashtra State Board
- Hindi Textbook Class 9 PDF Maharashtra Board
- 9th Standard English Textbook PDF
- Maths Part 1 Class 9 PDF Maharashtra Board Textbook
- Maths Part 2 Class 9 Pdf Maharashtra Board Textbook
- Maths Practical Book Class 9 Maharashtra Board
- Maharashtra State Board 9th Class Science Book PDF
- Maharashtra Board 9th Std Science Practical Book PDF
- Maharashtra Board English Workbook Class 9 PDF

How to effectively utilize the solutions for better understanding and grades

To make the most of the solutions provided in the Class 9 Maths Chapter 5 Solutions PDF for Maharashtra Board, it is essential to utilize them effectively. Here are some tips to help you do just that:

1. Start by understanding the concepts: Before diving into the solutions, make sure you have a clear understanding of the concepts covered in the chapter. Refer to your textbook or any other study material to grasp the fundamentals.

2. Read the solutions step-by-step: While going through the solutions, read every step carefully and try to understand the logic behind each calculation. This will help you not only in solving similar problems but also in gaining a deeper understanding of the underlying concepts.

3. Practice, practice, practice: Once you have understood the solutions, the key to mastering the chapter is practice. Solve as many problems as you can, paying attention to different variations.

4. Analyze your mistakes: It is crucial to analyze your mistakes and learn from them. If you encounter any difficulties or make errors while solving the problems, try to understand where you went wrong and how you can avoid similar mistakes in the future.

5. Seek help when needed: If you come across any doubts or concepts that are unclear, don't hesitate to seek help from your teacher, friends, or online resources. Clarifying your doubts will ensure you have a solid foundation for the chapter.
By effectively utilizing the solutions provided in the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board, you can not only enhance your understanding of the subject but also improve your grades in the examination. So, make the most of these resources and excel in your studies.

Conclusion and recommendation to download Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board

In conclusion, having access to the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board is an invaluable resource for students looking to excel in their mathematics studies. By utilizing the solutions effectively, understanding the concepts, practicing consistently, analyzing mistakes, and seeking help when needed, students can enhance their understanding and improve their grades.

To ensure easy access to these solutions, I highly recommend downloading the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board. This will allow you to refer to the solutions whenever you need them, even when you don't have internet connectivity. Having a digital copy of the solutions also means that you can easily zoom in to read the steps clearly and annotate any additional notes or explanations that may be useful. So, don't miss out on this opportunity. Download the Class 9 Maths Chapter 5 Solutions PDF Maharashtra Board today and make the most of this valuable resource to excel in your mathematics studies.
{"url":"https://maharashtraboardbookolution.in/class-9-maths-chapter-5-solutions-pdf-maharashtra-board/","timestamp":"2024-11-07T06:06:47Z","content_type":"text/html","content_length":"235193","record_id":"<urn:uuid:1ad523c8-dc18-4024-89f8-874c1e3f3ede>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00584.warc.gz"}
Goal Seek: A simple tool to reverse engineer an answer in Excel

Trial and error is time consuming. Goal Seek fixes that. Goal Seek is a simple tool in Excel that lets you go straight to the answer and set it to anything you want. Excel then works backwards and evaluates what input value is required in order for the answer to be achieved.

Example 1: Using Goal Seek To Forecast Revenue

Let's start with a really simple example - a classic sales calculation. Now, let's say we want to reach a revenue of $300.00. Assuming the same quantity of widgets are ordered, what would we need to increase the price to? OR … assuming we sell widgets at the same price, what quantity would the customer need to order?

Here's the spreadsheet (the original screenshot shows the Price in B1, the Quantity in B2 and the Revenue, calculated as B1*B2, in B3).

Here's how we would use Goal Seek to calculate the new PRICE.

1. Select cell B3.
2. Select Data | What If Analysis | Goal Seek.
3. Change 'To Value' to 300.
4. Set 'By Changing Cell' to B1.
5. Click OK to see the results.

This tells us that we would need to increase the price from $0.13 to $0.15 to achieve our revenue target of $300.00. At this point you can click OK to keep the new price or Cancel to revert back to the original value.

Let's use Goal Seek again to calculate the new QUANTITY.

1. Select cell B3.
2. Select Data | What If Analysis | Goal Seek.
3. Change 'To Value' to 300.
4. Set 'By Changing Cell' to B2.
5. Click OK.

This tells us that we would need to increase the quantity to 2,308 (rounding up the decimals) to achieve our revenue target of $300.00.

Example 2: Using Goal Seek To Calculate Monthly Mortgage Repayments

A mortgage repayment has 3 contributing factors: the amount borrowed, the interest rate and the term of the loan.

Here's the spreadsheet, already populated (the amount borrowed in B1, the annual interest rate in B2, the term in years in B3 and the monthly repayment in B4).

The repayment in B4 uses the PMT function: =PMT(B2/12, B3*12, -B1). Notice how all the figures were adjusted to work on a common MONTHLY basis (i.e. B2/12 and B3*12). There is a minus sign in front of the Present Value (PV). This makes it negative, which makes the final repayment figure positive.

Anyway back to Goal Seek!
The current monthly repayment based on these figures is $2,347. Let's say you get a pay rise and you can now afford to pay $2,600 instead of $2,347. Using Goal Seek you can discover 3 things:

1. How much more you could borrow (by changing B1).
2. How much higher an interest rate you could absorb (by changing B2).
3. How much shorter the term could be (by changing B3).

Let's answer those questions.

1. Select cell B4.
2. Select Data | What If Analysis | Goal Seek.
3. Change 'To Value' to 2600.
4. Set 'By Changing Cell' to B1.
5. Click OK.

The amount you can borrow has increased from $450,000 to $498,421.

And setting 'By Changing Cell' to B2 …

The interest rate could rise from 4.75% to 5.66%.

And setting 'By Changing Cell' to B3 …

The term reduces from 30 years to 24.37 years.

What next?

So that's Goal Seek. A simple tool that provides incredible functionality and purpose. I hope you found plenty of value in this post. I'd love to hear your biggest takeaway in the comments below together with any questions you may have. Have a fantastic day.

Jason Morrell is a professional trainer, consultant and course creator who lives on the glorious Gold Coast in Queensland, Australia. He helps people of all levels unleash and leverage the power contained within Microsoft Office by delivering training, troubleshooting services and taking on client projects. He loves to simplify tricky concepts and provide helpful, proven, actionable advice that can be implemented for quick results. Purely for amusement he sometimes talks about himself in the third person.
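What Goal Seek does can also be mimicked in a few lines of code. Below is a hedged sketch in Python: `pmt` mirrors the Excel formula =PMT(B2/12, B3*12, -B1) used above, and `goal_seek` is a simple bisection search standing in for Excel's solver (the function names are mine, and Excel's actual iterative algorithm is not plain bisection):

```python
def pmt(principal, annual_rate, years):
    """Monthly repayment, mirroring Excel's =PMT(rate/12, years*12, -principal)."""
    r = annual_rate / 12
    n = years * 12
    return r * principal / (1 - (1 + r) ** -n)

def goal_seek(f, target, lo, hi, tol=1e-7):
    """Find x in [lo, hi] with f(x) == target, by bisection.

    Assumes f is monotonic over [lo, hi] and that the target is bracketed,
    which holds for all three mortgage questions above.
    """
    f_lo = f(lo) - target
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(mid) - target) * f_lo <= 0:
            hi = mid                      # target lies in the lower half
        else:
            lo, f_lo = mid, f(mid) - target  # target lies in the upper half
    return (lo + hi) / 2

monthly = pmt(450_000, 0.0475, 30)                                        # ~2347
new_loan = goal_seek(lambda p: pmt(p, 0.0475, 30), 2600, 100_000, 1_000_000)
new_rate = goal_seek(lambda r: pmt(450_000, r, 30), 2600, 0.001, 0.20)
new_term = goal_seek(lambda y: pmt(450_000, 0.0475, y), 2600, 5, 40)
```

Running this reproduces the article's figures: a repayment of about $2,347, a borrowing capacity of about $498,421, a break-even rate of about 5.66%, and a term of about 24.37 years.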
{"url":"https://officemastery.com/_goal-seek/","timestamp":"2024-11-03T12:01:51Z","content_type":"text/html","content_length":"586117","record_id":"<urn:uuid:9a307445-60c6-4821-8333-bf749e2e4a1d>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00015.warc.gz"}
Clustering Dynamic Textures In this project, we address the problem of clustering dynamic texture (DT) models, i.e., clustering linear dynamical systems (LDS). Given a set of DTs (e.g., each learned from a small video cube extracted from a large set of videos), the goal is to group similar DTs into several clusters, while also learning a representative DT “center” that can sufficiently summarize each group. This is analogous to standard K-means clustering, except that the datapoints are dynamic textures, instead of real vectors. The parameters of the LDS lie on a non-Euclidean space (non-linear manifold), and hence cannot be clustered directly with the K-means algorithm, which operates on real vectors in Euclidean space. An alternative to clustering with respect to the manifold structure is to directly cluster the probability distributions of the DTs. One method for clustering probability distributions, in particular, Gaussians, is the hierarchical expectation-maximization (HEM) algorithm, proposed in [Vasconcelos & Lippman, NIPS’98]. The original HEM algorithm takes a Gaussian mixture model (GMM) and reduces it to another GMM with fewer components, where each of the new Gaussian components represents a group of the original Gaussians (i.e., forming a cluster of Gaussians). In this project, we derive an HEM algorithm for clustering dynamic textures through their probability distributions. The resulting algorithm is capable of both clustering DTs and learning novel DT cluster centers that are representative of the cluster members, in a manner that is consistent with the underlying generative probabilistic model of the DT. A robust DT clustering algorithm has several applications in video and motion analysis, including: 1) hierarchical clustering of motion; 2) video indexing for fast video retrieval; 3) DT codebook generation for the bag-of-systems motion representation; 4) semantic video annotation via weakly-supervised learning. 
DT clustering can also serve as an effective method for learning DTs from a large dataset of video via hierarchical modeling. Finally, DT clustering can also be applied to semantic music annotation, where each concept is modeled by a mixture of DTs.

Selected Publications

Music Applications
• A Bag of Systems Representation for Music Auto-tagging. IEEE Trans. on Audio, Speech and Language Processing (TASLP), Dec 2013
• Time Series Models for Semantic Music Annotation. IEEE Trans. on Audio, Speech and Language Processing (TASLP), Jul 2011
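As a loose, self-contained illustration of the clustering idea described above (not the paper's HEM algorithm, and with 1-D Gaussians standing in for dynamic textures), a K-means-style loop can use KL divergence as the dissimilarity between distributions and moment matching to form each cluster's representative "center" distribution:

```python
import math

def kl_gauss(p, q):
    """KL(p || q) for 1-D Gaussians given as (mean, variance) pairs."""
    mu_p, var_p = p
    mu_q, var_q = q
    return 0.5 * (math.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def merge(members):
    """Moment-matched Gaussian summarizing a group (equal weights)."""
    k = len(members)
    mu = sum(m for m, _ in members) / k
    # law of total variance: mean of the variances + variance of the means
    var = sum(v for _, v in members) / k + sum((m - mu) ** 2 for m, _ in members) / k
    return (mu, var)

def cluster_gaussians(gaussians, k, iters=20):
    """K-means-style clustering where the 'points' are Gaussian distributions."""
    centers = list(gaussians[:k])  # naive initialization from the first k inputs
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for g in gaussians:
            j = min(range(k), key=lambda j: kl_gauss(g, centers[j]))
            groups[j].append(g)
        centers = [merge(grp) if grp else centers[j]
                   for j, grp in enumerate(groups)]
    labels = [min(range(k), key=lambda j: kl_gauss(g, centers[j]))
              for g in gaussians]
    return centers, labels
```

The same skeleton applies to DTs once `kl_gauss` is replaced by a divergence between LDS distributions and `merge` by the HEM-style learning of a representative DT, which is exactly where the technical difficulty of the project lies.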
{"url":"http://visal.cs.cityu.edu.hk/research/hemdtm/","timestamp":"2024-11-14T03:22:32Z","content_type":"text/html","content_length":"19813","record_id":"<urn:uuid:a5999110-e264-4126-bf16-21233b8a94e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00492.warc.gz"}
The Four Core DAX Measures

The DAX language is deceptively complex. The dynamics of the language itself are actually easy. It's the nuances that trip people up, even experienced DAX programmers. If you are just getting started, though, you'll likely struggle with where to start. This article will help you with many projects. We'll discuss the four core DAX measures that you should always create from the start.

Many DAX projects work with sales data. Most of the tutorials or articles you search for on Power BI or DAX will use sales data as the samples to illustrate the techniques. This is likely because sales data is what many business owners are most concerned with. After all, if they can't increase sales, they won't stay in business for very long.

Related: How to Get Started in Power BI

Because a fair number of projects will be sales-based, it makes sense to define measures that will be used for these types of projects. When I first started learning DAX, I never knew where to start on a project. With this short tutorial, you'll know exactly what measures to produce as a starting point. By defining a set of core measures that should be included for every project, they will become second nature to you. That is something I wish tutorials taught me in the beginning.

The four core DAX measures are Total Sales, Total Cost, Total Profits, and Profit Margin. When you define a set of core measures, you'll have at your disposal the tools you need to define more complicated measures. That's because most of the Key Performance Indicators will be derived from these core measures.

Total Sales

The Total Sales measure is an aggregation of the table that contains your sales information. This is usually called Sales or Orders. Some datasets will call this the Transaction table. No matter what it's called, the table will usually contain the quantity of products sold and the price. Unfortunately, sometimes the price exists in other tables, like the Products table.
This complicates defining the Total Sales measure, but only slightly. We'll cover this below. If you are lucky, you'll have a column that is already defined called Sales Amount or Revenue. This is by far the easiest when creating measures, but the least likely.

Let's start with the easiest definition: when a Sales or Revenue column is already defined in the Sales table. Here is how to define the Total Sales measure:

Total Sales := SUM('Sales'[Revenue])

If the column in the Sales table is called Sales Amount, then Total Sales would be defined as:

Total Sales := SUM('Sales'[Sales Amount])

See how simple that is? If instead you need to calculate total sales from quantity and price, then you'll need to use SUMX() instead of SUM(). Suppose we have the Quantity Sold and the Price columns, both defined in the Sales table:

Total Sales := SUMX('Sales', 'Sales'[Quantity Sold] * 'Sales'[Price])

What if one of the columns needed for the calculation exists in another table? Let's suppose that Price is not in the Sales table, but instead in the Products table. In this case, you'll use the RELATED() function, as follows:

Total Sales := SUMX('Sales', 'Sales'[Quantity Sold] * RELATED('Products'[Price]))

Total Cost

The same logic applies to Total Cost. The difference is that instead of using price, you'll use the cost column. You'll need to locate where your costs reside. You'll see it in both the Sales table and the Products table. This discovery is all part of the Exploratory Data Analysis (EDA) process. This occurs at the beginning of a data analysis project and is used to find out where to source the data that will be used to answer the questions defined by the initial requirements.

Let's assume that as part of your EDA process, you discover that the cost is contained in the Sales table. Therefore, to get the total cost, you would multiply Cost by Quantity Sold:

Total Cost := SUMX('Sales', 'Sales'[Quantity Sold] * 'Sales'[Cost])

You may see Cost referred to as Unit Cost in many datasets.
Total Profits

Okay, here is where it gets really hard (sarcasm included!). Since you already have Total Sales and Total Cost, Total Profits is simply Total Sales - Total Cost:

Total Profits := [Total Sales] - [Total Cost]

Yep! That is how simple it is to get the Total Profits measure!

Profit Margin

Profit Margin := DIVIDE([Total Profits], [Total Sales], 0)

You may have noticed that the DIVIDE() function is being used for the profit margin. It's not required to use this function, but you'll be happy you did. If you tried to divide the two numbers using the standard divide symbol (/) you'd run into errors whenever the denominator is 0. The DIVIDE() function in Power BI prevents that situation from ever occurring.

Related: Need Data for Your Projects? Take this course to learn how to find data for your projects.

These Four Core Measures Serve as Building Blocks

After you define these four core measures, you create a foundation for future measures to be developed. For instance, if you need to get average sales per some category, you'll use the [Total Sales] measure as the base for that calculation. The same is true for the other measures, too.

Why Not Create Calculated Columns Instead?

Most beginners to Power BI and DAX default to using calculated columns. That's usually because they are familiar with working in spreadsheets. The reasoning makes sense, as all of the data is contained in the tables and all you need to do is use the other columns in the table to define the new calculated column. It's much more intuitive for beginners to go this route.

But with BI tools it will come back to haunt you in the long run, because calculated columns take up memory, and tools like Power BI are meant to handle large amounts of data. The more memory you consume by defining calculated columns, the faster you'll run out of memory. You'll also end up slowing down your model, too! Measures don't consume memory until they are used, and most measures are used for aggregations.
Aggregations use up significantly less memory. The downside is that measures are a more difficult concept for people to grasp. That's because it's hard to get your head around what the underlying components of the aggregation are and what filters are applied to the measure for it to produce the result. The basics of DAX and filters are topics too vast to cover in one article and would bloat the article unnecessarily. There are plenty of tutorials on the topic and I will be including some on this website soon.

The takeaway for this article is to learn the basics of DAX and then define the core measures that you'll likely use in many projects going forward. This will remove the guesswork from the process and will get you to a point in your learning where you'll be further ahead than most beginners.
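To illustrate the building-blocks point, here is one way the average-sales-per-category idea could be written on top of [Total Sales]. This is only a sketch: it assumes your model has a 'Products' table with a Category column, which this article never defines:

```dax
Average Sales per Category :=
AVERAGEX ( VALUES ( 'Products'[Category] ), [Total Sales] )
```

AVERAGEX iterates the distinct categories and evaluates [Total Sales] in each category's filter context, so the core measure does the heavy lifting.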
{"url":"https://datasciencereview.com/the-four-core-dax-measures/","timestamp":"2024-11-06T04:55:04Z","content_type":"text/html","content_length":"73175","record_id":"<urn:uuid:cb00a3c4-86a2-48ab-9769-e2ddd4d751e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00475.warc.gz"}
Body Physics 2.0

Walking with Friction

In the previous chapter we used the Impulse-Momentum Theorem to analyze how force, time, mass, and change in velocity are related during falling and landing. We can use the same strategies to analyze forces associated with everyday locomotion.

Everyday Example: Responding to a Code Blue

Jolene is walking with a speed of 2.5 mph down the hospital corridor when a code blue is called over the intercom. She stops, then turns around and starts walking the other direction, toward the room where the code was called, at a speed of 4.0 mph. If Jolene's mass is 61 kg and she tried to make that change in motion very quickly, for example in just 0.75 s, how large of a force must she receive from the ground?

We will start with the impulse-momentum theorem for an object with constant mass:

$$\vec{F}_{net}\,\Delta t = m\,\Delta \vec{v}$$

We want to solve for the force applied to Jolene so we divide both sides by the time interval:

$$\vec{F}_{net} = \frac{m\,\Delta \vec{v}}{\Delta t}$$

We have only been given speeds in the problem, but in order to analyze velocity we need to define a direction of motion. Let's assign Jolene's initial direction as the negative direction, so her initial velocity was -2.5 mph. In that case her final velocity was +4.0 mph. We can calculate her change in velocity as:

$$\Delta v = v_f - v_i = 4.0~\text{mph} - (-2.5~\text{mph}) = 6.5~\text{mph}$$

If we convert our answer to units of m/s we get: 6.5 mph = 2.9 m/s. (You can check this yourself using unit analysis or an online unit converter.)

Now we can enter the mass, change in velocity, and time interval into the impulse-momentum equation:

$$F_{net} = \frac{(61~\text{kg})(2.9~\text{m/s})}{0.75~\text{s}} \approx 240~\text{N}$$

To make the desired change in motion Jolene must experience a 240 N horizontal frictional force from the floor.

Newton's Third Law of Motion

Looking back at the example above, we could analyze the larger system that includes both Jolene as one internal component and the building + Earth as a second internal component. In that case the frictional forces between Jolene and the floor are internal to the system and we can say the system is isolated.
Therefore the system momentum cannot change, so the net impulse on the system must be zero, and we know the building + Earth must have experienced the same impulse as Jolene, but in the opposite direction. In other words, during the time that Jolene received a force from the floor, she must have also exerted an equal force back onto the floor in the opposite direction. This result is summarized by Newton's Third Law of Motion, which states that when system A applies a force to system B, then system B must also apply an equal and opposite force back on system A. The equal and opposite forces referenced in Newton's Third Law are known as third law pair forces (or third law pairs). Other Third Law pair forces include:

• The Earth pulls down on you due to gravity and you pull back up on the Earth due to gravity.
• A falling body pushing air out of its way and air resistance pushing back on the body.
• You pull on a rope and the rope pulls back against your hand via tension.
• You push on the wall, and the wall pushes back with a normal force.
• A rocket engine pushes hot gasses out the back, and the gasses push back on the rocket in the forward direction.
• You push your hand along the wall surface, and the wall pushes back on your hand due to kinetic friction.
• You push your foot against the ground as you walk, and the floor pushes back against your foot due to friction (static if your foot doesn't slip, kinetic if it does).

You may have noticed that in each of the cases above there were two objects listed. This is because Newton's Third Law pairs must act on different objects. Therefore, Third Law pair forces cannot be drawn on the same free body diagram and can never cancel each other out. (Imagine if they did act on the same object: then they would always balance each other out, and no object could ever have a net force, so no object could ever accelerate!)

At some point in your life you may have tried to change direction too quickly on a slick floor and slipped.
Let’s check to make sure that wouldn’t happen to Jolene in the previous example. We know that the friction force between Jolene and the floor must be 240 N, so we will check to see if the maximum static frictional force can be that large. If not, then Jolene would slip. In that case kinetic friction would eventually bring her to a stop, but not as quickly and possibly not before she lost her balance. First we start with the equation for max static friction force: Looking up friction coefficients we find that the rubber-concrete friction coefficient is typically 0.6 or greater. Assuming the floor is level and only Jolene’s horizontal motion is changing, then the normal force must be balancing Jolene’s weight (if these were not balanced then she would have a vertical change in velocity as well). In that case we substitute her weight for the normal force: Finally we enter our values: The max static frictional force is 358 N and the quick turn described in the previous example only requires 240 N of frictional force so Jolene could easily make the turn around without slipping. the change in momentum experienced by system is equal to the net force on the system multiplied by the amount of time that force is applied a quantity of speed with a defined direction, the change in speed per unit time, the slope of the position vs. time graph a system for which neither thermal energy or particles are allowed to leave or enter. If object A exerts a force on object B, then object B also exerts an equal and opposite force on object A for the same amount of time a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid the force that is provided by an object in response to being pulled tight by forces acting from opposite ends, typically in reference to a rope, cable or wire the outward force supplied by an object in response to being compressed from opposite directions, typically in reference to solid objects. 
- a force that resists the sliding motion between two surfaces
- a force that acts on surfaces in opposition to sliding motion between the surfaces
- a force that resists the tendency of surfaces to slide across one another due to a force(s) being applied to one or both of the surfaces
- a graphical illustration used to visualize the forces applied to an object
- the total amount of remaining unbalanced force on an object
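The arithmetic in the Jolene example can be checked with a short script (a sketch; the variable names are mine, and 0.44704 m/s per mph is the exact conversion factor):

```python
MPH_TO_MS = 0.44704  # exact conversion: 1 mph in m/s

# Required force to reverse direction: F = m * delta_v / delta_t
m = 61.0                            # Jolene's mass, kg
dv = (4.0 - (-2.5)) * MPH_TO_MS     # change in velocity: 6.5 mph in m/s
dt = 0.75                           # time interval, s
F_required = m * dv / dt            # about 236 N, or 240 N to two significant figures

# Maximum static friction available on a level floor: F_max = mu_s * m * g
mu_s = 0.6                          # rubber on concrete, typical lower bound
g = 9.8                             # m/s^2
F_max = mu_s * m * g                # about 359 N

assert F_required < F_max           # Jolene can turn without slipping
```

The required force comes out to about 236 N (240 N after rounding, as in the text), comfortably below the roughly 359 N of static friction available.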
{"url":"https://openoregon.pressbooks.pub/bodyphysics2ed/chapter/motion-laws/","timestamp":"2024-11-08T07:51:13Z","content_type":"text/html","content_length":"146207","record_id":"<urn:uuid:64e1a756-96b4-4af0-a610-6fc7d928b9d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00571.warc.gz"}
Understanding Elementary Shape Extra Questions for Class 6

CBSE Class 6 Maths Chapter wise Important Questions - Free PDF Download

CBSE Important Questions for Class 6 Maths are available in printable format for free download. Here you may find NCERT Important Questions and Extra Questions for Class 6 Mathematics chapter wise, with answers also. These questions will act as chapter wise test papers for Class 6 Mathematics. These Important Questions for Class 6 Mathematics are as per the latest NCERT and CBSE pattern syllabus and will help students achieve a high score in Board Examinations.

Maths Topics to be covered for Class 6

• Knowing Our Numbers: Introduction, Comparing Numbers, Large Numbers in Practice, Estimation nearest tens, hundred, thousands, outcomes of numbers situations, sum and difference, product, Using brackets, Roman Numerals.
• Whole Numbers: Introduction, Whole Numbers, The Number Line, Properties of whole numbers, Patterns in whole numbers.
• Playing With Numbers: Introduction, Factors and Multiples, Prime and Composite Numbers, Tests for Divisibility of numbers, Common factors and Common multiples, Some more Divisibility Rules, Prime factorization, Highest Common Factor (HCF), Least Common Multiple (LCM), Some Problems on HCF and LCM.
• Basic Geometrical Ideas: Introduction, Points, A Line Segment, A Line, Intersecting Lines, Parallel Lines, Ray, Curves, Polygons, Angles, Triangles, Quadrilaterals, Circles.
• Understanding Elementary Shapes: Introduction, Measuring line segments, Angles-right and straight, Angles-Acute, Obtuse and Reflex, Measuring angles, Perpendicular Lines, Classification of Triangles, Quadrilaterals, Polygons, 3-dimensional shapes.
• Integers: Introduction, integers, addition of integers, subtraction of integers with the help of a number line, Ordering of Integers. Note: As per SCERT guidelines, content not to be taught: section 6.2 except 6.2.1 and 6.2.2, section 6.3 and exercise 6.2, section 6.4 and exercise 6.3.
• Fractions: Introduction, Fraction, Fraction on Number Line, Proper, Improper and Mixed Fractions, Equivalent fractions, Simplest form of a fraction, Like fractions, Comparing fractions, Addition and Subtraction of fractions.
• Decimals: Introduction, Tenths, Hundredths, Comparing decimals, Using decimals, Addition of numbers with decimals, Subtraction of numbers with decimals.
• Data Handling: Introduction, recording data, organizing data, Pictograph, interpretation of a pictograph, drawing a pictograph, a bar graph. Note: As per SCERT guidelines, Data Handling is not to be taught but to be included in math lab activities only.
• Mensuration: Introduction, Perimeter, Area, Area of Rectangle, Area of Square.
• Algebra: Introduction, Matchstick Patterns, The Idea of Variables, More Matchstick patterns, More examples of Variables, use of variables in common rules, expressions and variables, what is an equation, solution of an equation, Using expressions practically. Note: As per SCERT guidelines, content not to be taught: section 11.6 and exercise 11.2, section 11.7 and exercise 11.3, sections 11.9 & 11.10 and exercise 11.5.
• Ratio and Proportion: Introduction, Ratio, Proportion, Unitary method.
• Symmetry: Introduction, making symmetric figures, figures with two lines of symmetry, figures with multiple lines of symmetry, reflection and symmetry. Note: As per SCERT guidelines, Symmetry is not to be taught but to be included in math lab activities.
• Practical Geometry: Introduction, The Circle, A Line Segment, Perpendiculars, perpendicular bisector of a line segment, Angles.

For preparation of exams students can also check out other resource material:
CBSE Class 6 Maths Sample Papers
CBSE Class 6 Maths Question Papers
CBSE Class 6 Maths Test Papers

Question Bank of Other Subjects of Class 6
CBSE Question Bank of Class 6 Science
CBSE Question Bank of Class 6 English
CBSE Question Bank of Class 6 Social Science
CBSE Question Bank of Class 6 Hindi

Importance of Question Bank for Exam Preparation

There are many ways to ascertain whether a student has understood the important points and topics of a particular chapter and whether he or she is well prepared for exams and tests of that particular chapter. Apart from reference books and notes, Question Banks are very effective study materials for exam preparation. When a student tries to attempt and solve all the important questions of any particular subject, it becomes very easy to gauge how well the topics have been understood and what kind of questions are asked in exams related to that chapter. Some of the other advantages of Question Banks are as follows:

1. Since the important questions included in a question bank are collections of questions that were asked in previous exams and tests, when a student tries to attempt them they get a complete idea about what type of questions are usually asked and whether they have learned the topics well enough. This gives them an edge to prepare well for the exam. Students get a clear idea whether the questions framed from any particular chapter are mostly short or long answer type questions or multiple choice based, and also the marks weightage of any particular chapter in final exams.
Thus, solving questions from the question bank helps students analyse their preparation level for the exam. However, the practice should be done so that the questions on a particular chapter are solved first, and the solutions consulted afterwards, to get an analysis of strong and weak points. This ensures that students are clearer about what to answer and what can be avoided on the day of the exam.
3. Solving many different types of important questions gives students a clear idea of which topics of a chapter are the most important from the examination perspective and should be emphasised during revision before attempting the final paper. So attempting the most frequently asked questions and important questions helps students prepare well for almost everything in that subject.
4. Although students cover all the chapters included in the course syllabus by the end of the session, revision can become a time-consuming and difficult process. Practising important questions from the question bank allows students to check their preparation status for each and every small topic in a chapter, giving quick and easy insight into all the important questions and topics in every chapter. Solving the important questions also acts as a revision process.
Question Bank of Other Classes
{"url":"https://www.ribblu.com/cbse/understanding-elementary-shape-extra-questions-for-class-6","timestamp":"2024-11-02T05:52:42Z","content_type":"text/html","content_length":"504744","record_id":"<urn:uuid:8654b586-ff09-4fc4-822f-de84518761bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00405.warc.gz"}
Check if name is constant or function?

Is there a way to check, given a variable name (as in an identifier), whether it corresponds to a constant or a function, within a Coq plugin?

can you explain more?

As in:

Definition fooC1: nat := 3.
Definition fooF1: nat -> nat := id.
Definition fooF2: nat -> nat -> bool := Nat.eqb.

Here, fooC1 is a constant, but fooF1 and fooF2 are functions. I was hoping to find a way to distinguish between constants and functions.

What is a constant if not a constant function? ;)

Julio Di Egidio said:

What is a constant if not a constant function? ;)

Ah.. that's a good question.. Given a name, would it be possible to get hold of its type somehow? or (with more details)

No, I meant from a plugin. I guess for that some AST fiddling is involved?

resolve a qualid with Nametab.locate
get the type with Environ.constant_type_in
or combine Evd.fresh_global with the Retyping functions

Last updated: Oct 13 2024 at 01:02 UTC
{"url":"https://coq.gitlab.io/zulip-archive/stream/237656-Coq-devs-.26-plugin-devs/topic/Check.20if.20name.20is.20constant.20or.20function.3F.html","timestamp":"2024-11-09T07:51:58Z","content_type":"text/html","content_length":"8779","record_id":"<urn:uuid:4f2fe378-0759-4be5-942b-fdc747a9896d>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00014.warc.gz"}
Discrete Fourier Transform Calculator

The DFT of a real sequence of numbers is a sequence of complex numbers of the same length, which can be calculated using this discrete Fourier transform calculator. The DFT converts a finite sequence of equally-spaced samples of a function into an equal-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. Use this online DFT calculator to perform discrete Fourier transform calculations.

DFT Calculator

The discrete Fourier transform is a special case of the Z-transform. The DFT can be computed efficiently using a fast Fourier transform (FFT). The DFT can also be generalized to two and more dimensions. DFTs are extremely useful because they reveal periodicities in input data as well as the relative strengths of any periodic components.
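A direct implementation of the DFT definition can be checked against NumPy's FFT. This is an illustrative sketch, not the calculator's actual code; the signal values are arbitrary:

```python
import numpy as np

def dft(x):
    """Direct O(N^2) evaluation of X[k] = sum_t x[t] * exp(-2*pi*i*k*t/N)."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)  # DFT matrix W[k, t]
    return w @ x

signal = np.array([1.0, 2.0, 3.0, 4.0])  # a real input sequence
slow = dft(signal)                        # complex output of the same length
fast = np.fft.fft(signal)                 # FFT computes the same transform in O(N log N)
print(np.allclose(slow, fast))
```

As the page notes, a real input of length N yields a complex output of length N; the k = 0 term is simply the sum of the samples.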
{"url":"https://www.calculators.live/discrete-fourier-transform","timestamp":"2024-11-08T14:51:15Z","content_type":"text/html","content_length":"9860","record_id":"<urn:uuid:d5cf10ec-1753-4483-af51-3bfa057cf1a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00370.warc.gz"}
Course title in Estonian: Elementaarmatemaatika I
Credits: 3 ECTS
Assessment form: assessment
Responsible lecturer: Annika Volt
Course description: https://ois2.tlu.ee/tluois/subject/MLM6101.DT

Semester    | Study programme version | Study programme title                                 | Responsible lecturer
Autumn 2023 | MLMB/23.DT              | Mathematics, Mathematical Economics and Data Analysis | Alar Pukk
Autumn 2022 | MLMB/22.DT              | Mathematics, Mathematical Economics and Data Analysis | Annika Volt
Autumn 2021 | MLMB/21.DT              | Mathematics, Mathematical Economics and Data Analysis | Annika Volt
Autumn 2020 | MLMB/20.DT              | Mathematics, Mathematical Economics and Data Analysis | Annika Volt
Autumn 2019 | MLMB/19.DT              | Mathematics, Mathematical Economics and Data Analysis | Jüri Kurvits
{"url":"https://dti.tlu.ee/digiois/course_history.php?c=MLM6101.DT&lang=en","timestamp":"2024-11-02T15:19:25Z","content_type":"text/html","content_length":"4116","record_id":"<urn:uuid:c1b439f2-b1fd-4e2f-a23a-ff7a79c1b0c2>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00791.warc.gz"}
Half a Circle. The Perimeter of the semicircle of Radius r is

L = πr + 2r = (π + 2) r

and the Area is

A = (1/2) π r².

The weighted mean of y is

⟨y⟩ = ∫ y dA = (2/3) r³.

The Centroid is then given by

ȳ = ⟨y⟩ / A = 4r / (3π).

The semicircle is the Cross-Section of a Hemisphere for any Plane through the z-Axis.

See also Arbelos, Arc, Circle, Disk, Hemisphere, Lens, Right Angle, Salinon, Thales' Theorem, Yin-Yang

© 1996-9 Eric W. Weisstein
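The centroid formula is easy to verify numerically; a small Python check using a Riemann sum over the upper half-disk (the radius and grid sizes are arbitrary illustrative choices):

```python
import numpy as np

r = 2.0
# Riemann sum over the half-disk x^2 + y^2 <= r^2, y >= 0
xs = np.linspace(-r, r, 2001)
ys = np.linspace(0, r, 1001)
X, Y = np.meshgrid(xs, ys)
inside = X**2 + Y**2 <= r**2
dA = (xs[1] - xs[0]) * (ys[1] - ys[0])

area = inside.sum() * dA                 # approximates pi r^2 / 2
weighted_mean = (Y * inside).sum() * dA  # approximates (2/3) r^3
ybar = weighted_mean / area              # approximates 4 r / (3 pi)

print(ybar, 4 * r / (3 * np.pi))
```

The two printed values agree to within the grid resolution, confirming ȳ = 4r/(3π).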
{"url":"http://drhuang.com/science/mathematics/math%20word/math/s/s171.htm","timestamp":"2024-11-04T05:49:50Z","content_type":"text/html","content_length":"5995","record_id":"<urn:uuid:72e6c33c-9f11-4c4f-a86a-3d1fed3cb29f>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00862.warc.gz"}
Enter the Ksp expression for the solid AB2 in terms of the molar solubility x. Express your answer in terms of x.

The following solution is suggested to handle the subject "Enter the Ksp expression for the solid AB2 in terms of the molar solubility x; express your answer in terms of x". Let's keep an eye on the content below!

Question

Enter the Ksp expression for the solid AB2 in terms of the molar solubility x. Express your answer in terms of x.

Concepts and Reason

This concept is used to determine the solubility-constant expression of the given salt in terms of x. Ksp, the solubility product, is the equilibrium constant of a solid that dissolves in a solution. It is the product of the ion concentrations, each raised to the power of its stoichiometric coefficient.

Writing the ions of AB2 as A²⁺ and B⁻, the dissociation reaction is

AB2(s) ⇌ A²⁺(aq) + 2 B⁻(aq)

and the expression for the solubility product is

Ksp = [A²⁺][B⁻]²

If the molar solubility of AB2 is x, then the ions A²⁺ and B⁻ have molar solubilities x and 2x. Creating an ICE table for the reaction gives equilibrium concentrations [A²⁺] = x and [B⁻] = 2x, so the expression for the solubility product in terms of x is

Ksp = (x)(2x)² = 4x³

Above is the solution for "Enter the Ksp expression for the solid AB2 in terms of the molar solubility x; express your answer in terms of x". We hope that you find a good answer and gain knowledge about this topic of science.
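The algebra can be sanity-checked in a few lines of Python; the value of x below is an arbitrary illustrative solubility, not data from the original problem:

```python
def ksp_ab2(x):
    """Ksp for AB2(s) <-> A^2+ + 2 B^-: [A^2+] = x, [B^-] = 2x, so Ksp = x * (2x)**2."""
    a = x        # [A^2+]
    b = 2 * x    # [B^-]
    return a * b**2

x = 1.5e-4  # hypothetical molar solubility, mol/L
print(ksp_ab2(x))   # equals 4 * x**3
print(4 * x**3)
```

Both lines print the same number, confirming that Ksp = 4x³ for a 1:2 salt.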
{"url":"https://trustsu.com/qa/enter-the-the-ksp-expression-for-the-solid-ab2-in-terms-of-the-molar-solubility-x-express-your-answer-in-terms-of-x/","timestamp":"2024-11-02T01:16:58Z","content_type":"text/html","content_length":"170473","record_id":"<urn:uuid:c4baae82-3b6a-4cc3-9a84-ae1fd4cd0197>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00635.warc.gz"}
Data Science. The Central Limit Theorem and sampling

There are a lot of engineers who have never been involved in statistics or data science. So, when building data science pipelines or rewriting code produced by data scientists into adequate, easily maintained code, many nuances and misunderstandings arise on the engineering side. For these data/ML engineers and novice data scientists, I am making this series of articles. I'll try to explain some basic approaches in plain English and, based on them, explain some of the data science model concepts.

The whole series:

The practice of studying random phenomena shows that, although the results of individual observations, even those carried out under the same conditions, can differ, the average results over a sufficiently large number of observations are stable and fluctuate only weakly with the results of individual observations. The stability of averages is the content of the law of large numbers; the shape of the fluctuations around the average is described by its companion result, the Central Limit Theorem.

According to the central limit theorem, as the sample size increases, the average value of a data sample gets closer to the average value of the whole population, and its distribution becomes approximately normal. The importance of this theorem comes from the fact that it holds regardless of the distribution of the population.
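A quick simulation makes the theorem concrete: draw many samples from a deliberately non-normal population and look at their means. In this sketch the population, sample sizes, and seed are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# A deliberately non-normal (right-skewed) population:
population = rng.exponential(scale=2.0, size=100_000)

sample_size, n_samples = 50, 5_000
means = np.array([rng.choice(population, sample_size).mean()
                  for _ in range(n_samples)])

# The sample means cluster around the population mean...
print(population.mean(), means.mean())
# ...and their spread is close to sigma / sqrt(n), as the CLT predicts.
print(population.std() / np.sqrt(sample_size), means.std())
```

Plotting a histogram of `means` would show a roughly bell-shaped curve even though the population itself is strongly skewed.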
To illustrate the concept, check the following animation of die rolls and the code that produced it:

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

# Simulations of die rolls
n, m = 200, 31
# In each simulation, there is one trial more than the previous simulation
avg = []
for i in range(2, n):
    a = np.random.randint(0, m, i)
    avg.append(np.average(a))

# Function that will plot the histogram, where current is the latest figure
def clt(current):
    # if animation is at the last frame, stop it
    if current == n:
        anim.event_source.stop()
    plt.cla()
    plt.hist(avg[:current])
    plt.xlim(0, m)
    plt.gca().set_title('Expected value of die rolls')
    plt.gca().set_xlabel('Average from die roll')
    plt.annotate('Die roll = {}'.format(current), [3, 27])

fig = plt.figure()
anim = animation.FuncAnimation(fig, clt, interval=1, save_count=n)
anim.save('animation.gif', writer='imagemagick', fps=10)

In a practical world, to understand the characteristics of a population, scientists usually sample data and work with sample statistics. They deal with samples to understand and generalize insights about the population. With a large enough sample size, the central limit theorem allows us to apply the properties of the normal distribution in this process.

We already know that the normal distribution is special. We can also use some of its properties for distributions that, strictly speaking, cannot be called normal. So far we have talked about how, knowing the theoretical distribution, probability theory, and the distribution parameters, to infer from a sample what happens in the general population. But there is a problem: even the best theory and the best-known distribution will not help us if the sample on which we estimate is designed incorrectly or does not represent the population.

Why sampling? Well, the logical question is: if we take a sample, we, in any case, discard some data.
Why not take all the data and work with all the elements of the population? First, the data need to be collected, and that is very expensive. Think about surveys. Even if we are not talking about the population of all Americans but want to represent, for example, all the residents of California, that is 39 million people. To interview 39 million people, we need budgets which, of course, we don't have. Besides, even if we had such budgets, it would be almost impossible to reach all residents of any state. The idea of sampling is, in general, simple: take a small set that is heterogeneous enough, in terms of some key criteria, to represent our general population. That is, do not interview all California residents, but take a slice that represents California by the criteria important to us.

The idea of sampling is very similar to the idea of soup. When we cook a soup that contains a lot of ingredients, cut differently and added at different times, and we need to evaluate its quality, we do not need to eat all the soup to evaluate how tasty it turned out. Moreover, if we needed to eat all the soup to understand how good it is, any idea of collective cooking would be somewhat absurd. So what do we do? We boil the soup, and after that we take a spoon, scoop up a portion, and based on this small portion we try to assess whether the soup is done or something needs to change. If we just take some random part, for example scoop from the top, we will have a spoon full of water, and it will give us no idea about the ingredients (vegetables or meat). If we scoop anyhow from the bottom, it may turn out that we only got large pieces, and we understood nothing about the small ones.
In order to get all the ingredients of our soup into the sample from which we judge its taste, we need to mix it first. After we mix it well, we scoop, and all the ingredients end up in the spoon: large, small, and water. That is, from this portion we can already estimate how well all the ingredients in the soup are cooked. The same goes for sampling. The analog of this mixing in the case of sampling is random selection: giving each element of the population an equal probability of getting into the sample is what provides the representativeness of the sample.

What is so bad about a non-representative sample? I will highlight a few problems. The simplest and most understandable example is the available (convenience) sample. Say we study the preferences of young people, but since we study or work at a certain university, we only interview students of our university and claim that we will learn about all young people from this study. Obviously, we will not, because the sample is a very specific group. The available sample gives us some part of the picture, but not a complete one. In this case, a substantial part of young people who do not study at universities, or who study at other universities, will not be covered by our sample. Another problem: we may end up selecting only those people who want to talk to us. The trouble with such non-representative samples is that we do not give different people and different points of view equal opportunities to be represented in our sample. A random sample at least formally guarantees the possibility of such representation.

Probability sampling methods

The simplest and most understandable method is a simple random sample, used when we have a complete list of the elements of the general population.
For example, our population is all owners of telephone numbers in NYC, and we have a complete list of these numbers. We turn on a random number generator, select the number of objects we need, and call those phone numbers: a simple random selection.

Another option is stratified sampling. Here we no longer do a purely random selection, because we know something about our population. We know that it consists of several homogeneous groups (strata) that need to be represented in our sample. For example, men and women, who may have different opinions on some questions. We first divide our population into the men and women strata, and then select randomly within each, to guarantee the representation of these groups in the final sample.

One more variant of the random sample is cluster sampling. Such samples are used when we explore cities, for example, which are often divided into districts. Some of the districts are similar to each other, some are different, so we have clusters of areas that are similar, say, by socio-economic conditions. We first divide the city into such clusters and then randomly select among them, so as not to go to all twelve districts, for example, but to choose three out of twelve at random, dividing them into these similar categories first, and then work inside those areas.

Non-probability sampling methods

Non-probability sampling methods are also needed; moreover, in some cases non-probability sampling is irreplaceable. For example, there is a sampling method called snowball sampling. It is necessary when we investigate hard-to-reach groups, or when we don't know the exact size of the general population. We talk to the first person, who refers us to the next one, and the next, and the next, and we sort of accumulate a snowball. We grow the sample starting from one person, or sometimes start several such snowballs to guarantee heterogeneity.
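The two probability designs described above, simple random and stratified sampling, are easy to sketch with NumPy. The population, strata, and group means here are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up population: 60% group "F", 40% group "M", with different mean responses.
groups = np.array(["F"] * 600 + ["M"] * 400)
responses = np.where(groups == "F",
                     rng.normal(5.0, 1.0, 1000),
                     rng.normal(3.0, 1.0, 1000))

# Simple random sample: every unit has the same probability of selection.
idx = rng.choice(len(groups), size=100, replace=False)
srs_mean = responses[idx].mean()

# Stratified sample: sample each stratum separately, proportional to its size.
estimate = 0.0
for g, share in [("F", 0.6), ("M", 0.4)]:
    members = np.flatnonzero(groups == g)
    chosen = rng.choice(members, size=int(100 * share), replace=False)
    estimate += responses[chosen].mean() * share

print(responses.mean(), srs_mean, estimate)
```

Both estimates land near the true population mean; the stratified design additionally guarantees that each group appears in the sample in its population proportion.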
But of course such a sample is statistically unrepresentative; still, there are tasks for which we simply cannot do without it. The central limit theorem is quite an important concept in statistics, and consequently in data science. This theorem will allow us to test so-called statistical hypotheses, i.e., to test whether assumptions made on a sample apply to the whole population. We will be covering that concept in later posts. Sampling is a cheap and understandable way of getting a small but representative piece of data about the population. Probability methods are preferable for most research problems, but there are tasks for which only non-random samples can help. There are tasks for which they are irreplaceable, but in the statistical sense non-random samples are not representative. Therefore, all of this theoretical knowledge about distributions, and about drawing conclusions on the general population from a sample, applies only to random samples.

Thank you for reading! Any questions? Leave your comment below to start fantastic discussions!

Check out my blog or come to say hi 👋 on Twitter or subscribe to my telegram channel. Plan your best!
{"url":"https://dev.to/luminousmen/data-science-the-central-limit-theorem-and-sampling-1aa5","timestamp":"2024-11-06T08:27:47Z","content_type":"text/html","content_length":"90798","record_id":"<urn:uuid:a67e3d08-0d5c-45ed-ad76-fa0ba0546c58>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00881.warc.gz"}
Open Economy Macroeconomics [PDF] [1fufmv131sng]

Combining theoretical models and data in ways unimaginable just a few years ago, open economy macroeconomics has experienced enormous growth over the past several decades. This rigorous and self-contained textbook brings graduate students, scholars, and policymakers to the research frontier and provides the tools and context necessary for new research and policy proposals. Martín Uribe and Stephanie Schmitt-Grohé factor in the discipline's latest developments, including major theoretical advances in incorporating financial and nominal frictions into microfounded dynamic models of the open economy, the availability of macro- and microdata for emerging and developed countries, and a revolution in the tools available to simulate and estimate dynamic stochastic models. The authors begin with a canonical general equilibrium model of an open economy and then build levels of complexity through the coverage of important topics such as international business-cycle analysis, financial frictions as drivers and transmitters of business cycles and global crises, sovereign default, pecuniary externalities, involuntary unemployment, optimal macroprudential policy, and the role of nominal rigidities in shaping optimal exchange-rate policy.

Open Economy Macroeconomics [1]
Martín Uribe [2] and Stephanie Schmitt-Grohé [3]
August 10, 2014
Newer versions are maintained at www.columbia.edu/~mu2166

[1] We would like to thank Javier García-Cicco, Felix Hammermann, Manfred Jager-Ambrozewicz, Pablo Ottonello, Stéphane Dupraz, and Krisztina Orban for comments and suggestions. Comments welcome.
[2] Columbia University and NBER. E-mail: [email protected]
[3] Columbia University, CEPR, and NBER. E-mail: [email protected]

Contents

I Economic Fluctuations In Open Economies

1 A First Look at the Data: Ten Facts
  1.1 Measuring Business Cycles
  1.2 Business-Cycle Facts Around The World
  1.3 Business Cycles in Poor, Emerging, and Rich Countries
  1.4 Country Size and Observed Business Cycles
  1.5 Hodrick-Prescott Filtering
  1.6 Growth Rates
  1.7 Duration and Amplitude of Business Cycles in Emerging and Developed Countries
  1.8 Business Cycle Facts With Quarterly Data
  1.9 Appendix
    1.9.1 Countries With At Least 30 Years of Annual Data
    1.9.2 Derivation of the HP Filter
    1.9.3 Country-By-Country Business Cycle Statistics At Annual And Quarterly Frequency
  1.10 Exercises

2 A Small Open Endowment Economy
  2.1 The Model Economy
    2.1.1 Equilibrium
  2.2 Stationary Income Shocks
  2.3 Stationary Income Shocks: AR(2) Processes
  2.4 Nonstationary Income Shocks
  2.5 Testing the Model
  2.6 Exercises

3 A Small Open Economy with Capital
  3.1 The Basic Framework
  3.2 Adjustment To A Permanent Productivity Shock
  3.3 Adjustment to Temporary Productivity Shocks
  3.4 Capital Adjustment Costs
    3.4.1 A Permanent Technology Shock
  3.5 Exercises

4 The Small-Open-Economy Real-Business-Cycle Model
  4.1 The Model
    4.1.1 Inducing Stationarity: External Debt-Elastic Interest Rate (EDEIR)
    4.1.2 Equilibrium
    4.1.3 Decentralization
  4.2 Computing the Quantitative Predictions of the SOE-RBC Model
    4.2.1 Functional Forms
    4.2.2 Deterministic Steady State
    4.2.3 Calibration
    4.2.4 Approximating Equilibrium Dynamics
  4.3 The Performance of the Model
  4.4 The Role of Persistence and Capital Adjustment Costs
  4.5 The SOE-RBC Model With Complete Asset Markets (CAM)
  4.6 Alternative Ways to Induce Stationarity
    4.6.1 Internal Debt-Elastic Interest Rate (IDEIR)
    4.6.2 Portfolio Adjustment Costs (PAC)
    4.6.3 External Discount Factor (EDF)
    4.6.4 Internal Discount Factor (IDF)
    4.6.5 The Model With No Stationarity Inducing Features (NSIF)
    4.6.6 The Perpetual-Youth Model (PY)
    4.6.7 Quantitative Results
  4.7 Appendix A: Log-Linearization of the SOE-RBC EDEIR Model
  4.8 Appendix B: First-Order Accurate Approximations to Dynamic General Equilibrium Models
  4.9 Local Existence and Uniqueness of Equilibrium
    4.9.1 Local Uniqueness of Equilibrium
    4.9.2 No Local Existence of Equilibrium
    4.9.3 Local Indeterminacy of Equilibrium
  4.10 Second Moments
  4.11 Impulse Response Functions
  4.12 Matlab Code For Linear Perturbation Methods
  4.13 Exercises

5 Emerging-Country Business Cycles Through the Lens of the SOE-RBC Model
  5.1 An SOE-RBC Model With Stationary And Nonstationary Technology Shocks
  5.2 Letting Technology Shocks Compete With Other Shocks And Frictions
  5.3 Bayesian Estimation On A Century of Data
  5.4 How Important Are Permanent Productivity Shocks?
  5.5 The Role of Financial Frictions
  5.6 Investment Adjustment Costs and the Persistence of the Trade Balance
  5.7 Exercises

6 Interest-Rate Shocks
  6.1 An Empirical Model
  6.2 Impulse Response Functions
  6.3 Variance Decompositions
  6.4 A Theoretical Model
    6.4.1 Households
    6.4.2 Firms
    6.4.3 Driving Forces
    6.4.4 Equilibrium, Functional Forms, and Parameter Values
  6.5 Theoretical and Estimated Impulse Responses
  6.6 The Endogeneity of Country Spreads

7 The Terms of Trade
  7.1 Defining the Terms of Trade
  7.2 Empirical Regularities
    7.2.1 TOT-TB Correlation: Two Early Explanations
  7.3 Terms-of-Trade Shocks in an RBC Model
    7.3.1 Households
    7.3.2 Production of Consumption Goods
    7.3.3 Production of Tradable Consumption Goods
    7.3.4 Production of Importable, Exportable, and Nontradable Goods
    7.3.5 Market Clearing
    7.3.6 Driving Forces
    7.3.7 Competitive Equilibrium
    7.3.8 Calibration
    7.3.9 Model Performance
    7.3.10 How Important Are the Terms of Trade?

II Exchange Rate Policy

8 Nominal Rigidity, Exchange Rates, And Unemployment
  8.1 An Open Economy With Downward Nominal Wage Rigidity
    8.1.1 Households
    8.1.2 Firms
    8.1.3 Downward Nominal Wage Rigidity
    8.1.4 Equilibrium
  8.2 Currency Pegs
    8.2.1 How Pegs Cum Nominal Rigidity Amplify Contractions: A Peg-Induced Externality
    8.2.2 Volatility And Average Unemployment
    8.2.3 Adjustment To A Temporary Fall in the Interest Rate
  8.3 Optimal Exchange Rate Policy
    8.3.1 The Full-Employment Exchange-Rate Policy
    8.3.2 Pareto Optimality of the Full-Employment Exchange-Rate Policy
    8.3.3 When Is It Optimal To Devalue?
  8.4 Evidence On Downward Nominal Wage Rigidity
    8.4.1 Evidence From Micro Data
    8.4.2 Evidence From Informal Labor Markets
    8.4.3 Evidence From The Great Depression of 1929
    8.4.4 Evidence From Emerging Countries and Inference on γ
  8.5 The Case of Equal Intra- And Intertemporal Elasticities of Substitution
  8.6 Approximating Equilibrium Dynamics
  8.7 Parameterization of the Model
    8.7.1 Estimation Of The Exogenous Driving Process
    8.7.2 Calibration Of Preferences, Technologies, and Nominal Rigidities
  8.8 External Crises and Exchange-Rate Policy: A Quantitative Analysis
    8.8.1 Definition of an External Crisis
    8.8.2 Crisis Dynamics Under A Currency Peg
    8.8.3 Crisis Dynamics Under Optimal Exchange Rate Policy
    8.8.4 Devaluations, Revaluations, and Inflation In Reality
  8.9 Empirical Evidence On The Expansionary Effects of Devaluations
    8.9.1 Exiting a Currency Peg: Argentina Post Convertibility
    8.9.2 Exiting the Gold Standard: Europe 1929 to 1935
  8.10 The Welfare Costs of Currency Pegs
  8.11 Symmetric Wage Rigidity
  8.12 The Mussa Puzzle
  8.13 Endogenous Labor Supply
  8.14 Product Price Rigidity
  8.15 Exercises

9 Fixed Exchange Rates, Taxes, And Capital Controls
  9.1 First-Best Fiscal Policy
    9.1.1 Labor Subsidies
    9.1.2 Sales Subsidies
    9.1.3 Consumption Subsidies
  9.2 Capital Controls As Second-Best Policies
    9.2.1 Capital Controls As A Distortion To The Interest Rate
    9.2.2 Equilibrium Under Capital Controls
  9.3 Ramsey Optimal Capital Controls
  9.4 Interest-Rate Shocks and the Optimality of Prudential Capital Controls
  9.5 Optimal Capital Controls During a Boom-Bust Episode
  9.6 Currency Pegs and Optimal Capital Controls: Unconditional Moments
  9.7 Currency Pegs And Overborrowing
  9.8 Welfare Costs of Free Capital Mobility Under Fixed Exchange Rates
  9.9 Summary
  9.10 Exercises

III Financial Frictions

10 Overborrowing
  10.1 Imperfect Policy Credibility
    10.1.1 The Government
    10.1.2 A Credible Permanent Trade Reform
    10.1.3 A Temporary Tariff Reform
  10.2 Financial Externalities
    10.2.1 The No Overborrowing Result
    10.2.2 The Case of Overborrowing
    10.2.3 The Case of Underborrowing
    10.2.4 Discussion
  10.3 Exercises
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409 . 410 . 413 . 414 . 415 . 421 . 422 . 438 . 452 . 456 . 457 11 Sovereign Debt 11.1 Empirical Regularities . . . . . . . . . . . . . . 11.1.1 Frequency, Size, And Length of Defaults 11.1.2 Default, Debt, And Country Premia . . 11.1.3 Do Countries Default In Bad Times? . . 11.2 The Cost of Default . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.4 11.5 11.6 11.7 11.8 11.2.1 Default and Exclusion From Financial Markets . . . . . . 11.2.2 Output Costs Of Default . . . . . . . . . . . . . . . . . . 11.2.3 Trade Costs of Default . . . . . . . . . . . . . . . . . . . . Default Incentives With State-Contingent Contracts . . . . . . . 11.3.1 The Optimal Debt Contract With Commitment . . . . . 11.3.2 Optimal Debt Contract Without Commitment . . . . . . Default Incentives With Non-State-Contingent Contracts . . . . Saving and the Breakdown of Reputational Lending . . . . . . . Quantitative Analysis . . . . . . . . . . . . . . . . . . . . . . . . 11.6.1 Serially Correlated Endowment Shocks . . . . . . . . . . . 11.6.2 Reentry . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.6.3 Output Costs . . . . . . . . . . . . . . . . . . . . . . . . . 11.6.4 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . 11.6.5 Calibration and Functional Forms . . . . . . . . . . . . . 11.6.6 Computation . . . . . . . . . . . . . . . . . . . . . . . . . 11.6.7 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.6.8 Dynamics Around The Typical Default Episode . . . . . . 
  11.6.9 Goodness of Approximation of the Eaton-Gersovitz Model
  11.6.10 Alternative Output Cost Specification
11.7 Appendix
11.8 Exercises

Part I: Economic Fluctuations In Open Economies

Chapter 1: A First Look at the Data: Ten Facts

M. Uribe and S. Schmitt-Grohé

How volatile is output? Do the components of aggregate demand (consumption, investment, government spending, and exports) move pro- or countercyclically? How persistent are movements in aggregate activity? Are economic expansions associated with deficits or surpluses in the trade balance? What about economic contractions? Is aggregate consumption less or more volatile than output? Are emerging countries more or less volatile than developed countries? Does country size matter for business cycles? The answers to these and other similar questions form a basic set of empirical facts about business cycles that one would like macro models of the open economy to be able to explain. Accordingly, the purpose of this chapter is to document these facts using aggregate data on economic activity spanning time and space.

Measuring Business Cycles

In the theoretical models we study in this book, the basic economic units are the individual consumer, the firm, and the government. The models produce predictions for the consumers' levels of income, spending, and savings and for firms' investment and production decisions. To compare the predictions of theoretical models to actual data, it is therefore natural to consider time-series and cross-country evidence on per capita measures of aggregate activity. Accordingly, in this chapter we describe the business-cycle properties of output per capita, denoted y, total private consumption per capita, denoted c, investment per capita, denoted i, public consumption per capita, denoted g, exports per capita, denoted x, imports per capita, denoted m, the trade balance, denoted tb ≡ (x − m), and the current account, denoted ca.

To compute business-cycle statistics we use annual, cross-country, time-series data from the World Bank's World Development Indicators (WDI) database.[1] All time series are expressed in real per capita terms. Only countries with at least 30 uninterrupted years of data for y, c, i, g, x, and m were considered. The resulting sample contains 120 countries and covers, on average, the period 1965-2010.[2]

A word on the consumption data is in order. The WDI database contains information on household final consumption expenditure. This time series includes consumption expenditure on nondurables, services, and durables. Typically, business-cycle studies remove expenditures on durables from the definition of consumption. The reason is that, from an economic point of view, expenditures on durable consumption goods, such as cars and washing machines, represent an investment in household physical capital. For this reason, researchers often add this component of consumption to the gross investment series. From a statistical point of view, there is also a reason to separate durables from nondurables and services in the definition of consumption. Expenditures on durables are far more volatile than expenditures on nondurables and services. For example, in the United States, durable consumption is about three times as volatile as output, whereas consumption of nondurables and services is less volatile than output.
Even though expenditures on durables represent only 13 percent of total consumption expenditure, the standard deviation of total consumption is 20 percent higher than that of nondurables and services. Unfortunately, the WDI data set does not provide disaggregated consumption data. One should therefore keep in mind that the volatility of consumption reported later in this chapter is likely to be somewhat higher than the one that would result if our measure of consumption excluded expenditures on durable goods.

[1] The data set is publicly available at databank.worldbank.org. The specific annual data used in this chapter are available online at www.columbia.edu/~mu2166/book/
[2] Only 94 countries contained 30 uninterrupted years of current account data.

Open Economy Macroeconomics, Chapter 1

The focus of our analysis is to understand aggregate fluctuations at business-cycle frequency in open economies. It is therefore important to extract from the raw time-series data the associated cyclical component. The existing literature suggests a variety of methods for isolating the cyclical component of a time series. The most popular ones are log-linear detrending, log-quadratic detrending, Hodrick-Prescott (HP) filtering, first differencing, and band-pass filtering. The following analysis uses quadratic detrending, HP filtering, and first differencing.

To extract a log-quadratic trend, we proceed as follows. Let y_t denote the natural logarithm of real output per capita in year t for a given country, y_t^c the cyclical component of y_t, and y_t^s the secular (or trend) component of y_t. Then we have

  y_t = y_t^c + y_t^s.

The components y_t^c and y_t^s are estimated by running the regression

  y_t = a + b t + c t^2 + ε_t,

and setting y_t^c = ε_t (the regression residual) and y_t^s = a + b t + c t^2 (the fitted trend), where a, b, and c are the estimated coefficients.

An identical procedure is used to detrend the natural logarithms of consumption, investment, government spending, exports, and imports and the levels of the trade-balance-to-output ratio and the current-account-to-output ratio. The levels of the trade balance and the current account (tb and ca) are first divided by the secular component of output, exp(y_t^s), and then quadratically detrended. We perform the decomposition into cycle and trend for every time series and every country separately.

To illustrate the workings of the log-quadratic filter, we show the decomposition into trend and cycle it delivers for Argentine real GDP per capita over the period 1960-2011. The top panel of figure 1.1 shows with a solid line the raw data and with a broken line the estimated quadratic trend, y_t^s. The bottom panel shows the cyclical component, y_t^c. The detrending procedure delivers three well-marked cycles: one from the beginning of the sample until 1980, a second one from 1980 to 1998, and a third one from 1998 to the end of the sample. In particular, the log-quadratic filter succeeds in identifying the two major contractions in postwar Argentina, namely the one associated with the hyperinflation of the late 1980s and the one associated with the demise of the Convertibility Plan in 2001. In the first of these contractions, real GDP per capita fell by about 40 percent from the peak in 1980 to the trough in 1990, giving the 1980s the well-deserved nickname of lost decade. The behavior of the business-cycle component of real GDP suggests that the Argentine economy has been highly volatile over the past 50 years. The standard deviation of y_t^c is 10.7 percent per year. The cyclical component is also quite persistent. The serial correlation of y_t^c is 0.85.
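The log-quadratic detrending just described can be sketched in a few lines of Python (our own illustration, not the authors' code; it uses only numpy):

```python
import numpy as np

def quadratic_detrend(y):
    """Split a log series y_t into trend and cycle via the regression
    y_t = a + b*t + c*t^2 + e_t: the fitted values are the trend y_t^s,
    the residuals are the cyclical component y_t^c."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t, t**2])  # constant, t, t^2
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS estimates (a, b, c)
    trend = X @ coef                                 # y_t^s
    cycle = y - trend                                # y_t^c
    return trend, cycle
```

Applied to the log of real GDP per capita, the standard deviation and first-order serial correlation of the returned cycle are the statistics reported in the text (e.g., 10.7 percent and 0.85 for Argentina).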
In the next section, we expand this analysis to all macroeconomic aggregates and countries included in our data set.

Business-Cycle Facts Around The World

To characterize the average world business cycle, we compute business-cycle statistics for each country in the sample and then take a population-weighted average of each statistic across countries.

[Figure 1.1: Trend and Cycle of Argentine Real per Capita GDP. Top panel: y_t (solid) and the quadratic trend y_t^s (broken). Bottom panel: cyclical component, in percent deviations from trend. Data Source: WDI Database and authors' calculations.]

The resulting average summary statistics appear in table 1.1 under the heading "All Countries." The table displays standard deviations, correlations with output, and serial correlations. Relative standard deviations are cross-country averages of country-specific relative standard deviations. The table also displays averages of the trade-balance-to-output ratio and the openness ratio, defined as (x + m)/y.

According to table 1.1, the world is a pretty volatile place. The average standard deviation of output across all countries is 6.2 percent. To put this number into perspective, we contrast it with the volatility of output in the United States. The standard deviation of the cyclical component of U.S. output is 2.9 percent, less than half of the average volatility of output across all countries in the data set.

Fact 1.1 (High Global Volatility) The cross-country average volatility of output is twice as large as its U.S. counterpart.

One statistic in table 1.1 that might attract some attention is that on average across countries private consumption is 5 percent more volatile than output. This fact might seem at odds with the backbone of optimizing models of the business cycle, namely, consumption smoothing. However, recall that the measure of consumption used here includes expenditures on consumer durables, which are highly volatile.
The fact that expenditure on durables is highly volatile need not be at odds with consumption smoothing, because it represents an investment in household capital rather than direct consumption. For example, a household that buys a new car every 5 years displays a choppy path for expenditures on cars, but might choose to experience a smooth consumption of the services provided by its car.[3]

[3] Country-by-country statistics for a selected number of emerging and rich countries are shown in table 1.8 in the appendix. The online appendix (www.columbia.edu/~mu2166/book/) presents country-by-country statistics for all countries.

Table 1.1: Business Cycles in Poor, Emerging, and Rich Countries

                      United    All        Poor       Emerging   Rich
Statistic             States    Countries  Countries  Countries  Countries
Standard Deviations
σy                     2.94      6.22       6.08       8.71       3.32
σc/σy                  1.02      1.05       1.12       0.98       0.87
σg/σy                  1.93      2.26       2.46       2.00       1.73
σi/σy                  3.52      3.14       3.24       2.79       3.20
σx/σy                  3.49      3.07       3.08       2.82       3.36
σm/σy                  3.24      3.23       3.30       2.72       3.64
σtb/y                  0.94      2.34       2.12       3.80       1.25
σca/y                  1.11      2.16       2.06       3.08       1.39
Correlations with y
y                      1.00      1.00       1.00       1.00       1.00
c                      0.90      0.69       0.66       0.75       0.76
g/y                   -0.32     -0.02       0.08      -0.08      -0.39
i                      0.80      0.66       0.60       0.77       0.77
x                     -0.11      0.19       0.14       0.35       0.17
m                      0.31      0.24       0.14       0.50       0.34
tb/y                  -0.51     -0.15      -0.11      -0.21      -0.26
tb                    -0.54     -0.18      -0.14      -0.24      -0.25
ca/y                  -0.62     -0.28      -0.28      -0.24      -0.30
ca                    -0.64     -0.28      -0.28      -0.26      -0.31
Serial Correlations
y                      0.75      0.71       0.65       0.87       0.76
c                      0.82      0.66       0.62       0.74       0.75
g                      0.91      0.76       0.71       0.80       0.89
i                      0.67      0.56       0.49       0.72       0.67
x                      0.75      0.68       0.65       0.74       0.74
m                      0.63      0.65       0.61       0.74       0.69
tb/y                   0.79      0.61       0.59       0.62       0.69
ca/y                   0.79      0.57       0.55       0.52       0.71
Means
tb/y                  -1.5      -1.3       -1.6       -1.4       -0.0
(x+m)/y               18.9      36.5       32.5       46.4       40.4

Note. The variables y, c, g, i, x, m, tb ≡ (x − m), and ca denote, respectively, output, total private consumption, government spending, investment, exports, imports, the trade balance, and the current account. All variables are expressed in real per capita terms. The variables y, c, g, i, x, and m are quadratically detrended in logs and expressed in percent deviations from trend. The variables tb/y, g/y, and ca/y are quadratically detrended in levels. The variables tb and ca are scaled by the secular component of y and quadratically detrended. The sample contains 120 countries and covers, on average, the period 1965-2010 at annual frequency. Moments are averaged across countries using population weights. The sets of poor, emerging, and rich countries are defined as all countries with average PPP-converted GDP per capita in U.S. dollars of 2005 over the period 1990-2009 within the ranges 0-3,000, 3,000-25,000, and 25,000-∞, respectively. The lists of poor, emerging, and rich countries are presented in the appendix to this chapter. Data source: World Development Indicators, The World Bank.

The government does not appear to smooth its own consumption of goods and services either. On average, the standard deviation of public consumption is more than twice that of output.

Fact 1.2 (High Volatility Of Government Consumption) On average across countries, government consumption is twice as volatile as output.

Investment, exports, and imports are by far the most volatile components of the national income and product accounts, with standard deviations around three times as large as those of output. The trade-balance-to-output ratio and the current-account-to-output ratio are also highly volatile, with standard deviations of more than 2 percent of GDP.

Fact 1.3 (Global Ranking Of Volatilities) The ranking of cross-country average standard deviations from top to bottom is imports, investment, exports, government spending, consumption, and output.

We say that a variable is procyclical when it has a positive correlation with output. Table 1.1 reveals that consumption, investment, exports, and imports are all procyclical. Private consumption is the most procyclical component of aggregate demand.
Fact 1.4 (Procyclicality Of The Components of Aggregate Demand) On average, consumption, investment, exports, and imports are all positively correlated with output.

By contrast, the trade balance, the trade-balance-to-output ratio, the current account, and the current-account-to-output ratio are all countercyclical. This means that countries tend to import more than they export during booms and to export more than they import during recessions.

Fact 1.5 (Countercyclicality Of The Trade Balance And The Current Account) On average across countries, the trade balance, the trade-balance-to-output ratio, the current account, and the current-account-to-output ratio are all negatively correlated with output.

It is worth noting that the government-spending-to-output ratio is roughly acyclical. This empirical regularity runs contrary to the traditional Keynesian stabilization policy prescription, according to which the share of government spending in GDP should be increased during contractions and cut during booms.

Fact 1.6 (Acyclicality Of The Share Of Government Consumption in GDP) On average across countries, the share of government consumption in output is roughly uncorrelated with output.

This fact must be qualified along two dimensions. First, here the variable g denotes government consumption of goods. It does not include government investment, which may be more or less procyclical than government consumption. Second, g does not include transfers. To the extent that transfers are countercyclical and directed to households with high propensities to consume (presumably low-income households), total government spending may be more countercyclical than government consumption.

A standard measure of persistence in time series is the first-order serial correlation. Table 1.1 shows that on average across all countries, output is quite persistent, with a serial correlation of 0.71.
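The three families of statistics reported in table 1.1 (relative standard deviations, correlations with output, and first-order serial correlations) can be computed from any pair of detrended series; a minimal sketch of ours, assuming the cyclical components are already in hand:

```python
import numpy as np

def cycle_moments(y_cycle, x_cycle):
    """Business-cycle statistics for a detrended series x relative to
    detrended output y: relative standard deviation sigma_x/sigma_y,
    contemporaneous correlation with output, and first-order serial
    correlation of x."""
    y = np.asarray(y_cycle, dtype=float)
    x = np.asarray(x_cycle, dtype=float)
    rel_std = x.std() / y.std()
    corr_with_y = np.corrcoef(x, y)[0, 1]
    serial_corr = np.corrcoef(x[1:], x[:-1])[0, 1]  # corr(x_t, x_{t-1})
    return rel_std, corr_with_y, serial_corr
```

Averaging these country-by-country statistics with population weights reproduces the construction of the table.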
All components of aggregate demand as well as imports are broadly as persistent as output.

Fact 1.7 (Persistence) The components of aggregate supply (output and imports) and aggregate demand (consumption, government spending, investment, and exports) are all positively serially correlated.

Later in this chapter, we will investigate whether output is a persistent stationary variable or a nonstationary variable. This distinction is important for choosing the stochastic processes of shocks driving our theoretical models of the macroeconomy.

Business Cycles in Poor, Emerging, and Rich Countries

An important question in macroeconomics is whether business cycles look different in poor, emerging, and rich economies. For if this were the case, then a model that is successful in explaining business cycles in, say, rich countries may be less successful in explaining business cycles in emerging or poor countries. One difficulty with characterizing business cycles at different stages of development is that any definition of concepts such as poor, emerging, or rich country is necessarily arbitrary. For this reason, it is particularly important to be as explicit as possible in describing the classification method adopted. As the measure of development, we use the geometric average of PPP-converted GDP per capita in U.S. dollars of 2005 over the period 1990-2009. Loosely speaking, PPP-converted GDP in a given country is the value of all goods and services produced in that country evaluated at U.S. prices. By evaluating production of goods in different countries at the same prices, PPP conversion makes cross-country comparisons more sensible. To illustrate the concept of PPP conversion, suppose that in a given year country X produces 3 hair cuts and 1 ton of grain and that the unit prices of these items inside country X are 1 and 200 dollars, respectively. Then, the nonconverted measure of GDP is 203 dollars.
Suppose, however, that because a hair cut is not a service that can be easily traded internationally, its price is very different in country X and the United States (few people are willing to fly from one country to another just to take advantage of differences in hair cut prices). Specifically, assume that a hair cut costs 20 dollars in the United States, twenty times more than in country X. Assume also that, unlike hair cuts, grain is freely traded internationally, so its price is the same in both countries. Then, the PPP-converted measure of GDP in country X is 260 dollars. In this example, the PPP-adjusted measure is higher than its unadjusted counterpart, reflecting the fact that nontraded services are more expensive in the United States than in country X.

We define the set of poor countries as all countries with annual PPP-converted GDP per capita of up to 3,000 dollars, the set of emerging countries as all countries with PPP-converted GDP per capita between 3,000 and 25,000 dollars, and the set of rich countries as all countries with PPP-converted GDP per capita above 25,000 dollars. This definition delivers 40 poor countries, 58 emerging countries, and 22 rich countries. The lists of countries in each category appear in the appendix to this chapter. The fact that there are fewer rich countries than either emerging or poor countries makes sense because the distribution of GDP per capita across countries is highly skewed to the right; that is, the world is characterized by few very high-income countries and many low- to medium-income countries. Summary statistics for each income group are population-weighted averages of the corresponding country-specific summary statistics.

Table 1.1 shows that there are significant differences in volatility across income levels. Compared to rich countries, the rest of the world is a roller coaster.
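The PPP arithmetic of the hair-cut example and the income cutoffs just defined can be checked with a short computation (our own sketch; the function names are ours, and the prices and quantities are the chapter's illustrative numbers):

```python
def gdp_at_prices(quantities, domestic_prices, us_prices):
    """Value the same output basket at domestic and at U.S. prices,
    returning (nominal GDP, PPP-converted GDP)."""
    nominal = sum(q * p for q, p in zip(quantities, domestic_prices))
    ppp = sum(q * p for q, p in zip(quantities, us_prices))
    return nominal, ppp

def income_group(ppp_gdp_per_capita):
    """Chapter's classification by average PPP-converted GDP per capita
    (2005 U.S. dollars): poor below 3,000, emerging 3,000-25,000,
    rich above 25,000."""
    if ppp_gdp_per_capita < 3_000:
        return "poor"
    if ppp_gdp_per_capita < 25_000:
        return "emerging"
    return "rich"

# Country X: 3 hair cuts (1 dollar at home, 20 in the U.S.) and
# 1 ton of grain (200 dollars in both countries, since grain is traded).
nominal, ppp = gdp_at_prices([3, 1], [1, 200], [20, 200])
# nominal -> 203 dollars, ppp -> 260 dollars, as in the text
```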
A simple inspection of table 1.1 makes it clear that the central difference between business cycles in rich countries and business cycles in either emerging or poor countries is that rich countries are about half as volatile as emerging or poor countries. This is true not only for output, but also for all components of aggregate demand.

Fact 1.8 (Excess Volatility of Poor and Emerging Countries) Business cycles in rich countries are about half as volatile as business cycles in emerging or poor countries.

Explaining this impressive fact is perhaps the most important unfinished business in macroeconomics. Are poor and emerging countries more volatile than rich countries because they face more volatile shocks, such as terms of trade, country risk premia, productivity disturbances, or animal spirits? Or is their elevated instability the result of precarious economic institutions, manifested in, for example, poorly designed monetary and fiscal policies, political distortions, fragile financial systems, or weak enforcement of economic contracts, that tend to exacerbate the aggregate effects of changes in fundamentals? One of the objectives of this book is to shed light on these two non-mutually exclusive views.

A second important fact that emerges from the comparison of business-cycle statistics across income levels is that consumption smoothing is increasing with income per capita. In rich countries consumption is 13 percent less volatile than output, whereas in poor countries it is 12 percent more volatile. In emerging countries, consumption and output are about equally volatile.

Fact 1.9 (Less Consumption Smoothing in Poor and Emerging Countries) The relative consumption volatility is higher in poor and emerging countries than in rich countries.

Table 1.1 shows that the trade-balance-to-output ratio is countercyclical for poor, emerging, and rich countries.
That is, fact 1.5 holds not only unconditionally, but also conditional on the level of economic development. An important difference between business cycles in rich countries and the rest of the world that emerges from table 1.1 is that in rich countries the share of government consumption in GDP is significantly more countercyclical than in emerging or poor countries.

Fact 1.10 (The Countercyclicality of Government Spending Increases With Income) The share of government consumption is countercyclical in rich countries, but acyclical in emerging and poor countries.

Rich countries appear to deviate less from the classic Keynesian stabilization rule of boosting (reducing) the share of government spending during economic contractions (expansions) than do poor or emerging economies.

Country Size and Observed Business Cycles

Table 1.2 presents business-cycle facts disaggregated by country size. Countries are sorted into three size categories: small, medium, and large. These three categories are defined, respectively, as all countries with population in 2009 of less than 20 million, between 20 and 80 million, and more than 80 million.

The first regularity that emerges from table 1.2 is that, conditional on size, rich countries are at least half as volatile as emerging or poor countries. This means that fact 1.8 is robust to controlling for country size. To further characterize the partial correlations of output volatility with economic development and country size, we regress the standard deviation of output per capita of country i, denoted σ_{y,i}, onto a constant, the logarithm of country i's population in 2009, denoted ln pop_i, the logarithm of country i's average PPP-converted output per capita over the period 1990-2009, denoted ln y_i^PPP, and country i's openness share, denoted xmy_i. All 120 countries in the sample are included.
The regression yields

  σ_{y,i} = 15.0 − 0.08 ln pop_i − 0.78 ln y_i^PPP + 0.86 xmy_i + ε_i,    R² = 0.07
  t-stat:  (3.5)  (−0.4)                             (0.9)

This regression shows that both higher income per capita and larger country size tend to be associated with lower output volatility. At the same time, more open economies appear to be more volatile. Note, however, that population and openness are statistically insignificant.

Table 1.2 suggests that the consumption-output volatility ratio falls with income per capita and, less strongly, also with country size. This relationship is corroborated by the following regression:

  σ_{c,i}/σ_{y,i} = 1.8 − 0.06 ln pop_i − 0.11 ln y_i^PPP + 0.14 xmy_i + ε_i,    R² = 0.19
  t-stat:          (4.1) (−2.7)                             (+1.4)

According to this regression, more populous and richer countries tend to have a lower relative volatility of consumption. Taking into account that the volatility of output falls with size and income, this means that the volatility of consumption falls even faster than that of income as size and income increase. These results generalize fact 1.9, according to which consumption smoothing increases with income.

Table 1.2: Business Cycles in Small, Medium, and Large Countries

                    All Countries       Poor Countries      Emerging Countries   Rich Countries
Statistic           S      M      L     S      M      L     S      M      L      S      M      L
Standard Deviations
σy                  8.00   7.92   5.55  8.17   9.46   5.63  9.50   8.99   7.86   4.31   3.05   3.29
σc/σy               1.12   0.96   1.07  1.39   1.05   1.11  0.97   0.93   1.08   0.92   0.93   0.84
σg/σy               2.22   2.21   2.28  2.92   2.86   2.40  1.85   2.05   1.99   1.66   1.71   1.76
σi/σy               3.65   3.23   3.06  4.68   4.01   3.08  2.97   2.86   2.58   3.07   3.07   3.28
σx/σy               2.46   3.29   3.07  2.81   3.94   3.01  2.23   2.92   2.95   2.23   3.33   3.56
σm/σy               2.55   3.12   3.33  2.96   3.45   3.30  2.25   2.68   3.02   2.36   3.80   3.77
σtb/y               4.29   3.64   1.76  5.62   3.82   1.77  4.00   4.39   2.75   2.29   1.47   0.98
σca/y               3.68   2.97   1.84  4.84   3.40   1.87  3.55   3.45   2.39   2.37   1.47   1.23
Correlations with y
y                   1.00   1.00   1.00  1.00   1.00   1.00  1.00   1.00   1.00   1.00   1.00   1.00
c                   0.64   0.71   0.69  0.58   0.74   0.66  0.73   0.70   0.84   0.55   0.70   0.82
g/y                -0.03  -0.01  -0.02  0.02   0.24   0.07  0.03   0.00  -0.26  -0.26  -0.40  -0.40
i                   0.60   0.70   0.66  0.45   0.55   0.61  0.72   0.76   0.82   0.63   0.74   0.81
x                   0.54   0.42   0.08  0.53   0.58   0.08  0.53   0.36   0.25   0.58   0.37   0.00
m                   0.59   0.57   0.11  0.53   0.62   0.07  0.62   0.57   0.34   0.63   0.47   0.23
tb/y               -0.12  -0.24  -0.13 -0.04  -0.25  -0.10 -0.21  -0.24  -0.17  -0.11  -0.24  -0.29
tb                 -0.21  -0.26  -0.15 -0.18  -0.33  -0.12 -0.32  -0.24  -0.21  -0.04  -0.24  -0.29
ca/y               -0.17  -0.22  -0.30 -0.17  -0.11  -0.30 -0.20  -0.34  -0.11  -0.11  -0.08  -0.44
ca                 -0.21  -0.25  -0.30 -0.23  -0.17  -0.29 -0.25  -0.36  -0.13  -0.08  -0.10  -0.43
Serial Correlations
y                   0.83   0.83   0.66  0.76   0.84   0.62  0.89   0.84   0.90   0.83   0.80   0.74
c                   0.67   0.69   0.66  0.61   0.61   0.62  0.70   0.71   0.81   0.73   0.75   0.75
g                   0.73   0.80   0.75  0.61   0.74   0.72  0.78   0.80   0.81   0.87   0.89   0.90
i                   0.66   0.66   0.53  0.62   0.64   0.47  0.67   0.70   0.79   0.71   0.61   0.70
x                   0.67   0.75   0.67  0.58   0.73   0.65  0.74   0.76   0.70   0.68   0.75   0.74
m                   0.69   0.70   0.63  0.68   0.68   0.60  0.71   0.72   0.80   0.66   0.71   0.68
tb/y                0.54   0.58   0.63  0.50   0.51   0.61  0.52   0.58   0.74   0.67   0.68   0.70
ca/y                0.42   0.50   0.60  0.36   0.42   0.57  0.40   0.46   0.65   0.56   0.67   0.75
Means
tb/y               -5.6   -1.5   -0.8 -10.4   -5.4   -0.7  -5.2   -0.0   -1.7    3.1    0.0   -0.6
(x+m)/y            73.9   48.6   29.0  57.7   48.9   29.5  69.2   49.7   29.9  116.8   45.2   25.3

Note. See table 1.1. The sets of small (S), medium (M), and large (L) countries are defined as countries with 2011 populations of, respectively, less than 20 million, between 20 and 80 million, and more than 80 million.

Finally, table 1.2 shows that smaller countries are more open than larger countries. This result holds unconditionally as well as conditional upon the level of income.

Hodrick-Prescott Filtering

We now consider an alternative detrending method developed by Hodrick and Prescott (1997), known as the Hodrick-Prescott, or HP, filter. The HP filter identifies the cyclical component, y_t^c, and the trend component, y_t^s, of a given series y_t, for t = 1, 2, . . . , T,
as the solution to the minimization problem

  min over {y_t^c, y_t^s}  Σ_{t=1}^{T} (y_t^c)^2 + λ Σ_{t=2}^{T−1} [ (y_{t+1}^s − y_t^s) − (y_t^s − y_{t−1}^s) ]^2,

subject to y_t = y_t^c + y_t^s. The appendix provides the first-order conditions and solution to this problem. According to this formula, the HP trend is the result of a trade-off between minimizing the variance of the cyclical component and keeping the growth rate of the trend constant. This trade-off is governed by the parameter λ. The larger is λ, the more penalized are changes in the growth rate of the trend. In the limit, as λ goes to infinity, the trend component associated with the HP filter coincides with the linear trend. At the other extreme, as λ goes to zero, all of the variation in the time series is attributed to the trend and the cyclical component is nil. Business-cycle studies that use data sampled at an annual frequency typically assume a value of λ of 100. Figure 1.2 displays the trend in Argentine real per capita GDP implied by the HP filter for λ equal to 100.

[Figure 1.2: HP Filtered Trend of Argentine Output (λ = 100), showing y_t and the HP trend y_t^s.]

The HP filter attributes a significant fraction of the output decline during the lost decade (1980-1989) to the trend. By contrast, the log-quadratic trend is monotonically increasing during this period, implying that the lost decade was a cyclical phenomenon.

[Figure 1.3: Cyclical Component of Argentine GDP, HP Filter (λ = 100) Versus Quadratic Time Trend, in percent deviations from trend.]

Figure 1.3 displays the cyclical component of Argentine output according to the HP filter (λ = 100) and the log-quadratic filter. The correlation between the two cyclical components is 0.70, indicating that for the most part they identify the same cyclical movements. However, the two filters imply quite different amplitudes for the Argentine cycle.
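The HP minimization has a closed-form solution in matrix form, trend = (I + λD′D)⁻¹ y, where D is the (T−2)×T second-difference matrix. A minimal Python sketch of ours (dense linear algebra for clarity; a sparse solver would be preferable for long series):

```python
import numpy as np

def hp_filter(y, lam=100.0):
    """Hodrick-Prescott decomposition: minimize sum (y_t - s_t)^2
    plus lam times the sum of squared second differences of the trend s.
    The first-order conditions give s = (I + lam * D'D)^{-1} y."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(T) + lam * D.T @ D, y)
    return trend, y - trend  # (y_t^s, y_t^c)
```

With lam=100 this corresponds to the annual-frequency calibration used in the text; lam=6.25 gives the Ravn-Uhlig calibration discussed below. A linear series is returned unchanged as trend, since its second differences carry no penalty.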
The standard deviation of the cyclical component of output is 10.8 percent according to the log-quadratic filter, but only 5.7 percent according to the HP filter. The reason for this large reduction in the volatility of the cycle when applying the HP filter is that under this filter the trend moves much more closely with the raw series.

[Figure 1.4: Trend of Argentine Output According to the HP Filter, λ = 6.25. The panel plots y_t and its HP trend y_t^s.]

The value of λ plays an important role in determining the amplitude of the business cycle implied by the HP filter. Recently, Ravn and Uhlig (2002) have suggested a value of λ of 6.25 for annual data. Under this calibration, the standard deviation of the cyclical component of Argentine GDP drops significantly to 3.6 percent. Figure 1.4 displays the actual Argentine GDP and the trend implied by the HP filter when λ takes the value 6.25. In this case, the trend moves much closer with the actual series. In particular, the HP filter now attributes the bulk of the 1989 crisis and much of the 2001 crisis to the trend. This is problematic, especially for the 2001 depression. For this was a V-shaped, relatively short contraction followed by a swift recovery. This suggests that the 2001 crisis was a business-cycle phenomenon. By contrast, the HP trend displays a significant contraction in 2001, suggesting that the crisis was to a large extent noncyclical. For this reason, we calibrate λ at 100 for the subsequent analysis. Table 1.3 displays business-cycle statistics implied by the HP filter for λ = 100. The central difference between the business-cycle facts derived from quadratic detrending and HP filtering is that under the latter detrending method the volatility of all variables falls by about a third. In particular, the average cross-country standard deviation of output falls from 6.2 percent under quadratic detrending to 3.8 percent under HP filtering.
In all other respects, the two filters produce very similar business-cycle facts. In particular, facts 1.1-1.10 are robust to applying the HP filter with λ = 100.

Growth Rates

Thus far, we have detrended output and all other components of aggregate demand using either a log-quadratic trend or the HP filter. An alternative to these two approaches is to assume that these variables have a stochastic trend. Here, we explore this avenue. Specifically, we assume that the levels of the variables of interest are nonstationary, but that their growth rates are stationary. That is, we assume that the logarithm of output and the components of aggregate demand are integrated of order one. Table 1.4 displays two statistical tests that provide evidence in favor of modeling these time series as stationary in growth rates and nonstationary in levels. The top panel of the table displays the results of applying the ADF (augmented Dickey-Fuller) test to the logarithm of real per capita GDP. The ADF test evaluates the null hypothesis that a univariate representation of the time series in question has a unit root against the alternative hypothesis that it does not. The table displays population-weighted cross-country averages of the decision value. The decision value is unity if the null hypothesis is
Table 1.3: HP-Filtered Business Cycles

Statistic             All Countries   Poor Countries   Emerging Countries   Rich Countries
Standard Deviations
σy                         3.79            4.12               3.98               2.07
σc/σy                      1.08            1.09               1.23               0.87
σg/σy                      2.29            2.53               2.29               1.23
σi/σy                      3.77            3.80               3.79               3.62
σx/σy                      3.50            3.47               3.67               3.42
σm/σy                      3.65            3.70               3.52               3.63
σtb/y                      1.79            1.64               2.92               0.89
σca/y                      1.78            1.71               2.63               1.02
Correlations with y
y                          1.00            1.00               1.00               1.00
c                          0.60            0.53               0.68               0.82
g/y                       -0.08            0.02              -0.06              -0.56
i                          0.69            0.65               0.71               0.86
x                          0.19            0.18               0.13               0.30
m                          0.32            0.23               0.46               0.58
tb/y                      -0.18           -0.08              -0.34              -0.37
tb                        -0.20           -0.11              -0.36              -0.36
ca/y                      -0.32           -0.29              -0.39              -0.38
ca                        -0.33           -0.29              -0.41              -0.37
Serial Correlations
y                          0.46            0.39               0.60               0.55
c                          0.36            0.29               0.44               0.53
g                          0.51            0.48               0.52               0.65
i                          0.34            0.27               0.45               0.46
x                          0.47            0.47               0.44               0.46
m                          0.42            0.43               0.44               0.33
tb/y                       0.39            0.36               0.42               0.47
ca/y                       0.39            0.36               0.39               0.54
Means
tb/y                       -1.3            -1.6               -1.4               -0.0
(x+m)/y                    36.5            32.5               46.4               40.4

Note. See table 1.1. The variables y, c, g, i, x, and m are HP filtered in logs and expressed in percent deviations from trend, and the variables tb/y and ca/y are HP filtered in levels and expressed in percentage points of output. The variables tb and ca were scaled by the secular component of GDP and then HP-filtered. The parameter λ of the HP filter takes the value 100.

Table 1.4: ADF and KPSS Tests for Output in Poor, Emerging, and Rich Countries

Lags                  All Countries   Poor Countries   Emerging Countries   Rich Countries
ADF Test
0                          0.5             0.7                0.1                0.3
1                          0.3             0.3                0.0                0.3
2                          0.3             0.4                0.0                0.3
3                          0.2             0.3                0.0                0.1
AIC for Lag Length         0.8             0.9                1.0                0.5
KPSS Test
0                          1.0             1.0                1.0                1.0
1                          1.0             1.0                0.9                0.9
2                          1.0             1.0                0.9                0.9
3                          0.9             0.9                0.8                0.9
AIC for Lag Length         0.0             0.0                0.0                0.0

Note. See notes to table 1.1. Entries correspond to population-weighted decision values for the ADF and KPSS tests. For each country, a decision value of 1 indicates rejection of the null at the 5% confidence level and a decision value of 0 indicates failure to reject the null.
The null hypothesis is unit root under the ADF test and all roots within the unit circle in the KPSS test. Decision values are based on an F test. AIC stands for the population-weighted cross-country average of the lag length suggested by the Akaike information criterion.

rejected and 0 if it cannot be rejected. The table shows that the null hypothesis is rejected in 30 percent of the countries at the lag length of 1 year suggested by the Akaike information criterion (AIC), providing support for the unit-root hypothesis. The lower panel of table 1.4 displays the results of applying the KPSS (Kwiatkowski-Phillips-Schmidt-Shin) test to the logarithm of real output. This test evaluates the null hypothesis that the univariate representation of the logarithm of output has no unit root versus the alternative hypothesis that it does. For the lag length favored by the AIC test, the decision value is unity for virtually all countries, which suggests that the hypothesis of stationarity in levels is strongly rejected. The results of the ADF and KPSS tests have to be interpreted with caution. The reason is that both are based on the assumption that the time series in question has a univariate representation. As we will see in the following chapters, in general, theoretical models of the business cycle do not imply that output has a univariate representation.

Table 1.5 displays standard deviations, correlations with output growth, and serial correlations of the growth rates of output, private consumption, government consumption, investment, exports, and imports. Most of the ten business-cycle facts obtained under quadratic detrending also hold true when stationarity is induced by first-differencing the data. For example, world business cycles are highly volatile (fact 1.1). The cross-country average volatility of output growth is twice as large as the volatility of U.S. output growth (not shown). Poor and emerging countries are twice as volatile as rich countries (fact 1.8).
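The mechanics of the unit-root evidence in table 1.4 can be illustrated with the simplest, non-augmented (lag-0) version of the Dickey-Fuller test: regress Δy_t on a constant and y_{t−1} and compare the t-statistic on y_{t−1} with the Dickey-Fuller 5-percent critical value (about −2.86 in the with-constant case). The sketch below uses simulated series, not the GDP data of table 1.4:

```python
import numpy as np

def df_tstat(y):
    """t-statistic on y_{t-1} in the regression dy_t = a + b * y_{t-1} + e_t.
    Under the unit-root null, b = 0; a large negative t-statistic
    (below roughly -2.86 at the 5% level) rejects the unit root."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    se_b = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se_b

rng = np.random.default_rng(1)
T = 500
random_walk = np.cumsum(rng.standard_normal(T))      # nonstationary in levels
stationary = np.zeros(T)
for t in range(1, T):
    stationary[t] = 0.5 * stationary[t - 1] + rng.standard_normal()
```

For the stationary AR(1), the statistic lies far below the critical value; for the random walk, it typically does not, mirroring the level-versus-growth-rate evidence in table 1.4.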
The volatility of consumption growth relative to output growth is much higher in emerging and poor countries than in rich countries (fact 1.9). The trade-balance share is negatively correlated with output growth (fact 1.5). Finally, we note that, predictably, the serial correlations of growth rates are much lower than their (detrended) level counterparts.

Table 1.5: First Differenced Business Cycles

Statistic             All Countries   Poor Countries   Emerging Countries   Rich Countries
Standard Deviations
σ∆y                        4.39            4.94               4.08               2.38
σ∆c/σ∆y                    1.14            1.14               1.34               0.85
σ∆g/σ∆y                    2.14            2.28               2.39               1.17
σ∆i/σ∆y                    3.81            3.80               4.06               3.49
σ∆x/σ∆y                    3.37            3.22               3.98               3.22
σ∆m/σ∆y                    3.60            3.50               3.84               3.76
σtb/y                      2.34            2.12               3.80               1.25
σca/y                      2.16            2.06               3.08               1.39
Correlations with ∆y
∆y                         1.00            1.00               1.00               1.00
∆c                         0.60            0.54               0.64               0.79
g/y                       -0.10           -0.02              -0.18              -0.32
∆i                         0.64            0.59               0.66               0.83
∆x                         0.21            0.18               0.15               0.42
∆m                         0.33            0.26               0.40               0.57
tb/y                      -0.10           -0.08              -0.20              -0.07
ca/y                      -0.07           -0.06              -0.12              -0.07
Serial Correlations
∆y                         0.29            0.28               0.29               0.32
∆c                         0.02           -0.03               0.02               0.27
∆g                         0.18            0.14               0.11               0.48
∆i                         0.01           -0.01               0.03               0.08
∆x                         0.07            0.08              -0.00               0.10
∆m                         0.04            0.08              -0.02              -0.04
tb/y                       0.61            0.59               0.62               0.69
ca/y                       0.57            0.55               0.52               0.71

Note. See notes to table 1.1. The variables ∆y, ∆c, ∆g, ∆i, ∆x, and ∆m denote, respectively, the log differences of output, consumption, government consumption, investment, exports, and imports. The variables g/y, tb/y, and ca/y are quadratically detrended in levels. All variables are expressed in percent.

Duration and Amplitude of Business Cycles in Emerging and Developed Countries

We have documented that emerging countries display significantly more output volatility than developed countries. We now decompose business cycles into contractions and expansions and estimate for each of these phases of the cycle its duration and amplitude.
Calderón and Fuentes (2010) adopt a classical approach to characterizing business cycles in emerging and developed countries, which consists of identifying peaks and troughs in the logarithm of real quarterly GDP. They define a peak as an output observation that is larger than the two immediately preceding and succeeding observations. Formally, letting y_t denote the logarithm of real GDP, a peak takes place when y_t > y_{t+j}, for j = −2, −1, 1, 2. Similarly, a trough is defined as an output observation that is lower than its two immediately preceding and succeeding observations, that is, as a level of y_t satisfying y_t < y_{t+j}, for j = −2, −1, 1, 2. The duration of a cycle is the period of time between one peak and the next. The duration of a contraction is the period of time between a peak and the next trough. And the duration of an expansion is the period of time that it takes to go from a trough to the next peak. The amplitude of a contraction is the percentage fall in output between a peak and the next trough. The amplitude of an expansion is the percentage increase in output between a trough and the next peak. Table 1.6 displays the average duration and amplitude of business cycles in two groups of countries, one consisting of 12 Latin American countries and the other of 12 OECD countries. We will identify the former group with emerging countries and the latter with developed countries. The table shows that contractions in emerging and developed countries have equal durations of 3-4 quarters. However, the amplitude of contractions is much larger in emerging countries than in developed countries (6.2 versus 2.2 percent of GDP).
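These peak and trough definitions translate directly into code. Below is a small sketch applied to a synthetic quarterly log-GDP path (the series is illustrative and is not the Calderón and Fuentes data):

```python
def peaks_and_troughs(y):
    """Classical cycle dating: t is a peak if y[t] exceeds the two
    observations on either side (y[t] > y[t+j] for j = -2, -1, 1, 2),
    and a trough if y[t] lies below them."""
    peaks, troughs = [], []
    for t in range(2, len(y) - 2):
        window = [y[t + j] for j in (-2, -1, 1, 2)]
        if all(y[t] > v for v in window):
            peaks.append(t)
        elif all(y[t] < v for v in window):
            troughs.append(t)
    return peaks, troughs

def contractions(y):
    """(duration, amplitude) of each peak-to-trough episode, with
    amplitude the log fall times 100 (approximate percent)."""
    peaks, troughs = peaks_and_troughs(y)
    episodes = []
    for p in peaks:
        later = [t for t in troughs if t > p]
        if later:
            t = later[0]
            episodes.append((t - p, 100.0 * (y[p] - y[t])))
    return episodes

# Synthetic log output: 8-quarter expansions (+1% per quarter)
# followed by 3-quarter contractions (-2% per quarter)
y, level = [], 0.0
for _ in range(3):
    for _ in range(8):
        level += 0.01
        y.append(level)
    for _ in range(3):
        level -= 0.02
        y.append(level)
```

On this series, the two interior contractions are dated with a duration of 3 quarters and an amplitude of about 6 percent each; expansion durations and amplitudes can be computed symmetrically from troughs to the next peak.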
Table 1.6: Duration and Amplitude of Business Cycles in Emerging and Developed Economies

                        Duration                      Amplitude
Group of Countries   Contraction   Expansion    Contraction   Expansion
Latin America            3.5          16.0          6.2          21.3
OECD                     3.6          23.8          2.2          20.2

Source: Calderón and Fuentes (2010). Note: The data is quarterly real GDP from 1980:1 to 2006:4. The countries included in the Latin America group are: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Ecuador, Mexico, Paraguay, Peru, Uruguay, and Venezuela. The countries included in the OECD group are Australia, Canada, France, Germany, Italy, Japan, New Zealand, Portugal, Spain, Sweden, United Kingdom, and the United States.

Comparing the durations of expansions to those of contractions indicates that expansions are much longer than contractions and that expansions are relatively shorter in emerging countries than in developed countries (16 versus 23.8 quarters). At the same time, the amplitude of expansions is about the same in both groups of countries (about 20 percent of GDP). Finally, emerging countries are more cyclical than developed countries in the sense that in the former complete cycles are shorter, 20 quarters versus 27 quarters. (A complete cycle is computed as the sum of the average durations of contractions and expansions.) The general pattern that emerges from this classical approach to characterizing business cycles is of emerging countries being more volatile because they display more cycles per unit of time and because they experience deeper contractions.

Business Cycle Facts With Quarterly Data

Thus far, we have empirically characterized business cycles around the world using annual data. Because annual data on national income and product accounts is readily available, this choice made it possible to derive business cycle facts for a large set of countries and for a relatively long period of time.
Many business-cycle studies, however, especially those focused on developed economies, use quarterly data. For this reason, in this section we characterize business cycle facts using quarterly data. Gathering data at a quarterly frequency turns out to be much more difficult than doing so at an annual frequency. Most countries have some quarterly data, but often sample periods are short, typically less than 20 years. The problem with such short samples is that it becomes difficult to separate the trend from the cyclical component. For inclusion in the data set, we continue to require that a country has at least 30 years (or 120 quarters) of quarterly data for output, consumption, investment, exports, imports, and public consumption. This restriction reduces significantly the number of countries for which data is available relative to the case of annual data. Specifically, our quarterly panel contains no poor countries, 11 emerging countries, and 17 rich countries. By comparison, our annual panel contains 40 poor countries, 58 emerging countries, and 22 rich countries. The sample period is 1980:Q1 to 2012:Q4 with two exceptions, Uruguay and Argentina.4 The data is available online at http://www.columbia.edu/~mu2166/book/

Table 1.7 displays business-cycle statistics at quarterly frequency for emerging and rich countries and for three different ways to measure the cyclical component, namely, log-quadratic detrending, HP filtering, and first differencing. Overall, the business-cycle facts that emerge from quarterly data are similar to those identified using annual data. In particular, table 1.7 shows that:

a. Investment, government spending, exports, and imports are more volatile than output, and private consumption is about as volatile as output.
b. Consumption, investment, exports, and imports are all procyclical, whereas the trade balance is countercyclical.
c.
Output, consumption, investment, exports, and imports are all positively serially correlated.
d. Emerging countries are more volatile than rich countries.

4 The data for Uruguay begins in 1983:Q1 and the time series for private and public consumption in Argentina begin in 1993:Q1.

Table 1.7: Business Cycles in Emerging and Rich Countries, Quarterly Data, 1980Q1-2012Q4

Log-Quadratic Time Trend
Statistic              All     Emerging    Rich
Standard Deviations
σy                     3.26      4.27      2.74
σc/σy                  0.99      1.23      0.87
σg/σy                  1.46      2.07      1.15
σi/σy                  3.44      3.67      3.31
σx/σy                  3.77      3.97      3.67
σm/σy                  3.52      3.55      3.51
σtb/y                  1.80      2.93      1.21
Correlations with y
y                      1.00      1.00      1.00
c                      0.83      0.72      0.88
g/y                   -0.43     -0.11     -0.59
i                      0.86      0.82      0.88
x                      0.17     -0.00      0.26
m                      0.60      0.48      0.66
tb/y                  -0.44     -0.52     -0.41
tb                    -0.44     -0.51     -0.40
Serial Correlations
y                      0.94      0.91      0.95
c                      0.91      0.87      0.93
g                      0.87      0.79      0.91
i                      0.91      0.87      0.93
x                      0.92      0.90      0.93
m                      0.90      0.88      0.91
tb/y                   0.88      0.85      0.89
Means
tb/y                   -0.1      0.2       -0.2
(x+m)/y                43.8      45.7      42.8

HP Filter
Statistic              All     Emerging    Rich
Standard Deviations
σy                     1.80      2.60      1.38
σc/σy                  1.01      1.32      0.85
σg/σy                  1.30      2.02      0.93
σi/σy                  3.73      3.88      3.65
σx/σy                  4.01      3.80      4.11
σm/σy                  4.44      3.65      4.84
σtb/y                  1.09      1.95      0.64
Correlations with y
y                      1.00      1.00      1.00
c                      0.78      0.78      0.78
g/y                   -0.58     -0.22     -0.78
i                      0.84      0.77      0.87
x                      0.43     -0.05      0.67
m                      0.68      0.52      0.76
tb/y                  -0.39     -0.56     -0.31
tb                    -0.39     -0.56     -0.31
Serial Correlations
y                      0.84      0.80      0.85
c                      0.76      0.74      0.76
g                      0.56      0.44      0.62
i                      0.78      0.71      0.82
x                      0.80      0.73      0.83
m                      0.80      0.72      0.84
tb/y                   0.70      0.71      0.69

First Differences
Statistic              All     Emerging    Rich
Standard Deviations
σy                     1.12      1.70      0.81
σc/σy                  1.18      1.48      1.03
σg/σy                  2.07      3.33      1.41
σi/σy                  4.32      4.95      3.99
σx/σy                  4.38      4.65      4.25
σm/σy                  4.60      4.26      4.77
σtb/y                  1.80      2.93      1.21
Correlations with y
y                      1.00      1.00      1.00
c                      0.61      0.62      0.61
g/y                   -0.16     -0.17     -0.15
i                      0.65      0.57      0.70
x                      0.33      0.04      0.48
m                      0.44      0.37      0.47
tb/y                  -0.02     -0.11      0.02
Serial Correlations
y                      0.33      0.24      0.37
c                      0.11      0.07      0.13
g                     -0.14     -0.25     -0.09
i                      0.14     -0.01      0.22
x                      0.25      0.06      0.35
m                      0.27      0.05      0.38
tb/y                   0.88      0.85      0.89

Note. The variables y, c, g, i, x, m, and tb ≡ (x − m), denote, respectively, output, total private consumption, government spending, investment, exports, imports, and the trade balance. All variables are real and per capita.
For quadratic detrending or HP filtering, the variables y, c, g, i, x, and m are detrended in logs and expressed in percent deviations from trend. For first differencing, y, c, g, i, x, and m denote log differences. The variables tb/y and g/y are detrended in levels. The variable tb is scaled by the secular component of y and detrended. The sample contains 11 emerging and 17 rich countries. Moments are averaged across countries using population weights. The sets of emerging and rich countries are defined as all countries with average PPP converted GDP per capita in U.S. dollars of 2005 over the period 1990-2009 within the ranges 3,000-25,000 and 25,000-∞, respectively. Rich Countries: Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Hong Kong, Italy, Japan, Netherlands, Norway, Sweden, Switzerland, United Kingdom, United States. Emerging Countries: Argentina, Israel, South Korea, Mexico, New Zealand, Peru, Portugal, South Africa, Spain, Turkey, and Uruguay. The data sources are presented in the appendix to this chapter.

e. Consumption is more volatile than output in emerging countries, but less volatile than output in rich countries.
f. The share of government spending in output is more countercyclical in rich countries than in emerging countries.

As expected, the serial correlation of all macroeconomic indicators is higher in quarterly data than in annual data. Table 1.9 in the appendix presents business-cycle statistics for each individual country in the sample.

1.9 Appendix

1.9.1 Countries With At Least 30 Years of Annual Data

The sample consists of 120 countries. There are 22 small poor countries, 11 medium-size poor countries, 7 large poor countries, 41 small emerging countries, 14 medium-size emerging countries, 3 large emerging countries, 14 small rich countries, 5 medium-size rich countries, and 3 large rich countries. The individual countries belonging to each group are listed below.
Small Poor Countries: Benin, Bhutan, Burkina Faso, Burundi, Central African Republic, Comoros, Gambia, Guyana, Honduras, Lesotho, Malawi, Mali, Mauritania, Mongolia, Niger, Papua New Guinea, Rwanda, Senegal, Sierra Leone, Togo, Zambia, Zimbabwe.

Medium Size Poor Countries: Cameroon, Congo, Dem. Rep., Cote d'Ivoire, Ghana, Kenya, Madagascar, Mozambique, Nepal, Sri Lanka, Sudan, Uganda.

Large Poor Countries: Bangladesh, China, Ethiopia, India, Indonesia, Pakistan, Philippines.

Small Emerging Countries: Albania, Antigua and Barbuda, Bahrain, Barbados, Bolivia, Botswana, Bulgaria, Chile, Costa Rica, Cuba, Cyprus, Dominica, Dominican Republic, Ecuador, El Salvador, Fiji, Gabon, Greece, Grenada, Guatemala, Hungary, Israel, Jordan, Malta, Mauritius, Namibia, New Zealand, Panama, Paraguay, Portugal, Puerto Rico, Seychelles, St. Kitts and Nevis, St. Lucia, St. Vincent and the Grenadines, Suriname, Swaziland, Tonga, Trinidad and Tobago, Tunisia, Uruguay.

Medium Size Emerging Countries: Algeria, Argentina, Colombia, Iran, South Korea, Malaysia, Morocco, Peru, South Africa, Spain, Syria, Thailand, Turkey, Venezuela.

Large Emerging Countries: Brazil, Egypt, Mexico.

Small Rich Countries: Austria, Belgium, Denmark, Finland, Hong Kong, Iceland, Ireland, Luxembourg, Macao, Netherlands, Norway, Singapore, Sweden, Switzerland.

Medium Size Rich Countries: Australia, Canada, France, Italy, United Kingdom.

Large Rich Countries: Germany, Japan, United States.

Derivation of the HP Filter

The first-order conditions associated with the problem of choosing the series {y_t^c, y_t^s}_{t=1}^T to minimize (1.2) subject to (1.1) are

y_1 = y_1^s + λ(y_1^s − 2y_2^s + y_3^s),
y_2 = y_2^s + λ(−2y_1^s + 5y_2^s − 4y_3^s + y_4^s),
y_t = y_t^s + λ(y_{t−2}^s − 4y_{t−1}^s + 6y_t^s − 4y_{t+1}^s + y_{t+2}^s);   t = 3, …, T − 2,
y_{T−1} = y_{T−1}^s + λ(y_{T−3}^s − 4y_{T−2}^s + 5y_{T−1}^s − 2y_T^s),

and

y_T = y_T^s + λ(y_{T−2}^s − 2y_{T−1}^s + y_T^s).

Letting Y^s ≡ [y_1^s y_2^s … y_T^s] and Y ≡ [y_1 y_2
… y_T], the above optimality conditions can be written in matrix form as

Y = (I + λA)Y^s,

where I is the T × T identity matrix and A is the following T × T matrix of constants:

        |  1  -2   1   0   0   0  ...                0 |
        | -2   5  -4   1   0   0  ...                0 |
        |  1  -4   6  -4   1   0  ...                0 |
        |  0   1  -4   6  -4   1   0  ...            0 |
  A  =  |        ...        ...        ...             |
        |  0  ...   0   1  -4   6  -4   1   0         |
        |  0  ...       0   1  -4   6  -4   1         |
        |  0  ...           0   1  -4   5  -2         |
        |  0  ...               0   1  -2   1         |

Solving for Y^s, one obtains

Y^s = (I + λA)^{-1} Y.

Finally, letting Y^c ≡ [y_1^c y_2^c … y_T^c], we have that

Y^c = Y − Y^s.

Country-By-Country Business Cycle Statistics At Annual And Quarterly Frequency
Table 1.8: Business Cycles, Log-Quadratic Detrending, Annual Data, 1965-2010

[The body of this table was garbled in extraction. It reports, country by country, standard deviations, correlations with output, serial correlations, and the means of tb/y and (x + m)/y. Emerging Countries: Argentina, Israel, South Korea, Mexico, New Zealand, Peru, Portugal, South Africa, Spain, Turkey, Uruguay. Rich Countries: Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Hong Kong, Italy, Japan, Netherlands, Norway, Sweden, Switzerland, United Kingdom, United States.]
Note. The variables y, c, g, i, x, m, and tb ≡ (x − m), denote, respectively, output, total private consumption, government spending, investment, exports, imports, and the trade balance. All variables are real and per capita. The variables y, c, g, i, x, and m are detrended in logs and expressed in percent deviations from trend. The variables tb/y and g/y are detrended in levels. The variable tb is scaled by the secular component of y and detrended. This table includes all countries for which we have not only 30 years of annual data but also 30 years of quarterly data. The country-specific sample periods and data sources are given in the online appendix to this chapter, available at http://www.columbia.edu/~mu2166/book.
Table 1.9: Business Cycles, Log-Quadratic Detrending, Quarterly Data, 1980Q1-2012Q4

[The body of this table was garbled in extraction. It reports, country by country, the same statistics as table 1.8, computed from quarterly data for 1980Q1-2012Q4. Emerging Countries: Argentina, Israel, South Korea, Mexico, New Zealand, Peru, Portugal, South Africa, Spain, Turkey, Uruguay. Rich Countries: Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Hong Kong, Italy, Japan, Netherlands, Norway, Sweden, Switzerland, United Kingdom, United States.]
Note.
The variables y, c, g, i, x, m, and tb ≡ (x − m), denote, respectively, output, total private consumption, government spending, investment, exports, imports, and the trade balance. All variables are real and per capita. The variables y, c, g, i, x, and m are detrended in logs and expressed in percent deviations from trend. The variables tb/y and g/y are detrended in levels. The variable tb is scaled by the secular component of y and detrended. Only countries with at least 30 years of quarterly data are included. The country-specific sample periods and data sources are given in the online appendix to this chapter, available at http://www.columbia.edu/~mu2166/book.

Exercise 1.1 (Business Cycle Regularities in South Korea and the United States)

In this exercise, you are asked to analyze the extent to which the numbered business cycle facts discussed in this chapter apply to (a) South Korea and (b) the United States. To this end, compute the relevant business cycle statistics for the following four alternative detrending methods: (a) log-linear detrending; (b) log-quadratic detrending; (c) Hodrick-Prescott filtering with λ = 100; and (d) Hodrick-Prescott filtering with λ = 6.25. Make a two-by-two graph showing the natural logarithm of real per capita GDP and the trend, one panel per trend. Discuss how the detrending method influences the volatility of the cyclical component of output. Also discuss which detrending method identifies recessions for the U.S. most in line with the NBER business cycle dates. The data should be downloaded from the World Bank's WDI database. As the sample period, use 1960 to 2011 for South Korea and 1965-2011 for the United States. Specifically, use the following time series to construct the required business cycle statistics:

GDP per capita (constant LCU)
Household final consumption expenditure, etc.
(% of GDP)
Gross capital formation (% of GDP)
General government final consumption expenditure (% of GDP)
Imports of goods and services (% of GDP)
Exports of goods and services (% of GDP)

Chapter 2

A Small Open Endowment Economy

The purpose of this chapter is to build a canonical dynamic, general equilibrium model of the small open economy and contrast its predictions with some of the empirical regularities of business cycles in small emerging and developed countries documented in chapter 1. The model developed in this chapter is simple enough to allow for a full characterization of its equilibrium dynamics using pen and paper.

The Model Economy

Consider an economy populated by a large number of infinitely-lived households with preferences described by the utility function

E_0 Σ_{t=0}^∞ β^t U(c_t),                                      (2.1)

where c_t denotes consumption and U denotes the single-period utility function, which is assumed to be continuously differentiable, strictly increasing, and strictly concave. Each period, households receive an exogenous and stochastic endowment and have the ability to borrow or lend in a risk-free internationally traded bond. The sequential budget constraint of the representative household is given by

c_t + (1 + r)d_{t−1} = y_t + d_t,                              (2.2)

where d_{t−1} denotes the debt position assumed in period t − 1 and due in period t, r denotes the interest rate, assumed to be constant, and y_t is an exogenous and stochastic endowment of goods. In exercise 2.3, you are asked to relax this assumption by assuming that output is produced with labor. The endowment process represents the sole source of uncertainty in this economy. The above constraint states that the household has two sources of funds, the endowment, y_t, and debt, d_t. It uses those funds to purchase consumption goods, c_t, and pay back the principal and interest of debt acquired in period t − 1, (1 + r)d_{t−1}.
Households are assumed to be subject to the following sequence of borrowing constraints that prevents them, at least in expectations, from engaging in Ponzi games:

\lim_{j\to\infty} E_t \frac{d_{t+j}}{(1+r)^j} \le 0.     (2.3)

This limit condition states that the household's debt position must be expected to grow at a rate lower than the interest rate r in the long run. The optimal allocation of consumption and debt will always feature the no-Ponzi-game constraint (2.3) holding with strict equality. This is because if the allocation {c_t, d_t}_{t=0}^{\infty} satisfied the borrowing constraint with strict inequality, then one could choose an alternative allocation {c'_t, d'_t}_{t=0}^{\infty} that also satisfies the borrowing constraint and that satisfies c'_t \ge c_t for all t \ge 0 and c'_{t_0} > c_{t_0} for at least one date t_0 \ge 0. This alternative allocation is clearly strictly preferred to the original one because the single period utility function is strictly increasing. To see this, assume that

\lim_{j\to\infty} E_t \frac{d_{t+j}}{(1+r)^j} = -\alpha, \qquad \alpha > 0.

Then, consider the following alternative allocation {c'_{t+j}, d'_{t+j}}_{j=0}^{\infty}:

c'_{t+j} = \begin{cases} c_{t+j} + \alpha, & j = 0 \\ c_{t+j}, & j > 0 \end{cases}

Clearly, because the period utility function is assumed to be strictly increasing, the process {c'_{t+j}}_{j=0}^{\infty} must be preferred to the process {c_{t+j}}_{j=0}^{\infty}. Now construct the associated debt process, {d'_{t+j}}_{j=0}^{\infty}, as follows. Given c'_t, the budget constraint (2.2) dictates that d'_t must satisfy

d'_t = (1+r)d_{t-1} + c'_t - y_t = (1+r)d_{t-1} + c_t + \alpha - y_t = d_t + \alpha.

Now, given c'_{t+1} and d'_t, the debt level in t+1 that satisfies the budget constraint (2.2) is

d'_{t+1} = (1+r)d'_t + c'_{t+1} - y_{t+1} = (1+r)(d_t + \alpha) + c_{t+1} - y_{t+1} = d_{t+1} + (1+r)\alpha.

Continuing with this argument for j iterations, we obtain

d'_{t+j} = d_{t+j} + (1+r)^j \alpha.

We have constructed the new process for debt to ensure that it satisfies the sequential budget constraint (2.2) for all dates and states. It remains to check that the proposed debt path satisfies the no-Ponzi-game constraint (2.3):

\lim_{j\to\infty} E_t \frac{d'_{t+j}}{(1+r)^j} = \lim_{j\to\infty} E_t \frac{d_{t+j} + (1+r)^j \alpha}{(1+r)^j} = \lim_{j\to\infty} E_t \frac{d_{t+j}}{(1+r)^j} + \alpha = -\alpha + \alpha = 0.

This completes the proof that at the optimal allocation the no-Ponzi-game constraint (2.3) must hold with equality. When the limiting condition (2.3) holds with equality, it is called the transversality condition.

The household chooses processes for c_t and d_t for t \ge 0 so as to maximize (2.1) subject to (2.2) and (2.3). The Lagrangian of this problem in period 0 is given by

\mathcal{L}_0 = E_0 \sum_{t=0}^{\infty} \beta^t \{ U(c_t) + \lambda_t [d_t + y_t - (1+r)d_{t-1} - c_t] \},

where \beta^t \lambda_t denotes the Lagrange multiplier associated with the sequential budget constraint in period t. The optimality conditions associated with this problem are (2.2), (2.3) holding with equality,

U'(c_t) = \lambda_t,

and

\lambda_t = \beta(1+r) E_t \lambda_{t+1}.

The last two conditions are, respectively, the derivatives of the Lagrangian with respect to c_t and d_t equalized to zero. Combining these expressions to eliminate \lambda_t yields the following optimality condition, often referred to as the Euler equation:

U'(c_t) = \beta(1+r) E_t U'(c_{t+1}).     (2.4)

The interpretation of this expression is simple. If the household reduces consumption by one unit in period t, its period-t utility, U(c_t), falls by U'(c_t). If the household invests this unit of consumption in the international bond market, in period t+1 it will receive 1+r units of goods, which, if consumed, increase the period t+1 utility, U(c_{t+1}), by (1+r)U'(c_{t+1}) and leave utility in all future dates and states unchanged. From the perspective of period t, the expected present value of this increase in period t+1 utility equals \beta(1+r) E_t U'(c_{t+1}). At the optimum, the household must be indifferent between allocating a marginal unit of goods to present or future consumption.

All households are assumed to have identical preferences, realizations of the endowment process, and initial asset holdings.
Therefore, we can interpret c_t and d_t as the aggregate per capita levels of consumption and net foreign liabilities, respectively. Then, a rational expectations equilibrium can be defined as a pair of processes {c_t, d_t}_{t=0}^{\infty} satisfying (2.2), (2.3) holding with equality, and (2.4), given the initial condition d_{-1} and the exogenous driving process {y_t}_{t=0}^{\infty}.

We now derive an intertemporal resource constraint by combining the household's sequential budget constraint (2.2) and the borrowing constraint (2.3) holding with equality. Begin by expressing the sequential budget constraint in period t as

(1+r)d_{t-1} = y_t - c_t + d_t.

Eliminate d_t by combining this expression with itself evaluated one period forward. This yields:

(1+r)d_{t-1} = y_t - c_t + \frac{y_{t+1} - c_{t+1}}{1+r} + \frac{d_{t+1}}{1+r}.

Repeat this procedure s times to obtain

(1+r)d_{t-1} = \sum_{j=0}^{s} \frac{y_{t+j} - c_{t+j}}{(1+r)^j} + \frac{d_{t+s}}{(1+r)^s}.

Apply expectations conditional on information available at time t and take the limit for s \to \infty, using condition (2.3) holding with equality, to get the following intertemporal resource constraint:

(1+r)d_{t-1} = E_t \sum_{j=0}^{\infty} \frac{y_{t+j} - c_{t+j}}{(1+r)^j},     (2.5)

which must hold for all dates and states. Intuitively, this equation says that the country's initial net foreign debt position must equal the expected present discounted value of current and future differences between output and absorption. According to this intertemporal resource constraint, if the economy starts as a net external debtor, i.e., if d_{t-1}(1+r) > 0, then it must run a trade surplus in at least one period, that is, y_{t+j} must exceed c_{t+j} for at least one state and date j \ge 0. However, if the economy starts as a net creditor of the rest of the world, i.e., if d_{t-1}(1+r) < 0, then it could in principle run a perpetual trade deficit, that is, c_{t+j} could in principle be larger than y_{t+j} for all j \ge 0. At this point, we make two additional assumptions that greatly facilitate the analysis.
First, we require that the subjective and pecuniary rates of discount, \beta and 1/(1+r), be equal to each other, that is,

\beta(1+r) = 1.

This assumption eliminates long-run growth in consumption when the economy features no stochastic shocks. Second, we assume that the period utility index is quadratic and given by

U(c) = -\frac{1}{2}(c - \bar c)^2,     (2.6)

with c \le \bar c. This specification has the appealing feature of allowing for a closed-form solution of the model. After imposing the above two assumptions, our model becomes essentially Hall's (1978) permanent income model of consumption. The Euler condition (2.4) now becomes

c_t = E_t c_{t+1},     (2.7)

which says that consumption follows a random walk: at each point in time, households expect to maintain a constant level of consumption next period. Indeed, households expect all future levels of consumption to be equal to c_t. To see this, lead the Euler equation (2.7) one period to obtain c_{t+1} = E_{t+1} c_{t+2}. Take expectations conditional on information available at time t and use the law of iterated expectations to obtain E_t c_{t+1} = E_t c_{t+2}. Finally, using again the Euler equation (2.7) to replace E_t c_{t+1} by c_t, we can write c_t = E_t c_{t+2}. Repeating this procedure j times, we can deduce that

E_t c_{t+j} = c_t,

for all j \ge 0. To find the closed-form solution for consumption, use the above expression to get rid of expected future consumption in the intertemporal resource constraint (2.5) to obtain (after slightly rearranging terms)

c_t = \frac{r}{1+r} E_t \sum_{j=0}^{\infty} \frac{y_{t+j}}{(1+r)^j} - r d_{t-1}.     (2.8)

This expression states that the optimal plan allocates the annuity value of the income stream, \frac{r}{1+r} E_t \sum_{j=0}^{\infty} \frac{y_{t+j}}{(1+r)^j}, to consumption, c_t, and interest payments, r d_{t-1}. Note that at any date t, the debt level d_{t-1} is predetermined, and E_t y_{t+j} for all j \ge 0 is exogenously given. Therefore, the above expression represents the closed-form solution for c_t.
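As an illustrative check (not part of the text), the random-walk property E_t c_{t+1} = c_t can be verified numerically from the closed-form solution (2.8). The sketch below truncates the infinite sum and, purely for concreteness, assumes AR(1) forecasts E_t y_{t+j} = ρ^j y_t; all parameter values are arbitrary.

```python
# Numerical check that the closed form (2.8) satisfies the Euler condition
# c_t = E_t c_{t+1}. The forecasts E_t y_{t+j} = rho**j * y_t are an assumed
# AR(1) example; r, rho, y_t, and d_{t-1} are arbitrary illustrative values.
r, rho, y_t, d_prev = 0.05, 0.8, 1.0, 0.2
J = 2000  # truncation horizon for the infinite sums

def consumption(d_lag, forecasts):
    """Equation (2.8): annuity value of expected income minus r*d_{t-1}."""
    annuity = r/(1+r) * sum(f/(1+r)**j for j, f in enumerate(forecasts))
    return annuity - r*d_lag

c_t = consumption(d_prev, [rho**j * y_t for j in range(J)])
d_t = (1+r)*d_prev + c_t - y_t               # budget constraint (2.2)
# Expected consumption next period: forecasts shift one period forward.
Ec_next = consumption(d_t, [rho**(j+1) * y_t for j in range(J)])
print(c_t, Ec_next)
```

Up to truncation error, the two printed numbers coincide, confirming that consumption is a martingale under the optimal plan.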
Combining the closed-form solution for consumption (2.8) with the sequential budget constraint (2.2) yields the closed-form solution for the equilibrium level of the country's external debt:

d_t = d_{t-1} - y_t + \frac{r}{1+r} E_t \sum_{j=0}^{\infty} \frac{y_{t+j}}{(1+r)^j}.

Two key variables in open economy macroeconomics are the trade balance and the current account. The trade balance is defined as the difference between exports and imports of goods and services. In the present model, there is a single good. Therefore, in any given period, the country either exports or imports this good depending, respectively, on whether the endowment exceeds consumption or is less than consumption. It follows that the trade balance is given by the difference between output and consumption. Formally, letting tb_t denote the trade balance in period t, we have that

tb_t \equiv y_t - c_t.

The current account is defined as the sum of the trade balance and net investment income on the country's net foreign asset position, which in this model is simply given by net interest income, -r d_{t-1}. Formally, letting ca_t denote the current account in period t, we have that

ca_t \equiv tb_t - r d_{t-1}.

Combining this expression with the definition of the trade balance and with the sequential budget constraint (2.2), we obtain the following alternative expression for the current account:

ca_t = -(d_t - d_{t-1}).

This expression says that the current account equals the change in the country's net foreign asset position. In other words, a current account deficit (surplus) is associated with an increase (reduction) in the country's external debt of equal magnitude. The closed-form solutions for consumption, debt, the trade balance, and the current account all depend not just on the current level of net foreign debt, d_{t-1}, and the current level of the endowment, y_t, but also on the entire expected future path of the endowment.
Therefore, in order to characterize the cyclical adjustment of consumption, the trade balance, and the current account to output shocks, we need to specify the expected future path of income. We first analyze the economy's adjustment to stationary income shocks and then its adjustment to nonstationary income shocks.

Stationary Income Shocks

Assume that the endowment follows an AR(1) process of the form

y_t = \rho y_{t-1} + \epsilon_t,

where \epsilon_t denotes an i.i.d. innovation and the parameter \rho \in (-1, 1) defines the serial correlation of the endowment process. The larger is \rho, the more persistent is the endowment process. Given this autoregressive structure of the endowment, the j-period-ahead forecast of output in period t is given by

E_t y_{t+j} = \rho^j y_t.

Using this expression to eliminate expectations of future income from equation (2.8), we obtain

r d_{t-1} + c_t = \frac{r}{1+r} \sum_{j=0}^{\infty} \left( \frac{\rho}{1+r} \right)^j y_t = \frac{r}{1+r-\rho}\, y_t.

Solving for c_t, we obtain

c_t = \frac{r}{1+r-\rho}\, y_t - r d_{t-1}.     (2.10)

Consider now the effect of an innovation in the endowment. Because \rho is less than unity, we have that a unit increase in y_t leads to a less-than-unit increase in consumption. The remaining income is saved to allow for higher future consumption. Using the definitions of the trade balance and the current account and the expression derived for the equilibrium level of consumption, equation (2.10), we can write the equilibrium levels of the trade balance and the current account as

tb_t = r d_{t-1} + \frac{1-\rho}{1+r-\rho}\, y_t     (2.11)

and

ca_t = \frac{1-\rho}{1+r-\rho}\, y_t.

Note that the current account inherits the stochastic process of the underlying endowment shock. Because the current account equals the change in the country's net foreign asset position, i.e., ca_t = -(d_t - d_{t-1}), it follows that the equilibrium evolution of the stock of external debt is given by

d_t = d_{t-1} - \frac{1-\rho}{1+r-\rho}\, y_t.

According to this expression, external debt follows a random walk and is therefore nonstationary.
A temporary increase in the endowment produces a gradual but permanent decline in the stock of foreign liabilities. Because the long-run behavior of the trade balance is governed by the dynamics of external debt, a temporary increase in the endowment leads to a long-run deterioration in the trade balance.

Consider the response of our model economy to an unanticipated increase in output. Two polar cases are of interest. In the first case, the endowment shock is assumed to be purely transitory, \rho = 0. According to equation (2.10), when innovations in the endowment are purely temporary, only a small part of the change in income, a fraction r/(1+r), is allocated to current consumption. Most of the endowment increase, a fraction 1/(1+r), is saved. The intuition for this result is clear. Because income is expected to return quickly to its long-run level, households smooth consumption by eating only a tiny part of the current windfall and leaving the rest for future consumption. In this case, the current account plays the role of a shock absorber: the economy saves in response to positive income shocks and borrows from the rest of the world to finance negative income shocks. Importantly, the current account is procyclical. That is, it improves during expansions and deteriorates during contractions. This prediction of the model is at odds with the data. As documented in chapter 1, in small open economies the current account is countercyclical, not procyclical as predicted by the model studied here.

The other polar case emerges when shocks are highly persistent, \rho \to 1. In this case, households allocate all innovations in their endowments to current consumption, and, as a result, the current account is nil and the stock of debt remains constant over time. Intuitively, when endowment shocks are permanent, an increase in income today is not accompanied by the expectation of a future decline in income. Rather, households expect the higher level of income to persist over time.
As a result, households are able to sustain an expenditure path consisting in consuming the totality of the current income shock.

Figure 2.1: Response to a Positive Endowment Shock (panels: endowment; trade balance and current account; external debt)

It follows that the more temporary are endowment shocks, the more the current account plays the role of a shock absorber. This, in turn, implies that the more temporary is the shock, the more volatile is the current account. In the case of purely transitory shocks, the standard deviation of the current account is given by \sigma_y/(1+r), which is close to the volatility of the endowment shock itself for small values of r. On the other extreme, when the endowment shock is permanent, the volatility of the current account is zero.

The intermediate case of a gradually trend-reverting endowment process, which takes place when \rho \in (0, 1), is illustrated in figure 2.1. In response to the positive stationary endowment shock, consumption experiences a once-and-for-all increase. This expansion in domestic absorption is smaller than the initial increase in income. As a result, the trade balance and the current account improve. After the initial increase, these two variables converge gradually to their respective long-run levels. Note that the trade balance converges to a new long-run level lower than the pre-shock one. This is because in the long run the economy settles at a lower level of external debt, whose service requires a smaller trade surplus. Thus, a positive endowment shock produces a short-run improvement but a long-run deterioration in the trade balance.
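The dynamics summarized in Figure 2.1 can be traced out directly from the closed forms derived above. The following sketch (illustrative parameter values, not from the text) computes the responses of the trade balance, the current account, and external debt to a unit endowment shock:

```python
# Impulse responses to a unit endowment innovation under AR(1) shocks,
# using the closed-form solutions derived above. r and rho are
# illustrative values; the economy starts from d_{-1} = 0.
r, rho, T = 0.05, 0.8, 200
y = [rho**t for t in range(T)]   # path of the endowment after a unit shock
d_prev = 0.0
tb, ca, d = [], [], []
for t in range(T):
    c_t  = r/(1+r-rho)*y[t] - r*d_prev   # consumption, eq. (2.10)
    tb_t = y[t] - c_t                    # trade balance
    ca_t = tb_t - r*d_prev               # current account
    d_prev = d_prev - ca_t               # ca_t = -(d_t - d_{t-1})
    tb.append(tb_t); ca.append(ca_t); d.append(d_prev)

# On impact the trade balance and current account improve ...
print(tb[0], ca[0])
# ... while in the long run debt is lower and the trade balance deteriorates.
print(d[-1], tb[-1])
```

The printed values reproduce the text's qualitative conclusions: a short-run improvement in tb and ca, a permanent decline in debt, and a long-run trade balance below its pre-shock level.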
Summarizing, in this model, which captures the essential elements of what has become known as the intertemporal approach to the current account, external borrowing is conducted under the principle: 'finance temporary shocks but adjust to permanent shocks.' A central failure of the model is the prediction of a procyclical current account and a procyclical trade balance. Fixing this problem is at the heart of what follows in this and the next three chapters.

Stationary Income Shocks: AR(2) Processes

The discussion in the previous section suggests that the current account and the trade balance will decline in response to a positive endowment shock if the current increase in the endowment is less than the associated increase in permanent income. In the case of AR(1) endowment shocks, the increase in the endowment always exceeds the increase in permanent income in the period of impact of the shock. Therefore, the model driven by autoregressive endowment shocks of order one fails to predict a countercyclical trade balance on impact and excess consumption volatility. Consider now the case in which the endowment process is autoregressive of order two. Specifically, assume that

y_t = \rho_1 y_{t-1} + \rho_2 y_{t-2} + \epsilon_t,

where \epsilon_t denotes an i.i.d. shock and the parameters \rho_1 and \rho_2 are such that this process is stationary. This latter requirement is satisfied as long as -1 < \rho_2 < 1 - \rho_1 and \rho_2 < 1 + \rho_1.

Figure 2.2 (Impulse Response of Endowment, AR(2) Process) displays the impulse response of output to a unit innovation in period 0 when \rho_1 > 1 and \rho_2 < 0. In this case, the impulse response is hump-shaped; that is, the peak output response occurs several periods after the shock occurs. (In the figure, the peak response is reached three periods after the shock.)
A hump-shaped path for income implies that the current level of output may rise by less than permanent income, that is, the change in y_t may be less than the change in \frac{r}{1+r} E_t \sum_{j=0}^{\infty} \frac{y_{t+j}}{(1+r)^j}. Because, as we have shown earlier, in equilibrium the impulse response of consumption to an endowment shock is equal to the impulse response of permanent income, in the period of impact consumption could rise by more than current income. If so, the trade balance and the current account will deteriorate in the period of the endowment shock.

To establish whether there exist parameterizations of the AR(2) endowment process that imply that the trade balance and the current account respond countercyclically to an endowment shock, we must evaluate permanent income, \frac{r}{1+r} \sum_{j=0}^{\infty} \frac{1}{(1+r)^j} E_t y_{t+j}. Let

Y_t = \begin{bmatrix} y_t \\ y_{t-1} \end{bmatrix}.

Then we can write the endowment process as

Y_{t+1} = R\, Y_t + \begin{bmatrix} \epsilon_{t+1} \\ 0 \end{bmatrix}, \qquad R \equiv \begin{bmatrix} \rho_1 & \rho_2 \\ 1 & 0 \end{bmatrix}.

Taking expectations as of time t, iterating forward, and multiplying by 1/(1+r)^j yields

\frac{1}{(1+r)^j} E_t Y_{t+j} = \left( \frac{R}{1+r} \right)^j Y_t.

Summing this expression, we have

\sum_{j=0}^{\infty} \frac{1}{(1+r)^j} E_t Y_{t+j} = \sum_{j=0}^{\infty} \left( \frac{R}{1+r} \right)^j Y_t = \left( I - \frac{R}{1+r} \right)^{-1} Y_t = \frac{1+r}{(1+r-\rho_1)(1+r) - \rho_2} \begin{bmatrix} 1+r & \rho_2 \\ 1 & 1+r-\rho_1 \end{bmatrix} Y_t.

Multiplying the first row of this vector by r/(1+r), we find permanent income as

\frac{r}{1+r} E_t \sum_{j=0}^{\infty} \frac{y_{t+j}}{(1+r)^j} = \frac{r\,[(1+r)y_t + \rho_2 y_{t-1}]}{(1+r-\rho_1)(1+r) - \rho_2}.

Using this expression to replace permanent income in the closed-form solution for consumption, given in equation (2.8), the equilibrium trade balance can be expressed as

tb_t = y_t - \frac{r\,[(1+r)y_t + \rho_2 y_{t-1}]}{(1+r-\rho_1)(1+r) - \rho_2} + r d_{t-1}.

Notice that, when \rho_2 = 0, that is, when the endowment follows an AR(1) process, this expression coincides with the closed-form solution for the trade balance given in equation (2.11). Assume now that in period t the economy experiences an unanticipated endowment shock, \epsilon_t = 1.
To find out whether the trade balance deteriorates or not, we have to find out by how much permanent income increases. From the above expression, it follows that the change in permanent income is equal to

\frac{r(1+r)}{(1+r-\rho_1)(1+r) - \rho_2}.

Thus, permanent income will increase by more than current income if

r(1+r) > (1+r-\rho_1)(1+r) - \rho_2 > 0.

Because r = \beta^{-1} - 1 > 0, and because \rho_2 < 1 - \rho_1, it follows that a necessary condition is that \rho_1 > 1 and \rho_2 < 0. The requirement that \rho_1 > 1 is intuitive because it implies that the impulse response of output is hump-shaped. The requirement that \rho_2 < 0 ensures that the endowment process is stationary given that \rho_1 is greater than one. But simply requiring \rho_1 > 1 and \rho_2 < 0 does not guarantee a countercyclical response of the trade balance. The necessary and sufficient conditions for this are that -1 < \rho_2 < 1 - \rho_1 and \rho_2 > (1+r)(1-\rho_1). These conditions put a restriction on how negative \rho_2 can be for a given value of \rho_1 > 1. The reason why \rho_2 cannot become too negative is that smaller values of \rho_2 reduce the response of permanent income to a positive innovation in the endowment. Specifically, for large negative values of \rho_2 (i.e., values close to -1) the impulse response of the endowment process starts to oscillate. In this case, a positive output shock will first drive the endowment up but then down below its average value. This reduces the increase in permanent income. Second, even if \rho_2 is not sufficiently negative to induce an oscillating impulse response, the more negative is \rho_2 the smaller is the hump of the endowment response and the less persistent is the response. This factor also reduces the response of permanent income.
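The impact response of the trade balance implied by the expression above can be evaluated numerically. In the sketch below, the parameter values are illustrative, chosen to satisfy the conditions just derived: an AR(1) endowment yields a positive impact response, while a hump-shaped AR(2) endowment with ρ2 > (1+r)(1−ρ1) yields a negative one.

```python
# Impact response of the trade balance to a unit endowment innovation
# under an AR(2) endowment (set rho2 = 0 for the AR(1) case). Derived from
# tb_t = y_t - r[(1+r)y_t + rho2*y_{t-1}]/[(1+r-rho1)(1+r) - rho2] + r*d_{t-1}.
def tb_impact(rho1, rho2, r):
    denom = (1 + r - rho1)*(1 + r) - rho2
    return 1 - r*(1 + r)/denom

r = 0.04
print(tb_impact(0.9, 0.0, r))     # AR(1): positive, so procyclical on impact
print(tb_impact(1.4, -0.41, r))   # hump-shaped AR(2): negative, countercyclical
# The second parameterization satisfies both the stationarity restrictions
# and the condition rho2 > (1+r)*(1-rho1) derived in the text.
```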
Hence this result demonstrates that the counterfactual prediction of the model of a procyclical trade balance is a consequence of assuming an AR(1) structure for the endowment process rather than of assuming that the endowment process is mean reverting, i.e., stationary. In the next section we show that allowing the endowment to follow a nonstationary process is an alternative mechanism to induce the model to predict that permanent income increases by more than current income. Nonstationary Income Shocks Suppose now that the rate of change of output, rather than its level, displays mean reversion. Specifically, let ∆yt ≡ yt − yt−1 denote the change in endowment between periods t − 1 and t. Suppose that ∆yt evolves according to the following autoregressive process: ∆yt = ρ∆yt−1 + t , where t is an i.i.d. shock with mean zero and variance σ2 , and ρ ∈ [0, 1) is a constant parameter. According to this process, the level of income is nonstationary, in the sense that a positive output shock (t > 0) produces a permanent future expected increase in the level of output. Faced with such an income profile, consumption-smoothing households have an incentive to borrow against future income, thereby producing a countercyclical tendency in the current account. This is the basic intuition why allowing for a nonstationary output process can help explain the observed coun- M. Uribe and S. Schmitt-Groh´e Figure 2.3: Stationary Versus Nonstationary Endowment Shocks Stationary Endowment Nonstationary Endowment yt c tercyclicality of the trade balance and the current account at business-cycle frequencies. Figure 2.3 provides a graphical representation of this intuition, and the following model formalizes the story. In chapter 5 we will study the importance of nonstationary output shocks in accounting for emerging market business cycles in the context of an estimated small open economy real business cycle model. 
As before, we assume that the model economy is inhabited by an infinitely-lived representative household that chooses contingent plans for consumption and debt to maximize the utility function (2.6) subject to the sequential resource constraint (2.2) and the no-Ponzi-game constraint (2.3). The first-order conditions associated with this problem are the sequential budget constraint, the no-Ponzi-game constraint holding with equality, and the Euler equation (2.7). Using these optimality conditions yields the expression for consumption given in equation (2.8), which we reproduce here for convenience:

c_t = -r d_{t-1} + \frac{r}{1+r} E_t \sum_{j=0}^{\infty} \frac{y_{t+j}}{(1+r)^j}.

Using this expression and recalling that the current account is defined as ca_t = y_t - c_t - r d_{t-1}, we can write

ca_t = y_t - \frac{r}{1+r} E_t \sum_{j=0}^{\infty} \frac{y_{t+j}}{(1+r)^j}.

This expression is quite intuitive. The term \frac{r}{1+r} E_t \sum_{j=0}^{\infty} \frac{y_{t+j}}{(1+r)^j} is the annuity value of the income stream and is the amount that the household wishes to allocate to consumption and interest payments. The household allocates any income in excess of this annuity value to increasing its net asset position, -(d_t - d_{t-1}), which equals the current account. Rearranging the above expression, we obtain

ca_t = -E_t \sum_{j=1}^{\infty} \frac{\Delta y_{t+j}}{(1+r)^j}.     (2.13)

This expression states that the current account equals the present discounted value of expected future income declines. If output is expected to fall over time, then the current account is positive. In this case, households save part of their current income to allow for a smooth path of future consumption. The opposite happens if income is expected to increase over time. In this case, the country runs a current account deficit, as households borrow against their future income to finance present spending. Note that in deriving this expression, we have not used the assumed stochastic properties of output.
Therefore, the above expression is valid regardless of whether output follows a stationary or a nonstationary process. Now, using the assumed autoregressive structure for the change in the endowment, we have that E_t \Delta y_{t+j} = \rho^j \Delta y_t. Using this result in the above expression, we can write the current account as

ca_t = -\frac{\rho}{1+r-\rho}\, \Delta y_t.

According to this formula, the current account deteriorates in response to a positive innovation in output. This prediction is in line with the cross-country time series evidence presented in chapter 1. This implication is therefore an important improvement relative to the model with stationary shocks. Recall that when the endowment level is stationary, the current account increases in response to a positive endowment shock. We note that the countercyclicality of the current account in the model with nonstationary shocks depends crucially on output changes being positively serially correlated, or \rho > 0. In effect, when \rho is zero or negative, the current account ceases to be countercyclical. The intuition behind this result is clear. For an unexpected increase in income to induce an increase in consumption larger than the increase in income itself, it is necessary that future income be expected to be higher than current income, which happens only if \Delta y_t is positively serially correlated.

Are implied changes in consumption more or less volatile than changes in output? This question is important because, as we saw in chapter 1, developing countries are characterized by consumption growth being more volatile than output growth. Formally, letting \sigma_{\Delta c} and \sigma_{\Delta y} denote the standard deviations of \Delta c_t \equiv c_t - c_{t-1} and \Delta y_t, respectively, we wish to find out conditions under which \sigma^2_{\Delta c} can be higher than \sigma^2_{\Delta y} in equilibrium.¹ We start with the definition of the current account,

ca_t = y_t - c_t - r d_{t-1}.

Taking differences, we obtain

ca_t - ca_{t-1} = \Delta y_t - \Delta c_t - r(d_{t-1} - d_{t-2}).
¹ Strictly speaking, this exercise is not comparable to the data displayed in chapter 1, because here we are analyzing changes in the levels of consumption and output (i.e., \Delta c_t and \Delta y_t), whereas in chapter 1 we reported statistics pertaining to the growth rates of consumption and output (i.e., \Delta c_t/c_{t-1} and \Delta y_t/y_{t-1}).

Noting that d_{t-1} - d_{t-2} = -ca_{t-1} and solving for \Delta c_t, we obtain:

\Delta c_t = \Delta y_t - ca_t + (1+r)ca_{t-1}
           = \Delta y_t + \frac{\rho}{1+r-\rho}\,\Delta y_t - \frac{\rho(1+r)}{1+r-\rho}\,\Delta y_{t-1}
           = \frac{1+r}{1+r-\rho}\,\Delta y_t - \frac{\rho(1+r)}{1+r-\rho}\,\Delta y_{t-1}
           = \frac{1+r}{1+r-\rho}\,\epsilon_t.     (2.14)

This expression implies that the standard deviation of consumption changes is given by

\sigma_{\Delta c} = \frac{1+r}{1+r-\rho}\,\sigma_\epsilon.

The AR(1) specification of \Delta y_t implies that \sigma^2_{\Delta y}(1-\rho^2) = \sigma^2_\epsilon. Then, we can write the standard deviation of consumption changes relative to that of output changes as

\frac{\sigma_{\Delta c}}{\sigma_{\Delta y}} = \frac{1+r}{1+r-\rho}\sqrt{1-\rho^2}.

The right-hand side of this expression equals unity at \rho = 0. This result confirms the one obtained earlier in this chapter, namely, that when the level of income is a random walk, consumption and income move hand in hand, so their changes are equally volatile. The right-hand side of the above expression is increasing in \rho at \rho = 0. It follows that there are values of \rho in the interval (0, 1) for which the volatility of consumption changes is indeed higher than that of income changes. This property ceases to hold as \Delta y_t becomes highly persistent. This is because as \rho \to 1, the variance of \Delta y_t becomes infinitely large as changes in income become a random walk, whereas, as expression (2.14) shows, \Delta c_t follows an i.i.d. process with finite variance for all values of \rho \in [0, 1).

Testing the Model

Hall (1978) was the first to explore the econometric implications of the simple model developed in this chapter. Specifically, Hall tested the prediction that consumption follows a random walk. Hall's work motivated a large empirical literature devoted to testing the empirical relevance of the model described above.
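As a quick numerical check before turning to formal tests, equation (2.14) can be verified by simulation: computing Δc_t from the current account identity reproduces (1+r)/(1+r−ρ)·ε_t period by period. Parameter values and the random seed below are arbitrary.

```python
import random

# Verify eq. (2.14): with Delta y following an AR(1), consumption changes
# equal (1+r)/(1+r-rho) times the innovation. Illustrative parameters.
random.seed(0)
r, rho, N = 0.02, 0.5, 1000
eps = [random.gauss(0.0, 1.0) for _ in range(N)]

dy, prev = [], 0.0
for e in eps:
    prev = rho*prev + e            # Delta y_t = rho*Delta y_{t-1} + eps_t
    dy.append(prev)

ca = [-rho/(1+r-rho)*x for x in dy]   # current account formula above
dc = [dy[0] - ca[0]]                  # take ca_{-1} = 0 (steady state)
for t in range(1, N):
    dc.append(dy[t] - ca[t] + (1+r)*ca[t-1])

k = (1+r)/(1+r-rho)
max_gap = max(abs(dc[t] - k*eps[t]) for t in range(N))
print(max_gap)   # numerically zero
```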
Campbell (1987), in particular, deduced and tested a number of theoretical restrictions on the equilibrium behavior of national savings. In the context of the open economy, Campbell's restrictions are readily expressed in terms of the current account. Here we review these restrictions and their empirical validity. The starting point is equation (2.13), which we reproduce for convenience:

ca_t = -\sum_{j=1}^{\infty} (1+r)^{-j} E_t \Delta y_{t+j}.

Recall the intuition behind this expression. It states that the country borrows from the rest of the world (i.e., runs a current account deficit) when income is expected to grow in the future. Similarly, the country chooses to build its net foreign asset position (runs a current account surplus) when income is expected to decline in the future. In this case the country saves for a rainy day. It is important to notice that the derivation of this equation does not require the specification of a particular stochastic process for the endowment y_t.

Consider now an empirical representation of the time series \Delta y_t and ca_t. Define

x_t = \begin{bmatrix} \Delta y_t \\ ca_t \end{bmatrix}.

Consider estimating the following vector autoregression (VAR) in x_t:

x_t = D x_{t-1} + \epsilon_t.

As a first comment on this empirical strategy, we notice that the model is silent on whether x_t has a VAR representation of this form. An example in which such a representation exists is when \Delta y_t itself is a univariate AR(1) process like the one assumed in section 2.4. Let H_t denote the information contained in the vector x_t. Then, from the above VAR system, we have that the forecast of x_{t+j} given H_t is given by

E_t[x_{t+j}|H_t] = D^j x_t.

It follows that

\sum_{j=1}^{\infty} (1+r)^{-j} E_t[\Delta y_{t+j}|H_t] = [1\ 0] \left( I - \frac{D}{1+r} \right)^{-1} \frac{D}{1+r} \begin{bmatrix} \Delta y_t \\ ca_t \end{bmatrix}.

Let

F \equiv -[1\ 0] \left( I - \frac{D}{1+r} \right)^{-1} \frac{D}{1+r}.

Now consider running separate regressions of the left- and right-hand sides of equation (2.13) onto the vector x_t. Since x_t includes ca_t as one element, we obtain that the regression coefficient vector for the left-hand side regression is [0\ 1].
The regression coefficient vector for the right-hand side regression is F. So the model implies the following restriction on the vector F:

F = [0\ 1].

Nason and Rogers (2006) perform an econometric test of this restriction. They estimate the VAR system using Canadian data on the current account and GDP net of investment and government spending. The estimation sample is 1963:Q1 to 1997:Q4. The VAR system that Nason and Rogers estimate includes 4 lags. In computing F, they calibrate r at 3.7 percent per year. Their data strongly reject the above cross-equation restriction of the model. The Wald statistic associated with the null hypothesis that F = [0\ 1] is 16.1, with an asymptotic p-value of 0.04. This p-value means that if the null hypothesis were true, then the Wald statistic, which reflects the discrepancy of F from [0\ 1], would take a value of 16.1 or higher only 4 out of 100 times.

Consider now an additional testable cross-equation restriction on the theoretical model. From equation (2.13) it follows that

E_t ca_{t+1} - (1+r)ca_t - E_t \Delta y_{t+1} = 0.     (2.15)

According to this expression, the variable ca_{t+1} - (1+r)ca_t - \Delta y_{t+1} is unpredictable in period t. In particular, if one runs a regression of this variable on current and past values of x_t, all coefficients should be equal to zero.² Nason and Rogers (2006) find that this hypothesis is rejected with a p-value of 0.06.

This restriction is not valid in a more general version of the model featuring private demand shocks. Consider, for instance, a variation of the model economy where the bliss point, \bar c, is a random variable. Specifically, replace \bar c in equation (2.6) by \bar c + \mu_t, where \bar c is still a constant and \mu_t is an i.i.d. shock with mean zero. In this environment, equation (2.15) becomes

E_t ca_{t+1} - (1+r)ca_t - E_t \Delta y_{t+1} = \mu_t.

² Consider projecting the left- and right-hand sides of this expression on the information set H_t.
This projection yields the orthogonality restriction

$$[0\ 1][D - (1+r)I] - [1\ 0]D = [0\ 0].$$

Clearly, because in general $\mu_t$ is correlated with $ca_t$, the orthogonality condition stating that $ca_{t+1} - (1+r)ca_t - \Delta y_{t+1}$ be orthogonal to variables dated $t$ or earlier will not hold. Nevertheless, in this case we have that $ca_{t+1} - (1+r)ca_t - \Delta y_{t+1}$ should be unpredictable given information available in period $t-1$ or earlier.³ This orthogonality condition is also strongly rejected by the data. Nason and Rogers (2006) find that a test of the hypothesis that all coefficients are zero in a regression of $ca_{t+1} - (1+r)ca_t - \Delta y_{t+1}$ onto past values of $x_t$ has a p-value of 0.01. We conclude that the propagation mechanism invoked by the canonical intertemporal model of the current account does not provide a satisfactory account of the observed behavior of current account dynamics, regardless of whether the underlying endowment shock is stationary or nonstationary. To bring closer together the observed and predicted behavior of the current account and other macroeconomic aggregates, in the following chapters we will enrich the model’s sources of fluctuations and propagation mechanism.

³ In particular, one can consider projecting the above expression onto $\Delta y_{t-1}$ and $ca_{t-1}$. This yields the orthogonality condition

$$[0\ 1][D - (1+r)I]D - [1\ 0]D^2 = [0\ 0].$$

Exercise 2.1 (Predicted Second Moments) In chapter 1, we showed that two empirical regularities that characterize emerging economies are the countercyclicality of the trade balance-to-output ratio and the fact that consumption growth appears to be more volatile than output growth. In this chapter, we developed a simple small open endowment economy and provided intuitive arguments suggesting that this economy fails to account for these two stylized facts.
However, that model does not allow for closed-form solutions of second moments of output growth, consumption growth, or the trade balance-to-output ratio. The goal of this assignment is to obtain these implied statistics numerically. To this end, consider the following parameterization of the model developed in the present chapter:

$$y_t - y = \rho(y_{t-1} - y) + \epsilon_t,$$

with $\rho = 0.9$, $y = 1$, and $\epsilon_t$ distributed normally with mean 0 and standard deviation 0.03. Note that the parameter $y$, which earlier in this chapter was implicitly assumed to be nil, represents the deterministic steady state of the output process. Assume further that $r = 1/\beta - 1 = 0.1$, $d_{-1} = y/2$, and $y_{-1} = y$.

1. Simulate the economy for 100 years.

2. Discard the first 50 years of artificial data to minimize the dependence of the results on initial conditions.

3. Compute the growth rates of output and consumption and the trade balance-to-output ratio.

4. Compute the sample standard deviations of output growth and consumption growth and the correlation between output growth and the trade balance-to-output ratio. Here, we denote these three statistics $\sigma_{gy}$, $\sigma_{gc}$, and $\rho_{gy,tby}$, respectively.

5. Replicate steps 1 to 4 1000 times. For each replication, keep record of $\sigma_{gy}$, $\sigma_{gc}$, and $\rho_{gy,tby}$.

6. Report the average of $\sigma_{gy}$, $\sigma_{gc}$, and $\rho_{gy,tby}$ over the 1000 replications.

7. Discuss your results.

Exercise 2.2 (Free Disposal) Consider the small, open, endowment economy with stationary endowment shocks and quadratic preferences analyzed in this chapter. An important implicit assumption of that model is the absence of free disposal of goods. Consider now a variation of the model that allows for free disposal. Specifically, assume that the sequential budget constraint faced by households is of the form

$$d_t = (1+r)d_{t-1} + c_t + x_t - y_t,$$

where $x_t$ is an endogenous variable determined in period $t$ and subject only to the nonnegativity constraint $x_t \geq 0$.
All other aspects of the model are as presented in the main text. Characterize as many differences as you can between the equilibrium dynamics of the models with and without free disposal.

Exercise 2.3 (An Economy with Endogenous Labor Supply) Consider a small open economy populated by a large number of households with preferences described by the utility function

$$\sum_{t=0}^{\infty}\beta^t U(c_t, h_t),$$

where $U$ is a period utility function given by

$$U(c,h) = -\frac{1}{2}\left[(c - \bar c)^2 + h^2\right],$$

where $\bar c > 0$ is a satiation point. The household’s budget constraint is given by

$$d_t = (1+r)d_{t-1} + c_t - y_t,$$

where $d_t$ denotes real debt acquired in period $t$ and due in period $t+1$, and $r > 0$ denotes the world interest rate. To avoid inessential dynamics, we impose $\beta(1+r) = 1$. The variable $y_t$ denotes output, which is assumed to be produced by the linear technology $y_t = Ah_t$. Households are also subject to the no-Ponzi-game constraint $\lim_{j\to\infty} E_t d_{t+j}/(1+r)^j \leq 0$.

1. Compute the equilibrium laws of motion of consumption, debt, the trade balance, and the current account.

2. Assume that in period 0, unexpectedly, the productivity parameter $A$ increases permanently to $A' > A$. Establish the effect of this shock on output, consumption, the trade balance, the current account, and the stock of debt.

Exercise 2.4 (An Open Economy With Habit Formation) Section 2.2 characterizes the equilibrium dynamics of a small open economy with time separable preferences driven by stationary endowment shocks. It shows that a positive endowment shock induces an improvement in the trade balance on impact. This prediction, we argued, was at odds with the empirical evidence presented in chapter 1.
Consider now a variant of the aforementioned model economy in which the representative consumer has time-nonseparable preferences described by the utility function

$$-\frac{1}{2}E_t\sum_{j=0}^{\infty}\beta^j\left[c_{t+j} - \alpha\tilde c_{t+j-1} - \bar c\right]^2; \quad t \geq 0,$$

where $c_t$ denotes consumption in period $t$, $\tilde c_t$ denotes the cross-sectional average level of consumption in period $t$, $E_t$ denotes the mathematical expectations operator conditional on information available in period $t$, and $\beta \in (0,1)$, $\alpha \in (-1,1)$, and $\bar c > 0$ are parameters. The case $\alpha = 0$ corresponds to time separable preferences, which is studied in the main text. Households take as given the evolution of $\tilde c_t$. Households can borrow and lend in international financial markets at the constant interest rate $r$. For simplicity, assume that $(1+r)\beta$ equals unity. In addition, each period $t = 0, 1, \ldots$ the household is endowed with an exogenous and stochastic amount of goods $y_t$. The endowment stream follows an AR(1) process of the form

$$y_{t+1} = \rho y_t + \epsilon_{t+1},$$

where $\rho \in [0,1)$ is a parameter and $\epsilon_t$ is a mean-zero i.i.d. shock. Households are subject to the no-Ponzi-game constraint

$$\lim_{j\to\infty}\frac{E_t d_{t+j}}{(1+r)^j} \leq 0,$$

where $d_t$ denotes the representative household’s net debt position at date $t$. At the beginning of period 0, the household inherits a stock of debt equal to $d_{-1}$.

1. Derive the initial equilibrium response of consumption to a unit endowment shock in period 0.

2. Discuss conditions (i.e., parameter restrictions), if any, under which a positive output shock can lead to a deterioration of the trade balance.

Exercise 2.5 (Anticipated Endowment Shocks) Consider a small open endowment economy enjoying free capital mobility. Preferences are described by the utility function

$$-\frac{1}{2}E_0\sum_{t=0}^{\infty}\beta^t(c_t - \bar c)^2,$$

with $\beta \in (0,1)$. Agents have access to an internationally traded bond paying the constant interest rate $r^*$, satisfying $\beta(1+r^*) = 1$.
The representative household starts period zero with an asset position $b_{-1}$. Each period $t \geq 0$, the household receives an endowment $y_t$, which obeys the law of motion

$$y_t = \rho y_{t-1} + \epsilon_{t-1},$$

where $\epsilon_t$ is an i.i.d. shock with mean zero and standard deviation $\sigma$. Notice that households know already in period $t-1$ the level of $y_t$ with certainty.

1. Derive the equilibrium process of consumption and the current account.

2. Compute the correlation between the current account and output. Compare your result with the standard case in which $y_t$ is known only in period $t$.

Chapter 3

A Small Open Economy with Capital

In this chapter, we introduce capital accumulation in the simple small open economy of chapter 2. The purpose of introducing physical capital in the model is twofold. First, an important result derived in the previous chapter is that for commonly used specifications of the endowment shock process the simple endowment economy model fails to predict the observed countercyclicality of the trade balance and the current account documented in chapter 1. Only if the endowment shock has a hump-shaped impulse response, that is, only if the current endowment shock falls short of the rise in permanent income that it generates, can the endowment economy model predict a countercyclical trade balance response. Here, we show that allowing for capital accumulation can contribute to mending this problem. The reason is that if in the augmented model the investment share is procyclical then the trade balance could become countercyclical. Second, the assumption that output is an exogenously given stochastic process, which was maintained throughout the previous chapter, is unsatisfactory if the goal is to understand observed business cycles. For output is perhaps the main variable any theory of the business cycle should aim to explain.
In this chapter we provide a partial remedy to this problem by assuming that output is produced with physical capital, which, in turn, is an endogenous variable.

The Basic Framework

Consider a small open economy populated by a large number of infinitely lived households with preferences described by the utility function

$$\sum_{t=0}^{\infty}\beta^t U(c_t), \tag{3.1}$$

where $c_t$ denotes consumption, $\beta \in (0,1)$ denotes the subjective discount factor, and $U$ denotes the period utility function, assumed to be strictly increasing, strictly concave, and twice continuously differentiable. Households seek to maximize this utility function subject to the following three sequential constraints

$$c_t + i_t + (1+r)d_{t-1} = y_t + d_t, \tag{3.2}$$

$$y_t = A_t F(k_t), \tag{3.3}$$

$$k_{t+1} = k_t + i_t, \tag{3.4}$$

and to the terminal borrowing constraint

$$\lim_{j\to\infty}\frac{d_{t+j}}{(1+r)^j} \leq 0, \tag{3.5}$$

where $i_t$ denotes investment in new units of physical capital in period $t$, $r$ denotes the interest rate on one-period external debt, $d_{t-1}$ denotes the quantity of one-period debt acquired in period $t-1$ and due in period $t$, $y_t$ denotes output in period $t$, and $k_t$ denotes the stock of physical capital in period $t$, which is assumed to be determined in period $t-1$. The first constraint is a sequential budget constraint similar to the one presented in the endowment economy of chapter 2, see equation (2.2). The main difference is that the present one includes investment as one of the uses of funds. The second constraint describes the technology employed to produce final goods. The production function $F$ is assumed to be strictly increasing, strictly concave, and to satisfy the Inada conditions. The variable $A_t$ denotes an exogenous, nonstochastic productivity factor. The third constraint specifies the law of motion of the capital stock. For the sake of simplicity, we assume that capital does not depreciate. In later chapters, we relax both the assumption of no depreciation and the assumption of deterministic productivity.
Finally, the fourth constraint is the deterministic version of the no-Ponzi-game condition, (2.3), introduced in chapter 2. The Lagrangian associated with the household’s problem is

$$\mathcal{L} = \sum_{t=0}^{\infty}\beta^t\left\{U(c_t) + \lambda_t\left[A_t F(k_t) + d_t - c_t - (k_{t+1} - k_t) - (1+r)d_{t-1}\right]\right\},$$

where the choice variables are $\{c_t, k_{t+1}, d_t\}$. The first-order conditions corresponding to this problem are

$$U'(c_t) = \lambda_t, \tag{3.6}$$

$$\lambda_t = \beta(1+r)\lambda_{t+1}, \tag{3.7}$$

$$\lambda_t = \beta\lambda_{t+1}\left[A_{t+1}F'(k_{t+1}) + 1\right],$$

and

$$A_t F(k_t) + d_t = c_t + k_{t+1} - k_t + (1+r)d_{t-1}.$$

Household optimization implies that the borrowing constraint holds with equality

$$\lim_{t\to\infty}\frac{d_t}{(1+r)^t} = 0.$$

As in chapter 2, we assume that

$$\beta(1+r) = 1,$$

to avoid inessential long-run dynamics. This assumption together with the first two of the above optimality conditions implies that consumption is constant over time,

$$c_{t+1} = c_t; \quad \forall t \geq 0. \tag{3.8}$$

Note that this result does not require the assumption of quadratic preferences as in the previous chapter. The reason is that in the present model there is no uncertainty. The result $c_t = E_t c_{t+1}$ does not follow for general preferences in the stochastic case. Using this expression, the optimality conditions can be reduced to the following two expressions

$$r = A_{t+1}F'(k_{t+1}), \tag{3.9}$$

$$c_t = -rd_{t-1} + \frac{r}{1+r}\sum_{j=0}^{\infty}\frac{A_{t+j}F(k_{t+j}) - (k_{t+j+1} - k_{t+j})}{(1+r)^j}, \tag{3.10}$$

for $t \geq 0$. Equilibrium condition (3.9) states that households invest in physical capital in period $t$ until the expected marginal product of capital in period $t+1$ equals the rate of return on foreign debt. It follows from this equilibrium condition that next period’s level of physical capital, $k_{t+1}$, is an increasing function of the future expected level of productivity, $A_{t+1}$, and a decreasing function of the opportunity cost of holding physical capital, $r$. Formally,

$$k_{t+1} = \kappa\left(\frac{A_{t+1}}{r}\right); \quad \kappa' > 0. \tag{3.11}$$
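To see condition (3.11) in closed form, one can posit a Cobb-Douglas technology, $F(k) = k^\alpha$ (an illustrative assumption, not imposed in the text). Then $r = A_{t+1}\alpha k_{t+1}^{\alpha-1}$ yields $k_{t+1} = (\alpha A_{t+1}/r)^{1/(1-\alpha)}$, which is increasing in $A_{t+1}$ and decreasing in $r$, as stated. A quick numerical check:

```python
# Sketch: solve r = A F'(k) for k under the illustrative choice F(k) = k**alpha.
def kappa(A_over_r, alpha=0.3):
    """Capital demand k_{t+1} implied by r = A*alpha*k**(alpha-1)."""
    return (alpha * A_over_r) ** (1.0 / (1.0 - alpha))

k1 = kappa(1.0 / 0.04)          # A = 1, r = 0.04 (made-up values)
k_hiA = kappa(1.1 / 0.04)       # higher productivity -> more capital
k_hir = kappa(1.0 / 0.05)       # higher interest rate -> less capital

assert k_hiA > k1 > k_hir       # kappa' > 0, decreasing in r
# verify that the first-order condition r = A*F'(k) holds at the solution
assert abs(1.0 * 0.3 * k1 ** (0.3 - 1.0) - 0.04) < 1e-9
```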
Equilibrium condition (3.10) says that consumption equals the annuity value of a broad definition of wealth, which includes not only initial financial wealth in the form of initial debt, $d_{-1}$, but also the present discounted value of output net of investment, $\sum_{j=0}^{\infty}\frac{A_{t+j}F(k_{t+j}) - (k_{t+j+1} - k_{t+j})}{(1+r)^j}$. To obtain equilibrium condition (3.10), follow the same steps as in the derivation of its counterpart for the endowment economy, equation (2.8). A perfect-foresight equilibrium is a set of sequences $\{c_t, d_t, k_{t+1}\}_{t=0}^{\infty}$ satisfying (3.8), (3.10), and (3.11) for all $t \geq 0$, given the initial stock of physical capital, $k_0$, the initial net external debt position, $d_{-1}$, and the deterministic sequence of productivity $\{A_t\}_{t=0}^{\infty}$. To construct a perfect-foresight equilibrium proceed as follows. Given initial conditions $k_0$ and $d_{-1}$ and a deterministic sequence $\{A_t\}$, use equilibrium condition (3.11) to obtain $\{k_t\}_{t=1}^{\infty}$. With the path for $k_t$ in hand, evaluate condition (3.10) at $t = 0$ to obtain $c_0$. Then use condition (3.8) to find the time path of $c_t$ for all $t > 0$. Finally, solve equation (3.10) evaluated at $t > 0$ for $d_{t-1}$ to obtain the equilibrium sequence for $d_t$ for any $t \geq 0$. We can then determine output from equation (3.3), investment from (3.4), and the marginal utility of consumption, $\lambda_t$, from (3.6).

Adjustment To A Permanent Productivity Shock

Suppose that up until period $-1$ inclusive, the technology factor $A_t$ was constant and equal to $\bar A$. We now show that this assumption gives rise to a steady state in which all endogenous variables are constant. We indicate the steady state of a variable by placing a bar over it. The fact that $A$ is constant implies, by (3.11), that the capital stock is also constant and equal to $\bar k \equiv \kappa(\bar A/r)$.
Similarly, output is constant and equal to $\bar y \equiv \bar A F(\bar k)$. Because the capital stock is constant and because the depreciation rate of capital is assumed to be zero, we have by equation (3.4) that investment is also constant and equal to zero, $\bar i = 0$. By the Euler equation (3.8), consumption must also be constant. Equilibrium condition (3.10) then implies that $d_{t-1}$ must also have been constant for all $t \leq -1$, that is, the net external debt position of the economy is constant. Let that constant value of debt be denoted by $\bar d$. Because the current account is defined as the change in the net international asset position ($ca_t = (-d_t) - (-d_{t-1})$), we have that in the steady state the current account equals zero, $\overline{ca} = 0$. Finally, recalling that the current account is also defined as the sum of the trade balance and net investment income, $ca_t = tb_t + r(-d_{t-1})$, we have that in the steady state the trade balance must be constant and equal to $\overline{tb} \equiv r\bar d$.

Suppose now that in period 0, unexpectedly, the technology factor increases permanently from $\bar A$ to $A' > \bar A$, that is,

$$A_t = \begin{cases}\bar A & \text{for } t \leq -1 \\ A' > \bar A & \text{for } t \geq 0\end{cases}$$

Figure 3.1 presents the impulse response to this shock. Because $k_0$ and $d_{-1}$ were chosen in period $-1$, when households expected $A_0$ to be equal to $\bar A$, we have that $k_0 = \bar k$ and $d_{-1} = \bar d$. In period 0, investment experiences an increase that raises the level of capital available for production in period 1, $k_1$, from $\bar k$ to $k' \equiv \kappa(A'/r) > \kappa(\bar A/r) = \bar k$. The fact that productivity is constant after period zero implies, by (3.4) and (3.11), that starting in period 1 the capital stock is constant and investment is nil. Thus, $k_t = k'$ for $t \geq 1$, $i_0 = k' - \bar k > 0$, and $i_t = 0$ for $t \geq 1$. Plugging this path for the capital stock into the intertemporal resource constraint (3.10) and evaluating that equation at $t = 0$ yields

$$c_0 = -r\bar d + \frac{r}{1+r}\left[A'F(\bar k) - k' + \bar k\right] + \frac{1}{1+r}A'F(k'). \tag{3.12}$$

By equilibrium condition (3.8), $c_t = c_0$ for all $t \geq 0$.
Let this constant level of consumption be denoted $c_0$. We have that $c_t = c_0$ for all $t \geq 0$. We wish to show that $c_0 > \bar c$, that is, that consumption increases. Rewrite (3.12) as

$$c_0 = -r\bar d + A'F(\bar k) + \frac{1}{1+r}\left\{A'\left[F(k') - F(\bar k)\right] - r\left(k' - \bar k\right)\right\}.$$

Figure 3.1: Adjustment to a Permanent Productivity Increase. [Figure: time paths of $A_t$, $k_t$, $y_t$, $c_t$, $tb_t$, $d_t$, and $ca_t$ before and after the period-0 shock.]

Use equation (3.9) to replace $r$ by $A'F'(k')$ in the expression within curly brackets to obtain

$$c_0 = -r\bar d + A'F(\bar k) + \frac{1}{1+r}\left\{A'\left[F(k') - F(\bar k)\right] - A'F'(k')\left(k' - \bar k\right)\right\}.$$

Because $F$ is assumed to be strictly concave and $k' > \bar k$, we have that $\frac{F(k') - F(\bar k)}{k' - \bar k} > F'(k')$. This means that the expression within curly brackets is strictly positive. Therefore, we have that

$$c_0 > -r\bar d + A'F(\bar k) > -r\bar d + \bar A F(\bar k) = \bar c.$$

This establishes that consumption experiences a once-and-for-all increase in period 0.¹ Indeed, consumption initially increases by more than output. To see this use the above inequality and the definition of the steady state to write

$$\Delta c_0 \equiv c_0 - c_{-1} = c_0 - [\bar A F(\bar k) - r\bar d] > [A'F(\bar k) - r\bar d] - [\bar A F(\bar k) - r\bar d] = A'F(\bar k) - \bar A F(\bar k) = y_0 - y_{-1} = \Delta y_0.$$

¹ We thank Alberto Felettigh for providing this proof. In early drafts of this book, we used to offer the following alternative demonstration: Consider the following suboptimal paths for consumption and investment: $c_t^s = A'F(\bar k) - r\bar d$ and $i_t^s = 0$ for all $t \geq 0$. Clearly, because $A' > \bar A$, the consumption path $c_t^s$ is strictly preferred to the pre-shock path, given by $\bar c \equiv \bar A F(\bar k) - r\bar d$. To show that the proposed allocation is feasible, let us plug the consumption and investment paths $c_t^s$ and $i_t^s$ into the sequential budget constraint (3.2) to obtain the sequence of asset positions $d_t^s = \bar d$ for all $t \geq 0$. Obviously, $\lim_{t\to\infty}\bar d/(1+r)^t = 0$, so the proposed suboptimal allocation satisfies the no-Ponzi-game condition (3.5).
We have established the existence of a feasible consumption path that is strictly preferred to the pre-shock consumption allocation. It follows that the optimal consumption path must also be strictly preferred to the pre-shock consumption path. This result together with the fact that, from equilibrium condition (3.8), the optimal consumption path is constant starting in period 0, implies that consumption must experience a permanent, once-and-for-all increase in period 0.

The initial overreaction of consumption is due to the fact that output continues to grow after period 0. So, from the perspective of period 0, households observe an increasing path of income over time ($y_t - y_0 = A'F(k') - A'F(\bar k) > 0$ for all $t > 0$). As a consequence, households borrow against future income to finance current consumption. This result resembles what happens in the endowment economy with stationary endowment shocks that follow an AR(2) process studied in section 2.3, or with serially correlated nonstationary endowment shocks, studied in section 2.4. The initial increase in domestic absorption causes the trade balance to deteriorate. To see this, recall that $tb_t = y_t - c_t - i_t$ to write the change in $tb_t$ as

$$\Delta tb_0 \equiv tb_0 - tb_{-1} = \Delta y_0 - \Delta c_0 - \Delta i_0.$$

We have already shown that $\Delta y_0 - \Delta c_0 < 0$ and that $\Delta i_0 > 0$. It therefore follows that $\Delta tb_0 < 0$. This result is significantly different from the one obtained in the endowment economy studied in section 2.4 of chapter 2, in which a once-and-for-all increase in the endowment leaves the trade balance unchanged. The trade balance is constant from period 1 onwards. To see this, note that $i_t = 0$ for all $t \geq 1$ and that $y_t = A'F(k')$ for all $t \geq 1$. Consumption is also constant and equal to $c_t = -rd_0 + A'F(k')$, where the last expression follows from evaluating (3.10) at $t = 1$. We then have that for any $t \geq 1$, $tb_t = rd_0$. This result then implies that debt will be constant and equal to $d_0$ for all $t \geq 1$.
To see this, note that one can express the sequential budget constraint (3.2) as $d_t = (1+r)d_{t-1} - tb_t$. Evaluating this expression at $t = 1$ and using $tb_1 = rd_0$ yields $d_1 = d_0$, and more generally $d_t = d_0$ for all $t \geq 1$. The fact that net foreign debt is constant from period 1 on implies that the current account will be zero, $ca_t = 0$, for all $t \geq 1$. We next show that the current account deteriorates in period 0, that is, $\Delta ca_0 < 0$. This follows immediately from the definition of the current account, $ca_0 = tb_0 - r\bar d$, and the fact, as we just established, that the trade balance deteriorates in period 0. Indeed, because the current account is nil in the pre-shock steady state ($ca_t = 0$ for $t < 0$), we have that the level of the current account is negative in period 0, $ca_0 < 0$. In turn, this result and the identity $ca_0 = (-d_0) - (-d_{-1})$ together imply that net foreign debt must rise in period zero, that is, $d_0 > \bar d$. To service this elevated level of debt the trade balance must increase from period 1 onwards above and beyond the level it had in period $-1$ and earlier. That is, the permanent increase in the technology shock first leads to a deterioration of the trade balance and then to an improvement in the trade balance above its pre-shock level.

Let’s take stock of the results obtained thus far. We started with the simple small open economy of chapter 2 and introduced a single modification, namely, capital accumulation. We then showed that the modified model produces very different predictions regarding the initial behavior of the trade balance in response to a positive permanent increase in productivity. In the present economy, such a shock causes the trade balance to initially deteriorate, whereas in the endowment economy a permanent output shock leaves the trade balance unchanged. What is behind the novel predictions of the present model?
An important factor that contributes to causing an initial deterioration of the trade balance is the combination of a demand for goods for investment purposes and a persistent productivity shock. This factor has two implications. One direct implication is that because the positive productivity shock is expected to last, investment in physical capital increases. The fact that investment rises in response to the permanent increase in productivity deteriorates the trade balance. This channel is closed in the endowment economy of the previous chapter because investment was by assumption always equal to zero. The second implication is less direct. The combination of capital accumulation and a persistent increase in productivity implies that output increases by more in the long run than in period 0. This is so because in the period the productivity shock occurs the capital stock has not yet adjusted and output only increases by the increase in productivity. But in later periods output is higher due to both a higher level of productivity and a higher level of physical capital. Consumption-smoothing households adjust consumption in period 0 taking into account the entire future path of output. Faced with an upward sloping time path of income, households increase consumption in period 0 by more than the increase in output in that period. In the simple endowment economy a once-and-for-all increase in the endowment did not give rise to an upward sloping path of income and hence the consumption response did not exceed the output response in period 0.

The size of the increase in investment, and hence the size of the implied decline in the trade balance in response to a positive productivity shock, depends on the assumed absence of capital adjustment costs. Note that in response to the increase in future expected productivity, the entire adjustment in investment occurs in period zero.
Indeed, investment falls to zero in period 1 and remains nil thereafter. In the presence of costs of adjusting the stock of capital, investment spending is spread over a number of periods, dampening the increase in domestic absorption in the period the shock occurs. The fact that the productivity shock leads to a time path of income that is increasing is the result of the productivity shock being permanent. To highlight the importance of these two assumptions, namely, absence of adjustment costs and permanence of the productivity shock, in generating a deterioration of the trade balance in response to a positive productivity shock, in the next two sections we analyze separately an economy with purely temporary productivity shocks and an economy with capital adjustment costs.

Adjustment to Temporary Productivity Shocks

To stress the importance of persistence in productivity movements in inducing a deterioration of the trade balance in response to a positive output shock, it is worth analyzing the effect of a purely temporary shock. Specifically, suppose that up until period $-1$ inclusive the productivity factor $A_t$ was constant and equal to $\bar A$. Suppose also that in period $-1$ people assigned a zero probability to the event that $A_0$ would be different from $\bar A$. In period 0, however, a zero-probability event happens, namely, $A_0 = A' > \bar A$. Furthermore, suppose that everybody correctly expects the productivity shock to be purely temporary. That is, everybody expects $A_t = \bar A$ for all $t > 0$. In this case, equation (3.9) implies that the capital stock, and therefore also investment, are unaffected by the productivity shock. That is, $k_t = \bar k$ for all $t \geq 0$, where $\bar k$ is the level of capital inherited in period 0. This is intuitive. The productivity of capital unexpectedly increases in period zero. As a result, households would like to have more capital in that period. But $k_0$ is fixed in period zero.
Investment in period zero can only increase the future stock of capital. At the same time agents have no incentives to have a higher capital stock in the future, because its productivity is expected to go back down to its historic level $\bar A$ right after period 0. The positive productivity shock in period zero does produce an increase in output in that period, from $\bar A F(\bar k)$ to $A'F(\bar k)$. That is,

$$y_0 = y_{-1} + (A' - \bar A)F(\bar k),$$

where $y_{-1} \equiv \bar A F(\bar k)$ is the pre-shock level of output. This output increase induces higher consumption. Evaluating equation (3.10) for $t = 0$, recalling that $c_{-1} = -r\bar d + \bar A F(\bar k)$ and that $d_{-1} = \bar d$, we have that

$$c_0 = c_{-1} + \frac{r}{1+r}(y_0 - y_{-1}).$$

Basically, households invest the entire increase in output in the international financial market and increase consumption by the interest flow associated with that financial investment. Combining the above two expressions and recalling that investment is unaffected by the temporary shock, we get that the trade balance in period 0 is given by

$$tb_0 - tb_{-1} = (y_0 - y_{-1}) - (c_0 - c_{-1}) - (i_0 - i_{-1}) = \frac{1}{1+r}(A' - \bar A)F(\bar k) > 0.$$

This expression shows that the trade balance improves on impact. The reason for this counterfactual prediction is simple: Firms have no incentive to invest, as the increase in the productivity of capital is short lived. At the same time, consumers save most of the purely temporary increase in income in order to smooth consumption over time. As a consequence, domestic absorption increases but by less than the increase in output. Comparing the results obtained under the two polar cases of permanent and purely temporary productivity shocks, we can derive the following principle:

Principle I: The more persistent are productivity shocks, the more likely is an initial deterioration of the trade balance.

We will analyze this principle in more detail in chapters 4 and 5, in the context of models featuring a more flexible notion of persistence.
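The two polar cases can be compared numerically. The sketch below uses the illustrative Cobb-Douglas form $F(k) = k^\alpha$ and made-up parameter values (neither is imposed in the text), together with the closed-form impact responses derived above: equation (3.12) and $\Delta tb_0 = \Delta y_0 - \Delta c_0 - \Delta i_0$ for the permanent shock, and $tb_0 - tb_{-1} = (A' - \bar A)F(\bar k)/(1+r)$ for the temporary one.

```python
# Impact response of the trade balance: permanent vs. purely temporary
# productivity shock, under the illustrative technology F(k) = k**alpha.
alpha, r, Abar, Aprime = 0.3, 0.04, 1.0, 1.05   # made-up parameter values

F = lambda k: k ** alpha
kap = lambda A: (alpha * A / r) ** (1 / (1 - alpha))   # solves r = A F'(k)

kbar, kprime = kap(Abar), kap(Aprime)

# --- permanent shock: i_0 = k' - kbar, c_0 from equation (3.12) ---
dbar = 0.0                                             # initial debt (normalization)
c_bar = -r * dbar + Abar * F(kbar)
c0 = (-r * dbar + r / (1 + r) * (Aprime * F(kbar) - kprime + kbar)
      + Aprime * F(kprime) / (1 + r))
dy0 = (Aprime - Abar) * F(kbar)
dtb_perm = dy0 - (c0 - c_bar) - (kprime - kbar)

# --- purely temporary shock: investment does not move ---
dtb_temp = (Aprime - Abar) * F(kbar) / (1 + r)

assert dtb_perm < 0 < dtb_temp   # Principle I in the two polar cases
```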
Capital Adjustment Costs

Consider now an economy identical to the one analyzed thus far, except that now changes in the stock of capital come at a cost. Capital adjustment costs—in a variety of forms—are a regular feature of business cycle models. A property of small open economy models is that investment, in the absence of adjustment costs, is very volatile. Investment adjustment costs are therefore frequently used in this literature to dampen the volatility of investment over the business cycle (see, e.g., Mendoza, 1991; and Schmitt-Grohé, 1998; among many others). Suppose that the sequential budget constraint is of the form

$$A_t F(k_t) + d_t = (1+r)d_{t-1} + c_t + i_t + \frac{i_t^2}{2k_t}. \tag{3.13}$$

Here, capital adjustment costs are given by $i_t^2/(2k_t)$ and are a strictly convex function of investment. Note that the level of this function vanishes at the steady-state value of investment, $i_t = 0$. This means that capital adjustment costs are nil in the steady state. Note further that the slope of the adjustment-cost function, given by $i_t/k_t$, also vanishes in the steady state. As will be clear shortly, this feature implies that in the steady state the relative price of capital goods in terms of consumption goods is unity. As in the economy without adjustment costs, we assume that physical capital does not depreciate, so that the law of motion of the capital stock continues to be given by (3.4). The household problem then consists in maximizing the utility function, equation (3.1), subject to (3.4), the no-Ponzi-game constraint (3.5), and the sequential budget constraint (3.13). The Lagrangian associated with this optimization problem is

$$\sum_{t=0}^{\infty}\beta^t\left\{U(c_t) + \lambda_t\left[A_t F(k_t) + d_t - (1+r)d_{t-1} - c_t - i_t - \frac{1}{2}\frac{i_t^2}{k_t} + q_t(k_t + i_t - k_{t+1})\right]\right\}.$$

The variables $\lambda_t$ and $\lambda_t q_t$ denote the Lagrange multipliers on the sequential budget constraint and the law of motion of the capital stock, respectively.
The optimality conditions associated with the household problem are (3.4), (3.5) holding with equality, (3.6), (3.7), (3.13),

$$1 + \frac{i_t}{k_t} = q_t, \tag{3.14}$$

and

$$\lambda_t q_t = \beta\lambda_{t+1}\left[q_{t+1} + A_{t+1}F'(k_{t+1}) + \frac{1}{2}\left(\frac{i_{t+1}}{k_{t+1}}\right)^2\right]. \tag{3.15}$$

We continue to assume that $\beta(1+r) = 1$. As in the model without investment adjustment costs, this assumption implies by optimality conditions (3.6) and (3.7) that $\lambda_t$ and $c_t$ are constant over time. The level of consumption can be found by solving the sequential budget constraint, (3.13), forward and using the fact that (3.5) holds with equality. This yields

$$c_t = -rd_{t-1} + \frac{r}{1+r}\sum_{j=0}^{\infty}\frac{A_{t+j}F(k_{t+j}) - i_{t+j} - \frac{i_{t+j}^2}{2k_{t+j}}}{(1+r)^j}.$$

The Lagrange multiplier $q_t$ represents the shadow relative price of capital in terms of consumption goods, and is known as Tobin’s q. Optimality condition (3.14) equates the marginal cost of producing a unit of capital, $1 + i_t/k_t$, on the left-hand side, to the marginal revenue of selling a unit of capital, $q_t$, on the right-hand side. As $q_t$ increases, agents have incentives to devote more resources to the production of physical capital. In turn, the increase in investment raises the marginal adjustment cost, $i_t/k_t$, which tends to restore the equality between the marginal cost and marginal revenue of capital goods. Using the fact that $(1+r)\beta = 1$ and that $\lambda_t$ is constant, optimality condition (3.15) can be written as

$$(1+r)q_t = A_{t+1}F'(k_{t+1}) + \frac{1}{2}\left(\frac{i_{t+1}}{k_{t+1}}\right)^2 + q_{t+1}. \tag{3.16}$$

The left-hand side of this expression is the rate of return on bonds and the right-hand side is the rate of return of investing in physical capital. Consider first the rate of return of investing in physical capital. Adding one unit to the existing stock costs $q_t$. The additional unit yields $A_{t+1}F'(k_{t+1})$ units of output next period. In addition, an extra unit of capital reduces tomorrow’s adjustment costs by $(i_{t+1}/k_{t+1})^2/2$. Finally, the unit of capital can be sold next period at the price $q_{t+1}$.
Alternatively, the agent can instead engage in a financial investment by purchasing $q_t$ bonds in period t, which yields a gross return of $(1+r)q_t$ in period t+1. At the optimum both strategies must yield the same return.

Dynamics of the Capital Stock

Using the evolution of capital (3.4) to eliminate $i_t$ from optimality conditions (3.14) and (3.16), we obtain the following two first-order, nonlinear difference equations in $k_t$ and $q_t$:

\[ k_{t+1} = q_t k_t \tag{3.17} \]

and

\[ q_t = \frac{A_{t+1} F'(q_t k_t) + (q_{t+1} - 1)^2/2 + q_{t+1}}{1+r}. \tag{3.18} \]

[Figure 3.2: The Dynamics of the Capital Stock]

The perfect-foresight solution to these equations is depicted in figure 3.2. The horizontal line KK' corresponds to the pairs $(k_t, q_t)$ for which $k_{t+1} = k_t$ in equation (3.17). That is,

\[ q = 1. \tag{3.19} \]

Above the locus KK', the capital stock grows over time, and below the locus KK' the capital stock declines over time. The locus QQ' corresponds to the pairs $(k_t, q_t)$ for which $q_{t+1} = q_t$ in equation (3.18). That is,

\[ r q = A F'(q k) + (q-1)^2/2. \tag{3.20} \]

Jointly, equations (3.19) and (3.20) determine the steady-state value of the capital stock, which we denote by $k^*$, and the steady-state value of Tobin's q, which equals 1. The value of $k^*$ is implicitly determined by the expression

\[ r = A F'(k^*). \]

This is the same value obtained in the economy without adjustment costs. This is not surprising, because, as noted earlier, adjustment costs vanish in the steady state. For $q_t$ near unity, the locus QQ' is downward sloping. Above and to the right of QQ', q increases over time, and below and to the left of QQ', q decreases over time. The system (3.17)-(3.18) is saddle-path stable. The locus SS' represents the converging saddle path. If the initial capital stock is different from its long-run level, both q and k converge monotonically to their steady states along the saddle path.
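The steady-state conditions above can be checked numerically. The sketch below is an illustration only: it assumes a power production function F(k) = k^α and arbitrary parameter values, neither of which is specified in this section of the text (the chapter keeps F general). It solves r = A F'(k*) for k* and verifies that the pair (k*, q = 1) satisfies the QQ' locus (3.20).

```python
# Steady state of the adjustment-cost model. Illustration only: assumes
# F(k) = k^alpha (the chapter leaves F general) and arbitrary parameters.
alpha, A, r = 0.3, 1.0, 0.04

# r = A F'(k*)  =>  k* = (alpha*A/r)^(1/(1-alpha))
k_star = (alpha * A / r) ** (1.0 / (1.0 - alpha))
q_star = 1.0  # locus KK' (3.19): q = 1

def Fprime(k):
    # marginal product of capital under the assumed power technology
    return alpha * A * k ** (alpha - 1.0)

# Locus QQ' (3.20): r*q = A F'(q*k) + (q-1)^2/2 must hold at (k*, 1)
residual = r * q_star - Fprime(q_star * k_star) - (q_star - 1.0) ** 2 / 2
print(k_star, residual)
```

The residual is zero up to floating-point error, confirming that q = 1 and r = A F'(k*) jointly pin down the steady state, exactly as in the economy without adjustment costs.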
A Permanent Technology Shock

Suppose now that in period 0 the technology factor $A_t$ increases permanently from $\bar A$ to $A' > \bar A$. It is clear from equation (3.19) that the locus KK' is not affected by the productivity shock. Equation (3.20) shows that the locus QQ' shifts up and to the right. It follows that in response to a permanent increase in productivity, the long-run level of capital experiences a permanent increase. The price of capital, $q_t$, on the other hand, is not affected in the long run. Consider now the transition to the new steady state. Suppose that the steady-state value of capital prior to the innovation in productivity is $k_0$ in figure 3.2. Then the new steady-state values of k and q are given by $k^*$ and 1. In the period of the shock, the capital stock does not move. The price of installed capital, $q_t$, jumps to the new saddle path, point a in the figure. This increase in the price of installed capital induces an increase in investment, which in turn makes capital grow over time. After the initial impact, $q_t$ decreases toward 1. Along this transition, the capital stock increases monotonically toward its new steady state $k^*$. The equilibrium dynamics of investment in the presence of adjustment costs are quite different from those arising in the absence thereof. In the frictionless environment, investment experiences a one-time jump equal to $k^* - k_0$ in period zero. Under capital adjustment costs, the initial increase in investment is smaller, as the capital stock adjusts gradually to its long-run level.(2) The different behavior of investment with and without adjustment costs has consequences for the equilibrium dynamics of the trade balance.
In effect, because investment is part of domestic absorption, and because investment tends to be less responsive to productivity shocks in the presence of adjustment costs, it follows that the trade balance falls by less in response to a positive innovation in productivity in the environment with frictions. The following principle therefore emerges:

Principle II: The more pronounced are capital adjustment costs, the smaller is the initial trade balance deterioration in response to a positive and persistent productivity shock.

In light of principles I and II derived in this chapter it is natural to ask what the model would predict for the behavior of the trade balance in response to productivity shocks when one introduces realistic degrees of capital adjustment costs and persistence in the productivity-shock process. We address this issue in the next chapter.

(2) It is straightforward to see that the response of the model to a purely temporary productivity shock is identical to that of the model without adjustment costs. In particular, capital and investment display a mute response.

Exercise 3.1 (Anticipated Productivity Shocks) Consider a perfect-foresight economy populated by a large number of identical households with preferences described by the utility function

\[ \sum_{t=0}^{\infty} \beta^t U(c_t), \]

where $c_t$ denotes consumption, U is a period utility function assumed to be strictly increasing, strictly concave, and twice continuously differentiable, and $\beta \in (0,1)$ is a parameter denoting the subjective rate of discount. Households are subject to the following four constraints:

\[ y_t + d_t = (1+r) d_{t-1} + c_t + i_t, \]
\[ y_t = A_t F(k_t), \]
\[ k_{t+1} = k_t + i_t, \]

and

\[ \lim_{j\to\infty} \frac{d_{t+j}}{(1+r)^j} \le 0, \]

given $d_{-1}$, $k_0$, and $\{A_t\}_{t=0}^{\infty}$.
The variable $d_t$ denotes holdings of one-period external debt at the end of period t, r denotes the interest rate on these debt obligations, $y_t$ denotes output, $k_t$ denotes the (predetermined) stock of physical capital in period t, and $i_t$ denotes gross investment. F is a production function assumed to be strictly increasing, strictly concave, and to satisfy the Inada conditions, and $A_t > 0$ is an exogenous productivity factor. Suppose that $\beta(1+r) = 1$. Assume further that up until period -1 inclusive, the productivity factor was constant and equal to $\bar A > 0$ and that the economy was in a steady state with a constant level of capital and a constant net debt position equal to $\bar d$. Suppose further that in period 0 the productivity factor also equals $\bar A$, but that agents learn that in period 1 it will jump permanently to $A' > \bar A$. That is, in period 0, households know that the path of the productivity factor is given by

\[ A_t = \begin{cases} \bar A & t \le -1 \\ \bar A & t = 0 \\ A' > \bar A & t \ge 1 \end{cases} \]

1. Characterize the equilibrium paths of output, consumption, investment, capital, the net foreign debt position, the trade balance, and the current account.

2. Compare your answer to the case of an unanticipated permanent increase in productivity studied in section 3.2.

3. Now assume that the anticipated productivity shock is transitory. Specifically, assume that the information available to households at t = 0 is

\[ A_t = \begin{cases} \bar A & t \le 0 \\ A' > \bar A & t = 1 \\ \bar A & t \ge 2 \end{cases} \]

(a) Characterize the equilibrium dynamics.

(b) Compare your answer to the case of an unanticipated temporary increase in productivity studied in section 3.2.

(c) Compare your answer to the case of anticipated endowment shocks in the endowment economy studied in exercise 2.5 of chapter 2.

Chapter 4

The Small-Open-Economy Real-Business-Cycle Model

In the previous two chapters, we arrived at the conclusion that a model driven by productivity shocks can explain the observed countercyclicality of the trade balance.
We also established that two features of the model are important for making this prediction possible. First, productivity shocks must be sufficiently persistent. Second, capital adjustment costs must not be too strong. In this chapter, we extend the model of the previous chapter by allowing for three features that make its structure more realistic: endogenous labor supply and demand, uncertainty in the technology shock process, and capital depreciation. The resulting theoretical framework is known as the Small-Open-Economy Real-Business-Cycle model, or, succinctly, the SOE-RBC model.

The Model

The model presented in this section is based, to a large extent, on Schmitt-Grohé and Uribe (2003) and Mendoza (1991). Consider a small open economy populated by an infinite number of identical households with preferences described by the utility function

\[ E_0 \sum_{t=0}^{\infty} \beta^t U(c_t, h_t), \tag{4.1} \]

where $c_t$ denotes consumption, $h_t$ denotes hours worked, $\beta \in (0,1)$ is the subjective discount factor, and U is a period utility function, which is assumed to be increasing in its first argument, decreasing in its second argument, and concave. The symbol $E_t$ denotes the expectations operator conditional on information available in period t. The period-by-period budget constraint of the representative household is given by

\[ d_t = (1+r_{t-1}) d_{t-1} - y_t + c_t + i_t + \Phi(k_{t+1} - k_t), \tag{4.2} \]

where $d_t$ denotes the household's debt position at the end of period t, $r_t$ denotes the interest rate at which domestic residents can borrow in period t, $y_t$ denotes domestic output, $i_t$ denotes gross investment, and $k_t$ denotes physical capital. The function $\Phi(\cdot)$ is meant to capture capital adjustment costs and is assumed to satisfy $\Phi(0) = \Phi'(0) = 0$. Small open economy models typically include capital adjustment costs to avoid excessive investment volatility in response to variations in the productivity of domestic capital or in the foreign interest rate.
The restrictions imposed on $\Phi$ ensure that in the steady state adjustment costs are nil and the relative price of capital goods in terms of consumption goods is unity. Note that here adjustment costs are expressed in terms of final goods. Alternatively, one could assume that adjustment costs take the form of lost capital goods (see exercise 4.7). Output is produced by means of a linearly homogeneous production function that takes capital and labor services as inputs,

\[ y_t = A_t F(k_t, h_t), \tag{4.3} \]

Open Economy Macroeconomics, Chapter 4

where $A_t$ is an exogenous and stochastic productivity shock. This shock represents the single source of aggregate fluctuations in the present model. The stock of capital evolves according to

\[ k_{t+1} = (1-\delta) k_t + i_t, \tag{4.4} \]

where $\delta \in (0,1)$ denotes the rate of depreciation of physical capital. Households choose processes $\{c_t, h_t, y_t, i_t, k_{t+1}, d_t\}_{t=0}^{\infty}$ so as to maximize the utility function (4.1) subject to (4.2)-(4.4) and a no-Ponzi constraint of the form

\[ \lim_{j\to\infty} E_t \frac{d_{t+j}}{\prod_{s=0}^{j}(1+r_s)} \le 0. \tag{4.5} \]

Use equations (4.3) and (4.4) to eliminate, respectively, $y_t$ and $i_t$ from the sequential budget constraint (4.2). This operation yields

\[ d_t = (1+r_{t-1}) d_{t-1} - A_t F(k_t, h_t) + c_t + k_{t+1} - (1-\delta) k_t + \Phi(k_{t+1} - k_t). \tag{4.6} \]

The Lagrangian corresponding to the household's maximization problem is

\[ \mathcal{L} = E_0 \sum_{t=0}^{\infty} \beta^t \left\{ U(c_t, h_t) + \lambda_t \left[ A_t F(k_t, h_t) + (1-\delta) k_t + d_t - c_t - (1+r_{t-1}) d_{t-1} - k_{t+1} - \Phi(k_{t+1} - k_t) \right] \right\}, \]

where $\beta^t \lambda_t$ denotes the Lagrange multiplier associated with the sequential budget constraint (4.6). The first-order conditions associated with the household's maximization problem are (4.5) holding with equality, (4.6), and

\[ \lambda_t = \beta (1+r_t) E_t \lambda_{t+1}, \tag{4.7} \]
\[ U_c(c_t, h_t) = \lambda_t, \tag{4.8} \]
\[ -U_h(c_t, h_t) = \lambda_t A_t F_h(k_t, h_t), \tag{4.9} \]
\[ \lambda_t \left[ 1 + \Phi'(k_{t+1} - k_t) \right] = \beta E_t \lambda_{t+1} \left[ A_{t+1} F_k(k_{t+1}, h_{t+1}) + 1 - \delta + \Phi'(k_{t+2} - k_{t+1}) \right]. \tag{4.10} \]

Optimality conditions (4.7), (4.8), and (4.10) are familiar from chapter 3. Optimality condition (4.9) equates the supply of labor to the demand for labor.
To put it in a more familiar form, divide (4.9) by (4.8) to eliminate $\lambda_t$. This yields

\[ -\frac{U_h(c_t, h_t)}{U_c(c_t, h_t)} = A_t F_h(k_t, h_t). \tag{4.11} \]

The left-hand side of this expression is the household's labor supply schedule. It is the marginal rate of substitution between leisure and consumption, which is increasing in hours worked, holding the level of consumption constant.(1) The right-hand side of (4.11) is the marginal product of labor, which, in a decentralized version of this model, equals the demand for labor. The marginal product of labor is decreasing in labor, holding constant the level of capital. The law of motion of the productivity shock is assumed to be given by the first-order autoregressive process

\[ \ln A_{t+1} = \rho \ln A_t + \tilde\eta \, \epsilon_{t+1}, \tag{4.12} \]

where the parameter $\rho \in (-1,1)$ measures the serial correlation of the technology shock, the innovation $\epsilon_t$ is i.i.d. with mean zero and unit standard deviation, and the parameter $\tilde\eta$ governs the standard deviation of the innovation to technology.

(1) A sufficient condition for $-U_h/U_c$ to be increasing in $h_t$ holding $c_t$ constant is $U_{ch} < 0$, and the necessary and sufficient condition is $U_{hh}/U_h > U_{ch}/U_c$.

Inducing Stationarity: External Debt-Elastic Interest Rate (EDEIR)

In chapters 2 and 3 we saw that the equilibrium of a small open economy with one internationally traded bond and a constant interest rate satisfying $\beta(1+r) = 1$ features a random walk in consumption, net external debt, and the trade balance. Under perfect foresight, that model predicts that the steady-state levels of debt and consumption depend on initial conditions, such as the initial level of debt itself. The nonstationarity of the model complicates the task of approximating equilibrium dynamics, because available approximation techniques require stationarity of the state variables. Here we follow Schmitt-Grohé and Uribe (2003) and induce stationarity by making the interest rate debt elastic.
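The unconditional moments implied by the AR(1) process (4.12) have a simple closed form, and the stationary variance can also be obtained by iterating the variance recursion to its fixed point. The sketch below is illustrative; the values of ρ and η̃ anticipate the chapter's calibration (table 4.1).

```python
# Unconditional moments implied by the AR(1) process (4.12):
# ln A_{t+1} = rho * ln A_t + eta * eps_{t+1},  eps ~ iid(0,1).
# rho and eta anticipate the chapter's calibration (table 4.1).
rho, eta = 0.42, 0.0129

# The stationary variance v solves v = rho^2 * v + eta^2;
# iterate the recursion to its fixed point.
v = 0.0
for _ in range(10_000):
    v = rho ** 2 * v + eta ** 2

std_lnA = v ** 0.5          # converges to eta / sqrt(1 - rho^2)
autocorr_1 = rho            # first-order autocorrelation of an AR(1) is rho
print(std_lnA, autocorr_1)
```

With these values, the unconditional standard deviation of ln A is noticeably larger than the innovation standard deviation η̃, which is the sense in which persistence amplifies the shock process.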
In section 4.6 we study various alternative ways to accomplish this task. We assume that the interest rate faced by domestic agents, $r_t$, is increasing in the country's cross-sectional average level of debt, which we denote by $\tilde d_t$. Specifically, $r_t$ is given by

\[ r_t = r^* + p(\tilde d_t), \tag{4.13} \]

where $r^*$ denotes the world interest rate and $p(\cdot)$ is a country-specific interest rate premium. Households take the evolution of $\tilde d_t$ as exogenously given. For simplicity, we assume that the world interest rate, $r^*$, is constant.(2) The function $p(\cdot)$ is assumed to be strictly increasing. As we will see shortly, the assumption of a debt-elastic interest rate premium gives rise to a steady state of the model that is independent of initial conditions. In particular, under this formulation the deterministic steady state of the model is independent of the initial net foreign asset position of the economy. We have motivated this feature of the model on purely technical grounds. However, the assumption of a debt-elastic interest-rate premium may be of interest also because it can capture the presence of financial frictions. In chapter 11, we provide micro-foundations for such a formulation in the context of a model with imperfect enforcement of international debt contracts. And in chapters 5 and 6, we argue on econometric grounds that data from emerging countries favor a significantly debt-sensitive interest rate.

(2) Chapter 6 and exercise 4.1 at the end of the present chapter analyze the case of a time-varying world interest rate.

Because agents are assumed to be identical, in equilibrium the cross-sectional average level of debt must be equal to the individual level of debt, that is,

\[ \tilde d_t = d_t. \tag{4.14} \]

Use equations (4.8), (4.13), and (4.14) to eliminate $\lambda_t$, $r_t$, and $\tilde d_t$ from (4.5), (4.6), (4.7), and (4.10) to obtain

\[ d_t = \left[ 1 + r^* + p(d_{t-1}) \right] d_{t-1} - A_t F(k_t, h_t) + c_t + k_{t+1} - (1-\delta) k_t + \Phi(k_{t+1} - k_t), \tag{4.15} \]
\[ U_c(c_t, h_t) = \beta \left( 1 + r^* + p(d_t) \right) E_t U_c(c_{t+1}, h_{t+1}), \tag{4.16} \]

\[ U_c(c_t, h_t) \left[ 1 + \Phi'(k_{t+1} - k_t) \right] = \beta E_t U_c(c_{t+1}, h_{t+1}) \left[ A_{t+1} F_k(k_{t+1}, h_{t+1}) + 1 - \delta + \Phi'(k_{t+2} - k_{t+1}) \right], \tag{4.17} \]

and

\[ \lim_{j\to\infty} E_t \frac{d_{t+j}}{\prod_{s=0}^{j} \left( 1 + r^* + p(d_s) \right)} = 0. \tag{4.18} \]

A competitive equilibrium is a set of processes $\{d_t, c_t, h_t, k_{t+1}, A_t\}$ satisfying (4.11), (4.12), and (4.15)-(4.18), given $A_0$, $d_{-1}$, and $k_0$, and the process $\{\epsilon_t\}$. Given the equilibrium processes of consumption, hours, capital, and debt, output is obtained from equation (4.3), investment from equation (4.4), and the interest rate from equation (4.13) evaluated at $\tilde d_t = d_t$. One can then construct the equilibrium process of the trade balance from the definition

\[ tb_t \equiv y_t - c_t - i_t - \Phi(k_{t+1} - k_t), \tag{4.19} \]

where $tb_t$ denotes the trade balance in period t. Finally, the current account is given by the sum of the trade balance and net investment income, that is,

\[ ca_t = tb_t - r_{t-1} d_{t-1}. \tag{4.20} \]

Alternatively, one could construct the equilibrium process of the current account by using the fact that the current account measures the change in net foreign assets, that is,

\[ ca_t = d_{t-1} - d_t. \tag{4.21} \]

The economy presented thus far assumes that production, employment, and the use of capital are all carried out within the household. Here, we present an alternative formulation in which all of these activities are performed in the marketplace. This formulation is called the decentralized economy. A key result of this section is that the equilibrium conditions of the decentralized economy are identical to those of the centralized one.

Households

We assume that each period the household supplies $h_t$ hours to the labor market. We also assume that the household owns the stock of capital, $k_t$, and rents it to firms each period. The process of capital accumulation continues to take place within the household. Let $w_t$ and $u_t$ denote the wage rate and the rental rate of capital, respectively. The household takes $w_t$ and $u_t$ as given.
Its period-by-period budget constraint can then be written as

\[ d_t = (1+r_{t-1}) d_{t-1} + c_t + i_t + \Phi(k_{t+1} - k_t) - w_t h_t - u_t k_t. \tag{4.22} \]

Here, $w_t h_t$ represents labor income and $u_t k_t$ represents income from the rental of capital. The household maximizes the utility function (4.1) subject to (4.4), (4.5), and (4.22). The first-order conditions associated with the household's problem are (4.4), (4.5) holding with equality, (4.7), (4.8), (4.22), and

\[ -\frac{U_h(c_t, h_t)}{U_c(c_t, h_t)} = w_t \tag{4.23} \]

and

\[ \lambda_t \left[ 1 + \Phi'(k_{t+1} - k_t) \right] = \beta E_t \lambda_{t+1} \left[ u_{t+1} + 1 - \delta + \Phi'(k_{t+2} - k_{t+1}) \right]. \tag{4.24} \]

Firms produce final goods with labor and capital and operate in perfectly competitive markets. The production technology is given by $y_t = A_t F(k_t, h_t)$. Profits in period t are given by

\[ A_t F(k_t, h_t) - w_t h_t - u_t k_t. \]

Each period $t \ge 0$ the firm hires workers and rents capital to maximize profits. The first-order conditions associated with the firm's profit maximization problem are

\[ A_t F_h(k_t, h_t) = w_t \tag{4.25} \]

and

\[ A_t F_k(k_t, h_t) = u_t. \tag{4.26} \]

Because the production function is assumed to be homogeneous of degree one, profits are zero at all times. To see this, multiply (4.25) by $h_t$ and (4.26) by $k_t$ and sum the resulting expressions to obtain

\[ A_t F_h(k_t, h_t) h_t + A_t F_k(k_t, h_t) k_t = w_t h_t + u_t k_t. \]

By the assumed linear homogeneity of the production function, the left-hand side of this expression is equal to $A_t F(k_t, h_t)$. It then follows that the total cost of production equals output, or, that profits equal zero. Consider now the equilibrium of this economy. We have just shown that $w_t h_t + u_t k_t = A_t F(k_t, h_t) \equiv y_t$. Using this result to eliminate $w_t h_t + u_t k_t$ from (4.22) results in (4.2). Combining (4.23) and (4.25) yields (4.11). Finally, using (4.26) to eliminate $u_{t+1}$ from (4.24) one obtains (4.10).
It is straightforward to see that the resulting system of equilibrium conditions can be further compacted into equations (4.11), (4.12), and (4.15)-(4.18), which represent the complete set of equilibrium conditions of the centralized economy. This completes the proof that the equilibrium conditions of the centralized and decentralized economies are identical.

Computing the Quantitative Predictions of the SOE-RBC Model

One of the most important methodological advances in macroeconomics in the last thirty years has been the development of micro-founded dynamic stochastic general equilibrium models that deliver quantitative predictions for short-run fluctuations in key aggregate indicators that can be directly compared to actual data. The first concrete step in this direction was the seminal contribution of Kydland and Prescott (1982). A central element of this new methodology is the numerical approximation of the equilibrium dynamics of business-cycle models. The importance of numerical approximation techniques lies in the fact that, in general, compelling models of the business cycle do not admit closed-form solutions. One of the most widely used approximation methods is first-order perturbation (or linearization) around the nonstochastic steady state. In this section, we explain this method and apply it to the SOE-RBC model.

Functional Forms

The implementation of the first-order perturbation method requires Taylor expanding the equilibrium conditions of the model around the deterministic steady state. In turn, this expansion requires taking the first derivative of the equilibrium conditions with respect to the endogenous variables and evaluating it at the steady state. For large models, this step can be tedious to carry out by pen and paper.
Therefore, it is convenient to rely on computer software to perform the linearization of DSGE models analytically.(3) This step requires adopting specific functional forms for preferences and technologies. We assume the following functional form for the period utility function:

\[ U(c,h) = \frac{G(c,h)^{1-\sigma} - 1}{1-\sigma}, \quad \sigma > 0, \]

with

\[ G(c,h) = c - \frac{h^\omega}{\omega}, \quad \omega > 1. \]

The form of the subutility index G(c,h) is due to Greenwood, Hercowitz, and Huffman (1988) and is typically referred to as GHH preferences. It implies that the marginal rate of substitution between consumption and leisure depends only on labor. In other words, GHH preferences deliver a labor supply that is income inelastic. To see this, express equilibrium condition (4.23) using the assumed functional form for the period utility function to obtain

\[ h_t^{\omega-1} = w_t. \]

This labor supply schedule has a wage elasticity of $1/(\omega-1)$ and is independent of consumption. GHH preferences were popularized in the open economy business cycle literature by Mendoza (1991). The period utility function U(c,h) is completed by assuming that preferences display constant relative risk aversion (CRRA) over the subutility index G(c,h). The parameter $\sigma$ measures the degree of relative risk aversion, and its reciprocal, $1/\sigma$, measures the intertemporal elasticity of substitution, both defined over the composite G(c,h). We adopt a Cobb-Douglas specification for the production function,

\[ F(k,h) = k^\alpha h^{1-\alpha}, \quad \alpha \in (0,1). \]

This specification implies a unitary elasticity of substitution between capital and labor. That is, a one percent increase in the wage-to-rental ratio, $w_t/u_t$, induces firms to increase the capital-labor ratio by one percent.

(3) The code we provide later in this chapter relies on the symbolic math toolbox of Matlab.
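The income inelasticity of GHH labor supply is easy to verify by hand: with G(c,h) = c − h^ω/ω, the marginal utilities are U_c = G^(−σ) and U_h = −G^(−σ) h^(ω−1), so the G^(−σ) terms cancel in the ratio and the MRS reduces to h^(ω−1), independent of c. The sketch below checks this; the value ω = 1.455 anticipates the chapter's calibration, and the consumption and hours values are arbitrary.

```python
# GHH preferences: -U_h/U_c reduces to h^(omega-1), independent of c,
# because U_c = G^(-sigma) and U_h = -G^(-sigma) * h^(omega-1) share
# the common factor G^(-sigma). omega anticipates the calibration.
omega = 1.455

def mrs(c, h):
    # After cancellation, the marginal rate of substitution is h^(omega-1);
    # c does not appear, which is exactly the income-inelasticity result.
    return h ** (omega - 1.0)

h = 0.8
w = mrs(1.0, h)

# Labor supply h = w^(1/(omega-1)) has constant wage elasticity 1/(omega-1)
elasticity = 1.0 / (omega - 1.0)
print(w, elasticity)
```

With ω = 1.455, the implied wage elasticity of labor supply is 1/0.455 ≈ 2.2, a value often discussed when comparing calibrated macro labor supply elasticities with micro-econometric estimates.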
To see this, divide equation (4.25) by equation (4.26) and use the Cobb-Douglas form of the production function to obtain

\[ \frac{1-\alpha}{\alpha} \frac{k_t}{h_t} = \frac{w_t}{u_t}, \]

which implies that in equilibrium the capital-labor ratio is proportional to the wage-to-rental ratio. The Cobb-Douglas specification of the production function is widely used in the business-cycle literature. The capital adjustment cost function is assumed to take the following quadratic form:

\[ \Phi(x) = \frac{\phi}{2} x^2; \quad \phi > 0, \]

which means that net investment, whether positive or negative, generates adjustment costs. Finally, we follow Schmitt-Grohé and Uribe (2003) and assume that the country interest rate premium takes the form

\[ p(d) = \psi_1 \left( e^{d - \bar d} - 1 \right), \]

where $\psi_1 > 0$ and $\bar d$ are parameters. According to this expression the country premium is an increasing and convex function of net external debt.

Deterministic Steady State

Assume that the variance of the innovation to the productivity shock is nil. We refer to such an environment as a deterministic economy. We define a deterministic steady state as an equilibrium of the deterministic economy in which all endogenous variables are constant over time. The purpose of this section is to derive a closed-form solution to the steady-state equilibrium of the small open economy for the functional forms assumed in the previous section. This characterization is of interest for two reasons. First, the steady state facilitates the calibration of the model. This is because, to a first approximation, the deterministic steady state coincides with the average position of the stochastic economy. In turn, structural parameters are often calibrated to target average characteristics of the economy such as labor shares, consumption shares, and trade-balance-to-output ratios. Second, the deterministic steady state is often used as a convenient point of reference around which to approximate the equilibrium dynamics of the stochastic economy.
For any variable, we denote its steady-state value by removing the time subscript. Evaluating equilibrium condition (4.16) at the steady state yields

\[ 1 = \beta \left[ 1 + r^* + \psi_1 \left( e^{d - \bar d} - 1 \right) \right]. \]

Without loss of generality, we assume that $\beta(1+r^*) = 1$. This assumption is a normalization, not necessary for stationarity. Combining the above two restrictions one obtains

\[ d = \bar d. \]

The steady-state version of (4.17) implies that

\[ 1 = \beta \left[ \alpha \left( \frac{k}{h} \right)^{\alpha-1} + 1 - \delta \right]. \]

This expression delivers the steady-state capital-labor ratio, which we denote by $\kappa$. Formally,

\[ \kappa \equiv \frac{k}{h} = \left( \frac{\alpha}{\beta^{-1} - 1 + \delta} \right)^{\frac{1}{1-\alpha}}. \]

Using this expression to eliminate the capital-labor ratio from equilibrium condition (4.11) evaluated at the steady state, one obtains the following expression for the steady-state level of hours:

\[ h = \left[ (1-\alpha) \kappa^\alpha \right]^{1/(\omega-1)}. \]

Given the steady-state values of labor and the capital-labor ratio, the steady-state level of capital is simply given by

\[ k = \kappa h. \]

Finally, the steady-state level of consumption can be obtained by evaluating equilibrium condition (4.15) at the steady state. This yields

\[ c = -r^* d + \kappa^\alpha h - \delta k. \]

This completes the characterization of the deterministic steady state of the present economy. An important intermediate step in computing the quantitative predictions of a business-cycle model is to assign values to its structural parameters. There are two main ways to accomplish this step. One is econometric estimation by methods such as the generalized method of moments (GMM), impulse response matching, maximum likelihood, or likelihood-based Bayesian methods. We will explain and apply some of these econometric techniques in later chapters. The second approach, which we study here, is calibration. Often, business-cycle studies employ a combination of calibration and econometric estimation.
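The closed-form steady state lends itself to a direct numerical implementation. The book's accompanying code is in Matlab; the sketch below is an illustrative Python translation, using the parameter values of table 4.1 and pinning down β through the normalization β(1 + r*) = 1.

```python
# Closed-form deterministic steady state of the EDEIR SOE-RBC model.
# Parameter values follow table 4.1; beta is pinned down by beta(1+r*)=1.
delta, rstar, alpha, dbar, omega = 0.1, 0.04, 0.32, 0.7442, 1.455
beta = 1.0 / (1.0 + rstar)

d = dbar                                                  # from 1 = beta[1+r*+p(d)]
kappa = (alpha / (1.0 / beta - 1.0 + delta)) ** (1.0 / (1.0 - alpha))
h = ((1.0 - alpha) * kappa ** alpha) ** (1.0 / (omega - 1.0))
k = kappa * h
y = kappa ** alpha * h                                    # y = k^alpha h^(1-alpha)
c = -rstar * d + y - delta * k                            # resource constraint

print(kappa, h, k, y, c)
```

A useful internal check is that the computed κ satisfies the steady-state capital Euler equation, β(ακ^(α−1) + 1 − δ) = 1, and that the implied steady-state trade balance y − c − δk equals the debt service r*d.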
In general, the calibration method assigns values to the parameters of the model in three different ways: (a) based on sources unrelated to the macro data the model aims to explain; (b) to match first moments of the data the model aims to explain; and (c) to match second moments of the data the model aims to explain. To illustrate how calibration works, we adapt the calibration strategy adopted in Mendoza (1991) to the present model. His SOE-RBC model aims to explain the Canadian business cycle. The time unit in the model is meant to be one year. In the present model, there are 10 parameters that need to be calibrated: $\sigma$, $\delta$, $r^*$, $\alpha$, $\bar d$, $\omega$, $\phi$, $\psi_1$, $\rho$, and $\tilde\eta$. We separate these parameters into the three calibration categories described above.

(a) Parameters Based On Sources Unrelated To The Data The Model Aims To Explain

The parameters that fall in this category are the intertemporal elasticity of substitution parameter, $\sigma$, the depreciation rate, $\delta$, and the world interest rate, $r^*$. Based on parameter values widely used in related business-cycle studies, Mendoza sets $\sigma$ equal to 2, $\delta$ equal to 0.1, and $r^*$ equal to 4 percent per year.

(b) Parameters Set To Match First Moments Of The Data The Model Aims To Explain

In this category are the capital elasticity of the production function, $\alpha$, and the parameter $\bar d$ pertaining to the country interest-rate premium. The parameter $\alpha$ is set to match the average labor share in Canada of 0.68. In the present model, the labor share, given by the ratio of labor income to output, $w_t h_t / y_t$, equals $1-\alpha$ at all times. To see this, note that in equilibrium $w_t$ equals the marginal product of labor, which, under the assumed Cobb-Douglas production function, is given by $(1-\alpha) y_t / h_t$. The parameter $\bar d$ is set to match the observed average trade-balance-to-output ratio in Canada of 2 percent.
Combining the definition of the trade balance given in equation (4.19) with the resource constraint (4.15) implies that in the steady state

\[ tb = r^* d. \]

This condition states that in the deterministic steady state the country must generate a trade surplus sufficiently large to service its external debt. Solving for d yields

\[ d = \frac{tb/y}{r^*} \, y, \]

where we divided and multiplied the right-hand side by y to express the trade balance as a share of output. At this point we know that $tb/y = 0.02$ and that $r^* = 0.04$, but y remains unknown. From the derivation of the steady state presented in section 4.2.2, one can deduce that

\[ y = \left[ (1-\alpha) \kappa^{\alpha\omega} \right]^{\frac{1}{\omega-1}}, \]

where $\kappa = \left[ \alpha/(r^* + \delta) \right]^{1/(1-\alpha)}$. The only unknown parameter in the expression for y, and therefore for d, is $\omega$. Next, we discuss how the calibration strategy assigns values to $\omega$ and the remaining unknown structural parameters.

(c) Parameters Set To Match Second Moments Of The Data The Model Aims To Explain

This category of parameters contains $\omega$, which governs the wage elasticity of labor supply; $\phi$, which defines the magnitude of capital adjustment costs; $\psi_1$, which determines the debt sensitivity of the interest rate; and $\rho$ and $\tilde\eta$, defining, respectively, the persistence and volatility of the technology shock. The calibration strategy for these parameters is to match the following five second moments of the Canadian data at business-cycle frequency: a standard deviation of hours of 2.02 percent, a standard deviation of investment of 9.82 percent, a standard deviation of the trade-balance-to-output ratio of 1.87 percentage points, a serial correlation of output of 0.62, and a standard deviation of output of 2.81 percent. These are natural targets, as their theoretical counterparts are directly linked to the parameters to be calibrated.
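The reduced-form expression for steady-state output can be verified against the step-by-step construction y = κ^α h of section 4.2.2. The sketch below checks the two expressions agree and then applies the formula d = (tb/y)/r* · y; the parameter values used here (α, r*, δ, ω and the 2 percent trade-balance target) are those quoted in the text.

```python
# Check that the reduced-form steady-state output expression
#   y = [(1-alpha) * kappa^(alpha*omega)]^(1/(omega-1))
# agrees with y = kappa^alpha * h, where h = [(1-alpha)*kappa^alpha]^(1/(omega-1)).
alpha, rstar, delta, omega, tb_over_y = 0.32, 0.04, 0.1, 1.455, 0.02

kappa = (alpha / (rstar + delta)) ** (1.0 / (1.0 - alpha))
y_reduced = ((1.0 - alpha) * kappa ** (alpha * omega)) ** (1.0 / (omega - 1.0))

h = ((1.0 - alpha) * kappa ** alpha) ** (1.0 / (omega - 1.0))
y_stepwise = kappa ** alpha * h

# Debt level implied by the trade-balance-to-output target: d = (tb/y)/r* * y
d = (tb_over_y / rstar) * y_reduced
print(y_reduced, d)
```

By construction, the implied steady-state trade-balance-to-output ratio r*d/y recovers the 2 percent target exactly.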
In practice, this last step of the calibration procedure goes as follows: (i) Guess values for the five parameters in category (c). This automatically determines a value for d. (ii) Approximate the equilibrium dynamics of the model. Shortly, we present a widely used method to accomplish this task. (iii) Calculate the implied five second moments to be matched. (iv) If the match between actual and predicted second moments is judged satisfactory, the procedure has converged. If not, try a new guess for the five parameters to be calibrated and return to (i). There is a natural way to update the parameter guess. For instance, if the volatility of output predicted by the model is too low, raise the volatility of the innovation to the technology shock, $\tilde\eta$. Similarly, if the volatility of investment is too high, increase the value of $\phi$. And so on. In general, there are no guarantees of the existence of a set of parameter values that will produce an exact match between the targeted empirical second moments and their theoretical counterparts. So some notion of distance and tolerance is in order. The parameter values that result from this calibration procedure are shown in table 4.1.

Table 4.1: Calibration of the EDEIR Small Open RBC Economy

Parameter   Value
σ           2
δ           0.1
r*          0.04
α           0.32
d̄           0.7442
ω           1.455
φ           0.028
ψ₁          0.000742
ρ           0.42
η̃           0.0129

It is important to note that the calibration strategy presented here is just one of many possible ones. For instance, we could place $\delta$ in category (b) and add the average investment share as a first moment of the data to be matched. Similarly, we could take the parameter $\omega$ out of category (c) and place it instead in category (a). To assign a value to this parameter we could then use existing micro-econometric estimates of the Frisch elasticity of labor supply.
Finally, a calibration approach that has been used extensively, especially in the early days of the RBC literature, is to place $\rho$ and $\tilde\eta$ into category (a) instead of (c). Under this approach, one uses Solow residuals as a proxy for the productivity shock $A_t$. Then one estimates a univariate representation of the Solow residual to obtain values for $\rho$ and $\tilde\eta$.

Approximating Equilibrium Dynamics

The competitive equilibrium of the SOE-RBC model is described by a system of nonlinear stochastic difference equations. Closed-form solutions to this type of system are typically not available. We therefore must resort to an approximate solution. There exist a number of techniques that have been devised to solve such dynamic systems. The one we employ here is a linear approximation to the nonlinear solution. We begin by introducing a convenient variable transformation. It is useful to express some variables in terms of percent deviations from their steady-state values. In the present SOE-RBC model, this is the case for $c_t$, $h_t$, $k_t$, and $A_t$. Some other variables are more naturally expressed in levels. This is the case, for instance, with net interest rates or variables that can take negative and positive values, such as the stock of debt, $d_{t-1}$, in the present application. Let $y_t \equiv [\ln c_t \; \ln h_t]'$ denote the vector of control variables of the model. Control variables in period t are endogenous variables that are determined in period t. Let $x^1_t \equiv [d_{t-1} \; \ln k_t]'$ denote the vector of endogenous state variables. Endogenous state variables are endogenous variables that in period t are predetermined (i.e., determined before period t). Let $x^2_t \equiv [\ln A_t]$ denote the vector of exogenous state variables. Exogenous state variables in period t are exogenous variables that are determined in period t or earlier. Finally, let $x_t \equiv [x^1_t \; x^2_t]'$ denote the vector of state variables.
Apply the conditional expectations operator to both sides of equation (4.12) to obtain E_t ln A_{t+1} − ρ ln A_t = 0. Then this expression together with equilibrium conditions (4.11) and (4.15)-(4.17) can be written as

E_t f(y_{t+1}, y_t, x_{t+1}, x_t) = 0.   (4.28)

The law of motion of the exogenous state vector x_{2t} is given by

x_{2,t+1} = ρ x_{2t} + η̃ ε_{t+1}.   (4.29)

We restrict attention to equilibria in which at every date t the economy is expected to converge to the non-stochastic steady state, that is, we impose

lim_{j→∞} E_t y_{t+j} = y^ss,   lim_{j→∞} E_t x_{t+j} = x^ss,   (4.30)

where y^ss and x^ss denote, respectively, the deterministic steady-state values of y_t and x_t. This restriction implies that the transversality condition (4.18) is always satisfied. The first-order accurate solution to the system (4.28), (4.29), and (4.30) is given by

x̂_{t+1} = h_x x̂_t + η ε_{t+1},
ŷ_t = g_x x̂_t,

where x̂_t ≡ x_t − x^ss and ŷ_t ≡ y_t − y^ss denote, respectively, the deviations of x_t and y_t from their steady-state values. Appendix B shows how to obtain numerical values for the matrices h_x, g_x, and η, given numerical values for the deep structural parameters of the model and the deterministic steady state. The solution methodology described in the appendix is general, in the sense that it applies to any model that can be expressed in the form (4.28), (4.29), and (4.30). In particular, it is valid for state and control vectors of any size. Appendix B also shows how to compute second moments and impulse response functions predicted by the model. Matlab code for performing first-order accurate approximations to DSGE models and for computing second moments and impulse response functions is available at http://www.columbia.edu/~mu2166/1st_order.htm. In addition, Matlab code to solve the specific SOE-RBC EDEIR model studied here is available online at http://www.columbia.edu/~mu2166/closing.htm (see the files under the heading 'External Debt Elastic Interest Rate Model').
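Once h_x, g_x, and η are in hand, unconditional second moments follow from the discrete Lyapunov equation Σ_x = h_x Σ_x h_x′ + ηη′, with Σ_y = g_x Σ_x g_x′ for the controls. The sketch below illustrates this in Python with made-up matrices of the right shapes; the actual h_x, g_x, and η for the calibrated model come from the computations described in Appendix B (and from the Matlab codes cited above).

```python
import numpy as np

# Illustrative (made-up) first-order solution matrices for a model with
# two state variables and two control variables.
hx  = np.array([[0.90, 0.10],
                [0.00, 0.42]])       # state transition: x_{t+1} = hx x_t + eta eps
gx  = np.array([[0.50, 1.20],
                [0.30, 0.80]])       # controls as linear functions of states
eta = np.array([[0.0], [0.0129]])    # loading of the unit-variance innovation

# Solve Sigma_x = hx Sigma_x hx' + eta eta' by iterating the fixed point
# (converges because all eigenvalues of hx lie inside the unit circle).
Sigma_x = np.zeros((2, 2))
for _ in range(500):
    Sigma_x = hx @ Sigma_x @ hx.T + eta @ eta.T

Sigma_y = gx @ Sigma_x @ gx.T        # unconditional covariance of the controls
std_y = np.sqrt(np.diag(Sigma_y))    # unconditional standard deviations

# First-order autocovariance of the states: E[x_{t+1} x_t'] = hx Sigma_x,
# from which serial correlations and correlations with output follow.
autocov_x = hx @ Sigma_x
```

This is exactly the kind of computation behind the model columns of the second-moment tables in this section; a direct Lyapunov solver (e.g., one based on a Schur decomposition) would replace the fixed-point loop in production code.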
The Performance of the Model

Having calibrated the model and computed a first-order approximation to the equilibrium dynamics, we are ready to explore its quantitative predictions. As a point of reference, table 4.2 displays empirical second moments of interest from the Canadian economy. These observed summary statistics are taken from Mendoza (1991). The table shows standard deviations, serial correlations, and contemporaneous correlations with output of output, consumption, investment, hours, and the trade-balance-to-output ratio. The data are annual, quadratically detrended, and cover the period 1946-1985. Although outdated, we choose to use the empirical moments reported in Mendoza (1991) to preserve coherence with the calibration strategy of subsection 4.2.3. Exercise 4.6 presents second moments using Canadian data over the period 1960-2011 and uses them to calibrate and evaluate the SOE-RBC EDEIR model. Table 4.2 also displays second moments predicted by the SOE-RBC EDEIR model. Comparing empirical and predicted second moments, it should not come as a surprise that the model does very well at replicating the volatilities of output, hours, investment, and the trade-balance-to-output ratio, and the serial correlation of output. For we calibrated the parameters ω, φ, ψ1, ρ, and η̃ to match these five moments. But the model performs relatively well along other dimensions. For instance, it correctly implies that consumption is less volatile than output and investment and more volatile than hours and the trade-balance-to-output ratio. Also in line with the data is the model's prediction of a countercyclical trade-balance-to-output ratio.
This prediction is of interest because the parameters φ and ρ governing the degree of capital adjustment costs and the persistence of the productivity shock, which, as we established in the previous chapter, are key determinants of the cyclicality of the trade-balance-to-output ratio, were set independently of the observed cyclical properties of the trade balance.

Table 4.2: Empirical and Theoretical Second Moments

              Canadian Data                      Model
  Variable    σ_x    ρ_{x,x−1}  ρ_{x,GDP}        σ_x    ρ_{x,x−1}  ρ_{x,GDP}
  c           2.46   0.70        0.59            2.71   0.78        0.84
  i           9.82   0.31        0.64            9.04   0.07        0.67
  h           2.02   0.54        0.80            2.12   0.62        1.00
  tb/y        1.87   0.66       −0.13            1.78   0.51       −0.04

Note. Empirical moments are taken from Mendoza (1991) and correspond to annual observations for the period 1946-1985, expressed in per capita terms and quadratically detrended. Standard deviations are measured in percentage points.

On the downside, the model predicts too little countercyclicality in the trade balance and overestimates the correlations of both hours and consumption with output. Note in particular that the implied correlation between hours and output is exactly unity. This prediction is due to the assumed functional form for the period utility index. To see this, note that equilibrium condition (4.11), which equates the marginal product of labor to the marginal rate of substitution between consumption and leisure, can be written as

h_t^ω = (1 − α) y_t.

The log-linearized version of this condition is ω ĥ_t = ŷ_t, which implies that ĥ_t and ŷ_t are perfectly correlated. Figure 4.1 displays the impulse response functions of a number of variables of interest to a technology shock of size 1 percent in period 0. In response to this innovation, the model predicts an expansion in output, consumption, investment, and hours and a deterioration in the trade-balance-to-output ratio. The level of the trade balance, not shown, also falls on impact.
This means that the initial increase in domestic absorption (i.e., the increase in c_0 + i_0) is larger than the increase in output.

[Figure 4.1: Responses to a One-Percent Productivity Shock. Panels show the impulse responses of output, investment, the trade-balance-to-output ratio, and the TFP shock.]

[Figure 4.2: Response of the Trade-Balance-To-Output Ratio to a Positive Technology Shock. The figure plots, in percentage points against periods after the productivity shock, the baseline response together with the responses under high φ and low ρ.]

The Role of Persistence and Capital Adjustment Costs

In the previous chapter, we deduced that the negative response of the trade balance to a positive technology shock was not a general implication of the neoclassical model. In particular, Principles I and II of the previous chapter state that two conditions must be met for the model to generate a deterioration in the external accounts in response to a mean-reverting improvement in total factor productivity. First, capital adjustment costs must not be too stringent. Second, the productivity shock must be sufficiently persistent. To illustrate this conclusion, figure 4.2 displays the impulse response function of the trade-balance-to-GDP ratio to a technology shock of unit size in period 0 under three alternative parameter specifications. The solid line reproduces the benchmark case from figure 4.1. The broken line depicts an economy where the persistence of the productivity shock is half as large as in the benchmark economy (ρ = 0.21). In this case, because the productivity shock is expected to die out quickly, the response of investment is relatively weak. In addition, the temporariness of the shock induces households to save most of the increase in income to smooth consumption over time. As a result, the expansion in aggregate domestic absorption is modest.
At the same time, because the size of the productivity shock is the same as in the benchmark economy, the initial responses of output and hours are identical in both economies (recall that, by equation (4.27), h_t depends only on k_t and A_t, and that k_t is predetermined in period t). The combination of a weak response in domestic absorption and an initial response in output that is independent of the value of ρ results in an improvement in the trade balance when productivity shocks are not too persistent. The crossed line depicts the case of high capital adjustment costs. Here the parameter φ equals 0.084, a value three times as large as in the benchmark case. In this environment, high adjustment costs discourage firms from increasing investment spending by as much as in the benchmark economy. As a result, the response of aggregate domestic demand is weaker, leading to an improvement in the trade-balance-to-output ratio.

The SOE-RBC Model With Complete Asset Markets (CAM)

The SOE-RBC model economy considered thus far features incomplete asset markets. In that model, agents have access to a single financial asset that pays a non-state-contingent rate of return. In the model studied in this section, by contrast, agents are assumed to have access to a complete array of state-contingent claims. As we will see, the introduction of complete asset markets per se induces stationarity in the equilibrium dynamics, so there will be no need to introduce any ad-hoc stationarity-inducing feature. Preferences and technologies are as in the EDEIR model. The period-by-period budget constraint of the household is given by

E_t r_{t,t+1} b_{t+1} = b_t + A_t F(k_t, h_t) − c_t − [k_{t+1} − (1 − δ)k_t] − Φ(k_{t+1} − k_t),   (4.31)

where r_{t,t+1} is a pricing kernel such that the period-t price of a random payment b_{t+1} in period t + 1 is given by E_t r_{t,t+1} b_{t+1}. To clarify the nature of the pricing kernel r_{t,t+1}, define the current state of nature as S^t.
Let p(S^{t+1}|S^t) denote the price of a contingent claim that pays one unit of consumption in a particular state S^{t+1} following the current state S^t. Then the current price of a portfolio composed of b(S^{t+1}|S^t) units of contingent claims paying in states S^{t+1} following S^t is given by

Σ_{S^{t+1}|S^t} p(S^{t+1}|S^t) b(S^{t+1}|S^t).

Now let π(S^{t+1}|S^t) denote the probability of occurrence of state S^{t+1}, given information available at the current state S^t. Multiplying and dividing the expression inside the summation sign by π(S^{t+1}|S^t), we can write the price of the portfolio as

Σ_{S^{t+1}|S^t} π(S^{t+1}|S^t) [p(S^{t+1}|S^t)/π(S^{t+1}|S^t)] b(S^{t+1}|S^t).

Now let r_{t,t+1} ≡ p(S^{t+1}|S^t)/π(S^{t+1}|S^t) be the price of a contingent claim that pays in state S^{t+1}|S^t scaled by the inverse of the probability of occurrence of the state in which the claim pays. Also, let b_{t+1} ≡ b(S^{t+1}|S^t). Then, we can write the price of the portfolio as

Σ_{S^{t+1}|S^t} π(S^{t+1}|S^t) r_{t,t+1} b_{t+1}.

But this expression is simply the conditional expectation E_t r_{t,t+1} b_{t+1}. Note that E_t r_{t,t+1} is the price in period t of an asset that pays 1 unit of consumption goods in every state of period t + 1. It follows that

1 + r_t ≡ 1/(E_t r_{t,t+1})

represents the risk-free real interest rate in period t. Households are assumed to be subject to a no-Ponzi-game constraint of the form

lim_{j→∞} E_t r_{t,t+j} b_{t+j} ≥ 0,   (4.32)

at all dates and under all contingencies. The variable r_{t,t+j} ≡ r_{t,t+1} r_{t+1,t+2} ··· r_{t+j−1,t+j} represents the pricing kernel such that E_t r_{t,t+j} b_{t+j} is the period-t price of a stochastic payment b_{t+j} in period t + j. Clearly, r_{t,t} = 1. To characterize the household's optimal plan, it is convenient to derive an intertemporal budget constraint. Begin by multiplying both sides of the sequential budget constraint (4.31) by r_{0,t}. Then apply the conditional expectations operator E_0 to obtain

E_0 r_{0,t} E_t r_{t,t+1} b_{t+1} = E_0 r_{0,t} [b_t + A_t F(k_t, h_t) − c_t − k_{t+1} + (1 − δ)k_t − Φ(k_{t+1} − k_t)].
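Before continuing with the intertemporal budget constraint, the equivalence just established between the state-price sum and the kernel expectation can be checked with a small numerical example. The probabilities, state prices, and contingent payoffs below are made up purely for illustration; nothing in the model pins down these particular numbers.

```python
# Two possible states tomorrow, with illustrative probabilities, state
# prices, and contingent payoffs purchased by the household.
pi = [0.6, 0.4]      # pi(S'|S) : conditional state probabilities
p  = [0.57, 0.38]    # p(S'|S)  : prices of one-unit contingent claims
b  = [2.0, -1.0]     # b(S'|S)  : contingent payoffs

# Portfolio price computed directly as sum over states of p(S')*b(S').
price_direct = sum(pj * bj for pj, bj in zip(p, b))

# Same price written as E_t[r_{t,t+1} b_{t+1}] with kernel r(S') = p(S')/pi(S').
r = [pj / pij for pj, pij in zip(p, pi)]
price_kernel = sum(pij * rj * bj for pij, rj, bj in zip(pi, r, b))

# Risk-free rate: 1 + r_t = 1 / E_t[r_{t,t+1}] = 1 / sum of state prices.
risk_free = 1.0 / sum(pij * rj for pij, rj in zip(pi, r)) - 1.0
```

Because the kernel is the state price divided by the state probability, weighting it back by the probability recovers the state price exactly, so both computations give the same portfolio price, and the risk-free rate depends only on the sum of the state prices.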
By the definition of the pricing kernel and the law of iterated expectations, we have that E_0 r_{0,t} E_t r_{t,t+1} b_{t+1} = E_0 r_{0,t+1} b_{t+1}. So we can write the above expression as

E_0 r_{0,t+1} b_{t+1} = E_0 r_{0,t} [b_t + A_t F(k_t, h_t) − c_t − k_{t+1} + (1 − δ)k_t − Φ(k_{t+1} − k_t)].

Now sum this expression for t = 0 to t = T > 0. This yields

E_0 r_{0,T+1} b_{T+1} = b_0 + E_0 Σ_{t=0}^{T} r_{0,t} [A_t F(k_t, h_t) − c_t − k_{t+1} + (1 − δ)k_t − Φ(k_{t+1} − k_t)].

Take the limit for T → ∞ and use the no-Ponzi-game constraint (4.32) to obtain

b_0 ≥ E_0 Σ_{t=0}^{∞} r_{0,t} [c_t + k_{t+1} − (1 − δ)k_t + Φ(k_{t+1} − k_t) − A_t F(k_t, h_t)].   (4.33)

This expression states that the period-0 value of the stream of current and future trade deficits cannot exceed the value of the initial asset position b_0. The household's problem consists in choosing contingent plans {c_t, h_t, k_{t+1}} to maximize the lifetime utility function (4.1) subject to (4.33), given k_0, b_0, and the exogenous processes {A_t, r_{0,t}}. The Lagrangian associated with this problem is

L = E_0 Σ_{t=0}^{∞} { β^t U(c_t, h_t) + ξ_0 r_{0,t} [A_t F(k_t, h_t) − c_t − k_{t+1} + (1 − δ)k_t − Φ(k_{t+1} − k_t)] } + ξ_0 b_0,

where ξ_0 > 0 denotes the Lagrange multiplier on the time-0 present-value budget constraint (4.33). The first-order conditions associated with the household's maximization problem are (4.11), (4.17), (4.33) holding with equality, and

β^t U_c(c_t, h_t) = ξ_0 r_{0,t}.   (4.34)

Taking the ratio of this expression to itself evaluated in period t + 1 yields

β U_c(c_{t+1}, h_{t+1}) / U_c(c_t, h_t) = r_{t,t+1},

which says that consumers equate their intertemporal marginal rate of substitution of current consumption for consumption in a particular state next period to the price of the corresponding state-contingent claim scaled by the probability of occurrence of that state. Rearranging the above expression, taking expectations conditional on information available in period t, and recalling the definition of the risk-free interest rate given above yields

U_c(c_t, h_t) = β(1 + r_t) E_t U_c(c_{t+1}, h_{t+1}),

which is identical to equation (4.7) in the incomplete asset market version of the model. In other words, the complete asset market model generates a state-by-state version of the Euler equation implied by the incomplete-asset-market model, reflecting the fact that in the present environment consumers have more financial instruments available to diversify risk. We assume that the economy is small and fully integrated to the international financial market. Let r*_{0,t} denote the pricing kernel prevailing in international financial markets. By the assumption of free capital mobility, we have that domestic asset prices must be equal to foreign asset prices, that is,

r_{0,t} = r*_{0,t}   (4.35)

for all dates and states. Foreign households are also assumed to have unrestricted access to international financial markets. Therefore, a condition like (4.34) must also hold abroad. Formally,

β^t U*_c(c*_t, h*_t) = ξ*_0 r*_{0,t}.   (4.36)

Note that we are assuming that domestic and foreign households share the same subjective discount factor, β. Combining (4.34)-(4.36) yields

U_c(c_t, h_t) = (ξ_0/ξ*_0) U*_c(c*_t, h*_t)

for all dates and states. This expression says that under complete asset markets, the marginal utility of consumption is perfectly correlated across countries. The ratio ξ_0/ξ*_0 reflects differences in per capita wealth between the domestic economy and the rest of the world. Because the present model is one of a small open economy, c*_t and h*_t are taken as exogenously given. We endogenize the determination of c*_t and h*_t in exercise 4.7 at the end of this chapter. This exercise analyzes a two-country model with complete asset markets in which one country is large and the other is small.
Because the domestic economy is small, the domestic productivity shock A_t does not affect the foreign variables, which respond only to foreign shocks. The domestic economy, however, can be affected by foreign shocks via c*_t and h*_t. To be in line with the stochastic structure of the EDEIR model, we shut down all foreign shocks and focus attention only on the effects of innovations in domestic productivity. Therefore, we assume that the foreign marginal utility of consumption is time invariant and given by U*_c(c*, h*), where c* and h* are constants. Let

ψ_cam ≡ (ξ_0/ξ*_0) U*_c(c*, h*).

Then, we can rewrite the above expression as

U_c(c_t, h_t) = ψ_cam.   (4.37)

This expression reflects the fact that, because domestic consumers have access to a complete set of Arrow-Debreu contingent assets, they can fully diversify domestic risk. Thus, domestic consumers are exposed only to aggregate external risk. We are assuming that aggregate external risk is nil. As a result, by appropriately choosing their asset portfolios, domestic consumers can attain a constant marginal utility of consumption at all times and under all contingencies. Exercise 4.2 at the end of this chapter studies a version of the present model in which ψ_cam is stochastic, reflecting the presence of external shocks. The competitive equilibrium of the CAM economy is a set of processes {c_t, h_t, k_{t+1}, A_t} satisfying (4.11), (4.12), (4.17), and (4.37), given A_0, k_0, and the exogenous process {ε_t}. The CAM model delivers stationary processes for all variables of interest. This means that replacing the assumption of incomplete asset markets with the assumption of complete asset markets eliminates the endogenous random walk problem that plagues the dynamics of the one-bond economy. The key feature of the complete asset market responsible for its stationarity property is equation (4.37), which states that with complete asset markets the marginal utility of consumption is constant.
By contrast, in the one-bond model, in the absence of any ad-hoc stationarity-inducing feature, the marginal utility of consumption follows a random walk. To see this, set β(1 + r_t) = 1 for all t in equation (4.7). We now wish to shed light on a question that arises often in models with complete asset markets, namely, what is the current account when financial markets are complete? In the one-bond economy the answer is simple: the current account can be measured either by changes in net holdings of the internationally traded bond or by the sum of the trade balance and net interest income paid by the single bond. Under complete asset markets, there is a large (possibly infinite) number of state-contingent financial assets, each with different returns. As a result, it is less clear how to keep track of the country's net foreign asset position or of its net investment income. It turns out that there is a simple way of characterizing and computing the equilibrium level of the current account. Let us begin by addressing the simpler question of defining the trade balance. As in the one-bond model, the trade balance in the CAM model is simply given by equation (4.19). The current account can be defined as the change in the country's net foreign asset position. Let s_t ≡ E_t r_{t,t+1} b_{t+1} denote the net foreign asset position at the end of period t. Then, the current account is given by

ca_t = s_t − s_{t−1}.

Alternatively, the current account can be expressed as the sum of the trade balance and net investment income. In turn, net investment income is given by the difference between the payoff in period t of assets acquired in t − 1, given by b_t, and the resources spent in t − 1 on purchases of contingent claims, given by E_{t−1} r_{t−1,t} b_t. Thus, the current account is given by

ca_t = tb_t + b_t − E_{t−1} r_{t−1,t} b_t.
To see that the above two definitions of the current account are identical, use the definition of the trade balance, equation (4.19), and the definition of the net foreign asset position s_t to write the sequential resource constraint (4.31) as

s_t = tb_t + b_t.

Subtracting s_{t−1} from both sides of this expression, we have

s_t − s_{t−1} = tb_t + b_t − E_{t−1} r_{t−1,t} b_t.

The left-hand side of this expression is our first definition of the current account, and the right-hand side our second definition. The functions U, F, and Φ are parameterized as in the EDEIR model. The parameters σ, β, ω, α, φ, δ, ρ, and η̃ take the values displayed in table 4.1. The parameter ψ_cam is set so as to ensure that the steady-state levels of consumption in the CAM and EDEIR models are the same. Table 4.3 displays unconditional second moments predicted by the SOE-RBC model with complete asset markets. The predictions of the model regarding output, consumption, investment, and the trade balance are qualitatively similar to those of the (EDEIR) incomplete-asset-market model. In particular, the model preserves the volatility ranking of output, consumption, investment, and the trade balance.

Table 4.3: The SOE-RBC Model With Complete Asset Markets: Predicted Second Moments

  Variable    σ_x    ρ_{x,x−1}  ρ_{x,GDP}
  y           3.1    0.61       1.00
  c           1.9    0.61       1.00
  i           9.1    0.07       0.66
  h           2.1    0.61       1.00

Note. Standard deviations are measured in percentage points. Matlab code to produce this table is available at http://www.columbia.edu/~mu2166/closing.htm.

Also, the domestic components of aggregate demand are all positively serially correlated and procyclical. Note that the correlation of consumption with output is now unity. This prediction of the CAM model is a consequence of assuming complete markets and GHH preferences. Under complete asset markets the marginal utility of consumption is constant over time, so that up to first order consumption is linear in hours.
In turn, with GHH preferences, as we deduced earlier in this chapter, hours are linearly related to output up to first order. A significant difference between the predictions of the complete- and incomplete-asset-market models is that the former implies a highly countercyclical current account, whereas the latter implies an acyclical current account.

Alternative Ways to Induce Stationarity

The small open economy RBC model analyzed thus far features a debt-elastic country interest-rate premium. As mentioned earlier in this chapter, the inclusion of a debt-elastic premium responds to the need to obtain stationary dynamics up to first order. Had we assumed a constant interest rate, the linearized equilibrium dynamics would have contained an endogenous random walk component and the steady state would have depended on initial conditions. Two problems emerge when the linear approximation possesses a unit root. First, one can no longer claim that when the support of the underlying shocks is sufficiently small the linear system behaves like the original nonlinear system, which is ultimately the focus of interest. Second, when the variables of interest contain random walk elements, it is impossible to compute unconditional first and second moments, such as standard deviations, serial correlations, correlations with output, etc., which are the most common descriptive statistics of the business cycle. Nonstationarity arises in the small open economy model from three features: an exogenous cost of borrowing in international financial markets, an exogenous subjective discount factor, and incomplete asset markets. Accordingly, in this section we study stationarity-inducing devices that consist in altering one of these three features.
Our analysis follows closely Schmitt-Grohé and Uribe (2003), but expands their analysis by including two additional approaches to inducing stationarity: a model with an internal interest-rate premium and a model with perpetually-young consumers. One important question is whether the different stationarity-inducing devices affect the predicted business cycle of the small open economy. A result of this section is that, given a common calibration, all models considered deliver similar business cycles. Before plunging into details, it is important to note that the nature of the nonstationarity that is present in the small open economy model is different from the one that emerges from the introduction of nonstationary exogenous shocks. In the latter case, it is typically possible to find a transformation of variables that renders the model economy stationary in terms of the transformed variables. We will study an economy with nonstationary shocks and provide an example of a stationarity-inducing transformation in section ??. By contrast, the nonstationarity that arises in the small open economy model with an exogenous cost of borrowing, an exogenous rate of time preference, and incomplete markets cannot be eliminated by any variable transformation. The section proceeds by first presenting and calibrating the different stationarity-inducing theoretical devices and then comparing the quantitative predictions of the various models.

Internal Debt-Elastic Interest Rate (IDEIR)

The EDEIR model studied thus far assumes that the country interest-rate premium depends upon the cross-sectional average of external debt. As a result, households take the country premium as exogenously given. The model with an internal debt-elastic interest rate assumes instead that the interest rate faced by domestic agents is increasing in the individual debt position, d_t.
Consequently, households internalize the effect that their borrowing choices have on the interest rate they face. In all other aspects, the IDEIR and EDEIR models are identical. Formally, in the IDEIR model the interest rate is given by

r_t = r* + p(d_t),

where r*, as before, denotes the world interest rate, but now p(·) is a household-specific interest-rate premium. Note that the argument of the interest-rate premium function is the household's own net debt position. This means that in deciding its optimal expenditure and savings plan, the household will take into account the fact that a change in its debt position alters the marginal cost of funds. The only optimality condition that changes relative to the EDEIR model is the Euler equation for debt accumulation, which now takes the form

U_c(c_t, h_t) = β[1 + r* + p(d_t) + p′(d_t) d_t] E_t U_c(c_{t+1}, h_{t+1}).   (4.39)

This expression features the derivative of the premium with respect to debt because households internalize the fact that as their net debt increases, so does the interest rate they face in financial markets. As a result, at the margin, the household cares about the marginal cost of borrowing, 1 + r* + p(d_t) + p′(d_t) d_t, and not about the average cost of borrowing, 1 + r* + p(d_t). The competitive equilibrium of the IDEIR economy is a set of processes {d_t, c_t, h_t, k_{t+1}, A_t} satisfying (4.11), (4.12), (4.15), (4.17), (4.18), and (4.39), given A_0, d_{−1}, and k_0, and the process {ε_t}. We assume the same functional forms and parameter values as in the EDEIR model (see section 4.2.1). We note that in the model analyzed here the steady-state level of debt is no longer equal to d̄. To see this, recall that β(1 + r*) = 1 and note that the steady-state version of equation (4.39) imposes the following restriction on d,

(1 + d) e^{d − d̄} = 1,

which does not admit the solution d = d̄, except in the special case in which d̄ = 0. We set d̄ = 0.7442, which is the value imposed in the EDEIR model.
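The steady-state condition (1 + d)e^{d − d̄} = 1 has no closed-form solution for d̄ ≠ 0, but it is easily solved numerically. The sketch below uses plain bisection in Python (the book's own codes are in Matlab); since the left-hand side is increasing in d, is below 1 at d = 0, and exceeds 1 at d = d̄, the root is bracketed by [0, d̄].

```python
import math

d_bar = 0.7442  # debt parameter, as imposed in the EDEIR model

# IDEIR steady-state condition for debt: (1 + d) * exp(d - d_bar) = 1
f = lambda d: (1.0 + d) * math.exp(d - d_bar) - 1.0

# f(0) < 0 and f(d_bar) = d_bar > 0, so bisect on [0, d_bar].
lo, hi = 0.0, d_bar
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        hi = mid
    else:
        lo = mid
d_ss = 0.5 * (lo + hi)  # ~ 0.40452, below d_bar: IDEIR households borrow less
```

The computed root reproduces (up to the rounding of d̄) the steady-state debt level reported in the text, and confirms that it lies strictly below d̄.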
The implied steady-state level of debt is then given by d = 0.4045212. Intuitively, households internalize that their own debt position drives up the interest rate, hence they choose to borrow less than households in the EDEIR economy, who fail to internalize the dependence of the interest rate on the stock of debt. In this sense, one can say that households in the EDEIR economy overborrow. The fact that the steady-state debt is lower than d̄ implies that the country premium is negative in the steady state. However, the marginal country premium, given by p(d_t) + p′(d_t) d_t, is nil in the steady state, as it is in the EDEIR economy. Recall that in the EDEIR economy, the marginal and average premia perceived by households are equal to each other and given by p(d̃_t). An alternative calibration strategy is to impose d = d̄, and to adjust β to ensure that equation (4.39) holds in the deterministic steady state. In this case, the country premium vanishes in the steady state, but the marginal premium is positive and equal to ψ_1 d̄.

Portfolio Adjustment Costs (PAC)

In the portfolio adjustment cost (PAC) model, stationarity is induced by assuming that agents face convex costs of holding assets in quantities different from some long-run level. Preferences and technology are as in the EDEIR model. However, in contrast to what is assumed in the EDEIR model, in the PAC model the interest rate at which domestic households can borrow from the rest of the world is assumed to be constant and equal to the world interest rate, r*; that is, the country premium is nil at all times. The sequential budget constraint of the household is given by

d_t = (1 + r*) d_{t−1} − A_t F(k_t, h_t) + c_t + k_{t+1} − (1 − δ)k_t + Φ(k_{t+1} − k_t) + (ψ_2/2)(d_t − d̄)²,   (4.40)

where ψ_2 and d̄ are constant parameters defining the portfolio adjustment cost function.
The first-order conditions associated with the household's maximization problem are identical to those associated with the EDEIR model, except that the Euler condition for debt, equation (4.16), now becomes

U_c(c_t, h_t)[1 − ψ_2(d_t − d̄)] = β(1 + r*) E_t U_c(c_{t+1}, h_{t+1}).   (4.41)

This optimality condition states that if the household chooses to borrow an additional unit, then current consumption increases by one unit minus the marginal portfolio adjustment cost ψ_2(d_t − d̄). The value of this increase in consumption in terms of utility is given by the left-hand side of the above equation. Next period, the household must repay the additional unit of debt plus interest. The value of this repayment in terms of today's utility is given by the right-hand side of the above optimality condition. At the optimum, the marginal benefit of a unit debt increase must equal its marginal cost. The competitive equilibrium of the PAC economy is a set of processes {d_t, c_t, h_t, k_{t+1}, A_t} satisfying (4.11), (4.12), (4.17), (4.18), (4.40), and (4.41), given A_0, d_{−1}, and k_0, and the process {ε_t}. Functional forms and the calibration of common parameters are as in the EDEIR model. Equation (4.41), together with the assumption that β(1 + r*) = 1, implies that the parameter d̄ determines the steady-state level of foreign debt (d = d̄). It follows that the steady-state level of debt is independent of initial conditions. We calibrate d̄ to 0.7442, which is the same value as in the EDEIR model. This means that the steady-state values of all endogenous variables are the same in the PAC and EDEIR models. We set ψ_2 at 0.00074, which ensures that the volatility of the current-account-to-output ratio is the same as in the EDEIR model. At this point, it might be natural to expect the analysis of an external version of the PAC model in which the portfolio adjustment cost depends on the aggregate level of debt, d̃_t, as opposed to the individual debt position d_t.
However, this modification would fail to render the small open economy model stationary. The reason is that in this case, the optimality condition with respect to debt, given by equation (4.41) in the PAC model, would become

U_c(c_t, h_t) = β(1 + r*) E_t U_c(c_{t+1}, h_{t+1}),

which, because β(1 + r*) equals one, implies that the marginal utility of consumption follows a random walk, and is therefore nonstationary. In addition, because this expression represents no restriction on the deterministic steady state of the economy, the external PAC model leaves the steady-state levels of consumption and debt indeterminate.

External Discount Factor (EDF)

We next study an SOE RBC model in which stationarity is induced by assuming that the subjective discount factor depends upon endogenous variables. Specifically, we consider a preference specification in which the discount factor depends on endogenous variables that are taken as exogenous by individual households. We refer to this environment as the external discount factor (EDF) model. Suppose that the discount factor depends on the average per capita levels of consumption and hours worked. Formally, preferences are described by

E_0 Σ_{t=0}^{∞} θ_t U(c_t, h_t),   (4.42)

θ_{t+1} = β(c̃_t, h̃_t) θ_t,   t ≥ 0,   θ_0 = 1,   (4.43)

where c̃_t and h̃_t denote the cross-sectional averages of per capita consumption and hours, respectively, which the individual household takes as exogenously given. In the EDF model, the interest rate is assumed to be constant and equal to r*. The sequential budget constraint of the household therefore takes the form

d_t = (1 + r*) d_{t−1} − A_t F(k_t, h_t) + c_t + k_{t+1} − (1 − δ)k_t + Φ(k_{t+1} − k_t),   (4.44)

and the no-Ponzi-game constraint simplifies to lim_{j→∞} (1 + r*)^{−j} E_t d_{t+j} ≤ 0.
The first-order conditions associated with the household's maximization problem are (4.11), (4.44), and

Uc(ct, ht) = β(c̃t, h̃t)(1 + r*)Et Uc(ct+1, ht+1) (4.45)

Uc(ct, ht)[1 + Φ′(kt+1 − kt)] = β(c̃t, h̃t)Et Uc(ct+1, ht+1)[At+1 Fk(kt+1, ht+1) + 1 − δ + Φ′(kt+2 − kt+1)] (4.46)

lim_{j→∞} Et dt+j/(1 + r*)^j = 0. (4.47)

In equilibrium, individual and average per capita levels of consumption and effort are identical. That is, ct = c̃t and ht = h̃t. A competitive equilibrium is a set of processes {dt, ct, ht, c̃t, h̃t, kt+1, At} satisfying (4.11), (4.12), and (4.44)-(4.49), given A0, d−1, and k0 and the stochastic process {εt}. We evaluate the model using the same functional forms for the period utility function, the production function, and the capital adjustment cost function as in the EDEIR model. We assume that the subjective discount factor is of the form

β(c, h) = (1 + c − h^ω/ω)^{−ψ3},

with ψ3 > 0, so that increases in consumption or leisure make households more impatient. To see that in the EDF model the steady-state level of debt is determined independently of initial conditions, start by noticing that in the steady state, equation (4.45) implies that β(c, h)(1 + r*) = 1, where c and h denote the steady-state values of consumption and hours. Next, notice that, given this result, the steady-state values of hours, capital (k), and output (k^α h^{1−α}) can be found in exactly the same way as in the EDEIR model, with β replaced by (1 + r*)^{−1}. Notice that k and h depend only on the deep structural parameters r*, α, ω, and δ. With h in hand, the above expression delivers c, which depends only on the deep structural parameters defining h and on ψ3. Finally, in the steady state, the resource constraint (4.44) implies that the steady-state level of debt, d, is given by

d = (c + δk − k^α h^{1−α})/r*,

which depends only on structural parameters.
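The steady-state recursion just described is easy to verify numerically. The sketch below computes hours, capital, and debt from the deep parameters and then backs out the value of ψ3 consistent with a 2 percent trade-balance-to-output ratio. The specific parameter values (α, δ, ω, r*) are the standard EDEIR calibration of Schmitt-Grohé and Uribe (2003) and are assumptions here, not values stated in this passage.

```python
import math

# Assumed EDEIR-style calibration (illustrative).
alpha, delta, omega, rstar, tby = 0.32, 0.1, 1.455, 0.04, 0.02

# Capital-labor ratio from the steady-state capital Euler equation:
# alpha*(k/h)^(alpha-1) = rstar + delta.
kappa = (alpha / (rstar + delta)) ** (1.0 / (1.0 - alpha))

# Hours from the GHH labor supply condition h^(omega-1) = (1-alpha)*kappa^alpha.
h = ((1.0 - alpha) * kappa ** alpha) ** (1.0 / (omega - 1.0))
k = kappa * h
y = k ** alpha * h ** (1.0 - alpha)

# Debt from the targeted trade-balance-to-output ratio, tb = rstar*d,
# and consumption from the resource constraint c = y - delta*k - rstar*d.
d = tby * y / rstar
c = y - delta * k - rstar * d

# Back out psi3 from beta(c, h)*(1 + rstar) = 1, i.e.
# (1 + c - h^omega/omega)^(-psi3) * (1 + rstar) = 1.
psi3 = math.log(1.0 + rstar) / math.log(1.0 + c - h ** omega / omega)
```

Under this calibration ψ3 comes out near 0.11, in line with the implied value reported in the text, and every steady-state object is pinned down by structural parameters alone, with no reference to initial conditions.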
The EDF model features one new parameter relative to the EDEIR model, namely the elasticity of the discount factor with respect to the composite 1 + ct − h_t^ω/ω. We set ψ3 to ensure that the steady-state trade-balance-to-output ratio equals 2 percent, in line with the calibration of the EDEIR model. The implied value of ψ3 is 0.11. Note that in our assumed specification of the endogenous discount factor, the parameter ψ3 governs both the steady-state trade-balance-to-output ratio and the stationarity of the equilibrium dynamics. This dual role may create a conflict. On the one hand, one may want to set ψ3 at a small value so as to ensure stationarity without affecting the predictions of the model at business-cycle frequency. On the other hand, matching the observed average trade-balance-to-output ratio might require a value of ψ3 that does affect the behavior of the model at business-cycle frequency. For this reason, it might be useful to consider a two-parameter specification of the discount factor, such as

β(ct, ht) = (ψ̃3 + ct − ω^{−1} h_t^ω)^{−ψ3},

where ψ̃3 > 0 is a parameter. With this specification, one can fix the parameter ψ3 at a small value, just to ensure stationarity, and set the parameter ψ̃3 to match the observed trade-balance-to-output ratio.

Internal Discount Factor (IDF)

Consider now a variation of the EDF model in which the subjective discount factor depends on the individual levels of consumption and hours worked rather than on the aggregate levels. Specifically, suppose that preferences are given by equation (4.42), with the following law of motion for θt:

θt+1 = β(ct, ht)θt,  t ≥ 0,  θ0 = 1. (4.50)

This preference specification was conceived by Uzawa (1968) and introduced in the small-open-economy literature by Mendoza (1991). Under these preferences, households internalize that their choices of consumption and leisure affect their valuations of future period utilities.
Households choose processes {ct, ht, kt+1, dt, θt+1}_{t=0}^∞ so as to maximize the utility function (4.42) subject to the sequential budget constraint (4.44), the law of motion of the discount factor (4.50), and the same no-Ponzi constraint as in the EDF economy. Let θt λt denote the Lagrange multiplier associated with (4.44) and θt ηt the Lagrange multiplier associated with (4.50). The first-order conditions associated with the household's maximization problem are (4.44), (4.47), and

λt = β(ct, ht)(1 + r*)Et λt+1 (4.51)

Uc(ct, ht) − ηt βc(ct, ht) = λt (4.52)

−Uh(ct, ht) + ηt βh(ct, ht) = λt At Fh(kt, ht) (4.53)

ηt = −Et U(ct+1, ht+1) + Et ηt+1 β(ct+1, ht+1) (4.54)

λt[1 + Φ′(kt+1 − kt)] = β(ct, ht)Et λt+1 [At+1 Fk(kt+1, ht+1) + 1 − δ + Φ′(kt+2 − kt+1)]. (4.55)

These first-order conditions are fairly standard, except for the fact that the marginal utility of consumption is not given simply by Uc(ct, ht) but rather by Uc(ct, ht) − βc(ct, ht)ηt. The second term in this expression reflects the fact that an increase in current consumption lowers the discount factor (βc < 0). In turn, a unit decline in the discount factor reduces utility in period t by ηt. Intuitively, −ηt equals the expected present discounted value of utility from period t + 1 onward. To see this, iterate the first-order condition (4.54) forward to obtain

ηt = −Et Σ_{j=1}^∞ (θt+j/θt+1) U(ct+j, ht+j).

Similarly, the marginal disutility of labor is not simply Uh(ct, ht) but instead Uh(ct, ht) − βh(ct, ht)ηt. The competitive equilibrium of the IDF economy is a set of processes {dt, ct, ht, kt+1, ηt, λt, At} satisfying (4.12), (4.44), (4.47), and (4.51)-(4.55), given the initial conditions A0, d−1, and k0 and the exogenous process {εt}. We pick the same functional forms as in the EDF model.
The fact that both the period utility function and the discount factor have a GHH structure implies that, as in all versions of the SOE RBC model considered thus far, the marginal rate of substitution between consumption and leisure depends only on hours worked and is independent of consumption. This yields the, by now familiar, equilibrium condition

h_t^{ω−1} = At Fh(kt, ht).

The steady state of the IDF economy is the same as that of the EDF economy. To see this, note that in the steady state, (4.51) implies that β(c, h)(1 + r*) = 1, which also features in the EDF model. Also, in the steady state, equation (4.55) yields an expression for the capital-labor ratio that is the same as in all versions of the SOE RBC model considered thus far. Finally, the fact that the labor supply schedule and the sequential budget constraint are identical in the EDF and IDF models implies that h, c, and d are also equal across the two models. This shows that the IDF model delivers a steady-state value of debt that is independent of initial conditions. Of course, the IDF model includes the variable ηt, which does not feature in the EDF model. The steady-state value of this variable is given by −U(c, h)/r*. Finally, we assign the same values to the structural parameters as in the EDF model.

The Model With No Stationarity Inducing Features (NSIF)

For comparison with the models studied thus far, we now consider a version of the small open economy RBC model featuring no stationarity-inducing features. In this model (a) the discount factor is constant; (b) the interest rate at which domestic agents borrow from the rest of the world is constant (and equal to the subjective discount rate, β(1 + r*) = 1); (c) agents face no frictions in adjusting the size of their asset portfolios; and (d) markets are incomplete, in the sense that domestic households have access only to a single risk-free international bond.
Under this specification, the deterministic steady state of consumption depends on the assumed initial level of net foreign debt. Also, up to first order, the equilibrium dynamics contain a random walk component in variables such as consumption, the trade balance, and net external debt. A competitive equilibrium in the nonstationary model is a set of processes {dt, ct, ht, kt+1, At} satisfying (4.11), (4.12), (4.17), (4.44), (4.47), and the consumption Euler equation

Uc(ct, ht) = β(1 + r*)Et Uc(ct+1, ht+1),

given d−1, k0, A0, and the exogenous process {εt}. It is clear from the above consumption Euler equation that in the present model the marginal utility of consumption follows a random walk (recall that β(1 + r*) = 1). This property is transmitted to consumption, debt, and the trade balance. Also, because the above Euler equation imposes no restriction in the deterministic steady state, the steady-state values of consumption, debt, and the trade balance are all indeterminate. The model does deliver unique deterministic steady-state values for kt and ht. We calibrate the parameters σ, r*, ω, α, φ, δ, ρ, and η̃ using the values displayed in Table 4.1.

The Perpetual-Youth Model (PY)

In this subsection, we present an additional way to induce stationarity in the small open economy RBC model. It is a discrete-time, stochastic, small-open-economy version of the perpetual-youth model due to Blanchard (1985). Cardia (1991) represents an early adoption of the perpetual-youth model in the context of a small open economy. Our model differs from Cardia's in that we assume a preference specification that allows for an exact aggregation of this model. Our strategy avoids the need to resort to linear approximations prior to aggregation.
The Basic Intuition

The basic intuition behind why the assumption of finite lives by itself helps to eliminate the unit root in the aggregate net foreign asset position can be seen from the following simple example. Consider an economy in which debt holdings of individual agents follow a pure random walk of the form

ds,t = ds,t−1 + µt.

Here, ds,t denotes the net debt position at the end of period t of an agent born in period s, and µt is an exogenous shock common to all agents and potentially serially correlated. This is exactly the equilibrium evolution of debt we obtained in the quadratic-preference, representative-agent economy of chapter 2; see equation (2.12). We now depart from the representative-agent assumption by introducing a constant and age-independent probability of death at the individual level. Specifically, assume that the population is constant over time and normalized to unity. Each period, individual agents face a probability 1 − θ ∈ (0, 1) of dying. In addition, to keep the size of the population constant over time, we assume that 1 − θ agents are born each period. Assume that those agents who die leave their outstanding debts unpaid and that newborns inherit no debts. Adding the left- and right-hand sides of the law of motion for debt over all agents alive in period t, that is, applying the operator (1 − θ)Σ_{s=−∞}^{t} θ^{t−s} to both sides of the expression ds,t = ds,t−1 + µt, yields

dt = θdt−1 + µt,

where dt denotes the aggregate debt position in period t. In performing the aggregation, recall that dt,t−1 = 0, because agents are born free of debts. Clearly, the resulting law of motion for the aggregate level of debt is mean reverting at the survival rate θ. The key difference with the representative-agent model is that here each period a fraction 1 − θ of the stock of debt simply disappears. In what follows, we embed this basic stationarity result into the small-open-economy real-business-cycle model.
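The aggregation step can be checked with a small simulation. The sketch below tracks one debt level per cohort (all members of a cohort are identical), adds a debt-free newborn cohort each period, lets every cohort's debt follow the common random walk, and verifies that the geometrically weighted aggregate satisfies dt = θdt−1 + µt. The parameter values are illustrative; the correction factor in the check accounts for the cohorts born before the simulation starts, which are absent in a finite sample.

```python
import random

random.seed(0)
theta = 1 - 1 / 75   # survival probability (the PY calibration value, used as an illustration)
T = 200

cohort_debt = []     # one entry per cohort alive; cohort_debt[s] = debt of cohort born in s
d_hist, mu_hist = [], []
for t in range(T):
    mu = random.gauss(0.0, 1.0)                    # shock common to all agents alive in t
    cohort_debt.append(0.0)                        # newborn cohort: d_{t,t-1} = 0
    cohort_debt = [d + mu for d in cohort_debt]    # random walk: d_{s,t} = d_{s,t-1} + mu_t
    # aggregate with cohort sizes (1-theta)*theta^(t-s)
    d_agg = (1 - theta) * sum(theta ** (t - s) * d for s, d in enumerate(cohort_debt))
    d_hist.append(d_agg)
    mu_hist.append(mu)

# Verify d_t = theta*d_{t-1} + mu_t. The factor (1 - theta**(t+1)) corrects for the
# missing pre-sample cohorts; it converges to 1 as the simulation lengthens.
for t in range(1, T):
    assert abs(d_hist[t] - (theta * d_hist[t - 1] + (1 - theta ** (t + 1)) * mu_hist[t])) < 1e-9
```

Even though every individual debt position is a pure random walk, the aggregate inherits an autoregressive coefficient equal to the survival rate θ, exactly as the operator argument in the text implies.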
Households

Each agent maximizes the utility function

−(1/2) E0 Σ_{t=0}^∞ (βθ)^t (xs,t − x̄)²,

with

xs,t = cs,t − h_{s,t}^ω/ω, (4.56)

where cs,t and hs,t denote consumption and hours worked in period t by an agent born in period s. The parameter β ∈ (0, 1) represents the subjective discount factor, and x̄ is a parameter denoting a satiation point. The symbol Et denotes the conditional expectations operator over aggregate states. Following the preference specification used in all of the models studied in this chapter, we assume that agents derive utility from a quasi-difference between consumption and leisure. But we depart from the preference specifications used earlier in this chapter by assuming a quadratic period utility index. As will become clear shortly, this assumption is essential to achieve aggregation in the presence of aggregate uncertainty. Financial markets are incomplete. Domestic consumers can borrow internationally by means of a bond paying a constant real interest rate. The debts of deceased domestic consumers are assumed to go unpaid. Foreign agents are assumed to lend to a large number of domestic consumers, so that the fraction of unpaid loans due to death is deterministic. To compensate foreign lenders for these losses, domestic consumers pay a constant premium over the world interest rate. Specifically, the gross interest rate at which domestic consumers borrow internationally is (1 + r*)/θ, where r* denotes the world interest rate. Domestic agents can also lend internationally. The lending contract stipulates that should the domestic lender die, the foreign borrower is relieved of his debt obligations. Since foreign borrowers can perfectly diversify their loans across domestic agents, they pay a deterministic interest rate. To eliminate pure arbitrage opportunities, domestic consumers must lend at the rate (1 + r*)/θ.
It follows that the gross interest rate on the domestic consumer's asset position (whether this position is positive or negative) is given by (1 + r*)/θ. The budget constraint of a domestic consumer born in period s ≤ t is

ds,t = [(1 + r*)/θ] ds,t−1 + cs,t − πt − wt hs,t, (4.57)

where πt and wt denote, respectively, profits received from the ownership of stock shares and the real wage rate. To facilitate aggregation, we assume that agents do not trade shares and that the shares of the dead are passed to the newborn in an egalitarian fashion. Thus, share holdings are identical across agents. Agents are assumed to be subject to the following no-Ponzi-game constraint

lim_{j→∞} Et [θ/(1 + r*)]^j ds,t+j ≤ 0. (4.58)

The first-order conditions associated with the agent's maximization problem are (4.56), (4.57), (4.58) holding with equality, and

−(xs,t − x̄) = λs,t, (4.59)

h_{s,t}^{ω−1} = wt, (4.60)

λs,t = β(1 + r*)Et λs,t+1. (4.61)

Note that hs,t is independent of s (i.e., it is independent of the agent's birth date). This means that we can drop the subscript s from hs,t and write

h_t^{ω−1} = wt.

Use equations (4.56) and (4.60) to eliminate cs,t from the sequential budget constraint (4.57). This yields

ds,t = [(1 + r*)/θ] ds,t−1 − πt − (1 − 1/ω) wt ht + x̄ + (xs,t − x̄).

To facilitate notation, we introduce the auxiliary variable

zt ≡ πt + (1 − 1/ω) wt ht − x̄,

which is the same for all generations s because both profits and hours worked are independent of the age of the cohort. Then the sequential budget constraint becomes

ds,t = [(1 + r*)/θ] ds,t−1 − zt + (xs,t − x̄). (4.64)

Now iterate this expression forward, apply the Et operator, and use the transversality condition (i.e., equation (4.58) holding with equality), to obtain

[(1 + r*)/θ] ds,t−1 = Et Σ_{j=0}^∞ [θ/(1 + r*)]^j [zt+j − (xs,t+j − x̄)].

Using equations (4.59) and (4.61) to replace Et xs,t+j yields

[(1 + r*)/θ] ds,t−1 = Et Σ_{j=0}^∞ [θ/(1 + r*)]^j zt+j − [β(1 + r*)²/(β(1 + r*)² − θ)] (xs,t − x̄).

Solve for xs,t to obtain

xs,t = x̄ + [(β(1 + r*)² − θ)/(βθ(1 + r*))] (z̃t − ds,t−1), (4.65)

where

z̃t ≡ [θ/(1 + r*)] Et Σ_{j=0}^∞ [θ/(1 + r*)]^j zt+j

denotes the weighted average of current and future expected values of zt. It can be expressed recursively as

z̃t = [θ/(1 + r*)] zt + [θ/(1 + r*)] Et z̃t+1.

We now aggregate individual variables by summing over generations born at time s ≤ t. Notice that at time t there are alive 1 − θ people born in t, (1 − θ)θ people born in t − 1, and, in general, (1 − θ)θ^s people born in period t − s. Let

xt ≡ (1 − θ) Σ_{s=−∞}^{t} θ^{t−s} xs,t  and  dt ≡ (1 − θ) Σ_{s=−∞}^{t} θ^{t−s} ds,t

denote the aggregate levels of xs,t and ds,t, respectively. Now multiply (4.65) by (1 − θ)θ^{t−s} and then sum for s = t to s = −∞ to obtain the following expression for the aggregate version of equation (4.65):

xt = x̄ + [(β(1 + r*)² − θ)/(βθ(1 + r*))] (z̃t − θdt−1).

In performing this step, keep in mind that dt,t−1 = 0. That is, consumers are born debt free. Finally, aggregate the first-order condition (4.59) and the budget constraint (4.64) to obtain

−(xt − x̄) = λt

dt = (1 + r*)dt−1 − zt + xt − x̄,

where

λt ≡ (1 − θ) Σ_{s=−∞}^{t} θ^{t−s} λs,t

denotes the cross-sectional average of marginal utilities of consumption.

Firms Producing Consumption Goods

We assume the existence of competitive firms that hire capital and labor services to produce consumption goods. These firms maximize profits, which are given by

At F(kt, ht) − wt ht − ut kt,

where the function F and the productivity factor At are as in the EDEIR model. The first-order conditions associated with the firm's profit-maximization problem are

At Fk(kt, ht) = ut

At Fh(kt, ht) = wt.

We assume perfect competition in product and factor markets. Because F is homogeneous of degree one, firms producing consumption goods make zero profits.
Firms Producing Capital Goods

We assume the existence of firms that buy consumption goods to transform them into investment goods, rent out capital, and pay dividends, πt. Formally, dividends in period t are given by

πt = ut kt − it − Φ(kt+1 − kt).

The evolution of capital follows the law of motion given in (4.4), which we reproduce here for convenience:

kt+1 = (1 − δ)kt + it.

The optimization problem of the capital-producing firm is dynamic. This is because investment goods take one period to become productive capital and because of the presence of adjustment costs. The firm must maximize some present discounted value of current and future expected profits. A problem that emerges at this point is what discount factor the firm should use. This issue does not have a clear answer for two reasons: first, the owners of the firm change over time. Recall that the shares of the dead are distributed in equal parts among the newborn. It follows that the firm cannot use as its discount factor the intertemporal marginal rate of substitution of a 'representative household.' For the representative household does not exist. Second, the firm operates in a financial environment characterized by incomplete asset markets. For this reason, it cannot use the price of state-contingent claims to discount future profits. For there is no market for such claims. One must therefore introduce assumptions regarding the firm's discounting behavior. These assumptions will in general not be innocuous with respect to the dynamics of capital accumulation. With this in mind, we will assume that the firm uses the discount factor β^j λt+j/λt to calculate the period-t value of one unit of consumption delivered in a particular state of period t + j. Note that this discount factor uses the average marginal utility of consumption of agents alive in period t + j relative to the average marginal utility of consumption of agents alive in period t.
Note that we use as the subjective discount factor the parameter β and not βθ. This is because the number of shareholders is constant over time (and equal to unity), unlike the size of a cohort born at a particular date, which declines at the mortality rate 1 − θ. The Lagrangian associated with the optimization problem of capital-goods producers is then given by

L = Et Σ_{j=0}^∞ β^j (λt+j/λt) [ut+j kt+j − kt+j+1 + (1 − δ)kt+j − Φ(kt+j+1 − kt+j)].

The first-order condition with respect to kt+1 is

λt[1 + Φ′(kt+1 − kt)] = β Et λt+1 [ut+1 + 1 − δ + Φ′(kt+2 − kt+1)].

Equilibrium

Equations (4.62), (4.63), (4.66)-(4.74) form a system of eleven equations in eleven unknowns: xt, λt, ht, wt, ut, πt, it, kt, dt, zt, z̃t. Here, we reproduce the system of equilibrium conditions for convenience:

h_t^{ω−1} = wt
zt = πt + (1 − 1/ω) wt ht − x̄
z̃t = [θ/(1 + r*)] zt + [θ/(1 + r*)] Et z̃t+1
xt = x̄ + [(β(1 + r*)² − θ)/(βθ(1 + r*))] (z̃t − θdt−1)
−(xt − x̄) = λt
dt = (1 + r*)dt−1 − zt + xt − x̄
At Fk(kt, ht) = ut
At Fh(kt, ht) = wt
πt = ut kt − it − Φ(kt+1 − kt)
kt+1 = (1 − δ)kt + it
λt[1 + Φ′(kt+1 − kt)] = β Et λt+1 [ut+1 + 1 − δ + Φ′(kt+2 − kt+1)].

It is of interest to consider the special case in which β(1 + r*) = 1. In this case, the evolution of external debt is given by

dt = θdt−1 + [(1 + r* − θ)/θ] z̃t − zt.

This expression shows that the stock of debt does not follow a random walk, as it did in the representative-agent economy with quadratic preferences of chapter 2. In fact, the (autoregressive) coefficient on past external debt is θ ∈ (0, 1). The mean-reverting property of aggregate external debt obtains in spite of the fact that individual debt positions follow a random walk. The reason why the aggregate level of external debt is trend reverting in equilibrium is the fact that each period a fraction 1 − θ ∈ (0, 1) of the agents die and are replaced by newborns holding no financial assets.
As a result, on average, the current aggregate level of debt is only a fraction θ of the previous period's level of debt. This intuition also goes through when β(1 + r*) ≠ 1, although in this case individual levels of debt display a trend in the deterministic equilibrium. In the deterministic steady state, the aggregate level of debt is given by

d = θ(1 − β(1 + r*)) z / [(1 + r* − θ)(θ − β(1 + r*))],

where z denotes the steady-state value of zt. In the special case in which β(1 + r*) equals unity, the steady-state aggregate stock of debt is nil. This is because in this case agents, all of whom are born with no debts, wish to hold constant debt levels over time. In this case, in the steady state both the aggregate and the individual levels of debt are zero. It can be shown that if β(1 + r*) is less than unity but larger than θ, the steady-state level of debt must be positive. We adopt the same functional forms for F and Φ as in the EDEIR model. We calibrate ω, α, φ, δ, ρ, β, and η̃ at the values displayed in Table 4.1. Consequently, the steady-state values of hours, capital, output, investment, consumption, and the trade balance are the same as in the EDEIR model. We set θ = 1 − 1/75, which implies a life expectancy of 75 years. Finally, we calibrate r* and x̄ to ensure that in the steady state the trade-balance-to-output ratio is 2 percent and the degree of relative risk aversion, given by −x/(x − x̄), is 2. This calibration results in an interest rate of 3.7451 percent and a satiation point of 0.6334.

Quantitative Results

Table 4.4 displays a number of unconditional second moments of interest implied by the IDF, EDF, EDEIR, IDEIR, PAC, CAM, and PY models. The NSIF model is nonstationary up to first order, and therefore does not have well-defined unconditional second moments. The second moments for all models other than the IDEIR and PY models are taken from Schmitt-Grohé and Uribe (2003).
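The closed-form expression for steady-state debt given above can be checked directly against the aggregate equilibrium conditions of the perpetual-youth model. The sketch below solves the three steady-state relations (the z̃ recursion, the xt equation, and the aggregate budget constraint) for d given a steady-state value of zt, and compares the result with the closed form. The parameter values are illustrative assumptions chosen only to satisfy θ < β(1 + r*) < 1.

```python
# Illustrative parameters with theta < beta*(1+rstar) < 1; z is an assumed
# steady-state value of z_t (negative, so that steady-state debt is positive).
beta, rstar, theta, z = 0.95, 0.04, 1 - 1 / 75, -0.3

K = (beta * (1 + rstar) ** 2 - theta) / (beta * theta * (1 + rstar))
ztilde = theta * z / (1 + rstar - theta)   # steady state of the z-tilde recursion

# Steady-state system: x - xbar = K*(ztilde - theta*d) from the x_t equation,
# and x - xbar = z - rstar*d from the aggregate budget constraint.
# Equating the two gives one linear equation in d.
d_numeric = (z - K * ztilde) / (rstar - K * theta)

# Closed-form expression from the text.
d_formula = theta * (1 - beta * (1 + rstar)) * z / (
    (1 + rstar - theta) * (theta - beta * (1 + rstar)))
```

With β(1 + r*) between θ and one and z < 0, the computed debt level is positive, consistent with the sign claim made in the text.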
We compute the equilibrium dynamics by solving a log-linear approximation to the set of equilibrium conditions. The Matlab computer code used to compute the unconditional second moments and impulse response functions for all models presented in this section is available at www.columbia.edu/~mu2166/closing.htm. Table 4.4 shows that regardless of how stationarity is induced, the model's predictions regarding second moments are virtually identical. One noticeable difference arises in the CAM model, the complete markets case, which, as might be expected, predicts less volatile consumption. The low volatility of consumption in the complete markets model introduces a difference between the predictions of this model and those of the IDF, EDF, EDEIR, IDEIR, PAC, and PY models: because consumption is smoother in the CAM model, its role in determining the cyclicality of the trade balance is smaller. As a result, the CAM model predicts that the correlation between output and the trade balance is positive, whereas the models featuring incomplete asset markets all imply that this correlation is negative. Figure 4.3 demonstrates that all of the models being compared imply virtually identical impulse response functions to a technology shock.

Table 4.4: Second Moments Across Models

                              IDF    EDF    EDEIR  IDEIR  PAC    CAM    PY
Volatilities:
std(yt)                       –      –      –      –      –      –      –
std(ct)                       2.3    2.3    2.5    2.7    2.7    1.9    2.5
std(it)                       9.1    9.1    9      9      9      9.1    8.7
std(ht)                       2.1    2.1    2.1    2.1    2.1    2.1    2.1
std(tbt/yt)                   1.5    1.5    1.6    1.8    1.8    1.6    1.5
std(cat/yt)                   –      –      –      –      –      –      –
Serial Correlations:
corr(yt, yt−1)                0.61   0.61   0.62   0.62   0.62   0.61   0.62
corr(ct, ct−1)                0.7    0.7    0.76   0.78   0.78   0.61   0.74
corr(it, it−1)                0.07   0.07   0.068  0.069  0.069  0.07   0.064
corr(ht, ht−1)                0.61   0.61   0.62   0.62   0.62   0.61   0.62
corr(tbt/yt, tbt−1/yt−1)      –      –      –      –      –      –      –
corr(cat/yt, cat−1/yt−1)      –      –      –      –      –      –      –
Correlations with Output:
corr(ct, yt)                  0.94   0.94   –      –      –      –      –
corr(it, yt)                  0.66   0.66   0.68   0.67   0.67   0.66   0.69
corr(ht, yt)                  1      1      1      1      1      1      1
corr(tbt/yt, yt)              –      –      –      –      –      –      –
corr(cat/yt, yt)              –      –      –      –      –      –      –

Note. Standard deviations are measured in percent per year. IDF = Internal Discount Factor; EDF = External Discount Factor; IDEIR = Internal Debt-Elastic Interest Rate; EDEIR = External Debt-Elastic Interest Rate; PAC = Portfolio Adjustment Costs; CAM = Complete Asset Markets; PY = Perpetual Youth Model. Parts of the table are reproduced from Schmitt-Grohé and Uribe (2003).

Figure 4.3: Impulse Response to a Unit Technology Shock Across Models
(Panels: Output; Investment; Trade Balance / GDP; Current Account / GDP.)
Note. Solid line, IDF model; squares, EDF; dashed line, EDEIR model; dash-dotted line, PAC model; dotted line, CAM model; circles, NSIF model; right triangle, IDEIR model; left triangle, PY.

Each panel shows the impulse response of a particular variable in the eight models. For all variables, the impulse response functions are so similar that to the naked eye the graph appears to show just a single line. Again, the only small and barely noticeable difference is given by the responses of consumption and the trade-balance-to-GDP ratio in the complete markets model. In response to a positive technology shock, consumption increases less when markets are complete than when markets are incomplete. This, in turn, leads to a smaller decline in the trade balance in the period in which the technology shock occurs.

Appendix A: Log-Linearization of the SOE-RBC EDEIR Model

Let x̂t ≡ ln(xt/x) denote the log-deviation of xt from its deterministic steady-state value x.
Then the log-linearized version of equilibrium conditions (4.11), (4.12), and (4.15)-(4.17) of the SOE-RBC EDEIR model is:

(εhh − εch) ĥt + (εhc − εcc) ĉt = ât + α(k̂t − ĥt)

Et ât+1 = ρ ât

ât + α k̂t + (1 − α) ĥt + (stb/r*) d̂t = (stb/r*)[ψ1 d + 1 + r*] d̂t−1 + sc ĉt + (si/δ)[k̂t+1 − (1 − δ) k̂t] (4.77)

εcc ĉt + εch ĥt = [ψ1 d/(1 + r*)] d̂t + εcc Et ĉt+1 + εch Et ĥt+1 (4.78)

εcc ĉt + εch ĥt + Φ″(0) k (k̂t+1 − k̂t) = εcc Et ĉt+1 + εch Et ĥt+1 + [(r* + δ)/(1 + r*)][Et ât+1 + (α − 1)(Et k̂t+1 − Et ĥt+1)] + [Φ″(0) k/(1 + r*)][Et k̂t+2 − Et k̂t+1] (4.79)

where εhh ≡ Uhh h/Uh, εch ≡ Uch h/Uc, εhc ≡ Uhc c/Uh, εcc ≡ Ucc c/Uc, stb ≡ r* d/F(k, h), sc ≡ c/F(k, h), and si ≡ δk/F(k, h). In the above log-linearization we are using the particular forms assumed for the production function and the country premium function.

Appendix B: First-Order Accurate Approximations to Dynamic General Equilibrium Models

The equilibrium conditions of the SOE-RBC model take the form of the nonlinear stochastic vector difference equation (4.28). In this respect, the SOE-RBC model is not special. In fact, systems of equilibrium conditions of this form are ubiquitous in macroeconomics. A problem that one must face is that, in general, it is impossible to solve such systems analytically. But fortunately one can obtain good approximations to the true solution in relatively easy ways. In section 4.2.4, we introduced one particular strategy, consisting in linearizing the equilibrium conditions around the nonstochastic steady state. Here we explain in detail how to solve the resulting system of linear stochastic difference equations. In addition, we show how to use the solution to compute second moments and impulse response functions. For convenience, let us begin by reproducing the compact form of the system of equilibrium conditions shown in equation (4.28).
Et f(yt+1, yt, xt+1, xt) = 0, (4.28)

where Et denotes the mathematical expectations operator conditional on information available at time t. The vector xt denotes predetermined (or state) variables and the vector yt denotes nonpredetermined (or control) variables. Let nx be the length of xt and ny the length of yt. We define n = nx + ny. The function f then maps R^{ny} × R^{ny} × R^{nx} × R^{nx} into R^n. The initial value of the state vector x0 is an initial condition for the economy. In addition, the complete set of equilibrium conditions typically includes a transversality condition, like (4.18). As we restrict attention to equilibria in which all variables are expected to converge to their nonstochastic steady-state values, we can ignore this type of condition. The state vector xt can be partitioned as xt = [x1t; x2t]′. The vector x1t consists of endogenous predetermined state variables and the vector x2t of exogenous state variables. Specifically, we assume that x2t follows the exogenous stochastic process given by

x2t+1 = Λ x2t + η̃ σ εt+1,

where the vector x2t and the innovation εt are of the same order.⁴ The vector εt is assumed to have bounded support and to be independently and identically distributed, with mean zero and variance-covariance matrix I. The eigenvalues of the matrix Λ are assumed to lie within the unit circle. The parameter σ is a scalar. It is an auxiliary perturbation parameter, introduced to scale the degree of uncertainty in the model. When σ = 0, the model becomes deterministic. When σ = 1, we obtain the law of motion of the exogenous variables of the original model. As will become clear shortly, the perturbation parameter σ is convenient for approximating the stochastic economy around its nonstochastic steady state. Its usefulness, however, is greatest when one wishes to find solutions that are accurate to order higher than one. See, for instance, Schmitt-Grohé and Uribe (2006).
The solution to the system (4.28) is of the form:

yt = g(xt, σ) (4.80)

xt+1 = h(xt, σ) + η σ εt+1, (4.81)

where the function g maps R^{nx} × R+ into R^{ny} and the function h maps R^{nx} × R+ into R^{nx}. The matrix η is of order nx × n and is given by

η = [∅; η̃],

that is, a block of zeros corresponding to the endogenous states x1t stacked on top of η̃.

⁴ It is straightforward to accommodate the case in which the size of the innovations vector εt is different from that of x2t.

The first-order Taylor series expansion of the functions g and h around a point (x̄, σ̄) is

g(x, σ) ≈ g(x̄, σ̄) + gx(x̄, σ̄)(x − x̄) + gσ(x̄, σ̄)(σ − σ̄)
h(x, σ) ≈ h(x̄, σ̄) + hx(x̄, σ̄)(x − x̄) + hσ(x̄, σ̄)(σ − σ̄),

where gx(x̄, σ̄) and hx(x̄, σ̄) denote the Jacobian matrices of the functions g and h, respectively, with respect to the vector xt, and gσ(x̄, σ̄) and hσ(x̄, σ̄) denote the Jacobian matrices of the functions g and h, respectively, with respect to the parameter σ. These matrices define the first-order accurate solution to the DSGE model. To identify these matrices, substitute the proposed solution, equations (4.80) and (4.81), into equation (4.28), and define

F(x, σ) ≡ Et f(g(h(x, σ) + ησε′, σ), g(x, σ), h(x, σ) + ησε′, x) = 0. (4.82)

Here we drop time subscripts for variables dated in period t and use a prime to indicate variables dated in period t + 1. Because F(x, σ) must be equal to zero for any value of x and σ, it must be the case that the derivatives of F must also be equal to zero. Formally,

Fx(x, σ) = 0 and Fσ(x, σ) = 0 (4.83)

for any x and σ. Here Fx(x, σ) denotes the derivative of F with respect to x and is of order n × nx, and Fσ(x, σ) denotes the derivative of F with respect to σ and is of order n × 1. As will become clear below, a particularly convenient point to approximate the functions g and h around is the nonstochastic steady state, xt = x^ss and σ = 0. We define the nonstochastic steady state as vectors (x^ss, y^ss) such that

f(y^ss, y^ss, x^ss, x^ss) = 0.

It is clear that y^ss = g(x^ss, 0) and x^ss = h(x^ss, 0).
To see this, note that if $\sigma = 0$, then $E_t f = f$. The reason why the steady state is a particularly convenient point is that in most models it is possible to solve for the steady state analytically or numerically. With the steady-state values in hand, one can then evaluate the derivatives of the function $F$.

We are looking for approximations to $g$ and $h$ around the point $(x, \sigma) = (x^{ss}, 0)$ of the form

$$g(x, \sigma) \approx g(x^{ss}, 0) + g_x(x^{ss}, 0)(x - x^{ss}) + g_\sigma(x^{ss}, 0)\sigma$$
$$h(x, \sigma) \approx h(x^{ss}, 0) + h_x(x^{ss}, 0)(x - x^{ss}) + h_\sigma(x^{ss}, 0)\sigma.$$

Solving the DSGE model up to first order of accuracy amounts to finding numerical values for $g(x^{ss}, 0)$, $g_x(x^{ss}, 0)$, $g_\sigma(x^{ss}, 0)$, $h(x^{ss}, 0)$, $h_x(x^{ss}, 0)$, and $h_\sigma(x^{ss}, 0)$. We have already established that $g(x^{ss}, 0) = y^{ss}$ and $h(x^{ss}, 0) = x^{ss}$, so these objects are numerically known. The remaining unknown coefficients of the first-order approximation to $g$ and $h$ are identified by using the fact that, by equation (4.83), it must be the case that $F_\sigma(x^{ss}, 0) = 0$ and $F_x(x^{ss}, 0) = 0$.

To find these derivatives, let's start by repeating equation (4.82):

$$F(x, \sigma) \equiv E_t f\big(g(h(x, \sigma) + \eta\sigma\epsilon', \sigma),\; g(x, \sigma),\; h(x, \sigma) + \eta\sigma\epsilon',\; x\big).$$

Taking the derivative with respect to the scalar $\sigma$, we find

$$F_\sigma(x^{ss}, 0) = E_t\big\{ f_{y'}[g_x(h_\sigma + \eta\epsilon') + g_\sigma] + f_y g_\sigma + f_{x'}(h_\sigma + \eta\epsilon') \big\} = f_{y'}[g_x h_\sigma + g_\sigma] + f_y g_\sigma + f_{x'} h_\sigma,$$

where $f_{y'}$, $f_y$, $f_{x'}$, and $f_x$ denote, respectively, the partial derivatives of the function $f$ with respect to $y'$, $y$, $x'$, and $x$, evaluated at the nonstochastic steady state. Then, imposing $F_\sigma(x^{ss}, 0) = 0$, one obtains the system

$$\begin{bmatrix} f_{y'} g_x + f_{x'} & f_{y'} + f_y \end{bmatrix} \begin{bmatrix} h_\sigma \\ g_\sigma \end{bmatrix} = 0$$

in the unknowns $g_\sigma$ and $h_\sigma$. This system is linear and homogeneous in $g_\sigma$ and $h_\sigma$. Therefore, if a unique solution exists, we have that $h_\sigma = 0$ and $g_\sigma = 0$. These two expressions imply two important theoretical results.
First, they show that, in general, up to first order one need not correct the constant term of the approximation to the policy function for the size of the variance of the shocks. Second, and perhaps more importantly, they imply that in a first-order approximation the expected values of $x_t$ and $y_t$ are equal to their non-stochastic steady-state values $x^{ss}$ and $y^{ss}$. To see this, take unconditional expectations of equation (4.81), which yields

$$E x = E h(x, \sigma) \approx x^{ss} + h_x(x^{ss}, 0)(E(x) - x^{ss}),$$

where the last relationship uses the approximation of the policy function. Collecting terms, one obtains

$$(I - h_x(x^{ss}, 0))(E(x) - x^{ss}) = 0.$$

If no eigenvalue of $h_x(x^{ss}, 0)$ lies on the unit circle, then the only solution to this equation is $E(x) = x^{ss}$.

To find $g_x$ and $h_x$, differentiate (4.82) with respect to $x$ to obtain

$$F_x(x^{ss}, 0) = f_{y'} g_x h_x + f_y g_x + f_{x'} h_x + f_x.$$

Note that the derivatives of $f$ evaluated at $(y', y, x', x) = (y^{ss}, y^{ss}, x^{ss}, x^{ss})$ are known. Imposing $F_x(x^{ss}, 0) = 0$, the above expression can be written as

$$\begin{bmatrix} f_{x'} & f_{y'} \end{bmatrix} \begin{bmatrix} I \\ g_x \end{bmatrix} h_x = -\begin{bmatrix} f_x & f_y \end{bmatrix} \begin{bmatrix} I \\ g_x \end{bmatrix},$$

which is a system of $n \times n_x$ quadratic equations in the $n \times n_x$ unknowns given by the elements of $g_x$ and $h_x$. Let $A = [f_{x'}\; f_{y'}]$ and $B = -[f_x\; f_y]$. Note that both $A$ and $B$ are known. At this point we reintroduce time subscripts. Define $\hat{x}_{t+j} \equiv E_t x_{t+j} - x^{ss}$. Then, post-multiplying the above system by $\hat{x}_t$, we obtain

$$A \begin{bmatrix} I \\ g_x \end{bmatrix} h_x \hat{x}_t = B \begin{bmatrix} I \\ g_x \end{bmatrix} \hat{x}_t.$$

Noticing that $\hat{x}_{t+1} \approx h_x \hat{x}_t$ and that $\hat{y}_t \approx g_x \hat{x}_t$, we can write the system as

$$A \begin{bmatrix} \hat{x}_{t+1} \\ \hat{y}_{t+1} \end{bmatrix} = B \begin{bmatrix} \hat{x}_t \\ \hat{y}_t \end{bmatrix}.$$

Now define the vector $\hat{w}_t$ containing all state and control variables of the system. Formally,

$$\hat{w}_t \equiv \begin{bmatrix} \hat{x}_t \\ \hat{y}_t \end{bmatrix}.$$

We can then write the linearized system as

$$A \hat{w}_{t+1} = B \hat{w}_t.$$

We seek solutions in which

$$\lim_{j \to \infty} \hat{w}_{t+j} = 0. \tag{4.84}$$

This requirement means that at every point in time the vector $w_t$ is expected to converge to its non-stochastic steady state, $w^{ss} \equiv [x^{ss\prime}\; y^{ss\prime}]'$.
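The matrices $A = [f_{x'}\; f_{y'}]$ and $B = -[f_x\; f_y]$ can be assembled from numerical derivatives of $f$ at the steady state. The sketch below is an illustration, not the authors' code; it uses central finite differences on a hypothetical two-equation model ($x_{t+1} = \rho x_t$ and $y_t = \kappa x_t + \theta E_t y_{t+1}$) whose steady state is the origin, with parameter values chosen only for concreteness:

```python
import numpy as np

def build_AB(f, yss, xss, eps=1e-6):
    """Form A = [fx' fy'] and B = -[fx fy] in A E[w'] = B w, with w = [x; y],
    by central finite differences of f(yp, y, xp, x) at the steady state."""
    ny, nx = len(yss), len(xss)
    n = ny + nx
    def jac(pos, size):
        # Derivative of f with respect to argument `pos` (0=y', 1=y, 2=x', 3=x)
        J = np.zeros((n, size))
        for j in range(size):
            args_p = [yss.copy(), yss.copy(), xss.copy(), xss.copy()]
            args_m = [yss.copy(), yss.copy(), xss.copy(), xss.copy()]
            args_p[pos][j] += eps
            args_m[pos][j] -= eps
            J[:, j] = (f(*args_p) - f(*args_m)) / (2 * eps)
        return J
    fyp, fy, fxp, fx = jac(0, ny), jac(1, ny), jac(2, nx), jac(3, nx)
    return np.hstack([fxp, fyp]), -np.hstack([fx, fy])

# Toy model: f1 = x' - rho*x,  f2 = y - kappa*x - theta*y'
rho, kappa, theta = 0.9, 1.0, 0.5
def f(yp, y, xp, x):
    return np.array([xp[0] - rho * x[0],
                     y[0] - kappa * x[0] - theta * yp[0]])

A, B = build_AB(f, np.zeros(1), np.zeros(1))
# A = [[1, 0], [0, -theta]],  B = [[rho, 0], [kappa, -1]]
```

Because $f$ is linear in this toy example, the finite-difference Jacobians are exact up to rounding error.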
The remainder of this section is based on Klein (2000) (see also Sims, 1996). Consider the generalized Schur decomposition of $A$ and $B$:

$$qAz = a \quad\text{and}\quad qBz = b,$$

where $a$ and $b$ are upper triangular matrices and $q$ and $z$ are orthonormal matrices. Recall that a matrix $a$ is said to be upper triangular if its element in row $i$ and column $j$, denoted $a(i,j)$, is zero for $i > j$. A matrix $z$ is orthonormal if $z'z = zz' = I$. Define

$$s_t \equiv z'\hat{w}_t.$$

Then we have that

$$a s_{t+1} = b s_t.$$

The ratio $b(i,i)/a(i,i)$ is known as a generalized eigenvalue of the matrices $A$ and $B$. Assume, without loss of generality, that the ratios $|b(i,i)/a(i,i)|$ are increasing in $i$. Now partition $a$, $b$, $z$, $\hat{w}_t$, and $s_t$ as

$$a = \begin{bmatrix} a_{11} & a_{12} \\ \emptyset & a_{22} \end{bmatrix}, \quad b = \begin{bmatrix} b_{11} & b_{12} \\ \emptyset & b_{22} \end{bmatrix}, \quad z = \begin{bmatrix} z_{11} & z_{12} \\ z_{21} & z_{22} \end{bmatrix}, \quad \hat{w}_t = \begin{bmatrix} \hat{w}_t^1 \\ \hat{w}_t^2 \end{bmatrix}, \quad s_t = \begin{bmatrix} s_t^1 \\ s_t^2 \end{bmatrix},$$

where $a_{11}$ and $b_{11}$ are square matrices whose diagonals generate the generalized eigenvalues of $(A, B)$ with absolute values less than one, and $a_{22}$ and $b_{22}$ are square matrices whose diagonals generate the generalized eigenvalues of $(A, B)$ with absolute values greater than one. Then we have that

$$a_{22} s_{t+1}^2 = b_{22} s_t^2.$$

The partition of the matrix $B$ guarantees that all diagonal elements of $b_{22}$ are nonzero. In addition, recalling that a triangular matrix is invertible if the elements along its main diagonal are nonzero, it follows that $b_{22}$ is invertible. So we can write

$$b_{22}^{-1} a_{22} s_{t+1}^2 = s_t^2.$$

By construction, the eigenvalues of $b_{22}^{-1} a_{22}$ are all less than unity in modulus.⁵ It follows that the elements of $s_{t+j}^2$ will converge to infinity in absolute value unless $s_t^2$ is zero for all $t$. Since $s_t$ is a linear transformation of $\hat{w}_t$, the terminal condition (4.84) implies the terminal condition

$$\lim_{j \to \infty} s_{t+j} = 0.$$

Therefore, (4.84) is satisfied only if $s_t^2 = 0$ for all $t$. In turn, by the definition of $s_t^2$, this restriction implies that

$$z_{12}'\hat{w}_t^1 + z_{22}'\hat{w}_t^2 = 0.$$

Solving this expression for $\hat{w}_t^2$ yields

$$\hat{w}_t^2 = G\hat{w}_t^1, \tag{4.85}$$

with

$$G \equiv -z_{22}'^{-1} z_{12}'. \tag{4.86}$$
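The generalized Schur decomposition with the required eigenvalue ordering is available in SciPy as `ordqz`. The check below, with arbitrary example matrices, verifies the properties the text relies on: triangularity, orthonormality, and increasing $|b(i,i)/a(i,i)|$. Note SciPy's convention differs slightly from the text's: it returns $a$, $b$, $Q$, $Z$ with $A = Q\,a\,Z^H$ (so $Q^H$ plays the role of $q$), and the pencil eigenvalue it reports as $\alpha/\beta$ is the reciprocal of the text's ratio $b(i,i)/a(i,i)$:

```python
import numpy as np
from scipy.linalg import ordqz

A = np.array([[1.0, 0.3], [0.2, 2.0]])
B = np.array([[0.5, 0.1], [0.0, 3.0]])
# Put |b(i,i)/a(i,i)| = |beta/alpha| in increasing order, i.e. |alpha| > |beta| first.
a, b, alpha, beta, Q, Z = ordqz(A, B, sort=lambda al, be: np.abs(al) > np.abs(be),
                                output='complex')
assert np.allclose(Q @ a @ Z.conj().T, A)       # A = Q a Z^H
assert np.allclose(Q @ b @ Z.conj().T, B)       # B = Q b Z^H
assert np.allclose(np.tril(a, -1), 0) and np.allclose(np.tril(b, -1), 0)  # triangular
assert np.allclose(Z.conj().T @ Z, np.eye(2))   # Z orthonormal (unitary)
mu = np.abs(beta / alpha)                       # the text's generalized eigenvalues
assert mu[0] <= mu[1]                           # increasing in i
```

`output='complex'` guarantees genuinely triangular (not quasi-triangular) factors, which is what the block-partition argument in the text requires.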
The invertibility of $z_{22}'$ follows from the fact that, being orthonormal, $z'$ itself is invertible. The condition $s_t^2 = 0$ for all $t$ also implies that

$$a_{11} s_{t+1}^1 = b_{11} s_t^1.$$

The criteria used to partition $A$ and $B$ guarantee that the diagonal elements of the upper triangular matrix $a_{11}$ are nonzero. Therefore, $a_{11}$ is invertible, which allows us to write

$$s_{t+1}^1 = a_{11}^{-1} b_{11} s_t^1. \tag{4.87}$$

Footnote 5: To arrive at this conclusion we are applying a number of properties of upper triangular matrices, namely: (a) the inverse of a nonsingular upper triangular matrix is upper triangular; (b) the product of two upper triangular matrices is upper triangular; (c) the eigenvalues of an upper triangular matrix are the elements of its main diagonal.

Now express $s_t^1$ as a linear transformation of $\hat{w}_t^1$ as follows:

$$s_t^1 = z_{11}'\hat{w}_t^1 + z_{21}'\hat{w}_t^2 = (z_{11}' + z_{21}'G)\hat{w}_t^1 = (z_{11}' - z_{21}'z_{22}'^{-1}z_{12}')\hat{w}_t^1 = z_{11}^{-1}\hat{w}_t^1.$$

The second and third equalities make use of equation (4.85) and identity (4.86), respectively. The last equality follows from the fact that $z$ is orthonormal.⁶ Combining this expression with (4.87) yields

$$\hat{w}_{t+1}^1 = H\hat{w}_t^1, \qquad H \equiv z_{11} a_{11}^{-1} b_{11} z_{11}^{-1}.$$

Finally, note that all eigenvalues of $H$ are inside the unit circle. To see this, note that the eigenvalues of $z_{11} a_{11}^{-1} b_{11} z_{11}^{-1}$ must be the same as the eigenvalues of $a_{11}^{-1} b_{11}$. In turn, $a_{11}^{-1} b_{11}$ is upper triangular with diagonal elements less than one in modulus.

Footnote 6: To see this, let $k \equiv z_{11}' - z_{21}'z_{22}'^{-1}z_{12}'$. We wish to show that $k = z_{11}^{-1}$. Note that the orthonormality of $z$ implies

$$I = z'z = \begin{bmatrix} z_{11}'z_{11} + z_{21}'z_{21} & z_{11}'z_{12} + z_{21}'z_{22} \\ z_{12}'z_{11} + z_{22}'z_{21} & z_{12}'z_{12} + z_{22}'z_{22} \end{bmatrix}.$$

Use element $(2,1)$ of $z'z$ to get $z_{12}'z_{11} = -z_{22}'z_{21}$. Pre-multiply by $z_{22}'^{-1}$ and post-multiply by $z_{11}^{-1}$ to get $z_{22}'^{-1}z_{12}' = -z_{21}z_{11}^{-1}$. Use this expression to eliminate $z_{22}'^{-1}z_{12}'$ from the definition of $k$ to obtain $k = [z_{11}' + z_{21}'z_{21}z_{11}^{-1}]$.
Now use element $(1,1)$ of $z'z$ to write $z_{21}'z_{21} = I - z_{11}'z_{11}$. Using this equation to eliminate $z_{21}'z_{21}$ from the expression in square brackets, we get $k = [z_{11}' + (I - z_{11}'z_{11})z_{11}^{-1}]$, which is simply $z_{11}^{-1}$. Finally, note that the invertibility of $z_{11}$ follows from the invertibility of $z$.

Local Existence and Uniqueness of Equilibrium

The analysis thus far has not delivered the matrices $h_x$ and $g_x$ that define the first-order accurate solution of the DSGE model. In this section, we accomplish this task and derive conditions under which the equilibrium dynamics are locally unique.

Local Uniqueness of Equilibrium

Suppose that the number of generalized eigenvalues of the matrices $A$ and $B$ with absolute value less than unity is exactly equal to the number of states, $n_x$. That is, suppose that $a_{11}$ and $b_{11}$ are of size $n_x \times n_x$. In this case, the matrix $H$ is also of size $n_x \times n_x$, and the matrix $G$ is of size $n_y \times n_x$. Moreover, since $\hat{w}_t^1$ must be conformable with $H$, we have that $\hat{w}_t^1$ is given by the first $n_x$ elements of $\hat{w}_t$, which exactly coincide with $\hat{x}_t$. In turn, this implies that $\hat{w}_t^2$ must equal $\hat{y}_t$. Defining $h_x \equiv H$ and $g_x \equiv G$, we can then write

$$\hat{x}_{t+1} = h_x \hat{x}_t$$
$$\hat{y}_t = g_x \hat{x}_t,$$

which is the solution we were looking for. Notice that because $\hat{x}_t$ is predetermined in period $t$, $\hat{y}_t$ and $\hat{x}_{t+1}$ are uniquely determined in period $t$. The evolution of the linearized system is then unique and given by

$$y_t - y^{ss} = g_x(x_t - x^{ss})$$
$$x_{t+1} - x^{ss} = h_x(x_t - x^{ss}) + \eta\epsilon_{t+1},$$

where we have set $\sigma$ at the desired value of 1. Summarizing, the condition for local uniqueness of the equilibrium is that the number of generalized eigenvalues of the matrices $A$ and $B$ with absolute value less than one be exactly equal to the number of states, $n_x$.
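The construction of $h_x$ and $g_x$ can be sketched in a few lines with SciPy. This mirrors what the authors' Matlab program gx_hx.m computes, but this Python port is an illustration, not the original program. The toy model, one state with $x_{t+1} = \rho x_t$ and one forward-looking control with $y_t = \kappa x_t + \theta E_t y_{t+1}$, is an assumed example whose exact solution is $h_x = \rho$ and $g_x = \kappa/(1 - \theta\rho)$:

```python
import numpy as np
from scipy.linalg import ordqz

def solve_gx_hx(A, B, nx):
    """Solve A E[w_{t+1}] = B w_t, w = [x; y], for x' = hx x and y = gx x.
    Requires exactly nx generalized eigenvalues b(i,i)/a(i,i) inside the unit circle."""
    # SciPy's pencil eigenvalue alpha/beta is the reciprocal of the text's b/a ratio,
    # so stable dynamics (|b/a| < 1) correspond to |alpha| > |beta|: sort those first.
    S, T, alpha, beta, Q, Z = ordqz(A, B, sort=lambda al, be: np.abs(al) > np.abs(be),
                                    output='complex')
    n_stable = int(np.sum(np.abs(alpha) > np.abs(beta)))
    if n_stable != nx:
        raise ValueError(f"{n_stable} stable eigenvalues, {nx} states: not unique")
    Z11, Z21 = Z[:nx, :nx], Z[nx:, :nx]
    gx = np.real(Z21 @ np.linalg.inv(Z11))
    hx = np.real(Z11 @ np.linalg.solve(S[:nx, :nx], T[:nx, :nx]) @ np.linalg.inv(Z11))
    return hx, gx

# Toy model: one stable root (rho) and one unstable root (1/theta)
rho, kappa, theta = 0.9, 1.0, 0.5
A = np.array([[1.0, 0.0], [0.0, -theta]])
B = np.array([[rho, 0.0], [kappa, -1.0]])
hx, gx = solve_gx_hx(A, B, nx=1)
# hx ≈ rho = 0.9 and gx ≈ kappa / (1 - theta*rho) ≈ 1.8182
```

Ordering the stable block first means the first $n_x$ columns of $Z$ span the stable invariant subspace, which is exactly the partition the text exploits.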
No Local Existence of Equilibrium

Now suppose that the number of generalized eigenvalues of the matrices $A$ and $B$ with absolute value less than one is smaller than the number of state variables, $n_x$. Specifically, suppose that $a_{11}$ and $b_{11}$ are of size $(n_x - m) \times (n_x - m)$, with $0 < m \le n_x$. In this case, the matrix $H$ is of order $(n_x - m) \times (n_x - m)$ and the matrix $G$ is of order $(n_y + m) \times (n_x - m)$. Moreover, the vectors $\hat{w}_t^1$ and $\hat{w}_t^2$ no longer coincide with $\hat{x}_t$ and $\hat{y}_t$, respectively. Instead, $\hat{w}_t^1$ and $\hat{w}_t^2$ take the form

$$\hat{w}_t^1 = \hat{x}_t^a, \qquad \hat{w}_t^2 = \begin{bmatrix} \hat{x}_t^b \\ \hat{y}_t \end{bmatrix},$$

where $\hat{x}_t^a$ and $\hat{x}_t^b$ are vectors of lengths $n_x - m$ and $m$, respectively, and satisfy

$$\hat{x}_t = \begin{bmatrix} \hat{x}_t^a \\ \hat{x}_t^b \end{bmatrix}.$$

The law of motion of $\hat{x}_t$ and $\hat{y}_t$ is then of the form

$$\hat{x}_{t+1}^a = H \hat{x}_t^a$$
$$\begin{bmatrix} \hat{x}_t^b \\ \hat{y}_t \end{bmatrix} = G \hat{x}_t^a.$$

This expression states that $\hat{x}_t^b$ is determined by $\hat{x}_t^a$. But this is impossible, because $\hat{x}_t^a$ and $\hat{x}_t^b$ are predetermined independently of each other. We therefore say that locally there exists no equilibrium. Summarizing, no local equilibrium exists if the number of generalized eigenvalues of the matrices $A$ and $B$ with absolute value less than one is smaller than the number of state variables, $n_x$.

Local Indeterminacy of Equilibrium

Finally, suppose that the number of generalized eigenvalues of the matrices $A$ and $B$ with absolute value less than one is larger than the number of state variables, $n_x$. Specifically, suppose that $a_{11}$ and $b_{11}$ are of size $(n_x + m) \times (n_x + m)$, with $0 < m \le n_y$. In this case, the matrix $H$ is of order $(n_x + m) \times (n_x + m)$ and the matrix $G$ is of order $(n_y - m) \times (n_x + m)$. The vectors $\hat{w}_t^1$ and $\hat{w}_t^2$ take the form

$$\hat{w}_t^1 = \begin{bmatrix} \hat{x}_t \\ \hat{y}_t^a \end{bmatrix}, \qquad \hat{w}_t^2 = \hat{y}_t^b,$$

where $\hat{y}_t^a$ and $\hat{y}_t^b$ are vectors of lengths $m$ and $n_y - m$, respectively, and satisfy

$$\hat{y}_t = \begin{bmatrix} \hat{y}_t^a \\ \hat{y}_t^b \end{bmatrix}.$$

The law of motion of $\hat{x}_t$ and $\hat{y}_t$ is then of the form

$$\begin{bmatrix} \hat{x}_{t+1} \\ \hat{y}_{t+1}^a \end{bmatrix} = H \begin{bmatrix} \hat{x}_t \\ \hat{y}_t^a \end{bmatrix}$$
$$\hat{y}_t^b = G \begin{bmatrix} \hat{x}_t \\ \hat{y}_t^a \end{bmatrix}.$$

These expressions state that one can freely pick $\hat{y}_t^a$ in period $t$.
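The three cases can be checked mechanically by counting generalized eigenvalues of the pencil. A sketch (the function name is ours; SciPy's `eig(B, A)` returns exactly the ratios $b(i,i)/a(i,i)$, with infinite eigenvalues reported as `inf` and thus correctly classified as unstable):

```python
import numpy as np
from scipy.linalg import eig

def classify(A, B, nx):
    """Count generalized eigenvalues mu = b(i,i)/a(i,i) with |mu| < 1 and
    compare the count with the number of states nx."""
    mu = eig(B, A, right=False)            # solves B v = mu A v
    n_stable = int(np.sum(np.abs(mu) < 1.0))
    if n_stable == nx:
        return "locally unique equilibrium"
    if n_stable < nx:
        return "no local equilibrium"
    return f"local indeterminacy of dimension {n_stable - nx}"

# Toy model with roots {0.9, 1/theta}: unique when theta < 1, indeterminate when theta > 1
for theta, expected in [(0.5, "locally unique equilibrium"),
                        (2.0, "local indeterminacy of dimension 1")]:
    A = np.array([[1.0, 0.0], [0.0, -theta]])
    B = np.array([[0.9, 0.0], [1.0, -1.0]])
    assert classify(A, B, nx=1) == expected
```

Raising $\theta$ above one pulls the forward-looking root $1/\theta$ inside the unit circle, so two stable roots chase a single state and the count flags one-dimensional indeterminacy.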
Since $\hat{y}_t^a$ is not predetermined, the equilibrium is indeterminate. In this case, we say that the indeterminacy is of dimension $m$. The evolution of the system can then be written as

$$\begin{bmatrix} x_{t+1} - x^{ss} \\ y_{t+1}^a - y^{a,ss} \end{bmatrix} = H \begin{bmatrix} x_t - x^{ss} \\ y_t^a - y^{a,ss} \end{bmatrix} + \begin{bmatrix} \eta & \emptyset \\ \nu & \nu_\mu \end{bmatrix} \begin{bmatrix} \epsilon_{t+1} \\ \mu_{t+1} \end{bmatrix}$$

and

$$y_t^b - y^{b,ss} = G \begin{bmatrix} x_t - x^{ss} \\ y_t^a - y^{a,ss} \end{bmatrix},$$

where the matrices $\nu$ and $\nu_\mu$ allow for nonfundamental uncertainty, and $\mu_t$ is an i.i.d. innovation with mean zero and variance-covariance matrix equal to the identity matrix. Summarizing, the equilibrium displays local indeterminacy of dimension $m$ if the number of generalized eigenvalues of the matrices $A$ and $B$ with absolute values less than one exceeds the number of state variables, $n_x$, by $0 < m \le n_y$.

Second Moments

Start with the equilibrium law of motion of the deviation of the state vector with respect to its steady-state value, which is given by

$$\hat{x}_{t+1} = h_x \hat{x}_t + \sigma\eta\epsilon_{t+1}.$$

Covariance Matrix of $x_t$

Let $\Sigma_x \equiv E\hat{x}_t\hat{x}_t'$ denote the unconditional variance/covariance matrix of $\hat{x}_t$ and let $\Sigma_\epsilon \equiv \sigma^2\eta\eta'$. Then we have that

$$\Sigma_x = h_x \Sigma_x h_x' + \Sigma_\epsilon.$$

We will describe two numerical methods to compute $\Sigma_x$.

Method 1

One way to obtain $\Sigma_x$ is to make use of the following useful result. Let $A$, $B$, and $C$ be matrices whose dimensions are such that the product $ABC$ exists. Then

$$\mathrm{vec}(ABC) = (C' \otimes A) \cdot \mathrm{vec}(B),$$

where the vec operator transforms a matrix into a vector by stacking its columns, and the symbol $\otimes$ denotes the Kronecker product. Thus, if the vec operator is applied to both sides of $\Sigma_x = h_x \Sigma_x h_x' + \Sigma_\epsilon$, the result is

$$\mathrm{vec}(\Sigma_x) = \mathrm{vec}(h_x \Sigma_x h_x') + \mathrm{vec}(\Sigma_\epsilon) = F\,\mathrm{vec}(\Sigma_x) + \mathrm{vec}(\Sigma_\epsilon),$$

where $F = h_x \otimes h_x$. Solving the above expression for $\mathrm{vec}(\Sigma_x)$, we obtain

$$\mathrm{vec}(\Sigma_x) = (I - F)^{-1}\mathrm{vec}(\Sigma_\epsilon),$$

provided that the inverse of $(I - F)$ exists. The eigenvalues of $F$ are products of the eigenvalues of the matrix $h_x$. Because all eigenvalues of the matrix $h_x$ have by construction modulus less than one, it follows that all eigenvalues of $F$ are less than one in modulus.
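Method 1 translates directly into code. A minimal sketch (the function name is ours; the example $h_x$ and $\Sigma_\epsilon$ are arbitrary):

```python
import numpy as np

def sigma_x_kron(hx, Sigma_eps):
    """Solve Sigma_x = hx Sigma_x hx' + Sigma_eps via
    vec(Sigma_x) = (I - hx (x) hx)^{-1} vec(Sigma_eps), with vec stacking columns."""
    nx = hx.shape[0]
    F = np.kron(hx, hx)                      # from vec(ABC) = (C' (x) A) vec(B)
    vec_Sx = np.linalg.solve(np.eye(nx**2) - F, Sigma_eps.flatten(order='F'))
    return vec_Sx.reshape((nx, nx), order='F')   # order='F' = column stacking

hx = np.array([[0.9, 0.1], [0.0, 0.5]])
Sigma_eps = 0.01 * np.eye(2)
Sx = sigma_x_kron(hx, Sigma_eps)
assert np.allclose(Sx, hx @ Sx @ hx.T + Sigma_eps)   # fixed point verified
```

Note the use of `order='F'` throughout: the vec operator of the text stacks columns, which is Fortran (column-major) ordering, not NumPy's default.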
This implies that $(I - F)$ is nonsingular and we can indeed solve for $\Sigma_x$. One possible drawback of this method is that one has to invert a matrix of dimension $n_x^2 \times n_x^2$.

Method 2

The following iterative procedure, called the doubling algorithm, may be faster than the one described above in cases in which the number of state variables, $n_x$, is large:

$$\Sigma_{x,t+1} = h_{x,t} \Sigma_{x,t} h_{x,t}' + \Sigma_{\epsilon,t}$$
$$h_{x,t+1} = h_{x,t} h_{x,t}$$
$$\Sigma_{\epsilon,t+1} = h_{x,t} \Sigma_{\epsilon,t} h_{x,t}' + \Sigma_{\epsilon,t},$$

with initial conditions $\Sigma_{x,0} = I$, $h_{x,0} = h_x$, and $\Sigma_{\epsilon,0} = \Sigma_\epsilon$.

Other second moments

Once the covariance matrix of the state vector $x_t$ has been computed, it is easy to find other second moments of interest. Consider, for instance, the covariance matrix $E\hat{x}_t\hat{x}_{t-j}'$ for $j > 0$. Let $\mu_t = \sigma\eta\epsilon_t$. Then

$$E\hat{x}_t\hat{x}_{t-j}' = E\Big[h_x^j \hat{x}_{t-j} + \sum_{k=0}^{j-1} h_x^k \mu_{t-k}\Big]\hat{x}_{t-j}' = h_x^j E\hat{x}_{t-j}\hat{x}_{t-j}' = h_x^j \Sigma_x.$$

Similarly, consider the variance-covariance matrix of linear combinations of the state vector $x_t$. For instance, the co-state, or control, vector $y_t$ is given by $y_t = y^{ss} + g_x(x_t - x^{ss})$, which we can write as $\hat{y}_t = g_x\hat{x}_t$. Then

$$E\hat{y}_t\hat{y}_t' = E g_x\hat{x}_t\hat{x}_t' g_x' = g_x[E\hat{x}_t\hat{x}_t']g_x' = g_x \Sigma_x g_x'$$

and, more generally,

$$E\hat{y}_t\hat{y}_{t-j}' = g_x[E\hat{x}_t\hat{x}_{t-j}']g_x' = g_x h_x^j \Sigma_x g_x', \qquad j \ge 0.$$

Impulse Response Functions

The impulse response of a variable, say $z_t$, in period $t+j$ to an impulse in period $t$ is defined as

$$IR(z_{t+j}) \equiv E_t z_{t+j} - E_{t-1} z_{t+j}.$$

The impulse response function traces the expected behavior of the system from period $t$ on, given information available in period $t$, relative to what was expected at time $t-1$. Using the law of motion $E_t\hat{x}_{t+1} = h_x\hat{x}_t$ for the state vector, letting $x$ denote the innovation to the state vector in period 0, that is, $x = \eta\sigma\epsilon_0$, and applying the law of iterated expectations, we get that the impulse response of the state vector in period $t$ is given by

$$IR(\hat{x}_t) \equiv E_0\hat{x}_t - E_{-1}\hat{x}_t = h_x^t[x_0 - E_{-1}x_0] = h_x^t[\eta\sigma\epsilon_0] = h_x^t x, \qquad t \ge 0.$$
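Method 2 and the impulse-response formulas can be sketched as follows. These are illustrative ports of what the authors' mom.m and ir.m compute, not the original programs; the example matrices are arbitrary:

```python
import numpy as np

def sigma_x_doubling(hx, Sigma_eps, tol=1e-14, max_iter=60):
    """Doubling algorithm: each pass squares the transition matrix, so after k
    iterations Sigma_eps,k accumulates the sum of h^j Sigma_eps h^j' over 2^k terms."""
    Sx = np.eye(hx.shape[0])          # Sigma_x,0 = I
    H, Se = hx.copy(), Sigma_eps.copy()
    for _ in range(max_iter):
        Sx_new = H @ Sx @ H.T + Se    # Sigma_x update
        Se = H @ Se @ H.T + Se        # Sigma_eps update (uses the pre-squared H)
        H = H @ H                     # square the transition
        if np.max(np.abs(Sx_new - Sx)) < tol:
            return Sx_new
        Sx = Sx_new
    return Sx

def impulse_response(hx, gx, x0, T):
    """IR(x_t) = hx^t x0 and IR(y_t) = gx hx^t x0, with x0 = eta*sigma*eps_0."""
    irx, iry, xt = [], [], x0
    for _ in range(T):
        irx.append(xt)
        iry.append(gx @ xt)
        xt = hx @ xt
    return np.array(irx), np.array(iry)

hx = np.array([[0.9, 0.1], [0.0, 0.5]])
Sigma_eps = 0.01 * np.eye(2)
Sx = sigma_x_doubling(hx, Sigma_eps)
assert np.allclose(Sx, hx @ Sx @ hx.T + Sigma_eps)   # matches the fixed point
x0 = np.array([1.0, 0.0])
irx, iry = impulse_response(hx, np.eye(2), x0, 10)
```

Because the transition matrix is squared at every pass, convergence is quadratic: with the largest eigenvalue at 0.9, roughly eight iterations already drive the update below machine precision, versus hundreds for the naive one-period recursion.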
The response of the vector of controls $\hat{y}_t$ is given by

$$IR(\hat{y}_t) = g_x h_x^t x.$$

Matlab Code For Linear Perturbation Methods

Stephanie Schmitt-Grohé and I have written a suite of programs that are posted on the course's webpage, www.columbia.edu/~mu2166/1st_order.htm. The program gx_hx.m computes the matrices $g_x$ and $h_x$ using the Schur decomposition method. The program mom.m computes second moments. The program ir.m computes impulse response functions.

Exercise 4.1 (Inducing Stationarity and Interest-Rate Shocks) One result derived in this chapter is that the business cycle implied by the SOE-RBC model is not affected by the method used to induce stationarity. This result, however, was derived in the context of a model driven by technology shocks. The present exercise aims to establish whether that result is robust to assuming that business cycles are driven by world-interest-rate shocks.

1. Consider the external debt-elastic interest-rate (EDEIR) model of section 4.1.1. Shut down the productivity shock by setting $\tilde{\eta} = 0$. Replace equation (4.13) with

$$r_t = r_t^* + p(\tilde{d}_t)$$

and

$$r_t^* = r^* + \xi(r_{t-1}^* - r^*) + \mu_t,$$

where $\mu_t \sim N(0, \sigma_\mu^2)$. Set $\xi = 0.8$ and $\sigma_\mu = 0.012$. Calibrate all other parameters of the model at the values given in table 4.1. Using this version of the EDEIR model, compute the statistics considered in table 4.4 and collect them in a table. Make a figure showing the impulse responses of output, consumption, hours, investment, the trade-balance-to-output ratio, and the current-account-to-output ratio implied by the EDEIR model driven by interest-rate shocks. Provide intuition for these results.

2. Now consider the internal discount factor (IDF) model of section 4.6.4. Again, set $\tilde{\eta} = 0$. Replace the assumption that $r_t = r^*$ with

$$r_t = r^* + \xi(r_{t-1} - r^*) + \mu_t.$$

Calibrate $\xi$, $\sigma_\mu$, and all common parameters as in the previous question. Calibrate $\psi_3$ as in section 4.6.4.
Use the resulting calibrated model to compute unconditional second moments and impulse responses. Provide intuition for your results. To facilitate comparison, place the information generated here in the same table and figure produced in the previous question.

3. Compare the predictions of the EDEIR and IDF models driven by interest-rate shocks. Does the stationarity-inducing mechanism make any difference for the business cycles implied by the SOE model driven by interest-rate shocks?

Exercise 4.2 (Business Cycles in a Small Open Economy with Complete Asset Markets and External Shocks) Consider the small open economy model with complete asset markets (CAM) studied in this chapter. Suppose that the productivity factor $A_t$ is constant and normalized to 1. Replace the equilibrium condition $U_c(c_t, h_t) = \psi_{cam}$ with the expression

$$U_c(c_t, h_t) = x_t,$$

where $x_t$ is an exogenous and stochastic random variable, which can be interpreted as an external shock. Assume that the external shock follows a process of the form

$$\hat{x}_t = \rho\hat{x}_{t-1} + \epsilon_t; \qquad \epsilon_t \sim N(0, \sigma_\epsilon^2),$$

where $\hat{x}_t \equiv \ln(x_t/x)$ and $x$ denotes the non-stochastic steady-state level of $x_t$. Let $\rho = 0.9$ and $\sigma_\epsilon = 0.02$. Calibrate all other parameters of the model following the calibration of the CAM model presented in the main body of this chapter. Finally, set the steady-state value of $x_t$ in such a way that the steady-state level of consumption equals the level of steady-state consumption in the version of the CAM model studied in the main text.

1. Produce a table displaying the unconditional standard deviation, serial correlation, and correlation with output of $\hat{y}_t$, $\hat{c}_t$, $\hat{i}_t$, $\hat{h}_t$, and $tb_t/y_t$.

2. Produce a figure with 5 plots depicting the impulse responses to an external shock (a unit innovation in $\epsilon_t$) of $\hat{y}_t$, $\hat{c}_t$, $\hat{i}_t$, $\hat{h}_t$, and $tb_t/y_t$.

3.
Now replace the values of $\rho$ and $\sigma_\epsilon$ given above with values such that the volatility and serial correlation of output implied by the model are the same as those reported for the Canadian economy in table 4.2. Answer questions 4.2.a and 4.2.b using these new parameter values.

4. Based on your answer to the previous question, evaluate the ability of external shocks (as defined here) to explain business cycles in Canada.

Exercise 4.3 (A Small Open Economy with an AR(2) TFP Process) In this question you are asked to show that the SOE-RBC model can predict consumption to be more volatile than output when the productivity shock follows a second-order autoregressive process displaying a hump-shaped impulse response. The theoretical model to be used is the external debt-elastic interest rate (EDEIR) model presented in section 4.1.1 of the current chapter. Replace the AR(1) process with the following AR(2) specification:

$$\ln A_{t+1} = 1.42 \ln A_t - 0.43 \ln A_{t-1} + \epsilon_{t+1},$$

where $\epsilon_t$ is an i.i.d. random variable with mean zero and standard deviation $\sigma_\epsilon > 0$. Scale $\sigma_\epsilon$ to ensure that the predicted standard deviation of output is 3.08, the value predicted by the AR(1) version of this model. Otherwise, use the same calibration and functional forms as presented in the chapter. Download the Matlab files for the EDEIR model from http://www.columbia.edu/~mu2166/closing.htm. Then modify them to accommodate the present specification.

1. Produce a table displaying the unconditional standard deviation, serial correlation, and correlation with output of output, consumption, investment, hours, the trade-balance-to-output ratio, and the current-account-to-output ratio.

2. Produce a 3 × 2 figure displaying the impulse responses of output, consumption, investment, hours, the trade-balance-to-output ratio, and TFP to a unit innovation in TFP.

3. Compare and contrast the predictions of the model under the AR(1) and the AR(2) TFP processes. Provide intuition.
Exercise 4.4 (A Small Open Economy With Durable Consumption) Consider an economy populated by a large number of identical households with preferences described by the lifetime utility function

$$E_0 \sum_{t=0}^{\infty} \beta^t \frac{\left[(c_t^n)^\gamma s_t^{1-\gamma} - \frac{h_t^\omega}{\omega}\right]^{1-\sigma} - 1}{1-\sigma},$$

where $c_t^n$ denotes consumption of nondurable goods, $h_t$ denotes hours worked, and $s_t$ denotes the stock of durable consumption goods. The parameter $\beta \in (0,1)$ denotes the subjective discount factor, $\gamma$, $(\omega - 1)$, and $(\sigma - 1) > 0$ are preference parameters, and $E_t$ denotes the expectations operator conditional on information available in period $t$. The law of motion of the stock of durables is assumed to be of the form

$$s_t = (1 - \delta)s_{t-1} + c_t^d,$$

where $c_t^d$ denotes durable consumption in period $t$, and $\delta \in (0,1)$ denotes the depreciation rate. The sequential budget constraint of the household is given by

$$d_t = (1 + r_{t-1})d_{t-1} + c_t^n + c_t^d + \frac{\phi_d}{2}(s_t - s_{t-1})^2 + i_t + \frac{\phi_k}{2}(k_{t+1} - k_t)^2 - A_t k_t^\alpha h_t^{1-\alpha},$$

where $d_t$ denotes debt acquired in period $t$ and maturing in period $t+1$, $r_t$ denotes the interest rate on assets held between periods $t$ and $t+1$, $i_t$ denotes gross investment, $k_t$ denotes the stock of physical capital, and $A_t$ represents a technology factor assumed to be exogenous and stochastic. The parameters $\phi_d, \phi_k > 0$ govern the degree of adjustment costs in the accumulation of durable consumption goods and physical capital, respectively. The parameter $\alpha$ resides in the interval $(0,1)$. The capital stock evolves over time according to the law of motion

$$k_{t+1} = (1 - \delta)k_t + i_t.$$

Note that we assume that physical capital, $k_t$, is predetermined in period $t$ and that investment, $i_t$, takes one period to become productive capital. By contrast, the stock of consumer durables, $s_t$, is non-predetermined in period $t$, and expenditures on consumer durables in period $t$, $c_t^d$, become productive immediately. Finally, assume that the interest rate is debt elastic,

$$r_t = r^* + \psi\left[e^{\tilde{d}_t - \bar{d}} - 1\right],$$
where $\tilde{d}_t$ denotes the cross-sectional average level of debt per capita, and $r^*$, $\bar{d}$, and $\psi$ are parameters. The productivity factor $A_t$ evolves according to

$$\ln A_{t+1} = \rho \ln A_t + \epsilon_{t+1},$$

where $\epsilon_t$ is a white noise with mean zero and variance $\sigma_\epsilon^2$, and $\rho \in (0,1)$ is a parameter. Assume that $\beta(1 + r^*) = 1$.

1. Derive the complete set of equilibrium conditions.

2. Derive the deterministic steady state. Specifically, find analytical expressions for the steady-state values of $c_t^n$, $h_t$, $s_t$, $k_{t+1}$, $d_t$, $r_t$, $i_t$, $tb_t$, and $ca_t$ in terms of the structural parameters of the model $\sigma$, $\beta$, $\delta$, $\omega$, $\alpha$, $\gamma$, $r^*$, and $\bar{d}$. Here, $tb_t$ and $ca_t$ denote, respectively, the trade balance and the current account.

3. Assume the following parameter values: $\sigma = 2$, $\delta = 0.1$, $r^* = 0.04$, $\alpha = 0.3$, and $\omega = 1.455$. Calibrate $\bar{d}$ and $\gamma$ so that in the steady state the debt-to-output ratio is 25 percent and the nondurable-consumption-to-output ratio is 68 percent. Report the implied numerical values of $\gamma$ and $\bar{d}$. Also report the numerical steady-state values of $r_t$, $d_t$, $h_t$, $k_t$, $c_t^n$, $s_t$, $c_t^d$, $i_t$, $tb_t$, $ca_t$, and $y_t \equiv A_t k_t^\alpha h_t^{1-\alpha}$.

4. Approximate the equilibrium dynamics using a first-order perturbation technique. In performing this approximation, express all variables in logs, except for the stock of debt, the interest rate, the trade balance, the current account, the trade-balance-to-output ratio, and the current-account-to-output ratio. You are asked to complete the calibration of the model by setting values for $\psi$, $\phi_d$, $\phi_k$, $\rho$, and $\sigma_\epsilon$ to target key empirical regularities of medium-size emerging countries documented in chapter 1 of Uribe's Open Economy Macroeconomics textbook.
Specifically, the targets are a standard deviation of output, $\sigma_y$, of 8.99 percent, a relative standard deviation of consumption, $\sigma_c/\sigma_y$, of 0.93, a relative standard deviation of gross investment, $\sigma_i/\sigma_y$, of 2.86, a serial correlation of output of 0.84, and a correlation between the trade-balance-to-output ratio and output of -0.24. In general, you will not be able to hit these targets exactly. Instead, you are required to define a distance between the targets and their corresponding theoretical counterparts and devise a numerical algorithm to minimize it. Define the distance as follows. Let

$$z(\psi, \phi_d, \phi_k, \rho, \sigma_\epsilon) \equiv x(\psi, \phi_d, \phi_k, \rho, \sigma_\epsilon) - x^*,$$

where $x^*$ is the 5×1 vector of empirical targets (the 5 numbers given above) and $x(\psi, \phi_d, \phi_k, \rho, \sigma_\epsilon)$ is the 5×1 vector of theoretical counterparts as a function of the parameters. Let

$$D(\psi, \phi_d, \phi_k, \rho, \sigma_\epsilon) \equiv \sqrt{z(\psi, \phi_d, \phi_k, \rho, \sigma_\epsilon)' z(\psi, \phi_d, \phi_k, \rho, \sigma_\epsilon)}$$

be the distance between the target and its theoretical counterpart. Report (a) the values of $\psi$, $\phi_d$, $\phi_k$, $\rho$, and $\sigma_\epsilon$ that you find and (b) complete the following table:

                                Data     Prediction of the Model
$\sigma_y$                      8.99
$\sigma_c/\sigma_y$             0.93
$\sigma_i/\sigma_y$             2.86
$corr(y_t, y_{t-1})$            0.84
$corr(tb_t/y_t, y_t)$          -0.24

5. Produce a table displaying the model predictions. The table should contain the unconditional standard deviation, correlation with output, and first-order serial correlation of output, consumption, investment, consumption of durables, consumption of nondurables, the trade-balance-to-output ratio, and the current-account-to-output ratio. For consumption, consumption of durables, consumption of nondurables, and investment, report the standard deviation relative to output. Discuss how well the model is able to explain actual observed second moments that were not targeted in the calibration. Use the second moments reported in table 1.2 of Uribe's textbook to compare the model's predictions to actual data.
Exercise 4.5 (Complete Markets and the Countercyclicality of the Trade Balance) Consider a small open economy with access to a complete array of internationally traded state-contingent claims. There is a single good, which is freely traded internationally. Let $r_{t,t+1}$ denote the period-$t$ price of a contingent claim that pays one good in a particular state of the world in period $t+1$, divided by the probability of occurrence of that state. The small open economy takes the process for $r_{t,t+1}$ as exogenously given.

Households have preferences over consumption, $c_t$, and hours, $h_t$, given by

$$E_0 \sum_{t=0}^{\infty} \beta^t \frac{\left(c_t - \frac{h_t^\omega}{\omega}\right)^{1-\sigma} - 1}{1-\sigma}; \qquad \sigma, \omega > 1,$$

where $E_0$ denotes the expectations operator conditional on information available in period 0. Households produce goods according to the production technology

$$A_t k_t^\alpha h_t^{1-\alpha},$$

where $A_t$ denotes an exogenous productivity factor, $k_t$ denotes the capital stock in period $t$, and the parameter $\alpha \in (0,1)$ denotes the elasticity of the production function with respect to capital. Domestic households are the owners of physical capital. The evolution of capital is given by

$$k_{t+1} = (1 - \delta)k_t + i_t,$$

where $i_t$ denotes investment in physical capital in period $t$ and $\delta \in (0,1)$ denotes the depreciation rate. In period 0, households are endowed with $k_0$ units of capital and hold contingent claims (acquired in period $-1$) that pay $d_0$ goods in period 0.

1. State the household's period-by-period budget constraint.

2. Specify a borrowing limit that prevents households from engaging in Ponzi schemes.

3. State the household's utility maximization problem. Indicate which variables/processes the household chooses and which variables/processes it takes as given.

4. Derive the complete set of competitive equilibrium conditions.

5. Let $\hat{x}_t \equiv \ln(x_t/x)$ denote the percent deviation of a variable from its non-stochastic steady-state value. Assume that in the non-stochastic steady state $r_{0,t} = \beta^t$ and $A_t = 1$.
Show that in response to a positive innovation in technology in period $t$, $\hat{A}_t > 0$, the trade balance will respond countercyclically only if the response of investment in period $t$ is positive. Then find the minimum percent increase in investment in period $t$ required for the trade balance to decline in period $t$ in response to the technology shock. To answer this question, use a first-order accurate approximation to the solution of the model. Show that your answer is independent of the expected future value of $A_{t+1}$.

6. Compare and contrast your findings in the previous item to the ones derived in chapter 3 for a model with capital accumulation, no depreciation, no capital adjustment costs, inelastic labor supply, and incomplete markets. In particular, discuss how in that model the sign of the impulse response of the trade balance to a positive innovation in the technology shock, $\hat{A}_t > 0$, depended on the persistence of the technology shock. Give an intuitive explanation for the similarities/differences that you identify.

7. Now find the size of $E_t\hat{A}_{t+1}$ relative to the size of $\hat{A}_t$ that guarantees that the trade balance deteriorates in period $t$ in response to a positive innovation in $A_t$ in period $t$. Your answer should be a condition of the form $\hat{A}_t < M E_t\hat{A}_{t+1}$, where $M$ is a function of the structural parameters of the model; in particular, it is a function of $\alpha$, $\beta$, $\delta$, and $\omega$. Find the value of $M$ for $\alpha = 1/3$, $\delta = 0.08$, $\beta^{-1} = 1.02$, and $\omega = 1.5$.

8. Discuss to what extent your findings support or contradict Principle I, derived in chapter 3, which states: "The more persistent are productivity shocks, the more likely is the trade balance to experience an initial deterioration in response to a positive technology shock."

9. How would your answers to questions 5 and 7 change if the period utility function were separable in consumption, $c_t$, and hours, $h_t$?
Exercise 4.6 (Calibrating the EDEIR Model Using Canadian Data Over the Period 1960-2011) In section 4.2.3, we calibrated the EDEIR model using Canadian data over the period 1946-1985. The following table displays observed standard deviations, serial correlations, and correlations with output for Canada over the period 1960-2011. The source is World Development Indicators. The data are annual and in per capita terms. The series $y$, $c$, and $i$ are in logs, and the series $tb/y$ is in levels. All series were quadratically detrended. Standard deviations are measured in percentage points.

Canadian Data 1960-2011

           $\sigma_{x_t}$   $\rho_{x_t, x_{t-1}}$   $\rho_{x_t, GDP_t}$
$y$
$c$
$i$
$tb/y$

1. Compare the empirical summary statistics reported in the above table with the ones shown in table 4.2. How has the business cycle of the Canadian economy changed over the past three decades?

2. Calibrate the EDEIR model as follows: Set $\beta = 1/1.04$, $\sigma = 2$, $\omega = 1.455$, $\alpha = 0.32$, $\delta = 0.10$, and $\bar{d} = 0.7442$. Set the remaining four parameters, $\rho$, $\eta$, $\phi$, and $\psi_1$, to match the observed standard deviations and serial correlations of output and the standard deviations of investment and the trade-balance-to-output ratio in Canada over the period 1960-2011. Approximate the equilibrium dynamics up to first order and use a distance-minimization procedure similar to the one used in exercise 4.4.

3. Produce the theoretical counterpart of the table shown above.

4. Comment on the ability of the model to explain observed business cycles in Canada over the period 1960-2011.

5. Compute the unconditional standard deviation of the productivity shock, $\ln A_t$, under the present calibration. Compare this number to the one corresponding to the 1946-1985 calibration presented in section 4.2.3. Now do the same with the standard deviation of output. Discuss and interpret your findings.

Exercise 4.7 (A Model of the U.S.-Canada Business Cycle) Consider a world with two economies, Canada and the United States, indexed by $i = Can, US$, respectively.
Suppose that both economies are populated by a large number of identical households with preferences given by

$$E_0\sum_{t=0}^{\infty}\beta^t\,\frac{\left[c^i_t - \frac{(h^i_t)^\omega}{\omega}\right]^{1-\sigma}-1}{1-\sigma},$$

where $c^i_t$ and $h^i_t$ denote, respectively, consumption and hours worked in country i in period t. In both countries, households operate a technology that produces output, denoted $y^i_t$, using labor and capital, denoted $k^i_t$. The production technology is Cobb-Douglas and given by

$$y^i_t = A^i_t (k^i_t)^{\alpha}(h^i_t)^{1-\alpha},$$

where $A^i_t$ denotes a productivity shock in country i, which evolves according to the following AR(1) process:

$$\ln A^i_{t+1} = \rho^i \ln A^i_t + \eta^i \epsilon^i_{t+1},$$

where $\epsilon^i_t$ is an i.i.d. innovation with mean zero and variance equal to one, and $\rho^i$ and $\eta^i$ are country-specific parameters. Both countries produce the same good. The evolution of capital obeys the following law of motion:

$$k^i_{t+1} = (1-\delta)k^i_t + \left[1+\phi^i\left(\frac{i^i_t}{\delta k^i_t}-1\right)\right]\delta k^i_t,$$

where $i^i_t$ denotes investment in country i, and $\phi^i$ is a country-specific parameter. Assume that asset markets are complete and that there exists free mobility of goods and financial assets between the United States and Canada, but that labor and installed capital are immobile across countries. Finally, assume that Canada has measure zero relative to the United States, so that the latter can be modeled as a closed economy.

Consider the business cycle regularities for Canada for the period 1960 to 2011 shown in exercise 4.6. The following table displays observed standard deviations, serial correlations, and correlations with output for the United States over the period 1960-2011. The source is World Development Indicators. The data are annual and in per capita terms. The series y, c, and i are in logs, and the series tb/y is in levels. All series were quadratically detrended. Standard deviations are measured in percentage points.

U.S. Data 1960-2011

          σ_{x_t}   ρ_{x_t,x_{t-1}}   ρ_{x_t,GDP_t}
    y       2.94        0.75             1.00
    c       3.00        0.82             0.90
    i      10.36        0.67             0.80

1.
Calibrate the model as follows: Assume that the deterministic steady-state levels of consumption per capita are the same in Canada and the United States. Set β = 1/1.04, σ = 2, ω = 1.455, α = 0.32, and δ = 0.10. Set the remaining six parameters, $\rho^i$, $\eta^i$, and $\phi^i$, for i = Can, US, to match the observed standard deviations and serial correlations of output and the standard deviations of investment in Canada and the United States. Use a distance minimization procedure as in exercise 4.4.
2. Approximate the equilibrium dynamics up to first order. Produce the theoretical counterparts of the two tables showing Canadian and U.S. business-cycle regularities.
3. Comment on the ability of the model to explain observed business cycles in Canada and the United States.
4. Plot the response of Canadian output, consumption, investment, hours, and the trade-balance-to-output ratio to a unit innovation in the Canadian productivity shock. On the same plot, show the response of the Canadian variables to a unit innovation to the U.S. productivity shock. Discuss the differences in the responses to a domestic and a foreign technology shock and provide intuition.
5. Compare, by means of a graph and a discussion, the predicted responses of Canada and the United States to a unit innovation in the U.S. productivity shock. The graph should include the same variables as the one for the previous item.
6. Compute the fraction of the volatilities of Canadian output and the trade-balance-to-output ratio explained by the U.S. productivity shock according to the present model. To this end, set $\eta^{Can}$ = 0 and compute the two standard deviations of interest. Then, take the ratio of these standard deviations to their respective counterparts when both shocks are active.
7. This question aims to quantify the importance of common shocks as drivers of the U.S.-Canada business cycle.
Replace the process for the Canadian productivity shock with the following one:

$$\ln A^{Can}_{t+1} = \rho^{Can}\ln A^{Can}_t + \eta^{Can}\epsilon^{Can}_{t+1} + \nu\,\epsilon^{US}_{t+1}.$$

All other aspects of the model are as before. Recalibrate the model using an augmented version of the strategy described above that includes an additional parameter, ν, and an additional target, the cross-country correlation of output, which in the sample used here is 0.64. Report the new set of calibrated parameters. Compute the variance of Canadian output. Now set ν = 0 keeping all other parameter values unchanged, and recalculate the variance of Canadian output. Explain.

Chapter 5

Emerging-Country Business Cycles Through the Lens of the SOE-RBC Model

Can the SOE-RBC model of chapter 4 explain business cycles in emerging or poor economies? In chapter 1, we documented that the most striking difference between business cycles in rich and emerging or poor countries is that the latter are twice as volatile as the former (see fact 1.8). In principle, the SOE-RBC model can account for this difference. All that is needed is to increase the volatility of the productivity shock. After all, the calibration strategy adopted in chapter 4, which is representative of much of the existing related literature, was to set the standard deviation of the exogenous productivity shock to match the observed variance of output. Since not only output, but all components of aggregate demand are more volatile in emerging and poor countries than in rich countries, increasing the volatility of the productivity shock will help in more than one dimension. However, not all volatilities increase by the same proportions as one moves from rich to emerging or poor economies.
In particular, a second important difference between the group of rich countries and the group of emerging and poor countries is that in the former consumption is less volatile than output, whereas in the latter consumption is at least as volatile as output (see fact 1.9). The SOE-RBC model of chapter 4 predicts that consumption is less volatile than output. This prediction is in line with the observed relative volatility of consumption in Canada, a rich economy to which the model was calibrated. A natural question is whether there exist calibrations of that SOE-RBC model that can account for the excess volatility of consumption observed in emerging countries. The answer is yes. We note, however, that simply jacking up the volatility of the productivity shock in the SOE-RBC model will not do the job. The reason is that, up to first order, in models with a single exogenous shock the ratio of any two standard deviations is independent of the standard deviation of the exogenous shock. The analysis of a small open economy with capital of chapter 3 provides the insight that the response of consumption relative to that of output to a productivity shock depends significantly on the persistence of the productivity shock. Building on this insight, it is natural to explore whether increasing the persistence of the productivity shock will allow the SOE-RBC model of chapter 4 to explain the observed excess volatility of consumption in emerging and poor countries. Figure 5.1 displays the ratio of the volatility of consumption to the volatility of output, $\sigma_c/\sigma_y$, as a function of the persistence of the stationary productivity shock, ρ, predicted by the SOE-RBC model of chapter 4, section 4.1.1. For values of ρ larger than 0.88, the volatility of consumption exceeds that of output. Figure 5.2 helps build the intuition behind this result.
It displays the impulse response of output to a one-percent increase in productivity for two values of ρ, 0.42 (the value used in chapter 4) and 0.99.

[Figure 5.1: The Relative Volatility of Consumption, $\sigma_c/\sigma_y$, as a Function of the Persistence, ρ, of the Stationary Technology Shock.]

For the lower value of ρ, the impulse response of output to a positive productivity shock is positive on impact and monotonically decreasing. This means that in the period the shock occurs, future output is expected to be lower than current output. Because consumption depends not on current output alone, but on the present discounted value of output, we have that the impact response of consumption is smaller than that of output. By contrast, when the technology shock is highly persistent, the response of output is hump-shaped (see the broken line in figure 5.2). In this case, on impact, output may be smaller than the average of current and future values of output. Consequently the impact response of consumption may exceed that of output, suggesting a higher volatility of consumption relative to output. In turn, the reason why the response of output is hump-shaped has to do with the behavior of investment. If the serial correlation of the technology shock is small, investment does not react much to innovations in productivity, since they are expected to die out quickly. Thus, the response of output mimics that of technology. However, if the productivity shock is highly persistent, then firms will have an incentive to increase the stock of physical capital, to take advantage of the fact that capital will be highly productive for a number of periods.

[Figure 5.2: Impulse Response of Output to a One-Percent Increase in Productivity for High (ρ = 0.99) and Low (ρ = 0.42) Persistence of the Stationary Productivity Shock; vertical axis in percentage points, horizontal axis in periods after the productivity shock.]
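The exogenous TFP paths underlying the two cases decay geometrically, $\hat{A}_t = \rho^t$ after a one-percent innovation; the hump in output for high ρ is an endogenous response, since TFP itself always decays monotonically. A minimal sketch with the two ρ values from the text:

```python
# Path of the TFP deviation after a one-percent innovation: A_hat_t = rho**t.
# TFP decays monotonically for any rho in (0, 1); only the endogenous
# response of output can be hump-shaped.
def tfp_path(rho, horizon):
    return [rho ** t for t in range(horizon)]

low = tfp_path(0.42, 8)   # persistence used in chapter 4
high = tfp_path(0.99, 8)  # high persistence

# After four periods the low-persistence shock has nearly vanished,
# while the high-persistence shock remains close to its impact value.
print(round(low[4], 4), round(high[4], 4))  # → 0.0311 0.9606
```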
As a result, output may continue to increase even as TFP falls monotonically back to its steady-state level. It follows that capital accumulation is crucial for the SOE-RBC model of chapter 4 to capture the excess volatility of consumption characteristic of emerging economies.

However, increasing the serial correlation of the stationary productivity shock may come at a cost. Recall, for instance, that in chapter 4, the strategy for calibrating the parameter ρ was to match the observed serial correlation of output. Thus, in principle, there will be a tradeoff between matching the excess volatility of consumption and matching the serial correlation of output. A possible solution to this tradeoff is to add an additional shock to the SOE-RBC model. In chapter 2, section ??, we suggested that a possible way to induce excess volatility of consumption is to introduce nonstationary shocks. Aguiar and Gopinath (2007) pursue this strategy. Specifically, they specify a small-open-economy version of the closed-economy RBC model with permanent and temporary TFP shocks due to King, Plosser, and Rebelo (1988a,b). Aguiar and Gopinath argue that the SOE-RBC model is an adequate framework for understanding aggregate fluctuations in emerging countries provided it is augmented to allow for both stationary and nonstationary productivity shocks. We present this model next.

An SOE-RBC Model With Stationary And Nonstationary Technology Shocks

Consider a small open economy populated by a large number of identical households seeking to maximize the utility function

$$E_0\sum_{t=0}^{\infty}\beta^t\,\frac{\left[C_t^{\gamma}(1-h_t)^{1-\gamma}\right]^{1-\sigma}-1}{1-\sigma},$$

subject to the sequential budget constraint

$$\frac{D_{t+1}}{1+r_t} = D_t + C_t + K_{t+1} - (1-\delta)K_t + \frac{\phi}{2}\left(\frac{K_{t+1}}{K_t}-g\right)^2 K_t - Y_t,$$

and to a no-Ponzi-game constraint of the form

$$\lim_{j\to\infty} E_t\,\frac{D_{t+j+1}}{\prod_{s=0}^{j}(1+r_{t+s})} \le 0,$$

where $Y_t = a_t K_t^{\alpha}(X_t h_t)^{1-\alpha}$ denotes output in period t.
In the above expressions, $C_t$ denotes consumption, $h_t$ denotes hours worked, $K_t$ denotes the stock of physical capital, $D_t$ denotes net external debt, and $r_t$ denotes the interest rate charged by the rest of the world. The parameters α, β, and δ lie in the interval (0, 1), and the parameters γ, σ, φ, and g are positive. As in chapter 4, section 4.1.1, the interest rate is assumed to be debt elastic:

$$r_t = r^* + \psi\left(e^{\tilde{D}_{t+1}/X_t - \bar{d}} - 1\right),$$

where $r^*$, ψ, and $\bar{d}$ are parameters, and $\tilde{D}_t$ denotes the cross-sectional average level of external debt per capita in period t. In equilibrium, because all households are identical, we have that $\tilde{D}_t = D_t$.

This economy is driven by a stationary productivity shock $a_t$ and a nonstationary productivity shock $X_t$. Note that, unlike the SOE-RBC model of chapter 4, the period utility function in the Aguiar and Gopinath model takes a Cobb-Douglas form. Their results, however, are robust to assuming GHH preferences. The optimality conditions associated with the household's problem are

$$\frac{1-\gamma}{\gamma}\,\frac{C_t}{1-h_t} = (1-\alpha)a_t X_t\left(\frac{K_t}{X_t h_t}\right)^{\alpha},$$

$$\gamma C_t^{\gamma(1-\sigma)-1}(1-h_t)^{(1-\gamma)(1-\sigma)} = \Lambda_t,$$

$$\Lambda_t = \beta(1+r_t)E_t\Lambda_{t+1},$$

and

$$\Lambda_t\left[1+\phi\left(\frac{K_{t+1}}{K_t}-g\right)\right] = \beta E_t\Lambda_{t+1}\left[1-\delta+\alpha a_{t+1}\left(\frac{K_{t+1}}{X_{t+1}h_{t+1}}\right)^{\alpha-1} + \phi\left(\frac{K_{t+2}}{K_{t+1}}-g\right)\frac{K_{t+2}}{K_{t+1}} - \frac{\phi}{2}\left(\frac{K_{t+2}}{K_{t+1}}-g\right)^2\right],$$

where $\Lambda_t$ denotes the Lagrange multiplier associated with the sequential budget constraint of the household.

The main difference between the present model and the one studied in chapter 4 is the introduction of the nonstationary productivity shock $X_t$. Assume that $X_t$ and $a_t$ are mutually independent random variables with laws of motion given by

$$\ln a_t = \rho_a \ln a_{t-1} + \sigma_a\,\epsilon^a_t$$

and

$$\ln(g_t/g) = \rho_g \ln(g_{t-1}/g) + \sigma_g\,\epsilon^g_t,$$

where $g_t \equiv X_t/X_{t-1}$ denotes the gross growth rate of $X_t$. The parameters $\rho_a$ and $\rho_g$ lie in the interval (−1, 1), and $\sigma_a$ and $\sigma_g$ are positive. The innovations $\epsilon^a_t$ and $\epsilon^g_t$ are assumed to be exogenous, mutually independent white noises distributed N(0, 1).
The parameter g > 0 denotes the gross growth rate of productivity along a nonstochastic equilibrium path. Note that the productivity factor $X_t$ is nonstationary in the sense that it displays both secular growth, at an average rate g, and a random walk component. This last characteristic is reflected in the fact that an innovation in $g_t$ has a permanent effect on the level of $X_t$.

Let

$$TFP_t \equiv \frac{Y_t}{K_t^{\alpha}h_t^{1-\alpha}}.$$

Under the present technology specification, we have that

$$TFP_t = a_t X_t^{1-\alpha}. \tag{5.1}$$

Clearly, because $a_t$ is a stationary random variable independent of $X_t$, total factor productivity inherits the nonstationarity of $X_t$. And this property will be transmitted in equilibrium to other variables of the model, including consumption, investment, the capital stock, the marginal utility of wealth, and the stock of external debt. Because none of these variables exhibits a deterministic steady state, it is impossible to linearize the model around such a point. Fortunately, however, there exists a simple stationary transformation of the variables of the model whose equilibrium behavior is described by a system of equations very similar to the one that governs the joint determination of the original variables. Specifically, let $c_t \equiv C_t/X_{t-1}$, $k_t \equiv K_t/X_{t-1}$, $d_t \equiv D_t/X_{t-1}$, and $\lambda_t \equiv X_{t-1}^{1+(\sigma-1)\gamma}\Lambda_t$. Then, one can write the system of equilibrium conditions in stationary form as

$$\frac{g_t d_{t+1}}{1+r_t} = d_t + c_t + g_t k_{t+1} - (1-\delta)k_t + \frac{\phi}{2}\left(\frac{g_t k_{t+1}}{k_t}-g\right)^2 k_t - a_t k_t^{\alpha}(g_t h_t)^{1-\alpha},$$

$$r_t = r^* + \psi\left(e^{d_{t+1}-\bar{d}} - 1\right),$$

$$\frac{1-\gamma}{\gamma}\,\frac{c_t}{1-h_t} = (1-\alpha)a_t g_t\left(\frac{k_t}{g_t h_t}\right)^{\alpha},$$

$$\gamma c_t^{\gamma(1-\sigma)-1}(1-h_t)^{(1-\gamma)(1-\sigma)} = \lambda_t,$$

$$\lambda_t = \beta(1+r_t)g_t^{\gamma(1-\sigma)-1}E_t\lambda_{t+1},$$

and

$$\lambda_t\left[1+\phi\left(\frac{g_t k_{t+1}}{k_t}-g\right)\right] = \beta g_t^{\gamma(1-\sigma)-1}E_t\lambda_{t+1}\left[1-\delta+\alpha a_{t+1}\left(\frac{k_{t+1}}{g_{t+1}h_{t+1}}\right)^{\alpha-1} + \phi\left(\frac{g_{t+1}k_{t+2}}{k_{t+1}}-g\right)\frac{g_{t+1}k_{t+2}}{k_{t+1}} - \frac{\phi}{2}\left(\frac{g_{t+1}k_{t+2}}{k_{t+1}}-g\right)^2\right].$$

This is a system of six stochastic difference equations in the endogenous variables $d_{t+1}$, $c_t$, $k_{t+1}$, $h_t$, $\lambda_t$, and $r_t$. This system, together with the laws of motion of $g_t$ and $a_t$, possesses two properties.
First, it has a deterministic steady state that is independent of initial conditions. Second, the rational expectations dynamics of all variables are, up to first order, mean reverting, or stationary. Recalling that the variable transformations involve scaling by the nonstationary productivity factor, it follows that in this model consumption, output, the capital stock, investment, and net external debt all share the same stochastic trend, $X_t$. For instance, consumption satisfies $C_t = c_t X_{t-1}$. Since $c_t$ is stationary, it follows directly that $C_t$ carries the same random walk component as $X_t$. The existence of a common stochastic trend implies that in equilibrium, the shares of consumption, investment, capital, and external debt in GDP are all stationary variables. This property of the model is known as the balanced-growth property.

Aguiar and Gopinath (2007) econometrically estimate the parameters defining the laws of motion of the two productivity shocks as well as the parameter governing the strength of capital adjustment costs. All other parameters of the model are calibrated. The econometric estimation consists in picking values for $\sigma_a$, $\sigma_g$, $\rho_a$, $\rho_g$, g, and φ to match the empirical second moments displayed in table 5.3 and the observed average growth rate of GDP. The data are from Mexico and the sample is 1980:Q1 to 2003:Q1. Because the number of estimated parameters (six) is smaller than the number of moments matched (eleven), the estimation procedure uses a weighting matrix following the GMM technique.

Table 5.1: SOE-RBC Model With Nonstationary Shocks: Calibrated Parameters

    β = 0.98   γ = 0.36   ψ = 0.001   α = 0.32   σ = 2   δ = 0.05   d = 0.1

Note. The time unit is one quarter. Source: Aguiar and Gopinath (2007).

Table 5.2: SOE-RBC Model With Nonstationary Shocks: Estimated Parameters

    σ_g = 0.0213   σ_a = 0.0053   ρ_g = 0.00   ρ_a = 0.95   g = 1.0066   φ = 1.37

Note. The time unit is one quarter. Source: Aguiar and Gopinath (2007).
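The logic of matching moments by minimizing a weighted distance can be illustrated with a deliberately small example. This is only a sketch of the idea, not Aguiar and Gopinath's actual procedure: here the two parameters of a hypothetical AR(1) process $x_t = \rho x_{t-1} + \sigma\epsilon_t$ are chosen to match two made-up data moments, the unconditional standard deviation and the first autocorrelation.

```python
import itertools

# Hypothetical data moments to match (illustrative numbers, not from the book).
data_sd, data_rho = 2.0, 0.8
weights = (1.0, 1.0)  # identity weighting matrix for simplicity

def model_moments(rho, sigma):
    # Unconditional moments of x_t = rho*x_{t-1} + sigma*eps_t:
    # std. dev. = sigma/sqrt(1 - rho^2), first autocorrelation = rho.
    sd = sigma / (1.0 - rho ** 2) ** 0.5
    return sd, rho

def distance(rho, sigma):
    sd, ac = model_moments(rho, sigma)
    return (weights[0] * (sd - data_sd) ** 2
            + weights[1] * (ac - data_rho) ** 2)

# Crude grid search over the parameter space stands in for a proper minimizer.
grid = [(r / 100, s / 100)
        for r, s in itertools.product(range(1, 100), range(1, 300))]
rho_hat, sigma_hat = min(grid, key=lambda p: distance(*p))
print(rho_hat, sigma_hat)  # → 0.8 1.2
```

In the overidentified case of the text (eleven moments, six parameters), the weighting matrix determines how the unavoidable residual moment errors are traded off against each other.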
To perform this estimation, one must assign values to all non-estimated parameters. Table 5.1 displays these values. And table 5.2 displays the values taken by the estimated parameters.

Table 5.3: Model Fit

    Statistic   σ(y)   σ(Δy)  σ(c)/σ(y)  σ(i)/σ(y)  σ(nx)/σ(y)  ρ(y)   ρ(Δy)  ρ(y,nx)  ρ(y,c)  ρ(y,i)
    Data        2.40   1.52     1.26       4.15       0.80      0.83   0.27   -0.75     0.82    0.91
    Model       2.13   1.42     1.10       3.83       0.95      0.82   0.18   -0.50     0.91    0.80

Note. Variables in levels were HP filtered using a parameter of 1600. Growth rates are unfiltered. Source: Aguiar and Gopinath (2007).

Table 5.3 displays ten empirical and theoretical second moments for Mexico. The estimated model does a good job at matching all of the moments shown in the table. Of particular relevance is the ability of the model to match the fact that in Mexico, as in most other emerging countries, consumption is more volatile than output. As explained in detail in chapter 2, section ??, for the case of nonstationary endowment shocks, the presence of nonstationary productivity shocks plays a key role in making this prediction possible. It is therefore of interest to calculate the importance of the nonstationary component of productivity in driving movements in total factor productivity implied by the econometrically estimated parameters. To this end, consider the growth rate of total factor productivity, which, from equation (5.1), can be written as

$$\Delta\ln TFP_t = \Delta\ln a_t + (1-\alpha)\ln g_t.$$

Because the two terms on the right-hand side of this expression are mutually independent, we can ask what fraction of the variance of $\Delta\ln TFP_t$ is explained by $g_t$. It is straightforward to deduce that the variance of $\Delta\ln a_t$ is given by $2\sigma_a^2/(1+\rho_a)$. At the same time, the variance of $\ln g_t$ is given by $\sigma_g^2/(1-\rho_g^2)$.
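Plugging the estimates of table 5.2 into these two variance expressions gives the share of TFP-growth variance attributed to the nonstationary component; the short script below evaluates the decomposition numerically:

```python
# Parameter estimates from table 5.2 (Aguiar and Gopinath, 2007).
alpha = 0.32
sigma_g, rho_g = 0.0213, 0.00
sigma_a, rho_a = 0.0053, 0.95

# var(Delta ln a_t) = 2*sigma_a^2/(1 + rho_a) for an AR(1) with
# coefficient rho_a; var(ln g_t) = sigma_g^2/(1 - rho_g^2).
var_perm = (1 - alpha) ** 2 * sigma_g ** 2 / (1 - rho_g ** 2)
var_temp = 2 * sigma_a ** 2 / (1 + rho_a)

# Share of the variance of TFP growth explained by the trend shock.
share = var_perm / (var_perm + var_temp)
print(round(share, 4))  # → 0.8793
```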
Therefore, we have that

$$\frac{\mathrm{var}\left((1-\alpha)\ln g_t\right)}{\mathrm{var}\left(\Delta\ln TFP_t\right)} = \frac{(1-\alpha)^2\sigma_g^2/(1-\rho_g^2)}{2\sigma_a^2/(1+\rho_a) + (1-\alpha)^2\sigma_g^2/(1-\rho_g^2)} = \frac{(1-0.32)^2\times 0.0213^2/(1-0.00^2)}{2\times 0.0053^2/(1+0.95) + (1-0.32)^2\times 0.0213^2/(1-0.00^2)} = 0.8793.$$

That is, the estimated parameters imply that the nonstationary component of productivity explains 88 percent of the variance of the growth rate of total factor productivity. This is an indication that this model can fit the Mexican data best when nonstationary technology shocks play a significant role in moving total factor productivity at business-cycle frequency. Aguiar and Gopinath (2007) also estimate the present model using quarterly Canadian data from 1981:Q1 to 2003:Q2 and find that the nonstationary component explains only 40 percent of movements in total factor productivity. Aguiar and Gopinath conclude that their estimates of the model on Mexican and Canadian data, taken together, suggest that nonstationary productivity shocks are more relevant in emerging economies than in developed ones.

How should we interpret these results? There are three aspects of the econometric estimation that deserve special comments. One is that the data sample, 1980:Q1 to 2003:Q1, is relatively short. Recall that the main purpose of the estimation procedure is to identify the random-walk, or unit-root, component in total factor productivity. It is well known that the only reliable way to disentangle the stationary and nonstationary components of a time series is to use long samples. Short samples can lead to spurious results. In fact, Aguiar and Gopinath (2007) analyze direct evidence on Solow residuals, which in the present model coincide with total factor productivity, for Mexico and Canada over the period 1980-2000, and conclude that it is not possible in that short sample to determine reliably whether the nonstationary component is more important in Mexico or in Canada (see their figure 2).

[Figure 5.3: Business Cycles in Latin America: 1900-2005. Percent deviations of real GDP per capita from a cubic trend for Argentina, Brazil, Chile, Colombia, Mexico, Peru, Uruguay, and Venezuela. Source: database compiled by R. Barro and J. Ursúa, available online at http://www.economics.harvard.edu/faculty/barro/]

Figure 5.3 shows why using short samples for estimation may be problematic. It displays the cyclical component of the log of real GDP per capita for seven Latin American countries over the period 1900-2005. In the figure, the cycle is computed as percent deviations of GDP from a cubic trend. The period 1980-2005 contains only between one and a half and two cycles for most of the Latin American economies included in the figure. Doing econometrics with so few cycles is problematic for uncovering virtually any parameter value of a business-cycle model, but particularly for telling apart highly persistent but stationary productivity shocks from nonstationary productivity shocks.

The second difficulty with the econometric strategy pursued in Aguiar and Gopinath is that it allows room only for productivity shocks. This would not be a big problem if no other candidate shocks could be identified as potentially important in driving business cycles in emerging economies. But this is not the case. For example, a growing number of studies show that world interest-rate shocks and country-spread shocks play an important role in driving business cycles in emerging countries (see, for example, Neumeyer and Perri, 2005; and Uribe and Yue, 2006). Omitting these and other relevant shocks in the econometric estimation necessarily induces a bias in favor of the shocks that are included.

Finally, the present model limits attention to a frictionless neoclassical framework. This might also be an oversimplification.
A large body of work points at financial frictions, including default risk and balance-sheet effects, as important propagation mechanisms of business cycles in emerging economies (see part III of this book). Omitting these sources of friction might cause a spurious increase in the estimated variance and persistence of the exogenous driving processes.

García-Cicco, Pancrazi, and Uribe (2010) address these concerns by estimating a SOE-RBC model in which stationary and nonstationary productivity shocks compete with interest-rate and country-spread shocks in explaining business cycles. To obtain a reliable measure of the nonstationary component of productivity, they estimate the model using long data samples spanning over 100 years. And to capture the presence of financial frictions, these authors estimate the parameter governing the debt elasticity of the country interest rate. They find that once financial shocks and frictions are taken explicitly into account, the data assigns a small role to permanent technology shocks as drivers of the business cycle. In the next section, we take a closer look at the García-Cicco, Pancrazi, and Uribe (2010) model.

Letting Technology Shocks Compete With Other Shocks And Frictions

In the model of the previous section, technology shocks monopolize the explanation of the business cycle. In this section, we make stationary and nonstationary technology shocks compete with interest-rate shocks and other shocks in explaining business cycles in emerging countries. In addition to stationary and nonstationary technology shocks and interest-rate shocks, the competition includes two domestic demand disturbances stemming from shifts in the marginal utility of consumption (preference shocks) and from random changes in aggregate spending (public spending shocks).
The nonstructural shocks included in the competition emerge from assuming that the time series used for the econometric estimation of the model may be measured with error. The presentation draws from García-Cicco, Pancrazi, and Uribe (2010), hereafter GPU.

Households

Consider an economy populated by a large number of identical households with preferences described by the utility function

$$E_0\sum_{t=0}^{\infty}\nu_t\beta^t\,\frac{\left[C_t - \omega^{-1}X_{t-1}h_t^{\omega}\right]^{1-\gamma}-1}{1-\gamma}, \tag{5.3}$$

where $C_t$ denotes consumption, $h_t$ denotes hours supplied to the labor market, $\nu_t$ denotes a preference shock, and $X_t$ denotes a stochastic trend. This preference specification follows Schmitt-Grohé (1998). One might wonder why a stochastic trend appears in the utility function. The technical reason is that, as we will show shortly, this formulation makes it possible for a model with GHH preferences to exhibit an equilibrium in which output, consumption, investment, and the capital stock all grow on average at the same rate, whereas hours do not grow in the long run. From an economic point of view, $X_t$ may reflect the impact of technological progress on household production. As in the previous section, we use upper case letters to denote variables that contain a trend in equilibrium and lower case letters to denote variables that do not contain a trend in equilibrium.

The laws of motion of $\nu_t$ and $X_t$ are assumed to be

$$\ln\nu_{t+1} = \rho_\nu \ln\nu_t + \epsilon^\nu_{t+1}$$

and

$$\ln(g_{t+1}/g) = \rho_g \ln(g_t/g) + \epsilon^g_{t+1},$$

where $g_t \equiv X_t/X_{t-1}$ denotes the gross growth rate of $X_t$. The innovations $\epsilon^\nu_t$ and $\epsilon^g_t$ are assumed to be mutually independent i.i.d. processes with mean zero and variances $\sigma_\nu^2$ and $\sigma_g^2$, respectively. The parameter g measures the deterministic gross growth rate of the stochastic trend $X_t$. The parameters $\rho_\nu, \rho_g \in (-1, 1)$ govern the persistence of $\nu_t$ and $g_t$, respectively.
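A short simulation makes the trend process concrete: because $g_t$ is serially correlated but stationary, each innovation to $g_t$ moves the level of $\ln X_t$ permanently, while the trend drifts at rate g on average. The sketch below uses illustrative parameter values, not the estimated ones:

```python
import math
import random

random.seed(0)

g_bar = 1.01            # deterministic gross growth rate (illustrative)
rho_g, sigma_g = 0.3, 0.01
T = 200

ln_g_dev = 0.0          # ln(g_t / g_bar), the AR(1) deviation
ln_X = 0.0              # log of the stochastic trend, X_0 normalized to 1
path = []
for _ in range(T):
    ln_g_dev = rho_g * ln_g_dev + random.gauss(0.0, sigma_g)
    ln_X += math.log(g_bar) + ln_g_dev   # since g_t = X_t / X_{t-1}
    path.append(ln_X)

# On average ln X_t grows by ln(g_bar) per period; the accumulated
# innovations are a permanent (random-walk) component of the level.
avg_growth = path[-1] / T
print(round(avg_growth, 3))
```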
Households face the period-by-period budget constraint

$$\frac{D_{t+1}}{1+r_t} = D_t - W_t h_t - u_t K_t + C_t + S_t + I_t + \frac{\phi}{2}\left(\frac{K_{t+1}}{K_t}-g\right)^2 K_t, \tag{5.4}$$

where $D_{t+1}$ denotes the stock of one-period debt acquired in period t and due in period t+1, $r_t$ denotes the country-specific interest rate on debt held between periods t and t+1, $I_t$ denotes gross investment, $K_t$ denotes the stock of physical capital owned by the household, $u_t$ denotes the rental rate of capital, and $W_t$ denotes the real wage rate. The variable $S_t$ is meant to capture aggregate shifts in domestic absorption, possibly stemming from unproductive government consumption, and is assumed to be exogenous and stochastic. We assume that the detrended component of $S_t$, denoted $s_t \equiv S_t/X_{t-1}$, obeys the AR(1) process

$$\ln(s_{t+1}/s) = \rho_s \ln(s_t/s) + \epsilon^s_{t+1},$$

where s is a parameter. The innovation $\epsilon^s_t$ is assumed to be a white noise with mean zero and variance $\sigma_s^2$, and the parameter $\rho_s \in (-1, 1)$ governs the persistence of $s_t$. The parameter φ introduces quadratic capital adjustment costs. The capital stock evolves according to the following law of motion:

$$K_{t+1} = (1-\delta)K_t + I_t, \tag{5.6}$$

where δ ∈ [0, 1) denotes the depreciation rate of capital. Consumers are assumed to be subject to a no-Ponzi-scheme constraint of the form

$$\lim_{j\to\infty} E_t\,\frac{D_{t+j+1}}{\prod_{s=0}^{j}(1+r_{t+s})} \le 0.$$

The optimization problem of the household consists in choosing processes $\{C_t, h_t, D_{t+1}, K_{t+1}, I_t\}$ to maximize the utility function (5.3) subject to (5.4), (5.6), and the no-Ponzi-game constraint, taking as given the processes $\{W_t, u_t, X_t, r_t, \nu_t, s_t\}$ and the initial conditions $K_0$ and $D_0$.
Letting $\lambda_t X_{t-1}^{-\gamma}$ denote the Lagrange multiplier associated with the sequential budget constraint, the optimality conditions associated with this problem are (5.4), (5.6), the no-Ponzi-game constraint holding with equality, and

$$\nu_t\left(C_t/X_{t-1} - \omega^{-1}h_t^{\omega}\right)^{-\gamma} = \lambda_t,$$

$$\nu_t\left(C_t/X_{t-1} - \omega^{-1}h_t^{\omega}\right)^{-\gamma}h_t^{\omega-1} = \lambda_t\,\frac{W_t}{X_{t-1}},$$

$$\lambda_t = \beta\,\frac{1+r_t}{g_t^{\gamma}}\,E_t\lambda_{t+1},$$

and

$$\lambda_t\left[1+\phi\left(\frac{K_{t+1}}{K_t}-g\right)\right] = \frac{\beta}{g_t^{\gamma}}\,E_t\lambda_{t+1}\left[1-\delta+u_{t+1} + \phi\left(\frac{K_{t+2}}{K_{t+1}}-g\right)\frac{K_{t+2}}{K_{t+1}} - \frac{\phi}{2}\left(\frac{K_{t+2}}{K_{t+1}}-g\right)^2\right].$$

Firms

Firms are assumed to operate in perfectly competitive product and factor markets. They produce a single good with a Cobb-Douglas production function that uses capital and labor as inputs and is buffeted by stationary and nonstationary productivity shocks. Formally,

$$Y_t = a_t K_t^{\alpha}(X_t h_t)^{1-\alpha}, \tag{5.7}$$

where $Y_t$ denotes output in period t, α ∈ (0, 1) is a parameter, and $a_t$ represents a stationary productivity shock following an AR(1) process of the form

$$\ln a_{t+1} = \rho_a \ln a_t + \epsilon^a_{t+1}.$$

The innovation $\epsilon^a_t$ is assumed to be a white noise with mean zero and variance $\sigma_a^2$, and the parameter $\rho_a \in [0, 1)$ governs the persistence of $a_t$.

Following Neumeyer and Perri (2005), Uribe and Yue (2006), and Chang and Fernández (2013), we depart slightly from the model studied by García-Cicco, Pancrazi, and Uribe (2010) by assuming that firms face a working capital constraint. Specifically, we assume that for each unit of wage payments firms must hold η units of a non-interest-bearing asset. To determine the marginal cost of holding the non-interest-bearing asset, compare the rates of return associated with holding interest-bearing and non-interest-bearing assets. Investing one unit of consumption in the non-interest-bearing asset in period t pays 1 unit of consumption in period t+1. Investing $1/(1+r_t)$ units of consumption in the interest-bearing asset in period t also pays 1 unit of consumption in period t+1.
It follows that the opportunity cost in period t of holding the non-interest-bearing asset is given by the difference between 1 and $1/(1+r_t)$, or $r_t/(1+r_t)$. Therefore, in the presence of a working-capital constraint, the total labor cost includes the standard wage component, given by $W_t h_t$, and a financial component, given by $\eta W_t h_t r_t/(1+r_t)$. Note that an increase in the interest rate acts like an increase in the real wage, thereby inducing firms to reduce employment. This effect is of interest because it introduces a supply-side channel through which changes in the interest rate can affect the economy. In this way, interest-rate shocks are allowed to directly compete with technology shocks in determining movements in employment and output.

Firms choose factor inputs to maximize profits. Formally, the firm's optimization problem is given by

$$\max_{\{Y_t, h_t, K_t\}}\; Y_t - u_t K_t - W_t h_t\left(1+\frac{\eta r_t}{1+r_t}\right),$$

subject to the technological constraint (5.7). The first-order conditions associated with this problem are

$$(1-\alpha)a_t\left(\frac{K_t}{X_t h_t}\right)^{\alpha} = \frac{W_t}{X_t}\left(1+\frac{\eta r_t}{1+r_t}\right)$$

and

$$\alpha a_t\left(\frac{X_t h_t}{K_t}\right)^{1-\alpha} = u_t.$$

The first of these optimality conditions equates the demand for labor to the effective marginal cost of labor, which includes the financial cost stemming from the working-capital constraint. All things equal, an increase in the interest rate causes firms to reduce their demand for labor services.

Interest-Rate Shocks

We augment the interest-rate specification of the previous section by introducing interest-rate shocks. Specifically, the domestic interest rate is assumed to be given by

$$r_t = r^* + \psi\left(e^{\frac{\tilde{D}_{t+1}/X_t - \bar{d}}{\bar{y}}} - 1\right) + e^{\mu_t - 1} - 1,$$

where $\tilde{D}_{t+1}$ denotes the cross sectional average of external debt per capita acquired in period t, and $\bar{d}$ and $\bar{y}$ are parameters. The variable $\mu_t$ is assumed to be exogenous and stochastic. It is meant to reflect exogenous, random variations in the world interest rate and the country spread.
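This interest-rate rule can be evaluated directly. The sketch below (with illustrative parameter values and a hypothetical function name) shows that the country rate equals r* when detrended debt is at its parameter value d̄ and the spread shock μ_t is at its unconditional mean of 1, and rises when either moves up:

```python
import math

def country_rate(d_next, mu, r_star=0.10, psi=0.001, d_bar=0.037, y_bar=1.0):
    """Debt-elastic country interest rate with a spread shock.

    d_next is detrended debt chosen in period t; mu is the spread shock,
    whose unconditional mean is 1.  All parameter values are illustrative.
    """
    premium = psi * (math.exp((d_next - d_bar) / y_bar) - 1.0)
    spread = math.exp(mu - 1.0) - 1.0
    return r_star + premium + spread

base = country_rate(0.037, 1.0)      # debt at d_bar, spread shock at mean
indebted = country_rate(0.5, 1.0)    # higher debt raises the rate
shocked = country_rate(0.037, 1.05)  # a positive spread shock raises it too
print(base, indebted > base, shocked > base)  # → 0.1 True True
```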
The law of motion of \(\mu_t\) is given by

\[ \ln\mu_{t+1}=\rho_\mu\ln\mu_t+\epsilon^\mu_{t+1}. \]

The innovation \(\epsilon^\mu_t\) is assumed to be a white noise with mean zero and variance \(\sigma_\mu^2\), and the parameter \(\rho_\mu\in[0,1)\) governs the persistence of \(\mu_t\).

Equilibrium

Because all consumers are assumed to be identical, the cross sectional average of per capita debt must equal the individual level of debt, that is,

\[ \tilde D_t=D_t \]

for all t. As in section 5.1, we perform a stationarity inducing transformation by scaling trending variables by \(X_{t-1}\). Specifically, define \(y_t=Y_t/X_{t-1}\), \(c_t=C_t/X_{t-1}\), \(s_t=S_t/X_{t-1}\), \(d_t=D_t/X_{t-1}\), and \(k_t=K_t/X_{t-1}\). Then, a stationary competitive equilibrium is given by a set of processes \(\{c_t,h_t,\lambda_t,k_{t+1},d_{t+1},i_t,r_t,y_t\}\) satisfying

\[ \nu_t\left(c_t-\omega^{-1}h_t^{\omega}\right)^{-\gamma}=\lambda_t, \]

\[ h_t^{\omega-1}=(1-\alpha)a_tg_t^{1-\alpha}\left(\frac{k_t}{h_t}\right)^{\alpha}\left(1+\frac{\eta r_t}{1+r_t}\right)^{-1}, \]

\[ \lambda_t=\frac{\beta(1+r_t)}{g_t^{\gamma}}E_t\lambda_{t+1}, \]

\[ \lambda_t\left[1+\phi\left(\frac{k_{t+1}}{k_t}g_t-g\right)\right]=\frac{\beta}{g_t^{\gamma}}E_t\lambda_{t+1}\left[1-\delta+\alpha a_{t+1}\left(\frac{g_{t+1}h_{t+1}}{k_{t+1}}\right)^{1-\alpha}+\phi\frac{k_{t+2}}{k_{t+1}}g_{t+1}\left(\frac{k_{t+2}}{k_{t+1}}g_{t+1}-g\right)-\frac{\phi}{2}\left(\frac{k_{t+2}}{k_{t+1}}g_{t+1}-g\right)^2\right], \]

\[ \frac{d_{t+1}}{1+r_t}g_t=d_t-y_t+c_t+s_t+i_t+\frac{\phi}{2}\left(\frac{k_{t+1}}{k_t}g_t-g\right)^2k_t, \]

\[ r_t=r^*+\psi\left(e^{(d_{t+1}-\bar d)/\bar y}-1\right)+e^{\mu_t-1}-1, \]

\[ k_{t+1}g_t=(1-\delta)k_t+i_t, \]

and

\[ y_t=a_tk_t^{\alpha}(g_th_t)^{1-\alpha}, \]

given exogenous processes \(a_t\), \(g_t\), \(\nu_t\), \(\mu_t\), and \(s_t\) and initial conditions \(k_0\) and \(d_0\).

Bayesian Estimation On A Century of Data

The econometric estimation uses annual per capita data from Argentina on output growth, consumption growth, investment growth, and the trade-balance-to-output ratio for the period 1900 to 2005. All four of these observable variables are assumed to be measured with error. Specifically, let the theoretical counterparts of the four observables be the vector

\[ O_t^*=\begin{bmatrix}\Delta\ln Y_t\\ \Delta\ln C_t\\ \Delta\ln I_t\\ TB_t/Y_t\end{bmatrix}, \]

where

\[ TB_t\equiv Y_t-C_t-I_t-S_t-\frac{\phi}{2}\left(\frac{K_{t+1}}{K_t}-g\right)^2K_t \]

denotes the trade balance.¹ Then, the vector of observables, denoted \(O_t\), is given by

\[ O_t=O_t^*+\begin{bmatrix}\sigma^{me}_{g^Y}\epsilon^{me,g^Y}_t\\ \sigma^{me}_{g^C}\epsilon^{me,g^C}_t\\ \sigma^{me}_{g^I}\epsilon^{me,g^I}_t\\ \sigma^{me}_{TB/Y}\epsilon^{me,TB/Y}_t\end{bmatrix}, \]

where \(\sigma^{me}_{g^Y}\), \(\sigma^{me}_{g^C}\), \(\sigma^{me}_{g^I}\), and \(\sigma^{me}_{TB/Y}\) are positive parameters and \(\epsilon^{me,i}_t\) is an exogenous i.i.d. disturbance with mean zero and unit variance for \(i=g^Y,g^C,g^I,TB/Y\). The values assigned to the structural parameters are based on a combination of calibration and econometric estimation. The calibrated parameters are g, d/y, δ, r*, α, γ, ω, and s/y and are set to match long-run data relations from Argentina or in accordance with related business-cycle studies. Table 5.4 presents the calibrated parameter values. The parameter g is set to match the average growth rate of per capita GDP in Argentina over the period 1900 to 2005 of 1.07 percent per year.

¹ Note that in the theoretical model, the definition of gross investment does not include investment adjustment costs. An alternative definition could include them. This distinction is immaterial in the present context, because up to first order adjustment costs are nil.

Table 5.4: Calibrated Parameters

Parameter   g        d/y     δ        r*      α       γ     ω      s/y
Value       1.0107   0.037   0.1255   0.10    0.32    2     1.6    0.10

Note. The time unit is one year.

We impose a steady-state trade-balance to output ratio of 0.3 percent, as observed on average in Argentina over the period 1900-2005. We set r* to 10 percent per year and impose the restriction \(1+r^*=\beta^{-1}g^{\gamma}\). This implies a steady-state interest rate of 10 percent, a value that is empirically plausible for an emerging market economy like Argentina, and a subjective discount factor, β, of 0.9286. A further implication of these restrictions is that the steady state of \(d_t\) equals \(\bar d\). We restrict \(\bar y\) to equal the steady-state value of detrended output, \(y_t\). This restriction and the assumed target for the steady state of the trade-balance-to-output ratio implies a value of \(\bar d/\bar y\) of 0.037, which coincides with the steady-state debt-to-output ratio.
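The calibration restrictions just stated can be verified numerically. The sketch below (using the Table 5.4 values) recovers the implied discount factor from \(1+r^*=\beta^{-1}g^{\gamma}\), and the steady-state trade balance from the model's resource constraint, under which a constant detrended debt requires \(tb=d(1-g/(1+r^*))\):

```python
# Values from Table 5.4.
g, gamma, r_star, d_over_y = 1.0107, 2.0, 0.10, 0.037

# 1 + r* = beta^{-1} g^gamma  =>  beta = g^gamma / (1 + r*)
beta = g**gamma / (1.0 + r_star)
print(round(beta, 4))  # 0.9286, as reported in the text

# Steady state of the detrended budget constraint with d_{t+1} = d_t = d:
# g*d/(1+r*) = d - tb  =>  tb = d * (1 - g/(1+r*))
tb_over_y = d_over_y * (1.0 - g / (1.0 + r_star))
print(round(100 * tb_over_y, 2))  # 0.3 percent of output
```

The 0.3 percent figure matches the steady-state trade-balance-to-output target imposed in the calibration.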
The value assigned to the depreciation rate δ implies an average investment share in GDP of 19 percent, which is in line with the average value observed in Argentina over the calibration period. There is no reliable data on factor income shares for Argentina. We therefore set the parameter α, which determines the average capital income share, at 0.32, a value commonly used in the related literature. The parameter γ, defining the curvature of the period utility function, takes the value 2, which is standard in related business-cycle studies. The parameter ω is calibrated at 1.6, which implies a labor-supply elasticity of 1/(ω − 1) = 1.7. Finally, the share of exogenous spending to GDP, s/y, is set at 10 percent, which implies that s/y equals 0.10. The remaining parameters are estimated using likelihood-based Bayesian techniques on a log-linear approximation of the equilibrium dynamics. The log-linear approximation is computed using the techniques and Matlab code introduced in chapter 4. The estimated parameters consist of thirteen structural parameters and the standard deviations of the four measurement errors. The thirteen structural parameters are the ten parameters defining the stochastic processes of the shocks driving the model (\(\sigma_i\) and \(\rho_i\), for i = a, g, ν, μ, s), the parameter φ, governing the strength of capital adjustment costs, the parameter ψ, determining the debt elasticity of the country-specific interest rate, and the parameter η, defining the size of the working-capital constraint. Table 5.5 displays salient characteristics of the prior and posterior distributions of the estimated parameters. All prior distributions are assumed to be uniform. For the structural parameters, the supports of the uniform prior distributions are relatively wide. For example, for serial correlations, we allow for the maximum possible range that the parameter can take.
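The posterior statistics reported below are computed from an MCMC chain. The mechanics of a random-walk Metropolis sampler with a uniform (flat) prior, for which the acceptance ratio reduces to the likelihood ratio, can be sketched as follows. The Gaussian toy likelihood here is a stand-in for the model's Kalman-filter likelihood; nothing in this sketch is the book's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the model's log-likelihood (in the chapter this would be
# computed with the Kalman filter on the log-linearized model).
data = rng.normal(0.5, 1.0, size=200)
def log_likelihood(theta):
    return -0.5 * np.sum((data - theta) ** 2)

lo, hi = -1.0, 2.0        # uniform prior support, in the spirit of Table 5.5
theta, ll = 0.0, log_likelihood(0.0)
chain = []
for _ in range(20000):
    prop = theta + 0.1 * rng.normal()      # random-walk proposal
    if lo <= prop <= hi:                   # flat prior: reject outside support
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis acceptance
            theta, ll = prop, ll_prop
    chain.append(theta)

posterior = np.array(chain[5000:])          # drop burn-in
print(posterior.mean())                     # close to the sample mean of data
```

With a flat prior the posterior is proportional to the likelihood on the support, which is why the chapter can interpret the posterior mode as a maximum likelihood estimate.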
Thus, the estimation results can be interpreted as maximum likelihood estimates. For the prior uniform distributions of the four nonstructural parameters, namely the standard deviations of the measurement errors, we impose upper bounds that imply that measurement errors can account for no more than 6.25 percent of the variance of the corresponding observable. The statistics pertaining to the posterior distributions were computed using an MCMC chain of length 1 million. The data and Matlab code to reproduce the estimation is available at http://www.columbia.edu/~mu2166/book/. Table 5.6 shows that the estimated model does a good job at matching a number of second moments typically used to characterize business cycles in emerging countries. In particular, the model replicates the excess volatility of consumption relative to output, the high volatility of investment, and a volatility of the trade-balance-to-output ratio comparable to that of output growth. The estimated model also captures the procyclicality of consumption and investment and the slight countercyclicality of the trade-balance-to-output ratio.

Table 5.5: Bayesian Estimation

                    Prior Distributions           Posterior Distributions
Parameter       Min      Max      Mean        Mean      Median    5%        95%
σ_g             0        0.2      0.1         0.0082    0.0067    0.00058   0.021
ρ_g             -0.99    0.99     0           0.15      0.21      -0.69     0.81
σ_a             0        0.2      0.1         0.032     0.032     0.027     0.036
ρ_a             -0.99    0.99     0           0.84      0.84      0.75      0.91
σ_ν             0        1        0.5         0.53      0.51      0.39      0.77
ρ_ν             -0.99    0.99     0           0.85      0.85      0.76      0.93
σ_s             0        0.2      0.1         0.062     0.064     0.0059    0.12
ρ_s             -0.99    0.99     0           0.46      0.56      -0.42     0.92
σ_μ             0        0.2      0.1         0.12      0.11      0.067     0.18
ρ_μ             -0.99    0.99     0           0.91      0.92      0.83      0.98
φ               0        8        4           5.6       5.6       3.9       7.5
ψ               0        10       5           1.4       1.3       0.55      2.4
η               0        5        2.5         0.42      0.4       0.18      0.7
σ^me_gY         0.0001   0.013    0.0067      0.0045    0.0042    0.00051   0.0096
σ^me_gC         0.0001   0.019    0.0095      0.0075    0.0076    0.00097   0.014
σ^me_gI         0.0001   0.051    0.025       0.041     0.044     0.022     0.05
σ^me_TB/Y       0.0001   0.013    0.0065      0.0033    0.0031    0.00041   0.0068

Note. All prior distributions are taken to be uniform. Moments of the posterior distribution are based on a 1,000,000 MCMC chain. The symbol σ^me_i denotes the standard deviation of the measurement error associated with the observable i, for i = gY, gC, gI, and TB/Y, where gi denotes the growth rate of variable i, for i = Y, C, I, and TB/Y denotes the trade-balance-to-output ratio. The data and Matlab code to reproduce this table are available at http://www.columbia.edu/~mu2166/book/.

Table 5.6: Empirical and Theoretical Second Moments

Statistic                        Model     Data
Standard Deviation
  g^Y                            6.2       5.3   (0.43)
  g^C                            8.9       7.5   (0.6)
  g^I                            18.6      20.4  (1.8)
  TB/Y                           4.9       5.2   (0.57)
Correlation with g^Y
  g^C                            0.80      0.72  (0.07)
  g^I                            0.53      0.67  (0.09)
  TB/Y                           -0.18     -0.035 (0.09)
Correlation with TB/Y
  g^C                            -0.37     -0.27 (0.07)
  g^I                            -0.31     -0.19 (0.08)
Serial Correlation
  g^Y                            -0.06     -0.0047 (0.08)
  g^C                            0.04      0.11  (0.09)
  g^I                            -0.098    0.32  (0.10)
  TB/Y                           0.51      0.58  (0.07)

Note. Empirical moments are computed using data from Argentina for the period 1900 to 2005. Standard deviations of empirical moments are computed using GMM. Theoretical moments are unconditional moments computed by evaluating the model at the posterior median of the estimated parameters.

How Important Are Permanent Productivity Shocks?

An important result that emerges from table 5.5 is that the parameters defining the stochastic process of the nonstationary productivity shock are estimated with significant uncertainty. Specifically, the posterior distribution of the standard deviation of innovations to the nonstationary productivity shock, \(\sigma_g\), has a median of 0.67 percent but a 95% probability interval that ranges from 0 to 2.1 percent. Similarly, the posterior distribution of the serial correlation of the nonstationary productivity shock, \(\rho_g\), has a median of 0.21, but a 95% probability interval that ranges from -0.69 to +0.81. By contrast, the parameters defining the process of the stationary productivity shock are estimated much more tightly.
The parameter \(\sigma_a\) has a posterior median of 0.032 and a 95-percent probability interval of 0.027 to 0.036, and the parameter \(\rho_a\) has a posterior median of 0.84 and a 95-percent probability interval ranging from 0.75 to 0.91. Consider now computing the share of the variance of the growth rate of total factor productivity explained by nonstationary productivity shocks. That is, consider computing the fraction of the variance of \(\Delta\ln TFP_t\equiv\Delta\ln(a_tX_t^{1-\alpha})\) explained by \(\Delta\ln(X_t^{1-\alpha})\). Recall from section 5.1 that the SOE-RBC model estimated in Aguiar and Gopinath (2007) using Mexican data from 1980 to 2003 implies that nonstationary productivity shocks explain 88 percent of movements in total factor productivity. How does this share change when one estimates the stochastic trend using a long sample and when productivity shocks are allowed to compete with other shocks and frictions? Evaluating the formula given in (5.2) using the MCMC chain for the posterior estimates of the relevant structural parameters, one can derive a MCMC chain of posterior draws of the share of the variance of TFP explained by nonstationary productivity shocks. Using this chain yields

\[ \text{posterior median}\;\frac{\mathrm{var}(\Delta\ln(X_t^{1-\alpha}))}{\mathrm{var}(\Delta\ln TFP_t)}=\text{posterior median}\;\frac{(1-\alpha)^2\sigma_g^2/(1-\rho_g^2)}{2\sigma_a^2/(1+\rho_a)+(1-\alpha)^2\sigma_g^2/(1-\rho_g^2)}=0.024. \tag{5.9} \]

That is, nonstationary productivity shocks explain only 2.4 percent of movements in total factor productivity.

Table 5.7: Variance Decomposition

Shock                   g^Y      g^C      g^I      TB/Y
Nonstationary Tech.     2.6      1.1      0.2      0.1
Stationary Tech.        81.8     42.4     12.7     0.5
Preference              6.8      27.7     29.1     6.2
Country Premium         6.1      25.8     52.0     92.1
Spending                0.0      0.3      0.3      0.1
Measurement Error       0.4      0.7      5.2      0.4

Note. Median of 1 million draws from the posterior distribution of the unconditional variance decomposition.
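The share just reported can be approximated from the posterior medians in Table 5.5. Note that the book takes the median of the share across the MCMC chain, whereas the sketch below evaluates the share at the median parameter values, so the two numbers are close but not identical:

```python
# Share of var(Delta ln TFP) explained by the nonstationary shock, evaluated
# at posterior medians (the book reports the median of the share itself).
alpha = 0.32
sig_g, rho_g = 0.0067, 0.21   # nonstationary shock, posterior medians
sig_a, rho_a = 0.032, 0.84    # stationary shock, posterior medians

nonstat = (1 - alpha) ** 2 * sig_g**2 / (1 - rho_g**2)  # var of (1-alpha) ln g_t
stat = 2 * sig_a**2 / (1 + rho_a)                       # var of Delta ln a_t
share = nonstat / (stat + nonstat)
print(round(share, 3))  # about 0.019, in line with the reported 0.024
```

Either way, the nonstationary component accounts for only a few percent of TFP growth variance, in sharp contrast with the 88 percent obtained on the short Mexican sample.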
This result suggests that the long data sample used for the estimation of the model plus the inclusion of additional shocks and financial frictions results in the data favoring stationary productivity shocks over nonstationary productivity shocks as drivers of total factor productivity. Table 5.7 presents the predicted contribution of each shock to explaining the variances of output growth, consumption growth, investment growth, and the trade-balance-to-output ratio. The central result emerging from this table is that nonstationary productivity shocks play a negligible role in explaining aggregate fluctuations. They account for less than 5 percent of the variances of all variables considered in the table. This result is in sharp contrast with the one obtained in the model with only productivity shocks and no financial frictions. Table 5.7 also shows that output growth is driven primarily by stationary productivity shocks, while investment and the trade-balance-to-output ratio are mostly accounted for by interest-rate shocks. Finally, the main drivers of consumption growth are stationary productivity shocks, interest-rate shocks, and preference shocks. That is, given the choice, the data prefer to explain the excess volatility of consumption relative to output observed in Argentina, and typical of many emerging countries, by disturbances to exogenous variables other than permanent technology shocks. At the beginning of this chapter we showed that—and provided intuition for why—excess volatility of consumption relative to output can be explained by sufficiently persistent stationary technology shocks. The present estimation delivers this channel by assigning a high posterior value, 0.84, to the serial correlation of the stationary productivity shock.
The intuition for why interest rate and preference shocks are important is that the former change the relative price of present consumption in terms of future consumption and the latter alter the subjective valuation of present consumption relative to future consumption. The fact that interest rate shocks explain a modest fraction of the variance of output growth, 6.1 percent, indicates that the working-capital friction does not play a central role in the estimated model. Indeed, the parameter η, defining the magnitude of the working-capital friction, is estimated with significant uncertainty. Specifically, the median value of η is 0.4, which means that firms hold about 5 months of the wage bill as working capital, but the 95% posterior probability interval ranges from 0.18 to 0.70 (or 2 to 8 months). Moreover, the predictions of the model are virtually unchanged if the model is estimated under the constraint η = 0 (see García-Cicco, Pancrazi, and Uribe, 2010). Chang and Fernández (2010) find a similar result using quarterly data from Mexico.

The Role of Financial Frictions

Consider now the importance of the parameter ψ, which governs the debt elasticity of the country premium. This elasticity captures, in a reduced-form fashion, financial frictions, which can stem from a variety of sources. For instance, models with imperfect enforcement of international loan contracts à la Eaton and Gersovitz (1981), which will be studied in detail in chapter 11, predict that the country premium increases with the level of external indebtedness. Similarly, models in which international borrowing is limited by collateral constraints, like those studied later in chapter 10, imply a shadow interest premium that is increasing in the level of net external debt. By estimating the parameter ψ, we let the data determine the importance of this type of financial friction. The posterior median estimate of ψ, shown in table 5.5, is 1.3.
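The conversion from η to months of the wage bill follows from the model's annual time unit: η is the stock of non-interest-bearing assets held per unit of the annual wage bill, so multiplying by 12 gives months. A quick sketch using the posterior statistics quoted above:

```python
# Time unit is one year, so eta maps into months of the wage bill as eta * 12.
def months_of_wage_bill(eta):
    return eta * 12

print(round(months_of_wage_bill(0.4), 1))    # 4.8, i.e. about 5 months (median)
print(round(months_of_wage_bill(0.18), 1))   # 2.2 months (5% bound)
print(round(months_of_wage_bill(0.70), 1))   # 8.4 months (95% bound)
```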
How big a financial friction does this value represent? Consider a partial differentiation of equation (5.8) with respect to \(r_t\) and \(\tilde D_{t+1}\):

\[ \Delta r_t=\frac{\psi}{\bar y}\,e^{(\tilde D_{t+1}/X_t-\bar d)/\bar y}\,\frac{\Delta\tilde D_{t+1}}{X_t}. \]

Assume that debt is at its deterministic steady-state level. That is, set \(\tilde D_{t+1}/X_t=\bar d\). We then have that

\[ \Delta r_t=\psi\,\frac{\Delta\tilde D_{t+1}/X_t}{\bar y}. \]

Now setting ψ = 1.3 (its posterior median estimate) yields

\[ \Delta r_t=1.3\,\frac{\Delta\tilde D_{t+1}/X_t}{\bar y}. \]

This expression indicates that an increase in debt of 1 percent of GDP (\(\Delta\tilde D_{t+1}/(\bar yX_t)=0.01\)) causes an increase of 1.3 percentage points in the interest rate (\(\Delta r_t=0.013\)). This nontrivial debt-elasticity of the interest rate plays an important role in explaining the cyclical behavior of the trade balance. Figure 5.4 displays the empirical and theoretical autocorrelation functions of the trade-balance-to-output ratio. The empirical autocorrelation function, shown with a solid line, is estimated using annual data from Argentina for the period 1900-2005. Broken lines indicate the two-standard-deviation confidence interval.

Figure 5.4: The Autocorrelation Function of the Trade-Balance-To-Output Ratio
[Figure: autocorrelations plotted against the order of autocorrelation; series shown: Data; Data ± 2 Std. Dev.; Baseline Model, ψ = 1.3; Model with ψ = 0.001.]

The point estimate of the autocorrelation function starts at about 0.6 and falls gradually toward zero. This pattern is observed more generally in emerging countries (see García-Cicco, Pancrazi, and Uribe, 2010; and Miyamoto and Nguyen, 2013). The estimated model captures the empirical pattern quite well. The predicted autocorrelation function, shown with a crossed line, lies close to its empirical counterpart and is entirely within the two-standard-error confidence band. For comparison, figure 5.4 also displays, with a circled line, the autocorrelation function of the trade-balance-to-output ratio predicted by a special case of the present model in which the parameter ψ is calibrated at 0.001.
Holding this parameter fixed, the model is then reestimated. In sharp contrast with the predictions of the baseline model, the model without financial frictions (i.e., the model with ψ = 0.001) predicts an autocorrelation function that is flat, close to unity, and entirely outside of the confidence band. Furthermore, the estimated model without financial frictions grossly overpredicts the volatility of the trade balance. Specifically, this model implies a standard deviation of the trade-balance-to-output ratio of 32.5 percentage points, six times larger than the observed standard deviation of 5.2 percentage points. The intuition behind why a small value of the debt elasticity of the interest rate causes the trade-balance-to-output ratio to have an autocorrelation function that is flat and near unity and a standard deviation that is excessively large can be found in the fact that in the absence of financial frictions external debt follows a highly persistent process (a quasi random walk). This is because as ψ → 0, the linearized equilibrium dynamics are governed by an eigenvalue that approaches unity in modulus. If the stock of external debt is extremely persistent, then so is the trade balance, which is determined to a large extent by the need to service external interest obligations. By contrast, when ψ is sufficiently large, as in the baseline model, the stock of external debt follows a less persistent stationary process. In this case, movements in the level of debt cause endogenous, self-stabilizing changes in the country interest-rate premium. For example, if the level of external debt rises too far above its steady-state level, the country premium increases, inducing households to cut spending thereby bringing debt closer to its steady-state level. The following general result can be established regarding the role of ψ in generating a downward-sloping autocorrelation function of the trade-balance-to-output ratio.
Holding all other parameters of the model constant, one can always find a positive but sufficiently small value of ψ such that the equilibrium dynamics are stationary up to first order and the autocorrelation function of the trade-balance-to-output ratio is flat and close to unity. This result implies that, if a particular calibration of a SOE model does not deliver a flat and near-unit autocorrelation function of the trade-balance-to-output ratio it is because ψ has not been set at a small enough value, given the values assigned to all other parameters of the model. For example, in the SOE-RBC model of chapter 4, section 4.1, the parameter ψ takes the value 0.0011,² which is about the same as the value considered here, and nevertheless the predicted first-order serial correlation of the trade-balance-to-output ratio is 0.51, significantly below unity (see table 4.2). This discussion motivates the question of what parameters other than the debt elasticity of the country interest rate affect the height and slope of the autocorrelation function of the trade-balance-to-output ratio. We turn to this issue next.

Investment Adjustment Costs and the Persistence of the Trade Balance

In general, given the value of ψ there exist a number of structural parameters that play a role in determining the shape of the autocorrelation function of the trade-balance-to-output ratio. One such parameter is the one governing the degree of capital adjustment costs. Consider again the SOE-RBC model analyzed in chapter 4, section 4.1.1, to which we will refer here as the SGU model. The calibration of this model, shown in table 4.1, is of particular interest for the present discussion because it features a value of ψ of 0.000742 (see footnote 2 for an explanation of why this value has to be adjusted upward to 0.0011 to make it comparable with the GPU model) and predicts an autocorrelation of the trade-balance-to-output ratio of 0.51, significantly below unity.
As we have just shown, the GPU model delivers a near unity serial correlation of the trade-balance-to-output ratio for ψ = 0.001. It follows that it cannot simply be the size of ψ that determines the persistence of the trade balance. We argue here that the calibration of the SGU model in chapter 4 features a capital adjustment cost coefficient that is about 20 times smaller than the one estimated for the GPU model. Let \(\phi_{SGU}\) and \(\phi_{GPU}\) be the values of this coefficient in the SGU and GPU models, respectively. From table 4.1 we have that \(\phi_{SGU}=0.028\), and from the reestimation of the GPU model with a fixed value of ψ = 0.001 (not shown), we find that \(\phi_{GPU}=2.0\). The comparison of the degree of capital adjustment costs is more complicated than simply comparing these two numbers, however, because the SGU and GPU models assume different specifications of the capital-adjustment-cost function. In the SGU model, the capital-adjustment-cost function takes the form

\[ \frac{\phi}{2}(k_{t+1}-k_t)^2, \]

whereas in the GPU model it takes the form

\[ \frac{\phi}{2}K_t\left(\frac{K_{t+1}}{K_t}-g\right)^2. \]

In general, using the same value of φ in both models introduces different degrees of capital adjustment costs. One can show that in the absence of long-run growth, g = 1, and holding all other parameters equal across models, both specifications give rise to identical equilibrium dynamics up to first order as long as

\[ k_{SGU}\,\phi_{SGU}=\phi_{GPU}, \]

where \(k_{SGU}\) denotes the steady-state level of capital in the SGU model.

² Notice that the specification of the country interest rate premium in chapter 4, section 4.1 makes the interest rate a function of the level of debt as opposed to the level of debt relative to trend output. Therefore, to make the comparison possible, the value of ψ of 0.000742 used in chapter 4 must be multiplied by the level of steady-state output in that model, which equals 1.4865.
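As a quick numerical check of this equivalence, one can map the GPU coefficient into SGU units (the value \(k_{SGU}=3.4\) used below comes from the chapter-4 calibration discussed in this section):

```python
# First-order equivalence (valid when g = 1): k_SGU * phi_SGU = phi_GPU.
phi_gpu = 2.0    # GPU estimate with psi fixed at 0.001
k_sgu = 3.4      # steady-state capital in the SGU calibration of chapter 4
phi_sgu_equiv = phi_gpu / k_sgu
print(round(phi_sgu_equiv, 2))   # 0.59
# Roughly 21 times the calibrated SGU value of 0.028, i.e. the "about 20
# times" difference noted in the text.
print(round(phi_sgu_equiv / 0.028, 1))
```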
Thus if one wished to introduce in the SGU model the same degree of adjustment costs as in the GPU model, one must set

\[ \phi_{SGU}=\frac{1}{k_{SGU}}\,\phi_{GPU}. \]

The calibration of table 4.1 implies that \(k_{SGU}=3.4\). Thus, the value of \(\phi_{SGU}\) that makes both models comparable is 0.59. This value is about 20 times larger than the value of 0.028 used to calibrate the SGU model in chapter 4. We conclude that the GPU model estimated with a fixed ψ of 0.001 features a degree of capital adjustment cost that is about 20 times as large as the one used in the calibration of the SGU model in chapter 4. Moreover, one can show that for values of \(\phi_{SGU}\) ranging from 0.028 to 0.59, the SGU model implies serial correlations of the trade-balance-to-output ratio between 0.51 and 0.97, although the relationship is not monotone. Intuitively, the higher the size of the adjustment costs, the more persistent is investment. Since the trade-balance-to-output ratio is governed by the sum of the consumption share and the investment share, the persistence of investment is partially transmitted to the trade balance.

Exercise 5.1 (Explaining the Serial Correlation of Investment Growth) The version of the GPU model estimated in this chapter predicts a negative first-order serial correlation of investment growth of -0.098 (see table 5.6). By contrast, the empirical counterpart is positive and significant, with a point estimate of 0.32. This empirical fact is also observed in other emerging countries over long horizons. For example, Miyamoto and Nguyen (2013) report serial correlations of investment growth greater than or equal to 0.2 for Brazil, Mexico, Peru, Turkey, and Venezuela using annual data covering the period 1900 to 2006.

1. Think of a possible modification of the theoretical model that would result in an improvement of the model's prediction along this dimension. Provide intuition.

2. Implement your suggestion. Show the complete set of equilibrium conditions.

3.
Reestimate your model using the data set for Argentina on which the GPU model of this chapter was estimated.

4. Summarize your results by expanding table 5.6 with appropriate lines containing the predictions of your model.

5. Compare the performance of your model with the data and with the predictions of the version of the GPU model analyzed in this chapter.

Exercise 5.2 (Slow Diffusion of Technology Shocks to the Country Premium, Household Production, and Government Spending) The model presented in section 5.2 assumes that permanent productivity shocks affect not only the productivity of labor and capital in producing market goods, but also the country premium, home production, and government spending. For instance, the assumption that the country interest rate depends on \(\tilde D_{t+1}/X_t\) implies that a positive innovation in \(X_t\) in period t causes, all other things equal, a fall in the country premium. In this exercise, we attenuate this type of effect by reformulating the model. Let

\[ \tilde X_t=\tilde X_{t-1}^{\zeta}X_t^{1-\zeta}, \]

with ζ ∈ [0, 1). Note that the original formulation obtains when ζ = 0. Replace equations (5.3), (5.5), and (5.8), respectively, with

\[ \sum_{t=0}^{\infty}\beta^t\nu_t\frac{\left(C_t-\omega^{-1}\tilde X_{t-1}h_t^{\omega}\right)^{1-\gamma}}{1-\gamma}, \]

\[ s_t=\frac{S_t}{\tilde X_{t-1}}, \]

and

\[ r_t=r^*+\psi\left(e^{(\tilde D_{t+1}/\tilde X_t-\bar d)/\bar y}-1\right)+e^{\mu_t-1}-1. \]

Keep all other features of the model as presented in section 5.2.

1. Present the equilibrium conditions of the model in stationary form.

2. Using Bayesian techniques, reestimate the model adding ζ to the vector of estimated parameters. Assume a uniform prior distribution for ζ with support [0, 0.99] and produce 1 million draws from the posterior distribution of the parameter vector. Present the estimation results in the form of a table like table 5.5. Discuss your findings.

3. Characterize numerically the predictions of the model. In particular, produce tables similar to tables 5.6 and 5.7. Discuss your results and provide intuition.

4.
Compute the impulse responses of output, consumption, investment, the trade-balance-to-output ratio, and the country interest rate to a one-percent innovation in \(g_t\) for three values of ζ, namely, 0, its posterior median, and 0.99.

Exercise 5.3 (The Importance of Nonstationary Productivity Shocks in the GPU Model) The model of section 5.2 introduces three modifications to the SOE-RBC model with stationary and nonstationary technology shocks of section 5.1, namely, a longer sample, additional shocks, and financial frictions. A result of section 5.2 is that once these modifications are put in place, nonstationary productivity shocks cease to play a central role in explaining business cycles. The goal of this exercise is to disentangle which of the three aforementioned modifications is responsible for this result. To this end, estimate, using Bayesian methods, one at a time, the following three variants of the model:

1. Shorter Sample: Use data from 1975-2005.

2. Only Technology Shocks: Set to zero the standard deviations of the preference shock, the country-interest-rate shock, and the spending shock.

3. No Financial Frictions: Set ψ = 0.001 and η = 0.

Use the same priors as in the body of the chapter and produce MCMC chains of 1 million draws with an acceptance rate of 25 percent. In each case, report the implied variance decomposition of TFP, output growth, consumption growth, investment growth, and the trade-balance-to-output ratio and measures of model fit, using the formats of tables 5.5, 5.6, and 5.7 and equation (5.9). Discuss your results.

Chapter 6

Interest-Rate Shocks

Business cycles in emerging market economies are correlated with the interest rate that these countries face in international financial markets. This observation is illustrated in figure 6.1, which depicts detrended output and the country interest rate for seven developing economies between 1994 and 2001.
Periods of low interest rates are typically associated with economic expansions and times of high interest rates are often characterized by depressed levels of aggregate activity.¹ Data like those shown in figure 6.1 have motivated researchers to ask what fraction of observed business cycle fluctuations in emerging markets is due to movements in the country interest rate. This question is complicated by the fact that the country interest rate is unlikely to be completely exogenous to the country's domestic conditions.² To clarify ideas, let \(R_t\) denote the gross interest rate at which the country borrows in international markets, or the country interest rate. This interest rate can be expressed as

\[ R_t=R^{us}_tS_t. \]

Here, \(R^{us}_t\) denotes the world interest rate, or the interest rate at which developed countries, like the U.S., borrow and lend from one another, and \(S_t\) denotes the gross country interest-rate spread, or country interest-rate premium.

¹ The estimated correlations (p-values) are: Argentina -0.67 (0.00), Brazil -0.51 (0.00), Ecuador -0.80 (0.00), Mexico -0.58 (0.00), Peru -0.37 (0.12), the Philippines -0.02 (0.95), South Africa -0.07 (0.71).

² There is a large literature arguing that domestic variables affect the interest rate at which emerging markets borrow externally. See, for example, Edwards (1984), Cline (1995), and Cline and Barnes (1997).

Figure 6.1: Country Interest Rates and Output in Seven Emerging Countries
[Figure: panels of detrended output and the country interest rate, 1994-2001, for seven emerging countries, including Argentina, the Philippines, and South Africa.]
Note: Output is seasonally adjusted and detrended using a log-linear trend. Country interest rates are real yields on dollar-denominated bonds of emerging countries issued in international financial markets. Data source: output, IFS; interest rates, EMBI+. Source: Uribe and Yue (2006).
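A small arithmetic illustration of the decomposition \(R_t=R^{us}_tS_t\) (the rates below are hypothetical): because the decomposition is in gross rates, the net country rate slightly exceeds the sum of the net world rate and the net spread.

```python
# Hypothetical gross rates: a 4 percent world rate and a 5 percent spread.
R_us = 1.04
S = 1.05
R = R_us * S
print(round(R - 1, 4))  # 0.092: the net rate compounds, 9.2 rather than 9.0
```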
Because the interest-rate premium is country specific, in the data we find an Argentine spread, a Colombian spread, etc. If the country in question is a small player in international financial markets, as many emerging economies are, it is reasonable to assume that the world interest rate Rus t , is completely exogenous to the emerging country’s domestic conditions. We can’t say the same, however, about the country spread St . An increase in output, for instance, may induce foreign lenders to lower spreads on believes that the country’s ability to repay its debts has improved. Interpreting the country interest rate as an exogenous variable when in reality it has an endogenous component is likely to result in an overstatement of the importance of interest rates in explaining business cycles. To see why, consider the following example. Suppose that the interest rate Rt is purely endogenous. Thus, its contribution to generating business cycles is nil. Assume, furthermore, that Rt is countercyclical, i.e., foreign lenders reduce the country spread in response to expansions in aggregate activity. The researcher, however, wrongly assumes that the interest rate is purely exogenous. Suppose now that a domestic productivity shock induces an expansion in output. In response to this output increase, the interest rate falls. The researcher, who believes Rt is exogenous, erroneously attributes part of the increase in output to the decline in Rt. The right conclusion, of course, is that all of the movement in output is due to the productivity shock. It follows that in order to quantify the macroeconomic effects of interest rate shocks, the first step is to identify the exogenous components of country spreads and world interest rate shocks. Necessarily, the identification process must combine statistical methods and economic theory. The particular combination adopted in this chapter draws heavily from Uribe and Yue (2006). M. Uribe and S. 
An Empirical Model

Our empirical model takes the form of a first-order VAR system:

    A x_t = B x_{t-1} + ε_t,    x_t ≡ [ŷ_t, î_t, tby_t, R̂^us_t, R̂_t]',    ε_t ≡ [ε^y_t, ε^i_t, ε^tby_t, ε^rus_t, ε^r_t]',    (6.1)

where y_t denotes real gross domestic output, i_t denotes real gross domestic investment, tby_t denotes the trade balance to output ratio, R^us_t denotes the gross real US interest rate, and R_t denotes the gross real (emerging) country interest rate. A hat on y_t and i_t denotes log deviations from a log-linear trend. A hat on R^us_t and R_t denotes simply the log. We measure R^us_t as the 3-month gross Treasury bill rate divided by the average gross US inflation over the past four quarters.3 We measure R_t as the sum of J. P. Morgan's EMBI+ stripped spread and the US real interest rate. Output, investment, and the trade balance are seasonally adjusted.

To identify the shocks in the empirical model, Uribe and Yue (2006) impose the restriction that the matrix A be lower triangular with unit diagonal elements. Because R̂^us_t and R̂_t appear at the bottom of the system, this identification strategy presupposes that innovations in world interest rates (ε^rus_t) and innovations in country interest rates (ε^r_t) percolate into domestic real variables with a one-period lag. At the same time, the identification scheme implies that real domestic shocks (ε^y_t, ε^i_t, and ε^tby_t) affect financial markets contemporaneously. This identification strategy is a natural one, for, conceivably, decisions such as employment and spending on durable consumption goods and investment goods take time to plan and implement. Also, it seems reasonable to assume that

3 Using a more forward-looking measure of inflation expectations to compute the US real interest rate does not significantly alter our main results.
financial markets are able to react quickly to news about the state of the business cycle.4

An additional restriction imposed on the VAR system is that the world interest rate R^us_t follows a simple univariate AR(1) process (i.e., A_4i = B_4i = 0, for all i ≠ 4). Uribe and Yue (2006) adopt this restriction primarily because it is reasonable to assume that disturbances in a particular (small) emerging country will not affect the real interest rate of a large country like the United States.

The country-interest-rate shock, ε^r_t, can equivalently be interpreted as a country spread shock. To see this, consider substituting in equation (6.1) the country interest rate R̂_t using the definition of country spread, Ŝ_t ≡ R̂_t − R̂^us_t. Clearly, because R̂^us_t appears as a regressor in the bottom equation of the VAR system, the estimated residual of the newly defined bottom equation, call it ε^s_t, is identical to ε^r_t. Moreover, it is obvious that the impulse response functions of ŷ_t, î_t, and tby_t associated with ε^s_t are identical to those associated with ε^r_t. Therefore, throughout the paper we indistinctly refer to ε^r_t as a country interest rate shock or as a country spread shock.

After estimating the VAR system (6.1), Uribe and Yue use it to address a number of questions central to disentangling the effects of country-spread shocks and world-interest-rate shocks on aggregate activity in emerging markets: First, how do US-interest-rate shocks and country-spread shocks affect real domestic variables such as output, investment, and the trade balance? Second, how do country spreads respond to innovations in US interest rates? Third, how and by how much do country spreads move in response to innovations in emerging-country fundamentals? Fourth, how important are US-interest-rate shocks and country-spread shocks in explaining movements in aggregate activity in emerging countries?
Fifth, how important are US-interest-rate shocks and country-spread shocks in accounting for movements in country spreads? We answer these questions with the help of impulse response functions and variance decompositions.

4 Uribe and Yue (2006) discuss an alternative identification strategy consisting in ordering the financial variables 'above' the real variables in the VAR system.

Impulse Response Functions

Figure 6.2 displays with solid lines the impulse response function implied by the VAR system (6.1) to a unit innovation in the country spread shock, ε^r_t. Broken lines depict two-standard-deviation bands.5 In response to an unanticipated country-spread shock, the country spread itself increases and then quickly falls toward its steady-state level. The half life of the country spread response is about one year. Output, investment, and the trade balance-to-output ratio respond as one would expect. They are unchanged in the period of impact, because of our maintained assumption that external financial shocks take one quarter to affect production and absorption. In the two periods following the country-spread shock, output and investment fall, and subsequently recover gradually until they reach their pre-shock level. The adverse spread shock produces a larger contraction in aggregate domestic absorption than in aggregate output. This is reflected in the fact that the trade balance improves in the two periods following the shock.

Figure 6.3 displays the response of the variables included in the VAR system (6.1) to a one-percentage-point increase in the US interest rate shock, ε^rus_t. The effects of US interest-rate shocks on domestic variables and country spreads are measured with significant uncertainty, as indicated by the width of the 2-standard-deviation error bands.
The point estimates of the impulse response functions of output, investment, and the trade balance, however, are qualitatively similar to those associated with an innovation in the country spread. That is, aggregate activity and gross domestic investment contract, while net exports improve. However, the quantitative effects of an innovation in the US interest rate are much more pronounced than those caused by a country-spread disturbance of equal magnitude. For instance, the trough in the output response is twice as large under a US-interest-rate shock than under a country-spread shock.

It is remarkable that the impulse response function of the country spread to a US-interest-rate

5 These bands are computed using the delta method.

Figure 6.2: Impulse Response To Country-Spread Shock. Panels: Output, Investment, Trade Balance-to-GDP Ratio, World Interest Rate, Country Interest Rate, Country Spread. Notes: (1) Solid lines depict point estimates of impulse responses, and broken lines depict two-standard-deviation error bands. (2) The responses of Output and Investment are expressed in percent deviations from their respective log-linear trends. The responses of the Trade Balance-to-GDP ratio, the country interest rate, the US interest rate, and the country spread are expressed in percentage points. The two-standard-error bands are computed using the delta method.

Figure 6.3: Impulse Response To A US-Interest-Rate Shock. Panels: Output, Investment, Trade Balance-to-GDP Ratio, World Interest Rate, Country Interest Rate, Country Spread. Notes: (1) Solid lines depict point estimates of impulse responses, and broken lines depict two-standard-deviation error bands. (2) The responses of Output and Investment are expressed in percent deviations from their respective log-linear trends.
The responses of the Trade Balance-to-GDP ratio, the country interest rate, and the US interest rate are expressed in percentage points.

shock displays a delayed overshooting. In effect, in the period of impact the country interest rate increases but by less than the jump in the US interest rate. As a result, the country spread initially falls. However, the country spread recovers quickly and after a couple of quarters it is more than one percentage point above its pre-shock level. Thus, country spreads increase significantly in response to innovations in the US interest rate but with a short delay. The negative impact effect is in line with the findings of Eichengreen and Mody (1998) and Kamin and Kleist (1999). We note, however, that because the models estimated by these authors are static in nature, by construction, they are unable to capture the rich dynamic relation linking these two variables. The overshooting of country spreads is responsible for the much larger response of domestic variables to an innovation in the US interest rate than to an innovation in the country spread of equal magnitude.

We now ask how innovations in output, ε^y_t, impinge upon the variables of our empirical model. The model is vague about the precise nature of output shocks. They can reflect variations in total factor productivity, the terms-of-trade, etc. Figure 6.4 depicts the impulse response function to a one-percent increase in the output shock. The response of output, investment, and the trade balance is very much in line with the impulse response to a positive productivity shock implied by the small open economy RBC model (see figure 4.1). The response of investment is about three times as large as that of output. At the same time, the trade balance deteriorates significantly by about 0.4 percent and after two quarters starts to improve, converging gradually to its steady-state level.
More interestingly, the increase in output produces a significant reduction in the country spread of about 0.6 percent. The half life of the country spread response is about five quarters. The countercyclical behavior of the country spread in response to output shocks suggests that country interest rates behave in ways that exacerbate the business-cycle effects of output shocks.

Figure 6.4: Impulse Response To An Output Shock. Panels: Output, Investment, Trade Balance-to-GDP Ratio, World Interest Rate, Country Interest Rate, Country Spread. Notes: (1) Solid lines depict point estimates of impulse response functions, and broken lines depict two-standard-deviation error bands. (2) The responses of Output and Investment are expressed in percent deviations from their respective log-linear trends. The responses of the Trade Balance-to-GDP ratio, the country interest rate, and the US interest rate are expressed in percentage points.

Variance Decompositions

Figure 6.5 displays the variance decomposition of the variables contained in the VAR system (6.1) at different horizons. Solid lines show the fraction of the variance of the forecasting error explained jointly by US-interest-rate shocks and country-spread shocks (ε^rus_t and ε^r_t). Broken lines depict the fraction of the variance of the forecasting error explained by US-interest-rate shocks (ε^rus_t). Because ε^rus_t and ε^r_t are orthogonal disturbances, the vertical difference between the solid line and the broken line represents the variance of the forecasting error explained by country-spread shocks at different horizons.6,7 Note that as the forecasting horizon approaches infinity, the decomposition of the variance of the forecasting error coincides with the decomposition of the unconditional variance of the series in question.
For the purpose of the present discussion, we associate business-cycle fluctuations with the variance of the forecasting error at a horizon of about five years. Researchers typically define business cycles as movements in time series of frequencies ranging from 6 quarters to 32 quarters (Stock and Watson, 1999). Our choice of horizon falls in the middle of this window.

6 These forecasting errors are computed as follows. Let x_t ≡ [ŷ_t, î_t, tby_t, R̂^us_t, R̂_t]' be the vector of variables included in the VAR system and ε_t ≡ [ε^y_t, ε^i_t, ε^tby_t, ε^rus_t, ε^r_t]' the vector of disturbances of the VAR system. Then, one can write the MA(∞) representation of x_t as x_t = Σ_{j=0}^{∞} C_j ε_{t−j}, where C_j ≡ (A^{−1}B)^j A^{−1}. The error in forecasting x_{t+h} at time t for h > 0, that is, x_{t+h} − E_t x_{t+h}, is given by Σ_{j=0}^{h} C_j ε_{t+h−j}. The variance/covariance matrix of this h-step-ahead forecasting error is given by Σ_{x,h} ≡ Σ_{j=0}^{h} C_j Σ C_j', where Σ is the (diagonal) variance/covariance matrix of ε_t. Thus, the variance of the h-step-ahead forecasting error of x_t is simply the vector containing the diagonal elements of Σ_{x,h}. In turn, the variance of the h-step-ahead forecasting error of x_t due to a particular shock, say ε^rus_t, is given by the diagonal elements of the matrix Σ_{x,rus,h} ≡ Σ_{j=0}^{h} (C_j Λ_4) Σ (C_j Λ_4)', where Λ_4 is a 5×5 matrix with all elements equal to zero except element (4,4), which takes the value one. Then, the broken lines in figure 6.5 are given by the element-by-element ratio of the diagonal elements of Σ_{x,rus,h} to the diagonal elements of the matrix Σ_{x,h} for different values of h. The difference between the solid lines and the broken lines (i.e., the fraction of the variance of the forecasting error due to ε^r_t) is computed in a similar fashion but using the matrix Λ_5.

7 We observe that the estimates of ε^y_t, ε^i_t, ε^tby_t, and ε^r_t (i.e., the sample residuals of the first, second, third, and fifth equations of the VAR system) are orthogonal to each other.
But because ŷ_t, î_t, and tby_t are excluded from the R̂^us_t equation, we have that the estimates of ε^rus_t will in general not be orthogonal to the estimates of ε^y_t, ε^i_t, or ε^tby_t. However, under our maintained specification assumption that the US real interest rate does not systematically respond to the state of the business cycle in emerging countries, this lack of orthogonality should disappear as the sample size increases.

Figure 6.5: Variance Decomposition at Different Horizons. Panels: Output, Investment, Trade Balance-to-GDP Ratio, World Interest Rate, Country Interest Rate, Country Spread. Note: Solid lines depict the fraction of the variance of the k-quarter-ahead forecasting error explained jointly by ε^rus and ε^r at different horizons. Broken lines depict the fraction of the variance of the forecasting error explained by ε^rus at different horizons.

According to our estimate of the VAR system given in equation (6.1), innovations in the US interest rate, ε^rus_t, explain about 20 percent of movements in aggregate activity in emerging countries at business-cycle frequency. At the same time, country-spread shocks, ε^r_t, account for about 12 percent of aggregate fluctuations in these countries. Thus, around one third of business cycles in emerging economies is explained by disturbances in external financial variables. These disturbances play an even stronger role in explaining movements in international transactions. In effect, US-interest-rate shocks and country-spread shocks are responsible for about 43 percent of movements in the trade balance-to-output ratio in the countries included in our panel. Variations in country spreads are largely explained by innovations in US interest rates and innovations in country spreads themselves.
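The calculations described in footnote 6 are mechanical enough to sketch in a few lines. The function below builds the MA coefficients C_j = (A^{-1}B)^j A^{-1} recursively and returns the fraction of the h-step forecast-error variance attributable to shock k. The 2×2 matrices are made-up stand-ins, not the estimated five-variable system, so the numbers are purely illustrative of the mechanics:

```python
import numpy as np

# Forecast-error variance decomposition for A x_t = B x_{t-1} + eps_t with
# C_j = (A^{-1} B)^j A^{-1}, Sigma_{x,h} = sum_{j<=h} C_j Sigma C_j', and the
# share of shock k computed with Lambda_k (zeros except a 1 at (k,k)).

def fevd(A, B, sigma_eps, h, k):
    """Fraction of the h-step forecast-error variance due to shock k."""
    n = A.shape[0]
    Ainv = np.linalg.inv(A)
    F = Ainv @ B
    Lam = np.zeros((n, n)); Lam[k, k] = 1.0
    total = np.zeros((n, n)); due_k = np.zeros((n, n))
    Cj = Ainv                          # C_0 = A^{-1}
    for _ in range(h + 1):
        total += Cj @ sigma_eps @ Cj.T
        due_k += (Cj @ Lam) @ sigma_eps @ (Cj @ Lam).T
        Cj = F @ Cj                    # C_{j+1} = (A^{-1}B) C_j
    return np.diag(due_k) / np.diag(total)

# illustrative 2-variable system with unit-diagonal lower-triangular A
A = np.array([[1.0, 0.0], [-0.5, 1.0]])
B = np.array([[0.8, 0.1], [0.2, 0.6]])
sigma_eps = np.diag([1.0, 0.5])        # diagonal shock covariance
shares = sum(fevd(A, B, sigma_eps, 20, k) for k in range(2))
```

Because the structural shocks are orthogonal, the shares across all shocks sum to one for each variable at every horizon, which is a useful consistency check on any implementation.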
Jointly, these two sources of uncertainty account for about 85 percent of fluctuations in country spreads. Most of this fraction, about 60 percentage points, is attributed to country-spread shocks. This last result concurs with Eichengreen and Mody (1998), who interpret this finding as suggesting that arbitrary revisions in investors' sentiments play a significant role in explaining the behavior of country spreads.

The impulse response functions shown in figure 6.4 establish empirically that country spreads respond significantly and systematically to domestic macroeconomic variables. At the same time, the variance decomposition performed in this section indicates that domestic variables are responsible for about 15 percent of the variance of country spreads at business-cycle frequency. A natural question raised by these findings is whether the feedback from endogenous domestic variables to country spreads exacerbates domestic volatility. Here we make a first step at answering this question. Specifically, we modify the R̂_t equation of the VAR system by setting to zero the coefficients on ŷ_{t−i}, î_{t−i}, and tby_{t−i} for i = 0, 1. We then compute the implied volatility of ŷ_t, î_t, tby_t, and R̂_t in the modified VAR system at business-cycle frequency (20 quarters). We compare these volatilities to those emerging from the original VAR model. Table 6.1 shows that the presence of feedback from domestic variables to country spreads significantly increases domestic volatility. In particular,

Table 6.1: Aggregate Volatility With and Without Feedback of Spreads from Domestic Variables

                Feedback Model    No-Feedback Model
    Variable    Std. Dev.         Std. Dev.
    ŷ           3.6450            3.0674
    î           14.1060           11.9260
    tby         4.3846            3.5198
    R̂           6.4955            4.7696

when we shut off the endogenous feedback, the volatility of output falls by 16 percent and the volatility of investment and the trade balance-to-GDP ratio fall by about 20 percent.
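The mechanics behind this counterfactual can be sketched with a toy two-variable reduced-form VAR (output and spread). The coefficients below are made up, not the estimated ones, so only the qualitative point carries over: with countercyclical feedback from output into the spread, both series are more volatile at the 20-quarter horizon than when the feedback coefficient is zeroed out.

```python
import numpy as np

# 20-quarter forecast-error standard deviations for x_t = B x_{t-1} + eps_t,
# computed with and without the feedback coefficient from output into the
# spread equation. All numbers are illustrative placeholders.

def fe_std(B, sigma_eps, h=20):
    cov = np.zeros_like(B)
    Cj = np.eye(B.shape[0])
    for _ in range(h + 1):
        cov += Cj @ sigma_eps @ Cj.T   # accumulate B^j Sigma (B^j)'
        Cj = B @ Cj
    return np.sqrt(np.diag(cov))

# row 0: output depends on its own lag and (negatively) on the lagged spread
# row 1: spread depends on its own lag and, with feedback, negatively on output
B_feedback = np.array([[0.85, -0.10],
                       [-0.40, 0.60]])
B_no_feedback = B_feedback.copy()
B_no_feedback[1, 0] = 0.0              # shut off feedback from output
sigma_eps = np.diag([1.0, 1.0])

std_fb = fe_std(B_feedback, sigma_eps)
std_no = fe_std(B_no_feedback, sigma_eps)
```

The amplification arises because the two cross effects have the same sign product: higher output lowers the spread, and a lower spread raises output further, so shocks circulate through the loop instead of dying out.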
The effect of feedback on the cyclical behavior of the country spread itself is even stronger. In effect, when feedback is negated, the volatility of the country interest rate falls by about one third. Of course, this counterfactual exercise is subject to Lucas' (1976) celebrated critique. For one should not expect that in response to changes in the coefficients defining the spread process all other coefficients of the VAR system will remain unaltered. As such, the results of table 6.1 serve solely as a way to motivate a more adequate approach to the question they aim to address. This more satisfactory approach necessarily involves the use of a theoretical model economy where private decisions change in response to alterations in the country-spread process. We follow this route next.

A Theoretical Model

The process of identifying country-spread shocks and US-interest-rate shocks involves a number of restrictions on the matrices defining the VAR system (6.1). To assess the plausibility of these restrictions, it is necessary to use the predictions of some theory of the business cycle as a metric. If the estimated shocks imply similar business-cycle fluctuations in the empirical as in the theoretical models, we conclude that, according to the proposed theory, the identified shocks are plausible. Accordingly, we will assess the plausibility of our estimated shocks in four steps: First, we develop a standard model of the business cycle in small open economies. Second, we estimate the deep structural parameters of the model. Third, we feed into the model the estimated version of the fourth and fifth equations of the VAR system (6.1), describing the stochastic laws of motion of the US interest rate and the country spread. Finally, we compare estimated impulse responses (i.e., those shown in figures 6.2 and 6.3) with those implied by the proposed theoretical framework.
The basis of the theoretical model presented here is the standard neoclassical growth model of the small open economy (e.g., Mendoza, 1991). We depart from the canonical version of the small-open-economy RBC model along four dimensions. First, as in the empirical model, we assume that in each period, production and absorption decisions are made prior to the realization of that period's world-interest-rate shock and country-spread shock. Thus, innovations in the world interest rate or the country spread are assumed to have allocative effects with a one-period lag. Second, preferences are assumed to feature external habit formation, or catching up with the Joneses as in Abel (1990). This feature improves the predictions of the standard model by preventing an excessive contraction in private non-business absorption in response to external financial shocks. Habit formation has been shown to help explain asset prices and business fluctuations in both developed economies (e.g., Boldrin, Christiano, and Fisher, 2001) and emerging countries (e.g., Uribe, 2002). Third, firms are assumed to be subject to a working-capital constraint. This constraint introduces a direct supply-side effect of changes in the cost of borrowing in international financial markets, and allows the model to predict a more realistic response of domestic output to external financial shocks. Fourth, the process of capital accumulation is assumed to be subject to gestation lags and convex adjustment costs. In combination, these two frictions prevent excessive investment volatility, induce persistence, and allow for the observed nonmonotonic (hump-shaped) response of investment in response to a variety of shocks (see Uribe, 1997).
Consider a small open economy populated by a large number of infinitely lived households with preferences described by the following utility function

    Σ_{t=0}^{∞} β^t U(c_t − µc̃_{t−1}, h_t),    (6.2)

where c_t denotes consumption in period t, c̃_t denotes the cross-sectional average level of consumption in period t, and h_t denotes the fraction of time devoted to work in period t. Households take as given the process for c̃_t. The single-period utility index U is assumed to be increasing in its first argument, decreasing in its second argument, concave, and smooth. The parameter β ∈ (0, 1) denotes a subjective discount factor, and the parameter µ measures the intensity of external habit formation.

Households have access to two types of asset, physical capital and an internationally traded bond. The capital stock is assumed to be owned entirely by domestic residents. Households have three sources of income: wages, capital rents, and interest income on bond holdings. Each period, households allocate their wealth to purchases of consumption goods, purchases of investment goods, and purchases of financial assets. The household's period-by-period budget constraint is given by

    d_t = R_{t−1} d_{t−1} + Ψ(d_t) + c_t + i_t − w_t h_t − u_t k_t,    (6.3)

where d_t denotes the household's debt position in period t, R_t denotes the gross interest rate faced by domestic residents in financial markets, w_t denotes the wage rate, u_t denotes the rental rate of capital, k_t denotes the stock of physical capital, and i_t denotes gross domestic investment. We assume that households face costs of adjusting their foreign asset position. We introduce these adjustment costs with the sole purpose of eliminating the familiar unit root built in the dynamics of standard formulations of the small open economy model. The debt-adjustment cost function Ψ(·) is assumed to be convex and to satisfy Ψ(d̄) = Ψ'(d̄) = 0, for some d̄ > 0.
Earlier, in chapter 4, we compared a number of standard alternative ways to induce stationarity in the small open economy framework, including the one used here, and concluded that they all produce virtually identical implications for business fluctuations.8

The process of capital accumulation displays adjustment costs in the form of gestation lags and convex costs as in Uribe (1997). Producing one unit of capital good requires investing 1/4 units of goods for four consecutive periods. Let s_{it} denote the number of investment projects started in t − i, for i = 0, 1, 2, 3. Then investment in period t is given by

    i_t = (1/4) Σ_{i=0}^{3} s_{it}.    (6.4)

In turn, the evolution of s_{it} is given by

    s_{i+1,t+1} = s_{it}.    (6.5)

The stock of capital obeys the following law of motion:

    k_{t+1} = (1 − δ)k_t + k_t Φ(s_{3t}/k_t),    (6.6)

8 The debt adjustment cost can be decentralized as follows. Suppose that financial transactions between domestic and foreign residents require financial intermediation by domestic institutions (banks). Suppose there is a continuum of banks of measure one that behave competitively. They capture funds from foreign investors at the country rate R_t and lend to domestic agents at the rate R^d_t. In addition, banks face operational costs, Ψ(d_t), that are increasing and convex in the volume of intermediation, d_t. The problem of domestic banks is then to choose the volume d_t so as to maximize profits, which are given by R^d_t [d_t − Ψ(d_t)] − R_t d_t, taking as given R^d_t and R_t. It follows from the first-order condition associated with this problem that the interest rate charged to domestic residents is given by R^d_t = R_t/[1 − Ψ'(d_t)], which is precisely the shadow interest rate faced by domestic agents in the centralized problem (see the Euler condition (6.10) below). Bank profits are assumed to be distributed to domestic households in a lump-sum fashion. This digression will be of use later in the paper when we analyze the firm's problem.
where δ ∈ (0, 1) denotes the rate of depreciation of physical capital. The process of capital accumulation is assumed to be subject to adjustment costs, as defined by the function Φ, which is assumed to be strictly increasing, concave, and to satisfy Φ(δ) = δ and Φ'(δ) = 1. These last two assumptions ensure the absence of adjustment costs in the steady state and that the steady-state level of investment is independent of Φ. The introduction of capital adjustment costs is commonplace in models of the small open economy. As discussed in chapters 3 and 4, adjustment costs are a convenient and plausible way to avoid excessive investment volatility in response to changes in the interest rate faced by the country in international markets.

Households choose contingent plans {c_{t+1}, h_{t+1}, s_{0,t+1}, d_{t+1}}_{t=0}^{∞} so as to maximize the utility function (6.2) subject to the budget constraint (6.3), the laws of motion of total investment, investment projects, and the capital stock given by equations (6.4)-(6.6), and a borrowing constraint of the form

    lim_{j→∞} E_t [d_{t+j+1} / Π_{s=0}^{j} R_{t+s}] ≤ 0    (6.7)

that prevents the possibility of Ponzi schemes. The household takes as given the processes {c̃_{t−1}, R_t, w_t, u_t}_{t=0}^{∞} as well as c_0, h_0, k_0, R_{−1}d_{−1}, and s_{it} for i = 0, 1, 2, 3. The Lagrangian associated with the household's optimization problem can be written as:

    L = E_0 Σ_{t=0}^{∞} β^t { U(c_t − µc̃_{t−1}, h_t) + λ_t [d_t − R_{t−1}d_{t−1} − Ψ(d_t) + w_t h_t + u_t k_t − (1/4) Σ_{i=0}^{3} s_{it} − c_t] + λ_t q_t [(1 − δ)k_t + k_t Φ(s_{3t}/k_t) − k_{t+1}] + Σ_{i=0}^{2} λ_t ν_{it} (s_{it} − s_{i+1,t+1}) },

where λ_t, λ_t ν_{it}, and λ_t q_t are the Lagrange multipliers associated with constraints (6.3), (6.5), and (6.6), respectively.
The optimality conditions associated with the household's problem are (6.4)-(6.7) all holding with equality and

    E_t λ_{t+1} = U_c(c_{t+1} − µc̃_t, h_{t+1})    (6.8)

    E_t [w_{t+1} λ_{t+1}] = −U_h(c_{t+1} − µc̃_t, h_{t+1})    (6.9)

    λ_t [1 − Ψ'(d_t)] = β R_t E_t λ_{t+1}    (6.10)

    E_t λ_{t+1} ν_{0,t+1} = (1/4) E_t λ_{t+1}    (6.11)

    β E_t λ_{t+1} ν_{1,t+1} = (β/4) E_t λ_{t+1} + λ_t ν_{0t}    (6.12)

    β E_t λ_{t+1} ν_{2,t+1} = (β/4) E_t λ_{t+1} + λ_t ν_{1t}    (6.13)

    β E_t λ_{t+1} q_{t+1} Φ'(s_{3,t+1}/k_{t+1}) = (β/4) E_t λ_{t+1} + λ_t ν_{2t}    (6.14)

    λ_t q_t = β E_t λ_{t+1} { q_{t+1} [1 − δ + Φ(s_{3,t+1}/k_{t+1}) − (s_{3,t+1}/k_{t+1}) Φ'(s_{3,t+1}/k_{t+1})] + u_{t+1} }.    (6.15)

It is important to recall that, because of our assumed information structure, the variables c_{t+1}, h_{t+1}, and s_{0,t+1} all reside in the information set of period t. Equation (6.8) states that in period t households choose consumption and leisure for period t+1 in such a way as to equate the marginal utility of consumption in period t+1 to the expected marginal utility of wealth in that period, E_t λ_{t+1}. Note that in general the marginal utility of wealth will differ from the marginal utility of consumption (λ_t ≠ U_c(c_t − µc̃_{t−1}, h_t)), because current consumption cannot react to unanticipated changes in wealth. Equation (6.9) defines the household's labor supply schedule, by equating the marginal disutility of effort in period t+1 to the expected utility value of the wage rate in that period. Equation (6.10) is an asset pricing relation equating the intertemporal marginal rate of substitution in consumption to the rate of return on financial assets. Note that, because of the presence of frictions to adjust bond holdings, the relevant rate of return on this type of asset is not simply the market rate R_t but rather the shadow rate of return R_t/[1 − Ψ'(d_t)].
Intuitively, when the household's debt position is, say, above its steady-state level d̄, we have that Ψ'(d_t) > 0, so that the shadow rate of return is higher than the market rate of return, providing further incentives for households to save, thereby reducing their debt positions. Equations (6.11)-(6.13) show how to price investment projects at different stages of completion. The price of an investment project in its i-th quarter of gestation equals the price of a project in the (i−1)-th quarter of gestation plus 1/4 units of goods. Equation (6.14) links the cost of producing a unit of capital to the shadow price of installed capital, or Tobin's Q, q_t. Finally, equation (6.15) is a pricing condition for physical capital. It equates the revenue from selling one unit of capital today, q_t, to the discounted value of renting the unit of capital for one period and then selling it, u_{t+1} + q_{t+1}, net of depreciation and adjustment costs.

Output is produced by means of a production function that takes labor services and physical capital as inputs,

    y_t = F(k_t, h_t),    (6.16)

where the function F is assumed to be homogeneous of degree one, increasing in both arguments, and concave. Firms hire labor and capital services from perfectly competitive markets. The production process is subject to a working-capital constraint that requires firms to hold non-interest-bearing assets to finance a fraction of the wage bill each period. Formally, the working-capital constraint takes the form

    κ_t ≥ η w_t h_t;    η ≥ 0,

where κ_t denotes the amount of working capital held by the representative firm in period t. The debt position of the firm, denoted by d^f_t, evolves according to the following expression

    d^f_t = R^d_{t−1} d^f_{t−1} − F(k_t, h_t) + w_t h_t + u_t k_t + π_t − κ_{t−1} + κ_t,

where π_t denotes distributed profits in period t, and R^d_t ≡ R_t/[1 − Ψ'(d_t)] is the interest rate faced by nonfinancial domestic agents, as shown in footnote 8.
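The working-capital requirement forces the firm to tie up ηw_th_t in non-interest-bearing form; its opportunity cost per unit of working capital is (R^d_t − 1)/R^d_t, which (as shown below) raises the effective unit labor cost to w_t[1 + η(R^d_t − 1)/R^d_t]. A numerical illustration with made-up parameter values, using a gross quarterly rate roughly consistent with 11 percent a year:

```python
# The labor-cost factor implied by the working-capital constraint:
# 1 + eta*(Rd - 1)/Rd. eta = 1 (one quarter of the wage bill in working
# capital) and the interest rates below are illustrative placeholders.

def labor_cost_wedge(eta, Rd):
    """Effective units of goods per unit of wage bill."""
    return 1.0 + eta * (Rd - 1.0) / Rd

Rd_low, Rd_high = 1.0264, 1.05          # gross quarterly borrowing rates
wedge_low = labor_cost_wedge(1.0, Rd_low)
wedge_high = labor_cost_wedge(1.0, Rd_high)
# a higher borrowing rate raises the wedge, acting like a payroll tax
```

This is the direct supply-side channel through which interest-rate shocks contract output in the model: when R^d_t rises, labor becomes effectively more expensive even at an unchanged wage.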
The interest rate R^d_t will in general differ from the country interest rate R_t (the interest rate that domestic banks face in international financial markets) because of the presence of debt-adjustment costs.

Define the firm's total net liabilities at the end of period t as a_t = R^d_t d^f_t − κ_t. Then, we can rewrite the above expression as

    a_t/R^d_t = a_{t−1} − F(k_t, h_t) + w_t h_t + u_t k_t + π_t + [(R^d_t − 1)/R^d_t] κ_t.

We will limit attention to the case in which the interest rate is positive at all times. This implies that the working-capital constraint will always bind, for otherwise the firm would incur unnecessary financial costs, which would be suboptimal. So we can use the working-capital constraint holding with equality to eliminate κ_t from the above expression to get

    a_t/R^d_t = a_{t−1} − F(k_t, h_t) + w_t h_t [1 + η (R^d_t − 1)/R^d_t] + u_t k_t + π_t.    (6.17)

It is clear from this expression that the assumed working-capital constraint increases the unit labor cost by a fraction η(R^d_t − 1)/R^d_t, which is increasing in the interest rate R^d_t.

The firm's objective is to maximize the present discounted value of the stream of profits distributed to its owners, the domestic residents. That is,

    max E_0 Σ_{t=0}^{∞} β^t (λ_t/λ_0) π_t.

We use the household's marginal utility of wealth as the stochastic discount factor because households own domestic firms. Using constraint (6.17) to eliminate π_t from the firm's objective function, the firm's problem can be stated as choosing processes for a_t, h_t, and k_t so as to maximize

    E_0 Σ_{t=0}^{∞} β^t λ_t { a_t/R^d_t − a_{t−1} + F(k_t, h_t) − w_t h_t [1 + η (R^d_t − 1)/R^d_t] − u_t k_t },

subject to a no-Ponzi-game borrowing constraint of the form

    lim_{j→∞} E_t [a_{t+j} / Π_{s=0}^{j} R^d_{t+s}] ≤ 0.

The first-order conditions associated with this problem are (6.10), (6.17), the no-Ponzi-game constraint holding with equality, and

    F_h(k_t, h_t) = w_t [1 + η (R^d_t − 1)/R^d_t],    (6.18)

    F_k(k_t, h_t) = u_t.    (6.19)
It is clear from the first of these two efficiency conditions that the working-capital constraint distorts the labor market by introducing a wedge between the marginal product of labor and the real wage rate. This distortion is larger the larger the opportunity cost of holding working capital, (R^d_t − 1)/R^d_t, or the higher the intensity of the working-capital constraint, η.9

We also observe that any process a_t satisfying equation (6.17) and the firm's no-Ponzi-game constraint is optimal. We assume that firms start out with no liabilities. Then, an optimal plan consists in holding no liabilities at all times (a_t = 0 for all t ≥ 0), with distributed profits given by

    π_t = F(k_t, h_t) − w_t h_t [1 + η (R^d_t − 1)/R^d_t] − u_t k_t.

In this case, d_t represents the country's net debt position, as well as the amount of debt intermediated by local banks. We also note that the above three equations together with the assumption that the production technology is homogeneous of degree one imply that profits are zero at all times (π_t = 0 for all t).

9 The precise form taken by this wedge depends on the particular timing assumed in modeling the use of working capital. Here we adopt the shopping-time timing. Alternative assumptions give rise to different specifications of the wedge. For instance, under a cash-in-advance timing the wedge takes the form 1 + η(R^d_t − 1).

Driving Forces

One advantage of our method to assess the plausibility of the identified US-interest-rate shocks and country-spread shocks is that one need not feed into the model shocks other than those whose effects one is interested in studying. This is because we empirically identified not only the distribution of the two shocks we wish to study, but also their contribution to business cycles in emerging economies.
In formal terms, we produced empirical estimates of the coefficients associated with $\epsilon^r_t$ and $\epsilon^{rus}_t$ in the MA($\infty$) representation of the endogenous variables of interest (output, investment, etc.). So using the economic model, we can generate the corresponding theoretical MA($\infty$) representation and compare it to its empirical counterpart. It turns out that, up to first order, one only needs to know the laws of motion of $R_t$ and $R^{us}_t$ to construct the coefficients of the theoretical MA($\infty$) representation. We therefore close our model by introducing the law of motion of the country interest rate $R_t$. This process is the estimate of the bottom equation of the VAR system (6.1) and is given by
\[ \hat R_t = 0.63\,\hat R_{t-1} + 0.50\,\hat R^{us}_t + 0.35\,\hat R^{us}_{t-1} - 0.79\,\hat y_t + 0.61\,\hat y_{t-1} + 0.11\,\hat\imath_t - 0.12\,\hat\imath_{t-1} + 0.29\,\widehat{tby}_t - 0.19\,\widehat{tby}_{t-1} + \epsilon^r_t, \tag{6.20} \]
where $\epsilon^r_t$ is an i.i.d. disturbance with mean zero and standard deviation 0.031. As indicated earlier, the variable $tby_t$ stands for the trade balance-to-GDP ratio and is given by:10
\[ tby_t = \frac{y_t - c_t - i_t - \Psi(d_t)}{y_t}. \]
Because the process for the country interest rate defined by equation (6.20) involves the world interest rate $R^{us}_t$, which is assumed to be an exogenous random variable, we must also include this variable's law of motion as part of the set of equations defining the equilibrium behavior of the theoretical model. Accordingly, we estimate an AR(1) process for $R^{us}_t$ and obtain
\[ \hat R^{us}_t = 0.83\,\hat R^{us}_{t-1} + \epsilon^{rus}_t, \tag{6.22} \]
where $\epsilon^{rus}_t$ is an i.i.d. innovation with mean zero and standard deviation 0.007.

Equilibrium, Functional Forms, and Parameter Values

In equilibrium all households consume identical quantities. Thus, individual consumption equals average consumption across households, or
\[ c_t = \tilde c_t; \qquad t \ge -1. \tag{6.23} \]
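Before turning to the equilibrium definition, it is useful to see the two exogenous laws of motion at work. The following sketch (illustrative, not part of the text) iterates equations (6.20) and (6.22) with the endogenous variables held at zero, which isolates the direct $R^{us}\to R$ channel:

```python
# Response of the country interest rate to a one-standard-deviation
# US-interest-rate innovation, suppressing the endogenous feedback terms
# (y, i, and tby set to zero). All variables are log-deviations ("hats").
T = 24
Rus = [0.0] * T
R = [0.0] * T

Rus[0] = 0.007            # innovation of one standard deviation
R[0] = 0.50 * Rus[0]      # contemporaneous impact via equation (6.20)

for t in range(1, T):
    Rus[t] = 0.83 * Rus[t - 1]                                  # equation (6.22)
    R[t] = 0.63 * R[t - 1] + 0.50 * Rus[t] + 0.35 * Rus[t - 1]  # equation (6.20)

# The country rate inherits a hump shape: its peak response arrives a few
# quarters after impact, a feature that reappears in figure 6.6 below.
peak = max(range(T), key=lambda t: R[t])
print(peak)  # prints 3
```

Even with the endogenous terms shut off, the interaction of the persistent US rate with the autoregressive term in (6.20) delays the peak response of the country rate past the impact period.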
An equilibrium is a set of processes $c_{t+1}$, $\tilde c_{t+1}$, $h_{t+1}$, $d_t$, $i_t$, $k_{t+1}$, $s_{it+1}$ for $i=0,1,2,3$, $R_t$, $R^d_t$, $w_t$, $u_t$, $y_t$, $tby_t$, $\lambda_t$, $q_t$, and $\nu_{it}$ for $i=0,1,2$ satisfying conditions (6.3)-(6.16), (6.18)-(6.21), and (6.23), all holding with equality, given $c_0$, $c_{-1}$, $y_{-1}$, $i_{-1}$, $i_0$, $h_0$, the processes for the exogenous innovations $\epsilon^{rus}_t$ and $\epsilon^r_t$, and equation (6.22) describing the evolution of the world interest rate.

10 In an economy like the one described by our theoretical model, where the debt-adjustment costs $\Psi(d_t)$ are incurred by households, the national income and product accounts would measure private consumption as $c_t + \Psi(d_t)$ and not simply as $c_t$. However, because of our maintained assumption that $\Psi'(\bar d) = 0$, it follows that both measures of private consumption are identical up to first order.

We adopt the following standard functional forms for preferences, technology, capital adjustment costs, and debt adjustment costs:
\[ U(c-\mu\tilde c, h) = \frac{\left[c-\mu\tilde c-\omega^{-1}h^{\omega}\right]^{1-\gamma}-1}{1-\gamma}, \]
\[ F(k,h) = k^{\alpha}h^{1-\alpha}, \]
\[ \Phi(x) = x - \frac{\phi}{2}\,(x-\delta)^{2}, \qquad \phi>0, \]
\[ \Psi(d) = \frac{\psi}{2}\,(d-\bar d)^{2}. \]
In calibrating the model, the time unit is meant to be one quarter. Following Mendoza (1991), we set $\gamma=2$, $\omega=1.455$, and $\alpha=0.32$. We set the steady-state real interest rate faced by the small economy in international financial markets at 11 percent per year. This value is consistent with an average US interest rate of about 4 percent and an average country premium of 7 percent, both of which are in line with actual data. We set the depreciation rate at 10 percent per year, a standard value in business-cycle studies. There remain four parameters to assign values to: $\psi$, $\phi$, $\eta$, and $\mu$. There are no readily available estimates of these parameters for emerging economies. We therefore proceed to estimate them.
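As a quick arithmetic check on the calibration (a sketch, not part of the text): in a steady state with constant marginal utility the Euler equation requires the discount factor to equal the inverse of the gross quarterly country rate, and the values reported in table 6.2 satisfy this restriction.

```python
# Steady-state consistency check: beta * R = 1, so the quarterly gross
# country rate R = 1.0277 pins down the subjective discount factor.
R_quarterly = 1.0277
beta = 1.0 / R_quarterly
print(round(beta, 3))                    # 0.973, the value in table 6.2

# Compounded annual net rate implied by the quarterly rate:
print(round(R_quarterly**4 - 1.0, 3))    # roughly 0.115, i.e. about 11 percent
```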
Our estimation procedure follows Christiano, Eichenbaum, and Evans (2001) and consists in choosing values for the four parameters so as to minimize the distance between the estimated impulse response functions shown in figure 6.2 and the corresponding impulse responses implied by the model.11 In our exercise we consider the first 24 quarters of the impulse response functions of 4 variables (output, investment, the trade balance, and the country interest rate) to 2 shocks (the US-interest-rate shock and the country-spread shock). Thus, we are setting 4 parameter values to match 192 points. Specifically, let $IR^e$ denote the $192\times 1$ vector of estimated impulse response functions and $IR^m(\psi,\phi,\eta,\mu)$ the corresponding vector of impulse responses implied by the theoretical model, which is a function of the four parameters we seek to estimate. Then our estimate of $(\psi,\phi,\eta,\mu)$ is given by
\[ \arg\min_{\{\psi,\phi,\eta,\mu\}}\;\left[IR^e - IR^m(\psi,\phi,\eta,\mu)\right]'\,\Sigma_{IR^e}^{-1}\,\left[IR^e - IR^m(\psi,\phi,\eta,\mu)\right], \]
where $\Sigma_{IR^e}$ is a $192\times 192$ diagonal matrix containing the variances of the impulse response functions along the diagonal. This matrix penalizes those elements of the estimated impulse response functions associated with large error intervals. The resulting parameter estimates are $\psi = 0.00042$, $\phi = 72.8$, $\eta = 1.2$, and $\mu = 0.2$.

11 A key difference between the exercise presented here and that in Christiano et al. is that here the estimation procedure requires fitting impulse responses to multiple sources of uncertainty (i.e., country-interest-rate shocks and world-interest-rate shocks), whereas in Christiano et al. the set of estimated impulse responses used in the estimation procedure is originated by a single shock.

The implied debt adjustment costs are small. For example, a 10 percent increase in $d_t$ over its steady-state value $\bar d$ maintained over one year has a resource cost of $4\times 10^{-6}$ percent of annual GDP. On the other hand, capital adjustment costs appear more significant.
For instance, starting in a steady-state situation, a 10 percent increase in investment for one year produces an increase in the capital stock of 0.88 percent. In the absence of capital adjustment costs, the capital stock would increase by 0.96 percent. The estimated value of $\eta$ implies that firms maintain a level of working capital equivalent to about 3.6 months of wage payments. Finally, the estimated degree of habit formation is modest compared to the values typically used to explain asset-price regularities in closed economies (e.g., Constantinides, 1990). Table 6.2 gathers all parameter values.

Table 6.2: Parameter Values

Symbol   Value     Description
β        0.973     Subjective discount factor
γ        2         Inverse of intertemporal elasticity of substitution
µ        0.204     Habit formation parameter
ω        1.455     1/(ω − 1) = labor supply elasticity
α        0.32      Capital elasticity of output
φ        72.8      Capital adjustment cost parameter
ψ        0.00042   Debt adjustment cost parameter
δ        0.025     Depreciation rate (quarterly)
η        1.2       Fraction of wage bill subject to working-capital constraint
R        2.77%     Steady-state real country interest rate (quarterly)

Theoretical and Estimated Impulse Responses

Figure 6.6 depicts impulse response functions of output, investment, the trade balance-to-GDP ratio, and the country interest rate.12 The left column shows impulse responses to a US-interest-rate shock ($\epsilon^{rus}_t$), and the right column shows impulse responses to a country-spread shock ($\epsilon^{r}_t$). Solid lines display empirical impulse response functions, and broken lines depict the associated two-standard-error bands. This information is reproduced from figures 6.2 and 6.3. Crossed lines depict theoretical impulse response functions. The model replicates four key qualitative features of the estimated impulse response functions: First, output and investment contract in response to either a US-interest-rate shock or a country-spread shock. Second, the trade balance improves in response to either shock.
Third, the country interest rate displays a hump-shaped response to an innovation in the US interest rate. Fourth, the country interest rate displays a monotonic response to a country-spread shock. We therefore conclude that the scheme used to identify the parameters of the VAR system (6.1) is indeed successful in isolating country-spread shocks and US-interest-rate shocks from the data.

12 The Matlab code used to produce theoretical impulse response functions is available on line at http://www.columbia.edu/~mu2166/uribe_yue_jie/uribe_yue_jie.html.

Figure 6.6: Theoretical and Estimated Impulse Response Functions

Note: The figure shows, for output, investment, TB/GDP, and the country interest rate, the estimated impulse responses together with the model-implied impulse responses. The first column displays impulse responses to a US-interest-rate shock ($\epsilon^{rus}$), and the second column displays impulse responses to a country-spread shock ($\epsilon^{r}$). Solid lines: estimated impulse responses; crossed lines: model impulse responses; broken lines: two-standard-error bands around the estimated impulse responses.

The Endogeneity of Country Spreads

According to the estimated process for the country interest rate given in equation (6.20), the country spread $\hat S_t = \hat R_t - \hat R^{us}_t$ moves in response to four types of variables: its own lagged value $\hat S_{t-1}$ (the autoregressive component), the exogenous country-spread shock $\epsilon^r_t$ (the sentiment component), current and past US interest rates ($R^{us}_t$ and $R^{us}_{t-1}$), and current and past values of a set of endogenous variables, $\hat y_t$, $\hat y_{t-1}$, $\hat\imath_t$, $\hat\imath_{t-1}$, $\widehat{tby}_t$, $\widehat{tby}_{t-1}$. A natural question is to what extent the endogeneity of country spreads contributes to exacerbating aggregate fluctuations in emerging countries. We address this question by means of two counterfactual exercises.
The first exercise aims at gauging the degree to which country spreads amplify the effects of world-interest-rate shocks. To this end, we calculate the volatility of endogenous macroeconomic variables due to US-interest-rate shocks in a world where the country spread does not directly depend on the US interest rate. Specifically, we assume that the process for the country interest rate is given by
\[ \hat R_t = 0.63\,\hat R_{t-1} + \hat R^{us}_t - 0.63\,\hat R^{us}_{t-1} - 0.79\,\hat y_t + 0.61\,\hat y_{t-1} + 0.11\,\hat\imath_t - 0.12\,\hat\imath_{t-1} + 0.29\,\widehat{tby}_t - 0.19\,\widehat{tby}_{t-1} + \epsilon^r_t. \tag{6.24} \]
This process differs from the one shown in equation (6.20) only in that the coefficient on the contemporaneous US interest rate is unity and the coefficient on the lagged US interest rate equals $-0.63$, which is the negative of the coefficient on the lagged country interest rate. This parameterization has two properties of interest. First, it implies that, given the past value of the country spread, $\hat S_{t-1} = \hat R_{t-1} - \hat R^{us}_{t-1}$, the current country spread, $\hat S_t$, does not directly depend upon current or past values of the US interest rate. Second, the above specification of the country-interest-rate process preserves the dynamics of the model in response to country-spread shocks. The process for the US interest rate is assumed to be unchanged (i.e., given by equation (6.22)).

Table 6.3: Endogeneity of Country Spreads and Aggregate Instability

                 Std. Dev. due to ε^rus          Std. Dev. due to ε^r
           Baseline   No R^us   No ŷ, ı̂,    Baseline   No R^us   No ŷ, ı̂,
Variable    Model               or tby       Model               or tby
ŷ           1.110     0.420     0.784        0.819     0.819     0.639
ı̂           2.245     0.866     1.580        1.547     1.547     1.175
tby         1.319     0.469     0.885        0.663     0.663     0.446
R           3.509     1.622     2.623        4.429     4.429     3.983
S           2.515     0.347     1.640        4.429     4.429     3.983

Note: The variable S denotes the country spread and is defined as S = R/R^us. A hat on a variable denotes log-deviation from its non-stochastic steady-state value.
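The entries of table 6.3 can be cross-checked against the magnitudes discussed in the surrounding text. The sketch below (illustrative; it assumes the two innovations are mutually independent, so their contributions add in variance, which is how a variance decomposition is constructed) uses the output row:

```python
import math

# Output row of table 6.3: standard deviations due to each shock,
# baseline versus the counterfactual in which spreads do not respond
# to domestic conditions (the "No y, i, or tby" columns).
base = {"rus": 1.110, "r": 0.819}
no_feedback = {"rus": 0.784, "r": 0.639}

total = lambda d: math.sqrt(d["rus"]**2 + d["r"]**2)
print(round(1 - total(no_feedback) / total(base), 2))  # ~0.27: about one fourth

# Amplification through the R^us terms in (6.20): without them, output
# volatility due to eps^rus drops from 1.110 to 0.420.
print(round(1 - 0.420 / 1.110, 2))                     # ~0.62: about two thirds
```

The two printed ratios match the "about one fourth" and "about two thirds" figures quoted in the discussion of the counterfactuals.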
We note that in conducting this and the next counterfactual exercises we do not reestimate the VAR system. The reason is that doing so would alter the estimated process of the country-spread shock $\epsilon^r_t$. This would amount to introducing two changes at the same time, namely, changes in both the endogenous and the sentiment components of the country-spread process.

The precise question we wish to answer is: what process for $\hat R_t$ induces higher volatility in macroeconomic variables in response to US-interest-rate shocks, the one given in equation (6.20) or the one given in equation (6.24)? To answer this question, we feed the theoretical model first with equation (6.20) and then with equation (6.24) and in each case compute a variance decomposition of output and other endogenous variables of interest. The result is shown in table 6.3. We find that when the country spread is assumed not to respond directly to variations in the US interest rate (i.e., under the process for $R_t$ given in equation (6.24)), the standard deviation of output and the trade balance-to-output ratio explained by US-interest-rate shocks is about two thirds smaller than in the baseline scenario (i.e., when $R_t$ follows the process given in equation (6.20)). This indicates that the aggregate effects of US-interest-rate shocks are strongly amplified by the dependence of country spreads on US interest rates.

A second counterfactual experiment we wish to conduct aims to assess the macroeconomic consequences of the fact that country spreads move in response to changes in domestic variables, such as output and the external accounts. To this end, we use our theoretical model to compute the volatility of endogenous domestic variables in an environment where country spreads do not respond to domestic variables. Specifically, we replace the process for $R_t$ given in equation (6.20) with the process
\[ \hat R_t = 0.63\,\hat R_{t-1} + 0.50\,\hat R^{us}_t + 0.35\,\hat R^{us}_{t-1} + \epsilon^r_t. \]
Table 6.3 displays the outcome of this exercise. We find that the equilibrium volatility of output, investment, and the trade balance-to-output ratio explained jointly by US-interest-rate shocks and country-spread shocks ($\epsilon^{rus}_t$ and $\epsilon^r_t$) falls by about one fourth when the feedback from endogenous domestic variables to country spreads is shut off.13 We conclude that the fact that country spreads respond to the state of domestic business conditions significantly exacerbates aggregate instability in emerging countries.

13 Ideally, this particular exercise should be conducted in an environment with a richer battery of shocks capable of explaining a larger fraction of observed business cycles than that accounted for by $\epsilon^{rus}_t$ and $\epsilon^r_t$ alone.

Chapter 7

The Terms of Trade

Three key stylized facts documented in chapter 1 are: (1) that emerging market economies are about twice as volatile as developed economies; (2) that private consumption spending is more volatile than output in emerging countries, but less volatile than output in developed countries; and (3) that the trade-balance-to-output ratio is significantly more countercyclical in emerging markets than it is in developed countries. Explaining this striking contrast between emerging and industrialized economies is at the top of the research agenda in small-open-economy macroeconomics. Broadly, the available theoretical explanations fall into two categories. One is that emerging market economies are subject to more volatile shocks than are developed countries. The second category of explanations argues that in emerging countries government policy tends to amplify business-cycle fluctuations, whereas in developed countries public policy tends to mitigate aggregate instability. This and the following two chapters provide a progress report on the identification and quantification of exogenous sources of business cycles in small open economies.
The present chapter concentrates on terms-of-trade shocks.

Defining the Terms of Trade

The terms of trade are defined as the relative price of exports in terms of imports. Letting $P^x_t$ and $P^m_t$ denote indices of world prices of exports and imports for a particular country, the terms of trade for that country are given by
\[ tot_t \equiv \frac{P^x_t}{P^m_t}. \]
Typically, emerging countries specialize in exports of a few primary commodities, such as metals, agricultural products, or oil. At the same time, emerging countries are normally small players in the world markets for the goods they export or import. It follows that for many small countries, the terms of trade can be regarded as an exogenous source of aggregate fluctuations. Because the prices of primary commodities display large fluctuations over time, the terms of trade have the potential to be an important source of business cycles in developing countries.

Empirical Regularities

Table 7.1 displays summary statistics relating the terms of trade to output, the components of aggregate demand, and the real exchange rate in the postwar era. In the table, the real exchange rate (rer) is defined as the relative price of consumption in terms of importable goods. Specifically, let $P^c_t$ denote a domestic CPI index. Then the real exchange rate is given by $P^c_t/P^m_t$. A number of empirical regularities emerge from the table:

1. The terms of trade are twice as volatile in emerging countries as in developed countries, and they are almost twice as volatile in oil-exporting countries as in developing countries.

2. The terms of trade are half as volatile as output in developed countries, 75 percent as volatile as output in developing countries, and 150 percent as volatile as output in oil-exporting countries.
Open Economy Macroeconomics, Chapter 7

Table 7.1: The Terms of Trade and Business Cycles

Summary Statistic      Developed    Developing    Oil-Exporting
                       Countries    Countries     Countries
σ(tot)                 4.70         10.0          18.0
ρ(tot_t, tot_{t−1})    0.47         0.40          0.50
σ(tot)/σ(y)            0.52         0.77          1.40
ρ(tot, y)              0.78         0.39          0.30
ρ(tot, c)              0.74         0.34          0.19
ρ(tot, i)              0.67         0.38          0.45
ρ(tot, tb)             0.24         0.28          0.33
ρ(tot, rer)            0.70         0.07          0.42

Source: Mendoza (1995), tables 1 and 3-6. Note: tot, y, c, i, and tb denote, respectively, the terms of trade, output, consumption, investment, and the trade balance. The sample is 1955 to 1990 at annual frequency. The terms of trade are measured as the ratio of export to import unit values with 1900=100. All other variables are measured per capita at constant import prices. All variables are expressed in percent deviations from an HP trend constructed using a smoothing parameter of 100. The group of developed countries is formed by the US, UK, France, Germany, Italy, Canada, and Japan. The group of developing countries is formed by Argentina, Brazil, Chile, Mexico, Peru, Venezuela, Taiwan, India, Indonesia, Korea, Philippines, and Thailand. The group of oil-exporting countries is formed by Mexico, Venezuela, Saudi Arabia, Algeria, Cameroon, and Nigeria.

3. The terms of trade are procyclical. They are twice as procyclical in developed countries as in developing countries.

4. The terms of trade display positive but small serial correlation.

5. The correlation between the terms of trade and the trade balance is positive but small.

6. The terms of trade are positively correlated with the real exchange rate. This correlation is high for developed countries but almost nil for less developed countries.

The information provided in table 7.1 is mute on the importance of terms-of-trade shocks in explaining movements in aggregate activity.
Later in this chapter, we attempt to answer this question by combining the empirical information contained in table 7.1 with the theoretical predictions of a fully specified dynamic general equilibrium model of the open economy.

TOT-TB Correlation: Two Early Explanations

The effect of terms-of-trade shocks on the trade balance is an old subject of investigation. More than half a century ago, Harberger (1950) and Laursen and Metzler (1950) formalized, within the context of a Keynesian model, the conclusion that rising terms of trade should be associated with an improving trade balance. This conclusion became known as the Harberger-Laursen-Metzler (HLM) effect. This view remained more or less unchallenged until the early 1980s, when Obstfeld (1982) and Svensson and Razin (1983), using a dynamic optimizing model of the current account, concluded that the effect of terms-of-trade shocks on the trade balance depends crucially on the perceived persistence of the terms of trade. In their model, a positive relation between the terms of trade and the trade balance (i.e., the HLM effect) weakens as the terms of trade become more persistent and may even be overturned if terms-of-trade shocks are of a permanent nature. This view became known as the Obstfeld-Razin-Svensson (ORS) effect. Let us look at the HLM and ORS effects in some more detail.

The Harberger-Laursen-Metzler Effect

A simple way to obtain a positive relation between the terms of trade and the trade balance in the context of a Keynesian model is by starting with the national accounting identity
\[ y_t = c_t + g_t + i_t + x_t - m_t, \]
where $y_t$ denotes output, $c_t$ denotes private consumption, $g_t$ denotes public consumption, $i_t$ denotes private investment, $x_t$ denotes exports, and $m_t$ denotes imports. Consider the following behavioral equations defining the dynamics of each component of aggregate demand. Public consumption and private investment are assumed to be independent of output.
For simplicity, we will assume that these two variables are constant over time and given by $g_t = \bar g$ and $i_t = \bar i$, respectively, where $\bar g$ and $\bar i$ are parameters. Consumption is assumed to be an increasing linear function of output,
\[ c_t = \bar c + \alpha y_t, \]
where $\alpha\in(0,1)$ and $\bar c>0$ are parameters. Imports are assumed to be proportional to output,
\[ m_t = \mu y_t, \]
with $\mu\in(0,1)$. In the jargon of the 1950s, the parameters $\alpha$ and $\mu$ are referred to as the marginal propensities to consume and import, respectively, whereas the term $\bar c+\bar g+\bar i$ is referred to as the autonomous component of domestic absorption. Output as well as all components of aggregate demand are expressed in terms of import goods. The quantity of goods exported in period $t$ is denoted by $q_t$. Thus, the value of exports in terms of importables, $x_t$, is given by
\[ x_t = tot_t\, q_t, \]
where $tot_t$ denotes the terms of trade. The terms of trade are assumed to evolve exogenously, and the quantity of goods exported, $q_t$, is assumed to be constant and given by $q_t=\bar q$, where $\bar q$ is a positive parameter. Using the behavioral equations to eliminate $c_t$, $i_t$, $g_t$, $x_t$, and $m_t$ from the national income identity, and solving for output, yields
\[ y_t = \frac{\bar c+\bar g+\bar i+tot_t\,\bar q}{1+\mu-\alpha}. \]
Letting $tb_t\equiv x_t-m_t$ denote the trade balance, we can write
\[ tb_t = \frac{1-\alpha}{1+\mu-\alpha}\,tot_t\,\bar q - \frac{\mu(\bar c+\bar g+\bar i)}{1+\mu-\alpha}. \]
Clearly, this theory implies that an improvement in the terms of trade (an increase in $tot_t$) gives rise to an expansion in the trade surplus. This positive relation between the terms of trade and the trade balance is stronger the larger is the volume of exports, $\bar q$, the smaller is the marginal propensity to import, $\mu$, and the smaller is the marginal propensity to consume, $\alpha$. The reason why a higher $\mu$ reduces the TOT multiplier is that a higher value of $\mu$ weakens the endogenous expansion in aggregate demand in response to an exogenous increase in exports, as a larger fraction of income is used to buy foreign goods.
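The multiplier formulas just derived can be checked numerically. A minimal sketch (the parameter values are purely illustrative, not from the text):

```python
# Keynesian HLM model:
#   y  = (c0 + g + i + tot*q) / (1 + mu - alpha)
#   tb = (1 - alpha)*tot*q/(1 + mu - alpha) - mu*(c0 + g + i)/(1 + mu - alpha)
# where c0 is the autonomous-consumption parameter.
def output(tot, c0, g, i, q, alpha, mu):
    return (c0 + g + i + tot * q) / (1.0 + mu - alpha)

def trade_balance(tot, c0, g, i, q, alpha, mu):
    denom = 1.0 + mu - alpha
    return (1.0 - alpha) * tot * q / denom - mu * (c0 + g + i) / denom

# Illustrative parameter values:
c0, g, i, q, alpha, mu = 1.0, 0.5, 0.5, 1.0, 0.6, 0.2

# An improvement in the terms of trade raises the trade balance (HLM effect)...
tb_low = trade_balance(1.0, c0, g, i, q, alpha, mu)
tb_high = trade_balance(1.2, c0, g, i, q, alpha, mu)
print(tb_high > tb_low)          # True

# ...and the response is weaker the larger the import propensity mu:
slope = lambda mu_: trade_balance(1.2, c0, g, i, q, alpha, mu_) - \
                    trade_balance(1.0, c0, g, i, q, alpha, mu_)
print(slope(0.2) > slope(0.4))   # True
```

Note that the trade-balance formula is just `tot*q - mu*output(...)`, so either expression can be used interchangeably.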
Similarly, a larger value of $\alpha$ reduces the TOT multiplier because it exacerbates the endogenous response of aggregate demand to a TOT shock through private consumption. It is worth noting that in the context of this model, the sign of the effect of a TOT shock on the trade balance is independent of whether the terms-of-trade shocks are permanent or temporary in nature. This is the main contrast with the Obstfeld-Razin-Svensson effect.

The Obstfeld-Razin-Svensson Effect

The ORS effect is cast within a dynamic optimizing theoretical framework that differs fundamentally from the reduced-form Keynesian model we used to derive the HLM effect. Consider the small, open, endowment economy studied in chapter 2. This is an economy inhabited by an infinitely lived representative household with preferences described by the intertemporal utility function given in (??). Suppose that the good the household consumes is different from the good it is endowed with. The household therefore exports the totality of its endowment and imports the totality of its consumption. Let $tot_t$ denote the relative world price of exported goods in terms of imported goods, or the terms of trade. Assume for simplicity that the endowment of exportable goods is constant and normalized to unity, $y_t=1$ for all $t$. The resource constraint is then given by
\[ d_t = (1+r)d_{t-1} + c_t - tot_t. \]
The borrowing constraint given in (??) prevents the household from engaging in Ponzi games. The economy is small in world product markets, so it takes the evolution of $tot_t$ as exogenous. The model is therefore identical to the stochastic-endowment economy studied in chapter 2, with $tot_t$ taking the place of $y_t$. We can then use the results derived in chapter 2 to draw the following conclusion: if the terms of trade are stationary, then an increase in the terms of trade produces an improvement in the current account. Agents save in order to ensure higher future consumption.
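The role of persistence can be made concrete with a back-of-the-envelope calculation (a sketch; the AR(1) process and the certainty-equivalence consumption rule are illustrative assumptions, not spelled out at this point in the text). With $\beta(1+r)=1$ and $tot_t = \rho\,tot_{t-1} + \epsilon_t$, permanent-income consumption implies a trade-balance response to a unit terms-of-trade innovation of $(1-\rho)/(1+r-\rho)$:

```python
# ORS logic in the endowment economy above: under certainty equivalence,
# c_t = -r*d_{t-1} + r*tot_t/(1 + r - rho), so the trade balance
# tb_t = tot_t - c_t responds to a terms-of-trade innovation by
# (1 - rho)/(1 + r - rho).
def tb_response(rho, r=0.04):
    return (1.0 - rho) / (1.0 + r - rho)

for rho in (0.0, 0.5, 0.9, 1.0):
    print(rho, round(tb_response(rho), 3))
# The response is positive for transitory shocks (rho < 1), shrinks as
# persistence rises, and vanishes when the terms of trade follow a random walk.
```

This is exactly the ORS prediction: the HLM-style positive comovement weakens with persistence and disappears in the limit of a permanent shock.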
When the terms of trade are nonstationary, an improvement in the terms of trade induces a trade balance deficit. In this case, the value of income is expected to grow over time, so agents can afford to assume higher current debts without sacrificing future expenditures. This conclusion can be extended to a model with endogenous labor supply and capital accumulation. A simple way to do this is to modify the RBC model of chapter 4 by assuming again that households do not consume the good they produce. In this case, the productivity shock $A_t$ can be interpreted as a terms-of-trade shock. An increase in the terms of trade produces an improvement in the trade balance if the terms-of-trade shock is transitory, but as the serial correlation of the terms-of-trade shock increases, an improvement in the terms of trade can lead to a deterioration in the current account driven by investment expenditures.

Is the ORS effect borne out in the data? If so, we should observe that countries experiencing more persistent terms-of-trade shocks display lower correlations between the terms of trade and the trade balance than countries facing less persistent terms-of-trade shocks. Figure 7.1 plots the serial correlation of the terms of trade against the correlation of the trade balance with the terms of trade for 30 countries, including the G7 countries and 23 selected developing countries from Latin America, Africa, East Asia, and the Middle East. The 30 observations were taken from Mendoza (1995), table 1. The cloud of points, shown with circles, displays no pattern. The OLS fit of the 30 points, shown with a solid line, displays a small negative slope of $-0.14$. The sign of the slope is indeed in line with the ORS effect: as terms-of-trade shocks become more persistent, they should be expected to induce a smaller response in the trade balance. It is apparent in the graph, however, that the negative slope in the OLS regression is driven by a single observation, Argentina, the only country in the sample with a negative serial correlation of the terms of trade. Eliminating
It is apparent in the graph, however, that the negative slope in the OLS regression is driven by a single observation, Argentina, the only country in the sample with a negative serial correlation of the terms of trade. Eliminating Open Economy Macroeconomics, Chapter 7 Figure 7.1: TOT Persistence and TB-TOT Correlations 0.8 TB−TOT Correlation −0.6 −0.1 0.2 0.3 0.4 TOT Serial Correlation Source: Mendoza (1995), table 1. Note: Each point corresponds to one country. The TOT serial correlation and the TB-TOT correlation are computed over the period 1955-1990. The sample includes the G-7 countries (United States, United Kingdom, France, Germany, Italy, Canada, and Japan), 6 countries from Latin America, (Argentina, Brazil, Chile, Mexico, Peru, Venezuela), 3 countries from the Middle East (Israel, Saudi Arabia, and Egypt), 6 countries from Asia (Taiwan, India, Indonesia, Korea, Philippines, and Thailand), and 8 countries from Africa (Algeria, Cameroon, Zaire, Kenya, Morocco, Nigeria, Sudan, and Tunisia). The solid line is the OLS fit and is given by corr(T B, T OT ) = 0.35 − 0.14ρ(T OT ). The dashed line is the OLS fit after eliminating Argentina from the sample and is given by corr(T B, T OT ) = 0.23 + 0.12ρ(T OT ). The dashed-dotted line is the OLS fit after eliminating the G7 countries, Saudi Arabia, and Argentina from the sample and is given by corr(T B, T OT ) = 0.28 + 0.03ρ(T OT ). M. Uribe and S. Schmitt-Groh´e Argentina from the sample one obtains a positive OLS slope of 0.12.1 The corresponding fitted relationship is shown with a broken line on figure 7.1. A number of countries in figure 7.1 are likely to be large players in the world markets for the goods and services they import and/or export. Countries in this group would include all of the G7 nations, the largest economies in the world, and Saudi Arabia, a major oil exporter. For these countries, the terms of trade are not likely to be exogenous. 
Eliminating these 8 countries (as well as the outlier Argentina) from the sample gives us a better idea of what the relation between the TB-TOT correlation and TOT persistence looks like for small emerging countries that take their terms of trade exogenously. The fitted line using this reduced sample has a negligible slope equal to 0.03 and is shown with a dash-dotted line in figure 7.1. We conclude that the observed relationship between the TB-TOT correlation and the persistence of the TOT is close to nil.

Does this conclusion suggest that the empirical evidence presented here is against the Obstfeld-Razin-Svensson effect? Not necessarily. The ORS effect requires isolating the effect of TOT shocks on the trade balance. The raw data are in principle driven by a multitude of shocks, of which the terms-of-trade shock is just one. Moreover, some of these shocks may directly affect both the trade balance and the terms of trade. Not controlling for these shocks may result in erroneously attributing part of their effect on the trade balance to the terms of trade. A case in point is given by world-interest-rate shocks. High world interest rates may be associated with depressed economic activity in developed and emerging economies alike. In turn, low levels of economic activity in the developed world are likely to be associated with a weak demand for primary commodities and, as a result, with deteriorated terms of trade for the emerging countries producing those commodities. At the same time, high world interest rates are associated with contractions in aggregate demand and improvements in the trade balance in emerging countries. Under this scenario, the terms of trade and the trade balance move at the same time, but attributing all of the movement in the trade balance to changes in the terms of trade would be clearly misleading.

1 Indeed, Argentina is the only country whose elimination from the sample results in a positive slope.
As another example, suppose that domestic technology shocks are correlated with technology shocks in another country or set of countries. Suppose further that this other country or set of countries generates a substantial fraction of the demand for exports or the supply of imports of the country in question. In this case, judging the empirical validity of the ORS effect only on the grounds of raw correlations would be misplaced.

An important step in the process of isolating terms-of-trade shocks (or any kind of shock, for that matter) is identification. Data analysis based purely on statistical methods will in general not result in a successful identification of terms-of-trade shocks. Economic theory must be at center stage in the identification process. The following exercise, which follows Mendoza (1995), represents an early step in the task of identifying the effects of terms-of-trade shocks on economic activity in emerging economies.

Terms-of-Trade Shocks in an RBC Model

Consider expanding the real-business-cycle model of chapter 4 to allow for terms-of-trade shocks. In doing this, we follow the work of Mendoza (1995). The household block of the model is identical to that of the standard RBC model studied in chapter 4. The main difference with the model of chapter 4 is that the model studied here features three sectors: a sector producing importable goods, a sector producing exportable goods, and a sector producing nontradable goods. An importable good is either an imported good or a good that is produced domestically but is highly substitutable with a good that is imported. Similarly, an exportable good is either an exported good or a good that is sold domestically but is highly substitutable with a good that is exported. A nontradable good is a good that is neither exportable nor importable.
The economy is populated by a large number of identical households with preferences described by the utility function

E_0 \sum_{t=0}^{\infty} \theta_t U(c_t, h_t),   (7.1)

where c_t denotes consumption, h_t denotes labor effort, and U is a period utility function taking the form

U(c, h) = \frac{[c(1-h)^{\omega}]^{1-\gamma}}{1-\gamma}.

The variable \theta_t is a time-varying discount factor and is assumed to evolve according to the following familiar law of motion:

\theta_{t+1} = \theta_t \beta(c_t, h_t),   (7.2)

where the function \beta is assumed to take the form

\beta(c, h) = [1 + c(1-h)^{\omega}]^{-\beta}.

We established in chapter 4 that the endogeneity of the discount factor serves the purpose of rendering the deterministic steady state independent of the country's initial net foreign asset position. Households offer labor services for a wage w_t and own the stock of capital, k_t, which they rent at the rate u_t. The stock of capital evolves according to the following law of motion:

k_{t+1} = (1-\delta)k_t + i_t - \Phi(k_{t+1} - k_t),   (7.3)

where i_t denotes gross investment, which is assumed to be an importable good. The parameter \delta \in [0,1] denotes the capital depreciation rate. The function \Phi introduces capital adjustment costs and is assumed to satisfy \Phi(0) = \Phi'(0) = 0 and \Phi'' > 0. Under these assumptions, the steady-state level of capital is not affected by the presence of adjustment costs. As discussed in chapter 4, capital adjustment costs help curb the volatility of investment in small open economy models like the one studied here. Households are assumed to be able to borrow or lend freely in international financial markets by buying or issuing risk-free bonds denominated in units of importable goods and paying the constant interest rate r^*. Letting d_t denote the debt position assumed by the household in period t and p^c_t denote the price of the consumption good, the period budget constraint of the household can be written as

d_t = (1+r^*)d_{t-1} + p^c_t c_t + i_t - w_t h_t - u_t k_t.   (7.4)
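The chapter only imposes the three restrictions \Phi(0) = \Phi'(0) = 0 and \Phi'' > 0 on the adjustment-cost function. A common specification satisfying them is the quadratic form \Phi(x) = (\phi/2)x^2; the quadratic form is an assumption here, as is the use of the value \phi = 0.028 reported in table 7.2. A minimal numerical check of the three restrictions:

```python
# Quadratic adjustment-cost function Phi(x) = (phi/2)*x^2 (an assumed functional form;
# the text only states Phi(0) = Phi'(0) = 0 and Phi'' > 0). phi = 0.028 as in table 7.2.
phi = 0.028

def Phi(x):
    return 0.5 * phi * x**2

h = 1e-6
Phi_prime_0 = (Phi(h) - Phi(-h)) / (2*h)              # central difference: Phi'(0)
Phi_second  = (Phi(h) - 2*Phi(0.0) + Phi(-h)) / h**2  # second difference: Phi''(0)

print(Phi(0.0), Phi_prime_0, round(Phi_second, 6))
```

Because \Phi(0) = \Phi'(0) = 0, adjustment costs and their marginal effect vanish in the steady state (where k_{t+1} = k_t), which is why the steady-state capital stock is unaffected by their presence.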
The relative prices p^c_t, w_t, and u_t are expressed in terms of importable goods, which serve the role of numeraire. Households are subject to a no-Ponzi-game constraint of the form

\lim_{j\to\infty} \frac{E_t d_{t+j}}{(1+r^*)^j} \le 0.   (7.5)

The household seeks to maximize the utility function (7.1) subject to (7.2)-(7.5). Letting \theta_t \eta_t and \theta_t \lambda_t denote the Lagrange multipliers on (7.2) and (7.4), the first-order conditions of the household's maximization problem are (7.2), (7.4), (7.5) holding with equality, and

U_c(c_t, h_t) - \eta_t \beta_c(c_t, h_t) = \lambda_t p^c_t,

-U_h(c_t, h_t) + \eta_t \beta_h(c_t, h_t) = \lambda_t w_t,

\lambda_t = \beta(c_t, h_t)(1+r^*) E_t \lambda_{t+1},

\lambda_t [1 + \Phi'(k_{t+1}-k_t)] = \beta(c_t, h_t) E_t \lambda_{t+1} [u_{t+1} + 1 - \delta + \Phi'(k_{t+2}-k_{t+1})],

\eta_t = -E_t U(c_{t+1}, h_{t+1}) + E_t \eta_{t+1} \beta(c_{t+1}, h_{t+1}).

Production of Consumption Goods

The consumption good, c_t, is produced by domestic firms. These firms operate a CES production function that takes tradable consumption goods, c^T_t, and nontradable consumption goods, c^N_t, as inputs. Formally,

c_t = \left[\chi (c^T_t)^{-\mu} + (1-\chi)(c^N_t)^{-\mu}\right]^{-1/\mu},   (7.11)

with \mu > -1. Firms operate in perfectly competitive product and input markets. They choose output and inputs to maximize profits, which are given by

p^c_t c_t - p^T_t c^T_t - p^N_t c^N_t,

where p^T_t and p^N_t denote, respectively, the relative prices of tradable and nontradable consumption goods in terms of importable goods. The first-order conditions associated with this profit-maximization problem are (7.11) and

c^N_t = c^T_t \left[\frac{1-\chi}{\chi}\,\frac{p^T_t}{p^N_t}\right]^{\frac{1}{1+\mu}},

c_t = c^T_t \left[\frac{1}{\chi}\,\frac{p^T_t}{p^c_t}\right]^{\frac{1}{1+\mu}}.

It is clear from the first of these efficiency conditions that the elasticity of substitution between tradable and nontradable goods is given by 1/(1+\mu).
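The first efficiency condition can be verified numerically: minimizing the cost of producing one unit of the consumption composite by brute-force search over input mixes should reproduce the ratio c^N/c^T = [((1-\chi)/\chi)(p^T/p^N)]^{1/(1+\mu)}. The parameter and price values below are purely illustrative, not taken from the calibration.

```python
# Numerical check of the CES efficiency condition
# c^N/c^T = [((1-chi)/chi)*(p^T/p^N)]^(1/(1+mu)), with illustrative parameters.
chi, mu = 0.35, 0.35       # CES weight and curvature (mu > -1; here mu > 0)
pT, pN = 1.0, 2.0          # relative prices in terms of importables

# Cost-minimizing input mix for one unit of the composite: for each c^T on a grid,
# c^N follows from chi*cT^(-mu) + (1-chi)*cN^(-mu) = 1 (i.e., c = 1).
best = None
cT = chi**(1.0/mu) + 1e-3  # feasibility requires chi*cT^(-mu) < 1
while cT < 10.0:
    inner = 1.0 - chi * cT**(-mu)
    cN = (inner / (1.0 - chi))**(-1.0/mu)
    cost = pT*cT + pN*cN
    if best is None or cost < best[0]:
        best = (cost, cT, cN)
    cT += 1e-4

ratio_grid = best[2] / best[1]
ratio_formula = ((1-chi)/chi * pT/pN)**(1.0/(1+mu))
print(round(ratio_grid, 3), round(ratio_formula, 3))
```

The grid-search minimizer and the closed-form ratio agree, confirming the demand condition derived from the firm's first-order conditions.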
From the second optimality condition, one observes that if the elasticity of substitution between tradables and nontradables is less than unity (that is, if \mu > 0), then the share of tradables in total consumption, given by p^T_t c^T_t/(p^c_t c_t), increases as the relative price of tradables in terms of consumption, p^T_t/p^c_t, increases.

Production of Tradable Consumption Goods

Tradable consumption goods, denoted c^T_t, are produced using importable consumption goods, c^M_t, and exportable consumption goods, c^X_t, via a Cobb-Douglas production function. Formally,

c^T_t = (c^X_t)^{\alpha} (c^M_t)^{1-\alpha},   (7.14)

where \alpha \in (0,1) is a parameter. Firms are competitive and aim at maximizing profits, which are given by

p^T_t c^T_t - p^X_t c^X_t - c^M_t,

where p^X_t denotes the relative price of exportable goods in terms of importable goods, or the terms of trade. Note that because the importable good plays the role of numeraire, the relative price of importables in terms of the numeraire is always unity (p^M_t = 1). The optimality conditions associated with this problem are (7.14) and

\frac{p^X_t c^X_t}{p^T_t c^T_t} = \alpha,

\frac{c^M_t}{p^T_t c^T_t} = 1-\alpha.

These optimality conditions state that the shares of consumption of exportables and importables in total consumption expenditure in tradable goods are constant and equal to \alpha and 1-\alpha, respectively. This implication is a consequence of the assumption of a Cobb-Douglas technology in the production of tradable consumption.

Production of Importable, Exportable, and Nontradable Goods

Exportable and importable goods are produced with capital as the only input, whereas nontradable goods are produced using labor services only. Formally, the three production technologies are given by

y^X_t = A^X_t (k^X_t)^{\alpha_X},

y^M_t = A^M_t (k^M_t)^{\alpha_M},

y^N_t = A^N_t (h^N_t)^{\alpha_N},

where y^X_t denotes output of exportable goods, y^M_t denotes output of importable goods, and y^N_t denotes output of nontradable goods.
The factors A^i_t denote exogenous and stochastic technology shocks in sectors i = X, M, N. The variable k^i_t denotes the capital stock in sector i = X, M, and the variable h^N_t denotes labor services employed in the nontradable sector. Firms demand input quantities to maximize profits, which are given by

p^X_t y^X_t + y^M_t + p^N_t y^N_t - w_t h^N_t - u_t (k^X_t + k^M_t).

The optimality conditions associated with this problem are

\frac{u_t k^X_t}{p^X_t y^X_t} = \alpha_X,

\frac{u_t k^M_t}{y^M_t} = \alpha_M,

\frac{w_t h^N_t}{p^N_t y^N_t} = \alpha_N.

According to these expressions, and as a consequence of the assumption of Cobb-Douglas technologies, input shares are constant.

Market Clearing

In equilibrium, the markets for capital, labor, and nontradables must clear. That is,

k_t = k^X_t + k^M_t,

h_t = h^N_t,

c^N_t = y^N_t.

Also, in equilibrium the evolution of the net foreign debt position of the economy is given by

d_t = (1+r^*)d_{t-1} - p^X_t (y^X_t - c^X_t) - y^M_t + c^M_t + i_t.   (7.26)

Driving Forces

There are four sources of uncertainty in this economy: one productivity shock in each of the three sectors (importable, exportable, and nontradable), and the terms of trade. We assume that all shocks follow autoregressive processes of order one. Mendoza (1995) imposes four restrictions on the joint distribution of the exogenous shocks: (1) all four shocks share the same persistence; (2) the sectoral productivity shocks are perfectly correlated; (3) the technology shocks affecting the production of importables and exportables are identical; and (4) innovations to productivity shocks and terms-of-trade shocks are allowed to be correlated. These assumptions give rise to the following laws of motion:

\ln p^X_t = \rho \ln p^X_{t-1} + \epsilon^p_t;  \epsilon^p_t \sim N(0, \sigma^2_p),

\ln A^X_t = \rho \ln A^X_{t-1} + \epsilon^T_t,

\ln A^M_t = \rho \ln A^M_{t-1} + \epsilon^T_t,

\ln A^N_t = \rho \ln A^N_{t-1} + \epsilon^N_t,

\epsilon^T_t = \psi_T \epsilon^p_t + \nu^T_t;  \nu^T_t \sim N(0, \sigma^2_{\nu^T}),  E(\epsilon^p_t \nu^T_t) = 0,

\epsilon^N_t = \psi_N \epsilon^T_t.
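Under this innovation structure, the correlation between productivity innovations and terms-of-trade innovations is \psi_T \sigma_p / \sqrt{\psi_T^2\sigma_p^2 + \sigma^2_{\nu^T}}. A minimal Monte Carlo sketch checks this, using the parameter values reported in the developed-country column of table 7.2:

```python
import math
import random

# Monte Carlo check that eps^T_t = psi_T*eps^p_t + nu^T_t implies
# corr(eps^T, eps^p) = psi_T*sigma_p / sqrt(psi_T^2*sigma_p^2 + sigma_nu^2).
# Parameter values from the developed-country column of table 7.2.
psi_T, sigma_p, sigma_nu = 0.067, 0.041, 0.017

random.seed(0)
n = 200_000
eps_p = [random.gauss(0.0, sigma_p) for _ in range(n)]
eps_T = [psi_T*ep + random.gauss(0.0, sigma_nu) for ep in eps_p]

# Sample correlation, computed by hand to stay dependency-free.
mp, mT = sum(eps_p)/n, sum(eps_T)/n
cov = sum((a-mp)*(b-mT) for a, b in zip(eps_p, eps_T)) / n
var_p = sum((a-mp)**2 for a in eps_p) / n
var_T = sum((b-mT)**2 for b in eps_T) / n
sample_corr = cov / math.sqrt(var_p * var_T)

implied_corr = psi_T*sigma_p / math.sqrt((psi_T*sigma_p)**2 + sigma_nu**2)
print(round(sample_corr, 3), round(implied_corr, 3))
```

The sample correlation matches the closed-form expression up to simulation error.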
We are now ready to define a competitive equilibrium.

Competitive Equilibrium

A stationary competitive equilibrium is a set of stationary processes {c_t, c^T_t, c^X_t, c^M_t, c^N_t, h_t, h^N_t, y^X_t, y^M_t, y^N_t, k_t, k^X_t, k^M_t, i_t, d_t, p^c_t, p^N_t, p^T_t, w_t, u_t, \eta_t, \lambda_t}_{t=0}^{\infty} satisfying equations (7.3) and (7.6)-(7.26), given the initial conditions k_0 and d_{-1} and the exogenous processes {A^X_t, A^M_t, A^N_t, p^X_t}_{t=0}^{\infty}.

Mendoza (1995) presents two calibrations of the model, one matching key macroeconomic relations in developed countries, and the other matching key macroeconomic relations in developing countries. In calibrating the driving forces of the developed-country version of the model, the parameters \rho and \sigma_p are set to match the average serial correlation and standard deviation of the terms of trade for the group of G7 countries. Using the information presented in table 1 of Mendoza (1995) yields

\rho = 0.473

and

\sigma_p = 0.047\sqrt{1-\rho^2}.

Using estimates of productivity shocks in five industrialized countries by Stockman and Tesar (1995), Mendoza (1995) sets the volatility of productivity shocks in the importable and exportable sectors at 0.019 and the volatility of the productivity shock in the nontraded sector at 0.014. This implies that

\sqrt{\psi_T^2\sigma_p^2 + \sigma^2_{\nu^T}} = 0.019\sqrt{1-\rho^2}

and

\psi_N \sqrt{\psi_T^2\sigma_p^2 + \sigma^2_{\nu^T}} = 0.014\sqrt{1-\rho^2}.

Based on correlations between Solow residuals and terms of trade in five developed countries, Mendoza (1995) sets the correlation between the productivity shock in the exportable sector and the terms of trade at 0.165. This implies that

\frac{\psi_T \sigma_p}{\sqrt{\psi_T^2\sigma_p^2 + \sigma^2_{\nu^T}}} = 0.165.

Table 7.2 displays the parameter values implied by the above restrictions. This completes the calibration of the parameters defining exogenous driving forces in the developed-country model.
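The calibration restrictions above can be solved for the shock parameters in a few lines and checked against the values reported in table 7.2:

```python
import math

# Solve the developed-country calibration restrictions for sigma_p, psi_T,
# sigma_nu (innovation std. dev. of nu^T), and psi_N, and compare with table 7.2.
rho = 0.473
s = math.sqrt(1 - rho**2)

sigma_p  = 0.047 * s                 # matches TOT std. dev. for G7 countries
std_T    = 0.019 * s                 # std. dev. of traded-sector innovations eps^T
psi_T    = 0.165 * std_T / sigma_p   # from corr(eps^T, eps^p) = 0.165
sigma_nu = math.sqrt(std_T**2 - (psi_T*sigma_p)**2)
psi_N    = 0.014 / 0.019             # nontraded/traded innovation volatility ratio

print(round(sigma_p, 3), round(psi_T, 3), round(sigma_nu, 3), round(psi_N, 2))
# -> 0.041 0.067 0.017 0.74
```

The four numbers reproduce the developed-country column of table 7.2.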
In calibrating the driving forces of the developing-country model, the parameters \rho and \sigma_p are picked to match the average serial correlation and standard deviation of the terms of trade for the group of developing countries reported in table 1 of Mendoza (1995). This implies

\rho = 0.414

and

\sigma_p = 0.12\sqrt{1-\rho^2}.

Mendoza (1995) assumes that the standard deviation of productivity shocks in the traded sector is larger than in the nontraded sector by the same proportion as in developed countries. This means that the parameter \psi_N takes the value 0.74, as in the developed-country model. Mendoza sets the standard deviation of productivity shocks in the traded sectors at 0.04 and their correlation with the terms of trade at -0.46 to match the observed average standard deviation of GDP and the correlation of GDP with TOT in developing countries. This yields the restrictions

\sqrt{\psi_T^2\sigma_p^2 + \sigma^2_{\nu^T}} = 0.04\sqrt{1-\rho^2}

and

\frac{\psi_T \sigma_p}{\sqrt{\psi_T^2\sigma_p^2 + \sigma^2_{\nu^T}}} = -0.46.

The implied parameter values are shown in table 7.2. This completes the calibration of the exogenous driving forces for the developing-country version of the model. Table 7.2 also displays the values assigned to the remaining parameters of the model.²

Table 7.2: Calibration

Parameter        Developed Country   Developing Country
\sigma_p             0.041               0.109
\sigma_{\nu^T}       0.017               0.032
\rho                 0.47                0.41
\psi_T               0.067              -0.156
\psi_N               0.74                0.74
r^*                  0.04                0.04
\alpha_X             0.49                0.57
\alpha_M             0.27                0.70
\alpha_N             0.56                0.34
\delta               0.1                 0.1
\phi                 0.028               0.028
\gamma               1.5                 2.61
\mu                  0.35               -0.22
\alpha               0.3                 0.15
\omega               2.08                0.79
\beta                0.009               0.009

² See Mendoza (1995) for more details.

Model Performance

Table 7.3 presents a number of data summary statistics from developed (G7) and developing countries (DCs) and their theoretical counterparts.

Table 7.3: Data and Model Predictions. [For each of TOT, TB, GDP, C, I, and RER, the table reports, in the data and in the model, the standard deviation relative to that of the TOT, the serial correlation \rho_{x_t,x_{t-1}}, the correlation with GDP \rho_{x,GDP}, and the correlation with TOT \rho_{x,TOT}, separately for G7 countries and DCs. Source: Mendoza (1995).]

The following list highlights a number of empirical regularities and comments on the model's ability to capture them.

1. In the data, the terms of trade are procyclical, although much less so in developing countries than in G7 countries. The model captures this fact relatively well.

2. The observed terms of trade are somewhat persistent. This fact is matched by construction; recall that the parameter \rho is set to pin down the serial correlation of the terms of trade in developing and G7 countries.

3. The terms of trade are less volatile than GDP. The model fails to capture this fact.

4. The terms of trade are positively correlated with the trade balance. The model captures this empirical regularity, but underestimates the TB-TOT correlation, particularly for developing countries.

5. The trade balance is countercyclical in DCs but procyclical in G7 countries. In the model, the trade balance is countercyclical in both developed and developing countries. The failure of the model to capture the procyclicality of the trade balance in developed countries should be taken with caution, for other authors estimate negative TB-GDP correlations for developed countries. The model appears to overestimate the countercyclicality of the trade balance.

6.
In the data, the real exchange rate (RER) is measured as the ratio of the domestic CPI to an exchange-rate-adjusted, trade-weighted average of foreign CPIs. In the model, the RER is defined as the relative price of consumption in terms of importables and denoted p^c_t. In the data, the RER is procyclical. The model captures this fact, although it somewhat overestimates the RER-GDP correlation.

7. The RER is somewhat persistent (with a serial correlation of less than 0.45 for both developing and G7 countries). In the model, the RER is highly persistent, with an autocorrelation above 0.75 for both types of country.

How Important Are the Terms of Trade?

To assess the contribution of the terms of trade to explaining business cycles in developed and developing countries, one can run the counterfactual experiment of computing equilibrium dynamics after shutting off all sources of uncertainty other than the terms of trade themselves. In the context of the model of this section, one must set all productivity shocks at their deterministic steady-state values. This is accomplished by setting \sigma_{\nu^T} = \psi_T = 0. Mendoza (1995) finds that when the volatility of all productivity shocks is set equal to zero in the developed-country version of the model, the volatility of output deviations from trend measured at import prices falls from 4.1 percent to 3.6 percent. Therefore, in the model the terms of trade explain about 88 percent of the volatility of output. When output is measured in terms of domestic prices, the terms of trade explain about 66 percent of output movements. When the same experiment is performed in the context of the developing-country version of the model, shutting off the variance of the productivity shocks results in an increase in the volatility of output.
The reason for this increase in output volatility is that in the benchmark calibration the terms of trade are negatively correlated with productivity shocks; note in table 7.2 that \psi_T < 0 for developing countries. Taking this result literally would lead to the illogical conclusion that terms of trade explain more than 100 percent of output fluctuations in the developing-country model. What is wrong? One difficulty with the way we have measured the contribution of the terms of trade is that it is not based on a variance decomposition of output. A more satisfactory way to assess the importance of terms-of-trade and productivity shocks would be to define the terms-of-trade shock as \epsilon^p_t and the productivity shock as \nu^T_t. One justification for this classification is that \epsilon^p_t affects both the terms of trade and sectoral total factor productivities, while \nu^T_t affects sectoral total factor productivities but not the terms of trade. Under this definition of shocks, the model with only terms-of-trade shocks results when \sigma_{\nu^T} is set equal to zero. Note that the parameter \psi_T must not be set to zero. An advantage of this approach is that, because the variance of output can be decomposed into a nonnegative fraction explained by \epsilon^p_t and a nonnegative part explained by \nu^T_t, the contribution of the terms of trade will always be a nonnegative number no larger than 100 percent.

Exercise 7.1 Using the model presented in this section, compute a variance decomposition of output. What fraction of the variance of output is explained by terms-of-trade shocks in the developed- and developing-country versions of the model?

Part II

Exchange Rate Policy

An important question in open economy macroeconomics is the role of monetary and exchange-rate policy in stabilizing business cycles. The models studied thus far are useful for understanding the efficient adjustment of open economies to a number of aggregate disturbances.
However, all of those models belong to the family of real models, in which nominal frictions, such as product or factor price rigidities or a demand for fiat money, are absent. By construction, in this type of environment there is little room for the central bank to affect real allocations. In this part of the book, we study environments with nominal frictions in which policy actions of the monetary authority have real effects and welfare consequences. Nominal wage or price rigidity can be an important source of monetary nonneutrality, especially in low-inflation environments. For instance, in the United States and Europe during the Great Depression of the 1930s, product prices fell by a much larger proportion than nominal wages. As a result, real wages increased in spite of the fact that involuntary unemployment had reached unprecedented levels. Some researchers have attributed the disruptions in the labor market observed during this period to downward nominal wage rigidity, and argue that countries in which the monetary authority reflated earlier were more successful in lifting the economy out of the slump. By contrast, in high-inflation environments, price rigidity tends to be less important, as prices and wages increase quickly through indexation schemes or currency substitution. However, monetary frictions continue to play an important role. In particular, high inflation represents a tax on the use of money balances. To the extent that money facilitates transactions at the firm and household level, inflation can disrupt the efficient functioning of the economy. The central policy challenge in this context is how to transition from high to low inflation with the minimum amount of economic disruption. In this part of the book, we build an analytical framework useful for understanding the role of monetary policy in low- and high-inflation environments.
Chapter 8 presents evidence on downward nominal wage rigidity and builds a model of a small open economy in which this type of friction can lead to significant involuntary unemployment. This chapter then uses this framework to study the welfare consequences of various exchange-rate arrangements, including currency pegs or monetary unions, inflation targeting, and the optimal flexible exchange-rate arrangement. Chapter 9 analyzes the role of capital controls for aggregate stabilization under different exchange-rate arrangements. Finally, chapter ?? develops a framework for understanding high-inflation stabilization. There we stress the role of credibility and dollarization in shaping the effectiveness of alternative disinflation policies.

Chapter 8

Nominal Rigidity, Exchange Rates, and Unemployment

In this chapter, we build a theoretical framework in which the presence of nominal rigidities can induce an inefficient adjustment to aggregate disturbances. The analysis is guided by two objectives. One is to convey in an intuitive manner how nominal rigidities amplify the business cycle in open economies. The second is to develop a framework from which one can derive quantitative predictions useful for policy evaluation. To motivate the type of theoretical environment we will study in this chapter, take a look at figure 8.1. It displays the current account, nominal hourly wages, and the unemployment rate in the periphery of Europe between 2000 and 2011. The inception of the euro in 1999 was followed by massive capital inflows into the region, possibly driven by expectations of a quick convergence of peripheral and central Europe to core Europe (Germany and France). The boom lasted from 2000 to 2008 and was characterized by large current account deficits, spectacular nominal wage increases, and declining rates of unemployment. With the onset of the global crisis of 2008, capital
Figure 8.1: Boom-Bust Cycle in Peripheral Europe: 2000-2011. [Three panels covering 2000-2011: Current Account / GDP (percent), Labor Cost Index (nominal, index 2008 = 100), and Unemployment Rate. Data source: Eurostat. Data represent the arithmetic mean of Bulgaria, Cyprus, Estonia, Greece, Ireland, Lithuania, Latvia, Portugal, Spain, Slovenia, and Slovakia.]

inflows dried up abruptly (see the sharp reduction in the current account deficit) and the region suffered a severe sudden stop. At the same time, central banks were unable to change the course of monetary policy because the respective countries were either in the eurozone or pegging to the euro. In spite of the collapse in aggregate demand and the lack of a devaluation, nominal wages remained as high as during the boom. Meanwhile, massive unemployment affected all countries in the region. The data in figure 8.1 does not provide any indication of causality. In this chapter, we will interpret episodes of the type illustrated in the figure through the lens of a theoretical model in which, following a negative external shock, the combination of downward nominal wage rigidity and a fixed exchange rate can cause massive involuntary unemployment. Section 8.1 presents a model of an open economy with downward nominal wage rigidity. Sections 8.2 and 8.3 characterize theoretically the equilibrium dynamics and welfare levels associated with currency pegs and flexible exchange rates, with special attention to the optimal exchange-rate policy. Section 8.4 presents extensive evidence on downward nominal wage rigidity in developed, developing, and poor countries. Sections 8.5-8.13 study the quantitative implications of the model under alternative exchange-rate regimes. Finally, section 8.14 studies the case of product price rigidity.
An Open Economy With Downward Nominal Wage Rigidity

We develop a model of a small open economy in which nominal wages are downwardly rigid. The model features two types of goods, tradables and nontradables. The economy is driven by two exogenous shocks, a country-interest-rate shock and a terms-of-trade shock. The presentation draws on Schmitt-Grohé and Uribe (2010). Consider an economy populated by a large number of identical households with preferences described by the utility function

E_0 \sum_{t=0}^{\infty} \beta^t U(c_t),   (8.1)

where c_t denotes consumption. The period utility function U is assumed to be strictly increasing and strictly concave, and the parameter \beta, denoting the subjective discount factor, resides in the interval (0,1). The symbol E_t denotes the mathematical expectations operator conditional upon information available in period t. The consumption good is a composite of tradable consumption, c^T_t, and nontradable consumption, c^N_t. The aggregation technology is of the form

c_t = A(c^T_t, c^N_t),   (8.2)

where A is an increasing, concave, and linearly homogeneous function. We assume that all of the external liabilities of the household are denominated in foreign currency. This assumption is motivated by the empirical literature on the 'original sin,' which documents that virtually all of the debt issued by emerging countries is denominated in foreign currency (see, for example, Eichengreen, Hausmann, and Panizza, 2005). Specifically, households are assumed to have access to a one-period, internationally traded, state non-contingent bond denominated in tradables. We let d_t denote the level of debt assumed in period t-1 and due in period t, and r_t the interest rate on debt held between periods t and t+1.
The sequential budget constraint of the household is given by

P^T_t c^T_t + P^N_t c^N_t + \mathcal{E}_t d_t = P^T_t y^T_t + W_t h_t + \Phi_t + \frac{\mathcal{E}_t d_{t+1}}{1+r_t},   (8.3)

where P^T_t denotes the nominal price of tradable goods, P^N_t the nominal price of nontradable goods, \mathcal{E}_t the nominal exchange rate defined as the domestic-currency price of one unit of foreign currency, y^T_t the endowment of traded goods, W_t the nominal wage rate, h_t hours worked, and \Phi_t nominal profits from the ownership of firms. The variables r_t and y^T_t are assumed to be exogenous and stochastic. Movements in y^T_t can be interpreted either as shocks to the physical availability of tradable goods or as shocks to the country's terms of trade.

Households supply inelastically \bar h hours to the labor market each period. Because of the presence of downward nominal wage rigidity, households may not be able to sell all of the hours they supply. As a result, households take employment, h_t \le \bar h, as exogenously given. Households are assumed to be subject to a debt limit

d_{t+1} \le \bar d,   (8.4)

which prevents them from engaging in Ponzi schemes, where \bar d denotes the natural debt limit. We assume that the law of one price holds for tradables. Specifically, letting P^{T*}_t denote the foreign-currency price of tradables, the law of one price implies that P^T_t = P^{T*}_t \mathcal{E}_t. We further assume that the foreign-currency price of tradables is constant and normalized to unity, P^{T*}_t = 1.¹ Thus, we have that the nominal price of tradables equals the nominal exchange rate,

P^T_t = \mathcal{E}_t.

Households choose contingent plans {c_t, c^T_t, c^N_t, d_{t+1}} to maximize (8.1) subject to (8.2)-(8.4), taking as given P^T_t, P^N_t, \mathcal{E}_t, W_t, h_t, \Phi_t, r_t, and y^T_t.
Letting p_t \equiv P^N_t/P^T_t denote the relative price of nontradables in terms of tradables and using the fact that P^T_t = \mathcal{E}_t, the optimality conditions associated with this problem are (8.2)-(8.4) and

\frac{A_2(c^T_t, c^N_t)}{A_1(c^T_t, c^N_t)} = p_t,   (8.5)

\lambda_t = U'(c_t) A_1(c^T_t, c^N_t),

\frac{\lambda_t}{1+r_t} = \beta E_t \lambda_{t+1} + \mu_t,

\mu_t \ge 0,

\mu_t (d_{t+1} - \bar d) = 0,

where \lambda_t/P^T_t and \mu_t denote the Lagrange multipliers associated with (8.3) and (8.4), respectively. Equation (8.5) describes the household's demand for nontradables as a function of the relative price of nontradables, p_t, and the level of tradable absorption, c^T_t. Given c^T_t, the demand for nontradables is strictly decreasing in p_t. This property is a consequence of the assumptions made about the aggregator function A. It reflects the fact that as the relative price of nontradables increases, households tend to consume relatively fewer nontradables. The demand function for nontradables is depicted in figure 8.2 with a downward sloping solid line. An increase in the absorption of tradables shifts the demand schedule up and to the right, reflecting normality. Such a shift is shown with a dashed downward sloping line in figure 8.2 for an increase in traded consumption from c^T_0 to c^T_1 > c^T_0. It follows that the absorption of tradables can be viewed as a shifter of the demand for nontradables.

Figure 8.2: The Demand For Nontradables. [The figure plots the price p against the quantity c^N. It shows the downward sloping demand schedule A_2(c^T_0, c^N)/A_1(c^T_0, c^N) and the schedule A_2(c^T_1, c^N)/A_1(c^T_1, c^N), with c^T_1 > c^T_0, which lies up and to the right.]

¹ Exercise 8.5 relaxes this assumption.
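The two properties of the demand schedule in figure 8.2 (downward sloping in p, shifting out with c^T) can be illustrated numerically. The chapter leaves the aggregator A general; the CES functional form and parameter values below are assumptions used only for illustration.

```python
# Demand for nontradables implied by equation (8.5), under an assumed CES aggregator
# A(cT, cN) = [chi*cT^(-mu) + (1-chi)*cN^(-mu)]^(-1/mu). With this form,
# A2/A1 = ((1-chi)/chi)*(cN/cT)^(-(1+mu)) = p can be solved for cN in closed form.
chi, mu = 0.5, 0.5  # illustrative parameter values (assumptions)

def cN_demand(p, cT):
    """Quantity of nontradables demanded at relative price p, given tradable absorption cT."""
    return cT * (((1.0 - chi)/chi) / p)**(1.0/(1.0 + mu))

# Downward sloping in p ...
assert cN_demand(2.0, 1.0) < cN_demand(1.0, 1.0)
# ... and shifts up and to the right when tradable absorption rises (cT1 > cT0).
assert cN_demand(1.5, 1.2) > cN_demand(1.5, 1.0)
```

Both assertions hold, mirroring the solid and dashed schedules in figure 8.2.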
Of course, c^T_t is itself an endogenous variable, which is determined simultaneously with all other endogenous variables of the model.²

Nontraded output, denoted y^N_t, is produced by perfectly competitive firms. Each firm operates a production technology given by

y^N_t = F(h_t),

which uses labor services as the sole input. The function F is assumed to be strictly increasing and concave. Firms choose the amount of labor input to maximize profits, given by

\Phi_t \equiv P^N_t F(h_t) - W_t h_t.

The optimality condition associated with this problem is

P^N_t F'(h_t) = W_t.

Dividing both sides by P^T_t and using the facts that P^T_t = \mathcal{E}_t and that h_t = F^{-1}(y^N_t) yields a supply schedule of nontradable goods of the form

p_t = \frac{W_t/\mathcal{E}_t}{F'(F^{-1}(y^N_t))}.

This supply schedule is depicted with a solid upward sloping line in figure 8.3. All other things equal, the higher is the relative price of the nontraded good, the larger is the supply of nontradable goods. Also, the higher is the labor cost W_t/\mathcal{E}_t, the smaller is the supply of nontradables at each level of the relative price p_t. That is, an increase in the nominal wage rate, holding constant the nominal exchange rate, causes the supply schedule to shift up and to the left.

Figure 8.3: The Supply Of Nontradables. [The figure plots the price p against the quantity y^N. It shows two upward sloping supply schedules, (W_0/\mathcal{E}_0)/F'(F^{-1}(y^N)) and, up and to the left of it, (W_1/\mathcal{E}_0)/F'(F^{-1}(y^N)), with W_1 > W_0.]

Figure 8.3 displays with a broken upward sloping line the shift in the supply schedule that results from an increase in the nominal wage rate from W_0 to W_1 > W_0, holding the nominal exchange rate constant at \mathcal{E}_0. A currency devaluation, holding the nominal wage constant, shifts the supply schedule down and to the right. Intuitively, a devaluation that is not accompanied by a change in nominal wages reduces the real labor cost, thereby inducing firms to increase the supply of nontradable goods at any given relative price. To illustrate this effect, assume that the nominal wage equals W_1 and the nominal exchange rate equals \mathcal{E}_0. The corresponding supply schedule is the upward sloping broken line in figure 8.3. To keep the graph simple, suppose that the government devalues the currency to a level \mathcal{E}_1 > \mathcal{E}_0 such that W_1/\mathcal{E}_1 = W_0/\mathcal{E}_0. Such a devaluation shifts the supply schedule back to its original position given by the solid line.

² A digression on the real exchange rate and the relative price of nontradables: At this point, it is useful to note that in the model presented here there is a one-to-one negative relationship between the relative price of nontradables in terms of tradables, p_t, and the real exchange rate, RER_t. The real exchange rate is defined as the relative price of a foreign basket of consumption goods in terms of domestic baskets of consumption goods, or

RER_t \equiv \frac{\mathcal{E}_t P^*_t}{P_t},

where P^*_t denotes the nominal price of consumption in the foreign country in units of foreign currency, and P_t denotes the nominal price of consumption in the domestic country in units of domestic currency. When RER_t increases, the domestic economy becomes relatively cheaper than the foreign economy. In this case, we say that the real exchange rate depreciates. Conversely, when RER_t decreases, the domestic economy becomes relatively more expensive than the foreign economy, and we say that the real exchange rate appreciates. In the present model, the real exchange rate appreciates if and only if the relative price of nontradables increases (i.e., RER_t ↓ ⟺ p_t ↑), and the real exchange rate depreciates if and only if the relative price of nontradables decreases (i.e., RER_t ↑ ⟺ p_t ↓). For this reason, throughout this chapter, we use the terms real exchange rate appreciation and increase in the relative price of nontradables interchangeably. The same holds for the terms real exchange rate depreciation and decrease in the relative price of nontradables.
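The devaluation experiment of figure 8.3 is easy to verify numerically. The chapter only requires F to be strictly increasing and concave; the isoelastic form F(h) = h^\alpha and all numerical values below are assumptions made for illustration.

```python
# Supply schedule of nontradables, p = (W/E) / F'(F^{-1}(yN)), under an assumed
# isoelastic technology F(h) = h^alpha (alpha in (0,1)); E is the nominal exchange rate.
alpha = 0.75  # illustrative value (an assumption)

def supply_price(yN, W, E):
    h = yN**(1.0/alpha)                 # h = F^{-1}(yN)
    Fprime = alpha * h**(alpha - 1.0)   # F'(h)
    return (W/E) / Fprime

yN = 1.5
p0 = supply_price(yN, W=1.0, E=1.0)  # original schedule (W0, E0)
p1 = supply_price(yN, W=1.2, E=1.0)  # wage increase shifts supply up and to the left
p2 = supply_price(yN, W=1.2, E=1.2)  # devaluation with W1/E1 = W0/E0 undoes the shift

print(p1 > p0, abs(p2 - p0) < 1e-12)
```

At any given quantity, the higher wage raises the price required by firms (p1 > p0), while a devaluation that restores the original real labor cost W/\mathcal{E} returns the schedule exactly to its initial position (p2 = p0).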
Downward Nominal Wage Rigidity

The central friction emphasized in this chapter is downward nominal wage rigidity. Specifically, we impose that

W_t \ge \gamma W_{t-1},  \gamma > 0.

The parameter \gamma governs the degree of downward nominal wage rigidity. The higher is \gamma, the more downwardly rigid are nominal wages. This setup nests the cases of absolute downward rigidity, when \gamma \ge 1, and full wage flexibility, when \gamma = 0. In section 8.4, we present empirical evidence suggesting that \gamma is close to unity when time is measured in quarters. The presence of downwardly rigid nominal wages implies that the labor market will in general not clear. Instead, involuntary unemployment, given by \bar h - h_t, will be a regular feature of this economy. Actual employment must satisfy

h_t \le \bar h

at all times. We postulate that at any point in time, wages and employment must satisfy the slackness condition

(\bar h - h_t)(W_t - \gamma W_{t-1}) = 0.

This condition states that in periods in which the economy suffers from involuntary unemployment (h_t < \bar h), the lower bound on nominal wages must be binding (W_t = \gamma W_{t-1}). It also states that in periods in which the lower bound on nominal wages is not binding (W_t > \gamma W_{t-1}), the economy must be operating at full employment (h_t = \bar h).

In equilibrium, the market for nontraded goods must clear at all times. That is, the condition

c^N_t = y^N_t

must hold for all t. Combining this condition, the production technology for nontradables, the household's budget constraint, and the definition of firms' profits, we obtain the following market-clearing condition for traded goods:

c^T_t + d_t = y^T_t + \frac{d_{t+1}}{1+r_t}.

Let

w_t \equiv \frac{W_t}{\mathcal{E}_t}

denote the real wage in terms of tradables and

\epsilon_t \equiv \frac{\mathcal{E}_t}{\mathcal{E}_{t-1}}

the gross devaluation rate of the domestic currency.
Then, an equilibrium is a set of stochastic processes {c_t^T, h_t, w_t, d_{t+1}, p_t, λ_t, μ_t}_{t=0}^∞ satisfying

c_t^T + d_t = y_t^T + d_{t+1}/(1 + r_t),   (8.10)
d_{t+1} ≤ d̄,   (8.11)
μ_t ≥ 0,   (8.12)
μ_t (d_{t+1} − d̄) = 0,   (8.13)
λ_t = U'(A(c_t^T, F(h_t))) A_1(c_t^T, F(h_t)),   (8.14)
λ_t/(1 + r_t) = β E_t λ_{t+1} + μ_t,   (8.15)
p_t = A_2(c_t^T, F(h_t))/A_1(c_t^T, F(h_t)),   (8.16)
p_t = w_t/F'(h_t),   (8.17)
w_t ≥ γ w_{t−1}/ε_t,   (8.18)
h_t ≤ h̄,   (8.19)
(h̄ − h_t)(w_t − γ w_{t−1}/ε_t) = 0,   (8.20)

given an exchange rate policy {ε_t}_{t=0}^∞, initial conditions w_{−1} and d_0, and exogenous stochastic processes {r_t, y_t^T}_{t=0}^∞.

It remains to specify the exchange-rate regime. We will study a variety of empirically realistic exchange-rate policies. We begin with currency pegs, a policy that is frequently observed in the emerging-country world.

We close this subsection with a digression on Walras' law in the context of the present model. Notice that all markets except the labor market are in equilibrium. One might therefore wonder whether this situation violates Walras' Law, according to which, if all markets but one can be verified to be in equilibrium, then the remaining market must also be in equilibrium. The answer is that Walras' Law is not applicable in the current environment, because the present model does not feature a Walrasian equilibrium. A Walrasian equilibrium is built under the assumption that, at the price vector submitted by a fictitious auctioneer, all market participants submit notional demands and supplies of final goods and inputs of production. That is, supplies and demands computed under the assumption that at the given price vector (regardless of whether it happens to be the equilibrium price vector or not) the agent could buy or sell any desired quantities of final goods and inputs of production subject only to his budget constraints. But this is not the case under the present non-Walrasian equilibrium.
In particular, at any given price vector the household's labor supply is not its desired supply of labor, h̄, but its realized employment, h_t, reflecting the fact that households internalize the existence of rationing in the labor market. As a result, adding the budget constraints of all households and using the fact that all markets but the labor market clear does not yield the result that the aggregate desired demand for labor equals the aggregate desired supply of labor, but rather the tautology that the desired demand for labor equals the desired demand for labor. In other words, in the present model, the fact that in equilibrium all but one market clear does not imply that the remaining market must also clear.

Currency Pegs

Countries can find themselves confined to a currency peg in a number of ways. For instance, a country could have adopted a currency peg as a way to stop high or hyperinflation in a swift and nontraumatic way. A classical example is the Argentine Convertibility Law of April 1991, which, by mandating a one-to-one exchange rate between the Argentine peso and the U.S. dollar, painlessly eliminated hyperinflation virtually overnight. Another route by which countries arrive at a currency peg is the joining of a monetary union. Recent examples include emerging countries in the periphery of the European Union, such as Ireland, Portugal, Greece, and a number of small eastern European countries that joined the Eurozone. Most of these countries experienced an initial transition into the Euro characterized by low inflation, low interest rates, and economic expansion.

However, history has shown time and again that fixed exchange rate arrangements are easy to adopt but difficult to maintain. Continuing with the example of the Argentine peg of the 1990s, its initial success in stabilizing inflation and restoring growth turned into a nightmare by the end of the decade.
Starting in 1998 Argentina was hit by a string of large negative external shocks, including depressed commodity prices and elevated country premia, which pushed the economy into a deep deflationary recession. Between 1998 and late 2001, the subemployment rate, which measures the fraction of the population that is either unemployed or involuntarily working part time, increased by 10 percentage points. At the same time, consumer prices were falling at a rate near one percent per year. Eventually, the crisis led to the demise of the peg in December of 2001. The Achilles’ heel of currency pegs is that they hinder the efficient adjustment of the economy to negative external shocks, such as drops in the terms of trade, captured by the variable ytT in our model, or hikes in the country interest-rate, captured by the variable rt . The reason is that such shocks produce a contraction in aggregate demand that requires a decrease in the relative price of nontradables, that is, a real depreciation of the domestic currency, in order to bring about an expenditure switch away from tradables and toward nontradables. In turn, the required real depreciation may come about via a nominal devaluation of the domestic currency or via a fall in nominal prices or both. The currency peg rules out a devaluation. Thus, the only way the necessary real depreciation can occur is through a decline in the nominal price of nontradables. However, when nominal wages are downwardly rigid, producers of nontradables are reluctant to lower prices, for doing so might render their enterprises no longer profitable. As a result, the necessary real depreciation takes place too slowly, causing recession and unemployment along the way. This narrative is well known. 
It goes back at least to Keynes (1925), who argued that Britain's 1925 decision to return to the gold standard at the 1913 parity, despite the significant increase in the aggregate price level that took place during World War I, would cause deflation in nominal wages with deleterious consequences for unemployment and economic activity. Similarly, Friedman's (1953) seminal essay points at downward nominal wage rigidity as the central argument against fixed exchange rates. This section formalizes this narrative in the context of the dynamic, stochastic, optimizing model developed in the previous section. Later sections use parameterized versions of this model to generate precise quantitative predictions for aggregate activity around external crises and for the welfare costs of currency pegs.

A currency peg is an exchange rate policy in which the nominal exchange rate is fixed. The gross devaluation rate therefore satisfies

ε_t = 1, for all t ≥ 0.

Under a currency peg, the economy is subject to two nominal rigidities. One is policy induced: the nominal exchange rate, E_t, is kept fixed by the monetary authority. The second is structural and is given by the downward rigidity of the nominal wage W_t. The combination of these two nominal rigidities results in a real rigidity. Specifically, under a currency peg, the real wage expressed in terms of tradables, w_t ≡ W_t/E_t, is downwardly rigid. Formally, equation (8.18) becomes

w_t ≥ γ w_{t−1}.

As a result of this real rigidity, in general the labor market is in disequilibrium and features involuntary unemployment. The magnitude of the labor market disequilibrium is a function of the amount by which the past real wage, w_{t−1}, exceeds the current full-employment real wage. It follows that under a currency peg w_{t−1} becomes a relevant state variable for the economy.
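To fix ideas, the one-period labor-market outcome implied by conditions (8.16)-(8.20) under a peg can be sketched in a few lines of code. The sketch assumes, purely for illustration, the log-preference/Cobb-Douglas example used later in the chapter (U = ln c^T + ln c^N and F(h) = h^α, under which labor demand satisfies w = αc^T/h); the function name and parameter values are hypothetical.

```python
def labor_market_peg(c_T, w_prev, gamma=1.0, alpha=0.75, hbar=1.0):
    """One-period labor-market outcome under a currency peg (eps_t = 1).

    With log preferences and F(h) = h**alpha, conditions (8.16)-(8.17)
    imply the labor demand schedule w = alpha*c_T/h, so the real wage that
    clears the labor market is alpha*c_T/hbar.
    """
    w_full = alpha * c_T / hbar           # full-employment real wage
    if w_full >= gamma * w_prev:          # wage floor (8.18) not binding
        return w_full, hbar               # full employment, per (8.20)
    w = gamma * w_prev                    # floor binds: w_t = gamma*w_{t-1}
    h = alpha * c_T / w                   # employment read off labor demand
    return w, h                           # h < hbar: involuntary unemployment

# Boom-bust: traded absorption rises from 1.0 to 1.2, then falls back.
w0, h0 = labor_market_peg(1.2, w_prev=0.75)   # boom: w rises to 0.9, h = 1
w1, h1 = labor_market_peg(1.0, w_prev=w0)     # bust: w stuck at 0.9, h < 1
```

In the bust period the wage inherited from the boom binds and employment falls below h̄, mirroring the move from point C to point D in figure 8.4 discussed below.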
How Pegs Cum Nominal Rigidity Amplify Contractions: A Peg-Induced Externality

Figure 8.4 illustrates the adjustment of the economy to a boom-bust episode under a currency peg. Because in equilibrium c_t^N = y_t^N = F(h_t), the figure plots the demand and supply schedules for nontraded goods in terms of employment in the nontraded sector, so that the horizontal axis measures h_t.

[Figure 8.4: Currency Pegs, Downward Wage Rigidity, and Unemployment. The figure plots employment against the relative price of nontradables. It shows the demand schedules A_2(c_0^T, F(h))/A_1(c_0^T, F(h)) and A_2(c_1^T, F(h))/A_1(c_1^T, F(h)), the supply schedules (W_0/E_0)/F'(h), (W_1/E_0)/F'(h), and (W_1/E_1)/F'(h), the vertical labor-supply line at h̄, and the relative prices p_0 and p_bust.]

The intersection of the demand and supply schedules, therefore, indicates the equilibrium demand for labor, given c_t^T and W_t/E_t. The figure also shows with a dotted vertical line the labor supply, h̄. Suppose that the initial position of the economy is at point A, where the labor market is operating at full employment, h_t = h̄. Suppose that in response to a positive external shock, such as a decline in the country interest rate, traded absorption increases from c_0^T to c_1^T > c_0^T, causing the demand function to shift up and to the right. If nominal wages stayed unchanged, the new intersection of the demand and supply schedules would occur at point B. However, at point B the demand for labor would exceed the available supply of labor h̄. The excess demand for labor drives up the nominal wage from W_0 to W_1 > W_0, causing the supply schedule to shift up and to the left. The new intersection of the demand and supply schedules occurs at point C, where all hours of labor are employed and there is no excess demand for labor. Because nominal wages are upwardly flexible, the transition from point A to point C happens instantaneously.

Suppose now that the external shock fades away, and that, therefore, absorption of tradables goes back to its original level c_0^T.
The decline in c_t^T shifts the demand schedule back to its original position, indicated by the downward sloping solid line. However, the economy does not immediately return to point A. Due to downward nominal wage rigidity, the nominal wage stays at W_1, and because of the currency peg, the nominal exchange rate remains at E_0. (For simplicity, we draw figure 8.4 assuming that γ is unity.) As a result, the supply schedule does not move. The new intersection is at point D. There, the economy suffers involuntary unemployment equal to h̄ − h_bust. Involuntary unemployment will persist over time unless the government does something to boost the economy. In the absence of such intervention, nominal wages will fall slowly (if γ is less than unity) to the initial level W_0. During this process the supply schedule shifts gradually down and to the right, until eventually intersecting the demand schedule at point A, where full employment is restored. The entire transition, however, is characterized by involuntary unemployment and a depressed level of activity in the nontraded sector.

The dynamics described above suggest that the combination of downward nominal wage rigidity and a currency peg creates a negative externality. The nature of this externality is that in periods of economic expansion, elevated demand for nontradables drives nominal (and real) wages up. Although this increase in wages occurs in the context of full employment and strong aggregate activity, it places the economy in a vulnerable situation, because in the contractionary phase of the cycle, downward nominal wage rigidity and the currency peg hinder the downward adjustment of real wages, causing unemployment.
Individual agents understand this mechanism, but are too small to internalize the fact that their own expenditure choices collectively exacerbate disruptions in the labor market.

Volatility And Average Unemployment

The present model implies an endogenous connection between the amplitude of the cycle and the average level of involuntary unemployment. The larger the degree of aggregate volatility, the larger the average level of involuntary unemployment. This connection between a second moment (the volatility of the underlying shocks) and a first moment (average unemployment) opens the door to large welfare gains from optimal stabilization policy.

The predicted connection between the volatility of the underlying shocks and the mean level of involuntary unemployment is due to two maintained assumptions: (a) employment is determined by the smaller between the desired demand for labor and the desired supply of labor; and (b) wages are more rigid downwardly than upwardly. The mechanism is as follows: The economy responds efficiently to positive external shocks, as nominal wages adjust upward to ensure that firms are on their labor demand schedules and households are on their labor supply schedules. In sharp contrast, the adjustment to negative external shocks is inefficient, as nominal wages fail to fall, forcing households off their labor supply schedules and generating involuntary unemployment. Thus, over the business cycle, the economy fluctuates between periods of full employment (or zero unemployment) and periods of positive unemployment, implying positive unemployment on average. Further, the implied level of unemployment during recessions is larger the larger is the amplitude of the underlying shocks. This is for two reasons. First, the contraction in the demand for labor during recessions is naturally larger the larger is the amplitude of shocks buffeting the economy.
Second, the larger the amplitude of the underlying shocks, the larger is the increase in nominal wages during booms, which exacerbates the negative effects of wage rigidity during contractions. It follows that mean unemployment is increasing in the variance of the underlying shocks.³

³This prediction represents an important difference with existing sticky-wage models à la Erceg, Henderson, and Levin (2000). In this class of models, assumption (a) does not hold. Instead, it is assumed that employment is always demand determined. As a result, increases in involuntary unemployment during recessions are roughly offset by reductions in unemployment during booms. Consequently, the average level of unemployment does not depend in a quantitatively relevant way on the amplitude of the business cycle.

Interestingly, the direct connection between aggregate uncertainty and the average level of unemployment predicted by the present model does not require assumption (b) (downward wage rigidity). It suffices to impose assumption (a) (employment is determined by the smaller between the demand and supply of labor). The following analytical example illustrates this point.

Consider an economy in which the nominal wage rate is absolutely rigid in both directions. Specifically, suppose that W_t = W̄ for all t, where W̄ is a parameter. Let the exchange-rate regime be a currency peg with E_t = Ē for all t, where Ē > 0 is a parameter. Suppose that agents have no access to international financial markets, d_t = 0 for all t. In this environment, consumption of tradables equals the endowment of tradables at all times, c_t^T = y_t^T for all t. Suppose that preferences are logarithmic, U(A(c_t^T, c_t^N)) = ln c_t^T + ln c_t^N. Assume that the endowment of tradables, y_t^T, can take on the values ȳ + σ or ȳ − σ, each with probability 1/2, where ȳ = 1 and σ > 0 is a parameter. According to this specification, E(y_t^T) = 1 and var(y_t^T) = σ². Assume that the technology for producing nontradables is F(h_t) = h_t^α and that households supply inelastically one unit of labor each period (h̄ = 1). Finally, let w̄ ≡ W̄/Ē denote the real wage rate expressed in terms of tradables and suppose that w̄ = α. This value of w̄ is the flexible-wage real wage that would obtain if the endowment took its unconditional mean value, E(y_t^T) = 1. The equilibrium conditions associated with this economy are

c_t^T = p_t c_t^N,
α p_t (h_t^d)^{α−1} = w_t,
c_t^T = y_t^T,
c_t^N = h_t^α,
w_t = w̄,
h_t = min{h̄, h_t^d},

where h_t^d denotes the desired demand for labor by firms. The first equation is the demand for nontradables, the second is the supply of nontradables, the third and fourth are the market clearing conditions for tradables and nontradables, respectively. The last equation states that employment is determined by the smaller between supply and demand. (Combining the first two conditions gives h_t^d = α c_t^T/w̄ = y_t^T.) It is straightforward to verify that the solution to the above system is

h_t = 1 − σ if y_t^T = 1 − σ, and h_t = 1 if y_t^T = 1 + σ.

Let u_t ≡ h̄ − h_t denote the unemployment rate. It follows that the equilibrium distribution of u_t is given by

u_t = σ with probability 1/2, and u_t = 0 with probability 1/2.

The unconditional mean of the unemployment rate is then given by

E(u_t) = σ/2.

This expression shows that the average level of unemployment increases linearly with the volatility of the tradable endowment, in spite of the fact that wage rigidity is symmetric.

The assumption that nominal wages are only downwardly inflexible amplifies the effect of volatility on average unemployment. To see this, replace the assumption W_t = W̄ with the assumption that nominal wages cannot fall, W_t ≥ W_{t−1}, but are free to rise. In this case, nominal wages are absolutely inflexible downwardly, but perfectly flexible upwardly. Suppose that all other aspects of the economy are unchanged.
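The closed-form results of this example are easy to verify numerically. The sketch below is illustrative only; α is an assumed value and drops out of both results. It computes employment state by state for the two-sided-rigidity case, where the real wage is fixed at w̄ = α, and for the downward-rigidity-only case discussed next, where after the first boom the real wage is stuck at its boom level (1 + σ)α.

```python
alpha, sigma = 0.75, 0.1
states = [1 - sigma, 1 + sigma]       # y_T, each with probability 1/2

def h(y_T, w):
    # labor demand alpha*c_T/w with c_T = y_T, capped at hbar = 1
    return min(1.0, alpha * y_T / w)

# Two-sided rigidity: w = wbar = alpha, so h = min(1, y_T)
u_sym = sum(0.5 * (1 - h(y, alpha)) for y in states)

# Downward rigidity only: wage stuck at its boom level (1 + sigma)*alpha
u_asym = sum(0.5 * (1 - h(y, (1 + sigma) * alpha)) for y in states)

print(u_sym, sigma / 2)               # both equal sigma/2 = 0.05
print(u_asym, sigma / (1 + sigma))    # both equal sigma/(1+sigma)
```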
In this environment, during booms (i.e., when y_t^T = 1 + σ) the economy is at full employment and the real wage equals (1 + σ)α, which is larger than w̄, the real wage that prevails during booms when wages are bidirectionally rigid. This makes the economy more vulnerable to negative shocks. Indeed, when y_t^T = 1 − σ, the real wage fails to fall, causing a contraction in employment to (1 − σ)/(1 + σ), which is lower than 1 − σ, the level of employment during contractions in the economy with bidirectional wage rigidity. It follows that the average rate of unemployment equals E(u_t) = σ/(1 + σ), which is larger than σ/2, the average rate of unemployment in the economy with two-sided wage rigidity.

Adjustment To A Temporary Fall in the Interest Rate

This section presents an analytical example showing that when the central bank pegs the domestic currency, a positive external shock can be the prelude to a slump with persistent unemployment. In this example, as in many observed boom-bust cycles in emerging countries, agents borrow internationally to take advantage of temporarily lower interest rates. The resulting capital inflow drives up domestic absorption of tradables and nominal wages. When the interest rate returns to its long-run level, aggregate demand falls and unemployment emerges as real wages, made rigid by the combination of nominal wage rigidity and a currency peg, are stuck at a level too high to be consistent with full employment.

Suppose that preferences are given by U(A(c_t^T, c_t^N)) = ln c_t^T + ln c_t^N and that the technology for producing nontradable goods is F(h_t) = h_t^α, with α ∈ (0, 1). Suppose that the endowment of tradables is constant over time and given by y_t^T = y^T, that h̄ = 1, and that β(1 + r) = 1, where r is a parameter. Assume that nominal wages are downwardly rigid and that γ = 1.
Finally, suppose that prior to period 0 the economy had been at a full-employment equilibrium with d_{t+1} = 0, c_t^T = y^T, h_t = 1, w_t = α y^T, and c_t^N = 1, for t < 0.

Consider the adjustment of this economy to a temporary decline in the interest rate. Specifically, suppose that the interest rate equals r_0 < r in period 0 and returns permanently to r thereafter, that is, r_t = r for all t ≥ 1. Notice that if r_0 = r, then c_t^T = y^T and d_t = 0 for all t. However, because r_0 < r, the economy experiences a boom in traded consumption in period 0. This boom is financed with external debt, d_1 > 0 = d_0. That is, the country experiences a current account deficit, or capital inflows, in period 0. In period 1, consumption falls permanently to a level lower than the one observed prior to the interest rate shock, and the trade balance switches permanently from a deficit to a surplus. This permanent trade-balance surplus is large enough to pay the interest on the external debt generated by the consumption boom of period 0.

The surge in capital inflows in period 0 is accompanied by full employment, that is, h_0 = 1. To see this, suppose, on the contrary, that h_0 < 1. Then, by conditions (8.16) and (8.17) we have that α c_0^T/h_0 = w_0. But since c_0^T > y^T and h_0 < 1, this expression implies that w_0 > α y^T = w_{−1}, violating the slackness condition (8.20). It follows that h_0 = 1.

The initial rise in capital inflows also elevates the period-0 real wage. Specifically, by (8.16) and (8.17) and the fact that h_0 = 1, we have that w_0 = α c_0^T > α y^T = w_{−1}. This increase in labor costs results in an increase in the relative price of nontradables, or a real exchange-rate appreciation. This can be seen from equation (8.5), which implies that p_0 = c_0^T > y^T = p_{−1}. Graphically, the dynamics described here correspond to a movement from point A to point C in figure 8.4.

The elevation in real wages that takes place in period 0 puts the economy in a vulnerable situation in period 1, when the interest rate increases permanently from r_0 to r.
In particular, in period 1 the economy enters a situation of chronic involuntary unemployment. To see this, note that by (8.16) and (8.17) the full-employment real wage in period 1 is α c_1^T, which is lower than α c_0^T = w_0. As a result, the lower bound on wage growth must be binding in period 1, that is, condition (8.18) must hold with equality. Recalling that ε_t = 1 for all t and that γ = 1, we then have that w_1 = w_0. Combining this expression with (8.16) and (8.17) yields the following expression for the equilibrium level of involuntary unemployment:

1 − h_1 = 1 − (1 + r_0)/(1 + r) > 0.

This level of unemployment persists indefinitely. To see this, note that at the beginning of period 2, the state of the economy is {r_2, y_2^T, d_2, w_1}, which, as we have shown, equals {r, y^T, d_1, w_0}, which, in turn, is the state of the economy observed at the beginning of period 1. A similar argument can be made for periods t = 3, 4, . . . . It follows that the unemployment rate is given by

1 − h_t = 1 − (1 + r_0)/(1 + r) > 0,

for all t ≥ 1. Notice that the larger the decline in the interest rate in period 0, the larger is the unemployment rate in periods t ≥ 1. Figure 8.5 presents a graphical summary of the adjustment process.

It is of interest to compare the dynamic adjustment of the present economy to the one that would take place under flexible wages (γ = 0). This adjustment is presented with broken lines in figure 8.5. The responses of tradable consumption and external debt are identical whether wages are rigid or not. The reason is that in this example preferences are additively separable in consumption of tradables and nontradables. However, the behavior of the remaining variables of the model is quite different under flexible and sticky wages. Contrary to what happens under sticky wages, the flexible-wage equilibrium is characterized by full employment at all times. Full employment is supported by a permanent fall in nominal (and real) wages in period 1.
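The example's closed forms can be collected in a small script. The following sketch (hypothetical function name; the values of r and r_0 are illustrative) uses the Euler equation c_1^T = [(1 + r_0)/(1 + r)] c_0^T together with the intertemporal budget constraint, which give c_0^T = y^T [r/(1 + r) + 1/(1 + r_0)]:

```python
def peg_adjustment(r, r0, y=1.0):
    """Responses to a one-period interest-rate cut (r0 < r) under a peg,
    with log preferences, beta*(1+r) = 1, and gamma = 1."""
    c0 = y * (r / (1 + r) + 1 / (1 + r0))   # period-0 consumption boom
    c1 = (1 + r0) / (1 + r) * c0            # constant consumption, t >= 1
    d1 = (1 + r0) * (c0 - y)                # debt issued during the boom
    h1 = (1 + r0) / (1 + r)                 # employment for t >= 1
    return c0, c1, d1, 1 - h1               # last entry: unemployment rate

c0, c1, d1, u = peg_adjustment(r=0.04, r0=0.02)
# c0 > 1: boom; d1 > 0: capital inflow; u > 0: chronic unemployment
```

One can check that the implied trade surplus y^T − c_1^T exactly services the debt, c_1^T + d_1 = y^T + d_1/(1 + r), so the permanent unemployment is not a solvency problem but a relative-price problem.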
This decline in labor costs allows firms to lower the relative price of nontradables permanently. In turn, lower relative prices for nontradables induce consumers to switch expenditures from tradables to nontradables.

Optimal Exchange Rate Policy

In the example we just analyzed, the reason why the economy experiences unemployment when the country interest rate increases is that real wages are too high to clear the labor market. This downward rigidity in real wages is the consequence of downwardly rigid nominal wages and a fixed exchange rate regime. The example suggests that the unemployment problem could be addressed by any policy that results in lower real wages. One way to lower the real wage is to devalue the currency. By making the nominal price of tradables more expensive (recall that P_t^T = E_t), a devaluation lowers the purchasing power of wages in terms of tradables. In turn, this erosion in the real value of wages induces firms to hire more workers. In this section, we formalize this intuition by characterizing the optimal exchange-rate policy. We begin by characterizing an exchange-rate policy that guarantees full employment at all times and then show that this policy is indeed Pareto optimal.

[Figure 8.5: A Temporary Decline in the Country Interest Rate. Six panels plot, against time, the country interest rate r_t, consumption of tradables c_t^T, debt d_t, unemployment (h̄ − h_t)/h̄, the real wage w_t, and the real exchange rate P_t^N/P_t^T. Solid lines correspond to the currency-peg economy and broken lines to the flexible-wage or optimal-exchange-rate economy.]

The Full-Employment Exchange-Rate Policy

Consider an exchange-rate policy in which each period the central bank sets the devaluation rate to ensure full employment in the labor market, that is, to ensure that

h_t = h̄,

for all t ≥ 0. We refer to this exchange-rate arrangement as the full-employment exchange-rate policy.
The equilibrium dynamics associated with the full-employment exchange-rate policy are illustrated in figure 8.4. Suppose that, after being hit by a negative external shock, the economy is stuck at point D with involuntary unemployment equal to h̄ − h_bust. At point D, the desired demand for tradables is c_0^T, the nominal wage is W_1, and the nominal exchange rate is E_0. Suppose that the central bank were to devalue the domestic currency so as to deflate the purchasing power of nominal wages to a point consistent with full employment. Specifically, suppose that the central bank sets the exchange rate at the level E_1 > E_0 satisfying

(W_1/E_1)/F'(h̄) = A_2(c_0^T, F(h̄))/A_1(c_0^T, F(h̄)).

The devaluation causes the supply schedule to shift down and to the right, intersecting the demand schedule at point A, where unemployment is nil (h = h̄). The devaluation lowers the real cost of labor, making it viable for firms to slash prices. The relative price of nontradables falls from p_boom at the peak of the cycle to p_0 after the negative external shock. This fall in the relative price of nontradables induces households to switch expenditure away from tradables and toward nontradables in a magnitude compatible with full employment.

The full-employment policy amounts to setting the devaluation rate to ensure that the real wage equals the full-employment real wage rate at all times. Formally, the full-employment exchange-rate policy ensures that

w_t = ω(c_t^T),   (8.22)

where ω(c_t^T) denotes the full-employment real wage rate and is given by

ω(c_t^T) ≡ [A_2(c_t^T, F(h̄))/A_1(c_t^T, F(h̄))] F'(h̄).   (8.23)

The assumed properties of the aggregator function A ensure that the function ω(·) is strictly increasing in the domestic absorption of tradables, c_t^T:⁴

ω'(c_t^T) > 0.

There exists a whole family of full-employment exchange-rate policies.
Specifically, combining conditions (8.18) and (8.22), we have that any exchange rate policy satisfying

ε_t ≥ γ w_{t−1}/ω(c_t^T)   (8.24)

ensures full employment at all times. To see this, suppose, on the contrary, that the above devaluation policy allows for h_t to be less than h̄ for some t ≥ 0. Then, by the slackness condition (8.20) we have that

w_t = γ w_{t−1}/ε_t.

Solve this expression for ε_t and use the resulting expression to eliminate ε_t from (8.24) to obtain w_t ≤ ω(c_t^T). Now using (8.16) and (8.17) to eliminate w_t and (8.23) to eliminate ω(c_t^T), we can rewrite this inequality as

[A_2(c_t^T, F(h_t))/A_1(c_t^T, F(h_t))] F'(h_t) ≤ [A_2(c_t^T, F(h̄))/A_1(c_t^T, F(h̄))] F'(h̄).

Because the left-hand side of this expression is strictly decreasing in h_t, we have that the only value of h_t that satisfies the above inequality is h̄. But this is a contradiction, since we started by assuming that h_t < h̄. We have therefore shown that under any exchange rate policy belonging to the family defined by (8.24), unemployment is nil at all dates and states.

A natural question is whether the full-employment exchange-rate policy is the most desirable from a social point of view. This question is nontrivial because the welfare of households does not depend directly on the level of employment but rather on the level of consumption of final goods.

⁴Exercise 8.3 asks you to prove this statement.

Pareto Optimality of the Full-Employment Exchange-Rate Policy

Consider a social planner who wishes to maximize the welfare of the representative household. A relevant question is under what constraints the social planner operates. The answer to this question depends on the issue the researcher is interested in. Here, we consider the problem of a local policymaker who takes as given the international asset market structure.
Specifically, we assume that the social planner has access to a single internationally traded state-noncontingent bond denominated in units of tradable goods that pays the exogenously given interest rate r_t. We also assume that external debt must be bounded above by d̄. Further, we assume that the planner takes as given the endowment process y_t^T, the technology for producing nontradables F(h_t), and the time endowment h̄. Given these constraints, the social planner picks processes for consumption, hours worked, and net foreign debt to maximize the welfare of the representative household. We refer to the solution of the social planner's problem as the Pareto optimal allocation. The key difference between the competitive equilibrium and the Pareto optimal allocation is that the social planner can circumvent the goods and labor markets and impose directly the number of hours each household must work and the quantities of tradables and nontradables it can consume each period. This implies, in particular, that the allocation problem faced by the planner is not affected by the presence of nominal rigidities. Formally, the social planner's problem is given by

max_{ {c_t^T, h_t, d_{t+1}}_{t=0}^∞ }  E_0 Σ_{t=0}^∞ β^t U(A(c_t^T, F(h_t)))

subject to

h_t ≤ h̄,   (8.26)
c_t^T + d_t = y_t^T + d_{t+1}/(1 + r_t),   (8.27)
d_{t+1} ≤ d̄.   (8.28)

Because the objective function is concave and the constraints define a convex set, the first-order conditions associated with this problem are necessary and sufficient for an optimum. Let β^t η_t, β^t λ_t, and β^t μ_t denote the Lagrange multipliers associated with (8.26), (8.27), and (8.28), respectively. The first-order conditions with respect to h_t and η_t are

U'(A(c_t^T, F(h_t))) A_2(c_t^T, F(h_t)) F'(h_t) = η_t

and

h_t ≤ h̄,

and the associated slackness condition is

η_t (h̄ − h_t) = 0.

Because the functions U, A, and F are strictly increasing, the first condition implies that η_t is positive.
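The first-order conditions above can be read off the planner's Lagrangian (a sketch using the multipliers β^t η_t, β^t λ_t, and β^t μ_t defined in the text):

```latex
\mathcal{L} = E_0 \sum_{t=0}^{\infty} \beta^t \Big\{
  U\!\left(A(c^T_t, F(h_t))\right)
  + \eta_t\left(\bar{h} - h_t\right)
  + \lambda_t\left[ y^T_t + \frac{d_{t+1}}{1+r_t} - c^T_t - d_t \right]
  + \mu_t\left(\bar{d} - d_{t+1}\right)
\Big\}.
% Differentiating with respect to h_t, c^T_t, and d_{t+1} yields, respectively,
% U' A_2 F' = \eta_t, \quad \lambda_t = U' A_1, \quad and
% \lambda_t/(1+r_t) = \beta E_t \lambda_{t+1} + \mu_t.
```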
It then follows from the slackness condition that h_t = h̄ for all t. In words, the Pareto optimal allocation features full employment at all times. The first-order conditions of the social planner's problem with respect to λ_t, μ_t, c_t^T, and d_{t+1} are

c_t^T + d_t = y_t^T + d_{t+1}/(1 + r_t),
d_{t+1} ≤ d̄,
λ_t = U'(A(c_t^T, F(h_t))) A_1(c_t^T, F(h_t)),
λ_t/(1 + r_t) = β E_t λ_{t+1} + μ_t,

with μ_t ≥ 0, and

μ_t (d_{t+1} − d̄) = 0.

These conditions are identical, respectively, to competitive equilibrium conditions (8.10)-(8.15). This means that the processes μ_t, λ_t, c_t^T, d_{t+1}, and h_t that satisfy the conditions for Pareto optimality also satisfy the competitive equilibrium conditions when the exchange-rate policy belongs to the class of full-employment exchange-rate policies defined by (8.24). We have therefore established that the real allocation associated with the full-employment exchange-rate policy is Pareto optimal. In other words, any exchange-rate policy that does not induce full employment is welfare dominated by the full-employment exchange-rate policy.

When Is It Optimal To Devalue?

Because under the optimal exchange policy the real wage is always equal to the full-employment real wage, ω(c_t^T), equation (8.24) implies that for all t > 0 the devaluation rate satisfies

ε_t ≥ γ ω(c_{t−1}^T)/ω(c_t^T),   t > 0.

Recalling that ω(·) is a strictly increasing function of c_t^T, this expression states that optimal devaluations occur in periods of contraction in aggregate expenditure in tradables. A nonstructural econometric analysis of data stemming from this model may lead to the erroneous conclusion that devaluations are contractionary (see, for instance, the empirical literature surveyed in section 3.4 of Frankel, 2011). However, the role of optimal devaluations is precisely the opposite, namely, to prevent the contraction in the tradable sector from spilling over into the nontraded sector.
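Under the log/Cobb-Douglas example used earlier, ω(c^T) = α c^T/h̄, so the smallest devaluation rate in the family (8.24) reduces to ε_t = γ c_{t−1}^T/c_t^T. A brief sketch (illustrative only; the function name and the consumption path are hypothetical, and rates below 1 are revaluations):

```python
def full_employment_devaluations(cT_path, gamma=1.0):
    """Smallest devaluation rates in the family (8.24) when omega is linear
    in c_T (log/Cobb-Douglas case): eps_t = gamma * c_{t-1}/c_t."""
    eps = []
    for prev, cur in zip(cT_path[:-1], cT_path[1:]):
        eps.append(gamma * prev / cur)
    return eps

# Boom in tradable absorption followed by a bust:
eps = full_employment_devaluations([1.0, 1.2, 1.0, 1.0])
# eps[0] < 1 during the boom (a revaluation); eps[1] = 1.2 at the bust:
# the currency is devalued exactly when expenditure contracts.
```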
It follows that under the full-employment exchange-rate policy, devaluations are indeed expansionary in the sense that should they not take place, aggregate contractions would be even larger. Thus, under the full-employment exchange-rate regime, the present model turns the view that 'devaluations are contractionary' on its head and instead predicts that 'contractions are devaluatory.'

Consider, for example, the case of a temporary decline in the country interest rate studied in section 8.2.3. The equilibrium dynamics under the optimal exchange-rate policy are shown with broken lines in figure 8.5. Recall that under a currency peg, the permanent contraction in tradable consumption that occurs in period 1 causes a permanent increase in involuntary unemployment. Under the optimal exchange-rate policy, the path of tradable consumption is identical, but the permanent contraction in period 1 does not spill over to the nontraded sector or the labor market, which continues to operate under full employment. The monetary authority is able to maintain full employment by devaluing the currency in period 1 (not shown in the figure), which reduces the real cost of labor and makes it optimal for firms not to fire workers.

The full-employment exchange-rate policy completely eliminates any real effect stemming from nominal wage rigidity. Indeed, one can show that the equilibrium under the full-employment exchange-rate policy is identical to the equilibrium of an economy with full wage flexibility (γ = 0).⁵ Consider again the case of a temporary decline in the country interest rate studied in section 8.2.3.
The equilibrium dynamics under fully flexible wages are identical to those associated with the full-employment exchange-rate policy (shown with broken lines in figure 8.5), except that the decline in the real wage that occurs in period 1 is brought about by a decline in the nominal wage under flexible wages but by a devaluation of the currency under downward wage rigidity and the full-employment exchange-rate policy.

Evidence On Downward Nominal Wage Rigidity

The central friction in the model we have analyzed in this chapter is downward nominal wage rigidity. In this section, we review a body of empirical work suggesting that downward nominal wage rigidity is a widespread phenomenon. This type of nominal friction has been detected in micro and aggregated data stemming from developed, emerging, and poor regions of the world. It has also been found both in formal and informal labor markets. An important byproduct of this review is an estimate of the parameter γ governing the degree of nominal wage rigidity in the theoretical model. We will need this parameter value to study the quantitative predictions of the model.

Evidence From Micro Data

A number of studies have examined the rigidity of hourly wages using micro data. Gottschalk (2005), for example, uses panel data from the Survey of Income and Program Participation (SIPP) to estimate the frequency of wage declines, increases, and no changes for male and female hourly workers working for the same employer over the period 1986-1993 in the United States.⁶ Table 8.1 shows that over the course of one year only a small fraction of workers experiences a decline in nominal wages, while about half of workers experience no change.

Table 8.1: Probability of Decline, Increase, or No Change in Nominal Wages Between Interviews

Interviews One Year Apart

              Decline   Constant   Increase
  Males         5.1%      53.7%      41.2%
  Females       4.3%      49.2%      46.5%

Note: U.S. data, SIPP panel 1986-1993, within-job changes. Source: Gottschalk (2005).

The large mass at no changes suggests that nominal wages are rigid. The small mass to the left of zero suggests that nominal wages are downwardly rigid. It is worth noting that the sample period used by Gottschalk comprises the 1990-1991 U.S. recession, for it implies that the observed scarcity of nominal wage cuts took place in the context of elevated unemployment.

Barattieri, Basu, and Gottschalk (2012) report similar findings using data from the 1996-1999 SIPP panel. Figure 8.6 shows that during this period the distribution of nominal wage changes was also truncated to the left of zero. The figure does not show the frequency corresponding to no wage changes. The reason for this omission is that the mass at zero changes is high, so that including it would make the rest of the figure less visible.

Figure 8.6: Distribution of Non-Zero Nominal Wage Changes, U.S. 1996-1999. Source: Barattieri, Basu, and Gottschalk (2012).

Nominal wage cuts were also rare in the United States during the Great Contraction that started in 2007. Daly, Hobijn, and Lucking (2012) use micro panel data on wage changes of individual workers to construct the empirical distribution of wage changes in 2011.⁷ Figure 8.7 displays the empirical distribution of annual nominal wage changes in 2011. The pattern is similar to those displayed above: There is significant mass at no wage changes and more mass to the right of no changes than to the left. To emphasize the asymmetry in the distribution, Daly et al. plot with a dashed line a (symmetric) normal distribution.

⁵ See exercise 8.4.
⁶ The SIPP has been conducted by the Bureau of Labor Statistics since 1983. It is a stratified representative sample of the U.S. population. Individuals are interviewed every four months for a period of 24 to 48 months.
⁷ The data comes from the Current Population Survey (CPS), which, like the SIPP, is collected by the Bureau of Labor Statistics.
The figure suggests that during the Great Contraction, despite the fact that unemployment was high and inflation was below its 2 percent target, nominal wage cuts were less frequent than wage increases.

Figure 8.7: Distribution of Nominal Wage Changes, U.S. 2011. Source: Daly, Hobijn, and Lucking (2012).

A similar pattern of downward nominal wage rigidity based on microeconomic data is found in other developed countries. See, for example, Fortin (1996) for Canada, Kuroda and Yamamoto (2003) for Japan, and Fehr and Goette (2005) for Switzerland. Downward nominal wage rigidity is also found in industry-level wage data. See, for example, Holden and Wulfsberg (2008) for evidence from 19 OECD countries over the period 1973-1999.

Evidence From Informal Labor Markets

The evidence referenced above is based on data from formal labor markets in developed economies. However, a similar pattern of asymmetry in nominal wage adjustments emerges in informal labor markets located in poor areas of the world. Kaur (2012), for example, studies the behavior of nominal wages in casual daily agricultural labor markets in rural India. Specifically, she examines market-level wage and employment responses to local rainfall shocks in 500 Indian districts from 1956 to 2008. She finds that nominal wage adjustment is asymmetric. In particular, nominal wages rise in response to positive rain shocks but fail to fall during droughts. In addition, negative rain shocks cause labor rationing and unemployment. Importantly, inflation, which is uncorrelated with local rainfall shocks, moderates these effects. During periods of relatively high inflation, local droughts are more likely to result in lower real wages and less labor rationing. This effect suggests that nominal rather than real wages are downwardly rigid.
Evidence From The Great Depression of 1929

According to the National Bureau of Economic Research, the Great Depression in the United States started in August 1929 and ended in March 1933. By 1931, the economy had experienced an enormous contraction. Employment in the manufacturing sector in 1931 stood 31 percent below its 1929 level. Figure 8.8 shows that in spite of a highly distressed labor market, nominal wages remained remarkably firm.⁸ Between 1929:8 and 1931:8, the nominal wage rate, shown with a solid line in the figure, fell by only 0.6 percent per year. By contrast, consumer prices, shown with a broken line, fell by 6.6 percent per year over the same period. As a result, in the first two years of the Great Depression, real wages increased by 12 percent in the midst of massive unemployment.

Figure 8.8: Nominal Wage Rate and Consumer Prices, United States 1923:1-1935:7. Note: The solid line is the natural logarithm of an index of manufacturing money wage rates (NBER data series m08272b), 1929:8 equal to zero. The broken line is the logarithm of the consumer price index (BLS series ID CUUR0000SA0), 1929:8 equal to zero.

In the second half of the depression, nominal wages fell, but nominal prices fell even faster. As a result, by the end of the Great Depression the real wage rate was 26 percent above its 1929 level. The observed resilience of nominal wages in a context of extreme underutilization of the labor force is indicative of downward nominal wage rigidity.

⁸ The graph shows the nominal wage rate as opposed to average hourly earnings. The problem with the latter series is that it includes compensation for overtime work. Contractions in overtime employment cause drops in average hourly earnings that are not reflective of downward wage flexibility.
Evidence From Emerging Countries and Inference on γ

The empirical literature surveyed thus far establishes that nominal wage rigidity is significant and asymmetric. However, because it uses data from developed and poor regions of the world, it does not provide evidence on the importance of downward nominal wage rigidity in emerging countries. In addition, it does not lend itself to calibrating the wage-rigidity parameter γ, because it does not provide information on the speed of nominal downward wage adjustments. In Schmitt-Grohé and Uribe (2010), we examine data from emerging countries and propose an empirical strategy for identifying γ. The approach consists in observing the behavior of nominal hourly wages during periods of rising unemployment. We focus on episodes in which an economy undergoing a severe recession keeps the nominal exchange rate fixed. Two prominent examples are Argentina during the second half of the Convertibility Plan (1996-2001) and the periphery of Europe during the great recession of 2008.

Figure 8.9 displays subemployment (defined as the sum of unemployment and underemployment) and nominal hourly wages expressed in pesos for Argentina during the period 1996-2001. The Convertibility Plan was in effect from April 1991 to December 2001 and consisted in a peg of the Argentine peso to the U.S. dollar at a one-for-one rate with free convertibility. The subperiod 1998-2001 is of particular interest because during that time the Argentine central bank was holding on to the currency peg in spite of the fact that the economy was undergoing a severe contraction and both unemployment and underemployment were in a steep ascent. The contraction was caused by a combination of large adverse external disturbances, including a collapse in export commodity prices, a 100-percent devaluation in Brazil, Argentina's main trading partner, in 1999, and a deterioration in international borrowing conditions following the southeast Asian and Russian financial crises of 1998.

Figure 8.9: Nominal Wages and Unemployment in Argentina, 1996-2001. Left panel: nominal wage, W (pesos per hour); right panel: unemployment rate plus underemployment rate. Source: Nominal exchange rate and nominal wage, BLS. Subemployment, INDEC.

In the context of a flexible-wage model, one would expect that the rise in unemployment would be associated with falling real wages. With the nominal exchange rate pegged, the fall in real wages must materialize through nominal wage deflation. However, during this period, the nominal hourly wage never fell. Indeed, it increased from 7.87 pesos in 1998 to 8.14 pesos in 2001. The model developed in this chapter predicts that with rising unemployment, the lower bound on nominal wages should be binding and therefore γ should equal the gross growth rate of nominal wages. We wish to parameterize the model so that one period corresponds to one quarter. An estimate of the parameter γ can then be constructed as the average quarterly growth rate of nominal wages over the three-year period considered, that is, γ = (W_{2001}/W_{1998})^{1/12}. This yields a value of γ of 1.0028. This value means that, because of the presence of downward nominal wage rigidity, nominal wages must rise by at least 1.12 percent per year. In order for this estimate of γ to represent an appropriate measure of wage rigidity in the context of the theoretical model, it must be adjusted to account for the fact that the model abstracts from foreign inflation and long-run productivity growth (see exercises 8.5 and 8.6). To carry out this adjustment, we use the growth rate of the U.S. GDP deflator as a proxy for foreign inflation. Between 1998 and 2001, the U.S. GDP deflator grew by 1.77 percent per year on average.
We set the long-run growth rate in Argentina at 1.07 percent per year, to match the average growth rate of Argentine per capita real GDP over the period 1900-2005 reported in García-Cicco et al. (2010). The adjusted value of γ is then given by 1.0028/(1.0107 × 1.0177)^{1/4} = 0.9958.

Finally, we note that during the 1998-2001 Argentine contraction, consumer prices, unlike nominal wages, did fall. The CPI inflation rate was on average −0.86 percent per year over the period 1998-2001. It follows that real wages rose not only in dollar terms but also in terms of CPI units. Incidentally, this evidence provides some support for the assumption, implicit in the theoretical framework developed in this chapter, that downward nominal rigidities are less stringent for product prices than for factor prices.

The second episode from the emerging-market world that we use to document the presence of downward nominal wage rigidity and to infer the value of γ is the great recession of 2008 in the periphery of Europe. Table 8.2 presents an estimate of γ for twelve European economies that are either on the euro or pegging to the euro. The first two columns of the table show the unemployment rate in 2008:Q1 and 2011:Q2. The starting point of this period corresponds to the beginning of the great recession in Europe according to the CEPR Euro Area Business Cycle Dating Committee. The 2008 crisis caused unemployment rates to rise sharply across all twelve countries.
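The arithmetic behind the Argentine estimate of γ can be reproduced directly; the wage levels below are the ones quoted in the text, and the computed values match the reported estimates.

```python
# Reproducing the gamma calibration arithmetic for the Argentine episode.
# Wage levels (pesos per hour) are those quoted in the text.
W_1998 = 7.87
W_2001 = 8.14

# Average quarterly gross wage growth over the 12 quarters from 1998 to 2001
gamma_raw = (W_2001 / W_1998) ** (1 / 12)

# Adjust for U.S. GDP-deflator inflation (1.77% per year) and Argentine
# long-run per capita growth (1.07% per year), converted to quarterly rates
gamma_adj = gamma_raw / (1.0107 * 1.0177) ** (1 / 4)

print(round(gamma_raw, 4))  # 1.0028
print(round(gamma_adj, 4))  # 0.9958
```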
The table also displays the total growth of nominal hourly labor cost in manufacturing, construction, and services (including the public sector) over the thirteen-quarter period 2008:Q1-2011:Q2.⁹

⁹ The public sector is not included for Spain due to data limitations.

Table 8.2: Unemployment, Nominal Wages, and γ: Evidence from the Eurozone

              Unemployment Rate      Wage Growth            Implied
              2008Q1     2011Q2     W_{2011Q2}/W_{2008Q1}   Value of
  Country    (percent)  (percent)      (percent)               γ
  Bulgaria      6.1       11.3           43.3                1.028
  Cyprus        3.8        6.9           10.7                1.008
  Estonia       4.1       12.8            2.5                1.002
  Greece        7.8       16.7           -2.3                0.9982
  Ireland       4.9       14.3            0.5                1.0004
  Italy         6.4        8.2           10.0                1.007
  Lithuania     4.1       15.6           -5.1                0.996
  Latvia        6.1       16.2           -0.6                0.9995
  Portugal      8.3       12.5            1.91               1.001
  Spain         9.2       20.8            8.0                1.006
  Slovenia      4.7        7.9           12.5                1.009
  Slovakia     10.2       13.3           13.4                1.010

Note: W is an index of nominal average hourly labor cost in manufacturing, construction, and services. Unemployment is the economy-wide unemployment rate. Source: EuroStat.

Despite the large surge in unemployment, nominal wages grew in most countries, and in those in which they fell, the decline was modest. The implied value of γ, shown in the last column of table 8.2, is given by the average growth rate of nominal wages over the period considered (that is, γ = (W_{2011:Q2}/W_{2008:Q1})^{1/13}). The estimated values of γ range from 0.996 for Lithuania to 1.028 for Bulgaria. To adjust γ for foreign inflation, we proxy this variable with the inflation rate in Germany. Over the thirteen-quarter sample period considered in table 8.2, inflation in Germany was 3.6 percent, or about 0.3 percent per quarter. To adjust for long-run growth, we use the average growth rate of per capita output in the southern periphery of Europe of 1.2 percent per year, or 0.3 percent per quarter.¹⁰ Allowing for these effects suggests an adjusted estimate of γ in the interval [0.990, 1.022].
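The implied-γ column of table 8.2 follows from the cumulative wage-growth column; a sketch for a few of the countries, including the adjustment for German inflation and long-run growth described above:

```python
# Reproducing a few entries of the implied-gamma column of Table 8.2:
# gamma = (W_2011Q2 / W_2008Q1)^(1/13), with cumulative wage growth in percent.
wage_growth_pct = {"Bulgaria": 43.3, "Greece": -2.3, "Lithuania": -5.1, "Spain": 8.0}

implied_gamma = {c: (1 + g / 100) ** (1 / 13) for c, g in wage_growth_pct.items()}

# The adjustment described in the text: divide by quarterly German inflation
# (~0.3% per quarter) and quarterly long-run per capita growth (~0.3% per quarter)
adjusted_gamma = {c: g / (1.003 * 1.003) for c, g in implied_gamma.items()}

print(round(implied_gamma["Bulgaria"], 3))   # 1.028
print(round(implied_gamma["Lithuania"], 3))  # 0.996
```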
The Case of Equal Intra- And Intertemporal Elasticities of Substitution

For the remainder of this chapter, we assume a CRRA form for the period utility function, a CES form for the aggregator function, and an isoelastic form for the production function of nontradables:

U(c) = (c^{1−σ} − 1)/(1 − σ),

A(c^T, c^N) = [ a (c^T)^{1−1/ξ} + (1 − a)(c^N)^{1−1/ξ} ]^{1/(1−1/ξ)},

and

F(h) = h^α,

with σ, ξ, a, α > 0. These functional forms are commonplace in the quantitative business-cycle literature. A case that is of significant interest analytically, computationally, and empirically is one in which the intra- and intertemporal elasticities of consumption substitution are equal to each other, that is, the case in which

ξ = 1/σ.

This restriction greatly facilitates the characterization of equilibrium, because it renders the equilibrium processes of external debt, d_t, and consumption of tradables, c^T_t, independent of the level of activity in the nontraded sector. This implication is also of interest because it means that any welfare differences across exchange-rate regimes must be attributable to the effects of exchange-rate policy on unemployment and not to transitional dynamics in external debt. Finally, we will argue in section 8.7 that the case of equal intra- and intertemporal elasticities of substitution is empirically plausible.

¹⁰ This figure corresponds to the average growth rate of per capita real GDP in Greece, Spain, Portugal, and Italy over the period 1990-2011 according to the World Development Indicators.

To see that setting ξ = 1/σ renders the equilibrium levels of external debt and tradable consumption independent of the level of activity in the nontraded sector, note that under this restriction we have that

U(A(c^T_t, c^N_t)) = [ a (c^T_t)^{1−σ} + (1 − a)(c^N_t)^{1−σ} − 1 ] / (1 − σ),

which is additively separable in c^T_t and c^N_t. Therefore, equation (8.14) becomes

λ_t = a (c^T_t)^{−σ},

which is independent of c^N_t. Thus, the equilibrium processes {c^T_t, d_{t+1}, μ_t, λ_t}_{t=0}^∞ can be obtained as the solution to the subsystem of equilibrium conditions (8.10)-(8.15). Clearly, this result holds for any exchange-rate policy and for any degree of wage rigidity. In particular, we have that when ξ = 1/σ, the equilibrium behavior of d_t and c^T_t is the same under a currency peg, under the optimal exchange-rate policy, and under full wage flexibility. We impose this parameter restriction in the quantitative analysis that follows. For an analysis of the case σ ≠ 1/ξ, see Schmitt-Grohé and Uribe (2010).

Approximating Equilibrium Dynamics

The equilibrium dynamics under the optimal exchange-rate policy can be characterized as the solution to the following value function problem:

v^OPT(y^T_t, r_t, d_t) = max_{d_{t+1}, c^T_t} [ U(A(c^T_t, F(h̄))) + β E_t v^OPT(y^T_{t+1}, r_{t+1}, d_{t+1}) ]

subject to (8.10) and (8.11), where the function v^OPT(y^T_t, r_t, d_t) represents the welfare level of the representative agent under the full-employment exchange-rate policy in state (y^T_t, r_t, d_t). To approximate the solution to this dynamic programming problem, we apply the method of value function iterations over a discretized version of the state space. We assume that the exogenous driving forces y^T_t and r_t follow a joint discrete Markov process with 21 points for y^T_t and 11 points for r_t. In section 8.7.1, we econometrically estimate the parameters defining this process. We discretize the level of debt with 501 equally spaced points in the interval 1 to 8. The solution of the above dynamic programming problem yields the equilibrium processes for d_{t+1} and c^T_t for all possible states (y^T_t, r_t, d_t). Given these processes, the equilibrium processes of all other endogenous variables under the optimal exchange-rate policy can be readily obtained. The variable h_t equals h̄ for all t, p_t can be obtained from (8.16), and w_t from (8.17).
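The value-function-iteration step can be illustrated with a stripped-down version of the tradable block (consumption of tradables and debt only, which is legitimate under ξ = 1/σ since the nontraded block is separable). The grids, Markov chain, and parameter values below are illustrative placeholders, not the chapter's calibration:

```python
import numpy as np

# Minimal value-function-iteration sketch for the tradable block:
# states (y^T, d), control d', budget c^T = y^T - d + d'/(1+r).
beta, sigma, r = 0.9635, 2.0, 0.0316
y_grid = np.array([0.9, 1.0, 1.1])              # tradable endowment grid
P = np.array([[0.8, 0.2, 0.0],                  # Markov transition probabilities
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
d_grid = np.linspace(0.0, 4.0, 101)             # external debt grid

# c[i, j, k]: consumption when y = y_grid[i], d = d_grid[j], d' = d_grid[k]
c = (y_grid[:, None, None] - d_grid[None, :, None]
     + d_grid[None, None, :] / (1 + r))
# CRRA utility; heavily penalize infeasible (non-positive consumption) choices
u = np.where(c > 0,
             (np.maximum(c, 1e-12) ** (1 - sigma) - 1) / (1 - sigma),
             -1e10)

v = np.zeros((len(y_grid), len(d_grid)))
for _ in range(2000):
    Ev = P @ v                                  # E[v(y', d') | y], shape (3, 101)
    v_new = (u + beta * Ev[:, None, :]).max(axis=2)
    converged = np.max(np.abs(v_new - v)) < 1e-8
    v = v_new
    if converged:
        break

policy = (u + beta * (P @ v)[:, None, :]).argmax(axis=2)  # index of chosen d'
```

The same iteration, run on the chapter's 21 x 11 x 501 grid with the estimated Markov process, delivers the policy functions for d_{t+1} and c^T_t described in the text.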
Finally, if a particular full-employment exchange-rate policy has been chosen from the family defined in equation (8.24), it can readily be backed out.

The maintained parameter restriction ξ = 1/σ implies that the solution for d_{t+1} and c^T_t just described also applies to the fixed-exchange-rate economy. Under a currency peg, however, the past real wage, w_{t−1}, becomes relevant for the determination of employment, current wages, nontradable output, and the relative price of nontradables. As a result, the fixed-exchange-rate economy has an additional endogenous state variable, w_{t−1}. To discretize the past real wage, we use 500 points between 0.25 and 6. Points are equally spaced on a logarithmic scale. Computing the equilibrium level of the real wage, w_t, given values for w_{t−1} and c^T_t requires solving a static problem and involves no iterative procedure. Specifically, begin by assuming that h_t = h̄. Then, use (8.23) to obtain the full-employment real wage, ω(c^T_t) (given a value for c^T_t, this is just a number). Next, check whether the full-employment real wage satisfies the lower bound on nominal wages when ε_t = 1, that is, whether ω(c^T_t) ≥ γ w_{t−1}. If so, then w_t = ω(c^T_t). Otherwise, the lower bound on nominal wages is binding and w_t = γ w_{t−1}. The resulting value of w_t will in general not coincide exactly with any point in the grid, so pick the closest grid point. Given w_t and c^T_t, all other endogenous variables can be easily obtained. For example, h_t and p_t are the solution to (8.16) and (8.17).

When ξ ≠ 1/σ, approximating the dynamics of the model under a currency peg is computationally more demanding. The reason is that in this case the dynamics of debt and tradable consumption are affected by the level of activity in the nontraded sector. As a result, the equilibrium dynamics of d_{t+1} and c^T_t can no longer be obtained separately from the dynamics of variables pertaining to the nontraded sector.
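The static wage computation under a peg can be written as a small function; the schedule ω(·) below is an arbitrary increasing stand-in used only for illustration, not the expression in equation (8.23):

```python
# Static wage determination under a currency peg (epsilon_t = 1):
# w_t = omega(c^T_t) if the wage floor is slack, and w_t = gamma * w_{t-1}
# otherwise, in which case the floor binds and unemployment emerges.

def wage_under_peg(cT, w_lag, gamma, omega):
    """Return (w_t, floor_binds) given tradable consumption and last wage."""
    w_full = omega(cT)              # full-employment real wage omega(c^T_t)
    if w_full >= gamma * w_lag:     # lower bound on wages is slack
        return w_full, False
    return gamma * w_lag, True      # lower bound binds: w_t = gamma * w_{t-1}

# Illustrative increasing schedule (NOT equation (8.23) from the text)
omega = lambda cT: cT

w_boom, binds_boom = wage_under_peg(cT=1.0, w_lag=1.0, gamma=0.99, omega=omega)
w_bust, binds_bust = wage_under_peg(cT=0.8, w_lag=1.0, gamma=0.99, omega=omega)
print(w_boom, binds_boom)  # 1.0 False
print(w_bust, binds_bust)  # 0.99 True
```

When the floor binds, employment falls below h̄ and is pinned down by conditions (8.16) and (8.17) at the constrained wage, as described in the text.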
In addition, because of the distortions created by nominal rigidities, aggregate dynamics cannot be cast in terms of a Bellman equation without introducing additional state variables (such as the individual level of debt, which households perceive as distinct from its aggregate counterpart). In Schmitt-Grohé and Uribe (2010), we show that one can approximate the solution by Euler equation iteration over a discretized version of the state space (y^T_t, r_t, d_t, w_{t−1}).

Parameterization of the Model

We calibrate the model at a quarterly frequency. The model contains two types of parameters: structural parameters pertaining to preferences, technologies, and nominal frictions, and parameters defining the stochastic process of the exogenous driving forces. We begin by estimating the latter set of parameters.

Estimation Of The Exogenous Driving Process

We assume that the law of motion of tradable output and the country interest rate is given by the following autoregressive process:

[ ln y^T_t           ]       [ ln y^T_{t−1}           ]
[ ln((1+r_t)/(1+r))  ]  = A  [ ln((1+r_{t−1})/(1+r))  ]  + ν_t,        (8.32)

where ν_t is a 2×1 white noise distributed N(0, Σ). The parameter r denotes the deterministic steady-state value of the country interest rate r_t. We estimate this system using Argentine data over the period 1983:Q1 to 2001:Q4. We exclude the period post 2001 because Argentina was in default between 2002 and 2005 and excluded from international capital markets. The default was reflected in excessively high country premia (see figure 8.10(b)). Excluding this period is in order because interest rates were not allocative, which is at odds with our maintained assumption that the country never loses access to international financial markets. This is a conservative choice, for inclusion of the default period would imply a more volatile driving force, accentuating the real effects of currency pegs on unemployment.
Our empirical measure of y^T_t is the cyclical component of Argentine GDP in agriculture, forestry, fishing, mining, and manufacturing.¹¹ As in the empirical business-cycle analysis of chapter 1, we obtain the cyclical component by removing a log-quadratic time trend. Figure 8.10(a) displays the resulting time series.

¹¹ The data were downloaded from www.indec.mecon.ar.

Figure 8.10: Traded Output and Interest Rate in Argentina, 1983:Q1-2008:Q3. Panel (a): traded output, ln y^T_t; panel (b): interest rate, r_t (percent per year). Note: Traded output is expressed in log-deviations from a quadratic time trend. Source: See the main text.

We measure the country interest rate as the sum of the EMBI+ spread for Argentina and the 90-day Treasury-bill rate, deflated using a measure of expected dollar inflation.¹² Specifically, we construct the time series for the quarterly real Argentine interest rate, r_t, as

1 + r_t = (1 + i_t) E_t [ 1/(1 + π_{t+1}) ],

where i_t denotes the dollar interest rate charged to Argentina in international financial markets and π_t is U.S. CPI inflation. For the period 1983:Q1 to 1997:Q4, we take i_t to be the Argentine interest rate series constructed by Neumeyer and Perri (2005).¹³ For the period 1998:Q1 to 2001:Q4, we measure i_t as the sum of the EMBI+ spread and the 90-day Treasury bill rate, which is in line with the definition used in Neumeyer and Perri since 1994:Q2. We measure E_t[1/(1 + π_{t+1})] by the fitted component of a regression of 1/(1 + π_{t+1}) onto a constant and two lags. This regression uses quarterly data on the growth rate of the U.S. CPI index from 1947:Q1 to 2010:Q2. Our OLS estimates of the matrices A and Σ and of the scalar r are

A = [  0.79  −1.36 ]      Σ = [  0.00123  −0.00008 ]      r = 0.0316.
    [ −0.01   0.86 ],         [ −0.00008   0.00004 ],

According to these estimates, both ln y^T_t and r_t are highly volatile, with unconditional standard deviations of 12.2 percent and 1.7 percent per quarter (6.8 percent per year), respectively.
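The equation-by-equation OLS step can be sketched as follows; since the underlying Argentine series are not reproduced here, the sketch fits the regression on data simulated from the point estimates themselves, and should recover A and Σ approximately:

```python
import numpy as np

# Sketch of the OLS estimation of the VAR(1) driving process
# x_t = A x_{t-1} + nu_t, with x_t = [ln y^T_t, ln((1+r_t)/(1+r))]'.
# Data are simulated from the reported point estimates (placeholder sample).
rng = np.random.default_rng(0)
A_true = np.array([[0.79, -1.36], [-0.01, 0.86]])
Sigma_true = np.array([[0.00123, -0.00008], [-0.00008, 0.00004]])
chol = np.linalg.cholesky(Sigma_true)

T = 200_000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + chol @ rng.standard_normal(2)

# Regress x_t on x_{t-1}; each row of A_hat collects one equation's slopes
X, Y = x[:-1], x[1:]
A_hat = np.linalg.solve(X.T @ X, X.T @ Y).T
resid = Y - X @ A_hat.T
Sigma_hat = resid.T @ resid / len(resid)
```

With the actual detrended-output and real-interest-rate series in place of the simulated data, the same two lines of algebra produce the estimates reported above.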
Also, the unconditional contemporaneous correlation between ln y^T_t and r_t is high and negative at −0.86. This means that periods of relatively low traded output are associated with high interest rates and vice versa. The estimated joint autoregressive process implies that both traded output and the real interest rate are highly persistent, with first-order autocorrelations of 0.95 and 0.93, respectively.

Table 8.3: Calibration

  Parameter   Value    Description
  γ           0.99     Degree of downward nominal wage rigidity
  σ           2        Inverse of intertemporal elasticity of consumption
  y^T         1        Steady-state tradable output
  h̄           1        Labor endowment
  a           0.26     Share of tradables
  ξ           0.5      Elasticity of substitution between tradables and nontradables
  α           0.75     Labor share in nontraded sector
  β           0.9635   Quarterly subjective discount factor

¹² EMBI+ stands for Emerging Markets Bond Index Plus. The EMBI+ tracks total returns for traded external debt instruments (external meaning foreign currency denominated fixed income) in the emerging markets. Included in the EMBI+ are U.S.-dollar denominated Brady bonds, Eurobonds, and traded loans issued by sovereign entities. Instruments in the EMBI+ must have a minimum face value outstanding of $500 million, a remaining life of 2.5 years or more, and must meet strict criteria for secondary market trading liquidity. The EMBI+ is produced by J. P. Morgan. The time series starts in 1993 or later depending on the country and has a daily frequency. We convert the daily time series into a quarterly time series by taking the arithmetic average of daily observations within each quarter.
¹³ The time series is available online at www.fperri.net/data/neuperri.xls. For the period 1983:Q1 to 1994:Q1 these authors compute the interest rate as the sum of the 90-day U.S. T-bill rate and their own calculation of a spread on Argentine bonds. For the period 1994:Q1 to 1997:Q4 they use the EMBI+ spread.
Finally, we estimate a steady-state real interest rate of 3.16 percent per quarter, or 12.6 percent per year. This high average value reflects the fact that our sample covers a period in which Argentina underwent a great deal of economic turbulence.

We discretize the joint AR(1) process for y^T_t and r_t given in equation (8.32) using 21 equally spaced points for ln y^T_t and 11 equally spaced points for ln((1 + r_t)/(1 + r)). The first and last values of the grids for ln y^T_t and ln((1 + r_t)/(1 + r)) are set to ±√10 times the respective standard deviations (±0.3858 and ±0.0539, respectively). We construct the transition probability matrix of the state (ln y^T_t, ln((1 + r_t)/(1 + r))) by simulating a time series of length 1,000,000 drawn from the system (8.32). We associate each observation in the time series with one of the 231 possible discrete states by distance minimization. The resulting discrete-valued time series is used to compute the probability of transitioning from a particular discrete state in one period to a particular discrete state in the next period. The resulting transition probability matrix captures well the covariance matrices of order 0 and 1.

Calibration Of Preferences, Technologies, and Nominal Rigidities

The values assigned to the structural parameters are shown in table 8.3. We set the parameter γ governing the degree of downward nominal wage rigidity at 0.99. This is a conservative value. The estimates of γ based on data from Argentina and the periphery of Europe presented in section 8.4 suggest that γ, after correcting for foreign inflation and long-run growth, lies in the interval [0.99, 1.022]. We set γ to the lower bound of this interval, which represents the greatest degree of downward wage flexibility consistent with the data used in the estimation. We normalize the steady-state levels of output of tradables and hours at unity.
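The simulation-based discretization can be sketched as follows, with a deliberately coarse grid (5 × 3 rather than 21 × 11) and a shorter simulated sample; the nearest-point assignment here uses plain Euclidean distance, which is one of several reasonable ways to implement the distance minimization mentioned in the text:

```python
import numpy as np

# Sketch of the transition-matrix-by-simulation step: simulate the VAR(1),
# snap each observation to its nearest grid point, and count transitions.
rng = np.random.default_rng(1)
A = np.array([[0.79, -1.36], [-0.01, 0.86]])
chol = np.linalg.cholesky(np.array([[0.00123, -0.00008], [-0.00008, 0.00004]]))

y_grid = np.linspace(-0.3858, 0.3858, 5)   # ln y^T grid (21 points in the text)
r_grid = np.linspace(-0.0539, 0.0539, 3)   # interest-rate grid (11 in the text)
grid = np.array([[y, r] for y in y_grid for r in r_grid])  # all 15 joint states
n = len(grid)

T = 100_000                                # 1,000,000 quarters in the text
x = np.zeros(2)
counts = np.zeros((n, n))
prev = np.argmin(((grid - x) ** 2).sum(axis=1))
for _ in range(T):
    x = A @ x + chol @ rng.standard_normal(2)
    cur = np.argmin(((grid - x) ** 2).sum(axis=1))   # nearest joint state
    counts[prev, cur] += 1
    prev = cur

# Normalize counts into transition probabilities; unvisited rows get uniform mass
row_sums = counts.sum(axis=1, keepdims=True)
P = np.where(row_sums > 0, counts / np.maximum(row_sums, 1), 1.0 / n)
```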
Then, if the steady-state trade-balance-to-output ratio is small, the parameter a is approximately equal to the share of traded output in total output. We set this parameter at 0.26, which is the share of traded output (as defined above) observed in Argentine data over the period 1980:Q1-2010:Q1. Uribe (1997) presents evidence suggesting that the labor share in the nontraded sector in Argentina is 0.75. Accordingly we set α equal to this value. As in the RBC models studied in chapter 4, we set σ equal to 2. As mentioned there, this value is commonly used in business-cycle studies. Using time series data for Argentina over the period 1993Q1-2001Q3, Gonz´ alez Rozada et al. (2004) estimate the elasticity of substitution between traded and nontraded consumption, ξ, to be 0.44. This estimate is consistent with the cross-country estimates of Stockman and Tesar (1995). These authors include in their estimation both developed and developing countries. Restricting the sample to include only developing countries yields a value of ξ of 0.43 (see Akinci, 2011). We set ξ equal to 0.5. We pick this particular value for two reasons. First, it is close to the value suggested by existing empirical studies. Second this value is the reciprocal of the one assigned to σ. As discussed in subsection 8.5, the restriction ξ = 1/σ implies that the dynamics of external debt and tradable consumption are independent of the exchange-rate policy or the degree of nominal wage rigidity. This implication greatly facilitates the numerical characterization of the equilibrium dynamics.14 We set d¯ at the natural debt limit, which we define as the level of external debt that can be supported with zero tradable consumption when the household perpetually receives the lowest 14 In Schmitt-Groh´e and Uribe (2010), we consider the case in which 1/σ = 0.2 and ξ = 0.44. 
possible realization of the tradable endowment, y_min^T, and faces the highest possible realization of the interest rate, r^max. Formally,

\[ \bar{d} \equiv y^T_{\min}\,\frac{1+r^{\max}}{r^{\max}}. \]

Given our discretized estimate of the exogenous driving process, d̄ equals 8.34. We calibrate the subjective discount factor β to match the average external-debt-to-output ratio of 23 percent per year observed in Argentina over the period 1983-2001 (Lane and Milesi-Ferretti, 2007). We set β at 0.9635. This value yields an average debt-to-output ratio of 23.2 percent per year under the optimal exchange-rate policy and of 21.5 percent under a currency peg.

Open Economy Macroeconomics, Chapter 8

External Crises and Exchange-Rate Policy: A Quantitative Analysis

We are now ready to quantitatively characterize the response of the model economy to a large negative external shock. We have in mind extraordinary contractions like the 1989 or 2001 crises in Argentina, or the 2008 great recession in peripheral Europe. During the great Argentine crises of 1989 and 2001, for example, traded output fell by about two standard deviations within a period of two and a half years, and the country premium experienced equally large increases. We are particularly interested in contrasting the model economy's adjustment to this type of external shock under the two polar exchange-rate arrangements we have been considering thus far, a currency peg and the optimal exchange-rate policy.

Definition of an External Crisis

We define an external crisis that starts in period t as a situation in which tradable output is at or above trend in quarter t and at least two standard deviations below trend in quarter t + 10. To characterize the typical behavior of the economy during such episodes, we simulate the model for 20 million quarters and identify windows (t − 10, t + 30) in which movements in traded output
conform to our definition of an external crisis. Then, for each variable of interest we average all windows and subtract the respective mean taken over the entire sample of 20 million quarters. The beginning of the typical crisis is normalized at t = 0.

Figure 8.11: The Source of a Crisis. Left panel: tradable output (percent deviation from trend); right panel: interest rate minus its mean (percent per year); horizontal axes: quarters, t.

Figure 8.11 displays the predicted average behavior of the two exogenous variables, traded output and the country interest rate, during a crisis. The downturn in traded output can be interpreted either as a drastic fall in the quantity of tradables produced by the economy or as an exogenous collapse in the country's terms of trade. The figure shows that at the trough of the crisis (period 10), tradable output is 25 percent below trend. The contraction in tradable output is accompanied by a sharp increase in the interest rate that international financial markets charge to the emerging economy. The country interest rate peaks in quarter 10 at about 14 percentage points per annum (about two standard deviations) above its average value. This behavior of the interest rate is dictated by the estimated high negative correlation between tradable output and country interest rates. Indeed, the typical crisis would look quite similar to the one shown in figure 8.11 if we had defined a crisis episode as one in which the country interest rate is at or below its average level in period 0 and at least 2 standard deviations above its average level in period 10. How do the endogenous variables of the model, such as unemployment, real wages, consumption, the trade balance, and inflation, respond to these large negative external shocks? As we will see next, the answer depends crucially on the exchange-rate policy put in place by the monetary authority.
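As a cross-check on the calibration above, the natural-debt-limit value d̄ = 8.34 can be reproduced from the grid bounds of the discretized driving process. The check below assumes steady-state tradable output normalized to one and the steady-state quarterly rate of 3.16 percent, as stated in the calibration.

```python
import math

# Reproduce d_bar = y_min^T * (1 + r_max) / r_max from the reported grid bounds.
# Assumes steady-state tradable output of 1 and r = 3.16 percent per quarter.
y_T_min = math.exp(-0.3858)                  # lowest grid point for ln(y_T)
r_max = (1 + 0.0316) * math.exp(0.0539) - 1  # highest grid point for r
d_bar = y_T_min * (1 + r_max) / r_max
print(round(d_bar, 2))  # ≈ 8.34
```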
Crisis Dynamics Under A Currency Peg

Figure 8.12 depicts with solid lines the response of the endogenous variables to the external crisis defined in subsection 8.8.1 when the exchange-rate policy takes the form of a currency peg. The large exogenous increase in the country interest rate and the large fall in the tradable endowment cause households to sharply reduce consumption of tradable goods. At the trough of the crisis, in quarter 10, tradable consumption is about 33 percent below trend. This adjustment is so pronounced that, in spite of the fact that the endowment of tradables falls significantly during the crisis, the trade balance actually improves. The bottom left panel of figure 8.12 shows that the trade-balance-to-output ratio rises by about 3 percentage points between the beginning and the trough of the crisis. The severity of the contraction in the absorption of tradables is driven primarily by the country-interest-rate hike, which causes a substitution effect against current consumption and a negative wealth effect stemming from an increase in interest payments on the external debt. The elevated cost of debt service causes the country's external debt to increase during the crisis, in spite of the positive trade balance. The stock of external debt increases not only as a fraction of output (see the bottom right panel of figure 8.12), but also in level (not shown in figure 8.12). The contraction in the traded sector spills over to the nontraded sector in ways that can be highly deleterious. The full-employment real wage, ω(c_t^T), shown with a broken line in the top right panel of figure 8.12, falls by 66 percent between periods 0 and 10. By contrast, the real wage, w_t, shown with a solid line in the top right panel of figure 8.12, falls by only 10 percent.

Figure 8.12: Crisis Dynamics: The Role Of Exchange-Rate Policy. Panels: real wage (in terms of tradables); unemployment rate; consumption of tradables; relative price of nontradables (P_t^N/E_t); annual CPI inflation rate; annualized devaluation rate; trade-balance-to-output ratio; debt-to-output ratio (annual). Solid lines: currency peg; broken lines: optimal exchange-rate policy.
The reason for the insufficient downward adjustment in the real wage is, of course, the combination of downward nominal wage rigidity and a currency peg. Recall that the real wage, expressed in terms of tradables, equals the ratio of the nominal wage, W_t, and the nominal exchange rate, E_t. Therefore, a fall in the real wage requires either a fall in nominal wages or a depreciation of the currency (i.e., an increase in E_t), or a combination of both. Due to downward nominal wage rigidity, nominal wages can fall by at most 1 percent per quarter (since γ equals 0.99). At the same time, because of the currency peg, the nominal exchange rate is constant over time. It follows that the real wage can fall by at most 1 percent per quarter. Thus, real wages fall by only 10 percent between the beginning of the crisis in period 0 and the beginning of the recovery in period 10, more than 50 percentage points short of what would be necessary to ensure full employment. The sluggish downward adjustment of the real wage causes massive disequilibrium in the labor market. The top left panel of figure 8.12 shows that the rate of involuntary unemployment increases to 25 percent at the trough of the crisis. Further, unemployment is highly persistent. Five years after the trough of the crisis, the unemployment rate remains more than 7 percentage points above average. The persistence of unemployment is due to the slow downward adjustment of real wages.
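The 10 percent figure is just the 1-percent-per-quarter floor compounded over the ten quarters of the contraction; a quick check:

```python
# Under a peg E_t is constant and W_t >= gamma * W_{t-1}, so the real wage
# W_t/E_t can fall at most by the factor gamma = 0.99 each quarter.
gamma = 0.99
max_fall_10q = 1 - gamma ** 10  # largest feasible cumulative decline over 10 quarters
print(round(100 * max_fall_10q, 1))  # ≈ 9.6, i.e. roughly 10 percent
```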
In spite of the large contraction in aggregate demand, the relative price of nontradables in terms of tradables, P_t^N/E_t, falls little during the crisis. This is because labor costs (the real wage) remain too high, making it unprofitable for firms to implement large price cuts. As a consequence of the insufficient fall in the relative price of nontradables, households do not face a strong enough incentive to switch expenditure away from tradables and toward nontradables. Put differently, the combination of downward nominal wage rigidity and a currency peg hinders the ability of the price system to signal to firms and consumers that during the crisis there is a relative aggregate scarcity of tradable goods and a relative aggregate abundance of nontradable goods. The predictions of our model suggest that a rigid exchange-rate policy whereby the necessary real depreciation is forced to occur via product-price and wage deflation is highly costly in terms of unemployment and forgone consumption of nontradables. As we will see next, the present economy requires large devaluations to bring about full employment.

Crisis Dynamics Under Optimal Exchange Rate Policy

We have seen in section 8.3 that there exists a whole family of optimal exchange-rate policies, defined by condition (8.24). Each member of this family supports the Pareto optimal real allocation. However, different members of this family can deliver different outcomes for nominal variables, such as the devaluation rate, price inflation, and wage inflation. Here, we consider the following specification of the optimal exchange-rate policy:

\[ \epsilon_t = \frac{w_{t-1}}{\omega(c_t^T)}. \tag{8.33} \]

With γ < 1, this policy clearly belongs to the family defined in condition (8.24). According to this policy, the central bank devalues the domestic currency when the full-employment wage falls below the past real wage and revalues the currency when the full-employment wage exceeds the past real wage.
This policy specification has three interesting properties. First, it ensures that nominal wages are constant at all times. To see this, note that in the Pareto optimal allocation, the real wage equals the full-employment real wage, that is, w_t = ω(c_t^T). Then, using the fact that w_t ≡ W_t/E_t and that ε_t = E_t/E_{t−1}, we can write the above exchange-rate policy as

\[ \frac{E_t}{E_{t-1}} = \frac{W_{t-1}/E_{t-1}}{W_t/E_t}, \]

which implies that W_t = W_{t−1} for all t ≥ 0. Thus, the assumed exchange-rate policy stabilizes the nominal price that suffers from downward rigidity. A second property of the assumed optimal exchange-rate policy is that it implies that the nominal price of nontradables is also constant over time. This can be seen from equation (8.17), which states that p_t F′(h_t) = w_t. Noticing that under the Pareto optimal allocation h_t is constant and equal to h̄, and that p_t = P_t^N/E_t, we have that in equilibrium P_t^N F′(h̄) = W_t. Since W_t is constant, so is P_t^N. A third property of interest is that the assumed optimal exchange-rate policy implies zero inflation and zero devaluation on average. To see this, note that the relative price of nontradables, p_t, is a stationary variable. That is, it may move over the business cycle, but does not have a trend. Since p_t equals P_t^N/E_t and P_t^N is constant, we have that the nominal exchange rate must also be stationary, that is, E_t does not have a trend. This means that the devaluation rate, ε_t − 1, must be zero on average. Finally, because the nominal price of nontradables, P_t^N, is constant and the nominal price of tradables, E_t, is stationary, it follows that the nominal price of the composite consumption good, P_t, must be stationary. This implies, in turn, that CPI inflation must be zero on average. Figure 8.12 displays with broken lines the average response of the economy to the external crisis defined in subsection 8.8.1 under the optimal exchange-rate policy.
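The first property is easy to verify numerically. In the sketch below the path of full-employment real wages is made up for illustration; feeding it through the rule ε_t = w_{t−1}/ω(c_t^T), with the real wage tracking the full-employment wage each period, leaves the nominal wage unchanged.

```python
# Illustration (made-up numbers): under the rule eps_t = w_{t-1}/omega_t,
# where the real wage equals the full-employment wage omega_t each period,
# the nominal wage W_t = w_t * E_t stays constant.
omega = [1.00, 0.92, 0.81, 0.85, 1.03, 1.10]  # hypothetical full-employment real wages
E = [1.0]          # initial nominal exchange rate
w_prev = omega[0]  # initial real wage
W = [w_prev * E[0]]
for om in omega[1:]:
    eps = w_prev / om          # devaluation factor E_t / E_{t-1}
    E.append(E[-1] * eps)      # devalue when omega falls, revalue when it rises
    w_prev = om                # real wage tracks the full-employment wage
    W.append(w_prev * E[-1])   # implied nominal wage
print(W)  # every entry equals W[0], up to floating-point rounding
```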
The central difference between the optimal exchange-rate policy and a currency peg is that under the optimal exchange-rate policy the external crisis does not spill over to the nontraded sector. Indeed, as we saw in section 8.3, the unemployment rate is nil under the optimal exchange-rate policy. The government ensures full employment through a series of devaluations of the domestic currency, which are quite large under the present parameterization of the model. Figure 8.12 shows that the monetary authority devalues the currency at a rate of about 35 percent per year during the contractionary phase of the crisis (quarters 0 to 10). The main purpose of these large devaluations is to drastically lower the real value of wages in terms of tradables, thereby reducing the labor cost faced by firms. The top right panel of figure 8.12 shows that the real wage, expressed in terms of tradables, falls by more than 60 percent over the first 10 quarters of the crisis. In turn, the decline in real labor costs allows firms to lower the relative price of nontradable goods in terms of tradable goods, which results in a large depreciation of the real exchange rate of over 60 percent (see the right panel on the second row of figure 8.12). This sizable change in relative prices induces households to redirect their spending toward nontradable goods. The large nominal and real depreciation of the currency predicted by the model is in line with the empirical findings of Burstein, Eichenbaum, and Rebelo (2005), who report that the primary force behind the observed large drop in the real exchange rate that occurred after the large devaluations in Argentina (2002), Brazil (1999), Korea (1997), Mexico (1994), and Thailand (1997) was the slow adjustment in the nominal price of nontradable goods. During the crisis, the optimal exchange-rate policy drives the CPI inflation rate to about 10 percent per year (see row 3, column 1 of figure 8.12).
By contrast, under the exchange-rate peg the economy experiences deflation of about 6 percent per year. The model thus speaks with a strong voice against allowing the economy to fall into deflation during a crisis. The particular optimal exchange-rate policy we are considering, given in equation (8.33), has the property of inducing zero devaluation and zero inflation on average. This means that the elevated rates of devaluation and inflation predicted during the crisis must be followed by revaluations and low inflation in the recovery phase. Row 2, column 1 of figure 8.12 shows that revaluations begin as soon as the economy begins to recover.

Devaluations, Revaluations, and Inflation In Reality

This means that the large devaluations during crises predicted by the model must be followed by revaluations once the crisis is over. Are devaluations during crises followed by revaluations during recoveries observed in reality? Figure 8.13 displays the CPI inflation rate and the devaluation rate against the U.S. dollar for Argentina and an average of Brazil, Chile, Colombia, Mexico, Peru, and Uruguay during the 2008 great contraction. Vertical lines mark the beginning and end of the great contraction according to the NBER.

Figure 8.13: Devaluation and Inflation In Latin America: 2006-2011. Left panel: devaluation rate; right panel: inflation rate (both in percent per year). Vertical lines: NBER reference dates.

The crisis, which started in the United States in 2008, arrived in South America one year later. All countries in the sample responded to the crisis with sizable devaluations. This response is in line with the predictions of the model under the optimal exchange-rate policy. In both groups of countries, CPI inflation picked up during the crisis. At the same time, all countries with the exception of Argentina revalued their currencies as soon as the recovery began. Argentina, by contrast, continued to devalue its currency during the recovery.
As predicted by the model, the countries that revalued experienced lower inflation than Argentina.

Empirical Evidence On The Expansionary Effects of Devaluations

Are devaluations expansionary in the way suggested by the model? Here we examine two episodes pointing in this direction. Both involve countries that during a currency peg are hit by severe negative shocks. After some years of increasing unemployment and general economic duress, these countries decide to abandon the fixed exchange-rate regime and allow their currencies to devalue. In both cases, the devaluations were followed by a reduction in real wages and a quick expansion of aggregate employment.

Exiting a Currency Peg: Argentina Post Convertibility

Figure 8.14 displays the nominal exchange rate, subemployment, and nominal hourly wages expressed in pesos and in U.S. dollars for Argentina during the period 1996-2006. As discussed in section 8.4.4, the Argentine peso was pegged to the U.S. dollar from April 1991 to December 2001. Since 1998, Argentina had been undergoing a severe contraction that pushed the subemployment rate to 35 percent. In spite of widespread unemployment and a fixed exchange rate, nominal wages did not decline in this period. In 2002 Argentina abandoned the peg, devaluing the peso by 250 percent (see the upper left panel of the figure). As shown in the lower right panel of the figure, the devaluation coincided with a vertical decline in the real wage by a magnitude proportional to the devaluation. Following the real wage decline, labor market conditions improved quickly. By 2005 the subemployment rate had fallen by 12 percentage points.
We note additionally that the fact that Argentine real wages fell significantly and persistently after the devaluation of December 2001 (bottom right panel of figure 8.14) suggests that the 1998-2001 period was one of censored wage deflation, which further strengthens the view that nominal wages suffer from downward inflexibility. The fact that nominal wages increased after the devaluation is an indication that the size of the devaluation exceeded the one necessary to restore full employment. More importantly, it suggests that nominal wages are not upwardly rigid. Taken together, the dynamics of nominal wages over the period 1998 to 2006 are consistent with the view that nominal wages are downwardly rigid but upwardly flexible.

Figure 8.14: Nominal Wages and Unemployment in Argentina, 1996-2006. Panels: nominal exchange rate, E_t (pesos per U.S. dollar); unemployment rate plus underemployment rate (percent); nominal wage, W_t (pesos per hour); real wage, W_t/E_t (index, 1996=1). Source: nominal exchange rate and nominal wage, BLS; subemployment, INDEC.

Exiting the Gold Standard: Europe 1929 to 1935

Another piece of indirect historical evidence of the expansionary effects of devaluations is provided by the international effects of the Great Depression of 1929 to 1933. Friedman and Schwartz observe that countries that left the gold standard early enjoyed more rapid recoveries than countries that stayed on gold longer. The first group of countries was known as the sterling bloc and consisted of the United Kingdom, Sweden, Finland, Norway, and Denmark, and the second group was known as the gold bloc and was formed by France, Belgium, the Netherlands, and Italy. The sterling bloc countries left gold beginning in 1931, whereas the gold bloc countries stayed on gold much longer, some until 1935.
One can think of the gold standard as a currency union in which members peg their currencies, not to the currency of another member country, but to gold. Thus, abandoning the gold standard is akin to abandoning a currency peg. When the sterling bloc countries left the gold standard, they effectively devalued their currencies, as the price of their currencies in terms of gold went down. The difference in economic performance was associated with the earlier reflation of price levels in the countries that left gold first. Importantly, Eichengreen and Sachs (1986) point out that real wages behaved differently in countries that left the gold standard early and in countries that stuck to it longer. Specifically, they compare the change in real wages and industrial production between 1929 and 1935 in the sterling and gold blocs. Figure 8.15 shows that, relative to their respective 1929 levels, real wages in the sterling bloc countries were lower than real wages in the gold bloc countries. And industrial production in the sterling bloc countries in 1935 exceeded their respective 1929 levels, whereas industrial production in the gold bloc countries was below their respective 1929 levels. This suggests two things. First, countries in which real wages increased less showed stronger growth in industrial production. Second, only the countries that devalued showed moderation in real wage growth. This latter fact is suggestive of downward nominal wage rigidity.

Figure 8.15: Changes In Real Wages and Industrial Production, 1929-1935. Source: Eichengreen and Sachs (1985).

The Welfare Costs of Currency Pegs

Thus far, we have used the theoretical model to compare the performance of currency pegs and the optimal exchange-rate policy over episodes of external crisis. We saw that currency pegs do a poor job at negotiating this type of situation.
In this section, we use our theoretical laboratory to compare the performances of currency pegs and the optimal exchange-rate policy not just over periods of economic duress, but along the infinite life of households. To this end, we calculate the level of welfare of individual households living in each of the two exchange-rate regimes. The key variable for understanding welfare differences across exchange-rate regimes in the model we are working with is the rate of involuntary unemployment. To see this, note that because of our maintained assumption that the intra- and intertemporal elasticities of substitution are equal to each other (σ = 1/ξ), the behavior of tradable consumption is identical across exchange-rate regimes (see section 8.5 for a demonstration). Consequently, any welfare differences across exchange-rate regimes must stem from the behavior of nontradable consumption. In turn, since nontradable goods are produced with labor only, all welfare differences must be explained by differences in the predicted dynamics of involuntary unemployment. We define the welfare cost of a currency peg, denoted Λ(y_t^T, r_t, d_t, w_{t−1}), as the percent increase in the consumption stream of a representative individual living in a currency-peg economy that would make him as happy as living in the optimal-exchange-rate economy. Specifically, Λ(y_t^T, r_t, d_t, w_{t−1}) is implicitly given by

\[ E_t \sum_{s=0}^{\infty} \beta^s \frac{\left[c^{PEG}_{t+s}\left(1+\frac{\Lambda(y_t^T, r_t, d_t, w_{t-1})}{100}\right)\right]^{1-\sigma}-1}{1-\sigma} = v^{OPT}(y_t^T, r_t, d_t), \tag{8.34} \]

where c_t^{PEG} denotes the equilibrium process of consumption in the currency-peg economy, and v^{OPT}(y_t^T, r_t, d_t) denotes the value function associated with the optimal exchange-rate policy, defined in equation (8.31).
Solving for Λ(y_t^T, r_t, d_t, w_{t−1}), we obtain

\[ \Lambda(y_t^T, r_t, d_t, w_{t-1}) = 100\left\{\left[\frac{v^{OPT}(y_t^T, r_t, d_t)(1-\sigma) + (1-\beta)^{-1}}{v^{PEG}(y_t^T, r_t, d_t, w_{t-1})(1-\sigma) + (1-\beta)^{-1}}\right]^{\frac{1}{1-\sigma}} - 1\right\}, \]

where v^{PEG}(y_t^T, r_t, d_t, w_{t−1}) denotes the value function associated with the currency-peg economy and is given by

\[ v^{PEG}(y_t^T, r_t, d_t, w_{t-1}) = E_t \sum_{s=0}^{\infty} \beta^s \frac{\left(c^{PEG}_{t+s}\right)^{1-\sigma}-1}{1-\sigma}. \]

Table 8.4 reports the median and the mean of Λ(y_t^T, r_t, d_t, w_{t−1}) along with the average rate of unemployment induced by a currency peg. The distribution of Λ(y_t^T, r_t, d_t, w_{t−1}) is a function of the equilibrium distribution of the state (y_t^T, r_t, d_t, w_{t−1}). In turn, the distribution of (y_t^T, r_t, d_t, w_{t−1}), and in particular that of w_{t−1}, depends upon the exchange-rate regime in place.15

15. When σ ≠ 1/ξ, the distribution of d_t also depends upon the exchange-rate regime in place.

Because we are interested in the welfare costs of living in a currency peg, we compute the mean and median of the welfare costs of pegs shown in table 8.4 using the equilibrium distribution of the state (y_t^T, r_t, d_t, w_{t−1}) induced by the currency peg. The mean welfare cost of a currency peg is 7.8 percent of the consumption stream. That is, households living in a currency-peg economy require on average 7.8 percent more consumption in every date and state in order to be indifferent between staying in the currency-peg regime and switching to the optimal exchange-rate regime. This is a big number as welfare costs go in monetary business-cycle theory. Even under the most favorable initial conditions, the welfare cost of a currency peg is large, 4.0 percent of consumption each period. (This figure corresponds to the lower bound of the support of the probability density of Λ(y_t^T, r_t, d_t, w_{t−1}).) As mentioned earlier, the welfare cost of pegs is entirely explained by involuntary unemployment. Table 8.4 reports an average rate of unemployment of 11.7 percent under the currency peg.
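The closed-form expression for Λ can be transcribed directly. As a sanity check, the sketch below uses constant consumption paths (a simplification for illustration, not the model's equilibrium processes), for which both value functions are available in closed form; a peg that delivers permanently lower consumption should then be charged exactly the consumption gap.

```python
# Sketch: welfare cost of a peg from the two value functions (sigma != 1).
# For the check we use constant consumption streams, for which
# v = [(c**(1 - sigma) - 1) / (1 - sigma)] / (1 - beta) in closed form.
beta, sigma = 0.9635, 2.0

def v_constant(c):
    """Lifetime utility of a constant consumption stream c."""
    return (c ** (1 - sigma) - 1) / ((1 - sigma) * (1 - beta))

def welfare_cost(v_opt, v_peg):
    """Percent consumption compensation Lambda solving equation (8.34)."""
    num = v_opt * (1 - sigma) + 1 / (1 - beta)
    den = v_peg * (1 - sigma) + 1 / (1 - beta)
    return 100 * ((num / den) ** (1 / (1 - sigma)) - 1)

# If the peg delivers c = 1 and the optimal policy c = 1.078 forever,
# the required compensation is exactly 7.8 percent.
lam = welfare_cost(v_constant(1.078), v_constant(1.0))
print(round(lam, 1))  # 7.8
```

For constant streams the ratio inside the brackets collapses to (c_opt/c_peg)^(1−σ), so Λ recovers the consumption gap exactly, which is the algebra behind the closed form.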
Table 8.4: The Welfare Costs of Currency Pegs

                                               Welfare Cost of Peg     Average Unemployment
  Parameterization                             Mean        Median      Rate Under Peg
  Baseline (γ = 0.99)                           7.8         7.2        11.7
  Lower Downward Wage Rigidity
    γ = 0.98                                    5.7         5.3         8.9
    γ = 0.97                                    3.5         3.3         5.6
    γ = 0.96                                    2.8         2.7         4.6
  Higher Downward Wage Rigidity
    γ = 0.995                                  14.3        13.0        19.5
  Symmetric Wage Rigidity, γ ≤ W_t/W_{t−1} ≤ 1/γ
    γ = 0.99                                    3.3         3.0         5.2
    γ = 0.98                                    2.8         2.5         4.4

Note. The welfare cost of a currency peg is expressed in percent of consumption per quarter; see equation (8.34). The mean and median of the welfare cost of a peg are computed over the distribution of the state (y_t^T, r_t, d_t, w_{t−1}) induced by the peg economy.

A back-of-the-envelope calculation can help to understand how unemployment translates into welfare costs. The average fall in nontradable consumption due to unemployment in the nontraded sector is approximately given by the product of the labor elasticity of nontradable output, α, times the average level of unemployment, or 8.8 = 0.75 × 11.7 percent per quarter. In turn, the total consumption loss is roughly given by the share of nontradables in total consumption, which under the present parameterization is about 0.75, times the loss of nontradable consumption, or 6.6 = 0.75 × 8.8, which is close to the exact mean welfare cost. Figure 8.16 displays the unconditional distribution of Λ(y_t^T, r_t, d_t, w_{t−1}). The distribution is skewed to the right, implying that the probability of very high welfare costs is non-negligible. For instance, the probability of occurrence of a state associated with welfare costs larger than 10 percent of consumption per quarter is 15 percent, and the probability of occurrence of a state associated with welfare costs larger than 15 percent of consumption per quarter is 1.9 percent. Which states put the economy in such a vulnerable situation?

Figure 8.16: Probability Density Function of the Welfare Cost of Currency Pegs. Horizontal axis: welfare cost of a peg (percent of c_t per quarter).
Note. The welfare cost of a currency peg is defined in equation (8.34). The density function of welfare costs is computed over the distribution of the state (y_t^T, r_t, d_t, w_{t−1}) induced by the peg economy.

Figure 8.17 sheds light on what these states are. It displays the welfare cost of currency pegs as a function of the four state variables. In each panel only one state variable is allowed to vary (along the horizontal axis) and the remaining three state variables are fixed at their respective unconditional means. The figure shows that currency pegs are more painful when the country is initially more indebted, when it inherits higher past real wages, when the tradable sector is undergoing a contraction (due, for example, to unfavorable terms of trade), or when the country interest-rate premium is high. Viewing the crisis in peripheral Europe that started in 2008 through the lens of our model, it is not difficult to understand why doubts about the optimality of European monetary union are the strongest for member countries like Greece, Portugal, and Spain: these are countries with highly inflexible labor markets that before the 2008 crisis experienced large increases in wages and sizable current account deficits. The fact that unemployment is the main source of welfare losses associated with currency pegs suggests that a key parameter determining the magnitude of these welfare losses should be γ, which governs the degree of downward nominal wage rigidity. Our baseline calibration (γ = 0.99) implies that nominal wages can fall frictionlessly by up to four percent per year. In section 8.4, we argue that this is a conservative value in the sense that it allows for falls in nominal wages during crises that are much larger than those observed either in the 2001 Argentine crisis or in the ongoing crisis in peripheral Europe, even after correcting for foreign inflation and long-run growth.
We now consider alternative values that allow for lower and higher degrees of downward nominal wage rigidity. On the more flexible side, we consider values of γ that allow for nominal wage declines of up to 16 percent per year. Taking into account that the largest wage decline observed in Argentina in 2001 or in the periphery of Europe since the onset of the great recession was 1.6 percent per year (Lithuania, see table 8.2), it follows that we are considering degrees of wage rigidity substantially lower than those implied by observed wage movements during large contractions. Table 8.4 shows that the mean welfare cost of a currency peg is strictly increasing in the degree of downward nominal wage rigidity. As γ falls from its baseline value of 0.99 to the smallest value considered, 0.96, the welfare cost of a peg falls from 7.8 to 2.8 percent of consumption per quarter. This welfare cost is still a large figure compared to existing results in monetary economics. The intuition why currency pegs are less painful when wages are more downwardly flexible is straightforward. A negative aggregate demand shock reduces the demand for nontradables, which requires a fall in the real wage rate to avoid unemployment. Under a currency peg this downward adjustment must be brought about exclusively by a fall in nominal wages.

Figure 8.17: Welfare Cost of Currency Pegs as a Function of the State Variables. Panels: welfare cost as a function of w_{t−1}, d_t, log(y_t^T), and r_t (annual). Note. In each plot, all states except the one shown on the horizontal axis are fixed at their unconditional mean values. The dashed vertical lines indicate the unconditional mean of the state displayed on the horizontal axis (under a currency peg if the state is endogenous).
The less downwardly rigid are nominal wages, the faster is the downward adjustment in both the nominal and the real wage, and therefore the smaller is the resulting level of unemployment. Table 8.4 confirms this intuition. The average rate of involuntary unemployment falls from 11.7 percent to 4.6 percent as γ falls from its baseline value of 0.99 to 0.96. We also consider a higher degree of downward nominal wage rigidity than the one used in the baseline parameterization. Specifically, table 8.4 includes the case γ = 0.995. This value of γ is perhaps of greater empirical interest than the low values just considered because, unlike those, it lies within the range of estimates obtained in section 8.4. This level of γ allows nominal wages to fall by up to 2 percent per year, half the fall permitted under the baseline parameterization. The associated welfare costs of pegs are extremely high, 14.3 percent of consumption on average, with an average rate of unemployment of 19.5 percent. The finding of large welfare costs of currency pegs predicted by the present model economy stands in stark contrast to a large body of work, pioneered by Lucas (1987), suggesting that the costs of business cycles (not just of suboptimal monetary or exchange-rate policy) are minor. Lucas' approach to computing the welfare costs of business cycles consists in first removing a trend from a consumption time series and then evaluating a second-order approximation of welfare using observed deviations of consumption from trend. Implicit in this methodology is the assumption that the trend is unaffected by policy. In the present model, however, suboptimal monetary or exchange-rate policy creates an endogenous connection between the amplitude of the business cycle and the average rate of unemployment (see the analysis in section 8.2.2).
In turn, through its effect on the average level of unemployment, suboptimal exchange-rate policy has a significant effect on the average level of consumption. And indeed, as we saw earlier in this section, lower average consumption is the main reason currency pegs are so costly in the present model. It follows that applying Lucas’ methodology to data stemming from the present model would overlook the effects of policy on trend (or average) consumption and therefore would result in spuriously low welfare costs.

Symmetric Wage Rigidity

We have shown that the welfare cost of currency pegs is increasing in the degree of downward nominal wage rigidity, governed by the parameter γ. We can think of this parameter as reflecting the intensive margin of wage rigidity. We now consider tightening the extensive margin of wage rigidity. We do so by imposing an upper bound on the rate at which nominal wages can increase from one period to the next. Specifically, we now assume the following constraint on nominal wage adjustments:

γ ≤ W_t/W_{t-1} ≤ 1/γ.

This formulation of nominal rigidity is closer to that typically assumed in the new Keynesian literature on sluggish wage adjustment in the Erceg, Henderson, and Levin (2000) tradition. Table ?? shows that for the baseline value of γ of 0.99, increasing nominal wage rigidity along the extensive margin reduces unemployment and is welfare improving. The rate of involuntary unemployment falls from 11.7 percent under downward nominal wage rigidity to less than half that value under symmetric wage rigidity. This result may seem surprising, for it suggests that less wage flexibility is desirable. However, this prediction of the model is quite intuitive. The imposition of upward rigidity in nominal wages alleviates the peg-induced externality analyzed earlier in section 8.2.1.
The nature of this externality is that during booms nominal wages rise, placing the economy in a fragile position once the boom is over, as nominal wages cannot fall sufficiently fast to guarantee full employment during the downturn. Upward wage rigidity curbs the increase of nominal wages during booms, thereby reducing the required magnitude of wage declines during the contractionary phase of the cycle. Consider now increasing wage flexibility along the intensive margin by lowering γ, but keeping a symmetric specification of wage rigidity. In the presence of symmetric wage rigidity, lowering γ creates a tradeoff. On the one hand, higher downward wage flexibility is desirable because it allows for a more efficient adjustment of real wages during a downturn. On the other hand, higher upward wage flexibility is undesirable because, by allowing for larger wage increases during booms, it exacerbates the peg-induced pecuniary externality. Table ?? shows that this tradeoff is resolved in favor of higher wage flexibility along the intensive margin. Involuntary unemployment falls from 5.2 to 4.2 percent when γ is reduced from 0.99 to 0.98. With lower unemployment the welfare costs of a currency peg are also smaller, 2.8 percent of consumption instead of 3.3 percent. We close this section by pointing out that we consider the case of symmetric wage rigidity for comparison with existing related frameworks, not because of its empirical relevance: the evidence presented in section 8.4 speaks clearly in favor of asymmetric specifications. We conclude that of all the parameterizations shown in table 8.4, the ones of greatest empirical relevance are those pertaining to the case in which wages are downwardly rigid and γ takes the value 0.99 or 0.995.

The Mussa Puzzle

In an influential empirical study, Mussa (1986) compares the behavior of nominal and real exchange rates under fixed and floating exchange-rate regimes.
He analyzes data from 13 industrialized countries over the period 1957 to 1984.16 During the subperiod 1957 to 1970, the countries in the sample pegged their currencies to the U.S. dollar by an exchange-rate agreement known as Bretton Woods. During the second subperiod, 1973 to 1984, countries in the sample adopted flexible exchange-rate regimes as a consequence of the breakdown of the Bretton Woods agreement. Mussa documents three important facts about nominal and real exchange rates across fixed and floating exchange-rate regimes. First, the variability of the real exchange rate is much higher under flexible exchange rates than under fixed exchange rates. Second, under flexible exchange rates, movements in real exchange rates mimic movements in nominal exchange rates. This fact essentially suggests that under floating exchange-rate regimes, observed changes in real exchange rates inherit the stochastic properties of observed changes in nominal exchange rates. Third, the volatility of national inflation rates is broadly the same under floating and fixed exchange-rate regimes. The reason why these facts are often referred to as a puzzle is that they suggest that relative prices depend on the behavior of nominal prices. At the time of Mussa’s writing, the dominant paradigm for understanding short-run fluctuations was the flexible-price, neoclassical, real-business-cycle framework treated in Part I of this book. In a neoclassical world, real variables, including relative prices, are determined by real factors, such as technologies, preferences, and real disturbances. In this type of environment, nominal exchange-rate regime neutrality holds, in the sense that the exchange-rate regime can have effects on other nominal variables, but does not matter for real allocations.
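The three statistics behind Mussa's facts (volatility of real depreciation, comovement of nominal and real depreciation, and inflation volatility) can be sketched computationally. The input series below are purely synthetic and illustrative; only the statistics themselves come from the text:

```python
# Hedged sketch: computing the three Mussa statistics from exchange-rate and
# price-level series. The series fed in at the bottom are synthetic, not data.
import random
import statistics

def gross_growth(series):
    """Gross growth rates x_t / x_{t-1}."""
    return [b / a for a, b in zip(series, series[1:])]

def corr(x, y):
    """Pearson correlation coefficient (written out to avoid version issues)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def mussa_stats(nominal_er, real_er, cpi):
    eps = gross_growth(nominal_er)       # nominal depreciation rate
    eps_rer = gross_growth(real_er)      # real depreciation rate
    pi = gross_growth(cpi)               # gross CPI inflation
    return {
        "std_real_depreciation": statistics.pstdev(eps_rer),
        "corr_nominal_real": corr(eps, eps_rer),
        "std_inflation": statistics.pstdev(pi),
    }

# A float in which the real rate inherits nominal movements one for one:
random.seed(0)
e = [1.0]
for _ in range(199):
    e.append(e[-1] * (1 + random.gauss(0, 0.02)))
print(mussa_stats(e, e, [1.0] * 200))
```

With the real rate moving one for one with the nominal rate and a flat CPI, the sketch reproduces the flexible-rate pattern in stylized form: volatile real depreciation, perfect nominal-real comovement, and zero inflation volatility.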
Of course, since the publication of Mussa’s work in 1986, nominal rigidities have found their way into the standard paradigm, changing researchers’ views on the ability of monetary policy in general, and exchange-rate policy in particular, to shape the course of real variables. For current readers, therefore, Mussa’s facts might sound less puzzling than they did in the mid 1980s. Stockman (1988) shows the difficulties faced by flexible-price models to capture the Mussa facts. An early analysis of the Mussa puzzle within the context of a sticky-price model is Monacelli (2004). We wish to ascertain whether the predictions of the theoretical model analyzed thus far in this chapter are consistent with the empirical regularities documented by Mussa. To this end, let’s define the real depreciation rate, denoted ε^RER_t, as the gross growth rate of the real exchange rate. That is,

ε^RER_t ≡ RER_t / RER_{t-1},

where RER_t denotes the real exchange rate as defined in equation (8.6). The first of Mussa’s facts means that the standard deviation of ε^RER_t is larger under flexible exchange-rate regimes than under currency pegs.

16 The countries included in the sample are the United Kingdom, West Germany, France, Italy, Japan, Netherlands, Sweden, Switzerland, Austria, Belgium, Denmark, Luxembourg, and Norway.

Table 8.5: Real and Nominal Exchange Rates Under Fixed And Floating Exchange-Rate Regimes

                                              Float
                                 Peg    Optimal    Suboptimal
std(ε^RER_t)                    12.0       32.5           5.2
std(ε_t)                           0       45.2          44.0
corr(ε^RER_t, ε^RER_{t-1})      0.18      -0.04         -0.04
corr(ε_t, ε_{t-1})                 –      -0.04          0.95
corr(ε^RER_t, ε_t)                 –       0.99         -0.15
std(π_t)                        13.2       13.2          44.3

Note. Standard deviations are expressed in percent per year. The optimal floating exchange-rate policy is given by ε_t = w_{t-1}/ω(c^T_t), and the suboptimal floating exchange-rate policy is given by ε_t = ω(c^T_t)/w_{t-1}.
The first line of table 8.5 shows that, in line with Mussa’s first empirical fact, the predicted standard deviation of the real depreciation rate is much larger under the optimal floating exchange-rate policy than under the peg. Lines 1 through 5 of the table show that under the optimal flexible exchange-rate regime, the nominal and real exchange rates have similar standard deviations and first-order serial correlations, and are highly positively contemporaneously correlated. This finding suggests that the model captures Mussa’s observation that under flexible exchange rates the real exchange rate shares, to a large extent, the stochastic properties of the nominal exchange rate. Finally, the last line of the table shows that the predicted volatility of CPI inflation, denoted π_t ≡ P_t/P_{t-1}, is the same under the peg and the optimal floating regime, which also concurs with the Mussa facts. At this point, it is important to clarify a common misconception. Many empirical studies classify exchange-rate regimes into fixed and floating, and then derive stylized facts associated with each regime. This practice is problematic because in reality there is not just one floating exchange-rate regime but an infinite family. And importantly, different floating exchange-rate regimes can induce different real allocations and, in particular, different nominal and real exchange-rate dynamics. To illustrate this point, we consider an alternative floating exchange-rate policy that does not belong to the class of optimal exchange-rate policies. Specifically, assume that the central bank sets the nominal exchange rate according to the rule ε_t = ω(c^T_t)/w_{t-1}. This policy could be named the ‘anti’ optimal floating exchange-rate regime, as it revalues when the optimal rule, given in (8.33), calls for devaluations and vice versa.
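The opposite responses of the two rules can be illustrated numerically. The functional form of the full-employment wage ω(·) below is an illustrative assumption; the text only requires it to be increasing in tradable consumption:

```python
# Hedged sketch of the two floating-rate rules in table 8.5: the optimal rule
# eps_t = w_{t-1}/omega(c^T_t) and the 'anti' optimal rule
# eps_t = omega(c^T_t)/w_{t-1}.

def omega(c_tradable):
    """Full-employment real wage, increasing in c^T (assumed form)."""
    return c_tradable ** 0.5

def optimal_rule(w_prev, c_tradable):
    return w_prev / omega(c_tradable)

def anti_optimal_rule(w_prev, c_tradable):
    return omega(c_tradable) / w_prev

w_prev = omega(1.0)   # inherited wage from a full-employment period with c^T = 1
# A contraction in tradable consumption (c^T falls from 1.0 to 0.8) calls for a
# devaluation (eps > 1) under the optimal rule, but for a revaluation (eps < 1)
# under the anti-optimal rule.
print(optimal_rule(w_prev, 0.8), anti_optimal_rule(w_prev, 0.8))
```

By construction the two rules are reciprocals of each other, which is exactly the sense in which the second "revalues when the optimal rule calls for devaluations and vice versa."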
Table 8.5 shows that under this alternative floating exchange-rate policy, the model fails to capture all three of the Mussa facts. It follows that, seen through the lens of the present model, Mussa’s facts can be interpreted as suggesting that during the early post-Bretton-Woods period, overall, the countries in the sample adopted floating regimes that gave rise to exchange-rate and inflation dynamics that are consistent with the ones associated with optimal exchange-rate policy.

Endogenous Labor Supply

We now relax the assumption of an inelastic labor supply schedule. Specifically, we consider a period-utility specification of the form

U(c_t, ℓ_t) = (c_t^{1-σ} − 1)/(1 − σ) + φ (ℓ_t^{1-θ} − 1)/(1 − θ),

where ℓ_t denotes leisure in period t, and φ and θ are positive parameters. Under this specification, the household’s optimization problem features a new first-order condition determining the desired amount of leisure,

φ (ℓ^v_t)^{-θ} = w_t λ_t,

where ℓ^v_t denotes the desired or voluntary amount of leisure. The above expression is a notional labor supply. It is notional in the sense that the worker may not be able to work the desired number of hours. We assume that households are endowed with h̄ hours per period. Let h^v_t denote the number of hours households desire to work (the voluntary labor supply). The voluntary labor supply is the difference between the endowment of hours and voluntary leisure, that is,

h^v_t = h̄ − ℓ^v_t.

As before, households may not be able to sell all of the hours they supply to the labor market. Let h_t denote the actual number of hours worked. Then, we impose

h^v_t ≥ h_t.

This expression can also be interpreted as a ‘no slavery’ condition, as it states that no one can be forced to work longer hours than they wish. We impose the following slackness condition:

(h^v_t − h_t)(w_t − γ w_{t-1}/ε_t) = 0.

Expressions (8.36)-(8.39) are the counterparts of conditions (8.8) and (8.20) in the baseline economy.
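The voluntary labor supply implied by the leisure first-order condition can be sketched numerically. The parameter values used here (φ = 1.11, h̄ = 3, θ = 1) are the calibration reported later in this section; the values of w and λ in the example are illustrative assumptions, not taken from the chapter:

```python
# Hedged sketch of the voluntary labor supply implied by the first-order
# condition phi * (l^v)**(-theta) = w * lam, i.e. l^v = (phi/(w*lam))**(1/theta).

PHI, THETA, H_BAR = 1.11, 1.0, 3.0   # calibration reported in this section

def voluntary_hours(w, lam):
    """h^v_t = h_bar - l^v_t, with l^v_t solving the leisure FOC."""
    voluntary_leisure = (PHI / (w * lam)) ** (1.0 / THETA)
    return H_BAR - voluntary_leisure

def involuntary_unemployment(w, lam, hours_worked):
    """u_t = h^v_t - h_t; nonnegative by the 'no slavery' condition h^v_t >= h_t."""
    u = voluntary_hours(w, lam) - hours_worked
    assert u >= 0, "hours worked cannot exceed the voluntary labor supply"
    return u

# With w*lam = 1.11 (illustrative), desired leisure is 1 and the household wants
# to work h^v = 2 hours; if labor demand supports only 1.5 hours, involuntary
# unemployment is 0.5.
print(involuntary_unemployment(1.11, 1.0, 1.5))
```

The sketch makes concrete why a negative income effect (a higher marginal utility of wealth λ) raises h^v and, with demand-determined employment in a slump, raises measured involuntary unemployment.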
All other conditions describing aggregate dynamics are as before. An important remaining issue is how to evaluate welfare in the present environment. This issue is not trivial because now leisure has two components, voluntary leisure, ℓ^v_t, and involuntary leisure, or, synonymously, involuntary unemployment, which we denote by u_t. Involuntary leisure is given by the difference between the number of hours the household voluntarily supplies to the market, h^v_t, and the number of hours the household is actually employed, h_t, that is,

u_t = h^v_t − h_t.

How should voluntary and involuntary leisure enter the utility function? One possibility is to assume that voluntary and involuntary leisure are perfect substitutes. In this case, the second argument of the period utility function is ℓ_t = ℓ^v_t + u_t. However, there exists an extensive empirical literature suggesting that voluntary and involuntary leisure are far from perfect substitutes. For instance, Krueger and Mueller (2012), using longitudinal data from a survey of unemployed workers in New Jersey, find that despite the fact that the unemployed spend relatively more time in leisure-related activities, they enjoy these activities to a lesser degree than their employed counterparts and thus, on an average day, report higher levels of sadness than the employed. Similarly, Winkelmann and Winkelmann (1998), using longitudinal data on working-age men in Germany, find that, after controlling for individual fixed effects and income, unemployment has a large nonpecuniary detrimental effect on life satisfaction. Another source of non-substitutability between voluntary and involuntary leisure stems from the fact that the unemployed spend more time than the employed looking for work, an activity that they perceive as highly unsatisfying.
Krueger and Mueller (2012), for example, report that the unemployed work 391 minutes less per day than the employed but spend 101 minutes more per day on job search. In addition, these authors find that job search generates the highest feeling of sadness after personal care out of 13 time-use categories. Based on this evidence, it is important to consider specifications in which voluntary and involuntary leisure are imperfect substitutes in utility. Specifically, we model leisure as

ℓ_t = ℓ^v_t + δ u_t.

The existing literature strongly suggests that δ is less than unity. However, estimates of this parameter are not available. For this reason, we consider three values of δ: 0.5, 0.75, and 1. We calibrate the remaining new parameters of the model as follows: we assume that under full employment households spend a third of their time working. In addition, we adopt a Frisch wage elasticity of labor supply of 2, which is on the high end of available empirical estimates from micro and aggregate data (see, for example, Blundell and MaCurdy, 1999; Justiniano, Primiceri, and Tambalotti, 2010; and Smets and Wouters, 2007). Finally, we normalize the number of hours worked under full employment at unity so as to preserve the size of the nontraded sector relative to the traded sector as in the baseline economy. This calibration strategy yields φ = 1.11, h̄ = 3, and θ = 1. Table 8.6 shows that the average rate of involuntary unemployment rises from 11.7 to 30.9 percent as the labor supply elasticity increases from 0 to 2. The reason why involuntary unemployment is much larger on average with an elastic labor supply specification is that during slumps households experience a negative income effect, which induces them to increase their supply of labor.
Table 8.6: Endogenous Labor Supply And The Welfare Costs of Currency Pegs

                                      Welfare Cost of Peg    Average Unemployment
Parameterization                       Mean      Median       Rate Under Peg
Baseline (inelastic labor supply)       7.8       7.2          11.7
Endogenous Labor Supply
  δ = 0.5                              16.5      15.2          30.9
  δ = 0.75                              8.2       7.5          30.9
  δ = 1                                 1.7       1.5          30.9

Note. The welfare cost of a currency peg is expressed in percent of consumption per quarter. Unemployment rates are expressed in percent.

However, during slumps employment is determined by the demand for labor and not affected by the shift in labor supply. It follows that all of the increase in the labor supply contributes to increasing involuntary unemployment. During booms, regardless of the labor supply elasticity, employment is determined by the intersection of the labor supply and the labor demand schedules, and involuntary unemployment is nil. Thus, with an elastic labor supply the unemployment problem becomes worse during contractions but stays the same during booms. On net, therefore, unemployment must be higher in the economy with an endogenous labor supply. It is important to note that the behavior of unemployment is independent of the assumed value of δ, the parameter governing the relative valuation of voluntary and involuntary leisure. This is because δ does not appear in any equilibrium condition of the model. This parameter affects only the welfare consequences of unemployment. Table 8.6 also shows the welfare cost of currency pegs relative to the optimal exchange-rate policy implied by the endogenous-labor-supply model. The welfare cost of a currency peg depends significantly on the degree of substitutability between voluntary and involuntary leisure, measured by the parameter δ. The more substitutable voluntary and involuntary leisure are, i.e., the larger is δ, the lower are the welfare costs of currency pegs. This result should be expected. Consider
the case in which voluntary leisure and involuntary unemployment are perfect substitutes (δ = 1). In this case, pegs reduce welfare because involuntary unemployment reduces the production and hence consumption of nontradable goods. However, unemployment increases leisure one for one and in this way increases utility, greatly offsetting the negative welfare effect of lower nontradable consumption. As δ falls, the marginal contribution of involuntary unemployment to total leisure, and therefore welfare, also falls. For a value of δ of 0.75, for instance, the welfare cost of currency pegs is 8.2 percent per period, which is higher than in the case with inelastic labor supply. With δ equal to 0.5, the welfare cost of currency pegs rises to 16.5 percent of consumption per period. It follows that allowing for endogenous labor supply increases the average rate of unemployment caused by the combination of a currency peg and downward nominal wage rigidity, and may increase or decrease the welfare cost of currency pegs depending on how enjoyable involuntary leisure is assumed to be.

Product Price Rigidity

Consider now the case of product price rigidity. We first analyze the case of downward rigidity and then introduce symmetric price rigidity. For now, we assume that nominal wages are fully flexible. Suppose that the nominal price of nontradables is subject to the following constraint:

P^N_t ≥ γ_p P^N_{t-1},

where γ_p is a parameter governing the degree of downward nominal price rigidity. Dividing both sides of this expression by the nominal exchange rate, E_t, yields

p_t ≥ (γ_p/ε_t) p_{t-1}.

This expression replaces condition (8.18) of the economy with downward nominal wage rigidity. Define the full-employment relative price of nontradables, denoted ρ(c^T_t), as the value of p_t that induces households to voluntarily demand the full-employment level of nontradable output, F(h̄), given their consumption of tradables, c^T_t.
By equation (8.16), we have that ρ(c^T_t) is given by

ρ(c^T_t) ≡ A_2(c^T_t, F(h̄)) / A_1(c^T_t, F(h̄)).

Given the assumed properties of the aggregator function A(·, ·), we have that ρ(c^T_t) is increasing in c^T_t. Intuitively, households have an incentive to consume relatively more tradables only if nontradables become more expensive. Also, since A_1 is increasing in its second argument and A_2 is decreasing in its second argument, equation (8.16) implies that p_t equals ρ(c^T_t) if and only if h_t equals h̄. Therefore, we can postulate the following slackness condition:

(h̄ − h_t)(p_t − (γ_p/ε_t) p_{t-1}) = 0,

which replaces condition (8.20) of the economy with downward nominal wage rigidity. The new slackness condition states that if the economy experiences involuntary unemployment (h_t < h̄), then the price of nontradables must be stuck at its lower bound. By the properties of ρ(c^T_t), this means that in this situation p_t must exceed its full-employment level. The above slackness condition also states that should the lower bound on the price of nontradables not bind, then the economy must have full employment. This, in turn, means that in these circumstances p_t must equal its full-employment value, ρ(c^T_t). An equilibrium in the economy with downward nominal price rigidity is then a set of stochastic processes {c^T_t, h_t, d_{t+1}, p_t, λ_t, μ_t}_{t=0}^∞ satisfying (8.10)-(8.16), (8.19), (8.42), and (8.43), given an exchange-rate policy {ε_t}_{t=0}^∞, initial conditions d_0 and p_{-1}, and exogenous stochastic processes {r_t, y^T_t}_{t=0}^∞. Notice that the real wage w_t does not enter any equilibrium condition. Also, the labor demand schedule, p_t F′(h_t) = w_t, is no longer part of the set of equilibrium conditions. This is because firms are off their labor demand schedule when the economy suffers from involuntary unemployment, as they are rationed in product markets and thus employment is indirectly determined by the demand for nontradable goods.
In such periods the real wage falls to zero and the value of the marginal product of labor exceeds the real wage, p_t F′(h_t) > w_t.17 Consider now the workings of this economy under a currency peg, that is, assume that the exchange-rate policy takes the form ε_t = 1 for all t. Figure 8.18 illustrates the economy’s adjustment to a negative external shock. Before the shock the demand for traded consumption is equal to c^T_0 and the demand schedule for nontraded goods is given by the solid downward-sloping line. The economy is at point A and enjoys full employment. Suppose that the negative external shock lowers traded consumption from c^T_0 to c^T_1 < c^T_0. Consequently the demand schedule shifts down and to the left, as depicted by the broken downward-sloping line. Under price flexibility, the new equilibrium would be at point C. In this equilibrium, the relative price of nontradables, p, falls from ρ(c^T_0) to ρ(c^T_1). All of this adjustment occurs via a fall in the nominal price of nontradables, P^N, since the nominal exchange rate is constant. However, under downward nominal price rigidity, P^N cannot fall (for this illustration, we are assuming γ_p = 1). As a result, the relative price of nontradables is stuck at ρ(c^T_0), and the equilibrium is at point B. At this point, the economy suffers from involuntary unemployment in the amount h̄ − h_bust. Also firms are rationed in product markets, in the sense that at the going price they would like to sell more units than are demanded by consumers. Notice that here a fall in wages would not solve the unemployment problem, since firms cannot

17 In a model with an endogenous labor supply as the one implied by the utility function (8.35), the real wage will be positive and equal to a value that ensures that households are on their labor supply schedule. Firms, however, would continue to be quantity rationed during these periods and off their labor demand schedules.
[Figure 8.18: Adjustment to a Negative External Shock with Price Rigidity. The figure plots, against P^N/E, the pre-shock demand schedule A_2(c^T_0, F(h))/A_1(c^T_0, F(h)) and the post-shock demand schedule A_2(c^T_1, F(h))/A_1(c^T_1, F(h)), marking the full-employment prices ρ(c^T_0) and ρ(c^T_1) and the full-employment level of hours h̄; γ_p = 1.]

sell more than F(h_bust) units of goods in the market. Note also that the unemployment problem is more severe under price rigidity than under wage rigidity for identical values of γ and γ_p. We saw in previous sections that under downward nominal wage rigidity, the equilibrium is somewhere between points B and C, at the intersection of the broken demand schedule and the supply schedule (not shown). There, the economy experiences less unemployment and lower product prices than under downward price rigidity. The peg-induced externality analyzed in section 8.2.1 for the case of downward wage rigidity is also present under downward price rigidity. To see this, consider a positive external shock that increases the desired consumption of tradables. This shock pushes the demand schedule for nontradables up and to the right. The new equilibrium features full employment, a higher nominal price of nontradables, and no rationing in the goods market. On the surface, no problems arise in the adjustment process. However, the increase in the nominal price of nontradables makes the economy weaker. For once the positive external shock fades away, the nominal price of nontradables will have to fall to induce households to consume a quantity of nontradables compatible with full employment of the labor force. But this fall in the nominal price does not take place quickly enough if prices are rigid, and involuntary unemployment emerges. Collectively, households would be better off if they limited the initial expansion in the demand for nontradables. And they understand this. But each household is too small to affect the initial rise in prices by curbing its individual expenditure. There lies the peg-induced externality.
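The mechanics of figure 8.18 can be sketched numerically. The functional forms are assumptions made for illustration (a CES aggregator with share a and elasticity ξ, and the technology F(h) = h^α, as in the chapter's parametric examples); only the mechanism, demand-determined employment at a stuck price, comes from the text:

```python
# Hedged sketch of the adjustment in figure 8.18 under a stuck nontradable price.
# Parameter values a, xi, alpha are arbitrary illustrations.

def nontraded_demand(cT, p, a=0.3, xi=0.5):
    """c^N demanded at relative price p, from p = A2/A1 for a CES aggregator."""
    return cT * (((1 - a) / a) / p) ** xi

def full_employment_price(cT, h_bar=1.0, alpha=0.75, a=0.3, xi=0.5):
    """rho(c^T): the p at which demand equals full-employment output F(h_bar)."""
    return ((1 - a) / a) * (cT / h_bar ** alpha) ** (1 / xi)

def employment_at_stuck_price(cT_new, p_stuck, alpha=0.75, a=0.3, xi=0.5):
    """Demand-determined hours: F(h) = c^N demanded  =>  h = (c^N)**(1/alpha)."""
    cN = nontraded_demand(cT_new, p_stuck, a, xi)
    return cN ** (1 / alpha)

p0 = full_employment_price(1.0)              # point A: full employment at c^T = 1
h_bust = employment_at_stuck_price(0.8, p0)  # c^T falls to 0.8, price stuck at p0
print(p0, h_bust)                            # h_bust < h_bar: involuntary unemployment
```

In the sketch, the flexible-price response would instead move the price to full_employment_price(0.8) < p0 (point C), restoring h = h̄, which is exactly what a devaluation accomplishes under the optimal policy discussed next.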
The optimal exchange-rate policy under price rigidity is quite similar to its counterpart under wage rigidity.18 In response to negative external shocks, the monetary authority can preserve full employment (point C in figure 8.18) by devaluing the currency. In this way, the monetary authority can bring down the relative price of nontradables from ρ(c^T_0) to ρ(c^T_1). The required depreciation is larger the larger the contraction in the demand for tradable goods caused by the negative external shock. We then have that, as in the case of downward nominal wage rigidity, contractions are devaluatory under the optimal exchange-rate policy. To quantify the consequences of a currency peg for unemployment and welfare, we calibrate the model using the same parameter values as in the case of downward nominal wage rigidity (see table 8.3), with the difference that now wages are fully flexible (γ = 0) and the nominal price of nontradables is downwardly rigid. For comparison with the case of wage rigidity, we set γ_p = 0.99.

18 Indeed, exercise 8.9 asks you to demonstrate that under certain conditions any exchange-rate policy that is optimal under downward nominal wage rigidity is also optimal under downward nominal price rigidity.

Table 8.7: Price Rigidity And The Welfare Costs of Currency Pegs

                                                           Welfare Cost of Peg    Average Unemployment
Parameterization                                            Mean      Median       Rate Under Peg
Baseline (wage rigidity, γ = 0.99 and γ_p = 0)               7.8       7.2          11.7
Nominal Price Rigidity (γ = 0, γ_p = 0.99)
  Downward Price Rigidity, P^N_t/P^N_{t-1} ≥ γ_p             9.9       9.0          14.1
  Symmetric Price Rigidity, 1/γ_p ≥ P^N_t/P^N_{t-1} ≥ γ_p    4.4       3.9           6.6

Note. The welfare cost of a currency peg is expressed in percent of consumption per quarter. Unemployment rates are expressed in percent.

Table 8.7 shows that the mean unemployment rate under a currency peg is 14.1 percent, which is higher than the mean unemployment rate under wage rigidity.
This finding confirms the intuition built around figure 8.18. With higher average unemployment, the welfare costs of currency pegs under downward price rigidity are also larger, with a mean of 9.9 percent of consumption per period as compared to 7.8 percent of consumption under downward wage rigidity. Unlike the empirical literature on wage rigidity, the empirical literature on nominal price rigidities has not drawn attention to asymmetries in product price adjustments (see, for example, Nakamura and Steinsson, 2008). This suggests that the case of greater empirical relevance may be one in which price rigidity is symmetric. For this reason, we now consider an economy in which nominal prices of nontradables are subject to the constraint

γ_p ≤ P^N_t/P^N_{t-1} ≤ 1/γ_p.

As in the case of nominal wage rigidity, increasing the degree of price rigidity along the extensive margin (from downward to downward-and-upward rigidity) lowers unemployment and the welfare costs of currency pegs. Table 8.7 shows that unemployment and the welfare costs of currency pegs fall by more than half as upward rigidity is added. The intuition behind this result is that, as in the case of wage rigidity, hindering price increases ameliorates the externality created by currency pegs.

Exercise 8.1 [Unwanted Positive Shocks] Show that in the example of section 8.2.3, the fall in the interest rate is welfare decreasing under downward nominal wage rigidity but welfare increasing under flexible wages. How can this be?

Exercise 8.2 [Is More Wage Rigidity Desirable?] Modify the example of section 8.2.3 to allow for wage rigidity in both directions, downwardly and upwardly. Characterize the economy’s response to a temporary decline in the interest rate. Show that welfare is higher under full wage rigidity than under downward wage rigidity. Provide intuition.
Exercise 8.3 [Properties of the Full-Employment Real Wage] Show that ω′(c^T_t) is positive.

Exercise 8.4 [Pareto Optimality of the Flexible-Wage Equilibrium] Demonstrate that when nominal wages are fully flexible, the competitive equilibrium is Pareto optimal for any exchange-rate policy.

Exercise 8.5 [Foreign Inflation] Assume that the foreign price of tradable goods, P^T∗_t, grows at the deterministic gross rate π∗, that is, P^T∗_{t+1}/P^T∗_t = π∗. How do the equilibrium conditions (8.10)-(8.20) change with the introduction of this assumption?

Exercise 8.6 [Trend Growth] Continue to assume, as in exercise 8.5, that the foreign price of tradables grows at the rate π∗. In addition, assume that the production of nontradables is given by Y^N_t = X_t F(h_t) and that the endowment of tradables is given by Y^T_t = X_t y^T_t, where X_t is a deterministic trend that grows at the gross rate g, that is, X_{t+1}/X_t = g. Again, show how equilibrium conditions (8.10)-(8.20) are modified by the imposition of this assumption. Hint: As in the model of chapter 5, you will have to transform some variables appropriately to make them stationary. The equilibrium conditions should include only stationary variables.

Exercise 8.7 Take a look at the two bottom panels of figure 8.12 showing the behavior of the trade-balance-to-output ratio and the debt-to-output ratio predicted by the model of section 8.1 during an external crisis under a currency peg and under the optimal exchange-rate policy. Note that these responses differ across the two exchange-rate regimes, in spite of the fact that the responses of the levels of the trade balance and the external debt are identical across exchange-rate policies, due to the assumption σ = 1/ξ. Of course, all of the differences must be due to the fact that output (measured in terms of tradables) behaves differently across exchange-rate regimes. Explain analytically the nature of these differences.
Consider in particular the cases ξ = 1 and ξ > 1, under the maintained assumption σ = 1/ξ.

Exercise 8.8 Show that if the technology for producing the composite consumption good given in (8.2) is of the CES form, then the consumption price level, P_t, can be expressed as a CES function of the nominal prices of tradables and nontradables, E_t and P^N_t, respectively.

Exercise 8.9 [Equivalence of Optimal Exchange-Rate Policy Under Price and Wage Stickiness] Show that, in the context of the model developed in this chapter, the families of optimal exchange-rate policies are identical under downward price rigidity and downward wage rigidity provided that γ = γ_p, that the economy was in full employment in period −1, and that the sources of uncertainty are stochastic disturbances in r_t and y^T_t. Show that this result would fail to obtain in the presence of productivity shocks in the nontraded sector.

Exercise 8.10 [Sudden Stops With Downward Nominal Wage Rigidity And Fixed Exchange Rates] Consider an open economy that lasts for only two periods, denoted 1 and 2. Households are endowed with 10 units of tradables in period 1 and 13.2 units in period 2 (y^T_1 = 10 and y^T_2 = 13.2). The country interest rate is 10 percent, or r = 0.1, and the nominal exchange rate, defined as the price of foreign currency in terms of domestic currency, is fixed and equal to 1 in both periods (E_1 = E_2 = 1). Suppose that the foreign-currency price of tradable goods is constant and equal to one in both periods, and that the law of one price holds for tradable goods in both periods. Nominal wages are downwardly rigid. Specifically, assume that the nominal wage, measured in terms of domestic currency, is subject to the constraint W_t ≥ W_{t-1} for t = 1, 2, with W_0 = 8.25. Suppose the economy starts period 1 with no assets or debts carried over from the past (d_1 = 0). Households are subject to the no-Ponzi-game constraint d_3 ≤ 0.
Suppose that the household's preferences are defined over consumption of tradable and nontradable goods in periods 1 and 2, and are described by the following utility function, $\ln C_1^T + \ln C_1^N + \ln C_2^T + \ln C_2^N$, where $C_i^T$ and $C_i^N$ denote, respectively, consumption of tradables and nontradables in period $i = 1, 2$. Let $p_1$ and $p_2$ denote the relative prices of nontradables in terms of tradables in periods 1 and 2, respectively. Households supply inelastically $\bar h = 1$ units of labor to the market each period. Finally, firms produce nontradable goods using labor as the sole input. The production technology is given by $y_t^N = h_t^\alpha$ for $t = 1, 2$, where $y_t^N$ and $h_t$ denote, respectively, nontradable output and hours employed in period $t = 1, 2$. The parameter $\alpha$ is equal to 0.75.

1. Compute the equilibrium levels of consumption of tradables and the trade balance in periods 1 and 2.

2. Compute the equilibrium levels of employment, nontradable output, and the relative price of nontradables in periods 1 and 2.

3. Suppose now that the country interest rate increases to 32 percent. Calculate the equilibrium levels of consumption of tradables, the trade balance, consumption of nontradables, the level of unemployment, and the relative price of nontradables in periods 1 and 2. Provide intuition.

4. Given the situation in the previous question, calculate the minimum devaluation rates in periods 1 and 2 consistent with full employment in both periods. To answer this question, assume that the nominal exchange rate in period 0 was also fixed at unity. Explain.

5. Continue to assume that $W_0 = 8.25$ and that the interest rate is 32 percent. Assume also that the government is not willing to devalue the domestic currency, so that $E_1 = E_2 = 1$. Instead, the government chooses to apply capital controls in period 1. Specifically, let $d_2/(1+r_1)$ denote the amount of funds borrowed in period 1, which generates the obligation to pay $d_2$ in period 2.
Suppose that in period 1 the government imposes a proportional tax/subsidy $\tau_1$ on borrowed funds, so that the amount received by the household is $(1-\tau_1)d_2/(1+r_1)$. Suppose that this tax/subsidy is rebated/financed in a lump-sum fashion. Calculate the Ramsey optimal level of $\tau_1$.

Exercise 8.11 (Productivity Shocks in the Nontraded Sector and Optimal Exchange-Rate Policy) Consider an economy like the one developed in section 8.1, in which the nontraded good is produced with the technology $y_t^N = e^{z_t} h_t^\alpha$, where $z_t$ denotes an exogenous and stochastic productivity shock. Assume that $z_t$ evolves according to the law of motion $z_t = \rho z_{t-1} + \mu_t$, where $\rho \in [0, 1)$ is a parameter and $\mu_t$ is an i.i.d. disturbance with mean zero and standard deviation $\sigma_\mu$. Suppose that the endowment of tradables is constant and equal to $y^T > 0$ and that the interest rate is constant and equal to $r$. Assume that $r$ satisfies $\beta(1+r) = 1$. Assume that the period utility function and the aggregator function are given by (8.29) and (8.30), respectively, with $\xi = 1/\sigma < 1$. Suppose that $d_0 = 0$ and that the economy was operating at full employment in $t = -1$.

1. Find the equilibrium process of consumption of tradables, $c_t^T$.

2. Derive the optimal devaluation rate, $\epsilon_t$, as a function of present and past values of the productivity shock.

3. Provide a graphical analysis of the effect of an increase in productivity ($z_0 > z_{-1}$) under the optimal exchange-rate policy and under a currency peg. Provide intuition.

4. Suppose that the monetary authority follows the optimal exchange-rate policy that makes the domestic currency as strong as possible relative to the foreign currency at all times. Find the unconditional correlation between the net devaluation rate, $\ln \epsilon_t$, and the growth rate of productivity, $z_t/z_{t-1}$. Provide intuition.

5. How would the sign of the correlation obtained in the previous item, and the intuition behind it, change in the case $\xi = 1/\sigma > 1$?

Chapter 9

Fixed Exchange Rates, Taxes, And Capital Controls

In chapter 8, we studied a model with nominal rigidities in which the nominal exchange rate can be used to bring about the Pareto optimal allocation. We established that the optimal exchange-rate policy calls for devaluations when the economy is hit by negative external shocks. And when these shocks are large, so are the required devaluations. However, for emerging countries that are part of a currency union, such as those in the periphery of the eurozone, devaluations are not an option. This chapter explores the potential of other, nonmonetary, policies to address the distortions created by nominal rigidities when the exchange-rate regime is suboptimal. We begin by studying tax policies that can bring about the first-best (or Pareto optimal) allocation and then analyze optimal capital control policy. We analyze these questions in the context of the baseline model developed in chapter 8, in which nominal frictions take the form of downwardly rigid nominal wages. We will assume that the nominal exchange rate is fixed. But the insights of this chapter apply more broadly to any suboptimal exchange-rate regime.

First-Best Fiscal Policy

Because wage rigidity creates a distortion in the labor market, and because nontradable goods are labor intensive (in the model as well as in the data), it is reasonable to begin by studying fiscal instruments directly targeted to the labor market or the market for nontraded goods as vehicles to remedy this source of inefficiency. We start by studying optimal labor subsidy schemes, which are perhaps the most direct way to address the distortions created by the combination of wage rigidity and suboptimal exchange-rate policy. Contributions in the area include Schmitt-Grohé and Uribe (2010, 2012, and 2013) and Farhi, Gopinath, and Itskhoki (2014). The treatment here follows the former authors.
Labor Subsidies

The reason why in the model of chapter 8 negative external shocks cause involuntary unemployment is that the combination of downward nominal wage rigidity and a currency peg prevents the real wage from falling to the level compatible with full employment. In these circumstances, a labor subsidy would reduce the firm's perceived labor cost, thereby increasing the demand for labor. Specifically, suppose that the government subsidizes employment at the firm level at the proportional rate $s_t^h$. Profits expressed in terms of tradable goods are then given by

$$p_t F(h_t) - (1 - s_t^h) w_t h_t.$$

[Figure 9.1: Adjustment Under Optimal Labor Subsidy Policy. The figure plots, against hours $h$, the demand schedules $A_2(c_0^T, F(h))/A_1(c_0^T, F(h))$ and $A_2(c_1^T, F(h))/A_1(c_1^T, F(h))$ and the supply schedules $(W_0/E_0)/F'(h)$ and $(1 - s^h)(W_0/E_0)/F'(h)$, with full employment at $h = \bar h$.]

The firm's optimality condition becomes

$$p_t = (1 - s_t^h)\,\frac{w_t}{F'(h_t)}.$$

This expression states that, for a given relative price, $p_t$, and for a given real wage, $w_t$, the larger is the subsidy, $s_t^h$, the lower is the marginal cost of labor perceived by the firm, and therefore the larger the amount of hours the firm is willing to hire.

Figure 9.1 illustrates how labor subsidies of this form can bring about the efficient allocation. Consider a situation in which an external shock, such as an increase in the country interest rate, brings the economy from an initial situation with full employment, point A, to one with involuntary unemployment in the amount $\bar h - h$, point B. The labor subsidy causes the labor supply schedule to shift down and to the right, as shown by the dashed upward sloping line. The new intersection of the demand and supply schedules is at point C, where full employment is restored. We note that, unlike what happens under the optimal exchange-rate policy, under the optimal labor subsidy the real wage does not fall during the crisis. Specifically, the real wage received by the household remains constant at $W_0/E_0$.
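The subsidy arithmetic behind figure 9.1 can be checked numerically. The sketch below is illustrative only: it assumes the functional forms used in the analytical example later in this chapter, namely $A(c^T, c^N) = c^T c^N$ (so the demand schedule is $p = c^T/F(h)$) and $F(h) = h^\alpha$ with $\alpha = 0.75$ and $\bar h = 1$, together with a real wage stuck at $w$. Equating the demand and supply schedules gives $h = \alpha c^T/[(1-s^h)w]$, so the smallest subsidy consistent with full employment is $s^h = 1 - \alpha c^T/(w\bar h)$.

```python
ALPHA = 0.75  # labor share in nontradables: F(h) = h**ALPHA (illustrative)

def employment(c_T, w, s_h=0.0, h_bar=1.0):
    """Equilibrium hours when the real wage is stuck at w.

    Demand schedule:  p = c_T / h**ALPHA                (from A(cT, cN) = cT*cN)
    Supply schedule:  p = (1 - s_h) * w / (ALPHA * h**(ALPHA - 1))
    Equating the two gives h = ALPHA * c_T / ((1 - s_h) * w),
    capped at full employment h_bar.
    """
    return min(h_bar, ALPHA * c_T / ((1.0 - s_h) * w))

def full_employment_subsidy(c_T, w, h_bar=1.0):
    """Smallest labor subsidy that restores h = h_bar given the stuck wage w."""
    return max(0.0, 1.0 - ALPHA * c_T / (w * h_bar))
```

For example, if $c^T$ falls from 10 to 9 while the wage stays stuck at its old full-employment level $w = 0.75 \cdot 10 = 7.5$, employment drops to 0.9 and a 10 percent subsidy restores $h = 1$.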
Once the negative external shock dissipates, i.e., once the interest rate falls back to its original level, the fiscal authority can safely remove the subsidy without compromising its full-employment objective.

This graphical analysis suggests that a labor subsidy can support full employment at all times. Let's now establish this result more formally. A relevant question is how the government should finance this subsidy. It turns out that in the present model the government can tax any source of income in a nondistorting fashion. Suppose, for instance, that the government levies a proportional tax, $\tau_t$, on all sources of household income: wage income, profits, and the tradable endowment. In this case, the government budget constraint takes the form

$$s_t^h w_t h_t = \tau_t\left(y_t^T + w_t h_t + \phi_t\right),$$

where $\phi_t \equiv \Phi_t/E_t$ denotes profits expressed in terms of tradables. Implicit in this budget constraint is the simplifying assumption that the government issues no debt, i.e., that it follows a balanced-budget rule. This assumption entails no loss of generality. The left-hand side of this expression represents the government's outlays, consisting of subsidies to firms. The right-hand side represents tax revenues. Given the level of the labor subsidy, $s_t^h$, the income tax rate, $\tau_t$, adjusts endogenously to guarantee that the government's budget constraint holds period by period.

To see that the income tax, $\tau_t$, is nondistorting, consider the household's budget constraint, which now takes the form

$$c_t^T + p_t c_t^N + d_t = (1 - \tau_t)\left(y_t^T + w_t h_t + \phi_t\right) + \frac{d_{t+1}}{1 + r_t}.$$

Let's inspect each source of household income separately. Because the endowment of tradable goods, $y_t^T$, is assumed to be exogenous, it is not affected by taxation. Similarly, profit income from the ownership of firms, $\phi_t$, is taken as given by individual households. Consequently, the imposition of profit taxes at the household level is nondistorting.
Finally, notice that households either supply $\bar h$ hours of work inelastically, in periods of full employment, or are rationed in the labor market, in periods of unemployment. In any event, households take their employment status as given. As a result, taxes do not alter households' incentives to work. (Exercise 9.1 asks you to demonstrate that income taxes continue to be nondistorting even when labor supply is endogenous.) It follows that the first-order conditions associated with the household's utility-maximization problem are the same as those given in chapter 8.1 for an economy without taxation of household income.

Combining the government budget constraint, the household budget constraint, the definition of the firm's profits, and the market clearing condition in the nontraded sector, $c_t^N = F(h_t)$, yields the resource constraint (8.10). An equilibrium under a currency peg ($\epsilon_t = 1$) is then given by a set of processes $\{c_t^T, h_t, w_t, d_{t+1}, p_t, \lambda_t, \mu_t\}_{t=0}^\infty$ satisfying

$$c_t^T + d_t = y_t^T + \frac{d_{t+1}}{1+r_t}, \tag{9.1}$$
$$d_{t+1} \leq \bar d, \tag{9.2}$$
$$\mu_t \geq 0, \tag{9.3}$$
$$\mu_t (d_{t+1} - \bar d) = 0, \tag{9.4}$$
$$\lambda_t = U'(A(c_t^T, F(h_t)))\,A_1(c_t^T, F(h_t)), \tag{9.5}$$
$$\frac{\lambda_t}{1+r_t} = \beta E_t \lambda_{t+1} + \mu_t, \tag{9.6}$$
$$p_t = \frac{A_2(c_t^T, F(h_t))}{A_1(c_t^T, F(h_t))}, \tag{9.7}$$
$$p_t = (1 - s_t^h)\,\frac{w_t}{F'(h_t)}, \tag{9.8}$$
$$w_t \geq \gamma w_{t-1}, \tag{9.9}$$
$$h_t \leq \bar h, \tag{9.10}$$
$$(\bar h - h_t)(w_t - \gamma w_{t-1}) = 0, \tag{9.11}$$

given a labor-subsidy policy, $\{s_t^h\}_{t=0}^\infty$, initial conditions $w_{-1}$ and $d_0$, and exogenous stochastic processes $\{r_t, y_t^T\}_{t=0}^\infty$.

Consider now a policymaker that wishes to set the labor subsidy $s_t^h$ in a Ramsey optimal fashion. The optimization problem faced by this policymaker is to maximize

$$E_0 \sum_{t=0}^\infty \beta^t\, U(A(c_t^T, F(h_t))) \tag{9.12}$$

subject to the complete set of equilibrium conditions (9.1)-(9.11). To see that the Ramsey-optimal labor subsidy policy supports the Pareto optimal allocation, consider the less restricted problem of maximizing (9.12) subject to only three constraints, namely (9.1), (9.2), and (9.10).
Now, this less restricted optimization problem is the optimization problem of the Pareto planner (see chapter 8, section 8.3.2) and therefore yields the Pareto optimal allocation, which, among other things, is characterized by full employment at all times, $h_t = \bar h$ for all $t$. To establish that this allocation is indeed the solution to the Ramsey optimization problem, we must show that the remaining constraints of the Ramsey problem, namely (9.3)-(9.9) and (9.11), are also satisfied. To see this, notice that equations (9.3)-(9.6) are first-order conditions of the less restricted optimization problem, so they are always satisfied. Also, since the Pareto optimal allocation implies full employment at all times, the slackness condition (9.11) is always satisfied. Now set $p_t$ to satisfy (9.7). Then, for every $t \geq 0$, given $w_{t-1}$, set $w_t$ at any arbitrary value satisfying (9.9). Finally, set the labor subsidy $s_t^h$ to satisfy equation (9.8). This completes the proof that the Pareto optimal allocation solves the optimization problem of the Ramsey planner. In other words, the Ramsey optimal labor subsidy policy supports the Pareto optimal allocation.

How does the optimal labor subsidy in the present economy compare to the optimal devaluation policy studied in chapter 8? Combine equilibrium conditions (9.7) and (9.8) and evaluate the result at the optimal allocation to obtain

$$w_t (1 - s_t^h) = \omega(c_t^T),$$

where, as in chapter 8,

$$\omega(c_t^T) \equiv \frac{A_2(c_t^T, F(\bar h))}{A_1(c_t^T, F(\bar h))}\, F'(\bar h).$$

Then we can write $w_t = \omega(c_t^T)/(1 - s_t^h)$. Finally, combine this expression with equilibrium condition (9.9) to get

$$\frac{1}{1 - s_t^h} \geq \frac{\gamma w_{t-1}}{\omega(c_t^T)}. \tag{9.13}$$

Any subsidy policy satisfying this condition is optimal. As in the case of the optimal exchange-rate policy, there is a whole family of labor subsidy policies that support the Pareto optimal allocation.
Furthermore, comparing this expression with condition (8.24), we obtain the following result: if the devaluation-rate process $\epsilon_t$ is optimal in the economy with no labor subsidies, then the process $s_t^h \equiv (\epsilon_t - 1)/\epsilon_t$ is optimal in the economy with a fixed exchange rate.

This relationship between the optimal exchange-rate policy and the optimal labor subsidy policy as alternative ways of achieving the first-best allocation is useful to gauge the magnitude of the labor subsidy necessary to preserve full employment during crises. In chapter 8, we found that during a large crisis like the one observed in Argentina in 2001, the model predicts optimal devaluations of between 30 and 40 percent per year, or between 7 and 9 percent per quarter, for about two and a half years (see figure 8.12). Using the formula given above, the implied optimal labor subsidy required to prevent unemployment ranges from 6.5 to 8 percent. These are large numbers. Consider a labor share of 75 percent of GDP and a share of nontradables of 75 percent of GDP as well. Then, the budgetary impact of a labor subsidy of 6.5 to 8 percent is 3.5 to 4.5 percent of GDP.

Finally, we note that a property of the labor subsidies considered here is that there is a sense in which they are good for only one crisis. Specifically, suppose the fiscal authority grants a labor subsidy during a crisis and keeps it in place once the crisis is over. When the next crisis comes, the old subsidy does not help at all to avoid unemployment. The reason is that the recovery after the first crisis causes nominal wages to increase, placing the economy in a vulnerable situation to face the next downturn. The new crisis would then require another increase in labor subsidies. This logic leads to a process for labor subsidies converging to one hundred percent. To avoid this situation, the policymaker must remove the subsidy as soon as the crisis is over.
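The back-of-the-envelope magnitudes above can be reproduced with the mapping $s_t^h = (\epsilon_t - 1)/\epsilon_t$. The fiscal-cost calculation in the sketch below assumes, consistent with the numbers in the text, that the subsidy base is the nontraded-sector wage bill (the labor share times the nontradables share of GDP); that base is an interpretation, not something the text states explicitly.

```python
def subsidy_from_devaluation(eps):
    """Labor subsidy equivalent to a gross devaluation rate eps: s = (eps-1)/eps."""
    return (eps - 1.0) / eps

def budget_cost(s_h, labor_share=0.75, nontraded_share=0.75):
    """Fiscal cost of the subsidy as a fraction of GDP, assuming the subsidy
    base is the nontraded-sector wage bill (an illustrative assumption)."""
    return s_h * labor_share * nontraded_share

# Quarterly devaluations of 7 to 9 percent map into subsidies of roughly
# 6.5 to 8.3 percent, with a fiscal cost of about 3.7 to 4.6 percent of GDP.
for eps in (1.07, 1.09):
    s = subsidy_from_devaluation(eps)
    print(f"eps = {eps:.2f}: s_h = {s:.3f}, cost/GDP = {budget_cost(s):.3f}")
```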
In this way, the recoveries occur in the context of nominal wage stability, and the optimal subsidy policy is stationary. Formally, the stationary optimal labor-subsidy policy takes the form

$$\frac{1}{1 - s_t^h} = \max\left\{1,\; \frac{\gamma w_{t-1}}{\omega(c_t^T)}\right\},$$

which clearly belongs to the family of optimal labor-subsidy policies given in (9.13).

Sales Subsidies

Another fiscal alternative to achieve an efficient allocation at all times is to subsidize sales in the nontraded sector. Let $s_t^{yN}$ be a proportional subsidy on sales in the nontraded sector. Then, profits of a representative firm in the nontraded sector are given by

$$(1 + s_t^{yN})\, p_t F(h_t) - \frac{W_t}{E_t}\, h_t.$$

The profit-maximization condition of the firm becomes

$$p_t = \frac{1}{1 + s_t^{yN}}\,\frac{W_t/E_t}{F'(h_t)}.$$

According to this expression, an increase in the sales subsidy increases the marginal revenue of the firm. Like a wage subsidy, a sales subsidy shifts the supply schedule down and to the right. The graphical analysis is therefore qualitatively identical to that used to explain the workings of the optimal wage subsidy shown in figure 9.1.

Consumption Subsidies

A third fiscal instrument that can be used to ensure full employment at all times in a pegging economy with downward nominal wage rigidity is a proportional subsidy to the consumption of nontradables. Specifically, assume that the after-subsidy price of nontradable goods faced by consumers is $(1 - s_t^{cN}) p_t$. The subsidy on nontraded consumption makes nontradables less expensive relative to tradables. It can therefore be used by the government during a crisis to facilitate an expenditure switch toward nontraded consumption and away from tradable consumption. With subsidies on nontraded consumption, the demand schedule is given by

$$(1 - s_t^{cN})\, p_t = \frac{A_2(c_t^T, F(h_t))}{A_1(c_t^T, F(h_t))}.$$

Figure 9.2 illustrates how the consumption subsidy can be optimally used to ensure the efficient functioning of the labor market.
Suppose a negative external shock reduces the desired demand for tradable goods, shifting the demand schedule for nontradables down and to the left, as indicated by the downward sloping broken line. As discussed before, in the absence of any intervention, the pegging economy would be stuck at the inefficient point B, with involuntary unemployment.

[Figure 9.2: Adjustment Under Optimal Taxation of Nontradable Consumption. The figure plots, against hours $h$, the demand schedules $A_2(c_0^T, F(h))/A_1(c_0^T, F(h))$ and $A_2(c_1^T, F(h))/A_1(c_1^T, F(h))$, the subsidized demand schedule $A_2(c_1^T, F(h))/[(1 - s^{cN}) A_1(c_1^T, F(h))]$, and the supply schedule $(W_0/E_0)/F'(h)$, with full employment at $h = \bar h$.]

The introduction of the subsidy to nontraded consumption shifts the demand schedule back up and to the right. If the magnitude of the subsidy is chosen appropriately, the demand schedule will cross the supply schedule exactly at point A, where the labor market returns to full employment. Note that the real exchange rate does not adjust to the external shock. However, the after-tax real exchange rate experiences a depreciation when the consumption subsidy is implemented. This perceived depreciation boosts the demand for nontradables, thereby preventing a spillover to the nontraded sector of a contraction originating in the traded sector.

A criticism that can be raised against all three of the fiscal stabilization schemes considered here is that the implied tax policies inherit the stochastic properties of the underlying sources of uncertainty (i.e., $r_t$ and $y_t^T$). This means that tax rates must change at business-cycle frequencies. To the extent that changes to the tax code are subject to legislative approval, the long and uncertain lags involved in this process might render the implementation of the optimal tax policy impossible. Another potential criticism of the fiscal policies considered here is that they are not prudential in nature. That is, they come into effect after a crisis has occurred.
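Under the illustrative functional forms used earlier ($A(c^T, c^N) = c^T c^N$, $F(h) = h^\alpha$, $\bar h = 1$), the three instruments can be compared directly. The rates below are not from the text; they follow from equating each instrument's shifted demand or supply schedule with the other schedule at $h = \bar h$ when the real wage is stuck at $w$. In this particular example the required consumption subsidy happens to coincide with the required labor subsidy.

```python
ALPHA = 0.75  # F(h) = h**ALPHA, h_bar = 1, A(cT, cN) = cT*cN (illustrative)

def required_rates(c_T, w):
    """Subsidy rates that each restore full employment with the wage stuck at w.

    Labor subsidy:        p = (1 - s_h) * w / F'(h)   ->  s_h = 1 - ALPHA*c_T/w
    Sales subsidy:        p = (w / F'(h)) / (1 + s_yN) -> 1 + s_yN = 1/(1 - s_h)
    Consumption subsidy:  (1 - s_cN) * p = c_T / F(h)  -> s_cN = s_h here,
                          a coincidence of this particular aggregator.
    """
    s_h = max(0.0, 1.0 - ALPHA * c_T / w)
    s_yN = s_h / (1.0 - s_h)   # same supply shift expressed as a sales subsidy
    s_cN = s_h                 # equivalent demand shift under A = cT*cN
    return s_h, s_yN, s_cN
```

For the same shock as before ($c^T = 9$, wage stuck at $w = 7.5$), the required rates are $s^h = s^{cN} = 0.1$ and $s^{yN} = 1/9 \approx 0.111$.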
In the next section, we study macroprudential policies that are applied during the boom phase of the cycle in order to reduce an economy's vulnerability to negative external shocks.

Capital Controls As Second-Best Policies

Fixed-exchange-rate arrangements are often part of broader economic reform programs that include liberalization of international capital flows. For small emerging economies, such a policy combination has been a mixed blessing. A case in point is the European currency union, which imposes capital account liberalization as a prerequisite for admission. Figure 9.3 displays the average current-account-to-GDP ratio, an index of nominal hourly wages in Euros, and the rate of unemployment for a group of peripheral European countries that were either on or pegging to the Euro over the period 2000 to 2011. In the early 2000s, these countries enjoyed large capital inflows, which, through their expansionary effect on domestic absorption, led to sizable appreciations in hourly wages. With the onset of the global recession in 2008, however, capital inflows dried up and aggregate demand collapsed. At the same time, nominal wages remained at the level they had achieved at the peak of the boom. The combination of depressed levels of aggregate demand and high nominal wages was associated with a massive increase in involuntary unemployment. In turn, local monetary authorities were unable to reduce real wages via a devaluation because of their commitment to the currency union.

[Figure 9.3: Boom-Bust Cycle in Peripheral Europe: 2000-2011. Three panels plot, over 2000-2011, the current-account-to-GDP ratio (percent), a nominal labor cost index (2008 = 100), and the unemployment rate. Data source: Eurostat. Data represent the arithmetic mean of Bulgaria, Cyprus, Estonia, Greece, Ireland, Lithuania, Latvia, Portugal, Spain, Slovenia, and Slovakia.]
Viewed through the lens of the model with nominal rigidities studied earlier in this chapter and in the previous chapter, the type of empirical evidence presented above suggests the possibility that countries might benefit from adopting prudential capital controls, that is, from taxing net capital inflows during booms and subsidizing them during contractions. Capital controls essentially represent a wedge between the interest rate at which the rest of the world is willing to lend to domestic residents and the interest rate effectively paid by these agents. In other words, by using capital controls, the government can control the interest rate. Therefore, raising capital controls during booms can curb the expansion in aggregate demand and in this way slow nominal wage growth. In turn, this would allow the economy to enter the contractionary phase of the cycle with a lower level of wages, which would result in less unemployment. The reduction in unemployment could be even larger if during the contraction the policy authority subsidized net capital inflows.¹

Imposing capital controls, however, comes at a cost. The reason is that capital controls distort the real interest rate perceived by private domestic agents, thereby introducing inefficiencies in the intertemporal allocation of consumption of tradable goods. Thus, the government contemplating the imposition of capital controls faces a tradeoff between intertemporal distortions, caused by capital controls themselves, and static distortions, caused by the combination of downward nominal wage rigidity and suboptimal exchange-rate policy. The remainder of this chapter is devoted to analyzing this tradeoff from a Ramsey perspective. The analysis follows Schmitt-Grohé and Uribe (2011, 2013).

Capital Controls As A Distortion To The Interest Rate

We embed capital controls into the small open economy with downward nominal wage rigidity developed in chapter 8.
Throughout the analysis, we assume that the nominal exchange rate is constant. Let $\tau_t^d$ denote a tax on net foreign debt in period $t$. Then, the household's sequential budget constraint is given by

$$c_t^T + p_t c_t^N + d_t = (1 - \tau_t)\left(y_t^T + w_t h_t + \phi_t\right) + (1 - \tau_t^d)\,\frac{d_{t+1}}{1 + r_t}.$$

The government intervention in the international financial market through the capital control variable $\tau_t^d$ alters the effective gross interest rate paid by the household from $1 + r_t$ to $(1 + r_t)/(1 - \tau_t^d)$. The rate $\tau_t^d$ can take positive or negative values. When it is positive, the government discourages external borrowing by raising the effective interest rate. In this case, we say that the government imposes capital controls. On the other hand, when $\tau_t^d$ is negative, the government subsidizes international borrowing by lowering the effective interest rate. As we will see shortly, a benevolent government will make heavy use of cyclical adjustments in capital controls to stabilize consumption and employment.

The variable $\tau_t$ denotes a proportional tax rate (or subsidy rate, if negative) on personal income. In section 9.1.1, we showed that $\tau_t$ is a nondistorting tax instrument in the present economy. The government is assumed to set $\tau_t$ so as to balance its budget period by period. Specifically, $\tau_t$ is used to rebate or finance any revenue or deficit generated by capital controls.

¹Thus, capital controls can be viewed as one way out of Mundell's (1963) trilemma of international finance, according to which a country cannot simultaneously have a fixed exchange rate, free capital mobility, and an independent interest-rate policy. For, by breaking free capital mobility, capital controls allow a country to pursue an independent interest-rate policy. Of course, adopting an optimal exchange-rate policy also represents a way out of the trilemma.
Assuming that the government starts out with no public debt outstanding, its balanced-budget rule implies that, given $\tau_t^d$, the income tax rate $\tau_t$ is set residually to satisfy

$$\tau_t^d\,\frac{d_{t+1}}{1 + r_t} + \tau_t\left(y_t^T + w_t h_t + \phi_t\right) = 0.$$

Equilibrium Under Capital Controls

The introduction of capital controls alters only one equilibrium condition, namely the one stemming from the household's choice of foreign debt. This optimality condition now becomes

$$\lambda_t\,\frac{1 - \tau_t^d}{1 + r_t} = \beta E_t \lambda_{t+1} + \mu_t.$$

It is clear from this expression that the gross interest rate that is relevant for the household is not $1 + r_t$ but $(1 + r_t)/(1 - \tau_t^d)$. Setting $\tau_t^d$ at a positive level elevates the interest rate perceived by households and discourages borrowing in international financial markets. Similarly, a negative value of $\tau_t^d$ lowers the effective interest rate, encouraging borrowing.

A competitive equilibrium under a fixed exchange-rate regime is then a set of processes $\{c_t^T, d_{t+1}, h_t, w_t, \lambda_t, \mu_t\}_{t=0}^\infty$ satisfying

$$c_t^T + d_t = y_t^T + \frac{d_{t+1}}{1 + r_t}, \tag{9.14}$$
$$P(c_t^T, h_t)\, F'(h_t) = w_t, \tag{9.15}$$
$$h_t \leq \bar h, \tag{9.16}$$
$$w_t \geq \gamma w_{t-1}, \tag{9.17}$$
$$d_{t+1} \leq \bar d, \tag{9.18}$$
$$\lambda_t = U'(A(c_t^T, F(h_t)))\,A_1(c_t^T, F(h_t)), \tag{9.19}$$
$$\lambda_t\,\frac{1 - \tau_t^d}{1 + r_t} = \beta E_t \lambda_{t+1} + \mu_t, \tag{9.20}$$
$$\mu_t \geq 0, \tag{9.21}$$
$$\mu_t (d_{t+1} - \bar d) = 0, \tag{9.22}$$
$$(h_t - \bar h)(w_t - \gamma w_{t-1}) = 0, \tag{9.23}$$

given exogenous stochastic processes $\{y_t^T, r_t\}_{t=0}^\infty$, initial conditions $d_0$ and $w_{-1}$, and a capital control policy $\{\tau_t^d\}_{t=0}^\infty$. The function

$$P(c_t^T, h_t) \equiv \frac{A_2(c_t^T, F(h_t))}{A_1(c_t^T, F(h_t))}$$

denotes the equilibrium relative price of nontradables in terms of tradables, expressed as a function of consumption of tradables and employment.

Ramsey Optimal Capital Controls

As discussed in chapter 8, the combination of downward nominal wage rigidity and a currency peg creates a negative pecuniary externality. In periods of economic expansion, elevated demand for nontradables drives real wages up. This poses no inefficiencies at this point in the cycle, but becomes problematic in the contractionary phase.
For then downward nominal wage rigidity and the currency peg hinder the necessary downward adjustment of real wages, causing unemployment. Households are too small to internalize the fact that their own high consumption during the boom is the harbinger of higher unemployment in the recession. Consequently, the government has an incentive to prudentially regulate capital flows to curb the initial expansion in tradable consumption in response to positive external shocks. Such a policy would dampen the initial increase in nominal wages and in that way mitigate the subsequent unemployment problem as the economy returns to its normal state.

In general, each capital control policy induces a particular competitive equilibrium. We assume that the government is benevolent in the sense that it picks the capital control policy associated with the competitive equilibrium that yields the highest utility to households. That is, the government chooses the capital control policy to maximize the lifetime welfare of the representative household subject to the complete set of equilibrium conditions. This type of optimization problem is known as the Ramsey problem, its solution as the Ramsey optimal equilibrium, and the resulting policy as the Ramsey optimal policy. We do not impose the assumption that the government is endowed with full commitment, i.e., that the government always keeps its promises. For, as will become clear shortly, in the present model the Ramsey optimal capital control policy turns out to be time consistent; that is, in any given period, and given the assumed exchange-rate policy, the government has no incentive to deviate from promises on capital controls made in the past.

Formally, the Ramsey planner's optimization problem consists in choosing processes $\{\tau_t^d, c_t^T, d_{t+1}, h_t, w_t, \lambda_t, \mu_t\}_{t=0}^\infty$ to maximize (8.1) subject to conditions (9.14)-(9.23).
The strategy we follow to characterize the Ramsey allocation is to drop conditions (9.19)-(9.23) from the set of constraints of the Ramsey planner's problem and then show that the solution to this less constrained Ramsey problem satisfies the omitted constraints. To see that processes $\{c_t^T, h_t, w_t\}$ that solve the less constrained Ramsey problem also satisfy the constraints that were omitted from the Ramsey problem, pick $\lambda_t$ to satisfy (9.19). Next, set $\mu_t = 0$ for all $t$.² It follows that (9.21) and (9.22) are satisfied. Pick $\tau_t^d$ to satisfy (9.20). It remains to show that the slackness condition (9.23) is satisfied when evaluated at the allocation that solves the less constrained Ramsey problem. To see that this is the case, consider the following proof by contradiction. Suppose, contrary to what we wish to show, that in the allocation that solves the less constrained Ramsey problem $h_t < \bar h$ and $w_t > \gamma w_{t-1}$ at some date $t$. Consider now an increase in hours only at date $t$, from $h_t$ to $\tilde h_t \leq \bar h$. Clearly, this perturbation does not violate (9.14). From (9.15), we have that the real wage falls to $\tilde w_t \equiv P(c_t^T, \tilde h_t)\, F'(\tilde h_t) < w_t$. Because $P$ and $F'$ are continuous functions, expression (9.17) is satisfied provided the increase in hours is sufficiently small. Starting in $t + 1$, the lower bound on wages is smaller because $\tilde w_t < w_t$. This shows that the perturbation is feasible.

²Note that in states in which the Ramsey allocation calls for setting $d_{t+1} < \bar d$, $\mu_t$ must be chosen to be zero. However, in states in which the Ramsey allocation yields $d_{t+1} = \bar d$, $\mu_t$ need not be chosen to be zero. In these states, any positive value of $\mu_t$ could be supported in the decentralization of the Ramsey equilibrium. Of course, in this case, $\tau_t^d$ will depend on the chosen value of $\mu_t$. In particular, $\tau_t^d$ will be strictly decreasing in the arbitrarily chosen value of $\mu_t$.
Finally, the perturbation is clearly welfare increasing because it raises the consumption of nontradables in period $t$ without affecting the consumption of tradables in any period or the consumption of nontradables in any period other than $t$. It follows that an allocation that does not satisfy the slackness condition (9.23) cannot be a solution to the less constrained Ramsey problem.

An implication of the previous analysis is that one can characterize the Ramsey allocation as the solution to the following Bellman equation problem:

$$v(y_t^T, r_t, d_t, w_{t-1}) = \max\left\{ U(A(c_t^T, F(h_t))) + \beta E_t\, v(y_{t+1}^T, r_{t+1}, d_{t+1}, w_t) \right\}$$

subject to (9.14)-(9.18), where $v(y_t^T, r_t, d_t, w_{t-1})$ denotes the value function of the representative household. Note that the capital control rate $\tau_t^d$ does not feature in this problem. However, from the arguments presented above, we have that the optimal capital control policy must deliver tax rates on debt satisfying

$$\tau_t^d = 1 - \beta(1 + r_t)\,\frac{E_t\, U'(c_{t+1})\, A_1(c_{t+1}^T, c_{t+1}^N)}{U'(c_t)\, A_1(c_t^T, c_t^N)},$$
Here, we characterize the response of the economy under a currency peg and Ramsey optimal capital controls.³

Preferences are given by U(c_t) = ln(c_t) and A(c_t^T, c_t^N) = c_t^T c_t^N. The technology for producing nontradable goods is F(h_t) = h_t^α, with α ∈ (0, 1). The economy starts period zero with no outstanding debt, d_0 = 0. The endowment of tradables, y^T > 0, is constant over time. The real wage in period −1 equals αy^T. The economy is subject to a temporary interest rate decline in period zero. Specifically, r_t = r for all t ≠ 0, and r_0 = r̲ < r. This interest-rate shock is assumed to be unanticipated. Assume that β(1 + r) = 1, γ = 1, and h̄ = 1. Finally, assume that the economy was at a full-employment equilibrium in periods t < 0, with d_t = 0, c_t^T = y^T, c_t^N = h̄^α, and h_t = h̄.

Recall from chapter 8.2.3 that aggregate dynamics under free capital mobility and a currency peg are given by

    c_0^T = y^T [1/(1 + r̲) + r/(1 + r)] > y^T,

    c_t^T = y^T [(1 + r̲)/(1 + r)] [1/(1 + r̲) + r/(1 + r)] < y^T;    t ≥ 1,

    d_t = y^T [1 − (1 + r̲)/(1 + r)] > 0;    t ≥ 1,

    h_0 = 1,

    h_t = (1 + r̲)/(1 + r) < 1;    t ≥ 1.

Figure 9.4 depicts these equilibrium dynamics with solid lines. Under free capital mobility and a fixed exchange rate, the fall in the interest rate in period 0 causes an expansion in the consumption of tradables financed by external debt and an increase in real wages and the relative price of nontradables (not shown). In period 1, the country interest rate rises to its normal value and consumption of tradables falls permanently to a level below the endowment. The resulting trade surplus is used to pay the interest on the debt incurred in period 0. The fall in aggregate demand in period 1 puts downward pressure on real wages.

———
³ In the context of the present model, capital controls would be superfluous under the optimal exchange-rate policy, since the latter achieves the first-best allocation even under free capital mobility.
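As a check, the closed-form paths above can be transcribed directly into code. This is a small numerical sketch with illustrative parameter values (r̲ is written r_low); it verifies the period-0 and period-1 flow budget constraints.

```python
# Free-capital-mobility dynamics after a temporary rate decline (r_0 = r_low < r),
# transcribed from the closed-form expressions in the text.

def fcm_paths(yT, r, r_low):
    cT0 = yT * (1/(1 + r_low) + r/(1 + r))                          # c_0^T
    cT1 = yT * ((1 + r_low)/(1 + r)) * (1/(1 + r_low) + r/(1 + r))  # c_t^T, t >= 1
    d1  = yT * (1 - (1 + r_low)/(1 + r))                            # d_t, t >= 1
    h1  = (1 + r_low)/(1 + r)                                       # h_t, t >= 1
    return cT0, cT1, d1, h1

yT, r, r_low = 1.0, 0.04, 0.01
cT0, cT1, d1, h1 = fcm_paths(yT, r, r_low)

# Flow budget constraints: c_0^T = yT + d1/(1+r_low); c_t^T = yT - r*d1/(1+r), t >= 1.
budget0_gap = cT0 - (yT + d1 / (1 + r_low))
budget1_gap = cT1 - (yT - r * d1 / (1 + r))
```

The signs come out as the text describes: consumption of tradables is above the endowment in period 0, below it thereafter, debt is positive, and hours fall short of full employment from period 1 on.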
However, because nominal wages are downwardly rigid and the nominal exchange rate is fixed, the real wage cannot fall, causing disequilibrium in the labor market and the emergence of permanent involuntary unemployment.

We will show that under Ramsey optimal capital controls and a currency peg the equilibrium allocation is given by

    c_t^T = y^T;    t ≥ 0,
    h_t = 1;    t ≥ 0,
    d_{t+1} = 0;    t ≥ 0,

supported by the capital control policy

    τ_t^d = 1 − (1 + r̲)/(1 + r) > 0    for t = 0,
    τ_t^d = 0    for t ≥ 1.

[Figure 9.4: Adjustment Under Ramsey Optimal Capital Control Policy to a Temporary Interest Rate Decline. Panels: country interest rate r_t; consumption of tradables c_t^T; debt d_t; unemployment (h̄ − h_t)/h̄; real wage w_t; capital control tax τ_t^d. Solid lines: free capital mobility; crossed broken lines: Ramsey optimal capital controls.]

Figure 9.4 displays these equilibrium dynamics with crossed broken lines. The Ramsey optimal capital control policy taxes capital inflows in period 0 by setting τ_0^d > 0, which raises the cost of external borrowing above r̲. Indeed, the Ramsey planner finds it optimal to fully undo the temporary decline in the world interest rate. The effective interest rate faced by domestic households, (1 + r_t)/(1 − τ_t^d) − 1, equals r even in period 0. In this way, the Ramsey planner curbs the boom in aggregate demand and limits the appreciation of real wages in period 0. Consumption is fully smoothed over time and as a result the labor market is unaffected by the temporary decline in interest rates.

In the absence of downward nominal wage rigidity, it would be optimal for the economy to take advantage of the temporary fall in the interest rate and expand the absorption of tradables in period zero. Thus, by imposing capital controls at t = 0, the planner distorts the intertemporal allocation of consumption of tradables. This distortion is of course welfare decreasing.
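The prudential tax in this example is easy to verify numerically: setting τ_0^d = 1 − (1 + r̲)/(1 + r) makes the effective rate faced by households equal to r in period 0. A sketch with illustrative numbers (our own code):

```python
# Ramsey capital-control tax in the example: the planner fully undoes the
# temporary rate decline, so the effective rate (1+r_t)/(1-tau_t) - 1 equals r.

def optimal_tau0(r, r_low):
    return 1.0 - (1.0 + r_low) / (1.0 + r)

r, r_low = 0.04, 0.01
tau0 = optimal_tau0(r, r_low)
effective_rate = (1.0 + r_low) / (1.0 - tau0) - 1.0
```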
However, the gains stemming from avoiding unemployment in periods t ≥ 1 more than offset these welfare losses. In fact, in the present example, the tradeoff between intertemporal distortions in the allocation of tradables and future unemployment is resolved entirely in favor of eliminating unemployment and against any expansion in the absorption of tradables. In general, the resolution of the tradeoff will involve some unemployment and some increase in the absorption of tradables.

We now show formally the validity of the conjectured Ramsey optimal equilibrium. The optimal capital control policy is the solution to the problem of maximizing the value function (9.24) subject to (9.14)-(9.18). The appendix to this chapter shows that under Ramsey optimal capital controls, beginning in period t = 1 all variables are constant, that is, c_t^T = c_1^T, h_t = h_1, d_{t+1} = d_1, and w_t = w_1 for all t ≥ 1. The level of welfare associated with the conjectured optimal policy is

    v^opt = [1/(1 − β)] ln y^T.

To see that the conjecture is correct, consider first an alternative solution in which c_0^T > y^T. In this case, d_1 > 0 and therefore c_1^T < y^T (to ensure a constant equilibrium path of debt starting in period 1). The full-employment wage in period 0 is αc_0^T > αy^T ≡ w_{−1}. It follows that h_0 = 1 and w_0 = αc_0^T. In period 1, the full-employment wage rate is αc_1^T, which is clearly less than w_0. As a result, we have that in period 1 the lower bound on wages binds, that is, w_1 = w_0. Equation (9.15) then implies that h_t = c_t^T/c_0^T < 1 and ln c_t^N = α(ln c_1^T − ln c_0^T) for all t ≥ 1. Lifetime utility is then given by

    ṽ = {[1 − β(1 + α)]/(1 − β)} ln c_0^T + [β(1 + α)/(1 − β)] ln c_1^T.

Clearly, if α > r, then v^opt > ṽ, so c_0^T > y^T is not Ramsey optimal.
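The inequality v^opt > ṽ for α > r can be checked numerically. The sketch below (our own code, illustrative parameters with β(1 + r) = 1 and α > r) evaluates ṽ on a grid of deviations c_0^T > y^T:

```python
import math

# v_tilde for a deviation with cT0 > yT: d1 = (1+r_low)*(cT0 - yT) from the
# period-0 budget, and cT1 = yT - r*d1/(1+r) from the constant-debt budget.

def v_tilde(cT0, yT, r, r_low, alpha, beta):
    d1 = (1 + r_low) * (cT0 - yT)
    cT1 = yT - r * d1 / (1 + r)
    return ((1 - beta * (1 + alpha)) * math.log(cT0)
            + beta * (1 + alpha) * math.log(cT1)) / (1 - beta)

yT, r, r_low, alpha = 1.0, 0.04, 0.01, 0.3   # alpha > r
beta = 1.0 / (1.0 + r)                        # beta*(1+r) = 1
v_opt = math.log(yT) / (1 - beta)
deviations = [v_tilde(1.0 + 0.01 * k, yT, r, r_low, alpha, beta)
              for k in range(1, 6)]
```

Every deviation on the grid yields lower lifetime utility than the conjectured plan, consistent with the analytical claim.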
The intuition for why the parameter α, governing labor productivity, is relevant for establishing this result is that if labor productivity is too low, it would not pay for the planner to avoid unemployment by raising capital controls, because the additional output of nontradables resulting from higher employment is too low. In this case, the planner prefers to preserve intertemporal efficiency in the allocation of tradables at the expense of some unemployment in the nontraded sector.

Suppose now that, contrary to our conjecture, the Ramsey optimal plan features c_0^T < y^T. Then it must be the case that d_1 < 0, and therefore c_t^T > y^T, for all t ≥ 1. Also, the full-employment real wage in period 0 is αc_0^T < αy^T = w_{−1}, which implies the existence of involuntary unemployment in period 0. Equation (9.15) then implies that h_0 = c_0^T/y^T < 1. By a similar logic, there is full employment starting in period 1, h_t = 1 for t ≥ 1. Lifetime welfare is then given by

    v̂ = (1 + α) ln c_0^T + [β/(1 − β)] ln c_1^T − α ln y^T.

Now, combine the sequential budget constraints (9.14) evaluated at t = 0 and t = 1, given, respectively, by c_0^T = y^T + d_1/(1 + r̲) and c_1^T = y^T − [r/(1 + r)] d_1, to obtain

    c_1^T = y^T [1 + r(1 + r̲)/(1 + r)] − [r(1 + r̲)/(1 + r)] c_0^T.

Using this expression to eliminate c_1^T from lifetime welfare, we obtain

    v̂ = (1 + α) ln c_0^T + [β/(1 − β)] ln { y^T [1 + r(1 + r̲)/(1 + r)] − [r(1 + r̲)/(1 + r)] c_0^T } − α ln y^T.

Notice that v̂ = v^opt when c_0^T = y^T. Moreover, the derivative of v̂ with respect to c_0^T is positive for any c_0^T ≤ y^T. This implies that v̂ < v^opt for any c_0^T < y^T. We have therefore established the validity of the conjectured Ramsey optimal equilibrium.

Finally, the capital control policy that supports the Ramsey equilibrium can be read off the household's Euler equation (9.25) evaluated at c_0^T = c_1^T = y^T, which yields

    τ_0^d = 1 − (1 + r̲)/(1 + r) > 0

and τ_t^d = 0 for t ≥ 1.
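The second half of the argument can also be checked numerically: v̂(c_0^T) coincides with v^opt at c_0^T = y^T and is increasing for c_0^T ≤ y^T. A sketch with illustrative parameters (our own code):

```python
import math

# v_hat for a deviation with cT0 < yT, using the combined budget constraints:
# cT1 = yT*(1 + k) - k*cT0, where k = r*(1 + r_low)/(1 + r).

def v_hat(cT0, yT, r, r_low, alpha, beta):
    k = r * (1 + r_low) / (1 + r)
    cT1 = yT * (1 + k) - k * cT0
    return ((1 + alpha) * math.log(cT0)
            + beta / (1 - beta) * math.log(cT1)
            - alpha * math.log(yT))

yT, r, r_low, alpha = 1.0, 0.04, 0.01, 0.3
beta = 1.0 / (1.0 + r)
v_opt = math.log(yT) / (1 - beta)
grid = [0.90 + 0.02 * k for k in range(6)]    # cT0 values ending at yT = 1.0
vals = [v_hat(c, yT, r, r_low, alpha, beta) for c in grid]
```

On this grid v̂ rises monotonically toward v^opt, so any plan with c_0^T < y^T is dominated, as claimed.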
Optimal Capital Controls During a Boom-Bust Episode

To illustrate the prudential nature of optimal capital controls in a more realistic fixed-exchange-rate economy than the one analyzed in the example of the previous section, we now consider a stochastic environment characterized by random disturbances to the country interest rate, r_t, and to the endowment of tradables, y_t^T. In chapter 8.7, we estimated the joint law of motion of these two exogenous driving forces using data from Argentina over the period 1983:Q1 to 2001:Q4. We adopt this estimated process here.⁴ We also follow the calibration of chapter 8.7 to assign values to all other structural parameters of the model.

Under this more realistic stochastic structure, there exists no closed-form solution to the Ramsey optimal capital control problem. Therefore, we resort to numerical methods to approximate the solution to the value-function problem of maximizing (9.24) subject to (9.14)-(9.18). We apply a value-function-iteration procedure over a discretized state space. The discretization is described in chapter 8.6.

In this section, we study the behavior of capital controls during a boom-bust episode. We define a boom-bust episode as a situation in which tradable output, y_t^T, is at or below trend in period 0, at least one standard deviation above trend in period 10, and at least one standard deviation below trend in period 20. To this end, we simulate the model economy for 20 million periods and select all subperiods that satisfy the definition of a boom-bust episode. We then average across these episodes.

Figure 9.5 depicts the model's predictions during a boom-bust cycle. Solid lines correspond to the economy with free capital mobility (i.e., no capital controls) and broken lines to the economy with optimal capital controls. The two top panels of the figure display the dynamics of the two exogenous driving forces, tradable output and the country interest rate.
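The episode-selection rule in the definition above is straightforward to implement. The following sketch uses a hypothetical toy series, not the book's 20-million-period simulation, and flags windows that qualify as boom-bust episodes:

```python
# Boom-bust episodes: output at or below trend in period 0, at least one
# standard deviation above trend in period 10, and at least one standard
# deviation below trend in period 20.

def boom_bust_starts(y_dev, sd):
    """Indices t such that the window [t, t+20] qualifies; y_dev holds
    deviations of traded output from trend, sd its standard deviation."""
    return [t for t in range(len(y_dev) - 20)
            if y_dev[t] <= 0.0 and y_dev[t + 10] >= sd and y_dev[t + 20] <= -sd]

# Tiny hand-made series with exactly one qualifying window starting at t = 0.
dev = [0.0] + [0.5] * 9 + [1.5] + [0.5] * 9 + [-1.5] + [0.0] * 5
episodes = boom_bust_starts(dev, sd=1.0)
```

In the book's exercise, the flagged windows would then be averaged pointwise to produce figures such as 9.5.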
By construction, y_t^T and r_t are unaffected by capital controls.

The middle left panel of the figure shows that capital controls are used in a prudential fashion during boom-bust episodes. They are increased significantly during the expansionary phase of the cycle, from about half a percent at the beginning of the episode to almost 3 percent at the peak of the cycle. During the contractionary phase of the cycle, capital controls are drastically relaxed.

[Figure 9.5: Prudential Policy for Peggers: Boom-Bust Dynamics With and Without Capital Controls. Panels: traded output y_t^T; annualized interest rate r_t (% per year); capital control rate τ_t^d (%); traded consumption c_t^T; unemployment rate 1 − h_t (%); consumption c_t. Solid lines: no capital controls; broken lines: optimal capital controls.]

Indeed, at the bottom of the crisis, capital inflows are actually subsidized at a rate of about 2 percent. The increase in capital controls during the expansionary phase of the cycle puts sand in the wheels of capital inflows, thereby restraining the boom in tradable consumption (see the middle right panel of figure 9.5). Under free capital mobility, during the boom, traded consumption increases significantly more than under the optimal capital control policy. In the contractionary phase, the fiscal authority incentivizes spending by subsidizing capital inflows. As a result, tradable consumption falls by much less in the regulated economy than it does in the unregulated one. During the recession, the optimal capital control policy, far from calling for austerity in the form of trade surpluses, facilitates trade balance deficits.

———
⁴ Schmitt-Grohé and Uribe (2011) estimate the process on data from Argentina, Spain, and Greece and conduct an analysis similar to the one presented here.
It follows that the Ramsey-optimal capital control policy does not belong to the family of beggar-thy-neighbor policies, for it does not seek to foster external demand during crises.

Because unemployment depends directly upon variations in the level of tradable absorption through the latter's role as a shifter of the demand schedule for nontradables, and because optimal capital controls stabilize the absorption of tradables, unemployment is also stable over the boom-bust cycle. Specifically, as can be seen from the bottom left panel of figure 9.5, in the absence of capital controls, unemployment increases sharply by about 20 percentage points during the recession. By contrast, under optimal capital controls the rate of unemployment is virtually zero. It follows that the Ramsey planner's tradeoff between distorting the intertemporal allocation of tradable consumption and reducing unemployment is overwhelmingly resolved in favor of the latter.

Table 9.1: Optimal Capital Controls and Currency Pegs: Level and Volatility Effects

                           Capital     Country     Effective                   Unemploy-        Growth in Traded
                           Control     Interest    Interest                    ment             Consumption
                           Tax Rate    Rate        Rate                        Rate
                           100·τ_t^d   400·r_t     400·[(1+r_t)/(1−τ_t^d)−1]   100·(h̄−h_t)/h̄    400·ln(c_t^T/c_{t−1}^T)
  Mean              FCM      0           12.7        12.7                        11.8              0.0
                    OCC      0.6         12.6        15.1                         0.4              0.0
  Std. Deviation    FCM      0            7.1         7.1                        10.4             23.2
                    OCC      2.4          7.1         5.0                         1.5              4.9
  Corr. with y_t^T  FCM      –           -0.8        -0.8                        -0.6              0.1
                    OCC      0.7         -0.8         0.3                        -0.2              0.2
  Corr. with r_t    FCM      –            1.0         1.0                         0.7             -0.2
                    OCC     -0.9          1.0        -0.3                         0.2             -0.3
  Corr. with GDP    FCM      –           -0.8        -0.8                        -1.0              0.3
                    OCC      0.7         -0.8         0.2                        -0.5              0.3

Note. FCM stands for free capital mobility (τ_t^d = 0 for all t), and OCC stands for optimal capital controls.
GDP stands for gross domestic product and is expressed in terms of units of the composite good.

Currency Pegs and Optimal Capital Controls: Unconditional Moments

Under fixed exchange rates, Ramsey optimal capital controls are prudential not only during boom-bust episodes but also unconditionally over the business cycle. Table 9.1 displays unconditional first and second moments of macroeconomic indicators of interest for the economies with free capital mobility (FCM) and optimal capital controls (OCC). The correlation between the capital control rate and output is 0.7. This strong positive correlation is the result of a capital control policy that restricts capital inflows when the country interest rate is low and traded output is high, and vice versa. Specifically, the correlation between τ_t^d and y_t^T is 0.7 and the correlation between τ_t^d and r_t is −0.9.

The optimal capital control policy is highly effective in reducing involuntary unemployment. The average unemployment rate falls from 11.8 percent under free capital mobility to 0.4 percent under optimal capital controls. Notice that this finding is similar to the one derived in the analytical example of section 9.4, where we characterized the Ramsey optimal capital control policy in response to a temporary interest rate decline. In that example, the Ramsey planner set capital controls so as to isolate the domestic economy from variations in the country interest rate to ensure full employment. The virtually complete eradication of unemployment is achieved through a drastic reduction in the volatility of the shifter of the demand for nontradables, namely, the domestic absorption of tradables. Specifically, the standard deviation of the growth rate of consumption of tradables falls from 23.2 percent under free capital mobility to only 4.9 percent under optimal capital controls.
The Ramsey planner engineers the optimal smoothing of tradable consumption via a time-varying wedge between the country interest rate and the effective interest rate perceived by domestic agents. The size of the wedge is given by the size of the capital control tax rate. Indeed, capital controls are so strongly prudential that even though the country interest rate is markedly countercyclical (with a correlation of −0.8 with output), the effective interest rate under optimal capital controls is procyclical (with a correlation of 0.2 with output).

The strong interference of the Ramsey planner with the intertemporal allocation of tradable consumption represents a large deviation from the first-best allocation. Recall that given the assumption of equality of the inter- and intratemporal elasticities of substitution (1/σ = ξ), the path of traded consumption under a peg with free capital mobility coincides with the path associated with the first-best allocation. Thus, as in the simple analytical example of section 9.4, in the present richer stochastic economy, the tradeoff between an inefficient intertemporal allocation of tradable consumption and unemployment is resolved in favor of eliminating unemployment.

Currency Pegs and Overborrowing

The present model predicts that economies with free capital mobility and a fixed exchange rate overborrow in international financial markets. In the calibrated economy of the previous section, the average level of external debt is 22.4 percent of output under free capital mobility and −14.0 percent of output under optimal capital controls. This prediction of the model is also evident from figure 9.6, which shows the unconditional distribution of external debt under free capital mobility (solid line) and under optimal capital controls (dashed line).

[Figure 9.6: The Distribution of External Debt. Unconditional distributions of external debt d_t under free capital mobility (solid line) and optimal capital controls (dashed line).]
The Ramsey planner induces a lower average level of external debt by taxing borrowing at a positive rate. Table 9.1 shows that the effective interest rate (i.e., the after-tax interest rate) is 2.5 percentage points higher than the pre-tax interest rate. It follows that pegging economies with free capital mobility accumulate inefficiently large amounts of external debt. In other words, currency pegs in combination with free capital mobility lead to overborrowing.

The reason why the average level of external debt is lower under optimal capital controls than under free capital mobility is that the Ramsey planner finds it optimal to induce an external debt position that is significantly more volatile than the one associated with free capital mobility. The standard deviation of external debt is 1.45 under optimal capital controls and only 0.65 under free capital mobility. This difference in volatilities is reflected in figure 9.6, which shows that the distribution of external debt is significantly more dispersed under optimal capital controls than under free capital mobility. A more volatile process for external debt requires centering the debt distribution further away from the natural debt limit, for precautionary reasons. In turn, the reason why the Ramsey planner finds wide swings in the external debt position desirable is that such variations allow him to insulate the domestic absorption of tradable goods from exogenous disturbances buffeting the economy. Put differently, in the Ramsey economy, external debt plays the role of shock absorber to a much larger extent than it does in the economy with free capital mobility.

We close this section by commenting on the commonly held view that imposing capital controls amounts to making the current account more closed. This need not be the case. Indeed, in the present model, optimal capital controls play the opposite role.
Under optimal capital controls, the economy makes heavier use of the current account to smooth consumption than it does under free capital mobility. The volatility of the current account, given by d_t/(1 + r_{t−1}) − d_{t+1}/(1 + r_t), is 50 percent higher under optimal capital controls than under free capital mobility. And we saw in the previous section that the Ramsey planner uses the current account to smooth the path of traded absorption. So, far from insulating the economy from the rest of the world, optimal capital controls serve to strengthen intertemporal trade with the rest of the world.

Welfare Costs of Free Capital Mobility Under Fixed Exchange Rates

In the previous two sections, we saw that in the model economy studied in this and the previous chapter, the combination of a fixed exchange rate and free capital mobility entails excessive external debt and unemployment. Both of these factors tend to depress consumption and therefore reduce welfare. We now quantify the welfare losses associated with free capital mobility in economies subject to a currency peg.

The welfare cost of free capital mobility conditional on a particular state {y_t^T, r_t, d_t, w_{t−1}}, denoted Λ^FCM(y_t^T, r_t, d_t, w_{t−1}), is defined as the permanent percent increase in the lifetime consumption stream required by an individual living in a fixed-exchange-rate economy with free capital mobility to be as well off as an individual living in a fixed-exchange-rate economy with optimal capital controls.
Formally, Λ^FCM(y_t^T, r_t, d_t, w_{t−1}) is implicitly given by

    E_t Σ_{s=0}^∞ β^s U( c_{t+s}^FCM [1 + Λ^FCM(y_t^T, r_t, d_t, w_{t−1})/100] ) = v^OCC(y_t^T, r_t, d_t, w_{t−1}),

where c_t^FCM denotes the equilibrium process of consumption in the currency-peg economy with free capital mobility (and is identical to c_t^PEG in the notation of chapter 8.10) and v^OCC(y_t^T, r_t, d_t, w_{t−1}) denotes the value function associated with the optimal capital-control policy in the fixed-exchange-rate economy, given by

    v^OCC(y_t^T, r_t, d_t, w_{t−1}) ≡ E_t Σ_{s=0}^∞ β^s [ (c_{t+s}^OCC)^{1−σ} − 1 ] / (1 − σ),

where c_t^OCC denotes the equilibrium process for consumption in the fixed-exchange-rate economy with optimal capital controls. Solving for Λ^FCM(y_t^T, r_t, d_t, w_{t−1}) yields

    Λ^FCM(y_t^T, r_t, d_t, w_{t−1}) = 100 { [ ( v^OCC(y_t^T, r_t, d_t, w_{t−1})(1 − σ) + (1 − β)^{−1} ) / ( v^FCM(y_t^T, r_t, d_t, w_{t−1})(1 − σ) + (1 − β)^{−1} ) ]^{1/(1−σ)} − 1 },

where v^FCM(y_t^T, r_t, d_t, w_{t−1}) denotes the value function associated with free capital mobility and a fixed exchange rate, given by

    v^FCM(y_t^T, r_t, d_t, w_{t−1}) ≡ E_t Σ_{s=0}^∞ β^s [ (c_{t+s}^FCM)^{1−σ} − 1 ] / (1 − σ).

This value function is identical to v^PEG(y_t^T, r_t, d_t, w_{t−1}) introduced in chapter 8.10.

Because the state vector is stochastic, the conditional welfare cost measure, Λ^FCM(y_t^T, r_t, d_t, w_{t−1}), is itself stochastic. We wish to compute the unconditional mean of Λ^FCM(y_t^T, r_t, d_t, w_{t−1}). This requires knowledge of the unconditional probability distribution of the state vector (y_t^T, r_t, d_t, w_{t−1}). The distribution of the endogenous elements of the state vector, namely d_t and w_{t−1}, depends on the capital control regime. Because we are analyzing the welfare gains of switching from free capital mobility to optimal capital controls, the relevant probability distribution is the one associated with the economy under free capital mobility. Let λ^FCM denote the unconditional mean of Λ^FCM(y_t^T, r_t, d_t, w_{t−1}) and π^FCM(y_t^T, r_t, d_t, w_{t−1}) the unconditional probability of the state vector
(y_t^T, r_t, d_t, w_{t−1}) under free capital mobility. Then λ^FCM is given by

    λ^FCM = Σ_{(y_t^T, r_t, d_t, w_{t−1})} π^FCM(y_t^T, r_t, d_t, w_{t−1}) Λ^FCM(y_t^T, r_t, d_t, w_{t−1}),

where the sum is over all points in the discretized four-dimensional state space. Evaluating this expression yields

    λ^FCM = 3.65.

This figure reveals that for an economy with a fixed exchange rate the average welfare gains of switching from free capital mobility to optimal capital controls are large. The representative household living in the economy with free capital mobility requires on average an increase of 3.65 percent in consumption every period to be indifferent between living under free capital mobility and living under optimal capital controls.

An important fraction of the welfare cost of free capital mobility is accounted for by the transitional dynamics put in motion as policy switches from free capital mobility to Ramsey optimal capital controls. Recall that the peg economy with free capital mobility is on average substantially more indebted than the peg economy with optimal capital controls. The transition from a free-capital-mobility regime to the optimal capital control regime, therefore, requires a significant amount of deleveraging. In turn, deleveraging requires households to temporarily cut consumption of traded goods, making it less enticing to switch from free capital mobility to optimal capital controls. To quantify this transitional effect, one can compute the compensation that equalizes the unconditional expectations of welfare under free capital mobility and optimal capital controls. Formally, this measure, which we denote λ^FCM,U, is given by

    E Σ_{t=0}^∞ β^t U( c_t^FCM [1 + λ^FCM,U/100] ) = E Σ_{t=0}^∞ β^t U(c_t^OCC).
Solving this expression for λ^FCM,U gives

    λ^FCM,U = 100 { [ E(c^OCC)^{1−σ} / E(c^FCM)^{1−σ} ]^{1/(1−σ)} − 1 }.

Notice that the expectation on the left side is computed using the distribution of the state vector under free capital mobility and the expectation on the right side using the distribution under optimal capital controls. This welfare measure is useful for answering the following question: Suppose there are two countries, A and B. Both countries have a fixed exchange rate. Country A has free capital mobility, whereas country B applies optimal capital controls. Not knowing the state of either economy, by how much should the consumption stream in country A be increased for agents to be indifferent between being born in country A or country B? Evaluating this expression yields

    λ^FCM,U = 13.0.

This figure suggests that unconditionally the welfare gains of optimal capital controls are enormous. However, this figure does not capture the welfare gains of switching from free capital mobility to optimal capital controls, for its computation does not take into account the transitional dynamics triggered by the policy switch.

In chapter 8, we identified a negative pecuniary externality afflicting economies with nominal rigidities and fixed exchange rates. In this type of economic environment, private absorption expands too much in response to favorable shocks, causing inefficiently large increases in real wages. No problems are manifested in this phase of the cycle. However, as the economy returns to its path, wages fail to fall quickly enough because they are downwardly rigid. In addition, the central bank, having its hands tied by the commitment to a fixed exchange rate, cannot deflate the real value of wages via a devaluation. In turn, high real wages and a contracting level of aggregate absorption cause involuntary unemployment.
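Both compensating-variation formulas above are mechanical once the value functions (or consumption moments) are in hand. The sketch below uses illustrative inputs only; the figures 3.65 and 13.0 reported in the text come from the full model solution, not from this code.

```python
# Welfare-cost formulas under CRRA utility, transcribed from the text.

def conditional_cost(v_fcm, v_occ, sigma, beta):
    """Lambda^FCM: percent permanent consumption increase equating values."""
    num = v_occ * (1.0 - sigma) + 1.0 / (1.0 - beta)
    den = v_fcm * (1.0 - sigma) + 1.0 / (1.0 - beta)
    return 100.0 * ((num / den) ** (1.0 / (1.0 - sigma)) - 1.0)

def unconditional_cost(c_fcm, c_occ, sigma):
    """lambda^FCM,U from unconditional draws of consumption."""
    m = lambda xs: sum(x ** (1.0 - sigma) for x in xs) / len(xs)
    return 100.0 * ((m(c_occ) / m(c_fcm)) ** (1.0 / (1.0 - sigma)) - 1.0)

sigma, beta = 2.0, 0.96

def v_const(c):
    """Lifetime value of a constant consumption stream (for checking)."""
    return (c ** (1.0 - sigma) - 1.0) / ((1.0 - sigma) * (1.0 - beta))

# Consistency check: a stream that is 5 percent higher should cost 5 percent.
lam = conditional_cost(v_const(1.0), v_const(1.05), sigma, beta)
lam_u = unconditional_cost([1.0, 1.0], [1.05, 1.05], sigma)
```

The check exploits the fact that for constant consumption streams the compensating variation reduces to the ratio of consumption levels, which pins down the answer independently of β and σ.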
Individual agents are conscious of this mechanism, but are too small to internalize it. The government, on the other hand, does internalize the distortion and therefore has an incentive to intervene.

In this chapter, we analyzed the ability of capital controls to ameliorate the distortions introduced by the peg-induced pecuniary externality. We characterized both analytically and numerically the Ramsey optimal capital control policy. We showed that, although capital controls cannot bring about the first-best allocation, they can substantially ease the pains of pegs. Under plausible calibrations of the model economy, the representative household living in the economy with free capital mobility requires a permanent increase in consumption of almost 4 percent to be indifferent between continuing to live in that environment and switching to the Ramsey optimal capital control policy.

An important property of the optimal capital control policy is its prudential nature. The benevolent government taxes capital inflows in good times and subsidizes external borrowing in bad times. As a result, the economy experiences trade surpluses during booms and deficits during recessions. The role of capital controls is to insulate the domestic absorption of tradable goods from external shocks. In this way, the government prevents external disturbances from spilling over into the nontraded sector, where they would otherwise cause unemployment.

Finally, we established that pegging economies are prone to overborrowing. In the calibrated model, the average debt-to-output ratio falls from 21 percent in the economy with free capital mobility to −14 percent in the economy with optimal capital controls. The regulated economy accumulates a war chest of assets in order to be able to stabilize traded consumption when the economy is buffeted by negative external shocks.
Appendix: Equilibrium for t ≥ 1 in Section 9.4

We wish to show that in the economy analyzed in section 9.4, in which the nominal exchange rate is constant and capital controls are set in a Ramsey optimal fashion, the equilibrium allocation features constant values for consumption, debt, hours, and wages for t ≥ 1. Given the parameterization of the economy, the optimal capital control problem for t ≥ 1 can be written as follows:

    max_{ {c_t^T, h_t, w_t, d_{t+1}}_{t=1}^∞ } Σ_{t=1}^∞ β^t [ln c_t^T + α ln h_t]        (9.31)

subject to

    c_t^T + d_t = y^T + d_{t+1}/(1 + r),        (9.32)
    αc_t^T/h_t = w_t,        (9.33)
    h_t ≤ h̄,        (9.34)
    w_t ≥ w_{t−1},        (9.35)
    d_{t+1} ≤ d̄,        (9.36)

given d_1 and w_0.

Consider next the less restrictive problem of maximizing (9.31) subject to (9.32), (9.34), and (9.36). It is straightforward to see that the solution of this problem is c_t^T = c^T* ≡ y^T − [r/(1 + r)] d_1, d_{t+1} = d_1, and h_t = 1 for all t ≥ 1. For this solution to comply with equilibrium condition (9.33), the wage rate must satisfy w_t = αc^T* for all t ≥ 1. In turn, for equilibrium condition (9.35) to hold, we need that

    αc^T* ≥ w_0.        (9.37)

Therefore, if this condition holds, the solution to the less constrained problem is also the solution to the original Ramsey problem. Moreover, if (9.37) holds, then the Ramsey optimal allocation is Pareto optimal (see chapter 8.3.2). And clearly in this case the Ramsey optimal allocation implies constant paths for consumption, debt, hours, and wages.

Now assume that condition (9.37) is not satisfied, that is, assume that

    αc^T* < w_0.        (9.38)

Use equation (9.33) to eliminate h_t from the utility function and all of the constraints. Then, we can rewrite the Ramsey problem as

    max_{ {c_t^T, w_t, d_{t+1}}_{t=1}^∞ } Σ_{t=1}^∞ β^t [(1 + α) ln c_t^T + α ln α − α ln w_t]

subject to (9.32), (9.35), (9.36), and

    w_t ≥ αc_t^T,        (9.40)

given d_1 and w_0. Consider the less restrictive problem consisting of dropping (9.40) from the above maximization problem.
Since the indirect utility function (9.42) is separable in consumption of tradables and wages, and since the only constraint in the less restrictive problem that features wages, namely, equation (9.35), contains neither consumption of tradables nor debt, we can separate the less restricted problem into two independent problems. One is

    max_{ {c_t^T, d_{t+1}}_{t=1}^∞ } Σ_{t=1}^∞ β^t [(1 + α) ln c_t^T + α ln α]

subject to (9.32) and (9.36). The solution of this problem is c_t^T = c^T* and d_{t+1} = d_1 for all t ≥ 1. The second problem is

    max_{ {w_t}_{t=1}^∞ } Σ_{t=1}^∞ β^t [−α ln w_t]

subject to (9.35), given w_0. The solution to this problem is w_t = w_0 for all t ≥ 1. It remains to show that the solutions to these two problems satisfy the omitted constraint (9.40). To see that this is indeed the case, note that w_t = w_0 > αc^T* = αc_t^T, where the inequality follows from (9.38). We have therefore shown that in the Ramsey equilibrium all variables are constant for t ≥ 1.

Exercise 9.1 [Labor Subsidies] Modify the model of section 9.1.1 by assuming that households have preferences of the type given in (8.35). Show that a labor subsidy at the firm level financed by a proportional income tax at the household level (i.e., the fiscal scheme studied in section 9.1.1) can support the Pareto optimal allocation.

Exercise 9.2 [Optimal Lump-Sum Transfers and Hand-to-Mouth Consumers] Consider an economy in which the government can participate in the international financial market but households cannot. Assume further that the only fiscal policy instruments available to the government are lump-sum taxes or transfers. The exchange-rate regime is a currency peg. The government sets the level of external debt and lump-sum taxes or transfers in a Ramsey optimal fashion. All other aspects of the model are as in section 8.1. Show that the equilibrium real allocation is identical to the one obtained in section 9.3 under the Ramsey optimal capital control policy.
Exercise 9.3 [Equivalence Between Capital Controls and Consumption Taxes] Show that the allocation under Ramsey optimal capital controls characterized in section 9.3 can be replicated with a Ramsey optimal consumption tax scheme in which the tax rate on next period's consumption is determined in the current period.

Part III

Financial Frictions

Chapter 10

Overborrowing

Business cycles in emerging countries are characterized by booms and contractions that are larger in amplitude than those observed in developed countries. One possible explanation for this phenomenon is simply that emerging countries are subject to larger shocks. Chapters 4, 6, and 7 are devoted to evaluating this hypothesis. A second explanation holds that emerging countries suffer from more severe economic and political distortions than do developed countries, which amplify the cycles caused by aggregate shocks. The overborrowing hypothesis belongs to this line of thought. It argues that during booms emerging economies borrow and spend excessively (i.e., inefficiently), which exacerbates the expansionary phase of the cycle. During downturns, the argument continues, countries find themselves with too much debt and are forced to engage in drastic cuts in spending, which aggravates the contraction.

We will analyze in some detail two very different theories of overborrowing. One stresses the role of policy credibility. It shows that policies that are in principle beneficial to the economy can have deleterious effects if the policymaker in charge of implementing them lacks credibility. The second theory we will discuss argues that there exist pecuniary externalities in the market for external funds. In this branch of the overborrowing literature, debt limits faced by individual agents depend upon variables that are exogenous to them but endogenous to the economy.
For instance, the willingness of foreign lenders to provide funds to a given emerging economy might depend upon the country's aggregate level of external debt, or on the value of nontradable output. These variables (and therefore the cost of external borrowing) are not controlled by individual agents, but do depend on their collective behavior. This externality can give rise to inefficient borrowing. A common theme in much of the overborrowing literature is the need for regulatory government intervention. We will keep this issue in mind as we present the different theories.¹

Imperfect Policy Credibility

The imperfect-credibility hypothesis is due to Calvo (1986, 1987, 1988). Its basic premise is quite simple. Suppose that the government, possibly with good intentions, announces the removal of a consumption tax. The public, however, believes that the policy will be abandoned after a certain period. That is, they interpret the policy reform as being temporary. As a result, they take advantage of what they perceive to be a temporarily lower consumption tax and increase spending while the policy lasts. In the aggregate, the spending boom is financed by current account deficits. At some point, the euphoria ends, either because the tax cut is abandoned or because agents convince themselves that it is indeed permanent; spending collapses, and the current account experiences a (possibly sharp) reversal. The imperfect-credibility theory of overborrowing can be applied to a variety of policy environments. Calvo (1986), for instance, studies the consequences of a temporary inflation stabilization program. In this case, the consumption tax takes the form of inflation. To visualize inflation as

Footnote 1: We note, however, that some theories argue that regulatory policy may be the cause rather than the remedy to overborrowing.
McKinnon's (1973) model of deposit guarantees, for example, has been intensively used to understand overborrowing in the aftermath of financial liberalization in the Southern Cone of Latin America in the 1970s. In McKinnon's model, deposit guarantees induce moral hazard, as banks tend to undertake immoderately risky projects and depositors have fewer incentives to monitor the quality of banks' loan portfolios. As a result, deposit guarantees open the door to excessive lending and increase the likelihood of generalized bank failures.

a consumption tax, imagine an economy in which households face a cash-in-advance constraint, whereby purchases of consumption goods require holding money. Because inflation erodes the real value of money, it acts as an indirect tax on consumption. Calvo (1987) applies the imperfect-credibility hypothesis to shed light on the dynamics of balance-of-payment crises. Here, the consumption tax also takes the form of inflation, and the lack of credibility stems from the fact that agents disbelieve that the government will be able to cut deficits to a level consistent with low inflation in the long run. The particular application we analyze in this chapter is a trade reform and is due to Calvo (1988). Consider a perfect-foresight economy populated by a large number of infinitely lived households with preferences described by the utility function

    Σ_{t=0}^{∞} β^t U(ct),

where ct denotes consumption in period t, β ∈ (0, 1) denotes a subjective discount factor, and U denotes a period utility function assumed to be increasing and strictly concave. Consumption goods are not produced domestically and must be imported from abroad. Each period, households receive a constant endowment of goods, y > 0. This endowment is not consumed domestically, but can be exported. For simplicity, we assume that the relative price of exportables in terms of importables, the terms of trade, is constant and normalized to unity.
Households start each period with a stock of debt, dht, carried over from the previous period. Debt is denominated in units of importable goods and carries a constant interest rate r > 0. In addition, households receive a lump-sum transfer xt from the government each period. Households use their financial and nonfinancial income to purchase consumption goods, ct, and to increase their asset position. Imports are subject to a proportional tariff τt. The household's sequential budget constraint is then given by

    dht = (1 + r)dht−1 − y − xt + ct(1 + τt).

To prevent Ponzi games, households are subject to the following borrowing constraint:

    lim_{j→∞} dht+j / (1 + r)^j ≤ 0.

The fact that the period utility function is increasing implies that in the optimal plan the no-Ponzi-game constraint must hold with equality. Combining the sequential budget constraint and the no-Ponzi-game constraint holding with equality yields the following intertemporal budget constraint:

    (1 + r)dh−1 = Σ_{t=0}^{∞} (1/(1 + r))^t [y + xt − ct(1 + τt)].

To avoid inessential long-run dynamics, we assume that the subjective and pecuniary discount rates are identical, that is, β(1 + r) = 1. The consumer's problem consists in choosing a sequence {ct}_{t=0}^{∞} to maximize his lifetime utility function subject to this intertemporal budget constraint. Letting λ0 denote the Lagrange multiplier associated with the intertemporal budget constraint, the optimality conditions associated with this problem are the intertemporal budget constraint and

    U′(ct) = λ0(1 + τt).     (10.1)

Note that λ0 is determined in period zero but is constant over time.

The Government

Like households, the government has access to the international financial market. The government's sources of income are import tariffs, τtct, and the issuance of new public debt, dgt − dgt−1. Government spending stems from interest payments, rdgt−1, and lump-sum transfers to households, xt.
The resulting sequential budget constraint of the government is given by

    dgt = (1 + r)dgt−1 − τtct + xt.

The variable τtct − xt represents the primary fiscal surplus in period t, and the variable −rdgt−1 + τtct − xt represents the secondary fiscal surplus. We assume that fiscal policy is such that in the long run the government's debt position does not grow in absolute value at a rate larger than the interest rate. That is, we assume that fiscal policy ensures that

    lim_{t→∞} dgt / (1 + r)^t = 0.

This condition together with the government's sequential budget constraint implies the following intertemporal government budget constraint:

    (1 + r)dg−1 = Σ_{t=0}^{∞} (1/(1 + r))^t (τtct − xt).

This constraint states that the present discounted value of current and future primary fiscal surpluses must equal the government's initial debt position. Let dt ≡ dht + dgt denote the country's net foreign debt position. Then, combining the intertemporal budget constraints of the household and the government yields the following intertemporal economy-wide resource constraint:

    (1 + r)d−1 = Σ_{t=0}^{∞} (1/(1 + r))^t (y − ct).     (10.2)

According to this expression, the present discounted value of the stream of current and future trade surpluses must equal the country's initial net debt position. A competitive equilibrium is a scalar λ0 and a sequence {ct}_{t=0}^{∞} satisfying (10.1) and (10.2), given the initial debt position d−1 and a sequence of import tariffs {τt}_{t=0}^{∞} specified by the government. The focal interest of our analysis is to compare the economic effects of two polar tariff regimes. In one, the government implements a credible permanent tariff reform. In the second, the government implements a temporary tariff reform. We begin with the analysis of a permanent tariff reform.
A Credible Permanent Trade Reform

Suppose that in period 0 the government unexpectedly implements a permanent tariff reform consisting in lowering τt from its initial level, which we denote by τH, to a new level τL < τH. Formally, the time path of τt is given by τt = τL for t ≥ 0. It follows directly from the efficiency condition (10.1) that in this case consumption is constant over time. Let the equilibrium level of consumption be denoted by c. The intertemporal resource constraint (10.2) then implies that c is given by

    c = y − rd−1.

According to this expression, each period households consume their endowment net of interest payments on the country's net external debt position per capita. In turn, the level of external debt per person, dt, is constant over time and equal to d−1. Every period the country generates a trade surplus, y − c, which is just large enough to pay the interest accrued on the external debt. Importantly, the equilibrium level of consumption is independent of the level at which the government sets the import tariff. Any constant tariff path, implemented unexpectedly at time 0, gives rise to the same level of consumption. Similarly, the trade balance, y − c, the net foreign asset position, d−1, and the current account, y − c − rd−1, are all unaffected by the permanent trade liberalization. The intuition behind this result is as follows. A constant tariff is in effect a constant tax on consumption. Since consumption is taxed at the same rate in all periods, households have no incentive to substitute expenditure intertemporally.

A Temporary Tariff Reform

Suppose now that in period 0 the government unexpectedly announces and implements a temporary trade liberalization. The announcement specifies that the tariff is reduced from τH to τL between periods 0 and T − 1, and permanently increased back to τH in period T.
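The permanent-reform level of consumption, c = y − rd−1, can be checked against the intertemporal resource constraint (10.2) numerically. The sketch below uses made-up parameter values (they are not calibrated and do not appear in the text):

```python
# Verify that constant consumption c = y - r*d_minus1 satisfies the
# intertemporal resource constraint (1+r)*d_minus1 = sum_t (1/(1+r))^t (y - c).
# Parameter values are illustrative only.
y, r, d_minus1 = 1.0, 0.04, 0.5

c = y - r * d_minus1

lhs = (1 + r) * d_minus1
# Truncate the infinite sum at a long horizon; the discounted tail is negligible.
rhs = sum((1 / (1 + r)) ** t * (y - c) for t in range(5000))

assert abs(lhs - rhs) < 1e-6
```

The check works for any r > 0 because the trade surplus y − c = rd−1 is a perpetuity whose present value at rate r is exactly (1 + r)d−1.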
Formally, the announced path of τt is given by

    τt = τL for 0 ≤ t ≤ T − 1, and τt = τH for t ≥ T,     (10.3)

with τL < τH. Here, T > 0 denotes the length of the commercial liberalization policy. It is clear from equation (10.1) that consumption is constant over the periods 0 ≤ t ≤ T − 1 and t ≥ T. We can therefore write

    ct = c1 for 0 ≤ t ≤ T − 1, and ct = c2 for t ≥ T,     (10.4)

where c1 and c2 are two scalars determined endogenously. Because the period utility function is concave, it follows from the efficiency condition (10.1) that c1 is greater than c2. Intuitively, households substitute consumption in the low-tariff period for consumption in the high-tariff period. We can then write

    c1 = (1 + κ)c2,

for some κ > 0. The parameter κ is an increasing function of the intertemporal tariff distortion (1 + τH)/(1 + τL). For example, if the period utility function is of the CRRA form, U(c) = c^{1−σ}/(1 − σ), then we have that

    1 + κ = [(1 + τH)/(1 + τL)]^{1/σ}.

Of course, the consumption stream must respect the intertemporal resource constraint (10.2). This restriction implies the following relationship between c1 and c2:

    y − rd−1 = (1 − β^T)c1 + β^T c2.     (10.6)

The left-hand side of equation (10.6) is c, the level of consumption that results under a credible permanent trade reform. We can therefore rewrite (10.6) as

    c = (1 − β^T)c1 + β^T c2.

Because β^T ∈ (0, 1), it follows from this expression that c is a weighted average of c1 and c2. We therefore have that c1 > c > c2. This means that if the economy was at a steady state c before period 0, then the announcement of the temporary tariff reform causes consumption to rise to c1 at time 0, stay at that elevated level until period T − 1, and then fall in period T to a new long-run level c2, which is lower than the pre-reform level of consumption.
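Combining c1 = (1 + κ)c2 with the resource constraint (10.6) pins down both consumption levels in closed form. The sketch below solves for them under CRRA utility; all parameter values are illustrative choices, not numbers from the text:

```python
# Solve for the boom-phase consumption c1 and post-reform consumption c2
# under CRRA utility, using c1 = (1+kappa)*c2 together with the constraint
# y - r*d_minus1 = (1-beta^T)*c1 + beta^T*c2. Parameters are illustrative.
y, r, d_minus1 = 1.0, 0.04, 0.0
beta = 1 / (1 + r)            # beta*(1+r) = 1, as assumed in the text
sigma = 2.0                   # CRRA coefficient
tau_H, tau_L = 0.30, 0.10     # pre- and post-reform tariffs
T = 10                        # length of the liberalization

kappa = ((1 + tau_H) / (1 + tau_L)) ** (1 / sigma) - 1
c = y - r * d_minus1          # consumption under a permanent reform

c2 = c / ((1 - beta**T) * (1 + kappa) + beta**T)
c1 = (1 + kappa) * c2

assert c1 > c > c2            # the boom-bust pattern derived in the text
```

Raising the tariff gap τH − τL or shortening T raises κ and widens the gap between c1 and c2, while the discounted average of the two always equals c.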
The temporary tariff reform induces households to engage in a consumption path that has the same present discounted value as the one associated with a permanent tariff, but is less smooth. Therefore, because households have concave preferences, it must be the case that the time-varying tariff regime results in lower welfare than the constant-tariff regime. To state this more formally, consider the problem of a benevolent social planner trying to design a tariff policy that maximizes welfare subject to the constraint that the policy must belong to the family defined in (10.3). The planner's problem is to choose scalars c1 and c2 to maximize the household's lifetime utility function evaluated at the consumption path given in (10.4), subject to the lifetime resource constraint (10.6). That is, the planner's problem is:²

    max_{c1,c2} (1 − β^T)U(c1) + β^T U(c2)

subject to

    −rd−1 + y = (1 − β^T)c1 + β^T c2.

Because the period utility function U(·) is assumed to be strictly concave, the objective function is strictly concave in c1 and c2. Also, because the constraint is linear in c1 and c2, it describes a convex set of feasible pairs (c1, c2). It follows that the first-order conditions of this problem are necessary and sufficient for a maximum. These conditions are:

    U′(c1) = λ and U′(c2) = λ,

where λ denotes the Lagrange multiplier on the constraint. Clearly, the solution to this problem is c1 = c2. It then follows immediately from (10.1) that the set of tariff schemes that implements the solution to the planner's problem satisfies τL = τH. Consequently, a temporary trade liberalization policy of the type studied here, with τL < τH, is welfare dominated by a constant-tariff regime.

Footnote 2: We omit the multiplicative factor (1 − β)^{−1} from the objective function because it is a constant and therefore does not affect the solution.
Consider the behavior of the trade balance, the current account, and the net foreign asset position induced by the temporary commercial liberalization policy. Assume for simplicity that the initial net debt position is nil (d−1 = 0). In the pre-reform equilibrium, t < 0, agents expect tariffs to be constant forever. Thus, we have that before the trade reform consumption equals output, c = y, the trade balance is zero, y − c = 0, and so is the current account, y − c − rd−1 = 0. Recalling that c1 > c = y > c2, we have that for t ≥ 0 the path of the trade balance, which we denote by tbt, is given by

    tbt = y − c1 < 0 for 0 ≤ t < T, and tbt = y − c2 > 0 for t ≥ T.

Thus, the initial consumption boom causes a trade balance deficit that lasts for the duration of the tariff cut. When the policy is abandoned, consumption falls and the trade balance displays a sharp reversal from deficit to surplus. The evolution of the net foreign debt position is given by

    dt = (c1 − y) Σ_{j=0}^{t} (1 + r)^j > 0, for 0 ≤ t < T.

This expression states that the foreign asset position is negative and deteriorates over time at an increasing rate until period T. The current account, denoted by cat and given by the change in the asset position, −(dt − dt−1), evolves according to the following expression:

    cat = (y − c1)(1 + r)^t, for 0 ≤ t < T.

According to this expression, the current account is negative and deteriorates exponentially. The paths of the current account and the asset position are unsustainable in the long run. Therefore, in period T the household cuts consumption to a point at which the external debt stops growing and the current account experiences a sudden improvement to a balanced position. Formally, we have

    dt = dT−1 for t ≥ T, and cat = 0 for t ≥ T.

The level of consumption that results after the demise of the trade reform, c2, therefore satisfies

    dT−1 = (1 + r)dT−1 + c2 − y.     (10.7)

Solving for c2, we obtain

    c2 = y − rdT−1 < y − rd−1 = y = c.
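The closed-form paths for debt and the current account can be verified by iterating the sequential budget constraint directly. The sketch below uses illustrative values for c1, c2, and the parameters (they are not taken from the text):

```python
# Simulate the external accounts during the boom phase (periods 0..T-1) of a
# temporary tariff cut and verify the closed-form expressions for debt and
# the current account. All numbers are illustrative.
r, T = 0.04, 10
y = 1.0
c1 = 1.06                     # boom-phase consumption, c1 > y

d_prev = 0.0                  # d_{-1} = 0, as assumed in the text
debt_path, ca_path = [], []
for t in range(T):
    d = (1 + r) * d_prev + c1 - y        # sequential budget constraint
    ca_path.append(-(d - d_prev))        # current account = -(d_t - d_{t-1})
    debt_path.append(d)
    d_prev = d

# Closed forms from the text: d_t = (c1-y)*sum_{j=0}^t (1+r)^j and
# ca_t = (y-c1)*(1+r)^t for 0 <= t < T.
for t in range(T):
    d_closed = (c1 - y) * sum((1 + r) ** j for j in range(t + 1))
    assert abs(debt_path[t] - d_closed) < 1e-12
    assert abs(ca_path[t] - (y - c1) * (1 + r) ** t) < 1e-12
```

The simulation reproduces the pattern in the text: debt is positive and grows at an increasing rate, while the current account deficit widens exponentially until period T.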
The inequality follows from the fact that dT−1 is positive (the country holds net external debt) while d−1 was assumed to be zero. The above expression shows, again, that after the collapse of the trade reform consumption falls below its pre-reform level. The reason why consumption must fall so sharply is that the initial boom in private consumption is entirely financed by external borrowing. As a result, in the new steady state the country must redirect resources away from consumption and toward servicing the increased external debt. The model described in this section is one of overborrowing in the sense that it delivers an initial phase (0 ≤ t < T) in which households embark on a socially inefficient spending spree, with external debt growing at an increasing rate and the external accounts displaying widening imbalances. Note that in this model overborrowing ends in a sudden stop in period T. In this period, the country experiences a reversal of the current account and a sharp contraction in domestic absorption. It is worth noting that in this particular model the terms 'temporariness' and 'imperfect credibility' are equivalent in a specific sense. To illustrate this idea, suppose that instead of announcing a temporary tariff cut, in period 0 the government announces a permanent tariff cut, but the public disbelieves the announcement. Specifically, the government announces a permanent tariff reduction from τH to τL, but the public believes that the tariff will return to τH in period T. Clearly, the dynamics between periods 0 and T − 1 are identical to the ones associated with the temporary trade liberalization. This is because consumption decisions during the period 0 ≤ t < T are based on expectations of a future tariff increase, regardless of whether the government explicitly announces it or the public simply believes it will take place.
Suppose now that in period T, contrary to the public's expectations, the government maintains the trade reform (τt = τL for t ≥ T), and, moreover, that the policy becomes credible to the public. From the point of view of period T, the situation is one in which the tariff is constant forever. It follows from our previous analysis that consumption must be constant from t = T on. A constant level of consumption can only be sustained by a constant level of assets.³ Thus, consumption must satisfy dT−1 = (1 + r)dT−1 − y + c, where c is the level of consumption prevailing for t ≥ T. Comparing this expression with (10.7), it follows that c must equal c2. This result establishes that the equilibrium path of consumption is identical whether the policy is temporary or permanent but imperfectly credible. The equivalence between temporariness and lack of credibility is not always valid. Exercise 4 illustrates a case in which lack of credibility can be much more costly than temporariness. The exercise augments the model discussed in this section to allow for nontradable goods and downward nominal wage rigidity. In this setting, changes in the import tariff distort the price of nontradables relative to tradables. The difference between temporariness and lack of credibility arises in period T. In this period, given the relative price of nontradables, the demand for nontradables collapses under both temporariness and lack of credibility.

Footnote 3: To see this, note that the difference equation dt = (1 + r)dt−1 − y + c, for t ≥ T, implies a path of dt that converges to plus or minus infinity at a rate larger than r (implying a violation of the no-Ponzi-game constraint in the first case and a suboptimal accumulation of wealth in the second) unless c is such that (1 + r)dT−1 − y + c equals dT−1. That is, a constant path of consumption is sustainable over time only if the asset position is also constant over time.
This contraction in the desired demand for nontradables is driven by a negative wealth effect caused by the overborrowing that takes place between periods 0 and T. Under temporariness, the tariff increase that takes place in period T makes nontradables relatively cheaper, generating a positive substitution effect which tends to offset the aforementioned negative wealth effect. By contrast, under lack of credibility the tariff never increases, since the trade liberalization policy was indeed permanent. As a result, the offsetting substitution effect does not take place, and the demand for nontradables falls more sharply than under policy temporariness. Because wages are downwardly rigid, the contraction in the demand for nontradables causes unemployment. Since the demand shift is larger under lack of credibility, the increase in unemployment is more pronounced under lack of credibility than under temporariness.

Financial Externalities

We now study a class of models in which individual agents face a financial friction that depends upon an endogenous variable that individuals take as exogenous. In this class of models there is an externality, because individual agents do not internalize the fact that their behavior collectively determines the strength of the financial friction. The financial friction typically takes the form of a collateral constraint imposed by foreign lenders. And the variable in question takes various forms. It could be a stock, such as external debt per capita; a flow, such as output; or a relative price, such as the price of real estate in terms of consumption goods. A theme of this section is that the nature of this variable is a key determinant of whether the model will or will not generate overborrowing.

The No Overborrowing Result

Imagine a theoretical environment in which foreign lenders impose a borrowing constraint on emerging countries.
Consider the following two alternative specifications of such a constraint: dt ≤ d and Dt ≤ d, where dt denotes the individual household's net external debt in period t, Dt denotes the average net external debt across all households in the economy in period t, and d is a constant. The key difference between these two formulations is that the consumer internalizes the first borrowing constraint, because it involves his own level of debt, but does not internalize the second one, because it features the cross-sectional average of debt, which is out of his control. These two specifications are meant to capture two distinct lending environments: one in which foreign lenders base their loan decisions on each borrower's capacity to repay, and one in which foreign lenders regard the emerging country as a single investment opportunity and look only at aggregate variables in deciding whether to lend or not. The question is whether the two formulations give rise to different equilibrium distributions of debt, and in particular whether the one based on aggregate variables delivers more frequent crises (i.e., episodes in which the debt limit binds). The present analysis is based on Uribe (2006). Consider first the case of an aggregate borrowing constraint. The economy is assumed to be populated by a large number of households with preferences described by the utility function

    Σ_{t=0}^{∞} θt U(ct, ht),     (10.8)

where ct denotes consumption, ht denotes hours worked, and θt denotes a subjective discount factor. All of the results in this chapter go through for any specification of θt that is exogenous to the household. For concreteness, we adopt the specification studied in section 4.6.3 and assume that θt+1 = β(Ct, Ht)θt for t ≥ 0, with θ0 = 1. The variables Ct and Ht denote, respectively, the cross-sectional averages of consumption and hours, and the function β is assumed to be decreasing in its first argument and increasing in its second argument.
Households take the evolution of Ct and Ht as exogenous. In equilibrium, we have that Ct = ct and Ht = ht, because all households are assumed to be identical. Output is produced using hours as the sole input with the technology e^{zt} F(k∗, ht), where zt is a productivity shock assumed to be exogenous and stochastic, and k∗ is a fixed factor of production, such as land. The function F is assumed to be increasing, concave, and to satisfy the Inada conditions. Households have access to a single risk-free bond denominated in units of consumption that pays the interest rate rt when held between periods t and t + 1. The interest rate rt is country specific and may differ from the world interest rate, which we assume to be constant and denote by r. In the present model, the country premium, given by rt − r, is endogenously determined. The sequential budget constraint is then given by

    dt = (1 + rt−1)dt−1 + ct − e^{zt} F(k∗, ht).

In addition, households face a no-Ponzi-game constraint of the form

    lim_{j→∞} d_{t+j} / Π_{s=0}^{j−1} (1 + r_{t+s}) ≤ 0

for all t. The household chooses processes {ct, ht, dt} to maximize its utility function subject to the sequential budget constraint and the no-Ponzi-game constraint. The first-order conditions of this problem are the sequential budget constraint, the no-Ponzi-game constraint holding with equality, and

    −Uh(ct, ht)/Uc(ct, ht) = e^{zt} Fh(k∗, ht),     (10.10)

    Uc(ct, ht) = β(Ct, Ht)(1 + rt) Et Uc(ct+1, ht+1).

Debt accumulation is subject to the following aggregate constraint:

    Dt ≤ d.

Because all agents are identical, we have that in equilibrium Dt must equal dt. When the borrowing constraint does not bind (dt < d), domestic agents borrow less than the total amount of funds foreign lenders are willing to invest in the domestic economy. As a result, in this case the domestic interest rate equals the world interest rate (rt = r). By contrast, when the borrowing constraint binds, households compete for a fixed amount of loans d.
At the world interest rate r, the demand for loans exceeds d, and consequently the domestic interest rate rt rises to a point at which everybody is happy holding exactly d units of loans. The quantity (rt − r)dt represents a pure financial rent. It is important to specify who appropriates this rent. There are two polar cases. In one, the financial rent is appropriated by domestic banks, which then distribute it to households in a lump-sum fashion. In this case, the emergence of rents does not represent a loss of resources for the country. In the second polar case, the rent is appropriated by foreign banks. In this case, the financial rent generates a resource cost for the domestic economy. Intermediate cases are also possible. We first study the case in which the financial rent is appropriated domestically. A stationary competitive equilibrium in the economy with an aggregate borrowing constraint is a set of processes {dt, rt, ct, ht} satisfying the following conditions:

    −Uh(ct, ht)/Uc(ct, ht) = e^{zt} Fh(k∗, ht),     (10.11)

    Uc(ct, ht) = β(ct, ht)(1 + rt) Et Uc(ct+1, ht+1),     (10.12)

    dt = (1 + r)dt−1 + ct − e^{zt} F(k∗, ht),     (10.13)

    dt ≤ d,     (10.14)

    rt ≥ r,     (10.15)

    (dt − d)(rt − r) = 0.     (10.16)

The last of these expressions is a slackness condition stating that if the borrowing constraint is not binding then the domestic interest rate must equal the world interest rate, and that if the domestic interest rate is strictly above the world interest rate then the borrowing constraint must be binding. Note that the resource constraint (10.13) features the world interest rate r and not the domestic interest rate rt. This is because we are assuming that the financial rent that emerges when the borrowing constraint is binding stays within the country. Consider now an environment in which the borrowing constraint is imposed at the level of the individual household.
In this case, the household's optimization problem consists in maximizing the utility function (10.8) subject to the constraints

    dt = (1 + r)dt−1 + ct − e^{zt} F(k∗, ht)

and

    dt ≤ d.

In this environment, households explicitly take the borrowing limit into account. Letting θtλt and θtλtµt denote the Lagrange multipliers on the sequential budget constraint and the borrowing constraint, respectively, we have that the optimality conditions associated with the household's problem are the above two constraints, the labor efficiency condition (10.10), and

    Uc(ct, ht) = λt,

    λt − λtµt = β(Ct, Ht)(1 + rt) Et λt+1,

    µt ≥ 0,

    (dt − d)µt = 0.

The first and second of these expressions state that in periods in which the borrowing constraint binds (i.e., when µt > 0), the marginal utility of an extra unit of debt is not given by the marginal utility of consumption Uc(ct, ht), but by the smaller value Uc(ct, ht)(1 − µt). The reason behind this is that in this case an extra unit of debt tightens the borrowing constraint even further and therefore carries a shadow punishment, given by Uc(ct, ht)µt utils. A competitive equilibrium in the economy with an individual borrowing constraint is a set of processes {dt, µt, ct, ht} satisfying the following conditions:

    −Uh(ct, ht)/Uc(ct, ht) = e^{zt} Fh(k∗, ht),     (10.17)

    Uc(ct, ht)(1 − µt) = β(ct, ht)(1 + r) Et Uc(ct+1, ht+1),     (10.18)

    dt = (1 + r)dt−1 + ct − e^{zt} F(k∗, ht),     (10.19)

    dt ≤ d,     (10.20)

    µt ≥ 0,     (10.21)

    (dt − d)µt = 0.     (10.22)

We wish to establish that the equilibrium behavior of external debt, consumption, and hours is identical in the economy with an aggregate borrowing constraint and in the economy with an individual borrowing constraint. That is, we want to show that the processes {dt, ct, ht} implied by the system (10.11)-(10.16) are identical to the ones implied by the system (10.17)-(10.22).
To this end, we will show that by performing a change of variables, the system (10.17)-(10.22) can be written exactly like the system (10.11)-(10.16). Define the shadow interest rate r̃t as

    1 + r̃t = (1 + r)/(1 − µt).

Note that µt must be nonnegative and less than unity. This last property follows from the fact that, from equation (10.18), a value of µt greater than or equal to unity would imply that the marginal utility of consumption Uc(ct, ht) is infinite or negative. It follows that r̃t is greater than or equal to r. Furthermore, it is straightforward to see that µt > 0 if and only if r̃t > r, and that µt = 0 if and only if r̃t = r. We can therefore write the system (10.17)-(10.22) as

    −Uh(ct, ht)/Uc(ct, ht) = e^{zt} Fh(k∗, ht),     (10.24)

    Uc(ct, ht) = β(ct, ht)(1 + r̃t) Et Uc(ct+1, ht+1),     (10.25)

    dt = (1 + r)dt−1 + ct − e^{zt} F(k∗, ht),     (10.26)

    dt ≤ d,     (10.27)

    r̃t ≥ r,     (10.28)

    (dt − d)(r̃t − r) = 0.     (10.29)

The systems (10.11)-(10.16) and (10.24)-(10.29) are identical, and must therefore deliver identical equilibrium processes for dt, ct, and ht. This result demonstrates that whether the borrowing constraint is imposed at the aggregate or the individual level, the real allocation is the same. In other words, the imposition of the borrowing constraint at the aggregate level generates no overborrowing. What happens is that the market interest rate in the economy with the aggregate borrowing limit conveys exactly the same signal as the Lagrange multiplier µt in the economy with the individual borrowing constraint.

Resource Costs

When rents from financial rationing are appropriated by foreign lenders, the equilibrium conditions of the economy with the aggregate borrowing constraint are as before, except that the resource constraint becomes

    dt = (1 + rt−1)dt−1 + ct − e^{zt} F(k∗, ht).
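The change of variables 1 + r̃t = (1 + r)/(1 − µt) maps the multiplier on the individual borrowing constraint into a shadow country premium, and is invertible on µt ∈ [0, 1). A minimal sketch (with an illustrative world rate r, not a calibrated value):

```python
# Map the borrowing-constraint multiplier mu into the shadow rate
# 1 + r_tilde = (1+r)/(1-mu), and back. r is illustrative.
r = 0.04

def shadow_rate(mu: float) -> float:
    """Shadow interest rate implied by a multiplier mu in [0, 1)."""
    return (1 + r) / (1 - mu) - 1

def multiplier(r_tilde: float) -> float:
    """Inverse map: recover mu from a shadow rate r_tilde >= r."""
    return 1 - (1 + r) / (1 + r_tilde)

# A slack constraint (mu = 0) implies no premium; a binding constraint
# (mu > 0) implies a strictly positive premium, as the text argues.
assert abs(shadow_rate(0.0) - r) < 1e-12
assert shadow_rate(0.02) > r

mu = 0.05
assert abs(multiplier(shadow_rate(mu)) - mu) < 1e-12
```

This is the sense in which the market interest rate in the aggregate-limit economy and the multiplier µt in the individual-limit economy carry the same signal: one is a monotone transformation of the other.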
The fact that the domestic interest rate, rt ≥ r, appears in the resource constraint implies that when the borrowing constraint is binding and rt > r, the country as a whole loses resources in the amount (rt − r)dt. These resources represent pure rents paid to foreign lenders. When rents are appropriated by foreign lenders, it is no longer possible to compare analytically the dynamics of external debt in the economy with the aggregate debt limit and in the economy with the individual debt limit. We therefore resort to numerical methods to characterize competitive equilibria. Preferences and technologies are parameterized as follows:

    U(c, h) = [c − ω^{−1}h^ω]^{1−σ}/(1 − σ),  β(c, h) = [1 + c − ω^{−1}h^ω]^{−ψ},  F(k∗, h) = k∗^α h^{1−α},

where σ, ω, ψ, k∗, and α are fixed parameters. Table 10.1 displays the values assigned to these parameters. The time unit is meant to be one year. The values for α, ω, σ, and r are taken from Schmitt-Grohé and Uribe (2003). The parameter ψ is set to induce a debt-to-GDP ratio, d/y, of 50 percent in the deterministic steady state.

Table 10.1: Parameter Values

    σ            2
    ω            1.455
    ψ            0.0222
    α            0.32
    r            0.04
    κ            7.83
    k∗           78.3
    π11 = π22    0.71
    z1 = −z2     0.0258

The calibrated value of κ is such that in the economy without the debt limit, the probability that dt is larger than κ is about 15 percent. We set the level of the fixed factor of production k∗ so that its market price in terms of consumption goods in the deterministic steady state is unity. The productivity shock is assumed to follow a two-state symmetric Markov process with mean zero. Formally, zt takes on values from the set {z1, z2} with transition probability matrix Π. We assume that z1, z2, and Π satisfy z1 = −z2 and π11 = π22. We set π11 equal to 0.71 and z1 equal to 0.0258. This process displays the same serial correlation (0.58) and twice as large a standard deviation (2.58 percent) as the one estimated for Canada by Mendoza (1991).
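With the GHH period utility and Cobb-Douglas technology above, the labor efficiency condition (10.10) reduces to h^{ω−1} = e^z(1 − α)k∗^α h^{−α}, which can be solved in closed form for hours. The back-of-the-envelope computation below evaluates it at z = 0 and the Table 10.1 values; it is an illustrative check, not the authors' solution code:

```python
# Hours and output implied by the labor efficiency condition
# h^(omega-1) = (1-alpha)*kstar^alpha*h^(-alpha) under GHH utility and
# Cobb-Douglas technology, at z = 0 and the Table 10.1 parameter values.
omega, alpha, kstar = 1.455, 0.32, 78.3

# Solve h^(omega-1+alpha) = (1-alpha)*kstar^alpha in closed form.
h = ((1 - alpha) * kstar**alpha) ** (1 / (omega - 1 + alpha))
y = kstar**alpha * h**(1 - alpha)   # output F(kstar, h) at z = 0

# With GHH preferences, -Uh/Uc = h^(omega-1) (wealth effects on labor supply
# drop out); verify it equals the marginal product of labor at the solution.
mrs = h ** (omega - 1)
mpl = (1 - alpha) * kstar**alpha * h**(-alpha)
assert abs(mrs - mpl) < 1e-9
```

A convenient feature of this preference specification is that hours depend only on the current wage (here, the marginal product of labor), which keeps the closed form valid state by state.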
The choice of a process for the productivity shock that is twice as volatile as the one observed in a developed small open economy like Canada reflects the view that the most salient distinction between business cycles in developed and developing countries is, as argued in chapter 1, that the latter are about twice as volatile as the former. The model is solved using the Chebyshev parameterized expectations method. The state space is discretized using 1000 points for the stock of debt, dt. The parameterization of expectations uses 50 coefficients. We compute the equilibrium for three model economies: an economy with no debt limit, an economy with a debt limit and financial rents accruing to domestic residents, and an economy with a debt limit and financial rents flowing abroad.⁴ The Matlab code that implements the numerical results reported in this section is available at http://www.columbia.edu/~mu2166/overborrowing/overborrowing.html.

⁴ The procedure approximates the equilibrium with reasonable accuracy. The DenHaan-Marcet test for 5-percent left and right tails yields (0.047, 0.046) for the economy without a debt limit, (0.043, 0.056) for the economy with a debt limit and rents owned domestically, and (0.048, 0.056) for the economy with a debt limit and rents flowing abroad. This test was conducted using 1000 simulations of 5000 years each, dropping the first 1000 periods.

Figure 10.1: Equilibrium Distribution of External Debt. (The figure plots three debt distributions, labeled No Resource Costs, Resource Costs, and No Debt Limit, against external debt.)

Figure 10.1 displays with a solid line the equilibrium probability distribution of external debt in the economy with an aggregate debt limit and financial rents from rationing accruing to domestic agents. According to the no-overborrowing result obtained earlier, this economy is identical to the one with a household-specific debt limit.
The figure shows with a dash-crossed line the distribution of debt in the economy with an aggregate debt limit and financial rents accruing to foreign lenders. As a reference, the figure also displays, with a dashed line, the debt distribution in an economy without a debt limit (except, of course, for the natural debt limit that prevents Ponzi schemes). The main result conveyed by the figure is that the no-overborrowing result is robust to allowing for financial rents to belong to foreign lenders. Specifically, the distribution of debt is virtually unaffected by whether financial rents are assumed to flow abroad or stay within the country's limits.⁵

The reason behind this result is that the resource cost incurred by the economy when financial rents belong to foreigners is fairly small, about 0.008 percent of annual GDP. This implication, in turn, is the result of two properties of the equilibrium dynamics. First, the economy seldom hits the debt limit. The debt constraint binds on average less than once every one hundred years. This is because agents engage in precautionary saving to mitigate the likelihood of finding themselves holding too much debt in periods in which the interest rate is above the world interest rate. Second, when the debt limit does bind, it produces a country interest-rate premium of less than 2 percent on average, and the external debt is about 40 percent of GDP when the economy hits the debt limit. This observation implies that the average cost of remitting financial rents abroad is less than 0.008 = 40 × 0.02 × 100⁻¹ percent of GDP per year.

The Role of Asset Prices

Thus far, we have limited attention to a constant debt limit.

⁵ This no-overborrowing result is robust to imposing a more stringent debt limit. We experimented lowering the value of κ by 25 percent, from 7.8 to 5.9. This smaller value of the debt limit is such that in the unconstrained
economy the probability that dt is larger than κ is about 30 percent. Under this parameterization, we continue to find no overborrowing. Specifically, the debt distribution in the economy with an aggregate borrowing limit and rents accruing to foreign lenders is virtually identical to the distribution of debt in the economy with an aggregate debt limit and rents accruing to domestic households, which, as demonstrated earlier, is identical to the debt distribution in the economy with an individual borrowing limit.

In practice, debt limits take the form of collateral constraints limiting the size of debt to a fraction of the market value of an asset, such as land or structures. Theoretically, this type of borrowing limit has been shown to help explain observed macroeconomic dynamics during sudden stops, as it tends to exacerbate the contractionary effects of negative aggregate shocks. This is because states in which the collateral constraint is binding are associated with sharp declines in stock prices and fire sales of collateral (see, for example, Mendoza, 2010). The central question for the issues being analyzed in this section is whether these fire sales are more or less severe when the collateral constraint is imposed at an aggregate level as opposed to at the level of the individual borrower. To model a time-varying collateral constraint, assume that output is produced via a homogeneous-of-degree-one function F that takes labor and land as inputs. Formally,

yt = e^zt F(kt, ht).

Suppose further that the aggregate per capita supply of land is fixed and given by k* > 0. Unlike in the previous section, here we will assume that there is a market for land. Let qt denote the market price of land in terms of consumption goods. The sequential budget constraint of the household is given by

dt = (1 + rt−1)dt−1 + ct + qt(kt+1 − kt) − e^zt F(kt, ht).   (10.30)

Note that in period t, the household's land holdings, kt, are predetermined.
Each period t ≥ 0, the household chooses the amount of land, kt+1, that it will use in production in period t + 1. Consider first the case in which foreign investors impose a collateral constraint at the country level of the form

Dt ≤ κ qt k*.

Individual households do not internalize this constraint. The household's optimization problem consists in choosing processes {ct, ht, dt, kt+1} to maximize the utility function (10.8) subject to the sequential budget constraint (10.30) and the no-Ponzi-game constraint (10.9). The first-order conditions associated with this optimization problem with respect to consumption, hours, and debt are identical to those of its no-land-market counterpart. In the present environment, however, there is an additional first-order condition. It is an Euler equation associated with the use of land. This condition, evaluated at equilibrium quantities, is given by

qt = Et {Λt,t+1 [qt+1 + e^(zt+1) Fk(k*, ht+1)]},   (10.31)

where

Λt,t+j ≡ [∏_{s=0}^{j−1} β(ct+s, ht+s)] · Uc(ct+j, ht+j)/Uc(ct, ht)   (10.32)

is a stochastic discount factor given by the equilibrium marginal rate of consumption substitution between periods t and t + j. Iterating this expression forward yields

qt = Et ∑_{j=1}^{∞} Λt,t+j e^(zt+j) Fk(k*, ht+j).   (10.33)

Intuitively, this expression states that the price of land equals the present discounted value of its future expected marginal products. A stationary competitive equilibrium with an aggregate collateral constraint and financial rents accruing domestically is given by a set of stationary stochastic processes {dt, rt, ct, ht, qt, Λt,t+1} (t ≥ 0) satisfying (10.11)-(10.13), (10.15), (10.31), (10.32), and

dt ≤ κ qt k*,   (10.34)

(rt − r)(dt − κ qt k*) = 0.   (10.35)

Consider now the case in which the collateral constraint is imposed at the level of each borrower. That is,

dt ≤ κ qt kt+1.

The household internalizes this borrowing limit.
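Before turning to the internalized case, note a minimal special case of the pricing condition (10.33): in a deterministic steady state the consumption Euler equation implies β(·) = 1/(1 + r), so Λt,t+j = (1 + r)^(−j) and the land price collapses to q = Fk/r. The sketch below uses illustrative values (Fk = r = 0.04, chosen to match the calibration target of a unit steady-state land price) and checks the truncated sum against the closed form:

```python
# Deterministic-steady-state special case of the land-pricing condition:
# with Lambda_{t,t+j} = (1/(1+r))^j and a constant marginal product Fk,
# q = sum_{j>=1} Fk/(1+r)^j = Fk/r.  Illustrative values only.
r, Fk = 0.04, 0.04

q_closed_form = Fk / r                                   # geometric-series limit
q_truncated = sum(Fk / (1 + r) ** j for j in range(1, 2001))

# The calibration targets q = 1 in steady state, which requires Fk = r.
assert abs(q_closed_form - 1.0) < 1e-12
assert abs(q_truncated - q_closed_form) < 1e-9           # truncation error is tiny
```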
However, as first noted by Auernheimer and García-Saltos (2000), the presence of the price of land on the right-hand side of this borrowing constraint introduces an externality. This is because the individual household, being a price taker, takes as exogenous the evolution of the price of land, qt. In this case, all external loans are extended at the world interest rate r. The pricing equation for land takes the form

qt {1 − κ [1/(1 + r) − 1/(1 + r̃t)]} = Et {Λt,t+1 [qt+1 + e^(zt+1) Fk(k*, ht+1)]},   (10.36)

where r̃t ≥ r denotes the shadow interest rate as defined by equation (10.23). Iterate this expression forward to obtain

qt = Et ∑_{j=1}^{∞} Λt,t+j e^(zt+j) Fk(k*, ht+j) / ∏_{s=0}^{j−1} {1 − κ [1/(1 + r) − 1/(1 + r̃t+s)]}.

Comparing this expression with its counterpart in the economy with an aggregate borrowing constraint (equation (10.33)), we observe that the fact that the shadow value of collateral, given by 1/(1 + r) − 1/(1 + r̃t), is nonnegative implies, ceteris paribus, that the individual agent discounts future marginal products of land less heavily in the economy with the internalized borrowing constraint than in the economy with an aggregate borrowing constraint. This is because when the collateral constraint is internalized the individual agent values the financial service provided by land, namely, collateral. Note that the above expression for qt does not state that the price of land should be higher in periods in which the shadow interest rate r̃t exceeds r. Indeed, we will see shortly that the real value of land falls dramatically during such periods. The above expression does state that when r̃t is larger than r, all other things equal, the value of land is likely to be higher in the economy in which the collateral constraint is internalized than in an economy in which it is not.
In turn, the fact that in the economy with an internalized collateral constraint the value of collateral is higher than it is in an economy with an aggregate collateral constraint suggests that borrowing is less limited when the collateral constraint is internalized. Thus, intuitively one should not expect the no-overborrowing result of the previous section to be overturned by the introduction of a time-varying collateral constraint of the type considered in this section. A stationary competitive equilibrium with an individual collateral constraint and financial rents accruing domestically is given by a set of stationary stochastic processes {ct, ht, dt, r̃t, qt, Λt,t+1} satisfying (10.24)-(10.26), (10.28), (10.32), (10.34), (10.36), and

(r̃t − r)(dt − κ qt k*) = 0.

The only difference between the equilibrium conditions of the economy with an aggregate collateral constraint and those of the economy with an individual collateral constraint is the Euler condition for land (equation (10.31) versus equation (10.36)). To ascertain whether the imposition of an aggregate collateral constraint induces external overborrowing, we compute equilibrium dynamics numerically. I calibrate the economy as in the previous subsection, except for the parameter κ, which now takes the value 0.1.⁶ The model is solved using the Chebyshev parameterized expectations method.⁷ The top-left panel of figure 10.2 displays the unconditional distribution of external debt. A solid line corresponds to the economy with an individual collateral constraint, and a dashed line corresponds to the economy with an aggregate collateral constraint. The distribution of debt is virtually identical in the economy with an individual collateral constraint and in the economy with an aggregate collateral constraint.
⁶ In the deterministic steady state we have that qt = 1, so that κ qt k* = 7.83, which is the value assigned to κ in the economy with the constant debt limit.

⁷ The DenHaan-Marcet test for 5-percent left and right tails yields (0.043, 0.061) for the economy with an individual collateral constraint, and (0.048, 0.06) for the economy with an aggregate collateral constraint. This test is conducted using 5000 simulations of 5000 years each, dropping the first 1000 periods. The Matlab code that implements the numerical results reported in this section is available at http://www.columbia.edu/~mu2166/overborrowing/overborrowing.html.

Figure 10.2: Equilibrium Under a Time-Varying Collateral Constraint. (Four panels plot the unconditional distribution of debt, average stock prices, average consumption, and the average interest rate against external debt. Note: 'Indiv CC' stands for Individual Collateral Constraint, and 'Agg CC' stands for Aggregate Collateral Constraint.)

Similarly, as shown in the top-right and bottom-left panels of the figure, whether the collateral constraint is imposed at the individual or the aggregate level appears to make no difference for the equilibrium dynamics of stock prices or consumption. Note that when the stock of debt is high, agents engage in fire sales of land, resulting in sharp declines in its market price, qt. But the collapse of land prices is quantitatively similar in the two economies. The contraction in real estate prices is caused by the increase in interest rates as the economy approaches the debt limit. Formally, the fire sale of land is driven by a drop in the stochastic discount factor Λt,t+1.
When Λt,t+1 falls, future expected marginal products of land are discounted more heavily, depressing the value of the asset. To see that in a crisis the stochastic discount factor falls, note that the Euler equation for bond holdings is given by

1 = (1 + rt) Et Λt,t+1,

when the borrowing constraint is imposed at the aggregate level. When it is imposed at the individual level, r̃t replaces rt. During a crisis, rt and r̃t increase, generating expectations of a decline in Λt,t+1. Finally, in line with the intuition developed earlier in this section, land prices are indeed higher in the economy with an individual debt limit, but this difference is quantitatively small. The fact that land prices are slightly higher in the economy with the individual borrowing limit means that in this economy the value of collateral is on average higher than in the economy with an aggregate borrowing limit. This allows households in the economy with the individual borrowing limit to actually hold on average more debt than households in the economy with an aggregate borrowing limit (see the top-left panel of figure 10.2). It follows that the current model predicts underborrowing. The intuition behind this result is that in the economy with the individual borrowing limit, households internalize the fact that holding land relaxes the borrowing limit. As a result, the demand for land is higher in the economy with an individual debt limit than in the economy in which the collateral service provided by land is not internalized. This causes land prices to be higher in the economy with the internalized borrowing constraint, allowing households to borrow more. We will return to the issue of underborrowing in section 10.2.3. It follows from the analysis of this section that the no-overborrowing result is robust to allowing for a debt limit that is increasing in the market value of a fixed factor of production.
The reason why in this class of models households do not have a larger propensity to borrow under an aggregate debt limit is that the market and social prices of international liquidity are identical (in the case of a constant debt limit) or almost identical (in the case of a debt limit that depends on the price of land). Two features of the economies studied in this section are crucial in generating the equality of market and social prices of debt. First, when the borrowing limit is internalized the shadow price of funds, given by the pseudo interest rate r̃t, is constant and equal to the world interest rate r except when the debt ceiling is binding. Importantly, the shadow price of funds equals the world interest rate even as households operate arbitrarily close to the debt ceiling. Second, in the economy with the individual debt constraint, when the debt ceiling binds, it does so for all agents simultaneously. This property is a consequence of the assumption of homogeneity across economic agents. The absence of either of these two features may cause the market price of foreign funds to be below the social price, thereby inducing overborrowing. The next section explores this issue in more detail.

The Case of Overborrowing

The present section provides three variations of the environment studied thus far that give rise to overborrowing. In the first example, agents are heterogeneous in endowments, in the second agents face a debt-elastic interest rate, and in the third the collateral is a flow that depends on the relative price of nontradables, which, in turn, is a continuous function of aggregate spending.

Heterogeneous Agents

The following model, taken from Uribe (2007), describes a situation in which overborrowing occurs because debt limits do not bind for all agents at the same time. The model features a two-period endowment economy without uncertainty. The country faces a constant debt ceiling κ per capita.
Figure 10.3: Overborrowing in an Economy with Heterogeneous Agents

There is a continuum of agents of measure one, and agents are heterogeneous. The central result obtains under a variety of sources of heterogeneity, such as differences in endowments, preferences, or initial asset positions. Here, I assume that agents are identical in all respects except for their period-2 endowments. Specifically, in period 1 all households receive the same endowment y, whereas in period 2 half of the households receive an endowment y^a > y and the other half receive a smaller endowment y^b < y^a. Agents receiving the larger future endowment have a stronger incentive to borrow in period 1 to smooth consumption over time. Suppose that in the absence of a debt ceiling households with high expected endowment consume c^a > y + κ units in period 1 and the rest of the households consume c^b < y + κ units. Figure 10.3 depicts the equilibrium in the absence of a debt constraint. In the unconstrained equilibrium, aggregate external debt per capita equals

d^u = (c^a + c^b)/2 − y.

When the borrowing ceiling κ is imposed at the level of each individual household, half of the households (those with high period-2 endowment) are constrained and consume y + κ units, whereas the other households are unconstrained and consume c^b. Aggregate external debt per capita equals

d^i = (κ + c^b − y)/2 < d^u.

Clearly, we also have that d^i < κ. Now suppose that the debt ceiling is imposed at the aggregate level. Two alternative situations are possible. One is that the aggregate debt limit is not binding. This case takes place when in the absence of a debt constraint debt per capita does not exceed the ceiling κ, that is, when d^u ≤ κ. In this case, the equilibrium interest rate equals the world interest rate r, and consumption of each agent equals the level attained in the absence of any borrowing constraint.
External debt is given by d^a = d^u > d^i. Alternatively, if the aggregate level of external debt in the unconstrained environment exceeds the ceiling (i.e., if d^u > κ), then the economy is financially rationed, the domestic interest rate exceeds the world interest rate, and aggregate borrowing per capita is given by d^a = κ > d^i. It follows that, regardless of whether the aggregate debt limit is binding or not, external borrowing is higher when the debt ceiling is imposed at the aggregate level. That is, the combination of heterogeneous consumers and a debt limit imposed at the aggregate level induces overborrowing in equilibrium. Overborrowing occurs because of a financial externality. Specifically, the group of more frugal consumers provides a financial service to the group of more lavish consumers by placing comparatively less pressure on the aggregate borrowing constraint. This service, however, is not priced in the competitive equilibrium.⁸ This overborrowing result relies on the absence of a domestic financial market. When the debt limit κ is imposed at the individual level, intertemporal marginal rates of substitution are not equalized across households. Suppose that a domestic financial market existed in which frugal

⁸ Interestingly, economic heterogeneity, although of a different nature, is also the root cause of overborrowing in the dual-liquidity model of emerging-market crises developed by Caballero and Krishnamurthy (2001). In their model, there is heterogeneity in the provision of liquidity across assets. Some assets are recognized as liquid collateral by both domestic and foreign lenders, while other assets serve as collateral only to domestic lenders.
Caballero and Krishnamurthy show that in financially underdeveloped economies this type of heterogeneity produces an externality whereby the market price of international liquidity is below its social marginal cost.

households (households with relatively low future endowments) could borrow externally and lend internally to lavish households. In this case, in equilibrium, intertemporal marginal rates of substitution would be equal across households and the consumption allocation would equal the one emerging under an aggregate collateral constraint. It follows that with a domestic financial market, the overborrowing result disappears. This reemergence of the no-overborrowing result relies, however, on the assumption that foreign lenders will keep the external debt limit on lavish agents equal to κ. Alternatively, foreign lenders may realistically impose that total debt held by lavish households (i.e., the sum of external and domestic debt) be limited by κ. In this case, a domestic financial market would be ineffective in eliminating overborrowing. A comment on the concept of overborrowing when agents are heterogeneous is in order. The term overborrowing has a negative connotation, referring to a suboptimal amount of external financing. In the models with homogeneous agents and a constant debt limit studied earlier in this chapter, one can safely interpret any excess external debt in the economy with an aggregate debt limit over the economy with an individual debt limit as suboptimal, or overborrowing. This is because the competitive equilibrium associated with the economy featuring an individual debt limit coincides with the optimal allocation chosen by a social planner that internalizes the debt limit. When agents are heterogeneous it is not necessarily the case that the debt distribution associated with the economy featuring an aggregate debt limit is less desirable than the one implied by the economy with an individual debt limit.
To see this, suppose, for instance, that in the economy analyzed in this section the social planner cared only about the well-being of agents with high period-2 endowments. In this case, the social planner would favor the equilibrium associated with an aggregate borrowing limit over the one associated with an individual debt limit.

Debt-Elastic Country Premium

In section ??, we studied a way to induce stationarity in the small-open-economy business-cycle model that consists in making the country interest rate a function of the cross-sectional average of external debt per capita. In this case, agents do not internalize that their individual borrowing contributes to increasing the cost of external funds. As a result, an externality emerges whereby the economy assumes more external debt than is socially optimal. Formally, let the country interest rate be given by an increasing function of aggregate external debt of the form

rt = r + ρ(Dt),

with ρ′ > 0. Because individual households take the evolution of the aggregate debt position, Dt, as exogenous, they do not internalize the dependence of the interest rate on their individual debt positions. The reason why the cost of funds is debt elastic is unspecified in this simple setting, but it could be due to the presence of default risk as in models of sovereign debt. We study this class of models in chapter 11. We assume that all households are identical. Therefore, in equilibrium, we have that dt = Dt, where dt is the individual level of debt held by the representative household. Let d* > 0 denote the steady-state value of debt in this economy. Then d* must satisfy the condition

1 = (1 + r + ρ(d*))β,   (10.38)

where, as usual, β is a constant subjective discount factor. This steady-state condition arises in virtually all formulations of the small open economy with utility-maximizing households (e.g., Schmitt-Grohé and Uribe, 2003).
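Condition (10.38) pins down d* once a premium function is specified. The sketch below solves it under a hypothetical linear premium ρ(d) = φd; the values of β and φ are illustrative, not the chapter's calibration:

```python
# Solving the steady-state condition (10.38), 1 = (1 + r + rho(d*)) * beta,
# under a hypothetical linear premium rho(d) = phi * d.
r, beta, phi = 0.04, 0.95, 0.01   # illustrative values

rho = lambda d: phi * d
d_star = (1.0 / beta - 1.0 - r) / phi   # closed form in the linear case

# d* is positive whenever beta < 1/(1 + r), i.e. the steady-state premium
# rho(d*) = 1/beta - 1 - r is positive; check the Euler condition holds.
assert d_star > 0
assert abs((1 + r + rho(d_star)) * beta - 1.0) < 1e-12
```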
Assume now that the debt-elastic interest-rate schedule is imposed at the level of each individual household, so that rt = r + ρ(dt). Let d** > 0 denote the steady-state level of external debt in this economy. It can be shown that d** is determined by the condition

1 − βd**ρ′(d**) = (1 + r + ρ(d**))β.   (10.39)

Comparing equations (10.38) and (10.39), it is clear that because ρ′ > 0, we have that d* > d**. That is, the economy with the financial externality generates overborrowing. Auernheimer and García-Saltos (2000) derive a similar result in a model in which the interest rate depends on the leverage ratio. That is, rt = ρ(Dt/(qt k*)) in the case of an aggregate debt limit, and rt = ρ(dt/(qt kt+1)) in the case of an individual debt limit. As noted earlier, in the Auernheimer-García-Saltos model an externality emerges even in this latter case, because agents do not internalize the effect that their borrowing behavior has on the price of land, qt. We note that in the economy with the aggregate debt limit the market price of foreign funds, r + ρ(dt), is strictly lower than the social cost of foreign funds, given by r + ρ(dt) + dtρ′(dt). This discrepancy, which is key in generating overborrowing, is absent in the economy of the previous sections.

Nontraded Output As Collateral

Consider now a variation of the model analyzed thus far in which the object that serves as collateral is output. Here, the main mechanism for inducing overborrowing is the real exchange rate, defined as the relative price of nontradable goods in terms of tradables. Suppose that due to a negative shock, aggregate demand falls, causing the relative price of nontradables to collapse. In this case, the value of nontradable output in terms of tradable goods falls. Since this variable serves as collateral, the borrowing constraint tightens, exacerbating the economic contraction.
This mechanism was first studied by Korinek (2010) in the context of a three-period model. Bianchi (2010) extends the Korinek model to an infinite-horizon quantitative setting. Our exposition follows Bianchi's formulation. Consider a small open endowment economy in which households have preferences of the form

E0 ∑_{t=0}^{∞} β^t U(ct),   (10.40)

with the usual notation. The period utility function takes the form U(c) = (c^(1−σ) − 1)/(1 − σ). The consumption good is assumed to be a composite of tradable and nontradable consumption as follows:

ct = A(cTt, cNt) ≡ [ω (cTt)^(1−1/η) + (1 − ω)(cNt)^(1−1/η)]^(1/(1−1/η)),   (10.41)

where cTt denotes consumption of tradables and cNt denotes consumption of nontradables. Households are assumed to have access to a single, one-period, risk-free, internationally traded bond that pays the constant interest rate r. The household's sequential budget constraint is given by

dt = (1 + r)dt−1 + cTt − yTt + pNt (cNt − yNt),   (10.42)

where dt denotes the amount of debt assumed in period t and maturing in t + 1, pNt denotes the relative price of nontradables in terms of tradables, and yTt and yNt denote the endowments of tradables and nontradables, respectively. Both endowments are assumed to be exogenous and stochastic. The borrowing constraint takes the form

dt ≤ κT yTt + κN pNt yNt,   (10.43)

where κT, κN > 0 are parameters. Households internalize this borrowing limit. However, just as in the case in which the value of land is used as collateral, this borrowing constraint introduces an externality à la Auernheimer and García-Saltos (2000), because each individual household takes the real exchange rate pNt as exogenously determined, even though their collective absorption of nontradable goods is a key determinant of this relative price.
Households choose a set of processes {cTt, cNt, ct, dt} to maximize (10.40) subject to (10.41)-(10.43), given the processes {pNt, yTt, yNt} and the initial debt position d−1. The first-order conditions of this problem are (10.41)-(10.43) and

G(cTt, cNt) = λt,

pNt = [(1 − ω)/ω] (cTt/cNt)^(1/η),

λt = β(1 + r)Etλt+1 + µt,   (10.44)

µt ≥ 0,

µt (dt − κT yTt − κN pNt yNt) = 0,

where G(cTt, cNt) ≡ ω U′(A(cTt, cNt)) [A(cTt, cNt)/cTt]^(1/η) denotes the marginal utility of tradable consumption, and λt and µt denote the Lagrange multipliers on the sequential budget constraint (10.42) and the collateral constraint (10.43), respectively. As usual, the Euler equation (10.44) equates the marginal benefit of assuming more debt with its marginal cost. During tranquil times, when the collateral constraint does not bind, the benefit of increasing dt by one unit is the marginal utility of tradables G(cTt, cNt), which in turn equals λt. The marginal cost of an extra unit of debt is the present discounted value of the payment that it generates in the next period, β(1 + r)Etλt+1. During financial crises, when the collateral constraint binds, the marginal utility of increasing debt is unchanged, but the marginal cost increases to β(1 + r)Etλt+1 + µt, reflecting a shadow penalty for trying to increase debt when the collateral constraint is binding. In equilibrium, the market for nontradables must clear. That is,

cNt = yNt.

Then, a competitive equilibrium is a set of processes {cTt, dt, µt} satisfying

G(cTt, yNt) = β(1 + r)EtG(cTt+1, yNt+1) + µt,   (10.45)

dt = (1 + r)dt−1 + cTt − yTt,   (10.46)

dt ≤ κT yTt + κN [(1 − ω)/ω] (cTt)^(1/η) (yNt)^(1−1/η),   (10.47)

µt (dt − κT yTt − κN [(1 − ω)/ω] (cTt)^(1/η) (yNt)^(1−1/η)) = 0,   (10.48)

µt ≥ 0,   (10.49)

given processes {yTt, yNt} and the initial condition d−1.
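The externality operates through the cTt term in (10.47). A small sketch (with illustrative parameter values, not the chapter's calibration) uses the equilibrium pricing condition pN = ((1 − ω)/ω)(cT/yN)^(1/η) to show that a drop in tradable absorption lowers pN and thereby tightens the borrowing limit:

```python
# Price-collateral feedback in the nontradables model: the equilibrium
# relative price p^N rises with tradable absorption, so the borrowing
# limit kappa_T*yT + kappa_N*p^N*yN tightens when cT falls.
# All parameter values below are illustrative.
omega, eta = 0.5, 0.5
kappa_T, kappa_N = 0.3, 0.3
yT, yN = 1.0, 1.0

def p_N(cT):
    """Equilibrium relative price of nontradables (cN = yN imposed)."""
    return (1 - omega) / omega * (cT / yN) ** (1 / eta)

def debt_limit(cT):
    """Right-hand side of the collateral constraint in equilibrium."""
    return kappa_T * yT + kappa_N * p_N(cT) * yN

# A contraction in tradable consumption lowers p^N and tightens the limit.
assert p_N(0.8) < p_N(1.0)
assert debt_limit(0.8) < debt_limit(1.0)
```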
The fact that cTt appears on the right-hand side of the equilibrium version of the collateral constraint (10.47) means that during contractions in which the absorption of tradables falls the collateral constraint endogenously tightens. Individual agents do not take this effect into account in choosing their consumption plans. This is the nature of the financial externality in this model. A benevolent government would be interested in designing policies that induce households to internalize the financial externality.

The Socially Optimal Equilibrium

The socially optimal policy aims at attaining the allocation implied by an optimization problem that takes into account the dependence of the value of collateral upon the aggregate consumption of tradables. Such an allocation is the solution to the following social planner's problem:

max E0 ∑_{t=0}^{∞} β^t U(A(cTt, yNt))

over {cTt, dt}, subject to the equilibrium conditions (10.46) and (10.47), which we reproduce here for convenience:

dt = (1 + r)dt−1 + cTt − yTt,

dt ≤ κT yTt + κN [(1 − ω)/ω] (cTt)^(1/η) (yNt)^(1−1/η).

The right-hand side of this last expression contains the equilibrium level of consumption of tradables, cTt, whose evolution is taken as endogenous by the social planner but as exogenous by the individual household. Also, the social planner internalizes the fact that in the competitive equilibrium the market for nontraded goods clears at all times. This internalization implies that the endogenous variable cNt is replaced everywhere by the exogenous variable yNt.
The first-order conditions associated with this problem are the above two constraints and

G(cTt, yNt) + µtΓ(cTt, yNt) = β(1 + r)Et[G(cTt+1, yNt+1) + µt+1Γ(cTt+1, yNt+1)] + µt,   (10.50)

µt (dt − κT yTt − κN [(1 − ω)/ω] (cTt)^(1/η) (yNt)^(1−1/η)) = 0,   (10.51)

µt ≥ 0,   (10.52)

where µt denotes the Lagrange multiplier on the borrowing constraint (10.47), and

Γ(cTt, yNt) ≡ (κN/η) [(1 − ω)/ω] (yNt/cTt)^(1−1/η)

denotes the amount by which an extra unit of tradable consumption relaxes the borrowing constraint (10.47). In periods in which the borrowing constraint is binding, the marginal utility of tradable consumption from a social point of view is given by the sum of the direct marginal utility G(cTt, yNt) and the factor µtΓ(cTt, yNt), reflecting the fact that an extra unit of consumption of tradables raises the relative price of nontradables, thereby making the collateral constraint marginally less tight. The social-planner equilibrium is then given by a set of processes {cTt, dt, µt} satisfying (10.46), (10.47), and (10.50)-(10.52). Shortly, we will compare the macroeconomic dynamics induced by the competitive equilibrium and the social-planner equilibrium. Before doing so, however, we wish to address the issue of how to implement the latter. That is, we wish to design fiscal instruments capable of supporting the social-planner allocation as the outcome of a competitive equilibrium.

Optimal Fiscal Policy

The competitive equilibrium conditions and the social-planner equilibrium conditions differ only in the Euler equation (compare equations (10.45) and (10.50)). The question we entertain here is whether there exists a fiscal-policy scheme that induces households to internalize their collective effect on the credit limit and thereby makes the competitive-equilibrium conditions identical to the social-planner equilibrium conditions.
It turns out that such an optimal fiscal policy takes the form of a proportional tax on external debt and a lump-sum transfer that rebates the entire proceeds from the debt tax equally among households. Specifically, let \tau_t denote the tax on debt and s_t denote the lump-sum transfer. Then, the budget constraint of the household is given by

d_t = (1+r)(1+\tau_{t-1}) d_{t-1} + c^T_t - y^T_t + p^N_t (c^N_t - y^N_t) - s_t.

Households choose processes \{c^T_t, c^N_t, c_t, d_t\} to maximize (10.40) subject to this budget constraint and conditions (10.41) and (10.43), given the processes \{p^N_t, y^T_t, y^N_t, \tau_t, s_t\} and the initial debt position d_{-1}. The Euler equation associated with this problem is of the form

G(c^T_t, c^N_t) = \beta(1+r)(1+\tau_t) E_t G(c^T_{t+1}, c^N_{t+1}).

This expression says that households choose a level of consumption at which the marginal benefit of an extra unit of debt, given by the current marginal utility of tradable consumption G(c^T_t, c^N_t), equals the marginal cost of debt, given by the expected present discounted value of the after-tax interest payment measured in terms of utils, \beta(1+r)(1+\tau_t) E_t G(c^T_{t+1}, c^N_{t+1}). It follows that the higher the tax rate on debt, the more costly it is for households to accumulate debt. Of course, this tax cost of debt is only an individual perception aimed at distorting households' spending behavior. In the aggregate, the government rebates the tax to households in a lump-sum fashion by adopting a balanced-budget rule of the form s_t = \tau_{t-1}(1+r) d_{t-1}.

It is straightforward to establish that a competitive equilibrium in this economy is given by processes \{c^T_t, d_t, \mu_t\} satisfying (10.46)-(10.49), which are common to both the competitive equilibrium without taxes and the social planner's equilibrium, and

G(c^T_t, y^N_t) = \beta(1+r)(1+\tau_t) E_t G(c^T_{t+1}, y^N_{t+1}) + \mu_t, \qquad (10.53)

given a fiscal policy \tau_t, exogenous processes \{y^T_t, y^N_t\}, and the initial condition d_{-1}.
Thus, the competitive-equilibrium conditions in the economy with taxes and the social planner's equilibrium conditions differ only in their respective Euler equations, given by (10.53) and (10.50). The optimal fiscal policy then is the tax process \{\tau_t\} that makes these two Euler equations equal to each other. It is easy to establish that the optimal tax process is given by

\tau_t = \frac{\beta(1+r) E_t \mu_{t+1} \Gamma(c^T_{t+1}, y^N_{t+1}) - \mu_t \Gamma(c^T_t, y^N_t)}{\beta(1+r) E_t G(c^T_{t+1}, y^N_{t+1})}.

According to this expression, in tranquil periods, in which the borrowing constraint does not bind (\mu_t = 0) and is not expected to bind in the next period (\mu_{t+1} = 0 in all states of period t+1), the government does not tax external debt (\tau_t = 0). In periods of uncertainty, when the collateral constraint does not bind in the current period but has some probability of binding in the next period (\mu_t = 0 and \mu_{t+1} > 0 in some states of t+1), the government taxes debt holdings to discourage excessive spending and external borrowing.

Quantitative Predictions

The numerical exercise conducted by Bianchi (2010) considers an approximately optimal tax policy in which the government levies a tax on debt equal to the one prescribed by the optimal tax policy only in periods in which the borrowing constraint is not binding in the current period but does bind in at least one of the possible states of period t+1. When the borrowing constraint does bind in the current period, or when it does not bind either currently or in any state of the next period, the tax rate is set at zero.

The model is calibrated at an annual frequency. The output process is assumed to follow a bivariate AR(1) process. An empirical measure of traded output is defined as GDP in the manufacturing, agriculture, and mining sectors, and nontraded output as the difference between GDP and GDP in the traded sector.
Using data from the World Development Indicators from 1965 to 2007, the output process is estimated to be

\begin{bmatrix} y^T_t \\ y^N_t \end{bmatrix} =
\begin{bmatrix} 0.90 & 0.46 \\ -0.45 & 0.22 \end{bmatrix}
\begin{bmatrix} y^T_{t-1} \\ y^N_{t-1} \end{bmatrix} +
\begin{bmatrix} 0.047 & 0 \\ 0.035 & 0.022 \end{bmatrix}
\begin{bmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{bmatrix},

with \epsilon_{it} \sim N(0,1) for i = 1, 2.^9 The remaining parameters are calibrated as follows: \kappa^T = \kappa^N = 0.32, \beta = 0.91, \omega = 0.31, \eta = 0.83, \sigma = 2, and r = 0.04.

Bianchi finds that the optimal debt accumulation decision is quite different when the borrowing constraint binds and when it does not. Let the optimal debt accumulation rule be d_t = D(d_{t-1}, y^T_t, y^N_t) in the competitive equilibrium (CE) and d_t = D^s(d_{t-1}, y^T_t, y^N_t) in the social-planner (SP) equilibrium. The functions D and D^s are increasing in the level of debt (D_1 > 0, D^s_1 > 0) when the borrowing constraint is not binding, but are decreasing in the level of debt (D_1 < 0, D^s_1 < 0) when the borrowing constraint is binding. This means that the CE and SP economies experience capital outflows (d_t - d_{t-1} < 0) when the borrowing constraint binds.

There are also significant differences between the debt-accumulation behavior in the CE and SP economies: First, D is larger than D^s, especially when the borrowing limit is not binding in either the CE or SP economies or when it is binding in the CE but not in the SP economy. Second, given the levels of sectoral output, the borrowing constraint binds at a higher level of debt in the SP economy than in the CE economy. Third, under the present calibration, there is a 15 percent probability that debt in the CE is higher than the upper bound of the support of the stationary distribution of debt in the SP economy. These predictions of the model point at a significant degree of overborrowing. The required tax on external debt that corrects this inefficiency is 5 percent on average and is increasing in the level of debt.

^9 Unfortunately, the 2010 version of Bianchi's paper does not specify how the output series were detrended or in which units they are expressed.
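The estimated output process above can be simulated in a few lines. The sketch below is our own illustration, assuming the shock-loading matrix is the lower-triangular matrix displayed above and interpreting output as deviations from trend:

```python
import numpy as np

# Minimal simulation sketch of the bivariate AR(1) output process
#   y_t = A y_{t-1} + B eps_t,  eps_t ~ N(0, I),
# with the coefficient matrix A and (assumed lower-triangular) shock
# loading B from the estimation above. Illustrative only.
rng = np.random.default_rng(0)
A = np.array([[0.90, 0.46],
              [-0.45, 0.22]])
B = np.array([[0.047, 0.000],
              [0.035, 0.022]])

T = 10_000
y = np.zeros((T, 2))                   # columns: traded, nontraded output
for t in range(1, T):
    y[t] = A @ y[t - 1] + B @ rng.standard_normal(2)

# the process is stationary: eigenvalues of A lie inside the unit circle
assert np.all(np.abs(np.linalg.eigvals(A)) < 1)
```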
Finally, the presence of overborrowing in this economy has quantitatively important effects on the frequency and size of economic crises. The probability of a crisis, defined as a situation in which the borrowing constraint is binding and capital outflows (d_t - d_{t-1} < 0) are larger than one standard deviation, is 5.5 percent in the CE and 0.5 percent in the SP economy. Moreover, the initial contraction in consumption in a crisis is higher in the CE economy than in the SP economy. But these crises do not appear to inflict too much pain on the representative household. Bianchi finds that on average the representative household of the CE economy requires an increase of only one tenth of one percent of its consumption stream to be as happy as the household of the SP economy. This modest welfare cost of overborrowing is due to the fact that in this economy output is assumed to be exogenous. The next section shows that relaxing this strong assumption can lead to a surprising result.

The Case of Underborrowing

One lesson we can derive from the models of overborrowing based on collateral constraints is that the assumption about what precise object is used as collateral (a constant, the market value of some asset, traded output, nontraded output, etc.) can have crucial consequences for whether or not the presence of the borrowing constraint causes overborrowing, defined as a situation in which the competitive equilibrium produces more debt accumulation than the social-planner equilibrium. In this section we stress this idea by considering a modification of Bianchi's (2010) model due to Benigno, Chen, Otrok, Rebucci, and Young (2009), hereafter BCORY, that under plausible calibrations can produce underborrowing. The key modification introduced by BCORY (2009) is to assume that the labor supply is endogenous and that nontradables are produced using labor; recall that Bianchi's model assumes that output is exogenous.
This simple and realistic extension can fundamentally alter the nature of the financial externality. To see this, suppose that, as in the Bianchi (2010) model, borrowing is limited by income. From a private point of view, a way to relax the borrowing constraint is to supply more hours of work, thereby increasing income at any given wage rate. In equilibrium, however, this can have a deleterious effect on the country's ability to borrow. For an increase in the labor supply causes the supply of nontradables to rise, which, if the elasticity of substitution between tradables and nontradables is low (less than unity), may in turn cause a fall in the value of nontraded income in terms of tradables, tightening the borrowing limit.

Formally, the representative household is assumed to have preferences described by the utility function

E_0 \sum_{t=0}^{\infty} \beta^t U(c_t, h_t), \qquad (10.55)

where h_t denotes hours worked in period t. The period utility function, given by the GHH form

U(c, h) = \frac{\left(c - \frac{h^\delta}{\delta}\right)^{1-\sigma} - 1}{1-\sigma},

is assumed to be increasing in consumption and decreasing in hours worked. As before, the consumption good is a composite of tradables and nontradables, and the aggregation technology is given by equation (10.41). The sequential budget constraint is of the form

d_t = (1+r) d_{t-1} + c^T_t - y^T_t + p^N_t c^N_t - w_t h_t, \qquad (10.56)

where w_t denotes the real wage expressed in terms of tradables. The endowment of tradables, y^T_t, is assumed to be exogenous and stochastic. Borrowing in international markets is limited by income:

d_t \le \kappa \left(y^T_t + w_t h_t\right), \qquad (10.57)

where \kappa > 0 is a parameter. Note that, unlike in the Bianchi (2010) model, this borrowing limit contains on its right-hand side a variable that is endogenous to households, namely, h_t. In particular, in the present environment households internalize the benefit that working longer hours has on their ability to borrow.
The household's optimization problem consists in choosing processes \{c_t, c^T_t, c^N_t, h_t, d_t\} to maximize the utility function (10.55) subject to the aggregation technology (10.41), the sequential budget constraint (10.56), and the borrowing constraint (10.57). The first-order conditions associated with this problem are identical to those derived for the Bianchi (2010) model, except for the emergence of a labor-supply expression of the form

-U_h(c_t, h_t) = \lambda_t \left(w_t + \kappa \mu_t / \lambda_t\right),

where, as before, \lambda_t denotes the Lagrange multiplier associated with the sequential budget constraint, and \mu_t denotes the Lagrange multiplier associated with the borrowing constraint. This expression shows that in periods in which the borrowing constraint is binding (\mu_t > 0), the shadow real wage, given by w_t + \kappa \mu_t / \lambda_t, exceeds the market wage rate w_t, giving households an extra incentive to supply hours to the labor market. This incentive to work originates, as explained earlier, in the fact that at the going wage rate, an increase in hours raises labor income, thereby enlarging the value of collateral and relaxing the borrowing limit.

Firms produce nontradables using a linear technology that takes labor as the sole input of production. Specifically, output of nontradables is given by y^N_t = h_t. The problem of the firm consists in choosing h_t to maximize profits, given by p^N_t y^N_t - w_t h_t, subject to the production technology. We assume that free entry guarantees zero profits in the nontraded sector. This means that p^N_t = w_t at all times.

It is of interest to examine the shape of the borrowing limit, now not from the household's perspective, but from an equilibrium perspective. Noting that the zero-profit condition implies that w_t h_t = p^N_t y^N_t at all times, we can write equation (10.57) as

d_t \le \kappa \left(y^T_t + p^N_t y^N_t\right),

which is identical to the borrowing constraint assumed by Bianchi (2010), given by equation (10.43), when \kappa^T = \kappa^N = \kappa.
Note that both in the present model and in Bianchi's, households take the variables that appear on the right-hand side of this expression as exogenous. The key difference between the two models is that whereas in the present model the above expression holds only in equilibrium, in Bianchi's model it holds both in equilibrium and at the level of the individual household.

To understand why the incentives of households to work and in that way relax the borrowing constraint can be counterproductive in equilibrium, we can rewrite the borrowing constraint (10.43) in yet another way by using the zero-profit condition w_t = p^N_t to get rid of w_t, the efficiency condition p^N_t = \frac{1-\omega}{\omega}\left(\frac{c^T_t}{c^N_t}\right)^{1/\eta} to get rid of p^N_t, the market-clearing condition c^N_t = y^N_t to get rid of c^N_t, and the technological relation y^N_t = h_t to get rid of y^N_t. This yields the following equilibrium borrowing limit:

d_t \le \kappa \left[y^T_t + \frac{1-\omega}{\omega} \left(c^T_t\right)^{1/\eta} h_t^{1-1/\eta}\right].

This expression says that, unlike the perception of the individual household, i.e., that an increase in hours expands the borrowing limit, in equilibrium increasing hours might actually shrink the borrowing possibility set. This will be the case when tradables and nontradables are poor substitutes, or, more precisely, when the elasticity of substitution between these two types of goods, \eta, is less than unity. In this case, an increase in hours produces a one-to-one increase in the supply of nontradables, which drives the relative price of this type of goods down so much that the value of nontraded output ends up falling. BCORY (2009) argue that this is indeed a realistic possibility. Existing estimates of the elasticity of substitution \eta for emerging countries yield values ranging from slightly below 0.5 to about 0.83 (see, for instance, Ostry and Reinhart, 1992; and Neumeyer and González-Rozada, 2003).
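The sign logic of the last display can be illustrated numerically. In the sketch below (our own illustration; the values of κ, ω, and c^T are placeholders), the nontraded component of the equilibrium borrowing limit is proportional to h^{1-1/η}, so it falls in hours when η < 1 and rises when η > 1:

```python
# Sketch of the equilibrium borrowing limit above: the value of nontraded
# collateral is proportional to h**(1 - 1/eta), hence decreasing in hours
# when eta < 1 and increasing when eta > 1. Parameter values illustrative.
kappa, omega, cT = 0.32, 0.31, 1.0

def nontraded_collateral(h, eta):
    return kappa * (1 - omega) / omega * cT**(1/eta) * h**(1 - 1/eta)

# eta < 1: working more hours shrinks the borrowing set in equilibrium
assert nontraded_collateral(1.1, 0.76) < nontraded_collateral(1.0, 0.76)
# eta > 1: working more hours relaxes it
assert nontraded_collateral(1.1, 1.50) > nontraded_collateral(1.0, 1.50)
```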
BCORY (2009) set \eta at 0.76 and calibrate the rest of the parameters of the model using values that are plausible for the typical emerging economy. They then compute both the competitive equilibrium and the social planner's equilibrium. The latter is, as before, the solution to a maximization problem in which the endogeneity of the wage rate, w_t, on the right-hand side of (10.57) is internalized. BCORY (2009) find that the competitive equilibrium yields a lower average debt-to-output ratio and a smaller probability that the borrowing constraint will bind than the social planner's equilibrium. That is, the model generates underborrowing.

We close this section by reiterating that in models in which borrowing is limited by a collateral constraint, the emergence of overborrowing depends crucially on what objects are assumed to serve as collateral. This adds an element of discretion to the analysis, because in the class of models reviewed here the collateral constraint is assumed in an ad hoc fashion. This conclusion suggests two priorities for future research in this area. One is empirical and should aim at answering a simple question: what do foreign lenders accept as collateral from borrowers residing in emerging countries? The second line of research suggested as important by the current state of the overborrowing literature is theoretical. It concerns the production of microfoundations for the type of borrowing constraints analyzed in this chapter. The hope here is that through such foundations we will be able to narrow the type of items that can sensibly be included on the right-hand side of the borrowing constraint.
Exercise 10.1 (The Temporariness Hypothesis With Downward Nominal Wage Rigidity) Consider a small open perfect-foresight economy populated by a large number of identical, infinitely lived consumers with preferences described by the utility function

\sum_{t=0}^{\infty} \beta^t \ln c_t,

where c_t denotes consumption and \beta \in (0,1) denotes the subjective discount factor. Consumption is a composite good made of imported and nontradable goods, denoted c^M_t and c^N_t respectively, via the aggregator function

c_t = \sqrt{c^M_t c^N_t}.

The sequential budget constraint of the representative household is given by

d^h_t = (1+r) d^h_{t-1} + (1+\tau_t)\left(c^M_t + p_t c^N_t\right) - w_t h_t - y - x_t,

where d^h_t denotes debt acquired in period t and maturing in t+1, h_t denotes hours worked, \tau_t is a proportional consumption tax, w_t denotes the real wage in terms of importables, p_t denotes the relative price of nontradables in terms of importables, y = 1 is an endowment of exportable goods, x_t denotes a lump-sum transfer received from the government, and r denotes the real interest rate. Debt is denominated in terms of importables. The terms of trade are assumed to be constant and normalized to unity. Households are subject to the no-Ponzi-game constraint

\lim_{j \to \infty} \frac{d^h_{t+j}}{(1+r)^j} \le 0.

Assume that the household's initial debt position is nil (d^h_{-1} = 0). Households supply inelastically 1 unit of labor to the market each period. Suppose that the law of one price holds for importables, so that P^M_t = P^{M*}_t E_t, where P^M_t denotes the domestic-currency price of importables, E_t denotes the nominal exchange rate, defined as the price of foreign currency in terms of domestic currency, and P^{M*}_t denotes the foreign-currency price of importables. Assume that P^{M*}_t is constant and equal to unity for all t. Firms in the nontraded sector produce goods by means of the linear technology y^N_t = h_t, where y^N_t denotes output of nontradables.
Firms are price takers in product and labor markets and there is free entry, so that all firms make zero profits at all times. The government starts period 0 with no debt or assets outstanding and runs a balanced budget period by period, that is, x_t = \tau_t (c^M_t + p_t c^N_t). The monetary authority pegs the exchange rate at unity, so that E_t = 1 for all t. Finally, assume that 1 + r = \beta^{-1} = 1.04.

1. Suppose that nominal wages are flexible and that before period 0 the economy was in a steady state with constant consumption of importables and nontradables and no external debt.

(a) Compute the equilibrium paths of c^M_t, w_t, W_t, p_t, the trade balance, and the current account under two alternative tax policies:

Policy 1: \tau_t = 0 for all t.

Policy 2: \tau_t = 0 for 0 \le t \le 11 and \tau_t = 0.3 for t \ge 12.

(b) Compute the welfare cost of policy 2 relative to policy 1, defined as the percentage increase in the consumption stream of a consumer living under policy 2 required to make him as well off as living under policy 1. Formally, the welfare cost of policy 2 relative to policy 1 is given by \lambda \times 100, where \lambda is implicitly given by

\sum_{t=0}^{\infty} \beta^t \ln\left[c^{p2}_t (1+\lambda)\right] = \sum_{t=0}^{\infty} \beta^t \ln c^{p1}_t,

where c^{p1}_t and c^{p2}_t denote consumption in period t under policies 1 and 2, respectively.

2. Now answer question 1 under the assumption that wages are downwardly inflexible. Specifically, assume that W_t \ge W_{t-1} for all t \ge 0. Begin by computing W_{-1} under the assumption that before period 0 the economy was in a steady state with constant consumption of tradables and nontradables, full employment, no debt, and a nominal exchange rate equal to unity.

3. How would your answers to questions 1 and 2 change if \tau_t were a tax only on consumption of importables (i.e., an import tariff)? Provide a quantitative and intuitive answer.

4. Continue to assume that \tau_t is a tax on consumption of imported goods only.
In section 10.1, in the context of a model with tradable goods only, we derived the result that policy 2 induces the same equilibrium paths of consumption, c_t, as policy 1 in the presence of imperfect credibility, that is, when the government announces and successfully implements policy 1 but households believe that policy 2 is in place and begin to believe in the permanence of the tariff cut only after period 11. How does this result change in the present model with tradables and nontradables? Consider separately the cases of flexible and downwardly rigid nominal wages.

Chapter 11

Sovereign Debt

Why do countries pay their international debts? This is a fundamental question in open-economy macroeconomics. A key distinction between international and domestic debts is that the latter are enforceable. Countries typically have in place domestic judicial systems capable of punishing defaulters. Thus, one reason why residents of a given country honor their debts with other residents of the same country is that creditors are protected by a government able and willing to apply force against delinquent debtors. At the international level the situation is quite different. For there is no such thing as a supranational authority with the capacity to enforce financial contracts between residents of different countries. Defaulting on international financial contracts appears to have no legal consequences.^1

If agents have no incentives to pay their international debts, then lenders should have no reason to lend internationally to begin with. Yet, we do observe a significant amount of borrowing and lending across nations. It follows that international borrowers must have reasons to repay their debts other than pure legal enforcement. Two main reasons are typically offered for why countries honor their international debts: economic sanctions and reputation.

Economic sanctions may take many forms, such as seizures of the debtor country's assets located abroad, trade embargoes, and import tariffs and quotas. Intuitively, the stronger is the ability of creditor countries to impose economic sanctions, the weaker are the incentives for debtor countries to default. A reputational motive to pay international debts arises when creditor countries have the ability to exclude from international financial markets countries with a reputation of being defaulters. Being isolated from international financial markets is costly, as it precludes the use of the current account to smooth consumption over time in response to aggregate domestic income shocks. As a result, countries may choose to repay their debts simply to preserve access to international financing.

This chapter investigates whether the existing theories of sovereign debt are capable of explaining the observed levels of sovereign debt. Before plunging into theoretical models of country debt, however, we will present some stylized facts about international lending and default that will guide us in evaluating the existing theories.

Empirical Regularities

In this section, we take a look at the observed patterns of sovereign defaults and their relation to macroeconomic indicators of interest. We draw on existing empirical research and also provide some new evidence.

Frequency, Size, And Length of Defaults

How often do countries default? How large are defaults? How long do countries take to resolve their debt disputes? To address these questions, we must first establish empirical definitions of default and of its resolution. Much of the data on sovereign default is produced by credit rating agencies, especially Standard and Poor's. Standard and Poor's defines default as the failure to meet a principal or interest payment on the due date (or within a specified grace period) contained in the original terms of a debt issue (see Beers and Chambers, 2006). This definition includes not only situations in which the sovereign simply refuses to pay interest or principal, but also situations in which it forces an exchange of old debt for new debt with less favorable terms than the original issue or converts debt into a different currency of less than equivalent face value.

A country is considered to have emerged from default when it resumes payments of interest and principal, including arrears. But defaults often involve a debt renegotiation that culminates in a restructuring of debt contracts that may include the swap of old debt for new debt. For this reason, when such a settlement occurs and the rating agency concludes that no further near-term resolution of creditors' claims is likely, the sovereign is regarded as having emerged from default (see Beers and Chambers, 2006). This definition of reemergence from default clearly requires a value judgment, as it involves the expectation on the part of the rating agency that no further disputes will emerge in relation to the default in question. Such a judgment call may or may not turn out to be correct.

^1 The use of force by one country or a group of countries to collect debt from another country was not uncommon until the beginning of the twentieth century. In 1902, an attempt by Great Britain, Germany, and Italy to collect the public debt of Venezuela by force prompted the Argentine jurist Luis-María Drago, who at the time was serving as minister of foreign affairs of Argentina, to articulate a doctrine stating that no public debt should be collected from a sovereign American state by armed force or through the occupation of American territory by a foreign power. The Drago doctrine was approved by the Hague Conference of 1907.
For example, all rating agencies concluded that Argentina emerged from the 2001 default in 2005, when it restructured its 81.8-billion-dollar debt by issuing new instruments involving a haircut of 73 percent. However, in 2014 a small group of holdouts (i.e., bond holders that did not participate in the debt restructuring) won a lawsuit against Argentina in a U.S. court, bringing the country back to the brink of default not only with the holdouts but possibly also with holders of the totality of the restructured debt.

Table 11.1 displays empirical probabilities of default for 9 emerging countries over the period 1824-1999. On average, the probability of default is about 3 percent per year. That is, countries defaulted on average once every 33 years. Table 11.1 also reports the average number of years countries are in a state of default or restructuring after a default or restructuring episode.

Table 11.1: Frequency And Length of Sovereign Default: 1824-1999

Country        Probability of     Years in State of Default
               Default per Year   per Default Episode
Argentina          0.023                 11
Brazil             0.040                  6
Chile              0.017                 14
Colombia           0.040                 10
Egypt              0.011                 11
Mexico             0.046                  6
Philippines        0.006                 32
Turkey             0.034                  5
Venezuela          0.051                  7
Average            0.030                 11

Note: The sample includes only emerging countries with at least one external-debt default or restructuring episode between 1824 and 1999. Therefore, the average probability is conditional on at least one default in the sample period. Source: Own calculations based on Reinhart, Rogoff, and Savastano (2003), table 1.

After a debt crisis, countries are in a state of default for about 11 years on average. If one assumes that while in a state of default countries have limited access to fresh funds from international markets, one would conclude that default causes countries to be in financial autarky for about a decade.
But the connection between being in a state of default and being in financial autarky should not be taken too far. For being in a state of default with a set of lenders does not necessarily preclude the possibility of obtaining new loans from other lenders with which the borrower has no unpaid debts. In this case, the period of financial autarky would be shorter than the period of being in a state of default. The converse can also be true. Suppose that foreign lenders choose to punish defaulters by excluding them from financial markets even after the delinquent country has come to an agreement with its creditors. In this case, the period of financial autarky could last longer than the state of being in default. We discuss empirical estimates of exclusion periods in section 11.2.1.

The information on frequency of default and length of default state provided in table 11.1 spans a long period of time (1824-1999). In the past decades, however, international financial markets have experienced enormous changes, including an expansion in the set of sovereigns with access to international credit markets and the participation of small lenders. For this reason, it is of interest to ask whether the frequency and length of sovereign defaults have changed. Table 11.9 in the appendix displays data on the beginning and end of default episodes for all defaulters between 1975 and 2014. During this period 93 sovereigns defaulted at least once and there were a total of 147 default episodes. This means that the empirical probability of default over the 40-year period 1975-2014 equals 147/(40 × 93), or around 4 percent. Comparing this number with the one corresponding to the period 1824-1999 suggests that the default frequency has increased from 3 to 4 defaults per century per country, conditional on the country having defaulted at least once.
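The back-of-the-envelope frequency computation quoted above is, in code:

```python
# The empirical default frequency quoted above: 147 default episodes over
# the 40 years 1975-2014, across the 93 sovereigns that defaulted at least
# once during the period.
episodes, years, sovereigns = 147, 40, 93
p_default = episodes / (years * sovereigns)   # probability per country-year
assert 0.039 < p_default < 0.040              # "around 4 percent"
```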
Table 11.9 also reveals that the average length of a default episode is 8 years, 3 years shorter than during the earlier period 1824-1999. We therefore conclude that in recent years sovereign defaults have become more frequent but shorter. Table 11.2 summarizes these results.

Table 11.2: Frequency and Length Of Sovereign Defaults: 1824-1999 Versus 1975-2014

Period        Probability of     Years in State of Default
              Default per Year   per Default Episode
1824-1999         0.030                 11
1975-2014         0.040                  8

Source: Tables 11.1 and 11.9.

[Figure 11.1: Distribution Of The Length Of Default Episodes, 1970-2014. Horizontal axis: years in default; vertical axis: probability in percent. Source: Table 11.9.]

Figure 11.1 displays the empirical probability distribution of the length of default episodes over the period 1975 to 2014. The distribution is strongly skewed to the right, which means that some default episodes took very long to be resolved. For example, table 11.9 documents that out of the 147 defaults recorded over the period 1975-2014, nine lasted longer than 30 years. The skewness of the distribution is reflected in a median default length significantly lower than the mean, 5 versus 8 years. As pointed out by Tomz and Wright (2013), the shape of the empirical probability distribution resembles an exponential distribution. Assuming that the length of default episodes is a good proxy for the time of exclusion from international financial markets (for further discussion of this assumption, see section 11.2.1), this suggests modeling the probability of a defaulter regaining access to financial markets as constant over time. As we will see in section ??, this is precisely the way reentry is modeled in most quantitative models of sovereign default.

How large are defaults? Most existing theoretical models of default assume that when the country defaults it does so on the entire stock of outstanding external debt.
In reality, however, this is not the case. Typically, countries default on a fraction of their outstanding debts. The resulting losses inflicted on creditors are called haircuts. A number of studies have estimated the size of haircuts. Sturzenegger and Zettelmeyer (2008) measure haircuts as the percentage difference between the present values of old and new instruments, discounted at market rates prevailing immediately after the debt exchange. They estimate haircuts for all major debt restructurings that occurred between 1998 and 2005. They find that haircuts are on average 40 percent. That is, after the default the creditor expects to receive a stream of payments with a present discounted value 40 percent lower than prior to the default. At the same time, they report a high dispersion in the size of haircuts, ranging from 13 percent (Uruguay, 2003) to 73 percent (Argentina, 2005).

Other studies find similar results. Cruces and Trebesch (2013) use the same methodology to measure haircuts as Sturzenegger and Zettelmeyer but greatly expand the data set to include all debt restructurings with foreign banks and bond holders that took place between 1970 and 2010. The resulting data set covers 180 default episodes in 68 countries. They find that the average haircut is 37 percent, with a standard deviation of 22 percent. Benjamin and Wright (2008) use a constant rate of 10 percent to discount pre- and post-restructuring payments in their computation of haircuts. Their data set includes 90 default episodes in 73 countries over the period 1989 to 2006. Like the other two studies, these authors find an average haircut of 40 percent with a large associated dispersion.

Default, Debt, And Country Premia

Table 11.3 displays average debt-to-GNP ratios over the period 1970-2000 for a number of emerging countries that defaulted upon or restructured their external debt at least once between 1824 and 1999.
The table also displays average debt-to-GNP ratios at the beginning of default or restructuring episodes. The data suggest that at the time of default debt-to-GNP ratios are significantly above average. In effect, for the countries considered in the sample, the debt-to-GNP ratio at the onset of a default or restructuring episode was on average 14 percentage points above normal times. The information provided in the table is silent, however, about whether the higher debt-to-GNP ratios observed at the brink of default episodes result from a contraction in aggregate activity or from a faster-than-average accumulation of debt in periods immediately preceding default or both. Table 11.3 also shows the country premium paid by the 9 emerging countries listed over a period starting on average in 1996 and ending in 2002. The country premium, or country spread, is the difference between the interest rate at which a country borrows in the international financial market and the interest rate at which developed countries borrow from one another. During the period considered, the country premium for the 9 countries included in the table was on average about 640 basis points per year, or 6.4 percent per year.

Table 11.3: Debt-to-GNP Ratios and Country Premia Among Defaulters

Country       Average Debt-to-GNP   Debt-to-GNP Ratio     Average Country
              Ratio                 in Year of Default    Spread
Argentina     37.1                   54.4                 1756
Brazil        30.7                   50.1                  845
Chile         58.4                   63.7                  186
Colombia      33.6                   —                     649
Egypt         70.6                  112.0                  442
Mexico        38.2                   46.7                  593
Philippines   55.2                   70.6                  464
Turkey        31.5                   21.0                  663
Venezuela     41.3                   46.3                 1021
Average       44.1                   58.1                  638

Notes: The sample includes only emerging countries with at least one external-debt default or restructuring episode between 1824 and 1999. Debt-to-GNP ratios are averages over the period 1970-2000. Country spreads are measured by EMBI country spreads, produced by J.P.
Morgan, and expressed in basis points, and are averages through 2002, with varying starting dates as follows: Argentina 1993; Brazil, Mexico, and Venezuela, 1992; Chile, Colombia, and Turkey, 1999; Egypt 2002; Philippines, 1997. Debt-to-GNP ratios at the beginning of a default episode are averages over the following default dates in the interval 1970-2002: Argentina 1982 and 2001; Brazil 1983; Chile 1972 and 1983; Egypt 1984; Mexico 1982; Philippines 1983; Turkey 1978; Venezuela 1982 and 1995. Colombia did not register an external default or restructuring episode between 1970 and 2002. Source: Own calculations based on Reinhart, Rogoff, and Savastano (2003), tables 3 and 6.

Do Countries Default In Bad Times?

An important question in the theoretical literature on default is whether countries tend to dishonor their debt obligations during economic expansions or contractions. The reason is that different models produce opposite predictions in this regard. As a preview of the theory to come, consider the following two simple examples. Suppose first that a country that wishes to smooth consumption signs a contract with foreign investors that is state contingent. Specifically, suppose that the contract specifies that the country receives a transfer from the rest of the world if domestic output is below average and makes a payment to the rest of the world if domestic output is above average. Clearly, under this contract, incentives to default are highest when output is above average, since these are the states in which the country must make payments. The second example is one in which the country cannot sign state-contingent contracts. Instead, suppose that the sovereign borrows internationally and has to pay next period the amount borrowed plus a fixed amount in interest regardless of the state of the domestic economy next period.
In this case, the incentive to default is likely to be strongest when output is low, because the cost of sacrificing consumption to service the debt is higher when consumption is already low due to the weak level of domestic output. Which of these two examples is favored by the data? Figure 11.2 displays the typical behavior of output around the default episodes listed in table 11.9. Output is measured as percent deviations of real GDP per capita from a log-quadratic trend.2 The figure displays a window of 3 years before and after a default episode. The year of default is normalized to 0. The typical behavior of output around a default episode is captured by computing the median of output period by period across the default episodes. The figure shows that defaults typically occur after long and severe economic contractions. Specifically, the typical country experiences a 6.5 percent contraction in output per capita in the 3 years leading up to default. This result suggests that the answer to the question posed in the title of this subsection is yes, countries default in bad times.

2 The trend was estimated over the longest available sample. Countries with less than 30 consecutive years of output observations were excluded. The longest output sample contains 54 observations ranging from 1960 to 2013. The shortest sample contains 33 output observations. The sample contains 105 default episodes.

Figure 11.2: Output Around Default Episodes
[Figure: y-axis: Percent deviation from trend; x-axis: Years after default.]
Note: Output is measured as real per capita GDP, log quadratically detrended at annual frequency. Median across the default episodes listed in table 11.9. Countries with less than 30 consecutive years of output data were excluded, resulting in 105 default episodes over the period 1975-2014. Source: Output is from World Development Indicators, and default dates are from table 11.9.
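The log-quadratic detrending described in the note can be sketched in a few lines. The sketch below is illustrative, not the authors' code: it fits log output on a constant, t, and t² by ordinary least squares and returns percent deviations from the fitted trend. The input series is synthetic (a pure quadratic trend in logs), so the deviations should be essentially zero.

```python
import math

def log_quadratic_deviations(y):
    """Percent deviations of y from a log-quadratic trend fitted by OLS."""
    n = len(y)
    ly = [math.log(v) for v in y]
    # Normal equations for log y_t = a + b*t + c*t^2 + e_t
    X = [[1.0, t, t * t] for t in range(n)]
    XtX = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(3)] for r in range(3)]
    Xty = [sum(X[i][r] * ly[i] for i in range(n)) for r in range(3)]
    # Solve the 3x3 system by Gauss-Jordan elimination (XtX is positive definite)
    for col in range(3):
        piv = XtX[col][col]
        XtX[col] = [v / piv for v in XtX[col]]
        Xty[col] /= piv
        for row in range(3):
            if row != col:
                f = XtX[row][col]
                XtX[row] = [u - f * v for u, v in zip(XtX[row], XtX[col])]
                Xty[row] -= f * Xty[col]
    a, b, c = Xty
    return [100.0 * (ly[t] - (a + b * t + c * t * t)) for t in range(n)]

# Synthetic series: an exact quadratic trend in logs, 40 annual observations
trend_only = [math.exp(0.01 * t + 0.0001 * t * t) for t in range(40)]
dev = log_quadratic_deviations(trend_only)
print(max(abs(d) for d in dev))  # essentially zero (numerical noise only)
```

Applied to actual GDP per capita series, the same function would return the cyclical component plotted in figure 11.2.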
Some authors, however, arrive at a different conclusion. For example, Tomz and Wright (2007, 2013), using data from 1820 to 2005, argue that countries do default during bad times, but that the supporting evidence is weak.3 Tomz and Wright’s argument is based on two observations. First, they find that at the time of default, output is only about 1.5 percent below trend. This finding is in line with the results shown in figure 11.2. However, this observation ignores that by the time the economy reaches the period of default, period 0 in the figure, it has contracted by more than 5 percent between periods -3 and -1. The second reason why Tomz and Wright argue that the evidence that countries default in bad times is weak is their finding that only 60 percent of the default episodes occur when output is below trend. This finding is also corroborated in the sample considered here. However, if one asks what fraction of the countries were contracting (i.e., experiencing output growth below trend growth) at the time of default, the answer is quite different. We find that 75 percent of the default episodes occur at a time in which output growth is below trend. This finding further strengthens the conclusion that defaults occur in times of significant economic distress. Note that the graph in figure 11.2 flattens one period after default. This means that output growth returns to its long-run trend one year after default. Levy Yeyati and Panizza (2011) were the first to identify the growth recovery regularity. We note, however, that even three years after the default, the level of output remains 4 percent below trend. The broad picture that emerges from figure 11.2 is that default marks the end of a large contraction, and the beginning of a growth recovery, albeit not the beginning of a recovery in the level of output.

3 Benjamin and Wright (2008) and Durdu, Nunes, and Sapriza (2010) arrive at similar conclusions using different samples.
Figure 11.3 shows that the conclusion that countries default in bad times extends to variables other than output. The figure displays the cyclical components of private consumption, gross investment, the trade-balance-to-GDP ratio, and the real effective exchange rate. Private consumption contracts by as much as output (about 6 percent) in the run up to default, and investment experiences a fall 3 times as large as output. At the same time, the trade balance is below average up until the year of default, when it experiences a reversal of about 2 percent. The bottom right panel displays the behavior of the real exchange rate. This variable is defined in such a way that an increase means a real appreciation (i.e., the economy in question becomes more expensive than its trading partners). The figure shows that the real exchange rate depreciates significantly, by more than 4 percent, in the year of default. After the default, the real exchange rate begins to gradually appreciate.

Figure 11.3: Consumption, Investment, The Trade Balance, and The Real Exchange Rate Around Default Episodes
[Figure: four panels titled Consumption, Investment, Trade-Balance-To-Output Ratio, and Real Exchange Rate; y-axes: Percent deviation from trend; x-axes: Years after default.]
Note: The data is annual. Consumption, investment, and the real exchange rate are log-quadratically detrended. The trade-balance-to-output ratio is linearly detrended. Median across all default episodes. Countries with less than 30 consecutive years of data were excluded. Data sources: Default dates are from table 11.9 and all other variables from World Development Indicators. An increase in the real exchange rate indicates a real appreciation of the domestic currency.

The Cost of Default

The empirical literature has identified three main sources of costs associated with sovereign default: exclusion from financial markets, output losses, and international trade reductions.

Default and Exclusion From Financial Markets

Do defaulting sovereigns lose access to credit markets upon default? And, if so, how long does it take them to regain access? One possible approach to addressing this question is to assume that countries are excluded from credit markets as long as they are in default status. Under this approach, exclusion from credit markets begins in the year the default occurs and reaccess takes place in the year after the country reemerges from default. According to table 11.2, for the sample 1975-2014, the average length of exclusion from international financial markets was 8 years. As mentioned earlier, however, this approach is a bit naive, because in principle nothing prevents foreign lenders from extending fresh loans to countries before or after their default disputes are settled. For instance, it took Argentina 11 years to settle its 1982 default (see table 11.9). However, according to Gelos, Sahay, and Sandleris (2011) the country began to receive fresh external funds already in 1986. At the same time, and according to the same sources, Chile was in default status from 1983 to 1990, but was able to reaccess international credit markets only in 1994. Thus, the arrival of new funds and the end of default status do not seem to always match, suggesting the importance of directly measuring the date at which countries are able to borrow again after a default. Gelos, Sahay, and Sandleris (2011) undertake this approach systematically by examining micro data on sovereign bond issuance and public syndicated bank loans over the period 1980-2000.
As in most of the related literature, including this chapter, their measure of the year of default follows Beers and Chambers (2006). Their innovation is to measure resumption of access to credit markets as the first year after default in which the government borrows either in the form of bonds or syndicated loans and the stock of debt increases. The latter requirement is aimed at avoiding counting as reaccess cases in which the country is simply rolling over an existing debt. It is not clear, however, that this requirement is appropriate. For the rolling over of existing debt is actually a manifestation of participation in international capital markets. Exclusion from financial markets is measured as the number of years between default and resumption. Gelos et al. find that the mean exclusion period is 4.7 years — about half the length obtained by using the date of settlement as a proxy for reaccess. However, this result is likely to be downwardly biased by the authors’ decision to include only default episodes associated with resumptions happening within the sample. That is, countries that defaulted between 1980 and 2000 and had not regained access to credit markets by 2000 were excluded from the calculations. The bias is likely to be strong because the sample includes equal numbers of resumptions and no resumptions before 2000.4 We therefore interpret the results reported in Gelos et al. as suggesting that the period of exclusion from financial markets resulting from sovereign default is at least 4 years. Using data from 1980 to 2005, Richmond and Dias (2009) also study the issue of exclusion after default. Their methodology differs from Gelos et al. (2011) in three aspects.

4 The authors also report a significant drop in the exclusion period from 4 to 2 years between the 1980s and the 1990s. This result is likely to be particularly affected by the bias reported in the body of the text.
First, Richmond and Dias measure net borrowing using aggregate data. Second, and more importantly, they exclude from the definition of reaccess situations in which an increase in borrowing reflects the capitalization of arrears.5 Third, they count years of exclusion starting the year the country emerged from default, as opposed to the year in which the country entered into default. Their definition of reaccess distinguishes the cases of partial and full reaccess. Partial reaccess is defined as the first year with positive aggregate flows to the public sector after the country has emerged from default. Full reaccess is defined as the first year in which debt flows exceed 1 percent of GDP. They find that on average countries regain partial access to credit markets 5.7 years after emerging from default and full access 8.4 years after emerging from default. To make these numbers comparable with those reported by Gelos et al., we add the average number of years a country is in default, which according to table 11.2 is 8 years. This adjustment is reasonable because Richmond and Dias find that only a small fraction (less than 10 percent in their sample) of default episodes were associated with reaccess while the country was in default. Thus, according to the adjusted estimate, countries regain partial access to international credit markets 13.7 years after default and full access 16.4 years after default. Cruces and Trebesch (2013) extend and combine the data and criteria for market exclusion of Gelos et al. (2011) and Richmond and Dias (2009). Their data on reaccess covers the period 1980 to 2010. They find that on average countries regain partial access 5.1 years after emerging from default and full access 7.4 years after emerging from default. Introducing the same adjustment we applied to the Richmond and Dias estimates to measure exclusion from the beginning of the default episode, the estimates of Cruces and Trebesch imply that it takes defaulters on average 13.1 years to regain partial access and 15.4 years to gain full access after default.

5 It is not clear that increases in net foreign government debt due to the capitalization of arrears are not the reflection of reaccess. For such situations may be the result of a successful negotiation between the lender and the debtor culminating in the ability of the latter to tap international markets again.

Table 11.4: Estimates Of Years Of Exclusion From Credit Markets After Default

Measure                                    Partial Reaccess   Full Reaccess     Sample
                                           (flows > 0)        (flows > 1%GDP)   Period
Length of Default Status (table 11.9)*      8                                   1975-2014
First Issuance of New Debt
-Gelos, Sahay, and Sandleris (2011)*        4.7                                 1980-2000
-Richmond and Dias (2009)**                 5.7                8.4              1980-2005
-Adjusted Richmond and Dias (2009)*        13.7               16.4              1980-2005
-Cruces and Trebesch (2013)**               5.1                7.4              1980-2010
-Adjusted Cruces and Trebesch (2013)*      13.1               15.4              1980-2010
Average                                     9.8               15.9

Note: Reaccess is measured in years after the beginning of default (*) or in years after the end of default (**). Averages are taken over single-star lines.

Table 11.4 summarizes the results of the estimates of exclusion we have discussed. The studies covered above also analyze the determinants of the length of exclusion after default. Gelos et al. (2011) find that the frequency of default is not a significant determinant of the length of exclusion. That is, markets seem to punish more or less equally one-time and serial defaulters. This finding lends support to an assumption along these lines maintained by most existing theories of sovereign default. Gelos et al. also find that defaults that resolve quickly do not result in significant exclusion. Richmond and Dias (2009) find that excusable defaults (such as those following a natural disaster) are associated with reduced exclusion periods.
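The adjustment behind table 11.4 is simple arithmetic and can be verified directly. The sketch below takes the Richmond-Dias and Cruces-Trebesch estimates (measured from the end of default) from the table, shifts them by the 8-year average default length from table 11.2, and recomputes the table's averages over the rows measured from the beginning of default.

```python
# Values from Table 11.4; the 8-year shift is the average default length
# over 1975-2014 reported in Table 11.2.
avg_default_length = 8.0

partial = {"Richmond-Dias": 5.7, "Cruces-Trebesch": 5.1}  # years after END of default
full = {"Richmond-Dias": 8.4, "Cruces-Trebesch": 7.4}

# Shift so all estimates count years from the BEGINNING of default
adj_partial = {k: v + avg_default_length for k, v in partial.items()}
adj_full = {k: v + avg_default_length for k, v in full.items()}

# Averages over the single-star rows (those measured from the start of default)
avg_partial = (8.0 + 4.7 + adj_partial["Richmond-Dias"] + adj_partial["Cruces-Trebesch"]) / 4
avg_full = (adj_full["Richmond-Dias"] + adj_full["Cruces-Trebesch"]) / 2

print(round(adj_partial["Richmond-Dias"], 1), round(adj_partial["Cruces-Trebesch"], 1))  # 13.7 13.1
print(round(adj_full["Richmond-Dias"], 1), round(adj_full["Cruces-Trebesch"], 1))        # 16.4 15.4
print(round(avg_partial, 3))  # 9.875, reported as 9.8 in the table
print(round(avg_full, 1))     # 15.9
```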
Cruces and Trebesch (2013) provide evidence suggesting that lenders may use exclusion as a punishment, by documenting that restructurings involving higher haircuts (i.e., higher losses to creditors) are associated with significantly longer periods of capital market exclusion. We close this discussion by pointing out that the existing attempts to measure exclusion do not distinguish supply and demand determinants of external credit. Since observed variations in debt are equilibrium outcomes, the lack of increase in debt need not be a reflection of supply restrictions related to sanctions. In this regard, the studies reviewed above may incorporate an upward bias in the estimates of the numbers of years a defaulter is excluded from international financial markets.

Output Costs Of Default

A standard assumption in theoretical models of sovereign debt is that default entails an output loss (see section ??). This assumption helps the model economy to sustain a higher level of external debt in equilibrium. A number of authors have attempted to empirically estimate this cost (see, for example, Chuan and Sturzenegger, 2005; Borensztein and Panizza, 2009; De Paoli, Hoggarth, and Saporta, 2011; and Levy Yeyati and Panizza, 2011). The typical approach is to use cross-country panel data to run a standard growth regression augmented with variables capturing default.
Borensztein and Panizza (2009), for example, estimate the following regression using data from 83 countries over the period from 1972 to 2000,

Growth_it = α + β X_it + γ Default_it + Σ_{j=0}^{3} δ_j DefaultB_{it−j} + ε_it,

where Growth_it denotes the growth rate of real per capita GDP in country i from year t − 1 to year t in percent, X_it denotes a vector of controls typically used in growth regressions,6 Default_it is a dummy variable taking the value 1 if country i is in default in year t and zero otherwise, DefaultB_it is a dummy variable taking the value 1 if country i entered default in period t, and ε_it is an error term. As in table 11.9, the dates of entering and exiting default are based on data from Standard and Poor’s. Borensztein and Panizza estimate γ = −1.184 and δ_j = −1.388, 0.481, 0.337, 0.994 for j = 0, 1, 2, 3, respectively, with γ and δ_0 significant at the 5 percent level or less. This estimate implies that the beginning of a default is accompanied by a 2.6 percent fall in the growth rate of output (γ + δ_0). Subsequently, the growth rate recovers. However, the level of output never recovers, implying that default is associated with a permanent loss of output.

6 The variables included in X_it are investment divided by GDP, population growth, GDP per capita in 1970, percentage of the population that completed secondary education, total population, lagged government consumption over GDP, an index of civil rights, the change in terms of trade, trade openness (defined as exports plus imports divided by GDP), a dummy variable taking a value of one in presence of a banking crisis, and three regional dummies (for sub-Saharan Africa, Latin America and Caribbean, and transition economies).

Figure 11.4: Simulated Path Of Output After A Default Implied By the Borensztein-Panizza (2009) Regression
[Figure: log of per capita output against years after default; two lines labeled "no default trajectory" and "default trajectory".]
Figure 11.4 displays with a broken line the logarithm of per capita output after a default implied by the Borensztein and Panizza regression. In the figure, the default is assumed to last for 5 years, the median length of the defaults reported in table 11.9. The long-run growth rate is assumed to be 1.5 percent, and the default date is normalized to 0. For comparison, the figure displays with a solid line the trajectory of output absent a default. After an initial fall, output gradually regains its long-run growth rate of 1.5 percent. However, the level of output remains forever 5.5 percent below the pre-default trajectory. Taken at face value, the above regression results suggest an enormous cost of default. But they are likely to represent an upwardly biased estimate of the output cost of default for two reasons. First, the regression may include an insufficient number of lags in the default variables Default_it and DefaultB_it. The former variable actually appears only contemporaneously, and the latter with 3 lags. To the extent that the coefficients associated with these variables are positive, the gap between the no-default trajectory and the default trajectory could be narrowed. Thus, adding more lags could be important even if they are individually estimated with low significance. Second, and more fundamentally, output growth and the default decision are endogenous variables, which may introduce a bias in the coefficients of the default variables. For instance, if defaults tend to occur during periods of low growth, then the estimated coefficients on the default variables may be negative even if default had no effect on growth. Thus, as stressed by Borensztein and Panizza (2009) and others, regression results of the type presented here should be interpreted as simply documenting a partial correlation between output growth and default.
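The permanent loss implied by these point estimates can be reproduced directly. The sketch below assumes, as in the figure, that the default lasts 5 years, that γ applies in every year of default, and that δ_0, ..., δ_3 apply in the first four years; cumulating growth shortfalls relative to the 1.5 percent trend yields the 5.5 percent permanent gap cited in the text.

```python
# Point estimates from the Borensztein-Panizza (2009) regression
gamma = -1.184                        # effect while in default, percent growth
delta = [-1.388, 0.481, 0.337, 0.994]  # effects in years 0..3 after entering default
trend_growth = 1.5                     # assumed long-run growth, percent
default_length = 5                     # median default length (table 11.9)

# Cumulate the growth shortfall relative to the no-default trend path
log_gap = 0.0  # gap of log output (in percent) relative to the no-default path
for t in range(default_length):
    growth = trend_growth + gamma + (delta[t] if t < len(delta) else 0.0)
    log_gap += growth - trend_growth

print(round(-log_gap, 1))  # 5.5: the permanent output loss, in percent
```

The year-0 shortfall alone is γ + δ_0 = −2.572, matching the 2.6 percent fall in growth reported above.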
Zarazaga (2012) proposes a growth accounting approach to gauge the output loss associated with default. He documents that the Argentine defaults of 1982 and 2001 were both characterized by a peak in the capital-output ratio in the run up to the default, followed by a significant decline in the years after the default (see figure 11.5). For instance, by 2002 the capital-output ratio had reached 1.9. Indeed, Zarazaga argues that a value of around 1.9 or higher is a normal long-run level for the capital-output ratio in emerging economies. Thus, his argument goes, absent any crisis, the capital-output ratio should have remained at 1.9 or higher after 2002. However, after the default the capital-output ratio fell, reaching a trough of 1.35 in 2007. What is the fall in output per person associated with this loss of capital per unit of final production? Zarazaga assumes a production function of the form y_t = k_t^0.4 h_t^0.6, where y_t denotes output, k_t denotes physical capital, and h_t denotes employment.

Figure 11.5: The Capital Stock Around Default Episodes: Argentina 1951-2009
Source: Zarazaga (2012).

This technology implies that the output-to-worker ratio, y_t/h_t, is linked to the capital-to-output ratio, k_t/y_t, by the relationship y_t/h_t = (k_t/y_t)^{2/3}. (To see this, write y_t/h_t = (k_t/h_t)^{0.4} and k_t/h_t = (k_t/y_t)(y_t/h_t), so (y_t/h_t)^{0.6} = (k_t/y_t)^{0.4}.) This means that if the capital-output ratio had not fallen between 2002 and 2007, that is, if k_t/y_t had not fallen from 1.9 to 1.35, output per worker in 2007 would have been 26 percent higher than it actually was (((1.9/1.35)^{2/3} − 1) × 100). Thus, on average, between 2002 and 2007, output per worker was 13 (= 26/2) percent lower than it would have been had the capital-output ratio not fallen. If one ascribes all of the fall in the capital-to-output ratio observed between 2002 and 2007 to the sovereign default of 2002, then one would conclude that the output cost of the default was 13 percent each year between 2002 and 2007.
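Zarazaga's arithmetic for the 2002 episode is easy to reproduce. The halving of the 26 percent figure in the sketch below treats the decline in the capital-output ratio as roughly linear between 2002 and 2007, as in the text's 13 = 26/2 calculation.

```python
# Growth-accounting cost of the Argentine 2001/02 default (Zarazaga, 2012):
# with y = k^0.4 h^0.6, output per worker equals (k/y)^(2/3), so the loss
# from a fall in k/y from its 1.9 "normal" level to the 1.35 trough is
# ((1.9/1.35)^(2/3) - 1) x 100 percent of output per worker.
peak_ky, trough_ky = 1.9, 1.35

loss_at_trough = ((peak_ky / trough_ky) ** (2.0 / 3.0) - 1.0) * 100.0
avg_loss_2002_2007 = loss_at_trough / 2.0  # average over a roughly linear decline

print(round(loss_at_trough))       # 26 percent by 2007
print(round(avg_loss_2002_2007))   # 13 percent per year, 2002-2007
```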
Further, because it takes time for the capital-output ratio to recover its long-run level, the 13 percent loss should continue for more years. Assuming that the recovery is as fast as the decline, the cost should be extended for another 5 years, that is, until 2012. In summary, this accounting suggests that the Argentine default of 2002 had an output cost of 13 percent per year per worker for 10 years, which is quite large. The 1982 default is not as clear cut as the default of 2001 because the capital-output ratio did not decline until late in the exclusion period (1982 to 1993). The capital-output ratio stayed at around 1.9 until 1990 and then fell to about 1.45 by 1996. It then took until 2002 to reach 1.9 again. Calculating the output loss following the same strategy applied to the 2002 default, we find that the output loss was 0 percent between 1982 and 1990 (8 years) and around 10 percent between 1990 and 2002 (12 years). On average, the output loss associated with the 1982 default was therefore 6 percent of output per worker per year for 20 years, which is also a large number. It is important to keep in mind that these cost estimates hinge on the assumption that the entire decline in the capital-output ratio was caused by the defaults. To the extent that the fall in the capital-output ratio was in part driven by factors other than default, the cost estimates must be interpreted as an upper bound.

Trade Costs of Default

Default episodes are also associated with disruptions in international trade. Rose (2005) investigates this issue empirically. The question of whether default disrupts international trade is of interest because if for some reason trade between two countries is significantly diminished as a result of one country defaulting on its financial debts with other countries, then maintaining access to international trade could represent a reason why countries tend to honor their international financial obligations.
Rose estimates an equation of the form

ln T_ijt = β_0 + β X_ijt + Σ_{m=0}^{M} φ_m ACRED_{ijt−m} + ε_ijt,

where T_ijt denotes the average real value of bilateral trade between countries i and j in period t. Rose identifies default with dates in which a country enters a debt restructuring deal with the Paris Club. The Paris Club is an informal association of creditor-country finance ministers and central bankers that meets to negotiate bilateral debt rescheduling agreements with debtor-country governments. The regressor ACRED_ijt is a proxy for default. It is a binary variable taking the value 1 if one of the countries in the pair (i, j) is involved with the other in a Paris-Club debt-restructuring deal in period t and zero otherwise. The main focus of Rose’s work is the estimation of the coefficients φ_m. Rose’s empirical model belongs to the family of gravity models. The variable X_ijt is a vector of regressors including (current and possibly lagged) characteristics of the country pair ij at time t such as distance, combined output, combined population, combined area, sharing of a common language, sharing of land borders, being co-signers of a free trade agreement, and having had a colonial relationship. The vector X_ijt also includes country-pair-specific dummies and current and lagged values of a variable denoted IMF_ijt that takes the values 0, 1, or 2, respectively, if neither, one, or both countries i and j engaged in an IMF program at time t. The data set used for the estimation of the model covers all bilateral trades between 217 countries between 1948 and 1997 at an annual frequency. The sample contains 283 Paris-Club debt-restructuring deals. Rose finds sensible estimates of the parameters pertaining to the gravity model. Specifically, countries that are more distant geographically trade less, whereas high-income country pairs trade more.
Countries that share a common currency, a common language, a common border, or membership in a regional free trade agreement trade more. Landlocked countries and islands trade less, and most of the colonial effects are large and positive. The inception of IMF programs is associated with a cumulative contraction in trade of about 10 percent over three years. Default, as measured by the dummy variable ACRED_ijt, has a significant and negative effect on bilateral trade. Rose estimates the parameter φ_m to be on average about −0.07, i.e., Σ_{m=0}^{M} φ_m/(1 + M) ≈ −0.07, and the lag length, M, to be about 15 years. This means that entering into a debt restructuring agreement with a member of the Paris Club leads to a decline in bilateral trade of about 7 percent per year for about 16 years. For instance, if trade in period -1 was 100 and the country enters a restructuring agreement with the Paris Club in period 0, then its trade in periods 0 to 15 will be on average 93. Thus, the cumulative effect of default on trade is more than one year’s worth of trade in the long run. Based on this finding, Rose concludes that one reason why countries pay back their international financial obligations is fear of trade disruptions in the case of default. Do the estimated values of φ_m really capture the effect of trade sanctions imposed by creditor countries on defaulting countries? Countries undergoing default or restructuring of their external financial obligations typically are subject to severe economic distress, which may be associated with a general decline in international trade that is unrelated to trade sanctions by the creditor country. If this is indeed the case, then the coefficients φ_m would be picking up the combined effects of trade sanctions and of general economic distress during default episodes.
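A back-of-the-envelope check of these magnitudes (this is the implied arithmetic, not Rose's actual estimation): a log-trade coefficient of about −0.07 sustained over lags 0 through 15 puts trade at roughly exp(−0.07) of its counterfactual level in each of 16 years.

```python
import math

phi_avg = -0.07      # average coefficient on the restructuring dummy
years_affected = 16  # lags m = 0, ..., 15

# Trade under default, per year, if pre-default trade was 100:
trade_level = 100.0 * math.exp(phi_avg)
# Cumulative shortfall over the 16 affected years:
cumulative_loss = years_affected * (100.0 - trade_level)

print(round(trade_level, 1))      # about 93.2, i.e., the "93" in the text
print(round(cumulative_loss, 1))  # about 108: more than one year's worth of trade
```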
To disentangle these two effects, Martínez and Sandleris (2011) estimate the following variant of Rose’s gravity model:

ln T_ijt = β_0 + β X_ijt + Σ_{m=0}^{M} φ_m ACRED_{ijt−m} + Σ_{m=0}^{M} γ_m DEBTOR_{ijt−m} + ε_ijt,

where DEBTOR_{ijt−m} is a binary variable taking the value one if either country i or country j is a debtor country involved in a debt-restructuring deal in the context of the Paris Club in period t, and zero otherwise. Notice that, unlike variable ACRED_ijt, variable DEBTOR_ijt is unity as long as one of the countries is a debtor involved in a restructuring deal with the Paris Club, regardless of whether or not the other country in the pair is the restructuring creditor. This regressor is meant to capture the general effect of default on trade with all trading countries, not just with those with which the debtor country is restructuring debt through the Paris Club. In this version of the gravity model, evidence of trade sanctions would require a point estimate for Σ_{m=0}^{M} φ_m that is negative and significant, and evidence of a general effect of default on trade would require a negative and significant estimate of Σ_{m=0}^{M} γ_m. Martínez and Sandleris estimate Σ_{m=0}^{15} γ_m to be −0.41. That is, when a country enters default its international trade falls by about 40 percent over 15 years with all countries. More importantly, they obtain a point estimate of Σ_{m=0}^{15} φ_m that is positive and equal to 0.01. The signs of the point estimates are robust to setting the number of lags, M, at 0, 5, or 10. This result would point at the absence of trade sanctions if creditor countries acted in isolation against defaulters. However, if creditors behaved collectively by applying sanctions to defaulters whether or not they are directly affected, then the γ_m coefficients might in part be capturing sanction effects. Martínez and Sandleris control for collective-sanction effects by estimating two additional variants of the gravity model.
One is of the form

$$\ln T_{ijt} = \beta_0 + \beta X_{ijt} + \sum_{m=0}^{M}\phi_m CRED_{ij,t-m} + \sum_{m=0}^{M}\gamma_m DEBTOR_{ij,t-m} + \epsilon_{ijt},$$

where $CRED_{ijt}$ is a binary variable that takes the value 1 if one of the countries in the pair $ij$ is a debtor in a debt restructuring deal with the Paris Club in period $t$ and the other is a creditor, independently of whether or not it is renegotiating with the debtor country in the pair. Evidence of trade sanctions would require $\sum_{m=0}^{M}\phi_m$ to be negative and significant. The point estimate of $\sum_{m=0}^{M}\phi_m$ turns out to be sensitive to the lag length considered. At lag lengths of 0, 5, and 10 years the point estimate is positive and equal to 0.09, 0.19, and 0.01, respectively. But when the lag length is set at 15 years, the point estimate turns negative and equal to $-0.19$.

The third variant of Rose's gravity model considered by Martínez and Sandleris aims at disentangling the individual and collective punishment effects. It takes the form

$$\ln T_{ijt} = \beta_0 + \beta X_{ijt} + \sum_{m=0}^{M}\phi_m ACRED_{ij,t-m} + \sum_{m=0}^{M}\xi_m NACRED_{ij,t-m} + \sum_{m=0}^{M}\gamma_m NOTCRED_{ij,t-m} + \epsilon_{ijt}.$$

Here, $ACRED_{ijt}$, $NACRED_{ijt}$, and $NOTCRED_{ijt}$ are all binary variables taking the values 1 or 0. The variable $NACRED_{ijt}$ takes the value 1 if one of the countries in the pair $ij$ is a defaulter negotiating its debt in the context of the Paris Club in period $t$ and the other country is a nonnegotiating Paris Club creditor (a nonaffected creditor). The variable $NOTCRED_{ijt}$ takes the value 1 if one of the countries in the pair $ij$ is a defaulter negotiating its debt in the context of the Paris Club in period $t$ and the other country is not a creditor. In this variant of the model, evidence of individual and collective trade sanctions would require both $\sum_{m=0}^{M}\phi_m$ and $\sum_{m=0}^{M}\xi_m$ to be negative and significant. The cumulative effect of default on trade between defaulters and nonaffected creditors, given by $\sum_{m=0}^{M}\xi_m$, is consistently negative and robust across lag lengths.
Specifically, it takes the values $-0.0246$, $-0.2314$, $-0.4675$, and $-0.5629$ at lag lengths of 0, 5, 10, and 15 years, respectively. However, the cumulative effect of default on trade between defaulters and directly affected creditors, given by $\sum_{m=0}^{M}\phi_m$, is again sensitive to the specified lag length, taking positive values at short and medium lag lengths and turning negative at long lag lengths. Specifically, the point estimate is 0.0631, 0.0854, 0.0119, and $-0.3916$ at lag lengths of 0, 5, 10, and 15, respectively.

We interpret the work of Martínez and Sandleris as suggesting that the importance of trade sanctions as a cost of default depends crucially upon one's beliefs regarding the magnitude of the delay with which creditors are able or willing to punish defaulting debtors. If one believes that a reasonable period over which creditors apply trade sanctions to defaulting debtors is less than a decade, then the gravity model offers little evidence of trade sanctions on defaulters. Virtually all of the observed decline in the bilateral trade of debtors after a default episode can be attributed to economic distress and not to punishment inflicted by creditors. However, if one believes that creditors have good memory and are capable of castigating defaulting debtors many years (more than a decade) after a default episode, then the gravity model identifies a significant punishment component in the observed decline in bilateral trade following default episodes, of about 50 percent of the trade volume cumulated over 15 years.

Default Incentives With State-Contingent Contracts

The focus of this section is to analyze the structure of international debt contracts when agents have access to state-contingent financial instruments but may lack commitment to honor debt obligations. The material in this section draws from the influential work of Grossman and Van Huyck (1988).
Consider a one-period economy facing a stochastic endowment given by

$$y = \bar y + \epsilon,$$

where $\bar y > 0$ is a constant and $\epsilon$ is a mean-zero random variable with density $\pi(\epsilon)$ defined over the interval $[\epsilon_L, \epsilon_H]$. Thus, $\bar y$ is the mean of the endowment process and $\epsilon$ is an endowment shock satisfying $\int_{\epsilon_L}^{\epsilon_H} \epsilon\,\pi(\epsilon)\,d\epsilon = 0$.

Assume that before $\epsilon$ is realized, the country can buy insurance against endowment shocks. This insurance is sold by foreign investors and takes the form of state-contingent debt contracts. Specifically, these debt contracts stipulate that the country must pay $d(\epsilon)$ units of goods to foreign lenders after the realization of the shock $\epsilon$. The random variable $d(\epsilon)$ can take positive or negative values. In states in which $d(\epsilon)$ is negative, the country receives a payment from foreign lenders, and in states in which $d(\epsilon)$ is positive, the country makes payments to the rest of the world.

Foreign lenders are assumed to be risk neutral, to operate in a perfectly competitive market, and to face an opportunity cost of funds equal to zero. These assumptions imply that debt contracts carrying an expected payment of zero are sufficient to ensure the participation of foreign investors. Formally, the zero-expected-profit condition, known as the participation constraint, can be written as

$$\int_{\epsilon_L}^{\epsilon_H} d(\epsilon)\pi(\epsilon)\,d\epsilon = 0. \qquad (11.1)$$

The country seeks to maximize the welfare of its representative consumer, which is assumed to be of the form

$$\int_{\epsilon_L}^{\epsilon_H} u(c(\epsilon))\pi(\epsilon)\,d\epsilon, \qquad (11.2)$$

where $c(\epsilon)$ denotes consumption, and $u(\cdot)$ is a strictly increasing and strictly concave utility index. For the remainder of this section, we will use the terms country and household indistinctly. The household's budget constraint is given by

$$c(\epsilon) = \bar y + \epsilon - d(\epsilon). \qquad (11.3)$$

We are now ready to characterize the form of the optimal external debt contract. We begin by considering the case in which the country can commit to honor its promises.
The Optimal Debt Contract With Commitment

Let's assume that after the realization of the endowment shock $\epsilon$, the household honors any promises made before the occurrence of the shock. In this case, before the realization of the shock, the household's problem consists in choosing a state-contingent debt contract $d(\epsilon)$ to maximize

$$\int_{\epsilon_L}^{\epsilon_H} u(c(\epsilon))\pi(\epsilon)\,d\epsilon,$$

subject to

$$c(\epsilon) = \bar y + \epsilon - d(\epsilon)$$

and

$$\int_{\epsilon_L}^{\epsilon_H} d(\epsilon)\pi(\epsilon)\,d\epsilon = 0.$$

The Lagrangian associated with this problem can be written as

$$\int_{\epsilon_L}^{\epsilon_H} \pi(\epsilon)\left[u(\bar y + \epsilon - d(\epsilon)) + \lambda d(\epsilon)\right]d\epsilon,$$

where $\lambda$ denotes the Lagrange multiplier associated with the participation constraint (11.1). Note that $\lambda$ is not state contingent. The first-order conditions associated with the representative household's problem are (11.1), (11.3), and

$$u'(c(\epsilon)) = \lambda.$$

Noting that the multiplier $\lambda$ is independent of $\epsilon$, this expression and the budget constraint (11.3) imply that consumption is also independent of $\epsilon$. That is, the optimal debt contract achieves perfect consumption smoothing across states of nature. Multiplying both sides of the budget constraint (11.3) by $\pi(\epsilon)$ and integrating over the interval $[\epsilon_L, \epsilon_H]$ yields $c(\epsilon)\int_{\epsilon_L}^{\epsilon_H}\pi(\epsilon)\,d\epsilon = \bar y \int_{\epsilon_L}^{\epsilon_H}\pi(\epsilon)\,d\epsilon + \int_{\epsilon_L}^{\epsilon_H}\epsilon\,\pi(\epsilon)\,d\epsilon - \int_{\epsilon_L}^{\epsilon_H}d(\epsilon)\pi(\epsilon)\,d\epsilon$. In this step, we are using the fact that $c(\epsilon)$ is independent of $\epsilon$. Since $\int_{\epsilon_L}^{\epsilon_H}\pi(\epsilon)\,d\epsilon = 1$, $\int_{\epsilon_L}^{\epsilon_H}\epsilon\,\pi(\epsilon)\,d\epsilon = 0$, and $\int_{\epsilon_L}^{\epsilon_H}d(\epsilon)\pi(\epsilon)\,d\epsilon = 0$, we have that $c(\epsilon) = \bar y$. That is, under the optimal contract consumption equals the average endowment in all states. It then follows from the budget constraint (11.3) that the associated debt payments are exactly equal to the endowment shocks,

$$d(\epsilon) = \epsilon.$$

Under the optimal contract, domestic risk-averse households transfer all of their income uncertainty to risk-neutral foreign lenders. They do so by receiving full compensation from foreign investors for any realization of the endowment below average and by transferring to foreign investors any amount of endowment in excess of the mean.
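The full-commitment contract $d(\epsilon) = \epsilon$ can be verified numerically: it satisfies the participation constraint, delivers constant consumption $\bar y$, and, by Jensen's inequality, yields higher welfare than consuming the risky endowment. The uniform shock grid and log utility below are illustrative choices only, not part of the model's assumptions.

```python
import math

# Full-commitment contract d(eps) = eps on a discrete, mean-zero grid
# approximating a uniform shock on [-a, a] (illustrative choices).
a, y_bar = 0.5, 1.0
n = 2001
grid = [-a + 2 * a * i / (n - 1) for i in range(n)]   # symmetric grid
prob = 1.0 / n                                        # equal weights

d = {e: e for e in grid}                              # candidate contract
c = {e: y_bar + e - d[e] for e in grid}               # consumption = y_bar

participation = sum(d[e] * prob for e in grid)        # expected payment
welfare_commit = sum(math.log(c[e]) * prob for e in grid)
welfare_autarky = sum(math.log(y_bar + e) * prob for e in grid)

print(participation)                      # ~0: lenders break even
print(welfare_commit > welfare_autarky)   # True, by Jensen's inequality
```

The welfare comparison previews the no-commitment result below: eliminating all consumption risk is strictly valuable to a risk-averse household even though mean consumption is unchanged.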
Thus, net payments to the rest of the world move one for one with the endowment, that is, $d'(\epsilon) = 1$. The derivative of the debt contract with respect to the endowment shock is a convenient summary of how much insurance the contract provides. A unit slope is a benchmark that we will use to ascertain how much protection from output fluctuations the country can achieve through the optimal debt contract under alternative environments varying in the amount of commitment the country has to honor debt, and in the ability of foreign lenders to punish defaulters.

Optimal Debt Contract Without Commitment

In the economy under analysis, there are no negative consequences for not paying debt obligations. Moreover, debtors have incentives not to pay. In effect, in any state of the world in which the contract stipulates a payment to foreign lenders (i.e., in states in which the endowment is above average), the debtor country would be better off consuming the resources it owes. After consuming these resources, the world simply ends, so debtors cannot be punished for having defaulted. The perfect-risk-sharing equilibrium we analyzed in the previous subsection was built on the basis that the sovereign can resist the temptation to default. What if this commitment to honoring debts was absent? Clearly, in our one-shot world, the country would default in any state in which the contract stipulates a payment to the rest of the world. It then follows that any debt contract must include the additional incentive-compatibility constraint

$$d(\epsilon) \le 0, \qquad (11.4)$$

for all $\epsilon \in [\epsilon_L, \epsilon_H]$. The representative household's problem then is to maximize

$$\int_{\epsilon_L}^{\epsilon_H} u(c(\epsilon))\pi(\epsilon)\,d\epsilon,$$

subject to

$$c(\epsilon) = \bar y + \epsilon - d(\epsilon),$$
$$\int_{\epsilon_L}^{\epsilon_H} d(\epsilon)\pi(\epsilon)\,d\epsilon = 0,$$
$$d(\epsilon) \le 0.$$

Restrictions (11.1) and (11.4) state that debt payments must be zero on average and never positive. The only debt contract that can satisfy these two requirements simultaneously is clearly $d(\epsilon) = 0$, for all $\epsilon$.
This is a trivial contract stipulating no transfers of any sort in any state. It follows that under lack of commitment international risk sharing breaks down. No meaningful debt contract can be supported in equilibrium. As a result, the country is in complete financial autarky and must consume its endowment in every state,

$$c(\epsilon) = \bar y + \epsilon,$$

for all $\epsilon$. This consumption profile has the same mean as the one that can be supported with commitment, namely, $\bar y$. However, the consumption plan under commitment is constant across states, whereas the one associated with autarky inherits the volatility of the endowment process. It follows immediately that risk-averse households (i.e., households with concave preferences) are worse off in the financially autarkic economy.⁷ Put differently, commitment is welfare increasing. Because in the economy without commitment international transfers are constant (and equal to zero) across states, we have that $d'(\epsilon) = 0$. This result is in sharp contrast with what we obtained under full commitment. In that case, the derivative of debt payments with respect to the endowment shock is unity at all endowment levels.

Direct Sanctions

Suppose that foreign lenders (or their representative governments) could punish defaulting sovereigns by seizing national property, such as financial assets or exports. One would expect that this type of action would deter borrowers from defaulting at least as long as debt obligations do not exceed the value of the seizure. What is the shape of the optimal debt contract that emerges in this type of environment?

We model direct sanctions by assuming that in the case of default lenders can seize $k > 0$ units of goods from the delinquent debtor. It follows that the borrower will honor all debts not exceeding $k$ in value. Formally, this means that the incentive-compatibility constraint now takes the form

$$d(\epsilon) \le k. \qquad (11.5)$$

Under commitment, the optimal debt contract stipulates a maximum payment of $\epsilon_H$.
This means that if $k \ge \epsilon_H$, the optimal contract under commitment can be supported in an environment without commitment but with sanctions. At the opposite extreme, if $k = 0$, we are in the case with no commitment and no sanctions, and no payments can be supported in equilibrium, which results in financial autarky. Here, our interest is to characterize the optimal debt contract in the intermediate case $k \in (0, \epsilon_H)$.

⁷ Formally, by the definition of concavity, we have that $u\left(\int_{\epsilon_L}^{\epsilon_H}(\bar y + \epsilon)\pi(\epsilon)\,d\epsilon\right) > \int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon)\pi(\epsilon)\,d\epsilon$.

The representative household's problem then consists in maximizing

$$\int_{\epsilon_L}^{\epsilon_H} u(c(\epsilon))\pi(\epsilon)\,d\epsilon,$$

subject to

$$c(\epsilon) = \bar y + \epsilon - d(\epsilon),$$
$$\int_{\epsilon_L}^{\epsilon_H} d(\epsilon)\pi(\epsilon)\,d\epsilon = 0,$$
$$d(\epsilon) \le k.$$

The Lagrangian associated with this problem can be written as

$$\int_{\epsilon_L}^{\epsilon_H} \left\{u(\bar y + \epsilon - d(\epsilon)) + \lambda d(\epsilon) + \gamma(\epsilon)\left[k - d(\epsilon)\right]\right\}\pi(\epsilon)\,d\epsilon,$$

where $\lambda$ denotes the Lagrange multiplier associated with the participation constraint (11.1) and $\gamma(\epsilon)\pi(\epsilon)$ denotes the Lagrange multiplier associated with the incentive-compatibility constraint (11.5) in state $\epsilon$ (there is a continuum of such multipliers, one for each possible value of $\epsilon$). The first-order conditions associated with the representative household's problem are (11.1), (11.3), (11.5), and

$$u'(c(\epsilon)) = \lambda - \gamma(\epsilon), \qquad (11.6)$$

$$\gamma(\epsilon) \ge 0, \qquad (11.7)$$

and the slackness condition

$$(k - d(\epsilon))\gamma(\epsilon) = 0. \qquad (11.8)$$

In states in which the incentive-compatibility constraint does not bind, i.e., when $d(\epsilon) < k$, the slackness condition (11.8) states that the Lagrange multiplier $\gamma(\epsilon)$ must vanish. It then follows from optimality condition (11.6) that the marginal utility of consumption equals $\lambda$ for all states of nature in which the incentive-compatibility constraint does not bind. This means that consumption is constant across all states in which the incentive-compatibility constraint does not bind. In turn, the budget constraint (11.3) implies that across states in which the incentive-compatibility constraint does not bind, payments to foreign lenders must differ from the endowment innovation by only a constant.
Formally, we have that $d(\epsilon) = \bar d + \epsilon$, for all $\epsilon$ such that $d(\epsilon) < k$, where $\bar d$ is an endogenously determined constant.

Based on our analysis of the case with commitment, in which payments to the rest of the world take place in states of nature featuring positive endowment shocks, it is natural to conjecture that the optimal contract will feature the incentive-compatibility constraint binding at relatively high levels of income and not binding at relatively low levels of income. To see that this is indeed the case, let us show that if $d(\epsilon') < k$ for some $\epsilon' \in (\epsilon_L, \epsilon_H)$, then $d(\epsilon'') < d(\epsilon')$ for any $\epsilon'' \in [\epsilon_L, \epsilon')$. The proof is by contradiction. Let $\epsilon'' \in [\epsilon_L, \epsilon')$. Suppose that $d(\epsilon'') \ge d(\epsilon')$. Then, by the budget constraint (11.3), we have that $c(\epsilon'') = \bar y + \epsilon'' - d(\epsilon'') < \bar y + \epsilon' - d(\epsilon') = c(\epsilon')$. It follows from the strict concavity of the utility function that $u'(c(\epsilon'')) > u'(c(\epsilon'))$. Then by optimality condition (11.6) we have that $\gamma(\epsilon'') < \gamma(\epsilon')$. But $\gamma(\epsilon') = 0$ by the slackness condition (11.8) and the assumption that $d(\epsilon') < k$. So we have that $\gamma(\epsilon'') < 0$, which contradicts optimality condition (11.7).

It follows that there exists an $\bar\epsilon$ such that

$$d(\epsilon) = \begin{cases} \bar d + \epsilon & \text{for } \epsilon < \bar\epsilon \\ k & \text{for } \epsilon > \bar\epsilon \end{cases} \qquad (11.9)$$

We will show shortly that the debt contract described by this expression is indeed continuous in the endowment. That is, we will show that

$$d(\bar\epsilon) = \bar d + \bar\epsilon = k. \qquad (11.10)$$

We will also show that this condition implies that the constant $\bar d$ is indeed positive. This means that under the optimal debt contract without commitment but with direct sanctions the borrower enjoys less insurance than in the case of full commitment. This is because in relatively low-endowment states (i.e., states in which the incentive-compatibility constraint does not bind) the borrower must pay $\bar d + \epsilon$, which is a larger sum than the one that is stipulated for the same state in the optimal contract with full commitment, given simply by $\epsilon$.
To see that if condition (11.10) holds, i.e., if the optimal debt contract is continuous, then $\bar d$ is positive, write the participation constraint (11.1), which indicates that debt payments must be nil on average, as

$$0 = \int_{\epsilon_L}^{\bar\epsilon} (\bar d + \epsilon)\pi(\epsilon)\,d\epsilon + \int_{\bar\epsilon}^{\epsilon_H} k\,\pi(\epsilon)\,d\epsilon$$
$$= \int_{\epsilon_L}^{\bar\epsilon} (\bar d + \epsilon)\pi(\epsilon)\,d\epsilon + \int_{\bar\epsilon}^{\epsilon_H} (\bar d + \bar\epsilon)\pi(\epsilon)\,d\epsilon$$
$$= \bar d + \int_{\epsilon_L}^{\bar\epsilon} \epsilon\,\pi(\epsilon)\,d\epsilon + \int_{\bar\epsilon}^{\epsilon_H} \bar\epsilon\,\pi(\epsilon)\,d\epsilon$$
$$= \bar d - \int_{\bar\epsilon}^{\epsilon_H} (\epsilon - \bar\epsilon)\pi(\epsilon)\,d\epsilon.$$

Since $\bar\epsilon < \epsilon_H$, we have that⁸ $\bar d > 0$.

⁸ To see that $\bar\epsilon < \epsilon_H$, show that if $\bar\epsilon = \epsilon_H$, then (11.1) and (11.9) imply that $d(\epsilon) = \epsilon$, which violates the incentive-compatibility constraint (11.5) for all $\epsilon > k$.

In showing that $\bar d$ is positive, we made use of the conjecture that the debt contract is continuous in the endowment, that is, that $\bar d + \bar\epsilon = k$. We are now ready to establish this result. Using (11.9) to eliminate $d(\epsilon)$ from (11.2) and (11.1), the optimal contract sets $\bar\epsilon$ and $\bar d$ to maximize

$$\int_{\epsilon_L}^{\bar\epsilon} u(\bar y - \bar d)\pi(\epsilon)\,d\epsilon + \int_{\bar\epsilon}^{\epsilon_H} u(\bar y + \epsilon - k)\pi(\epsilon)\,d\epsilon \qquad (11.11)$$

subject to

$$\int_{\epsilon_L}^{\bar\epsilon} (\bar d + \epsilon)\pi(\epsilon)\,d\epsilon + [1 - F(\bar\epsilon)]k = 0, \qquad (11.12)$$

where $F(\bar\epsilon) \equiv \int_{\epsilon_L}^{\bar\epsilon}\pi(\epsilon)\,d\epsilon$ denotes the probability that $\epsilon$ is less than $\bar\epsilon$. Now differentiate (11.11) with respect to $\bar d$ and $\bar\epsilon$ and set the result equal to zero. Also, differentiate (11.12). The resulting expressions are, respectively,

$$-u'(\bar y - \bar d)F(\bar\epsilon)\,d\bar d + \left[u(\bar y - \bar d) - u(\bar y + \bar\epsilon - k)\right]\pi(\bar\epsilon)\,d\bar\epsilon = 0 \qquad (11.13)$$

and

$$F(\bar\epsilon)\,d\bar d + \left[\bar\epsilon - k + \bar d\,\right]\pi(\bar\epsilon)\,d\bar\epsilon = 0. \qquad (11.14)$$

Combining equations (11.13) and (11.14) we obtain the optimality condition

$$-u'(\bar y - \bar d)\left[k - \bar d - \bar\epsilon\right] + \left[u(\bar y - \bar d) - u(\bar y + \bar\epsilon - k)\right] = 0. \qquad (11.15)$$

Conditions (11.12) and (11.15) represent a system of two equations in the two unknowns, $\bar\epsilon$ and $\bar d$. Clearly, equation (11.15) is satisfied for any pair $(\bar\epsilon, \bar d)$ such that $\bar d + \bar\epsilon = k$. That is, any continuous contract from the family defined in (11.9) satisfies the optimality condition (11.15). We now need to show that there exists a continuous contract that satisfies (11.12). To this end, replace $\bar d$ in (11.12) by $k - \bar\epsilon$ to obtain

$$k = \int_{\epsilon_L}^{\bar\epsilon} (\bar\epsilon - \epsilon)\pi(\epsilon)\,d\epsilon.$$
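The threshold condition $k = \int_{\epsilon_L}^{\bar\epsilon}(\bar\epsilon - \epsilon)\pi(\epsilon)\,d\epsilon$ can be solved explicitly once a density is specified. The sketch below uses a uniform shock on $[-a, a]$, an illustrative choice only, solving for $\bar\epsilon$ by bisection and recovering $\bar d = k - \bar\epsilon$:

```python
import math

# Direct-sanctions contract with a uniform shock on [-a, a] (illustrative).
# Solve k = integral_{eps_L}^{eps_bar} (eps_bar - eps) pi(eps) d eps
# for the threshold eps_bar by bisection, then recover d_bar = k - eps_bar.
a, k = 0.5, 0.2          # shock half-width and seizure size, with k in (0, a)

def lhs(eps_bar):
    # Closed-form integral for the uniform density pi = 1/(2a):
    # integral_{-a}^{eps_bar} (eps_bar - eps)/(2a) d eps = (eps_bar + a)**2 / (4a)
    return (eps_bar + a) ** 2 / (4 * a)

lo, hi = -a, a
for _ in range(80):                      # bisection on the nondecreasing lhs
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lhs(mid) < k else (lo, mid)
eps_bar = 0.5 * (lo + hi)

d_bar = k - eps_bar                      # constant part of the payment schedule
print(eps_bar, d_bar)                    # d_bar > 0, as the text shows
```

For the uniform case the bisection result can be checked against the closed form $\bar\epsilon = -a + 2\sqrt{ak}$, which indeed lies in $(\epsilon_L, \epsilon_H)$ whenever $k \in (0, a)$.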
The function $(\bar\epsilon - \epsilon)\pi(\epsilon)$ is nonnegative for $\epsilon \le \bar\epsilon$, which means that the right-hand side of this expression is a continuous, nondecreasing function of $\bar\epsilon$. Moreover, the right-hand side takes the value 0 at $\bar\epsilon = \epsilon_L$ and the value $\epsilon_H$ at $\bar\epsilon = \epsilon_H$. Since the sanction $k$ belongs to the interval $(0, \epsilon_H)$, we have that there is at least one value of $\bar\epsilon$ that satisfies the above expression. We have therefore established that there exists at least one continuous debt contract that satisfies the two optimality conditions (11.12) and (11.15). Clearly, if the density function $\pi(\epsilon)$ is strictly positive for all $\epsilon \in (\epsilon_L, \epsilon_H)$, then there is a unique continuous contract that satisfies both optimality conditions.

Our analysis shows that the case with no commitment and direct sanctions falls in between the case with full commitment and the case with no commitment and no direct sanctions. In particular, payments to foreign creditors increase one for one with the endowment shock for $\epsilon < \bar\epsilon$ (as in the case with full commitment), and are independent of the endowment shock for $\epsilon$ larger than $\bar\epsilon$ (as in the case without commitment and no direct sanctions), that is,

$$d'(\epsilon) = \begin{cases} 1 & \epsilon < \bar\epsilon \\ 0 & \epsilon > \bar\epsilon \end{cases}$$

Note also that if the sanction is sufficiently large (specifically, if $k > \epsilon_H$), then $\bar\epsilon > \epsilon_H$ and the optimal contract is identical to the one that results in the case of full commitment. By contrast, if creditors are unable to impose sanctions ($k = 0$), then $\bar\epsilon = \epsilon_L$ and the optimal contract stipulates financial autarky, as in the case with neither commitment nor direct sanctions.

Figure 11.6: Consumption Profiles Under Full Commitment and No Commitment With Direct Sanctions. [Figure omitted.] Note: $c^c(\epsilon)$ and $c^s(\epsilon)$ denote the levels of consumption in state $\epsilon$ under commitment and sanctions, respectively, $y(\epsilon) \equiv \bar y + \epsilon$ denotes output, and $\epsilon$ denotes the endowment shock.

It follows, perhaps
paradoxically, that the larger the ability of creditors to punish debtor countries in case of default, the higher the welfare of the debtor countries themselves.

Finally, it is of interest to compare the consumption profiles across states in the model with commitment and in the model with direct sanctions and no commitment. Figure 11.6 provides a graphical representation of this comparison. In the model with commitment, consumption is perfectly smooth across states and equal to the average endowment. As mentioned earlier, in this case the risk-averse debtor country transfers all of the risk to risk-neutral lenders. In the absence of commitment, consumption smoothing is a direct function of the ability of the lender to punish debtors in the case of default. Consumption is flat in low-endowment states (from $\epsilon_L$ to $\bar\epsilon$) and increasing in the endowment in high-endowment states (from $\bar\epsilon$ to $\epsilon_H$). The reduced ability of the risk-averse agent to transfer risk to risk-neutral lenders is reflected in two features of the consumption profile. First, the profile is no longer flat across all states of nature. Second, the flat segment of the consumption profile is lower than the level of consumption achieved under full commitment. This means that in the case with sanctions but no commitment, although households are protected by a safety net ($\bar y - \bar d$) below which consumption cannot fall no matter how severe the contraction of their income is, this safety net is more precarious than the one provided by full commitment ($\bar y > \bar y - \bar d$).

Finally, a consequence of our maintained assumption that asset markets are complete is that the punishment is never materialized in equilibrium. The incentive-compatibility constraint ensures that the amount of payments contracted in each state of nature never exceeds the possible punishment. As a result, in equilibrium, the debtor never has an incentive to default.
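The two consumption profiles compared in Figure 11.6 can be traced out numerically. The uniform shock and seizure size below are illustrative only, and the threshold uses the closed form that obtains under a uniform density:

```python
import math

# Consumption profiles in state eps: c_c(eps) = y_bar under commitment,
# and, with direct sanctions, c_s(eps) = y_bar - d_bar for eps < eps_bar
# and y_bar + eps - k for eps > eps_bar. A uniform shock on [-a, a] is
# illustrative; there eps_bar solves k = (eps_bar + a)**2 / (4a).
a, k, y_bar = 0.5, 0.2, 1.0
eps_bar = -a + 2 * math.sqrt(a * k)
d_bar = k - eps_bar

def c_sanctions(eps):
    return y_bar - d_bar if eps < eps_bar else y_bar + eps - k

grid = [-0.4, -0.2, 0.0, eps_bar, 0.3, 0.5]
profile = [c_sanctions(e) for e in grid]
print(profile)
# The flat segment sits strictly below y_bar, and consumption rises with
# eps once the incentive-compatibility constraint binds.
```

The printed profile reproduces the two features highlighted in the text: a flat safety-net segment at $\bar y - \bar d < \bar y$, joined continuously at $\bar\epsilon$ to an increasing segment.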
Reputation

Suppose now that creditors do not have access to direct sanctions to punish debtors who choose to default. Instead, assume that creditors have the ability to exclude delinquent debtors from financial markets. Because financial autarky entails the cost of elevated consumption volatility, financial exclusion has the potential to support international lending. Debtor countries pay their obligations to maintain their performing status.

Clearly, the model we have in mind here can no longer be a one-period model like the one studied thus far. Time is at the center of any reputational model of debt. Accordingly, we assume that the debtor country lives forever and each period receives an endowment equal to $\bar y + \epsilon$, where $\bar y$ is a positive constant and $\epsilon$ is a mean-zero random variable with a time-invariant density $\pi(\epsilon) > 0$ over the continuous support $[\epsilon_L, \epsilon_H]$. For simplicity, we assume that the country cannot transfer resources intertemporally via a storage technology or financial markets.

In any date $t$ and state $\epsilon$, if the country defaulted in the present or in the past, it is considered to be in bad financial standing. Suppose that foreign lenders punish countries that are in bad financial standing by perpetually excluding them from international financial markets. Let $v^b(\epsilon)$ denote the welfare of a country that is in bad financial standing. Then, we have that $v^b(\epsilon)$ is given by

$$v^b(\epsilon) \equiv u(\bar y + \epsilon) + \beta \int_{\epsilon_L}^{\epsilon_H} v^b(\epsilon')\pi(\epsilon')\,d\epsilon' = u(\bar y + \epsilon) + \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon')\pi(\epsilon')\,d\epsilon'.$$

Consider designing a debt contract with the following four characteristics: (1) It is signed before period 0 and stipulates precise payments in every date $t \ge 0$ and state $\epsilon \in [\epsilon_L, \epsilon_H]$. (2) Payments are state contingent, but time independent. That is, the contract stipulates that in any state $\epsilon$, the country must pay $d(\epsilon)$ to foreign lenders, independently of $t$. This restriction makes sense because the endowment is assumed to be i.i.d.
(3) The contract is incentive compatible; that is, in every state and date, the country prefers to pay its debt rather than default. (4) The contract satisfies the participation constraint (11.1) period by period. That is, for each date, the expected value of payments to foreign lenders across states must be equal to zero.

In any date and state, the country can be either in good or bad financial standing. For any date $t$ and state $\epsilon$, we denote by $v^g(\epsilon)$ the welfare of a country that enters the period in good standing and by $v^c(\epsilon)$ the welfare of a country that enters the period in good standing and chooses to honor its external obligations in that period. Then, we have that

$$v^c(\epsilon) \equiv u(\bar y + \epsilon - d(\epsilon)) + \beta \int_{\epsilon_L}^{\epsilon_H} v^g(\epsilon')\pi(\epsilon')\,d\epsilon' \qquad (11.17)$$

and

$$v^g(\epsilon) = \max\{v^b(\epsilon), v^c(\epsilon)\}.$$

The incentive-compatibility constraint requires that the country always prefer to honor its debt, or, formally, that

$$v^c(\epsilon) \ge v^b(\epsilon). \qquad (11.18)$$

The above two expressions then imply that

$$v^g(\epsilon) = v^c(\epsilon).$$

This equation implies that in equilibrium countries never default. As we will see shortly, this will not be the case in models with incomplete financial markets. The above expression can be used to eliminate $v^g(\epsilon)$ from (11.17) to get

$$v^c(\epsilon) = u(\bar y + \epsilon - d(\epsilon)) + \beta \int_{\epsilon_L}^{\epsilon_H} v^c(\epsilon')\pi(\epsilon')\,d\epsilon'.$$

Iterating this expression forward yields

$$v^c(\epsilon) = u(\bar y + \epsilon - d(\epsilon)) + \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon' - d(\epsilon'))\pi(\epsilon')\,d\epsilon'.$$

We can then rewrite the incentive-compatibility constraint (11.18) as

$$u(\bar y + \epsilon - d(\epsilon)) + \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon' - d(\epsilon'))\pi(\epsilon')\,d\epsilon' \ge u(\bar y + \epsilon) + \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon')\pi(\epsilon')\,d\epsilon'. \qquad (11.20)$$

This restriction must hold for every state $\epsilon$.

Before continuing with the characterization of the optimal debt contract, it is instructive to see what factors make the first-best contract, $d(\epsilon) = \epsilon$, fail in the present environment without commitment. Evaluating the above incentive-compatibility constraint at the first-best contract and rearranging terms, one obtains

$$u(\bar y + \epsilon) - u(\bar y) \le \frac{\beta}{1-\beta}\left[u(\bar y) - Eu(\bar y + \epsilon)\right],$$

where $Eu(\bar y + \epsilon) \equiv \int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon)\pi(\epsilon)\,d\epsilon$ denotes the expected value of the period utility under financial autarky. The left-hand side of this expression measures the short-run gains of defaulting, whereas the right-hand side, which is positive because of the assumption of strict concavity of the period utility index, measures the long-run costs of default. The short-run gains have to do with the extra utility derived from above-average realizations of the current endowment, and the long-run costs of default are associated with the lack of consumption smoothing that defaulters must endure under financial autarky. In general, incentives to default are stronger the larger the current realization of the endowment. In particular, under the first-best debt contract, default will take place only in states in which the endowment is above average ($\epsilon > 0$). Also, the more impatient the debtor is (i.e., the lower $\beta$ is), the larger the incentive to default. Intuitively, an impatient debtor does not place much value on the fact that future expected utility is higher under the first-best contract than under financial isolation. In addition, all other things equal, more risk-averse countries have weaker incentives to default. It follows from this analysis that the first-best contract is in general not incentive compatible in the absence of commitment, even if creditors could use financial isolation as a discipline device.

Let's get back, then, to the characterization of the optimal incentive-compatible debt contract. Because the debt-contracting problem is stationary, in the sense that the contract must be time independent, it suffices to maximize the period utility index,

$$\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon - d(\epsilon))\pi(\epsilon)\,d\epsilon,$$

subject to the participation constraint (11.1) and the incentive-compatibility constraint (11.20).
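The first-best incentive condition derived above, $u(\bar y + \epsilon) - u(\bar y) \le \frac{\beta}{1-\beta}[u(\bar y) - Eu(\bar y + \epsilon)]$, can be checked state by state on a grid. Log utility and the specific grid below are illustrative choices only:

```python
import math

# Check incentive compatibility of the first-best contract d(eps) = eps:
#   u(y_bar + eps) - u(y_bar) <= (beta/(1-beta)) * (u(y_bar) - E u(y_bar + eps)).
# Log utility and a uniform mean-zero grid of shocks are illustrative.
y_bar, beta = 1.0, 0.9
u = math.log
n = 201
eps_grid = [-0.4 + 0.8 * i / (n - 1) for i in range(n)]      # symmetric grid
Eu_aut = sum(u(y_bar + e) for e in eps_grid) / n             # E u under autarky

rhs = beta / (1 - beta) * (u(y_bar) - Eu_aut)                # long-run cost
violators = [e for e in eps_grid if u(y_bar + e) - u(y_bar) > rhs]

print(rhs)
print(min(violators) if violators else None)
# With these values the condition fails only for sufficiently high eps,
# so default incentives under the first best arise in above-average states.
```

Lowering `beta` in this sketch shrinks the right-hand side and enlarges the set of violating states, illustrating the text's point that impatient debtors have stronger default incentives.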
The Lagrangian associated with this problem is

$$\mathcal{L} = \int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon - d(\epsilon))\pi(\epsilon)\,d\epsilon + \lambda \int_{\epsilon_L}^{\epsilon_H} d(\epsilon)\pi(\epsilon)\,d\epsilon$$
$$+ \int_{\epsilon_L}^{\epsilon_H} \gamma(\epsilon)\left[u(\bar y + \epsilon - d(\epsilon)) + \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon' - d(\epsilon'))\pi(\epsilon')\,d\epsilon' - u(\bar y + \epsilon) - \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon')\pi(\epsilon')\,d\epsilon'\right]\pi(\epsilon)\,d\epsilon,$$

where $\lambda$ and $\pi(\epsilon)\gamma(\epsilon)$ denote the Lagrange multipliers on (11.1) and (11.20), respectively. The first-order conditions associated with the problem of choosing the transfer schedule $d(\epsilon)$ are (11.1), (11.20),

$$u'(\bar y + \epsilon - d(\epsilon)) = \frac{\lambda}{1 + \gamma(\epsilon) + \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H}\gamma(\epsilon')\pi(\epsilon')\,d\epsilon'}, \qquad (11.21)$$

$$\gamma(\epsilon) \ge 0, \qquad (11.22)$$

and the slackness condition

$$\gamma(\epsilon)\left[u(\bar y + \epsilon - d(\epsilon)) + \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon' - d(\epsilon'))\pi(\epsilon')\,d\epsilon' - u(\bar y + \epsilon) - \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon')\pi(\epsilon')\,d\epsilon'\right] = 0. \qquad (11.23)$$

In states in which the incentive-compatibility constraint (11.20) is not binding, the slackness condition (11.23) stipulates that the Lagrange multiplier $\gamma(\epsilon)$ must vanish. It follows that in these states, the optimality condition (11.21) becomes

$$u'(\bar y + \epsilon - d(\epsilon)) = \frac{\lambda}{1 + \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H}\gamma(\epsilon')\pi(\epsilon')\,d\epsilon'}. \qquad (11.24)$$

Because the right-hand side of this expression is independent of the current value of $\epsilon$, we have that the marginal utility of consumption must be constant across states in which the incentive-compatibility constraint does not bind. This, in turn, implies that consumption is constant across these states, and that transfers are of the form $d(\epsilon) = \bar d + \epsilon$, where $\bar d$ is a constant. Over these states, consumption and payments to or from the rest of the world behave exactly as in the case with direct sanctions: domestic risk-averse agents transfer their endowment shock plus a constant to risk-neutral foreign lenders.

In states in which the incentive-compatibility constraint (11.20) is binding, consumption is greater than or equal to consumption in states in which the incentive-compatibility constraint is not binding. To see this, notice that because $\gamma(\epsilon) \ge 0$ for all $\epsilon$, the right-hand side of (11.21) is smaller than or equal to the right-hand side of (11.24).
It then follows from the concavity of the period utility function that consumption must be higher in states in which the incentive-compatibility constraint is binding.

It is again natural to expect that the incentive-compatibility constraint will bind in high-endowment states and that it will not bind in low-endowment states. The intuition is, again, that the debt contract should stipulate payments to the rest of the world in high-endowment states and transfers from the rest of the world to the domestic households in low-endowment states, creating the largest incentives to default in high-endowment states. To see that this intuition is correct, consider an $\epsilon_1$ for which the incentive-compatibility constraint is not binding, that is, $\gamma(\epsilon_1) = 0$. Consider now any endowment $\epsilon_2 < \epsilon_1$. We wish to show that $\gamma(\epsilon_2) = 0$. The proof is by contradiction. Suppose that $\gamma(\epsilon_2) > 0$. The analysis of the previous paragraph implies that $c(\epsilon_2) > c(\epsilon_1)$ and hence that $\epsilon_2 - d(\epsilon_2) > \epsilon_1 - d(\epsilon_1)$. This, in turn, implies that as the endowment shock falls from $\epsilon_1$ to $\epsilon_2$, the left-hand side of the incentive-compatibility constraint (11.20) increases and its right-hand side decreases. This means that (11.20) must hold with strict inequality at $\epsilon_2$. It follows that the slackness condition (11.23) is violated, which shows that $\gamma(\epsilon_2)$ cannot be positive.

It follows from this analysis that there exists a threshold level of the endowment shock $\bar\epsilon \le \epsilon_H$ such that the incentive-compatibility constraint binds for all $\epsilon > \bar\epsilon$ and does not bind for all $\epsilon < \bar\epsilon$. That is,

$$\gamma(\epsilon) \begin{cases} = 0 & \text{for } \epsilon < \bar\epsilon \\ > 0 & \text{for } \epsilon > \bar\epsilon \end{cases}$$

Consider the question of how the optimal transfer $d(\epsilon)$ varies across states in which the incentive-compatibility constraint is binding. Does it increase as one moves from low- to high-endowment states, and by how much?
To address this question, let us examine the incentive-compatibility constraint (11.20) holding with equality:

$$u(\bar y + \epsilon - d(\epsilon)) + \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon' - d(\epsilon'))\pi(\epsilon')\,d\epsilon' = u(\bar y + \epsilon) + \frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon')\pi(\epsilon')\,d\epsilon'. \qquad (11.25)$$

Notice that in this expression the terms $\frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon' - d(\epsilon'))\pi(\epsilon')\,d\epsilon'$ and $\frac{\beta}{1-\beta}\int_{\epsilon_L}^{\epsilon_H} u(\bar y + \epsilon')\pi(\epsilon')\,d\epsilon'$ are both independent of the current endowment shock $\epsilon$. Only the first terms on the right- and left-hand sides of (11.25) change with the current level of the endowment. Differentiating (11.25) with respect to the current endowment, $\epsilon$, yields

$$d'(\epsilon) = \frac{u'(\bar y + \epsilon - d(\epsilon)) - u'(\bar y + \epsilon)}{u'(\bar y + \epsilon - d(\epsilon))}.$$

Because the incentive-compatibility constraint binds only when the risk-averse agent must make payments, $d(\epsilon) > 0$ (there are no incentives to default when the country receives income from the foreign agent), and because the utility index is strictly concave, it follows that $u'(\bar y + \epsilon - d(\epsilon)) > u'(\bar y + \epsilon)$ in all states in which the incentive-compatibility constraint binds. This implies that when
It follows that in states in which $d(\epsilon) > 0$, the higher is the current endowment, the higher is the level of payments to foreign lenders that can be supported without inducing default. This does not mean that default incentives are weaker the higher is the level of the endowment. Recall that the analysis in this paragraph is restricted to states in which the incentive compatibility constraint is binding. The incentive compatibility constraint tends to bind in relatively high-endowment states. The positive slope of the payment schedule with respect to the endowment (when the incentive compatibility constraint is binding) presents a contrast with the pattern that emerges in the case of direct sanctions. In that case, when the incentive compatibility constraint binds, payments equal the maximum punishment $k$, which implies that the slope of the payment schedule equals zero. Finally, the fact that the slope of the payment schedule is positive but less than one implies that when the incentive compatibility constraint is binding, consumption is strictly increasing in the endowment. Figure 11.7 plots the consumption schedule as a function of the endowment shock.

Figure 11.7: Consumption Profiles Under Full Commitment and No Commitment in a Reputational Model of Debt. [Figure: plots the output line $y(\epsilon) = \bar y + \epsilon$ and the consumption schedules $c^R(\epsilon)$ and $c^c(\epsilon) = \bar y$ against $\epsilon$, with the threshold $\bar\epsilon$ marked.] Note: $c^c(\epsilon)$ and $c^R(\epsilon)$ denote the levels of consumption in state $\epsilon$ under commitment and no commitment, respectively, $y(\epsilon) \equiv \bar y + \epsilon$ denotes output, and $\epsilon$ denotes the endowment shock.

Default Incentives With Non-State-Contingent Contracts

In a world with complete financial markets, optimal risk-sharing arrangements stipulate positive payoffs in low-income states and negative payoffs in high-income states. In this way, the optimal financial contract facilitates a smooth level of consumption across states of nature.
An implication of this result is that default incentives are stronger in high-income states and weaker in low-income states. In the real world, however, as documented earlier in this chapter, countries tend to default during economic contractions. One goal of this section is to explain this empirical regularity. To this end, we remove the assumption that financial markets are complete. Indeed, we focus on the polar case of a single non-state-contingent asset. In this environment, debts assumed in the current period impose financial obligations in the next period that are independent of whether income in that period is high or low. The debtor is no longer able to design debt contracts that carry a high interest rate in good states and a low interest rate in bad states. As a result, debtors facing high debt obligations and low endowments will have strong incentives to default. The pioneer model of Eaton and Gersovitz (1981), which we study in this section, represents the first formalization of this idea. Our version of the Eaton-Gersovitz model follows Arellano (2008).

Consider a small open economy populated by a large number of identical individuals. Preferences are described by the utility function

$$E_0 \sum_{t=0}^{\infty} \beta^t u(c_t),$$

where $c_t$ denotes consumption in period $t$, $u$ is a period utility function assumed to be strictly increasing and strictly concave, and $\beta \in (0,1)$ is a parameter denoting the subjective discount factor. Throughout our analysis, we will use the terms household, country, and government indistinguishably. We have in mind an arrangement in which the government makes all decisions concerning international borrowing and default in a benevolent fashion. Each period $t \geq 0$, the representative country is endowed with $y_t$ units of consumption goods. This endowment is assumed to be exogenous, stochastic, and i.i.d., with a distribution featuring a bounded support $Y \equiv [\underline y, \bar y]$.
At the beginning of each period, the country can be either in good financial standing or in bad financial standing. If the country is in bad financial standing, then it is prevented from borrowing or lending in financial markets. As a result, the country is forced to consume its endowment. Formally, consumption under bad financial standing is given by $c = y$. We drop the time subscript in expressions where all variables are dated in the current period. If the country is in good financial standing, it can choose to default on its debt obligations or to honor its debt. If it chooses to default, then it immediately acquires a bad financial status. If it chooses to honor its debt, then it maintains its good financial standing until the beginning of the next period. If the country is in good standing and chooses not to default, its budget constraint is given by

$$c + d = y + q(d')d', \quad (11.26)$$

where $d$ denotes the country's debt due in the current period, $d'$ denotes the debt acquired in the current period and due in the next period, and $q(d')$ denotes the market price of the country's debt. Note that the price of debt depends on the amount of debt acquired in the current period and due next period, $d'$, but not on the level of debt acquired in the previous period and due in the current period, $d$. This is because the default decision in the next period depends on the amount of debt due then. Notice also that $q(\cdot)$ is independent of the current level of output. This is because of the assumed i.i.d. nature of the endowment, which implies that its current value conveys no information about future expected endowment levels. If instead we had assumed that $y$ was serially correlated, then bond prices would depend on the current level of the endowment, since it would be informative of the state of the business cycle—and hence of the probability of default—next period. We assume that "bad financial standing" is an absorbent state.
This means that once the country falls into bad standing, it remains in that status forever. The country acquires a bad standing when it defaults on its financial obligations. The value function associated with bad financial standing is denoted $v^b(y)$ and is given by

$$v^b(y) = u(y) + \beta E v^b(y').$$

Here, $y'$ denotes next period's endowment, and $E$ denotes the expectations operator. If the country is in good standing, the value function associated with continuing to participate in capital markets by honoring its current debts is denoted $v^c(d,y)$ and is given by

$$v^c(d,y) = \max_{d'} \left\{ u(y + q(d')d' - d) + \beta E v^g(d', y') \right\},$$

subject to

$$d' \leq \bar d,$$

where $v^g(d,y)$ denotes the value function associated with being in good financial standing, and is given by

$$v^g(d,y) = \max\{v^b(y), v^c(d,y)\}.$$

The parameter $\bar d > 0$ is a debt limit that prevents agents from engaging in Ponzi games. In this economy, the country chooses to default when servicing the debt entails a cost in terms of forgone current consumption that is larger than the inconvenience of living in financial autarky forever. It is then reasonable to conjecture that default is more likely the larger the level of debt and the lower the current endowment. In what follows, we demonstrate that this intuition is in fact correct. We do so in steps.

The Default Set

The default set contains all endowment levels at which a country chooses to default given a particular level of debt. We denote the default set by $D(d)$. Formally, the default set is defined by

$$D(d) = \{ y \in Y : v^b(y) > v^c(d,y) \}.$$

Because it is never in the agent's interest to default when its asset position is nonnegative (that is, when $d \leq 0$), it follows that $D(d)$ is empty for all $d \leq 0$. The trade balance is given by $tb \equiv y - c$. The budget constraint (11.26) then implies that $tb = d - q(d')d'$.
The following proposition shows that at debt levels for which the default set is not empty, an economy that chooses not to default will run a trade surplus.

Proposition 11.1 If $D(d) \neq \emptyset$, then $tb = d - q(d')d' > 0$ for all $d' \leq \bar d$.

Proof: The proof is by contradiction. Suppose that $D(d) \neq \emptyset$ and that $q(\hat d)\hat d - d \geq 0$ for some $\hat d \leq \bar d$. Then,

$$v^c(d,y) \equiv \max_{d' \leq \bar d} \left\{ u(y + q(d')d' - d) + \beta E v^g(d',y') \right\} \geq u(y + q(\hat d)\hat d - d) + \beta E v^g(\hat d, y') \geq u(y) + \beta E v^b(y') = v^b(y),$$

for all $y \in Y$, where the second inequality follows from $q(\hat d)\hat d - d \geq 0$, the fact that $u$ is increasing, and the fact that $v^g(d,y) \geq v^b(y)$. This contradicts the assumption that $D(d)$ is nonempty, completing the proof.

We next show that the default set is an interval. By the envelope theorem, $v^c_y(d,y) = u'(y + q(d')d' - d)$, where $d'$ denotes the optimal choice of next-period debt, while, because the endowment is i.i.d., $v^b_y(y) = u'(y)$. By Proposition 11.1, consumption conditional on repayment, $y + q(d')d' - d$, is less than $y$ whenever $D(d) \neq \emptyset$, so by strict concavity $u'(y + q(d')d' - d) > u'(y)$. It follows that $v^b_y(y) - v^c_y(d,y) < 0$, for all $y \in D(d)$. That is, $v^b(y) - v^c(d,y)$ is a decreasing function of $y$ for all $y \in D(d)$. This means that if $v^b(y_1) > v^c(d,y_1)$, then $v^b(y_2) > v^c(d,y_2)$ for $\underline y \leq y_2 < y_1$. Equivalently, if $y_1 \in D(d)$, then $y_2 \in D(d)$ for any $\underline y \leq y_2 < y_1$. More formally,

$$v^b(y_2) - v^c(d,y_2) \geq v^b(y_1) - v^c(d,y_1) + \left[u(y_2) - u(y_2 + q(\tilde d)\tilde d - d)\right] - \left[u(y_1) - u(y_1 + q(\tilde d)\tilde d - d)\right] > v^b(y_1) - v^c(d,y_1) > 0,$$

where $\tilde d$ is the value of $d'$ that maximizes $u(y_2 + q(d')d' - d) - u(y_1 + q(d')d' - d)$. The first inequality holds because we are distributing the max operator. The second inequality holds because $u$ is concave. And the third inequality follows because, by assumption, $y_1 \in D(d)$.

We have shown that the default set is an interval with a lower bound given by the lowest endowment $\underline y$. We now show that the default set $D(d)$ is a larger interval the larger is the stock of debt. Put differently, the higher the debt, the larger the probability of default.

Proposition 11.3 If $D(d) \neq \emptyset$, then $D(d)$ is an interval, $[\underline y, y^*(d)]$, where $y^*(d)$ is increasing in $d$ if $y^*(d) < \bar y$.

Proof: We already proved that the default set $D(d)$ is an interval. By definition, every $y \in D(d)$ satisfies $v^b(y) - v^c(d,y) > 0$. At the same time, we showed that $v^b_y(y) - v^c_y(d,y) < 0$ for all $y \in D(d)$. It follows that $y^*(d)$ is given either by $\bar y$ or (implicitly) by $v^b(y^*(d)) = v^c(d, y^*(d))$. Differentiating this expression yields

$$\frac{dy^*(d)}{dd} = \frac{v^c_d(d, y^*(d))}{v^b_y(y^*(d)) - v^c_y(d, y^*(d))},$$

where $v^c_d(d,y) \equiv \partial v^c(d,y)/\partial d$. We have shown that $v^b_y(y^*(d)) - v^c_y(d, y^*(d)) < 0$. Using the definition of $v^c_d(d,y)$ and applying the envelope theorem, it follows that $v^c_d(d, y^*(d)) = -u'(y^*(d) + q(d')d' - d) < 0$. We then conclude that

$$\frac{dy^*(d)}{dd} > 0,$$

as stated in the proposition.

Summarizing, we have obtained two important results: First, given the stock of debt, default is more likely the lower the level of output. Second, the larger the stock of debt, the higher the probability of default. These two results are in line with the stylized facts presented earlier in this chapter, indicating that at the time of default countries tend to display above-average debt-to-GNP ratios (see table 11.3).

Default Risk and the Country Premium

We now characterize the behavior of the country interest-rate premium in this economy. Let the world interest rate be constant and equal to $r^* > 0$. We assume that foreign lenders are risk neutral and perfectly competitive. It follows that the expected rate of return on the country's debt must equal $r^*$. If the country does not default, foreign lenders receive $1/q(d')$ units of goods per unit lent. If the country does default, foreign lenders receive nothing. Therefore, equating the expected rate of return on the domestic debt to the risk-free world interest rate, one obtains

$$1 + r^* = \frac{\text{Prob}\{y' \geq y^*(d')\}}{q(d')}.$$

The numerator on the right-hand side of this expression is the probability that the country will not default next period. Letting $F(y)$ denote the cumulative distribution function of the endowment shock, we can write

$$q(d') = \frac{1 - F(y^*(d'))}{1 + r^*}.$$

This expression states that the gross country premium, $1/[q(d')(1 + r^*)]$, equals the inverse of the probability of repayment, or approximately one plus the probability of default. Hence, the net country premium is approximately equal to the probability of default.
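To make the pricing formula concrete, here is a small numerical sketch. The uniform endowment distribution and all numbers are illustrative assumptions, not the chapter's calibration; only the formula $q(d') = [1 - F(y^*(d'))]/(1 + r^*)$ is taken from the text:

```python
# Price of debt q(d') = (1 - F(y*(d'))) / (1 + r*), where F is the endowment CDF.
# Assumptions (illustrative): the endowment is uniform on [ylow, yhigh] and the
# default threshold y_star is taken as given rather than solved for.

def bond_price(y_star, r_star=0.01, ylow=0.9, yhigh=1.1):
    F = min(max((y_star - ylow) / (yhigh - ylow), 0.0), 1.0)  # uniform CDF
    return (1.0 - F) / (1.0 + r_star)

def country_premium(y_star, r_star=0.01):
    q = bond_price(y_star, r_star)
    return 1.0 / q - 1.0 - r_star      # r - r*, with r = 1/q - 1

# With no default risk (y* below the support), the premium is zero ...
print(abs(country_premium(0.8)) < 1e-12)                 # -> True
# ... and it rises with the default threshold, hence (by Proposition 11.3) with debt.
print(country_premium(1.0) > country_premium(0.95) > 0)  # -> True
```

Raising $y^*$ raises $F(y^*)$, lowers $q$, and raises the spread, which is the mechanism behind Proposition 11.4 below.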
The expression for the price of debt implies that its derivative with respect to next period's debt is given by

$$\frac{dq(d')}{dd'} = \frac{-F'(y^*(d'))\, y^{*\prime}(d')}{1 + r^*} \leq 0.$$

The inequality follows because by definition $F' \geq 0$ and because, by Proposition 11.3, $y^{*\prime}(d') \geq 0$. It follows that the country spread, given by the difference between $1/q(d')$ and $1 + r^*$, is nondecreasing in the stock of debt. We summarize this result in the following proposition:

Proposition 11.4 The country spread, given by $1/q(d') - 1 - r^*$, is nondecreasing in the stock of debt.

Saving and the Breakdown of Reputational Lending

A key assumption of the reputational model of sovereign debt is that when a country defaults, foreign lenders coordinate to exclude it from the possibility of borrowing or lending in international financial markets. At first glance, it might seem that what is important is only that defaulters be precluded from borrowing in international financial markets. Why should defaulting countries not be allowed to save? Bulow and Rogoff (1989) have shown that prohibiting defaulters from lending to foreign agents, or from holding a positive net foreign asset position, is crucial for the reputational model to work. If delinquent countries were not allowed to borrow but could run current account surpluses, no lending at all could be supported on reputational grounds alone. To illustrate this insight in a simple setting, consider a deterministic economy. Suppose that a reputational equilibrium supports a path for external debt given by $\{d_t\}_{t=0}^{\infty}$, where $d_t$ denotes the level of external debt assumed in period $t$ and due in period $t+1$.¹⁰ Assume that default is punished with perpetual exclusion from borrowing in international financial markets, but that saving in these markets is allowed after default.

¹⁰ For an example of a deterministic model with sovereign debt supported by reputation, see Eaton and Fernández (1995).
This assumption and the fact that the economy operates under perfect foresight imply that any reputational equilibrium featuring positive debt in at least one date must be characterized by no default. To see this, notice that if the country defaults at some date $T > 0$, then no foreign investor would want to lend to this country in period $T-1$, since default would occur for sure one period later. Thus, $d_{T-1} \leq 0$. In turn, if the country is excluded from borrowing starting in period $T-1$, then it will have no incentives to honor any debts outstanding in that period. As a result, no foreign investor will be willing to lend to the country in period $T-2$. That is, $d_{T-2} \leq 0$. Continuing with this logic, we arrive at the conclusion that default in period $T$ implies no debt at any time. That is, $d_t \leq 0$ for all $t \geq 0$. It follows from this result that in an equilibrium with positive external debt the interest rate must equal the world interest rate $r^* > 0$, because the probability of default is nil. The country premium is therefore also nil. The evolution of the equilibrium level of debt is then given by

$$d_t = (1 + r^*)d_{t-1} - tb_t \quad (11.27)$$

for $t \geq 0$, where $tb_t \equiv y_t - c_t$ denotes the trade balance in period $t$. Assume that the endowment path $\{y_t\}_{t=0}^{\infty}$ is bounded. Let $d_T > 0$ be the maximum level of external debt in this equilibrium sequence.¹¹ That is, $d_T \geq d_t$ for all $t \geq -1$. Does it pay for the country to honor this debt? The answer is no. The reason is that the country could default in period $T+1$—and therefore be excluded from borrowing internationally forever thereafter—and still be able to maintain a level of consumption no lower than the one that would have obtained in the absence of default. To see this, let $\tilde d_t$ for $t > T$ denote the post-default path of external debt, or external assets if negative. (¹¹ Problem 11.10 asks you to derive the main result of this section when a maximal debt level does not exist.) Let the debt
Open Economy Macroeconomics, Chapter 11 position acquired in the period of default be deT +1 = −tbT +1 , where tbT +1 is the trade balance prevailing in period T + 1 under the original debt sequence {dt }. By (11.27) We have that −tbT +1 = dT +1 − (1 + r ∗ )dT , which implies that deT +1 = dT +1 − (1 + r ∗ )dT . Because by assumption dT ≥ dT +1 and r ∗ > 0, we have that deT +1 < 0. That is, in period T + 1 the country can achieve the same level of trade balance (and hence consumption) under the default strategy as under the no-default strategy, without having to borrow internationally. Let the external debt position in period T + 2 in the default strategy be deT +2 = (1 + r ∗ )deT +1 − tbT +2 , where, again, tbT +2 is the trade balance prevailing in period T + 2 under the original (no default) debt sequence {dt}. Using (11.27) and (11.28) we can rewrite the above expression as deT +2 = dT +2 − (1 + r ∗ )2 dT < 0. The inequality follows because by assumption dT +2 ≤ dT and r ∗ > 0. We have shown that the defaulting strategy can achieve the no-default level of trade balance in period t+2 without requiring any international borrowing. Continuing in this way, one obtains that the no-default sequence of M. Uribe and S. Schmitt-Groh´e trade balances, tbt for t ≥ T + 1, can be supported by the debt path det satisfying det = dt − (1 + r ∗ )t−T dT , which is strictly negative for all t ≥ T +1. The fact that the entire post-default debt path is negative e t satisfying implies that the country could also implement a post default path of trade balances tb e t ≤ tbt for t ≥ T + 1 and tb e t0 < tbt0 for at least one t0 ≥ T + 1 and still generate no positive tb debt at any date t ≥ T + 1. 
This new path for the trade balance would be strictly preferred to the no-default path because it would allow consumption to be strictly higher than under the no-default strategy in at least one period and to be at least as high as under the no-default strategy in all other periods (recall that $tb_t = y_t - c_t$). It follows that it pays for the country to default immediately after reaching the largest debt level $d_T$. But we showed that default in this perfect-foresight economy implies zero debt at all times. Therefore, no external debt can be supported in equilibrium. In other words, allowing the country to save in international markets after default implies that no equilibrium featuring strictly positive levels of debt can be supported on reputational grounds alone. For simplicity, we derived this breakdown result using a model without uncertainty. But the result also holds in a stochastic environment (see Bulow and Rogoff, 1989).

Quantitative Analysis

The reputational model of default analyzed in section 11.4 has been subject to intense quantitative scrutiny. However, as formulated there, the model is too stylized to capture salient features of actual defaults. To give the model a chance to match the data, researchers have enriched it along a number of dimensions. Three basic features that can be found in virtually all quantitative models are serial correlation of the endowment process, a finite exclusion period from international credit markets after default, and an output cost of default. These features render the model less analytically tractable, but a full characterization of the equilibrium dynamics is possible using numerical methods.

Serially Correlated Endowment Shocks

We begin by introducing the assumption that the endowment process has an autoregressive component.
Specifically, assume that the variable $y_t$ has the AR(1) law of motion

$$\ln y_t = \rho \ln y_{t-1} + \sigma_\epsilon \epsilon_t, \quad (11.29)$$

where $\ln$ denotes the natural logarithm, $\rho \in [0,1)$ is a parameter denoting the serial correlation of the endowment process, $\sigma_\epsilon > 0$ is a parameter denoting the standard deviation of the innovations to the endowment process, and $\epsilon_t$ is an i.i.d. random variable following a standard normal distribution, $\epsilon_t \sim N(0,1)$. When $\rho = 0$, this process nests as a special case the white-noise specification assumed in sections 11.3 and 11.4. A theoretical implication of assuming a serially correlated output process is that now the period-$t$ price of debt assumed in period $t$ and due in period $t+1$ is no longer only a function of the amount of debt assumed in $t$, $d_{t+1}$, but also of the current endowment, $y_t$. The reason is that, as we have seen, the price of debt in $t$ depends on the expected value of default in $t+1$. In turn, the decision to default in $t+1$ depends on that period's output, $y_{t+1}$. When the output process is serially correlated, $y_t$ provides information on $y_{t+1}$, and therefore affects the current price of debt. So we can write the price of debt as $q(y_t, d_{t+1})$.

A second ubiquitous generalization of the default model is to assume that upon default the country is not perpetually excluded from international credit markets. This assumption makes the model more realistic, as defaulting countries are not excluded from international financial markets indefinitely. As discussed in section 11.2.1, depending on the empirical strategy, the typical exclusion period is estimated to last between 4.7 and 13.7 years (see table 11.4). The assumption of permanent exclusion upon default is typically replaced with the assumption that after default the country regains access to financial markets with constant probability $\theta \in [0,1)$ each period. This assumption implies that the average exclusion period is $1/\theta$ periods.
To see this, assume that the first period of exclusion is the period of default. Then, the probability that the country will be excluded for exactly 1 period is $\theta$. The probability that the country will be excluded for exactly 2 periods is $(1-\theta)\theta$. In general, the probability of being excluded for exactly $j$ periods is given by $(1-\theta)^{j-1}\theta$. Thus we have

$$\text{average exclusion period} = 1 \times \theta + 2 \times (1-\theta)\theta + 3 \times (1-\theta)^2\theta + \dots = \theta \sum_{j=1}^{\infty} j(1-\theta)^{j-1} = \frac{1}{\theta}. \quad (11.30)$$

Most calibrations of the default model use an estimate of the left-hand side of this expression (the average length of the exclusion period) to identify the value of $\theta$ (see, for instance, section 11.6.5). The assumption of a constant probability of reentry, regardless of how long the country has been excluded from credit markets, has some empirical support. As shown in figure 11.1, the empirical distribution of the length of time defaulters are in default status resembles that of an exponential distribution. The larger is $\theta$, the quicker the country regains credit access after default. As a result, $\theta$ affects the model's predictions regarding default frequency, the average risk premium, and the amount of debt that can be sustained in equilibrium. The assumed specification of reentry nests as a special case ($\theta = 0$) the setup studied earlier, in which financial autarky is an absorbent state. It is common to assume that when the country regains good financial standing, it starts with no external obligations. As we saw in section 11.1.1, this assumption is unrealistic. There we documented that available estimates indicate that the typical haircut is about 40 percent of the external debt. Later, we will discuss theoretical studies that attempt to make the default model more realistic in this regard.

Output Costs

As it turns out, exclusion by itself is not enough punishment for defaulting countries, in the sense that it is unable to support empirically realistic levels of external debt.
For this reason, a third generalization of the standard Eaton-Gersovitz model that has become commonplace in quantitative applications is the introduction of direct output costs of default. Thus, upon default, countries are assumed to be punished not only by being excluded from international credit markets, but also by losing part of their endowment for the duration of the bad-standing status. The empirical work surveyed in section 11.2.2 provides strong evidence that defaults are associated with sizable and protracted contractions in output. However, as discussed there, the direction of causality has not yet been established. Assume that the endowment received by the country is not $y_t$, but $\tilde y_t \leq y_t$, where $\tilde y_t$ is defined as

$$\tilde y_t = \begin{cases} y_t & \text{if the country is in good standing} \\ y_t - L(y_t) & \text{if the country is in bad standing,} \end{cases}$$

where $L(y_t)$ is an output loss function assumed to be positive and nondecreasing. The introduction of this type of direct cost affects the model's predictions along two dimensions. First, it discourages default, and therefore tends to increase the amount of debt sustainable in equilibrium and to reduce the risk premium. Second, it discourages default in good states of nature, i.e., when $y_t$ is high. This is because the higher is $y_t$, the higher is the output loss in case of default, as $L(y_t)$ is positive and nondecreasing. This way of modeling output losses caused by default is, however, ad hoc, as it is not based on microfoundations. Later, we will discuss ways to remedy this drawback. We assume the following specification for the loss function $L(y_t)$:

$$L(y_t) = \max\{0,\ a_0 + a_1 y_t + a_2 y_t^2\}. \quad (11.31)$$

This specification encompasses a number of cases of interest.
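Equation (11.31) is straightforward to code. A minimal sketch, using the coefficient values $a_0 = 0$, $a_1 = -0.35$, $a_2 = 0.4403$ adopted later in the chapter (any other admissible coefficients work the same way):

```python
# Output loss in bad standing: L(y) = max(0, a0 + a1*y + a2*y**2), equation (11.31).
def loss(y, a0=0.0, a1=-0.35, a2=0.4403):
    return max(0.0, a0 + a1 * y + a2 * y ** 2)

# With these coefficients the loss is zero for low output and kicks in,
# growing quadratically, once y exceeds -a1/a2 (about 0.795 here).
print(loss(0.7))                     # -> 0.0
print(loss(1.2) > 0.0)               # -> True
grid = [0.1 * k for k in range(1, 21)]
print(all(loss(a) <= loss(b) for a, b in zip(grid, grid[1:])))  # nondecreasing -> True
```

The `max(0, ...)` truncation is what makes the loss bite only in good states, which is how the specification discourages default when output is high.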
For example, Arellano (2008) assumes that when the country is in bad standing, it loses any endowment above a certain threshold, $\bar y$, that is,

$$y_t - L(y_t) = \begin{cases} y_t & \text{if } y_t < \bar y \\ \bar y & \text{if } y_t \geq \bar y. \end{cases}$$

Figure 11.8 displays with a dashed line the endowment net of the output cost as a function of the endowment itself for the Arellano specification. This specification obtains as a special case of (11.31) when one sets $a_0 = -\bar y$, $a_1 = 1$, and $a_2 = 0$. Chatterjee and Eyigungor (2012) adopt a two-parameter specification of the output loss function under which the output loss is quadratic at sufficiently large values of $y_t$. This case obtains by setting $a_0 = 0$, $a_1 < 0$, and $a_2 > 0$ in (11.31). Figure 11.8 displays with a dash-dotted line the endowment net of output cost under the Chatterjee-Eyigungor specification.

Figure 11.8: Output Cost of Default. [Figure: plots $y_t - L(y_t)$ against $y_t$ for the flat (Arellano) and quadratic (Chatterjee-Eyigungor) specifications.]

The Model

With these three modifications, the Eaton-Gersovitz model of section 11.4 changes as follows. The value of continuing to participate in financial markets, $v^c(d,y)$, is the solution to the Bellman equation

$$v^c(d,y) = \max_{d'} \left\{ u(y + q(d',y)d' - d) + \beta E_y v^g(d', y') \right\},$$

where $E_y$ denotes the expectations operator conditional on $y$, and $v^g(d,y)$ is the value of being in good financial standing. As before, we drop time subscripts and denote variables dated next period with a prime. The value of being in bad financial standing is given by

$$v^b(y) = u(y - L(y)) + \theta E_y v^g(0, y') + (1 - \theta) E_y v^b(y').$$

The value of being in good financial standing, $v^g(d,y)$, is given by

$$v^g(d,y) = \max\{v^c(d,y), v^b(y)\}.$$

Finally, given the assumption that foreign lenders are risk neutral, the price of debt must satisfy

$$q(d',y) = \frac{\text{Prob}_y\{v^c(d',y') \geq v^b(y')\}}{1 + r^*},$$

where $\text{Prob}_y$ denotes the probability conditional on $y$, and $r^*$ is the risk-free real interest rate, assumed to be constant.
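The four equations above form a fixed-point problem in $(v^c, v^b, v^g, q)$ that is typically solved by iteration. The following is a minimal sketch of that algorithm; the two-state Markov chain, the grids, the output loss, and all parameter values are illustrative toy choices, not the chapter's calibration:

```python
# Minimal value-function iteration over (v^c, v^b, v^g, q) for the model above.
# Toy parameterization for illustration only.
import numpy as np

beta, sigma, rstar, theta = 0.90, 2.0, 0.01, 0.10
y = np.array([0.9, 1.1])                      # endowment states
P = np.array([[0.8, 0.2], [0.2, 0.8]])        # P[k, k'] = Prob(y' = y[k'] | y = y[k])
d = np.linspace(0.0, 0.5, 21)                 # debt grid (d >= 0)
L = np.array([0.0, 0.05])                     # output loss in bad standing, by state

def u(c):
    c = np.maximum(c, 1e-9)                   # guard against nonpositive consumption
    return (c ** (1 - sigma) - 1) / (1 - sigma)

nd, ny = len(d), len(y)
vc = np.zeros((nd, ny)); vb = np.zeros(ny); q = np.full((nd, ny), 1 / (1 + rstar))

for _ in range(2000):
    vg = np.maximum(vc, vb)                   # v^g(d, y) = max{v^c(d, y), v^b(y)}
    Evg = vg @ P.T                            # Evg[j, k] = E[v^g(d'[j], y') | y[k]]
    Evb = P @ vb                              # Evb[k]    = E[v^b(y') | y[k]]
    vb_new = u(y - L) + theta * Evg[0, :] + (1 - theta) * Evb
    # consumption c[j, i, k] = y[k] + q[j, k] d'[j] - d[i]; maximize over choice j
    c = y[None, None, :] + (q * d[:, None])[:, None, :] - d[None, :, None]
    vc_new = (u(c) + beta * Evg[:, None, :]).max(axis=0)
    repay = (vc_new >= vb_new[None, :]).astype(float)   # repayment indicator at (d', y')
    q_new = (repay @ P.T) / (1 + rstar)       # break-even price of debt
    gap = max(np.abs(vc_new - vc).max(), np.abs(vb_new - vb).max())
    vc, vb, q = vc_new, vb_new, q_new
    if gap < 1e-10:
        break

# The price schedule inherits the theory's qualitative properties:
assert np.all((q >= 0) & (q <= 1 / (1 + rstar) + 1e-12))
assert np.all(np.diff(q, axis=0) <= 1e-12)    # q nonincreasing in the debt choice
print(q[0, 0], q[-1, 0])                      # price at zero debt vs. highest debt
```

Because $v^c$ is decreasing in the debt due, the repayment indicator, and hence $q$, is nonincreasing in $d'$ at every iteration, mirroring the comparative statics of section 11.4.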
The country interest rate, denoted $r$, is given by the inverse of the price of debt minus one,

$$r \equiv \frac{1}{q(d',y)} - 1.$$

In turn, the country premium, or country spread, is defined as the difference between the country interest rate, $r$, and the world interest rate, $r^*$, that is,

$$\text{country premium} = r - r^*.$$

Calibration and Functional Forms

Obtaining quantitative predictions of the model requires assigning functional forms to preferences and technologies and numerical values to the structural parameters. We begin by assuming that one period in the model corresponds to one quarter (of a year). We adopt a CRRA form for the period utility function,

$$u(c) = \frac{c^{1-\sigma} - 1}{1 - \sigma},$$

and set $\sigma$, the inverse of the intertemporal elasticity of substitution, at 2, as in much of the related literature. We set the world interest rate, $r^*$, to 1 percent per quarter. The remaining parameters are calibrated to match characteristics of the Argentine economy. To calibrate the probability of reentry, $\theta$, we revisit the evidence on the average exclusion period presented in section 11.2.1. Consider first measuring the exclusion period by the number of years a country is in default status. According to table 11.9, after the 1982 default Argentina was in default status until 1993, or 11 years. And after the 2001 default, Argentina was in default status until 2005, or 4 years. Thus, on average, Argentina was in default status for 7.5 years. Consider now measuring the end of the exclusion period by the first year of issuance of new debt. Cruces and Trebesch (2013, table A.2) find that for both Argentine defaults, the first period of issuance of new debt coincided with the end of default status, also suggesting an average exclusion period of 7.5 years. However, Gelos et al. (2011, table A.7) estimate that after the default of 1982, Argentina was able to issue new debt already in 1986, resulting in an exclusion period of 4 years.
Their sample does not include the exit from the 2001 default. A simple average of the above estimates, $(7.5 + 7.5 + 4)/3$, yields 6.33 years. We round this number to 6.5 years. Applying the formula in equation (11.30), this estimate yields a value of $\theta$ of 0.0385 at a quarterly frequency. This value is the same as the one used in Chatterjee and Eyigungor (2012). We use data from Argentina to estimate the output process. Choosing a proxy for $y_t$ is complicated by the fact that in the model output is fully traded internationally. In reality, a large fraction of the goods and services produced by any country is nontraded. We choose to proxy $y_t$ by a measure of tradable output. In turn, as in chapter 8, we measure tradable output as the sum of GDP in agriculture, forestry, fishing, mining, and manufacturing in Argentina over the period 1983:Q1 to 2001:Q4. We obtain the cyclical component of output by removing a quadratic trend. The OLS estimate of (11.29) is then

$$\ln y_t = 0.9317 \ln y_{t-1} + 0.037\, \epsilon_t. \quad (11.34)$$

The data used in the estimation is in the Matlab file lgdp traded.mat. The estimated process is quite persistent, with a serial correlation of 0.93. It is also quite volatile: the implied unconditional standard deviation of output is 10 percent. Such a volatile process gives risk-averse domestic agents a strong incentive to use the current account to smooth consumption over time. Following Chatterjee and Eyigungor (2012), we consider a two-parameter specification of the output loss function by setting $a_0 = 0$ in equation (11.31). We set $a_1 = -0.35$ and $a_2 = 0.4403$, which are the values assigned in Na, Schmitt-Grohé, Uribe, and Yue (2014). Also following these authors, we calibrate $\beta$, the subjective discount factor, at 0.85.
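Two of the calibration numbers above can be checked in a few lines: the reentry probability implied by a 6.5-year average exclusion (via equation (11.30)) and the unconditional volatility implied by the estimated AR(1):

```python
import math

# theta from the average exclusion period: 1/theta = 6.5 years x 4 quarters.
theta = 1.0 / (6.5 * 4)
print(round(theta, 4))     # -> 0.0385

# The geometric exclusion-length distribution indeed has mean 1/theta:
mean_excl = sum(j * (1 - theta) ** (j - 1) * theta for j in range(1, 20000))
print(abs(mean_excl - 1.0 / theta) < 1e-6)   # -> True

# Unconditional std of ln y implied by ln y_t = rho ln y_{t-1} + sigma_eps eps_t:
rho, sigma_eps = 0.9317, 0.037
print(round(sigma_eps / math.sqrt(1.0 - rho ** 2), 3))   # -> 0.102, about 10 percent
```

Both results reproduce the figures quoted in the text.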
Together with the rest of the parameter values, the chosen value for the triplet (a1, a2, β) produces the following three equilibrium implications: (a) the average debt-to-GDP ratio in periods of good financial standing is about 60 percent per quarter; (b) the frequency of default is 2.6 times per century; and (c) the average output loss is 7 percent per year conditional on being in financial autarky. These three targets are empirically justified as follows: (a) The target of a 60 percent quarterly debt-to-output ratio is motivated as follows. The net external debt in Argentina over the inter-default period 1994 to 2001 fluctuated around 30 percent of GDP (Lane and Milesi-Ferretti, 2007). At the same time, the haircuts in the 1982 and 2001 defaults were on average about 50 percent (Cruces and Trebesch, 2013).12 Since the Eaton-Gersovitz model assumes that the country defaults on 100 percent of the debt, we assume that only 50 percent of the country's external debt is unsecured, and thus target an annual debt-to-output ratio of 15 percent or a quarterly debt-to-output ratio of 60 percent.

12 Since 1975, Argentina restructured its debt 4 times, in August 1985, in August 1987, in April 1993, and in April 2005. The corresponding haircuts were, respectively, 0.303, 0.217, 0.325, and 0.768, and the amount of debt upon which these haircuts were applied were, respectively, 9.9, 29.5, 28.5, and 60.6 billion dollars. Therefore, the debt-weighted average haircut is 50.7 percent.

Open Economy Macroeconomics, Chapter 11

Table 11.5: Calibration of the Default Model

Parameter   Value              Description
σ           2                  Inverse of intertemporal elasticity of consumption
β           0.85               Quarterly subjective discount factor
r∗          0.01               World interest rate
θ           0.0385             Probability of reentry
a0          0                  Parameter of output loss function
a1          −0.35              Parameter of output loss function
a2          0.4403             Parameter of output loss function
ρ           0.9317             Serial correlation of ln yt
σ           0.037              Std. dev. of innovation εt

Discretization of State Space
ny          200                Number of output grid points (equally spaced in logs)
nd          200                Number of debt grid points (equally spaced)
[y, ȳ]      [0.6523, 1.5330]   Output range
[d, d̄]      [0, 1.5]           Debt range

Note. The time unit is one quarter.

(b) The predicted average frequency of default of 2.6 times per century is in line with the Argentine experience since the late 19th century. Table 11.1 implies that Argentina defaulted 4 times between 1824 and 1999. Between 1999 and 2013, Argentina defaulted once more, resulting in 5 defaults over 190 years, or 2.6 times per century. (c) The implied average output loss of 7 percent per year for the duration of the default status is in the lower range of our estimates based on Zarazaga's methodology for calculating output losses in the Argentine defaults of 1982 and 2001 (see the discussion in section 11.2.2 of this chapter). The assumed value of β is low compared to the values used in models without default, but not uncommon in models à la Eaton-Gersovitz (see, for example, Mendoza and Yue, 2012). Table 11.5 summarizes the calibration of the model.

We approximate the equilibrium dynamics by value-function iteration over a discretized state space. The AR(1) process for ln yt given in equation (11.29) takes on continuous values, because the innovation εt is assumed to be normally distributed. We discretize this process using 200 equally spaced points for ln yt. The first and last values of the grid for ln yt are set to ±4.2 times the unconditional standard deviation of the estimated process. Given the assumed normality of the process, the probability that (the log of) an output observation falls outside of this range is less than 10⁻⁴. The values of σ and ρ given in equation (11.34) imply an unconditional standard deviation of ln yt of 0.037/√(1 − 0.9317²) = 0.102. Thus, the first and last points of the grid for ln yt are ±0.427.
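The grid arithmetic just described can be reproduced in a few lines. The sketch below (Python, while the book's own codes are Matlab) recovers the unconditional standard deviation of about 0.102, the grid half-width of ±0.427, the output range [0.6523, 1.5330] in levels reported in Table 11.5, and the claim that the mass outside ±4.2 standard deviations is below 10⁻⁴:

```python
import math
import numpy as np

rho, sigma = 0.9317, 0.037
sd = sigma / math.sqrt(1 - rho**2)         # unconditional std of ln y, ~0.102

ny = 200
width = 4.2 * sd                            # grid half-width, ~0.427
log_grid = np.linspace(-width, width, ny)   # equally spaced in logs
y_grid = np.exp(log_grid)                   # output grid in levels, ~[0.652, 1.533]

# normal mass outside +-4.2 unconditional standard deviations
tail_prob = math.erfc(4.2 / math.sqrt(2))   # well below 1e-4
```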
To construct the transition probability matrix of the process ln yt, we apply the iterative procedure proposed by Schmitt-Grohé and Uribe (2013). Specifically, we simulate a time series of length 10 million drawn from the process (11.34). We associate each observation in the time series with one of the 200 possible discrete values of ln yt by distance minimization. The resulting discrete-valued time series is used to compute the probability of transitioning from a particular discrete state in one period to a particular discrete state in the next period. Given the discretized series of draws, the algorithm proceeds as follows. Start with a 200 × 200 matrix of zeros. Suppose the first draw is element i of the grid and the second draw is element j of the grid. Then, add 1 to element (i, j) of the 200 × 200 matrix. Now suppose that the third draw is element k of the grid. Then add 1 to element (j, k) of the 200 × 200 matrix. Continue in this way until draw 10⁷. Then divide each row of the 200 × 200 matrix by the sum of its 200 elements. The resulting matrix is the transition probability matrix we wished to estimate. The resulting transition probability matrix captures well the covariances of order 0 and 1. This matrix is available in the file tpm.mat, and the Matlab code used to compute it is available in the file tpm.m.

Table 11.6: Selected First and Second Moments: Data and Model Predictions

                      Data    Model
Default frequency      2.7     2.7
E(d/y)                58.0    59.0
E(r − r∗)              7.4     3.5
σ(r − r∗)              2.9     3.2
corr(r − r∗, y)      −0.64   −0.54
corr(r − r∗, tb/y)    0.72    0.81

Note. Data moments are from Argentina over the inter-default period 1994:1 to 2001:3, except for the default frequency, which is calculated over the period 1824 to 2013.
The variable d/y denotes the quarterly debt-to-GDP ratio in percent, r − r∗ denotes the country premium, in percent per year, y denotes (quarterly detrended) output, and tb/y denotes the trade-balance-to-GDP ratio. The symbols E, σ, and corr denote, respectively, the mean, the standard deviation, and the correlation. In the theoretical model, all moments are conditional on the country being in good financial standing. Theoretical moments were computed by running the Matlab script statistics_model.m.

An alternative method for computing the transition probability matrix of the exogenous state is the quadrature-based method proposed by Tauchen and Hussey (1991). Finally, the stock of net external debt, dt, is also a continuous state of the model. We discretize this variable with a grid of 200 equally spaced points starting at 0 and ending at 1.5. These two values were chosen by trial and error. Widening the grid did not produce significant changes in the shape and position of the debt distribution. The Matlab code eg.m computes the equilibrium policy functions. Table 11.6 displays selected empirical and theoretical first and second moments. By design, the model fits the average default frequency of 2.7 times per century and the average quarterly debt-to-GDP ratio of about 60 percent. But the model also does quite well at replicating moments that were not targeted in the calibration. Specifically, it explains the observed volatility and countercyclicality of the country premium, as well as its positive correlation with the trade-balance-to-GDP ratio.

The countercyclicality of default is intuitive. The lower is output, the harder it is for the country to give up goods to service the debt. This result is also in line with the theoretical analysis of section 11.4. There, we proved that the default set is an interval, implying that, all other things equal, if the country defaults at a certain level of the endowment, it also defaults at all other lower levels.
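The simulation-based discretization described above (snap a long simulated path to the grid by distance minimization, count transitions, normalize each row to sum to one) can be sketched as follows. This is a Python sketch of the procedure rather than the book's tpm.m Matlab code, and the grid size and number of draws are scaled down from 200 points and 10 million draws for speed:

```python
import numpy as np

def simulated_tpm(grid, rho, sigma, n_draws=1_000_000, seed=0):
    """Transition matrix for an AR(1) by simulation: draw a long path,
    snap each draw to the nearest grid point, count transitions, and
    divide each row by its sum (the book uses 10 million draws)."""
    rng = np.random.default_rng(seed)
    eps = sigma * rng.standard_normal(n_draws)
    x = np.empty(n_draws)
    x[0] = 0.0
    for t in range(1, n_draws):
        x[t] = rho * x[t - 1] + eps[t]
    # nearest grid point = searchsorted against bin midpoints (grid is sorted)
    mid = (grid[:-1] + grid[1:]) / 2
    idx = np.searchsorted(mid, x)
    n = len(grid)
    counts = np.zeros((n, n))
    np.add.at(counts, (idx[:-1], idx[1:]), 1)   # count i -> j transitions
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                       # unvisited states keep zero rows
    return counts / rows

# coarse illustration (the chapter uses a 200-point grid)
grid = np.linspace(-0.427, 0.427, 21)
P = simulated_tpm(grid, rho=0.9317, sigma=0.037, n_draws=200_000)
```

Each visited row of P is a proper probability distribution, and with a persistent process the mass concentrates near the diagonal.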
The quantitative model studied here features an additional incentive not to default when output is high that is absent in the canonical model of section 11.4, namely, an output cost of default, L(y), that increases more than proportionally with the level of y. The positive correlation between the trade balance and the country premium is also in line with the analytical results of section 11.4. Proposition 11.1 shows that in any period in which the default risk is positive, the trade balance must also be positive. Intuitively, when the country is at risk of default, foreign lenders demand that the economy make an effort to improve its financial situation by at least paying part of the interest due. The model, however, explains only half of the observed average country premium in Argentina (3.5 versus 7.4 percent per year). Indeed, in the context of the present model, it is impossible to explain both the observed average frequency of default and the observed average country premium. This is because in the model the average country premium is approximately equal to the average frequency of default. To see this, note that the country premium, r − r∗ ≡ 1/q(d′, y) − (1 + r∗), is approximately equal to ln[1/(q(d′, y)(1 + r∗))]. Then, from equation (11.33) we have that

r − r∗ ≈ ln[1/(q(d′, y)(1 + r∗))]
       = ln[1/Prob{repayment in t + 1 given information in t}]
       = ln[1/(1 − Prob{default in t + 1 given information in t})]
       ≈ Prob{default in t + 1 given information in t}.

According to this expression, the model can explain either the average country premium or the average frequency of default, but not both at the same time, unless both moments are the same in the data.13 A natural question is why the frequency of default and the average country premium are so different in the data. After all, what else could explain country spreads other than default risk? One reason for the discrepancy could be a short sample problem.
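The chain of approximations above is easy to verify numerically. A minimal sketch, using the risk-neutral pricing relation q = (1 − p)/(1 + r∗) implied by equation (11.33) with a one-period default probability p, shows that for small p both the level spread 1/q − (1 + r∗) and its log version −ln(1 − p) are close to p itself:

```python
import math

r_star = 0.01
results = {}
for p in (0.005, 0.027, 0.10):                     # one-period default probabilities
    q = (1 - p) / (1 + r_star)                     # risk-neutral bond price
    spread = 1 / q - (1 + r_star)                  # r - r*
    log_spread = math.log(1 / (q * (1 + r_star)))  # = -log(1 - p)
    results[p] = (spread, log_spread)
```

At p = 0.027 (the model's quarterly-calibrated neighborhood), both measures differ from p by only a few basis points; the gap widens as p grows, since the approximation ln(1/(1 − p)) ≈ p is a first-order one.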
The average country premium is based on relatively few observations, the 31 quarters covering the period 1994:Q1 to 2001:Q3. By contrast, the default frequency is computed for a much longer sample, spanning the period 1824 to 2013. Another possible explanation for the difference between the observed country spread and the frequency of default is that the structure of the economy could have changed substantially over time. In this case, it may be possible that the default frequency has increased in the past few decades, rendering inappropriate the use of a long historical sample. Consider, for instance, the default history of Argentina over the past four decades. Between 1975 and 2014, Argentina defaulted twice, in 1982 and in 2001, which implies a default frequency of 5 defaults per one hundred years. This number is almost twice as large as the one based on the sample 1824-2014. There can also be theoretical reasons, not captured in the present model, for a discrepancy between the average country premium and the frequency of default. We derived the result that the default frequency is approximately equal to the country premium under the assumption that foreign lenders are risk neutral. Altering this assumption may break the equality result. Lizarazo (2013) shows that this is indeed the case by augmenting an otherwise standard Eaton-Gersovitz model with the assumption of risk-averse lenders. She shows that for plausible calibrations this assumption increases the predicted average country spread substantially without significantly affecting the average probability of default. The reason for the predicted increase in the country spread is that in this type of environment, the country spread is the sum of two compensations, one for the possibility of default and a second one necessary to induce risk-sensitive creditors to accept the default risk.

13 The frequency of default reported in table 11.6 is defined as the number of defaults per one hundred years, whereas the probability of default that appears on the right-hand side of the above expression (Prob{default in t + 1 given information in t}) indicates the average number of defaults per one hundred years of good financial standing. In the model, these two moments are quite similar, 2.7 and 3.2, respectively.

Dynamics Around The Typical Default Episode

What happens around default episodes? To answer this question, we simulate 1.1 million time periods from the theoretical model. After discarding the first 0.1 million periods, we identify all periods in which a default occurs and extract a window of 12 quarters prior to and 12 quarters after each default episode. Finally, we compute the median period by period across all windows and normalize the period of default to 0. Figure 11.9 presents the behavior of the economy around the typical default episode. Defaults occur after a sudden contraction in output. As shown in the upper left panel, y is at its mean level of unity until three quarters prior to default. Then, three consecutive negative shocks push y 1.3 standard deviations below normal. One may wonder whether a fall in traded output of this magnitude squares with a default frequency of only 2.6 per century. The reason why it does is that it is the sequence of output shocks that matters. The probability of traded output falling from its mean value to 1.3 standard deviations below mean in only three quarters is much lower than the unconditional probability of traded output being 1.3 standard deviations below mean. In period 0, the government defaults, triggering a loss of output L(yt), as shown by the difference between the solid and the broken lines in the upper left panel. After the default, output begins to recover. Thus, the period of default coincides with the trough of the contraction in the tradable endowment, yt. The same is true for GDP measured in terms of tradables.
Therefore, the model captures the empirical regularity regarding the cyclical behavior of output around default episodes documented in figure 11.2 and first identified by Levy-Yeyati and Panizza (2011), according to which default marks the end of a contraction and the beginning of a recovery.

Figure 11.9: Typical Default Episode

[Figure: panels showing output (yT, with the post-default level ỹT), consumption, the trade balance, the trade-balance-to-output ratio, and the risk premium over a 25-quarter window around the default date.]

Note. Solid lines display medians of 25-quarter windows centered around default episodes occurring in an artificial time series of 1 million quarters. The default date is normalized to 0. Dotted lines display unconditional medians. The figure is produced by running the Matlab script typical_default_episode.m.

As can be seen from the top right panel of the figure, the model predicts that the country does not smooth out the temporary decline in the tradable endowment. Instead, the country sharply adjusts the consumption of tradables downward, by 14 percent. The contraction in consumption is actually larger than the contraction in the endowment, so that the trade balance improves. In fact, the trade balance surplus is large enough to generate a slight decline in the level of external debt. These dynamics seem at odds with the quintessential dictum of the intertemporal approach to the balance of payments according to which countries should finance temporary declines in income by external borrowing. The country deviates from this prescription because foreign lenders raise the interest rate premium prior to default. This increase in the cost of credit discourages borrowing and induces agents to postpone consumption.
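The event-window construction behind the typical-episode figure (locate default dates in a long simulation, cut a 25-quarter window around each, take pointwise medians) can be sketched as follows. This is a Python illustration on made-up data, not the book's typical_default_episode.m script; the synthetic "output" series simply drifts down in the three quarters before each artificial default, mimicking the model's pattern:

```python
import numpy as np

def typical_episode(series, default_flag, half_window=12):
    """Pointwise median of `series` over windows centered on default dates.
    Windows that would run past either end of the sample are dropped."""
    T = len(series)
    dates = np.flatnonzero(default_flag)
    windows = [series[t - half_window:t + half_window + 1]
               for t in dates
               if half_window <= t < T - half_window]
    return np.median(np.vstack(windows), axis=0)

# Made-up illustration: output falls for three quarters before each "default"
rng = np.random.default_rng(1)
T = 10_000
y = 1 + 0.02 * rng.standard_normal(T)
d = np.zeros(T, dtype=bool)
for t in range(500, T, 500):
    y[t - 3:t + 1] -= np.array([0.03, 0.06, 0.10, 0.13])  # run-up to default
    d[t] = True

path = typical_episode(y, d)   # length 25; trough at the default date (index 12)
```

Taking medians rather than means, as the chapter does, keeps a few extreme episodes from dominating the typical path.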
Goodness of Approximation of the Eaton-Gersovitz Model

We used the quantitative predictions of the Eaton-Gersovitz model to gauge its ability to explain observed patterns of default and country spread dynamics and their comovement with other macroeconomic indicators. Because the model does not have a closed-form solution, its quantitative predictions are based on an approximation. As a result, the validity of the model evaluation requires trust in the accuracy of the approximate solution. The question of how close the approximation is to the true solution is impossible to answer with certainty because the latter is unknown. One way to address this issue is based on the reasonable assumption that as the number of grid points is increased, the approximate solution gets closer to the true solution. This suggests an accuracy test consisting in examining how stable the quantitative predictions of the model are to varying the number of grid points. Hatchondo, Martínez, and Sapriza (2010) find that the numerical solution of the Eaton-Gersovitz model deteriorates significantly when the endowment grid is coarsely specified. The deterioration affects primarily the volatility and comovement of the country premium. To check the validity of their result in the context of the present parameterization of the Eaton-Gersovitz model, in table 11.7 we present the predictions of the model under a number of alternative grid specifications.

Table 11.7: Approximating the Eaton-Gersovitz Model: Accuracy Tests

                      Data   Model*   Model   Model   Model   Model
ny                     —      200      25      400     200     400
nd                     —      200     200      200     400     400
Default frequency     2.70    2.65    2.30    2.63    2.65    2.65
E(d/y)               58.00   59.05   69.43   58.64   59.46   59.46
E(r − r∗)             7.44    3.47    3.01    3.43    3.44    3.44
σ(r − r∗)             2.89    3.21    4.20    3.12    3.13    3.13
corr(r − r∗, y)      −0.64   −0.54   −0.28   −0.55   −0.55   −0.55
corr(r − r∗, tb/y)    0.72    0.81    0.44    0.82    0.83    0.83

Note. Data moments are from Argentina over the inter-default period 1994:1 to 2001:3, except for the default frequency, which is calculated over the period 1824 to 2013. The variable d/y denotes the quarterly debt-to-GDP ratio in percent, r − r∗ denotes the country premium, in percent per year, y denotes (quarterly detrended) output, and tb/y denotes the trade-balance-to-GDP ratio. The symbols E and σ denote, respectively, the mean and the standard deviation. The symbols ny and nd denote the number of grid points for the endowment and debt, respectively. In the theoretical model, all moments are conditional on the country being in good financial standing. Theoretical moments were computed by running the Matlab script statistics_model.m after appropriately adjusting the number of grid points in eg.m. *Baseline grid specification.

Consider an approximation based on an endowment grid containing 25 equally spaced points, 8 times coarser than the baseline grid specification, which contains 200 equally spaced endowment points. A specification with 25 points is of interest because it is representative of the one used in most early quantitative default studies (e.g., Arellano, 2008; and Aguiar and Gopinath, 2006). The table shows that the coarser approximation affects mostly the correlation of the country premium with output and with the trade-balance-to-output ratio. Although the sign is preserved, the magnitude of both correlations falls to about half. Also affected are the standard deviation of the country premium and the average debt-to-output ratio. The former is an entire percentage point higher than under the baseline grid specification and the latter is 10 percentage points higher. A natural question is whether the predictions of the model also change as one increases the number of endowment points above the baseline value of 200. Table 11.7 shows that this is not the case. All first and second moments displayed are quite stable as the number of endowment grid points is doubled from 200 to 400.
Furthermore, the predictions under the baseline grid specification do not change substantially as one doubles the number of debt grid points from 200 to 400, either holding constant or doubling the number of endowment grid points. We therefore conclude that the baseline grid specification (with 200 points for the endowment and 200 points for debt) yields a reasonable numerical approximation to the equilibrium dynamics of the Eaton-Gersovitz model studied here.

Alternative Output Cost Specification

Thus far, we have assumed a two-parameter specification of the output cost function L(yt) that is quadratic above some level of output. Another form that is often used in the default literature is one in which during periods of bad financial standing all output beyond a certain threshold is lost. This specification is given in equation (11.32) and illustrated with a dashed line in figure 11.8. As mentioned earlier, it is a special case of the three-parameter quadratic form given in (11.31) that arises when the coefficient of the constant term, a0, takes a negative value, the coefficient of the linear term, a1, is unity, and the coefficient of the quadratic term, a2, is nil. To make the calibration of the model under this cost function comparable to the one associated with the baseline (quadratic) specification, we set β and a0 to match the average debt-to-output ratio and the average default frequency observed in Argentina. This yields β = 0.875 and a0 = −0.88. The fact that the present specification features one parameter fewer than the quadratic specification means that one targeted empirical statistic must be left out. In this case, it is the average output cost during periods of bad financial standing. The present model delivers an average output cost of default of 8.3 percent of the endowment for the duration of the bad financial status.
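The two cost functions being compared — the baseline quadratic form L(y) = max(0, −0.35y + 0.4403y²) and the flat form L(y) = max(0, −0.88 + y), both as given in the note to table 11.8 — are simple enough to evaluate directly. The sketch below (Python, for illustration) computes the no-cost thresholds and the loss at the mean endowment level y = 1:

```python
def loss_quadratic(y, a1=-0.35, a2=0.4403):
    """Baseline cost, L(y) = max(0, a1*y + a2*y**2): the loss rises more
    than proportionally with y, strengthening repayment incentives in booms."""
    return max(0.0, a1 * y + a2 * y * y)

def loss_flat(y, a0=-0.88):
    """Alternative cost, L(y) = max(0, a0 + y): all output above a fixed
    threshold is lost during bad financial standing."""
    return max(0.0, a0 + y)

threshold_quad = 0.35 / 0.4403            # no output cost below y ~ 0.795
loss_at_mean_quad = loss_quadratic(1.0)   # ~0.09, i.e. about 9% of mean output
loss_at_mean_flat = loss_flat(1.0)        # 0.12, i.e. 12% of mean output
```

Under both specifications a sufficiently depressed economy bears no output cost of default at all, which, as the text notes, weakens the incentive to repay exactly when output is low.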
This number is larger than the one corresponding to the baseline calibration (7.1 percent) but still within the range estimated for Argentina by Zarazaga (see section 11.2.2). The predictions of the model are displayed in table 11.8. For comparison, the table reproduces from table 11.6 the empirical moments and the moments predicted by the baseline model. Overall, the model with a flat post-default endowment performs as well as the model with quadratic post-default output. Moreover, as one makes the output grid coarser (in the table ny is reduced from 200 to 25), the model deteriorates along the same dimensions as it does under the quadratic specification, namely, by overpredicting the volatility of the country spread and by underpredicting the correlation of the spread with output and the correlation of the spread with the trade-balance-to-output ratio.

[TO BE CONTINUED]

Table 11.8: The Eaton-Gersovitz Model: Alternative Output Cost Specification

                      Data   Quadratic   Flat   Flat and ny = 25
Default frequency      2.7      2.7       2.8        2.4
E(d/y)                58.0     59.0      59.9       71.4
E(r − r∗)              7.4      3.5       3.5        3.1
σ(r − r∗)              2.9      3.2       4.2        5.7
corr(r − r∗, y)      −0.64    −0.54     −0.43      −0.26
corr(r − r∗, tb/y)    0.72     0.81      0.74       0.48

Note. See note to table 11.6. Quadratic refers to the baseline specification, L(y) = max(0, −0.35y + 0.4403y²). Flat refers to the specification L(y) = max(0, −0.88 + y).

Table 11.9: Sovereign Default Episodes 1975-2014

Country                    Start of Default   End of Default   Data Source
Albania                    1991               1995             BC, Table 7.
Algeria                    1991               1996             BC, Table 7.
Angola                     1976               1976             BC, Table 4.
Angola                     1985               2003             BC, Table 7.
Antigua and Barbuda        1996               2011             BC, Table 7; BN.
Argentina                  1982               1993             BC, Table 6.
Argentina                  2001               2005             BC, Table 6.
Belize                     2006               2007             CG, Table 2.
Belize                     2012               2013             CG, Table 2.
Bolivia                    1980               1984             BC, Table 6.
Bolivia                    1986               1997             BC, Table 6.
Bosnia and Herzegovina     1992               1997             BC, Table 7.
Brazil                     1983               1994             BC, Table 6.
Bulgaria                   1990               1994             BC, Table 6.
Burkina Faso               1983               1996             BC, Table 6.
Cabo Verde                 1981               1996             BC, Table 7.
Cameroon                   1985               2004             BC, Table 4.
Central African Republic   1981               1981             BC, Table 7.
Central African Republic   1983               2012*            BC, Table 7; BN.
Chile                      1983               1990             BC, Table 6.
Congo, Dem. Rep.           1976               2010             BC, Table 7; BN.
Congo, Rep.                1983               2012*            BC, Table 7; BN.
Costa Rica                 1981               1990             BC, Table 4.
Cote d'Ivoire              1983               1998             BC, Table 7.
Cote d'Ivoire              2000               2013*            BC, Table 7; BN.
Croatia                    1992               1996             BC, Table 6.
Cuba                       1982               2013*            BC, Table 7; BN.
Cyprus                     2013               2013             CG, Table 2.
Dominica                   2003               2005             BC, Table 7.
Dominican Republic         1975               2001             BC, Table 4.
Dominican Republic         2005               2005             CG, Table 2.
Ecuador                    1982               1995             BC, Table 6.
Ecuador                    1999               2000             BC, Table 6.
Ecuador                    2008               2009             CG, Table 2.
El Salvador                1981               1996             BC, Table 4.
Ethiopia                   1991               1999             BC, Table 7.
Gabon                      1986               1994             BC, Table 7.
Gabon                      1999               2005             BC, Table 7.
Gambia, The                1986               1990             BC, Table 7.
Ghana                      1979               1979             BC, Table 4.
Ghana                      1982               1982             BC, Table 4.
Ghana                      1987               1987             BC, Table 6.
Greece                     2012               2012             CG, Table 2.
Grenada                    2004               2005             CG, Table 2.
Grenada                    2012               2012             CG, Table 2.
Grenada                    2013               2014*            CG, Table 2.
Guatemala                  1986               1986             BC, Table 4.
Guatemala                  1989               1989             BC, Table 6.
Guinea                     1986               1988             BC, Table 7.
Guinea                     1991               1998             BC, Table 7.
Guinea-Bissau              1983               1996             BC, Table 7.
Guyana                     1979               1979             BC, Table 7.
Guyana                     1982               2006             BC, Table 7.
Haiti                      1982               1994             BC, Table 7.
Honduras                   1981               2006             BC, Table 7; BN.
Indonesia                  1999               2000             CG, Table 2.
Indonesia                  2002               2002             CG, Table 2.
Iran, Islamic Rep.         1978               1995             BC, Table 7.
Iraq                       1987               2006             BC, Table 7; BN.
Jamaica                    1978               1979             BC, Table 6.
Jamaica                    1981               1985             BC, Table 6.
Jamaica                    1987               1993             BC, Table 6.
Jamaica                    2010               2010             CG, Table 2.
Jamaica                    2013               2013             CG, Table 2.
Jordan                     1989               1993             BC, Table 6.
Kenya                      1994               1998             BC, Table 4.
Kenya                      2000               2000             BC, Table 4.
Korea, Dem. Rep.           1974               2013*            BC, Table 7; BN.
Kuwait                     1990               1991             BC, Table 4.
Liberia                    1981               2010             BC, Table 7; BN.
Macedonia, FYR             1992               1997             BC, Table 6.
Madagascar                 1981               2002             BC, Table 6.
Malawi                     1982               1982             BC, Table 7.
Malawi                     1988               1988             BC, Table 7.
Mali                       2012               2012*            CG, Table 2.
Mauritania                 1992               1996             BC, Table 7.
Mexico                     1982               1990             BC, Table 6.
Moldova                    1998               1998             BC, Table 7.
Moldova                    2002               2002             BC, Table 7.
Mongolia                   1997               2000             BC, Table 4.
Morocco                    1983               1983             BC, Table 6.
Morocco                    1986               1990             BC, Table 6.
Mozambique                 1980               1980             BC, Table 6.
Mozambique                 1983               1992             BC, Table 6.
Myanmar                    1984               1984             BC, Table 5.
Myanmar                    1987               1987             BC, Table 5.
Myanmar                    1997               2012*            BC, Table 7; BN.
Nicaragua                  1979               2008             BC, Table 7; BN.
Niger                      1983               1991             BC, Table 7.
Nigeria                    1982               1992             BC, Table 6.
Nigeria                    2001               2001             BC, Table 6.
Nigeria                    2004               2005             BC, Table 6.
Pakistan                   1999               1999             CG, Table 2.
Panama                     1983               1996             BC, Table 6.
Paraguay                   1986               1992             BC, Table 6.
Paraguay                   2003               2004             CG, Table 2.
Peru                       1978               1978             BC, Table 6.
Peru                       1980               1980             BC, Table 6.
Peru                       1983               1997             BC, Table 4.
Philippines                1983               1992             BC, Table 6.
Poland                     1981               1994             BC, Table 6.
Romania                    1981               1983             BC, Table 6.
Romania                    1986               1986             BC, Table 6.
Russian Federation         1991               1997             BC, Table 6.
Russian Federation         1998               2000             BC, Table 6.
Rwanda                     1995               1995             BC, Table 5.
Sao Tome and Principe      1987               1994             BC, Table 7.
Senegal                    1981               1985             BC, Table 6.
Senegal                    1990               1990             BC, Table 6.
Senegal                    1992               1996             BC, Table 6.
Serbia                     1992               2004             BC, Table 6.
Seychelles                 2000               2002             BC, Table 6.
Seychelles                 2008               2009             CG, Table 2; BN.
Sierra Leone               1983               1984             BC, Table 7.
Sierra Leone               1986               1995             BC, Table 7.
Sierra Leone               1997               1998             BC, Table 5.
Slovenia                   1992               1996             BC, Table 6.
Solomon Islands            1995               2011             BC, Table 5; BN.
South Africa               1985               1987             BC, Table 6.
South Africa               1989               1989             BC, Table 6.
South Africa               1993               1993             BC, Table 6.
South Sudan                1979               2013*            BC, Table 7; BN.
St. Kitts and Nevis        2011               2013*            TDMOVP.
Suriname                   2000               2002             BC, Table 4.
Tanzania                   1984               2004             BC, Table 7.
Togo                       1979               1980             BC, Table 7.
Togo                       1982               1984             BC, Table 7.
Togo                       1988               1988             BC, Table 7.
Togo                       1991               1997             BC, Table 7.
Trinidad and Tobago        1988               1989             BC, Table 6.
Turkey                     1978               1979             BC, Table 6.
Turkey                     1982               1982             BC, Table 6.
Uganda                     1980               1993             BC, Table 7.
Ukraine                    1998               2000             BC, Table 6.
Uruguay                    1983               1985             BC, Table 6.
Uruguay                    1987               1987             BC, Table 6.
Uruguay                    1990               1991             BC, Table 6.
Uruguay                    2003               2003             CG, Table 2.
Venezuela, RB              1983               1988             BC, Table 6.
Venezuela, RB              1990               1990             BC, Table 6.
Venezuela, RB              1995               1998             BC, Table 4.
Venezuela, RB              2005               2005             CG, Table 2.
Vietnam                    1975               1975             BC, Table 4.
Vietnam                    1985               1998             BC, Table 6.
Yemen, Rep.                1985               2001             BC, Table 7.
Zambia                     1983               1994             BC, Table 7.
Zimbabwe                   1965               1980             BC, Table 7.
Zimbabwe                   2000               2012*            BC, Table 7; BN.

Note. *Most recent available observation (not necessarily emerged from default). BC = Beers and Chambers (2006). CG = Chambers and Gurwitz (2014). TDMOVP = Tudela et al. (2014). BN = Beers and Nadeau (2014).

Exercise 11.1 Sturzenegger and Zettelmeyer (2006) argue that default episodes tend to happen in clusters. Devise an empirical strategy and use the information provided in table 11.9 to ascertain whether the data provided there supports or contradicts their finding.

Exercise 11.2 Figure 11.5 shows that the capital-output ratio in Argentina peaked in the run-up to the defaults of 1982 and 2001 and fell significantly thereafter. This exercise aims at establishing whether this finding holds more generally. To this end, proceed as follows: 1.
Use the World Development Indicators database to download data on real GDP per capita and real gross capital formation per capita (i.e., investment). The primary series to use here are GDP per capita in constant local currency units (NY.GDP.PCAP.KN) and gross capital formation in percent of GDP (NE.GDI.TOTL.ZS). Let Yit and Iit denote, respectively, real per capita output and real per capita investment in country i in year t.

2. For each country, compute the average growth rate of real per capita output, denoted gi (i.e., gi = 0.02 means 2 percent).

3. Assume that the capital stock in country i evolves according to

Ki,t+1 = (1 − δ)Kit + Iit,    (11.35)

where δ denotes the depreciation rate, which is assumed to be the same in all countries. Use this expression to construct a time series for capital. Set δ = 0.1 (or 10%). We need an initial value for the capital stock, Ki1. Assume that Ki2 = (1 + gi)Ki1, that is, assume that between periods 1 and 2, the capital stock grew at the average growth rate of the economy. Use this assumption, equation (11.35), and Ii1 to obtain Ki1.

4. Now use Ki1, the time series Iit, and iterations on equation (11.35) to derive a time series for Kit.

5. For each country i, use the time series for capital, Kit, and output, Yit, to construct a time series for the capital-output ratio, Kit/Yit.

6. Combine the data on the capital-output ratio with the data on default dates from table 11.9 to produce a figure (in the spirit of figure 11.2) displaying the typical behavior of the (demeaned) capital-output ratio around default episodes.

7. Discuss to what extent the behavior of the capital-output ratio pre- and post-default in Argentina is representative of what happens around the typical default episode.

Exercise 11.3 [No Excessive Punishment] In section 11.3.2 we studied an environment in which creditors seize k units of goods from delinquent debtors.
This threat could be viewed as excessive punishment in some states, because it implies that creditors will take away k units of goods even if the size of the defaulted debt is smaller than k. A more compelling assumption is that the punishment takes the form min{k, d(ε)}, where k ∈ (0, εH) and d(ε) denotes the debt obligation in state ε. Show that the analysis of section 11.3.2 goes through under this assumption.

Exercise 11.4 [Proportional Sanctions] In the model with direct sanctions of section 11.3.2, replace the assumption of a constant sanction k with the assumption of a proportional sanction k(ε) ≡ α(ȳ + ε). Characterize the optimal debt contract. How does it compare with the case of constant sanctions?

Exercise 11.5 [Moral Sanctions] Consider another variant of the model with direct sanctions of section 11.3.2. Suppose that direct sanctions are not possible, that is, k = 0. Instead, assume that defaulting countries experience a self-inflicted moral punishment. Specifically, assume that the utility of the country that defaults in state ε is given by u(ȳ + ε) − m, where m > 0 is a parameter defining the severity of the moral punishment.

1. Write the incentive compatibility constraint.

2. Characterize the optimal debt contract d(ε) and compare it to the one corresponding to the case of direct sanctions.

Exercise 11.6 [Non-zero Opportunity Cost of Lending] Consider yet another variant of the model with direct sanctions of section 11.3.2. Suppose that the opportunity cost of funds of foreign lenders is not zero but positive and equal to the constant r. Suppose first that the borrowing country does not have the option of not writing a debt contract with foreign lenders.

1. What restrictions on k and r do you need to impose to guarantee the existence of a nonautarkic equilibrium?

2. Write the participation constraint.

3.
Characterize the optimal debt contract d(ε) and compare it to the one corresponding to the case of zero opportunity costs.

4. How do the answers to the above two questions change if the borrowing country is assumed to have the option of not writing a contract with foreign lenders?

Exercise 11.7 [Reputation, Complete Asset Markets, And Reentry] Extend the reputational model of section 11.3.2 by allowing for the possibility of regaining access to international capital markets after default. Specifically, assume that with constant probability δ ∈ (0, 1) defaulters can reenter capital markets in the next period.

1. Derive the value function of a country in bad financial standing, v^b(ε), as a function of current and future expected values of u(ȳ + ε) and u(ȳ + ε − d(ε)) only.

2. Write down the incentive compatibility constraint.

3. Write down the optimization problem of the country and its associated Lagrangian.

4. Derive the optimality conditions of the country's problem.

5. Show that all of the results of section 11.3.2 pertaining to the reputation model hold under this extension.

6. It is intuitively obvious that if δ = 1, lending breaks down, since in this case lenders have no way to punish delinquent debtors. Show this result formally.

Exercise 11.8 Show that in the default model of section 11.4, the current account, denoted ca, can be written as ca = q(d)d − q(d′)d′.

Exercise 11.9 Show that Proposition 11.1 holds when the endowment process is assumed to be serially correlated.

Exercise 11.10 Consider a perfect-foresight, endowment economy in which an equilibrium sequence of net external debt {dt}, t ≥ −1, is supported on reputational grounds when saving in international markets is not allowed after default. Suppose that this sequence contains no maximal element but has a positive least upper bound d̄. Show that the reputational equilibrium with debt breaks down
if the country is allowed to save in international financial markets after default.
Contrary Currents

Good times are rumored to be short-lived. Bad times, too, say optimists. Reality may flow unquestioned and unhindered like a steady torrent for a long time. Every once in a while, however, for better or for worse, change's trumpet sounds, like a stone flung into that stream, creating turbulence and unpredictable currents. The current era, in terms of global health, is one such change-causing stone.

Change! Its notion, entwined with its possibility and promise, excites the weary, alarms the unsuspecting, worries the timid. A tiny word primed to trigger myriad emotions. To a lover of data, the idea incites lots of questions, too, not just knee-jerk reactions.

Imagine a random but steady flow of incoming data traffic—random in the sense that it is hard to forecast when the next event will happen, steady in the sense that following the process is continuous. Unrelenting. Non-intermittent. That is, observations from some system, abstract or tangible, are allowed to crop up any time they want. It is often crucial to ask whether drastic deviations have corrupted the rate at which these observations emerge. Why? For many reasons. Chief among these could be policy formation, for instance, if those are the types of data you are dealing with. Take gun violence in America as a grim instance. You may think, probably from looking at raw numbers, that these crimes are happening more frequently nowadays. Calling for tougher gun control is one thing. Establishing the increase with significance-laden statistical reasoning, however, is quite another. Or timely interventions, in case you are asking whether the rate at which a person is generating a specific type of disturbingly violent post on social media has reached a point where the person can be classified as an impending menace to society. Academic sleuthing could offer another possibility. Through tracking the rate at which prepositions are written, you may unearth probable plagiarism.
The frequency at which neurons fire up inside a patient's brain may help medical professionals check whether the patient has undergone severe trauma. The list could go on. Regardless of the context, and the lens through which you choose to view it, the unpredictability of these shocks and the relevance of change detection remain the undeniable commonalities that glue these examples together.

Inseparable from, and frequently a corollary to, the "whether" question is the "when" aspect. In case we can supply an answer, objectively and (statistically) emphatically, to whether a deviation from normalcy happened at all, we remain responsible for pinpointing the precise location at which the process began to "go astray." Another aspect, almost automatic, is the quantification of change. By how much did the process deviate at the point at which it began to deviate? Some pressing issues to probe. Symbols, math, and some concrete examples will ensure a firmer grip of our intentions.

The Backdrop

Maybe it's wisdom; maybe it's expediency. But in one way or another, through disguises blatant or subtle, change sustains and propels almost all of statistical research and thought. Exaggeration? Not quite. Construct 10 random variables to represent the numbers of violent gun-related assaults during a given month in 10 neighboring states. If they all happen to be the same number, the data set—a set of 10 ones or a set of 10 twos—although informative, probably becomes a tad boring. The standard deviation will vanish; measures that depend on it will suffer. A confidence interval for the average number of attacks will collapse to a point, for instance. The lack of variation in that 10-number set is to blame for all this. It is only when there is some change within those values—some perceptible non-constant-ness—that interesting results emerge. Then there are times when we are not even conscious of ourselves talking, fundamentally, about change.
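The collapse described above can be made concrete in a few lines of Python. The counts below are invented purely for illustration; nothing here is real assault data.

```python
import statistics

# Ten hypothetical monthly counts of gun-related assaults in ten states.
# If every state reports the same number, the data carry no variation.
constant_counts = [2] * 10
varied_counts = [1, 3, 2, 5, 2, 4, 1, 2, 3, 6]

def mean_ci(data, z=1.96):
    """Normal-theory 95% confidence interval for the mean."""
    m = statistics.mean(data)
    half = z * statistics.stdev(data) / len(data) ** 0.5
    return (m - half, m + half)

# The standard deviation of the constant set vanishes, so the interval
# collapses to a single point, exactly as described in the text.
print(statistics.stdev(constant_counts))  # 0.0
print(mean_ci(constant_counts))           # (2.0, 2.0), a degenerate interval
print(mean_ci(varied_counts))             # a genuine interval around the mean
```

Only when the counts vary does the interval open up and interesting inference become possible.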
It remains camouflaged in language and structure that run parallel. Nowadays, for instance, we are used to hearing of countries experiencing fresh waves of COVID-19 attacks. For a wave to form, the number of people contracting the virus per unit time must increase from whatever it used to be. A wave, at its core, therefore, is the consequence of a change—an increase, to be precise, in infection rates. Figure 1 samples four countries—India, South Africa, the UK, and the USA—and records the number of confirmed new COVID cases per day since January 22, 2020. Data are collected from sources maintained by Johns Hopkins University. Ignore the red and blue vertical separators for now. There are, for each country, clear periods of elevated and depressed intensities. Can an arrangement be made—an arrangement beyond crude graph-gazing—to locate the origin of these waves; waves that unproven intuition and experience allege to be palpable?

The Mathematical Machinery

What overwhelms a country's healthcare system is not so much the massive number of people contracting the virus per day but the regularity with which this number exceeds a certain threshold. (That threshold, in some way, is a reflection of that country's preparedness. It is unreasonable to expect the same threshold to work for all countries. Bigger countries, with larger populations, or with better healthcare systems—more ventilators/hospital beds per unit population—may have larger tolerance.) For now, take the horizontal lines on Figure 1 to indicate these cut-offs. Typically, these are the median numbers of new daily confirmed cases. Every time this separator is crossed, say that a shock happened, essentially burdening the country. The top-left panel of Figure 2 stores these shocks for India. For every day India sees new cases in excess of its overall (until May 23, 2021) median of 23,285, there is a vertical strip. With the next shock, the same effect, except the strip height increases by one unit.
This is so you can gauge the total number of shocks through the reach along the vertical axis of the last tower. Figure 2, in technical terms, is a point process that gets filtered out of the time series shown in Figure 1. In general, call the time (in days since the origin on January 22, 2020) at which the i-th shock happens T_i. The points at which the towers are standing on Figure 2 represent these global times, and the white spaces between the towers represent the inter-event times. We are checking whether, beyond a point on the horizontal axis, the white empty spaces on these barcode-type diagrams have shifted features. They may have shrunk alarmingly, making the towers—the shocks—more frequent. This seems to be true for India based on the top-left panel of Figure 2: The strips seem to have non-uniform mixing. They are crowded at certain times and spread out at other times. But that is only visually. We have to be more objective. Crowded in relation to whom? Non-uniform against what benchmark? We have to answer the first question—the "whether" one that we began with. That benchmark, that yardstick, lives right next door, in the top-right panel of Figure 2. If we take the average white space from the Indian shock sequence and simulate (that is, create imaginary data with) an equal number of shocks from an exponential distribution with a similar mean, the top-right panel of Figure 2 may result. It represents a scenario where the rate at which shocks emerge has not changed throughout the evolution. The gap lengths are roughly constant; these towers disperse purely democratically, not favoring any specific time period over any other. Stable (sometimes called stationary or homogeneous) processes like these will function as ideal corruption-free models—as emblems of what change-devoid normalcy is meant to mean.
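The filtering step, which turns a daily series into a point process of shocks and then builds the matched stable benchmark, can be sketched in Python. The daily counts are invented, and details such as strict exceedance of the median are assumptions made for illustration, not the exact pipeline behind Figure 2.

```python
import random
import statistics

# Hypothetical daily counts of new confirmed cases (made-up numbers).
daily_cases = [10, 40, 15, 60, 55, 12, 70, 80, 20, 90, 95, 100, 30, 110]

# A "shock" occurs on any day whose count exceeds the overall median,
# mirroring the horizontal cut-off lines in Figure 1.
threshold = statistics.median(daily_cases)
shock_times = [day for day, n in enumerate(daily_cases, start=1) if n > threshold]

# Inter-event (white-space) gaps between successive shocks: the T_i spacings.
gaps = [t2 - t1 for t1, t2 in zip(shock_times, shock_times[1:])]

# A stable benchmark in the spirit of Figure 2's top-right panel: the same
# number of shocks, with exponential inter-arrival times sharing the mean gap.
random.seed(0)
mean_gap = statistics.mean(gaps)
simulated_gaps = [random.expovariate(1 / mean_gap) for _ in gaps]
simulated_times, t = [], shock_times[0]
for g in simulated_gaps:
    t += g
    simulated_times.append(t)

print(shock_times, gaps)  # [4, 7, 8, 10, 11, 12, 14] [3, 1, 2, 1, 1, 2]
```

The observed gaps can then be held up against the exponential benchmark to ask whether the crowding is more than chance.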
Through the years, such stable flows have become the touchstone against which all aberrations (such as the one actually seen in the top-left panel) will be discussed; the barometer through which all deviations will be measured. Visually still, what is seen (top-left, Figure 2) seems different from what should have occurred under a stable no-change environment (top-right, Figure 2). But how do we detect the locations at which the observed shock sequences began to exhibit strong shifts from stability? An estimation question, by the sound of it. Oddly, it can be answered by using a different arm of statistical inference: through testing several hypotheses.

Finding the Waves

Manufacture some functions that output certain types of values under stationarity and certain other types of values under change-infected processes. The trick is to use those functions repeatedly, once every time a new shock happens, and locate the earliest time at which the first type of values begins to drift toward the second. What qualifies as drifting? If they are more extreme than they typically would be under a stable process, such as the one in the top-right panel of Figure 2. Z and Z_B could be two such functions, where the T_i, as before, represent the global times (in days, for this example) at which the i-th shock took place, and T_n the time of the final (i.e., the n-th) shock. Under stability, they output roughly similar values, while under change-inflicted non-stationarities, such as deterioration (i.e., denser crowding over the recent history, like the Indian example), they behave oppositely: Z_B will be large, and Z will be small. Technicalities exist (with a sequence of tests, the false discovery rate has to be controlled), and my 2018 dissertation describes the intricacies. But this remains the overall idea.
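The exact forms of Z and Z_B are developed in the dissertation and are not reproduced here. As an illustrative stand-in with precisely the behavior just described (small Z and large Z_B under late crowding), one can use the classical MIL-HDBK-189 trend statistic for a Poisson process and its time-reversed twin. Treat this as a sketch of the idea, not the dissertation's definitions.

```python
import math

def Z(times):
    """Forward statistic 2 * sum(log(T_n / T_i)), i = 1..n-1.  Under a stable
    (homogeneous) Poisson flow it is approximately chi-square with 2(n-1)
    degrees of freedom; small values hint at crowding near the end."""
    Tn = times[-1]
    return 2 * sum(math.log(Tn / t) for t in times[:-1])

def Z_B(times):
    """Time-reversed counterpart: apply Z to the reversed clock T_n - T_{n-i}.
    Late crowding becomes early crowding on the reversed clock, so Z_B is large."""
    Tn = times[-1]
    reversed_times = sorted(Tn - t for t in times[:-1])
    return 2 * sum(math.log(Tn / t) for t in reversed_times)

# A stable-looking shock sequence versus one whose shocks pile up late.
stable = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
crowded_late = [40, 70, 85, 92, 95, 97, 98, 99, 99.5, 100]

print(Z(stable), Z_B(stable))              # comparable magnitudes
print(Z(crowded_late), Z_B(crowded_late))  # Z small, Z_B large
```

Running the pair once after every new shock, and flagging the first time either statistic drifts beyond its stable-process reference distribution, reproduces the sequential flavor of the procedure.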
This work found, through simulations, that often, the maximum of Z and Z[B], termed R, and the minimum, termed L, generate more accurate changepoint estimates. These tools, applied to the shock processes from each country, lead to the colored vertical lines on Figure 1. The dates are shown alongside. Groups of days over which infection figures began to worsen are picked with stunning precision. These are separators between phases of differing intensities and, therefore, make it possible to define a wave objectively: It is the piece of the process trapped within two neighboring change-point bands. The rate of shock generation changes in crossing over these separators. India, for instance, had its first wave from July 5, 2020, through April 2, 2021. A second wave began after April 2, 2021. The equivalent of stones pelted into India’s stable reality stream on those dates.

The Quality of Estimation

Every theory longs for longevity; every algorithm craves permanence. This one is no different. My dissertation describes how this sequential arrangement accurately pinpoints phase shifts when the non-stationarity is entering the system in structured, controlled ways: as a (deterministic or random) union of two (and hence, more) stationary pieces, each piece held stable at a different intensity. That is reasonably sufficient, since such step intensities work as close approximations to those that are more misbehaving, once the steps are made sufficiently fine. But there are different varieties of non-stationarity. What guarantees the accuracies will stay intact if corruptions creep in through less-structured ways? Figure 3 provides an answer. It presents results of simulation studies in which the data-generation process was governed by a random intensity (leading to a self-exciting Hawkes process, in technical terms). True breaks are at time points 50 and 100, shown through the broken lines in the top panel.
In addition to these proposals, a host of other competitors are designed for a similar job of picking changes. The energy divergence (EDiv) option, for instance, calculates the difference in characteristic functions between the pre- and the post-change pieces, while the exponential (Exp) choice measures the disagreement between the average inter-event times over those pieces, assuming exponentially distributed arrivals. They all, in some way, keep a running record of these gaps, and propose the one that maximizes the separation as the change-point estimate (Bhaduri 2018). A dot on the top panel of Figure 3 represents a time at which a change was signaled. Most of the competitors sound too many false alarms, alleging a change before the real change. Ours, on the other hand, demonstrates the clearest and the strongest right-skewed clustering after the actual changes, without too many false positives. This experiment was conducted with a fixed sample size. One natural worry is what happens if there is a shock sequence that increases progressively in size. The bottom panel deals with that scenario. Here, the concern is not about the location of the changes, but rather the number of changes. Technically, we are checking whether the probability of detecting the right number of changes approaches one asymptotically. For most of the competitors, this probability decays quite fast. With the proposed choices, this probability predominantly increases; at worst, it stays stable. Therefore, if you wait sufficiently long (either in terms of clock-time, or in terms of data), you will figure out the right number of changes. You will not over-count or under-count.

What Else Can Be Done?

Our algorithm, through repeated testing, picks out dates heralding elevated shock rates. A change-detection experiment, however, transcends mere reportage. The number of days with new cases between 26,376 and 77,082 for the USA over the last month (as of this writing) was 24.
Imagine trying to forecast this number using prior data. A reasonable way could be through multiplying the rate by the length of the forecast horizon, leading to (220/459) x 30 = 14.38. In contrast, if we choose to calculate the rate, not over the entire data, but only over the post-change period (beyond February 17, 2021, time point 392, Figure 1), this becomes {56/(459-392)} x 30 = 25.075, an estimate much closer to the actual. This demonstrates how the belief typically held so dear—the larger the data set, the better it is—may scupper forecasting prospects. In such times, it is not the size of the data that matters. It’s the relevance of it. The vertical separators help us realize how much history is worth remembering and how much is worth forgetting. The UK is among those countries that began to show early symptoms of an impending catastrophe. Its first wave began around March 31, 2020. None of the other countries tracked through Figure 1 revealed changed rates that early. A question may be raised about how different the UK’s change pattern is in relation to India’s or South Africa’s. Why not measure the distance between the two competing changepoint sets through the Euclidean metric from elementary geometry? A valid thought, although the implementation may pose problems. There is no way to ensure these sets would be of the same size. If the UK identifies five changepoints, there is no guarantee that South Africa will do so as well. It may have only three or two. The pairwise differences that are integral to the Euclidean metric cannot be calculated. The Hausdorff metric, similar in spirit to the Euclidean, offers an alternative by not requiring the sets being compared to be of the same size. As an example, from Figure 1, India’s changepoint set is {165, 168, 215, 416, 436, 447}; the UK’s {69, 70, 235, 238, 247, 248}; and South Africa’s {147, 196, 323, 358, 486}.
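For finite sets such as these, the Hausdorff distance is the larger of the two directed nearest-neighbor distances, and it can be computed in a few lines of Python:

```python
def hausdorff(a, b):
    """Hausdorff distance between two finite sets of time points."""
    d_ab = max(min(abs(x - y) for y in b) for x in a)
    d_ba = max(min(abs(x - y) for y in a) for x in b)
    return max(d_ab, d_ba)

# Changepoint sets read off Figure 1:
india = [165, 168, 215, 416, 436, 447]
uk = [69, 70, 235, 238, 247, 248]
south_africa = [147, 196, 323, 358, 486]

d_uk_india = hausdorff(uk, india)            # 199
d_india_sa = hausdorff(india, south_africa)  # 93
```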
The Hausdorff distance between the UK and India is 199, and the one between India and South Africa is 93. Thus, the change patterns of India are more similar to South Africa’s than they are to the UK’s. This is consistent with Figure 1, too. Notice that the red and blue separators for India are temporally close to their counterparts for South Africa. There is no need to stop with three or four countries, though. Computational power enables calculating this metric between all possible pairs among the 275 tracked countries and regions. A tree, shown in Figure 4, condenses these similarities—stereotyping countries, in a way. Those sharing smaller distances are similar in terms of their change patterns and are grouped together. Notice how, quite expectedly, India and South Africa are nearby within the red cluster, while the UK got pushed to a different, gray one. Similarities exist, however, between the UK and some of its European neighbors, such as France or Iceland. Countries setting similar policies (about, say, international travel, mask-wearing, etc.) around similar times could be a probable cause. Countries may have similar change locations, but the degree of change they endure could be rather different. How to quantify that magnitude? One way could be through measuring the difference in shock rates over the post- and pre-change phases—the rates used to forecast the U.S. situation, for instance. This would be ideal in case the process exhibits purely stationary benchmark-type tendencies over each block between adjacent change-bands. That, in the current context, is a stretch. It is more likely to see clustered shocks (leading to a “self-exciting point process”). If the median threshold gets crossed someday, it is likely to be crossed the following day, too (through an inherent autocorrelation).
A more data-dependent method, therefore, could be replacing the second wave with a bootstrapped version of the first wave (following Braun and Kulperger’s suggestion) and measuring the similarities between what we saw and what we would have seen had there been no second wave, using a host of similarity indices collected in Table 1 (Hino et al. 2015). Versions may differ each time there is bootstrapping (two Indian versions are shown through the bottom panel of Figure 2), so it is important to note the median similarity. The larger these measures, the more similar the first waves are to the corresponding second waves. Both methods indicate the UK underwent the most change. Figure 4 and Table 1, taken together, show how India and Nepal are similar both in terms of change locations and the quantity of change suffered despite, perhaps, seeing different numbers of actual infections.

Closing Thoughts

Definitions, although seemingly pesky nuisances at times, are statements that, among other aspects, anchor a debate, ensuring the objectivity of science and guaranteeing its effectiveness. They clarify the structure of the concepts being dealt with, accentuating necessary boundaries. Without these statements, without a sturdy grasp of the nature of the objects studied, serious analyses cannot commence, and even if they somehow do commence, the conclusions become open to misinterpretations. COVID waves offer a sterling example. Policies are formed on their basis; contemporary media discuss them relentlessly; and the lay public endures an incessant, soul-crushing fear of the next one. What constitutes a wave, however, remains largely subjective, venturing not beyond crude graph-gazing. This article supplies a concrete definition.
That was the key aim; other benefits are welcome. All of us, regardless of how embedded we have become in technocratic language and the necessary conditioning that statistical theories inflict upon us or endow us with, still have the capacity for tremendous awe when confronted with the simple truth that the tiny dome of our reality can be punctured at any point by other realities, by other truths. Change-detection is mainly about estimating such moments of departure. The Z- and Z[B]-based technique offers a way to conduct the cautious craft of finding such structural breaks, by discovering an environment where the language of change and the language of hypothesis-testing coalesce. This puts a finger on the pulse of an evolving process. It helps in locating, within that process, episodes of shifted tendencies. It allows asking, and answering, for instance, “What is a COVID wave?” Subsequent questions may be raised at a time of deviation, probing into the cause of which that deviation is a symptom. Questions like why mask-wearing is having different effects in different regions could delve into causes that function as unexpected catalysts. Nothing can stay insulated from change; not even change-detection approaches. A fascinating area in perpetual need of dedicated researchers.

Further Reading

Dong, E., Du, H., and Gardner, L. 2020. An interactive web-based dashboard to track COVID-19 in real time. The Lancet Infectious Diseases 20(5): 533–534.

Bhaduri, M. 2018. Bi-Directional Testing for Change Point Detection in Poisson Processes. UNLV Theses, Dissertations, Professional Papers, and Capstones. 3217.

Braun, W.J., and Kulperger, R.J. 1998. A bootstrap for point processes. Journal of Statistical Computation and Simulation 60(2): 129–155.

Hino, H., Takano, K., and Murata, N. 2015. mmpp: A package for calculating similarity and distance metrics for simple and marked temporal point processes. The R Journal 7(2): 237–248.
About the Author Moinak Bhaduri earned his PhD in mathematical sciences from the University of Nevada Las Vegas in 2018 and is presently a tenure-line assistant professor in the Department of Mathematical Sciences at Bentley University. He studies stochastic systems that exhibit point process-type flavors and develops algorithms to locate structural shifts in their driving intensities. He is the current editor of the NextGen column of the New England Journal of Statistics in Data Science.
Density-based spatial clustering of applications with noise (DBSCAN)

idx = dbscan(X,epsilon,minpts) partitions observations in the n-by-p data matrix X into clusters using the DBSCAN algorithm (see Algorithms). dbscan clusters the observations (or points) based on a threshold for a neighborhood search radius epsilon and a minimum number of neighbors minpts required to identify a core point. The function returns an n-by-1 vector (idx) containing cluster indices of each observation.

idx = dbscan(X,epsilon,minpts,Name,Value) specifies additional options using one or more name-value pair arguments. For example, you can specify 'Distance','minkowski','P',3 to use the Minkowski distance metric with an exponent of three in the DBSCAN algorithm.

idx = dbscan(D,epsilon,minpts,'Distance','precomputed') returns a vector of cluster indices for the precomputed pairwise distances D between observations. D can be the output of pdist or pdist2, or a more general dissimilarity vector or matrix conforming to the output format of pdist or pdist2, respectively.

[idx,corepts] = dbscan(___) also returns a logical vector corepts that contains the core points identified by dbscan, using any of the input argument combinations in the previous syntaxes.

Perform DBSCAN on Input Data

Cluster a 2-D circular data set using DBSCAN with the default Euclidean distance metric. Also, compare the results of clustering the data set using DBSCAN and k-Means clustering with the squared Euclidean distance metric.

Generate synthetic data that contains two noisy circles.

rng('default') % For reproducibility
% Parameters for data generation
N = 300;  % Size of each cluster
r1 = 0.5; % Radius of first circle
r2 = 5;   % Radius of second circle
theta = linspace(0,2*pi,N)';
X1 = r1*[cos(theta),sin(theta)] + rand(N,1);
X2 = r2*[cos(theta),sin(theta)] + rand(N,1);
X = [X1;X2]; % Noisy 2-D circular data set

Visualize the data set. The plot shows that the data set contains two distinct clusters.
Perform DBSCAN clustering on the data. Specify an epsilon value of 1 and a minpts value of 5.

idx = dbscan(X,1,5); % The default distance metric is Euclidean distance

Visualize the clustering.

title('DBSCAN Using Euclidean Distance Metric')

Using the Euclidean distance metric, DBSCAN correctly identifies the two clusters in the data set.

Perform DBSCAN clustering using the squared Euclidean distance metric. Specify an epsilon value of 1 and a minpts value of 5.

idx2 = dbscan(X,1,5,'Distance','squaredeuclidean');

Visualize the clustering.

title('DBSCAN Using Squared Euclidean Distance Metric')

Using the squared Euclidean distance metric, DBSCAN correctly identifies the two clusters in the data set.

Perform k-Means clustering using the squared Euclidean distance metric. Specify k = 2 clusters.

kidx = kmeans(X,2); % The default distance metric is squared Euclidean distance

Visualize the clustering.

title('K-Means Using Squared Euclidean Distance Metric')

Using the squared Euclidean distance metric, k-Means clustering fails to correctly identify the two clusters in the data set.

Perform DBSCAN on Pairwise Distances

Perform DBSCAN clustering using a matrix of pairwise distances between observations as input to the dbscan function, and find the number of outliers and core points. The data set is a Lidar scan, stored as a collection of 3-D points, that contains the coordinates of objects surrounding a vehicle.

Load the x, y, z coordinates of the objects.

loc = lidar_subset;

To highlight the environment around the vehicle, set the region of interest to span 20 meters to the left and right of the vehicle, 20 meters in front and back of the vehicle, and the area above the surface of the road.

xBound = 20;     % in meters
yBound = 20;     % in meters
zLowerBound = 0; % in meters

Crop the data to contain only points within the specified region.

indices = loc(:,1) <= xBound & loc(:,1) >= -xBound ...
    & loc(:,2) <= yBound & loc(:,2) >= -yBound ...
    & loc(:,3) > zLowerBound;
loc = loc(indices,:);

Visualize the data as a 2-D scatter plot. Annotate the plot to highlight the vehicle.

annotation('ellipse',[0.48 0.48 .1 .1],'Color','red')

The center of the set of points (circled in red) contains the roof and hood of the vehicle. All other points are obstacles.

Precompute a matrix of pairwise distances D between observations by using the pdist2 function. Cluster the data by using dbscan with the pairwise distances. Specify an epsilon value of 2 and a minpts value of 50.

[idx, corepts] = dbscan(D,2,50,'Distance','precomputed');

Visualize the results and annotate the figure to highlight a specific cluster.

numGroups = length(unique(idx));
annotation('ellipse',[0.54 0.41 .07 .07],'Color','red')

As shown in the scatter plot, dbscan identifies 11 clusters and places the vehicle in a separate cluster. dbscan assigns the group of points circled in red (and centered around (3,–4)) to the same cluster (group 7) as the group of points in the southeast quadrant of the plot. The expectation is that these groups should be in separate clusters. You can try using a smaller value of epsilon to split up large clusters and further partition the points. The function also identifies some outliers (an idx value of –1) in the data.

Find the number of points that dbscan identifies as outliers. dbscan identifies 412 outliers out of 19,070 observations.

Find the number of points that dbscan identifies as core points. A corepts value of 1 indicates a core point. dbscan identifies 18,446 observations as core points.

See Determine Values for DBSCAN Parameters for a more extensive example.

Input Arguments

X — Input data
numeric matrix

Input data, specified as an n-by-p numeric matrix. The rows of X correspond to observations (or points), and the columns correspond to variables.
Data Types: single | double

D — Pairwise distances
numeric row vector | numeric square matrix | logical row vector | logical square matrix

Pairwise distances between observations, specified as a numeric row vector that is the output of pdist, a numeric square matrix that is the output of pdist2, a logical row vector, or a logical square matrix. D can also be a more general dissimilarity vector or matrix that conforms to the output format of pdist or pdist2, respectively. Given an input matrix X that has n observations (rows) and p dimensions (columns), D can take the following formats:

• Numeric row vector (output of pdist(X)): a row vector of length n(n – 1)/2, corresponding to pairs of observations in X, with distances arranged in the order (2,1), (3,1), ..., (n,1), (3,2), ..., (n,2), ..., (n,n – 1).
• Numeric square matrix (output of pdist2(X,X)): an n-by-n symmetric matrix having diagonal elements equal to zero, where D(i,j) is the distance between observations i and j in X.
• Logical row vector: a row vector of length n(n – 1)/2, corresponding to pairs of observations in X and arranged in the same order as above, with elements indicating which distances are less than or equal to epsilon.
• Logical square matrix: an n-by-n matrix, where D(i,j) indicates whether the distance between observations i and j in X is less than or equal to epsilon.

If D is a logical vector or matrix, then the value of epsilon must be empty; for example, dbscan(D,[],5,'Distance','precomputed').

Data Types: single | double | logical

epsilon — Epsilon neighborhood
numeric scalar | []

Epsilon neighborhood of a point, specified as a numeric scalar that defines a neighborhood search radius around the point. If the epsilon neighborhood of a point contains at least minpts neighbors, then dbscan identifies the point as a core point.
The value of epsilon must be empty ([]) when D is a logical vector or matrix.

Example: dbscan(X,2.5,10)
Example: dbscan(D,[],5,'Distance','precomputed'), for a logical matrix or vector D
Data Types: single | double

minpts — Minimum number of neighbors required for core point
positive integer

Minimum number of neighbors required for a core point, specified as a positive integer. The epsilon neighborhood of a core point in a cluster must contain at least minpts neighbors, whereas the epsilon neighborhood of a border point can contain fewer neighbors than minpts.

Example: dbscan(X,2.5,5)
Data Types: single | double

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter. Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: dbscan(D,2.5,5,'Distance','precomputed') specifies DBSCAN clustering using a precomputed matrix of pairwise distances D between observations, an epsilon neighborhood of 2.5, and a minimum of 5 neighbors.

Distance — Distance metric
character vector | string scalar | function handle

Distance metric, specified as the comma-separated pair consisting of 'Distance' and one of the following values:

• 'precomputed': Precomputed distances. You must specify this option if the first input to dbscan is a vector or matrix of pairwise distances D.
• 'euclidean': Euclidean distance (default).
• 'squaredeuclidean': Squared Euclidean distance. (This option is provided for efficiency only. It does not satisfy the triangle inequality.)
• 'seuclidean': Standardized Euclidean distance. Each coordinate difference between observations is scaled by dividing by the corresponding element of the standard deviation, S = std(X,'omitnan'). Use Scale to specify another value for S.
• 'mahalanobis': Mahalanobis distance using the sample covariance of X, C = cov(X,'omitrows'). Use Cov to specify another value for C, where the matrix C is symmetric and positive definite.
• 'cityblock': City block distance.
• 'minkowski': Minkowski distance. The default exponent is 2. Use P to specify a different exponent, where P is a positive scalar value.
• 'chebychev': Chebychev distance (maximum coordinate difference).
• 'cosine': One minus the cosine of the included angle between points (treated as vectors).
• 'correlation': One minus the sample correlation between points (treated as sequences of values).
• 'hamming': Hamming distance, which is the percentage of coordinates that differ.
• 'jaccard': One minus the Jaccard coefficient, which is the percentage of nonzero coordinates that differ.
• 'spearman': One minus the sample Spearman's rank correlation between observations (treated as sequences of values).
• Custom distance function handle. A distance function has the form

  function D2 = distfun(ZI,ZJ)
  % calculation of distance

  where ZI is a 1-by-n vector containing a single observation, ZJ is an m2-by-n matrix containing multiple observations (distfun must accept a matrix ZJ with an arbitrary number of observations), and D2 is an m2-by-1 vector of distances, with D2(k) being the distance between observations ZI and ZJ(k,:). If your data is not sparse, you can generally compute distance more quickly by using a built-in distance instead of a function handle.

For definitions, see Distance Metrics.

When you use the 'seuclidean', 'minkowski', or 'mahalanobis' distance metric, you can specify the additional name-value pair argument 'Scale', 'P', or 'Cov', respectively, to control the distance computation.

Example: dbscan(X,2.5,5,'Distance','minkowski','P',3) specifies an epsilon neighborhood of 2.5, a minimum of 5 neighbors to grow a cluster, and use of the Minkowski distance metric with an exponent of 3 when performing the clustering algorithm.
P — Exponent for Minkowski distance metric
2 (default) | positive scalar

Exponent for the Minkowski distance metric, specified as the comma-separated pair consisting of 'P' and a positive scalar. This argument is valid only if 'Distance' is 'minkowski'.

Example: 'P',3
Data Types: single | double

Cov — Covariance matrix for Mahalanobis distance metric
cov(X,'omitrows') (default) | numeric matrix

Covariance matrix for the Mahalanobis distance metric, specified as the comma-separated pair consisting of 'Cov' and a symmetric, positive definite, numeric matrix. This argument is valid only if 'Distance' is 'mahalanobis'.

Data Types: single | double

Scale — Scaling factors for standardized Euclidean distance metric
std(X,'omitnan') (default) | numeric vector of positive values

Scaling factors for the standardized Euclidean distance metric, specified as the comma-separated pair consisting of 'Scale' and a numeric vector of positive values. Each dimension (column) of X has a corresponding value in 'Scale'; therefore, 'Scale' is of length p (the number of columns in X). For each dimension of X, dbscan uses the corresponding value in 'Scale' to standardize the difference between X and a query point. This argument is valid only if 'Distance' is 'seuclidean'.

Data Types: single | double

Output Arguments

idx — Cluster indices
numeric column vector

Cluster indices, returned as a numeric column vector. idx has n rows, and each row of idx indicates the cluster assignment of the corresponding observation in X. An index equal to –1 indicates an outlier (or noise point). Cluster assignment using the DBSCAN algorithm is dependent on the order of observations. Therefore, shuffling the rows of X can lead to different cluster assignments for the observations. For more details, see Algorithms.

Data Types: double

corepts — Indicator for core points
logical vector

Indicator for core points, returned as an n-by-1 logical vector indicating the indices of the core points identified by dbscan. A value of 1 in any row of corepts indicates that the corresponding observation in X is a core point. Otherwise, corepts has a value of 0 for rows corresponding to observations that are not core points.

Data Types: logical

More About

Core Points

Core points in a cluster are points that have at least a minimum number of neighbors (minpts) in their epsilon neighborhood (epsilon). Each cluster must contain at least one core point.

Border Points

Border points in a cluster are points that have fewer than the required minimum number of neighbors for a core point (minpts) in their epsilon neighborhood (epsilon). Generally, the epsilon neighborhood of a border point contains significantly fewer points than the epsilon neighborhood of a core point.

Noise Points

Noise points are outliers that do not belong to any cluster.

Tips

• For improved speed when iterating over many values of epsilon, consider passing in D as the input to dbscan. This approach prevents the function from having to compute the distances at every point of the iteration.
• If you use pdist2 to precompute D, do not specify the 'Smallest' or 'Largest' name-value pair arguments of pdist2 to select or sort columns of D. Selecting fewer than n distances results in an error, because dbscan expects D to be a square matrix. Sorting the distances in each column of D leads to a loss in the interpretation of D and can give meaningless results when used in the dbscan function.
• For efficient memory usage, consider passing in D as a logical matrix rather than a numeric matrix to dbscan when D is large. By default, MATLAB® stores each value in a numeric matrix using 8 bytes (64 bits), and each value in a logical matrix using 1 byte (8 bits).
• To select a value for minpts, consider a value greater than or equal to the number of dimensions of the input data plus one [1].
For example, for an n-by-p matrix X, set 'minpts' equal to p+1 or greater.

• One possible strategy for selecting a value for epsilon is to generate a k-distance graph for X. For each point in X, find the distance to the kth nearest point, and plot sorted points against this distance. Generally, the graph contains a knee. The distance that corresponds to the knee is typically a good choice for epsilon, because it is the region where points start tailing off into outlier (noise) territory [1].

Algorithms

DBSCAN is a density-based clustering algorithm that is designed to discover clusters and noise in data. The algorithm identifies three kinds of points: core points, border points, and noise points [1]. For specified values of epsilon and minpts, the dbscan function implements the algorithm as follows:

1. From the input data set X, select the first unlabeled observation x[1] as the current point, and initialize the first cluster label C to 1.
2. Find the set of points within the epsilon neighborhood epsilon of the current point. These points are the neighbors.
   a. If the number of neighbors is less than minpts, then label the current point as a noise point (or an outlier) and go to step 4. (dbscan can reassign noise points to clusters if the noise points later satisfy the constraints set by epsilon and minpts from some other point in X. This process of reassigning points happens for border points of a cluster.)
   b. Otherwise, label the current point as a core point belonging to cluster C.
3. Iterate over each neighbor (new current point) and repeat step 2 until no new neighbors are found that can be labeled as belonging to the current cluster C.
4. Select the next unlabeled point in X as the current point, and increase the cluster count by 1.
5. Repeat steps 2–4 until all points in X are labeled.
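The k-distance strategy for choosing epsilon can be sketched in a few lines. The snippet below is a language-agnostic illustration in Python using brute-force distances (in MATLAB, pdist2 or knnsearch would do the same job); the toy data are invented for the example.

```python
import numpy as np

def k_distance_curve(X, k):
    """Sorted distance from each point to its k-th nearest neighbor (brute force)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)           # after sorting, column 0 is each point's distance to itself
    return np.sort(d[:, k])  # k-th true neighbor for every point, sorted ascending

# Toy data: a tight square of four points plus one far-away outlier.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]])
curve = k_distance_curve(X, k=2)
# The four clustered points share a small 2-NN distance (0.1); the outlier
# produces the sharp final jump. A good epsilon sits below that "knee."
```

Plotting `curve` reproduces the k-distance graph described above: the knee appears where the sorted distances jump from cluster-scale values to outlier-scale values.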
• If two clusters have varying densities and are close to each other, that is, the distance between two border points (one from each cluster) is less than epsilon, then dbscan can merge the two clusters into one. • Every valid cluster might not contain at least minpts observations. For example, dbscan can identify a border point belonging to two clusters that are close to each other. In such a situation, the algorithm assigns the border point to the first discovered cluster. As a result, the second cluster is still a valid cluster, but it can have fewer than minpts observations. [1] Ester, M., H.-P. Kriegel, J. Sander, and X. Xiaowei. “A density-based algorithm for discovering clusters in large spatial databases with noise.” In Proceedings of the Second International Conference on Knowledge Discovery in Databases and Data Mining, 226-231. Portland, OR: AAAI Press, 1996. Extended Capabilities C/C++ Code Generation Generate C and C++ code using MATLAB® Coder™. Usage notes and limitations: • The distance input argument value (Distance) must be a compile-time constant. For example, to use the Minkowski distance, include coder.Constant("Minkowski") in the -args value of codegen (MATLAB • The distance input argument value (Distance) cannot be a custom distance function. • dbscan does not support code generation for fast Euclidean distance computations, meaning those distance metrics whose names begin with fast (for example, "fasteuclidean"). • Names in name-value arguments must be compile-time constants. For example, to use the P name-value argument in the generated code, include {coder.Constant("P"),0} in the -args value of codegen. • The generated code of dbscan uses parfor (MATLAB Coder) to create loops that run in parallel on supported shared-memory multicore platforms in the generated code. If your compiler does not support the Open Multiprocessing (OpenMP) application interface or you disable OpenMP library, MATLAB Coder™ treats the parfor-loops as for-loops. 
To find supported compilers, see Supported Compilers. To disable OpenMP library, set the EnableOpenMP property of the configuration object to false. For details, see coder.CodeConfig (MATLAB Coder). • dbscan returns integer-type (int32) indices in generated standalone C/C++ code. Therefore, the function allows for strict single-precision support when you use single-precision inputs. For MEX code generation, the function still returns double-precision indices to match the MATLAB behavior. Version History Introduced in R2019a
Review of Short Phrases and Links

This Review contains major "Poles"-related terms, short phrases and links grouped together in the form of an Encyclopedia article.

1. Poles were prisoners in nearly every camp in the extensive camp system in German-occupied Poland and the Reich. (Web site)
2. Poles were persecuted in the disputed territories, especially in Silesia, where this led to the Silesian Uprisings.
3. Poles are concentrated in the Vilnius region, an area controlled by Poland in the interwar period.
4. Poles were considered to be a threat to "the master race", and thus millions of Poles were sent to concentration camps. (Web site)
5. Poles are the largest minority, concentrated in southeast Lithuania (the Vilnius region).

1. The prisoners at Birkenau mostly consisted of Jews, Poles, and Germans. (Web site)
2. The largest group of all those arrested or deported were ethnic Poles, but Jews accounted for a significant percentage of all the prisoners.

1. Hundreds of thousands of Poles were prisoners in the extensive concentration camp system in German-occupied Poland and the Reich. (Web site)

1. Hmiel had two poles, eight top-tens and a sixteenth place points finish, finishing behind Scott Riggs and Johnny Sauter for Rookie of the Year.
2. Roberts also won two poles there in 1961 but one was for a 250-mile race that was one of three races at Atlanta that year. (Web site)
3. He also scored two poles in back-to-back weeks at Darlington and Richmond during the same year.

1. Andrea Dovizioso: 0 (wins) 1 (podiums) 0 (poles) 0 (fastest laps) 6.
2. Lorenzo's victory, combined with his three poles and two more podiums, means that Yamaha is no longer solely dependent on Rossi for its success.

1. Increasing intermarriage between Germans and Poles contributed much to the Germanisation of ethnic Poles in the Ruhr area.

1. Dissensions, however, arose among the Poles, and the Russians and Prussians again entered Poland in force. (Web site)
2.
The Pope tried also mediations in war between Poland and Teutonic Knights, and when he failed to achieve success, he put a curse on the Prussians and Poles. (Web site) 3. Later, in 1806, the French armies defeated the Prussians at Jena and entered Posen (Poznan) led by the Poles under Dabrowski. 1. Some Medieval authors used the ethnonym "Vandals" applying it to Slavic peoples: Wends, Lusatians or Poles. (Web site) 1. For more than a millennium, Ukraine has been a revolving door for invaders, including the Vikings, Mongols, Poles, Lithuanians, Tatars and Soviets. 2. Simultaneously, the territories were claimed by the Soviet Union, but the Soviets were defeated by the Poles. 3. Soviets deported over a million Poles from Eastern Poland.[43]. 1. Just at Paneriai in the Vilnius region about 100,000 Jews, Poles and others were executed. (Web site) 1. Bill Elliott leads all drivers with eight Bud Poles at Talladega. (Web site) 2. Retired drivers Richard Petty and Bobby Allison lead all other drivers each with eight Bud Poles at Richmond. (Web site) 3. Buddy Baker scored seven Bud Poles at Atlanta, the most of all drivers. (Web site) 1. David Pearson swept both NEXTEL Cup poles in 1974,1977,1978 and Bill Elliott swept both poles in 1984, 1985 and 1988. (Web site) 2. • Thirty-six drivers have won Bud Poles, led by David Pearson with 14. (Web site) 1. The Lechi and Tzechi of Cinnamus are the Poles and Bohemians; and it is for the French that he reserves the ancient appellation of Germans. 2. These German Vandals are different from the Wends called Slavi, Slavonians, Poles, Bohemians who settled in the ancient lands of the Vandals. (Web site) 1. In two seasons of NASCAR Craftsman Truck competition, Tundra drivers have recorded 13 victories and 18 poles. (Web site) 1. A complex function which is holomorphic except for some isolated singularities and whose only singularities are poles is called meromorphic. (Web site) 2. The only singularities are the poles on the real axis. 
3. The singularities of f(z) that lie inside are simple poles at the points and, and a pole of order 2 at the origin. (Web site) 1. Poles fought against Germany to regain freedom in: the Greater Poland Uprising in Posen and three Silesian Uprisings in Upper Silesia. 2. To stimulate the economy Protestant Czechs, Germans and Poles were invited to settle in the country, particularly in Upper Silesia. 3. However to many Poles today, Silesia (ÅšlÄ…sk) is understood to cover all of the area around Katowice, including Zagłębie. (Web site) 1. There, he fought in the Silesian Uprisings during which Poles were trying to wrestle Silesia out from Germany (1919, 1920 & 1921). 1. Make three or four wraps around the poles, keeping the rope very tight. (Web site) 2. Pass the rope once more between the poles then around one pole and tuck it under itself to form a half hitch. 3. Shear lashing uses two or three spars or poles, 15 - 20 feet (6.1 m) of rope. (Web site) 1. Notes: The frapping turns used to tighten the lashing may be omitted and replaced with wedges inserted between the poles (round lashing). 2. Shear Lashing adds frapping between the poles. (Web site) 1. The ecliptic and ecliptic poles define an ecliptic coordinate system similar to the equatorial coordinate system. 1. The angle between the celestial and ecliptic poles is also 23.5 degrees, because the equator is perpendicular to the poles. 2. The path of the ecliptic poles, like that of the celestial poles, is not a circle in space but a loop or spiral. 3. The Ecliptic Poles are two points in the heavens that lie exactly perpendicular to the ecliptic plane. (Web site) 1. After the end of World War I, clashes between Poles and Germans occurred during the Silesian Uprisings. 2. Adam Makowicz was born into a family of ethnic Poles in Czechoslovakia in 1940, during World War II. After the war, he was raised in Poland. 3. 
The relations between Poles and Jews during World War II present one of the sharpest paradoxes of the Holocaust. 1. After World War Two, the town was populated by Poles, many expelled from Polish areas annexed by the Soviet Union. (Web site) 2. The expulsions of Jews from Austria after the Anschluss, and deportations of Poles and Jews from Polish areas annexed by Nazi Germany. (Web site) 3. These settlers were mostly poor civilians and "asocial persons" from central Poland and also ethnic Poles from Polish areas annexed by the Soviet Union. (Web site) 1. It involves the simultaneous use of ones arms and legs and making use of skiing equipment like boots, skis and poles. (Web site) 2. No other equipment than skis and ski poles may be used for moving along the track. (Web site) 1. The following season, he won two poles, at Bristol and Martinsville respectively, but fell five spots in the standings. 2. He had eight top-tens and won two poles, finishing tenth in the standings, his most recent top-ten points finish. 3. In 1986, Bill Elliott won two races, four poles and finished fourth in the championship standings. 1. Sauter finished the year sixth in the standings with one win, seven top-five and 13 top-10 finishes along with two poles. (Web site) 2. In addition, he has two poles, 11 top-10 and seven top-five finishes in 20 starts at the track. (Web site) 3. This past season in ASA Midwest Tour competition he had four wins, three poles, seven top-five and nine top-ten finishes in eleven starts. (Web site) 1. Tying: To tie a round Lashing begin with a clove hitch around both poles, about six inches from the end of one pole. 2. Place it over one of the poles above the apex and move down to the apex so that the locking bar of the clove hitch is to the inside of the A-frame. 3. Tie a clove hitch around an outside pole and loosely wrap all three poles 5 to eight times. (Web site) 1. Square lashing is a type of lashing used to bind poles together. (Web site) 2. 
Square Lashing can be used to bind long sticks at right angles to trees or poles to form drying racks and for many other uses. (Web site)
3. Square lashing is a type of lashing knot used to bind poles together.

1. He would also win the Busch series championship with 5 wins, 6 poles, 20 top fives, and 24 top tens. (Web site)
2. This season, Busch has made 17 starts, posting nine wins, 13 top fives, 14 top 10s and two poles.
3. In 17 career starts at Infineon, Gordon has 10 top fives, 13 top-10s and five poles.

1. By 1882 the city had almost 24,000 inhabitants, including roughly 12,000 Jews, 6,000 Ruthenians, and 4,000 Poles.

1. He did not win a race that season but did win two poles (Nashville Speedway USA and Atlanta), and he had six top-five finishes.
2. And in 15 Sprint Cup starts, Newman has two poles, one win, six top-five and seven top-10 finishes.
3. His six-race season has produced six top-five finishes and two Busch Poles. (Web site)

1. Rudd ended the season with two poles, six top-fives and 13 top-tens.
2. The team is now in third place in the Chase standings with six races to go, and has earned two poles, six top-fives and 16 top-10 finishes so far this year.
3. He won two poles, six top-fives and ten top-tens with a fourth place standing and the Busch Series "Most Popular Driver" award.

1. In one design, the train can be levitated by the repulsive force of like poles or the attractive force of opposite poles of magnets.
2. For example, when there are two electron pairs surrounding the central atom, their mutual repulsion is minimal when they lie at opposite poles of the sphere.
3. This can be neatly illustrated by magnets: two of the same "pole" will repel each other, but two opposite poles attract each other. (Web site)

1. He finished the year with two wins, two poles, nine top-five and 14 top-10s, which resulted in a seventh place finish in the point standings.
2.
In 26 NEXTEL Cup starts at Michigan, Labonte has nine top-five finishes and has won the Bud Pole four times, including capturing both poles in 2003.

1. The Australian took five wins and five other podiums, two poles and a fastest lap.
2. Williams remain the second most successful squad at the track, having clinched five wins, eight poles and five fastest laps. (Web site)
3. The Hamilton-Team Rensi partnership produced five wins, four poles and two finishes in the top 10 in points in his three seasons with the organization.

1. Auschwitz II was an extermination camp or Vernichtungslager, the site of the deaths of at least 960,000 Jews, 75,000 Poles, and some 19,000 Roma (Gypsies). (Web site)
2. October: Establishment of Auschwitz II (Birkenau) for the extermination of Jews; Gypsies, Poles, Russians, and others were also murdered at the camp. (Web site)
3. At the end of November of 1940 there were 8,000 Poles in Auschwitz, divided into three groups: political prisoners, criminals, and priests and Jews. (Web site)

1. Other victims included approximately 74,000 Poles, approximately 21,000 Roma (Gypsies), and approximately 15,000 Soviet prisoners of war. (Web site)
2. In addition, many non-Jews were also killed in these camps, mostly (non-Jewish) Poles and Soviet prisoners of war. (Web site)
3. Auschwitz I served as the administrative center, and was the site of the deaths of roughly 70,000 people, mostly ethnic Poles and Soviet prisoners of war. (Web site)

1. He set rookie records for the most top-10 finishes (22) and the most poles (six) in one season. (Web site)
2. He is tied for the most Bud Poles and leads all other drivers in top-10 starts this season.
3. In 19 starts this season, Harvick has two wins, two poles, 12 top-five and 13 top-10 finishes and has led 455 laps.

1. RCR has 16 top-five finishes, 20 top-10s and two poles at the track.

1. No active driver has more poles at Charlotte than Newman, who moved within five of David Pearson's track record. (Web site)
2. Borland looked on as Newman amassed eight poles, two wins, five top-fives and 11 top-10 finishes. (Web site)

1. Ryan Newman (2003) was the most recent driver to win both poles in a year.
2. Ryan Newman (four) leads all drivers in poles won at Phoenix. (Web site)
3. Ryan Newman, who had won the three previous poles, qualified fourth. (Web site)

1. On these tracks, Harvick has scored two poles, seven wins, 29 top-fives and 64 top-10 finishes in his career.
2. In addition to seven wins in 13 starts, he owns two poles, 10 top-five and 12 top-ten finishes, and an average finish of 3.8.
3. Staying with Newman-Haas for 2004, Bourdais dominated the series with seven wins and eight poles, beating his team-mate Junqueira to the title by 28 points.

1. Gordon also has the most poles amongst active drivers at Bristol with five.
2. Ryan Newman leads all active drivers in poles, with eight. (Web site)
3. Jeff Gordon has won seven poles at Charlotte, the most of all active drivers. (Web site)

1. Rule 4: A point on the real axis is a part of the root-locus if it is to the left of an odd number of poles or zeros.
2. As well as zeros, there are now also poles - singularities because of zeros in the denominator.
3. In fact, we recognize zeros and poles as points where shaded rings "gather round".

1. Isolated singularities may be classified as poles, essential singularities, logarithmic singularities, or removable singularities. (Web site)
2. Functions which have only poles but no essential singularities are called meromorphic.

1. A meromorphic function therefore may only have finite-order, isolated poles and zeros and no essential singularities in its domain.
2. A holomorphic function whose only singularities are poles is called a meromorphic function. (Web site)
3.
The points at which such a function cannot be defined are called the poles of the meromorphic function. 1. Keselowski, who is also running his first full Cup season, has two wins, two poles, eight top-five and 10 top-10 finishes in 11 Nationwide races. 2. Posted two poles, one top-five and eight top-10 finishes along with a pole and a win in the NEXTEL Open All-Star event. 3. In 17 previous starts, he has recorded four wins, three poles, 10 top-five and 13 top-10 finishes. Related Keywords * Axis * Busch * Celestial Poles * Celestial Sphere * Czechs * Diagonal Lashing * Earth * Ends * Equator * Ethnic Poles * Five * Five Poles * Four * Four Poles * Four Wins * Germans * Germany * Great Circle * Jews * Lash * Lashing * Lashings * Latitude * Lithuania * Lithuanians * Magnet * Magnetic Poles * Millions * Nazis * North * Points * Poland * Pole * Prussia * Rotation * Russians * Season * Silesians * South Pole * South Poles * Sphere * Tens * Three * Three Poles * Top-Fives * Ukrainians * Victories * Warsaw * Winning * Wins 1. Books about "Poles" in Amazon.com
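The complex-analysis sense of "poles" above can be illustrated numerically. The function f below is just an example chosen for illustration: it is meromorphic, with a simple pole at z = 0 and a pole of order 2 at z = 1, and |f(z)| grows without bound as z approaches either pole, with the order of the pole setting the growth rate.

```python
import cmath

def f(z):
    # Simple pole at z = 0, pole of order 2 at z = 1; holomorphic
    # everywhere else, so f is meromorphic on the whole plane.
    return 1 / (z * (z - 1) ** 2)

# Approach each pole along an arbitrary direction in the complex plane:
# near the simple pole |f| ~ 1/r, near the order-2 pole |f| ~ 1/r^2.
direction = cmath.exp(0.7j)
for r in (1e-1, 1e-2, 1e-3):
    print(abs(f(r * direction)), abs(f(1 + r * direction)))
```

Shrinking r by a factor of 10 multiplies |f| by roughly 10 at the simple pole and roughly 100 at the double pole, which is exactly how the order of a pole shows up in practice.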
Solving Techniques 19

Aligned Pair Exclusion

Please have a look below. We have the blue cells [4,8][2,8], which are strongly related to the pink cells [2,4][2,5,8]. The possible combinations of the two pink cells are [2][2], [2][5], [2][8], [4][2], [4][5], and [4][8].

The combination of [2][2] is impossible. The [2][8] combination can't be used because [R1C9]'s candidates are [2,8]. The [4][8] combination can't be used because [R2C1]'s candidates are [4,8].

Hence we are left with [2][5], [4][2], and [4][5], and [8] can't be entered in [R3C1].

As seen above, the Aligned Pair Exclusion technique considers all possible combinations between two cells to narrow down the candidates.

Aligned Pair Exclusion 2

Here is another sample. Let's examine the pink cells in the diagram above. The combinations are:

If the pink cells are [1][3], then the blue cells in the same box will be [7][7], which won't work out.
If the pink cells are [1][4], then the candidates of the blue cell [R5C3] will be [1,4], which won't work out.
If the pink cells are [3][8], the candidates in the blue cell, [R9C3], will be [3,8], which won't work out.

Hence, a [3] can't be entered to the right of the pink boxes.

Aligned Pair Exclusion 3

Here is another sample. Let's examine the pink cells in the diagram above. The combinations are:

If the pink cells are [4][7], then the same box's blue cells will become [1][1], which won't work out.
If the pink cells are [5][7], then the candidates of the blue cell [R3C8] will be [5,7], which won't work out.

Therefore, a [7] can't be entered below the pink cells, so a [3] is confirmed here.

Aligned Pair Exclusion 4

Here is another sample. Let's examine the pink cells in the diagram above. The combinations are:

If the pink cells are [1][8], the candidates in the blue cell, [R1C1], will be [1,8], which won't work out.
If the pink cells are [9][8], the candidates in the blue cell, [R6C2], will be [8,9], which won't work out.

Hence, an [8] can't be entered below the pink cells.
Aligned Pair Exclusion 5

Here is another sample. Let's examine the pink cells in the diagram above. The combinations are:

If the pink cells are [1][4], the candidates for the blue cell, [R9C2], will be [1,4], which won't work out.
If the pink cells are [1][5], the candidates for the blue cell, [R8C2], will be [1,5], which won't work out.
If the pink cells are [1][7], the candidates in both the blue cells [R1C3] and [R3C3] will be [9], which won't work out.

Hence, a [1] can't be entered to the left of the pink cells.

Names of cells in Sudoku

R1C1 R1C2 R1C3 R1C4 R1C5 R1C6 R1C7 R1C8 R1C9
R2C1 R2C2 R2C3 R2C4 R2C5 R2C6 R2C7 R2C8 R2C9
R3C1 R3C2 R3C3 R3C4 R3C5 R3C6 R3C7 R3C8 R3C9
R4C1 R4C2 R4C3 R4C4 R4C5 R4C6 R4C7 R4C8 R4C9
R5C1 R5C2 R5C3 R5C4 R5C5 R5C6 R5C7 R5C8 R5C9
R6C1 R6C2 R6C3 R6C4 R6C5 R6C6 R6C7 R6C8 R6C9
R7C1 R7C2 R7C3 R7C4 R7C5 R7C6 R7C7 R7C8 R7C9
R8C1 R8C2 R8C3 R8C4 R8C5 R8C6 R8C7 R8C8 R8C9
R9C1 R9C2 R9C3 R9C4 R9C5 R9C6 R9C7 R9C8 R9C9
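The elimination logic in the examples above can be sketched as a small program: enumerate every combination of the two aligned (pink) cells and discard any combination that is impossible, either because both cells would take the same digit in a shared unit or because the combination would leave a cell that sees both of them with no candidates. The code below is an illustrative sketch, not the site's solver; it reproduces the first example, where the pink cells hold [2,4] and [2,5,8] and the bivalue cells [2,8] and [4,8] see both of them.

```python
from itertools import product

def aligned_pair_exclusion(cands_a, cands_b, bivalue_cells, share_unit=True):
    """Return the candidate combinations that survive for two aligned cells.

    cands_a, cands_b: candidate sets of the two (pink) cells.
    bivalue_cells: candidate pairs of cells that see BOTH pink cells; a
    combination using exactly those two digits would empty such a cell.
    share_unit: when the two pink cells share a unit, equal digits are
    impossible as well.
    """
    survivors = []
    for a, b in product(sorted(cands_a), sorted(cands_b)):
        if share_unit and a == b:
            continue        # e.g. [2][2] is impossible
        if any({a, b} == set(bv) for bv in bivalue_cells):
            continue        # e.g. [2][8] would empty the [2,8] cell
        survivors.append((a, b))
    return survivors

combos = aligned_pair_exclusion({2, 4}, {2, 5, 8}, [(2, 8), (4, 8)])
print(combos)                         # the surviving combinations
print(8 in {b for _, b in combos})    # does 8 survive in the second cell?
```

Since [8] appears in no surviving combination for the second cell, it can be eliminated there, matching the conclusion that [8] can't be entered in [R3C1].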
Mohamadreza Ahmadi Sep 09, 2021

Abstract: A large class of decision making under uncertainty problems can be described via Markov decision processes (MDPs) or partially observable MDPs (POMDPs), with application to artificial intelligence and operations research, among others. Traditionally, policy synthesis techniques are proposed such that a total expected cost or reward is minimized or maximized. However, optimality in the total expected cost sense is only reasonable if system behavior in the large number of runs is of interest, which has limited the use of such policies in practical mission-critical scenarios, wherein large deviations from the expected behavior may lead to mission failure. In this paper, we consider the problem of designing policies for MDPs and POMDPs with objectives and constraints in terms of dynamic coherent risk measures, which we refer to as the constrained risk-averse problem. For MDPs, we reformulate the problem into an inf-sup problem via the Lagrangian framework and propose an optimization-based method to synthesize Markovian policies. For MDPs, we demonstrate that the formulated optimization problems are in the form of difference convex programs (DCPs) and can be solved by the disciplined convex-concave programming (DCCP) framework. We show that these results generalize linear programs for constrained MDPs with total discounted expected costs and constraints. For POMDPs, we show that, if the coherent risk measures can be defined as a Markov risk transition mapping, an infinite-dimensional optimization can be used to design Markovian belief-based policies. For stochastic finite-state controllers (FSCs), we show that the latter optimization simplifies to a (finite-dimensional) DCP and can be solved by the DCCP framework. We incorporate these DCPs in a policy iteration algorithm to design risk-averse FSCs for POMDPs.

* arXiv admin note: substantial text overlap with arXiv:2012.02423
Multivariate Clustering (Spatial Statistics)

Finds natural clusters of features based solely on feature attribute values.

• This tool produces an output feature class with the fields used in the analysis plus a new integer field named CLUSTER_ID. Default rendering is based on the CLUSTER_ID field and specifies which cluster each feature is a member of. If you indicate that you want three clusters, for example, each record will contain a 1, 2, or 3 for the CLUSTER_ID field. The output feature class will also contain a binary field called IS_SEED. The IS_SEED field indicates which features were used as starting points to grow clusters. The number of nonzero values in the IS_SEED field will match the value you entered for the Number of Clusters parameter.

• Input Features can be points, lines, or polygons.

• This tool creates messages and charts to help you understand the characteristics of the clusters identified. You may access the messages by hovering over the progress bar, clicking the pop-out button, or expanding the View details section in the Geoprocessing pane. You may also access the messages for a previous run of the Multivariate Clustering tool via the geoprocessing history. The charts created can be accessed from the Contents pane.

• For more information about the output messages and charts, see How Multivariate Clustering works.

• The Analysis Fields must be numeric and should contain a variety of values. Fields with no variation (that is, the same or very similar value for every record) will be dropped from the analysis but will be included in the Output Features. Categorical fields may be used with the Multivariate Clustering tool if they are represented as numeric dummy variables (a value of one for all features in a category and zeros for all other features).

• The Multivariate Clustering tool will construct nonspatial clusters. For some applications you may want to impose contiguity or other proximity requirements on the clusters created. In those cases, you would use the Spatially Constrained Multivariate Clustering tool to create clusters that are spatially contiguous.

• While there is a tendency to want to include as many Analysis Fields as possible, for this tool, it works best to start with a single variable and then add additional variables. Results are easier to interpret with fewer analysis fields. It is also easier to determine which variables are the best discriminators when there are fewer fields.

• There are three options for the Initialization Method: Optimized seed locations, User defined seed locations, and Random seed locations. Seeds are the features used to grow individual clusters. If, for example, you enter a 3 for the Number of Clusters parameter, the analysis will begin with three seed features. The default option, Optimized seed locations, randomly selects the first seed and makes sure that the subsequent seeds selected represent features that are far away from each other in data space (attribute values). Selecting initial seeds that capture different areas of data space improves performance. Sometimes you know that specific features reflect distinct characteristics that you want represented by different clusters. In that case, you can provide those locations by creating a seed field to identify those distinctive features. The seed field you create should have zeros for all but the initial seed features; the initial seed features should have a value of 1. You will then select User defined seed locations for the Initialization Method parameter. If you are interested in doing a sensitivity analysis to see which features are always found within the same cluster, you might select the Random seed locations option for the Initialization Method parameter. For this option, the seed features are randomly selected. When using random seeds, you may wish to choose a seed to initiate the random number generator through the Random Number Generator Environment setting. However, the Random Number Generator used by this tool is always Mersenne Twister.

• Any values of 1 in the Initialization Field will be interpreted as a seed. If you choose to specify seed locations, the Number of Clusters parameter will be disabled and the tool will find as many clusters as there are non-zero entries in the Initialization Field.

• Sometimes you know the Number of Clusters most appropriate for your data. In the case that you don't, you may have to experiment with different numbers of clusters, noting which values provide the best clustering differentiation. When you leave the Number of Clusters parameter empty, the tool will evaluate the optimal number of clusters by computing a pseudo F-statistic for clustering solutions with 2 through 30 clusters and report the optimal number of clusters in the messages window. When you specify an optional Output Table for Evaluating Number of Clusters, a chart will be created showing the pseudo F-statistic values for solutions with 2 through 30 clusters. The largest pseudo F-statistic values indicate solutions that perform best at maximizing both within-cluster similarities and between-cluster differences. If no other criteria guide your choice for Number of Clusters, use a number associated with one of the largest pseudo F-statistic values.

• This tool uses either the K means or K medoids algorithm to partition features into clusters. When Random seed locations is selected for the Initialization Method, the algorithm incorporates heuristics and may return a different result each time you run the tool (even using the same data and the same tool parameters). This is because there is a random component to finding the initial seed features used to grow the clusters. Because of this heuristic solution, determining the optimal number of clusters is more involved, and the pseudo F-Statistic may be different each time the tool is run due to different initial seed features.
When a distinct pattern exists in your data, however, solutions from one run to the next will be more consistent. Consequently, to help determine the optimal number of clusters, for each number of clusters 2 through 30, the tool solves 10 times and uses the highest of the ten pseudo F-statistic values.

• K means and K medoids are both popular clustering algorithms and will generally produce similar results. However, K medoids is more robust to noise and outliers in the Input Features. K means is generally faster than K medoids and is preferred for large data sets.

• The cluster number assigned to a set of features may change from one run to the next. For example, suppose you partition features into two clusters based on an income variable. The first time you run the analysis you might see the high income features labeled as cluster 2 and the low income features labeled as cluster 1; the second time you run the same analysis, the high income features might be labeled as cluster 1. You might also see that some of the middle income features switch cluster membership from one run to another.

Syntax

arcpy.stats.MultivariateClustering(in_features, output_features, analysis_fields, {clustering_method}, {initialization_method}, {initialization_field}, {number_of_clusters}, {output_table})

Parameters

in_features
    The feature class or feature layer for which you want to create clusters.
    Data type: Feature

output_features
    The new output feature class created containing all features, the analysis fields specified, and a field indicating to which cluster each feature belongs.
    Data type: Feature

analysis_fields
    A list of fields you want to use to distinguish one cluster from another.
    Data type: Field

clustering_method (Optional)
    The clustering algorithm used. K_MEANS is the default. K_MEANS and K_MEDOIDS are both popular clustering algorithms and will generally produce similar results. However, K_MEDOIDS is more robust to noise and outliers in the in_features. K_MEANS is generally faster than K_MEDOIDS and is preferred for large data sets.
    • K_MEANS —The in_features will be clustered using the K means algorithm. This is the default.
    • K_MEDOIDS —The Input Features will be clustered using the K medoids algorithm.
    Data type: String

initialization_method (Optional)
    Specifies how initial seeds to grow clusters are obtained. If you indicate you want three clusters, for example, the analysis will begin with three seeds.
    • OPTIMIZED_SEED_LOCATIONS —Seed features will be selected to optimize analysis results and performance. This is the default.
    • USER_DEFINED_SEED_LOCATIONS —Nonzero entries in the initialization_field will be used as starting points to grow clusters.
    • RANDOM_SEED_LOCATIONS —Initial seed features will be randomly selected.
    Data type: String

initialization_field (Optional)
    The numeric field identifying seed features. Features with a value of 1 for this field will be used to grow clusters. All other features should contain zeros.
    Data type: Field

number_of_clusters (Optional)
    The number of clusters to create. When you leave this parameter empty, the tool will evaluate the optimal number of clusters by computing a pseudo F-statistic for clustering solutions with 2 through 30 clusters. This parameter is disabled if the seed locations were provided in an initialization field.
    Data type: Long

output_table (Optional)
    If specified, the table created contains the pseudo F-statistic for clustering solutions 2 through 30, calculated to evaluate the optimal number of clusters. The chart created from this table can be accessed in the stand-alone tables section of the Contents pane.
    Data type: Table

Code sample

MultivariateClustering example 1 (Python window)

The following Python window script demonstrates how to use the MultivariateClustering tool.
import arcpy
arcpy.env.workspace = r"C:\Analysis"
arcpy.MultivariateClustering_stats("District_Vandalism", "outVandalism",
                                   ["TOTPOP", "VACANT_CY", "UNEMP"],
                                   "K_MEANS", "OPTIMIZED_SEED_LOCATIONS",
                                   None, "5")

MultivariateClustering example 2 (stand-alone script)

The following stand-alone Python script demonstrates how to use the MultivariateClustering tool.

# Clustering Vandalism data in a metropolitan area
# using the Multivariate Clustering Tool

# Import system modules
import arcpy

# Set environment property to overwrite existing output, by default
arcpy.env.overwriteOutput = True

# Set the current workspace (to avoid having to specify the full path to
# the feature classes each time)
arcpy.env.workspace = r"C:\GA"

try:
    # Join the 911 Call Point feature class to the Block Group Polygon
    # feature class
    # Process: Spatial Join
    fieldMappings = arcpy.FieldMappings()
    sj = arcpy.SpatialJoin_analysis("ReportingDistricts.shp",
                                    "Vandalism2006.shp", "Dist_Vand.shp",
                                    "JOIN_ONE_TO_ONE", "KEEP_ALL",
                                    fieldMappings)

    # Use the Multivariate Clustering tool to create groups based on
    # different variables or analysis fields
    # Process: Cluster Similar Features
    ga = arcpy.MultivariateClustering_stats("District_Vandalism",
                                            "outVandalism",
                                            ["Join_Count", "TOTPOP",
                                             "VACANT_CY", "UNEMP"],
                                            "K_MEANS",
                                            "OPTIMIZED_SEED_LOCATIONS",
                                            None, 5)

    # Use Summary Statistics tool to get the Mean of variables used to group
    # Process: Summary Statistics
    SumStat = arcpy.Statistics_analysis("outVandalism", "outSS",
                                        [["Join_Count", "MEAN"],
                                         ["VACANT_CY", "MEAN"],
                                         ["TOTPOP_CY", "MEAN"],
                                         ["UNEMP_CY", "MEAN"]])

except arcpy.ExecuteError:
    # If an error occurred when running the tool, print out the error message.
    print(arcpy.GetMessages())
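The pseudo F-statistic the tool uses to rank candidate cluster counts is a ratio of between-cluster to within-cluster variance, each divided by its degrees of freedom. The sketch below is an independent, pure-Python illustration of that idea and is not arcpy's implementation; it uses a deterministic 1-D clustering step (cut the sorted data at the widest gaps) as a stand-in for the tool's K means step, then scores candidate cluster counts.

```python
import random

def split_at_widest_gaps(points, k):
    """Deterministic 1-D clustering: sort the points and cut at the k - 1
    widest gaps between sorted neighbors (a stand-in for K means)."""
    order = sorted(range(len(points)), key=points.__getitem__)
    gaps = sorted(range(1, len(order)),
                  key=lambda i: points[order[i]] - points[order[i - 1]])
    cuts = set(gaps[len(gaps) - (k - 1):])   # positions of the widest gaps
    labels, cluster = [0] * len(points), 0
    for pos, idx in enumerate(order):
        if pos in cuts:
            cluster += 1
        labels[idx] = cluster
    return labels

def pseudo_f(points, labels, k):
    """Between-cluster variance over within-cluster variance, each divided
    by its degrees of freedom (a Calinski-Harabasz-style ratio)."""
    n = len(points)
    grand = sum(points) / n
    between = within = 0.0
    for j in set(labels):
        members = [p for p, lab in zip(points, labels) if lab == j]
        mean_j = sum(members) / len(members)
        between += len(members) * (mean_j - grand) ** 2
        within += sum((p - mean_j) ** 2 for p in members)
    return (between / (k - 1)) / (within / (n - k))

# Three well-separated 1-D blobs: the largest pseudo F-statistic marks the
# solution that best balances tight clusters against too many clusters.
rng = random.Random(1)
pts = [rng.gauss(mu, 0.3) for mu in (0, 5, 10) for _ in range(20)]
scores = {k: pseudo_f(pts, split_at_widest_gaps(pts, k), k)
          for k in (2, 3, 4)}
print(max(scores, key=scores.get))
```

Merging two of the blobs (k = 2) inflates the within-cluster variance, while splitting a blob (k = 4) pays a degrees-of-freedom penalty for little variance gain, so the three-cluster solution scores highest, mirroring how the tool picks the optimal Number of Clusters.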
• Output Coordinate System
• Geographic Transformations
• Current Workspace
• Scratch Workspace
• Qualified Field Names
• Output has M values
• M Resolution
• M Tolerance
• Output has Z values
• Default Output Z Value
• Z Resolution
• Z Tolerance
• XY Resolution
• XY Tolerance
• Random number generator

Feature geometry is projected to the Output Coordinate System prior to analysis. All mathematical computations are based on the Output Coordinate System spatial reference. When the Output Coordinate System is based on degrees, minutes, and seconds, geodesic distances are estimated using chordal distances. The Random Generator Type used is always Mersenne Twister.

Licensing information
• Basic: Yes
• Standard: Yes
• Advanced: Yes
MATH 131 A - College Algebra College Algebra (MATH 131) Term: 2021-2022 Fall Term This course reviews basic operations with integers, fractions and decimals, algebraic expressions, multiplication and factorization of algebraic expressions, and graphing. Emphasis is placed on solving equations and inequalities, solving systems of equations and inequalities, and analysis of functions (e.g., absolute value, radical, polynomial, rational, exponential, and logarithmic) in multiple representations. Upon completion, students should be able to select and use appropriate models and techniques for finding solutions to algebra-related problems with and without technology.
Differential Calculus: A Basic Guide - Calculus Help

Unless you're a math genius, you're probably quite intimidated by calculus. The good news is, calculus isn't as complicated as it seems, as long as you have a strong background in algebra, trigonometry, and geometry. Whether you're preparing for an upcoming semester of calculus class, or you're looking for some extra help understanding what you've been learning, our basic guide to differential calculus can help. Once you've mastered the basics, you're ready to take on more challenges.

What is Differential Calculus?

Calculus is a branch of math that's focused on the study of continuous change. Differential calculus looks at the instantaneous rate of change. So, what does that mean? We'll use the speed of a car as an example. Your speedometer gives you an estimate of how fast you're driving, based on the distance you've travelled over the past few seconds. It works fairly quickly, but it's not instant. If you wanted to know precisely how fast you were going at any given second, you'd need to use calculus to figure it out. Before we can explain how to do that, we need to define some of the terms you'll use in calculus so you can follow along with our basic guide to differential calculus.

History of Differentiation

Differentiation dates all the way back to Greek mathematicians such as Euclid, Archimedes, and Apollonius of Perga. The development of modern calculus is generally credited to Isaac Newton and Gottfried Wilhelm Leibniz, who provided independent approaches to differentiation and derivatives. The main insight that earned them credit for calculus is the fundamental theorem of calculus, which relates differentiation and integration. Many other mathematicians have contributed to the theory of differentiation since the 17th century. In the 19th century, calculus was made more rigorous by mathematicians such as Bernhard Riemann, Augustin Louis Cauchy, and Karl Weierstrass.
Differentiation was then generalized to Euclidean space and the complex plane.

You've probably already covered functions in advanced algebra or trigonometry, so we'll just do a quick review. A function is typically notated like this: f(x) = y, though there's usually much more to it than that. For an equation to be a function, any number you plug in as the x variable has to cause the equation to equal precisely one value of y.

An independent variable is the value of x in a function. It's the number that you're plugging into the function to change the output.

Dependent Variables

A dependent variable is whatever value is yielded by the function, represented in our example as y. The value of y will change depending on the value of x, the independent variable. The value of the independent variable always determines the value of a dependent variable.

The domain of an equation or function is the set of all the possible values of the independent variable x that will produce a valid dependent variable y. While the domain can be essentially infinite, for our purposes, we will create a relatively small domain. For our little function, we'll say that x can be [1, 2, 3…9]. The ellipsis (…) indicates that all numbers in between 3 and 9 are included in the domain. It's just a more natural way of writing it, so you don't have to list every number in the set.

The limit is the value that y approaches as x approaches a given value. If that doesn't make sense, think of it this way: In our domain, 9 is the highest value x can be. So, the limit of our function is the value of y as x gets closer to 9. Typically, we use this with much larger domains, where the highest possible value of x isn't known or doesn't exist because it's infinite. The limit allows you to define the pattern of values for y that you can extrapolate for any possible value of x.

Interval is a math term that isn't limited to just calculus.
It’s a set of numbers in between two given real numbers. Since those numbers can be broken down infinitesimally small, using an interval notation gives you a way to include all of those numbers without having to write them all out. There are two main types of intervals: closed and open. Closed Interval A closed interval is expressed as [a, b] which is a set of all numbers between a and b, including a and b themselves. Using the variable x to represent the set itself, you can write this as a ≤ x ≤ Open Interval An open interval is expressed using parenthesis instead of brackets, (a, b). That is a set of all numbers between a and b, NOT including a and b themselves. You can think of it as a < x < b. A half-open interval includes just one of its endpoints, meaning a ≤ x < b, or vice versa. It’s expressed as [a, b), or (a, b]. Photo credit to GlobalSpec Derivatives are a special type of function that calculates the rate of change of something. Expressed in a graph, derivatives are the calculation of the slope of a curved line. Derivatives are a fundamental concept of differential calculus, so you need to have a complete understanding of what they are and how they work if you’re going to survive the class. Simply put, a derivative explains how a pattern will change, allowing you to plot the past/present/future of the pattern on a graph, and find the minimums and maximums. There are several ways to represent the idea of derivatives: • Derivatives measure the pitch of the line that represents a function on a graph, at one particular point on that line. That means derivatives are the slope. • If the independent variable (the “input” variable in a function) is “time,” then the derivative is the rate of change, as the velocity. • If we look at a short section of the line of the function so that the line is nearly straight, the derivative of that section is the slope of the line. 
• The slope of a line connecting two points on a function graph approaches the derivative as the interval between the points approaches zero.
• As we mentioned before, derivatives are functions, and they're always changing. When you find the derivative of a function everywhere, you get another function, which you can then solve for the derivative of, leading to another function, which you can get the derivative of, and so on. Each derivative tells you the rate of change of the previous function.

In a mathematical expression, the definition of the derivative is:

f′(x) = lim (h→0) [f(x + h) − f(x)] / h

In this expression, h is a small distance from x, and the limit describes what happens as h shrinks toward zero.

Derivatives and Polynomials

Finding the derivative of a polynomial would be very tedious using limits and the slope formula. However, there is another way to calculate the derivative of polynomials. We'll work through an example, step by step, using the function f(x) = 2x^2 − 5x + 3.

First, you'll need to multiply the exponent (2, as in x^2) by the coefficient (2, as in 2x^2). Then we reduce the exponent by 1. Since 2x^1 is simply 2x, the first term of the derivative can be expressed as 2*(2x), or 4x.

For the second term, −5x can be written as −5x^1. We multiply the exponent 1 by the coefficient in front of the x, which is −5, giving −5. Reducing the exponent by 1 leaves x^0, and anything to the power of zero is one, so ultimately we get 1*(−5)*1, which equals −5.

The third and final term of this function, +3, doesn't have an x, which means it's a constant. A constant can be written as 3x^0. Multiplying by the exponent, which is 0, brings the zero down, and we end up with 0 as our third term.

Our simplified derivative at the end of this process is:

f′(x) = 4x − 5

The original function we started with was quadratic, but the derivative we ended up with is linear.
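The power-rule procedure above can be sketched in a few lines of code. (The worked example's function and answer are not shown explicitly in the text; from the steps they appear to be f(x) = 2x² − 5x + 3 and f′(x) = 4x − 5, which is my reading of the example.)

```python
def differentiate(coeffs):
    """Apply the power rule to a polynomial given as a list of
    (coefficient, exponent) pairs: d/dx of c*x^e is (c*e)*x^(e-1).
    Constant terms (exponent 0) differentiate to zero and are dropped."""
    return [(c * e, e - 1) for c, e in coeffs if e != 0]

# f(x) = 2x^2 - 5x + 3, written as (coefficient, exponent) pairs
f = [(2, 2), (-5, 1), (3, 0)]
print(differentiate(f))  # [(4, 1), (-5, 0)], i.e. f'(x) = 4x - 5
```

Note how the result has one term fewer and every exponent dropped by one, matching the observation that a quadratic always differentiates to something linear.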
The derivative will always be one degree less than the original function.

Other Rules and Theorems

There are a few other theorems you'll need to learn in differential calculus, and memorizing them ahead of time will give you an excellent foundation for your calculus class.

Rolle's Theorem

If a function is continuous on a closed interval [a, b] and differentiable on the open interval (a, b), and f(a) = f(b) (the y's at the endpoints are the same), then there is at least one number c in (a, b) where f′(c) = 0.

Mean Value Theorem

If a function is continuous on a closed interval [a, b] and differentiable on the open interval (a, b), then there is at least one number c in (a, b) where:

f′(c) = (f(b) − f(a)) / (b − a)

Intermediate Value Theorem

If a function f is continuous on a closed interval [a, b], where f(a) ≠ f(b), and m is any number between f(a) and f(b), there must be at least one number c in [a, b] such that f(c) = m.

Final Thoughts

We hope our basic guide to differential calculus has provided you with a solid foundation to build from in your class. Calculus can be a gratifying subject to learn because it has so many applications in the real world. And if you have any interest in physics or other sciences, calculus goes hand in hand with them! Now that you're armed with all of this information, you should have no problem jumping into calculus head-first.
TypeError: non-integral exponents not supported

Being new to Sage, I can't understand this error "TypeError: non-integral exponents not supported". This is raised by "find_root" in the following code snippet:

#for beta in betas:
#for beta in srange(0.01,0.09,0.02):
for beta in srange(0.91,0.99,0.02):
    f = x^n - 3*x^(n-1) + x^(n-2) + x^(n-3) - 2*x^(n-beta*n-1) - 3^(beta*n)
    find_root(f, 1, 4)

But what is strange for me is that if I replace the "for" line with one of the commented lines, then it works well. So I think there must be some Sage concepts that I haven't been aware of. Thanks for your help!

2 Answers

Does the following help (if not, ask for more)?

sage: for beta in srange(0.91,0.99,0.02):
....:     print n-beta*n-1

The printed exponents show a numerical noise that can be fixed as follows:

sage: for beta in srange(0.91,0.99,0.02):
....:     print round(n-beta*n-1)

You declare R.<x>=RR[], but this invokes the normal polynomial ring with positive integer exponents. You should use symbolics for your numeric purposes, as tmonteil has shown you. In fact, in mathematics there exist polynomial rings with other types of exponents:
• from ZZ: Laurent polynomials
• from QQ: Puiseux polynomials
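The "numerical noise" in the accepted answer is ordinary binary floating-point error: decimal steps like 0.02 are not exactly representable, so an exponent that should come out as a whole number lands slightly off it, and the polynomial ring then rejects it as non-integral. A plain-Python sketch of the same effect and of the round() fix (srange itself is Sage-specific, so the classic 0.1 + 0.2 example stands in for it here):

```python
# The classic illustration of binary floating-point noise:
x = 0.1 + 0.2
print(x)         # 0.30000000000000004
print(x == 0.3)  # False

# Rounding recovers the intended value, which is exactly what
# round(n - beta*n - 1) does in the accepted answer:
print(round(x, 10) == 0.3)  # True
```

The same repair applies in the original snippet: rounding the computed exponent back to the nearest integer before using it as a polynomial exponent avoids the TypeError.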
How Do You Spell Spanish Numbers 1-30 - SpellingNumbers.com

How Do You Spell Spanish Numbers 1-30 – Although it may seem difficult to figure out how to spell numbers, it's possible. The right tools can make the process of learning to spell simpler. There are numerous ways to get help with spelling, no matter where you work or study. They include games on the internet, workbooks, tips, and tricks.

The Associated Press format
If you're writing for a newspaper (or any other print media), you must be able to spell numbers in AP format. To simplify your work, AP Style provides detailed instructions on how to write numbers as well as other types of information. The Associated Press Stylebook was first released in 1953. Since then, hundreds of changes have been made to the stylebook, which has now passed its 55th anniversary. The stylebook is used by the majority of American newspapers, periodicals, and online news sources. Journalism is often guided by AP Style. These guidelines cover punctuation and grammar; key areas include capitalization, the use of dates and times, and citations.

Regular numbers
An ordinal number is an integer that indicates a particular position in a list. These numbers are frequently used to indicate order, such as size, significance, or sequence in time. Depending on the context, ordinal numbers can be expressed either in words or in figures. A distinctive suffix distinguishes the two: to create an ordinal number, you add a suffix such as "th" at the end, so the ordinal form of 31 is 31st. An ordinal can be used to refer to a variety of things, such as dates and names. It is equally crucial to differentiate between an ordinal and a cardinal number.
Millions, billions, and trillions
Large numbers are used in a variety of situations, such as the stock market, geology, and the history of the globe. Examples of this include millions and billions. A million is the natural number that comes before 1,000,001, while a billion comes after 999,999,999. The annual revenue of a company is calculated in millions. Millions also serve to express the value of a fund, stock, or other piece of money. Furthermore, billions are often used as a measure of a company's capitalization. You can check the accuracy of your estimates using a unit conversion calculator for converting millions into billions.

Fractions
Fractions can be used to indicate parts of numbers in the English language. A fraction is separated into two parts: the numerator and the denominator. The numerator shows how many pieces of identical size were taken; the denominator shows how many portions the whole was divided into. Fractions can be expressed mathematically or in words. Be careful when spelling fractions. This can be difficult when you need to use a large number of hyphens, in particular with larger fractions. There are a few fundamental guidelines you can apply to write fractions as words. At the start of a sentence, it is best to write the numbers out in full. Another option is to write the fractions in decimal format.

Many Years
Writing a thesis, a research paper, or an email may require you to spell numbers correctly. You can avoid typing the same number over and over and maintain proper formatting using these suggestions and methods. In formal writing, numbers are generally spelled out, though the many style guides offer various rules. For example, the Chicago Manual of Style advises spelling out numbers from one through one hundred; larger numbers are usually written as numerals. There are exceptions.
The American Psychological Association's (APA) style guide is one of them. While it is not tied to a specific publication, this manual is widely used in writing scientific papers.

Date and time
The Associated Press stylebook provides some general guidelines on how to style numbers. Numbers 10 and higher are written as numerals in this system, while numbers below 10 are generally spelled out; there are, however, some exceptions. Both the Chicago Manual of Style and the AP Stylebook offer extensive guidance on numbers, but the two guides do not always agree, and the distinction is real. To determine which rules you're missing, consult a stylebook.
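For quick reference, here is one way to tabulate the Spanish number words from 1 to 30 in code. These spellings are my own addition (the standard modern one-word forms, including the accented 16-29 series), not something taken from the article above:

```python
# Spanish number words 1-30 (standard modern one-word spellings)
spanish = {
    1: "uno", 2: "dos", 3: "tres", 4: "cuatro", 5: "cinco",
    6: "seis", 7: "siete", 8: "ocho", 9: "nueve", 10: "diez",
    11: "once", 12: "doce", 13: "trece", 14: "catorce", 15: "quince",
    16: "dieciséis", 17: "diecisiete", 18: "dieciocho", 19: "diecinueve",
    20: "veinte", 21: "veintiuno", 22: "veintidós", 23: "veintitrés",
    24: "veinticuatro", 25: "veinticinco", 26: "veintiséis",
    27: "veintisiete", 28: "veintiocho", 29: "veintinueve", 30: "treinta",
}

for n in (1, 16, 22, 30):
    print(n, spanish[n])
```

A dictionary like this is handy for drills or flashcard scripts, since looking up a number and checking a typed answer are both one-line operations.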
VRS sheet 1 exercise 1 (brainteaser) answer format

Hi, I have tried to submit the following answer to the brainteaser on propositional logic. The online tool accepts this and outputs a truth table, but the automatic checker in the exercise shows 0%. What is the correct format for this? Please advise.
P.S. I'm sorry if this is a duplicate; I couldn't find any older reference.

A great lesson on syntax vs. semantics. Your formula is a formula that does use the allowed variables. But it is not equivalent to the problem description. See "operator precedence" in the lecture.

Due to operator precedence, your formula is incorrect. But one can change your formula by just adding a bunch of parentheses until it is correct.

Yes, brackets are not operators. However, they are required to disambiguate formulas, since without brackets we do not know whether a&b|c means (a&b)|c or a&(b|c). If you do not like brackets, you could use postfix or prefix notation, as used in older pocket calculators. Then, (a&b)|c becomes | & a b c, while a&(b|c) becomes & a | b c. But usually, people do not like these notations.
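The ambiguity the answer describes can be checked mechanically. This small Python sketch (my own illustration, not part of the course tool) enumerates all truth assignments and shows that the two readings of a&b|c are genuinely different formulas:

```python
from itertools import product

def and_or(a, b, c):
    """(a & b) | c"""
    return (a and b) or c

def or_and(a, b, c):
    """a & (b | c)"""
    return a and (b or c)

# Find every assignment where the two parenthesizations disagree.
differ = [(a, b, c) for a, b, c in product([False, True], repeat=3)
          if and_or(a, b, c) != or_and(a, b, c)]
print(differ)  # [(False, False, True), (False, True, True)]
```

They disagree exactly when c is true but a is false: (a&b)|c is then true while a&(b|c) is false, which is why the checker treats the unbracketed formula as a different answer.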
The thermodynamic model was built from elemental simulation units. It was developed using partial 2D flow elements for the solution of the thermal flow equation. The earth thermal flow is coupled in a dynamic system with the fluid flow inside the pipe in order to establish a continuous energy stream across the solid/fluid thermal exchange. The feeding surface is simulated by 2D feeding disks perpendicular to the flow line. The simulation has been finalized based on standard values of the thermal capacity and conductivity of standard formations. These parameters will be recalculated before each project based on 3D seismic measurements and available well log measurements. During the drilling project these parameters will be continually measured and updated in real time. Unit volumes used for the simulation conform to the requirements of electricity production power plants, which require a standard flow rate of 100 liters/second and a temperature of 120 °C to produce about 3-4 MW of electricity, depending on the project system and plant type, efficiency, and manufacturer. The calculation has been performed in a conservative manner, accounting for heterogeneity and anisotropy of thermal flow through formations consisting of aggregates of mineral components with different thermal conductivities and capacities, and for a possible thermal barrier at the casing/cement/formation interface. The volumes account for the well geometry, consisting of a standard 7″ liner. The flowing component has been discretized into units. Each block unit has to absorb a total of ~30 × 10^6 joules from the formation during an equivalent circulation time. The total earth thermal flow Q has to match the flow rate of the fluid through the casing and the lung volume, and has to be compensated by the earth thermal flow at the external near-field interface. The thermal system is constrained by the thermal conductivity and capacity of the formation aggregate, and the flow component is adapted accordingly.
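The plant sizing quoted above (100 L/s at 120 °C yielding 3-4 MW of electricity) can be sanity-checked with the basic heat-transport relation Q̇ = ṁ · c · ΔT. The values below are my assumptions, not DeepDirectivity's figures: water properties for the working fluid, a 70 °C usable temperature drop matching the Δt in the efficiency parameters, and a 10-13% thermal-to-electric conversion efficiency typical of low-temperature plants.

```python
# Sanity check of the quoted plant sizing: Q_dot = m_dot * c * dT
m_dot = 100.0  # kg/s  (100 L/s of water at ~1 kg per liter, assumed)
c = 4180.0     # J/(kg*K), specific heat of water (assumed)
dT = 70.0      # K, usable temperature drop (matches the listed delta-t)

thermal_watts = m_dot * c * dT
print(thermal_watts)  # 29260000.0 W, i.e. ~29.3 MW thermal

# Low-temperature plants convert roughly 10-13% of this to electricity
# (assumed efficiency range), giving the quoted 3-4 MW order of magnitude:
for eff in (0.10, 0.13):
    print(round(thermal_watts * eff / 1e6, 1), "MW electric")
```

The ~29.3 MW thermal figure agrees with the "flow unit" of roughly 30 × 10^6 joules per circulation second listed in the efficiency parameters, which suggests the published unit sizing follows this same balance.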
EFFICIENCY PARAMETERS PRODUCED BY THE SYSTEM (APPROXIMATED VALUES +/- 3%)

Flow unit: ~30 × 10^6 joules (29,260,000 J)
Thermal rate: ~15 × 10^3 J/s (14,630 J/s)
Radial thermal conductivity of the 1st unit formation sector / area (average): ~10^5 J/s (102,885 J/s)
Radial thermal conductivity of the 1st unit formation sector / area (worst case): ~4.6 × 10^4 J/s (46,298 J/s)
Recharge radial thermal capacity content of the 2nd unit formation sector at 120 °C (Vol) (average): ~10^5 J (105,504 J)
T gradient perturbation @ external far field:
Far field unit energy content: ~34 × 10^9 J (33,912,000,000 J)
Far field generating surface – volume energy content: ~27 × 10^16 J
Worst case regeneration time: 3′ cycle, Δt = 70 °C

Simulation properties are intended for a standard pilot project based on realistic thermal formation parameters. Lung units add over 30% additional efficiency to the main flow system.

DeepDirectivity Systems – All Rights Reserved
What is: Hooke's Law What is Hooke’s Law? Hooke’s Law is a fundamental principle in physics that describes the behavior of elastic materials when subjected to external forces. Formulated by the 17th-century British scientist Robert Hooke, this law states that the force exerted by an elastic object is directly proportional to the displacement or deformation of that object, provided the limit of elasticity is not exceeded. Mathematically, Hooke’s Law can be expressed as F = -kx, where F represents the restoring force exerted by the spring, k is the spring constant, and x is the displacement from the equilibrium position. This relationship is essential in various fields, including engineering, materials science, and even data analysis, where understanding the behavior of materials under stress is crucial. The Spring Constant (k) The spring constant, denoted by the symbol k, is a crucial parameter in Hooke’s Law that quantifies the stiffness of a spring or elastic material. A higher spring constant indicates a stiffer spring that requires more force to achieve the same displacement compared to a spring with a lower spring constant. The value of k is determined through experimental methods, where the force applied to the spring is measured alongside the resulting displacement. This relationship allows engineers and scientists to predict how materials will behave under various loads, making it a vital concept in mechanical design and structural analysis. Applications of Hooke’s Law Hooke’s Law has a wide range of applications across multiple disciplines. In mechanical engineering, it is used to design springs, shock absorbers, and other components that rely on elastic deformation. In civil engineering, understanding the elastic properties of materials helps in the analysis of structures, ensuring they can withstand applied loads without permanent deformation. 
Additionally, Hooke’s Law is applicable in the field of data science, where it can be used to model and analyze the behavior of materials under stress, contributing to predictive analytics and simulations in material science research. Limitations of Hooke’s Law While Hooke’s Law is widely applicable, it is essential to recognize its limitations. The law holds true only within the elastic limit of a material, beyond which permanent deformation occurs. When materials are subjected to forces that exceed their elastic limit, they may undergo plastic deformation or even fracture, rendering Hooke’s Law inapplicable. Understanding these limitations is crucial for engineers and scientists, as it helps them determine the safe operating ranges for materials and avoid catastrophic failures in structures and mechanical systems. Hooke’s Law in Data Analysis In the realm of data analysis, Hooke’s Law can be utilized to model the relationship between force and displacement in various datasets. By employing statistical techniques, analysts can derive insights into how materials respond to different forces, allowing for better predictions and optimizations in material usage. This application of Hooke’s Law in data analysis not only enhances the understanding of material properties but also aids in the development of more efficient engineering solutions, ultimately leading to advancements in technology and innovation. Visualizing Hooke’s Law Visual representations of Hooke’s Law often include graphs that plot the force exerted on a spring against its displacement. These graphs typically exhibit a linear relationship, illustrating the proportionality between force and displacement within the elastic limit. Such visualizations are invaluable in educational settings, as they help students and professionals alike grasp the concept of elasticity and the behavior of materials under stress. 
Additionally, advanced data visualization techniques can be employed to analyze complex datasets, providing deeper insights into the elastic properties of various materials. Historical Context of Hooke’s Law The historical context of Hooke’s Law is rooted in the scientific revolution of the 17th century, a period marked by significant advancements in physics and mathematics. Robert Hooke, a contemporary of Isaac Newton, conducted experiments that led to the formulation of this law. His work laid the groundwork for modern mechanics and materials science, influencing subsequent research and development in these fields. Understanding the historical significance of Hooke’s Law provides valuable insights into the evolution of scientific thought and the foundational principles that govern the behavior of materials. Hooke’s Law and Material Science In material science, Hooke’s Law serves as a cornerstone for understanding the mechanical properties of materials. Researchers utilize this law to characterize the elastic behavior of various substances, including metals, polymers, and composites. By analyzing the stress-strain relationships of these materials, scientists can determine their suitability for specific applications, ensuring that they meet the required performance criteria. This knowledge is essential for developing new materials and improving existing ones, ultimately driving innovation in industries ranging from aerospace to consumer goods. Conclusion on Hooke’s Law Hooke’s Law is a fundamental principle that underpins much of modern physics and engineering. Its applications span various fields, from mechanical design to data analysis, highlighting its versatility and importance. By understanding the intricacies of Hooke’s Law, professionals can make informed decisions regarding material selection and structural design, ensuring safety and efficiency in their projects. 
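As a concrete illustration of the data-analysis use described above, the spring constant k can be estimated from noisy force-displacement measurements with an ordinary least-squares fit through the origin. The measurements below are invented for the sketch, and F here denotes the applied force (so k comes out positive, consistent with F = kx for the applied load):

```python
# Estimate k in F = k*x by least squares through the origin:
# k = sum(x_i * F_i) / sum(x_i^2)
displacements = [0.01, 0.02, 0.03, 0.04, 0.05]  # m (made-up data)
forces = [2.1, 3.9, 6.1, 8.0, 9.9]              # N (made-up, true k ~ 200 N/m)

k = (sum(x * f for x, f in zip(displacements, forces))
     / sum(x * x for x in displacements))
print(round(k, 1))  # about 199.5 N/m
```

Fitting through the origin (rather than a general line) encodes the physical constraint that zero displacement means zero force; within the elastic limit, residuals from such a fit are also a quick check that the material is still behaving linearly.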
As research continues to evolve, the principles of Hooke’s Law will remain integral to advancements in technology and material science.
Excel Formula for Unique Combinations of Adjacent Cells

In this tutorial, we will learn how to write an Excel formula in Python that displays the unique combinations of adjacent cells in columns N and P. This formula is useful when you want to find and display only the distinct combinations of values in two columns.

To achieve this, we will use the INDEX and MATCH functions in Excel. The INDEX function retrieves a value from a specified range based on a given position, while the MATCH function searches for a specified value in a range and returns its position.

The formula we will use is as follows:

=INDEX($N:$N, MATCH(0, COUNTIF($A$1:A1, $N:$N) + COUNTIF($B$1:B1, $P:$P), 0))

Let's break down the formula step by step:

1. The COUNTIF function is used to count the occurrences of each value in column N in the range $A$1:A1. This creates an array of counts for each value in column N.
2. The COUNTIF function is also used to count the occurrences of each value in column P in the range $B$1:B1. This creates an array of counts for each value in column P.
3. The two arrays of counts are added together using the + operator.
4. The MATCH function is used to find the position of the first occurrence of 0 in the combined array of counts. This represents a unique combination of adjacent cells in columns N and P.
5. The INDEX function is used to retrieve the corresponding value from column N at the position found by the MATCH function.
6. The formula is entered as an array formula by pressing Ctrl + Shift + Enter instead of just Enter.
7. The formula is then dragged down in columns A and B to display all unique combinations.

To use this formula in Python, you can use the openpyxl library to read and write Excel files.
First, import the necessary modules:

```python
import openpyxl
```

Next, load the Excel file and select the desired sheet:

```python
wb = openpyxl.load_workbook('filename.xlsx')
sheet = wb['Sheet1']
```

Then, iterate over the rows in columns N and P, and keep only the pairs that have not been seen before. (This loop reproduces the effect of the formula in Python rather than evaluating the formula itself.)

```python
seen = set()
out_row = 1
for row in range(2, sheet.max_row + 1):
    n_value = sheet['N' + str(row)].value
    p_value = sheet['P' + str(row)].value
    if (n_value, p_value) not in seen:
        seen.add((n_value, p_value))
        sheet['A' + str(out_row)].value = n_value
        sheet['B' + str(out_row)].value = p_value
        out_row += 1
```

Finally, save the modified Excel file:

```python
wb.save('filename.xlsx')
```

By following these steps, you can easily display the unique combinations of adjacent cells in columns N and P from Python. This can be useful for data analysis and reporting tasks where you need to identify and extract distinct combinations of values from two columns.

For example, if we have the following data in columns N, O, and P:

| N | O | P |
|---|---|---|
| A | X | 1 |
| B | Y | 2 |
| C | Z | 1 |
| A | X | 3 |
| B | Y | 2 |
| D | Z | 4 |

the unique combinations placed in columns A and B are:

| A | B |
|---|---|
| A | 1 |
| B | 2 |
| C | 1 |
| A | 3 |
| D | 4 |

The duplicate row (B, 2) is dropped, while (A, 1) and (A, 3) both appear because the pair of values differs.
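Independent of Excel, the underlying operation (keep each (N, P) pair the first time it appears) can be sketched in plain Python. The function name `unique_pairs` is my own, not part of any library:

```python
def unique_pairs(rows):
    """Return the distinct (n, p) pairs in first-occurrence order.

    This mirrors what the worksheet builds up row by row: a pair is
    kept only if it has not already appeared in the output.
    """
    seen = set()
    result = []
    for pair in rows:
        if pair not in seen:
            seen.add(pair)
            result.append(pair)
    return result
```

For the six example rows above, this drops only the repeated ("B", 2) pair.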
5.2. Props and presentations#

See also Signal-flow graph and Flow graph (mathematics) for an extended discussion.

5.2.1. Props: definition and first examples#

Definition 5.2.#

What’s the point of the symmetry map? Why would you use it? There’s no point in building it for all these examples if you would never use it. The purpose is for when you build what you think “should” be the same thing in two different ways (\(m+n\) vs \(n+m\)) that you have a way to convert the one to the other. If you constructed something as \(m+n\) but wanted \(n+m\) (or some other canonical form) then you can use the symmetry map to get it. This is the message of the “bookkeeping” language of Section 4.4.3.

Said another way, you may only have an isomorphism (approximately equal i.e. ≅) between two objects. If you think they “should” be the same object (and that decision is related to your engineering goals) then you can apply the mapping to make them the same again (get back to an equality i.e. =). For example, there’s an isomorphism between ℝ and itself defined by \(f(x) = 2x\). If you only care about the ratio between objects, then perhaps you consider this something that you can ignore. If you care about absolute magnitude, then it’s likely you can’t ignore this conversion. This goes back to the language about “preservation” of Chapter 1; you have to know what you care about to be able to say whether two things are pretty much the same as each other (other than some “bookkeeping”).

To accountants, bookkeeping is important work. The more you can tell a model about how much is “essentially the same” the easier you can make its job. If four images are “essentially the same” (think translation-invariance) then the model doesn’t need to memorize each of the four examples separately: just one and the translation of the input for the other three.
If four images are “essentially the same” other than some offset in your result (think translation-equivariance) then the model doesn’t need to memorize all four; only one and the offsets to apply to both the inputs and outputs for the other three. Classification nets typically don’t preserve the location of objects from the first layer; it’s information that can be thrown away. Complexity kills; what can be deleted, archived, thrown away, ignored, etc.? Example 5.3.# Is \(m + n = n + m\)? It depends on what \(n\) and \(m\) are. If they’re integers, then we’ve defined them to be the same (equal i.e. =, not equivalent i.e ≅). If they’re sets, then if by “same” you mean equal, they are not (only equivalent). See Rough Definition 4.45(d) or \(s_{AB}\) in Symmetric monoidal category. One way to remember this is that a “symmetric” monoidal category is not actually commutative, or it would be called a commutative monoidal category (even though this conflicts with how “symmetric” is used in the Commutative property article). Again, this is Categorification. You can’t “add” sets, only take the disjoint union of them. We are confusingly defining + = ⊔, when we could have used what is arguably a more specific symbol. Unfortunately the + seems appropriate for the monoidal product on objects and ⊔ seems appropriate for the monoidal product on morphisms. Similarly, does \(m + n = n + m = k\)? Only partially, and only after application of a function or functor. It may be the case that \(k = +(m,n) = +(n,m)\) for integers, but for sets we only have \(k = +(m,n)\) and \(k = +(n,m)\). 
This suggests a simpler definition than the one the author provides, following Disjoint union where the second index indicates the set:

\[\begin{split}
i,j \mapsto \begin{cases}
f(i), 1 & j = 1 \\
g(i), 2 & j = 2 \\
\end{cases}
\end{split}\]

Although this definition is simpler, and the logic clearer, it does assume the application of another (trivial) isomorphism to get to a set of size \(|\underline{n+m}| = |\underline{m+n}| = |\underline{k}|\) where the elements are labeled from 1 to \(k\). In fact, one could see Equation (5.4) as a “bookkeeping” isomorphism performing a similar function.

Exercise 5.5.#

Part 1.: Draw \(f + g\):

The composition rule for morphisms:

\[ f⨟g = g(f(i)) \]

The identities are:

\[ f(x) = x \]

Some identities:

The symmetry map \(\sigma_{1,2}\):

Example 5.8#

As discussed in Binary relation, this example uses bipartite graphs to visualize heterogeneous binary relations. It’s tempting to immediately index the involved sets: The disadvantage to this approach is it doesn’t make it clear what’s happening in the construction. As before, let’s follow Disjoint union in defining the domain and codomain: There are clearly bookkeeping isomorphisms we could build to \(\underline{5}\) and \(\underline{3}\), but we’ll skip that for now.

What this construction makes clear is the following statement from the footnote:

\[ R_1 ⊔ R_2 ⊆ (\underline{m_1} ⊔ \underline{m_2}) × (\underline{n_1} ⊔ \underline{n_2}) \]

Which we could also express: The footnote also includes:

\[ R_1 ⊔ R_2 ⊆ (\underline{m_1} × \underline{n_1}) ⊔ (\underline{m_2} × \underline{n_2}) \]

Which we could express: The fact that:

\[ (\underline{m_1} × \underline{n_1}) ⊔ (\underline{m_2} × \underline{n_2}) ⊆ (\underline{m_1} ⊔ \underline{m_2}) × (\underline{n_1} ⊔ \underline{n_2}) \]

Is obvious from the rightmost picture in Cartesian product / Intersections, unions, and subsets. It’s this relationship that apparently leads to the new domain and codomain of the morphism.
That is, the smallest possible square that \((\underline{m_1} × \underline{n_1}) ⊔ (\underline{m_2} × \underline{n_2})\) can fit into is \((\underline{m_1} ⊔ \underline{m_2}) × (\underline{n_1} ⊔ \underline{n_2})\).

The general lesson here is to keep track of your domain and codomain, especially when they are changing. This is a general lesson of category theory, but regarding this specific situation, in the article Binary relation the term “correspondence” is used for a binary relation that includes the domain and codomain. In the context of binary relations, you can often replace a ⊆ and × (as in f ⊆ A × B) with a : and → (as in f : A → B). The lesson of keeping track of your domain/codomain is less obvious in the category Set because functions must always completely cover the domain (though not the codomain).

Exercise 5.9#

The question asks for posetal props although preorder props are possible in theory (e.g. the example at the top of Preorder on the natural numbers). One could make all the natural numbers equivalent and the monoidal product could still be +. A prop requires the objects to be natural numbers, and the monoidal product on objects to be +. This excludes posets (such as Example 2.32) where the monoidal product is \(*\). In theory the definition of a prop doesn’t restrict non-linear posets on the natural numbers, but the author hasn’t discussed any other non-linear posets on the natural numbers and there don’t seem to be many examples online. We’ll have to construct custom examples, which simply end up being losets.

The requirement that the monoidal product be + leads to the following “template” for posetal props. On the right are the natural numbers, with no morphisms between them to start. On the left is the product category ℕ × ℕ, the domain of the monoidal product +.
The requirement that the monoidal product be + means every pair from the rows on the left must map to the objects on the right: In fact, this “template” corresponds to the posetal prop with a discrete ordering, an example in itself.

Another obvious example posetal prop is with the standard order of the natural numbers (Example 1.45, Example 2.30). The mapping of morphisms (monoidal product on morphisms) is fully constrained by the monoidal product on objects and the morphisms between objects: Another example is the opposite ordering: Another example could include only the even or odd numbers in the poset.

Exercise 5.10#

For Example 5.6:

1. Bij(m,n) will be the subset of FinSet(m,n) that are bijections.
2. Identities will be as in FinSet (those were already bijections).
3. The symmetry maps will be as in FinSet (those were already bijections).
4. Composition will be as in FinSet; the composition of two bijections will be a bijection under this definition.
5. The monoidal product on morphisms will be as in FinSet; the product of two bijections will be a bijection under this definition.

Point 5. in the solution seems wrong. Why is the second part of the definition not \(m' + g(i-m)\)?

For Example 5.7:

1. Corel(m,n) will be the subset of Rel(k,k) that are equivalence relations, where \(m+n=k\). The objects in Corel are “square” unlike in Rel.
2. The identities are given in the last paragraph of Example 4.61. One could visualize this as an Identity matrix (square by definition).
3. The symmetry maps will be the same as in FinSet; these functions can also be seen as equivalence relations.
4. The composition rule is given in the second to last paragraph of Example 4.61.
5. The monoidal product will be as in Rel (see Example 5.8).

For Example 5.8:

1. Rel(m,n) will be all the relations \(R ⊆ \underline{m} × \underline{n}\).
2. The identities will be as in CoRel; these equivalence relations are still relations.
3. The symmetry maps will be the same as in FinSet; these functions can also be seen as relations.
4. The composition rule is given in Example 5.8.
5. The monoidal product is given in Example 5.8.

Why is Rel a prop, when Set apparently is not? Or is it? It seems like in using FinSet rather than Set as an example the author was indirectly making a claim about it, in particular that because Set includes objects that are larger than the set of natural numbers it would not be possible to identify every object with a natural number. This is orthogonal to the issue of large and small categories, which is concerned with avoiding paradoxes (FinSet is a large category).

Definition 5.11#

Later in the chapter it will become clearer that natural language for point 2. in this definition is that a prop functor preserves the monoidal product. A prop functor is also a functor, so this is in addition to preserving identities and composition.

5.2.2. The prop of port graphs#

See “port graph grammar” only mentioned in Linear graph grammar and Graph rewriting. Later in the chapter we’ll attempt to rewrite graphs. A category PG whose morphisms are port graphs.

The author prefers to use \(!\) for almost any unique function: see Example 3.80, the diagram in Example 3.87, and many variable subscripts. Generally, these seem to be associated with a universal property. The author calling one function “the” unique function in this section may have been a mistake (an unfinished sentence). See instead comments in Initial and terminal objects and Category of sets about how the empty set serves as the initial object in Set, with the empty function the unique function to all objects. See also this helpful SO comment.

Exercise 5.16#

See the book’s solution, this is straightforward.

Exercise 5.18#

See the book’s solution, this is straightforward.

5.2.3. Free constructions and universal properties#

See also Free object (Wikipedia).
How does the universal property for free objects differ from universal properties in general? First, there are two ways to define a universal morphism in Universal property, and free objects definition uses the first (in terms of maps out). Compare to the example of a universal property we saw in Definition 3.86 for product objects, which is based on maps in (also documented in Universal property / Examples / Products). Another difference is that the universal morphism for a free object is defined to include an injection \(i\), not just any morphism \(u\). Additionally, Wikipedia’s free object definition seems to only be based on Set while the universal property definition is based on any category. But, compare Free object (Wikipedia) and free object (nLab). The nLab definition generalizes the Wikipedia definition, changing the faithful/forgetful functor from C → Set to C → D and hence the free functor from Set → C to D → C. Apparently: • Example 5.19 discusses the forgetful functor with signature Ord → Rel (free functor reversed) • Example 5.22 discusses Cat → Quiv • Example 5.24 discusses Mon → Set All of these differences may seem minor. The article forgetful functor (nLab) suggests there may be no difference. From free functor in nLab: Classically, examples of free constructions were characterized by a universal property. For example, in the case of the free group on a set X the universal property states that any map X→G as sets uniquely extends to a group homomorphism F(X)→G. When such a free construction can be realized as a left adjoint functor, this universal property is just a transliteration of the fact that the unit of the free-forgetful adjunction is an initial object in the comma category (X↓U) (where U is the forgetful functor out of the category of algebras, see e.g. the proof of Freyd’s general adjoint functor theorem.) The value in this section is that you can define less than what you “need to” and still get some “functioning” object. 
It’s always helpful to be able to type/specify less to a computer as part of achieving some goal. For example, it’s convenient to only specify edges to graphviz and get a full graph (with nodes). It’s convenient to be able to define a linear function from one vector space to another using only its value on a basis of its domain. We typically do this kind of thing with “constructors” of objects (notice “free constructions” in the title of section 5.2.3), though there’s no requirement to use the object-oriented perspective (a regular function could also do these expansions). As the author points out, we often use the term Specification (technical standard) when we want to refer to the arguments that are necessary to construct something (e.g. a preorder spec.).

Why are universal properties important? Because there are always many ways to skin a cat (solve a problem). We often want to let people construct in whatever way works for them as long as the final product meets some specification. That is, as long as we end up with something useful in the end. Whether something is useful is more often defined by what it can do, rather than how it was built. The Universal property and Adjoint functors articles discuss universal properties as setting up an optimization problem. You could also see them as setting up the acceptance criteria for some user story. To some degree, any question is an imprecise specification: it asks for an answer in its own terms (think of begging the question) but can still also often be answered in more than one way.

Example 5.19#

It’s helpful to review section 3.2.3. before this example.

Exercise 5.20#

For both 1. and 2. we know that the operation of a reflexive transitive closure implies:

\[ R(x,y) → x \leq_P y \]

For 1. we are given that \(R(x,y) → f(x) \leq_Q f(y)\), and would like to show \(x \leq_P y → f(x) \leq_Q f(y)\). The author is technically a bit lax with the language he uses for \(f\).
In the introduction to the question it is a function (a morphism between sets, in Set/Rel) and by the end of 1. it’s a monotone map (a morphism between preorders, in Ord). In the context of 1. we’ll distinguish these as \(f_1\) and \(f_2\): Notice that in the Category of relations (on the right) the objects (e.g. the basis P) are sets and the morphisms (e.g. R) are (binary) relations. So \(R: P → P\) is a morphism in the category on the right, a loop on \(P\). For most of the implications we need for \(f_1\) to define a monotone map \(f_2\), we can rely on \(R(x,y) → f(x) \leq_Q f(y)\). However, the reflexive transitive closure \(UF\) will “change” the relation \(R\) (call this new relation \(R'\)) by adding reflexive and transitive elements. Will these be respected by \(f_2\)? Consider the new reflexive elements; we may not have had \(R(x,x)\) before the closure but will have it after. Can we conclude that if \(R'(x,x)\) (corresponding to \(x ≤ x\)) is filled in, that we will also have \(f(x) ≤_Q f(x)\)? Since \(f\) is a function (as opposed to e.g. a multivalued function), both these terms will be equal. Since \((Q, \leq_Q)\) is a preorder, we can also be sure that \(f(x) ≤_Q f(x)\). Consider the new transitive elements. In some cases we will have \(R\) with elements for x ≤ y and y ≤ z, but with only \(R'\) having a new element for x ≤ z. While x ≤ y and y ≤ z will imply f(x) ≤ f(y) and f(y) ≤ f(z) by \(R(x,y) → f(x) \leq_Q f(y)\), they will not directly imply f(x) ≤ f(z). However, given that \(f(x) ≤_Q f(y)\) and \(f(y) ≤_Q f(z)\) we know (because Q is a preorder) that \(f (x) ≤_Q f(z)\). It may help to attempt to construct a counter-example to convince yourself. Some positive examples: For part 2., we know that \(x \leq_P y → f(x) \leq_Q f(y)\). By hypothetical syllogism with the first equation above, we can conclude \(R(x,y) → f(x) \leq_Q f(y)\). Exercise 5.21# For 1. we are given that \(a \leq_Q b → R(g(a),g(b))\). 
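The reflexive transitive closure \(UF\) used throughout Exercise 5.20 can be computed mechanically for a finite relation, which is handy when hunting for counter-examples like the ones considered above. A naive fixpoint sketch (the function name is my own):

```python
def reflexive_transitive_closure(elements, relation):
    """Naive fixpoint computation of the reflexive transitive closure
    of a binary relation (a set of ordered pairs) on a finite set."""
    closure = set(relation)
    closure |= {(x, x) for x in elements}      # add reflexive pairs
    changed = True
    while changed:                             # add transitive pairs until stable
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure
```

For instance, closing {(1, 2), (2, 3)} on the set {1, 2, 3} adds the three reflexive pairs and the transitive pair (1, 3).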
Replace the two variables in the equation that starts the solution to Exercise 5.20 to produce \(R(g(a),g(b)) → g(a) \leq_P g(b)\). Combine these two equations to produce \(a \leq_Q b → g(a) \leq_P g(b)\). That is, it is in fact automatically true that g defines a monotone map. For 2. we are given that \(a ≤_Q b → g(a) ≤_P g(b)\), and want to show that \(a ≤_Q b → R(g(a), g(b))\). We still have that \(R(g(a),g(b)) → g(a) \leq_P g(b)\) but this doesn’t help; we need the reverse implication of it. In general, this is not true. Consider the following counter-example, where \(g\) defines a monotone map but the highlighted-red element \((g(m), g(n)) = (a, c)\) ∉ R: Paragraph following Exercise 5.21# The author makes the claim: This means the domain of a map must be somehow more constrained than the codomain. This looks exactly wrong: it should read that the codomain of a map must be somehow more constrained than the domain. The domain of all these maps is the free object on our specification, which should be the minimally-constrained structure (including only the generating constraints, in one way or another). For all other objects for which there is a map from it, we are only adding constraints (not removing constraints, which would not be preserving them). See Example 3.41 for nearly the same sentiment, and a start to understanding what “most maps out” means for a Free category. The free square has a map to the commutative square, but there’s no equivalent map from the commutative square to the free square. The codomain (the commutative square) is more constrained. In the language of section 3.2.3, these “other objects” are on the spectrum from the free category (no equations) to its preorder reflection (all possible equations). 
Example 5.22# See Free category, also mentioned as an example in free functor in nLab (which points to free category in nLab and path category in nLab): the free category functor \(Grph_{d,r}\) → \(Cat\); One question that may come up about the nature of the free (F or Free) and forgetful (U) functors defined in these sources is what a round trip looks like. The author uses the word “closure” to describe the application of Free to a quiver. To be more specific, he probably means the application of F then U (UF); the application of only F produces an object in Cat rather than an object of the same type in Quiv. Let’s try to naively apply this logic to the free category 3 from Exercise 3.10: The downside to this naming scheme becomes apparent if one then tries to apply F again. Since the morphisms 1 and 2 are not connected to the morphism in 3 in any way, there’s really no reason not to freely concatenate them together again to produce another arrow: Obviously this is not the expected closure. If we wanted to only worry about preorders, then we could collapse all duplicate paths into a single paths as discussed in the answer to Exercise 1.40. This solution isn’t particularly desirable because it doesn’t let us turn e.g. the graph in Example 1.37 into a regular category (a Set-category) and not lose information, or to go to quiver and back again without losing information. Based on path category in nLab, the proper solution seems to be to label the morphisms freely constructed from arrows in G (in the free category) as lists, despite Categories for the Working Mathematician claiming the forgetful functor U should be in charge of: forgetting which arrows are composites and which are identities In this approach we would apply pattern-matching on the lists to avoid regenerating paths: Exercise 5.23# Part 1.# See dom/cod used in the definition of category in Category (mathematics). 
These are both functions from the set of morphisms to objects in the category, where dom produces the source of the arrow/morphism and cod produces the target.

Part 2.#

Let’s start with an overview of the universal property we are trying to prove:

Special cases

When f is not injective, it is mapping at least two vertices from the original graph G to a single vertex in the target category U(𝓒). When g is not injective, it is replacing the set of all possible paths in G with a smaller set where some paths are considered equivalent (corresponding to adding path equations i.e. equivalences to 𝓒). Perhaps ironically, to remove morphisms is to add path equations (constraints). To add morphisms while keeping the set of objects fixed is to remove path equations. See examples in Section 3.2.2. The cardinality of the set of objects or morphisms/arrows in 𝓒 may technically be greater than, the same, or smaller than the cardinality of the same sets in the free category 𝓖.

General case

Let’s repeat the two equations from the text on top of each other so they’re easier to compare:

\[\begin{split}
dom(g(a)) & = f(s(a)) \\
cod(g(a)) & = f(t(a))
\end{split}\]

The first equation is associated with arrow sources, and the second with arrow targets. That is, for every arrow \(a\) in G, the vertex associated with the source is mapped to the source of the mapped arrow, and the vertex associated with the target is mapped to the target of the mapped arrow. These two equations will naturally associate identities (trivial paths) in G with identities in C. For them to indirectly define a functor, though, we also need them to preserve composition. The two equations capture what we typically do to visually construct a legitimate functor (see e.g. Example 3.36), but how does this correspond to preserving composition?
There’s a disconnect between the “visual” rule and the algebraic “rule” that indicates a functor preserves composition:

\[ F(a_1 ∘_G a_2) = F(a_1) ∘_C F(a_2) \]

Notice this equation is a question (like a universal property), but it’s begging the question to some degree by assuming that the given morphisms compose. That is, this “algebraic” rule also requires \(t(a_2) = s(a_1)\), and \(cod(F(a_2)) = dom(F(a_1))\) (or composition would not be possible). Said another way, the “algebraic” rule works on two arrows at a time, while the “visual” rule works on one arrow at a time. One way to put the algebraic rule is that the mapped source of a two-arrow path must equal the mapped source of the first arrow in the path, and the mapped target of a two-arrow path must equal the mapped target of the second arrow in the path.

Let’s say that we have two arrows where \(t(a_2) = s(a_1)\) in G, i.e. that can be concatenated into a path \(p_G = a_2 ⨟ a_1 = a_1 ∘_G a_2\). The rules above will hold for both arrows:

\[\begin{split}
dom(g(a_1)) & = f(s(a_1)) \\
cod(g(a_1)) & = f(t(a_1)) \\
dom(g(a_2)) & = f(s(a_2)) \\
cod(g(a_2)) & = f(t(a_2))
\end{split}\]

Combining the first and last equations using \(t(a_2) = s(a_1)\):

\[\begin{split}
dom(g(a_1)) & = cod(g(a_2)) \\
cod(g(a_1)) & = f(t(a_1)) \\
dom(g(a_2)) & = f(s(a_2))
\end{split}\]

Because \(dom(g(a_1)) = cod(g(a_2))\) we can concatenate these arrows in C to produce the path \(p_C = g(a_2) ⨟ g(a_1) = g(a_1) ∘ g(a_2)\). Because we also know:

\[\begin{split}
dom(p_C) = dom(g(a_2)) \\
cod(p_C) = cod(g(a_1)) \\
s(p_G) = s(a_2) \\
t(p_G) = t(a_1)
\end{split}\]

We can say:

\[\begin{split}
cod(p_C) & = f(t(p_G)) \\
dom(p_C) & = f(s(p_G))
\end{split}\]

In short, mapping vertices source-to-source and target-to-target per-arrow is equivalent to mapping vertices source-to-source and target-to-target per-path. Using mathematical induction, you can extend this to arbitrary length paths.
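As a sanity check, here is the two-arrow case rendered as executable assertions. All of the arrow, morphism, and vertex names below are invented for the example; the dictionaries just tabulate s, t, dom, cod, f, and g:

```python
# A toy quiver G: arrow a2: x -> y, arrow a1: y -> z
s = {"a2": "x", "a1": "y"}   # source of each arrow
t = {"a2": "y", "a1": "z"}   # target of each arrow

# A toy category C with morphisms m2: X -> Y and m1: Y -> Z
dom = {"m2": "X", "m1": "Y"}
cod = {"m2": "Y", "m1": "Z"}

# The pair (f, g): f on vertices, g on arrows
f = {"x": "X", "y": "Y", "z": "Z"}
g = {"a2": "m2", "a1": "m1"}

# Per-arrow conditions: dom(g(a)) = f(s(a)) and cod(g(a)) = f(t(a))
for a in ("a1", "a2"):
    assert dom[g[a]] == f[s[a]]
    assert cod[g[a]] == f[t[a]]

# The path p_G = a2 ; a1 maps to p_C = g(a2) ; g(a1), and the
# per-path conditions follow from the per-arrow ones
path = ("a2", "a1")
mapped = tuple(g[a] for a in path)
assert dom[mapped[0]] == f[s[path[0]]]    # dom(p_C) = f(s(p_G))
assert cod[mapped[-1]] == f[t[path[-1]]]  # cod(p_C) = f(t(p_G))
```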
The preceding logic demonstrates the existence of a functor from 𝓖 to 𝓒 (labeled h on the commutative diagram) for every (f,g). This does not establish a bijection between h and (f,g); we still need to demonstrate that an (f,g) for every functor h exists (the opposite direction) and then show these two constructions are mutual inverses. See the text’s answer for one possible approach.

Variable names

The variable names f and g are not ideal. The first is for vertices, so it’s likely better named v. The second is for arrows, so it’s likely better named a. Similarly, we use dom/cod so we can talk in terms of the category 𝓒, but we could use the cleaner names \(s_c\) and \(t_c\) if we talked in terms of U(𝓒) instead. A more readable version of the original equations:

\[\begin{split}
s_C(m_a(a)) & = m_v(s_G(a)) \\
t_C(m_a(a)) & = m_v(t_G(a))
\end{split}\]

Part 3.#

Yes this is a graph because Ob(𝓒) and Mor(𝓒) are both sets (all that is required of \(V,A\)), and dom and cod are functions between those sets (all that is required of \(s,t\)). Specifically, it’s the graph U(𝓒) in the diagram above. There’s an adjunction between the free functor Free and the forgetful functor U in the diagram above.

Exercise 5.24#

See Free monoid - Wikipedia. For examples of \(B\) objects per the definition in Free object - Wikipedia, see Example 3.18 and Exercise 3.19.

For 1. the elements are {id, a, aa, aaa, …}. For 2. the natural numbers with zero under addition i.e. \((N_0, +)\). For 3. the elements are {id, a, b, aa, ab, bb, ba, aaa, …}.

5.2.4. The free prop on a signature#

This section uses the word/language of Generator (mathematics), although in Free object we would call these a basis. See also Signature (logic) and Term algebra. It’s worth re-emphasizing the ambitious goal of the first paragraph: to define the free prop given a generic signature.
To some extent this section explains why the author introduced port graphs, which is to allow the definition of a free prop that doesn’t include all morphisms between all (m,n). Still, the port graph concept isn’t necessary if you use Definition 5.30. Definition 5.25# It’s worth noting that \(G\) is a plain set in this example (e.g. \(G = \{f,g,h\}\)), though in the paragraph after it becomes what looks like a set of morphisms. As mentioned, the author is already using G as a shorthand for \((G,s,t)\). Exercise 5.28# From the paragraph after Exercise 5.16, the morphisms in PG(m,n) are all (m,n)-port graph \((V,in,out,\iota)\). The given prop signature lets one construct any (m,n)-port graph, so per the last paragraph of Definition 5.25 the Free(G) will equal PG. 5.2.5. Props via presentations# Exercise 5.35# There should be no difference; if we aren’t applying any additional equivalences then we should only cut down the set of prop expressions by the axioms of props (as we already did before).
Unexpected Result

I normally prefer to share posts of a more practical nature, but in celebration of a hard-fought semester fraught with theory, research, and not a whole lot of actual development in the curriculum, I've decided to share some of what I learned even if it isn't directly applicable to the kind of day-to-day development most of us face on the job. One of my most vicious classes last semester was combinatorial algorithms, a course focusing on algorithm design and analysis with particular attention to time complexity. Needless to say, computers and programming didn't factor into the curriculum at all. This was strictly a math class. Our big assignment for the semester was to pick an existing problem in the math domain, describe it in detail, and analyze a variety of the known approaches to solving it. I chose the independent set family of problems, and the professor - considered one of the most brutal on campus - wrote me to congratulate me on a flawless paper. So I figured it must not have been too shabby, and I'll pass it on to the masses! Now, let's talk about independent sets!

Introduction: What is an Independent Set?

If you already know about graphs, and what an independent set is, you can skip ahead, but a little bit of background knowledge is needed to understand what an independent set is. Fair warning, it would be quite helpful to know a little bit about graphs, time complexity, and dynamic programming (or at least recursion). But I'll do my best to make this digestible with a minimum of prerequisite knowledge.

The very least you need to know is that a graph is made of vertices and edges, so that the edges connect the vertices together, like connect-the-dots. You can have a graph with any number of vertices, connected to each other in any number of ways by any number of edges. We call the group of vertices together a set of vertices, and the group of edges is also a set.
Any group of items that belongs to a set (but might not include all of the items in that set) is called a subset. So, if we have a graph, let's call it G, composed of vertex set V and edge set E, an independent set within that graph is a subset of the vertices in G where none of the vertices in the subset are connected by any edge. In other words, an independent set of vertices in a graph is a subset of the graph's vertices with no two vertices adjacent. Two important types of independent set we will talk about below include the maximal independent set and the maximum independent set (they are sometimes, but not always, different from each other). We will also discuss what a graph’s independence number is. For the most part, we're only concerned with maximum independent sets and independence numbers, but I want to talk about maximal independent sets because that will help us to understand just what a maximum independent set is. A maximal independent set in G is a type of independent set. In a maximal independent set, if you add any vertex from the total vertex set V, that wasn't already in the subset, that would force an adjacency into the subset. Remember, if we have two adjacent nodes in our graph, it's not independent. So a maximal independent set is a subset of the vertices in G that can't have any more of G's vertices added without stopping the subset from being independent. There are two things I find worth noting about maximal independent sets. First, a given graph might have any number of maximal independent sets. Second, each maximal independent set might have a different total cardinality (number of vertices in the subset). In other words, a single graph might contain multiple maximal independent sets, each of varying size. The largest possible one of these is called the maximum independent set. This is the largest independent set found in a given G. 
Note also that a graph can have multiple maximum independent sets, but in this case, all of the maximum independent sets will have the same cardinality. Finally, whether there is only one maximum independent set, or there are many, we refer to the cardinality of maximum independent sets as the independence number of the graph to which they belong. We'll use these terms and concepts more below to discuss a variety of problems relating to independent sets - most importantly to my research, the maximum independent set problem and the independent set decision problem.

Details: Problem Formulation and Motivation

As we noted above, the maximum independent set problem takes a graph as input, G = (V, E). The goal is to find a subset of V comprising one maximum independent set on G, which might just be one of several maximum independent sets in G. The solution can then be used to obtain the answer to the independence number problem, which takes the same input (a graph) but seeks, instead of a maximum independent set, the independence number of the graph (the number of vertices in the maximum independent set). All we have to do to get the independence number once we have the maximum independent set is return the cardinality of the maximum independent set, and we're done.

Along the same lines, the solution to the maximum independent set problem also comes packaged with the solution to the independent set decision problem, which is looking for a simple Boolean value: true if the graph's independence number is greater than or equal to some given number, k, and false otherwise. In other words, we can formally define the independent set decision problem as taking a graph, G = (V, E), and an integer k, and returning a Boolean value: true if k is less than or equal to G's independence number, or false if k is greater than that independence number. In the simplest terms, we have a number, and we want to know if there is an independent set in some graph that is as big as our number.
Input: a graph G = (V, E), and an integer k
Output: a Boolean true or false value

Believe it or not, these versions of the same problem - maximum independent set, independence number problem, and independent set decision - each have a specific real world application. The maximum independent set and independence number problems in particular have a wide variety of practical uses. The maximum independent set, for instance, is a problem that materializes in visual pattern recognition, molecular biology, and certain scheduling applications; and the independence number problem has applications in molecular stability prediction and network configuration optimization.

Astute readers will notice that I've omitted the very specific subject of the independent set decision problem, which (to refresh our memories) seeks only a Boolean value and no other data. This member of the independent set family of problems is not considered to have any practical "real world" application. That said, it plays a crucial role in the realm of theoretical research. It is considered necessary in order to apply the theory of NP-completeness to problems related to independent sets. That's a topic for a whole other kind of blog post.

Though the topic of NP-completeness can get pretty hairy, the dominating factor of the time complexity of solving the decision version of this problem is the same as for the independence number problem or the maximum independent set problem. As mentioned above, the decision version of the problem uses the same underlying algorithm as the independence number problem, and simply examines the resulting data to a different end. In short, to get the solution to the decision problem, we reduce the independence number solution to a Boolean value. The independence number problem itself is the same as the maximum independent set problem, with the output reduced to a single value representing the cardinality of a maximum independent set in G.
But we're getting a bit beside the point here. Since practical applications defer in large part to the maximum independent set and independence number versions of this problem, this is where the vast majority of interesting research on the topic of independent sets has been done. The complexity of the algorithms to solve all three problems is dominated overwhelmingly by the operations performed in the original maximum independent set problem. The transformations we have to make to produce solutions to the other versions of the problem are pretty trivial (counting cardinality, or comparing a given k value to the independence number to determine a Boolean value). For that reason, we'll focus on the research surrounding the maximum independent set problem.

The Brute Force Approach - Slow but Simple

Now let's talk about how we actually solve this problem. The brute force approach to the maximum independent set problem is very simple in principle. In short, all we need to do is iterate over all possible subsets of V, and then for each subset, check all of its vertices for pair-wise adjacency. Iterating over every possible subset takes O(2^n) time, yielding an exponential time complexity, and the checking of all vertices adds a (relatively insignificant) polynomial factor to the complexity, referred to in this case as poly(n), where n is, of course, the number of vertices in G. We consider the polynomial factor insignificant because no polynomial factor will ever dominate the exponential factor of O(2^n), and so it becomes irrelevant for large graphs.

In summary, for each vertex subset S of V, we check all vertices in S for adjacencies. If an adjacency exists, this subset can be thrown out, because the adjacency violates the definition of an independent set. As we iterate over subsets, we simply measure each one and track which is the largest.
For maximum independent set, we return the independent set that ends up providing the largest number of vertices over all iterations (this being the largest, read maximum, independent set). This ends up taking a total of O(2^n · poly(n)) time. For the independence number problem, instead of returning the largest independent set, we return only the cardinality of the largest independent set, which we can trivially store while iterating over subsets. Finally, in the case of the independent set decision problem, we take the independence number gained via the above brute force algorithm, trivially compare it against our given k, and return the appropriate Boolean value, yielding a worst case that is still O(2^n · poly(n)) time.

Serious Business: Strategy and Analysis of an Improved Algorithm

Before we get too excited, it's only fair to say right up front that we are not going to beat exponential time complexity with any known algorithm for solving this problem. That said, a fair amount of research has been done on it, yielding impressive improvements to the base of exponentiation. This, in turn, has led to vastly improved run times in real world applications that rely on solutions to the independent set family of problems. (We're going to start getting very technical here.)

Following Jeff Erickson's guide to the optimization of the independence number algorithm, we see that there are a series of steps that can be taken in turn to improve the performance of our algorithm. As a first step toward understanding the complicated means of optimization, Erickson provides a recursive formulation of an algorithm for obtaining the independence number on a graph, G (shown below). Next, he shows that the worst case scenario for that recursive algorithm can be split into subcases, some of which are redundant recursive calls that do not need to be performed in duplicity. Eliminating these duplicate cases, obviously, improves run time complexity.
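As a concrete sketch of the brute force approach just described (my own illustration, assuming the graph is given as n vertices plus an edge list), the following Python enumerates every one of the 2^n subsets with a bitmask and keeps the largest independent one:

```python
# Brute force maximum independent set: try all 2^n vertex subsets.
def max_independent_set(n, edges):
    best = set()
    for mask in range(1 << n):                        # O(2^n) subsets
        subset = {v for v in range(n) if mask >> v & 1}
        # poly(n) check: reject any subset containing both ends of an edge
        if any(u in subset and w in subset for u, w in edges):
            continue
        if len(subset) > len(best):
            best = subset
    return best

# A 5-cycle 0-1-2-3-4-0: the independence number is 2
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(len(max_independent_set(5, edges)))  # 2
```

The independence number is then just the length of the returned set, and the decision problem's answer for a given k is simply whether that length is at least k.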
Finally, he shows that this improvement can be repeated on the resultant worst case, splitting that into further subcases, some of which are also redundant; and so on and so forth, continually improving the run time complexity in seemingly small, but ultimately meaningful, increments. To begin, observe Erickson's most elementary version of the algorithm in its recursive form:

    MaximumIndSetSize( G ):
        if G = { empty set }
            return 0
        v = any node in G
        withv = 1 + MaximumIndSetSize( G \ N(v) )
        withoutv = MaximumIndSetSize( G \ {v} )
        return max { withv, withoutv }

To clarify the notation here, N(v) refers to the neighborhood of v, meaning the set that includes v and all of its adjacent neighbors, but no other vertices. The backslash symbol refers to set-wise exclusion or "subtraction." For example: { a, b, c } \ { b } yields { a, c }.

As you can see at a glance, the number of recursive calls at each iteration is doubled! This yields the same exponential growth of time complexity we saw in the brute force approach. What's worse, we see that in the worst case, G \ N(v) = G \ {v}, meaning that v has no neighbors! This means that in both recursive calls at each iteration, our vertex subset is only being reduced by size 1. The result is the recurrence equation T(n) = 2T(n - 1) + poly(n), yielding a time complexity of O(2^n · poly(n)). T(x) here represents the time required to solve a problem of size x.

At this stage, though no clear improvement has yet been made, we already have the information we need to make our first improvement to the algorithm. As noted above, in the worst case, G \ N(v) = G \ {v}, meaning that v has no neighbors. However, if v has no neighbors, then it is guaranteed to be included in every maximal independent set, including maximum independent sets, because a node that has no neighbors is always independent!
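For readers who prefer running code, here is my own Python translation of that elementary recursion (the dict-of-neighbor-sets representation and the helper induced are conveniences of this sketch, not Erickson's notation):

```python
def induced(adj, keep):
    """Subgraph induced on the vertex set `keep`."""
    return {u: adj[u] & keep for u in keep}

def mis_size(adj):
    """Independence number via the elementary recursion above (sketch)."""
    if not adj:
        return 0
    v = next(iter(adj))                   # any node in G
    closed = {v} | adj[v]                 # N(v): v plus its neighbors
    with_v = 1 + mis_size(induced(adj, set(adj) - closed))   # G \ N(v)
    without_v = mis_size(induced(adj, set(adj) - {v}))       # G \ {v}
    return max(with_v, without_v)

# 4-cycle 0-1-2-3-0: independence number 2
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(mis_size(adj))  # 2
```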
As a result, one of the recursive calls, the one assigning the value withoutv, becomes superfluous, because no maximum independent set can exclude a v that has no neighbors. At the same time (but on the other hand), if v does have at least one neighbor, then G \ N(v) will have at most (n - 2) vertices. That's quite a brain-full, so let me explain it another way, with concrete examples. Let's say G has 10 nodes total, so n is 10. Then, if our random node v has at least 1 neighbor, the neighborhood of v is size 2 or greater. Since N(v) contains 2 or more vertices, G \ N(v) contains 10 minus some number 2 or greater - in other words, at most 8, or n - 2, vertices.

This line of reasoning yields a new recursive formula with a dramatically improved run time complexity:

T(n) ≤ O(poly(n)) + max{ T(n - 1), [T(n - 1) + T(n - 2)] }

The run time complexity on this improved version of the recursive algorithm is reduced to T(n) ≤ O(1.61803398875^n) after just one subcase division! (That base is the golden ratio, which is exactly what you'd expect: T(n - 1) + T(n - 2) is the Fibonacci recurrence.) That might not seem like much of an improvement, but in a little bit we're going to see some concrete examples of just how big an improvement this can make at execution time.

In the meantime, following this line of logic and making further observations, we can immediately improve on our new worst case by splitting that into subcases - some of which are, again, redundant and superfluous. Note that our new worst case is the case in which both T(n - 1) and T(n - 2) must be calculated. In other words, this is the case in which our recursive calls are doubled, and in which the graphs we pass along are of relatively large size (only 1 or 2 vertices smaller than the graph from which they were derived). In this specific case, v has exactly 1 neighbor; let's call this neighbor w. Specifically because v has exactly 1 neighbor, we will find that either v or w will appear in every maximal independent subset of our original G.
Think about it: if w is in a maximal independent set, v simply can't be, because they're neighbors, and two neighbors can't both be in the same independent set. Conversely, if w is not in a maximal independent set, v will be, because its only neighbor is not in the set, which frees v up to join without fear of standing by an adjacent neighbor. Taking this line of reasoning yet another step further, we see that given a maximal independent set containing w, we can remove w and instead include v. Therefore, we can conclude that some maximal independent set will include vertex v (whether that maximal independent set ends up being maximum or not).

Note also that if we know there is a strict pair-wise relationship between v and w (due to the given fact that v has 1 adjacent neighbor), we never have to evaluate the recursive case in which the call MaximumIndSetSize(G \ {v}) needs to be made. Our case evaluating T(n - 1) + T(n - 2) becomes only T(n - 2). Furthermore, expanding on our idea from the last worst-case subcase split, we see that if the degree of v is 2 or more, then N(v) will contain at least 3 vertices, and therefore G \ N(v) will have at most (n - 3) vertices. In such a case, the max{...} function of our recursive algorithm is expanded:

T(n) ≤ O(poly(n)) + max{ T(n - 1), T(n - 2), [T(n - 1) + T(n - 3)] }

This second improvement yields a time complexity of T(n) ≤ O(1.46557123188^n), a smaller but still valuable improvement. On the other hand, note that the complexity of the algorithm is growing rapidly here - not the time complexity, but the algorithmic complexity, as well as the difficulty we find in determining where to make future optimizations. At each step we take toward a faster algorithm, things are getting much more complicated. At this point, the complexity of the algorithm and the steps required to make further improvements go beyond the scope of what I can talk about in a blog post (even one as ridiculously long and complicated as this).
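Folding the degree-0 and degree-1 observations into the recursion looks like the following (again my own sketch, not code from Erickson's notes; same dict-of-neighbor-sets representation as before):

```python
def induced(adj, keep):
    return {u: adj[u] & keep for u in keep}

def mis_size_improved(adj):
    """Independence number with the degree-0 and degree-1 shortcuts (sketch)."""
    if not adj:
        return 0
    v = next(iter(adj))
    closed = {v} | adj[v]                               # N(v)
    if len(adj[v]) <= 1:
        # degree 0 or 1: some maximum independent set contains v,
        # so the "without v" branch never needs to be evaluated
        return 1 + mis_size_improved(induced(adj, set(adj) - closed))
    # general case: branch on including or excluding v
    with_v = 1 + mis_size_improved(induced(adj, set(adj) - closed))
    without_v = mis_size_improved(induced(adj, set(adj) - {v}))
    return max(with_v, without_v)

# Path 0-1-2-3: independence number 2 (e.g. {0, 2} or {1, 3})
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(mis_size_improved(adj))  # 2
```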
However, Erickson does go on to show that further clever observations can be made continuously, yielding even better time complexities. Two further observations of worst-case splitting and problem structure yield time complexities T(n) ≤ O(1.44224957031^n) and T(n) ≤ O(1.3802775691^n), respectively. According to Erickson, the best published time complexities for this problem are solutions quoted by Fomin, who achieved T(n) ≤ O(1.2210^n), and Bourgeois, who managed to achieve an impressive T(n) ≤ O(1.2125^n).

Considering we're still ultimately stuck with exponential time, it may seem like a lot of work needs to be done for minuscule returns. After all, 1.221 and 1.2125 are almost the same number, right? On the other hand, certain practical applications demand tremendous effort, and therefore any ounce of efficiency that an improved algorithm can provide becomes valuable. Larson includes as an example in his writings a reference to the prediction of molecular stability in the tetrahedral C100 isomer. For this example, the time complexity T(n) of determining the independence number of this molecule is T(100). Therefore, working out the arithmetic, we see the run time for this example reduced as follows (ignoring poly(n)):

2^100 ≈ 1.3 × 10^30
1.61803^100 ≈ 7.9 × 10^20
1.46557^100 ≈ 4.0 × 10^16
1.44225^100 ≈ 8.0 × 10^15
1.38028^100 ≈ 9.9 × 10^13
1.2210^100 ≈ 4.7 × 10^8
1.2125^100 ≈ 2.3 × 10^8

With these concrete numbers to reference, we can see clearly that the jump from the brute force method to even a slightly optimized solution is more than worth the effort. The first jump alone results in an improvement to run time of many orders of magnitude! After that, we admittedly begin to see diminishing returns for the effort taken to develop improved algorithms. For instance, I personally suspect that the extra effort by Bourgeois to beat Fomin by a mere 0.0085 on the exponentiation base was of debatable value as anything more than a stepping stone. In my personal opinion, the return on the effort invested to develop Robson's final state-of-the-art solution (discussed below) was not worth such a small gain over Fomin's algorithm.
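That comparison is easy to reproduce yourself; this short Python snippet evaluates b^100 for each exponentiation base in the optimization sequence (bases taken from the complexities quoted above):

```python
# Growth of b^100 for each exponentiation base in the optimization sequence.
bases = [2.0, 1.61803398875, 1.46557123188,
         1.44224957031, 1.3802775691, 1.2210, 1.2125]
for b in bases:
    print(f"{b:<14} -> {b ** 100:.2e}")
```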
At least, not in the case of the C100 stability prediction problem. That said, 100 may be considered a small quantity of vertices in some problem domains, such as large network configuration optimization or scheduling problems. In those cases, it may actually be worth it to go to the trouble of researching and developing the absolute best state-of-the-art in terms of fast, efficient algorithms.

Robson's Incredible State of the Art, and Conclusions

Research has yielded relatively fast algorithms for solving the independent set family of problems. Some, notably by Bourgeois, are capable of a time complexity as low as a flabbergasting T(n) ≤ O(1.0854^n), using strategies of case analysis and reduction rules. However, it's worth noting that algorithms as fast as these come with severe limitations. For example, the Bourgeois algorithm is limited to graphs with average vertex degree 3. That is a serious limitation! The fastest published algorithm for solving the independent set family of problems on generic graphs (with unbounded vertex degree), also by Bourgeois, runs in only T(n) ≤ O(1.2125^n).

The absolute state of the art for solving the independent set family of problems is actually a computer-generated algorithm. The algorithm is described in an unpublished technical report by Robson, and is claimed to boast a time complexity of only O(1.1889^n). However, the trade-off at this level of acceleration is mind-boggling algorithmic complexity. Just the high level description of the algorithm, excluding the rest of the accompanying report, fills fully 15 pages, and thus excludes discussion of the algorithm in any depth from the scope of most academic papers, let alone blogs. I'm not an expert on algorithmic analysis at quite that level, but I'd wager such an algorithm would even be excluded from use in applications at that point, due to its sheer complexity.
As algorithms grow in efficiency, but also in complexity, it gets tough to make universal recommendations on which to use - or, in the case of the algorithms described above, how deep through the iterations of optimization a project team should go to try to improve the performance of their algorithm. The context of the project would have to be factored in. For projects dealing with small graphs, it wouldn't be worth the effort to interpret and implement something like Robson's solution. Something like the example described by Erickson and outlined above would probably be good enough. For projects dealing with graphs of moderate size and predictably small average vertex degrees throughout, but needing accelerated solutions, something like the algorithm developed by Bourgeois would provide a superior middle ground.

Thanks for reading!

- Steven Kitzes
Worldsheet instantons in A-model

In the usual gauge theory in four dimensions, instantons are solutions of the anti-self-dual equation $\star F = -F$, where $F$ is the curvature of the connection of a principal $G$-bundle over a four-manifold $M$. In the topological string A-model one tries to count stable maps from a genus $g$ Riemann surface to a target space $$ f : \Sigma_g \to X $$ and to get a consistent string theory $X$ has to be a Calabi-Yau 3-fold. These stable maps are also called "worldsheet instantons".

Does this name arise because they are solutions of a 2d analogue of the (anti-)self-dual equation for the corresponding $\sigma$-model? And if yes, which analogue is this?

Also, the corresponding topological string partition function sums over all different worldsheet instantons, which are classified (up to their finite automorphism groups) by the class they define in the homology of the target. Let $\beta$ denote a class in that homology group. Then, the Gromov-Witten invariants $N_{g,\beta}(X)$ appear in the free energy as $$ \log Z_{\text{top.}} = F_{\text{top.}}(g_s, \vec{q}) = \sum_g \sum_{\beta} g_s^{\chi(g)} N_{g,\beta}(X) \vec{q}^{\beta} $$ Is it correct to interpret physically the GW invariants as the number of isomorphic instanton configurations for a specific $\beta$? That would be confusing though, because, e.g. in usual 4d theories these configurations are quotients by the gauge group and the number of representatives within some isomorphism class is infinite. But maybe I have some misconception here.

Edit: Another question that popped into my mind is in what sense there even exists some instanton moduli space, since the A-model (as well as the B-model) is a topological twist of the standard non-linear sigma model, which does not involve gauge fields.
The Lagrangian for this model (before any twist) is $$ \mathcal{L} \sim \int \frac{1}{2}g_{IJ}(\Phi)\partial_z \phi^I \partial_{\bar{z}}\phi^J + \frac{i}{2}g_{IJ} \psi_{-}^I D_z \psi_{-}^J + \frac{i}{2}g_{IJ} \psi_{+}^I D_{\bar{z}} \psi_{+}^J + R_{IJKL} \psi_{+}^I\psi_{+}^J\psi_{-}^K \psi_{-}^L $$ So, how does any notion of instantons (in the sense of the particular gauge field configurations we are familiar with) come into the game here?

A "worldsheet instanton" $f : \Sigma_g \rightarrow X$ is by definition a holomorphic map: $\bar{\partial}f=0$. This equation is the analogue of the anti-self-dual (ASD) equation for 4d gauge theory.

In fact, there exists a really precise analogy between the A-model and 4d instantons (known since the 1980's: Gromov's idea to use (pseudo)holomorphic curves to study symplectic manifolds was directly inspired by Donaldson's work using 4d instantons to study 4-manifolds). In fact, in a non-supersymmetric context, it was already known in the 1970's in physics that there is an analogy between 2d $\sigma$-models and 4d gauge theories.

From a physics point of view, this comes from the fact that the A-model is a topological twist of a 2d theory with $\mathcal{N}=(2,2)$ SUSY and that the natural setting for 4d instantons is the Donaldson-Witten theory, topological twist of a 4d theory with $\mathcal{N}=2$ SUSY. The key point is that $\mathcal{N}=(2,2)$ 2d SUSY is the dimensional reduction of $\mathcal{N}=1$ 4d SUSY, and $\mathcal{N}=2$ 4d SUSY is the dimensional reduction of $\mathcal{N}=1$ 6d SUSY. Both SUSY algebras come by dimensional reduction of a SUSY algebra in two higher dimensions, and so will have a charge of topological nature, a BPS-like bound, and a general notion of instanton as a finite action configuration saturating this bound.
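To make the 2d analogue of the instanton bound explicit (a standard Bogomolny-type computation, added here for reference and not part of the original answer; normalization conventions vary): for a map $\phi : \Sigma \to X$ into a Kähler target with Kähler form $\omega$, the bosonic sigma-model action can be rewritten by completing the square,

```latex
S[\phi] \;=\; \int_\Sigma \left( |\partial\phi|^2 + |\bar\partial\phi|^2 \right)
        \;=\; 2\int_\Sigma |\bar\partial\phi|^2 \;+\; \int_\Sigma \phi^*\omega
        \;\ge\; \int_\Sigma \phi^*\omega \;=\; \omega\cdot\beta ,
```

so the action is bounded below by the topological quantity $\omega\cdot\beta$, with equality precisely when $\bar\partial\phi = 0$. This mirrors the 4d rewriting of the Yang-Mills action, where $\int \mathrm{tr}\, F\wedge\star F$ is bounded below by the instanton number and the bound is saturated exactly on (anti-)self-dual connections.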
In both cases, instanton configurations are determined by a first order PDE (the ASD equation in 4d, the holomorphic map equation in 2d), whereas the general equations of motion of the (untwisted) theory are second order PDEs (the Yang-Mills equations in 4d, the harmonic map equation in 2d).

Another way to understand the analogy is not to go up but to go down in dimensions. Both the 4d ASD equation and the 2d holomorphic map equation are gradient flow lines for some (infinite dimensional version of) Morse theory (in the same way that the usual tunneling instantons of quantum mechanics are related to the usual finite dimensional Morse theory: see Witten's paper "Supersymmetry and Morse theory"). More precisely, the 4d ASD equation is the equation of gradient flow lines for the 3d Chern-Simons functional, whereas the 2d holomorphic map equation is the equation of gradient flow lines for the 1d action functional ($\int_\gamma p\,dq$). It is possible to extend the analogy at some level of detail, like finding the analogue of the topological subtleties appearing in the definition of the Chern-Simons functional, and so on.

In summary, both stories come from a miracle. The miracle in 4d is the splitting of 2-forms into self-dual and anti-self-dual parts. The miracle in 2d is the splitting of harmonic functions into holomorphic and anti-holomorphic parts. In other words, the 4d miracle is the existence of quaternions and the 2d miracle is the existence of complex numbers (from there, you could ask: what is the 8d miracle coming from the existence of octonions?).

The A-model, and in fact the topological string (i.e. the coupling of the A-model with topological 2d gravity), makes sense for $X$ a Kähler manifold of any dimension (which is different from the physical string, where the coupling to physical 2d gravity imposes a critical dimension). For any $\beta \in H_2(X,\mathbb{Z})$, there is a moduli space $M_g(X,\beta)$ parametrizing holomorphic maps of class $\beta$ from genus $g$ Riemann surfaces to $X$.
It is the analogue of a moduli space of 4d ASD instantons of given instanton number. If the dimension of the moduli space of 4d ASD instantons is zero, it means that the moduli space is made of finitely many points, and one can obtain a number by counting the number of points. If the dimension is not zero, one can obtain numbers by integrating differential forms over the moduli space: one obtains the Donaldson invariants, which determine the correlation functions of the Donaldson-Witten theory. In fact, to make precise sense of that, one has to ensure that the moduli space is compact, and to have that, one has to add "punctual instantons" (singular limits of instantons shrinking to a point).

There is a similar story for holomorphic maps: one has to add singular configurations bubbling off as limits of smooth configurations (the precise technical thing to do was found by Kontsevich and is the "stable" part of "stable map"). If the compactified moduli space has dimension zero, one can simply count the number of points; if not, one can integrate differential forms over the moduli space and obtain the Gromov-Witten invariants of $X$, which determine the correlation functions of the A-model/topological string.

What is special about $X$ a Calabi-Yau 3-fold is that all the moduli spaces $M_g(X,\beta)$ are of dimension zero, and one obtains numbers $N_{g,\beta}$ by counting the number of points in these moduli spaces. In fact, that is not quite true: $M_g(X,\beta)$ can be of higher dimension, but it is always "virtually of dimension zero", and it is always possible to extract numbers $N_{g,\beta}$ (which are rational in general). Making the preceding phrase precise is the main technical difficulty of the theory (this difficulty has been solved but requires some thinking); from a physics point of view, it has to do with the correct treatment of the fermionic zero modes.
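The dimension statement can be made explicit (a standard formula of Gromov-Witten theory, added here for reference and not part of the original answer): the virtual complex dimension of the moduli space of genus-$g$ stable maps of class $\beta$ is

```latex
\operatorname{vdim}_{\mathbb{C}} \overline{M}_g(X,\beta)
  \;=\; \int_\beta c_1(T_X) \;+\; (\dim_{\mathbb{C}} X - 3)(1-g) ,
```

so for $X$ a Calabi-Yau 3-fold ($c_1(T_X)=0$ and $\dim_{\mathbb{C}} X = 3$) the virtual dimension vanishes for every $g$ and every $\beta$, which is exactly the "virtually of dimension zero" statement above.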
But, in a first glance, the term instanton here is not related to some gauge field configuration, correctly? This is confusing since one can indeed have 2d gauge theories. That's right. In a standard 2d $\sigma$-model, there is no gauge field and the "world-sheet instanton" is made of the scalar fields (and fermionic partners) with value in the target manifold. For a 2d gauge theory, a natural topologically non-trivial configuration, natural analogue of the 4d gauge instanton, happening for an abelian gauge group (e.g. U(1)), is a vortex. But in some cases, there is a relation between the two stories: some 2d abelian gauge theory in the UV can become strongly coupled in the IR and can flow under the RG flow to some 2d $\sigma$-model (without gauge groups), and non-perturbative UV effects due to the vortices are mapped to the non-perturbative IR effects due to the "world-sheet instantons" (see https://arxiv.org/abs/hep-th/9301042
{"url":"https://www.physicsoverflow.org/37886/worldsheet-instantons-in-a-model","timestamp":"2024-11-04T17:23:22Z","content_type":"text/html","content_length":"128423","record_id":"<urn:uuid:4dca4d94-6d37-4325-aa0f-b2bda13b0791>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00755.warc.gz"}
rip-environment script not running on RPi4 64bit rip-environment script not running on RPi4 64bit 10 May 2021 23:56 10 May 2021 23:57 #208511 by mwinterm I set-up a RPi4 with a 64bit real-time kernel 5.4.61-rt37-v8+. I also installed 2.8.1 which is running fine. Then I got the source from github, satisfied all the dependencies and compiled successfully. Finally to test my build I run from the src directory: . ../scripts/rip-environment and get the feedback: This script is only useful on run-in-place systems. Any suggestions what could be wrong? Best regards, PS: ...the very same procedure worked fine for me on my laptop with Debian Buster. Last edit: 10 May 2021 23:57 by Please Log in or Create an account to join the conversation. 11 May 2021 22:36 #208594 by andypugh I had this yesterday. I didn't even bother trying to work out why (it was on a VM and I have no idea what I last did in there). Have you ever build a .deb on the machine? That can change the config away from run-in-place. make clean ./configure --with-realtime-uspace sudo make setuid Should get you back to a run-in-place. Please Log in or Create an account to join the conversation. 13 May 2021 16:11 #208740 by mwinterm Hello Andy, you were exactly right. Had built a .deb package before and make clean solved it. Thanks a lot Please Log in or Create an account to join the conversation. 23 May 2021 18:23 #209935 by cakeslob make clean ./configure --with-realtime-uspace sudo make setuid I use the linuxcnc prebuilt images provided on the downloads page, when ever I build a run-in-place version of linuxcnc, the first time I usually need to install all the dependencies/library manually despite having linuxcnc already installed globally. Why is that? Is there a configuration option Im not using properly? Please Log in or Create an account to join the conversation. 23 May 2021 18:55 23 May 2021 19:03 #209940 by Grotius Hi Cakeslob, That is normal. I understand your question. 
For building lcnc from source you actually need all the libraries it depends on only at compile time. But you also need a few extra's like build-essentials, gcc (the compiler itself), etc. Some of the dependencies will be compiled into the lcnc code. Some will stay out like python. After compiling you have a closed source, if you delete the src folder. The process of compiling lcnc is quite complicated. A trick for enabling "ripping the environment" # solve sudo set setuid problems. $ sudo chmod 777 yourpath/linuxcnc/bin/rtapi_app $ sudo chmod 777 yourpath/linuxcnc/bin/linuxcnc_module_helper Last edit: 23 May 2021 19:03 by Please Log in or Create an account to join the conversation. 23 May 2021 21:21 23 May 2021 21:22 #209952 by cakeslob Hi Cakeslob, That is normal. I understand your question. For building lcnc from source you actually need all the libraries it depends on only at compile time. But you also need a few extra's like build-essentials, gcc (the compiler itself), etc. Some of the dependencies will be compiled into the lcnc code. Some will stay out like python. After compiling you have a closed source, if you delete the src folder. Ok, because I use this as my install guide (thanks, it explains whats happening step by step , which I desperately need ) So if I understand correctly, the premade images, linuxcnc is compiled beforehand? And on any system, once linuxcnc is compiled, you can strip away (some) the things required only to build it (the things compiled into linuxcnc code)? Last edit: 23 May 2021 21:22 by Please Log in or Create an account to join the conversation. 23 May 2021 22:17 #209954 by andypugh I use the linuxcnc prebuilt images provided on the downloads page, when ever I build a run-in-place version of linuxcnc, the first time I usually need to install all the dependencies/library manually despite having linuxcnc already installed globally. Why is that? Is there a configuration option Im not using properly? 
The ISO includes all the run-time dependencies, but does not include all the build-time dependencies. But that as led me to wonder why linuxcnc-dev does not have all the dependencies as dependencies. Please Log in or Create an account to join the conversation. 24 May 2021 04:40 #210001 by BeagleBrainz I really don’t want to be installing all the dependencies to build Linuxcnc from source to for the bits needed to build a custom module. If I remember correctly up to about 6 -12 months there was a script in the source tree that used to help with dependencies. Building Linuxcnc itself is not at all difficult once all the build deps are there. I used to supply a “build dev package” along with the regular Linuxcnc updates when I had the Mint repo. TBH I don’t think anyone actually downloaded or used the package. Please Log in or Create an account to join the conversation. Time to create page: 0.164 seconds
What is AUC in Python?

The area under the receiver operating characteristic curve (AUC, also written AUROC) is a measure of a model's overall performance: a higher AUC means the model is doing better, and vice versa. An ideal classifier has a very high true positive rate and a very low false positive rate.

What is AUC and why is it used?

Mathematically, AUC can denote the area under any curve used to evaluate a model's performance, for example the area under a precision-recall curve. When nothing else is specified, AUC means the area under the Receiver Operating Characteristic (ROC) curve.

How to calculate ROC and AUC in Python?

The ROC curve plots the true positive rate against the false positive rate as the decision threshold varies, and can be plotted using the matplotlib tool in Python; the AUC is the area under that curve. An AUC equal to one indicates an ideal scenario for a machine learning model.

What is a good AUC in Python?

Useful AUC values lie between 0.5 and 1, with lower values representing poor classifiers and higher values representing excellent classifiers. In practice, the AUC score can be computed with the roc_auc_score() function applied to the test set labels y_test and the predicted probabilities y_pred_prob.
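Because AUC equals the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one, it can also be computed without any libraries. A minimal pair-counting sketch (for intuition; not the scikit-learn implementation):

```python
def auc(y_true, y_score):
    """AUC via the pairwise-ranking interpretation:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half-correct."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Same toy data as the scikit-learn docs; 3 of 4 pairs are ordered correctly.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))   # 0.75
```

A perfect ranking gives 1.0 and random scores give about 0.5, matching the range quoted above.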
Mendonca, Phillip, Engle, Jonathan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Physics

In this dissertation we work out in detail a new proposal to define rigorously a sector of loop quantum gravity at the diffeomorphism invariant level corresponding to homogeneous and isotropic cosmologies, and propose how to compare in detail the physics of this sector with that of loop quantum cosmology. The key technical steps we have completed are (a) to formulate conditions for homogeneity and isotropy in a diffeomorphism covariant way on the classical phase space of general relativity, and (b) to translate these conditions consistently using well-understood techniques to loop quantum gravity. To impose the symmetry at the quantum level, on both the connection and its conjugate momentum, the method used necessarily has similarities to the Gupta-Bleuler method of quantizing the electromagnetic field. Lastly, a strategy for embedding states of loop quantum cosmology into this new homogeneous isotropic sector, and using this embedding to compare the physics, is presented.

Date Issued
Subject Headings: Diffeomorphisms, Quantum gravity, Quantum cosmology, Invariants, Isotropy
Document (PDF)

LOOP QUANTUM GRAVITY DYNAMICS: MODELS AND APPLICATIONS.
Vilensky, Ilya, Engle, Jonathan, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Physics

In this dissertation we study the dynamics of loop quantum gravity and its applications. We propose a tunneling phenomenon of a black hole-white hole transition and derive an amplitude for such a transition using the spinfoam framework. We investigate a special class of kinematical states for loop quantum gravity - Bell spin networks - and show that their entanglement entropy obeys the area law. We develop a new spinfoam vertex amplitude that has the correct semi-classical limit. We then apply this new amplitude to calculate the graviton propagator and a cosmological transition amplitude. The results of these calculations show the feasibility of computations with the new amplitude and its viability as a spinfoam model. Finally, we use physical principles to radically constrain ambiguities in the cosmological dynamics and derive unique Hamiltonian dynamics for Friedmann-Robertson-Walker and Bianchi I cosmologies.

Date Issued
Subject Headings: Quantum gravity, Loop quantum gravity, Cosmology, Spinfoam
Document (PDF)

Loop Quantum Gravity with Cosmological Constant.
Huang, Zichang, Han, Muxin, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Physics

The spin-foam is a covariant path-integral style approach to the quantization of gravity. There exist several spin-foam models, of which the most successful is the Engle-Pereira-Rovelli-Levine/Freidel-Krasnov (EPRL-FK) model. Using the EPRL-FK model, people are able to calculate the transition amplitude and the n-point functions of 4D geometry (both Euclidean and Lorentzian) surrounded by a given triangulated 3D geometry. The semi-classical limit of the EPRL-FK amplitude reproduces discrete classical gravity under certain assumptions, which shows that the EPRL-FK model can be understood as a UV completion of general relativity. On the other hand, it is very hard to define a continuum limit and couple a cosmological constant to the EPRL-FK model. In this dissertation, we address the problems of the continuum limit and of coupling a cosmological constant to the EPRL-FK model. Following chapter one, a brief introduction to loop quantum gravity and the EPRL-FK model, chapter two introduces our work demonstrating (for the first time) that smooth curved spacetime geometries satisfying the Einstein equation can emerge from discrete spin-foam models under an appropriate low energy limit, which corresponds to a semi-classical continuum limit of spin-foam models. In chapter three, we bring the cosmological constant into the spin-foam model by coupling the SL(2, C) Chern-Simons action with the EPRL action, and find that the quantum simplicity constraint is realized as the 2d surface defect in SL(2, C) Chern-Simons theory in the construction of spin-foam amplitudes. In chapter four, we present a way to describe the twisted geometry with cosmological constant whose corresponding quantum states can form the Hilbert space of loop quantum gravity with cosmological constant. In chapter five, we introduce a new definition of the graviton propagator and calculate its semi-classical limit in the context of the spin-foam model with the cosmological constant. Finally, chapter six is an outlook on my future work.

Date Issued
Subject Headings: Quantum gravity, Cosmological constants, Spin foam models
Document (PDF)

The bones of the ox: how J.R.R. Tolkien's cosmology reflects ancient Near Eastern creation myths.
Dutton, Amanda M., Dorothy F. Schmidt College of Arts and Letters, Department of English

Scholars have well established the influence of the Old and Middle English, Norse, Welsh, and also Medieval Latin and Christian mythologies that informed the writings of J.R.R. Tolkien. In particular, the mythology contained in The Silmarillion, specifically the cosmology, behaves as sacred texts do in the primary world and mirrors a number of extant mythologies when they are directly compared. Several scholars have noted, but as yet no one has studied in depth, the relationship between the cosmology of The Silmarillion and that of a number of extant ancient Near Eastern mythologies. This thesis seeks to address that gap in the scholarship by specifically exploring Tolkien's mythological creation story in relation to those of the Mesopotamian, Egyptian, and Abrahamic traditions of the Near East. Such a comparative study reveals a number of structural and thematic parallels that attest to the complexity of Tolkien's work and can be used to argue that his mythology is as well-developed and surprisingly authentic as any of these ancient mythological traditions.

Date Issued
Subject Headings: Criticism and interpretation, Myths in literature, Symbolism in literature, Cosmology, Middle Eastern literature
Document (PDF)
Time-Frequency Feature Embedding with Deep Metric Learning

This example shows how to use deep metric learning with a supervised contrastive loss to construct feature embeddings based on a time-frequency analysis of electroencephalographic (EEG) signals. The learned time-frequency embeddings reduce the dimensionality of the time-series data by a factor of 16. You can use these embeddings to classify EEG time-series from persons with and without epilepsy using a support vector machine classifier.

Deep Metric Learning

Deep metric learning attempts to learn a nonlinear feature embedding, or encoder, that reduces the distance (a metric) between examples from the same class and increases the distance between examples from different classes. Loss functions that work in this way are often referred to as contrastive. This example uses supervised deep metric learning with a particular contrastive loss function called the normalized temperature-scaled cross-entropy loss [3],[4],[8]. The figure shows the general workflow for this supervised deep metric learning. Positive pairs refer to training samples with the same label, while negative pairs refer to training samples with different labels. A distance, or similarity, matrix is formed from the positive and negative pairs. In this example, the cosine similarity matrix is used. From these distances, losses are computed and aggregated (reduced) to form a single scalar-valued loss for use in gradient-descent learning. Deep metric learning is also applicable in weakly supervised, self-supervised, and unsupervised contexts. There is a wide variety of distance (metric) measures, losses, reducers, and regularizers employed in deep metric learning.

Data — Description, Attribution, and Download Instructions

The data used in this example is the Bonn EEG Data Set. The data is currently available at EEG Data Download and Ralph Andrzejak's EEG data download page.
See Ralph Andrzejak's EEG data for legal conditions on the use of the data. The authors have kindly permitted the use of the data in this example. The data in this example were first analyzed and reported in:

Andrzejak, Ralph G., Klaus Lehnertz, Florian Mormann, Christoph Rieke, Peter David, and Christian E. Elger. "Indications of Nonlinear Deterministic and Finite-Dimensional Structures in Time Series of Brain Electrical Activity: Dependence on Recording Region and Brain State." Physical Review E 64, no. 6 (2001). https://doi.org/10.1103/physreve.64.061907

The data consists of five sets of 100 single-channel EEG recordings. The resulting single-channel EEG recordings were selected from 128-channel EEG recordings after visually inspecting each channel for obvious artifacts and satisfying a weak stationarity criterion. See the linked paper for details. The original paper designates these five sets as A-E. Each recording is 23.6 seconds in duration sampled at 173.61 Hz. Each time series contains 4097 samples. The conditions are as follows:

• A -- Normal subjects with eyes open
• B -- Normal subjects with eyes closed
• C -- Seizure-free recordings from patients with epilepsy. Recording from hippocampus in the hemisphere opposite the epileptogenic zone
• D -- Seizure-free recordings obtained from patients with epilepsy. Recordings from the epileptogenic zone.
• E -- Recordings from patients with epilepsy showing seizure activity.

The zip files corresponding to this data are labeled as z.zip (A), o.zip (B), n.zip (C), f.zip (D), and s.zip (E). The example assumes you have downloaded and unzipped the zip files into folders named Z, O, N, F, and S respectively. In MATLAB® you can do this by creating a parent folder and using that as the OUTPUTDIR variable in the unzip command. This example uses the folder designated by MATLAB as tempdir as the parent folder. If you choose to use a different folder, adjust the value of parentDir accordingly.
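The per-folder layout and first-letter labeling used below can be mirrored outside MATLAB. A plain-Python sketch (not part of the example; file names like Z001.txt are an assumption about the dataset's naming convention):

```python
import os, tempfile

# Build a miniature stand-in for the Bonn layout: one folder per set,
# each holding single-channel recordings as .txt files.
root = tempfile.mkdtemp()
for folder, names in {"Z": ["Z001.txt", "Z002.txt"], "S": ["S001.txt"]}.items():
    os.makedirs(os.path.join(root, folder))
    for name in names:
        with open(os.path.join(root, folder, name), "w") as f:
            f.write("12\n-7\n3\n")          # stand-in samples

# Collect files and derive each label from the first letter of the
# file name -- the same rule as filenames2labels(...,'ExtractBetween',[1 1]).
files = sorted(
    os.path.join(dirpath, n)
    for dirpath, _, filenames in os.walk(root)
    for n in filenames if n.endswith(".txt")
)
labels = [os.path.basename(f)[0] for f in files]
print(labels)   # ['S', 'Z', 'Z']
```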
The following code assumes that all the .zip files have been downloaded into parentDir. Unzip the files by folder into a subfolder called BonnEEG.

parentDir = tempdir;
dataDir = fullfile(parentDir,'BonnEEG');

Creating In-Memory Data and Labels

The individual EEG time series are stored as .txt files in each of the Z, N, O, F, and S folders under dataDir. Use a tabularTextDatastore to read the data. Create a tabular text datastore and create a categorical array of signal labels based on the folder names.

tds = tabularTextDatastore(dataDir,'IncludeSubfolders',true,'FileExtensions','.txt');

The zip files were created on a macOS and accordingly there may be a MACOSX folder created with unzip that results in extra files. If those exist, remove them.

extraTXT = contains(tds.Files,'__MACOSX');
tds.Files(extraTXT) = [];

Create labels for the data based on the first letter of the text file name.

labels = filenames2labels(tds.Files,'ExtractBetween',[1 1]);

Each read of the tabular text datastore creates a table containing the data. Create a cell array of all signals reshaped as row vectors so they conform with the deep learning networks used in the example.

ii = 1;
eegData = cell(numel(labels),1);
while hasdata(tds)
    tsTable = read(tds);
    ts = tsTable.Var1;
    eegData{ii} = reshape(ts,1,[]);
    ii = ii+1;
end

Time-Frequency Feature Embedding Deep Network

Here we construct a deep learning network that creates an embedding of the input signal based on a time-frequency analysis.

TFnet = [sequenceInputLayer(1,'MinLength',4097,'Name',"input")
    'FrequencyLimits',[0 0.23])
TFnet = dlnetwork(TFnet);

After the input layer, the network obtains the continuous wavelet transform (CWT) of the data using the analytic Morlet wavelet. The output of cwtLayer is the magnitude of the CWT, or scalogram. Unlike the analyses in [1],[2], and [7], no pre-processing bandpass filter is used in this network.
Instead, the CWT is obtained only over the frequency range of [0.0, 0.23] cycles/sample, which is equivalent to [0, 39.93] Hz for the sample rate of 173.61 Hz. This is the approximate range of the bandpass filter applied to the data before analysis in [1]. After the network obtains the scalogram, the network cascades a series of 2-D convolutional, batch normalization, and RELU layers. The final layer is a fully connected layer with 256 output units. This results in a 16-fold reduction in the size of the input. See [7] for another scalogram-based analysis of this data and [2] for another wavelet-based analysis using the tunable Q-factor wavelet transform.

Differentiating Normal, Pre-Seizure, and Seizure EEG

Given the five conditions present in the data, there are multiple meaningful and clinically informative ways to partition the data. One relevant way is to group the Z and O labels (non-epileptic subjects with eyes open and closed) as "Normal". Similarly, the two conditions recorded in the persons with epilepsy without overt seizure activity (N and F) may be grouped as "Pre-seizure". Finally, we designate the recordings obtained in epileptic subjects with seizure activity as "Seizure". To create labels, which may be cast to numeric values during training, designate these three classes as:

• 0 -- "Normal"
• 1 -- "Pre-seizure"
• 2 -- "Seizure"

Partition the data into training and test sets. First, create the new labels in order to partition the data. Examine the number of examples in each class.

labelsPS = labels;
labelsPS = removecats(labelsPS,{'F','N','O','S','Z'});
labelsPS(labels == categorical("Z") | labels == categorical("O")) = categorical("0");
labelsPS(labels == categorical("N") | labels == categorical("F")) = categorical("1");
labelsPS(labels == categorical("S")) = categorical("2");
labelsPS(isundefined(labelsPS)) = [];

The resulting classes are unbalanced, with twice as many signals in the "Normal" and "Pre-seizure" categories as in the "Seizure" category.
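The 0/1/2 relabeling and the class-stratified 80/20 split can be sketched in plain Python as well (an illustration only, assuming the Bonn class sizes of 100 recordings per original set):

```python
import random

# Bonn sets: 100 recordings each; group into the three classes above.
group = {"Z": "0", "O": "0", "N": "1", "F": "1", "S": "2"}
labels = [group[s] for s in ("Z", "O", "N", "F", "S") for _ in range(100)]

# Stratified 80/20 split: shuffle the indices of each class separately,
# then take the first 80% for training (mirrors splitlabels(...,[0.8 0.2])).
rng = random.Random(0)
train_idx, test_idx = [], []
for cls in ("0", "1", "2"):
    idx = [i for i, lab in enumerate(labels) if lab == cls]
    rng.shuffle(idx)
    cut = int(0.8 * len(idx))
    train_idx += idx[:cut]
    test_idx += idx[cut:]

counts = {cls: sum(labels[i] == cls for i in train_idx) for cls in ("0", "1", "2")}
print(counts)   # {'0': 160, '1': 160, '2': 80}
```

The per-class proportions are preserved on both sides of the split, which matters here because the "Seizure" class is half the size of the other two.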
Partition the data for training the encoder and the hold-out test set. Allocate 80% of the data to the training set and 20% to the test set.

idxPS = splitlabels(labelsPS,[0.8 0.2]);
TrainDataPS = eegData(idxPS{1});
TrainLabelsPS = labelsPS(idxPS{1});
testDataPS = eegData(idxPS{2});
testLabelsPS = labelsPS(idxPS{2});

Training the Encoder

To train the encoder, set trainEmbedder to true. To skip the training and load a pretrained encoder and corresponding embeddings, set trainEmbedder to false and go to the Test Data Embeddings section.

Because this example uses a custom loss function, you must use a custom training loop. To manage data through the custom training loop, use a signalDatastore (Signal Processing Toolbox) with a custom read function that normalizes the input signals to have zero mean and unit standard deviation.

if trainEmbedder
    sdsTrain = signalDatastore(TrainDataPS,MemberNames = string(TrainLabelsPS));
    transTrainDS = transform(sdsTrain,@(x,info)helperReadData(x,info),'IncludeInfo',true);
end

Train the network by measuring the normalized temperature-controlled cross-entropy loss between embeddings obtained from identical classes (corresponding to positive pairs) and disparate classes (corresponding to negative pairs) in each mini-batch. The custom loss function computes the cosine similarity between each pair of training examples, obtaining an M-by-M similarity matrix, where M is the mini-batch size. The function computes the normalized temperature-controlled cross entropy for the similarity matrix with the temperature parameter equal to 0.07. The function calculates the scalar loss as the mean of the mini-batch losses.

Specify Training Options

The model parameters are updated based on the loss using an Adam optimizer. Train the encoder for 150 epochs with a mini-batch size of 50, a learning rate of 0.001, and an L2-regularization rate of 0.01.

if trainEmbedder
    NumEpochs = 150;
    minibatchSize = 50;
    learnRate = 0.001;
    l2Regularization = 1e-2;
end

Calculate the number of iterations per epoch and the total number of iterations to display training progress.

if trainEmbedder
    numObservations = numel(TrainDataPS);
    numIterationsPerEpoch = floor(numObservations./minibatchSize);
    numIterations = NumEpochs*numIterationsPerEpoch;
end

Create a minibatchqueue (Deep Learning Toolbox) object to manage data flow through the custom training loop.

if trainEmbedder
    numOutputs = 2;
    mbqTrain = minibatchqueue(transTrainDS,numOutputs,...
        'minibatchFormat', {'CBT','B'});
end

Train the encoder.

if trainEmbedder
    progress = "final-loss";
    if progress == "training-progress"
        lineLossTrain = animatedline;
        ylim([0 inf])
        grid on
    end
    % Initialize some training loop variables
    trailingAvg = [];
    trailingAvgSq = [];
    iteration = 1;
    lossByIteration = zeros(numIterations,1);
    % Loop over epochs and time the epochs
    start = tic;
    for epoch = 1:NumEpochs
        % Shuffle the mini-batches each epoch
        % Loop over mini-batches
        while hasdata(mbqTrain)
            % Get the next mini-batch and one-hot coded targets
            [dlX,Y] = next(mbqTrain);
            % Evaluate the model gradients and contrastive loss
            [gradients, loss, state] = dlfeval(@modelGradcontrastiveLoss,TFnet,dlX,Y);
            if progress == "final-loss"
                lossByIteration(iteration) = loss;
            end
            % Update the gradients with the L2-regularization rate
            idx = TFnet.Learnables.Parameter == "Weights";
            gradients(idx,:) = ...
                dlupdate(@(g,w) g + l2Regularization*w, gradients(idx,:), TFnet.Learnables(idx,:));
            % Update the network state
            TFnet.State = state;
            % Update the network parameters using an Adam optimizer
            [TFnet,trailingAvg,trailingAvgSq] = adamupdate(...
            % Display the training progress
            D = duration(0,0,toc(start),'Format','hh:mm:ss');
            if progress == "training-progress"
                title("Epoch: " + epoch + ", Elapsed: " + string(D))
            end
            iteration = iteration + 1;
        end
        disp("Training loss after epoch " + epoch + ": " + loss);
    end
    if progress == "final-loss"
        grid on
        title('Training Loss by Iteration')
    end
end

Training loss after epoch 1: 1.4759
Training loss after epoch 2: 1.5684
Training loss after epoch 3: 1.0331
Training loss after epoch 4: 1.1621
Training loss after epoch 5: 0.70297
...
Training loss after epoch 148: 0.0095201
Training loss after epoch 149: 0.026009
Training loss after epoch 150: 0.071619

Test Data Embeddings

Obtain the embeddings for the test data. If you set trainEmbedder to false, you can load the trained encoder and embeddings obtained using the helperEmbedTestFeatures function.

if trainEmbedder
    finalEmbeddingsTable = helperEmbedTestFeatures(TFnet,testDataPS,testLabelsPS);
else
    load('TFnet.mat'); %#ok<*UNRCH>
end

Use a support vector machine (SVM) classifier with a Gaussian kernel to classify the embeddings.

template = templateSVM(...
    'KernelFunction', 'gaussian', ...
    'PolynomialOrder', [], ...
    'KernelScale', 4, ...
    'BoxConstraint', 1, ...
    'Standardize', true);
classificationSVM = fitcecoc(...
    finalEmbeddingsTable, ...
    "EEGClass", ...
    'Learners', template, ...
    'Coding', 'onevsone');

Show the final test performance of the trained encoder. The recall and precision performance for all three classes is excellent. The learned feature embeddings provide nearly 100% recall and precision for the normal (0), pre-seizure (1), and seizure (2) classes. Each embedding represents a reduction in the input size from 4097 samples to 256 samples.

predLabelsFinal = predict(classificationSVM,finalEmbeddingsTable);
testAccuracyFinal = sum(predLabelsFinal == testLabelsPS)/numel(testLabelsPS)*100

hf = figure;
set(gca,'Title','Confusion Chart -- Trained Embeddings')

For completeness, test the cross-validation accuracy of the feature embeddings. Use five-fold cross validation.

partitionedModel = crossval(classificationSVM, 'KFold', 5);
[validationPredictions, validationScores] = kfoldPredict(partitionedModel);
validationAccuracy = ...
    (1 - kfoldLoss(partitionedModel, 'LossFun', 'ClassifError'))*100

validationAccuracy = single

The cross-validation accuracy is also excellent, at near 100%. Note that we have used all 256 embedding features in the SVM model, but the embeddings returned by the encoder are always amenable to further reduction by using feature selection techniques such as neighborhood component analysis, minimum redundancy maximum relevance (MRMR), or principal component analysis. See Introduction to Feature Selection (Statistics and Machine Learning Toolbox) for more details.

Summary

In this example, a time-frequency convolutional network was used as the basis for learning feature embeddings using a deep metric model. Specifically, the normalized temperature-controlled cross-entropy loss with cosine similarities was used to obtain the embeddings. The embeddings were then used with an SVM with a Gaussian kernel to achieve near perfect test performance. There are a number of ways this deep metric network can be optimized which are not explored in this example. For example, the size of the embeddings can likely be reduced further without affecting performance while achieving further dimensionality reduction. Additionally, there are a large number of similarity (metric) measures, loss functions, regularizers, and reducers which are not explored in this example. Finally, the resulting embeddings are compatible with any machine learning algorithm. An SVM was used in this example, but you can explore the feature embeddings in the Classification Learner app and may find that another classification algorithm is more robust for your application.

References

[1] Andrzejak, Ralph G., Klaus Lehnertz, Florian Mormann, Christoph Rieke, Peter David, and Christian E. Elger. "Indications of Nonlinear Deterministic and Finite-Dimensional Structures in Time Series of Brain Electrical Activity: Dependence on Recording Region and Brain State." Physical Review E 64, no. 6 (2001). https://doi.org/10.1103/physreve.64.061907.
[2] Bhattacharyya, Abhijit, Ram Pachori, Abhay Upadhyay, and U. Acharya. "Tunable-Q Wavelet Transform Based Multiscale Entropy Measure for Automated Classification of Epileptic EEG Signals." Applied Sciences 7, no. 4 (2017): 385. https://doi.org/10.3390/app7040385.

[3] Chen, Ting, Simon Kornblith, Mohammed Norouzi, and Geoffrey Hinton. "A Simple Framework for Contrastive Learning of Visual Representations." (2020). https://arxiv.org/abs/2002.05709

[4] He, Kaiming, Fan, Haoqi, Wu, Yuxin, Xie, Saining, Girschick, Ross. "Momentum Contrast for Unsupervised Visual Representation Learning." (2020). https://arxiv.org/abs/1911.05722

[6] Musgrave, Kevin. "PyTorch Metric Learning" https://kevinmusgrave.github.io/pytorch-metric-learning/

[7] Türk, Ömer, and Mehmet Siraç Özerdem. "Epilepsy Detection by Using Scalogram Based Convolutional Neural Network from EEG Signals." Brain Sciences 9, no. 5 (2019): 115. https://doi.org/10.3390/

[8] Van den Oord, Aaron, Li, Yazhe, and Vinyals, Oriol. "Representation Learning with Contrastive Predictive Coding." (2019). https://arxiv.org/abs/1807.03748

function [grads,loss,state] = modelGradcontrastiveLoss(net,X,T)
% This function is only for use in the "Time-Frequency Feature Embedding
% with Deep Metric Learning" example. It may change or be removed in a
% future release.
% Copyright 2022, The Mathworks, Inc.
[y,state] = net.forward(X);
loss = contrastiveLoss(y,T);
grads = dlgradient(loss,net.Learnables);
loss = double(gather(extractdata(loss)));

function [out,info] = helperReadData(x,info)
% This function is only for use in the "Time-Frequency Feature Embedding
% with Deep Metric Learning" example. It may change or be removed in a
% future release.
% Copyright 2022, The Mathworks, Inc.
mu = mean(x,2);
stdev = std(x,1,2);
z = (x-mu)./stdev;
out = {z,info.MemberName};

function [dlX,dlY] = processMB(Xcell,Ycell)
% This function is only for use in the "Time-Frequency Feature Embedding
% with Deep Metric Learning" example.
% It may change or be removed in a future release.
% Copyright 2022, The Mathworks, Inc.
Xcell = cellfun(@(x)reshape(x,1,1,[]),Xcell,'uni',false);
Ycell = cellfun(@(x)str2double(x),Ycell,'uni',false);
dlX = cat(2,Xcell{:});
dlY = cat(1,Ycell{:});

function testFeatureTable = helperEmbedTestFeatures(net,testdata,testlabels)
% This function is only for use in the "Time-Frequency Feature Embedding
% with Deep Metric Learning" example. It may change or be removed in a
% future release.
% Copyright 2022, The Mathworks, Inc.
testFeatures = zeros(length(testlabels),256,'single');
for ii = 1:length(testdata)
    yhat = predict(net,dlarray(reshape(testdata{ii},1,1,[]),'CBT'));
    yhat = extractdata(gather(yhat));
    testFeatures(ii,:) = yhat;
end
testFeatureTable = array2table(testFeatures);
testFeatureTable = addvars(testFeatureTable,testlabels,...

function loss = contrastiveLoss(features,targets)
% This function is only for use in the "Time-Frequency Feature
% Embedding with Deep Metric Learning" example. It may change or be removed
% in a future release.
% Replicates code in PyTorch Metric Learning
% https://github.com/KevinMusgrave/pytorch-metric-learning.
% Python algorithms due to Kevin Musgrave
% Copyright 2022, The Mathworks, Inc.
loss = infoNCE(features,targets);

function loss = infoNCE(embed,labels)
ref_embed = embed;
[posR,posC,negR,negC] = convertToPairs(labels);
dist = cosineSimilarity(embed,ref_embed);
loss = pairBasedLoss(dist,posR,posC,negR,negC);

function [posR,posC,negR,negC] = convertToPairs(labels)
Nr = length(labels);
% The following provides a logical matrix which indicates where
% the corresponding element (i,j) of the covariance matrix of
% features comes from the same class or not. At each (i,j)
% coming from the same class we have a 1, at each (i,j) from a
% different class we have 0. Of course the diagonal is 1s.
labels = stripdims(labels);
matches = (labels == labels');
% Logically negate the matches matrix to obtain differences.
differences = ~matches;
% We negate the diagonal of the matches matrix to avoid biasing
% the learning. Later when we identify the positive and
% negative indices, these diagonal elements will not be picked
% up.
matches(1:Nr+1:end) = false;
[posR,posC,negR,negC] = getAllPairIndices(matches,differences);

function dist = cosineSimilarity(emb,ref_embed)
emb = stripdims(emb);
ref_embed = stripdims(ref_embed);
normEMB = emb./sqrt(sum(emb.*emb,1));
normREF = ref_embed./sqrt(sum(ref_embed.*ref_embed,1));
dist = normEMB'*normREF;

function loss = pairBasedLoss(dist,posR,posC,negR,negC)
if any([isempty(posR),isempty(posC),isempty(negR),isempty(negC)])
    loss = dlarray(zeros(1,1,'like',dist));
    return
end
Temperature = 0.07;
dtype = underlyingType(dist);
idxPos = sub2ind(size(dist),posR,posC);
pos_pair = dist(idxPos);
pos_pair = reshape(pos_pair,[],1);
idxNeg = sub2ind(size(dist),negR,negC);
neg_pair = dist(idxNeg);
neg_pair = reshape(neg_pair,[],1);
pos_pair = pos_pair./Temperature;
neg_pair = neg_pair./Temperature;
n_per_p = negR' == posR;
neg_pairs = neg_pair'.*n_per_p;
neg_pairs(n_per_p==0) = -realmax(dtype);
maxNeg = max(neg_pairs,[],2);
maxPos = max(pos_pair,[],2);
maxVal = max(maxPos,maxNeg);
numerator = exp(pos_pair-maxVal);
denominator = sum(exp(neg_pairs-maxVal),2)+numerator;
logexp = log((numerator./denominator)+realmin(dtype));
loss = mean(-logexp,'all');

function [posR,posC,negR,negC] = getAllPairIndices(matches,differences)
% Here we just get the row and column indices of the anchor
% positive and anchor negative elements.
[posR, posC] = find(extractdata(matches));
[negR,negC] = find(extractdata(differences));
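For readers outside MATLAB, the same normalized temperature-scaled (InfoNCE-style) loss with cosine similarities can be sketched in plain Python. This is a toy illustration of the idea, not a port of the helpers above; the names (cosine_matrix, nt_xent) are invented for this sketch.

```python
import math

def cosine_matrix(embs):
    # Pairwise cosine similarities between row vectors.
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    return [[sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))
             for v in embs] for u in embs]

def nt_xent(embs, labels, temperature=0.07):
    # Mean over anchor-positive pairs of
    # -log( exp(s_pos/T) / (exp(s_pos/T) + sum_neg exp(s_neg/T)) ),
    # where the negatives for an anchor are all samples from a
    # different class. Assumes every class has at least two samples.
    sim = cosine_matrix(embs)
    losses = []
    for i, li in enumerate(labels):
        for j, lj in enumerate(labels):
            if i == j or li != lj:
                continue  # (i, j) must be a distinct same-class pair
            pos = math.exp(sim[i][j] / temperature)
            neg = sum(math.exp(sim[i][k] / temperature)
                      for k, lk in enumerate(labels) if lk != li)
            losses.append(-math.log(pos / (pos + neg)))
    return sum(losses) / len(losses)
```

With well-separated classes the loss is near zero; when positives point in unrelated directions it grows, which is exactly the gradient signal the encoder is trained against.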
Slow indexing of constant-valued variables in MTK

I'm currently working with large hierarchical models in ModelingToolkit.jl and found that indexing the solution with constant-valued variables becomes incredibly slow for larger systems. This results in plotting of the solution taking multiple minutes whereas the actual solving only takes seconds. Replacing this variable with a parameter or constant is the obvious solution (and does reduce the indexing time to normal levels), but doesn't seem like an option for my case as the constant-valued variable in question is actually only constant for some subsystems and variable in others. Please consider the below MWE, where B serves as the constant-valued variable:

using ModelingToolkit, OrdinaryDiffEq
import ModelingToolkit: t_nounits as t, D

n = 1000
@variables A(t)[1:n] = zeros(n) B(t)[1:n]
@parameters B_val[1:n] = rand(n)

eqs = [
    # some complex equation for A
    [D(A[i]) ~ sin(t + B[i]) for i in eachindex(A)]...,
    # constant value for B - in reality some B also have complex eqs
    [B[i] ~ B_val[i] for i in eachindex(B)]...,
]

@named sys = ODESystem(eqs, t)
sys_simpl = structural_simplify(sys);
prob = ODEProblem(sys_simpl, [], (0.0, 100))
sol = solve(prob);

@time sol[A[rand(1:n)]]; # ~0.0002s
@time sol[B[rand(1:n)]]; # ~0.8s

I feel like I must be doing something incredibly dumb but can't figure it out. Any help would be greatly appreciated. I'm using Julia 1.10.4 with following package versions:

[961ee093] ModelingToolkit v9.41.0
[1dea7af3] OrdinaryDiffEq v6.89.0

Welcome @bspanoghe! I've not dug into this, but there are two things here:
• the simplified system doesn't directly solve for B
• solving for each B index involves compilation time the first time you do it and then the second time is fast.
I'm not sure how I'd apply the generic perf tips here, @nsajko if you have ideas please elaborate.
1 Like

Thank you for your response!
I’ve considered forcing the system to solve directly for B by instantiating it with the desired values @variables B(t)[1:n] = rand(n) and then setting the gradient to 0. [D(B[i]) ~ 0 for i in eachindex(B)]... This does solve the indexing time problem (at the cost of slightly higher solving time), but I was hoping for a cleaner solution. Based on this thread it seems there is some way to mark variables you want to track, but the closest I’ve found is the irreducible metadata tag mentioned here: @variables A(t)[1:n] = zeros(n) B(t)[1:n] [irreducible = true] Sadly it slows down the solver immensely, making it an even worse solution for my problem. Maybe it would be faster to use SymbolicIndexingInterface: using SymbolicIndexingInterface evalsol = getu(sol, B) Bvals = evalsol(sol) # should be a vector of vector of values of B at each time You can reuse evalsol each time you want to pull out the B values (as long as the underlying problem is the same I believe), so it will only be slow the first time it is called and compiled. 2 Likes I did a quick test and it seems that this method is equally quick to simply using sol[B] - I assume MTK might use SymbolicIndexingInterface under the hood for indexing? However, I also learned some unexpected behavior for indexing multiple variables: Indexing a subset of the variables simultaneously sol[[B[i] for i in rand(1:n, 10)]] is about equally quick as indexing just a single variable while indexing all variables sol[B] is twice as quick as indexing a single one - sol[B] ends up being ~1000x faster than indexing all B separately! Indexing for all variables will be more complex for my actual use case but should get plotting times to reasonable levels, so thanks a lot! 1 Like Glad that helped. It is using SymbolicIndexingInterface under the hood, but I was worried it was recompiling each time you called sol via a symbolic index (even if previously used). It is good to hear that it is apparently caching the function. 
Every time you index a single (new) variable I imagine it is compiling a function to extract that variable, whereas when grabbing the whole vector it is only compiling one function. So that would give a noticeable performance difference. Indeed, it’s very obvious in hindsight why it took so long to index the solution a few hundred times but simply hadn’t thought of it. Just changed the plotting function to do all the indexing in one step and it dropped plotting time from ~5 minutes to a few seconds. Thanks both for the help 1 Like
Jacob 4 - math word problem (81360)

Jacob 4
Jacob is dividing 5 aquariums into 1/8 of aquarium sections for his different animals. How many 1/8s are there in Jacob's 5 aquariums?

Correct answer:
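Dividing by a fraction is the same as multiplying by its reciprocal, so the count of one-eighth sections works out as:

$$5 \div \frac{1}{8} = 5 \times 8 = 40$$

That is, there are 40 one-eighths in Jacob's 5 aquariums.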
Real Numbers Class 10 Extra Questions Additional Practice Questions on Real Numbers for Class 10 Math Chapter 1 1. What is the smallest Fibonacci number greater than the number 100? How many prime numbers are there between 1 and 100? How can you determine if a number is rational or irrational? These are examples of the types of questions that may be included in additional practice questions on real numbers for a class 10 math chapter. These questions can help students further understand and apply their knowledge of real numbers, including integers, rational numbers, and irrational numbers. By practicing these types of questions, students can improve their problem-solving skills and gain a deeper understanding of how real numbers work. 2. Other practice questions on real numbers may involve simplifying expressions involving real numbers, finding the sum or product of two real numbers, or identifying properties of real numbers such as the commutative or associative property. Students may also be asked to determine the square root or cube root of a given real number or to convert a repeating decimal into a fraction. These types of questions can help students develop their foundational skills in working with real numbers and prepare them for more advanced math concepts. 3. In addition to numerical practice questions, students may also be asked to solve real-world problems that involve real numbers, such as calculating the total cost of a purchase or determining the dimensions of a geometric shape. By applying their knowledge of real numbers to real-world scenarios, students can see the practical applications of math in everyday life. Overall, practicing a variety of questions on real numbers can help students build their confidence and proficiency in working with this fundamental concept in mathematics. CBSE Class 10 Maths Chapter 1: Extra Questions on Real Numbers 1. 
Real numbers form the basic building blocks of mathematics and are essential in studying various mathematical concepts. In CBSE Class 10 Maths Chapter 1, students are introduced to the concept of real numbers and their properties. These extra questions on real numbers are designed to help students practice and reinforce their understanding of this fundamental topic. By solving these questions, students can improve their problem-solving skills and gain a deeper insight into the properties of real numbers. 2. The extra questions on real numbers in CBSE Class 10 Maths Chapter 1 cover a wide range of topics, including the properties of rational and irrational numbers, the closure properties of real numbers under addition and multiplication, and the relationship between different types of numbers. By working through these questions, students can sharpen their analytical thinking and logical reasoning abilities. These questions also provide a good opportunity for students to consolidate their knowledge and prepare for exams. 3. Overall, the extra questions on real numbers in CBSE Class 10 Maths Chapter 1 serve as a valuable resource for students to test their understanding of the concepts covered in the chapter. By practicing these questions regularly, students can enhance their problem-solving skills and build a strong foundation in mathematics. Additionally, these questions can help students develop a deeper appreciation for the importance of real numbers in mathematics and beyond. Class 10 CBSE Maths Chapter 1: Supplementary Questions on Real Numbers Real numbers form the foundation of mathematics, encompassing all rational and irrational numbers. In Class 10 CBSE Maths Chapter 1, students delve into the realm of real numbers, exploring their properties and relationships. Supplementary questions on real numbers challenge students to apply their understanding of concepts such as factors, multiples, divisibility, prime numbers, and the Fundamental Theorem of Arithmetic. 
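As a concrete illustration of one of the concepts listed above — Euclid's division algorithm, which underlies Euclid's Division Lemma — repeated division with remainder yields the HCF of two integers. This is a generic sketch, not taken from the question sets themselves:

```python
def hcf(a, b):
    # Euclid's division algorithm: repeatedly replace (a, b) with
    # (b, a mod b); the last nonzero remainder is the HCF.
    while b:
        a, b = b, a % b
    return a
```

For example, hcf(4052, 12576) works through 12576 = 3 × 4052 + 420 and so on, down to a remainder of 0, giving an HCF of 4.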
By engaging with supplementary questions on real numbers, students can deepen their problem-solving skills and critical thinking abilities. These questions often require students to think creatively, apply various mathematical operations, and make connections between different concepts within the realm of real numbers. Through practice and persistence, students can enhance their mathematical reasoning and analytical skills, preparing them for more complex topics in higher grades. Overall, the supplementary questions in Chapter 1 of Class 10 CBSE Maths serve as a valuable tool for students to consolidate their understanding of real numbers and strengthen their mathematical proficiency. By tackling these questions with dedication and perseverance, students can not only improve their academic performance but also develop a deeper appreciation for the elegance and logic of mathematics. Test Your Understanding with Chapter 1 Real Numbers Class 10 Extra Questions 1. Understanding the concept of real numbers is crucial in mathematics as it forms the foundation for various mathematical operations and concepts. Real numbers include rational numbers, irrational numbers, and whole numbers, as well as integers and natural numbers. By grasping the properties and characteristics of real numbers, students can enhance their problem-solving skills and mathematical aptitude. These extra questions for Chapter 1 Real Numbers in Class 10 provide an opportunity for students to test their comprehension and application of real numbers in different 2. By attempting these extra questions, students can assess their understanding of the key principles related to real numbers, such as closure, commutativity, associativity, and distributivity. They can also practice solving problems involving real numbers through calculations, equations, and inequalities. 
This exercise can help students improve their confidence and proficiency in working with real numbers, which are essential in various branches of mathematics, including algebra, geometry, calculus, and number theory. 3. Furthermore, by engaging with these extra questions, students can develop critical thinking skills, logical reasoning, and analytical abilities. They can also gain insights into the practical applications of real numbers in everyday life scenarios, such as measurements, calculations, and financial transactions. Overall, these extra questions for Chapter 1 Real Numbers in Class 10 provide a valuable opportunity for students to deepen their understanding and mastery of real numbers, paving the way for a solid foundation in mathematics. Enhance Your Knowledge with Chapter 1 Class 10 Maths Extra Questions on Real Numbers Chapter 1 of Class 10 Maths introduces students to the concept of real numbers. This fundamental concept is essential for understanding various mathematical operations and applications. Real numbers include integers, fractions, decimals, and irrational numbers. By mastering this chapter, students can build a strong foundation for more advanced topics in mathematics. Extra questions on real numbers provide students with the opportunity to practice and enhance their understanding of the topic. These questions can challenge students to think critically and apply their knowledge of real numbers in different contexts. By solving these extra questions, students can improve their problem-solving skills and develop a deeper comprehension of the concepts covered in the chapter. Moreover, practicing extra questions on real numbers can help students prepare for exams and assessments. By working through a variety of problems, students can improve their confidence and proficiency in handling real number-related questions. 
Additionally, these extra questions can aid in reinforcing key concepts and identifying areas where students may need additional support or An analytical paragraph is a structured form of writing that explores a specific topic in depth. In Class 10, students learn to analyze texts, events, or ideas critically. The key components of an analytical paragraph include a clear topic sentence, evidence or examples to support the main point, and an explanation that connects the evidence back to the topic. This method encourages students to think critically and articulate their thoughts effectively. Mastering analytical paragraphs enhances writing skills and prepares students for higher-level academic tasks. Thus, understanding how to craft an analytical paragraph is essential for success in Class 10 and beyond. One comment 1. […] Real Numbers Class 10 Extra Questions are an essential resource for students preparing for their board exams. These extra questions help in understanding the concepts of real numbers, including Euclid’s Division Lemma, the Fundamental Theorem of Arithmetic, and the properties of irrational numbers. Solving these questions not only strengthens problem-solving skills but also builds confidence in tackling various types of exam questions. Students can use these Real Numbers Class 10 Extra Questions to practice more and ensure a better grasp of the topic, helping them perform well in their math exams. […] You must be logged in to post a comment.
where can i hold a wombat?? I know there's a colony of them up near the Murray River somewhere.... but I think they may be wild ones! Apparently there is a "Womobat Muster" organised by Zoos SA - you could try contacting them: I'm not sure if it is an annual thing or was a one-off, but I found this leaflet which has some contact numbers on it http://www.zoossa.com.au/__files/f/201130/womba I held one at a nature park thing, just south of Townsville up.in qld - not sure about down here in SA Possibly called billabong wildlife park, or something - will look it out if that's any good? It was amazing, but they are rude and stick they dingles out when you cuddle them (if its a boy that is lol) I know there's a colony of them up near the Murray River somewhere.... but I think they may be wild ones! Apparently there is a "Womobat Muster" organised by Zoos SA - you could try contacting them: I'm not sure if it is an annual thing or was a one-off, but I found this leaflet which has some contact numbers on it that looks like a fantastic experience but I don't think I'd have the nerve to stay there will all the bugs and spiders lol and the toilet thing.....no just couldn't do it, I'm not sure I'll be one for the outback I held one at a nature park thing, just south of Townsville up.in qld - not sure about down here in SA Possibly called billabong wildlife park, or something - will look it out if that's any good? It was amazing, but they are rude and stick they dingles out when you cuddle them (if its a boy that is lol) I've seen that one but a bit too far for a cuddle lol I'll get there some day though and will make sure I ask for a female hehe I don't think you'd like cuddling a fully grown one unless it had been hand reared. They dig burrows so have long nails/claws. I've been here a long time and haven't found any where that you can get that close to one. 
I've never cuddled one, but have stroked 2 at a wildlife park in Batemans bay, and also they take them for walks on leads at Australia zoo near the sunshine coast, so you can get real close. Healsville Sanctuary near Melbourne do wombat cuddles. Last time I was there they were offering cuddles with two baby ones - they were so cute and were clambering all over the poor girl sitting in their enclosure! I just had a picture through <social media> from Cleland Wildlife Park - looks like the staff can do it............ "Icey" ,the Golden Wombat. I have been lucky enough to cuddle a baby wombat, someone from Fauna Rescue brought one to a scout event once. May be worth contacting Fauna Rescue to see if they can help.
Similarity-law entrainment method for thick axisymmetric turbulent boundary layers in pressure gradients

Analytical relations have been derived for calculating the developing thick, axisymmetric, turbulent boundary layer in a pressure gradient from two simultaneous differential equations: momentum and shape parameter. An entrainment method is used to obtain the shape parameter equation. Both equations incorporate the velocity similarity laws that provide a two-parameter velocity profile general enough to include any range of Reynolds numbers. Newly defined quadratic shape parameters which arise from the geometry of the thick axisymmetric boundary layer are analytically related to the two-dimensional shape parameter by means of these velocity similarity laws. The variation of momentum loss, boundary-layer thickness, local skin friction, and local velocity profile may be calculated for the axisymmetric turbulent boundary layers on underwater bodies, including the thick boundary layers on the tails. The various formulations are shown to correlate well with available experimental data.

NASA STI/Recon Technical Report N
Pub Date: December 1975

Keywords: Pressure Gradients; Similarity Theorem; Turbulent Boundary Layer; Differential Equations; Equations Of Motion; Momentum; Skin Friction; Fluid Mechanics and Heat Transfer
Compiling regular expressions (II)

This article is a continuation of the earlier post, "Compiling regular expressions (I)".

Automata are modeled as 'state' records with two fields. The pos field contains the set of positions that are valid for recognition in the given state. Transitions are modeled as lists of pairs of symbols and states. In this way a state may contain transitions that reference itself.

type state = {
  pos : Int_set.t;
  mutable trans : (char * state) list ;
}

We will require a function that for each input symbol $a$ and a given set of positions $s$, computes the list of pairs $(a, s')$ where $s'$ is the subset of $s$ that correspond to $a$.

let (partition : char option array -> Int_set.t -> (char option * Int_set.t) list) =
  fun chars s ->
    let f acc c =
      match c with
      | Some _ ->
        if List.mem_assoc c acc then acc
        else
          let f i acc =
            if chars.(i) <> c then acc else Int_set.add i acc in
          (c, Int_set.fold f s (Int_set.empty)) :: acc
      | None ->
        if List.mem_assoc c acc then acc
        else (c, Int_set.empty) :: acc in
    List.rev (Array.fold_left f [] chars)

This function makes a list from a set of ints.

let list_of_int_set : Int_set.t -> Int_set.elt list =
  fun s -> List.rev (Int_set.fold (fun e acc -> e :: acc) s [])

This function, given a state, computes the list of sets that are accessible from that state.

let (accessible : state -> Int_set.t array -> char option array -> (char * Int_set.t) list) =
  fun s follow chars ->
    let part = partition chars s.pos in
    let f p rest =
      match p with
      | (Some c, l) ->
        (c, List.fold_left (Int_set.union) (Int_set.empty)
          (List.map (Array.get follow) (list_of_int_set l))
        ) :: rest
      | _ -> rest in
    List.fold_right f part []

find_state takes a set $s$ and two lists of states (marked and unmarked). It searches for a state which has a pos field equal to $s$ and returns this state or it fails.

let (find_state : Int_set.t -> state list -> state list -> state) =
  fun s l m ->
    let test e = e.pos = s in
    try
      List.find test l
    with
    | Not_found -> List.find test m

The algorithm to compute the automata works like this. Two lists are maintained, marked and unmarked states. The algorithm is initialized such that the only state is unmarked with a pos field containing first_pos $n_{0}$ where $n_{0}$ is the root of the syntax tree; the list of transitions is empty. For an unmarked state $st$, the algorithm does these things:

• Calculate a set of numbers accessible from $st$. That is, a set of pairs $(c, s)$, where $c$ is a character and $s$ a set of positions. A position $j$ is accessible from $st$ by $c$ if there is an $i$ in st.pos such that $j$ is in follow $i$ and $i$ numbers the character $c$.
• For each of the pairs $(c, s)$
  - If there exists a state st' (whether marked or unmarked) such that $s$ = st'.pos, it adds $(c, st')$ to the transitions of $st$;
  - Otherwise, a new state $st'$ without transitions is created, added to the transitions of $st$, and $st'$ is added to the list of unmarked states.
• It marks $st$.

The algorithm terminates only when there are no remaining unmarked states. The result is an array of states obtained from the list of marked states. Here then is the algorithm in code.

let rec (compute_states : state list -> state list -> Int_set.t array -> char option array -> state array) =
  fun marked unmarked follow chars ->
    match unmarked with
    | [] -> Array.of_list marked
    | st :: umsts ->
      let access = accessible st follow chars in
      let marked1 = st :: marked in
      let f (c, s) umsts =
        if Int_set.is_empty s then umsts (*Suppress empty sets*)
        else
          try
            st.trans <- (c, find_state s marked1 umsts) :: st.trans;
            umsts
          with
          | Not_found ->
            let state1 = {pos = s; trans = []} in
            st.trans <- (c, state1) :: st.trans;
            state1 :: umsts in
      let unmarked1 = List.fold_right f access umsts in
      compute_states marked1 unmarked1 follow chars

We are just about ready to write the function to compute the automaton. It is fundamentally a call to compute_states but does one more thing. That is, it searches the resulting array for the index of the initial state and puts the index in the first slot of the array. To do this it uses the utility function array_indexq which performs the search for the index using physical equality. This is because the usual test using structural equality will not terminate on structures that loop.

let (array_indexq : 'a array -> 'a -> int) =
  fun arr e ->
    let rec loop i =
      if i = Array.length arr then raise (Not_found)
      else if Array.get arr i == e then i
      else loop (i + 1) in
    loop 0

So, here it is, dfa_of, the function to compute the automaton.

let (dfa_of : augmented_regexp * Int_set.t array * char option array -> state array) =
  fun (e, follow, chars) ->
    let init_state = {pos = first_pos e; trans = []} in
    let dfa = compute_states [] [init_state] follow chars in
    (*Installing initial state at index 0*)
    let idx_start = array_indexq dfa init_state in
    dfa.(idx_start) <- dfa.(0);
    dfa.(0) <- init_state;
    dfa

We are now on the home stretch. All that remains is to write a function to interpret the automaton. To do this, we'll make use of a mini-combinator library of recognizers. I'll not provide the OCaml code for that today - you could reverse engineer from my earlier 'Recognizers' blog-post or, consult [1].

let (interpret_dfa : state array -> int -> char Recognizer.recognizer) =
  fun dfa accept ->
    let num_states = Array.length dfa in
    let fvect = Array.make (num_states) (fun _ -> failwith "no value") in
    for i = 0 to num_states - 1 do
      let trans = dfa.(i).trans in
      let f (c, st) =
        let pc = Recognizer.recognizer_of_char c in
        let j = array_indexq dfa st in
        Recognizer.compose_and pc (fun l -> fvect.(j) l) in
      let parsers = List.map f trans in
      if Int_set.mem accept (dfa.(i).pos) then
        fvect.(i) <- Recognizer.compose_or_list (Recognizer.end_of_input) parsers
      else
        match parsers with
        | [] -> failwith "Impossible"
        | p :: ps -> fvect.(i) <- Recognizer.compose_or_list p ps
    done;
    fvect.(0)

We wrap up with a couple of high level convenience functions: compile produces a recognizer from a string representation of a regular expression and re_match takes a recognizer (that is, a compiled regular expression) and a string and uses the recognizer to categorize the given string as admissible or not (where explode is a simple function that transforms a string into a char list - recognizers operate on lists).

let compile xpr =
  let ((e, follow, chars) as ast) = regexp_follow xpr in
  let dfa = dfa_of ast in
  let parser = interpret_dfa dfa (Array.length chars - 1) in
  fun s -> parser (explode s)

let re_match xpr s =
  let result = xpr s in
  match result with
  | Recognizer.Remains [] -> true
  | _ -> false

Here's a simple test driver that shows how these functions can be used.

let test xpr s =
  match re_match xpr s with
  | true -> Printf.printf "\"%s\" : success\n" s
  | false -> Printf.printf "\"%s\" : fail\n" s

let _ =
  try
    let xpr = compile "(a|b)*abb" in
    Printf.printf "Pattern: \"%s\"\n" "(a|b)*abb" ;
    test xpr "abb" ;
    test xpr "aabb" ;
    test xpr "baabb" ;
    test xpr "bbbbbbbbbbbbbaabb" ;
    test xpr "aaaaaaabbbaabbbaabbabaabb" ;
    test xpr "baab" ;
    test xpr "aa" ;
    test xpr "ab" ;
    test xpr "bb" ;
    test xpr "" ;
    test xpr "ccabb"
  with
  | Failure msg -> print_endline msg

References
[1] "The Functional Approach to Programming" - Cousineau & Mauny
[2] "Compilers Principles, Techniques & Tools" - Aho et. al.
Eliashberg theory applied to the study of an Nb-Ge series

We use the strong coupling theory of superconductivity to perform a detailed analysis of the Eliashberg functions α²F(ω) for thirteen samples of Nb-Ge with critical temperatures ranging from 7.0 to 21.1 K. As critical temperature increases, we analyze the general trends of the electron-phonon coupling parameter λ, of the integral of α²F(ω), A, and of other characteristics of α²F(ω) on the basis of empirical criteria. While we find that the samples have in general the behavior expected, a closer analysis points to an overall attenuation in α²F(ω) of unclear origin. Though the gap edge, Δ₀, appears to be well described by the α²F(ω) obtained, the thermodynamic critical field agrees poorly with that reproduced by α²F(ω) in the only case where the necessary data (tunneling α²F(ω) and critical field measurements) are available. Our analysis suggests there is a gross uncertainty in the measured H_c(0). The overall analysis shows that the samples obtained are of enough quality to already give meaningful results upon inversion of the tunneling data.

Pub Date: July 1991
Keywords: Coupling; Critical Temperature; Electron Phonon Interactions; Electron Tunneling; High Temperature Superconductors; Superconductivity; Transition Temperature; Germanium; Josephson Junctions; Niobium; Solid-State Physics
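The coupling parameter λ discussed in the abstract is conventionally obtained from the Eliashberg function as λ = 2∫ α²F(ω)/ω dω. The snippet below illustrates that integral numerically with a made-up triangular spectrum — it is not the Nb-Ge data from the paper, only a sketch of the relationship between α²F(ω) and λ.

```python
def lam_from_a2F(omega, a2F):
    """Trapezoidal estimate of lambda = 2 * integral of a2F(w)/w dw."""
    total = 0.0
    for i in range(len(omega) - 1):
        f0 = a2F[i] / omega[i]
        f1 = a2F[i + 1] / omega[i + 1]
        total += 0.5 * (f0 + f1) * (omega[i + 1] - omega[i])
    return 2.0 * total

# Toy spectrum: a single triangular peak centred at 10 meV (illustrative only).
omega = [1.0 + 0.1 * k for k in range(200)]   # frequency grid in meV, avoiding w = 0
a2F = [max(0.0, 1.0 - abs(w - 10.0) / 5.0) for w in omega]
lam = lam_from_a2F(omega, a2F)
```

For this toy spectrum λ comes out close to 1, which is the order of magnitude typical of strong-coupling materials such as the Nb-Ge series studied here.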
[Solved] Write down the bit pattern assuming that | SolutionInn

Write down the bit pattern assuming that we are using base 15 numbers in the fraction instead of base 2. (Base 16 numbers use the symbols 0-9 and A-F. Base 15 numbers would use 0-9 and A-E.) Assume there are 24 bits, and you do not need to normalize. Is this representation exact?
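The excerpt does not show which number is to be encoded, so the helper below leaves the value as a parameter. It extracts successive base-15 fraction digits the same way one extracts bits in base 2 — multiply by the radix and peel off the integer part — using exact rational arithmetic so the "is it exact?" question can be answered reliably. The choice of 1/3 as the example value is mine, purely for illustration.

```python
from fractions import Fraction

DIGITS = "0123456789ABCDE"   # base-15 uses the symbols 0-9 and A-E

def base15_fraction(x, ndigits):
    """Return (digit_string, is_exact) for the base-15 expansion of 0 <= x < 1.

    x should be an exact rational (Fraction or int ratio) to avoid
    binary floating-point artifacts.
    """
    x = Fraction(x)
    out = []
    for _ in range(ndigits):
        x *= 15
        d = int(x)          # next base-15 digit
        out.append(DIGITS[d])
        x -= d
    return "".join(out), x == 0.0

# Example: 1/3 = 5/15, so it terminates after a single base-15 digit.
digits, exact = base15_fraction(Fraction(1, 3), 6)
```

By contrast, a value like 1/2 repeats forever in base 15 (digit 7 recurring), so its 24-bit representation would not be exact.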
Articles collected on (around) 2000.03.22 on the topic of magnetic hysteresis simulation.

• An accurate and efficient 3-D micromagnetic simulation of metal evaporated tape
Jones-M; Miles-JJ
JOURNAL-OF-MAGNETISM-AND-MAGNETIC-MATERIALS. JUL 1997; 171 (1-2) : 190-208 (1997)
Metal evaporated tape (MET) has a complex column-like structure in which magnetic domains are arranged randomly. In order to accurately simulate the behaviour of MET it is important to capture these aspects of the material in a high-resolution 3-D micromagnetic model. The scale of this problem prohibits the use of traditional scalar computers and leads us to develop algorithms for a vector processor architecture. We demonstrate that despite the materials highly non-uniform structure, it is possible to develop fast vector algorithms for the computation of the magnetostatic interaction field. We do this by splitting the field calculation into near and far components. The near field component is calculated exactly using an efficient vector algorithm, whereas the far field is calculated approximately using a novel fast Fourier transform (FFT) technique. Results are presented which demonstrate that, in practice, the algorithms require sub-O(N log(N)) computation time. In addition results of highly realistic simulation of hysteresis in MET are presented.

• Some of the magnetic properties of polycrystalline soft ferrites: Origins and developments of a model for the description of the quasistatic magnetization
LeFloch-M; Konn-AM
JOURNAL-DE-PHYSIQUE-IV. MAR 1997; 7 (C1) : 187-190 (1997)
Since the first laws discovered by Lord Rayleigh in 1887, the literature has given a spectacular number of works in the magnetisation modeling field.
The authors remark that most of these works are devoted to soft magnetic metals with intent to improve the knowledge of the dynamic power losses (anomalous losses), and that only a few models have really been developed on true physical considerations. In the soft ferrites area, Globus has proposed a model of magnetisation based on the motions (reversible bulgings and irreversible movements of translation) of a single 180 degrees domain wall contained in an ideal spherical grain. The authors discuss the main assumptions of this model with regard to their own practical experience of soft ferrites (actions of external anisotropies), and also to some particular results published by other workers. In the last part of this paper, a new progress on the Globus model is presented which concerns the hysteresis simulation. The result is tested on actual hysteresis loops of different shapes. • A new algorithm for thermal decay simulations Klik-I; Yao-YD IEEE-TRANSACTIONS-ON-MAGNETICS. JUL 1998; 34 (4) Part 1 : 1285-1287 (1998) A simulation algorithm is proposed which reproduces, at a great CPU time saving, solutions of multilevel master equations used to describe thermally driven dynamics of interacting: particle arrays. Hysteresis loops for planar particle arrays are computed and the stability of intermediate, partially demagnetized configurations and their dependence on array size are discussed. • Application of the Preisach and Jiles-Atherton models to the simulation of hysteresis in soft magnetic alloys Pasquale-M; Bertotti-G; Jiles-DC; Bi-Y JOURNAL-OF-APPLIED-PHYSICS. APR 15 1999; 85 (8) Part 2A : 4373-4375 (1999) This article describes the advances in unification of model descriptions of hysteresis in magnetic materials and demonstrates the equivalence of two widely accepted models, the Preisach (PM) and Jiles-Atherton (JA) models. Recently it was shown that starting from general energy relations, the JA equation for a loop branch can be derived from PM. 
The unified approach is here applied to the interpretation of magnetization measured in nonoriented Si-Fe steels with variable grain size ⟨s⟩, and also in as-cast and annealed Fe amorphous alloys. In the case of NO Fe-Si, the modeling parameter k defined by the volume density of pinning centers is such that k ≈ A + B/⟨s⟩, where the parameters A and B are related to magnetocrystalline anisotropy and grain texture. The value of k in the amorphous alloys can be used to estimate the microstructural correlation length playing the role of effective grain size in these materials. (C) 1999 American Institute of Physics. [S0021-8979(99)23808-X].

• Monte Carlo simulation of hysteresis loops, of single-domain particles with cubic anisotropy and their temperature dependence
Garcia-Otero-J; Porto-M; Rivas-J; Bunde-A
JOURNAL-OF-MAGNETISM-AND-MAGNETIC-MATERIALS. AUG 1999; 203 Special Iss. SI : 268-270 (1999)
By means of Monte Carlo simulation the hysteresis of non-interacting single-domain magnetic particles presenting cubic crystalline anisotropy are studied. Both signs of the anisotropy constant are considered and relevant properties, such as remanence and coercivity, are obtained as a function of temperature. (C) 1999 Elsevier Science B.V. All rights reserved.
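One of the models surveyed above, the classical Preisach construction, reduces to a very small numerical sketch: a population of square-loop hysterons, each with its own "up" and "down" switching fields, summed to give the magnetization. The thresholds below are arbitrary illustration values, not fitted to any material discussed in these abstracts.

```python
class Hysteron:
    """Square loop: switches up when field >= alpha, down when field <= beta (beta < alpha)."""
    def __init__(self, alpha, beta, state=-1):
        self.alpha, self.beta, self.state = alpha, beta, state

    def apply(self, h):
        if h >= self.alpha:
            self.state = +1
        elif h <= self.beta:
            self.state = -1
        return self.state          # otherwise the hysteron remembers its state

def magnetization(hysterons, h):
    """Normalized magnetization after applying field h to every hysteron."""
    return sum(hy.apply(h) for hy in hysterons) / len(hysterons)

# Three hysterons with spread switching thresholds (arbitrary units).
hysterons = [Hysteron(a, -a) for a in (0.2, 0.5, 0.8)]
# Drive the field up, then back down; hysteresis shows as different M at the same H.
up = [magnetization(hysterons, h) for h in (0.0, 0.3, 0.6, 1.0)]
down = [magnetization(hysterons, h) for h in (0.6, 0.3, 0.0, -1.0)]
```

At H = 0.3 the ascending branch gives a different magnetization than the descending branch — the memory effect that defines a hysteresis loop.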
Finding the average of multiple rows given 3 criteria (Two Sheets) Context- I am attempting to track the average settlement percentage for a given attorney at an opposing firm with a given client of theirs. In the first screenshot, I've listed that given attorney and their client in columns 1 & 2 (Sheet 1). We are using data from one of our attorney's tracking sheet (Sheet 2). It lists all information for that given case. There are other columns I need to calculate for, but I can't get even the first formula to work. I've tried AVERAGEIF as well as AVG(COLLECT but to no avail. I believe the issue I am running into is that I am needing the formula to recognize part of a value listed in the Case Info column, since no two cases will have the same name and I'd rather not include another column. Here is my formula and I've included the screenshots below: =AVG(COLLECT({%s}:{%s}, {OC Range}:{OC Range}, ="Attorney1", {Style Range}:{Style Range}, IF(CONTAINS("Capital One", {Style Range}:{Style Range}), "Capital One"))) To sum it up, I am trying to find the average % for every row where OC = Attorney 1 and Case Info contains a given Creditor ("Capital One" for example) and is listed as a Settlement in the Type column and have that listed in sheet 1. (I did not include the type part in my initial formula because I was just trying to get the first part to work) Am I using the incorrect formulas, is this not possible or am I just missing something entirely? Best Answer • That means you don't have any rows in your source data that fit the criteria. I see in your formula you are specifying "Attorney1", but it looks like your source data in the screenshots has a space before the number. Try adjusting that and see if it works. • Try this: =AVG(COLLECT({%s}, {OC Range}, @cell = "Attorney1", {Style Range}, CONTAINS("Capital One", @cell))) • This one popped up as Divide by zero- but we're getting somewhere! 
• Bingo- thanks a bunch Paul!
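Outside Smartsheet, the same conditional average is a straightforward filter. The sketch below mirrors AVG(COLLECT(...)) with plain Python over a list of row dicts; the column names and sample values are invented for illustration only.

```python
def avg_settlement(rows, attorney, creditor):
    """Average 'pct' over rows where OC matches, Case Info contains the creditor,
    and the row is typed as a Settlement."""
    vals = [r["pct"] for r in rows
            if r["oc"] == attorney
            and creditor in r["case_info"]
            and r["type"] == "Settlement"]
    if not vals:
        # Smartsheet reports this situation as a Divide by Zero error.
        raise ZeroDivisionError("no rows match the criteria")
    return sum(vals) / len(vals)

rows = [
    {"oc": "Attorney 1", "case_info": "Smith v. Capital One", "type": "Settlement", "pct": 0.40},
    {"oc": "Attorney 1", "case_info": "Jones v. Capital One", "type": "Settlement", "pct": 0.60},
    {"oc": "Attorney 1", "case_info": "Doe v. Capital One",   "type": "Judgment",   "pct": 0.90},
    {"oc": "Attorney 2", "case_info": "Roe v. Capital One",   "type": "Settlement", "pct": 0.10},
]
avg = avg_settlement(rows, "Attorney 1", "Capital One")
```

Only the two matching Settlement rows contribute, so the average here is 0.50 — and, as in the thread above, an attorney name that doesn't match exactly (stray spaces included) simply produces no matches.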
Semi-Annual Compound Interest Calculator - GEGCalculators

How do you calculate semiannually compounded interest? To calculate semiannually compounded interest, you can use the formula: A = P(1 + r/n)^(nt) • A is the final amount (including principal and interest) • P is the principal amount (initial amount of money) • r is the annual interest rate (as a decimal) • n is the number of times the interest is compounded per year (for semiannual, n = 2) • t is the number of years the money is invested or borrowed for How do you calculate semi-annual interest rate? To calculate the semi-annual interest rate, you can rearrange the formula above (with n = 2) and solve for r: r = 2 × ((A/P)^(1/(2t)) − 1) What is 8% compounded semi-annually? An 8% interest rate compounded semi-annually means that you have an annual interest rate of 8%, and it is compounded twice a year (every six months). To find the semi-annual interest rate (r), you can use the formula mentioned above. How do you calculate compound interest for half yearly? You calculate compound interest for half-yearly compounding using the formula A = P(1 + r/2)^(2t), where r is the annual interest rate and t is the number of years. How much is semi-annually compounded? "Semi-annually compounded" refers to the frequency at which interest is added to the principal amount. It means that interest is added twice a year, or every six months. What is the interest on $3000 at 8% compounded semi-annually for 6 years? To calculate the interest on $3,000 at 8% compounded semi-annually for 6 years, you can use the formula mentioned earlier: A = P(1 + r/n)^(nt) • P = $3,000 • r = 0.08 (8% annual interest rate) • n = 2 (semi-annual compounding) • t = 6 years Plugging these values into the formula will give you the final amount (A), and you can subtract the principal (P) to find the interest earned. What is an example of semi-annually?
An example of something occurring semi-annually is a bi-annual checkup with a doctor, which means you have a checkup twice a year, every six months. What is the effective interest rate for 12% if compounded semi-annually? To find the effective interest rate for 12% compounded semi-annually, you can use the formula: Effective Rate = (1 + (r/n))^(n*t) – 1 • r = 0.12 (12% annual interest rate) • n = 2 (semi-annual compounding) • t = 1 year Calculate the effective rate using this formula to get the result. How many months is semi-annual? Semi-annual means occurring every six months, so it is equivalent to 6 months. What is 10% compounded semi-annually? A 10% interest rate compounded semi-annually means that you have an annual interest rate of 10%, and it is compounded twice a year (every six months). You can use the formula mentioned earlier to calculate the effective interest rate for this scenario. What rate (%) compounded quarterly is equivalent to 6% compounded semi-annually? To find the equivalent quarterly compounding rate for 6% compounded semi-annually, you can use the effective rate formula: Effective Rate = (1 + (r/n))^(n*t) – 1 For this case: • r is the unknown quarterly-compounded nominal rate • n = 4 (quarterly compounding) • t = 1 year • The effective rate should equal that of 6% compounded semi-annually, which is (1 + 0.06/2)^2 − 1 ≈ 6.09%, so set the effective rate to 0.0609 and solve for r. How do you calculate compound interest on a calculator? Most scientific or financial calculators have built-in functions for calculating compound interest. You would typically enter the principal amount, annual interest rate, compounding frequency, and the time period to calculate the final amount or interest earned. What is the fastest way to calculate compound interest? The fastest way to calculate compound interest is to use a financial calculator, spreadsheet software (like Excel), or online compound interest calculators. These tools can quickly compute compound interest based on the given parameters. How do you calculate compound interest for 6 months?
To calculate compound interest for 6 months, you can use the formula A = P(1 + r/n)^(nt) and set t = 0.5 (since 6 months is half a year). This will give you the interest earned in half a year. What is the fastest way to find compound interest? The fastest way to find compound interest is to use online calculators or spreadsheet software like Excel, which can perform the calculations automatically. Simply input the principal amount, interest rate, compounding frequency, and time period to get the result. Is compounded semi-annually better? The frequency of compounding affects the effective interest rate. Compounding more frequently (e.g., daily or quarterly) generally results in slightly higher effective rates compared to semi-annual compounding. Whether semi-annual compounding is better depends on the specific financial product or investment and your goals. Is it better to compound monthly or annually? Compounding monthly typically results in a slightly higher effective interest rate compared to annual compounding, so it may be better for savings or investments. However, the choice between monthly or annual compounding depends on the specific terms of the financial product and your financial goals. What is $5000 invested for 10 years at 10 percent compounded annually? To calculate the future value of $5,000 invested for 10 years at 10 percent compounded annually, you can use the formula A = P(1 + r)^t: A = $5,000 * (1 + 0.10)^10 Evaluating this gives approximately $12,968.71. How long will it take $10,000 to reach $50,000 if it earns 10% interest compounded semi-annually? To find the time it takes for $10,000 to grow to $50,000 at a 10% interest rate compounded semi-annually, you can rearrange the formula A = P(1 + r/n)^(nt) and solve for t: $50,000 = $10,000 * (1 + 0.10/2)^(2t) Solving gives approximately 16.49 years. How many years will it take to double your money at 6% compounded semi-annually?
To find the number of years it takes to double your money at 6% compounded semi-annually, you can use the formula: 2 = (1 + 0.06/2)^(2t) Solving for t: t ≈ 11.72 years How does semi-annual compounding work? Semi-annual compounding means that interest is calculated and added to the principal amount twice a year, typically every six months. This results in faster growth of the invested or borrowed money compared to annual compounding. Is semi-annually the same as half-yearly? Yes, semi-annually and half-yearly both mean an occurrence every six months. How do you calculate monthly compounded interest? To calculate monthly compounded interest, you can use the formula A = P(1 + r/n)^(nt), where n is the number of times interest is compounded per year (in this case, n = 12 for monthly compounding). How many times is interest paid in a year if it is compounded semi-annually? Interest is paid twice a year if it is compounded semi-annually. Which is better 15% compounded monthly or 12% compounded annually? In general, 15% compounded monthly is likely to yield a higher effective interest rate compared to 12% compounded annually, making it potentially a better option for savings or investments. However, you should consider the specific terms and conditions of the financial product or investment to make an informed choice. How many years will it take to triple the amount at 12% interest compounded semi-annually? To find the number of years it takes to triple an amount at 12% interest compounded semi-annually, you can use the formula: 3 = (1 + 0.12/2)^(2t) Solving for t: t ≈ 9.43 years How many days is semi-annual? Semi-annual means occurring every six months, so it is equivalent to 182.5 days on average. What is semi-annually in math? In mathematics, "semi-annually" refers to an event or occurrence happening every six months or twice a year. What number is compounded annually?
An amount compounded annually has interest added to the principal once per year. This is still compound interest — it simply uses a single compounding period per year, so the balance grows as A = P(1 + r)^t. Can you live off the interest of $1 million dollars? Living off the interest of $1 million dollars depends on various factors, including the interest rate, your expenses, and investment returns. If you have a conservative interest rate, you may need a significant amount of savings to sustain your lifestyle. Estimating that interest rates may be around 2-4%, you might generate $20,000 to $40,000 per year from $1 million in savings. How much is $10,000 compound interest over 10 years? The compound interest on $10,000 over 10 years depends on the interest rate and compounding frequency. Without specifying these details, it’s not possible to provide an exact value. However, you can use the compound interest formula to calculate it based on your specific interest rate and compounding frequency. How much is 5% interest on $50,000? To find 5% interest on $50,000, you can simply calculate 5% of $50,000: 0.05 * $50,000 = $2,500 What nominal rate compounded monthly is equivalent to 6% compounded semi-annually? To find the equivalent monthly nominal rate for 6% compounded semi-annually, you can use the effective rate formula and solve for the monthly rate: Effective Rate = (1 + (r/12))^12 – 1 For this case: • r is the monthly-compounded nominal rate you want to find • The effective rate should equal that of 6% compounded semi-annually, which is (1 + 0.06/2)^2 − 1 ≈ 6.09%, so set Effective Rate = 0.0609 and solve for r. What is 4% per year compounded semi-annually? A 4% interest rate compounded semi-annually means that you have an annual interest rate of 4%, and it is compounded twice a year (every six months). You can use the compound interest formula to calculate the future value or interest earned over a specific time period. What is the effective rate of 14% compounded semi-annually?
To find the effective rate of 14% compounded semi-annually, you can use the effective rate formula: Effective Rate = (1 + (r/2))^2 – 1 For this case: • r is the semi-annual rate, which is 0.14/2 = 0.07 Calculate the effective rate using this formula to get the result. What is the formula for compounding quarterly? The formula for compounding quarterly is the same as the general compound interest formula, but with n = 4 (since it’s compounded quarterly): A = P(1 + r/4)^(4t) How do you manually calculate compound interest? To manually calculate compound interest, you can use the formula A = P(1 + r/n)^(nt), where: • A is the final amount • P is the principal amount • r is the annual interest rate (as a decimal) • n is the number of times interest is compounded per year • t is the number of years Plug in these values and calculate step by step. What are the three steps to calculating compound interest? The three steps to calculating compound interest are: 1. Identify the principal amount (P), annual interest rate (r), compounding frequency (n), and time period (t). 2. Use the compound interest formula A = P(1 + r/n)^(nt) to calculate the final amount (A) or the interest earned. 3. Subtract the principal amount (P) from the final amount (A) to find the interest earned. What is the magic of compound interest? The “magic” of compound interest is that it allows your money to grow exponentially over time. As your interest earns interest, your savings or investments can grow substantially, especially over long periods. Compound interest is a powerful concept in finance that can help individuals accumulate wealth over time. What is the formula for compounded semiannually? The formula for compounded semiannually is the general compound interest formula: A = P(1 + r/2)^(2t) • A is the final amount • P is the principal amount • r is the annual interest rate (as a decimal) • t is the number of years What is the formula of compound interest with an example? 
The formula for compound interest is A = P(1 + r/n)^(nt), where: • A is the final amount • P is the principal amount • r is the annual interest rate (as a decimal) • n is the number of times interest is compounded per year • t is the number of years For example, if you have $5,000 (P) invested at an annual interest rate of 6% (r) compounded quarterly (n = 4) for 3 years (t), you can use this formula to calculate the final amount (A). How do I calculate compound interest without a formula? While it’s more convenient to use a formula, you can estimate compound interest without one by using a spreadsheet, calculator, or online compound interest calculator. These tools allow you to input the necessary values and calculate compound interest easily.
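The worked numbers in this FAQ are easy to sanity-check with a few lines of code. The two functions below implement the exact formulas the article uses — the future-value formula and its rearrangement for t — and evaluating them reproduces figures such as $5,000 at 10% annual compounding growing to about $12,968.71 in 10 years.

```python
import math

def compound(P, r, n, t):
    """Future value of principal P at annual rate r, compounded n times per year for t years."""
    return P * (1 + r / n) ** (n * t)

def years_to_reach(P, target, r, n):
    """Solve target = P * (1 + r/n)^(n*t) for t."""
    return math.log(target / P) / (n * math.log(1 + r / n))

fv = compound(5000, 0.10, 1, 10)                 # about 12968.71
t_double = years_to_reach(1, 2, 0.06, 2)         # about 11.72 years to double at 6% semi-annual
t_5x = years_to_reach(10_000, 50_000, 0.10, 2)   # about 16.49 years to reach $50,000
```

The same two functions cover every question in the FAQ that has numeric inputs; only r, n, and t change.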
Implement Hardware-Efficient Complex Burst Q-less QR with Forgetting Factor

This example shows how to use the hardware-efficient Complex Burst Q-less QR Decomposition with Forgetting Factor Whole R Output block.

Q-less QR Decomposition with Forgetting Factor
The Complex Burst Q-less QR Decomposition with Forgetting Factor Whole R Output block implements the recursion R_k = qr([α·R_{k−1}; α·A(k,:)]) (keeping only the upper-triangular factor) to compute the upper-triangular factor R of continuously streaming 1-by-n row vectors A(k,:) using forgetting factor α. It is as if matrix A is infinitely tall. The forgetting factor in the range 0 < α < 1 keeps it from integrating without bound.

Define System Parameters
n is the length of the row vectors A(k,:) and the number of rows and columns in R. m is the effective number of rows of A to integrate over. Use the fixed.forgettingFactor function to compute the forgetting factor as a function of the number of rows that you are integrating over.

forgettingFactor = fixed.forgettingFactor(m)
forgettingFactor =

precisionBits defines the number of bits of precision required for the QR Decomposition. Set this value according to system requirements. In this example, complex-valued matrix A is constructed such that the magnitude of the real and imaginary parts of its elements is less than or equal to one, so the maximum possible absolute value of any element is sqrt(2). Your own system requirements will define what those values are. If you don't know what they are, and A is a fixed-point input to the system, then you can use the upperbound function to determine the upper bounds of the fixed-point types of A. max_abs_A is an upper bound on the maximum magnitude element of A.

Select Fixed-Point Types
Use the fixed.qlessqrFixedpointTypes function to compute fixed-point types.

T = fixed.qlessqrFixedpointTypes(m,max_abs_A,precisionBits)
T = struct with fields: A: [0x0 embedded.fi]

T.A is the fixed-point type computed for transforming A to R in-place so that it does not overflow.
ans =
  DataTypeMode: Fixed-point: binary point scaling
  Signedness: Signed
  WordLength: 31
  FractionLength: 24

AMBA AXI Handshaking Process
The Data Handler subsystem in this model takes complex matrix A as inputs. It sends rows of A to the QR Decomposition block using the AMBA AXI handshake protocol. The validIn signal indicates when data is available. The ready signal indicates that the block can accept the data. Transfer of data occurs only when both the validIn and ready signals are high. You can set a delay for feeding in rows of A in the Data Handler to emulate the processing time of the upstream block. The validOut signal of the Data Handler remains high when delayLen is set to 0 because this indicates the Data Handler always has data available.

Define Simulation Parameters
Create random matrix A to contain a specified number of inputs. numInputs is the number of input rows A(k,:) for this example.

numInputs = 500;
A = fixed.example.complexUniformRandomArray(-1,1,numInputs,n);

Cast the inputs to the types determined by fixed.qlessqrFixedpointTypes. Cast the forgetting factor to a fixed-point type with the same word length as A and best-precision scaling.

forgettingFactor = fi(forgettingFactor,1,T.A.WordLength);

Set delay for feeding in rows of A. Select a stop time for the simulation that is long enough to process all the inputs from A.

stopTime = 4*numInputs*T.A.WordLength;

Open the Model
model = 'ComplexBurstQlessQRForgettingFactorModel';

Set Variables in the Model Workspace
Use the helper function setModelWorkspace to add the variables defined above to the model workspace.

Simulate the Model

Verify the Accuracy of the Output
Define matrix A_k as the exponentially weighted rows of A, A_k = [alpha^k*A(1,:); alpha^(k-1)*A(2,:); ...; alpha*A(k,:)]. Then, using the formula for the computation of the k-th output R_k, and the fact that Q_k'*Q_k = I, you can show that A_k'*A_k = R_k'*R_k. So to verify the output, the difference between A_k'*A_k and R'*R should be small. Choose the last output of the simulation.
R = double(out.R(:,:,end))

R =
  Columns 1 through 4
   7.4030 + 0.0000i   0.2517 - 0.3472i   0.4163 - 0.1448i   0.4088 + 0.5546i
   0.0000 + 0.0000i   7.3291 + 0.0000i  -0.1239 - 0.3553i  -0.8237 + 0.2091i
   0.0000 + 0.0000i   0.0000 + 0.0000i   7.3507 + 0.0000i   0.2622 - 0.6994i
   0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i   7.2422 + 0.0000i
   0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i
  Column 5
   0.5450 + 0.7208i
   0.1945 + 0.1716i
  -0.5292 + 0.4192i
   0.4574 - 0.1519i
   7.0558 + 0.0000i

Verify that R is upper triangular. Verify that the diagonal is greater than or equal to zero.
ans =

Synchronize the last output R with the input by finding the number of inputs that produced it.

A = double(A);
alpha = double(forgettingFactor);
relative_errors = nan(1,n);
for k = 1:numInputs
    A_k = alpha.^(k:-1:1)' .* A(1:k,:);
    relative_errors(k) = norm(A_k'*A_k - R'*R)/norm(A_k'*A_k);
end

k is the number of inputs A(k,:) that produced the last R.

k = find(relative_errors==min(relative_errors),1,'last')

Verify that A_k'*A_k = R'*R with a small relative error.

A_k = alpha.^(k:-1:1)' .* A(1:k,:);
relative_error = norm(A_k'*A_k - R'*R)/norm(A_k'*A_k)
relative_error =

Suppress mlint warnings in this file.
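The same recursion is easy to prototype in software before committing to fixed point. The sketch below — my own illustration, in double precision rather than the fixed-point types the model uses — folds each scaled row into R with Givens rotations (the kind of rotation-based update the burst hardware blocks are built around) and then checks the A_k'*A_k = R'*R identity used above, here for real-valued data.

```python
import math
import random

def givens_row_update(R, row, alpha):
    """Fold one new observation row into upper-triangular R (list of lists),
    after scaling both R and the row by the forgetting factor alpha."""
    n = len(R)
    R = [[alpha * x for x in r] for r in R]
    v = [alpha * x for x in row]
    for j in range(n):
        a, b = R[j][j], v[j]
        r = math.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r          # rotation zeroing v[j] against R[j][j]
        for k in range(j, n):
            t = c * R[j][k] + s * v[k]
            v[k] = -s * R[j][k] + c * v[k]
            R[j][k] = t
    return R

random.seed(0)
n, m, alpha = 4, 50, 0.99
R = [[0.0] * n for _ in range(n)]
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
for row in A:
    R = givens_row_update(R, row, alpha)

# Weighted Gram check: row i of A (0-based) carries weight alpha^(m-i).
def gram(M):
    cols = len(M[0])
    return [[sum(r[i] * r[j] for r in M) for j in range(cols)] for i in range(cols)]

A_w = [[alpha ** (m - i) * x for x in A[i]] for i in range(m)]
G_a, G_r = gram(A_w), gram(R)
err = max(abs(G_a[i][j] - G_r[i][j]) for i in range(n) for j in range(n))
```

Because each 2x2 rotation is orthogonal, the Gram matrix of the stacked [alpha*R; alpha*row] is preserved exactly, so err stays at round-off level.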
Unmet Hours
To model PCM hysteresis in EnergyPlus, what are the high/low temp difference of freezing/melting curves?

I read the EnergyPlus input output reference to see what are the inputs for modeling PCM hysteresis, but I can't understand what is the input for Low Temperature/High Temperature Difference of Melting/Freezing Curve. Can someone explain in a graphic way what these fields mean? Thanks in advance for your help.

I am looking for the same information. Can anyone help, please. Your cooperation is highly appreciated. Thank you.

1 Answer

I think I have figured out what these mean. If we look at the phase transition in the following picture (a heat flux-temperature curve, which could also be viewed as a Cp-temperature curve), we will get what these inputs mean. Btw, enthalpy-temperature curves could be calculated by integrating the Cp-temperature curve using the following equations.

Thanks for taking the time to explain this. Can you be more clear, what is Exo and Endo in the graph? In the picture you sent, what are the inputs for Low Temperature/High Temperature Difference of Melting/Freezing Curve? It's still not clear for me. — andresgallardo (2020-02-18)

Endo is endothermal. Exo is exothermal. The low temperature difference is the temperature difference between the melting/freezing onset point and peak point, and high temperature is the difference on the other side of the peak. — Fan (2020-03-03)

Sorry for the late reply. — Fan (2020-03-03)
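The answer's remark that the enthalpy-temperature curve follows from integrating Cp(T) amounts to a cumulative trapezoid. The sketch below shows that integration; the Cp values are invented placeholders with a melting peak near 25 °C, not real PCM data or EnergyPlus inputs.

```python
def enthalpy_from_cp(T, cp, h0=0.0):
    """Cumulative trapezoidal integral h(T) = h0 + integral of cp dT."""
    h = [h0]
    for i in range(1, len(T)):
        h.append(h[-1] + 0.5 * (cp[i] + cp[i - 1]) * (T[i] - T[i - 1]))
    return h

# Toy Cp curve (J/kg-K): a sensible-heat baseline plus a triangular melting peak at 25 C.
T = list(range(15, 36))                                   # 15..35 C
cp = [2000 + 18000 * max(0, 1 - abs(t - 25) / 3) for t in T]
h = enthalpy_from_cp(T, cp)
```

The resulting h(T) is monotone and rises steeply through the melting range — the characteristic S-shape of a PCM enthalpy curve.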
[Solved] Prove that: cot²(π/6) + cosec(5π/6) + 3 tan²(π/6) = 6 - Mathematics | Filo
Question Text: Prove that: cot²(π/6) + cosec(5π/6) + 3 tan²(π/6) = 6
Topic: Trigonometric Functions
Subject: Mathematics
Class: Class 11
Answer Type: Text solution: 1
Upvotes: 1
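A worked verification of the identity (a standard term-by-term evaluation; this is our own sketch, not the Filo tutor's solution):

```latex
\cot^2\frac{\pi}{6} + \operatorname{cosec}\frac{5\pi}{6} + 3\tan^2\frac{\pi}{6}
  = \left(\sqrt{3}\right)^2 + \frac{1}{\sin\frac{5\pi}{6}} + 3\left(\frac{1}{\sqrt{3}}\right)^2
  = 3 + \frac{1}{1/2} + 1
  = 3 + 2 + 1 = 6.
```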
The Fixed Point Property of the Infinite M-Sphere
MATHEMATICS, vol.8, no.4, 2020 (SCI-Expanded)
• Publication Type: Article / Article
• Volume: 8 Issue: 4
• Publication Date: 2020
• Doi Number: 10.3390/math8040599
• Journal Name: MATHEMATICS
• Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Aerospace Database, Communication Abstracts, Metadex, zbMATH, Directory of Open Access Journals, Civil Engineering
• Hacettepe University Affiliated: Yes
The present paper is concerned with the Alexandroff one-point compactification of the Marcus-Wyse (M-, for brevity) topological space (Z², γ). This compactification is called the infinite M-topological sphere and denoted by ((Z²)∗, γ∗), where (Z²)∗ := Z² ∪ {∗}, ∗ ∉ Z², and γ∗ is the topology for (Z²)∗ induced by the topology γ on Z². With the topological space ((Z²)∗, γ∗), since any open set containing the point "∗" has the cardinality ℵ₀, we call ((Z²)∗, γ∗) the infinite M-topological sphere. Indeed, in the fields of digital or computational topology or applied analysis, there is an unsolved problem as follows: under what category does ((Z²)∗, γ∗) have the fixed point property (FPP, for short)? The present paper proves that ((Z²)∗, γ∗) has the FPP in the category Mop(γ∗) whose object is the only ((Z²)∗, γ∗) and whose morphisms are all continuous self-maps g of ((Z²)∗, γ∗) such that |g((Z²)∗)| = ℵ₀ with ∗ ∈ g((Z²)∗) or g((Z²)∗) is a singleton. Since ((Z²)∗, γ∗) can be a model for a digital sphere derived from the M-topological space (Z², γ), it can play a crucial role in topology, digital geometry and applied sciences.
seminars - On the emergent behaviors of Cucker-Smale type flocks
In this thesis, we investigate Cucker-Smale type (in short, CS-type) models, mainly focusing on a case of a singular kernel. The CS-type model introduces an activation function to the CS model to describe various group phenomena, and the theory of relativity can be reflected as an example. To motivate the CS-type model, we first introduce the relativistic Cucker-Smale (in short, RCS) model with a singular kernel. More precisely, we study collision avoidance and flocking dynamics for the RCS model with a singular communication weight. For a bounded and regular communication weight, RCS particles can exhibit collisions in finite time depending on the geometry of the initial configuration. In contrast, for a singular communication weight, when particles collide, the associated Cucker-Smale vector field becomes unbounded, and the standard Cauchy-Lipschitz theory cannot be applied, so the existence theory after collisions is problematic. Thus, the collision avoidance problem is directly linked to the global solvability of the singular RCS model and asymptotic flocking.
We then propose the CS-type model, which is a general nonlinear consensus model incorporating the RCS model. Depending on the regularity and singularity of the communication weight at the origin and in the far field, we provide diverse clustering patterns for collective behaviors on the real line. The singularity of the kernel induces a collision avoidance or sticking property, depending on the integrability of the kernel near the origin. We study the regularity of sticking solutions of the proposed model on the real line. On the other hand, we provide a sufficient framework beyond the collision avoidance property to guarantee a strict lower bound between agents in Euclidean space. We then introduce a kinetic analog of the proposed model and study its well-posedness. We also show structural stability at both the particle and kinetic levels.
Centimeters to Kilometers Converter
Enter Centimeters
Switch to Kilometers to Centimeters Converter
How to use this Centimeters to Kilometers Converter
Follow these steps to convert a given length from the units of Centimeters to the units of Kilometers.
1. Enter the input Centimeters value in the text field.
2. The calculator converts the given Centimeters into Kilometers in real time, using the conversion formula, and displays it under the Kilometers label. You do not need to click any button. If the input changes, the Kilometers value is re-calculated, just like that.
3. You may copy the resulting Kilometers value using the Copy button.
4. To view a detailed step-by-step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the button present below the input field.
What is the Formula to convert Centimeters to Kilometers?
The formula to convert a given length from Centimeters to Kilometers is:
Length[(Kilometers)] = Length[(Centimeters)] / 100000
Substitute the given value of length in centimeters, i.e., Length[(Centimeters)], in the above formula and simplify the right-hand side. The resulting value is the length in kilometers, i.e., Length[(Kilometers)].
Calculation will be done after you enter a valid input.
Consider that a high-end smartphone has a screen size of 15 centimeters. Convert this screen size from centimeters to kilometers.
The length in centimeters is: Length[(Centimeters)] = 15
The formula to convert length from centimeters to kilometers is:
Length[(Kilometers)] = Length[(Centimeters)] / 100000
Substitute the given length Length[(Centimeters)] = 15 in the above formula.
Length[(Kilometers)] = 15 / 100000
Length[(Kilometers)] = 0.00015
Final Answer: Therefore, 15 cm is equal to 0.00015 km. The length is 0.00015 km, in kilometers.
Consider that a luxury handbag measures 30 centimeters in width. Convert this width from centimeters to kilometers.
The length in centimeters is: Length[(Centimeters)] = 30
The formula to convert length from centimeters to kilometers is:
Length[(Kilometers)] = Length[(Centimeters)] / 100000
Substitute the given length Length[(Centimeters)] = 30 in the above formula.
Length[(Kilometers)] = 30 / 100000
Length[(Kilometers)] = 0.0003
Final Answer: Therefore, 30 cm is equal to 0.0003 km. The length is 0.0003 km, in kilometers.
Centimeters to Kilometers Conversion Table
The following table gives some of the most used conversions from Centimeters to Kilometers.
Centimeters (cm) | Kilometers (km)
0 cm | 0 km
1 cm | 0.00001 km
2 cm | 0.00002 km
3 cm | 0.00003 km
4 cm | 0.00004 km
5 cm | 0.00005 km
6 cm | 0.00006 km
7 cm | 0.00007 km
8 cm | 0.00008 km
9 cm | 0.00009 km
10 cm | 0.0001 km
20 cm | 0.0002 km
50 cm | 0.0005 km
100 cm | 0.001 km
1000 cm | 0.01 km
10000 cm | 0.1 km
100000 cm | 1 km
A centimeter (cm) is a unit of length in the International System of Units (SI). One centimeter is equivalent to 0.01 meters or approximately 0.3937 inches. The centimeter is defined as one-hundredth of a meter, making it a convenient measurement for smaller lengths. Centimeters are used worldwide to measure length and distance in various fields, including science, engineering, and everyday life. They are commonly used in everyday measurements, such as the height, width, and depth of objects, as well as in educational settings.
A kilometer (km) is a unit of length in the International System of Units (SI), equal to approximately 0.6214 miles. One kilometer is one thousand meters. The prefix "kilo-" means one thousand. A kilometer is defined by 1000 times the distance light travels in 1/299,792,458 seconds. This definition may change, but a kilometer will always be one thousand meters. Kilometers are used to measure distances on land in most countries. However, the United States and the United Kingdom still often use miles. The UK has adopted the metric system, but miles are still used on road signs.
Frequently Asked Questions (FAQs)
1.
How do I convert centimeters to kilometers?
To convert centimeters to kilometers, divide the number of centimeters by 100,000, since there are 100,000 centimeters in a kilometer. For example, 250,000 centimeters divided by 100,000 equals 2.5 kilometers.
2. What is the formula for converting centimeters to kilometers?
The formula is: kilometers = centimeters / 100,000.
3. How many kilometers are in a centimeter?
There are 0.00001 kilometers in one centimeter.
4. Is 100,000 centimeters equal to 1 kilometer?
Yes, 100,000 centimeters is exactly equal to 1 kilometer.
5. How do I convert kilometers to centimeters?
To convert kilometers to centimeters, multiply the number of kilometers by 100,000. For example, 3 kilometers multiplied by 100,000 equals 300,000 centimeters.
6. Why do we divide by 100,000 to convert centimeters to kilometers?
Because there are 100 centimeters in a meter and 1,000 meters in a kilometer. Multiplying 100 × 1,000 gives 100,000 centimeters in a kilometer.
7. How many kilometers are there in 750,000 centimeters?
Divide 750,000 centimeters by 100,000 to get 7.5 kilometers.
8. How to convert 1,200,000 centimeters to kilometers?
Divide 1,200,000 centimeters by 100,000 to get 12 kilometers.
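The conversion rule above can be expressed as a pair of small functions (a generic sketch; the function names are our own, not part of the converter site):

```python
def cm_to_km(centimeters):
    """Convert a length in centimeters to kilometers (1 km = 100,000 cm)."""
    return centimeters / 100_000

def km_to_cm(kilometers):
    """Inverse conversion: kilometers back to centimeters."""
    return kilometers * 100_000

print(cm_to_km(15))   # the smartphone example: 15 cm -> 0.00015 km
print(km_to_cm(3))    # FAQ 5: 3 km -> 300000 cm
```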
Question with mkl_dcsrmultcsr
11-03-2011 08:51 PM
I have trouble using mkl_dcsrmultcsr when calculating A'A if A is not a square matrix. The problem is the size of the array ia. The doc says ia is an array of length m + 1 when trans = 'N' or 'n', or n + 1 otherwise. In my case, I was using 'T', so I need to allocate an array of length n + 1. How can a matrix have m rows, but a row-indices array of only length n + 1? I'm really confused.
11-09-2011 05:13 AM
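One way to see why n + 1 row pointers show up for the transposed case (this is our own illustration of the general CSR convention in plain Python, not Intel's answer and not MKL code): the product AᵀA of an m×n matrix is n×n, so its CSR row-pointer array has n + 1 entries even though A itself has m rows.

```python
def to_csr(dense):
    """Build CSR arrays (values, col_indices, row_ptr) from a dense matrix.

    The row-pointer array always has (number of rows + 1) entries,
    regardless of how many columns the matrix has."""
    values, col_indices, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_indices.append(j)
        row_ptr.append(len(values))
    return values, col_indices, row_ptr

def ata(a):
    """Dense C = A^T * A for an m x n matrix A; the result is n x n."""
    m, n = len(a), len(a[0])
    return [[sum(a[k][i] * a[k][j] for k in range(m)) for j in range(n)]
            for i in range(n)]

A = [[1.0, 0.0],
     [0.0, 2.0],
     [3.0, 0.0],
     [0.0, 4.0]]              # m = 4 rows, n = 2 columns

C = ata(A)                    # A^T A is n x n = 2 x 2
_, _, c_row_ptr = to_csr(C)
print(len(c_row_ptr))         # n + 1 = 3, even though A has m = 4 rows
```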
How To Read Logarithmic Graph Paper - The Graph Paper
All our mathematicians and other general scholars can get the Logarithmic Graph Paper here. This particular grid paper comes in very handy primarily in the domain of mathematics. Scholars can use the paper in the calculation of the log for a given equation. With our printable templates of the logarithmic graph paper, users can easily facilitate their calculation of logs in the domain of mathematics.
Graph paper is an interactive piece of paper that comes with some fine lines over it. These fine lines make the shape of grids, and these grids are useful for several purposes. For instance, users can use graphs to plot mathematical and scientific data. They can make some significant decisions based on this data presentation. Likewise, some artistic work can also be executed on the paper by enthusiasts of arts and crafts.
Logarithmic graph paper is the specific form of graph paper that is useful in the calculation of logs. This paper is highly significant for mathematics scholars, as they can use it to simplify log equations. Here in the article, we are going to simplify the whole concept of logarithmic graphs so that our readers can take full advantage of them.
Related Article: Logarithmic Graph Paper PDF
Well, graph paper these days is turning fully digital due to the preference of users. Most individuals now prefer using digital paper templates, as these templates are convenient and easily accessible. Log papers are no exception to the trend, since the majority of mathematical scholars prefer using graphs on their computers. Keeping this in consideration, we have also developed a fully digital PDF format of the logarithmic graph paper. This graph is special since it offers the utmost compatibility for use with digital devices. You can typically use and access it with your computer, smartphone, tablet, etc.
Mathematics scholars can easily plot mathematical functions on this graph to calculate logs. The paper is primarily useful for logarithmic functions only. Both scholars and teachers may prefer using it for their respective purposes. Moreover, the PDF paper can also be shared with other individuals for group learning of log functions.
How To Read Logarithmic Graph Paper
Well, reading logarithmic graph paper is not an easy task, especially for amateurs in this domain. It generally requires a basic understanding of logarithmic equations and then their plotting on the graph. The whole purpose of the paper is to simplify the calculation of logs. The graph simplifies the logarithmic functions in the form of drawn data and thus makes the calculation easier. With the paper, one can easily substitute for a calculator when getting the answers to logs. You can ideally use the paper to calculate the logs of both variables in a data point.
The only prerequisite to log graph paper usage is that you should be aware of the basic concept of logarithmic graph functions. For instance, while plotting a set of numbers that spans factors (powers) of 10, we have to use the logarithmic axis. It comes with major intervals at powers of 10, in order. On the other hand, to represent a set of numbers with raw log values, we use the linear axis. These are some of the very basic rules or fundamentals of reading and using the paper.
Blank Logarithmic Graph Paper
We have all types of readers, and some of them prefer to customize graphs for their usage. For the same reason, we have a fully blank template for logarithmic graphs for the preference of such readers. This template comes with the exclusive feature of customization, and users can use it to their advantage. For instance, with this template, they can design their own logarithmic graphs for plotting logarithmic data.
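The rule about the logarithmic axis — major intervals at powers of 10 — amounts to placing each value on the axis by its base-10 logarithm. A small sketch of that mapping (our own helper, not part of the templates):

```python
import math

def log_axis_position(value, axis_min, axis_max):
    """Fractional position (0 to 1) of `value` along a logarithmic axis
    running from axis_min to axis_max (all values must be positive)."""
    span = math.log10(axis_max) - math.log10(axis_min)
    return (math.log10(value) - math.log10(axis_min)) / span

# On a 1-100 log axis, the major gridlines fall at 1, 10, and 100,
# so the value 10 sits halfway along the axis:
print(log_axis_position(10, 1, 100))
```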
We believe this blank template will complement the personal customization preference of the users in the best possible manner. The users are free to give the desired final shape to this template as per their requirements.
3rd Grade Basic Math Worksheets - 3rd Grade Math Worksheets
3rd Grade Basic Math Worksheets – The expression "A thousand mile journey starts with a single step" is a perfect illustration. This saying is incredibly applicable to the journey that students embark on while learning math during the third grade. This is where students begin to master more advanced math concepts which are vital to their academic development.
The importance of math in 3rd grade
In the third grade, students move from basic math concepts to more complex ones like division, multiplication and fractions. This big step prepares the students for advanced mathematics in the upper grades. A solid foundation is vital in 3rd grade math.
Understanding 3rd Grade Math Worksheets
Consider the math worksheets for 3rd grade as practical tools for reinforcing concepts that are taught in class. They enable students to apply the knowledge they have acquired and improve their skills by solving math problems.
• Different Types of Worksheets
There are numerous worksheets that can be adapted to different goals for learning. Some worksheets focus on basic operations, such as subtraction or addition. Other worksheets cover more advanced topics, such as division, fractions, multiplication and more.
Benefits of using Math Worksheets
Promoting Math Skills
Worksheets let students test and enhance their mathematical skills. They help reinforce the concepts taught in class.
Promoting Self-Reliance
One of the great things about worksheets is the way they encourage self-directed learning. Students can solve the problems at their own pace, thereby increasing confidence and self-reliance.
Enhancing Problem Solving
Worksheets in math are an excellent opportunity to improve problem-solving skills. By completing different problems, students will be able to apply their knowledge in various contexts and develop their thinking skills.
How do you utilize math worksheets efficiently?
Similar to other skills, mastering math requires constant practice. Regularly working on math worksheets helps provide this consistency, helping students learn concepts more effectively.
It's not necessary to do workbooks on your own. Parents and educators can enhance learning for students by guiding them and providing them with feedback.
It is important to review the completed worksheets frequently in order to track progress. Celebrating small achievements and improvements is a great way to keep students motivated and makes learning enjoyable.
What are the top math worksheets?
There are 3rd grade math worksheets from a wide range of online sources. Websites like Math-Drills, Education.com, and GreatSchools provide a range of worksheets to practice different concepts. Many websites have answers, so that you can check your child's homework.
Consider the worksheet's alignment with your child's curriculum, the complexity of the worksheet, as well as the aesthetic appeal it offers your child. Remember that learning should be both exciting and challenging!
The importance of 3rd grade math in the academic life of a child cannot be overemphasized. It is an important phase which prepares children for more advanced math concepts in the future. Worksheets are a valuable aid in this kind of learning. They reinforce learned concepts and help students develop their own learning. They also improve the ability to solve problems. With a well-planned strategy and the right tools, 3rd grade math worksheets can make learning math an enjoyable and rewarding experience for your child.
• What are good resources for 3rd grade math worksheets?
• Math-Drills.com, Education.com, and GreatSchools are all websites that provide 3rd Grade Math Worksheets.
• How often should my child use math worksheets?
• Math is all about consistent practice.
Your child should use worksheets regularly, with the frequency depending on how comfortable they are with the math concepts.
• Do math worksheets aid students in learning?
• Math worksheets are effective tools for strengthening the concepts taught in class, encouraging independent learning and enhancing the ability to solve problems.
• Can worksheets be used to teach other subjects besides math?
• Absolutely. Worksheets can be used to help reinforce different ideas and abilities, like English and Science.
• What is the best method to make use of math worksheets?
• To make math worksheets effective, you must practice consistently. Making the learning process fun and engaging, and revisiting completed worksheets often, can improve learning.
Gallery of 3rd Grade Basic Math Worksheets
Math Practice Worksheets Publicationstews
3rd Grade Math Worksheets Best Coloring Pages For Kids Math
Third Grade Math Worksheets Free Printable K5 Learning
[Revealed] What Hits Harder 1 Ohm or 4 Ohm? [2024]
People who are crazy about their car maintenance do not compromise on amplifier and subwoofer specs. It is a puzzle for some people to understand the basic terminology about a speaker, like what is meant by impedance. Why is it so important? And the most important one: what hits harder, 1 ohm or 4 ohms? Let's make all these blurred concepts in the world of impedance clear.
Starting with the basic and key questions:
What is meant by the speaker's impedance?
Impedance is the resistance offered to the flow of alternating current by an electrical circuit. So, the impedance of the subwoofer coil accounts for the electrical resistance that the coil places in the path of the amplifier's output. The unit for measuring impedance is the same as for resistance, the "ohm", symbolized with the Greek letter omega, Ω.
Subwoofers come in standard impedances of 2 ohms, 4 ohms, and 8 ohms. To know what you have right now, you can check the magnet, where it is printed. The overall impedance of the subwoofers wired together must not be too low, so the amplifier does not get overheated.
Further clearing up the misunderstandings about the impedance of amplifiers and subwoofers, the main question that arises is: what hits harder, 1 ohm or 4 ohms?
Read also: 1 Ohm Vs 2 Ohm Vs 4 Ohm
Importance of Speakers' Impedance
The most important thing to make clear is that impedance changes with frequency. For example, if a car audio manufacturer says the impedance of the speaker is 8 ohms at 45 hertz, it will fluctuate with the change in frequency: at 2,000 hertz it might be 3 ohms. The overall impedance of a speaker is a dynamic profile that varies with frequency. So, when we see the manufacturer's label on a speaker, we should understand that it is a nominal impedance: a rough average estimate for the speaker when it is driven within the designed part of the audio spectrum.
Impedance changes because of the following:
• Resonance frequency
• Coil inductance
Resonance frequency: This refers to the frequency below which loudspeakers cannot create sound output for a given input signal.
Voice coil inductance: Another reason for fluctuation in impedance is voice coil inductance, which causes impedance to increase at higher frequencies.
The impedance of the speaker determines the amount of wattage it draws from the amplifier. Hence, current flow increases with a decrease in impedance (resistance). Higher current requires the amplifier to raise its power output, so with a decrease in impedance, more current will flow through the circuit, which in turn exerts more workload on the amplifier.
For instance, 4-ohm speakers offer double the resistance of a 2-ohm speaker, which results in half the current flow through the speaker. As a result, an amplifier whose delivered power is 100 watts at 8 ohms can deliver 200 watts at 4 ohms of resistance. Almost all amplifiers work well with a 4-ohm load, and very few amplifiers are stable with a 1-ohm load.
We can categorize the above as:
• With low impedance, more current will flow, exerting a greater load and increasing power.
• High impedance will reduce the current flow; a smaller load exerted on the amplifier will decrease the power.
Need to Know About Speaker's Impedance
It is important for you to know the speaker's impedance to make sure that it lies within the range that the amplifier is designed for. Amplifier capability must be taken into account before applying a load. One would like the overall impedance of the subwoofers and speakers not to be too low, so the amplifier can be run without overheating. Wiring options cause the impedance of the subwoofer setup to change. It is worth noting that impedance changes not only with frequency but also with how the subwoofers and voice coils are grouped together: either in a parallel or a series arrangement.
Subwoofer Series Wiring Arrangement
The series arrangement means subwoofers are connected end to end with each other; that is, the positive terminal of one is connected to the negative of the next, and so on. Overall impedance increases in a series arrangement, being the sum of the individual impedances. For example, if two 2-ohm subwoofers are connected in series, the resultant impedance will be 4 ohms.
Subwoofer Parallel Wiring Arrangement
In parallel wiring, all like terminals of the subwoofers share a common attachment: all positive ends are connected together and all negative ones are attached together. The resultant impedance is the reciprocal of the sum of the reciprocals of the individual impedances. As an example, four 4-ohm speakers wired in parallel will have an overall impedance of 1 ohm.
Difference Between 1 Ohm Or 4 Ohms
A voice coil is present in every speaker and subwoofer; this device offers electrical resistance, and that resisting property is what we call impedance. As the speaker's impedance declines, it becomes very easy for the amplifier to deliver power to it. So, technically speaking, the difference between 1 ohm and 4 ohms is only about the resistance the speaker's voice coil offers to the audio current provided by the amplifier. The main reason an amplifier overheats is that when the resistance is too low, the power demanded rises well beyond what it can manage.
Be clear in your mind: if you think a 4-ohm speaker can only ever be paired with an amplifier rated at exactly 4 ohms, that is not the case. Finding a proper match is highly recommended to meet your requirements; for example, a 4-ohm speaker can be connected to a 1-ohm-stable amplifier. In that case, though, your loudspeaker will not draw the amplifier's full power output.
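The series and parallel wiring rules described above can be written as two small helper functions (a generic sketch, not code from the article):

```python
def series_impedance(impedances):
    """Series wiring: total impedance is the sum of the individual impedances."""
    return sum(impedances)

def parallel_impedance(impedances):
    """Parallel wiring: the reciprocal of the sum of the reciprocals."""
    return 1.0 / sum(1.0 / z for z in impedances)

print(series_impedance([2, 2]))            # two 2-ohm subs in series -> 4 ohms
print(parallel_impedance([4, 4, 4, 4]))    # four 4-ohm speakers in parallel -> 1 ohm
```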
What Hits Harder 1-Ohm Or 4-Ohm? If we talk in technical terms, a 1-ohm impedance will hit harder than 4-ohm subwoofer to create more output when given with similar wattage, just because of the low impedance. • This makes a louder sound. • Costly. • Puts load on the amplifier. • This produces a sound of better quality. • Cheap. • Good amplifier efficiency. 1. Jason oliveira says 2 dual 4ohm speakers. 1ohm final or 4 ohm final. Wouldn’t I benefit from running a 4ohm load vs 1ohm with load, if I have enough power at a 4ohm load Wouldn’t it make more sense not only in terms of efficiency and sq but control over my subwoofer??? □ Andrew says Box rise (impedance rise) will be more pronounced in the 4ohm resulting in greater losses of power. Leave a Reply Cancel reply
Proceedings of the 7th Economics & Finance Conference, Tel Aviv
DOES THE GRAVITY MODEL WORK FOR THE MODELLING OF MIGRATION BETWEEN EUROPEAN COUNTRIES FROM 2011 TO 2014?
The gravity model is an interesting adaptation of Newton's law of gravitation, in which the effect of gravity is used to describe the spatial interactions between economic units. The force of interaction is supposed to be positively influenced by the size of the units (the push factor) and negatively by the distance between them (the pull factor). The model is used to estimate the dependence of migration on GDP, as well as on the distance between European countries. Based on the gravity model, the GDP of both (source and host) countries is expected to be a push factor and the distance is expected to be a pull factor. However, in economic theory, the impact of the GDP of a source country is expected to be negative, the opposite of the gravity model. The goal of the paper is to test which of the two is valid for eight European countries from 2011 to 2014.
Keywords: gravity model, spatial dependence, migration, model selection, random effects, panel data
DOI: 10.20472/EFC.2017.007.018
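In generic form (a standard textbook specification of the gravity model; the paper's exact estimating equation may differ), the model described in the abstract can be written and log-linearized as:

```latex
M_{ij} \;=\; G\,\frac{Y_i^{\beta_1}\,Y_j^{\beta_2}}{D_{ij}^{\beta_3}}
\qquad\Longrightarrow\qquad
\ln M_{ij} \;=\; \ln G + \beta_1 \ln Y_i + \beta_2 \ln Y_j - \beta_3 \ln D_{ij},
```

where \(M_{ij}\) is the migration flow from source country \(i\) to host country \(j\), \(Y\) denotes GDP, and \(D_{ij}\) the distance between them. The gravity model predicts \(\beta_1, \beta_2, \beta_3 > 0\), whereas economic theory expects the source-country GDP coefficient to enter negatively — the contrast the paper tests.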
CatalanSolids Calculators | List of CatalanSolids Calculators
CatalanSolids calculators give you a list of online CatalanSolids calculators — tools to perform calculations on the concepts and applications of Catalan solids. These calculators will be useful for everyone and save time with the complex procedures involved in obtaining the calculation results. You can also download, share, as well as print the list of CatalanSolids calculators with all the formulas.
Transformations are a fundamental concept in computer graphics and are used to manipulate the position, orientation, and size of objects in a virtual space. One of the most commonly used transformations is "rotation", which allows you to rotate an object around a specific point or axis. In this article, we will provide a step-by-step guide on how to perform rotation transformations, along with some code snippets for a better understanding.
Understanding Rotation Transformations
Before we dive into the implementation, let's first understand the basic concepts of rotation transformations. When we rotate an object, it means we are changing its orientation in space. This rotation can be performed around different axes:
1. Rotation around the X-Axis
Rotating an object around the X-axis means that it will pivot up and down. The positive rotation is clockwise, and the negative rotation is counterclockwise. Here's an example code snippet in JavaScript that demonstrates how to perform a rotation around the X-axis:

const rotateX = (angle) => {
  const cos = Math.cos(angle);
  const sin = Math.sin(angle);
  // Apply rotation matrix to the object's vertices
  // ...
};

2. Rotation around the Y-Axis
When we rotate an object around the Y-axis, it will pivot left and right. Similar to the X-axis rotation, the positive rotation is clockwise, and the negative rotation is counterclockwise. Here's a code snippet in Python that demonstrates how to perform a rotation around the Y-axis:
Here's a code snippet in C++ that demonstrates how to perform a rotation around the Z-axis (note that the local variables must not shadow the `std::cos` and `std::sin` functions):

```cpp
#include <cmath>

void rotateZ(float angle, float& x, float& y) {
    const float c = std::cos(angle);
    const float s = std::sin(angle);
    // Apply the Z-axis rotation matrix to the vertex (x, y)
    const float x0 = x, y0 = y;
    x = x0 * c - y0 * s;
    y = x0 * s + y0 * c;
}
```

Rotation transformations are a powerful tool in computer graphics that allow us to change the orientation of objects in a virtual space. In this article, we have explored the concepts of rotation around the X, Y, and Z axes and provided code snippets in various programming languages to help you implement these transformations in your own projects. Embrace the power of rotations and let your objects spin in the virtual world!

FAQ - Frequently Asked Questions

Q1: Can I rotate an object around an arbitrary point?
A1: Yes. To rotate an object around an arbitrary point, translate the object so that the point lies at the origin (0, 0, 0), perform the rotation, and then translate it back to its original position.

Q2: Are there any libraries or frameworks that simplify rotation transformations?
A2: Yes, many graphics libraries and frameworks provide built-in functions for performing rotation transformations. Some popular ones include Three.js, OpenGL, and DirectX.

Q3: How are rotation angles measured?
A3: Rotation angles can be measured in degrees or radians. In most programming languages, trigonometric functions like sin and cos expect angles in radians.

Q4: Can I combine multiple rotations?
A4: Absolutely! You can combine multiple rotations by applying them sequentially. This is known as "composition" and allows you to create complex transformations.

Q5: Can I rotate 3D objects in 2D space?
A5: No. Rotations in 2D space are limited to rotation around the Z-axis. To rotate objects freely in 3D space, you need all three axes.
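The translate–rotate–translate recipe from Q1 can be sketched in a few lines. This is an illustrative example of my own (the function name `rotate_about` is not from the article), shown for the 2D case, i.e. a rotation around the Z-axis:

```python
import math

def rotate_about(point, pivot, angle):
    """Rotate a 2D point around an arbitrary pivot (a Z-axis rotation).

    Translate the pivot to the origin, rotate, then translate back.
    """
    px, py = pivot
    x, y = point[0] - px, point[1] - py      # translate pivot to origin
    c, s = math.cos(angle), math.sin(angle)
    xr, yr = x * c - y * s, x * s + y * c    # apply the rotation matrix
    return (xr + px, yr + py)                # translate back

# Rotating (2, 1) a quarter turn around (1, 1) moves it to (1, 2)
print(rotate_about((2, 1), (1, 1), math.pi / 2))
```

The same pattern extends to 3D: compose a translation, one of the axis rotations above, and the inverse translation.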
You can order print and ebook versions of Think Bayes 2e from Bookshop.org and Amazon.

16. Logistic Regression#

This chapter introduces two related topics: log odds and logistic regression.

In a previous chapter, we rewrote Bayes's Theorem in terms of odds and derived Bayes's Rule, which can be a convenient way to do a Bayesian update on paper or in your head. In this chapter, we'll look at Bayes's Rule on a logarithmic scale, which provides insight into how we accumulate evidence through successive updates. That leads directly to logistic regression, which is based on a linear model of the relationship between evidence and the log odds of a hypothesis.

As an example, we'll use data from the Space Shuttle to explore the relationship between temperature and the probability of damage to the O-rings. As an exercise, you'll have a chance to model the relationship between a child's age when they start school and their probability of being diagnosed with Attention Deficit Hyperactivity Disorder (ADHD).

16.1. Log Odds#

When I was in grad school, I signed up for a class on the Theory of Computation. On the first day of class, I was the first to arrive. A few minutes later, another student arrived. At the time, about 83% of the students in the computer science program were male, so I was mildly surprised to note that the other student was female.

When another female student arrived a few minutes later, I started to think I was in the wrong room. When a third female student arrived, I was confident I was in the wrong room. And as it turned out, I was.

I'll use this anecdote to demonstrate Bayes's Rule on a logarithmic scale and show how it relates to logistic regression.
Using \(H\) to represent the hypothesis that I was in the right room, and \(F\) to represent the observation that the first other student was female, we can write Bayes's Rule like this:

\[O(H|F) = O(H) \frac{P(F|H)}{P(F|not H)}\]

Before I saw the other students, I was confident I was in the right room, so I might assign prior odds of 10:1 in favor:

\[O(H) = 10\]

If I was in the right room, the likelihood of the first female student was about 17%. If I was not in the right room, the likelihood of the first female student was more like 50%:

\[\frac{P(F|H)}{P(F|not H)} = 17 / 50\]

So the likelihood ratio is close to 1/3. Applying Bayes's Rule, the posterior odds were

\[O(H|F) = 10 / 3\]

After two students, the posterior odds were

\[O(H|FF) = 10 / 9\]

And after three students:

\[O(H|FFF) = 10 / 27\]

At that point, I was right to suspect I was in the wrong room.

The following table shows the odds after each update, the corresponding probabilities, and the change in probability after each step, expressed in percentage points.

```python
def prob(o):
    return o / (o + 1)
```

```python
import pandas as pd

index = ['prior', '1 student', '2 students', '3 students']
table = pd.DataFrame(index=index)
table['odds'] = [10, 10/3, 10/9, 10/27]
table['prob'] = prob(table['odds'])
table['prob diff'] = table['prob'].diff() * 100
```

│            │ odds     │ prob    │ prob diff  │
│ prior      │10.000000 │0.909091 │--          │
│ 1 student  │3.333333  │0.769231 │-13.986014  │
│ 2 students │1.111111  │0.526316 │-24.291498  │
│ 3 students │0.370370  │0.270270 │-25.604552  │

Each update uses the same likelihood, but the changes in probability are not the same. The first update decreases the probability by about 14 percentage points, the second by 24, and the third by 26. That's normal for this kind of update, and in fact it's necessary; if the changes were the same size, we would quickly get into negative probabilities.

The odds follow a more obvious pattern.
Because each update multiplies the odds by the same likelihood ratio, the odds form a geometric sequence. And that brings us to consider another way to represent uncertainty: log odds, which is the logarithm of odds, usually expressed using the natural log (base \(e\)).

Adding log odds to the table:

```python
import numpy as np

table['log odds'] = np.log(table['odds'])
table['log odds diff'] = table['log odds'].diff()
```

│            │ odds     │ prob    │prob diff  │log odds  │log odds diff │
│ prior      │10.000000 │0.909091 │--         │2.302585  │--            │
│ 1 student  │3.333333  │0.769231 │-13.986014 │1.203973  │-1.098612     │
│ 2 students │1.111111  │0.526316 │-24.291498 │0.105361  │-1.098612     │
│ 3 students │0.370370  │0.270270 │-25.604552 │-0.993252 │-1.098612     │

You might notice:

• When probability is greater than 0.5, odds are greater than 1, and log odds are positive.
• When probability is less than 0.5, odds are less than 1, and log odds are negative.

You might also notice that the log odds are equally spaced. The change in log odds after each update is the logarithm of the likelihood ratio. That's true in this example, and we can show that it's true in general by taking the log of both sides of Bayes's Rule.

\[\log O(H|F) = \log O(H) + \log \frac{P(F|H)}{P(F|not H)}\]

On a log odds scale, a Bayesian update is additive. So if \(F^x\) means that \(x\) female students arrive while I am waiting, the posterior log odds that I am in the right room are:

\[\log O(H|F^x) = \log O(H) + x \log \frac{P(F|H)}{P(F|not H)}\]

This equation represents a linear relationship between the log likelihood ratio and the posterior log odds.
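The additive form of the update is easy to check numerically. This small sketch is mine, not from the book; it uses the rounded likelihood ratio of 1/3 and reproduces the odds and probabilities in the table above:

```python
import numpy as np

prior_log_odds = np.log(10)   # prior odds 10:1
log_lr = np.log(1 / 3)        # log likelihood ratio per female student

# After x students, the posterior log odds are prior + x * log_lr,
# so the odds trace out the geometric sequence 10, 10/3, 10/9, 10/27.
for x in range(4):
    log_odds = prior_log_odds + x * log_lr
    odds = np.exp(log_odds)
    print(x, round(odds, 6), round(odds / (odds + 1), 6))
```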
In this example the linear equation is exact, but even when it's not, it is common to use a linear function to model the relationship between an explanatory variable, \(x\), and a dependent variable expressed in log odds, like this:

\[\log O(H | x) = \beta_0 + \beta_1 x\]

where \(\beta_0\) and \(\beta_1\) are unknown parameters:

• The intercept, \(\beta_0\), is the log odds of the hypothesis when \(x\) is 0.
• The slope, \(\beta_1\), is the log of the likelihood ratio.

This equation is the basis of logistic regression.

16.2. The Space Shuttle Problem#

As an example of logistic regression, I'll solve a problem from Cameron Davidson-Pilon's book, Bayesian Methods for Hackers. He writes:

"On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23 (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend."

The dataset is originally from this paper, but also available from Davidson-Pilon. I'll read the data and do some cleaning.
```python
data = pd.read_csv('challenger_data.csv', parse_dates=[0])

# avoiding column names with spaces
data.rename(columns={'Damage Incident': 'Damage'}, inplace=True)

# dropping row 3, in which Damage Incident is NaN,
# and row 24, which is the record for the Challenger
data.drop(labels=[3, 24], inplace=True)

# convert the Damage column to integer
data['Damage'] = data['Damage'].astype(int)
```

Here is the cleaned dataset:

│  │ Date       │Temperature │Damage│
│0 │1981-04-12  │66          │0     │
│1 │1981-11-12  │70          │1     │
│2 │1982-03-22  │69          │0     │
│4 │1982-01-11  │68          │0     │
│5 │1983-04-04  │67          │0     │
│6 │1983-06-18  │72          │0     │
│7 │1983-08-30  │73          │0     │
│8 │1983-11-28  │70          │0     │
│9 │1984-02-03  │57          │1     │
│10│1984-04-06  │63          │1     │
│11│1984-08-30  │70          │1     │
│12│1984-10-05  │78          │0     │
│13│1984-11-08  │67          │0     │
│14│1985-01-24  │53          │1     │
│15│1985-04-12  │67          │0     │
│16│1985-04-29  │75          │0     │
│17│1985-06-17  │70          │0     │
│18│1985-07-29  │81          │0     │
│19│1985-08-27  │76          │0     │
│20│1985-10-03  │79          │0     │
│21│1985-10-30  │75          │1     │
│22│1985-11-26  │76          │0     │
│23│1986-01-12  │58          │1     │

The columns are:

• Date: The date of launch,
• Temperature: Outside temperature in Fahrenheit, and
• Damage: 1 if there was a damage incident and 0 otherwise.

There are 23 launches in the dataset, 7 with damage incidents.

```python
len(data), data['Damage'].sum()
```

The following figure shows the relationship between damage and temperature.

```python
import matplotlib.pyplot as plt
from utils import decorate

def plot_data(data):
    """Plot damage as a function of temperature.

    data: DataFrame
    """
    plt.plot(data['Temperature'], data['Damage'], 'o',
             label='data', color='C0', alpha=0.4)
    decorate(ylabel="Probability of damage",
             xlabel="Outside temperature (deg F)",
             title="Damage to O-Rings vs Temperature")
```

```python
plot_data(data)
```

When the outside temperature was below 65 degrees, there was always damage to the O-rings. When the temperature was above 65 degrees, there was usually no damage.

Based on this figure, it seems plausible that the probability of damage is related to temperature. If we assume this probability follows a logistic model, we can write:

\[\log O(H | x) = \beta_0 + \beta_1 x\]

where \(H\) is the hypothesis that the O-rings will be damaged, \(x\) is temperature, and \(\beta_0\) and \(\beta_1\) are the parameters we will estimate. For reasons I'll explain soon, I'll define \(x\) to be temperature shifted by an offset so its mean is 0.

```python
offset = data['Temperature'].mean().round()
data['x'] = data['Temperature'] - offset
```

And for consistency I'll create a copy of the Damage column called y.

```python
data['y'] = data['Damage']
```

Before doing a Bayesian update, I'll use statsmodels to run a conventional (non-Bayesian) logistic regression.

```python
import statsmodels.formula.api as smf

formula = 'y ~ x'
results = smf.logit(formula, data=data).fit(disp=False)
results.params
```

```
Intercept   -1.208490
x           -0.232163
dtype: float64
```

results contains a "point estimate" for each parameter, that is, a single value rather than a posterior distribution. The intercept is about -1.2, and the estimated slope is about -0.23.

To see what these parameters mean, I'll use them to compute probabilities for a range of temperatures. Here's the range:

```python
inter = results.params['Intercept']
slope = results.params['x']

xs = np.arange(53, 83) - offset
```

We can use the logistic regression equation to compute log odds:

```python
log_odds = inter + slope * xs
```

And then convert to probabilities.
```python
odds = np.exp(log_odds)
ps = odds / (odds + 1)
```

Converting log odds to probabilities is a common enough operation that it has a name, expit, and SciPy provides a function that computes it.

```python
from scipy.special import expit

ps = expit(inter + slope * xs)
```

Here's what the logistic model looks like with these estimated parameters.

```python
plt.plot(xs + offset, ps, label='model', color='C1')
```

At low temperatures, the probability of damage is high; at high temperatures, it drops off to near 0.

But that's based on conventional logistic regression. Now we'll do the Bayesian version.

16.3. Prior Distribution#

I'll use uniform distributions for both parameters, using the point estimates from the previous section to help me choose the upper and lower bounds.

```python
from utils import make_uniform

qs = np.linspace(-5, 1, num=101)
prior_inter = make_uniform(qs, 'Intercept')

qs = np.linspace(-0.8, 0.1, num=101)
prior_slope = make_uniform(qs, 'Slope')
```

We can use make_joint to construct the joint prior distribution.

```python
from utils import make_joint

joint = make_joint(prior_inter, prior_slope)
```

The values of intercept run across the columns, the values of slope run down the rows.

For this problem, it will be convenient to "stack" the prior so the parameters are levels in a MultiIndex, and put the result in a Pmf.

```python
from empiricaldist import Pmf

joint_pmf = Pmf(joint.stack())
```

│      │           │ probs    │
│Slope │Intercept  │          │
│-0.8  │ -5.00     │0.000098  │
│      │ -4.94     │0.000098  │
│      │ -4.88     │0.000098  │

joint_pmf is a Pmf with two levels in the index, one for each parameter. That makes it easy to loop through possible pairs of parameters, as we'll see in the next section.

16.4. Likelihood#

To do the update, we have to compute the likelihood of the data for each possible pair of parameters.
To make that easier, I'm going to group the data by temperature, x, and count the number of launches and damage incidents at each temperature.

```python
grouped = data.groupby('x')['y'].agg(['count', 'sum'])
```

│      │count │sum│
│  x   │      │   │
│-17.0 │1     │1  │
│-13.0 │1     │1  │
│-12.0 │1     │1  │
│ -7.0 │1     │1  │
│ -4.0 │1     │0  │

The result is a DataFrame with two columns: count is the number of launches at each temperature; sum is the number of damage incidents. To be consistent with the parameters of the binomial distributions, I'll assign them to variables named ns and ks.

```python
ns = grouped['count']
ks = grouped['sum']
```

To compute the likelihood of the data, let's assume temporarily that the parameters we just estimated, slope and inter, are correct. We can use them to compute the probability of damage at each launch temperature, like this:

```python
xs = grouped.index
ps = expit(inter + slope * xs)
```

ps contains the probability of damage for each launch temperature, according to the model. Now, for each temperature we have ns, ps, and ks; we can use the binomial distribution to compute the likelihood of the data.

```python
from scipy.stats import binom

likes = binom.pmf(ks, ns, ps)
```

```
array([0.93924781, 0.85931657, 0.82884484, 0.60268105, 0.56950687,
       0.24446388, 0.67790595, 0.72637895, 0.18815003, 0.8419509 ,
       0.87045398, 0.15645171, 0.86667894, 0.95545945, 0.96435859,
       ...])
```

Each element of likes is the probability of seeing k damage incidents in n launches if the probability of damage is p. The likelihood of the whole dataset is the product of this array.

That's how we compute the likelihood of the data for a particular pair of parameters.
Now we can compute the likelihood of the data for all possible pairs:

```python
likelihood = joint_pmf.copy()

for slope, inter in joint_pmf.index:
    ps = expit(inter + slope * xs)
    likes = binom.pmf(ks, ns, ps)
    likelihood[slope, inter] = likes.prod()
```

To initialize likelihood, we make a copy of joint_pmf, which is a convenient way to make sure that likelihood has the same type, index, and data type as joint_pmf. The loop iterates through the parameters. For each possible pair, it uses the logistic model to compute ps, computes the likelihood of the data, and assigns the result to a row in likelihood.

16.5. The Update#

Now we can compute the posterior distribution in the usual way.

```python
posterior_pmf = joint_pmf * likelihood
```

Because we used a uniform prior, the parameter pair with the highest likelihood is also the pair with maximum posterior probability:

```python
pd.Series(posterior_pmf.max_prob(),
          index=['slope', 'inter'])
```

```
slope   -0.233
inter   -1.220
dtype: float64
```

So we can confirm that the results of the Bayesian update are consistent with the maximum likelihood estimate computed by StatsModels:

```python
results.params
```

```
Intercept   -1.208490
x           -0.232163
dtype: float64
```

They are approximately the same, within the precision of the grid we're using.

If we unstack the posterior Pmf we can make a contour plot of the joint posterior distribution.

```python
from utils import plot_contour

joint_posterior = posterior_pmf.unstack()

plot_contour(joint_posterior)
decorate(title='Joint posterior distribution')
```

The ovals in the contour plot are aligned along a diagonal, which indicates that there is some correlation between slope and inter in the posterior distribution.

But the correlation is weak, which is one of the reasons we subtracted off the mean launch temperature when we computed x; centering the data minimizes the correlation between the parameters.

Exercise: To see why this matters, go back and set offset=60 and run the analysis again. The slope should be the same, but the intercept will be different.
And if you plot the joint distribution, the contours you get will be elongated, indicating stronger correlation between the estimated parameters.

In theory, this correlation is not a problem, but in practice it is. With uncentered data, the posterior distribution is more spread out, so it's harder to cover with the joint prior distribution. Centering the data maximizes the precision of the estimates; with uncentered data, we have to do more computation to get the same precision.

16.6. Marginal Distributions#

Finally, we can extract the marginal distributions.

```python
from utils import marginal

marginal_inter = marginal(joint_posterior, 0)
marginal_slope = marginal(joint_posterior, 1)
```

Here's the posterior distribution of inter.

```python
marginal_inter.plot(label='intercept', color='C4')
decorate(title='Posterior marginal distribution of intercept')
```

And here's the posterior distribution of slope.

```python
marginal_slope.plot(label='slope', color='C2')
decorate(title='Posterior marginal distribution of slope')
```

Here are the posterior means.

```python
pd.Series([marginal_inter.mean(), marginal_slope.mean()],
          index=['inter', 'slope'])
```

```
inter   -1.376107
slope   -0.289795
dtype: float64
```

Both marginal distributions are moderately skewed, so the posterior means are somewhat different from the point estimates.

```python
results.params
```

```
Intercept   -1.208490
x           -0.232163
dtype: float64
```

16.7. Transforming Distributions#

Let's interpret these parameters. Recall that the intercept is the log odds of the hypothesis when \(x\) is 0, which is when the temperature is about 70 degrees F (the value of offset). So we can interpret the quantities in marginal_inter as log odds.
To convert them to probabilities, I'll use the following function, which transforms the quantities in a Pmf by applying a given function:

```python
def transform(pmf, func):
    """Transform the quantities in a Pmf."""
    ps = pmf.ps
    qs = func(pmf.qs)
    return Pmf(ps, qs, copy=True)
```

If we call transform and pass expit as a parameter, it transforms the log odds in marginal_inter into probabilities and returns the posterior distribution of inter expressed in terms of probabilities:

```python
marginal_probs = transform(marginal_inter, expit)
```

Pmf provides a transform method that does the same thing.

```python
marginal_probs = marginal_inter.transform(expit)
```

Here's the posterior distribution for the probability of damage at 70 degrees F.

```python
marginal_probs.plot()
decorate(xlabel='Probability of damage at 70 deg F',
         title='Posterior marginal distribution of probabilities')
```

The mean of this distribution is about 22%, which is the probability of damage at 70 degrees F, according to the model.

```python
mean_prob = marginal_probs.mean()
```

This result shows the second reason I defined x to be zero when temperature is 70 degrees F; this way, the intercept corresponds to the probability of damage at a relevant temperature, rather than at 0 degrees F.

Now let's look more closely at the estimated slope. In the logistic model, the parameter \(\beta_1\) is the log of the likelihood ratio. So we can interpret the quantities in marginal_slope as log likelihood ratios, and we can use exp to transform them to likelihood ratios (also known as Bayes factors).

```python
marginal_lr = marginal_slope.transform(np.exp)
```

The result is the posterior distribution of likelihood ratios; here's what it looks like.
```python
marginal_lr.plot()
decorate(xlabel='Likelihood ratio of 1 deg F',
         title='Posterior marginal distribution of likelihood ratios')
```

```python
mean_lr = marginal_lr.mean()
```

The mean of this distribution is about 0.75, which means that each additional degree Fahrenheit provides evidence against the possibility of damage, with a likelihood ratio (Bayes factor) of 0.75.

Notice the order of these computations:

• I computed the posterior mean of the probability of damage at 70 deg F by transforming the marginal distribution of the intercept to the marginal distribution of probability, and then computing the mean.
• I computed the posterior mean of the likelihood ratio by transforming the marginal distribution of slope to the marginal distribution of likelihood ratios, and then computing the mean.

This is the correct order of operations, as opposed to computing the posterior means first and then transforming them. To see the difference, let's compute both values the other way around.

Here's the posterior mean of marginal_inter, transformed to a probability, compared to the mean of marginal_probs.

```python
expit(marginal_inter.mean()), marginal_probs.mean()
```

```
(0.2016349762400815, 0.2201937884647988)
```

And here's the posterior mean of marginal_slope, transformed to a likelihood ratio, compared to the mean of marginal_lr.

```python
np.exp(marginal_slope.mean()), marginal_lr.mean()
```

```
(0.7484167954660071, 0.7542914170110268)
```

In this example, the differences are not huge, but they can be. As a general rule, transform first, then compute summary statistics.

16.8. Predictive Distributions#

In the logistic model, the parameters are interpretable, at least after transformation. But often what we care about are predictions, not parameters.
In the Space Shuttle problem, the most important prediction is, "What is the probability of O-ring damage if the outside temperature is 31 degrees F?"

To make that prediction, I'll draw a sample of parameter pairs from the posterior distribution.

```python
sample = posterior_pmf.choice(101)
```

The result is an array of 101 tuples, each representing a possible pair of parameters. I chose this sample size to make the computation fast. Increasing it would not change the results much, but they would be a little more precise.

To generate predictions, I'll use a range of temperatures from 31 degrees F (the temperature when the Challenger launched) to 82 degrees F (the highest observed temperature).

```python
temps = np.arange(31, 83)
xs = temps - offset
```

The following loop uses xs and the sample of parameters to construct an array of predicted probabilities.

```python
pred = np.empty((len(sample), len(xs)))

for i, (slope, inter) in enumerate(sample):
    pred[i] = expit(inter + slope * xs)
```

The result has one column for each value in xs and one row for each element of sample. To get a quick sense of what the predictions look like, we can loop through the rows and plot them.

```python
for ps in pred:
    plt.plot(temps, ps, color='C1', lw=0.5, alpha=0.4)
```

The overlapping lines in this figure give a sense of the most likely value at each temperature and the degree of uncertainty.

In each column, I'll compute the median to quantify the central tendency and a 90% credible interval to quantify the uncertainty. np.percentile computes the given percentiles; with the argument axis=0, it computes them for each column.

```python
low, median, high = np.percentile(pred, [5, 50, 95], axis=0)
```

The results are arrays containing predicted probabilities for the lower bound of the 90% CI, the median, and the upper bound of the CI.
Here's what they look like:

```python
plt.fill_between(temps, low, high, color='C1', alpha=0.2)
plt.plot(temps, median, color='C1', label='logistic model')
```

According to these results, the probability of damage to the O-rings at 80 degrees F is near 2%, but there is some uncertainty about that prediction; the upper bound of the CI is around 10%. At 60 degrees, the probability of damage is near 80%, but the CI is even wider, from 48% to 97%.

But the primary goal of the model is to predict the probability of damage at 31 degrees F, and the answer is at least 97%, and more likely to be more than 99.9%.

```python
low = pd.Series(low, temps)
median = pd.Series(median, temps)
high = pd.Series(high, temps)
```

```python
t = 80
print(median[t], (low[t], high[t]))
```

```
0.016956535510200765 (0.000563939208692237, 0.1335417225332125)
```

```python
t = 60
print(median[t], (low[t], high[t]))
```

```
0.7738185742694538 (0.45512110762641983, 0.9654437697137236)
```

```python
t = 31
print(median[t], (low[t], high[t]))
```

```
0.9998129598124814 (0.97280101769455, 0.999999987740933)
```

One conclusion we might draw is this: If the people responsible for the Challenger launch had taken into account all of the data, and not just the seven damage incidents, they could have predicted that the probability of damage at 31 degrees F was nearly certain. If they had, it seems likely they would have postponed the launch.

At the same time, if they considered the previous figure, they might have realized that the model makes predictions that extend far beyond the data. When we extrapolate like that, we have to remember not just the uncertainty quantified by the model, which we expressed as a credible interval; we also have to consider the possibility that the model itself is unreliable.
This example is based on a logistic model, which assumes that each additional degree of temperature contributes the same amount of evidence in favor of (or against) the possibility of damage. Within a narrow range of temperatures, that might be a reasonable assumption, especially if it is supported by data. But over a wider range, and beyond the bounds of the data, reality has no obligation to stick to the model. 16.9. Empirical Bayes# In this chapter I used StatsModels to compute the parameters that maximize the probability of the data, and then used those estimates to choose the bounds of the uniform prior distributions. It might have occurred to you that this process uses the data twice, once to choose the priors and again to do the update. If that bothers you, you are not alone. The process I used is an example of what’s called the Empirical Bayes method, although I don’t think that’s a particularly good name for it. Although it might seem problematic to use the data twice, in these examples, it is not. To see why, consider an alternative: instead of using the estimated parameters to choose the bounds of the prior distribution, I could have used uniform distributions with much wider ranges. In that case, the results would be the same; the only difference is that I would spend more time computing likelihoods for parameters where the posterior probabilities are negligibly small. So you can think of this version of Empirical Bayes as an optimization that minimizes computation by putting the prior distributions where the likelihood of the data is worth computing. This optimization doesn’t affect the results, so it doesn’t “double-count” the data. 16.10. Summary# So far we have seen three ways to represent degrees of confidence in a hypothesis: probability, odds, and log odds. When we write Bayes’s Rule in terms of log odds, a Bayesian update is the sum of the prior and the likelihood; in this sense, Bayesian statistics is the arithmetic of hypotheses and evidence. 
This form of Bayes’s Theorem is also the foundation of logistic regression, which we used to infer parameters and make predictions. In the Space Shuttle problem, we modeled the relationship between temperature and the probability of damage, and showed that the Challenger disaster might have been predictable. But this example is also a warning about the hazards of using a model to extrapolate far beyond the data. In the exercises below you’ll have a chance to practice the material in this chapter, using log odds to evaluate a political pundit and using logistic regression to model diagnosis rates for Attention Deficit Hyperactivity Disorder (ADHD). In the next chapter we’ll move from logistic regression to linear regression, which we will use to model changes over time in temperature, snowfall, and the marathon world record. 16.11. Exercises# Exercise: Suppose a political pundit claims to be able to predict the outcome of elections, but instead of picking a winner, they give each candidate a probability of winning. With that kind of prediction, it can be hard to say whether it is right or wrong. For example, suppose the pundit says that Alice has a 70% chance of beating Bob, and then Bob wins the election. Does that mean the pundit was wrong? One way to answer this question is to consider two hypotheses: • H: The pundit’s algorithm is legitimate; the probabilities it produces are correct in the sense that they accurately reflect the candidates’ probabilities of winning. • not H: The pundit’s algorithm is bogus; the probabilities it produces are random values with a mean of 50%. If the pundit says Alice has a 70% chance of winning, and she does, that provides evidence in favor of H with likelihood ratio 70/50. If the pundit says Alice has a 70% chance of winning, and she loses, that’s evidence against H with a likelihood ratio of 50/30. Suppose we start with some confidence in the algorithm, so the prior odds are 4 to 1. 
And suppose the pundit generates predictions for three elections:

• In the first election, the pundit says Alice has a 70% chance of winning and she does.
• In the second election, the pundit says Bob has a 30% chance of winning and he does.
• In the third election, the pundit says Carol has a 90% chance of winning and she does.

What is the log likelihood ratio for each of these outcomes? Use the log-odds form of Bayes's Rule to compute the posterior log odds for H after these outcomes. In total, do these outcomes increase or decrease your confidence in the pundit?

If you are interested in this topic, you can read more about it in this blog post.

```python
# Solution

prior_log_odds = np.log(4)
```

```python
# Solution

lr1 = np.log(7/5)
lr2 = np.log(3/5)
lr3 = np.log(9/5)
lr1, lr2, lr3
```

```
(0.3364722366212129, -0.5108256237659907, 0.5877866649021191)
```

```python
# Solution

# In total, these three outcomes provide evidence that the
# pundit's algorithm is legitimate, although with K=1.8,
# it is weak evidence.

posterior_log_odds = prior_log_odds + lr1 + lr2 + lr3
```

Exercise: An article in the New England Journal of Medicine reports results from a study that looked at the diagnosis rate of Attention Deficit Hyperactivity Disorder (ADHD) as a function of birth month: "Attention Deficit–Hyperactivity Disorder and Month of School Enrollment".

They found that children born in June, July, and August were substantially more likely to be diagnosed with ADHD, compared to children born in September, but only in states that use a September cutoff for children to enter kindergarten. In these states, children born in August start school almost a year younger than children born in September. The authors of the study suggest that the cause is "age-based variation in behavior that may be attributed to ADHD rather than to the younger age of the children".
Use the methods in this chapter to estimate the probability of diagnosis as a function of birth month. The notebook for this chapter provides the data and some suggestions for getting started.

The paper includes this figure:

In my opinion, this representation of the data does not show the effect as clearly as it could. But the figure includes the raw data, so we can analyze it ourselves.

Note: there is an error in the figure, confirmed by personal correspondence:

The May and June [diagnoses] are reversed. May should be 317 (not 287) and June should be 287 (not 317).

So here is the corrected data, where n is the number of children born in each month, starting with January, and k is the number of children diagnosed with ADHD.

```python
n = np.array([32690, 31238, 34405, 34565, 34977, 34415,
              36577, 36319, 35353, 34405, 31285, 31617])

k = np.array([265, 280, 307, 312, 317, 287,
              320, 309, 225, 240, 232, 243])
```

First, I’m going to “roll” the data so it starts in September rather than January.

```python
x = np.arange(12)
n = np.roll(n, -8)
k = np.roll(k, -8)
```

And I’ll put it in a DataFrame with one row for each month and the diagnosis rate per 10,000.

```python
adhd = pd.DataFrame(dict(x=x, k=k, n=n))
adhd['rate'] = adhd['k'] / adhd['n'] * 10000
```

│  │x │ k │ n     │ rate      │
│0 │0 │225│35353  │63.643821  │
│1 │1 │240│34405  │69.757303  │
│2 │2 │232│31285  │74.156944  │
│3 │3 │243│31617  │76.857387  │
│4 │4 │265│32690  │81.064546  │
│5 │5 │280│31238  │89.634420  │
│6 │6 │307│34405  │89.231216  │
│7 │7 │312│34565  │90.264719  │
│8 │8 │317│34977  │90.630986  │
│9 │9 │287│34415  │83.393869  │
│10│10│320│36577  │87.486672  │
│11│11│309│36319  │85.079435  │

Here’s what the diagnosis rates look like.
```python
def plot_adhd(adhd):
    plt.plot(adhd['x'], adhd['rate'], 'o',
             label='data', color='C0', alpha=0.4)
    plt.axvline(5.5, color='gray', alpha=0.2)
    plt.text(6, 64, 'Younger than average')
    plt.text(5, 64, 'Older than average',
             horizontalalignment='right')
    decorate(xlabel='Birth date, months after cutoff',
             ylabel='Diagnosis rate per 10,000')
```

```python
plot_adhd(adhd)
```

For the first 9 months, from September to May, we see what we would expect if some of the excess diagnoses are due to “age-based variation in behavior”. For each month of difference in age, we see an increase in the number of diagnoses.

This pattern breaks down for the last three months, June, July, and August. This might be explained by random variation, but it also might be due to parental manipulation; if some parents hold back children born near the deadline, the observations for these months would include a mixture of children who are relatively old for their grade and therefore less likely to be diagnosed.

Unfortunately, the dataset includes only month of birth, not year, so we don’t know the actual ages of these students when they started school. However, we can use the first nine months to estimate the effect of age on diagnosis rate; then we can think about what to do with the other three months.

Use the methods in this chapter to estimate the probability of diagnosis as a function of birth month. Start with the following prior distributions.

```python
qs = np.linspace(-5.2, -4.6, num=51)
prior_inter = make_uniform(qs, 'Intercept')
```

```python
qs = np.linspace(0.0, 0.08, num=51)
prior_slope = make_uniform(qs, 'Slope')
```

1. Make a joint prior distribution and update it using the data for the first nine months.
2. Then draw a sample from the posterior distribution and use it to compute the median probability of diagnosis for each month and a 90% credible interval.
3.
As a bonus exercise, do a second update using the data from the last three months, but treating the observed number of diagnoses as a lower bound on the number of diagnoses there would be if no children were kept back.

```python
# Solution
joint = make_joint(prior_inter, prior_slope)
```

The joint prior is uniform: every cell of the 51 × 51 grid of Slope and Intercept values has probability 0.000384.

```python
# Solution
joint_pmf = Pmf(joint.stack())
```

│Slope│Intercept │ probs   │
│ 0.0 │  -5.200  │0.000384 │
│     │  -5.188  │0.000384 │
│     │  -5.176  │0.000384 │

```python
# Solution
num_legit = 9
adhd1 = adhd.loc[0:num_legit-1]
adhd2 = adhd.loc[num_legit:]
```

│ │x│ k │ n    │ rate     │
│0│0│225│35353 │63.643821 │
│1│1│240│34405 │69.757303 │
│2│2│232│31285 │74.156944 │
│3│3│243│31617 │76.857387 │
│4│4│265│32690 │81.064546 │
│5│5│280│31238 │89.634420 │
│6│6│307│34405 │89.231216 │
│7│7│312│34565 │90.264719 │
│8│8│317│34977 │90.630986 │

│  │x │ k │ n    │ rate     │
│9 │9 │287│34415 │83.393869 │
│10│10│320│36577 │87.486672 │
│11│11│309│36319 │85.079435 │

```python
# Solution
from scipy.stats import binom

likelihood1 = joint_pmf.copy()

xs = adhd1['x']
ks = adhd1['k']
ns = adhd1['n']

for slope, inter in joint_pmf.index:
    ps = expit(inter + slope * xs)
    likes = binom.pmf(ks, ns, ps)
    likelihood1[slope, inter] = likes.prod()
```

```python
# Solution
# This update uses the binomial survival function to compute
# the probability that the number of cases *exceeds* `ks`.
likelihood2 = joint_pmf.copy()

xs = adhd2['x']
ks = adhd2['k']
ns = adhd2['n']

for slope, inter in joint_pmf.index:
    ps = expit(inter + slope * xs)
    likes = binom.sf(ks, ns, ps)
    likelihood2[slope, inter] = likes.prod()
```

```python
# Solution
posterior_pmf = joint_pmf * likelihood1
```

```python
# Solution
posterior_pmf = joint_pmf * likelihood1 * likelihood2
```

```python
# Solution
joint_posterior = posterior_pmf.unstack()

decorate(title='Joint posterior distribution')
```

```python
# Solution
marginal_inter = marginal(joint_posterior, 0)
marginal_slope = marginal(joint_posterior, 1)

marginal_inter.mean(), marginal_slope.mean()
```

```
(-4.999322906782624, 0.044607616771986124)
```

```python
# Solution
decorate(title='Posterior marginal distribution of intercept')
```

```python
# Solution
decorate(title='Posterior marginal distribution of slope')
```

```python
# Solution
sample = posterior_pmf.choice(101)

xs = adhd['x']
ps = np.empty((len(sample), len(xs)))

for i, (slope, inter) in enumerate(sample):
    ps[i] = expit(inter + slope * xs)
```

```python
# Solution
low, median, high = np.percentile(ps, [2.5, 50, 97.5], axis=0)
```

```
array([0.00663988, 0.00695303, 0.00728085, 0.00762401, 0.00798321,
       0.00835919, 0.00875272, 0.00915734, 0.00955774, 0.00997548,
       0.01043603, 0.01094356])
```

```python
# Solution
plt.fill_between(xs, low*10000, high*10000,
                 color='C1', alpha=0.2)
plt.plot(xs, median*10000, label='model', color='C1', alpha=0.5)
```
Will 2024 be just 007 car models?

Hey everyone

Just had a thought and wondered if 2024 will be 007 car models for the year? Do you think there could be anything else planned for 2024?

Don't get me wrong, it's kool that Agora got the rights to do the 007 franchise. Some of the cars coming are great for Bond fans new and old. Good stuff.

It was really kool to get Optimus this year (looking fantastic so far btw, everyone building him seems very pleased) which made a nice change from vehicles (for me) and being UK based we can't get the Alien through Agora...

What's everyone's thoughts?

Apologies if this is in the wrong section.. wasn't sure if the thread should be in general chat?

9 hours ago, Christopher Christou said:

Hey everyone Just had a thought and wondered if 2024 will be 007 car models for the year? Do you think there could be anything else planned for 2024? Don't get me wrong, it's kool that Agora got the rights to do the 007 franchise. Some of the cars coming are great for Bond fans new and old. Good stuff. It was really kool to get Optimus this year (looking fantastic so far btw, everyone building him seems very pleased) which made a nice change from vehicles (for me) and being UK based we can't get the Alien through Agora... What's everyone's thoughts?

Sorry Chris, is it because the Alien takes too long to do on Hachette, because I got kit one thru Hachette. Have Hachette sold out of Alien Chris? Not long to wait for ET though, also know that Darth Vader is being trialed over here, so that could also be another in the pipeline, I would def go for scale 1/2 Darth Vader.

1 hour ago, Phil said:

Sorry Chris, is it because the Alien takes too long to do on Hachette, because I got kit one thru Hachette.
Hey Phil it's not so much that it takes too long for the Alien build through Hachette, I'm just interested to know if Agora had anything else up its sleeve for 2024 instead of 007. I know there is a lot of Bond cars going on but who knows, they might drop this for you lol, gen 1.

1 hour ago, Phil said:

Have Hachette sold out of Alien Chris? Not long to wait for ET though, also know that Darth Vader is being trialed over here, so that could also be another in the pipeline, I would def go for scale 1/2 Darth Vader.

Vader 1/2 scale, now that would be an announcement lol

45 minutes ago, Christopher Christou said:

Yes I'm in close negotiations now for Megatron hahaha

1 hour ago, Christopher Christou said:

Very close... so near but like the lottery could be a million miles away!

But Chris who knows eh? I would love to have these two standing side by side.

• 4 weeks later...

My understanding was that a new Agora model was going to be launched in the autumn, maybe at the Scale Model show in the UK. I haven't seen anything, does anyone have any information?

• 2 weeks later...

On 11/21/2023 at 8:15 PM, Mark Norton said:

My understanding was that a new Agora model was going to be launched in the autumn, maybe at the Scale Model show in the UK. I haven't seen anything, does anyone have any information?

Hi Mark

I think they have a kool looking helicopter coming at some point. Not 100% sure it's Agora's though. I forgot the name of it but World of Wayne visited a scale model show a couple of weeks back and there was one on display. It's pretty much winter now so I'm thinking maybe it's been delayed. I'm pretty sure the Lotus 007 car is the next model to be launched, if not the helicopter.

Yes that's right Chris, it's the Apache AH-64 Attack helicopter, that's where I saw it too on WoW, though nothing's confirmed in time frame yet, but I hope to be in on it, it's a wonderful piece of military kit.
Btw Chris, I'm up to 94 now on R2-D2

11 hours ago, Phil said:

Yes that's right Chris, it's the Apache AH-64 Attack helicopter, that's where I saw it too on WoW, though nothing's confirmed in time frame yet, but I hope to be in on it, it's a wonderful piece of military kit. Btw Chris, I'm up to 94 now on R2-D2

Hey Phil

That's right, I forgot the name (thanks for that) but the Apache does look awesome and it comes with the scenery too which looks really nice. I still wait in hope for the Vader 1:2 scale though. That would be epic!!

Ah lucky you! I got 87-90 coming in December with R2-D2 so you're a couple of months in front! I look forward to seeing how you get on once he's fully up and complete. Do you have the app downloaded ready for use?

5 hours ago, Scotter said:

Not sure this is on Agora's list of vehicles for 007! Would sure be an interesting one though!

On 11/30/2023 at 1:49 PM, Christopher Christou said:

Hey Phil That's right, I forgot the name (thanks for that) but the Apache does look awesome and it comes with the scenery too which looks really nice. I still wait in hope for the Vader 1:2 scale though. That would be epic!! Ah lucky you! I got 87-90 coming in December with R2-D2 so you're a couple of months in front! I look forward to seeing how you get on once he's fully up and complete. Do you have the app downloaded ready for use?
{"url":"https://community.agoramodels.com/topic/6293-will-2024-be-just-007-car-models/","timestamp":"2024-11-11T16:14:49Z","content_type":"text/html","content_length":"301979","record_id":"<urn:uuid:885d8e33-5f45-4b04-84ce-b7bdd5c3d0fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00137.warc.gz"}
Results (II)

The amplitude of the mucosal wave showed a significant effect of Test Moment (F(1,58)=30.5, p<.001, η^2[p]=.067), suggesting that the three groups had a greater amplitude in the Post-Test, but no effect of Treatment Group (F(2,58)=0.410, p=.665, η^2[p]=.011). The interaction between the two variables was significant (F(2,58)=11.5, p<.001, η^2[p]=.050). To follow up the interaction, as for the other variables, one-way ANOVAs for the Pre- and Post-Test conditions were carried out. Results showed that in the Pre-Test, mean amplitudes were not significantly different (F(2,38)=1.40, p=.259, η^2[p]=.031), but in the Post-Test condition the difference was significant (F(2,38)=3.67, p=.036, η^2[p]=.098). The unpaired t-test showed that the Gauze group significantly differed from the other groups (Control group: t(19)=2.12, p=.047, Cohen's d=0.476; Exercise group: t(19)=2.35, p=.020, Cohen's d=0.567), and no significant differences were found between these two groups (t(19)=0.29, p=.772, Cohen's d=0.066), suggesting that the Gauze group showed significantly greater amplitude of the mucosal wave after the warm-up (see Figure 2).

The G of GRBAS did not show any significant effect (all p > .12), nor did the R (all p > .31), the A (all p > .11) or the S (all p > .13). However, the B showed a significant effect of Test Moment (F(1,58)=15.79, p<.001, η^2[p]=.0565) and a significant interaction between the two variables (F(2,58)=3.20, p=.048, η^2[p]=.026). The Pre-Test one-way ANOVA showed that the three groups were not significantly different (F(2,38)=0.658, p=.524, η^2[p]=.022), but the Post-Test one-way ANOVA was significant (F(2,38)=4.01, p=.027, η^2[p]=.062).
The unpaired sample t-test showed that the Gauze group was significantly different from the other two (Control group: t(19)=2.17, p=.043, Cohen's d=0.486; Exercise group: t(19)=3.48, p=.002, Cohen's d=0.780) and that the Control and the Exercise groups did not differ significantly (t(19)=0.76, p=.453, Cohen's d=0.171), suggesting that only the Gauze group improved breathiness after warm-up (see Figure 3).

Figure 3. Bar plots of the B of GRBAS for the Pre- and Post-Test conditions of the three treatments (Control group, Exercise group and Gauze group). Error bars refer to the standard error (SE) of each

Acoustic analysis showed that the F0 variations were not significant (all p > .63), nor were the Jitter variations (all p > .53). However, the Shimmer showed a significant effect of Test Moment (F(1,58)=20.55, p<.001, η^2[p]=.079) and Treatment Group (F(2,58)=3.69, p=.031, η^2[p]=.073), with a significant interaction between the two variables (F(2,58)=6.48, p=.003, η^2[p]=.050), suggesting that the Gauze group was different from the other two. The Pre-Test one-way ANOVA showed that the three groups were not significantly different before the treatment (F(2,38)=0.35, p=.703, η^2[p]=.026), but after the treatment they differed significantly (F(2,38)=15.41, p<.001, η^2[p]=.018). The unpaired t-test showed that the Gauze group differed from the other two groups after the treatment (Control group: t(19)=3.31, p=.004, Cohen's d=0.741; Exercise group: t(19)=5.52, p<.001, Cohen's d=1.235), while no significant differences were found between the Exercise and Control groups (t(19)=0.97, p=.341, Cohen's d=0.218), suggesting that after the Gauze treatment participants had significantly better Shimmer values (see Figure 4).

Figure 4. Bar plots of the Shimmer (%) for the Pre- and Post-Test conditions of the three groups (Control group, Exercise group and Gauze group). Error bars refer to the standard error (SE) of each
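For readers who want to reproduce this kind of follow-up comparison, pooled-SD Cohen's d and the corresponding unpaired t statistic can be computed as below. This is a generic sketch with invented sample values, not the study's data; all names are ours.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def t_unpaired(a, b):
    """Student's unpaired t statistic (equal-variance form): t = d / sqrt(1/na + 1/nb)."""
    return cohens_d(a, b) / np.sqrt(1 / len(a) + 1 / len(b))

# Invented example values for two groups of five speakers
a = np.array([4.1, 4.5, 3.9, 4.8, 4.2])
b = np.array([3.2, 3.6, 3.1, 3.9, 3.4])

print(round(cohens_d(a, b), 3), round(t_unpaired(a, b), 3))
```

By the usual benchmarks, d around 0.2, 0.5 and 0.8 is read as a small, medium and large effect, which is how the d values reported above are typically interpreted.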
A Novel Attention-Mechanism Based Cox Survival Model by Exploiting Pan-Cancer Empirical Genomic Information

College of Computer Science and Technology, Qingdao Institute of Software, China University of Petroleum, Qingdao 266580, China
China High Performance Computer Research Center, Institute of Computer Technology, Chinese Academy of Sciences, Beijing 100190, China
Author to whom correspondence should be addressed.
These authors contributed equally to this work.

Submission received: 22 March 2022 / Revised: 15 April 2022 / Accepted: 19 April 2022 / Published: 22 April 2022

Cancer prognosis is an essential goal for early diagnosis, biomarker selection, and medical therapy. In the past decade, deep learning has successfully solved a variety of biomedical problems. However, due to the high dimensionality of human cancer transcriptome data and the small number of training samples, there is still no mature deep learning-based survival analysis model that can fully overcome training problems such as overfitting while providing accurate prognosis. Given these problems, we introduce a novel framework called SAVAE-Cox for survival analysis of high-dimensional transcriptome data. This model adopts a novel attention mechanism and takes full advantage of an adversarial transfer learning strategy. We trained the model on 16 types of TCGA cancer RNA-seq data sets. Experiments show that our model outperformed state-of-the-art survival analysis models such as the Cox proportional hazard model (Cox-ph), Cox-lasso, Cox-ridge, Cox-nnet, and VAECox on the concordance index. In addition, we carried out feature analysis experiments. Based on the experimental results, we conclude that our model is helpful for revealing cancer-related genes and biological functions.

1. Introduction

Since the 20th century, cancer has become a serious threat to human life and health, and prognostic studies of cancer at multiple levels continue to emerge.
However, prognostic research for cancer is still a challenging task. One of the most important reasons is that, due to the existence of censored patient samples, traditional analysis models cannot effectively determine the actual time of death [ ]. Compared with traditional prognosis, survival analysis models focus more on the patient's survival time than on the time of death, and they can handle censored data very effectively. The most widely used is the Cox proportional hazards model (Cox-ph) [ ], a semi-parametric proportional hazards model. The covariates of the model explain the relative risk of a patient, called the hazard ratio, prognostic index or risk score. Based on the relative risk score, the risk associated with changes in various factors can be analyzed effectively, thereby helping doctors develop effective targeted therapy strategies. The Human Genome Project has raised survival modeling to the level of high-throughput multi-omics research [ ]. Many experiments and studies show that performing cancer survival analysis on high-throughput transcriptome data is very meaningful [ ]. Computationally, however, performing survival analysis on high-dimensional transcriptome gene expression data amounts to solving a complex nonlinear regression problem, and traditional Cox-ph regression models cannot effectively handle such high-dimensional feature representations or accurately predict prognosis. In the past two decades, some researchers have used machine learning methods to modify the original Cox survival analysis model, improving its regression performance to a certain extent. Some methods use the support vector machine (SVM) algorithm to perform feature extraction and dimensionality reduction for high-dimensional gene expression data [ ].
The result of the SVM dimensionality reduction fuses the original high-dimensional gene features, and applying the Cox-ph model to the resulting low-dimensional gene expression features enables effective survival prediction. Some methods use an ensemble learning method such as Cox-Boost, which divides the parameters into several independent partitions for ensemble training and fitting [ ]. Some methods replace the original proportional hazards model with an ensemble mean cumulative hazard function (CHF), with the help of the nonlinear ensemble method of random forests [ ].

With the maturity of deep learning methods in different fields [ ], the Cox model based on artificial neural networks has received extensive attention from researchers. To the best of our knowledge, the earliest application of artificial neural networks to survival analysis is by Faraggi et al. [ ], who used four diagnoses as inputs to model survival in prostate cancer. Ching et al. then designed Cox-nnet, composed of a two-layer neural network, and successfully used it to make reasonable survival predictions for 10 different cancer gene expression datasets [ ]. Katzman et al. proposed a Cox regression model constructed from a multi-layer neural network, and formulated corresponding treatment recommendations based on the trained Cox model [ ]. Huang et al. [ ] proposed a survival analysis model for multi-omics data of breast cancer. They first constructed a co-expression feature matrix of mRNA and miRNA data through gene co-expression analysis to alleviate the learning difficulties and overfitting caused by high-dimensional data, and on this basis proposed a multi-omics survival analysis model for breast cancer [ ]. Kim et al. used a transfer learning approach, first pre-training a VAE model and then fine-tuning it on 20 TCGA datasets [ ].
Using this approach effectively alleviates the overfitting problem caused by the high gene dimensionality and the small number of training patients. Ramirez et al. [ ] focused on the degree of correlation between different genes. They constructed gene association graphs from correlation coefficient scores, PPI networks and other sources, introduced the correlation map as prior knowledge into the training of a GCN survival analysis model, and explained the important biological significance of the GCNN model in survival analysis [ ].

The high dimensionality and complex semantics of genetic data bring many challenges to feature extraction. So far, much work has tried to use new ideas to ensure rich feature extraction. The most classic method in deep learning is the multilayer perceptron (MLP), which learns linear correlations between different data and was long considered the default choice for processing sequential data. Many works extract features from high-dimensional genetic data based on MLPs [ ]. In the past decades, convolutional neural networks (CNNs) have shown excellent results in computer vision, and more and more studies have demonstrated their powerful ability to deal with spatial structural features. Many recent works have attempted to extract features from high-dimensional genetic data using CNNs and demonstrated the transferability of CNNs to multi-omics data. Rehman et al. proposed a densely connected neural network for N4-methylcytosine site prediction (DCNN-4mC), and this framework achieves the best performance for 4mC site identification across all species [ ]. Chen et al. used Lasso and CNN as target models and studied the trade-off between the defense power against MIA and the prediction accuracy of the target model under various privacy settings of DP [ ]. Torada et al. proposed a CNN-based program, called ImaGene, applied to genomic data for the detection and quantification of natural selection [ ]. Hao et al.
proposed a biologically interpretable deep learning model (PAGE-Net) that integrates histopathological images and genomic data [ ]. Jeong et al. proposed a new CNN-based tool called GMStool for selecting optimal marker sets and predicting quantitative phenotypes [ ]. Rehman et al. proposed m6A-NeuralTool, which uses a CNN to extract the important features from the one-hot encoded input sequence [ ]. At the same time, some works [ ] take advantage of novel feature extraction methods for sequence data and exhibit remarkable results.

Since the Generative Adversarial Network (GAN) was proposed by Goodfellow et al. [ ], much research has used this training strategy for data generation, reconstruction and dimensionality reduction tasks. GANs adopt an adversarial training strategy, which effectively learns the distribution of high-dimensional data and generates results sampled from the overall data distribution. In recent years, GANs have been widely used in protein and gene sequence research. Repecka et al. [ ] developed an attention-based GAN variant called ProteinGAN. ProteinGAN learns the evolutionary relationships of protein sequences directly from the complex multidimensional amino acid sequence space and creates highly diverse new sequence variants with natural physical properties, which demonstrates the potential of GANs to rapidly generate highly diverse functional proteins within the biological constraints allowed by the sequence space. Lin et al. [ ] proposed the DR-A framework, which implements dimensionality reduction for scRNA-seq data based on an adversarial variational autoencoder approach; compared with traditional methods, it obtains a low-dimensional representation of scRNA-seq data more accurately. Jiang et al. [ ] introduced a novel GAN framework for predicting disease genes from RNA-seq data; compared to state-of-the-art methods, the model improves the identification accuracy of disease genes.
In this paper, we propose a novel deep Self Attention Variational Autoencoder Cox Survival Analysis Model (SAVAE-Cox). The model takes advantage of an adversarial transfer learning strategy. In the adversarial pretraining stage, the generator is a variational autoencoder (VAE), which is trained jointly with the discriminator. Meanwhile, we introduce a novel self-attention mechanism [ ] to enhance the encoder's extraction of semantically relevant features from high-dimensional data. After the pretraining stage, the generator is able to learn the common features of 33 cancer transcriptome datasets. Next, the encoder of the generator is used to learn survival analysis on 16 cancers. In comparison with state-of-the-art models such as Cox-nnet and VAECox, our model achieved the highest concordance index on 10 TCGA cancer datasets. Finally, we performed feature analysis of SAVAE-Cox. We selected oncogenes and computed their correlations with hidden layer nodes, finding that the hidden layer nodes are highly correlated with oncogenes. We used these nodes to draw Kaplan–Meier plots, and found that these nodes significantly affected the survival of patients. Based on the correlation of hidden layer nodes with genes, we selected leader genes, which we found to be enriched in cancer-related pathways. According to our experiments, we conclude that our proposed SAVAE-Cox model has significant cancer prognostic ability. Our source code of SAVAE-Cox is available at (Last visited on 21 March 2022).

2. Materials and Methods

2.1. Dataset Preparation

In this work we used 17 datasets from the TCGA database.
These 17 datasets are bladder carcinoma (BLCA), breast carcinoma (BRCA), head and neck squamous cell carcinoma (HNSC), kidney renal cell carcinoma (KIRC), brain lower-grade glioma (LGG), liver hepatocellular carcinoma (LIHC), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), ovarian carcinoma (OV), stomach adenocarcinoma (STAD), cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC), colon adenocarcinoma (COAD), sarcoma (SARC), uterine corpus endometrial carcinoma (UCEC), prostate adenocarcinoma (PRAD), skin cutaneous melanoma (SKCM) and pan-cancer (PANCAN). A detailed description of the datasets and the download procedure is given in Appendix A.1.

Since there are a large number of empty genes and noise genes, some preprocessing steps are necessary to exclude redundant noise genes. In Table 1, we present the statistics of the 17 datasets used in this work, including the number of samples in each dataset and the clinical information of the 16 cancer types. We used the PANCAN dataset to draw scatter plots of 56,716 genes and observe their statistical distribution. Figure 1 shows the distribution of the mean and standard deviation of the RNA-seq data. From the standard deviation distribution in Figure 1, we observe that there is a valley between 0.278 and 0.403, and many genes have zero variance. Therefore, we defined genes with a standard deviation in the range (0, 0.4) as noise genes and removed them. At the same time, in order to eliminate the influence of empty data, we removed the RNA-seq genes whose mean value falls in (0, 0.8). We processed each RNA-seq dataset following these two strategies and selected the 20,034 genes common to all 16 cancer types. Before feeding the genes into our module, we performed feature-wise min-max normalization of each gene across the 16 cancer types.

2.2.
Dimensionality Reduction Pretraining Using GAN We adopted the strategy of a generative adversarial network (GAN) [ ] to design the pre-training stage. The pre-training process is shown in Figure 2a. The generator $G$ takes the genes $x_{in}$ as input and generates the reconstructed genes $x_{rec}$. The discriminator $D$ takes $x_{in}$ and $x_{rec}$ as input and outputs a value reflecting the authenticity of the gene. Through adversarial training with the discriminator, the encoding module gradually improves its feature extraction ability, generating a low-dimensional feature $z$ for $x_{rec}$ generation. After training, the encoder can be used for dimensionality reduction of $x_{in}$. Compared with computational dimensionality reduction methods, our dimensionality reduction is data-driven and can be adjusted adaptively according to different data characteristics. The network structure of the generator is a self-attention VAE (SAVAE) framework. For each $x_{in}$, SAVAE is described as:

$\mu(x_{in}) = w_\mu(\mathcal{L}_\alpha(\delta(w_h x_{in} + b_h))) + b_\mu$ (1)

$\sigma(x_{in})^2 = \exp(w_\nu(\mathcal{L}_\alpha(\delta(w_h x_{in} + b_h))) + b_\nu)$ (2)

$z = \mu + \sigma \cdot \zeta, \quad \zeta \sim N(0, 1)$ (3)

$x_{rec} = w_h z + b_h,$ (4)

where $\mu(x_{in})$ and $\sigma(x_{in})^2$ are the mean and the variance of the Gaussian distribution, $\delta$ is the activation function, and $\zeta$ is randomly sampled from the standard Gaussian distribution. We introduce a residual self-attention module (Figure 3) in the hidden layer of the encoder to enhance the fitting ability of the VAE [ ]. The self-attention mechanism [ ] can effectively learn the semantic correlation of high-dimensional features. It is denoted as:

$\mathcal{L}_\alpha(x) = sa(x) + \alpha \times x.$ (5)

In Equation (5), $\alpha$ represents a learnable parameter, which can adaptively adjust the weight of the residual connection, and $sa$ stands for the self-attention module [ ]. It is denoted as:

$sa(x) = \mathrm{softmax}(Q(x)^T \times K(x)) \times V(x),$ (6)

where $Q$, $K$ and $V$ represent the query, key and value obtained by applying three dense layers to the input $x$.
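The encoder side of SAVAE can be sketched as a plain forward pass. The following is a numpy illustration of the hidden layer, residual self-attention, mean/variance heads, and reparameterization described above; the paper uses PyTorch, and all function names, weight names, and layer sizes here are our own illustrative assumptions (we pick tanh as the unspecified activation $\delta$).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # sa(x) = softmax(Q K^T) V, with Q, K, V produced by three dense layers
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    return softmax(Q @ K.T) @ V

def savae_encode(x, p, alpha=1.0):
    """Residual self-attention VAE encoder for a single sample x of dim d.
    p is a dict of weight matrices; all shapes are illustrative assumptions."""
    h = np.tanh(x @ p["Wh"] + p["bh"])        # delta = activation (tanh here)
    # residual self-attention: L_alpha(h) = sa(h) + alpha * h
    h = self_attention(h[None, :], p["Wq"], p["Wk"], p["Wv"])[0] + alpha * h
    mu = h @ p["Wmu"] + p["bmu"]              # mean head
    log_var = h @ p["Wnu"] + p["bnu"]         # log-variance head
    # reparameterization: z = mu + sigma * zeta, zeta ~ N(0, 1)
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)
    return mu, log_var, z
```

A real implementation would batch the samples and learn the weights by backpropagation; this sketch only demonstrates the data flow of the encoder.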
The discriminator $D$ is a simple binary classification network whose framework is represented as follows:

$D(x) = \mathcal{L}^d_{out} \odot \mathcal{L}^d_n \odot \mathcal{L}^d_{n-1} \odot \cdots \odot \mathcal{L}^d_1(x),$ (7)

where each $\mathcal{L}^d_n : \mathbb{R}^N \to \mathbb{R}^{N/2}$ maps high-dimensional features to low-dimensional features through a linear transformation. Finally, the output feature of $\mathcal{L}^d_n$ is fed to a classification layer $\mathcal{L}^d_{out} : \mathbb{R}^{N/2^n} \to \mathbb{R}$. The classification layer generates a numerical value, which represents the judgment of the input gene. We introduce the Wasserstein loss [ ] to train the generator and discriminator jointly. For $G$, the goal is to synthesize reconstructions $x_{rec}$ similar enough that the discriminator cannot tell whether they are real or fake. For $D$, the goal is to distinguish between true genes $x_{in}$ and reconstructed genes $x_{rec}$ synthesized by $G$. The GAN loss is computed as:

$\mathcal{L}_{gan} = \mathbb{E}_{x_{in} \sim P_{data}(x_{in})}[D(x_{in})] - \mathbb{E}_{x_{in} \sim P_{data}(x_{in})}[D(G(x_{in}))] - \lambda_p \, \mathbb{E}_{\hat{x} \sim \hat{X}}\left[ \left( \| \nabla_{\hat{x}} D(\hat{x}) \|_2 - 1 \right)^2 \right],$ (8)

where $\lambda_p$ is the hyperparameter setting the gradient penalty, and $\hat{X}$ represents the overall sample space of $x_{in}$ and $x_{rec}$. The Wasserstein loss is minimized by the generator to reduce the Wasserstein distance between $x_{in}$ and $G(x_{in})$, which makes the overall sample distributions of $x_{in}$ and $G(x_{in})$ more similar. Furthermore, we introduce the Kullback–Leibler divergence [ ] used in training the VAE. Given the input genes $x_{in}$, it is denoted as:

$\mathcal{L}_{KL} = \sum_{i=0}^{n} \left( \mu(x_{in})_i^2 + \sigma(x_{in})_i^2 - \log\!\left(\sigma(x_{in})_i^2\right) - 1 \right),$ (9)

where $n$ represents the dimension of $z$. This error measures the distance between the latent code under the standard Gaussian prior and the posterior latent variable $P(z \mid x_{in})$ generated by the encoder. Introducing the Kullback–Leibler divergence into the generator guarantees the similarity between low-dimensional latent variables, which significantly improves the authenticity of the distributions of $x_{in}$ and $G(x_{in})$.
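The two loss terms above can be written down directly once the critic outputs and gradient norms are available. Below is a numpy sketch; in practice the gradient norms in the penalty term come from an autograd framework (the paper uses PyTorch), so here they are passed in as precomputed values. Function and argument names are our own. Note the KL formula follows the paper's form, which omits the conventional factor of 1/2.

```python
import numpy as np

def kl_divergence(mu, log_var):
    # KL between N(mu, sigma^2) and N(0, 1), summed over latent dims.
    # Follows the paper's formula (no 1/2 factor; the textbook KL has one).
    return np.sum(mu**2 + np.exp(log_var) - log_var - 1.0)

def wgan_gp_critic_loss(d_real, d_fake, grad_norms, lam_p=10.0):
    """Wasserstein critic objective with gradient penalty.
    d_real / d_fake: critic outputs on real and reconstructed genes.
    grad_norms: ||grad_xhat D(xhat)||_2 at interpolated samples
    (computed via autograd in a real implementation)."""
    penalty = lam_p * np.mean((grad_norms - 1.0) ** 2)
    return np.mean(d_real) - np.mean(d_fake) - penalty
```

The critic maximizes this quantity while the generator minimizes the Wasserstein term, matching the min-max objective of the pre-training stage.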
At the same time, we introduce an L1 loss to measure the similarity of each sample. The L1 loss is computed as:

$\mathcal{L}_{L1} = \| x_{rec} - x_{in} \|_1.$ (10)

Unlike the Wasserstein loss and the KL divergence, the L1 loss focuses on making $G$ learn to synthesize $x_{rec}$ similar to $x_{in}$ gene-wise. Therefore, the overall loss function for $G$ can be expressed as:

$\mathcal{L}_{total} = \mathcal{L}_{gan} + \lambda_1 \mathcal{L}_{KL} + \lambda_2 \mathcal{L}_{L1},$ (11)

where $\lambda_1$ and $\lambda_2$ are hyperparameters. We therefore aim to solve:

$\theta_G^* = \arg \min_G \max_D \mathcal{L}_{total}.$ (12)

2.3. Survival Analysis Based on Transfer Learning After the pre-training stage, we transfer the weights learned by the encoder to the survival analysis stage, as shown in Figure 2b. At the same time, we add a classification module to learn the hazard ratio. The hazard ratio measures the likelihood a patient has of dying; a higher hazard ratio indicates a higher likelihood that a patient will die. We adopt the training strategy of Cox-PH [ ] to train our module, which is denoted as:

$h(t \mid x_i) = h_0(t) \exp(w x_i),$ (13)

where $h_0(t)$ is the baseline hazard function, $w$ contains the trainable parameters of the module, and $x_i$ represents the risk factors of the patient. At this stage, $x_i$ is the low-dimensional feature $\mu(x_{in})$ output by the SAVAE encoder. We aim to solve:

$\theta^* = \arg \min_\theta \; - \sum_{C(i)=1} \left( w x_i - \log \sum_{t_j \geq t_i} \exp(w x_j) \right),$ (14)

where $t_i$ is the survival time of the $i$-th patient sample and $C(i)$ indicates the censoring status (the sum runs over samples with an observed event). 2.4. Experiment Settings We implemented our model using the PyTorch framework and trained and validated it on an Nvidia Tesla V100 (32 GB) GPU. We first pre-trained SAVAE using the pan-cancer database. To validate the pre-trained reconstruction results, we randomly divided the data into training and test sets with a ratio of 9:1. For survival analysis on the 16 cancer datasets, we trained our model using five-fold cross-validation.
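The Cox negative log partial likelihood used to fit the classification module in Section 2.3 can be sketched in a few lines of numpy. This is an illustrative re-statement of the standard Cox objective, not the paper's code; variable names are ours, and handling of tied event times (e.g. Breslow's approximation) is left aside.

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative log partial likelihood of the Cox model.
    risk:  predicted scores w*x_i for each patient
    time:  observed survival times t_i
    event: 1 if the death was observed (uncensored), 0 if censored.
    Sums -(risk_i - log sum_{t_j >= t_i} exp(risk_j)) over observed events."""
    risk, time, event = map(np.asarray, (risk, time, event))
    loss = 0.0
    for i in np.flatnonzero(event):
        at_risk = risk[time >= time[i]]          # the risk set at time t_i
        loss -= risk[i] - np.log(np.exp(at_risk).sum())
        # (a log-sum-exp would be used here for numerical stability)
    return loss
```

Minimizing this loss over the encoder output $\mu(x_{in})$ is exactly the transfer-learning stage: the pre-trained encoder supplies the features, and this objective fits the hazard head.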
Both stages are trained using the Adam optimizer. In the pre-training stage, the learning rate is 0.0001, the total number of epochs is 300, and the batch size is 256. Since loss fluctuations often occur when training a GAN, a learning-rate decay schedule is needed: the learning rate remains constant for the first 150 epochs and decays linearly to 0 over the last 150 epochs. In the SAVAE-Cox training phase, the learning rate is 0.001, the total number of training epochs is 20, and the batch size is 512. Note that there are several hyperparameters such as the learning rate, $\lambda_p$, $\lambda_1$ and $\lambda_2$; the selection methods and optimal settings are given in Appendix A.2. To ensure fair training, we use the same dataset settings to train and evaluate Cox-nnet, Cox-lasso, Cox-ridge and VAECox. 2.5. Evaluation Metric In this work, the evaluation metric we mainly use is the concordance index [ ], which is widely used for survival analysis models. It ranges from 0 to 1: a concordance index ≤ 0.5 means the model's survival predictions are no better than random, while values above 0.5 indicate increasingly better predictions. 3. Results 3.1. Performance of Dimensionality Reduction In Section 3.1, we evaluate the generator performance in the pre-training stage. All 990 samples in the pan-cancer test dataset were used in this section. We fed the test set of the pan-cancer dataset to the generator, which synthesized the reconstruction results. We then used UMAP to visualize the real genes and the reconstructed genes. Figure 4 shows the UMAP visualization.
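The concordance index of Section 2.5 can be implemented directly as the fraction of comparable patient pairs ordered correctly by predicted risk. Below is a minimal numpy version in the spirit of Harrell's C (pairs are comparable when the earlier time belongs to an observed event; risk ties count as 0.5); the paper does not specify its exact implementation, so take this as one common convention.

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C-index: a pair (i, j) is comparable when patient i has an
    observed event before time[j]; it is concordant when risk[i] > risk[j]."""
    time, event, risk = map(np.asarray, (time, event, risk))
    num = den = 0.0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                        # censored earlier times are not comparable
        for j in range(n):
            if time[j] > time[i]:           # i's event precedes j's observation
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5              # ties get half credit
    return num / den if den else 0.5
```

Perfectly ordered predictions give 1.0, perfectly reversed ones 0.0, and random predictions about 0.5, which is why 0.5 is the "no better than random" threshold mentioned above.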
We found that the distribution of the reconstructed genes closely coincides with the distribution of the real genes, which shows that:
• the generator can reconstruct $x_{rec}$ consistent with $x_{in}$;
• the reconstruction is based on the latent encoding $z$, indicating that the encoder can effectively generate a $z$ that retains rich features representative of $x_{in}$.
To further verify the superiority of our proposed dimensionality reduction method, we compared its performance with other dimensionality reduction methods. The comparison results are shown in Table 2 and the optimal hyperparameter settings in Table A1. First, we chose the autoencoder (AE) and denoising autoencoder (denoise-AE) to compare with our model. We also compared some classical dimensionality reduction methods based on feature selection, such as Chi2, Pearson, mutual information, the maximal information coefficient (MIC), and principal component analysis (PCA). Note that, unlike data-driven dimensionality reduction methods, the feature selection methods do not use pan-cancer data for pre-training but directly apply statistical methods on the 16 cancer types to select highly correlated features. We trained these models with five-fold cross-validation on the 16 cancer types and calculated the mean concordance index. Comparing these methods on the 16 cancer types, our dimensionality reduction method performed best on nine of them. Interestingly, the Chi2 feature-selection-based method outperforms the data-driven methods on the LUSC, CESC, and SKCM datasets. As shown in Table 1, the CESC and SKCM datasets have few samples. Meanwhile, we found significant overfitting while training the data-driven dimensionality reduction methods on the LUSC dataset. These characteristics reflect a shortcoming of data-driven dimensionality reduction methods: they are highly dependent on data. 3.2.
Performance of Survival Analysis We use the concordance index to evaluate the performance of the SAVAE-Cox model. Figure 5 shows the performance comparison on 16 cancer datasets. In Figure 5, we selected four models to compare with SAVAE-Cox: classic methods such as Cox-lasso and Cox-ridge, as well as state-of-the-art methods such as Cox-nnet and VAECox. Each model uses its optimal parameter settings to ensure a fair comparison; Table A1 lists the optimal parameters of these models. We then divided the 16 cancer types into training and validation sets by five-fold cross-validation, and for each cancer type we computed the mean concordance index on the validation set. Finally, we drew boxplots of the concordance index of the five models. From the experimental results we can see that the concordance index achieved by our model on 12 cancer types is significantly higher than that of the other four models. We performed further survival analysis on these 12 cancer types. Based on the hazard ratios of patient samples predicted by SAVAE-Cox, we calculated the mean reference hazard ratio for each of the 12 cancer types. For each cancer type, we defined patients with predicted hazard ratios above the average as the high-risk group and patients with predicted hazard ratios below the average as the low-risk group. In this way, we divided the patient samples into high-risk and low-risk groups for each cancer type and drew Kaplan–Meier (KM) survival curves according to these two groups. We adopted the same strategy to plot the KM survival curves of the Cox-nnet predictions. Figure 6 shows the comparison of KM survival curves of SAVAE-Cox and Cox-nnet on the 12 cancer types. Based on the results in Figure 6, we found that the hazard ratio predicted by SAVAE-Cox significantly affects patient survival.
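The risk-group split and KM curves described above can be sketched with the standard Kaplan–Meier product-limit estimator. This is an illustrative numpy version (the paper likely uses a library such as lifelines for the actual plots); function names are ours.

```python
import numpy as np

def split_by_mean_hazard(hazard):
    """High-risk group: predicted hazard ratio above the cohort mean."""
    hazard = np.asarray(hazard)
    return hazard > hazard.mean()

def kaplan_meier(time, event):
    """Product-limit estimator S(t). Returns (event_times, survival_probs).
    At each observed event time: S *= 1 - deaths / number_at_risk."""
    time, event = np.asarray(time), np.asarray(event)
    times, surv, s = [], [], 1.0
    for t in np.unique(time[event == 1]):
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        times.append(t)
        surv.append(s)
    return np.array(times), np.array(surv)
```

Computing `kaplan_meier` separately on the two groups returned by `split_by_mean_hazard` and plotting both step curves reproduces the kind of comparison shown in Figure 6.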
At the same time, by analyzing the p-values, we find that the hazard ratios predicted by SAVAE-Cox have a more significant impact on patient survival than those predicted by Cox-nnet. 3.3. Feature Analysis of SAVAE-Cox The incidence of BRCA is very high, so there are many related studies on this malignancy, and thanks to the professionalism of the TCGA project, abundant samples have been collected. Choosing BRCA for survival analysis therefore yields the most stable results and conclusions, so we take BRCA as an example for further correlation analysis between model features and genes. All 1031 patient samples in the BRCA dataset were used in this study. First, we analyzed the hidden-layer nodes that contribute the most to the prognosis according to their mean and variance, and selected the top 20 key prognostic hidden-layer nodes. Finally, we calculated a Pearson correlation matrix between each node and gene expression across the patient samples. From this correlation matrix, we can analyze the correlation between hidden-layer nodes and different genes. We selected 34 cancer-related genes from DISEASE ( , Last visited on 21 March 2022) and plotted them in Figure 7. These genes have high correlation scores in DISEASE, and some are oncogenes in ovarian cancer that significantly affect patient survival. A meta-analysis strongly supports the prognostic role of BCL2, as assessed by immunohistochemistry, in breast cancer [ ]. Germline variation in NEK10 is associated with breast cancer incidence [ ]. The progesterone receptor (PgR) is one of the most important prognostic and predictive immunohistochemical markers in breast cancer [ ]. The CCDC170 gene affects both breast cancer risk and progression [ ]. ESR1 amplification may be a common mechanism in proliferative breast disease and a very early genetic alteration in a large subset of breast cancers [ ].
The SLC4A7 variant rs4973768 is associated with breast cancer risk [ ]. ATM mutations that cause ataxia-telangiectasia are breast cancer susceptibility alleles [ ]. RAD51 is a potential biomarker and attractive drug target for metastatic triple-negative breast cancer [ ]. CTLA-4 is expressed and functional on human breast cancer cells, influencing the maturation and function of DCs in vitro [ ]. MYC deregulation contributes to breast cancer development and progression and is associated with poor outcomes [ ]. CDH1 mutations affect exclusively lobular breast cancer [ ]. Inherited mutations of BRCA1 are responsible for about 40–45% of hereditary breast cancers [ ]. EGFR is an oncogene in breast cancer [ ]. ERBB2 is an oncogene in breast cancer [ ]. Six different germline mutations in breast cancer families are likely to be due to BRCA2 [ ]. Rare mutations in XRCC2 increase the risk of breast cancer [ ]. Amino acid substitution variants of the XRCC1 and XRCC3 genes may contribute to breast cancer susceptibility [ ]. Overexpression of an ectopic H19 gene enhances the tumorigenic properties of breast cancer cells [ ]. CYP19A1 genetic variants are related to breast cancer survival in a large cohort of patients [ ]. BARD1 mutations may be regarded as cancer risk alleles [ ]. Master regulators of FGFR2 signalling are linked to breast cancer risk [ ]. Interestingly, Figure 7 shows that the 20 key nodes are significantly associated with these oncogenes, which suggests that exploring patient survival with our proposed model could serve as a new avenue for the discovery of oncogenes. To further explore the contribution of hidden-layer nodes to patient survival, we plotted Kaplan–Meier survival curves for the 20 key nodes. In this experiment, we again used all patient samples in BRCA. By analyzing the variation of each node across the overall patient sample, we divided the samples into two groups according to the mean value of each node.
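The node–gene Pearson correlations behind Figure 7 can be computed in one vectorized step by z-scoring both matrices over the sample axis. A numpy sketch follows; array names and shapes are our own assumptions.

```python
import numpy as np

def node_gene_correlation(nodes, genes):
    """Pearson correlation between each hidden node and each gene.
    nodes: (samples, n_nodes) hidden-layer activations
    genes: (samples, n_genes) expression matrix
    Returns an (n_nodes, n_genes) correlation matrix."""
    zn = (nodes - nodes.mean(axis=0)) / nodes.std(axis=0)
    zg = (genes - genes.mean(axis=0)) / genes.std(axis=0)
    # mean of products of z-scores = Pearson r
    return zn.T @ zg / len(nodes)
```

Selecting the columns for the 34 cancer-related genes and the rows for the 20 key nodes from this matrix yields exactly the heatmap data of Figure 7.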
Finally, we calculated the log-rank p-value for each node and drew the Kaplan–Meier survival curves. Figure 8 shows the survival curves of the first four key nodes. From the survival curves, we found that patient samples can be significantly divided into risk and safety groups according to the key nodes, and the larger the value of a node, the lower the survival probability of the patient, which shows that our hidden-layer nodes can serve as key prognostic factors. 3.4. Biological Function Analysis of Hidden Nodes To further explore the biological relevance of the hidden-layer nodes, we performed a gene set enrichment analysis (GSEA) using the KEGG pathways. We ranked genes according to their Pearson correlation and selected the leader genes for each key node. Based on the leader genes, we created the pathway association network (Figure 9). In Figure 9, each point represents a pathway, and the size of the point represents the number of genes enriched in that pathway, which indicates that the hidden layers of our module can effectively learn the biological functions associated with disease. 3.5. Ablation Study for SAVAE-Cox To verify the contribution of each component of our proposed model to survival prognosis, we performed ablation experiments on SAVAE-Cox. Briefly, we compared four models: Cox-nnet, SAVAE-Cox without pre-training, SAVAE-Cox without attention, and SAVAE-Cox. We trained the four models on the 16 cancer types, assigned ranks 1, 2, 3, and 4 in descending order of performance for each cancer, and plotted Figure 10. Note that optimal parameter settings were selected for all four models in the ablation study; Table A1 lists their parameter settings. By comparing SAVAE-Cox and SAVAE-Cox without attention, we find that using the self-attention module significantly improves the prognostic accuracy of the model.
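The log-rank p-values used for the per-node curves above come from the standard two-group log-rank test. The following numpy sketch computes the chi-square statistic (the p-value is then the survival function of a chi-square with 1 degree of freedom, e.g. via `scipy.stats.chi2.sf`); this is the textbook test, not the paper's code, and names are ours.

```python
import numpy as np

def logrank_statistic(time, event, group):
    """Two-group log-rank chi-square statistic.
    group: boolean array, True for group 1.
    At each event time: observed group-1 deaths minus expected under
    the null, with hypergeometric variance; statistic = (O-E)^2 / Var."""
    time, event, group = map(np.asarray, (time, event, group))
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at = time >= t                              # risk set at time t
        n, n1 = at.sum(), (at & group).sum()
        d = ((time == t) & (event == 1)).sum()      # deaths at t
        d1 = ((time == t) & (event == 1) & group).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e**2 / var if var > 0 else 0.0
```

Identical groups give a statistic near zero (large p-value), while clearly separated groups give a large statistic (small p-value), matching the interpretation of the p-values in Figure 8.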
This result shows that there is a latent semantic correlation among features of high-dimensional gene expression; this correlation cannot be effectively learned by traditional fully connected layers but can largely be discovered by attention-based methods. Comparing SAVAE-Cox and SAVAE-Cox without pre-training, we find it interesting that transfer learning fails to improve the prognostic results on five datasets: HNSC, KIRC, LUAD, LUSC, and OV. In general, however, the transfer learning strategy improves the C-index of the model. 4. Discussion In this work, we introduced a novel survival analysis model for different cancer types, which is the first attempt to improve overall survival analysis accuracy with the help of a self-attention mechanism. At the same time, we designed a data-driven dimensionality reduction method based on the ideas of transfer learning and GANs [ ] to further improve the prediction. Our results in Figure 5 suggest that SAVAE-Cox achieves the best performance in survival analysis prediction for 12 cancer types: BLCA, BRCA, HNSC, KIRC, LGG, LIHC, LUAD, LUSC, SARC, SKCM, STAD, and UCEC. In most of these 12 cancer types, multiple factors or complex genetic associations potentially influence patient survival, which shows that SAVAE-Cox can effectively discover such latent semantic correlations. However, for some small-sample datasets such as CESC, our model still cannot achieve optimal results. Using transfer learning can indeed alleviate the overfitting problem to a certain extent, but due to the deeper network, the model is not competitive with some classical methods on cancer datasets with sparse samples. Moreover, for cancer datasets such as PRAD, with an extremely uneven number of positive and negative samples, SAVAE-Cox is also not the best choice for survival analysis.
We believe the cause of this problem is likewise the increased complexity of the network, which leads to a larger differentiation in the fit to the two kinds of samples and thus a loss of accuracy. We proposed an adversarial VAE-based [ ] pre-training method for dimensionality reduction of high-dimensional genes. Unlike classical feature selection methods, our proposed dimensionality reduction method is data-driven: SAVAE-Cox can adaptively extract useful features from high-dimensional genes, and the method is applicable to any type of data distribution. From Table 2, we found that the data-driven methods showed the best mean concordance index on 13 cancer types, so the effect of data-driven dimensionality reduction is more significant. However, in some settings, traditional feature-selection-based dimensionality reduction may be more effective. For example, in dimensionality reduction tasks on small-sample datasets, data-driven methods cannot predict the whole low-dimensional latent space from a small number of sample distributions. According to the results in Table 2, the Chi2 method achieved the best results on small-sample datasets such as LUSC, CESC and SKCM. Meanwhile, we combined GAN [ ] and VAE [ ] to design a more powerful data-driven dimensionality reduction strategy. This method introduces the distribution constraints of the GAN, which improves the stability of dimensionality reduction while ensuring strong generation and fitting capabilities. Compared against the state-of-the-art data-driven dimensionality reduction methods in Table 2, our method performs better. Through the analysis of BRCA, our model can discover cancer-related genes and reveal biological functions.
From Figure 7 and Figure 8 we find that each key node of the hidden layer of the SAVAE-Cox model is a prognostic feature affecting patient survival. Furthermore, our model helps to explain and discover new cancer-related genes. Meanwhile, according to Figure 9, numerous node-related genes are enriched in cancer pathways such as the breast cancer pathway, the PI3K-Akt signaling pathway, the Rap1 signaling pathway, and the MAPK signaling pathway, confirming that the hidden layer of the model is highly related to biological function and reveals rich biological function signals. However, we found that overfitting of SAVAE-Cox is still a serious problem, and the performance of the model is not good enough on datasets with an imbalanced number of positive and negative samples. In future work, we will study data augmentation methods to address these problems and explore novel multi-head attention-based survival analysis frameworks. 5. Conclusions We introduced a brand new survival analysis model and performed survival prognosis on 16 cancer types. The prognosis was significantly improved when self-attention and transfer learning were integrated into SAVAE-Cox. Through further analysis of the hidden-layer features of SAVAE-Cox, we confirmed that these features play a significant role in cancer prognosis and in revealing biological function. In conclusion, SAVAE-Cox combines a self-attention mechanism with transfer learning and feature selection, providing a new prospect for future deep cancer prognosis. Author Contributions Conceptualization, X.M., X.W. and S.W.; methodology, X.M.; software, X.M.; validation, X.W., X.Z. and C.Z.; formal analysis, Z.Z.; investigation, K.Z.; resources, X.W.; data curation, X.M.; writing—original draft preparation, X.M.; writing—review and editing, X.W.; visualization, X.Z.; supervision, X.M.; project administration, X.W.; funding acquisition, X.W. and S.W.
All authors have read and agreed to the published version of the manuscript. This research was funded by the National Natural Science Foundation of China [Grant Nos. 61873280, 61873281, 61972416] and the Natural Science Foundation of Shandong Province [No. ZR2019MF012]. Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable. The authors are grateful to Shanchen Pang and Tao Song for advice and excellent technical assistance. Conflicts of Interest The authors declare no conflict of interest. Appendix A Appendix A.1. Descriptions and Download Details of the Datasets In this work, we used TCGA mRNA expression and clinical datasets of 16 cancer types: bladder carcinoma (BLCA), breast carcinoma (BRCA), head and neck squamous cell carcinoma (HNSC), kidney renal cell carcinoma (KIRC), brain lower-grade glioma (LGG), liver hepatocellular carcinoma (LIHC), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), ovarian carcinoma (OV), stomach adenocarcinoma (STAD), cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC), colon adenocarcinoma (COAD), sarcoma (SARC), uterine corpus endometrial carcinoma (UCEC), prostate adenocarcinoma (PRAD), and skin cutaneous melanoma (SKCM). In addition, the TCGA Pan-Cancer (PANCAN) dataset covering 33 cancer types was used in this study. The Pan-Cancer dataset is from the Pan-Cancer Atlas project, in which thirty-three cancer types from the TCGA database were analyzed. The GDC mRNA quantification analysis pipeline measures gene-level expression with STAR as raw read counts. Subsequently, the counts are augmented with several transformations, including fragments per kilobase of transcript per million mapped reads (FPKM), upper-quartile-normalized FPKM (FPKM-UQ), and transcripts per million (TPM). The main purpose of this normalization is to remove technical bias across different sequencing data.
More information on the GDC pipeline used to generate these data is at: (Last visited on 21 March 2022). In this work, we mainly use the data normalized by FPKM-UQ. UCSC Xena performed some preliminary preprocessing on the TCGA dataset, all of which is open access, which saves a lot of preprocessing work. All the datasets used in this work can be downloaded at (Last visited on 21 March 2022). The mRNA dataset obtained from Xena includes 56,716 genes and was normalized by $\log(FPKM + 1)$; many of the genes are empty, so further preprocessing work is required. Appendix A.2. Hyperparameter Selection In this work, we mainly chose the optimal hyperparameters based on empirical knowledge and Bayesian optimization methods. In the training process of SAVAE, the optimal hyperparameters were: $lr = 0.0001$, $\lambda_p = 10$, $\lambda_1 = 10$, $\lambda_2 = 10$, epochs = 300, batch size = 256. In training SAVAE-Cox, the optimal hyperparameters for survival analysis are shown in Table A1.

Table A1. Optimal hyperparameters of the survival analysis models.

Group | Model | Learning Rate | Epochs | Batch Size
Dimensionality reduction | Cox-Chi2 | 0.0005 | 15 | 1024
Dimensionality reduction | Cox-Pearson | 0.001 | 15 | 1024
Dimensionality reduction | Cox-MIC | 0.0005 | 15 | 1024
Dimensionality reduction | Cox-PCA | 0.0005 | 15 | 1024
Dimensionality reduction | Cox-AE | 0.001 | 15 | 1024
Dimensionality reduction | Cox-dnoiseAE | 0.0005 | 15 | 1024
Comparative experiment | Cox-lasso | 0.0005 | 15 | 1024
Comparative experiment | Cox-ridge | 0.001 | 15 | 1024
Comparative experiment | Cox-nnet | 0.001 | 15 | 1024
Comparative experiment | VAECox | 0.001 | 15 | 1024
Ablation study | Without pretrain | 0.001 | 20 | 512
Ablation study | Without attention | 0.001 | 20 | 512
Ours | SAVAE-Cox | 0.001 | 20 | 512

Figure 1. Scatter plot of Pan-Cancer data statistics distribution. The width of the scatter plot represents the number of patient samples. The solid black line represents the mean of the statistic. Figure 2. Overview of the SAVAE module. (a) Dimensionality reduction pretraining stage using GAN. (b) Survival analysis based on transfer learning. Figure 3. Network framework of the residual self-attention module. This network structure can learn the latent semantic correlation of genes. Figure 4. UMAP plot of real genes and reconstructed genes.
The reconstructed genes and the real genes are highly coincident in the low-dimensional space. Figure 5. Performance comparison of survival analysis on 16 cancer types. The "+" in each box plot denotes the mean concordance index. The mean concordance index of the hazard ratios predicted by our model was best on 12 cancer types. Figure 6. Kaplan–Meier survival curves using SAVAE-Cox and Cox-nnet on 12 cancer types. The smaller the p-value, the more significant the risk difference between the two groups predicted by the model. Figure 7. Pearson correlation heatmap of 34 cancer-related genes and 20 key nodes in the BRCA study. All 34 genes are highly associated with breast cancer. Figure 8. Kaplan–Meier survival curves for four key nodes in a hidden layer. The smaller the p-value, the more significant the effect of the node on patient survival. Figure 9. Pathway association network of the leader genes. Each point represents a pathway signal, and the gray solid lines represent associations between pathways. The size of a point represents the number of genes enriched in that pathway. Figure 10. Ablation study on 16 cancer types. The performance results of the four models were assigned ranks 1, 2, 3, 4 in descending order.

Table 1. Statistics of the datasets.

Cancer Type | Total Samples | Censored Samples | Time Range
PANCAN | 9895 | # | #
BLCA | 397 | 227 | 13–5050
BRCA | 1031 | 896 | 1–8605
HNSC | 489 | 302 | 1–6417
KIRC | 504 | 347 | 2–4537
LGG | 491 | 302 | 1–6423
LIHC | 359 | 183 | 1–3765
LUAD | 491 | 290 | 4–7248
LUSC | 463 | 327 | 1–5287
OV | 351 | 95 | 8–5481
STAD | 345 | 227 | 1–3720
CESC | 283 | 215 | 2–6408
COAD | 415 | 239 | 1–4270
SARC | 253 | 116 | 15–5723
UCEC | 524 | 404 | 1–6859
PRAD | 477 | 289 | 23–5024
SKCM | 312 | 239 | 14–1785

#: Not measured in this experiment. Table 2. Mean Concordance Index on 16 cancer types using different dimensionality reduction methods.
Cancer Type | AE | Denoise-AE | Chi2 | Pearson | MIC | PCA | SAVAE
BLCA | 0.642 | 0.643 | 0.582 | 0.552 | 0.624 | 0.545 | 0.654
BRCA | 0.704 | 0.709 | 0.651 | 0.500 | 0.492 | 0.488 | 0.724
HNSC | 0.649 | 0.642 | 0.522 | 0.590 | 0.531 | 0.489 | 0.651
KIRC | 0.725 | 0.731 | 0.620 | 0.698 | 0.673 | 0.547 | 0.723
LGG | 0.844 | 0.843 | 0.712 | 0.820 | 0.786 | 0.673 | 0.857
LIHC | 0.704 | 0.696 | 0.467 | 0.627 | 0.401 | 0.423 | 0.713
LUAD | 0.617 | 0.635 | 0.627 | 0.595 | 0.571 | 0.570 | 0.647
LUSC | 0.552 | 0.559 | 0.605 | 0.534 | 0.529 | 0.496 | 0.575
OV | 0.608 | 0.621 | 0.550 | 0.512 | 0.517 | 0.471 | 0.620
STAD | 0.602 | 0.616 | 0.556 | 0.572 | 0.531 | 0.476 | 0.610
CESC | 0.690 | 0.722 | 0.724 | 0.565 | 0.598 | 0.398 | 0.663
COAD | 0.631 | 0.638 | 0.489 | 0.533 | 0.521 | 0.496 | 0.728
SARC | 0.698 | 0.700 | 0.558 | 0.647 | 0.637 | 0.511 | 0.720
UCEC | 0.677 | 0.701 | 0.640 | 0.591 | 0.611 | 0.472 | 0.698
PRAD | 0.724 | 0.649 | 0.687 | 0.751 | 0.586 | 0.606 | 0.774
SKCM | 0.684 | 0.655 | 0.863 | 0.652 | 0.531 | 0.512 | 0.734

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Meng, X.; Wang, X.; Zhang, X.; Zhang, C.; Zhang, Z.; Zhang, K.; Wang, S. A Novel Attention-Mechanism Based Cox Survival Model by Exploiting Pan-Cancer Empirical Genomic Information. Cells 2022, 11, 1421. https://doi.org/10.3390/cells11091421
Entanglement Entropy of the Dirac Field
40th LQP Workshop "Foundations and Constructive Aspects of QFT"
Onirban Islam, June 24, 2017
We compute an upper bound for the relative entanglement entropy of the ground state of the massive Dirac field on a static spacetime. This entanglement measure is bounded by an exponential decay for regions lying apart in a spacelike compact Cauchy hypersurface.
Binomial coefficient explained

In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers $n \ge k \ge 0$ and is written $\binom{n}{k}$. It is the coefficient of the $x^k$ term in the polynomial expansion of the binomial power $(1+x)^n$; this coefficient can be computed by the multiplicative formula

$$\binom{n}{k} = \frac{n \times (n-1) \times \cdots \times (n-k+1)}{k \times (k-1) \times \cdots \times 1},$$

which using factorial notation can be compactly expressed as

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}.$$

For example, the fourth power of $1+x$ is

$$\begin{align} (1+x)^4 &= \tbinom{4}{0}x^0 + \tbinom{4}{1}x^1 + \tbinom{4}{2}x^2 + \tbinom{4}{3}x^3 + \tbinom{4}{4}x^4 \\ &= 1 + 4x + 6x^2 + 4x^3 + x^4, \end{align}$$

and the binomial coefficient $\tbinom{4}{2} = \tfrac{4 \times 3}{2 \times 1} = \tfrac{4!}{2!\,2!} = 6$ is the coefficient of the $x^2$ term.

Arranging the numbers $\binom{n}{0}, \binom{n}{1}, \ldots, \binom{n}{n}$ in successive rows for $n = 0, 1, 2, \ldots$ gives a triangular array called Pascal's triangle, satisfying the recurrence relation

$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}.$$

The binomial coefficients occur in many areas of mathematics, and especially in combinatorics. The symbol $\binom{n}{k}$ is usually read as "n choose k" because there are $\binom{n}{k}$ ways to choose an (unordered) subset of k elements from a fixed set of n elements. For example, there are $\binom{4}{2} = 6$ ways to choose 2 elements from $\{1,2,3,4\}$, namely $\{1,2\}$, $\{1,3\}$, $\{1,4\}$, $\{2,3\}$, $\{2,4\}$, and $\{3,4\}$.

The binomial coefficients can be generalized to $\binom{z}{k}$ for any complex number z and integer k ≥ 0, and many of their properties continue to hold in this more general form.

History and notation

Andreas von Ettingshausen introduced the notation $\binom{n}{k}$ in 1826, although the numbers were known centuries earlier (see Pascal's triangle). In about 1150, the Indian mathematician Bhaskaracharya gave an exposition of binomial coefficients in his book Līlāvatī.

Alternative notations include C(n, k), nCk, and similar variants, in all of which the C stands for combinations or choices. Many calculators use variants of the C notation because they can represent it on a single-line display. In this form the binomial coefficients are easily compared to k-permutations of n, written as P(n, k), etc.
Definition and interpretations

The first few binomial coefficients, on a left-aligned Pascal's triangle:

n\k   0   1   2   3   4   ⋯
0     1
1     1   1
2     1   2   1
3     1   3   3   1
4     1   4   6   4   1
⋮     ⋮   ⋮   ⋮   ⋮   ⋮   ⋱

For natural numbers (taken to include 0) n and k, the binomial coefficient $\binom{n}{k}$ can be defined as the coefficient of the monomial $X^k$ in the expansion of $(1+X)^n$. The same coefficient also occurs (if $k \le n$) in the binomial formula

$$(x+y)^n = \sum_{k=0}^{n}\binom{n}{k}x^k y^{n-k}$$

(valid for any elements x, y of a commutative ring), which explains the name "binomial coefficient".

Another occurrence of this number is in combinatorics, where it gives the number of ways, disregarding order, that k objects can be chosen from among n objects; more formally, the number of k-element subsets (or k-combinations) of an n-element set. This number can be seen as equal to the one of the first definition, independently of any of the formulas below to compute it: if in each of the n factors of the power $(1+X)^n$ one temporarily labels the term X with an index i (running from 1 to n), then each subset of k indices gives after expansion a contribution $X^k$, and the coefficient of that monomial in the result will be the number of such subsets. This shows in particular that $\binom{n}{k}$ is a natural number for any natural numbers n and k.

There are many other combinatorial interpretations of binomial coefficients (counting problems for which the answer is given by a binomial coefficient expression); for instance the number of words formed of n bits (digits 0 or 1) whose sum is k is given by $\binom{n}{k}$, while the number of ways to write $k = a_1 + a_2 + \cdots + a_n$ where every $a_i$ is a nonnegative integer is given by $\binom{n+k-1}{n-1}$. Most of these interpretations can be shown to be equivalent to counting k-combinations.

Computing the value of binomial coefficients

Several methods exist to compute the value of $\binom{n}{k}$ without actually expanding a binomial power or counting k-combinations.

Recursive formula

One method uses the recursive, purely additive formula

$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$$

for all integers n, k such that $1 \le k \le n-1$, with boundary values

$$\binom{n}{0} = \binom{n}{n} = 1$$

for all integers n ≥ 0.
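The recursive formula with its boundary values maps directly onto code; a minimal Python sketch (the function name and the memoization choice are mine, not part of the original exposition):

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # memoize: each (n, k) cell is computed once
def binom_rec(n, k):
    """Pascal's rule: C(n, k) = C(n-1, k-1) + C(n-1, k)."""
    if k < 0 or k > n:
        return 0                  # extended boundary: zero outside the triangle
    if k == 0 or k == n:
        return 1                  # boundary values C(n, 0) = C(n, n) = 1
    return binom_rec(n - 1, k - 1) + binom_rec(n - 1, k)

print(binom_rec(5, 2))  # 10
```

Without memoization the recursion revisits the same cells of Pascal's triangle exponentially often; with it, the cost is O(n·k).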
The formula follows from considering the set {1, 2, 3, …, n} and counting separately (a) the k-element groupings that include a particular set element, say "i", in every group (since "i" is already chosen to fill one spot in every group, we need only choose k − 1 from the remaining n − 1) and (b) all the k-groupings that don't include "i"; this enumerates all the possible k-combinations of n elements. It also follows from tracing the contributions to $X^k$ in $(1+X)^{n-1}(1+X)$. As there is zero $X^{n+1}$ or $X^{-1}$ in $(1+X)^n$, one might extend the definition beyond the above boundaries to include $\binom{n}{k} = 0$ when either k > n or k < 0. This recursive formula then allows the construction of Pascal's triangle, surrounded by white spaces where the zeros, or the trivial coefficients, would be.

Multiplicative formula

A more efficient method to compute individual binomial coefficients is given by the formula

$$\binom{n}{k} = \frac{n^{\underline{k}}}{k!} = \frac{n(n-1)(n-2)\cdots(n-k+1)}{k(k-1)(k-2)\cdots 1} = \prod_{i=1}^{k}\frac{n+1-i}{i},$$

where the numerator of the first fraction, $n^{\underline{k}}$, is expressed as a falling factorial power. This formula is easiest to understand for the combinatorial interpretation of binomial coefficients. The numerator gives the number of ways to select a sequence of k distinct objects, retaining the order of selection, from a set of n objects. The denominator counts the number of distinct sequences that define the same k-combination when order is disregarded. Due to the symmetry of the binomial coefficient with regard to k and n − k, calculation may be optimised by setting the upper limit of the product above to the smaller of k and n − k.

Factorial formula

Finally, though computationally unsuitable, there is the compact form, often used in proofs and derivations, which makes repeated use of the familiar factorial function:

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!} \quad \text{for } 0 \le k \le n,$$

where n! denotes the factorial of n. This formula follows from the multiplicative formula above by multiplying numerator and denominator by (n − k)!; as a consequence it involves many factors common to numerator and denominator.
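The multiplicative formula gives an efficient exact computation; a sketch in Python (names mine). Multiplying before each division keeps every intermediate value an integer, because after step i the running value equals $\binom{n-k+i}{i}$:

```python
def binom(n, k):
    """Binomial coefficient via the multiplicative formula, exact integers."""
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)                  # exploit the symmetry C(n,k) = C(n,n-k)
    result = 1
    for i in range(1, k + 1):
        result = result * (n - k + i) // i   # exact: i * C(n-k+i, i) is the product
    return result

print(binom(4, 2))   # 6
print(binom(52, 5))  # 2598960
```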
It is less practical for explicit computation (in the case that k is small and n is large) unless common factors are first cancelled (in particular since factorial values grow very rapidly). The formula does exhibit a symmetry that is less evident from the multiplicative formula (though it is from the definitions),

$$\binom{n}{k} = \binom{n}{n-k} \quad \text{for } 0 \le k \le n,$$

which leads to a more efficient multiplicative computational routine. Using the falling factorial notation,

$$\binom{n}{k} = \begin{cases} n^{\underline{k}}/k! & \text{if } k \le \frac{n}{2}, \\ n^{\underline{n-k}}/(n-k)! & \text{if } k > \frac{n}{2}. \end{cases}$$

Generalization and connection to the binomial series

See main article: Binomial series.

The multiplicative formula allows the definition of binomial coefficients to be extended^[2] by replacing n by an arbitrary number α (negative, real, complex) or even an element of any commutative ring in which all positive integers are invertible:

$$\binom{\alpha}{k} = \frac{\alpha^{\underline{k}}}{k!} = \frac{\alpha(\alpha-1)(\alpha-2)\cdots(\alpha-k+1)}{k(k-1)(k-2)\cdots 1} \quad\text{for } k\in\mathbb{N} \text{ and arbitrary } \alpha.$$

With this definition one has a generalization of the binomial formula (with one of the variables set to 1), which justifies still calling these binomial coefficients:

$$(1+X)^{\alpha} = \sum_{k=0}^{\infty}\binom{\alpha}{k}X^k.$$

This formula is valid for all complex numbers α and X with |X| < 1. It can also be interpreted as an identity of formal power series in X, where it actually can serve as definition of arbitrary powers of power series with constant coefficient equal to 1; the point is that with this definition all identities hold that one expects for exponentiation, notably

$$(1+X)^{\alpha}(1+X)^{\beta} = (1+X)^{\alpha+\beta} \quad\text{and}\quad \left((1+X)^{\alpha}\right)^{\beta} = (1+X)^{\alpha\beta}.$$

If α is a nonnegative integer n, then all terms with k > n are zero,^[3] and the infinite series becomes a finite sum, thereby recovering the binomial formula. However, for other values of α, including negative integers and rational numbers, the series is really infinite.

Pascal's triangle

See main article: Pascal's triangle and Pascal's rule.
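The generalized coefficient $\binom{\alpha}{k}$ for arbitrary real α follows the same falling-factorial product, now in floating point; a small illustration (the function name is mine):

```python
def gen_binom(alpha, k):
    """Generalized binomial coefficient: alpha*(alpha-1)*...*(alpha-k+1) / k!."""
    result = 1.0
    for i in range(k):
        result *= (alpha - i) / (i + 1)
    return result

# First coefficients of the binomial series (1+X)^(1/2) = sum_k gen_binom(0.5, k) X^k:
print([gen_binom(0.5, k) for k in range(4)])  # [1.0, 0.5, -0.125, 0.0625]
```

For a nonnegative integer α = n the loop hits the factor n − n = 0 at i = n, so all coefficients with k > n vanish and the series terminates, exactly as the text states.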
Pascal's rule is the important recurrence relation

$$\binom{n}{k} + \binom{n}{k+1} = \binom{n+1}{k+1},$$

which can be used to prove by mathematical induction that $\binom{n}{k}$ is a natural number for all integer n ≥ 0 and all integer k, a fact that is not immediately obvious from formula (1). To the left and right of Pascal's triangle, the entries (shown as blanks) are all zero.

Pascal's rule also gives rise to Pascal's triangle: Row number n contains the numbers $\binom{n}{k}$ for k = 0, …, n. It is constructed by first placing 1s in the outermost positions, and then filling each inner position with the sum of the two numbers directly above. This method allows the quick calculation of binomial coefficients without the need for fractions or multiplications. For instance, by looking at row number 5 of the triangle, one can quickly read off that

$$(x+y)^5 = x^5 + 5x^4y + 10x^3y^2 + 10x^2y^3 + 5xy^4 + y^5.$$

Combinatorics and statistics

Binomial coefficients are of importance in combinatorics, because they provide ready formulas for certain frequent counting problems:

- There are $\binom{n}{k}$ ways to choose k elements from a set of n elements. See Combination.
- There are $\binom{n+k-1}{k}$ ways to choose k elements from a set of n elements if repetitions are allowed. See Multiset.
- There are $\binom{n+k}{k}$ strings containing k ones and n zeros.
- There are $\binom{n+1}{k}$ strings consisting of k ones and n zeros such that no two ones are adjacent.

Binomial coefficients as polynomials

For any nonnegative integer k, the expression $\binom{t}{k}$ can be simplified and defined as a polynomial divided by k!:

$$\binom{t}{k} = \frac{t(t-1)(t-2)\cdots(t-k+1)}{k!};$$

this presents a polynomial in t with rational coefficients. As such, it can be evaluated at any real or complex number t to define binomial coefficients with such first arguments. These "generalized binomial coefficients" appear in Newton's generalized binomial theorem.

For each k, the polynomial $\binom{t}{k}$ can be characterized as the unique degree k polynomial p(t) satisfying p(0) = p(1) = ⋯ = p(k − 1) = 0 and p(k) = 1.
Its coefficients are expressible in terms of Stirling numbers of the first kind:

$$\binom{t}{k} = \sum_{i=0}^{k} s(k,i)\,\frac{t^i}{k!}.$$

The derivative of $\binom{t}{k}$ can be calculated by logarithmic differentiation:

$$\frac{d}{dt}\binom{t}{k} = \binom{t}{k}\sum_{i=0}^{k-1}\frac{1}{t-i}.$$

This can cause a problem when evaluated at integers from 0 to t − 1, but using identities below we can compute the derivative as:

$$\frac{d}{dt}\binom{t}{k} = \sum_{i=0}^{k-1}\frac{(-1)^{k-i-1}}{k-i}\binom{t}{i}.$$

Binomial coefficients as a basis for the space of polynomials

Over any field of characteristic 0 (that is, any field that contains the rational numbers), each polynomial p(t) of degree at most d is uniquely expressible as a linear combination $\sum_{k=0}^{d} a_k \binom{t}{k}$ of binomial coefficients. The coefficient $a_k$ is the kth difference of the sequence p(0), p(1), …, p(k). Explicitly,^[5]

$$a_k = \sum_{i=0}^{k}(-1)^{k-i}\binom{k}{i}p(i).$$

Integer-valued polynomials

See main article: Integer-valued polynomial.

Each polynomial $\binom{t}{k}$ is integer-valued: it has an integer value at all integer inputs t. (One way to prove this is by induction on k, using Pascal's identity.) Therefore, any integer linear combination of binomial coefficient polynomials is integer-valued too. Conversely, the difference formula above shows that any integer-valued polynomial is an integer linear combination of these binomial coefficient polynomials. More generally, for any subring R of a characteristic 0 field K, a polynomial in K[t] takes values in R at all integers if and only if it is an R-linear combination of binomial coefficient polynomials.

For example, the integer-valued polynomial 3t(3t + 1)/2 can be rewritten as

$$9\binom{t}{2} + 6\binom{t}{1} + 0\binom{t}{0}.$$

Identities involving binomial coefficients

The factorial formula facilitates relating nearby binomial coefficients. For instance, if k is a positive integer and n is arbitrary, then

$$k\binom{n}{k} = n\binom{n-1}{k-1}$$

and, with a little more work,

$$\binom{n-1}{k} - \binom{n-1}{k-1} = \frac{n-2k}{n}\binom{n}{k}.$$

We can also get

$$\binom{n-1}{k} = \frac{n-k}{n}\binom{n}{k}.$$

Moreover, the following may be useful:

$$\binom{n}{h}\binom{n-h}{k} = \binom{n}{k}\binom{n-k}{h} = \binom{n}{h+k}\binom{h+k}{h}.$$

For constant n, we have the following recurrence:

$$\binom{n}{k} = \frac{n-k+1}{k}\binom{n}{k-1}.$$

To sum up, we have

$$\binom{n}{k} = \binom{n}{n-k} = \frac{n-k+1}{k}\binom{n}{k-1} = \frac{n}{n-k}\binom{n-1}{k} = \frac{n}{k}\binom{n-1}{k-1}.$$

Sums of the binomial coefficients

The formula

$$\sum_{k=0}^{n}\binom{n}{k} = 2^n$$

says that the elements in the nth row of Pascal's triangle always add up to 2 raised to the nth power. This is obtained from the binomial theorem by setting x = 1 and y = 1.
The formula also has a natural combinatorial interpretation: the left side sums the number of subsets of {1, …, n} of sizes k = 0, 1, ..., n, giving the total number of subsets. (That is, the left side counts the power set of {1, …, n}.) However, these subsets can also be generated by successively choosing or excluding each element 1, ..., n; the n independent binary choices (bit-strings) allow a total of $2^n$ choices. The left and right sides are two ways to count the same collection of subsets, so they are equal.

The formulas

$$\sum_{k=0}^{n} k\binom{n}{k} = n\,2^{n-1} \qquad\text{and}\qquad \sum_{k=0}^{n} k^2\binom{n}{k} = (n+n^2)\,2^{n-2}$$

follow from the binomial theorem after differentiating with respect to x (twice for the latter) and then substituting x = y = 1.

The Chu–Vandermonde identity, which holds for any complex values m and n and any non-negative integer k, is

$$\sum_{j=0}^{k}\binom{m}{j}\binom{n-m}{k-j} = \binom{n}{k},$$

and can be found by examination of the coefficient of $x^k$ in the expansion of $(1+x)^m(1+x)^{n-m} = (1+x)^n$. When m = 1, it reduces to Pascal's rule.

In the special case n = 2m, k = m, the expansion becomes (as seen in Pascal's triangle at right)

$$\sum_{j=0}^{m}\binom{m}{j}^2 = \binom{2m}{m},$$

where the term on the right side is a central binomial coefficient.

Another form of the Chu–Vandermonde identity, which applies for any integers j, k, and n satisfying 0 ≤ j ≤ k ≤ n, is

$$\sum_{m=0}^{n}\binom{m}{j}\binom{n-m}{k-j} = \binom{n+1}{k+1}.$$

The proof is similar, but uses the binomial series expansion with negative integer exponents. When j = k, it gives the hockey-stick identity

$$\sum_{m=k}^{n}\binom{m}{k} = \binom{n+1}{k+1}$$

and its relative

$$\sum_{r=0}^{m}\binom{n+r}{r} = \binom{n+m+1}{m}.$$

Let F(n) denote the n-th Fibonacci number. Then

$$\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n-k}{k} = F(n+1).$$

This can be proved by induction on n using Pascal's rule, or by Zeckendorf's representation. A combinatorial proof is given below.
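Each of these summation identities is easy to spot-check numerically with Python's exact `math.comb` (the particular values n = 12, m = 5, k = 7 are arbitrary test choices):

```python
import math

n, m, k = 12, 5, 7

# Chu–Vandermonde: sum_j C(m,j) C(n-m,k-j) = C(n,k)
lhs = sum(math.comb(m, j) * math.comb(n - m, k - j) for j in range(k + 1))
assert lhs == math.comb(n, k)

# Hockey-stick: sum_{i=k}^{n} C(i,k) = C(n+1,k+1)
assert sum(math.comb(i, k) for i in range(k, n + 1)) == math.comb(n + 1, k + 1)

# Fibonacci diagonal: sum_k C(n-k,k) = F(n+1), with F(1) = F(2) = 1
def fib(n):
    a, b = 0, 1            # a = F(0), b = F(1)
    for _ in range(n):
        a, b = b, a + b
    return a
assert sum(math.comb(n - k, k) for k in range(n // 2 + 1)) == fib(n + 1)

print("all identities check out")
```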
Multisections of sums

For integers s and t such that $0 \le t < s$, series multisection gives the following identity for the sum of binomial coefficients:

$$\binom{n}{t}+\binom{n}{t+s}+\binom{n}{t+2s}+\ldots = \frac{1}{s}\sum_{j=0}^{s-1}\left(2\cos\frac{\pi j}{s}\right)^{n}\cos\frac{\pi(n-2t)j}{s}.$$

For small s, these series have particularly nice forms; for example,^[6]

$$\binom{n}{0}+\binom{n}{3}+\binom{n}{6}+\cdots = \frac{1}{3}\left(2^n + 2\cos\frac{n\pi}{3}\right)$$

$$\binom{n}{1}+\binom{n}{4}+\binom{n}{7}+\cdots = \frac{1}{3}\left(2^n + 2\cos\frac{(n-2)\pi}{3}\right)$$

$$\binom{n}{2}+\binom{n}{5}+\binom{n}{8}+\cdots = \frac{1}{3}\left(2^n + 2\cos\frac{(n-4)\pi}{3}\right)$$

$$\binom{n}{0}+\binom{n}{4}+\binom{n}{8}+\cdots = \frac{1}{2}\left(2^{n-1} + 2^{\frac{n}{2}}\cos\frac{n\pi}{4}\right)$$

$$\binom{n}{1}+\binom{n}{5}+\binom{n}{9}+\cdots = \frac{1}{2}\left(2^{n-1} + 2^{\frac{n}{2}}\sin\frac{n\pi}{4}\right)$$

$$\binom{n}{2}+\binom{n}{6}+\binom{n}{10}+\cdots = \frac{1}{2}\left(2^{n-1} - 2^{\frac{n}{2}}\cos\frac{n\pi}{4}\right)$$

$$\binom{n}{3}+\binom{n}{7}+\binom{n}{11}+\cdots = \frac{1}{2}\left(2^{n-1} - 2^{\frac{n}{2}}\sin\frac{n\pi}{4}\right)$$

Partial sums

Although there is no closed formula for partial sums $\sum_{j=0}^{k}\binom{n}{j}$ of binomial coefficients, one can again use Pascal's rule and induction to show that for k = 0, …, n − 1,

$$\sum_{j=0}^{k}(-1)^j\binom{n}{j} = (-1)^k\binom{n-1}{k},$$

with special case

$$\sum_{j=0}^{n}(-1)^j\binom{n}{j} = 0$$

for n > 0. This latter result is also a special case of the result from the theory of finite differences that for any polynomial P(x) of degree less than n,

$$\sum_{j=0}^{n}(-1)^j\binom{n}{j}P(j) = 0.$$

Differentiating the binomial expansion k times and setting x = −1 yields this for P(x) = x(x−1)⋯(x−k+1), when 0 ≤ k < n, and the general case follows by taking linear combinations of these.

When P(x) is of degree less than or equal to n,

$$\sum_{j=0}^{n}(-1)^j\binom{n}{j}P(j) = (-1)^n\,n!\,a_n,$$

where $a_n$ is the coefficient of degree n in P(x).

More generally,

$$\sum_{j=0}^{n}(-1)^j\binom{n}{j}P(m+(n-j)d) = d^n\,n!\,a_n,$$

where m and d are complex numbers. This follows immediately by applying the previous identity to the polynomial Q(x) := P(m + dx) instead of P(x), and observing that Q(x) still has degree less than or equal to n, and that its coefficient of degree n is $d^n a_n$.

The series

$$\frac{k-1}{k}\sum_{j=0}^{\infty}\frac{1}{\binom{j+x}{k}} = \frac{1}{\binom{x-1}{k-1}}$$

is convergent for k ≥ 2. This formula is used in the analysis of the German tank problem. It follows from

$$\frac{k-1}{k}\sum_{j=0}^{M}\frac{1}{\binom{j+x}{k}} = \frac{1}{\binom{x-1}{k-1}} - \frac{1}{\binom{M+x}{k-1}},$$

which is proved by induction on M.

Identities with combinatorial proofs

Many identities involving binomial coefficients can be proved by combinatorial means. For example, for nonnegative integers $n \ge q$, the identity

$$\sum_{k=q}^{n}\binom{n}{k}\binom{k}{q} = 2^{n-q}\binom{n}{q}$$

(which reduces to $\sum_{k} k\binom{n}{k} = n\,2^{n-1}$ when q = 1) can be given a double counting proof, as follows. The left side counts the number of ways of selecting a subset of [n] = {1, 2, …, n} with at least q elements, and marking q elements among those selected.
The right side counts the same thing, because there are $\binom{n}{q}$ ways of choosing a set of q elements to mark, and $2^{n-q}$ ways to choose which of the remaining n − q elements of [n] also belong to the subset.

In Pascal's identity

$$\binom{n-1}{k-1} + \binom{n-1}{k} = \binom{n}{k},$$

both sides count the number of k-element subsets of [n]: the two terms on the right side group them into those that contain element n and those that do not.

The identity $\sum_{k=0}^{n}\binom{n}{k}^2 = \binom{2n}{n}$ also has a combinatorial proof. Suppose you have 2n empty squares arranged in a row and you want to mark (select) n of them. There are $\binom{2n}{n}$ ways to do this. On the other hand, you may select your n squares by selecting k squares from among the first n and n − k squares from the remaining n squares; any k from 0 to n will work. This gives

$$\sum_{k=0}^{n}\binom{n}{k}\binom{n}{n-k} = \binom{2n}{n}.$$

Now apply $\binom{n}{k} = \binom{n}{n-k}$ to get the result.

If one denotes by F(i) the sequence of Fibonacci numbers, indexed so that F(1) = F(2) = 1, then the identity

$$\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n-k}{k} = F(n+1)$$

has the following combinatorial proof. One may show by induction that F(n+1) counts the number of ways that a strip of n squares may be covered by 2 × 1 and 1 × 1 tiles. On the other hand, if such a tiling uses exactly k of the 2 × 1 tiles, then it uses n − 2k of the 1 × 1 tiles, and so uses n − k tiles total. There are $\binom{n-k}{k}$ ways to order these tiles, and so summing this coefficient over all possible values of k gives the identity.

Sum of coefficients row

The number of k-combinations for all k, $\sum_{0\le k\le n}\binom{n}{k} = 2^n$, is the sum of the nth row (counting from 0) of the binomial coefficients. These combinations are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to $2^n - 1$, where each digit position is an item from the set of n.

Dixon's identity is

$$\sum_{k=-a}^{a}(-1)^k\binom{2a}{k+a}^3 = \frac{(3a)!}{(a!)^3}$$

or, more generally,

$$\sum_{k=-a}^{a}(-1)^k\binom{a+b}{a+k}\binom{b+c}{b+k}\binom{c+a}{c+k} = \frac{(a+b+c)!}{a!\,b!\,c!},$$

where a, b, and c are non-negative integers.
Continuous identities

Certain trigonometric integrals have values expressible in terms of binomial coefficients: For any $m, n \in \mathbb{N}$,

$$\int_{-\pi}^{\pi}\sin((2m-n)x)\sin^{n}(x)\,dx = \begin{cases}\dfrac{\pi}{2^{n-1}}(-1)^{m+(n+1)/2}\dbinom{n}{m}, & n \text{ odd},\\[4pt] 0, & \text{otherwise};\end{cases}$$

$$\int_{-\pi}^{\pi}\cos((2m-n)x)\sin^{n}(x)\,dx = \begin{cases}\dfrac{\pi}{2^{n-1}}(-1)^{m+(n/2)}\dbinom{n}{m}, & n \text{ even},\\[4pt] 0, & \text{otherwise}.\end{cases}$$

These can be proved by using Euler's formula to convert trigonometric functions to complex exponentials, expanding using the binomial theorem, and integrating term by term.

Congruences

If n is prime, then

$$\binom{n-1}{k} \equiv (-1)^k \pmod{n}$$

for every k with 0 ≤ k ≤ n − 1. More generally, this remains true if n is any number and k is such that all the numbers between 1 and k are coprime to n. Indeed, we have

$$\binom{n-1}{k} = \frac{(n-1)(n-2)\cdots(n-k)}{1\cdot 2\cdots k} \equiv \frac{(-1)(-2)\cdots(-k)}{1\cdot 2\cdots k} = (-1)^k \pmod{n}.$$

Generating functions

Ordinary generating functions

For a fixed n, the ordinary generating function of the sequence $\binom{n}{0}, \binom{n}{1}, \binom{n}{2}, \ldots$ is

$$\sum_{k=0}^{\infty}\binom{n}{k}x^k = (1+x)^n.$$

For a fixed k, the ordinary generating function of the sequence $\binom{0}{k}, \binom{1}{k}, \binom{2}{k}, \ldots$ is

$$\sum_{n=0}^{\infty}\binom{n}{k}y^n = \frac{y^k}{(1-y)^{k+1}}.$$

The bivariate generating function of the binomial coefficients is

$$\sum_{n=0}^{\infty}\sum_{k=0}^{n}\binom{n}{k}x^k y^n = \frac{1}{1-y-xy}.$$

A symmetric bivariate generating function of the binomial coefficients is

$$\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\binom{n+k}{k}x^k y^n = \frac{1}{1-x-y},$$

which is the same as the previous generating function after the substitution $x \to xy$.

Exponential generating function

A symmetric exponential bivariate generating function of the binomial coefficients is:

$$\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\binom{n+k}{k}\frac{x^k y^n}{(n+k)!} = e^{x+y}.$$

Divisibility properties

See main article: Kummer's theorem and Lucas' theorem.

In 1852, Kummer proved that if m and n are nonnegative integers and p is a prime number, then the largest power of p dividing $\binom{m+n}{m}$ equals $p^c$, where c is the number of carries when m and n are added in base p. Equivalently, the exponent of a prime p in $\binom{n}{k}$ equals the number of nonnegative integers j such that the fractional part of $k/p^j$ is greater than the fractional part of $n/p^j$. It can be deduced from this that $\binom{n}{k}$ is divisible by $n/\gcd(n,k)$. In particular therefore it follows that p divides $\binom{p^r}{s}$ for all positive integers r and s such that $s < p^r$. However this is not true of higher powers of p: for example 9 does not divide $\binom{9}{6}$.

A somewhat surprising result by David Singmaster (1974) is that any integer divides almost all binomial coefficients.
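Kummer's carry criterion stated above can be verified directly: count the carries when adding m and n in base p and compare with the exponent of p in $\binom{m+n}{m}$ (a quick numeric sketch; the helper names are mine):

```python
import math

def carries_base_p(m, n, p):
    """Count the carries when m and n are added in base p."""
    carries = carry = 0
    while m > 0 or n > 0 or carry:
        s = m % p + n % p + carry
        carry = 1 if s >= p else 0
        carries += carry
        m //= p
        n //= p
    return carries

def p_adic_valuation(x, p):
    """Largest e such that p**e divides x."""
    e = 0
    while x % p == 0:
        x //= p
        e += 1
    return e

# Kummer's theorem: v_p(C(m+n, m)) equals the carry count of m + n in base p.
for m in range(1, 30):
    for n in range(1, 30):
        assert carries_base_p(m, n, 3) == p_adic_valuation(math.comb(m + n, m), 3)
```

For example, adding 6 and 3 in base 3 (20₃ + 10₃) produces one carry, matching the single factor of 3 in $\binom{9}{6} = 84 = 2^2\cdot 3\cdot 7$.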
More precisely, fix an integer d and let f(N) denote the number of binomial coefficients $\binom{n}{k}$ with n < N such that d divides $\binom{n}{k}$. Then

$$\lim_{N\to\infty}\frac{f(N)}{N(N+1)/2} = 1.$$

Since the number of binomial coefficients $\binom{n}{k}$ with n < N is N(N + 1)/2, this implies that the density of binomial coefficients divisible by d goes to 1.

Binomial coefficients have divisibility properties related to least common multiples of consecutive integers. For example:^[10] $\binom{n+k}{k}$ divides $\frac{\operatorname{lcm}(n,n+1,\ldots,n+k)}{n}$, and $\binom{n+k}{k}$ is a multiple of $\frac{\operatorname{lcm}(n,n+1,\ldots,n+k)}{n\cdot\operatorname{lcm}\left(\binom{k}{0},\binom{k}{1},\ldots,\binom{k}{k}\right)}$.

Another fact: An integer n ≥ 2 is prime if and only if all the intermediate binomial coefficients

$$\binom{n}{1}, \binom{n}{2}, \ldots, \binom{n}{n-1}$$

are divisible by n.

Proof: When p is prime, p divides

$$\binom{p}{k} = \frac{p\cdot(p-1)\cdots(p-k+1)}{k\cdot(k-1)\cdots 1} \quad\text{for all } 0 < k < p,$$

because $\binom{p}{k}$ is a natural number and p divides the numerator but not the denominator. When n is composite, let p be the smallest prime factor of n and let k = n/p. Then 0 < p < n and

$$\binom{n}{p} = \frac{n(n-1)(n-2)\cdots(n-p+1)}{p!} = \frac{k(n-1)(n-2)\cdots(n-p+1)}{(p-1)!} \not\equiv 0 \pmod{n};$$

otherwise the numerator k(n−1)(n−2)⋯(n−p+1) would have to be divisible by n = k·p, and this can only be the case when (n−1)(n−2)⋯(n−p+1) is divisible by p. But n is divisible by p, so p does not divide n−1, n−2, …, n−p+1, and because p is prime, we know that p does not divide (n−1)(n−2)⋯(n−p+1), and so the numerator cannot be divisible by n.

Bounds and asymptotic formulas

The following bounds for $\binom{n}{k}$ hold for all values of n and k such that 1 ≤ k ≤ n:

$$\frac{n^k}{k^k} \le \binom{n}{k} \le \frac{n^k}{k!} < \left(\frac{n\cdot e}{k}\right)^{k}.$$

The first inequality follows from the fact that

$$\binom{n}{k} = \frac{n}{k}\cdot\frac{n-1}{k-1}\cdots\frac{n-k+1}{1}$$

and each of these k terms in this product is $\ge \frac{n}{k}$. A similar argument can be made to show the second inequality. The final strict inequality is equivalent to $e^k > k^k/k!$, which is clear since the RHS is a term of the exponential series $e^k = \sum_{j=0}^{\infty}k^j/j!$.

From the divisibility properties we can infer that

$$\frac{\operatorname{lcm}(n-k,\ldots,n)}{(n-k)\cdot\operatorname{lcm}\left(\binom{k}{0},\ldots,\binom{k}{k}\right)} \le \binom{n}{k} \le \frac{\operatorname{lcm}(n-k,\ldots,n)}{n-k},$$

where both equalities can be achieved.

The following bounds are useful in information theory:^[11]

$$\frac{2^{nH(k/n)}}{n+1} \le \binom{n}{k} \le 2^{nH(k/n)},$$

where H is the binary entropy function.
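The information-theoretic bounds just stated can be spot-checked numerically; here H is the binary entropy function in bits, and the values n = 100, k = 30 are an arbitrary test choice:

```python
import math

def H(p):
    """Binary entropy function in bits: H(p) = -p log2 p - (1-p) log2 (1-p)."""
    if p in (0, 1):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, k = 100, 30
lower = 2 ** (n * H(k / n)) / (n + 1)
upper = 2 ** (n * H(k / n))
assert lower <= math.comb(n, k) <= upper
print(f"{lower:.3e} <= {math.comb(n, k):.3e} <= {upper:.3e}")
```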
It can be further tightened to

$$\sqrt{\frac{n}{8k(n-k)}}\,2^{nH(k/n)} \le \binom{n}{k} \le \sqrt{\frac{n}{2\pi k(n-k)}}\,2^{nH(k/n)}$$

for all $1 \le k \le n-1$.

Both n and k large

Stirling's approximation yields the following approximation, valid when k and n − k both tend to infinity:

$$\binom{n}{k} \sim \sqrt{\frac{n}{2\pi k(n-k)}}\cdot\frac{n^n}{k^k(n-k)^{n-k}}.$$

Because the inequality forms of Stirling's formula also bound the factorials, slight variants on the above asymptotic approximation give exact bounds. In particular, when n is sufficiently large, one has

$$\binom{2n}{n} \sim \frac{4^n}{\sqrt{\pi n}}.$$

More generally, for m ≥ 2 and n ≥ 1 (again, by applying Stirling's formula to the factorials in the binomial coefficient),

$$\sqrt{m}\,\binom{mn}{n} \ge \frac{m^{m(n-1)+1}}{(m-1)^{(m-1)(n-1)}}.$$

If n is large and k is linear in n, various precise asymptotic estimates exist for the binomial coefficient $\binom{n}{k}$. For example, if $|n/2 - k| = o(n^{2/3})$ then

$$\binom{n}{k} \sim \binom{n}{n/2}\,e^{-d^2/(2n)} \sim \frac{2^n}{\sqrt{\tfrac{1}{2}n\pi}}\,e^{-d^2/(2n)},$$

where d = n − 2k. If n is large and k = o(n) (that is, if k/n → 0), then

$$\binom{n}{k} \sim \left(\frac{ne}{k}\right)^{k}(2\pi k)^{-1/2}\exp\left(-\frac{k^2}{2n}(1+o(1))\right),$$

where again o is the little o notation.^[14]

Sums of binomial coefficients

A simple and rough upper bound for the sum of binomial coefficients can be obtained using the binomial theorem:

$$\sum_{i=0}^{k}\binom{n}{i} \le \sum_{i=0}^{k}n^i\cdot 1^{k-i} \le (1+n)^k.$$

More precise bounds are given by

$$\frac{1}{\sqrt{8n\varepsilon(1-\varepsilon)}}\cdot 2^{H(\varepsilon)\cdot n} \le \sum_{i=0}^{k}\binom{n}{i} \le 2^{H(\varepsilon)\cdot n},$$

valid for all integers n > k ≥ 1 with $\varepsilon := k/n \le 1/2$.

Generalized binomial coefficients

The infinite product formula for the gamma function also gives an expression for binomial coefficients

$$(-1)^k\binom{z}{k} = \binom{-z+k-1}{k} = \frac{1}{\Gamma(-z)}\,\frac{1}{(k+1)^{z+1}}\prod_{j=k+1}^{\infty}\frac{\left(1+\frac{1}{j}\right)^{-z-1}}{1-\frac{z+1}{j}},$$

which yields the asymptotic formulas

$$\binom{z}{k} \approx \frac{(-1)^k}{\Gamma(-z)\,k^{z+1}} \qquad\text{and}\qquad \binom{z+k}{k} = \frac{k^z}{\Gamma(z+1)}\left(1+\frac{z(z+1)}{2k}+\mathcal{O}\left(k^{-2}\right)\right)$$

as $k\to\infty$. This asymptotic behaviour is contained in the approximation

$$\binom{z+k}{k} \approx \frac{e^{z(H_k-\gamma)}}{\Gamma(z+1)}$$

as well. (Here $H_k$ is the k-th harmonic number and $\gamma$ is the Euler–Mascheroni constant.)

Further, the asymptotic formulas

$$\frac{\binom{z+k}{j}}{\binom{k}{j}} \to \left(1-\frac{j}{k}\right)^{-z} \qquad\text{and}\qquad \frac{\binom{z+k}{z+j}}{\binom{k}{j}} \to \left(\frac{k}{j}\right)^{z}$$

hold true whenever $k\to\infty$ and $j/k\to x$ for some complex number x.

Generalization to multinomials

See main article: Multinomial theorem.
Binomial coefficients can be generalized to multinomial coefficients defined to be the number:

$$\binom{n}{k_1, k_2, \ldots, k_r} = \frac{n!}{k_1!\,k_2!\cdots k_r!},$$

where $k_1 + k_2 + \cdots + k_r = n$. While the binomial coefficients represent the coefficients of $(x+y)^n$, the multinomial coefficients represent the coefficients of the polynomial $(x_1 + x_2 + \cdots + x_r)^n$. The case r = 2 gives binomial coefficients:

$$\binom{n}{k_1, k_2} = \binom{n}{k_1, n-k_1} = \binom{n}{k_1} = \binom{n}{k_2}.$$

The combinatorial interpretation of multinomial coefficients is distribution of n distinguishable elements over r (distinguishable) containers, each containing exactly $k_i$ elements, where i is the index of the container.

Multinomial coefficients have many properties similar to those of binomial coefficients, for example the recurrence relation:

$$\binom{n}{k_1, k_2, \ldots, k_r} = \binom{n-1}{k_1-1, k_2, \ldots, k_r} + \binom{n-1}{k_1, k_2-1, \ldots, k_r} + \cdots + \binom{n-1}{k_1, k_2, \ldots, k_r-1}$$

and symmetry:

$$\binom{n}{k_1, k_2, \ldots, k_r} = \binom{n}{k_{\sigma_1}, k_{\sigma_2}, \ldots, k_{\sigma_r}},$$

where $(\sigma_i)$ is a permutation of (1, 2, ..., r).

Taylor series

Using Stirling numbers of the first kind, the binomial coefficient $\binom{z}{k}$ admits a series expansion around any arbitrarily chosen point $z_0$.

Binomial coefficient with n = 1/2

The definition of the binomial coefficients can be extended to the case where n is real and k is integer. In particular, the following identity holds for any non-negative integer k:

$$\binom{1/2}{k} = \binom{2k}{k}\,\frac{(-1)^{k+1}}{2^{2k}(2k-1)}.$$

This shows up when expanding $\sqrt{1+x}$ into a power series using the Newton binomial series:

$$\sqrt{1+x} = \sum_{k\ge 0}\binom{1/2}{k}x^k.$$

Products of binomial coefficients

One can express the product of two binomial coefficients as a linear combination of binomial coefficients:

$$\binom{z}{m}\binom{z}{n} = \sum_{k=0}^{\min(m,n)}\binom{m+n-k}{k,\,m-k,\,n-k}\binom{z}{m+n-k},$$

where the connection coefficients are multinomial coefficients. In terms of labelled combinatorial objects, the connection coefficients represent the number of ways to assign m + n − k labels to a pair of labelled combinatorial objects (of weight m and n respectively) that have had their first k labels identified, or glued together to get a new labelled combinatorial object of weight m + n − k. (That is, to separate the labels into three portions to apply to the glued part, the unglued part of the first object, and the unglued part of the second object.) In this regard, binomial coefficients are to exponential generating series what falling factorials are to ordinary generating series.
The product of all binomial coefficients in the nth row of the Pascal triangle is given by the formula:

$$\prod_{k=0}^{n}\binom{n}{k} = \prod_{k=1}^{n}k^{2k-n-1}.$$

Partial fraction decomposition

The partial fraction decomposition of the reciprocal is given by

$$\frac{1}{\binom{z}{n}} = \sum_{i=1}^{n}(-1)^{i-1}\binom{n}{i}\frac{i}{z-n+i}, \qquad \frac{1}{\binom{z+n}{n}} = \sum_{i=1}^{n}(-1)^{i-1}\binom{n}{i}\frac{i}{z+i}.$$

Newton's binomial series

See main article: Binomial series.

Newton's binomial series, named after Sir Isaac Newton, is a generalization of the binomial theorem to infinite series:

$$(1+z)^{\alpha} = \sum_{n=0}^{\infty}\binom{\alpha}{n}z^n = 1 + \binom{\alpha}{1}z + \binom{\alpha}{2}z^2 + \cdots.$$

The identity can be obtained by showing that both sides satisfy the differential equation $(1+z)f'(z) = \alpha f(z)$. The radius of convergence of this series is 1. An alternative expression is

$$\frac{1}{(1-z)^{\alpha+1}} = \sum_{n=0}^{\infty}\binom{n+\alpha}{n}z^n,$$

where the identity

$$\binom{n}{k} = (-1)^k\binom{k-n-1}{k}$$

is applied.

Multiset (rising) binomial coefficient

Binomial coefficients count subsets of prescribed size from a given set. A related combinatorial problem is to count multisets of prescribed size with elements drawn from a given set, that is, to count the number of ways to select a certain number of elements from a given set with the possibility of selecting the same element repeatedly. The resulting numbers are called multiset coefficients;^[16] the number of ways to "multichoose" (i.e., choose with replacement) k items from an n element set is denoted $\left(\!\binom{n}{k}\!\right)$. To avoid ambiguity and confusion with n's main denotation in this article, let f = n + k − 1 and r = n.
Multiset coefficients may be expressed in terms of binomial coefficients by the rule

$$\binom{f}{k} = \left(\!\binom{r}{k}\!\right) = \binom{r+k-1}{k}.$$

One possible alternative characterization of this identity is as follows: We may define the falling factorial as

$$(f)_k = f^{\underline{k}} = (f-k+1)\cdots(f-3)\cdot(f-2)\cdot(f-1)\cdot f,$$

and the corresponding rising factorial as

$$r^{(k)} = r^{\overline{k}} = r\cdot(r+1)\cdot(r+2)\cdot(r+3)\cdots(r+k-1);$$

so, for example,

$$17\cdot18\cdot19\cdot20\cdot21 = (21)_5 = 21^{\underline{5}} = 17^{\overline{5}} = 17^{(5)}.$$

Then the binomial coefficients may be written as

$$\binom{f}{k} = \frac{(f)_k}{k!} = \frac{(f-k+1)\cdots(f-2)\cdot(f-1)\cdot f}{k!},$$

while the corresponding multiset coefficient is defined by replacing the falling with the rising factorial:

$$\left(\!\binom{r}{k}\!\right) = \frac{r^{(k)}}{k!} = \frac{r\cdot(r+1)\cdots(r+k-1)}{k!}.$$

Generalization to negative integers n

For any n,

$$\begin{align}\binom{-n}{k} &= \frac{-n\cdot-(n+1)\cdots-(n+k-2)\cdot-(n+k-1)}{k!}\\ &= (-1)^k\,\frac{n\cdot(n+1)\cdot(n+2)\cdots(n+k-1)}{k!}\\ &= (-1)^k\binom{n+k-1}{k}\\ &= (-1)^k\left(\!\binom{n}{k}\!\right).\end{align}$$

In particular, binomial coefficients evaluated at negative integers are given by signed multiset coefficients. In the special case n = 1, this reduces to $\binom{-1}{k} = (-1)^k$.

For example, if n = −4 and k = 7, then r = 4 and f = 10:

$$\begin{align}\binom{-4}{7} &= \frac{-10\cdot-9\cdot-8\cdot-7\cdot-6\cdot-5\cdot-4}{1\cdot2\cdot3\cdot4\cdot5\cdot6\cdot7}\\ &= (-1)^7\,\frac{4\cdot5\cdot6\cdot7\cdot8\cdot9\cdot10}{1\cdot2\cdot3\cdot4\cdot5\cdot6\cdot7}\\ &= (-1)^7\left(\!\binom{4}{7}\!\right) = -\binom{10}{7}.\end{align}$$

Two real or complex valued arguments

The binomial coefficient is generalized to two real or complex valued arguments using the gamma function or beta function via

$$\binom{x}{y} = \frac{\Gamma(x+1)}{\Gamma(y+1)\,\Gamma(x-y+1)} = \frac{1}{(x+1)\,\mathrm{B}(y+1,\,x-y+1)}.$$

This definition inherits the following additional property from Γ:

$$\binom{x}{y}\cdot\binom{y}{x} = \frac{\sin((x-y)\pi)}{(x-y)\pi}.$$

The resulting function has been little-studied, apparently first being graphed in (Fowler 1996). Notably, many binomial identities fail: $\binom{n}{m} = \binom{n}{n-m}$ but $\binom{-n}{m} \neq \binom{-n}{-n-m}$ for n positive (so −n negative).
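Both the multiset rule and the negative-integer identity are easy to confirm with exact integer arithmetic (a quick sketch; the helper names `multichoose` and `binom_neg` are mine):

```python
import math

def multichoose(n, k):
    """Number of multisets of size k from n elements: ((n choose k)) = C(n+k-1, k)."""
    return math.comb(n + k - 1, k)

def binom_neg(n, k):
    """C(-n, k) via the multiplicative formula; divisions stay exact."""
    result = 1
    top = -n
    for i in range(1, k + 1):
        result = result * (top - i + 1) // i
    return result

assert multichoose(4, 7) == 120
assert binom_neg(4, 7) == -120          # C(-4, 7) = (-1)^7 * ((4 multichoose 7))
assert binom_neg(1, 5) == (-1) ** 5     # C(-1, k) = (-1)^k
```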
The behavior is quite complex, and markedly different in various octants (that is, with respect to the x and y axes and the line y = x), with the behavior for negative x having singularities at negative integer values and a checkerboard of positive and negative regions:

- In the octant $0 \le y \le x$ it is a smoothly interpolated form of the usual binomial, with a ridge ("Pascal's ridge").
- In the octant $0 \le x \le y$ and in the quadrant $x \ge 0,\ y \le 0$ the function is close to zero.
- In the quadrant $x \le 0,\ y \ge 0$ the function is alternatingly very large positive and negative on the parallelograms with vertices $(-n, m+1)$, $(-n, m)$, $(-n-1, m-1)$, $(-n-1, m)$.
- In another octant the behavior is again alternatingly very large positive and negative, but on a square grid; in the remaining octant it is close to zero, except for near the singularities.

Generalization to q-series

The binomial coefficient has a q-analog generalization known as the Gaussian binomial coefficient.

Generalization to infinite cardinals

The definition of the binomial coefficient can be generalized to infinite cardinals by defining:

$$\binom{\alpha}{\beta} = \left|\,\{B \subseteq A : |B| = \beta\}\,\right|,$$

where A is some set with cardinality $\alpha$. One can show that the generalized binomial coefficient is well-defined, in the sense that no matter what set we choose to represent the cardinal number $\alpha$, $\binom{\alpha}{\beta}$ will remain the same. For finite cardinals, this definition coincides with the standard definition of the binomial coefficient.

Assuming the Axiom of Choice, one can show that $\binom{\alpha}{\alpha} = 2^{\alpha}$ for any infinite cardinal $\alpha$.

References

- Ash, Robert B. (1990) [1965]. Information Theory. Dover Publications. ISBN 0-486-66521-6.
- Benjamin, Arthur T.; Quinn, Jennifer J. (2003). Proofs that Really Count: The Art of Combinatorial Proof. Dolciani Mathematical Expositions 27. Mathematical Association of America. ISBN 978-0-88385-333-7.
- Bryant, Victor (1993). Aspects of Combinatorics. Cambridge University Press. ISBN 0-521-41974-3.
- Flum, Jörg; Grohe, Martin (2006). Parameterized Complexity Theory. Springer. ISBN 978-3-540-29952-3.
- Fowler, David (January 1996). "The Binomial Coefficient Function". American Mathematical Monthly 103 (1): 1–17. doi:10.2307/2975209.
- Goetgheluck, P. (1987). "Computing Binomial Coefficients". American Mathematical Monthly 94 (4): 360–365. doi:10.2307/2323099.
- Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). Concrete Mathematics: A Foundation for Computer Science (2nd ed.). Reading, MA: Addison-Wesley. pp. 154–155. ISBN 0-201-55802-5.
- Gradshteyn, I. S.; Ryzhik, I. M. (2014). Table of Integrals, Series, and Products (8th ed.). Academic Press. ISBN 978-0-12-384933-5.
- Knuth, Donald E. (1997). The Art of Computer Programming, Volume 1: Fundamental Algorithms (3rd ed.). Addison-Wesley. pp. 52–74. ISBN 0-201-89683-4.
- Singmaster, David (1974). "Notes on binomial coefficients. III. Any integer divides almost all binomial coefficients". Journal of the London Mathematical Society 8 (3): 555–560. doi:10.1112/jlms/s2-8.3.555.
- Shilov, G. E. (1977). Linear Algebra. Dover Publications. ISBN 978-0-486-63518-7.

External links

- Granville, Andrew (1997). "Arithmetic Properties of Binomial Coefficients I. Binomial coefficients modulo prime powers". CMS Conf. Proc. 20: 151–162.

Notes and References

1. See Graham, Knuth & Patashnik (1994), which also defines $\binom{n}{k} = 0$ for $k < 0$.
Alternative generalizations, such as to two real or complex valued arguments using the Gamma function, assign nonzero values to $\binom{n}{k}$ for $k < 0$, but this causes most binomial coefficient identities to fail, and thus is not widely used by the majority of definitions. One such choice of nonzero values leads to the aesthetically pleasing "Pascal windmill" in Hilton, Holton and Pedersen, Mathematical Reflections: In a Room with Many Mirrors, Springer, 1997, but causes even Pascal's identity to fail (at the origin).

2. When α is a nonnegative integer n, all terms with k > n are zero because one factor of the numerator is n − n = 0; the k-th term is then a zero product.

3. Muir, Thomas (1902). "Note on Selected Combinations". Proceedings of the Royal Society of Edinburgh.

4. This can be seen as a discrete analog of Taylor's theorem. It is closely related to Newton's polynomial. Alternating sums of this form may be expressed as the Nörlund–Rice integral.

5. Ruiz, Sebastian (1996). "An Algebraic Identity Leading to Wilson's Theorem". The Mathematical Gazette 80 (489): 579–582. doi:10.2307/3618534. arXiv:math/0406086.

6. Farhi, Bakir (2007). "Nontrivial lower bounds for the least common multiple of some finite sequence of integers". Journal of Number Theory 125 (2): 393–411. doi:10.1016/j.jnt.2006.10.017. arXiv:0803.0290.

7. Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory. Hoboken, NJ: Wiley. ISBN 0-471-24195-4.

8. MacWilliams, F. J.; Sloane, N. J. A. (1981). The Theory of Error-Correcting Codes (3rd ed.). North-Holland. p. 16. ISBN 0-444-85009-0.

9. Spencer, Joel; Florescu, Laura (2014). Asymptopia. Student Mathematical Library 71. AMS. p. 66. ISBN 978-1-4704-0904-3.

10. Spencer, Joel; Florescu, Laura (2014). Asymptopia. Student Mathematical Library 71. AMS. p. 59. ISBN 978-1-4704-0904-3.
{"url":"https://everything.explained.today/Binomial_coefficient/","timestamp":"2024-11-06T11:24:15Z","content_type":"text/html","content_length":"125816","record_id":"<urn:uuid:8e2e8e46-f25d-4bc9-bdc1-aa9e1b424468>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00009.warc.gz"}
Contrastive Model

Contrastive models learn to compare. They use special objective functions such as [[NCE]] and [[Mutual Information]].

Noise Contrastive Estimation: NCE

The noise contrastive estimation (NCE) objective function is

$$ \mathcal L = \mathbb E_{x, x^{+}, x^{-}} \left[ - \ln \frac{ C(x, x^{+})}{ C(x,x^{+}) + C(x,x^{-}) } \right], $$

where $x^{+}$ represents data similar to $x$, $x^{-}$ represents data dissimilar to $x$, and $C(\cdot, \cdot)$ is a function to compute the similarities. For example, we can use

$$ C(x, x^{+}) = e^{ f(x)^T f(x^{+}) }, $$

so that the objective function becomes

$$ \mathcal L = \mathbb E_{x, x^{+}, x^{-}} \left[ - \ln \frac{ e^{ f(x)^T f(x^{+}) } }{ e^{ f(x)^T f(x^{+}) } + e^{ f(x)^T f(x^{-}) } } \right]. $$

Mutual Information

Mutual information is defined as

$$ I(X;Y) = \mathbb E_{p_{XY}} \ln \frac{P_{XY}}{P_X P_Y}. $$

In the case that $X$ and $Y$ are independent variables, we have $P_{XY} = P_X P_Y$, thus $I(X;Y) = 0$. This makes sense, as there would be no "mutual" information if the two variables are independent of each other.

Entropy and Cross Entropy

Mutual information is closely related to entropy. A simple decomposition shows that

$$ I(X;Y) = H(X) - H(X\mid Y), $$

which is the reduction of the entropy of $X$ once $Y$ is known.

Planted by L Ma. L Ma (2021). 'Contrastive Model', Datumorphism, 08 April. Available at: https://datumorphism.leima.is/wiki/machine-learning/contrastive-models/contrastive/.
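To make the objective concrete, here is a minimal, dependency-free sketch of the NCE loss above, using the exponential similarity C(x, x+) = exp(f(x)^T f(x+)). The function names and toy feature vectors are illustrative, not part of the original note.

```python
import math

def similarity(fx, fx_other):
    # C(x, x') = exp(f(x)^T f(x')), with f(x) given as a feature vector
    dot = sum(a * b for a, b in zip(fx, fx_other))
    return math.exp(dot)

def nce_loss(fx, fx_pos, fx_neg):
    # L = -ln( C(x, x+) / (C(x, x+) + C(x, x-)) )
    c_pos = similarity(fx, fx_pos)
    c_neg = similarity(fx, fx_neg)
    return -math.log(c_pos / (c_pos + c_neg))

# A positive pair that is more similar than the negative pair
# yields a loss below ln(2), the value when both are equally similar.
loss = nce_loss([1.0, 0.0], [0.9, 0.1], [-0.9, 0.1])
```

When the positive pair is no more similar than the negative one, the loss sits at ln 2; training drives it toward 0 by making C(x, x+) dominate the denominator.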
{"url":"https://datumorphism.leima.is/wiki/machine-learning/contrastive-models/contrastive/?ref=footer","timestamp":"2024-11-12T02:13:07Z","content_type":"text/html","content_length":"114054","record_id":"<urn:uuid:1a9eaa12-fefc-449a-ac60-f53987a11ddf>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00039.warc.gz"}
iNZight for Data Analysis

Add to Plot

The Add to Plot window lets you customise your graph, explore patterns in the data, and highlight important features. The options available to you depend on the type of variables you have selected and the plot drawn. At the top of the Add to Plot window is a drop down that lets you select from one of the available panels:

Customise Plot Appearance

This panel allows you to control most of the visual aspects of the graph, including size and colour.

General Appearance

This section is common across all graphs, with the exception of the options listed in the "Plot type" dropdown.

• Background colour: you can customise the background colour of your graphs to suit your preference, or make certain features easier to distinguish. You can specify colours using names or HEX codes.
• Overall size scale: this lets you adjust the overall size of everything drawn on the screen. This works as a baseline, so all other size settings will be multiplied by this value.

Size / Point Size

Control point sizes on scatter plots and dot plots, hexagons on hexbin plots, and bar-widths on histograms.

Size by variable: On scatter plots, you can choose a numeric variable to be used to resize the points.

Colour / Point Colour

Change the colour of points, hexagons, and bars. Colours can be chosen from the list, or you can type your own. See choosing colours for more information.

Colour by variable: (scatter plots / dot plots / hexbin plots / one-way barplots). Select a variable to code colours. Several choices for colour palettes will be offered.

• Categorical variable: Each category will be assigned a colour (depending on your chosen palette). Scatter plots and dot plots: points are coloured according to the level of the variable. Hexbin plots: hexagons are coloured depending on the proportion of points in each category that fall within that hexagon. One-way bar plots: the bars are segmented depending on the proportions of each category and coloured accordingly.
• Numeric variable: a linear scale is set up from the lowest to the highest value, and each point is coloured based on its value. If you click the Use rank checkbox, the colours will be based on quantiles, rather than absolute values. This can be helpful if you have a few very large values of the chosen variable. NOTE: if you choose a numeric variable on a hexbin plot, iNZight will convert it to a 4-level categorical variable.

Cycle levels: You can cycle through the various levels (or quantiles for numeric variables), making each in turn stand out from the others, by clicking the left and right arrows. You can adjust the number of quantiles to use by adjusting the number in the box.

Point Symbols

Select different symbols to use on scatter plots and dot plots.

Symbol by variable: You can also choose a categorical variable with 5 or fewer levels, and iNZight will give each its own symbol. This is particularly useful when creating colour-blind friendly plots, or for black-and-white printed graphs.

WARNING: transparency can make drawing very slow if you have a large dataset. Diamonds and triangles can take a long time to draw if you have a big data set (at least on Windows). Circles (the default) and squares seem to be OK. iNZight will turn off transparency if you change the plotting symbol to one of these; you can put transparency back, but iNZight may become unresponsive while it tries to draw the graph.

Trend Lines and Curves

On scatter plots, hexbin plots, and grid density plots, you can add various types of lines to the graph.

Trend Curves

Trend curves are fitted using linear regression models, which minimise the overall vertical distance between points and the line. The formula for the line (which can be found by clicking the Get Summary button after adding a trend line) depends on the type of curve fitted.
In these equations, y is "Variable 1", the primary variable of interest, and x is "Variable 2". The Greek letter \(\beta\) ("beta") represents an unknown value, and iNZight picks the best value to make the line fit the data as well as possible. The subscripts (\(\beta_0, \beta_1, \ldots\)) simply indicate different unknown values.

• Linear: a straight line fitted through the points:
$$ y = \beta_0 + \beta_1 x $$
It has an intercept, \( \beta_0 \), which represents the value of \(y\) when \(x=0\), and a slope, \(\beta_1\), which describes the change in \(y\) for a unit change in \(x\).
• Quadratic: a curved line with at most one bend:
$$ y = \beta_0 + \beta_1 x + \beta_2 x^2 $$
• Cubic: a curved line with up to two bends:
$$ y = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 $$

For more details, see this page of the Data to Insight course.

Smoothers

These curves are fitted without the restrictions of the regression models shown above. iNZight uses a loess smoother, and you can control the degree of "smoothness" using the slider. If you have a large data set, you can add quantile smoothers, which draw lines at not only the middle (median) of the data, but also at the quartiles, 25% and 75%, and the 10% and 90% quantiles if the sample size is large enough.

Join Points

In some cases, you may have ordered data for which it makes sense to connect points, for example a time series. If you specified a categorical colour by variable, you can optionally connect points within each level of that variable.

Trend Line Options

If you specified a categorical colour by variable, you can fit a trend curve through each level of that variable. By default, the lines will have the same slope but different intercepts, so they will be parallel. If you want the lines to be completely independent, you can uncheck the parallel trend lines box and each level will also be given its own slope.

Line Width Multiplier will adjust the thickness of lines.
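The "Linear" trend curve above is an ordinary least-squares fit of y = β0 + β1·x, minimising vertical distances. The following sketch shows that fit from first principles; it is a plain illustration in Python, not iNZight code.

```python
def linear_trend(xs, ys):
    """Ordinary least-squares fit of y = b0 + b1*x (the 'Linear' trend curve)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b1 = sxy / sxx              # slope: change in y per unit change in x
    b0 = mean_y - b1 * mean_x   # intercept: fitted y at x = 0
    return b0, b1

# Points lying exactly on y = 2 + 3x recover those coefficients:
b0, b1 = linear_trend([0, 1, 2, 3], [2, 5, 8, 11])
```

The quadratic and cubic curves follow the same least-squares idea with extra x² and x³ columns in the design matrix.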
The line of equality is useful when the units of the two variables are the same (for example, "before" and "after" measurements).

Axes and Labels

Axis Labels

Type your own labels to override those automatically created by iNZight. To remove labels, just enter one or more spaces into the box and press enter.

Axis Features

For scatter plots only.

• Jitter: this adds a small amount of noise to each value, which helps to separate out discrete variables (for example, age).
• Rugs: this adds a small line on the axis for each point, making it easier to read values for extreme points.

Axis Limits

This allows you to adjust the limits of the axes, effectively allowing you to "zoom in" on specific regions.

Number of bars

On bar charts, you can adjust the total number of bars shown at a time. This is useful if a categorical factor has too many levels to display all at once.

Identify Points

This allows users to label points in one of three ways (text labels, colour labels, and related points), using one of three methods (clicking, selecting values, or labelling extreme points).

How do you want to label points?

• Text labels allows you to select a variable from the data set with which to label selected points.
• Colour points allows selected points to be filled in with a selected colour. (See ways of choosing colours.)
• With the same level of allows you to easily locate points related to the selected points. The main use of this would be to locate points with the same level of a chosen factor (e.g., if you are exploring survey data, you may wish to identify observations in the same cluster), or you may want to retain points selected over multiple graphs (e.g., the Gap Minder data set, included in the Data folder, has several observations of countries over multiple years; in this case you can label certain countries and track them over time). Note: when you do this, the point you clicked will be highlighted for easier reference.

How do you want to select points?
• Clicking with the mouse: After clicking "Click to Locate ...", you can then click a point on the plot. iNZight will then label it using the options you defined above. You can select multiple points by clicking the button again and locating new points without losing the current selection. Note: due to the way the software works, you may see a "Busy" cursor after you click the "Click to Locate ..." button. This will not go away until you click a point, so go ahead and click points, ignoring the cursor.
• Select by value of ...: this will allow you to select a variable to use to select points. If the variable is a factor (or a numeric with fewer than 20 unique values) a slider will appear allowing you to quickly select the levels you wish to label. If the variable is numeric, or you want to select multiple levels, you can click the "Select levels ..." button to do so.
• Extreme values: this will use a very simple algorithm that selects the points "farthest away" from the bulk of the data. This will display a slider, allowing you to select more or fewer points as desired.

Once you have selected the points, you can optionally save the selection (e.g., if you want to track the same observations over multiple plots) by clicking the "Save these points ..." button. From there you can use the "With the same level of" option to select related points, etc. The statistic used is Mahalanobis' distance.
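As a rough sketch of how Mahalanobis' distance can flag "farthest away" points in the two-variable case: compute the mean and sample covariance, then the distance of each point from the mean in covariance-adjusted units. The threshold and function name are illustrative; iNZight's actual selection algorithm may differ in its details.

```python
def mahalanobis_extremes(points, threshold):
    """Flag 2-D points whose Mahalanobis distance from the mean exceeds
    `threshold` -- a simple stand-in for 'farthest from the bulk of the data'."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Sample covariance matrix entries
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    det = sxx * syy - sxy * sxy
    extremes = []
    for i, (x, y) in enumerate(points):
        dx, dy = x - mx, y - my
        # d^2 = [dx dy] S^{-1} [dx dy]^T for the 2x2 covariance matrix S
        d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
        if d2 ** 0.5 > threshold:
            extremes.append(i)
    return extremes
```

Unlike plain Euclidean distance, this accounts for the spread and correlation of the data, so a point is "extreme" relative to the cloud's shape rather than its raw coordinates.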
{"url":"https://www.stat.auckland.ac.nz/~wild/iNZight/user_guides/plot_options/?topic=add_to_plot","timestamp":"2024-11-11T10:37:33Z","content_type":"text/html","content_length":"30964","record_id":"<urn:uuid:2905f6cd-0f4d-44a7-a07d-28ce64e62f5f>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00299.warc.gz"}
Perform some computations with vectors in R^3 - Stumbling Robot

(The exercise gives vectors in R^3 and asks, in three parts, to: (1) compute the components of a vector expression; (2) prove a stated identity; (3) find values satisfying a given condition. The vector formulas and the worked solutions were rendered as images and are not recoverable from this copy.)
{"url":"https://www.stumblingrobot.com/2016/04/25/permform-computations-vectors-r3/","timestamp":"2024-11-07T08:56:21Z","content_type":"text/html","content_length":"60381","record_id":"<urn:uuid:99a14736-6b89-4bf4-8918-6e8703dc8108>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00342.warc.gz"}
How Water Is Supposed To Be Consumed

Tap water has mainly been around since the discovery that chlorine can kill bacteria in water. Notice that when tap water is promoted or researched, the conclusion is that ''tap water is safe in parts of X, Y, and Z''. Nowhere in this conclusion does it say ''tap water is healthy for you and it should be consumed regularly''. Tap water is promoted with the same words as Coca-Cola and Redbull: ''safe to drink''. However, what is the best way to consume water in order to increase your quality of life? At the end of the day, we are 80% water. Here at NVTV Blogs, we ran some tests with tap water in London, and used universal indicator paper to determine the pH of the running tap water. To our surprise, we found that the pH was what it should be (pH 7-8). Therefore, we can conclude that the water that was tested was not acidic, which is a good start. More information can be found here. Which qualities would we be looking for when drinking water? We want our water to be bacteria-free, pure, and non-toxic. There is scientific research that concludes that the molecular structure of water is very important when consuming it into the body. We are all familiar with water's three phases: liquid, ice, and vapour. There is now a theory about a fourth phase. This phase is called structured water (also known as vortexed or hexagonal water, exclusion zone water, or H3O2). "Structured water is also found in natural, pristine flowing rivers, streams, lakes and waterfalls all over the planet, and is essential for the cellular health of not just us, but of all living things," says Rob Gourlay, an expert in biological research and water-structure science. This new form of water is found in the cells of the human body and is more of a gel-like substance than it is a liquid. However, the mainstream media is professing that this is a scam, and that it is not to be trusted. Structured water, however, can be found in humans.
TWE writes: Why do plants look so vibrant after a rain, but when they are watered with groundwater, it just isn't the same? The answer is in the molecular structure of rainwater. You probably think of the formula for water as H2O, but the water in rain, rivers, and springs is different: it is H3O2, a form of water that is beyond liquid, solid and vapor. This 4th phase of water, between liquid and ice, is known by many names including structured water, EZ water, gel water, vortexed water, and hexagonal water, and is known for its ability to store and release energy. H3O2, or structured water, is the reason why plants love rain; it is energized and hydrating, as nature intended it to be. Why would nature provide us with a certain structure of water, and then governments encourage a totally different type of water? Something doesn't make sense. Yogis and ancient spiritual practitioners believe that the molecular structure and energetic vibrations of water need to be in line with the person who is drinking it. They say that ''the best way to drink water is with your own hands. It is important that before you drink water, you touch it first''; this is so that the vibrations of the water match the vibrations of your body and can harmonise with your energy, which is being emitted through your hands. This also ensures that the water matches the temperature of your hands and means that your body can more easily digest the water. In spiritual practice, one of the best ways to block a nation's energy field is to contaminate its water supply, which in turn will block thoughts and create brain fog in anybody who consumes the water. If you are actively waking up with headaches, try to drink less tap water. Furthermore, try to consume fewer drinks containing CO2 (fizzy drinks such as soda), as your body actively tries to remove CO2 (carbon dioxide).
When we breathe, our bodies take in oxygen and remove carbon dioxide; therefore it would be unwise to fill our bodies with drinks containing carbon dioxide. A famous yogi said the following: ''It is not just about drinking liquid water; you must eat high water content foods. If you eat a fruit, it is nearly ninety percent water. Vegetables are over seventy percent water. Your food must have a minimum of seventy percent water content. If you eat food with very low water content, it goes and gets stuck in your stomach like concrete. If you eat dry food and then drink water, it does not work. When you consume food, it must be at least level with the percentage of water content in your own body. This is why vegetables and fruits must be a part of your diet. Fruit has nearly eighty to ninety percent water, which is why it is the best thing to consume.''

3 Comments

Never trust your drinking water. It is full of VOCs. These are cancer-causing agents. No one has been watching the water for decades. Take your own precautions for your household.

I drink pure, clean water in the form of DISTILLED water. My family & I all have water distillers & have used them for +8 years. It resolved my daughter's eczema (she was 2 at the time) within a month & I lost weight, had a clearer mind, more energy & could actually taste again. Now saying that I could taste again didn't mean I didn't have taste before, because I was like you, but now I can easily smell & taste the toxins in liquids & foods because my palate is clean. Don't eat foods like McD anymore (pure revolting substance) & can't have tap water (notice they don't say "drinking water") anymore because the fluoride gives…

1 capsule of coral calcium can raise tap water pH from 7.4 to 9.8 and 9.9 when left in the daylight for a few hours
{"url":"https://www.nvtvblogs.com/post/how-water-is-supposed-to-be-consumed","timestamp":"2024-11-07T11:04:27Z","content_type":"text/html","content_length":"1050481","record_id":"<urn:uuid:745d4429-ce61-40b9-bc04-ba1805a06308>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00779.warc.gz"}
Specifying Inputs and Transfer Functions

Input variables and transfer functions for them can be specified using the INPUT= option in the ESTIMATE statement. The variables used in the INPUT= option must be included in the CROSSCORR= list in the previous IDENTIFY statement. If any differencing is specified in the CROSSCORR= list, then the differenced variable is used as the input to the transfer function.

General Syntax of the INPUT= Option

The general syntax of the INPUT= option is

ESTIMATE ... INPUT=( transfer-function variable ... )

The transfer function for an input variable is optional. The name of a variable by itself can be used to specify a pure regression term for the variable. If specified, the syntax of the transfer function is

S $ (L_1,1, L_1,2, ...)(L_2,1, ...).../(L_i,1, ...)...

S is the number of periods of time delay (lag) for this input series. Each term in parentheses specifies a polynomial factor with parameters at the lags specified by the L_i,j values. The terms before the slash (/) are numerator factors. The terms after the slash (/) are denominator factors. All three parts are optional. Commas can optionally be used between input specifications to make the INPUT= option more readable. The $ sign after the shift is also optional.

Except for the first numerator factor, each of the terms indicates a factor of the form

(1 − c_1 B^(L_i,1) − c_2 B^(L_i,2) − ...)

where B is the backshift operator and each coefficient c_j is a numerator (ω) or denominator (δ) parameter to be estimated.

The form of the first numerator factor depends on the ALTPARM option. By default, the constant 1 in the first numerator factor is replaced with a free parameter ω_0.

Alternative Model Parameterization

When the ALTPARM option is specified, the parameter ω_0 is factored out so that it multiplies the entire transfer function, and the first numerator factor has the same form as the other factors. The ALTPARM option does not materially affect the results; it just presents the results differently. Some people prefer to see the model written one way, while others prefer the alternative representation. Table 7.9 illustrates the effect of the ALTPARM option.
Table 7.9: The ALTPARM Option

  INPUT= Option              ALTPARM   Model
  INPUT=((1 2)(12)/(1)X);    No        (ω_0 − ω_1 B − ω_2 B^2)(1 − ω_3 B^12) / (1 − δ_1 B) X_t
  INPUT=((1 2)(12)/(1)X);    Yes       ω_0 (1 − ω_1 B − ω_2 B^2)(1 − ω_3 B^12) / (1 − δ_1 B) X_t

Differencing and Input Variables

If you difference the response series and use input variables, take care that the differencing operations do not change the meaning of the model. For example, if you want to fit the model

(1 − B)(1 − B^12) Y_t = [ω_0 / (1 − δ_1 B)] (1 − B)(1 − B^12) X_t + (1 − θ_1 B) a_t

then the IDENTIFY statement must read

identify var=y(1,12) crosscorr=x(1,12);
estimate q=1 input=(/(1)x) noconstant;

If instead you specify the differencing as

identify var=y(1,12) crosscorr=x;
estimate q=1 input=(/(1)x) noconstant;

then the model being requested is

(1 − B)(1 − B^12) Y_t = [ω_0 / (1 − δ_1 B)] X_t + (1 − θ_1 B) a_t

which is a very different model. The point to remember is that a differencing operation requested for the response variable specified by the VAR= option is applied only to that variable and not to the noise term of the model.
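To illustrate what a rational transfer function does to an input series, here is a small sketch that applies numerator and denominator lag polynomials in the (ω_0 − ω_1 B − ...)/(1 − δ_1 B − ...) parameterization described above, with a pure delay. It uses consecutive lags 0, 1, 2, ... rather than the factored lag lists, treats pre-sample values as zero, and is an illustration only, not SAS code.

```python
def apply_transfer_function(x, omega, delta, shift=0):
    """Apply (omega[0] - omega[1]*B - ...) / (1 - delta[0]*B - ...) B^shift
    to the input series x, treating pre-sample values as zero."""
    y = []
    for t in range(len(x)):
        val = 0.0
        # Numerator: +omega_0 at the shift, then -omega_j at further lags
        for j, w in enumerate(omega):
            k = t - shift - j
            if k >= 0:
                val += (w if j == 0 else -w) * x[k]
        # Denominator: (1 - d_1 B - ...) y = num  =>  y_t = num + sum d_j y_{t-j}
        for j, d in enumerate(delta, start=1):
            if t - j >= 0:
                val += d * y[t - j]
        y.append(val)
    return y
```

A pure gain omega=[2.0] doubles the series; a one-period shift delays it; a denominator term delta=[0.5] spreads an impulse out as a geometrically decaying response, which is exactly the "dynamic" behavior the denominator factors contribute.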
{"url":"http://support.sas.com/documentation/cdl/en/etsug/65545/HTML/default/etsug_arima_details24.htm","timestamp":"2024-11-06T17:27:35Z","content_type":"application/xhtml+xml","content_length":"25385","record_id":"<urn:uuid:7b2eb0f6-bc9e-49a1-a195-c2056c92c7bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00100.warc.gz"}
Transforming rectangles into squares, with applications to strong colorings

It is proved that every singular cardinal λ admits a function rts : [λ⁺]² → [λ⁺]² that transforms rectangles into squares. Namely, for every pair of cofinal subsets A, B of λ⁺, there exists a cofinal subset C ⊆ λ⁺ such that rts[A ⊛ B] ⊇ C ⊛ C. As a corollary, we get that for every uncountable cardinal λ, the classical negative partition relation λ⁺ ↛ [λ⁺]²_{λ⁺} coincides with the following syntactically stronger statement. There exists a function f : [λ⁺]² → λ⁺ such that for every positive integer n, every family A ⊆ [λ⁺]ⁿ of size λ⁺ of mutually disjoint sets, and every coloring d : n × n → λ⁺, there exist a, b ∈ A with max(a) < min(b) such that f(a_i, b_j) = d(i, j) for all i, j < n.

Keywords
• Minimal walks
• Off-center club guessing
• Partition relations
• Square-bracket operation
• Successor of singular cardinal
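The main result can be typeset in LaTeX as follows (using \circledast for the rectangle product; the theorem environment name is illustrative):

```latex
\begin{theorem}
Every singular cardinal $\lambda$ admits a function
$\operatorname{rts}\colon[\lambda^+]^2\to[\lambda^+]^2$ such that for every
pair of cofinal subsets $A,B\subseteq\lambda^+$ there exists a cofinal
subset $C\subseteq\lambda^+$ with
\[
  \operatorname{rts}[A\circledast B]\supseteq C\circledast C.
\]
\end{theorem}
```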
{"url":"https://cris.iucc.ac.il/en/publications/transforming-rectangles-into-squares-with-applications-to-strong--2","timestamp":"2024-11-05T02:58:35Z","content_type":"text/html","content_length":"38084","record_id":"<urn:uuid:c6cb0a5d-3281-44dd-a961-ce13dd544b82>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00711.warc.gz"}
Practice makes perfect

Many students are often unsure what their strategy should be when studying for standardized exams and for science and math classes. My answer is always practice, practice, and practice more. A student should practice all the quizzes, tests, homework, and classwork problems the teacher gave, as well as the end-of-chapter problems. For standardized tests, use different sources for practice to see different wordings of questions on the same topic. Pay the most attention to the practice sources that come from the company that actually makes the test you would take, such as the College Board for the SAT and the AAMC for the MCAT. When practicing, go one problem at a time. Do the problem by yourself and then look at the answer and solution to learn from it and from your mistakes. Never look at the answer first; it might give you a false assurance that you know how to do the problem without doing it. For more practice, form study groups. Teach topics and problems to your fellow classmates; teaching is the best way to learn material well. At Transformation Tutoring we strongly believe that practice does make perfect! Our amazing tutors are here to help you with any of your science and math classes as well as standardized exams!
{"url":"https://www.transformationtutoring.com/single-post/2018/03/09/practice-makes-perfect","timestamp":"2024-11-04T04:38:18Z","content_type":"text/html","content_length":"1050501","record_id":"<urn:uuid:2f578bde-baeb-4af1-9a68-3006d7ad664d>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00893.warc.gz"}
What is the difference between megabytes and mebibytes

What is the difference between bits and bytes?

Bits (short for binary digits) are generally used to measure data transfer speeds in the binary system. A bit has a single numeric value, either 1 (one) or 0 (zero). A bit can also be represented by other values such as true/false, yes/no, on/off, plus/minus (+/−), etc. A bit is the smallest unit of information in computer technology and digital communication, and it is smaller than a byte. There are 0.125 bytes in 1 bit; that also means 1 bit equals 0.125 bytes. Bytes are mostly used in information and digital technology, and they are used to measure data storage. A byte is another small unit of digital information. On the other hand, 1 byte is larger than a bit: there are 8 bits in a byte, which means 1 byte consists of 8 bits of data. The abbreviation of the byte is a capital B, and a lowercase b is the abbreviation of the bit.

What is the difference between kilobytes and kibibytes?

Kilobytes are commonly used to measure digital information such as text, sound, graphics, video and other kinds of information. The kilobyte is a multiple of the unit byte for digital information, with the SI decimal prefix kilo. The abbreviation of the kilobyte is KB. There are 1,000 bytes in a kilobyte. The kibibyte (short for kilo and binary) is a unit of digital information storage with the IEC binary prefix kibi. The abbreviation of the kibibyte is KiB. The kibibyte was created by the International Electrotechnical Commission (IEC) in 1998 to replace the prefix "kilo". According to this new standard, a kibibyte equals 1,024 bytes. Kilobyte and kibibyte are closely related to each other: 1 KB is equivalent to 0.9765625 KiB, which also means there are 0.9765625 kibibytes in a kilobyte.

What is the difference between megabytes and mebibytes?

A megabyte is a unit of digital information and it is extensively used in computer and information technology.
The abbreviation of the megabyte is MB. For example, the storage capacity of a CD is 700 MB; that means a CD-ROM can store about 700 MB of digital information. One megabyte is equal to 1,000,000 bytes. You can also say that there are 1,000,000 (one million) bytes in one megabyte. A mebibyte (short for mega and binary) is a unit of data with the IEC binary prefix mebi. The unit symbol of the mebibyte is MiB. The unit MiB was defined by the International Electrotechnical Commission (IEC) in 1998. One mebibyte is equivalent to 1,048,576 bytes, and there are 0.95367431640625 MiB in one megabyte. Also, one mebibyte equals 1,024 kibibytes and one gibibyte equals 1,024 mebibytes.
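The conversions above reduce to simple arithmetic between decimal (SI) and binary (IEC) unit sizes; a short sketch (the constant and function names are ours):

```python
# Decimal (SI) prefixes vs. binary (IEC) prefixes, in bytes
KB, MB = 1000, 1000**2
KiB, MiB = 1024, 1024**2

def to_unit(n_bytes, unit):
    """Express a byte count in the given unit."""
    return n_bytes / unit

# One kilobyte in kibibytes, and one megabyte in mebibytes
kb_in_kib = to_unit(KB, KiB)   # 1000 / 1024
mb_in_mib = to_unit(MB, MiB)   # 10**6 / 2**20
```

Because binary prefixes grow by 1024 while decimal ones grow by 1000, the ratio shrinks at each step: 0.9765625 for KB/KiB, then 0.95367431640625 for MB/MiB.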
{"url":"https://www.gbmb.org/blog/what-is-the-difference-between-megabytes-and-mebibytes-32","timestamp":"2024-11-04T04:29:35Z","content_type":"text/html","content_length":"17673","record_id":"<urn:uuid:5928faea-978b-4bed-b228-c95e29d1d1ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00666.warc.gz"}