# zbMATH — the first resource for mathematics Wiener soccer. (English. Russian original) Zbl 0773.60079 Theory Probab. Appl. 37, No. 3, 550-553 (1992); translation from Teor. Veroyatn. Primen. 37, No. 3, 562-564 (1992). The authors propose an abstract model of soccer on an infinite field, in which the trajectory of the ball is a Wiener trajectory in the plane. They obtain the asymptotic distribution of the score of this game up to time $$T$$, whose variance is proportional to $$(K'/K)\log T$$, where $$K'$$ and $$K$$ are periods of elliptic integrals. This relation shows, in particular, how the score of the game depends on the width of the goal. Reviewer: D.Aissani (Bejaia) ##### MSC: 60J65 Brownian motion Full Text:
Question: Advanced Functions Math questions View Single Post Unknown008 Posts: 8,147, Reputation: 3745 Uber Member #4 Apr 5, 2009, 07:32 AM 1. The common ratio is obtained by dividing a term by the previous one. As a check, you can divide another term further along by its previous term; the ratio should be the same. 2. $T_n = a + (n-1)d$ where n is the term number. By this, find d, the common difference, and obtain T3, the third term, from the formula. If you don't know that formula, do (100 - 64)/3 to obtain the amount removed after each term. 3. Use the formula $T_n = ar^{n-1}$ Find r, the common ratio, and solve for the first term. 4. First you must identify the type of sequence. Is it arithmetic or geometric? When you find the answer, just use the formulae I gave you above, depending on the type of sequence it is. (An AP adds or subtracts a fixed number each time, and a GP multiplies or divides each time by a fixed number.) 5. Same thing as in 4. 6. That you'll have to write in your own words, from your knowledge of sequences. 7. You are given the first term T1; to obtain the next terms, replace n=2 and n=3 in your given equation. 8. You should be able to do (a) now. For (b) use the formula $S_n=\frac{a(r^n-1)}{r-1}$ if |r|>1 or $S_n=\frac{a(1-r^n)}{1-r}$ if |r|<1. 9. That should be easy by now. There's only a simultaneous equation to be solved. 10. Identify the type of sequence, then use your Sum formulae (for an AP, it is $S_n= \frac{n(T_1+T_l)}{2}$, where l is the last term). 11. & 12. You should be able to do these by now. 13. This too should be easy now.
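As a quick illustration of the GP sum formula above, with made-up values $a = 3$, $r = 2$, $n = 4$:

```latex
S_4 = \frac{a(r^4 - 1)}{r - 1} = \frac{3\,(2^4 - 1)}{2 - 1} = 3 \cdot 15 = 45
```

which matches adding the first four terms directly: 3 + 6 + 12 + 24 = 45.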
# 1-dimensional Linear System of Divisors of Degree 2 is Complete I am having some trouble understanding linear systems of divisors on Riemann Surfaces. I feel moderately comfortable with the notion of a complete linear system $|D|$, namely the set of effective divisors which are linearly equivalent to $D$, but I am struggling to grasp more general linear systems, even though they are (as I understand it) subspaces of $\mathbb{P}(L(D))$. For example, I would like to show that given a linear system on a Riemann surface of genus $\geq 1$, $\mathscr{L}$ which is a $g^1_2$ (i.e. a 1-dimensional projective space of degree 2 divisors), that $\mathscr{L}$ must automatically be a complete linear system, i.e. $\exists D\in \mathrm{Div}(X)$ such that $\mathscr{L} = |D|$. [From this, I think we can argue that every $g^1_2$ is base-point free(?) which is a neat consequence.] I believe that if there exists $D\in\mathscr{L}$ with $L(D)=\mathrm{span}\{1,f\}$ for some non-constant $f\in L(D)$, then we're done, since we can take $\mathscr{L} = \{D+\mathrm{div}(n\cdot f): n\in \mathbb{N}\}$. However, if we find $D\in \mathscr{L}$ with $l(D)> 2$, I don't understand how we can find an appropriate basis of $\mathscr{L}$. Can we just "subtract points" (i.e. consider divisors of the form $D - P_1 - \ldots -P_r$) until the resulting divisor $E$ satisfies $l(E) = 2$? This approach feels rather arbitrary to me, and I'm not totally convinced why the corresponding non-constant $f\in L(E)$ should generate the space $\mathscr{L}$. As can be ascertained from the above, I don't have a good intuition of why we should consider more general linear systems than complete ones, and I would really appreciate any insights anyone could offer into the subject. • If you take a rational section of the line bundle, i.e. just take a section which might have also poles, then this defines a (possibly non effective) Divisor $D$ by looking at the zeros and poles of the section. 
Your line bundle will then be isomorphic to $L(D)$ (here I mean $L(D)$ as the line bundle). This is basically just writing out the 1:1 correspondence between divisors and line bundles. – Notone Apr 19 '18 at 11:32 • @Notone, thanks for your comment. Since I'm not entirely familiar with this correspondence between line bundles and divisors (I've seen the notion of a canonical divisor, and from what I can see in Miranda's book, this correspondence seems to generalise that to arbitrary line bundles?), I was wondering if you could maybe clarify which line bundle you mean in the first sentence of your comment: do you mean that $\mathscr{L}$ is a line bundle, or are you talking about the cotangent bundle (so that $D$ is the canonical divisor) or something else entirely? – An Coileanach Apr 19 '18 at 18:00 • Sorry, I was referring to $\mathscr{L}$! – Notone Apr 19 '18 at 19:09 • @Notone, thanks for clarifying. Sorry to be dense here, but I'm not sure I fully understand why we wouldn't have a scenario where $\mathscr{L} \subsetneq L(D)$, i.e. why should $\mathscr{L}$ equal all of $L(D)$, and not just some proper linear subspace? – An Coileanach Apr 19 '18 at 19:59 • "...a $g^1_2$ (i.e. a 1-dimensional vector space of degree 2 divisors)": No, it is a 1-dimensional projective space of degree 2 divisors. – Georges Elencwajg Apr 24 '18 at 22:39 Following a conversation with someone smarter than me, I will try to post an answer to my immediate question regarding completeness of $g^1_2$ linear systems (although I still lack good intuition about general linear systems). Suppose $D \in \mathscr{L}$. Then $D$ is an effective divisor of degree 2, so it is of the form $P + Q$ for some $P,\ Q\in X$. By assumption, $\dim(\mathscr{L}) = 1$, so $L(P+Q)$ contains at least one non-constant meromorphic function, and hence $\ell(P + Q) \geq 2$; therefore, it only remains to show that $\ell(P + Q) < 3$.
It is shown in Miranda's book that $\ell(P + Q) \in \{\ell(P),\ \ell(P) + 1\}$, $\forall Q\in X$. On the other hand, since the genus of $X$ is at least 1, it follows that $\ell(P) = 1$ (otherwise $X\cong \mathbb{P}^1$). Hence $\ell(P + Q) \leq 2$, so we find that $\ell(P + Q) = 2$, and $\mathscr{L}$ is complete. • It is a general fact that for an effective non-zero divisor $D$ of degree $d$ on a compact Riemann surface of positive genus we have $\dim L(D) \leq \deg(D)$ [Miranda Prop. 3.16 page 151 + Problem I page 153] – Georges Elencwajg Apr 24 '18 at 22:51
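For what it's worth, in the genus-1 case the conclusion $\ell(P+Q) = 2$ can also be cross-checked directly with Riemann-Roch (standard notation; this is a sanity check, not part of the argument above):

```latex
% Riemann-Roch: \ell(D) - \ell(K - D) = \deg D + 1 - g
% For D = P + Q (degree 2) on a surface of genus g = 1:
\ell(P+Q) = \deg(P+Q) + 1 - g + \ell(K - P - Q) = 2 + \ell(K - P - Q),
% and \deg(K - P - Q) = (2g - 2) - 2 = -2 < 0 forces \ell(K - P - Q) = 0,
% so \ell(P+Q) = 2.
```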
# Mapping vector of functions to vector of numbers?

#1 Suppose I have a vector of functions, say f=[sin,cos], and I want to apply this element by element to a vector of numbers, say v=[2pi/3, pi/3], so that I get [sin(2pi/3), cos(pi/3)] – is there an elegant syntax for this?

#2 A comprehension is pretty elegant:

julia> f = [sin, cos]
2-element Array{Function,1}:
 sin
 cos

julia> v = [1, 2]
2-element Array{Int64,1}:
 1
 2

julia> [ff(vv) for (ff, vv) in zip(f, v)]
2-element Array{Float64,1}:
  0.8414709848078965
 -0.4161468365471424

#3 Wouldn’t you want to get out four elements, like

f = [sin, cos]
v = [2pi/3, pi/3]

julia> [i(j) for i in f for j in v]
4-element Array{Float64,1}:
  0.8660254037844387
  0.8660254037844386
 -0.4999999999999998
  0.5000000000000001

? (Not clear from OP I guess)

#4 I would probably rely on an anonymous function that applies its first argument to its second argument, i.e. (f,x) -> f(x). This you can then map

map((f,x) -> f(x), [sin, cos], [2pi/3, pi/3])

or broadcast

((f,x) -> f(x)).([sin, cos], [2pi/3, pi/3])

or you can rely on list comprehension as the others suggest.

#5
f = [sin, cos]
w = [π/2, π]

julia> map.(f, w)
2-element Array{Float64,1}:
  1.0
 -1.0

julia> w .|> f
2-element Array{Float64,1}:
  1.0
 -1.0

(by the way, if you want good performance, you might want to check out FunctionWrappers.jl.)

#6 The opposite might be easier, since it’s built into Julia:

f = [sin, cos]
w = [π/2, π]

julia> map(|>, w, f)
2-element Array{Float64,1}:
  1.0
 -1.0

julia> (|>).(w, f)
2-element Array{Float64,1}:
  1.0
 -1.0

#7 Thanks for the excellent answers. I’ll take a look into the various options – have to finish some other things at work first :-o .
# Math Help - I need help with partial fractions

1. ## I need help with partial fractions

I need to compute this integral: int ((1-x^2)/(x^3+x)) dx I think I should use the partial fraction method to simplify the fraction, so (1-x^2)/(x^3+x) = A/x + B/(1+x^2). Therefore A(1+x^2)+B(x) = 1-x^2. Putting it in polynomial form: x^2 (A) + x (B) + 1 (A) = 1 - x^2, and by equating the coefficients, A = -1, B = 0. However, A doesn't satisfy the constant on the LHS (+1). So what is the thing that I did wrong??

2. Originally Posted by farisallil I need to compute this integral: int ((1-x^2)/(x^3+x)) dx I think I should use the partial fraction method to simplify the fraction, so (1-x^2)/(x^3+x) = A/x + B/(1+x^2). Therefore A(1+x^2)+B(x) = 1-x^2. Putting it in polynomial form: x^2 (A) + x (B) + 1 (A) = 1 - x^2, and by equating the coefficients, A = -1, B = 0. However, A doesn't satisfy the constant on the LHS (+1). So what is the thing that I did wrong?? OK: $\frac{1-x^2}{x^3+x}=\frac{1-x^2}{x(x^2+1)}$. If the denominator has a second-degree factor, the numerator over it should be of first degree, not a constant, so $\frac{1-x^2}{x^3+x} = \frac{A}{x} + \frac{Bx+C}{x^2+1}$. To find A, B and C: $1-x^2 = A(x^2+1) + Bx^2 + Cx$ gives $A+B = -1$, $C = 0$, $A = 1$. Find B from the first equation, using A from the last, so you have A=1, B=-2, C=0: $\frac{1-x^2 }{x^3 + x} = \frac{1}{x} + \frac{-2x}{x^2+1}$. To make sure my solution is correct: $\frac{1}{x} + \frac{-2x}{x^2+1} = \frac{x^2+1}{x^3+x} + \frac{-2x^2}{x^3+x} = \frac{1-x^2 }{x^3+x}$, which is the original left side.

3. ooooooo I see. Thank you very much Amer, I really appreciate your help.
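A quick way to double-check a decomposition like the one above (outside the thread's scope; a sketch in Python using exact rational arithmetic, with function names of my own choosing):

```python
from fractions import Fraction

def lhs(x):
    # Original integrand: (1 - x^2) / (x^3 + x)
    return (1 - x * x) / (x ** 3 + x)

def rhs(x):
    # Partial-fraction form: 1/x + (-2x)/(x^2 + 1)
    return 1 / x + (-2 * x) / (x * x + 1)

# Check exact equality at several nonzero sample points
for x in [Fraction(1, 2), Fraction(3), Fraction(-5, 7)]:
    assert lhs(x) == rhs(x)
print("decomposition verified")
```

Since both sides are rational functions of degree at most 3, agreement at a handful of points (here checked exactly, not in floating point) is strong evidence the algebra is right.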
# CS152 Fall 2021 Practice Final

To study for the final I would suggest you: 1. Know how to do (by heart) all the practice problems. 2. Go over your notes at least three times. The second and third times, try to see how much you can remember from the first time. 3. Go over the homework problems. 4. Try to create your own problems similar to the ones I have given and solve them. 5. Skim the relevant sections from the book. 6. If you want to study in groups, at this point you are ready to quiz each other.

Here are some facts about the actual final: 1. It is comprehensive. 2. It is closed book, closed notes. Nothing will be permitted on your desk except your pen (pencil) and test. 3. You should bring photo ID. 4. There will be more than one version of the test. Each version will be of comparable difficulty. 5. It has 10 problems (3 pts each); 6 problems will be on material since the second midterm, and 4 problems will be from the topics of the midterm. 6. Two problems will be exactly (modulo typos) off of the practice final, and one will be off of the practice midterm.

The practice final is below: 1. Give an attribute grammar suitable for evaluating expressions made from floats, +, -, *. 2. Define the following programming language concepts: (a) S-attributed grammar, (b) L-attributed grammar, (c) side-effect. 3. What is short circuit evaluation? Give a C example of short circuit versus non short circuit evaluation. 4. Define structured programming. Give an example of a multi-level return statement, and briefly explain Scheme continuations. 5. Show with code how the following concepts could be rewritten using recursion: Rust for in, Rust loop.
For each, say how to avoid running out of stack space. 6. Briefly explain how C structs are implemented in memory by the compiler. 7. Explain how type inference could be used to determine the type of the expression 7 * 3.14 + 5. Sketch the type checking process in OCaml. 8. Differentiate applicative versus normal order evaluation. Give an example using Scheme. Give an example of defining a new macro in Scheme. 9. Define and give an example of the following parameter passing mechanisms: Pass by Value, Pass by Reference, and Pass by Value Result. 10. Write a Prolog program that could be used to compute $\sum_{i=n}^{m} i^2$.
# Cnodeput off-centered when using fractions

Here is the code:

\documentclass[multi,pstricks]{standalone}
\begin{document}
\begin{pspicture}(2,2)
\Cnodeput[radius=.7](1,1){a}{$\dfrac{a_b}{a}$}
\psdot(1,1)
\end{pspicture}
\end{document}

The circle is off-center when using Cnodeput, but it is correctly centered when using cnodeput. How can this be fixed without using magic numbers? Edit: with the new pst-node.tex.

\documentclass[multi,pstricks]{standalone}
\begin{document}
\begin{pspicture}(2,2)
\pnode(1,1){cat}
\cnodeput(cat){cat}{$\dfrac{a_b}{a}$} %works
\Cnodeput[radius=.7](cat){meow}{$\dfrac{a_b}{a}$} %works
\Cnodeput[radius=.7](cat){cat}{$\dfrac{a_b}{a}$} %does not work
\psdot(1,1)
\end{pspicture}
\end{document}

I found the problem: A fixed file pstricks.tex is here: http://comedy.dante.de/~herbert/texnik/tex/generic/pstricks/ You also need the old pst-node.tex http://comedy.dante.de/~herbert/texnik/tex/generic/pstnode/ It will also be on CTAN in the next few days.

\documentclass[multi,pstricks]{standalone}
\begin{document}
\begin{pspicture}[showgrid](2,2)
\pnodes(1,1){cat}
\cnodeput(cat){cat}{$\dfrac{a_b}{a}$}
\Cnodeput[radius=.7](cat){cat}{$\dfrac{a_b}{a}$}
\Cnodeput[radius=.9](cat){cat}{$\dfrac{a_b}{a}$}
\end{pspicture}
\end{document}

• Please see the new edit. There looks to be another bug. – cat May 26 '16 at 19:37 • No, that is the default behaviour. You used a node name cat for the coordinates and also define a new node with the same name. Use something like \Cnodeput[radius=.9](cat){cat2}{$\dfrac{a_b}{a}$} – user2478 May 26 '16 at 19:48 • O I see. Why does (cat){cat} work for \cnodeput? – cat May 26 '16 at 20:05 • it works differently. However, I'll put it on the todo-list – user2478 May 26 '16 at 20:16 • See my edited answer. The problem was in pstricks.tex and not in pst-node.tex – user2478 May 28 '16 at 7:50

I defined a \myCnodeput command, which accepts an optional argument.
I added auto-pst-pdf, which allows compiling with pdflatex, provided you use the --enable-write18 switch (for MiKTeX) or -shell-escape (TeX Live, MacTeX):

\documentclass{standalone}%
\usepackage{auto-pst-pdf}
\usepackage{xparse}
\NewDocumentCommand\myCnodeput{or()mm}{%
  \rput(#2){#4}\Cnodeput[#1](#2){#3}%
}%
\begin{document}
\begin{pspicture}(2,2)
\myCnodeput[radius=0.7, linecolor=red](1,1){b}{$\dfrac{a_b}{a}$}
\psdot(1,1)
\end{pspicture}
\end{document}

• I need Cnodeput though to get a specific radius. – cat May 26 '16 at 18:34 • Ah! That's different. I'll try to think of a solution. – Bernard May 26 '16 at 18:36 • @cat: Please see my updated answer. – Bernard May 26 '16 at 19:15
# Copilot: a DSL for Monitoring Embedded Systems

In case you missed all the excitement on the Galois blog, what follows is a re-post.

# Introducing Copilot

Can you write a list in Haskell? Then you can write embedded C code using Copilot. Here’s a Copilot program that computes the Fibonacci sequence (over Word64s) and tests for even numbers:

fib :: Streams
fib = do
  "fib" .= [0,1] ++ var "fib" + (drop 1 $ varW64 "fib")
  "t"   .= even (var "fib")
  where
    even :: Spec Word64 -> Spec Bool
    even w = w `mod` const 2 == const 0

Copilot contains an interpreter, a compiler, and uses a model-checker to check the correctness of your program. The compiler generates constant-time and constant-space C code via Tom Hawkins’s Atom language (thanks Tom!). Copilot is specifically developed to write embedded software monitors for more complex embedded systems, but it can be used to develop a variety of functional-style embedded code. Executing

> compile fib "fib" baseOpts

generates fib.c and fib.h (with a main() for simulation—other options change that). We can then run

> interpret fib 100 baseOpts

to check that the Copilot program does what we expect. Finally, if we have CBMC installed, we can run

> verify "fib.c"

to prove a bunch of memory safety properties of the generated program. Galois has open-sourced Copilot (BSD3 license). More information is available on the Copilot homepage. Of course, it’s available from Hackage, too.

# Flight of the Navigator

Copilot took its maiden flight in August 2010 in Smithfield, Virginia. NASA rents a private airfield for test flights like this, but you have to get past the intimidating sign posted upon entering the airfield. However, once you arrive, there’s a beautiful view of the James River. We were flying on an RC aircraft that NASA Langley uses to conduct a variety of Integrated Vehicle Health Management (IVHM) experiments. (It coincidentally had Galois colors!)
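As an aside, the stream equation in the fib example above, fib = [0,1] ++ (fib + drop 1 fib), is easy to sanity-check outside Copilot. Here is a plain Python sketch of the same recurrence (my own code, not Copilot's generated C):

```python
def fib_stream(n):
    """First n values of the stream fib = [0,1] ++ (fib + drop 1 fib)."""
    fib = [0, 1]
    while len(fib) < n:
        i = len(fib) - 2
        # Element i+2 of the stream is fib[i] + fib[i+1]: the pointwise sum
        # of the stream and its one-step shift, appended after [0, 1].
        fib.append(fib[i] + fib[i + 1])
    return fib[:n]

stream = fib_stream(8)
t = [x % 2 == 0 for x in stream]  # the "t" stream: test for even numbers
print(stream)  # [0, 1, 1, 2, 3, 5, 8, 13]
print(t)       # [True, False, False, True, False, False, True, False]
```

Within Copilot itself, interpret fib 100 baseOpts plays this same role of checking the stream semantics before compiling.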
Our experiments for Copilot were to determine its effectiveness at detecting faults in embedded guidance, navigation, and control software. The test-bed we flew was a partially fault-tolerant pitot tube (air pressure) sensor. Our pitot tube sat at the edge of the wing. Pitot tubes are used on commercial aircraft and they’re a big deal: a number of aircraft accidents and mishaps have been due, in part, to pitot tube failures.

Our experiment consisted of a beautiful hardware stack, crafted by Sebastian Niller of the Technische Universität Ilmenau. Sebastian also led the programming for the stack. The stack consisted of four STM32 ARM Cortex M3 microprocessors. In addition, there was an SD card for writing flight data, and power management. The stack just fit into the hull of the aircraft. Sebastian installed our stack in front of another stack used by NASA on the same flights.

The microprocessors were arranged to provide Byzantine fault-tolerance on the sensor values. One microprocessor acted as the general, receiving inputs from the pitot tube and distributing those values to the other microprocessors. The other microprocessors would exchange their values and perform a fault-tolerant vote on them. Granted, the fault-tolerance was for demonstration purposes only: all the microprocessors ran off the same clock, and the sensor wasn’t replicated (we’re currently working on a fully fault-tolerant system). During the flight tests, we injected (in software) faults by having intermittently incorrect sensor values distributed to various nodes.

The pitot sensor system (including the fault-tolerance code) is a hard real-time system, meaning events have to happen at predefined deadlines. We wrote it in a combination of Tom Hawkins’s Atom, a Haskell DSL that generates C, and C directly. Integrated with the pitot sensor system are Copilot-generated monitors.
The monitors detected:

• unexpected sensor values (e.g., the delta change is too extreme),
• the correctness of the voting algorithm (we used Boyer-Moore majority voting, which returns the majority only if one exists; our monitor checked whether a majority indeed exists), and
• whether the majority votes agreed.

The monitors integrated with the sensor system without disrupting its real-time behavior. We gathered data on six flights. In between flights, we’d get the data from the SD card. We took some time to pose with the aircraft. The Copilot team from left to right is Alwyn Goodloe, National Institute of Aerospace; Lee Pike, Galois, Inc.; Robin Morisset, École Normale Supérieure; and Sebastian Niller, Technische Universität Ilmenau. Robin and Sebastian are Visiting Scholars at the NIA for the project. Thanks for all the hard work! There were a bunch of folks involved in the flight test that day, and we got a group photo with everyone. We are very thankful that the researchers at NASA were gracious enough to give us their time and resources to fly our experiments. Thank you!

Finally, here are two short videos. The first is of our aircraft’s takeoff during one of the flights. Interestingly, it has an electric engine to reduce the engine vibration’s effects on experiments. http://player.vimeo.com/video/15198286 The second is of AirStar, which we weren’t involved in, but that also flew the same day. AirStar is a scaled-down jet (yes, jet) aircraft that was really loud and really fast. I’m posting its takeoff, since it’s just so cool. That thing was a rocket! http://player.vimeo.com/video/15204969

# More Details

Copilot and the flight test is part of a NASA-sponsored project (NASA press-release) led by Lee Pike at Galois. It’s a 3 year project, and we’re currently in the second year.

# Even More Details

Besides the language and flight test, we’ve written a few papers: • Lee Pike, Alwyn Goodloe, Robin Morisset, and Sebastian Niller.
Copilot: A Hard Real-Time Runtime Monitor. To appear in the proceedings of the 1st Intl. Conference on Runtime Verification (RV’2010), 2010. Springer. This paper describes the Copilot language. Byzantine faults are fascinating. Here’s a 2-page paper that shows one reason why. At the beginning of our work, we tried to survey prior results in the field and discuss the constraints of the problem. This report is a bit lengthy (almost 50 pages), but it’s a gentle introduction to our problem space. Yes, QuickCheck can be used to test low-level protocols. A short paper motivating the need for runtime monitoring of critical embedded systems. # You’re Still Interested? We’re always looking for collaborators, users, and we may need 1-2 visiting scholars interested in embedded systems & Haskell next summer. If any of these interest you, drop Lee Pike a note (hint: if you read any of the papers or download Copilot, you can find my email). # Shocking Tell-All Interview on Software Assurance I was recently interviewed by Flight International magazine, one of the oldest aviation news magazines. Their reporter, Stephen Trimble, was writing on the Air Force’s Chief Scientist’s recent report stating that new software verification and validation techniques are desperately needed. Here’s an online copy of the article. # Copilot: A Hard Real-Time Runtime Monitor I’m the principal investigator on a NASA-sponsored research project investigating new approaches for monitoring the correctness of safety-critical guidance, navigation, and control software at run-time. We just got a paper accepted at the Runtime Verification Conference on some of our recent work developing a language for writing monitors. The language, Copilot, is a domain-specific language (DSL) embedded in Haskell that uses the powerful Atom DSL as a back-end. Perhaps the best tag-line for Copilot is, “Know how to write Haskell lists? 
Good; then you’re ready to write embedded software.” Stay tuned for a software release and updates on a flight-test of our software on a NASA test UAV… In the meantime, check out the paper!

# Twinkle Twinkle Little Haskell

Update Sept 28, 2010: the Makefile mentioned below worked fine, except for something having to do with timing. I was too lazy to track the problem down, but fortunately, I found a scons script (using the scons build system) that I modified to run on Mac OSX, and it works perfectly. The original script is here—thanks Homin Lee! The post has been modified appropriately.

Update Oct 1, 2010: Homin Lee has updated the script to work on Mac OSX, so you can just grab the original script now.

It’s been almost a year(!) since John Van Enk showed us how to twinkle (blink) an LED on his Arduino microcontroller using Atom/Haskell. Since that time, Atom (a Haskell embedded domain-specific language for generating constant time/space C programs) has undergone multiple revisions, and the standard Arduino tool-chain has been updated, so I thought it’d be worthwhile to “re-solve” the problem with something more streamlined that should work today for all your Haskell -> Arduino programming needs. With the changes to Atom, we can blink an LED with just a couple lines of core logic (as you’d expect given the simplicity of the problem). For this post, I’m using

If you’ve played with the Arduino, you’ve noticed how nice the integrated IDE/tool-chain is. Ok, the editor leaves everything to be desired, but otherwise, things just work. The language is basically C with a few macros and Atmel AVR-specific libraries (the family to which Arduino hardware belongs). However, if you venture off the beaten path at all—say, trying to compile your own C program outside the IDE—things get messy quickly. Fortunately, with the scons script, things are a piece of cake. What we’ll do is write a Haskell program AtomLED.hs and use that to generate AtomLED.c.
From that, the scons script will take care of the rest.

## The Core Logic

Here’s the core logic we use for blinking the LED from Atom:

ph :: Int
ph = 40000 -- Sufficiently large number of ticks (the Duemilanove is 16MHz)

blink :: Atom ()
blink = do
  on <- bool "on" True -- Declare a Boolean variable on, initialized to True.
  -- At period ph and phase 0, do ...
  period ph $ phase 0 $ atom "blinkOn" $ do
    call "avr_blink"
    on <== not_ (value on) -- Flip the Boolean.
  period ph $ phase (quot ph 8) $ atom "blinkOff" $ do
    call "avr_blink"
    on <== not_ (value on)

And that’s it!  The blink function has two rules, “blinkOn” and “blinkOff”.  Both rules execute every 40,000 ticks.  (A “tick” in our case is just a local variable that’s incremented, but it could be run off the hardware clock.  Nevertheless, we still know we’re getting nearly constant-time behavior due to the code Atom generates.)  The first rule starts at tick 0, and executes at ticks 40,000, 80,000, etc., while the second starts at tick 40,000/8 = 5,000 and executes at ticks 5,000, 45,000, 85,000, etc.  In each rule, after calling the avr_blink() C function (which we’ll define), it modulates a Boolean upon which blink() depends. Thus, the LED is on 1/8 of the time and off 7/8 of the time. (If we wanted the LED to be on the same amount of time as it is off, we could have done the whole thing with one rule.)

## The Details

Really the only other thing we need to do is add a bit of C code around the core logic.  Here’s the listing for the C code stuck at the beginning, written as strings:

[ (varInit Int16 "ledPin" "13") -- We're using pin 13 on the Arduino.
, "void avr_blink(void);"
]

and here’s some code for afterward:

[ "void setup() {"
, "  // initialize the digital pin as an output:"
, "  pinMode(ledPin, OUTPUT);"
, "}"
, ""
, "// set the LED on or off"
, "void avr_blink() { digitalWrite(ledPin, state.AtomLED.on); }"
, ""
, "void loop() {"
, "  " ++ atomName ++ "();"
, "}"
]

The IDE tool-chain expects there to be a setup() and loop() function defined, and it then pretty-prints a main() function from which both are called. The code never returns from loop(). To blink the LED, we call digitalWrite() from avr_blink(). digitalWrite() is provided by the Arduino language.  (In John’s post, he manipulated the port registers directly, which is faster and doesn’t rely on the Arduino libraries, but it’s also lower-level and less portable between Atmel controllers.)  Atom-defined variables are stored in a struct, so state.AtomLED.on references the Atom Boolean variable defined earlier.

## Make it Work!

Now just drop the scons script into the directory (the directory must have the same name as the Haskell file, dropping the extension), and run

> runhaskell AtomLED.hs
> scons
> scons upload

This should work for any Atom-generated program you want to run on your Arduino (modulo deviations from the configuration I mentioned initially). Also, note the conventions and parameters to set in the scons script. Post if you have any problems, and I might be able to help. Also, I’d love to package the boilerplate up into a “backend” for Atom, but if you have time, please beat me to it.  Thanks. Code:

# New Group: Functional Programming for Embedded Systems

Are you interested in how functional programming can be leveraged to make embedded-systems programming easier and more reliable?  You are not alone.  For example, check out what’s been happening in just the past couple of years. Now Tom Hawkins (designer of Atom) has started a Google group, fp-embedded, to discuss these issues.  Please join and post your projects & questions!
# An Apologia for Formal Methods

In the January 2010 issue of IEEE Computer, David Parnas published an article, “Really Rethinking ‘Formal Methods’” (sorry, you’ll need an IEEE subscription or purchase the article to access it), with the following abstract: We must question the assumptions underlying the well-known current formal software development methods to see why they have not been widely adopted and what should be changed. I found some of the opinions therein to be antiquated, so I wrote a letter to the editor (free content!), which appears in the March 2010 edition.  IEEE also published a response from David Parnas, which you can also access at the letter link above. I’ll refrain from revisiting this debate here, but please have a look at the letters, enjoy the controversy, and do not hesitate to leave a comment!

# 10 to the -9

$10^{-9}$, or one-in-a-billion, is the famed number given for the maximum probability of a catastrophic failure, per hour of operation, in life-critical systems like commercial aircraft.  The number is part of the folklore of the safety-critical systems literature; where does it come from? First, it’s worth noting just how small that number is.  As pointed out by Driscoll et al. in the paper, Byzantine Fault Tolerance, from Theory to Reality, the probability of winning the U.K. lottery is 1 in 10s of millions, and the probability of being struck by lightning (in the U.S.) is $1.6 \times 10^{-6},$ more than 1,000 times more likely than $10^{-9}.$ So where did $10^{-9}$ come from?  A nice explanation comes from a recent paper by John Rushby: If we consider the example of an airplane type with 100 members, each flying $3000$ hours per year over an operational life of 33 years, then we have a total exposure of about $10^7$ flight hours.
If hazard analysis reveals ten potentially catastrophic failures in each of ten subsystems, then the “budget” for each, if none are expected to occur in the life of the fleet, is a failure probability of about $10^{-9}$ per hour [1, page 37]. This serves to explain the well-known $10^{-9}$ requirement, which is stated as follows: “when using quantitative analyses. . . numerical probabilities. . . on the order of $10^{-9}$ per flight-hour. . . based on a flight of mean duration for the airplane type may be used. . . as aids to engineering judgment. . . to. . . help determine compliance” (with the requirement for extremely improbable failure conditions) [2, paragraph 10.b]. [1] E. Lloyd and W. Tye, Systematic Safety: Safety Assessment of Aircraft Systems. London, England: Civil Aviation Authority, 1982, reprinted 1992. [2] System Design and Analysis, Federal Aviation Administration, Jun. 21, 1988, advisory Circular 25.1309-1A. (By the way, it’s worth reading the rest of the paper—it’s the first attempt I know of to formally connect the notions of (software) formal verification and reliability.) So there a probabilistic argument being made, but let’s spell it out in a little more detail.  If there are 10 potential failures in 10 subsystems, then there are $10 \times 10 = 100$ potential failures.  Thus, there are $2^{100}$ possible configurations of failure/non-failure in the subsystems.  Only one of these configurations is acceptable—the one in which there are no faults. If the probability of failure is $x,$ then the probability of non-failure is $1 - x.$  So if the probability of failure for each subsystem is $10^{-9},$ then the probability of being in the one non-failure configuration is $\displaystyle(1 - 10^{-9})^{100}$ We want that probability of non-failure to be greater than the required probability of non-failure, given the total number of flight hours.  
Thus, $\displaystyle (1 - 10^{-9})^{100} > 1 - 10^{-7}$ which indeed holds: $\displaystyle (1 - 10^{-9})^{100} - (1 - 10^{-7})$ is around $4.95 \times 10^{-15}.$ Can we generalize the inequality?  The hint for how to do so is that the number of potential failures ($100$) is exactly the overall failure rate divided by the per-failure rate: $\displaystyle \frac{10^{-7}}{10^{-9}}$ This suggests the general form is something like Subsystem reliability inequality: $\displaystyle (1 - C^{-n})^{C^{n-m}} \geq 1 - C^{-m}$ where $C,$ $n,$ and $m$ are real numbers, $C \geq 1,$ $n \geq 0,$ and $n \geq m$ (the concrete case above is $C = 10,$ $n = 9,$ $m = 7$). Let’s prove the inequality holds.  Joe Hurd figured out the proof, sketched below (but I take responsibility for any mistakes in its presentation).  For convenience, we’ll prove the inequality holds specifically when $C = e,$ but the proof can be generalized. First, if $n = 0,$ then $m \leq 0,$ so the right-hand side is nonpositive and the inequality holds immediately. Next, we’ll show that $\displaystyle (1 - e^{-n})^{e^{n-m}}$ is monotonically non-decreasing with respect to $n$ by showing that the derivative of its logarithm is greater than or equal to zero for all $n > 0.$  This suffices: at $n = m$ the left-hand side is $(1 - e^{-m})^{e^{0}} = 1 - e^{-m}$ exactly, so monotonicity gives the inequality for all $n \geq m.$  The derivative of the logarithm is $\displaystyle \frac{d}{dn} \; e^{n-m}\ln(1-e^{-n}) = e^{n-m}\ln(1-e^{-n})+\frac{e^{-m}}{1-e^{-n}}$ We show $\displaystyle e^{n-m}\ln(1-e^{-n})+\frac{e^{-m}}{1-e^{-n}} \geq 0$ iff $\displaystyle e^{-m}\left(e^{n}\ln(1-e^{-n}) + \frac{1}{1-e^{-n}}\right) \geq 0$ and since $e^{-m} \geq 0,$ this holds iff $\displaystyle e^{n}\ln(1-e^{-n}) + \frac{1}{1-e^{-n}} \geq 0$ iff $\displaystyle e^{n}\ln(1-e^{-n}) \geq - \frac{1}{1-e^{-n}}$ Let $x = e^{-n},$ so the range of $x$ is $0 < x < 1;$ multiplying both sides by $x > 0,$ the inequality becomes $\displaystyle\ln(1-x) \geq - \frac{x}{1-x}$ Now we show that in the range of $x$, the left-hand side is bounded below by the right-hand side of the inequality. 
Both sides vanish in the limit at zero: $\displaystyle \lim_{x \to 0} \; \ln(1-x) = 0$ and $\displaystyle \lim_{x \to 0} \; - \frac{x}{1-x} = 0$ Now taking their derivatives, $\displaystyle \frac{d}{dx} \; \ln(1-x) = \frac{1}{x-1}$ and $\displaystyle \frac{d}{dx} \; - \frac{x}{1-x} = - \frac{1}{(x-1)^2}$ Because $\displaystyle \frac{1}{x-1} \geq - \frac{1}{(x-1)^2}$ in the range of $x$ (multiplying through by $(x-1)^2 > 0$ reduces it to $x - 1 \geq -1,$ i.e. $x \geq 0$), the left-hand side decreases no faster than the right-hand side, and our proof holds. The purpose of this post was to clarify the folklore of ultra-reliable systems.  The subsystem reliability inequality presented here generalizes easily to other reliable systems. Thanks again for the help, Joe!
# Multiple aspects of Cpp static Some notes about the static keyword: its basic semantics, the related concepts of declaration, definition, and initialization, and how header-only libraries use (or avoid) the static keyword. ### Within one translation unit In “Complete C++ tips & secrets for professionals”, the keyword static belongs to the storage class specifiers, and it covers three distinct usages. Before diving into the details, let’s go through a basic concept: Translation unit: refer to this. Generally speaking, it is the file that is compiled into an object file (such as a .c or .cpp file), which is the basic unit of compilation in C++. The fundamental idea for explaining static is this: a static variable is initialized exactly once, and it is not destroyed each time its scope is exited. (For a namespace-scope variable the initialization happens before the program starts; for a function-local static it happens the first time control passes through the declaration, not on every call.) Be careful with the phrase “initialized once”, and keep declaration, definition, and initialization apart; the definition can be written on the same line as the initialization. Consider two examples. In case 1, the code does not work as one might expect and all results are test0, because on the second and later calls the initialization is simply not executed again. In case 2, the static variable is reassigned after its declaration, and since assignment (unlike initialization) runs every time, the results work as expected. Note that static does not say the variable’s value cannot be changed or reassigned; it only says the associated initialization is performed once. ### Across multiple translation units The false impression that “a static variable cannot change its value” may come from a use case in which a static variable defined in a header is used across translation units. The important thing is that there are two translation units (main.cpp and file.cpp), and the two units each have their own copy of the static variable. 
Although one of them (the one in file.cpp) is changed by the function, the one in main.cpp is still not modified. Interestingly, it is actually bad practice to define a variable in the .h file in the first place: if we remove the static keyword, we get a multiple-definition error at link time. The static keyword merely hides this issue (and it is unnecessary here anyway)! The good practice is to put a declaration in the .h file, such as extern int a, and to put the actual definition and initialization in a single .cpp file, which avoids the multiple-symbol problem. ### Header-only libraries How do we handle this in a header-only library? This is a good question. In some situations we want to build a header-only library, without a .cpp file acting as a separate translation unit; that way, no extra compilation step is needed to build the library. This is common for utilities such as timers and logging. The main issue is how to avoid breaking the ODR (one definition rule): our utility.h file might be included by multiple files, so the same function definitions and variable definitions would land in several translation units. (That is exactly the problem the separate .cpp translation unit normally solves.) For multiple definitions of a function, the inline keyword solves it: a function declared inline may be defined in multiple translation units. In fact, a function implemented inside its class definition is implicitly declared inline. For multiple definitions of a global variable, wrap the global in a function: instead of accessing the variable directly, use a dedicated getter function that returns a reference to a function-local static. This answer shows both the C++ version and the C version. 
### References

- https://pabloariasal.github.io/2020/01/02/static-variable-initialization/ (an insightful one)
- https://stackoverflow.com/questions/16079235/static-variable-cpp-do-not-want-to-change
- https://stackoverflow.com/questions/23345554/what-distinguishes-the-declaration-the-definition-and-the-initialization-of-a-v (definition, declaration, initialization)
- How to process global variables in a header-only library
# Simula

- Paradigms: multi-paradigm: imperative, procedural, structured, object-oriented, concurrent
- Designed by: Ole-Johan Dahl, Kristen Nygaard
- First appeared: 1962 (Simula I); 1967 (Simula 67)
- Stable release: Standard SIMULA[1] (25 August 1986)
- Typing discipline: static
- Based on: mainly ALGOL 60 (with some Simscript constructs)
- Platforms: Unix-like systems, Windows, z/OS, TOPS-10, MVS
- Major implementations: Portable Simula Revisited[2], GNU Cim[3]
- Influenced by: ALGOL 60, Simscript[4]
- Influenced: Smalltalk, CLU[5], C++, BETA, Object Pascal, Modula-3, Java

Simula is a compiled programming language developed in Oslo during the 1960s by Ole-Johan Dahl and Kristen Nygaard, in two generations: Simula I and Simula 67. Built on ALGOL 60 as its basis, it is considered the first object-oriented programming language. Simula 67 introduced objects and classes, subclasses (later conventionally described as a subclass inheriting from a superclass), and virtual procedures[6], as well as coroutines, discrete event simulation, and its characteristic garbage collection[7]. Simula's influence is often underestimated[8]: C++, Object Pascal, Modula-3, Java, and many later programming languages implement objects inspired by Simula. BETA is Simula's modern successor. ## History In August 1963 the Norwegian Computing Center (NCC) took delivery of its computer, and under the terms of its contract with UNIVAC, Ole-Johan Dahl implemented Simula I on this machine on top of an ALGOL 60 compiler. In January 1965 Simula I at last became fully operational on the UNIVAC 1107. Over the following years Kristen Nygaard and Ole-Johan Dahl devoted themselves to teaching Simula I, which was also ported to the B5500 computer and to a Soviet computer. In 1965 Tony Hoare first proposed the concept of the "record class" construct[10]. In 1966 Kristen Nygaard and Ole-Johan Dahl extended Simula I's processes into generalized processes with record-class properties, treating a process as composed of a prefix layer and a main layer; what they had until then called a "process" was soon called an "object"[11]. In May 1967 Nygaard and Dahl presented their paper on class and subclass declarations at the IFIP working conference held in Oslo, and it became the first definition document of Simula 67[12]. A meeting convened in 1968 formed the SIMULA Standards Group and published the first official Simula standard document, "SIMULA 67 Common Base Language"[13]. In the late 1960s and early 1970s, Simula 67 was implemented mainly on four systems: UNIVAC, the IBM System/360 at the Norwegian Computing Center, the CDC installation operated jointly by the University of Oslo and Kjeller, and the DEC TOPS-10 system at FOA. Simula influenced Smalltalk[15] and later object-oriented programming languages[8]. Via Smalltalk, Simula inspired the actor model of concurrent computation[16]. The creators of both C++ and Java have acknowledged Simula as a major influence on their languages[17]. ## Characteristic concepts Simula 67 contains most features of the general algorithmic language ALGOL 60 as a subset[1]. It is case-insensitive. ### Blocks and procedures One of the most powerful program-structuring mechanisms in ALGOL 60 is the concept of the procedure. #### Syntax L: L: ... begin D; D; ... D; S; S; ... 
S; S end
• An identifier occurring within the block may, through a suitable declaration, be specified to be local to the block in question. The entity represented by this identifier inside the block does not exist outside it, and any entity represented by this identifier outside the block is invisible inside the block; in Simula 67 it can be made visible by connection or by remote accessing.
• An identifier (other than one representing a label) occurring within a block and not declared in it is non-local to the block; that is, the entity it represents inside the block is the same entity as the one of the same name occurring at the level immediately outside. Since a statement of a block may itself be a block, the concepts of local and non-local to a block must be understood recursively: an identifier that is non-local to a block A may or may not be non-local to the block B of which A is one statement. #### Semantics
• When a block is executed, a dynamic instance of the block is generated[23]. In the computer, a block instance may take the form of a memory area containing the dynamic block information needed, including space to hold the contents of the variables local to the block[24].
• The local variables of a block instance identify the pieces of memory allocated to that block instance. Identifier bindings made for an inner block remain valid for any subsequent dynamic instance of that inner block[25].
• A procedure that defines the value of a function designator has a type declarator as the first element of its procedure declaration[29]. Other, "proper" procedures are said in Simula 67 to have the universal type, to which every type is subordinate. In a procedure's parameter list ( … <parameter delimiter> … ), Simula 67 drops the ") <letter string>: (" style of parameter delimiter that is optional in ALGOL 60[30].
• The default transmission mode for procedures in Simula 67 is call by value for parameters of value type and call by reference for parameters of all other types; accordingly, the parameter specifications of a procedure declaration gain a name part, led by the keyword name, to designate the parameters that use call by name. Each occurrence of a call-by-name formal parameter in the procedure body causes an evaluation of the actual parameter. In Simula 67 this evaluation takes place in the context of the procedure statement, so no identifier conflicts can arise, because the procedure body and its variables are not visible there.
• The execution of a procedure call with parameters goes through the following steps: a formal-parameter block instance is created; the actual parameters corresponding to call by value or call by reference are evaluated, and the results are assigned to the corresponding variables of the formal-parameter block instance[31]; then the procedure body is initialized and begins to execute. ### Objects and classes The central concept of Simula 67 is the object. An object is a self-contained piece of program (a block instance) that has its own local data and actions, defined by a declaration. The need to manipulate objects and to relate objects to one another obliged the language to introduce list-processing facilities.
• For a given object, the formal parameters, the quantities specified in the virtual part led by virtual:, and the quantities declared local to the class body are called the attributes of the object. The declaration or specification of an attribute is called an attribute definition. In the 1986 revision of the language standard, the visibility of class attribute identifiers can be restricted by protection specifications led by hidden or protected.
• Call by name does not apply to the formal parameters of a class declaration[33]. Every formal parameter must be specified in the specification part; these parameters are treated as variables local to the class body and may take the following specifiers: <type>, array, or <type> array.
• The class body is normally a block; even when, as the syntactic form allows, it is a statement other than a block, it behaves as a block. A split body acts as a block in which the symbol inner stands for a dummy statement. class Order (number); integer number; begin integer numberOfUnits, arrivalDate; real processingTime; end; new Order (103); Simula 67 contains the basic features necessary to organize an entire program execution as a sequence of active phases belonging to objects. The declaration of the simulator Simulation defines a "time axis" called the "sequencing set", a two-way list that also serves as a queue, together with the process class Process, which gives objects the property of organizing their active phases along the time axis. ### Subclasses If Cn is a class with the prefix sequence C1, C2, ...... Cn-1, where the subscript k of Ck (k = 1, 2, ..., n) is called the prefix level, then an object belonging to Cn is a compound object, which can be formally described by concatenated class declarations; the concatenation is regarded as taking place before program execution. Let X be an object belonging to Cn. Informally, the concatenation mechanism has the following consequences:
• The set of attributes possessed by X is the union of the sets defined in C1, C2, ... 
, Cn. An attribute defined in Ck (1 <= k <= n) is said to be defined at prefix level k.
• The "operation rule" possessed by X is composed of the statements coming from the bodies of these classes, taken in a prescribed order. Statements coming from Ck are said to belong to prefix level k of X.
• Statements at prefix level k of X can access all of X's attributes defined at prefix levels equal to or outer than k, but cannot directly access attributes made invisible by conflicting definitions at levels outer than k. Such invisible attributes can still be reached, for example by using procedures or this.
• Statements at prefix level k of X cannot immediately access those of X's attributes defined at prefix levels inner to k, except through virtual quantities.
• In a split body at prefix level k, the symbol inner stands for those statements of X's operation rule that belong to prefix levels inner to k, or, when k = n, for a dummy statement. If none of C1, ..., Cn-1 has a split body, the statements of X's operation rule are ordered by ascending prefix level. Order class BatchOrder; begin integer batchSize; real setupTime; end; Order class SingleOrder; begin real setupTime, finishingTime, weight; end; SingleOrder class Plate; begin real length, width; end; ### Object references ref(Order) next, previous; next :- new Order(101); previous :- next; next :- new Plate(50); ### Object expressions The evaluation of an object generator proceeds as follows: 1. The object is generated, and if the object generator has actual parameters, they are evaluated and the resulting values and/or references are transmitted to the formal parameters. 2. Control enters the object through its initial begin, whereby the object begins running in the "attached" state. The evaluation of the object generator is completed in either of two cases: the generated object executes the detach primitive, or control passes out through the object's final end. The object expression this C is valid when used within the class body of C or of any subclass of C, or within a connection block whose block qualification is C or a subclass of C. In a given context, the value of such a local object is the object, or is connected to the object, that is the smallest textually enclosing block instance in which the object expression is valid; if there is no such block, the object expression is illegal in that context. For an instance of a procedure or class body (the context), "textually enclosing" means containing its declaration. The meaning so defined is illustrated by the binary tree example in the program examples section. Let X denote a simple reference expression, and let C and D be class identifiers such that class D is the qualification of the object X. The qualified object X qua C is legal if class C is outer to or equal to class D, or is a subclass of D; otherwise it is illegal. If the value of X is none, or is an object belonging to a class outer to C, evaluating the object expression gives a run-time error; in all other cases the value of X qua C is the value of X. This on-the-spot qualification of a concatenated class object restricts or extends the visibility of its attributes for inspection or remote access. ### Remote accessing A remote access (dot notation) involves three things: 1. the object 2. a qualification outer to or equal to the class of the object 3. an attribute identifier defined in that class or in any class belonging to its prefix sequence if next.number >= previous.number then ......; "Group access" can be accomplished by a "connection statement"; the purpose of the connection mechanism is to provide, for specific identification within the connection block, implicit definitions of the object information of item 1 and the class information of item 2 above. For example: inspect next when Plate do begin ...... end; inspect new Plate(50) do begin ...... end; ### Virtual procedures
• They allow statements at one prefix level of an object to access attributes declared at inner prefix levels.
• They allow a redefinition of an attribute at one prefix level to take effect at outer prefix levels. class Hashing (n); integer n; virtual: integer procedure hash; begin integer procedure hash(t); text t; begin ...... end hash; text array table (0:n-1); integer entries; integer procedure lookup (t,old); name old; Boolean old; text t; begin ...... end lookup; end Hashing; Hashing class ALGOL_hash; begin integer procedure hash(T); text T; begin ...... 
end hash; end ALGOL_hash; Hashing(64) begin integer procedure hash(T); text T; begin ...... end hash; ...... end ### Scope and visibility
• The scope of an identifier definition is the part of the program text within which the identifier may have effect. The same identifier may be defined at many places in a program and thus be associated with different quantities. The scopes of such identical identifiers may therefore overlap, for example when an identifier is redeclared in an inner block.
• An identifier definition is said to be visible at a given point of the program if the identifier at that point can refer to the quantity of the definition in question. At any particular point of the program text where a given identifier is visible, at most one definition is associated with that identifier; in the redeclaration case above, for instance, only one definition is visible at any given point of the union of their scopes.
• A definition with the same identifier, i.e. a redefinition of the identifier, may occur within some construct enclosed by the block to which the earlier definition is local; within their common scope, only the innermost redefinition is visible.
• A redefinition may also occur at some inner prefix level of a class.
• Remote accessing can cause certain identifier definitions to become invisible inside an inspection block or under dot notation.
• The use of this or qua can cause one or more redefinitions to be temporarily suspended.
• Visibility can further be limited by the protection part of the class declaration to which the definition is local. ### Block instances
• A non-class block instance, i.e. an instance of a prefixed block, a subblock, a procedure body, or a connection block, is always in the attached state; such an instance is said to be attached to the block instance that caused its generation. Thus an instance of a procedure body is attached to the block instance containing the corresponding function designator or procedure statement. A non-class, non-procedure block instance is attached to the block instance to which it is local. When the PSC (program sequence control) passes through the final end of a non-class block instance, the PSC returns to the program point immediately following the statement or expression that caused the generation of the block instance.
• An object is initially in the attached state and is said to be attached to the block instance containing the corresponding object generator. An object can enter the detached state by executing a detach statement. A detached object can re-enter the attached state through the execution of a call statement, whereby it becomes attached to the block instance containing that call statement. A detached object can enter the resumed state through the execution of a resume statement. A program execution that makes no use of the detach, call, and resume statements is a simple nested structure of attached blocks.
• When the PSC leaves an object through its final end, or through a goto statement, the object enters the terminated state. No block instance is attached to a terminated class object. A terminated object still exists as a data item, and it can be referenced through remote identification of its attributes, including its procedure and function attributes. ### Quasi-parallel systems
• At any time, exactly one of the components of a system is said to be "operative". A component that is not operative has an associated "reactivation point", which identifies the program point where execution will continue if and when the component is activated. An object component is operative if and only if the head of the component is in the resumed state.
• Besides system components, a program execution may also contain "independent object components" belonging to no particular system. The head of any such component is a detached object that is local to an instance of a class object or procedure body, i.e. not local to a system head (an instance of a subblock or prefixed block). By definition, independent components are always non-operative.
• The chain of block instances dynamically enclosing the block instance currently containing the PSC is called the "operating chain". The block instances on the operating chain are said to be "operating"; the outermost block instance is always operating. A system is operating if one of its components is operating; at any time at most one component of a system is operating; the head of an operating system may itself be non-operating (not on the operating chain). An operating component is always operative; if the operative component of a system is non-operating, the system is also non-operating. The operative component of a non-operating system is the component that was operating when the system became non-operating, and it will again be operating when the system once more becomes operating.
• For a non-operative component C, there is a block instance P that contains its reactivation point (the reactivation point of C being associated with the head of C); P is dynamically enclosed by the head of C and dynamically encloses no block instance other than itself. The sequence of block instances leading dynamically from P to the head X of C, (P = Z0) → Z1 → .... 
→ Zn-1 → (Zn = X) (n>=0), is called the "reactivation chain" of C. Apart from C's head X (a detached object), every component head on this chain (a resumed object) identifies a component that is operative but not operating. When C becomes operating, all blocks on its reactivation chain become operating as well. ### Sequencing
• A non-operative object component can be reactivated by executing a call on its detached head, whereby the PSC moves to the component's reactivation point. The head re-enters the attached state and becomes attached to the block instance containing the call. The component formally loses its status as a component.
• A non-operative object component of the system can be reactivated by executing a resume on its detached head, whereby the PSC moves to the component's reactivation point; the head of the component enters the resumed state, and the component becomes operative. The previously operative component of the system becomes non-operative, and its reactivation point is positioned immediately after the resume; if that component is an object component, its head enters the detached state.
• For the currently operative object component, executing a detach on its resumed head lets the main component of the system regain operative status, whereby the PSC moves back to the reactivation point of the main component. The previously operative component becomes non-operative, and its reactivation point is positioned immediately after the detach. The head of that component enters the detached state. The effect of the PSC passing through the final end of a class object is the same as the effect of executing detach on that object, except that the object enters the terminated state rather than the detached state. As a result it gets no reactivation point, and if it had the status of a component head, it loses that status. ## Program examples ### Minimal program Begin End; ### Hello, World! The classic Hello, World! example in Simula 67: Begin OutText ("Hello, World!"); OutImage; End; The OutText procedure writes a string to the buffer, and the OutImage procedure writes the buffer contents to the standard file; both are defined in the output file class OutFile, a subclass of the file class File. ### Call by name Real Procedure Sigma (k, m, n, u); Name k, u; Integer k, m, n; Real u; Begin Real s; k:= m; While k <= n Do Begin s:= s + u; k:= k + 1; End; Sigma:= s; End; Z:= Sigma (i, 1, 100, 1 / (i + a) ** 2); ### Coroutines The Simula 67 standard illustrates the coroutine mechanism, called a "quasi-parallel system", with the following example: begin comment S1; ref(C1) X1; class C1; begin procedure P1; detach; P1 end C1; ref(C2) X2; class C2; begin procedure P2; begin detach; ! may try detach or resume(X1); end P2; begin comment system S2; ref(C3) X3; class C3; begin detach; P2 end C3; X3:- new C3; resume(X3) end S2 end C2; X1:- new C1; X2:- new C2; call(X2); ! may try resume(X2); end S1; [S1] ← (X1) ← (P1) ← PSC [S1] ← PSC | (X1) ← (P1) ← X1 reactivation point [S1] ← (X2) ← [S2] ← (X3) ← PSC | (X1) ← (P1) ← X1 reactivation point [S1] ← (X2) ← [S2] ← PSC | | | (X3) ← X3 reactivation point | (X1) ← (P1) ← X1 reactivation point When the PSC reaches line 20 in the main component of system S2 and executes the resume(X3) statement, the reactivation point of S2's main component is saved as line 21, and the PSC resumes inside the class body of object X3 at line 17. The situation is then: [S1] ← (X2) ← [S2] ← S2 reactivation point | | | (X3) ← PSC | (X1) ← (P1) ← X1 reactivation point  ! 
Situation A [S1] ← (X2) ← [S2] ← S2 reactivation point | | | (X3) ← (P2) ← PSC | (X1) ← (P1) ← X1 reactivation point !Situation B [S1] ← PSC | (X1) ← (P1) ← X1 reactivation point | (X2) ← [S2] ← S2 reactivation point | (X3) ← (P2) ← X2 reactivation point [S1] ← S1 reactivation point | (X1) ← (P1) ← X1 reactivation point | (X2) ← [S2] ← S2 reactivation point | (X3) ← (P2) ← PSC [S1] ← S1 reactivation point | (X1) ← (P1) ← PSC | (X2) ← [S2] ← S2 reactivation point | (X3) ← (P2) ← X2 reactivation point ### Binary tree class Tree (val); integer val; begin ref (Tree) left, right; procedure insert (x); integer x; begin if x < val then begin if left == none then left :- new Tree (x) else left.insert (x) end else if right == none then right :- new Tree (x) else right.insert (x); end ref (Tree) procedure find (x); integer x; begin if x = val then this Tree else if x < val then (if left == none then none else left.find (x)) else if right == none then none else right.find (x); end end of tree; The expression this Tree appears in the body of find; the value it is intended to produce is a reference to the current node, that is, the node owning "this" particular instance of the find attribute. For example, if the find procedure of X is invoked through the function designator X.find (x), and X.val = x, then the result of the function is a reference to X itself. ### Abstract classes Begin Class Glyph; Virtual: Procedure print Is Procedure print; Begin End; Glyph Class Char (c); Character c; Begin Procedure print; OutChar(c); End; Glyph Class Line (elements); Ref (Glyph) Array elements; Begin Procedure print; Begin Integer i; For i:= 1 Step 1 Until UpperBound (elements, 1) Do elements (i).print; OutImage; End; End; ! 
main program; Ref (Glyph) rg; Ref (Glyph) Array rgs (1 : 4); rgs (1):- New Char ('A'); rgs (2):- New Char ('b'); rgs (3):- New Char ('b'); rgs (4):- New Char ('a'); rg:- New Line (rgs); rg.print; End; ### Simulation Simulation Begin Class FittingRoom; Begin Boolean inUse; Procedure request; Begin If inUse Then Begin Wait (door); door.First.Out; End; inUse:= True; End; Procedure leave; Begin inUse:= False; Activate door.First; End; End; Procedure report (message); Text message; Begin OutFix (Time, 2, 0); OutText (": " & message); OutImage; End; Process Class Person (pname); Text pname; Begin While True Do Begin Hold (Normal (12, 4, u)); report (pname & " requests the fitting room"); fittingroom1.request; report (pname & " has entered the fitting room"); Hold (Normal (3, 1, u)); fittingroom1.leave; report (pname & " has left the fitting room"); End; End; Integer u; Ref (FittingRoom) fittingRoom1; fittingRoom1:- New FittingRoom; Activate New Person ("Sam"); Activate New Person ("Sally"); Activate New Person ("Andy"); Hold (100); End; ## Notes 1. SIMULA Standards Group. SIMULA Standard (PDF). 1986 [2022-03-23]. (Archived (PDF) from the original on 2022-04-07). In this Standard the name SIMULA is considered synonymous with SIMULA 67. …… It is recommended that the language defined in this Standard be referred to as "Standard SIMULA". SIMULA includes most of the ALGOL 60 language. Wherever ALGOL is used in this Standard it relates to the STANDARD ALGOL 60 definition (ISO 1538). 2. Portable Simula Revisited. [2022-03-15]. (Archived from the original on 2022-05-11). 3. GNU Cim. [2022-04-02]. (Archived from the original on 2022-04-14). 4. ^ Kristen Nygaard; Ole-Johan Dahl. The Development of the Simula Languages (PDF). 1978 [2022-03-14]. (Archived (PDF) from the original on 2022-01-20). SIMSCRIPT was the only simulation language that we were closely acquainted with during the design phase of SIMULA. From the preceding sections it will be evident that it had a considerable impact through its list processing and time scheduling mechanisms. 
It also contained a set of random drawing and other utility routines, which served as a model for our procedure library. 5. ^ Barbara Liskov. A history of CLU (PDF). 1992 [2022-04-27]. (Archived (PDF) from the original on 2021-11-05). Programming languages that existed when the concept of data abstraction arose did not support abstract data types, but some languages contained constructs that were precursors of this notion. …… The mechanism that matched the best was the class mechanism of Simula 67. A Simula class groups a set of procedures with some variables. A class can be instantiated to provide an object containing its own copies of the variables; the class contains code that initializes these variables at instantiation time. However, Simula classes did not enforce encapsulation ……, and Simula was lacking several other features needed to support data abstraction, ……. 6. Ole-Johan Dahl. The Birth of Object Orientation: the Simula Languages (PDF). 2001 [2022-04-13]. (Archived (PDF) from the original on 2021-08-10). On the other hand, if a procedure P is specified as virtual in a class C the binding scheme is semi-dynamic. Any call for P occurring in C or in any subclass of C will bind to that declaration of P which occurs at the innermost prefix level of the actual object containing such a declaration (and similarly for remote accesses). Thus, the body of the procedure P may, at the prefix level of C, be postponed to occur in any subclass of C. It may even be replaced by more appropriate ones in further subclasses. This binding scheme is dynamic in the sense that it depends on the class membership of the actual object. But there is nevertheless a degree of compiler control; the access can be implemented as indirect through a produced by the compiler for C and for each of its subclasses. …… a virtual procedure can be seen as a parameter, where the actual parameter is a procedure residing safely within the object itself, at an appropriate prefix level. 
There is the additional advantage that the procedure has direct access to attributes of the object containing it. 7. O. -J. Dahl; C. A. R. Hoare. Hierarchical Program Structures. C. A. R. Hoare (ed.). Structured Programming. London, UK: Academic Press. 1972: 175–220. ISBN 978-0122005503. In SIMULA 67, a block instance is permitted to outlive its calling statement, and to remain in existence for as long as the program needs to refer to it. It may even outlive the block instance that called it into existence. As a consequence, it is no longer possible to administer storage allocation as a simple stack; a general garbage-collector, including a scan-mark operation, is required to detect and reclaim those areas of store (local workspace of block instances) which can no longer be referenced by the running program. The reason for accepting this extra complexity is that it permits a wider range of concepts to be conveniently expressed. In particular, it clarifies the relationship between data and the operations which may be performed upon it, in a way which is awkward or impossible in ALGOL 60. 8. Ole-Johan Dahl. The Birth of Object Orientation: the Simula Languages (PDF). 2001 [2022-04-13]. (Archived (PDF) from the original on 2021-08-10). The main impact of Simula 67 has turned out to be the very wide acceptance of many of its basic concepts: objects, but usually without own actions, classes, inheritance, and virtuals, often the default or only way of binding “methods”, (as well as pointers and dynamic object generation). There is universal use of the term “object orientation”, OO. Although no standard definition exists, some or all of the above ideas enter into the OO paradigm of system development. There is a large flora of OO languages around for programming and system specification. …… The importance of the OO paradigm today is such that one must assume something similar would have come about also without the Simula effort. 
The fact remains, however, that the OO principle was introduced in the mid 60’s through these languages. Simula 67 had an immediate success as a simulation language, and was, for instance extensively used in the design of VLSI chips, e.g. at INTEL. As a general programming language, its impact was enhanced by lectures at given by OJD, materialized as a chapter in a book on structured programming. The latter has influenced research on the use of abstract data types, e.g., the CLU language, as well as research on and operating system design. A major new impact area opened with the introduction of workstations and personal computers. Alan Kay and his team at Xerox PARC developed Smalltalk, an interactive language building upon Simula’s objects, classes and inheritance. It is oriented towards organising the cooperation between a user and her/his personal computer. 9. ^ Kristen Nygaard. SIMULA: An Extension of ALGOL to the Description of Discrete-Event Networks. Proceedings of the IFIP congress 62, Munich. North-Holland Publ., pages 520-522. Aug 1962. 10. ^ C. A. R. Hoare. Record Handling (PDF). ALGOL Bulletin no. 21. 1965 [2022-03-14]. (Archived (PDF) from the original on 2022-04-07). 11. ^ Jan Rune Holmevik. Compiling Simula. Oslo, Norway: Institute for Studies in Research and Higher Education. 1994. (Archived from the original on 2020-01-11). During the summer and autumn of 1963, …… Instead Dahl and Nygaard introduced the far more powerful process concept which came to constitute the basic, unifying feature of the SIMULA I language. In short, a process can be understood as a generalized ALGOL procedure with quasi-parallel properties. …… they became more and more preoccupied with the opportunities embedded in Tony Hoare's record class construct, first presented in ALGOL bulletin no. 21, 1965. …… What they were really looking for was some kind of generalized process concept with record class properties. The answer to their problem suddenly appeared in December 1966, when the idea of prefixing was introduced. 
A process, later called an object, could now be regarded as consisting of two layers: A prefix layer containing references to its predecessor and successor along with a number of other properties, and a main layer containing the attributes of the object in question. In addition to this important new feature, they also introduced the class concept, which can roughly be described as a highly refined version of SIMULA I's activity concept. This powerful new concept made it possible to establish class and subclass hierarchies of concatenated objects. 12. ^ Ole-Johan Dahl; Kristen Nygaard. Class and Subclass Declarations (PDF). North Holland: J. Buxton, ed.: Simulation Programming Languages. Proceedings from the IFIP Working Conference in Oslo, May 1967. 1968 [2020-05-16]. (Archived (PDF) from the original on 2022-04-08). 13. ^ Ole-Johan Dahl; Bjørn Myhrhaug; Kristen Nygaard. SIMULA 67 Common Base Language (PDF). Norwegian Computing Center. 1968, 1970 [2022-02-13]. (Archived (PDF) from the original on 2022-04-07). 14. ^ Swedish standard SS 63 61 14 (PDF). [2022-04-14]. (Archived (PDF) from the original on 2022-04-16). 15. ^ Alan Kay. The Early History of Smalltalk. [2022-04-11]. (Archived from the original on 2011-04-29). I wound up in graduate school at the University of Utah in the Fall of 1966, ……. …… The documentation was incomprehensible. Supposedly, this was the Case-Western Reserve Algol – but it had been doctored to make a language called Simula; the documentation read like Norwegian transliterated into English, which in fact it was. There were uses of words like activity and process that didn’t seem to coincide with normal English usage. …… The weirdest part was the storage allocator, which did not obey a stack discipline as was usual for Algol. …… What Simula was allocating were structures very much like the instances of Sketchpad. There were descriptions that acted like masters and they could create instances, each of which was an independent entity. What Sketchpad called masters and instances, Simula called activities and processes. 
Moreover, Simula was a procedural language for controlling Sketchpad-like objects, thus having considerably more flexibility than constraints (though at some cost in elegance) ……. …… For the first time I thought of the whole as the entire computer and wondered why anyone would want to divide it up into weaker things called data structures and procedures. Why not divide it up into little computers, as time sharing was starting to? But not in dozens. Why not thousands of them, each simulating a useful structure? …… It is not too much of an exaggeration to say that most of my ideas from then on took their roots from Simula – but not as an attempt to improve it. It was the promise of an entirely new way to structure computations that took my fancy. As it turned out, it would take quite a few years to understand how to use the insights and to devise efficient mechanisms to execute them. 16. ^ ; Peter Bishop, Richard Steiger. A Universal Modular Actor Formalism for Artificial Intelligence (PDF). IJCAI. 1973 [2022-04-11]. (Archived (PDF) from the original on 2021-02-25). Alan Kay whose FLEX and SMALLTALK machines have influenced our work. Alan emphasized the crucial importance of using intentional definitions of data structures and of passing messages to them. This paper explores the consequences of generalizing the message mechanism of SMALLTALK and SIMULA-67; ……. 17. ^ Bjarne Stroustrup. A history of C++: 1979-1991 (PDF). History of programming languages---II (ACM). 1996: 699–769 [2022-03-22]. doi:10.1145/234286.1057836. (Archived (PDF) from the original on 2022-04-23). C++ was designed to provide Simula’s facilities for program organization together with C’s efficiency and flexibility for systems programming. James Gosling. The Feel of Java (PDF). 1997 [2022-04-27]. (Archived (PDF) from the original on 2022-02-28). Java is a blue collar language. It’s not PhD thesis material but a language for a job. 
Java feels very familiar to many different programmers because I had a very strong tendency to prefer things that had been used a lot over things that just sounded like a good idea. …… It has an object-oriented flavor that derives from a number of languages—Simula, C/C++, Objective C, Cedar/Mesa, Modula, and Smalltalk. 18. ^ O. -J. Dahl; C. A. R. Hoare. Hierarchical Program Structures. C. A. R. Hoare (ed.). Structured Programming. London, UK: Academic Press. 1972: 175–220. ISBN 978-0122005503. useful properties from the standpoint of concept modelling. …… ⑷ Language element. A block is itself a statement, which is a syntactic category of the language. Furthermore, through the procedure mechanism, reference to a block may be dissociated from its defining text. 19. Peter Naur; et al. Revised Report on the Algorithmic Language Algol 60. [2022-04-14]. (Archived from the original on 2007-06-25). A switch declaration defines the set of values of the corresponding switch designators. These values are given one by one as the values of the designational expressions entered in the switch list. With each of these designational expressions there is associated a positive integer, 1, 2, ..., obtained by counting the items in the list from left to right. The value of the switch designator corresponding to a given value of the subscript expression ( …… ) is the value of the designational expression in the switch list having this given value as its associated integer. 20. ^ Peter Naur; et al. Revised Report on the Algorithmic Language Algol 60. [2022-04-14]. (Archived from the original on 2007-06-25). Every block automatically introduces a new level of nomenclature. This is realized as follows: Any identifier occurring within the block may through a suitable declaration ( …… ) be specified to be local to the block in question. 
This means ⒜ that the entity represented by this identifier inside the blocks has no existence outside it and ⒝ that any entity represented by this identifier outside the block is completely inaccessible inside the block. Identifiers (except those representing labels) occurring within a block and not being declared to this block will be non-local to it, i.e. will represent the same entity inside the block and in the level immediately outside it. A label separated by a colon from a statement, i.e. labelling that statement, behaves as though declared in the head of the smallest embracing block, i.e. the smallest block whose brackets begin and end enclose that statement. In this context a procedure body must be considered as if it were enclosed by begin and end and treated as a block. Since a statement of a block may again itself be a block the concepts local and non-local to a block must be understood recursively. Thus an identifier, which is non-local to a block A, may or may not be non-local to the block B in which A is one statement. 21. ^ O. -J. Dahl; C. A. R. Hoare. Hierarchical Program Structures. C. A. R. Hoare (ed.). Structured Programming. London, UK: Academic Press. 1972: 175–220. ISBN 978-0122005503. One of the most powerful mechanisms for program structuring in ALGOL 60 is the block and procedure concept. It has the following useful properties from the standpoint of concept modelling. Duality. A block head and block tail together define an entity which has properties and performs actions. Furthermore the properties may include a data structure as well as associated operators (local procedures). 22. ^ O. -J. Dahl; C. A. R. Hoare. Hierarchical Program Structures. C. A. R. Hoare (ed.). Structured Programming. London, UK: Academic Press. 1972: 175–220. ISBN 978-0122005503. useful properties from the standpoint of concept modelling. …… Decomposition. 
A block where only local quantities are referenced is a completely selfcontained program component, which will function as specified in any context. Through a procedure heading a block (procedure) instance may interact with a calling sequence. Procedures which reference or change non-local variables represent a partial decomposition of the total task, which is useful for direct interaction with the program environment. 23. ^ O. -J. Dahl; C. A. R. Hoare. Hierarchical Program Structures. C. A. R. Hoare (ed.). Structured Programming. London, UK: Academic Press. 1972: 175–220. ISBN 978-0122005503. In ALGOL 60, the rules of the language have been carefully designed to ensure that the lifetimes of block instances are nested, in the sense that those instances that are latest activated are the first to go out of existence. It is this feature that permits an ALGOL 60 implementation to take advantage of a stack as a method of dynamic storage allocation and relinquishment. But it has the disadvantage that a program which creates a new block instance can never interact with it as an object which exists and has attributes, since it has disappeared by the time the calling program regains control. Thus the calling program can observe only the results of the actions of the procedures it calls. Consequently, the operational aspects of a block are overemphasised; and algorithms (for example, matrix multiplication) are the only concepts that can be modelled. 24. O. -J. Dahl; C. A. R. Hoare. Hierarchical Program Structures. C. A. R. Hoare (ed.). Structured Programming. London, UK: Academic Press. 1972: 175–220. ISBN 978-0122005503. useful properties from the standpoint of concept modelling. …… ⑶ Class of instances. In ALGOL 60 a sharp distinction is made between a block, which is a piece of program text, and a dynamic block instance, which is (a component of) a computing process. 
An immediate and useful consequence is that a block may be identified with the class of its potential activations. (Strictly speaking, a "block" in this context means either the outermost block or a block immediately enclosed by a dynamic block instance.) Through the recursion mechanism of ALGOL 60, different instances of the same block may co-exist in a computing process at the same time.

25. Some History of Functional Programming Languages (PDF). Invited lecture at TFP12, St Andrews University. 12 June 2012 [2022-05-04]. (Archived (PDF) from the original on 2020-04-15). Algol 60 allowed textually nested procedures and passing procedures as parameters (but not returning procedures as results). The requirement in the copying rule for systematic change of identifiers has the effect of enforcing static (that is, lexicographic) binding of free variables. In their book "Algol 60 Implementation", Randell and Russell (1964, Sect. 2.2) handle this by two sets of links between stack frames. The dynamic chain links each stack frame, representing a procedure call, to the frame that called it. The static chain links each stack frame to that of the textually containing procedure, which might be much further away. Free variables are accessed via the static chain. This mechanism works well for Algol 60, but in a language in which functions can be returned as results, a free variable might be held onto after the function call in which it was created has returned, and will no longer be present on the stack. Landin (1964) solved this in his SECD machine. A function is represented by a closure, consisting of code for the function plus the environment for its free variables. The environment is a linked list of name-value pairs. Closures live in the heap.

26. Ole-Johan Dahl; Bjørn Myhrhaug. The Simula Implementation Guide (PDF). 1967, 1973 [2022-04-17]. (Archived (PDF) from the original on 2022-04-26).
A procedure deviates from a block in that (1) it has a name, (2) it may be referred to at several different places in the program, and (3) parameters may be given to it when invoked. A procedure shares the property of a block that it is impossible to establish a reference to it or to its interior.

27. Peter Naur; et al. Revised Report on the Algorithmic Language Algol 60. [2022-04-14]. (Archived from the original on 2007-06-25). Name replacement (call by name). Any formal parameter not quoted in the value list is replaced, throughout the procedure body, by the corresponding actual parameter, after enclosing this latter in parentheses wherever syntactically possible. Possible conflicts between identifiers inserted through this process and other identifiers already present within the procedure body will be avoided by suitable systematic changes of the formal or local identifiers involved.

Some History of Functional Programming Languages (PDF). Invited lecture at TFP12, St Andrews University. 12 June 2012 [2022-05-04]. (Archived (PDF) from the original on 2020-04-15). Algol 60 is not normally thought of as a functional language, but its rules for procedures (the Algol equivalent of functions) and variable binding were closely related to those of λ-calculus. The Revised Report on Algol 60 (Naur 1963) is a model of precise technical writing. It defines the effect of a procedure call by a copying rule with a requirement for systematic change of identifiers where needed to avoid variable capture — exactly like β-reduction. Although formal parameters could be declared value, the default parameter passing mode was call by name, which required the actual parameter to be copied unevaluated into the procedure body at every occurrence of the formal parameter. This amounts to normal order reduction (but not ……, there is no sharing). The use of call by name allowed an ingenious programming technique: ……

28. Peter Naur; et al. Revised Report on the Algorithmic Language Algol 60. [2022-04-14]. (Archived from the original on 2007-06-25).
Any occurrence of the procedure identifier within the body of the procedure other than in a left part in an assignment statement denotes activation of the procedure.

29. Peter Naur; et al. Revised Report on the Algorithmic Language Algol 60. [2022-04-14]. (Archived from the original on 2007-06-25). Values of function designators. For a procedure declaration to define the value of a function designator there must, within the procedure declaration body, occur one or more explicit assignment statements with the procedure identifier in a left part; at least one of these must be executed, and the type associated with the procedure identifier must be declared through the appearance of a type declarator as the very first symbol of the procedure declaration. The last value so assigned is used to continue the evaluation of the expression in which the function designator occurs.

30. Peter Naur; et al. Revised Report on the Algorithmic Language Algol 60. [2022-04-14]. (Archived from the original on 2007-06-25). Parameter delimiters. All parameter delimiters are understood to be equivalent. No correspondence between the parameter delimiters used in a procedure statement and those used in the procedure heading is expected beyond their number being the same. Thus the information conveyed by using the elaborate ones is entirely optional.

31. Peter Naur; et al. Revised Report on the Algorithmic Language Algol 60. [2022-04-14]. (Archived from the original on 2007-06-25). Value assignment (call by value). All formal parameters quoted in the value part of the procedure declaration heading are assigned the values ( …… ) of the corresponding actual parameters, these assignments being considered as being performed explicitly before entering the procedure body. The effect is as though an additional block embracing the procedure body were created in which these assignments were made to variables local to this fictitious block with types as given in the corresponding specifications ( …… ).
As a consequence, variables called by value are to be considered as non-local to the body of the procedure, but local to the fictitious block ( …… ).

32. O.-J. Dahl; C. A. R. Hoare. Hierarchical Program Structures. In: C. A. R. Hoare (ed.). Structured Programming. London, UK: Academic Press, 1972: 175–220. ISBN 978-0122005503. A procedure which is capable of giving rise to block instances which survive its call will be known as a class; and the instances will be known as objects of that class. A class may be declared, with or without parameters, in exactly the same way as a procedure ……

33. Kristen Nygaard; Ole-Johan Dahl. The Development of the Simula Languages (PDF). 1978 [2022-03-14]. (Archived (PDF) from the original on 2022-01-20). the ALGOL-like call by name parameters were out of the question for reasons of security and storage allocation strategy: the actual parameter could be lost during the lifetime of an object. The problem then was to find a name-parameter-like mechanism that would guarantee a safe place for the actual parameter. After much trial and error we hit on the virtual quantity concept, where the actual would have to be declared in the object itself, but at a deeper subclass level than that of the virtual specification. Now generalized objects could be defined whose behaviour pattern could be left partly unspecified in a prefix class body. Different subclasses could contain different actual procedure declarations.

34. Ole-Johan Dahl. The Birth of Object Orientation: the Simula Languages (PDF). 2001 [2022-04-13]. (Archived (PDF) from the original on 2021-08-10). In general attribute identifiers may be redeclared in subclasses, as is the case for inner blocks. The identity of an attribute is determined by the prefix level of the accessing occurrence, or, if the access is remote, by the class qualifying the object reference in question. In this way any ambiguity of identifier binding is resolved textually, i.e. at compile time; we call it static binding.

35. Peter Naur; et al.
Revised Report on the Algorithmic Language Algol 60. [2022-04-14]. (Archived from the original on 2007-06-25). Function designators define single numerical or logical values which result through the application of given sets of rules defined by a procedure declaration ( …… ) to fixed sets of actual parameters.

36. Peter Naur; et al. Revised Report on the Algorithmic Language Algol 60. [2022-04-14]. (Archived from the original on 2007-06-25). A procedure statement serves to invoke (call for) the execution of a procedure body ( …… ). Where the procedure body is a statement written in Algol, the effect of this execution will be equivalent to the effect of performing the following operations on the program at the time of execution of the procedure statement.

37. Kristen Nygaard; Ole-Johan Dahl. The Development of the Simula Languages (PDF). 1978 [2022-03-14]. (Archived (PDF) from the original on 2022-01-20). In ALGOL, blocks (including procedures) are seen externally as generalized operations. By introducing mechanisms for quasi-parallel sequencing, essentially the same construct could play the role of processes in parallel, and through mechanisms for naming block instances and accessing their contents they could function as generalized data objects. The essential benefits of combining data and operations in a single construct were already there to be explored.

38. Kristen Nygaard; Ole-Johan Dahl. The Development of the Simula Languages (PDF). 1978 [2022-03-14]. (Archived (PDF) from the original on 2022-01-20). An object would start its life like an instance of a function procedure, invoked by the evaluation of a generating expression. During this phase the object might initialize its own local variables. Then, on passage through the end of the object or as the result of a new basic operation "detach", control would return to the generating expression delivering a reference to the object as the function value.
In the former case the object was "terminated" with no further own actions; in the latter case it had become a "detached object" capable of functioning as a "coroutine". The basic coroutine call "resume (<object reference>)" would make control leave the active object, leaving behind a reactivation point at the end of the resume statement, and enter the referenced object at its reactivation point.

39. Ole-Johan Dahl; Bjørn Myhrhaug. The Simula Implementation Guide (PDF). 1967, 1973 [2022-04-16]. (Archived (PDF) from the original on 2022-04-16). The procedure "call" is not a part of the Common Base, but is a natural part of a SIMULA 67 Common Base implementation. …… This definition of call is tentative, since the problem is currently being studied by a Technical Committee under the SIMULA Standards Group.

40. O.-J. Dahl; C. A. R. Hoare. Hierarchical Program Structures. In: C. A. R. Hoare (ed.). Structured Programming. London, UK: Academic Press, 1972: 175–220. ISBN 978-0122005503. In SIMULA, a coroutine is represented by an object of some class, co-operating by means of resume instructions with objects of the same or another class, which are named by means of reference variables. …… Thus a main program may establish a coroutine relationship with an object that it has generated, using the call/detach mechanism instead of the more symmetric resume/resume mechanism. In this case, the generated object remains subordinate to the main program, and for this reason is sometimes known as a semicoroutine. …… Let X and Y be objects, generated by a "master program" M. Assume that M issues a call (X), thereby invoking an "active phase" of X, terminated by a detach operation in X; and then issues a call (Y), and so forth. In this way M may act as a "supervisor" sequencing a pattern of active phases of X, Y, and other objects.
Each object is a "slave", which responds with an active phase each time it is called for, whereas M has the responsibility to define the large-scale pattern of the entire computation. Alternatively the decision making may be "decentralised", allowing an object itself to determine its dynamic successor by a resume operation. The operation resume (Y), executed by X, combines an exit out of X (by detach) and a subsequent call (Y), thereby bypassing M. The obligation to return to M is transferred to Y.

41. Jaroslav Sklenar. Introduction to OOP in SIMULA. 1997 [2022-02-13]. (Archived from the original on 2022-05-11). The system class Simulation introduces the notion of time. It means that if nested, there will be other local (nested) times.
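The call/detach supervisor pattern quoted in notes 38–40 can be loosely mimicked with Python generators. This is only an illustrative analogy under stated assumptions (the `worker` and `log` names are invented here, and `yield` stands in for detach), not Simula's actual semantics:

```python
# Sketch: a Simula-style "semicoroutine" mimicked with Python generators.
# yield plays the role of detach: it suspends the object and returns
# control to the master program, which re-enters it at the reactivation
# point with next(). This is an analogy, not Simula's actual semantics.

log = []

def worker(name):
    # An "object" whose active phases are separated by detach (yield).
    log.append(f"{name}: phase 1")
    yield            # detach: control returns to the master
    log.append(f"{name}: phase 2")
    yield

# The master program M sequences active phases of X and Y,
# as in the quoted call(X)/call(Y) supervisor pattern.
x, y = worker("X"), worker("Y")
for obj in (x, y, x, y):     # call(X); call(Y); call(X); call(Y)
    next(obj)
```

After the loop, `log` holds the interleaved active phases `["X: phase 1", "Y: phase 1", "X: phase 2", "Y: phase 2"]`, mirroring how M supervises X and Y.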
# A Hard Function Problem

A function $f$ is defined for all $x$ by $\large f(x) = \frac{ax+b}{cx+d}.$ If $a, b, c$, and $d$ are real numbers and the function satisfies $f(19)=19$, $f(97) = 97$, and $f(f(x)) = x$ for every value of $x$ except $\large -\frac{d}{c}$, find the range of the function.

Note by Jason Chrysoprase 3 years, 4 months ago
I'll assume that the domain is $\mathbb{R} - \{-\frac{d}{c}\}$. If $\dfrac{b}{a} \neq \dfrac{d}{c}$, then the range is $\mathbb{R}-\{\frac{a}{c}\}$ if $c \neq 0$ and $\mathbb{R}$ otherwise. If equality held, then $f(x)$ would be a constant function, which contradicts the fact that $f(19) \neq f(97)$. In conclusion, the range is $\mathbb{R}-\{\frac{a}{c}\}$ if $c \neq 0$, and $\mathbb{R}$ otherwise. Edit: After finding $a, b, c, d$, either $f(x)=x$ or $f(x)=\dfrac{58x-19 \times 97}{x-58}$. So the range is either $\mathbb{R}$ or $\mathbb{R} - \{58\}$. - 3 years, 4 months ago

So for every $x$ we put in, the answer is always real? What about $f(0)$? - 3 years, 4 months ago

Oops, sorry for the late reply. What's the problem if we take $f(0)$? $f(0) = \dfrac{a \times 0 + b}{c \times 0 + d} = \dfrac{b}{d}$. Since $-\dfrac{d}{c}$ is removed from the domain, the answer $\dfrac{b}{d}$ is real too. - 3 years, 4 months ago

I don't know functions well; can you please explain a bit more about how you found the range? - 3 years, 4 months ago

The answer is $\mathbb{R} - \{58\}$, since there is no $x$ value that maps to $58$. - 3 years, 4 months ago

Domain: $\{x\in \mathbb{R} : x \neq 58\}$. Range: $\{f \in \mathbb{R} : f \neq 58\}$. - 3 years, 4 months ago

Practice the challenge quizzes in the chapter and read the wikis. Staff - 3 years, 4 months ago

Thank you sir :-) - 3 years, 4 months ago

Wow, that guy really wrote the same problem. We both came from Indonesia :) - 3 years, 4 months ago

Anyone? - 3 years, 4 months ago

Why not me??? - 3 years, 4 months ago

Sorry, forgot u :( - 3 years, 4 months ago
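The conclusion reached in the thread can be checked numerically. This is a sketch: taking $c=1$ gives $a=58$, $b=-19\times 97$, $d=-58$ (from the fixed-point and involution conditions), so the non-identity solution is exactly the $f$ found above:

```python
# Numeric check of the thread's conclusion: with f(19)=19, f(97)=97 and
# f(f(x)) = x, the non-identity solution is f(x) = (58x - 19*97)/(x - 58),
# whose range omits exactly a/c = 58.

def f(x):
    return (58 * x - 19 * 97) / (x - 58)

# fixed points
assert f(19) == 19 and f(97) == 97

# involution: f(f(x)) = x at sample points away from x = 58
for x in [-5.0, 0.0, 3.7, 100.0]:
    assert abs(f(f(x)) - x) < 1e-9

# 58 is never attained: f(x) - 58 = 1521/(x - 58), which has no zero
assert all(abs(f(x) - 58) > 1e-6 for x in range(-1000, 1000) if x != 58)
```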
# Deep Learning to Control my Robotic Arm

This is the third installment in the chronicle of my attempt to build a robotic arm to make me tea. For the mechanical build, see here, and for the electrical and software groundwork for this post, see here.

This thing is borderline impossible to control with an Xbox controller. Not only are there too many joints, there is no notion of correcting for the forces of gravity. As such, my first plan of attack is to see if I can build some controls that make it easier to control the arm by hand – specifically, when I release the controls on my Xbox controller, I would like the robot to stop moving rather than come crashing down. From here, the hope is that I can then do point-to-point movement and "program" it to do tasks by running through a set of predefined states.

This roadmap is designed to be sample efficient. Unlike Google, I do not have an army of these arms at my disposal. This means that the more "traditional" deep reinforcement learning approaches (model-free control) are out of the question, as they are just too sample inefficient. It's going to take a while then to get to this point, but this is what I have so far. In this post, I review the data collection and processing, discuss the forward dynamics model training, and finally address the model predictive control algorithm I employed and some initial results when applied in only one dimension. This post is almost entirely derived from "Model-based Reinforcement Learning with Neural Network Dynamics", both the blog post and the paper. I highly recommend both.

### Data pre-processing:

As mentioned before, I log all data that is sent (motor commands) and received (accelerometer and gyro readings). To collect some data, I ran the robot with random-ish movements (generated by me and an Xbox controller) for roughly 10 minutes. These are saved to ndjson files on disk.
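Aligning asynchronously logged channels onto one clock can be sketched as follows. This is illustrative only: the timestamps and values are synthetic stand-ins for my logs, and only the use of np.interp and the 100 Hz rate match what I actually do.

```python
import numpy as np

# Sketch: align two asynchronously-logged channels (e.g. gyro readings
# and motor commands) onto a common 100 Hz clock with np.interp.
# The timestamps/values below are synthetic stand-ins for the ndjson logs.

rate_hz = 100.0

# channel 1: sensor samples at irregular times (seconds)
t_sensor = np.array([0.000, 0.011, 0.019, 0.032, 0.040])
v_sensor = np.array([0.0,   0.5,   0.9,   0.4,   0.1])

# channel 2: command samples at different, also irregular, times
t_cmd = np.array([0.000, 0.015, 0.030, 0.045])
v_cmd = np.array([0.0,   1.0,  -1.0,   0.0])

# common clock spanning the overlap of both channels
t0 = max(t_sensor[0], t_cmd[0])
t1 = min(t_sensor[-1], t_cmd[-1])
t_grid = np.arange(t0, t1, 1.0 / rate_hz)

sensor_resampled = np.interp(t_grid, t_sensor, v_sensor)
cmd_resampled = np.interp(t_grid, t_cmd, v_cmd)

# now both channels share one time base and can be stacked into (s_t, a_t)
assert sensor_resampled.shape == cmd_resampled.shape
```

Note that np.interp requires the sample times to be increasing, which logged timestamps naturally are.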
Importantly, data are received at specific instances in time, at different frequencies, and are not aligned. I resample the data at some fixed rate to correct for this. For this first test, I am resampling at 100 Hz using the wonderful np.interp.

### Forward Modeling:

My forward dynamics model takes the current sensor readings (the state, in reinforcement learning speak) and the control commands (the actions), and predicts the sensor readings 1/100th of a second later. To complicate things, these commands do not take effect instantly. This is partially attributable to the software side of things, and partially to the mechanical system itself – a motor cannot simply go from off to full power instantaneously. As such, this model must take this hidden information into account when making predictions. Technically speaking, the system I am modeling is a POMDP.

The model I chose to work with first is an LSTM, as it's capable of modeling these hidden states naturally. I used a 64-unit hidden state model, with a linear layer transforming the output into the predicted state deltas. As done in Nagabandi et al., instead of predicting the next state, the model predicts the difference from the current state to the next state. Additionally, to keep the predictions at a sane scale (motors don't move all that much in 1/100th of a second), I normalized the differences to zero mean and unit variance by estimating the mean and variance of the deltas over the training data. The full update can be written as $s_{t+1} = s_t + \sigma \, p(s_t, a_t; \theta) + \mu$ where $p(\cdot)$ is a normalized sample, and $\sigma$ and $\mu$ are the normalizing standard deviation and mean of the training data, respectively. Additionally, unlike the work from Nagabandi et al., I chose to use a stochastic model instead of a deterministic one.
If you do the math, their model can also be written down exactly as optimizing log likelihood under a normal distribution with a fixed variance, but I figured I would just model it explicitly as such, and learn the normal distribution's variance while I am at it. Instead of the mean squared error loss, I minimize the negative log likelihood of the next state given the current $(s_t, a_t)$ and the past information $h_t$.

For training data, I take 10-second random slices and run them through the model (with teacher forcing) with a batch size of 32. Additionally, I have a "test set" – a set of data collected after turning the robot off and on again – to get at least some measure of overfitting. I train with the Adam optimizer and stopped early when the test loss stopped improving (in this case, after 30 minutes). As a quick sanity check of the model, I recreated a plot from Nagabandi et al., where they run the forward dynamics model for some amount of time and compare it to the ground truth data. Despite a seemingly horribly high loss, the predicted trajectories on test data appear reasonably good – at least good enough to test for control.

### Model Predictive Control Policy:

With the model done, I next turned to actually using it. A forward dynamics model alone is not enough to do control; one needs to convert it into some form of policy. For this, I used model predictive control. Despite the fancy Wikipedia page, I believe this is actually a rather simple procedure – first, "plan" somehow into the future using the current model, then perform only the first step of this plan. This brings you to a new state, in which another round of planning occurs.

As my first control task, I tried simply to return to a given state. To keep things simple, instead of "planning" I picked a few fixed action sets – in my case, setting the motor on for 3 seconds each at one of the following powers: 1.0, 0.5, 0.0, -0.5, -1.0. I used these "plans" and did rollouts under the model.
The best plan was the action sequence that put the final state closest to the target position. This is totally not ideal, and will not even converge in a lot of cases, but it's a start! I tried this on the robot, and it almost kinda works, but it oscillates around the correct solution.

Why does it oscillate? I am not entirely sure, but I am fairly confident that it has to do with the control frequency and latency in the model. First, I am issuing the command AFTER the model has finished making predictions. This means that while the model is churning away, the previous command is still executing on the robot and modifying the current state. This type of time delay can (and seems to) lead to oscillations. To validate this, I increased the amount of planning compute (and with it the latency), and the oscillations got bigger.

### Next up:

I have a few ideas as to how to remedy this time delay issue – I think it's enough to set a fixed control frequency (say, 10 Hz) and, in the planning stage, plan as if the previous action were being executed for that amount of time. Doing this, however, will require another few weekends. I am going to put this effort on hold for now and shift focus to a second, more powerful version of the mechanical design. Update soon!
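For reference, the plan-then-execute-first-step loop described above can be sketched as follows. Everything here is schematic: `model_step` is a stand-in toy dynamics (a leaky integrator), not my learned LSTM; only the candidate powers and the 3-second horizon come from this post.

```python
# Schematic MPC loop as described above: try a few fixed constant-power
# "plans", roll each out under a (stand-in) dynamics model, and pick the
# plan whose final state lands closest to the target. Only the first action
# of the winning plan is executed before re-planning.

CANDIDATE_POWERS = [1.0, 0.5, 0.0, -0.5, -1.0]
HORIZON_STEPS = 300          # 3 s at 100 Hz

def model_step(state, action):
    # Stand-in for the learned forward model: a leaky integrator.
    # The real model is an LSTM predicting normalized state deltas.
    return 0.99 * state + 0.05 * action

def plan(state, target):
    """Pick the constant power whose rollout ends nearest the target."""
    def final_state(power):
        s = state
        for _ in range(HORIZON_STEPS):
            s = model_step(s, power)
        return s
    return min(CANDIDATE_POWERS, key=lambda p: abs(final_state(p) - target))

# One step of the control loop: plan, then execute only the first action.
state, target = 0.0, 1.0
action = plan(state, target)
state = model_step(state, action)
```

Note that with such coarse fixed plans the chosen power can be a poor compromise near the target, which is exactly the non-convergence caveat mentioned above.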
# Solved: The distribution of the SAT scores in math for an incoming class of business students has a mean of 590 and standard deviation of 22

#### By Dr. Raju Chaudhari, Sep 20, 2020

The distribution of the SAT scores in math for an incoming class of business students has a mean of 590 and a standard deviation of 22. Assume that the scores are normally distributed.

a. Find the probability that an individual's SAT score is less than 550.
b. Find the probability that an individual's SAT score is between 550 and 600.
c. Find the probability that an individual's SAT score is greater than 600.
d. What percentage of students will have scored better than 700?
e. Find the standardized values for the students scoring 550, 600, 650, and 700 on the test.

#### Solution

Given the mean $\mu=590$ and standard deviation $\sigma = 22$.

a. The probability that an individual's SAT score is less than 550 is
\begin{aligned} P(X < 550) &= P\bigg(\frac{X-\mu}{\sigma} < \frac{550-590}{22}\bigg)\\ &= P\bigg(Z < -1.818\bigg)\\ &=0.0345 \end{aligned}

b. The probability that an individual's SAT score is between 550 and 600 is
\begin{aligned} P(550 < X< 600) &= P(X < 600) - P(X < 550)\\ &= P\bigg(\frac{X-\mu}{\sigma} < \frac{600-590}{22}\bigg)-P\bigg(\frac{X-\mu}{\sigma} < \frac{550-590}{22}\bigg)\\ &=P\bigg(Z < 0.455\bigg)-P\bigg(Z < -1.818\bigg)\\ &=0.6753-0.0345\\ &= 0.6408 \end{aligned}

c. The probability that an individual's SAT score is greater than 600 is
\begin{aligned} P(X > 600) &= 1-P(X\leq 600)\\ &= 1- P\bigg(\frac{X-\mu}{\sigma} \leq \frac{600-590}{22}\bigg)\\ &=1- P\bigg(Z \leq 0.455\bigg)\\ &=1-0.6753\\ &= 0.3247 \end{aligned}

d. The probability that a student scored better than 700 is
\begin{aligned} P(X > 700) &= 1-P(X\leq 700)\\ &= 1- P\bigg(\frac{X-\mu}{\sigma} \leq \frac{700-590}{22}\bigg)\\ &=1- P\bigg(Z \leq 5\bigg)\\ &\approx 1-1\\ &= 0 \end{aligned}
Approximately 0% of the students will have scored better than 700.

e.
The standardized values:

The standardized value for a student scoring 550 is
\begin{aligned} Z& =\frac{550-590}{22}\\ &= -1.818 \end{aligned}

The standardized value for a student scoring 600 is
\begin{aligned} Z& =\frac{600-590}{22}\\ &= 0.455 \end{aligned}

The standardized value for a student scoring 650 is
\begin{aligned} Z& =\frac{650-590}{22}\\ &= 2.727 \end{aligned}

The standardized value for a student scoring 700 is
\begin{aligned} Z& =\frac{700-590}{22}\\ &= 5 \end{aligned}
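The z-values and probabilities above can be verified with Python's standard library (a quick check using `statistics.NormalDist`; the variable names here are my own):

```python
from statistics import NormalDist

# Check the SAT-score probabilities with mu = 590, sigma = 22.
sat = NormalDist(mu=590, sigma=22)

p_below_550 = sat.cdf(550)                  # part (a)
p_550_to_600 = sat.cdf(600) - sat.cdf(550)  # part (b)
p_above_600 = 1 - sat.cdf(600)              # part (c)
p_above_700 = 1 - sat.cdf(700)              # part (d): essentially 0

assert abs(p_below_550 - 0.0345) < 5e-4
assert abs(p_550_to_600 - 0.6408) < 5e-4
assert abs(p_above_600 - 0.3247) < 5e-4
assert p_above_700 < 1e-6

# part (e): standardized values z = (x - mu) / sigma
z = [(x - 590) / 22 for x in (550, 600, 650, 700)]
assert [round(v, 3) for v in z] == [-1.818, 0.455, 2.727, 5.0]
```

The tiny discrepancies against the table-based figures above come only from rounding z to three decimals.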
MFCS 2022: 47TH INTERNATIONAL SYMPOSIUM ON MATHEMATICAL FOUNDATIONS OF COMPUTER SCIENCE

PROGRAM FOR THURSDAY, AUGUST 25TH

09:00-10:15 Session 15A

09:00 Improved Approximation Algorithms for the Traveling Tournament Problem
PRESENTER: Jingyang Zhao

ABSTRACT. The Traveling Tournament Problem (TTP) is a well-known benchmark problem in the field of tournament timetabling, which asks us to design a double round-robin schedule such that each pair of teams plays one game in each other's home venue, minimizing the total distance traveled by all $n$ teams ($n$ is even). TTP-$k$ is the problem with the additional constraint that each team can have at most $k$ consecutive home games or away games. The case where $k=3$, TTP-3, is one of the most investigated cases. In this paper, we improve the approximation ratio of TTP-3 from $(1.667+\epsilon)$ to $(1.598+\epsilon)$, for any $\epsilon>0$. Previous schedules were constructed based on a Hamiltonian cycle of the graph. We propose a novel construction based on triangle packing. Then, by combining our triangle packing schedule with the Hamiltonian cycle schedule, we obtain the improved approximation ratio. The idea of our construction can also be extended to $k\geq 4$. We demonstrate that the approximation ratio of TTP-4 can be improved from $(1.750+\epsilon)$ to $(1.700+\epsilon)$ by the same method. As an additional product, we also improve the approximation ratio of LDTTP-3 (TTP-3 where all teams are allocated on a straight line) from $4/3$ to $(6/5+\epsilon)$.

09:25 Constant-factor approximation algorithm for binary search in trees with monotonic query times

ABSTRACT. We consider a generalization of binary search in linear orders to the domain of weighted trees. The goal is to design an adaptive search strategy that locates an unknown target vertex of a given tree.
Each query to a vertex v incurs a non-negative cost ω(v) (which can be interpreted as the duration of the query) and returns feedback: either v is the target, or the edge incident to v that lies on the path towards the target is revealed. The goal is to find a strategy that minimizes the worst-case total cost. We propose a constant-factor approximation algorithm for trees with a monotonic cost function. Such a function is defined as follows: there exists a vertex r such that for any two vertices u, v on any path connecting r with a leaf, if u is closer to r than v, then ω(u) ≥ ω(v). The best-known approximation algorithm for general weight functions has a ratio of O(√log n) [Dereniowski et al. ICALP 2017], and it remains a challenging open question whether a constant-factor approximation is achievable in that case. This is our first motivation for considering monotonic cost functions; the second lies in the potential applications.

09:50 Rabbits approximate, cows compute exactly!

ABSTRACT. Valiant, in his seminal paper in 1979, showed an efficient simulation of algebraic formulas by determinants, showing that $\mathsf{VF}$, the class of polynomial families computable by polynomial-sized algebraic formulas, is contained in $\mathsf{VDet}$, the class of polynomial families computable by polynomial-sized determinants. Whether this containment is strict has been a long-standing open problem. We show that algebraic formulas can in fact be efficiently simulated by the determinant of tetradiagonal matrices, transforming the open problem into a problem about the determinant of general matrices versus the determinant of tetradiagonal matrices with just three non-zero diagonals.
This is also optimal in the sense that we cannot hope to get the same result for matrices with only two non-zero diagonals or even tridiagonal matrices, thanks to Allender and Wang (Computational Complexity '16), who showed that the determinant of tridiagonal matrices cannot even compute simple polynomials like $x_1x_2 + x_3x_4 + \cdots + x_{15}x_{16}$. Our proof involves a structural refinement of the simulation of algebraic formulas by width-3 algebraic branching programs by Ben-Or and Cleve (SIAM Journal of Computing '92). The tetradiagonal matrices we obtain in our proof are also structurally very similar to the tridiagonal matrices of Bringmann, Ikenmeyer and Zuiddam (JACM '18), who showed that, if we allow approximations in the sense of geometric complexity theory, algebraic formulas can be efficiently simulated by the determinant of tridiagonal matrices of a very special form, namely the continuant polynomial. The continuant polynomial family is closely related to the Fibonacci sequence, which was used to model the breeding of rabbits. The determinants of our tetradiagonal matrices, in comparison, are closely related to Narayana's cows sequence, which was originally used to model the breeding of cows. Our result shows that the need for approximation can be eliminated by using Narayana's cows polynomials instead of continuant polynomials, or equivalently, by shifting one of the outer diagonals of a tridiagonal matrix one place away from the center. Conversely, we observe that the determinant (or permanent) of band matrices can be computed by polynomial-sized algebraic formulas when the bandwidth is bounded by a constant, showing that the determinants (or permanents) of bandwidth-$k$ matrices for all constants $k \geq 2$ yield $\mathsf{VF}$-complete polynomial families. In particular, this implies that the determinant of tetradiagonal matrices in general, and Narayana's cows polynomials in particular, yield complete polynomial families for the class $\mathsf{VF}$.
09:00-10:15 Session 15B

09:00 A Complexity Approach to Tree Algebras: the Polynomial Case

ABSTRACT. In this paper, we consider infinitely sorted tree algebras recognising regular languages of finite trees. We pursue their analysis under the angle of their asymptotic complexity, i.e. the asymptotic size of the sorts as a function of the number of variables involved. Our main result establishes an equivalence between the languages recognised by algebras of polynomial complexity and the languages that can be described by nominal word automata that parse linearisations of the trees. On the way, we show that for such algebras, having polynomial complexity corresponds to having uniformly boundedly many orbits under permutation of the variables, or having a notion of bounded support (in a sense similar to the one in nominal sets). We also show that being recognisable by an algebra of polynomial complexity is a decidable property for a regular language of trees.

09:25 Bounding the Escape Time of a Linear Dynamical System over a Compact Semialgebraic Set

ABSTRACT. We study the Escape Problem for discrete-time linear dynamical systems over compact semialgebraic sets. We establish a uniform upper bound on the number of iterations it takes for every orbit of a rational matrix to escape a compact semialgebraic set defined over rational data. Our bound is doubly exponential in the ambient dimension, singly exponential in the degrees of the polynomials used to define the semialgebraic set, and singly exponential in the bitsize of the coefficients of these polynomials and the bitsize of the matrix entries. We show that our bound is tight by providing a matching lower bound.

09:50 Algebraic Representations of Unique Bipartite Perfect Matching

ABSTRACT. We obtain complete characterizations of the Unique Bipartite Perfect Matching function, and of its Boolean dual, using multilinear polynomials over the reals.
Building on previous results, we show that, surprisingly, the dual description is \textit{sparse} and has \textit{low $\ell_1$-norm} -- only exponential in $\Theta(n \log n)$, and this result extends even to other families of matching-related functions. Our approach relies on the M\"obius numbers in the matching-covered lattice, and a key ingredient in our proof is M\"obius' inversion formula. These polynomial representations yield complexity-theoretic results. For instance, we show that unique bipartite matching is \textit{evasive} for classical decision trees, and \textit{nearly evasive} even for generalized query models. We also obtain a tight $\Theta(n \log n)$ bound on the log-rank of the associated two-party communication task. 10:15-10:50 Coffee Break 10:50-12:10 Session 16A 10:50 New Lower Bounds and Upper Bounds for Listing Avoidable Vertices ABSTRACT. We consider the problem of listing all avoidable vertices in a given $n$-vertex graph. A vertex is avoidable if every pair of its neighbors is connected by a path whose internal vertices are not neighbors of the vertex or the vertex itself. Recently, Papadopoulos and Zisis showed that one can list all avoidable vertices in $O(n^{\omega+1})$ time, where $\omega<2.373$ is the square matrix multiplication exponent, and conjectured that a faster algorithm is not possible. In this paper we show that under the $3$-OV Hypothesis, and thus the Strong Exponential Time Hypothesis, $n^{3-o(1)}$ time is needed to list all avoidable vertices, and thus the current best algorithm is conditionally optimal if $\omega=2$. We then show that if $\omega>2$, one can obtain an improved algorithm that for the current value of $\omega$ runs in $O(n^{3.32})$ time.
We also show that our conditional lower bound is actually higher and supercubic, under a natural High Dimensional $3$-OV hypothesis, implying that for our current knowledge of rectangular matrix multiplication, the avoidable vertex listing problem likely requires $\Omega(n^{3.25})$ time. We obtain further algorithmic improvements for sparse graphs and bounded degree graphs. 11:15 The Hamilton compression of highly symmetric graphs ABSTRACT. We say that a Hamilton cycle C=(x_1,...,x_n) in a graph G is k-symmetric if the mapping x_i ↦ x_{i+n/k} for all i=1,...,n, where indices are considered modulo n, is an automorphism of G. In other words, if we lay out the vertices x_1,...,x_n equidistantly on a circle and draw the edges of G as straight lines, then the drawing of G has k-fold rotational symmetry, i.e., all information about the graph is compressed into a 360°/k wedge of the drawing. We refer to the maximum k for which there exists a k-symmetric Hamilton cycle in G as the Hamilton compression of G. We investigate the Hamilton compression of four different families of vertex-transitive graphs, namely hypercubes, Johnson graphs, permutahedra and Cayley graphs of abelian groups. In several cases we determine their Hamilton compression exactly, and in other cases we provide close lower and upper bounds. The cycles we construct have a much higher compression than several classical Gray codes known from the literature. Our constructions also yield Gray codes for bitstrings, combinations and permutations that have few tracks and/or that are balanced. 11:40 On Dynamic α + 1 Arboricity Decomposition and Out-Orientation ABSTRACT. A graph has arboricity α if its edges can be partitioned into α forests. The dynamic arboricity decomposition problem is to update a partitioning of the graph's edges into forests, as the graph undergoes insertions and deletions of edges.
We present an algorithm for maintaining a partitioning into α+1 forests, provided the arboricity of the dynamic graph never exceeds α. Our algorithm has an update time of Õ(n^(3/4)) when α is at most polylogarithmic in n. Similarly, the dynamic bounded out-orientation problem is to orient the edges of the graph such that the out-degree of each vertex is at all times bounded. For this problem, we give an algorithm that orients the edges such that the out-degree is at all times bounded by α+1, with an update time of Õ(n^(5/7)), when α is at most polylogarithmic in n. Here, the choice of α+1 should be viewed in the light of the well-known lower bound by Brodal and Fagerberg, which establishes that, for general graphs, maintaining only α out-edges would require linear update time. However, the lower bound by Brodal and Fagerberg uses non-planar graphs. In this paper, we give a lower bound showing that even for planar graphs, linear update time is needed in order to maintain an explicit three-out-orientation. For planar graphs, we show that dynamic four-forest decompositions and four-out-orientations can be updated in Õ(n^(1/2)) time. 10:50-12:10 Session 16B 10:50 On vanishing sums of roots of unity in polynomial calculus and sum-of-squares PRESENTER: Ilario Bonacina ABSTRACT. Vanishing sums of roots of unity can be seen as a natural generalization of knapsack from Boolean variables to variables taking values over the roots of unity. We show that these sums are hard to prove for polynomial calculus and for sum-of-squares, both in terms of degree and size. 11:15 Complete ZX-calculi for the stabiliser fragment in odd prime dimensions ABSTRACT. We introduce a family of ZX-calculi which axiomatise the stabiliser fragment of quantum theory in odd prime dimensions. These calculi recover many of the nice features of the qubit ZX-calculus which were lost in previous proposals for higher-dimensional systems. We then prove that these calculi are complete, i.e.
provide a set of rewrite rules which can be used to prove any equality of stabiliser quantum operations. Finally, we relate them to the graphical language for affine Lagrangian relations, which also axiomatises the stabiliser fragment. Adding a discard construction, we obtain a calculus complete for mixed-state stabiliser quantum mechanics in odd prime dimensions, and this furthermore gives a complete axiomatisation for the related diagrammatic language for affine co-isotropic relations. 11:40 Countdown mu-calculus ABSTRACT. We introduce the countdown mu-calculus, an extension of the modal mu-calculus with ordinal approximations of fixpoint operators. In addition to properties definable in the classical calculus, it can express (un)boundedness properties such as the existence of arbitrarily long sequences of specific actions. The standard correspondence with parity games and automata extends to suitably defined countdown games and automata. However, unlike in the classical setting, the scalar fragment is provably weaker than the full vectorial calculus and corresponds to automata satisfying a simple syntactic condition. We establish some facts, in particular decidability of the model checking problem and strictness of the hierarchy induced by the maximal allowed nesting of our new operators. 12:10-14:00 Lunch Break 15:10-16:00 Session 18A 15:10 On the Identity Problem for Unitriangular Matrices of Dimension Four ABSTRACT. We show that the Identity Problem is decidable in polynomial time for finitely generated sub-semigroups of the group UT(4, Z) of 4 × 4 unitriangular integer matrices. As a byproduct of our proof, we also show the polynomial-time decidability of several subset reachability problems in UT(4, Z). 15:35 On the Binary and Boolean Rank of Regular Matrices ABSTRACT. A $0,1$ matrix is said to be regular if all of its rows and columns have the same number of ones.
We prove that for infinitely many integers $k$, there exists a square regular $0,1$ matrix with binary rank $k$, such that the Boolean rank of its complement is $k^{\widetilde{\Omega}(\log k)}$. Equivalently, the ones in the matrix can be partitioned into $k$ combinatorial rectangles, whereas the number of rectangles needed for any cover of its zeros is $k^{\widetilde{\Omega}(\log k)}$. This settles, in a strong form, a question of Pullman (Linear Algebra Appl., 1988) and a conjecture of Hefner, Henson, Lundgren, and Maybee (Congr. Numer., 1990). The result can be viewed as a regular analogue of a recent result of Balodis, Ben-David, Göös, Jain, and Kothari (FOCS, 2021), motivated by the clique vs. independent set problem in communication complexity and by the (disproved) Alon-Saks-Seymour conjecture in graph theory. As an application of the produced regular matrices, we obtain regular counterexamples to the Alon-Saks-Seymour conjecture and prove that for infinitely many integers $k$, there exists a regular graph with biclique partition number $k$ and chromatic number $k^{\widetilde{\Omega}(\log k)}$. 15:10-16:00 Session 18B 15:10 On Uniformization in the Full Binary Tree ABSTRACT. A function f uniformizes a relation R(X,Y) if R(X,f(X)) holds for every X in the domain of R. The uniformization problem for a logic L asks whether for every L-definable relation there is an L-definable function that uniformizes it. Gurevich and Shelah proved that no Monadic Second-Order (MSO) definable function uniformizes the relation "Y is a one-element subset of X" in the full binary tree. In other words, there is no MSO definable choice function in the full binary tree. The cross-section of a relation R(X,Y) at D is the set of all E such that R(D,E) holds. Hence, a function that uniformizes R chooses one element from every non-empty cross-section. The relation "Y is a one-element subset of X" has finite and countable cross-sections.
We prove that in the full binary tree the following theorems hold: Theorem (Finite Cross-Sections): If every cross-section of an MSO definable relation is finite, then it has an MSO definable uniformizer. Theorem (Uncountable Cross-Sections): There is an MSO definable relation R such that every MSO definable relation included in R and with the same domain as R has an uncountable cross-section. 15:35 Tree exploration in dual-memory model ABSTRACT. We study the problem of online tree exploration by a deterministic mobile agent. Our main objective is to establish what features of the model of the mobile agent and the environment allow linear exploration time. We study agents that, upon entering a node, do not receive as input the edge via which they entered. In such a model, deterministic memoryless exploration is infeasible, hence the agent needs to be allowed to use some memory. The memory can be located at the agent or at each node. The existing lower bounds show that if the memory is either only at the agent or only at the nodes, then the exploration needs superlinear time. We show that tree exploration in the dual-memory model, with constant memory at the agent and logarithmic memory at each node, is possible in linear time when one of two additional features is present: a fixed initial state of the memory at each node (so-called clean memory) or a single movable token. We present two algorithms working in linear time for arbitrary trees in these two models. On the other hand, in our lower bound we show that if the agent has a single bit of memory and one bit is present at each node, then exploration may require quadratic time on paths, if the initial memory at the nodes can be set arbitrarily (so-called dirty memory). This shows that having clean node memory or a token allows linear exploration of trees in the model with two types of memory, but having neither of those features may lead to quadratic exploration time even on a simple path.
16:00-16:30 Coffee Break 16:30-17:45 Session 19A 16:30 Dispersing Obnoxious Facilities on Graphs by Rounding Distances ABSTRACT. We continue the study of $\delta$-dispersion, a continuous facility location problem on a graph where all edges have unit length and where the facilities may also be positioned in the interior of the edges. The goal is to position as many facilities as possible subject to the condition that every two facilities have distance at least $\delta$ from each other. Our main technical contribution is an efficient procedure to "round up" the distance $\delta$. It transforms a $\delta$-dispersed set $S$ into a set $S'$ of the same size such that $S'$ is $\delta'$-dispersed with a slightly larger rational distance $\delta'$, where the numerator is upper bounded by the length of the longest (non-induced) path in the input graph. Based on this rounding procedure and connections to the distance-d independent set problem we can derive a number of algorithmic results. When parameterized by treewidth, the problem is in XP. When parameterized by treedepth, the problem is FPT and has a matching lower bound on its time complexity under ETH. Moreover, we can also settle the parameterized complexity with the solution size as parameter using our rounding technique: $\delta$-dispersion is FPT for every $\delta \leq 2$ and W[1]-hard for every $\delta > 2$. Further, we show that $\delta$-dispersion is NP-complete for every fixed irrational distance $\delta$, which was left open in previous work. 16:55 Graph Realization of Distance Sets ABSTRACT. The Distance Realization problem is defined as follows. Given an $n \times n$ matrix $D$ of nonnegative integers, interpreted as inter-vertex distances, find an $n$-vertex weighted or unweighted graph $G$ realizing $D$, i.e., whose inter-vertex distances satisfy $\mathrm{dist}_G(i,j)=D_{i,j}$ for every $1\le i<j\le n$, or decide that no such realizing graph exists.
The problem was studied for general weighted and unweighted graphs, as well as for cases where the realizing graph is restricted to a specific family of graphs (e.g., trees or bipartite graphs). An extension of Distance Realization that was studied in the past is where each entry in the matrix $D$ may contain a \emph{range} of consecutive permissible values. We refer to this extension as Range Distance Realization (or Range-DR). Restricting each range to at most $k$ values yields the problem $k$-Range Distance Realization (or $k$-Range-DR). The current paper introduces a new extension of Distance Realization, in which each entry $D_{i,j}$ of the matrix may contain an arbitrary set of acceptable values for the distance between $i$ and $j$. We refer to this extension as Set Distance Realization (Set-DR), and to the restricted problem where each entry may contain at most $k$ values as $k$-Set Distance Realization (or $k$-Set-DR). We first show that 2-Range-DR is NP-hard for unweighted graphs (implying the same for 2-Set-DR). Next we prove that 2-Set-DR is NP-hard for unweighted and weighted trees. We then explore Set-DR where the realization is restricted to the families of stars, paths, or cycles. For the weighted case, our positive results are that for each of these families there exists a polynomial-time algorithm for 2-Set-DR. On the hardness side, we prove that 6-Set-DR is NP-hard for stars and 5-Set-DR is NP-hard for paths and cycles. For the unweighted case, our results are the same, except for the case of unweighted stars, for which $k$-Set-DR is polynomially solvable for any $k$. 17:20 The complexity of computing optimum labelings for temporal connectivity PRESENTER: Nina Klobas ABSTRACT. A graph is temporally connected if there exists a strict temporal path, i.e. a path whose edges have strictly increasing labels, from every vertex $u$ to every other vertex $v$. In this paper we study temporal design problems for undirected temporally connected graphs.
The basic setting of these optimization problems is as follows: given a connected undirected graph $G$, what is the smallest number $|\lambda|$ of time-labels that we need to add to the edges of $G$ such that the resulting temporal graph $(G,\lambda)$ is temporally connected? As it turns out, this basic problem, called MINIMUM LABELING (ML), can be optimally solved in polynomial time. However, exploiting the temporal dimension, the problem becomes more interesting and meaningful in its following variations, which we investigate in this paper. First we consider the problem MINIMUM AGED LABELING (MAL) of temporally connecting the graph when we are given an upper bound on the allowed age (i.e. maximum label) of the obtained temporal graph $(G,\lambda)$. Second we consider the problem MINIMUM STEINER LABELING (MSL), where the aim is now to have a temporal path between any pair of "important" vertices which lie in a subset $R\subseteq V$, which we call the terminals. This relaxed problem resembles the problem STEINER TREE in static (i.e. non-temporal) graphs. However, due to the requirement of strictly increasing labels in a temporal path, STEINER TREE is not a special case of MSL. Finally we consider the age-restricted version of MSL, namely MINIMUM AGED STEINER LABELING (MASL). Our main results are threefold: we prove that (i) MAL becomes NP-complete on undirected graphs, while (ii) MASL becomes W[1]-hard with respect to the number $|R|$ of terminals. On the other hand we prove that (iii) although the age-unrestricted problem MSL remains NP-hard, it is in FPT with respect to the number $|R|$ of terminals. That is, adding the age restriction makes the above problems strictly harder (unless P=NP or W[1]=FPT). 16:30-17:45 Session 19B 16:30 Resource Optimisation of Coherently Controlled Quantum Computations with the PBS-calculus ABSTRACT. Coherent control of quantum computations can be used to improve some quantum protocols and algorithms.
For instance, the complexity of implementing the permutation of some given unitary transformations can be strictly decreased by allowing coherent control, rather than using the standard quantum circuit model. In this paper, we address the problem of optimising the resources of coherently controlled quantum computations. We refine the PBS-calculus, a graphical language for coherent control which is inspired by quantum optics. In order to obtain a more resource-sensitive language, it manipulates abstract gates -- which can be interpreted as queries to an oracle -- and, more importantly, it avoids the representation of useless wires by allowing unsaturated polarising beam splitters. Technically, the language forms a coloured PROP. The language is equipped with an equational theory that we show to be sound, complete, and minimal. Regarding resource optimisation, we introduce an efficient procedure to minimise the number of oracle queries of a given diagram. We also consider the problem of minimising both the number of oracle queries and the number of polarising beam splitters. We show that this optimisation problem is NP-hard in general, but introduce an efficient heuristic that produces optimal diagrams when at most one query to each oracle is required. 16:55 Space-Bounded Unitary Quantum Computation with Postselection ABSTRACT. Space-bounded computation has been a central topic in classical and quantum complexity theory, since it reflects common practical situations where available memory space is much less than the input size. In the quantum case, every elementary gate must be unitary. This restriction makes it unclear whether the power of space-bounded computation changes by allowing intermediate measurements. In the bounded-error case, Fefferman and Remscrim [STOC 2021, pp.1343--1356] and Girish et al. [ICALP 2021, pp.73:1--73:20] recently provided the breakthrough result that it does not change.
In this paper, we show that a similar result holds for space-bounded quantum computation with postselection. Namely, we prove that it is possible to eliminate intermediate postselections and measurements in the space-bounded quantum computation in the bounded-error setting. Our result strengthens the recent result by Le Gall et al. [TQC 2021, pp.10:1--10:17] that logarithmic-space bounded-error quantum computation with \emph{intermediate} postselections and measurements is equivalent in computational power to logarithmic-space unbounded-error probabilistic computation. As an application, the space-bounded version of one-clean qubit computation (DQC1) is considered and its computational supremacy is discussed from a complexity theoretic point of view. 17:20 LOv-Calculus: A Graphical Language for Linear Optical Quantum Circuits ABSTRACT. We introduce the LOv-calculus, a graphical language for reasoning about linear optical quantum circuits with so-called vacuum state auxiliary inputs. We present the axiomatics of the language and prove its soundness and completeness: two LOv-circuits represent the same quantum process if and only if one can be transformed into the other with the rules of the LOv-calculus. We give a confluent and terminating rewrite system to rewrite any polarisation-preserving LOv-circuit into a unique triangular normal form, inspired by the universal decomposition of Reck et al. (1994) for linear optical quantum circuits.
# Shortest Unsorted Continuous Subarray in Python

Suppose we have an integer array. We need to find one continuous subarray such that, if we sort only that subarray in ascending order, the whole array becomes sorted. We need to find the shortest such subarray and output its length. So if the array is [2,6,4,8,10,9,15], then the output will be 5, and the subarray will be [6,4,8,10,9].

To solve this, we will follow these steps −

• res := the sorted copy of nums
• r := an empty list
• for i in range 0 to the length of res, if nums[i] is not the same as res[i], then append i to r
• if r is empty, return 0; if r has exactly one element, return 1
• return (last element of r) − (first element of r) + 1

## Example (Python)

Let us see the following implementation to get a better understanding −

```python
class Solution(object):
    def findUnsortedSubarray(self, nums):
        res = sorted(nums)
        r = []
        # record every index where the array disagrees with its sorted copy
        for i in range(len(res)):
            if nums[i] != res[i]:
                r.append(i)
        if not r:
            return 0
        if len(r) == 1:
            return 1
        # the answer spans from the first to the last mismatched index
        return r[-1] - r[0] + 1

ob1 = Solution()
print(ob1.findUnsortedSubarray([2,6,4,8,10,9,15]))
```

## Input

[2,6,4,8,10,9,15]

## Output

5
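The sorting approach above takes O(n log n) time. As an alternative (a sketch, not part of the original tutorial), the same answer can be computed in O(n) with one forward/backward scan tracking prefix maxima and suffix minima:

```python
def find_unsorted_subarray(nums):
    # Linear-time variant: an index belongs to the unsorted window iff its
    # value is smaller than some element before it, or larger than some
    # element after it.
    n = len(nums)
    left, right = -1, -2          # chosen so a fully sorted array yields length 0
    max_seen = float('-inf')
    min_seen = float('inf')
    for i in range(n):
        max_seen = max(max_seen, nums[i])
        if nums[i] < max_seen:    # nums[i] is out of place w.r.t. the prefix
            right = i
        j = n - 1 - i
        min_seen = min(min_seen, nums[j])
        if nums[j] > min_seen:    # nums[j] is out of place w.r.t. the suffix
            left = j
    return right - left + 1
```

On the example above, find_unsorted_subarray([2,6,4,8,10,9,15]) returns 5, matching the class-based version.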
# The Universe of Discourse Sun, 20 Jul 2014 As I've discussed elsewhere, I once wrote a program to enumerate all the possible quilt blocks of a certain type. The quilt blocks in question are, in quilt jargon, sixteen-patch half-square triangles. A half-square triangle, also called a “patch”, is two triangles of fabric sewn together, like this: Then you sew four of these patches into a four-patch, say like this: Then to make a sixteen-patch block of the type I was considering, you take four identical four-patch blocks, and sew them together with rotational symmetry, like this: It turns out that there are exactly 72 different ways to do this. (Blocks equivalent under a reflection are considered the same, as are blocks obtained by exchanging the roles of black and white, which are merely stand-ins for arbitrary colors to be chosen later.) Here is the complete set of 72: It's immediately clear that some of these resemble one another, sometimes so strongly that it can be hard to tell how they differ, while others are very distinctive and unique-seeming. I wanted to make the computer classify the blocks on the basis of similarity. My idea was to try to find a way to get the computer to notice which blocks have distinctive components of one color. For example, many blocks have a distinctive diamond shape in the center. Some have a pinwheel like this: which also has the diamond in the middle, while others have a different kind of pinwheel with no diamond: I wanted to enumerate such components and ask the computer to list which blocks contained which shapes; then group them by similarity, the idea being that that blocks with the same distinctive components are similar. The program suite uses a compact notation of blocks and of shapes that makes it easy to figure out which blocks contain which distinctive components. Since each block is made of four identical four-patches, it's enough just to examine the four-patches. 
Each of the half-square triangle patches can be oriented in two ways: Here are two of the 12 ways to orient the patches in a four-patch: Each 16-patch is made of four four-patches, and you must imagine that the four-patches shown above are in the upper-left position in the 16-patch. Then symmetry of the 16-patch block means that triangles with the same label are in positions that are symmetric with respect to the entire block. For example, the two triangles labeled b are on opposite sides of the block's northwest-southeast diagonal. But there is no symmetry of the full 16-patch block that carries triangle d to triangle g, because d is on the edge of the block, while g is in the interior. Triangles must be colored opposite colors if they are part of the same patch, but other than that there are no constraints on the coloring. A block might, of course, have patches in both orientations: All the blocks with diagonals oriented this way are assigned descriptors made from the letters bbdefgii. Once you have chosen one of the 12 ways to orient the diagonals in the four-patch, you still have to color the patches. A descriptor like bbeeffii describes the orientation of the diagonal lines in the squares, but it does not describe the way the four patches are colored; there are between 4 and 8 ways to color each sort of four-patch. For example, the bbeeffii four-patch shown earlier can be colored in six different ways: In each case, all four diagonals run from northwest to southeast. (All other ways of coloring this four-patch are equivalent to one of these under one or more of rotation, reflection, and exchange of black and white.) We can describe a patch by listing the descriptors of the eight triangles, grouped by which triangles form connected regions. 
For example, the first block above is: b/bf/ee/fi/i because there's an isolated white b triangle, then a black parallelogram made of a b and an f patch, then a white triangle made from the two white e triangles, then another parallelogram made from the black f and i, and finally in the middle, the white i. (The two white e triangles appear to be separated, but when four of these four-patches are joined into a 16-patch block, the two white e patches will be adjacent and will form a single large triangle: ) The other five bbeeffii four-patches are, in the same order they are shown above: b/b/e/e/f/f/i/i b/b/e/e/fi/fi b/bfi/ee/f/i bfi/bfi/e/e bf/bf/e/e/i/i All six have bbeeffii, but grouped differently depending on the colorings. The second one ( b/b/e/e/f/f/i/i) has no regions with more than one triangle; the fifth ( bfi/bfi/e/e) has two large regions of three triangles each, and two isolated triangles. In the latter four-patch, the bfi in the descriptor has three letters because the patch has a corresponding distinctive component made of three triangles. I made up a list of the descriptors for all 72 blocks; I think I did this by hand. (The work directory contains a blocks file that maps blocks to their descriptors, but the Makefile does not say how to build it, suggesting that it was not automatically built.) From this list one can automatically extract a list of descriptors of interesting shapes: an interesting shape is two or more letters that appear together in some descriptor. (Or it can be the single letter j, which is exceptional; see below.) For example, bffh represents a distinctive component. It can only occur in a patch that has a b, two fs, and an h, like this one: and it will only be significant if the b, the two fs, and the h are the same color: in which case you get this distinctive and interesting-looking hook component. There is only one block that includes this distinctive hook component; it has descriptor b/bffh/ee/j, and looks like this: . 
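The shape-extraction rule just described can be made mechanical: split a descriptor on `/` and keep every group of two or more letters, plus the exceptional single letter j. Here is a small sketch using the descriptors quoted above:

```python
def interesting_shapes(descriptors):
    # A distinctive component is a multi-letter group in some descriptor,
    # or the exceptional single letter 'j'
    shapes = set()
    for desc in descriptors:
        for group in desc.split('/'):
            if len(group) >= 2 or group == 'j':
                shapes.add(group)
    return shapes

# The six bbeeffii four-patch descriptors plus the hook block b/bffh/ee/j:
blocks = [
    "b/bf/ee/fi/i",
    "b/b/e/e/f/f/i/i",
    "b/b/e/e/fi/fi",
    "b/bfi/ee/f/i",
    "bfi/bfi/e/e",
    "bf/bf/e/e/i/i",
    "b/bffh/ee/j",
]
```

On this sample the extracted shapes are bf, ee, fi, bfi, bffh, and j; isolated single triangles other than j are ignored.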
But some of the distinctive components are more common. The ee component represents the large white half-diamonds on the four sides. A block with "ee" in its descriptor always looks like this: and the blocks formed from such patches always have a distinctive half-diamond component on each edge, like this: (The stippled areas vary from block to block, but the blocks with ee in their descriptors always have the half-diamonds as shown.) The blocks listed at http://hop.perl.plover.com/quilt/analysis/images/ee.html all have the ee component. There are many differences between them, but they all have the half-diamonds in common. Other distinctive components have similar short descriptors. The two pinwheels I mentioned above are gh and fi, respectively; if you look at the list of gh blocks and the list of fi blocks you'll see all the blocks with each kind of pinwheel. Descriptor j is an exception. It makes an interesting shape all by itself, because any block whose patches have j in their descriptor will have a distinctive-looking diamond component in the center. The four-patch looks like this: so the full sixteen-patch looks like this: where the stippled parts can vary. A look at the list of blocks with component j will confirm that they all have this basic similarity. I had made a list of the descriptors for each of the 72 blocks, and from this I extracted a list of the descriptors for interesting component shapes. Then it was only a matter of finding the component descriptors in the block descriptors to know which blocks contained which components; if two blocks share two different distinctive components, they probably look somewhat similar. Then I sorted the blocks into groups, where two blocks were in the same group if they shared two distinctive components. The resulting grouping lists, for each block, which other blocks have at least two shapes in common with it. Such blocks do indeed tend to look quite similar.
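The grouping step can be sketched in the same spirit. The block names and component sets below are hypothetical placeholders, not the real 72-block data:

```python
from itertools import combinations

def similar_pairs(block_components, threshold=2):
    # Two blocks are grouped together when their sets of distinctive
    # components share at least `threshold` shapes
    pairs = []
    for (a, ca), (b, cb) in combinations(sorted(block_components.items()), 2):
        if len(ca & cb) >= threshold:
            pairs.append((a, b))
    return pairs

# Hypothetical component sets, for illustration only:
demo = {
    "block-A": {"ee", "fi", "bf"},
    "block-B": {"ee", "fi"},
    "block-C": {"gh", "j"},
}
```

Here only block-A and block-B share two shapes (ee and fi), so they are the only pair grouped together.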
This strategy was actually the second thing I tried; the first thing didn't work out well. (I forget just what it was, but I think it involved finding polygons in each block that had white inside and black outside, or vice versa.) I was satisfied enough with this second attempt that I considered the project a success and stopped work on it. The complete final results were: 1. This tabulation of blocks that are somewhat similar 2. This tabulation of blocks that are distinctly similar (This is the final product; I consider this a sufficiently definitive listing of “similar blocks”.) 3. This tabulation of blocks that are extremely similar And these tabulations of all the blocks with various distinctive components: bd bf bfh bfi cd cdd cdf cf cfi ee eg egh egi fgh fh fi gg ggh ggi gh gi j It may also be interesting to browse the work directory.
# Vector space help 1. Jun 9, 2004 ### profuse007 vector space... help!!!! i just got into vector spaces and i am really stumped. okay, from the definition of vector space, it says something like "w/ the operations of mult by a number and addition. more briefly, we refer to V as a real vector space." so from a question from an exercise: determine if the given set constitutes a real vector space. in each case the operations of mult by a number and addition are understood to be the usual operations associated w/ the elements of the set. 4) the set of all elements of R^3 w/ first component 0. i guess i don't understand vector spaces at all to answer question 4, can someone explain? i did a search but no luck. 2. Jun 9, 2004 ### arildno What you are referring to is the closure properties of a vector space. 1. A set of vectors (quantities, elements, whatever) $$V=\{v\}$$ is a vector space with a given addition operation +, only if we have: Given $$v,w\in{V}$$ implies that $$v+w\in{V}$$ 2. To our set V, we associate a set of scalars, S ($$\Re$$, for example), and a multiplication operation * between an element in S and an element in V. If V is to qualify as a vector space, we must have: Given $$v\in{V},\alpha\in{S}$$, implies $$\alpha*v\in{V}$$ Hence, if V is to qualify as a vector space, we need to have fulfilled 1 and 2! In your case, $$V=\{(x,y,z)\in\Re^{3}|x=0\}$$ We let the scalars S be the real numbers $$\Re$$ and let the addition operator on V be the usual addition operator for vectors, and the scalar multiplication operator be the usual operator for scalar-vector multiplication. 1. Proof that 1 holds for V: Pick 2 elements in V, say $$v_{1}=(0,y_{1},z_{1}),v_{2}=(0,y_{2},z_{2})$$ We now have: $$u=v_{1}+v_{2}=(0+0,y_{1}+y_{2},z_{1}+z_{2})$$ Clearly, u's first component is 0, hence u is in V; that is, V is closed under the operation of addition. Hope this helped you along. Last edited: Jun 9, 2004 3.
Jun 9, 2004

### HallsofIvy

Staff Emeritus

"4) The set of all elements of R^3 with first component 0."

Since the definition of a vector space is, as you said, a set of vectors together with two operations, this has to be assuming the "standard" operations on R^3: "coordinatewise" addition and scalar multiplication. What you are really asked to do is prove that this set is a "subspace" of R^3. That means you already know that the basic properties, such as the commutative law and associative law, are true and don't need to prove them. As arildno said, you only need to prove "closure": that the sum (0,x,y)+(0,u,v) = (0,x+u,y+v) and a(0,x,y) = (0,ax,ay) are also in the set.

4. Jun 9, 2004

### profuse007

- This is the beginning of vector spaces, where I am right now.
- When you say "also in the set," what do you mean by that?

Another example of a real vector space:
- Let S be the set of all unit vectors (length 1) in R^3. Then (1,0,0) and (0,1,0) are in S but the sum, (1,1,0), is not. Hence S is not a vector space.

5. Jun 9, 2004

### HallsofIvy

Staff Emeritus

Yes. (1,1,0) is not "also" (as well as (1,0,0) and (0,1,0)) in the set of vectors of length 1. In order that a subset of a vector space be a subspace (a vector space using the same operations), whenever u and v are in the set, u+v must "also" (as well as u and v) be in that same set. Likewise, if v is in the set and a is any number, av must be in that same set. That is to say, the set is "closed" under the operations.

6. Jun 9, 2004

### Janitor

For a chance at extra credit, you could point out in your answer that your particular space is a two-dimensional subspace of R^3.

EDIT: I see HallsofIvy beat me to that.

Last edited: Jun 9, 2004

7. Jun 10, 2004

### profuse007

I don't know if you can use the word "subspace" on it, because that's the next section. This is the first part on vector spaces, no subspaces yet. This stuff is twisting my mind!

8. 
Jun 10, 2004

### arildno

Now, is this verbatim from the book, or your interpretation of what is in the book? (S is not an example of a real vector space!!) Do you understand why S is not a vector space?

9. Jun 10, 2004

### profuse007

That's verbatim from the book. I do not know why S is not a vector space. Please explain.

10. Jun 10, 2004

### arildno

OK, the fundamental concept you need is a set. (It's about as fundamental as you can get in maths!) So, we form a set whose elements are specified by some conditions (that is, the elements have some common properties!). In this case S is a set that consists of all vectors in $$\Re^{3}$$ that have length 1. For example, the vectors $$(1,0,0),(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}})$$ are both elements of S, since both have length 1 (the common property!).

A vector space V consists of a set S and two operations (called (+) and (*)) by which we can construct new quantities from old ones. There are quite a few conditions S and the operations must fulfill in order that we have a vector space! Let's start: for example, the (+)-operation is defined in such a way that using 2 elements in V, we can construct a new quantity (that is, we know a computation algorithm). In order to qualify as a (+)-operation on V, the proposed operation must satisfy some basic axioms (examples of this are the "law of commutativity" and the "law of associativity").

Even if the (+)-operation and the (*)-operation fulfill their respective axioms, S is not a vector space unless we have the "closure properties" for the operations.

I'll stop here for the moment; please give feedback if I should continue, or if there are some points you need clarified.

11. Jun 14, 2004

### profuse007

A buddy of mine just explained it, and it made more sense and was less confusing than the above.
A unit vector is defined as having a magnitude of 1.
(1,0,0) has a magnitude of 1.
(0,1,0) has a magnitude of 1.
The sum of the two, (1,1,0), has a magnitude of sqrt(2), and thus is not a unit vector (magnitude 1)... hence S is not a real vector space.

12. Jun 15, 2004

### matt grime

But that is what the post you claim is confusing says: you didn't say you didn't know what a unit vector was.

13. Jun 15, 2004

### profuse007

Hmm... for some reason, all the explanations above pulled me away from the unit vector and toward something else that made it more confusing; that's what I meant. By the way, no one emphasized the unit vector at all, well, they stated it. Thanks matt and others, it was a confusing quest through the (vector) space... jk, heheheh. But again, thanks to all for helping me.
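The closure arguments in the thread above can be checked mechanically. Below is a minimal illustrative sketch (not from the thread): it represents vectors as tuples, spot-checks the two closure properties for V = {(x, y, z) in R^3 : x = 0}, and shows how the unit-vector set S fails closure under addition.

```python
import random

def in_V(v):
    # Membership test for V = {(x, y, z) in R^3 : x = 0}
    return v[0] == 0

def add(u, v):
    # Usual componentwise vector addition in R^3
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    # Usual scalar multiplication in R^3
    return tuple(c * a for a in v)

# Spot-check closure under + and * for random elements of V
random.seed(0)
for _ in range(100):
    u = (0, random.uniform(-10, 10), random.uniform(-10, 10))
    v = (0, random.uniform(-10, 10), random.uniform(-10, 10))
    c = random.uniform(-10, 10)
    assert in_V(add(u, v))    # closure under addition
    assert in_V(scale(c, u))  # closure under scalar multiplication

# The unit-vector set S fails closure: (1,0,0) + (0,1,0) has length sqrt(2)
s = add((1, 0, 0), (0, 1, 0))
length = sum(a * a for a in s) ** 0.5
print(length)  # not 1, so S is not closed under addition
```

A spot check like this is not a proof, but it is a useful sanity check before (or after) writing the algebraic argument.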
Cryptography Stack Exchange is a question and answer site for software developers, mathematicians and others interested in cryptography.

# Explore our Questions

### Distance between consecutive primes distribution

Browse more Questions
# zbMATH — the first resource for mathematics Statistical tools for finance and insurance. (English) Zbl 1078.62112 Berlin: Springer (ISBN 3-540-22189-1/pbk). 517 p. (2005). This book is designed for students, researchers and practitioners who want to be introduced to modern statistical tools applied in finance and insurance. It is the result of a joint effort of the Center for Economic Research, Center for Applied Statistics and Economics and Hugo Steinhaus Center for Stochastic Methods. All three institutions brought in their specific profiles and created with this book a wide-angle view on and solutions to up-to-date practical problems. The text is comprehensible for a graduate student in financial engineering as well as for an inexperienced newcomer to quantitative finance and insurance who wants to get a grip on advanced statistical tools applied in these fields. An experienced reader with a broad knowledge of financial and actuarial mathematics will probably skip some sections but will hopefully enjoy the various computational tools. Finally, a practitioner might be familiar with some of the methods. However, the statistical techniques related to modern financial products, like MBS or CAT bonds, will certainly attract him. It consists naturally of two main parts. Each part contains chapters with high focus on practical applications. The finance part of the book starts with an introduction to stable distributions, which are the standard model for heavy tailed phenomena. Their numerical implementation is thoroughly discussed and applications to finance are given. The second chapter presents the ideas of extreme value and copula analysis as applied to multivariate financial data. This topic is extended in the subsequent chapter which deals with tail dependence, a concept describing the limiting proportion that one margin exceeds a certain threshold given that the other margin has already exceeded that threshold. 
The fourth chapter reviews the market in catastrophe insurance risk, which emerged in order to facilitate the direct transfer of reinsurance risk associated with natural catastrophes from corporations, insurers, and reinsurers to capital market investors. The next contribution employs functional data analysis for the estimation of smooth implied volatility surfaces. These surfaces are a result of using an oversimplified market benchmark model – the Black-Scholes formula – to real data. An attractive approach to overcome this problem is discussed in chapter six, where implied trinomial trees are applied to modeling implied volatilities and the corresponding state-price densities. An alternative route to tackling the implied volatility smile has led researchers to develop stochastic volatility models. The relative simplicity and the direct link of model parameters to the market makes Heston’s model very attractive to front office users. Its application to foreign exchange option markets is covered in chapter seven. The following chapter shows how the computational complexity of stochastic volatility models can be overcome with the help of the Fast Fourier Transform. In chapter nine the valuation of Mortgage Backed Securities is discussed. The optimal prepayment policy is obtained via optimal stopping techniques. It is followed by a very innovative topic of predicting corporate bankruptcy with Support Vector Machines. Chapter eleven presents a novel approach to money-demand modeling using fuzzy clustering techniques. The first part of the book closes with productivity analysis for cost and frontier estimation. The e-book offers a complete PDF version of this text and the corresponding HTML files with links to algorithms and quantlets. 
The reader of this book may therefore easily reconfigure and recalculate all the presented examples and methods via the enclosed XploRe Quantlet Server (XQS), which is also available from www.xplore-stat.de and www.quantlet.com. A tutorial chapter explaining how to set up and use XQS can be found in the third and final part of the book.
# What is the difference between displacement and double displacement reactions? Write equations for these reactions.

Displacement reaction:
(a) When a more reactive element displaces a less reactive element from its salt solution, the reaction is called a displacement reaction.
(b) In a displacement reaction, one displacement takes place.

Double displacement reaction:
(a) When there is an exchange of ions between the reactants to produce new substances, the reaction is called a double displacement reaction.
(b) In a double displacement reaction, two displacements take place.

Following are examples of each type.

Displacement reaction: $$Mg(s) + 2HCl(aq) \rightarrow MgCl_{2}(aq) + H_{2}(g)$$

Double displacement reaction: $$2KBr(aq) + BaCl_{2}(aq) \rightarrow 2KCl(aq) + BaBr_{2}(aq)$$
Teoreticheskaya i Matematicheskaya Fizika

TMF, 2005, Volume 142, Number 1, Pages 21–36 (Mi tmf1766)

Mass spectra in the doubly symmetric theory of infinite-component fields

L. M. Slad, Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University

Abstract: We consider the problem of the characteristics of mass spectra in the doubly symmetric theory of fields transforming under the proper Lorentz group representations decomposable into an infinite direct sum of finite-dimensional irreducible representations. We show that there exists a range of free parameters of the theory where the mass spectra of fermions are quite satisfactory from the physical standpoint and correspond to the picture expected in the parton model of hadrons.

Keywords: relativistically invariant theories, infinite-component fields, double symmetries, mass spectra

DOI: https://doi.org/10.4213/tmf1766

English version: Theoretical and Mathematical Physics, 2005, 142:1, 15–28

Citation: L. M. Slad, “Mass spectra in the doubly symmetric theory of infinite-component fields”, TMF, 142:1 (2005), 21–36; Theoret. and Math. Phys., 142:1 (2005), 15–28

This publication is cited in the following articles:
1. Leonid M. Slad, “Electroweak Interaction Model with an Undegenerate Double Symmetry”, SIGMA, 2 (2006), 045, 8 pp.
2. L. M. Slad, “Electromagnetic form factors and polarizations of non-Dirac particles with rest spin 1/2”, Theoret. and Math. Phys., 158:1 (2009), 112–124
3. L. M. Slad, “Electromagnetic properties of non-Dirac particles with rest spin $1/2$”, Theoret. and Math. Phys., 165:1 (2010), 1275–1292
4. L. M. Slad, “Magnetic Moment Operator of Non-Dirac Particles and Some Elements of Polarization Ep-Experiments”, Mod. Phys. Lett. A, 28:12 (2013), 1350051
5. L. M. Slad, “Some Field-Theoretical Aspects of Two Types of the Poincaré Group Representations”, Int. J. Mod. Phys. A, 29:2 (2014), 1450020
# Question 1

Question 1: A rectangle has a perimeter of 250 units, with length four times its width. What is the area of the rectangle?

Intelligence Bureau one-paper test, Assistant Director 2014, FPSC

Solution: Suppose the width is $w$; then by the condition of the question the length is $l = 4w$. We know the perimeter is the total boundary length, calculated by taking the sum of the lengths of all sides of the rectangle:

$$2(l + w) = 250$$

Substituting $l = 4w$ gives $2(4w + w) = 10w = 250$, so the width is 25. By the given condition the length is 4 times the width, so $l = 100$. The area is therefore $l \times w = 100 \times 25 = 2500$ square units.
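The arithmetic can be confirmed with a few lines (an illustrative check, not part of the original solution):

```python
# Perimeter 250, length = 4 * width:  2 * (w + 4w) = 250  =>  w = 25
perimeter = 250
width = perimeter / (2 * (1 + 4))  # solves 10w = 250
length = 4 * width
area = length * width
print(width, length, area)
```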
Difference between var and dynamic in C#

CsharpProgrammingServer Side Programming

var is statically typed in C# (the compiler infers the type once, at compile time), whereas dynamic defers type checking to run time.

var declaration

var a = 10;

dynamic declaration

dynamic a = 10;

A var is an implicitly typed variable, but it does not bypass compile-time errors.

Example of var in C#

var a = 10;
a = "Demo"; // compile-time error: a is inferred as int

Example of dynamic in C#

dynamic a = 10;
a = "Demo"; // compiles; the type is resolved at run time

Published on 17-Sep-2018 17:35:30
# From W. R. Greg   19 April [1871]1 Stationery Office April 19. My dear Mr. Darwin Many thanks for thinking of my speculation.2 I will not ask you to send me the Paper you mention;3 but I have made a note of it, & will refer to it when I have again leisure to take up the subject. Yours very sincerely | W. R. Greg ## Footnotes The year is established by the relationship between this letter and the letter to W. R. Greg, 21 March [1871]. ## Bibliography Thury, Marc Antoine. 1863. Mémoire sur la loi de production des sexes chez les plantes les animaux et l’homme. 2d edition. Geneva and Paris: Joël Cherbuliez. ## Summary Thanks CD for thinking of his speculation. Has made a note of the paper mentioned by CD. ## Letter details Letter no. DCP-LETT-7701 From William Rathbone Greg To Charles Robert Darwin Sent from Stationery Office Source of text DAR 165: 225 Physical description 1p
# Practical Ramifications of Local Item Dependence in Educational Assessment

Alsayar, A. A. (2021). The Practical Ramifications of Local Item Dependence in Educational Assessment: A Comparison of Four Approaches. Retrieved from https://purl.lib.fsu.edu/diginole/2020_Summer_Fall_Alsayar_fsu_0071E_16344
## UmmmHELP

Find the geometric mean of the pair of numbers: 3 and 4.

1. satellite73 $\sqrt{3\times 4}=\sqrt{3}\times \sqrt{4}=2\sqrt{3}$

2. satellite73 i am sure you can do the next one, right?

3. UmmmHELP what next one? haha and okay thanks!
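The computation above is just $\sqrt{ab}$; as a quick illustrative check:

```python
import math

def geometric_mean(a, b):
    # Geometric mean of two numbers: sqrt(a * b)
    return math.sqrt(a * b)

gm = geometric_mean(3, 4)
print(gm)  # sqrt(12) = 2*sqrt(3), about 3.4641
```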
# Adding a GPG Key Manually

2015-11-17

I was trying to install MongoDB 3.0 on my production machine, and in order to do that via apt-get I needed to import the MongoDB GPG public key. It went wrong. It went really wrong. After a huge delay at

gpg: requesting key 7F0CEB10 from hkp server keyserver.ubuntu.com

I would get

gpg: keyserver timed out
gpg: keyserver receive failed: keyserver error

The first solutions I found on the internet amounted to modifying other commands to look like mine, so I already had the "best command". After some more Googling I found that I could grab the key manually and add it without a request to a server. So I tried and got the key. I ran

sudo apt-key add 10gen-gpg-key.asc

and the shell told me

OK

That was it. I decided to post about this so that if you find yourself in a similar scenario (cannot get keys via the command line), you have an idea of how to solve the problem. It also serves as a future reference for me, as StackOverflow didn't do it this time.
2021.09.23 12:31

# [QSMS Monthly Seminar] Smoothing of Fano varieties and mutations

Date: Sep. 24, 2021
Place: SNU

Speaker: 조윤형 (SKKU)
Title: Smoothing of Fano varieties and mutations
Abstract: Let X be a smooth Fano variety. It was conjectured that if Y and Z are two Q-Gorenstein toric Fano varieties having the same good smoothing X, then Y and Z are related by a sequence of mutations. In this seminar, we discuss the case when X is a projective plane or a full flag variety (if time permits).

Speaker: 김영훈 (QSMS)
Title: Quasisymmetric functions, noncommutative symmetric functions, and weak Bruhat interval modules of 0-Hecke algebras
Abstract: Symmetric functions have two generalizations: quasisymmetric functions and noncommutative symmetric functions. In 1996, Duchamp et al. introduced representation theoretical interpretations of these functions in terms of 0-Hecke algebras. In this talk, I will first explain these interpretations and related results. Then, I will introduce weak Bruhat interval modules of 0-Hecke algebras and their structural properties.

List of Articles No. 
Subject Author Date Views Notice QSMS Mailing List Registration 2021.09.09 1182 67 [SNU Number Theory Seminar 20220401] Randomness and structure for sums of cubes 2022.03.28 437 66 [QSMS Monthly Seminar] The dimension of the kernel of an Eisenstein ideal 2022.03.25 467 65 [SNU Number Theory Seminar 20220321, 0322, 0324] Three lectures on the Eisenstein ideal 2022.03.19 378 64 [SNU Number Theory Seminar 2022.03.18] On p-rationality of $\mathbb{Q}(\zeta_{2l+1})^{+}$ for Sophie Germain primes $l$ 2022.03.19 462 63 [BK21-QSMS Toric Geometry Seminar] The Fredholm regularity of discs 2022.03.14 446 62 [QSMS Seminar 2022.04.05] Trialities of W-algebras 2022.02.15 506 61 [QSMS Seminar 2022.03.04 (time changed)] Introduction to Supersymmetric Field Theories 2022.02.15 672 60 [BK21-QSMS Toric Geometry Seminar] Moment Polytope 2022.01.03 606 59 [QSMS Monthly Seminar] Introduction to finite-dimensional representations of quantum affine algebras 2021.12.03 528 58 [QSMS Seminar 14,16 Dec] A brief introduction to differential graded Lie algebras I, II 2021.11.29 799 57 [SNU Number Theory Seminar 2021.12.03] Algebraization theorems in complex and non-archimedean geometry 2021.11.19 589 56 [QSMS Topology Seminar 2021.11.25] Exotic families of Weinstein manifolds with Milnor fibers of ADE types 2021.11.19 527 55 [SNU Number Theory Seminar 2021.11.12] Moduli of Fontaine-Laffailles and mod-p local-global compatibility 2021.10.29 804 54 [QSMS Monthly Seminar] Quantum entanglement entropy in abelian arithmetic Chern-Simons theory 2021.10.29 628 53 [SNU Number Theory Seminar 2021.11.05] Bianchi modular symbols and p-adic L-functions 2021.10.20 569 52 [QSMS Seminar 2021.10.26, 10.28] Feigin-Semikhatov duality in W-superalgebras 2021.10.13 622 51 [2021 KMS Annual Meeting] Related talks 2021.10.05 793 » [QSMS Monthly Seminar] Smoothing of Fano varieties and mutations 2021.09.23 684 49 [QSMS Seminar 2021.10.12, 10.14] Vertex algebras and chiral homology I and II 2021.09.23 747 48 [QSMS Seminar 2021.09.30] 
Schur-Weyl duality, old and new & Quantum symmetric pairs and Schur-type dualities 2021.09.10 1020
# Is there a structure that allows for a flat representation of trees with constant access to any element?

One can, for example, represent 2d arrays such as:

[[1,2],[3,4],[5,6]]

as flat arrays:

[1,2,3,4,5,6]

as long as one transforms the indices before accessing:

index(x,y) = x + y*2 // because internal width=2

This is often faster. My question is: is it possible to use a similar approach of representing a structure as a flat array for free-form trees, instead of n-dimensional tables, such as:

[[1,2,3],4,[5,[6,7]],8]

Well, it depends a bit on the operations. You cannot do anything reasonable if you want to realize your data structure inside an $n$-element array. The 2d array works, since there is a bijection between the 2d array entries and the 1d array entries. There is also such a bijection for any fixed tree, but I think this is not what you are looking for. Notice that your tree encoding [[1,2,3],4,[5,[6,7]],8] also uses more structure than a normal 1d array. So if you ask for data structures of size $O(n)$ with $O(1)$ operations, then you can actually do something. One idea is to use an array where you store the DFS tour (sometimes called the Euler tour) of your tree, together with the information about the time when you discover and when you finish a vertex in the DFS search. This can be stored inside an array of size $4n$, plus another $n$ for the values at the nodes, and it allows you to navigate through the tree using $O(1)$ operations per step. With more involved data structures you can also answer level ancestor queries (what is the $i$-th vertex above vertex $x$?) or lowest common ancestor queries in constant time with a data structure of size $O(n)$. If you want to know more about these two data structures, look up the references in Wikipedia.

No, it does not seem to be possible, if there are no constraints on the shape or size of the tree. (If you are able to impose some strict requirements on the structure of the tree, there are known methods.
For instance, heaps are one example of exactly this optimization, applied to a particular kind of binary tree. However, I realize this doesn't address your request for a data structure for free-form trees.)

Since you care about performance, it is also important to understand that performance depends not solely upon the number of memory accesses, but also upon locality: how well those accesses play with caches. Cache-friendly data structures are not studied as much in typical undergraduate data structures classes, but they have been studied extensively in other contexts (e.g., databases).

Yes and no. If the structure of your tree has a known maximum depth and constant structure, you can statically calculate a matrix that defines the transform of I indexes to a single linear index in k time. Calculating this matrix, though, requires I*N time for N elements.

I'd rather make this a comment, but I don't have enough reputation. I feel like you should provide more information about what is expected from the structure, like what kind of access you need, what kind of operations are going to be performed, etc. For example, you have already provided one of the flat structures for a general tree. Notice that the brackets [] represent your subtrees as continuous ranges in the array. You can exploit that if you need to access whole subtrees in constant time. Also, as answered by sqykly, if the graph is constant you can pre-calculate the transform I. If the tree allows dynamic changes to its structure, you should be able to update your transform in sub-linear time.

• I just want a generic data structure for the objects in my project language. The problem is that I want to avoid pointers because they generate lots of cache misses, which is not cool when you are talking about the main data container of a language. – Viclib Dec 13 '13 at 20:45
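The heap layout mentioned above is the classic instance of this trick for complete binary trees: store the nodes level by level in one array and navigate with index arithmetic instead of pointers. A minimal sketch (illustrative, not from the answers):

```python
def parent(i):
    # Index of the parent of node i in the level-order (heap) layout
    return (i - 1) // 2

def left(i):
    # Index of the left child of node i
    return 2 * i + 1

def right(i):
    # Index of the right child of node i
    return 2 * i + 2

# Complete binary tree stored level by level:
#         1
#       /   \
#      2     3
#     / \   /
#    4   5 6
tree = [1, 2, 3, 4, 5, 6]

assert tree[left(0)] == 2 and tree[right(0)] == 3
assert tree[left(1)] == 4 and tree[right(1)] == 5
assert tree[parent(5)] == 3  # node 6 sits at index 5; its parent is 3
```

This gives O(1) navigation and good locality, but only because the tree's shape is fixed by the layout; free-form trees need the Euler-tour style encodings discussed above.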
# §27.13 Functions

## §27.13(i) Introduction

Whereas multiplicative number theory is concerned with functions arising from prime factorization, additive number theory treats functions related to addition of integers. The basic problem is that of expressing a given positive integer $n$ as a sum of integers from some prescribed set $S$ whose members are primes, squares, cubes, or other special integers. Each representation of $n$ as a sum of elements of $S$ is called a partition of $n$, and the number of such partitions is often of great interest. The subsections that follow describe problems from additive number theory. See also Apostol (1976, Chapter 14) and Apostol and Niven (1994, pp. 33–34).

## §27.13(ii) Goldbach Conjecture

Every even integer $n > 4$ is the sum of two odd primes. In this case, $S(n)$ is the number of solutions of the equation $n = p + q$, where $p$ and $q$ are odd primes. Goldbach's assertion is that $S(n) \ge 1$ for all even $n > 4$. This conjecture dates back to 1742 and was undecided in 2009, although it has been confirmed numerically up to very large numbers. Vinogradov (1937) proves that every sufficiently large odd integer is the sum of three odd primes, and Chen (1966) shows that every sufficiently large even integer is the sum of a prime and a number with no more than two prime factors. The current status of Goldbach's conjecture is described in the Wikipedia.

## §27.13(iii) Waring's Problem

This problem is named after Edward Waring who, in 1770, stated without proof and with limited numerical evidence, that every positive integer $n$ is the sum of four squares, of nine cubes, of nineteen fourth powers, and so on. Waring's problem is to find, for each positive integer $k$, whether there is an integer $s$ (depending only on $k$) such that the equation

27.13.1    $n = x_1^k + x_2^k + \dots + x_s^k$

has nonnegative integer solutions for all $n \ge 1$. The smallest $s$ that exists for a given $k$ is denoted by $g(k)$. Similarly, $G(k)$ denotes the smallest $s$ for which (27.13.1) has nonnegative integer solutions for all sufficiently large $n$. 
Lagrange (1770) proves that $g(2) = 4$, and during the next 139 years the existence of $g(k)$ was shown for $k = 3, 4, 5, 6, 7, 8, 10$. Hilbert (1909) proves the existence of $g(k)$ for every $k$ but does not determine its corresponding numerical value. The exact value of $g(k)$ is now known for every $k$. For example, $g(3) = 9$, $g(4) = 19$, $g(5) = 37$, $g(6) = 73$, $g(7) = 143$, and $g(8) = 279$. A general formula states that

27.13.2    $g(k) \ge 2^k + \left\lfloor (3/2)^k \right\rfloor - 2$

for all $k \ge 2$, with equality if $4 \le k \le 200000$. If $3^k = 2^k q + r$ with $0 < r < 2^k$, then equality holds in (27.13.2) provided $r + q \le 2^k$, a condition that is satisfied with at most a finite number of exceptions.

The existence of $G(k)$ follows from that of $g(k)$ because $G(k) \le g(k)$, but only the values $G(2) = 4$ and $G(4) = 16$ are known exactly. Some upper bounds smaller than $g(k)$ are known. For example, $G(3) \le 7$, $G(5) \le 17$, $G(6) \le 24$, $G(7) \le 33$, and $G(8) \le 42$. Hardy and Littlewood (1925) conjectures that $G(k) < 2k + 1$ when $k$ is not a power of 2, and that $G(k) < 4k$ when $k$ is a power of 2, but the most that is known (in 2009) is $G(k) < c k \ln k$ for some constant $c$. A survey is given in Ellison (1971).

## §27.13(iv) Representation by Squares

For a given integer $k \ge 2$ the function $r_k(n)$ is defined as the number of solutions of the equation

27.13.3    $n = x_1^2 + x_2^2 + \dots + x_k^2,$

where the $x_j$ are integers, positive, negative, or zero, and the order of the summands is taken into account.

Jacobi (1829) notes that $r_2(n)$ is the coefficient of $x^n$ in the square of the theta function $\vartheta(x)$:

27.13.4    $\vartheta(x) = 1 + 2 \sum_{m=1}^{\infty} x^{m^2}.$

(In §20.2(i), $\vartheta(x)$ is denoted by $\theta_3(0, x)$.) Thus,

27.13.5    $\vartheta^2(x) = 1 + \sum_{n=1}^{\infty} r_2(n) x^n.$

One of Jacobi's identities implies that

27.13.6    $\vartheta^2(x) = 1 + 4 \sum_{n=1}^{\infty} \left( \delta_1(n) - \delta_3(n) \right) x^n,$

where $\delta_1(n)$ and $\delta_3(n)$ are the number of divisors of $n$ congruent respectively to 1 and 3 (mod 4), and by equating coefficients in (27.13.5) and (27.13.6) Jacobi deduced that

$r_2(n) = 4 \left( \delta_1(n) - \delta_3(n) \right).$

Hence $r_2(5) = 8$ because both divisors, 1 and 5, are congruent to $1 \pmod 4$. In fact, there are four representations, given by $5 = 2^2 + 1^2 = (-2)^2 + 1^2 = 2^2 + (-1)^2 = (-2)^2 + (-1)^2$, and four more with the order of summands reversed.

By similar methods Jacobi proved that $r_4(n) = 8\sigma(n)$ if $n$ is odd, whereas, if $n$ is even, $r_4(n)$ equals 24 times the sum of the odd divisors of $n$. Mordell (1917) notes that $r_k(n)$ is the coefficient of $x^n$ in the power-series expansion of the $k$th power of the series for $\vartheta(x)$. Explicit formulas for $r_k(n)$ have been obtained by similar methods for $k = 6, 8, 10$, and 12, but they are more complicated. Exact formulas for $r_k(n)$ have also been found for $k = 3, 5$, and 7, and for all even $k \le 24$. 
For larger values of $k$ the analysis of $r_k(n)$ is considerably more complicated (see Hardy (1940)). Also, Milne (1996, 2002) announces new infinite families of explicit formulas extending Jacobi's identities. For more than 8 squares, Milne's identities are not the same as those obtained earlier by Mordell and others.
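Jacobi's formula $r_2(n) = 4(\delta_1(n) - \delta_3(n))$ is easy to verify numerically. The brute-force check below is an illustrative sketch, not part of the original section:

```python
def r2(n):
    # Count ordered pairs (x, y) of integers with x^2 + y^2 = n
    bound = int(n ** 0.5) + 1
    return sum(1 for x in range(-bound, bound + 1)
                 for y in range(-bound, bound + 1)
                 if x * x + y * y == n)

def jacobi_r2(n):
    # 4 * (number of divisors of n congruent to 1 mod 4
    #      minus number congruent to 3 mod 4)
    d1 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 1)
    d3 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 3)
    return 4 * (d1 - d3)

# The two counts agree for every n in a modest range
for n in range(1, 200):
    assert r2(n) == jacobi_r2(n)

print(r2(5))  # 8, matching the worked example in the text
```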
# Thread: help finding the Maxima

1. ## help finding the Maxima

I would be grateful for any help with the following problem. I know I need to find the derivative of v, and then from this I can transpose to find the unknown value of x. I just don't seem to be able to!

Q. The velocity, v, of a signal in a cable at a distance x is given by:

v = K x ln(1/x) where 0<x<1 (x is the algebraic x)

where K is a positive constant. Find the value of x which gives the maximum velocity.

Thanks again, James.

2. NOT trying to bump, just wanted to give it in the proper way. $V = K x ln( \frac{1}{x} )$

3. Originally Posted by james jarvis NOT trying to bump, just wanted to give it in the proper way. $V = K x ln( \frac{1}{x} )$ You'll see that in every reply you post there will be a button that says "Edit", right next to "Quote"

4. Originally Posted by james jarvis NOT trying to bump, just wanted to give it in the proper way. $V = K x ln( \frac{1}{x} )$

$\frac{dV}{dx} = K~ ln \left ( \frac{1}{x} \right ) + Kx \cdot \frac{1}{\frac{1}{x}} \cdot \left(-\frac{1}{x^2}\right)$

$\frac{dV}{dx} = K~ ln \left ( \frac{1}{x} \right ) - K$

Set this equal to 0:

$K~ ln \left ( \frac{1}{x} \right ) - K = 0$

$ln \left ( \frac{1}{x} \right ) - 1 = 0$

$ln \left ( \frac{1}{x} \right ) = 1$

$\frac{1}{x} = e$

$x = \frac{1}{e}$

Is this a relative max or a relative min? I leave the answer to that question up to you.

-Dan
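Dan's result $x = 1/e$ can be sanity-checked numerically. An illustrative sketch, taking $K = 1$ (the maximizer does not depend on $K$):

```python
import math

K = 1.0  # any positive constant works; it only scales v

def v(x):
    # v(x) = K * x * ln(1/x), defined for 0 < x < 1
    return K * x * math.log(1 / x)

x_star = 1 / math.e
# v at the critical point should beat nearby values
assert v(x_star) > v(x_star - 1e-3)
assert v(x_star) > v(x_star + 1e-3)
print(x_star, v(x_star))  # maximum at 1/e, where v = K/e
```

Since $v''(x) = -K/x < 0$ on $(0,1)$, the function is concave there, so this critical point is indeed the maximum.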
# The order of decreasing stability of the carbanions $(CH_3)_3 C^-\;(I) ; (CH_3)_2CH^{-}\;(II) ; CH_3CH_2^{-}\;(III); C_6H_5CH_2^{-}\;(IV)$ is

$(a)\;I > II > III > IV \\ (b)\;IV > III > II > I \\(c)\; IV > I > II > III \\ (d)\;I > II > IV > III$

Alkyl groups are electron-donating (+I effect), so they intensify the negative charge on the carbanion carbon and destabilize it; stability therefore decreases as the number of alkyl groups increases. The benzyl carbanion $C_6H_5CH_2^{-}$ is the most stable because its negative charge is delocalized over the ring by resonance. Stability thus increases in the order

$(CH_3)_3C^{-} < (CH_3)_2 CH^{-} < CH_3CH_2^- < C_6H_5 CH_{2}^{-}$

Therefore $IV > III > II > I$. Hence (b) is the correct answer.
observation volume, $$V_{\text{obs}}$$ in flame emission and absorption spectrometry https://doi.org/10.1351/goldbook.O04273 The volume of that part of the flame that is observed through the optical device (in $$\unicode[Times]{x3BC}\text{l}$$).
## thick film resistor failure rate

Thick film resistors are comprised of ceramic-based materials combined with metallic particles; the glassy binders are based on lead borosilicate glasses. Thick and thin film resistors have found many applications in the development of microelectronics because they can be made smaller than other comparable-value resistor types. Although their appearance might be very similar, their properties and manufacturing processes are very different. Typical uses include damping resistors and pull-up or pull-down resistors for digital circuits.

The failure rate of resistors in a properly designed circuit is low compared to other electronic components such as semiconductors and electrolytic capacitors. Failures may appear as a degradation of functionality (resistance drift) or as total failure, generally an open rather than a short circuit. As a rule, film styles are most susceptible to resistance drift, while wirewounds usually fail in the open-circuit mode.

Common causes of thick film resistor failure include:

- Differential thermal expansion: when current passes through a resistor it generates heat, causing differential thermal expansion of the different materials used to manufacture the component.
- Excess energy: under high-energy pulse conditions, resistors fail because of an inability to dissipate the heat generated by the electrical energy of the pulse.
- Temperature stress: most resistors do not suffer significant changes until the temperature resulting from a surge closes in on a problematic level; this failure shows up as varied resistance values as temperature is varied.

The subject under investigation here is the possibility of decreasing the time needed for reliability estimation of thick film resistors (TFRs). Methods of accelerating the ageing of TFRs have been explored, with encouraging results: consistent acceleration was obtained at elevated temperature, and acceleration by humidity (RH) was also obtained, an order of magnitude larger than that obtained at elevated temperature. The values of the functional parameters depend strongly on the storage test temperature. Contributions in this area include statistical models for lifetime prediction using early life data, and techniques for using supplier-provided data from tests such as load-life or temperature cycling to estimate field performance. A reliability program of this kind provides customers with real-time data on similar products: parts are tested on a continuous basis, and the data accompany the certificate of conformance.
We approximate the risk It is only in the last ten years or so that solid state physics has had the temerity to consider that real solids are In order to show how the author’s work has contributed to the field of reliability prediction this document also contains information on the history of reliability prediction. H 2 S). This device is made with thick film technology. Excess energy regularly produces excess temperature that leads to resistance changes. Resistor tolerance. Please note that all ROHM chip resistors are lead-free. Failure mode also depends on the resistor style. The validity of accelerated assessments of long-term service performance is also discussed. A model for time-to-failure prediction based on component parameter drift is described. Since the miniaturization capability of conventional thick-film resistor dimensions with regard to film thickness is limited, the thermal capacitance of the resistor element is too high to meet the requested response time as well as the required low-power ignition. The model assumes statistical independence of the initial value and the drift of the parameter. A model for the electronic conduction processes through the glass and pigments is proposed on the basis of the observed physical structures, the measured electrical properties of resistors and the properties of the component resistor materials. Table 8 Thin Film Failure Analysis Summary Table 9 Thick Film Failure Analysis Summary Section 4 Cross Reference Index Section 5 Detailed Test Data Tabulation Beckman Inst. Part Number. The silver in a conventional thick film resistor is prone to the attack of sulfur-bearing gaseous contamination. Quality drives the SOTA corporate philosophy and ensures our unique position as thehighest reliability chip resistor manufacturer in the world. 2)Resistive element is located at bottom side, which reduces the resistance shift during mounting process. 
Surface mount resistor networks, Mil-PRF-914 resistor networks, and custom thick and thin film circuits. and clearly disordered in structure. International Journal of Stochastic Analysis. Thin film resistors deliver a lower level of noise due to their more homogeneous structure – whereas thick film resistors are paste printed, Susumu utilises a sputtering process, allowing the resistive material to be 1000 times thinner than thick film, and a more uniform composition throughout (fig. However, most resistors tend to not suffer significant changes until the temperature resulted from a surge closes in to a problematic rate. densities of various random variables related to the minorant. Thin film is constructed by sputtering or vacuum deposition of a metal film. finite time is estimated for fractional Brownian motion with drift. 1). For use in applications in the aerospace, military, communications, medical and industrial market segments, both resistor series’ thick-film elements have a higher surge tolerance than thin-film, also improving reliability. High Precision Thin Film Chip Resistors Thick Film Chip Resistors Arrays–Concave Thick Film Chip Resistors Arrays–Concave Thick Film Chip Resistors Arrays–Convex Thick Film Chip Resistors Arrays–Convex Melf Resistors ( Carbon ), ( Metal ) The two principal components of the resistive glazes, that is the conducting pigment and the glassy binder, have been identified in each case. • The process involves firing the thick film resistive material to a solid ceramic substrate, thus providing a rugged base for resistors to withstand high surge conditions. The physical and mathematical definitions, as well as the practical significance, of quality and reliability are explained, with particular regard to surface mount electronic components and assemblies. minorant almost surely if and only if$| \mathbb{E}[X_1] | < \alpha$. A small package enables the design of high density circuits. 
It is therefore very much less susceptible to the effects of moisture and low d.c. loads than other types of film resistor. The thick film resistor studies proved the feasibility of iron oxide phosphate resistor systems although some environmental sensitivity problems remain. Thick film shunt resistors UCR series Datasheet Features 1)Very-low ohmic resistance from 11mΩ is in lineup by thick-film resistive element. The single component reduces board space, component counts, and assembly costs. For both thin and thick film types, value is calibrated through trimming of the resistive element. for chip 0402 & 0603 Apply 50Vac for 1 minute ± 5secs for 0201 ±(5.0%+0.001Ω) The variation in relation to the initial resistance shall be within + 1%. When$X$is a Brownian motion with We study the limit of$\mathcal{Z}$as$\alpha \to \infty$and The CRA04S thick film resistor array is constructed on a high grade ceramic body with convex terminations. Get the 2021 New Year Coupon! 3, pp 131-151, (1976). The mean-to-failure of the PVC-graphite thick film resistors are calculated using Pranchov's model for reliability prediction of thick film resistors. © 2008-2020 ResearchGate GmbH. Behavior of general one-dimensional diffusion processes, Dynamic Reliability Analysis of Mega-Sub Isolation System Under Random Ground Motion, Self-similar processes in collective risk theory, Lipschitz minorants of Brownian Motion and Levy processes. Industry leading manufacturing practices have forged the way for MSI's technological superiority. This problem has been documented for servers that are found in data centers, due to the environmental pollution of sulfur in certain industrial locations and more typically, in growth market countries where the use of coal to produce electricity is prevalent. 
The resistor element is patterned either during deposition (additive, thick film) or after deposition (subtractive, thin film), then adjusted to nominal resistance as needed, then over-coated and the individual resistor chips are singulated, then terminated, tested and packaged. The Carbon and metal films. 3 www.yageo.com Dec. 05, 2017 V.8 INTRODUCTIONChip Resistor S Thick film technology Product specification Surface Mount 16 SU MAR IZ NG Description Relationship Dimensions, … The failure rate of resistors in a properly designed circuit is low compared to other electronic components such as semiconductors and electrolytic capacitors. obtained by increasing temperature, whether by storage or dissipation, the activation energies for thermal ageing of The structures of three families of thick film resistors have been investigated by scanning electron microscopy and electron probe micro-analysis. Various statistical studies are used to arrive at these failure rates and large samples are tested at the maximum rated temperature with rated load for up to 10 000 hours (24 hours per day for approximately 13 months). Failure rate P = b T P S Q E ( / 10 6 ) b = Based Failure Rate T = Temperature Factor P = Power Factor S = Power Stress Factor Q = Quality Factor E = Environment Factor Resistor style : RM(Resistor,Fixed,Film,Chip,Established Reliability) is is applicable. inventory models. 2 0 obj Further the concept of “component working lifetimes” is discussed. situation is, even now, not perfectly clear but there is sufficient information available to make a critical assessment It has a tolerance of 1%. Numerical example shows that when the damping ratio of the isolation device is given, the overall reliability of the structure increases at first then decreases with increased frequency ratio, when the frequency ratio is constant, the overall reliability of the structure increases with increased damping ratio. MSI compiles >10,000,000 chip hours of life test a year. M.P. 
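The multiplicative structure of this prediction model is easy to sketch in code. A minimal illustration in Python; the factor values in the example are placeholders of my own, not values taken from the MIL-HDBK-217F tables:

```python
def resistor_failure_rate(lambda_b, pi_t, pi_p, pi_s, pi_q, pi_e):
    """Predicted failure rate lambda_P in failures per 1e6 hours,
    as the product of the base rate and the pi factors."""
    return lambda_b * pi_t * pi_p * pi_s * pi_q * pi_e

def mtbf_hours(lambda_p):
    """Mean time between failures implied by lambda_P (failures / 1e6 h)."""
    return 1e6 / lambda_p

# Illustrative (made-up) factors for a film chip resistor:
lam = resistor_failure_rate(lambda_b=0.002, pi_t=1.5, pi_p=1.0,
                            pi_s=0.8, pi_q=3.0, pi_e=1.0)
print(lam, mtbf_hours(lam))
```

Because the model is a plain product, the sensitivity to any one factor is linear: doubling the quality factor doubles the predicted failure rate and halves the MTBF.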
Thick film resistor failure is rarely caused by a failure of the resistive element itself; it is generally due to external environmental factors such as mechanical and electrical stresses and handling issues (Anshul Shrivastava et al., 'Thick Film Resistor Failures', 2008, which contributes a much-needed literature survey and codifies commonly accepted results). SEM Lab, Inc. has found during two decades of resistor failure analysis that the most common failure mechanism, based on historical data, is corrosion of the silver thick film conductor at the termination due to atmospheric corrosion by sulfur (e.g. H2S). In the electronics industry, cermet material is typically called thick film paste or ink. Among metallic thick film, carbon composition and carbon film resistor types, carbon film is the least able to withstand high voltage spikes.

To use the parameter-drift model it is necessary to know the distribution of the initial value of the component parameter, the component parameter drift function and the distribution of the functional parameters; the resulting drift factor can guide the development and evaluation of industrial power system instruments and measuring systems. As a result of increasingly stringent environmental requirements and Intel's corporate commitment to environmental stewardship, Intel has moved to Pb-free resistors for new generations of leading-edge products.

Methods of accelerating the ageing of thick-film resistors have been explored, with encouraging results from experiments in which elevated temperature and damp heat were applied to TFRs. Consistent acceleration was obtained by increasing temperature, whether by storage or by dissipation, allowing activation energies for thermal ageing to be estimated. Acceleration by humidity (RH) was also obtained, corresponding, for example, to a halving of life for a 20 % increase of RH, with some resistance changes being about an order of magnitude larger than those obtained at elevated temperature.

Established-reliability datasheets quote failure rates below 0.1 × 10⁻⁹ h⁻¹ for standard thick film chip families such as the Vishay CRCW series; in the part-number marking of such families, R denotes a decimal point, K thousands, M millions and 0000 a jumper, while F and J denote ±1.0 % and ±5.0 % tolerance. To calculate the failure rate and MTBF of a fixed resistor, thermistor or variable resistor using the MIL-HDBK-217F failure model, the required inputs are the resistor style, quality level, ambient temperature, actual and rated power dissipation, and environment. Thick film resistors may use the same conductive ceramics, but they are mixed with sintered …

Related literature includes 'Electronic reliability prediction: a study over 25 years'; 'Aging in Commercial Thick- and Thin-Film Resistors: Survey and Uncertainty Analysis'; 'Studies on reliability of PVC-graphite thick film resistors'; 'Quality and Reliability — Relevance and Assessment for Electronic Assemblies'; 'The organization of a study of the field failure of electronic components'; 'Optimization of Thick Film Resistors for Low Drift'; 'Some Observations on the Accelerated Ageing of Thick-Film Resistors'; 'Electrical Transport in Thick Film Resistors'; and 'Conduction Processes in Thick Film Resistors'.
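Thermal acceleration of this kind is conventionally summarized by an Arrhenius acceleration factor between a use temperature and a stress temperature. A hedged sketch in Python; the 0.7 eV activation energy in the example is a placeholder of mine, not a value reported for these resistors:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor AF = exp(Ea/k * (1/T_use - 1/T_stress)),
    with temperatures given in deg C and converted to kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

# Example: ageing at 125 C relative to use at 25 C, assumed Ea = 0.7 eV
print(arrhenius_acceleration(0.7, 25.0, 125.0))
```

One hour of bake at the stress temperature then counts as AF hours at the use temperature, which is how high-temperature life-test data are extrapolated to field conditions.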
The contributions to knowledge and innovation that have been made in reliability prediction include: the development of statistical models for lifetime prediction using early-life data (i.e. prognostics); the use of non-constant failure rates for reliability prediction; the use of neural networks for reliability prediction; the use of artificial-intelligence systems to support reliability engineers' decision making; the use of a holistic approach to reliability; the use of complex discrete-event simulation to model equipment availability; demonstration of the weaknesses of classical reliability prediction; an understanding of the basic behaviour of no-fault-founds; the development of a parametric drift model; identification of the use of a reliability database to improve the reliability of systems; and an understanding of the issues that surround the use of new reliability metrics in the aerospace industry.

Two different drift functions are described for two ruthenium-based thick film systems with equal resistivity (10 Ω/square), and with the aid of a linear theoretical model it is shown how the resistance drift in the bulk, transition zone and terminals of the resistor can be calculated for both trimmed and untrimmed resistors. An empirical model for resistance shift over life has also been developed from high-temperature bake data generated at Intel; it is then compared with assessments based on supplier data as a reality check on the methodology described earlier. Mean time-to-failure decreases approximately 4 times for a 20-degree increase in storage temperature. The reliability interpretation of the data from thick film resistor ageing tests has been completed with the model developed: the distribution of end-of-life resistor drift is derived and a maximum drift factor is calculated. Default values for activation energy, time dependence and their standard deviations are proposed based on observations in the literature.

Field failure rates of surface-mount (thick-film chip) resistors are shown in Table 1, which reports data gathered from television sets: for the ERJ2G (1005) type, 1.21 × 10¹⁰ parts used since 1990 accumulated 5.86 × 10¹³ part-hours of service, from which the point estimate λ₀ and the 60 % reliability-level estimate λ₆₀ of the failure rate (in FIT) are derived. Chip-resistor withstand-voltage testing follows JIS C 5201-1 clause 4.7 (for example, 500 V a.c. applied for 1 minute for an 0805 chip), and the acceptance criterion is that no resistor failure, such as short circuit, burning or breakdown, occurs.

An investigation was undertaken to determine the reliability of PVC-graphite thick film resistors by determining the potential failure mechanism and the mean time-to-failure, and a related project calculates the operational reliability of the UTS100 multiplex telephone systems and proposes methods and tools to improve it (G. Savov and R. Pranchov, 'Production Technology of Electronic Equipment', in Bulgarian). Another paper describes methodologies for evaluating the reliability of Pb-free chip resistors, both by using supplier-generated data and by gathering data in experiments specifically designed to simulate temperature-related ageing in the field.

Vishay Sfernice CHPFR thick film chip resistors are ESCC 4001/026-qualified 'R' failure-rate high-precision parts with design support tools available; they are in level 1 of the ESCC EPPL (European Preferred Parts List) and … Yageo thick film chip resistors are flameproof and can meet UL94V-0. Fixed single surface-mount parts such as Vishay's D55342K07B15E0RWB make it easy to add resistors to a circuit board, and SOTA offers a full line of thick and thin film resistive products, including surface-mount, wire-bondable, silicon and MIL-PRF-55342 chip resistors qualified to 'S' and 'T' (space-level) failure rates.
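A drift model of the kind described, in which resistance shift grows with time and is extrapolated to end of life, can be sketched as a power law. The model form and every coefficient below are illustrative assumptions of mine, not the fitted values from the cited work:

```python
def drift_percent(a, n, t_hours):
    """Assumed ageing law: delta-R/R (in percent) = a * t**n."""
    return a * (t_hours ** n)

def coefficient_from_reference(drift_ref, t_ref_hours, n):
    """Back out the coefficient 'a' from one measured drift point."""
    return drift_ref / (t_ref_hours ** n)

# Suppose 0.10 % drift was measured at 1,000 h and a time exponent n = 0.5
a = coefficient_from_reference(0.10, 1000.0, 0.5)
end_of_life = drift_percent(a, 0.5, 100000.0)  # extrapolate to 100,000 h
print(end_of_life)
```

Comparing the extrapolated end-of-life drift against the circuit's tolerance budget is what turns a drift function into a time-to-failure prediction.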
Thick film chip resistors are constructed by screen-printing a RuO₂-family resistive paste onto an alumina substrate (similar to hybrid manufacturing techniques) and then firing it at 850 °C; the naming of thick and thin film originates from the different layer thicknesses. The pigments were found to be simple or ternary oxides of the Pt transition-metal group and Pd/PdO/Ag alloys. Thick film resistors are commonly used in hybrid circuits, for current sensing, as power resistors or in power conversion, and precision resistive materials of this class exhibit low current noise and lower drift than semi-precision thick film resistors.

Differential thermal expansion induces relative mechanical changes (stresses) in the resistor, and inappropriate mounting of a device can cause ongoing compression or extension of it. Reliability is generally higher at lower power levels; as a Vishay note on lead (Pb)-bearing thick film rectangular chip resistors explains, the power dissipated in the resistor generates a temperature rise against the local ambient that depends on the heat-flow support (thermal resistance) of the printed-circuit board. To minimize drift in a thick film resistor, both printing and trimming parameters have to be optimized carefully; reported life-test results for optimized parts include a resistance shift of only +0.016 % with no failures.

As solid state theory has developed, there have been attempts to understand the conduction in these disordered materials. A reliability indicator is a parameter of the component that can be measured early in its life and that, by various means, can then be used to predict when the device would fail [Jensen & Møltøft, 1986]. The concept of superpopulation has been established in order to judge the relative importance of data as it is accumulated, and the resulting data set is analyzed and generalized across manufacturers. Based on the results, an estimation method is developed which assumes that the activation-energy and time-dependence parameters are independent Gaussian random variables; finally, the estimation method is applied to commercially available precision thin-film resistors. Two different definitions are suggested, both based on the assumptions of a Weibull distribution for the wear-out lifetimes and an exponential distribution to give the earlier 'constant' failure rate.

The author has over 45 publications in the area of reliability prediction, and 13 of these have been selected for review in the thesis; the aim of the related project is to test the reliability of multiplex telephone systems. Space-qualified parts have undergone the CNES evaluation (the French national space agency), and qualification also draws on standard procedures: failure-rate procedures per MIL-STD-690, defect level (ppm) per EIA-554 and statistical process control per EIA-557. Thick film QPL resistors carry an 'S' failure-rate level.
# Maximum number of paths for an acyclic graph with start and end node

Say we are given an acyclic graph with n nodes, which includes a starting node s0 and an ending node e0. What is the maximum number of paths from s0 to e0?

- If it's acyclic, how can there be more than one? Or do you mean a directed graph? –  Robert Israel Jan 23 '13 at 2:19
- Yes, it is a directed acyclic graph. –  william007 Jan 23 '13 at 2:28

[If it's not a directed graph, then there is at most 1 path between any 2 vertices; otherwise we would have a cycle.]

Consider a layered directed acyclic graph, i.e. one whose vertices can be divided into sets $V_1, V_2, \ldots V_k$ such that each edge leads from some $V_i$ to $V_{i+1}$. The number of such paths is capped at $|V_2| \times |V_3| \times \ldots \times |V_{k-1}|$, since the path must have the form $s_0, v_2, v_3, \ldots v_{k-1}, e_0$ for $v_i \in V_i$; the bound is attained when all edges between consecutive layers are present. This becomes a number theory problem, where we want to partition $n-2$ into parts so as to maximize their product. Verify that $2 \times 2 \times 2 < 3 \times 3$, and $3^n \geq n^3$ for $n \geq 3$. Hence, we want to maximize the number of 3's in the sequence. There will be slight differences according to whether $n-2 = 3k, 3k+1$ or $3k+2$, and also possibly for small values of $n$.

- Hi, thanks, but I do not quite understand the explanation; maybe possible with a simple example? Say given 10 nodes, what will be the maximum number? The graph can be any graph satisfying a Kripke structure without looping. A Kripke structure is just a structure with a start and an end node and directed edges in between. –  william007 Jan 23 '13 at 2:41

If you want the maximum number for any graph (of some size), that's easy. Take the maximal graph with $n$ vertices $v_1 \ldots v_n$, where $v_1$ is $s_0$ and $v_n$ is $e_0$, with edges $v_i \rightarrow v_j$ for all $1 \le i < j \le n$. Now any sequence $v_1 v_{p_1} \ldots v_{p_k} v_n$ where $1 < p_1 < p_2 < \ldots < p_k < n$ is a path from $s_0$ to $e_0$.
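The "use as many 3's as possible" rule in the first answer can be checked against brute force. A small sketch (the function names are mine, not from the answer):

```python
from functools import lru_cache

def greedy_max_product(m):
    """Maximum product of a partition of m, per the answer's rule:
    take 3s, but turn a leftover 1 into a 4 (= 2 + 2), keep a leftover 2."""
    if m <= 2:
        return m
    q, r = divmod(m, 3)
    if r == 0:
        return 3 ** q
    if r == 1:
        return 3 ** (q - 1) * 4
    return 3 ** q * 2

@lru_cache(maxsize=None)
def brute_max_product(m):
    """Exhaustive check: best product over all partitions of m."""
    best = m  # the trivial one-part partition
    for first in range(1, m):
        best = max(best, first * brute_max_product(m - first))
    return best

assert all(greedy_max_product(m) == brute_max_product(m) for m in range(1, 25))
```

For the 10-node example in the comment, the n − 2 = 8 interior vertices give a maximum layered-path count of 3 × 3 × 2 = 18.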
Or to put it another way: any subset of the vertices which includes both $s_0$ and $e_0$ uniquely defines a path (by sorting the vertices into index order), and there are $2^{n-2}$ such subsets.

If you have a specific graph, then you can use the following procedure to compute the number of paths:

1) Topologically sort the vertices. The first vertex in the topological sort must be $s_0$ and the last one must be $e_0$ (unless I misunderstand your question; if so, just use the portion of the topological sort between the start and end vertices).

2) Associate a count with each vertex, and set the count associated with $e_0$ to 1.

3) For each vertex in the topological sort in reverse order, starting with the vertex just before $e_0$, set its count to the sum of the counts of all of its successor vertices.

4) The count associated with $s_0$ is the total number of possible paths.

You don't actually have to do the topological sort: you can just depth-first-search the graph starting from $s_0$, computing the counts recursively (memoizing the count at each vertex, so each one is computed only once).

-
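The procedure in steps 1–4 can be sketched directly. For brevity the helper below (my own, not from the answer) assumes the vertices are already given in topological order 0, 1, …, n − 1, so step 1 is free:

```python
from collections import defaultdict

def count_paths(n, edges, s, t):
    """Number of distinct paths from s to t in a DAG whose vertices
    0..n-1 are assumed already topologically ordered (every edge (u, v)
    has u < v). Implements steps 2-4: seed t with count 1, then sweep
    the vertices in reverse order, summing successor counts."""
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
    count = [0] * n
    count[t] = 1
    for u in range(n - 1, -1, -1):
        if u != t:
            count[u] = sum(count[v] for v in succ[u])
    return count[s]

# The 'maximal' graph from the other answer: an edge i -> j for every i < j.
n = 6
complete_dag = [(i, j) for i in range(n) for j in range(i + 1, n)]
assert count_paths(n, complete_dag, 0, n - 1) == 2 ** (n - 2)
```

Running it on the complete DAG reproduces the $2^{n-2}$ count derived in the answer above.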
# Evaporation of microwave-shielded polar molecules to quantum degeneracy – Nature

Jul 27, 2022

### Sample preparation

To create our molecular samples, we first prepared a density-matched double-degenerate mixture of 23Na and 40K atoms. The atoms were subsequently associated to weakly bound molecules by means of a magnetic Feshbach resonance. Finally, the molecules were transferred to their absolute ground state via STIRAP. Details about the preparation process are described in refs. 18,43. At the beginning of the measurements described in the main text, the molecules were trapped by the 1,064-nm and the 1,550-nm beam shown in Fig. 1a at a d.c. magnetic field of 72.35 G. For the measurements of the collision rates and of the microwave transition strength, and to characterize the one-body loss, we worked with thermal molecules and sometimes reduced the molecule number to suppress interactions. For the collision rate measurements, the trap frequencies were (ωx, ωy, ωz) = 2π × (67, 99, 244) Hz. For the evaporation, however, we started with near-degenerate molecules at (ωx, ωy, ωz) = 2π × (45, 67, 157) Hz and ended up, for example, at (ωx, ωy, ωz) = 2π × (52, 72, 157) Hz in case I or at (ωx, ωy, ωz) = 2π × (42, 56, 99) Hz in case II and case III (Fig. 3). To measure the cross-dimensional thermalization, we heated the weakly bound molecules along the vertical direction after we separated them from unbound atoms and before STIRAP was applied. For this purpose, we used parametric heating by modulating the intensity of the 1,064-nm beam at twice the vertical trap frequency.

### Microwave-field generation

It is essential that the phase noise of the microwave source does not induce transitions between the dressed states. We generated the microwave with a vector signal generator (Keysight E8267D). The microwave passes through a voltage-controlled attenuator (General Microwave D1954) before it is amplified with a 10-W power amplifier (KUHNE electronic KU PA 510590 – 10 A).
At 10-MHz carrier offset, we measured −150 dBc Hz−1 phase-noise density from the signal generator and no significant enhancement from the amplifier. The microwave is emitted by a five-turn helical antenna (customized by Causemann Flugmodellbau) whose top end is about 2.2 cm away from the molecular sample. With the voltage-controlled attenuator, we can adiabatically prepare the molecules in the dressed state by ramping the power attenuation linearly within 100 μs over a range of 65 dB.

### Imaging and thermometry

To image the molecules, we transferred them back into the non-dressed absolute ground state by ramping down the microwave power. Subsequently, the dipole traps were turned off and return STIRAP pulses were applied to bring the molecules back into the weakly bound state. After time of flight, typically 10 ms, the weakly bound molecules were dissociated by ramping the magnetic field back over the Feshbach resonance. The magnetic field has to cross the Feshbach resonance slowly to minimize the release energy. In the end, the resulting atoms were imaged by absorption imaging. We estimated that the derived temperature of the molecular sample could be overestimated by about 7 nK owing to the residual release energy. It is noted that the values of T and T/TF reported in the main text do not account for the release energy. To obtain the temperature of the molecular sample, we fit the absorption images with the Fermi–Dirac distribution $${n}_{{\rm{FD}}}(x,z)={n}_{{\rm{FD}},0}{{\rm{Li}}}_{2}\left[-\zeta \exp \left(-\frac{{x}^{2}}{2{\sigma }_{x}^{2}}-\frac{{z}^{2}}{2{\sigma }_{z}^{2}}\right)\right],$$ (3) where nFD,0 is the peak density, Li2(x) is the dilogarithm, ζ is the fugacity and σi=x,z are the cloud widths in the x and z directions.
Given a cloud width σi, we can calculate the temperature Ti by $${\sigma }_{i}=\frac{\sqrt{1+{\omega }_{i}^{2}{t}_{{\rm{TOF}}}^{2}}}{{\omega }_{i}}\sqrt{\frac{{k}_{{\rm{B}}}{T}_{i}}{m}},$$ (4) where ωi is the trapping frequency in the ith direction, tTOF is the time of flight and m is the mass of the molecules. The fugacity can be associated with the ratio of the temperature T and the Fermi temperature TF with the relation $${\left(\frac{T}{{T}_{{\rm{F}}}}\right)}^{3}=-\frac{1}{6{{\rm{Li}}}_{3}(\,-\,\zeta )},$$ (5) where Li3(x) is the trilogarithmic function. $${T}_{{\rm{F}}}={(6N)}^{1/3}\hbar \bar{\omega }/{k}_{{\rm{B}}}$$ is given by the molecule number N and the geometric mean trap frequency $$\bar{\omega }={({\omega }_{x}{\omega }_{y}{\omega }_{z})}^{1/3}$$. By rewriting ζ and fixing TF, we are left with only the fitting parameters nFD,0, Tx and Tz. We note that the temperature in the direction of the imaging beam Ty is assumed to be equal to Tx = Th. In addition, we independently determine the temperatures of the molecular samples from the time-of-flight images by fitting the thermal wings of the cloud to a Gaussian distribution $${n}_{{\rm{th}}}(x,z)={n}_{{\rm{th}},0}\exp \,(\,-\,\frac{{x}^{2}}{2{\sigma }_{x}^{2}}-\frac{{z}^{2}}{2{\sigma }_{z}^{2}}),$$ (6) where nth,0 is the peak density. Similar to ref. 33, we first fit a Gaussian distribution to the whole cloud. We then constrain the Gaussian distribution to the thermal wings of the cloud by excluding a region of 1.5σ around the centre of the image. We find that by excluding 1.5σ, the ratio of signal to noise allows for the fit to converge for all datasets in Fig. 3a. The temperatures extracted from fitting the Fermi–Dirac distribution and fitting the Gaussian distribution to the thermal wings are compared in Extended Data Fig. 1. 
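Equations (4) and (5) are easy to evaluate numerically. The sketch below (plain Python, with illustrative numbers rather than the paper's fitted values; a truncated series stands in for the polylogarithm) inverts equation (4) for the temperature and solves equation (5) for T/TF at a given fugacity:

```python
import math

KB = 1.380649e-23              # Boltzmann constant, J/K
M_NAK = 63 * 1.66054e-27       # approximate mass of a 23Na40K molecule, kg

def temperature_from_width(sigma, omega, t_tof):
    """Invert equation (4): time-of-flight cloud width -> temperature."""
    return M_NAK * (sigma * omega) ** 2 / (KB * (1 + (omega * t_tof) ** 2))

def polylog(s, x, terms=200):
    """Truncated series Li_s(x) = sum_k x^k / k^s, adequate for |x| <= 1."""
    return sum(x ** k / k ** s for k in range(1, terms + 1))

def t_over_tf(fugacity):
    """Equation (5): (T/T_F)^3 = -1 / (6 Li_3(-fugacity))."""
    return (-1.0 / (6.0 * polylog(3, -fugacity))) ** (1.0 / 3.0)

# Illustrative numbers, not from the paper: a 20-um cloud after 10 ms of
# flight in a 2*pi*100 Hz trap, and a fugacity of 1.
omega = 2 * math.pi * 100
print(temperature_from_width(20e-6, omega, 10e-3))  # about 3.0e-8 K (30 nK)
print(t_over_tf(1.0))                               # about 0.57
```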
### Model for elastic and inelastic collisions The elastic and inelastic collision rate coefficients βel and βin are experimentally determined from the time evolution of the measured molecule number N, the average temperature (2Th + Tv)/3 and the differential temperature Tv − Th by numerically solving the differential equations19,34 $$\frac{{\rm{d}}N}{{\rm{d}}t}=\left(-K\frac{2{T}_{{\rm{h}}}+{T}_{{\rm{v}}}}{3}n-{\varGamma }_{1}\right)N,$$ (7) $$\frac{{\rm{d}}{T}_{{\rm{h}}}}{{\rm{d}}t}=\frac{1}{12}K{T}_{{\rm{v}}}{T}_{{\rm{h}}}n+\frac{{\varGamma }_{{\rm{th}}}}{3}({T}_{{\rm{v}}}-{T}_{{\rm{h}}}),$$ (8) $$\frac{{\rm{d}}{T}_{{\rm{v}}}}{{\rm{d}}t}=\frac{1}{12}K(2{T}_{{\rm{h}}}-{T}_{{\rm{v}}}){T}_{{\rm{v}}}n-2\frac{{\varGamma }_{{\rm{th}}}}{3}({T}_{{\rm{v}}}-{T}_{{\rm{h}}}),$$ (9) with the mean density $$n=\frac{N}{8\sqrt{{{\rm{\pi }}}^{3}{k}_{{\rm{B}}}^{3}{T}_{{\rm{h}}}^{2}{T}_{{\rm{v}}}/{m}^{3}{\bar{\omega }}^{6}}}.$$ (10) Here, K is the temperature-independent two-body loss coefficient, averaged for simplicity over all collision angles, and $${\varGamma }_{{\rm{th}}}=\frac{n{\sigma }_{{\rm{el}}}v}{{N}_{{\rm{col}}}}$$ (11) is the rethermalization rate with the elastic scattering cross-section σel and the thermally averaged collision velocity $$v=\sqrt{16{k}_{{\rm{B}}}(2{T}_{{\rm{h}}}+{T}_{{\rm{v}}})/(3{\rm{\pi }}m)}.$$ (12) The average number of elastic collisions per rethermalization is taken from ref. 48 as $${N}_{{\rm{col}}}={\bar{{\mathscr{N}}}}_{z}(\varphi )=\frac{112}{45+4\,\cos (2\varphi )-17\,\cos (4\varphi )}$$ (13) where ϕ is the tilt of the dipoles in the trap, which, in our case, corresponds to the tilt of the microwave wave vector with respect to the d.c. magnetic field. Following our characterization of the microwave polarization, we assume $${\bar{{\mathscr{N}}}}_{z}({29}^{\circ })=2.05$$. The anti-evaporation terms, that is, the first terms in equations (8) and (9), assume a linear scaling of the two-body loss rate with temperature. 
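As a quick numerical check on equations (12) and (13), the short script below (plain Python; the molecule mass for the velocity demo is an approximate NaK value, not a fitted number) reproduces the quoted value of the collision number at a 29° tilt:

```python
import math

def n_col(phi_deg):
    """Average number of elastic collisions per rethermalization, eq. (13)."""
    phi = math.radians(phi_deg)
    return 112.0 / (45.0 + 4.0 * math.cos(2 * phi) - 17.0 * math.cos(4 * phi))

def mean_collision_velocity(t_h, t_v, mass):
    """Thermally averaged collision velocity, eq. (12)."""
    kb = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(16 * kb * (2 * t_h + t_v) / (3 * math.pi * mass))

print(round(n_col(29), 2))  # 2.05, the value assumed in the text
# With an approximate NaK mass and a 100 nK sample: a few mm/s
print(mean_collision_velocity(100e-9, 100e-9, 1.05e-25))
```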
Our calculations predict that this assumption does not hold for small detunings (Δ < 2π × 20 MHz), as illustrated in Fig. 2. Our results, however, do not significantly change when we instead assume no temperature dependence in this regime. Finally, after determining σel and K, the elastic and inelastic collision rate coefficients $${\beta }_{{\rm{el}}}={\sigma }_{{\rm{el}}}v$$ (14) and $${\beta }_{{\rm{in}}}=K(2{T}_{{\rm{h}}}+{T}_{{\rm{v}}})/3$$ (15) are plotted in Fig. 2 assuming a fixed temperature T = Th = Tv. Example data of the loss measurements, performed to determine βin, are shown in Extended Data Fig. 2a. At high densities, two-body loss is the dominant contribution, whereas at low densities, the exponential shape of the loss curve shows that one-body effects outweigh inelastic collisions. To limit the number of free fit parameters, we determine Γ1 = 1.7(4) Hz in independent measurements at low densities, as shown in Extended Data Fig. 2b. To suppress confounding effects from inelastic collisions, we reduce the initial molecule number to about 2,000 for these measurements. Under these conditions, the 1/e lifetime is 570(100) ms without shielding, which is still mostly limited by residual two-body collisions. Turning on the shielding results in a similar 1/e lifetime of about 590(100) ms. The lifetime reduces to 300(50) ms when a microwave source with a 3-dB-higher phase-noise density (Rohde & Schwarz SMF100A) is used. If we isolate the molecules by loading them into a three-dimensional optical lattice, we measure a lifetime of 8.0(1.2) s in the absence of a microwave field, as shown in Extended Data Fig. 2c. Turning on the microwave field (using the Rohde & Schwarz SMF100A) results in a fast exponential decay to about half of the initially detected molecules, followed by a slow exponential decay.
Assuming that the particles are isolated on individual lattice sites and that the faster decay is a result of mixing of the two dressed states by phase noise of the microwave, we fit the data with the function $$N(t)={N}_{0}{{\rm{e}}}^{-t/{\tau }_{0}}\left(\frac{1}{2}+\frac{1}{2}{{\rm{e}}}^{-t/{\tau }_{{\rm{MW}}}}\right),$$ (16) where N0 is the initial number of molecules. We find a one-body loss time τ0 = 4.4(1.4) s and a state-mixing time τMW = 210(90) ms, which is in reasonable agreement with the lifetime measurements in the bulk. The slightly faster decay might be caused by molecules in higher bands, leading to residual collisions in the lattice, which are not accounted for in equation (16). The scaling of the lifetime with the microwave phase noise in the bulk and the measurements in the lattice indicate that Γ1 is currently limited by the noise power spectral density at the dressed-state transition (that is, at around 2π × 10 MHz offset from the carrier), even at a level of −150 dBc Hz−1 (ref. 38). In addition, we find that the 1,550-nm light, which is used to trap the molecules in the bulk (Fig. 1a), contributes about 0.5 Hz to the one-body decay. The underlying loss mechanism is under investigation.
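Equation (16) is straightforward to evaluate. The snippet below (hypothetical initial molecule number, with the fitted timescales quoted above) shows the fast mixing-induced loss of half the signal followed by the slow one-body decay:

```python
import math

def molecules(t, n0, tau0, tau_mw):
    """Equation (16): one-body decay times a phase-noise mixing term that
    removes half of the detected molecules on the timescale tau_mw."""
    return n0 * math.exp(-t / tau0) * (0.5 + 0.5 * math.exp(-t / tau_mw))

# Fitted timescales from the text (tau0 = 4.4 s, tau_mw = 0.21 s) and a
# hypothetical initial number of 1,000 molecules: half are lost quickly,
# and the remainder decays slowly.
for t in (0.0, 0.5, 2.0, 8.0):
    print(t, molecules(t, 1000, 4.4, 0.21))
```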
# A Standard Siren Measurement of the Hubble Constant from GW170817 without the Electromagnetic Counterpart

Authors: Virgo Collaboration; Fishbach, M.; Aloy Toras, Miguel Angel; Cerdá Durán, Pablo; Cordero Carrión, Isabel; Font Roda, José Antonio; Marquina Vila, Antonio; Obergaulinger, M.; Sanchis Gual, Nicolas; Torres Forné, Alejandro (2019). https://hdl.handle.net/10550/72539

Abstract: We perform a statistical standard siren analysis of GW170817. Our analysis does not utilize knowledge of NGC 4993 as the unique host galaxy of the optical counterpart to GW170817. Instead, we consider each galaxy within the GW170817 localization region as a potential host; combining the redshifts from all of the galaxies with the distance estimate from GW170817 provides an estimate of the Hubble constant, H 0. Considering all galaxies brighter than $0.626{L}_{B}^{\star }$ as equally likely to host a binary neutron star merger, we find ${H}_{0}={77}_{-18}^{+37}$ km s−1 Mpc−1 (maximum a posteriori and 68.3% highest density posterior interval; assuming a flat H 0 prior in the range $\left[10,220\right]$ km s−1 Mpc−1). We explore the dependence of our results on the thresholds by which galaxies are included in our sample, and we show that weighting the host galaxies by stellar mass or star formation rate provides entirely consistent results with potentially tighter constraints.
By applying the method to simulated gravitational-wave events and a realistic galaxy catalog we show that, because of the small localization volume, this statistical standard siren analysis of GW170817 provides an unusually informative (top 10%) constraint. Under optimistic assumptions for galaxy completeness and redshift uncertainty, we find that dark binary neutron star measurements of H 0 will converge as $40\%/\sqrt{N}$, where N is the number of sources. While these statistical estimates are inferior to the value from the counterpart standard siren measurement utilizing NGC 4993 as the unique host, ${H}_{0}={76}_{-13}^{+19}$ km s−1 Mpc−1 (determined from the same publicly available data), our analysis is a proof-of-principle demonstration of the statistical approach first proposed by Bernard Schutz over 30 yr ago.

Published in: Astrophysical Journal Letters, 2019, vol. 871, num. 1, p. L13. Subjects: Astrophysics; Gravitation. DOI: https://doi.org/10.3847/2041-8213/aaf96e
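To illustrate the statistical method described in the abstract, here is a deliberately simplified sketch: a Gaussian stand-in for the GW distance likelihood, a handful of made-up candidate-host redshifts, and a flat H0 prior on a grid. All numbers are invented for illustration; the real analysis marginalizes over the full GW posterior and uses catalog-based galaxy weights.

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def h0_posterior(galaxy_redshifts, d_gw_mean, d_gw_sigma, h0_grid, weights=None):
    """Toy statistical standard siren: every candidate host galaxy contributes
    a term evaluating the GW distance likelihood (approximated as a Gaussian)
    at the low-redshift Hubble-law distance c*z/H0. Flat H0 prior on a grid;
    optional per-galaxy weights (e.g. by luminosity)."""
    weights = weights or [1.0] * len(galaxy_redshifts)
    post = []
    for h0 in h0_grid:
        like = 0.0
        for z, w in zip(galaxy_redshifts, weights):
            d = C_KM_S * z / h0  # distance this galaxy implies, in Mpc
            like += w * math.exp(-0.5 * ((d - d_gw_mean) / d_gw_sigma) ** 2)
        post.append(like)
    norm = sum(post)
    return [p / norm for p in post]

# Made-up localization volume: a galaxy near NGC 4993's redshift plus three
# interlopers, and a GW distance of 40 +/- 7 Mpc (illustrative only).
zs = [0.0098, 0.008, 0.012, 0.016]
grid = list(range(40, 121))
post = h0_posterior(zs, 40.0, 7.0, grid)
print(grid[post.index(max(post))])  # location of the toy posterior's peak
```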
# Clustered standard errors in Python

I'm running a large regression by hand using Python and was surprised that I couldn't (immediately) find code for clustering standard errors in Python. I have been implementing a fixed-effects estimator in Python so I can work with data that is too large to hold in memory. To make sure I was calculating my coefficients and standard errors correctly, I have been comparing the calculations of my Python code to Stata; at the end I output the data to Stata to check my calculations.

The code below does this for some simulated data and hopefully also helps give intuition for the math. For reference, here's the formula from Cameron and Miller (p. 8); line 26 is equation (10), and sum_XuuTX gives the term in equation (11). Also note the degrees-of-freedom correction, which I got from the Stata manual (p. 54). If you have Stata, this will give you the same standard errors as this command:

    use resid_test.dta, clear
    regress y X*, cluster(ID) nocons

If you would rather not do it by hand: econtools is a Python package of econometric functions and convenient shortcuts for data work with pandas and numpy; among other things it provides clustered standard errors, spatial HAC (SHAC, a.k.a. Conley standard errors) with uniform and triangle kernels, and F-tests by variable name or R matrix (full documentation is available). There is also the MCai416/Linear-Regressions repository, which covers OLS with two-way clustered standard errors, imperfect multicollinearity (ridge and PCA), and ARMA(p, q) with bootstrap.

## Robust or clustered standard errors?

Problem: default standard errors (SE) reported by Stata, R and Python are right only under very limited circumstances. The standard errors determine how accurate your estimation is, and they are crucial in determining how many stars your table gets; like in any business, in economics the stars matter a lot. Hence, obtaining the correct SE is critical.

Robust standard errors account for heteroskedasticity in a model's unexplained variation: if the amount of variation in the outcome variable is correlated with the explanatory variables, robust standard errors can take this correlation into account. Computing cluster-robust standard errors is a fix for the further issue of errors that are correlated within groups. Clustered errors have two main consequences: they (usually) reduce the precision of 𝛽̂, and the standard estimator for the variance of 𝛽̂, V[𝛽̂], is (usually) biased downward from the true variance. For example, duplicating a data set will reduce the naive standard errors dramatically despite there being no new information, which is why adjusting standard errors for clustering can be a very important part of any statistical analysis.

The Moulton Factor, the ratio of OLS standard errors to CRVE standard errors, provides a good intuition of when the CRVE errors can be small. In some experiments with few clusters and within-cluster correlation, nominal 5% tests have rejection frequencies of 20% for CRVE, but 40-50% for OLS. Angrist and Pischke's Mostly Harmless Econometrics semi-jokingly gives the number of 42 as the minimum number of clusters for which the method works. Jeff Wooldridge had a review of clustered standard errors published in the AER, which mentions some other considerations, and a concise presentation on many issues surrounding clustered standard errors was given at the 2007 Stata User … [truncated in the source]. In general, the standard Liang-Zeger clustering adjustment is conservative unless one … If the answer to both is no, one should not adjust the standard errors for clustering, irrespective of whether such an adjustment would change the standard errors.

With panel data it's generally wise to cluster on the dimension of the individual effect, as both heteroskedasticity and autocorrelation are almost certain to exist in the residuals at the individual level. Stata took the decision to change the robust option after `xtreg y x, fe` to automatically give you `xtreg y x, fe cl(pid)`, in order to make it more fool-proof and people making a … [truncated]. So, to be clear: the choice is between a fixed-effects model and a pooled OLS with clustered standard errors. This is asymptotically equivalent to the standard test if random effects without clustered errors is already efficient.

Two example settings from readers: "My data is 1,000 firms: 500 Swedish, 100 Danish, 200 Finnish, 200 Norwegian, selected from the Compustat Global database; this is all I know about the data." And: "I am looking to estimate pooled OLS regressions featuring double-clustered standard errors (where standard errors are clustered by both individual and time), but the dimensions of this problem are causing issues: I have a large panel data set featuring the purchases of 5,000+ individuals over 2,000+ time periods (days), and it is unbalanced and with gaps." In terms of programming, this is easy if you have a balanced panel; if not, then this complicates things in the sense that you need to estimate $\widehat{\theta}_i$ for every panel unit.

## Residual and prediction standard errors in Python

A related question ("Standard error of regression and of predictions in python (these are available in R)", closed on Cross Validated): I'm working with R and confirming my results in Python, with the overwhelming majority of the work matching between the two quite well. First question: how do you get the R "Residual standard error" in Python? Second question: how do you get the R "standard error of each prediction" (se.fit in R) in Python? The est_1a results object has a bunch of values, but I'm not finding the standard error, and est_1a.predict only returns a time series, so the predict call does not seem to calculate it. I would expect the pre-packaged calls to be available, since practically everything else that is in R is in Python.

Answer: for the first question, what R calls the "residual standard error" is the square root of the scale parameter. For the second, several statsmodels models now have a get_prediction method that provides standard errors and confidence intervals for the predicted mean, as well as prediction intervals for new observations:

    pred = results.get_prediction(x_predict)
    pred_df = pred.summary_frame()

Some examples are in this gist: https://gist.github.com/josef-pkt/1417e0473c2a87e14d76b425657342f5

## Aside: clustered tables in BigQuery

For information about querying clustered tables, see Querying clustered tables; to learn how to create and use clustered tables, see Creating and using clustered tables; for a detailed clustered table pricing example, see the Pricing page; and for an overview of partitioned tables in BigQuery, see Introduction to partitioned tables.

## Aside: k-means clustering in Python

K-means clustering is an unsupervised machine-learning technique, meaning we have only inputs and not the corresponding output labels; it identifies clusters of data objects in a dataset by grouping them based on their similarity. With four clusters, the whole data set is categorized into either 0, 1, 2 or 3. In Python, import matplotlib for creating charts and sklearn for applying k-means; in the code you can specify the number of clusters. (Before building the plot, make sure you have the Anaconda Distribution of Python installed; see the instructions for installing Anaconda on Windows, then open the Anaconda Prompt from the Start Menu to create a new virtual environment. If using OSX or Linux, the terminal can also be used.) There is also a step-by-step video performing k-means clustering in Python using a Jupyter notebook. Two caveats: non-flat-geometry clustering is useful when the clusters have a specific shape, i.e. a non-flat manifold where the standard Euclidean distance is not the right metric; and agglomerative hierarchical clustering fixes the number of clusters but not their sizes, with the comparison made to a ground-truth clustering. For quantifying how similar two clusterings are, CluSim is a Python package for calculating clustering similarity (Gates et al., 2019).
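To make the by-hand clustered-SE computation discussed above concrete, here is a minimal pure-Python sketch for the simplest case: a single-regressor, no-intercept OLS with the Cameron-Miller cluster-robust sandwich variance and Stata's small-sample correction. The data are simulated and all names are mine; a real application would use numpy/pandas, statsmodels or econtools.

```python
import math
import random

def ols_clustered_se(x, y, cluster):
    """Cluster-robust standard error for a single-regressor, no-intercept
    OLS fit: a toy analogue of `regress y X*, cluster(ID) nocons`, using
    the Cameron-Miller sandwich with Stata's degrees-of-freedom correction."""
    n, k = len(y), 1
    bread = sum(xi * xi for xi in x)                        # X'X
    beta = sum(xi * yi for xi, yi in zip(x, y)) / bread
    resid = [yi - beta * xi for xi, yi in zip(x, y)]
    # "Meat": sum over clusters g of (sum_{i in g} x_i * u_i)^2
    scores = {}
    for xi, ui, g in zip(x, resid, cluster):
        scores[g] = scores.get(g, 0.0) + xi * ui
    meat = sum(s * s for s in scores.values())
    n_clusters = len(scores)
    # Small-sample correction: G/(G-1) * (N-1)/(N-K)
    correction = (n_clusters / (n_clusters - 1)) * ((n - 1) / (n - k))
    return beta, math.sqrt(correction * meat / bread ** 2)

# Simulated data with a cluster-level error component
random.seed(0)
x, y, cl = [], [], []
for g in range(50):             # 50 clusters of 20 observations each
    shock = random.gauss(0, 1)  # shared within-cluster shock
    for _ in range(20):
        xi = random.gauss(0, 1)
        x.append(xi)
        cl.append(g)
        y.append(2.0 * xi + shock + random.gauss(0, 1))
beta, se = ols_clustered_se(x, y, cl)
print(beta, se)  # beta close to the true value of 2
```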
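Likewise, the k-means recipe from the aside above can be sketched without scikit-learn. This is a plain-Python Lloyd's algorithm on made-up two-blob data; function and variable names are illustrative only:

```python
import random

def kmeans(points, k, iters=20):
    """Plain-Python Lloyd's algorithm for 2-D points: assign each point to
    its nearest centroid, move each centroid to the mean of its assigned
    points, and repeat."""
    # Deterministic initialization: pick k points spread across the input.
    centroids = [points[i * len(points) // k] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: (p[0] - centroids[c][0]) ** 2 +
                                          (p[1] - centroids[c][1]) ** 2)
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:  # keep the old centroid if a cluster empties out
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return labels, centroids

# Two well-separated blobs; labels take the values 0..k-1, as in the text.
rng = random.Random(1)
pts = ([(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(30)] +
       [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(30)])
labels, centroids = kmeans(pts, 2)
print(sorted(set(labels)))  # [0, 1]
```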
# My conjecture

Inspiration: Nihar Mahajan

I often see people cramming when learning the position of points in a 3D coordinate system, so here is a way to visualize it. Imagine the point (3, -3, -3). First, using basic coordinate geometry, we can see that (3, -3) lies in the 4th quadrant. Now, since -3 is a negative number, the point (3, -3, -3) will lie in the octant directly below the 4th quadrant, i.e. the 8th octant.

Similarly, for any point $$(a,b,c)$$ we can first visualize the quadrant in which $$(a,b)$$ lies, and then visualize the octant in which $$(a,b,c)$$ lies, using the sign of c (whether it is positive or negative).

Kindly leave your comments/opinions in the discussion below $$\ddot\smile$$

Note by Mehul Arora
3 years, 4 months ago

Sort by:

Good thinking, but I feel that what you are saying does not satisfy the dictionary meaning of "conjecture". - 3 years, 4 months ago

Hmm, I agree. But what else can we name it? :P - 3 years, 4 months ago

Can it be called some sort of algorithm?
- 3 years, 3 months ago
Well, maybe a visualization ;) - 3 years, 3 months ago
Ya :) - 3 years, 3 months ago
- 3 years, 4 months ago
uum, Hi! - 3 years, 4 months ago
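The visualization in the note can be written as a tiny function: find the quadrant of (a, b) first, then drop into the lower four octants when c is negative. A caveat: the helper names and the numbering convention used here (octants 1-4 sit above quadrants 1-4, octants 5-8 directly below them) are my own choices, and points lying on an axis or coordinate plane are left undefined.

```python
def quadrant(a, b):
    """Quadrant of (a, b); assumes a and b are nonzero."""
    if a > 0:
        return 1 if b > 0 else 4
    return 2 if b > 0 else 3

def octant(a, b, c):
    """Octant of (a, b, c): the quadrant of (a, b), shifted down by 4 when c < 0."""
    q = quadrant(a, b)
    return q if c > 0 else q + 4

print(octant(3, -3, -3))  # the note's example: quadrant 4, negative z -> 8
```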
Having a LaTeX PDF display the date the document was printed, not compiled… I have been looking around for quite some time trying to find a way for a PDF (created using XeTeX) to show the date it is printed, not the date it was compiled. I am familiar with \date, \datedate, the datenumber & datetime packages, etc., but these are not what I want. I also found a page on tex.stackexchange.com, which I mostly do not understand ( Date of file creation ), that was getting closer to what I want but not exactly. Oh, by the way, it would be great if it were platform-independent, as not all employees have the same computer. The idea here is to create time-sheets for employees that they can print off before work and have the date automatically created for them. Here is the rough (really) draft of the time sheet (most of the comments are me playing around with some new-to-me techniques involving the pgf/tikz packages - irrelevant). Thanks very much and hope it is clear! \documentclass[10pt,onecolumn,usenames,dvipsnames]{amsart} \usepackage{tikz} \usepackage{pgf} \usepackage{pgfkeys} \usetikzlibrary{ chains, arrows, shapes.misc, shapes.arrows, matrix, positioning, scopes, decorations.pathmorphing, } \usetikzlibrary{snakes} \usepackage[hmargin=1.5cm, vmargin=1.50cm]{geometry} \geometry{top=0.5cm} \geometry{left=0.5cm} \geometry{right=0.5cm} \usepackage{bbding} \usepackage{amssymb} \usepackage{amsmath,fp} \usepackage{algorithm2e} \usepackage{eurosym} \usepackage{xcolor,graphicx} \usepackage{fontspec,xltxtra,xunicode} \defaultfontfeatures{Mapping=tex-text} \setromanfont[Mapping=tex-text]{Hoefler Text} \setsansfont[Scale=MatchLowercase,Mapping=tex-text]{Gill Sans} \usepackage{hyperref} \definecolor{text1}{HTML}{2b2b2b} \usepackage{fancyhdr} \pagestyle{fancy} \fancyhf{} \rfoot{\color{text1} {\sffamily Last update: \today}. 
Typeset with X\LaTeX} \usepackage{titlesec} \titleformat{\section}{\color{blue}\scshape\Large\raggedright}{}{0em}{}[\color{blue}\titlerule] \titlespacing{\section}{0pt}{0pt}{5pt} \usepackage[yyyymmdd,hhmmss]{datetime} \usepackage{datenumber} \usepackage{fancyhdr} \pagestyle{fancy} \rfoot{Compiled on \today\ at \currenttime} \cfoot{} \lfoot{Page \thepage} \usepackage{ifthen} \begin{document} \color{text1} \centering \par{\centering{ { \addfontfeature{LetterSpace=20.0}\fontsize{20}{20}\selectfont \scshape {\underline{To Do $\backslash$ Did}} }} }\\[6mm] {\color{white} \hrule} \begin{minipage}[t]{0.25\textwidth} \section{Timeline} \begin{tikzpicture} \draw[snake](0,40pt) -- (0,0); \draw (0,20pt) node[left=10pt]{\datedate}; \draw (0,10pt) node[left=10pt]{Worked On:}; \draw(0 pt, 0 pt) -- (0 pt, -600 pt); \foreach \y [count=\yi] in {-30,-54,...,-570}{ \draw (3 pt, \y pt) -- ( -3pt, \y pt); \draw (0 pt, .1\y pt) -- (-60 pt, .1\y + 1 pt ) node[midway,above]{[\hspace{1.5cm}]}; \draw (0 pt, 0.1\y pt) -- (100 pt, 0.1\y + 20 pt) node[midway,sloped,below]{}; } \end{tikzpicture} \end{minipage} \hfill \begin{minipage}[t]{0.25\textwidth} \vspace{21mm} \foreach \a/\b/\z [count=\ai,count=\bi,count=\zi] in {1,...,23} { \ifthenelse{\ai<13} { \ifthenelse{\isodd{\ai}} { \FPdiv\c{\ai}{2} \FPadd\c{\c}{6} \FPupn\c{\c{} 0 round} $\FPprint\c$: \ifthenelse{\bi=2} {$30$} {$00$} } { \FPdiv\c{\ai}{2} \FPadd\c{\c}{6} \FPupn\c{\c{} 0 round} $\FPprint\c$:$30$ } to \ifthenelse{\isodd{\zi}} { \FPdiv\c{\zi}{2} \FPadd\c{\c}{6} \FPupn\c{\c{} 0 round} $\FPprint\c$:$30$ } { \FPdiv\c{\zi}{2} \ifthenelse{\zi<12}{\FPadd\c{\c}{7}}{\FPadd\c{\c}{-5}} \FPupn\c{\c{} 0 round} $\FPprint\c$: %interior \ifthenelse takes care of 12:30 - 1:00 \ifthenelse{\zi=1} {$30$} {$00$} } \\\\ } { \ifthenelse{\isodd{\ai}} { \FPdiv\c{\ai}{2} \FPadd\c{\c}{-6} \FPupn\c{\c{} 0 round} $\FPprint\c$:$00$ } { \FPdiv\c{\ai}{2} \FPadd\c{\c}{-6} \FPupn\c{\c{} 0 round} $\FPprint\c$: \ifthenelse{\bi=2} {$00$} {$30$} } -- \ifthenelse{\isodd{\zi}} { 
\FPdiv\c{\zi}{2} \FPadd\c{\c}{-6} \FPupn\c{\c{} 0 round} $\FPprint\c$:$30$ } { \FPdiv\c{\zi}{2} \ifthenelse{\zi<12}{\FPadd\c{\c}{7}}{\FPadd\c{\c}{-5}} \FPupn\c{\c{} 0 round} $\FPprint\c$: %interior \ifthenelse takes care of 12:30 - 1:00 \ifthenelse{\zi=1} {$30$} {$00$} } \\\\ } } \end{minipage} \hfill \begin{minipage}[t]{0.38\textwidth} \section{Description} \begin{enumerate} \item Move old tires and wheels into grey round bin north of shop. \item Move small LP tanks into quonset \item Drill and tap threads into quonset door frame \item Wire brush wheel and mount two tires for water trailer \item Clean up paint gun (air) \item Clean up yard of garbage and lumber \item Stack wood by container on west side of quonset \end{enumerate} \end{minipage} \newpage \begin{minipage}[t]{0.45\textwidth} \section{Timeline} \begin{tikzpicture} \draw[snake] (0,40pt) -- (0,0); \draw (0,20pt) node[left=10pt] {Prior to sale. $\Bigg\{$}; \draw (0 pt,0 pt) -- (0 pt,-400 pt); \foreach \y in {40,0,-40,-80, -120, -160, -200, -240} \draw (3pt, \y pt) -- ( -3pt, \y pt); \draw (0,0) node[left=10pt] {XX/XX/2011}; \draw (0 pt,-40 pt) node[left=10pt] {XX/XX/2011}; \draw (0 pt, -40 pt) -- (40:4) node[midway,sloped,below]{start of .}; \draw (0 pt, -40 pt) node[below=0pt]{\hspace{4cm} Pur}; \draw (0 pt, -40 pt) node[below=10pt]{\hspace{4cm} arm, }; \draw (0 pt, -80 pt) node[left=10pt] {XX/XX/2011}; \draw (0 pt, -120 pt) node[left=10pt] {XX/XX/2011}; \draw (0 pt, -160 pt) node[left=10pt] {XX/XX/2011}; \draw (0 pt, -200 pt) node[left=10pt] {XX/XX/2011}; \end{tikzpicture} \end{minipage} %END of left-hand side minipage \hfill \begin{minipage}[t]{0.51\textwidth} \section{Timeline Specifics} \vspace{60pt} \begin{tikzpicture}[every on chain/.style=join,every join/.style=->,node distance=2mm and 1cm] { [start chain=trunk] \node[on chain]{A}; \node [on chain] {B}; { [start branch=numbers going below] \node [on chain] {1}; \node [on chain] {2}; \node [on chain] {3}; } { [start branch=greek going above] \node 
[on chain] {$\alpha$}; \node [on chain] {$\beta$}; \node [on chain] {$\gamma$}; } \node [on chain,join=with trunk/numbers-end,join=with trunk/greek-end] {C}; { [start branch=symbols going below] \node [on chain] {$\star$}; \node [on chain] {$\circ$}; \node [on chain] {$\int$}; } } \end{tikzpicture} \makeatletter \tikzset{join/.code=\tikzset{after node path={% \ifx\tikzchainprevious\pgfutil@empty\else(\tikzchainprevious)% edge[every join]#1(\tikzchaincurrent)\fi}}} \makeatother \tikzset{>=stealth',every on chain/.append style={join}, every join/.style={->}} \centering \begin{tikzpicture}[start chain] { \node[on chain] {$0$}; \node[on chain] {$A$} ; \node[on chain, join={node[above] {$\scriptstyle\varphi$}}] {$B$}; \node[on chain, join={node[above] {$\scriptstyle\psi$}}] {$C$}; \node[on chain] {$0$}; } \end{tikzpicture} \bigskip \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=3em, column sep=3em] { 0 & A & B & C & 0 \\ 0 & A' & B' & C' & 0 \\ }; { [start chain] \chainin (m-1-1); \chainin (m-1-2); { [start branch=A] \chainin (m-2-2) [join={node[right] {$\scriptstyle\eta_1$}}];} \chainin (m-1-3) [join={node[above] {$\scriptstyle\varphi$}}]; { [start branch=B] \chainin (m-2-3) [join={node[right] {$\scriptstyle\eta_2$}}];} \chainin (m-1-4) [join={node[above] {$\scriptstyle\psi$}}]; { [start branch=C] \chainin (m-2-4) [join={node[right] {$\scriptstyle\eta_3$}}];} \chainin (m-1-5); } { [start chain] \chainin (m-2-1); \chainin (m-2-2); \chainin (m-2-3) [join={node[above] {$\scriptstyle\varphi'$}}]; \chainin (m-2-4) [join={node[above] {$\scriptstyle\psi'$}}]; \chainin (m-2-5); } \end{tikzpicture} \end{minipage} \vspace{10pt} \end{document} - Welcome to TeX.SX. I took the liberty to format your post so that it is displayed correctly. Have a look at tex.stackexchange.com/editing-help to learn how it is done. 
–  Martin Scharrer Mar 22 '11 at 18:11 The tdclock package offers an easy way to insert the current date and time into a PDF document: \documentclass{article} \usepackage{tdclock} \begin{document} \initclock Date: \tddate \end{document} Mind the \initclock at the beginning of the document; then you'll be able to display the current time and date with the macros \tdclock, \tddate, \tdtime, ... (see the documentation for more details). - This looks super simple! On Mac OS X 10.6 the Preview app (for viewing PDFs) does not seem to include a JavaScript engine, so the date isn't shown when viewing the documentation. –  Christian Lindig Mar 22 '11 at 19:56 @Christian Good point, it should be mentioned that the PDF viewer must support JavaScript so that the date can be displayed (few readers other than Adobe Reader do this...). With no JavaScript support available, however, the question is in my opinion unsolvable, because you then can't change the PDF dynamically to display the current date. –  diabonas Mar 22 '11 at 20:03 Great! It works with Adobe Reader 9 under Linux. A test print into a PS file shows the print date correctly. This should do what the OP wants. Actually tying it into the print event, so that the date is only updated when the PDF is printed, would make it more efficient, but that's not really required. However, a ticking clock might be annoying when reading the PDF. –  Martin Scharrer Mar 22 '11 at 20:04 Placing the compile date or some other date into the document at compile time is easy. It's just more text. However, this text can't be changed by the PDF viewer for the print-out! So these two things are completely different. The only way I see this done is to add a PDF form field to your document which is updated using embedded JavaScript when (i.e. just before) the PDF is printed. You should look into this direction. For example, AFAIK the movie15 package places JavaScript into the PDF (not that it does what you need). 
You might end up needing to write the JavaScript code yourself. You could ask e.g. at http://stackoverflow.com how to update a PDF text field with the date when the PDF is to be printed. - I still don't think this will work unless the particular PDF viewer used is also a PDF editor. –  Matthew Leingang Mar 22 '11 at 18:57 @Matthew: You can fill out a PDF form in a normal PDF viewer and print it. You only need a PDF editor if you want to save the form data or modify the form itself. The viewer needs to support JavaScript, of course. –  Martin Scharrer Mar 22 '11 at 19:00 I believe you can use JavaScript to compute a default value (like a date) for a form field in a PDF form. The hyperref package provides support for PDF forms but I haven't used it together with JavaScript. –  Christian Lindig Mar 22 '11 at 19:12 I find this idea more and more interesting, so I posted a question about JS in PDFs myself: stackoverflow.com/q/5396617/256941 –  Martin Scharrer Mar 22 '11 at 19:20 Hi all! Thanks Martin for reformatting my post. Thanks to all for the help and leads, though well beyond my TeX ability ha ha... Actually I started by reading tug.org/TUGboat/Articles/tb22-3/tb72story.pdf and from there the package insdljs (insert document-level JavaScript). So I hope to come up with something soon. Thanks again! –  Brett Mar 22 '11 at 20:58 Compiling to PostScript via dvips rather than to PDF would give you a slightly better chance to obtain the current date directly from within the document while it is being printed -- see this FAQ. A PostScript document, unlike a PDF document, is really a program that is executed by the printer, and hence it could emit slightly different pages every time it is printed. However, not all PostScript versions support this feature, so its availability would depend on the printer being used. I would also have some doubts about the accuracy of the date and time read from the printer. 
Taken together, this does not seem to be a viable solution for your use case. - Nice post. I assume most people won't use a PS printer directly but rather a software PS interpreter like GhostScript to print it on a "normal" printer. With GhostScript there is a bigger chance of getting a correct date and time, I think. –  Martin Scharrer Mar 22 '11 at 18:59 If you had a clock on the PDF that kept time using JavaScript, then any time you printed it the paper would have the time printed, right? It reminds me of a discussion we had on latex-beamer ages ago about putting a ticking clock on a presentation. There was a posted solution with a link that appears dead now, but some code was traded around in the thread (from the powerdot package) which may help. - Have a look at this question: Is there a way to add a timer to a Beamer presentation? –  Hendrik Vogt Mar 23 '11 at 20:52
Status This thread has been Locked and is not open to further replies. Please start a New Thread if you're having a similar issue. View our Welcome Guide to learn how to use this site. #### libby342 HI, I seem to have posted this in the wrong section earlier, under Windows 98 help, and was told that it really should have been posted here, so I am doing so now. I really do hope that someone here can assist me with this issue further. I seem to be having a problem running my Mozilla Firefox today after I signed off of AOL version 8 and back on. I tried to reinstall Mozilla Firefox several times and it still won't load up for me. I did a HijackThis log and I really need for someone here to please tell me if anything in this log looks like it could be stopping Mozilla Firefox from coming up all of a sudden today, please? I also seem to have a virus on this PC that won't let me do a complete virus scan of the PC: it finds one virus and I get to delete that one, but then when it gets to a second "virus detected" alert, the PC shuts itself off and then I have to go for a reboot. So that is another reason I did this HijackThis log, so that hopefully someone here who is more advanced than me when it comes to computers can help me to figure it out. Thanks so much for the help with this. 
HERE IS THE LOG BELOW: I will now wait for a reply back to this HIJACK THIS LOG Thanks to someone who can please assist me with this ASAP: Logfile of HijackThis v1.99.1 Scan saved at 12:54:57 PM, on 1/9/2006 Platform: Windows XP (WinNT 5.01.2600) MSIE: Internet Explorer v6.00 (6.00.2600.0000) Running processes: C:\WINDOWS\System32\smss.exe C:\WINDOWS\system32\winlogon.exe C:\WINDOWS\system32\services.exe C:\WINDOWS\system32\lsass.exe C:\WINDOWS\system32\svchost.exe C:\WINDOWS\System32\svchost.exe C:\WINDOWS\system32\spoolsv.exe C:\WINDOWS\Explorer.EXE C:\windows\system\hpsysdrv.exe C:\Windows\system32\HpSrvUI.exe C:\HP\KBD\KBD.EXE C:\Program Files\WildTangent\DDC\DDCManager\DDCMan.exe C:\Program Files\Java\jre1.5.0_05\bin\jusched.exe C:\PROGRA~1\ALWILS~1\Avast4\ashDisp.exe C:\Program Files\Zero Knowledge\Freedom\Freedom.exe C:\Program Files\hp center\137903\Program\BackWeb-137903.exe C:\Program Files\Common Files\Microsoft Shared\Works Shared\wkcalrem.exe C:\Program Files\SpywareGuard\sgmain.exe C:\Program Files\SpywareGuard\sgbhp.exe C:\WINDOWS\System32\PackethSvc.exe C:\Program Files\Alwil Software\Avast4\aswUpdSv.exe C:\Program Files\Alwil Software\Avast4\ashServ.exe C:\WINDOWS\System32\nvsvc32.exe C:\WINDOWS\wanmpsvc.exe C:\WINDOWS\system32\fxssvc.exe C:\Program Files\Alwil Software\Avast4\ashMaiSv.exe C:\Program Files\Alwil Software\Avast4\ashWebSv.exe C:\Program Files\America Online 8.0\aol.exe C:\Program Files\America Online 8.0\waol.exe C:\Program Files\America Online 8.0\aolwbspd.exe C:\Documents and Settings\Owner\My Documents\HIJACK THIS FOLDER\HijackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://us4.hpwis.com/ R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://srch-us4.hpwis.com/ R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Bar = http://srch-us4.hpwis.com/ R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://srch-us4.hpwis.com/ R1 - 
HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://srch-us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Bar = http://srch-us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://srch-us4.hpwis.com/ R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://us4.hpwis.com/ R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = http://srch-us4.hpwis.com/ R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = http://srch-us4.hpwis.com/ O2 - BHO: Freedom BHO - {56071E0D-C61B-11D3-B41C-00E02927A304} - C:\Program Files\Zero Knowledge\Freedom\FreeBHOR.dll O2 - BHO: (no name) - {FDD3B846-8D59-4ffb-8758-209B6AD74ACC} - c:\Program Files\Microsoft Money\System\mnyviewer.dll O3 - Toolbar: &Radio - {8E718888-423F-11D2-876E-00A0C9082467} - C:\WINDOWS\System32\msdxm.ocx O3 - Toolbar: &Zero-Knowledge Freedom - {FA91B828-F937-4568-82C1-843627E63ED7} - C:\Program Files\Zero Knowledge\Freedom\BandObjs.dll O3 - Toolbar: (no name) - {BA52B914-B692-46c4-B683-905236F6F655} - (no file) O4 - HKLM\..\Run: [hpsysdrv] c:\windows\system\hpsysdrv.exe O4 - HKLM\..\Run: [hp Silent Service] C:\Windows\system32\HpSrvUI.exe O4 - HKLM\..\Run: [hpScannerFirstBoot] c:\hp\drivers\scanners\scannerfb.exe O4 - HKLM\..\Run: [PreloadApp] c:\hp\drivers\printers\photosmart\hphprld.exe c:\hp\drivers\printers\photosmart\setup.exe -d O4 - HKLM\..\Run: [KBD] C:\HP\KBD\KBD.EXE O4 - HKLM\..\Run: [DDCM] "C:\Program Files\WildTangent\DDC\DDCManager\DDCMan.exe" -Background O4 - HKLM\..\Run: [Recguard] C:\WINDOWS\SMINST\RECGUARD.EXE O4 - HKLM\..\Run: [NvCplDaemon] RUNDLL32.EXE NvQTwk,NvCplDaemon initialize O4 - HKLM\..\Run: [IgfxTray] C:\WINDOWS\System32\igfxtray.exe O4 - HKLM\..\Run: [HotKeysCmds] C:\WINDOWS\System32\hkcmd.exe O4 - HKLM\..\Run: [PS2] C:\WINDOWS\system32\ps2.exe O4 - HKLM\..\Run: 
[SunJavaUpdateSched] C:\Program Files\Java\jre1.5.0_05\bin\jusched.exe O4 - HKLM\..\Run: [Zone Labs Client] C:\Program Files\Zone Labs\ZoneAlarm\zlclient.exe O4 - HKLM\..\Run: [RealTray] C:\Program Files\Real\RealPlayer\RealPlay.exe SYSTEMBOOTHIDEPLAYER O4 - HKLM\..\Run: [avast!] C:\PROGRA~1\ALWILS~1\Avast4\ashDisp.exe O4 - HKLM\..\Run: [KernelFaultCheck] %systemroot%\system32\dumprep 0 -k O4 - HKCU\..\Run: [MSMSGS] "C:\Program Files\Messenger\msmsgs.exe" /background O4 - HKCU\..\Run: [Microsoft Works Update Detection] c:\Program Files\Microsoft Works\WkDetect.exe O4 - HKCU\..\Run: [Zero Knowledge Freedom] C:\Program Files\Zero Knowledge\Freedom\Freedom.exe O4 - Startup: planetluckinstaller.exe.lnk = C:\Installer\planetluckinstaller.exe O4 - Startup: SpywareGuard.lnk = C:\Program Files\SpywareGuard\sgmain.exe O4 - Global Startup: America Online 8.0 Tray Icon.lnk = C:\Program Files\America Online 8.0\aoltray.exe O4 - Global Startup: AOL Companion.lnk = C:\Program Files\AOL Companion\companion.exe O4 - Global Startup: hp center.lnk = C:\Program Files\hp center\137903\Program\BackWeb-137903.exe O4 - Global Startup: Microsoft Works Calendar Reminders.lnk = ? 
O9 - Extra button: (no name) - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.5.0_05\bin\npjpi150_05.dll O9 - Extra 'Tools' menuitem: Sun Java Console - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.5.0_05\bin\npjpi150_05.dll O9 - Extra button: PlanetLuck.com - {6F477182-DE4F-4326-ACE3-3110A676771B} - C:\Program Files\Planetluck Casino\bin\IEExtension_PL.dll O9 - Extra 'Tools' menuitem: PlanetLuck.com - {6F477182-DE4F-4326-ACE3-3110A676771B} - C:\Program Files\Planetluck Casino\bin\IEExtension_PL.dll O9 - Extra button: Real.com - {CD67F990-D8E9-11d2-98FE-00C0F0318AFE} - C:\WINDOWS\System32\Shdocvw.dll O9 - Extra button: MoneySide - {E023F504-0C5A-4750-A1E7-A9046DEA8A21} - c:\Program Files\Microsoft Money\System\mnyviewer.dll O12 - Plugin for .spop: C:\Program Files\Internet Explorer\Plugins\NPDocBox.dll O16 - DPF: {04E214E5-63AF-4236-83C6-A7ADCBF9BD02} (HouseCall Control) - http://housecall60.trendmicro.com/housecall/xscan60.cab O16 - DPF: {4ED9DDF0-7479-4BBE-9335-5A1EDB1D8A21} - https://objects.aol.com/mcafee/molbi...3/mcinsctl.cab O16 - DPF: {BCC0FF27-31D9-4614-A68E-C18E1ADA4389} - https://objects.aol.com/mcafee/molbi...20/McGDMgr.cab O17 - HKLM\System\CCS\Services\Tcpip\..\{4F5FAB1A-0CA6-4C02-BFCE-903CB4300F79}: NameServer = 205.188.146.145 O23 - Service: avast! iAVS4 Control Service (aswUpdSv) - Unknown owner - C:\Program Files\Alwil Software\Avast4\aswUpdSv.exe O23 - Service: avast! Antivirus - Unknown owner - C:\Program Files\Alwil Software\Avast4\ashServ.exe O23 - Service: avast! Mail Scanner - Unknown owner - C:\Program Files\Alwil Software\Avast4\ashMaiSv.exe" /service (file missing) O23 - Service: avast! Web Scanner - Unknown owner - C:\Program Files\Alwil Software\Avast4\ashWebSv.exe" /service (file missing) O23 - Service: NVIDIA Driver Helper Service (NVSvc) - NVIDIA Corporation - C:\WINDOWS\System32\nvsvc32.exe O23 - Service: Virtual NIC Service (PackethSvc) - America Online, Inc. 
- C:\WINDOWS\System32\PackethSvc.exe O23 - Service: TrueVector Internet Monitor (vsmon) - Zone Labs, LLC - C:\WINDOWS\SYSTEM32\ZoneLabs\vsmon.exe O23 - Service: WAN Miniport (ATW) Service (WANMiniportService) - America Online, Inc. - C:\WINDOWS\wanmpsvc.exe LIZ #### 8dalejr.fan First off, you are denying yourself a lot of security features by not having a single service pack installed on your computer. I'd recommend installing XP SP2 promptly after this issue is resolved. You're just leaving your computer wide open and vulnerable to hijacks like this. Service packs can be downloaded from http://windowsupdate.microsoft.com in case you didn't know where to get them (I looked at your other thread and you seemed confused). Be sure to get the other critical security updates too. I'm no HJT expert, but I can clearly tell that the log looks hijacked. We'll wait until an expert comes along to help you with this. You would get better and faster responses in the Security forum. That's where all the security pros hang out... but don't double post please. #### MFDnNC Log looks OK. Do you know what this is? O4 - Startup: planetluckinstaller.exe.lnk = C:\Installer\planetluckinstaller.exe
· Install Ewido.
· During the installation, under "Additional Options" uncheck "Install background guard" and "Install scan via context menu".
· Launch Ewido.
· It will prompt you to update; click the OK button and it will go to the main screen.
· On the left side of the main screen click Update.
· Click on Start and let it update.
· DO NOT run a scan yet. You will do that later in safe mode.
Restart your computer into safe mode now. Perform the following steps in safe mode: (start tapping F8 at the first black screen after power-up)
Run Ewido:
· Click on Scanner.
· Click Complete System Scan and the scan will begin.
· During the scan it will prompt you to clean files; click OK.
· When the scan is finished, look at the bottom of the screen and click the Save report button. 
· Save the report to your C: drive.
This will take some time to run! Boot to normal mode. Post that log and a new HijackThis log. #### 8dalejr.fan R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://us4.hpwis.com/ R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://srch-us4.hpwis.com/ R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Bar = http://srch-us4.hpwis.com/ R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://srch-us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://srch-us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Bar = http://srch-us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://srch-us4.hpwis.com/ R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://us4.hpwis.com/ R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = http://srch-us4.hpwis.com/ R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = http://srch-us4.hpwis.com/ Log looks OK Those first dozen or so entries seem abnormal. I think the homepages are hijacked, aren't they? Those don't look like normal sites to have as a homepage... From my perspective, I think you have some browser hijacks there, and your log wouldn't be clean if that's the case. #### MFDnNC Comes with HP systems #### libby342 Hello MFDnSC, First off, thank you very much for getting back to me on this matter so fast, appreciate it. So you want me to download the Ewido first; I was told I don't have the current XP version on this PC, is that true? And if so, do I need that before I download this Ewido, please? I just want to make sure I get all these steps correct before I proceed with everything you have been kind enough to post for me here to do. 
I also read the other post by member 8dalejr.fan, thank you for your post as well on this matter; all the help I can get here on this, the better. You said that you thought my log had been hijacked? But then member MFDnSC said no, this was normal for the HP PC I have got here???? So I am safe on this so far, yes, apart from doing the Ewido steps I mean thus far? Thanks to the both of you so far with this, and please if you can stay nearby, so that as soon as I clearly understand what steps to do next to proceed, I will get these going right now!! I will await your much needed reply back to this post if you're still here. libby342 #### libby342 Hi again, sorry, forgot to answer this one as well. MFDnSC, I do know what this one is, but when I play these games they do tend to give me a virus, which Avast, the virus scanner I got, deletes for me at the time the virus comes up, but it won't let me do a complete scan on the PC. Should I remove this one as well from the HijackThis log? Do you know what this is O4 - Startup: planetluckinstaller.exe.lnk = C:\Installer\planetluckinstaller.exe OK, now I will wait for your reply back and further steps to do in the right order on this. Thanks so very much again, libby342 #### 8dalejr.fan Well, the way those URLs for your homepage and search tools appeared made me think they were browser hijack attempts. But as long as those URLs look familiar (see the quote I made before) and they're genuine, then you're fine as far as hijacks go. But it's an awfully strange URL, reminiscent of other browser hijacks I've experienced, for HP to set as default. But here, it is said that it is a browser hijack: Strange looking address, imo, that's all. I learn something new every day. That's why I'm not a HJT expert and he is. What is your homepage supposed to be set to? 
#### libby342 Hello 8dalejr.fan, Thank you as well for this added info on security. I am with you as far as HijackThis logs go; I am willing to do whatever steps are necessary to get the PC back to where it was before. I used to use McAfee VirusScan and even that won't do a scan for me now, which is why I went to Avast, and that did the trick, but this last time the PC shuts off when doing the scan, and I know there is a virus here, I just can't get it. I believe it said it was a virus or a worm?? It went too fast and shut off for me to catch it all. But I don't even know if I am currently running Windows XP on this PC or Windows 98, as I thought they were one and the same, so yes, confused am I. I can follow steps easily enough just as long as they are given to me, so I will very much await your both much needed help on this matter of getting the virus out of the PC, and also for getting whatever security updates are needed for this machine. Thanks for taking the time out to further assist me with this, really appreciate it from the both of you. libby342 #### MFDnNC You do not have a current version of XP, but let's get you clean first - run Ewido as instructed and we'll go from there #### 8dalejr.fan What was your homepage supposed to be set to? The HP stuff (as in those 11 URLs from your log) or something else? I'm still confused as to whether those URLs are hijacks or not. #### libby342 Hello MFDnSC, me again. So sorry it took me forever to get even these simple steps done. This is what happened along the way: I downloaded the Ewido, that went OK. I then followed your steps to do the scan in safe mode; however, the first time it got to 97.7% then the PC shut itself off, yes, even in safe mode it seems to do that to me, don't know why???? 
I thought safe mode was just that, safe mode, but anyway it shut itself off, so I then had to reboot it back up in normal mode 2 times (yes, 2 times, because the first time when it went off in safe mode it then comes back and you have to press the F2 key to boot it up, so I always then reboot it and reboot back in normal mode, which is what I did). So then I proceeded to reboot into safe mode again, and it scanned that time around and found 135 infection files, so I hit the save log, and instead of it giving me a chance to save the report like you said to do at the bottom, it quarantined all these 135 files instead and then proceeded to do yet another scan. It scanned 207,336 files and this time around found nothing (I know, because they are in quarantine; I left them there for now??). So I then rebooted back up in the normal mode, but I can't seem to get the Ewido logs to copy since they are all in the quarantine (I hope I am spelling that word right) part. SO the best I got for all this time thus far is the HijackThis log and the Ewido files in the other part, the quarantine section. So I will post this HijackThis log here and SO hope you can tell me how to get the Ewido files either over to you, or what I should now do with those??? They all seem like cookie files to me apart from like 2 or 3, but I didn't want to remove them until I can at least get a reply back from you as to what I should do next on this matter??? Should I restore them and hope the PC stays on in the safe mode to pick them all up again, or do I just delete them, or help please? I do appreciate the time you have spent here thus far with me, and sure do hope you can further assist me with this, to get this PC back to the right version of XP, as well as to help me make it safer than what it has been up to this point. I even removed Mozilla Firefox as the browser, as it still won't work, even though it did yesterday??? 
And this is all after I redownloaded Mozilla, even. But these new files of the 135, and finding out that my PC isn't safe or secure, have me very worried, as well as everything else that you and 8dalejr.fan have told me up to this point is very insecure on my PC. I will now await your reply back on the next step to go with this. OK, here is the HijackThis log: Logfile of HijackThis v1.99.1 Scan saved at 8:43:40 PM, on 1/9/2006 Platform: Windows XP (WinNT 5.01.2600) MSIE: Internet Explorer v6.00 (6.00.2600.0000) Running processes: C:\WINDOWS\System32\smss.exe C:\WINDOWS\system32\winlogon.exe C:\WINDOWS\system32\services.exe C:\WINDOWS\system32\lsass.exe C:\WINDOWS\system32\svchost.exe C:\WINDOWS\System32\svchost.exe C:\WINDOWS\Explorer.EXE C:\WINDOWS\system32\spoolsv.exe C:\windows\system\hpsysdrv.exe C:\Windows\system32\HpSrvUI.exe C:\HP\KBD\KBD.EXE C:\Program Files\WildTangent\DDC\DDCManager\DDCMan.exe C:\Program Files\Java\jre1.5.0_05\bin\jusched.exe C:\PROGRA~1\ALWILS~1\Avast4\ashDisp.exe C:\Program Files\Real\RealPlayer\RealPlay.exe C:\Program Files\Zero Knowledge\Freedom\Freedom.exe C:\Program Files\America Online 8.0a\aoltray.exe C:\Program Files\hp center\137903\Program\BackWeb-137903.exe C:\Program Files\Common Files\Microsoft Shared\Works Shared\wkcalrem.exe C:\Program Files\SpywareGuard\sgmain.exe C:\Program Files\SpywareGuard\sgbhp.exe C:\WINDOWS\System32\PackethSvc.exe C:\Program Files\Alwil Software\Avast4\aswUpdSv.exe C:\Program Files\Alwil Software\Avast4\ashServ.exe C:\Program Files\ewido anti-malware\ewidoctrl.exe C:\WINDOWS\System32\nvsvc32.exe C:\WINDOWS\wanmpsvc.exe C:\Program Files\Alwil Software\Avast4\ashMaiSv.exe C:\Program Files\Alwil Software\Avast4\ashWebSv.exe C:\Documents and Settings\Owner\My Documents\HIJACK THIS FOLDER\HijackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://us4.hpwis.com/ R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = 
http://srch-us4.hpwis.com/ R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Bar = http://srch-us4.hpwis.com/ R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://srch-us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://srch-us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Bar = http://srch-us4.hpwis.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://srch-us4.hpwis.com/ R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://us4.hpwis.com/ R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = http://srch-us4.hpwis.com/ R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = http://srch-us4.hpwis.com/ R1 - HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings,ProxyOverride = localhost O2 - BHO: Freedom BHO - {56071E0D-C61B-11D3-B41C-00E02927A304} - C:\Program Files\Zero Knowledge\Freedom\FreeBHOR.dll O2 - BHO: (no name) - {FDD3B846-8D59-4ffb-8758-209B6AD74ACC} - c:\Program Files\Microsoft Money\System\mnyviewer.dll O3 - Toolbar: &Radio - {8E718888-423F-11D2-876E-00A0C9082467} - C:\WINDOWS\System32\msdxm.ocx O3 - Toolbar: &Zero-Knowledge Freedom - {FA91B828-F937-4568-82C1-843627E63ED7} - C:\Program Files\Zero Knowledge\Freedom\BandObjs.dll O3 - Toolbar: (no name) - {BA52B914-B692-46c4-B683-905236F6F655} - (no file) O4 - HKLM\..\Run: [hpsysdrv] c:\windows\system\hpsysdrv.exe O4 - HKLM\..\Run: [hp Silent Service] C:\Windows\system32\HpSrvUI.exe O4 - HKLM\..\Run: [hpScannerFirstBoot] c:\hp\drivers\scanners\scannerfb.exe O4 - HKLM\..\Run: [PreloadApp] c:\hp\drivers\printers\photosmart\hphprld.exe c:\hp\drivers\printers\photosmart\setup.exe -d O4 - HKLM\..\Run: [KBD] C:\HP\KBD\KBD.EXE O4 - HKLM\..\Run: [DDCM] "C:\Program Files\WildTangent\DDC\DDCManager\DDCMan.exe" -Background O4 - HKLM\..\Run: 
[Recguard] C:\WINDOWS\SMINST\RECGUARD.EXE O4 - HKLM\..\Run: [NvCplDaemon] RUNDLL32.EXE NvQTwk,NvCplDaemon initialize O4 - HKLM\..\Run: [IgfxTray] C:\WINDOWS\System32\igfxtray.exe O4 - HKLM\..\Run: [HotKeysCmds] C:\WINDOWS\System32\hkcmd.exe O4 - HKLM\..\Run: [PS2] C:\WINDOWS\system32\ps2.exe O4 - HKLM\..\Run: [SunJavaUpdateSched] C:\Program Files\Java\jre1.5.0_05\bin\jusched.exe O4 - HKLM\..\Run: [Zone Labs Client] C:\Program Files\Zone Labs\ZoneAlarm\zlclient.exe O4 - HKLM\..\Run: [avast!] C:\PROGRA~1\ALWILS~1\Avast4\ashDisp.exe O4 - HKLM\..\Run: [KernelFaultCheck] %systemroot%\system32\dumprep 0 -k O4 - HKLM\..\Run: [RealTray] C:\Program Files\Real\RealPlayer\RealPlay.exe SYSTEMBOOTHIDEPLAYER O4 - HKCU\..\Run: [MSMSGS] "C:\Program Files\Messenger\msmsgs.exe" /background O4 - HKCU\..\Run: [Microsoft Works Update Detection] c:\Program Files\Microsoft Works\WkDetect.exe O4 - HKCU\..\Run: [Zero Knowledge Freedom] C:\Program Files\Zero Knowledge\Freedom\Freedom.exe O4 - Startup: planetluckinstaller.exe.lnk = C:\Installer\planetluckinstaller.exe O4 - Startup: SpywareGuard.lnk = C:\Program Files\SpywareGuard\sgmain.exe O4 - Global Startup: America Online 8.0 Tray Icon.lnk = C:\Program Files\America Online 8.0a\aoltray.exe O4 - Global Startup: AOL Companion.lnk = C:\Program Files\AOL Companion\companion.exe O4 - Global Startup: hp center.lnk = C:\Program Files\hp center\137903\Program\BackWeb-137903.exe O4 - Global Startup: Microsoft Works Calendar Reminders.lnk = ? 
O9 - Extra button: (no name) - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.5.0_05\bin\npjpi150_05.dll O9 - Extra 'Tools' menuitem: Sun Java Console - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.5.0_05\bin\npjpi150_05.dll O9 - Extra button: PlanetLuck.com - {6F477182-DE4F-4326-ACE3-3110A676771B} - C:\Program Files\Planetluck Casino\bin\IEExtension_PL.dll O9 - Extra 'Tools' menuitem: PlanetLuck.com - {6F477182-DE4F-4326-ACE3-3110A676771B} - C:\Program Files\Planetluck Casino\bin\IEExtension_PL.dll O9 - Extra button: Real.com - {CD67F990-D8E9-11d2-98FE-00C0F0318AFE} - C:\WINDOWS\System32\Shdocvw.dll O9 - Extra button: MoneySide - {E023F504-0C5A-4750-A1E7-A9046DEA8A21} - c:\Program Files\Microsoft Money\System\mnyviewer.dll O12 - Plugin for .spop: C:\Program Files\Internet Explorer\Plugins\NPDocBox.dll O16 - DPF: {04E214E5-63AF-4236-83C6-A7ADCBF9BD02} (HouseCall Control) - http://housecall60.trendmicro.com/housecall/xscan60.cab O16 - DPF: {4ED9DDF0-7479-4BBE-9335-5A1EDB1D8A21} - https://objects.aol.com/mcafee/molbin/shared/mcinsctl/en-us/4,0,0,83/mcinsctl.cab O16 - DPF: {BCC0FF27-31D9-4614-A68E-C18E1ADA4389} - https://objects.aol.com/mcafee/molbin/shared/mcgdmgr/en-us/1,0,0,20/McGDMgr.cab O23 - Service: avast! iAVS4 Control Service (aswUpdSv) - Unknown owner - C:\Program Files\Alwil Software\Avast4\aswUpdSv.exe O23 - Service: avast! Antivirus - Unknown owner - C:\Program Files\Alwil Software\Avast4\ashServ.exe O23 - Service: avast! Mail Scanner - Unknown owner - C:\Program Files\Alwil Software\Avast4\ashMaiSv.exe" /service (file missing) O23 - Service: avast! 
Web Scanner - Unknown owner - C:\Program Files\Alwil Software\Avast4\ashWebSv.exe" /service (file missing) O23 - Service: ewido security suite control - ewido networks - C:\Program Files\ewido anti-malware\ewidoctrl.exe O23 - Service: NVIDIA Driver Helper Service (NVSvc) - NVIDIA Corporation - C:\WINDOWS\System32\nvsvc32.exe O23 - Service: Virtual NIC Service (PackethSvc) - America Online, Inc. - C:\WINDOWS\System32\PackethSvc.exe O23 - Service: TrueVector Internet Monitor (vsmon) - Zone Labs, LLC - C:\WINDOWS\SYSTEM32\ZoneLabs\vsmon.exe O23 - Service: WAN Miniport (ATW) Service (WANMiniportService) - America Online, Inc. - C:\WINDOWS\wanmpsvc.exe thanks once again for the help with this it is very very appreciated by me libby342 #### libby342 Hello 8dalejr.fan , I am so sorry it took me so long to get back here as per my above post, to answer your question below: What was your homepage supposed to be set to? The HP stuff (as in those 11 URLs from your log) or something else? I'm still confused as whether those URLs are hijacks or not. I set my home page in AOL now no where else dont no how to do that , but in aol when I sign on that page is set to Blank?? SO please forgive me if I am unclear as to what you meant by those 11 URLS but I think they are not suppose to be there either, so I do hope that maybe you all can please let me know if they all need to be removed or are they safe as well. thanks and once again I will sit tight until I can here back from you both to help me get this pc back to where it is suppose to be and hopefully safer then it has been as well up to today. Glad i at least posted this HighJAck log here sure has opened my eyes up a bit more. libby342 #### 8dalejr.fan Well, I have my home page set to www.nascar.com. When I turn on my internet, that is what I expect to load. A HJT log for me would reveal www.nascar.com as those entries. 
If http://us4.hpwis.com/ is NOT what you expect to see when you start up the internet, or you get a page that says About:Blank or something, then you are hijacked. Hope this helps...

#### MFDnNC

Run HijackThis, mark this entry, then click "Fix checked":

O4 - Startup: planetluckinstaller.exe.lnk = C:\Installer\planetluckinstaller.exe

You are not hijacked! Reboot and post a new log - what is the status of your system?
# Tag Info

13

Let me first go through this without friction or air drag. You say $v_y$ along the $x$-axis and the train moves with $v_x$ along the $z$-axis. This is a little inconsistent. I will use the velocities, but not your description of the axes. So the train moves in the $x$-direction, the ball is thrown in the $y$-direction, and the $z$-direction is up-down. ...

11

Why does time stop in black holes? Time according to whom? The fact is that, in special and general relativity, there is no universal time. Indeed, time is a coordinate in relativity, so one must be careful to specify the coordinate system when asking questions like this. Now, every entity also has an associated proper time which is not a coordinate ...

6

I think there are two answers to this, one empirical and one theoretical. First, the theoretical one: What you describe is essentially induction, the belief that we can generalize from a subset of a class of events/situations to the whole class of events/situations. This belief is, by necessity, unprovable, only falsifiable, since proving it would require ...

5

Short answer: It doesn't stop. Slightly longer answer: The case of a non-rotating, non-charged black hole is described by the Schwarzschild solution. It is now the case that, if you draw the worldline of a particle falling into a black hole, you will find that the coordinate time in the Schwarzschild metric grows infinite as the particle approaches the ...

5

If you stick to gases then things are relatively straightforward, because the temperature is related to the relative velocity of the gas molecules, that is, the velocity of the gas molecules relative to each other. If you put your canister of gas in a fast moving (but non-relativistic) rocket moving at some velocity $v$ then you add the same velocity $v$ to ...

4

so when I arrive at B why would I be younger? I believe I addressed this in another question of yours.
Once again, assume that when you pass planet A, your clock and planet A's clock both read $t = t_A = 0$. Now, according to the inhabitants of planet A, planet B's clock is synchronized with their clock. However, in your inertial frame of reference, ...

4

Use \begin{align} \frac{a_{23}}{a_{33}} & = \frac{-\cos b \sin a}{\cos a \cos b} = -\tan a \\ \frac{a_{13}}{\sqrt{a_{23}^2 + a_{33}^2}} & = \frac{\sin b}{\cos b \sqrt{\sin^2 a + \cos^2 a}} = \tan b \\ \frac{a_{12}}{a_{11}} & = \frac{-\cos b \sin c}{\cos b \cos c} = -\tan c \end{align}

a = atan2(-a23, a33)
b = atan2(a13, ...

4

There is no reason why physical laws should be absolute. But observation tells us they are. If you think about it, if the laws of the universe did change from place to place, or if they were different at different times, there would be no laws, and there would be no science.

4

The thrower's height doesn't change, i.e. it is the same in both the reference frames of the thrower and the bug. That's because distances normal to the direction of motion are not changed by Lorentz transformations. In the bug's frame the thickness of the thrower decreases, so the thrower is flattened in the direction of motion, but the height is unchanged. ...

3

No. Time dilation is the slowing of time as experienced by the fast moving craft, not the 'stationary' observer. Remember that light moves at c, and we see it move at c, not some slower or stationary speed. As the craft approaches c, it appears to accelerate increasingly slowly; from 0.99999c to 0.999999c is only a difference of 2.7 km/s, but it is still ...

3

Let me try a more down to earth example: Let's say I formulate a law "I can kick with my leg in front of me without getting hurt." This law is indeed true in many cases, but in some cases it is not, because there is a wall right in front of me and my leg kinda hurts after kicking. That is, the world is not everywhere the same. Say I come to the same place ...
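The atan2 recipe in the rotation-matrix answer above can be sketched in code. The quoted matrix entries are consistent with the decomposition R = Rx(a)·Ry(b)·Rz(c); that convention is an assumption made here for illustration, so adapt the indices if your own convention differs (note `a13` maps to `R[0][2]` with 0-based indexing):

```python
import math

def rx(a):  # rotation about the x-axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(b):  # rotation about the y-axis
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(g):  # rotation about the z-axis
    c, s = math.cos(g), math.sin(g)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):  # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_from_matrix(R):
    """Recover (a, b, c) from R = Rx(a) @ Ry(b) @ Rz(c) via atan2,
    following the answer above (assumed convention; a13 -> R[0][2])."""
    a = math.atan2(-R[1][2], R[2][2])
    b = math.atan2(R[0][2], math.hypot(R[1][2], R[2][2]))
    c = math.atan2(-R[0][1], R[0][0])
    return a, b, c

# Round trip: angles -> matrix -> angles
R = matmul(rx(0.3), matmul(ry(-0.5), rz(1.1)))
print(euler_from_matrix(R))  # approximately (0.3, -0.5, 1.1)
```

Using `atan2` rather than `atan` of the ratios keeps the correct quadrant and avoids division by zero; the `hypot` in the middle line makes the recovery of `b` well behaved near `cos b = 0`.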
3

You shouldn't use the "subjective/objective" distinction for a place where "relative/absolute" is much more appropriate, because they mean different things. For something to be subjective, it must be dependent on the knowledge or state of mind of an observer. As an example, suppose we define "depth" as "length along the direction an observer is facing". ...

3

it doesn't move in time, so no time will have passed when the light arrives at its "destination". Right? A photon does 'move in time'. It is just that, for a photon, the displacement in time, $c \Delta t$, equals the displacement in space, $\Delta x$. However, there is no proper time for a photon. Your proper time is, in words, the elapsed time ...

3

Let $\Delta_S$ and $\Delta_G$ be the time dilation effects due to General Relativity (gravity) and Special Relativity (motion) respectively (i.e. the clock rate on the satellite due to SR and GR is $1 - \Delta_S + \Delta_G$, signs chosen for simplicity). If these are small, they can be approximated as: \begin{eqnarray} \Delta_S &=& 1 - \sqrt{1 - ...

3

There are two mathematical concepts which are both called vector. The first one, the vector from a linear vector space, is the basic "multicomponent object" which you seem to mainly talk about. The second notion of a vector is that of a member of the so-called "tangent bundle" of a manifold. The second notion is the one which is defined equivalently with the ...

2

No, it doesn't mean that. One must distinguish two things: "laws of physics that apply to an object" and "laws of physics formulated from an object's viewpoint". These are two different things. Laws of physics apply to all objects. And the behavior of the objects may be described relative to many coordinate systems or "frames of reference". The special ...

2

The time dilation due to motion in a circle, relative to an observer at the centre, is just the usual Lorentz time dilation due to the velocity of the motion.
If you're interested, in my answer to Is gravitational time dilation fundamentally different than other forms of time dilation? I showed how this is derived from the metric. Anyhow, as you say, the ...

2

Almost none. Let's be much more generous than your idea of a human-carrying craft. Let's just use the fastest probe. The Helios II craft, after nearing the sun, reached a heliocentric speed somewhere near 70 km/s. Obviously, its speed was more due to the gravitational influence of the sun than its engines. $$t = \frac{t_o}{\sqrt{1 - \frac{v^2}{c^2}}}$$ ...

2

So let's just say that the spacecraft can accelerate until it's moving away from the Earth at the speed of the fastest currently-existing spacecraft. First, note that the fastest speed, relative to Earth, that a spacecraft has obtained is an exceedingly small fraction of $c$ and, thus, one should not expect significant time dilation. For ...

2

Ok, before we fill up the comment section with this, I will write this as an answer: Proper time $\tau$ along a path $\gamma$ is $$\tau := \int_\gamma \sqrt{\mathrm{d}x^\mu\mathrm{d}x_\mu}$$ and a clock moving along $\gamma$ will have $\tau$ as its elapsed time at the end of the path. Yet, the definition of proper time $\tau$ involves such clocks not ...

2

It means that time is no longer an absolute concept, yes. The time a specific observer experiences in a specific frame of reference, i.e. his proper time, depends on the path (worldline) he takes through spacetime. In other words, it depends on his state of motion, the way he accelerates. This is the reason for the famous twin paradox: the resolution is ...

2

The answer is that not everything is relative. Indeed, you've just shown that it is possible to detect rotation in an absolute sense. More generally, the principle of relativity says that all inertial frames are equivalent. In other words, it is impossible to detect (or even to define) absolute motion at a uniform velocity; it doesn't make sense to say that ...
2

As the others have said, it's simply true because it is a fundamental axiom upon which we build the entirety of our physical laws. You can also approach an answer from a different perspective, though: relativity. In studying special and general relativity, one of the most important (and most difficult) concepts to grasp is that of 4-dimensional ...

2

that to an external observer it would appear to be moving very much slower? I don't understand the reasoning here. When you write "assume that if a craft was travelling at a speed very close to the speed of light" I take that to mean that the craft is travelling very close to the speed of light according to an external observer. Keep in mind ...

2

Does that mean the speed of an individual photon is c even with respect to another photon? I mean, shouldn't the relative velocity be zero? When we write "the speed with respect to X" we mean precisely "the speed as observed from the inertial frame of reference in which X is at rest". Thus, if it is true that the speed of an individual photon ...

2

Events which lie within each other's light cones are called "timelike separated." All observers agree on the ordering of these events. Events which lie on each other's light cones are separated by a "lightlike" or "null" interval. All observers also agree about the time ordering of these events. Events which lie outside of each other's light cones are called ...

2

It is really rotating, with a period of one month, according to a Foucault pendulum. That's the prediction of Newtonian gravity, and GR's prediction can't diverge much from that when the gravitational field is so weak. (Also consider that a Foucault pendulum on Earth empirically measures the length of the sidereal day, not the solar day.)
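Several answers above apply the Lorentz time-dilation factor $t = t_o/\sqrt{1 - v^2/c^2}$. As a quick numerical check of the "Almost none" answer (a minimal sketch; the 70 km/s figure is the Helios II heliocentric speed quoted there):

```python
import math

C = 299_792.458  # speed of light in km/s

def lorentz_gamma(v_km_s: float) -> float:
    """Time-dilation factor gamma, so that t = t0 * gamma for speed v."""
    beta = v_km_s / C
    return 1.0 / math.sqrt(1.0 - beta * beta)

# Helios II at ~70 km/s: the fractional dilation gamma - 1 is tiny
excess = lorentz_gamma(70.0) - 1.0
print(excess)                        # on the order of 3e-8
print(excess * 365.25 * 86400.0)     # under a second of drift per year
```

This confirms the point made in the answer: even at the speed of the fastest probe, the dilation amounts to well under a second per year of travel.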
1

If someone is falling into a black hole, the nearer he/she gets to the black hole, the slower time will pass, and when he/she reaches the edge of the event horizon, the time it would take for an observer to see him/her cross the event horizon will be infinite (in other words, if a friend were watching, the friend would never see him/her crossing the event horizon). ...

1

Occam's Razor and observation are the reasons. The properties of the physical laws that you mention are called the homogeneity and isotropy of space, and are the two key elements in the Cosmological Principle, which merely states that space is the same everywhere, and it doesn't matter which way you go, things will always be the same. This is of course on a ...

1

We can do that using states of matter. If temperature were frame dependent, observers in different frames should observe different states of matter near melting and boiling points, which is not the case. This was the easiest explanation I could think of.
TY - JOUR T1 - Superconvergence and $L^{\infty}$-Error Estimates of RT1 Mixed Methods for Semilinear Elliptic Control Problems with an Integral Constraint JO - Numerical Mathematics: Theory, Methods and Applications VL - 3 SP - 423 EP - 446 PY - 2012 DA - 2012/05 SN - 5 DO - http://doi.org/10.4208/nmtma.2012.m1118 UR - https://global-sci.org/intro/article_detail/nmtma/5945.html KW - Semilinear elliptic equations, optimal control problems, superconvergence, $L^{\infty}$-error estimates, mixed finite element methods, postprocessing. AB - In this paper, we investigate the superconvergence property and the $L^{\infty}$-error estimates of mixed finite element methods for a semilinear elliptic control problem with an integral constraint. The state and co-state are approximated by the order one Raviart-Thomas mixed finite element space and the control variable is approximated by piecewise constant functions or piecewise linear functions. We derive some superconvergence results for the control variable and the state variables when the control is approximated by piecewise constant functions. Moreover, we derive $L^{\infty}$-error estimates for both the control variable and the state variables when the control is discretized by piecewise linear functions. Finally, some numerical examples are given to demonstrate the theoretical results.
# Journal of Information Systems Engineering and Management

Web based Approach to Overcome the Market Information Gap between Poultry Farmers and Potential Buyers in Tanzania

Cesilia Mambile and Dina Machuve, Nelson Mandela African Institution of Science and Technology (NM-AIST), TANZANIA

Research Article. Journal of Information Systems Engineering and Management, 2019 - Volume 4, Issue 1, Article No: em0085. Published Online: 24 Mar 2019.

Reference: Mambile, C., & Machuve, D. (2019). Web based Approach to Overcome the Market Information Gap between Poultry Farmers and Potential Buyers in Tanzania. Journal of Information Systems Engineering and Management, 4(1), em0085. https://doi.org/10.29333/jisem/5740

ABSTRACT

Poultry farming is a very important sector in Tanzania: it improves human health, provides income when products are sold, and supports the livelihoods of 3.7 million households. The sector faces a lack of market information and the absence of a well-coordinated system harmonized between stakeholders; both poultry farmers and buyers rely on informal market information. The findings from this study show that the key market information required by poultry farmers and buyers includes price, buyer or farmer location, poultry weight in kilograms, the amount of poultry and poultry products needed by buyers, and the types of poultry and poultry products needed by buyers. This paper discusses this key market information and the challenges faced due to the lack of it. To disseminate the identified market information requirements and to overcome these challenges, a web based platform is proposed as a solution to close the market information gap.

# INTRODUCTION

Agriculture has an important role in both food security improvement and advancing human development on the African continent (Conceição et al., 2016; Temba et al., 2016). Livestock farming is one of the major agricultural activities in the country and contributes towards achieving the development goals of the National Growth and Reduction of Poverty (NGRP) (Tanzanian Policy Document, 2010). The term livestock is normally defined as animals raised to produce milk, meat, eggs and wool.
Livestock includes beef and dairy cattle, swine, sheep, horses, goats, and poultry. The livestock sector contributes 18% of agricultural GDP and 4.7% of national GDP. Chickens contribute 16% of livestock GDP, 3% of agricultural GDP and 1% of national GDP. Poultry farming is hypothesized to support the livelihoods of 3.7 million households in Tanzania (Goromela, 2009; Msami, 2007), indicating the significant contribution of chicken to the national economy and social status (MoHSW, 2007). Currently, poultry is a large commercial industry compared to a few years ago, owing to the increased consumption of meat and eggs in recent years (International, 2010; Mohammad and Mohammed, 2014). Despite the larger demand for poultry products, which pushes farmers to produce more, the current market infrastructure does not support the linkage between consumer and producer (Hurrissa and Eshetu, 2003; Lwoga, 2010; Njombe et al., 2011); as a result, producers make losses because products do not reach customers in time. In Tanzania, it has been a challenge for poultry farmers and buyers to obtain market information, since there is no marketing tool to keep them updated on market information, and there is no well-coordinated system of data collection, analysis and dissemination harmonized between farmers, buyers and other important stakeholders (Njombe et al., 2011). Most poultry farmers and buyers rely on informal sources of market information, mainly family, friends, and neighbours (Mwakaje, 2010; Msoffe and Ngulube, 2016). Chickens and eggs are sold to neighbours or at local markets within the same village or villages nearby. These sales are made directly by the households. There are numerous open village markets in each region; for example, there are weekly markets in different districts in Tanzania (Kisungwe, 2012). Middlemen or traders from regional and urban markets often buy chickens at the local markets.
The current poultry markets are dominated by small traders who operate as village vendors, distant wholesalers, and retailers (International W., 2010). The market is ruled by middlemen and small-scale processors who act in their own interest by reducing the smallholders' share and increasing final prices (El-obeid, 2012; International, 2010; Msami, 2007). The ultimate goal of every poultry farmer is to make good sales at each harvest; unfortunately, a lot of poultry farmers get stuck at this point because they spend months raising and feeding birds, and when it is time to sell, they do not achieve many sales (Venture Magazine, 2018). There is in fact high demand for their products, but farmers lack market information for this aspect of their business. This study aimed at establishing a baseline of market information requirements for poultry farmers and buyers in rural and urban areas of Tanzania. The study then reports on the current methods for obtaining market information, the challenges resulting from the lack of market information, an analysis of the limitations of existing systems, and a proposed solution for improving market linkages for poultry farmers. The first part of this paper introduces an overview of poultry farming, the market information challenges faced by poultry farmers and buyers, the current situation regarding market information, and the significance of web applications in our daily lives. The second part describes the methodology used, including the study area, research design, sample size and sampling technique. The third part discusses the shortcomings of existing systems; this is followed by the results and discussion. The proposed solution for a reliable market linkage between poultry farmers and potential buyers is then given in detail. Lastly, the conclusion is presented.

# METHODOLOGY

The study was of a qualitative type.
This method was used because the research aims to answer 'what', 'how', and 'why' questions rather than the 'how many' or 'how much' questions answered by quantitative methods. Qualitative methods are good when you have a small number of respondents, which may be sufficient for understanding human perceptions, behaviour and attitudes; they are also most applicable in ordinary settings and give the researcher more power to control the process (Bricki and Green, 2007). A structured questionnaire coded into Open Data Kit was used to collect qualitative data about the current marketing situation, the required market information, and the challenges faced due to the lack of market information in three districts: Tanga City, Muheza and Korogwe. In addition, qualitative information was gathered through observation during the survey. In this section, the area of the study, research design, sample size and sampling technique are described as part of the methodology used to conduct the study.

## Description of the Study Area

The study site was Tanga Region, Tanzania, located at -5.07 latitude and 39.10 longitude, at an elevation of 22 meters above sea level. Tanga has a population of 224,876, making it one of the biggest cities in Tanzania. This area was selected due to its large poultry numbers as well as its market potential (International, 2010). The region is administratively divided into eight districts: Handeni, Kilindi, Korogwe, Lushoto, Pangani, Mkinga, Tanga and Muheza.

## Research Design

Data were collected from the study area over a period of one month, from January to February 2018. The period of one month was sufficient since the research was non-experimental and carried out at a single point in time, with data collected once; furthermore, the research was time sensitive, meaning that respondents would be required to adopt the technology within the next 12 months.
During the study, interviews were conducted with poultry farmers and buyers to find out their market information needs, through a structured questionnaire administered to them. Interviews provide information through verbal exchange and are a good tool for an overall understanding of what users do, how, and why (Law et al., 2011; Sadiq, 2010). Coding was done in Open Data Kit. Open Data Kit (ODK) is a free and open source set of tools which helps organizations author, field, and manage mobile data collection solutions; it involves three steps, which are designing a form, setting up a server, and connecting the device to that server (opendatakit.org, 2015). ODK automates data collection and is easy to use. It also uses the Android platform to take advantage of GPS and camera capabilities, which enables the collection of rich data for later offline analysis (Washington, 2010). Observation as a data collection method is a mode of inquiry to systematically collect information about different settings of a certain group, with the objective of better understanding the phenomena of interest in context (Fry et al., 2017; Walshe et al., 2012). Based on this definition, the observation technique was also used in this study, whereby the researcher went to the real environment of real users and observed the selling and buying processes without direct interference. During the study, a significant amount of time was spent visiting the farmers and buyers of the selected districts and observing the whole process of selling or buying poultry or poultry products. The process was done for fourteen days, two hours each day, in January and February 2018, at the Magandini and Muheza markets located within Tanga Region. This approach provided the opportunity to discover how farmers sell their products and how they get market information. It was observed that poultry farmers get market information through customers, friends and family.
This information is not reliable, which leads to poultry of the same type and weight being sold at different prices in the same place. It was also discovered that poultry farmers spend a lot of time and energy shouting whenever a person passes near their poultry, thinking that the person is a customer. They focus only on the people who pass there or who come to the market. Failure to reach more customers leads to a deficit of some of the products, such as eggs.

## Sample Size and Sampling Technique

The sample size in this study included 101 poultry farmers and 103 buyers, respondents from three districts of Tanga Region, namely Tanga City, Muheza and Korogwe. Simple random sampling was used as the sampling strategy to select poultry farmers and buyers so as to get a representative sample of the selected respondents. Simple random sampling means that every participant of the sample is selected from the study population in such a manner that the likelihood of being selected is equal for all members of the study population (Kanpur Shalabh, 2010). The strength of simple random sampling lies in its advantages of being representative of the population, simple to use, and free from bias and prejudice; furthermore, it needs only a minimum knowledge of the study population (DePersio, 2015). All respondents were given an equal chance of being involved in the study. For populations that are large and unknown, Cochran (1963) developed the equation below to yield a representative sample for proportions. This formula was used to estimate the poultry buyer and farmer sample sizes since the population was not only large but also unknown (Israel, 1992).

$n_{0} = \frac{Z^{2}pq}{e^{2}}$

where

1. $$n_{0}$$ = required sample size
2. $$Z$$ = the value on the Z table; at the 95% confidence level it is 1.96
3. $$p$$ = proportion of the population having the characteristic
4. $$q$$ = $$1 - p$$
5. $$e$$ = the margin of error that is acceptable.

In this study:

1. $$Z$$ = 1.96
2. $$p$$ = 0.5, because the population was unknown
3. $$e$$ = 0.1.

The sample size was 100 for poultry farmers and 100 for buyers.

# SHORTCOMINGS OF EXISTING SYSTEMS

There have been efforts by the government and private organizations to enhance market accessibility for livestock farmers. There are several applications for livestock, but they do not provide market linkage between poultry farmers and potential buyers. The livestock market monitoring system is one such application; it is accessible electronically in Tanzania through LINKS (http://www.lmistz.net) and is available for the whole of Africa. The livestock market monitoring system is a mechanism through which the collection, analysis and dissemination of information needed to help producers, middlemen and traders are organized and systematized. The limitations of this system are that the market information is available only on request via SMS (text) message or email, or sometimes via WorldSpace radio systems and very rarely on the internet; also, the updates of information on LINKS are not real time (“LINKS (Version: LINKSV3.042409_testBuild),” 2017).

Tanzania Livestock Identification and Traceability System (TANLITS). This system is for Tanzania only and is available at http://41.59.254.106:8080. It was developed to operationalize livestock identification, registration and traceability, and to promote access to markets and other related matters. The limitation of this system is that the information provided is not relevant to poultry farmers, is not updated on time, and is also difficult to access (“Livestock Traceability System (TANLITS) — Ministry of Agriculture Livestock and Fisheries,” 2017).

The Poultry Site is an application accessible through http://www.thepoultrysite.com. The application is accessible throughout the world and provides information about poultry, poultry feeders and poultry health.
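As a quick check, the Cochran formula from the Sample Size section can be evaluated with the study's values (Z = 1.96, p = q = 0.5, e = 0.1); the resulting minimum of about 96 respondents is consistent with the roughly 100 farmers and 100 buyers targeted. A minimal sketch:

```python
import math

def cochran_sample_size(z: float, p: float, e: float) -> int:
    """Cochran (1963) sample size for proportions: n0 = Z^2 * p * q / e^2."""
    q = 1.0 - p
    n0 = (z ** 2) * p * q / (e ** 2)
    return math.ceil(n0)  # round up to the next whole respondent

# Values used in this study: 95% confidence (Z = 1.96),
# p = 0.5 (population unknown), 10% margin of error.
n0 = cochran_sample_size(z=1.96, p=0.5, e=0.1)
print(n0)  # 97
```

The study rounded up further to an even 100 per group, which only makes the sample more conservative.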
The limitation of this system is that the information available is mostly about poultry health and treatment; it does not link poultry buyers and farmers in a market sense. The rating system (Green Planet Livestock), available at http://www.greenplanetlivestock.com.au/rating-system, is accessible worldwide. The limitation of this system is that it specialises in breeding Red Angus bulls and dams that are highly marketable for increasing genetic diversity in studs and for outcross purposes in regular herds; it does not provide any market information. The direct livestock marketing system, available at http://www.dlms.ca/default.aspx, only deals with cows, not the poultry business. All these limitations show that “The marketing system is not well developed to enhance efficient marketing, grading and standardization, market information system, promotional activities and planned marketing which are all the attributes of efficient marketing are not adequately developed to enhance efficiency in the continuous flow of livestock from production areas to chain of markets through livestock routes” (Hurrissa and Eshetu, 2003).

# RESULTS AND DISCUSSION

The analysis of the collected data was done using radar chart visualization together with the support of descriptive statistics. Radar charts are used to compare two or more items or groups on various features or characteristics (Media, 2012). Using radar charts we were able to understand, compare and get a clear meaning of all the collected data from the two groups (farmers and buyers); furthermore, they have been a useful tool to simplify comparisons between the results obtained (Chaumillon et al., 2017). A radar chart is a good method if you have fewer than three groups and fewer than ten factors (Media, 2012).

## Profile of the Respondents

Table 1 shows the detailed distribution of poultry farmers and buyers in the selected districts.
Most of the respondents were aged between 20 and 66 years, with a mean age of 39 years. The distribution by gender of all respondents was 54% women and 46% men.

Table 1. Selected Poultry Farmers and Buyers, Tanga Region

| S/N | District | No of Poultry Farmers | No of Poultry Buyers |
|-----|----------|-----------------------|----------------------|
| 1 | Tanga City | 35 | 35 |
| 2 | Muheza | 33 | 34 |
| 3 | Korogwe | 33 | 34 |
| 4 | Total | 101 (50%) | 103 (50%) |
| 5 | Grand Total | 204 (100%) | |

## Poultry Farmer Market Information Requirements

As explained previously in this study, poultry farmers do not achieve many sales, not because there is a shortage of demand for their products, but because they lack market information on aspects of their business; this has also been established by previous studies (Abate et al., 2003; International, 2010; Msoffe and Ngulube, 2016). Poultry farmers need continuous information about the market outlook and spot prices (El-obeid, 2012). Apart from lack of poultry market information, there are other factors which also contribute to low sales, such as lack of quality chicks, poor market infrastructure, low quality feeds, poultry diseases and middlemen (Hmad, 2005; Hurrissa and Eshetu, 2003; Mohammad and Mohammed, 2014). The traditional market is ruled by middlemen and small-scale processors that act in their own interest by reducing the smallholders’ share and increasing final prices (El-obeid, 2012). According to Msami (2007), small-scale farmers usually depend on itinerant middlemen to sell their produce and often end up being denied fair prices. This market information is very important to both poultry farmers and buyers. It will help them to make informed decisions and give them the right direction on where they can go. Thus, a market information system is required that allows stakeholders to get information on quantity, price, etc. (Hurrissa and Eshetu, 2003).
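The radar-chart comparison used in the analysis can be sketched in code. The snippet below is a hypothetical illustration (the factor names and scores are invented, not the study's data): it computes the closed polygon vertices for two groups over a set of factors, which is the data-preparation step behind any radar plot.

```python
import math

def radar_vertices(scores):
    """Place each factor score on its own axis, evenly spaced around a
    circle, and close the polygon by repeating the first point."""
    n = len(scores)
    angles = [2 * math.pi * i / n for i in range(n)]
    pts = [(s * math.cos(a), s * math.sin(a)) for s, a in zip(scores, angles)]
    return pts + pts[:1]  # closed loop, ready for drawing

# Hypothetical factor scores (0-5) for the two groups over five factors:
factors = ["price", "location", "amount", "type", "quality"]
farmers = [5, 4, 3, 4, 2]
buyers = [4, 5, 4, 3, 3]

farmer_poly = radar_vertices(farmers)
buyer_poly = radar_vertices(buyers)
print(len(farmer_poly))  # 6 vertices: five factors plus the closing point
```

Overlaying the two polygons on the same axes is what makes the farmer/buyer comparison visually immediate.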
## Challenges Faced by Poultry Farmers due to Lack of Market Information

Figure 5 shows the results of the survey on challenges faced by poultry farmers due to lack of market information. Most of the poultry farmers sell at a cheaper price only because of the absence of market information. It was also discovered that poultry farmers fail to grow their business because they focus only on the friends around them. They also incur losses: if eggs stay too long without being sold, they get spoiled. Poultry farmers fail to decide how many chickens to keep and thus make uninformed decisions. Lwoga (2010) said that “quick access to relevant knowledge and information enable farmers to make informed decision regarding their agricultural production activities, marketing of their agricultural produce for better profit, and benefitting from health and diseases prevention”. Failure to get market information leads to the rise of middlemen, who buy the poultry at a cheaper price and sell it at a higher price. Poultry farmers spend a lot of time finding customers and fail to perform other activities.

# PROPOSED SOLUTION

To overcome the poultry market information gap, this study aims to develop a web-based platform for poultry market linkage, with more focus on its usability and user experience, and to assess its usability. In this study WordPress will be used to develop the web-based platform for reliable market linkage between farmers and potential buyers. WordPress is an online, open-source website creation tool. It is based on Hypertext Preprocessor (PHP) and My Structured Query Language (MySQL). WordPress has been chosen to develop this platform because of the power of plugins, which allow easy customization and modification. It has APIs that make it possible for users to create their own plugins and extend functionality (Fernandes, 2017; Hisar, 2015). HTML will be used to format the interface.
The proposed web-based platform will be developed in such a way that it will provide all the required market information, such as price, seller location, amount of poultry, types of poultry, brand, etc. Farmers will be able to advertise their products on the website by providing details of the product together with an image or video of the product. Buyers (farmers, individual persons, shops, catering services, supermarkets, hotels, industries, butchers and other farmers) will be able to make orders and purchase poultry and poultry products from the platform. Upon a buyer selecting a particular product, all the details regarding that product will be shown. The web-based platform will be available in English and Swahili. From a buyer’s point of view, the use of a web platform for getting market information offers a wide number of benefits, comprising efficiency, convenience, richer and participative information, a broader selection of products, competitive pricing, cost reduction, and product diversity (Tiago and Veríssimo, 2014). Through the proposed system, buyers will also be able to approach sellers, or broadcast their needs by advertising them with specifications including amount, type, price, etc. The aim of the web forum is to ensure both seller and buyer offer the precise information needed during the conversation between the farmer and the buyer.

## Usability Testing

Usability testing can be defined as a method in user-centred design which is used to assess a product by testing it with real users. Furthermore, it assists us to get direct feedback on how actual users work with a product (Bergstrom and Schall, 2014). Nielsen (2012) defines usability by using five quality components, which are learnability, efficiency, memorability, errors and satisfaction. We can say that usability is quality. To be sure the product will be good and will work well without any difficulties or confusion for the user, usability testing will be conducted.
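The usability measures planned for the testing sessions (tasks completed, incomplete tasks, satisfaction) can be summarised straightforwardly once session logs exist. A minimal sketch, with invented session records for illustration only:

```python
# Hypothetical usability-test records: one dict per participant session.
sessions = [
    {"tasks_total": 5, "tasks_completed": 5, "satisfaction": 4.5},
    {"tasks_total": 5, "tasks_completed": 3, "satisfaction": 3.0},
    {"tasks_total": 5, "tasks_completed": 4, "satisfaction": 4.0},
]

def completion_rate(sessions):
    """Overall share of tasks completed correctly across all sessions."""
    done = sum(s["tasks_completed"] for s in sessions)
    total = sum(s["tasks_total"] for s in sessions)
    return done / total

def mean_satisfaction(sessions):
    """Average self-reported satisfaction (e.g. on a 1-5 scale)."""
    return sum(s["satisfaction"] for s in sessions) / len(sessions)

print(f"completion rate: {completion_rate(sessions):.0%}")      # 80%
print(f"mean satisfaction: {mean_satisfaction(sessions):.2f}")  # 3.83
```

Aggregates like these complement, rather than replace, the qualitative think-aloud comments captured by the moderator.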
Real users will be given the developed website to evaluate. Onsite and laboratory usability testing will be conducted with poultry farmers and buyers using a high-fidelity prototype of the web-based platform located on the test administrator’s laptop. During the testing sessions a moderator will be present in the room to capture and record each participant’s navigation choices, number of tasks completed correctly, number of incomplete tasks, comments, overall satisfaction rate, questions and feedback. Their performance will be assessed through the think-aloud technique and questionnaires to identify any concerns.

## Framework of the Proposed Solution

This framework was derived from the data analysis: poultry farmers and buyers will use the web-based platform to access the required market information and overcome the mentioned challenges caused by lack of market information. Poultry farmers and buyers will use smartphones, tablets or computers to access the web-based platform; the advancement of technology has brought low-cost smartphones which are affordable to the majority of poultry farmers in Tanzania (Mussa et al., 2016). The proposed solution will remove their locational dependence; instead of relying on a geographical or locational sales force, they will also be able to reach faraway customers, since the World Wide Web makes the world a single village. In the agricultural era and, until recently, even in developing countries, buyers and businesses bought products close to their physical location and had them adapted to their needs (Sharma and Sheth, 2004), and a good way of overcoming this is through web systems. Buyers need access to information 24 hours a day without any limitation. Surveys show that most customers desire 24-hour access to information, communications, transactions and basic customer service (Sharma and Sheth, 2004).

# CONCLUSION

This research work was done so as to provide poultry market linkages.
During the study, interviews were conducted to find the required market information and the challenges faced due to lack of market information. Data were analysed using radar chart visualization together with the support of descriptive statistics. The results show that price, customer location, kinds of poultry products needed, amounts of poultry needed and types of poultry needed were the only requirements mentioned by poultry farmers. It was revealed that the medium most used by poultry farmers to advertise and to get market information about their poultry and poultry products is friends (neighbours), followed by the internet. It was also discovered that 69% of farmers advertise their poultry and poultry products, while 31% do not advertise their products. Due to lack of market information, poultry buyers make uninformed decisions: they buy low-quality poultry products at high prices and fail to achieve customer satisfaction, for example in hotels. The results show clearly that we need to link poultry farmers with potential buyers. There are several websites for livestock, but they do not provide market linkage between poultry farmers and potential buyers. Furthermore, they have limitations which show that the marketing system is not well developed to enhance efficient marketing. The proposed solution aims to provide reliable market linkage by developing a web-based platform with more focus on its usability. The study aims to ensure that usability testing with real users is conducted to ensure the system is usable for poultry market linkage. This is important because, even if we provide and improve poultry feeds and poultry medicine to farmers and improve productivity, we cannot transform farmers unless we secure a market for them; so linking farmers to markets is not a luxury, it is a must (Mammo, 2015).
Therefore this study recommends that, as we solve the problem of poultry market information, the following factors also need to be taken into consideration, because they also hinder farmers and buyers from having access to market information: lack of knowledge, lack of awareness, ignorance and poverty. Personal and economic aspects might also prevent farmers from accessing market information (Lwoga, 2010; Msoffe and Ngulube, 2016).

REFERENCES

• Abate, T., Berhanu, T., Bogale, S. and Worku, D. (2003). Potential of forage legumes to replace the traditional fallow-barley rotation system in the cool highlands of Bale. In: Challenges and Opportunities of Livestock Marketing in Ethiopia. Proceedings of the 10th Annual Conference of the Ethiopian Society of Animal Production (ESAP) Held in August 21-23. Addis Ababa, Ethiopia, 265–268.
• Biswas, A. and Krishnan, R. (2004). The Internet’s impact on marketing: Introduction to the JBR special issue on “Marketing on the web - Behavioral, strategy and practices and public policy.” Journal of Business Research, 57(7), 681–684. https://doi.org/10.1016/S0148-2963(02)00346-6
• Bricki, N. and Green, J. (2007). A Guide to Using Qualitative Research Methodology. Medecins Sans Frontieres, 11–13. https://doi.org/10.1109/PROC.1978.11033
• Byrne, J., Heavey, C. and Byrne, P. J. (2010). A review of Web-based simulation and supporting tools. Simulation Modelling Practice and Theory, 18(3), 253–276. https://doi.org/10.1016/j.simpat.2009.09.013
• Chaumillon, R., Romeas, T., Paillard, C., Bernardin, D., Giraudet, G., Bouchard, J. F. and Faubert, J. (2017). Enhancing data visualisation to capture the simulator sickness phenomenon: On the usefulness of radar charts. Data in Brief, 13, 301–305. https://doi.org/10.1016/j.dib.2017.05.051
• Chawla, S., Srivastava, S. and Bedi, P. (2017). Improving the quality of web applications with web specific goal driven requirements engineering.
International Journal of System Assurance Engineering and Management, 8(s1), 91–103. https://doi.org/10.1007/s13198-015-0385-z
• Conceição, P., Levine, S., Lipton, M. and Warren-Rodríguez, A. (2016). Toward a food secure future: Ensuring food security for sustainable human development in Sub-Saharan Africa. Food Policy, 60, 1–9. https://doi.org/10.1016/j.foodpol.2016.02.003
• DePersio, G. (2015). Simple Random Sampling and Systematic Sampling, 3–15.
• El-obeid, S. (2012). Poultry producers’ perceptions of changing market conditions, (745).
• Fry, M., Curtis, K., Considine, J. and Shaban, R. Z. (2017). Using observation to collect data in emergency research. Australasian Emergency Nursing Journal, 20(1), 25–30. https://doi.org/10.1016/j.aenj.2017.01.001
• Goromela, E. H. (2009). Feeding and Management Strategies for Rural Poultry Production in Central Tanzania.
• Hmad, S. H. a. (2005). Marketing of Commercial Poultry in Faisalabad City (Pakistan). Journal of Agriculture & Social Sciences, 1(4), 327–331.
• Hurrissa and Eshetu. (2003). Challenges and Opportunities of Livestock Marketing in Ethiopia. Proc. 10th Annual Conference of the Ethiopian Society of Animal Production (ESAP), 265–268.
• International, W. (2010). Partnership for Safe Poultry in Kenya (PSPK) Program: Value Chain Analysis of Poultry in Tanzania.
• Israel, G. (1992). Determining Sample Size. University of Florida Cooperative Extension Services, Institute of Food and Agriculture Sciences, 85(3), 108–113. https://doi.org/10.4039/Ent85108-3
• Kannan, P. K. and Li, H. “Alice.” (2017). Digital marketing: A framework, review and research agenda. International Journal of Research in Marketing, 34(1), 22–45. https://doi.org/10.1016/j.ijresmar.2016.11.006
• Kanpur Shalabh. (2010). Simple Random Sampling, 1–23.
• Kisungwe, I. (2012).
Commercialization of Chicken Production and Marketing in the Central Corridor. SLC Sector Development Strategy, 1–14.
• Law, M., Stewart, D., Pollock, N., Letts, L., Bosch, J. and Westmorland, M. (2011). Guidelines for Critical Review of Qualitative Studies, Based on Guidelines for Critical Review Form: Qualitative Studies. Design, 91(4), 357–362. Retrieved from http://www.usc.edu/hsc/ebnet/res/Guidelines.pdf
• Lim, W. M. (2017). Online group buying: Some insights from the business-to-business perspective. Industrial Marketing Management, 65(April), 182–193. https://doi.org/10.1016/j.indmarman.2017.03.011
• LINKS (Version: LINKSV3.042409_testBuild). (2017). Retrieved on August 19, 2017 from http://www.lmistz.net/Pages/Public/Home.aspx
• Livestock Traceability System (TANLITS) — Ministry of Agriculture Livestock and Fisheries. (2017). Retrieved on August 19, 2017 from http://www.kilimo.go.tz/index.php/en/stakeholders/view/livestock-traceability-system-tanlits
• Lwoga, E. T. (2010). Bridging the Agricultural Knowledge and Information Divide: The Case of Selected Telecenters and Rural Radio in Tanzania. The Electronic Journal of Information Systems in Developing Countries, 43(1), 1–14. https://doi.org/10.1002/j.1681-4835.2010.tb00310.x
• Mammo, Y. (2015). ICTs in Linking Farmers to Markets: Innovative Mobile Applications and Lessons Learned from the Past and the Future.
• Media, E. (2012). Effective Use of Radar Charts, 14(4), 22–28.
• Mohammad Khairu Islam, Mohammed Forhad Uddin, M. M. A. (2014). Challenges and Prospects of Poultry Industry in Bangladesh. European Journal of Business and Management, 6(7), 116–127.
• MoHSW. (2007). United Republic of Tanzania, (July), 1–36. https://doi.org/10.1787/9789264177949-147-en
• Msami, H. (2007). Poultry Sector Country Review: Tanzania. FAO Poultry Sector Country Review, 61.
• Msoffe, G. and Ngulube, P. (2016). Farmers’ access to poultry management information in selected rural areas of Tanzania.
Library and Information Science Research, 38(3), 265–271. https://doi.org/10.1016/j.lisr.2016.08.004
• Mussa, M., Kipanyula, M. J., Angello, C. and Sanga, C. A. (2016). Evaluation of Livestock Information Network Knowledge System (LINKS) based on User Satisfaction. International Journal of Information and Communication Technology Research, 6(8), 115–130.
• Njombe, A. P., Msanga, Y., Mbwambo, N. and Nemes, M. (2011). The Tanzania Dairy Industry: Status. 7th African Dairy Conference and Exhibition, MovenPick Palm Hotel, 25–27 May 2011.
• O’Keefe, R. M. and McEachern, T. (1998). Web-based Consumer Decision Support Systems. Communications of the ACM, 41(3), 71–78. https://doi.org/10.1145/272287.272300
• opendatakit.org. (2015). Open Data Kit. Retrieved April 19, 2018, from https://opendatakit.org/
• Panthi, V. and Mohapatra, D. P. (2017). An approach for Dynamic Web Application Testing using. International Journal of System Assurance Engineering and Management, 8(s2), 1704–1716. https://doi.org/10.1007/s13198-017-0646-0
• Sadiq, M. (2010). Modeling the Non-functional Requirements in the Context of Usability, Performance, Safety and Security, (March), 73.
• Sharma, A. and Sheth, J. N. (2004). Web-based marketing: The coming revolution in marketing thought and strategy. Journal of Business Research, 57, 696–702. https://doi.org/10.1016/S0148-2963(02)00350-8
• Stephen, A. T. (2016). The role of digital and social media marketing in consumer behavior. Current Opinion in Psychology, 10, 17–21. https://doi.org/10.1016/j.copsyc.2015.10.016
• Tanzanian Policy Document. (2010). The United Republic of Tanzania, Ministry of Livestock and Fisheries Development: Livestock Sector Development Strategy. Retrieved from http://www.tanzania.go.tz/egov_uploads/documents/d
• Temba, B. A., Kajuna, F. K., Pango, G. S., & Benard, R. (2016).
Accessibility and use of information and communication tools among farmers for improving chicken production in Morogoro municipality, Tanzania. Livestock Research for Rural Development, 28(1). • Tiago, M. T. P. M. B., & Veríssimo, J. M. C. (2014). Digital marketing and social media: Why bother? Business Horizons, 57(6), 703–708. https://doi.org/10.1016/j.bushor.2014.07.002 • Venture Magazine. (2018). Top 10 Marketing Ideas for selling Poultry Birds and Eggs Fast | ProfitableVenture. Retrieved on March 15, 2018 from https://www.profitableventure.com/poultry-marketing-strategies/ • Walshe, C., Ewing, G., & Griffiths, J. (2012). Using observation as a data collection method. Palliative Medicine, 26(8), 1048–1054. https://doi.org/10.1177/0269216311432897 • Washington, B. (2010). Open Data Kit, 3–4.
1. ## GRE

Hi! I have to take the GRE in a few weeks, and although I thought I had prepared, I see that I need a lot more help. That is why I signed up for this website.

Debbie

2. ## Re: GRE

The GRE math paper consists of calculus and algebra as its main parts. You can prepare for it with the Schaum's Outline Series and Thompson's Calculus in a few weeks.
Simple Website Monitoring using Clojure

Ad-hoc measurement of page load times across multiple web servers can be a drag. Quite often you end up with a number of screen windows running bash timed curl loops with file redirects and some grep/awk magic. This tends to work OK until you need to get a bit more sophisticated, for example to compare results or take samples at a fixed frequency. Here we give a quick-and-dirty Clojure script that takes that curl loop just a bit further: from a single command you can monitor response times across an arbitrary number of endpoints, retrieving measurements at given time intervals and tracking results in real time. The script itself is trivial; most of the functionality is due to the Quartzite scheduling library. You can grab the source at: https://github.com/SupplyFrame/grunf

example usage:

lein run -m grunf.bin '(["http://www.google.com" "http://finance.yahoo.com" "http://www.bing.com/news"] 1000)'

(1368072742623 http://www.google.com 225.029)
(1368072743236 http://www.bing.com/news 839.584)
(1368072743457 http://finance.yahoo.com 1059.564)
...

Have fun!
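For comparison, the same poll-everything-on-a-timer idea can be sketched in plain Python with no scheduling library. The fetch function is injected, so the loop is equally happy timing real HTTP requests or a stub; the URLs and the `fake_fetch` stand-in below are placeholders, not part of grunf.

```python
import time

def timed(fn):
    """Run fn once and return its wall-clock duration in milliseconds."""
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000.0

def monitor(endpoints, fetch, interval_s, rounds):
    """Poll every endpoint each round, printing (timestamp-ms, url, ms)."""
    results = []
    for _ in range(rounds):
        for url in endpoints:
            ms = timed(lambda: fetch(url))
            record = (int(time.time() * 1000), url, round(ms, 3))
            results.append(record)
            print(record)
        time.sleep(interval_s)
    return results

# Real use would pass e.g. a urllib-based fetch; a stub keeps this runnable:
def fake_fetch(url):
    time.sleep(0.01)  # stand-in for network latency

rows = monitor(["http://example.com/a", "http://example.com/b"],
               fake_fetch, interval_s=0.05, rounds=2)
```

The Clojure version's win over this sketch is exactly the Quartzite scheduling: fixed-frequency sampling that does not drift with request duration.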
Free Essay

# Investments

Submitted By iulianaa
Words 166919
Pages 668

List of Frequently Used Symbols and Notation

A text such as Intermediate Financial Theory is, by nature, relatively notation intensive. We have adopted a strategy to minimize the notational burden within each individual chapter at the cost of being, at times, inconsistent in our use of symbols across chapters. We list here a set of symbols regularly used with their specific meaning. At times, however, we have found it more practical to use some of the listed symbols to represent a different concept. In other instances, clarity required making the symbolic representation more precise (e.g., by being more specific as to the time dimension of an interest rate).

Roman Alphabet

a: Amount invested in the risky asset; in Chapter 14, fraction of wealth invested in the risky asset or portfolio
A^T: Transpose of the matrix (or vector) A
c: Consumption; in Chapter 14 only, consumption is represented by C, while c represents ln C
c_θ^k: Consumption of agent k in state of nature θ
CE: Certainty equivalent
C_A: Price of an American call option
C_E: Price of a European call option
d: Dividend rate or amount
Δ: Number of shares in the replicating portfolio (Chapter xx)
E: The expectations operator
e_θ^k: Endowment of agent k in state of nature θ
f: Futures position (Chapter 16)
p^f: Price of a futures contract (Chapter 16)
F, G: Cumulative distribution functions associated with densities f, g
K: The strike or exercise price of an option
K(x̃): Kurtosis of the random variable x̃
L: A lottery
L: Lagrangian
m: Pricing kernel
M: The market portfolio
MU_θ^k: Marginal utility of agent k in state θ
p: Price of an arbitrary asset
P: Measure of Absolute Prudence
q: Arrow-Debreu price
q^b: Price of a risk-free discount bond, occasionally denoted p_rf
q^e: Price of equity
r_f: Rate of return on a risk-free asset
R_f: Gross rate of return on a risk-free asset
r̃: Rate of return on a risky asset
R̃: Gross rate of return on a risky asset
R_A: Absolute risk aversion coefficient
R_R: Relative risk aversion coefficient
s: Usually denotes the amount saved
S: In the context of discussing options, used to denote the price of the underlying stock
S(x̃): Skewness of the random variable x̃
T: Transition matrix
U: Utility function
U: von Neumann-Morgenstern utility function
V: Usually denotes the variance-covariance matrix of asset returns; occasionally used as another utility function symbol; may also signify value, as in V_P, the value of portfolio P, or V_F, the value of the firm
w_i: Portfolio weight of asset i in a given portfolio
Y_0: Initial wealth

Greek Alphabet

α: Intercept coefficient in the market model (alpha)
β: The slope coefficient in the market model (beta)
δ: Time discount factor
η: Elasticity
λ: Lagrange multiplier
μ: Mean
π_θ: State probability of state θ
π_θ^RN: Risk-neutral probability of state θ
Π: Risk premium
ρ(x̃, ỹ): Correlation of the random variables x̃ and ỹ
ρ: Elasticity of intertemporal substitution (Chapter 14)
σ: Standard deviation
σ_ij: Covariance between random variables i and j
θ: Index for state of nature
Ω: Rate of depreciation of physical capital
ψ: Compensating precautionary premium

Numerals and Other Terms

1: Vector of ones
≻: Is strictly preferred to
⪰: Is preferred to (non-strictly, that is, allowing for indifference)
GBM: Geometric Brownian Motion stochastic process
FSD: First-order stochastic dominance
SSD: Second-order stochastic dominance

ing concern of modern finance for the valuation of risky cash flows: Intermediate Financial Theory’s main focus is thus on asset pricing. (In addition, we exclusively consider discrete time methodologies.) The new Chapter 2 makes clear this emphasis while simultaneously stressing that asset pricing does not represent the totality of modern finance. This discussion then leads to a new structuring of the book into five parts, and a new ordering of the various chapters.
Our goal here is to make a sharper distinction between valuation approaches that rely on equilibrium principles and those based on arbitrage considerations. We have also reorganized the treatment of Arrow-Debreu pricing to make clear how it accommodates both perspectives. Finally, a new chapter entitled “Portfolio Management in the Long Run” is included that covers recent developments that we view as especially relevant for the contemporary portfolio manager. The two appendices providing brief overviews of option pricing and continuous time valuation methods are now assigned to the text website.

Over the years, we have benefited from numerous discussions with colleagues over issues related to the material included in this book. We are especially grateful to Paolo Siconolfi, Columbia University, Rajnish Mehra, U. of California at Santa Barbara, and Erwan Morellec, University of Lausanne, the latter for his contribution to the corporate finance review of Chapter 2. We are also indebted to several generations of teaching assistants – François Christen, Philippe Gilliard, Tomas Hricko, Aydin Akgun, Paul Ehling, Oleksandra Hubal and Lukas Schmid – and of MBF students at the University of Lausanne who have participated in the shaping-up of this material. Their questions, corrections and comments have led to a continuous questioning of the approach we have adopted and have dramatically increased the usefulness of this text. Finally we reiterate our thanks to the Fondation du 450ème of the University of Lausanne for providing “seed financing” for this project.

Jean-Pierre Danthine, Lausanne, Switzerland
John B. Donaldson, New York City

N’estime l’argent ni plus ni moins qu’il ne vaut : c’est un bon serviteur et un mauvais maître (Value money neither more nor less than it is worth: It is a good servant and a bad master)
Alexandre Dumas, fils, La Dame aux Camélias (Préface)

Part I: Introduction

Chapter 1: On the Role of Financial Markets and Institutions

1.1 Finance: The Time Dimension

Why do we need financial markets and institutions? We chose to address this question as our introduction to this text on financial theory. In doing so we touch on some of the most difficult issues in finance and introduce concepts that will eventually require extensive development. Our purpose here is to phrase this question as an appropriate background for the study of the more technical issues that will occupy us at length. We also want to introduce some important elements of the necessary terminology. We ask the reader’s patience as most of the sometimes-difficult material introduced here will be taken up in more detail in the following chapters.

A financial system is a set of institutions and markets permitting the exchange of contracts and the provision of services for the purpose of allowing the income and consumption streams of economic agents to be desynchronized – that is, made less similar. It can, in fact, be argued that the primary function of the financial system is to permit such desynchronization. There are two dimensions to this function: the time dimension and the risk dimension. Let us start with time. Why is it useful to dissociate consumption and income across time? Two reasons come immediately to mind. First, and somewhat trivially, income is typically received at discrete dates, say monthly, while it is customary to wish to consume continuously (i.e., every day).
Second, and more importantly, consumption spending defines a standard of living and most individuals find it difficult to alter their standard of living from month to month or even from year to year. There is a general, if not universal, desire for a smooth consumption stream. Because it deeply affects everyone, the most important manifestation of this desire is the need to save (consumption smaller than income) for retirement so as to permit a consumption stream in excess of income (dissaving) after retirement begins. The lifecycle patterns of income generation and consumption spending are not identical, and the latter must be created from the former.

The same considerations apply to shorter horizons. Seasonal patterns of consumption and income, for example, need not be identical. Certain individuals (car salespersons, department store salespersons) may experience variations in income arising from seasonal events (e.g., most new cars are purchased in the spring and summer), which they do not like to see transmitted to their ability to consume. There is also the problem created by temporary layoffs due to business cycle fluctuations. While temporarily laid off and without substantial income, workers do not want their family’s consumption to be severely reduced.

Box 1.1 Representing Preference for Smoothness

The preference for a smooth consumption stream has a natural counterpart in the form of the utility function, U(·), typically used to represent the relative benefit a consumer receives from a specific consumption bundle. Suppose the representative individual consumes a single consumption good (or a basket of goods) in each of two periods, now and tomorrow. Let c1 denote today’s consumption level and c2 tomorrow’s, and let U(c1) + U(c2) represent the level of utility (benefit) obtained from a given consumption stream (c1, c2).
Preference for consumption smoothness must mean, for instance, that the consumption stream (c1, c2) = (4, 4) is preferred to the alternative (c1, c2) = (3, 5), or

U(4) + U(4) > U(3) + U(5).

Dividing both sides of the inequality by 2, this implies

U(4) > (1/2)U(3) + (1/2)U(5).

As shown in Figure 1.1, when generalized to all possible alternative consumption pairs, this property implies that the function U(·) has the rounded shape that we associate with the term "strict concavity."

Insert Figure 1.1 about here

Furthermore, and this is quite crucial for the growth process, some people (entrepreneurs, in particular) are willing to accept a relatively small income (but not consumption!) for a period of time in exchange for the prospect of high returns (and presumably high income) in the future. They are operating a sort of 'arbitrage' over time. This does not disprove their desire for smooth consumption; rather, they see opportunities that lead them to accept what is formally a low income level initially, against the prospect of a higher income level later (followed by a zero income level when they retire). They are investors who, typically, do not have enough liquid assets to finance their projects and, as a result, need to raise capital by borrowing or by selling shares.

Therefore, the first key element in finance is time. In a timeless world, there would be no assets, no financial transactions (although money would be used, it would have only a transaction function), and no financial markets or institutions. The very notion of a (financial) contract implies a time dimension. Asset holding permits the desynchronization of consumption and income streams. The peasant putting aside seeds, the miser burying his gold, or the grandmother putting a few hundred dollar bills under her mattress are all desynchronizing their consumption and income, and in doing so, presumably seeking a higher level of well-being for themselves.
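Returning to Box 1.1, its smoothness inequality can be checked numerically for any strictly concave utility. The sketch below uses log utility as one illustrative concave candidate (our choice, not mandated by the text), and also computes the certainty-equivalent reading of the same inequality that reappears in Box 1.2.

```python
import math

# Box 1.1: U(4) + U(4) > U(3) + U(5) for a strictly concave utility.
# Log utility is an illustrative choice, not prescribed by the text.
U, U_inv = math.log, math.exp

assert U(4) + U(4) > U(3) + U(5)        # (4, 4) preferred to (3, 5)
assert U(4) > 0.5 * U(3) + 0.5 * U(5)   # same inequality, divided by 2

# A linear (smoothness-indifferent) utility shows no such preference:
V = lambda c: c
assert V(4) + V(4) == V(3) + V(5)

# Read as a 50/50 gamble between 3 and 5 (Box 1.2's reinterpretation),
# the sure consumption a concave agent would accept instead is below
# the expected value of 4:
certainty_equivalent = U_inv(0.5 * U(3) + 0.5 * U(5))  # sqrt(15), about 3.873
assert certainty_equivalent < 4
```

The gap between 4 and the certainty equivalent is a risk premium: the amount such an agent would pay to replace the uneven stream (3, 5) with the smooth one (4, 4).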
A fully developed financial system should also have the property of fulfilling this same function efficiently. By that we mean that the financial system should provide versatile and diverse instruments to accommodate the widely differing needs of savers and borrowers in terms of size (many small lenders, a few big borrowers), timing and maturity of loans (how to finance long-term projects with short-term money), and the liquidity characteristics of instruments (precautionary saving cannot be tied up permanently). In other words, the elements composing the financial system should aim at matching as perfectly as possible the diverse financing needs of different economic agents.

1.2 Desynchronization: The Risk Dimension

We argued above that time is of the essence in finance. When we talk of the importance of time in economic decisions, we think in particular of the relevance of choices involving the present versus the future. But the future is, by essence, uncertain: Financial decisions with implications (payouts) in the future are necessarily risky. Time and risk are inseparable. This is why risk is the second key word in finance. For the moment let us compress the time dimension into the setting of a "Now and Then" (present vs. future) economy.
The typical individual is motivated by the desire to smooth consumption between "Now" and "Then." This implies a desire to identify consumption opportunities that are as smooth as possible among the different possibilities that may arise "Then." In other words, ceteris paribus, most individuals would like to guarantee their family the same standard of living whatever events transpire tomorrow: whether they are sick or healthy; unemployed or working; confronted with bright or poor investment opportunities; fortunate or hit by unfavorable accidental events.1 This characteristic of preferences is generally described as "aversion to risk." A productive way to start thinking about this issue is to introduce the notion of states of nature. A state of nature is a complete description of a possible scenario for the future across all the dimensions relevant for the problem at hand. In a "Now and Then" economy, all possible future events can be represented by an exhaustive list of states of nature or states of the world. We can thus extend our former argument for smoothing consumption across time by noting that the typical individual would also like to experience similar consumption levels across all future states of nature, whether good or bad. An efficient financial system offers ways for savers to reduce or eliminate, at a fair price, the risks they are not willing to bear (risk shifting). Fire insurance contracts eliminate the financial risk of fire, while put contracts can prevent the loss in wealth associated with a stock's price declining below a predetermined level, to mention two examples. The financial system also makes it possible to obtain relatively safe aggregate returns from a large number of small, relatively risky investments. This is the process of diversification.
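The diversification claim above (that pooling many small, independent risky investments yields a relatively safe aggregate return) can be illustrated with a short simulation; the payoff distribution and all numbers below are our illustrative assumptions.

```python
# Diversification sketch: the standard deviation of an equal-weighted
# average of n independent, identically distributed risky payoffs
# shrinks like 1/sqrt(n). All numbers are illustrative assumptions.
import random
import statistics

random.seed(0)  # fixed seed so the experiment is reproducible

def portfolio_std(n, trials=20_000):
    """Std. dev. of the average payoff of n independent risky bets,
    each with mean payoff 1.0 and standard deviation 0.5."""
    samples = [
        sum(random.gauss(1.0, 0.5) for _ in range(n)) / n
        for _ in range(trials)
    ]
    return statistics.pstdev(samples)

lone = portfolio_std(1)     # close to 0.5: one risky investment alone
pooled = portfolio_std(25)  # close to 0.1: twenty-five of them pooled

# Pooling 25 independent risks cuts the risk by roughly a factor of 5.
assert pooled < lone / 3
```

Quadrupling the number of independent investments halves the standard deviation of the average payoff; this square-root arithmetic is the statistical engine behind diversification.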
By permitting economic agents to diversify, to insure, and to hedge their risks, an efficient financial system fulfills the function of redistributing purchasing power not only over time, but also across states of nature.

1 "Ceteris paribus" is the Latin expression for "everything else maintained equal." It is part of the common language in economics.

Box 1.2 Representing Risk Aversion

Let us reinterpret the two-date consumption stream (c1, c2) of Box 1.1 as the consumption levels attained "Then" or "Tomorrow" in two alternative, equally likely, states of the world. The desire for a smooth consumption stream across the two states, which we associate with risk aversion, is obviously represented by the same inequality

U(4) > (1/2)U(3) + (1/2)U(5),

and it implies the same general shape for the utility function. In other words, assuming, plausibly, that decision makers are risk averse, an assumption in conformity with most of financial theory, implies that the utility functions used to represent agents' preferences are strictly concave.

1.3 The Screening and Monitoring Functions of the Financial System

The business of desynchronizing consumption from income streams across time and states of nature is often more complex than our initial description may suggest. If time implies uncertainty, uncertainty may imply not only risk, but often asymmetric information as well. By this term, we mean situations where the individuals involved have different information, with some being potentially better informed than others. How can a saver be assured that he will be able to find a borrower with a good ability to repay (the borrower himself knows more about this, but he may not wish to reveal all he knows), or an investor with a good project, yielding the most attractive return for him and hopefully for society as well? Again, the investor is likely to have a better understanding of the project's prospects and of his own motivation to carry it through.
What do "good" and "most attractive" mean in these circumstances? Do these terms refer to the highest potential return? What about risk? What if the anticipated return is itself affected by the actions of the investors themselves (a phenomenon labeled "moral hazard")? How does one share the risks of a project in such a way that both investors and savers are willing to proceed, taking actions acceptable to both? An efficient financial system not only assists in these information and monitoring tasks, but also provides a range of instruments (contractual arrangements) suitable for the largest number of savers and borrowers, thereby contributing to the channeling of savings toward the most efficient projects. In the words of the preeminent economist Joseph Schumpeter (1961): "Bankers are the gatekeepers of capitalist economic development. Their strategic function is to screen potential innovators and advance the necessary purchasing power to the most promising." For highly risky projects, such as the creation of a new firm exploiting a new technology, venture capitalists provide a similar function today.

1.4 The Financial System and Economic Growth

The performance of the financial system matters at several levels. We shall argue that it matters for growth, that it impacts the characteristics of the business cycle, and, most importantly, that it is a significant determinant of economic welfare. We tackle growth first. Channeling funds from savers to investors efficiently is obviously important. Whenever more efficient ways are found to perform this task, society can achieve a greater increase in tomorrow's consumption for a given sacrifice in current consumption. Intuitively, more savings should lead to greater investment and thus greater future wealth. Figure 1.2 indeed suggests that, for 90 developing countries over the period 1971 to 1992, there was a strong positive association between saving rates and growth rates.
When looked at more carefully, however, the evidence is usually not as strong.2 One important reason may be that the hypothesized link is, of course, dependent on a ceteris paribus clause: It applies only to the extent savings are invested in appropriate ways. The economic performance of the former Union of Soviet Socialist Republics reminds us that it is not enough only to save; it is also important to invest judiciously. Historically, the investment/GDP (Gross Domestic Product) ratio in the Soviet Union was very high in international comparisons, suggesting the potential for very high growth rates. After 1989, however, experts realized that the value of the existing stock of capital was not consistent with the former levels of investment. A great deal of the investment must have been effectively wasted, in other words, allocated to poor or even worthless projects. Equal savings rates can thus lead to investments of widely differing degrees of usefulness from the viewpoint of future growth. However, in line with the earlier quote from Schumpeter, there are reasons to believe that the financial system has some role to play here as well.

Insert Figure 1.2 about here

The following quote from Economic Focus (UBS Economic Research, 1993) is part of a discussion motivated by the observation that, even for the high-saving countries of Southeast Asia, the correlation between savings and growth has not been uniform.

"The paradox of raising saving without commensurate growth performance may be closely linked to the inadequate development of the financial system in a number of Asian economies. Holding back financial development ('financial repression') was a deliberate policy of many governments in Asia and elsewhere who wished to maintain control over the flow of savings. (...) Typical measures of financial repression still include interest rate regulation, selective credit allocation, capital controls, and restricted entry into and competition within the banking sector."

These comments take on special significance in light of the recent Asian crisis, which provides another, dramatic, illustration of the growth-finance nexus. Economists do not fully agree on what causes financial crises. There is, however, a consensus that in the case of several East-Asian countries, the weaknesses of the financial and banking sectors, such as those described as "financial repression," must take part of the blame for the collapse and the ensuing economic regression that have marked the end of the 1990s in Southern Asia.

Let us try to go further than these general statements in the analysis of the savings and growth nexus and of the role of the financial system. Following Barro and Sala-i-Martin (1995), one can view the process of transferring funds from savers to investors in the following way.3 The least efficient system would be one in which all investments are made by the savers themselves. This is certainly inefficient because it requires a sort of "double coincidence" of intentions: Good investment ideas occurring in the mind of someone lacking past savings will not be realized.

2 In a straightforward regression in which the dependent variable is the growth rate in real per capita GNP, the coefficient on the average fraction of real GNP represented by investment (I/Y) over the prior five years is positive but insignificant. Together with other results, this is interpreted as suggesting a reverse causation from real per capita GNP growth to investment spending. See Barro and Sala-i-Martin (1995), Chapter 12, for a full discussion. There is also a theoretically important distinction between the effects of increasing investment (savings) (as a proportion of national income) on an economy's level of wealth and its growth rate. Countries that save more will, ceteris paribus, be wealthier, but they need not grow more rapidly. The classic growth model of Solow (1956) illustrates this distinction.
Funds that a non-entrepreneur saves would not be put to productive use. Yet, this unfortunate situation is a clear possibility if the necessary confidence in the financial system is lacking, with the consequence that savers do not entrust the system with their savings. One can thus think of circumstances where savings never enter the financial system, or where only a small fraction do. When savings do enter the system, they will typically do so via some sort of depository institution. In an international setting, a similar problem arises if national savings are primarily invested abroad, a situation that may reach alarming proportions in the case of underdeveloped countries.4 Let FS/S represent, then, the fraction of aggregate savings (S) being entrusted to the financial system (FS).

At a second level, the functioning of the financial system may be more or less costly. While funds transferred from a saver to a borrower via a direct loan are immediately and fully made available to the end user, the different functions of the financial system discussed above are often best fulfilled, or sometimes can only be fulfilled, through some form of intermediation, which typically involves some cost. Let us think of these costs as administrative costs, on the one hand, and costs linked to the reserve requirements of banks, on the other. Different systems will have different operating costs in this large sense, and, as a consequence, the amount of resources transferred to investors will also vary. Let us think of BOR/FS as the ratio of funds transferred from the financial system to borrowers and entrepreneurs.

Borrowers themselves may make diverse use of the funds borrowed. Some, for example, may have pure liquidity needs (analogous to the reserve needs of depository institutions), and if the borrower is the government, it may well be borrowing for consumption! For the savings and growth nexus, the issue is how much of the borrowed funds actually result in productive investments. Let I/BOR represent the fraction of borrowed funds actually invested. Note that BOR stands for borrowed funds, whether private or public. In the latter case a key issue is what fraction of the borrowed funds is used to finance public investment as opposed to public consumption.

Finally, let EFF denote the efficiency of the investment projects undertaken in society at a given time, with EFF normalized at unity; in other words, the average investment project has EFF = 1, a below-average project has EFF < 1, and conversely for an above-average project (a project consisting of building a bridge leading nowhere would have EFF = 0). K is the aggregate capital stock and Ω the depreciation rate. We may then write

K̇ = EFF · I − ΩK    (1.1)

or, multiplying and dividing I by each of the newly defined variables,

K̇ = EFF · (I/BOR) · (BOR/FS) · (FS/S) · (S/Y) · Y − ΩK    (1.2)

where our notation is meant to emphasize that the growth of the capital stock at a given savings rate is likely to be influenced by the levels of the various ratios introduced above.5 Let us now review how this might be the case.

3 For a broader perspective and a more systematic connection with the relevant literature on this topic, see Levine (1997).

4 The problem is slightly different here, however. Although capital flight is a problem from the viewpoint of building up a country's home capital stock, the acquisition of foreign assets may be a perfectly efficient way of building a national capital stock. The effect on growth may be negative when measured in terms of GDP (Gross Domestic Product), but not necessarily so in terms of national income or GNP (Gross National Product). Switzerland is an example of a rich country investing heavily abroad and deriving a substantial income flow from it. It can be argued that the growth rate of the Swiss Gross National Product (but probably not GDP) has been enhanced rather than decreased by this fact.
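Equation (1.2) can be illustrated numerically. All ratio values below are invented for illustration; nothing in the text pins them down.

```python
# Numerical sketch of the decomposition
#   K_dot = EFF * (I/BOR) * (BOR/FS) * (FS/S) * (S/Y) * Y - Omega * K
# with purely illustrative values for every quantity.
Y = 1000.0          # aggregate output
S_over_Y = 0.20     # savings rate
FS_over_S = 0.90    # share of savings entrusted to the financial system
BOR_over_FS = 0.85  # share passed on to borrowers (net of costs, reserves)
I_over_BOR = 0.80   # share of borrowed funds actually invested
EFF = 1.0           # average project efficiency (normalized to one)
Omega = 0.05        # depreciation rate
K = 2000.0          # current capital stock

K_dot = EFF * I_over_BOR * BOR_over_FS * FS_over_S * S_over_Y * Y - Omega * K

# Of the 200 units saved, only 122.4 reach productive use; net of
# depreciation (100), the capital stock grows by 22.4 this period.
assert abs(K_dot - 22.4) < 1e-9
```

Raising any one ratio in the chain, say BOR/FS from 0.85 to 0.95, feeds through the product one for one; this is the precise sense in which the efficiency of the financial system matters at a given savings rate.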
One can see that a financial system performing its matching function efficiently will positively affect the savings rate (S/Y) and the fraction of savings entrusted to financial institutions (FS/S). This reflects the fact that savers can find the right savings instruments for their needs. In terms of overall services net of inconvenience, this acts like an increase in the return to the fraction of savings finding its way into the financial system. The matching function is also relevant for the I/BOR ratio. With the appropriate instruments (like flexible overnight loan facilities), a firm's cash needs are reduced and a larger fraction of borrowed money can actually be used for investment.

By offering a large and diverse set of possibilities for spreading risks (insurance and hedging), an efficient financial system will also positively influence the savings ratio (S/Y) and the FS/S ratio. Essentially this works through improved return/risk opportunities, corresponding to an improved trade-off between future and present consumption (for savings intermediated through the financial system). Furthermore, in permitting entrepreneurs with risky projects to eliminate unnecessary risks by using appropriate instruments, an efficient financial system provides, somewhat paradoxically, a better platform for undertaking riskier projects. If, on average, riskier projects are also the ones with the highest returns, as most of the financial theory reviewed later in this book leads us to believe, one would expect that the more efficiently this function is performed, the higher (ceteris paribus) the value of EFF; in other words, the higher, on average, the efficiency of the investment undertaken with the funds made available by savers.

5 K̇ = dK/dt, that is, the change in K as a function of time.
Finally, a more efficient system may be expected to screen alternative investment projects more effectively and to monitor the conduct of the investments (the efforts of investors) better and more cost-efficiently. The direct impact is to increase EFF. Indirectly this also means that, on average, the return/risk characteristics of the various instruments offered to savers will be improved, and one may expect, as a result, an increase in both the S/Y and FS/S ratios.

The previous discussion thus tends to support the idea that the financial system plays an important role in permitting and promoting the growth of economies. Yet growth is not an objective in itself. There is such a thing as excessive capital accumulation. Jappelli and Pagano (1994) suggest that borrowing constraints,6 in general a source of inefficiency and the mark of a less than perfect financial system, may have led to more savings (in part unwanted) and higher growth. While their work is tentative, it underscores the necessity of adopting a broader and more satisfactory viewpoint and of studying more generally the impact of the financial system on social welfare. This is best done in the context of the theory of general equilibrium, a subject to which we shall turn in Section 1.6.

1.5 Financial Intermediation and the Business Cycle

Business cycles are the mark of all developed economies. According to much of current research, they are in part the result of external shocks with which these economies are repeatedly confronted. The depth and amplitude of these fluctuations, however, may well be affected by some characteristics of the financial system. This is at least the import of the recent literature on the financial accelerator. The mechanisms at work here are numerous, and we limit ourselves to giving the reader a flavor of the discussion. The financial accelerator is manifest most straightforwardly in the context of monetary policy implementation.
Suppose the monetary authority wishes to reduce the level of economic activity (inflation is feared) by raising real interest rates. The primary effect of such a move will be to increase firms' cost of capital and, as a result, to induce a decrease in investment spending as marginal projects are eliminated from consideration. According to the financial accelerator theory, however, there may be further, substantial, secondary effects. In particular, the interest rate rise will reduce the value of firms' collateralizable assets. For some firms, this reduction may significantly diminish their access to credit, making them credit constrained. As a result, the fall in investment may exceed the direct impact of the higher cost of capital; tighter financial constraints may also affect input purchases or the financing of an adequate level of finished goods inventories. For all these reasons, the output and investment of credit-constrained firms will be more strongly affected by the action of the monetary authorities, and the economic downturn may be made correspondingly more severe. By this same mechanism, any economy-wide reduction in asset values may have the effect of reducing economic activity under the financial accelerator.

Which firms are most likely to be credit constrained? We would expect that small firms, those for which lenders have relatively little information about long-term prospects, would be principally affected. These are the firms from which lenders demand high levels of collateral. Bernanke et al. (1996) provide empirical support for this assertion using U.S. data from small manufacturing firms. The financial accelerator has the power to make an economic downturn, of whatever origin, more severe.

6 By 'borrowing constraints' we mean the limitations that the average individual or firm may experience in his or her ability to borrow, at current market rates, from financial institutions.
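A toy model can convey the amplification mechanism. The functional forms and every number below are our own illustrative assumptions, not the model of Bernanke et al. (1996): investment demand falls gently in the real rate, while lending is capped by the present value of long-lived collateral, which is very rate-sensitive.

```python
# Toy financial accelerator: a rate rise hits investment twice, directly
# through the cost of capital and indirectly through collateral values.
# All functional forms and numbers are illustrative assumptions.

def desired_investment(r):
    """Frictionless investment demand, mildly decreasing in the rate r."""
    return 100.0 - 400.0 * r

def collateral_value(r, assets=220.0, duration=20):
    """Present value of long-lived collateralizable assets."""
    return assets / (1.0 + r) ** duration

def investment(r):
    """Actual investment: demand, capped by collateral-based credit."""
    return min(desired_investment(r), collateral_value(r))

low, high = 0.05, 0.10
direct_drop = desired_investment(low) - desired_investment(high)  # 20.0
total_drop = investment(low) - investment(high)

# At r = 5% the credit cap (about 82.9) is slack; at r = 10% it binds
# (about 32.7), so investment falls by far more than the direct effect.
assert total_drop > direct_drop
```

The amplification comes entirely from the constraint switching from slack to binding; a firm with shorter-duration or more abundant collateral would show only the direct 20-unit decline.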
If the screening and monitoring functions of the financial system can be tailored more closely to individual firm needs, lenders will need to rely to a lesser extent on collateralized loan contracts. This would diminish the adverse consequences of the financial accelerator and perhaps the severity of business cycle downturns.

1.6 Financial Markets and Social Welfare

Let us now consider the role of financial markets in the allocation of resources and, consequently, their effects on social welfare. The perspective provided here places the process of financial innovation in the context of the theory of general economic equilibrium, whose central concepts are closely associated with the École de Lausanne and the names of Léon Walras and Vilfredo Pareto.

Our starting point is the first theorem of welfare economics, which defines the conditions under which the allocation of resources implied by the general equilibrium of a decentralized competitive economy is efficient or optimal in the Pareto sense. First, let us define the terms involved. Assume a timeless economy where a large number of economic agents interact. There is an arbitrary number of goods and services, n. Consumers possess a certain quantity (possibly zero) of each of these n goods (in particular, they have the ability to work a certain number of hours per period). They can sell some of these goods and buy others at prices quoted in markets. There are a large number of firms, each represented by a production function, that is, a given ability (constrained by what is technologically feasible) to transform some of the available goods or services (inputs) into others (outputs): for instance, combining labor and capital to produce consumption goods. Agents in this economy act selfishly: Individuals maximize their well-being (utility) and firms maximize their profits.
General equilibrium theory tells us that, thanks to the action of the price system, order will emerge out of this uncoordinated chaos, provided certain conditions are satisfied. In the main, these hypotheses (conditions) are as follows:

H1: Complete markets. There exists a market on which a price is established for each of the n goods valued by consumers.

H2: Perfect competition. The number of consumers and firms (i.e., demanders and suppliers of each of the n goods in each of the n markets) is large enough so that no agent is in a position to influence (manipulate) market prices; that is, all agents take prices as given.

H3: Consumers' preferences are convex.

H4: Firms' production sets are convex as well.

H3 and H4 are technical conditions with economic implications. Somewhat paradoxically, the convexity hypothesis for consumers' preferences approximately translates into strictly concave utility functions. In particular, H3 is satisfied (in substance) if consumers display risk aversion, an assumption crucial for understanding financial markets, and one that will be made throughout this text. As already noted (Box 1.2), risk aversion translates into strictly concave utility functions (see Chapter 4 for details). H4 imposes requirements on the production technology. It specifically rules out increasing returns to scale in production. While important, this assumption is not at the heart of things in financial economics, since for the most part we will abstract from the production side of the economy.
A general competitive equilibrium is a price vector p* and an allocation of resources, resulting from the independent decisions of consumers and producers to buy or sell each of the n goods in each of the n markets, such that, at the equilibrium price vector p*, supply equals demand in all markets simultaneously and the action of each agent is the most favorable to him or her among all those he or she could afford (technologically or in terms of his or her budget computed at equilibrium prices).

A Pareto optimum is an allocation of resources, however determined, where it is impossible to redistribute resources (i.e., to go ahead with further exchanges) without reducing the welfare of at least one agent. In a Pareto-efficient (or Pareto-optimal; we will use the two terminologies interchangeably) allocation of resources, it is thus not possible to make someone better off without making someone else worse off. Such a situation may not be just or fair, but it is certainly efficient in the sense of avoiding waste.

Omitting some purely technical conditions, the main results of general equilibrium theory can be summarized as follows:

1. Existence of a competitive equilibrium: Under H1 through H4, a competitive equilibrium is guaranteed to exist. This means that there indeed exists a price vector and an allocation of resources satisfying the definition of a competitive equilibrium as stated above.

2. First welfare theorem: Under H1 and H2, a competitive equilibrium, if it exists, is a Pareto optimum.

3. Second welfare theorem: Under H1 through H4, any Pareto-efficient allocation can be decentralized as a competitive equilibrium.

The second welfare theorem asserts that, for any arbitrary Pareto-efficient allocation, there is a price vector and a set of initial endowments such that this allocation can be achieved as a result of the free interaction of maximizing consumers and producers in competitive markets.
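These definitions can be made concrete in a minimal exchange economy. The sketch below (two agents with Cobb-Douglas utilities and endowments we invent purely for illustration) computes the competitive equilibrium price and verifies, in this instance, what the first welfare theorem promises: markets clear and both agents' marginal rates of substitution equal the price ratio, so no further mutually beneficial exchange is possible.

```python
# Two-agent, two-good Cobb-Douglas exchange economy (all preferences and
# endowments are illustrative assumptions). Agent utilities are
# x**a * y**(1-a); good y is the numeraire (p_y = 1).
aA, aB = 0.6, 0.3          # Cobb-Douglas weights on good x
eAx, eAy = 10.0, 2.0       # agent A's endowment of (x, y)
eBx, eBy = 2.0, 10.0       # agent B's endowment of (x, y)

# Market clearing for x pins down the equilibrium price of x.
p = (aA * eAy + aB * eBy) / ((1 - aA) * eAx + (1 - aB) * eBx)

wA, wB = p * eAx + eAy, p * eBx + eBy   # wealths at equilibrium prices
xA, yA = aA * wA / p, (1 - aA) * wA     # A's Cobb-Douglas demands
xB, yB = aB * wB / p, (1 - aB) * wB     # B's Cobb-Douglas demands

assert abs((xA + xB) - (eAx + eBx)) < 1e-9   # x market clears
assert abs((yA + yB) - (eAy + eBy)) < 1e-9   # y market clears

# Pareto optimality in this interior case: both marginal rates of
# substitution equal the price ratio, so no mutually beneficial
# reallocation remains.
mrsA = (aA / (1 - aA)) * (yA / xA)
mrsB = (aB / (1 - aB)) * (yB / xB)
assert abs(mrsA - p) < 1e-9 and abs(mrsB - p) < 1e-9
```

Since both agents already equate their marginal rates of substitution to the same price ratio, any reallocation that raises one agent's utility must lower the other's, which is exactly the Pareto property the first welfare theorem asserts.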
To achieve a specific Pareto-optimal allocation, some redistribution mechanism will be needed to reshuffle initial resources. The availability of such a mechanism, functioning without distortion (and thus waste) is, however, very much in question. Hence the dilemma between equity and efficiency that faces all societies and their governments. The necessity of H1 and H2 for the optimality of a competitive equilibrium provides a rationale for government intervention when these hypotheses are not naturally satisfied. The case for antitrust and other “pro-competition” policies is implicit in H2; the case for intervention in the presence of externalities or in the provision of public goods follows from H1, because these two situations are instances of missing markets.7 Note that so far there does not seem to be any role for financial markets in promoting an efficient allocation of resources. To restore that role, we must abandon the fiction of a timeless world, underscoring, once again, the fact that time is of the essence in finance! Introducing the time dimension does not diminish the usefulness of the general equilibrium apparatus presented above, provided the definition of a good is properly adjusted to take into account not only its intrinsic characteristics, but also the time period in which it is available. A cup of coffee available at date t is different from a cup of coffee available at date t + 1 and, accordingly, it is traded on a different market and it commands a different price. Thus, if there are two dates, the number of goods in the economy goes from n to 2n. It is easy to show, however, that not all commodities need be traded for future as well as current delivery. The existence of a spot and forward market for one good only (taken as the numeraire) is sufficient to implement all the desirable allocations, and, in particular, restore, under H1 and H2, the optimality of the competitive equilibrium. This result is contained in Arrow (1964). 
It provides a powerful economic rationale for the existence of credit markets, markets where money is traded for future delivery. Now let us go one step further and introduce uncertainty, which we will represent conceptually as a partition of all the relevant future scenarios into separate states of nature. To review, a state of nature is an exhaustive description of one possible relevant configuration of future events. Using this concept, the applicability of the welfare theorems can be extended in a fashion similar to that used with time above, by defining goods not only according to the date but also to the state of nature in which they are (or might be) available. This is the notion of contingent commodities. Under this construct, we imagine the market for ice cream decomposed into a series of markets: for ice cream today; for ice cream tomorrow if it rains and the Dow Jones is at 10,000; if it rains and . . ., etc.

7 Our model of equilibrium presumes that agents affect one another only through prices. If this is not the case, an economic externality is said to be present. These may involve either production or consumption. For example, there have been substantial negative externalities for fishermen associated with the construction of dams in the western United States: The catch of salmon has declined dramatically as these dams have reduced the ability of the fish to return to their spawning habitats. If the externality affects all consumers simultaneously, it is said to be a public good. The classic example is national defense. If any citizen is to consume a given level of national security, all citizens must be equally secure (and thus consume this public good at the same level). Both are instances of missing markets: There is a market neither for national defense nor for rights to disturb salmon habitats.
Formally, this is a straightforward extension of the basic context: there are more goods, but this in itself is not restrictive8 [Arrow (1964) and Debreu (1959)]. The hypothesis that there exists a market for each and every good valued by consumers becomes, however, much more questionable with this extended definition of a typical good, as the example above suggests. On the one hand, the number of states of nature is, in principle, arbitrarily large and, on the other, one simply does not observe markets where commodities contingent on the realization of individual states of nature can routinely be traded. One can thus state that if markets are complete in the above sense, a competitive equilibrium is efficient, but the issue of completeness (H1) then takes center stage. Can Pareto optimality be obtained in a less formidable setup than one with complete contingent commodity markets? What does it mean to make markets "more complete"? It was Arrow (1964), again, who took the first step toward answering these questions. Arrow generalized the result alluded to earlier and showed that it would be enough, in order to effect all desirable allocations, to have the opportunity to trade one good only across all states of nature. Such a good would again serve as the numeraire. The primitive security could thus be a claim promising $1.00 (i.e., one unit of the numeraire) at a future date, contingent on the realization of a particular state, and zero under all other circumstances. We shall have a lot to say about such Arrow-Debreu securities (A-D securities from now on), which are also called contingent claims. Arrow asserted that if there is one such contingent claim corresponding to each and every one of the relevant future date/state configurations, hypothesis H1 could be considered satisfied, markets could be considered complete, and the welfare theorems would apply.
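The economizing on markets that Arrow's construction achieves can be seen in a small count. The example (2 dates, 3 basic goods, 4 states of nature) mirrors the one in this chapter's footnotes; the counting convention below is our reading of that setup.

```python
# Counting markets in a 2-date, 3-good, 4-state economy (the chapter's
# own example; the counting convention is our reading of the setup).
dates, goods, states = 2, 3, 4

# Full contingent-commodity structure: one forward market per good per
# future state, plus spot markets for each good today.
contingent_commodity_markets = states * goods + goods   # 12 + 3 = 15

# Arrow's structure: one contingent claim per state, plus spot markets
# for each good at each date (2 x 3).
arrow_markets = states + dates * goods                  # 4 + 6 = 10

assert contingent_commodity_markets == 15
assert arrow_markets == 10
```

With S states and n goods, the full structure needs S·n forward markets on top of the n spot markets, while Arrow's needs only S claims plus spot markets at each date; the gap widens quickly as the number of states grows.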
Arrow's result implies a substantial decrease in the number of required markets.9 However, for a complete contingent claim structure to be fully equivalent to a setup where agents could trade a complete set of contingent commodities, it must be the case that agents are assumed to know all future spot prices, contingent on the realization of all individual states of the world. Indeed, it is at these prices that they will be able to exchange the proceeds from their A-D securities for consumption goods. This hypothesis is akin to the hypothesis of rational expectations.10 A-D securities are a powerful conceptual tool and are studied in depth in Chapters 8 and 10. They are not, however, the instruments we observe being traded in actual markets. Why is this the case, and in what sense is what we do observe an adequate substitute? To answer these questions, we first allude to a result (derived later on) which states that there is no single way to make markets complete. In fact there is potentially a large number of alternative financial structures achieving the same goal, and the complete A-D securities structure is only one of them. For instance, we shall describe, in Chapter 10, a context in which one might think of achieving an essentially complete market structure with options or derivative securities. We shall make use of this fact for pricing alternative instruments using arbitrage techniques. Thus, the failure to observe anything close to A-D securities being traded is not evidence against the possibility that markets are indeed complete.

Footnote 8: Since n can be as large as one needs, without restriction.
Footnote 9: Example: with 2 dates, 3 basic goods, and 4 states of nature, complete commodity markets require 12 contingent commodity markets plus 3 spot markets, versus 4 contingent claims and 2 x 3 spot markets in the Arrow setup.
Footnote 10: For an elaboration on this topic, see Drèze (1971).
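The market counting behind Arrow's economizing result generalizes easily. A minimal sketch of the arithmetic in footnote 9 (the function names and parameterization are my own, not the text's):

```python
# Counting markets: complete contingent commodities vs. the Arrow setup.
# Parameters follow the footnote-9 example: 2 dates, 3 goods, 4 states.

def contingent_commodity_markets(n_goods, n_states):
    """One market per (good, state) pair for future delivery,
    plus a spot market for each good today."""
    return n_goods * n_states + n_goods

def arrow_markets(n_goods, n_states, n_dates):
    """One contingent claim per state, plus a spot market
    for each good at each date."""
    return n_states + n_dates * n_goods

print(contingent_commodity_markets(3, 4))  # 12 contingent + 3 spot = 15
print(arrow_markets(3, 4, 2))              # 4 claims + 2 x 3 spot = 10
```

The gap widens quickly as the number of goods and states grows, which is the force behind Arrow's observation that trading a single good across states suffices.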
In an attempt to match this discussion on the role played by financial markets with the type of markets we see in the real world, one can identify the different needs met by trading A-D securities in a complete markets world. In so doing, we shall conclude that, in reality, different types of needs are met through trading alternative specialized financial instruments (which, as we shall later prove, will all appear as portfolios of A-D securities). As we have already observed, the time dimension is crucial for finance and, correspondingly, the need to exchange purchasing power across time is essential. It is met in reality through a variety of specific noncontingent instruments: promised future payments that are independent of specific states of nature, except those in which the issuer is unable to meet his obligations (bankruptcies). Personal loans, bank loans, money market and capital market instruments, social security and pension claims are all assets fulfilling this basic need for redistributing purchasing power in the time dimension. In a complete market setup implemented through A-D securities, the needs met by these instruments would be satisfied by a certain configuration of positions in A-D securities. In reality, the specialized instruments mentioned above fulfill the demand for exchanging income through time. One reason for the formidable nature of the complete markets requirement is that a state of nature, which is a complete description of the relevant future for a particular agent, includes some purely personal aspects of almost unlimited complexity. Certainly the future is different for you, in a relevant way, if you lose your job or if your house burns down, without these contingencies playing a very significant role for the population at large. In a pure A-D world, the description of the states of nature should take account of these individual contingencies viewed from the perspective of each and every market participant!
In the real world, insurance contracts are the specific instruments that deal with the need for exchanging income across purely individual events or states. The markets for these contracts are part and parcel of the notion of complete financial markets. While such a specialization makes sense, it is recognized as unlikely that the need to trade across individual contingencies will be fully met through insurance markets, because of specific difficulties linked with the hidden quality of these contingencies (i.e., the inherent asymmetry in the information possessed by suppliers and demanders participating in these markets). The presence of these asymmetries strengthens our perception of the impracticality of relying exclusively on pure A-D securities to deal with personal contingencies. Beyond time issues and personal contingencies, most other financial instruments not only imply the exchange of purchasing power through time, but are also more specifically contingent on the realization of particular events. The relevant events here, however, are defined on a collective basis rather than being based on individual contingencies; they are contingent on the realization of events affecting groups of individuals and observable by everyone. An example of this is the situation where a certain level of profits for a firm implies the payment of a certain dividend against the ownership of that firm's equity. Another is the payment of a certain sum of money associated with the ownership of an option or a financial futures contract. In the latter cases, the contingencies (sets of states of nature) are dependent on the value of the underlying asset itself.
1.7 Conclusion To conclude this introductory chapter, we advance a vision of the financial system progressively evolving toward the complete markets paradigm, starting with the most obviously missing markets and slowly, as technological innovation decreases transaction costs and allows the design of more sophisticated contracts, completing the market structure. Have we arrived at a complete market structure? Have we come significantly closer? There are opposing views on this issue. While a more optimistic perspective is proposed by Merton (1990) and Allen and Gale (1994), we choose to close this chapter on two healthily skeptical notes. Tobin (1984, p.10), for one, provides an unambiguous answer to the above question: “New financial markets and instruments have proliferated over the last decade, and it might be thought that the enlarged menu now spans more states of nature and moves us closer to the Arrow-Debreu ideal. Not much closer, I am afraid. The new options and futures contracts do not stretch very far into the future. They serve mainly to allow greater leverage to short-term speculators and arbitrageurs, and to limit losses in one direction or the other. Collectively they contain considerable redundancy. Every financial market absorbs private resources to operate, and government resources to police. The country cannot afford all the markets the enthusiasts may dream up. In deciding whether to approve proposed contracts for trading, the authorities should consider whether they really fill gaps in the menu and enlarge the opportunities for Arrow-Debreu insurance, not just opportunities for speculation and financial arbitrage.” Shiller (1993, pp. 2–3) is even more specific with respect to missing markets: “It is odd that there appear to have been no practical proposals for establishing a set of markets to hedge the biggest risks to standards of living. 
Individuals and organizations could hedge or insure themselves against risks to their standards of living if an array of risk markets – let us call them macro markets – could be established. These would be large international markets, securities, futures, options, swaps or analogous markets, for claims on major components of incomes (including service flows) shared by many people or organizations. The settlements in these markets could be based on income aggregates, such as national income or components thereof, such as occupational incomes, or prices that value income flows, such as real estate prices, which are prices of claims on real estate service flows."

References

Allen, F., Gale, D. (1994), Financial Innovation and Risk Sharing, MIT Press, Cambridge, Massachusetts.
Arrow, K. J. (1964), "The Role of Securities in the Allocation of Risk," Review of Economic Studies 31, 91–96.
Barro, R. J., Sala-i-Martin, X. (1995), Economic Growth, McGraw-Hill, New York.
Bernanke, B., Gertler, M., Gilchrist, S. (1996), "The Financial Accelerator and the Flight to Quality," The Review of Economics and Statistics 78, 1–15.
Bernstein, P. L. (1992), Capital Ideas: The Improbable Origins of Modern Wall Street, The Free Press, New York.
Debreu, G. (1959), Theory of Value: An Axiomatic Analysis of Economic Equilibrium, Wiley, New York.
Drèze, J. H. (1971), "Market Allocation Under Uncertainty," European Economic Review 2, 133–165.
Jappelli, T., Pagano, M. (1994), "Savings, Growth, and Liquidity Constraints," Quarterly Journal of Economics 109, 83–109.
Levine, R. (1997), "Financial Development and Economic Growth: Views and Agenda," Journal of Economic Literature 35, 688–726.
Merton, R. C. (1990), "The Financial System and Economic Performance," Journal of Financial Services 4, 263–300.
Mishkin, F. (1992), The Economics of Money, Banking and Financial Markets, 3rd edition, Harper Collins, New York, Chapter 8.
Schumpeter, J.
(1934), The Theory of Economic Development, Duncker & Humblot, Leipzig; trans. Opie, R. (1934), Harvard University Press, Cambridge, Massachusetts; reprinted, Oxford University Press, New York (1964).
Shiller, R. J. (1993), Macro Markets: Creating Institutions for Managing Society's Largest Economic Risks, Clarendon Press, Oxford.
Solow, R. M. (1956), "A Contribution to the Theory of Economic Growth," Quarterly Journal of Economics 70, 65–94.
Tobin, J. (1984), "On the Efficiency of the Financial System," Lloyds Bank Review, 1–15.
UBS Economic Research (1993), Economic Focus, Union Bank of Switzerland, no. 9.

Complementary Readings

As a complement to this introductory chapter, the reader will be interested in the historical review of financial markets and institutions found in the first chapter of Allen and Gale (1994). Bernstein (1992) provides a lively account of the birth of the major ideas making up modern financial theory, including personal portraits of their authors.

Appendix: Introduction to General Equilibrium Theory

The goal of this appendix is to provide an introduction to the essentials of General Equilibrium Theory, thereby permitting a complete understanding of Section 1.6 of the present chapter and facilitating the discussion of subsequent chapters (from Chapter 8 on). To make this presentation as simple as possible, we'll take the case of a hypothetical exchange economy (that is, one with no production) with two goods and two agents. This permits using a very useful pedagogical tool known as the Edgeworth-Bowley box.

Insert Figure A.1.1 about here

Let us analyze the problem of allocating efficiently a given economy-wide endowment of 10 units of good 1 and 6 units of good 2 among two agents, A and B. In Figure A1.1, we measure good 2 on the vertical axis and good 1 on the horizontal axis. Consider the choice problem from the origin of the axes for Mr. A, and upside down (that is, placing the origin in the upper right corner) for Ms. B.
An allocation is then represented as a point in a rectangle of size 6 x 10. Point E is an allocation at which Mr. A receives 4 units of good 2 and 2 units of good 1. Ms. B gets the rest, that is, 2 units of good 2 and 8 units of good 1. All other points in the box represent feasible allocations, that is, alternative ways of allocating the resources available in this economy.

Pareto Optimal Allocations

In order to discuss the notion of Pareto optimal or efficient allocations, we need to introduce agents' preferences. They are fully summarized, in the graphical context of the Edgeworth-Bowley box, by indifference curves (IC) or utility level curves. Thus, starting from the allocation E represented in Figure A1.1, we can record all feasible allocations that provide the same utility to Mr. A. The precise shape of such a level curve is person specific, but we can at least be confident that it slopes downward: if we take away some units of good 1, we have to compensate him with some extra units of good 2 if we are to leave his utility level unchanged. It is easy to see as well that the ICs of a consistent person do not cross, a property associated with the notion of transitivity (and with rationality) in Chapter 3. And we have seen in Boxes 1.1 and 1.2 that the preference for smoothness translates into a strictly concave utility function, or, equivalently, convex-to-the-origin level curves as drawn in Figure A1.1. The same properties apply to the ICs of Ms. B, of course viewed upside down with the upper right corner as the origin.

Insert Figure A.1.2 about here

With this simple apparatus we are in a position to discuss further the concept of Pareto optimality. Arbitrarily tracing the level curves of Mr. A and Ms. B as they pass through allocation E (but in conformity with the properties derived in the previous paragraph), only two possibilities may arise: they cross each other at E, or they are tangent to one another at point E.
The first possibility is illustrated in Figure A1.1, the second in Figure A1.2. In the first case, allocation E cannot be a Pareto optimal allocation. As the picture illustrates clearly, by the very definition of level curves, if the ICs of our two agents cross at point E there is a set of allocations (corresponding to the shaded area in Figure A1.1) that are simultaneously preferred to E by both Mr. A and Ms. B. These allocations are Pareto superior to E, and, in that situation, it would indeed be socially inefficient or wasteful to distribute the available resources as indicated by E. Allocation D, for instance, is feasible and preferred to E by both individuals. If the ICs are tangent to one another at point E, as in Figure A1.2, no redistribution of the given resources exists that would be approved by both agents. Inevitably, moving away from E decreases the utility level of one of the two agents if it favors the other. In this case, E is a Pareto optimal allocation. Figure A1.2 illustrates that it is not generally unique, however. If we connect all the points where the various ICs of our two agents are tangent to each other, we draw the line, labeled the contract curve, representing the infinity of Pareto optimal allocations in this simple economy.

An indifference curve for Mr. A is defined as the set of allocations that provide the same utility to Mr. A as some specific allocation, for example allocation E: $\{(c_1^A, c_2^A) : U(c_1^A, c_2^A) = U(E)\}$. This definition implies that the slope of the IC can be derived by taking the total differential of $U(c_1^A, c_2^A)$ and equating it to zero (no change in utility along the IC), which gives:

$$\frac{\partial U(c_1^A, c_2^A)}{\partial c_1^A}\,dc_1^A + \frac{\partial U(c_1^A, c_2^A)}{\partial c_2^A}\,dc_2^A = 0, \qquad (1.3)$$

and thus

$$-\frac{dc_2^A}{dc_1^A} = \frac{\partial U(c_1^A, c_2^A)/\partial c_1^A}{\partial U(c_1^A, c_2^A)/\partial c_2^A} \equiv MRS_{1,2}^A. \qquad (1.4)$$
That is, the negative (or the absolute value) of the slope of the IC is the ratio of the marginal utility of good 1 to the marginal utility of good 2, specific to Mr. A and to the allocation $(c_1^A, c_2^A)$ at which the derivatives are taken. It defines Mr. A's Marginal Rate of Substitution (MRS) between the two goods. Equation (1.4) permits a formal characterization of a Pareto optimal allocation. Our former discussion has equated Pareto optimality with the tangency of the ICs of Mr. A and Ms. B. Tangency, in turn, means that the slopes of the respective ICs are identical. Allocation E, associated with the consumption vector $(c_1^A, c_2^A)^E$ for Mr. A and $(c_1^B, c_2^B)^E$ for Ms. B, is thus Pareto optimal if and only if

$$MRS_{1,2}^A = \frac{\partial U(c_1^A, c_2^A)^E/\partial c_1^A}{\partial U(c_1^A, c_2^A)^E/\partial c_2^A} = \frac{\partial U(c_1^B, c_2^B)^E/\partial c_1^B}{\partial U(c_1^B, c_2^B)^E/\partial c_2^B} = MRS_{1,2}^B. \qquad (1.5)$$

Equation (1.5) provides a complete characterization of a Pareto optimal allocation in an exchange economy except in the case of a corner allocation, that is, an allocation at the frontier of the box where one of the agents receives the entire endowment of one good and the other agent receives none. In that situation it may well be that the equality could not be satisfied except, hypothetically, by moving to the outside of the box, that is, to allocations that are not feasible since they require giving a negative amount of one good to one of the two agents. So far we have not touched on the issue of how the discussed allocations may be determined. This is the viewpoint of Pareto optimality analysis, which is exclusively concerned with deriving the efficiency properties of given allocations, irrespective of how they were achieved. Let us now turn to the concept of competitive equilibrium.

Competitive equilibrium

Associated with the notion of competitive equilibrium is the notion of markets and prices.
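The tangency condition (1.5) can be checked numerically. The sketch below assumes, purely for illustration, that both agents have Cobb-Douglas utility $U(c_1, c_2) = c_1^a c_2^{1-a}$ with $a = 0.5$ (the appendix leaves preferences general), in the 10-by-6 box above:

```python
# Checking the Pareto-optimality condition (1.5) numerically, assuming
# (for illustration only) Cobb-Douglas utility U = c1**a * c2**(1 - a)
# for both agents, in the 10-by-6 Edgeworth box of the appendix.

def mrs(c1, c2, a=0.5):
    # MRS_{1,2} = (dU/dc1) / (dU/dc2) = (a / (1 - a)) * (c2 / c1)
    return (a / (1 - a)) * (c2 / c1)

E1, E2 = 10.0, 6.0                 # economy-wide endowments of goods 1 and 2

# With identical Cobb-Douglas preferences, the contract curve is the
# diagonal c2 = (E2 / E1) * c1; allocations on it equate the two MRSs.
cA1 = 4.0
cA2 = (E2 / E1) * cA1              # Mr. A's bundle on the contract curve
cB1, cB2 = E1 - cA1, E2 - cA2      # Ms. B consumes the remainder

assert abs(mrs(cA1, cA2) - mrs(cB1, cB2)) < 1e-9   # condition (1.5) holds

# At allocation E of Figure A1.1 (Mr. A holds (2, 4), Ms. B holds (8, 2)),
# the MRSs differ, so the ICs cross and E is not Pareto optimal:
print(mrs(2.0, 4.0), mrs(8.0, 2.0))   # 2.0 and 0.25
```

Any allocation off the diagonal produces unequal MRSs, reproducing the crossing-ICs case of Figure A1.1.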
One price vector (one price for each of our two goods), or simply a relative price, taking good 1 as the numeraire and setting p1 = 1, is represented in the Edgeworth-Bowley box by a downward sloping line. From the viewpoint of either agent, such a line has all the properties of a budget line. It also represents the frontier of their opportunity set. Let us assume that the initial allocation, before any trade, is represented by point I in Figure A1.3. Any line sloping downward from I does represent the set of allocations that Mr. A, endowed with I, can obtain by going to the market and exchanging (competitively, taking prices as given) good 1 for good 2 or vice versa. He will maximize his utility subject to this budget constraint by attempting to climb to the highest IC making contact with his budget set. This will lead him to select the allocation corresponding to the tangency point between one of his ICs and the price line. Because the same prices are valid for both agents, an identical procedure, viewed upside down from the upper right-hand corner of the box, will lead Ms. B to a tangency point between one of her ICs and the price line. At this stage, only two possibilities may arise: Mr. A and Ms. B have converged to the same allocation (the two markets, for goods 1 and 2, clear: supply and demand for the two goods are equal and we are at a competitive equilibrium); or the two agents' separate optimizing procedures have led them to select two different allocations, in which case total demand does not equal total supply and an equilibrium is not achieved. The two situations are described, respectively, in Figures A1.3 and A1.4.

Insert Figures A.1.3 and A.1.4 about here

In the disequilibrium case of Figure A1.4, prices will have to adjust until an equilibrium is found. Specifically, with Mr. A at point A and Ms. B at point B, there is an excess demand for good 2 but insufficient demand for good 1.
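The tangency logic can be made concrete with a small computation. The Cobb-Douglas utilities and the endowments at point I below are my own illustrative assumptions, not the text's:

```python
# A competitive-equilibrium computation in the Edgeworth box. The
# Cobb-Douglas utilities (U = c1**0.5 * c2**0.5) and the initial
# endowments are illustrative assumptions; they sum to (10, 6).

a = 0.5                       # expenditure share on good 1 (assumed)
wA = (8.0, 1.0)               # Mr. A's endowment at point I (hypothetical)
wB = (2.0, 5.0)               # Ms. B's endowment at point I (hypothetical)

def demand(w, p, a=a):
    """Utility-maximizing demands with good 1 as numeraire (p1 = 1, p2 = p)."""
    income = w[0] + p * w[1]
    return (a * income, (1 - a) * income / p)

# Market clearing for good 1: 0.5*(8 + p) + 0.5*(2 + 5p) = 10  =>  p = 5/3.
p_eq = 5.0 / 3.0

cA, cB = demand(wA, p_eq), demand(wB, p_eq)
assert abs(cA[0] + cB[0] - 10.0) < 1e-9    # good-1 market clears
assert abs(cA[1] + cB[1] - 6.0) < 1e-9     # good-2 clears too (Walras' law)

# At equilibrium each agent's MRS equals the price ratio p1/p2 = 1/p,
# so the two MRSs coincide and the allocation is Pareto optimal:
assert abs(cA[1] / cA[0] - 1.0 / p_eq) < 1e-9
assert abs(cB[1] / cB[0] - 1.0 / p_eq) < 1e-9
```

Setting p below 5/3 reproduces the disequilibrium of Figure A1.4: total demand for good 2 becomes 5/p + 3 > 6, an excess demand that pushes the relative price of good 2 back up.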
One would expect the price of good 2 to increase relative to the price of good 1, with the likely result that both agents will decrease their net demand for good 2 and increase their net demand for good 1. Graphically, this is depicted by the price line tilting around point I and becoming less steep (indicating, for instance, that if both agents wanted to buy good 1 only, they could now afford more of it). With regular ICs, the respective points of tangency will converge until an equilibrium similar to the one described in Figure A1.3 is reached. We will not say anything here about the conditions guaranteeing that such a process will converge. Let us rather insist on one crucial necessary precondition: that an equilibrium exists. In the text we have mentioned that assumptions H1 to H4 are needed to guarantee the existence of an equilibrium. Of course H4 does not apply here. H1 states the necessity of the existence of a price for each good, which is akin to specifying the existence of a price line. H2 defines one of the characteristics of a competitive equilibrium: prices are taken as given by the various agents, and the price line describes their perceived opportunity sets. Our discussion here helps illuminate the need for H3. Indeed, in order for an equilibrium to have a chance to exist, the geometry of Figure A1.3 makes clear that the shapes of the two agents' ICs are relevant. The price line must be able to separate the "better than" areas of the two agents' ICs passing through the same point, the candidate equilibrium allocation. The "better than" area is simply the area above a given IC; it represents all the allocations providing higher utility than those on the level curve. This separation by a price line is not generally possible if the ICs are not convex, in which case an equilibrium cannot be guaranteed to exist. The problem is illustrated in Figure A1.5.
Insert Figure A.1.5 about here

Once a competitive equilibrium is observed to exist (which logically could be the case even if the conditions that guarantee existence are not met), the Pareto optimality of the resulting allocation is ensured by H1 and H2 only. In substance, this is because once the common price line at which markets clear exists, the very fact that agents optimize taking prices as given leads each of them to a point of tangency between their highest IC and the common price line. At the resulting allocation, both MRSs are equal to the slope of the same price line and, consequently, are identical to one another. The conditions for Pareto optimality are thus fulfilled.

Chapter 2: The Challenges of Asset Pricing: A Roadmap

2.1 The main question of financial theory

Valuing risky cash flows or, equivalently, pricing risky assets is at the heart of financial theory. Our discussion thus far has been conducted from the perspective of society as a whole, and it argues that a progressively more complete set of financial markets will generally enhance societal welfare by making it easier for economic agents to transfer income across future dates and states via the sale or purchase of individually tailored portfolios of securities. The desire of agents to construct such portfolios will be as much dependent on the market prices of the relevant securities as on their strict availability, and this leads us to the main topic of the text. Indeed, the major practical question in finance is "how do we value a risky cash flow?", and the main objective of this text is to provide a complete and up-to-date treatment of how it can be answered. For the most part, this textbook is thus a text on asset pricing. Indeed, an asset is nothing other than the right to future cash flows, whether these future cash flows are the result of interest payments, dividend payments, insurance payments, or the resale value of the asset.
Conversely, when we compute the risk-adjusted present value ($PV$), we are, in effect, asking the question: if this project's cash flow were traded as though it were a security, at what price would it sell, given that it should pay the prevailing rate for securities of that same systematic risk level? We compare its fair market value, estimated in this way, with its cost, $P_0$. Evaluating a project is thus a special case of evaluating a complex security. Viewed in this way, and abstracting from risk for the moment, the key object of our attention, be it an asset or an investment project, can be summarized as in Table 2.1.

Table 2.1: Valuing a Risk-Free Cash Flow

t = 0: $P_0$ ?
t = 1: $\widetilde{CF}_1$, discounted value $CF_1/(1+r_1^f)$
t = 2: $\widetilde{CF}_2$, discounted value $CF_2/(1+r_2^f)^2$
...
t = $\tau$: $\widetilde{CF}_\tau$, discounted value $CF_\tau/(1+r_\tau^f)^\tau$
...
t = $T$: $\widetilde{CF}_T$, discounted value $CF_T/(1+r_T^f)^T$

In Table 2.1, $t = 0, 1, 2, \ldots, \tau, \ldots, T$ represents future dates. The duration of each period, the length of time between $\tau - 1$ and $\tau$, is arbitrary and can be viewed as one day, one month, one quarter or one year. The expression $\widetilde{CF}_\tau$ stands for the possibly uncertain cash flow in period $\tau$ (whenever useful, we identify random variables with a tilde), $r_\tau^f$ is the risk-free, per-period interest rate prevailing between dates $\tau - 1$ and $\tau$, and $P_0$ denotes the to-be-determined current price or valuation of the future cash flow. If the future cash flows will be available for sure, valuing the flow of future payments is easy: it requires adding the future cash flows after discounting them by the risk-free rate of interest, that is, adding the discounted values listed in the table.
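The risk-free valuation in Table 2.1 amounts to a one-line function. A minimal sketch with illustrative numbers (the cash flows and rates are assumptions, not data from the text):

```python
# Valuing a certain cash-flow stream as in Table 2.1: discount each CF_t
# at the risk-free rate and add. Cash flows and rates are illustrative.

def present_value(cash_flows, risk_free_rates):
    """cash_flows[t-1] is CF_t, received at date t; risk_free_rates[t-1]
    is r_t^f, and CF_t is discounted by (1 + r_t^f)**t as in the table."""
    return sum(cf / (1.0 + r) ** t
               for t, (cf, r) in enumerate(zip(cash_flows, risk_free_rates),
                                           start=1))

# Example: $100 at each of dates 1, 2, 3 with a flat 5% risk-free rate.
p0 = present_value([100.0, 100.0, 100.0], [0.05, 0.05, 0.05])
print(round(p0, 2))   # 272.32
```

The function accepts a different rate for each horizon, matching the table's $r_\tau^f$ notation.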
The discounting procedure is indeed at the heart of our problem: it clearly serves to translate future payments into current dollars (those that are to be used to purchase the right to these future cash flows, or in terms of which the current value of the future cash flow is to be expressed); in other words, the discounting procedure is what makes it possible to compare future dollars (i.e., dollars that will be available in the future) with current dollars. If, however, the future cash flows will not be available for certain but are subject to random events (the interest payments depend on the debtor remaining solvent, the dividend payments depend on the financial strength of the equity issuer, the returns to the investment project depend on its commercial success), then the valuation question becomes trickier, so much so that there does not exist a universally agreed way of proceeding that dominates all others. In the same way that one dollar for sure tomorrow does not generally have the same value as one current dollar, one dollar tomorrow under a set of more or less narrowly defined circumstances, that is, in a subset of all possible states of nature, is also not worth one current dollar, not even one current dollar discounted at the risk-free rate. Assume the risk-free rate of return is 5% per year; then discounting at the risk-free rate one dollar available in one year yields $1/1.05 ≈ $0.95. This is a genuine equality: it states that $1 tomorrow will have a market price of $0.95 today when one-year risk-free securities earn 5%. It is a market assessment to the extent that the 5% risk-free rate is an equilibrium market price. Now if $1 for sure tomorrow is worth $0.95, it seems likely that $1 tomorrow "maybe", that is, under a restrictive subset of the states of nature, should certainly be worth less than $0.95.
One can speculate, for instance, that if the probability of receiving the $1 in a year is about 1/2, then one should not be willing to pay more than 1/2 x $0.95 for that future cash flow. But we have to be more precise than this. To that end, several lines of attack will be pursued. Let us outline them.

2.2 Discounting risky cash flows: various lines of attack

First, as in the certainty case, it is plausible to argue (and it can be formally demonstrated) that the valuation process is additive: the value of a sum of future cash flows will take the form of the sum of the values of each of these future cash flows. Second, as already anticipated, we will work with probabilities, so that the random cash flow occurring at a future date $\tau$ will be represented by a random variable, $\widetilde{CF}_\tau$, for which a natural reference value is its expectation $E[\widetilde{CF}_\tau]$. Another would be the value of this expected future cash flow discounted at the risk-free rate: $E[\widetilde{CF}_\tau]/(1+r_\tau^f)^\tau$. Now the latter expression cannot generally be the solution to our problem, although it is intuitively understandable that it will be when the risk issue does not matter, that is, when market participants can be assumed to be risk neutral. In the general case where risk needs to be taken into account, which typically means that risk-bearing behavior needs to be remunerated, alterations to that reference formula are necessary. These alterations may take the following form:

1. The most common strategy consists of discounting at a rate that is higher than the risk-free rate, that is, discounting at the risk-free rate increased by a certain premium $\pi$, as in

$$\frac{E[\widetilde{CF}_\tau]}{(1 + r_\tau^f + \pi)^\tau}.$$

The underlying logic is straightforward: to price an asset equal to the present value of its expected future cash flows discounted at a particular rate is to price the asset in a manner such that, at its present value price, it is expected to earn that discount rate.
The appropriate rate, in turn, must be the analyst's estimate of the rate of return on other financial assets that represent title to cash flows similar in risk and timing to that of the asset in question. This strategy has the consequence of pricing the asset to pay the prevailing competitive rate for its risk class. When we follow this approach, the key issue is to compute the appropriate risk premium.

2. Another approach in the same spirit consists of correcting the expected cash flow itself in such a way that one can continue discounting at the risk-free rate. The standard way of doing this is to decrease the expected future cash flow by a factor $\Pi_\tau$ that once again will reflect some form of risk or insurance premium, as in

$$\frac{E[\widetilde{CF}_\tau] - \Pi_\tau}{(1 + r_\tau^f)^\tau}.$$

3. The same idea can take the form, it turns out quite fruitfully, of distorting the probability distribution over which the expectations operator is applied, so that taking the expected cash flow with this modified probability distribution justifies once again discounting at the risk-free rate:

$$\frac{\hat{E}[\widetilde{CF}_\tau]}{(1 + r_\tau^f)^\tau};$$

here $\hat{E}$ denotes the expectation taken with respect to the modified probability distribution.

4. Finally, one can think of decomposing the future cash flow $\widetilde{CF}_\tau$ into its state-by-state elements. Denote by $CF(\theta_\tau)$ the actual payment that will occur in the specific possible state of nature $\theta_\tau$. If one is able to find the price today of $1 tomorrow conditional on that particular state $\theta_\tau$ being realized, say $q(\theta_\tau)$, then surely the appropriate current valuation of $\widetilde{CF}_\tau$ is

$$\sum_{\theta_\tau \in \Theta_\tau} q(\theta_\tau)\, CF(\theta_\tau),$$

where the summation takes place over all the possible future states $\theta_\tau$.
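Under consistent inputs, the four strategies deliver the same value. The sketch below starts from hypothetical state prices and backs out the risk premium, the certainty-equivalent correction $\Pi$, and the distorted probabilities, so that all four routes agree:

```python
# The four valuation strategies, reconciled numerically. The state
# prices q(theta), payoffs CF(theta) and physical probabilities are
# hypothetical; everything else is derived from them so that all four
# routes price the same one-period risky cash flow identically.

p  = [0.5, 0.5]           # physical probabilities of the two states (assumed)
cf = [150.0, 50.0]        # state-contingent payoffs CF(theta) (assumed)
q  = [0.40, 0.55]         # Arrow-Debreu state prices q(theta) (assumed)

rf = 1.0 / sum(q) - 1.0   # risk-free rate implied by the state prices

# Strategy 4 (Arrow-Debreu): value = sum over states of q(theta)*CF(theta).
v4 = sum(qi * ci for qi, ci in zip(q, cf))

# Strategy 3 (martingale): distorted probabilities pi_hat = q*(1 + rf),
# then discount the distorted expectation at the risk-free rate.
pi_hat = [qi * (1.0 + rf) for qi in q]
e_hat = sum(ph * ci for ph, ci in zip(pi_hat, cf))
v3 = e_hat / (1.0 + rf)

# Strategy 2 (certainty equivalent): shrink E[CF] by a premium Pi.
e_cf = sum(pi * ci for pi, ci in zip(p, cf))
Pi = e_cf - e_hat
v2 = (e_cf - Pi) / (1.0 + rf)

# Strategy 1 (risk-adjusted discounting): find the premium prem such
# that E[CF] / (1 + rf + prem) matches the state-price value.
prem = e_cf / v4 - (1.0 + rf)
v1 = e_cf / (1.0 + rf + prem)

assert all(abs(v - v4) < 1e-9 for v in (v1, v2, v3))
```

With these numbers the implied risk-free rate is about 5.26%, $E[\widetilde{CF}] = 100$, and all four routes give 87.5; the positive premium (roughly 9 percentage points) arises because the assumed state prices downweight the high-payoff state relative to its physical probability.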
The procedures described above are really alternative ways of attacking the difficult valuation problem we have outlined, but they can only be given content in conjunction with theories on how to compute the risk premia (cases 1 and 2), how to identify the distorted probability distribution (case 3), or how to price future dollars state by state (case 4). For strategies 1 and 2, this can be done using the CAPM, the CCAPM or the APT; strategy 3 is characteristic of the Martingale approach; strategy 4 describes the perspective of Arrow-Debreu pricing.

2.3 Two main perspectives: Equilibrium vs. Arbitrage

There is another, even more fundamental way of classifying alternative valuation theories. All the known valuation theories borrow one of two main methodologies: the equilibrium approach or the arbitrage approach. The traditional equilibrium approach consists of an analysis of the factors determining the supply and demand for the cash flow (asset) in question. The arbitrage approach attempts to value a cash flow on the basis of observations made on the various elements making up that cash flow. Let us illustrate this distinction with an analogy. Suppose you want to price a bicycle. There are two ways to approach the question. If you follow the equilibrium approach, you will want to study the determinants of supply and demand. Who are the producers? How many bicycles are they able to produce? What are the substitutes, including probably the existing stock of old bicycles potentially appearing on the second-hand market? After dealing with supply, turn to demand: Who are the buyers? What are the forecasts on the demand for bicycles? Etc. Finally you will turn to the market structure. Is the market for bicycles competitive? If so, we know how the equilibrium price will emerge as a result of the matching process between demanders and suppliers.
The equilibrium perspective is a sophisticated, complex approach, with a long tradition in economics, one that has also been applied in finance, at least since the fifties. We will follow it in the first part of this book, adopting standard assumptions that simplify, without undue cost, the supply and demand analysis for financial objects: the supply of financial assets at any point in time is assumed to be fixed, and financial markets are viewed as competitive. Our analysis can thus focus on the determinants of the demand for financial assets. This requires that we first spend some time discussing the preferences and attitudes toward risk of investors, those who demand the assets (Chapters 3 and 4), before modeling the investment process, that is, how the demand for financial assets is determined (Chapters 5 and 6). Armed with these tools we will review the three main equilibrium theories: the CAPM in Chapter 7, A-D pricing in Chapter 8 and the CCAPM in Chapter 9.

The other approach to valuing bicycles starts from observing that a bicycle is not (much) more than the sum of its parts. With a little knowledge, which is in almost infinite supply, and some time, which is not (and this suggests that the arbitrage approach holds only as an approximation that may be rather imprecise in circumstances where the time and intellectual power required to "assemble the bicycle" from the necessary spare parts are non-trivial; that is, when the remuneration of the necessary "engineers" matters), it is possible to re-engineer or replicate any bicycle with the right components. It then follows that if you know the price of all the necessary elements (frame, handlebar, wheel, tire, saddle, brake and gearshift) you can determine relatively easily the market value of the bicycle. The arbitrage approach is, in a sense, much more straightforward than the equilibrium approach.
It is also more robust: if the no-arbitrage relation between the price of the bicycle and the price of its parts did not hold, anyone with a little time could become a bicycle manufacturer and make good money. If too many people exploit that idea, however, the prices of parts and the prices of bicycles will start adjusting and be forced into line. This idea is powerful for the objects at hand, financial assets, because if markets are complete in the sense discussed in Section 1.6, then it can easily be shown that all the component prices necessary to value any arbitrary cash flow are available. Furthermore, little time and few resources (relative to the global scale of these product markets) are needed to exploit arbitrage opportunities in financial markets. There is, however, an obvious limitation to the arbitrage approach. Where do we get the prices of the parts if not through an equilibrium approach? That is, the arbitrage approach is much less ambitious and more partial than the equilibrium approach. Even though it may be more practically useful in domains where the prices of the parts are readily available, it does not amount to a general theory of valuation and, in that sense, has to be viewed as a complement to the equilibrium approach. In addition, the equilibrium approach, by forcing us to rationalize investors' demand for financial assets, provides useful lessons for the practice of asset management. The foundations of this inquiry will be put in place in Chapters 3 to 6 - which together make up Part II of the book - while Chapter 14 will extend the treatment of this topic beyond the traditional one-period static portfolio analysis and focus on the specificities of long-run portfolio management. Finally, the arbitrage and equilibrium approaches can be combined.
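Before moving on, the "sum of the parts" logic can be made concrete. With complete markets, a risky cash flow is valued by adding up the prices of its state-contingent components, the prices of future dollars state by state mentioned earlier. A minimal sketch, in which the state prices and payoffs are invented for illustration and do not come from the text:

```python
# Pricing a risky cash flow as the sum of its state-contingent "parts".
# State prices and payoffs below are hypothetical numbers.
state_prices = {"boom": 0.45, "normal": 0.35, "recession": 0.15}  # price today of $1 in each state (assumed)
cash_flow = {"boom": 120.0, "normal": 100.0, "recession": 60.0}   # state-contingent payoff (assumed)

# No-arbitrage value: pay the state price per future dollar, state by state
value = sum(state_prices[s] * cash_flow[s] for s in state_prices)
print(value)  # 0.45*120 + 0.35*100 + 0.15*60 = 98.0
```

Any portfolio replicating this cash flow state by state must trade at the same value, or a riskless profit would be available.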
In particular, one fundamental insight that we will develop in Chapter 10 is that any cash flow can be viewed as a portfolio of, that is, can be replicated with, Arrow-Debreu securities. This makes it very useful to start using the arbitrage approach with A-D securities as the main building blocks for pricing assets or valuing cash flows. Conversely, the same chapter will show that options can be very useful in completing the markets and thus in obtaining a full set of prices for "the parts" that will then be available to price the bicycles. In other words, the Arrow-Debreu equilibrium pricing theory is a good platform for arbitrage valuation. The link between the two approaches is indeed so tight that we will use our acquired knowledge of equilibrium models to understand one of the major arbitrage approaches, the Martingale pricing theory (Chapters 11 and 12). We will then propose an overview of the Arbitrage Pricing Theory (APT) in Chapter 13. Chapters 10 to 13 together make up Part IV of this book. Part V will focus on three extensions. As already mentioned, Chapter 14 deals with long-run asset management. Chapter 15 focuses on some implications of incomplete markets, whose consequences we will illustrate from the twin viewpoints of the equilibrium and arbitrage approaches. We will use it as a pretext to review the Modigliani-Miller theorem and, in particular, to understand why it depends on the hypothesis of complete markets. Finally, in Chapter 16 we will open up, just a little, the Pandora's box of heterogeneous beliefs. Our goal is to understand a number of issues that are largely swept under the rug in standard asset management and pricing theories and, in the process, restate the Efficient Market hypothesis.

Table 2.2: The Roadmap

              Preliminaries                 Computing risk premia   Identifying distorted probabilities   Pricing future dollars state by state
Equilibrium   Utility theory - Ch. 3-4     CAPM - Ch. 7                                                  A-D pricing I - Ch. 8
              Investment demand - Ch. 5-6  CCAPM - Ch. 9
Arbitrage                                   APT - Ch. 13            Martingale measure - Ch. 11 & 12     A-D pricing II - Ch. 10

2.4 This is not all of finance!

2.4.1 Corporate Finance

Intermediate Financial Theory focuses on the valuation of risky cash flows. Pricing a future (risky) dollar is a dominant ingredient in most financial problems. But it is not all of finance! Our capital markets perspective in particular sidesteps many of the issues surrounding how the firm generates and protects the cash flow streams to be priced. It is this concern that is at the core of corporate financial theory, or simply "corporate finance." In a broad sense, corporate finance is concerned with decision making at the firm level whenever it has a financial dimension, has implications for the financial situation of the firm, or is influenced by financial considerations. In particular it is a field concerned, first and foremost, with the investment decision (what projects should be accepted), the financing decision (what mix of securities should be issued and sold to finance the chosen investment projects), the payout decision (how investors in the firm, and in particular the equity investors, should be compensated), and risk management (how corporate resources should be protected against adverse outcomes). Corporate finance also explores issues related to the size and scope of the firm, e.g., mergers and acquisitions and the pricing of conglomerates, the internal organization of the firm, the principles of corporate governance and the forms of remuneration of the various stakeholders.1 All of these decisions, individually and collectively, influence the firm's free cash flow stream and, as such, have asset pricing implications. The decision to increase the proportion of debt in the firm's capital structure, for example, increases the riskiness of its equity cash flow stream, the standard deviation of the equilibrium return on equity, etc.
Of course when we think of the investment decision itself, the solution to the valuation problem is of the essence, and indeed many of the issues typically grouped under the heading of capital budgeting are intimately related to the preoccupations of the present text. We will be silent, however, on most of the other issues listed above, which are better viewed as arising in the context of bilateral (rather than market) relations and, as we will see, in situations where asymmetries of information play a dominant role. The goal of this section is to illustrate the difference in perspectives by reviewing, selectively, the corporate finance literature, particularly as regards the capital structure of the firm, and contrasting it with the capital markets perspective that we adopt throughout this text. In so doing we also attempt to give the flavor of an important research area while detailing many of the topics this text elects not to address.

2.4.2 Capital structure

We focus on the capital structure issue in Chapter 15, where we explore the assumption underlying the famous Modigliani-Miller irrelevance result: in the absence of taxes, subsidies and contracting costs, the value of a firm is independent of its capital structure if the firm's investment policy is fixed and financial markets are complete. Our emphasis will be on how this result fundamentally rests on the complete markets assumption. The corporate finance literature has not ignored the completeness issue but rather has chosen to explore its underlying causes, most specifically information asymmetries between the various agents concerned: managers, shareholders, etc.2

1 The recent scandals in the U.S. (Enron, WorldCom) and in Europe (Parmalat, ABB) place in stark light the responsibilities of boards of directors for ultimate firm oversight, as well as their frequent failure in that role. The large question here is what sort of board structure is consistent with superior long-run firm performance. Executive compensation also remains a large issue: in particular, to what extent are the incentive effects of stock or stock-option compensation influenced by the manager's outside wealth?

2 Tax issues have tended to dominate the corporate finance capital structure debate until recently, and we will review this arena shortly. The relevance of taxes is not a distinguishing feature of the corporate finance perspective alone. Taxes also matter when we think of valuing risky cash flows, although we will have very little to say about them except that all the cash flows we consider are to be thought of as after-tax cash flows.

While we touch on the issue of heterogeneity of information in a market context, we do so only in our last chapter (16), emphasizing there that heterogeneity raises a number of tough modeling difficulties. These difficulties justify the fact that most of capital market theory either is silent on the issue of heterogeneity (in particular, when it adopts the arbitrage approach) or explicitly assumes homogeneous information on the part of capital market participants. In contrast, the bulk of corporate finance builds on asymmetries of information and explores the various problems they raise. These are typically classified as leading to situations of 'moral hazard' or 'adverse selection'. An instance of the former is when managers are tempted to take advantage of their superior information to implement investment plans that may serve their own interests at the expense of those of shareholders or debt holders. An important branch of the literature concerns the design of contracts that take moral hazard into account. The choice of capital structure, in particular, will be seen potentially to assist in their management (see, e.g., Zwiebel (1996)). A typical situation of 'adverse selection' occurs when information asymmetries between firms and investors make firms with 'good' investment projects indistinguishable to outside investors from firms with poor projects. This suggests a tendency for all firms to receive the same financing terms (a so-called "pooling equilibrium", where firms with less favorable prospects may receive better-than-deserved financing arrangements). Firms with good projects must somehow indirectly distinguish themselves in order to receive the more favorable financing terms they merit. For instance, they may want to attach more collateral to their debt securities, an action that firms with poor projects may find too costly to replicate (see, e.g., Stein (1992)). Again, the capital structure decision may sometimes help resolve the 'adverse selection' problem. Below we review the principal capital structure perspectives.

2.4.3 Taxes and capital structure

Understanding the determinants of a firm's capital structure (the proportion of debt and equity securities it has outstanding in value terms) is the 'classical' problem in corporate finance. Its intellectual foundations lie in the seminal work of Modigliani and Miller (1958), who argue for capital structure irrelevance in a world without taxes and with complete markets (a hypothesis that excludes information asymmetries). The corporate finance literature has also emphasized the fact that when one security type receives favored tax treatment (typically debt, via the tax deductibility of interest), then the firm's securities become more valuable in the aggregate if more of that security is issued, since to do so is to reduce the firm's overall tax bill and thus enhance the free cash flow to the security holders. Since the bondholders receive the same interest and principal payments irrespective of the tax status of these payments from the firm's perspective, any tax-based cash flow enhancement is captured by equity holders.
Under a number of further specialized assumptions (including the hypothesis that the firm's debt is risk-free), these considerations lead to the classical relationship

V_L = V_U + τD;

the value of a firm's securities under partial debt financing (V_L, where 'L' denotes leverage in the capital structure) equals its value under all-equity financing (V_U, where 'U' denotes unlevered, an all-equity capital structure) plus the present value of the interest tax subsidies. This latter quantity takes the form of the corporate tax rate (τ) times the value of debt outstanding (D) when debt is assumed to be perpetual (unchanging capital structure). In return terms this value relationship can be transformed into a relationship between levered and unlevered equity returns:

r_L^e = r_U^e + (1 − τ)(D/E)(r_U^e − r_f);

i.e., the return on levered equity, r_L^e, is equal to the return on unlevered equity, r_U^e, plus a risk premium due to the inherently riskier equity cash flow that the presence of fixed payments to debt creates. This premium, as indicated, is related to the tax rate, the firm's debt/equity ratio (D/E), a measure of the degree of leverage, and the difference between the unlevered equity rate and the risk-free rate, r_f. Immediately we observe that capital structure considerations influence not only expected equilibrium equity returns, via

E r_L^e = E r_U^e + (1 − τ)(D/E)(E r_U^e − r_f),

where E denotes the expectations operator, but also the variance of returns, since

σ²(r_L^e) = (1 + (1 − τ)D/E)² σ²(r_U^e) > σ²(r_U^e)

under the mild assumption that r_f is constant in the very short run. These relationships illustrate but one instance of corporate financial considerations affecting the patterns of equilibrium returns as observed in the capital markets.
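These relations are internally consistent and easy to check numerically. The sketch below applies the per-state levered-return formula to a made-up unlevered return distribution (the tax rate, leverage ratio, risk-free rate and returns are illustrative assumptions, not values from the text) and verifies that the implied moments satisfy the mean and variance relations above:

```python
# Sketch verifying the Modigliani-Miller leverage relations stated above.
# All numerical inputs are hypothetical.
import statistics

tau = 0.30        # corporate tax rate (assumed)
d_over_e = 0.50   # debt/equity ratio D/E (assumed)
r_f = 0.04        # risk-free rate (assumed)
r_u = [0.02, 0.08, 0.14, 0.20]  # equally likely unlevered equity returns (assumed)

# State-by-state levered return: r_L = r_U + (1 - tau)(D/E)(r_U - r_f)
r_l = [r + (1 - tau) * d_over_e * (r - r_f) for r in r_u]

# Mean relation: E r_L = E r_U + (1 - tau)(D/E)(E r_U - r_f)
er_u, er_l = statistics.mean(r_u), statistics.mean(r_l)
assert abs(er_l - (er_u + (1 - tau) * d_over_e * (er_u - r_f))) < 1e-12

# Variance relation: var(r_L) = (1 + (1 - tau) D/E)^2 var(r_U) > var(r_U)
var_u = statistics.pvariance(r_u)   # population variance: states equally likely
var_l = statistics.pvariance(r_l)
scale = (1 + (1 - tau) * d_over_e) ** 2
assert abs(var_l - scale * var_u) < 1e-12
assert var_l > var_u  # leverage raises the variance of equity returns
```

The point of the check is that the two moment relations are not separate assumptions: both follow from the single per-state return formula, since r_L is a linear function of r_U with slope 1 + (1 − τ)D/E.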
The principal drawback to this tax-based theory of capital structure is the natural implication that if one security type receives favorable tax treatment (usually debt), then if the equity share price is to be maximized, the firm's capital structure should be composed exclusively of that security type - i.e., all debt - which is not observed. More recent research in corporate finance has sought to avoid these extreme tax-based conclusions by balancing the tax benefits of debt with various costs of debt, including bankruptcy and agency costs. Our discussion broadly follows Harris and Raviv (1991).

2.4.4 Capital structure and agency costs

An important segment of the literature seeks to explain financial decisions by examining the conflicts of interest among claimholders within the firm. While agency conflicts can take a variety of forms, most of the literature has focused on shareholders' incentives to increase investment risk - the asset substitution problem - or to reject positive-NPV projects - the underinvestment problem. Both of these conflicts increase the cost of debt and thus reduce the firm's value-maximizing debt ratio. Another commonly discussed determinant of capital structure arises from manager-stockholder conflicts. Managers and shareholders have different objectives. In particular, managers tend to value investment more than shareholders do. Although there are a number of potentially powerful internal mechanisms to control managers, the control technology normally does not permit the costless resolution of this conflict between managers and investors. Nonetheless, the cash-flow identity implies that constraining financing, hedging and payout policy places indirect restrictions on investment policy. Hence, even though investment policy is not contractible, by restricting the firm in other dimensions it is possible to limit the manager's choice of an investment policy.
For instance, Jensen (1986) argues that debt financing can increase firm value by reducing the free cash flow. This idea is formalized in more recent papers by Stulz (1990) and Zwiebel (1996). Also, by reducing the likelihood of both high and low cash flows, risk management can control not only shareholders' underinvestment incentives but managers' ability to overinvest as well. More recently, the corporate finance literature has put some emphasis on the costs that arise from conflicts of interest between controlling and minority shareholders. In most countries, publicly traded companies are not widely held, but rather have controlling shareholders. Moreover, these controlling shareholders have the power to pursue private benefits at the expense of minority shareholders, within the limits imposed by investor protection. The recent "law and finance" literature following Shleifer and Vishny (1997) and La Porta et al. (1998) argues that the expropriation of minority shareholders by the controlling shareholder is at the core of agency conflicts in most countries. While these conflicts have been widely discussed in qualitative terms, the literature has largely been silent on the magnitude of their effects.

2.4.5 The pecking order theory of investment financing

The seminal reference here is Myers and Majluf (1984), who again base their work on the assumption that investors are generally less well informed (asymmetric information) than insider-managers vis-à-vis the firm's investment opportunities. As a result, new equity issues to finance new investments may be so underpriced (reflecting average project quality) that projects with a positive NPV from a societal perspective may have a negative NPV from the perspective of existing shareholders and thus not be financed.
Myers and Majluf (1984) argue that this underpricing can be avoided if firms finance projects with securities that have more assured payout patterns and thus are less susceptible to undervaluation: internal funds and, to a slightly lesser extent, debt securities, especially risk-free debt. It is thus in the interest of shareholders to finance projects first with retained earnings, then with debt, and lastly with equity. An implication of this qualitative theory is that the announcement of a new equity issue is likely to be accompanied by a fall in the issuing firm's stock price, since it indicates that the firm's prospects are too poor for the preferred financing alternatives to be accessible. The pecking order theory has led to a large literature on the importance of security design. For example, Stein (1992) argues that companies may use convertible bonds to get equity into their capital structures "through the backdoor" in situations where informational asymmetries make conventional equity issues unattractive. In other words, convertible bonds represent an indirect mechanism for implementing equity financing that mitigates the adverse selection costs associated with direct equity sales. This explanation for the use of convertibles emphasizes the role of the call feature - which will allow good firms to convert the bond into common equity - and the costs of financial distress - which will prevent bad firms from mimicking good ones. Thus, the announcement of a convertible bond issue should be greeted with a less negative - and perhaps even positive - stock price response than an equity issue of the same size by the same company.

2.5 Conclusions

We have presented four general approaches and two main perspectives on the valuation of risky cash flows. This discussion was meant to provide an organizing principle and a roadmap for the extended treatment of a large variety of topics on which we are now embarking.
Our brief excursion into corporate finance was intended to suggest some of the agency issues that are part and parcel of a firm's cash flow determination. That we have elected to focus on pricing issues surrounding those cash flow streams does not diminish the importance of the issues surrounding their creation.

References

Harris, M., Raviv, A. (1991), "The Theory of Capital Structure," Journal of Finance, 46, 297-355.
Jensen, M. (1986), "Agency Costs of Free Cash Flow, Corporate Finance and Takeovers," American Economic Review, 76, 323-329.
Jensen, M., Meckling, W. (1976), "Theory of the Firm: Managerial Behavior, Agency Costs, and Capital Structure," Journal of Financial Economics, 3.
La Porta, R., Lopes de Silanes, F., Shleifer, A., Vishny, R. (1998), "Law and Finance," Journal of Political Economy, 106, 1113-1155.
Modigliani, F., Miller, M. (1958), "The Cost of Capital, Corporate Finance, and the Theory of Investment," American Economic Review, 48, 261-297.
Myers, S., Majluf, N. (1984), "Corporate Financing and Investment Decisions when Firms Have Information that Investors Do Not Have," Journal of Financial Economics, 13, 187-221.
Shleifer, A., Vishny, R. (1997), "A Survey of Corporate Governance," Journal of Finance, 52, 737-783.
Stein, J. (1992), "Convertible Bonds as Backdoor Equity Financing," Journal of Financial Economics, 32, 3-23.
Stulz, R. (1990), "Managerial Discretion and Optimal Financial Policies," Journal of Financial Economics, 26, 3-27.
Zwiebel, J. (1996), "Dynamic Capital Structure under Managerial Entrenchment," American Economic Review, 86, 1197-1215.

Part II: The Demand For Financial Assets

Chapter 3: Making Choices in Risky Situations

3.1 Introduction

The first stage of the equilibrium perspective on asset pricing consists in developing an understanding of the determinants of the demand for securities of various risk classes.
Individuals demand securities (in exchange for current purchasing power) in their attempt to redistribute income across time and states of nature. This is a reflection of the consumption-smoothing and risk-reallocation function central to financial markets. Our endeavor requires an understanding of three building blocks:

1. how financial risk is defined and measured;
2. how an investor's attitude toward or tolerance for risk is to be conceptualized and then measured;
3. how investors' risk attitudes interact with the subjective uncertainties associated with the available assets to determine an investor's desired portfolio holdings (demands).

In this and the next chapter we give a detailed overview of points 1 and 2; point 3 is treated in succeeding chapters.

3.2 Choosing Among Risky Prospects: Preliminaries

When we think of the "risk" of an investment, we are typically thinking of uncertainty in the future cash flow stream to which the investment represents title. Depending on the state of nature that may occur in the future, we may receive different payments and, in particular, much lower payments in some states than in others. That is, we model an asset's associated cash flow in any future time period as a random variable. Consider, for example, the investments listed in Table 3.1, each of which pays off next period in either of two equally likely possible states. We index these states by θ = 1, 2, with their respective probabilities labelled π1 and π2.

Table 3.1: Asset Payoffs ($), π1 = π2 = 1/2

               Cost at t = 0   Value at t = 1
                               θ = 1    θ = 2
Investment 1   -1,000          1,050    1,200
Investment 2   -1,000            500    1,600
Investment 3   -1,000          1,050    1,600

First, this comparison serves to introduce the important notion of dominance. Investment 3 clearly dominates both investments 1 and 2 in the sense that it pays as much in all states of nature, and strictly more in at least one state. The state-by-state dominance illustrated here is the strongest possible form of dominance.
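State-by-state dominance is mechanical to check. A small sketch using the payoffs of Table 3.1:

```python
# Checking state-by-state dominance for the payoffs in Table 3.1.
payoffs = {
    "Investment 1": [1050, 1200],
    "Investment 2": [500, 1600],
    "Investment 3": [1050, 1600],
}

def dominates(a, b):
    """True if a pays at least as much as b in every state, strictly more in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

assert dominates(payoffs["Investment 3"], payoffs["Investment 1"])
assert dominates(payoffs["Investment 3"], payoffs["Investment 2"])
# Neither of investments 1 and 2 dominates the other:
assert not dominates(payoffs["Investment 1"], payoffs["Investment 2"])
assert not dominates(payoffs["Investment 2"], payoffs["Investment 1"])
```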
Without any qualification, we will assume that all rational individuals would prefer investment 3 to the other two. Basically this means that we are assuming the typical individual to be non-satiated in consumption: she desires more rather than less of the consumption goods these payoffs allow her to buy. In the case of dominance the choice problem is trivial and, in some sense, the issue of defining risk is irrelevant. The ranking defined by the concept of dominance is, however, very incomplete. If we compare investments 1 and 2, we see that neither dominates the other. Although it performs better in state 2, investment 2 performs much worse in state 1. No ranking is possible on the basis of the dominance criterion; the different prospects must be characterized from a different angle, and the concept of risk necessarily enters. On this score, we would probably all agree that investments 2 and 3 are comparatively riskier than investment 1. Of course, for investment 3 the dominance property means that the only risk is an upside risk. Yet, in line with the preference for smooth consumption discussed in Chapter 1, the large variation in date t = 1 payoffs associated with investment 3 is to be viewed as undesirable in itself. When comparing investments 1 and 2, the qualifier "riskier" undoubtedly applies to the latter: in the worst state its payoff is worse; in the best state it is better. These comparisons can alternatively, and often more conveniently, be represented if we describe investments in terms of their performance on a per-dollar basis. We do this by computing the state-contingent rates of return (ROR), which we will typically associate with the symbol r.
In the case of the above investments, we obtain the results in Table 3.2:

Table 3.2: State-Contingent ROR (r)

               θ = 1    θ = 2
Investment 1     5%      20%
Investment 2   -50%      60%
Investment 3     5%      60%

One sees clearly that all rational individuals should prefer investment 3 to the other two, and that this same dominance cannot be expressed when comparing 1 and 2. The fact that investment 2 is riskier, however, does not mean that all rational risk-averse individuals would necessarily prefer 1. Risk is not the only consideration, and the ranking between the two projects is, in principle, preference dependent. This is more often the case than not; dominance usually provides a very incomplete way of ranking prospects. This is why we have to turn to a description of preferences, the main object of this chapter. The best-known approach at this point consists of summarizing such investment return distributions (that is, the random variables representing returns) by their mean (Er_i) and variance (σ_i²), i = 1, 2, 3. The variance (or its square root, the standard deviation) of the rate of return is then naturally used as the measure of "risk" of the project (or the asset). For the three investments just listed, we have:

Er_1 = 12.5%;  σ_1² = 1/2 (5 − 12.5)² + 1/2 (20 − 12.5)² = (7.5)², or σ_1 = 7.5%
Er_2 = 5%;     σ_2 = 55% (similar calculation)
Er_3 = 32.5%;  σ_3 = 27.5%

If we decided to summarize these return distributions by their means and variances only, investment 1 would clearly appear more attractive than investment 2: it has both a higher mean return and a lower variance. In terms of the mean-variance criterion, investment 1 dominates investment 2; 1 is said to mean-variance dominate 2. Our previous discussion makes it clear that mean-variance dominance is neither as strong nor as general a concept as state-by-state dominance. Investment 3 mean-variance dominates 2 but not 1, although it dominates them both on a state-by-state basis!
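The means and standard deviations just quoted follow directly from the two equally likely state returns. A quick check in Python, weighting each state 1/2 as in Table 3.2:

```python
# Mean and standard deviation of the state-contingent returns in Table 3.2,
# with two equally likely states.
returns = {
    "Investment 1": [5.0, 20.0],
    "Investment 2": [-50.0, 60.0],
    "Investment 3": [5.0, 60.0],
}

def moments(r):
    mean = sum(r) / len(r)                                   # Er, in %
    sd = (sum((x - mean) ** 2 for x in r) / len(r)) ** 0.5   # sigma, in %
    return mean, sd

for name, r in returns.items():
    print(name, moments(r))  # Er and sigma in %, matching the figures in the text

# Investment 1 mean-variance dominates investment 2: higher mean, lower sigma
m1, s1 = moments(returns["Investment 1"])
m2, s2 = moments(returns["Investment 2"])
assert m1 > m2 and s1 < s2
```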
This is surprising and should lead us to be cautious when using any mean-variance return criterion. We will, later on, detail circumstances where it is fully reliable. At this point let us anticipate that it will not generally be so, and that restrictions will have to be imposed to legitimize its use. The notion of mean-variance dominance can be expressed in the form of a criterion for selecting investments of equal magnitude, which plays a prominent role in modern portfolio theory:

1. For investments of the same Er, choose the one with the lowest σ.
2. For investments of the same σ, choose the one with the greatest Er.

In the framework of modern portfolio theory, one could not understand a rational agent choosing investment 2 rather than investment 1. We cannot limit our inquiry to the concept of dominance, however. Mean-variance dominance provides only an incomplete ranking among uncertain prospects, as Table 3.3 illustrates:

Table 3.3: State-Contingent Rates of Return, π1 = π2 = 1/2

               θ = 1    θ = 2
Investment 4     3%       5%
Investment 5     2%       8%

ER_4 = 4%;  σ_4 = 1%
ER_5 = 5%;  σ_5 = 3%

Comparing these two investments, it is not clear which is best; there is no dominance in either state-by-state or mean-variance terms. Investment 5 is expected to pay 1.25 times the expected return of investment 4, but, in terms of standard deviation, it is also three times riskier. The choice between 4 and 5, when restricted to mean-variance characterizations, would require specifying the terms at which the decision maker is willing to substitute expected return for a given risk reduction. In other words, what decrease in expected return is he willing to accept for a 1% decrease in the standard deviation of returns? Or, conversely, does the 1-percentage-point additional expected return associated with investment 5 adequately compensate for the (three times) larger risk? Responses to such questions are preference dependent (i.e., vary from individual to individual).
Suppose, for a particular individual, the terms of the trade-off are well represented by the index E/σ (referred to as the "Sharpe" ratio). Since (E/σ)_4 = 4 while (E/σ)_5 = 5/3, investment 4 is better than investment 5 for that individual. Of course another investor may be less risk averse; that is, he may be willing to accept more extra risk for the same expected return. For example, his preferences may be adequately represented by (E − σ/3), in which case he would rank investment 5 (with an index value of 4) above investment 4 (with a value of 3 2/3).1 All these considerations strongly suggest that we have to adopt a more general viewpoint for comparing potential return distributions. This viewpoint is part of utility theory, to which we turn after describing some of the problems associated with the empirical characterization of return distributions in Box 3.1.

Box 3.1: Computing Means and Variances in Practice

Useful as it may be conceptually, the calculation of distribution moments such as the mean and the standard deviation is difficult to implement in practice. This is because we rarely know what the future states of nature are, let alone their probabilities. We also do not know the returns in each state. A frequently used proxy for a future return distribution is its historical distribution. This amounts to selecting a historical time period and a periodicity, say monthly prices for the past 60 months, and computing the historical returns as follows:

r_{s,j} = return to stock s in month j = (p_{s,j} + d_{s,j})/p_{s,j−1} − 1,

where p_{s,j} is the price of stock s in month j, and d_{s,j} its dividend, if any, that month. We then summarize the past distribution of stock returns by the average historical return and the variance of the historical returns. By doing so we, in effect, assign a probability of 1/60 to each past observation or event.
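A sketch of the Box 3.1 computation, using an invented short price and dividend history rather than the 60 monthly observations a real application would use:

```python
# Historical-return computation described in Box 3.1.
# The price and dividend series below are hypothetical.
prices = [100.0, 102.0, 99.0, 103.0, 105.0]   # monthly prices p_{s,j} (assumed)
dividends = [0.0, 0.5, 0.0, 0.0, 0.5]         # dividend d_{s,j} paid in month j (assumed)

# r_{s,j} = (p_{s,j} + d_{s,j}) / p_{s,j-1} - 1
returns = [(prices[j] + dividends[j]) / prices[j - 1] - 1
           for j in range(1, len(prices))]

# Each past observation is weighted 1/n, here 1/4 (1/60 in the Box's example)
n = len(returns)
mean = sum(returns) / n
sd = (sum((r - mean) ** 2 for r in returns) / n) ** 0.5
print(mean, sd)
```

The equal 1/n weighting is exactly the step that treats the historical distribution as a proxy for the unknown future one, which is legitimate only under the stationarity caveat discussed next.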
In principle this is an acceptable way to estimate a return distribution for the future if we think the "mechanism" generating these returns is "stationary": that the future will in some sense closely resemble the past. In practice, this hypothesis is rarely fully verified and, at a minimum, it requires careful checking. Also necessary for such a straightforward, although customary, application is that the return realizations are independent of each other, so that today's realization does not reveal anything materially new about the probabilities of tomorrow's returns (formally, that the conditional and unconditional distributions are identical).2

1 Observe that the Sharpe ratio criterion is not immune to the criticism discussed above. With the Sharpe ratio criterion, investment 3 (E/σ = 1.182) is inferior to investment 1 (E/σ = 1.667). Yet we know that 3 dominates 1, since it pays a higher return in every state. This problem is pervasive with the mean-variance investment criterion: for any mean-variance choice criterion, whatever the terms of the trade-off between mean and variance or standard deviation, one can produce a paradox such as the one illustrated above. This confirms that the criterion is not generally applicable without additional restrictions. The name Sharpe ratio refers to Nobel Prize winner William Sharpe, who first proposed the ratio for this sort of comparison.

3.3 A Prerequisite: Choice Theory Under Certainty

A good deal of financial economics is concerned with how people make choices. The objective is to understand the systematic part of individual behavior and to be able to predict (at least in a loose way) how an individual will react to a given situation. Economic theory describes individual behavior as the result of a process of optimization under constraints, the objective to be reached being determined by individual preferences, and the constraints being a function of the person's income or wealth level and of market prices.
This approach, which defines the homo economicus and the notion of economic rationality, is justified by the fact that individuals’ behavior is predictable only to the extent that it is systematic, which must mean that there is an attempt at achieving a set objective. It is not to be taken literally or normatively.2

To develop this sense of rationality systematically, we begin by summarizing the objectives of investors in the most basic way: we postulate the existence of a preference relation, represented by the symbol ≽, describing investors’ ability to compare various bundles of goods, services, and money. For two bundles a and b, the expression a ≽ b is to be read as follows: For the investor in question, bundle a is strictly preferred to bundle b, or he is indifferent between them. Pure indifference is denoted by a ∼ b, strict preference by a ≻ b. The notion of economic rationality can then be summarized by the following assumptions:

A.1 Every investor possesses such a preference relation and it is complete, meaning that he is able to decide whether he prefers a to b, b to a, or both, in which case he is indifferent with respect to the two bundles. That is, for any two bundles a and b, either a ≽ b or b ≽ a or both. If both hold, we say that the investor is indifferent with respect to the bundles and write a ∼ b.

A.2 This preference relation satisfies the fundamental property of transitivity: For any bundles a, b, and c, if a ≽ b and b ≽ c, then a ≽ c.

A further requirement is also necessary for technical reasons:

A.3 The preference relation is continuous in the following sense: Let {xn} and {yn} be two sequences of consumption bundles such that xn → x and yn → y.3 If xn ≽ yn for all n, then the same relationship is preserved in the limit: x ≽ y.

2 By this we mean that economic science does not prescribe that individuals maximize, optimize, or simply behave as if they were doing so. It just finds it productive to summarize the systematic behavior of economic agents with such tools.
3 We use the standard sense of (normed) convergence on RN.

A key result can now be expressed by the following proposition.

Theorem 3.1: Assumptions A.1 through A.3 are sufficient to guarantee the existence of a continuous, time-invariant, real-valued utility function4 u, such that for any two objects of choice (consumption bundles of goods and services; amounts of money, etc.) a and b, a ≽ b if and only if u(a) ≥ u(b).

Proof: See, for example, Mas-Colell et al. (1995), Proposition 3.C.1. □

This result asserts that the assumption that decision makers are endowed with a utility function (which they are assumed to maximize) is, in reality, no different than assuming their preferences among objects of choice define a relation possessing the (weak) properties summarized in A.1 through A.3. Notice that Theorem 3.1 implies that if u( ) is a valid representation of an individual’s preferences, any increasing transformation of u( ) will do as well since such a transformation by definition will preserve the ordering induced by u( ). Notice also that the notion of a consumption bundle is, formally, very general. Different elements of a bundle may represent the consumption of the same good or service in different time periods. One element might represent a vacation trip in the Bahamas this year; another may represent exactly the same vacation next year. We can further expand our notion of different goods to include the same good consumed in mutually exclusive states of the world. Our preference for hot soup, for example, may be very different if the day turns out to be warm rather than cold. These thoughts suggest Theorem 3.1 is really quite general, and can, formally at least, be extended to accommodate uncertainty. Under uncertainty, however, ranking bundles of goods (or vectors of monetary payoffs, see below) involves more than pure elements of taste or preferences.
In the hot soup example, it is natural to suppose that our preferences for hot soup are affected by the probability we attribute to the day being hot or cold. Disentangling pure preferences from probability assessments is the subject to which we now turn.

3.4 Choice Theory Under Uncertainty: An Introduction

Under certainty, the choice is among consumption baskets with known characteristics. Under uncertainty, however, our emphasis changes. The objects of choice are typically no longer consumption bundles but vectors of state-contingent money payoffs (we’ll reintroduce consumption in Chapter 5). Such vectors are formally what we mean by an asset that we may purchase or an investment. When we purchase a share of a stock, for example, we know that its sale price in one year will differ depending on what events transpire within the firm and in the world economy. Under financial uncertainty, therefore, the choice is among alternative investments leading to different possible income levels and, hence, ultimately different consumption possibilities. As before, we observe that people do make investment choices, and if we are to make sense of these choices, there must be a stable underlying order of preference defined over different alternative investments. The spirit of Theorem 3.1 will still apply. With appropriate restrictions, these preferences can be represented by a utility index defined on investment possibilities, but obviously something deeper is at work. It is natural to assume that individuals have no intrinsic taste for the assets themselves (IBM stock as opposed to Royal Dutch stock, for example); rather, they are interested in what payoffs these assets will yield and with what likelihood (see Box 3.2, however).

4 In other words, u: Rn → R+.
Box 3.2 Investing Close to Home

Although the assumption that investors only care for the final payoff of their investment without any trace of romanticism is standard in financial economics, there is some evidence to the contrary and, in particular, for the assertion that many investors, at the margin at least, prefer to purchase the claims of firms whose products or services are familiar to them. In a recent paper, Huberman (2001) examines the stock ownership records of the seven regional Bell operating companies (RBOCs). He discovered that, with the exception of residents of Montana, Americans are more likely to invest in their local regional Bell operating company than in any other. When they do, their holdings average $14,400. For those who venture farther from home and hold stocks of the RBOC of a region other than their own, the average holding is only $8,246. Considering that every local RBOC cannot be a better investment choice than all of the other six, Huberman interprets his findings as suggesting investors’ psychological need to feel comfortable with where they put their money. □

One may further hypothesize that investor preferences are indeed very simple after uncertainty is resolved: They prefer a higher payoff to a lower one or, equivalently, to earn a higher return rather than a lower one. Of course they do not know ex ante (that is, before the state of nature is revealed) which asset will yield the higher payoff. They have to choose among prospects, or probability distributions representing these payoffs. And, as we saw in Section 3.2, typically, no one investment prospect will strictly dominate the others. Investors will be able to imagine different possible scenarios, some of which will result in a higher return for one asset, with other scenarios favoring other assets. For instance, let us go back to our favorite situation where there are only two states of nature; in other words, two conceivable scenarios and two assets, as seen in Table 3.4.
Table 3.4: Forecasted Price per Share in One Period

              State 1   State 2
IBM            $100      $150
Royal Dutch    $90       $160

Current price of both assets is $100.

There are two key ingredients in the choice between these two alternatives. The first is the probability of the two states. All other things being the same, the more likely is state 1, the more attractive IBM stock will appear to prospective investors. The second is the ex post (once the state of nature is known) level of utility provided by the investment. In Table 3.4 above, IBM yields $100 in state 1 and is thus preferred to Royal Dutch, which yields $90 if this scenario is realized; Royal Dutch, however, provides $160 rather than $150 in state 2. Obviously, with unchanged state probabilities, things would look different if the difference in payoffs were increased in one state as in Table 3.5.

Table 3.5: Forecasted Price per Share in One Period

              State 1   State 2
IBM            $100      $150
Royal Dutch    $90       $200

Current price of both assets is $100.

Here even if state 1 is slightly more likely, the superiority of Royal Dutch in state 2 makes it look more attractive. A more refined perspective is introduced if we go back to our first scenario but now introduce a third contender, Sony, with payoffs of $90 and $150, as seen in Table 3.6.

Table 3.6: Forecasted Price per Share in One Period

              State 1   State 2
IBM            $100      $150
Royal Dutch    $90       $160
Sony           $90       $150

Current price of all assets is $100.

Sony is dominated by both IBM and Royal Dutch. But the choice between the latter two can now be described in terms of an improvement of $10 over the Sony payoff, either in state 1 or in state 2. Which is better? The relevant feature is that IBM adds $10 when the payoff is low ($90) while Royal Dutch adds the same amount when the payoff is high ($150). Most people would think IBM more desirable, and with equal state probabilities, would prefer IBM.
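The comparison in Table 3.6 can be made concrete with a small computation. Equally likely states and the concave utility-of-money function U(x) = √x are assumed purely for illustration; neither is specified by the text at this point:

```python
from math import sqrt

# State-contingent payoffs from Table 3.6 (two states, assumed equally likely)
payoffs = {
    "IBM":         (100.0, 150.0),
    "Royal Dutch": ( 90.0, 160.0),
    "Sony":        ( 90.0, 150.0),
}
probs = (0.5, 0.5)

def expected(values):
    """Expected payoff under the assumed state probabilities."""
    return sum(p * v for p, v in zip(probs, values))

def expected_utility(values, u=sqrt):
    """Probability-weighted utility, with an illustrative concave u."""
    return sum(p * u(v) for p, v in zip(probs, values))

for name, pay in payoffs.items():
    print(f"{name:12s} E[payoff] = {expected(pay):6.1f}  EU = {expected_utility(pay):.4f}")
```

Under these assumptions IBM and Royal Dutch have identical expected payoffs (125), yet the concave utility ranks IBM first, because IBM adds its $10 in the low-payoff state; Sony trails both, as dominance requires.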
Once again this is an illustration of the preference for smooth consumption (smoother income allows for smoother consumption).5 In the present context one may equivalently speak of risk aversion or of the well-known microeconomic assumption of decreasing marginal utility (the incremental utility when adding ever more consumption or income is smaller and smaller).

The expected utility theorem provides a set of hypotheses under which an investor’s preference ranking over investments with uncertain money payoffs may be represented by a utility index combining, in the most elementary way (i.e., linearly), the two ingredients just discussed: the preference ordering on the ex post payoffs and the respective probabilities of these payoffs.

We first illustrate this notion in the context of the two assets considered earlier. Let the respective probability distributions on the price per share of IBM and Royal Dutch (RDP) be described by p̃IBM = pIBM(θi) and p̃RDP = pRDP(θi), together with the probability πi that the state of nature θi will be realized. In this case the expected utility theorem provides sufficient conditions on an agent’s preferences over uncertain asset payoffs, denoted ≽, such that p̃IBM ≻ p̃RDP if and only if there exists a real-valued function U for which

EU(p̃IBM) = π1 U(pIBM(θ1)) + π2 U(pIBM(θ2))
          > π1 U(pRDP(θ1)) + π2 U(pRDP(θ2)) = EU(p̃RDP).

More generally, the utility of any asset A with payoffs pA(θ1), pA(θ2), ..., pA(θN) in the N possible states of nature with probabilities π1, π2, ..., πN can be represented by

U(A) = EU(p̃A) = Σ_{i=1}^{N} πi U(pA(θi)),

in other words, by the weighted mean of ex post utilities with the state probabilities as weights. U(A) is a real number. Its precise numerical value, however, has no more meaning than if you are told that the temperature is 40 degrees when you do not know if the scale being used is Celsius or Fahrenheit. It is useful, however, for comparison purposes.
By analogy, if it is 40˚ today, but it will be 45˚ tomorrow, you at least know it will be warmer tomorrow than it is today. Similarly, the expected utility number is useful because it permits attaching a number to a probability distribution, and this number is, under appropriate hypotheses, a good representation of the relative ranking of a particular member of a family of probability distributions (assets under consideration).

5 Of course, for the sake of our reasoning, one must assume that nothing else important is going on simultaneously in the background, and that other things, such as income from other sources, if any, and the prices of the consumption goods to be purchased with the assets’ payoffs, are not tied to what the payoffs actually are.

3.5 The Expected Utility Theorem

Let us discuss this theorem in the simple context where objects of choice take the form of simple lotteries. The generic lottery is denoted (x, y, π); it offers payoff (consequence) x with probability π and payoff (consequence) y with probability 1 − π. This notion of a lottery is actually very general and encompasses a huge variety of possible payoff structures. For example, x and y may represent specific monetary payoffs as in Figure 3.1.a, or x may be a payment while y is a lottery as in Figure 3.1.b, or even x and y may both be lotteries as in Figure 3.1.c. Extending these possibilities, some or all of the xi’s and yi’s may be lotteries, etc. We also extend our choice domain to include individual payments, lotteries where one of the possible monetary payoffs is certain; for instance, (x, y, π) = x if (and only if) π = 1 (see axiom C.1). Moreover, the theorem holds as well for assets paying a continuum of possible payoffs, but our restriction makes the necessary assumptions and justifying arguments easily accessible. Our objective is conceptual transparency rather than absolute generality. All the results extend to much more general settings.
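Compound lotteries of the kind just described reduce to simple ones by accumulating probability mass on the final payoffs. The representation and helper function below are a minimal illustrative sketch, not the text's formal apparatus:

```python
def lottery(x, y, pi):
    """(x, y, pi): x with probability pi, y with probability 1 - pi.
    x and y may be payoffs, or lotteries already reduced to dicts
    mapping payoff -> probability."""
    dist = {}
    for branch, p in ((x, pi), (y, 1 - pi)):
        if isinstance(branch, dict):            # nested lottery: flatten it
            for payoff, q in branch.items():
                dist[payoff] = dist.get(payoff, 0.0) + p * q
        else:                                   # certain payment
            dist[branch] = dist.get(branch, 0.0) + p
    return dist

z = lottery(100, 0, 0.25)          # z pays 100 with probability 0.25
compound = lottery(100, z, 0.40)   # a lottery whose second branch is z
print(compound)
```

Here the compound lottery pays 100 with net probability 0.40 + 0.60 × 0.25 = 0.55: only the cumulative probability of each final outcome matters, a point the axioms of the next passage make precise.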
Insert Figure 3.1 about here

Under these representations, we will adopt the following axioms and conventions:

C.1 a. (x, y, 1) = x
    b. (x, y, π) = (y, x, 1 − π)
    c. (x, z, π) = (x, y, π + (1 − π)τ) if z = (x, y, τ)

C.1c informs us that agents are concerned with the net cumulative probability of each outcome. Indirectly, it further accommodates lotteries with multiple outcomes; see Figure 3.2 for an example where p = (x, y, π′) and q = (z, w, π′′), with π = π1 + π2 and π′ = π1/(π1 + π2), etc.

Insert Figure 3.2 about here

C.2 There exists a preference relation ≽, defined on lotteries, which is complete and transitive.

C.3 The preference relation is continuous in the sense of A.3 in the earlier section.

By C.2 and C.3 alone we know (Theorem 3.1) that there exists a utility function, which we will denote by U( ), defined both on lotteries and on specific payments since, by assumption C.1a, a payment may be viewed as a (degenerate) lottery. For any payment x, we have

U(x) = U((x, y, 1)).

Our remaining assumptions are thus necessary only to guarantee that this function assumes the expected utility form.

C.4 Independence of irrelevant alternatives. Let (x, y, π) and (x, z, π) be any two lotteries; then, y ≽ z if and only if (x, y, π) ≽ (x, z, π).

C.5 For simplicity, we also assume that there exists a best (i.e., most preferred) lottery, b, as well as a worst, least desirable, lottery w.

In our argument to follow (which is constructive, i.e., we explicitly exhibit the expected utility function), it is convenient to use relationships that follow directly from these latter two assumptions. In particular, we’ll use C.6 and C.7:

C.6 Let x, k, z be consequences or payoffs for which x > k > z. Then there exists a π such that (x, z, π) ∼ k.

C.7 Let x ≻ y. Then (x, y, π1) ≻ (x, y, π2) if and only if π1 > π2. This follows directly from C.4.
Theorem 3.2: If axioms C.1 to C.7 are satisfied, then there exists a utility function U defined on the lottery space so that:

U((x, y, π)) = πU(x) + (1 − π)U(y)

Proof: We outline the proof in a number of steps:

1. Without loss of generality, we may normalize U( ) so that U(b) = 1, U(w) = 0.

2. For all other lotteries z, define U(z) = πz, where πz satisfies (b, w, πz) ∼ z. Constructed in this way, U(z) is well defined since,
a. by C.6, U(z) = πz exists, and
b. by C.7, U(z) is unique. To see this latter implication, assume, to the contrary, that U(z) = πz and also U(z) = π′z, where πz > π′z. By C.7, z ∼ (b, w, πz) ≻ (b, w, π′z) ∼ z; a contradiction.

3. It follows also from C.7 that if m ≻ n, U(m) = πm > πn = U(n). Thus, U( ) has the property of a utility function.

4. Lastly, we want to show that U( ) has the required property. Let x, y be monetary payments, π a probability. By C.1a, U(x), U(y) are well-defined real numbers. By C.6, (x, y, π) ∼ ((b, w, πx), (b, w, πy), π) ∼ (b, w, ππx + (1 − π)πy), by C.1c. Thus, by definition of U( ),

U((x, y, π)) = ππx + (1 − π)πy = πU(x) + (1 − π)U(y).

Although we have chosen x, y as monetary payments, the same conclusion holds if they are lotteries. □

Before going on to a more careful examination of the assumptions underlying the expected utility theorem, a number of clarifying thoughts are in order. First, the overall von Neumann–Morgenstern (VNM) utility function U( ), defined over lotteries, is so named after the originators of the theory, the justly celebrated mathematicians John von Neumann and Oskar Morgenstern. In the construction of a VNM utility function, it is customary first to specify its restriction to certainty monetary payments, the so-called utility of money function or simply the utility function. Note that the VNM utility function and its associated utility of money function are not the same.
The former is defined over uncertain asset payoff structures while the latter is defined over individual monetary payments. Given the objective specification of probabilities (thus far assumed), it is the utility function that uniquely characterizes an investor. As we will see shortly, different additional assumptions on U( ) will identify an investor’s tolerance for risk. We do, however, impose the maintained requirement that U( ) be increasing for all candidate utility functions (more money is preferred to less).

Second, note also that the expected utility theorem confirms that investors are concerned only with an asset’s final payoffs and the cumulative probabilities of achieving them. For expected utility investors the structure of uncertainty resolution is irrelevant (Axiom C.1a).6

Third, although the introduction to this chapter concentrates on comparing rates of return distributions, our expected utility theorem in fact gives us a tool for comparing different asset payoff distributions. Without further analysis, it does not make sense to think of the utility function as being defined over a rate of return. This is true for a number of reasons. First, returns are expressed on a per unit (per dollar, Swiss franc (SF), etc.) basis, and do not identify the magnitude of the initial investment to which these rates are to be applied. We thus have no way to assess the implications of a return distribution for an investor’s wealth position. It could, in principle, be anything. Second, the notion of a rate of return implicitly suggests a time interval: The payout is received after the asset is purchased. So far we have only considered the atemporal evaluation of uncertain investment payoffs. In Chapter 4, we generalize the VNM representation to preferences defined over rates of return.

6 See Section 3.7 for a generalization on this score.
Finally, as in the case of a general order of preferences over bundles of commodities, the VNM representation is preserved under a certain class of linear transformations. If U(·) is a von Neumann–Morgenstern utility function, then V(·) = aU(·) + b, where a > 0, is also such a function. Let (x, y, π) be some uncertain payoff and let U( ) be the utility of money function associated with U. Then

V((x, y, π)) = aU((x, y, π)) + b
             = a[πU(x) + (1 − π)U(y)] + b
             = π[aU(x) + b] + (1 − π)[aU(y) + b]
             ≡ πV(x) + (1 − π)V(y).

Every linear transformation of an expected utility function is thus also an expected utility function. The utility of money function associated with V is [aU( ) + b]; V( ) represents the same preference ordering over uncertain payoffs as U( ). On the other hand, a nonlinear transformation doesn’t always respect the preference ordering. It is in that sense that utility is said to be cardinal (see Exercise 3.1).

3.6 How Restrictive Is Expected Utility Theory? The Allais Paradox

Although apparently innocuous, the above set of axioms has been hotly contested as representative of rationality. In particular, it is not difficult to find situations in which investor preferences violate the independence axiom. Consider the following four possible asset payoffs (lotteries):

L1 = (10,000, 0, 0.1)
L2 = (15,000, 0, 0.09)
L3 = (10,000, 0, 1)
L4 = (15,000, 0, 0.9)

When investors are asked to rank these payoffs, the following ranking is frequently observed:

L2 ≻ L1 (presumably because L2’s positive payoff in the favorable state is much greater than L1’s while the likelihood of receiving it is only slightly smaller), and

L3 ≻ L4 (here it appears that the certain prospect of receiving 10,000 is worth more than the potential of an additional 5,000 at the risk of receiving nothing).
By the structure of compound lotteries, however, it is easy to see that:

L1 = (L3, L0, 0.1)
L2 = (L4, L0, 0.1), where L0 = (0, 0, 1)

By the independence axiom, the ranking between L1 and L2 on the one hand, and L3 and L4 on the other, should thus be identical! This is the Allais Paradox.7 There are a number of possible reactions to it.

1. Yes, my choices were inconsistent; let me think again and revise them.

2. No, I’ll stick to my choices. The following kinds of things are missing from the theory of choice expressed solely in terms of asset payoffs:
- the pleasure of gambling, and/or
- the notion of regret.

The idea of regret is especially relevant to the Allais paradox, and its application in the prior example would go something like this. L3 is preferred to L4 because of the regret involved in receiving nothing if L4 were chosen and the bad state ensued. We would, at that point, regret not having chosen L3, the certain payment. The expected regret is high because of the nontrivial probability (.10) of receiving nothing under L4. On the other hand, the expected regret of choosing L2 over L1 is much smaller (the probability of the bad state is only .01 greater under L2 and in either case the probability of success is small), and insufficient to offset the greater expected payoff. Thus L2 is preferred to L1.

Box 3.3 On the Rationality of Collective Decision Making

Although the discussion in the text pertains to the rationality of individual choices, it is a fact that many important decisions are the result of collective decision making. The limitations to the rationality of such a process are important and, in fact, better understood than those arising at the individual level. It is easy to imagine situations in which transitivity is violated once choices result from some sort of aggregation over more basic preferences. Consider three portfolio managers who decide which stocks to add to the portfolios they manage by majority voting.
The stocks currently under consideration are General Electric (GE), Daimler-Chrysler (DC), and Sony (S). Based on his fundamental research and assumptions, each manager has rational (i.e., transitive) preferences over the three possibilities:

Manager 1: GE ≻1 DC ≻1 S
Manager 2: S ≻2 GE ≻2 DC
Manager 3: DC ≻3 S ≻3 GE

If they were to vote all at once, they know each stock would receive one vote (each stock has its advocate). So they decide to vote on pair-wise choices: (GE vs. DC), (DC vs. S), and (S vs. GE). The results of this voting (GE dominates DC, DC dominates S, and S dominates GE) suggest an intransitivity in the aggregate ordering. Although this example illustrates an intransitivity, it is an intransitivity that arises from the operation of a collective choice mechanism (voting) rather than being present in the individual orders of preference of the participating agents. There is a large literature on this subject that is closely identified with Arrow’s “Impossibility Theorem”; see Arrow (1963) for a more exhaustive discussion. □

7 Named after the Nobel prize winner Maurice Allais, who was the first to uncover the phenomenon.

The Allais paradox is but the first of many phenomena that appear to be inconsistent with standard preference theory. Another prominent example is the general pervasiveness of preference reversals, events that may approximately be described as follows. Individuals participating in controlled experiments were asked to choose between two lotteries, (4, 0, .9) and (40, 0, .1). More than 70 percent typically chose (4, 0, .9). When asked at what price they would be willing to sell the lotteries if they were to own them, however, a similar percentage demanded the higher price for (40, 0, .1). At first appearance, these choices would seem to violate transitivity. Let x, y be, respectively, the sale prices of (4, 0, .9) and (40, 0, .1). Then this phenomenon implies x ∼ (4, 0, .9) ≻ (40, 0, .1) ∼ y, yet y > x.
Alternatively, it may reflect a violation of the assumed principle of procedure invariance, which is the idea that investors’ preference for different objects should be indifferent to the manner by which their preference is elicited. Surprisingly, more narrowly focused experiments, which were designed to force a subject with expected utility preferences to behave consistently, gave rise to the same reversals. The preference reversal phenomenon could thus, in principle, be due either to preference intransitivity, or to a violation of the independence axiom, or of procedure invariance. Various researchers, through a series of carefully constructed experiments, have attempted to assign the blame for preference reversals and lay the responsibility largely at the feet of procedure invariance violations. But this is a particularly alarming conclusion, as Thaler (1992) notes. It suggests that “the context and procedures involved in making choices or judgements influence the preferences that are implied by the elicited responses. In practical terms this implies that (economic) behavior is likely to vary across situations which economists (would otherwise) consider identical.” This is tantamount to the assertion that the notion of a preference ordering is not well defined. While investors may be able to express a consistent (and thus mathematically representable) preference ordering across television sets with different features (e.g., size of the screen, quality of the sound, etc.), this may not be possible with lotteries or consumption baskets containing widely diverse goods.

Grether and Plott (1979) summarize this conflict in the starkest possible terms: “Taken at face value, the data demonstrating preference reversals are simply inconsistent with preference theory and have broad implications about research priorities within economics. The inconsistency is deeper than the mere lack of transitivity or even stochastic transitivity.
It suggests that no optimization principles of any sort lie behind the simplest of human choices and that the uniformities in human choice behavior which lie behind market behavior result from principles which are of a completely different sort from those generally accepted.”

At this point it is useful to remember, however, that the goal of economics and finance is not to describe individual, but rather market, behavior. There is a real possibility that occurrences of individual irrationality essentially “wash out” when aggregated at the market level. On this score, the proof of the pudding is in the eating and we have little alternative but to see the extent to which the basic theory of choice we are using is able to illuminate financial phenomena of interest. All the while, the discussion above should make us alert to the possibility that unusual phenomena might be the outcome of deviations from the generally accepted preference theory articulated above. While there is, to date, no preference ordering that accommodates preference reversals – and it is not clear there will ever be one – more general constructs than expected utility have been formulated to admit other, seemingly contradictory, phenomena.

3.7 Generalizing the VNM Expected Utility Representation

Objections to the assumptions underlying the VNM expected utility representation have stimulated the development of a number of alternatives, which we will somewhat crudely aggregate under the title non-expected utility theory. Elements of this theory differ with regard to which fundamental postulate of expected utility is relaxed. We consider four and refer the reader to Machina (1987) for a more systematic survey.

3.7.1 Preference for the Timing of Uncertainty Resolution

To grasp the idea here we must go beyond our current one-period setting. Under the VNM expected utility representation, investors are assumed to be concerned only with actual payoffs and the cumulative probabilities of attaining them.
In particular, they are assumed to be indifferent to the timing of uncertainty resolution. To get a better idea of what this means, consider the two investment payoff trees depicted in Figure 3.3. These investments are to be evaluated from the viewpoint of date 0 (today).

Insert Figure 3.3 about here

Under the expected utility postulates, these two payoff structures would be valued (in utility terms) identically as

EU(P̃) = U(100) + [πU(150) + (1 − π)U(25)].

This means that a VNM investor would not care if the uncertainty were resolved in period 0 (immediately) or one period later. Yet, people are, in fact, very different in this regard. Some want to know the outcome of an uncertain event as soon as possible; others prefer to postpone it as long as possible.

Kreps and Porteus (1978) were the first to develop a theory that allowed for these distinctions. They showed that if investor preferences over uncertain sequential payoffs were of the form

U0(P1, P̃2(θ)) = W(P1, E(U1(P1, P̃2(θ)))),

then investors would prefer early (late) resolution of uncertainty according to whether W(P1, .) is convex (concave) (loosely, whether W22 > 0 or W22 < 0). In the above representation Pi is the payoff in period i = 1, 2. If W(P1, .) were concave, for example, the expected utility of investment 1 would be lower than investment 2. The idea can be easily illustrated in the context of the example above. We assume functional forms similar to those used in an illustration of Kreps and Porteus (1978); in particular, assume W(P1, EU) = EU^1.5 and U1(P1, P2(θ)) = (P1 + P2(θ))^(1/2). Let π = .5, and note that the overall composite function U0( ) is concave in all of its arguments.
In computing utilities at the decision nodes [0], [1a], [1b], and [1c] (the latter decisions are trivial ones), we must be especially scrupulous to observe exactly the dates at which the uncertainty is resolved under the two alternatives:

[1a]: EU1(P1, P̃2(θ)) = (100 + 150)^(1/2) = 15.811
[1b]: EU1(P1, P̃2(θ)) = (100 + 25)^(1/2) = 11.18
[1c]: EU1(P1, P̃2(θ)) = .5(100 + 150)^(1/2) + .5(100 + 25)^(1/2) = 13.4955

At t = 0, the expected utility on the upper branch is

EU0^{1a,1b}(P1, P̃2(θ)) = EW^{1a,1b}(P1, P̃2(θ)) = .5W(100, 15.811) + .5W(100, 11.18) = .5(15.811)^1.5 + .5(11.18)^1.5 = 50.13,

while on the lower branch

EU0^{1c}(P1, P̃2(θ)) = W(100, 13.4955) = (13.4955)^1.5 = 49.57.

This investor clearly prefers early resolution of uncertainty, which is consistent with the convexity of the W( ) function. Note that the result of the example is simply an application of Jensen’s inequality.8 If W( ) were concave, the ordering would be reversed. There have been numerous specializations of this idea, some of which we consider in Chapter 4 (see Weil (1990) and Epstein and Zin (1989)). At the moment it is sufficient to point out that such representations are not consistent with the VNM axioms.

8 Let a = (100 + 150)^(1/2), b = (100 + 25)^(1/2), and g(x) = x^1.5 (convex). Then EU0^{1a,1b}(P1, P̃2(θ)) = Eg(x) > g(Ex) = EU0^{1c}(P1, P̃2(θ)), where x = a with prob = .5 and x = b with prob = .5.

3.7.2 Preferences That Guarantee Time-Consistent Planning

Our setting is once again intertemporal, where uncertainty is resolved in each future time period. Suppose that at each date t ∈ {0, 1, 2, .., T}, an agent has a preference ordering ≽t defined over all future (state-contingent) consumption bundles, where ≽t will typically depend on her past consumption history.
The notion of time-consistent planning is this: if, at each date, the agent could plan against any future contingency, what is the required relationship among the family of orderings {≽t : t = 0, 1, 2, ..., T} that will cause plans which were optimal with respect to preferences ≽0 to remain optimal in all future time periods, given all that has happened in the interim (i.e., intermediate consumption experiences and the specific way uncertainty has evolved)? In particular, what utility function representation will guarantee this property?

When considering decision problems over time, such as portfolio investments over a multiperiod horizon, time consistency seems a natural property to require. In its absence, one would observe portfolio rebalancing not motivated by any outside event or information flow, but resulting simply from the inconsistency of the investor's date t preference ordering with the preferences under which her original portfolio positions were taken. Asset trades would then be motivated entirely by endogenous and unobservable preference issues and would thus be essentially unexplainable.

To see what it takes for a utility function to be time consistent, consider two periods, where at date 1 any one of s ∈ S possible states of nature may be realized. Let c0 denote a possible consumption level at date 0, and let c1(s) denote a possible consumption level in period 1 if state s occurs. Johnsen and Donaldson (1985) demonstrate that if initial preferences ≽0, with utility representation U( ), are to guarantee time-consistent planning, there must exist continuous and monotone increasing functions f( ) and {Us(·,·) : s ∈ S} such that

U(c0, c1(s) : s ∈ S) = f(c0, Us(c0, c1(s)) : s ∈ S),   (3.1)

where Us(·,·) is the state-s contingent utility function.
This result means the utility function must be of a form such that the utility representations in future states can be recursively nested as individual arguments of the overall utility function. The condition is satisfied by the VNM expected utility form,

U(c0, c1(s) : s ∈ S) = U0(c0) + Σs πs U(c1(s)),

which is clearly of a form satisfying Equation (3.1). The VNM utility representation is thus time consistent, but the latter property can also accommodate more general utility functions. To see this, consider the following special case of Equation (3.1), where there are three possible states at t = 1:

U(c0, c1(1), c1(2), c1(3)) = c0^(1/3) + π1 U1(c0, c1(1)) + [π2 U2(c0, c1(2))]^(1/2) + [π3 U3(c0, c1(3))]^(1/2),   (3.2)

where

U1(c0, c1(1)) = log(c0 + c1(1)),
U2(c0, c1(2)) = c0 (c1(2))^(1/2), and
U3(c0, c1(3)) = c0 c1(3).

In this example, preferences are not linear in the probabilities and thus not of the VNM expected utility type. Nevertheless, Equation (3.2) is of the form of Equation (3.1). It also has the feature that preferences in any future state are independent of irrelevant alternatives, where the irrelevant alternatives are those consumption plans for states that do not occur. As such, agents with these preferences will never experience regret, and the Allais paradox will not be operational.

Consistency of choices seems to make sense and turns out to be important for much financial modeling, but is it borne out empirically? Unfortunately, the answer is: frequently not. A simple illustration is a typical pure time-preference experiment from the psychology literature (uncertainty in future states is not even needed). Participants are asked to choose among the following monetary prizes:^9

Question 1: Would you prefer $100 today or $200 in 2 years?
Question 2: Would you prefer $100 in 6 years or $200 in 8 years?
Respondents often prefer the $100 in question 1 and the $200 in question 2, not realizing that question 2 involves the same choice as question 1 but with a 6-year delay. If these people are true to their answers, they will be time inconsistent. In the case of question 2, although they state their preference now for the $200 prize in 8 years, when year 6 arrives they will take the $100 and run!

3.7.3 Preferences Defined over Outcomes Other Than Fundamental Payoffs

Under the VNM expected utility theory, the utility function is defined over actual payoff outcomes. Tversky and Kahneman (1992) and Kahneman and Tversky (1979) propose formulations in which preferences are defined not over actual payoffs but rather over gains and losses relative to some benchmark, with losses given the greater utility weight. The benchmark can be thought of as either a minimally acceptable payment or, under the proper transformations, a cutoff rate of return; it can change through time, reflecting prior experience. Their development is called prospect theory.

^9 See Ainslie and Haslam (1992) for details.

Insert Figure 3.4 about here

A simple illustration of this sort of representation is as follows. Let Ȳ denote the benchmark payoff, and define the investor's utility function U(Y) by

U(Y) = (|Y − Ȳ|)^(1−γ1) / (1 − γ1), if Y ≥ Ȳ,
U(Y) = −λ (|Y − Ȳ|)^(1−γ2) / (1 − γ2), if Y < Ȳ,

where λ > 1 captures the extent of the investor's aversion to "losses" relative to the benchmark, and γ1 and γ2 need not coincide. In other words, the curvature of the function may differ for deviations above and below the benchmark. Clearly both features could have a large impact on the relative ranking of uncertain investment payoffs. See Figure 3.4 for an illustration.

Not all economic transactions (e.g., the normal purchase or sale of commodities) are affected by loss aversion since, in normal circumstances, one does not suffer a loss in trading a good.
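A minimal sketch of the piecewise utility above. The parameter values (benchmark Ȳ = 0, λ = 2.25, γ1 = γ2 = 0.5) are illustrative assumptions of mine, not figures from the text:

```python
# Hypothetical parameters for the loss-averse utility in the text:
# benchmark = 0, lam = 2.25, g1 = g2 = 0.5 are assumptions, chosen
# only to illustrate the asymmetry around the benchmark.
def prospect_u(y, bench=0.0, lam=2.25, g1=0.5, g2=0.5):
    if y >= bench:
        return (y - bench) ** (1 - g1) / (1 - g1)
    return -lam * (bench - y) ** (1 - g2) / (1 - g2)

gain = prospect_u(100.0)   # utility of a $100 gain over the benchmark
loss = prospect_u(-100.0)  # utility of a $100 loss below the benchmark
print(gain, loss)
```

With λ > 1 the loss is weighted λ times as heavily as the symmetric gain, which is exactly the kink Figure 3.4 depicts.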
An investor's willingness to hold stocks, however, may be significantly affected if he has experienced losses in prior periods.

3.7.4 Nonlinear Probability Weights

Under the VNM representation, utility outcomes are weighted linearly by their respective outcome probabilities. Under prospect theory and its close relatives this need not be the case: outcomes can be weighted using nonlinear functions of the probabilities, which may also be asymmetric. More general theories of investor psychology replace the objective mathematical expectation operator entirely with a model of subjective expectations. See Barberis et al. (1998) for an illustration.

3.8 Conclusions

The expected utility theory is the workhorse of choice theory under uncertainty. It will be put to use almost systematically in this book, as it is in most of financial theory. We have argued in this chapter that the expected utility construct provides a straightforward, intuitive mechanism for comparing uncertain asset payoff structures. As such, it offers a well-defined procedure for ranking the assets themselves. Two ingredients are necessary for this process:

1. An estimate of the probability distribution governing the asset's uncertain payments. While it is not trivial to estimate this quantity, it must also be estimated for the much simpler and less flexible mean/variance criterion.

2. An estimate of the agent's utility-of-money function; it is the latter that fully characterizes his preference ordering. How this can be identified is one of the topics of the next chapter.

References

Ainslie, G., Haslam, N. (1992), "Hyperbolic Discounting," in Choice over Time, eds. G. Loewenstein and J. Elster, New York: Russell Sage Foundation.

Allais, M. (1953), "Le comportement de l'homme rationnel devant le risque: Critique des postulats de l'école Américaine," Econometrica 21, 503–546.

Arrow, K. J. (1963), Social Choice and Individual Values, Yale University Press, New Haven, CT.
Barberis, N., Shleifer, A., Vishny, R. (1998), "A Model of Investor Sentiment," Journal of Financial Economics 49, 307–343.

Epstein, L., Zin, S. (1989), "Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: A Theoretical Framework," Econometrica 57, 937–969.

Grether, D., Plott, C. (1979), "Economic Theory of Choice and the Preference Reversal Phenomenon," American Economic Review 69, 623–638.

Huberman, G. (2001), "Familiarity Breeds Investment," Review of Financial Studies 14, 659–680.

Johnsen, T., Donaldson, J. B. (1985), "The Structure of Intertemporal Preferences Under Uncertainty and Time Consistent Plans," Econometrica 53, 1451–1458.

Kahneman, D., Tversky, A. (1979), "Prospect Theory: An Analysis of Decision Under Risk," Econometrica 47, 263–291.

Kreps, D., Porteus, E. (1978), "Temporal Resolution of Uncertainty and Dynamic Choice Theory," Econometrica 46, 185–200.

Machina, M. (1987), "Choice Under Uncertainty: Problems Solved and Unsolved," Journal of Economic Perspectives 1, 121–154.

Mas-Colell, A., Whinston, M. D., Green, J. R. (1995), Microeconomic Theory, Oxford University Press, Oxford.

Thaler, R. H. (1992), The Winner's Curse, Princeton University Press, Princeton, NJ.

Tversky, A., Kahneman, D. (1992), "Advances in Prospect Theory: Cumulative Representation of Uncertainty," Journal of Risk and Uncertainty 5, 297–323.

Weil, P. (1990), "Nonexpected Utility in Macroeconomics," Quarterly Journal of Economics 105, 29–42.

Chapter 4: Measuring Risk and Risk Aversion

4.1 Introduction

We argued in Chapter 1 that the desire of investors to avoid risk, that is, to smooth their consumption across states of nature and for that reason avoid variations in the value of their portfolio holdings, is one of the primary motivations for financial contracting. But we have not thus far imposed restrictions on the VNM expected utility representation of investor preferences that would necessarily guarantee such behavior.
For that to be the case, our representation must be further specialized. Since the probabilities of the various state payoffs are objectively given, independently of agent preferences, further restrictions must be placed on the utility-of-money function U( ) if the VNM (von Neumann-Morgenstern) expected utility representation is to capture this notion of risk aversion. We will now define risk aversion and discuss its implications for U( ).

4.2 Measuring Risk Aversion

What does the term risk aversion imply about an agent's utility function? Consider a financial contract where the potential investor either receives an amount h with probability 1/2 or must pay an amount h with probability 1/2. Our most basic sense of risk aversion must imply that for any level of personal wealth Y, a risk-averse investor would not wish to own such a security. In utility terms this means

U(Y) > (1/2) U(Y + h) + (1/2) U(Y − h) = EU,

where the expression on the right-hand side of the inequality is the VNM expected utility associated with the random wealth levels

Y + h with probability 1/2,
Y − h with probability 1/2.

This inequality can only be satisfied for all wealth levels Y if the agent's utility function has the form suggested in Figure 4.1. When this is the case we say the utility function is strictly concave. The important characteristic implied by this and similarly shaped utility functions is that the slope of the graph of the function decreases as the agent becomes wealthier (as Y increases); that is, the marginal utility (MU), represented by the derivative dU(Y)/dY ≡ U′(Y), decreases with greater Y. Equivalently, for twice differentiable utility functions, d²U(Y)/dY² ≡ U″(Y) < 0. For this class of functions, the latter is indeed a necessary and sufficient condition for risk aversion.
Insert Figure 4.1 about here

As the discussion indicates, both consumption smoothing and risk aversion are directly related to the notion of decreasing MU. Whether envisaged across time or across states, decreasing MU basically implies that income (or consumption) deviations from a fixed average level diminish rather than increase utility. Essentially, the positive deviations do not help as much as the negative ones hurt.

Risk aversion can also be represented in terms of indifference curves. Figure 4.2 illustrates the case of a simple situation with two states of nature. If consuming c1 in state 1 and c2 in state 2 represents a certain level of expected utility EU, then the convex-to-the-origin indifference curve that is the appropriate translation of a strictly concave utility function implies that the utility level generated by consuming the average (c1 + c2)/2 in both states (in this case a certain consumption level) is larger than EU.

Insert Figure 4.2 about here

We would like to be able to measure the degree of an investor's aversion to risk. This will allow us to compare whether one investor is more risk averse than another and to understand how an investor's risk aversion affects his investment behavior (for example, the composition of his portfolio). As a first attempt toward this goal, and since U″( ) < 0 implies risk aversion, why not simply say that investor A is more risk averse than investor B if and only if |U″A(Y)| ≥ |U″B(Y)| for all income levels Y? Unfortunately, this approach leads to the following inconsistency. Recall that the preference ordering described by a utility function is invariant to increasing linear transformations. In other words, suppose UA( ) and ŪA( ) are such that ŪA( ) = a + b UA( ) with b > 0. These utility functions describe the identical ordering, and thus must display identical risk aversion. Yet if we use the above measure we have |Ū″A(Y)| > |U″A(Y)| if, say, b > 1.
This would imply that investor A is more risk averse than he is himself, which must be a contradiction. We therefore need a measure of risk aversion that is invariant to linear transformations. Two widely used measures of this sort have been proposed by, respectively, Pratt (1964) and Arrow (1971):

(i) absolute risk aversion: RA(Y) ≡ −U″(Y)/U′(Y);
(ii) relative risk aversion: RR(Y) ≡ −Y U″(Y)/U′(Y).

Both of these measures have simple behavioral interpretations. Note that instead of speaking of risk aversion, we could use the inverse of the measures proposed above and speak of risk tolerance; this terminology may be preferable on various occasions.

4.3 Interpreting the Measures of Risk Aversion

4.3.1 Absolute Risk Aversion and the Odds of a Bet

Consider an investor with wealth level Y who is offered, at no charge, an investment involving winning or losing an amount h, with probabilities π and 1 − π, respectively. Note that any investor will accept such a bet if π is high enough (certainly if π = 1) and reject it if π is small enough (surely if π = 0). Presumably, the willingness to accept this opportunity will also be related to his level of current wealth, Y. Let π = π(Y, h) be the probability at which the agent is indifferent between accepting and rejecting the investment. It can be shown that

π(Y, h) ≅ 1/2 + (1/4) h RA(Y),   (4.1)

where ≅ denotes "is approximately equal to." The higher his measure of absolute risk aversion, the more favorable the odds he will demand in order to be willing to accept the investment. If RA¹(Y) ≥ RA²(Y) for agents 1 and 2, respectively, then investor 1 will always demand more favorable odds than investor 2, and in this sense investor 1 is more risk averse. It is useful to examine the magnitude of this probability. Consider, for example, the family of VNM utility-of-money functions of the form

U(Y) = −(1/ν) e^(−νY),

where ν is a parameter.
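For this family, RA(Y) = ν at every wealth level, so approximation (4.1) predicts π ≅ 1/2 + hν/4 regardless of Y. A quick numerical check of my own (the values ν = 0.01 and h = 10 are arbitrary illustrative choices):

```python
import math

# Check approximation (4.1) for CARA utility U(Y) = -exp(-nu*Y)/nu,
# for which R_A(Y) = nu. The exact indifference probability solves
#   U(Y) = pi*U(Y + h) + (1 - pi)*U(Y - h).
nu, h = 0.01, 10.0

def u(y):
    return -math.exp(-nu * y) / nu

Y = 500.0  # for CARA utility the exact answer does not depend on Y
exact = (u(Y) - u(Y - h)) / (u(Y + h) - u(Y - h))
approx = 0.5 + 0.25 * h * nu
print(round(exact, 5), round(approx, 5))
```

The exact probability (about .52498) and the approximation (.525) agree to four decimals, and recomputing with a different Y leaves the exact value unchanged, consistent with constant absolute risk aversion.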
For this case,

π(Y, h) ≅ 1/2 + (1/4) h ν;

in other words, the odds requested are independent of the level of initial wealth (Y); on the other hand, the more wealth at risk (h), the greater the odds of a favorable outcome demanded. This expression identifies the parameter ν as the appropriate measure of the degree of absolute risk aversion for these preferences.

Let us now derive Equation (4.1). By definition, π(Y, h) must satisfy

U(Y) = π(Y, h) U(Y + h) + [1 − π(Y, h)] U(Y − h),   (4.2)

where the left-hand side is the agent's utility if he foregoes the bet and the right-hand side is his expected utility if the investment is accepted. By an approximation (Taylor's Theorem) we know that

U(Y + h) = U(Y) + h U′(Y) + (h²/2) U″(Y) + H1,
U(Y − h) = U(Y) − h U′(Y) + (h²/2) U″(Y) + H2,

where H1, H2 are remainder terms of order higher than h². Substituting these quantities into Equation (4.2) gives

U(Y) = π(Y, h)[U(Y) + hU′(Y) + (h²/2)U″(Y) + H1] + (1 − π(Y, h))[U(Y) − hU′(Y) + (h²/2)U″(Y) + H2].   (4.3)

Collecting terms gives

U(Y) = U(Y) + (2π(Y, h) − 1) h U′(Y) + (h²/2) U″(Y) + H, where H ≡ π(Y, h)H1 + (1 − π(Y, h))H2 is small.

Solving for π(Y, h) yields

π(Y, h) = 1/2 + (h/4) (−U″(Y)/U′(Y)) − H/(2hU′(Y)),   (4.4)

which is the promised expression, since the last term is small (H is a weighted average of terms of order higher than h², and is thus itself of order higher than h²) and can be ignored in the approximation.

4.3.2 Relative Risk Aversion in Relation to the Odds of a Bet

Consider now an investment opportunity similar to the one just discussed, except that the amount at risk is a proportion of the investor's wealth: h = θY, where θ is the fraction of wealth at risk. By a derivation almost identical to the one above, it can be shown that

π(Y, θ) ≅ 1/2 + (1/4) θ RR(Y).   (4.5)

If RR¹(Y) ≥ RR²(Y) for investors 1 and 2, then investor 1 will always demand more favorable odds, for any level of wealth, when the fraction θ of his wealth is at risk. It is also useful to illustrate this measure by an example.
A popular family of VNM utility-of-money functions (for reasons to be detailed in the next chapter) has the form

U(Y) = Y^(1−γ)/(1−γ), for γ > 0, γ ≠ 1, or
U(Y) = ln Y, if γ = 1.

In the latter case, the probability expression becomes

π(Y, θ) ≅ 1/2 + (1/4) θ.

Here the requested odds of winning are not a function of initial wealth (Y) but depend on θ, the fraction of wealth that is at risk: the lower the fraction θ, the more willing investors are to enter into a bet that is close to being fair (a risky opportunity where the probabilities of success and failure both equal 1/2). In the former, more general, case the analogous expression is

π(Y, θ) ≅ 1/2 + (1/4) θ γ.

Since γ > 0, these investors demand a better than even probability of success. Furthermore, if γ2 > γ1, the investor characterized by γ = γ2 will always demand a higher probability of success than will an agent with γ = γ1, for the same fraction of wealth at risk. In this sense a higher γ denotes a greater degree of relative risk aversion for this investor class.

4.3.3 Risk Neutral Investors

One class of investors deserves special mention at this point. They are significant, as we shall later see, for the influence they have on the financial equilibria in which they participate. This is the class of risk neutral investors, identified with utility functions of the linear form

U(Y) = cY + d,

where c and d are constants and c > 0. Both of our measures of the degree of risk aversion, when applied to this utility function, give the same result: RA(Y) ≡ 0 and RR(Y) ≡ 0. Whether measured as a proportion of wealth or as an absolute amount of money at risk, such investors do not demand better than even odds when considering risky investments of the type under discussion. They are indifferent to risk and are concerned only with an asset's expected payoff.
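Both of the preceding claims are easy to verify directly. The sketch below (mine, not the text's) computes the exact indifference probability for log utility (γ = 1) with θ = 10% of wealth at risk, compares it with the approximation 1/2 + θ/4, and confirms that a linear (risk-neutral) utility yields exactly 1/2:

```python
import math

# Exact indifference probability when a fraction theta of wealth is at risk:
#   U(Y) = pi*U(Y*(1 + theta)) + (1 - pi)*U(Y*(1 - theta)).
def exact_pi(u, Y, theta):
    return (u(Y) - u(Y * (1 - theta))) / (u(Y * (1 + theta)) - u(Y * (1 - theta)))

theta, Y = 0.10, 1_000.0
log_exact = exact_pi(math.log, Y, theta)               # gamma = 1 (log utility)
log_approx = 0.5 + 0.25 * theta                        # 1/2 + theta/4
neutral = exact_pi(lambda y: 2.0 * y + 1.0, Y, theta)  # linear utility c=2, d=1
print(round(log_exact, 5), round(log_approx, 5), round(neutral, 5))
```

For log utility the exact value (about .52504) is independent of Y, as relative risk aversion of 1 at every wealth level requires, and it is close to the approximation .525.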
4.4 Risk Premium and Certainty Equivalence

The context of our discussion thus far has been somewhat artificial, because we were seeking especially convenient probabilistic interpretations for our measures of risk aversion. More generally, a risk-averse agent (U″( ) < 0) will always value an investment at something less than the expected value of its payoffs. Consider an investor with current wealth Y evaluating an uncertain risky payoff Z̃. For any distribution function FZ,

U(Y + EZ̃) ≥ E[U(Y + Z̃)],

provided that U″( ) < 0. This is a direct consequence of a standard mathematical result known as Jensen's inequality.

Theorem 4.1 (Jensen's Inequality):
Let g( ) be a concave function on the interval (a, b), and let x̃ be a random variable such that Prob{x̃ ∈ (a, b)} = 1. Suppose the expectations E(x̃) and Eg(x̃) exist; then

E[g(x̃)] ≤ g[E(x̃)].

Furthermore, if g( ) is strictly concave and Prob{x̃ = E(x̃)} < 1, then the inequality is strict.

This theorem applies whether the interval (a, b) on which g( ) is defined is finite or infinite, and, if a and b are finite, the interval can be open or closed at either endpoint. If g( ) is convex, the inequality is reversed. See De Groot (1970).

To put it differently, if an uncertain payoff is available for sale, a risk-averse agent will only be willing to buy it at a price less than its expected payoff. This statement leads to a pair of useful definitions. The (maximal) certain sum of money a person is willing to pay to acquire an uncertain opportunity defines his certainty equivalent (CE) for that risky prospect; the difference between the CE and the expected value of the prospect is a measure of the uncertain payoff's risk premium. It represents the maximum amount the agent would be willing to pay to avoid the investment or gamble. Let us make this notion more precise. The context of the discussion is as follows.
Consider an agent with current wealth Y and utility function U( ) who has the opportunity to acquire an uncertain investment Z̃ with expected value EZ̃. The certainty equivalent of the risky investment Z̃, CE(Y, Z̃), and the corresponding risk or insurance premium, Π(Y, Z̃), are the solutions to the following equations:

EU(Y + Z̃) = U(Y + CE(Y, Z̃))   (4.6a)
           = U(Y + EZ̃ − Π(Y, Z̃)),   (4.6b)

which implies

CE(Y, Z̃) = EZ̃ − Π(Y, Z̃), or Π(Y, Z̃) = EZ̃ − CE(Y, Z̃).

These concepts are illustrated in Figure 4.3.

Insert Figure 4.3 about here

It is intuitively clear that there is a direct relationship between the size of the risk premium and the degree of risk aversion of a particular individual. The link can be made quite easily in the spirit of the derivations of the previous section. For simplicity, the derivation that follows applies to the case of an actuarially fair prospect Z̃, one for which EZ̃ = 0. Using Taylor series approximations we can develop the left-hand side (LHS) and right-hand side (RHS) of the definitional Equations (4.6a) and (4.6b):

LHS: EU(Y + Z̃) = U(Y) + EZ̃ U′(Y) + (1/2) EZ̃² U″(Y) + EH(Z̃³) = U(Y) + (1/2) σZ² U″(Y) + EH(Z̃³),

RHS: U(Y − Π(Y, Z̃)) = U(Y) − Π(Y, Z̃) U′(Y) + H(Π²).

Ignoring the terms of order Z̃³ or Π² and higher (EH(Z̃³) and H(Π²)) and equating the two sides gives

Π(Y, Z̃) ≅ (1/2) σZ² (−U″(Y)/U′(Y)) = (1/2) σZ² RA(Y).

To illustrate, consider our earlier example in which U(Y) = Y^(1−γ)/(1−γ), and suppose γ = 3, Y = $500,000, and

Z̃ = +$100,000 with probability 1/2,
    −$100,000 with probability 1/2.

For this case the approximation specializes to

Π(Y, Z̃) ≅ (1/2) σZ² (γ/Y) = (1/2)(100,000)² (3/500,000) = $30,000.

To confirm that this approximation is a good one, we must show that

U(Y − Π(Y, Z̃)) = U(500,000 − 30,000) ≅ (1/2) U(600,000) + (1/2) U(400,000) = EU(Y + Z̃),

or, measuring wealth in units of $100,000 and comparing the (·)^(−2) terms, (4.7)^(−2) = .0452694 versus (1/2)(6)^(−2) + (1/2)(4)^(−2) = .04514; confirmed.
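The $30,000 figure is itself an approximation; the exact premium can be computed by inverting the utility function. A sketch of my own:

```python
# Compare the approximation Pi ≈ (1/2)*sigma^2*gamma/Y with the exact
# premium for U(Y) = Y**(1-gamma)/(1-gamma), gamma = 3, Y = 500,000,
# and Z = +/-100,000 with equal probability (so E[Z] = 0).
gamma, Y, h = 3.0, 500_000.0, 100_000.0

def u(y):
    return y ** (1 - gamma) / (1 - gamma)

eu = 0.5 * u(Y + h) + 0.5 * u(Y - h)
# invert U:  y = ((1 - gamma) * u)**(1 / (1 - gamma))
ce_wealth = ((1 - gamma) * eu) ** (1 / (1 - gamma))
exact_premium = Y - ce_wealth                # valid since E[Z] = 0
approx_premium = 0.5 * h ** 2 * gamma / Y
print(round(exact_premium), round(approx_premium))
```

The exact premium comes out near $29,321, so the $30,000 approximation is off by roughly 2 percent, consistent with the utility check in the text.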
Note also that for this preference class the insurance premium is directly proportional to the parameter γ.

Can we convert these ideas into statements about rates of return? Let the equivalent risk-free return be defined by

U(Y(1 + rf)) = U(Y + CE(Y, Z̃)).

The random payoff Z̃ can also be converted into a rate of return distribution via Z̃ = r̃Y, or r̃ = Z̃/Y. Therefore, rf is defined by the equation

U(Y(1 + rf)) ≡ EU(Y(1 + r̃)).

By risk aversion, Er̃ > rf. We thus define the rate of return risk premium Πr by Πr = Er̃ − rf, or Er̃ = rf + Πr, where Πr depends on the degree of risk aversion of the agent in question.

Let us conclude this section by computing the rate of return premium in a particular case. Suppose U(Y) = ln Y, and the random payoff Z̃ satisfies

Z̃ = +$100,000 with probability 1/2,
    −$50,000 with probability 1/2,

from a base of Y = $500,000. The risky rate of return implied by these numbers is clearly

r̃ = +20% with probability 1/2,
    −10% with probability 1/2,

with an expected return of 5%. The certainty equivalent CE(Y, Z̃) must satisfy

ln(500,000 + CE(Y, Z̃)) = (1/2) ln(600,000) + (1/2) ln(450,000), or

CE(Y, Z̃) = e^[(1/2) ln(600,000) + (1/2) ln(450,000)] − 500,000 = 19,615,

so that

(1 + rf) = 519,615/500,000 = 1.0392.

The rate of return risk premium is thus 5% − 3.92% = 1.08%. Let us be clear: this rate of return risk premium does not represent a market or equilibrium premium. Rather, it reflects personal preference characteristics and corresponds to the premium over the risk-free rate necessary to compensate, utility-wise, a specific individual, with the postulated preferences and initial wealth, for engaging in the risky investment.

4.5 Assessing the Level of Relative Risk Aversion

Suppose that agents' utility functions are of the form U(Y) = Y^(1−γ)/(1−γ).
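For this class, certainty equivalents of simple 50/50 prospects are easy to compute numerically. The sketch below (mine, not the text's) covers the prospect paying $50,000 or $100,000 with equal probability, which is used in the exercise of this section:

```python
import math

# Certainty equivalent of a 50/50 prospect paying lo or hi, for CRRA
# utility U(Y) = Y**(1-g)/(1-g) (log utility when g = 1), current wealth y.
def ce_crra(gamma, y=0.0, lo=50_000.0, hi=100_000.0):
    if gamma == 1.0:  # log-utility limit
        return math.exp(0.5 * math.log(y + lo) + 0.5 * math.log(y + hi)) - y
    eu = 0.5 * (y + lo) ** (1 - gamma) + 0.5 * (y + hi) ** (1 - gamma)
    return eu ** (1 / (1 - gamma)) - y

for g in [0.0, 1.0, 2.0, 5.0, 10.0, 20.0, 30.0]:
    print(g, round(ce_crra(g)))
print(round(ce_crra(5.0, y=100_000.0)))  # the wealth-dependent case
```

The last line covers the case of current wealth $100,000 and γ = 5, where the certainty equivalent rises to roughly $66,532: greater wealth makes the same absolute gamble loom smaller.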
As noted earlier, a quick calculation shows that RR(Y) ≡ γ, and we say that U( ) is of the constant relative risk aversion class. To get a feeling for what this measure means, consider the following uncertain payoff:

$50,000 with probability π = .5,
$100,000 with probability π = .5.

Assuming your utility function is of the type just noted, what would you be willing to pay for such an opportunity (i.e., what is the certainty equivalent of this uncertain prospect) if your current wealth were Y? The interest in asking such a question resides in the fact that, given the amount you are willing to pay, it is possible to infer your coefficient of relative risk aversion RR(Y) = γ, provided your preferences are adequately represented by the postulated functional form. This is achieved with the following calculation. The CE, the maximum amount you are willing to pay for this prospect, is defined by the equation

(Y + CE)^(1−γ)/(1−γ) = (1/2)(Y + 50,000)^(1−γ)/(1−γ) + (1/2)(Y + 100,000)^(1−γ)/(1−γ).

Assuming zero initial wealth (Y = 0), we obtain the following sample results (clearly, CE > 50,000):

γ = 0:  CE = 75,000 (risk neutrality)
γ = 1:  CE = 70,711
γ = 2:  CE = 66,667
γ = 5:  CE = 58,566
γ = 10: CE = 53,991
γ = 20: CE = 51,858
γ = 30: CE = 51,209

Alternatively, if we suppose a current wealth of Y = $100,000 and a degree of risk aversion of γ = 5, the equation yields CE = $66,532.

4.6 The Concept of Stochastic Dominance

In response to dissatisfaction with the standard ranking of risky prospects based on mean and variance, a theory of choice under uncertainty with general applicability has been developed. In this section we show that the postulates of expected utility lead to two weaker alternative concepts of dominance, with wider applicability than the concept of state-by-state dominance.
These are of interest because they circumscribe the situations in which rankings among risky prospects are preference free, that is, can be defined independently of the specific trade-offs (among return, risk, and other characteristics of probability distributions) represented by an agent's utility function. We start with an illustration. Consider two investment alternatives, Z1 and Z2, with the characteristics outlined in Table 4.1:

Table 4.1: Sample Investment Alternatives

Payoffs    10    100    2000
Prob Z1    .4    .6     0
Prob Z2    .4    .4     .2

EZ1 = 64, σZ1 = 44; EZ2 = 444, σZ2 = 779.

First observe that under standard mean-variance analysis these two investments cannot be ranked: although investment Z2 has the greater mean, it also has the greater variance. Yet all of us would clearly prefer to own investment 2: it at least matches investment 1 and has a positive probability of exceeding it.

Insert Figure 4.4 about here

To formalize this intuition, let us examine the cumulative probability distributions associated with each investment, F1(Z) and F2(Z), where Fi(Z) = Prob(Zi ≤ Z). In Figure 4.4 we see that F1(·) always lies on or above F2(·). This observation leads to Definition 4.1.

Definition 4.1:
Let FA(x̃) and FB(x̃), respectively, represent the cumulative distribution functions of two random variables (cash payoffs) that, without loss of generality, assume values in the interval [a, b]. We say that FA(x̃) first-order stochastically dominates (FSD) FB(x̃) if and only if FA(x) ≤ FB(x) for all x ∈ [a, b].

Distribution A in effect assigns more probability to higher values of x; in other words, higher payoffs are more likely. That is, the distribution functions of A and B generally conform to the following pattern: if FA FSD FB, then FA is everywhere below and to the right of FB, as represented in Figure 4.5. By this criterion, investment 2 stochastically dominates investment 1; it should, intuitively, be preferred.
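The FSD comparison of Table 4.1 can be checked in a few lines (a sketch of my own):

```python
# Verify that Z2 first-order stochastically dominates Z1 (Table 4.1):
# F2(x) <= F1(x) at every payoff level, with Z2 also having the larger mean.
z1 = {10: 0.4, 100: 0.6}
z2 = {10: 0.4, 100: 0.4, 2000: 0.2}

def cdf(dist, x):
    return sum(p for v, p in dist.items() if v <= x)

support = sorted(set(z1) | set(z2))
fsd = all(cdf(z2, x) <= cdf(z1, x) for x in support)
mean1 = sum(v * p for v, p in z1.items())
mean2 = sum(v * p for v, p in z2.items())
print(fsd, mean1, mean2)
```

Since both CDFs are step functions that only move at the payoff points, checking the inequality on the common support is enough; the means 64 and 444 match the figures in Table 4.1.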
Theorem 4.2 summarizes our intuition in this latter regard.

Insert Figure 4.5 about here

Theorem 4.2:
Let FA(x̃) and FB(x̃) be two cumulative probability distributions for random payoffs x̃ ∈ [a, b]. Then FA(x̃) FSD FB(x̃) if and only if EA U(x̃) ≥ EB U(x̃) for all nondecreasing utility functions U( ).

Proof: See the Appendix.

Although it is not equivalent to state-by-state dominance (see Exercise 4.8), FSD is an extremely strong condition. As is the case with the former, it is so strong a concept that it induces only a very incomplete ranking among uncertain prospects. Can we find a broader basis of comparison, for instance one that also makes use of the hypothesis of risk aversion? Consider the two independent investments in Table 4.2.¹

¹ In this example, contrary to the previous one, the two investments considered are statistically independent.

Table 4.2: Two Independent Investments

Investment 3          Investment 4
Payoff   Prob.        Payoff   Prob.
4        0.25         1        0.33
5        0.50         6        0.33
9        0.25         8        0.33

Which of these investments is better? Clearly, neither investment (first-order) stochastically dominates the other, as Figure 4.6 confirms: the distribution function corresponding to investment 3 is not everywhere below the distribution function of investment 4. Yet we would probably prefer investment 3. Can we formalize this intuition (without resorting to the mean/variance criterion, which in this case accords with intuition: ER3 = 5.75, ER4 = 5; σ3 = 1.9, σ4 = 2.9)? This question leads to a weaker notion of stochastic dominance that explicitly compares distribution functions.

Definition 4.2 (Second-Order Stochastic Dominance, SSD):
Let FA(x̃) and FB(x̃) be two cumulative probability distributions for random payoffs in [a, b]. We say that FA(x̃) second-order stochastically dominates (SSD) FB(x̃) if and only if, for every x,

∫ from −∞ to x of [FB(t) − FA(t)] dt ≥ 0

(with strict inequality over some meaningful interval of values of t).
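Definition 4.2 can be checked numerically for investments 3 and 4. The sketch below (my own, not from the text) integrates F4 − F3 on a fine grid and confirms that the running integral never goes negative, anticipating the hand computation of Table 4.3:

```python
# Numerically verify that investment 3 SSD investment 4: the integral of
# F4(t) - F3(t) from 0 to x must be nonnegative for every x.
inv3 = {4: 0.25, 5: 0.50, 9: 0.25}
inv4 = {1: 1 / 3, 6: 1 / 3, 8: 1 / 3}

def cdf(dist, x):
    return sum(p for v, p in dist.items() if v <= x)

step = 0.01
running, ssd = 0.0, True
for k in range(1300):        # left Riemann sum over [0, 13)
    x = k / 100
    running += (cdf(inv4, x) - cdf(inv3, x)) * step
    if running < -1e-9:
        ssd = False
print(ssd, round(running, 4))
```

As a sanity check, the terminal value of the integral equals the difference of the means, E[x̃3] − E[x̃4] = 5.75 − 5 = 0.75.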
The calculations in Table 4.3 reveal that investment 3 does in fact second-order stochastically dominate investment 4 (fi(x), i = 3, 4, denotes the density function corresponding to the cumulative distribution function Fi(x)). In geometric terms (Figure 4.6), this is the case as long as area B is smaller than area A.

Insert Figure 4.6 about here

As Theorem 4.3 shows, this notion makes sense, especially for risk-averse agents.

Theorem 4.3:
Let FA(x̃) and FB(x̃) be two cumulative probability distributions for random payoffs x̃ defined on [a, b]. Then FA(x̃) SSD FB(x̃) if and only if EA U(x̃) ≥ EB U(x̃) for all nondecreasing and concave U( ).

Proof: See Laffont (1989), Chapter 2, Section 2.5.

Table 4.3: Investment 3 Second-Order Stochastically Dominates Investment 4
(the integrals are computed as sums over the integer grid t = 1, ..., x)

Values of x          1     2     3     4      5     6     7     8     9     10    11    12    13
F3(x)                0     0     0     .25    .75   .75   .75   .75   1     1     1     1     1
Σ F3(t), t ≤ x       0     0     0     .25    1     1.75  2.5   3.25  4.25  5.25  6.25  7.25  8.25
F4(x)                1/3   1/3   1/3   1/3    1/3   2/3   2/3   1     1     1     1     1     1
Σ F4(t), t ≤ x       1/3   2/3   1     4/3    5/3   7/3   3     4     5     6     7     8     9
Σ [F4 − F3], t ≤ x   1/3   2/3   1     13/12  2/3   7/12  1/2   3/4   3/4   3/4   3/4   3/4   3/4

Since the entries of the last row are everywhere nonnegative, all risk-averse agents will prefer investment 3, the second-order stochastically dominant asset. Of course, FSD implies SSD: if, for two investments Z1 and Z2, Z1 FSD Z2, then it is also true that Z1 SSD Z2. But the converse is not true.

4.7 Mean Preserving Spreads

Theorems 4.2 and 4.3 attempt to characterize the notion of "better/worse" relevant for probability distributions or random variables (representing investments). But there are two aspects to such a comparison: the notion of "more or less risky" and the trade-off between risk and return. Let us now attempt to isolate the former effect by comparing only those probability distributions with identical means. We will then review Theorem 4.3 in the context of this latter requirement. The concept of more or less risky is captured by the notion of a mean preserving spread.
In our context, this notion can be informally stated as follows: let f_A(x) and f_B(x) describe, respectively, the probability density functions on payoffs to assets A and B. If f_B(x) can be obtained from f_A(x) by removing some of the probability weight from the center of f_A(x) and distributing it to the tails in such a way as to leave the mean unchanged, we say that f_B(x) is related to f_A(x) via a mean preserving spread. Figure 4.7 suggests what this notion would mean in the case of normal-type distributions with identical means, yet different variances.

Insert Figure 4.7 about here

How can this notion be made both more intuitive and more precise? Consider a set of possible payoffs x̃_A that are distributed according to F_A( ). We further randomize these payoffs to obtain a new random variable x̃_B according to

  x̃_B = x̃_A + z̃,    (4.7)

where, for any value x_A, E(z̃) = ∫ z̃ dH_{x_A}(z̃) = 0; in other words, we add some pure randomness to x̃_A. Let F_B( ) be the distribution function associated with x̃_B. We say that F_B( ) is a mean preserving spread of F_A( ).

A simple example of this is as follows. Let

  x̃_A = 5 with prob 1/2, 2 with prob 1/2,

and suppose

  z̃ = +1 with prob 1/2, −1 with prob 1/2.

Then

  x̃_B = 6 with prob 1/4, 4 with prob 1/4, 3 with prob 1/4, 1 with prob 1/4.

Clearly, E x̃_A = E x̃_B = 3.5; we would also all agree that F_B( ) is intuitively riskier. Our final theorem (Theorem 4.4) relates the sense of a mean preserving spread, as captured by Equation (4.7), to our earlier results.

Theorem 4.4:
Let F_A( ) and F_B( ) be two distribution functions defined on the same state space with identical means. Then the following statements are equivalent:
(i) F_A(x̃) SSD F_B(x̃);
(ii) F_B(x̃) is a mean preserving spread of F_A(x̃) in the sense of Equation (4.7).

Proof: See Rothschild and Stiglitz (1970).
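The two-point example can be replicated mechanically. The following sketch is ours (not code from the text): it builds the distribution of x̃_B = x̃_A + z̃ and confirms that the spread leaves the mean at 3.5 while increasing the variance.

```python
from itertools import product

xA = {5: 0.5, 2: 0.5}          # the payoff distribution F_A
z  = {+1: 0.5, -1: 0.5}        # pure noise: E[z] = 0, independent of x_A

# Build the distribution of x_B = x_A + z (Equation 4.7)
xB = {}
for (va, pa), (vz, pz) in product(xA.items(), z.items()):
    xB[va + vz] = xB.get(va + vz, 0.0) + pa * pz

def mean(d):
    return sum(v * p for v, p in d.items())

def var(d):
    m = mean(d)
    return sum(p * (v - m) ** 2 for v, p in d.items())

print(mean(xA), mean(xB))      # 3.5 3.5  -- the spread preserves the mean
print(var(xA) < var(xB))       # True    -- but increases risk
```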
But what about distributions that are not stochastically dominant under either definition and for which the mean-variance criterion does not give a relative ranking? For example, consider (independent) investments 5 and 6 in Table 4.4. In this case we are left to compare distributions by computing their respective expected utilities. That is to say, the ranking between these two investments is preference dependent. Some risk-averse individuals will prefer investment 5 while other risk-averse individuals will prefer investment 6. This is not bad: there remains a systematic basis of comparison. The task of the investment advisor is made more complex, however, as she will have to elicit more information on the preferences of her client if she wants to be in a position to provide adequate advice.

Table 4.4: Two Investments; No Dominance

  Investment 5          Investment 6
  Payoff   Prob.        Payoff   Prob.
     1     0.25            3     0.33
     7     0.5             5     0.33
    12     0.25            8     0.34

4.8 Conclusions

The main topic of this chapter was the VNM expected utility representation specialized to admit risk aversion. Two measures of the degree of risk aversion were presented. Both are functions of an investor's current level of wealth and, as such, we would expect them to change as wealth changes. Is there any systematic relationship between R_A(Y), R_R(Y), and Y which it is reasonable to assume? In order to answer that question we must move away from the somewhat artificial setting of this chapter. As we will see in Chapter 5, systematic relationships between wealth and the measures of absolute and relative risk aversion are closely related to investors' portfolio behavior.

References

Arrow, K. J. (1971), Essays in the Theory of Risk Bearing, Markham, Chicago.
De Groot, M. (1970), Optimal Statistical Decisions, McGraw-Hill, New York.
Laffont, J.-J. (1989), The Economics of Uncertainty and Information, MIT Press, Cambridge, MA.
Pratt, J. (1964), "Risk Aversion in the Small and the Large," Econometrica 32, 122–136.
Rothschild, M., Stiglitz, J.E. (1970), "Increasing Risk: A Definition," Journal of Economic Theory 2, 225–243.

Appendix: Proof of Theorem 4.2

(⇒) There is no loss in generality in assuming U( ) is differentiable, with U′( ) > 0. Suppose F_A(x) FSD F_B(x), and let U( ) be a utility function defined on [a, b] for which U′( ) > 0. We need to show that

  E_A U(x̃) = ∫_a^b U(x)dF_A(x) ≥ ∫_a^b U(x)dF_B(x) = E_B U(x̃).

This result follows from integration by parts (recall the relationship ∫_a^b u dv = uv|_a^b − ∫_a^b v du):

  ∫_a^b U(x)dF_A(x) − ∫_a^b U(x)dF_B(x)
  = [U(b)F_A(b) − U(a)F_A(a) − ∫_a^b F_A(x)U′(x)dx] − [U(b)F_B(b) − U(a)F_B(a) − ∫_a^b F_B(x)U′(x)dx]
  = −∫_a^b F_A(x)U′(x)dx + ∫_a^b F_B(x)U′(x)dx
    (since F_A(b) = F_B(b) = 1, and F_A(a) = F_B(a) = 0)
  = ∫_a^b [F_B(x) − F_A(x)]U′(x)dx ≥ 0.

The desired inequality follows since, by the definition of FSD and the assumption that the marginal utility is always positive, both terms within the integral are non-negative. If there is some subset (c, d) ⊂ [a, b] on which F_A(x) < F_B(x), the final inequality is strict.

(⇐) Proof by contradiction. If F_A(x̃) ≤ F_B(x̃) is false, then there must exist an x̄ ∈ [a, b] for which F_A(x̄) > F_B(x̄). Define the nondecreasing function Û(x) by

  Û(x) = 1 for x̄ < x ≤ b, and Û(x) = 0 for a ≤ x ≤ x̄.

Then

  ∫_a^b Û(x)dF_A(x) − ∫_a^b Û(x)dF_B(x) = ∫_a^b Û(x)[dF_A(x) − dF_B(x)]
  = ∫_{x̄}^b [dF_A(x) − dF_B(x)]
  = [F_A(b) − F_B(b)] − [F_A(x̄) − F_B(x̄)]
  = F_B(x̄) − F_A(x̄) < 0.

Thus we have exhibited a nondecreasing function Û(x) for which ∫_a^b Û(x)dF_A(x) < ∫_a^b Û(x)dF_B(x), a contradiction. ∎
Chapter 5: Risk Aversion and Investment Decisions, Part 1

5.1 Introduction

Chapters 3 and 4 provided a systematic procedure for assessing an investor's relative preference for various investment payoffs: rank them according to expected utility using a VNM utility representation constructed to reflect the investor's preferences over random payments. The subsequent postulate of risk aversion further refined this idea: it is natural to hypothesize that the utility-of-money function entering the investor's VNM index is concave (U″( ) < 0). Two widely used measures were introduced and interpreted, each permitting us to assess an investor's degree of risk aversion. In the setting of a zero-cost investment paying either (+h) or (−h), these measures were shown to be linked with the minimum probability of success above one half necessary for a risk-averse investor to take on such a prospect willingly. They differ only as to whether h measures an absolute amount of money or a proportion of the investor's initial wealth.

In this chapter we begin to use these ideas with a view towards understanding an investor's demand for assets of different risk classes and, in particular, his or her demand for risk-free versus risky assets. This is an essential aspect of the investor's portfolio allocation decision.

5.2 Risk Aversion and Portfolio Allocation: Risk Free vs. Risky Assets

5.2.1 The Canonical Portfolio Problem

Consider an investor with wealth level Y0 who is deciding what amount, a, to invest in a risky portfolio with uncertain rate of return r̃. We can think of the risky asset as being, in fact, the market portfolio under the "old" Capital Asset Pricing Model (CAPM), to be reviewed in Chapter 7. The alternative is to invest in a risk-free asset that pays a certain rate of return rf. The time horizon is one period.
The investor's wealth at the end of the period is given by

  Ỹ1 = (1 + rf)(Y0 − a) + a(1 + r̃) = Y0(1 + rf) + a(r̃ − rf).

The choice problem which he must solve can be expressed as

  max_a EU(Ỹ1) = max_a EU(Y0(1 + rf) + a(r̃ − rf)),    (5.1)

where U( ) is his utility-of-money function, and E the expectations operator. This formulation of the investor's problem is fully in accord with the lessons of the prior chapter. Each choice of a leads to a different uncertain payoff distribution, and we want to find the choice that corresponds to the most preferred such distribution. By construction of his VNM representation, this is the payoff pattern that maximizes his expected utility.

Under risk aversion (U″( ) < 0), the necessary and sufficient first order condition for problem (5.1) is given by

  E[U′(Y0(1 + rf) + a(r̃ − rf))(r̃ − rf)] = 0.    (5.2)

Analyzing Equation (5.2) allows us to describe the relationship between the investor's degree of risk aversion and his portfolio's composition as per the following theorem:

Theorem 5.1:
Assume U′( ) > 0 and U″( ) < 0, and let â denote the solution to problem (5.1). Then

  â > 0 ⇔ E r̃ > rf,
  â = 0 ⇔ E r̃ = rf,
  â < 0 ⇔ E r̃ < rf.

Proof: Since this is a fundamental result, it is worthwhile to make clear its (straightforward) justification. We follow the argument presented in Arrow (1971), Chapter 2. Define

  W(a) = E{U(Y0(1 + rf) + a(r̃ − rf))}.

The FOC (5.2) can then be written W′(a) = E[U′(Y0(1 + rf) + a(r̃ − rf))(r̃ − rf)] = 0. By risk aversion (U″ < 0),

  W″(a) = E[U″(Y0(1 + rf) + a(r̃ − rf))(r̃ − rf)²] < 0;

that is, W′(a) is everywhere decreasing. It follows that â will be positive if and only if W′(0) = U′(Y0(1 + rf))E(r̃ − rf) > 0 (since then a will have to be increased from the value of 0 to achieve equality in the FOC). Since U′ is always strictly positive, this implies â > 0 if and only if E(r̃ − rf) > 0. The other assertions follow similarly. ∎
Theorem 5.1 asserts that a risk-averse agent will invest in the risky asset or portfolio only if the expected return on the risky asset exceeds the risk-free rate. On the other hand, a risk-averse agent will always participate (possibly via an arbitrarily small stake) in a risky investment when the odds are favorable.

5.2.2 Illustration and Examples

It is worth pursuing the above result to get a sense of how large a is relative to Y0. Our findings will, of course, be preference dependent. Let us begin with the fairly standard and highly tractable utility function U(Y) = ln Y. For added simplicity let us also assume that the risky asset is forecast to pay either of two returns (corresponding to an "up" or "down" stock market), r2 > r1, with probabilities π and 1 − π, respectively. It makes sense (why?) to assume r2 > rf > r1 and E r̃ = πr2 + (1 − π)r1 > rf. Under this specification, the FOC (5.2) becomes

  E[(r̃ − rf)/(Y0(1 + rf) + a(r̃ − rf))] = 0.

Writing out the expectation explicitly yields

  π(r2 − rf)/(Y0(1 + rf) + a(r2 − rf)) + (1 − π)(r1 − rf)/(Y0(1 + rf) + a(r1 − rf)) = 0,

which, after some straightforward algebraic manipulation, gives

  a/Y0 = −(1 + rf)[E r̃ − rf] / [(r1 − rf)(r2 − rf)] > 0.    (5.3)

This is an intuitive sort of expression: the fraction of wealth invested in risky assets increases with the return premium paid by the risky asset (E r̃ − rf) and decreases with an increase in the return dispersion around rf, as measured by (r2 − rf)(rf − r1).¹

Suppose rf = .05, r2 = .40, r1 = −.20, and π = 1/2 (the latter guarantees E r̃ = .10). In this case a/Y0 = .6: 60% of the investor's wealth turns out to be invested in the risky asset. Alternatively, suppose r2 = .30 and r1 = −.10 (same rf, π, and E r̃); here we find that a/Y0 = 1.4. This latter result must be interpreted to mean that an investor would prefer to invest at least his full wealth in the risky portfolio.
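Both numbers follow mechanically from Equation (5.3). A quick numerical check (the sketch and the function name are ours, not the text's):

```python
def risky_share_log(rf, r1, r2, pi):
    """a/Y0 from Equation (5.3) for U(Y) = ln Y with a two-state return."""
    Er = pi * r2 + (1 - pi) * r1           # expected risky return
    return -(1 + rf) * (Er - rf) / ((r1 - rf) * (r2 - rf))

print(round(risky_share_log(0.05, -0.20, 0.40, 0.5), 4))   # 0.6
print(round(risky_share_log(0.05, -0.10, 0.30, 0.5), 4))   # 1.4
```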
If possible, he would even want to borrow an additional amount, equal to 40% of his initial wealth, at the risk-free rate and invest this amount in the risky portfolio as well. In comparing these two examples, we see that the return dispersion is much smaller in the second case (lower risk in a mean-variance sense) with an unchanged return premium. With less risk and unchanged mean returns, it is not surprising that the proportion invested in the risky asset increases very substantially. We will see, however, that, somewhat surprisingly, this result does not generalize without further assumptions on the form of the investor's preferences.

¹That this fraction is independent of the wealth level is not a general result, as we shall find out in Section 5.3.

5.3 Portfolio Composition, Risk Aversion and Wealth

In this section we consider how an investor's portfolio decision is affected by his degree of risk aversion and his wealth level. A natural first exercise is to compare the portfolio composition across individuals of differing risk aversion. The answer to this first question conforms with intuition: if John is more risk averse than Amos, he optimally invests a smaller fraction of his wealth in the risky asset. This is the essence of our next two theorems.

Theorem 5.2 (Arrow, 1971):
Suppose, for all wealth levels Y, R_A¹(Y) > R_A²(Y), where R_Aⁱ(Y) is the measure of absolute risk aversion of investor i, i = 1, 2. Then â1(Y) < â2(Y).

That is, the more risk-averse agent, as measured by his absolute risk aversion measure, will always invest less in the risky asset, given the same level of wealth. This result does not depend on measuring risk aversion via the absolute Arrow-Pratt measure. Indeed, since R_A¹(Y) > R_A²(Y) ⇔ R_R¹(Y) > R_R²(Y), Theorem 5.2 can be restated as Theorem 5.3:

Theorem 5.3:
Suppose, for all wealth levels Y > 0, R_R¹(Y) > R_R²(Y), where R_Rⁱ(Y) is the measure of relative risk aversion of investor i, i = 1, 2. Then â1(Y) < â2(Y).
Continuing with the example of Section 5.2.2, suppose now that the investor's utility function has the form U(Y) = Y^(1−γ)/(1−γ), γ > 1. This utility function displays both greater absolute and greater relative risk aversion than U(Y) = ln Y (you are invited to prove this statement). From Theorems 5.2 and 5.3, we would expect this greater risk aversion to manifest itself in a reduced willingness to invest in the risky portfolio. Let us see if this is the case. For these preferences the expression corresponding to (5.3) is

  a/Y0 = (1 + rf)[(π(r2 − rf))^(1/γ) − ((1 − π)(rf − r1))^(1/γ)] / [(r2 − rf)((1 − π)(rf − r1))^(1/γ) − (r1 − rf)(π(r2 − rf))^(1/γ)].    (5.4)

In the case of our first example, but with γ = 3, we obtain, by direct substitution in (5.4), a/Y0 ≈ .20: indeed, only about 20% of the investor's wealth is now invested in the risky portfolio, down from 60% earlier.

The next logical question is to ask how the investment in the risky asset varies with the investor's total wealth as a function of his degree of risk aversion. Let us begin with statements appropriate to the absolute measure of risk aversion.

Theorem 5.4 (Arrow, 1971):
Let â = â(Y0) be the solution to problem (5.1) above; then:
(i) R_A′(Y) < 0 ⇔ â′(Y0) > 0;
(ii) R_A′(Y) = 0 ⇔ â′(Y0) = 0;
(iii) R_A′(Y) > 0 ⇔ â′(Y0) < 0.

Case (i) is referred to as declining absolute risk aversion (DARA). Agents with this property become more willing to accept greater bets as they become wealthier. Theorem 5.4 says that such agents will also increase the amount invested in the risky asset (â′(Y0) > 0). To state matters slightly differently, an agent with the indicated declining absolute risk aversion will, if he becomes wealthier, be willing to put some of that additional wealth at risk. Utility functions of this form are quite common: those considered in the example, U(Y) = ln Y and U(Y) = Y^(1−γ)/(1−γ), γ > 0, display this property. It also makes intuitive sense.
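Equation (5.4), together with its γ = 1 limit (which reproduces the log-utility result of Equation (5.3)), is easy to check numerically. The sketch below is our rearrangement of the two-state first-order condition (obtained by raising both sides to the power 1/γ), not code from the text; it confirms the message of Theorems 5.2 and 5.3 that the more risk-averse CRRA investor holds a smaller risky share.

```python
def risky_share_crra(rf, r1, r2, pi, gamma):
    """a/Y0 for U(Y) = Y^(1-gamma)/(1-gamma) and a two-state return (cf. Eq. 5.4).
    Derived by raising the FOC to the power 1/gamma and solving for a/Y0."""
    A = (pi * (r2 - rf)) ** (1 / gamma)
    B = ((1 - pi) * (rf - r1)) ** (1 / gamma)
    return (1 + rf) * (A - B) / (B * (r2 - rf) - A * (r1 - rf))

log_share  = risky_share_crra(0.05, -0.20, 0.40, 0.5, 1.0)   # gamma = 1: the ln Y case
crra_share = risky_share_crra(0.05, -0.20, 0.40, 0.5, 3.0)
print(round(log_share, 2), round(crra_share, 2))   # 0.6 0.2
print(crra_share < log_share)                      # True: more risk averse, smaller share
```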
Under constant absolute risk aversion, or CARA (case (ii)), the amount invested in the risky asset is unaffected by the agent's wealth. This result is somewhat counter-intuitive. One might have expected that a CARA decision maker, in particular one with little risk aversion, would invest some of his or her increase in initial wealth in the risky asset. Theorem 5.4 disproves this intuition. An example of a CARA utility function is U(Y) = −e^(−νY). Indeed,

  R_A(Y) = −U″(Y)/U′(Y) = −(−ν²e^(−νY))/(νe^(−νY)) = ν.

Let us verify the claim of Theorem 5.4 for this utility function. Consider

  max_a E[−e^(−ν(Y0(1+rf)+a(r̃−rf)))].

The FOC is

  E[ν(r̃ − rf)e^(−ν(Y0(1+rf)+a(r̃−rf)))] = 0.

Now compute da/dY0 by differentiating the FOC with respect to Y0:

  E[ν(r̃ − rf)e^(−ν(Y0(1+rf)+a(r̃−rf)))(1 + rf + (r̃ − rf)(da/dY0))] = 0, i.e.,

  (1 + rf)E[ν(r̃ − rf)e^(−ν(·))] + (da/dY0)E[ν(r̃ − rf)²e^(−ν(·))] = 0.

The first expectation is zero by the FOC, while the second is strictly positive; therefore da/dY0 ≡ 0. For the above preference ordering, and our original two-state risky distribution,

  â = (1/(ν(r1 − r2))) ln[((1 − π)/π)((rf − r1)/(r2 − rf))].

Note that in order for â to be positive, it must be that

  0 < ((1 − π)/π)((rf − r1)/(r2 − rf)) < 1,

which is precisely the maintained assumption E r̃ > rf (consistent with Theorem 5.1).

Case (iii) is one with increasing absolute risk aversion (IARA). It says that as an agent becomes wealthier, he reduces his investments in risky assets. This does not make much sense and we will generally ignore this possibility. Note, however, that the quadratic utility function, which is of some significance as we will see later on, possesses this property.

Let us now think in terms of the relative risk aversion measure. Since it is defined for bets expressed as a proportion of wealth, it is appropriate to think in terms of elasticities, or of how the fraction invested in the risky asset changes as wealth changes. Define

  η(Y, â) = (dâ/â)/(dY/Y) = (Y/â)(dâ/dY),

i.e., the wealth elasticity of investment in the risky asset.
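Returning for a moment to the CARA case, both the closed form for â and its independence of wealth can be verified numerically. The sketch below is ours (the parameter ν = 2 is illustrative and not from the text): it solves the FOC by bisection at several wealth levels and compares each root to the closed-form expression.

```python
import math

def a_hat_cara(nu, rf, r1, r2, pi):
    """Closed-form risky holding for U(Y) = -exp(-nu*Y) with a two-state return."""
    return math.log(((1 - pi) / pi) * ((rf - r1) / (r2 - rf))) / (nu * (r1 - r2))

def foc(a, Y0, nu, rf, r1, r2, pi):
    """E[nu (r - rf) exp(-nu (Y0(1+rf) + a(r-rf)))]: decreasing in a."""
    W = lambda r: Y0 * (1 + rf) + a * (r - rf)
    return (pi * nu * (r2 - rf) * math.exp(-nu * W(r2))
            + (1 - pi) * nu * (r1 - rf) * math.exp(-nu * W(r1)))

def solve_a(Y0, nu, rf, r1, r2, pi, lo=0.0, hi=5.0):
    for _ in range(200):                   # bisection on the decreasing FOC
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if foc(mid, Y0, nu, rf, r1, r2, pi) > 0 else (lo, mid)
    return (lo + hi) / 2

closed = a_hat_cara(2.0, 0.05, -0.20, 0.40, 0.5)
roots = [solve_a(Y0, 2.0, 0.05, -0.20, 0.40, 0.5) for Y0 in (1.0, 10.0, 100.0)]
print(all(abs(r - closed) < 1e-6 for r in roots))   # True: a-hat independent of Y0
```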
For example, if η(Y, â) > 1, then as wealth Y increases the percentage increase in the amount optimally invested in the risky portfolio exceeds the percentage increase in Y; in other words, as wealth increases, the proportion optimally invested in the risky asset increases. Analogous to Theorem 5.4 is Theorem 5.5:

Theorem 5.5 (Arrow, 1971):
If, for all wealth levels Y,
(i) R_R′(Y) = 0 (CRRA), then η = 1;
(ii) R_R′(Y) < 0 (DRRA), then η > 1;
(iii) R_R′(Y) > 0 (IRRA), then η < 1.²

²Note that the above comments also suggest the appropriateness of weakly increasing relative risk aversion as an alternative working assumption.

5.4 Risk Neutral Investors

Now consider the limiting case of a risk neutral investor, with linear utility U(Y) = c + dY, d > 0 (check that U″ = 0 in this case). What proportion of his wealth will such an agent invest in the risky asset? The answer is: provided E r̃ > rf (as we have assumed), all of his wealth will be invested in the risky asset. This is clearly seen from the following. Consider the agent's portfolio problem:

  max_a E[c + d(Y0(1 + rf) + a(r̃ − rf))] = max_a [c + dY0(1 + rf) + da(E r̃ − rf)].

With E r̃ > rf and, consequently, d(E r̃ − rf) > 0, this expression is increasing in a. This means that if the risk neutral investor is unconstrained, he will attempt to borrow as much as possible at rf and reinvest the proceeds in the risky portfolio. He is willing, without bound, to exchange certain payments for uncertain claims of greater expected value. As such, he stands willing to absorb all of the economy's financial risk. If we specify that the investor is prevented from borrowing, then the maximum will occur at a = Y0.

5.5 Risk Aversion and Risky Portfolio Composition

So far we have considered the question of how an investor should allocate his wealth between a risk-free asset and a risky asset or portfolio. We now go one step further and ask the following question: when is the composition of the portfolio (i.e., the percentage of the portfolio's value invested in each of the J risky assets that compose it) independent of the agent's wealth level?
This question is particularly relevant in light of current investment practices whereby portfolio decisions are usually taken in steps. Step 1, often associated with the label "asset allocation", is the choice of instruments: stocks, bonds and riskless assets (possibly alternative investments as well, such as hedge funds, private equity and real estate). Step 2 is the country or sector allocation decision: here the issue is to optimize not across asset classes but across geographical regions or industrial sectors. Step 3 consists of the individual stock-picking decisions made on the basis of information provided by financial analysts. The issuing of asset and country/sector allocation "grids" by all major financial institutions, tailored to the risk profile of the different clients but independent of their wealth levels (and of changes in their wealth), is predicated on the hypothesis that differences in wealth (across clients) and changes in their wealths do not require adjustments in portfolio composition, provided risk tolerance is either unchanged or controlled for.

Let us illustrate the issue in more concrete terms. Take the example of an investor with invested wealth equal to $12,000 and optimal portfolio proportions a1 = 1/2, a2 = 1/3, and a3 = 1/6 (only 3 assets are considered). In other words, this individual's portfolio holdings are $6,000 in asset 1, $4,000 in asset 2 and $2,000 in asset 3. The implicit assumption behind the most common asset management practice is that, were the investor's wealth to double to $24,000, the new optimal portfolio would naturally be:

  Asset 1: (1/2)($24,000) = $12,000
  Asset 2: (1/3)($24,000) = $8,000
  Asset 3: (1/6)($24,000) = $4,000.

The question we pose in the present section is: is this hypothesis supported by theory? The answer is generally no, in the sense that it is only for very specific preferences (utility functions) that the asset allocation is optimally left unchanged in the face of changes in wealth levels.
Fortunately, these specific preferences include some of the major utility representations. The principal result in this regard is as follows:

Theorem 5.6 (Cass and Stiglitz, 1970):
Let the vector (â1(Y0), ..., âJ(Y0))′ denote the amounts optimally invested in the J risky assets if the wealth level is Y0. Then

  (â1(Y0), ..., âJ(Y0))′ = (a1, ..., aJ)′ f(Y0)

(for some arbitrary function f(·)) if and only if either
(i) U′(Y0) = (θY0 + κ)^Δ, or
(ii) U′(Y0) = ξe^(−νY0).

There are, of course, implicit restrictions on the choice of θ, κ, Δ, ξ and ν to insure, in particular, that U″(Y0) < 0.³ Integrating (i) and (ii), respectively, in order to recover the utility functions corresponding to these marginal utilities, one finds, significantly, that the first includes the CRRA class of functions,

  U(Y0) = (1/(1 − γ))Y0^(1−γ), γ ≠ 1, and U(Y0) = ln(Y0),

while the second corresponds to the CARA class,

  U(Y0) = −(ξ/ν)e^(−νY0).

In essence, Theorem 5.6 states that it is only in the case of utility functions satisfying constant absolute or constant relative risk aversion preferences (and some generalizations of these functions of minor interest) that the relative composition of the risky portion of an investor's optimal portfolio is invariant to changes in his wealth.⁴ Only in these cases should the investor's portfolio composition be left unchanged as invested wealth increases or decreases. It is only with such utility specifications that the standard "grid" approach to portfolio investing is formally justified.⁵

³For (i), we must have either θ > 0, Δ < 0, and Y0 such that θY0 + κ ≥ 0, or θ < 0, κ > 0, Δ > 0, and Y0 ≤ −κ/θ. For (ii), ξ > 0, ν > 0 and Y0 ≥ 0.
⁴As noted earlier, the constant absolute risk aversion class of preferences has the property that the total amount invested in risky assets is invariant to the level of wealth.
It is not surprising, therefore, that the proportionate allocation among the available risky assets is similarly invariant, as this theorem asserts.

⁵Theorem 5.6 does not mean, however, that the fraction of initial wealth invested in the risk-free asset vs. the risky "mutual fund" is invariant to changes in Y0. The CARA class of preferences discussed in the previous footnote is a case in point.

5.6 Risk Aversion and Savings Behavior

5.6.1 Savings and the Riskiness of Returns

We have thus far considered the relationship between an agent's degree of risk aversion and the composition of his portfolio. A related, though significantly different, question is to ask how an agent's savings rate is affected by an increase in the degree of risk facing him. It is to be expected that the answer to this question will be influenced, in a substantial way, by the agent's degree of risk aversion. Consider first an agent solving the following two-period consumption-savings problem:

  max_s E{U(Y0 − s) + δU(sR̃)},  s.t. Y0 ≥ s ≥ 0,    (5.5)

where Y0 is initial (period zero) wealth, s is the amount saved and entirely invested in a risky portfolio with uncertain gross risky return R̃ = 1 + r̃, U( ) is the agent's period utility-of-consumption function, and δ is his subjective discount factor. Note that this is the first occasion where we have explicitly introduced a time dimension into the analysis (i.e., where the important trade-off involves the present vs. the future): the discount factor δ < 1 captures the agent's preference for current over future consumption.

How is savings affected by an increase in the riskiness of the return R̃? The comparison rests on the function g(R) ≡ RU′(sR), whose expectation appears on the marginal-benefit side of the savings FOC; an SSD increase in return risk raises savings precisely when g is convex, and

  g″(R) = 2sU″(sR) + s²RU‴(sR).    (5.6)

Theorem 5.7:
Let R̃A and R̃B be two return distributions with identical means such that R̃A SSD R̃B, and let sA and sB be, respectively, the savings out of Y0 corresponding to R̃A and R̃B. If R_R′(Y) ≤ 0 and R_R(Y) > 1, then sA < sB; if R_R′(Y) ≥ 0 and R_R(Y) < 1, then sA > sB.

Proof: To prove this assertion we need the following Lemma 5.7.

Lemma 5.7:
R_R′(Y) has the same sign as −[U‴(Y)Y + U″(Y)(1 + R_R(Y))].

Proof: Since R_R(Y) = −Y U″(Y)/U′(Y), differentiation gives

  R_R′(Y) = {[−U‴(Y)Y − U″(Y)]U′(Y) − [−U″(Y)Y]U″(Y)} / [U′(Y)]².
Since U′(Y) > 0, R_R′(Y) has the same sign as

  [−U‴(Y)Y − U″(Y)] − [−U″(Y)Y]U″(Y)/U′(Y)
  = −U‴(Y)Y − U″(Y) − U″(Y)(−YU″(Y)/U′(Y))
  = −{U‴(Y)Y + U″(Y)[1 + R_R(Y)]}. ∎

Now we can proceed with the theorem. We show only the first implication, as the second follows similarly. By the lemma, since R_R′(Y) ≤ 0,

  −{U‴(Y)Y + U″(Y)[1 + R_R(Y)]} ≤ 0, i.e., U‴(Y)Y + U″(Y)[1 + R_R(Y)] ≥ 0.

In addition, since U″(Y) < 0 and R_R(Y) > 1,

  U‴(Y)Y + 2U″(Y) > U‴(Y)Y + U″(Y)[1 + R_R(Y)] ≥ 0.

This is true for all Y; hence 2U″(sR) + sRU‴(sR) > 0. Multiplying left and right by s > 0, one gets 2sU″(sR) + s²RU‴(sR) > 0, which by Equation (5.6) implies g″(R) > 0. But by the earlier remarks, this means that sA < sB, as required. ∎

Theorem 5.7 implies that for the class of constant relative risk aversion utility functions, i.e., functions of the form

  U(c) = c^(1−γ)/(1 − γ), 0 < γ ≠ 1,

an increase in risk increases savings if γ > 1 and decreases it if γ < 1, with the U(c) = ln(c) case being the watershed for which savings is unaffected. For broader classes of utility functions, this theorem provides a partial characterization only, suggesting that different investors react differently according to whether they display declining or increasing relative risk aversion.

A more complete characterization of the issue of interest is afforded if we introduce the concept of prudence, first proposed by Kimball (1990). Let

  P(c) = −U‴(c)/U″(c)

be a measure of absolute prudence, while, by analogy with risk aversion,

  P(c)c = −cU‴(c)/U″(c)

then measures relative prudence. Theorem 5.7 can now be restated as Theorem 5.8:

Theorem 5.8:
Let R̃A, R̃B be two return distributions such that R̃A SSD R̃B, and let sA and sB be, respectively, the savings out of Y0 corresponding to the return distributions R̃A and R̃B.
Then sA ≥ sB if and only if cP(c) ≤ 2, and conversely sA < sB if and only if cP(c) > 2; i.e., risk-averse individuals with relative prudence lower than 2 decrease savings while those with relative prudence above 2 increase savings in the face of an increase in the riskiness of returns.

Proof: We have seen that sA < sB if and only if g″(R) > 0. From Equation (5.6), this means

  sRU‴(sR)/U″(sR) < −2, or cP(c) = −cU‴(c)/U″(c) > 2 with c = sR,

as claimed. The other part of the proposition is proved similarly. ∎

5.6.2 Illustrating Prudence

The relevance of the concept of prudence can be illustrated in the simplest way if we turn to a slightly different problem, where one ignores uncertainty in returns (assuming, in effect, that the net return is identically zero) while asking how savings in period zero is affected by uncertain labor income in period 1. Our remarks in this context are drawn from Kimball (1990).

Let us write the agent's second period labor income Y as Y = Ȳ + Ỹ, where Ȳ is the mean labor income and Ỹ measures deviations from the mean (of course, E Ỹ = 0). The simplest form of the decision problem facing the agent is thus

  max_s E{U(Y0 − s) + βU(s + Ȳ + Ỹ)},

where s = sⁱ satisfies the first order condition

  (i) U′(Y0 − sⁱ) = βE{U′(sⁱ + Ȳ + Ỹ)}.

It will be of interest to compare the solution sⁱ to the above FOC with the solution to the analogous decision problem, denoted sⁱⁱ, in which the uncertain labor income component is absent. The latter FOC is simply

  (ii) U′(Y0 − sⁱⁱ) = βU′(sⁱⁱ + Ȳ).

The issue once again is whether and to what extent sⁱ differs from sⁱⁱ. One approach to this question, which gives content to the concept of prudence, is to ask what the agent would need to be paid (what compensation is required in terms of period 2 income) to ignore labor income risk; in other words, for his first-period consumption and savings decision to be unaffected by uncertainty in labor income.
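Before turning to that answer, the watershed result of Theorems 5.7 and 5.8 can be illustrated numerically. The sketch below is ours, with illustrative parameters and a crude grid search (neither from the text): for CRRA utility relative prudence equals γ + 1, so an increase in return risk should raise savings when γ = 2 and lower them when γ = 0.5.

```python
def savings(gamma, delta, Y0, R_dist):
    """Grid search of the two-period problem max_s U(Y0 - s) + delta E U(s R),
    for CRRA utility with gamma != 1 (cf. Problem 5.5)."""
    U = lambda c: c ** (1 - gamma) / (1 - gamma)
    def welfare(s):
        return U(Y0 - s) + delta * sum(p * U(s * R) for R, p in R_dist)
    grid = [Y0 * k / 10000 for k in range(1, 10000)]   # interior points only
    return max(grid, key=welfare)

safe  = [(1.05, 1.0)]                  # a sure gross return
risky = [(1.35, 0.5), (0.75, 0.5)]     # same mean (1.05): a mean preserving spread

# Relative prudence of CRRA utility is gamma + 1; Theorem 5.8's threshold is 2.
s_safe_hi, s_risky_hi = savings(2.0, 0.96, 10.0, safe), savings(2.0, 0.96, 10.0, risky)
s_safe_lo, s_risky_lo = savings(0.5, 0.96, 10.0, safe), savings(0.5, 0.96, 10.0, risky)
print(s_risky_hi > s_safe_hi)   # True: gamma + 1 = 3 > 2, risk raises savings
print(s_risky_lo < s_safe_lo)   # True: gamma + 1 = 1.5 < 2, risk lowers savings
```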
The answer to this question leads to the definition of the compensating precautionary premium ψ = ψ(Ȳ, Ỹ, s) as the amount of additional second period wealth (consumption) that must be given to the agent in order that the solution to (i) coincide with the solution to (ii). That is, the compensating precautionary premium ψ(Ȳ, Ỹ, s) is defined as the solution of

  U′(Y0 − sⁱⁱ) = βE{U′(sⁱⁱ + Ȳ + Ỹ + ψ(Ȳ, Ỹ, s))}.

Kimball (1990) proves the following two results.

Theorem 5.9:
Let U( ) be three times continuously differentiable and P(s) be the index of absolute prudence. Then

  (i) ψ(Ȳ, Ỹ, s) ≈ (1/2)σ²_Ỹ P(s + Ȳ).

Furthermore, let U1( ) and U2( ) be two second period utility functions for which

  P¹(s) = −U1‴(s)/U1″(s) < −U2‴(s)/U2″(s) = P²(s), for all s.

Then

  (ii) ψ2(Ȳ, Ỹ, s) > ψ1(Ȳ, Ỹ, s) for all s, Ȳ, Ỹ.

Theorem 5.9(i) shows that investors' precautionary premia are directly proportional to the product of their prudence index and the variance of their uncertain income component, a result analogous to the characterization of the measure of absolute risk aversion obtained in Section 4.3. The result of Theorem 5.9(ii) confirms the intuition that the more "prudent" the agent, the greater the compensating premium.

5.6.3 The Joint Saving-Portfolio Problem

Although for conceptual reasons we have so far distinguished the consumption-savings and the portfolio allocation decisions, it is obvious that the two decisions should really be considered jointly. We now formalize the consumption/savings/portfolio allocation problem:

  max_{a,s} U(Y0 − s) + δEU(s(1 + rf) + a(r̃ − rf)),    (5.8)

where s denotes the total amount saved and a is the amount invested in the risky asset.
Specializing the utility function to the form U(Y) = Y^(1−γ)/(1−γ), the first order conditions for this joint decision problem are

  s: −(Y0 − s)^(−γ) + δE{[s(1 + rf) + a(r̃ − rf)]^(−γ)(1 + rf)} = 0,
  a: E{[s(1 + rf) + a(r̃ − rf)]^(−γ)(r̃ − rf)} = 0.

The first equation spells out the condition to be satisfied at the margin for the savings level (and, by corollary, consumption) to be optimal. It involves comparing the marginal utility today with the expected marginal utility tomorrow, with the rate of transformation between consumption today and consumption tomorrow being the product of the discount factor and the gross risk-free return. This FOC need not occupy us any longer here. The interesting element is the second first order condition: it has the exact same form as Equation (5.2), with the endogenous (optimal) s replacing the exogenous initial wealth level Y0. Let us rewrite this equation as

  s^(−γ) E{[(1 + rf) + (a/s)(r̃ − rf)]^(−γ)(r̃ − rf)} = 0,

which implies

  E{[(1 + rf) + (a/s)(r̃ − rf)]^(−γ)(r̃ − rf)} = 0.

This equation confirms the lessons of Equations (5.3) and (5.4): for the selected utility function, the proportion of savings invested in the risky asset is independent of s, the amount saved. This is an important result, which does not generalize to other utility functions, but which opens up the possibility of a straightforward extension of the savings-portfolio problem to a many-period setting. We pursue this important extension in Chapter 14.

5.7 Separating Risk and Time Preferences

In the context of a standard consumption-savings problem such as (5.5), let us suppose once again that the agent's period utility function has been specialized to have the standard CRRA form, U(c) = c^(1−γ)/(1−γ), γ > 0.
For this utility function, the single parameter γ captures not only the agent's sensitivity to risk, but also his sensitivity to consumption variations across time periods and, equivalently, his willingness to substitute consumption in one period for consumption in another. A high γ signals a strong desire for a very smooth intertemporal consumption profile and, simultaneously, a strong reluctance to substitute consumption across periods. To see this more clearly, consider a deterministic version of Problem (5.5) with δ < 1 and R̃ ≡ 1:

max_{0≤s≤Y₀} {U(Y₀ − s) + δU(s)}.

The necessary and sufficient first-order condition is

−(Y₀ − s)^(−γ) + δs^(−γ) = 0, or (Y₀ − s)/s = δ^(−1/γ).

With δ < 1, as the agent becomes more and more risk averse (γ → ∞), (Y₀ − s)/s → 1; i.e., c₀ ≈ c₁. For this preference structure, a highly risk averse agent will also seek an intertemporal consumption profile that is very smooth.

We have stressed repeatedly the pervasiveness of the preference for smooth consumption, whether across time or across states of nature, and its relationship with the notion of risk aversion. It is time to recognize that while in an atemporal setting a desire for smooth consumption (across states of nature) is the very definition of risk aversion, in a multiperiod environment risk aversion and intertemporal consumption smoothing need not be equated. After all, one may speak of intertemporal consumption smoothing in a risk-free, deterministic setting, and one may speak of risk aversion in an uncertain, atemporal environment. The situation considered so far, where the same parameter determines both, is thus restrictive. Empirical studies, indeed, tend to suggest that typical individuals are more averse to intertemporal substitution (they desire very smooth intertemporal consumption) than they are averse to risk per se. This fact cannot be captured in the aforementioned single-parameter setting.
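The deterministic first-order condition can be tabulated for increasing γ; the numbers below (δ = 0.9, Y₀ = 1) are purely illustrative.

```python
# Sketch: the optimal consumption ratio c0/c1 = (Y0 - s)/s = delta^(-1/gamma)
# approaches 1 as gamma grows, even though delta < 1.  delta and Y0 are
# made-up illustrative values.
delta, Y0 = 0.9, 1.0
ratios = {}
for gamma in (1.0, 5.0, 50.0):
    ratio = delta ** (-1.0 / gamma)   # c0/c1 implied by the FOC
    s = Y0 / (1.0 + ratio)            # savings such that Y0 - s = ratio * s
    ratios[gamma] = ratio
    print(gamma, round(ratio, 4), round(s, 4))
```

A very risk averse agent (γ = 50) consumes almost exactly the same amount in both periods, despite discounting the future.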
Is it possible to generalize the standard utility specification and break this coincidence of time and risk preferences? Epstein and Zin (1989, 1991) answer positively and propose a class of utility functions that allows each dimension to be parameterized separately, while still preserving the time consistency property discussed in Section 3.7 of Chapter 3. They provide, in particular, the axiomatic basis for preferences over lotteries leading to a Kreps and Porteus (1978)-like utility representation (see Chapter 3):

U_t = U(c_t, c_{t+1}, c_{t+2}, ...) = W(c_t, CE(Ũ_{t+1})),   (5.9)

where CE(Ũ_{t+1}) denotes the certainty equivalent, in terms of period-t consumption, of the uncertain utility in all future periods. Epstein and Zin (1991) and others (e.g., Weil (1989)) explore the following CES-like specialized version:

U(c_t, CE_{t+1}) = [(1 − δ)c_t^((1−γ)/θ) + δCE_{t+1}^((1−γ)/θ)]^(θ/(1−γ)),   (5.10)

with θ = (1 − γ)/(1 − 1/ρ), 0 < δ < 1, 1 ≠ γ > 0, ρ > 0; or

U(c_t, CE_{t+1}) = (1 − δ)log c_t + δ log CE_{t+1}, γ = 1,   (5.11)

where CE_{t+1} = CE(Ũ_{t+1}) is the certainty equivalent of future utility and is calculated according to

[CE(Ũ_{t+1})]^(1−γ) = E_t[(Ũ_{t+1})^(1−γ)], 1 ≠ γ > 0, or   (5.12)

log CE(Ũ_{t+1}) = E_t(log Ũ_{t+1}), γ = 1.   (5.13)

Epstein and Zin (1989) show that γ can be viewed as the agent's coefficient of risk aversion. Similarly, when the time preference parameter ρ becomes smaller, the agent becomes less willing to substitute consumption intertemporally. If γ = 1/ρ, so that θ = 1, recursive substitution to eliminate Ũ_{t+1} yields

U_t = [(1 − δ)E_t Σ_{j=0}^∞ δ^j c_{t+j}^(1−γ)]^(1/(1−γ)),

which represents the same preferences as

E_t Σ_{j=0}^∞ δ^j c_{t+j}^(1−γ),

and is thus equivalent to the usual time-separable case with CRRA utility. Although seemingly complex, this utility representation turns out to be surprisingly easy to work with in consumption-savings contexts. We will provide illustrations of its use in Chapters 9 and 14.
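The CRRA reduction is easy to verify mechanically. A sketch with made-up numbers, using the θ = 1 recursion U_t^(1−γ) = (1 − δ)c_t^(1−γ) + δE_t[U_{t+1}^(1−γ)] over a finite deterministic path, so that the certainty equivalent of next-period utility is next-period utility itself:

```python
# Sketch (illustrative delta, gamma, and consumption path): when gamma = 1/rho
# the Epstein-Zin recursion unwinds into the time-separable CRRA sum
# (1 - delta) * sum_j delta^j c_{t+j}^(1-gamma).
delta, gamma = 0.95, 2.0
path = [1.0, 1.1, 0.9, 1.2]           # a deterministic consumption path

U_pow = 0.0                           # U^(1-gamma) beyond the horizon
for c in reversed(path):              # backward recursion
    U_pow = (1 - delta) * c ** (1 - gamma) + delta * U_pow

crra_sum = (1 - delta) * sum(delta ** j * c ** (1 - gamma)
                             for j, c in enumerate(path))
print(U_pow, crra_sum)                # the two numbers coincide
```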
Note, however, that (5.10) to (5.13) do not lead to an expected utility representation, since the probabilities do not enter linearly. If one wants to disentangle time and risk preferences, the expected utility framework must be abandoned.

5.8 Conclusions

We have considered, in a very simple context, the relationship between an investor's degree of risk aversion on the one hand, and his desire to save and the composition of his portfolio on the other. Most of the results were intuitively acceptable, and that, in itself, makes us more confident of the VNM representation.

Are there any lessons here for portfolio managers to learn? At least three are suggested. (1) Irrespective of the level of risk aversion, some investment in risky assets is warranted, even for the most risk averse clients (provided E(r̃) > r_f). This is the substance of Theorem 5.1. (2) As the value of a portfolio changes significantly, the asset allocation (the proportion of wealth invested in each asset class) and the composition of the risky portfolio should be reconsidered. How that should be done depends critically on the client's attitude toward risk. This is the substance of Theorems 5.4 to 5.6. (3) Investors are willing, in general, to pay to reduce income (consumption) risk, and would like to enter into mutually advantageous transactions with institutions less risk averse than themselves. The extreme case of this is illustrated in Section 5.5.

We went on to consider how greater return uncertainty influences savings behavior. On this score and in some other instances, this chapter has illustrated the fact that, somewhat surprisingly, risk aversion is not always a sufficient hypothesis to recover intuitive behavior in the face of risk. The third derivative of the utility function often plays a role. The notion of prudence permits an elegant characterization in these situations.
In many ways, this chapter has aimed at providing a broad perspective from which to place Modern Portfolio Theory and its underlying assumptions in their proper context. We are now ready to revisit this pillar of modern finance.

References

Arrow, K. J. (1971), Essays in the Theory of Risk Bearing, Markham, Chicago.

Becker, G., Mulligan, C. (1997), "The Endogenous Determination of Time Preference," Quarterly Journal of Economics, 112, 729-758.

Cass, D., Stiglitz, J. E. (1970), "The Structure of Investor Preferences and Asset Returns, and Separability in Portfolio Allocation: A Contribution to the Pure Theory of Mutual Funds," Journal of Economic Theory, 2, 122-160.

Epstein, L. G., Zin, S. E. (1989), "Substitution, Risk Aversion, and the Temporal Behavior of Consumption Growth and Asset Returns I: Theoretical Framework," Econometrica, 57, 937-969.

Epstein, L. G., Zin, S. E. (1991), "Substitution, Risk Aversion, and the Temporal Behavior of Consumption Growth and Asset Returns II: An Empirical Analysis," Journal of Political Economy, 99, 263-286.

Hartwick, J. (2000), "Labor Supply Under Wage Uncertainty," Economics Letters, 68, 319-325.

Kimball, M. S. (1990), "Precautionary Saving in the Small and in the Large," Econometrica, 58, 53-73.

Kreps, D., Porteus, E. (1978), "Temporal Resolution of Uncertainty and Dynamic Choice Theory," Econometrica, 46, 185-200.

Rothschild, M., Stiglitz, J. (1971), "Increasing Risk II: Its Economic Consequences," Journal of Economic Theory, 3, 66-85.

Weil, Ph. (1989), "The Equity Premium Puzzle and the Riskfree Rate Puzzle," Journal of Monetary Economics, 24, 401-421.

Chapter 6: Risk Aversion and Investment Decisions, Part II: Modern Portfolio Theory

6.1 Introduction

In the context of the previous chapter, we encountered the following canonical portfolio problem:

max_a EU(Ỹ₁) = max_a EU[Y₀(1 + r_f) + a(r̃ − r_f)].   (6.1)
Here the portfolio choice is limited to allocating investable wealth, Y₀, between a risk-free and a risky asset, a being the amount invested in the latter. Slightly more generally, we can admit N risky assets, with returns (r̃₁, r̃₂, ..., r̃_N), as in the Cass-Stiglitz theorem. The above problem then becomes:

max_{a₁,a₂,...,a_N} EU(Y₀(1 + r_f) + Σ_{i=1}^N a_i(r̃_i − r_f)) = max_{w₁,w₂,...,w_N} EU(Y₀(1 + r_f) + Σ_{i=1}^N w_i Y₀(r̃_i − r_f)).   (6.2)

Equation (6.2) re-expresses the problem with w_i = a_i/Y₀, the proportion of wealth invested in risky asset i, being the key decision variable rather than a_i, the money amount invested. The latter expression may further be written as

max_{w₁,w₂,...,w_N} EU(Y₀[(1 + r_f) + Σ_{i=1}^N w_i(r̃_i − r_f)]) = max EU{Y₀[1 + r̃_P]} = max EU(Ỹ₁),   (6.3)

where Ỹ₁ denotes the end-of-period wealth and r̃_P the rate of return on the overall portfolio of assets held. Modern Portfolio Theory (MPT) explores the details of a portfolio choice such as Problem (6.3), (i) under the mean-variance utility hypothesis, and (ii) for an arbitrary number of risky investments, with or without a risk-free asset. The goal of this chapter is to review the fundamentals underlying this theory. We first draw the connection between the mean-variance utility hypothesis and our earlier utility development.

6.2 What provides utility?

As noted in Chapter 3, financial economics assumes that the ultimate source of a consumer's satisfaction is the consumption of the goods and services he is able to purchase.¹

¹ Of course, this does not mean that nothing else in life provides utility or satisfaction(!), but the economist's inquiry is normally limited to the realm of market phenomena and economic choices.

Preference relations and utility functions
are accordingly defined on bundles of consumption goods:

u(c₁, c₂, ..., c_n),   (6.4)

where the indexing i = 1, ..., n is across date-state (contingent) commodities: goods characterized not only by their identity as a product or service, but also by the time and state of nature in which they may be consumed. States of nature, however, are mutually exclusive. For each date and state of nature θ there is a traditional budget constraint

p_{1θ}c_{1θ} + p_{2θ}c_{2θ} + ... + p_{mθ}c_{mθ} ≤ Y_θ,   (6.5)

where the indexing runs across goods for a given state θ; in other words, the m quantities c_{iθ}, i = 1, ..., m, and the m prices p_{iθ}, i = 1, ..., m, correspond to the m goods available in state of nature θ, while Y_θ is the ("end-of-period") wealth level available in that same state. We quite naturally assume that the number of goods available in each state is constant.²

In this context, and in a sense summarizing what we did in the last chapter, it is quite natural to think of an individual's decision problem as being undertaken sequentially, in three steps.

Step 1: The Consumption-Savings Decision. Here the issue is deciding how much to consume versus how much to save today: how to split period-zero income Y₀ between current consumption C₀ and saving S₀ for consumption in the future, where C₀ + S₀ = Y₀.

Step 2: The Portfolio Problem. At this second step, the problem is to choose the assets in which to invest one's savings so as to obtain the desired pattern of end-of-period wealth across the various states of nature. This means, in particular, allocating (Y₀ − C₀) between the risk-free asset and the N risky assets, with (1 − Σ_{i=1}^N w_i)(Y₀ − C₀) representing the investment in the risk-free asset, and (w₁(Y₀ − C₀), w₂(Y₀ − C₀), ..., w_N(Y₀ − C₀)) representing the vector of investments in the various risky assets.
Step 3: Tomorrow's Consumption Choice. Given the realized state of nature and the wealth level obtained, there remains the issue of choosing the consumption bundle that maximizes the utility function [Equation (6.4)] subject to Equation (6.5), where

Y_θ = (Y₀ − C₀)[(1 + r_f) + Σ_{i=1}^N w_i(r_{iθ} − r_f)]

and r_{iθ} denotes the ex-post return to asset i in state θ.

² This is purely formal: if a good is not available in a given state of nature, it is said to exist but with a total economy-wide endowment of the good equal to zero.

In such problems, it is fruitful to work by backward induction, starting from the end (Step 3). Step 3 is a standard microeconomic problem, and for our purposes its solution can be summarized by a utility-of-money function U(Y_θ) representing the (maximum) level of utility that results from optimizing in Step 3, given that the wealth available in state θ is Y_θ. In other words,

U(Y_θ) ≡ max_{(c_{1θ},...,c_{mθ})} u(c_{1θ}, ..., c_{mθ}) s.t. p_{1θ}c_{1θ} + ... + p_{mθ}c_{mθ} ≤ Y_θ.

Naturally enough, maximizing the expected utility of Y_θ across all states of nature becomes the objective of Step 2:

max_{w₁,w₂,...,w_N} EU(Ỹ) = Σ_θ π_θ U(Y_θ).

Here π_θ is the probability of state of nature θ. The end-of-period wealth (a random variable) can now be written as Ỹ = (Y₀ − C₀)(1 + r̃_P), with (Y₀ − C₀) the initial wealth net of date-0 consumption and r̃_P = r_f + Σ_{i=1}^N w_i(r̃_i − r_f) the rate of return on the portfolio of assets in which (Y₀ − C₀) is invested. This brings us back to Equation (6.3). Clearly, with an appropriate redefinition of the utility function,

max EU(Ỹ) = max EU((Y₀ − C₀)(1 + r̃_P)) ≡ max EÛ(r̃_P),

where in all cases the decision variables are the portfolio proportions (or amounts) invested in the different available assets. The level of investable wealth, (Y₀ − C₀), becomes a parameter of the Û(·) representation.
Note that restrictions imposed on the form of the utility function do not have the same meaning when they apply to Û(·), to U(·), or, for that matter, to u(·) [as in Equation (6.4)]. Finally, given the characteristics (e.g., expected return, standard deviation) of the optimally chosen portfolio, the optimal consumption and savings levels can be selected. We are back to Step 1 of the decision problem.

From now on in this chapter we shall work with utility functions defined on the overall portfolio's rate of return r̃_P. This utility index can be further constrained to be a function of the mean and variance (or standard deviation) of the probability distribution of r̃_P. This latter simplification can be accepted either as a working approximation, or it can be seen as resulting from two further (alternative) hypotheses made within the expected utility framework: it must be assumed either that the decision maker's utility function is quadratic, or that asset returns are normally distributed.

The main justification for using a mean-variance approximation is its tractability. As already noted, probability distributions are cumbersome to manipulate and difficult to estimate empirically. Summarizing them by their first two moments is appealing and leads to a rich set of implications that can be tested empirically.

Using a simple Taylor series approximation, one can also see that the mean and variance of an agent's wealth distribution are critical to the determination of his expected utility for any distribution. Let Ỹ denote an investor's end-of-period wealth, an uncertain quantity, and U(·) his utility-of-money function. The Taylor series approximation of his utility of wealth around E(Ỹ) yields

U(Ỹ) = U(E(Ỹ)) + U′(E(Ỹ))(Ỹ − E(Ỹ)) + ½U″(E(Ỹ))(Ỹ − E(Ỹ))² + H₃,   (6.6)

where H₃ = Σ_{j=3}^∞ (1/j!)U^(j)(E(Ỹ))(Ỹ − E(Ỹ))^j.
Now let us compute expected utility using this approximation:

EU(Ỹ) = U(E(Ỹ)) + U′(E(Ỹ))E(Ỹ − E(Ỹ)) + ½U″(E(Ỹ))E[(Ỹ − E(Ỹ))²] + EH₃
       = U(E(Ỹ)) + ½U″(E(Ỹ))σ²(Ỹ) + EH₃,

since E(Ỹ − E(Ỹ)) = 0 and E[(Ỹ − E(Ỹ))²] = σ²(Ỹ). Thus, if EH₃ is small, at least to a first approximation, E(Ỹ) and σ²(Ỹ) are central to determining EU(Ỹ).

If U(Ỹ) is quadratic, U″ is a constant, all higher-order derivatives vanish and, as a result, EH₃ ≡ 0, so E(Ỹ) and σ²(Ỹ) are all that matter. If Ỹ is normally distributed, EH₃ can be expressed in terms of E(Ỹ) and σ²(Ỹ), so the approximation is exact in this case as well. These well-known assertions are detailed in Appendix 6.1, where it is also shown that, under either of the above hypotheses, indifference curves in the mean-standard deviation space are increasing and convex to the origin.

Assuming the utility objective function is quadratic is, however, not fully satisfactory, since the preference representation would then possess an attribute we deemed fairly implausible in Chapter 4: increasing absolute risk aversion (IARA). On this ground, supposing that all or most investors have a quadratic utility function is very restrictive. The normality hypothesis on the rate of return processes is easy to check directly, but we know it cannot, in general, be satisfied exactly. Limited liability instruments such as stocks can pay at worst a return of −100% (complete loss of the investment), whereas a normal distribution places positive probability on returns below that bound. Even more clearly at odds with the normality hypothesis, default-free (government) bonds always yield a positive return (abstracting from inflation). Option-based instruments, which are increasingly prevalent, are also characterized by asymmetric probability distributions. These remarks suggest that our analysis to follow must be viewed as a (useful and productive) approximation.
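The quality of the two-moment approximation is easy to gauge numerically. A small sketch, assuming log utility and a two-point wealth distribution (all numbers illustrative):

```python
# Sketch: mean-variance approximation of expected utility,
# EU(Y) ~ U(E[Y]) + 0.5 * U''(E[Y]) * var(Y), for U = log and a two-point
# wealth distribution.  mu and sigma are made-up illustrative values.
import math

mu, sigma = 100.0, 5.0
outcomes = [(mu - sigma, 0.5), (mu + sigma, 0.5)]      # mean mu, variance sigma^2

exact = sum(p * math.log(y) for y, p in outcomes)
approx = math.log(mu) + 0.5 * (-1.0 / mu ** 2) * sigma ** 2   # U''(y) = -1/y^2
print(exact, approx)
```

For this modest level of risk the two numbers agree to several decimal places; the gap (the neglected EH₃ term) grows with σ and with the curvature of U.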
Box 6.1: About the Probability Distribution of Returns

As noted in the text, the assumption that period returns (e.g., daily, monthly, annual) are normally distributed is inconsistent with the limited liability feature of most financial instruments; i.e., r̃_it ≥ −1 for most securities i. It furthermore presents a problem for computing compounded cumulative returns: the product of normally distributed random variables (returns) is not itself normally distributed. These objections are made moot if we assume that it is the continuously compounded rate of return, r̃^c_it = log(1 + r̃_it), that is normally distributed. This is consistent with limited liability, since Y₀e^(r^c_it) ≥ 0 for any r^c_it ∈ (−∞, +∞). It has the added feature that cumulative continuously compounded returns are normally distributed, since the sum of normally distributed random variables is itself normally distributed. The working assumption in empirical financial economics is that continuously compounded equity returns are i.i.d. normal; in other words, for all times t, r̃^c_it ∼ N(µ_i, σ_i). By way of language, we say that the discrete period returns r̃_it are lognormally distributed, because their logarithm is normally distributed.

There is substantial statistical evidence to support this assumption, subject, however, to a number of qualifications.

1. First, while the normal distribution is perfectly symmetric about its mean, daily stock returns are frequently skewed to the right. Conversely, the returns to certain stock indices appear skewed to the left.³

2. Second, the sample daily return distributions for many individual stocks exhibit "excess kurtosis" or "fat tails"; i.e., there is more probability mass in the tails than would be justified by the normal distribution. The same is true of stock indices. The extent of this excess kurtosis diminishes substantially, however, when monthly data are used.⁴

Figure 6.1 illustrates these features for the returns on the Dow Jones and the S&P500.
Both indices display negative skewness and a significant degree of kurtosis.

There is one further complication. Even if individual stock returns are lognormally distributed, the returns to a portfolio of such stocks need not be lognormal (the log of a sum is not equal to the sum of the logs). The extent of the error introduced by assuming lognormal portfolio returns is usually not great, however, if the return period is short (e.g., daily).

³ Skewness: the extent to which a distribution is "pushed left or right" off symmetry is measured by the skewness statistic S(r̃_it) = E[(r̃_it − µ_i)³]/σ_i³. S(r̃_it) ≡ 0 if r̃_it is normally distributed; S(r̃_it) > 0 suggests a rightward skew, and conversely if S(r̃_it) < 0.

⁴ Kurtosis: the weight of the tails of a distribution is measured by the kurtosis statistic K(r̃_it) = E[(r̃_it − µ_i)⁴]/σ_i⁴, which equals 3 if r̃_it is normally distributed; K(r̃_it) > 3 indicates fat tails.

6.3 Description of the Opportunity Set in the Mean-Variance Space: The Gains from Diversification and the Efficient Frontier

Consider an investor whose preferences are summarized by a utility index U(µ_P, σ_P) defined over the mean and standard deviation of his portfolio's rate of return, with U₁ > 0 and U₂ < 0: he likes expected return (µ_P) and dislikes standard deviation (σ_P). In this context, one recalls that an asset (or portfolio) A is said to mean-variance dominate an asset (or portfolio) B if µ_A ≥ µ_B and simultaneously σ_A < σ_B, or if µ_A > µ_B while σ_A ≤ σ_B. We can then define the efficient frontier as the locus of all non-dominated portfolios in the mean-standard deviation space. By definition, no ("rational") mean-variance investor would choose to hold a portfolio not located on the efficient frontier. The shape of the efficient frontier is thus of primary interest.

Let us examine the efficient frontier in the two-asset case for a variety of possible asset return correlations. The basis for the results of this section is the formula for the variance of a portfolio of two assets, 1 and 2, defined by their respective expected returns, r̄₁ and r̄₂, standard deviations, σ₁ and σ₂, and their correlation ρ₁,₂:

σ_P² = w₁²σ₁² + (1 − w₁)²σ₂² + 2w₁(1 − w₁)σ₁σ₂ρ₁,₂,

where w_i is the proportion of the portfolio allocated to asset i. The following results, detailed in Appendix 6.2, are of importance.

Case 1 (Reference).
In the case of two risky assets with perfectly positively correlated returns (ρ₁,₂ = +1), the efficient frontier is linear. In that extreme case the two assets are essentially identical, there is no gain from diversification, and the portfolio's standard deviation is simply the weighted average of the standard deviations of the component assets:

σ_R = w₁σ₁ + (1 − w₁)σ₂.

As a result, the equation of the efficient frontier is

µ_R = r̄₁ + [(r̄₂ − r̄₁)/(σ₂ − σ₁)](σ_R − σ₁),

as depicted in Figure 6.2. It assumes positive amounts of both assets are held.

Insert Figure 6.2

Case 2. In the case of two risky assets with imperfectly correlated returns, the standard deviation of the portfolio is necessarily smaller than it would be if the two component assets were perfectly correlated. By the previous result, one must have

σ_P < w₁σ₁ + (1 − w₁)σ₂,

provided the proportions are not 0 or 1. Thus, the efficient frontier must lie to the left of the straight line of Figure 6.2. This is illustrated in Figure 6.3 for different values of ρ₁,₂.

Insert Figure 6.3

The smaller the correlation (the further away from +1), the further to the left is the efficient frontier, as demonstrated formally in Appendix 6.2. Note that the diagram makes clear that, in this case, some portfolios made up of assets 1 and 2 are in fact dominated by other portfolios. Unlike in Case 1, not all feasible portfolios are efficient. In view of future developments, it is useful to distinguish the minimum variance frontier from the efficient frontier. In the present case, all portfolios between A and B belong to the minimum variance frontier; that is, they correspond to the combinations of assets achieving minimum variance for the various attainable levels of expected return. However, certain levels of expected return are not efficient targets, since higher returns can be obtained for identical levels of risk. Thus portfolio C is minimum variance, but it is not efficient, being dominated by portfolio D, for instance.
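The two-asset variance formula driving Cases 1 and 2 can be checked in a few lines of code (the weights and standard deviations below are hypothetical):

```python
# Sketch of the two-asset portfolio variance formula with illustrative inputs
# (w1 = 0.5, sigma1 = 0.20, sigma2 = 0.30): portfolio risk falls as the
# correlation rho moves away from +1.
import math

w1, s1, s2 = 0.5, 0.20, 0.30

def sigma_P(rho):
    var = (w1 ** 2 * s1 ** 2 + (1 - w1) ** 2 * s2 ** 2
           + 2 * w1 * (1 - w1) * s1 * s2 * rho)
    return math.sqrt(var)

for rho in (1.0, 0.5, 0.0, -1.0):
    print(rho, round(sigma_P(rho), 4))
# rho = +1 reproduces the weighted average w1*s1 + (1-w1)*s2 = 0.25;
# rho = -1 gives |w1*s1 - (1-w1)*s2| = 0.05, anticipating the Case 3 limit.
```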
Figure 6.3 again assumes that positive amounts of both assets (A and B) are held.

Case 3. If the two risky assets have returns that are perfectly negatively correlated (ρ₁,₂ = −1), one can show that the minimum variance portfolio is risk free, while the frontier is once again linear. Its graphical representation in that case appears in Figure 6.4, with the corresponding demonstration placed in Appendix 6.2.

Insert Figure 6.4

Case 4. If one of the two assets is risk free, then the efficient frontier is a straight line originating on the vertical axis at the level of the risk-free return. In the absence of a short-sales restriction, that is, if it is possible to borrow at the risk-free rate to leverage one's holdings of the risky asset, then, intuitively enough, the overall portfolio can be made riskier than the riskiest of the existing assets. In other words, it can be made riskier than the single risky asset, and the efficient frontier must therefore extend to the right of the point (σ₂, r̄₂) (defining asset 1 as the risk-free asset). This situation is depicted in Figure 6.5, with the corresponding results demonstrated in Appendix 6.2.

Insert Figure 6.5

Case 5 (n risky assets). It is important to realize that a portfolio is also an asset, fully defined by its expected return, its standard deviation, and its correlations with the other existing assets or portfolios. Thus, the previous analysis with two assets is more general than it appears: it can easily be repeated with one of the two assets being a portfolio. In that way, one can extend the analysis from two to three assets, from three to four, and so on. If there are n risky, imperfectly correlated assets, then the efficient frontier will have the bullet shape of Figure 6.6. Adding an extra asset to the two-asset framework improves the diversification possibilities and, in principle, displaces the efficient frontier to the left.

Case 6.
If there are n risky assets and a risk-free one, the efficient frontier is a straight line once again. To arrive at this conclusion, let us arbitrarily pick one portfolio on the efficient frontier of risky assets, say portfolio E in Figure 6.6, and construct all the possible portfolios combining E and the risk-free asset.

Insert Figure 6.6

What we learned above tells us that the set of such portfolios is the straight line joining the point (0, r_f) to E. Now we can quickly check that all portfolios on this line are dominated by those we can create by combining the risk-free asset with portfolio F. Continuing our reasoning in this way and searching for the highest similar line joining (0, r_f) to the bullet-shaped frontier of risky assets, we obtain, as the truly efficient frontier, the straight line originating from (0, r_f) that is tangent to the risky-asset frontier. Let T denote the tangency portfolio. As before, if we allow a short position in the risk-free asset, the efficient frontier extends beyond T; it is represented by the broken line in Figure 6.6.

Formally, with n assets (possibly one of them risk free), the efficient frontier is obtained as the relevant (non-dominated) portion of the minimum variance frontier, the latter being the solution, for all possible expected returns µ, to the following quadratic program (QP):

min_{w₁,...,w_n} Σ_i Σ_j w_i w_j σ_ij   (QP)
s.t. Σ_i w_i r̄_i = µ,
     Σ_i w_i = 1.

In (QP) we search for the vector of weights that minimizes the variance of the portfolio (verify that you understand the expression for the portfolio variance in the case of n assets) under the constraint that the expected return on the portfolio must be µ. This defines one point on the minimum variance frontier. One can then change the fixed value of µ, equating it successively to all plausible levels of portfolio expected return; in this way one effectively traces out the minimum variance frontier.⁵
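With equality constraints only, (QP) admits a closed-form solution obtained from the Lagrangian. The sketch below computes one point of the minimum variance frontier for three hypothetical risky assets (the expected returns and covariance matrix are invented for illustration):

```python
# Sketch: closed-form minimum variance weights for (QP) with the two equality
# constraints w'rbar = mu and w'1 = 1.  Inputs are illustrative, not from the text.
import numpy as np

rbar = np.array([0.06, 0.09, 0.12])          # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],        # hypothetical covariance matrix
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
ones = np.ones(3)

Sinv = np.linalg.inv(Sigma)
a = rbar @ Sinv @ rbar
b = rbar @ Sinv @ ones
c = ones @ Sinv @ ones
d = a * c - b ** 2

def min_var_weights(mu):
    # Lagrange multipliers on the two constraints of (QP)
    lam = (c * mu - b) / d
    gam = (a - b * mu) / d
    return Sinv @ (lam * rbar + gam * ones)

w = min_var_weights(0.10)
print(w, w @ rbar, w.sum())    # weights hit the targeted mean and sum to one
```

Varying the target µ and re-solving traces out the whole frontier; adding inequality constraints such as w_i ≥ 0 breaks the closed form and requires a numerical QP solver.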
Program (QP) is the simplest version of a family of similar quadratic programs used in practice, in the sense that it includes only the minimal set of constraints. The first constraint is merely an artifice: it fixes the expected return to be reached in a context where µ is a parameter. The second constraint is simply the assertion that the vector of w_i's defines a portfolio (and thus that the weights add up to one). Many other constraints can be added to customize the portfolio selection process without altering the basic structure of Problem (QP).

Probably the most common implicit or explicit constraint facing an investor involves limiting his investment universe. The well-known home bias puzzle refers to the difficulty of explaining, from the MPT viewpoint, why investors do not invest a larger fraction of their portfolios in stocks quoted "away from home," that is, in international or emerging markets. This can be viewed as the result of an unconscious limitation of the investment universe considered by the investor. Self-limitation may also be fully conscious and explicit, as in the case of "ethical" mutual funds that exclude arms manufacturers or companies with a tarnished ecological record from their investment universe. Such constraints are easily accommodated in our setup, as the corresponding assets simply appear, or do not appear, in the list of the N assets under consideration.

Other common constraints are non-negativity constraints (w_i ≥ 0), indicating the impossibility of short selling some or all of the assets under consideration. Short selling may be impossible for feasibility reasons (exchanges or brokers may not allow it for certain instruments) or, more frequently, for regulatory reasons applying to specific types of investors, for example, pension funds.
An investor may also wish to construct an efficient portfolio subject to the constraint that his holdings of some stock should not, in value terms, fall below a certain level (perhaps because of potential tax liabilities, or because ownership of a large block of the stock affords some degree of managerial control). This requires a constraint of the form

w_j ≥ V_j/V_P,

where V_j is the current value of his holdings of stock j and V_P is the overall value of his portfolio. Other investors may wish to obtain the lowest risk subject to a required expected return constraint, and/or be subject to a constraint limiting the number of stocks in their portfolio (in order, possibly, to economize on transaction costs). An investor may, for example, wish to hold at most 3 out of a possible 10 stocks, yet to hold those 3 that give the minimum risk subject to a required return constraint. With certain modifications, this possibility can be accommodated in (QP) as well. Appendix 6.3 details how Microsoft Excel can be used to construct the portfolio efficient frontier under these and other constraints.

⁵ While in principle one could as well maximize the portfolio's expected return for given levels of standard deviation, it turns out to be computationally more efficient to proceed the other way around.

6.4 The Optimal Portfolio: A Separation Theorem

The optimal portfolio is naturally defined as the portfolio maximizing the investor's (mean-variance) utility; in other words, the portfolio at which he is able to reach the highest indifference curve, curves which we know to be increasing and convex from the origin. If the efficient frontier has the shape described in Figure 6.5, that is, if there is a risk-free asset, then the tangency point between an investor's indifference curve and the frontier lies on the same straight line for every investor, irrespective of his degree of risk aversion.
Let there be two investors sharing the same perceptions as to expected returns, variances, and return correlations, but differing in their willingness to take risks. The relevant efficient frontier will be identical for these two investors, although their optimal portfolios will be represented by different points on the same line: with differently shaped indifference curves, the tangency points must differ. See Figure 6.7.

Insert Figure 6.7

However, it is a fact that our two investors will invest in the same two funds: the risk-free asset on the one hand, and, on the other, the risky portfolio (T) identified by the tangency point between the straight line originating from the vertical axis and the bullet-shaped frontier of risky assets. This is the two-fund theorem, also known as the separation theorem, because it implies that the optimal portfolio of risky assets can be identified separately from the knowledge of an investor's risk preferences. This result will play a significant role in the next chapter, when we construct the Capital Asset Pricing Model.

6.5 Conclusions

First, it is important to keep in mind that everything said so far applies regardless of the (possibly normal) probability distributions of returns representing the subjective expectations of the particular investor upon whom we are focusing. Market equilibrium considerations come next.

Second, although initially conceived in the context of descriptive economic theories, the success of portfolio theory arose primarily from the possibility of giving it a normative interpretation; that is, of seeing the theory as providing a guide on how to proceed in identifying a potential investor's optimal portfolio. In particular, it points to the information requirements to be fulfilled (ideally).
Even if we accept the restrictions implied by mean-variance analysis, one cannot identify an optimal portfolio without spelling out expectations on mean returns, standard deviations of returns, and correlations among returns. One can view the role of the financial analyst as providing plausible figures for the relevant statistics, or offering alternative scenarios for consideration by the would-be investor. This is the first step in the search for an optimal portfolio. The computation of the efficient frontier is the second step; it essentially involves solving the quadratic programming problem (QP), possibly in conjunction with constraints specific to the investor. The third and final step consists of defining, at a more or less formal level, the investor's risk tolerance and, on that basis, identifying his optimal portfolio.

References

Markowitz, H. M. (1952), "Portfolio Selection," Journal of Finance, 7, 77-91.

Tobin, J. (1958), "Liquidity Preference as Behavior Towards Risk," Review of Economic Studies, 26, 65-86.

Appendix 6.1: Indifference Curves Under Quadratic Utility or Normally Distributed Returns

In this appendix we demonstrate more rigorously that if the investor's utility function is quadratic, or if returns are normally distributed, his expected utility of the portfolio's rate of return is a function of the portfolio's mean return and standard deviation only (Part I). We subsequently show that in either case the investor's indifference curves are convex to the origin (Part II).

Part I

If the utility function is quadratic, it can be written as

U(r̃_P) = a + br̃_P + cr̃_P²,

where r̃_P denotes the portfolio's rate of return. Let the constant a = 0 in what follows, since it does not play any role. For this function to make sense we must have b > 0 and c < 0.
The first and second derivatives are, respectively,

U′(r̃_P) = b + 2c r̃_P,  and  U″(r̃_P) = 2c < 0.

Expected utility is then of the form

E(U(r̃_P)) = b E(r̃_P) + c E((r̃_P)^2) = b μ_P + c μ_P^2 + c σ_P^2,

that is, of the form g(σ_P, μ_P). As shown in Figure A6.1, this function is strictly concave. But it must be restricted to ensure positive marginal utility: r̃_P < −b/(2c). Moreover, the coefficient of absolute risk aversion is increasing (R′_A > 0). These two characteristics are unpleasant, and they prevent a more systematic use of the quadratic utility function.

Insert Figure A6.1

Alternatively, if the individual asset returns r̃_i are normally distributed, r̃_P = Σ_i w_i r̃_i is normally distributed as well. Let r̃_P have density f(r̃_P), where

f(r̃_P) = N(r̃_P; μ_P, σ_P).

The standard normal variate Z̃ is defined by

Z̃ = (r̃_P − μ_P)/σ_P ∼ N(Z̃; 0, 1).   (6.7)

Thus r̃_P = σ_P Z̃ + μ_P, and

E(U(r̃_P)) = ∫_{−∞}^{+∞} U(r_P) f(r_P) dr_P = ∫_{−∞}^{+∞} U(σ_P Z + μ_P) N(Z; 0, 1) dZ.   (6.8)

The quantity E(U(r̃_P)) is again a function of σ_P and μ_P only. Maximizing E(U(r̃_P)) amounts to choosing the w_i so that the corresponding σ_P and μ_P maximize the integral (6.8).

Part II

Construction of indifference curves in the mean-variance space. There are again two cases.

U is quadratic. An indifference curve in the mean-variance space is defined as the set

{(σ_P, μ_P) | E(U(r̃_P)) = b μ_P + c μ_P^2 + c σ_P^2 = k}, for some utility level k.

This can be rewritten as

σ_P^2 + μ_P^2 + (b/c) μ_P + b^2/(4c^2) = k/c + b^2/(4c^2)
σ_P^2 + (μ_P + b/(2c))^2 = k/c + b^2/(4c^2).

This equation defines the set of points (σ_P, μ_P) located on the circle of radius √(k/c + b^2/(4c^2)) and of center (0, −b/(2c)), as in Figure A6.2.

Insert Figure A6.2

In the relevant portion of the (σ_P, μ_P) space, indifference curves thus have positive slope and are convex to the origin.

The Distribution of R is Normal.
One wants to describe the set

{(σ_P, μ_P) | ∫_{−∞}^{+∞} U(σ_P Z + μ_P) N(Z; 0, 1) dZ = Ū}.

Differentiating totally yields

0 = ∫_{−∞}^{+∞} U′(σ_P Z + μ_P)(Z dσ_P + dμ_P) N(Z; 0, 1) dZ,

or

dμ_P/dσ_P = − [∫_{−∞}^{+∞} U′(σ_P Z + μ_P) Z N(Z; 0, 1) dZ] / [∫_{−∞}^{+∞} U′(σ_P Z + μ_P) N(Z; 0, 1) dZ].

If σ_P = 0 (at the origin), U′(μ_P) factors out of both integrals and

dμ_P/dσ_P = − [∫_{−∞}^{+∞} Z N(Z; 0, 1) dZ] / [∫_{−∞}^{+∞} N(Z; 0, 1) dZ] = 0.

If σ_P > 0, dμ_P/dσ_P > 0. Indeed, the denominator is positive, since U′(·) is positive by assumption and N(Z; 0, 1), being a probability density function, is always positive. The numerator, ∫_{−∞}^{+∞} U′(σ_P Z + μ_P) Z N(Z; 0, 1) dZ, is negative under the hypothesis that the investor is risk averse, in other words, that U(·) is strictly concave. Under this hypothesis, the marginal utility associated with each negative value of Z is larger than the marginal utility associated with the corresponding positive value. Since this is true for all pairs ±Z, the integral in the numerator is negative. See Figure A6.3 for an illustration.

Proof of the Convexity of Indifference Curves

Let two points (σ_P^1, μ_P^1) and (σ_P^2, μ_P^2) lie on the same indifference curve, offering the same level of expected utility Ū. Consider the point (σ̂_P, μ̂_P), where

σ̂_P = α σ_P^1 + (1 − α) σ_P^2  and  μ̂_P = α μ_P^1 + (1 − α) μ_P^2.

One would like to prove that

E(U(σ̂_P Z̃ + μ̂_P)) > α E(U(σ_P^1 Z̃ + μ_P^1)) + (1 − α) E(U(σ_P^2 Z̃ + μ_P^2)) = Ū.

By the strict concavity of U, the inequality

U(σ̂_P Z̃ + μ̂_P) > α U(σ_P^1 Z̃ + μ_P^1) + (1 − α) U(σ_P^2 Z̃ + μ_P^2)

is verified for all (σ_P^1, μ_P^1) and (σ_P^2, μ_P^2). One may thus write

∫_{−∞}^{+∞} U(σ̂_P Z + μ̂_P) N(Z; 0, 1) dZ > α ∫_{−∞}^{+∞} U(σ_P^1 Z + μ_P^1) N(Z; 0, 1) dZ + (1 − α) ∫_{−∞}^{+∞} U(σ_P^2 Z + μ_P^2) N(Z; 0, 1) dZ,

or

E(U(σ̂_P Z̃ + μ̂_P)) > α E(U(σ_P^1 Z̃ + μ_P^1)) + (1 − α) E(U(σ_P^2 Z̃ + μ_P^2)), or
E(U(σ̂_P Z̃ + μ̂_P)) > α Ū + (1 − α) Ū = Ū.

See Figure A6.4 for an illustration.
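The slope and convexity conclusions can be confirmed for a concrete concave utility function. The sketch below assumes CARA utility U(r) = −e^{−ar} (an illustrative choice, not the text's general U); under normality this gives E(U) in closed form, so the indifference-curve slope can be read off directly:

```python
import numpy as np

# Assumed concave utility: CARA, U(r) = -exp(-a*r), with normal returns.
a = 3.0
mu_P, sigma_P = 0.05, 0.20

rng = np.random.default_rng(1)
z = rng.standard_normal(500_000)
eu_mc = np.mean(-np.exp(-a * (sigma_P * z + mu_P)))     # simulated E[U]

# Closed form under normality: E[U] = -exp(-a*mu + a^2*sigma^2/2).
eu_exact = -np.exp(-a * mu_P + a**2 * sigma_P**2 / 2)

# An indifference curve therefore satisfies mu = const + (a/2)*sigma^2:
# its slope d(mu)/d(sigma) = a*sigma is 0 at sigma = 0, positive for
# sigma > 0, and increasing in sigma (a convex curve), as in the proof.
def slope_at(s):
    return a * s

print(eu_mc, eu_exact, slope_at(0.0), slope_at(sigma_P))
```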
Insert Figure A6.4

Appendix 6.2: The Shape of the Efficient Frontier; Two Assets; Alternative Hypotheses

Perfect Positive Correlation (Figure 6.2)

ρ_{1,2} = 1. Then:

μ_P = w_1 r̄_1 + (1 − w_1) r̄_2 = r̄_1 + (1 − w_1)(r̄_2 − r̄_1)
σ_P^2 = w_1^2 σ_1^2 + (1 − w_1)^2 σ_2^2 + 2 w_1 (1 − w_1) σ_1 σ_2 ρ_{1,2}
      = w_1^2 σ_1^2 + (1 − w_1)^2 σ_2^2 + 2 w_1 (1 − w_1) σ_1 σ_2
      = [w_1 σ_1 + (1 − w_1) σ_2]^2   [a perfect square]

Hence σ_P = w_1 σ_1 + (1 − w_1) σ_2, the weighted average of the standard deviations of the individual asset returns, and

w_1 = (σ_P − σ_2)/(σ_1 − σ_2);  1 − w_1 = (σ_1 − σ_P)/(σ_1 − σ_2),

so that

μ_P = r̄_1 + [(σ_1 − σ_P)/(σ_1 − σ_2)](r̄_2 − r̄_1) = r̄_1 + [(r̄_2 − r̄_1)/(σ_2 − σ_1)](σ_P − σ_1):

μ_P is linear in σ_P.

Imperfectly Correlated Assets (Figure 6.3)

−1 < ρ_{1,2} < 1. Reminder:

μ_P = w_1 r̄_1 + (1 − w_1) r̄_2
σ_P^2 = w_1^2 σ_1^2 + (1 − w_1)^2 σ_2^2 + 2 w_1 (1 − w_1) σ_1 σ_2 ρ_{1,2}

Thus

∂σ_P^2/∂ρ_{1,2} = 2 w_1 (1 − w_1) σ_1 σ_2 > 0,

which implies σ_P < w_1 σ_1 + (1 − w_1) σ_2: σ_P is smaller than the weighted average of the σ's, so there are gains from diversifying. Fix μ_P, hence w_1, and observe: as one decreases ρ_{1,2} (from +1 to −1), σ_P^2 diminishes (and thus also σ_P). Hence the opportunity set for ρ_{1,2} < 1 must lie to the left of the line AB (ρ_{1,2} = 1), except at the extremes.
w_1 = 0 ⇒ μ_P = r̄_2 and σ_P^2 = σ_2^2;
w_1 = 1 ⇒ μ_P = r̄_1 and σ_P^2 = σ_1^2.

Perfect Negative Correlation (Figure 6.4)

ρ_{1,2} = −1. With w_2 = 1 − w_1,

σ_P^2 = w_1^2 σ_1^2 + (1 − w_1)^2 σ_2^2 − 2 w_1 (1 − w_1) σ_1 σ_2 = [w_1 σ_1 − (1 − w_1) σ_2]^2   [a perfect square again]

σ_P = ±[w_1 σ_1 − (1 − w_1) σ_2] = ±[w_1 (σ_1 + σ_2) − σ_2]
w_1 = (±σ_P + σ_2)/(σ_1 + σ_2)
σ_P = 0 ⇔ w_1 = σ_2/(σ_1 + σ_2)

μ_P = [(±σ_P + σ_2)/(σ_1 + σ_2)] r̄_1 + [1 − (±σ_P + σ_2)/(σ_1 + σ_2)] r̄_2
    = [(±σ_P + σ_2)/(σ_1 + σ_2)] r̄_1 + [(σ_1 ∓ σ_P)/(σ_1 + σ_2)] r̄_2
    = [σ_2/(σ_1 + σ_2)] r̄_1 + [σ_1/(σ_1 + σ_2)] r̄_2 ± σ_P (r̄_1 − r̄_2)/(σ_1 + σ_2)

One Riskless and One Risky Asset (Figure 6.5)

Asset 1: r̄_1, σ_1 = 0 (riskless); Asset 2: r̄_2, σ_2; r̄_1 < r̄_2.

μ_P = w_1 r̄_1 + (1 − w_1) r̄_2
σ_P^2 = w_1^2 σ_1^2 + (1 − w_1)^2 σ_2^2 + 2 w_1 (1 − w_1) cov_{1,2} = (1 − w_1)^2 σ_2^2,

since σ_1 = 0 and cov_{1,2} = ρ_{1,2} σ_1 σ_2 = 0; thus σ_P = (1 − w_1) σ_2, and w_1 = 1 − σ_P/σ_2.

Appendix 6.3: Constructing the Efficient Frontier

In this appendix we outline how Excel's SOLVER program may be used to construct an efficient frontier using historical data on returns. Our method does not require the explicit computation of means, standard deviations, and return correlations for the various securities under consideration; they are implicitly obtained from the data directly.

The Basic Portfolio Problem

Let us, for purposes of illustration, assume that we have assembled a time series of four data points (monthly returns) for each of three stocks, and let us further assume that these four realizations fully describe the relevant return distributions. We also assign equal probability to the states underlying these realizations. Table A6.1 presents this hypothetical data.

Table A6.1: Hypothetical Return Data

State    Prob    Stock 1    Stock 2    Stock 3
1        .25      6.23%      5.10%      7.02%
2        .25      -.68%      4.31%       .79%
3        .25      5.55%     -1.27%      -.21%
4        .25     -1.96%      4.52%     10.30%
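Although the SOLVER formulation never uses them explicitly, the statistics implicit in Table A6.1 can be recovered directly; a small numpy sketch:

```python
import numpy as np

# Table A6.1 returns (%): 4 equally likely states x 3 stocks.
R = np.array([[ 6.23,  5.10,  7.02],
              [ -.68,  4.31,   .79],
              [ 5.55, -1.27,  -.21],
              [-1.96,  4.52, 10.30]])

means = R.mean(axis=0)            # per-stock means: 2.285, 3.165, 4.475 (%)
devs = R - means
cov = devs.T @ devs / 4           # population covariance (each state p = .25)
sds = np.sqrt(np.diag(cov))
corr = cov / np.outer(sds, sds)
print(means)
print(np.round(sds, 2), np.round(corr, 2))
```

Note the negative correlations of stock 1 with the other two: it is these that make the diversified minimum-SD portfolios of this appendix attractive.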
Following our customary notation, let w_i represent the fraction of wealth invested in asset i, i = 1, 2, 3, and let r_{P,θ_j} represent the return on a portfolio of these assets in the case of event θ_j, j = 1, 2, 3, 4. The Excel formulation analogous to problem (QP) of the text is found in Table A6.2, where (A1) through (A4) define the portfolio's return in each of the four states; (A5) defines the portfolio's average return; (A6) places a bound on the expected return (by varying μ, it is possible to trace out the efficient frontier); (A7) defines the standard deviation when each state is equally probable; and (A8) is the budget constraint.

Table A6.2: The Excel Formulation of the (QP) Problem

min_{w_1, w_2, w_3} SD  (minimize portfolio standard deviation)

subject to:
(A1) r_{P,θ_1} = 6.23 w_1 + 5.10 w_2 + 7.02 w_3
(A2) r_{P,θ_2} = −.68 w_1 + 4.31 w_2 + .79 w_3
(A3) r_{P,θ_3} = 5.55 w_1 − 1.27 w_2 − .21 w_3
(A4) r_{P,θ_4} = −1.96 w_1 + 4.52 w_2 + 10.30 w_3
(A5) r̄_P = .25 r_{P,θ_1} + .25 r_{P,θ_2} + .25 r_{P,θ_3} + .25 r_{P,θ_4}
(A6) r̄_P ≥ μ = 3
(A7) SD = SQRT(.25[(r_{P,θ_1} − r̄_P)^2 + (r_{P,θ_2} − r̄_P)^2 + (r_{P,θ_3} − r̄_P)^2 + (r_{P,θ_4} − r̄_P)^2])
(A8) w_1 + w_2 + w_3 = 1

The Excel-based solution to this problem is w_1 = .353, w_2 = .535, w_3 = .111 when μ is fixed at μ = 3.0%. The corresponding portfolio mean and standard deviation are r̄_P = 3.00 and σ_P = 1.67. Screen 1 describes the Excel setup for this case.

Insert Figure A6.5 about here

Notice that this approach does not require the computation of individual security expected returns, variances, or correlations, but it is fundamentally no different from problem (QP) in the text, which does require them. Notice also that by recomputing min SD for a number of different values of μ, the efficient frontier can be well approximated.

Generalizations

The approach described above is very flexible and accommodates a number of variations, all of which amount to specifying further constraints.

Non-Negativity Constraints

These amount to restrictions on short selling.
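The same minimum-SD program can be reproduced outside Excel. The sketch below uses scipy's SLSQP solver as a minimal stand-in for SOLVER (non-negativity bounds are included, which leave this particular solution unaffected):

```python
import numpy as np
from scipy.optimize import minimize

# Table A6.1 returns (%): 4 equally likely states x 3 stocks.
R = np.array([[ 6.23,  5.10,  7.02],
              [ -.68,  4.31,   .79],
              [ 5.55, -1.27,  -.21],
              [-1.96,  4.52, 10.30]])
p = np.full(4, .25)

def sd(w):                        # (A7): probability-weighted std deviation
    rP = R @ w
    return np.sqrt(p @ (rP - p @ rP) ** 2)

cons = [{"type": "eq",   "fun": lambda w: w.sum() - 1},        # (A8)
        {"type": "ineq", "fun": lambda w: p @ (R @ w) - 3.0}]  # (A6), mu = 3

res = minimize(sd, np.full(3, 1/3), bounds=[(0, None)] * 3, constraints=cons)
w = res.x
print(np.round(w, 3), round(p @ (R @ w), 2), round(sd(w), 2))
# close to the text's solution w = (.353, .535, .111), mean 3.00, SD 1.67
```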
It is sufficient to specify the additional constraints

w_1 ≥ 0, w_2 ≥ 0, w_3 ≥ 0.

The functioning of SOLVER is unaffected by these added restrictions (although more constraints must be added), and for the example above the solution remains unchanged. (This is intuitive, since the solution weights were all positive.) See Screen 2.

Insert Figure A6.6 about here

Composition Constraints

Let us enrich the scenario. Assume the market prices of stocks 1, 2, and 3 are, respectively, $25, $32, and $17, and that the current composition of the portfolio consists of 10,000 shares of stock 1, 10,000 shares of stock 2, and 30,000 shares of stock 3, with an aggregate market value of $1,080,000. You wish to obtain the lowest SD for a given expected return subject to the constraints that you retain 10,000 shares of stock 1 and 10,000 shares of stock 3. Equivalently, you wish to constrain the portfolio proportions as follows:

w_1 ≥ (10,000 × $25)/$1,080,000 = .23
w_3 ≥ (10,000 × $17)/$1,080,000 = .157,

while w_2 is free to vary. Again SOLVER easily accommodates this. We find w_1 = .39, w_2 = .453, and w_3 = .157, yielding r̄_P = 3.03% and σ_P = 1.70%. The constraint on w_3 is binding, while the constraint on w_1 is not. See Screen 3.

Insert Figure A6.7 about here

Adjusting the Data (Modifying the Means)

On the basis of the information in Table A6.1,

r̄_1 = 2.285%, r̄_2 = 3.165%, r̄_3 = 4.475%.

Suppose, either on the basis of fundamental analysis or of an SML-style calculation, other information becomes available suggesting that, over the next portfolio holding period, the returns on stocks 1 and 2 will be 1% higher than their historical means and the return on stock 3 will be 1% lower. This supplementary information can be incorporated into min SD by modifying Table A6.1. In particular, each return entry for stocks 1 and 2 must be increased by 1%, while each entry for stock 3 must be decreased by 1%. Such changes do not in any way alter the standard deviations or correlations implicit in the data. The new input table for SOLVER is found in Table A6.3.
Table A6.3: Modified Return Data

State    Prob    Stock 1    Stock 2    Stock 3
1        .25      7.23%      6.10%      6.02%
2        .25       .32%      5.31%      -.21%
3        .25      6.55%      -.27%     -1.21%
4        .25      -.96%      5.52%      9.30%

Solving the same problem, min SD without additional constraints, yields w_1 = .381, w_2 = .633, and w_3 = −.013, with r̄_P = 3.84 and σ_P = 1.61. See Screen 4.

Insert Figure A6.8 about here

Constraints on the Number of Securities in the Portfolio

Transaction costs may be substantial. In order to economize on these costs, suppose an investor wishes to solve min SD subject to the constraint that his portfolio contain at most two of the three securities. To accommodate this change, it is necessary to introduce three new binary variables, denoted x_1, x_2, x_3, corresponding to stocks 1, 2, and 3, respectively; x_i ∈ {0, 1} for i = 1, 2, 3. The desired result is obtained by adding the following constraints to the problem min SD:

w_1 ≤ x_1
w_2 ≤ x_2
w_3 ≤ x_3
x_1 + x_2 + x_3 ≤ 2
x_1, x_2, x_3 binary.

For the data of Table A6.1, the solution is to include only securities 1 and 2, with proportions w_1 = .188 and w_2 = .812. See Screen 5.

Insert Figure A6.9 about here

Part III: Equilibrium Pricing

Chapter 7: The Capital Asset Pricing Model: Another View About Risk

7.1 Introduction

The CAPM is an equilibrium theory built on the premises of Modern Portfolio Theory. It is, however, an equilibrium theory with a somewhat peculiar structure. This is true for a number of reasons:

1. First, the CAPM is a theory of financial equilibrium only. Investors take the various statistical quantities — means, variances, covariances — that characterize a security's return process as given. There is no attempt within the theory to link the return processes with events in the real side of the economy. In future model contexts we shall generalize this feature.

2.
Second, as a theory of financial equilibrium, it makes the assumption that the supply of existing assets is equal to the demand for existing assets and, as such, that currently observed asset prices are equilibrium prices. There is no attempt, however, to compute asset supply and demand functions explicitly. Only the equilibrium price vector is characterized.

Let us elaborate on this point. Under the CAPM, portfolio theory informs us about the demand side. If individual i invests a fraction w_{ij} of his initial wealth Y_0^i in asset j, the value of his asset j holding is w_{ij} Y_0^i. Absent any information that he wishes to alter these holdings, we may interpret the quantity w_{ij} Y_0^i as his demand for asset j at the prevailing price vector. If there are I individuals in the economy, the total value of all holdings of asset j is Σ_{i=1}^I w_{ij} Y_0^i; by the same remark, we may interpret this quantity as the aggregate demand for asset j. At equilibrium, one must have

Σ_{i=1}^I w_{ij} Y_0^i = p_j Q_j,

where p_j is the prevailing equilibrium price per share of asset j, Q_j is the total number of shares outstanding and, consequently, p_j Q_j is the market capitalization of asset j. The CAPM derives the implications for prices of assuming that the actual economy-wide asset holdings are investors' aggregate optimal asset holdings.

3. Third, the CAPM expresses equilibrium in terms of relationships between the return distributions of individual assets and the return characteristics of the portfolio of all assets. We may view the CAPM as informing us, via Modern Portfolio Theory, as to what asset return interrelationships must be in order for equilibrium asset prices to coincide with the observed asset prices.

In what follows, we first present an overview of the traditional approach to the CAPM. This is followed by a more general presentation that permits at once a more complete and more general characterization.
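The market-clearing condition above can be made concrete in a toy economy; all numbers below are assumed for illustration:

```python
import numpy as np

# Toy economy (assumed numbers): I = 2 investors, J = 3 risky assets.
# Two-fund separation implies each holds the same risky proportions w_T.
w_T = np.array([0.2, 0.5, 0.3])          # common tangency-portfolio weights
Y0  = np.array([600_000, 400_000])       # risky wealth of each investor

# Aggregate demand for asset j (in value): sum_i w_ij * Y0^i.
demand = np.outer(Y0, w_T).sum(axis=0)   # -> [200000, 500000, 300000]

# At equilibrium, demand_j = p_j * Q_j, so market-cap weights equal w_T.
cap_weights = demand / demand.sum()
print(demand, cap_weights)               # cap weights coincide with w_T
```

This anticipates the identification of the tangency portfolio with the market portfolio in the next section.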
7.2 The Traditional Approach to the CAPM

To get useful results in this complex world of many assets, we have to make simplifying assumptions. The CAPM approach essentially hypothesizes (1) that all agents have the same beliefs about future returns (i.e., homogeneous expectations), and, in its simplest form, (2) that there is a risk-free asset paying a safe return r_f. These assumptions guarantee (Chapter 6) that the mean-variance efficient frontier is the same for every investor and, furthermore, by the separation theorem, that all investors' optimal portfolios have an identical structure: a fraction of initial wealth is invested in the risk-free asset, the rest in the (identical) tangency portfolio (two-fund separation). It is then possible to derive a few key characteristics of equilibrium asset and portfolio returns without detailing the underlying equilibrium structure, that is, without computing the demand for and supply of assets or discussing their prices.

Because all investors acquire shares in the same risky tangency portfolio T, and make no other risky investments, all existing risky assets must belong to T by the definition of an equilibrium. Indeed, if some asset k were not found in T, there would be no demand for it; yet it is assumed to exist in positive supply. Supply would then exceed demand, which is inconsistent with the assumed financial market equilibrium. The same reasoning implies that the share of any asset j in portfolio T must correspond to the ratio of the market value of that asset, p_j Q_j, to the market value of all assets, Σ_{j=1}^J p_j Q_j. This, in turn, guarantees that the tangency portfolio T must be nothing other than the market portfolio M: the portfolio of all existing assets, in which each asset appears in a proportion equal to the ratio of its market value to the total market capitalization.

Insert Figure 7.1

This simple reasoning leads to a number of useful conclusions:

a. The market portfolio is efficient, since it is on the efficient frontier.

b.
All individual optimal portfolios are located on the half-line originating at point (0, r_f) and going through (σ_M, r̄_M), which is also the locus of all efficient portfolios (see Figure 7.1). This locus is usually called the Capital Market Line, or CML.

c. The slope of the CML is (r̄_M − r_f)/σ_M. It tells us that an investor considering a marginally riskier efficient portfolio would obtain, in exchange, an increase in expected return of (r̄_M − r_f)/σ_M per unit of additional standard deviation. This is the price of, or reward for, risk taking: the price of risk as applicable to efficient portfolios. In other words, for efficient portfolios we have the simple linear relationship of Equation (7.1):

r̄_p = r_f + [(r̄_M − r_f)/σ_M] σ_p   (7.1)

The CML applies only to efficient portfolios. What can be said of an arbitrary asset j not belonging to the efficient frontier? To discuss this essential part of the CAPM, we first rely on Equation (7.2), formally derived in Appendix 7.1, and limit our discussion to its intuitive implications:

r̄_j = r_f + (r̄_M − r_f) σ_{jM}/σ_M^2   (7.2)

Let us define β_j = σ_{jM}/σ_M^2, that is, the ratio of the covariance between the returns on asset j and the returns on the market portfolio to the variance of the market returns. We can thus rewrite Equation (7.2) as Equation (7.3):

r̄_j = r_f + [(r̄_M − r_f)/σ_M] β_j σ_M = r_f + [(r̄_M − r_f)/σ_M] ρ_{jM} σ_j   (7.3)

Comparing Equations (7.1) and (7.3), we obtain one of the major lessons of the CAPM: only a portion of the total risk of an asset j, σ_j, is remunerated by the market. Indeed, the risk premium on a given asset is the market price of risk, (r̄_M − r_f)/σ_M, multiplied by the relevant measure of the quantity of risk for that asset. In the case of an inefficient asset or portfolio j, this measure of risk differs from σ_j. The portion of total risk that is priced is measured by β_j σ_M or ρ_{jM} σ_j (≤ σ_j). This is the systematic risk of asset j (also referred to as market risk or undiversifiable risk). The intuition for this fundamental result is as follows.
Every investor holds the market portfolio (T = M). The relevant risk for the investor is thus the variance of the market portfolio. Consequently, what is important to him is the contribution of asset j to the risk of the market portfolio; that is, the extent to which the inclusion of asset j in the overall portfolio increases the latter's variance. This marginal contribution of asset j to the overall portfolio risk is appropriately measured by ρ_{jM} σ_j (= β_j σ_M). Equation (7.3) says that investors must be compensated to persuade them to hold an asset with high covariance with the market, and that this compensation takes the form of a higher expected return.

The comparison of Equations (7.1) and (7.3) also leads us to conclude that an efficient portfolio is one for which all diversifiable risks have been eliminated. For an efficient portfolio, total risk and systematic risk are thus one and the same. This result is made clear by writing, without loss of generality, the return on asset j as a linear function of the market return plus a random error term that is independent of the market return:¹

r̃_j = α + β_j r̃_M + ε̃_j   (7.4)

Looking at the implication of this general regression equation for variances,

σ_j^2 = β_j^2 σ_M^2 + σ_{εj}^2,   (7.5)

we obtain the justification for the "beta" label: the standard regression estimator of the market return coefficient in Equation (7.4) will indeed be of the form

β̂_j = σ̂_{jM}/σ̂_M^2.

Equation (7.3) can equivalently be rewritten as

r̄_j − r_f = (r̄_M − r_f) β_j,   (7.6)

which says that the expected excess return, or risk premium, on an asset j is proportional to its β_j. Equation (7.6) defines the Security Market Line, or SML. It is depicted in Figure 7.2. The SML has two key features. The beta of asset j, β_j, is the sole asset-specific determinant of the excess return on asset j.

¹The "market model" is based on this same regression equation. The market model is reviewed in Chapter 13.
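The decomposition (7.5) and the SML (7.6) can be illustrated with simulated data; all parameter values below are assumed:

```python
import numpy as np

# Simulated single-factor returns (assumed parameters, not from the text).
rng = np.random.default_rng(7)
rf, beta_true = 0.01, 1.4
rM  = rng.normal(0.06, 0.15, size=200_000)       # market return draws
eps = rng.normal(0.00, 0.08, size=rM.size)       # independent error term
rj  = rf + beta_true * (rM - rf) + eps           # asset obeying the SML

# Regression estimator of beta: cov(r_j, r_M) / var(r_M).
beta_hat = np.cov(rj, rM, ddof=0)[0, 1] / np.var(rM)

# SML (7.6): mean excess return = beta * market excess return.
excess_j = rj.mean() - rf
excess_sml = beta_hat * (rM.mean() - rf)

# Variance decomposition (7.5): total = systematic + idiosyncratic.
total = np.var(rj)
decomp = beta_hat**2 * np.var(rM) + np.var(eps)
print(beta_hat, excess_j, excess_sml, total, decomp)
```

Note that σ_j here (about 22% per period) well exceeds the priced component β_j σ_M: only the systematic part earns a premium.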
Adopting a terminology that we shall justify later, we can say that beta is the unique explanatory factor. Furthermore, the relation between the excess returns on different assets and their betas is linear.

Insert Figure 7.2

7.3 Valuing Risky Cash Flows with the CAPM

We are now in a position to make use of the CAPM not only to price assets, but also to value non-traded risky cash flows such as those arising from an investment project. The traditional approach to this problem proposes to value an investment project at its present value price, that is, at the appropriately discounted sum of its expected future cash flows. The logic is straightforward: to value a project at the present value of its expected future cash flows, discounted at a particular rate, is to price the project so that, at its present value price, it is expected to earn that discount rate. The appropriate rate, in turn, must be the analyst's estimate of the rate of return on other financial assets representing title to cash flows similar in risk and timing to those of the project in question. This strategy has the consequence of pricing the project to pay the prevailing competitive rate for its risk class.

Enter the CAPM, which makes a definite statement on the appropriate discount rate to be used or, equivalently, on the risk premium that should be applied in discounting expected future cash flows. Strictly speaking, the CAPM is a one-period model; it is thus formally appropriate to use it only for one-period cash flows or projects. In practice its use is more general, and a multi-period cash flow is typically viewed as a sum of one-period cash flows, each of which can be evaluated with the approach we now describe.

Consider some project j with the following cash flow pattern: an outlay p_{j,t} at date t, followed by a random cash flow C̃F_{j,t+1} at date t + 1. The link with the CAPM is immediate once we define the rate of return on project j.
For a financial asset we would naturally write

r̃_{j,t+1} = (p̃_{j,t+1} + d̃_{j,t} − p_{j,t}) / p_{j,t},

where d̃_{j,t} is the dividend, or any other flow payment, associated with the asset between dates t and t + 1. Similarly, if the initial value of the project with cash flow C̃F_{j,t+1} is p_{j,t}, the return on the project is

r̃_{j,t+1} = (C̃F_{j,t+1} − p_{j,t}) / p_{j,t}.

One thus has

1 + E(r̃_j) = E(C̃F_{j,t+1}/p_{j,t}) = E(C̃F_{j,t+1})/p_{j,t},

and, by the CAPM,

E(r̃_j) = r_f + β_j (E(r̃_M) − r_f), or
1 + E(r̃_j) = 1 + r_f + β_j (E(r̃_M) − r_f), or
E(C̃F_{j,t+1})/p_{j,t} = 1 + r_f + β_j (E(r̃_M) − r_f).

Thus,

p_{j,t} = E(C̃F_{j,t+1}) / [1 + r_f + β_j (E(r̃_M) − r_f)].

According to the CAPM, the project is thus priced at the present value of its expected cash flows discounted at the risk-adjusted rate appropriate to its risk class (β_j).

As discussed in Chapter 1, there is another potential approach to the pricing problem. It consists in altering the numerator of the pricing equation (the sum of expected cash flows) so that it becomes permissible to discount at the risk-free rate. This approach is based on the concept of the certainty equivalent, which we discussed in Chapter 3. The idea is simple: if we replace each element of the future cash flow by its certainty equivalent, it is clearly permissible to discount at the risk-free rate. Since we are interested in equilibrium valuations, however, we need a market certainty equivalent rather than that of an individual investor. It turns out that this approach raises exactly the same set of issues as the more common one just considered: an equilibrium asset pricing model is required to tell us what market risk premium it is appropriate to deduct from the expected cash flow to obtain its certainty equivalent. Again the CAPM helps solve this problem.²

In the case of a one-period cash flow, transforming period-by-period cash flows into their market certainty equivalents can be accomplished in a straightforward fashion by applying the CAPM equation to the rate of return expected on the project.
With

r̃_j = C̃F_{j,t+1}/p_{j,t} − 1,

the CAPM implies

E(C̃F_{j,t+1}/p_{j,t}) − 1 = r_f + β_j (E(r̃_M) − r_f) = r_f + cov(C̃F_{j,t+1}/p_{j,t} − 1, r̃_M) [(E(r̃_M) − r_f)/σ_M^2],

or

E(C̃F_{j,t+1}/p_{j,t}) − 1 = r_f + (1/p_{j,t}) cov(C̃F_{j,t+1}, r̃_M) [(E(r̃_M) − r_f)/σ_M^2].

Solving for p_{j,t} yields

p_{j,t} = {E(C̃F_{j,t+1}) − cov(C̃F_{j,t+1}, r̃_M) [(E(r̃_M) − r_f)/σ_M^2]} / (1 + r_f),

which one may also write

p_{j,t} = {E(C̃F_{j,t+1}) − p_{j,t} β_j [E(r̃_M) − r_f]} / (1 + r_f).

Thus, by appropriately transforming the expected cash flow, that is, by subtracting what we called an insurance premium in Chapter 3, one can discount at the risk-free rate. The equilibrium certainty equivalent can thus be defined using the CAPM relationship.

Note the information requirements of this procedure: if what we are valuing is indeed a one-off, non-traded cash flow, the estimation of β_j, or of cov(C̃F_{j,t+1}, r̃_M), is far from straightforward; in particular, it cannot be based on historical data, since there are none for the project at hand. It is here that the standard prescription calls for identifying a traded asset that can be viewed as similar in the sense of belonging to the same risk class. The estimated β of that traded asset is then used as an approximation in the above valuation formulas.

²Or, similarly, the APT.

In the sections that follow, we first generalize the analysis of the efficient frontier presented in Chapter 6 to the case of N ≥ 2 assets. Such a generalization will require the use of elementary matrix algebra, and it is one of those rare situations in economic science where a more general approach yields greater specificity of results. We will, for instance, be able to detail a version of the CAPM without a risk-free asset. This is then followed by the derivation of the standard CAPM where a risk-free asset is present. As noted in the introduction, the CAPM is essentially an interpretation that we are able to apply to the efficient frontier.
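Before returning to the frontier, the agreement between the two valuation formulas, risk-adjusted discounting and the certainty equivalent, can be verified on a numeric example (all inputs assumed):

```python
# Assumed one-period CAPM inputs (illustrative numbers, not from the text).
rf, ErM, varM = 0.02, 0.08, 0.04
ECF, cov_CF_M = 100.0, 2.0        # E[CF_{t+1}] and cov(CF_{t+1}, r_M)

# Certainty-equivalent form: subtract the "insurance premium" from E[CF],
# then discount at the risk-free rate.
p_ce = (ECF - cov_CF_M * (ErM - rf) / varM) / (1 + rf)

# Risk-adjusted-discounting form: p = E[CF] / (1 + rf + beta*(ErM - rf)),
# where beta = cov(CF/p - 1, rM)/varM = cov_CF_M/(p*varM) itself depends
# on p; iterate the resulting fixed point.
p = ECF / (1 + rf)                # starting guess
for _ in range(100):
    beta = cov_CF_M / (p * varM)
    p = ECF / (1 + rf + beta * (ErM - rf))

print(p_ce, p)                    # the two valuations coincide
```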
Not surprisingly, therefore, we begin this task with a return to characterizing that frontier.

7.4 The Mathematics of the Portfolio Frontier: Many Risky Assets and No Risk-Free Asset

Notation. Assume N ≥ 2 risky assets, and assume further that no asset has a return that can be expressed as a linear combination of the returns on a subset of the other assets (the returns are linearly independent). Let V denote the variance-covariance matrix; in other words, V_{ij} = cov(r̃_i, r̃_j). By construction, V is symmetric. Linear independence in the above sense implies that V^{−1} exists. Let w represent a column vector of portfolio weights for the N assets. The expression w^T V w then represents the portfolio's return variance: w^T V w is always positive (i.e., V is positive definite). Let us illustrate this latter assertion in the two-asset case:

w^T V w = [w_1  w_2] [σ_1^2  σ_12; σ_21  σ_2^2] [w_1; w_2]
        = [w_1 σ_1^2 + w_2 σ_21,  w_1 σ_12 + w_2 σ_2^2] [w_1; w_2]
        = w_1^2 σ_1^2 + w_1 w_2 σ_21 + w_1 w_2 σ_12 + w_2^2 σ_2^2
        = w_1^2 σ_1^2 + w_2^2 σ_2^2 + 2 w_1 w_2 σ_12 ≥ 0,

since σ_12 = ρ_12 σ_1 σ_2 ≥ −σ_1 σ_2.

Definition 7.1 formalizes the notion of a portfolio lying on the efficient frontier. Note that every portfolio is ultimately defined by the weights that determine its composition.

Definition 7.1: A frontier portfolio is one that displays minimum variance among all feasible portfolios with the same E(r̃_p).

A portfolio p, characterized by w_p, is a frontier portfolio if and only if w_p solves³

min_w (1/2) w^T V w
s.t.  w^T e = E   (λ)   [i.e., Σ_{i=1}^N w_i E(r̃_i) = E(r̃_p) = E]
      w^T 1 = 1   (γ)   [i.e., Σ_{i=1}^N w_i = 1]

where the superscript T stands for transposition (transforming a column vector into a row vector, and conversely), e denotes the column vector of expected returns on the N assets, 1 represents the column vector of ones, and λ, γ are the Lagrange multipliers attached to the two constraints. Short sales are permitted (no non-negativity constraints are present).

³The problem below is, in vector notation, problem (QP) of Chapter 5.
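The positive-definiteness claim can be spot-checked numerically. The sketch below builds an assumed V from illustrative volatilities and a valid correlation matrix, then tests w^T V w > 0 for many random weight vectors (short sales included):

```python
import numpy as np

# Assumed variance-covariance matrix for N = 3 assets, built from
# illustrative volatilities and correlations so that it is valid.
sig = np.array([0.10, 0.20, 0.15])
corr = np.array([[ 1.0, 0.3, -0.2],
                 [ 0.3, 1.0,  0.1],
                 [-0.2, 0.1,  1.0]])
V = np.outer(sig, sig) * corr

# V positive definite <=> all eigenvalues > 0 <=> every portfolio has
# strictly positive return variance w'Vw (for nonzero w).
eigs = np.linalg.eigvalsh(V)
rng = np.random.default_rng(0)
w = rng.normal(size=(1000, 3))           # arbitrary weights, shorts allowed
quad = np.einsum('ij,jk,ik->i', w, V, w) # w'Vw for each weight vector
print(eigs.min(), quad.min())
```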
The solution to this problem can be characterized as the stationary point, in w, λ, and γ, of the Lagrangian

L = (1/2) w^T V w + λ(E − w^T e) + γ(1 − w^T 1).   (7.7)

Under these assumptions, w_p, λ, and γ must satisfy Equations (7.8) through (7.10), which are the necessary and sufficient first-order conditions:

∂L/∂w = V w − λ e − γ 1 = 0   (7.8)
∂L/∂λ = E − w^T e = 0   (7.9)
∂L/∂γ = 1 − w^T 1 = 0   (7.10)

In the lines that follow, we manipulate these equations to obtain an intuitive characterization of the optimal portfolio proportions, Equation (7.16). From (7.8), V w_p = λ e + γ 1, or

w_p = λ V^{−1} e + γ V^{−1} 1,   (7.11)

and thus

e^T w_p = λ (e^T V^{−1} e) + γ (e^T V^{−1} 1).   (7.12)

Since e^T w_p = w_p^T e, we also have, from Equation (7.9), that

E(r̃_p) = E = λ (e^T V^{−1} e) + γ (e^T V^{−1} 1).   (7.13)

From Equation (7.11), premultiplying by 1^T,

1^T w_p = λ (1^T V^{−1} e) + γ (1^T V^{−1} 1) = 1 [by Equation (7.10)],

so that

1 = λ (1^T V^{−1} e) + γ (1^T V^{−1} 1).   (7.14)

Notice that Equations (7.13) and (7.14) are two scalar equations in the unknowns λ and γ (terms such as e^T V^{−1} e are pure numbers!). Solving this system of two equations in two unknowns, we obtain

λ = (CE − A)/D  and  γ = (B − AE)/D,   (7.15)

where

A = 1^T V^{−1} e = e^T V^{−1} 1
B = e^T V^{−1} e > 0
C = 1^T V^{−1} 1 > 0
D = BC − A^2.

Here we have used the fact that the inverse of a positive definite matrix is itself positive definite (hence B > 0 and C > 0). It can be shown that D is also strictly positive. Substituting Equations (7.15) into Equation (7.11), we obtain

w_p = [(B − AE)/D] V^{−1} 1 + [(CE − A)/D] V^{−1} e
    = (1/D)[B (V^{−1} 1) − A (V^{−1} e)] + (1/D)[C (V^{−1} e) − A (V^{−1} 1)] E,

that is,

w_p = g + h E,   (7.16)

where g = (1/D)[B (V^{−1} 1) − A (V^{−1} e)] and h = (1/D)[C (V^{−1} e) − A (V^{−1} 1)] are vectors, and E is a scalar.

Since the first-order conditions [Equations (7.8) through (7.10)] are a necessary and sufficient characterization of w_p as a frontier portfolio with expected return equal to E, any frontier portfolio can be represented by Equation (7.16).
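Equation (7.16) lends itself to direct computation. The sketch below, with assumed e and V (any positive definite covariance matrix will do), forms A, B, C, D, g, and h, and confirms that w = g + hE satisfies both constraints:

```python
import numpy as np

# Assumed inputs: expected returns e and a positive definite covariance V.
e = np.array([0.05, 0.08, 0.11])
V = np.array([[ 0.0100, 0.0020, -0.0010],
              [ 0.0020, 0.0400,  0.0100],
              [-0.0010, 0.0100,  0.0225]])
one = np.ones(3)
Vi = np.linalg.inv(V)

A = one @ Vi @ e
B = e @ Vi @ e
C = one @ Vi @ one
D = B * C - A**2                  # B > 0, C > 0, and D > 0

g = (B * (Vi @ one) - A * (Vi @ e)) / D
h = (C * (Vi @ e) - A * (Vi @ one)) / D

E = 0.09                          # any desired target mean
w = g + h * E                     # Equation (7.16)
print(w @ e, w.sum())             # target mean and budget both hold

# Minimum-variance portfolio: expected return A/C, variance 1/C.
w_mv = g + h * (A / C)
print(w_mv @ V @ w_mv, 1 / C)
```

The last two printed numbers coincide, anticipating the geometry of the frontier derived below.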
This is a very nice expression: pick the desired expected return E, and it straightforwardly gives the weights of the corresponding frontier portfolio with E as its expected return. The portfolio's variance follows as σ_p^2 = w_p^T V w_p, which is also straightforward. Efficient portfolios are those for which E exceeds the expected return on the minimum-variance risky portfolio. Our characterization thus applies to efficient portfolios as well: pick an efficient E and Equation (7.16) gives the portfolio's exact composition. See Appendix 7.2 for an example.

Can we further identify the vectors g and h in Equation (7.16); in particular, do they somehow correspond to the weights of easily recognizable portfolios? The answer is positive. Since g = w_p when E = 0, g represents the weights that define the frontier portfolio with E(r̃_p) = 0. Similarly, g + h corresponds to the weights of the frontier portfolio with E(r̃_p) = 1, since w_p = g + h E(r̃_p) = g + h(1) = g + h.

The simplicity of the relationship in Equation (7.16) allows us to make two claims.

Proposition 7.1: The entire set of frontier portfolios can be generated by (that is, they are affine combinations of) g and g + h.

Proof: To see this, let q be an arbitrary frontier portfolio with E(r̃_q) as its expected return. Consider the portfolio proportions π_g = 1 − E(r̃_q) and π_{g+h} = E(r̃_q); then, as asserted,

[1 − E(r̃_q)] g + E(r̃_q)(g + h) = g + h E(r̃_q) = w_q. □

This remark is generalized in Proposition 7.2.

Proposition 7.2: The portfolio frontier can be described as affine combinations of any two frontier portfolios, not just the frontier portfolios g and g + h.

Proof: To confirm this assertion, let p_1 and p_2 be any two distinct frontier portfolios; since the portfolios are distinct, E(r̃_{p1}) ≠ E(r̃_{p2}). Let q be an arbitrary frontier portfolio with expected return equal to E(r̃_q).
Since E(r̃_{p1}) ≠ E(r̃_{p2}), there must exist a unique number α such that

E(r̃_q) = α E(r̃_{p1}) + (1 − α) E(r̃_{p2}).   (7.17)

Now consider a portfolio of p_1 and p_2 with weights α and 1 − α, respectively, as determined by Equation (7.17). We must show that w_q = α w_{p1} + (1 − α) w_{p2}:

α w_{p1} + (1 − α) w_{p2} = α [g + h E(r̃_{p1})] + (1 − α) [g + h E(r̃_{p2})]
                          = g + h [α E(r̃_{p1}) + (1 − α) E(r̃_{p2})]
                          = g + h E(r̃_q)
                          = w_q, since q is a frontier portfolio. □

What does the set of frontier portfolios, which we have calculated so conveniently, look like? Can we identify, in particular, the minimum-variance portfolio? Locating that portfolio is surely key to a description of the set of all frontier portfolios. Fortunately, given our results thus far, the task is straightforward. For any portfolio on the frontier,

σ^2(r̃_p) = [g + h E(r̃_p)]^T V [g + h E(r̃_p)],

with g and h as defined earlier. Multiplying all this out (very messy) yields

σ^2(r̃_p) = (C/D) [E(r̃_p) − A/C]^2 + 1/C,   (7.18)

where A, C, and D are the constants defined earlier. Since C > 0 and D > 0, we can immediately identify the following:

(i) the expected return of the minimum-variance portfolio is A/C;
(ii) the variance of the minimum-variance portfolio is 1/C;
(iii) Equation (7.18) is the equation of a parabola with vertex (1/C, A/C) in the (variance, expected return) space, and of a hyperbola in the (standard deviation, expected return) space. See Figures 7.3 and 7.4.

Insert Figure 7.3
Insert Figure 7.4

The extended shape of this set of frontier portfolios is due to the allowance for short sales, as underlined in Figure 7.5.

Insert Figure 7.5

What has been accomplished thus far? First and foremost, we have a much richer knowledge of the set of frontier portfolios: given a level of desired expected return, we can easily identify the relative proportions of the constituent assets that must be combined to create a portfolio with that expected return. This was illustrated in Equation (7.16), and it is key.
We then used it to identify the minimum risk portfolio and to describe the graph of all frontier portfolios. All of these results apply to portfolios of any arbitrary collection of assets. So far, nothing has been said about financial market equilibrium. As a next step toward that goal, however, we need to identify the set of frontier portfolios that is efficient. Given Equation (7.16), this is a straightforward task.

7.5 Characterizing Efficient Portfolios (No Risk-Free Assets)

Our first order of business is a definition.

Definition 7.2: Efficient portfolios are those frontier portfolios for which the expected return exceeds $A/C$, the expected return of the minimum variance portfolio.

Since Equation (7.16) applies to all frontier portfolios, it applies to efficient ones as well. Fortunately, we also know the expected return on the minimum variance portfolio. As a first step, let us prove the converse of Proposition 7.2.

Proposition 7.3: Any convex combination of frontier portfolios is also a frontier portfolio.

Proof: Let $(\bar w_1, \ldots, \bar w_N)$ define $N$ frontier portfolios ($\bar w_i$ represents the vector defining the composition of the $i$th portfolio) and let $\alpha_i$, $i = 1, \ldots, N$, be real numbers such that $\sum_{i=1}^N \alpha_i = 1$. Lastly, let $E(\tilde r_i)$ denote the expected return of the portfolio with weights $\bar w_i$.

We want to show that $\sum_{i=1}^N \alpha_i \bar w_i$ is a frontier portfolio with expected return $\sum_{i=1}^N \alpha_i E(\tilde r_i)$. The weights corresponding to a linear combination of the above $N$ portfolios are

$$\sum_{i=1}^N \alpha_i \bar w_i = \sum_{i=1}^N \alpha_i\left(g + hE(\tilde r_i)\right) = \left(\sum_{i=1}^N \alpha_i\right) g + h \sum_{i=1}^N \alpha_i E(\tilde r_i) = g + h\sum_{i=1}^N \alpha_i E(\tilde r_i).$$

Thus $\sum_{i=1}^N \alpha_i \bar w_i$ is a frontier portfolio with $E(\tilde r) = \sum_{i=1}^N \alpha_i E(\tilde r_i)$. □

A corollary to the previous result is:

Proposition 7.4: The set of efficient portfolios is a convex set.⁴

Proof: Suppose each of the $N$ portfolios under consideration was efficient; then $E(\tilde r_i) \geq \frac{A}{C}$ for every portfolio $i$. However,

$$\sum_{i=1}^N \alpha_i E(\tilde r_i) \geq \sum_{i=1}^N \alpha_i \frac{A}{C} = \frac{A}{C};$$

thus, the convex combination is efficient as well.
So the set of efficient portfolios, as characterized by their portfolio weights, is a convex set. □

It follows from Proposition 7.4 that if every investor holds an efficient portfolio, the market portfolio, being a weighted average of all individual portfolios, is also efficient. This is a key result.

⁴ This does not mean, however, that the frontier of this set is convex-shaped in the risk-return space.

The next section further refines our understanding of the set of frontier portfolios and, more especially, the subset of them that is efficient. Observe, however, that as yet we have said nothing about equilibrium.

7.6 Background for Deriving the Zero-Beta CAPM: Notion of a Zero-Covariance Portfolio

Proposition 7.5: For any frontier portfolio $p$, except the minimum variance portfolio, there exists a unique frontier portfolio with which $p$ has zero covariance. We will call this portfolio the zero-covariance portfolio relative to $p$, and denote its vector of portfolio weights by $ZC(p)$.

Proof: To prove this claim it is sufficient to exhibit the (unique) portfolio that has this property. As we shall demonstrate shortly [see Equation (7.24) and the discussion following it], the covariance of any two frontier portfolios $p$ and $q$ is given by the following general formula:

$$\operatorname{cov}(\tilde r_p, \tilde r_q) = \frac{C}{D}\left[E(\tilde r_p) - \frac{A}{C}\right]\left[E(\tilde r_q) - \frac{A}{C}\right] + \frac{1}{C}, \qquad (7.19)$$

where $A$, $C$, and $D$ are uniquely defined by $e$, the vector of expected returns, and $V$, the matrix of variances and covariances. These are, in fact, the same quantities $A$, $C$, and $D$ defined earlier.

If it exists, $ZC(p)$ must therefore satisfy

$$\operatorname{cov}\left(\tilde r_p, \tilde r_{ZC(p)}\right) = \frac{C}{D}\left[E(\tilde r_p) - \frac{A}{C}\right]\left[E\left(\tilde r_{ZC(p)}\right) - \frac{A}{C}\right] + \frac{1}{C} = 0. \qquad (7.20)$$

Since $A$, $C$, and $D$ are all numbers, we can solve for $E\left(\tilde r_{ZC(p)}\right)$:

$$E\left(\tilde r_{ZC(p)}\right) = \frac{A}{C} - \frac{D/C^2}{E(\tilde r_p) - \frac{A}{C}}. \qquad (7.21)$$

Given $E\left(\tilde r_{ZC(p)}\right)$, we can use Equation (7.16) to uniquely define the portfolio weights corresponding to it. □
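As a quick numerical check of Proposition 7.5 and Equation (7.21), the following sketch again uses the two-asset data of Appendix 7.2; the frontier portfolio with $E_p = 2$ is our illustrative choice:

```python
import numpy as np

# Two-asset data from the worked example of Appendix 7.2
e = np.array([1.0, 2.0])
V = np.array([[1.0, -1.0],
              [-1.0, 4.0]])
ones = np.ones(2)
Vinv = np.linalg.inv(V)
A, B, C = ones @ Vinv @ e, e @ Vinv @ e, ones @ Vinv @ ones
D = B * C - A**2
g = (B * (Vinv @ ones) - A * (Vinv @ e)) / D
h = (C * (Vinv @ e) - A * (Vinv @ ones)) / D

def zc_return(E_p):
    """Expected return of the zero-covariance portfolio ZC(p), Eq. (7.21)."""
    return A / C - (D / C**2) / (E_p - A / C)

E_p  = 2.0                          # an efficient frontier portfolio (E_p > A/C)
w_p  = g + h * E_p
w_zc = g + h * zc_return(E_p)       # ZC(p) is itself a frontier portfolio
assert np.isclose(w_p @ V @ w_zc, 0.0)   # zero covariance, by construction
assert zc_return(E_p) < A / C            # ZC(p) lies below A/C: it is inefficient
```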
From Equation (7.21), since $A > 0$, $C > 0$, and $D > 0$, if $E(\tilde r_p) > \frac{A}{C}$ (i.e., $p$ is efficient), then $E\left(\tilde r_{ZC(p)}\right) < \frac{A}{C}$ (i.e., $ZC(p)$ is inefficient), and vice versa. The portfolio $ZC(p)$ will turn out to be crucial to what follows.

It is possible to give a more complete geometric identification of the zero-covariance portfolio if we express the frontier portfolios in the $E(\tilde r)$ - $\sigma^2(\tilde r)$ space (Figure 7.6).

Insert Figure 7.6

The equation of the line through the chosen portfolio $p$ and the minimum variance portfolio can be shown to be the following [it has the form $y = b + mx$]:

$$E(\tilde r) = \frac{A}{C} - \frac{D/C^2}{E(\tilde r_p) - \frac{A}{C}} + \frac{E(\tilde r_p) - \frac{A}{C}}{\sigma^2(\tilde r_p) - \frac{1}{C}}\,\sigma^2(\tilde r).$$

If $\sigma^2(\tilde r) = 0$, then

$$E(\tilde r) = \frac{A}{C} - \frac{D/C^2}{E(\tilde r_p) - \frac{A}{C}} = E\left(\tilde r_{ZC(p)}\right)$$

[by Equation (7.21)]. That is, the intercept of the line joining $p$ and the minimum variance portfolio is the expected return on the zero-covariance portfolio. This identifies the zero-covariance portfolio of $p$ geometrically. We already know how to determine its precise composition.

Our next step is to describe the expected return on any portfolio in terms of frontier portfolios. After some manipulations this will yield Equation (7.28). The specialization of this relationship will give the zero-beta CAPM, a version of the CAPM that applies when there is no risk-free asset. Recall that thus far we have not included a risk-free asset in the collection of assets from which we construct portfolios.

Let $q$ be any portfolio (which need not be on the portfolio frontier) and let $p$ be any frontier portfolio. By definition, and using $w_p = \lambda V^{-1}e + \gamma V^{-1}\mathbf{1}$,

$$\operatorname{cov}(\tilde r_p, \tilde r_q) = w_p^T V w_q = \left[\lambda V^{-1}e + \gamma V^{-1}\mathbf{1}\right]^T V w_q = \lambda e^T w_q + \gamma\,\mathbf{1}^T w_q \qquad (7.22)$$
$$= \lambda E(\tilde r_q) + \gamma \qquad (7.23)$$

(since $\mathbf{1}^T w_q = \sum_{i=1}^N w_q^i \equiv 1$ and $e^T w_q = \sum_{i=1}^N w_q^i E(\tilde r_i) \equiv E(\tilde r_q)$), where $\lambda = \frac{CE(\tilde r_p) - A}{D}$ and $\gamma = \frac{B - AE(\tilde r_p)}{D}$, as per earlier definitions. Substituting these expressions into Equation (7.23) gives

$$\operatorname{cov}(\tilde r_p, \tilde r_q) = \frac{CE(\tilde r_p) - A}{D}\,E(\tilde r_q) + \frac{B - AE(\tilde r_p)}{D}. \qquad (7.24)$$

Equation (7.24) is a short step from Equation (7.19): collect all terms involving expected returns, add and subtract $\frac{A^2C}{DC^2}$ to get the first term in Equation (7.19), with a remaining term equal to $\frac{1}{C}\left(\frac{BC - A^2}{D}\right)$. But the latter is simply $1/C$, since $D = BC - A^2$.

Let us go back to Equation (7.23) and apply it to the case where $q$ is $ZC(p)$; one gets

$$0 = \operatorname{cov}\left(\tilde r_p, \tilde r_{ZC(p)}\right) = \lambda E\left(\tilde r_{ZC(p)}\right) + \gamma, \quad \text{or} \quad \gamma = -\lambda E\left(\tilde r_{ZC(p)}\right); \qquad (7.25)$$

hence Equation (7.23) becomes

$$\operatorname{cov}(\tilde r_p, \tilde r_q) = \lambda\left[E(\tilde r_q) - E\left(\tilde r_{ZC(p)}\right)\right]. \qquad (7.26)$$

Apply the latter to the case $p = q$ to get

$$\sigma_p^2 = \operatorname{cov}(\tilde r_p, \tilde r_p) = \lambda\left[E(\tilde r_p) - E\left(\tilde r_{ZC(p)}\right)\right]; \qquad (7.27)$$

divide Equation (7.26) by Equation (7.27) and rearrange to obtain

$$E(\tilde r_q) = E\left(\tilde r_{ZC(p)}\right) + \beta_{pq}\left[E(\tilde r_p) - E\left(\tilde r_{ZC(p)}\right)\right]. \qquad (7.28)$$

This equation bears more than a passing resemblance to the Security Market Line (SML) implication of the capital asset pricing model. But as yet it is simply a statement about the various portfolios that can be created from arbitrary collections of assets: (1) pick any frontier portfolio $p$; (2) this defines an associated zero-covariance portfolio $ZC(p)$; (3) any other portfolio $q$'s expected return can be expressed in terms of the returns to those portfolios and the covariance of $q$ with the arbitrarily chosen frontier portfolio. Equation (7.28) would very closely resemble the security market line if, in particular, we could choose $p = M$, the market portfolio of existing assets. The circumstances under which it is possible to do this form the subject to which we now turn.

7.7 The Zero-Beta Capital Asset Pricing Model

We would like to explain asset expected returns in equilibrium. The relationship in Equation (7.28), however, is not the consequence of an equilibrium theory, because it was derived for a given particular vector of expected asset returns, $e$, and a given variance-covariance matrix, $V$. In fact, it is the vector of returns $e$ that we would like, in equilibrium, to understand.
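Indeed, Equation (7.28) is purely mechanical, and it can be verified numerically for any portfolio $q$. Here is a sketch on the two-asset data of Appendix 7.2; the portfolio $q = (0.3, 0.7)$ and the frontier portfolio $E_p = 2$ are our illustrative choices:

```python
import numpy as np

# Two-asset data from the worked example of Appendix 7.2
e = np.array([1.0, 2.0])
V = np.array([[1.0, -1.0],
              [-1.0, 4.0]])
ones = np.ones(2)
Vinv = np.linalg.inv(V)
A, B, C = ones @ Vinv @ e, e @ Vinv @ e, ones @ Vinv @ ones
D = B * C - A**2
g = (B * (Vinv @ ones) - A * (Vinv @ e)) / D
h = (C * (Vinv @ e) - A * (Vinv @ ones)) / D

E_p  = 2.0                                   # a frontier portfolio p
w_p  = g + h * E_p
E_zc = A / C - (D / C**2) / (E_p - A / C)    # E(r_ZC(p)), Eq. (7.21)

w_q = np.array([0.3, 0.7])                   # an arbitrary portfolio q
E_q = w_q @ e
beta_pq = (w_q @ V @ w_p) / (w_p @ V @ w_p)  # cov(q, p) / var(p)

# Eq. (7.28): E(r_q) = E(r_ZC(p)) + beta_pq * [E(r_p) - E(r_ZC(p))]
assert np.isclose(E_q, E_zc + beta_pq * (E_p - E_zc))
```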
We need to identify a particular portfolio as being a frontier portfolio without specifying a priori the (expected) return vector and variance-covariance matrix of its constituent assets. The zero-beta CAPM tells us that, under certain assumptions, this desired portfolio can be identified as the market portfolio $M$. We may assume one of the following:

(i) agents maximize expected utility with increasing and strictly concave utility-of-money functions, and asset returns are multivariate normally distributed; or
(ii) each agent chooses a portfolio with the objective of maximizing a derived utility function of the form $W(e, \sigma^2)$, $W_1 > 0$, $W_2 < 0$, $W$ concave.

In addition, we assume that all investors have a common time horizon and homogeneous beliefs about $e$ and $V$.

Under either set of assumptions, investors will only hold mean-variance efficient frontier portfolios.⁵ But this implies that, in equilibrium, the market portfolio, which is a convex combination of individual portfolios, is also on the efficient frontier.⁶

⁵ Recall the demonstration in Section 6.3.
⁶ Note that, in the standard version of the CAPM, the analogous claim crucially depended on the existence of a risk-free asset.

Therefore, in Equation (7.22), $p$ can be chosen to be $M$, the portfolio of all risky assets, and Equation (7.28) can therefore be expressed as:

$$E(\tilde r_q) = E\left(\tilde r_{ZC(M)}\right) + \beta_{Mq}\left[E(\tilde r_M) - E\left(\tilde r_{ZC(M)}\right)\right] \qquad (7.29)$$

The relationship in Equation (7.29) holds for any portfolio $q$, whether or not it is a frontier portfolio. This is the zero-beta CAPM. An individual asset $j$ is also a portfolio, so Equation (7.29) applies to it as well:

$$E(\tilde r_j) = E\left(\tilde r_{ZC(M)}\right) + \beta_{Mj}\left[E(\tilde r_M) - E\left(\tilde r_{ZC(M)}\right)\right] \qquad (7.30)$$

The zero-beta CAPM (and the more familiar Sharpe-Lintner-Mossin CAPM)⁷ is an equilibrium theory: the relationships in Equations (7.29) and (7.30) hold in equilibrium. In equilibrium, investors will not be maximizing utility unless they hold efficient portfolios.
Therefore, the market portfolio is efficient; we have identified one efficient frontier portfolio, and we can apply Equation (7.29). By contrast, Equation (7.28) is a purely mathematical relationship with no economic content; it simply describes relationships between frontier portfolio returns and the returns on any other portfolio of the same assets.

As noted in the introduction, the zero-beta CAPM does not, however, describe the process to or by which equilibrium is achieved. In other words, the process by which agents buy and sell securities in their desire to hold efficient portfolios, thereby altering security prices and thus expected returns and requiring further changes in portfolio composition, is not present in the model. When this process ceases and all agents are optimizing given the prevailing prices, all will be holding efficient portfolios given the equilibrium expected returns $e$ and variance-covariance matrix $V$. Thus $M$ is also efficient. Since, in equilibrium, agents' desired holdings of securities coincide with their actual holdings, we can identify $M$ as the actual portfolio of securities held in the marketplace.

There are many convenient approximations to $M$, the S&P 500 index of stocks being the most popular in the United States. The usefulness of these approximations, which are needed to give empirical content to the CAPM, is, however, debatable, as discussed in our concluding comments.

As a final remark, let us note that the name "zero-beta CAPM" comes from the fact that

$$\beta_{ZC(M),M} = \frac{\operatorname{cov}\left(\tilde r_M, \tilde r_{ZC(M)}\right)}{\sigma^2_{ZC(M)}} = 0,$$

by construction of $ZC(M)$; in other words, the beta of $ZC(M)$ is zero.

7.8 The Standard CAPM

Our development thus far did not admit the option of a risk-free asset. We need to add this if we are to achieve the standard form of the CAPM. On a purely formal basis, of course, a risk-free asset has zero covariance with $M$ and thus $r_f = E\left(\tilde r_{ZC(M)}\right)$.
Hence we could replace $E\left(\tilde r_{ZC(M)}\right)$ with $r_f$ in Equation (7.30) to obtain the standard representation of the CAPM, the SML. But this approach is not entirely appropriate, since the derivation of Equation (7.30) presumed the absence of any such risk-free asset.

⁷ Sharpe (1964), Lintner (1965), and Mossin (1966).

More formally, the addition of a risk-free asset substantially alters the shape of the set of frontier portfolios in the $[E(\tilde r), \sigma(\tilde r)]$ space. Let us briefly outline the development here, which closely resembles what was done above. Consider $N$ risky assets with expected return vector $e$, and one risk-free asset with expected return $r_f$. Let $p$ be a frontier portfolio and let $w_p$ denote the $N$-vector of portfolio weights on the risky assets of $p$; $w_p$ in this case is the solution to

$$\min_w \frac{1}{2}w^T V w \quad \text{s.t.} \quad w^T e + \left(1 - w^T\mathbf{1}\right)r_f = E.$$

Solving this problem gives

$$w_p = V^{-1}(e - r_f\mathbf{1})\,\frac{E - r_f}{H},$$

where $H = B - 2Ar_f + Cr_f^2$ and $A$, $B$, $C$ are defined as before. Let us examine this expression for $w_p$ more carefully:

$$w_p = \underbrace{V^{-1}}_{N\times N}\underbrace{(e - r_f\mathbf{1})}_{N\times 1}\,\underbrace{\frac{E(\tilde r_p) - r_f}{H}}_{\text{a number}} \qquad (7.31)$$

This expression tells us that if we wish to have a higher expected return, we should invest proportionally more in each risky asset, so that the relative proportions of the risky assets remain unchanged. These proportions are defined by the $V^{-1}(e - r_f\mathbf{1})$ term. This is exactly the result we were intuitively expecting: graphically, we are back to the linear frontier represented in Figure 7.1. The weights $w_p$ uniquely identify the tangency portfolio $T$. Also,

$$\sigma^2(\tilde r_p) = w_p^T V w_p = \frac{\left[E(\tilde r_p) - r_f\right]^2}{H}, \quad \text{and} \qquad (7.32)$$

$$\operatorname{cov}(\tilde r_q, \tilde r_p) = w_q^T V w_p = \frac{\left[E(\tilde r_q) - r_f\right]\left[E(\tilde r_p) - r_f\right]}{H} \qquad (7.33)$$

for any portfolio $q$ and any frontier portfolio $p$. Note how all this parallels what we did before.
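Equations (7.31) and (7.32) can be sketched numerically on the Appendix 7.2 assets, with a hypothetical risk-free rate $r_f = 0.5$ added (the rate is our choice, for illustration only):

```python
import numpy as np

# Risky-asset data from Appendix 7.2, plus an assumed risk-free rate
e = np.array([1.0, 2.0])
V = np.array([[1.0, -1.0],
              [-1.0, 4.0]])
ones = np.ones(2)
rf = 0.5                                     # hypothetical risk-free rate
Vinv = np.linalg.inv(V)
A, B, C = ones @ Vinv @ e, e @ Vinv @ e, ones @ Vinv @ ones
H = B - 2 * A * rf + C * rf**2

def risky_weights(E):
    """Risky-asset weights of the frontier portfolio with expected return E,
    Eq. (7.31); the remainder 1 - w.sum() is held in the risk-free asset."""
    return Vinv @ (e - rf * ones) * (E - rf) / H

E_p = 1.5
w = risky_weights(E_p)
# Eq. (7.32): the frontier variance
assert np.isclose(w @ V @ w, (E_p - rf)**2 / H)
# Relative proportions among risky assets are independent of E
# (only the scale changes): the two-fund separation result
w2 = risky_weights(2.5)
assert np.isclose(w[0] / w[1], w2[0] / w2[1])
```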
Solving Equation (7.33) for $E(\tilde r_q)$ gives

$$E(\tilde r_q) - r_f = \frac{H\operatorname{cov}(\tilde r_q, \tilde r_p)}{E(\tilde r_p) - r_f} \qquad (7.34)$$

Substituting for $H$ via Equation (7.32) yields

$$E(\tilde r_q) - r_f = \frac{\left[E(\tilde r_p) - r_f\right]^2}{\sigma^2(\tilde r_p)}\cdot\frac{\operatorname{cov}(\tilde r_q, \tilde r_p)}{E(\tilde r_p) - r_f},$$

or

$$E(\tilde r_q) - r_f = \frac{\operatorname{cov}(\tilde r_q, \tilde r_p)}{\sigma^2(\tilde r_p)}\left[E(\tilde r_p) - r_f\right] \qquad (7.35)$$

Again, since $T$ is a frontier portfolio, we can choose $p \equiv T$. But in equilibrium $T = M$; in this case, Equation (7.35) gives

$$E(\tilde r_q) - r_f = \frac{\operatorname{cov}(\tilde r_q, \tilde r_M)}{\sigma^2(\tilde r_M)}\left[E(\tilde r_M) - r_f\right],$$

or

$$E(\tilde r_q) = r_f + \beta_{qM}\left[E(\tilde r_M) - r_f\right] \qquad (7.36)$$

for any asset (or portfolio) $q$. This is the standard CAPM.

Again, let us review the flow of logic that led to this conclusion. First, we identified the efficient frontier of risk-free and risky assets. This efficient frontier is fully characterized by the risk-free asset and a specific tangency frontier portfolio; the latter is identified in Equation (7.31). We then observed that all investors, in equilibrium under homogeneous expectations, would hold combinations of the risk-free asset and that portfolio. Thus it must constitute the market, the portfolio of all risky assets. It is these latter observations that give the CAPM its empirical content.

7.9 Conclusions

Understanding and identifying the determinants of equilibrium asset returns is inherently an overwhelmingly complex problem. In order to make some progress, we have made, in the present chapter, a number of simplifying assumptions that we will progressively relax in the future.

1. Rather than deal with fully described probability distributions on returns, we consider only the first two moments, $E(\tilde r_p)$ and $\sigma^2(\tilde r_p)$.
When returns are at least approximately normally distributed, it is natural to think first of characterizing return distributions by their means and variances, since, for the normal distribution, $\operatorname{Prob}(\mu_r - 2\sigma_r \leq \tilde r \leq \mu_r + 2\sigma_r) \approx 0.95$: plus or minus two standard deviations from the mean encompasses nearly all of the probability. It is also natural to try to estimate these distributions and their moments from historical data. To do this naively would be to assign equal probability to each past observation. Yet we suspect that more recent observations contain more relevant information concerning the true distribution than observations in the distant past. Indeed, the entire distribution may be shifting through time; that is, it may be nonstationary. Much current research is devoted to studying what information can be extracted from historical data in this setting, and how.

2. The model is static; in other words, only one period of returns is measured and analyzed. The defined horizon is assumed to be equally relevant for all investors.

3. Homogeneous expectations: all investors share the same information. We know this assumption cannot, in fact, be true. Anecdotally, different security analysts produce wildly different reports on the same stock. More objectively, the observed volume of trade on the stock exchanges is much higher than is predicted by trading models with assumed homogeneous expectations.

The CAPM is at the center of modern financial analysis. As with modern portfolio theory, its first and foremost contribution is conceptual: it has played a major role in helping us to organize our thoughts on the key issue of equilibrium asset pricing. Beyond that, it is, in principle, a testable theory, and indeed a huge amount of resources has been devoted to testing it.
Since the abovementioned assumptions do not hold up in practice, it is not surprising that empirical tests of the CAPM come up short.⁸ One example is the Roll (1977) critique. Roll reminds us that the CAPM's view of the market portfolio is that it contains every asset. Yet data on asset returns are not available for many assets. For example, no systematic data are available on real estate and, in the United States at least, approximately one-half of total wealth is estimated to be invested in real estate. Thus it is customary to use proxies for the true $M$ in conducting tests of the CAPM. Roll demonstrates, however, that even if two potential proxies for $M$ are correlated at greater than 0.9, the beta estimates obtained using each may be very different. This suggests that the empirical implications of the model are very sensitive to the choice of proxy. With no theory to inform us as to which proxy to use, the applicability of the theory is suspect.

Furthermore, beginning in the late 1970s and continuing to the present, more and more evidence has come to light suggesting that firm characteristics beyond beta may provide explanatory power for mean equity returns. In particular, various studies have demonstrated that a firm's average equity returns are significantly related to its size (as measured by the aggregate market value of equity), the ratio of book value per share to market value per share of its common equity, its equity price-to-earnings ratio, its cash flow per share to price per share ratio, and its historical sales growth. These relationships contradict the strict CAPM, which argues that only a stock's systematic risk should matter for its returns; as such, they are referred to as anomalies. In addition, even in models that depart from the CAPM assumptions, there is little theoretical explanation as to why these particular factors should be significant. We close this chapter by illustrating these ideas with brief summaries of two especially prominent recent papers.
⁸ We have chosen not to systematically review this literature. Standard testing procedures and their results are included in the introductory finance manuals that are prerequisites for the present text. Advanced issues properly belong to financial econometrics courses. The student wishing to invest in this area should consult Jensen (1979) and Friend, Westerfield, and Granito (1979) for early surveys, and Ferson and Jagannathan (1996) and Shanken (1996) for more recent ones.

Fama and French (1992) showed that the relationship between market betas and average returns is essentially flat for their sample period (1963 to 1990). In other words, their results suggest that the single-factor CAPM can no longer explain the cross-sectional variation in equity returns. They also find that, for this sample period, the univariate (single-factor) relationships between average stock returns and size (market value of equity), leverage, earnings-to-price ratio, and book-to-market value of equity per share are strong. More specifically, there is a negative relationship between size and average return, which is robust to the inclusion of other variables. There is also a consistent positive relationship between average returns and the book-to-market ratio, which is not swamped by the introduction of other variables. They find that the combination of size and the book-to-market ratio as explanatory variables appears, for their sample period, to subsume the explanatory roles of leverage and the price-to-earnings ratio.

In a related paper, Fama and French (1993) formalize their size and book-to-market factors more precisely by artificially constructing two factor portfolios, to which they assign the acronyms HML (high minus low) and SMB (small minus big). Both portfolios consist of a joint long and short position and have net asset value zero. The HML portfolio represents a combination of a long position in high book-to-market stocks with a short position in low book-to-market stocks.
The SMB portfolio consists of a long position in small-capitalization stocks and a short position in large-capitalization stocks. These designations are, of course, somewhat arbitrary.⁹ In conjunction with the excess (above the risk-free rate) return on a broad-based index, Fama and French study the ability of these factors to explain cross-sectional stock returns. They find that their explanatory power is highly significant.

⁹ A short description of how to construct SMB and HML: In June of each year, all NYSE stocks are ranked by size. The median NYSE size is used to split all NYSE, AMEX, and NASDAQ firms into two groups, small and big. All NYSE, AMEX, and NASDAQ stocks are also broken into three BE/ME groups based on the breakpoints for the bottom 30% (low), middle 40% (medium), and top 30% (high) of the ranked values of BE/ME (book value of equity / market value of equity) for NYSE stocks. Fama and French (1993) then construct six portfolios (S/L, S/M, S/H, B/L, B/M, B/H) from the intersection of the two ME groups and the three BE/ME groups. SMB is the difference between the simple average of the returns on the three small-stock portfolios (S/L, S/M, S/H) and that on the three big-stock portfolios (B/L, B/M, B/H); SMB mimics the risk factor in returns related to size. Accordingly, HML is the difference between the simple average of the returns on the two high-BE/ME portfolios (S/H, B/H) and that on the two low-BE/ME portfolios (S/L, B/L); HML mimics the risk factor in returns related to BE/ME.

References

Fama, E., French, K. (1992), "The Cross-Section of Expected Stock Returns," Journal of Finance 47, 427-465.

Fama, E., French, K. (1993), "Common Risk Factors in the Returns on Stocks and Bonds," Journal of Financial Economics 33, 3-56.

Ferson, W. E., Jagannathan, R. (1996), "Econometric Evaluation of Asset Pricing Models," in Statistical Methods in Finance, Handbook of Statistics, Vol. 14, Maddala, G.S., Rao, C.R., eds., Amsterdam: North Holland.
Friend, I., Westerfield, R., Granito, M. (1979), "New Evidence on the Capital Asset Pricing Model," in Handbook of Financial Economics, Bicksler, J.L., ed., North Holland.

Jensen, M. (1979), "Tests of Capital Market Theory and Implications of the Evidence," in Handbook of Financial Economics, Bicksler, J.L., ed., North Holland.

Lintner, J. (1965), "The Valuation of Risky Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets," Review of Economics and Statistics 47(1), 13-37.

Mossin, J. (1966), "Equilibrium in a Capital Asset Market," Econometrica 34(4), 768-783.

Roll, R. (1977), "A Critique of the Asset Pricing Theory's Tests, Part I: On Past and Potential Testability of the Theory," Journal of Financial Economics 4, 129-176.

Shanken, J. (1996), "Statistical Methods in Tests of Portfolio Efficiency: A Synthesis," in Statistical Methods in Finance, Handbook of Statistics, Vol. 14, Maddala, G.S., Rao, C.R., eds., Amsterdam: North Holland.

Sharpe, W. F. (1964), "Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk," Journal of Finance 19(3).

Appendix 7.1: Proof of the CAPM Relationship

Refer to Figure 7.1. Consider a portfolio with a fraction $1-\alpha$ of wealth invested in an arbitrary security $j$ and a fraction $\alpha$ in the market portfolio:

$$\bar r_p = \alpha\bar r_M + (1-\alpha)\bar r_j$$
$$\sigma_p^2 = \alpha^2\sigma_M^2 + (1-\alpha)^2\sigma_j^2 + 2\alpha(1-\alpha)\sigma_{jM}$$

As $\alpha$ varies we trace a locus that
- passes through $M$ (and through $j$),
- cannot cross the CML (why?),
- hence must be tangent to the CML at $M$.

Tangency means that the slope of the locus at $\alpha = 1$ equals the slope of the CML:

$$\left.\frac{d\bar r_p}{d\sigma_p}\right|_{\alpha=1} = \frac{\bar r_M - r_f}{\sigma_M}.$$

Computing the slope of the locus,

$$\frac{d\bar r_p}{d\sigma_p} = \frac{d\bar r_p/d\alpha}{d\sigma_p/d\alpha}, \qquad \frac{d\bar r_p}{d\alpha} = \bar r_M - \bar r_j, \qquad 2\sigma_p\frac{d\sigma_p}{d\alpha} = 2\alpha\sigma_M^2 - 2(1-\alpha)\sigma_j^2 + 2(1-2\alpha)\sigma_{jM},$$

so that

$$\frac{d\bar r_p}{d\sigma_p} = \frac{(\bar r_M - \bar r_j)\,\sigma_p}{\alpha\sigma_M^2 - (1-\alpha)\sigma_j^2 + (1-2\alpha)\sigma_{jM}}, \qquad \left.\frac{d\bar r_p}{d\sigma_p}\right|_{\alpha=1} = \frac{(\bar r_M - \bar r_j)\,\sigma_M}{\sigma_M^2 - \sigma_{jM}}.$$

Setting this equal to the slope of the CML,

$$\frac{(\bar r_M - \bar r_j)\,\sigma_M}{\sigma_M^2 - \sigma_{jM}} = \frac{\bar r_M - r_f}{\sigma_M},$$

and rearranging,

$$\bar r_M - \bar r_j = \frac{(\bar r_M - r_f)\left(\sigma_M^2 - \sigma_{jM}\right)}{\sigma_M^2} = (\bar r_M - r_f)\left(1 - \frac{\sigma_{jM}}{\sigma_M^2}\right),$$

$$\bar r_j = r_f + (\bar r_M - r_f)\,\frac{\sigma_{jM}}{\sigma_M^2}.$$

Appendix 7.2: The Mathematics of the Portfolio Frontier: An Example

Assume

$$e = \begin{pmatrix}\bar r_1\\ \bar r_2\end{pmatrix} = \begin{pmatrix}1\\ 2\end{pmatrix}; \qquad V = \begin{pmatrix}1 & -1\\ -1 & 4\end{pmatrix}, \text{ i.e., } \rho_{12} = \rho_{21} = -1/2.$$

Therefore,

$$V^{-1} = \begin{pmatrix}4/3 & 1/3\\ 1/3 & 1/3\end{pmatrix};$$

check:

$$\begin{pmatrix}1 & -1\\ -1 & 4\end{pmatrix}\begin{pmatrix}4/3 & 1/3\\ 1/3 & 1/3\end{pmatrix} = \begin{pmatrix}4/3 - 1/3 & 1/3 - 1/3\\ -4/3 + 4/3 & -1/3 + 4/3\end{pmatrix} = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}.$$

The constants are

$$A = \mathbf{1}^T V^{-1}e = \begin{pmatrix}1 & 1\end{pmatrix}\begin{pmatrix}4/3 & 1/3\\ 1/3 & 1/3\end{pmatrix}\begin{pmatrix}1\\ 2\end{pmatrix} = \begin{pmatrix}5/3 & 2/3\end{pmatrix}\begin{pmatrix}1\\ 2\end{pmatrix} = 3,$$

$$B = e^T V^{-1}e = \begin{pmatrix}1 & 2\end{pmatrix}\begin{pmatrix}2\\ 1\end{pmatrix} = 4, \qquad \text{since } V^{-1}e = \begin{pmatrix}4/3 + 2/3\\ 1/3 + 2/3\end{pmatrix} = \begin{pmatrix}2\\ 1\end{pmatrix},$$

$$C = \mathbf{1}^T V^{-1}\mathbf{1} = \begin{pmatrix}5/3 & 2/3\end{pmatrix}\begin{pmatrix}1\\ 1\end{pmatrix} = 7/3,$$

$$D = BC - A^2 = 4\,(7/3) - 9 = 28/3 - 27/3 = 1/3.$$

Now we can compute $g$ and $h$:

$$g = \frac{1}{D}\left[B\left(V^{-1}\mathbf{1}\right) - A\left(V^{-1}e\right)\right] = 3\left[4\begin{pmatrix}5/3\\ 2/3\end{pmatrix} - 3\begin{pmatrix}2\\ 1\end{pmatrix}\right] = 3\left[\begin{pmatrix}20/3\\ 8/3\end{pmatrix} - \begin{pmatrix}18/3\\ 9/3\end{pmatrix}\right] = \begin{pmatrix}2\\ -1\end{pmatrix},$$

$$h = \frac{1}{D}\left[C\left(V^{-1}e\right) - A\left(V^{-1}\mathbf{1}\right)\right] = 3\left[\frac{7}{3}\begin{pmatrix}2\\ 1\end{pmatrix} - 3\begin{pmatrix}5/3\\ 2/3\end{pmatrix}\right] = 3\left[\begin{pmatrix}14/3\\ 7/3\end{pmatrix} - \begin{pmatrix}15/3\\ 6/3\end{pmatrix}\right] = \begin{pmatrix}-1\\ 1\end{pmatrix}.$$

Check by recovering the two initial assets. Suppose $E(\tilde r_p) = 1$:

$$\begin{pmatrix}w_1\\ w_2\end{pmatrix} = \begin{pmatrix}2\\ -1\end{pmatrix} + \begin{pmatrix}-1\\ 1\end{pmatrix}\cdot 1 = \begin{pmatrix}1\\ 0\end{pmatrix} \;\Rightarrow\; \text{OK}$$

Suppose $E(\tilde r_p) = 2$:

$$\begin{pmatrix}w_1\\ w_2\end{pmatrix} = \begin{pmatrix}2\\ -1\end{pmatrix} + \begin{pmatrix}-1\\ 1\end{pmatrix}\cdot 2 = \begin{pmatrix}0\\ 1\end{pmatrix} \;\Rightarrow\; \text{OK}$$

The equation corresponding to Equation (7.16) thus reads

$$\begin{pmatrix}w_1^p\\ w_2^p\end{pmatrix} = \begin{pmatrix}2\\ -1\end{pmatrix} + \begin{pmatrix}-1\\ 1\end{pmatrix}E(\tilde r_p).$$

Let us compute the minimum variance portfolio for these assets.
The minimum variance portfolio has

$$E(\tilde r_{p,\min\text{var}}) = \frac{A}{C} = \frac{9}{7}, \qquad \sigma^2(\tilde r_{p,\min\text{var}}) = \frac{1}{C} = \frac{3}{7} < \min\{1, 4\},$$

$$w_p = \begin{pmatrix}2\\ -1\end{pmatrix} + \begin{pmatrix}-1\\ 1\end{pmatrix}\frac{9}{7} = \begin{pmatrix}14/7 - 9/7\\ -7/7 + 9/7\end{pmatrix} = \begin{pmatrix}5/7\\ 2/7\end{pmatrix}.$$

Let us check $\sigma^2(\tilde r_p)$ by computing it another way:

$$\sigma_p^2 = \begin{pmatrix}5/7 & 2/7\end{pmatrix}\begin{pmatrix}1 & -1\\ -1 & 4\end{pmatrix}\begin{pmatrix}5/7\\ 2/7\end{pmatrix} = \begin{pmatrix}5/7 & 2/7\end{pmatrix}\begin{pmatrix}3/7\\ 3/7\end{pmatrix} = \frac{3}{7} \;\Rightarrow\; \text{OK}$$

Chapter 8: Arrow-Debreu Pricing, Part I

8.1 Introduction

As interesting and popular as it is, the CAPM is a very limited theory of equilibrium pricing, and we will devote the next chapters to reviewing alternative theories, each of which goes beyond the CAPM in one direction or another. The Arrow-Debreu pricing theory discussed in this chapter is a full general equilibrium theory, as opposed to the partial equilibrium static view of the CAPM. Although also static in nature, it is applicable to a multi-period setup and can be generalized to a broad set of situations. In particular, it is free of any preference restrictions and of distributional assumptions on returns. The Consumption CAPM considered subsequently (Chapter 9) is a fully dynamic construct. It is also an equilibrium theory, though of a somewhat specialized nature. With the Risk-Neutral Valuation Model and the Arbitrage Pricing Theory (APT), taken up in Chapters 11 to 13, we will be moving into the domain of arbitrage-based theories, after observing, however, that the Arrow-Debreu pricing theory itself may also be interpreted from the arbitrage perspective (Chapter 10).

The Arrow-Debreu model takes a more standard equilibrium view than the CAPM: it is explicit in stating that equilibrium means supply equals demand in every market. It is a very general theory, accommodating production and, as already stated, very broad hypotheses on preferences. Moreover, no restriction on the distribution of returns is necessary. We will not, however, fully exploit the generality of the theory: in keeping with the objective of this text, we shall often limit ourselves to illustrating the theory with examples.
We will be interested in applying it to the equilibrium pricing of securities, especially the pricing of complex securities that pay returns in many different time periods and states of nature, such as common stocks or 30-year government coupon bonds. The theory will, as well, enrich our understanding of project valuation because of the formal equivalence, underlined in Chapter 2, between a project and an asset. In so doing we will be moving beyond a pure equilibrium analysis and will start using the concept of arbitrage. It is in the light of a set of no-arbitrage relationships that Arrow-Debreu pricing takes its full force. This perspective on the Arrow-Debreu theory will be developed in Chapter 10.

8.2 Setting: An Arrow-Debreu Economy

In the basic setting that we shall use, the following parameters apply:

1. There are two dates: 0 and 1. This setup, however, is fully generalizable to multiple periods; see the later remark.

2. There are $N$ possible states of nature at date 1, which we index by $\theta = 1, 2, \ldots, N$, with probabilities $\pi_\theta$.

3. There is one perishable (non-storable) consumption good.

4. There are $K$ agents, indexed by $k = 1, \ldots, K$, with preferences

$$U_0^k\left(c_0^k\right) + \delta^k \sum_{\theta=1}^N \pi_\theta U^k\left(c_\theta^k\right).$$

5. Agent $k$'s endowment is described by the vector $\left\{e_0^k, \left(e_\theta^k\right)_{\theta=1,2,\ldots,N}\right\}$.

In this description, $c_\theta^k$ denotes agent $k$'s consumption of the sole consumption good in state $\theta$, $U^k$ is the real-valued utility representation of agent $k$'s period preferences, and $\delta^k$ is the agent's time discount factor.

In fact, the theory allows for more general preferences than the time-additive expected utility form. Specifically, we could adopt the following representation of preferences:

$$u^k\left(c_0^k, c_{\theta_1}^k, c_{\theta_2}^k, \ldots, c_{\theta_N}^k\right).$$
This formulation allows not only for a different way of discounting the future (implicit in the relative taste for present consumption relative to all future consumptions), but it also permits heterogeneous, subjective views on the state probabilities (again implicit in the representation of the relative preference for, say, $c_{\theta_2}^k$ vs. $c_{\theta_3}^k$). In addition, it assumes neither time-additivity nor an expected utility representation. Since our main objective is not generality, we choose to work with the less general, but easier to manipulate, time-additive expected utility form.

In this economy, the only traded securities are of the following type: one unit of security $\theta$, with price $q_\theta$, pays one unit of consumption if state $\theta$ occurs and nothing otherwise. Its payout can thus be summarized by a vector with all entries equal to zero except for column $\theta$, where the entry is 1: $(0, \ldots, 0, 1, 0, \ldots, 0)$. These primitive securities are called Arrow-Debreu securities,¹ state-contingent claims, or simply state claims. Of course, the consumption of any individual $k$ if state $\theta$ occurs equals the number of units of security $\theta$ that he holds. This follows from the fact that buying the relevant contingent claim is the only way for a consumer to secure purchasing power at a future date-state (recall that the good is perishable).

An agent's decision problem can then be characterized by

$$\max_{\left(c_0^k, c_1^k, \ldots, c_N^k\right)} \; U_0^k\left(c_0^k\right) + \delta^k \sum_{\theta=1}^N \pi_\theta U^k\left(c_\theta^k\right) \qquad (P)$$
$$\text{s.t.} \quad c_0^k + \sum_{\theta=1}^N q_\theta c_\theta^k \leq e_0^k + \sum_{\theta=1}^N q_\theta e_\theta^k,$$
$$c_0^k, c_1^k, \ldots, c_N^k \geq 0.$$

¹ So named after the originators of modern equilibrium theory; see Arrow (1951) and Debreu (1959).

The first inequality constraint will typically hold with equality in a world of non-satiation. That is, the total value of goods and security purchases made by the agent (the left-hand side of the inequality) will exhaust the total value of his endowments (the right-hand side).
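As a sketch, problem (P) can be solved for a single agent under assumed log-utility preferences and illustrative state-claim prices; both the prices $q = (0.3, 0.4)$ and the utility specification below are assumptions (anticipating the example of Section 8.3), not data from the text:

```python
import numpy as np

delta = 0.9
pi = np.array([1/3, 2/3])            # state probabilities
q  = np.array([0.3, 0.4])            # illustrative state-claim prices (assumed)
e0 = 10.0                            # date-0 endowment
e  = np.array([1.0, 2.0])            # date-1 endowments by state

def objective(c):
    """Problem (P) with date-0 consumption substituted out via the budget constraint."""
    c0 = e0 + q @ e - q @ c          # budget holds with equality (non-satiation)
    return 0.5 * c0 + delta * (pi @ np.log(c))

# Interior FOC: q_theta / 2 = delta * pi_theta / c_theta
c_star = 2 * delta * pi / q

# The FOC solution dominates nearby feasible perturbations (the objective is concave)
for d in ([0.1, 0.0], [-0.1, 0.0], [0.0, 0.1], [0.0, -0.1]):
    assert objective(c_star) >= objective(c_star + np.array(d))
```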
An equilibrium for this economy is a set of contingent claim prices $(q_1, q_2, \ldots, q_N)$ such that

1. at those prices, $\left(c_0^k, \ldots, c_N^k\right)$ solves problem (P) for all $k$, and

2. $\sum_{k=1}^K c_0^k = \sum_{k=1}^K e_0^k$ and $\sum_{k=1}^K c_\theta^k = \sum_{k=1}^K e_\theta^k$ for every $\theta$.

Note that here the agents are solving for desired future and present consumption holdings rather than for holdings of Arrow-Debreu securities. This is justified because, as just noted, there is a one-to-one relationship between the amount consumed by an individual in a given state $\theta$ and his holdings of the Arrow-Debreu security corresponding to that particular state $\theta$, the latter being a promise to deliver one unit of the consumption good if that state occurs.

Note also that there is nothing in this formulation that inherently restricts matters to two periods, if we define our notion of a state somewhat more richly, as a date-state pair. Consider three periods, for example. There are $N$ possible states at date 1 and $J$ possible states at date 2, irrespective of the state $j$ achieved at date 1. Define new states to be of the form $\hat\theta_s = \left(j, \theta_k^j\right)$, where $j$ denotes the state at date 1 and $\theta_k^j$ denotes state $k$ at date 2, conditional on state $j$ having been observed at date 1 (refer to Figure 8.1). So $\left(1, \theta_5^1\right)$ would be one state and $\left(2, \theta_3^2\right)$ another. Under this interpretation, the number of states expands to $1 + NJ$, with:

1 : the date 0 state
N : the number of date-1 states
J : the number of date-2 states

Insert Figure 8.1

With minor modifications, we can thus accommodate many periods and states. In this sense, our model is fully general and can represent as complex an environment as we might desire.

In this model, the real productive side of the economy is in the background. We are, in effect, viewing that part of the economy as invariant to securities trading. The unusual and unrealistic aspect of this economy is that all trades occur at $t = 0$.² We will relax this assumption in Chapter 9.

² Interestingly, this is less of a problem for project valuation than for asset pricing.
8.3 Competitive Equilibrium and Pareto Optimality Illustrated

Let us now develop an example. The essentials are found in Table 8.1.

² Interestingly, this is less of a problem for project valuation than for asset pricing.

Table 8.1: Endowments and Preferences in Our Reference Example

                Endowments
          t=0      t=1
                 θ1    θ2    Preferences
Agent 1    10     1     2    U¹ = (1/2)c_0^1 + 0.9[(1/3) ln c_1^1 + (2/3) ln c_2^1]
Agent 2     5     4     6    U² = (1/2)c_0^2 + 0.9[(1/3) ln c_1^2 + (2/3) ln c_2^2]

There are two dates and, at the future date, two possible states of nature with probabilities 1/3 and 2/3. It is an exchange economy, and the issue is how to share the existing endowments between two individuals. Their (identical) preferences are linear in date 0 consumption with constant marginal utility equal to 1/2. This choice is made for ease of computation, but great care must be exercised in interpreting the results obtained in such a simplified framework. Date 1 preferences are concave and identical. The discount factor is 0.9.

Let $q_1$ be the price of a unit of consumption in date 1 state 1, and $q_2$ the price of one unit of the consumption good in date 1 state 2. We will solve for optimal consumption directly, knowing that this will define the equilibrium holdings of the securities. The prices of these consumption goods coincide with the prices of the corresponding state-contingent claims; period 0 consumption is taken as the numeraire and its price is 1. This means that all prices are expressed in units of period 0 consumption: $q_1$, $q_2$ are prices for the consumption good at date 1, in states 1 and 2, respectively, measured in units of date 0 consumption. They can thus be used to add up or compare units of consumption at different dates and in different states, making it possible to add different date cash flows, with the $q_i$ being the appropriate weights. This, in turn, permits computing an individual's wealth.
Thus, in the previous problem, agent 1's wealth, which equals the present value of his current and future endowments, is $10 + 1q_1 + 2q_2$, while agent 2's wealth is $5 + 4q_1 + 6q_2$. The respective agent problems are:

Agent 1:
$$\max_{c_1^1, c_2^1} \; \frac{1}{2}\left(10 + q_1 + 2q_2 - c_1^1 q_1 - c_2^1 q_2\right) + 0.9\left[\frac{1}{3}\ln c_1^1 + \frac{2}{3}\ln c_2^1\right]$$
$$\text{s.t.} \quad c_1^1 q_1 + c_2^1 q_2 \le 10 + q_1 + 2q_2, \quad \text{and} \quad c_1^1, c_2^1 \ge 0$$

Agent 2:
$$\max_{c_1^2, c_2^2} \; \frac{1}{2}\left(5 + 4q_1 + 6q_2 - c_1^2 q_1 - c_2^2 q_2\right) + 0.9\left[\frac{1}{3}\ln c_1^2 + \frac{2}{3}\ln c_2^2\right]$$
$$\text{s.t.} \quad c_1^2 q_1 + c_2^2 q_2 \le 5 + 4q_1 + 6q_2, \quad \text{and} \quad c_1^2, c_2^2 \ge 0$$

Note that in this formulation we have substituted out the date 0 consumption; in other words, the first term in the max expression stands for $\frac{1}{2}c_0^1$, where we have substituted for $c_0^1$ its value obtained from the constraint $c_0^1 + c_1^1 q_1 + c_2^1 q_2 = 10 + 1q_1 + 2q_2$. With this trick, the only constraints remaining are the non-negativity constraints requiring consumption to be nonnegative in all date-states.

The FOCs state that the intertemporal rate of substitution between future (in either state) and present consumption (i.e., the ratio of the relevant marginal utilities) should equal the price ratio. The latter is effectively measured by the price of the Arrow-Debreu security, the date 0 price of consumption being the numeraire. These FOCs (assuming interior solutions) are:

Agent 1:
$$c_1^1: \quad \frac{q_1}{2} = 0.9\left(\frac{1}{3}\right)\frac{1}{c_1^1}; \qquad c_2^1: \quad \frac{q_2}{2} = 0.9\left(\frac{2}{3}\right)\frac{1}{c_2^1}$$

Agent 2:
$$c_1^2: \quad \frac{q_1}{2} = 0.9\left(\frac{1}{3}\right)\frac{1}{c_1^2}; \qquad c_2^2: \quad \frac{q_2}{2} = 0.9\left(\frac{2}{3}\right)\frac{1}{c_2^2}$$

while the market clearing conditions read: $c_1^1 + c_1^2 = 5$ and $c_2^1 + c_2^2 = 8$. Each of the FOCs is of the form

$$q_\theta \cdot \frac{1}{2} = (0.9)\,\pi_\theta\,\frac{1}{c_\theta^k}, \quad k, \theta = 1, 2, \qquad \text{or} \qquad q_\theta = \delta\,\pi_\theta\,\frac{\partial U^k/\partial c_\theta^k}{\partial U_0^k/\partial c_0^k}, \quad k, \theta = 1, 2. \qquad (8.1)$$

Together with the market clearing conditions, Equation (8.1) reveals the determinants of the equilibrium Arrow-Debreu security prices.
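These conditions are easy to verify numerically. The following sketch (ours, not part of the text; variable names are our own) solves the FOCs together with market clearing for the Table 8.1 economy, exploiting the fact that identical date 1 preferences force identical date 1 consumptions:

```python
# Numeric check of the Table 8.1 equilibrium (illustrative sketch, not from the text).
delta = 0.9
pi = {1: 1/3, 2: 2/3}                  # state probabilities
endow = {1: (10, 1, 2), 2: (5, 4, 6)}  # (e0, e_theta1, e_theta2) for each agent

# Each FOC reads q_th * (1/2) = delta * pi_th / c_th^k for both agents, so the two
# agents consume the same amount in each state: half the aggregate date-1 endowment.
total = {th: endow[1][th] + endow[2][th] for th in (1, 2)}
c = {th: total[th] / 2 for th in (1, 2)}               # c_1 = 2.5, c_2 = 4
q = {th: 2 * delta * pi[th] / c[th] for th in (1, 2)}  # q_1 = 0.24, q_2 = 0.3

# Date-0 consumption from each agent's budget constraint (sell endowment, buy c).
c0 = {k: endow[k][0] + sum(q[th] * (endow[k][th] - c[th]) for th in (1, 2))
      for k in (1, 2)}
```

The computed prices and consumptions reproduce the values derived analytically below.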
It is of the form:

$$\frac{MU_\theta^k}{MU_0^k} = \frac{\text{price of the good if state } \theta \text{ is realized}}{\text{price of the good today}},$$

in other words, the ratio of the price of the Arrow-Debreu security to the price of the date 0 consumption good must equal (at an interior solution) the ratio of the marginal utility of consumption tomorrow if state $\theta$ is realized to the marginal utility of today's consumption (the latter being constant at 1/2). This is the marginal rate of substitution between the contingent consumption in state $\theta$ and today's consumption.

From this system of equations, one clearly obtains $c_1^1 = c_1^2 = 2.5$ and $c_2^1 = c_2^2 = 4$, from which one, in turn, derives:

$$q_1 = 2(0.9)\left(\frac{1}{3}\right)\frac{1}{c_1^1} = 2(0.9)\left(\frac{1}{3}\right)\frac{1}{2.5} = 0.24$$

$$q_2 = 2(0.9)\left(\frac{2}{3}\right)\frac{1}{c_2^1} = 2(0.9)\left(\frac{2}{3}\right)\frac{1}{4} = 0.3$$

Notice how the Arrow-Debreu state-contingent prices reflect probabilities, on the one hand, and marginal rates of substitution (taking the time discount factor into account and computed at consumption levels compatible with market clearing), and thus relative scarcities, on the other. The prices computed above differ in that they take account of the different state probabilities (1/3 for state 1, 2/3 for state 2) and because the marginal utilities differ as a result of the differing total quantities of the consumption good available in state 1 (5 units) and in state 2 (8 units). In our particular formulation, the total amount of goods available at date 0 is made irrelevant by the fact that date 0 marginal utility is constant. Note that if the date 1 marginal utilities were constant, as would be the case with linear (risk-neutral) utility functions, the goods endowments would not influence the Arrow-Debreu prices, which would then be exactly proportional to the state probabilities.

Box 8.1: Interior vs. Corner Solutions

We have described the interior solution to the maximization problem.
By that restriction we generally mean the following: the problem under maximization is constrained by the condition that consumption at all dates should be nonnegative. There is no interpretation given to a negative level of consumption and, generally, even a zero consumption level is precluded. Indeed, when we make the assumption of a log utility function, the marginal utility at zero is infinity, meaning that by construction the agent will do all that is in his power to avoid that situation. Effectively, an equation such as Equation (8.1) will never be satisfied for finite and nonzero prices with log utility and a period 1 consumption level equal to zero; that is, it will never be optimal to select a zero consumption level.

Such is not the case with the linear utility function assumed to prevail at date 0. Here it is conceivable that, no matter what, the marginal utility in either state at date 1 [the numerator in the RHS of Equation (8.1)] is larger than 1/2 times the Arrow-Debreu price [the denominator of the RHS in Equation (8.1) multiplied by the state price]. Intuitively, this would be a situation where the agent derives more utility from the good tomorrow than from consuming today, even when his consumption level today is zero. Fundamentally, the interior optimum is one where he would like to consume less than zero today to increase consumption tomorrow even further, something which is impossible. Thus the only solution is at a corner, that is, at the boundary of the feasible set, with $c_0^k = 0$ and the condition in Equation (8.1) taking the form of an inequality. In the present case we can argue that corner solutions cannot occur with regard to future consumption (because of the log utility assumption). The full and complete description of the FOCs for problem (P) spelled out in Section 8.2 is then

$$q_\theta\,\frac{\partial U_0^k}{\partial c_0^k} \le \delta\,\pi_\theta\,\frac{\partial U^k}{\partial c_\theta^k}, \quad \text{with equality if } c_0^k > 0, \quad k, \theta = 1, 2. \qquad (8.2)$$

In line with our goal of being as transparent as possible, we will often, in the sequel, satisfy ourselves with a description of interior solutions to optimizing problems, taking care to ascertain, ex post, that the solutions do indeed occur in the interior of the choice set. This can be done in the present case by verifying that the optimal $c_0^k$ is strictly positive for both agents at the interior solutions, so that Equation (8.1) must indeed apply.

The date 0 consumptions, at those equilibrium prices, are given by

$$c_0^1 = 10 + 1(.24) + 2(.3) - 2.5(.24) - 4(.3) = 9.04$$
$$c_0^2 = 5 + 4(.24) + 6(.3) - 2.5(.24) - 4(.3) = 5.96$$

The post-trade equilibrium consumptions are found in Table 8.2. This allocation is the best each agent can achieve at the given prices $q_1 = .24$ and $q_2 = .3$. Furthermore, at those prices, supply equals demand in each market, in every state and time period. These are the characteristics of a (general) competitive equilibrium.

Table 8.2: Post-Trade Equilibrium Consumptions

          t=0       t=1
                  θ1    θ2
Agent 1   9.04    2.5    4
Agent 2   5.96    2.5    4
Total    15.00    5.0    8

In light of this example, it is interesting to return to some of the concepts discussed in our introductory chapter. In particular, let us confirm the (Pareto) optimality of the allocation emerging from the competitive equilibrium. Indeed, we have assumed as many markets as there are states of nature, so assumption H1 is satisfied. We have de facto assumed competitive behavior on the part of our two consumers (they have taken prices as given when solving their optimization problems), so H2 is satisfied. (Of course, in reality such behavior would not be privately optimal if indeed there were only two agents. Our example would not have changed materially had we assumed a large number of agents, but the notation would have become much more cumbersome.) In order to guarantee the existence of an equilibrium, we need hypotheses H3 and H4 as well.
H3 is satisfied in a weak form (no curvature in date 0 utility). Finally, ours is an exchange economy where H4 does not apply (or, if one prefers, it is trivially satisfied). Once the equilibrium is known to exist, as is the case here, H1 and H2 are sufficient to guarantee the optimality of the resulting allocation of resources. Thus, we expect to find that the above competitive allocation is Pareto optimal (PO); that is, it is impossible to rearrange the allocation of consumptions so that the utility of one agent is higher without diminishing the utility of the other agent.

One way to verify the optimality of the competitive allocation is to establish the precise conditions that must be satisfied for an allocation to be Pareto optimal in the exchange economy context of our example. It is intuitively clear that Pareto superior reallocations will be impossible if the initial allocation maximizes a weighted sum of the two agents' utilities. That is, an allocation is optimal in our example if, for some weight $\lambda$, it solves the following maximization problem:³

$$\max_{\{c_0^1, c_1^1, c_2^1\}} \; u^1(c_0^1, c_1^1, c_2^1) + \lambda\, u^2(c_0^2, c_1^2, c_2^2)$$
$$\text{s.t.} \quad c_0^1 + c_0^2 = 15; \quad c_1^1 + c_1^2 = 5; \quad c_2^1 + c_2^2 = 8;$$
$$c_0^1, c_1^1, c_2^1, c_0^2, c_1^2, c_2^2 \ge 0$$

This problem can be interpreted as the problem of a benevolent central planner constrained by the economy's total endowment (15, 5, 8) and weighting the two agents' utilities according to a parameter $\lambda$, possibly equal to 1. The decision variables at his disposal are the consumption levels of the two agents at the two dates and in the two states. With $u_i^k$ denoting the derivative of agent $k$'s utility function with respect to $c_i^k$ ($i = 0, 1, 2$), the FOCs for an interior solution to the above problem are found in Equation (8.3).

³ It is just as easy here to work with the most general utility representation.
$$\frac{u_0^1}{u_0^2} = \frac{u_1^1}{u_1^2} = \frac{u_2^1}{u_2^2} = \lambda \qquad (8.3)$$

This condition states that, in a Pareto optimal allocation, the ratios of the two agents' marginal utilities with respect to the three goods (i.e., the consumption good at date 0, the consumption good at date 1 if state 1 occurs, and the consumption good at date 1 if state 2 occurs) should be identical.⁴ In an exchange economy this condition, properly extended to take account of the possibility of corner solutions, together with the condition that the agents' consumptions add up to the endowment in each date-state, is necessary and sufficient.

It remains to check that Equation (8.3) is satisfied at the equilibrium allocation. We can rewrite Equation (8.3) for the parameters of our example:

$$\frac{1/2}{1/2} = \frac{(0.9)\frac{1}{3}\frac{1}{c_1^1}}{(0.9)\frac{1}{3}\frac{1}{c_1^2}} = \frac{(0.9)\frac{2}{3}\frac{1}{c_2^1}}{(0.9)\frac{2}{3}\frac{1}{c_2^2}}$$

It is clear that the condition in Equation (8.3) is satisfied since $c_1^1 = c_1^2$ and $c_2^1 = c_2^2$ at the competitive equilibrium, which thus corresponds to the Pareto optimum with equal weighting of the two agents' utilities: $\lambda = 1$, and all three ratios of marginal utilities are equal to 1.

Note that other Pareto optima are feasible, for example one where $\lambda = 2$. In that case, however, only the latter two equalities can be satisfied: the date 0 marginal utilities are constant, which implies that no matter how agent consumptions are redistributed by the market or by the central planner, the first ratio of marginal utilities in Equation (8.3) cannot be made equal to 2. This is an example of a corner solution to the maximization problem leading to Equation (8.3).

In this example, agents are able to purchase consumption in any date-state of nature. This is the case because there are enough Arrow-Debreu securities; specifically, there is an Arrow-Debreu security corresponding to each state of nature. If this were not the case, the attainable utility levels would decrease: at least one agent, possibly both of them, would be worse off.
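Returning to the λ = 2 Pareto optimum mentioned above, its date 1 allocations can be computed explicitly. A small sketch (ours, for illustration only): with log period utility, the condition $u_\theta^1/u_\theta^2 = \lambda$ forces agent 2's consumption to be λ times agent 1's in each state.

```python
# Date-1 allocations of the lambda = 2 Pareto optimum (illustrative, not from the text).
lam = 2.0
totals = {1: 5.0, 2: 8.0}   # aggregate date-1 endowments in states 1 and 2

# Log utility: u_th^1 / u_th^2 = (1/c_th^1) / (1/c_th^2) = c_th^2 / c_th^1 = lam,
# so in each state agent 2 consumes lam times what agent 1 consumes.
alloc = {th: (T / (1 + lam), lam * T / (1 + lam)) for th, T in totals.items()}

# Marginal utility ratios: each should equal lam, confirming Equation (8.3)
# for the date-1 goods (the date-0 ratio stays stuck at 1, the corner).
mu_ratios = {th: (1 / alloc[th][0]) / (1 / alloc[th][1]) for th in totals}
```

Agent 1 thus receives (5/3, 8/3) and agent 2 receives (10/3, 16/3) across the two states.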
If we assume that only the state 1 Arrow-Debreu security is available, then there is no way to make the state 2 consumption of the agents differ from their endowments. It is easy to check that this constraint does not modify their demand for the state 1 contingent claim, nor its price. The post-trade allocation, in that situation, is found in Table 8.3. The resulting post-trade utilities are:

Agent 1: $\frac{1}{2}(9.64) + .9\left(\frac{1}{3}\ln(2.5) + \frac{2}{3}\ln(2)\right) = 5.51$
Agent 2: $\frac{1}{2}(5.36) + .9\left(\frac{1}{3}\ln(2.5) + \frac{2}{3}\ln(6)\right) = 4.03$

⁴ Check that Equation (8.3) implies that the MRS between any pair of goods is the same for the two agents, and refer to the definition of the contract curve (the set of PO allocations) in the appendix to Chapter 1.

Table 8.3: The Post-Trade Allocation

          t=0       t=1
                  θ1    θ2
Agent 1   9.64    2.5    2
Agent 2   5.36    2.5    6
Total    15.00    5.0    8

In the case with two state-contingent claim markets, the post-trade utilities are both higher (illustrating a reallocation of resources that is said to be Pareto superior to the no-trade allocation):

Agent 1: $\frac{1}{2}(9.04) + .9\left(\frac{1}{3}\ln(2.5) + \frac{2}{3}\ln(4)\right) = 5.62$
Agent 2: $\frac{1}{2}(5.96) + .9\left(\frac{1}{3}\ln(2.5) + \frac{2}{3}\ln(4)\right) = 4.09$

When there is an Arrow-Debreu security corresponding to each state of nature, one says that the securities markets are complete.

8.4 Pareto Optimality and Risk Sharing

In this section and the next we further explore the nexus between a competitive equilibrium in an Arrow-Debreu economy and Pareto optimality. We first discuss the risk-sharing properties of a Pareto optimal allocation. We remain in the general framework of the example of the previous two sections but start with a different set of parameters. In particular, let the endowment matrix for the two agents be as shown in Table 8.4.

Table 8.4: The New Endowment Matrix

          t=0       t=1
                  θ1    θ2
Agent 1    4       1     5
Agent 2    4       5     1

Assume further that each state is now equally likely, with probability 1/2. As before, consumption in period 0 cannot be stored and carried over into period 1.
In the absence of trade, agents clearly experience widely differing consumption and utility levels in period 1, depending on which state occurs (see Table 8.5). How could agents' utilities be improved? By concavity (risk aversion), this must be accomplished by reducing the spread of the date 1 income possibilities, in other words, lowering the risk associated with date 1 income. Because of symmetry, all date 1 income fluctuations can, in fact, be eliminated if agent 2 agrees to transfer 2 units of the good in state 1 against the promise to receive 2 units from agent 1 if state 2 is realized (see Table 8.6).

Table 8.5: Agents' Utility in the Absence of Trade

          State-Contingent Utility          Expected Utility in Period 1
             θ1              θ2
Agent 1   ln(1) = 0      ln(5) = 1.609      (1/2)ln(1) + (1/2)ln(5) = .8047
Agent 2   ln(5) = 1.609  ln(1) = 0          (1/2)ln(1) + (1/2)ln(5) = .8047

Table 8.6: The Desirable Trades and Post-Trade Consumptions

          Date 1 Endowments Pre-Trade    Consumption Post-Trade
              θ1          θ2                 θ1    θ2
Agent 1    1 [⇑2]      5 [⇓2]                3     3
Agent 2    5 [⇓2]      1 [⇑2]                3     3

Now we can compare expected second-period utility levels before and after trade for both agents:

Before: .8047
After: (1/2)ln(3) + (1/2)ln(3) = 1.099 ≈ 1.1

In other words, expected utility has increased quite significantly, as anticipated.⁵ This feasible allocation is, in fact, Pareto optimal. In conformity with Equation (8.3), the ratios of the two agents' marginal utilities are indeed equalized across states. More is accomplished in this perfectly symmetrical and equitable allocation: consumption levels and marginal utilities are equated across agents and states, but this is a coincidence resulting from the symmetry of the initial endowments.

Suppose the initial allocation was that illustrated in Table 8.7. Once again there is no aggregate risk: the total date 1 endowment is the same in the two states, but one agent is now richer than the other. Now consider the plausible trade outlined in Table 8.8.
Check that the new post-trade allocation is also Pareto optimal: although consumption levels and marginal utilities are not identical, the ratio of marginal utilities is the same across states (except at date 0 where, as before, we have a corner solution since the marginal utilities are given constants). Note that this PO allocation features perfect risk sharing as well. By that we mean that the two agents have constant date 1 consumption (2 units for agent 1, 4 units for agent 2) independent of the realized state. This is a general characteristic of PO allocations in the absence of aggregate risk (and with risk-averse agents).

⁵ With the selected utility function, it has increased by 37%. Such quantification is not, however, compatible with the observation that expected utility functions are defined only up to a linear transformation. Instead of using ln c for the period utility function, we could equally well have used (b + ln c) to represent the same preference ordering. The quantification of the increase in utility pre- and post-trade would be affected.

Table 8.7: Another Set of Initial Allocations

          t=0       t=1
                  θ1    θ2
Agent 1    4       1     3
Agent 2    4       5     3

Table 8.8: Plausible Trades and Post-Trade Consumptions

          Date 1 Endowments Pre-Trade    Consumption Post-Trade
              θ1          θ2                 θ1    θ2
Agent 1    1 [⇑1]      3 [⇓1]                2     2
Agent 2    5 [⇓1]      3 [⇑1]                4     4

If there is no aggregate risk, all PO allocations necessarily feature full mutual insurance. This statement can be demonstrated using the data of our problem. Equation (8.3) states that the ratio of the two agents' marginal utilities should be equated across states. This also implies, however, that the marginal rate of substitution (MRS) between state 1 and state 2 consumption must be the same for the two agents. In the case of log period utility:

$$\frac{1/c_1^1}{1/c_1^2} = \frac{1/c_2^1}{1/c_2^2} \quad \Longleftrightarrow \quad \frac{1/c_1^1}{1/c_2^1} = \frac{1/c_1^2}{1/c_2^2}$$

The latter equality has the following implications:

1. If one of the two agents is fully insured (no variation in his date 1 consumption, i.e., MRS = 1), the other must be as well.

2. More generally, if the MRSs are to differ from 1, given that they must be equal to each other, the low consumption-high MU state must be the same for both agents, and similarly for the high consumption-low MU state. But this is impossible if there is no aggregate risk and the total endowment is constant. Thus, as asserted, in the absence of aggregate risk, a PO allocation features perfectly insured individuals and MRSs identically equal to 1.

3. If there is aggregate risk, however, the above reasoning also implies that, at a Pareto optimum, it is shared "proportionately." This is literally true if agents' preferences are homogeneous. Refer to the competitive equilibrium of Section 8.3 for an example.

4. Finally, if agents are differentially risk averse, in a Pareto optimal allocation the less risk averse will typically provide some insurance services to the more risk averse. This is most easily illustrated by assuming that one of the two agents, say agent 1, is risk neutral. By risk neutrality, agent 1's marginal utility is constant. But then the marginal utility of agent 2 should also be constant across states. For this to be the case, however, agent 2's income uncertainty must be fully absorbed by agent 1, the risk-neutral agent.

5. More generally, optimal risk sharing dictates that the agent most tolerant of risk bears a disproportionate share of it.

8.5 Implementing Pareto Optimal Allocations: On the Possibility of Market Failure

Although, to achieve the desired allocations, the agents of our previous section could simply effect a handshake trade, real economic agents typically interact only through impersonal security markets or through deals involving financial intermediaries. One reason is that, in an organized security market, the contracts implied by the purchase or sale of a security are enforceable.
This is important: without an enforceable contract, if state 1 occurs, agent 2 might retreat from his ex-ante commitment and refuse to give up the promised consumption to agent 1, and vice versa if state 2 occurs. Accordingly, we now address the following question: what securities could empower these agents to achieve the optimal allocation for themselves?

Consider the Arrow-Debreu security with payoff in state 1 and call it security Q to clarify the notation below. Denote its price by $q_Q$, and let us compute the demand for this security by each agent, denoted $z_Q^i$, $i = 1, 2$. The price is expressed in terms of period 0 consumption. We otherwise maintain the setup of the preceding section. Thus,

Agent 1 solves:
$$\max_{z_Q^1} \; (4 - q_Q z_Q^1) + \left[\tfrac{1}{2}\ln(1 + z_Q^1) + \tfrac{1}{2}\ln(5)\right] \quad \text{s.t.} \quad q_Q z_Q^1 \le 4$$

Agent 2 solves:
$$\max_{z_Q^2} \; (4 - q_Q z_Q^2) + \left[\tfrac{1}{2}\ln(5 + z_Q^2) + \tfrac{1}{2}\ln(1)\right] \quad \text{s.t.} \quad q_Q z_Q^2 \le 4$$

Assuming an interior solution, the FOCs are, respectively,

$$-q_Q + \frac{1}{2}\,\frac{1}{1 + z_Q^1} = 0; \qquad -q_Q + \frac{1}{2}\,\frac{1}{5 + z_Q^2} = 0 \;\Rightarrow\; \frac{1}{1 + z_Q^1} = \frac{1}{5 + z_Q^2};$$

also $z_Q^1 + z_Q^2 = 0$ in equilibrium; hence $z_Q^1 = 2$ and $z_Q^2 = -2$. These represent the holdings of each agent, and $q_Q = (1/2)(1/3) = 1/6$. In effect, agent 1 gives up $q_Q z_Q^1 = (1/6)(2) = 1/3$ unit of consumption at date 0 to agent 2 in exchange for 2 units of consumption at date 1 if state 1 occurs. Both agents are better off, as revealed by the computation of their expected utilities post-trade:

Agent 1 expected utility: $4 - 1/3 + \tfrac{1}{2}\ln 3 + \tfrac{1}{2}\ln 5 = 5.021$
Agent 2 expected utility: $4 + 1/3 + \tfrac{1}{2}\ln 3 + \tfrac{1}{2}\ln 1 = 4.883$,

though agent 2 only slightly so. Clearly agent 1 is made proportionately better off because security Q pays off in the state where his MU is highest. We may view agent 2 as the issuer of this security, as it entails, for him, a future obligation.⁶ Let us denote by R the other conceivable Arrow-Debreu security, the one paying in state 2. By symmetry, it would also have a price of 1/6, and the demand at this price would be $z_R^1 = -2$, $z_R^2 = +2$, respectively.
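The algebra for security Q can be replayed numerically. A minimal sketch (ours, not part of the text) that solves the two FOCs under market clearing:

```python
from math import log

# FOCs: q = (1/2)/(1 + z1) and q = (1/2)/(5 + z2), with market clearing z1 + z2 = 0.
# Substituting z2 = -z1 gives 1 + z1 = 5 - z1, hence z1 = 2.
z1 = 2.0
z2 = -z1
q = 0.5 / (1 + z1)          # = 1/6

# Post-trade expected utilities: date-0 consumption plus expected log utility at date 1.
eu1 = (4 - q * z1) + 0.5 * log(1 + z1) + 0.5 * log(5)
eu2 = (4 - q * z2) + 0.5 * log(5 + z2) + 0.5 * log(1)
```

Both expected utilities exceed the no-trade level of $4 + \tfrac{1}{2}\ln 5 \approx 4.8047$, confirming the gains from trade.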
Agent 2 would give up 1/3 unit of period 0 consumption to agent 1 in exchange for 2 units of consumption in state 2. Thus, if both securities Q and R are traded, the market allocation will replicate the optimal allocation of risks, as seen in Table 8.9.

Table 8.9: Market Allocation When Both Securities Are Traded

          t=0       t=1
                  θ1    θ2
Agent 1    4       3     3
Agent 2    4       3     3

In general, it will be possible to achieve the optimal allocation of risks provided the number of linearly independent securities equals the number of states of nature. By linearly independent we mean, again, that there is no security whose payoff pattern across states and time periods can be duplicated by a portfolio of other securities. This important topic will be discussed at length in Chapter 10. Here let us simply take stock of the fact that our securities Q and R are the simplest pair of securities with this property.

Although a complete set of Arrow-Debreu securities is sufficient for optimal risk sharing, it is not necessary, in the sense that it is possible, by coincidence, for the desirable trades to be effected with a simpler asset structure. For our simple example, one security would allow the agents to achieve that goal because of the essential symmetry of the problem. Consider security Z with payoffs:

       θ1    θ2
Z       2    -2

Clearly, if agent 1 purchases 1 unit of this security ($z_Z^1 = 1$) and agent 2 sells one unit ($z_Z^2 = -1$), optimal risk sharing is achieved. (At what price would this security sell?)

So far we have implicitly assumed that the creation of these securities is costless. In reality, the creation of a new security is an expensive proposition: disclosure documents, promotional materials, etc., must be created, and the agents most likely to be interested in the security contacted. In this example, issuance will occur only if the cost of issuing Q and R does not exceed the (expected) utility gained from purchasing them.⁶ In this margin lies the investment banker's fee.

⁶ In a noncompetitive situation, it is likely that agent 2 could extract a larger portion of the rent. Remember, however, that we maintain, throughout, the assumption of price-taking behavior for our two agents, who are representatives of larger classes of similar individuals.

In the previous discussion we imagined each agent as issuing securities to the other simultaneously. More realistically, perhaps, we could think of the securities Q and R as being issued in sequence, one after the other (but both before period 1 uncertainty is resolved). Is there an advantage or disadvantage to going first, that is, to issuing the first security? Alternatively, we might be preoccupied by the fact that, although both agents benefit from the issuance of new securities, only the individual issuer pays the cost of establishing a new market. In this perspective it is interesting to measure the net gains from trade for each agent. These quantities are summarized in Table 8.10.

Table 8.10: The Net Gains From Trade: Expected Utility Levels and Net Trading Gains (gain to issuer marked *)

          No Trade    Trade Only Q           Trade Both Q and R
            EU        EU        ΔEU(i)       EU        ΔEU(ii)
Agent 1   4.8047    5.0206     0.2159      5.0986     0.0726*
Agent 2   4.8047    4.883      0.0783*     5.0986     0.2156
Total                          0.2942                 0.2882

(i) Difference in EU when trading Q only, relative to no trade.
(ii) Difference in EU when trading both Q and R, relative to trading Q only.

This computation tells us that, in our example, the issuer of a security gains less than the other party to the future trade. If agent 2 goes first and issues security Q, his net expected utility gain is 0.0783, which also represents the most he would be willing to pay his investment bank, in terms of period 0 consumption, to manage the sale for him. By analogy, the marginal benefit to agent 1 of then issuing security R is 0.0726. The reverse assignments would have occurred if agent 1 had gone first, due to the symmetry of the agent endowments.
That these quantities represent upper bounds on the possible fees comes from the fact that the period 0 utility of consumption is the level of consumption itself. The upshot of all this is that each investment bank will, out of a desire to maximize its fee potential, advise its client to issue his security second. No one will want to go first. Alternatively, if the effective cost of setting up the market for security Q is anywhere between 0.0783 and 0.288, there is a possibility of market failure, unless agent 2 finds a way to have agent 1 share in the cost of establishing the market. We speak of market failure because the social benefit of setting up the market would be positive (0.288 minus the cost itself) while the market might not go ahead if the private cost to agent 2 exceeds his own private benefit, measured at 0.0783 units of date 0 consumption.

Of course, it might also be the case that the cost exceeds the total benefit. This is another reason for the market not to exist and, in general, for markets to be incomplete. But in this case one would not speak of market failure. Whether the privately motivated decisions of individual agents lead to the socially optimal outcome (in this case, the socially optimal set of securities) is a fundamental question in financial economics. There is no guarantee that private incentives will suffice to create the socially optimal set of markets. We have identified a problem of sequencing (the issuer of a security may not be the greatest beneficiary of the creation of the market), and as a result there may be a waiting game with suboptimal results. There is also a problem linked with the sharing of the cost of setting up a market. The benefits of a new market are often widely spread among a large number of potential participants, and it may be difficult to find an appropriate mechanism to have them share the initial setup cost, for example, because of free-rider or coordination problems.
Note that in both these cases, as well as in the situation where the cost of establishing a market exceeds the total benefit to individual agents, we anticipate that technical innovations leading to decreases in the cost of establishing markets will help alleviate the problem and foster a convergence toward a more complete set of markets.

8.6 Conclusions

The asset pricing theory presented in this chapter is in some sense the father of all asset pricing relationships. It is fully general and constitutes an extremely valuable reference. Conceptually its usefulness is unmatched, and this justifies investing more in its associated apparatus. At the same time, it is one of the most abstract theories, and its usefulness in practice is impaired by the difficulty of identifying individual states of nature and by the fact that, even when a state (or a set of states) can be identified, its (their) realization cannot always be verified. As a result, it is difficult to write the appropriate conditional contracts. These problems go a long way toward explaining why we do not see Arrow-Debreu securities being traded, a fact that does not strengthen the immediate applicability of the theory. In addition, as already mentioned, the static setting of the Arrow-Debreu theory is unrealistic for most applications. For all these reasons we cannot stop here, and we will explore a set of alternative, sometimes closely related, avenues for pricing assets in the following chapters.

References

Arrow, K. (1951), "An Extension of the Basic Theorems of Classical Welfare Economics," in J. Neyman (ed.), Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley, 507-532.

Debreu, G. (1959), Theory of Value, John Wiley & Sons, New York.
Chapter 9: The Consumption Capital Asset Pricing Model (CCAPM)

9.1 Introduction

So far, our asset pricing models have been either one-period models, such as the CAPM, or multi-period but static, such as the Arrow-Debreu model. In the latter case, even if a large number of future periods is assumed, all decisions, including security trades, take place at date zero. It is in that sense that the Arrow-Debreu model is static. Reality is different, however. Assets are traded every period, as new information becomes available, and decisions are made sequentially, one period at a time, all the while keeping in mind that today's decisions impact tomorrow's opportunities. Our objective in this chapter is to capture these dynamic features and to price assets in such an environment.

Besides adding an important dimension of realism, another advantage of a dynamic setup is that it makes it possible to draw the link between the financial markets and the real side of the economy. Again, strictly speaking, this can be accomplished within an Arrow-Debreu economy. The main issues, however, require a richer dynamic context where real production decisions are not made once and for all at the beginning of time, but progressively, as time evolves. Building a model in which we can completely understand how real events impact the financial side of the economy, in the spirit of fundamental financial analysis, is beyond the scope of the present chapter. The model discussed here, however, opens up interesting possibilities in this regard, which current research is attempting to exploit. We will point out possible directions as we go along.

9.2 The Representative Agent Hypothesis and Its Notion of Equilibrium

9.2.1 An Infinitely-Lived Representative Agent

To accomplish these goals in a model of complete generality (in other words, with many different agents and firms), and in a way that asset prices can be tractably computed, is beyond the present capability of economic science.
As an alternative, we will make life simpler by postulating many identical, infinitely lived consumers. This allows us to examine the decisions of a representative, stand-in consumer and explore their implications for asset pricing. In particular, we will assume that agents act to maximize the expected present value of discounted utility of consumption over their entire, infinite, lifetimes:

$$\max \; E\left[\sum_{t=0}^{\infty} \delta^t U(\tilde{c}_t)\right],$$

where $\delta$ is the discount factor and $U(\cdot)$ the period utility function, with $U_1(\cdot) > 0$, $U_{11}(\cdot) < 0$. This construct is the natural generalization to the case of infinite lifetimes of the preferences considered in our earlier two-period example. Its use can be justified by the following considerations.

First, if we model the economy as ending at some terminal date T (as opposed to assuming an infinite horizon), then the agent's investment behavior will reflect this fact. In the last period of his life, in particular, he will stop saving, liquidate his portfolio, and consume its entire value. There is no real-world counterpart for this action as the real economy continues forever. Assuming an infinite horizon eliminates these terminal date complications.

Second, it can be shown, under fairly general conditions, that an infinitely lived agent setup is formally equivalent to one in which agents live only a finite number of periods themselves, provided they derive utility from the well-being of their descendants (a bequest motive). This argument is detailed in Barro (1974).

Restrictive as it may seem, the identical agents assumption can be justified by the fact that, in a competitive equilibrium with complete securities markets, there is an especially intuitive sense of a representative agent: one whose utility function is a weighted average of the utilities of the various agents in the economy. In Box 9.1 we detail the precise way in which one can construct such a representative individual, and we discuss some of the issues at stake.
Box 9.1 Constructing a Representative Agent

In order to illustrate the issue, let us return to the two-period (t = 0, 1) Arrow-Debreu economy considered earlier. In that economy, each agent k, k = 1, 2, ..., K, solves:

$$\max \; U^k(c_0^k) + \delta^k \sum_{\theta=1}^{N} \pi_\theta U^k(c_\theta^k)$$
$$\text{s.t.} \quad c_0^k + \sum_{\theta=1}^{N} q_\theta c_\theta^k \le e_0^k + \sum_{\theta=1}^{N} q_\theta e_\theta^k,$$

where the price of the period 0 endowment is normalized to 1, and the endowments of a typical agent k are described by the vector $(e_1^k, \ldots, e_N^k)^{\mathsf{T}}$.

In equilibrium, not only are the allocations optimal, but at the prevailing prices, supply equals demand in every market:

$$\sum_{k=1}^{K} c_0^k = \sum_{k=1}^{K} e_0^k, \quad \text{and} \quad \sum_{k=1}^{K} c_\theta^k = \sum_{k=1}^{K} e_\theta^k \text{ for every state } \theta.$$

We know this competitive equilibrium allocation is Pareto optimal: no one can be made better off without making someone else worse off. One important implication of this property for our problem is that there exists some set of weights $(\lambda^1, \ldots, \lambda^K)$, which in general will depend on the initial endowments, such that the solution to the following problem gives an allocation that is identical to the equilibrium allocation:

$$\max \; \sum_{k=1}^{K} \lambda^k \left[ U^k(c_0^k) + \delta^k \sum_{\theta=1}^{N} \pi_\theta U^k(c_\theta^k) \right]$$
$$\text{s.t.} \quad \sum_{k=1}^{K} c_0^k = \sum_{k=1}^{K} e_0^k, \qquad \sum_{k=1}^{K} c_\theta^k = \sum_{k=1}^{K} e_\theta^k \ \forall \theta,$$
$$\sum_{k=1}^{K} \lambda^k = 1; \quad \lambda^k > 0, \ \forall k.$$

This maximization is meant to represent the problem of a benevolent central planner attempting to allocate the aggregate resources of the economy so as to maximize the weighted sum of the utilities of the individual agents. We stated a similar problem in Chapter 8 in order to identify the conditions characterizing a Pareto optimal allocation of resources.
Here we see this problem as suggestive of the form the representative agent's preference ordering, defined over aggregate consumption, can take (the representative agent is denoted by the superscript A):

$$U^A(c_0^A, c_\theta^A) = U_0^A(c_0^A) + \sum_{\theta=1}^{N} \pi_\theta U^A(c_\theta^A), \quad \text{where}$$
$$U_0^A(c_0^A) = \max \sum_{k=1}^{K} \lambda^k U^k(c_0^k) \quad \text{with} \quad \sum_{k=1}^{K} c_0^k = \sum_{k=1}^{K} e_0^k \equiv c_0^A,$$
$$U^A(c_\theta^A) = \max \sum_{k=1}^{K} \delta^k \lambda^k U^k(c_\theta^k) \quad \text{with} \quad \sum_{k=1}^{K} c_\theta^k = \sum_{k=1}^{K} e_\theta^k \equiv c_\theta^A, \text{ for each state } \theta.$$

In this case, the aggregate utility function directly takes into account the distribution of consumption across agents. This setup generalizes to as many periods as we like and, with certain modifications, to an infinite horizon. It is an intuitive sense of a representative agent as one who constitutes a weighted average of all the economy's participants.

A conceptual problem with this discussion resides in the fact that, in general, the weights $\{\lambda^1, \lambda^2, \ldots, \lambda^K\}$ will depend on the initial endowments: loosely speaking, the agent with more wealth gets a bigger $\lambda$ weight. It can be shown, however, that the utility function, constructed as previously shown, will, in addition, be independent of the initial endowment distribution if two further conditions are satisfied:

1. the discount factor of every agent is the same (i.e., all agents' $\delta$'s are the same);
2. agents' period preferences take either of the two following forms:
$$U^k(c) = \frac{\gamma}{\gamma - 1}\left(\alpha^k + \frac{c}{\gamma}\right)^{1 - \frac{1}{\gamma}} \quad \text{or} \quad U^k(c) = -e^{-\alpha^k c}.$$

If these two conditions are satisfied, that is, by and large, if the agents' preferences can be represented by either a CRRA or a CARA utility function, then there exists a representative agent economy for which the equilibrium Arrow-Debreu prices are the same as they are for the K-agent economy, and for which

$$U^A(c) = g(c)\, H(\lambda^1, \ldots, \lambda^K), \quad \text{where } g_1(c) > 0, \; g_{11}(c) < 0.$$

In this case, the weights do not affect preferences because they appear in the form of a multiplicative scalar.
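This aggregation result can be illustrated numerically. The sketch below is ours, not from the text: it takes two CRRA agents with a common $\gamma = 2$ (so $U(c) = -1/c$) and hypothetical Pareto weights, and checks that the planner's value function factors as $g(C)\,H(\lambda)$, so that relative valuations across aggregate consumption levels do not depend on the weights.

```python
import numpy as np

# Illustrative sketch (hypothetical weights): two CRRA agents with a
# common gamma = 2, i.e. U(c) = -1/c.
gamma = 2.0

def planner_value(C, lam):
    """Value of max sum_k lam_k U(c_k) s.t. sum_k c_k = C (closed form).

    The FOCs lam_k * c_k^(-gamma) = mu imply c_k proportional to
    lam_k^(1/gamma), so each agent consumes a fixed share of C.
    """
    shares = lam ** (1.0 / gamma)
    c = shares / shares.sum() * C          # optimal allocation
    return np.sum(lam * c ** (1.0 - gamma) / (1.0 - gamma))

# U^A(C) = g(C) * H(lambda): the ratio of values across two aggregate
# consumption levels is the same whatever the weights.
for lam in (np.array([0.3, 0.7]), np.array([0.5, 0.5])):
    print(planner_value(2.0, lam) / planner_value(4.0, lam))  # 2.0 both times
```

Since $g(C) = C^{1-\gamma}/(1-\gamma)$, the ratio is $(4/2)^{\gamma - 1} = 2$ regardless of the weights, which is precisely the sense in which the weights enter only as a multiplicative scalar.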
Let us repeat that, even if the individual agents' preferences do not take either of the two forms previously listed, there will still be a representative agent whose preferences are the weighted average of the individual agents' preferences. Unlike the prior case, however, this ordering will then depend on the initial endowments [see Constantinides (1982)].

9.2.2 On the Concept of a "No-Trade" Equilibrium

In a representative agent economy we must, of necessity, use a somewhat specialized notion of equilibrium: a no-trade equilibrium. If, indeed, for a particular model specification, some security is in positive net supply, the equilibrium price will be the price at which the representative agent is willing to hold that amount, the total supply, of the security. In other specifications we will price securities that do not appear explicitly, securities that are said to be in zero net supply. The prototype of the latter is an IOU type of contract: in a one-agent economy, the total net supply of IOUs must, of course, be zero. In this case, if at some price the representative agent wants to supply (sell) the security, since there is no one to demand it, supply exceeds demand. Conversely, if at some price the representative agent wants to buy the security (and thus no one wants to supply it), demand exceeds supply. Financial markets are thus in equilibrium if, and only if, at the prevailing price, supply equals demand and both are simultaneously zero.

In all cases, the equilibrium price is that price at which the representative agent wishes to hold exactly the amount of the security present in the economy. Therefore, the essential question being asked is: what prices must securities assume so that the amount the representative agent must hold (for all markets to clear) exactly equals what he wants to hold? At these prices, further trade is not utility enhancing. In a more conventional multi-agent economy, an identical state of affairs is verified post-trade.
The representative agent class of models is not appropriate, of course, for the analysis of some issues in finance; for example, issues linked with the volume of trade cannot be studied since, in a representative agent model, trading volume is, by construction, equal to zero.

9.3 An Exchange (Endowment) Economy

9.3.1 The Model

This economy will be directly analogous to the Arrow-Debreu exchange economies considered earlier: production decisions are in the background and abstracted away. It is, however, an economy that admits recursive trading, resulting from investment decisions made over time, period after period (as opposed to being made once and for all at date 0).

There is one perfectly divisible share, which we can think of as representing the market portfolio of the CAPM (later we shall relax this assumption). Ownership of this share entitles the owner to all the economy's output (in this economy, all firms are publicly traded). Output is viewed as arising exogenously, and as being stochastically variable through time, although in a stationary fashion. This is the promised, although still remote, link with the real side of the economy. And we will indeed use macroeconomic data to calibrate the model in the forthcoming sections. At this point, we can think of the output process as being governed by a large-number-of-states version of the three-state probability transition matrix found in Table 9.1.

Table 9.1: Three-State Probability Transition Matrix

Output in period t (rows) against output in period t+1 (columns):

             Y^1      Y^2      Y^3
  Y^1       π_11     π_12     π_13
  Y^2       π_21     π_22     π_23     = T
  Y^3       π_31     π_32     π_33

where $\pi_{ij} = \text{Prob}(Y_{t+1} = Y^j \mid Y_t = Y^i)$ for any t. That is, there are a given number of output states, levels of output that can be achieved at any given date, and the probabilities of transiting from one output state to another are constant and represented by entries in the matrix T. The stationarity hypothesis embedded in this formulation may, at first sight, appear extraordinarily restrictive.
The output levels defining the states may, however, be normalized variables, for instance to allow for a constant rate of growth. Alternatively, the states could themselves be defined in terms of growth rates of output rather than output levels. See Appendix 9.1 for an application. If we adopt a continuous-state version of this perspective, the output process can be similarly described by a probability transition function

$$G(y' \mid y) = \text{Prob}(Y_{t+1} \le y' \mid Y_t = y).$$

We can imagine the security as representing ownership of a fruit tree, where the (perishable) output (the quantity of fruit produced by the tree, that is, the dividend) varies from year to year. This interpretation is often referred to as the Lucas fruit tree economy, in tribute to 1995 Nobel Prize winner R. E. Lucas Jr., who first developed the CCAPM in his 1978 article. The power of the approach, however, resides in the fact that any mechanism delivering a stochastic process on aggregate output, such as a full macroeconomic equilibrium model, can be grafted onto the CCAPM. This opens up the way to an in-depth analysis of the rich relationships between the real and the financial sides of an economy.

This will be a rational expectations economy. By this expression we mean that the representative agent's expectations will be on average correct, and in particular will exhibit no systematic bias. In effect we are assuming, in line with a very large literature (and with most of what we have done implicitly so far), that the representative agent knows both the general structure of the economy and the exact output distribution as summarized by the matrix T. One possible justification is that this economy has been functioning for a long enough time to allow the agent to learn the probability process governing output and to understand the environment in which he operates. Accumulating such knowledge is clearly in his own interest if he wishes to maximize his expected utility.
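The transition-matrix description of output can be made concrete with a short simulation. The sketch below uses the illustrative three-state matrix that reappears in Box 9.2; it draws a long output history and checks that the empirical state frequencies approach the stationary distribution (uniform here, since the matrix is doubly stochastic).

```python
import numpy as np

# Three-state output process as in Table 9.1; the numbers are the
# illustrative values used later in Box 9.2 (Table 9.2).
T = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

rng = np.random.default_rng(0)
state, n = 0, 100_000
counts = np.zeros(3)
for _ in range(n):
    state = rng.choice(3, p=T[state])  # next state drawn from row of T
    counts[state] += 1

empirical = counts / n
print(empirical)  # each entry close to 1/3
```

The stationary distribution solves $\pi T = \pi$; for a doubly stochastic T it is uniform, which the simulated frequencies confirm.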
The agent buys and sells securities (fractions of the single, perfectly divisible share) and consumes dividends. His security purchases solve:

$$\max_{\{z_{t+1}\}} \; E\left[\sum_{t=0}^{\infty} \delta^t U(\tilde{c}_t)\right]$$
$$\text{s.t.} \quad c_t + p_t z_{t+1} \le z_t Y_t + p_t z_t, \qquad z_t \le 1, \ \forall t,$$

where $p_t$ is the period t real price of the security in terms of consumption (the price of consumption is 1), and $z_t$ is the agent's beginning-of-period t holdings of the security. (In the notation of the previous chapter, $p_t = q^e$.) Holding a fraction $z_t$ of the security entitles the agent to the corresponding fraction of the distributed dividend, which, in an exchange economy without investment, equals total available output. The expectations operator applies across all possible values of Y feasible at each date t, with the probabilities provided by the matrix T.

Let us assume the representative agent's period utility function is strictly concave, with $\lim_{c_t \to 0} U_1(c_t) = +\infty$. This latter assumption ensures that it is never optimal for the agent to select a zero consumption level, and thus normally ensures an interior solution to the relevant maximization problem. The necessary and sufficient condition for the solution to this problem is then given by: for all t, $z_{t+1}$ solves

$$U_1(c_t)\, p_t = \delta E_t\left[U_1(\tilde{c}_{t+1})\left(\tilde{p}_{t+1} + \tilde{Y}_{t+1}\right)\right], \tag{9.1}$$

where $c_t = p_t z_t + z_t Y_t - p_t z_{t+1}$. Note that the expectations operator applies across possible output state levels; if we make explicit the functional dependence on the output state variables, Equation (9.1) can be written (assuming $Y^i$ is the current state):

$$U_1(c_t(Y^i))\, p_t(Y^i) = \delta \sum_j \pi_{ij}\, U_1(c_{t+1}(Y^j)) \left(p_{t+1}(Y^j) + Y^j\right).$$

In Equation (9.1), $U_1(c_t)p_t$ is the utility loss in period t associated with the purchase of an additional unit of the security, while $\delta U_1(c_{t+1})$ is the discounted marginal utility of an additional unit of consumption in period t+1 and $(p_{t+1} + Y_{t+1})$ is the extra consumption (income) in period t+1 from selling the additional unit of the security after collecting the dividend entitlement.
The RHS is thus the expected discounted gain in utility associated with buying the extra unit of the security. The agent is in equilibrium (utility maximizing) at the prevailing price $p_t$ if the loss in utility today, which he would incur by buying one more unit of the security, $U_1(c_t)p_t$, is exactly offset by (equals) the expected gain in utility tomorrow, $\delta E_t\left[U_1(\tilde{c}_{t+1})(\tilde{p}_{t+1} + \tilde{Y}_{t+1})\right]$, which the ownership of that additional security will provide. If this equality is not satisfied, the agent will try either to increase or to decrease his holdings of securities. (In equilibrium, however, this is not possible, and the price will have to adjust until the equality in Equation (9.1) is satisfied.)

For the entire economy to be in equilibrium, it must, therefore, be true that:

(i) $z_t = z_{t+1} = z_{t+2} = \ldots \equiv 1$; in other words, the representative agent owns the entire security;

(ii) $c_t = Y_t$; that is, ownership of the entire security entitles the agent to all the economy's output; and

(iii) $U_1(c_t)p_t = \delta E_t\left[U_1(\tilde{c}_{t+1})\left(\tilde{p}_{t+1} + \tilde{Y}_{t+1}\right)\right]$; that is, the agent's holdings of the security are optimal given the prevailing prices.

Substituting (ii) into (iii) informs us that the equilibrium price must satisfy

$$U_1(Y_t)\, p_t = \delta E_t\left[U_1(\tilde{Y}_{t+1})\left(\tilde{p}_{t+1} + \tilde{Y}_{t+1}\right)\right]. \tag{9.2}$$

If there were many firms in this economy, say H firms, with firm h producing the (exogenous) output $\tilde{Y}_{h,t}$, then the same equation would be satisfied for each firm's stock price, $p_{h,t}$; that is,

$$p_{h,t}\, U_1(c_t) = \delta E_t\left[U_1(\tilde{c}_{t+1})\left(\tilde{p}_{h,t+1} + \tilde{Y}_{h,t+1}\right)\right], \tag{9.3}$$

where $c_t = \sum_{h=1}^{H} Y_{h,t}$ in equilibrium.

Equations (9.2) and (9.3) are the fundamental equations of the consumption-based capital asset pricing model. The fact that the representative agent's consumption stream, via his MRS, is critical for asset pricing is true for all versions of this model, including ones with nontrivial production settings. More general versions of this model may not, however, display an identity between consumption and dividends.
This will be the case, for example, if there is wage income to the agent.

A recursive substitution of Equation (9.2) into itself yields

$$p_t = E_t\left[\sum_{\tau=1}^{\infty} \delta^\tau\, \frac{U_1(\tilde{Y}_{t+\tau})}{U_1(Y_t)}\, \tilde{Y}_{t+\tau}\right], \tag{9.4}$$

establishing the stock price as the sum of all expected discounted future dividends. Equation (9.4) resembles the standard discounting formula of elementary finance, but for the important observation that discounting takes place using the inter-temporal marginal rates of substitution defined on the consumption sequence of the representative agent. If the utility function displays risk neutrality and the marginal utility is constant ($U_{11} = 0$), Equation (9.4) reduces to

$$p_t = E_t\left[\sum_{\tau=1}^{\infty} \delta^\tau\, \tilde{Y}_{t+\tau}\right] = E_t\left[\sum_{\tau=1}^{\infty} \frac{\tilde{Y}_{t+\tau}}{(1 + r_f)^\tau}\right], \tag{9.5}$$

which states that the stock price is the sum of expected future dividends discounted at the (constant) risk-free rate. The intuitive link between the discount factor and the risk-free rate leading to the second equality in Equation (9.5) will be formally established in Equation (9.7). The difference between Equations (9.4) and (9.5) is the necessity, in a world of risk aversion, of discounting the flow of expected dividends at a rate higher than the risk-free rate, so as to include a risk premium. The question as to the appropriate risk premium constitutes the central issue in financial theory. Equation (9.4) proposes a definite, if not fully operational (due to the difficulty in measuring marginal rates of substitution), answer.

Box 9.2 Calculating the Equilibrium Price Function

Equation (9.2) implicitly defines the equilibrium price series. Can it be solved directly to produce the actual equilibrium prices $\{p(Y^j) : j = 1, 2, \ldots, N\}$? The answer is positive. First, we must specify parameter values and functional forms.
In particular, we need to select values for $\delta$ and for the various output levels $Y^j$, to specify the probability transition matrix T, and to choose the form of the representative agent's period utility function (a CRRA function of the form $U(c) = \frac{c^{1-\gamma}}{1-\gamma}$ is a natural choice). We may then solve for the $\{p(Y^j) : j = 1, 2, \ldots, N\}$ as the solution to a system of linear equations. Notice that Equation (9.2) can be written as the following system of linear equations (one for each of the N possible current states):

$$U_1(Y^1)\, p(Y^1) = \delta \sum_{j=1}^{N} \pi_{1j}\, U_1(Y^j)\, Y^j + \delta \sum_{j=1}^{N} \pi_{1j}\, U_1(Y^j)\, p(Y^j)$$
$$\vdots$$
$$U_1(Y^N)\, p(Y^N) = \delta \sum_{j=1}^{N} \pi_{Nj}\, U_1(Y^j)\, Y^j + \delta \sum_{j=1}^{N} \pi_{Nj}\, U_1(Y^j)\, p(Y^j),$$

with unknowns $p(Y^1), p(Y^2), \ldots, p(Y^N)$.

Footnote (to Equation (9.4)): That is, update Equation (9.2) with $p_{t+1}$ on the left-hand side and $p_{t+2}$ on the RHS, and substitute the resulting RHS (which now contains a term in $p_{t+2}$) into the original Equation (9.2); repeat for $p_{t+2}$, $p_{t+3}$, and so on, regroup terms, and extrapolate.

Notice that for each of these equations, the first term on the right-hand side is simply a number, while the second term is a linear combination of the $p(Y^j)$'s. Barring a very unusual output process, this system will have a solution: one price for each $Y^j$, that is, the equilibrium price function.

Let us illustrate. Suppose $U(c) = \ln(c)$, $\delta = .96$, and $(Y^1, Y^2, Y^3) = (1.5, 1, .5)$, an exaggeration of boom, normal, and depression times. The transition matrix is taken to be as found in Table 9.2.

Table 9.2: Transition Matrix

            1.5       1       .5
  1.5       .50      .25     .25
  1         .25      .50     .25
  .5        .25      .25     .50

The equilibrium conditions implicit in Equation (9.2) then reduce to:

$$Y^1: \quad \tfrac{2}{3}p(1.5) = .96 + .96\left[\tfrac{1}{3}p(1.5) + \tfrac{1}{4}p(1) + \tfrac{1}{2}p(.5)\right]$$
$$Y^2: \quad p(1) = .96 + .96\left[\tfrac{1}{6}p(1.5) + \tfrac{1}{2}p(1) + \tfrac{1}{2}p(.5)\right]$$
$$Y^3: \quad 2p(.5) = .96 + .96\left[\tfrac{1}{6}p(1.5) + \tfrac{1}{4}p(1) + p(.5)\right]$$

or,

(i) $0 = .96 - .347\,p(1.5) + .24\,p(1) + .48\,p(.5)$
(ii) $0 = .96 + .16\,p(1.5) - .52\,p(1) + .48\,p(.5)$
(iii) $0 = .96 + .16\,p(1.5) + .24\,p(1) - 1.04\,p(.5)$

(i)−(ii) yields: (iv) $p(1.5) = \frac{.76}{.507}\,p(1) = 1.5\,p(1)$;
(ii)−(iii) gives: (v) $p(.5) = \frac{.76}{1.52}\,p(1) = \frac{1}{2}\,p(1)$.

Substituting (iv) and (v) into Equation (i) to solve for p(1) yields $p(1) = 24$; $p(1.5) = 36$ and $p(.5) = 12$ follow.

9.3.2 Interpreting the Exchange Equilibrium

To bring about a closer correspondence with traditional asset pricing formulae, we must first relate the asset prices derived previously to rates of return. In particular, we will want to understand, in this model context, what determines the amount by which the risky asset's expected return exceeds that of a risk-free asset. This basic question is also the one for which the standard CAPM provides such a simple, elegant answer ($E\tilde{r}_j - r_f = \beta_j\left(E\tilde{r}_M - r_f\right)$). Define the period t to t+1 return for security j as

$$1 + r_{j,t+1} = \frac{p_{j,t+1} + Y_{j,t+1}}{p_{j,t}}.$$

Then Equation (9.3) may be rewritten as

$$1 = \delta E_t\left[\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)}\left(1 + \tilde{r}_{j,t+1}\right)\right]. \tag{9.6}$$

Let $q_t^b$ denote the price in period t of a one-period riskless discount bond in zero net supply, which pays 1 unit of consumption (income) in every state in the next period. By reasoning analogous to that presented previously,

$$q_t^b\, U_1(c_t) = \delta E_t\left[U_1(\tilde{c}_{t+1}) \cdot 1\right].$$

The price $q_t^b$ is the equilibrium price at which the agent desires to hold zero units of the security, and thus supply equals demand. This is so because if he were to buy one unit of this security at the price $q_t^b$, the loss in utility today would exactly offset the gain in expected utility tomorrow. The representative agent is, therefore, content to hold zero units of the security.
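The linear system of Box 9.2 can also be solved numerically. The following sketch (log utility, $\delta = .96$, the transition matrix of Table 9.2; the variable names are our own) recovers the equilibrium prices computed there:

```python
import numpy as np

delta = 0.96
Y = np.array([1.5, 1.0, 0.5])          # output states: boom, normal, depression
T = np.array([[0.50, 0.25, 0.25],      # transition matrix of Table 9.2
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
U1 = 1.0 / Y                           # marginal utility for U(c) = ln(c)

# Equation (9.2) in state i:
#   U1(Y_i) p_i = delta * sum_j pi_ij U1(Y_j) Y_j + delta * sum_j pi_ij U1(Y_j) p_j
# i.e. A p = b with A = diag(U1) - delta * (T scaled columnwise by U1):
A = np.diag(U1) - delta * T * U1[None, :]
b = delta * T @ (U1 * Y)
p = np.linalg.solve(A, b)
print(p)  # [36. 24. 12.], i.e. p(1.5) = 36, p(1) = 24, p(.5) = 12
```

With log utility each term $U_1(Y^j)Y^j$ equals 1, so the constant vector b is simply $\delta$ in every state, which is why the hand calculation in Box 9.2 starts each equation with .96.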
Since the risk-free rate over the period from date t to t+1, denoted $r_{f,t+1}$, is defined by $q_t^b\left(1 + r_{f,t+1}\right) = 1$, we have

$$\frac{1}{1 + r_{f,t+1}} = q_t^b = \delta E_t\left[\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)}\right], \tag{9.7}$$

which formally establishes the link between the discount rate and the risk-free rate of return we have used in Equation (9.5) under the risk neutrality hypothesis. Note that in the latter case ($U_{11} = 0$), Equation (9.7) implies that the risk-free rate must be a constant.

Now we will combine Equations (9.6) and (9.7). Since, for any two random variables $\tilde{x}, \tilde{y}$, $E(\tilde{x}\tilde{y}) = E(\tilde{x})E(\tilde{y}) + \text{cov}(\tilde{x}, \tilde{y})$, we can rewrite Equation (9.6) in the form

$$1 = \delta E_t\left[\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)}\right] E_t\left\{1 + \tilde{r}_{j,t+1}\right\} + \delta\,\text{cov}_t\!\left(\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)},\, \tilde{r}_{j,t+1}\right). \tag{9.8}$$

Let us denote $E_t\{1 + \tilde{r}_{j,t+1}\} = 1 + \bar{r}_{j,t+1}$. Then substituting Equation (9.7) into Equation (9.8) gives

$$1 = \frac{1 + \bar{r}_{j,t+1}}{1 + r_{f,t+1}} + \delta\,\text{cov}_t\!\left(\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)},\, \tilde{r}_{j,t+1}\right), \quad \text{or, rearranging,}$$
$$\frac{1 + \bar{r}_{j,t+1}}{1 + r_{f,t+1}} = 1 - \delta\,\text{cov}_t\!\left(\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)},\, \tilde{r}_{j,t+1}\right), \quad \text{or}$$
$$\bar{r}_{j,t+1} - r_{f,t+1} = -\delta\left(1 + r_{f,t+1}\right)\text{cov}_t\!\left(\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)},\, \tilde{r}_{j,t+1}\right). \tag{9.9}$$

Equation (9.9) is the central relationship of the consumption CAPM, and we must consider its implications. The LHS of Equation (9.9) is the risk premium on security j. Equation (9.9) tells us that the risk premium will be large when $\text{cov}_t\!\left(\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)}, \tilde{r}_{j,t+1}\right)$ is large and negative, that is, for those securities paying high returns when consumption is high (and thus when $U_1(c_{t+1})$ is low), and low returns when consumption is low (and $U_1(c_{t+1})$ is high). These securities are not very desirable for consumption risk reduction (consumption smoothing): they pay high returns when investors do not need them (consumption is high anyway) and low returns when they are most needed (consumption is low). Since they are not desirable, they have a low price and high expected returns relative to the risk-free security.
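Relation (9.9) can be verified within the Box 9.2 economy. Conditioning on the boom state, the sketch below (the setup is ours; the numbers are those of Box 9.2) computes the state-contingent equity returns and the risk-free rate from (9.7), and checks that the expected premium equals $-\delta(1 + r_f)\,\text{cov}_t$ of the marginal-utility ratio with the return:

```python
import numpy as np

delta = 0.96
Y = np.array([1.5, 1.0, 0.5])
T = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
p = np.array([36.0, 24.0, 12.0])     # equilibrium prices from Box 9.2
i = 0                                # condition on the boom state, Y = 1.5
pi = T[i]

r = (p + Y) / p[i] - 1.0             # state-contingent equity returns
m = (1.0 / Y) / (1.0 / Y[i])         # U1(c_{t+1}) / U1(c_t) under log utility
rf = 1.0 / (delta * (pi @ m)) - 1.0  # risk-free rate, equation (9.7)

lhs = pi @ r - rf                            # expected equity premium
cov = pi @ (m * r) - (pi @ m) * (pi @ r)     # cov_t(m, r) under pi
rhs = -delta * (1.0 + rf) * cov              # RHS of equation (9.9)
print(lhs, rhs)                              # both approximately 0.1402
```

Note that the risk-free rate is negative when conditioning on the boom state: consumption is expected to fall, so marginal utility is expected to rise, making a sure claim on next-period consumption expensive.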
The CAPM tells us that a security is relatively undesirable, and thus commands a high return, when it covaries positively with the market portfolio, that is, when its return is high precisely in those circumstances when the return on the market portfolio is also high, and conversely. The consumption CAPM is not in contradiction with this basic idea, but it adds a further degree of precision: from the viewpoint of smoothing consumption and risk diversification, an asset is desirable if it has a high return when consumption is low, and vice versa.

When the portfolio and asset pricing problem is placed in its proper multi-period context, the notion of utility of end-of-period wealth (our paradigm of Chapters 5 to 7) is no longer relevant, and we have to go back to the more fundamental formulation in terms of the utility derived from consumption, $U(c_t)$. But then it becomes clear that the possibility of expressing the objective as maximizing the utility of end-of-period wealth in the two-date setting has, in some sense, lured us down a false trail: in a fundamental sense, the key to an asset's value is its covariation with the marginal utility of consumption, not with the marginal utility of wealth.

Equation (9.9) has the unappealing feature that the risk premium is defined, in part, in terms of the marginal utility of consumption, which is not observable. To eliminate this feature, we shall make the following approximation. Let $U(c_t) = a c_t - \frac{b}{2} c_t^2$ (i.e., a quadratic utility function, or a truncated Taylor series expansion of a general $U(\cdot)$), where $a > 0$, $b > 0$, and the usual restrictions apply on the range of consumption. It follows that $U_1(c_t) = a - b c_t$; substituting this into Equation (9.9) gives

$$\bar{r}_{j,t+1} - r_{f,t+1} = -\delta\left(1 + r_{f,t+1}\right)\text{cov}_t\!\left(\tilde{r}_{j,t+1},\, \frac{a - b\tilde{c}_{t+1}}{a - b c_t}\right)$$
$$= -\delta\left(1 + r_{f,t+1}\right)\frac{(-b)}{a - b c_t}\,\text{cov}_t\!\left(\tilde{r}_{j,t+1},\, \tilde{c}_{t+1}\right), \quad \text{or}$$
$$\bar{r}_{j,t+1} - r_{f,t+1} = \frac{\delta b\left(1 + r_{f,t+1}\right)}{a - b c_t}\,\text{cov}_t\!\left(\tilde{r}_{j,t+1},\, \tilde{c}_{t+1}\right). \tag{9.10}$$
Equation (9.10) makes this point easier to grasp: since the term multiplying the covariance is necessarily positive, if next-period consumption covaries positively and strongly with $\tilde{r}_{j,t+1}$, then the risk premium on j will be high.

9.3.3 The Formal Consumption CAPM

As a final step in our construction, let us denote the portfolio most highly correlated with consumption by the index j = c, and its expected rate of return for the period from t to t+1 by $\bar{r}_{c,t+1}$. Equation (9.10) applies to this security as well, so we have

$$\bar{r}_{c,t+1} - r_{f,t+1} = \frac{\delta b\left(1 + r_{f,t+1}\right)}{a - b c_t}\,\text{cov}_t\!\left(\tilde{r}_{c,t+1},\, \tilde{c}_{t+1}\right). \tag{9.11}$$

Dividing Equation (9.10) by Equation (9.11), and thus eliminating the term $\frac{\delta\left(1 + r_{f,t+1}\right)b}{a - b c_t}$, one obtains

$$\frac{\bar{r}_{j,t+1} - r_{f,t+1}}{\bar{r}_{c,t+1} - r_{f,t+1}} = \frac{\text{cov}_t(\tilde{r}_{j,t+1},\, \tilde{c}_{t+1})}{\text{cov}_t(\tilde{r}_{c,t+1},\, \tilde{c}_{t+1})} = \frac{\text{cov}_t(\tilde{r}_{j,t+1},\, \tilde{c}_{t+1})\,/\,\text{var}(\tilde{c}_{t+1})}{\text{cov}_t(\tilde{r}_{c,t+1},\, \tilde{c}_{t+1})\,/\,\text{var}(\tilde{c}_{t+1})} = \frac{\beta_{j,ct}}{\beta_{c,ct}}, \tag{9.12}$$

for $\beta_{j,ct} = \frac{\text{cov}_t(\tilde{r}_{j,t+1},\, \tilde{c}_{t+1})}{\text{var}(\tilde{c}_{t+1})}$, the consumption-$\beta$ of asset j, and $\beta_{c,ct} = \frac{\text{cov}_t(\tilde{r}_{c,t+1},\, \tilde{c}_{t+1})}{\text{var}(\tilde{c}_{t+1})}$, the consumption-$\beta$ of portfolio c. This equation defines the consumption CAPM. If it is possible to construct a portfolio c such that $\beta_{c,ct} = 1$, one gets the direct analogue to the CAPM, with $\bar{r}_{c,t+1}$ replacing the expected return on the market and $\beta_{j,ct}$ the relevant beta:

$$\bar{r}_{j,t+1} - r_{f,t+1} = \beta_{j,ct}\left(\bar{r}_{c,t+1} - r_{f,t+1}\right). \tag{9.13}$$

9.4 Pricing Arrow-Debreu State-Contingent Claims with the CCAPM

Chapter 8 dwelled on the notion of an Arrow-Debreu state claim as the basic building block for all asset pricing, and it is interesting to understand what form these securities and their prices assume in the consumption CAPM setting. Our treatment will be very general and will accommodate more complex settings where the state is characterized by more than one variable. Whatever model we happen to use, let $s_t$ denote the state in period t.
In the prior sections $s_t$ coincided with the period t output, $Y_t$. Given that we are in state s in period t, what is the price of an Arrow-Debreu security that pays 1 unit of consumption if and only if state s' occurs in period t+1? We consider two cases.

1. Let the number of possible states be finite; denote the Arrow-Debreu price by $q(s_{t+1} = s'; s_t = s)$, with the prime referring to the value taken by the random state variable in the next period. Since this security is assumed to be in zero net supply (and thus its introduction does not alter the structure of the economy described previously), it must satisfy, in equilibrium,

$$U_1(c(s))\, q(s_{t+1} = s'; s_t = s) = \delta\, U_1(c(s'))\,\text{prob}(s_{t+1} = s'; s_t = s), \quad \text{or}$$
$$q(s_{t+1} = s'; s_t = s) = \delta\, \frac{U_1(c(s'))}{U_1(c(s))}\,\text{prob}(s_{t+1} = s'; s_t = s).$$

As a consequence of our maintained stationarity hypothesis, the same price obtains whenever the economy is in state s and the claim pays 1 unit of consumption in the next period if and only if state s' occurs, whatever the current time period t. We may thus drop the time subscript and write

$$q(s'; s) = \delta\, \frac{U_1(c(s'))}{U_1(c(s))}\,\text{prob}(s'; s) = \delta\, \frac{U_1(c(s'))}{U_1(c(s))}\,\pi_{ss'},$$

in the notation of our transition matrix representation. This is Equation (8.1) of our previous chapter.

2. For a continuum of possible states, the analogous expression is

$$q(s'; s) = \delta\, \frac{U_1(c(s'))}{U_1(c(s))}\, f(s'; s),$$

where $f(s'; s)$ is the conditional density function on $s_{t+1}$ given s, evaluated at s'.

Note that under risk neutrality, we have a reconfirmation of our earlier identification of Arrow-Debreu prices as being proportional to the relevant state probabilities, with the proportionality factor corresponding to the time discount coefficient:

$$q(s'; s) = \delta f(s'; s) \quad \text{or} \quad q(s'; s) = \delta \pi_{ss'}.$$

These prices are for one-period state-contingent claims; what about N-period claims? They would be priced exactly analogously:

$$q^N(s_{t+N} = s'; s_t = s) = \delta^N\, \frac{U_1(c(s'))}{U_1(c(s))}\,\text{prob}(s_{t+N} = s'; s_t = s).$$
The price of an N-period risk-free discount bond, $q_t^{bN}$, given state s, is thus given by

$$q_t^{bN}(s) = \delta^N \sum_{s'} \frac{U_1(c(s'))}{U_1(c(s))}\,\text{prob}(s_{t+N} = s'; s_t = s), \tag{9.14}$$

or, in the continuum-of-states notation,

$$q_t^{bN}(s) = \delta^N \int_{s'} \frac{U_1(c(s'))}{U_1(c(s))}\, f_N(s'; s)\, ds' = E_s\!\left[\delta^N\, \frac{U_1(\tilde{c}_{t+N})}{U_1(c(s))}\right],$$

where the expectation is taken over all possible states s', conditional on the current state being s.

Now let us review Equation (9.4) in the light of the expressions we have just derived:

$$p_t = E_t\left[\sum_{\tau=1}^{\infty} \delta^\tau\, \frac{U_1(c_{t+\tau})}{U_1(c_t)}\, Y_{t+\tau}\right]$$
$$= \sum_{\tau=1}^{\infty} \sum_{s'} \delta^\tau\, \frac{U_1(c_{t+\tau}(s'))}{U_1(c_t)}\, Y_{t+\tau}(s')\,\text{prob}(s_{t+\tau} = s'; s_t = s)$$
$$= \sum_{\tau=1}^{\infty} \sum_{s'} q^\tau(s'; s)\, Y_{t+\tau}(s'). \tag{9.15}$$

What this development tells us is that taking the appropriately discounted (at the inter-temporal MRS) sum of expected future dividends is simply valuing the stream of future dividends at the appropriate Arrow-Debreu prices! The fact that there are no restrictions in the present context on extracting the prices of Arrow-Debreu contingent claims is indicative of the fact that this economy is one of complete markets.

Applying the same substitution to Equation (9.4) as employed to obtain Equation (9.8) yields:

$$p_t = \sum_{\tau=1}^{\infty} \delta^\tau \left\{ E_t\!\left[\frac{U_1(\tilde{c}_{t+\tau})}{U_1(c_t)}\right] E_t\!\left[\tilde{Y}_{t+\tau}\right] + \text{cov}_t\!\left(\frac{U_1(\tilde{c}_{t+\tau})}{U_1(c_t)},\, \tilde{Y}_{t+\tau}\right) \right\}$$
$$= \sum_{\tau=1}^{\infty} \delta^\tau\, E_t\!\left[\frac{U_1(\tilde{c}_{t+\tau})}{U_1(c_t)}\right] E_t\!\left[\tilde{Y}_{t+\tau}\right] \left\{ 1 + \frac{\text{cov}_t\!\left(\frac{U_1(\tilde{c}_{t+\tau})}{U_1(c_t)},\, \tilde{Y}_{t+\tau}\right)}{E_t\!\left[\frac{U_1(\tilde{c}_{t+\tau})}{U_1(c_t)}\right] E_t\!\left[\tilde{Y}_{t+\tau}\right]} \right\},$$

where the expectations operator applies across all possible values of the state output variable, with probabilities given on the line corresponding to the current state $s_t$ in the matrix T raised to the relevant power (the number of periods to the date of availability of the relevant cash flow).
Footnote: The corresponding state probabilities are given by the Nth power of the matrix T.

Footnote: This result, which is not trivial (we have an infinity of states of nature and only one asset, the equity), is the outcome of the twin assumptions of rational expectations and agents' homogeneity.

Using the expression for the price of a risk-free discount bond of $\tau$ periods to maturity derived earlier, and the fact that $\left(1 + r_{f,t+\tau}\right)^\tau q_t^{b\tau} = 1$, we can rewrite this expression as:

$$p_t = \sum_{\tau=1}^{\infty} \frac{E_t\!\left[\tilde{Y}_{t+\tau}\right]\left\{1 + \dfrac{\text{cov}_t\!\left(U_1(\tilde{c}_{t+\tau}),\, \tilde{Y}_{t+\tau}\right)}{E_t\!\left[U_1(\tilde{c}_{t+\tau})\right] E_t\!\left[\tilde{Y}_{t+\tau}\right]}\right\}}{\left(1 + r_{f,t+\tau}\right)^\tau}. \tag{9.16}$$

The quantity being discounted (at the risk-free rate applicable to the relevant period) in the present value term is the equilibrium certainty equivalent of the real cash flow generated by the asset. This is the analogue for the CCAPM of the CAPM expression derived in Section 7.3. If the cash flows exhibit no stochastic variation (i.e., they are risk free), then Equation (9.16) reduces to

$$p_t = \sum_{\tau=1}^{\infty} \frac{Y_{t+\tau}}{\left(1 + r_{f,t+\tau}\right)^\tau}.$$

This relationship will be derived again in Chapter 10, where we discount risk-free cash flows at the term structure of interest rates. If, on the other hand, the cash flows are risky, yet investors are risk neutral (constant marginal utility of consumption), Equation (9.16) becomes

$$p_t = \sum_{\tau=1}^{\infty} \frac{E_t\!\left[\tilde{Y}_{t+\tau}\right]}{\left(1 + r_{f,t+\tau}\right)^\tau}, \tag{9.17}$$

which is identical to Equation (9.5) once we recall, from Equation (9.7), that the risk-free rate must be constant under risk neutrality.

Equation (9.16) is fully in harmony with the intuition of Section 9.3: if the representative agent's consumption is highly positively correlated with the security's real cash flows, the certainty equivalent values of these cash flows will be smaller than their expected values (viz., $\text{cov}(U_1(\tilde{c}_{t+\tau}), \tilde{Y}_{t+\tau}) < 0$). This is so because such a security is not very useful for hedging the agent's future consumption risk. As a result, it will have a low price and a high expected return.
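The Arrow-Debreu machinery of Section 9.4 can be checked against the Box 9.2 economy. The sketch below (our own construction) assembles the one-period state-claim prices $q(s'; s) = \delta\,[U_1(c(s'))/U_1(c(s))]\,\pi_{ss'}$ into a matrix Q; $\tau$-period claim prices are then powers of Q, and summing the valuation (9.15) over all horizons reproduces the equity prices computed earlier.

```python
import numpy as np

delta = 0.96
Y = np.array([1.5, 1.0, 0.5])
T = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
U1 = 1.0 / Y  # log utility, as in Box 9.2

# One-period Arrow-Debreu price matrix: Q[i, j] = q(s_j ; s_i)
Q = delta * (U1[None, :] / U1[:, None]) * T

# Row sums are the one-period riskless bond prices q_t^b in each state.
qb = Q.sum(axis=1)

# By (9.15), p = sum_{tau >= 1} Q^tau Y = (I - Q)^{-1} Q Y;
# the geometric series converges because Q is similar to delta * T.
p = np.linalg.solve(np.eye(3) - Q, Q @ Y)
print(p)   # [36. 24. 12.], the Box 9.2 equilibrium prices
print(qb)  # state-contingent bond prices; 1.56 in the boom state
```

The bond price exceeding 1 in the boom state corresponds to the negative conditional risk-free rate noted earlier: state claims on low-output states are expensive because marginal utility is high there.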
In fact, its price will be less than what it would be in an economy of risk-neutral agents [Equation (9.17)]. The opposite is true if the security's cash flows are negatively correlated with the agent's consumption.

9.5 Testing the Consumption CAPM: The Equity Premium Puzzle

In the rest of this chapter we discuss the empirical validity of the CCAPM. We do this here (and not with the CAPM and other pricing models seen so far) because a set of simple and robust empirical observations has been put forward that falsifies this model in an unusually strong way. This forces us to question its underlying hypotheses and, a fortiori, those underlying some of the less-sophisticated models seen before. Thus, in this instance, recourse to sophisticated econometrics is superfluous for drawing significant lessons about our approach to modeling financial markets.

A few key empirical observations regarding financial returns in U.S. markets are summarized in Table 9.3, which shows that over a long period of observation the average ex-post return on a diversified portfolio of U.S. stocks (the market portfolio, as approximated in the U.S. by the S&P 500) has been close to 7 percent (in real terms, net of inflation), while the return on one-year T-bills (taken to represent the return on the risk-free asset) has averaged less than 1 percent. These twin observations imply an equity risk premium of 6.2 percent. This observation is robust in the sense that it has held in the U.S. for a very long period, and in several other important countries as well. Its meaning is not totally undisputed, however. Goetzmann and Jorion (1999), in particular, argue that the high return premium obtained for holding U.S. equities is the exception rather than the rule.^8 Here we will take the 6 percent equity premium at face value, as has the huge literature that followed the uncovering of the equity premium puzzle by Mehra and Prescott (1985).
The puzzle is this: Mehra and Prescott argue that the CCAPM is completely unable, once reasonable parameter values are inserted in the model, to replicate such a high observed equity premium.

Table 9.3: Properties of U.S. Asset Returns

                 (a)      (b)
    r            6.98     16.54
    r_f          0.80      5.67
    r - r_f      6.18     16.67

(a) Annualized mean values in percent; (b) annualized standard deviations in percent. Source: Data from Mehra and Prescott (1985).

Let us illustrate their reasoning. According to the consumption CAPM, the only factors determining the characteristics of security returns are the representative agent's utility function, his subjective discount factor, and the process on consumption (which equals output or dividends in the exchange economy equilibrium). Consider the utility function first. It is natural in light of Chapter 4 to assume the agent's period utility function displays CRRA; thus let us set

    U(c) = \frac{c^{1-\gamma}}{1-\gamma}.

Empirical studies associated with this model have placed \gamma in the range of (1, 2). A convenient consequence of this utility specification is that the inter-temporal marginal rate of substitution can be written as

    \frac{U_1(c_{t+1})}{U_1(c_t)} = \left(\frac{c_{t+1}}{c_t}\right)^{-\gamma}.     (9.18)

The second major ingredient is the consumption process.

Footnote 8: Using shorter, mostly postwar, data, premia close to or even higher than the U.S. equity premium are obtained for France, Germany, the Netherlands, Sweden, Switzerland, and the United Kingdom [see, e.g., Campbell (1998)]. Goetzmann and Jorion, however, argue that such data samples do not correct for crashes and periods of market interruption, often associated with WWII, and thus are not immune from survivorship bias. To correct for such a bias, they assemble long data series for all markets that existed during the twentieth century. They find that the United States has had "by far the highest uninterrupted real rate of appreciation of all countries, at about 5 percent annually. For other countries, the median appreciation rate is about 1.5 percent."
In our version of the model, consumption is a stationary process: It does not grow through time. In reality, however, consumption does grow through time. In a growing economy, the analogous notion to the variability of consumption is variability in the growth rate of consumption. Let x_{t+1} = c_{t+1}/c_t denote per capita consumption growth, and assume, for illustration, that x_t is independently and identically lognormally distributed through time. For the period 1889 through 1978, U.S. aggregate consumption grew at an average rate of 1.83 percent annually with a standard deviation of 3.57 percent, and a slightly negative autocorrelation (-.14) [cf. Mehra and Prescott (1985)].

The remaining item is the agent's subjective discount factor \delta: What value should it assume? Time impatience requires, of course, that \delta < 1, but this is insufficiently precise. One logical route to its estimation is as follows: Roughly speaking, the equity in the CCAPM economy represents a claim to the aggregate income from the underlying economy's entire capital stock. We have just seen that, in the United States, equity claims to private capital flows average a 7 percent annual real return, while debt claims average 1 percent.^9 Furthermore, economy-wide debt-to-equity ratios are not very different from 1. These facts together suggest an overall average real annual return to capital of about 4 percent. If there were no uncertainty in the model, and if the constant growth rate of consumption were to equal its long-run historical average (1.0183), the asset pricing Equation (9.6) would reduce to

    1 = \delta E_t\left[\left(\frac{\tilde{c}_{t+1}}{c_t}\right)^{-\gamma} R_{t+1}\right] = \delta (\bar{x})^{-\gamma} \bar{R},     (9.19)

where R_{t+1} is the gross rate of return on capital and the upper bars denote historical averages.^10 For \gamma = 1, \bar{x} = 1.0183, and \bar{R} = 1.04, we can solve for the implied \delta to obtain \delta \approx 0.97. Since we have used an annual estimate for \bar{x}, the resulting \delta must be viewed as an annual subjective discount factor; on a quarterly basis it corresponds to \delta \approx 0.99. If, on the other hand, we assume \gamma = 2, Equation (9.19) solves for \delta = .99 on an annual basis, yielding a quarterly \delta even closer to 1. This reasoning demonstrates that assuming higher rates of risk aversion would be incompatible with maintaining the hypothesis of a time discount factor less than 1. While technically, in the case of positive consumption growth, we could entertain the possibility of a negative rate of time preference, and thus of a discount factor larger than 1, we rule it out on grounds of plausibility.

At the root of this difficulty is the low return on the risk-free asset (1 percent), which will haunt us in other ways. As we know, highly risk-averse individuals want to smooth consumption over time, meaning they want to transfer consumption from good times to bad times. When consumption is growing predictably, the good times lie in the future. Agents want to borrow now against their future income. In a representative agent model, this is hard to reconcile with a low rate on borrowing: everyone is on the same side of the market, a fact that inevitably forces a higher rate.

Footnote 9: Strictly speaking, these are the returns to publicly traded debt and equity claims. If private capital earns substantially different returns, however, capital is being inefficiently allocated; we assume this is not the case.
Footnote 10: Time averages and expected values should coincide in a stationary model, provided the time series is of sufficient length.
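The calibration of \delta from Equation (9.19) reduces to one line of arithmetic, \delta = \bar{x}^{\gamma}/\bar{R}. A quick check with the historical averages quoted above:

```python
# Back-of-the-envelope check of Equation (9.19): 1 = delta * xbar**(-gamma) * Rbar,
# so delta = xbar**gamma / Rbar. xbar and Rbar are the averages quoted in the text.
xbar, Rbar = 1.0183, 1.04

def implied_delta(gamma):
    return xbar ** gamma / Rbar

delta_annual = implied_delta(1)           # about 0.98, close to the 0.97 quoted
delta_quarterly = delta_annual ** 0.25    # about 0.99 on a quarterly basis
print(delta_annual, delta_quarterly, implied_delta(2))
```

As the text notes, raising \gamma pushes the implied \delta toward (and, beyond \gamma \approx 2, above) 1.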
This problem calls for an independent explanation of the abnormally low average risk-free rate [e.g., in terms of the liquidity advantage of short-term government debt, as in Bansal and Coleman (1996)] or the acceptance of the possibility of a negative rate of time preference, so that future consumption is given more weight than present consumption. We will not follow either of these routes here, but rather will, in the course of the present exercise, limit the coefficient of relative risk aversion to a maximal value of 2.

With these added assumptions we can manipulate the fundamental asset pricing Equation (9.2) to yield two equations that can be used indirectly to test the model. The key step in the reasoning is to demonstrate that, in the context of these assumptions, the equity price formula takes the form p_t = vY_t, where v is a constant coefficient. That is, the stock price at date t is proportional to the dividend paid at date t.^11 To confirm this statement, we use a standard trick consisting of guessing that this is the form taken by the equilibrium pricing function and then verifying that this guess is indeed borne out by the structure of the model. Under the p_t = vY_t hypothesis, Equation (9.1) becomes:

    vY_t = \delta E_t\left[\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)}\left(v\tilde{Y}_{t+1} + \tilde{Y}_{t+1}\right)\right].

Using Equation (9.18) and dropping the conditional expectations operator, since x is independently and identically distributed through time (its mean is independent of time), this equation can be rewritten as

    v = \delta E\left[(v+1)\,\frac{\tilde{Y}_{t+1}}{Y_t}\,\tilde{x}_{t+1}^{-\gamma}\right].

The market clearing condition implies that Y_{t+1}/Y_t = x_{t+1}, thus

    v = \delta E\left[(v+1)\,\tilde{x}^{1-\gamma}\right], \quad \text{and hence} \quad v = \frac{\delta E\{\tilde{x}^{1-\gamma}\}}{1 - \delta E\{\tilde{x}^{1-\gamma}\}}.

This is indeed a constant and our initial guess is thus confirmed! Taking advantage of the validated pricing hypothesis, the equity return can be written as:

    R_{t+1} \equiv 1 + r_{t+1} = \frac{p_{t+1} + Y_{t+1}}{p_t} = \frac{v+1}{v}\,\frac{Y_{t+1}}{Y_t} = \frac{v+1}{v}\, x_{t+1}.

Taking expectations we obtain:

    E_t(\tilde{R}_{t+1}) = E(\tilde{R}_{t+1}) = \frac{v+1}{v}\, E(\tilde{x}) = \frac{E(\tilde{x})}{\delta E\{\tilde{x}^{1-\gamma}\}}.

The risk-free rate is [Equation (9.7)]:

    R_{f,t+1} \equiv \frac{1}{q_t^b} = \left(\delta E_t\left[\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)}\right]\right)^{-1} = \frac{1}{\delta}\,\frac{1}{E\{\tilde{x}^{-\gamma}\}},     (9.20)

which is seen to be constant under our current hypotheses. Taking advantage of the lognormality hypothesis, the ratio of the two preceding equations can be expressed as (see Appendix 9.2 for details)

    \frac{E(\tilde{R}_{t+1})}{R_f} = \frac{E\{\tilde{x}\}\, E\{\tilde{x}^{-\gamma}\}}{E\{\tilde{x}^{1-\gamma}\}} = \exp(\gamma\sigma_x^2),     (9.21)

where \sigma_x^2 is the variance of \ln x. Taking logs, we finally obtain:

    \ln(E\tilde{R}) - \ln(R_f) = \gamma\sigma_x^2.     (9.22)

Now we are in a position to confront the model with the data. Let us start with Equation (9.22). Feeding in the return characteristics of the U.S. economy and solving for \gamma, we obtain (see Appendix 9.2 for the computation of \sigma_x^2)

    \gamma = \frac{\ln(E\tilde{R}) - \ln(E\tilde{R}_f)}{\sigma_x^2} \approx \frac{1.0698 - 1.008}{.00123} = 50.24.

Alternatively, if we assume \gamma = 2 and multiply by \sigma_x^2 as per Equation (9.22), one obtains an equity premium of

    \ln(E\tilde{R}) - \ln(E\tilde{R}_f) \cong E\tilde{R} - E\tilde{R}_f = 2(.00123) \cong .002.     (9.23)

In either case, this reasoning identifies a major discrepancy between model prediction and reality. The observed equity premium can only be explained by assuming an extremely high coefficient of relative risk aversion (\approx 50), one that is completely at variance with independent estimates. An agent with risk aversion of this level would be too fearful to take a bath (many accidents involve falling in a bathtub!) or to cross the street. On the other hand, insisting on a more reasonable coefficient of risk aversion of 2 leads to predicting a minuscule premium of 0.2 percent, far below the 6.2 percent that has been observed historically over long periods.

Similarly, it is shown in Appendix 9.2 that E[\tilde{x}_t^{-\gamma}] = .97 for \gamma = 2; Equation (9.20) and the observed value for R_f (1.008) then imply that \delta should be larger than 1 (1.02).

Footnote 11: Note that this property holds true as well for the example developed in Box 9.2, as Equations (iv) and (v) attest.
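The arithmetic of Equations (9.22) and (9.23) can be reproduced directly from the quoted moments; note that the 50.24 figure uses the approximation \ln(E\tilde{R}) - \ln(R_f) \cong E\tilde{R} - R_f:

```python
import math

# Equity premium arithmetic of Equation (9.22), using the moments quoted in the
# text: E(R) = 1.0698, Rf = 1.008, and sigma_x^2 = 0.00123 (from Appendix 9.2).
ER, Rf, var_lnx = 1.0698, 1.008, 0.00123

# gamma implied by ln(ER) - ln(Rf) = gamma * sigma_x^2:
gamma_exact = (math.log(ER) - math.log(Rf)) / var_lnx     # about 48
# The 50.24 in the text uses the approximation ln(ER) - ln(Rf) ~ ER - Rf:
gamma_approx = (ER - Rf) / var_lnx                        # about 50.24

# Conversely, the premium the model predicts for gamma = 2:
premium_gamma2 = 2 * var_lnx                              # about 0.25 percent
print(gamma_exact, gamma_approx, premium_gamma2)
```

Either way the conclusion is the same: a risk aversion near 50 is needed to match the data, while \gamma = 2 delivers a premium of roughly a quarter of a percent.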
This problem was to be anticipated from our discussion of the calibration of \delta, which was based on reasoning similar to that underlying Equation (9.20). Here the problem is compounded by the fact that we are using an even lower risk-free rate (.8 percent) rather than the steady-state rate of return on capital of 4 percent used in the prior reasoning. In the present context, this difficulty in calibrating \delta or, equivalently, in explaining the low rate of return on the risk-free asset has been dubbed the risk-free rate puzzle by Weil (1989). As said previously, we read this result as calling for a specific explanation of the observed low return on the risk-free asset, one that the CCAPM is not designed to provide.

9.6 Testing the Consumption CAPM: Hansen-Jagannathan Bounds

Another, parallel perspective on the puzzle is provided by the Hansen-Jagannathan (1991) bound. The idea is very similar to our prior test and the end result is the same. The underlying reasoning, however, postpones as long as possible making specific modeling assumptions. It is thus more general than a test of a specific version of the CCAPM. The bound proposed by Hansen and Jagannathan potentially applies to other asset pricing formulations. It similarly leads to a falsification of the standard CCAPM.

The reasoning goes as follows: For all homogeneous agent economies, the fundamental equilibrium asset pricing Equation (9.2) can be expressed as

    p(s_t) = E_t[m_{t+1}(\tilde{s}_{t+1}) X_{t+1}(\tilde{s}_{t+1}); s_t],     (9.24)

where s_t is the state today (it may be today's output in the context of a simple exchange economy, or it may be something more elaborate, as in the case of a production economy), X_{t+1}(\tilde{s}_{t+1}) is the total return in the next period (e.g., in the case of an exchange economy this equals \tilde{p}_{t+1} + \tilde{Y}_{t+1}), and m_{t+1}(\tilde{s}_{t+1}) is the equilibrium pricing kernel, also known as the stochastic discount factor:

    m_{t+1}(\tilde{s}_{t+1}) = \frac{\delta U_1(c_{t+1}(\tilde{s}_{t+1}))}{U_1(c_t)}.

As before, U_1(\cdot) is the marginal utility of the representative agent and c_t is his equilibrium consumption. Equation (9.24) is thus the general statement that the price of an asset today must equal the expectation of its total payout tomorrow multiplied by the appropriate pricing kernel. For notational simplicity, let us suppress the state dependence, leaving it as understood, and write Equation (9.24) as

    p_t = E_t[\tilde{m}_{t+1}\tilde{X}_{t+1}].     (9.25)

This is equivalent to

    1 = E_t[\tilde{m}_{t+1}\tilde{R}_{t+1}],

where \tilde{R}_{t+1} is the gross return on ownership of the asset. Since Equation (9.25) holds for each state s_t, it also holds unconditionally; we thus can also write 1 = E[\tilde{m}\tilde{R}], where E denotes the unconditional expectation. For any two assets i and j (to be viewed shortly as the return on the market portfolio and the risk-free return, respectively) it must, therefore, be the case that

    E[\tilde{m}(\tilde{R}_i - \tilde{R}_j)] = 0, \quad \text{or} \quad E[\tilde{m}\tilde{R}_{i-j}] = 0,

where, again for notational convenience, we substitute \tilde{R}_{i-j} for \tilde{R}_i - \tilde{R}_j. This latter expression furthermore implies the following series of relationships:

    E\tilde{m}\, E\tilde{R}_{i-j} + \mathrm{cov}(\tilde{m}, \tilde{R}_{i-j}) = 0, \quad \text{or}
    E\tilde{m}\, E\tilde{R}_{i-j} + \rho(\tilde{m}, \tilde{R}_{i-j})\,\sigma_m\,\sigma_{R_{i-j}} = 0, \quad \text{or}
    \frac{E\tilde{R}_{i-j}}{\sigma_{R_{i-j}}} + \rho(\tilde{m}, \tilde{R}_{i-j})\,\frac{\sigma_m}{E\tilde{m}} = 0, \quad \text{or}
    \frac{E\tilde{R}_{i-j}}{\sigma_{R_{i-j}}} = -\rho(\tilde{m}, \tilde{R}_{i-j})\,\frac{\sigma_m}{E\tilde{m}}.     (9.26)

It follows from Equation (9.26) and the fact that a correlation is never larger than 1 in absolute value that

    \frac{\sigma_m}{E\tilde{m}} > \frac{|E\tilde{R}_{i-j}|}{\sigma_{R_{i-j}}}.     (9.27)

The inequality in expression (9.27) is referred to as the Hansen-Jagannathan lower bound on the pricing kernel. If, as noted earlier, we designate asset i as the market portfolio and asset j as the risk-free return, then the data from Table 9.3 and Equation (9.27) together imply (for the U.S. economy):

    \frac{\sigma_m}{E\tilde{m}} > \frac{|E(\tilde{r}_M - r_f)|}{\sigma_{r_M - r_f}} = \frac{.062}{.167} = .37.

Let us check whether this bound is satisfied for our model.
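As a numerical companion, the following sketch computes the data side of the bound from Table 9.3 and, anticipating the check that follows, the model side E\tilde{m} under the lognormal calibration of consumption growth (E(\tilde{x}) = 1.0183, \sigma(\tilde{x}) = .0357, \gamma = 2, \delta = .99):

```python
import math

# The Hansen-Jagannathan bound (9.27) with the lognormal consumption-growth
# calibration used in the text: ln x ~ N(mu_x, sigma_x^2).
delta, gamma = 0.99, 2.0
Ex, sd_x = 1.0183, 0.0357

# Moments of ln x implied by the mean and standard deviation of x itself:
var_lnx = math.log(1 + (sd_x / Ex) ** 2)      # about 0.00123
mu_lnx = math.log(Ex) - var_lnx / 2

# E(m) for m = delta * x**(-gamma), using the lognormal moment formula:
Em = delta * math.exp(-gamma * mu_lnx + gamma ** 2 * var_lnx / 2)   # about 0.96

# Data side of the bound: sigma_m / E(m) must exceed |E(r_M - r_f)| / sigma(r_M - r_f).
bound = 0.062 / 0.167                          # about 0.37
required_sd_m = bound * Em                     # roughly 0.35
print(Em, bound, required_sd_m)
```

Since E\tilde{m} is close to 1 (it is the expected price of a one-period bond), the bound demands a standard deviation of the kernel of roughly .35, far above what smooth consumption can deliver.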
From Equation (9.18), m(\tilde{x}_{t+1}) = \delta(\tilde{x}_{t+1})^{-\gamma}, the expectation of which can be computed (see Appendix 9.2) to be

    E\tilde{m} = \delta \exp\left(-\gamma\mu_x + \tfrac{1}{2}\gamma^2\sigma_x^2\right) = .99(.967945) = .96 \quad \text{for } \gamma = 2.

In fact, Equation (9.20) reminds us that E\tilde{m} is simply the expected value of the price of a one-period risk-free discount bond, which cannot be very far from 1. This implies that for the Hansen-Jagannathan bound to be satisfied, the standard deviation of the pricing kernel cannot be much lower than .3; given the information we have on x_t, it is a short step to estimate this parameter numerically under the assumption of lognormality. When we do this (see Appendix 9.2 again), we obtain an estimate for \sigma(m) = .002, which is an order of magnitude lower than what is required for Equation (9.27) to be satisfied. The message is that it is very difficult to get the equilibrium \sigma(m) anywhere near the required level. In a homogeneous agent, complete market model with standard preferences, where the variation in equilibrium consumption matches the data, consumption is just too smooth and the marginal utility of consumption does not vary sufficiently to satisfy the bound implied by the data (unless the curvature of the utility function, that is, the degree of risk aversion, is assumed to be astronomically high, an assumption which, as we have seen, raises problems of its own).

9.7 Some Extensions

9.7.1 Reviewing the Diagnosis

Our first dynamic general equilibrium model thus fails when confronted with actual data. Let us review the source of this failure.
Recall our original pricing Equation (9.9), specialized for a single asset, the market portfolio:

    E_t\tilde{r}_{M,t+1} - r_{f,t+1} = -\delta(1 + r_{f,t+1})\,\mathrm{cov}_t\left(\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)}, \tilde{r}_{M,t+1}\right)
        = -\delta(1 + r_{f,t+1})\,\rho\left(\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)}, \tilde{r}_{M,t+1}\right)\sigma\left(\frac{U_1(\tilde{c}_{t+1})}{U_1(c_t)}\right)\sigma(\tilde{r}_{M,t+1})
        = -(1 + r_{f,t+1})\,\rho(\tilde{m}_t, \tilde{r}_{M,t+1})\,\sigma(\tilde{m}_t)\,\sigma(\tilde{r}_{M,t+1}).

Written in this way, it is clear that the equity premium depends upon the standard deviation of the MRS (or, equivalently, the stochastic discount factor), the standard deviation of the return on the market portfolio, and the correlation between these quantities. For the United States, and most other industrial countries, the problem with a model in which pricing and return relationships depend so much on consumption (and thus MRS) variation is that average per capita consumption does not vary much at all. If this model is to have any hope of matching the data, we must modify it in a way that will increase the standard deviation of the relevant MRS, or the variability of the dividend being priced (and thus \sigma(\tilde{r}_{M,t+1})). We do not have complete freedom over this latter quantity, however, as it must be matched to the data as well.

9.7.2 The CCAPM with Epstein-Zin Utility

At this stage it is interesting to inquire whether, in addition to its intellectual appeal on grounds of generality, Epstein and Zin's (1989) separation of time and risk preferences might contribute a solution to the equity premium puzzle and, more generally, alter our vision of the CCAPM and its message. Let us start by looking specifically at the equity premium puzzle.
It will facilitate our discussion to repeat Equations (5.10) and (5.12) defining the Epstein-Zin preference representation (refer to Chapter 5 for a discussion and for the log case):

    U(c_t, CE_{t+1}) = \left[(1-\delta)\,c_t^{\frac{1-\gamma}{\theta}} + \delta\, CE_{t+1}^{\frac{1-\gamma}{\theta}}\right]^{\frac{\theta}{1-\gamma}},

where

    CE(\tilde{U}_{t+1}) = \left[E_t(\tilde{U}_{t+1})^{1-\gamma}\right]^{\frac{1}{1-\gamma}}, \quad 0 < \delta < 1, \quad 1 \ne \gamma > 0, \quad \rho > 0, \quad \text{and} \quad \theta = \frac{1-\gamma}{1 - \frac{1}{\rho}}.

Weil (1989) uses these preferences in a setting otherwise identical to that of Mehra and Prescott (1985). Asset prices and returns are computed similarly. What he finds, however, is that this greater generality does not resolve the risk premium puzzle, but rather tends to underscore what we have already introduced as the risk-free rate puzzle. The Epstein-Zin (1989, 1991) preference representation does not innovate along the risk dimension, with the parameter \gamma alone capturing risk aversion in a manner very similar to the standard case. It is, therefore, not surprising that Weil (1989) finds that only if this parameter is fixed at implausibly high levels (\gamma \approx 45) can a properly calibrated model replicate the premium: the Mehra and Prescott (1985) result revisited. With respect to time preferences, if \rho is calibrated to respect empirical studies, then the model also predicts a risk-free rate that is much too high. The reason for this is the same as the one outlined at the end of Section 9.5: Separately calibrating the intertemporal substitution parameter \rho tends to strengthen the assumption that the representative agent is highly desirous of a smooth inter-temporal consumption stream. With consumption growing on average at 1.8 percent per year, the agent must be offered a very high risk-free rate in order to be induced to save, thus making his consumption tomorrow exceed today's consumption by even more (less smoothing).

While Epstein and Zin preferences do not help in solving the equity premium puzzle, it is interesting to study a version of the CCAPM with these generalized preferences.
The idea is that the incorporation of separate time and risk preferences may enhance the ability of this class of models to explain the general pattern of security returns beyond the equity premium itself. The setting is once again a Lucas (1978) style economy with N assets, with the return on the equilibrium portfolio of all assets representing the return on the market portfolio. Using an elaborate dynamic programming argument, Epstein and Zin (1989, 1991) derive an asset pricing equation of the form

    E_t\left\{\left[\delta\left(\frac{\tilde{c}_{t+1}}{c_t}\right)^{-\frac{1}{\rho}}\right]^{\theta}\left[\frac{1}{1+\tilde{r}_{M,t+1}}\right]^{1-\theta}(1+\tilde{r}_{j,t+1})\right\} \equiv 1,     (9.28)

where r_{M,t} denotes the period t return on the market portfolio, and r_{j,t} the period t return on some asset in it. Note that when time and risk preferences coincide (\gamma = \frac{1}{\rho}, \theta = 1), Equation (9.28) reduces to the pricing equation of the standard time-separable CCAPM case. The pricing kernel itself is of the form

    \left[\delta\left(\frac{\tilde{c}_{t+1}}{c_t}\right)^{-\frac{1}{\rho}}\right]^{\theta}\left[\frac{1}{1+\tilde{r}_{M,t+1}}\right]^{1-\theta},     (9.29)

which is a geometric average (with weights \theta and 1-\theta, respectively) of the pricing kernel of the standard CCAPM, \delta(\tilde{c}_{t+1}/c_t)^{-1/\rho}, and the pricing kernel for the log (\rho = 0) case, \frac{1}{1+\tilde{r}_{M,t+1}}.

Epstein and Zin (1991) next consider a linear approximation to the geometric average in Equation (9.29),

    \theta\,\delta\left(\frac{\tilde{c}_{t+1}}{c_t}\right)^{-\frac{1}{\rho}} + (1-\theta)\,\frac{1}{1+\tilde{r}_{M,t+1}}.     (9.30)

Substituting Equation (9.30) into Equation (9.28) gives

    E_t\left\{\theta\,\delta\left(\frac{\tilde{c}_{t+1}}{c_t}\right)^{-\frac{1}{\rho}}(1+\tilde{r}_{j,t+1})\right\} + E_t\left\{(1-\theta)\,\frac{1+\tilde{r}_{j,t+1}}{1+\tilde{r}_{M,t+1}}\right\} \approx 1.     (9.31)

Equation (9.31) is revealing. As we noted earlier, the standard CAPM relates the (essential, non-diversifiable) risk of an asset to the covariance of its returns with M, while the CCAPM relates its riskiness to the covariance of its returns with the growth rate of consumption (via the IMRS). With separate time and risk preferences, Equation (9.31) suggests that both covariances matter for an asset's return pattern.^12
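To get a feel for how close the linear approximation (9.30) comes to the exact kernel (9.29), the following sketch evaluates both at a single realization of consumption growth and the market return; all parameter values and realizations here are hypothetical, chosen only for illustration.

```python
# Epstein-Zin pricing kernel (9.29) versus its linear approximation (9.30),
# evaluated at one hypothetical realization.
delta, gamma, rho = 0.99, 2.0, 1.5
theta = (1 - gamma) / (1 - 1 / rho)        # equals -3 for these values
x, r_M = 1.02, 0.07                        # consumption growth and market return (made up)

a = delta * x ** (-1 / rho)                # standard CCAPM kernel component
b = 1 / (1 + r_M)                          # market-return kernel component
kernel_exact = a ** theta * b ** (1 - theta)      # geometric average (9.29)
kernel_approx = theta * a + (1 - theta) * b       # linear approximation (9.30)
print(kernel_exact, kernel_approx)
```

For realizations in a plausible range, the two values differ by only a percent or two, which is why Epstein and Zin work with the linear form in their empirical analysis.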
But why are these effects both present, separately and individually? The covariance of an asset's return with M captures its atemporal, non-diversifiable risk (as in the static model). The covariance of its returns with the growth rate of consumption fundamentally captures its risk across successive time periods. When risk and time preferences are separated, it is not entirely surprising that both sources of risk should be individually present. This relationship is more strikingly apparent if we assume joint lognormality and heteroskedasticity in consumption and asset returns; Campbell et al. (1997) then express Equation (9.31) in a form whereby the risk premium on asset i satisfies:

    E_t(\tilde{r}_{i,t+1}) - r_{f,t+1} + \frac{\sigma_i^2}{2} = \delta\,\frac{\sigma_{ic}}{\psi} + (1-\delta)\,\sigma_{iM},     (9.32)

where \sigma_{ic} = \mathrm{cov}(\tilde{r}_{it}, \tilde{c}_t/c_{t-1}) and \sigma_{iM} = \mathrm{cov}(\tilde{r}_{it}, \tilde{r}_{M,t}). Both sources of risk are clearly present.

Footnote 12: To see this, recall that for two random variables \tilde{x} and \tilde{y}, E(\tilde{x}\tilde{y}) = E(\tilde{x})E(\tilde{y}) + \mathrm{cov}(\tilde{x}, \tilde{y}), and employ this substitution in both terms on the left-hand side of Equation (9.31).

9.7.3 Habit Formation

In the narrower perspective of solving the equity premium puzzle, probably the most successful modification of the standard setup has been to admit utility functions that exhibit higher rates of risk aversion at the margin, and thus can translate small variations in consumption into a large variability of the MRS. One way to achieve this objective without being confronted with the risk-free rate puzzle, which is exacerbated if we simply postulate a higher \gamma, is to admit some form of habit formation.
This is the notion that the agent's utility today is determined not by her absolute consumption level, but rather by the position of her current consumption vis-à-vis a stock of habit, summarizing either her own past consumption history (with more or less weight placed on distant consumption levels) or the history of aggregate consumption (summarizing, in a sense, the consumption habits of her neighbors; a "keeping up with the Joneses" effect). This modeling perspective thus takes the view that utility of consumption is primarily dependent on (affected by) departures from prior consumption history, either one's own or that of a social reference group; departures from what we have been accustomed to consuming, or from what we may have been led to consider "fair" consumption. This concept is open to a variety of different specifications, with diverse implications for behavior and asset pricing. The interested reader is invited to consult Campbell and Cochrane (1999) for a review. Here we will be content to illustrate briefly the underlying working principle. To that end, we specify the representative agent's period preference ordering to be of the form

    U(c_t, c_{t-1}) \equiv \frac{(c_t - \chi c_{t-1})^{1-\gamma}}{1-\gamma},

where \chi \le 1 is a parameter. In the extreme case \chi = 1, period utility depends only upon the deviation of current period t consumption from the prior period's consumption. As we noted earlier, actual data indicate that per capita consumption for the United States and most other developed countries is very smooth. This implies that (c_t - c_{t-1}) is likely to be very small most of the time. For this specification, the agent's effective (marginal) relative risk aversion reduces to

    R_R(c_t) = \frac{\gamma}{1 - (c_{t-1}/c_t)};

with c_t \approx c_{t-1}, the effective R_R(c) will thus be very high, even for a low \gamma, and the representative agent will appear as though he is very risk averse. This opens the possibility of a very high return on the risky asset.
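The explosive effect of the habit on measured curvature is worth seeing in numbers. With \chi = 1, \gamma = 2, and consumption growing at its historical average of 1.83 percent, the effective relative risk aversion is already above 100:

```python
# Effective relative risk aversion under the chi = 1 habit specification:
# R_R(c_t) = gamma / (1 - c_{t-1}/c_t). With smooth consumption the denominator
# is tiny, so measured risk aversion explodes even for a low curvature gamma.
gamma = 2.0
growth = 1.0183                      # average U.S. per capita consumption growth

effective_rra = gamma / (1 - 1 / growth)
print(effective_rra)                 # roughly 110
```

A small habit-adjusted consumption cushion thus does the work that an implausibly large \gamma would otherwise have to do.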
With a careful choice of the habit specification, the risk-free asset pricing equation will not be materially affected and the risk-free rate puzzle will be avoided [see Constantinides (1990) and Campbell and Cochrane (1999)].

We find this development interesting not only because of its implications for pricing assets in an exchange economy. It also suggests a more general reevaluation of the standard utility framework discussed in Chapter 2. It may, however, lead to questioning some of the basic tenets of our financial knowledge: It would hardly be satisfactory to solve a puzzle by assuming habit formation and high effective rates of risk aversion and ignore this behavioral assumption when attending to other problems tackled by financial theory. In fact, a confirmed habit formation utility specification would, for the same reasons, have significant implications for macroeconomics as well (as modern macroeconomics builds on the same theoretical principles as the CCAPM of this section). It is not clear, however, that high effective rates of risk aversion are consistent with our current understanding of short-run macroeconomic fluctuations. This discussion suggests that it is worthwhile to explore alternative potential solutions to the puzzle, all the while attempting to understand better the connections between the real and the financial sides of the economy.

9.7.4 Distinguishing Stockholders from Non-Stockholders

In this spirit, another approach to addressing the outstanding financial puzzles starts by recognizing that only a small fraction of the population holds substantial financial assets, stocks in particular. This fact implies that only the variability of the consumption stream of the stockholding class should matter for pricing risky assets.
There are reasons to believe that the consumption patterns of this class of the population are both more variable and more highly correlated with stock returns than average per capita consumption.^13 Observing, furthermore, that wages are very stable and that the aggregate wage share is countercyclical (that is, proportionately larger in bad times, when aggregate income is relatively low), it is not unreasonable to assume that firms, and thus their owners, the shareholders, insure workers against the income fluctuations associated with the business cycle. If this is a significant feature of the real world, it should have implications for asset pricing, as we presently demonstrate.

Before trying to incorporate such a feature into a CCAPM-type model, it is useful first to recall the notion of risk sharing. Consider the problem of allocating an uncertain income (consumption) stream between two agents so as to maximize overall utility. Assume, furthermore, that these income shares are not fixed across all states, but can be allocated on a state-by-state basis. This task can be summarized by the allocation problem

    \max_{c_1(\theta),\, c_2(\theta)} U(\tilde{c}_1(\theta)) + \mu V(\tilde{c}_2(\theta)), \quad \text{s.t.} \quad \tilde{c}_1(\theta) + \tilde{c}_2(\theta) \le \tilde{Y}(\theta),

where U(\cdot), V(\cdot) are, respectively, the two agents' utility functions, \tilde{c}_1(\theta) and \tilde{c}_2(\theta) their respective income assignments, \tilde{Y}(\theta) the economy-wide state-dependent aggregate income stream, and \mu their relative weight. The necessary and sufficient first-order condition for this problem is

    U_1(\tilde{c}_1(\theta)) = \mu V_1(\tilde{c}_2(\theta)).     (9.33)

Equation (9.33) states that the ratio of the marginal utilities of the two agents should be constant. We have seen it before as Equation (8.3) of Chapter 8.

Footnote 13: Mankiw and Zeldes (1991) attempt, successfully, to confirm this conjecture. They indeed find that shareholder consumption is 2.5 times as variable as non-shareholder consumption. Data problems, however, preclude taking their results as more than indicative.
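A minimal numerical sketch of the sharing rule (9.33): it solves U_1(w) = \mu V_1(Y - w) by bisection for CRRA marginal utilities, with the worker (U) made more risk averse than the owner (V). The curvatures and the weight \mu used here are hypothetical.

```python
# Optimal risk sharing a la Equation (9.33) with CRRA marginal utilities:
# w**(-gamma_worker) = mu * (Y - w)**(-gamma_owner). Parameters are illustrative.
gamma_worker, gamma_owner, mu = 5.0, 1.0, 1.0

def wage(Y, tol=1e-12):
    # Bisection on f(w) = w**(-gamma_worker) - mu * (Y - w)**(-gamma_owner),
    # which is decreasing in w on (0, Y).
    lo, hi = tol, Y - tol
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** -gamma_worker > mu * (Y - mid) ** -gamma_owner:
            lo = mid     # wage marginal utility too high: raise the wage
        else:
            hi = mid
    return (lo + hi) / 2

w_bad, w_good = wage(1.0), wage(1.2)   # output in a bad and a good state
print(w_good / w_bad - 1)              # wage response to a 20% output rise
```

Here a 20 percent rise in output raises the wage by roughly 9 percent, while the residual dividend rises by far more: the more risk-averse agent's share is smoothed, and the owner's residual claim is correspondingly levered.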
As we saw there, it can be interpreted as an optimal risk-sharing condition in the sense that it implicitly assigns more of the income risk to the less risk-averse agent. To see this, take the extreme case where one of the agents, say the one with utility function V(\cdot), is risk neutral, that is, indifferent to risk. According to Equation (9.33) it will then be optimal for the other agent's income stream to be constant across all states: He will be perfectly insured. Agent V(\cdot) will thus absorb all the risk (in exchange for a higher average income share).

To understand the potential place of these ideas in the consumption CAPM setting, let V(\cdot) now denote the period utility function of the representative shareholder, and U(\cdot) the period utility function of the representative worker, who is assumed not to hold any financial assets and who consequently consumes his wage w_t. As before, let Y_t be the uncertain (exogenously given) output. The investment problem of the shareholders, the maximization problem with which we started this chapter, now becomes

    \max_{\{z_t\}} E\left(\sum_{t=0}^{\infty} \delta^t V(\tilde{c}_t)\right)
    \text{s.t.} \quad c_t + p_t z_{t+1} \le z_t d_t + p_t z_t,
                \quad d_t = Y_t - w_t,
                \quad U_1(w_t) = \mu V_1(d_t),
                \quad z_t \le 1, \ \forall t.

Here we simply introduce a distinction between the output of the tree, Y_t, and the dividends paid to its owners, d_t, on the plausible grounds that people (workers) need to be paid to take care of the trees and collect the fruits. This payment is w_t. Moreover, we introduce the idea that the wage bill may incorporate a risk insurance component, which we formalize by assuming that the variability of wage payments is determined by an optimal risk-sharing rule equivalent to Equation (9.33). One key parameter is the income share \mu, which may be interpreted as reflecting the relative bargaining strengths of the two groups. Indeed, a larger \mu gives more income to the worker.

Assets in this economy are priced as before, with Equation (9.1) becoming

    V_1(c_t)\, p_t = \delta E_t\left[V_1(\tilde{c}_{t+1})\left(\tilde{p}_{t+1} + \tilde{d}_{t+1}\right)\right].     (9.34)

While the differences between Equations (9.1) and (9.34) may appear purely notational, their importance cannot be overstated. First, the pricing kernel derived from Equation (9.34) will build on the firm owners' MRS, defined over shareholder consumption (dividend) growth rather than the growth in average per capita consumption. Moreover, the definition of dividends as output minus a stabilized stream of wage payments opens up the possibility that the flow of payments to which firm owners are entitled is effectively much more variable, not only than consumption but than output as well. Therein lies a concept of leverage, one that has been dubbed operating leverage, similar to the familiar notion of financial leverage. In the same way that bondholders come first and are entitled to a fixed, noncontingent interest payment, workers also have priority claims to the income stream of the firm, and macroeconomic data on the cyclical behavior of the wage share confirm that wage payments are more stable than aggregate income. We have explored the potential of these ideas in recent research (Danthine and Donaldson, 2002), to which the reader is referred for details, and find that this class of models can generate significantly increased equity premia. When we add an extra notion of distributional risk, associated with the possibility that \mu varies stochastically in a way that permits better accounting of the observed behavior of the wage share over the medium run, the premium approaches 6 percent.

9.8 Conclusions

The two modifications discussed in the previous section are far from representing the breadth and depth of the research that has been stimulated by the provocative result presented in Mehra and Prescott (1985). For a broader and more synthetic perspective, we refer the reader to the excellent survey of Kocherlakota (1996).
The material covered in this chapter contains recent developments illustrating some of the most important directions taken by modern financial theory. Much work remains to be done, as the latest sections indicate, and this is indeed a fertile area for current research. At this juncture, one may be led to the view that structural asset pricing theory, based on rigorous dynamic general equilibrium models, provides limited operational support in our quest for understanding financial market phenomena. While the search goes on, this state of affairs nevertheless explains the popularity of less encompassing approaches based on the concept of arbitrage reviewed in Chapters 10 to 13.

References

Bansal, R., Coleman, W.J. (1996), "A Monetary Explanation of the Equity Premium, Term Premium and Risk-Free Rates Puzzles," Journal of Political Economy 104, 1135-1171.

Barro, R.J. (1974), "Are Government Bonds Net Wealth?" Journal of Political Economy 82, 1095-1117.

Campbell, J.Y. (1998), "Asset Prices, Consumption, and the Business Cycle," NBER Working Paper 6485, March 1998, forthcoming in the Handbook of Macroeconomics, Amsterdam: North Holland.

Campbell, J.Y., Cochrane, J.H. (1999), "By Force of Habit: A Consumption-Based Explanation of Aggregate Stock Market Behavior," Journal of Political Economy 107, 205-251.

Campbell, J., Lo, A., MacKinlay, A.C. (1997), The Econometrics of Financial Markets, Princeton University Press, Princeton, N.J.

Constantinides, G.M. (1982), "Intertemporal Asset Pricing with Heterogeneous Consumers and without Demand Aggregation," Journal of Business 55, 253-267.

Constantinides, G.M. (1990), "Habit Formation: A Resolution of the Equity Premium Puzzle," Journal of Political Economy 98, 519-543.

Danthine, J.P., Donaldson, J.B. (2002), "Labor Relations and Asset Returns," Review of Economic Studies 69, 41-64.

Epstein, L., Zin, S.
(1989), "Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: A Theoretical Framework," Econometrica 57, 937-969.

Epstein, L., Zin, S. (1991), "Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: An Empirical Analysis," Journal of Political Economy 99, 263-286.

Goetzmann, W., Jorion, P. (1999), "A Century of Global Stock Markets," Journal of Finance 54, 953-980.

Hansen, L., Jagannathan, R. (1991), "Implications of Security Market Data for Models of Dynamic Economies," Journal of Political Economy 99, 225-262.

Kocherlakota, N. (1996), "The Equity Premium: It's Still a Puzzle," Journal of Economic Literature 34, 42-71.

Lucas, R.E. (1978), "Asset Pricing in an Exchange Economy," Econometrica 46, 1429-1445.

Mankiw, G., Zeldes, S. (1991), "The Consumption of Stockholders and Non-Stockholders," Journal of Financial Economics 29, 97-112.

Mehra, R., Prescott, E.C. (1985), "The Equity Premium: A Puzzle," Journal of Monetary Economics 15, 145-161.

Weil, Ph. (1989), "The Equity Premium Puzzle and the Risk-Free Rate Puzzle," Journal of Monetary Economics 24, 401-421.

Appendix 9.1: Solving the CCAPM with Growth

Assume that there is a finite set of possible growth rates $\{x_1, ..., x_N\}$ whose realizations are governed by a Markov process with transition matrix $T$ and entries $\pi_{ij}$. Then, for whatever $x_i$ is realized in period $t+1$, $d_{t+1} = x_{t+1}Y_t = x_{t+1}c_t = x_i c_t$.

Under the usual utility specification, $U(c) = \frac{c^{1-\gamma}}{1-\gamma}$, the basic asset pricing equation reduces to

$$c_t^{-\gamma}p(Y_t, x_i) = \delta \sum_{j=1}^{N} \pi_{ij}(x_j c_t)^{-\gamma}\left[c_t x_j + p(x_j Y_t, x_j)\right],$$

or

$$p(Y_t, x_i) = \delta \sum_{j=1}^{N} \pi_{ij}\left(\frac{x_j c_t}{c_t}\right)^{-\gamma}\left[c_t x_j + p(x_j Y_t, x_j)\right].$$

So we see that the MRS is determined exclusively by the consumption growth rate.
The essential insight of Mehra and Prescott (1985) was to observe that a solution to this linear system has the form $p(Y_t, x_i) = p(c_t, x_i) = v_i c_t$ for a set of constants $\{v_1, ..., v_N\}$, each identified with the corresponding growth rate. With this functional form, the asset pricing equation reduces to

$$v_i c_t = \delta \sum_{j=1}^{N} \pi_{ij}(x_j)^{-\gamma}\left[x_j c_t + v_j x_j c_t\right],$$

or

$$v_i = \delta \sum_{j=1}^{N} \pi_{ij}(x_j)^{1-\gamma}\left[1 + v_j\right]. \qquad (9.35)$$

This is again a system of linear equations in the $N$ unknowns $\{v_1, ..., v_N\}$. Provided the growth rates are not too large (so that the agent's utility is not unbounded), a solution exists: a set of $\{v_1^*, ..., v_N^*\}$ that solves the system of Equations (9.35).

Thus, for any state $(Y, x_j) = (c, x_j)$, the equilibrium equity asset price is $p(Y, x_j) = v_j^* Y$. If we suppose the current state is $(Y, x_i)$ while next period it is $(x_j Y, x_j)$, then the one-period return earned by the equity security over this period is

$$r_{ij} = \frac{p(x_j Y, x_j) + x_j Y - p(Y, x_i)}{p(Y, x_i)} = \frac{v_j^* x_j Y + x_j Y - v_i^* Y}{v_i^* Y} = \frac{x_j(v_j^* + 1)}{v_i^*} - 1,$$

and the mean or expected return, conditional on state $i$, is $\bar{r}_i = \sum_{j=1}^{N} \pi_{ij} r_{ij}$. The unconditional equity return is thus given by

$$Er = \sum_{j=1}^{N} \hat{\pi}_j \bar{r}_j$$

where the $\hat{\pi}_j$ are the long-run stationary probabilities of each state. The risk-free security is analogously priced as

$$p^{rf}(c, x_i) = \delta \sum_{j=1}^{N} \pi_{ij}(x_j)^{-\gamma}, \text{ etc.}$$

Appendix 9.2: Some Properties of the Lognormal Distribution

Definition A9.1: A variable $x$ is said to follow a lognormal distribution if $\ln x$ is normally distributed. Let $\ln x \sim N(\mu_x, \sigma_x^2)$. If this is the case,

$$E(x) = \exp\left(\mu_x + \tfrac{1}{2}\sigma_x^2\right)$$
$$E(x^a) = \exp\left(a\mu_x + \tfrac{1}{2}a^2\sigma_x^2\right)$$
$$var(x) = \exp\left(2\mu_x + \sigma_x^2\right)\left(\exp(\sigma_x^2) - 1\right)$$

Suppose furthermore that $x$ and $y$ are two jointly lognormally distributed variables; then we also have

$$E(x^a y^b) = \exp\left(a\mu_x + b\mu_y + \tfrac{1}{2}\left(a^2\sigma_x^2 + b^2\sigma_y^2 + 2\rho ab\sigma_x\sigma_y\right)\right)$$

where $\rho$ is the correlation coefficient between $\ln x$ and $\ln y$.
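Returning to Appendix 9.1, the linear system (9.35) is easy to solve numerically. The following is a minimal sketch; the two-state growth rates, transition matrix, and preference parameters below are purely illustrative assumptions, not Mehra and Prescott's calibration.

```python
import numpy as np

# Solve the linear system (9.35) for the price-dividend constants v_i,
# then compute equity and risk-free returns. All numbers here are
# illustrative assumptions, not Mehra and Prescott's calibration.
delta, gamma = 0.96, 2.0
x = np.array([1.03, 0.99])              # gross consumption growth rates
P = np.array([[0.6, 0.4],
              [0.4, 0.6]])              # transition probabilities pi_ij

# (9.35): v_i = delta * sum_j pi_ij * x_j**(1-gamma) * (1 + v_j)
A = delta * P * x ** (1.0 - gamma)      # A[i, j] = delta*pi_ij*x_j^(1-gamma)
v = np.linalg.solve(np.eye(len(x)) - A, A @ np.ones(len(x)))

# Equity returns r_ij = x_j*(v_j + 1)/v_i - 1 and their conditional means
r = x * (v + 1.0) / v[:, None] - 1.0
r_bar = (P * r).sum(axis=1)

# Risk-free price and rate in each state
p_rf = delta * (P * x ** (-gamma)).sum(axis=1)
print(v, r_bar, 1.0 / p_rf - 1.0)
```

With consumption growth this smooth, the computed conditional equity returns barely exceed the risk-free rates, which is precisely the pattern behind the equity premium puzzle.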
Let us apply these relationships to consumption growth: $x_t$ is lognormally distributed, that is, $\ln x_t \sim N(\mu_x, \sigma_x^2)$. We know that $E(x_t) = 1.0183$ and $var(x_t) = (.0357)^2$. To identify $\mu_x$ and $\sigma_x^2$, we need to find the solutions of

$$1.0183 = \exp\left(\mu_x + \tfrac{1}{2}\sigma_x^2\right)$$
$$(.0357)^2 = \exp\left(2\mu_x + \sigma_x^2\right)\left(\exp(\sigma_x^2) - 1\right)$$

Substituting the first equation squared into the second (by virtue of the fact that $[\exp(y)]^2 = \exp(2y)$) and solving for $\sigma_x^2$, one obtains $\sigma_x^2 = .00123$. Substituting this value in the equation for $\mu_x$, one solves for $\mu_x = .01752$.

We can directly use these values to solve Equation (9.20):

$$E\left(x_t^{-\gamma}\right) = \exp\left(-\gamma\mu_x + \tfrac{1}{2}\gamma^2\sigma_x^2\right) = \exp\{-.03258\} = .967945,$$

thus $\delta = 1.024$.

Focusing now on the numerator of Equation (9.21), one has

$$\exp\left(\mu_x + \tfrac{1}{2}\sigma_x^2\right)\exp\left(-\gamma\mu_x + \tfrac{1}{2}\gamma^2\sigma_x^2\right),$$

while the denominator is

$$\exp\left((1-\gamma)\mu_x + \tfrac{1}{2}(1-\gamma)^2\sigma_x^2\right).$$

It remains to recall that $\frac{\exp(a)\exp(b)}{\exp(c)} = \exp(a + b - c)$ to obtain Equation (9.22).

Another application: the standard deviation of the pricing kernel $m_t = \delta x_t^{-\gamma}$, where consumption growth $x_t$ is lognormally distributed. Given that $Em_t$ is as derived in Section 9.6, one estimates

$$\sigma^2(m_t) \cong \frac{1}{k}\sum_{i=1}^{k}\left[\delta(x_i)^{-\gamma} - Em_t\right]^2,$$

for $\ln x_i$ drawn from $N(.01752, .00123)$ and $k$ sufficiently large (say $k = 10,000$). For $\gamma = 2$, one obtains $\sigma^2(m_t) = (.00234)^2$, which yields

$$\frac{\sigma_{\tilde{m}}}{E\tilde{m}} \cong \frac{.00234}{.9559} = .00245.$$

Part IV: Arbitrage Pricing

Chapter 10: Arrow-Debreu Pricing II: The Arbitrage Perspective

10.1 Introduction

Chapter 8 presented the Arrow-Debreu asset pricing theory from the equilibrium perspective. With the help of a number of modeling hypotheses and building on the concept of market equilibrium, we showed that the price of a future contingent dollar can appropriately be viewed as the product of three main components: a pure time discount factor, the probability of the relevant state of nature, and an intertemporal marginal rate of substitution reflecting the collective (market) assessment of the scarcity of consumption in the future relative to today.
This important message is one that we confirmed with the CCAPM of Chapter 9. Here, however, we adopt the alternative arbitrage perspective and revisit the same Arrow-Debreu pricing theory. Doing so is productive precisely because, as we have stressed before, the design of an Arrow-Debreu security is such that once its price is available, whatever its origin and make-up, it provides the answer to the key valuation question: what is a unit of the future state-contingent numeraire worth today? As a result, it constitutes the essential piece of information necessary to price arbitrary cash flows. Even if the equilibrium theory of Chapter 8 were all wrong, in the sense that the hypotheses made there turn out to be a very poor description of reality and that, as a consequence, the prices of Arrow-Debreu securities are not well described by Equation (8.1), it remains true that if such securities are traded, their prices constitute the essential building blocks (in the sense of our Chapter 2 bicycle pricing analogy) for valuing any arbitrary risky cash flow.

Section 10.2 develops this message and goes further, arguing that the detour via Arrow-Debreu securities is useful even if no such security is actually traded. In making this argument we extend the definition of the complete market concept. Section 10.3 illustrates the approach in the abstract context of a risk-free world, where we argue that any risk-free cash flow can be easily and straightforwardly priced as an equivalent portfolio of date-contingent claims. These latter instruments are, in effect, discount bonds of various maturities. Our main interest, of course, is to extend this approach to the evaluation of risky cash flows. To do so requires, by analogy, that for each future date-state the corresponding contingent cash flow be priced. This, in turn, requires that we know, for each future date-state, the price today of a security that pays off in that date-state and only in that date-state.
This latter statement is equivalent to the assumption of market completeness. In the rest of this chapter, we take on the issue of completeness in the context of securities known as options. Our goal is twofold. First, we want to give the reader an opportunity to review an important element of financial theory, the theory of options. A special appendix to this chapter, available on this text's website, describes the essentials for the reader in need of a refresher. Second, we want to provide a concrete illustration of the view that the recent expansion of derivative markets constitutes a major step in the quest for the "Holy Grail" of achieving a complete securities market structure. We will see, indeed, that options can, in principle, be used relatively straightforwardly to complete the markets. Furthermore, even in situations where this is not practically the case, we can use option pricing theory to value risky cash flows as though the financial markets were complete.

Our discussion will follow the outline suggested by the following two questions.

1. How can options be used to complete the financial markets? We will first answer this question in a simple, highly abstract setting. Our discussion closely follows Ross (1976).

2. What is the link between the prices of market-quoted options and the prices of Arrow-Debreu securities? We will see that it is indeed possible to infer Arrow-Debreu prices from option prices in a practical setting conducive to the valuation of an actual cash flow stream. Here our discussion follows Banz and Miller (1978) and Breeden and Litzenberger (1978).

10.2 Market Completeness and Complex Securities

In this section we pursue, more systematically, the important issue of market completeness first addressed when we discussed the optimality property of a general competitive equilibrium. Let us start with two definitions.

1. Completeness.
Financial markets are said to be complete if, for each state of nature $\theta$, there exists a market for contingent claim or Arrow-Debreu security $\theta$; in other words, for a claim promising delivery of one unit of the consumption good (or, more generally, the numeraire) if state $\theta$ is realized, and nothing otherwise. Note that this definition takes a form specifically appropriate to models where there is only one consumption good and several date-states. This is the usual context in which financial issues are addressed.

2. Complex security. A complex security is one that pays off in more than one state of nature.

Suppose the number of states of nature $N = 4$; an example of a complex security is $S = (5, 2, 0, 6)$ with payoffs 5, 2, 0, and 6, respectively, in states of nature 1, 2, 3, and 4. If markets are complete, we can immediately price such a security. Since

$$(5, 2, 0, 6) = 5(1, 0, 0, 0) + 2(0, 1, 0, 0) + 0(0, 0, 1, 0) + 6(0, 0, 0, 1),$$

in other words, since the complex security can be replicated by a portfolio of Arrow-Debreu securities, the price of security $S$, $p_S$, must be

$$p_S = 5q_1 + 2q_2 + 6q_4.$$

We are appealing here to the law of one price¹ or, equivalently, to a condition of no arbitrage. This is the first instance of our using the second main approach to asset pricing, the arbitrage approach, that is our exclusive focus in Chapters 10-13. We are pricing the complex security on the basis of our knowledge of the prices of its components. The relevance of the Arrow-Debreu pricing theory resides in the fact that it provides the prices for what can be argued are the essential components of any asset or cash flow. Effectively, the argument can be stated in the following proposition.

Proposition 10.1: If markets are complete, any complex security or any cash flow stream can be replicated as a portfolio of Arrow-Debreu securities.

If markets are complete in the sense that prices exist for all the relevant Arrow-Debreu securities, then the no-arbitrage condition implies that any complex security or cash flow can also be priced using Arrow-Debreu prices as fundamental elements. The portfolio, which is easily priced using the (Arrow-Debreu) prices of its individual components, is essentially the same good as the cash flow or the security it replicates: it pays the same amount of the consumption good in each and every state. Therefore it should bear the same price. This is a key result underlying much of what we do in the remainder of this chapter and our interest in Arrow-Debreu pricing. If this equivalence is not observed, an arbitrage opportunity, that is, the ability to make unlimited profits with no initial investment, will exist. By taking positions to benefit from the arbitrage opportunity, however, investors will expeditiously eliminate it, thereby forcing the price relationships implicitly asserted in Proposition 10.1.

To illustrate how this would work, let us consider the prior example and postulate the following set of prices: $q_1 = \$.86$, $q_2 = \$.94$, $q_3 = \$.93$, $q_4 = \$.90$, and $q_{(5,2,0,6)} = \$9.80$. At these prices, the law of one price fails, since the price of the portfolio of state claims that exactly replicates the payoff to the complex security does not coincide with the complex security's price:

$$q_{(5,2,0,6)} = \$9.80 < \$11.58 = 5q_1 + 2q_2 + 6q_4.$$

We see that the complex security is relatively undervalued vis-à-vis the state claim prices. This suggests acquiring a positive amount of the complex security while selling (short) the replicating portfolio of state claims. Table 10.1 illustrates a possible combination.

Table 10.1: An Arbitrage Portfolio

Security                            | t = 0 Cost | t = 1 payoffs: θ1 | θ2 | θ3 | θ4
Buy 1 complex security              | -$9.80     |  5 |  2 | 0 |  6
Sell short 5 (1,0,0,0) securities   | +$4.30     | -5 |  0 | 0 |  0
Sell short 2 (0,1,0,0) securities   | +$1.88     |  0 | -2 | 0 |  0
Sell short 6 (0,0,0,1) securities   | +$5.40     |  0 |  0 | 0 | -6
Net                                 | +$1.78     |  0 |  0 | 0 |  0

So the arbitrageur walks away with $1.78 while (1) having made no investment of her own wealth and (2) without incurring any future obligation (she is perfectly hedged). She will thus replicate this portfolio as much as she can. But the added demand for the complex security will, ceteris paribus, tend to increase its price, while the short sales of the state claims will depress their prices. This will continue (arbitrage opportunities exist) so long as the pricing relationships are not in perfect alignment.

Suppose now that only complex securities are traded and that there are $M$ of them ($N$ states). The following is true.

Proposition 10.2: If $M = N$, and all the $M$ complex securities are linearly independent, then (i) it is possible to infer the prices of the Arrow-Debreu state-contingent claims from the complex securities' prices and (ii) markets are effectively complete.²

The hypothesis of linear independence can be interpreted as a requirement that there exist $N$ truly different securities for completeness to be achieved. Thus it is easy to understand that if, among the $N$ complex securities available, one security, A, pays (1, 2, 3) in the three relevant states of nature, and another, B, pays (2, 4, 6), only $N - 1$ truly distinct securities are available: B does not permit any different redistribution of purchasing power across states than A permits.

¹This is stating that the equilibrium prices of two separate units of what is essentially the same good should be identical. If this were not the case, a riskless and costless arbitrage opportunity would open up: buy extremely large amounts at the low price and sell them at the high price, forcing the two prices to converge. When applied across two different geographical locations (which is not the case here: our world is a point in space), the law of one price may not hold because of transport costs rendering the arbitrage costly.
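The mispricing exploited in Table 10.1 amounts to a one-line calculation; a minimal sketch using the prices of the example above:

```python
import numpy as np

# Check the arbitrage of Table 10.1: the complex security S = (5,2,0,6)
# trades at $9.80 while its replicating portfolio of state claims costs
# more, so buying S and shorting the portfolio locks in the difference.
q = np.array([0.86, 0.94, 0.93, 0.90])   # state claim prices q1..q4
S = np.array([5.0, 2.0, 0.0, 6.0])

fair_price = q @ S                       # 5*.86 + 2*.94 + 0*.93 + 6*.90
profit = fair_price - 9.80               # riskless t = 0 profit per unit
print(round(fair_price, 2), round(profit, 2))   # 11.58 and 1.78
```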
More generally, the linear independence hypothesis requires that no one complex security can be replicated as a portfolio of some of the other complex securities. You will remember that we made the same hypothesis at the beginning of Section 6.3.

Suppose the following securities are traded: (3, 2, 0), (1, 1, 1), and (2, 0, 2), at equilibrium prices $1.00, $0.60, and $0.80, respectively. It is easy to verify that these three securities are linearly independent. We can then construct the Arrow-Debreu prices as follows. Consider, for example, the security (1, 0, 0):

$$(1, 0, 0) = w_1(3, 2, 0) + w_2(1, 1, 1) + w_3(2, 0, 2)$$

Thus,

$$1 = 3w_1 + w_2 + 2w_3$$
$$0 = 2w_1 + w_2$$
$$0 = w_2 + 2w_3$$

Solve: $w_1 = 1/3$, $w_2 = -2/3$, $w_3 = 1/3$, and $q_{(1,0,0)} = (1/3)(1.00) + (-2/3)(.60) + (1/3)(.80) = .20$.

Similarly, we could replicate (0, 1, 0) and (0, 0, 1) with portfolios $(w_1 = 0, w_2 = 1, w_3 = -1/2)$ and $(w_1 = -1/3, w_2 = 2/3, w_3 = 1/6)$, respectively, and price them accordingly. Expressed in a more general way, the reasoning just completed amounts to searching for a solution of the following system of equations:

$$\begin{pmatrix} 3 & 1 & 2 \\ 2 & 1 & 0 \\ 0 & 1 & 2 \end{pmatrix}\begin{pmatrix} w_1^1 & w_1^2 & w_1^3 \\ w_2^1 & w_2^2 & w_2^3 \\ w_3^1 & w_3^2 & w_3^3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Of course, this system has solution

$$\begin{pmatrix} 3 & 1 & 2 \\ 2 & 1 & 0 \\ 0 & 1 & 2 \end{pmatrix}^{-1}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

only if the matrix of security payoffs can be inverted, which requires that it be of full rank, or that its determinant be nonzero, or that all its lines or columns be linearly independent.

Now suppose the number of linearly independent securities is strictly less than the number of states (such as in the final, no-trade, example of Section 8.3 where we assume only a risk-free asset is available). Then the securities markets are fundamentally incomplete: there may be some assets that cannot be accurately priced.

²When we use the language "linearly dependent," we are implicitly regarding securities as $N$-vectors of payoffs.
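Equivalently, instead of solving for replicating portfolio weights one claim at a time, one can solve directly for the state prices: each security's price must equal its payoff vector valued at the state prices, so the vector $q$ solves $Zq = p$. A sketch for the example above:

```python
import numpy as np

# Recover the state prices directly: each security's price equals its
# payoff vector valued at the state prices, so q solves Z q = p.
Z = np.array([[3.0, 2.0, 0.0],    # rows: the three securities' payoffs
              [1.0, 1.0, 1.0],
              [2.0, 0.0, 2.0]])
p = np.array([1.00, 0.60, 0.80])

q = np.linalg.solve(Z, p)
print(q)                          # the three Arrow-Debreu prices
```

This is the dual of the replication computation in the text: inverting the payoff matrix once yields all the Arrow-Debreu prices simultaneously, and the same full-rank condition governs solvability.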
Furthermore, risk-sharing opportunities are fewer than if the securities markets were complete and, in general, social welfare is lower than it would be under complete markets: some gains from exchange cannot be exploited due to the lack of instruments permitting these exchanges to take place.

We conclude this section by revisiting the project valuation problem. How should we, in the light of the Arrow-Debreu pricing approach, value an uncertain cash flow stream such as

t = 0: $-I_0$; t = 1: $\widetilde{CF}_1$; t = 2: $\widetilde{CF}_2$; t = 3: $\widetilde{CF}_3$; ...; t = T: $\widetilde{CF}_T$?

This cash flow stream is akin to a complex security since it pays in multiple states of the world. Let us specifically assume that there are $N$ states at each date $t$, $t = 1, ..., T$, and let us denote by $q_{t,\theta}$ the price of the Arrow-Debreu security promising delivery of one unit of the numeraire if state $\theta$ is realized at date $t$. Similarly, let us identify as $CF_{t,\theta}$ the cash flow associated with the project in the same occurrence. Then pricing the complex security à la Arrow-Debreu means valuing the project as in Equation (10.1):

$$PV = -I_0 + \sum_{t=1}^{T}\sum_{\theta=1}^{N} q_{t,\theta}CF_{t,\theta}. \qquad (10.1)$$

Although this is a demanding procedure, it is a pricing approach that is fully general and involves no approximation. For this reason it constitutes an extremely useful reference. In a risk-free setting, the concept of state-contingent claim has a very familiar real-world counterpart. In fact, the notion of the term structure is simply a reflection of "date-contingent" claims prices. We pursue this idea in the next section.

10.3 Constructing State Contingent Claims Prices in a Risk-Free World: Deriving the Term Structure

Suppose we are considering risk-free investments and risk-free securities exclusively. In this setting, where we ignore risk, the "states of nature" that we have been speaking of simply correspond to future time periods.
This section shows that the process of computing the term structure from the prices of coupon bonds is akin to recovering Arrow-Debreu prices from the prices of complex securities. Under this interpretation, the Arrow-Debreu state-contingent claims correspond to risk-free discount bonds of various maturities, as seen in Table 10.2.

Table 10.2: Risk-Free Discount Bonds as Arrow-Debreu Securities

Current Bond Price | t=1    | t=2    | ... | t=T
-q1                | $1,000 |        |     |
-q2                |        | $1,000 |     |
...                |        |        |     |
-qT                |        |        |     | $1,000

where the cash flow of a "j-period discount bond" is just: $-q_j$ at t = 0, $1,000 at t = j, and 0 at every other date.

These are Arrow-Debreu securities because they pay off in one state (the period of maturity) and zero for all other time periods (states). In the United States at least, securities of this type are not issued for maturities longer than one year. Rather, only interest-bearing or coupon bonds are issued for longer maturities. These are complex securities by our definition: they pay off in many states of nature. But we know that if we have enough distinct complex securities we can compute the prices of the Arrow-Debreu securities even if they are not explicitly traded. So we can also compute the prices of these zero-coupon or discount bonds from the prices of the coupon or interest-bearing bonds, assuming no arbitrage opportunities in the bond market.

For example, suppose we wanted to price a 5-year discount bond coming due in November of 2009 (we view t = 0 as November 2004), and that we observe two coupon bonds being traded that mature at the same time:

(i) a 7 7/8% bond priced at 109 25/32, or $1,097.8125 per $1,000 of face value;
(ii) a 5 5/8% bond priced at 100 9/32, or $1,002.8125 per $1,000 of face value.

The coupons of these bonds are, respectively,

.07875 × $1,000 = $78.75 per year
.05625 × $1,000 = $56.25 per year³

The cash flows of these two bonds are seen in Table 10.3.
Table 10.3: Present and Future Cash Flows for Two Coupon Bonds

Bond Type    | t=0         | 1     | 2     | 3     | 4     | 5
7 7/8 bond   | -1,097.8125 | 78.75 | 78.75 | 78.75 | 78.75 | 1,078.75
5 5/8 bond   | -1,002.8125 | 56.25 | 56.25 | 56.25 | 56.25 | 1,056.25

Note that we want somehow to eliminate the interest payments (to get a discount bond) and that $\frac{78.75}{56.25} = 1.4$. So, consider the following strategy: sell one 7 7/8% bond while simultaneously buying 1.4 units of 5 5/8% bonds. The corresponding cash flows are found in Table 10.4.

Table 10.4: Eliminating Intermediate Payments

Bond              | t=0         | 1      | 2      | 3      | 4      | 5
-1 x 7 7/8 bond   | +1,097.8125 | -78.75 | -78.75 | -78.75 | -78.75 | -1,078.75
+1.4 x 5 5/8 bond | -1,403.9375 |  78.75 |  78.75 |  78.75 |  78.75 |  1,478.75
Difference        | -306.125    |  0     |  0     |  0     |  0     |  400.00

The net cash flow associated with this strategy thus indicates that the t = 0 price of a $400 payment in 5 years is $306.125. This price is implicit in the pricing of our two original coupon bonds. Consequently, the price of $1,000 in 5 years must be

$$\$306.125 \times \frac{1000}{400} = \$765.3125.$$

Alternatively, the price today of $1.00 in 5 years is $.7653125. In the notation of our earlier discussion, we have the following securities (rows corresponding to dates $\theta_1$ through $\theta_5$):

$$\begin{pmatrix} 78.75 \\ 78.75 \\ 78.75 \\ 78.75 \\ 1078.75 \end{pmatrix} \text{ and } \begin{pmatrix} 56.25 \\ 56.25 \\ 56.25 \\ 56.25 \\ 1056.25 \end{pmatrix},$$

and we consider

$$-\frac{1}{400}\begin{pmatrix} 78.75 \\ 78.75 \\ 78.75 \\ 78.75 \\ 1078.75 \end{pmatrix} + \frac{1.4}{400}\begin{pmatrix} 56.25 \\ 56.25 \\ 56.25 \\ 56.25 \\ 1056.25 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}.$$

This is an Arrow-Debreu security in the riskless context we are considering in this section. If there are enough coupon bonds with different maturities, with pairs coming due at the same time and with different coupons, we can thus construct a complete set of Arrow-Debreu securities and their implicit prices.

³In fact, interest is paid every 6 months on this sort of bond, a refinement that would double the number of periods without altering the argument in any way.
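The Table 10.4 computation reduces to a few lines; a minimal sketch:

```python
import numpy as np

# The Table 10.4 strategy: short one 7 7/8% bond, buy 1.4 of the
# 5 5/8% bond; coupons cancel (78.75 = 1.4 * 56.25), leaving a pure
# 5-year payment whose implicit price is read off the net cost.
cf_high = np.array([78.75] * 4 + [1078.75])   # 7 7/8% bond cash flows
cf_low = np.array([56.25] * 4 + [1056.25])    # 5 5/8% bond cash flows
p_high, p_low = 1097.8125, 1002.8125

net_cf = -cf_high + 1.4 * cf_low              # -> (0, 0, 0, 0, 400)
net_cost = -p_high + 1.4 * p_low              # cost today of $400 in 5 years

price_1000 = net_cost * 1000.0 / net_cf[-1]   # implied discount bond price
print(round(net_cost, 4), round(price_1000, 4))   # 306.125 and 765.3125
```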
Notice that the payoff patterns of the two bonds are fundamentally different: they are linearly independent of one another. This is a requirement, as per our earlier discussion, for being able to use them to construct a fundamentally new payoff pattern, in this case the discount bond.

Implicit in every discount bond price is a well-defined rate-of-return notion. In the case of the prior illustration, for example, the implied 5-year compound risk-free rate is given by

$$\$765.3125(1 + r_5)^5 = \$1000, \text{ or } r_5 = .0549.$$

This observation suggests an intimate relationship between discounting and Arrow-Debreu date pricing. Just as a full set of date-claim prices should allow us to price any risk-free cash flow, the rates of return implicit in the Arrow-Debreu prices must allow us to obtain the same price by discounting at the equivalent family of rates. This family of rates is referred to as the term structure of interest rates.

Definition: The term structure of interest rates $r_1, r_2, ...,$ is the family of interest rates corresponding to risk-free discount bonds of successively greater maturity; i.e., $r_i$ is the rate of return on a risk-free discount bond maturing $i$ periods from the present.

We can systematically recover the term structure from coupon bond prices provided we know the prices of coupon bonds of all different maturities. To illustrate, suppose we observe risk-free government bonds of 1-, 2-, 3-, and 4-year maturities, all selling at par⁴, with coupons, respectively, of 6%, 6.5%, 7.2%, and 9.5%. We can construct the term structure as follows:

$r_1$: Since the 1-year bond sells at par, we have $r_1 = 6\%$;

$r_2$: By definition, we know that the two-year bond is priced such that

$$1000 = \frac{65}{(1 + r_1)} + \frac{1065}{(1 + r_2)^2},$$

which, given that $r_1 = 6\%$, solves for $r_2 = 6.5113\%$.

$r_3$: is derived accordingly as the solution to

$$1000 = \frac{72}{(1 + r_1)} + \frac{72}{(1 + r_2)^2} + \frac{1072}{(1 + r_3)^3}.$$

With $r_1 = 6\%$ and $r_2 = 6.5113\%$, the solution is $r_3 = 7.2644\%$.
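The recursion just illustrated for $r_1$ through $r_3$ mechanizes directly; a sketch extended through the 4-year bond (exact solutions may differ from the rounded percentages quoted in the text in the last decimals):

```python
# Bootstrap the full term structure from the par coupon bonds of the
# text (coupons 6%, 6.5%, 7.2%, 9.5%; all prices and faces $1,000).
coupons = [60.0, 65.0, 72.0, 95.0]
face = price = 1000.0

rates = []
for n, c in enumerate(coupons, start=1):
    # value of the first n-1 coupons at the rates already recovered
    pv_coupons = sum(c / (1 + rates[t]) ** (t + 1) for t in range(n - 1))
    # the terminal payment (coupon + face) then pins down the n-year rate
    rates.append(((c + face) / (price - pv_coupons)) ** (1.0 / n) - 1.0)

print([round(r, 5) for r in rates])
```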
Finally, given these values for $r_1$ to $r_3$, $r_4$ solves

$$1000 = \frac{95}{(1 + r_1)} + \frac{95}{(1 + r_2)^2} + \frac{95}{(1 + r_3)^3} + \frac{1095}{(1 + r_4)^4}, \text{ i.e., } r_4 = 9.935\%.$$

Note that these rates are the counterpart to the date-contingent claim prices.

Table 10.5: Date Claim Prices vs. Discount Bond Prices

Price of an N-year claim                          | Analogous Discount Bond Price ($1,000 Denomination)
N = 1: q1 = $1/1.06 = $.94339                     | $943.39
N = 2: q2 = $1/(1.065113)^2 = $.88147             | $881.47
N = 3: q3 = $1/(1.072644)^3 = $.81027             | $810.27
N = 4: q4 = $1/(1.09935)^4 = $.68463              | $684.63

Of course, once we have the discount bond prices (the prices of the Arrow-Debreu claims) we can clearly price all other risk-free securities; for example, suppose we wished to price a 4-year 8% bond:

t=0: $-p_0$ (?); t=1: 80; t=2: 80; t=3: 80; t=4: 1080

and suppose also that we had available the discount bonds corresponding to Table 10.5, as in Table 10.6. Then the portfolio of discount bonds (Arrow-Debreu claims) which replicates the 8% bond cash flow is (Table 10.7): {.08 x 1-yr bond, .08 x 2-yr bond, .08 x 3-yr bond, 1.08 x 4-yr bond}.

⁴That is, selling at their issuing or face value, typically $1,000.

Table 10.6: Discount Bonds as Arrow-Debreu Claims

Bond           | Price (t=0) | t=1    | t=2    | t=3    | t=4
1-yr discount  | -$943.39    | $1,000 |        |        |
2-yr discount  | -$881.47    |        | $1,000 |        |
3-yr discount  | -$810.27    |        |        | $1,000 |
4-yr discount  | -$684.63    |        |        |        | $1,000

Table 10.7: Replicating the Coupon Bond Cash Flow

Bond                 | Price (t=0)               | t=1                        | t=2                        | t=3  | t=4
.08 x 1-yr discount  | (.08)(-943.39) = -$75.47  | $80 (80 state 1 A-D claims)|                            |      |
.08 x 2-yr discount  | (.08)(-881.47) = -$70.52  |                            | $80 (80 state 2 A-D claims)|      |
.08 x 3-yr discount  | (.08)(-810.27) = -$64.82  |                            |                            | $80  |
1.08 x 4-yr discount | (1.08)(-684.63) = -$739.40|                            |                            |      | $1,080

Thus:

$$p^{4yr,\,8\%}_{bond} = .08(\$943.39) + .08(\$881.47) + .08(\$810.27) + 1.08(\$684.63) = \$950.21.$$

Notice that we are emphasizing, in effect, the equivalence of the term structure of interest rates with the prices of date-contingent claims. Each defines the other. This is especially apparent in Table 10.5.
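The equivalence just noted can be checked directly: pricing the 8% bond as the Table 10.7 portfolio of discount bonds and discounting its cash flows at the term structure give the same number.

```python
# Price the 4-year 8% coupon bond two ways: as the Table 10.7 portfolio
# of discount bonds, and by discounting at the term structure.
q = [943.39, 881.47, 810.27, 684.63]      # $1,000 discount bond prices
r = [0.06, 0.065113, 0.072644, 0.09935]   # term structure
cf = [80.0, 80.0, 80.0, 1080.0]           # 8% coupon bond cash flows

p_claims = sum(c / 1000.0 * b for c, b in zip(cf, q))
p_discount = sum(c / (1.0 + rt) ** (t + 1)
                 for t, (rt, c) in enumerate(zip(r, cf)))
print(round(p_claims, 2), round(p_discount, 2))   # both ~ 950.21
```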
Let us now extend the above discussion to consider the evaluation of arbitrary risk-free cash flows: any such cash flow can be evaluated as a portfolio of Arrow-Debreu securities. For example, consider

t=1: 60; t=2: 25; t=3: 150; t=4: 300

We want to price this cash flow today (t = 0) using the Arrow-Debreu prices we have calculated in Table 10.5:

$$p = (\$60 \text{ at } t{=}1)\left(\frac{\$.94339 \text{ at } t{=}0}{\$1 \text{ at } t{=}1}\right) + (\$25 \text{ at } t{=}2)\left(\frac{\$.88147 \text{ at } t{=}0}{\$1 \text{ at } t{=}2}\right) + \ldots$$
$$= (\$60)\frac{1.00}{(1 + r_1)} + (\$25)\frac{1.00}{(1 + r_2)^2} + \ldots = (\$60)\frac{1.00}{1.06} + (\$25)\frac{1.00}{(1.065113)^2} + \ldots$$

The second equality underlines the fact that evaluating risk-free projects as portfolios of Arrow-Debreu state-contingent securities is equivalent to discounting at the term structure:

$$= \frac{60}{(1 + r_1)} + \frac{25}{(1 + r_2)^2} + \frac{150}{(1 + r_3)^3} + \ldots, \text{ etc.}$$

In effect, we treat a risk-free project as a risk-free coupon bond with (potentially) differing coupons. There is an analogous notion of forward prices and its more familiar counterpart, the forward rate. We discuss this extension in Appendix 10.1.

10.4 The Value Additivity Theorem

In this section we present an important result illustrating the power of the Arrow-Debreu pricing apparatus to generate one of the main lessons of the CAPM. Let there be two assets (complex securities) $a$ and $b$ with date-1 payoffs $\tilde{z}_a$ and $\tilde{z}_b$, respectively, and let their equilibrium prices be $p_a$ and $p_b$. Suppose a third asset, $c$, turns out to be a linear combination of $a$ and $b$. By that we mean that the payoff to $c$ can be replicated by a portfolio of $a$ and $b$. One can thus write

$$\tilde{z}_c = A\tilde{z}_a + B\tilde{z}_b, \text{ for some constant coefficients } A \text{ and } B. \qquad (10.2)$$

Then the proposition known as the Value Additivity Theorem asserts that the same linear relationship must hold for the date-0 prices of the three assets:

$$p_c = Ap_a + Bp_b.$$

Let us first prove this result and then discuss its implications. The proof easily follows from our discussion in Section 10.2 on the pricing of complex securities in a complete-market Arrow-Debreu world.
Indeed, for our two securities $a$ and $b$, one must have

$$p_i = \sum_s q_s z_{si}, \quad i = a, b, \qquad (10.3)$$

where $q_s$ is the price of an Arrow-Debreu security that pays one unit of consumption in state $s$ (and zero otherwise) and $z_{si}$ is the payoff of asset $i$ in state $s$. But then, the pricing of $c$ must respect the following relationships:

$$p_c = \sum_s q_s z_{sc} = \sum_s q_s(Az_{sa} + Bz_{sb}) = \sum_s (Aq_s z_{sa} + Bq_s z_{sb}) = Ap_a + Bp_b$$

The first equality follows from the fact that $c$ is itself a complex security and can thus be priced using Arrow-Debreu prices (i.e., an equation such as Equation (10.3) applies); the second directly follows from Equation (10.2); the third is a pure algebraic expansion that is feasible because our pricing relationships are fundamentally linear; and the fourth follows from Equation (10.3) again.

Now this is easy enough. Why is it interesting? Think of $a$ and $b$ as being two stocks with negatively correlated returns; we know that $c$, a portfolio of these two stocks, is much less risky than either one of them. But $p_c$ is a linear combination of $p_a$ and $p_b$. Thus, the fact that they can be combined in a less risky portfolio has implications for the pricing of the two independently riskier securities and their equilibrium returns. Specifically, it cannot be the case that $p_c$ would be high because it corresponds to a desirable, riskless, claim while $p_a$ and $p_b$ would be low because they are risky.

To see this more clearly, let us take an extreme example. Suppose that $a$ and $b$ are perfectly negatively correlated. For an appropriate choice of $A$ and $B$, say $A^*$ and $B^*$, the resulting portfolio, call it $d$, will have zero risk; i.e., it will pay a constant amount in each and every state of nature. What should the price of this riskless portfolio be? Intuitively, its price must be such that purchasing $d$ at $p_d$ will earn the riskless rate of return. But how could the risk of $a$ and $b$ be remunerated while, simultaneously, $d$ would earn the riskless rate and the Value Additivity Theorem hold?
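The chain of equalities in the proof above is immediate to verify numerically; a sketch with hypothetical payoffs and state prices:

```python
import numpy as np

# A numerical sketch of value additivity with hypothetical numbers: two
# perfectly negatively correlated assets combine into a riskless claim.
q = np.array([0.45, 0.50])            # state prices for two states
z_a = np.array([3.0, 1.0])            # asset a's payoffs
z_b = np.array([1.0, 3.0])            # asset b's payoffs
p_a, p_b = q @ z_a, q @ z_b           # prices via Equation (10.3)

A, B = 0.5, 0.5                       # the riskless combination d
z_d = A * z_a + B * z_b               # pays 2 in both states
p_via_additivity = A * p_a + B * p_b  # price by value additivity
p_direct = q @ z_d                    # price d directly with state prices
print(z_d, p_via_additivity, p_direct)
```

The two prices of $d$ coincide exactly, as the chain of equalities requires.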
The answer is that this is not possible. Therefore, there cannot be any remuneration for risk in the pricing of $a$ and $b$. The prices $p_a$ and $p_b$ must be such that the expected return on $a$ and $b$ is the riskless rate. This is true despite the fact that $a$ and $b$ are two risky assets (they do not pay the same amount in each state of nature). In formal terms, we have just asserted that the two statements of the Value Additivity Theorem, $\tilde z_d = A^* \tilde z_a + B^* \tilde z_b$ and $p_d = A^* p_a + B^* p_b$, together with the fact that $d$ is risk-free,

$$\frac{E \tilde z_d}{p_d} = 1 + r_f,$$

force

$$\frac{E \tilde z_a}{p_a} = \frac{E \tilde z_b}{p_b} = 1 + r_f.$$

What we have obtained in this very general context is a confirmation of one of the main results of the CAPM: Diversifiable risk is not priced. If risky assets $a$ and $b$ can be combined in a riskless portfolio, that is, if their risk can be diversified away, their return cannot exceed the risk-free return. Note that we have made no assumption here on utility functions or on the return expectations held by agents. On the other hand, we have explicitly assumed that markets are complete and that, consequently, each and every complex security can be priced (by arbitrage) as a portfolio of Arrow-Debreu securities. It thus behooves us to describe how Arrow-Debreu state-claim prices might actually be obtained in practice. This is the subject of the remaining sections of Chapter 10.

10.5 Using Options to Complete the Market: An Abstract Setting

Let us assume a finite number of possible future date-states indexed $i = 1, 2, ..., N$. Suppose, for a start, that three states of the world are possible at date $T = 1$, yet only one security (a stock) is traded. The single security's payoffs are as follows:

State    Payoff
θ1       1
θ2       2
θ3       3

Clearly this unique asset is not equivalent to a complete set of state-contingent claims.
Note that we can identify the payoffs with the ex post price of the security in each of the three states: the security pays 2 units of the numeraire commodity in state 2, and we decide that its price is then \$2.00. This amounts to normalizing the ex post, date 1, price of the commodity to \$1, much as we have done at date 0. On that basis, we can consider call options written on this asset with exercise prices \$1 and \$2, respectively. These securities are contracts giving the right (but not the obligation) to purchase the underlying security at prices \$1 and \$2, respectively, tomorrow. They are contingent securities in the sense that the right they entail is valuable only if the price of the underlying security exceeds the exercise price at expiration, and they are valueless otherwise. We think of the option as expiring at $T = 1$, that is, when the state of nature is revealed.5

The states-of-nature structure enables us to be specific about what these contracts effectively promise to pay. Take the call option with exercise price \$1. If state 1 is realized, that option is a right to buy at \$1 the underlying security whose value is exactly \$1. The option is said to be at the money and, in this case, the right in question is valueless. If state 2 is realized, however, the stock is worth \$2. The right to buy, at a price of \$1, something one can immediately resell for \$2 naturally has a market value of \$1. In this case, the option is said to be in the money. In other words, at $T = 1$, when the state of nature is revealed, an option is worth the difference between the value of the underlying asset and its exercise price, if this difference is positive, and zero otherwise. The complete payoff vectors of these options at expiration, across states $(θ_1, θ_2, θ_3)$, are as follows:

$$C_T([1, 2, 3]; 1) = [0, 1, 2] \quad \text{(at the money in } θ_1 \text{, in the money in } θ_2, θ_3\text{)},$$
$$C_T([1, 2, 3]; 2) = [0, 0, 1] \quad \text{(in the money in } θ_3 \text{ only)}.$$
5 In our simple two-date world there is no difference between an American option, which can be exercised at any date before the expiration date, and a European option, which can be exercised only at expiration.

In our notation, $C_T(S; K)$ is the payoff at expiration date $T$ to a call option written on security $S$ with exercise price $K$. We use $C_t(S; K)$ to denote the option's market price at time $t \le T$. We frequently drop the time subscript to simplify notation when there is no ambiguity. It remains now to convince ourselves that the three traded assets (the underlying stock and the two call options), each denoted by its payoff vector at $T$,

$$[1, 2, 3], \quad [0, 1, 2], \quad [0, 0, 1],$$

constitute a complete set of securities markets for states $(θ_1, θ_2, θ_3)$. This is so because we can use them to create all the state claims. Clearly $[0, 0, 1]$ is already present. To create $[0, 1, 0]$, observe that

$$[0, 1, 0] = w_1 [1, 2, 3] + w_2 [0, 1, 2] + w_3 [0, 0, 1],$$

where $w_1 = 0$, $w_2 = 1$, and $w_3 = -2$. The vector $[1, 0, 0]$ can be similarly created.

We have thus illustrated one of the main ideas of this chapter, and we need to discuss how general and applicable it is in more realistic settings. A preliminary issue is why trading the call options C([1,2,3];1) and C([1,2,3];2) might be the preferred approach to completing the market, relative to the alternative possibility of directly issuing the Arrow-Debreu securities [1,0,0] and [0,1,0]. In the simplified world of our example, in the absence of transaction costs, there is, of course, no advantage to creating the options markets. In the real world, however, if a new security is to be issued, its issuance must be accompanied by costly disclosure as to its characteristics; in our parlance, the issuer must disclose as much as possible about the security's payoff in the various states.
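The replication weights in the example can be recovered mechanically by solving a linear system whose columns are the traded payoff vectors. A minimal sketch:

```python
# Solving M w = e2 for the weights that replicate the state claim [0, 1, 0]
# from the stock [1, 2, 3] and the two calls [0, 1, 2] and [0, 0, 1].
import numpy as np

M = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [3.0, 2.0, 1.0]])      # columns: stock, call(K=1), call(K=2)
e2 = np.array([0.0, 1.0, 0.0])       # target Arrow-Debreu payoff

w = np.linalg.solve(M, e2)           # expect w = (0, 1, -2), as in the text
```

Because $M$ is triangular with nonzero diagonal, the system has a unique solution for every target state claim, which is exactly the statement that these three assets complete the market.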
As there may be no agreement as to what the relevant future states are, let alone what the payoffs will be, this disclosure is difficult. And if there is no consensus as to its payoff pattern (i.e., its basic structure of payoffs), investors will not want to hold it, and it will not trade. But the payoff pattern of an option on an already-traded asset is obvious and verifiable to everyone. For this reason, it is, in principle, a much less expensive new security to issue. Another way to describe the advantage of options is to observe that it is useful conceptually, but difficult in practice, to define and identify a single state of nature. It is more practical to define contracts contingent on a well-defined range of states. The fact that these states are themselves defined in terms of, or revealed via, market prices is another facet of the superiority of this type of contract.

Note that options are by definition in zero net supply; that is, in this context,

$$\sum_k C_t^k([1, 2, 3]; K) = 0,$$

where $C_t^k([1, 2, 3]; K)$ is the value of call options with exercise price $K$ held by agent $k$ at time $t \le T$. This means that there must exist a group of agents with negative positions serving as the counterparty to the subset of agents with positive holdings. We naturally interpret those agents as the agents who have written the call options.

We have illustrated the property that markets can be completed using call options. Now let us explore the generality of this result. Can call options always be used to complete the market in this way? The answer is: not necessarily. It depends on the payoffs of the underlying fundamental assets. Consider the asset paying $[2, 2, 3]$ in states $(θ_1, θ_2, θ_3)$.
For any exercise price $K$, all options written on this security must have payoffs of the form

$$C([2, 2, 3]; K) = [2 - K,\; 2 - K,\; 3 - K] \text{ if } K \le 2, \qquad C([2, 2, 3]; K) = [0,\; 0,\; 3 - K] \text{ if } 2 < K \le 3.$$

Clearly, for any $K$, the vectors $[2, 2, 3]$ and $[2 - K, 2 - K, 3 - K]$ have identical payoffs in states $θ_1$ and $θ_2$, and, therefore, they cannot be used to generate the Arrow-Debreu securities $[1, 0, 0]$ and $[0, 1, 0]$. There is no way to complete the markets with options in the case of this underlying asset. This illustrates the following truth: We cannot generally write options that distinguish between two states if the underlying assets pay identical returns in those states.

The problem just illustrated can sometimes be solved if we permit options to be written on portfolios of the basic underlying assets. Consider the case of four possible states at $T = 1$, and suppose that the only assets currently traded are $[1, 1, 2, 2]$ and $[1, 2, 1, 2]$ (payoffs in states $θ_1$ through $θ_4$). It can be shown that it is not possible, using call options, to generate a complete set of securities markets from these underlying securities alone. Consider, however, the portfolio composed of 2 units of the first asset and 1 unit of the second:

$$2\,[1, 1, 2, 2] + 1\,[1, 2, 1, 2] = [3, 4, 5, 6].$$

The portfolio pays a different return in each state of nature. Options written on the portfolio alone can thus be used to construct a complete set of traded Arrow-Debreu securities. The example illustrates a second general truth, which we state as Proposition 10.3.

Proposition 10.3: A necessary as well as sufficient condition for the creation of a complete set of Arrow-Debreu securities is that there exists a single portfolio with the property that options can be written on it and such that its payoff pattern distinguishes among all states of nature.
Going back to our last example, it is easy to see that the created portfolio and the three natural calls to be written on it,

$$[3, 4, 5, 6], \quad \underbrace{[0, 1, 2, 3]}_{K=3}, \quad \underbrace{[0, 0, 1, 2]}_{K=4}, \quad \underbrace{[0, 0, 0, 1]}_{K=5},$$

are sufficient (i.e., constitute a complete set of markets in our four-state world). Combinations of the $(K = 5)$ and $(K = 4)$ vectors can create $[0, 0, 1, 0]$; combinations of this vector and of the $(K = 5)$ and $(K = 3)$ vectors can then create $[0, 1, 0, 0]$, and so on.

Probing further, we may inquire whether the writing of calls on the underlying assets is always sufficient, or whether there are circumstances under which other types of options may be necessary. Again, suppose there are four states of nature, and consider the following set of primitive securities (payoffs in states $θ_1$ through $θ_4$):

$$[1, 0, 0, 1], \quad [0, 1, 0, 1], \quad [0, 1, 1, 1].$$

Because these assets pay either one or zero in each state, calls written on them will either replicate the asset itself or give the zero payoff vector. The writing of call options will not help because they cannot further discriminate among states. But suppose we write a put option on the first asset with exercise price 1. A put is a contract giving the right, but not the obligation, to sell an underlying security at a pre-specified exercise price on a given expiration date. The put option with exercise price 1 has positive value at $T = 1$ in those states where the underlying security is worth less than 1. The put on the first asset with exercise price \$1 thus has the payoff

$$P_T([1, 0, 0, 1]; 1) = [0, 1, 1, 0].$$

You can confirm that the securities plus the put are sufficient to allow us to construct (as portfolios of them) a complete set of Arrow-Debreu securities for the indicated four states. In general, one can prove Proposition 10.4.
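The claim that the three primitive securities plus the put span all four states amounts to a rank condition, which is easy to verify directly:

```python
# Checking that the three primitive securities plus the put P([1,0,0,1]; 1)
# span all four states, i.e., that the payoff matrix has full rank.
import numpy as np

a1 = np.array([1.0, 0.0, 0.0, 1.0])
a2 = np.array([0.0, 1.0, 0.0, 1.0])
a3 = np.array([0.0, 1.0, 1.0, 1.0])
put = np.maximum(1.0 - a1, 0.0)        # put on a1 with exercise price 1

M = np.column_stack([a1, a2, a3, put]) # states in rows, securities in columns
rank = np.linalg.matrix_rank(M)        # full rank (4) means markets complete
```

With rank 4, every Arrow-Debreu security can be obtained as `np.linalg.solve(M, e)` for the corresponding unit vector `e`.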
Proposition 10.4: If it is possible to create, using options, a complete set of traded securities, simple put and call options written on the underlying assets are sufficient to accomplish this goal. That is, portfolios of options are not required.

10.6 Synthesizing State-Contingent Claims: A First Approximation

The abstract setting of the discussion above aimed at conveying the message that options are natural instruments for completing the markets. In this section, we show how we can directly create a set of state-contingent claims, as well as their equilibrium prices, using option prices or option pricing formulae in a more realistic setting. The interest in doing so is, of course, to exploit the possibility, inherent in Arrow-Debreu prices, of pricing any complex security. In this section we first approach the problem under the hypothesis that the price of the underlying security or portfolio can take only discrete values.

Assume that a risky asset is traded with current price $S$ and future price $S_T$. It is assumed that $S_T$ discriminates across all states of nature, so that Proposition 10.3 applies; without loss of generality, we may assume that $S_T$ takes the following set of values:

$$S_1 < S_2 < ... < S_θ < ... < S_N,$$

where $S_θ$ is the price of this complex security if state $θ$ is realized at date $T$. Assume also that call options are written on this asset with all possible exercise prices, and that these options are traded. Let us also assume that $S_θ = S_{θ-1} + δ$ for every state $θ$. (This is not so unreasonable, as stocks, say, are traded at prices that can differ only in multiples of a minimum price change.)6 Throughout the discussion we will fix the time to expiration and will not denote it notationally.

Consider, for any state $\hat θ$, the following portfolio $P$:

Buy one call with $K = S_{\hat θ - 1}$,
Sell two calls with $K = S_{\hat θ}$,
Buy one call with $K = S_{\hat θ + 1}$.

At any point in time, the value of this portfolio, $V_P$, is

$$V_P = C(S, K = S_{\hat θ - 1}) - 2\,C(S, K = S_{\hat θ}) + C(S, K = S_{\hat θ + 1}).$$
To see what this portfolio represents, let us examine its payoff at expiration (refer to Figure 10.1):

Insert Figure 10.1 about here

For $S_T \le S_{\hat θ - 1}$, the value of our options portfolio, $P$, is zero. A similar situation exists for $S_T \ge S_{\hat θ + 1}$, since the loss on the two written calls with $K = S_{\hat θ}$ exactly offsets the gains on the other two calls. In state $\hat θ$, the value of the portfolio is $δ$, corresponding to the value of $C_T(S_{\hat θ}, K = S_{\hat θ - 1})$, the other two options being at or out of the money when the underlying security takes the value $S_{\hat θ}$. The payoff from such a portfolio thus equals

$$\text{Payoff to } P = \begin{cases} 0 & \text{if } S_T < S_{\hat θ} \\ δ & \text{if } S_T = S_{\hat θ} \\ 0 & \text{if } S_T > S_{\hat θ}; \end{cases}$$

in other words, it pays a positive amount $δ$ in state $\hat θ$, and nothing otherwise. That is, it replicates the payoff of the Arrow-Debreu security associated with state $\hat θ$ up to a factor (in the sense that it pays $δ$ instead of 1). Consequently, the current price of the state-$\hat θ$ contingent claim (i.e., one that pays \$1.00 if state $\hat θ$ is realized and nothing otherwise) must be

$$q_{\hat θ} = \frac{1}{δ}\left[C(S, K = S_{\hat θ - 1}) + C(S, K = S_{\hat θ + 1}) - 2\,C(S, K = S_{\hat θ})\right].$$

Even if these calls are not traded, if we identify our relevant states with the prices of some security, say the market portfolio, then we can use readily available option pricing formulas (such as the famous Black-Scholes formula) to obtain the necessary call prices and, from them, compute the price of the state-contingent claim. We explore this idea further in the next section.

6 Until recently, the minimum price change was equal to \$1/16 on the NYSE. At the end of 2000, decimal pricing was introduced, whereby prices are quoted to the nearest \$1/100 (1 cent).

10.7 Recovering Arrow-Debreu Prices from Options Prices: A Generalization

By the CAPM, the only relevant risk is systematic risk.
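As a concrete illustration of the formula for $q_{\hat θ}$, the sketch below prices the state claim associated with a strike of 10 from three neighboring call quotes (the quotes are the ones that appear later in Table 10.8, where $δ = 1$):

```python
# Price of the Arrow-Debreu claim for state theta-hat = 10 as the cost of the
# butterfly: buy K=9, sell two K=10, buy K=11, divided by delta.

call = {9: 1.670, 10: 1.045, 11: 0.604}   # C(S, K) quotes, delta = 1
delta = 1.0

q_10 = (call[9] - 2 * call[10] + call[11]) / delta
```

The resulting price, about \$0.184, is the value of \$1.00 received if and only if the underlying finishes at 10.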
We may interpret this to mean that the only states of nature that are economically or financially relevant are those that can be identified with different values of the market portfolio.7 The market portfolio thus may be selected to be the complex security on which we write options, portfolios of which will be used to replicate state-contingent payoffs. The conditions of Proposition 10.3 are satisfied, guaranteeing the possibility of completing the market structure. In Section 10.6 we considered the case for which the underlying asset assumed a discrete set of values. If the underlying asset is the market portfolio $M$, however, this cannot be strictly valid: as an index, it can assume essentially an infinite number of possible values. How is this added feature accommodated?

1. Suppose that $S_T$, the price of the underlying portfolio (we may think of it as a proxy for $M$), assumes a continuum of possible values. We want to price an Arrow-Debreu security that pays \$1.00 if $S_T \in [\hat S_T - \frac{δ}{2},\; \hat S_T + \frac{δ}{2}]$, in other words, if $S_T$ assumes any value in a range of width $δ$, centered on $\hat S_T$. We are thus identifying our states of nature with ranges of possible values for the market portfolio. Here the subscript $T$ refers to the future date at which the Arrow-Debreu security pays \$1.00 if the relevant state is realized.

2. Let us construct the following portfolio8 for some small positive number $ε > 0$:

Buy one call with $K = \hat S_T - \frac{δ}{2} - ε$,
Sell one call with $K = \hat S_T - \frac{δ}{2}$,
Sell one call with $K = \hat S_T + \frac{δ}{2}$,
Buy one call with $K = \hat S_T + \frac{δ}{2} + ε$.

7 That is, diversifiable risks have zero market value (see Chapter 7 and Section 10.4). At an individual level, personal risks are, of course, also relevant. They can, however, be insured or diversified away. Insurance contracts are often the most appropriate cover for these risks. Recall our discussion of this issue in Chapter 1.

8 The option position corresponding to this portfolio is known in the jargon as a butterfly spread.
Figure 10.2 depicts what this portfolio pays at expiration.

Insert Figure 10.2 about here

Observe that our portfolio pays $ε$ on a range of states and 0 almost everywhere else. By purchasing $1/ε$ units of the portfolio, we will mimic the payoff of an Arrow-Debreu security, except for the two small diagonal sections of the payoff line where the portfolio pays something between 0 and $ε$. This undesirable feature (since our objective is to replicate an Arrow-Debreu security) will be taken care of by using a standard mathematical trick involving taking limits.

3. Let us thus consider buying $1/ε$ units of the portfolio. The total payment, when $\hat S_T - \frac{δ}{2} \le S_T \le \hat S_T + \frac{δ}{2}$, is $\frac{1}{ε} \cdot ε \equiv 1$, for any choice of $ε$. We want to let $ε \to 0$, so as to eliminate the payments in the ranges $S_T \in (\hat S_T - \frac{δ}{2} - ε,\; \hat S_T - \frac{δ}{2})$ and $S_T \in (\hat S_T + \frac{δ}{2},\; \hat S_T + \frac{δ}{2} + ε)$. The value of $1/ε$ units of this portfolio is

$$\frac{1}{ε}\left\{\left[C(S, K = \hat S_T - \tfrac{δ}{2} - ε) - C(S, K = \hat S_T - \tfrac{δ}{2})\right] - \left[C(S, K = \hat S_T + \tfrac{δ}{2}) - C(S, K = \hat S_T + \tfrac{δ}{2} + ε)\right]\right\},$$

where a minus sign indicates that the call was sold (thereby reducing the cost of the portfolio by its sale price). On balance the portfolio will have a positive price, as it represents a claim on a positive cash flow in certain states of nature. Let us assume that the call pricing function can be differentiated with respect to the exercise price (this property holds, in particular, in the case of the Black-Scholes option pricing formula). We then have

$$\lim_{ε \to 0} \frac{1}{ε}\left\{\left[C(S, K = \hat S_T - \tfrac{δ}{2} - ε) - C(S, K = \hat S_T - \tfrac{δ}{2})\right] - \left[C(S, K = \hat S_T + \tfrac{δ}{2}) - C(S, K = \hat S_T + \tfrac{δ}{2} + ε)\right]\right\}$$
$$= -\lim_{ε \to 0} \underbrace{\frac{C(S, K = \hat S_T - \tfrac{δ}{2}) - C(S, K = \hat S_T - \tfrac{δ}{2} - ε)}{ε}}_{\le 0} + \lim_{ε \to 0} \underbrace{\frac{C(S, K = \hat S_T + \tfrac{δ}{2} + ε) - C(S, K = \hat S_T + \tfrac{δ}{2})}{ε}}_{\le 0}$$
$$= C_2(S, K = \hat S_T + \tfrac{δ}{2}) - C_2(S, K = \hat S_T - \tfrac{δ}{2}).$$

Here the subscript 2 indicates the partial derivative with respect to the second argument ($K$), evaluated at the indicated exercise prices; each difference quotient is nonpositive because a call price is decreasing in its exercise price.
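The limit argument can be checked numerically with any differentiable call pricing function. The sketch below uses a standard Black-Scholes pricer with illustrative parameters (all numbers here are assumptions, not the text's) and compares the scaled butterfly price for small $ε$ with the analytic difference of partial derivatives $C_2$:

```python
# Numerical check: the 1/eps butterfly price converges to
# C2(K = S_hat + d/2) - C2(K = S_hat - d/2), where C2 = dC/dK.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf
S0, r, d, sigma, T = 100.0, 0.05, 0.0, 0.20, 1.0   # illustrative parameters

def call(K):
    d1 = (log(S0 / K) + (r - d + sigma**2 / 2) * T) / (sigma * sqrt(T))
    return S0 * exp(-d * T) * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

def C2(K):  # analytic partial derivative of the Black-Scholes price w.r.t. K
    d1 = (log(S0 / K) + (r - d + sigma**2 / 2) * T) / (sigma * sqrt(T))
    return -exp(-r * T) * N(d1 - sigma * sqrt(T))

K_lo, K_hi, eps = 95.0, 105.0, 1e-5     # K = S_hat -/+ delta/2, small eps
port = (call(K_lo - eps) - call(K_lo)) - (call(K_hi) - call(K_hi + eps))
approx = port / eps                      # price of 1/eps units of the butterfly
exact = C2(K_hi) - C2(K_lo)
```

For $ε$ this small, `approx` and `exact` agree to several decimal places, which is precisely the content of the limit above.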
In summary, the limiting portfolio has a payoff at expiration as represented in Figure 10.3

Insert Figure 10.3 about here

and a (current) price $C_2(S, K = \hat S_T + \frac{δ}{2}) - C_2(S, K = \hat S_T - \frac{δ}{2})$ that is positive since the payoff is positive. We have thus priced an Arrow-Debreu state-contingent claim one period ahead, given that we define states of the world as coincident with ranges of a proxy for the market portfolio.

4. Suppose, for example, we have an uncertain payment with the following payoff at time $T$:

$$CF_T = \begin{cases} 0 & \text{if } S_T \notin [\hat S_T - \frac{δ}{2},\; \hat S_T + \frac{δ}{2}] \\ 50{,}000 & \text{if } S_T \in [\hat S_T - \frac{δ}{2},\; \hat S_T + \frac{δ}{2}]. \end{cases}$$

The value today of this cash flow is

$$50{,}000 \cdot \left[C_2(S, K = \hat S_T + \tfrac{δ}{2}) - C_2(S, K = \hat S_T - \tfrac{δ}{2})\right].$$

The formula we have developed is really very general. In particular, for any arbitrary values $S_T^1 < S_T^2$, the price of an Arrow-Debreu contingent claim that pays \$1.00 if the underlying market portfolio assumes a value $S_T \in [S_T^1, S_T^2]$ is given by

$$q(S_T^1, S_T^2) = C_2(S, K = S_T^2) - C_2(S, K = S_T^1). \quad (10.4)$$

We value this quantity in Box 10.1 for a particular set of parameters, making explicit use of the Black-Scholes option pricing formula.

Box 10.1: Pricing A-D Securities with Black-Scholes

For calls priced according to the Black-Scholes option pricing formula, Breeden and Litzenberger (1978) prove that

$$AD(S_T^1, S_T^2) = C_2(S, K = S_T^2) - C_2(S, K = S_T^1) = e^{-r_f T}\left[N(d_2(S_T^1)) - N(d_2(S_T^2))\right],$$

where

$$d_2(S_T^i) = \frac{\ln\left(\frac{S_0}{S_T^i}\right) + \left(r_f - δ - \frac{σ^2}{2}\right)T}{σ\sqrt{T}}.$$

In this expression, $T$ is the time to expiration, $r_f$ the annualized continuously compounded riskless rate over that period, $δ$ the continuous annualized portfolio dividend yield, $σ$ the standard deviation of the continuously compounded rate of return on the underlying index portfolio, $N(\cdot)$ the standard normal distribution, and $S_0$ the current value of the index.
Suppose the not-continuously-compounded risk-free rate is .06, the not-continuously-compounded dividend yield is .02, $T = 0.5$ years, $S_0 = 1{,}500$, $S_T^1 = 1{,}600$, $S_T^2 = 1{,}700$, and $σ = .20$, so that $r_f = \ln(1.06) = .0583$ and $δ = \ln(1.02) = .0198$; then

$$d_2(S_T^1) = \frac{\ln\frac{1500}{1600} + (.0583 - .0198 - .02)(.5)}{.1414} = \frac{-.0645 + .00925}{.1414} = -.391,$$

$$d_2(S_T^2) = \frac{\ln\frac{1500}{1700} + (.0583 - .0198 - .02)(.5)}{.1414} = \frac{-.1252 + .00925}{.1414} = -.820,$$

$$AD(S_T^1, S_T^2) = e^{-\ln(1.06)(.5)}\left[N(-.391) - N(-.820)\right] = .9713\,[.2939 - .1517] = .1381,$$

or about \$.14.

Suppose we wished to price an uncertain cash flow to be received one period from now, where a period corresponds to a duration of time $T$. What do we do? Choose several ranges of the value of the market portfolio corresponding to the various states of nature that may occur, say three states: "recession," "slow growth," and "boom," and estimate the cash flow in each of these states (see Figure 10.4). It would be unusual to have a large number of states, as the requirement of having to estimate the cash flows in each of those states is likely to exceed our forecasting abilities.

Insert Figure 10.4 about here

Suppose the cash flow estimates are, respectively, $CF_B$, $CF_{SG}$, $CF_R$, where the subscripts denote, respectively, "boom," "slow growth," and "recession." Then

$$V_{CF} = q(S_T^3, S_T^4)\,CF_B + q(S_T^2, S_T^3)\,CF_{SG} + q(S_T^1, S_T^2)\,CF_R,$$

where $S_T^1 < S_T^2 < S_T^3 < S_T^4$, and the Arrow-Debreu prices are estimated from option prices or option pricing formulas according to Equation (10.4).

We can go one (final) step further if we assume for a moment that the cash flow we wish to value can be described by a continuous function of the value of the market portfolio.
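The Box 10.1 computation can be reproduced directly. The sketch below implements the Breeden-Litzenberger expression with the same parameters; using a standard normal CDF it yields approximately 0.138, consistent with the text's "about \$.14" (small discrepancies with the rounded intermediate values quoted above are to be expected).

```python
# Breeden-Litzenberger A-D price from the Black-Scholes formula, with the
# text's parameters: S0 = 1500, sigma = 0.20, T = 0.5, states [1600, 1700],
# continuously compounded rate ln(1.06) and dividend yield ln(1.02).
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf
S0, sigma, T = 1500.0, 0.20, 0.5
r, d = log(1.06), log(1.02)

def d2(ST):
    return (log(S0 / ST) + (r - d - sigma**2 / 2) * T) / (sigma * sqrt(T))

ad = exp(-r * T) * (N(d2(1600.0)) - N(d2(1700.0)))   # price of $1 if ST in [1600, 1700]
```

Note that $d_2$ is evaluated at the lower state value first: since $d_2$ decreases in $S_T^i$, the difference of the two CDF values is positive, as a state price must be.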
In principle, for a very fine partition of the range of possible values of the market portfolio, say $\{S_1, ..., S_N\}$, where $S_i < S_{i+1}$, $S_1 = \min S_T$, and $S_N = \max S_T$, we could price the Arrow-Debreu securities that pay off in each of the $N - 1$ states defined by the partition:

$$q(S_1, S_2) = C_2(S, S_2) - C_2(S, S_1),$$
$$q(S_2, S_3) = C_2(S, S_3) - C_2(S, S_2), \text{ etc.}$$

Simultaneously, we could approximate a cash flow function $CF(S_T)$ by a function that is constant in each of these ranges of $S_T$ (a so-called "step function"), in other words, $\widehat{CF}(S_T) = \overline{CF}_i$ for $S_{i-1} \le S_T \le S_i$. For example,

$$\widehat{CF}(S_T) = \overline{CF}_i = \frac{CF(S_T = S_i) + CF(S_T = S_{i-1})}{2} \quad \text{for } S_{i-1} \le S_T \le S_i.$$

This particular approximation is represented in Figure 10.5. The value of the approximate cash flow would then be

$$V_{CF} = \sum_{i=1}^{N} \overline{CF}_i \; q(S_{i-1}, S_i) = \sum_{i=1}^{N} \overline{CF}_i \left[C_2(S, S_T = S_i) - C_2(S, S_T = S_{i-1})\right]. \quad (10.5)$$

Insert Figure 10.5 about here

Our approach is now clear. The precise value of the uncertain cash flow will be the limit of the approximate cash flows evaluated at the Arrow-Debreu prices as the norm of the partition (the size of the intervals $S_i - S_{i-1}$) tends to zero. It can be shown (and it is intuitively plausible) that the limit of Equation (10.5) as $\max_i |S_{i+1} - S_i| \to 0$ is the integral of the cash flow function multiplied by the second derivative of the call's price with respect to the exercise price; the latter is the infinitesimal counterpart to the difference in the first derivatives of the call prices entering Equation (10.4):

$$\lim_{\max_i |S_{i+1} - S_i| \to 0} \sum_{i=1}^{N} \overline{CF}_i \left[C_2(S, S_T = S_{i+1}) - C_2(S, S_T = S_i)\right] = \int CF(S_T)\, C_{22}(S, S_T)\, dS_T. \quad (10.6)$$

As a particular case of a constant cash flow stream, a risk-free bond paying \$1.00 in every state is then priced as

$$p_{rf} = \frac{1}{1 + r_f} = \int_0^\infty C_{22}(S, S_T)\, dS_T.$$

Box 10.2: Extracting Arrow-Debreu Prices from Option Prices: A Numerical Illustration

Let us now illustrate the power of the approach adopted in this and the previous section.
For that purpose, Table 10.8 [adapted from Pirkner, Weigend, and Zimmermann (1999)] starts by recording call prices, obtained from the Black-Scholes formula, on an underlying index portfolio currently valued at $S = 10$, for a range of strike prices going from $K = 7$ to $K = 13$ (columns 1 and 2). Column 3 computes the value of portfolio $P$ of Section 10.6. Given that the difference between the exercise prices is always 1 (i.e., $δ = 1$), holding exactly one unit of this portfolio replicates the \$1.00 payoff of the Arrow-Debreu security associated with $K = 10$. This is shown on the bottom line of column 7, which corresponds to $S_T = 10$. From column 3, we learn that the price of this Arrow-Debreu security, which must be equal to the value of the replicating portfolio, is \$0.184. Finally, the last two columns approximate the first and second derivatives of the call price with respect to the exercise price. In the current context this is naturally done by computing the first and second differences (the price increments and the increments of the increments as the exercise price varies) from the price data given in column 2. This is a literal application of Equation (10.4). One thus obtains the full series of Arrow-Debreu prices for states of nature identified with values of the underlying market portfolio ranging from 8 to 12, confirming that the \$0.184 price occurs when the state of nature is identified as $S_T = 10$ (or $9.5 < S_T < 10.5$).
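The second-difference computation described above takes only a few lines. The sketch below recovers the full series of Arrow-Debreu prices from the call quotes of Table 10.8:

```python
# Arrow-Debreu prices as second differences of call prices (Equation 10.4,
# discrete version), using the Table 10.8 quotes with strike spacing 1.

K = [7, 8, 9, 10, 11, 12, 13]
C = [3.354, 2.459, 1.670, 1.045, 0.604, 0.325, 0.164]

dC = [C[i + 1] - C[i] for i in range(len(C) - 1)]              # first differences
q = [round(dC[i + 1] - dC[i], 3) for i in range(len(dC) - 1)]  # states 8..12
```

The resulting prices for states 8 through 12 are 0.106, 0.164, 0.184, 0.162, and 0.118, with the 0.184 figure attached to the state $S_T = 10$, exactly as in the table.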
Table 10.8: Pricing an Arrow-Debreu State Claim

 K   C(S,K)   Cost of      Payoff if S_T =              ΔC       Δ(ΔC) = q_θ
              position      7  8  9  10  11  12  13
 7   3.354
 8   2.459                                              -0.895    0.106
 9   1.670    +1.670        0  0  0   1   2   3   4     -0.789    0.164
10   1.045    -2.090        0  0  0   0  -2  -4  -6     -0.625    0.184
11   0.604    +0.604        0  0  0   0   0   1   2     -0.441    0.162
12   0.325                                              -0.279    0.118
13   0.164                                              -0.161
     Net:      0.184        0  0  0   1   0   0   0

10.8 Arrow-Debreu Pricing in a Multiperiod Setting

The fact that the Arrow-Debreu pricing approach is static makes it most adequate for the pricing of one-period cash flows, and it is, quite naturally, in this context that most of our discussion has been framed. But as we have emphasized previously, it is formally equally appropriate for pricing multiperiod cash flows. The estimation (for instance, via option pricing formulas and the methodology introduced in the last two sections) of Arrow-Debreu prices for several periods ahead is inherently more difficult, however, and relies on more perilous assumptions than in the case of one-period-ahead prices. (This parallels the fact that the assumptions necessary to develop closed-form option pricing formulae are more questionable when they are used in the context of pricing long-term options.) Pricing long-term assets, whatever the approach adopted, requires making hypotheses to the effect that the recent past tells us something about the future, which, in ways to be defined and which vary from one model to the next, translates into the hypothesis that some form of stationarity prevails. Completing the Arrow-Debreu pricing approach with an additional stationarity hypothesis provides an interesting perspective on the pricing of multiperiod cash flows. This is the purpose of the present section.

For notational simplicity, let us first assume that the same two states of nature (ranges of value of $M$) can be realized in each period, and that all future state-contingent cash flows have been estimated. The structure of the cash flow is found in Figure 10.6.
Insert Figure 10.6 about here

Suppose also that we have estimated, using our formulae derived earlier, the values of the one-period state-contingent claims as follows:

                Tomorrow
                state 1   state 2
Today  state 1    .54       .42
       state 2    .46       .53      = q

where $q_{11}$ (= .54) is the price today of an Arrow-Debreu claim paying \$1 if state 1 (a boom) occurs tomorrow, given that we are in state 1 (boom) today. Similarly, $q_{12}$ (= .42) is the price today of an Arrow-Debreu claim paying \$1 if state 2 (recession) occurs tomorrow, given that we are in state 1 today. Note that these prices differ because the distribution of the value of $M$ tomorrow differs depending on the state today.

Now let us introduce our stationarity hypothesis. Suppose that $q$, the matrix of values, is invariant through time.9 That is, the same two states of nature describe the possible futures at all future dates, and the contingent one-period prices remain the same. This allows us to interpret powers of the $q$ matrix, $q^2, q^3, \ldots$, in a particularly useful way. Consider $q^2$ (see also Figure 10.7):

$$q^2 = \begin{pmatrix} .54 & .42 \\ .46 & .53 \end{pmatrix} \begin{pmatrix} .54 & .42 \\ .46 & .53 \end{pmatrix} = \begin{pmatrix} (.54)(.54) + (.42)(.46) & (.54)(.42) + (.42)(.53) \\ (.46)(.54) + (.53)(.46) & (.46)(.42) + (.53)(.53) \end{pmatrix}.$$

Note there are two ways to be in state 1 two periods from now, given that we are in state 1 today. Therefore, the price today of \$1.00 if state 1 occurs in two periods, given we are in state 1 today, is

$$q^2_{11} = \underbrace{(.54)(.54)}_{\substack{\text{value of } \$1 \text{ in 2 periods if state 1 occurs} \\ \text{and the intermediate state is 1}}} + \underbrace{(.42)(.46)}_{\substack{\text{value of } \$1 \text{ in 2 periods if state 1 occurs} \\ \text{and the intermediate state is 2}}}.$$

Similarly, $q^2_{22} = (.46)(.42) + (.53)(.53)$ is the price today, if today's state is 2, of \$1.00 contingent on state 2 occurring in two periods. In general, for powers $N$ of the matrix $q$, we have the following interpretation of $q^N_{ij}$: given that we are in state $i$ today, it gives the price today of \$1.00 contingent on state $j$ occurring in $N$ periods.

9 If it were not, the approach in Figure 10.7 would carry over provided we were able to compute forward Arrow-Debreu prices; in other words, the Arrow-Debreu matrix would change from date to date and would have to be time-indexed. Mathematically, the procedure described would carry over, but the information requirement would, of course, be substantially larger.
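The matrix multiplication above is easy to verify. A minimal sketch using the text's one-period price matrix:

```python
# Powers of the one-period A-D matrix q price state-contingent dollars
# two (or more) periods ahead, under the stationarity hypothesis.
import numpy as np

q = np.array([[0.54, 0.42],
              [0.46, 0.53]])

q2 = q @ q    # q2[i, j]: price today, in state i, of $1 in state j in 2 periods
q3 = q2 @ q   # and so on for longer horizons
```

Evaluating `q2` reproduces the text's figures: .4848 and .4494 in the first row, .4922 and .4741 in the second.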
Of course, if we hypothesized three states, then the Arrow-Debreu matrices would be $3 \times 3$, and so forth. How can this information be used in a capital budgeting problem? First we must estimate the cash flows. Suppose they are as outlined in Table 10.9.

Table 10.9: State-Contingent Cash Flows

 t     state 1   state 2
 1       42        65
 2       48        73
 3       60        58

Then the present values ($PV$) of the cash flows, contingent on state 1 or state 2 prevailing today, are given by

$$\begin{pmatrix} PV_1 \\ PV_2 \end{pmatrix} = q \begin{pmatrix} 42 \\ 65 \end{pmatrix} + q^2 \begin{pmatrix} 48 \\ 73 \end{pmatrix} + q^3 \begin{pmatrix} 60 \\ 58 \end{pmatrix}$$
$$= \begin{pmatrix} .54 & .42 \\ .46 & .53 \end{pmatrix}\begin{pmatrix} 42 \\ 65 \end{pmatrix} + \begin{pmatrix} .4848 & .4494 \\ .4922 & .4741 \end{pmatrix}\begin{pmatrix} 48 \\ 73 \end{pmatrix} + \begin{pmatrix} .4685 & .4418 \\ .4839 & .4580 \end{pmatrix}\begin{pmatrix} 60 \\ 58 \end{pmatrix}$$
$$= \begin{pmatrix} 49.98 \\ 53.77 \end{pmatrix} + \begin{pmatrix} 56.07 \\ 58.23 \end{pmatrix} + \begin{pmatrix} 53.74 \\ 55.59 \end{pmatrix} = \begin{pmatrix} 159.79 \\ 167.59 \end{pmatrix}.$$

This procedure can be expanded to include as many states of nature as one may wish to define. This amounts to choosing as fine a partition of the range of possible values of $M$ as one wishes. It makes no sense to construct a finer partition, however, if we have no real basis for estimating different cash flows in those states. For most practical problems, three or four states are probably sufficient. But an advantage of this method is that it forces one to think carefully about what a project's cash flow will be in each state, and what the relevant states, in fact, are. One may wonder whether this methodology implicitly assumes that the states are equally probable. That is not the case. Although the probabilities, which would reflect the likelihood of the value of $M$ lying in the various intervals, are not explicit, they are built into the prices of the state-contingent claims. We close this chapter by suggesting a way to tie the approach proposed here to our previous work in this chapter.
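The entire Table 10.9 valuation collapses to one line of matrix algebra. A sketch:

```python
# Capital budgeting with state-contingent A-D prices: PV of the Table 10.9
# cash flows, discounted with powers of the one-period price matrix q.
import numpy as np

q = np.array([[0.54, 0.42], [0.46, 0.53]])
cf = [np.array([42.0, 65.0]),   # t = 1, (state 1, state 2)
      np.array([48.0, 73.0]),   # t = 2
      np.array([60.0, 58.0])]   # t = 3

pv = sum(np.linalg.matrix_power(q, t + 1) @ cf[t] for t in range(3))
```

Evaluating `pv` gives approximately (159.79, 167.60), matching the text's $(PV_1, PV_2)$ up to rounding of the intermediate matrices.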
Risk-free cash flows are special (degenerate) examples of risky cash flows. It is thus easy to use the method of this section to price risk-free flows. The comparison with the results obtained by the method of Section 10.3 then provides a useful check of the appropriateness of the assumptions made in the present context. Consider our earlier example with Arrow-Debreu prices given by

$$q = \begin{pmatrix} .54 & .42 \\ .46 & .53 \end{pmatrix} \quad \text{(rows: state today; columns: state tomorrow).}$$

If we are in state 1 today, the price of \$1.00 in each state tomorrow (i.e., of a risk-free cash flow of \$1.00 tomorrow) is $.54 + .42 = .96$. This implies a risk-free rate of

$$1 + r_f = \frac{1.00}{.96} = 1.0416, \text{ or } 4.16\%.$$

To put it differently, $.54 + .42 = .96$ is the price of a one-period discount bond paying \$1.00 in one period, given that we are in state 1 today. More generally, we would evaluate a risk-free cash flow of 100 at each of $t = 1, 2, 3$ as

$$\begin{pmatrix} PV_1 \\ PV_2 \end{pmatrix} = q \begin{pmatrix} 100 \\ 100 \end{pmatrix} + q^2 \begin{pmatrix} 100 \\ 100 \end{pmatrix} + q^3 \begin{pmatrix} 100 \\ 100 \end{pmatrix},$$

with $q^2$ and $q^3$ as computed before. So

$$PV_1 = [.54 + .42]100 + [.4848 + .4494]100 + [.4685 + .4418]100 = [.96]100 + [.9342]100 + [.9103]100 = 280.45,$$

where .96 is the price of a one-period discount bond given state 1 today, .9342 the price of a two-period discount bond given state 1 today, and .9103 the price of a three-period discount bond given state 1 today. The $PV$ given state 2 today is computed analogously.

Now this provides us with a verification test: if the price of a discount bond using this method does not coincide with the price obtained using the approach developed in Section 10.3 (which relies on quoted coupon bond prices), then this must mean that our states are not well defined or numerous enough, or that the assumptions of the option pricing formulae used to compute Arrow-Debreu prices are inadequate.

10.9 Conclusions

This chapter has served two main purposes.
First, it has provided us with a platform to think more in depth about the all-important notion of market completeness. Our demonstration that, in principle, a portfolio of simple calls and puts written on the market portfolio might suffice to reach a complete market structure suggests the 'Holy Grail' may not be totally out of reach. Caution must be exercised, however, in interpreting the necessary assumptions. Can we indeed assume that the market portfolio – and what do we mean by the latter? – is an adequate reflection of all the economically relevant states of nature? And the time dimension of market completeness should not be forgotten. The most relevant state of nature for a Swiss resident of 40 years of age may be the possibility of a period of prolonged depression with high unemployment in Switzerland 25 years from now (i.e., when he is nearing retirement10). Extreme aggregate economic conditions would certainly be reflected in the Swiss Market Index (SMI), but options with 20-year maturities are not customarily traded. Is it because of a lack of demand (possibly meaning that our assumption as to the most relevant state is not borne out), or because the structure of the financial industry is such that the supply of securities for long horizons is deficient?11

The second part of the chapter discussed how Arrow-Debreu prices can be extracted from option prices (in the case where the relevant option is actively traded) or from option pricing formulas (in the case where it is not). This discussion helps make Arrow-Debreu securities a less abstract concept. In fact, in specific cases the detailed procedure is fully operational and may indeed be the wiser route to evaluating risky cash flows.

10 The predominant pension regime in Switzerland is a defined benefit scheme with the benefits defined as a fraction of the last salary.
The key hypotheses are similar to those we have just discussed: The relevant states of nature are adequately distinguished by the market portfolio, a hypothesis that may be deemed appropriate if the context is limited to the valuation of risky cash flows. Moreover, in the case where options are not traded, the quality of the extracted Arrow-Debreu prices depends on the appropriateness of the various hypotheses embedded in the option pricing formulas to which one has recourse. This issue has been abundantly discussed in the relevant literature.

References

Banz, R., Miller, M. (1978), "Prices for State-Contingent Claims: Some Estimates and Applications," Journal of Business 51, 653-672.

Breeden, D., Litzenberger, R.H. (1978), "Prices of State-Contingent Claims Implicit in Option Prices," Journal of Business 51, 621-651.

Pirkner, C.D., Weigend, A.S., Zimmermann, H. (1999), "Extracting Risk-Neutral Densities from Option Prices Using Mixture Binomial Trees," University of St. Gallen. Mimeographed.

Ross, S. (1976), "Options and Efficiency," Quarterly Journal of Economics 90, 75-89.

Shiller, R.J. (1993), Macro Markets: Creating Institutions for Managing Society's Largest Economic Risks, Clarendon Press, Oxford.

Varian, H. (1987), "The Arbitrage Principle in Financial Economics," Journal of Economic Perspectives 1(2), 55-72.

Appendix 10.1: Forward Prices and Forward Rates

Forward prices and forward rates correspond to the prices of (the rates of return earned by) securities to be issued in the future.

11 A forceful statement in support of a similar claim is found in Shiller (1993) (see also the conclusions to Chapter 1). For the particular example discussed here, it may be argued that shorting the SMI (Swiss Market Index) would provide the appropriate hedge. Is it conceivable to take a short SMI position with a 20-year horizon?

Let _k f_τ denote the (compounded) rate of return on a risk-free discount bond to be issued at a future date k and maturing at date k + τ.
These forward rates are defined by the equations:

(1 + r_1)(1 + _1f_1)    = (1 + r_2)^2
(1 + r_1)(1 + _1f_2)^2  = (1 + r_3)^3
(1 + r_2)^2 (1 + _2f_1) = (1 + r_3)^3, etc.

We emphasize that the forward rates are implied forward rates, in the sense that the corresponding contracts are typically not traded. However, it is feasible to lock in these forward rates; that is, to guarantee their availability in the future. Suppose we wished to lock in the one-year forward rate one year from now. This amounts to creating a new security "synthetically" as a portfolio of existing securities, and is accomplished by simply undertaking a series of long and short transactions today. For example, take as given the implied discount bond prices of Table 10.5 and consider the transactions in Table 10.10.

Table 10.10: Locking in a Forward Rate

  t =                        0        1        2
  Buy a 2-yr bond         -1,000      65     1,065
  Sell short a 1-yr bond  +1,000  -1,060
  Net cash flow                0    -995     1,065

The portfolio we have constructed has a zero cash flow at date 0, requires an investment of $995 at date 1, and pays $1,065 at date 2. The gross return on the date 1 investment is

1065/995 = 1.07035.

That this is exactly equal to the corresponding forward rate can be seen from the forward rate definition:

1 + _1f_1 = (1 + r_2)^2 / (1 + r_1) = (1.065163)^2 / 1.06 = 1.07035.

Let us scale back the previous transactions to create a $1,000 payoff for the forward security. This amounts to multiplying all of the indicated transactions by 1000/1065 = .939.

Table 10.11: Creating a $1,000 Payoff

  t =                              0        1        2
  Buy .939 x 2-yr bonds          -939     61.0     1,000
  Sell short .939 x 1-yr bonds   +939  -995.34
  Net cash flow                     0  -934.34     1,000

This price ($934.34) is the no-arbitrage price of this forward bond, no arbitrage in the sense that if there were any other contract calling for the delivery of such a bond at a price different from $934.34, an arbitrage opportunity would exist.12

12 The approach of this section can, of course, be generalized to more distant forward rates.
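The lock-in of Tables 10.10 and 10.11 can be sketched numerically (a minimal illustration using the chapter's coupon bonds, a 6% one-year and a 6.5% two-year bond, both at par):

```python
# Forward-rate lock-in: buy one 2-yr bond, short one 1-yr bond.
# Net cash flow at t = 0 is zero by construction.

t1_net = 65 - 1060      # t = 1: coupon received minus short-sale repayment
t2_net = 1065           # t = 2: principal plus coupon on the 2-yr bond

gross_forward = t2_net / (-t1_net)   # 1065 / 995, about 1.07035
implied = (1.065163 ** 2) / 1.06     # (1 + r2)^2 / (1 + r1), same value

scale = 1000 / 1065                  # rescale the t = 2 payoff to $1,000
forward_bond_price = -t1_net * scale
# about 934.3; Table 10.11's 934.34 rounds the intermediate flows first
```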
Chapter 11: The Martingale Measure: Part I

11.1 Introduction

The theory of risk-neutral valuation reviewed in the present chapter proposes yet another way to tackle the valuation problem.1 Rather than modify the denominator (the discount factor) to take account of the risky nature of a cash flow to be valued, or the numerator (by transforming the expected cash flows into their certainty equivalents), risk-neutral valuation simply corrects the probabilities with respect to which the expectation of the future cash flows is taken. This is done in such a way that discounting at the risk-free rate is legitimate. It is thus a procedure by which an asset valuation problem is transformed into one in which the asset's expected cash flow, computed now with respect to a new set of risk-neutral probabilities, can be discounted at the risk-free rate. The risk-neutral valuation methodology thus places an arbitrary valuation problem into a context in which all fairly priced assets earn the risk-free rate.

Importantly, the Martingale pricing theory or, equivalently, the theory of risk-neutral valuation, is founded on preference-free pure arbitrage principles. That is, it is free of the structural assumptions on preferences, expectations, and endowments that make the CAPM and the CCAPM so restrictive. In this respect, the present chapter illustrates how far one can go in pricing financial assets while abstracting from the usual structural assumptions.

Risk-neutral probability distributions naturally assume a variety of forms, depending on the choice of setting. We first illustrate them in the context of a well-understood, finite-time Arrow-Debreu complete markets economy. This is not the context in which the idea is most useful, but it is the one from which the basic intuition can be most easily understood.
In addition, this strategy serves to clarify the very tight relationship between Arrow-Debreu pricing and Martingale pricing despite the apparent differences in terminology and perspectives.

11.2 The Setting and the Intuition

Our setting for these preliminary discussions is the particularly simple one with which we are now long familiar. There are two dates, t = 0 and t = 1. At date t = 1, any one of j = 1, 2, ..., J possible states of nature can be realized; denote the jth state by θ_j and its objective probability by π_j. We assume π_j > 0 for all θ_j.

Securities are competitively traded in this economy. There is a risk-free security that pays a fixed return r_f; its period t price is denoted by q^b(t). By convention, we customarily assume q^b(0) = 1, and its price at date 1 is q^b(1) ≡ q^b(θ_j, 1) = (1 + r_f), for all states θ_j. Since the date 1 price of the security is (1 + r_f) in any state, we can as well drop the first argument in the pricing function indicating the state in which the security is valued.2

Also traded are N fundamental risky securities, indexed i = 1, 2, ..., N, which we think of as stocks. The period t = 0 price of the ith such security is represented as q_i^e(0). In period t = 1 its contingent payoff, given that state θ_j is realized, is given by q_i^e(θ_j, 1).3 It is also assumed that investors may hold any linear combination of the fundamental risk-free and risky securities. No assumption is made, however, regarding the number of securities that may be linearly independent vis-à-vis the number of states of nature: The securities market may or may not be complete. Neither is there any mention of agents' preferences. Otherwise the setting is standard Arrow-Debreu.

1 The theory of risk-neutral valuation was first developed by Harrison and Kreps (1979). Pliska (1997) provides an excellent review of the notion in discrete time. The present chapter is based on his presentation.
Let S denote the set of all fundamental securities (the stocks and the bond) and linear combinations thereof. For this setting, the existence of a set of risk-neutral probabilities or, in more customary usage, a risk-neutral probability measure, effectively means the existence of a set of state probabilities, π_j^RN > 0, j = 1, 2, ..., J, such that for each and every fundamental security i = 1, 2, ..., N,

q_i^e(0) = (1/(1 + r_f)) E_π^RN (q_i^e(θ, 1)) = (1/(1 + r_f)) Σ_{j=1}^{J} π_j^RN q_i^e(θ_j, 1)    (11.1)

(the analogous relationship automatically holds for the risk-free security). To gain some intuition as to what might be necessary, at a minimum, to guarantee the existence of such probabilities, first observe that in our setting the π_j^RN represent strictly positive numbers that must satisfy a large system of equations of the form

q_i^e(0) = π_1^RN q_i^e(θ_1, 1)/(1 + r_f) + ... + π_J^RN q_i^e(θ_J, 1)/(1 + r_f),  i = 1, 2, ..., N,    (11.2)

together with the requirement that π_j^RN > 0 for all j and Σ_{j=1}^{J} π_j^RN = 1.4

Such a system most certainly will not have a solution if there exist two fundamental securities, s and k, with the same t = 0 price, q_s^e(0) = q_k^e(0), for which one of them, say k, pays as much as s in every state, and strictly more in at least one state; in other words,

q_k^e(θ_j, 1) ≥ q_s^e(θ_j, 1) for all j, and q_k^e(θ_ĵ, 1) > q_s^e(θ_ĵ, 1)    (11.3)

for at least one ĵ. The Equations (11.2) corresponding to securities s and k would, for any set {π_j^RN : j = 1, 2, ..., J}, have the same left-hand sides,

2 In this chapter, it will be useful for the clarity of exposition to alter some of our previous notational conventions. One of the reasons is that we will want, symmetrically for all assets, to distinguish between their price at date 0 and their price at date 1 under any given state θ_j.
3 In the parlance and notation of Chapter 8, q_i^e(θ_j, 1) is the cash flow associated with security i if state θ_j is realized, CF_i(θ_j).
yet different right-hand sides, implying no solution to the system. But two such securities cannot themselves be consistently priced because, together, they constitute an arbitrage opportunity: Short one unit of security s, long one unit of security k, and pocket the difference q_k^e(θ_ĵ, 1) − q_s^e(θ_ĵ, 1) > 0 if state ĵ occurs; replicate the transaction many times over. These remarks suggest, therefore, that the existence of a risk-neutral measure is, in some intimate way, related to the absence of arbitrage opportunities in the financial markets. This is, in fact, the case, but first some notation, definitions, and examples are in order.

11.3 Notation, Definitions, and Basic Results

Consider a portfolio, P, composed of n_P^b risk-free bonds and n_P^i units of risky security i, i = 1, 2, ..., N. No restrictions will be placed on n_P^b, n_P^i: Short sales are permitted (they can, therefore, take negative values), and fractional shareholdings are acceptable. The value of this portfolio at t = 0, V_P(0), is given by

V_P(0) = n_P^b q^b(0) + Σ_{i=1}^{N} n_P^i q_i^e(0),    (11.4)

while its value at t = 1, given that state θ_j is realized, is

V_P(θ_j, 1) = n_P^b q^b(1) + Σ_{i=1}^{N} n_P^i q_i^e(θ_j, 1).    (11.5)

With this notation we are now in a position to define our basic concepts.

Definition 11.1: A portfolio P in S constitutes an arbitrage opportunity provided the following conditions are satisfied:

(i) V_P(0) = 0,
(ii) V_P(θ_j, 1) ≥ 0, for all j ∈ {1, 2, ..., J},
(iii) V_P(θ_ĵ, 1) > 0, for at least one ĵ ∈ {1, 2, ..., J}.    (11.6)

This is the standard sense of an arbitrage opportunity: With no initial investment and no possible losses (thus no risk), a profit can be made in at least one state. Our second crucial definition is Definition 11.2.

4 Compare this system of equations with those considered in Section 10.2 when extracting Arrow-Debreu prices from a complete set of prices for complex securities.
Definition 11.2: A probability measure {π_j^RN}_{j=1}^{J} defined on the set of states θ_j, j = 1, 2, ..., J, is said to be a risk-neutral probability measure if

(i) π_j^RN > 0, for all j = 1, 2, ..., J, and
(ii) q_i^e(0) = E_π^RN [q̃_i^e(θ, 1)] / (1 + r_f),    (11.7)

for all fundamental risky securities i = 1, 2, ..., N in S.

Both elements of this definition are crucial. Not only must each individual security be priced equal to the present value of its expected payoff, the latter computed using the risk-neutral probabilities (and thus the same must also be true of portfolios of them), but these probabilities must also be strictly positive. To find them, if they exist, it is necessary only to solve the system of equations implied by part (ii) of the risk-neutral probability definition, Equation (11.7). Consider Examples 11.1 through 11.4.

Example 11.1: There are two periods and two fundamental securities, a stock and a bond, with prices and payoffs presented in Table 11.1.

Table 11.1: Fundamental Securities for Example 11.1

  Period t = 0 Prices       Period t = 1 Payoffs
                                           θ_1    θ_2
  q^b(0): 1                 q^b(1):        1.1    1.1
  q^e(0): 4                 q^e(θ_j, 1):     3      7

By the definition of a risk-neutral probability measure, it must be the case that, simultaneously,

4 = π_1^RN (3/1.1) + π_2^RN (7/1.1)
1 = π_1^RN + π_2^RN.

Solving this system of equations, we obtain π_1^RN = .65, π_2^RN = .35. For future reference note that the fundamental securities in this example define a complete set of financial markets for this economy, and that there are clearly no arbitrage opportunities among them.

Example 11.2: Consider next an analogous economy with three possible states of nature, and three securities, as found in Table 11.2.
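Example 11.1 can be solved in a couple of lines (a sketch using hand-coded substitution; any linear solver would do equally well):

```python
# Example 11.1: solve  pi1*3/1.1 + pi2*7/1.1 = 4  with  pi1 + pi2 = 1.

rf, price, payoffs = 0.1, 4.0, (3.0, 7.0)

# Substitute pi2 = 1 - pi1 into the pricing equation and solve for pi1:
# price*(1 + rf) = pi1*payoffs[0] + (1 - pi1)*payoffs[1]
pi1 = (payoffs[1] - price * (1 + rf)) / (payoffs[1] - payoffs[0])
pi2 = 1 - pi1

assert pi1 > 0 and pi2 > 0   # strict positivity: a genuine risk-neutral measure

# The stated solution of Example 11.2, (.3, .6, .1), also checks out:
p = (0.3, 0.6, 0.1)
assert abs((3 * p[0] + 2 * p[1] + 1 * p[2]) / 1.1 - 2) < 1e-9
assert abs((1 * p[0] + 4 * p[1] + 6 * p[2]) / 1.1 - 3) < 1e-9
```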
Table 11.2: Fundamental Securities for Example 11.2

  Period t = 0 Prices       Period t = 1 Payoffs
                                             θ_1    θ_2    θ_3
  q^b(0):   1               q^b(1):          1.1    1.1    1.1
  q_1^e(0): 2               q_1^e(θ_j, 1):     3      2      1
  q_2^e(0): 3               q_2^e(θ_j, 1):     1      4      6

The relevant system of equations is now

2 = π_1^RN (3/1.1) + π_2^RN (2/1.1) + π_3^RN (1/1.1)
3 = π_1^RN (1/1.1) + π_2^RN (4/1.1) + π_3^RN (6/1.1)
1 = π_1^RN + π_2^RN + π_3^RN.

The solution to this set of equations,

π_1^RN = .3, π_2^RN = .6, π_3^RN = .1,

satisfies the requirements of a risk-neutral measure. By inspection we again observe that this financial market is complete, and that there are no arbitrage opportunities among the three securities.

Example 11.3: To see what happens when the financial markets are incomplete, consider the securities in Table 11.3.

Table 11.3: Fundamental Securities for Example 11.3

  Period t = 0 Prices       Period t = 1 Payoffs
                                             θ_1    θ_2    θ_3
  q^b(0):   1               q^b(1):          1.1    1.1    1.1
  q_1^e(0): 2               q_1^e(θ_j, 1):     1      2      3

For this example the relevant system is

2 = π_1^RN (1/1.1) + π_2^RN (2/1.1) + π_3^RN (3/1.1)
1 = π_1^RN + π_2^RN + π_3^RN.

Because this system is under-determined, there will be many solutions. Without loss of generality, first solve for π_2^RN and π_3^RN in terms of π_1^RN:

2.2 − π_1^RN = 2π_2^RN + 3π_3^RN
1 − π_1^RN = π_2^RN + π_3^RN,

which yields the solution π_3^RN = .2 + π_1^RN and π_2^RN = .8 − 2π_1^RN.

In order for a triple (π_1^RN, π_2^RN, π_3^RN) to simultaneously solve this system of equations, while also satisfying the strict positivity requirement of risk-neutral probabilities, the following inequalities must hold:

π_1^RN > 0
π_2^RN = .8 − 2π_1^RN > 0
π_3^RN = .2 + π_1^RN > 0.

By the second inequality π_1^RN < .4, and by the third π_1^RN > −.2. In order that all probabilities be strictly positive, it must, therefore, be the case that

0 < π_1^RN < .4,

with π_2^RN and π_3^RN given by the indicated equalities.
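The one-parameter family just derived can be checked numerically: every admissible member prices both securities of Example 11.3 identically (a minimal sketch; the three test values of lambda are arbitrary choices of ours):

```python
# Example 11.3: each triple (lam, .8 - 2*lam, .2 + lam), 0 < lam < .4,
# is a valid risk-neutral measure, and all of them give the stock price 2.

def stock_price(lam):
    pi = (lam, 0.8 - 2 * lam, 0.2 + lam)
    assert all(p > 0 for p in pi)        # strict positivity
    assert abs(sum(pi) - 1) < 1e-12      # probabilities sum to one
    return (1 * pi[0] + 2 * pi[1] + 3 * pi[2]) / 1.1

prices = [stock_price(lam) for lam in (0.1, 0.2, 0.3)]
```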
In an incomplete market, therefore, there appear to be many risk-neutral probability sets: any triple (π_1^RN, π_2^RN, π_3^RN) with

(π_1^RN, π_2^RN, π_3^RN) ∈ {(λ, .8 − 2λ, .2 + λ) : 0 < λ < .4}

serves as a risk-neutral probability measure for this economy.

Example 11.4: Lastly, we may as well see what happens if the set of fundamental securities contains an arbitrage opportunity (see Table 11.4).

Table 11.4: Fundamental Securities for Example 11.4

  Period t = 0 Prices       Period t = 1 Payoffs
                                             θ_1    θ_2    θ_3
  q^b(0):   1               q^b(1):          1.1    1.1    1.1
  q_1^e(0): 2               q_1^e(θ_j, 1):     2      3      1
  q_2^e(0): 2.5             q_2^e(θ_j, 1):     4      5      3

Any attempt to solve the system of equations defining the risk-neutral probabilities fails in this case: There is no solution. Notice also the implicit arbitrage opportunity: Risky security 2 dominates a portfolio of one unit of the risk-free security and one unit of risky security 1, yet it costs less. It is also possible to have a solution in the presence of arbitrage. In that case, however, at least one of the solution probabilities will be zero, disqualifying the set for the risk-neutral designation.

Together with our original intuition, these examples suggest that arbitrage opportunities are incompatible with the existence of a risk-neutral probability measure. This is the substance of the first main result.

Proposition 11.1: Consider the two-period setting described earlier in this chapter. Then there exists a risk-neutral probability measure on S if and only if there are no arbitrage opportunities among the fundamental securities.

Proposition 11.1 tells us that, provided the condition of the absence of arbitrage opportunities characterizes financial markets, our ambition to use distorted, risk-neutral probabilities to compute expected cash flows and discount at the risk-free rate has some legitimacy! Note, however, that the proposition admits the possibility that there may be many such measures, as in Example 11.3.
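The no-solution claim of Example 11.4 can be verified by elimination (a sketch of the arithmetic; eliminating π_3 from each risky-security equation against the bond equation yields two incompatible values for π_1 + 2π_2):

```python
# Example 11.4: the pricing equations are inconsistent.
# Security 1:  2*pi1 + 3*pi2 + 1*pi3 = 2.0 * 1.1,  with pi1 + pi2 + pi3 = 1.
from_sec1 = 2.0 * 1.1 - 1       # subtract the bond equation: pi1 + 2*pi2 = 1.2

# Security 2:  4*pi1 + 5*pi2 + 3*pi3 = 2.5 * 1.1
from_sec2 = 2.5 * 1.1 - 3       # subtract 3x the bond equation: pi1 + 2*pi2 = -0.25

inconsistent = abs(from_sec1 - from_sec2) > 1e-9   # True: no measure exists
```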
Proposition 11.1 also provides us, in principle, with a method for testing whether a set of fundamental securities contains an arbitrage opportunity: If the system of equations in (11.7)(ii) has no solution probability vector in which all the terms are strictly positive, an arbitrage opportunity is present. Unless we are highly confident of the actual states of nature and the payoffs to the various fundamental securities in those states, however, this observation is of limited use. But even for a very large number of securities it is easy to check computationally.

Although we have calculated the risk-neutral probabilities with respect to the prices and payoffs of the fundamental securities only, the analogous relationship must hold, in the absence of arbitrage opportunities, for arbitrary portfolios in S (all linear combinations of the fundamental securities). This result is formalized in Proposition 11.2.

Proposition 11.2: Suppose the set of securities S is free of arbitrage opportunities. Then for any portfolio P̂ in S,

V_P̂(0) = (1/(1 + r_f)) E_π^RN [Ṽ_P̂(θ, 1)],    (11.8)

for any risk-neutral probability measure π^RN on S.

Proof: Let P̂ be an arbitrary portfolio in S, composed of n_P̂^b bonds and n_P̂^i shares of fundamental risky asset i. In the absence of arbitrage, P̂ must be priced equal to the value of its constituent securities; in other words, for any risk-neutral probability measure π^RN,

V_P̂(0) = n_P̂^b q^b(0) + Σ_{i=1}^{N} n_P̂^i q_i^e(0)
        = n_P̂^b E_π^RN [q^b(1)/(1 + r_f)] + Σ_{i=1}^{N} n_P̂^i E_π^RN [q̃_i^e(θ, 1)/(1 + r_f)]
        = E_π^RN [(n_P̂^b q^b(1) + Σ_{i=1}^{N} n_P̂^i q̃_i^e(θ, 1))/(1 + r_f)]
        = (1/(1 + r_f)) E_π^RN [Ṽ_P̂(θ, 1)].

Proposition 11.2 is merely a formalization of the obvious fact that if every security in the portfolio is priced equal to the present value, discounted at r_f, of its expected payoffs computed with respect to the risk-neutral probabilities, the same must be true of the portfolio itself.
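Proposition 11.2 can be illustrated numerically with the Example 11.1 data (a minimal sketch; the portfolio positions are arbitrary choices of ours, not from the text):

```python
# An arbitrary portfolio of the Example 11.1 securities is priced by the
# discounted risk-neutral expectation of its payoff.

rf, pi = 0.1, (0.65, 0.35)   # risk-neutral measure found in Example 11.1
n_b, n_e = 2.0, -3.0         # arbitrary bond and stock positions (short sale)

V0 = n_b * 1 + n_e * 4                            # cost of the portfolio
payoff = [n_b * 1.1 + n_e * s for s in (3, 7)]    # state payoffs at t = 1
rn_value = sum(p * v for p, v in zip(pi, payoff)) / (1 + rf)
```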
This follows from the linearity of the expectations operator and the fact that the portfolio is valued as the sum total of its constituent securities, which must be the case in the absence of arbitrage opportunities. A multiplicity of risk-neutral measures on S does not compromise this conclusion in any way, because each of them assigns the same value to the fundamental securities and thus, via Equation (11.8), to the portfolio itself. For completeness, we note that a form of converse to Proposition 11.2 is also valid.

Proposition 11.3: Consider an arbitrary period t = 1 payoff x̃(θ, 1) and let M represent the set of all risk-neutral probability measures on the set S. Assume S contains no arbitrage opportunities. If

(1/(1 + r_f)) E_π^RN [x̃(θ, 1)] = (1/(1 + r_f)) E_π̂^RN [x̃(θ, 1)] for any π^RN, π̂^RN ∈ M,

then there exists a portfolio in S with the same t = 1 payoff as x̃(θ, 1).

It would be good to be able to dispense with the complications attendant to multiple risk-neutral probability measures on S. When this is possible is the subject of Section 11.4.

11.4 Uniqueness

Examples 11.1 and 11.2 both possessed unique risk-neutral probability measures. They were also complete markets models. This illustrates an important general proposition.

Proposition 11.4: Consider a set of securities S without arbitrage opportunities. Then S is complete if and only if there exists exactly one risk-neutral probability measure.

Proof: Let us prove one side of the proposition, as it is particularly revealing. Suppose S is complete and there were two risk-neutral probability measures, {π_j^RN : j = 1, 2, ..., J} and {π̂_j^RN : j = 1, 2, ..., J}. Then there must be at least one state ĵ for which π_ĵ^RN ≠ π̂_ĵ^RN. Since the market is complete, one must be able to construct a portfolio P in S such that V_P(0) > 0, and

V_P(θ_j, 1) = 1 if j = ĵ,
V_P(θ_j, 1) = 0 if j ≠ ĵ.

This is simply the statement of the existence of an Arrow-Debreu security associated with θ_ĵ.
But then {π_j^RN : j = 1, 2, ..., J} and {π̂_j^RN : j = 1, 2, ..., J} cannot both be risk-neutral measures since, by Proposition 11.2,

V_P(0) = (1/(1 + r_f)) E_π^RN [Ṽ_P(θ, 1)] = π_ĵ^RN / (1 + r_f)
       ≠ π̂_ĵ^RN / (1 + r_f) = (1/(1 + r_f)) E_π̂^RN [Ṽ_P(θ, 1)] = V_P(0),

a contradiction. Thus, there cannot be more than one risk-neutral probability measure in a complete market economy. We omit a formal proof of the other side of the proposition. Informally, if the market is not complete, then the fundamental securities do not span the space. Hence, the system of equations in (11.7)(ii) contains more unknowns than equations, yet the equations are all linearly independent (no arbitrage). There must be a multiplicity of solutions and hence a multiplicity of risk-neutral probability measures.

Concealed in the proof of Proposition 11.4 is an important observation: The price of an Arrow-Debreu security that pays 1 unit of payoff if event θ_ĵ is realized, and nothing otherwise, must be π_ĵ^RN / (1 + r_f), the present value of the corresponding risk-neutral probability. In general,

q_j(0) = π_j^RN / (1 + r_f),

where q_j(0) is the t = 0 price of a state claim paying 1 if and only if state θ_j is realized. Provided the financial market is complete, risk-neutral valuation is nothing more than valuing an uncertain payoff in terms of the value of a replicating portfolio of Arrow-Debreu claims.

Notice, however, that we thus identify the all-important Arrow-Debreu prices without having to impose any of the economic structure of Chapter 8; in particular, knowledge of the agents' preferences is not required. This approach can be likened to describing the Arrow-Debreu pricing theory from the perspective of Proposition 10.2. It is possible, and less restrictive, to limit our inquiry to extracting Arrow-Debreu prices from the prices of a (complete) set of complex securities and to proceed from there to price arbitrary cash flows.
In the absence of further structure, nothing can be said, however, about the determinants of Arrow-Debreu prices (or risk-neutral probabilities). Let us illustrate with the data of our second example. There we identified the unique risk-neutral measure to be

π_1^RN = .3, π_2^RN = .6, π_3^RN = .1.

Together with r_f = .1, these values imply that the Arrow-Debreu security prices must be

q_1(0) = .3/1.1 = .27, q_2(0) = .6/1.1 = .55, q_3(0) = .1/1.1 = .09.

Conversely, given a set of Arrow-Debreu claims with strictly positive prices, we can generate the corresponding risk-neutral probabilities and the risk-free rate. As noted in earlier chapters, the period zero price of a risk-free security (one that pays one unit of the numeraire in every date t = 1 state) in this setting is given by

p_rf = Σ_{j=1}^{J} q_j(0), and thus (1 + r_f) = 1/p_rf = 1 / Σ_{j=1}^{J} q_j(0).

We define the risk-neutral probabilities {π_j^RN} according to

π_j^RN = q_j(0) / Σ_{j=1}^{J} q_j(0).    (11.9)

Clearly π_j^RN > 0 for each state j (since q_j(0) > 0 for every state) and, by construction, Σ_{j=1}^{J} π_j^RN = 1. As a result, the set {π_j^RN} qualifies as a risk-neutral probability measure.

Referring now to the example developed in Section 8.3, let us recall that we had found a complete set of Arrow-Debreu prices to be q_1(0) = .24, q_2(0) = .3; this means, in turn, that the unique risk-neutral measure for the economy there described is

π_1^RN = .24/.54 = .444, π_2^RN = .3/.54 = .556.

For complete markets we see that the relationship between strictly positively priced state claims and the risk-neutral probability measure is indeed an intimate one: each implies the other. Since, in the absence of arbitrage possibilities, there can exist only one set of state claim prices, and thus only one risk-neutral probability measure, Proposition 11.4 is reconfirmed.

11.5 Incompleteness

What about the case in which S is an incomplete set of securities?
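The conversion in Equation (11.9) is a one-liner in code (a minimal sketch using the Section 8.3 state prices):

```python
# From Arrow-Debreu prices to the risk-free rate and the risk-neutral
# measure, per Equation (11.9).

q = [0.24, 0.30]                  # state-claim prices from Section 8.3
p_rf = sum(q)                     # price of the riskless discount bond: .54
rf = 1 / p_rf - 1                 # implied risk-free rate
pi_rn = [qj / p_rf for qj in q]   # risk-neutral probabilities: .444 and .556
```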
By Proposition 11.4 there will be a multiplicity of risk-neutral probabilities, but these all assign the same value to the elements of S (Proposition 11.2). Consider, however, a t = 1 bounded state-contingent payoff vector x̃(θ, 1) that does not coincide with the payoff to any portfolio in S. By Proposition 11.3, different risk-neutral probability measures will then assign different values to this payoff: essentially, its price is not well defined. It is possible, however, to establish arbitrage bounds on the value of this claim. For any risk-neutral probability measure π^RN defined on S, consider the following quantities:

H_x = inf { E_π^RN [Ṽ_P(θ, 1)] / (1 + r_f) : V_P(θ_j, 1) ≥ x(θ_j, 1), ∀ j = 1, 2, ..., J, and P ∈ S }
L_x = sup { E_π^RN [Ṽ_P(θ, 1)] / (1 + r_f) : V_P(θ_j, 1) ≤ x(θ_j, 1), ∀ j = 1, 2, ..., J, and P ∈ S }    (11.10)

In these evaluations we don't care which risk-neutral measure is used, because any one of them gives identical valuations for all portfolios in S. Since, for some γ, γq^b(1) > x(θ_j, 1) for all j, H_x is bounded above by γq^b(0) and hence is well defined (an analogous comment applies to L_x). The claim is that the no-arbitrage price of x, q^x(0), lies in the range

L_x ≤ q^x(0) ≤ H_x.

To see why this must be so, suppose that q^x(0) > H_x and let P* be a portfolio in S for which

q^x(0) > V_P*(0) > H_x, and V_P*(θ_j, 1) ≥ x(θ_j, 1), for all θ_j, j = 1, 2, ..., J.    (11.11)

We know that such a P* exists for the following reason. The set S_x = {P : P ∈ S, V_P(θ_j, 1) ≥ x(θ_j, 1), for all j = 1, 2, ..., J} is closed; hence there is a P̂ in S_x such that E_π^RN [Ṽ_P̂(θ, 1)] / (1 + r_f) = H_x. By the continuity of the expectations operator, we can find a λ > 1 such that λP̂ is in S_x 5 and

q^x(0) > (1/(1 + r_f)) E_π^RN [Ṽ_λP̂(θ, 1)] = λ (1/(1 + r_f)) E_π^RN [Ṽ_P̂(θ, 1)] = λH_x > H_x.

Since λ > 1, V_λP̂(θ_j, 1) > V_P̂(θ_j, 1) ≥ x(θ_j, 1) for all j; let P* = λP̂. Now the arbitrage argument: Sell the security with title to the cash flow x̃(θ, 1), and buy the portfolio P*.
At time t = 0 you receive q^x(0) − V_P*(0) > 0, while at time t = 1 the cash flow from the portfolio, by Equation (11.11), fully covers the obligation under the short sale in every state; in other words, there is an arbitrage opportunity. An analogous argument demonstrates that L_x ≤ q^x(0). In some cases it is readily possible to solve for these bounds.

Example 11.5: Revisit, for example, our earlier Example 11.3, and consider the payoff

x̂(θ_j, 1):   θ_1: 0,   θ_2: 0,   θ_3: 1.

This security is most surely not in the span of the securities (1.1, 1.1, 1.1) and (1, 2, 3), a fact that can be confirmed by observing that the system of equations implied by equating (0, 0, 1) = a(1.1, 1.1, 1.1) + b(1, 2, 3), in other words, the system

0 = 1.1a + b
0 = 1.1a + 2b
1 = 1.1a + 3b,

has no solution. But any portfolio in S can be expressed as a linear combination of (1.1, 1.1, 1.1) and (1, 2, 3) and thus must be of the form

a(1.1, 1.1, 1.1) + b(1, 2, 3) = (1.1a + b, 1.1a + 2b, 1.1a + 3b)

for some real numbers a, b. We also know that in computing H_x, L_x any risk-neutral measure can be employed. Recall that we had identified the solutions of Example 11.3 to be

(π_1^RN, π_2^RN, π_3^RN) ∈ {(λ, .8 − 2λ, .2 + λ) : 0 < λ < .4}.

Without loss of generality, choose λ = .2; thus (π_1^RN, π_2^RN, π_3^RN) = (.2, .4, .4). For any choice of a, b (thereby defining a Ṽ_P(θ, 1)),

E_π^RN [Ṽ_P(θ, 1)] / (1 + r_f) = [.2{1.1a + b} + .4{1.1a + 2b} + .4{1.1a + 3b}] / 1.1
                               = (1.1a + 2.2b) / 1.1 = a + 2b.

Thus,

H_x = inf_{a,b ∈ R} { a + 2b : 1.1a + b ≥ 0, 1.1a + 2b ≥ 0, and 1.1a + 3b ≥ 1 }.

Similarly,

L_x = sup_{a,b ∈ R} { a + 2b : 1.1a + b ≤ 0, 1.1a + 2b ≤ 0, and 1.1a + 3b ≤ 1 }.

Table 11.5: Solutions for H_x and L_x

          a*        b*     Value
  H_x   -.4545      .5     .5455
  L_x  -1.8182     1.0     .1818

Because the respective sets of admissible pairs are closed in R², we can replace inf and sup by, respectively, min and max.

5 By λP̂ we mean the portfolio with constituent bonds and stocks in the proportions λn_P̂^b, λn_P̂^i.
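These two small linear programs can be solved without any LP library by enumerating vertices, i.e. intersections of pairs of constraint boundaries (a sketch; this shortcut is valid here because the optima of Example 11.5 sit at vertices of the feasible regions):

```python
# H_x and L_x of Example 11.5 by vertex enumeration.
from itertools import combinations

# Constraint boundaries 1.1*a + k*b = c, for (k, c) in:
lines = [(1, 0), (2, 0), (3, 1)]

def vertices():
    """Intersect each pair of boundary lines."""
    for (k1, c1), (k2, c2) in combinations(lines, 2):
        b = (c2 - c1) / (k2 - k1)       # subtract the two equations
        a = (c1 - k1 * b) / 1.1
        yield a, b

def feasible(a, b, sense):
    # sense = +1 for the ">=" problem (H_x), -1 for the "<=" problem (L_x)
    return all(sense * (1.1 * a + k * b - c) >= -1e-9 for k, c in lines)

H_x = min(a + 2 * b for a, b in vertices() if feasible(a, b, 1))
L_x = max(a + 2 * b for a, b in vertices() if feasible(a, b, -1))
# H_x is about .5455 and L_x about .1818, matching Table 11.5
```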
Solving for H_x and L_x thus amounts to solving small linear programs. The solutions, obtained via MATLAB, are detailed in Table 11.5. The value of the security (state claim), we may conclude, lies in the interval (.1818, .5455). Before turning to the applications, there is one additional point of clarification.

11.6 Equilibrium and No Arbitrage Opportunities

Thus far we have made no reference to financial equilibrium, in the sense discussed in earlier chapters. Clearly, equilibrium implies no arbitrage opportunities: The presence of an arbitrage opportunity would induce investors to assume arbitrarily large short and long positions, which is inconsistent with the existence of equilibrium. The converse, however, is clearly not true: It could well be, in some specific market, that supply exceeds demand, or conversely, without this situation opening up an arbitrage opportunity in the strict sense understood in this chapter. In what follows the attempt is made to convey the sense of risk-neutral valuation as an equilibrium phenomenon.

Table 11.6: The Exchange Economy of Section 8.3 – Endowments and Preferences

              Endowments                Preferences
           t = 0    θ_1    θ_2
  Agent 1:   10      1      2     U^1(c_0^1, c_1^1) = (1/2)c_0^1 + .9[(1/3)ln(c_1^1) + (2/3)ln(c_2^1)]
  Agent 2:    5      4      6     U^2(c_0^2, c_1^2) = (1/2)c_0^2 + .9[(1/3)ln(c_1^2) + (2/3)ln(c_2^2)]

To illustrate, let us return to the first example in Chapter 8. The basic data of that Arrow-Debreu equilibrium are provided in Table 11.6, and the corresponding t = 0 equilibrium state prices are q_1(0) = .24 and q_2(0) = .30. In this case the risk-neutral probabilities are

π_1^RN = .24/.54, and π_2^RN = .30/.54.

Suppose a stock were traded with payoffs q^e(θ_1, 1) = 1 and q^e(θ_2, 1) = 3. By risk-neutral valuation (or, equivalently, using Arrow-Debreu prices), its period t = 0 price must be

q^e(0) = .54[(.24/.54)(1) + (.30/.54)(3)] = 1.14;

the price of the risk-free security is q^b(0) = .54. Verifying this calculation is a bit tricky because, in the original equilibrium, this stock was not traded.
Introducing such assets requires us to decide what the original endowments must be, that is, who owns what in period 0. We cannot just add the stock arbitrarily: the wealth levels of the agents would change as a result and, in general, this would alter the state prices, the risk-neutral probabilities, and all subsequent valuations. The solution to this problem is to compute the equilibrium for a similar economy in which the two agents have the same preferences and in which the only traded assets are this stock and a bond. Furthermore, the initial endowments of these instruments must be such as to guarantee the same period t = 0 and t = 1 net endowment allocations as in the first equilibrium. Let $\hat{n}_e^i$, $\hat{n}_b^i$ denote, respectively, the initial endowments of the equity and debt securities of agent i, i = 1, 2. The equivalence noted previously is accomplished as outlined in Table 11.7 (see Appendix 11.1).

Table 11.7: Initial Holdings of Equity and Debt Achieving Equivalence with Arrow-Debreu Equilibrium Endowments

            t = 0 consumption    $\hat{n}_e^i$    $\hat{n}_b^i$
Agent 1:          10                 1/2              1/2
Agent 2:           5                  1                3

A straightforward computation of the equilibrium prices yields the same $q^e(0) = 1.14$ and $q^b(0) = .54$ as predicted by risk-neutral valuation.

We conclude this section with one additional remark. Suppose one of the two agents were risk neutral; without loss of generality let this be agent 1. Under the original endowment scheme, his problem becomes:

$\max_{c_1^1, c_2^1}\left(10 + 1q_1(0) + 2q_2(0) - c_1^1 q_1(0) - c_2^1 q_2(0)\right) + .9\left(\frac{1}{3}c_1^1 + \frac{2}{3}c_2^1\right)$
s.t. $c_1^1 q_1(0) + c_2^1 q_2(0) \le 10 + q_1(0) + 2q_2(0)$

The first-order conditions are

$c_1^1: \quad q_1(0) = \frac{1}{3}(0.9)$
$c_2^1: \quad q_2(0) = \frac{2}{3}(0.9)$

from which it follows that $\pi_1^{RN} = \frac{\frac{1}{3}(0.9)}{0.9} = \frac{1}{3}$ while $\pi_2^{RN} = \frac{\frac{2}{3}(0.9)}{0.9} = \frac{2}{3}$; that is, in equilibrium, the risk-neutral probabilities coincide with the true probabilities. This is the source of the term risk-neutral probabilities: if at least one agent is risk neutral, the risk-neutral probabilities and the true probabilities coincide.
We conclude from this example that risk-neutral valuation holds in equilibrium, as it must, because equilibrium implies no arbitrage. The risk-neutral probabilities thus obtained, however, are uniquely identified with that equilibrium, and it is meaningful to use them only for valuing securities that are elements of the participants' original endowments.

11.7 Application: Maximizing the Expected Utility of Terminal Wealth

11.7.1 Portfolio Investment and Risk-Neutral Probabilities

Risk-neutral probabilities are intimately related to the basis, or set of fundamental securities, in an economy. Under no arbitrage, given the prices of the fundamental securities, we obtain a risk-neutral probability measure, and vice versa. This raises the possibility that it may be possible to formulate any problem in wealth allocation, for example the classic consumption-savings problem, in the setting of risk-neutral valuation. In this section we consider a number of these connections.

The simplest portfolio allocation problem with which we have dealt involves an investor choosing a portfolio so as to maximize the expected utility of his period t = 1 (terminal) wealth (we retain, without loss of generality, the two-period framework). In our current notation, this problem takes the form: choose portfolio P, among all feasible portfolios (i.e., P must be composed of securities in S, and the date-0 value of this portfolio, its acquisition price, cannot exceed initial wealth), so as to maximize the expected utility of terminal wealth, which corresponds to the date-1 value of P:

$\max_{\{n_P^b, n_P^i,\, i=1,2,...,N\}} EU(\tilde{V}_P(\theta, 1))$  (11.12)
s.t. $V_P(0) = V_0$, $P \in S$,

where $V_0$ is the investor's initial wealth, $U(\,)$ is his period utility function, assumed to have the standard properties, and $n_P^b$, $n_P^i$ are the positions (not proportions, but units of the indicated assets) in the risk-free asset and the risky asset i = 1, 2, ..., N, respectively, defining portfolio P.
It is not obvious that there should be a relationship between the solvability of this problem and the existence of a risk-neutral measure, but that is the case.

Proposition 11.5: If Equation (11.12) has a solution, then there are no arbitrage opportunities in S. Hence there exists a risk-neutral measure on S.

Proof: The idea is that an arbitrage opportunity is a costless way to endlessly improve upon the (presumed) optimum, so no optimum can exist. More formally, we prove the proposition by contradiction. Let $\hat{P} \in S$ be a solution to Equation (11.12), and let $\hat{P}$ have the structure $\{n_{\hat{P}}^b, n_{\hat{P}}^i : i = 1, 2, ..., N\}$. Assume also that there exists an arbitrage opportunity, in other words, a portfolio $\bar{P}$ with structure $\{n_{\bar{P}}^b, n_{\bar{P}}^i : i = 1, 2, ..., N\}$, such that $V_{\bar{P}}(0) = 0$ and $E\tilde{V}_{\bar{P}}(\theta, 1) > 0$. Consider the portfolio $P^*$ with structure $\{n_{P^*}^b, n_{P^*}^i : i = 1, 2, ..., N\}$, where

$n_{P^*}^b = n_{\hat{P}}^b + n_{\bar{P}}^b$ and $n_{P^*}^i = n_{\hat{P}}^i + n_{\bar{P}}^i$, i = 1, 2, ..., N.

$P^*$ is still feasible for the agent (it costs no more than $\hat{P}$) and it provides strictly more wealth in at least one state. Since $U(\,)$ is strictly increasing,

$EU(\tilde{V}_{P^*}(\theta, 1)) > EU(\tilde{V}_{\hat{P}}(\theta, 1))$.

This contradicts $\hat{P}$ being a solution to Equation (11.12). We conclude that there cannot exist any arbitrage opportunities, and thus, by Proposition 11.1, a risk-neutral probability measure on S must exist.

Proposition 11.5 informs us that arbitrage opportunities are incompatible with an optimal allocation: the allocation could always be improved upon by incorporating units of the arbitrage portfolio. More can be said. The solution to the agent's problem can, in fact, be used to identify the risk-neutral probabilities.
To see this, let us first rewrite the objective function in Equation (11.12), substituting the budget constraint for the bond position (any wealth not invested in the risky assets is invested in the risk-free asset):

$\max_{\{n_P^i:\, i=1,...,N\}} \sum_{j=1}^{J} \pi_j U\left((1+r_f)\left[V_0 - \sum_{i=1}^{N} n_P^i q_i^e(0)\right] + \sum_{i=1}^{N} n_P^i q_i^e(\theta_j, 1)\right)$

$= \max_{\{n_P^i:\, i=1,...,N\}} \sum_{j=1}^{J} \pi_j U\left((1+r_f)\left[V_0 + \sum_{i=1}^{N} n_P^i\left(\frac{q_i^e(\theta_j,1)}{1+r_f} - q_i^e(0)\right)\right]\right)$  (11.13)

The necessary and sufficient first-order conditions for this problem are, for each asset i, of the form

$0 = \sum_{j=1}^{J} \pi_j U_1\left((1+r_f)\left[V_0 + \sum_{i=1}^{N} n_P^i\left(\frac{q_i^e(\theta_j,1)}{1+r_f} - q_i^e(0)\right)\right]\right)(1+r_f)\left(\frac{q_i^e(\theta_j,1)}{1+r_f} - q_i^e(0)\right)$  (11.14)

Note that each quantity $\pi_j U_1(V_P(\theta_j,1))(1+r_f)$ is strictly positive because $\pi_j > 0$ and $U(\,)$ is strictly increasing. If we normalize these quantities, we can convert them into probabilities. Let us define

$\bar{\pi}_j = \frac{\pi_j U_1(V_P(\theta_j,1))(1+r_f)}{\sum_{j=1}^{J}\pi_j U_1(V_P(\theta_j,1))(1+r_f)} = \frac{\pi_j U_1(V_P(\theta_j,1))}{\sum_{j=1}^{J}\pi_j U_1(V_P(\theta_j,1))}$, j = 1, 2, ..., J.

Since $\bar{\pi}_j > 0$, $\sum_{j=1}^{J}\bar{\pi}_j = 1$, and, by (11.14),

$q_i^e(0) = \sum_{j=1}^{J}\bar{\pi}_j\,\frac{q_i^e(\theta_j,1)}{1+r_f}$;

these three properties establish the set $\{\bar{\pi}_j : j = 1, 2, ..., J\}$ as a set of risk-neutral probabilities. We have just proved one half of the following proposition.

Proposition 11.6: Let $\{n_{P^*}^b, n_{P^*}^i : i = 1, 2, ..., N\}$ be the solution to the optimal portfolio problem (11.12). Then the set $\{\pi_j^* : j = 1, 2, ..., J\}$, defined by

$\pi_j^* = \frac{\pi_j U_1(V_{P^*}(\theta_j,1))}{\sum_{j=1}^{J}\pi_j U_1(V_{P^*}(\theta_j,1))}$,  (11.15)

constitutes a risk-neutral probability measure on S. Conversely, if there exists a risk-neutral probability measure $\{\pi_j^{RN} : j = 1, 2, ..., J\}$ on S, there must exist a concave, strictly increasing, differentiable utility function $U(\,)$ and an initial wealth $V_0$ for which Equation (11.12) has a solution.

Proof: We have proved the first part. The proof of the less important converse is relegated to Appendix 11.2.

11.7.2 Solving the Portfolio Problem

Now we can turn to solving Equation (11.12).
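Definition (11.15) can be checked numerically. The following sketch borrows the CRRA numbers of the example worked out in Section 11.7.3 (γ = 3, uniform true probabilities, optimal payoffs roughly (11.951, 9.485, 17.236)); normalizing $\pi_j U_1(V_{P^*}(\theta_j,1))$ should recover the risk-neutral probabilities (.3, .6, .1):

```python
# Verify (11.15): normalized marginal-utility-weighted probabilities
# reproduce the risk-neutral measure. Payoffs are rounded from the text.
pi = [1/3, 1/3, 1/3]                 # true probabilities
x = [11.951, 9.485, 17.236]          # optimal terminal wealth V_{P*}(theta_j, 1)
mu = [xj ** (-3) for xj in x]        # U_1(x) = x^(-gamma), gamma = 3
w = [p * m for p, m in zip(pi, mu)]
pi_star = [v / sum(w) for v in w]    # (11.15): close to (.3, .6, .1)
```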
Since there is as much information in the risk-neutral probabilities as in the security prices, it should be possible to fashion a solution to Equation (11.12) using the former construct. Here we restrict our attention to the case in which the financial markets are complete. In this case there exists exactly one risk-neutral measure, which we denote by $\{\pi_j^{RN} : j = 1, 2, ..., J\}$. Since the solution to Equation (11.12) will be a portfolio in S that maximizes the date t = 1 expected utility of wealth, the solution procedure can be decomposed into a two-step process:

Step 1: Solve

$\max_x EU(\tilde{x}(\theta, 1))$
s.t. $E_{\pi^{RN}}\left[\frac{\tilde{x}(\theta,1)}{1+r_f}\right] = V_0$  (11.16)

The solution to this problem identifies the feasible uncertain payoff that maximizes the agent's expected utility. But why is the constraint a perfect summary of feasibility? The constraint makes sense, first, because under complete markets every uncertain payoff lies in S. Furthermore, in the absence of arbitrage opportunities, every payoff is valued at the present value of its expected payoff computed using the unique risk-neutral probability measure. The essence of the budget constraint is that a feasible payoff be affordable: its price must equal $V_0$, the agent's initial wealth.

Step 2: Find the portfolio P in S such that $V_P(\theta_j, 1) = x(\theta_j, 1)$, j = 1, 2, ..., J.

In step 2 we simply find the precise portfolio allocations of fundamental securities that give rise to the optimal uncertain payoff identified in step 1. The theory is all in step 1; in fact, we have used all of our major results thus far to write the constraint in the indicated form.

Now let us work out a problem, first abstractly and then by a numerical example. Equation (11.16) of step 1 can be written as

$\max_x E_\pi U(\tilde{x}(\theta,1)) - \lambda\left[E_{\pi^{RN}}\frac{\tilde{x}(\theta,1)}{1+r_f} - V_0\right]$,  (11.17)

where λ denotes the Lagrange multiplier and where we have made explicit the probability distribution with respect to which each of the expectations is being taken.
Equation (11.17) can be rewritten as

$\max_x \sum_{j=1}^{J}\pi_j\left[U(x(\theta_j,1)) - \lambda\frac{\pi_j^{RN}}{\pi_j}\frac{x(\theta_j,1)}{1+r_f}\right] + \lambda V_0$.  (11.18)

The necessary first-order conditions, one equation for each state $\theta_j$, are thus

$U_1(x(\theta_j,1)) = \frac{\lambda\,\pi_j^{RN}}{\pi_j(1+r_f)}$, j = 1, 2, ..., J,  (11.19)

from which the optimal asset payoffs may be obtained as per

$x(\theta_j,1) = U_1^{-1}\left(\frac{\lambda\,\pi_j^{RN}}{\pi_j(1+r_f)}\right)$, j = 1, 2, ..., J,  (11.20)

with $U_1^{-1}$ representing the inverse of the marginal utility function.

The Lagrange multiplier λ is the remaining unknown. It must satisfy the budget constraint when Equation (11.20) is substituted into it; that is, λ must satisfy

$E_{\pi^{RN}}\left[\frac{1}{1+r_f}U_1^{-1}\left(\frac{\lambda\,\pi_j^{RN}}{\pi_j(1+r_f)}\right)\right] = V_0$.  (11.21)

A value of λ satisfying Equation (11.21) may not exist in general. For all the standard utility functions that we have dealt with, however ($U(x) = \ln x$, $\frac{x^{1-\gamma}}{1-\gamma}$, or $-e^{-\nu x}$), it can be shown that such a λ will exist. Let $\hat{\lambda}$ solve Equation (11.21); the optimal feasible contingent payoff is thus given by

$x(\theta_j,1) = U_1^{-1}\left(\frac{\hat{\lambda}\,\pi_j^{RN}}{\pi_j(1+r_f)}\right)$  (11.22)

(from (11.21)). Given this payoff, step 2 involves finding the portfolio of fundamental securities that will give rise to it. This is accomplished by solving the customary system of linear equations.

11.7.3 A Numerical Example

Now, a numerical example. Let us choose a utility function from the familiar CRRA class, $U(x) = \frac{x^{1-\gamma}}{1-\gamma}$, and consider the market structure of Example 11.2: markets are complete and the unique risk-neutral probability measure is as noted there. Since $U_1(x) = x^{-\gamma}$ and $U_1^{-1}(y) = y^{-1/\gamma}$, Equation (11.20) reduces to

$x(\theta_j,1) = \left(\frac{\lambda\,\pi_j^{RN}}{\pi_j(1+r_f)}\right)^{-1/\gamma}$,  (11.23)

from which follows the counterpart to Equation (11.21):

$\frac{1}{1+r_f}\sum_{j=1}^{J}\pi_j^{RN}\left(\frac{\lambda\,\pi_j^{RN}}{\pi_j(1+r_f)}\right)^{-1/\gamma} = V_0$.

Isolating λ gives

$\hat{\lambda} = \left[\frac{1}{1+r_f}\sum_{j=1}^{J}\pi_j^{RN}\left(\frac{\pi_j^{RN}}{\pi_j(1+r_f)}\right)^{-1/\gamma}\right]^{\gamma} V_0^{-\gamma}$.  (11.24)

Let us consider some numbers. Assume γ = 3, V0 = 10, and that (π1, π2, π3), the true probability distribution, takes on the value (1/3, 1/3, 1/3).
Refer to Example 11.2, where the risk-neutral probability distribution was found to be $(\pi_1^{RN}, \pi_2^{RN}, \pi_3^{RN}) = (.3, .6, .1)$. Accordingly, from (11.24),

$\hat{\lambda} = 10^{-3}\left[\frac{1}{1.1}\left(.3\left(\frac{.3}{(1/3)(1.1)}\right)^{-1/3} + .6\left(\frac{.6}{(1/3)(1.1)}\right)^{-1/3} + .1\left(\frac{.1}{(1/3)(1.1)}\right)^{-1/3}\right)\right]^3$

$= \frac{1}{1000}\{.2916 + .4629 + .1402\}^3 = .0007161$.

The distribution of the state-contingent payoffs follows from (11.23):

$x(\theta_j, 1) = \left(\frac{.0007161\,\pi_j^{RN}}{\pi_j(1+r_f)}\right)^{-1/3} = \begin{cases} 11.951 & j = 1 \\ 9.485 & j = 2 \\ 17.236 & j = 3 \end{cases}$  (11.25)

The final step is to convert this payoff to a portfolio structure via the identification

$(11.951,\ 9.485,\ 17.236) = n_P^b(1.1, 1.1, 1.1) + n_P^1(3, 2, 1) + n_P^2(1, 4, 6)$,

or

$11.951 = 1.1n_P^b + 3n_P^1 + n_P^2$
$9.485 = 1.1n_P^b + 2n_P^1 + 4n_P^2$
$17.236 = 1.1n_P^b + n_P^1 + 6n_P^2$

The solution to this system of equations is

$n_P^b = 97.08$ (invest a lot in the risk-free asset)
$n_P^1 = -28.192$ (short the first stock)
$n_P^2 = -10.225$ (also short the second stock)

Lastly, we confirm that this portfolio is feasible:

Cost of portfolio = $(1)(97.08) + 2(-28.192) + 3(-10.225) \approx 10 = V_0$,

the agent's initial wealth, as required. Note the computational simplicity of this method: we need only solve a linear system of equations. Using more standard methods would result in a system of three nonlinear equations to solve. Analogous methods are also available to provide bounds in the case of market incompleteness.

11.8 Conclusions

Under the procedure of risk-neutral valuation, we construct a new probability distribution, the risk-neutral probabilities, under which all assets may be valued at their expected payoff discounted at the risk-free rate. More formally, it would be said that we undertake a transformation of measure by which all assets are then expected to earn the risk-free rate. The key to our ability to find such a measure is that the financial markets exhibit no arbitrage opportunities.
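The whole two-step computation can be reproduced in a few lines. The sketch below uses the Example 11.2 data as quoted in the text (bond priced at 1 paying 1.1 in every state; stocks priced at 2 and 3 with payoffs (3, 2, 1) and (1, 4, 6)); note that holdings computed at full precision differ from the rounded figures in the text by roughly 0.1:

```python
# Step 1: solve for lambda-hat (11.24) and the optimal payoff (11.23);
# Step 2: back out the portfolio holdings by solving a 3x3 linear system.
rf, V0, g = 0.1, 10.0, 3.0
pi = [1/3, 1/3, 1/3]                   # true probabilities
piRN = [0.3, 0.6, 0.1]                 # risk-neutral probabilities (Example 11.2)

A = sum(pn * (pn / (p * (1 + rf))) ** (-1 / g) for pn, p in zip(piRN, pi)) / (1 + rf)
lam = A ** g * V0 ** (-g)                                  # ~ 7.161e-4
x = [(lam * pn / (p * (1 + rf))) ** (-1 / g) for pn, p in zip(piRN, pi)]

# payoff matrix: rows = states, columns = (bond, stock 1, stock 2)
M = [[1.1, 3.0, 1.0],
     [1.1, 2.0, 4.0],
     [1.1, 1.0, 6.0]]

def solve3(M, b):
    """Gauss-Jordan elimination for a small linear system (no pivoting needed here)."""
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    n = len(b)
    for i in range(n):
        A[i] = [v / A[i][i] for v in A[i]]
        for k in range(n):
            if k != i:
                A[k] = [v - A[k][i] * w for v, w in zip(A[k], A[i])]
    return [A[i][n] for i in range(n)]

n_b, n_1, n_2 = solve3(M, x)               # ~ (97.0, -28.2, -10.2)
cost = 1.0 * n_b + 2.0 * n_1 + 3.0 * n_2   # acquisition price; must equal V0
```

By construction of $\hat{\lambda}$, the budget constraint holds exactly, so the cost comes out to 10 to machine precision.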
Our setting was the standard Arrow-Debreu two-period equilibrium, and we observed the intimate relationship between the risk-neutral probabilities and the relative prices of state claims. Here the practical applicability of the idea is limited. Applying these ideas to the real world would, after all, require an enumeration of all future states of nature and the contingent payoffs to all securities in order to compute the relevant risk-neutral probabilities, something for which there would be no general agreement. Even so, this particular way of approaching the optimal portfolio problem was shown to be a source of useful insights. In more restrictive settings it is also practically powerful and, as noted in Chapter 10, lies behind all modern derivatives pricing.

References

Harrison, M., and D. Kreps (1979), "Martingales and Multi-Period Securities Markets," Journal of Economic Theory 20, 381–408.
Pliska, S. R. (1997), Introduction to Mathematical Finance: Discrete Time Models, Basil Blackwell, Malden, Mass.

Appendix 11.1: Finding the Stock and Bond Economy That Is Directly Analogous to the Arrow-Debreu Economy in Which Only State Claims Are Traded

The Arrow-Debreu economy is summarized in Table 11.6. We wish to price the stock and bond with the payoff structures in Table A11.1.

Table A11.1: Payoff Structure

            t = 0        t = 1, state θ1    t = 1, state θ2
Stock     $-q^e(0)$            1                  3
Bond      $-q^b(0)$            1                  1

In order for the economy in which the stock and bond are traded to be equivalent to the Arrow-Debreu economy where state claims are traded, we need the former to imply the same effective endowment structure. This is accomplished as follows.

Agent 1: Let his endowments of the stock and bond be denoted by $\hat{z}_1^e$ and $\hat{z}_1^b$; then

In state θ1: $\hat{z}_1^b + \hat{z}_1^e = 1$
In state θ2: $\hat{z}_1^b + 3\hat{z}_1^e = 2$

Solution: $\hat{z}_1^e = \hat{z}_1^b = 1/2$ (half a share and half a bond).
Agent 2: Let his endowments of the stock and bond be denoted by $\hat{z}_2^e$ and $\hat{z}_2^b$; then

In state θ1: $\hat{z}_2^b + \hat{z}_2^e = 4$
In state θ2: $\hat{z}_2^b + 3\hat{z}_2^e = 6$

Solution: $\hat{z}_2^e = 1$, $\hat{z}_2^b = 3$.

With these endowments, the decision problems of the agents become:

Agent 1:
$\max_{z_1^e, z_1^b} \frac{1}{2}\left(10 + \frac{1}{2}q^e + \frac{1}{2}q^b - z_1^e q^e - z_1^b q^b\right) + .9\left[\frac{1}{3}\ln(z_1^e + z_1^b) + \frac{2}{3}\ln(3z_1^e + z_1^b)\right]$

Agent 2:
$\max_{z_2^e, z_2^b} \frac{1}{2}\left(5 + q^e + 3q^b - z_2^e q^e - z_2^b q^b\right) + .9\left[\frac{1}{3}\ln(z_2^e + z_2^b) + \frac{2}{3}\ln(3z_2^e + z_2^b)\right]$

The FOCs are

$z_1^e: \quad \frac{1}{2}q^e = .9\left[\frac{1}{3}\frac{1}{z_1^e + z_1^b}(1) + \frac{2}{3}\frac{1}{3z_1^e + z_1^b}(3)\right]$
$z_1^b: \quad \frac{1}{2}q^b = .9\left[\frac{1}{3}\frac{1}{z_1^e + z_1^b}(1) + \frac{2}{3}\frac{1}{3z_1^e + z_1^b}(1)\right]$
$z_2^e: \quad \frac{1}{2}q^e = .9\left[\frac{1}{3}\frac{1}{z_2^e + z_2^b}(1) + \frac{2}{3}\frac{1}{3z_2^e + z_2^b}(3)\right]$
$z_2^b: \quad \frac{1}{2}q^b = .9\left[\frac{1}{3}\frac{1}{z_2^e + z_2^b}(1) + \frac{2}{3}\frac{1}{3z_2^e + z_2^b}(1)\right]$

Since these securities span the state space and since the period 0 and period 1 endowments are the same, the real consumption allocations must be the same as in the Arrow-Debreu economy:

$c_1^1 = c_1^2 = 2.5$
$c_2^1 = c_2^2 = 4$

Thus,

$q^e = 2(.9)\left[\frac{1}{3}\frac{1}{2.5}(1) + \frac{2}{3}\frac{1}{4}(3)\right] = 1.14$
$q^b = 2(.9)\left[\frac{1}{3}\frac{1}{2.5}(1) + \frac{2}{3}\frac{1}{4}(1)\right] = .54$,

as computed previously. To compute the corresponding security holdings, observe that:

Agent 1:
$z_1^e + z_1^b = 2.5$
$3z_1^e + z_1^b = 4$
so $z_1^e = .75$ and $z_1^b = 1.75$.

Agent 2 (same holdings): $z_2^e = .75$, $z_2^b = 1.75$.

Supply must equal demand in equilibrium:

$\hat{z}_1^e + \hat{z}_2^e = \frac{1}{2} + 1 = 1.5 = z_1^e + z_2^e$
$\hat{z}_1^b + \hat{z}_2^b = \frac{1}{2} + 3 = 3.5 = z_1^b + z_2^b$

The period-0 consumptions are identical to the earlier calculation as well.

Appendix 11.2: Proof of the Second Part of Proposition 11.6

Define $\hat{U}(x, \theta_j) = x\,\frac{\pi_j^{RN}}{\pi_j(1+r_f)}$, where $\{\pi_j : j = 1, 2, ..., J\}$ are the true objective state probabilities. This is a state-dependent utility function that is linear in wealth. We will show that for this function, Equation (11.12), indeed, has a solution. Consider an arbitrary allocation of wealth to the various fundamental assets $\{n_P^i : i = 1, 2, ..., N\}$, and let P denote that portfolio. Fix the wealth at any level $V_0$, arbitrary.
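The FOC pricing and market-clearing checks above are easy to verify numerically; a minimal sketch, using only the numbers from the appendix:

```python
# No-trade consumptions and the FOC prices from Appendix 11.1.
c1, c2 = 2.5, 4.0                    # state-theta_1 and state-theta_2 consumption
qe = 2 * 0.9 * ((1/3) * (1/c1) * 1 + (2/3) * (1/c2) * 3)   # stock FOC: 1.14
qb = 2 * 0.9 * ((1/3) * (1/c1) * 1 + (2/3) * (1/c2) * 1)   # bond FOC: .54

# holdings solving z_e + z_b = 2.5 and 3 z_e + z_b = 4
z_e = (4.0 - 2.5) / 2                # = .75
z_b = 2.5 - z_e                      # = 1.75

# market clearing: two agents with identical demands vs. initial endowments
clears_equity = abs(2 * z_e - (0.5 + 1.0)) < 1e-12
clears_bonds = abs(2 * z_b - (0.5 + 3.0)) < 1e-12
```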
We next compute the expected utility associated with this portfolio, taking advantage of representation (11.13):

$E\hat{U}(\tilde{V}_P(\theta, 1)) = \sum_{j=1}^{J}\pi_j\,\frac{\pi_j^{RN}}{\pi_j(1+r_f)}\,(1+r_f)\left[V_0 + \sum_{i=1}^{N}n_P^i\left(\frac{q_i^e(\theta_j,1)}{1+r_f} - q_i^e(0)\right)\right]$

$= \sum_{j=1}^{J}\pi_j^{RN}\left[V_0 + \sum_{i=1}^{N}n_P^i\left(\frac{q_i^e(\theta_j,1)}{1+r_f} - q_i^e(0)\right)\right]$

$= V_0 + \sum_{i=1}^{N}n_P^i\left[\sum_{j=1}^{J}\pi_j^{RN}\frac{q_i^e(\theta_j,1)}{1+r_f} - q_i^e(0)\right]$

$= V_0$,

where the last equality follows because $\pi^{RN}$ prices each asset: $q_i^e(0) = \sum_j \pi_j^{RN} q_i^e(\theta_j,1)/(1+r_f)$. In other words, with this utility function, every trading strategy has the same value. Thus problem (11.12) has, trivially, a solution.

Chapter 12: The Martingale Measure in Discrete Time: Part II

12.1 Introduction

We return to the notion of risk-neutral valuation, which we now extend to settings with many time periods. This will be accomplished in two very different ways.

First, we extend the concept to the CCAPM setting. Recall that this is a discrete-time, general equilibrium framework: preferences and endowment processes must be specified and no-trade prices computed. We will demonstrate that, here as well, assets may be priced equal to the present value, discounted at the risk-free rate of interest, of their expected payoffs when expectations are computed using the set of risk-neutral probabilities. We would expect this to be possible. The CCAPM is an equilibrium model (hence there are no arbitrage opportunities and a set of risk-neutral probabilities must exist) with complete markets (hence this set is unique).

Second, we extend the idea to the partial equilibrium setting of equity derivatives (e.g., equity options) valuation. The key to derivatives pricing is to have an accurate model of the underlying price process.
We hypothesize such a process (it is not derived from underlying fundamentals, that is, preferences and endowments; rather, it is a pure statistical model), and demonstrate that, in the presence of local market completeness and local no-arbitrage conditions, there exists a transformation of measure by which all derivatives written on that asset may be priced equal to the present value, discounted at the risk-free rate, of their expected payoffs computed using this transformed measure.[1] The Black-Scholes formula, for example, may be derived in this way.

[1] By "local" we mean that valuation is considered only in the context of the derivative, the underlying asset (a stock), and a risk-free bond.

12.2 Discrete Time Infinite Horizon Economies: A CCAPM Setting

As in the previous chapter, time evolves according to t = 0, 1, ..., T, T+1, .... We retain the context of a single-good endowment economy and presume the existence of a complete markets Arrow-Debreu financial structure. In period t, any one of $N_t$ possible states, indexed by $\theta_t$, may be realized. We will assume that a period t event is characterized by two quantities: (i) the actually occurring period t event as characterized by $\theta_t$, and (ii) the unique history of events $(\theta_1, \theta_2, ..., \theta_{t-1})$ that precedes it. Requirement (ii), in particular, suggests an evolution of uncertainty similar to that of a tree structure in which the branches never rejoin (two events always have distinct prior histories). While this is a stronger assumption than what underlies the CCAPM, it will allow us to avoid certain notational ambiguities; subsequently, assumption (ii) will be dropped. We are interested more in the idea than in any broad application, so generality is not an important consideration. Let $\pi(\theta_t, \theta_{t+1})$ represent the probability of state $\theta_{t+1}$ being realized in period t+1, given that $\theta_t$ is realized in period t.
The financial market is assumed to be complete in the following sense: at every date t, and for every state $\theta_t$, there exists a short-term contingent claim that pays one unit of consumption if state $\theta_{t+1}$ is realized in period t+1 (and nothing otherwise). We denote the period t, state $\theta_t$ price of such a claim by $q(\theta_t, \theta_{t+1})$.

Arrow-Debreu long-term claims (relative to t = 0) are not formally traded in this economy. Nevertheless, they can be synthetically created by dynamically trading short-term claims. (In general, more trading can substitute for fewer claims.) To illustrate, let $q(\theta_0, \theta_{t+1})$ represent the period t = 0 price of a claim to one unit of the numeraire if and only if event $\theta_{t+1}$ is realized in period t+1. It must be the case that

$q(\theta_0, \theta_{t+1}) = \prod_{s=0}^{t} q(\theta_s, \theta_{s+1})$,  (12.1)

where $(\theta_0, ..., \theta_t)$ is the unique prior history of $\theta_{t+1}$. By the uniqueness of the path to $\theta_{t+1}$, $q(\theta_0, \theta_{t+1})$ is well defined. By no-arbitrage arguments, if the long-term Arrow-Debreu security were also traded, its price would conform to Equation (12.1). Arrow-Debreu securities can thus be effectively created via dynamic (recursive) trading, and the resulting financial market structure is said to be dynamically complete.[2]

By analogy, the price in period t, state $\theta_t$, of a security that pays one unit of consumption if state $\theta_{t+J}$ is observed in period t+J is given by

$q(\theta_t, \theta_{t+J}) = \prod_{s=t}^{t+J-1} q(\theta_s, \theta_{s+1})$.

It is understood that $\theta_{t+J}$ is feasible from $\theta_t$; that is, given that we are in state $\theta_t$ in period t, there is some positive probability for the economy to find itself in state $\theta_{t+J}$ in period t+J. Otherwise the claim's price must be zero. Since our current objective is to develop risk-neutral pricing representations, a natural next step is to define risk-free bond prices and associated risk-free rates.
Given that the current date-state is $(\theta_t, t)$, the price $q^b(\theta_t, t+1)$ of a risk-free one-period (short-term) bond is given by (no arbitrage)

$q^b(\theta_t, t+1) = \sum_{\theta_{t+1}=1}^{N_{t+1}} q(\theta_t, \theta_{t+1})$;  (12.2)

note here that the summation sign applies across all $N_{t+1}$ future states of nature. The corresponding risk-free rate must satisfy

$(1 + r_f(\theta_t)) = \left[q^b(\theta_t, t+1)\right]^{-1}$.

[2] This fact suggests that financial markets may need to be "very incomplete" if incompleteness per se is to have a substantial effect on equilibrium asset prices and, for example, have a chance of resolving some of the puzzles uncovered in Chapter 9. See Telmer (1993).

Pricing a k-period risk-free bond is similar:

$q^b(\theta_t, t+k) = \sum_{\theta_{t+k}=1}^{N_{t+k}} q(\theta_t, \theta_{t+k})$.  (12.3)

The final notion is that of an accumulation factor, denoted by $g(\theta_t, \theta_{t+k})$ and defined for a specific path $(\theta_t, \theta_{t+1}, ..., \theta_{t+k})$ as follows:

$g(\theta_t, \theta_{t+k}) = \prod_{s=t}^{t+k-1} q^b(\theta_s, s+1)$.  (12.4)

The idea being captured by the accumulation factor is this: an investor who invests one unit of consumption in short-term risk-free bonds from date t to t+k, continually rolling over his investment, will accumulate $[g(\theta_t, \theta_{t+k})]^{-1}$ units of consumption by date t+k if events $\theta_{t+1}, ..., \theta_{t+k}$ are realized. Alternatively,

$[g(\theta_t, \theta_{t+k})]^{-1} = \prod_{s=0}^{k-1}(1 + r_f(\theta_{t+s}))$.  (12.5)

Note that from the perspective of date t, state $\theta_t$, the factor $[g(\theta_t, \theta_{t+k})]^{-1}$ is an uncertain quantity, as the actual state realizations in the succeeding time periods are not known at period t. From the t = 0 perspective, $[g(\theta_t, \theta_{t+k})]^{-1}$ is in the spirit of a (conditional) forward rate.

Let us illustrate with the two-period accumulation factor. We take the perspective of the investor investing one unit of the numeraire in a short-term risk-free bond from date t to t+2. His first investment is certain, since the current state $\theta_t$ is known, and it returns $(1 + r_f(\theta_t))$.
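A small sketch of (12.4) and (12.5) along one realized path; the one-period bond prices below are illustrative, not taken from the text:

```python
# Accumulation factor along one realized path: the product of one-period
# bond prices; its inverse is the compounded rollover return.
qb_path = [0.95, 0.92, 0.90]         # hypothetical q^b(theta_s, s+1), s = t..t+2

g = 1.0
for q in qb_path:
    g *= q                           # g(theta_t, theta_{t+3}), eq. (12.4)

rollover = 1.0
for q in qb_path:
    rollover *= 1.0 / q              # product of (1 + r_f) terms, eq. (12.5)
```

By construction `g * rollover == 1`: rolling one unit over at the short rates accumulates exactly the inverse of the accumulation factor.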
At date t+1, this sum will be invested again in a one-period risk-free bond with return $(1 + r_f(\theta_{t+1}))$, contracted at t+1 and received at t+2. From the perspective of date t, this is indeed an uncertain quantity. The compounded return on the investment is $(1 + r_f(\theta_t))(1 + r_f(\theta_{t+1}))$. This is the inverse of the accumulation factor $g(\theta_t, \theta_{t+2})$, as spelled out in Equation (12.5). Let us next translate these ideas directly into the CCAPM setting.

12.3 Risk-Neutral Pricing in the CCAPM

We make two additional assumptions in order to restrict our current setting to the context of the CCAPM.

A12.1: There is one agent in the economy with time-separable VNM preferences represented by

$U(\tilde{c}) = E_0\left[\sum_{t=0}^{\infty} U(\tilde{c}_t, t)\right]$,

where $U(\tilde{c}_t, t)$ is a family of strictly increasing, concave, differentiable period utility functions, with $U_1(c_t, t) > 0$ for all t; $\tilde{c}_t = c(\theta_t)$ is the uncertain period t consumption, and $E_0$ is the expectations operator conditional on date t = 0 information. This treatment of the agent's preferences is quite general. For example, $U(c_t, t)$ could be of the form $\delta^t U(c_t)$ as in earlier chapters. Alternatively, the period utility function could itself be changing through time in deterministic fashion, or some type of habit formation could be postulated. In all cases, it is understood that the set of feasible consumption sequences will be such that the sum exists (is finite).

A12.2: Output in this economy, $\tilde{Y}_t = Y_t(\theta_t)$, is exogenously given and, by construction, represents the consumer's income. In equilibrium it represents his consumption as well.

Recall that equilibrium contingent-claims prices in the CCAPM economy are no-trade prices, supporting the consumption sequence $\{\tilde{c}_t\}$ in the sense that, at these prices, the representative agent does not want to purchase any claims; that is, at the prevailing contingent-claims prices, his existing consumption sequence is optimal.
The loss in period t utility experienced by purchasing a contingent claim at price $q(\theta_t, \theta_{t+1})$ is exactly equal to the resultant increase in expected utility in period t+1; there is no benefit to further trade. More formally,

$U_1(c(\theta_t), t)\,q(\theta_t, \theta_{t+1}) = \pi(\theta_t, \theta_{t+1})\,U_1(c(\theta_{t+1}), t+1)$,

or

$q(\theta_t, \theta_{t+1}) = \pi(\theta_t, \theta_{t+1})\,\frac{U_1(c(\theta_{t+1}), t+1)}{U_1(c(\theta_t), t)}$.  (12.6)

Equation (12.6) corresponds to Equation (8.1) of Chapter 8. State probabilities and intertemporal marginal rates of substitution appear once again as the determinants of equilibrium Arrow-Debreu prices. Note that the more general utility specification adopted in this chapter does not permit bringing out explicitly the element of time discounting embedded in the intertemporal marginal rates of substitution. A short-term risk-free bond is thus priced according to

$q^b(\theta_t, t+1) = \sum_{\theta_{t+1}=1}^{N_{t+1}} q(\theta_t, \theta_{t+1}) = \frac{E_t\{U_1(c(\theta_{t+1}), t+1)\}}{U_1(c(\theta_t), t)}$.  (12.7)

Risk-neutral valuation is in the spirit of discounting at the risk-free rate. Accordingly, we may ask: at what probabilities must we compute the expected payoff of a security in order to obtain its price by discounting that payoff at the risk-free rate? But which risk-free rates are we speaking about? In a multiperiod context there are two possibilities, and the alternative we choose will govern the precise form of the probabilities themselves. The spirit of the dilemma is portrayed in Figure 12.1, which illustrates the case of a t = 3 period cash flow.

Insert Figure 12.1 about here

Under the first alternative, the cash flow is discounted at a series of consecutive short (one-period) rates, while in the second we discount back at the term structure of multiperiod discount bonds. These methods provide the same price, although the form of the risk-neutral probabilities will differ substantially. Here we offer a discussion of alternative 1; alternative 2 is considered in Appendix 12.1.
Since the one-period state claims are the simplest securities, we will first ask what the risk-neutral probabilities must be in order that these claims be priced equal to the present value of their expected payoffs, discounted at the risk-free rate.[3] As before, let these numbers be denoted by $\pi^{RN}(\theta_t, \theta_{t+1})$. They are defined by

$q(\theta_t, \theta_{t+1}) = \pi(\theta_t, \theta_{t+1})\,\frac{U_1(c(\theta_{t+1}), t+1)}{U_1(c(\theta_t), t)} = q^b(\theta_t, t+1)\,\pi^{RN}(\theta_t, \theta_{t+1})$.

The second equality reiterates the tight relationship found in Chapter 11 between Arrow-Debreu prices and risk-neutral probabilities. Substituting Equation (12.7) for $q^b(\theta_t, t+1)$ and rearranging terms, one obtains

$\pi^{RN}(\theta_t, \theta_{t+1}) = \pi(\theta_t, \theta_{t+1})\,\frac{U_1(c(\theta_{t+1}), t+1)}{E_t\{U_1(c(\theta_{t+1}), t+1)\}}$.  (12.8)

Since $U(c(\theta), t)$ is assumed to be strictly increasing, $U_1 > 0$ and $\pi^{RN}(\theta_t, \theta_{t+1}) > 0$ (without loss of generality we may assume $\pi(\theta_t, \theta_{t+1}) > 0$). Furthermore, by construction,

$\sum_{\theta_{t+1}=1}^{N_{t+1}} \pi^{RN}(\theta_t, \theta_{t+1}) = 1$.

The set $\{\pi^{RN}(\theta_t, \theta_{t+1})\}$ thus defines a set of conditional (on $\theta_t$) risk-neutral transition probabilities. As in our earlier, more general setting, if the representative agent is risk neutral, $U_1(c(\theta_t), t) \equiv$ constant for all t, and $\pi^{RN}(\theta_t, \theta_{t+1})$ coincides with $\pi(\theta_t, \theta_{t+1})$, the true probability. Using these transition probabilities, expected future consumption flows may be discounted at the intervening risk-free rates.

Notice how the risk-neutral probabilities are related to the true probabilities: they represent the true probabilities scaled up or down by the relative consumption scarcities in the different states. For example, if, for some state $\theta_{t+1}$, the representative agent's consumption is unusually low, his marginal utility of consumption in that state will be much higher than the average marginal utility, and thus

$\pi^{RN}(\theta_t, \theta_{t+1}) = \pi(\theta_t, \theta_{t+1})\,\frac{U_1(c(\theta_{t+1}), t+1)}{E_t\{U_1(c(\theta_{t+1}), t+1)\}} > \pi(\theta_t, \theta_{t+1})$.
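Equation (12.8) is easy to compute once the conditional consumption distribution is specified. A minimal sketch with illustrative numbers (not from the text), using log utility so that $U_1(c) = 1/c$:

```python
# Conditional risk-neutral transition probabilities (12.8): true
# probabilities reweighted by marginal utility relative to its mean.
pi = [0.5, 0.5]                      # illustrative true transition probabilities
c_next = [1.0, 2.0]                  # consumption in the two successor states
mu = [1.0 / c for c in c_next]       # U_1(c) = 1/c under log utility
Emu = sum(p * m for p, m in zip(pi, mu))
pi_RN = [p * m / Emu for p, m in zip(pi, mu)]   # = (2/3, 1/3)
```

The low-consumption state receives more than its true weight (2/3 rather than 1/2), exactly the scaling by relative scarcity described above.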
[3] Recall that since all securities can be expressed as portfolios of state claims, we can use the state claims alone to construct the risk-neutral probabilities.

The opposite will be true if a state has a relative abundance of consumption. When we compute expected payoffs to assets using risk-neutral probabilities, we are thus implicitly taking into account both the (no-trade) relative equilibrium scarcities (prices) of their payoffs and their objective relative likelihoods. This allows discounting at the risk-free rate: no further risk adjustment need be made to the discount rate, as all such adjustments have been implicitly undertaken in the expected payoff calculation. To gain a better understanding of this notion, let us go through a few examples.

Example 12.1
Denote a stock's associated dividend stream by $\{d(\theta_t)\}$. Under the basic state-claim valuation perspective (Chapter 10, Section 10.2), its ex-dividend price at date t, given that $\theta_t$ has been realized, is

$q^e(\theta_t, t) = \sum_{s=t+1}^{\infty}\sum_{j=1}^{N_s} q(\theta_t, \theta_s(j))\,d(\theta_s(j))$,  (12.9)

or, with a recursive representation,

$q^e(\theta_t, t) = \sum_{\theta_{t+1}} q(\theta_t, \theta_{t+1})\{q^e(\theta_{t+1}, t+1) + d(\theta_{t+1})\}$.  (12.10)

Equation (12.10) may also be expressed as

$q^e(\theta_t, t) = q^b(\theta_t, t+1)\,E_t^{RN}\{\tilde{q}^e(\theta_{t+1}, t+1) + \tilde{d}(\theta_{t+1})\}$,  (12.11)

where $E_t^{RN}$ denotes the expectation taken with respect to the relevant risk-neutral transition probabilities; equivalently,

$q^e(\theta_t, t) = \frac{1}{1 + r_f(\theta_t)}\,E_t^{RN}\{\tilde{q}^e(\theta_{t+1}, t+1) + \tilde{d}(\theta_{t+1})\}$.

Returning again to the present value expression, Equation (12.9), we have

$q^e(\theta_t, t) = \sum_{s=t+1}^{\infty} E_t^{RN}\{g(\theta_t, \tilde{\theta}_s)\tilde{d}(\tilde{\theta}_s)\} = \sum_{s=t+1}^{\infty} E_t^{RN}\left\{\frac{\tilde{d}(\tilde{\theta}_s)}{\prod_{j=0}^{s-t-1}(1 + r_f(\tilde{\theta}_{t+j}))}\right\}$.  (12.12)

What does Equation (12.12) mean? Any state $\hat{\theta}_s$ in period s ≥ t+1 has a unique sequence of states preceding it. The product of the risk-neutral transition probabilities associated with the states along the path defines the (conditional) risk-neutral probability of $\hat{\theta}_s$ itself.
The product of this probability and the payoff $d(\hat{\theta}_s)$ is then discounted by the associated accumulation factor – the present value factor corresponding to the risk-free rates identified with the succession of states preceding $\hat{\theta}_s$. For each $s \ge t+1$, the expectation represents the sum of all these terms, one for each $\theta_s$ feasible from $\theta_t$. Since the notational intensity tends to obscure what is basically a very straightforward idea, let us turn to a small numerical example.

Example 12.2
Let us value a two-period equity security, where $U(c_t, t) \equiv U(c_t) = \ln c_t$ for the representative agent (no discounting). The evolution of uncertainty is given by Figure 12.2, where

$\pi(\theta_0, \theta_{1,1}) = .6$    $\pi(\theta_{1,1}, \theta_{2,1}) = .3$    $\pi(\theta_{1,2}, \theta_{2,3}) = .6$
$\pi(\theta_0, \theta_{1,2}) = .4$    $\pi(\theta_{1,1}, \theta_{2,2}) = .7$    $\pi(\theta_{1,2}, \theta_{2,4}) = .4$

Insert Figure 12.2 about here

The consumption at each node, which equals the dividend, is represented as the quantity in parentheses. To value this asset risk neutrally, we proceed in three stages.

1. Compute the (conditional) risk-neutral probabilities at each node.

$$\pi^{RN}(\theta_0, \theta_{1,1}) = \pi(\theta_0, \theta_{1,1}) \frac{U_1(c(\theta_{1,1}))}{\tilde{E}_0\{U_1(c_1(\tilde{\theta}_1))\}} = \frac{.6(\frac{1}{5})}{.6(\frac{1}{5}) + .4(\frac{1}{3})} = .4737$$

$$\pi^{RN}(\theta_0, \theta_{1,2}) = 1 - \pi^{RN}(\theta_0, \theta_{1,1}) = .5263$$

$$\pi^{RN}(\theta_{1,1}, \theta_{2,1}) = \frac{.3(\frac{1}{4})}{.3(\frac{1}{4}) + .7(\frac{1}{2})} = .1765, \qquad \pi^{RN}(\theta_{1,1}, \theta_{2,2}) = 1 - \pi^{RN}(\theta_{1,1}, \theta_{2,1}) = .8235$$

$$\pi^{RN}(\theta_{1,2}, \theta_{2,3}) = \frac{.6(\frac{1}{8})}{.6(\frac{1}{8}) + .4(1)} = .1579, \qquad \pi^{RN}(\theta_{1,2}, \theta_{2,4}) = 1 - \pi^{RN}(\theta_{1,2}, \theta_{2,3}) = .8421$$

2. Compute the conditional bond prices.

$$q^b(\theta_0, 1) = \frac{\tilde{E}_0\{U_1(c_1(\tilde{\theta}))\}}{U_1(c_0)} = \frac{.6(\frac{1}{5}) + .4(\frac{1}{3})}{\frac{1}{2}} = .5066$$

$$q^b(\theta_{1,1}, 2) = \frac{.3(\frac{1}{4}) + .7(\frac{1}{2})}{\frac{1}{5}} = 2.125$$

$$q^b(\theta_{1,2}, 2) = \frac{.6(\frac{1}{8}) + .4(1)}{\frac{1}{3}} = 1.425$$

3. Value the asset.
$$\begin{aligned} q^e(\theta_0, 0) &= \sum_{s=1}^{2} \tilde{E}_0^{RN}\{g(\theta_0, \tilde{\theta}_s)\, d_s(\tilde{\theta}_s)\} \\ &= q^b(\theta_0, 1)\{\pi^{RN}(\theta_0, \theta_{1,1})(5) + \pi^{RN}(\theta_0, \theta_{1,2})(3)\} \\ &\quad + q^b(\theta_0, 1)\, q^b(\theta_{1,1}, 2)\{\pi^{RN}(\theta_0, \theta_{1,1})\pi^{RN}(\theta_{1,1}, \theta_{2,1})(4) + \pi^{RN}(\theta_0, \theta_{1,1})\pi^{RN}(\theta_{1,1}, \theta_{2,2})(2)\} \\ &\quad + q^b(\theta_0, 1)\, q^b(\theta_{1,2}, 2)\{\pi^{RN}(\theta_0, \theta_{1,2})\pi^{RN}(\theta_{1,2}, \theta_{2,3})(8) + \pi^{RN}(\theta_0, \theta_{1,2})\pi^{RN}(\theta_{1,2}, \theta_{2,4})(1)\} \\ &= 4.00 \end{aligned}$$

At a practical level this appears to be a messy calculation at best, but it is not obvious how we might compute the no-trade equilibrium asset prices more easily. The Lucas tree methodologies, for example, do not apply here, as the setting is not infinitely recursive. This leaves us to solve for the equilibrium prices by working back through the tree, computing the no-trade prices at each node. It is not clear that this would be any less involved.

Sometimes, however, the risk-neutral valuation procedure does allow a very succinct, convenient representation of specific asset prices or price interrelationships. A case in point is that of a long-term discount bond.

Example 12.3
To price, at time $t$ and state $\theta_t$, a long-term discount bond maturing at date $t+k$, observe that the corresponding dividend satisfies $d_{t+k}(\theta_{t+k}) \equiv 1$ for every $\theta_{t+k}$ feasible from state $\theta_t$. Applying Equation (12.12) yields $q^b(\theta_t, t+k) = E_t^{RN}\, g(\theta_t, \tilde{\theta}_{t+k})$, or

$$\frac{1}{(1 + r(\theta_t, t+k))^k} = E_t^{RN}\left\{\frac{1}{\prod_{s=t}^{t+k-1}(1 + r(\tilde{\theta}_s, s+1))}\right\}. \quad (12.13)$$

Equation (12.13), in either of its forms, tells us that the long-term discount factor is the expectation of the product of the short-term discount factors, taken with respect to the risk-neutral transition probabilities. This is generally not true if the expectation is taken with the ordinary, or true, probabilities.

At this point we draw this formal discussion to a close. We now have an idea of what risk-neutral valuation might mean in a CCAPM context. Appendix 12.1 briefly discusses the second valuation procedure and illustrates it with the pricing of call and put options.
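The three-stage calculation of Example 12.2 can be checked mechanically. A sketch under log utility, with the tree laid out as in the example (the helper names are mine):

```python
# Example 12.2: log utility (U'(c) = 1/c), no time discounting.
# Node consumptions double as dividends.
c0 = 2.0
branches_t1 = [(0.6, 5.0), (0.4, 3.0)]           # (true prob, consumption)
branches_t2 = {5.0: [(0.3, 4.0), (0.7, 2.0)],    # successors of theta_{1,1}
               3.0: [(0.6, 8.0), (0.4, 1.0)]}    # successors of theta_{1,2}

def u1(c):                         # marginal utility under log utility
    return 1.0 / c

def bond_price(c_now, branches):   # q_b = E[U'(c')] / U'(c)
    return sum(p * u1(c) for p, c in branches) / u1(c_now)

def rn_probs(branches):            # Equation (12.8)
    e = sum(p * u1(c) for p, c in branches)
    return [p * u1(c) / e for p, c in branches]

qb0 = bond_price(c0, branches_t1)
pi0 = rn_probs(branches_t1)

# Stage 3: discount risk-neutral expected dividends along the tree.
price = qb0 * sum(p * c for p, c in zip(pi0, [c for _, c in branches_t1]))
for p1, c1 in zip(pi0, [c for _, c in branches_t1]):
    qb1 = bond_price(c1, branches_t2[c1])
    pi1 = rn_probs(branches_t2[c1])
    price += qb0 * qb1 * sum(p2 * p1 * c2 for p2, (_, c2) in zip(pi1, branches_t2[c1]))

print(round(qb0, 4), round(price, 2))   # 0.5067 4.0
```

The result reproduces the text's $q^e(\theta_0, 0) = 4.00$; with log utility and no discounting this is just $c_0$ times the number of dividend dates, since each term $E[c_t^{-1} d_t]/U_1(c_0)$ collapses to $c_0$.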
We thus see that the notion of risk-neutral valuation carries over easily to a CCAPM context. This is not surprising: the key to the existence of a set of risk-neutral probabilities is the presence of a complete set of securities markets, which is the case with the CCAPM. In fact, the somewhat weaker notion of dynamic completeness was sufficient.

We next turn our attention to equity derivatives pricing. The setting is much more specialized and not one of general equilibrium (though not inconsistent with it). One instance of this specialization is that the underlying stock's price is presumed to follow a specialized stochastic process. The term structure is also presumed to be flat. These assumptions, taken together, are sufficient to guarantee the existence of a unique risk-neutral probability measure, which can be used to value any derivative security written on the stock. That these probabilities are uniquely identified with the specific underlying stock has led us to dub them local.

12.4 The Binomial Model of Derivatives Valuation

Under the binomial abstraction we imagine a many-period world in which, at every date-state node, only a stock and a bond are traded. With only two securities to trade, dynamic completeness requires that at each node there be only two possible succeeding states. For simplicity, we will also assume that the stock pays no dividend, in other words, that $d(\theta_t) \equiv 0$ for all $t \le T$. Lastly, in order to avoid any ambiguity in the risk-free discount factors, it is customary to require that the risk-free rate be constant across all dates and states. We formalize these assumptions as follows:

A12.3: The risk-free rate is constant: $q^b(\theta_t, t+1) = \frac{1}{1+r_f}$ for all $t \le T$.

A12.4: The stock pays no dividends: $d(\theta_t) \equiv 0$ for all $t \le T$.

A12.5: The rate of return to stock ownership follows an i.i.d.
process of the form:

$$q^e(\theta_{t+1}, t+1) = \begin{cases} u\, q^e(\theta_t, t), & \text{with probability } \pi \\ d\, q^e(\theta_t, t), & \text{with probability } 1 - \pi, \end{cases}$$

where $u$ (up) and $d$ (down) represent gross rates of return. In order to preclude the existence of an arbitrage opportunity it must be the case that $u > R_f > d$, where, in this context, $R_f = 1 + r_f$.

There are effectively only two possible future states in this model ($\theta_t \in \{\theta_1, \theta_2\}$, where $\theta_1$ is identified with $u$ and $\theta_2$ with $d$), and thus the evolution of the stock's price can be represented by the simple tree structure of Figure 12.3.

Insert Figure 12.3 about here

Why such a simple setting should be of use is not presently clear, but it will become so shortly.

In this context, the risk-neutral probabilities can easily be computed from Equation (12.11), specialized to accommodate $d(\theta_t) \equiv 0$:

$$q^e(\theta_t, t) = q^b(\theta_t, t+1)\, \tilde{E}_t^{RN}\{q^e(\theta_{t+1}, t+1)\} \quad (12.14)$$
$$= q^b(\theta_t, t+1)\{\pi^{RN} u\, q^e(\theta_t, t) + (1 - \pi^{RN})\, d\, q^e(\theta_t, t)\}.$$

This implies

$$R_f = \pi^{RN} u + (1 - \pi^{RN})\, d, \quad \text{or} \quad \pi^{RN} = \frac{R_f - d}{u - d}. \quad (12.15)$$

The power of this simple context is made clear when comparing Equation (12.15) with Equation (12.8). Here risk-neutral probabilities can be expressed without reference to marginal rates of substitution, that is, to agents' preferences.4 This provides an immense simplification, which all derivative pricing will exploit in one way or another. Of course, the same is true for one-period Arrow-Debreu securities, since they are priced equal to the present value of their respective risk-neutral probabilities:

$$q(\theta_t, \theta_{t+1} = u) = \frac{1}{R_f}\left(\frac{R_f - d}{u - d}\right), \quad \text{and}$$
$$q(\theta_t, \theta_{t+1} = d) = \frac{1}{R_f}\left(1 - \frac{R_f - d}{u - d}\right) = \frac{1}{R_f}\left(\frac{u - R_f}{u - d}\right).$$

Furthermore, since the risk-free rate is assumed constant in every period, the price of a claim to one unit of the numeraire, to be received $T - t > 1$ periods from now if state $\theta_T$ is realized, is given by

$$q(\theta_t, \theta_T) = \frac{1}{(1 + r_f(\theta_t, T))^{T-t}} \sum_{\{\theta_t, \ldots, \theta_{T-1}\} \in \Omega}\ \prod_{s=t}^{T-1} \pi^{RN}(\theta_s, \theta_{s+1}),$$

where $\Omega$ represents the set of all time paths $\{\theta_t, \theta_{t+1},$
$\ldots, \theta_{T-1}\}$ leading to $\theta_T$. In the binomial setting this becomes

$$q(\theta_t, \theta_T) = \frac{1}{(R_f)^{T-t}} \binom{T-t}{s} (\pi^{RN})^s (1 - \pi^{RN})^{T-t-s}, \quad (12.16)$$

where $s$ is the number of intervening periods in which the $u$ state is observed on any path from $\theta_t$ to $\theta_T$. The expression $\binom{T-t}{s}$ represents the number of ways $s$ successes ($u$ moves) can occur in $T - t$ trials. A standard result states

$$\binom{T-t}{s} = \frac{(T-t)!}{s!(T-t-s)!}.$$

4 Notice that the risk-neutral probability distribution is i.i.d. as well.

The explanation is as follows. Any possible period $T$ price of the underlying stock is identified with a unique number of $u$ and $d$ realizations. Suppose, for example, that $s_1$ $u$ realizations are required. There are then $\binom{T-t}{s_1}$ possible paths, each of which has exactly $s_1$ $u$ states and $T - t - s_1$ $d$ states, leading to the pre-specified period $T$ price. Each path has the common risk-neutral probability $(\pi^{RN})^{s_1}(1 - \pi^{RN})^{T-t-s_1}$. As an example, suppose $T - t = 3$ and the particular final price is the result of 2 up-moves and 1 down-move. Then there are $\binom{3}{2} = \frac{3!}{2!\,1!} = \frac{3 \cdot 2 \cdot 1}{(2 \cdot 1)(1)} = 3$ possible paths leading to that final state: uud, udu, and duu.

To illustrate the simplicity of this setting we again consider several examples.

Example 12.4
A European call option revisited: let the option expire at $T > t$; the price of a European call with exercise price $K$, given the current date-state $(\theta_t, t)$, is

$$C_E(\theta_t, t) = \left(\frac{1}{R_f}\right)^{T-t} E_t^{RN}(\max\{q^e(\theta_T, T) - K, 0\})$$
$$= \left(\frac{1}{R_f}\right)^{T-t} \sum_{s=0}^{T-t} \binom{T-t}{s} (\pi^{RN})^s (1 - \pi^{RN})^{T-t-s} \max\{q^e(\theta_t, t)\, u^s d^{T-t-s} - K, 0\}.$$

When taking the expectation we sum over all possible values of $s \le T - t$, thus weighting each possible option payoff by the risk-neutral probability of attaining it. Define the quantity $\hat{s}$ as the minimum number of intervening up states necessary for the underlying asset, the stock, to achieve a price in excess of $K$.
The prior expression can then be simplified to:

$$C_E(\theta_t, t) = \frac{1}{(R_f)^{T-t}} \sum_{s=\hat{s}}^{T-t} \binom{T-t}{s} (\pi^{RN})^s (1 - \pi^{RN})^{T-t-s} \left[q^e(\theta_t, t)\, u^s d^{T-t-s} - K\right], \quad (12.17)$$

or

$$C_E(\theta_t, t) = \frac{1}{(R_f)^{T-t}} \left\{ \sum_{s=\hat{s}}^{T-t} \binom{T-t}{s} (\pi^{RN})^s (1 - \pi^{RN})^{T-t-s}\, q^e(\theta_t, t)\, u^s d^{T-t-s} - \sum_{s=\hat{s}}^{T-t} \binom{T-t}{s} (\pi^{RN})^s (1 - \pi^{RN})^{T-t-s} K \right\}. \quad (12.18)$$

The first term within the braces of Equation (12.18) is the risk-neutral expected value at expiration of the asset acquired if the option is exercised, while the second term is the risk-neutral expected cost of acquiring it. The difference is the risk-neutral expected value of the call's payoff at expiration.5 To value the call today, this quantity is then put on a present value basis by discounting at the risk-free rate $R_f$.

This same valuation can also be obtained by working backward, recursively, through the tree. Since markets are complete, in the absence of arbitrage opportunities any asset – the call included – is priced equal to its expected value in the succeeding time period discounted at $R_f$. This implies

$$C_E(\theta_t, t) = q^b(\theta_t, t+1)\, \tilde{E}_t^{RN} C_E(\tilde{\theta}_{t+1}, t+1). \quad (12.19)$$

Let us next illustrate how this fact may be used to compute the call's value in a simple three-period example.

Example 12.5
Let $u = 1.1$, $d = 1/u = .91$, $q^e(\theta_t, t) = \$50$, $K = \$53$, $R_f = 1.05$, $T - t = 3$, and

$$\pi^{RN} = \frac{R_f - d}{u - d} = \frac{1.05 - .91}{1.1 - .91} = .70$$

Insert Figure 12.4 about here

The numbers in parentheses in Figure 12.4 are the recursive values of the call, obtained by working backward in the manner of Equation (12.19):

$$C_E(u^2, t+2) = \frac{1}{1.05}\{.70(13.55) + .30(2)\} = 9.60$$
$$C_E(ud, t+2) = \frac{1}{1.05}\{.70(2) + .30(0)\} = 1.33$$
$$C_E(u, t+1) = \frac{1}{1.05}\{.70(9.60) + .30(1.33)\} = 6.78$$
$$C_E(d, t+1) = \frac{1}{1.05}\{.70(1.33) + .30(0)\} = .89$$
$$C_E(\theta_t, t) = \frac{1}{1.05}\{.70(6.78) + .30(.89)\} = 4.77$$

For a simple call, the payoff at expiration depends only on the value of the underlying asset (relative to $K$) at that time, irrespective of its price history.
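The two routes to the call's value, the closed-form sum (12.17) and backward recursion through the tree (12.19), must agree; a sketch with my own parameters (chosen for round numbers, not those of Example 12.5):

```python
from math import comb

def call_binomial_closed(S, K, u, d, Rf, n):
    # Equation (12.17): risk-neutral expected payoff, discounted at Rf.
    p = (Rf - d) / (u - d)
    payoff = lambda s: max(S * u**s * d**(n - s) - K, 0.0)
    return sum(comb(n, s) * p**s * (1 - p)**(n - s) * payoff(s)
               for s in range(n + 1)) / Rf**n

def call_binomial_recursive(S, K, u, d, Rf, n):
    # Equation (12.19): work backward through the tree one period at a time.
    p = (Rf - d) / (u - d)
    values = [max(S * u**s * d**(n - s) - K, 0.0) for s in range(n + 1)]
    for _ in range(n):
        values = [(p * values[s + 1] + (1 - p) * values[s]) / Rf
                  for s in range(len(values) - 1)]
    return values[0]

# Illustrative parameters (not the text's Example 12.5):
S, K, u, d, Rf, n = 100.0, 100.0, 1.2, 0.8, 1.1, 2
c1 = call_binomial_closed(S, K, u, d, Rf, n)
c2 = call_binomial_recursive(S, K, u, d, Rf, n)
assert abs(c1 - c2) < 1e-10
print(round(c1, 4))   # 20.4545: (0.75**2 * 44) / 1.1**2
```

Here $\pi^{RN} = (1.1 - 0.8)/(1.2 - 0.8) = .75$, only the $u^2$ terminal node pays off (44), and both methods return the same discounted risk-neutral expectation.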
For example, the value of the call when $q^e(\theta_T, T) = 55$ is the same whether the price history is (50, 55, 50, 55) or (50, 45.5, 50, 55). For other derivatives, however, this is not the case; they are path dependent. An Asian option is a case in point. Nevertheless, the same valuation methods apply: the expected payoff is computed using the risk-neutral probabilities and then discounted at the risk-free rate.

5 Recall that there is no actual transfer of the security. Rather, this difference, $q^e(\theta_T, T) - K$, represents the amount of money the writer (seller) of the call must transfer to the buyer at the expiration date if the option is exercised.

Example 12.6
A path-dependent option: we consider an Asian option for which the payoff pattern assumes the form outlined in Table 12.1.

Table 12.1: Payoff Pattern – Asian Option

Date:    t    t+1    t+2    ...    T-1    T
Payoff:  0    0      0      ...    0      $\max\{q^e_{AVG}(\theta_T, T) - K, 0\}$

where $q^e_{AVG}(\theta_T, T)$ is the average price of the stock along the path from $q^e(\theta_t, t)$ to, and including, $q^e(\theta_T, T)$. We may express the period $t$ value of such an option as

$$C_A(\theta_t, t) = \frac{1}{(R_f)^{T-t}}\, E_t^{RN}\left[\max\{q^e_{AVG}(\tilde{\theta}_T, T) - K, 0\}\right].$$

A simple numerical example with $T - t = 2$ follows. Let $q^e(\theta_t, t) = 100$, $K = 100$, $u = 1.05$, $d = 1/u \approx .95$, and $R_f = 1.005$. The corresponding risk-neutral probabilities are

$$\pi^{RN} = \frac{R_f - d}{u - d} = \frac{1.005 - .95}{1.05 - .95} = .55; \qquad 1 - \pi^{RN} = .45.$$

With two periods remaining, the possible evolutions of the stock's price and the corresponding option payoffs are those found in Figure 12.5.

Insert Figure 12.5 about here

Thus,

$$C_A(\theta_t, t) = \frac{1}{(1.005)^2}\{(.55)^2(5.083) + (.55)(.45)(1.67)\} = \$1.932.$$

Note that we may as well work backward, recursively, through the price/payoff tree, as shown in Figure 12.6,

Insert Figure 12.6 about here

where

$$C_A(\theta_{t+1} = u, t+1) = \frac{1}{1.005}\{.55(5.083) + .45(1.67)\} = 3.53, \quad \text{and}$$
$$C_A(\theta_t, t) = \frac{1}{1.005}\{.55(3.53) + .45(0)\} = \$1.932.$$

A number of fairly detailed comments are now in order.
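Because an Asian payoff depends on the whole path, valuation enumerates paths rather than terminal states. A sketch using Example 12.6's parameters with $d = .95$ taken exactly; note the text rounds $d = 1/u$ to .95 in some steps but not in the printed payoffs, so its figures (1.67, $1.932) differ slightly from the exact arithmetic below (roughly 1.91):

```python
from itertools import product

def asian_call(S0, K, u, d, Rf, n):
    # Enumerate every n-step path; the payoff depends on the average price
    # along the path, initial price included (as in Example 12.6).
    p = (Rf - d) / (u - d)
    value = 0.0
    for path in product((u, d), repeat=n):
        prices, prob = [S0], 1.0
        for step in path:
            prices.append(prices[-1] * step)
            prob *= p if step == u else (1 - p)
        avg = sum(prices) / len(prices)
        value += prob * max(avg - K, 0.0)
    return value / Rf**n

price = asian_call(100.0, 100.0, 1.05, 0.95, 1.005, 2)
print(round(price, 4))
```

Only the uu path (average 105.083) and the ud path (average 101.583) finish in the money; the du and dd paths pay nothing, exactly as in Figure 12.5.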
Note that with a path-dependent option it is not possible to apply, naively, a variation on Equation (12.18). Unlike with straightforward calls, the value of this type of option is not the same for all paths leading to the same final-period asset price. Who might be interested in purchasing such an option? For one thing, they have payoff patterns similar in spirit to an ordinary call, but are generally less expensive (there is less upward potential in the average than in the price itself). This feature has contributed to the usefulness of path-dependent options in foreign exchange trading. Consider a firm that needs to provide a stream of payments (say, perhaps, for factory construction) in a foreign currency. It would want protection against a rise in the value of the foreign currency relative to its own, because such a rise would increase the cost of the payment stream in terms of the firm’s own currency. Since many payments are to be made, what is of concern is the average price of the foreign currency rather than its price at any specific date. By purchasing the correct number of Asian calls on the foreign currency, the firm can create a payment for itself if, on average, the foreign currency’s value exceeds the strike price – the level above which the firm would like to be insured. By analogous reasoning, if the firm wished to protect the average value of a stream of payments it was receiving in a foreign currency, the purchase of Asian puts would be one alternative. We do not want to lose sight of the fact that risk-neutral valuation is a direct consequence of the dynamic completeness (at each node there are two possible future states and two securities available for trade) and the no-arbitrage assumption, a connection that is especially apparent in the binomial setting. Consider a call option with expiration one period from the present. Over this period the stock’s price behavior and the corresponding payoffs to the call option are as found in Figure 12.7. 
Insert Figure 12.7 about here

By the assumed dynamic completeness we know that the payoff to the option can be replicated on a state-by-state basis by a position in the stock and the bond. Let this position be characterized by a portfolio of $\Delta$ shares and a bond investment of value $B$ (for notational simplicity we suppress the dependence of these quantities on the current state and date). Replication requires

$$u\, q^e(\theta_t, t)\Delta + R_f B = C_E(u, t+1), \quad \text{and}$$
$$d\, q^e(\theta_t, t)\Delta + R_f B = C_E(d, t+1),$$

from which follows

$$\Delta = \frac{C_E(u, t+1) - C_E(d, t+1)}{(u - d)\, q^e(\theta_t, t)}, \quad \text{and} \quad B = \frac{u\, C_E(d, t+1) - d\, C_E(u, t+1)}{(u - d)\, R_f}.$$

By the no-arbitrage assumption:

$$\begin{aligned} C_E(\theta_t, t) &= \Delta\, q^e(\theta_t, t) + B \\ &= \frac{C_E(u, t+1) - C_E(d, t+1)}{(u - d)\, q^e(\theta_t, t)}\, q^e(\theta_t, t) + \frac{u\, C_E(d, t+1) - d\, C_E(u, t+1)}{(u - d)\, R_f} \\ &= \frac{1}{R_f}\left[\frac{R_f - d}{u - d}\, C_E(u, t+1) + \frac{u - R_f}{u - d}\, C_E(d, t+1)\right] \\ &= \frac{1}{R_f}\left[\pi^{RN} C_E(u, t+1) + (1 - \pi^{RN})\, C_E(d, t+1)\right], \end{aligned}$$

which is just a specialized case of Equation (12.18). Valuing an option (or other derivative) using risk-neutral valuation is thus equivalent to pricing its replicating portfolio of stock and debt. Working backward in the tree corresponds to recomputing, at each of the succeeding nodes, the portfolio of stock and debt that replicates the derivative's payoffs. In the earlier example of the Asian option, the value 3.53 at the intermediate $u$ node represents the value of the portfolio of stock and bonds necessary to replicate the option's values in the second-period nodes leading from it (5.083 in the $u$ state, 1.67 in the $d$ state). Let us see how the replicating portfolio evolves in the case of the Asian option written on a stock.
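Before following the Asian example through the tree, the one-period replication argument itself is easy to verify numerically: solve the two linear equations for $\Delta$ and $B$, then check that the portfolio matches the call in both states and that its cost equals the discounted risk-neutral expectation. A sketch with illustrative numbers:

```python
def replicate(S, u, d, Rf, C_up, C_down):
    # Solve  u*S*Delta + Rf*B = C_up  and  d*S*Delta + Rf*B = C_down.
    delta = (C_up - C_down) / ((u - d) * S)
    bond = (u * C_down - d * C_up) / ((u - d) * Rf)
    return delta, bond

S, u, d, Rf, K = 100.0, 1.2, 0.8, 1.1, 100.0
C_up, C_down = max(u * S - K, 0.0), max(d * S - K, 0.0)   # 20, 0

delta, bond = replicate(S, u, d, Rf, C_up, C_down)

# The portfolio pays off exactly like the call in both states...
assert abs(u * S * delta + Rf * bond - C_up) < 1e-10
assert abs(d * S * delta + Rf * bond - C_down) < 1e-10

# ...so absence of arbitrage forces the call price to equal its cost,
# which is also the discounted risk-neutral expectation:
pi_rn = (Rf - d) / (u - d)
cost = delta * S + bond
assert abs(cost - (pi_rn * C_up + (1 - pi_rn) * C_down) / Rf) < 1e-10
print(delta, bond, cost)
```

Here $\Delta = .5$ and $B \approx -36.36$: half a share financed partly by borrowing, the levered structure discussed below.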
$$\Delta_u = \frac{C_A(u^2, t+2) - C_A(ud, t+2)}{(u - d)\, q^e(\theta_{t+1}, t+1)} = \frac{5.083 - 1.67}{(1.05 - .95)(105)} = .325$$

$$B_u = \frac{u\, C_A(ud, t+2) - d\, C_A(u^2, t+2)}{(u - d)\, R_f} = \frac{(1.05)(1.67) - (.95)(5.083)}{(1.05 - .95)(1.005)} = -30.60$$

$$\Delta_d = 0 \quad \text{(all branches leading from the } d \text{ node result in zero option value)}$$
$$B_d = 0$$

$$\Delta = \frac{C_A(u, t+1) - C_A(d, t+1)}{(u - d)\, q^e(\theta_t, t)} = \frac{3.53 - 0}{(1.05 - .95)(100)} = .353$$

$$B = \frac{u\, C_A(d, t+1) - d\, C_A(u, t+1)}{(u - d)\, R_f} = \frac{(1.05)(0) - (.95)(3.53)}{(1.05 - .95)(1.005)} = -33.33$$

We interpret these numbers as follows. In order to replicate the value of the Asian option one period hence, irrespective of whether the underlying stock's price rises to $105 or falls to $95.20, it is necessary to construct a portfolio composed of a loan of $33.33 at $R_f$ in conjunction with a long position of .353 share. The net cost is $.353(100) - 33.33 = \$1.97$, the cost of the call, except for rounding errors. To express this idea slightly differently: if you want to replicate, at each node, the value of the Asian option, borrow $33.33 (at $R_f$) and, together with your own capital contribution of $1.97, purchase .353 share of the underlying stock.

As the underlying stock's value evolves through time, this portfolio's value will evolve so that at any node it matches exactly the call's value. At the first $u$ node, for example, the portfolio will be worth $3.53. Together with a loan of $30.60, this latter sum will allow the purchase of .325 share, with no additional capital contribution required. Once assembled, the portfolio is entirely self-financing: no additional capital need be added, and none may be withdrawn (until expiration).

This discussion suggests that Asian options represent a levered position in the underlying stock. To see this, note that at the initial node the replicating portfolio consists of a $1.97 equity contribution by the purchaser in conjunction with a loan of $33.33. This implies a debt/equity ratio of $33.33/$1.97 ≈ 17!
For the analogous straight call, with the same exercise price as the Asian option and the same underlying price process, the corresponding quantities are, respectively, $3.07 of equity and a loan of $54.47, giving a debt/equity ratio of approximately 18. Call-related securities are thus attractive instruments for speculation! For a relatively small cash outlay, a stock's entire upward potential (within a limited span of time) can be purchased.

Under this pricing perspective there are no arbitrage opportunities within the universe comprising the underlying asset, the bond, and any derivative asset written on the underlying asset. We were reminded of this fact in the prior discussion: the price of the call at all times equals the value of the replicating portfolio. It does not, however, preclude the existence of such opportunities among different stocks or among derivatives written on different stocks.

These discussions make apparent the fact that binomial risk-neutral valuation views derivative securities, and call options in particular, as redundant assets: redundant in the sense that their payoffs can be replicated with a portfolio of preexisting securities. The presence or absence of these derivatives is deemed not to affect the price of the underlying asset (the stock) on which they are written. This is in direct contrast to our earlier motivation for the existence of options: their desirable property of assisting in the completion of the market. In principle, the introduction of an option has the potential to change all asset values if it makes the market more complete. This issue has been examined fairly extensively in the literature. From a theoretical perspective, Detemple and Selden (1991) construct a mean-variance example with one risky asset, one risk-free asset, and an incomplete market. There the introduction of a call option is shown to increase the equilibrium price of the risky asset.
In light of our earlier discussions, this is not entirely surprising: the introduction of the option enhances opportunities for risk sharing, thereby increasing demand for the risky asset and consequently its price. This result can be shown not to be fully applicable in all contexts, however. On the empirical side, Detemple and Jorion (1990) examine a large sample of option introductions over the period 1973 to 1986 and find that, on average, the underlying stock's price rises 3 percent upon introduction and its volatility diminishes.

12.5 Continuous Time: An Introduction to the Black-Scholes Formula

While the binomial model presents a transparent application of risk-neutral valuation, it is not clear that it accurately describes the price evolution of any known security. We deal with this issue presently. Fat tails aside, there is ample evidence to suggest that stock prices may be modeled as lognormally distributed; more formally,

$$\ln q^e(\tilde{\theta}_T, T) \sim N\left(\ln q^e(\theta_t, t) + \mu(T - t),\ \sigma\sqrt{T - t}\right),$$

where $\mu$ and $\sigma$ denote, respectively, the mean and standard deviation of the continuously compounded rate of return over the reference period, typically one year. Regarding $t$ as the present time, this expression describes the distribution of stock prices at some time $T$ in the future, given the current price $q^e(\theta_t, t)$. The length of the time horizon $T - t$ is measured in years.

The key result is this: properly parameterized, the distribution of final prices generated by the binomial model can approximate the lognormal distribution above arbitrarily well as the number of branches becomes very large. More precisely, we may imagine a binomial model in which we divide the period $T - t$ into $n$ subintervals of equal length $\Delta t(n) = \frac{T - t}{n}$.
If we adjust $u$, $d$, $p$ (the true probability of an up move), and $R_f$ appropriately, then as $n \to \infty$ the distribution of period $T$ prices generated by the binomial model converges to the hypothesized lognormal distribution. The required adjustment is:

$$u(n) = e^{\sigma\sqrt{\Delta t(n)}}, \quad d(n) = \frac{1}{u(n)}, \quad p = \frac{e^{\mu \Delta t(n)} - d(n)}{u(n) - d(n)}, \quad \text{and} \quad R_f(n) = (R_f)^{\frac{1}{n}}. \quad (12.20)$$

Under this identification, the binomial valuation formula for a call option, Equation (12.18), converges to the Black-Scholes formula for a European call option written on a non-dividend-paying stock:

$$C_E(\theta_t, t) = q^e(\theta_t, t)\, N(d_1) - K e^{-\hat{r}_f(T-t)} N(d_2), \quad (12.21)$$

where $N(\cdot)$ is the cumulative standard normal distribution function, and

$$\hat{r}_f = \ln(R_f), \quad d_1 = \frac{\ln\left(\frac{q^e(\theta_t, t)}{K}\right) + (T - t)\left(\hat{r}_f + \frac{\sigma^2}{2}\right)}{\sigma\sqrt{T - t}}, \quad d_2 = d_1 - \sigma\sqrt{T - t}.$$

Cox and Rubinstein (1985) provide a detailed development and proof of this equivalence, but we can see the rudiments of its origin in Equation (12.18), which we now present, modified to make apparent its dependence on the number of subintervals $n$:

$$C_E(\theta_t, t; n) = \frac{1}{(R_f(n))^n}\left\{\sum_{s=a(n)}^{n} \binom{n}{s} (\pi(n)^{RN})^s (1 - \pi(n)^{RN})^{n-s}\, q^e(\theta_t, t)\, u(n)^s d(n)^{n-s} - \sum_{s=a(n)}^{n} \binom{n}{s} (\pi(n)^{RN})^s (1 - \pi(n)^{RN})^{n-s} K\right\}, \quad (12.22)$$

where

$$\pi(n)^{RN} = \frac{R_f(n) - d(n)}{u(n) - d(n)}.$$

Rearranging terms yields

$$C_E(\theta_t, t; n) = q^e(\theta_t, t) \sum_{s=a(n)}^{n} \binom{n}{s} \left(\frac{\pi(n)^{RN} u(n)}{R_f(n)}\right)^s \left(\frac{(1 - \pi(n)^{RN})\, d(n)}{R_f(n)}\right)^{n-s} - K \left(\frac{1}{R_f(n)}\right)^n \sum_{s=a(n)}^{n} \binom{n}{s} (\pi(n)^{RN})^s (1 - \pi(n)^{RN})^{n-s}, \quad (12.23)$$

which is of the general form

$$C_E(\theta_t, t; n) = q^e(\theta_t, t) \times \text{Probability} - (\text{present value factor}) \times K \times \text{Probability},$$

as per the Black-Scholes formula. Since, at each step of the limiting process (i.e., for each $n$ as $n \to \infty$), the call valuation formula is fundamentally an expression of risk-neutral valuation, the same must be true of its limit. As such, the Black-Scholes formula represents the first hint at the translation of risk-neutral methods to the case of continuous time. Let us conclude this section with a few more observations.
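Before turning to those observations, the convergence claim itself is easy to check numerically: parameterize a tree by Equation (12.20) and compare the $n$-step binomial price with the closed form. A sketch ($N(\cdot)$ built from math.erf; the tolerance is loose since convergence is only O(1/n)):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r_hat, sigma, T):
    d1 = (log(S / K) + (r_hat + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r_hat * T) * norm_cdf(d2)

def binomial_call(S, K, r_hat, sigma, T, n):
    # Equation (12.20) parameterization: u = exp(sigma*sqrt(dt)), d = 1/u.
    dt = T / n
    u = exp(sigma * sqrt(dt))
    d = 1.0 / u
    Rf = exp(r_hat * dt)                    # per-step gross risk-free rate
    p = (Rf - d) / (u - d)
    values = [max(S * u**s * d**(n - s) - K, 0.0) for s in range(n + 1)]
    for _ in range(n):                      # backward induction
        values = [(p * values[s + 1] + (1 - p) * values[s]) / Rf
                  for s in range(len(values) - 1)]
    return values[0]

S, K, r_hat, sigma, T = 100.0, 100.0, 0.05, 0.20, 1.0
bs = black_scholes_call(S, K, r_hat, sigma, T)
approx = binomial_call(S, K, r_hat, sigma, T, n=1000)
print(round(bs, 4), round(approx, 4))
assert abs(bs - approx) < 0.05
```

With a thousand steps, the binomial price agrees with the Black-Scholes value to a few hundredths, and the gap shrinks as $n$ grows.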
The first concerns the relationship of the Black-Scholes formula to the replicating-portfolio idea. Since at each step of the limiting process the call's value is identical to that of the replicating portfolio, this notion must carry over to the continuous-time setting. This is indeed the case: in a context where investors may continuously and costlessly adjust the composition of the replicating portfolio, the initial position to assume (at time $t$) is one of $N(d_1)$ shares, financed in part by a risk-free loan of $K e^{-\hat{r}_f(T-t)} N(d_2)$. The net cost of assembling this portfolio is the Black-Scholes value of the call.

Notice also that neither the mean return on the underlying asset nor the true probabilities explicitly enter anywhere in the discussion.6 None of this is surprising. The short explanation is simply that risk-neutral valuation abandons the true probabilities in favor of the risk-neutral ones and, in doing so, all assets are determined to earn the risk-free rate. The underlying asset's mean return still matters, but it is now $R_f$. More intuitively, risk-neutral valuation is essentially no-arbitrage pricing. In a world with full information and without transaction costs, investors will eliminate all arbitrage opportunities irrespective of their objective likelihood or of the mean returns of the assets involved.

It is sometimes remarked that to purchase a call option is to buy volatility, and we need to understand what this expression is intended to convey. Returning to the binomial approximation [in conjunction with Equation (12.18)], we observe first that a larger $\sigma$ implies the possibility of a higher underlying asset price at expiration, with the attendant higher call payoff. More formally, $\sigma$ is the only statistical characteristic of the underlying stock's price process to appear in the Black-Scholes formula.

6 They are implicitly present in the equilibrium price of the underlying asset.
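Since the Black-Scholes call price is strictly increasing in $\sigma$, the map from volatility to price can be inverted numerically; a bisection sketch (the solver, bracket, and tolerances are my own choices):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d1 - sigma * sqrt(T))

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-10):
    # The call price is strictly increasing in sigma, so bisection works.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, mid, T) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price at sigma = 0.25, then recover it from the price.
target = bs_call(100.0, 105.0, 0.03, 0.25, 0.5)
sigma_hat = implied_vol(target, 100.0, 105.0, 0.03, 0.5)
assert abs(sigma_hat - 0.25) < 1e-6
print(round(sigma_hat, 6))   # 0.25
```

This round trip is exactly the "implied volatility" construction described next: observe a call price, invert the formula, and read off the market's volatility estimate.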
Given $r_f$, $K$, and $q^e(\theta_t, t)$, there is a unique identification between the call's value and $\sigma$. For this reason, estimates of an asset's volatility are frequently obtained from its corresponding call price by inverting the Black-Scholes formula. The result is referred to as an implied volatility estimate.

The use of risk-neutral methods for the valuation of options is probably the area in which asset pricing theory has made the most progress. Indeed, Merton and Scholes were awarded the Nobel Prize for their work (Fischer Black had died). So much progress has, in fact, been made that the finance profession has largely turned away from conceptual issues in derivatives valuation to focus on the development of fast computer valuation algorithms that implement the risk-neutral methods. This, in turn, has allowed the use of derivatives, especially for hedging purposes, to grow enormously over the past twenty years.

12.6 Dybvig's Evaluation of Dynamic Trading Strategies

Let us next turn to a final application of these methods: the evaluation of dynamic trading strategies. To do so, we retain the partial equilibrium setting of the binomial model, but invite agents to have preferences over the various outcomes. Note that under the pure pricing perspective of Section 12.4, preferences were irrelevant: all investors would agree on the prices of call and put options (and all other derivatives) regardless of their degrees of risk aversion or their subjective beliefs as to the true probability of an up or down state. This is simply a reflection of the fact that any rational investor, whether highly risk averse or risk neutral, will seek to profit from an arbitrage opportunity, whatever its likelihood, and that in equilibrium assets should thus be priced so that such opportunities are absent. In this section our goal is different, and preferences will have a role to play. We return to assumption A12.1.
Consider the optimal consumption problem of an agent who takes security prices as given and who seeks to maximize the present value of time-separable utility (A12.1). His optimal consumption plan solves

$$\max\ E_0 \sum_{t=0}^{\infty} U(\tilde{c}_t, t)$$
$$\text{s.t.} \quad \sum_{t=0}^{\infty} \sum_{s \in N_t} q(\theta_0, \theta_t(s))\, c(\theta_t(s)) \le Y_0, \quad (12.24)$$

where $Y_0$ is his initial period-0 wealth and $q(\theta_0, \theta_t(s))$ is the period $t = 0$ price of an Arrow-Debreu security paying one unit of the numeraire if state $s$ is observed at time $t > 0$. Assuming a finite number of states and expanding the expectations operator to make the state probabilities explicit, the Lagrangian for this problem is

$$L(\ ) = \sum_{t=0}^{\infty} \sum_{s=1}^{N_t} \pi(\theta_0, \theta_t(s))\, U(c(\theta_t(s)), t) + \lambda\left[Y_0 - \sum_{t=0}^{\infty} \sum_{s=1}^{N_t} q(\theta_0, \theta_t(s))\, c(\theta_t(s))\right],$$

where $\pi(\theta_0, \theta_t(s))$ is the conditional probability of state $s$ occurring at time $t$, and $\lambda$ is the Lagrange multiplier. The first-order condition is

$$U_1(c(\theta_t(s)), t)\, \pi(\theta_0, \theta_t(s)) = \lambda\, q(\theta_0, \theta_t(s)). \quad (12.25)$$

By the concavity of $U(\ )$, if $\theta_t(1)$ and $\theta_t(2)$ are two states, then

$$\frac{q(\theta_0, \theta_t(1))}{\pi(\theta_0, \theta_t(1))} > \frac{q(\theta_0, \theta_t(2))}{\pi(\theta_0, \theta_t(2))} \quad \text{if and only if} \quad c(\theta_t(1), t) < c(\theta_t(2), t).$$

It follows that if

$$\frac{q(\theta_0, \theta_t(1))}{\pi(\theta_0, \theta_t(1))} = \frac{q(\theta_0, \theta_t(2))}{\pi(\theta_0, \theta_t(2))}, \quad \text{then} \quad c(\theta_t(1)) = c(\theta_t(2)).$$

The ratio $q(\theta_0, \theta_t(s))/\pi(\theta_0, \theta_t(s))$ measures the relative scarcity of consumption in state $\theta_t(s)$: a high ratio in some state means that the price of consumption there is high relative to the likelihood of that state being observed. A rational agent will consume less in these expensive states and more in the relatively cheaper ones, as Equation (12.25) suggests. This observation is, in fact, quite general, as Proposition 12.1 demonstrates.

Proposition 12.1 [Dybvig (1988)]
Consider the consumption allocation problem described by Equation (12.24). For any rational investor with $U_{11}(c_t, t) < 0$, the optimal consumption plan is a decreasing function of $q(\theta_0, \theta_t(s))/\pi(\theta_0, \theta_t(s))$.
Furthermore, for any consumption plan with this monotonicity property, there exists a rational investor with concave period utility function $U(c_t, t)$ for whom the consumption plan is optimal in the sense of solving Equation (12.24).

Dybvig (1988) illustrates the power of this result most effectively in the binomial context, where the price-to-probability ratio assumes an especially simple form. Recall that in the binomial model the state at time $t$ is completely characterized by the number of up moves preceding it. Consider a state $\theta_t(s)$, where $s$ denotes the number of preceding up moves. The true probability of any particular path leading to $\theta_t(s)$ is $\pi(\theta_0, \theta_t(s)) = \pi^s (1 - \pi)^{t-s}$, while the claim paying one unit along that path has price $q(\theta_0, \theta_t(s)) = (R_f)^{-t} (\pi^{RN})^s (1 - \pi^{RN})^{t-s}$. The price/probability ratio thus assumes the form

$$\frac{q(\theta_0, \theta_t(s))}{\pi(\theta_0, \theta_t(s))} = (R_f)^{-t} \left(\frac{\pi^{RN}}{\pi}\right)^s \left(\frac{1 - \pi^{RN}}{1 - \pi}\right)^{t-s} = (R_f)^{-t} \left(\frac{\pi^{RN}(1 - \pi)}{(1 - \pi^{RN})\pi}\right)^s \left(\frac{1 - \pi^{RN}}{1 - \pi}\right)^{t}.$$

We now specialize the binomial process by further imposing assumption A12.6.

A12.6: $\pi u + (1 - \pi)d > R_f$; in other words, the expected return on the stock exceeds the risk-free rate.

Assumption A12.6 implies that

$$\pi > \frac{R_f - d}{u - d} = \pi^{RN}, \quad \text{so that} \quad \frac{\pi^{RN}(1 - \pi)}{(1 - \pi^{RN})\pi} < 1$$

for any time $t$, and the price/probability ratio $q(\theta_0, \theta_t(s))/\pi(\theta_0, \theta_t(s))$ is a decreasing function of the number of preceding up moves, $s$. By Proposition 12.1, the period $t$ level of optimal planned consumption across states $\theta_t(s)$ is thus an increasing function of the number of up moves, $s$, preceding each state.

Let us now specialize our agent's preferences by assuming that he is concerned only with consumption at some terminal date $T$, at which time he consumes his wealth. Equation (12.24) specializes easily to this case:

$$\max \sum_{s \in N_T} \pi(\theta_0, \theta_T(s))\, U(c(\theta_T(s)))$$
$$\text{s.t.} \quad \sum_{s \in N_T} q(\theta_0, \theta_T(s))\, c(\theta_T(s)) \le Y_0 \quad (12.26)$$

In effect, we set $U(c_t, t) \equiv 0$ for $t < T$.
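The first-order condition (12.25) can be made concrete: with CRRA utility, $U_1(c) = c^{-\gamma}$, so $c(s) = (\lambda\, q(s)/\pi(s))^{-1/\gamma}$, with $\lambda$ pinned down by the budget constraint. A sketch with illustrative prices and probabilities (the three-state setup is my own, not from the text):

```python
# Optimal consumption under the FOC  U'(c(s)) * pi(s) = lam * q(s),
# with CRRA utility U'(c) = c**(-gamma):  c(s) = (lam * q(s)/pi(s))**(-1/gamma).
gamma, Y0 = 2.0, 100.0
pi = [0.2, 0.3, 0.5]             # true state probabilities
q  = [0.4, 0.3, 0.25]            # state-claim prices (illustrative)

ratios = [qi / pi_i for qi, pi_i in zip(q, pi)]

# Budget: sum q(s) c(s) = Y0 pins down lam in closed form.
k = sum(qi * r**(-1.0 / gamma) for qi, r in zip(q, ratios))
lam = (k / Y0)**gamma
c = [(lam * r)**(-1.0 / gamma) for r in ratios]

assert abs(sum(qi * ci for qi, ci in zip(q, c)) - Y0) < 1e-9
# Consumption is a decreasing function of the price/probability ratio:
order = sorted(range(3), key=lambda s: ratios[s])
assert c[order[0]] > c[order[1]] > c[order[2]]
print([round(x, 2) for x in c], [round(r, 2) for r in ratios])
```

The monotonicity assertion is exactly Proposition 12.1: the agent consumes least where the price/probability ratio (consumption scarcity) is highest.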
Remember also that a stock, from the perspective of an agent who is concerned only with terminal wealth, can be viewed as a portfolio of period $T$ state claims. The results of Proposition 12.1 thus apply to this security as well.

Dybvig (1988) shows how these latter observations can be used to assess the optimality of many commonly used trading strategies. The context of his discussion is illustrated with the example in Figure 12.8, where the investor is presumed to consume his wealth at the end of the trading period.

Insert Figure 12.8 about here

For this particular setup, $\pi^{RN} = 1/3$. He considers the following frequently cited equity trading strategies:

1. Technical analysis: buy the stock and sell it after an up move; buy it back after a down move; invest at $R_f$ (zero in this example) when out of the market. But under this strategy $c_4(\theta_t(s)\,|\,uuuu) = \$32$, yet $c_4(\theta_t(s)\,|\,udud) = \$48$; in other words, the investor consumes more in the state with the fewer preceding up moves, which violates the optimality condition. This cannot be an optimal strategy.

2. Stop-loss strategy: buy and hold the stock, sell only if the price drops to $8, and stay out of the market thereafter. Consider, again, two possible evolutions of the stock's price: $c_4(\theta_t(s)\,|\,duuu) = \$8$ while $c_4(\theta_t(s)\,|\,udud) = \$16$. Once again, consumption is not an increasing function of the number of up moves under this trading strategy, which must, therefore, be suboptimal.

12.7 Conclusions

We have extended the notion of risk-neutral valuation to two important contexts: the dynamic setting of the general equilibrium consumption CAPM and the partial equilibrium binomial model. The return on our investment is particularly apparent in the latter framework. The reasons are clear: in the binomial context, which provides the conceptual foundations for an important part of continuous time finance, the risk-neutral probabilities can be identified independently from agents' preferences.
Knowledge of the relevant intertemporal marginal rates of substitution, in particular, is superfluous. This is the huge dividend of the twin modeling choices of binomial framework and arbitrage pricing. It has paved the way for routine pricing of complex derivative-based financial products and for their attendant use in a wide range of modern financial contracts.

References

Cox, J., Rubinstein, M. (1985), Options Markets, Prentice Hall, Upper Saddle River, N.J.

Detemple, J., Jorion, P. (1990), "Option Listing and Stock Returns: An Empirical Analysis," Journal of Banking and Finance, 14, 781–801.

Detemple, J., Selden, L. (1991), "A General Equilibrium Analysis of Option and Stock Market Interactions," International Economic Review, 32, 279–303.

Dybvig, P. H. (1988), "Inefficient Dynamic Portfolio Strategies or How to Throw Away a Million Dollars in the Stock Market," The Review of Financial Studies, 1, 67–88.

Telmer, C. (1993), "Asset Pricing Puzzles and Incomplete Markets," Journal of Finance, 48, 1803–1832.

For an excellent text that deals with continuous time from an applications perspective, see Luenberger, D. (1998), Investments, Oxford University Press, New York. For an excellent text with a more detailed description of continuous time processes, see Dumas, B., Allaz, B. (1996), Financial Securities, Chapman and Hall, London.

Appendix 12.1: Risk-Neutral Valuation When Discounting at the Term Structure of Multiperiod Discount Bonds

Here we seek a valuation formula where we discount not at the succession of one-period rates, but at the term structure. This necessitates a different set of risk-neutral probabilities with respect to which the expectation is taken. Define the $k$-period, time-adjusted risk-neutral transition probabilities as

$\hat{\pi}^{RN}(\theta_t, \theta_{t+k}) = \frac{\pi^{RN}(\theta_t, \theta_{t+k})\, g(\theta_t, \theta_{t+k})}{q^b(\theta_t, t+k)}$,

where $\pi^{RN}(\theta_t, \theta_{t+k}) = \prod_{s=t}^{t+k-1} \pi^{RN}(\theta_s, \theta_{s+1})$, and $\{\theta_t, ..., \theta_{t+k-1}\}$ is the path of states preceding $\theta_{t+k}$.
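This definition can be checked numerically. The sketch below builds $\hat{\pi}^{RN}$ for the two-period case ($k = 2$) with state-dependent one-period rates; all numbers (rates, risk-neutral probabilities) are assumed for illustration, with $g$ the product of one-period discount factors along a path and $q^b$ the two-period discount bond price.

```python
from itertools import product

# Sketch: k=2 time-adjusted risk-neutral probabilities pi_hat with a
# state-dependent one-period rate (all numbers are assumptions).
p_up = 0.4                                   # one-step risk-neutral up probability
rate = {"0": 1.05, "u": 1.06, "d": 1.03}     # gross one-period rate by current state

paths = list(product("ud", repeat=2))

def prob(path):
    """pi^RN(theta_t, theta_{t+2}) along the path."""
    return (p_up if path[0] == "u" else 1 - p_up) * \
           (p_up if path[1] == "u" else 1 - p_up)

def g(path):
    """Product of one-period discount factors along the path."""
    return 1.0 / (rate["0"] * rate[path[0]])

q_b = sum(prob(p) * g(p) for p in paths)     # 2-period discount bond price
pi_hat = {p: prob(p) * g(p) / q_b for p in paths}

assert all(v > 0 for v in pi_hat.values())       # positivity
assert abs(sum(pi_hat.values()) - 1.0) < 1e-12   # probabilities sum to one
```

Positivity and the sum-to-one property hold by construction here, which is exactly the content of the argument that follows.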
Clearly, the $\hat{\pi}^{RN}(\;)$ are positive since $\pi^{RN}(\;) \ge 0$, $g(\theta_t, \theta_{t+k}) > 0$ and $q^b(\theta_t, t+k) > 0$. Furthermore, by Equation (12.13),

$\sum_{\theta_{t+k}} \hat{\pi}^{RN}(\theta_t, \theta_{t+k}) = \frac{1}{q^b(\theta_t, t+k)} \sum_{\theta_{t+k}} \pi^{RN}(\theta_t, \theta_{t+k})\, g(\theta_t, \theta_{t+k}) = \frac{q^b(\theta_t, t+k)}{q^b(\theta_t, t+k)} = 1$.

Let us now use this approach to price European call and put options. A European call option contract represents the right (but not the obligation) to buy some underlying asset at some prespecified price (referred to as the exercise or strike price) at some prespecified future date (the date of contract expiration). Since such a contract represents a right, its payoff is as shown in Table A12.1, where $T$ represents the time of expiration and $K$ the exercise price.

Table A12.1: Payoff Pattern – European Call Option

t: 0; t+1: 0; t+2: 0; ...; T−1: 0; T: $\max\{q^e(\theta_T, T) - K, 0\}$

Let $C_E(\theta_t, t)$ denote the period $t$, state $\theta_t$ price of the call option. Clearly,

$C_E(\theta_t, t) = E_t^{RN}\{ g(\theta_t, \tilde{\theta}_T)\, \max\{ q^e(\tilde{\theta}_T, T) - K, 0 \} \} = q^b(\theta_t, T)\, \hat{E}_t^{RN}\{ \max\{ q^e(\tilde{\theta}_T, T) - K, 0 \} \}$,

where $\hat{E}_t^{RN}$ denotes the expectations operator corresponding to the $\hat{\pi}^{RN}$. A European put option is similarly priced according to

$P_E(\theta_t, t) = E_t^{RN}\{ g(\theta_t, \tilde{\theta}_T)\, \max\{ K - q^e(\tilde{\theta}_T, T), 0 \} \} = q^b(\theta_t, T)\, \hat{E}_t^{RN}\{ \max\{ K - q^e(\tilde{\theta}_T, T), 0 \} \}$.

Chapter 13: The Arbitrage Pricing Theory

13.1 Introduction

We have made two first attempts (Chapters 10 to 12) at asset pricing from an arbitrage perspective, that is, without specifying a complete equilibrium structure. Here we try again from a different, more empirically based angle. Let us first collect a few thoughts as to the differences between an arbitrage approach and equilibrium modeling. In the context of general equilibrium theory, we make hypotheses about agents – consumers, producers, investors; in particular, we start with some form of rationality hypothesis leading to the specification of maximization problems under constraints.
We also make hypotheses about markets: Typically we assume that supply equals demand in all markets under consideration. We have repeatedly used the fact that at general equilibrium with fully informed optimizing agents, there can be no arbitrage opportunities, in other words, no possibilities to make money risklessly at zero cost. An arbitrage opportunity indeed implies that at least one agent can reach a higher level of utility without violating his/her budget constraint (since there is no extra cost). In particular, our assertion that one can price any asset (income stream) from the knowledge of Arrow-Debreu prices relied implicitly on a no-arbitrage hypothesis: with a complete set of Arrow-Debreu securities, it is possible to replicate any given income stream and hence the value of a given income stream, the price paid on the market for the corresponding asset, cannot be different from the value of the replicating portfolio of Arrow-Debreu securities. Otherwise an arbitrageur could make arbitrarily large profits by short selling large quantities of the more expensive of the two and buying the cheaper in equivalent amount. Such an arbitrage would have zero cost and be riskless. While general equilibrium implies the no-arbitrage condition, it is more restrictive in the sense of imposing a heavier structure on modeling. And the reverse implication is not true: No arbitrage opportunities1 – the fact that all arbitrage opportunities have been exploited – does not imply that a general equilibrium in all markets has been obtained. Nevertheless, or precisely for that reason, it is interesting to see how far one can go in exploiting the less restrictive hypothesis that no arbitrage opportunities are left unexploited. The underlying logic of the APT to be reviewed in this chapter is, in a sense, very similar to the fundamental logic of the Arrow-Debreu model and it is very much in the spirit of a complete market structure. 
It distinguishes itself in two major ways: First, it replaces the underlying structure based on fundamental securities paying exclusively in a given state of nature with other fundamental securities exclusively remunerating some form of risk taking. More precisely, the APT abandons the analytically powerful, but empirically cumbersome, concept of states of nature as the basis for the definition of its primitive securities. It replaces it with the hypothesis that there exists a (stable) set of factors that are essential and exhaustive determinants of all asset returns. The primitive security will then be defined as a security whose risk is exclusively determined by its association with one specific risk factor and totally immune from association with any other risk factor. The other difference with the Arrow-Debreu pricing of Chapter 8 is that the prices of the fundamental securities are not derived from primitives – supply and demand, themselves resulting from agents' endowments and preferences – but will be deduced empirically from observed asset returns without attempting to explain them. Once the price of each fundamental security has been inferred from observed return distributions, the usual arbitrage argument applied to complex securities will be made (in the spirit of Chapter 10).2

1 An arbitrage portfolio is a self-financing (zero net-investment) portfolio. An arbitrage opportunity exists if an arbitrage portfolio exists that yields non-negative cash flows in all states of nature and positive cash flows in some states (Chapter 11).

13.2 Factor Models

The main building block of the APT is a factor model, also known as a return-generating process. As discussed previously, this is the structure that is to replace the concept of states of nature. The motivation has been evoked before: States of nature are analytically convincing and powerful objects.
In practice, however, they are difficult to work with and, moreover, often not verifiable, implying that contracts cannot necessarily be written contingent on a specific state of nature. We discussed these shortcomings of the Arrow-Debreu pricing theory in Chapter 8. The temptation is thus irresistible to attack the asset pricing problem from the opposite angle and build the concept of primitive securities on an empirically more operational notion, abstracting from its potential theoretical credentials. This structure is what factor models are for.

The simplest conceivable factor model is a one-factor market model, usually labeled the Market Model, which asserts that ex-post returns on individual assets can be entirely ascribed either to their own specific stochastic components or to their common association with a single factor, which in the CAPM world would naturally be selected as the return on the market portfolio. This simple factor model can thus be summarized by the following equation (or process):3

$\tilde{r}_j = \alpha_j + \beta_j \tilde{r}_M + \tilde{\varepsilon}_j$,   (13.1)

with $E\tilde{\varepsilon}_j = 0$, $cov(\tilde{r}_M, \tilde{\varepsilon}_j) = 0$, $\forall j$, and $cov(\tilde{\varepsilon}_j, \tilde{\varepsilon}_k) = 0$, $\forall j \ne k$.

This model states that there are three components in individual returns: (1) an asset-specific constant $\alpha_j$; (2) a common influence, in this case the unique factor, the return on the market, which affects all assets in varying degrees, with $\beta_j$ measuring the sensitivity of asset $j$'s return to fluctuations in the market return; and (3) an asset-specific stochastic term $\tilde{\varepsilon}_j$ summarizing all other stochastic components of $\tilde{r}_j$ that are unique to asset $j$.

2 The arbitrage pricing theory was first developed by Ross (1976), and substantially interpreted by Huberman (1982) and Connor (1984), among others. For a presentation emphasizing practical applications, see Burmeister et al. (1994).

3 Factors are frequently measured as deviations from their mean. When this is the case, $\alpha_j$ becomes an estimate of the mean return on asset $j$.
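A sketch of how Equation (13.1) is taken to data: ordinary least squares on simulated returns (all parameter values are assumed for illustration), together with a check of the variance decomposition $\sigma_j^2 = \beta_j^2 \sigma_M^2 + \sigma_{\varepsilon_j}^2$ that the model implies.

```python
import numpy as np

# Sketch: estimating the Market Model (13.1) by OLS on simulated returns.
# alpha, beta and the return distributions are invented for illustration.
rng = np.random.default_rng(0)
T = 50000
alpha, beta = 0.002, 1.3
r_m = rng.normal(0.01, 0.05, T)              # market return
eps = rng.normal(0.0, 0.03, T)               # asset-specific component
r_j = alpha + beta * r_m + eps               # Equation (13.1)

# OLS of r_j on a constant and r_m recovers (alpha, beta).
X = np.column_stack([np.ones(T), r_m])
alpha_hat, beta_hat = np.linalg.lstsq(X, r_j, rcond=None)[0]
assert abs(beta_hat - beta) < 0.02

# Sample analogue of sigma_j^2 = beta_j^2 sigma_M^2 + sigma_eps^2:
# exact in-sample because OLS residuals are uncorrelated with r_m.
resid = r_j - alpha_hat - beta_hat * r_m
assert abs(np.var(r_j) - (beta_hat ** 2 * np.var(r_m) + np.var(resid))) < 1e-6
```

The decomposition holds exactly in-sample by construction of OLS, which is the sense in which the Market Model economizes on covariance estimation.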
Equation (13.1) has no bite (such an equation can always be written) until one adds the hypothesis $cov(\tilde{\varepsilon}_j, \tilde{\varepsilon}_k) = 0$, $j \ne k$, which signifies that all return characteristics common to different assets are subsumed in their link with the market return. If this were empirically verified, the CAPM would be the undisputed end point of asset pricing. At an empirical level, one may say that it is quite unlikely that a single-factor model will suffice.4 But the strength of the APT is that it is agnostic as to the number of underlying factors (and their identity). As we increase the number of factors, hoping that this will not require a number too large to be operational, a generalization of Equation (13.1) becomes more and more plausible. But let us for the moment maintain the hypothesis of one common factor for pedagogical purposes.5

13.2.1 About the Market Model

Besides serving as a potential basis for the APT, the Market Model, despite all its weaknesses, is also of interest on two grounds. First, it produces estimates for the β's that play a central role in the CAPM. Note, however, that estimating β's from past data alone is useful only to the extent that some degree of stationarity in the relationship between asset returns and the return on the market is present. Empirical observations suggest a fair amount of stationarity is plausible at the level of portfolios, but not of individual assets. On the other hand, estimating the β's does not require all the assumptions of the Market Model; in particular, a violation of the $cov(\tilde{\varepsilon}_i, \tilde{\varepsilon}_k) = 0$, $i \ne k$, hypothesis is not damaging.

The second source of interest in the Market Model, crucially dependent on the latter hypothesis being approximately valid, is that it permits economizing on the computation of the matrix of variances and covariances of asset returns at the heart of the MPT.
Indeed, under the Market Model hypothesis, one can write (you are invited to prove these statements):

$\sigma_j^2 = \beta_j^2 \sigma_M^2 + \sigma_{\varepsilon_j}^2$, $\forall j$,

$\sigma_{ij} = \beta_i \beta_j \sigma_M^2$, $\forall i \ne j$.

This effectively means that the information requirements for the implementation of MPT can be substantially weakened. Suppose there are $N$ risky assets under consideration. In that case the computation of the efficient frontier requires knowledge of $N$ expected returns, $N$ variances, and $(N^2 - N)/2$ covariance terms ($N^2$ is the total number of entries in the matrix of variances and covariances; take away the $N$ variance/diagonal terms and divide by 2 since $\sigma_{ij} = \sigma_{ji}$, $\forall i, j$). Working via the Market Model, on the other hand, requires estimating Equation (13.1) for the $N$ risky returns, producing estimates of the $N$ $\beta_j$'s and the $N$ $\sigma_{\varepsilon_j}^2$'s, and estimating the variance of the market return, that is, $2N + 1$ information items.

4 Recall the difficulty in constructing the empirical counterpart of $M$.

5 Fama (1973), however, demonstrates that in its form (13.1) the Market Model is inconsistent in the following sense: the fact that the market is, by definition, the collection of all individual assets implies an exact linear relationship between the disturbances $\varepsilon_j$; in other words, when the single factor is interpreted to be the market, the hypothesis $cov(\tilde{\varepsilon}_j, \tilde{\varepsilon}_k) = 0$, $\forall j \ne k$, cannot be strictly valid. While we ignore this criticism in view of our purely pedagogical objective, it is a fact that if a single-factor model had a chance to be empirically verified (in the sense of all the assumptions in (13.1) being confirmed), the unique factor could not be the market.

13.3 The APT: Statement and Proof

13.3.1 A Quasi-Complete Market Hypothesis

To a return-generating process such as the Market Model, the APT superposes a second major hypothesis that is akin to assuming that the markets are "quasi-complete".
What is needed is the existence of a rich market structure with a large number of assets with different characteristics and a minimum number of trading restrictions. This market structure, in particular, makes it possible to form a portfolio $P$ with the following three properties:

Property 1: $P$ has zero cost; in other words, it requires no investment. This is the first requirement of an arbitrage portfolio. Let us denote $x_i$ as the value of the position in the $i$th asset in portfolio $P$. Portfolio $P$ is then fully described by the vector $x^T = (x_1, x_2, ..., x_N)$ and the zero-cost condition becomes

$\sum_{i=1}^{N} x_i = 0 = x^T \cdot \mathbf{1}$,

with $\mathbf{1}$ the (column) vector of 1's. (Positive positions in some assets must be financed by short sales of others.)

Property 2: $P$ has zero sensitivity (zero beta) to the common factor:6

$\sum_{i=1}^{N} x_i \beta_i = 0 = x^T \cdot \beta$.

Property 3: $P$ is a well-diversified portfolio. The specific risk of $P$ is (almost) totally eliminated:

$\sum_{i=1}^{N} x_i^2 \sigma_{\varepsilon_i}^2 \cong 0$.

The APT builds on the assumed existence of such a portfolio, which requires a rich market structure.

6 Remember that the beta of a portfolio is the weighted sum of the betas of the assets in the portfolio.

13.3.2 Statement and Proof of the APT

The APT relationship is the direct consequence of the factor structure hypothesis, the existence of a portfolio $P$ satisfying these conditions, and the no-arbitrage assumption. Given that returns have the structure of Equation (13.1), Properties 2 and 3 imply that $P$ is riskless. The fact that $P$ has zero cost (Property 1) then entails that an arbitrage opportunity will exist unless

$\bar{r}_P = 0 = x^T \cdot \bar{r}$.   (13.2)

The APT theorem states, as a consequence of this succession of statements, that there must exist scalars $\lambda_0$, $\lambda_1$ such that

$\bar{r} = \lambda_0 \cdot \mathbf{1} + \lambda_1 \beta$, or $\bar{r}_i = \lambda_0 + \lambda_1 \beta_i$ for all assets $i$.   (13.3)

This is the main equation of the APT. Equation (13.3) and Properties 1 and 2 are statements about four vectors: $x$, $\beta$, $\mathbf{1}$, and $\bar{r}$. Property 1 states that $x$ is orthogonal to $\mathbf{1}$.
Property 2 asserts that $x$ is orthogonal to $\beta$. Together these statements imply a geometric configuration that we can easily visualize if we fix the number of risky assets at $N = 3$, which implies that all vectors have dimension 3. This is illustrated in Figure 13.1.

Insert Figure 13.1

Equation (13.2) – no arbitrage – implies that $x$ and $\bar{r}$ are orthogonal. But this means that the vector $\bar{r}$ must lie in the plane formed by $\mathbf{1}$ and $\beta$, or, that $\bar{r}$ can be written as a linear combination of $\mathbf{1}$ and $\beta$, as Equation (13.3) asserts. More generally, one can deduce from the triplet

$\sum_{i=1}^{N} x_i = \sum_{i=1}^{N} x_i \beta_i = \sum_{i=1}^{N} x_i \bar{r}_i = 0$

that there exist scalars $\lambda_0$, $\lambda_1$ such that $\bar{r}_i = \lambda_0 + \lambda_1 \beta_i$ for all $i$. This is a consequence of the orthogonal projection of the vector $\bar{r}$ onto the subspace spanned by $\mathbf{1}$ and $\beta$.

13.3.3 Meaning of λ0 and λ1

Suppose that there exists a risk-free asset or, alternatively, that the sufficiently rich market structure hypothesis permits constructing a fully diversified portfolio with zero sensitivity to the common factor (but positive investment). Then

$\bar{r}_f = r_f = \lambda_0$.

That is, $\lambda_0$ is the return on the risk-free asset or the risk-free portfolio. Now let us compose a portfolio $Q$ with unitary sensitivity to the common factor, $\beta_Q = 1$. Then, applying the APT relation, one gets

$\bar{r}_Q = r_f + \lambda_1 \cdot 1$.

Thus, $\lambda_1 = \bar{r}_Q - r_f$, the excess return on the pure-factor portfolio $Q$. It is now possible to rewrite Equation (13.3) as

$\bar{r}_i = r_f + \beta_i (\bar{r}_Q - r_f)$.   (13.4)

If, as we have assumed, the unique common factor is the return on the market portfolio, in which case $Q = M$ and $\tilde{r}_Q \equiv \tilde{r}_M$, then Equation (13.4) is simply the CAPM equation:

$\bar{r}_i = r_f + \beta_i (\bar{r}_M - r_f)$.

13.4 Multifactor Models and the APT

The APT approach is generalizable to any number of factors. It does not, however, provide any clue as to what these factors should be, or any particular indication as to how they should be selected. This is both its strength and its weakness.
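Before moving to several factors, the one-factor relations can be illustrated numerically. The sketch below, with assumed numbers, places expected returns exactly on the line (13.3), recovers $\lambda_0$ and $\lambda_1$, and confirms that a zero-cost, zero-beta portfolio earns a zero expected return.

```python
import numpy as np

# Sketch: one-factor APT with assumed rf and factor premium.
rf, lam1 = 0.03, 0.05
beta = np.array([0.5, 1.0, 1.5])
r_bar = rf + lam1 * beta                    # Equation (13.3)/(13.4)

# Recover lambda_0, lambda_1 by projecting r_bar on (1, beta).
X = np.column_stack([np.ones_like(beta), beta])
lam0_hat, lam1_hat = np.linalg.lstsq(X, r_bar, rcond=None)[0]
assert abs(lam0_hat - rf) < 1e-9 and abs(lam1_hat - lam1) < 1e-9

# A zero-cost (sum x = 0), zero-beta (x . beta = 0) portfolio: x = (1, -2, 1).
x = np.array([1.0, -2.0, 1.0])
assert abs(x.sum()) < 1e-12 and abs(x @ beta) < 1e-12
assert abs(x @ r_bar) < 1e-9                # no arbitrage: zero expected return
```

The last assertion is exactly Equation (13.2): any $x$ orthogonal to both $\mathbf{1}$ and $\beta$ must also be orthogonal to $\bar{r}$.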
Suppose we can agree on a two-factor model:

$\tilde{r}_j = a_j + b_{j1} \tilde{F}_1 + b_{j2} \tilde{F}_2 + \tilde{e}_j$,   (13.5)

with $E\tilde{e}_j = 0$, $cov(\tilde{F}_1, \tilde{e}_j) = cov(\tilde{F}_2, \tilde{e}_j) = 0$, $\forall j$, and $cov(\tilde{e}_j, \tilde{e}_k) = 0$, $\forall j \ne k$.

As was the case for Equation (13.1), Equation (13.5) implies that one cannot reject, empirically, the hypothesis that the ex-post return on an asset $j$ has two stochastic components: one specific, $\tilde{e}_j$, and one systematic, $b_{j1}\tilde{F}_1 + b_{j2}\tilde{F}_2$. What is new is that the systematic component is not viewed as the result of a single common factor influencing all assets. Common or systematic issues may now be traced to two fundamental factors affecting, in varying degrees, the returns on individual assets (and thus on portfolios as well). Without loss of generality we may assume that these factors are uncorrelated. As before, an expression such as Equation (13.5) is useful only to the extent that it describes a relationship that is relatively stable over time. The two factors $F_1$ and $F_2$ must really summarize all that is common in individual asset returns.

What could these fundamental factors be? In an important article, Chen, Roll, and Ross (1986) propose that the systematic forces influencing returns must be those affecting discount factors and expected cash flows. They then isolate a set of candidates such as industrial production, expected and unexpected inflation, measures of the risk premium and the term structure, and even oil prices. In the end, they conclude that the most significant determinants of asset returns are industrial production (affecting cash flow expectations), changes in the risk premium measured as the spread between the yields on low- and high-risk corporate bonds (witnessing changes in the market risk appetite), and twists in the yield curve, as measured by the spread between short- and long-term interest rates (representing movements in the market rate of impatience).
Measures of unanticipated inflation and changes in expected inflation also play a (less important) role. Let us follow, in a simplified way, Chen, Roll, and Ross's lead and decide that our two factors are industrial production ($F_1$) and changes in the risk premium ($F_2$). How would we go about implementing the APT? First we have to measure our two factors. Let $IP(t)$ denote the rate of industrial production in month $t$; then

$MP(t) = \log IP(t) - \log IP(t-1)$

is the monthly growth rate of $IP$. This is our first explanatory variable. To measure changes in the risk premium, let us define

$UPR(t) =$ "Baa and under" bond portfolio return$(t) - LGB(t)$,

where $LGB(t)$ is the return on a portfolio of long-term government bonds. With these definitions we can rewrite Equation (13.5) as

$\tilde{r}_{jt} = a_j + b_{j1} MP(t) + b_{j2} UPR(t) + \tilde{e}_{jt}$.

The $b_{jk}$, $k = 1, 2$, are often called factor loadings. They can be estimated directly by multivariate regression. Alternatively, one could construct pure factor portfolios – well-diversified portfolios mimicking the underlying factors – and compute their correlation with asset $j$. The pure factor portfolio $P_1$ would be a portfolio with a unit loading on the first factor and zero loading and specific risk otherwise, $b_{P_1 1} = 1$ and $b_{P_1 2} = \sigma_{e_{P_1}} = 0$; portfolio $P_2$ would be defined similarly to track the stochastic behavior of $UPR(t)$. Let us go on hypothesizing (wrongly, according to Chen, Roll, and Ross) that this two-factor model satisfies the necessary assumptions ($cov(\tilde{e}_i, \tilde{e}_j) = 0$, $\forall i \ne j$) and further assume the existence of a risk-free portfolio $P_f$ with zero sensitivity to either of our two factors and zero specific risk. Then the APT states that there exist scalars $\lambda_0$, $\lambda_1$, $\lambda_2$ such that

$\bar{r}_j = \lambda_0 + \lambda_1 b_{j1} + \lambda_2 b_{j2}$.

That is, the expected return on an arbitrary asset $j$ is perfectly and completely described by a linear function of asset $j$'s factor loadings $b_{j1}$, $b_{j2}$. This can appropriately be viewed as a (two-factor) generalization of the SML.
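Estimating the loadings by multivariate regression can be sketched as follows. The factor series and coefficients are simulated assumptions in the spirit of the MP/UPR example, not actual data.

```python
import numpy as np

# Sketch: two-factor loadings b_j1, b_j2 by multivariate OLS on simulated
# data (all parameters invented for illustration).
rng = np.random.default_rng(1)
T = 4000
MP  = rng.normal(0.002, 0.01, T)        # industrial production growth proxy
UPR = rng.normal(0.001, 0.02, T)        # risk-premium change proxy
a, b1, b2 = 0.001, 0.9, 0.4
r_j = a + b1 * MP + b2 * UPR + rng.normal(0.0, 0.03, T)   # Equation (13.5)

# Regress r_j on a constant and the two factor series.
X = np.column_stack([np.ones(T), MP, UPR])
a_hat, b1_hat, b2_hat = np.linalg.lstsq(X, r_j, rcond=None)[0]

assert abs(b1_hat - b1) < 0.2
assert abs(b2_hat - b2) < 0.1
```

With uncorrelated simulated factors the regression recovers the loadings up to sampling error, which is all the "pure factor portfolio" construction is meant to deliver by other means.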
Furthermore, the coefficients of the linear function are

$\lambda_0 = r_f$, $\lambda_1 = \bar{r}_{P_1} - r_f$, $\lambda_2 = \bar{r}_{P_2} - r_f$,

where $P_1$ and $P_2$ are our pure factor portfolios.

The APT agrees with the CAPM that the risk premium on an asset, $\bar{r}_j - \lambda_0$, is not a function of its specific or diversifiable risk. It potentially disagrees with the CAPM in the identification of the systematic risk. The APT decomposes the systematic risk into elements of risk associated with a particular asset's sensitivity to a few fundamental common factors.

Note the parallelism with the Arrow-Debreu pricing approach. In both contexts, every individual asset or portfolio can be viewed as a complex security, or a combination of primitive securities: Arrow-Debreu securities in one case, the pure factor portfolios in the other. Once the prices of the primitive securities are known, it is a simple step to compose replicating portfolios and, by a no-arbitrage argument, price complex securities and arbitrary cash flows. The difference, of course, resides in the identification of the primitive security. While the Arrow-Debreu approach sticks to the conceptually clear notion of states of nature, the APT takes the position that there exist a few common and stable sources of risk and that they can be empirically identified. Once the corresponding risk premia are identified, by observing the market-determined premia on the primitive securities (the portfolios with unit sensitivity to a particular factor and zero sensitivity to all others), the pricing machinery can be put to work. Let us illustrate. In our two-factor example, a security $j$ with, say, $b_{j1} = 0.8$ and $b_{j2} = 0.4$ is like a portfolio with proportions of 0.8 of the pure portfolio $P_1$, 0.4 of pure portfolio $P_2$, and consequently proportion −0.2 in the riskless asset.
By our usual (no-arbitrage) argument, the expected rate of return on that security must be

$\bar{r}_j = -0.2\, r_f + 0.8\, \bar{r}_{P_1} + 0.4\, \bar{r}_{P_2}$
$\quad\; = -0.2\, r_f + 0.8\, r_f + 0.4\, r_f + 0.8 (\bar{r}_{P_1} - r_f) + 0.4 (\bar{r}_{P_2} - r_f)$
$\quad\; = r_f + 0.8 (\bar{r}_{P_1} - r_f) + 0.4 (\bar{r}_{P_2} - r_f)$
$\quad\; = \lambda_0 + b_{j1} \lambda_1 + b_{j2} \lambda_2$.

The APT equation can thus be seen as the immediate consequence of the linkage between pure factor portfolios and complex securities in an arbitrage-free context. The reasoning is directly analogous to our derivation of the value additivity theorem in Chapter 10 and leads to a similar result: Diversifiable risk is not priced in a complete (or quasi-complete) market world.

While potentially more general, the APT does not necessarily contradict the CAPM. That is, it may simply provide another, more disaggregated, way of writing the expected return premium associated with systematic risk, and thus a decomposition of the latter in terms of its fundamental elements. Clearly the two theories have the same implications if (keeping with our two-factor model; the generalization is trivial):

$\beta_j (\bar{r}_M - r_f) = b_{j1} (\bar{r}_{P_1} - r_f) + b_{j2} (\bar{r}_{P_2} - r_f)$.   (13.6)

Let $\beta_{P_1}$ be the (market) beta of the pure portfolio $P_1$, and similarly for $\beta_{P_2}$. Then, if the CAPM is valid, not only is the LHS of Equation (13.6) the expected risk premium on asset $j$, but we also have

$\bar{r}_{P_1} - r_f = \beta_{P_1} (\bar{r}_M - r_f)$ and $\bar{r}_{P_2} - r_f = \beta_{P_2} (\bar{r}_M - r_f)$.

Thus the APT expected risk premium may be written as

$b_{j1} [\beta_{P_1} (\bar{r}_M - r_f)] + b_{j2} [\beta_{P_2} (\bar{r}_M - r_f)] = (b_{j1} \beta_{P_1} + b_{j2} \beta_{P_2}) (\bar{r}_M - r_f)$,

which is the CAPM equation provided

$\beta_j = b_{j1} \beta_{P_1} + b_{j2} \beta_{P_2}$.

In other words, CAPM and APT have identical implications if the sensitivity of an arbitrary asset $j$ to the market portfolio fully summarizes its relationship with the two underlying common factors.
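The replication arithmetic and the consistency condition (13.6) can be verified directly. The rates and betas below are assumed numbers, chosen only to make the algebra concrete.

```python
# Sketch: the two-factor replication arithmetic (assumed rates). A security
# with loadings b1=0.8, b2=0.4 is a portfolio of 0.8 of P1, 0.4 of P2, and
# -0.2 of the riskless asset.
rf, rP1, rP2 = 0.03, 0.08, 0.06
b1, b2 = 0.8, 0.4

r_replicating = -0.2 * rf + b1 * rP1 + b2 * rP2
r_apt = rf + b1 * (rP1 - rf) + b2 * (rP2 - rf)      # lambda form
assert abs(r_replicating - r_apt) < 1e-12

# CAPM consistency (13.6): if rP_k - rf = betaP_k * (rM - rf), the two
# premia coincide whenever beta_j = b1*betaP1 + b2*betaP2.
rM, betaP1, betaP2 = 0.07, 1.25, 0.75
rP1c = rf + betaP1 * (rM - rf)
rP2c = rf + betaP2 * (rM - rf)
beta_j = b1 * betaP1 + b2 * betaP2
lhs = beta_j * (rM - rf)
rhs = b1 * (rP1c - rf) + b2 * (rP2c - rf)
assert abs(lhs - rhs) < 1e-12
```

The first assertion is just the chain of equalities displayed above; the second holds identically in the parameters, which is why (13.6) reduces to the single condition on $\beta_j$.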
In that case, the CAPM would be another, more synthetic, way of writing the APT.7 In reality, of course, there are reasons to think that the APT with an arbitrary number of factors will always do at least as well in identifying the sources of systematic risk as the CAPM. And indeed Chen, Roll, and Ross observe that their five factors cover the market return in the sense that adding the return on the market to their preselected five factors does not help in explaining expected returns on individual assets.

13.5 Advantage of the APT for Stock or Portfolio Selection

The APT helps to identify the sources of systematic risk, or to split systematic risk into its fundamental components. It can thus serve as a tool for helping the portfolio manager modulate his risk exposure. For example, studies show that, among U.S. stocks, the stocks of chemical companies are much more sensitive to short-term inflation risk than stocks of electrical companies. This would be compatible with both having the same exposure to variations in the market return (the same beta). Such information can be useful in at least two ways. When managing the portfolio of an economic agent whose natural position is very sensitive to short-term inflation risk, chemical stocks may be a lot less attractive than electricals, all other things equal (even though they may both have the same market beta). Second, conditional expectations, or accurate predictions, of short-term inflation may be a lot easier to achieve than predictions of the market's return. Such a refining of the information requirements needed to take aggressive positions can, in that context, be of great use.

13.6 Conclusions

We have now completed our review of asset pricing theories. At this stage it may be useful to draw a final distinction between the equilibrium theories covered in Chapters 7, 8, and 9 and the theories based on arbitrage, such as the Martingale pricing theory and the APT.
Equilibrium theories aim at providing a complete theory of value on the basis of primitives: preferences, technology, and market structure. They are inevitably heavier, but their weight is proportional to their ambition. By contrast, arbitrage-based theories can only provide a relative theory of value. With what may be viewed as a minimum of assumptions, they

• offer bounds on option values as a function of the price of the underlying asset, the stochastic behavior of the latter being taken as given (and unexplained);

• permit estimating the value of arbitrary cash flows or securities using risk-neutral measures extracted from the market prices of a set of fundamental securities, or, in the same vein, using Arrow-Debreu prices extracted from a complete set of complex securities prices;

• explain expected returns on any asset or cash flow stream once the price of risk associated with pure factor portfolios has been estimated from market data on the basis of a postulated return-generating process.

Arbitrage-based theories currently have the upper hand in practitioners' circles, where their popularity far outstrips the degree of acceptance of equilibrium theories. This, possibly temporary, state of affairs may be interpreted as a measure of our ignorance and the resulting need to restrain our ambitions.

7 The observation in footnote 5, however, suggests this could be true as an approximation only.

References

Burmeister, E., Roll, R., Ross, S.A. (1994), "A Practitioner's Guide to Arbitrage Pricing Theory," in A Practitioner's Guide to Factor Models, Research Foundation of the Institute of Chartered Financial Analysts, Charlottesville, VA.

Chen, N.F., Roll, R., Ross, S.A. (1986), "Economic Forces and the Stock Market," Journal of Business, 59(3), 383–404.

Connor, G. (1984), "A Unified Beta Pricing Theory," Journal of Economic Theory, 34(1).

Fama, E.F. (1973), "A Note on the Market Model and the Two-Parameter Model," Journal of Finance, 28(5), 1181–1185.

Huberman, G.
(1982), "A Simple Approach to Arbitrage Pricing," Journal of Economic Theory, 28, 183–191.

Ross, S.A. (1976), "The Arbitrage Theory of Capital Asset Pricing," Journal of Economic Theory, 13, 341–360.

Part V: Extensions

Chapter 14: Portfolio Management in the Long Run

14.1 Introduction

The canonical portfolio problem (Section 5.1) and the MPT portfolio selection problem embedded in the CAPM are both one-period utility-of-terminal-wealth maximization problems. As such, the advice to investors implicit in these theories is astonishingly straightforward:

(i) Be well diversified. Conceptually this recommendation implies that the risky portion of an investor's portfolio should resemble (be perfectly positively correlated with) the true market portfolio M. In practice, it usually means holding the risky component of invested wealth as a set of stock index funds, each one representing the stock market of a particular major market capitalization country, with the relative proportions dependent upon the relevant ex-ante variance-covariance matrix estimated from recent historical data.

(ii) Be on the capital market line. That is, the investor should allocate his wealth between risk-free assets and the aforementioned major market portfolio in proportions that are consistent with his subjective risk tolerance. Implicit in this second recommendation is that the investor first estimates his coefficient of relative risk aversion as per Section 4.5, and then solves a joint savings-portfolio allocation problem of the form illustrated in Section 5.6.3. The risk-free rate used in these calculations is customarily a one-year T-bill rate in the U.S. or its analogue elsewhere.

But what should the investor do next period, after this period's risky portfolio return realization has been observed? Our one-period theory has nothing to say on this score except to invite the investor to repeat the above two-step process, possibly using an updated variance-covariance matrix and an updated risk-free rate.
This is what is meant by the investor behaving myopically. Yet we are uneasy about leaving the discussion at this level. Indeed, a number of important considerations seem purposefully to be ignored by following such a set of recommendations.

1) Equity return distributions have historically evolved in a pattern that is partially predictable. Suppose, for example, that a high return realization in the current period is on average followed by a low return realization in the subsequent period. This variation in conditional returns might reasonably be expected to influence intertemporal portfolio composition.

2) While known ex-ante relative to the start of a period, the risk free rate also varies through time (for the period 1928-1985 the standard deviation of the U.S. T-bill rate is 5.67%). From the perspective of a long term U.S. investor, the one period T-bill rate no longer represents a truly risk free return. Can any asset be viewed as risk free from a multiperiod perspective?

3) Investors typically receive labor income, and this fact will likely affect both the quantity of investable savings and the risky-risk free portfolio composition decision. The latter possibility follows from the observation that labor income may be viewed as the "dividend" on an implicit non-tradeable human capital asset, whose value may be differentially correlated with risky assets in the investor's financial wealth portfolio.1 If labor income were risk free (tenured professors!), the presence of a high value risk free asset in the investor's overall wealth portfolio will likely tilt his security holdings in favor of a greater proportion in risky assets than would otherwise be the case.

4) There are other life-cycle considerations: savings for the educational expenses of children, the gradual disappearance of the labor income asset as retirement approaches, etc. How do these obligations and events impact portfolio choice?

5) There is also the issue of real estate.
Not only does real estate (we are thinking of owner-occupied housing for the moment) provide a risk free service flow, but it is also expensive for an investor to alter his stock of housing. How should real estate figure into an investor's multiperiod investment plan?

6) Other considerations abound. There are substantial taxes and transactions costs associated with rebalancing a portfolio of securities. Taking these costs into account, how frequently should a long term investor optimally alter his portfolio's composition?

In this chapter we propose to present some of the latest research regarding these issues. Our perspective is one in which investors live for many periods (in the case of private universities, foundations or insurance companies, it is reasonable to postulate an infinite lifetime). For the moment, we will set aside the issue of real estate and explicit transactions costs, and focus on the problem of a long-lived investor confronted with jointly deciding, on a period by period basis, not only how much he should save and consume out of current income, but also the mix of assets, risky and risk free, in which his wealth should be invested. In its full generality, the problem confronting a multiperiod investor-saver with outside labor income is thus:

$$\max_{\{a_t,S_t\}} E\left[\sum_{t=0}^{T}\delta^t U(C_t)\right] \qquad (14.1)$$

$$\text{s.t. } C_T = S_{T-1}a_{T-1}(1+\tilde{r}_T) + S_{T-1}(1-a_{T-1})(1+r_{f,T}) + \tilde{L}_T, \quad t = T$$

$$C_t + S_t \le S_{t-1}a_{t-1}(1+\tilde{r}_t) + S_{t-1}(1-a_{t-1})(1+r_{f,t}) + \tilde{L}_t, \quad 1 \le t \le T-1$$

$$C_0 + S_0 \le Y_0 + \tilde{L}_0, \quad t = 0$$

where $\tilde{L}_t$ denotes the investor's (possibly uncertain) period t labor income, and $\tilde{r}_t$ represents the period t return on the risky asset, which we shall understand

1 In particular, the value of an investor's labor income asset is likely to be highly correlated with the return on the stock of the firm with whom he is employed. Basic intuition would suggest that the stock of one's employer should not be held in significant amounts from a wealth management perspective.
to mean a well diversified stock portfolio.2 Problem (14.1) departs from our earlier notation in a number of ways that will be convenient for developments later in this chapter; in particular, $C_t$ and $S_t$ denote, respectively, period t consumption and savings, rather than their lower case analogues (as in Chapter 5). The fact that the risk free rate is indexed by t admits the possibility that this quantity, although known at the start of a period, can vary from one period to the next. Lastly, $a_t$ will denote the proportion of the investor's savings assigned to the risky asset (rather than the absolute amount as before). All other notation is standard; problem (14.1) is simply the multiperiod version of the portfolio problem in Section 5.6.3 augmented by the introduction of labor income.

In what follows we will also assume that all risky returns are lognormally distributed, and that the investor's $U(C_t)$ is of the power utility CRRA class. The latter is needed to make certain that risk aversion is independent of wealth. Although investors have become enormously wealthier over the past 200 years, risk free rates and the return premium on stocks have not changed markedly, facts otherwise inconsistent with risk aversion dependent on wealth.

In its full generality, problem (14.1) is both very difficult to solve and begrudging of intuition. We thus restrict its scope and explore a number of special cases. The natural place to begin is to explore the circumstances under which the myopic solution of Section 5.3 carries over to the dynamic context of problem (14.1).

14.2 The Myopic Solution

With power utility, an investor's optimal savings to wealth ratio will be constant, so the key to a fully myopic decision rule will lie in the constancy of the ratio $a_t$. Intuitively, if the same portfolio decisions are to be made, a natural sufficient condition would be to guarantee that the investor is confronted by the same opportunities on a period by period basis.
Accordingly, we assume the return environment is not changing through time; in other words, that $r_{f,t} \equiv r_f$ is constant and $\{\tilde{r}_t\}$ is independently and identically distributed. These assumptions guarantee that future prospects look the same period after period. Further exploration mandates that $L_t \equiv 0$ (with constant $r_f$, the value of this asset will otherwise be monotonically declining, which is an implicit change in future wealth). We summarize these considerations as:

Theorem 14.1 (Merton, 1971): Consider the canonical multiperiod consumption-saving-portfolio allocation problem (14.1); suppose $U(\cdot)$ displays CRRA, $r_f$ is constant and $\{\tilde{r}_t\}$ is i.i.d. Then the ratio $a_t$ is time invariant.3

This is an important result in the following sense. It delineates the conditions under which a pure static portfolio choice analysis is generalizable to a multiperiod context. The optimal portfolio choice – in the sense of the allocation decision between the risk free and the risky asset – defined in a static one period context will continue to characterize the optimal portfolio decision in the more natural multiperiod environment. The conditions which are imposed are easy to understand: if the returns on the risky asset were not independently distributed, today's realization of the risky return would provide information about the future return distribution, which would almost surely affect the allocation decision. Suppose, for example, that returns are positively correlated. Then a good realization today would suggest high returns are more likely again tomorrow.

2 This portfolio might be the market portfolio M but not necessarily. Consider the case in which the investor's labor income is paid by one of the firms in M. It is likely that this particular firm's shares would be underweighted (relative to M) in the investor's portfolio.
It would be natural to take this into account by, say, increasing the share of the risky asset in the portfolio (beware, however, that, as the first sections of Chapter 5 illustrate, without extra assumptions on the shape of the utility function – beyond risk aversion – the more intuitive result may not generally obtain; we will be reminded of this later in this chapter where, in particular, the log utility agent will stand out as a reference). The same can be said if the risk free rate is changing through time: in a period of high risk free rates, the riskless asset would be more attractive, all other things equal.

The need for the other assumption – the CRRA utility specification – is a direct consequence of Theorem 5.5. With a utility form other than CRRA, Theorem 5.5 tells us that the share of wealth invested in the risky asset varies with the "initial" wealth level, that is, the wealth level carried over from the last period. But in a multiperiod context, the investable wealth, that is, the savings level, is sure to be changing over time, increasing when realized returns are favorable and decreasing otherwise. With a non-CRRA utility function, optimal portfolio allocations would consistently be affected by these changes.

Now let us illustrate the power of these ideas on an important practical problem. Consider the problem of an individual investor saving for retirement: at each period he must decide what fraction of his already accumulated wealth should be invested in stocks (understood to mean a well diversified portfolio of risky assets) and risk free bonds for the next investment period. We will maintain the $L_t \equiv 0$ assumption.
Popular wisdom in this area can be summarized in the following three assertions:

(1) Early in life the investor should invest nearly all of his wealth in stocks (stocks have historically outperformed risk free assets over long (20 year) periods), while gradually shifting almost entirely into risk free instruments as retirement approaches in order to avoid the possibility of a catastrophic loss.

(2) If an investor is saving for a target level of wealth (such as, in the U.S., college tuition payments for children), he should gradually reduce his holdings in stocks as his wealth approaches the target level in order to minimize the risk of a shortfall due to an unexpected market downturn.

(3) Investors who are working and saving from their labor income should rely more heavily on stocks early in their working lives, not only because of the historically higher returns that stocks provide but also because bad stock market returns, early on, can be offset by increased saving out of labor income in later years.

Following Jagannathan and Kocherlakota (1996), we wish to subject these assertions to the discipline imposed by a rigorous modeling perspective. Let us maintain the assumptions of Theorem 14.1: the risk free rate is constant, stock returns $\{\tilde{r}_t\}$ are i.i.d., and the investor's utility function assumes the standard CRRA form. To evaluate assertion (1), let us further simplify Problem (14.1) by abstracting away from the consumption-savings problem. This amounts to assuming that the investor seeks to maximize the utility of his terminal wealth, $Y_T$, in period T, the planned conclusion of his working life. As a result, $S_t = Y_t$ for every period $t < T$ (no intermediate consumption).

3 If the investor's period utility is log, it is possible to relax the independence assumption. This important observation, first made by Samuelson (1969), will be confirmed later on in this chapter.
Under CRRA we know that the investor will invest the same fraction of his wealth in risky assets every period (disproving the assertion), but it is worthwhile to see how this comes about in a simple multiperiod setting. Let $\tilde{r}$ denote the (invariant) risky return distribution; the investor solves:

$$\max_{\{a_t\}} E\left[\frac{(Y_T)^{1-\gamma}}{1-\gamma}\right]$$

$$\text{s.t. } Y_T = a_{T-1}Y_{T-1}(1+\tilde{r}_T) + (1-a_{T-1})Y_{T-1}(1+r_f), \quad t = T$$

$$Y_t = a_{t-1}Y_{t-1}(1+\tilde{r}_t) + (1-a_{t-1})Y_{t-1}(1+r_f), \quad 1 \le t \le T-1$$

$$Y_0 \text{ given.}$$

Problems of this type are most appropriately solved by working backwards: first solving for the $T-1$ decision, then solving for the $T-2$ decision conditional on the $T-1$ decision, and so on. In period $T-1$ the investor solves:

$$\max_{a_{T-1}} E\left\{(1-\gamma)^{-1}[a_{T-1}Y_{T-1}(1+\tilde{r}) + (1-a_{T-1})Y_{T-1}(1+r_f)]^{1-\gamma}\right\}$$

The solution to this problem, $a_{T-1} \equiv \hat{a}$, satisfies the first order condition

$$E\left\{[\hat{a}(1+\tilde{r}) + (1-\hat{a})(1+r_f)]^{-\gamma}(\tilde{r} - r_f)\right\} = 0.$$

As expected, because of the CRRA assumption the optimal fraction invested in stocks is independent of the period $T-1$ wealth level. Given this result, we can work backwards. In period $T-2$, the investor rebalances his portfolio, knowing that in $T-1$ he will invest the fraction $\hat{a}$ in stocks. As such, this problem becomes:

$$\max_{a_{T-2}} E\left\{(1-\gamma)^{-1}\left([a_{T-2}Y_{T-2}(1+\tilde{r}) + (1-a_{T-2})Y_{T-2}(1+r_f)][\hat{a}(1+\tilde{r}) + (1-\hat{a})(1+r_f)]\right)^{1-\gamma}\right\} \qquad (14.2)$$

Because stock returns are i.i.d., this objective function may be written as the product of expectations:

$$\max_{a_{T-2}} E\left\{(1-\gamma)^{-1}[a_{T-2}Y_{T-2}(1+\tilde{r}) + (1-a_{T-2})Y_{T-2}(1+r_f)]^{1-\gamma}\right\} E\left[\hat{a}(1+\tilde{r}) + (1-\hat{a})(1+r_f)\right]^{1-\gamma} \qquad (14.3)$$

Written in this way, the structure of the problem is no different from the prior one, and the solution is again $a_{T-2} \equiv \hat{a}$. Repeating the same argument, it must be the case that $a_t = \hat{a}$ in every period, a result that depends critically not only on the CRRA assumption (wealth factors out of the first order condition) but also on the independence assumption.
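The backward induction above can be checked numerically. The following sketch (all parameter values – the two-point return distribution, $\gamma = 5$, $r_f = 2\%$ – are illustrative assumptions, not values from the text) solves the period $T-1$ first order condition by bisection and then maximizes the two-period objective at $T-2$ by grid search; the two optimal shares coincide, as Theorem 14.1 asserts.

```python
# Numerical check of myopia under Theorem 14.1's conditions: with CRRA utility,
# a constant risk-free rate, and i.i.d. risky returns, the optimal risky share
# chosen at T-2 equals the share chosen at T-1.  Illustrative parameters only.

GAMMA = 5.0                                # CRRA coefficient (assumed)
RF = 0.02                                  # constant risk-free rate (assumed)
STATES = [(0.5, 0.25), (0.5, -0.05)]       # (probability, risky return) pairs

def foc(a):
    """E{[a(1+r) + (1-a)(1+rf)]^(-gamma) (r - rf)} -- the first order condition."""
    return sum(p * (a * (1 + r) + (1 - a) * (1 + RF)) ** (-GAMMA) * (r - RF)
               for p, r in STATES)

def bisect(f, lo, hi, tol=1e-12):
    """Root of a decreasing function f with f(lo) > 0 > f(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

a_T1 = bisect(foc, 0.0, 1.0)               # optimal share in period T-1

def utility_T2(a):
    """Two-period expected utility at T-2, given that a_T1 is used at T-1."""
    total = 0.0
    for p1, r1 in STATES:                  # period T-1 risky return
        for p2, r2 in STATES:              # period T risky return (i.i.d.)
            growth = ((a * (1 + r1) + (1 - a) * (1 + RF)) *
                      (a_T1 * (1 + r2) + (1 - a_T1) * (1 + RF)))
            total += p1 * p2 * growth ** (1 - GAMMA) / (1 - GAMMA)
    return total

# Grid search over the T-2 share: the maximizer coincides with a_T1.
a_T2 = max((i / 10000 for i in range(10001)), key=utility_T2)
print(f"a(T-1) = {a_T1:.4f}, a(T-2) = {a_T2:.4f}")
```

Because the i.i.d. assumption lets the objective (14.2) factor as in (14.3), the grid-search maximizer at $T-2$ lands on the same value as the $T-1$ root, up to grid resolution.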
The risky return realized in any period does not alter our belief about future return distributions. There is no meaningful difference between the long run (many periods) and the short run (one period): agents invest the same fraction in stocks irrespective of their portfolio's performance history. Assertion (1) is clearly not generally valid.

To evaluate our second assertion, and following again Jagannathan and Kocherlakota (1996), let us modify the agent's utility function to be of the form

$$U(Y_T) = \begin{cases} \dfrac{(Y_T - \bar{Y})^{1-\gamma}}{1-\gamma} & \text{if } Y_T \ge \bar{Y} \\ -\infty & \text{if } Y_T < \bar{Y} \end{cases}$$

where $\bar{Y}$ is the target level of wealth. Under this formulation it is absolutely essential that the target be achieved: as long as there exists a positive probability of failing to achieve the target, the investor's expected utility of terminal wealth is $-\infty$. Accordingly, we must also require that

$$Y_0(1+r_f)^T > \bar{Y};$$

in other words, that the target can be attained by investing everything in risk free assets. If this inequality were not satisfied, then every strategy would yield an expected utility of $-\infty$, with the optimal strategy thus being indeterminate. A straightforward analysis of this problem yields the following two step solution:

Step 1: always invest sufficient funds in risk free assets to achieve the target wealth level with certainty, and

Step 2: invest a constant share $a^*$ of any additional wealth in stock, where $a^*$ is time invariant.

By this solution, the investor invests less in stocks than he would in the absence of a target, but since he invests in both stocks and bonds, his wealth will accumulate, on average, more rapidly than it would if invested solely at the risk free rate, and the stock portion of his wealth will, on average, grow faster. As a result, the investor will typically use proportionally less of his resources to guarantee achievement of the target. And, over time, targeting will tend to increase the share of wealth in stocks, again contrary to popular wisdom!
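The drift described in the last paragraph can be traced through a small sketch of the two-step rule (the numbers – rates, the constant surplus share, horizon, target – are illustrative assumptions). Each period the floor (present value of the target) grows at $r_f$, while the surplus, a constant share of which is in stock, grows in expectation at $r_f + a^*(E\tilde{r} - r_f) > r_f$; the expected stock share of total wealth therefore rises over time.

```python
# Sketch of the two-step target-wealth strategy: hold the present value of the
# target YBAR risk-free, and a constant share A_STAR of the surplus in stock.
# Tracking *expected* wealth (portfolio growth is linear in returns), the stock
# share of total wealth drifts upward.  All parameter values are hypothetical.

RF = 0.02          # risk-free rate (assumed)
MEAN_R = 0.08      # expected risky return, > RF (assumed)
A_STAR = 0.6       # constant risky share of the surplus (assumed)
T = 30             # horizon in periods
Y0 = 100.0         # initial wealth
YBAR = 120.0       # target terminal wealth

# Feasibility requirement from the text: Y0 (1 + rf)^T > YBAR.
assert Y0 * (1 + RF) ** T > YBAR, "target must be attainable risk-free"

surplus = Y0 - YBAR / (1 + RF) ** T            # wealth above the earmarked floor
shares = []
for t in range(T):
    floor_t = YBAR / (1 + RF) ** (T - t)       # earmarked funds: grow at RF
    wealth_t = floor_t + surplus
    shares.append(A_STAR * surplus / wealth_t) # stock share of total wealth
    surplus *= 1 + RF + A_STAR * (MEAN_R - RF) # expected surplus growth > RF

print(f"expected stock share: t=0 {shares[0]:.3f} -> t={T-1} {shares[-1]:.3f}")
```

Since the surplus compounds faster than the floor, the floor-to-surplus ratio falls every period and the stock share rises monotonically, which is the text's point against popular wisdom.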
In order to evaluate assertion (3), we must admit savings from labor income into the analysis. Let $\{L_t\}$ denote the stream of savings out of labor income. For simplicity, we assume that the stream of future labor income is fully known at date 0. The investor's problem is now:

$$\max_{\{a_t\}} E\left[\frac{(Y_T)^{1-\gamma}}{1-\gamma}\right]$$

$$\text{s.t. } Y_T = L_T + a_{T-1}Y_{T-1}(1+\tilde{r}_T) + (1-a_{T-1})Y_{T-1}(1+r_f), \quad t = T$$

$$Y_t \le L_t + a_{t-1}Y_{t-1}(1+\tilde{r}_t) + (1-a_{t-1})Y_{t-1}(1+r_f), \quad 1 \le t \le T-1$$

$$Y_0; \{L_t\}_{t=0}^{T} \text{ given.}$$

We again abstract away from the consumption-savings problem and focus on maximizing the expected utility of terminal wealth. In any period, the investor now has two sources of wealth: financial wealth, $Y_t^F$, where

$$Y_t^F = L_t + a_{t-1}Y_{t-1}(1+r_t) + (1-a_{t-1})Y_{t-1}(1+r_f)$$

($r_t$ is the period t realized value of $\tilde{r}$), and "labor income wealth", $Y_t^L$, measured by the present value of the future stream of labor income. As mentioned, we assume this income stream is risk free, with present value

$$Y_t^L = \frac{L_{t+1}}{(1+r_f)} + \dots + \frac{L_T}{(1+r_f)^{T-t}}.$$

Since the investor continues to have CRRA preferences, he will, in every period, invest a constant fraction $\hat{a}$ of his total wealth in stocks, where $\hat{a}$ depends only upon his CRRA and the characteristics of the return distributions $\tilde{r}$ and $r_f$; i.e.,

$$A_t = \hat{a}(Y_t^F + Y_t^L),$$

where $A_t$ denotes the amount invested in the risky financial asset. As the investor approaches retirement, his $Y_t^L$ declines. In order to maintain the same fraction of total wealth invested in risky assets, the fraction of financial wealth invested in stocks,

$$\frac{A_t}{Y_t^F} = \hat{a}\left(1 + \frac{Y_t^L}{Y_t^F}\right),$$

must decline on average. Here, at least, the assertion has theoretical support, but for a reason different from what is commonly asserted.

In what follows we will consider the impact on portfolio choice of a variety of changes to the myopic context just considered. In particular, we explore the consequences of relaxing the constancy of the risk free rate and return independence for the aforementioned recommendations.
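The life-cycle pattern implied by $A_t/Y_t^F = \hat{a}(1 + Y_t^L/Y_t^F)$ can be illustrated numerically (all inputs – the flat labor savings stream, $\hat{a}$, the rates, the initial financial wealth – are assumed for illustration, not taken from the text):

```python
# Sketch of the labor-income effect: with risk-free labor income, the investor
# keeps a constant fraction A_HAT of *total* wealth (financial + human capital)
# in stock, so the stock share of *financial* wealth, A_HAT * (1 + YL/YF),
# declines as retirement approaches.  Illustrative parameter values throughout.

RF = 0.03                 # risk-free rate (assumed)
MEAN_R = 0.07             # expected risky return, used for expected YF growth
A_HAT = 0.4               # constant share of total wealth in stock (assumed)
T = 40                    # periods until retirement
L = [10.0] * (T + 1)      # known labor-income savings stream L_0 .. L_T

def labor_wealth(t):
    """Y_t^L: present value at t of the remaining labor income L_{t+1}..L_T."""
    return sum(L[s] / (1 + RF) ** (s - t) for s in range(t + 1, T + 1))

financial_shares = []
yf = 20.0                 # initial financial wealth Y_0^F (assumed)
for t in range(T):
    share = A_HAT * (1 + labor_wealth(t) / yf)   # stock share of Y_t^F
    financial_shares.append(share)
    # expected financial wealth next period: labor savings plus portfolio return
    yf = L[t + 1] + yf * (share * (1 + MEAN_R) + (1 - share) * (1 + RF))

print(f"stock share of financial wealth: "
      f"t=0 {financial_shares[0]:.2f} -> t={T-1} {financial_shares[-1]:.2f}")
```

Early in life $Y_t^L$ dwarfs $Y_t^F$, so the implied financial-wealth share exceeds one (implicit leverage), consistent with the text's point that young savers should tilt heavily toward stocks; the share then falls monotonically toward $\hat{a}$ as human capital is drawn down.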
In most (but not all) of the discussion we will assume an infinitely lived investor ($T = \infty$ in Problem (14.1)). Recall that this amounts to postulating that a finitely lived investor is concerned for the welfare of his descendants. In nearly every case it enhances tractability. As a device for tying the discussion together, we will also explore how robust the three investor recommendations just considered are to a more general return environment. Our first modification admits a variable risk free rate; the second generalizes the return generating process on the risky asset (no longer i.i.d. but "mean reverting"). Our remarks are largely drawn from a prominent recent publication, Campbell and Viceira (2002). Following the precedents established by these authors, it will prove convenient to log-linearize the investor's budget constraint and optimality conditions. Simple and intuitive expressions for optimal portfolio proportions typically result. Some of the underlying derivations are provided in an Appendix available on this text's website; others are simply omitted when they are lengthy and complex and where an attractive intuitive interpretation is available. In the next section the risk free rate is allowed to vary, although in a particularly structured way.

14.3 Variations in the Risk Free Rate

Following Campbell and Viceira (2002), we specialize Problem (14.1) to admit a variable risk free rate. Other assumptions are:

(i) $L_t \equiv 0$ for all t; there is no labor income, so that all consumption comes from financial wealth alone;

(ii) $T = \infty$; that is, we explore the infinite horizon version of Problem (14.1); this allows a simplified description of the optimality conditions on portfolio choice;

(iii) All relevant return random variables are lognormal with constant variances and covariances.
This is an admittedly strong assumption, as it mandates that the return on the investor's portfolio has a constant variance, and that the constituent assets have constant variances and covariances with the portfolio itself. Thus, the composition of the risky part of the investor's portfolio must itself be invariant. But this will be optimal only if the expected excess returns above the risk free rate on these same constituent assets are also constant. Expected returns can vary over time but, in effect, they must move in tandem with the risk free rate. This assumption is somewhat specialized, but it does allow for unambiguous conclusions.

(iv) The investor's period utility function is of the Epstein-Zin variety (cf. Section 4.7). In this case the intertemporal optimality condition for Problem (14.1), when $T = \infty$ and there are multiple risky assets, can be expressed as

$$1 = E_t\left\{\left[\delta\left(\frac{C_{t+1}}{C_t}\right)^{-\frac{1}{\rho}}\right]^{\theta}\left(\frac{1}{\tilde{R}_{P,t+1}}\right)^{1-\theta}\tilde{R}_{i,t+1}\right\} \qquad (14.4)$$

where $\tilde{R}_{i,t}$ is the period t gross return on any available asset (risk free or otherwise, including the portfolio itself) and $\tilde{R}_{P,t}$ is the period t overall risky portfolio's gross return. Note that consumption $C_t$ and the various returns $\tilde{R}_{P,t}$ and $\tilde{R}_{i,t}$ are capitalized; we will henceforth denote the logs of these quantities by their respective lower case counterparts.4 Equation (14.4) is simply a restatement of equation (9.28), where $\gamma$ is the risk aversion parameter, $\rho$ is the elasticity of intertemporal substitution, and $\theta = \frac{1-\gamma}{1-\frac{1}{\rho}}$. Bearing in mind assumptions (i)-(iv), we now proceed, first to the investor's budget constraint, and then to his optimality condition. The plan is to log-linearize each in a (rather lengthy) development leading to our ultimate goal, equation (14.20).
14.3.1 The Budget Constraint

In a model with period consumption exclusively out of financial wealth, the intertemporal budget constraint is of the form

$$Y_{t+1} = (R_{P,t+1})(Y_t - C_t), \qquad (14.5)$$

where the risky portfolio P potentially contains many risky assets; equivalently,

$$\frac{Y_{t+1}}{Y_t} = (R_{P,t+1})\left(1 - \frac{C_t}{Y_t}\right),$$

or, taking the log of both sides of the equation,

$$\Delta y_{t+1} = \log Y_{t+1} - \log Y_t = \log(R_{P,t+1}) + \log(1 - \exp(\log C_t - \log Y_t)).$$

Recalling our identification of a lower case variable with the log of that variable, we have

$$\Delta y_{t+1} = r_{P,t+1} + \log(1 - \exp(c_t - y_t)). \qquad (14.6)$$

Assuming that $\log\left(\frac{C_t}{Y_t}\right)$ is not too variable (essentially this places us in the neighborhood of the $\rho = \gamma = 1$, log utility, case), the right most term can be approximated by a first-order Taylor expansion around its mean to yield (see Campbell and Viceira (2002)):

$$\Delta y_{t+1} = k_1 + r_{P,t+1} + \left(1 - \frac{1}{k_2}\right)(c_t - y_t), \qquad (14.7)$$

where $k_1$ and $k_2 < 1$ are constants related to $\exp(E(c_t - y_t))$. Speaking somewhat informally, in a fashion that would identify the log of a variable with the variable itself, equation (14.7) simply states that wealth will be higher next period (t+1) in a manner that depends on both the portfolio's rate of return ($r_{P,t+1}$) over the next period and on this period's consumption relative to wealth. If $c_t$ greatly exceeds $y_t$, wealth next period cannot be higher! We next employ an identity to allow us to rewrite (14.7) in a more useful way; it is

$$\Delta y_{t+1} = \Delta c_{t+1} + (c_t - y_t) - (c_{t+1} - y_{t+1}) \qquad (14.8)$$

where $\Delta c_{t+1} = c_{t+1} - c_t$. Substituting the R.H.S.

4 At least with respect to returns, this new identification is consistent with our earlier notation in the following sense: in this chapter we identify $r_t \equiv_{def} \log R_t$. In earlier chapters $r_t$ denoted the net return via the identification $R_t \equiv 1 + r_t$. However, $r_t \equiv_{def} \log R_t = \log(1 + r_t) \approx r_t$ for net returns that are not large. Thus even in this chapter we may think of $r_t$ as the net period t return.
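The quality of the approximation in (14.7) is easy to inspect numerically. In the sketch below, the identification of the constants is our assumption (chosen so that the slope of the expansion matches $1 - 1/k_2$, which forces $k_2 = 1 - \exp(\bar{x})$ with $\bar{x} = E(c_t - y_t)$); the expansion point, a 5% consumption-wealth ratio, is likewise illustrative.

```python
# First-order Taylor expansion behind (14.7): expand f(x) = log(1 - exp(x))
# around xbar = E(c_t - y_t).  The identification k2 = 1 - exp(xbar) is an
# assumption consistent with "k2 < 1, related to exp(E(c_t - y_t))": it makes
# the expansion's slope equal 1 - 1/k2, as in (14.7).
import math

xbar = math.log(0.05)                 # illustrative mean log C/Y ratio (C/Y = 5%)
k2 = 1 - math.exp(xbar)               # 0 < k2 < 1; then 1 - 1/k2 = f'(xbar)
f = lambda x: math.log(1 - math.exp(x))
k1 = f(xbar) - (1 - 1 / k2) * xbar    # intercept of the expansion

def exact(c_minus_y, r_p):
    """Exact log wealth growth (14.6): Delta y = r_P + log(1 - exp(c - y))."""
    return r_p + f(c_minus_y)

def approx(c_minus_y, r_p):
    """Log-linearized budget constraint (14.7)."""
    return k1 + r_p + (1 - 1 / k2) * c_minus_y

# The approximation is exact at the mean and accurate nearby:
for x in (xbar - 0.2, xbar, xbar + 0.2):
    print(f"c-y = {x:+.3f}: exact = {exact(x, 0.06):+.5f}, "
          f"approx = {approx(x, 0.06):+.5f}")
```

The error is second order in the deviation of $c_t - y_t$ from its mean, which is why the derivation requires that the log consumption-wealth ratio be "not too variable".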
of equation (14.8) into (14.7) and rearranging terms yields

$$(c_t - y_t) = k_2 k_1 + k_2(r_{P,t+1} - \Delta c_{t+1}) + k_2(c_{t+1} - y_{t+1}). \qquad (14.9)$$

Equation (14.9) provides the same information as equation (14.7), albeit expressed differently. It states that an investor could infer his (log) consumption-wealth ratio $(c_t - y_t)$ in period t from a knowledge of its corresponding value in period t+1, $(c_{t+1} - y_{t+1})$, and his portfolio's return (the growth rate of his wealth) relative to the growth rate of his consumption $(r_{P,t+1} - \Delta c_{t+1})$. (Note that our use of language again informally identifies a variable with its log.) Equation (14.9) is a simple difference equation which can be solved forward to yield

$$c_t - y_t = \sum_{j=1}^{\infty}(k_2)^j(r_{P,t+j} - \Delta c_{t+j}) + \frac{k_2 k_1}{1 - k_2}. \qquad (14.10)$$

Equation (14.10) also has an attractive intuitive interpretation: a high (above average) consumption-wealth ratio ($(c_t - y_t)$ large and positive), i.e., a burst of consumption, must be followed either by high returns on invested wealth or by lowered future consumption growth. Otherwise the investor's intertemporal budget constraint cannot be satisfied. But (14.10) holds ex-ante relative to time t as well as ex-post, its current form. Equation (14.11) provides the ex-ante version:

$$c_t - y_t = E_t\sum_{j=1}^{\infty}(k_2)^j(r_{P,t+j} - \Delta c_{t+j}) + \frac{k_2 k_1}{1 - k_2}. \qquad (14.11)$$

Substituting this expression twice into the R.H.S. of (14.8), substituting the R.H.S. of (14.7) for the L.H.S. of (14.8), and collecting terms yields our final representation of the log-linearized budget constraint:

$$c_{t+1} - E_t c_{t+1} = (E_{t+1} - E_t)\sum_{j=0}^{\infty}(k_2)^j r_{P,t+1+j} - (E_{t+1} - E_t)\sum_{j=1}^{\infty}(k_2)^j \Delta c_{t+1+j}. \qquad (14.12)$$

This equation again has an intuitive interpretation: if consumption in period t+1 exceeds its period t expectation ($c_{t+1} > E_t c_{t+1}$, a positive consumption "surprise"), then this consumption increment must be "financed" either by an upward revision in expected future portfolio returns (the first term on the R.H.S.
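That the forward solution (14.10) is equivalent to the difference equation (14.9) can be verified directly (the return and consumption-growth sequences below are arbitrary illustrative inputs; the distant tail is set to the constant term $k_2 k_1/(1-k_2)$, which is what imposing a transversality condition amounts to at a long truncation horizon):

```python
# Check that iterating the difference equation (14.9) backward from a distant
# tail reproduces the discounted-sum solution (14.10).  Inputs are arbitrary
# illustrative sequences; k1, k2 are assumed constants with k2 < 1.
import random

random.seed(0)
k1, k2 = 0.05, 0.95
J = 2000                                           # truncation horizon
r_p = [0.06 + 0.04 * (random.random() - 0.5) for _ in range(J + 1)]
dc = [0.02 + 0.02 * (random.random() - 0.5) for _ in range(J + 1)]

# Closed form (14.10), truncated at J:
# c_t - y_t = sum_j k2^j (r_{P,t+j} - dc_{t+j}) + k2 k1 / (1 - k2)
closed = (sum(k2 ** j * (r_p[j] - dc[j]) for j in range(1, J + 1))
          + k2 * k1 / (1 - k2))

# Backward recursion of (14.9):
# (c_t - y_t) = k2 k1 + k2 (r_{P,t+1} - dc_{t+1}) + k2 (c_{t+1} - y_{t+1})
cy = k2 * k1 / (1 - k2)                            # tail value at horizon J
for j in range(J, 0, -1):
    cy = k2 * k1 + k2 * (r_p[j] - dc[j]) + k2 * cy

print(f"recursion: {cy:.10f}, closed form: {closed:.10f}")
```

Unrolling the recursion shows the geometric sum of the $k_2 k_1$ terms plus the discounted tail collapses exactly to the constant $k_2 k_1/(1-k_2)$, so the two computations agree to floating-point precision.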
of (14.12)) or a downward revision in future consumption growth (as captured by the second term on the R.H.S. of (14.12)). If it were otherwise, the investor would receive "something for nothing" – as though his budget constraint could be ignored.

Since our focus is on deriving portfolio proportions and returns, it will be useful to be able to eliminate future consumption growth (the $\Delta c_{t+1+j}$ terms) from the above equation, and to replace it with an expression related only to returns. The natural place to look for such an equivalence is the investor's optimality equation, (14.4), which directly relates the returns on his choice of optimal portfolio to his consumption experience, log-linearized so as to be in harmony with (14.12).

14.3.2 The Optimality Equation

The log-linearized version of (14.4) is:

$$E_t \Delta c_{t+1} = \rho \log \delta + \rho E_t r_{P,t+1} + \frac{\theta}{2\rho}\text{var}_t(\Delta c_{t+1} - \rho r_{P,t+1}), \qquad (14.13)$$

where we have specialized equation (14.4) somewhat by choosing the ith asset to be the portfolio itself, so that $R_{i,t+1} = R_{P,t+1}$. The web appendix provides a derivation of this expression, but it is more important to grasp what it is telling us about an Epstein-Zin investor's optimal behavior. In our partial equilibrium setting where investors take return processes as given, equation (14.13) states that an investor's optimal expected consumption growth ($E_t \Delta c_{t+1}$) is linearly (by the log linear approximation) related to the time preference parameter $\delta$ (an investor with a bigger $\delta$ will save more and thus his expected consumption growth will be higher), the portfolio returns he expects to earn ($E_t r_{P,t+1}$), and the miscellaneous effects of uncertainty as captured by the final term $\frac{\theta}{2\rho}\text{var}_t(\Delta c_{t+1} - \rho r_{P,t+1})$. A high intertemporal elasticity of substitution $\rho$ means that the investor is willing to accept a steeper consumption growth profile if there are incentives to do so, and thus $\rho$ premultiplies both $\log \delta$ and $E_t r_{P,t+1}$.
Lastly, if $\theta > 0$, an increase in the variance of consumption growth relative to portfolio returns leads to a steeper expected consumption growth profile. Under this condition the variance increase elicits greater precautionary savings in period t and thus a greater expected consumption growth rate. Under assumption (iii) of this section, however, the variance term in (14.13) is constant, which leads to the much-simplified representation

$$E_t \Delta c_{t+1} = k_3 + \rho E_t r_{P,t+1}, \qquad (14.14)$$

where the constant $k_3$ incorporates both the constant variance and the time preference term $\rho \log \delta$. Substituting (14.14) into (14.11) in the most straightforward way and rearranging terms yields

$$c_t - y_t = (1 - \rho)E_t\sum_{j=1}^{\infty}(k_2)^j r_{P,t+j} + \frac{k_2(k_1 - k_3)}{1 - k_2}. \qquad (14.15)$$

Not surprisingly, equation (14.15) suggests that the investor's (log) consumption to wealth ratio (itself a measure of how willing he is to consume out of current wealth) depends linearly on future discounted portfolio returns – negatively if $\rho > 1$ and positively if $\rho < 1$, where $\rho$ is his intertemporal elasticity of substitution. The value of $\rho$ reflects the implied dominance of the substitution over the income effect. If $\rho < 1$, the income effect dominates: if portfolio returns increase, the investor can increase his consumption permanently without diminishing his wealth. If the substitution effect dominates ($\rho > 1$), however, the investor will reduce his current consumption in order to take advantage of the impending higher expected returns. Substituting (14.15) into (14.12) yields

$$c_{t+1} - E_t c_{t+1} = r_{P,t+1} - E_t r_{P,t+1} + (1 - \rho)(E_{t+1} - E_t)\sum_{j=1}^{\infty}(k_2)^j r_{P,t+1+j}, \qquad (14.16)$$

an equation that attributes period t+1's consumption surprise to (1) the unexpected contemporaneous component of the overall portfolio's return, $r_{P,t+1} - E_t r_{P,t+1}$, plus (2) the revision in expectations of future portfolio returns, $(E_{t+1} - E_t)\sum_{j=1}^{\infty}(k_2)^j r_{P,t+1+j}$.
This revision either encourages or reduces consumption depending on whether, once again, the income or substitution effect dominates. This concludes the background on which the investor's optimal portfolio characterization rests. Note that equation (14.16) defines a relationship by which consumption may be replaced – in some other expression of interest – by a set of terms involving portfolio returns alone.

14.3.3 Optimal Portfolio Allocations

So far, we have not employed the assumption that the expected returns on all assets move in tandem with the risk free rate, and, indeed, the risk free rate is not explicit in any of expressions (14.2)-(14.14). We address these issues presently.

In an Epstein-Zin context, recall that the risk premium on any risky asset over the safe asset, $E_t r_{t+1} - r_{f,t+1}$, is given by equation (9.32), which is recopied below:

$$E_t r_{t+1} - r_{f,t+1} + \frac{\sigma_t^2}{2} = \frac{\theta}{\rho}\text{cov}_t(r_{t+1}, \Delta c_{t+1}) + (1 - \theta)\text{cov}_t(r_{t+1}, r_{P,t+1}) \qquad (14.17)$$

where $r_{t+1}$ denotes the return on the stock portfolio and $r_{P,t+1}$ is the return on the portfolio of all the investor's assets, that is, including the "risk free" one. Note that implicit in assumption (iii) is the recognition that all variances and covariances are constant despite the time dependency in the notation. From expression (14.16) we see that the covariance of (log) consumption with any variable (and we have in mind its covariance with the risky return variable of (14.17)) may be replaced by the covariance of that variable with the portfolio's contemporaneous return plus $(1 - \rho)$ times the expectations revisions concerning future portfolio returns. Eliminating consumption from (14.17) in this way, via a judicious insertion of (14.16), yields

$$E_t r_{t+1} - r_{f,t+1} + \frac{\sigma_t^2}{2} = \gamma \text{cov}_t(r_{t+1}, r_{P,t+1}) + (\gamma - 1)\text{cov}_t\left(r_{t+1}, (E_{t+1} - E_t)\sum_{j=1}^{\infty}(k_2)^j r_{P,t+1+j}\right). \qquad (14.18)$$
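The algebra that turns (14.17) into (14.18) rests on two coefficient identities: after substituting (14.16), the coefficient on $\text{cov}_t(r_{t+1}, r_{P,t+1})$ is $\theta/\rho + (1-\theta) = \gamma$, and the coefficient on the expectations-revision term is $(\theta/\rho)(1-\rho) = \gamma - 1$, both using $\theta = (1-\gamma)/(1-1/\rho)$. A quick numerical check:

```python
# Verify the coefficient identities linking (14.17) to (14.18):
#   theta/rho + (1 - theta)       == gamma
#   (theta/rho) * (1 - rho)       == gamma - 1
# with theta = (1 - gamma) / (1 - 1/rho).  Parameter grids are illustrative.

for gamma in (0.5, 2.0, 5.0, 10.0):
    for rho in (0.25, 0.5, 2.0, 4.0):      # rho != 1 so theta is well defined
        theta = (1 - gamma) / (1 - 1 / rho)
        assert abs(theta / rho + (1 - theta) - gamma) < 1e-12
        assert abs((theta / rho) * (1 - rho) - (gamma - 1)) < 1e-12
print("coefficient identities verified")
```

Note that when $\gamma = 1/\rho$ (the standard CRRA case), $\theta = 1$ and the second, hedging-related coefficient in (14.18) collapses to $\gamma - 1$ just as in the general case, while (14.17) itself reduces to the familiar consumption-covariance pricing relation.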
As noted in Campbell and Viceira (2002), equations (14.16) and (14.18) delineate in an elegant way the consequences of the Epstein-Zin separation of time and risk preferences. In particular, in equation (14.16) it is only the time preference parameter $\rho$ which relates current consumption to future returns (and thus income) – a time preference effect – while, in (14.18), it is only $\gamma$, the risk aversion coefficient, that appears to influence the risk premium on the risky asset. If we further recall (assumption (iii)) that variation in portfolio expected returns must be exclusively attributable to variation in the risk free rate, it follows logically that revisions in expectations of the former must uniquely follow from revisions in expectations of the latter:

$$(E_{t+1} - E_t)\sum_{j=1}^{\infty}(k_2)^j r_{P,t+1+j} = (E_{t+1} - E_t)\sum_{j=1}^{\infty}(k_2)^j r_{f,t+1+j}. \qquad (14.19)$$

In a model with one risky asset (in effect the risky portfolio whose composition we are a-priori holding constant),

$$\text{cov}_t(r_{t+1}, r_{P,t+1}) = a_t \sigma_t^2,$$

where $a_t$ is, as before, the risky asset proportion in the portfolio and $\sigma_t^2$ is the variance of the risky return. Substituting both this latter expression and identification (14.19) into equation (14.18) and solving for $a_t$ gives the optimal, time invariant portfolio weight on the risky asset:

$$a_t \equiv a = \frac{1}{\gamma}\left[\frac{E_t r_{t+1} - r_{f,t+1} + \frac{\sigma_t^2}{2}}{\sigma_t^2}\right] + \left(1 - \frac{1}{\gamma}\right)\frac{1}{\sigma_t^2}\text{cov}_t\left(r_{t+1}, -(E_{t+1} - E_t)\sum_{j=1}^{\infty}(k_2)^j r_{f,t+1+j}\right), \qquad (14.20)$$

our first portfolio result. Below we offer a set of interpretative comments related to it.

(1) The first term in (14.20) represents the myopic portfolio demand for the risky asset – myopic in the sense that it describes the fraction of wealth invested in the risky portfolio when the investor ignores the possibility of future risk free rate changes. In particular, the risky portfolio proportion is inversely related to the investor's CRRA ($\gamma$), and positively related to the risk premium.
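A numerical reading of (14.20) makes the decomposition concrete (every input below – the premium, variance, covariance, and $\gamma = 4$ – is an assumed number for illustration, not an estimate from the text):

```python
# Illustrative evaluation of (14.20): the optimal risky share is a myopic
# term, scaled by 1/gamma, plus an intertemporal hedging term scaled by
# (1 - 1/gamma).  All parameter values are hypothetical.

GAMMA = 4.0
PREMIUM = 0.05       # E_t r_{t+1} - r_{f,t+1} + sigma_t^2 / 2 (assumed)
SIGMA2 = 0.04        # sigma_t^2, variance of the risky (log) return (assumed)
HEDGE_COV = 0.006    # cov_t(r_{t+1}, -(E_{t+1}-E_t) sum_j k2^j r_{f,t+1+j})

myopic = (1 / GAMMA) * PREMIUM / SIGMA2
hedging = (1 - 1 / GAMMA) * HEDGE_COV / SIGMA2
a_opt = myopic + hedging
print(f"myopic = {myopic:.4f}, hedging = {hedging:.4f}, total a = {a_opt:.4f}")

# As gamma -> infinity the myopic demand vanishes and only the hedging
# demand survives (comment (3) in the text):
a_limit = HEDGE_COV / SIGMA2
print(f"limit as gamma -> infinity: a = {a_limit:.4f}")
```

With these numbers the myopic demand is 0.3125 and the hedging demand 0.1125; a positive hedging covariance raises the long-term investor's risky share above the myopic level, and in the infinitely risk averse limit only the hedging demand of 0.15 remains.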
Note, however, that these rate changes are the fundamental feature of this economy in the sense that variances are fixed and all expected risky returns move in tandem with the risk free rate.

(2) The second term in (14.20) captures the risky asset demand related to its usefulness for hedging intertemporal interest rate risk. The idea is as follows. We may view the risk free rate as the "baseline" return on the investor's wealth, with the risky asset providing a premium on some fraction thereof. If expected future risk free returns are revised downwards (so that $-(E_{t+1}-E_t)\sum_{j=1}^{\infty}(k_2)^j r_{f,t+1+j}$ increases), the investor's future income (consumption) stream will be reduced unless the risky asset's return increases to compensate. This will be so on average if the covariance term in equation (14.20) is positive. It is in this sense that risky asset returns ($r_{t+1}$) can hedge risk free interest rate risk. If the covariance term is negative, however, risky asset returns only tend to magnify the consequences of a downward revision in expected future risk free rates. As such, a long term investor's holding of risky assets would be correspondingly reduced. These remarks have their counterpart in asset price changes: if risk free rates rise (bond prices fall), the investor would wish for changes in the risky portion of his portfolio to compensate via increased valuations.

(3) As the investor becomes progressively more risk averse ($\gamma \to \infty$), he will continue to hold stocks in his portfolio, but only because of their hedging qualities, and not because of any return premium they provide. A comparably risk averse myopic investor would hold no risky assets.

(4) Note also that the covariance term in (14.20) depends on changes in expectations concerning the entire course of future interest rates. It thus follows that the investor's portfolio allocations will be much more sensitive to persistent changes in the expected risk free rate than to transitory ones.
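To fix orders of magnitude, the decomposition in (14.20) can be sketched numerically. The sketch below splits the optimal share into its myopic and intertemporal-hedging components; all parameter values are hypothetical, chosen only for illustration.

```python
# Numerical sketch of the optimal risky share (14.20), split into its
# myopic and intertemporal-hedging components. All values are hypothetical.
gamma = 5.0        # coefficient of relative risk aversion
sigma = 0.16       # standard deviation of the risky (log) return
premium = 0.04     # E_t r_{t+1} - r_{f,t+1} + sigma^2 / 2
cov_hedge = 0.002  # cov_t(r_{t+1}, -(E_{t+1}-E_t) sum_j (k_2)^j r_{f,t+1+j})

myopic = (1.0 / gamma) * premium / sigma**2
hedging = (1.0 - 1.0 / gamma) * cov_hedge / sigma**2
a = myopic + hedging

print(f"myopic demand : {myopic:.4f}")
print(f"hedging demand: {hedging:.4f}")
print(f"total share a : {a:.4f}")
```

Consistent with comment (3), letting `gamma` grow without bound drives the myopic term to zero while the hedging term tends to `cov_hedge / sigma**2`.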
Considering all the complicated formulae that have been developed, the conclusions thus far are relatively modest. An infinitely lived investor principally consumes out of his portfolio's income, and he wishes to maintain a stable consumption series. To the extent that risky equity returns can offset (hedge) variations in the risk free rate, investors are provided justification for increasing the share of their wealth invested in the high return risky asset. This leads us to wonder if any asset can serve as a truly risk free one for the long term investor.

14.3.4 The Nature of the Risk Free Asset

Implicit in the above discussion is the question of what asset, if any, best serves as the risk free one. From the long term investor's point of view it clearly cannot be a short-term money market instrument (e.g., a T-bill), because its well-documented rate variation makes uncertain the future reinvestment rates that the investor will receive. We are reminded at this juncture, however, that it is not the return risk, per se, but the derived consumption risk that is of concern to investors. Viewed from the consumption perspective, a natural candidate for the risk free asset is an indexed consol bond which pays (the real monetary equivalent of) one unit of consumption every period. Campbell, Lo, and MacKinlay (1997) show that the (log) return on such a consol is given by

$$r_{c,t+1} = r_{f,t+1} + k_4 - (E_{t+1}-E_t)\sum_{j=1}^{\infty}(k_5)^j r_{f,t+1+j}, \qquad (14.21)$$

where $k_4$ is a constant measuring the (constant) risk premium on the consol, and $k_5$ is another positive constant less than one. Suppose, as well, that we have an infinitely risk averse investor ($\gamma = \infty$), so that (14.20) reduces to

$$a = \frac{1}{\sigma_t^2}\,\mathrm{cov}_t\!\left(r_{t+1},\,-(E_{t+1}-E_t)\sum_{j=1}^{\infty}(k_2)^j r_{f,t+1+j}\right), \qquad (14.22)$$

and that the single risky asset is the consol bond ($r_{t+1} = r_{c,t+1}$).
In this case (substituting (14.21) into (14.22) and observing that constants do not matter for the computation of covariances), $a \equiv 1$: the highly risk averse investor will eschew short term risk free assets and invest entirely in indexed bonds. This alone will provide him with a risk free consumption stream, although the value of the asset may change from period to period.

14.3.5 The Role of Bonds in Investor Portfolios

Now that we allow the risk free rate to vary, let us return to the three life cycle portfolio recommendations mentioned in the myopic choice section of this chapter. Of course, the model – with an infinitely-lived investor – is, by construction, not the appropriate one for the life cycle issues of recommendation 2, and, being without labor income, nothing can be said regarding recommendation 3 either. This leaves the first recommendation, which really concerns the portfolio of choice for long term investors. The single message of this subsection must be that conservative long term investors should invest the bulk of their wealth in long term indexed bonds. If such bonds are not available, then in an environment of low inflation risk, long term government securities are a reasonable, second best substitute. For persons entering retirement – and likely to be very concerned about significant consumption risk – long term real bonds should be the investment vehicle of choice. This is actually a very different recommendation from that of static one period portfolio analysis, which would argue for a large fraction of a conservative investor's wealth being assigned to risk free assets (T-bills). Yet we know that short rate uncertainty, which the long term investor would experience every time she rolled over her short term instruments, makes such an investment strategy inadvisable for the long term.
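The $a \equiv 1$ result can be checked numerically. In the sketch below we collapse the discounted expectations-revision sums of (14.21) and (14.22) into a single random draw $X$ (an assumption made only to keep the example short), so that, constants aside, the consol return is the hedging term itself; (14.22) then delivers a weight of one.

```python
import random

# Check a = 1 for the infinitely risk averse investor holding the consol.
# X stands in for the expectations-revision term
# -(E_{t+1}-E_t) sum_j k^j r_{f,t+1+j}; draws are hypothetical.
random.seed(0)
X = [random.gauss(0.0, 0.01) for _ in range(100_000)]
rf, k4 = 0.02, 0.005
rc = [rf + k4 + x for x in X]        # consol return, as in (14.21)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (len(a) - 1)

a = cov(rc, X) / cov(rc, rc)         # equation (14.22) with r = r_c
print(f"optimal share a = {a:.4f}")
```

The constants $r_f$ and $k_4$ drop out of both covariances, so the ratio is one regardless of their values.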
14.4 The Long Run Behavior of Stock Returns

Should the proportion of an investor's wealth invested in stocks differ systematically for long term versus short term investors? In either case, most of the attractiveness of stocks (by stocks we will continue to mean a well diversified stock portfolio) to investors lies in their high excess returns (recall the equity premium puzzle of Chapter 9). But what about long versus short term equity risk; that is, how does the ex ante return variance of an equity portfolio held for many periods compare with its variance in the short run? The ex post historical return experience of equities versus other investments turns out to be quite unexpected in this regard. From Table 14.1 it is readily apparent that, historically, over time horizons of more than twenty years, stocks have never yielded investors a negative real annualized return, while every other investment type has done so for some sample period. Are stocks in fact less risky than bonds over an appropriately "long run"? In this section, we propose to explore this issue via an analysis of the following questions: (1) What intertemporal equity return patterns are required in order that the outcomes portrayed in Table 14.1 be pervasive, and not just the realizations of extremely low-probability events? (2) Given a resolution of (1), what are the implications for the portfolio composition of long term versus short term investors? (3) How does a resolution of questions (1) and (2) modify the myopic response to the long run investment advice of Section 14.2? It is again impossible to answer these questions in full generality. Following Campbell and Viceira (1999), we elect to examine investor portfolios composed of one risk free and one risky asset (a diversified portfolio).
Otherwise, the context is as follows:

(i) the investor is infinitely-lived with Epstein-Zin preferences, so that (14.4) remains as the investor's intertemporal optimality condition; furthermore, the investor has no labor income;

Table 14.1(i): Minimum and Maximum Actual Annualized Real Holding Period Returns for the Period 1802-1997, U.S. Securities Markets

Holding Period    Maximum Observed Return      Minimum Observed Return
                  Stocks   Bonds   T-Bills     Stocks       Bonds    T-Bills
One Year          66.6%    35.1%   23.7%       -38.6%       -21.9%   -15.6%
Two Years         41.0%    24.7%   21.6%       -31.6%       -15.9%   -15.1%
Five Years        26.7%    17.7%   14.9%       -11.0%       -10.1%    -8.2%
Ten Years         16.9%    12.4%   11.6%        -4.1%(ii)    -5.4%    -5.1%
Twenty Years      12.6%     8.8%    8.3%         1.0%        -3.1%    -3.0%
Thirty Years      10.6%     7.4%    7.6%         2.6%        -2.0%    -1.8%

(i) Source: Siegel (1998), Figure 2-1
(ii) Notice that beginning with a ten year horizon, the minimum observed stock return exceeded the corresponding minimum bill and bond returns.

(ii) the log real risk free rate is constant from period to period. Under this assumption, all risk free assets – long or short term – pay the same annualized return. The issues of Section 14.3 thus cannot be addressed;

(iii) the equity return generating process builds on the following observations. First, note that the cumulative log return over $T$ periods is $r_{t+1} + r_{t+2} + \dots + r_{t+T}$, so that under an i.i.d. assumption

$$\mathrm{var}(r_{t+1} + r_{t+2} + \dots + r_{t+T}) = T\,\mathrm{var}(r_{t+1}) > T\,\mathrm{var}(r_{f,t+1}).$$

For U.S. data, $\mathrm{var}(r_{t+1}) \approx (.167)^2$ (taking the risky asset as the S&P 500 market index) and $\mathrm{var}(r_{f,t+1}) = (.057)^2$ (measuring the risk free rate as the one-year T-bill return). With a $T = 20$ year time horizon, the observed range of annualized relative returns given in Table 14.1 is thus extremely unlikely to have arisen from an i.i.d. process. What could be going on?
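The i.i.d. scaling argument can be checked in a few lines. Under independence, the annualized standard deviation falls like $1/\sqrt{T}$ for every asset, so the stocks-to-bills risk ratio is the same at every horizon, which is hard to square with the pattern in Table 14.1, where the minimum stock return overtakes the minimum bill return at long horizons.

```python
import math

# Under i.i.d. log returns the T-period variance is T * var(r), so annualized
# standard deviations shrink like 1/sqrt(T) for every asset. Values follow
# the text: sigma = .167 for stocks (S&P 500), .057 for the one-year T-bill.
sigma_stock, sigma_bill = 0.167, 0.057

for T in (1, 5, 10, 20, 30):
    print(f"T={T:2d}: annualized sd stocks {sigma_stock / math.sqrt(T):.3f}, "
          f"bills {sigma_bill / math.sqrt(T):.3f}, "
          f"ratio {sigma_stock / sigma_bill:.2f}")
```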
For an i.i.d. process, the large relative twenty-year variance arises from the possibility of long sequences, respectively, of high and low returns. But if cumulative stock returns are to be less variable than bond returns at long horizons, some aspect of the return generating process must be discouraging these possibilities. That aspect is referred to as "mean reversion": the tendency of high returns today to be followed by low returns tomorrow, on an expected basis, and vice versa. It is one aspect of the "predictability" of stock returns and is well documented beyond the evidence in Table 14.1.⁵

⁵ It is well known that stock returns are predicted by a number of disparate variables. Perhaps the most frequently cited predictive variable, at long horizons, is $\log(D_t/P_t) = d_t - p_t$, the log of the dividend/price ratio. In particular, regressions of the form

$$r_{t,t+k} \equiv r_{t+1} + \dots + r_{t+k} = \beta_k(d_t - p_t) + \varepsilon_{t,t+k}$$

obtain an $R^2$ of an order of magnitude of .3. In the above expression, $r_{t+j}$ denotes the log return on the value weighted index portfolio comprising all NYSE, AMEX, and NASDAQ stocks in month $t+j$; $d_t$ is the log of the sum of all dividends paid on the index over the entire year preceding period $t$; and $P_t$ denotes the period $t$ value of the index portfolio. See Campbell et al. (1997) for a detailed discussion. More recently, Santos and Veronesi (2004) study regressions whereby long horizon excess returns (above the risk free rate) are predicted by lagged values of the (U.S. data) aggregate labor income/consumption ratio:

$$r_{t+k} = \alpha_k + \beta_k s_t^w + \varepsilon_{t+k},$$

where $s_t^w = w_t/c_t$; $w_t$ is measured as period $t$ total compensation to employees and $c_t$ denotes consumption of non-durables plus services (quarterly data). For the period 1948-2001, for example, they obtain an adjusted $R^2$ of .42 for $k = 16$ quarters. Returns are computed in a manner identical to Campbell et al. (1997) just mentioned.
The basic logic is as follows: when the labor income/consumption ratio is high, investors are less exposed to stock market fluctuations (equity income represents a small fraction of total consumption) and hence demand a lower premium. Stock prices are thus high. Since the $s_t^w$ ratio is stationary (and highly persistent in the data), it will eventually return to its mean value, suggesting a lower future tolerance for risk, a higher risk premium, lower equity prices, and low future returns. Their statistical analysis concludes that the labor income/consumption ratio does indeed move in a direction opposite to long horizon returns.

Campbell and Viceira (1999) statistically model the mean reversion in stock returns in a particular way that facilitates the solution to the associated portfolio allocation problem. In particular, they assume the time variation in the log return on the risky asset is captured by:

$$r_{t+1} - E_t r_{t+1} = u_{t+1}, \quad u_{t+1} \sim N(0, \sigma_u^2), \qquad (14.23)$$

where $u_{t+1}$ captures the unexpected risky return component or "innovation." In addition, the expected premium on this risky asset is modeled as evolving according to:

$$E_t r_{t+1} - r_f + \frac{\sigma_u^2}{2} = x_t, \qquad (14.24)$$

where $x_t$ is itself a random variable following an AR(1) process with mean $\bar{x}$, persistence parameter $\phi$, and random innovation $\tilde{\eta}_{t+1} \sim N(0, \sigma_\eta^2)$:

$$x_{t+1} = \bar{x} + \phi(x_t - \bar{x}) + \eta_{t+1}. \qquad (14.25)$$

The $x_t$ random variable thus moves slowly (depending on $\phi$), with a tendency to return to its mean value. Lastly, mean reversion is captured by assuming $\mathrm{cov}(\eta_{t+1}, u_{t+1}) = \sigma_{\eta u} < 0$, which translates, as per below, into a statement about risky return autocorrelations:

$$\begin{aligned}
0 > \sigma_{\eta u} = \mathrm{cov}(u_{t+1}, \eta_{t+1}) &= \mathrm{cov}_t\left[(r_{t+1} - E_t r_{t+1}),\,(x_{t+1} - \bar{x} - \phi(x_t - \bar{x}))\right] \\
&= \mathrm{cov}_t(r_{t+1}, x_{t+1}) \\
&= \mathrm{cov}_t\!\left(r_{t+1},\, E_{t+1} r_{t+2} - r_f + \frac{\sigma_u^2}{2}\right) \\
&= \mathrm{cov}_t\!\left(r_{t+1},\, r_{t+2} - u_{t+2} - r_f + \frac{\sigma_u^2}{2}\right) \\
&= \mathrm{cov}_t(r_{t+1}, r_{t+2}):
\end{aligned}$$

a high return today reduces expected returns next period.
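A small simulation of the process (14.23)-(14.25) illustrates the sign argument. All parameter values below are hypothetical. Note that the sample statistic computed here is the unconditional first-order autocovariance, $\phi\,\mathrm{var}(x) + \sigma_{\eta u}$, which combines the positive persistence term with the negative $\sigma_{\eta u}$ and is negative for these parameter choices.

```python
import random

# Simulate the return process (14.23)-(14.25) with cov(eta, u) < 0 and check
# that returns are negatively autocorrelated. Parameters are hypothetical.
random.seed(1)
xbar, phi = 0.04, 0.9
sigma_u, sigma_eta, rho = 0.15, 0.01, -0.8      # corr(u, eta) = rho < 0
sigma_eta_u = rho * sigma_u * sigma_eta         # cov(eta, u) < 0

T = 200_000
x, rets = xbar, []
for _ in range(T):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    u = sigma_u * z1                                         # innovation (14.23)
    eta = sigma_eta * (rho * z1 + (1 - rho**2) ** 0.5 * z2)  # correlated with u
    rets.append(x + u)     # r_{t+1} = E_t r_{t+1} + u_{t+1}; constants dropped
    x = xbar + phi * (x - xbar) + eta                        # equation (14.25)

m = sum(rets) / T
autocov = sum((rets[t] - m) * (rets[t + 1] - m) for t in range(T - 1)) / (T - 2)
print(f"sample cov(r_t, r_t+1) = {autocov:.6f}")   # negative: mean reversion
```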
Thus,

$$\mathrm{var}_t(r_{t+1} + r_{t+2}) = 2\,\mathrm{var}_t(r_{t+1}) + 2\,\mathrm{cov}_t(r_{t+1}, r_{t+2}) < 2\,\mathrm{var}_t(r_{t+1}),$$

in contrast to the independence case. More generally, for all horizons $k$,

$$\mathrm{var}_t(r_{t+1} + r_{t+2} + \dots + r_{t+k}) < k\,\mathrm{var}_t(r_{t+1}).$$

Since $\bar{x} > 0$, investors are typically long in stocks (the risky asset) in order to capture the excess returns they provide on average. Suppose in some period $t+1$ stock returns are high, meaning that stock prices rose a lot from $t$ to $t+1$ ($u_{t+1}$ is large). To keep the discussion less hypothetical, let's identify this event with the big run-up in stock prices in the late 1990s. Under the $\sigma_{\eta u} < 0$ assumption, expected future returns are likely to decline, and perhaps even become negative ($\eta_{t+1}$ is small, possibly negative, so that $x_{t+1}$ is small and thus, via (14.24), so is $E_{t+1} r_{t+2}$). Roughly speaking, this means stock prices are likely to decline – as they did in the 2000-2004 period! In anticipation of future price declines, long term investors would rationally wish to assemble a short position in the risky portfolio, since this is the only way to enhance their wealth in the face of falling prices ($r_f$ is constant by assumption). Most obviously, this is a short position in the risky portfolio itself, since negative returns must be associated with falling prices. These thoughts are fully captured by (14.26)-(14.27). Campbell and Viceira (2002) argue that the empirically relevant case is the one for which $\bar{x} > 0$, $\frac{b_1}{1-\rho} > 0$, $\frac{b_2}{1-\rho} > 0$, and $\sigma_{\eta u} < 0$. Under these circumstances, $a_0 > 0$ and $a_1 > 0$ for a sufficiently risk averse investor ($\gamma > 1$). If $u_{t+1}$ is large, then $\eta_{t+1}$ is likely to be small – let's assume negative – and "large" in absolute value if $|\sigma_{\eta u}|$ is itself large. Via the portfolio allocation equation (14.26), the optimal $a_t < 0$: a short position in the risky asset.
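The inequality above can be quantified. For the unconditional moments of the system (14.23)-(14.25), the return autocovariances work out to $\gamma_j = \phi^{j-1}(\phi\,\mathrm{var}(x) + \sigma_{\eta u})$ for $j \ge 1$, so the $k$-period variance ratio has a simple closed form; the sketch below evaluates it for hypothetical parameter values.

```python
# Variance ratio VR(k) = var(r_{t+1}+...+r_{t+k}) / (k var(r)) implied by the
# unconditional moments of (14.23)-(14.25). Mean reversion (sigma_eta_u < 0)
# makes the autocovariances negative and pulls VR(k) below one.
# Parameter values are hypothetical.
phi, sigma_u, sigma_eta, sigma_eta_u = 0.9, 0.15, 0.01, -0.0012

var_x = sigma_eta**2 / (1 - phi**2)       # stationary variance of x_t
gamma0 = var_x + sigma_u**2               # var(r)
gamma1 = phi * var_x + sigma_eta_u        # cov(r_t, r_{t+1}) < 0

def variance_ratio(k):
    total = k * gamma0
    for j in range(1, k):                 # gamma_j = phi^(j-1) * gamma1
        total += 2 * (k - j) * gamma1 * phi ** (j - 1)
    return total / (k * gamma0)

for k in (1, 2, 5, 10, 20):
    print(f"VR({k:2d}) = {variance_ratio(k):.3f}")
```

For these values the twenty-period variance is roughly two thirds of what the i.i.d. benchmark would imply.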
This distinguishing feature of long term risk averse investors is made more striking if we observe that, with $\sigma_{\eta u} < 0$, such an investor will maintain a position in the risky asset even if average excess returns are zero, $\bar{x} = 0$: even in this case $a_0 > 0$ (provided $\gamma > 1$). Thus if $x_t = 0$ (no excess return to the risky asset), the proportion of the investor's wealth in stocks is still positive. In a one period CAPM investment universe, a mean-variance myopic investor would invest nothing in stocks under these circumstances. Neither would the myopic expected utility maximizer of Theorem 4.1. All this is to observe that a risk averse, rational, long term investor will use whatever means are open to him, including shorting stocks, when he (rationally) expects excess future stock returns to be sufficiently negative to warrant it. A major caveat to this line of reasoning, however, is that it cannot describe an equilibrium phenomenon: if all investors are rational and equally well informed about the process generating equity returns ((14.23)-(14.25)), then all will want simultaneously to go long or short. The latter, in particular, is not feasible from an equilibrium perspective.

14.4.2 Strategic Asset Allocation

The expression "strategic asset allocation" is suggestive not only of long-term investing (for which intertemporal hedging is a concern), but also of portfolio weights assigned to broad classes of assets (e.g., "stocks", "long term bonds"), each well diversified from the perspective of its own kind. This is exactly the setting of this chapter. Can the considerations of this section, in particular, be conveniently contrasted with those of the preceding chapters? This is captured in Figure 14.1 below (itself a replica of Figure 4.1 in Campbell and Viceira (2002)) under the maintained assumptions of this subsection.
Insert Figure 14.1 about here

The myopic buy and hold strategy assumes a constant excess stock return equal to the true unconditional mean ($E_t r_{t+1} - r_f + \frac{\sigma_u^2}{2} \equiv \bar{x}$), with the investor solving a portfolio allocation problem as per Theorem 4.1. The line marked "tactical asset allocation" describes the portfolio allocations of an investor who behaves as a one period investor, conditional on his observation of $x_t$. Such an investor, by definition, will not take account of long term hedging opportunities. Consistent with the CAPM recommendation, such an investor will elect $a_t \equiv 0$ when $x_t = 0$ (no premium on the risky asset), but if the future looks good – even for just one period – he will increase his wealth proportion in the risky assets. Again, by construction, if $x_t = \bar{x}$, such an investor will adopt portfolio proportions consistent with those of the perpetually myopic investor. Long term "strategic" investors, with rational expectations vis-à-vis the return generating process (i.e., they know and fully take account of (14.23)-(14.25)), will always elect to hold a greater proportion of their wealth in the risky portfolio than will the "tactical" asset allocator. In itself this is not entirely surprising, for only the strategic investor is able to exploit the "hedging demand." But this demand is present in a very strong way; in particular, even if excess returns are zero, the strategic investor holds a positive wealth fraction in risky assets ($a_0 > 0$). Note also that the slope of the strategic asset allocation line exceeds that of the tactical asset allocation line. In the context of (14.23)-(14.25), this is a reflection of the fact that $\phi = 0$ for the tactical asset allocator.

14.4.3 The Role of Stocks in Investor Portfolios

Stocks are less risky in the long run because of the empirically verified mean reversion in stock returns. But does this necessarily imply a 100% stock allocation in perpetuity for long term investors?
Under the assumptions of Campbell and Viceira (2002), this is clearly not the case: long term investors should be prepared to take advantage of mean reversion by timing the market in the manner illustrated in Figure 14.1. But this in turn presumes the ability of investors to short stocks when their realized returns have recently been very high. Especially for small investors, shorting securities may entail prohibitive transactions costs. Even more significantly, this cannot represent an equilibrium outcome for all investors.

14.5 Background Risk: The Implications of Labor Income for Portfolio Choice

Background risks refer to uncertainties in the components of an investor's income not directly related to his tradeable financial wealth and, in particular, his stock-bond portfolio allocation. Labor income risk is a principal component of background risk; variations in proprietary income (income from privately owned businesses) and in the value of owner-occupied real estate are the others. In this section we explore the significance of labor income risk for portfolio choice. It is a large topic and one that must be dealt with using models of varying complexity. The basic insight we seek to develop is as follows: an investor's labor income stream constitutes an element of his wealth portfolio. The desirability of the risky asset in the investor's portfolio will therefore depend not only upon its excess return (above the risk free rate) relative to its variance (risk), but also upon the extent to which it can be used to hedge variations in the investor's labor income. Measuring how the proportion of an investor's financial wealth invested in the risky asset depends on its hedging attributes in this sense is the principal focus of this section. Fortunately, it is possible to capture the basic insights in a very simple framework.
As discussed in Campbell and Viceira (2002), that framework makes a number of assumptions:

(i) the investor has a one period horizon, investing his wealth to enhance his consumption tomorrow (as such, the focus is on the portfolio allocation decision exclusively; there is no simultaneous $t = 0$ consumption-savings decision);

(ii) the investor receives labor income $\tilde{L}_{t+1}$ tomorrow, which for analytical simplicity is assumed to be lognormally distributed: $\log \tilde{L}_{t+1} \equiv \tilde{\ell}_{t+1} \sim N(\bar{\ell}, \sigma_\ell^2)$;

(iii) there is one risk free and one risky asset (a presumed-to-be well diversified portfolio). Following our customary notation, $r_f = \log(R_f)$ and $\tilde{r}_{t+1} = \log(\tilde{R}_{t+1})$. Furthermore, $\tilde{r}_{t+1} - E_t \tilde{r}_{t+1} = \tilde{u}_{t+1}$, where $\tilde{u}_{t+1} \sim N(0, \sigma_u^2)$. The possibility is admitted that the risky asset return is correlated with labor income in the sense that $\mathrm{cov}(\tilde{\ell}_{t+1}, \tilde{r}_{t+1}) = \sigma_{\ell u} \neq 0$;

(iv) the investor's period $t+1$ utility function is of the CRRA-power utility type, with coefficient of relative risk aversion $\gamma$. Since there is no labor-leisure choice, this model is implicitly one of fixed labor supply in conjunction with a random wage.

Accordingly, the investor solves the following problem:

$$\max_{a_t}\; E_t\left[\delta\,\frac{\tilde{C}_{t+1}^{1-\gamma}}{1-\gamma}\right] \qquad (14.30)$$

$$\text{s.t. } C_{t+1} = Y_t R_{P,t+1} + L_{t+1}, \quad \text{where } R_{P,t+1} = a_t(R_{t+1} - R_f) + R_f, \qquad (14.31)$$

$a_t$ represents the fraction of the investor's wealth assigned to the risky portfolio, and $P$ denotes his overall wealth portfolio. As in nearly all of our problems to date, insights can be neatly obtained only if approximations are employed which take advantage of the lognormal setup. In particular, we first need to modify the portfolio return expression (14.31).⁷ Since

$$R_{P,t+1} = a_t R_{t+1} + (1 - a_t)R_f, \quad \frac{R_{P,t+1}}{R_f} = 1 + a_t\left(\frac{R_{t+1}}{R_f} - 1\right),$$

taking the log of both sides of this equation yields

$$r_{P,t+1} - r_f = \log\left[1 + a_t(\exp(r_{t+1} - r_f) - 1)\right]. \qquad (14.32)$$

The right hand side of this equation can be approximated using a second order Taylor expansion around $r_{t+1} - r_f = 0$, where the function to be approximated is $g_t(r_{t+1} - r_f) = \log[1 + a_t(\exp(r_{t+1} - r_f) - 1)]$. By Taylor's Theorem,

$$g_t(r_{t+1} - r_f) \approx g_t(0) + g_t'(0)(r_{t+1} - r_f) + \frac{1}{2}g_t''(0)(r_{t+1} - r_f)^2.$$

Clearly $g_t(0) \equiv 0$; straightforward calculations (simple calculus) yield $g_t'(0) = a_t$ and $g_t''(0) = a_t(1 - a_t)$. Substituting the Taylor expansion, with the indicated coefficient values, for the right hand side of (14.32) yields

$$r_{P,t+1} - r_f = a_t(r_{t+1} - r_f) + \frac{1}{2}a_t(1 - a_t)\sigma_t^2,$$

where $(r_{t+1} - r_f)^2$ is replaced by its conditional expectation. By the special form of the risky return generating process, $\sigma_t^2 = \sigma_u^2$, which yields

$$r_{P,t+1} = a_t(r_{t+1} - r_f) + r_f + \frac{1}{2}a_t(1 - a_t)\sigma_u^2. \qquad (14.33)$$

We next modify the budget constraint of problem (14.30):

$$\frac{C_{t+1}}{L_{t+1}} = \frac{Y_t}{L_{t+1}}(R_{P,t+1}) + 1,$$

or, taking the log of both sides of the equation,

$$c_{t+1} - \ell_{t+1} = \log\left[\exp(y_t + r_{P,t+1} - \ell_{t+1}) + 1\right] \approx k + \xi(y_t + r_{P,t+1} - \ell_{t+1}), \qquad (14.34)$$

where $k$ and $\xi$, $0 < \xi < 1$, are constants of approximation. Adding log labor income $\ell_{t+1}$ to both sides of the equation yields

$$c_{t+1} = k + \xi(y_t + r_{P,t+1}) + (1 - \xi)\ell_{t+1}, \quad 0 < \xi < 1. \qquad (14.35)$$

In other words, (log) end of period consumption is a constant plus a weighted average of (log) end-of-period financial wealth and (log) labor income, with the weights $\xi$, $1-\xi$ serving to describe the respective elasticities of consumption with respect to these individual wealth components. So far, nothing has been said regarding optimality. Problem (14.30) is a one period optimization problem. The first order necessary and sufficient condition for this problem with respect to $a_t$, the proportion of financial wealth invested in the risky portfolio, is given by:

$$E_t\left[\delta(\tilde{C}_{t+1})^{-\gamma}(\tilde{R}_{t+1})\right] = E_t\left[\delta(\tilde{C}_{t+1})^{-\gamma}(R_f)\right]. \qquad (14.36)$$

⁷ The derivation to follow is performed in greater detail in Campbell and Viceira (2001b).
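The quality of the second order approximation (14.33) can be checked directly against the exact relation (14.32). The values of $a_t$ and the return draws below are hypothetical; note that the approximation replaces the realized $(r_{t+1} - r_f)^2$ by its conditional expectation $\sigma_u^2$, so it holds on average rather than state by state.

```python
import math

# Compare the exact log portfolio return (14.32) with its second order
# approximation (14.33). a, rf, sigma_u and the return draws are hypothetical.
a, rf, sigma_u = 0.6, 0.02, 0.15

def exact(r):       # equation (14.32)
    return rf + math.log(1 + a * (math.exp(r - rf) - 1))

def approx(r):      # equation (14.33)
    return rf + a * (r - rf) + 0.5 * a * (1 - a) * sigma_u**2

for r in (-0.10, 0.0, 0.06, 0.20):
    print(f"r={r:+.2f}: exact {exact(r):+.4f}, approx {approx(r):+.4f}")
```

For returns within a plausible annual range the two expressions agree to a few tenths of a percentage point.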
In loglinear form, equation (14.36) has the familiar form:

$$E_t(\tilde{r}_{t+1} - r_f) + \frac{1}{2}\sigma_t^2 = \gamma\,\mathrm{cov}_t(\tilde{r}_{t+1}, \tilde{c}_{t+1}).$$

Substituting the expression in (14.35) for $c_{t+1}$ yields

$$E_t(\tilde{r}_{t+1} - r_f) + \frac{1}{2}\sigma_t^2 = \gamma\,\mathrm{cov}_t\!\left(\tilde{r}_{t+1},\, k + \xi(y_t + r_{P,t+1}) + (1 - \xi)\tilde{\ell}_{t+1}\right).$$

After substituting (14.33) for $r_{P,t+1}$, we are left with

$$E_t(\tilde{r}_{t+1} - r_f) + \frac{1}{2}\sigma_t^2 = \gamma\left[\xi a_t \sigma_t^2 + (1 - \xi)\,\mathrm{cov}_t(\tilde{\ell}_{t+1}, \tilde{r}_{t+1})\right],$$

from which we can solve directly for $a_t$. Recall that our objective was to explore how the hedging (with respect to labor income) features of risky securities influence the proportion of financial wealth invested in the risky asset. Accordingly, it is convenient first to simplify the expression via the following identifications: let

(i) $\mu = E_t(\tilde{r}_{t+1} - r_f)$;

(ii) $\sigma_t^2 = \sigma_u^2$, since $\tilde{r}_{t+1} - E_t\tilde{r}_{t+1} = \tilde{u}_{t+1}$;

(iii) $\mathrm{cov}(\tilde{\ell}_{t+1}, \tilde{r}_{t+1}) = \mathrm{cov}(\tilde{\ell}_{t+1}, \tilde{r}_{t+1} - E_t\tilde{r}_{t+1}) = \mathrm{cov}(\tilde{\ell}_{t+1}, \tilde{u}_{t+1}) = \sigma_{\ell u}$.

With these substitutions the above expression reduces to:

$$\mu + \frac{1}{2}\sigma_u^2 = \gamma\left[\xi a_t \sigma_u^2 + (1 - \xi)\sigma_{\ell u}\right].$$

Straightforwardly solving for $a_t$ yields

$$a_t = \frac{1}{\xi}\left(\frac{\mu + \frac{\sigma_u^2}{2}}{\gamma\sigma_u^2}\right) + \left(1 - \frac{1}{\xi}\right)\frac{\sigma_{\ell u}}{\sigma_u^2}, \qquad (14.37)$$

an expression with an attractive interpretation. The first term on the right hand side of (14.37) represents the fraction in the risky asset if labor income is uncorrelated with the risky asset return ($\sigma_{\ell u} = 0$). It is positively related to the adjusted return premium ($\mu + \frac{\sigma_u^2}{2}$) and inversely related to the investor's risk aversion coefficient $\gamma$. The second term represents the hedging component: if $\sigma_{\ell u} < 0$, then since $\xi < 1$, demand for the risky asset is enhanced, since it can be employed to diversify away some of the investor's labor income risk. Or, to express the same idea from a slightly different perspective, if the investor's labor income has a "suitable" statistical pattern vis-à-vis the stock market, he can reasonably take on greater financial risk.
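Equation (14.37) is straightforward to evaluate. The sketch below uses hypothetical parameter values to show how the sign of $\sigma_{\ell u}$ moves the optimal risky share around the no-labor-income benchmark.

```python
# Evaluating the optimal risky share (14.37) for hypothetical parameters.
# sigma_lu is cov(log labor income, risky return); xi is the consumption
# elasticity with respect to financial wealth.
mu, gamma, sigma_u, xi = 0.04, 4.0, 0.16, 0.7

def risky_share(sigma_lu):
    base = (mu + sigma_u**2 / 2) / (gamma * sigma_u**2)
    return base / xi + (1 - 1 / xi) * sigma_lu / sigma_u**2

print(f"sigma_lu =  0.000: a = {risky_share(0.0):.3f}")
print(f"sigma_lu = -0.002: a = {risky_share(-0.002):.3f}")  # hedging raises a
print(f"sigma_lu = +0.004: a = {risky_share(0.004):.3f}")   # comovement lowers a
```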
It is perhaps even more striking to explore further the case where $\sigma_{\ell u} = 0$: since $\xi < 1$, even in this case the optimal fraction invested in the risky portfolio is

$$a_t = \frac{1}{\xi}\left(\frac{\mu + \frac{\sigma_u^2}{2}}{\gamma\sigma_u^2}\right) > \frac{\mu + \frac{\sigma_u^2}{2}}{\gamma\sigma_u^2},$$

where the rightmost ratio represents the fraction the investor would place in the risky portfolio were there no labor income at all. If $\sigma_{\ell u} = 0$, then at least one of the following is true: $\mathrm{corr}(u, \ell) = 0$ or $\sigma_\ell = 0$, and each leads to a slightly different interpretation of the optimal $a_t$. First, if $\sigma_\ell > 0$ (there is variation in labor income), then the independence of labor and equity income allows for a good deal of overall risk reduction, thereby implying a higher optimal risky asset portfolio weight. If $\sigma_\ell = 0$ – labor income is constant – then human capital wealth is a non-tradeable risk free asset in the investor's overall wealth portfolio. Ceteris paribus, this also allows the investor to rebalance his portfolio in favor of a greater fraction held in risky assets. If, alternatively, $\sigma_{\ell u} > 0$ – a situation in which the investor's income is closely tied to the behavior of the stock market – then the investor should correspondingly reduce his position in risky equities. In fact, if the investor's coefficient of relative risk aversion is sufficiently high and $\sigma_{\ell u}$ large and positive (say, if the investor's portfolio contained a large position in his own firm's stock), then $a_t < 0$; i.e., the investor should hold a short position in the overall equity market. These remarks formalize, though in a very simple context, the idea that an investor's wage income stream represents an asset and that its statistical covariance with the equity portion of his portfolio should matter for his overall asset allocation. To the extent that variations in stock returns are offset by variations in the investor's wage income, stocks are effectively less risky (so also is wage income less risky) and he can comfortably hold more of them.
The reader may be suspicious, however, of the one period setting. We remedy this next. Viceira (2001) extends these observations to a multiperiod, infinite horizon setting by adopting a number of special features. There is a representative investor-worker who saves for retirement and who must take account, in his portfolio allocation decisions, of the expected length of his retirement period. In any period there is a probability $\pi^r$ that the investor will retire; his probability of remaining employed and continuing to receive labor income is $\pi^e = 1 - \pi^r$, with constant probability period by period. With this structure of uncertainty, the expected number of periods until the investor's retirement is $\frac{1}{\pi^r}$. Once retired (zero labor income), the constant per-period probability of death is $\pi^d$; in like manner, the expected length of his retirement is $\frac{1}{\pi^d}$. Viceira (2001) also assumes that labor income is growing in the manner of

$$\Delta\ell_{t+1} = \log L_{t+1} - \log L_t = g + \tilde{\varepsilon}_{t+1}, \qquad (14.38)$$

where $g > 0$ and $\tilde{\varepsilon}_{t+1} \sim N(0, \sigma_\varepsilon^2)$. In expression (14.38), $g$ represents the mean growth in labor income (for the U.S. this figure is approximately 2%), while $\tilde{\varepsilon}_t$ denotes random variation about the mean. The return on the risky asset is assumed to follow the same hypothetical process as in the prior example. In this case,

$$\sigma_{\ell u} = \mathrm{cov}_t(r_{t+1}, \Delta\ell_{t+1}) = \mathrm{cov}_t(u_{t+1}, \varepsilon_{t+1}) = \sigma_{u\varepsilon}.$$

With an identical asset structure as in the previous model, the investor's problem appears deceptively similar to (14.30):

$$\max\; E_t \sum_{i=0}^{\infty} \delta^i\,\frac{C_{t+i}^{1-\gamma}}{1-\gamma} \qquad (14.39)$$

$$\text{s.t. } Y_{t+1} = (Y_t + L_t - C_t)R_{P,t+1}.$$

The notation in problem (14.39) is identical to that of the previous model. Depending on whether an agent is employed or retired, however, the first order optimality condition will be different, reflecting the investor's differing probability structure looking forward. If the investor is retired, then for any asset $i$ (the portfolio $P$, or the risk free asset):

$$1 = E_t\left[(1 - \pi^d)\,\delta\left(\frac{C_{t+1}^r}{C_t^r}\right)^{-\gamma} R_{i,t+1}\right]. \qquad (14.40)$$

The interpretation of equation (14.40) is more or less customary: the investor trades off the marginal utility lost in period $t$ by investing one more consumption unit against the expected utility gain in period $t+1$ from having done so. The expectation is adjusted by the probability $(1 - \pi^d)$ that the investor is, in fact, still living next period. Analytically, its influence on the optimality condition is the same as a reduction in his subjective discount factor $\delta$. In equations (14.40) and (14.41) to follow, (not log) consumption is superscripted by $e$ or $r$, depending upon its enjoyment in the investor's period of employment or retirement, respectively. If the investor is employed, but with positive probability of retirement and subsequent death, then each asset $i$ satisfies:

$$1 = E_t\left[\left(\pi^e\,\delta\left(\frac{C_{t+1}^e}{C_t^e}\right)^{-\gamma} + (1 - \pi^e)(1 - \pi^d)\,\delta\left(\frac{C_{t+1}^r}{C_t^e}\right)^{-\gamma}\right) R_{i,t+1}\right]. \qquad (14.41)$$

Equation (14.41)'s interpretation is analogous to that of (14.40), except that the investor must consider the likelihood of his two possible states next period: either he is employed (probability $\pi^e$) or retired and still living (probability $(1 - \pi^e)(1 - \pi^d)$). Whether employed or retired, these equations implicitly characterize the investor's optimal risk free - risky portfolio proportions as those for which his expected utility gain from a marginal dollar invested in either one is the same.
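The expected-duration claims above follow from the geometric distribution: with a constant per-period transition probability $\pi$, the waiting time has mean $\frac{1}{\pi}$. A quick check, with hypothetical probabilities:

```python
# With constant per-period probabilities pi_r (retirement) and pi_d (death),
# the corresponding waiting times are geometric with means 1/pi_r and 1/pi_d.
# Probabilities below are hypothetical.
pi_r, pi_d = 1 / 35, 1 / 15   # e.g., 35 expected working years, 15 retired

def geometric_mean(pi, horizon=10_000):
    # E[T] = sum_t t * pi * (1 - pi)^(t-1), truncated at a long horizon
    return sum(t * pi * (1 - pi) ** (t - 1) for t in range(1, horizon + 1))

print(f"expected periods to retirement: {geometric_mean(pi_r):.2f}")
print(f"expected retirement length    : {geometric_mean(pi_d):.2f}")
```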
Viceira (2001) log linearizes these equations and their associated budget constraints to obtain the following expressions for log consumption and the optimal risky portfolio weight in both retirement and employment. For a retired investor:

$$c_t^r = b_0^r + b_1^r y_t, \qquad (14.42)$$

and

$$a^r = \frac{\mu + \frac{\sigma_u^2}{2}}{\gamma b_1^r \sigma_u^2}, \qquad (14.43)$$

where $b_1^r = 1$ and $b_0^r$ is a complicated (in terms of the model's parameters) constant of no immediate concern. For an employed investor, the corresponding expressions are:

$$c_t^e = b_0^e + b_1^e y_t + (1 - b_1^e)\ell_t, \qquad (14.44)$$

$$a^e = \frac{\mu + \frac{\sigma_u^2}{2}}{\gamma\bar{b}_1\sigma_u^2} - \left(\frac{\pi^e(1 - b_1^e)}{\bar{b}_1}\right)\frac{\sigma_{\varepsilon u}}{\sigma_u^2}, \qquad (14.45)$$

with $0 < b_1^e < 1$, $\bar{b}_1 = \pi^e b_1^e + (1 - \pi^e)b_1^r$, and $b_0^e$, again, a complex constant whose precise form is not relevant for the discussion. These formulae allow a number of observations:

(1) Since $b_1^r > b_1^e$, (log) consumption is more sensitive to (log) wealth changes for the retired (equation (14.42)) as compared with the employed (equation (14.44)). This is not surprising, as the employed investor can hedge this risk via his labor income. The retired investor cannot.

(2) As in the prior model with labor income, there are two terms which together comprise the optimal risky asset proportion for the employed, $a^e$. The first, $\frac{\mu + \frac{\sigma_u^2}{2}}{\gamma\bar{b}_1\sigma_u^2}$, reflects the proportion when labor income is independent of risky returns ($\sigma_{\varepsilon u} = 0$). The second, $-\left(\frac{\pi^e(1 - b_1^e)}{\bar{b}_1}\right)\frac{\sigma_{\varepsilon u}}{\sigma_u^2}$, accounts for the hedging component. If $\sigma_{\varepsilon u} < 0$, then the hedge that labor income provides to the risky component of the investor's portfolio is more powerful: the optimal $a^e$ is thus higher, while the opposite is true if $\sigma_{\varepsilon u} > 0$. With a longer expected working life (greater $\pi^e$), the optimal hedging component is also higher: the present value of the gains to diversification provided by labor income variation correspondingly increases.
Note also that the hedging feature is very strong in the sense that even if the mean equity premium µ were zero, the investor would retain a positive allocation to risky assets purely for their diversification effect vis-a-vis labor income.

(3) Let us next set aside the hedging effect by assuming σ_ε = 0 (and thus σ_{εu} = 0). Since b^e_1 < b^r_1, and hence \bar{b}_1 < b^r_1,

a^e = \frac{\mu + \sigma_u^2/2}{\gamma \bar{b}_1 \sigma_u^2} > \frac{\mu + \sigma_u^2/2}{\gamma b^r_1 \sigma_u^2} = a^r,

for any level of risk aversion γ: even if labor income provides no hedging services, the employed investor will hold a greater fraction of his wealth in the risky portfolio than will the retired investor. This is the labor income wealth effect. Ceteris paribus, a riskless labor income stream contributes a valuable riskless asset, and its presence allows the investor to tilt the financial component of his wealth in favor of a greater proportion in stocks. It also enhances his average consumption, suggesting less aversion to risk. If σ_{εu} = 0 because ρ_{εu} = 0, a^e > a^r can be justified on the basis of diversification alone. This latter comment is strengthened (weakened) when greater (lesser) diversification is possible: σ_{εu} < 0 (σ_{εu} > 0).

Before summarizing these thoughts, we return to a consideration of the initial three life-cycle portfolio recommendations. Strictly speaking, Problem (14.39) is not a life-cycle model. Life-cycle considerations can be dealt with to a good approximation, however, if we progressively re-solve problem (14.39) for a variety of choices of π^e, π^d. If π^d is increased, the expected length of the period of retirement falls. If π^e is decreased, it is as if the investor's expected time to retirement were declining as he "aged." Campbell and Viceira (2002) present the results of such an exercise, which we report in Table 14.2 for a selection of realistic risk aversion parameters. Here we find more general theoretical support for at least the first and third of our original empirical observations.
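The wealth and hedging effects in equations (14.43) and (14.45) are easy to illustrate numerically. The sketch below uses µ = .04 and σ_u = .157 from the chapter's calibration, but the values of π^e, b^e_1 and σ_{εu} are hypothetical placeholders (the text does not report them), so the numbers are illustrative only:

```python
# Illustrative evaluation of equations (14.43) and (14.45).
# pi_e, b1_e and the sigma_eu values below are hypothetical, not calibrated.
mu, sigma_u, gamma = 0.04, 0.157, 5.0
pi_e = 0.96            # assumed probability of remaining employed
b1_r = 1.0             # retired: b_1^r = 1 in the model
b1_e = 0.8             # employed: 0 < b_1^e < 1 (hypothetical value)
b1_bar = pi_e * b1_e + (1 - pi_e) * b1_r
var_u = sigma_u ** 2

def a_retired():
    # equation (14.43)
    return (mu + var_u / 2) / (gamma * b1_r * var_u)

def a_employed(sigma_eu):
    # equation (14.45): myopic demand minus the hedging component
    myopic = (mu + var_u / 2) / (gamma * b1_bar * var_u)
    hedge = (pi_e * (1 - b1_e) / b1_bar) * (sigma_eu / var_u)
    return myopic - hedge

print(a_retired())          # retired investor's risky share
print(a_employed(0.0))      # no hedging: exceeds the retired share (wealth effect)
print(a_employed(-0.002))   # sigma_eu < 0 raises the risky share further
```

With these placeholder inputs, the ordering a^e(σ_{εu} < 0) > a^e(σ_{εu} = 0) > a^r emerges exactly as the text's observations (2) and (3) predict.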
As investors approach retirement, the fraction of their wealth invested in the risky portfolio does decline strongly. Notice also that it is optimal for mildly risk averse young investors (γ = 2) to short the risk free asset dramatically in order to buy more of the risky one and "capture" the return supplement inherent in the equity premium.

Table 14.2: Optimal Percentage Allocation to Stocks (i),(ii)

                              Employed                                 Retired
                              Expected Time to Retirement (years)
                              35        25        10        5

Panel A: corr(r_{P,t+1}, Δl_{t+1}) = 0
γ = 2                         184       156       114       97        80
γ = 5                         62        55        42        37        32

Panel B: corr(r_{P,t+1}, Δl_{t+1}) = .35
γ = 2                         155       136       116       93        80
γ = 5                         42        39        35        33        32

(i) r_f = .02, E r_{P,t+1} − r_{f,t+1} = µ = .04, σ_u = .157, g = .03, σ_ε = .10
(ii) Table 14.2 is a subset of Table 6.1 in Campbell and Viceira (2002)

In actual practice, however, such a leverage level is unlikely to be feasible for young investors without a high level of collateral assets. Even so, the "pull of the premium" is so strong that even retired persons with γ = 5 (the upper bound for which there is empirical support) will retain roughly one third of their wealth in the risky equity index. In this sense the latter aspect of the first empirical assertion is not borne out, at least for this basic preference specification. We conclude this section by summarizing the basic points.

1. Riskless labor income creates a tilt in investor portfolios toward risky equities. This is not surprising, as the labor income stream in this case contributes a risk free asset to the investor's portfolio. There are two effects at work. One is a wealth effect: ceteris paribus, an investor with a labor income stream is wealthier than an investor without one, and with CRRA utility some of that additional wealth will be assigned to equities. This is complemented by a pure portfolio effect: the risk free asset alters overall portfolio proportions in a way that manifests as an increased share of financial wealth in risky assets.

2.
These same effects are strengthened by the hedging effect if σ_{εu} ≤ 0 (equivalently, ρ_{εu} ≤ 0). Stocks and risky labor income then co-vary in a way such that each reduces the effective risk of the other. Only if σ_{εu} is large and positive will the presence of labor income risk reduce the fraction of financial wealth invested in risky assets.

3. Although not discussed explicitly, the ability of an investor to adjust his labor supply - and thus his labor income - only enhances these effects. In this case the investor can elect not only to save more but also to work more if he experiences an unfavorable risky return realization. His ability to hedge adverse risky return realizations is thus enriched, and stocks appear effectively less risky.

14.6 An Important Caveat

The accuracy and usefulness of the notions developed in the preceding sections, especially as regards applications of the formulae to practical portfolio allocations, should not be overemphasized. Their usefulness depends in every case on the accuracy of the forecast means, variances, and covariances which represent their inputs: garbage in, garbage out still applies! Unfortunately, these quantities - especially expected risky returns - have been notoriously difficult to forecast accurately, even one year in advance. Errors in these estimates can have substantial significance for risky portfolio proportions, as these are generally computed using a formula of the generic form

a_t = \frac{1}{\gamma} \Sigma^{-1} (E_t \mathbf{r}_{t+1} - r_{f,t+1} \mathbf{1}),

where boldface letters represent vectors and Σ^{-1} is a matrix of "large" numbers. Errors in E_t r_{t+1}, the return vector forecast, are magnified accordingly in the portfolio proportion choice. In a recent paper, Uppal (2004) evaluates a number of complex portfolio strategies against a simple equal-portfolio-weights buy-and-hold strategy.
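The magnification can be seen concretely in a minimal two-asset sketch of the generic rule above. The covariance and forecast numbers below are hypothetical, chosen only so that Σ is nearly singular, as is typical for highly correlated assets:

```python
# Sketch of a_t = (1/gamma) * Sigma^{-1} (E_t r_{t+1} - r_{f,t+1} 1) for two
# risky assets, showing how a forecast error is magnified by a near-singular
# covariance matrix. All numbers are hypothetical illustrations.
gamma = 5.0
s11, s12, s22 = 0.040, 0.030, 0.035   # covariances of two highly correlated assets

def weights(mu1, mu2):
    det = s11 * s22 - s12 * s12       # small determinant -> "large" inverse
    a1 = (s22 * mu1 - s12 * mu2) / (gamma * det)
    a2 = (-s12 * mu1 + s11 * mu2) / (gamma * det)
    return a1, a2

w_base = weights(0.04, 0.03)          # baseline excess-return forecasts
w_bump = weights(0.05, 0.03)          # one forecast off by a single percentage point
print(w_base)                         # (0.2, 0.0)
print(w_bump)                         # asset 2 flips to a short position
```

A one-percentage-point error in a single expected return moves the first weight from 20% to 34% and turns the second position short: precisely the garbage-in, garbage-out sensitivity the text warns about.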
Using the same data set as Campbell and Viceira (2002), the equal-weighting strategy tends to dominate all the others, simply because, under this strategy, the forecast return errors (which tend to be large) do not affect the portfolio's makeup.

14.7 Another Background Risk: Real Estate

In this final section we explore the impact of real estate holdings on an investor's optimal stock-bond allocations. As before, our analysis will be guided by two main principles: (1) all assets - including human capital wealth - should be explicitly considered as components of the investor's overall wealth portfolio, and (2) it is the correlation structure of the cash flows from these various income sources that is paramount for the stock-bond portfolio proportions. Residential real estate is important because it represents roughly half of U.S. aggregate wealth, yet it is not typically included in empirical stand-ins for the U.S. market portfolio M. Residential real estate also has features that make it distinct from pure financial assets. In particular, it provides a stream of housing services which are inseparable from the house itself. Houses are indivisible assets: one may acquire a small house but not 1/2 of a house. Such indivisibilities effectively place minimum bounds on the amount of real estate that can be acquired. Furthermore, houses cannot be sold without paying a substantial transactions fee, variously estimated to be between 8% and 10% of the value of the unit being exchanged. As the purchase of a home is typically a leveraged transaction, most lenders require minimum "down payments" or equity investments by the purchaser in the house. Lastly, investors may be forced to sell their houses for entirely exogenous reasons, such as a job transfer to a new location.
Cocco (2004) studies the stock-bond portfolio allocation problem in the context of a model with the above features yet which is otherwise very similar to the ones considered thus far in this chapter. Recall that our perspective is one of partial equilibrium where, in this section, we seek to understand how the ownership of real estate influences an investor's other asset holdings, given assumed return processes on the various assets. Below we highlight certain aspects of Cocco's (2004) modeling of the investor's problem. The investor's objective function, in particular, is

\max_{\{S_t, B_t, D_t, C_t, H_t\}} E \left[ \sum_{t=0}^{T} \beta^t \frac{(C_t^{\theta} H_t^{1-\theta})^{1-\gamma}}{1-\gamma} + \beta^{T+1} \frac{(Y_{T+1})^{1-\gamma}}{1-\gamma} \right]

where, as before, C_t is his period t (non-durable) consumption (not logged; in fact, no variables will be logged in the problem description), H_t denotes period t housing services (presumed proportional to the housing stock with a proportionality constant of one), and Y_t the investor's period t wealth. Under this formulation, non-durable consumption and housing services complement one another, with the parameter θ describing the relative preference of one to the other.8 Investor risk sensitivity to variations in the joint non-durable consumption-housing decision is captured by γ (the investor displays CRRA with respect to the composite consumption product). The rightmost term, (Y_{T+1})^{1-γ}/(1 − γ), is to be interpreted as a bequest function: the representative investor receives utility from non-consumed terminal wealth, which is presumed to be bequeathed to the next generation, with the same risk preference structure applying to this quantity as well. In order to capture the idea that houses are indivisible assets, Cocco (2004) imposes a minimum size constraint H_t ≥ H_min; to capture the fact that there exist transactions costs to changing one's stock of housing, the agent is assumed to receive only (1 − λ)P_t H_{t−1} if he sells his housing stock H_{t−1} in period t at a price P_t.
In his calibration, λ - the magnitude of the transaction cost - is fixed at .08, a level for which there is substantial empirical support in U.S. data. Note the apparent motivation for a bequest motive: given the minimum housing stock constraint, an investor in the final period of his life would otherwise own a significant level of housing stock for which the disposition at his death would be ambiguous.9

8 The idea is simply that an investor will "enjoy his dinner more if he eats it in a warm and spacious house."
9 An alternative device for dealing with this modeling feature would be to allow explicitly for reverse mortgages.

Let \tilde{R}_t, R_f and R_D denote the gross random exogenous return on equity, the (constant) risk free rate, and the (constant) mortgage interest rate, respectively. If the investor elects not to alter his stock of housing in period t relative to t − 1, his budget constraint for that period is:

S_t + B_t = \tilde{R}_t S_{t-1} + R_f B_{t-1} - R_D D^M_{t-1} + L_t - C_t - \chi^{FC}_t F - \Omega P_t H_{t-1} + D^M_t \equiv Y_t     (14.46)

where the notation is suggestive: S_t and B_t denote his period t stock and bond holdings, D^M_t the level of period t mortgage debt, Ω is a parameter measuring the maintenance cost of home ownership, and F is a fixed cost of participating in the financial markets. The indicator function χ^{FC}_t assumes the value χ^{FC}_t = 1 if the investor alters his stock or bond holdings relative to period t − 1, and 0 otherwise. This device is meant to capture the cost of participating in the securities markets. In the event the investor wishes to alter his stock of housing in period t, his budget constraint is modified in the to-be-expected way (most of it is unchanged except for the addition of the costs of trading houses):

S_t + B_t = Y_t + (1 - \lambda) P_t H_{t-1} - P_t H_t, and     (14.47)

D_t \le (1 - d) P_t H_t     (14.48)

The additional terms in equation (14.47) relative to (14.46) are simply the net proceeds from the sale of the "old" house, (1 − λ)P_t H_{t−1}, less the cost of the "new" one, P_t H_t.
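The mechanics of constraints (14.46)-(14.48) can be sketched in a few lines. Every numerical input below (returns, debt, labor income, maintenance cost, house sizes) is a hypothetical illustration, not a calibrated value; only λ = .08 and d = .15 come from the text:

```python
# Minimal sketch of the budget identities (14.46)-(14.48).
# lam and d are from the text; every other number is a hypothetical illustration.
lam, d = 0.08, 0.15        # transaction cost and down-payment fraction

def wealth_no_move(R, Rf, RD, S, B, D, L, C, fixed_cost, omega, P, H):
    # equation (14.46): resources Y_t available for S_t + B_t, housing unchanged
    return R * S + Rf * B - RD * D + L - C - fixed_cost - omega * P * H

def move_house(Y, P, H_old, H_new):
    # equation (14.47): resources after selling the old house and buying the new,
    # and equation (14.48): the cap on new mortgage debt
    resources = Y + (1 - lam) * P * H_old - P * H_new
    debt_cap = (1 - d) * P * H_new
    return resources, debt_cap

Y = wealth_no_move(R=1.10, Rf=1.02, RD=1.04, S=30.0, B=20.0, D=40.0,
                   L=25.0, C=15.0, fixed_cost=1.0, omega=0.01, P=1.0, H=100.0)
res_move, debt_cap = move_house(Y, P=1.0, H_old=100.0, H_new=120.0)
print(Y)                    # liquid resources before any housing trade
print(res_move, debt_cap)   # negative resources must be covered by debt <= cap
```

In this illustration, trading up to a larger house leaves negative liquid resources, which must be financed by new mortgage debt subject to the (1 − d)P_t H_t cap: the interaction that later drives Cocco's liquidity-constraint results.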
Constraint (14.48) reflects the down payment equity requirement and the consequent limit on mortgage debt (in his simulation Cocco (2004) chooses d = .15). Cocco (2004) numerically solves the above problem given various assumptions on the return and house price processes, which are calibrated to historical data. In particular, he allows for house prices and aggregate labor income shocks to be perfectly positively correlated, and for labor income to have both random and deterministic components.10

10 In particular, Cocco (2004) assumes

\tilde{L}_t = f(t) + \tilde{u}_t, t \le \bar{T}; \tilde{L}_t = f(t), t > \bar{T},

where \bar{T} is the retirement date and the deterministic component f(t) is chosen to replicate the hump-shaped earnings pattern typically observed. The random component \tilde{u}_t has aggregate (\tilde{\eta}_t) and idiosyncratic (\tilde{\omega}_t) components, where

\tilde{u}_t = \tilde{\eta}_t + \tilde{\omega}_t, and     (14.49)

\tilde{\eta}_t = \kappa_\eta P_t     (14.50)

where P_t is the log of the average house price. In addition, he assumes R_f = 1.02 is fixed for the period [0, T], as is the mortgage rate R_D = 1.04. The return on equity follows \tilde{r}_t = \log(\tilde{R}_t) = E \log \tilde{R} + \tilde{\iota}_t with \tilde{\iota}_t \sim N(0, \sigma^2_\iota), \sigma_\iota = .1674 and E \log \tilde{R} = .10.

Cocco (2004) uses his model to comment upon a number of outstanding financial puzzles, of which we will review three. (1) Considering the magnitude of the equity premium and the mean reversion in equity returns, why do not all investors hold at least some of their wealth as a well-diversified equity portfolio? Simulations of the model reveal that the minimum housing level H_min (which is calculated at US$ 20,000), in conjunction with the down payment requirement, makes it non-optimal for lower labor income investors to pay the fixed cost of entering the equity markets. This is particularly the case for younger investors, who remain liquidity constrained.
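The labor income specification of footnote 10 can be sketched as follows. The functional form of f(t), the value of κ_η, the retirement date and the shock scale below are all hypothetical placeholders (Cocco calibrates them to data); only the structure L_t = f(t) + η_t + ω_t with η_t tied to the log house price comes from the text:

```python
# Sketch of the labor income process in footnote 10: L_t = f(t) + u_t before
# retirement, u_t = eta_t + omega_t, eta_t = kappa * (log house price).
# f(t), kappa, T_ret and sigma_omega are hypothetical placeholders.
import math
import random

random.seed(0)
kappa, T_ret = 0.5, 40

def f(t):
    # hump-shaped deterministic earnings profile (illustrative functional form)
    return 20 + 2.5 * t - 0.04 * t * t

def labor_income(t, log_house_price, sigma_omega=2.0):
    if t > T_ret:
        return f(t)                       # deterministic after retirement
    eta = kappa * log_house_price         # aggregate component, as in (14.50)
    omega = random.gauss(0, sigma_omega)  # idiosyncratic component
    return f(t) + eta + omega             # as in (14.49)

path = [labor_income(t, math.log(100.0)) for t in range(1, 46)]
```

The perfect correlation between the aggregate income shock and house prices is what later makes housing wealth a poor hedge for labor income but a tolerable complement to equity risk.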
(2) While the material in Section 14.5 suggests that the investor's portfolio share invested in stocks should decrease in later life (as the value of labor income wealth declines), the empirical literature finds that for most investors the portfolio share invested in stocks increases over the life cycle. Cocco's (2004) model likewise implies a share in equity investments that increases over the life cycle. As noted above, early in life, housing investments keep investors' liquid assets low and they choose not to participate in the markets. More surprisingly, he notes that the presence of housing can prevent a decline in the share invested in stocks as investors age: as housing wealth increases, investors become more willing to accept equity risk, since that risk is not highly correlated with this component of their wealth. Lastly, (3) Cocco deals with the cross-sectional observation that the extent of leveraged mortgage debt is highly positively correlated with equity asset holdings. His model is able to replicate this phenomenon as well because of the consumption dimension of housing: investors with more human capital acquire more expensive houses and thus borrow more. Simultaneously, the relatively less risky human capital component induces a further tilt toward stocks in high labor income investors' portfolios.

14.8 Conclusions

The analysis in this chapter has brought us conceptually to the state of the art in modern portfolio theory. It is distinguished by (1) the comprehensive array of asset classes that must be explicitly considered in order properly to understand an investor's financial asset allocations. Labor income (human capital wealth) and residential real estate are two principal cases in point. And to some extent these two asset classes provide conflicting influences on an investor's stock-bond allocations. On the one hand, as relatively riskless human capital diminishes as an investor ages, then, ceteris paribus, his financial wealth allocation to stocks should fall.
On the other hand, if his personal residence has dramatically increased in value over the investor's working years, this fact argues for increased equity holdings, given the low correlation between equity and real estate returns. Which effect dominates is unclear. (2) Second, long run portfolio analysis is distinguished by its consideration of security return paths beyond the standard one-period-ahead mean, variance, and covariance characterization. Mean reversion in stock returns suggests intertemporal hedging opportunities, as does the long run variation in the risk free rate.

References

Campbell, J., Cochrane, J. (1999), "By Force of Habit: A Consumption-Based Explanation of Aggregate Stock Market Behavior", Journal of Political Economy, 107, 205-251.

Campbell, J., Lo, A., MacKinlay, C. (1997), The Econometrics of Financial Markets, Princeton University Press.

Campbell, J., Viceira, L. (1999), "Consumption and Portfolio Decisions when Expected Returns are Time Varying," Quarterly Journal of Economics, 114, 433-495.

Campbell, J., Viceira, L. (2001), Appendix to Strategic Asset Allocation, http://kuznets.fas.harvard.edu/ Campbell/papers.html.

Campbell, J., Viceira, L. (2001), "Who Should Buy Long-Term Bonds?", American Economic Review, 91, 99-127.

Campbell, J., Viceira, L. (2002), Strategic Asset Allocation, Oxford University Press: New York.

Cocco, J. (2005), "Portfolio Choice in the Presence of Housing," forthcoming, Review of Financial Studies.

Jagannathan, R., Kocherlakota, N.R. (1996), "Why Should Older People Invest Less in Stocks than Younger People?", Federal Reserve Bank of Minneapolis Quarterly Review, Summer, 11-23.

Merton, R.C. (1971), "Optimum Consumption and Portfolio Rules in a Continuous Time Model," Journal of Economic Theory, 3, 373-413.

Samuelson, P.A. (1969), "Lifetime Portfolio Selection by Dynamic Stochastic Programming," Review of Economics and Statistics, 51, 239-246.

Santos, T., Veronesi, P.
(2004), "Labor Income and Predictable Stock Returns", mimeo, Columbia University.

Siegel, J. (1998), Stocks for the Long Run, McGraw Hill: New York.

Uppal, R. (2004), "How Inefficient are Simple Asset Allocation Strategies?", mimeo, London Business School.

Viceira, L. (2001), "Optimal Portfolio Choice for Long-Term Investors with Nontradable Labor Income," Journal of Finance, 56, 433-470.

Chapter 15: Financial Structure and Firm Valuation in Incomplete Markets

15.1 Introduction

We have so far motivated the creation of financial markets by the fundamental need of individuals to transfer income across states of nature and across time periods. In Chapter 8 (Section 8.5), we initiated a discussion of the possibility of market failure in financial innovation. There we raised the possibility that coordination problems in the sharing of the benefits and the costs of setting up a new market could result in the failure of a Pareto-improving market to materialize. In reality, however, the bulk of traded securities are issued by firms with a view to raising capital for investment purposes rather than by private individuals. It is thus legitimate to explore the incentives for security issuance from the viewpoint of the corporate sector. This is what we do in this chapter. Doing so involves touching upon a set of fairly wide and not fully understood topics. One of them is the issue of security design. This term refers to the various forms financial contracts can take (and to their properties), in particular in the context of managing the relationship between a firm and its managers on the one hand, and its financiers and owners on the other. We will not touch on these incentive issues here but will first focus on the following two questions.

What Securities Should a Firm Issue If the Value of the Firm Is to Be Maximized?

This question is, of course, central to standard financial theory and is usually resolved under the heading of the Modigliani-Miller (MM) Theorem (1958).
The MM Theorem tells us that under a set of appropriate conditions, if markets are complete, the financial decisions of the firm are irrelevant (recall our discussion in Chapter 2). Absent any tax considerations in particular, whether the firm is financed by debt or equity has no impact on its valuation. Here we go one step further and rephrase the question in a context where markets are incomplete and a firm's financing decision modifies the set of available securities. In such a world, the financing decisions of the firm are important for individuals, as they may affect the possibilities offered to them for transferring income across states. In this context, is it still the case that the firm's financing decisions are irrelevant for its valuation? If not, can we be sure that the interests of the firm's owners as regards the firm's financing decisions coincide with the interests of society at large? In a second step, we cast the same security design issue in the context of intertemporal investment, which can be loosely connected with the finance and growth issues touched upon in Chapter 1. Specifically, we raise the following complementary question.

What Securities Should a Firm Issue If It Is to Grow As Rapidly As Possible?

We first discuss the connection between the supply of savings and the financial market structure, and then consider the problem of a firm wishing to raise capital from the market. The questions raised are important: Is the financial structure relevant for a firm's ability to obtain funds to finance its investments? If so, are the interests of the firm aligned with those of society?

15.2 Financial Structure and Firm Valuation

Our discussion will be phrased in the context of the following simple example. We assume the existence of a unique firm owned by an entrepreneur who wishes only to consume at date t = 0; for this entrepreneur, U'(c_0) > 0 and only date 0 consumption yields utility.
The assumption of a single entrepreneur circumvents the problem of shareholder unanimity: If markets are incomplete, the firm's objective need not be the maximization of market value: shareholders cannot reallocate income across all dates and states as they may wish. By definition, there are missing markets. But then shareholders may well have differing preferred payment patterns by the firm - over time and across states - depending on the specifics of their own endowments. One shareholder, for example, may prefer investment project A because it implies the firm will flourish and pay high dividends in future circumstances where he himself would otherwise have a low income. Another shareholder would prefer the firm to undertake some other investment project or to pay higher current dividends because her personal circumstances are different. Furthermore, there may be no markets in which the two shareholders could insure one another.

The firm's financial structure consists of a finite set of claims against the firm's period 1 output. These securities are assumed to exhaust the returns to the firm in each state of nature. Since the entrepreneur wishes to consume only in period 0, yet his firm creates consumption goods only in period 1, he will want to sell claims against period 1 output in exchange for consumption in period 0. The other agents in our economy are agents 1 and 2 of the standard Arrow-Debreu setting of Chapter 8, and we retain the same general assumptions:

1. There are two dates: 0, 1.

2. At date 1, N possible states of nature, indexed θ = 1, 2, ..., N, with probabilities π_θ, may be realized. In fact, for nearly all that we wish to illustrate, N = 2 is sufficient.

3. There is one consumption good.

4. Besides the entrepreneur, there are 2 consumers, indexed k = 1, 2, with preferences given by

U^k_0(c^k_0) + \delta^k \sum_{\theta=1}^{N} \pi_\theta U^k(c^k_\theta) = \alpha c^k_0 + E \ln \tilde{c}^k_\theta

and endowments e^k_0, (e^k_θ)_{θ=1,2,...,N}.
We interpret c^k_θ to be the consumption of agent k if state θ should occur, and c^k_0 his period zero consumption. Agents' period utility functions are all assumed to be concave, α is the constant date 0 marginal utility, which, for the moment, we will specify to be 0.1, and the discount factor is unity (there is no time discounting). The endowment matrix for the two agents is assumed to be as shown in Table 15.1.

Table 15.1: Endowment Matrix

                 Date t = 0     Date t = 1
                                State θ = 1     State θ = 2
Agent k = 1      4              1               5
Agent k = 2      4              5               1

Each state has probability 1/2 (the states are equally likely), and consumption in period 0 cannot be stored and carried over into period 1. Keeping matters as simple as possible, let us further assume the cash flows to the firm are the same in each state of nature, as seen in Table 15.2.

Table 15.2: Cash Flows at Date t = 1

                 θ = 1          θ = 2
Firm             2              2

There are at least two different financial structures that could be written against this output vector: F1 = {(2, 2)} - pure equity;1 F2 = {(2, 0), (0, 2)} - Arrow-Debreu securities.2 From our discussion in Chapter 8, we expect financial structure F2 to be more desirable to agents 1 and 2, as it better allows them to effect income (consumption) stabilization: F2 amounts to a complete market structure with the two required Arrow-Debreu securities. Let us compute the value of the firm (what the claims to its output could be sold for) under both financial structures. Note that the existence of either set of securities affords an opportunity to shift consumption between periods. This situation is fundamentally different, in this way, from the pure reallocation examples in the pure exchange economies of Chapter 8.

15.2.1 Financial Structure F1

Let p denote the price (in terms of date 0 consumption) of equity - security {(2, 2)} - and let z_1, z_2, respectively, be the quantities demanded by agents 1

1 Equity is risk-free here.
This is the somewhat unfortunate consequence of our symmetry assumption (same output in the two date t = 1 states). The reader may want to check that our message carries over with a state θ = 2 output of 3.
2 Of course, we could have assumed, equivalently, that the firm issues 2 units of each of the two conceivable pure Arrow-Debreu securities, {(1, 0), (0, 1)}.

and 2. In equilibrium, z_1 + z_2 = 1 since there is one unit of equity issued; holding z units of equity entitles the owner to a dividend of 2z both in state 1 and in state 2.

Agent 1 solves:

\max_{p z_1 \le 4} (.1)(4 - p z_1) + \frac{1}{2} [\ln(1 + 2z_1) + \ln(5 + 2z_1)].

Agent 2 solves:

\max_{p z_2 \le 4} (.1)(4 - p z_2) + \frac{1}{2} [\ln(5 + 2z_2) + \ln(1 + 2z_2)].

Assuming an interior solution, the FOCs for agents 1 and 2 are, respectively,

z_1: \frac{p}{10} = \frac{1}{2} \frac{2}{1 + 2z_1} + \frac{1}{2} \frac{2}{5 + 2z_1} = \frac{1}{1 + 2z_1} + \frac{1}{5 + 2z_1}

z_2: \frac{p}{10} = \frac{1}{5 + 2z_2} + \frac{1}{1 + 2z_2}

Clearly z_1 = z_2 = 1/2, and

\frac{p}{10} = \frac{1}{1 + 1} + \frac{1}{5 + 1} = \frac{1}{2} + \frac{1}{6} = \frac{2}{3}, or p = 20/3.

Thus, V_{F1} = p = 20/3 = 6 2/3, and the resulting equilibrium allocation is displayed in Table 15.3.

Table 15.3: Equilibrium Allocation

                 t = 0          t = 1
                                θ_1             θ_2
Agent 1:         4 − 3 1/3      1 + 1           5 + 1
Agent 2:         4 − 3 1/3      5 + 1           1 + 1

Agents are thus willing to pay a large proportion of their period 0 consumption in order to increase period 1 consumption. On balance, agents (other than the entrepreneur) wish to shift income from the present (where MU = α = 0.1) to the future, and now there is a device by which they may do so. Since markets are incomplete in this example, the competitive equilibrium need not be Pareto optimal. That is the case here. There is no way to equate the ratios of the two agents' marginal utilities across the 2 states: In state 1, the MU ratio is (1/2)/(1/6) = 3, while it is (1/6)/(1/2) = 1/3 in state 2. A transfer of one unit of consumption from agent 2 to agent 1 in state 1, in exchange for one unit of consumption in the other direction in state 2, would obviously be Pareto improving.
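The F1 equilibrium just derived is easy to verify with exact rational arithmetic; a minimal check using only the example's numbers:

```python
# Check of the F1 equilibrium: at z1 = z2 = 1/2 the common first-order condition
# p/10 = 1/(1 + 2z) + 1/(5 + 2z) yields p = 20/3.
from fractions import Fraction

z = Fraction(1, 2)
p = 10 * (1 / (1 + 2 * z) + 1 / (5 + 2 * z))
outlay = p * z                         # each agent's period 0 payment: 10/3
c_low, c_high = 1 + 2 * z, 5 + 2 * z   # date 1 consumption levels: 2 and 6
print(p, outlay, c_low, c_high)
```

The computation confirms V_{F1} = 20/3 and the post-trade date 1 consumptions (2, 6) and (6, 2) of Table 15.3, from which the unequal marginal-utility ratios across states follow directly.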
Such a transfer cannot, however, be effected with the limited set of financial instruments available. This is the reality of incomplete markets. Note that our economy is one of three agents: agents 1 and 2, and the original firm owner. From another perspective, the equilibrium allocation under F1 is not a Pareto optimum because a redistribution of wealth between agents 1 and 2 could be effected making them both better off in ex-ante expected utility terms while not reducing the utility of the firm owner (which is, presumably, directly proportional to the price he receives for the firm). In particular, an allocation that dominates the one achieved under F1 is shown in Table 15.4.

Table 15.4: A Pareto-Superior Allocation

                 t = 0          t = 1
                                θ_1             θ_2
Agent 1          2/3            4               4
Agent 2          2/3            4               4
Owner            6 2/3          0               0

15.2.2 Financial Structure F2

This is a complete Arrow-Debreu financial structure. It will be notationally clearer here if we deviate from our usual notation and denote the securities as X = (2, 0), W = (0, 2), with prices q_X, q_W, respectively (q_X thus corresponds to the price of 2 units of the state 1 Arrow-Debreu security, while q_W is the price of 2 units of the state 2 Arrow-Debreu security), and quantities z^1_X, z^2_X, z^1_W, z^2_W. The problems confronting the agents are as follows.

Agent 1 solves:

\max (1/10)(4 - q_X z^1_X - q_W z^1_W) + [1/2 \ln(1 + 2z^1_X) + 1/2 \ln(5 + 2z^1_W)]
s.t. q_X z^1_X + q_W z^1_W \le 4

Agent 2 solves:

\max (1/10)(4 - q_X z^2_X - q_W z^2_W) + [1/2 \ln(5 + 2z^2_X) + 1/2 \ln(1 + 2z^2_W)]
s.t. q_X z^2_X + q_W z^2_W \le 4

The FOCs are:

Agent 1: (i) \frac{q_X}{10} = \frac{1}{1 + 2z^1_X}     (ii) \frac{q_W}{10} = \frac{1}{5 + 2z^1_W}

Agent 2: (iii) \frac{q_X}{10} = \frac{1}{5 + 2z^2_X}     (iv) \frac{q_W}{10} = \frac{1}{1 + 2z^2_W}

By equation (i): \frac{q_X}{10} = \frac{1}{1 + 2z^1_X} \Rightarrow 1 + 2z^1_X = \frac{10}{q_X} \Rightarrow z^1_X = \frac{5}{q_X} - \frac{1}{2}.

By equation (iii): \frac{q_X}{10} = \frac{1}{5 + 2z^2_X} \Rightarrow 5 + 2z^2_X = \frac{10}{q_X} \Rightarrow z^2_X = \frac{5}{q_X} - \frac{5}{2}.

With one security of each type issued, market clearing requires z^1_X + z^2_X = 1:

\frac{5}{q_X} - \frac{1}{2} + \frac{5}{q_X} - \frac{5}{2} = 1 \Rightarrow \frac{10}{q_X} = 4 \Rightarrow q_X = \frac{10}{4}.
Similarly, q_W = 10/4 (by symmetry) and V_{F2} = q_X + q_W = 10/4 + 10/4 = 20/4 = 5. So we see that the value of the firm has declined from 6 2/3 in the F1 case to 5. Let us further examine this result. Consider the allocations implied by the complete financial structure:

z^1_X = \frac{5}{q_X} - \frac{1}{2} = \frac{5}{5/2} - \frac{1}{2} = 2 - \frac{1}{2} = \frac{3}{2}

z^2_X = \frac{5}{q_X} - \frac{5}{2} = 2 - \frac{5}{2} = -\frac{1}{2}

z^1_W = -\frac{1}{2}, z^2_W = \frac{3}{2}, by symmetry.

Thus, agent 1 wants to short sell security W (the state 2 security), while agent 2 wants to short sell security X. Of course, in the case of financial structure F1 = {(2, 2)}, there was no possibility of short selling, since every agent, in equilibrium, must have the same security holdings. The post-trade allocation is found in Table 15.5.

Table 15.5: Post-Trade Allocation

t = 0:
Agent 1:  4 − (3/2)q_X + (1/2)q_W = 4 − (3/2)(10/4) + (1/2)(10/4) = 4 − 10/4 = 1 1/2
Agent 2:  4 + (1/2)q_X − (3/2)q_W = 4 + (1/2)(10/4) − (3/2)(10/4) = 4 − 10/4 = 1 1/2

t = 1:
Agent 1:  (1, 5) + (3/2)(2, 0) − (1/2)(0, 2) = (4, 4)
Agent 2:  (5, 1) − (1/2)(2, 0) + (3/2)(0, 2) = (4, 4)

This, unsurprisingly, constitutes a Pareto optimum.3 We have thus reached an important result that we summarize in Propositions 15.1 and 15.2.

3 Note that our example also illustrates the fact that the addition of new securities in a financial market does not necessarily improve the welfare of all participants. Indeed, the firm owner is made worse off by the transition from F1 to F2.

Proposition 15.1: When markets are incomplete, the Modigliani-Miller theorem fails to hold and the financial structure of the firm may affect its valuation by the market.

Proposition 15.2: When markets are incomplete, it may not be in the interest of a value-maximizing manager to issue the socially optimal set of securities.

In our example, the issuing of the right set of securities by the firm leads to completing the market and making a Pareto-optimal allocation attainable.
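The F2 computations can be checked in the same way, again with exact rational arithmetic and using only the example's numbers: market clearing pins down q_X, the implied demands reproduce the (4, 4) post-trade consumptions, and the per-unit state price π_θ U'(4)/α recovers the same security prices.

```python
# Check of the F2 (complete markets) equilibrium.
from fractions import Fraction

# Market clearing in security X: (5/q - 1/2) + (5/q - 5/2) = 1  =>  q = 10/4
qX = Fraction(10, 4)
z1X = 5 / qX - Fraction(1, 2)      # agent 1's demand for X: 3/2
z2X = 5 / qX - Fraction(5, 2)      # agent 2's demand for X: -1/2 (short position)
assert z1X + z2X == 1              # one unit of X outstanding

qW = qX                            # by symmetry
V_F2 = qX + qW                     # value of the firm under F2: 5

# Post-trade date 1 consumption of agent 1: endowment (1, 5) plus payoffs
z1W = -Fraction(1, 2)
c1_state1 = 1 + 2 * z1X            # = 4
c1_state2 = 5 + 2 * z1W            # = 4

# Per-unit state price from marginal utilities at the equilibrium allocation:
# pi * U'(c) / alpha = (1/2)(1/4)/(1/10) = 5/4, so 2 units cost 10/4 = qX
alpha, pi = Fraction(1, 10), Fraction(1, 2)
q_theta = pi * (1 / c1_state1) / alpha
print(V_F2, c1_state1, c1_state2, q_theta)
```

Note that V_{F2} = 5 < 20/3 = V_{F1}: completing the market lowers the value of the firm, which is the fact the next section explains.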
The impact of the financial decision of the firm on the set of markets available to individuals in the economy places us outside the realm of the MM theorem and, indeed, the value of the firm is not left unaffected by the choice of financing. Moreover, it appears that it is not, in this situation, in the private interest of the firm's owner to issue the socially optimal set of securities. Our example thus suggests that there is no reason to expect that value-maximizing firms will necessarily issue the set of securities society would find preferable.4

15.3 Arrow-Debreu and Modigliani-Miller

In order to understand why V_F declines when the firm issues the richer set of securities, it is useful to draw on our work on Arrow-Debreu pricing (Chapter 8). Think of the economy under financial structure F2. This is a complete Arrow-Debreu structure in which we can use the information on equilibrium endowments to recompute the pure Arrow-Debreu prices as per Equation (15.1),

q_\theta = \delta \pi_\theta \frac{\partial U^k / \partial c^k_\theta}{\partial U^k_0 / \partial c^k_0}, \quad \theta = 1, 2,     (15.1)

which, in our example, given the equilibrium allocation (4 units of the commodity in each state for both agents), reduces to

q_\theta = \frac{1 \cdot (1/2)(1/4)}{.1} = \frac{5}{4}, \quad \theta = 1, 2,

which corresponds, of course, to q_X = q_W = 10/4, and to V_F = 5. This Arrow-Debreu complete markets equilibrium is unique: This is generically the case in an economy such as ours, implying there are no other allocations satisfying the required conditions and no other possible prices for the Arrow-Debreu securities. This implies the Modigliani-Miller proposition, as the following reasoning illustrates. In our example, the firm is a mechanism to produce 2 units of output at date 1, both in state 1 and in state 2. Given that the

4 The reader may object that our example is just that, an example.
Because it helps us reach results of a negative nature, this example is, however, a fully general counterexample, ruling out the proposition that the MM theorem continues to hold and that firms' financial structure decisions will always converge with the social interest.

date 0 price of one unit of the good in state 1 at date 1 is 5/4 and the price of one unit of the good in state 2 at date 1 is 5/4 as well, it must of necessity be that the price (value) of the firm is 4 times 5/4, that is, 5. In other words, absent any romantic love for this firm, no one will pay more than 5 units of the current consumption good (which is the numeraire) for the title of ownership to this production mechanism, knowing that the same bundle of goods can be obtained for 5 units of the numeraire by purchasing 2 units of each Arrow-Debreu security. A converse reasoning guarantees that the firm will not sell for less either. The value of the firm is thus given by its fundamentals and is independent of the specific set of securities the entrepreneur chooses to issue: This is the essence of the Modigliani-Miller theorem!

Now let us try to understand how this reasoning is affected when markets are incomplete and why, in particular, the value of the firm is higher in that context. The intuition is as follows. In the incomplete market environment of financial structure F1, security {(2, 2)} is desirable for two reasons: to transfer income across time and to reduce date 1 consumption risk. In this terminology, the firm in the incomplete market environment is more than a mechanism to produce 2 units of output in either state of nature in date 1. The security issued by the entrepreneur is also the only available vehicle to reduce second period consumption risk. Individual consumers are willing to pay something, that is, to sacrifice current consumption, to achieve such risk reduction.
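The complete markets valuation can be replicated in the same way. A small sketch, taking the 1/10 date 0 marginal utility straight from the computation of Equation (15.1) above (names ours):

```python
from fractions import Fraction as F

delta, pi = 1, F(1, 2)        # discount factor and state probability
mu_state = F(1, 4)            # log-utility marginal utility at the 4 units consumed in each state
mu_date0 = F(1, 10)           # date 0 marginal utility, as recovered in the text's computation
q_theta = delta * pi * mu_state / mu_date0   # Equation (15.1): price of a one-unit state claim
VF = 2 * q_theta + 2 * q_theta               # the firm delivers 2 units in each state
print(q_theta, VF)
```

The state-claim price of 5/4 and the firm value of 5 match the text's figures exactly.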
To see that trading of security {(2, 2)} provides some risk reduction in the former environment, we need only compare the range of date 1 utilities across states after trade and before trade for agent 1 (agent 2 is symmetric). See Table 15.6.

Table 15.6: Agent 1 State Utilities Under F1

Before Trade:
State 1: U^1(c_1^1) = ln 1 = 0
State 2: U^1(c_2^1) = ln 5 = 1.609
Difference = 1.609

{(2, 2)}; z^1 = 0.5 (Equilibrium Allocation):
State 1: U^1(c_1^1) = ln 2 = 0.693
State 2: U^1(c_2^1) = ln 6 = 1.792
Difference = 1.099

The premium paid for the equity security, over and above the value of the firm in complete markets, thus originates in the dual role it plays as a mechanism for consumption risk smoothing and as a title to two units of output in each future state. A question remains: Given that the entrepreneur, by his activity and security issuance, plays this dual role, why can't he reap the corresponding rewards independently of the security structure he chooses to issue? In other words, why is it that his incentives are distorted away from the socially optimal financial structure? To understand this, notice that if any amount of Arrow-Debreu-like securities, such as in F2 = {(2, 0), (0, 2)}, is issued, no matter how small, the market for such securities has effectively been created. With no further trading restrictions, the agents can then supply additional amounts of these securities to one another. This has the effect of empowering them to trade, entirely independently of the magnitude of the firm's security issuance, to the following endowment allocation (see Table 15.7).

Table 15.7: Allocation When the Two Agents Trade Arrow-Debreu Securities Among Themselves

          t=0    t=1: θ1   θ2
Agent 1:   4         3      3
Agent 2:   4         3      3

In effect, investors can eliminate all second-period endowment uncertainty themselves.
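The narrowing of agent 1's utility range in Table 15.6 is easy to verify; a minimal check:

```python
import math

# agent 1's state utilities before trade and at the F1 equilibrium (z^1 = 0.5)
before = (math.log(1), math.log(5))
after = (math.log(2), math.log(6))
spread_before = before[1] - before[0]   # 1.609
spread_after = after[1] - after[0]      # 1.099: trading (2,2) narrows the utility range
print(round(spread_before, 3), round(spread_after, 3))
```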
Once this has been accomplished and markets are effectively completed, it is irrelevant to the investor whether the firm issues {(2, 2)} or {(2, 0), (0, 2)}: because there is no further demand for across-state income redistribution, either package is equally appropriate for transferring income across time periods. Were {(2, 0), (0, 2)} to be the package of securities issued, the agents would each buy equal amounts of (2, 0) and (0, 2), effectively repackaging them as (2, 2). To do otherwise would be to reintroduce date 1 endowment uncertainty. Thus the relative value of the firm under either financial structure, {(2, 2)} or {(2, 0), (0, 2)}, is determined solely by whether the security (2, 2) is worth more to the investors in the environment of period two endowment uncertainty or when all risk has been eliminated as in the environment noted previously. Said otherwise, once the markets have been completed, the value of the firm is fixed at 5, as we have seen before, and there is nothing the entrepreneur can do to appropriate the extra insurance premium. If investors can eliminate all the risk themselves (via short selling), there is no premium to be paid to the firm, in terms of value enhancement, for doing so. This is confirmed if we examine the value of the firm when security {(2, 2)} is issued after the agents have traded among themselves to the equal second-period allocation (3, 3). In this case V_F = 5 also.

There is another lesson to be gleaned from this example, and it leads us back to the CAPM. One of the implications of the CAPM was that securities could not be priced in isolation: Their prices and rates of return depend on their interactions with other securities as measured by the covariance. This example follows in that tradition by confirming that the value of the securities issued by the firm is not independent of the other securities available on the market or which the investors can themselves create.
15.4 On the Role of Short Selling

From another perspective (as noted in Allen and Gale, 1994), short selling expands the supply of securities and provides additional opportunities for risk sharing, but in such a way that the benefits are not internalized by the innovating firm. When deciding what securities to issue, however, the firm only takes into account the impact of the security issuance on its own value; in other words, it only considers those benefits it can internalize. Thus, in an incomplete market setting, the firm may not issue the socially optimal package of securities.

It is interesting to consider the consequence of forbidding or making it impossible for investors to increase the supply of securities (2, 0) and (0, 2) via short selling. Accordingly, let us impose a no-short-selling condition (by requiring that all holdings of all securities by all agents be positive). Agent 1 wants to short sell (0, 2); agent 2 wants to short sell (2, 0). So, we know that the constrained optimum will have (simply setting z = 0 wherever the unconstrained optimum had a negative z and anticipating the market clearing condition):

z_X^1 = 1, z_X^2 = 0
z_W^1 = 0, z_W^2 = 1

Each security is then priced off the marginal utility of its buyer:

q_X/10 = (1/2)(2) MU_1^1 = 1/(1 + 2(1)) = 1/3, so q_X = 10/3

q_W/10 = (1/2)(2) MU_2^2 = 1/(1 + 2(1)) = 1/3, so q_W = 10/3

V_F = q_X + q_W = 20/3 = 6 2/3,

which is as it was when the security (2, 2) was issued. The fact that V_F rises when short sales are prohibited is not surprising, as the prohibition reduces the supply of securities (2, 0) and (0, 2). With demand unchanged, both q_X and q_W increase, and with them, V_F. In some sense, now the firm has a monopoly in the issuance of (2, 0) and (0, 2), and that monopoly position has value. All this is in keeping with the general reasoning developed previously.
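The constrained prices can be confirmed numerically; a sketch of the buyers' first-order conditions as recovered above (names ours):

```python
from fractions import Fraction as F

# with short sales barred, each agent holds only the claim paying in his poor state
z1X, z2W = 1, 1
# each buyer's first-order condition prices the security he holds:
#   q/10 = (1/2) * 2 / (1 + 2z)
qX = 10 * F(1, 2) * 2 / (1 + 2 * z1X)
qW = 10 * F(1, 2) * 2 / (1 + 2 * z2W)
VF = qX + qW
print(qX, qW, VF)
```

The firm value returns to 20/3 = 6 2/3, exactly its value when only {(2, 2)} was issued.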
While it is, therefore, not surprising that the value of the firm has risen with the imposition of the short sales constraint, the fact that its value has returned precisely to what it was when it issued {(2, 2)} is striking and possibly somewhat of a coincidence.

Is the ruling out of short selling realistic? In practice, short selling on the U.S. stock exchanges is costly, and only a very limited amount of it occurs. The reason for this is that the short seller must deposit as collateral with the lending institution as much as 100 percent of the value of the securities he borrows to short sell. Under current practice in the United States, the interest on this deposit is less than the T-bill rate even for the largest participants, and for small investors it is near zero. There are other exchange-imposed restrictions on short selling. On the NYSE, for example, investors are forbidden to short sell on a down-tick in the stock's price.5

5 Brokers must obtain permission from clients to borrow their shares and relend them to a short seller. In the early part of 2000, a number of high technology firms in the United States asked their shareholders to deny such permission, as it was argued short sellers were depressing prices! Of course, if a stock's price begins rising, short sellers may have to enter the market to buy shares to cover their short positions. This boosts the share price even further.

15.5 Financing and Growth

Now we must consider our second set of issues, which we may somewhat more generally characterize as follows: How does the degree of completeness in the securities markets affect the level of capital accumulation? This is a large topic, touched upon in our introductory chapter, for which there is little existing theory. Once again we pursue our discussion in the context of examples.
Example 15.1

Our first example serves to illustrate the fact that while a more complete set of markets is unambiguously good for welfare, it is not necessarily so for growth. Consider the following setup. Agents own firms (have access to a productive technology) while also being able to trade state-contingent claims with one another (net supply is zero). We retain the two-agent, two-period setting. Agents have state-contingent consumption endowments in the second period. They also have access to a productive technology which, for every k units of date 0 consumption foregone, produces √k units of date 1 output in either state of nature6 (see Table 15.8).

Table 15.8: The Return From Investing k Units

t=1:  θ1: √k    θ2: √k

The agent endowments are given in Table 15.9.

Table 15.9: Agent Endowments

          t=0    t=1: θ1   θ2
Agent 1:   3         5      1
Agent 2:   3         1      5

Prob(θ1) = Prob(θ2) = 1/2, and the agent preference orderings are now (identically) given by

EU(c_0, c_θ) = ln(c_0) + (1/2) ln(c_1) + (1/2) ln(c_2).

In this context, we compute the agents' optimal savings levels under two alternative financial structures. In one case, there is a complete set of contingent claims; in the other, the productive technology is the only possibility for redistributing purchasing power across states (as well as across time) among the two agents.

6 Such a technology may not look very interesting at first sight! But, at the margin, agents may be very grateful for the opportunity it provides to smooth consumption across time periods.

15.5.1 No Contingent Claims Markets

Each agent acts autonomously and solves:

max_k ln(3 − k) + (1/2) ln(5 + √k) + (1/2) ln(1 + √k).

Assuming an interior solution, the optimal level of savings k* solves:

−1/(3 − k*) + (1/2) · [1/(5 + √k*)] · (1/2)(k*)^(−1/2) + (1/2) · [1/(1 + √k*)] · (1/2)(k*)^(−1/2) = 0,

which, after several simplifications, yields

3(k*)^(3/2) + 15k* + 7√k* − 9 = 0.

The solution to this equation is k* = 0.31. With two agents in the economy, economy-wide savings are 0.62.
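Since the simplified first-order condition has no closed-form solution, k* can be found numerically; a simple bisection sketch (names ours):

```python
def foc(k):
    # the simplified first-order condition: 3k^{3/2} + 15k + 7*sqrt(k) - 9 = 0
    return 3 * k**1.5 + 15 * k + 7 * k**0.5 - 9

lo, hi = 1e-9, 1.0       # foc is increasing in k, so bisection applies
for _ in range(60):
    mid = (lo + hi) / 2
    if foc(mid) > 0:
        hi = mid
    else:
        lo = mid
k_star = (lo + hi) / 2
print(round(k_star, 2))   # 0.31 per agent, hence about 0.62 economy-wide
```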
Let us now compare this result with the case in which the agents also have access to contingent claims markets.

15.5.2 Contingent Claims Trading

Let q_1 be the price of a security that pays 1 unit of consumption if state 1 occurs, and let q_2 be the price of a security that pays 1 unit of consumption if state 2 occurs. Similarly, let z_1^1, z_2^1, z_1^2, z_2^2 denote, respectively, the quantities of these securities demanded by agents 1 and 2. These agents continue to have simultaneous access to the technology. Agent 1 solves:

max_{k_1, z_1^1, z_2^1} ln(3 − k_1 − q_1 z_1^1 − q_2 z_2^1) + (1/2) ln(5 + √k_1 + z_1^1) + (1/2) ln(1 + √k_1 + z_2^1).

Agent 2's problem is essentially the same:

max_{k_2, z_1^2, z_2^2} ln(3 − k_2 − q_1 z_1^2 − q_2 z_2^2) + (1/2) ln(1 + √k_2 + z_1^2) + (1/2) ln(5 + √k_2 + z_2^2).

By symmetry, in equilibrium

k_1 = k_2; q_1 = q_2; z_1^1 = z_2^2 = −z_1^2, z_2^1 = z_1^2 = −z_2^2.

Using these facts and the FOCs (see the Appendix), it can be directly shown that

z_1^1 = −2,

and it then follows that k_1 = 0.16. Thus, total savings = k_1 + k_2 = 2k_1 = 0.32. Savings have thus been substantially reduced. This result also generalizes to situations of more general preference orderings, and to the case where the uncertainty is in the production technology rather than in the investor endowments.

The explanation for this phenomenon is relatively straightforward, and it parallels the mechanism at work in the previous sections. With the opening of contingent claims markets, the agents can eliminate all second-period risk. In the absence of such markets, it is real investment that alone must provide for any risk reduction as well as for income transference across time periods – a dual role. In a situation of greater uncertainty, resulting from the absence of contingent claims markets, more is saved, and the extra savings take, necessarily, the form of productive capital: There is a precautionary demand for capital.
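The Appendix reduces this case to a quadratic in √k_1, which can be checked directly; a small sketch (names ours; the text's k_1 = 0.16 comes from rounding √k_1 to 0.4):

```python
import math

# the reduction gives X^2 + 2X - 1 = 0 with X = sqrt(k1)
X = -1 + math.sqrt(2)      # positive root, about 0.414 (rounded to 0.4 in the text)
k1 = X * X                 # about 0.17 (the text's 0.16 uses X = 0.4)
# check the reduced FOC  1/(3 - k1) = 1/(2*sqrt(k1)*(3 + sqrt(k1)))  at this root
lhs = 1 / (3 - k1)
rhs = 1 / (2 * X * (3 + X))
print(round(X, 3), abs(lhs - rhs) < 1e-12)
```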
Jappelli and Pagano (1994) find traces of a similar behavior in Italy prior to recent measures of financial deregulation.

Example 15.2

This result also suggests that if firms want to raise capital in order to invest for date 1 output, it may not be value maximizing to issue a more complete set of securities, an intuition we confirm in our second example. Consider a firm with access to a technology with the output pattern found in Table 15.10.

Table 15.10: The Firm's Technology

t=0: −k    t=1: θ1: √k   θ2: √k

Investor endowments are given in Table 15.11.

Table 15.11: Investor Endowments

          t=0    t=1: θ1   θ2
Agent 1:   12       1/2     10
Agent 2:   12       10      1/2

Their preference orderings are both of the form:

EU(c_0, c_θ) = (1/12) c_0 + (1/2) ln(c_1) + (1/2) ln(c_2).

15.5.3 Incomplete Markets

Suppose a security of the form (1, 1) is traded at a price p; agents 1 and 2 demand, respectively, z_1 and z_2. The agent maximization problems that define their demands are as follows:

Agent 1: max (1/12)(12 − pz_1) + (1/2) ln(1/2 + z_1) + (1/2) ln(10 + z_1), s.t. pz_1 ≤ 12;
Agent 2: max (1/12)(12 − pz_2) + (1/2) ln(10 + z_2) + (1/2) ln(1/2 + z_2), s.t. pz_2 ≤ 12.

It is obvious that z_1 = z_2 at equilibrium. The first order conditions are (again assuming an interior solution):

Agent 1: p/12 = (1/2) · 1/(1/2 + z_1) + (1/2) · 1/(10 + z_1)
Agent 2: p/12 = (1/2) · 1/(10 + z_2) + (1/2) · 1/(1/2 + z_2)

In order for the technological constraint to be satisfied, it must also be that

[p(z_1 + z_2)]^(1/2) = z_1 + z_2, or p = z_1 + z_2 = 2z_1,

using z_1 = z_2 as noted earlier. Substituting for p in the first agent's FOC gives:

2z_1/12 = (1/2) · 1/(1/2 + z_1) + (1/2) · 1/(10 + z_1), or

z_1^3 + 10.5 z_1^2 − z_1 − 31.5 = 0.

Trial and error gives z_1 = 1.65. Thus p = 3.3 and total investment is p(z_1 + z_2) = (3.3)(3.3) = 10.89 = V_F; date 1 output in each state is thus √10.89 = 3.3.

15.5.4 Complete Contingent Claims

Now suppose securities R = (1, 0) and S = (0, 1) are traded at prices q_R and q_S, and denote quantities demanded respectively as z_R^1, z_R^2, z_S^1, z_S^2.
The no short sales assumption is retained. With this assumption, agent 1 buys only security R while agent 2 buys only security S. Each agent thus prepares himself for his worst possibility.

Agent 1: max (1/12)(12 − q_R z_R^1) + (1/2) ln(1/2 + z_R^1) + (1/2) ln(10), s.t. 0 ≤ q_R z_R^1 ≤ 12;
Agent 2: max (1/12)(12 − q_S z_S^2) + (1/2) ln(10) + (1/2) ln(1/2 + z_S^2), s.t. 0 ≤ q_S z_S^2 ≤ 12.

The FOCs are thus:

Agent 1: q_R/12 = (1/2) · 1/(1/2 + z_R^1)
Agent 2: q_S/12 = (1/2) · 1/(1/2 + z_S^2).

Clearly q_R = q_S by symmetry, and z_R^1 = z_S^2; by the technological constraint:

(q_R z_R^1 + q_S z_S^2)^(1/2) = z_R^1 = z_S^2, or q_R = z_R^1/2.

Solving for z_R^1:

z_R^1/24 = q_R/12 = (1/2) · 1/(1/2 + z_R^1), i.e., z_R^1(1 + 2z_R^1) = 24,

z_R^1 = [−1 ± √(1 − 4(2)(−24))]/4 = (−1 ± √193)/4 = (−1 ± 13.892)/4

z_R^1 = 3.223 (taking the positive root)
z_S^2 = 3.223
q_R = 1.61, and
V_F = q_R(z_R^1 + z_S^2) = 1.61(6.446) = 10.378.

As suspected, this is less than what the firm could raise issuing only (1, 1). Much in the spirit of our discussion of Section 15.2, this example illustrates the fact that, for a firm wishing to maximize the amount of capital levied from the market, it may not be a good strategy to propose contracts leading to a (more) complete set of markets. This is another example of the failure of the Modigliani-Miller theorem in a situation of incomplete markets, and the reasoning is the same as before: In incomplete markets, the firm's value is not necessarily equal to the value, computed at Arrow-Debreu prices, of the portfolio of goods it delivers in future date-states. This is because the security it issues may, in addition, be valued by market participants for its unintended role as an insurance mechanism, a role that disappears if markets are complete. In the growth context of our last examples, this may mean that more savings will be forthcoming when markets are incomplete, a fact that may lead a firm wishing to raise capital from the markets to refrain from issuing the optimal set of securities.
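Both valuations can be reproduced numerically; a bisection-plus-quadratic sketch (names ours; small differences from the text's 10.89 and 10.378 reflect its rounding of intermediate values):

```python
import math

def mc(z):
    # incomplete markets: market-clearing cubic z^3 + 10.5 z^2 - z - 31.5 = 0
    return z**3 + 10.5 * z**2 - z - 31.5

lo, hi = 0.0, 5.0
for _ in range(60):           # bisection on the single positive root
    mid = (lo + hi) / 2
    if mc(mid) > 0:
        hi = mid
    else:
        lo = mid
z1 = (lo + hi) / 2            # about 1.65
VF_incomplete = (2 * z1) * (2 * z1)   # p = 2*z1; capital raised = p*(z1 + z2)

# complete contingent claims: 2z^2 + z - 24 = 0, with q_R = z/2
z = (-1 + math.sqrt(1 - 4 * 2 * (-24))) / (2 * 2)   # positive root, about 3.223
VF_complete = (z / 2) * (2 * z)                     # q_R * (z_R^1 + z_S^2)

print(round(z1, 2), round(VF_incomplete, 2), round(z, 3), round(VF_complete, 2))
```

The complete-claims value comes out below the incomplete-markets value, as the text concludes.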
15.6 Conclusions

We have reached a number of conclusions in this chapter.

1. In an incomplete market context, it may not be value maximizing for firms to offer the socially optimal (complete) set of securities. This follows from the fact that, in a production setting, securities can be used not only for risk reduction but also to transfer income across dates. The value of a security will depend upon its usefulness in accomplishing these alternative tasks.

2. The value of securities issued by the firm is not independent of the supply of similar securities issued by other market participants. To the extent that others can increase the supply of a security initially issued by the firm (via short selling), its value will be reduced.

3. Finally, welfare is, but growth may not be, promoted by the issuance of a more complete set of securities.7 As a result, it may not be in the best interest of a firm aiming to maximize the amount of capital it raises to issue the most socially desirable set of securities.

All these results illustrate the fact that if markets are incomplete, the link between private interests and social optimality is considerably weakened. Here lies the intellectual foundation for financial market regulation and supervision.

References

Allen, F., Gale, D. (1994), Financial Innovation and Risk Sharing, MIT Press, Cambridge, Mass.

Hart, O. (1975), "On the Optimality of Equilibrium When Market Structure is Incomplete," Journal of Economic Theory, 11, 418–443.

Jappelli, T., Pagano, M. (1994), "Savings, Growth and Liquidity Constraints," Quarterly Journal of Economics, 109, 83–109.

Modigliani, F., Miller, M. (1958), "The Cost of Capital, Corporation Finance, and the Theory of Investment," American Economic Review, 48, 261–297.
Appendix: Details of the Solution of the Contingent Claims Trade Case of Section 15.5

7 The statement regarding welfare is strictly true only when financial innovation achieves full market completeness. Hart (1975) shows that it is possible that everyone is made worse off when the markets become more complete but not fully complete (say, going from 9 to 10 linearly independent securities when 15 would be needed to make the markets complete).

Agent 1 solves:

max_{k_1, z_1^1, z_2^1} ln(3 − k_1 − q_1 z_1^1 − q_2 z_2^1) + (1/2) ln(5 + √k_1 + z_1^1) + (1/2) ln(1 + √k_1 + z_2^1)

The FOCs are:

k_1: −1/(3 − k_1 − q_1 z_1^1 − q_2 z_2^1) + [1/(2(5 + √k_1 + z_1^1))] (1/2) k_1^(−1/2) + [1/(2(1 + √k_1 + z_2^1))] (1/2) k_1^(−1/2) = 0   (15.2)

z_1^1: −q_1/(3 − k_1 − q_1 z_1^1 − q_2 z_2^1) + 1/(2(5 + √k_1 + z_1^1)) = 0   (15.3)

z_2^1: −q_2/(3 − k_1 − q_1 z_1^1 − q_2 z_2^1) + 1/(2(1 + √k_1 + z_2^1)) = 0   (15.4)

Agent 2's problem and FOCs are essentially the same:

max_{k_2, z_1^2, z_2^2} ln(3 − k_2 − q_1 z_1^2 − q_2 z_2^2) + (1/2) ln(1 + √k_2 + z_1^2) + (1/2) ln(5 + √k_2 + z_2^2)

k_2: −1/(3 − k_2 − q_1 z_1^2 − q_2 z_2^2) + [1/(2(1 + √k_2 + z_1^2))] (1/2) k_2^(−1/2) + [1/(2(5 + √k_2 + z_2^2))] (1/2) k_2^(−1/2) = 0   (15.5)

z_1^2: −q_1/(3 − k_2 − q_1 z_1^2 − q_2 z_2^2) + 1/(2(1 + √k_2 + z_1^2)) = 0   (15.6)

z_2^2: −q_2/(3 − k_2 − q_1 z_1^2 − q_2 z_2^2) + 1/(2(5 + √k_2 + z_2^2)) = 0   (15.7)

By symmetry, in equilibrium

k_1 = k_2; q_1 = q_2; z_1^1 = z_2^2 = −z_1^2, z_2^1 = z_1^2 = −z_2^2.

By Equations (15.3) and (15.6), using the fact that z_1^1 + z_2^1 = z_1^2 + z_2^2 (so that both agents' date 0 consumptions coincide):

1/(5 + √k_1 + z_1^1) = 1/(1 + √k_2 + z_1^2).

By Equations (15.4) and (15.7):

1/(1 + √k_1 + z_2^1) = 1/(5 + √k_2 + z_2^2).

The equations defining k_1 and z_1^1 are thus reduced to

k_1: −1/(3 − k_1 − q_1 z_1^1 − q_2 z_2^1) + [1/(4√k_1)] [1/(5 + √k_1 + z_1^1) + 1/(1 + √k_1 − z_1^1)] = 0   (15.8)

z_1^1: 1/(5 + √k_1 + z_1^1) = 1/(1 + √k_1 − z_1^1)   (15.9)

Solving for k_1 and z_1^1: Equation (15.9) yields

1 + √k_1 − z_1^1 = 5 + √k_1 + z_1^1
−4 = 2z_1^1
z_1^1 = −2.

Substituting this value into Equation (15.8) (and noting that q_1 z_1^1 + q_2 z_2^1 = 0, since z_2^1 = −z_1^1 and q_1 = q_2) gives

1/(3 − k_1) = [1/(4√k_1)] [1/(5 + √k_1 − 2) + 1/(1 + √k_1 + 2)]
1/(3 − k_1) = [1/(4√k_1)] [2/(3 + √k_1)]
2√k_1 (3 + √k_1) = 3 − k_1,
or, simplifying,

3k_1 + 6√k_1 − 3 = 0, i.e., k_1 + 2√k_1 − 1 = 0.

Let X = √k_1, so that

X^2 + 2X − 1 = 0
X = [−2 ± √(4 − 4(1)(−1))]/2 = (−2 ± √8)/2 = −1 ± √2 = −1 + 1.4 = 0.4 (taking the positive root)

k_1 = 0.16, and total savings = k_1 + k_2 = 2k_1 = 0.32.

Chapter 16: Financial Equilibrium with Differential Information

16.1 Introduction

The import of differential information for understanding financial markets, institutions, and contracts, however, goes much beyond market efficiency. Since Akerlof (1970), asymmetric information – a situation where agents are differentially informed with, moreover, one agent or a subgroup having superior information – is known potentially to lead to the failure of a market to exist. This lemons problem is a relevant one in financial markets: One may be suspicious of purchasing a stock from a better informed intermediary or, a fortiori, from the primary issuer of a security, who may be presumed to have the best information about the exact value of the underlying assets. One may suspect that the issuer would be unwilling to sell at a price lower than the fundamental value of the asset. What is called the winner's curse is applicable here: If the transaction is concluded, that is, if the better-informed owner has agreed to sell, is it not likely that the buyer will have paid too much for the asset? This reasoning might go some way toward explaining the fact that capital raised by firms in equity markets is such a small proportion of total firm financing [on this, see Greenwald and Stiglitz (1993)]. Asymmetric information may also explain the phenomenon of credit rationing.

1 Box 9.1 discussed the extent to which this interpretation can be relaxed as far as utility functions are concerned.
2 Such preference structures are, strictly speaking, not expected utility.
The idea here is that it may not be to the advantage of a lender, confronted with a demand for funds larger than he can accommodate, to increase the interest rate he charges as would be required to balance supply and demand: In doing so, the lender may alter the pool of applicants in an unfavorable way. Specifically, this possibility depends on the plausible hypothesis that the lender does not know the degree of riskiness of the projects for which borrowers need funds and that, in the context of a debt contract, a higher hurdle rate may eliminate the less profitable, but consequently also the less risky, projects. It is easy to construct cases where the creditor is worse off lending his funds at a higher rate because at the high rate the pool of borrowers becomes riskier [Stiglitz and Weiss (1981)].

Asymmetric information has also been used to explain the prevalence of debt contracts relative to contingent claims. We have used the argument before (Chapter 8): States of nature are often costly to ascertain and verify for one of the parties to a contract. When two parties enter into a contract, it may be more efficient, as a result, to stipulate noncontingent payments most of the time, thus economizing on verification costs. Only states leading to bankruptcy or default are recognized as resulting in different rights and obligations for the parties involved [Townsend (1979)].

These are only a few of the important issues that can be addressed with the asymmetric information assumption. A full review would deserve a whole book in itself. One reason for the need to be selective is the lack of a unifying framework in this literature: It has often proceeded with specific examples rather than more encompassing models. We refer interested readers to Hirshleifer and Riley (1992) for a broader review of this fascinating and important topic in financial economics.
16.2 On the Possibility of an Upward Sloping Demand Curve

There are plenty of reasons to believe that differences in information and beliefs constitute an important motivation for trading in financial markets. It is extremely difficult to rationalize observed trading volumes in a world of homogeneously informed agents. The main reason for having neglected what is without doubt an obvious fact is that our equilibrium concept, borrowed from traditional supply and demand analysis (the standard notion of Walrasian equilibrium), must be thoroughly updated once we allow for heterogeneous information. The intuition is as follows: The Walrasian equilibrium price is necessarily some function of the orders placed by traders. Suppose traders are heterogeneously informed and that their private information sets are relevant determinants of their orders. The equilibrium price will, therefore, reflect and, in that sense, transmit at least a fraction of the privately held information. In this case, the equilibrium price is not only a signal of relative scarcity, as in a Walrasian world; it also reflects the agents' information. In this context, the price quoted for a commodity or a security may be high because the demand for it is objectively high and/or the supply is low. But it may also be high because a group of investors has private information suggesting that the commodity or security in question will be expensive tomorrow. Of course, this information about the future value of the item is of interest to all. Presumably, except for liquidity reasons, no one will want to sell at a low price something that will likely be of much higher value tomorrow. This means that when the price quoted on the market is high (in the fiction of standard microeconomics, when the Walrasian auctioneer announces a high price), a number of market participants will realize that they have sent in their orders on the basis of information that is probably not shared by the rest of the market.
Depending on the confidence they place in their own information, they may then want to revise their orders, and to do so in a paradoxical way: Because the announced price is higher than they thought it would be, they may want to buy more! Fundamentally, this means that what was thought to be the equilibrium price is not, in fact, an equilibrium. This is a new situation, and it requires a departure from the Walrasian equilibrium concept.

In this chapter we will develop these ideas with the help of an example. We first illustrate the notion of a Rational Expectations Equilibrium (REE), a concept we have used more informally in preceding chapters (e.g., Chapter 9), in a context where all participants share the same information. We then extend it to encompass situations where agents are heterogeneously informed. We provide an example of a fully revealing rational expectations equilibrium, which may be deemed to be the formal representation of the notion of an informationally efficient market. We conclude by discussing some weaknesses of this equilibrium concept and possible extensions.

16.3 An Illustration of the Concept of REE: Homogeneous Information3

Let us consider the joint equilibrium of a spot market for a given commodity and its associated futures market. The context is the familiar now and then, two-date economy. The single commodity is traded at date 1. Viewed from date 0, the date at which producers must make their production decisions, the demand for this commodity, emanating from final users, is stochastic.
It can be represented by a linear demand curve shocked by a random term as in

D(p, η̃) = a − cp + η̃,

where D(·) represents the quantity demanded, p is the (spot) price for the commodity in question, a and c are positive constants, and η̃ is a stochastic demand-shifting element.4 This latter quantity is centered at (has mean value) zero, at which point the demand curve assumes its average position, and it is normally distributed with variance σ_η^2; in other words, h(η̃) = N(0; σ_η^2), where h(·) is the probability density function of η̃. See Figure 16.1 for an illustration.

Insert Figure 16.1 about here

At date 0, the N producers decide on their input level x – the input price is normalized at 1 – knowing that g(x) units of output will then be available after a one-period production lag at date 1. The production process is thus nonstochastic, and the only uncertainty originates from the demand side. Because of the latter feature, the future sale price p̃ is unknown at the time of the input decision.

We shall assume the existence of a futures or forward market5 that our producers may use for hedging or speculative purposes. Specifically, let f > 0 (< 0) be the short (long) futures position taken by the representative producer, that is, the quantity of output sold (bought) for future delivery at the futures (or forward) price p_f. Here we shall assume that the good traded in the futures market (i.e., specified as acceptable for delivery in the futures contract) is the same as the

3 The rest of this chapter closely follows Danthine (1978).
4 Looking forward, the demand for heating oil next winter is stochastic because the severity of the winter is impossible to predict in advance.
5 The term futures market is normally reserved for a market for future delivery taking place in the context of an organized exchange. A forward market refers to private exchanges of similar contracts calling for the future delivery of a commodity or financial instrument.
While knowledge of the creditworthiness and honesty of the counterparty is of the essence in the case of forward contracts, a futures market is anonymous: The exchange is the relevant counterparty for the two sides in a contract. It protects itself and ensures that both parties' engagements will be fulfilled by demanding initial guarantee deposits as well as issuing daily margin calls to the party against whose position the price has moved. In a two-date setting, thus in the absence of interim price changes, the notion of margin calls is not relevant and it is not possible to distinguish futures from forwards.

commodity exchanged on the spot market. For this reason, arbitrageurs will ensure that, at date 1, the futures and the spot price will be exactly identical: In the language of futures markets, the basis is constantly equal to zero and there is thus no basis risk. Under these conditions, the typical producer's cash flow ỹ is

ỹ = p̃ g(x) − x + (p_f − p̃) f,

which can also be written as

ỹ = p̃ (g(x) − f) − x + p_f f.

It is seen that by setting f = g(x), that is, by selling forward the totality of his production, the producer can eliminate all his risks. Although this need not be his optimal futures position, the feasibility of shedding all risks explains the separation result that follows (much in the spirit of the CAPM: diversifiable risk is not priced). Let us assume that producers maximize the expected utility of their future cash flow, where U′(·) > 0 and U″(·) < 0:

max_{x≥0, f} EU(ỹ)

Differentiating with respect to x and f successively, and assuming an interior solution, we obtain the following two FOCs:

x: E[U′(ỹ) p̃] = EU′(ỹ) · 1/g′(x)   (16.1)

f: E[U′(ỹ) p̃] = p_f EU′(ỹ)   (16.2)

which together imply:

p_f = 1/g′(x).   (16.3)
g1 (x) (16.3) Equation (16.3) is remarkable because it says that the optimal input level should be such that the marginal cost of production is set equal to the (known) futures price pf , the latter replacing the expected spot price as the appropriate production signal. The futures price equals marginal cost condition is also worth noticing because it implies that, despite the uncertain context in which they operate, producers should not factor in a risk premium when computing their optimal production decision. For us, a key implication of this result is that, since the supply level will directly depend on the futures price quoted at date 0, the equilibrium spot price at date 1 will be a function of the futures price realized one period earlier. Indeed, writing x = x(pf ) and g(x) = g x(pf ) to highlight the implications of Equation (16.3) for the input and output levels, the supply-equals-demand condition for the date 1 spot market reads N g x(pf ) = a − cp + η ˜ 5 which implicitly defines the equilibrium (date 1) spot price as a function of the date 0 value taken by the futures price, or p = p pf , η . ˜ ˜ (16.4) It is clear from Equation (16.4) that the structure of our problem is such that the probability distribution on p cannot be spelled out independently of the value ˜ taken by pf . Consequently, it would not be meaningful to assume expectations for p, on the part of producers or futures market speculators, which would not ˜ take account of this fundamental link between the two prices. This observation, which is a first step toward the definition of a rational expectation equilibrium, can be further developed by focusing now on the futures market. Let us assume that, in addition to the N producers, n speculators take positions on the futures market. We define speculators by their exclusive involvement in the futures markets; in particular they have no position in the underlying commodity. 
Accordingly, their cash flows are simply

  z̃i = (p^f − p̃) bi,

where bi is the futures position (bi > 0: short; bi < 0: long) taken by speculator i. Suppose for simplicity that their preferences are represented by a linear mean-variance utility function of their cash flows:

  W(z̃i) = E(z̃i) − (χ/2) var(z̃i),

where χ represents the (Arrow-Pratt) absolute risk aversion index of the representative speculator. We shall similarly specialize the utility function of producers. The assumption of a linear mean-variance utility representation is, in fact, equivalent to hypothesizing an exponential (CARA) utility function such as

  W(z̃) = −exp(−χ z̃)

if the context is such that the argument of the function, z̃, is normally distributed. This hypothesis will be verified at the equilibrium of our model. Under these hypotheses, it is easy to verify that the optimal futures position of speculator i is

  bi = (p^f − E(p̃ | p^f)) / (χ var(p̃ | p^f)),    (16.5)

where the conditioning in the expectation and variance operators is made necessary by Equation (16.4). The form of Equation (16.5) is not surprising. It implies that the optimal futures position selected by a speculator will have the same sign as the difference between the futures price and the expected spot price; that is, a speculator will be short (b > 0) if and only if the futures price at which he sells is larger than the spot price at which he expects to be able to unload his position tomorrow. As to the size of his position, it will be proportional to the expected difference between the two prices, which is indicative of the size of the expected return, and inversely related to the perceived riskiness of the speculation, measured by the product of the variance of the spot price and the Arrow-Pratt coefficient of risk aversion. More risk-averse speculators will assume smaller positions, everything else being the same.
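Equation (16.5) is easily checked numerically. The sketch below uses hypothetical values for p^f, the conditional moments of p̃, and χ, and compares the closed form with a grid search over the mean-variance objective W(b) = E(z̃) − (χ/2) var(z̃) with z̃ = (p^f − p̃) b:

```python
import numpy as np

# Hypothetical numbers: futures price, conditional mean and variance of the spot
# price, and the speculator's absolute risk aversion.
p_f, mean_p, var_p, chi = 10.0, 9.0, 4.0, 0.5

# Closed form (16.5): short position since p_f exceeds the expected spot price.
b_star = (p_f - mean_p) / (chi * var_p)

# Grid search over b: E(z) = (p_f - mean_p) * b, var(z) = var_p * b**2.
bs = np.linspace(-5.0, 5.0, 200_001)
W = (p_f - mean_p) * bs - 0.5 * chi * var_p * bs ** 2
b_grid = bs[np.argmax(W)]

print(round(b_star, 4), round(b_grid, 4))
```

For these numbers both approaches give b = 0.5: the speculator sells forward because the futures price exceeds the spot price he expects to face when unwinding.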
Under a linear mean-variance specification of preferences, the producer's objective function becomes

  max_{x≥0, f}  E(p̃ | p^f)(g(x) − f) − x + p^f f − (ξ/2)(g(x) − f)² var(p̃ | p^f),

where ξ is the absolute risk aversion measure for producers. With this specification of the objective function, Equation (16.2), the FOC with respect to f, becomes

  f = g(x(p^f)) + (p^f − E(p̃ | p^f)) / (ξ var(p̃ | p^f)) ≡ f(p^f),    (16.6)

which is the second part of the separation result alluded to previously. The optimal futures position of the representative producer consists in selling forward the totality of his production (g(x)) and then readjusting by a component that is simply the futures position taken by a speculator with the same degree of risk aversion. To see this, compare the last term in Equation (16.6) with Equation (16.5). A producer's actual futures position can be viewed as the sum of these two terms. He may under-hedge, that is, sell less than his future output at the futures price. This is so if he anticipates paying an insurance premium in the form of a sale price (p^f) lower than the spot price he expects to prevail tomorrow. But he could as well over-hedge and sell forward more than his total future output: if he considers the current futures price to be high enough, he may be willing to speculate on it, selling high at the futures price what he hopes to buy low tomorrow on the spot market.

Putting together speculators' and producers' positions, the futures market clearing condition becomes Σ_{i=1}^{n} bi + N f = 0, or

  n (p^f − E(p̃ | p^f)) / (χ var(p̃ | p^f)) + N [ (p^f − E(p̃ | p^f)) / (ξ var(p̃ | p^f)) + g(x(p^f)) ] = 0,    (16.7)

which must be solved for the equilibrium futures price p^f. Equation (16.7) makes clear that the equilibrium futures price depends on the expectations held about the future spot price p̃; we have previously emphasized the dependence on p^f of expectations about p̃.
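To see how Equation (16.7) pins down p^f, here is a minimal numerical sketch. It side-steps the circularity for a moment by taking the conditional expectations as given: it specializes them to the homogeneous-information case E(p̃ | p^f) = A − B p^f with a constant conditional variance v, and uses hypothetical parameter values throughout:

```python
# Hypothetical parameter values. A, B are the intercept and slope of
# E(p | p_f) = A - B * p_f; v is the (constant) conditional variance of the spot price.
A, B, v = 5.0, 0.8, 2.0
N, n, chi, xi, alpha = 10, 20, 0.5, 1.0, 1.2  # producers, speculators, risk aversions, technology

def excess_short(pf):
    """Left-hand side of (16.7): total short positions of speculators and producers."""
    Ep = A - B * pf
    g = (alpha ** 2 / 2) * pf                 # g(x(p_f)) for the technology g(x) = alpha*sqrt(x)
    return n * (pf - Ep) / (chi * v) + N * ((pf - Ep) / (xi * v) + g)

# Closed form: with M = (n/chi + N/xi)/v, condition (16.7) is linear in p_f.
M = (n / chi + N / xi) / v
pf_closed = M * A / (M * (1 + B) + N * alpha ** 2 / 2)

# Bisection on [0, A] finds the same root.
lo, hi = 0.0, A
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if excess_short(mid) < 0 else (lo, mid)

print(round(pf_closed, 6), round(mid, 6))
```

Both methods return the same market-clearing p^f, illustrating that once expectations are specified, (16.7) is an ordinary equilibrium condition.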
This apparently circular reasoning can be resolved under the rational expectations hypothesis, which consists of assuming that individuals have learned to understand the relationship summarized in Equation (16.4), that is,

  E(p̃ | p^f) = E(p(p^f, η̃) | p^f),   var(p̃ | p^f) = var(p(p^f, η̃) | p^f).    (16.8)

Definition 16.1: In the context of this section, a Rational Expectations Equilibrium (REE) is

1. a futures price p^f solving Equation (16.7), given Equation (16.8) and the distributional assumption made on η̃, and
2. a spot price p solving Equation (16.4), given p^f and the realization of η̃.

The first part of the definition indicates that the futures price equilibrates the futures market at date 0 when agents rationally anticipate the effective condition under which the spot market will clear tomorrow and make use of the objective probability distribution of the stochastic parameter η̃. Given the supply of the commodity available tomorrow (itself a function of the equilibrium futures price quoted today), and given the particular value taken by η̃ (i.e., the final position of the demand curve), the second part specifies that the spot price clears the date 1 spot market.

16.4 Fully Revealing REE: An Example

Let us pursue this example one step further and assume that speculators have access to privileged information in the following sense: before the futures exchange opens, speculator i (i = 1, ..., n) observes some unbiased approximation υi of the future realization of the variable η̃. The signal υi can be viewed as the future η itself plus an error of observation ωi. The latter is specific to speculator i, but all speculators are similarly imprecise in the information they manage to gather. Thus,

  υi = η + ωi,

where the ω̃i are i.i.d. N(0, σ_ω²) across agents and across time periods. This relationship can be interpreted as follows: η is a summary measure of the mood of consumers or of other conditions affecting demand.
Speculators can obtain advance information as to the particular value of this realization for the relevant period through, for instance, a survey of consumers' intentions or a detailed weather forecast (assuming the latter influences demand). These observations are not without errors but (regarding these two periods as only one occasion of a multiperiod process where learning has been taking place) speculators are assumed to be sufficiently skilled to avoid systematic biases in their evaluations. In this model, this advance information is freely available to them. Under these conditions, Equation (16.5) becomes

  bi = (p^f − E(p̃ | p^f; υi)) / (χ var(p̃ | p^f; υi)) ≡ b(p^f; υi),

where we make it explicit that both the expected value and the variance of the spot price are affected by the advance piece of information obtained by speculator i. The Appendix details how these expectations can actually be computed, but this need not occupy us for the moment. Formally, Equation (16.6) is unchanged, so that the futures market clearing condition can be written

  N f(p^f) + Σ_{i=1}^{n} b(p^f; υi) = 0,    (16.9)

which implicitly defines the equilibrium futures price as a function of the vector of signals (υ1, ..., υn). We will show that, in equilibrium, p^f in fact depends only on the sum of the signals, which is as informative as the whole vector. Ours is thus a context where the answer to our question is: all the relevant information is aggregated in the equilibrium price and is revealed freely to market participants. The REE is thus fully revealing! Let us proceed and make these assertions precise. Under the assumed technology, g(x) = αx^{1/2}, Equations (16.3), (16.4), and (16.8) become, respectively,

  g(x(p^f)) = (α²/2) p^f,

  p(p^f, η̃) = A − B p^f + (1/c) η̃,   with A = a/c, B = N α²/(2c),

  E(p̃ | p^f) = A − B p^f + (1/c) E(η̃ | p^f),

  var(p̃ | p^f) = (1/c²) var(η̃ | p^f).

The informational structure is as follows. Considering the market as a whole, an experiment has been performed consisting of observing the values taken by n independent drawings of some random variable υ̃, where υ̃ = η̃ + ω̃ and ω̃ is N(0, σ_ω²).
The results are summarized in the vector υ = (υ1, υ2, ..., υn) or, as we shall demonstrate, in the sum of the υj's, Σ υj, which is a sufficient statistic for υ = (υ1, υ2, ..., υn). The latter expression means that conditioning expectations on Σ υj, or on Σ υj and the whole vector υ, yields the same posterior distribution for η̃. In other words, the entire vector does not contain any information that is not already present in the sum. Formally, we have Definition 16.2.

Definition 16.2: Σ υj is a sufficient statistic for υ = (υ1, υ2, ..., υn) relative to the distribution h(η) if and only if h(η̃ | Σ υj, υ) = h(η̃ | Σ υj).

Being a function of the observations [see Equation (16.9)], p^f is itself a statistic used by traders in calibrating their probabilities. The question is: how good a statistic can it be? How well can the futures price summarize the information available to the market? As promised, we now display an equilibrium where the price p^f is a sufficient statistic for the information available to the market; that is, it is invertible for the sufficient statistic Σ υj. In that case, knowledge of p^f is equivalent to knowledge of Σ υj, and farmers' and speculators' expectations coincide. If the futures price has this revealing property, the expectations held at equilibrium by all agents must be (see the Appendix for details):

  E(η̃ | p^f) = E(η̃ | Σ υj) = [σ_η² / (n σ_η² + σ_ω²)] Σ υj    (16.10)

  var(η̃ | p^f) = var(η̃ | Σ υj) = σ_ω² σ_η² / (n σ_η² + σ_ω²).    (16.11)

Equations (16.10) and (16.11) make clear that conditioning on the futures price would, under our hypothesis, be equivalent to conditioning on Σ υj, the latter being, of course, superior information relative to the single piece of individual information, υi, initially obtained by speculator i.
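Equations (16.10) and (16.11) are standard normal-updating formulas (they are derived in the Appendix). A quick Monte Carlo check, with hypothetical values for σ_η², σ_ω², and n:

```python
import numpy as np

rng = np.random.default_rng(0)
s_eta, s_om, n, draws = 1.0, 2.0, 5, 200_000  # hypothetical sigma_eta, sigma_omega, n

eta = rng.normal(0.0, s_eta, draws)
# Sum of the n signals: n*eta plus the sum of n i.i.d. observation errors.
S = n * eta + rng.normal(0.0, s_om * np.sqrt(n), draws)

# (16.10): E(eta | S) = k * S, with k the regression slope cov(eta, S)/var(S)
# (valid for jointly normal variables).
k_theory = s_eta**2 / (n * s_eta**2 + s_om**2)
k_mc = np.cov(eta, S)[0, 1] / np.var(S)

# (16.11): conditional variance = variance of the regression residual.
v_theory = s_om**2 * s_eta**2 / (n * s_eta**2 + s_om**2)
v_mc = np.var(eta - k_mc * S)

print(round(k_theory, 3), round(k_mc, 3), round(v_theory, 3), round(v_mc, 3))
```

With these numbers, k = 1/9 and the conditional variance is 4/9; the simulated values match to Monte Carlo accuracy.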
Using these expressions for the expectations in Equation (16.7), one can show after a few tedious manipulations that, as announced, the market-clearing futures price has the form

  p^f = F + L Σ υj,    (16.12)

where

  F = (N χ + n ξ) A / [ (N χ + n ξ)(B + 1) + N (α²/2) ξ χ (1/c²) σ_ω² σ_η² / (n σ_η² + σ_ω²) ]

and

  L = (1/c) [σ_η² / (n σ_η² + σ_ω²)] (F/A).

Equation (16.12) shows the equilibrium price p^f to be linear in Σ υj, and thus invertible for the sufficient statistic, as postulated. It satisfies our definition of an equilibrium: it is a market-clearing price, the result of speculators' and farmers' maximizing behavior, and it corresponds to an equilibrium state of expectations. That is, when Equation (16.12) is the hypothesized functional relationship between p^f and υ, this relationship is indeed realized, given that each agent appropriately extracts the information Σ υj from the announcement of the equilibrium price.

16.5 The Efficient Market Hypothesis

The result obtained in Section 16.4 is without doubt extreme. It is interesting, however, as it stands as the paragon of the concept of market efficiency. Here is a formal and precise context in which the valuable pieces of information held by heterogeneously informed market participants are aggregated and freely transmitted to all via the trading process. This outcome is reminiscent of the statements made earlier in the century by the famous liberal economist F. von Hayek, who celebrated the virtues of the market as an information aggregator [Hayek (1945)]. It must also correspond to what Fama (1970) intended when introducing the concept of strong form efficiency, defined as a situation where market prices fully reflect all publicly and privately held information.
The reader will recall that Fama (1970) also introduced the notions of weak-form efficiency, covering situations where market prices fully and instantaneously reflect the information included in historical prices, and of semi-strong-form efficiency, where prices, in addition, reflect all publicly available information (of whatever nature). A securities market equilibrium such as the one described in Chapter 9 under the heading of the CCAPM probably best captures what one can understand as semi-strong efficiency: agents are rational in the sense of being expected utility maximizers, they are homogeneously informed (so that all information is indeed publicly held), and they efficiently use all the relevant information when defining their asset holdings. In the CCAPM, no agent can systematically beat the market, a largely accepted hallmark of an efficient market equilibrium, provided beating the market is appropriately defined in terms of both risk and return.

The concept of Martingale, also used in Chapters 11 and 12, has long constituted another hallmark of market efficiency. It is useful here to provide a formal definition.

Definition 16.3: A stochastic process x̃t is a Martingale with respect to an information set Φt if

  E(x̃_{t+1} | Φt) = xt.    (16.13)

It is a short step from this notion of a Martingale to the assertion that one cannot beat the market, which is the case if the current price of a stock is the best predictor of its future price. The latter is likely to be the case if market participants indeed make full use of all available information: in that situation, future price changes can only be unpredictable. An equation like Equation (16.13) cannot be true exactly for stock prices, as stock returns would then be zero on average. It is clear that what could be a Martingale under the previous intuitive reasoning would be a price series normalized to take account of dividends and a normal expected return for holding stock.
To get an idea of what this would mean, let us refer to the price equilibrium Equation (9.2) of the CCAPM:

  U1(Yt) pt = δ Et {U1(Yt+1)(pt+1 + Yt+1)}.    (16.14)

Making the assumption of risk neutrality, one obtains

  pt = δ Et (pt+1 + Yt+1).    (16.15)

If we entertain, for a moment, the possibility of a non-dividend-paying stock, Yt ≡ 0, then Equation (16.15) indeed implies that the normalized series xt = δ^t pt satisfies Equation (16.13) and is thus a Martingale. This normalization implies that the expected return on stockholding is constant and equal to the risk-free rate. In the case of a dividend-paying stock, a similar, but slightly more complicated, normalization yields the same result. The main points of this discussion are (1) that a pure Martingale process requires adjusting the stock price series to take account of dividends and the existence of a positive normal return, and (2) that the Martingale property is a mark of market efficiency only under a strong hypothesis of risk neutrality that includes, as a corollary, the property that the expected return to stockholding is constant. The large empirical literature on market efficiency has not always been able to take account appropriately of these qualifications. See LeRoy (1989) for an in-depth survey of this issue.

Our model of the previous section is more ambitious, addressing as it does the concept of strong form efficiency. Its merit is to underline what it takes for this extreme concept to be descriptive of reality, thus also helping to delineate its limits. Two of these limits deserve mentioning. The first one arises once one attempts, plausibly, to get rid of the hypothesis that speculators are able costlessly to obtain their elements of privileged information. If information is free, it is difficult to see why all speculators would not get all the relevant information, thus reverting to a model of homogeneous information.
However, the spirit of our example is that resources are needed to collect information and that speculators are those market participants specializing in this costly search process. Yet why should speculator i expend resources to obtain private information υi when the equilibrium price will freely reveal to him the sufficient statistic Σ υj, which by itself is more informative than the information he could gather at a cost? The very fact that the equilibrium REE price is fully revealing implies that individual speculators have no use for their own piece of information, with the obvious corollary that they will not be prepared to spend a penny to obtain it. On the other hand, if speculators are not endowed with privileged information, there is no way the equilibrium price can be the celebrated information aggregator and transmitter. In turn, if the equilibrium price is not informative, it may well pay for speculators to obtain valuable private information. We are thus trapped in a vicious circle that results in the nonexistence of equilibrium, an outcome Grossman and Stiglitz (1980) have logically dubbed "the impossibility of informationally efficient markets."

Another limitation of the conceptual setup of Section 16.4 resides in the fact that the hypotheses required for the equilibrium price to be fully revealing are numerous and particularly severe. The rational expectations hypothesis includes, as always, the assumption that market participants understand the environment in which they operate. This segment of the hypothesis is particularly demanding in the context of our model, and it is crucial for agents to be able to extract sufficient statistics from the equilibrium futures price.
By that we mean that, for individual agents to be in a position to read all the information concealed in the equilibrium price, they need to know exactly the number of uninformed and informed agents and their respective degrees of risk aversion, which must be identical within each agent class. The information held by the various speculators must have identical precision (i.e., an error term with the same variance), and none of the market participants can be motivated by liquidity considerations. All in all, these requirements are simply too strong to be plausibly met in real-life situations. Although the real-life complications may be partly compensated for by the fact that trading is done on a repeated, almost continuous basis, it is more reasonable to assume that the fully revealing equilibrium is the exception rather than the rule. The more normal situation is certainly one where some, but not all, information is aggregated and transmitted by the equilibrium price. In such an equilibrium, the incentives to collect information remain, although if the price is too good a transmitter, they may be significantly reduced. The nonexistence-of-equilibrium problem uncovered by Grossman and Stiglitz is then more a curiosity than a real source of worry. Equilibria with partial transmission of information have been described in the literature under the heading of noisy rational expectations equilibria. The apparatus is quite a bit messier than in the reference case discussed in Section 16.4 and we will not explore it further [see Hellwig (1980) for a first step in this direction]. Suffice it to say that this class of models serves as the basis for the branch of financial economics known as market microstructure, which strives to explain the specific forms and rules underlying asset trading in a competitive market environment. The reader is referred to O'Hara (1997) for a broad coverage of these topics.

References

Akerlof, G.
(1970), "The Market for Lemons: Qualitative Uncertainty and the Market Mechanism," The Quarterly Journal of Economics, 84, 488–500.

Danthine, J.-P. (1978), "Information, Futures Prices and Stabilizing Speculation," Journal of Economic Theory, 17, 79–98.

Fama, E. (1970), "Efficient Capital Markets: A Review of Theory and Empirical Work," Journal of Finance, 25, 383–417.

Greenwald, B., Stiglitz, J.E. (1993), "Financial Market Imperfections and Business Cycles," The Quarterly Journal of Economics, 108, 77–115.

Grossman, S., Stiglitz, J.E. (1980), "On the Impossibility of Informationally Efficient Markets," American Economic Review, 70(3), 393–408.

Hayek, F.H. (1945), "The Use of Knowledge in Society," American Economic Review, 35, 519–530.

Hellwig, M.F. (1980), "On the Aggregation of Information in Competitive Markets," Journal of Economic Theory, 22, 477–498.

Hirshleifer, J., Riley, J.G. (1992), The Analytics of Uncertainty and Information, Cambridge University Press, Cambridge.

LeRoy, S.F. (1989), "Efficient Capital Markets and Martingales," Journal of Economic Literature, 27, 1583–1621.

O'Hara, M. (1997), Market Microstructure Theory, Basil Blackwell, Malden, Mass.

Stiglitz, J.E., Weiss, A. (1981), "Credit Rationing in Markets with Imperfect Information," American Economic Review, 71, 393–410.

Townsend, R. (1979), "Optimal Contracts and Competitive Markets with Costly State Verification," Journal of Economic Theory, 21, 265–293.

Appendix: Bayesian Updating with the Normal Distribution

Theorem A16.1: Assume x̃ and ỹ are two normally distributed vectors with

  (x̃, ỹ)' ~ N((x̄, ȳ)', V),   V = [[Vxx, Vxy], [Vxy', Vyy]].

Then the distribution of x̃ conditional on the observation ỹ = y⁰ is normal, with mean

  x̄ + Vxy Vyy⁻¹ (y⁰ − ȳ)

and covariance matrix

  Vxx − Vxy Vyy⁻¹ Vxy'.

Applications

Let υ̃i = η̃ + ω̃i.
If

  (η̃, υ̃i)' ~ N( (0, 0)', [[σ_η², σ_η²], [σ_η², σ_η² + σ_ω²]] ),

then

  E(η̃ | υi) = 0 + [σ_η² / (σ_η² + σ_ω²)](υi − 0) = [σ_η² / (σ_η² + σ_ω²)] υi,

  var(η̃ | υi) = σ_η² − σ_η⁴ / (σ_η² + σ_ω²) = σ_η² σ_ω² / (σ_η² + σ_ω²).

If

  (η̃, Σ υ̃i)' ~ N( (0, 0)', [[σ_η², n σ_η²], [n σ_η², n² σ_η² + n σ_ω²]] ),

then

  E(η̃ | Σ υi) = 0 + [n σ_η² / (n² σ_η² + n σ_ω²)] Σ υi = [σ_η² / (n σ_η² + σ_ω²)] Σ υi,

  var(η̃ | Σ υi) = σ_η² − n² σ_η⁴ / (n² σ_η² + n σ_ω²) = σ_η² σ_ω² / (n σ_η² + σ_ω²).

Review of Basic Options Concepts and Terminology

March 24, 2005

1 Introduction

The purchase of an options contract gives the buyer the right to buy (call options contract) or sell (put options contract) some other asset under pre-specified terms and circumstances. This underlying asset, as it is called, can in principle be anything with a well-defined price. For example, options on individual stocks, portfolios of stocks (i.e., indices such as the S&P 500), futures contracts, bonds, and currencies are actively traded. Note that options contracts do not represent an obligation to buy or sell and, as such, must have a positive, or at worst zero, price. "American" style options allow the right to buy or sell (the so-called "right of exercise") at any time on or before a pre-specified future date (the "expiration" date). "European" options allow the right of exercise only at the pre-specified expiration date. Most of our discussion will be in the context of European call options. If the underlying asset does not provide any cash payments during the time to expiration (no dividends in the case of options on individual stocks), however, it can be shown that it is never wealth maximizing to exercise an American call option prior to expiration (its market price will at least equal and likely exceed its value if exercised). In this case, American and European call options are essentially the same, and are priced identically. The same statement is not true for puts.
In all applied options work, it is presumed that the introduction of options trading does not influence the price process of the underlying asset on which they are written. For a full general equilibrium in the presence of incomplete markets, however, this will not generally be the case.

2 Call and Put Options on Individual Stocks

1. European call options

(a) Definition: A European call options contract gives the owner the right to buy a pre-specified number of shares of a pre-specified stock (the underlying asset) at a pre-specified price (the "strike" or "exercise" price) on a pre-specified future date (the expiration date). American options allow exercise "on or before" the expiration date. A contract typically represents 100 options with the cumulative right to buy 100 shares.

(b) Payoff diagram: It is customary to describe the payoff to an individual call option by its value at expiration as in Figure 1.

[Figure 1: Payoff Diagram: European Call Option]

In Figure 1, ST denotes the possible values of the underlying stock at expiration date T, K the exercise price, and CT the corresponding call value at expiration. Algebraically, we would write CT = max{0, ST − K}. Figure 1 assumes the perspective of the buyer; the payoff to the seller (the so-called "writer" of the option) is exactly opposite to that of the buyer. See Figure 2.

[Figure 2: Payoff Diagram: European Call (Writer's Perspective)]

Note that options give rise to exactly offsetting wealth transfers between the buyer and the seller. The options-related wealth positions of buyers and sellers must thus always sum to zero. As such, we say that options are in zero net supply, and thus are not elements of M, the market portfolio of the classic CAPM.

(c) Remarks: The purchaser of a call option is essentially buying the expected price appreciation of the underlying asset in excess of the exercise price.
As we will make explicit in a later chapter, a call option can be thought of as a very highly leveraged position in the underlying stock, a property that makes it an ideal vehicle for speculation: for relatively little money (as the call option price will typically be much less than the underlying share's price) the buyer can acquire the upward potential. There will, of course, be no options market without substantial diversity of expectations regarding the future price behavior of the underlying stock.

2. European put options

(a) Definition: A European put options contract gives the buyer the right to sell a pre-specified number of shares of the underlying stock at a pre-specified price (the "exercise" or "strike" price) on a pre-specified future date (the expiration date). American puts allow for the sale on or before the expiration date. A typical contract represents 100 options with the cumulative right to sell 100 shares.

(b) Payoff diagram: In the case of a put, the payoff at expiration to an individual option is represented in Figure 3.

[Figure 3: Payoff Diagram for a European Put]

In Figure 3, PT denotes the put's value at expiration; otherwise, the notation is the same as for calls. The algebraic equivalent to the payoff diagram is PT = max{0, K − ST}. The same comments about wealth transfers apply equally to the put as to the call; puts are thus also not included in the market portfolio M.

(c) Remarks: Puts pay off when the underlying asset's price falls below the exercise price at expiration. This makes puts ideal financial instruments for "insuring" against price declines. Let us consider the payoff to the simplest "fundamental hedge" portfolio: one share of stock plus one put written on the stock with exercise price K.
To see how these two securities interact with one another, let us consider their net total value at expiration:

Table 1: Payoff Table for the Fundamental Hedge

  Events            ST ≤ K     ST > K
  Stock             ST         ST
  Put               K − ST     0
  Hedge Portfolio   K          ST

The diagrammatic equivalent is in Figure 4.

[Figure 4: Payoff Diagram: Fundamental Hedge]

The introduction of the put effectively bounds the hedged position from below: its value can fall no lower than K. Such insurance costs money, of course, and its price is the price of the put. Puts and calls are fundamentally different securities: calls pay off when the underlying asset's price at expiration exceeds K; puts pay off when its price falls short of K. Although the payoff patterns of puts and calls are individually simple, virtually any payoff pattern can be replicated by a properly constructed portfolio of these instruments.

3 The Black-Scholes Formula for a European Call Option

1. What it presumes: The probability distribution of the possible payoffs to call ownership will depend upon the underlying stock's price process. The Black-Scholes formula gives the price of a European call under the following assumptions:

(a) the underlying stock pays no dividends over the time to expiration;
(b) the risk-free rate of interest is constant over the time to expiration;
(c) the continuously compounded rate of return on the underlying stock is governed by a geometric Brownian motion with constant mean and variance over the time to expiration.

This model of rate-of-return evolution essentially presumes that the rate of return on the underlying stock (its rate of price appreciation, since there are no dividends) over any small interval of time ∆t ∈ [0, T] is given by

  r_{t,t+∆t} = ∆S_{t,t+∆t} / St = µ̂ ∆t + σ̂ ε̃ √∆t,    (1)

where ε̃ denotes a standard normal random variable and µ̂, σ̂ are, respectively, the annualized continuously compounded mean return and the standard deviation of the continuously compounded return on the stock.
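Equation (1) is easy to simulate. The sketch below draws a large sample of one-day returns under hypothetical values of µ̂ and σ̂ (not from the text) and checks that their sample mean and standard deviation match µ̂∆t and σ̂√∆t:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.10, 0.40        # hypothetical annualized mean and volatility
dt = 1 / 250                  # one trading day, in years
paths = 1_000_000

# Equation (1): r = mu*dt + sigma * eps * sqrt(dt), eps standard normal.
r = mu * dt + sigma * rng.standard_normal(paths) * np.sqrt(dt)

print(round(r.mean(), 5), round(r.std(), 5))
# Sample mean is close to mu*dt = 0.0004; sample std is close to sigma*sqrt(dt) = 0.0253.
```

This is the sense in which short-horizon returns are (approximately) normal with mean and variance proportional to the length of the interval.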
Under this abstraction the rate of return over any small interval of time ∆t is distributed N(µ̂∆t, σ̂√∆t), the second argument being the standard deviation; furthermore, these returns are independently distributed through time. Recall (Chapter 3) that these are the two most basic statistical properties of stock returns. More precisely, Equation (1) describes the discrete time approximation to geometric Brownian motion. True geometric Brownian motion presumes continuous trading, and its attendant continuous compounding of returns. Of course continuous trading presumes an uncountably large number of "trades" in any finite interval of time, which is impossible. It should be thought of as a very useful mathematical abstraction. Under continuous trading the expression analogous to Equation (1) is

  dS/S = µ̂ dt + σ̂ ε̃ √dt.    (2)

Much more will be said about this price process in the web-complement entitled "An Intuitive Overview of Continuous Time Finance."

2. The formula: The Black-Scholes formula is given by

  C(S, K) = S N(d1) − e^(−r̂f T) K N(d2),

where

  d1 = [ln(S/K) + (r̂f + σ̂²/2) T] / (σ̂ √T),
  d2 = d1 − σ̂ √T.

In this formula:

  S = the price of the stock "today" (at the time the call valuation is being undertaken);
  K = the exercise price;
  T = the time to expiration, measured in years;
  r̂f = the estimated continuously compounded annual risk-free rate;
  σ̂ = the estimated annualized standard deviation of the continuously compounded rate of return on the underlying asset; and
  N( ) = the standard normal cumulative distribution function.

In any practical problem, of course, σ̂ must be estimated. The risk-free rate is usually unambiguous, as normally there is a T-bill coming due on approximately the same date as the options contracts expire (U.S. markets).

3. An example: Suppose

  S = $68, K = $60, T = 88 days = 88/365 = .241 years, σ̂ = .40, rf = 6% (not continuously compounded).

The r̂f inserted into the formula is that rate which, when continuously compounded, is equivalent to the actual 6% annual rate; it must satisfy e^(r̂f) = 1.06, or r̂f = ln(1.06) = .058.
With T = .241 years and r̂f = .058,

  d1 = [ln(68/60) + (.058 + ½(.4)²)(.241)] / ((.40)√.241) = .806,
  d2 = .806 − (.40)√.241 = .610,
  N(d1) = N(.806) ≈ .790,
  N(d2) = N(.610) ≈ .729,

  C = $68(.790) − e^(−(.058)(.241)) ($60)(.729) = $10.60.

4. Estimating σ: We gain intuition about the Black-Scholes model if we understand how its inputs are obtained, and the only input with any real ambiguity is σ. Here we present a straightforward approach to its estimation based on the security's historical price series. Since volatility is an unstable attribute of a stock, by convention it is viewed as unreliable to go more than 180 days into the past for the choice of historical period. Furthermore, since we are trying to estimate the µ, σ of a continuously compounded return process, the interval of measurement should be, in principle, as small as possible. For most practical applications, daily data is the best we can obtain. The procedure is as follows:

i) Select the number of chosen historical observations and index them i = 0, 1, 2, ..., n, with observation 0 most distant in the past and observation n most recent. This gives us n + 1 price observations, from which we will obtain n daily return observations.

ii) Compute the equivalent continuously compounded rate of return (ROR) on the underlying asset over the time intervals implied by the selection of the data (i.e., if we choose to use daily data we compute the continuously compounded daily rate):

  ri = ln(Si / Si−1).

This is the equivalent continuously compounded ROR from the end of period i − 1 to the end of period i. Why is this the correct calculation? Suppose Si = 110 and Si−1 = 100; we want the continuously compounded return x to be such that

  Si−1 e^x = Si, or 100 e^x = 110,
  e^x = 110/100,
  x = ln(110/100) = .0953.

This is the continuously compounded rate that will increase the price from $100 to $110.
iii) Compute the sample mean:

  µ̂ = (1/n) Σ_{i=1}^{n} ri.

Remark: If the time intervals are all adjacent, i.e., if we have not omitted any observations, then

  µ̂ = (1/n) Σ ri = (1/n)[ln(S1/S0) + ln(S2/S1) + ... + ln(Sn/Sn−1)]
     = (1/n) ln[(S1/S0)(S2/S1)···(Sn/Sn−1)] = (1/n) ln(Sn/S0).

Note that if we omit some calendar observations (perhaps due to, say, merger rumors at the time which are no longer relevant), this shortcut fails.

iv) Estimate σ:

  σ̂ = sqrt[ (1/(n−1)) Σ_{i=1}^{n} (ri − µ̂)² ], or equivalently

  σ̂ = sqrt[ (1/(n−1)) Σ ri² − (1/(n(n−1))) (Σ ri)² ].

Example: Consider the (daily) data in Table 2.

Table 2

  Period   Closing price
  i = 0    $26.00
  i = 1    $26.50
  i = 2    $26.25
  i = 3    $26.25
  i = 4    $26.50

  r1 = ln(26.50/26.00) = .019048
  r2 = ln(26.25/26.50) = −.009479
  r3 = ln(26.25/26.25) = 0
  r4 = ln(26.50/26.25) = .009479

In this case,

  µ̂ = (1/4) ln(26.50/26.00) = (1/4) ln(1.019231) = .004762.

Using the above formula,

  Σ ri² = (.019048)² + (−.009479)² + (.009479)² = .000543,
  (Σ ri)² = (.019048)² = .000363,
  σ̂² = (1/3)(.000543) − (1/12)(.000363) = .000181 − .0000302 = .000151,
  σ̂ = √.000151 = .0123.

v) Annualize the estimate of σ: We will assume 250 trading days per year. Our estimate for the variance of the continuously compounded annual return is thus

  σ̂²annual = 250 σ̂²daily = 250(.000151) = .0377.

Remark: Why do we do this? Why can we multiply our estimate by 250 to scale things up? We can do this because of our geometric Brownian motion assumption that returns are independently distributed. This is detailed as follows. Our objective is an estimate for

  var[ln(S_T=1yr / S0)] = var[ln( (S_day1/S0)(S_day2/S_day1)···(S_day250/S_day249) )]
  = var[ln(S_day1/S0) + ln(S_day2/S_day1) + ... + ln(S_day250/S_day249)]
  = var[ln(S_day1/S0)] + var[ln(S_day2/S_day1)] + ... + var[ln(S_day250/S_day249)].

The last equality holds because returns are uncorrelated from day to day under the geometric Brownian motion assumption.
Furthermore, the daily return distribution is presumed to be the same for every day, and thus the daily variance is the same for every day under geometric Brownian motion. Thus,

$$\operatorname{var}\ln\frac{S_{T=1yr}}{S_0} = 250\,\sigma^2_{daily}.$$

We have obtained an estimate of $\sigma^2_{daily}$, which we write as $\hat{\sigma}^2_{daily}$. To convert this to an annual variance, we must thus multiply by 250. Hence $\hat{\sigma}^2_{annual} = 250\cdot\hat{\sigma}^2_{daily} = .0377$, as noted. If a weekly $\hat{\sigma}^2$ were obtained, it would be multiplied by 52.

4 The Black-Scholes Formula for an Index

Recall that the Black-Scholes formula assumed that the underlying stock did not pay any dividends; if this is not the case, an adjustment must be made. A natural way to adapt the Black-Scholes formula to the dividend situation is to replace the underlying stock's price S in the formula by S − PV(EDIVs), where PV(EDIVs) is the present value, relative to t = 0 (the date at which the calculation is being undertaken), of all dividends expected to be paid over the time to the option's expiration. In cases where the dividend is highly uncertain, this calculation could be problematic. We want to make such an adjustment because the payment of a dividend reduces the underlying stock's value by the amount of the dividend and thus reduces the value (ceteris paribus) of the option written on it. Options are not "dividend protected," as it is said.

For an index, such as the S&P500, the dividend yield on the index portfolio can be viewed as continuous, and the steady payment of this dividend will have a continuous tendency to reduce the index value. Let d denote the dividend yield on the index. In a manner exactly analogous to the single-stock dividend treatment noted above, the corresponding Black-Scholes formula is

$$C = Se^{-dT}N(d_1) - Ke^{-r_f T}N(d_2)$$

where

$$d_1 = \frac{\ln(S/K) + \left(r_f - d + \frac{\sigma^2}{2}\right)T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T}.$$
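The index formula can be sketched in code as follows. The index level, strike, yield, rate, volatility, and maturity below are illustrative values only (not taken from the text), and the put version is added purely so that put-call parity for a continuous-yield underlying, C − P = Se^(−dT) − Ke^(−r_f T), can serve as a sanity check on the implementation:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_div(S, K, rf, d, sigma, T):
    """Black-Scholes call on an underlying with continuous dividend yield d."""
    d1 = (math.log(S / K) + (rf - d + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * math.exp(-d * T) * norm_cdf(d1) - K * math.exp(-rf * T) * norm_cdf(d2)

def bs_put_div(S, K, rf, d, sigma, T):
    """Corresponding European put (for the parity check)."""
    d1 = (math.log(S / K) + (rf - d + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-rf * T) * norm_cdf(-d2) - S * math.exp(-d * T) * norm_cdf(-d1)

# illustrative inputs: an at-the-money index option
S, K, rf, d, sigma, T = 1000.0, 1000.0, 0.05, 0.02, 0.18, 0.5
C = bs_call_div(S, K, rf, d, sigma, T)
P = bs_put_div(S, K, rf, d, sigma, T)
```

Note that setting d = 0 recovers the ordinary Black-Scholes value, and a positive yield lowers the call, consistent with the discussion above.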
An Intuitive Overview of Continuous Time Finance

June 1, 2005

1 Introduction

If we think of stock prices as arising from the equilibration of traders' demands and supplies, then the binomial model is implicitly one in which security trading occurs at discrete time intervals, however short, and this is in fact what happens. It will be mathematically convenient, however, to abstract from this intuitive setting and hypothesize that trading takes place "continuously." This is consistent with the notion of continuous compounding. But it is not fully realistic: it implies that an uncountable number of individual transactions may transpire in any interval of time, however small, which is physically impossible.

Continuous time finance is principally concerned with techniques for the pricing of derivative securities under the fiction of continuous trading. These techniques frequently allow closed form solutions to be obtained, at the price of working in a context that is less intuitive than discrete time. In this Appendix we hope to convey some idea of how this is done.

We will first need to develop a continuous time model of a stock's price evolution through time. Such a model must respect the basic statistical regularities known to characterize equity returns empirically: (i) stock prices are lognormally distributed, which means that returns (continuously compounded) are normally distributed; (ii) for short time horizons, stock returns are independently and identically distributed over non-overlapping time intervals. After we have faithfully represented these equity regularities in a continuous time setting, we will move on to a consideration of derivatives pricing. In doing so we aim to give some idea of how the principles of risk neutral valuation carry over to this specialized setting. The discussion aims at intuition; no attempt is made to be mathematically complete. In all cases this intuition has its origins in the discrete time context.
This leads to a discussion of random walks.

2 Random Walks and Brownian Motion

Consider a time horizon composed of N adjacent time intervals, each of duration Δt, with endpoints indexed $t_0, t_1, t_2, ..., t_N$; that is, $t_i - t_{i-1} = \Delta t$, i = 1, 2, ..., N. We define a discrete time stochastic process on this succession of time indices by

$$x(t_0) = 0$$

$$x(t_{j+1}) = x(t_j) + \tilde{\varepsilon}(t_j)\sqrt{\Delta t}, \quad j = 0, 1, 2, ..., N-1,$$

where, for all j, $\tilde{\varepsilon}(t_j) \sim N(0,1)$. It is further assumed that the random factors $\tilde{\varepsilon}(t_j)$ are independent of one another; i.e.,

$$E(\tilde{\varepsilon}(t_j)\tilde{\varepsilon}(t_i)) = 0, \quad i \neq j.$$

This is a specific example of a random walk, specific in the sense that the uncertain disturbance term follows a normal distribution.¹ We want to understand the behavior of a random walk over extended time periods. More precisely, we want to characterize the statistical properties of the difference $x(t_k) - x(t_j)$ for any j < k. Clearly,

$$x(t_k) - x(t_j) = \sum_{i=j}^{k-1}\tilde{\varepsilon}(t_i)\sqrt{\Delta t}.$$

Since the random disturbances $\tilde{\varepsilon}(t_i)$ all have mean zero,

$$E(x(t_k) - x(t_j)) = 0.$$

Furthermore,

$$\operatorname{var}(x(t_k) - x(t_j)) = E\left[\sum_{i=j}^{k-1}\tilde{\varepsilon}(t_i)\sqrt{\Delta t}\right]^2 = \sum_{i=j}^{k-1}E\left[\tilde{\varepsilon}(t_i)^2\right]\Delta t \ \text{(by independence)} = \sum_{i=j}^{k-1}(1)\Delta t = (k-j)\Delta t,$$

since $E[\tilde{\varepsilon}(t_i)^2] = 1$.

¹ In particular, a very simple random walk could be of the form $x(t_{j+1}) = x(t_j) + n(t_j)$, where for all j = 0, 1, 2, ...,

n(t_j) = +1, if a coin is flipped and a head appears; −1, if a coin is flipped and a tail appears.

At each time interval x(t_j) either increases or diminishes by one depending on the outcome of the coin toss. Suppose we think of x(t_0) ≡ 0 as representing the center of the sidewalk: an intoxicated person staggers one step to the right or to the left of the center in a manner consistent with independent coin flips (heads implies to the right).
This example is the source of the name "random walk."

If we identify

$$x_{t_j} = \ln q^e_{t_j},$$

where $q^e_{t_j}$ is the price of the stock at time $t_j$, then this simple random walk model becomes a candidate for our model of stock price evolution beginning from t = 0: at each node $t_j$, the logarithm of the stock's price is distributed normally, with mean $\ln q^e_{t_0}$ and variance jΔt.

Since the discrete time random walk is so respectful of the empirical realities of stock prices, it is natural to seek its counterpart for continuous time. This is referred to as a "Brownian motion" (or a Wiener process), and it represents the limit of the discrete time random walk as we pass to continuous time; i.e., as Δt → 0. It is represented symbolically by

$$dz = \tilde{\varepsilon}(t)\sqrt{dt},$$

where $\tilde{\varepsilon}(t) \sim N(0,1)$ and $E(\tilde{\varepsilon}(t')\tilde{\varepsilon}(t)) = 0$ for any times t, t′ with t ≠ t′. We used the word "symbolically" not only because the term dz does not represent a differential in the terminology of ordinary calculus, but also because we make no attempt here to describe how such a limit is taken. Following what is commonplace notation in the literature, we will also not write a ~ over z even though it represents a random quantity.

More formally, a stochastic process z(t) defined on [0, T] is a Brownian motion provided the following three properties are satisfied:
(i) for any $t_1 < t_2$, $z(t_2) - z(t_1)$ is normally distributed with mean zero and variance $t_2 - t_1$;
(ii) for any $0 \leq t_1 < t_2 \leq t_3 < t_4$, $z(t_4) - z(t_3)$ is statistically independent of $z(t_2) - z(t_1)$; and
(iii) $z(t_0) \equiv 0$ with probability one.

A Brownian motion is a very unusual stochastic process, and we can only give a hint of what is actually transpiring as it evolves. Three of its properties are considered below:

1. First, a Brownian motion is a continuous process. If we were able to trace out a sample path z(t) of a Brownian motion, we would not see any jumps.²

2.
However, this sample path is not at all "smooth"; it is, in fact, as "jagged as can be," which we formalize by saying that it is nowhere differentiable. A function must be essentially smooth if it is to be differentiable: if we magnify a segment of its time path enough, it will appear approximately linear. This smoothness is totally absent with a Brownian motion.

3. Lastly, a Brownian motion is of "unbounded variation." This is perhaps the least intuitive of its properties. By this is intended the idea that if we could take one of those mileage wheels which are drawn along a route on a map to assess the overall distance (each revolution of the wheel corresponding to a fixed number of kilometers) and apply it to the sample path of a Brownian motion, then no matter how small the time interval, the mileage wheel would record "an infinite distance" (if it ever got to the end of the path!).

One way of visualizing such a process is to imagine a rough sketch of a particular sample path where we connect its positions at a sequence of discrete time instants by straight lines. Figure 1 proposes one such path.

[Figure 1: a sketched sample path of $z_t$ over [0, T], with times $t_1$ and $t_2$ marked]

Suppose that we were next to enlarge the segment between time intervals $t_1$ and $t_2$. We would find something on the order of Figure 2.

[Figure 2: the enlarged segment of $z_t$ between $t_1$ and $t_2$, with intermediate times $t_3$ and $t_4$ marked]

Continue this process of taking a segment, enlarging it, taking another subsegment of that segment, enlarging it, etc. (in Figure 2 we would next enlarge the segment from $t_3$ to $t_4$). For a typical differentiable function of bounded variation, we would eventually be enlarging such a small segment that it would appear as a straight line. With a Brownian motion, however, this will never happen.

² At times such as the announcement of a takeover bid, stock prices exhibit jumps. We will not consider such "jump processes," although considerable current research effort is being devoted to studying them, and to the pricing of derivatives written on them.
No matter how much we enlarge even a segment corresponding to an arbitrarily short time interval, the same "sawtooth" pattern will appear, and there will be many, many "teeth." A Brownian motion represents a very special case of a continuous process with independent increments. For such processes, the standard deviation per unit of time becomes unbounded as the interval becomes smaller and smaller:

$$\lim_{\Delta t \to 0}\frac{\sigma\sqrt{\Delta t}}{\Delta t} = \lim_{\Delta t \to 0}\frac{\sigma}{\sqrt{\Delta t}} = \infty.$$

No matter how small the time period, proportionately, a lot of variation remains. This constitutes our translation of the abstraction of a discrete time random walk to a context of continuous trading.³

3 More General Continuous Time Processes

A Brownian motion will be the principal building block of our description of the continuous time evolution of a stock's price: it will be the "engine" or "source" of the uncertainty. To it is often added a deterministic component intended to capture the "average" behavior of the process through time. Together we have something of the form

$$dx(t) = a\,dt + b\,\tilde{\varepsilon}(t)\sqrt{dt} = a\,dt + b\,dz, \qquad (1)$$

where the first component is the deterministic one and a is referred to as the drift term. This is an example of a generalized Brownian motion or, to use more common terminology, a generalized Wiener process. If there were no uncertainty, x(t) would evolve deterministically: integrating dx(t) = a dt, we obtain x(t) = x(0) + at. The solution to (1) is thus of the form

$$x(t) = x(0) + at + bz(t), \qquad (2)$$

where the properties of z(t) were articulated earlier (recall properties (i), (ii), and (iii) of the definition). These imply that:

$$E(x(t)) = x(0) + at, \qquad \operatorname{var}(x(t)) = b^2 t, \qquad \text{s.d.}(x(t)) = b\sqrt{t}.$$

Equation (2) may be further generalized to allow the coefficients to depend upon the time and the current level of the process:

$$dx(t) = a(x(t), t)\,dt + b(x(t), t)\,dz.$$
(3)

³ The name Brownian motion comes from the 19th-century botanist Robert Brown, who studied the behavior of dust particles floating on the surface of water. Under a microscope, dust particles are seen to move randomly about in a manner similar to the sawtooth pattern above, except that the motion can be in any direction through 360°. The interpretation of the phenomenon is that the dust particles experience the effect of random collisions with moving water molecules.

In this latter form, it is referred to as an Ito process, after one of the earliest and most important developers of this field. An important issue in the literature, but one we will ignore, is to determine the conditions on a(x(t), t) and b(x(t), t) under which equation (3) has a solution. Equations (1) and (3) are generically referred to as stochastic differential equations. Given this background, we now return to the original objective of modeling the behavior of a stock's price process.

4 A Continuous-Time Model of Stock Price Behavior

Let us now restrict our attention to stocks which pay no dividends, so that stock returns are exclusively determined by price changes. Our basic discrete time model formulation is:

$$\ln q^e(t + \Delta t) - \ln q^e(t) = \mu\Delta t + \sigma\tilde{\varepsilon}\sqrt{\Delta t}. \qquad (4)$$

Notice that the stochastic process is imposed on differences in the logarithm of the stock's price. Equation (4) thus asserts that the continuously compounded return to the ownership of the stock over the time period t to t + Δt is distributed normally with mean µΔt and variance σ²Δt. This is clearly a lognormal model:

$$\ln(q^e(t + \Delta t)) \sim N\left(\ln q^e(t) + \mu\Delta t,\ \sigma^2\Delta t\right).$$

It is a more general formulation than a pure random walk, as it admits the possibility that the mean increase in the logarithm of the price is positive. The continuous time analogue of (4) is

$$d\ln q^e(t) = \mu\,dt + \sigma\,dz.$$
Following (2), it has the solution

$$\ln q^e(t) = \ln q^e(0) + \mu t + \sigma z(t), \qquad (5)$$

where

$$E\ln q^e(t) = \ln q^e(0) + \mu t, \quad \text{and} \quad \operatorname{var}\ln q^e(t) = \sigma^2 t. \qquad (6)$$

Since ln q^e(t) on average grows linearly with t (so that, on average, q^e(t) will grow exponentially), Equations (5) and (6) are, together, referred to as a geometric Brownian motion (GBM). It is clearly a lognormal process: $\ln q^e(t) \sim N(\ln q^e(0) + \mu t,\ \sigma^2 t)$. The webcomplement entitled "Review of Basic Options Concepts and Terminology" illustrates how the parameters µ and σ can be estimated under the maintained assumption that time is measured in years.

While Equation (6) is a complete description of the evolution of the logarithm of a stock's price, we are rather interested in the evolution of the price itself. Passing from a continuous time process on ln q^e(t) to one on q^e(t) is not a trivial matter, however, and we need some additional background to make the conversion correctly. This is considered in the next few paragraphs.

The essence of lognormality is the idea that if a random variable ỹ is distributed normally, then the random variable w̃ = e^ỹ is distributed lognormally. Suppose, in particular, that $\tilde{y} \sim N(\mu_y, \sigma_y)$. A natural question is: how are $\mu_w$ and $\sigma_w$ related to $\mu_y$ and $\sigma_y$ when w̃ = e^ỹ? We first note that it is not the case that $\mu_w = e^{\mu_y}$ and $\sigma_w = e^{\sigma_y}$. Rather, it can be shown that

$$\mu_w = e^{\mu_y + \frac{1}{2}\sigma_y^2} \qquad (7)$$

and

$$\sigma_w = e^{\mu_y + \frac{1}{2}\sigma_y^2}\left(e^{\sigma_y^2} - 1\right)^{1/2}. \qquad (8)$$

These formulae are not obvious, but we can at least shed some light on (7): why should the variance of ỹ have an impact on the mean of w̃? To see why this is so, let us remind ourselves of the shape of the lognormal probability density function, as found in Figure 3.

[Figure 3: A lognormal density function (probability plotted against w̃, which is bounded below at zero)]

Suppose there is an increase in variance. Since this distribution is pinched off to the left at zero, a higher variance can only imply (within the same class of distributions) that probability is principally shifted to higher values of w̃.
But this will have the simultaneous effect of increasing the mean of w̃: the variance of ỹ and the mean of w̃ cannot be specified independently. The mean and standard deviation of the lognormal variable w̃ are thus each related to both the mean and variance of ỹ as per the relationships in Equations (7) and (8).

These results allow us to express the mean and standard deviation of q^e(t) (by analogy, w̃) in relation to $\ln q^e(0) + \mu t$ and $\sigma^2 t$ (by analogy, the mean and variance of ỹ) via Equations (5) and (6):

$$Eq^e(t) = e^{\ln q^e(0) + \mu t + \frac{1}{2}\sigma^2 t} = q^e(0)\,e^{(\mu + \frac{1}{2}\sigma^2)t} \qquad (9)$$

$$\text{s.d.}\,q^e(t) = q^e(0)\,e^{(\mu + \frac{1}{2}\sigma^2)t}\left(e^{\sigma^2 t} - 1\right)^{1/2}. \qquad (10)$$

We are now in a position, at least at an intuitive level, to pass from a stochastic differential equation describing the behavior of ln q^e(t) to one that governs the behavior of q^e(t). If ln q^e(t) is governed by Equation (5), then

$$dq^e(t) = q^e(t)\left[\left(\mu + \frac{1}{2}\sigma^2\right)dt + \sigma\,dz(t)\right], \qquad (11)$$

where $dq^e(t)/q^e(t)$ can be interpreted as the instantaneous (stochastic) rate of price change. Rewriting Equation (11) slightly differently yields

$$dq^e(t) = \left(\mu + \frac{1}{2}\sigma^2\right)q^e(t)\,dt + \sigma q^e(t)\,dz(t), \qquad (12)$$

which informs us that the stochastic differential equation governing the stock's price represents an Ito process, since the coefficients of dt and dz(t) are both time dependent. We would also expect that if q^e(t) were governed by

$$dq^e(t) = \mu q^e(t)\,dt + \sigma q^e(t)\,dz(t), \qquad (13)$$

then

$$d\ln q^e(t) = \left(\mu - \frac{1}{2}\sigma^2\right)dt + \sigma\,dz(t). \qquad (14)$$

Equations (13) and (14) are fundamental to what follows.

5 Simulation and Call Pricing

5.1 Ito Processes

Ito processes and their constituents, most especially the Brownian motion, are difficult to grasp at this abstract level, and it will assist our intuition to describe how we might simulate a discrete time approximation to them. Suppose we have estimated µ̂ and σ̂ for a stock's price process as suggested in the webcomplement "Review of Options..".
Recall that these estimates are derived from daily price data, properly scaled up to reflect the fact that in this literature it is customary to measure time in years. We have two potential stochastic differential equations to guide us, Equations (13) and (14), and each has a discrete time approximate counterpart.

(i) Discrete Time Counterpart to Equation (13). If we approximate the stochastic differential dq^e(t) by the change in the stock's price over a short interval of time Δt, we have

$$q^e(t + \Delta t) - q^e(t) = \hat{\mu}q^e(t)\Delta t + \hat{\sigma}q^e(t)\tilde{\varepsilon}(t)\sqrt{\Delta t}, \ \text{or}$$

$$q^e(t + \Delta t) = q^e(t)\left[1 + \hat{\mu}\Delta t + \hat{\sigma}\tilde{\varepsilon}(t)\sqrt{\Delta t}\right]. \qquad (15)$$

There is a problem with this representation, however, because for any q^e(t), the price next "period," q^e(t + Δt), is normally distributed (recall that $\tilde{\varepsilon}(t) \sim N(0,1)$) rather than lognormally, as a correct match to the data requires. In particular, there is the unfortunate possibility that the price could go negative, although for small time intervals Δt this is exceedingly unlikely.

(ii) Discrete Time Counterpart to Equation (14). Approximating d ln q^e(t) by successive log values of the price over small time intervals Δt yields

$$\ln q^e(t + \Delta t) - \ln q^e(t) = \left(\hat{\mu} - \frac{1}{2}\hat{\sigma}^2\right)\Delta t + \hat{\sigma}\tilde{\varepsilon}(t)\sqrt{\Delta t}, \ \text{or}$$

$$\ln q^e(t + \Delta t) = \ln q^e(t) + \left(\hat{\mu} - \frac{1}{2}\hat{\sigma}^2\right)\Delta t + \hat{\sigma}\tilde{\varepsilon}(t)\sqrt{\Delta t}. \qquad (16)$$

Here it is the logarithm of the price in period t + Δt that is normally distributed, as required, and for this reason we'll limit ourselves to (16) and its successors. For simulation purposes, it is convenient to express equation (16) as

$$q^e(t + \Delta t) = q^e(t)\,e^{(\hat{\mu} - \frac{1}{2}\hat{\sigma}^2)\Delta t + \hat{\sigma}\tilde{\varepsilon}(t)\sqrt{\Delta t}}. \qquad (17)$$

It is easy to generate a possible sample path of price realizations for (17). First select an interval of time Δt and the number of successive time periods of interest (this will be the length of the sample path), say N. Using a random number generator, next generate N successive draws from the standard normal distribution.
By construction, these draws are independent, and thus successive rates of return $\frac{q^e(t+\Delta t)}{q^e(t)} - 1$ will be statistically independent of one another. Let this series of N draws be represented by $\{\varepsilon_j\}_{j=1}^N$. The corresponding sample path (or "time series") of prices is then created as per Equation (18):

$$q^e(t_{j+1}) = q^e(t_j)\,e^{(\hat{\mu} - \frac{1}{2}\hat{\sigma}^2)\Delta t + \hat{\sigma}\varepsilon_j\sqrt{\Delta t}}, \qquad (18)$$

where $t_{j+1} = t_j + \Delta t$. This is not the price path that would be used for derivatives pricing, however.

5.2 The binomial model

Under the binomial model, call valuation is undertaken in a context where the probabilities have been changed in such a way that all assets, including the underlying stock, earn the risk free rate. The simulation-based counterpart to this transformation is to replace µ̂ by ln(1 + r_f) in Equations (17) and (18):

$$q^e(t + \Delta t) = q^e(t)\,e^{(\ln(1 + r_f) - \frac{1}{2}\hat{\sigma}^2)\Delta t + \hat{\sigma}\tilde{\varepsilon}(t)\sqrt{\Delta t}}, \qquad (19)$$

where $r_f$ is the one year risk free rate (not continuously compounded) and ln(1 + r_f) is its continuously compounded counterpart.

How would we proceed to price a call in this simulation context? Since the value of the call at expiration is exclusively determined by the value of the underlying asset at that time, we first need a representative number of possible "risk neutral prices" for the underlying asset at expiration. The entire risk neutral sample path, as per Equation (18) with µ̂ replaced by ln(1 + r_f), is not required. By "representative" we mean enough prices so that their collective distribution is approximately lognormal. Suppose it was resolved to create J sample prices at expiration, T years from now (to be even reasonably accurate, J ≥ 1000). Given random draws $\{\varepsilon_k\}_{k=1}^J$ from N(0,1), the corresponding underlying stock price realizations $\{q^e_k(T)\}_{k=1}^J$ are given by

$$q^e_k(T) = q^e(0)\,e^{(\ln(1 + r_f) - \frac{1}{2}\hat{\sigma}^2)T + \hat{\sigma}\varepsilon_k\sqrt{T}}. \qquad (20)$$

For each of these prices, the corresponding call value at expiration is

$$C^T_k = \max\left\{0,\ q^e_k(T) - E\right\}, \quad k = 1, 2, ..., J.$$
The average expected payoff across all these possibilities is

$$C^T_{Avg} = \frac{1}{J}\sum_{k=1}^{J} C^T_k.$$

Since, under risk neutral valuation, the expected payoff of any derivative asset in the span of the underlying stock and a risk free bond is discounted back at the risk free rate, our estimate of the call's value today (when the stock's price is q^e(0)) is

$$C^0 = e^{-\ln(1 + r_f)T}\,C^T_{Avg}. \qquad (21)$$

In the case of an Asian option or some other path dependent option, a large number of sample paths must be generated, since the exercise price of the option (and thus its value at expiration) depends upon the entire sample path of underlying asset prices leading to it.

Monte Carlo simulation, as the above method is called, is not the only pricing technique whose underlying idea is related to the notion of risk neutral valuation. There are ways that stochastic differential equations can be solved directly.

6 Solving Stochastic Differential Equations: A First Approach

Monte Carlo simulation employs the notion of risk neutral valuation, but it does not provide closed form solutions for derivatives prices, such as the Black-Scholes formula in the case of calls.⁴ How are such closed form expressions obtained? In what follows we provide a non-technical outline of the first of two available methods. The context will once again be European call valuation where the underlying stock pays no dividends.

The idea is to obtain a partial differential equation whose solution, given the appropriate boundary condition, is the price of the call. This approach is due to Black and Scholes (1973) and, in a more general context, Merton (1973). The latter author's arguments will guide our discussion here. In the same spirit as the replicating portfolio approach mentioned in Section 4, Merton (1973) noticed that the payoff to a call can be represented in continuous time by a portfolio of the underlying stock and a risk free bond whose quantities are continuously adjusted.
Given the stochastic differential equation which governs the stock's price, (13), and another, non-stochastic, differential equation governing the bond's price evolution, it becomes possible to construct the stochastic differential equation governing the value of the replicating portfolio. This latter transformation is accomplished via an important theorem referred to in the literature as Ito's Lemma. Using results from the stochastic calculus, this expression can be shown to imply that the value of the replicating portfolio must satisfy a particular partial differential equation. Together with the appropriate boundary condition (e.g., that C(T) = max{q^e(T) − E, 0}), this partial differential equation has a known solution: the Black-Scholes formula. In what follows we give a brief overview of this first approach, accomplished in three steps.

6.1 The Behavior of Stochastic Differentials

In order to motivate what follows, we need a better idea of what the object dz(t) means. It is clearly a random variable of some sort. We first explore its moments. Formally, dz(t) is

$$\lim_{\Delta t \to 0}\left[z(t + \Delta t) - z(t)\right], \qquad (22)$$

where we will not attempt to be precise as to how the limit is taken. We are reminded, however, that

$$E[z(t + \Delta t) - z(t)] = 0, \quad \text{and} \quad \operatorname{var}[z(t + \Delta t) - z(t)] = \Delta t, \ \text{for all } \Delta t.$$

It is not entirely surprising, therefore, that

$$E(dz(t)) = \lim_{\Delta t \to 0} E[z(t + \Delta t) - z(t)] = 0, \ \text{and} \qquad (23)$$

$$\operatorname{var}(dz(t)) = \lim_{\Delta t \to 0} E\left[(z(t + \Delta t) - z(t))^2\right] = dt. \qquad (24)$$

⁴ The estimate obtained using Monte Carlo simulation will coincide with the Black-Scholes value to a high degree of precision, however, if the number of simulated underlying stock prices is large (≥ 10,000) and the parameters $r_f$, E, σ, T are identical.

The object dz(t) may thus be viewed as denoting an infinitesimal random variable with zero mean and variance dt (very small, but we are in a world of infinitesimals).
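These limiting moments can be illustrated by simulation (a sketch, not part of the formal argument): draw many increments ε√Δt for a small Δt and check that the increment has mean ≈ 0 and variance ≈ Δt, while its square has mean ≈ Δt and a variance of order (Δt)², that is, (dz)² behaves almost deterministically.

```python
import random
import statistics

random.seed(0)
dt = 1e-4
n = 200_000
# increments eps * sqrt(dt), eps ~ N(0, 1)
dz = [random.gauss(0.0, 1.0) * dt ** 0.5 for _ in range(n)]
dz2 = [x * x for x in dz]

mean_dz = statistics.fmean(dz)        # approx 0
var_dz = statistics.pvariance(dz)     # approx dt
mean_dz2 = statistics.fmean(dz2)      # approx dt
var_dz2 = statistics.pvariance(dz2)   # approx 2*dt**2, tiny relative to dt
```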
There are several other useful relationships:

$$E(dz(t)\,dz(t)) \equiv \operatorname{var}(dz(t)) = dt \qquad (25)$$

$$\operatorname{var}(dz(t)\,dz(t)) = \lim_{\Delta t \to 0}\left[E(z(t + \Delta t) - z(t))^4 - (\Delta t)^2\right] \approx 0 \qquad (26)$$

$$E(dz(t)\,dt) = \lim_{\Delta t \to 0} E\left[(z(t + \Delta t) - z(t))\Delta t\right] = 0 \qquad (27)$$

$$\operatorname{var}(dz(t)\,dt) = \lim_{\Delta t \to 0} E\left[(z(t + \Delta t) - z(t))^2(\Delta t)^2\right] \approx 0 \qquad (28)$$

Equations (26) and (28) imply, respectively, that (25) and (27) are not only satisfied in expectation but with equality. Expression (25) is, in particular, quite surprising, as it argues that the square of a Brownian motion random process is effectively deterministic. These results are frequently summarized as in Table 2, where (dt)² is negligible in the sense that it is very much smaller than dt and we may treat it as zero.

Table 2: The Product of Stochastic Differentials

  ×  | dz | dt
  dz | dt | 0
  dt | 0  | 0

The power of these results is apparent if we explore their implications for the computation of a quantity such as (dx(t))² for a generalized Wiener process dx(t) = µ dt + σ dz(t):

$$(dx(t))^2 = (\mu\,dt + \sigma\,dz(t))^2 = \mu^2(dt)^2 + 2\mu\sigma\,dt\,dz(t) + \sigma^2(dz(t))^2 = \sigma^2\,dt,$$

since, by the results in Table 2, (dt)(dt) = 0 and dt dz(t) = 0. The object dx(t) thus behaves in the manner of a random walk in that its variance is proportional to the length of the time interval. We will use these results in the context of Ito's lemma.

6.2 Ito's Lemma

A statement of this fundamental result is outlined below.

Theorem (Ito's Lemma). Consider an Ito process dx(t) of the form

$$dx(t) = a(x(t), t)\,dt + b(x(t), t)\,dz(t),$$

where dz(t) is a Brownian motion, and consider a process y(t) = F(x(t), t). Under quite general conditions, y(t) satisfies the stochastic differential equation

$$dy(t) = \frac{\partial F}{\partial x}dx(t) + \frac{\partial F}{\partial t}dt + \frac{1}{2}\frac{\partial^2 F}{\partial x^2}(dx(t))^2. \qquad (29)$$

The presence of the rightmost term (which would be absent in a standard differential equation) is due to the unique properties of a stochastic differential equation.
Taking advantage of the results in Table 2, let us specialize Equation (29) to the standard Ito process, where for notational simplicity we suppress the dependence of the coefficients a( ) and b( ) on x(t) and t:

$$dy(t) = \frac{\partial F}{\partial x}\left(a\,dt + b\,dz(t)\right) + \frac{\partial F}{\partial t}dt + \frac{1}{2}\frac{\partial^2 F}{\partial x^2}\left(a^2(dt)^2 + 2ab\,dt\,dz(t) + b^2(dz(t))^2\right).$$

Note that (dt)² = 0, dt dz(t) = 0, and (dz(t))² = dt. Making these substitutions and collecting terms gives

$$dy(t) = \left[\frac{\partial F}{\partial x}a + \frac{\partial F}{\partial t} + \frac{1}{2}\frac{\partial^2 F}{\partial x^2}b^2\right]dt + \frac{\partial F}{\partial x}b\,dz(t). \qquad (30)$$

As a simple application, let us take as given $dq^e(t) = \mu q^e(t)dt + \sigma q^e(t)dz(t)$ and attempt to derive the relationship for d ln q^e(t). Here we have F(q^e(t), t) = ln q^e(t), with $a(q^e(t), t) \equiv \mu q^e(t)$, $b(q^e(t), t) \equiv \sigma q^e(t)$, and

$$\frac{\partial F}{\partial q^e(t)} = \frac{1}{q^e(t)}, \qquad \frac{\partial^2 F}{\partial q^e(t)^2} = -\frac{1}{q^e(t)^2}.$$

Lastly, $\frac{\partial F}{\partial t} \equiv 0$. Substituting these results into Equation (30) yields

$$d\ln q^e(t) = \left[\frac{1}{q^e(t)}\mu q^e(t) + 0 + \frac{1}{2}\left(-\frac{1}{q^e(t)^2}\right)(\sigma q^e(t))^2\right]dt + \frac{1}{q^e(t)}\sigma q^e(t)\,dz(t) = \left(\mu - \frac{1}{2}\sigma^2\right)dt + \sigma\,dz(t),$$

as was observed earlier. This is the background we need.

6.3 The Black-Scholes Formula

Merton (1973) requires four assumptions:

1. There are no market imperfections: no transactions costs, taxes, short sales constraints, or any other impediment to the continuous trading of securities (perfect competition).

2. There is unlimited riskless borrowing and lending at the constant risk free rate. If q^b(t) is the period t price of a discount bond, then q^b(t) is governed by the differential equation

$$dq^b(t) = r_f q^b(t)\,dt, \quad \text{or} \quad q^b(t) = q^b(0)e^{r_f t};$$
3. The underlying stock's price dynamics are given by a geometric Brownian motion of the form

$$dq^e(t) = \mu q^e(t)\,dt + \sigma q^e(t)\,dz(t), \quad q^e(0) > 0;$$

4. There are no arbitrage opportunities across the financial markets in which the call, the underlying stock, and the discount bond are traded.

Attention is restricted to call pricing formulae which are functions only of the stock's current price and the time (so, e.g., the possibility of past stock price dependence is ignored); that is, C = C(q^e(t), t). By a straightforward application of Ito's lemma, the call's price dynamics must be given by

$$dC = \left[\mu q^e(t)\frac{\partial C}{\partial q^e(t)} + \frac{\partial C}{\partial t} + \frac{\sigma^2 q^e(t)^2}{2}\frac{\partial^2 C}{\partial q^e(t)^2}\right]dt + \sigma q^e(t)\frac{\partial C}{\partial q^e(t)}\,dz(t),$$

which is of limited help since the form of C(q^e(t), t) is precisely what is not known. The partials of C(q^e(t), t) with respect to q^e(t) and t must somehow be circumvented. Following the replicating portfolio approach, Merton (1973) defines the value of the call in terms of a self-financing, continuously adjusted portfolio composed of Δ(q^e(t), t) shares and N(q^e(t), t) risk free discount bonds:

$$V(q^e(t), t) = \Delta(q^e(t), t)\,q^e(t) + N(q^e(t), t)\,q^b(t). \qquad (31)$$

By a straightforward application of Ito's lemma, the value of the portfolio must evolve according to (suppressing functional dependence in order to reduce the burdensome notation)

$$dV = \Delta\,dq^e + N\,dq^b + q^e\,d\Delta + q^b\,dN + (d\Delta)(dq^e). \qquad (32)$$

Since V( ) is assumed to be self-financing, any change in its value can only be due to changes in the values of the constituent assets and not in the numbers of them. Thus it must be that

$$dV = \Delta\,dq^e + N\,dq^b, \qquad (33)$$

which implies that the remaining terms in Equation (32) are identically zero:

$$q^e\,d\Delta + q^b\,dN + (d\Delta)(dq^e) \equiv 0. \qquad (34)$$

But both Δ( ) and N( ) are functions of q^e(t) and t, and thus Ito's lemma can be applied to represent their evolution in terms of dz(t) and dt. Using the relationships of Table 2, and collecting terms, the terms preceding dz(t) and those preceding dt must individually be zero.
Together these relationships imply that the value of the portfolio must satisfy the following partial differential equation:

$$\frac{1}{2}\sigma^2 (q^e)^2 V_{q^e q^e} + r_f\,q^e V_{q^e} + V_t = r_f V, \qquad (35)$$

which has as its solution the Black-Scholes formula when coupled with the terminal condition V(q^e(T), T) = max[0, q^e(T) − E].

7 A Second Approach: Martingale Methods

This method originated in the work of Harrison and Kreps (1979). It is popular as a methodology because it frequently allows for simpler computations than the PDE approach. The underlying mathematics, however, is very complex and beyond the scope of this book. In order to convey a sense of what is going on, we present a brief heuristic argument that relies on the binomial abstraction.

Recall that in the binomial model we undertook our pricing in a tree context where the underlying asset's price process had been modified. In particular, the true probabilities of the "up" and "down" states were replaced by the corresponding risk neutral probabilities. All assets (including the underlying stock) displayed an expected return equal to the risk free rate in the transformed setting. Under geometric Brownian motion, the underlying price process is represented by an Ito stochastic differential equation of the form

$$dq^e(t) = \mu q^e(t)\,dt + \sigma q^e(t)\,dz(t). \qquad (36)$$

In order to transform this price process into a risk neutral setting, two changes must be made.

1. The expression µ defines the mean, and it must be replaced by $r_f$; only with this substitution will the mean return on the underlying stock become $r_f$. Note that $r_f$ here denotes the corresponding continuously compounded risk free rate.

2. The standard Brownian motion process must be modified. In particular, we replace dz by dz*, where the two processes are related via the transformation

$$dz^*(t) = dz(t) + \frac{\mu - r_f}{\sigma}\,dt.$$

The transformed price process is thus

$$dq^e(t) = r_f\,q^e(t)\,dt + \sigma q^e(t)\,dz^*(t). \qquad (37)$$
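This risk neutral transformation can be illustrated by simulation (a sketch): draw terminal prices under the transformed process, average the discounted payoffs as in Section 5.2, and compare with the closed-form Black-Scholes value. The inputs below reuse the earlier worked example (S = $68, E = $60, σ = .40, T = 88/365, continuously compounded rate ln(1.06)); the tolerance in the comparison reflects Monte Carlo noise.

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

S0, E, sigma, T = 68.0, 60.0, 0.40, 88 / 365
r = math.log(1.06)            # continuously compounded risk-free rate

# closed-form Black-Scholes value
d1 = (math.log(S0 / E) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs = S0 * norm_cdf(d1) - E * math.exp(-r * T) * norm_cdf(d2)

# Monte Carlo estimate under the risk neutral (transformed) process
random.seed(1)
J = 200_000
total = 0.0
for _ in range(J):
    eps = random.gauss(0.0, 1.0)
    qT = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * eps * math.sqrt(T))
    total += max(0.0, qT - E)            # call payoff at expiration
mc = math.exp(-r * T) * total / J        # discounted average payoff
```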
By Equation (14) the corresponding process on ln q^e(t) is

 d ln q^e(t) = (r_f − σ²/2) dt + σ dz*(t). (38)

Let T denote the expiration date of a simple European call option. In the same spirit as the binomial model, the price of a call must be the present value of its expected payoff at expiration under the transformed process. Equation (38) informs us that in the transformed economy,

 ln [q^e(T)/q^e(0)] ~ N((r_f − σ²/2) T, σ² T). (39)

Since, in the transformed economy, prob(q^e(T) ≥ E) = prob(ln q^e(T) ≥ ln E), we can compute the call's value using the probability density implied by Equation (39):

 C = e^{−r_f T} ∫_{ln E}^{∞} (e^s − E) f(s) ds,

where f(s) is the probability density on the ln of the stock's price. Making the appropriate substitutions yields

 C = e^{−r_f T} [1/√(2πσ²T)] ∫_{ln E}^{∞} (e^s − E) exp(−[s − ln q^e(0) − (r_f − σ²/2)T]² / 2σ²T) ds, (40)

which, when the integration is performed, yields the Black-Scholes formula.

8 Applications

We make reference to a number of applications that have been considered earlier in the text.

8.1 The Consumption-Savings Problem

This is a classic economic problem and we considered it fairly thoroughly in Chapter 5. Without the requisite math background there is not a lot more we can say about the continuous-time analogue than to set up the problem, but even that first step will be helpful. Suppose the risky portfolio ("M") is governed by the following price process

 dq^M(t) = q^M(t)[μ_M dt + σ_M dz(t)], q^M(0) given,

and the risk-free asset by

 dq^b(t) = r_f q^b(t) dt, q^b(0) given.

If an investor has initial wealth Y(0), and chooses to invest the proportion w(t) (possibly continuously varying) in the risky portfolio, then his wealth Y(t) will evolve according to

 dY(t) = Y(t)[w(t)(μ_M − r_f) + r_f] dt + Y(t) w(t) σ_M dz(t) − c(t) dt, (41)

where c(t) is his consumption path.
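Equation (41) is easy to simulate. With a constant risky share w and consumption proportional to wealth, c(t) = κY(t) (both policy choices are assumptions made purely for illustration), wealth follows a geometric Brownian motion with drift w(μ_M − r_f) + r_f − κ, so the expected log wealth level is known in closed form and can be checked by Monte Carlo:

```python
import math, random

# Euler simulation of dY = Y[w(mu_M - rf) + rf] dt + Y w sigma_M dz - c dt
# with the assumed policies w constant and c(t) = kappa * Y(t).
random.seed(7)
mu_M, sigma_M, rf = 0.09, 0.18, 0.03      # illustrative market parameters
w, kappa, Y0 = 0.6, 0.04, 100.0           # illustrative policy and initial wealth
T, steps, paths = 1.0, 250, 3000
dt = T / steps
drift = w * (mu_M - rf) + rf - kappa      # net drift of wealth
mean_logY = 0.0
for _ in range(paths):
    Y = Y0
    for _ in range(steps):
        dz = math.sqrt(dt) * random.gauss(0.0, 1.0)
        Y += Y * (drift * dt + w * sigma_M * dz)
    mean_logY += math.log(Y)
mean_logY /= paths
theory = math.log(Y0) + (drift - 0.5 * (w * sigma_M)**2) * T
print(round(theory, 4), round(mean_logY, 4))  # the two values should nearly agree
```

The σ²/2 correction in `theory` is the usual Ito adjustment between the drift of Y and the drift of ln Y.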
With objective function

 max_{c(t), w(t)} E ∫_0^T e^{−γt} U(c(t)) dt, (42)

the investor's problem is one of maximizing Equation (42) subject to Equation (41), initial conditions on wealth, and the constraint that Y(t) ≥ 0 for all t. A classic result allows us to transform this problem into one that, it turns out, can be solved much more easily:

 max_{c(t), w(t)} E ∫_0^T e^{−γt} U(c(t)) dt (43)
 s.t. PV_0(c(t)) = E* ∫_0^T e^{−r_f t} c(t) dt ≤ Y(0),

where E* denotes expectation under the transformed risk-neutral measure, under which the investor's wealth follows a Brownian motion process. In what we have presented above, all the notation is directly analogous to that of Chapter 5: U(·) is the investor's utility of (instantaneous) consumption, γ his (instantaneous) subjective discount rate, and T his time horizon.

8.2 An Application to Portfolio Analysis

Here we hope to give a hint of how to extend the portfolio analysis of Chapters 5 or 6 to a setting where trading is (hypothetically) continuous and individual security returns follow geometric Brownian motions. Let there be i = 1, 2, ..., N equity securities, each of whose return is governed by the process

 dq_i^e(t)/q_i^e(t) = μ_i dt + σ_i dz_i(t), (44)

where σ_i > 0. These processes may also be correlated with one another in a manner that we can represent precisely. Conducting a portfolio analysis in this setting has two principal advantages. First, it provides new insights concerning the implications of diversification for long-run portfolio returns; second, it allows for an easier solution to certain classes of problems. We will note these advantages with the implicit understanding that the derived portfolio rules must be viewed only as guides for practical applications. Literally interpreted, they imply, for example, continuous portfolio rebalancing, which would entail an unbounded total expense if each rebalancing carries a positive cost; that is absurd. In practice one would rather apply them weekly or perhaps daily.
The stated objective will be to maximize the expected rate of appreciation of a portfolio's value; equivalently, to maximize its expected terminal value, which is the terminal wealth of the investor who owns it. Most portfolio managers would be familiar with this goal. To get an idea of what this simplest criterion implies, and to make it more plausible in our setting, we first consider the discrete-time equivalent (and, by implication, the discrete-time approximation to GBM).

8.3 Digression to Discrete Time

Suppose a CRRA investor has initial wealth Y(0) at time t = 0 and is considering investing in any or all of a set of non-dividend-paying stocks whose returns are iid. Since the rate of expected appreciation of the portfolio is its expected rate of return, and since the return distributions of the available assets are iid, the investor's optimal portfolio proportions will be invariant to the level of his wealth, and the distribution of his portfolio's returns will itself be iid. At the conclusion of his planning horizon, T periods from the present, the investor's wealth will be

 Y_T = Y_0 ∏_{s=1}^{T} R̃_s^P, (45)

where R̃_s^P denotes the (iid) gross portfolio return in period s. It follows that

 ln (Y_T/Y_0) = Σ_{s=1}^{T} ln R̃_s^P, and (1/T) ln (Y_T/Y_0) = (1/T) Σ_{s=1}^{T} ln R̃_s^P. (46)

Note that whenever we introduce the ln we effectively assume continuous compounding within the time period. As the number of periods in the time horizon grows without bound, T → ∞, by the Law of Large Numbers,

 (Y_T/Y_0)^{1/T} → e^{E ln R̃^P}, (47)
 or Y_T → Y_0 e^{T E ln R̃^P}. (48)

Consider an investor with a many-period time horizon who wishes to maximize her expected terminal wealth under continuous compounding.
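The limit in Equations (47)-(48) is easy to see by simulation. The two-point return distribution below is an illustrative assumption; any iid gross-return distribution would do:

```python
import math, random

# Simulate iid gross returns R in {1.3, 0.85}, each with probability 1/2, and
# check that (Y_T / Y_0)^(1/T) approaches exp(E ln R), as in Equations (47)-(48).
random.seed(42)
E_lnR = 0.5 * math.log(1.3) + 0.5 * math.log(0.85)
T = 200_000
log_growth = sum(math.log(random.choice((1.3, 0.85))) for _ in range(T)) / T
print(math.exp(E_lnR), math.exp(log_growth))  # the two numbers nearly agree
```

The simulated per-period growth factor converges to e^{E ln R̃}, not to the arithmetic mean return E R̃ = 1.075; the gap is the usual variance drag on compound growth.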
The relationship in Equation (48) informs her that (1) it is sufficient, under the aforementioned assumptions, for her to choose portfolio proportions which maximize E ln R̃^P, the expected logarithm of the one-period return, and (2) by doing so the average growth rate of her wealth will approach a deterministic limit. Before returning to the continuous-time setting, let us also entertain a brief classic example, one in which an investor must decide what fractions of his wealth to assign to a highly risky stock and to a risk-free asset (actually, the risk-free asset is equivalent to keeping money in a shoebox under the bed). For an amount Y_0 invested in either asset, the respective returns are found in Figure 4.

Figure 4: Two Alternative Investment Returns. The stock returns 2Y_0 with probability 1/2 and (1/2)Y_0 with probability 1/2; the shoebox returns Y_0 with probability 1.

Let w represent the proportion in the stock, and notice that the expected return to either asset under continuous compounding is zero:

 Stock: E r^e = (1/2) ln(2) + (1/2) ln(1/2) = 0
 Shoebox: E r^{sb} = (1/2) ln(1) + (1/2) ln(1) = 0.

With each asset paying the same expected return, and the stock being wildly risky, at first appearance the shoebox would seem to be the investment of choice. But according to Equation (48) the investor ought to allocate his wealth among the two assets so as to maximize the expected ln of the portfolio's one-period gross return:

 max_w E ln R̃^P = max_w (1/2) ln(2w + (1 − w)) + (1/2) ln((1/2)w + (1 − w)).

A straightforward application of the calculus yields w = 1/2, with the consequent portfolio returns in each state as shown in Figure 5.

Figure 5: Optimal Portfolio Returns in Each State. ln(1 + w) = ln(1.5) = 0.4055 with probability 1/2; ln(1 − w/2) = ln(0.75) = −0.2877 with probability 1/2.

As a result, E ln R̃^P = 0.0589, an effectively riskless period return (over a very long time horizon) of about 6 percent (e^{0.0589} = 1.061). This result is surprising and the intuition is not obvious.
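A grid search over the objective (1/2)ln(2w + (1 − w)) + (1/2)ln(w/2 + (1 − w)) confirms the optimizer numerically; the first-order condition 1/(2(1 + w)) = 1/(4(1 − w/2)) gives w* = 1/2 exactly:

```python
import math

# Expected log portfolio return for stock fraction w:
# the stock doubles or halves with probability 1/2; the shoebox pays 1.
def expected_log_return(w):
    return 0.5 * math.log(1.0 + w) + 0.5 * math.log(1.0 - 0.5 * w)

# Grid search over w in [0, 1)
best_w = max((i / 10000 for i in range(10000)), key=expected_log_return)
print(round(best_w, 3), round(expected_log_return(best_w), 4))  # → 0.5 0.0589
```

Any w strictly between 0 and 1 yields a positive growth rate here, but w = 1/2 maximizes it; this is the log-optimal (Kelly) fraction for this gamble.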
Briefly, the optimal proportions of w = 1/2 and 1 − w = 1/2 reflect the fact that by always keeping a fixed fraction of wealth in the risk-free asset, the worst trajectories can be avoided. By "frequent" trading, although each asset has an expected (log) return of zero, a combination will yield an expected return that is strictly positive and, over a long time horizon, effectively riskless. Frequent trading expands market opportunities.

8.4

The previous setup applies directly to a continuous-time setting, as all of the fundamental assumptions are satisfied. In particular, there are a very large number of periods (an uncountable number, in fact) and the returns to the various securities are iid through time. Let us make the added generalization that the individual asset returns are correlated through their Brownian motion components; in particular, we may write

 cov(σ_i dz_i(t), σ_j dz_j(t)) = σ_i σ_j E(dz_i(t) dz_j(t)) = σ_ij dt,

where σ_ij denotes the (i, j) entry of the (instantaneous) variance-covariance matrix. As has been our custom, denote the portfolio's proportions in the N assets by w_1, ..., w_N and let the superscript P denote the portfolio itself. As in earlier chapters, the process on the portfolio's instantaneous rate of return, dY^P(t)/Y^P(t), is the weighted average of the instantaneous constituent asset returns (as given in Equation (44)):

 dY^P(t)/Y^P(t) = Σ_{i=1}^{N} w_i dq_i^e(t)/q_i^e(t)
  = Σ_{i=1}^{N} w_i (μ_i dt + σ_i dz_i(t))
  = (Σ_{i=1}^{N} w_i μ_i) dt + Σ_{i=1}^{N} w_i σ_i dz_i(t), (49)

where the variance of the stochastic term is given by

 E[(Σ_{i=1}^{N} w_i σ_i dz_i(t))²] = E[(Σ_i w_i σ_i dz_i(t))(Σ_j w_j σ_j dz_j(t))] = (Σ_{i=1}^{N} Σ_{j=1}^{N} w_i w_j σ_ij) dt.

Equation (49) describes the process on the portfolio's rate of return, and we see that it implies that the portfolio's value at any future time horizon T will be lognormally distributed; furthermore, an uncountable infinity of periods will have passed.
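The growth-rate algebra developed next (Equations (50)-(52)) implies that an equally weighted, continuously rebalanced portfolio of N independent GBMs with common μ and σ grows at rate μ − σ²/2N. A small Monte Carlo sketch of that claim, with illustrative parameter values and a fine rebalancing grid standing in for continuous rebalancing:

```python
import math, random

# Check: the equally weighted, frequently rebalanced portfolio of N independent
# GBMs with identical mu, sigma has growth rate close to mu - sigma^2 / (2N).
random.seed(1)
mu, sigma, N = 0.08, 0.30, 10
T, steps, paths = 1.0, 250, 1500
dt = T / steps
growth = 0.0
for _ in range(paths):
    logY = 0.0
    for _ in range(steps):
        # Portfolio return over dt: equally weighted average of the N asset returns
        dR = sum(mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
                 for _ in range(N)) / N
        logY += math.log(1.0 + dR)
    growth += logY / T
growth /= paths
print(round(mu - sigma**2 / (2 * N), 4), round(growth, 4))  # theory vs simulation
```

With N = 10 the theoretical rate is 0.08 − 0.09/20 = 0.0755, noticeably above the single-asset growth rate μ − σ²/2 = 0.035; that gap is the diversification effect the text is about to derive.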
By analogy (and formally), our discrete-time reflections suggest that an investor should in this context also choose portfolio proportions so as to maximize the mean growth rate ν_P of the portfolio, as given by

 E ln [Y^P(T)/Y^P(0)] = ν_P T.

Since the portfolio's value itself follows a geometric Brownian motion (with drift Σ_i w_i μ_i and disturbance Σ_i w_i σ_i dz_i),

 E ln [Y^P(T)/Y^P(0)] = (Σ_{i=1}^{N} w_i μ_i) T − (1/2)(Σ_{i=1}^{N} Σ_{j=1}^{N} w_i w_j σ_ij) T, (50)

and thus

 ν_P = (1/T) E ln [Y^P(T)/Y^P(0)] = Σ_{i=1}^{N} w_i μ_i − (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} w_i w_j σ_ij. (51)

The investor should choose portfolio proportions to maximize this latter quantity. Without belaboring this development much further, it behooves us to recognize the message implicit in (51). This can be accomplished most straightforwardly in the context of an equally weighted portfolio where each of the N assets is distributed independently of the others (σ_ij = 0 for i ≠ j), and all have the same mean and variance ((μ_i, σ_i) = (μ, σ), i = 1, 2, ..., N). In this case (51) reduces to

 ν_P = μ − σ²/2N, (52)

with the direct implication that the more such identical stocks the investor adds to the portfolio, the greater the mean instantaneous growth rate. In this sense it is useful to search for many similarly volatile stocks whose returns are independent of one another: by combining them in a portfolio which we continually (frequently) rebalance to maintain equal proportions, not only does the growth penalty from variance decline (to σ²/2N), as in the discrete-time case, but the mean growth rate rises (which is not the case in discrete time!).

8.5 The Consumption CAPM in Continuous Time

Our final application concerns the consumption CAPM of Chapter 9, and the question we address is this: What is the equilibrium asset price behavior in a Mehra-Prescott asset-pricing context when the growth rate of consumption follows a GBM?
Specializing preferences to the customary form U(c) = c^{1−γ}/(1 − γ), pricing relationship (9.4) reduces to

 P_t = E_t [Y_t Σ_{j=1}^{∞} β^j x_{t+j}^{1−γ}] = Y_t Σ_{j=1}^{∞} β^j E_t[x_{t+j}^{1−γ}],

where x_{t+j} is the growth rate of output (equivalently, consumption in the Mehra-Prescott economy) between period t + j − 1 and period t + j. We hypothesize that the growth rate x follows a GBM of the form

 dx = μx dt + σx dz,

where we interpret x_{t+j} as the discrete-time realization of x(t) at time t + j. One result from statistics is needed. Suppose w̃ is lognormally distributed, which we write w̃ ~ L(ξ, η), where ξ = E ln w̃ and η² = var ln w̃. Then for any real number q,

 E[w̃^q] = e^{qξ + (1/2)q²η²}.

By the process on the growth rate just assumed, x(t) ~ L((μ − σ²/2)t, σ√t), so that at time t + j, x_{t+j} ~ L((μ − σ²/2)j, σ√j). By this result,

 E[x_{t+j}^{1−γ}] = e^{(1−γ)(μ − σ²/2)j + (1/2)(1−γ)²σ²j} = e^{(1−γ)(μ − γσ²/2)j},

and thus

 P_t = Y_t Σ_{j=1}^{∞} β^j e^{(1−γ)(μ − γσ²/2)j} = Y_t Σ_{j=1}^{∞} [β e^{(1−γ)(μ − γσ²/2)}]^j,

which is well defined (the sum has a finite value) if β e^{(1−γ)(μ − γσ²/2)} < 1, which we will assume to be the case. Then

 P_t = Y_t β e^{(1−γ)(μ − γσ²/2)} / [1 − β e^{(1−γ)(μ − γσ²/2)}].

This is an illustration of the fact that working in continuous time often allows convenient closed-form solutions. Our remarks are taken from Mehra and Sah (2001).

9

There is much more to be said. There are many more extensions of the CCAPM-style models to a continuous-time setting. Another issue is the sense in which a continuous-time price process (e.g., Equation (13)) can be viewed as an equilibrium price process, in the sense of that concept as presented in this book. This remains a focus of research. Continuous time is clearly different from discrete time, but does its use (as a derivatives pricing tool) enrich our economic understanding of the larger financial and macroeconomic reality? That is less clear.
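The closed-form price-dividend ratio can be checked against a partial sum of the geometric series. The parameter values below (β, γ, μ, σ) are illustrative assumptions in the Mehra-Prescott spirit, not taken from the text:

```python
import math

# Price/dividend ratio P_t / Y_t = a / (1 - a), where
# a = beta * exp((1 - gamma) * (mu - gamma * sigma^2 / 2)),
# verified against a direct partial sum of the series sum_{j>=1} a^j.
beta, gamma, mu, sigma = 0.96, 2.0, 0.018, 0.036   # assumed parameter values
a = beta * math.exp((1.0 - gamma) * (mu - 0.5 * gamma * sigma**2))
closed_form = a / (1.0 - a)
partial_sum = sum(a**j for j in range(1, 2000))
print(round(closed_form, 4), round(partial_sum, 4))  # the two values coincide
```

Note that the convergence condition a < 1 holds for these parameters; if it failed, the price of the claim to the output stream would be unbounded.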
Intermediate Financial Theory
Danthine and Donaldson
Solutions to Exercises

Chapter 1

1.1. U is a utility function, i.e., U(x) > U(y) ⇔ x ≻ y. f(·) is an increasing monotone transformation: f(a) > f(b) ⇔ a > b; then f(U(x)) > f(U(y)) ⇔ U(x) > U(y) ⇔ x ≻ y.

For a utility function U(c_1, c_2), the FOC of the consumer's problem is U_1/U_2 = p_1/p_2. Let f = f(U(·)) be a monotone transformation and apply the chain rule for derivatives: the FOC becomes f'U_1/(f'U_2) = U_1/U_2 = p_1/p_2 (the prime denotes differentiation). Economic interpretation: f(U(·)) and U(·) represent the same preferences, so they must lead to the same choices.

1.2. When an agent has very little of one given good, he is willing to give up a large quantity of another good to obtain a bit more of the first. The MRS is constant when the utility function is linear additive (that is, the indifference curves are also linear):

 U(c_1, c_2) = αc_1 + βc_2, MRS = α/β.

This case is not very interesting; for example, the optimal choice over two goods is always to consume one good only (if the slope of the budget line is different from the MRS) or an indeterminate combination of the two goods (if the slopes are equal).

1.3. Convex preferences can exhibit indifference curves with flat spots; strictly convex preferences cannot. The utility function here is not strictly quasi-concave. Pareto set: two cases.
• The agents' indifference curves have the same slope: the Pareto set is the entire box.
• The indifference curves do not have the same slope: the Pareto set is the lower side and the right side of the box, or the upper side and the left side, depending on which MRS is higher.

1.4. a. U_1 = 6^{0.5} · 4^{0.5} = 4.90; U_2 = 14^{0.5} · 16^{0.5} = 14.97.

 MRS^j = (∂U^j/∂c_1^j)/(∂U^j/∂c_2^j) = αc_2^j / ((1 − α)c_1^j); with α = 0.5, MRS^j = c_2^j/c_1^j.
 MRS^1 = 4/6 = 0.67; MRS^2 = 16/14 = 1.14.

MRS^1 ≠ MRS^2, so the endowment allocation is not Pareto optimal; it is possible to reallocate the goods and make one agent (at least) better off without hurting the other.

b.
PS = {(c_1^j, c_2^j), j = 1, 2 : c_1^j = c_2^j, with c_1^1 + c_1^2 = 20 and c_2^1 + c_2^2 = 20}; the Pareto set is a straight line (the diagonal from the lower-left to the upper-right corner).

c. The problem of agent j is

 max U^j s.t. p_1 e_1^j + e_2^j = p_1 c_1^j + c_2^j.

The Lagrangian and the FOCs are given by

 L^j = (c_1^j)^{1/2}(c_2^j)^{1/2} + y(p_1 e_1^j + e_2^j − p_1 c_1^j − c_2^j),
 ∂L^j/∂c_1^j = (1/2)(c_2^j/c_1^j)^{1/2} − y p_1 = 0,
 ∂L^j/∂c_2^j = (1/2)(c_1^j/c_2^j)^{1/2} − y = 0,
 ∂L^j/∂y = p_1 e_1^j + e_2^j − p_1 c_1^j − c_2^j = 0.

Rearranging the FOCs leads to p_1 = c_2^j/c_1^j. Inserting this ratio into the budget constraint of agent 1 gives p_1 · 6 + 4 − 2p_1 c_1^1 = 0, and after rearranging we get c_1^1 = 3 + 2/p_1. This expression can be interpreted as a demand function. The remaining demand functions can be obtained using the same steps:

 c_2^1 = 3p_1 + 2, c_1^2 = 7 + 8/p_1, c_2^2 = 7p_1 + 8.

To determine the market equilibrium, we use the market-clearing conditions c_1^1 + c_1^2 = 20, c_2^1 + c_2^2 = 20. Finally we find p_1 = 1 and c_1^1 = c_2^1 = 5, c_1^2 = c_2^2 = 15. The after-trade MRS and utility levels are:

 U_1 = 5^{0.5} 5^{0.5} = 5; U_2 = 15^{0.5} 15^{0.5} = 15;
 MRS^1 = 5/5 = 1; MRS^2 = 15/15 = 1.

Both agents have increased their utility level and their after-trade MRSs are equalized.

d. U^j(c_1^j, c_2^j) = ln[(c_1^j)^α (c_2^j)^{1−α}] = α ln c_1^j + (1 − α) ln c_2^j,

 MRS^j = (∂U^j/∂c_1^j)/(∂U^j/∂c_2^j) = αc_2^j / ((1 − α)c_1^j).

Same condition as that obtained in a). This is no surprise since the new utility function is a monotone transformation (the logarithm) of the utility function used originally.

 U_1 = ln(6^{0.5} 4^{0.5}) = 1.59; U_2 = ln(14^{0.5} 16^{0.5}) = 2.71.

The MRSs are identical to those obtained in a), but the utility levels are not. The agents will make the same maximizing choice with both utility functions; the utility level has no real meaning beyond the statement that, for a given individual, a higher utility level is better.

e.
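The equilibrium computation in part c can be verified numerically by solving the market-clearing condition with bisection, using the demand functions derived above:

```python
# Solve the exchange equilibrium of exercise 1.4 numerically.
# Endowments: agent 1 has (6, 4), agent 2 has (14, 16); U^j = (c1 c2)^(1/2).
# Demands derived in the text (good 2 is the numeraire):
#   c1^1 = 3 + 2/p1 and c1^2 = 7 + 8/p1.
def excess_demand_good1(p1):
    return (3 + 2 / p1) + (7 + 8 / p1) - 20

# Bisection on the market-clearing condition for good 1
lo, hi = 0.1, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if excess_demand_good1(mid) > 0:
        lo = mid   # price too low: excess demand, raise it
    else:
        hi = mid
p1 = 0.5 * (lo + hi)
c11, c12 = 3 + 2 / p1, 7 + 8 / p1
print(round(p1, 6), round(c11, 4), round(c12, 4))  # → 1.0 5.0 15.0
```

Market clearing in good 1 implies clearing in good 2 by Walras' law, so one equation suffices.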
Since the maximizing conditions are the same as those obtained in a)-c) and the budget constraints are not altered, we know that the equilibrium allocations will be the same too (as is the price ratio). The after-trade MRS and utility levels are:

 U_1 = ln(5^{0.5} 5^{0.5}) = 1.61; U_2 = ln(15^{0.5} 15^{0.5}) = 2.71;
 MRS^1 = 5/5 = 1; MRS^2 = 15/15 = 1.

1.5. Recall that in equilibrium there should be no excess demand or excess supply for any good in the economy. If there is, then prices change accordingly to restore the equilibrium. The figure shows excess demand for good 2 and excess supply for good 1, a situation which requires p_2 to increase and p_1 to decrease to restore market clearing. This means that p_1/p_2 should decrease and the budget line should rotate counter-clockwise.

Chapter 3

3.1. Mathematical interpretation: we can use Jensen's inequality, which states that if f(·) is concave, then E(f(X)) ≤ f(E(X)). Indeed, E(f(X)) = f(E(X)) for all X exactly when f'' = 0. As a result, when f(·) is not linear, the ranking of lotteries under the expected utility criterion may be altered. Economic interpretation: under uncertainty, the important quantities are the risk-aversion coefficients, which depend on the first- and second-order derivatives. If we apply a non-linear transformation, these quantities are altered; indeed, R_A(f(U(·))) = R_A(U(·)) ⇔ f is linear.

a. L = (B, M, 0.50): EU(L) = 0.50 U(B) + 0.50 U(M) = 55 > U(P) = 50. Lottery L is preferred to the "sure lottery" P.

b. f(U(X)) = a + b U(X): EU_f(L) = 0.50(a + bU(B)) + 0.50(a + bU(M)) = a + 55b > f(U(P)) = a + 50b. Again, L is preferred to P under the transformation f.

g(U(X)) = ln U(X): EU_g(L) = 0.50 ln U(100) + 0.50 ln U(10) = 3.46 < g(U(P)) = ln U(50) = 3.91. P is preferred to L under the transformation g.

3.2. Lotteries: We show that (x, z, π) = (x, y, π + (1 − π)τ) if z = (x, y, τ). In the compound lottery, x is obtained directly with probability π; with probability 1 − π the lottery z is obtained, which in turn delivers x with probability τ and y with probability 1 − τ. Hence x is received with probability π (directly) or (1 − π)τ (through z), and y with
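The rankings in 3.1.a-b can be reproduced numerically. The figures quoted in the text are consistent with U(Y) = Y and outcomes B = 100, M = 10, P = 50; these identifications are an assumption made for illustration:

```python
import math

# Assumed utility and outcomes, consistent with EU(L) = 55, U(P) = 50,
# E ln U over the lottery = 3.46, and ln U(P) = 3.91 in the text.
U = lambda y: float(y)
B, M, P = 100, 10, 50

EU_L = 0.5 * U(B) + 0.5 * U(M)           # 55
f = lambda u: 2.0 + 3.0 * u              # any increasing *linear* transform (a, b assumed)
EfU_L = 0.5 * f(U(B)) + 0.5 * f(U(M))
Eg_L = 0.5 * math.log(U(B)) + 0.5 * math.log(U(M))  # concave transform g = ln

# Linear transform preserves the ranking; the concave transform reverses it here.
print(EU_L > U(P), EfU_L > f(U(P)), Eg_L > math.log(U(P)))  # → True True False
```

This is exactly Jensen's inequality at work: the strictly concave g shifts the comparison against the riskier lottery.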
probability (1 − π)(1 − τ). The total probabilities of the possible final outcomes are

 π(x) = π + (1 − π)τ, π(y) = (1 − π)(1 − τ).

Of course, π(x) + π(y) = π + (1 − π)τ + (1 − π)(1 − τ) = 1. Hence we obtain the lottery (x, y, π + (1 − π)τ).

Could the two lotteries (x, z, π) and (x, y, π + (1 − π)τ), with z = (x, y, τ), be viewed as non-equivalent? Yes, in a non-expected-utility world where there is a preference for gambling. Yes, also, in a world where non-rational agents might be confused by the different contexts in which they are asked to make choices. While the situation represented by the two lotteries is too simple to make this plausible here, the behavioral finance literature building on the work of Kahneman and Tversky (see references in the text) points out that in more realistic experimental situations similar 'confusions' are frequent.

3.3. U is concave. By definition, for a concave function f(·),

 f(λa + (1 − λ)b) ≥ λf(a) + (1 − λ)f(b), λ ∈ [0, 1].

Use the definition with f = U, a = c_1, b = c_2, λ = 1/2:

 U((1/2)c_1 + (1/2)c_2) ≥ (1/2)U(c_1) + (1/2)U(c_2)
 U(c̄) ≥ (1/2)U(c_1) + (1/2)U(c_2)
 2U(c̄) ≥ U(c_1) + U(c_2)
 V(c̄, c̄) ≥ V(c_1, c_2).

Chapter 4

4.1. Risk aversion (answers to a), b), c), and d) are given together here):

(1) U(Y) = −1/Y:
 U'(Y) = 1/Y² > 0, U''(Y) = −2/Y³ < 0;
 R_A = 2/Y, R_R = 2;
 ∂R_A/∂Y = −2/Y² < 0, ∂R_R/∂Y = 0.

(2) U(Y) = −Y^{−γ}:
 U'(Y) = γY^{−γ−1} > 0 ⇔ γ > 0, U''(Y) = −γ(γ + 1)Y^{−γ−2} < 0;
 R_A = (γ + 1)/Y, R_R = γ + 1;
 ∂R_A/∂Y = −(γ + 1)/Y² < 0, ∂R_R/∂Y = 0.

(3) U(Y) = −exp(−γY):
 U'(Y) = γ exp(−γY) > 0 ⇔ γ > 0, U''(Y) = −γ² exp(−γY) < 0;
 R_A = γ, R_R = γY;
 ∂R_A/∂Y = 0, ∂R_R/∂Y = γ > 0.

(4) U(Y) = Y^γ/γ:
 U'(Y) = Y^{γ−1} > 0, U''(Y) = (γ − 1)Y^{γ−2} < 0 ⇔ γ < 1;
 R_A = (1 − γ)/Y, R_R = 1 − γ;
 ∂R_A/∂Y = (γ − 1)/Y² < 0, ∂R_R/∂Y = 0.

(5) U(Y) = αY − βY², α > 0, β > 0:
 U'(Y) = α − 2βY > 0 ⇔ Y < α/2β, U''(Y) = −2β < 0;
 R_A = 2β/(α − 2βY) > 0, R_R = 2βY/(α − 2βY) > 0;
 ∂R_A/∂Y = 4β²/(α − 2βY)² > 0, ∂R_R/∂Y = ∂(R_A Y)/∂Y = (∂R_A/∂Y)Y + R_A > 0.

Note that γ controls the degree of risk aversion. We check this with the derivatives of R_A and R_R with respect to γ.
For U(Y) = −Y^{−γ}: ∂R_A/∂γ = 1/Y, ∂R_R/∂γ = 1.
For U(Y) = −exp(−γY): ∂R_A/∂γ = 1, ∂R_R/∂γ = Y.
For U(Y) = Y^γ/γ: ∂R_A/∂γ = −1/Y, ∂R_R/∂γ = −1.

In the last utility function it is better to set γ ≡ 1 − θ, so that U(Y) = Y^{1−θ}/(1 − θ); then ∂R_A/∂θ = 1/Y and ∂R_R/∂θ = 1 (note that R_R = 1 − γ = θ). After this change of variable, every derivative with respect to θ is positive: if we increase θ, we increase the level of risk aversion (both absolute and relative).

4.2. Certainty equivalent. The problem to be solved is: find x such that

 π_1 U(Y_1^1) + π_2 U(Y_2^1) ≡ U(x),

where Y_i^1 denotes the outcome of lottery L1 in state i and π_i denotes the probability of state i. If U is bijective, it can be "inverted", so the solution is

 x = U^{−1}{π_1 U(Y_1^1) + π_2 U(Y_2^1)},

where U^{−1} is the inverse function of U.

(1) U(Y) = −1/Y, U^{−1}(u) = −1/u:
 x = −[ (1/2)(−1/50000) + (1/2)(−1/10000) ]^{−1} = 16666.67.

(2) U(Y) = ln Y, U^{−1}(u) = exp(u):
 x = exp((1/2) ln 50000 + (1/2) ln 10000) = 22360.68.

(3) U(Y) = Y^γ/γ, U^{−1}(u) = (γu)^{1/γ}:
 x = [ (1/2)(50000^γ + 10000^γ) ]^{1/γ};
 with γ = 0.25, x = 24232.88; with γ = 0.75, x = 28125.91.

Increasing γ leads to an increase in the value of x. This is because 1 − γ (and not γ!) is the coefficient of relative risk aversion for this utility function. Therefore increasing γ decreases the level of risk aversion, and the certainty equivalent is higher, i.e., the risk premium is lower.

4.3. Risk premium. The problem to be solved (indifference between insurance and no insurance) is

 EU(Y) = Σ_i π(i) ln Y_i = ln(100,000 − P),

where P is the insurance premium, Y_i is the wealth in state i, and π(i) is the probability of state i.
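Returning to exercise 4.2 for a moment, the three certainty equivalents can be reproduced numerically:

```python
import math

# Certainty equivalents x = U^{-1}(E[U(Y)]) for the 50/50 lottery over
# 50,000 and 10,000, under the three utility functions of exercise 4.2.
outcomes, probs = (50_000.0, 10_000.0), (0.5, 0.5)

def certainty_equivalent(u, u_inv):
    return u_inv(sum(p * u(y) for p, y in zip(probs, outcomes)))

ce_hyperbolic = certainty_equivalent(lambda y: -1.0 / y, lambda v: -1.0 / v)  # U = -1/Y
ce_log = certainty_equivalent(math.log, math.exp)                             # U = ln Y

def ce_power(g):                                                              # U = Y^g / g
    return certainty_equivalent(lambda y: y**g / g, lambda v: (g * v)**(1.0 / g))

print(round(ce_hyperbolic, 2), round(ce_log, 2),
      round(ce_power(0.25), 2), round(ce_power(0.75), 2))
```

The printed values match those in the solution (16666.67, 22360.68, and the two power-utility figures), and the monotonicity in γ is immediate from the output.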
The solution to the problem is P = 100,000 − exp(EU(Y)). The solutions under the three scenarios are:

 Scenario A: P = 13312.04
 Scenario B: P = 13910.83
 Scenario C: P = 22739.27

Starting from scenario A, in scenarios B and C we have transferred 1 percent of probability from state 3 to another state (to state 2 in scenario B and to state 1 in scenario C). However, the outcomes are very different: the premium is slightly bigger in scenario B, while it is much higher in C. This could have been expected because the logarithmic utility function is very curved at low values and flattens out rapidly; i.e., ln 1 is very different from ln 100,000, but ln 50,000 is only slightly different from ln 100,000. Also, the logarithmic utility function is DARA.

4.4. Simply take the expected utility (initial wealth is normalized to 1):

 E(U(1 + r̃_A)) ≡ E(U(1 + r̃_B + ξ̃)) ≥ E(U(1 + r̃_B)),

and apply Theorem 3.2. All individuals with increasing utility functions prefer A to B.

4.5. a. Let x̃_A and x̃_B be two probability distributions. The notion that x̃_A FSD x̃_B is the idea that x̃_A assigns greater probability weight to higher outcome values; equivalently, it assigns lower outcome values a lower probability relative to x̃_B. Notice that there is no concern for "relative riskiness" in this comparison: the outcomes under x̃_A could be more 'spread out' in the region of higher values than those under x̃_B.

b. The notion that x̃_A SSD x̃_B is the sense that x̃_B is related to x̃_A via a "pure increase in risk", x̃_B being defined from x̃_A via a mean-preserving spread: x̃_B is just x̃_A with the values spread out. Of course, any risk-averse agent would prefer x̃_A.

c. Only two moments of a distribution are relevant for the comparison: the mean and the variance. Agents like the former and dislike the latter.
Thus, given two distributions with the same mean, the one with the higher variance is less desirable; similarly, given two distributions with the same variance, the one with the greater mean return is preferred.

d. (i) Compare first under the mean-variance criterion:

 E x̃_A = (1/4)(2) + (1/2)(4) + (1/4)(9) = 4.75
 E x̃_B = (1/3)(1 + 6 + 8) = 5
 σ²_A = (1/4)(2 − 4.75)² + (1/2)(4 − 4.75)² + (1/4)(9 − 4.75)² = 6.6875
 σ²_B = (1/3)(1 − 5)² + (1/3)(6 − 5)² + (1/3)(8 − 5)² = 26/3 = 8.67

So E x̃_A < E x̃_B and σ²_A < σ²_B: x̃_A has the smaller variance, but also the smaller mean, so neither distribution dominates the other under the mean-variance criterion.

(ii) Now let us compare them under FSD. Graphing the cumulative distribution functions F_A (jumps at 2, 4, 9) and F_B (jumps at 1, 6, 8), the two curves cross. It does not appear that either x̃_A FSD x̃_B or x̃_B FSD x̃_A; neither dominates the other in the FSD sense. Mean-variance comparisons and FSD are thus distinct criteria, and here neither is conclusive.

(iii)

 x:                      0    1    2    3    4    5
 F_B(x):                 0   1/3  1/3  1/3  1/3  1/3
 Σ_{t≤x} F_B(t):         0   1/3  2/3   1   4/3  5/3
 F_A(x):                 0    0   1/4  1/4  3/4  3/4
 Σ_{t≤x} F_A(t):         0    0   1/4  1/2  5/4   2
 Σ_{t≤x}[F_B − F_A](t):  0   1/3  5/12 1/2  1/12 −1/3

There is no SSD either, as the accumulated difference Σ_{t≤x}[F_B(t) − F_A(t)] does not keep a constant sign: it is positive for small x but negative at x = 5.

4.6. a. The certainty equivalent is defined by the equation U(CE_Z) = EU(Z̃), since Y_0 ≡ 0, and Z̃ = (16, 4; 1/2). With U(Y) = Y^{1/2}:

 (CE_Z)^{1/2} = (1/2)(16)^{1/2} + (1/2)(4)^{1/2} = (1/2)(4) + (1/2)(2) = 3.

Thus CE_Z = 9.

b. The insurance policy guarantees the expected payoff:

 EZ̃ = (1/2)(16) + (1/2)(4) = 8 + 2 = 10.

Π, the premium, satisfies Π = EZ̃ − CE_Z = 1.

c. The insurance would pay −6 in the high state, +6 in the low state. The most the agent would be willing to pay is 1. The probability premium π' solves

 U(EZ̃) = 10^{1/2} = π'(16)^{1/2} + (1 − π')(4)^{1/2}, π' = 0.58.

The probability premium is 0.08.

d.
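The comparisons in 4.5(d) can be verified with exact rational arithmetic:

```python
from fractions import Fraction as F

# Mean-variance and dominance comparison of exercise 4.5(d):
# x_A = (2, 4, 9) with probabilities (1/4, 1/2, 1/4); x_B = (1, 6, 8) equally likely.
A = {2: F(1, 4), 4: F(1, 2), 9: F(1, 4)}
B = {1: F(1, 3), 6: F(1, 3), 8: F(1, 3)}

mean = lambda d: sum(x * p for x, p in d.items())
var = lambda d: sum((x - mean(d))**2 * p for x, p in d.items())
cdf = lambda d, t: sum(p for x, p in d.items() if x <= t)

print(mean(A), mean(B))  # 19/4 vs 5
print(var(A), var(B))    # 107/16 vs 26/3
# FSD requires one CDF to lie everywhere below the other; here they cross:
crossings = [(t, cdf(A, t), cdf(B, t)) for t in range(10)]
print(any(a < b for _, a, b in crossings) and any(a > b for _, a, b in crossings))  # True
```

Exact fractions avoid any rounding ambiguity in the variance comparison (107/16 = 6.6875 against 26/3 ≈ 8.67).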
Now consider the gamble Z̃' = (36, 16; 1/2):

 (CE_{Z'})^{1/2} = (1/2)(36)^{1/2} + (1/2)(16)^{1/2} = (1/2)(6) + (1/2)(4) = 5,

so CE_{Z'} = 25 and Π_{Z'} = EZ̃' − CE_{Z'} = 26 − 25 = 1 (as before). π'' solves

 (26)^{1/2} = π''(36)^{1/2} + (1 − π'')(16)^{1/2}:
 5.10 = 6π'' + 4(1 − π'') = 2π'' + 4, so 2π'' = 1.1 and π'' = 0.55.

Thus the probability premium is 0.55 − 0.50 = 0.05. The probability premium has fallen because the agent is wealthier in this case and is operating on a less risk-averse portion of his utility curve. As a result, the premium, as measured in probability terms, is smaller.

4.7. No. Reworking the data of Table 3.3 shows that it is not always the case that ∫_0^x [F_4(t) − F_3(t)] dt > 0. Specifically, for x = 5 to 7, F_3(x) > F_4(x). Graphically this corresponds to the fact that in Figure 3.6 area B of the graph is now bigger than area A.

4.8. a. State-by-state dominance: no.
b. FSD: yes (see the graph of the cumulative distributions of ỹ and z̃ over the outcomes −10, 0, 10, with jumps of 1/3 and 2/3). These two notions are not equivalent.

Chapter 5

5.1. For full investment in the risky asset, the first-order condition has to satisfy

 E[U'(Y_0(1 + r̃))(r̃ − r_f)] ≥ 0.

Expanding U'(Y_0(1 + r̃)) around Y_0(1 + r_f), we get, after some manipulations and ignoring higher-order terms:

 E[U'(Y_0(1 + r̃))(r̃ − r_f)] = U'[Y_0(1 + r_f)] E(r̃ − r_f) + U''[Y_0(1 + r_f)] E[(r̃ − r_f)²] Y_0 ≥ 0.

Hence E(r̃ − r_f) ≥ R_A[Y_0(1 + r_f)] E[(r̃ − r_f)²] Y_0, which is the smallest risk premium required for full investment in the risky asset.

5.2. a. Final wealth in the three states (z̃ takes the values 1, 2, 3; the risk-free gross return is 2; a is invested in the risky asset):

 Y(θ_1) = (1 − a)2 + a = 2 − a
 Y(θ_2) = (1 − a)2 + 2a = 2
 Y(θ_3) = (1 − a)2 + 3a = 2 + a

b. EU = π_1 U(2 − a) + π_2 U(2) + π_3 U(2 + a);

 ∂EU/∂a = −π_1 U'(2 − a) + π_3 U'(2 + a) = 0.

 a = 0 ⇔ U'(2)[π_3 − π_1] = 0 ⇔ π_3 = π_1 ⇔ E(z̃) = 2.

c. Define W(a) = E(U(Y_0(1 + r_f) + a(r̃ − r_f))) = E(U(2 + a(z̃ − 2))); then

 W'(a) = E(U'(2 + a(z̃ − 2))(z̃ − 2)) = 0.

d.
For U(Y) = 1 − exp(−bY):

 W(a) = π_1[1 − e^{−b(2−a)}] + π_2[1 − e^{−2b}] + π_3[1 − e^{−b(2+a)}];
 W'(a) = −π_1 b e^{−b(2−a)} + π_3 b e^{−b(2+a)} = 0,

which, taking logs (ln(π_1 b) − b(2 − a) = ln(π_3 b) − b(2 + a)), yields

 a = [ln(π_3) − ln(π_1)] / 2b.

For U(Y) = Y^{1−γ}/(1 − γ):

 W(a) = π_1 (2 − a)^{1−γ}/(1 − γ) + π_2 (2)^{1−γ}/(1 − γ) + π_3 (2 + a)^{1−γ}/(1 − γ);
 W'(a) = −π_1(2 − a)^{−γ} + π_3(2 + a)^{−γ} = 0, so
 a = 2 (π_3^{1/γ} − π_1^{1/γ}) / (π_3^{1/γ} + π_1^{1/γ}).

Assuming π_3 > π_1, b > 0, 0 < γ < 1, we have a > 0 in either case.

e. For U(Y) = 1 − exp(−bY), R_A = b: absolute risk aversion is constant, and the optimal a above does not depend on wealth. For U(Y) = Y^{1−γ}/(1 − γ), R_A = γ/Y: absolute risk aversion declines with wealth, and ∂a/∂Y > 0.

5.3. a. Y = (1 + r̃)a + (Y_0 − a)(1 + r_f) = Y_0(1 + r_f) + a(r̃ − r_f).

b. max_a EU(Y_0(1 + r_f) + a(r̃ − r_f)).

 FOC: E{U'(Y_0(1 + r_f) + a*(r̃ − r_f))(r̃ − r_f)} = 0;
 SOC: E{U''(Y_0(1 + r_f) + a*(r̃ − r_f))(r̃ − r_f)²} < 0.

Since the second derivative is negative, we are at a maximum with respect to a at a = a*, the optimum.

c. We first want to formally obtain da*/dY_0. Take the total differential of the FOC with respect to Y_0. Since E{U'(Y_0(1 + r_f) + a(r̃ − r_f))(r̃ − r_f)} = 0,

 E{U''(Y_0(1 + r_f) + a(r̃ − r_f))(r̃ − r_f)[(1 + r_f) + (da/dY_0)(r̃ − r_f)]} = 0,

so that

 da/dY_0 = −E{U''(Y_0(1 + r_f) + a(r̃ − r_f))(r̃ − r_f)}(1 + r_f) / E{U''(Y_0(1 + r_f) + a(r̃ − r_f))(r̃ − r_f)²}.

The denominator is negative (it is the second-order condition), so the sign of da/dY_0 depends on E{U''(Y_0(1 + r_f) + a(r̃ − r_f))(r̃ − r_f)}. We want to show this latter expression is positive and hence da/dY_0 > 0, as our intuition would suggest.

d. R'_A(Y) < 0 is declining absolute risk aversion: an investor who becomes wealthier should be willing to place more of his wealth (though not necessarily proportionately more) in the risky asset if he displays R'_A(Y) < 0.

e. We divide the set of risky realizations into two sets: r̃ ≥ r_f and r_f > r̃. We maintain the assumption R'_A(Y) < 0.
Case 1: r̃ ≥ r_f. Notice that Y_0(1 + r_f) + a(r̃ − r_f) ≥ Y_0(1 + r_f). Then, by declining absolute risk aversion,

 R_A(Y_0(1 + r_f) + a(r̃ − r_f)) ≤ R_A(Y_0(1 + r_f)).

f. Now we use these observations. Case 1: r̃ ≥ r_f.

 −U''(Y_0(1 + r_f) + a(r̃ − r_f)) / U'(Y_0(1 + r_f) + a(r̃ − r_f)) = R_A(Y_0(1 + r_f) + a(r̃ − r_f)) ≤ R_A(Y_0(1 + r_f)).

Hence U''(Y_0(1 + r_f) + a(r̃ − r_f)) ≥ −R_A(Y_0(1 + r_f)) U'(Y_0(1 + r_f) + a(r̃ − r_f)). Since r̃ − r_f ≥ 0 in this case,

(i) U''(Y_0(1 + r_f) + a(r̃ − r_f))(r̃ − r_f) ≥ −R_A(Y_0(1 + r_f)) U'(Y_0(1 + r_f) + a(r̃ − r_f))(r̃ − r_f).

Case 2: for r_f > r̃ the risk-aversion inequality is reversed, and multiplying by (r̃ − r_f) < 0 reverses it back, so the same inequality (i) obtains:

 U''(Y_0(1 + r_f) + a(r̃ − r_f))(r̃ − r_f) ≥ −R_A(Y_0(1 + r_f)) U'(Y_0(1 + r_f) + a(r̃ − r_f))(r̃ − r_f).

(σ_your/σ_Aus = .2683/.4070 = .6592.) So the answers are: b. no; c. no. These are, in fact, two ways of asking the same question.

d. Whenever you are asked a question like this, it is in reference to a regression; in this case

 r̃_Aus = α̂_Aus + β̂_Aus r̃_your + ε̃_Aus ⇒ σ²_Aus = (β̂_Aus)² σ²_your + σ²_{ε,Aus}.

The fraction of the Australian portfolio's variation explained by variations in your portfolio's return is

 R² = (β̂_Aus)² σ²_your / σ²_Aus = (.77)² = .59.

e. No! How could it be? Adding Australian stocks doesn't reduce risk. Even if it did, you don't know that your portfolio is identical to the true M.

Chapter 7

7.1. Write the SML equation so as to make the market risk premium appear, then multiply and divide by σ_M:

 E(r_j) = r_f + (E(r_M) − r_f)β_j = r_f + (E(r_M) − r_f) σ_jM/σ²_M = r_f + [(E(r_M) − r_f)/σ_M] (σ_jM/σ_M).

Rewrite the last factor:

 σ_jM/σ_M = ρ_jM σ_j σ_M/σ_M = σ_j ρ_jM.

Then we get

 E(r_j) = r_f + [(E(r_M) − r_f)/σ_M] (σ_j ρ_jM),

and the conclusion follows since 0 ≤ ρ_jM ≤ 1.

7.2. Intuitively, the CML in the 'more risk averse economy' should be steeper, in view of its risk/return trade-off interpretation.
This is true in particular because one would expect the risk-free rate to be lower, as the demand for the risk-free asset should be higher, and the return on the optimal risky portfolio to be higher, as the more risk-averse investors require a higher compensation for the risk they bear. Note, however, that the Markowitz model is not framed to answer such a question explicitly. It builds on 'given' expected returns that are assumed to be equilibrium returns. If we imagine, as in this question, a change in the primitives of the economy, we have to turn to our intuition to guess how these given returns would differ in the alternative set of circumstances. The model does not help us with this reasoning. For such (fundamental) questions, a general equilibrium setting will prove superior.

7.3. The frontier of the economy where asset returns are more correlated, and where diversification opportunities are thus fewer, is contained inside the efficient frontier of the economy where assets are less correlated. If the risk-free rate were constant, this would guarantee that the slope of the CML would be lower in that economy, and the reward for risk-taking lower as well. This appears counter-intuitive (with fewer diversification opportunities, those who take risks should get a higher reward), an observation which suggests that the risk-free rate should be lower in the higher-correlation economy. Refer to our remarks in the solution to 6.2: the CAPM model, by its nature, does not explicitly help us answer such a question.

7.4. If investors hold homogeneous expectations concerning asset returns, mean returns on risky assets (per dollar invested) will be the same. Otherwise they would face different efficient frontiers and most likely would invest different proportions in risky assets. Moreover, the marginal rate of substitution between risk and return would depend on the level of wealth.
Using standard notations and applying the formulas, we get A = 3.77, B = 5.85, C = 2.65, D = 1.31, and
$g = \begin{pmatrix} 1.529 \\ -0.059 \\ -0.471 \end{pmatrix}$, $h = \begin{pmatrix} -0.618 \\ 0.235 \\ 0.382 \end{pmatrix}$
$E(r_{MVP}) = 1.42$, $w_{MVP} = \begin{pmatrix} 0.652 \\ 0.275 \\ 0.072 \end{pmatrix}$; $E(r_{ZC(3)}) = 1.3028$, $w_{ZC(3)} = \begin{pmatrix} 0.725 \\ 0.248 \\ 0.028 \end{pmatrix}$

7.6. a. The agent's problem is (agent i):
$\max_{(x_1^i, x_2^i, \ldots, x_J^i)}\ E\,U_i\Big((Y_0^i - \sum_j x_j^i)(1+r_f) + \sum_j x_j^i(1+\tilde{r}_j)\Big)$
The F.O.C. with respect to asset j is $E\{U_i'(\tilde{Y}_i)(-(1+r_f) + (1+\tilde{r}_j))\} = 0$, or
(1) $E\{U_i'(\tilde{Y}_i)(\tilde{r}_j - r_f)\} = 0$

b. We apply the relationship $E(xy) = \mathrm{cov}(x,y) + E(x)E(y)$ to equation (1):
$0 = E\{U_i'(\tilde{Y}_i)(\tilde{r}_j - r_f)\} = E\{U_i'(\tilde{Y}_i)\}E(\tilde{r}_j - r_f) + \mathrm{cov}(U_i'(\tilde{Y}_i), \tilde{r}_j - r_f)$
$= E\{U_i'(\tilde{Y}_i)\}E(\tilde{r}_j - r_f) + \mathrm{cov}(U_i'(\tilde{Y}_i), \tilde{r}_j)$
Thus,
(2) $E\{U_i'(\tilde{Y}_i)\}E(\tilde{r}_j - r_f) = -\mathrm{cov}(U_i'(\tilde{Y}_i), \tilde{r}_j)$

c. We make use of the relationship
$\mathrm{cov}(g(\tilde{x}), \tilde{y}) = E(g'(\tilde{x}))\,\mathrm{cov}(\tilde{x}, \tilde{y})$, where we identify $g(\tilde{Y}_i) = U_i'(\tilde{Y}_i)$.
Applying this result to the R.H.S. of equation (2) yields:
(3) $E\{U_i'(\tilde{Y}_i)\}E(\tilde{r}_j - r_f) = -E(U_i''(\tilde{Y}_i))\,\mathrm{cov}(\tilde{Y}_i, \tilde{r}_j)$

d. We can rewrite equation (3) as:
$E(\tilde{r}_j - r_f) = \frac{-E\{U_i''(\tilde{Y}_i)\}}{E\{U_i'(\tilde{Y}_i)\}}\,\mathrm{cov}(\tilde{Y}_i, \tilde{r}_j)$
Let us denote $R_A^i = \frac{-E\{U_i''(\tilde{Y}_i)\}}{E\{U_i'(\tilde{Y}_i)\}}$, as it is reminiscent of (but not equal to) the absolute risk aversion measure. The above equation can be rewritten as
$\Big(\frac{1}{R_A^i}\Big)E(\tilde{r}_j - r_f) = \mathrm{cov}(\tilde{Y}_i, \tilde{r}_j)$
Summing over all agents i gives
$\sum_{i=1}^I\Big(\frac{1}{R_A^i}\Big)E(\tilde{r}_j - r_f) = \sum_{i=1}^I \mathrm{cov}(\tilde{Y}_i, \tilde{r}_j)$, or
$E(\tilde{r}_j - r_f)\sum_{i=1}^I\Big(\frac{1}{R_A^i}\Big) = \mathrm{cov}\Big(\sum_{i=1}^I \tilde{Y}_i,\ \tilde{r}_j\Big)$
Let us identify $\sum_{i=1}^I \tilde{Y}_i \equiv Y_{M0}(1+\tilde{r}_M)$.
Then we have:
$E(\tilde{r}_j - r_f)\sum_{i=1}^I\Big(\frac{1}{R_A^i}\Big) = \mathrm{cov}(Y_{M0}(1+\tilde{r}_M), \tilde{r}_j) = Y_{M0}\,\mathrm{cov}(\tilde{r}_M, \tilde{r}_j)$
Thus,
(4) $E(\tilde{r}_j - r_f) = \frac{Y_{M0}}{\sum_{i=1}^I(1/R_A^i)}\,\mathrm{cov}(\tilde{r}_M, \tilde{r}_j)$

e. Let $w_j$ be the proportion of economy-wide wealth invested in asset j. Then, for all j,
$w_j\,E(\tilde{r}_j - r_f) = \frac{Y_{M0}}{\sum_{i=1}^I(1/R_A^i)}\,w_j\,\mathrm{cov}(\tilde{r}_M, \tilde{r}_j)$
Summing over the J assets, it follows that
$E\Big(\sum_{j=1}^J w_j\tilde{r}_j - r_f\Big) = \frac{Y_{M0}}{\sum_{i=1}^I(1/R_A^i)}\,\mathrm{cov}\Big(\tilde{r}_M, \sum_{j=1}^J w_j\tilde{r}_j\Big)$
By construction, $\sum_{j=1}^J w_j\tilde{r}_j = \tilde{r}_M$. Then
(5) $E(\tilde{r}_M) - r_f = \frac{Y_{M0}}{\sum_{i=1}^I(1/R_A^i)}\,\mathrm{cov}(\tilde{r}_M, \tilde{r}_M) = \frac{Y_{M0}}{\sum_{i=1}^I(1/R_A^i)}\,\mathrm{var}(\tilde{r}_M)$
From (5), $\frac{Y_{M0}}{\sum_{i=1}^I(1/R_A^i)} = \frac{E(\tilde{r}_M - r_f)}{\mathrm{var}(\tilde{r}_M)}$; substituting this latter expression into (4) gives
$E(\tilde{r}_j - r_f) = \frac{\mathrm{cov}(\tilde{r}_M, \tilde{r}_j)}{\mathrm{var}(\tilde{r}_M)}\,E(\tilde{r}_M - r_f)$, the traditional CAPM.

f. Equation (4) states that $E(\tilde{r}_j - r_f) = \frac{Y_{M0}}{\sum_{i=1}^I(1/R_A^i)}\,\mathrm{cov}(\tilde{r}_M, \tilde{r}_j)$.

Chapter 8

8.1. a. $q_i = \frac{1}{(1+r_i)^i}$:
$q_1 = 0.91$, $q_2 = 0.8224$, $q_3 = 0.7424$
These are in fact the corresponding risk-free discount bond prices.
b. The matrix is the same at each date. The n-period A-D matrix is then $[A-D]^n$. If we are in state i today, we look at line i, written $[A-D]^n_i$, and we sum the corresponding A-D prices to obtain the price of a sure payoff of one unit in each future state. Since it is assumed we are in state 1:
$[A-D]^1_1 = 0.28 + 0.33 + 0.30 = 0.91 = q_1$
$[A-D]^2_1 = 0.8224 = q_2$
$[A-D]^3_1 = 0.7424 = q_3$

8.2. The price of an A-D security is the (subjective) probability-weighted MRS in the corresponding state.
It is determined by three considerations: the discount factor embedded in the MU of future consumption, the state probability, and the relative scarcities reflected in the intertemporal marginal rate of substitution, that is, in the ratio of the future MU to the present MU. The latter is affected by the expected consumption/endowment in the future state and by the shape of the agents' utility functions (their rates of risk aversion).

8.3. We determine a term structure for each initial state.
Today's state is 1:
$1 + r_1^1 = \frac{1}{0.53 + 0.43} = 1.0417$
$(1 + r_1^2)^2 = 1.0800 \Rightarrow 1 + r_1^2 = 1.0392$
$(1 + r_1^3)^3 = 1.1193 \Rightarrow 1 + r_1^3 = 1.0383$
Today's state is 2:
$1 + r_2^1 = 1.0310$
$(1 + r_2^2)^2 = 1.0679 \Rightarrow 1 + r_2^2 = 1.0334$
$(1 + r_2^3)^3 = 1.1067 \Rightarrow 1 + r_2^3 = 1.0344$

8.4. We determine the price of A-D securities for each date, starting with bond 1 for date 1: $q_1 = 96/100 = 0.96$. Then we use the method of pricing intermediate cash flows with A-D prices to price bonds of longer maturity. For example, the price of bond 2 is such that
$900 = 100\,q_1 + 1100\,q_2 \equiv \frac{100}{1+r_1} + \frac{1100}{(1+r_2)^2}$,
which gives $q_2 = \frac{900 - 100\,q_1}{1100} = 0.7309$. Similarly,
$q_3 = 0.5331$, $q_4 = 0.3194$, $q_5 = 0.01608$

8.5. a. Given preferences and endowments, it is clear that the allocation {(4, 2, 2), (4, 2, 2)} is PO and feasible. In general, there is an infinity of PO allocations.
b. Yes, but only if one of the following securities is traded:
$s_1 = (-1, 1)$ or $s_2 = (1, -1)$
For example, Agent 1 would sell $s_1$ and Agent 2 would buy it. In general one security is not sufficient to complete the markets when there are two future states.
c. Agents will be happy to store the commodity for two reasons: consumption smoothing (they are pleased to transfer consumption from period 1 to period 2), and, in addition, by shifting to tomorrow some of their current consumption they are able to reduce somewhat (but not fully) the endowment risk they face. For these two reasons, storing will enable them to increase their utility level.
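The bootstrapping used in 8.4 can be checked with a short script. The sketch below is ours, not the text's (the `bootstrap` helper and the list layout are illustrative assumptions); it reproduces $q_1$ and $q_2$ from the first two bond prices:

```python
# Bootstrap A-D (discount) prices from coupon-bond prices, as in 8.4:
# each bond's price equals its cash flows weighted by the date prices q_t,
# so each new maturity pins down one further q_t.

def bootstrap(bonds):
    """bonds: list of (price, [cash flow at t=1, t=2, ...]), by maturity."""
    q = []
    for price, flows in bonds:
        # subtract the part of the price explained by the already-known q's
        residual = price - sum(qt * cf for qt, cf in zip(q, flows))
        q.append(residual / flows[len(q)])
    return q

bonds = [
    (96.0, [100.0]),           # one-year bond: q1 = 96/100
    (900.0, [100.0, 1100.0]),  # two-year 10%-coupon bond from 8.4
]
q = bootstrap(bonds)
print([round(x, 4) for x in q])  # [0.96, 0.7309]
```

Extending the list with the remaining bonds of the exercise would reproduce $q_3$ through $q_5$ by the same loop.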
d. Remember that aggregate uncertainty means that the total quantity available at date 2 is not the same in all states. If one agent is risk-neutral, he will however be willing to bear all the risks. Provided enough trading instruments exist, the consumption of the risk-averse agent can thus be completely smoothed out, and this constitutes a Pareto optimum.

8.6. a. 1. Because the variance term diminishes utility, consumption should be equated across states for each agent.
2. There are many Pareto optima. For example, the allocations below are both Pareto optimal:

              t=0   t=1, θ=1   t=1, θ=2
Allocation 1
Agent 1        4        3          3
Agent 2        4        3          3
Allocation 2
Agent 1        5        4          4
Agent 2        3        2          2

The set of Pareto optima satisfies:
$\{((c_0^1, c_1^1, c_2^1), (c_0^2, c_1^2, c_2^2)) : c_1^1 = c_2^1\ \text{(and thus } c_1^2 = c_2^2\text{)},\ c_1^1 + c_1^2 = 6;\ c_0^1 + c_0^2 = 8\}$
3. Yes. Given E(c) in the second period, var(c) is minimized.

b. 1. The Pareto optima satisfy
$\max_{c_0^1, c_1^1, c_2^1}\ c_0^1 + \frac14\ln c_1^1 + \frac34\ln c_2^1 + \lambda\Big[8 - c_0^1 + \frac14\ln(6 - c_1^1) + \frac34\ln(6 - c_2^1)\Big]$
The F.O.C.'s are:
(i) $c_0^1$: $1 - \lambda = 0$
(ii) $c_1^1$: $\frac14\Big(\frac{1}{c_1^1}\Big) + \lambda\Big(\frac14\Big)\Big(\frac{1}{6 - c_1^1}\Big)(-1) = 0$
(iii) $c_2^1$: $\frac34\Big(\frac{1}{c_2^1}\Big) + \lambda\Big(\frac34\Big)\Big(\frac{1}{6 - c_2^1}\Big)(-1) = 0$
From (ii), $\frac{1}{c_1^1} = \lambda\Big(\frac{1}{6 - c_1^1}\Big) \Rightarrow c_1^1 = \frac{6}{1+\lambda}$.
From (iii), $\frac{1}{c_2^1} = \lambda\Big(\frac{1}{6 - c_2^1}\Big) \Rightarrow c_2^1 = \frac{6}{1+\lambda}$.
A Pareto optimum clearly requires $c_1^1 = c_2^1$, and thus $c_1^2 = c_2^2$.
If $\lambda > 1$: $c_0^1 = 0$, $c_0^2 = 8$. If $\lambda = 1$: $c_0^1 + c_0^2 = 8$. If $\lambda < 1$: $c_0^1 = 8$, $c_0^2 = 0$.
The Pareto optimal allocations here and in the first part of the problem are the same. Both agents are risk averse and we would expect them to try to stabilize period-1 consumption.
2.
Agents' problems can be written:
Agent 1: $\max_{Q_1^1, Q_2^1}\ (4 - P_1 Q_1^1 - P_2 Q_2^1) + \frac14\ln(1 + Q_1^1) + \frac34\ln(5 + Q_2^1)$
Agent 2: $\max_{Q_1^2, Q_2^2}\ (4 - P_1 Q_1^2 - P_2 Q_2^2) + \frac14\ln(5 + Q_1^2) + \frac34\ln(1 + Q_2^2)$
Market clearing conditions: $Q_1^1 + Q_1^2 = 0$, $Q_2^1 + Q_2^2 = 0$ (both securities are in zero net supply).
The F.O.C.'s are:
Agent 1: $Q_1^1$: $P_1 = \frac14\Big(\frac{1}{1 + Q_1^1}\Big)$; $Q_2^1$: $P_2 = \frac34\Big(\frac{1}{5 + Q_2^1}\Big)$
Agent 2: $Q_1^2$: $P_1 = \frac14\Big(\frac{1}{5 + Q_1^2}\Big)$; $Q_2^2$: $P_2 = \frac34\Big(\frac{1}{1 + Q_2^2}\Big)$
These F.O.C.'s, together with market clearing, imply, as expected:
$\frac{1}{1 + Q_1^1} = \frac{1}{5 + Q_1^2} \Rightarrow Q_1^1 = 2,\ Q_1^2 = -2$
$\frac{1}{5 + Q_2^1} = \frac{1}{1 + Q_2^2} \Rightarrow Q_2^1 = -2,\ Q_2^2 = +2$
Thus,
$P_1 = \frac14\Big(\frac{1}{1 + Q_1^1}\Big) = \frac14\Big(\frac13\Big) = \frac{1}{12}$
$P_2 = \frac34\Big(\frac{1}{5 + Q_2^1}\Big) = \frac34\Big(\frac13\Big) = \frac{3}{12}$
Allocations at equilibrium:
Agent 1: t=0: $4 - \frac{1}{12}(2) + \frac{3}{12}(2) = 4\frac13$; t=1: 3, 3.
Agent 2: t=0: $4 + \frac{1}{12}(2) - \frac{3}{12}(2) = 3\frac23$; t=1: 3, 3.
This is a Pareto optimum: consumption is stabilized at t=1. However, since agent 1 had more consumption in the more likely state, he is paid in terms of t=0 consumption for agreeing to the exchanges. Agent 2 transfers t=0 wealth to him.
3. Now only the security (1, 0) is traded. The C.E. will not be Pareto optimal as the market is incomplete. The C.E. is as follows:
Agent 1: $\max_{Q_1^1}\ (4 - P_1 Q_1^1) + \frac14\ln(1 + Q_1^1) + \frac34\ln 5$
Agent 2: $\max_{Q_1^2}\ (4 - P_1 Q_1^2) + \frac14\ln(5 + Q_1^2) + \frac34\ln 1$
The F.O.C.'s are:
Agent 1: $P_1 = \frac14\Big(\frac{1}{1 + Q_1^1}\Big)$
Agent 2: $P_1 = \frac14\Big(\frac{1}{5 + Q_1^2}\Big)$
Thus $\frac{1}{1 + Q_1^1} = \frac{1}{5 + Q_1^2} \Rightarrow Q_1^1 = +2,\ Q_1^2 = -2$, and $P_1 = \frac14\Big(\frac13\Big) = \frac{1}{12}$.
Allocation:
Agent 1: t=0: $4 - \frac{2}{12} = 3\frac56$; t=1: 3 (θ1), 5 (θ2).
Agent 2: t=0: $4 + \frac{2}{12} = 4\frac16$; t=1: 3 (θ1), 1 (θ2).
Consumption is stabilized in state θ1: effectively agent 1 buys consumption insurance from agent 2.
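The closed-form equilibrium of part 2 above can be verified with exact rational arithmetic. The following is a minimal sketch (variable names are ours):

```python
# Verify the 8.6 b.2 equilibrium. With log period-1 utility, the FOCs
#   P1 = (1/4)/(1 + Q1_agent1) = (1/4)/(5 + Q1_agent2)
#   P2 = (3/4)/(5 + Q2_agent1) = (3/4)/(1 + Q2_agent2)
# plus zero net supply (agent 2 holds the negative of agent 1's position)
# reduce to linear equations in each holding.

from fractions import Fraction as F

# security 1: 1 + Q = 5 - Q  ->  Q = 2 for agent 1
Q1 = F(5 - 1, 2)
P1 = F(1, 4) / (1 + Q1)      # = 1/12

# security 2: 5 + Q = 1 - Q  ->  Q = -2 for agent 1
Q2 = F(1 - 5, 2)
P2 = F(3, 4) / (5 + Q2)      # = 3/12

print(Q1, Q2, P1, P2)  # 2 -2 1/12 1/4
```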
8.7. a. The Pareto optima satisfy:
$\max_{c_0^1, c_1^1, c_2^1}\ .25\,c_0^1 + .5\Big(\frac12\ln c_1^1 + \frac12\ln c_2^1\Big) + \lambda\Big[6 - c_0^1 + \frac12\ln(6 - c_1^1) + \frac12\ln(6 - c_2^1)\Big]$
The F.O.C.'s are:
$c_0^1$: $.25 - \lambda = 0$
$c_1^1$: $.5\Big(\frac12\Big)\Big(\frac{1}{c_1^1}\Big) + \lambda\Big(\frac12\Big)\Big(\frac{1}{6 - c_1^1}\Big)(-1) = 0$
$c_2^1$: $.5\Big(\frac12\Big)\Big(\frac{1}{c_2^1}\Big) + \lambda\Big(\frac12\Big)\Big(\frac{1}{6 - c_2^1}\Big)(-1) = 0$
From the latter two, $.5\Big(\frac{1}{c_1^1}\Big) = \lambda\Big(\frac{1}{6 - c_1^1}\Big) \Rightarrow 6 - c_1^1 = 2c_1^1\lambda$, i.e. $c_1^1 = \frac{6}{1 + 2\lambda}$, and likewise $c_2^1 = \frac{6}{1 + 2\lambda}$.
Thus $c_1^1 = c_2^1$, and therefore $c_1^2 = c_2^2$. If there is no aggregate risk and the agents' preferences are the same state by state, then a Pareto optimum will require perfect risk sharing. This example has these features. The Pareto optimum is clearly not unique. The set of Pareto optima can be described by: for all $\lambda \geq 0$,
Agent 1: t=0: 6 if $\lambda < .25$, 0 if $\lambda > .25$ (indeterminate at $\lambda = .25$); t=1: $\frac{6}{1+2\lambda}$ in each state.
Agent 2: t=0: 0 if $\lambda < .25$, 6 if $\lambda > .25$; t=1: $6\Big(1 - \frac{1}{1+2\lambda}\Big)$ in each state.
In the second case (state-2 endowment = 5 for agent 1, 3 for agent 2), there will be a Pareto optimum but it will be impossible to achieve perfect risk sharing, as there is aggregate risk.

b. The agents' problems are:
Agent 1: $\max_{Q^1, R^1}\ .25(2 - P_Q Q^1 - P_R R^1) + .5\Big[\frac12\ln(2 + Q^1) + \frac12\ln(4 + R^1)\Big]$
Agent 2: $\max_{Q^2, R^2}\ (4 - P_Q Q^2 - P_R R^2) + \frac12\ln(4 + Q^2) + \frac12\ln(2 + R^2)$
where, in equilibrium, $Q^1 + Q^2 = 0$ (market clearing); both securities are in zero net supply.
Likewise, in equilibrium, $R^1 + R^2 = 0$.
The F.O.C.'s are:
Agent 1:
$Q^1$: $.25\,P_Q = .5\Big(\frac12\Big)\Big(\frac{1}{2 + Q^1}\Big) \Leftrightarrow P_Q = \frac{1}{2 + Q^1}$
$R^1$: $.25\,P_R = .5\Big(\frac12\Big)\Big(\frac{1}{4 + R^1}\Big) \Leftrightarrow P_R = \frac{1}{4 + R^1}$
Agent 2:
$Q^2$: $P_Q = \frac12\Big(\frac{1}{4 + Q^2}\Big)$
$R^2$: $P_R = \frac12\Big(\frac{1}{2 + R^2}\Big)$
This implies:
$\frac{1}{2 + Q^1} = \frac12\Big(\frac{1}{4 + Q^2}\Big) = \frac12\Big(\frac{1}{4 - Q^1}\Big) \Rightarrow Q^1 = 2,\ Q^2 = -2$
$\frac{1}{4 + R^1} = \frac12\Big(\frac{1}{2 + R^2}\Big) = \frac12\Big(\frac{1}{2 - R^1}\Big) \Rightarrow 2(2 - R^1) = 4 + R^1,\ R^1 = 0,\ R^2 = 0$
As a result, $P_Q = \frac{1}{2 + Q^1} = \frac{1}{2 + 2} = \frac14$ and $P_R = \frac{1}{4 + R^1} = \frac14$.
The implied allocations are thus:
Agent 1: t=0: $2 - \frac12 = 1.5$; t=1: 4 (θ1), 4 (θ2).
Agent 2: t=0: $4 + \frac12 = 4.5$; t=1: 2 (θ1), 2 (θ2).

c. Let us assume the firm can introduce 1 unit of either security. Either way, the problems of the agents and their F.O.C.'s are not affected. What is affected are the market clearing conditions:
If 1 unit of Q is introduced: $Q^1 + Q^2 = 1$, $R^1 + R^2 = 0$.
If 1 unit of R is introduced: $Q^1 + Q^2 = 0$, $R^1 + R^2 = 1$.
Let us value the securities in either case.
If one unit of Q is introduced, the F.O.C.'s become:
Agent 1: $P_Q = \frac{1}{2 + Q^1}$, $P_R = \frac{1}{4 + R^1}$
Agent 2: $P_Q = \frac12\Big(\frac{1}{4 + Q^2}\Big) = \frac12\Big(\frac{1}{4 + 1 - Q^1}\Big) = \frac{1}{2(5 - Q^1)}$, $P_R = \frac12\Big(\frac{1}{2 + R^2}\Big) = \frac12\Big(\frac{1}{2 - R^1}\Big)$
The equations involving R are unchanged. Thus $P_R = \frac14$, $R^1 = 0$, $R^2 = 0$.
For the security Q we need to solve:
$\frac{1}{2 + Q^1} = \frac{1}{2(5 - Q^1)} \Rightarrow 10 - 2Q^1 = 2 + Q^1$
$Q^1 = \frac83$; $Q^2 = 1 - Q^1 = 1 - \frac83 = -\frac53$
Thus $P_Q = \frac{1}{2 + \frac83} = \frac{3}{14} < .25$.
You know the price had to go down: there is more supply of the security. The implied allocations are thus:
Agent 1: t=0: $2 - \frac83\cdot\frac{3}{14} = 2 - \frac47$; t=1: $2 + \frac83$ (θ1), 4 (θ2).
Agent 2: t=0: $4 + \frac53\cdot\frac{3}{14} = 4 + \frac{5}{14}$; t=1: $4 - \frac53$ (θ1), 2 (θ2).
If one unit of R is introduced, the first-order conditions become, with market clearing conditions imposed:
Agent 1: $P_Q = \frac{1}{2 + Q^1}$, $P_R = \frac{1}{4 + R^1}$
Agent 2: $P_Q = \frac12\Big(\frac{1}{4 + Q^2}\Big) = \frac{1}{2(4 - Q^1)}$, $P_R = \frac12\Big(\frac{1}{2 + R^2}\Big) = \frac12\Big(\frac{1}{3 - R^1}\Big)$
So $P_Q$ is unchanged, $P_Q = \frac14$, and $Q^1 = 2$, $Q^2 = -2$.
Solving for $P_R$:
$\frac{1}{4 + R^1} = \frac{1}{2(3 - R^1)} \Rightarrow 2(3 - R^1) = 4 + R^1,\ R^1 = \frac23,\ R^2 = 1 - R^1 = \frac13$
$P_R = \frac{1}{4 + R^1} = \frac{1}{4 + \frac23} = \frac{3}{14} < .25$
The implied allocations:
Agent 1: t=0: $2 - \frac12 - \frac23\cdot\frac{3}{14} = 2 - \frac{9}{14}$; t=1: 4 (θ1), $4 + \frac23$ (θ2).
Agent 2: t=0: $4 + \frac12 - \frac13\cdot\frac{3}{14} = 4 + \frac{6}{14}$; t=1: 2 (θ1), $2 + \frac13$ (θ2).
The firm is indifferent as to which security it sells: either way it receives the same amount. Either way a Pareto optimum is achieved since, with no short sales constraints, the market is complete; thus the C.E. is Pareto optimal. Agent 1 wishes to transfer income to period t=1; the introduction of more securities of either type reduces the cost to him of doing so. Agent 2, however, will receive lower prices (either way) for the securities he issues. He will be hurt.

8.8. a. At a P.O. allocation there is no waste: there are no possibilities to redistribute goods and make everyone better off. From the viewpoint of social welfare there seems to be no argument against seeking the realization of a Pareto optimum. Beyond considerations of efficiency, however, considerations of social justice might suggest that some non-optimal allocations are in fact socially preferable to some Pareto optimal ones. These issues are at the heart of many political discussions in a world where redistribution across agents is not costless. From a purely financial perspective, we associate the failure to reach a Pareto optimal allocation with the failure to smooth the various agents' consumptions across time or states as much as would in fact be feasible. Again there is a loss in welfare, and this is socially relevant: we should care.
b. The answer to a indicates we should care, since complete markets are required to guarantee that a Pareto optimal allocation is reached. Are markets complete? Certainly not! Are we far from complete markets? Would the world be much better with significantly more complete markets? This is a subject of passionate debates that cannot be resolved here. You may want to re-read the concluding comments of Chapter 1 at this stage.
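The price effect of introducing one unit of security Q in 8.7 c can be re-derived mechanically with exact fractions. The snippet below is a minimal sketch (names are ours):

```python
# 8.7 c with one unit of Q introduced: market clearing is Q1 + Q2 = 1,
# and the FOCs  P_Q = 1/(2 + Q1) = 1/(2*(5 - Q1))  reduce to the linear
# equation 10 - 2*Q1 = 2 + Q1.

from fractions import Fraction as F

Q1 = F(10 - 2, 3)        # = 8/3
Q2 = 1 - Q1              # = -5/3, agent 2 goes short
PQ = 1 / (2 + Q1)        # = 3/14

assert PQ < F(1, 4)      # larger supply, lower price than in part b
print(Q1, Q2, PQ)  # 8/3 -5/3 3/14
```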
Chapter 9

9.1. a. The CCAPM is an intertemporal model, whereas the CAPM is a one-period model. The CCAPM makes a full investor-homogeneity assumption but does not require specific utility functions.
b. The key contribution of the CCAPM resides in the fact that the portfolio problem is inherently intertemporal. The link with the real side of the economy is also more apparent in the CCAPM, which thus provides a better platform for thinking about many important questions in asset management.
c. The two models are equivalent in a one-period exchange economy, since then aggregate consumption and wealth are the same. More generally, the prescriptions of the two models would be very similar in situations where consumption is expected to be closely correlated with variations in the value of the market portfolio.

9.2. a. $\max(0,\ S_{t+1}(\theta) - p^*)$
b. $S_{t+1}(\theta) = \sum_{\tau=1}^{\infty}\delta^{\tau}\,\frac{1/c_{t+1+\tau}}{1/c_{t+1}(\theta)}\,c_{t+1+\tau} = \frac{\delta}{1-\delta}\,c_{t+1}(\theta)$
c. $q_{t+1}(\theta) = p(\theta)\,\delta\,\frac{c_t}{c_{t+1}(\theta)}$
d. The price of the option is
$C_t = \sum_{\theta' \in A} q_{t+1}(\theta')\big(S_{t+1}(\theta') - p^*\big)$,
where A is the set of states $\theta'$ for which $S_{t+1}(\theta') - p^* \geq 0$.

9.3. a. $S_{t+1}(\theta) - p^*$
b. $S_{t+1}(\theta) = \sum_{\tau=1}^{\infty}\delta^{\tau}\,\frac{1/c_{t+1+\tau}}{1/c_{t+1}(\theta)}\,c_{t+1+\tau} = \frac{\delta}{1-\delta}\,c_{t+1}(\theta)$
c. $q_{t+1}(\theta) = p(\theta)\,\delta\,\frac{c_t}{c_{t+1}(\theta)}$
d. The price of the forward contract is
$F_t = \sum_{\theta} q_{t+1}(\theta)\big(S_{t+1}(\theta) - p^*\big)$

9.4. a. After maximization, the pricing kernel from date 0 to date t takes the form $m_t = \delta^t\,\frac{c_0}{c_t}$. Now the value of the wealth portfolio is $P_0 = E_0\sum_{t=0}^{T} m_t e_t$. At equilibrium we have $e_t = c_t$. Proportionality follows immediately from $E\sum_{t=0}^{T} m_t e_t = E\sum_{t=0}^{T} m_t c_t$. With log utility we even have $P_0 = \frac{1}{1-\delta}\,c_0$.
b. Let us first define the return on the wealth portfolio as $\tilde{r}_1 = \frac{P_1 + e_1 - P_0}{P_0}$. Inserting the ex-dividend prices $P_t = \frac{\delta}{1-\delta}c_t$ and rearranging gives
$\tilde{r}_1 = \frac{\frac{\delta}{1-\delta}c_1 + c_1 - \frac{\delta}{1-\delta}c_0}{\frac{\delta}{1-\delta}c_0}$,
so that $1 + \tilde{r}_1 = \frac{1}{\delta}\,\frac{c_1}{c_0}$: the return on the wealth portfolio is pinned down by consumption growth.
c.
$P_0 = 100\,E_0[m_1 + m_2] = 100\,\delta\,E_0\Big[\frac{c_0}{c_1} + \delta\,\frac{c_0}{c_2}\Big] = 100\,\delta\,E_0\Big[\frac{(c_2 - c_1)c_0 + (1+\delta)c_1 c_0}{c_1 c_2}\Big] = 100\,\delta\,E_0\Big[\{\tilde{r}_2 + (1+\delta)\}\frac{c_0}{c_2}\Big]$

9.5. $P_i = \sum_s q_s d_{is}$. Dividing by $P_i$,
$1 = \sum_s q_s R_{is} = \sum_s m_s \pi_s R_{is} = E(mR_i) = E(m)E(R_i) + \mathrm{Cov}(m, R_i) = \frac{E(R_i)}{1 + r_f} + \mathrm{Cov}(m, R_i)$
so $R_f = E(R_i) + \mathrm{Cov}(m, R_i)R_f$, and likewise $R_f = E(R_M) + \mathrm{Cov}(m, R_M)R_f$.
Hence
$\frac{E(R_i) - R_f}{E(R_M) - R_f} = \frac{\mathrm{Cov}(m, R_i)}{\mathrm{Cov}(m, R_M)}$, i.e.
$E(R_i) - R_f = \frac{\mathrm{Cov}(m, R_i)}{\mathrm{Cov}(m, R_M)}\big[E(R_M) - R_f\big]$
Note that if $\mathrm{Cov}(m, R_M) = \mathrm{Var}(R_M)$ we have exactly the CAPM equation. This relation holds, for example, with quadratic utility.

Chapter 10

10.1. a. Markets are complete. Find the state prices from
$5q_1 + 10q_2 + 15q_3 = 8$
$q_1 + q_2 + q_3 = \frac{1}{1.1}$
$3q_3 = 1$
$\rightarrow q_1 = 0.55151,\ q_2 = 0.02424,\ q_3 = 1/3$
b. The put option has a price of $3q_1$. Risk-neutral probabilities are derived from $\pi_s = 1.1\,q_s$, s = 1, 2, 3:
$\pi_1 = 0.60667$, $\pi_2 = 0.02667$, $\pi_3 = 0.36667$
c. Consumption at date 0 is 1. The pricing kernel is given by $m_s = \frac{q_s}{p_s}$, s = 1, 2, 3, where $p_s$ is the (true) state probability:
$m_1 = 1.83838$, $m_2 = 0.06060$, $m_3 = 0.11111$

10.2. The program for agent 1 is
$\max_{c_1^1, c_2^1}\ (10 + q_1 + 5q_2 - c_1^1 q_1 - c_2^1 q_2) + \frac13\ln c_1^1 + \frac23\ln c_2^1$
The F.O.C.'s are
$-q_1 + \frac13\,\frac{1}{c_1^1} = 0$, $-q_2 + \frac23\,\frac{1}{c_2^1} = 0$,
and similarly for agent 2. This yields
$\frac{1}{c_1^1} = \frac{1}{c_1^2}$, $\frac{1}{c_2^1} = \frac{1}{c_2^2}$, i.e. $c_1^1 = c_1^2$, $c_2^1 = c_2^2$.
Using the market clearing conditions we get
$c_1^1 = c_1^2 = \frac52$, $c_2^1 = c_2^2 = \frac{11}{2}$,
so that $q_1 = \frac{2}{15}$ and $q_2 = \frac{4}{33}$.
Now construct the risk-neutral probabilities as
$\pi_1 = \frac{q_1}{q_1 + q_2}$, $\pi_2 = \frac{q_2}{q_1 + q_2}$,
which satisfy the required conditions to be probabilities. Computation of the risk-free rate is as usual: $q_1 + q_2 = \frac{1}{1+r_f}$. The market value of endowments can be computed as
$MV_i = \frac{1}{1+r_f}(\pi_1 e_1^i + \pi_2 e_2^i) \equiv q_1 e_1^i + q_2 e_2^i$

10.3. Options and market completeness. The put option has payoffs [1, 1, 1, 0].
The payoff matrix is then
$m = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix}$
Of course, the fourth row gives the payoffs of the put option. We have to solve the system
$mw = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
The matrix on the R.H.S. is the A-D securities' payoff matrix. The solution is
$w = \begin{pmatrix} 1 & -2 & 1 & 1 \\ 1 & -1 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 1 & -1 & 1 & 0 \end{pmatrix}$
We could also have checked the determinant condition on matrix m: for a square matrix (number of states = number of assets), if the determinant is non-null, the system has a unique solution. Here Det(m) = -1.

10.4. a. An A-D security is an asset that pays out 1 unit of consumption in a particular state of the world. The concept is very useful since, if we are able to extract A-D prices from traded assets, they enable us to price every complex security. This statement is valid even if no A-D security is itself traded. To price a complex security from A-D prices, make up the portfolio of A-D securities providing the same state-by-state payoff as the security to be priced and compute the cost of this portfolio.
b. Markets are not complete: the determinant of the payoff matrix is 0.
c. No: the number of assets is smaller than the number of states. Completeness can be reached by adding a put on asset one with strike 12 (Det = 126).
An A-D security from calls: long call on B (strike 5), two short calls on B (strike 6), long call on B (strike 7):
$\begin{pmatrix} 0 \\ 1 \\ 3 \end{pmatrix} - 2\begin{pmatrix} 0 \\ 0 \\ 2 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$
An A-D security with puts: long put on B (strike 8), two short puts on B (strike 7), long put on B (strike 6):
$\begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} - 2\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$

Chapter 12

12.1. a.
$EU = \ln 1 + .96(.5\ln 1.2 + .5\ln .833) + (.96)^2(.25\ln 1.44 + .5\ln 1 + .25\ln .6944) = 0$
b.
The maximization problem of the representative agent is
$\max\ \ln c_0 + \delta(\pi_{11}\ln c_{11} + \pi_{12}\ln c_{12}) + \delta^2(\pi_{21}\ln c_{21} + \pi_{22}\ln c_{22} + \pi_{23}\ln c_{23})$
s.t. $e_0 + q_{11}e_{11} + q_{12}e_{12} + q_{21}e_{21} + q_{22}e_{22} + q_{23}e_{23} = c_0 + q_{11}c_{11} + q_{12}c_{12} + q_{21}c_{21} + q_{22}c_{22} + q_{23}c_{23}$
(take consumption at date 0 as the numeraire, so its price is 1; $q_{ij}$ is the date-0 price of the A-D security that pays 1 unit of consumption at date i in state j).
The Lagrangian is given by
$L = EU + \lambda\big[e_0 + q_{11}e_{11} + \ldots + q_{23}e_{23} - (c_0 + q_{11}c_{11} + \ldots + q_{23}c_{23})\big]$
The F.O.C.'s are:
$\frac{\partial L}{\partial c_0} = \frac{1}{c_0} - \lambda = 0$
$\frac{\partial L}{\partial c_{11}} = \pi_{11}\delta\frac{1}{c_{11}} - \lambda q_{11} = 0$
$\vdots$
$\frac{\partial L}{\partial c_{23}} = \pi_{23}\delta^2\frac{1}{c_{23}} - \lambda q_{23} = 0$
A-D prices, risk-neutral probabilities, and the pricing kernel can be derived easily from the F.O.C.'s. For example,
$q_{11} = \pi_{11}\delta\frac{1}{c_{11}\lambda} = \pi_{11}\delta\frac{c_0}{c_{11}} = \pi_{11}\frac{MU_{11}}{MU_0} = \pi_{11}m_{11}$
$\vdots$
$q_{23} = \pi_{23}\delta^2\frac{1}{c_{23}\lambda} = \pi_{23}\delta^2\frac{c_0}{c_{23}} = \pi_{23}\frac{MU_{23}}{MU_0} = \pi_{23}m_{23}$
where $m_{ij}$ is the pricing kernel. The risk-neutral probabilities at date one are given by
$\pi_{11}^{RN} = \frac{q_{11}}{q_{11} + q_{12}}$, $\pi_{12}^{RN} = \frac{q_{12}}{q_{11} + q_{12}}$,
and at date two by
$\pi_{21}^{RN} = \frac{q_{21}}{q_{21} + q_{22} + q_{23}}$, $\pi_{22}^{RN} = \frac{q_{22}}{q_{21} + q_{22} + q_{23}}$, $\pi_{23}^{RN} = \frac{q_{23}}{q_{21} + q_{22} + q_{23}}$
State prices (date 0; date 1: up, down; date 2: up-up, middle, down-down):
1; 0.4, 0.576; 0.16, 0.4608, 0.331776
Risk-neutral probabilities:
date 1: 0.409836, 0.590164; date 2: 0.167966, 0.483741, 0.348293
Pricing kernel:
date 1: 0.8, 1.152; date 2: 0.64, 0.9216, 1.327104
c. Valuation (each entry is the state price times the corresponding payoff):
date 0: 1; date 1: 0.48, 0.48; date 2: 0.2304, 0.4608, 0.2304
Total value: 2.8816
d. The one-period interest rate at date zero is
$r_{0,1} = \frac{1}{q_{11} + q_{12}} - 1 = 2.459\%$
The two-period rate at date zero is
$r_{0,2} = \Big(\frac{1}{q_{21} + q_{22} + q_{23}}\Big)^{1/2} - 1 = 2.459\%$
Even though the economy is stochastic, with log utility there is no term premium. The price of a one-period bond is $q_b(1) = \frac{1}{1.02459} = .976$ and the price of a two-period bond is $q_b(2) = \frac{1}{(1.02459)^2} = .953$.
e.
The price process of the endowment stream is (at dates one and two, the upper value includes the current payoff, the lower value is the price after the cash flow has arrived):
date 0: 2.8816 / 1.8816; date 1: 2.352 / 1.152 (up), 1.63333 / 0.8 (down); date 2: 1.44, 1, 0.694444
The value of the option, using either state prices, the pricing kernel, or risk-neutral valuation, is 0.0608 at date 0, and 0.152 (up) and 0 (down) at date 1.
f. The price process is as in e. Now we need to solve for u, d, R, and the risk-neutral probabilities:
$u = \frac{2.352}{1.8816} = \frac{1.44}{1.152} = 1.25$
$d = \frac{1.6333}{1.8816} = .8681$
$R = 1 + r = 1.02459$
$q_{11} = \frac{R - d}{u - d} = \frac{1.02459 - .8681}{1.25 - .8681} = .4098$
(compare this value with the risk-neutral probability obtained in b). The value of the option is again 0.152 (up) and 0 (down) at date 1, and 0.0608 at date 0.
g. In part b we saw that pricing via A-D prices, risk-neutral probabilities, and the pricing kernel are essentially the same; these methods rely on the payoffs of the endowment stream. In contrast to b, the risk-neutral probabilities are elicited in part f from the price process. Of course, the risk-neutral probabilities are the same as in b. This is not surprising, since prices are derived from utility maximization over the relevant cash flows. Thus the risk-neutral probabilities of the cash-flow stream coincide with the risk-neutral probabilities of the price of the asset.

Chapter 13

13.1. a. In $(b_i, Er_i)$ space the three assets plot at A = (.5, .07), B = (1, .09), C = (1.5, .17).
b. Using A and B, we want $b_P = 0$:
$0 = w_A b_A + (1 - w_A)b_B = w_A(.5) + (1 - w_A)(1)$
$.5w_A = 1;\ w_A = 2,\ w_B = -1$
Using B and C:
$0 = w_B b_B + w_C b_C = w_B(1) + w_C(1.5)$, with $w_C = 1 - w_B$:
$0 = w_B + 1.5 - 1.5w_B$
$.5w_B = 1.5;\ w_B = 3,\ w_C = -2$
c. We need to find the proportions of A and C that give the same b as asset B. Thus,
$b_B = 1 = w_A b_A + (1 - w_A)b_C = w_A(.5) + (1 - w_A)(1.5) \Rightarrow w_A = \frac12,\ w_C = \frac12$
With these proportions:
$Er_P = \frac12 Er_A + \frac12 Er_C = \frac12(.07) + \frac12(.17) = .12 > .09 = Er_B$
$r_P = .12 + 1\cdot F_1 + e_P$ (the portfolio with $w_A = w_C = \frac12$)
$r_B = .09 + 1\cdot F_1 + e_B$, with $\mathrm{cov}(e_P, e_B) \equiv 0$.
Now we assume these assets are each well-diversified portfolios, so that $e_P = e_B \equiv 0$.
An arbitrage portfolio consists of shorting B and buying the portfolio composed of $w_A = w_C = \frac12$ in equal amounts; you will earn 3% riskless.
d. As a result, the prices of A and C will rise and their expected returns fall. The opposite will happen to B.
e. At the new equilibrium the expected returns, $Er_A = .06$, $Er_B = .10$, $Er_C = .14$, plot on a straight line against $b_A = .5$, $b_B = 1$, $b_C = 1.5$. There is no longer an arbitrage opportunity: expected returns are consistent with relative systematic risk.

13.2. a. Since $\mathrm{cov}(\tilde{r}_M, \tilde{\varepsilon}_j) = 0$,
$\sigma_j^2 = \mathrm{var}(\alpha_j) + \mathrm{var}(\beta_{jM}\tilde{r}_M) + \mathrm{var}(\tilde{\varepsilon}_j) = 0 + \beta_{jM}^2\sigma_M^2 + \sigma_{\varepsilon j}^2 = \beta_{jM}^2\sigma_M^2 + \sigma_{\varepsilon j}^2$
b.
$\sigma_{ij} = \mathrm{cov}(\alpha_i + \beta_{iM}\tilde{r}_M + \tilde{\varepsilon}_i,\ \alpha_j + \beta_{jM}\tilde{r}_M + \tilde{\varepsilon}_j)$
$= \mathrm{cov}(\beta_{iM}\tilde{r}_M + \tilde{\varepsilon}_i,\ \beta_{jM}\tilde{r}_M + \tilde{\varepsilon}_j)$ (constants do not affect covariances)
$= \mathrm{cov}(\beta_{iM}\tilde{r}_M, \beta_{jM}\tilde{r}_M) + \mathrm{cov}(\tilde{\varepsilon}_i, \beta_{jM}\tilde{r}_M) + \mathrm{cov}(\beta_{iM}\tilde{r}_M, \tilde{\varepsilon}_j) + \mathrm{cov}(\tilde{\varepsilon}_i, \tilde{\varepsilon}_j)$
$= \mathrm{cov}(\beta_{iM}\tilde{r}_M, \beta_{jM}\tilde{r}_M)$, since by construction of the regression relationship all other covariances are zero,
$= \beta_{iM}\beta_{jM}\mathrm{cov}(\tilde{r}_M, \tilde{r}_M) = \beta_i\beta_j\sigma_M^2$

13.3. The CAPM is an equilibrium model built on structural hypotheses about investors' preferences and expectations and on the condition that asset markets are at equilibrium. The APT observes market prices on a large asset base and derives, under the hypothesis of no arbitrage, the implied relationship between expected returns on individual assets and the expected returns on a small list of fundamental factors. Both models lead to a linear relationship explaining expected returns on individual assets and portfolios. In the case of the CAPM, the SML depends on a single factor, the expected excess return on the market portfolio. The APT opens up the possibility that more than one factor is priced in the market and is thus necessary to explain returns. The return on the market portfolio could be one of them, however.
Both models would be compatible if the market portfolio were simply another way to synthesize the several factors identified by the APT: under the conditions spelled out in Section 12.4, the two models are essentially alternative ways to reach the same 'truth'. Empirical results tend to suggest, however, that this is not likely to be the case. Going from expected returns to current prices is straightforward, but requires formulating, alongside expectations of future returns, expectations of the future price level and of dividend payments.

13.4. The main distinction is that the A-D theory is a full structural general equilibrium theory while the APT is a no-arbitrage approach to pricing. The former prices all assets from assumed primitives. The latter must start from observed quoted prices whose levels it does not explain. The two theories are closer to one another, however, once one realizes that one can just as well play the 'no arbitrage' game with A-D pricing. This is what we did in Chapter VIII. There the similarities are great: start from given, unexplained market prices for 'complex' securities and extract from them the prices of the fundamental securities; use the latter to price other assets or arbitrary cash flows. The essential differences are the following. A-D pricing focuses on the concept of states of nature and the pricing of future payoffs conditional on the occurrence of specific future states. The APT replaces the notion of state of nature with the 'transversal' concept of factor. While in the former the key information is the price of one unit of consumption in a specific future date-state, in the latter the key ingredient extracted from observed prices is the expected excess return obtained for bearing one unit of a specified risk factor.

13.5. True. The APT is agnostic about beliefs.
It simply requires that observed prices and returns, presumably the product of a large number of agents trading on the basis of heterogeneous beliefs, be consistent in the sense that no arbitrage opportunities are left unexploited.

Chapter 15

15.1. a. These utility functions are well known: agent 1 is risk-neutral, agent 2 is risk-averse.
b. A PO allocation is one in which agent 2 gets smooth consumption.
c. Given that agent 2 is risk-averse, he buys A-D security 1 and sells A-D security 2, and obtains smooth consumption. Agent 1 is risk-neutral and is willing to buy or sell any quantity of A-D securities. We can say that agent 2 determines the quantities, and agent 1 the prices, of the A-D securities. Solving the program for agent 1 gives the following F.O.C.'s:
$q_1 = \delta\pi$, $q_2 = \delta(1 - \pi)$
The prices of the A-D securities depend only on the probability of each state. Agent 2's optimal consumption levels are
$c^2 = c^2(\theta_1) = c^2(\theta_2) = \frac{2\delta(1-\pi) + 1}{1 + \delta}$,
which equals 1 if $\pi = 0.5$.
d. Note: it is no longer possible to transfer consumption across states. The price of the bond is $\delta$. The allocation will not be PO.
Available security: t=0: $-p_b$; t=1: 1 (θ1), 1 (θ2).
Let the desired holdings of this security by agents 1 and 2 be denoted by $Q^1$ and $Q^2$, respectively. The agents' maximization problems are:
Agent 1: $\max_{Q^1}\ (1 - Q^1 p_b) + \delta\{\pi(1 + Q^1) + (1 - \pi)(1 + Q^1)\}$
Agent 2: $\max_{Q^2}\ \ln(1 - Q^2 p_b) + \delta\{\pi\ln Q^2 + (1 - \pi)\ln(2 + Q^2)\}$
The F.O.C.'s are:
1. Agent 1: $-p_b + \delta = 0$, or $p_b = \delta$.
Agent 2: $\frac{-p_b}{1 - Q^2 p_b} + \delta\Big[\frac{\pi}{Q^2} + \frac{1 - \pi}{2 + Q^2}\Big] = 0$
2. We know also that $Q^1 + Q^2 = 1$ holds in competitive equilibrium, in addition to these equations being satisfied. Substituting $p_b = \delta$ into the second equation yields, after simplification,
$(1 + \delta)(Q^2)^2 + (1 + 2\pi\delta)Q^2 - 2\pi = 0$
$Q^2 = \frac{-(1 + 2\pi\delta) \pm \sqrt{(1 + 2\pi\delta)^2 + 8\pi(1 + \delta)}}{2(1 + \delta)}$, with $Q^1 = 1 - Q^2$.
3.
Suppose $\pi = .5$ and $\delta = \frac13$. Then
$Q^2 = \frac{-\frac43 \pm \sqrt{(\frac43)^2 + 4 + \frac43}}{2(\frac43)} = \frac{-\frac43 \pm \frac83}{\frac83}$
$Q^2 = \frac12$ (we want the positive root; otherwise agent 2 would have no consumption in the θ=1 state), and $Q^1 = \frac12$.

15.2. When markets are incomplete:
(i) MM does not hold: the value of the firm may be affected by the financial structure of the firm;
(ii) it may not be optimal for the firm's manager to issue the socially preferable set of financial instruments.

15.3. a. Write the problem of the risk-neutral agent:
$\max_{Q^2}\ 30 - p_Q Q^2 + \frac13\cdot\frac12(15 + Q^2) + \frac23\cdot\frac12(15)$
F.O.C.: $-p_Q + \frac16 = 0$, thus $p_Q = \frac16$ necessarily. This is generic: risk neutrality implies no curvature in the utility function. If the equilibrium price differed from $\frac16$, the agent would want to take infinite positive or negative positions! At that price, check that the demand for asset Q by agent 1 is zero:
$\max_{Q^1}\ 20 - \frac16 Q^1 + \frac12\cdot\frac13\ln(1 + Q^1) + \frac12\cdot\frac23\ln 5$
F.O.C.: $-\frac16 + \frac16\cdot\frac{1}{1 + Q^1} = 0 \Rightarrow Q^1 = 0$
Thus there is no risk sharing. The initial allocation is not Pareto optimal: the risk-averse agent remains exposed to significant risk at date 1. If the state probabilities were both $\frac12$, nothing would change, except that the equilibrium price would become $p_Q = \frac14$.
Initial utilities – Agent 1 : U1 (c 0 , ~1 (θ )) = 1 ln (4 ) + .4 ln (6 ) + .6 ln (1) c 2 = 1 (1.386 ) + .4(1.79 ) 2 = .693 + .716 = 1.409 U 2 (c 0 , ~1 (θ)) = 1 (6 ) + .4 ln (3) + .6 ln (4 ) c 2 = 3 + .439 + .832 = 4.271 52 Agent 2 : c. Firm’s output θ =1 2 t=1 -p θ=2 3 Agent 1’s Problem (presuming only this security is issued). 1 max ln (4 − pQ1 ) + .4 ln (6 + 2Q1 ) + .6 ln (1 + 3Q1 ) Q1 2 Agent 2’s Problem max Q2 1 (6 − pQ 2 ) + .4 ln(3 + 2Q 2 ) + .6 ln(4 + 3Q 2 ) 2 The F.O.C.’s are: Agent 1:  1   1  1 p    4 − pQ  = .4 6 + 2Q (2 ) + .6 1 + 3Q (3)      2 1  1  1     1 1 p = .4  3 + 2Q 2 2  Q 2 = 1 − Q1 Agent 2:   1 (2 ) + .6   4 + 3Q 2    (3)   These can be simplified to (i) p 1.6 3.6 = + 4 − pQ1 6 + 2Q1 1 + 3Q1 p= 1.6 3.6 + 5 − 2Q1 7 − 3Q1 (ii) The solution to this set equations via matlab is p = 1.74 Q1 = 1.245 Thus Q 2 = 1 − 1.245 = −.245 (short sale). Thus VF = 1.74 . 53 The post-trade allocations are t=0 Agent 1: Agent 2: θ =1 6 + 2(1.245) = 8.49 3 − 2(.245) = 2.51 t=1 4 − (1.74)(1.245) = 1.834 6 − (1.74)(−.245) = 6.426 θ=2 1 + 3(1.245) = 4.735 4 − 3(.245) = 3.265 Post-trade (ex ante) utilities: Agent 1 : 1 ln (1.834 ) + .4 ln (8.49 ) + .6 ln (4.735) 2 = .3032 + .4(2.14 ) + .6(1.555) = .3032 + .856 + .933 = 2.0922 1 (6.426) + .4 ln(2.51) + .6 ln(3.265) 2 = 3.213 + .368 + .70996 = 4.291 Agent 2 : Nearly all the benefit goes to agent 1. This is not entirely surprising as the security payoffs are more useful to him for consumption smoothing. For agent 2, the marginal utility of a unit of consumption in period 1 is less than the marginal utility of a unit in period 0. His consumption pattern across states in t=1 is also relatively smooth, and the security available for sale is not particularly useful in correcting the existing imbalance. Taken together, he is willing to “sell short” the security or, equivalently, to borrow against the future. 
The reverse is true for agent 1, especially on the issue of consumption smoothing across t=1 states: he has very little endowment in the more likely state, and the security pays relatively more in that particular state. Agent 1 thus wishes to save, and acquires "most" of the security. If the two states were of equal probability, agent 1 would have a bit less need to smooth, and his demand would be relatively smaller. We would expect p to be smaller in that case.

c. The Arrow-Debreu securities would offer greater opportunity for risk sharing among the agents without the presence of the firm. (We would expect VF to be less than in b.) However, each agent would most likely have a higher ex ante (post-trade) utility.

d. Let the foreign government issue 1 unit of the bond paying (2,2); let its price be p.

Agent problems:

Agent 1: max_{Q1} (1/2) ln(4 − pQ1) + .4 ln(6 + 2Q1) + .6 ln(1 + 2Q1)
Agent 2: max_{Q2} (1/2)(6 − pQ2) + .4 ln(3 + 2Q2) + .6 ln(4 + 2Q2)

where, in equilibrium, Q1 + Q2 = 1.

FOCs:

Agent 1: (1/2)·p/(4 − pQ1) = .4·2/(6 + 2Q1) + .6·2/(1 + 2Q1)
Agent 2: (1/2)·p = .4·2/(3 + 2Q2) + .6·2/(4 + 2Q2)

Substituting Q2 = 1 − Q1, these equations become

p/(4 − pQ1) = 1.6/(6 + 2Q1) + 2.4/(1 + 2Q1)
p = 1.6/(5 − 2Q1) + 2.4/(6 − 2Q1)

Solving these equations using matlab yields p = 1.502, Q1 = 1.4215, and thus Q2 = −.4215. The bond issue will generate p = 1.502.

Post-trade allocations:

           t=0                             t=1, θ=1                  t=1, θ=2
Agent 1:   4 − (1.502)(1.4215) = 1.865     6 + 2(1.4215) = 8.843     1 + 2(1.4215) = 3.843
Agent 2:   6 − (1.502)(−.4215) = 6.633     3 − 2(.4215) = 2.157      4 − 2(.4215) = 3.157

The utilities are:

Agent 1: (1/2) ln(1.865) + .4 ln(8.843) + .6 ln(3.843) = .3116 + .8719 + .8078 = 1.9913
Agent 2: (1/2)(6.633) + .4 ln(2.157) + .6 ln(3.157) = 3.3165 + .3075 + .690 = 4.314

Once again, both agents are better off after trade.
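The bond equilibrium of part d, and agent 1's post-trade utility, can be checked the same way (a sketch under the same bisection approach; the bracket is ours):

```python
import math

# Equilibrium of 15.4d: bond pays (2, 2); Q2 = 1 - Q1 in equilibrium.

def price(q1):
    # agent 2's FOC after substituting Q2 = 1 - Q1
    return 1.6 / (5 - 2 * q1) + 2.4 / (6 - 2 * q1)

def residual(q1):
    p = price(q1)
    return p / (4 - p * q1) - (1.6 / (6 + 2 * q1) + 2.4 / (1 + 2 * q1))

lo, hi = 1.0, 1.8            # residual changes sign on this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual(mid) < 0 else (lo, mid)

q1 = 0.5 * (lo + hi)
p = price(q1)
# agent 1's post-trade expected utility
u1 = (0.5 * math.log(4 - p * q1) + 0.4 * math.log(6 + 2 * q1)
      + 0.6 * math.log(1 + 2 * q1))
print(round(p, 3), round(q1, 4), round(u1, 4))   # near 1.502, 1.4215, 1.9913
```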
Most of the benefits still go to agent 1; however, the incremental benefit to him is smaller than in the prior situation, because the security is less well suited to his consumption smoothing needs.

e. When the bond is issued by the local government, one should specify (i) where the proceeds from the bond issue go, and (ii) how the t=1 payments on the bond contracts will be financed. In a simple closed economy, the most natural assumption is that the proceeds from the issue are redistributed to the agents in the economy, and similarly that the payments are financed from taxes levied on the same agents. If these redistributive payments and taxes are lump-sum transfers, they will not affect the decisions of individuals, nor the pricing of the security. But the final allocation will be modified, and will be closer to (equal to?) the initial endowments. In more general contexts, these payments may have distortionary effects.

15.5
a.
Agent 1: max (1/2) x¹1 + (1/2) x¹2
   s.t. q1 x¹1 + q2 x¹2 ≤ q1 e¹1 + q2 e¹2

Agent 2: max (1/2) ln x²1 + (1/2) ln x²2
   s.t. q1 x²1 + q2 x²2 ≤ q1 e²1 + q2 e²2

Substituting the budget constraint into the objective function:

Agent 1: max_{x¹2} (1/2)[(q1 e¹1 + q2 e¹2 − q2 x¹2)/q1] + (1/2) x¹2
Agent 2: max_{x²2} (1/2) ln[(q1 e²1 + q2 e²2 − q2 x²2)/q1] + (1/2) ln x²2

FOCs:

Agent 1: (1/2)(−q2/q1) + 1/2 = 0 ⇒ q1 = q2
Agent 2: (1/2) q2/(q1 e²1 + q2 e²2 − q2 x²2) = (1/2)·1/x²2,

which, taking into account that the two prices are equal, solves for

x²2 = x²1 = (e²1 + e²2)/2,

i.e., agent 2's consumption is fully stabilized.

b. Each agent owns one half of the firm, which can employ the two technologies simultaneously:

                t=0   t=1, θ=1   t=1, θ=2
Technology 1    −y    y          y
Technology 2    −y    3y         0

Let x be the portion of the input invested in technology 2. Since 2 units are invested in total, 2 − x is invested in technology 1. In total we have:

                      t=0     t=1, θ=1   t=1, θ=2
Invested in tech. 1   2 − x   2 − x      2 − x
Invested in tech. 2   x       3x         0
Each owner receives           1 + x      1 − (1/2)x

Considering that the two Arrow-Debreu prices necessarily remain equal, agent 2 solves

max_{x, x²2} (1/2) ln(2 + (1/2)x + e²1 + e²2 − x²2) + (1/2) ln x²2

It is clear that he wants x to be as high as possible, that is, x = 2. The FOC with respect to x²2 solves for

x²2 = x²1 = [2 + (1/2)x + e²1 + e²2]/2.

Again there is perfect consumption insurance for the risk-averse agent (subject to feasibility, that is, an interior solution).

Agent 1 solves

max_{x, x¹2} (1/2)(2 + (1/2)x + e¹1 + e¹2 − x¹2) + (1/2) x¹2

Clearly he also wants x to be as high as possible. So there is agreement between the two firm owners to invest everything (x = 2) in the second, more productive but riskier, technology. For agent 1, this is because he is risk neutral. Agent 2, on the other hand, is fully insured, thanks to complete markets; given that fact, he also prefers the more productive technology even though it is risky.

c. There cannot be any trade in the second period; agents will consume their endowments (plus their share of the firm's output) at that time.

Agent 1 solves

max_x (1/2)(1 + x + e¹1) + (1/2)(1 − (1/2)x + e¹2) = max_x 1 + (1/4)x + (e¹1 + e¹2)/2;

clearly he still wants to invest as much as possible in technology 2.

Agent 2 solves

max_x (1/2) ln(1 + x + e²1) + (1/2) ln(1 − (1/2)x + e²2),

which, after differentiation, yields

x = (1 + 2e²2 − e²1)/2.

That is, the (risk-averse) agent 2 in general wants to invest part of the input in the risk-free technology. There is thus disagreement among the firm owners as to the investment policy of the firm. This is a consequence of the incomplete-markets situation.

d. The two securities are now

          t=1, θ=1   t=1, θ=2
Bond      1          1
Tech. 2   1 + x      1 − (1/2)x

These two securities can replicate (1,0) and (0,1). To replicate (1,0), for instance, invest a in the bond and b in the firm, where a and b are such that

a + (1 + x) b = 1
a + (1 − (1/2)x) b = 0

This system implies b = 2/(3x) and a = 1 − 2(1 + x)/(3x).
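The replication system above, a + (1 + x)b = 1 and a + (1 − x/2)b = 0, can be checked numerically; a small Cramer's-rule sketch (the function name is ours):

```python
# Replicating the state claim (1, 0) from the bond (1, 1) and the firm claim
# (1 + x, 1 - x/2): solve the 2x2 linear system by Cramer's rule.

def replicate(x):
    # coefficient matrix [[1, 1 + x], [1, 1 - x/2]], right-hand side (1, 0)
    det = 1 * (1 - x / 2) - (1 + x) * 1          # = -3x/2
    a = (1 * (1 - x / 2) - (1 + x) * 0) / det    # bond holding
    b = (1 * 0 - 1 * 1) / det                    # firm holding, = 2/(3x)
    return a, b

a, b = replicate(2)   # with x = 2 the firm claim pays (3, 0): a = 0, b = 1/3
```

With x = 2 the answer matches the text: hold one third of the firm and no bond.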
Given that the markets are complete, both agents will agree to invest x = 2. Thus b = 1/3 and a = 0 replicate (1,0).

Chapter 16

16.1. The speculator's maximization problem is

max_f E U(c* + (pf − p̃) f)

Let us rewrite the program in the spirit of Chapter IV: W(f) = E{U(c* + (pf − p̃) f)}. The FOC can then be written

W'(f) = E{U'(c* + (pf − p̃) f)(pf − p̃)} = 0,

and W''(f) = E{U''(c* + (pf − p̃) f)(pf − p̃)²} < 0, so W is concave. This means that f > 0 iff

W'(0) = U'(c*)·E(pf − p̃) > 0.

From U' > 0 we have f > 0 iff E(pf − p̃) > 0. The two other cases follow immediately. From U'' < 0, […]

[…] > 0. Show that the demand for the risky asset is independent of the initial wealth. Explain intuitively why this is so.

5.8. Consider the savings problem of Section 4.4:

max_{s≥0} U(y0 − s) + δ E U(s x̃)

Assume U(c) = Ec − (1/2) χ σ²c. Show that if x̃A SSD x̃B (with E x̃A = E x̃B), then sA > sB.

Chapter 7

7.7. Show that maximizing the Sharpe ratio, (E(rp) − rf)/σp, yields the same tangency portfolio that was obtained in the text. Hint: formulate the Lagrangian and solve the problem.

7.8. Think of a typical investor selecting his preferred portfolio along the Capital Market Line. Imagine:
1. A 1% increase in both the risk-free rate and the expected rate of return on the market, so that the CML shifts in a parallel fashion.
2. An increase in the expected rate of return on the market without any change in the risk-free rate, so that the CML tilts upward.
In these two situations, describe how the optimal portfolio of the typical investor is modified.

7.9. Questions about the Markowitz model and the CAPM.
a. Explain why the efficient frontier must be concave.
b. Suppose that there are N risky assets in an economy, each being the single claim to a different firm (hence, there are N firms). Then suppose that some firms go bankrupt, i.e., their single stock disappears; how is the efficient frontier altered?
c.
How is the efficient frontier altered if the borrowing (risk-free) rate is higher than the lending rate? Draw a picture.
d. Suppose you believe that the CAPM holds and you notice that an asset (call it asset A) is above the Security Market Line. How can you take advantage of this situation? What will happen to stock A in the long run?

7.10. Consider the case without a riskless asset. Take any portfolio p. Show that the covariance vector of individual asset returns with portfolio p is linear in the vector of mean returns if and only if p is a frontier portfolio. Hint: to show the "if" part is straightforward. To show the converse, begin by assuming that Vw = ae + b1, where V is the variance-covariance matrix of returns, e is the vector of mean returns, and 1 is the vector of ones.

7.11. Show that the covariance of the return on the minimum variance portfolio (MVP) and that on any portfolio (not only those on the frontier) is always equal to the variance of the rate of return on the MVP. Hint: consider a 2-asset portfolio made of an arbitrary portfolio p and the MVP, with weights a and 1 − a. Show that a = 0 satisfies the variance-minimizing program; the conclusion follows.

7.12. Find the frontier portfolio that has a variance identical to that of its zero-covariance portfolio. (That is, determine its weights.)

7.13. Let there be two risky securities, a and b. Security a has an expected return of 13% and a volatility of 30%. Security b has an expected return of 26% and a volatility of 60%. The two securities are uncorrelated.
a. Compute the portfolio on the efficient frontier that is tangent to a line from zero, the zero-beta portfolio associated with that portfolio, and the minimum-variance portfolio.
b. Assume a risk-free rate of 5%. Compute the portfolio of risky assets that investors hold. Does this portfolio differ from the tangency portfolio computed under a)? If yes, why?

7.14
a. Given risk-free borrowing and lending, efficient portfolios have no unsystematic risk.
True or false?
b. If the agents in the economy have different utility functions, the market portfolio is not efficient. True or false?
c. The CAPM makes no provision for investor preference for skewness. True or false?

Chapter 8

8.9. Consider an exchange economy with two states. There are two agents with the same utility function U(c) = ln(c). State 1 has a probability of π. The agents are endowed with units of the consumption good in each state; their endowments across themselves and across states are not necessarily equal. The total endowment of the consumption good is e1 in state 1 and e2 in state 2. Arrow-Debreu state prices are denoted by q1 and q2.
a. Write down the agents' optimization problems and show that

q1/q2 = [π/(1 − π)]·(y2/y1)

Assuming that q1 + q2 = 1, solve for the state prices. Hint: recall the simple algebraic fact that a/b = c/d = (a + c)/(b + d).
b. Suppose there are two types of assets in the economy. A riskless asset (asset 1) pays off 1 (unit of the consumption good) in each state and has a market price of P1 = 1. The risky asset (asset 2) pays off 0.5 in state 1 and 2 in state 2. The aggregate supplies of the two assets are Q1 and Q2. If the two states are equally likely, show that the price of the risky asset is

P2 = (5Q1 + 4Q2)/(4Q1 + 5Q2)

Hint: note that in this case the state-contingent consumptions of the agents are assured, in equilibrium, through their holdings of the two assets. To solve the problem you will need the results of part a); there is no need to set up another optimization problem.

Chapter 10

10.5. A-D pricing. Consider two 5-year coupon bonds with different coupon rates which are simultaneously traded:

          Price   Coupon   Maturity value
Bond 1    1300    8%       1000
Bond 2    1200    6.5%     1000

For simplicity, assume that interest payments are made once per year. What is the price of a 5-year A-D security, when we assume the only relevant state is the date?

10.6. You anticipate receiving the following cash flow, which you would like to invest risk free.
t=0    t=1    t=2       t=3
—      $1m    $1.25m    —

The period denotes one year. Risk-free discount bonds of various maturities are actively traded, and the following price data are reported:

         t=0    t=1    t=2    t=3
1-yr     −950   1000
2-yr     −880          1000
3-yr     −780                 1000

a. Compute the term structure implied by these bond prices.
b. How much money can you create, risk free, at t = 3 from the above cash flow using the three mentioned instruments?
c. Show the transactions whereby you guarantee (lock in) its creation at t = 3.

10.7. Consider a world with two states of nature. You have the following term structure of interest rates over two periods:

r1¹ = 11.1111, r2¹ = 25.0000, r1² = 13.2277, r2² = 21.2678

where the subscript denotes the state at the beginning of period 1, and the superscript denotes the period. For instance, 1/(1 + rj²)² is the price, at state j at the beginning of period 1, of a riskless asset paying 1 two periods later. Construct the stationary (same every period) Arrow-Debreu state-price matrix.

Chapter 13

13.6. Assume that the following two-factor model describes returns:

ri = ai + bi1 F1 + bi2 F2 + ei

Assume that the following three portfolios are observed:

Portfolio   Expected return   bi1   bi2
A           12.0              1     0.5
B           13.4              3     0.2
D           12.0              3     −0.5

a. Find the equation of the plane that must describe equilibrium returns.
b. If r̃M − rf = 4, find the values of the following variables that would make the expected returns consistent with equilibrium as determined by the CAPM:
(i) rf
(ii) βpi, the market beta of the pure portfolio associated with factor i.

13.7. Based on a single-factor APT model, the risk premium on a portfolio with unit sensitivity is 8% (λ1 = 8%). The risk-free rate is 4%. You have uncovered three well-diversified portfolios with the following characteristics:

Portfolio   Factor sensitivity   Expected return
A           .80                  10.4%
B           1.00                 10.0%
C           1.20                 13.6%

Which of these three portfolios is not in line with the APT?

13.8. A main lesson of the CAPM is that "diversifiable risk is not priced".
Is this important result supported by the various asset pricing theories reviewed in this book? Discuss.

As supplementary material, we describe below how the APT could be used to construct an arbitrage portfolio to profit from security mispricing. The context is that of equity portfolios. The usefulness of such an approach will, of course, depend upon "getting the right set of factors", so that the attendant regressions have a high R².

13.9 An APT Exercise in Practice
a. Step 1: select the factors; suppose there are J of them.
b. Step 2: for a large number of firms N (big enough so that, when they are combined in an approximately equally weighted portfolio, the unique risks diversify away to approximately zero), undertake the following time-series regressions on historical data:

Firm 1: IBM   r̃IBM = α̂IBM + b̂IBM,1 F̃1 + ... + b̂IBM,J F̃J + ẽIBM
Firm 2: BP    r̃BP = α̂BP + b̂BP,1 F̃1 + ... + b̂BP,J F̃J + ẽBP
...
Firm N: GE    r̃GE = α̂GE + b̂GE,1 F̃1 + ... + b̂GE,J F̃J + ẽGE

The return on each stock is regressed on the same J factors; what differs across firms is the factor sensitivities b̂IBM,1, ..., b̂GE,J. Remember that

b̂BP,j = cov(r̃BP, F̃j^HIST)/σ²(F̃j^HIST)

(We want a high R².)
c. Step 3: first assemble the following data set:

Firm 1: IBM   AR_IBM, b̂IBM,1, b̂IBM,2, ..., b̂IBM,J
Firm 2: BP    AR_BP, b̂BP,1, b̂BP,2, ..., b̂BP,J
...
Firm N: GE    AR_GE, b̂GE,1, b̂GE,2, ..., b̂GE,J

AR_IBM, AR_BP, etc. represent the average returns on the N stocks over the historical period chosen for the regression.
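The alpha computation of Step 4 below can be made concrete with the single-factor numbers of exercise 13.7 (rf = 4%, λ1 = 8%); the α's immediately flag the mispriced portfolio (a sketch; the data are from the exercise, the code is ours):

```python
rf, lam = 4.0, 8.0                       # percent, from exercise 13.7
portfolios = {"A": (0.80, 10.4), "B": (1.00, 10.0), "C": (1.20, 13.6)}

alphas = {}
for name, (beta, avg_ret) in portfolios.items():
    predicted = rf + lam * beta          # return justified by factor intensity
    alphas[name] = avg_ret - predicted   # alpha = actual - predicted

print(alphas)   # A and C sit on the APT line; B's alpha of about -2% flags it
```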
Then, regress the average returns on the factor sensitivities (we have N data points, corresponding to the N firms, and the sensitivities vary across the N firms):

AR_i = r̂f + λ̂1 b̂i1 + λ̂2 b̂i2 + ... + λ̂J b̂iJ

and we obtain estimates λ̂1, ..., λ̂J. In the regression sense, this determines the "best" linear relationship between the factor sensitivities and the past average returns for this sample of N stocks. This is a "cross-sectional" regression. (We want a high R².)

d. Step 4: compare, for the N assets, their actually observed average returns with what should have been observed given their factor sensitivities; that is, compute the αi's:

αIBM = AR_IBM − [r̂f + λ̂1 b̂IBM,1 + ... + λ̂J b̂IBM,J]

where AR_IBM is IBM's actually observed historical average return, and the bracketed term is the return predicted by its factor intensities b̂IBM,1, ..., b̂IBM,J according to the regression in Step 3. Similarly,

αGE = AR_GE − [r̂f + λ̂1 b̂GE,1 + ... + λ̂J b̂GE,J]

Note that αi > 0 implies the average returns exceeded what would be justified by the factor intensities: the stock is undervalued. Conversely, αi < 0 implies the average returns fell short of what would be justified by the factor intensities: the stock is overvalued.

e. Step 5: form an arbitrage portfolio of the N stocks: if αi > 0, assume a long position; if αi < 0, assume a short position. Since N is large, ep ≈ 0, so ignore "unique" risks.

Remarks:
1. In Step 4 we could substitute independent (otherwise obtained) estimates of the AR_i's, and not use the historical averages.
2. Notice that nowhere do we have to forecast future values of the factors.
3. In forming the arbitrage portfolio we are implicitly assuming that the over- and underpricing we believe exists will be eliminated in the future, to our advantage!

Chapter 15

15.6. Consider two agents in the context of a pure exchange economy in which there are two dates (t = 0, 1) and two states at t = 1. The endowments of the two agents are different (e1 ≠ e2).
Both agents have the same utility function, U(c0, c1(θ)) = ln c0 + E ln c1(θ), but they differ in their beliefs. In particular, agent 1 assigns probability 3/4 to state 1, while agent 2 assigns state 1 a probability of 1/4. The agents trade Arrow-Debreu claims, and the supply of each claim is 1. Neither agent receives any endowment at t = 1.
a. Derive the equilibrium state-claim prices. How are they related to the relative endowments of the agents? How are the relative demands for each security related to the agents' subjective beliefs?
b. Suppose that, rather than trading state claims, each agent is given ai units of a riskless security paying one unit in each future state. Their t=0 endowments are otherwise unaffected. Will there be trade? Can you think of circumstances in which no trade will occur?
c. Now suppose that a risky asset is also introduced into this economy. What will be the effects?
d. Rather than introducing a risky asset, suppose an entrepreneur invents a technology that is able to convert x units of the riskless asset into x units each of (1,0) and (0,1). How are x and the value of these newly created securities related? Could the entrepreneur extract a payment for the technology? What considerations would influence the magnitude of this payment?

Intermediate Financial Theory
Danthine and Donaldson

Chapter 1

1.6. Consider a two-agent, two-good economy. Assume well-behaved utility functions (in particular, indifference curves do not exhibit flat spots). At a competitive equilibrium, both agents maximize their utility given their budget constraints. This leads each of them to select a bundle of goods corresponding to a point of tangency between one of his or her indifference curves and the price line. Tangency signifies that the slope of the indifference curve and the slope of the budget line (the price ratio) are the same. But both agents face the same market prices. The slopes of their indifference curves are thus identical at their respective optimal points.
Now consider the second requirement of a competitive equilibrium: that markets clear. This means that the respective optimal choices of the two agents correspond to the same point of the Edgeworth-Bowley box. Putting the two elements of this discussion together, we have that a competitive equilibrium is a point in the box corresponding to a feasible allocation where both agents' indifference curves are tangent to the same price line, have the same slope, and, consequently, are tangent to one another. Since the contract curve is the locus of all such points in the box at which the two agents' indifference curves are tangent, the competitive equilibrium is on the contract curve. Of course, we could have obtained this result simply by invoking the First Welfare Theorem.

1.7. The indifference curves of agent 2 are non-convex. Point A is a Pareto optimum: the indifference curves of the two agents are tangent there. This PO cannot be obtained as a competitive equilibrium, however. Draw a price line tangent to I1 at point A. It is also tangent to I2, but "in the wrong direction": it corresponds to a local minimum for agent 2, who, at those prices, can reach higher utility levels. The difficulty is generic when indifference curves have such a shape. The geometry is inescapable, underlining the importance of the assumption that preferences should be convex.

Chapter 4

4.9. Certainty equivalent. The problem to be solved is: find Y such that

(1/2) U(Y + 1000) + (1/2) U(Y − 1000) ≡ U(Y − 500).

With U(c) = −1/c, this reads

1/(Y + 1000) + 1/(Y − 1000) = 2/(Y − 500)
[(Y + 1000) + (Y − 1000)]/[(Y + 1000)(Y − 1000)] = 2/(Y − 500)
2Y/(Y² − 1000²) = 2/(Y − 500)
Y² − 1000² = Y² − 500Y
Y = 2000

The logarithmic utility function is handled the same way; the answer is Y = 1250.

4.10. Risk premium. The problem to be solved is: find P such that

(1/2)[ln(Y + 1000) + ln(Y − 1000)] ≡ ln(Y − P)
Y − P = exp{(1/2)[ln(Y + 1000) + ln(Y − 1000)]} = (Y² − 1000²)^(1/2)
P = Y − (Y² − 1000²)^(1/2)

where P is the insurance premium.
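Both closed forms can be checked quickly (a sketch, assuming, as the algebra above does, U(c) = −1/c in the first case of 4.9 and U(c) = ln c in the second and in 4.10):

```python
import math

# Certainty equivalents of 4.9: for U(c) = -1/c the condition reduces to
# 500 Y = 1000^2; for U(c) = ln c it reduces to 1000 Y = 500^2 + 1000^2.
y_reciprocal = 1000 ** 2 / 500
y_log = (500 ** 2 + 1000 ** 2) / 1000

# Risk premium of 4.10 under log utility: P = Y - sqrt(Y^2 - 1000^2).
def premium(y):
    return y - math.sqrt(y ** 2 - 1000 ** 2)

print(y_reciprocal, y_log, round(premium(10_000), 2))   # -> 2000.0 1250.0 50.13
```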
P(Y = 10000) = 50.13
P(Y = 100000) = 5.00

The utility function is DARA, so the outcome (a smaller premium associated with higher wealth) was expected.

4.11.
       Case 1     Case 2     Case 3
       σa > σb    σa = σb    σa < σb
       Ea = Eb    Ea > Eb    Ea < Eb

Case 1: cannot conclude with FSD, but B SSD A.
Case 2: A FSD B, A SSD B.
Case 3: cannot conclude (general case).

4.12.
a. U(Y − CE) = EU(Y − L(θ)):

(10,000 + CE)^(−.2)/(−.2) = .10 (10,000 − 1,000)^(−.2)/(−.2) + .20 (10,000 − 2,000)^(−.2)/(−.2)
+ .35 (10,000 − 3,000)^(−.2)/(−.2) + .20 (10,000 − 5,000)^(−.2)/(−.2) + .15 (10,000 − 6,000)^(−.2)/(−.2)

(10,000 + CE)^(−.2) = .173846
(10,000 + CE)^(.2) = 1/.173846 = 5.752
CE = −3702.2

EL̃ = −[.1(1000) + .2(2000) + .35(3000) + .2(5000) + .15(6000)] = −3450

CE(y, z̃) = E(z̃) − Π(y, z̃)
Π(y, z̃) = 252.2

If the agent were risk neutral, the CE would be −3450.

b. If U'(y) > 0 and U''(y) > 0, the agent loves risk. The premium would be negative here.

4.13. Current wealth: Y, subject to a loss L with probability π:

          prob. π    prob. 1 − π
Wealth    Y − L      Y

An insurance policy pays h in the loss state; its premium is ph. Certainly p ≤ 1.

a. The agent solves

max_h π ln(Y − ph − L + h) + (1 − π) ln(Y − ph)

The FOC is

π(1 − p)/(Y − L + h(1 − p)) = p(1 − π)/(Y − ph),

which solves for

h = Y(π/p) − [(1 − π)/(1 − p)](Y − L)

Note: if p = 0, h = ∞; if π = 1, ph = Y.

b. The expected gain is ph − πL.

c. ph = p{Y(π/p) − [(1 − π)/(1 − p)](Y − L)} = πL ⇒ p = π

d. h = Y(π/p) − [(1 − π)/(1 − p)](Y − L); with p = π,

h = Y(π/π) − [(1 − π)/(1 − π)](Y − L) = L.

The agent will perfectly insure. None; this is true for all risk-averse individuals.

4.14. Two lotteries: x̃ with probabilities π(x̃), and z̃ with probabilities π(z̃).
a.
E x̃ = −10(.1) + 5(.4) + 10(.3) + 12(.2) = −1 + 2 + 3 + 2.4 = 6.4
E z̃ = 2(.2) + 3(.5) + 4(.2) + 30(.1) = .4 + 1.5 + .8 + 3 = 5.7

σ²x = .1(−10 − 6.4)² + .4(5 − 6.4)² + .3(10 − 6.4)² + .2(12 − 6.4)²
    = 26.9 + .78 + 3.9 + 6.27 = 37.85, so σx = 6.15

σ²z = .2(2 − 5.7)² + .5(3 − 5.7)² + .2(4 − 5.7)² + .1(30 − 5.7)²
    = 2.74 + 3.65 + .58 + 59.04 = 66.01, so σz = 8.12

There is mean-variance dominance in favor of x̃: E x̃ > E z̃ and σx < σz. The latter is due to z̃'s large outlying payment of 30.

b. Second-order stochastic dominance:

r     Fx(r)   ∫Fx    Fz(r)   ∫Fz    ∫[Fx − Fz]
−10   .1      .1     0       0      .1
−9    .1      .2     0       0      .2
−8    .1      .3     0       0      .3
−7    .1      .4     0       0      .4
−6    .1      .5     0       0      .5
−5    .1      .6     0       0      .6
−4    .1      .7     0       0      .7
−3    .1      .8     0       0      .8
−2    .1      .9     0       0      .9
−1    .1      1.0    0       0      1.0
0     .1      1.1    0       0      1.1
1     .1      1.2    0       0      1.2
2     .1      1.3    .2      .2     1.1
3     .1      1.4    .7      .9     .5
4     .1      1.5    .9      1.8    −.3

Since the final column is not of uniform sign, we cannot make any claim about relative second-order SD.

4.15. Initial wealth Y; the lottery pays G with probability π and B with probability 1 − π.

a. If he already owns the lottery, the minimum selling price Ps must satisfy

U(Y + Ps) = π U(Y + G) + (1 − π) U(Y + B),

or Ps = U⁻¹(π U(Y + G) + (1 − π) U(Y + B)) − Y.

b. If he does not own the lottery, the maximum he would be willing to pay, Pb, must satisfy

U(Y) = π U(Y − Pb + G) + (1 − π) U(Y − Pb + B).

c. Assume now that π = 1/2, G = 26, B = 6, Y = 10, and U(Y) = Y^(1/2). Find Ps and Pb.

Pb: U(10) = 10^(1/2) = (1/2)(36 − Pb)^(1/2) + (1/2)(16 − Pb)^(1/2)
6.32 = (36 − Pb)^(1/2) + (16 − Pb)^(1/2)
Pb ≈ 13.5

Ps satisfies: (10 + Ps)^(1/2) = (1/2)(36)^(1/2) + (1/2)(16)^(1/2) = (1/2)(6) + (1/2)(4) = 5
10 + Ps = 25
Ps = 15

Clearly, Pb < Ps. If the agent already owned the asset, his minimum wealth would be 10 + 6 = 16; if he is considering buying, his wealth is 10. In the former case, he is less risk averse and the lottery is worth more to him. If the agent is risk neutral, Ps = Pb = πG + (1 − π)B.
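The two prices in 4.15c can be checked numerically: Ps has a closed form by inverting the square-root utility, while Pb is found by bisection (a sketch; the bisection setup is ours):

```python
import math

# Selling vs. buying price of the lottery: U(c) = sqrt(c), pi = 1/2,
# G = 26, B = 6, Y = 10.
def u(c):
    return math.sqrt(c)

Y, G, B = 10, 26, 6
eu_own = 0.5 * u(Y + G) + 0.5 * u(Y + B)   # expected utility of keeping it
Ps = eu_own ** 2 - Y                       # invert U: c = u^2

# Pb solves U(Y) = .5 U(Y - Pb + G) + .5 U(Y - Pb + B); the gap is
# decreasing in Pb, so bisect on [0, Y + B].
lo, hi = 0.0, Y + B
for _ in range(60):
    mid = 0.5 * (lo + hi)
    gap = 0.5 * u(Y - mid + G) + 0.5 * u(Y - mid + B) - u(Y)
    lo, hi = (mid, hi) if gap > 0 else (lo, mid)
Pb = 0.5 * (lo + hi)

print(Ps, round(Pb, 2))   # -> 15.0 13.5, confirming Pb < Ps
```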
To check this, assume U(x) = x:

Ps: U(Y + Ps) = π U(Y + G) + (1 − π) U(Y + B)
Y + Ps = π(Y + G) + (1 − π)(Y + B) = Y + πG + (1 − π)B
Ps = πG + (1 − π)B

Pb: U(Y) = π U(Y − Pb + G) + (1 − π) U(Y − Pb + B)
Y = π(Y − Pb + G) + (1 − π)(Y − Pb + B)
Pb = πG + (1 − π)B

4.16. Mean-variance: E x1 = 6.75, σ1² = 15.22; E x2 = 5.37, σ2² = 4.25; no dominance.

FSD: no dominance, as the graph shows: the cumulative distribution functions of investments 1 and 2 cross. [Figure: the two CDFs, with levels 1/4, 1/3, 1/2, 2/3, 3/4, 1 over payoffs 1 to 12.]

SSD:

x    F1(x)   ∫F1    F2(x)   ∫F2    ∫[F1 − F2]
0    0       0      0       0      0
1    .25     .25    0       0      .25
2    .25     .50    0       0      .50
3    .25     .75    0       0      .75
4    .25     1      .33     .33    .67
5    .25     1.25   .33     .66    .59
6    .25     1.50   .66     1.32   .18
7    .50     2      .66     1.98   .02
8    .50     2.50   1       2.98   −.48

There is no SSD, as the sign of ∫[F1 − F2] is not uniform.

Using expected utility: generally speaking, one would expect the more risk-averse individuals to prefer investment 2, while less risk-averse agents would tend to favor investment 1.

Chapter 5

5.6.
a.
Scenario   (z1, z2)   π     (c1, c2)   E(c)   Var(c)
1          (20, 80)   1/5   (35, 65)   59     144
2          (38, 98)   1/2   (44, 74)   59     225
3          (30, 90)   1/3   (40, 70)   60     200

b.
• Mean-variance analysis: 1 is preferred to 2 (same mean, but lower variance); 3 is preferred to 2 (higher mean and lower variance); 1 and 3 cannot be ranked with standard mean-variance analysis.
• Stochastic dominance: no investment opportunity FSD any of the others. Investment 1 SSD investment 2 (mean-preserving spread).
• Expected utility (with the assumed U): 3 > 1 > 2.

c. Scenario 1:

EU(a) = (1/5)[−exp(−A[50 − .6a·50])] + (4/5)[−exp(−A[50 + .6a·50])]

∂EU(a)/∂a = (1/5)(.6A·50)[−exp(−A[50 − .6a·50])] − (4/5)(.6A·50)[−exp(−A[50 + .6a·50])] = 0

a = −ln(1/4)/(1.2·A·50) = .5

Scenarios 2 and 3 can be solved along the same lines.

5.7.
a. max_{x1, x2} EU(x1 z̃1 + x2 z̃2) s.t.
p1 x1 + p2 x2 ≤ Y0.

Since (maintained assumption) U'( ) > 0, the budget constraint binds: p1 x1 + p2 x2 = Y0, and x1 = (Y0 − p2 x2)/p1. The problem may thus be written

max_{x2} EU([(Y0 − p2 x2)/p1] z1 + x2 z̃2)

The necessary and sufficient FOC (under the customary assumptions) is

EU'([(Y0 − p2 x2)/p1] z1 + x2 z̃2)·(z̃2 − (p2/p1) z1) = 0

b. Suppose U(Y) = a − b e^(−AY); the above equation becomes

Ab E{exp(−A[((Y0 − p2 x2)/p1) z1 + x2 z̃2])·(z̃2 − (p2/p1) z1)} = 0

or, equivalently,

Ab E{exp(−A(Y0/p1) z1)·exp(−A x2 [z̃2 − (p2/p1) z1])·(z̃2 − (p2/p1) z1)} = 0

The first term, which contains Y0, is non-random (z1 is the return on the riskless asset) and can be eliminated from the equation, so the demand x2 does not depend on Y0. The intuition for this result lies in the fact that the stated utility is CARA: the rate of absolute risk aversion is constant and independent of initial wealth.

5.8. The problem with linear mean-variance utility is

max_s (y0 − s) + δ[s E x̃ − (1/2) χ s² σ²x]

FOC: −1 + δ E x̃ − δ χ s σ²x = 0, or s = (δ E x̃ − 1)/(δ χ σ²x)

Clearly s is inversely related to σ²x. For a given E x̃ (x̃B being a mean-preserving spread of x̃A), sA > sB.

Chapter 7

7.7. A ray in R² is defined by y − y1 = n(x − x1). Rewrite this as y = n(x − x1) + y1 and apply it to the problem:

E(rP) = [(E(rP) − rf)/σP](σP − 0) + rf

This can be maximized with respect to the Sharpe ratio; of course, we get σP = 0, i.e., the slope is infinite. Now constrain x to be

x = σP = [(C/D)(E(rP) − A/C)² + 1/C]^(1/2)

Inserting this back leads to

E(rP) = θ[(C/D)(E(rP) − A/C)² + 1/C]^(1/2) + rf

where θ is the Sharpe ratio. From this it is easy to solve for σP and the Sharpe ratio. (A, B, C, and D are the notorious constants defined in Chapter 6.)

7.8. 1) Not possible to say without further knowledge of preferences.
The reason is that, with both the risk-free and risky returns higher, there is what is called a "wealth effect": with a given amount of initial wealth to invest, end-of-period wealth is unambiguously expected to be higher, so the investor is "richer". But we have made no assumption as to whether at a higher wealth level he or she will be more or less risk averse. The CAPM does not require specifying IARA, CARA, or DARA utility functions (although we know that we could build the model on a quadratic (IARA) utility function, this is not the only route).

2) Here it is even more complicated: the efficient frontier is higher (a wealth effect), but it is also steeper (a substitution effect). Everything else equal, the risky portfolio is more attractive. It is more likely that an investor will select a riskier optimal portfolio in this situation, but one cannot rule out that the wealth effect dominates and that, at the higher expected end-of-period wealth, the investor decides to invest more conservatively.

7.9. Questions about the Markowitz model and the CAPM.
a. If it were not, one could build a portfolio composed of two efficient portfolios that would not itself be efficient. Yet the new portfolio's expected return would be higher than that of the frontier portfolio with the same standard deviation, in violation of the efficiency property of frontier portfolios.
b. With a lower number of risky assets, one expects that the new frontier will be contained inside the previous one, as diversification opportunities are reduced.
c. The efficient frontier is then made of three parts, including a portion of the risky-asset frontier. Note that borrowers and lenders do not take positions in the same "market portfolio".
d. Asset A is a good buy: it pays on average a return that exceeds the return justified by its beta. If the past is a good indication of the pattern of future returns, buying asset A offers the promise of an extra return compared to what would be fair according to the CAPM.
One would expect that in the longer run many investors will try to exploit such an opportunity and that, as a consequence, the price of asset A will increase, with its expected return decreasing back to the SML level.

7.10. The "if" part has been shown in Chapter 6. "Only if": start with Vw = ae + b1 and premultiply by V⁻¹:

wP = aV⁻¹e + bV⁻¹1 = aA(V⁻¹e/A) + bC(V⁻¹1/C)

where V⁻¹e/A and V⁻¹1/C are frontier portfolios with means B/A and A/C, respectively. Since aA + bC = 1 (why?), the result follows.

7.11. We build a portfolio of p and the MVP with minimum variance. The weights a and 1 − a must solve

min_a {a² σ²P + 2a(1 − a) cov(rP, rMVP) + (1 − a)² σ²MVP}

The FOC is

2a σ²P + 2(1 − 2a) cov(rP, rMVP) − 2(1 − a) σ²MVP = 0

Since the MVP is the minimum variance portfolio, a = 0 must satisfy this condition, which then simplifies to cov(rP, rMVP) = σ²MVP.

7.12. For any portfolio on the frontier we have

σ²(r̃p) = (C/D)[E(rP) − A/C]² + 1/C

where A, B, C, and D are our notorious numbers. Additionally, we know that

E(r̃zcp) = A/C − (D/C²)/[E(rp) − A/C]

Since the zero-covariance portfolio is also a frontier portfolio, we have

σ²(r̃zcp) = (C/D)[(D/C²)/(E(rp) − A/C)]² + 1/C

Now, we need σ²(r̃p) = σ²(r̃zcp). This leads to

[E(rP) − A/C]² = [(D/C²)/(E(rp) − A/C)]²

i.e., [E(rP) − A/C]² = D/C², so that

E(rP) = A/C + √D/C

Given E(rP), we can use (6.15) from Chapter 6 to find the portfolio weights.

7.13.
a. As shown in Chapter 5, we find the slope of the mean-variance frontier and utilize it in the equation of the line passing through the origin. If we call the portfolio we are seeking "p", then it follows that

E(rP) = (D + A²)/(CA)

where A, B, C, and D are our notorious numbers. The zero-covariance portfolio of p is such that E(rzcp) = 0.
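The frontier constants and the two tangency expected returns of 7.13 can be computed directly from the data (two uncorrelated assets with e = (.13, .26) and σ = (.30, .60), so V is diagonal); a sketch we add for checking:

```python
# Frontier constants A = 1'V^{-1}e, B = e'V^{-1}e, C = 1'V^{-1}1, D = BC - A^2
# for the two-asset data of exercise 7.13 (V diagonal, so V^{-1} is trivial).
v_inv = (1 / 0.30 ** 2, 1 / 0.60 ** 2)
e = (0.13, 0.26)

A = sum(vi * ei for vi, ei in zip(v_inv, e))
B = sum(vi * ei * ei for vi, ei in zip(v_inv, e))
C = sum(v_inv)
D = B * C - A * A

# E(r_p) such that the zero-covariance portfolio has mean rf:
# E(r_p) = A/C + (D/C^2) / (A/C - rf)
tangent_from_zero = A / C + (D / C ** 2) / (A / C - 0.0)   # part a: rf = 0
tangent_at_5pct = A / C + (D / C ** 2) / (A / C - 0.05)    # part b: rf = 5%
print(round(tangent_from_zero, 4), round(tangent_at_5pct, 4))  # -> 0.1733 0.1815
```

Note that with rf = 0 the formula collapses to B/A, the textbook tangency-from-the-origin return.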
$$E(r_{zcp}) = \frac{A}{C} - \frac{D/C^2}{E(r_p) - A/C} = 0 \;\Leftrightarrow\; E(r_p) = \frac{D}{CA} + \frac{A}{C} = 0.1733.$$

b. We need to compare the weights of portfolio p with $w_T = \frac{1}{A - r_f C}\, V^{-1}(e - r_f \iota)$. The two portfolios should differ because we are comparing the tangency points of two different lines on the same mean-variance frontier. Note also that the intercepts are different:
$$E(r_{zcp}) = 0.05 \;\Leftrightarrow\; E(r_p) = \frac{D/C^2}{A/C - 0.05} + \frac{A}{C} = 0.1815.$$

7.14. a. True.

b. False: the CAPM holds even when investors have different degrees of risk aversion. It does, however, require that they all be mean-variance optimizers.

c. True. Only the mean and variance of a portfolio matter; such investors have no preference over the third moment of the return distribution. A portfolio including derivative instruments may exhibit a highly skewed return distribution. For investors with non-quadratic utility, the prescriptions of the CAPM should, in that context, be severely questioned.

Chapter 8

8.9. a. The optimization problem of the first agent is
$$\max EU(c) \quad \text{s.t.} \quad q_1 c_{11} + q_2 c_{12} = q_1 e_{11} + q_2 e_{12}.$$
The FOCs are
$$\frac{\pi}{c_{11}} = \lambda q_1, \qquad \frac{1-\pi}{c_{12}} = \lambda q_2, \qquad q_1 c_{11} + q_2 c_{12} = q_1 e_{11} + q_2 e_{12},$$
where $\lambda$ is the Lagrange multiplier of the problem. Clearly, if we define $c_{11} = y_1$, $c_{12} = y_2$, we have
$$\frac{q_1}{q_2} = \frac{\pi\, y_2}{(1-\pi)\, y_1}.$$
The A-D prices can be derived as follows:
$$\frac{q_1}{q_2} = \frac{\pi\, c_{12}}{(1-\pi)\, c_{11}} = \frac{\pi\, c_{22}}{(1-\pi)\, c_{21}} = \frac{\pi\, (c_{12}+c_{22})}{(1-\pi)(c_{11}+c_{21})} = \frac{\pi\, e_2}{(1-\pi)\, e_1}.$$
Using $1 = q_1 + q_2$ and after some manipulation we get
$$q_1 = \frac{\pi e_2}{(1-\pi)e_1 + \pi e_2}, \qquad q_2 = \frac{(1-\pi)e_1}{(1-\pi)e_1 + \pi e_2}.$$

b. If $\pi = (1-\pi)$, the A-D prices are
$$q_1 = \frac{e_2}{e_1+e_2}, \qquad q_2 = \frac{e_1}{e_1+e_2}.$$
The price of the risky asset is
$$P_2 = \tfrac{1}{2}\, q_1 + 2\, q_2.$$
Now we insert the A-D prices, and since the endowments are
$$e_1 = Q_1 + \tfrac{1}{2} Q_2, \qquad e_2 = Q_1 + 2 Q_2,$$
the pricing formula
$$P_2 = \frac{5Q_1 + 4Q_2}{4Q_1 + 5Q_2}$$
follows.

Chapter 10

10.5. The dividends (computed on the face value, 1000) are $d_1 = 80$, $d_2 = 65$.
The ratio is $d_1/d_2$: buying 1 unit of bond 1 and selling $d_1/d_2$ units of bond 2, we can build a 5-yr zero-coupon bond with the following cash flows:

|  | Price | Maturity value |
| --- | --- | --- |
| Bond 1 (long 1 unit) | −1300 | +1080 |
| Bond 2 (short $d_1/d_2$ units) | +1200·$d_1/d_2$ | −1065·$d_1/d_2$ |
| Total | 176.92 | −230.77 |

The price of the 5-yr A-D security is then 176.92/230.77 = 23/30.

10.6. a. The spot rates solve:
$$950 = \frac{1000}{1+r_1} \;\Rightarrow\; r_1 = .05263$$
$$880 = \frac{1000}{(1+r_2)^2} \;\Rightarrow\; 1+r_2 = \left(\frac{1000}{880}\right)^{1/2} \;\Rightarrow\; r_2 = .0660$$
$$780 = \frac{1000}{(1+r_3)^3} \;\Rightarrow\; 1+r_3 = \left(\frac{1000}{780}\right)^{1/3} \;\Rightarrow\; r_3 = .0863$$

b. We need the forward rates $_1f_2$ and $_2f_1$:
$$(1+r_1)(1+{}_1f_2)^2 = (1+r_3)^3 \;\Rightarrow\; (1+{}_1f_2)^2 = \frac{(1.0863)^3}{1.05263} = 1.2177 \;\Rightarrow\; {}_1f_2 = .1035$$
$$(1+r_2)^2(1+{}_2f_1) = (1+r_3)^3 \;\Rightarrow\; 1+{}_2f_1 = \frac{(1.0863)^3}{(1.0660)^2} = 1.1281 \;\Rightarrow\; {}_2f_1 = .1281$$
The cash flow at $T=3$ is then
$$CF_{T=3} = (1.25M)(1+{}_2f_1) + 1M(1+{}_1f_2)^2 = (1.25M)(1.1281) + 1M(1.2177) = 1.410M + 1.2177M = 2.6277M.$$

c. To lock in $_2f_1$ applied to the 1.25M, consider the following transactions: short 1250 two-year bonds, long 1410 three-year bonds. The corresponding cash flows are:

|  | t=0 | t=2 | t=3 |
| --- | --- | --- | --- |
| Short: (1250)(880) | +1,100,000 | −1,250,000 |  |
| Long: −(1410)(780) | −1,100,000 |  | +1,410,000 |
| Total | 0 | −1.25M | +1.410M |

To lock in $_1f_2$ (compounded for two periods) applied to the 1M, consider the following transactions: short 1000 one-year bonds, long 1217.7 three-year bonds. The corresponding cash flows are:

|  | t=0 | t=1 | t=3 |
| --- | --- | --- | --- |
| Short: (1000)(950) | +950,000 | −1,000,000 |  |
| Long: −(1217.7)(780) | −950,000 |  | +1,217,700 |
| Total | 0 | −1M | +1.2177M |

The portfolio that allows us to invest the indicated cash flows at the implied forward rates is thus:
Short: 1000 one-year bonds
Short: 1250 two-year bonds
Long: 1410 + 1217.7 = 2627.7 three-year bonds
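The 10.6 computations can be replicated directly from the zero-coupon prices given in the problem (950, 880, 780 for maturities 1, 2, 3, face value 1000). Differences in the last digit relative to the text come from the text's intermediate rounding:

```python
# The 10.6 computations from the zero-coupon prices (face value 1000).
P1, P2, P3 = 950.0, 880.0, 780.0

# Spot rates
r1 = 1000 / P1 - 1
r2 = (1000 / P2) ** (1 / 2) - 1
r3 = (1000 / P3) ** (1 / 3) - 1
print(round(r1, 5), round(r2, 4), round(r3, 4))   # 0.05263 0.066 0.0863

# Implied forward rates
f12 = ((1 + r3) ** 3 / (1 + r1)) ** (1 / 2) - 1   # 1f2: 2-yr rate starting at t=1
f21 = (1 + r3) ** 3 / (1 + r2) ** 2 - 1           # 2f1: 1-yr rate starting at t=2
print(round(f12, 4), round(f21, 4))   # 0.1036 0.1282 (text: .1035/.1281 after rounding)

# Cash flow at T=3
CF3 = 1.25e6 * (1 + f21) + 1.0e6 * (1 + f12) ** 2
print(round(CF3))   # 2628205, i.e. about the text's 2.6277M
```

This confirms the construction in part b up to rounding of the intermediate spot rates.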
10.7. If today's state is state 1, to get \$1 for sure tomorrow using Arrow-Debreu prices, I need to pay $q_{11} + q_{12}$; thus
$$q_{11} + q_{12} = \frac{1}{1+r_1^1} = .9 \qquad \text{(i)}$$
Similarly, if today's state is state 2:
$$q_{21} + q_{22} = \frac{1}{1+r_2^1} = .8 \qquad \text{(ii)}$$
Given the matrix of Arrow-Debreu prices
$$q = \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix}, \qquad q^2 = \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix}\begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix},$$
to get \$1 for sure two periods from today I need to pay, if state 1 today,
$$q_{11}q_{11} + q_{12}q_{21} + q_{11}q_{12} + q_{12}q_{22} = \frac{1}{(1+r_1^2)^2} = .78 \qquad \text{(iii)}$$
and, if state 2 today,
$$q_{21}q_{11} + q_{22}q_{21} + q_{21}q_{12} + q_{22}q_{22} = \frac{1}{(1+r_2^2)^2} = .68 \qquad \text{(iv)}$$
The four equations (i)-(iv) can be solved for the four unknown Arrow-Debreu prices:
$$q_{11} = 0.6, \quad q_{12} = 0.3, \quad q_{21} = 0.4, \quad q_{22} = 0.4.$$

Chapter 13

13.6. a. From the main APT equation and the problem data, one obtains the following system:
$$12.0 = Er_A = \lambda_0 + \lambda_1 (1) + \lambda_2 (0.5) \qquad \text{(i)}$$
$$13.4 = Er_B = \lambda_0 + \lambda_1 (3) + \lambda_2 (0.2) \qquad \text{(ii)}$$
$$12.0 = Er_C = \lambda_0 + \lambda_1 (3) - \lambda_2 (0.5) \qquad \text{(iii)}$$
This system can easily be solved for $\lambda_0 = 10$, $\lambda_1 = 1$, $\lambda_2 = 2$. Thus the APT tells us that $Er_i = 10 + 1\, b_{i1} + 2\, b_{i2}$.

b. (i) If there is a risk-free asset, one must have $\lambda_0 = r_f = 10$.
(ii) Let $P_i$ be the pure factor portfolio associated with factor $i$. One has $\lambda_i = r_{P_i} - r_f$. Furthermore, if the CAPM holds, one should have $\lambda_i = r_{P_i} - r_f = \beta_{P_i}(r_M - r_f)$. Thus
$$\lambda_1 = 1 = \beta_{P_1}\cdot 4 \;\Rightarrow\; \beta_{P_1} = \tfrac{1}{4}, \qquad \lambda_2 = 2 = \beta_{P_2}\cdot 4 \;\Rightarrow\; \beta_{P_2} = \tfrac{1}{2}.$$

13.7. The APT-consistent expected returns are:
$$r_A = 4\% + .8(8\%) = 10.4\%$$
$$r_B = 4\% + 1(8\%) = 12.0\%$$
$$r_C = 4\% + 1.2(8\%) = 13.6\%$$
The expected return of B is less than what is consistent with the APT. This provides an arbitrage opportunity. Consider a combination of A and C that gives the same factor sensitivity as B:
$$w_A(.8) + w_C(1.2) = 1.00, \quad w_C = 1 - w_A \;\Rightarrow\; w_A = w_C = \tfrac{1}{2}.$$
The expected return on this portfolio is
$$E\left[r_{\frac{1}{2}A,\frac{1}{2}C}\right] = \tfrac{1}{2}E(\tilde r_A) + \tfrac{1}{2}E(\tilde r_C) = \tfrac{1}{2}(10.4\%) + \tfrac{1}{2}(13.6\%) = 12\%.$$
This clearly provides an arbitrage opportunity: short portfolio B, buy a portfolio of ½ A and ½ C.
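The 13.6 system and the 13.7 replication can be verified by direct elimination, using only the numbers given in the text:

```python
# Solving the 13.6 system for (lambda0, lambda1, lambda2) by elimination,
# then checking the 13.7 replicating portfolio, using the numbers in the text.
#   (i)   12.0 = l0 + 1*l1 + 0.5*l2
#   (ii)  13.4 = l0 + 3*l1 + 0.2*l2
#   (iii) 12.0 = l0 + 3*l1 - 0.5*l2
l2 = (13.4 - 12.0) / 0.7            # (ii) - (iii): 1.4 = 0.7*l2
l1 = (13.4 - 12.0 + 0.3 * l2) / 2   # (ii) - (i):   1.4 = 2*l1 - 0.3*l2
l0 = 12.0 - l1 - 0.5 * l2           # back into (i)
print(round(l0, 6), round(l1, 6), round(l2, 6))   # 10.0 1.0 2.0

# 13.7: the portfolio 1/2 A + 1/2 C has B's factor sensitivity but a
# higher APT-consistent expected return, confirming the arbitrage.
w_A = w_C = 0.5
b_p = w_A * 0.8 + w_C * 1.2      # factor sensitivity, matches B's 1.0
er_p = w_A * 10.4 + w_C * 13.6   # expected return of the replicating portfolio
print(round(b_p, 6), round(er_p, 6))   # 1.0 12.0
```

The elimination steps mirror the by-hand solution: subtracting (iii) from (ii) isolates λ₂, subtracting (i) from (ii) then gives λ₁, and (i) gives λ₀.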
13.8. That diversifiable risk is not priced has long been considered the main lesson of the CAPM. While defining systematic risk differently (in terms of the consumption portfolio rather than the market portfolio), the CCAPM leads to the same conclusion. So does the APT, with possibly yet another definition of systematic risk, at least in the case where the market portfolio risk does not encompass the various risk factors identified by the APT. The value additivity theorem seen in Chapter 7 proves that Arrow-Debreu pricing also leads to the same implication. The equivalence between risk-neutral prices and Arrow-Debreu prices in complete markets guarantees that the same conclusion follows from the martingale pricing theory. When markets are incomplete, some risks that we would understand as being diversifiable may no longer be so. In those situations a reward for holding these risks may be forthcoming.

Chapter 15

15.6. a. The agents' problems are:
$$\text{Agent 1:} \quad \max_{Q_1^1, Q_2^1} \; \ln\!\left(e_1 - p_1 Q_1^1 - p_2 Q_2^1\right) + \tfrac{3}{4}\ln Q_1^1 + \tfrac{1}{4}\ln Q_2^1$$
$$\text{Agent 2:} \quad \max_{Q_1^2, Q_2^2} \; \ln\!\left(e_2 - p_1 Q_1^2 - p_2 Q_2^2\right) + \tfrac{1}{4}\ln Q_1^2 + \tfrac{3}{4}\ln Q_2^2$$
FOCs for agent 1:
$$Q_1^1: \quad \frac{p_1}{e_1 - p_1 Q_1^1 - p_2 Q_2^1} = \frac{3}{4 Q_1^1} \qquad \text{(i)}$$
$$Q_2^1: \quad \frac{p_2}{e_1 - p_1 Q_1^1 - p_2 Q_2^1} = \frac{1}{4 Q_2^1} \qquad \text{(ii)}$$
(i) and (ii) together imply $p_2 Q_2^1 = \tfrac{1}{3}\, p_1 Q_1^1$. This is not surprising; agent 1 places a higher subjective probability on the first state. Taking this into account, (i) implies $p_1 Q_1^1 = \tfrac{3}{8} e_1$ and thus $p_2 Q_2^1 = \tfrac{1}{8} e_1$. The calculations for agent 2 are symmetric, and his FOCs (iii) and (iv) solve for $p_1 Q_1^2 = \tfrac{1}{8} e_2$ while $p_2 Q_2^2 = \tfrac{3}{8} e_2$.

Now suppose there is 1 unit of each security available for purchase:
$$\text{a)} \quad Q_1^1 + Q_1^2 = 1 \qquad\qquad \text{b)} \quad Q_2^1 + Q_2^2 = 1$$
Substituting the demand functions above into these equations gives:
$$\text{a)} \quad \frac{3e_1}{8p_1} + \frac{e_2}{8p_1} = 1 \;\Rightarrow\; p_1 = \tfrac{3}{8}e_1 + \tfrac{1}{8}e_2$$
$$\text{b)} \quad \frac{e_1}{8p_2} + \frac{3e_2}{8p_2} = 1 \;\Rightarrow\; p_2 = \tfrac{1}{8}e_1 + \tfrac{3}{8}e_2$$
If $e_2 > e_1$, then $p_2 > p_1$; if $e_2 < e_1$, then $p_1 > p_2$.
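As a sanity check of the 15.6(a) equilibrium, one can plug the derived demand functions and prices into the market-clearing conditions. The endowment values below are made-up numbers chosen only for illustration:

```python
# Sanity check of the 15.6(a) equilibrium: with the derived prices, both
# security markets clear. The endowments e1, e2 are made-up numbers.
e1, e2 = 2.0, 5.0

p1 = (3 * e1 + e2) / 8      # p1 = (3/8) e1 + (1/8) e2
p2 = (e1 + 3 * e2) / 8      # p2 = (1/8) e1 + (3/8) e2

# Demands implied by p1*Q1_1 = (3/8) e1, p2*Q2_1 = (1/8) e1 (agent 1)
# and p1*Q1_2 = (1/8) e2, p2*Q2_2 = (3/8) e2 (agent 2):
Q1_1, Q2_1 = 3 * e1 / (8 * p1), e1 / (8 * p2)
Q1_2, Q2_2 = e2 / (8 * p1), 3 * e2 / (8 * p2)

print(abs(Q1_1 + Q1_2 - 1) < 1e-9)   # True: market for security 1 clears
print(abs(Q2_1 + Q2_2 - 1) < 1e-9)   # True: market for security 2 clears
print(p2 > p1)                        # True, since e2 > e1
```

Any positive pair (e1, e2) works; the comparative-statics claim (p2 > p1 when e2 > e1) also comes out directly.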
b. There is now only one security, with price $-p_b$ at t=1 and paying 1 in each of the states $\theta_1$ and $\theta_2$ at t=2. The agents' endowments are:

|  | t=1 | t=2, $\theta_1$ | t=2, $\theta_2$ |
| --- | --- | --- | --- |
| Agent 1 | $e_1$ | $a_1$ | $a_1$ |
| Agent 2 | $e_2$ | $a_2$ | $a_2$ |

There can be trade even with this one asset if, say, $e_1 = 0, a_1 > 0, e_2 > 0, a_2 = 0$, to take an extreme case. If $e_1 = e_2 = 0$, then there will be no trade, as the security payoffs do not allow the agents to tailor their consumption plans to their subjective probabilities.

c. Suppose we introduce a risky asset with price $-p$ at t=0 and payoffs $z_1$ in $\theta_1$ and $z_2$ in $\theta_2$, where $z_1 \ne z_2$ and $z_1, z_2 > 0$. Combinations of this security and the riskless one can be used to construct the state claims. This will be welfare improving relative to the case where only the riskless asset is traded. The final equilibrium outcome, and the extent to which each agent's welfare is improved, will depend upon the relative endowments of the risky security assigned to each agent, and the absolute total quantity bestowed on the economy.

d. The firm can convert x units of (1,1) into x units of {(1,0), (0,1)}. The agents (relative to having only the riskless asset) would avail themselves of this technology, and then trade the resultant claims to attain a more preferred consumption state. Furthermore, the agents would be willing to pay for such a service in the following sense: the agent delivers x(1,1) as input to the firm (the inventor) and receives a{(1,0), (0,1)} as output, the inventor retaining (x − a){(1,0), (0,1)}. Clearly, if a = x, the agents would ignore the inventor. However, each agent would be willing to pay something. Assuming the inventor charges the same a to each agent, the most he could charge would be that a at which one of the agents is no better off ex ante than if he did not trade. Suppose the inventor could choose to convert x, 2x, 3x, …, nx securities (x understood to be small). The additional increment he could charge would decline as n increased.
## Activity: Curvilinear Coordinates Introduction

Courses: AIMS Maxwell, AIMS 21, Central Forces (Spring 2021), Static Fields (Winter 2021)

First, students are shown diagrams of cylindrical and spherical coordinates. Common notation systems are discussed, especially that physicists and mathematicians use opposite conventions for the angles $\theta$ and $\phi$. Then students are asked to check their understanding by sketching several coordinate-equals-constant surfaces on their small whiteboards.

This activity is used in the Curvilinear Coordinate Sequence.

What students learn:

- The names and notations for variables in cylindrical $(s, \phi, z)$ and spherical $(r, \theta, \phi)$ coordinates;
- The differences between physicists' $(r, \theta, \phi)$ and mathematicians' $(r, \phi, \theta)$ notations for spherical coordinates;
- That specifying the value of a single coordinate in 3-d results in a 2-d surface;
- The range of values taken on by each of the coordinates in cylindrical and spherical coordinates.

Related activities and problems:

##### Curvilinear Basis Vectors (Kinesthetic, 10 min.)

Students use their arms to depict (sequentially) the different cylindrical and spherical basis vectors at the location of their shoulder (seen in relation to a specified origin of coordinates: either a set of axes hung from the ceiling of the room or perhaps a piece of furniture or a particular corner of the room).

##### Magnetic Field and Current (Homework)

Consider the magnetic field $\vec{B}(s,\phi,z)= \begin{cases} 0&0\le s<a\\ \alpha \frac{1}{s}(s^4-a^4)\, \hat{\phi}&a<s<b\\ 0&s>b \end{cases}$

1. Use step and/or delta functions to write this magnetic field as a single expression valid everywhere in space.
2. Find a formula for the current density that creates this magnetic field.
3.
Interpret your formula for the current density, i.e. explain briefly in words where the current is.

##### Scalar Surface and Volume Elements (Small Group Activity, 30 min.)

Students use known algebraic expressions for length elements $d\ell$ to determine all simple scalar area $dA$ and volume elements $d\tau$ in cylindrical and spherical coordinates. This activity is identical to Vector Surface and Volume Elements except it uses a scalar approach to find surface and volume elements.

##### Total Charge (Small Group Activity, 30 min.)

In this small group activity, students integrate over non-uniform charge densities in cylindrical and spherical coordinates to calculate total charge.

##### Curvilinear Coordinate Sequence

The curvilinear coordinate sequence introduces cylindrical and spherical coordinates (including inconsistencies between physicists' and mathematicians' notational conventions) and the basis vectors adapted to these coordinate systems.

##### Sphere in Cylindrical Coordinates (Homework)

Find the surface area of a sphere using cylindrical coordinates.

##### Vector Surface and Volume Elements (Small Group Activity, 30 min.)

Students use known algebraic expressions for vector line elements $d\vec{r}$ to determine all simple vector area $d\vec{A}$ and volume elements $d\tau$ in cylindrical and spherical coordinates. This activity is identical to Scalar Surface and Volume Elements except it uses a more sophisticated vector approach to find surface and volume elements.
##### Cone Surface (Homework)

Using integration, find the surface area of a cone with height $H$ and radius $R$. Do this problem in both cylindrical and spherical coordinates.

##### Distance Formula in Curvilinear Coordinates (Homework)

The distance $\left\vert\vec r -\vec r\,{}'\right\vert$ between the point $\vec r$ and the point $\vec r\,{}'$ is a coordinate-independent, physical and geometric quantity. But, in practice, you will need to know how to express this quantity in different coordinate systems.

1. Find the distance $\left\vert\vec r -\vec r\,{}'\right\vert$ between the point $\vec r$ and the point $\vec r\,{}'$ in rectangular coordinates.
2. Show that this same distance written in cylindrical coordinates is: $$\left|\vec r -\vec r\,{}'\right| =\sqrt{s^2+s\,{}'^2-2ss\,{}'\cos(\phi\,{}'-\phi) +(z\,{}'-z)^2}$$
3. Show that this same distance written in spherical coordinates is: $$\left\vert\vec r\,{}' -\vec r\right\vert =\sqrt{r\,{}'^2+r^2-2rr\,{}' \left[\sin\theta\sin\theta\,{}'\cos(\phi\,{}'-\phi) +\cos\theta\,{}'\cos\theta\right]}$$
4. Now assume that $\vec r\,{}'$ and $\vec r$ are in the $x$-$y$ plane. Simplify the previous two formulas.

##### The Gradient for a Point Charge (Homework)

The electrostatic potential due to a point charge at the origin is given by: $$V=\frac{1}{4\pi\epsilon_0} \frac{q}{r}$$

1. Find the electric field due to a point charge at the origin as a gradient in rectangular coordinates.
2. Find the electric field due to a point charge at the origin as a gradient in spherical coordinates.
3. Find the electric field due to a point charge at the origin as a gradient in cylindrical coordinates.
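A quick numerical spot-check of the distance-formula homework above: convert an arbitrary pair of spherical points to rectangular coordinates (physicists' convention, as used throughout this activity) and compare the three formulas. The specific points below are arbitrary choices:

```python
import math

# Spot-check that the cylindrical and spherical distance formulas agree
# with the rectangular one for an arbitrary pair of points.
def rect(point):
    """(r, theta, phi) -> (x, y, z), physicists' convention."""
    rr, th, ph = point
    return (rr * math.sin(th) * math.cos(ph),
            rr * math.sin(th) * math.sin(ph),
            rr * math.cos(th))

p  = (2.0, 0.7, 1.1)    # spherical (r, theta, phi), arbitrary
pp = (3.0, 2.1, 4.0)

# Rectangular distance
d_rect = math.dist(rect(p), rect(pp))

# Spherical formula from the problem statement
r, th, ph = p
rp, thp, php = pp
d_sph = math.sqrt(rp**2 + r**2
                  - 2 * r * rp * (math.sin(th) * math.sin(thp) * math.cos(php - ph)
                                  + math.cos(th) * math.cos(thp)))

# Cylindrical formula, using s = r sin(theta), z = r cos(theta)
s, z = r * math.sin(th), r * math.cos(th)
sp, zp = rp * math.sin(thp), rp * math.cos(thp)
d_cyl = math.sqrt(s**2 + sp**2 - 2 * s * sp * math.cos(php - ph) + (zp - z)**2)

print(abs(d_sph - d_rect) < 1e-9, abs(d_cyl - d_rect) < 1e-9)   # True True
```

Because all three expressions are exact rewritings of the same quantity, they agree to floating-point precision for any choice of points.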
## Instructor's Guide ### Introduction First, show students diagrams of cylindrical and spherical coordinates. Discuss common notation systems, especially that mathematicians and physicists use opposite notations for the angles $\theta$ and $\phi$. Don't forget to discuss the ranges of each of the coordinates. It can be very helpful to have a set of coordinate axes, perhaps suspended from the ceiling somewhere in the room, to refer to as needed. Attach a string to the $z$-axis at the origin so that it can move freely and demonstrate, by moving the string appropriately, how the various angles in cylindrical and spherical coordinates change. ### Student Conversations Ask students to check their understanding by sketching several coordinate equals constant surfaces on their small whiteboards. Appropriate figures are attached. Beware, this part of the activity can take longer than you expect; you can try to speed things up by limiting the coordinate equals constant sketches to $s$=constant and $\phi$=constant in cylindrical and $\theta$=constant in spherical. • When sketching $s=$ constant in cylindrical coordinates, some students sketch a sphere. • When sketching $\phi=$ constant in cylindrical (or spherical) coordinates, many students are puzzled about why the answer is a half-plane instead of a whole plane. This is a good opportunity to emphasize the ranges for the various variables. • Cylindrical Coordinates: \begin{align} 0\le &s<\infty\\ 0\le &\phi <2\pi\\ -\infty < &z <\infty \end{align} • Spherical Coordinates: \begin{align} 0\le &r<\infty\\ 0\le &\theta<\pi\\ 0\le &\phi <2\pi\\ \end{align} • When sketching $\theta=$ constant in spherical coordinates, most students do not understand that the answer is a cone. ## Cylindrical Coordinates For the cylindrical coordinate system shown below, draw three surfaces: one for constant $s$, one for constant $\phi$, and one for constant $z$. 
\begin{align} x&=s\cos\phi\\ y&=s\sin\phi\\ z&=z \end{align} \begin{align} 0\le &s<\infty\\ 0\le &\phi <2\pi\\ -\infty < &z <\infty \end{align}

## Spherical Coordinates

For the spherical coordinate system shown below, draw three surfaces: one for constant $r$, one for constant $\theta$, and one for constant $\phi$.

\begin{align} x&=r\, \sin\theta\, \cos\phi\\ y&=r\, \sin\theta\, \sin\phi\\ z&=r\, \cos\theta \end{align} \begin{align} 0\le &r<\infty\\ 0\le &\theta<\pi\\ 0\le &\phi <2\pi\\ \end{align}

Author Information: Corinne Manogue, Tevian Dray, Ed Price

Keywords: cylindrical coordinates, spherical coordinates, curvilinear coordinates
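As a quick numerical sanity check on the transformation equations above, here is a small Python sketch (the function names are ours, not part of the activity) converting cylindrical and spherical coordinates to Cartesian, using the physicists' convention for $\theta$ and $\phi$:

```python
import math

def cyl_to_cart(s, phi, z):
    """Cylindrical (s, phi, z) -> Cartesian (x, y, z)."""
    return (s * math.cos(phi), s * math.sin(phi), z)

def sph_to_cart(r, theta, phi):
    """Spherical (r, theta, phi) -> Cartesian (x, y, z).
    Physicists' convention: theta is the polar angle measured from the
    +z axis (0 <= theta <= pi), phi is the azimuthal angle (0 <= phi < 2*pi)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

# A point with theta = pi/2 lies on the "equator", i.e. in the x-y plane.
print(sph_to_cart(1.0, math.pi / 2, 0.0))
```

Checking a few special points this way (e.g. $\theta=\pi/2$ landing in the $x$-$y$ plane) is a good complement to sketching the coordinate-equals-constant surfaces.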
# Confidence Standard Error Mean

The standard error of the mean, written $\sigma_{\bar{x}}$, is a quantitative measure of the uncertainty in a sample mean. For a sample of $n$ independent observations drawn from a population with standard deviation $\sigma$,

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$$

Larger sample sizes give smaller standard errors: as the sample size increases, the sampling distribution of the mean becomes more narrow. A practical consequence is that decreasing the uncertainty in a mean estimate by a factor of ten requires a hundred times as many observations.

In practice the population standard deviation $\sigma$ is rarely known, so the standard error is estimated from the sample standard deviation $s$ as $s/\sqrt{n}$. A standard deviation computed from a small sample tends to underestimate $\sigma$ (for $n = 6$ the underestimate is only about 5%), and a small sample is a less certain guide to the population from which it was drawn than a large one.

Because 95% of a normal distribution lies within 1.96 standard deviations of its mean, a 95% confidence interval for the population mean is

$$\bar{x} \pm 1.96\,\sigma_{\bar{x}}$$

When $\sigma$ is unknown and the sample is small, the critical value 1.96 is replaced by the appropriate value from a $t$ table; $t$-distributions are slightly wider than the Gaussian, though the difference shrinks as the sample size grows. If many independent samples were drawn and a 95% confidence interval computed from each, about 95% of those intervals would include the population parameter.

The relative standard error expresses the standard error as a percentage of the estimate. As an example, consider two surveys of household income that both result in a sample mean of $50,000; if their standard errors are $10,000 and $5,000, the relative standard errors are 20% and 10% respectively.

There is much confusion over the interchangeability of the standard deviation and the standard error, and of a reference range and a confidence interval. The standard deviation describes the spread of individual observations, and a reference range (roughly mean ± 2 SD, for a large, normally distributed dataset) describes where individual values lie; the standard error describes the uncertainty of the sample mean, and a confidence interval describes where the population mean is likely to lie.
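The quantities discussed above can be computed directly. A minimal Python sketch (using the normal-approximation critical value 1.96; for small samples a $t$ critical value should be used instead):

```python
import math
import statistics

def mean_se_ci(data, z=1.96):
    """Return (mean, standard error, 95% confidence interval) for a sample,
    estimating the standard error as s / sqrt(n) from the sample standard
    deviation, and using the normal approximation for the interval."""
    n = len(data)
    m = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(n)
    return m, se, (m - z * se, m + z * se)

m, se, (lo, hi) = mean_se_ci([4, 8, 6, 5, 3, 7, 6, 9])
print(f"mean={m:.2f}  SE={se:.3f}  95% CI=({lo:.2f}, {hi:.2f})")
```

For this toy sample the mean is 6 and the sample standard deviation is 2, so the standard error is $2/\sqrt{8}\approx 0.71$.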
# Predicting circRNA-drug sensitivity associations via graph attention auto-encoder – BMC Bioinformatics

May 4, 2022

### Dataset

In this work, we download the circRNA-drug sensitivity associations from the circRic [14] database, in which the drug sensitivity data comes from the GDSC database [16], containing 80076 associations that involve 404 circRNAs and 250 drugs. The circRic database systematically characterizes circRNA expression profiles in 935 cancer cell lines across 22 cancer lineages from the Cancer Cell Line Encyclopedia; it analyzes circRNA biogenesis regulators, the effect of circRNAs on drug response, and the associations between circRNAs and mRNA, protein, and mutation, and predicts RNA regulatory elements in circRNAs. For each individual circRNA, the Wilcoxon test is applied to identify drug sensitivities that are significantly associated with the circRNA's expression; an association with a false discovery rate (FDR) less than 0.05 is defined as significant. In our method, only these significant associations are extracted as a training set, which includes 4134 associations involving 271 circRNAs and 218 drugs. We finally construct an association matrix $A \in \mathbb{R}^{271\times 218}$ between circRNAs and drugs based on these significant associations. In $A$, the element $A_{ij}=1$ indicates that circRNA $i$ and the sensitivity of drug $j$ are interrelated; otherwise, $A_{ij}=0$. Here, $i$ and $j$ denote the indices of the circRNA and the drug in $A$, respectively.

Besides the circRNA-drug sensitivity associations, we also curate the sequences of the host genes of circRNAs and the structure data of drugs, which come from the National Center for Biotechnology Information (NCBI) Gene database and the PubChem database of NCBI, respectively [17, 18]. From the host-gene sequences and the structural information of drugs, the corresponding similarities are calculated.
### Similarity networks

#### Sequence similarity of host genes of circRNAs

We calculate the sequence similarity between host genes as the similarity of circRNAs. The similarities are computed based on the Levenshtein distance of the sequences through the ratio function of Python's Levenshtein package. In this work, the sequence similarities are represented by the matrix $CSS \in \mathbb{R}^{271\times 271}$.

#### GIP kernel similarity of circRNA

The GIP (Gaussian interaction profile) kernel similarity is widely used in the similarity calculation of biological entities in previous research [19]. Accordingly, we calculate the GIP kernel similarity of circRNAs from the circRNA-drug sensitivity association matrix $A$, based on the assumption that circRNAs associated with the same drug sensitivities are more likely to be similar. The GIP kernel similarity matrix of circRNAs is denoted by $CGS \in \mathbb{R}^{271\times 271}$.

#### Structural similarity of drug

Since a drug's structure dramatically affects its function, we can measure the similarity of drugs through their structures. Following past studies, we choose the RDKit toolkit and the Tanimoto method to calculate the structural similarity of drugs [20, 21]. After obtaining the structure data from the PubChem database, we first use RDKit to calculate the topological fingerprint of each drug, then calculate the structural similarity between drugs with the Tanimoto method. Finally, the structural similarity matrix of drugs is derived, denoted by $DSS \in \mathbb{R}^{218\times 218}$.

#### GIP kernel similarity of drug

Similar to circRNAs, we also calculate the GIP kernel similarity of drugs, represented by $DGS \in \mathbb{R}^{218\times 218}$.

#### Similarity fusion method

As described above, we calculate the similarities of circRNAs and drugs from different aspects. To obtain comprehensive similarity matrices, the similarities from the different aspects need to be fused. The circRNA comprehensive similarity matrix is constructed as follows.
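The Tanimoto coefficient itself is simple to state. As a hedged illustration (the toy "fingerprints" below are sets of on-bit indices chosen by us; the paper uses real RDKit topological fingerprints):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient of two fingerprints represented as sets of
    'on' bit indices: |A intersect B| / |A union B|."""
    if not fp_a and not fp_b:
        return 0.0  # convention for two empty fingerprints
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Toy fingerprints sharing 2 of 4 distinct on-bits.
a = {1, 5, 9}
b = {1, 9, 12}
print(tanimoto(a, b))  # 2 shared bits / 4 distinct bits = 0.5
```

With real RDKit fingerprints the same quantity is computed over bit vectors, but the set formulation above shows exactly what is being measured.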
$$CS_{ij}=\begin{cases} \dfrac{CSS_{ij}+CGS_{ij}}{2}, & \text{if } CSS_{ij} \neq 0\\ CGS_{ij}, & \text{otherwise} \end{cases} \tag{1}$$

Similarly, the drug comprehensive similarity matrix is computed as follows.

$$DS_{ij}=\begin{cases} \dfrac{DSS_{ij}+DGS_{ij}}{2}, & \text{if } DSS_{ij} \neq 0\\ DGS_{ij}, & \text{otherwise} \end{cases} \tag{2}$$

After obtaining the similarity networks, we binarize them for the downstream GATE model. In this step, we set thresholds $c_{th}$ and $d_{th}$ for the binarization of the circRNA similarity network and the drug similarity network, respectively: an element of the similarity matrix is set to 1 if its value is greater than the threshold, and to 0 otherwise.

### GATECDA framework

Our GATECDA model, the flowchart of which is depicted in Fig. 1, is based on the Graph Attention Auto-encoder. The primary processing is composed of several steps: (1) construct the circRNA and drug similarity networks; (2) adopt GATE to extract vector representations of circRNAs and drugs; (3) combine the representations of circRNAs and drugs and feed them to a fully connected neural network to predict the association score of each pair of circRNA and drug sensitivity.

### Graph attention auto-encoder

The Graph Attention Auto-encoder (GATE) is an unsupervised learning model for representation learning on graph-structured data. GATE can reconstruct node attributes and graph structure by stacking encoders and decoders. In the encoder, the attributes of nodes are fed in as the initial node representations, and each encoder layer generates new representations of the nodes by considering their relations based on a self-attention mechanism [22]. Further, the encoder updates the representation of the current node with its neighbors' representations.
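The fusion rule of Eqs. (1)–(2) and the subsequent binarization can be sketched in a few lines of plain Python (toy 2×2 matrices; the threshold value here is illustrative, not the paper's tuned $c_{th}$ or $d_{th}$):

```python
def fuse(seq_sim, gip_sim):
    """Entrywise fusion per Eq. (1)/(2): average the two similarities where
    the sequence/structure similarity is nonzero, else fall back to GIP."""
    n = len(seq_sim)
    return [[(seq_sim[i][j] + gip_sim[i][j]) / 2 if seq_sim[i][j] != 0
             else gip_sim[i][j]
             for j in range(n)] for i in range(n)]

def binarize(sim, threshold):
    """1 where the similarity exceeds the threshold, else 0."""
    return [[1 if v > threshold else 0 for v in row] for row in sim]

CSS = [[1.0, 0.0], [0.0, 1.0]]   # toy sequence similarity
CGS = [[1.0, 0.4], [0.4, 1.0]]   # toy GIP kernel similarity
CS = fuse(CSS, CGS)              # off-diagonal entries fall back to GIP
A_bin = binarize(CS, 0.3)        # adjacency fed to the GATE model
```

The same two functions apply unchanged to the drug matrices $DSS$/$DGS$ with threshold $d_{th}$.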
In the decoder, the encoding process is reversed to reconstruct the initial attributes of the nodes. In this study, we use the GATE model to extract the representations of circRNAs and drugs. GATE assigns a different weight to each neighbor of the current node through the attention mechanism, which helps the model obtain better node representations. The GATE model consists of multiple encoder layers and decoder layers; in GATE, the encoders and decoders have the same number of layers. Multiple encoder layers improve the learning ability of the model and produce better node representations. Figure 2 shows the process of GATE encoding and decoding.

The encoder layer generates new representations for nodes by taking into account their neighbors' representations based on their relevance. Inspired by the work of Velickovic et al. [22], the GATE model employs a self-attention mechanism with parameters shared among nodes to determine the relations between a node and its neighbors. In the $k$th encoder layer, the correlation between node $i$ and its neighbor node $j$ is calculated as follows:

$$c^{(k)}_{ij}=\operatorname{Sigmoid}\left( V_s^{(k)T} \sigma \left(W^{(k)}h_i^{(k-1)}\right) + V_r^{(k)T} \sigma \left(W^{(k)}h_j^{(k-1)}\right) \right) \tag{3}$$

Here, $W^{(k)} \in \mathbb{R}^{d^{(k)} \times d^{(k-1)}}$, $V_s^{(k)}\in \mathbb{R}^{d^{(k)}}$, and $V_r^{(k)}\in \mathbb{R}^{d^{(k)}}$ are the trainable parameters of the $k$th encoder layer, and $\sigma$ and $\operatorname{Sigmoid}$ denote the activation function and the sigmoid function, respectively. To make the coefficients comparable across the neighbors of node $i$, we employ the softmax function to normalize them, as shown in Eq. (4):

$$\alpha _{ij}^{(k)}= \frac{\exp(c_{ij}^{(k)})}{\sum _{l\in N_i}\exp(c_{il}^{(k)})} \tag{4}$$

where $N_i$ denotes the neighbors of node $i$, including node $i$ itself.
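The softmax normalization of Eq. (4) over a node's neighborhood can be sketched as follows (plain Python; the raw coefficients stand in for the $c_{ij}^{(k)}$ values that Eq. (3) would produce):

```python
import math

def normalize_attention(raw_coeffs):
    """Softmax over a node's neighbors, as in Eq. (4):
    alpha_ij = exp(c_ij) / sum_l exp(c_il).
    Subtracting the max first keeps the exponentials numerically stable
    without changing the result."""
    m = max(raw_coeffs.values())
    exps = {j: math.exp(c - m) for j, c in raw_coeffs.items()}
    total = sum(exps.values())
    return {j: e / total for j, e in exps.items()}

# Raw relevance of node 0 to its neighborhood (which includes itself).
alpha = normalize_attention({0: 0.9, 3: 0.2, 7: 0.5})
print(alpha)
```

The resulting weights sum to 1 over the neighborhood and preserve the ordering of the raw coefficients.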
The node features are taken as the initial node representations, namely $h_i^{(0)}=x_i$, and the representation of node $i$ in the $k$th layer is then generated by Eq. (5):

$$h_i^{(k)}=\sum _{j\in N_i} \alpha _{ij}^{(k)} \sigma \left(W^{(k)}h_j^{(k-1)}\right) \tag{5}$$

The output of the last encoder layer is taken as the node representations used in our model. GATE learns node representations in an unsupervised manner by using the same number of decoder layers as encoder layers. Each decoder layer reconstructs the representations of nodes from the representations of their neighbors based on their relevance. The normalized relevance between node $i$ and a neighbor $j$ in the $k$th decoder layer is calculated by Eqs. (6) and (7):

$$\hat{\alpha }_{ij}^{(k)}=\frac{\exp(\hat{c}_{ij}^{(k)})}{\sum _{l\in N_i}\exp(\hat{c}_{il}^{(k)})} \tag{6}$$

$$\hat{c}^{(k)}_{ij}=\operatorname{Sigmoid}\left( \hat{v}_s^{(k)T} \sigma \left(\hat{W}^{(k)} \hat{h}_i^{(k)}\right) + \hat{v}_r^{(k)T} \sigma \left(\hat{W}^{(k)}\hat{h}_j^{(k)}\right) \right) \tag{7}$$

Similar to the encoder layers, $\hat{W}^{(k)}\in \mathbb{R}^{d^{(k-1)}\times d^{(k)}}$, $\hat{v}_s^{(k)}\in \mathbb{R}^{d^{(k-1)}}$, and $\hat{v}_r^{(k)}\in \mathbb{R}^{d^{(k-1)}}$ are the trainable parameters of the $k$th decoder layer. The input of the decoder comes from the output of the last encoder layer, and the $k$th decoder layer reconstructs the node representations of layer $k-1$ according to Eq. (8):

$$\hat{h}_i^{(k-1)}=\sum _{j\in N_i} \hat{\alpha }_{ij}^{(k)} \sigma \left(\hat{W}^{(k)}\hat{h}_j^{(k)}\right) \tag{8}$$

After decoding via $L$ decoder layers, the output of the last decoder layer is taken as the reconstructed node features. The loss function consists of two parts, namely the reconstruction loss of node features and the reconstruction loss of graph structure.
We combine them through the following equation:

$$Loss=\sum ^{N}_{i=1}\left\| x_i-\hat{x}_i\right\| _2-\lambda \sum _{j\in N_i}\log\left( \frac{1}{1+\exp(-h^T_ih_j)}\right) \tag{9}$$

Here, $\lambda$ is a hyperparameter that balances the contribution of the reconstruction loss of the graph structure, $x_i$ and $\hat{x}_i$ represent the node features and the reconstructed node features, respectively, and $h_j$ is the representation of a node $j$ neighboring node $i$. We obtain high-quality node representations by minimizing this loss function.
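As a hedged illustration of Eq. (9) on toy Python lists (a real implementation would operate on tensors and sum the structure term over every node's neighborhood rather than an explicit edge list):

```python
import math

def gate_loss(x, x_hat, h, edges, lam=1.0):
    """Sketch of Eq. (9): L2 reconstruction error of node features minus a
    lambda-weighted log-sigmoid structure term that rewards adjacent nodes
    for having similar (high inner product) representations."""
    feat = sum(math.sqrt(sum((xi - xh) ** 2 for xi, xh in zip(x[i], x_hat[i])))
               for i in range(len(x)))
    struct = sum(math.log(1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(h[i], h[j])))))
                 for i, j in edges)
    return feat - lam * struct

# Perfect feature reconstruction with lambda = 0 gives zero loss.
x = [[1.0, 2.0], [0.0, 1.0]]
print(gate_loss(x, x, [[1.0, 0.0], [0.0, 1.0]], [(0, 1)], lam=0.0))
```

Because the structure term is a log of a sigmoid (always negative), subtracting $\lambda$ times it adds a positive penalty that shrinks as adjacent representations become more similar.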
# 5 problems (Force, Height, Charge, Centripetal)

1. Aug 31, 2009

### LegendofMyth

These are 5 problems out of a set of 40 or so that were giving me a really hard time. Some I just can't figure out, while others seem to be missing vital information. I'm also having a hard time finding equations for some. Any help, tips or equations that would work would be of great help and highly appreciated. Thanks a lot in advance.

1. The problem statement, all variables and given/known data

1) I threw a ball straight up and caught it at the same location. It took 8.0 seconds for it to go up. How high did it go and how fast was it thrown?

2) I have three forces acting on a 5.0 kg object. What direction does it go, with how much net force is it pushed, and what is its acceleration?
Forces:
30.0 N at 20.0 degrees
40.0 N at 10.0 degrees
10.0 N at 150.0 degrees

3) An inclined plane has an angle of 30.0 degrees. On it is a box with a mass of 60.0 kg which is attached by a massless pulley to another 40.0 kg mass. The frictional coefficient is 0.15. What is the acceleration of the larger mass, with magnitude and direction?

4) On a Cartesian plane, there is a 5.0 nC charge at the origin, a 3.0 nC charge at (-3, 0) and a -7.0 nC charge at (0, -4). What is the net force (magnitude and direction) on the 3.0 nC charge?

5) A bucket of water is attached to a string and swung around. Find the minimum speed required to keep the water inside, if the radius is 0.60 meters.

3. The attempt at a solution

1) This one stumps me. All I'm given is time. In order to figure this out I would think I would have to have values for either the mass of the ball or the speed at which it was thrown.

2) This one is confusing me from more of a visual standpoint. When it gives the angles I'm not sure how to visualize this. I just don't know where the force is pushing on the object. This is how I think of it but I'm pretty sure it's wrong. I just don't know what to use as a marker.
3) I just can't figure out what equation would be relevant to this situation. 4) Once again I can't think of what equation to use. I've searched my notes I have but can't find anything. 5) All I'm given is the radius. The equations I have would require acceleration or mass of the bucket. 2. Aug 31, 2009 ### Staff: Mentor Since all freely falling objects accelerate at the same rate, the mass doesn't matter. Your experience should tell you that the faster the ball leaves your hand, the higher it will go and the longer it will take. So it should make sense that given the time, you can figure out the other quantities. Hint: Review the kinematics of falling bodies. You've got the right idea. Hint: Find the components of the force vectors and add them up. Draw a force diagram for each mass. Apply Newton's 2nd law. Hint: Coulomb's law will tell you how to find the force between two charges. Add the individual forces. Hint: Newton's 2nd law applied to circular motion. 3. Aug 31, 2009 ### rock.freak667 1) 8 seconds for total flight, 4 seconds going up and 4 down. Also, at the highest point, the final velocity is zero. 2) Split all the forces into vertical and horizontal components. Get the resultant horizontal and vertical components 3) Draw out the free body diagram and then find the resultant force in any direction. 4) the force between two point charges Q1 and Q2, separated by a distance r is given by: $$F=\frac{Q_1 Q_2}{4 \pi \epsilon_0 r^2}$$ 5) Not much info is given, so for the water to stay in the bucket which must be greater, the centripetal force or the weight? 4. Aug 31, 2009 ### LegendofMyth Could I get some clarification of #5 and #3 as to what you mean. For 5 am I supposed to be using a form of a=V^2/r? 5. Aug 31, 2009 ### rock.freak667 For 3) you need to draw the diagram first, then you can say that the 40kg mass moves downwards and then find the forces that are associated with this motion. For 5) what happens if the weight is more than the centripetal force? 
6. Aug 31, 2009

### LegendofMyth

Then the water will fall out of the bucket, I assume. Ohhhhh. So it has to be at least 9.8 m/s?

7. Aug 31, 2009

### rock.freak667

For the water to stay in the bucket, the centripetal force cannot be less than the weight. So centripetal force ≥ weight. The minimum occurs when centripetal force = weight. You can find v now.

8. Sep 1, 2009

### LegendofMyth

I came across one other problem that stumped me.

1) I have a mass of 65.0 kg. How many helium balloons will be required to keep me afloat? The volume of the balloons is 0.25 m^3. The density of the air is 1.7 kg/m^3 and the density of helium is around 1.2 kg/m^3. Ignore the mass of the rubber used for the balloons.

I can't figure out what formula to use and how to get started. Any advice would be greatly appreciated. Thanks.

9. Sep 1, 2009

### rock.freak667

Archimedes' principle: upthrust = weight of fluid displaced. So the helium balloons must displace air whose weight balances your weight.
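Following the hints in the thread, both problems reduce to one-line computations. A quick numerical check (our sketch, not the thread's own worked solution, and using the densities exactly as the problem states them):

```python
import math

g = 9.8  # m/s^2

# Problem 5: at the top of the circle, the water stays in when
# m*v^2/r >= m*g, so the minimum speed is v = sqrt(g*r) -- the mass cancels.
r = 0.60
v_min = math.sqrt(g * r)
print(f"minimum speed: {v_min:.2f} m/s")  # about 2.42 m/s, not 9.8 m/s

# Balloon problem: each balloon's net lift (in kg supported) is
# V * (rho_air - rho_He), by Archimedes' principle.
V, rho_air, rho_he, mass = 0.25, 1.7, 1.2, 65.0
n = math.ceil(mass / (V * (rho_air - rho_he)))
print(f"balloons needed: {n}")
```

Note that the dimensionally meaningful result of the condition `centripetal force = weight` is a speed of a few m/s; 9.8 is the value of g, not of v.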
# 5.7 Notes

In [192], it is shown that $1 \neq \hat{u} \left( q^{-5} \right)$. It is essential to consider that $H$ may be ultra-separable. In [41, 253], the main result was the derivation of moduli. Moreover, it was Fermat who first asked whether triangles can be characterized. Recently, there has been much interest in the classification of points. This reduces the results of [218] to a recent result of Lee [13]. It is not yet known whether $\nu ” \le {\mathfrak {{a}}_{\mathcal{{N}},f}}$, although [296] does address the issue of existence. The groundbreaking work of D. Thompson on Euclidean matrices was a major advance. Here, stability is trivially a concern. Therefore in this context, the results of [46] are highly relevant. Is it possible to describe left-differentiable, pseudo-embedded primes? Next, here, existence is obviously a concern. Recent interest in independent, super-reducible, compactly co-maximal factors has centered on describing conditionally real random variables. In [244], it is shown that $\mathscr {{M}} \ge \sqrt {2}$. A useful survey of the subject can be found in [71]. In [191], it is shown that $D” \sim | \epsilon ” |$. This reduces the results of [200] to the general theory. It was Pascal–Lindemann who first asked whether intrinsic polytopes can be derived. It is essential to consider that $\tilde{\mathbf{{p}}}$ may be left-multiply Markov. A useful survey of the subject can be found in [204]. Every student is aware that ${A_{\mathcal{{F}}}} \le i$. In contrast, it is not yet known whether $| w | \supset \mathfrak {{q}}$, although [157] does address the issue of injectivity. In [46], the authors address the compactness of globally empty, intrinsic subalgebras under the additional assumption that ${u^{(\mathcal{{Q}})}} \supset \mathfrak {{q}}$. In [41, 236], the authors characterized hulls. In [191], the authors address the uniqueness of abelian subrings under the additional assumption that $\mathbf{{e}} = T$.
In [183], the main result was the extension of isomorphisms. In [211], the main result was the construction of random variables. On the other hand, in [155], the authors described isometries.
# Texas Go Math Grade 7 Lesson 8.3 Answer Key Writing Two-Step Inequalities

Refer to our Texas Go Math Grade 7 Answer Key Pdf to score good marks in the exams. Test yourself by practicing the problems from Texas Go Math Grade 7 Lesson 8.3 Answer Key Writing Two-Step Inequalities.

## Texas Go Math Grade 7 Lesson 8.3 Answer Key Writing Two-Step Inequalities

Modeling Two-Step Inequalities

You can use algebra tiles to model two-step inequalities. Use algebra tiles to model 2k + 5 ≥ -3.

A. Using the line on the mat, draw in the inequality symbol shown in the inequality.
B. How can you model the left side of the inequality?
C. How can you model the right side of the inequality?
D. Use algebra tiles or draw them to model the inequality on the mat.

Reflect

Question 1. Multiple Representations How does your model differ from the one you would draw to model the equation 2k + 5 = -3?
On each side we put the same number of tiles as in the model of the equation 2k + 5 = -3. The only difference is that instead of an equals sign, we have an inequality sign.

Question 2. Why might you need to change the inequality sign when you solve an inequality using algebra tiles?
You need to reverse the inequality sign when you multiply or divide both sides by a negative number, which happens when the variable has a negative coefficient.

Question 3. The 45 members of the glee club are trying to raise at least $6,000 so they can compete in the state championship. They already have $1,240. What inequality can you write to find the amount each member must raise, on average, to meet the goal?
Let x be the amount each member must raise. $1240 + 45 ∙ x ≥ $6000

Question 4. Ella has $40 to spend at the State Fair. Admission is $6 and each ride costs $3. Write an inequality to find the greatest number of rides she can go on.
Let x be the number of rides Ella can go on. $6 + $3 ∙ x ≤ $40

Write a real-world problem for each inequality.

Question 5. 3x + 10 > 30
In order to get a bonus this month, Leon must sell more than 30 newspaper subscriptions. He sold 10 subscriptions in the first week of the month. How many subscriptions must Leon sell per week if there are three more weeks in the month?

Question 6. 5x – 50 ≤ 100
You have to pay $50 to rent a spot at the flea market to sell your stuff. You make $5 profit on each item you sell. How many do you have to sell to make $100 profit or less?

Texas Go Math Grade 7 Lesson 8.3 Guided Practice Answer Key

Draw algebra tiles to model each two-step inequality. (Explore Activity)

Question 1. 4x – 5 < 7
On the left side of the mat we put 4 positive variable tiles and five –1 tiles. On the right side we put seven +1 tiles, and between them is the inequality sign.

Question 2. -3x + 6 > 9
On the left side of the mat we put 3 negative variable tiles and six +1 tiles. On the right side we put nine +1 tiles, and between them is the inequality sign.

Question 3. The booster club needs to raise at least $7,000 for new football uniforms. So far, they have raised $1,250. Write an inequality to find the average amounts each of the 92 members can raise to meet the club’s objective. (Example 1)
Amount to be raised: $7000. Amount already raised: $1250. Number of members: 92.
Let a represent the amount each member must raise. The inequality that represents the situation is 1250 + 92 ∙ a ≥ 7000

Question 4. Analyze what each part of 7x – 18 ≤ 32 means mathematically. (Example 2)
x is the solution (the unknown quantity).
7x is the solution multiplied by 7.
– 18 means that 18 is subtracted from 7x.
≤ 32 means that the result is at most 32.

Question 5. Write a real-world problem to represent 7x – 18 ≤ 32.
Lenna has $32 and wants to buy the same Christmas sweaters for 7 friends.
She also has an $18 gift card that she plans to use. What is the most one sweater can cost?

Essential Question Check-In

Question 6. Describe the steps you would follow to write a two-step inequality you can use to solve a real-world problem.
The steps in writing a two-step inequality to solve a real-world problem:
1. Read the problem carefully and identify what is needed. Assign a variable for that unknown.
2. Write down the important information that can be used to help write an inequality.
3. Use words in the problem to tie the information together. Translate these words into numbers and variables in order to make an inequality.

Texas Go Math Grade 7 Lesson 8.3 Independent Practice Answer Key

Question 7. Three friends earned more than $200 washing cars. They paid their parents $28 for supplies and divided the rest of the money equally. Write an inequality to find possible amounts each friend earned. Identify what your variable represents.
Let x be the possible amount each friend received. The three equal shares plus the $28 for supplies make up the total earned, which was more than $200: 3x + $28 > $200

Question 8. Nick has $7.00. Bagels cost $0.75 each, and a small container of cream cheese costs $1.29. Write an inequality to find the numbers of bagels Nick can buy. Identify what your variable represents.
A container of cream cheese costs $1.29, and each bagel costs $0.75. Let x be the number of bagels Nick can buy with his $7. $0.75 ∙ x + $1.29 ≤ $7

Question 9. Chet needs to buy 4 work shirts, all costing the same amount. The total cost before Chet applies a $25 gift certificate can be no more than $75. Write an inequality to find the possible amounts that Chet pays per shirt. Identify what your variable represents.
Assign a variable for the unknown value. Let x be the cost of each shirt. The inequality for the given problem is: 4x + 25 ≤ 75

Question 10.
Due to fire laws, no more than 720 people may attend a performance at Metro Auditorium. The balcony holds 120 people. There are 32 rows on the ground floor, each with the same number of seats. Write an inequality to find the numbers of people that can sit in a ground-floor row if the balcony is full. Identify what your variable represents.
Answer: We sum the number of people on the balcony and the number of people who can sit in the 32 rows on the ground floor; that sum cannot be greater than 720. Let x be the number of people in each ground-floor row. The inequality is 32 ∙ x + 120 ≤ 720.

Question 11. Liz earns a salary of $2,100 per month, plus a commission of 5% of her sales. She wants to earn at least $2,400 this month. Write an inequality to find amounts of sales that will meet her goal. Identify what your variable represents.
Answer: We sum Liz’s monthly salary and the 5% commission on her sales to find the amounts of sales that will meet her goal. Let x be the amount of sales.
$2100 + 5% ∙ x ≥ $2400
Since 5% can be written as the fraction 1/20:
$2100 + $$\frac{1}{20}$$ ∙ x ≥ $2400
$2100 + $$\frac{x}{20}$$ ≥ $2400

Question 12. Lincoln Middle School plans to collect more than 2,000 cans of food in a food drive. So far, 668 cans have been collected. Write an inequality to find numbers of cans the school can collect on each of the final 7 days of the drive to meet this goal. Identify what your variable represents.
Answer: We sum the 668 cans that have already been collected and the cans to be collected over the final 7 days; to meet the goal, this sum must be greater than 2,000. Let x be the number of cans the school should collect each day. The inequality is 668 + 7 ∙ x > 2000.

Question 13. Joanna joins a CD club. She pays $7 per month plus $10 for each CD that she orders. Write an inequality to find how many CDs she can purchase in a month if she spends no more than $100. Identify what your variable represents.
Answer: We sum the $7 monthly fee and the cost of the CDs, at $10 each, that Joanna can purchase in a month while spending no more than $100.
Let x be the number of CDs. The inequality is $7 + $10 ∙ x ≤ $100.

Question 14. Lionel wants to buy a belt that costs $22. He also wants to buy some shirts that are on sale for $17 each. He has $80. What inequality can you write to find the number of shirts he can buy? Identify what your variable represents.
Answer: We sum the price of the belt, $22, and the cost of the shirts, at $17 each, that Lionel can buy; that sum can be at most $80. Let x be the number of shirts Lionel can buy. The inequality is $22 + $17 ∙ x ≤ $80.

Question 15. Write and solve a real-world problem that can be represented by 15x – 20 ≤ 130.
Answer: Given inequality: 15x – 20 ≤ 130
15x – 20 ≤ 130 (Given)
15x – 20 + 20 ≤ 130 + 20 (Adding 20 to both sides)
15x ≤ 150 (Simplifying)
x ≤ $$\frac{150}{15}$$ (Dividing both sides by 15)
x ≤ 10 (Solution)
x ∈ (-∞, 10]

Analyze Relationships. Write >, <, ≥, or ≤ in the blank to express the given relationship.

Question 16. m is at least 25: m __________ 25
Answer: m ≥ 25

Question 17. k is no greater than 9: k __________ 9
Answer: k ≤ 9

Question 18. p is less than 48: p __________ 48
Answer: p < 48

Question 19. b is no more than -5: b __________ -5
Answer: b ≤ -5

Question 20. h is at most 56: h __________ 56
Answer: h ≤ 56

Question 21. w is no less than 0: w __________ 0
Answer: w ≥ 0

Question 22. Critical Thinking. Marie scored 95, 86, and 89 on three science tests. She wants her average score for 6 tests to be at least 90. What inequality can you write to find the average scores that she can get on her next three tests to meet this goal? Use s to represent the lowest average score.
Answer: Let s represent the lowest average score on the next three tests Marie takes. The next three tests then contribute 3s points to the total, so $$\frac{95+86+89+3s}{6}$$ ≥ 90.

Question 23. Communicate Mathematical Ideas. Write an inequality that expresses the reason the lengths 5 feet, 10 feet, and 20 feet could not be used to make a triangle. Explain how the inequality demonstrates that fact.
Answer: The triangle inequality states that the sum of the lengths of any two sides of a triangle must be greater than the length of the third side.
Let a, b and c be the sides of the triangle. The inequalities that must be satisfied are:
a + b > c
a + c > b
b + c > a
In our case, although
10 + 20 > 5
20 + 5 > 10
the sum of the lengths of the two shorter sides is less than the length of the third side:
5 + 10 < 20
Hence, these lengths could not be used to make a triangle.

Question 24. Analyze Relationships. The number m satisfies the relationship m < 0. Write an inequality expressing the relationship between -m and 0. Explain your reasoning.
Answer: The number m is less than 0, so m could be -1, -2, -3, etc. If we take the negative of the number m, we get -(-1) = 1, -(-2) = 2, -(-3) = 3, etc. Hence, -m is greater than 0. Alternatively, we could simply multiply the inequality m < 0 by -1. Because we are multiplying by a negative number, we have to reverse the direction of the inequality:
m < 0
Multiply both sides by -1:
-m > 0

Question 25. Analyze Relationships. The number n satisfies the relationship n > 0. Write three inequalities to express the relationship between n and $$\frac{1}{n}$$.
Answer:
If 0 < n < 1, then n < $$\frac{1}{n}$$.
If n = 1, then n = $$\frac{1}{n}$$.
If n > 1, then n > $$\frac{1}{n}$$.
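The worked solution in Question 15 and the triangle test in Question 23 can be spot-checked numerically. Here is a minimal Python sketch (purely illustrative, not part of the printed answer key; the helper name `is_triangle` is my own):

```python
# Question 15: every integer x <= 10 satisfies 15x - 20 <= 130, and no larger one does.
assert all(15 * x - 20 <= 130 for x in range(-50, 11))
assert all(15 * x - 20 > 130 for x in range(11, 50))

# Question 23: a triangle needs the sum of every pair of sides to exceed the third side.
def is_triangle(a, b, c):
    return a + b > c and a + c > b and b + c > a

print(is_triangle(5, 10, 20))  # False: 5 + 10 < 20
print(is_triangle(5, 10, 12))  # True
```

The `False` result for (5, 10, 20) is exactly the failure 5 + 10 < 20 identified in the written answer.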
# Draw a graph of DFA for a regular language I'm trying to draw a DFA graph for the regular language where every chain: * consists of symbols from the set {1,a,b}. * starts with the subchain '1a'. * includes at least one subchain 'aa'. Output chains: $1aa, 1abaa, 1aaba, 1aaa, 1aaab, 1aab1a, \space ..., \space etc.$ There's no problem with checking the $'1a'$ subchain, but there is a problem with requiring a doubled $'a'$ terminal somewhere in the chain, so that at least one $'aa'$ subchain occurs. The shortest chain here is $'1aa'$, and the graph states change from $q_0$ to $q_3$ (which will possibly be the DFA's end state). If I draw a $(1,b)$-cycle for the $q_2$ state, the DFA might come to the end state $q_3$ on reading a wrong chain like $1aba$. On the other hand, my DFA will only allow chains starting from $1aa$ (not $1a$), and that is not correct either. What should I correct here to add the "at least one $'aa'$ check" feature? Or offer your version of this DFA graph. • This question is a follow-up to two previous questions by the same user, and it should reference them. The OP does not seem to understand that his way of finding answers is not adequate, and he should learn general techniques for building FA, DFA, regular expressions and the like. Asking repeatedly for the same specific problem is wasting his time and ours. He should read a textbook on regular languages and finite automata, or ask for pointers on the web if he cannot access a textbook. Is that the case? He should fill his SE profile so that we can know if he has a problem. – babou May 19 '14 at 13:51 • If giving straight answers to the questions is quite a problematic task for you, do not answer. I'm not forcing you to waste your own precious time. Besides, right now you are 'wasting your time' on uselessly writing your negative review. When I have time, I also help people to find solutions to the problems they cannot solve, by giving straight answers to the questions.
And this is normal. – Happy Torturer May 19 '14 at 21:19 • There is a saying: give a fish to a person and he will eat today, but teach him to fish and he will eat every day. You should learn to fish rather than keep asking every day for a new fish. I told you there are systematic techniques to build these things related to finite state automata. That is what you should learn. Trap states are not a tool that you can use or not use. Their presence depends only on a property of the language you are defining, and there is nothing you can do about it. There will necessarily be one (at least) if needed, and none if not needed. – babou May 19 '14 at 22:01 • "There will necessarily be one (at least) if needed". No, a trap state is not necessary for building an FA, not even one. Assume that the language chain has to start from '01' and contain an odd number of '1', having VT = {0,1,2}. The shortest chain here will be '01', so there are a minimum of 3 states: q0, q1, q2, where q2 will be the end state. There are 2 ways: to use a trap state (between q0 and q1) or to use an extra state q3 (going after q2) instead. The only way to enter/exit state q3 is the terminal '1', while q3 has its own cycle (0,2). – Happy Torturer May 20 '14 at 11:00 • "You should learn to fish rather than keep asking every day for a new fish." - You are talking like a real fisherman! – Happy Torturer May 20 '14 at 11:15 $T$ is a 'trap state'. $q_3$ is the final accepting state. From state $T$, all inputs $a$, $b$, $1$ will direct to $T$ itself. There's a self-loop over state $q_4$ for inputs $1$ and $b$. • But how do we move from $q_2$ to $q_3$ on the $1$ and $b$ terminals, and what does an overline over the $'a'$ terminal mean? Or does it mean "not a", as in probability theory? I've never seen terminals with overlines in any of my theory examples.
– Happy Torturer May 18 '14 at 11:00 • When we reach $q_2$, it means we have $1a$ as the prefix of the string and the previous symbol encountered was $a$, for example $1aa, 1abbba, 1a111ba$, etc. If we encounter another (consecutive) $a$, we reach $q_3$. We move to $q_4$ from $q_2$ if we scan a symbol other than $a$, i.e. $1$ or $b$. – preetsaimutneja May 18 '14 at 11:10 • Let's suppose we've just read a $'1'$ or $'b'$ terminal while in state $q_2$. We moved to $q_4$, right? But then, while in $q_4$, we get another $'1'$ or $'b'$ terminal. How will the DFA get out of state $q_4$? I suppose that $q_4$ needs a not-$a$-cycle. – Happy Torturer May 18 '14 at 11:25 • When in $q_4$, all inputs except $a$ do not change the state of the DFA. Only input $a$ will cause the DFA to move out of $q_4$. – preetsaimutneja May 18 '14 at 11:32
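The accepted construction can be written out as a transition table and simulated directly. Here is a minimal Python sketch of that DFA (the state names follow the answer above; the trap state $T$ absorbs every input, and $q_3$ loops on every input once $'aa'$ has been seen after the $'1a'$ prefix):

```python
ACCEPT, TRAP = "q3", "T"

# delta[(state, symbol)] -> next state, over the alphabet {1, a, b}
delta = {
    ("q0", "1"): "q1", ("q0", "a"): TRAP, ("q0", "b"): TRAP,
    ("q1", "a"): "q2", ("q1", "1"): TRAP, ("q1", "b"): TRAP,
    ("q2", "a"): "q3", ("q2", "1"): "q4", ("q2", "b"): "q4",  # previous symbol was 'a'
    ("q4", "a"): "q2", ("q4", "1"): "q4", ("q4", "b"): "q4",  # waiting for the next 'a'
    ("q3", "a"): "q3", ("q3", "1"): "q3", ("q3", "b"): "q3",  # accepting sink
    (TRAP, "a"): TRAP, (TRAP, "1"): TRAP, (TRAP, "b"): TRAP,  # dead state
}

def accepts(chain: str) -> bool:
    state = "q0"
    for symbol in chain:
        state = delta[(state, symbol)]
    return state == ACCEPT

# Chains from the question are accepted; '1aba' and '1a' are correctly rejected.
for s in ["1aa", "1abaa", "1aaba", "1aab1a", "1aba", "1a"]:
    print(s, accepts(s))
```

On input $1aba$ the run ends in $q_2$, not $q_3$, which resolves the concern in the question: the $(1,b)$-cycle lives on $q_4$ rather than $q_2$, and only a second consecutive $a$ (the move $q_2 \to q_3$) accepts.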
### Recent Submissions • #### Symmetry-dependent field-free switching of perpendicular magnetization (Nature Nanotechnology, Springer Science and Business Media LLC, 2021-01-18) [Article] Modern magnetic-memory technology requires all-electric control of perpendicular magnetization with low energy consumption. While spin–orbit torque (SOT) in heavy metal/ferromagnet (HM/FM) heterostructures1,2,3,4,5 holds promise for applications in magnetic random access memory, until today, it has been limited to the in-plane direction. Such in-plane torque can switch perpendicular magnetization only deterministically with the help of additional symmetry breaking, for example, through the application of an external magnetic field2,4, an interlayer/exchange coupling6,7,8,9 or an asymmetric design10,11,12,13,14. Instead, an out-of-plane SOT15 could directly switch perpendicular magnetization. Here we observe an out-of-plane SOT in an HM/FM bilayer of L11-ordered CuPt/CoPt and demonstrate field-free switching of the perpendicular magnetization of the CoPt layer. The low-symmetry point group (3m1) at the CuPt/CoPt interface gives rise to this spin torque, hereinafter referred to as 3m torque, which strongly depends on the relative orientation of the current flow and the crystal symmetry. We observe a three-fold angular dependence in both the field-free switching and the current-induced out-of-plane effective field. Because of the intrinsic nature of the 3m torque, the field-free switching in CuPt/CoPt shows good endurance in cycling experiments. Experiments involving a wide variety of SOT bilayers with low-symmetry point groups16,17 at the interface may reveal further unconventional spin torques in the future. 
• #### Efficient bifacial monolithic perovskite/silicon tandem solar cells via bandgap engineering (Nature Energy, Springer Science and Business Media LLC, 2021-01-11) [Article] Bifacial monolithic perovskite/silicon tandem solar cells exploit albedo—the diffuse reflected light from the environment—to increase their performance above that of monofacial perovskite/silicon tandems. Here we report bifacial tandems with certified power conversion efficiencies >25% under monofacial AM1.5G 1 sun illumination that reach power-generation densities as high as ~26 mW cm–2 under outdoor testing. We investigated the perovskite bandgap required to attain optimized current matching under a variety of realistic illumination and albedo conditions. We then compared the properties of these bifacial tandems exposed to different albedos and provide energy yield calculations for two locations with different environmental conditions. Finally, we present a comparison of outdoor test fields of monofacial and bifacial perovskite/silicon tandems to demonstrate the added value of tandem bifaciality for locations with albedos of practical relevance. • #### Response to Zöller et al.'s critique on “Potential short-term earthquake forecasting by farm-animal monitoring” (Ethology, Wiley, 2021-01-11) [Article] Zöller et al. (Ethology, 2020) criticize our original publication (Wikelski et al., Ethology, 126(9), 2020, 931) for obvious reasons: we only observed the behavior of one group of farm animals before, during and after one earthquake series in one area of the world. It is clear that no earthquake predictions are possible, and should not be attempted, from this data set. However, what we show is that there is important information within this animal collective pertaining to potential future local forecasting of earthquakes when combined with traditional data sources. 
We maintain that combining Zöller et al.'s (2020) modeling tools with the adequate use of our data can stimulate novel ways of earthquake forecasting. Future studies should combine both approaches. 
• #### Coating of Conducting and Insulating Threads with Porous MOF Particles through Langmuir-Blodgett Technique (Nanomaterials, MDPI AG, 2021-01-10) [Article] • #### Chain Conformation Control of Fluorene-Benzothiadiazole Copolymer Light-Emitting Diode Efficiency and Lifetime (ACS Applied Materials & Interfaces, American Chemical Society (ACS), 2021-01-07) [Article] The β-phase, in which the intermonomer torsion angle of a fraction of chain segments approaches ∼180°, is an intriguing conformational microstructure of the widely studied light-emitting polymer poly(9,9-dioctylfluorene) (PFO). Its generation can in turn be used to significantly improve the performance of PFO emission-layer-based light-emitting diodes (LEDs). Here, we report the generation of β-phase chain segments in a copolymer, 90F8:10BT, containing 90% 9,9-dioctylfluorene (F8) and 10% 2,1,3-benzothiadiazole (BT) units and show that significant improvements in performance also ensue for LEDs with β-phase 90F8:10BT emission layers, generalizing the earlier PFO results. The β-phase was induced by both solvent vapor annealing and dipping copolymer thin films into a solvent/nonsolvent mixture. Subsequent absorption spectra show the characteristic fluorene β-phase peak at ∼435 nm, but luminescence spectra (∼530 nm peak) and quantum yields barely change, with the emission arising following efficient energy transfer to the lowest-lying excited states localized in the vicinity of the BT units. For ∼5% β-phase chain segment fraction relative to 0% β-phase, the LED luminance at 10 V increased by ∼25% to 5940 cd m–2, the maximum external quantum efficiency by ∼61% to 1.91%, and the operational stability from 64% luminance retention after 20 h of operation to 90%. 
Detailed studies addressing the underlying device physics identify a reduced hole injection barrier, higher hole mobility, correspondingly more balanced electron and hole charge transport, and decreased carrier trapping as the dominant factors. These results confirm the effectiveness of chain conformation control for fluorene-based homo- and copolymer device optimization. 
• #### Dark Self-Healing Mediated Negative Photoconductivity of Lead-Free Cs3Bi2Cl9 Perovskite Single Crystal (arXiv, 2021-01-07) [Preprint] Halide perovskites have recently emerged as one of the frontline optoelectronic materials for device applications and have been extensively studied in the past few years. While lead-based materials have been the most widely explored among these, investigation of the optical properties of lead-free perovskites is limited. Being optically active, these materials were expected to show light-induced enhanced photoconductivity, as was reported for lead halide perovskite single crystals. However, on the contrary, light-induced degradation of bismuth halide perovskite Cs3Bi2Cl9 single crystals is reported herein, evidenced by negative photoconductivity with slow recovery. Femtosecond transient reflectance (fs-TR) spectroscopy studies further revealed that these electronic transport properties were due to the formation of light-activated metastable trap states within the perovskite crystal. The figures of merit of Cs3Bi2Cl9 single-crystal detectors, such as responsivity (17 mA/W), detectivity (6.23 × 10^11 Jones) and the ratio of current in dark to light (~7160), were calculated and found to be comparable to or higher than those of reported positive photodetectors based on perovskite single crystals. This observation of optically active lead-free perovskite single crystals showing a negative photocurrent response on irradiation remains unique for such materials. 
• #### Ion-exchange doped polymers at the degenerate limit: what limits conductivity at 100% doping efficiency? (arXiv, 2021-01-05) [Preprint] Doping of semiconducting polymers has seen a surge in research interest driven by emerging applications in sensing, bioelectronics and thermoelectrics. A recent breakthrough was a doping technique based on ion-exchange, which separates the redox and charge compensation steps of the doping process. The improved microstructural control this process allows enables us for the first time to systematically address a longstanding but still poorly understood question: what limits the electrical conductivity at high doping levels? Is it the formation of charge carrier traps in the Coulomb potentials of the counterions, or is it the structural disorder in the polymer lattice? Here, we apply ion-exchange doping to several classes of high mobility conjugated polymers and identify experimental conditions that achieve near 100% doping efficiency under degenerate conditions with nearly 1 charge per monomer. We demonstrate very high conductivities up to 1200 S/cm in semicrystalline polymer systems, and show that in this regime conductivity is poorly correlated with ionic size, but strongly correlated with paracrystalline disorder. This observation, backed by a detailed electronic structure model that incorporates ion-hole and hole-hole interactions and a carefully parameterized model of disorder, indicates that trapping by dopant ions is negligible, and that maximizing crystalline order is critical to improving conductivity. • #### Laminar Burning Velocities of Formic Acid and Formic Acid/Hydrogen Flames: An Experimental and Modeling Study (Energy & Fuels, American Chemical Society (ACS), 2021-01-05) [Article] Laminar flame speed of formic acid and formic acid/hydrogen (4/1) flames was studied both experimentally and numerically. 
Experiments with flames of pure formic acid were performed at temperatures of 373 and 423 K, while for formic acid/hydrogen flames the temperature value was 368 K. All of the experiments were performed under atmospheric pressure and at an equivalence ratio ranging from 0.5 to 1.5. To measure the laminar flame speed, the heat flux balance technique was applied. Three detailed chemical-kinetic mechanisms were tested on the experimental data. Experiments showed that addition of 20% of hydrogen increases the laminar burning velocity of formic acid, for example by a factor of around 1.5 for stoichiometric flames. The comparison of experimental and numerical data showed that all models tend to overestimate the laminar burning velocities of the studied flames, especially in the case of rich flames. The obtained results indicate that further improvement of existing chemical-kinetic models of formic acid oxidation is highly required. 
• #### Hole-Type Spacers for More Stable Shale Gas-Produced Water Treatment by Forward Osmosis (Membranes, MDPI AG, 2021-01-03) [Article] An appropriate spacer design helps in minimizing membrane fouling, which remains the major obstacle in forward osmosis (FO) systems. In the present study, the performance of a hole-type spacer (having holes at the filament intersections) was evaluated in a FO system and compared to a standard spacer design (without holes). The hole-type spacer exhibited slightly higher water flux and reverse solute flux (RSF) when Milli-Q water was used as feed solution and varied sodium chloride concentrations as draw solution. During shale gas produced water treatment, a severe flux decline was observed for both spacer designs due to the formation of barium sulfate scaling. SEM imaging revealed that the high shear force induced by the creation of holes led to the formation of scales on the entire membrane surface, causing a slightly higher flux decline than the standard spacer. 
Simultaneously, the presence of holes helped mitigate the accumulation of foulants on the spacer surface, resulting in no increase in pressure drop. Furthermore, full cleaning efficiency was achieved by the hole-type spacer, attributed to the micro-jet effect induced by the holes, which helped destroy the foulants and then sweep them away from the membrane surface. 
• #### Experimental and numerical study of polycyclic aromatic hydrocarbon formation in ethylene laminar co-flow diffusion flames (Fuel, Elsevier, 2021) [Article] Recent literature kinetic studies revealed the importance of new mechanisms for polycyclic aromatic hydrocarbon (PAH) and soot inception beyond the hydrogen–abstraction–acetylene–addition (HACA) and hydrogen–abstraction–vinylacetylene–addition (HAVA) mechanisms in the combustion of ethylene and other hydrocarbons. The co-flow diffusion flame is a canonical flame used to investigate the interaction between fluid dynamics and PAH chemistry. In this study, the supersonic molecular beam sampling technique was utilized for the first time with synchrotron vacuum ultraviolet photoionization mass spectrometry (SVUV-PIMS) to measure a laminar co-flow diffusion flame at atmospheric pressure. We report quantitative measurement of precursor radicals as well as critical intermediates and odd carbon number PAH species. A custom-designed computational code, based on OpenFOAM and Cantera, was adopted to simulate laminar co-flow diffusion flames with a literature kinetic model. Chemical kinetic analyses show that addition reactions of odd carbon number species provide a considerable contribution to PAH formation processes besides the HACA and HAVA mechanisms. Reasonable mass growth reactions are postulated for aromatic species with odd carbon numbers, such as ethynyl-indene, fluorene, and benzo-indene, which need further investigation. 
Reactions of resonantly stabilized radicals followed by ring expansion are shown to be critical for both odd and even carbon number aromatics, and are suggested to be included in future PAH models. • #### Revisiting Reynolds and Nusselt numbers in turbulent thermal convection (Physics of Fluids, AIP Publishing, 2021-01-01) [Article] In this paper, we extend Grossmann and Lohse’s (GL) model [S. Grossmann and D. Lohse, “Thermal convection for large Prandtl numbers,” Phys. Rev. Lett. 86, 3316 (2001)] for the predictions of Reynolds number (Re) and Nusselt number (Nu) in turbulent Rayleigh–Bénard convection. Toward this objective, we use functional forms for the prefactors of the dissipation rates in the bulk and boundary layers. The functional forms arise due to inhibition of nonlinear interactions in the presence of walls and buoyancy compared to free turbulence, along with a deviation of the viscous boundary layer profile from Prandtl–Blasius theory. We perform 60 numerical runs on a three-dimensional unit box for a range of Rayleigh numbers (Ra) and Prandtl numbers (Pr) and determine the aforementioned functional forms using machine learning. The revised predictions are in better agreement with the past numerical and experimental results than those of the GL model, especially for extreme Prandtl numbers • #### Propane Dehydrogenation Catalyzed by Single Lewis Acid Site in Sn-Beta Zeolite (Journal of Catalysis, Elsevier BV, 2021-01) [Article] The gap between supply and demand of propylene has become more and more evident, because of a large consumption of the downstream products derived from propylene. Propane dehydrogenation (PDH) constitutes an important alternative for the production of propylene, and thus considerable attention has been paid to the development of eco-friendly and cost-efficient catalysts for this process. 
Herein, we discover that the Sn-Beta zeolite with Lewis acid sites can activate the C-H bond, and exhibits high catalytic performance in the PDH. XRD, STEM, and XPS characterizations confirm that Sn species are incorporated into the zeolite framework, and H2-TPR suggests that there is a strong interaction between Sn species and zeolite framework. It is found that the Lewis acid is the active site for dehydrogenation reaction, and the Brønsted acid is responsible for cracking reaction. The dehydrogenation rate/cracking rate is positively proportional to the L/B ratio, and a high L/B ratio is beneficial for the propane dehydrogenation reaction. The Na-Sn-Beta-30 catalyst possessing the highest amount of Lewis acid but the lowest Brønsted/Lewis ratio, exhibits the best performance in the PDH, which delivers propane conversion of 40% and propylene selectivity of 92%. Most importantly, these Sn-Beta zeolites are extremely stable without any detectable deactivation under the harsh reaction condition for 72 hours. Density functional theory calculations reveal that both Sn and adjacent O atom or OH group cooperatively act as the active sites. The PDH occurs through the direct reaction mechanism in which hydrogen molecule is produced by the direct coupling of H atom of primary C3H7 motif with the Brønsted proton in closed sites or the proton of water in open sites. It seems that open sites are more reactive than the closed ones, and the intrinsic enthalpy barriers are calculated to be 242 ∼ 301 kJ/mol depending on the hydroxylation extents. These efficient Sn-Beta zeolites could provide a new possibility for the development of a new generation of PDH catalysts with a high stability for the production of propylene. • #### Engineered Microgels—Their Manufacturing and Biomedical Applications (Micromachines, MDPI AG, 2021-01-01) [Article] Microgels are hydrogel particles with diameters in the micrometer scale that can be fabricated in different shapes and sizes. 
Microgels are increasingly used for biomedical applications and for biofabrication due to their interesting features, such as injectability, modularity, porosity and tunability with respect to size, shape and mechanical properties. Fabrication methods of microgels are divided into two categories, following a top-down or bottom-up approach. Each approach has its own advantages and disadvantages and requires certain sets of materials and equipment. In this review, we discuss fabrication methods of both top-down and bottom-up approaches and point to their advantages as well as their limitations, with more focus on the bottom-up approaches. In addition, the use of microgels for a variety of biomedical applications will be discussed, including microgels for the delivery of therapeutic agents and microgels as cell carriers for the fabrication of 3D bioprinted cell-laden constructs. Microgels made from well-defined synthetic materials with a focus on rationally designed ultrashort peptides are also discussed, because they have been demonstrated to serve as an attractive alternative to much less defined naturally derived materials. Here, we will emphasize the potential and properties of ultrashort self-assembling peptides related to microgels. 
• #### Imaging of organic signals in individual fossil diatom frustules with nanoSIMS and Raman spectroscopy (Marine Chemistry, Elsevier BV, 2021-01) [Article] The organic matter occluded in the silica of fossil diatom frustules is thought to be protected from diagenesis and used for paleoceanographic reconstructions. However, the location of the organic matter within the frustule has hitherto not been identified. Here, we combined high spatial resolution imaging by nanoSIMS and Raman micro-spectroscopy to identify where the organic material is retained in cleaned fossil diatom frustules. NanoSIMS imaging revealed that organic signals were present throughout the frustule but in higher concentrations at the pore walls. 
Raman measurements confirmed the heterogeneous presence of organics but could not, because of lower spatial resolution, resolve the spatial patterns observed by nanoSIMS. 
• #### Noble metal nanowire arrays as an ethanol oxidation electrocatalyst (Nanoscale Advances, Royal Society of Chemistry (RSC), 2021) [Article] Vertically aligned noble metal nanowire arrays were grown on conductive electrodes based on a solution growth method. They show significant improvement of electrocatalytic activity in ethanol oxidation over a re-deposited sample of the same detached nanowires. The unusual morphology provides open diffusion channels and direct charge transport pathways, in addition to the high electrochemically active surface from the ultrathin nanowires. Our best nanowire arrays exhibited much enhanced electrocatalytic activity, achieving a 38.0-fold increase in specific activity over that of commercial catalysts for ethanol electrooxidation. The structural design provides a new direction to enhance the electrocatalytic activity and reduce the size of electrodes for miniaturization of portable electrochemical devices. 
• #### CO2 hydrogenation to methanol and hydrocarbons over bifunctional Zn-doped ZrO2/zeolite catalysts (Catalysis Science & Technology, Royal Society of Chemistry (RSC), 2021) [Article] The tandem process of carbon dioxide hydrogenation to methanol and its conversion to hydrocarbons over mixed metal/metal oxide-zeotype catalysts is a promising path to CO2 valorization. 
• #### On the distillation of waste tire pyrolysis oil: A structural characterization of the derived fractions (Fuel, Elsevier BV, 2020-12-31) [Article] Tire pyrolysis oil (TPO) is a complex mixture of hydrocarbons spanning a wide boiling point range. Due to its complexity, direct implementation of TPO in combustion applications has been challenging. Distillation is a simple method for grouping similar compounds, based on their volatility, thereby facilitating further upgrading and use. 
In this work, TPO was distilled at atmospheric pressure into different fractions (light, low-middle, high-middle, and heavy), and the structural characteristics of each fraction were explored. To this end, advanced analytical techniques such as GC–MS, APPI FT-ICR MS and 1H and 13C NMR were utilized. For the light fraction, GC–MS revealed a significant presence of benzene, toluene, and xylene, as well as limonene. From the APPI FT-ICR MS results, the low-middle, high-middle, and heavy fractions were classified into a number of molecular classes. These include pure hydrocarbons (HC), hydrocarbons containing one sulfur atom (S1), hydrocarbons containing two oxygen atoms (O2), etc. Here, HC and S1 were found to be the most abundant molecular classes in all fractions. Finally, a structural analysis of the functional groups present in each TPO fraction was conducted by 1H and 13C NMR. Average molecular parameters (AMPs), such as the number of aromatic, naphthenic, and olefinic carbons/hydrogens, were determined. In addition, derived AMPs, such as the aromaticity factor (fa), C/H paraffinic, C/H aromatic, etc., were calculated. Fractionation by distillation resulted in concentration of both the sulfur and aromatic compounds in the heaviest fraction. In this manner, effective application and upgrading strategies could be individually designed for each fraction. 
• #### Direct and continuous generation of pure acetic acid solutions via electrocatalytic carbon monoxide reduction. (Proceedings of the National Academy of Sciences of the United States of America, Proceedings of the National Academy of Sciences, 2020-12-31) [Article] Electrochemical CO2 or CO reduction to high-value C2+ liquid fuels is desirable, but its practical application is challenged by impurities from cogenerated liquid products and solutes in liquid electrolytes, which necessitates cost- and energy-intensive downstream separation processes. 
By coupling rational designs in a Cu catalyst and a porous solid electrolyte (PSE) reactor, here we demonstrate direct and continuous generation of pure acetic acid solutions via electrochemical CO reduction. With an optimized edge-to-surface ratio, the Cu nanocube catalyst presents an unprecedented acetate performance at neutral pH with other liquid products greatly suppressed, delivering a maximal acetate Faradaic efficiency of 43%, a partial current of 200 mA·cm−2, an ultrahigh relative purity of up to 98 wt% and excellent stability over 150 h of continuous operation. Density functional theory simulations reveal the role of stepped sites along the cube edge in promoting the acetate pathway. Additionally, a PSE layer, rather than a conventional liquid electrolyte, was designed to separate cathode and anode for efficient ion conduction while not introducing any impurity ions into the generated liquid fuels. Pure acetic acid solutions, with concentrations up to 2 wt% (0.33 M), can be continuously produced by employing the acetate-selective Cu catalyst in our PSE reactor.

• #### Arabidopsis Plant Natriuretic Peptide Is a Novel Interactor of Rubisco Activase (Life, MDPI AG, 2020-12-31) [Article]

Plant natriuretic peptides (PNPs) are a group of systemically acting peptidic hormones affecting solute and solvent homeostasis and responses to biotrophic pathogens. Although an increasing body of evidence suggests that PNPs modulate plant responses to biotic and abiotic stress, which could lead to their potential biotechnological application by conferring increased stress tolerance to plants, the exact mode of PNP action is still elusive. In order to gain insight into PNP-dependent signalling, we set out to identify interactors of the PNP present in the model plant Arabidopsis thaliana, termed AtPNP-A.
Here, we report the identification of Rubisco activase (RCA), a central regulator of photosynthesis that converts Rubisco catalytic sites from a closed to an open conformation, as an interactor of AtPNP-A, through affinity isolation followed by mass spectrometric identification. Surface plasmon resonance (SPR) analyses reveal that full-length recombinant AtPNP-A and the biologically active fragment of AtPNP-A bind specifically to RCA, whereas a biologically inactive scrambled peptide fails to bind. These results are considered in the light of the known functions of PNPs, PNP-like proteins and RCA in biotic and abiotic stress responses.
# Talk

Modul: MAT971 Stochastische Prozesse

## Local limits of uniform triangulations in high genus

Thomas Budzinski

Speaker invited by: Prof. Dr. Jean Bertoin

Date: 03.04.19  Time: 17.15 - 19.00  Room: Y27H12

We study the local limits of triangulations chosen uniformly among those with fixed size and genus, in the regime where the genus is proportional to the size. We show that they converge to the Planar Stochastic Hyperbolic Triangulations introduced by Curien. This generalizes the convergence of uniform planar triangulations to the UIPT of Angel and Schramm, and proves a conjecture of Benjamini and Curien. As a consequence, we obtain new asymptotics on the enumeration of high genus triangulations. This is joint work with Baptiste Louf.
## Main

Coronavirus disease 2019 (COVID-19) is caused by infection with the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)1. Although COVID-19 is primarily a respiratory disease, many patients manifest neurological symptoms, including anosmia and ageusia, nonspecific symptoms such as headache or dizziness, or severe conditions such as cognitive impairment, epilepsy, ataxia or encephalopathy2. These symptoms have been attributed to the secondary effects of the systemic SARS-CoV-2 infection (that is, hypoxemia produced by severe pneumonia, cytokine release syndrome induced by hyperactivation of the immune response, thrombotic complications or electrolyte dysregulation by acute renal injury) or to encephalitis produced by direct viral infection of the central nervous system (CNS)3,4,5,6. Direct CNS infection is supported by the neurotropism exhibited by other coronaviruses7,8 and by the detection of SARS-CoV-2 in cerebrospinal fluid from patients with COVID-19 and in a significant proportion of brain autopsies from patients who died from COVID-19 (refs. 3, 9, 10). Furthermore, SARS-CoV-2 has also been detected in the brain of different experimental animal models, including transgenic11,12 and knock-in mice13 expressing human angiotensin-converting enzyme 2 (hACE2) and natural hosts of SARS-CoV-2 such as hamsters6, ferrets14 and nonhuman primates15. Three main routes have been proposed by which SARS-CoV-2 may enter the CNS: (1) the so-called olfactory route, where the virus could reach the olfactory bulb directly through the lamina cribrosa or by infection of olfactory sensory neurons6,16; (2) the hematological route, in which the virus enters the brain by crossing the blood–brain barrier (BBB) and/or the blood–cerebrospinal fluid barrier; and (3) retrograde transport through peripheral nerves innervating the respiratory tract (that is, the trigeminal, facial, glossopharyngeal and vagus nerves)17.
Regardless of the pathogenic mechanism (viral neuroinvasion or secondary effects of the systemic infection), several studies have demonstrated important neuropathological alterations in patients with severe COVID-19, such as neurovascular pathology, glial activation and neuronal damage10,18,19,20. Additionally, biomarkers of cerebral injury have been found to be elevated in patients with mild or moderate COVID-19 (ref. 21). Furthermore, neurological manifestations are common in patients recovered from the acute phase of COVID-19, suggesting the possibility of chronic brain impairment associated with the post-acute COVID-19 syndrome22,23. Many vaccine candidates against COVID-19 have been developed and clinically tested in phase I, II and III trials. Vaccines approved by the main regulatory agencies are primarily based on the SARS-CoV-2 spike (S) protein and have been generated by various technologies, including messenger RNA (Pfizer-BioNTech and Moderna)24,25, adenoviral vectors (AstraZeneca, Janssen and Sputnik)26,27,28 or inactivated virus (Sinopharm and Sinovac)29. These vaccines are currently being used for mass vaccination; however, it is still unknown whether they prevent viral spread to other regions of the body, such as the CNS, and confer protection against the brain damage induced by SARS-CoV-2 infection. We have previously described the advantages of a poxvirus modified vaccinia virus Ankara (MVA) vector expressing a human codon-optimized, full-length SARS-CoV-2 S protein (termed MVA-CoV2-S) as a promising COVID-19 vaccine candidate. The MVA-CoV2-S vaccine candidate induces in mice robust and long-term memory S-specific humoral and T cell immune responses, and fully prevents morbidity, mortality, viral replication, pathology and cytokine storm in the lungs of K18-hACE2 transgenic mice infected with SARS-CoV-2 (refs. 30,31,32).
Moreover, we have recently described that MVA-CoV2-S vaccination also induces robust SARS-CoV-2-specific humoral and cellular immunogenicity and full efficacy against SARS-CoV-2 infection in other animal models, such as hamsters33 and rhesus macaques34. Here, we examine the efficacy of MVA-CoV2-S vaccination in preventing SARS-CoV-2 cerebral infection and the associated damage in K18-hACE2 mice, a well-established mouse model of severe COVID-19 disease11,12,35. To this end, we provide a detailed spatiotemporal description of SARS-CoV-2 viral spread among the main regions of the brain. SARS-CoV-2 infection and replication appear mainly restricted to neurons, producing significant neuronal cell death. In addition, as described previously19, infected mice also exhibit pathological alterations in brain blood vessels. Administration of one or two doses of the MVA-CoV2-S vaccine candidate confers full protection against SARS-CoV-2 neuroinvasion, preventing cerebral viral replication and the associated brain damage, even after reinfection. This supports MVA-CoV2-S as a promising vaccine candidate against SARS-CoV-2/COVID-19.

## Results

### Characterization of SARS-CoV-2 brain infection in K18-hACE2 mice

SARS-CoV-2 is transmitted by exposure to the nasal or oral cavity and primarily replicates along the respiratory tract, producing, in severe cases, pulmonary disease. Furthermore, SARS-CoV-2 can disseminate into the circulation and in this way infect other organs, such as the kidney, heart, brain or gastrointestinal tract2. Although SARS-CoV-2 CNS infection has been well described in susceptible transgenic K18-hACE2 mice3,4,12,36,37,38, little information about viral spreading to specific cerebral areas has been reported.
Thus, to study in detail the spatiotemporal SARS-CoV-2 viral distribution and replication in the brain, K18-hACE2 mice (n = 26, 11 females and 15 males) were inoculated intranasally with SARS-CoV-2 (MAD6 isolate, 1 × 10⁵ plaque-forming units (PFU) per mouse)30,31 and their brains were examined by immunohistochemistry against the SARS-CoV-2 nucleocapsid (N) protein at 2 (n = 8), 4 (n = 8) and 6 (n = 10) days postinfection (dpi) (Fig. 1a–c and Extended Data Fig. 1a). At 6 dpi, all mice infected with SARS-CoV-2 had lost more than 25% of body weight due to severe pulmonary disease (with decreased gas exchange, plasma electrolyte dysregulation and systemic cytokine/chemokine storm) and to the significant brain infection causing encephalitis4,11,12,30,31,32, and were euthanized. Figure 1a shows brain coronal sections from representative control (uninfected; n = 9, 4 females and 5 males) and SARS-CoV-2-infected mice (6 dpi), revealing that the SARS-CoV-2 N staining was clear and specific, with many infected cells throughout different regions of the brain. The precise analysis of the brain viral distribution at different time points, which did not reveal any sex-based differences, is shown in Fig. 1b,c and Extended Data Figs. 1a and 2a,b. At 2 dpi, no evidence of SARS-CoV-2 infection was found in any of the brain areas studied in the 8 mice analyzed. At 4 dpi, variable levels of viral infection were observed in the different cerebral regions examined in the 8 mice analyzed. Specifically, the basal forebrain, amygdala and hypothalamus showed the highest levels of SARS-CoV-2 N staining at this time point, with many groups of SARS-CoV-2-infected cells in most of the brains analyzed. In other regions, such as the olfactory bulb, cortex or mesencephalon, an intermediate level of infection was detected, with only some dispersed infected cells in most of the brains studied.
In some mice, regions such as the striatum, different areas of the hippocampus, thalamus, pons and cerebellum showed a few SARS-CoV-2+ cells, indicating a lower level of infection at 4 dpi. Finally, at the latest time point studied, 6 dpi, all brains analyzed (n = 10) revealed high levels of SARS-CoV-2 N staining but showed a nonhomogeneous distribution of viral infection among the main areas of the brain. In the olfactory bulbs, cortex, basal forebrain, amygdala, thalamus, hypothalamus and mesencephalon, a severe SARS-CoV-2 infection was detected. Other regions, such as hippocampal CA1, the dentate gyrus and the pons, showed moderate infection, whereas in the striatum, the CA2/CA3 area of the hippocampus and the cerebellum, only some dispersed SARS-CoV-2+ cells were detected, suggesting mild viral infection. SARS-CoV-2 infection in K18-hACE2 mice produces anosmia35, and different authors have proposed olfactory bulb infection as the principal route of neuroinvasion4,6,15. To advance knowledge of the route of viral entry into the CNS, we studied the neurotropism, that is, the ability of the virus to infect and replicate in CNS regions17, by highly sensitive quantitative reverse transcription PCR (RT–qPCR) of the SARS-CoV-2 subgenomic E gene at 2, 4 and 6 dpi. RT–qPCR analyses were done in (1) the olfactory bulb, the target of the olfactory route6,15,16, (2) the cortex and hypothalamus, as two representative potential targets of the hematological route (the cortex with a highly restrictive BBB and the hypothalamus with areas of less restrictive BBB39), and (3) the brain stem, as the target of sensory fibers innervating the respiratory tract. At 2 dpi, only minimal levels of SARS-CoV-2 subgenomic RNA were found in the different brain regions analyzed; no statistically significant differences were seen in any of these brain areas compared to each other or with regard to uninfected controls (Fig. 1d and Extended Data Fig. 1b).
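Subgenomic RNA levels measured by RT–qPCR of this kind are commonly expressed as fold changes relative to uninfected controls after normalization to a cellular reference gene (the ΔΔCt method). The study does not detail its quantification pipeline, so the following is a generic sketch with made-up Ct values and a hypothetical reference gene, not data from the paper:

```python
# Generic ΔΔCt relative quantification, as commonly used for RT-qPCR data.
# All Ct values below are illustrative, not taken from the study.

def delta_delta_ct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of target vs. control, normalized to a reference gene.

    Assumes ~100% amplification efficiency (a factor of 2 per cycle).
    """
    d_ct_sample = ct_target - ct_ref             # normalize sample to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control likewise
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)                       # fold change relative to control

# Hypothetical values: subgenomic E gene vs. a cellular reference gene.
fold = delta_delta_ct(ct_target=24.0, ct_ref=18.0, ct_target_ctrl=34.0, ct_ref_ctrl=18.0)
print(f"fold change vs. uninfected control: {fold:.0f}x")  # 2^10 = 1024x
```

With perfect efficiency each cycle of difference corresponds to a factor of 2, so a 10-cycle shift relative to the control corresponds to a roughly 1,000-fold difference in template abundance.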
At 4 dpi, similar levels of SARS-CoV-2 subgenomic RNA were found between samples from the four brain areas analyzed, showing statistically significant differences with regard to uninfected controls in the olfactory bulb, cortex and hypothalamus, as well as a borderline difference (P = 0.055) in the brain stem (Fig. 1d). At 6 dpi, increased levels of SARS-CoV-2 subgenomic RNA were found in all samples studied, with the cortex and hypothalamus being the structures with the highest levels of viral RNA (Fig. 1d). Given that we did not observe earlier or higher viral replication in the olfactory bulb or brain stem in comparison with other brain areas, it can be suggested that neither the olfactory route nor retrograde transport from the respiratory innervation is the main portal of cerebral viral entry. It seems that the hematological route is the predominant route of SARS-CoV-2 infection in the brain of K18-hACE2 mice. An important observation, revealed by the histological analyses of brains from SARS-CoV-2-infected mice, is that most of the infected cells show a neuronal morphology (Fig. 1c and Extended Data Figs. 1a and 2a), suggesting that viral replication occurs primarily in neurons. This was confirmed by high-resolution confocal microscopy analysis combining SARS-CoV-2 N protein immunofluorescence with neuronal (neuronal-specific nuclear antigen A60 (NeuN)), astroglial (glial fibrillary acid protein (GFAP)), microglial (ionized calcium-binding adapter molecule 1 (IBA1)) and vascular endothelial (isolectin B4 (IB4)) markers in SARS-CoV-2-infected brains at 6 dpi (Fig. 2 and Extended Data Fig. 4). As indicated in Fig. 2a, all cells showing high SARS-CoV-2 staining (green) were also positive for the neuronal marker NeuN (red). In addition, a confocal orthogonal projection confirmed that both SARS-CoV-2+ and NeuN+ signals colocalized in the same confocal plane (Z-depth resolution of confocal plane = 0.7 μm; Fig.
2b), indicating that the SARS-CoV-2 N protein and the NeuN protein are within the same neuronal body. SARS-CoV-2 infection and replication appear to take place in a broad variety of neuronal subtypes; we found high levels of SARS-CoV-2 N protein staining in cortical glutamatergic-Ca2+/calmodulin-dependent protein kinase-II (CaMKII)+ and GABAergic-parvalbumin+ neurons, striatal DARPP32+ medium spiny neurons and parvalbumin+ interneurons, cholinergic-choline acetyltransferase (ChAT)+ neurons of the basal forebrain and mesencephalic and hypothalamic dopaminergic-TH+ neurons (Fig. 2c–f and Extended Data Fig. 3). SARS-CoV-2 infection of nonneuronal cells was evaluated by confocal microscopy analysis combining SARS-CoV-2 N protein staining with microglial (IBA1), astroglial (GFAP) or vascular (IB4) markers (Extended Data Fig. 4). We did not observe colocalization of SARS-CoV-2+ and GFAP+ staining, indicating the absence of viral particles in astrocytes (Extended Data Fig. 4a–c). In contrast, in some vascular cells (IB4+), discrete SARS-CoV-2+ staining was detected, suggesting SARS-CoV-2 infection in brain blood vessels, as reported previously19 (Extended Data Fig. 4d). Furthermore, our analysis also showed many microglial cells with processes contacting or engulfing SARS-CoV-2-infected neurons or damaged vessels (Extended Data Fig. 4e–g). In some cases, we could even detect SARS-CoV-2+ staining inside IBA1+ cells (Extended Data Fig. 4h), suggesting that viral particles from infected neurons or damaged vascular cells may have been phagocytosed by microglial cells. Taken together, these results indicate that SARS-CoV-2 brain replication in K18-hACE2 mice occurs primarily in neurons, beginning between 2 and 4 d after inoculation with SARS-CoV-2, with the highest levels of infection seen in ventral areas of the brain, such as the hypothalamus, amygdala and basal forebrain.
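In the study, colocalization of the N-protein and NeuN signals was assessed visually on confocal planes; a common quantitative counterpart is a Manders-style overlap fraction. The sketch below is purely illustrative, with toy intensity arrays and arbitrary thresholds rather than data or methods from the paper:

```python
# Illustrative Manders-style colocalization: the fraction of channel-1 signal
# (e.g., SARS-CoV-2 N) that falls in pixels also positive for channel 2
# (e.g., NeuN). Intensities and thresholds below are hypothetical toy data.

def manders_m1(ch1, ch2, thr1, thr2):
    """M1 coefficient: intensity-weighted overlap of channel 1 with channel 2."""
    num = sum(v1 for v1, v2 in zip(ch1, ch2) if v1 > thr1 and v2 > thr2)
    den = sum(v1 for v1 in ch1 if v1 > thr1)
    return num / den if den else 0.0

n_protein = [0, 5, 120, 200, 80, 0, 150]   # channel 1 intensities (toy data)
neun      = [0, 0, 180, 160, 10, 0, 140]   # channel 2 intensities (toy data)
m1 = manders_m1(n_protein, neun, thr1=50, thr2=50)
print(f"M1 overlap fraction: {m1:.2f}")
```

An M1 near 1 would mean that essentially all above-threshold N-protein signal lies in NeuN-positive pixels, consistent with a neuronal localization of the virus.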
In a later phase, between 4 and 6 dpi, viral replication spreads to most cerebral regions, producing a severe SARS-CoV-2 infection. Interestingly, even at 6 dpi, some specific cerebral areas, such as the cerebellum, striatum and CA2/CA3 region of the hippocampus, retain mild levels of SARS-CoV-2 infection, presenting only some dispersed SARS-CoV-2-infected neurons.

### Neuropathological alterations associated with SARS-CoV-2 brain infection

Next, we studied whether a strong SARS-CoV-2 infection induces neuronal death by analyzing the neuronal density in the hypothalamus and cortex, two areas with high viral replication, and in the hippocampus, which presents mild-to-moderate viral infection. The stereological quantification of hypothalamic NeuN+ (Fig. 3a,b) and cortical Nissl+ (Fig. 3c,d) neurons demonstrated a significant decrease in neuronal density in SARS-CoV-2-infected mice at 4 and 6 dpi compared to uninfected control mice. Moreover, a significant loss of NeuN+ neurons was also detected in the hippocampal dentate gyrus at 6 dpi, whereas no differences with regard to uninfected controls were detected in the CA1 or CA2/CA3 regions (Extended Data Fig. 2c). Since SARS-CoV-2 infection can induce neuronal apoptosis in human brain organoids5, we used immunodetection to study the number of cells expressing cleaved caspase-3 (c-casp3) in brains from control (uninfected) and SARS-CoV-2-infected mice at 4 and 6 dpi. As expected, the brains of control mice showed only a few c-casp3+ cells in the hippocampus (Fig. 3e,f), possibly reflecting physiological apoptosis associated with the neurogenic niche of the dentate gyrus40; practically no c-casp3+ cells were detected in the rest of the brain (Fig. 3g,h and Extended Data Fig. 5). In contrast, brains of SARS-CoV-2-infected mice presented a substantial number of c-casp3+ cells distributed across most of the brain areas analyzed, being particularly evident at 6 dpi (Fig. 3e–h and Extended Data Fig.
5), when the brain viral infection is maximal. The distribution of c-casp3+ cells suggests that a significant proportion of apoptotic cells corresponds to neurons. Quantitative analyses of apoptotic cell numbers were performed in the hippocampus (Fig. 3f) and hypothalamus (Fig. 3h). In both regions, we found a statistically significant increase in the number of c-casp3+ cells in SARS-CoV-2-infected mice. Morphological changes in microglia and structural alterations in cerebral blood vessels have been described in patients with COVID-19 (refs. 19, 20). Thus, to study the glial reaction in brains from K18-hACE2 mice infected with SARS-CoV-2, we analyzed the presence of astrogliosis and reactive microglia using specific immunostaining of astroglial (GFAP) and microglial (IBA1) markers. The qualitative analysis of the GFAP+ fluorescence signal did not reveal clear signs of astrogliosis, either by GFAP overexpression or by significant morphological changes in GFAP+ astrocytes, in any of the cerebral regions studied (cortex, hippocampus and hypothalamus; Extended Data Fig. 6, Fig. 4 and below). However, IBA1 immunostaining in the same brain regions showed some morphological changes suggesting microglial activation (Extended Data Fig. 6). Microglial activation is characterized by marked morphological changes that include enlargement of the cell body and a pronounced retraction of projections, resulting in a more ameboid shape. The quantitative analysis of microglial morphology (using the Imaris microscopy image analysis software) in the cortex and hypothalamus revealed changes in SARS-CoV-2-infected mice at 6 dpi, with a significant reduction in the microglial total and filament area, filament length and number of filament branching points (Fig. 4a–d). We also analyzed the density of microglia, astrocytes and oligodendrocytes at 6 dpi, when viral infection is maximal.
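The microglial morphometry above (filament length, branching points) was computed with Imaris. Conceptually, a skeletonized cell is a graph whose branch points are nodes of degree three or more; a minimal stand-alone analogue, with an entirely hypothetical skeleton, is:

```python
# Toy analogue of skeleton-based microglial morphometry (the study used Imaris).
# A skeletonized cell is modeled as a graph: nodes are points, edges are
# filament segments with lengths. Branch points are nodes of degree >= 3.

from collections import defaultdict

def skeleton_metrics(edges):
    """edges: list of (node_a, node_b, length_um). Returns (total length, branch points)."""
    degree = defaultdict(int)
    total_length = 0.0
    for a, b, length in edges:
        degree[a] += 1
        degree[b] += 1
        total_length += length
    branch_points = sum(1 for d in degree.values() if d >= 3)
    return total_length, branch_points

# Hypothetical ramified microglial skeleton: soma 'S' with three primary
# processes, one of which bifurcates at node 'B'.
skeleton = [("S", "P1", 12.0), ("S", "P2", 9.5), ("S", "B", 7.0),
            ("B", "T1", 5.0), ("B", "T2", 6.5)]
length, branches = skeleton_metrics(skeleton)
print(f"filament length: {length} um, branching points: {branches}")
```

An activated, ameboid microglial cell would yield a shorter total filament length and fewer branch points than the ramified cell sketched here, which is the direction of change reported at 6 dpi.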
An increased number of IBA1+ microglial cells, with regard to uninfected controls, was found in the cortex and hypothalamus (Fig. 4e); the density of GFAP+ astrocytes was also increased in the hypothalamus, with a nonsignificant trend toward higher density in the cortex (Fig. 4f), whereas the density of oligodendrocytes in the corpus callosum was not altered (Fig. 4g). Taken together, these data indicate that at 6 dpi (the stage at which mice were euthanized due to a marked loss of body weight), the microglial and astrocytic responses induced by SARS-CoV-2 infection were still at an early stage, although clear signs of microglial activation were already observed. To study the presence of vascular pathology induced by SARS-CoV-2 infection, brain blood vessels were labeled with IB4, a marker of the luminal and abluminal sides of endothelial cells, and vessel abnormality was evaluated as described previously41. No significant alterations in the cerebral blood vessels of infected mice were found at 2 and 4 dpi. However, at 6 dpi, when brain viral infection was maximal, histological evidence of abnormal blood vessels started to appear in ventral brain areas (basal forebrain, amygdala and hypothalamus; Fig. 4h,i and Extended Data Fig. 6), which, in some cases, extended to other heavily infected regions. The endothelial cells became rounded, losing their elongated morphology, and IBA1+ innate immune cells covered the vessels and presented phagocytic pouches containing IB4+ material (Extended Data Fig. 4g), suggesting inflammation-driven changes in vessel permeability and remodeling, similar to those observed in experimental models of multiple sclerosis41. Interestingly, as in other inflammatory models, vascular damage was restricted to arterioles, the vascular segment most susceptible to immune cell crossing under inflammation. These results agree with the vascular brain pathology described in patients with COVID-19 and in hamsters and K18-hACE2 mice infected with SARS-CoV-2 (ref. 19).
They also indicate that SARS-CoV-2 infection in the K18-hACE2 mouse model of severe COVID-19 produces important neuropathological alterations, including neuronal loss, incipient signs of gliosis and vascular damage.

### MVA-CoV2-S vaccination fully prevents SARS-CoV-2 brain infection and damage

Once the temporal and regional spread of SARS-CoV-2 and the associated neuropathology were characterized in the brain of K18-hACE2 mice, we next tested whether the vaccine candidate MVA-CoV2-S (also termed MVA-S), expressing the SARS-CoV-2 S protein30, could protect against SARS-CoV-2 brain infection and the associated damage. Thus, K18-hACE2 mice were immunized by the intramuscular route with 1 or 2 doses of MVA-S (1 × 10⁷ PFU per mouse) at days 0 and 28; subsequently, on day 63, they were challenged with a lethal intranasal dose of SARS-CoV-2 (MAD6 isolate; 1 × 10⁵ PFU per mouse), as reported previously30,31,32. SARS-CoV-2-challenged mice primed and boosted with MVA-wild type (MVA-WT, the empty MVA vector) were used as a positive control of infection (Fig. 5a, upper panels). Then, at 4 dpi (day 67), mice were euthanized for brain extraction and processing (MVA-WT, n = 9; MVA-S 1 dose, n = 9; MVA-S 2 doses, n = 3). Moreover, in a second independent experimental approach, we evaluated whether mice vaccinated with one or two doses of MVA-S, which survived SARS-CoV-2 infection30, were protected against viral neuroinvasion after a SARS-CoV-2 reinfection performed 46 d after the first SARS-CoV-2 challenge. In this experiment, SARS-CoV-2-challenged unvaccinated and MVA-WT-inoculated mice were used as positive controls of infection (Fig. 5a, lower panels). Thereafter, mice (MVA-WT, n = 5; SARS-CoV-2, n = 4; MVA-S 1 dose, n = 5; MVA-S 2 doses, n = 4) were euthanized for brain extraction and processing 6 d after the second viral infection (6 dpi) or 6 d after the first infection in the unvaccinated and MVA-WT groups.
In both experimental approaches, the presence of cerebral SARS-CoV-2 infection was analyzed by immunohistochemistry against the SARS-CoV-2 N protein in different brain regions, as described above. Interestingly, all MVA-S-vaccinated mice, either with one or two doses, showed total protection against cerebral SARS-CoV-2 infection after a single SARS-CoV-2 infection (Fig. 5b,c) or after a reinfection (Fig. 5c and Extended Data Fig. 7), without any SARS-CoV-2+ infected cells being detected in any of the brain regions analyzed. The absence of SARS-CoV-2+ immunostaining observed in MVA-S-vaccinated mice contrasts with the high number of SARS-CoV-2+ infected cells found in challenged MVA-WT-inoculated mice (Fig. 5b,c and Extended Data Fig. 7) or in challenged unvaccinated mice (Fig. 5c and Extended Data Fig. 7). Furthermore, to rule out the possibility that the absence of SARS-CoV-2+ labeling in MVA-S-vaccinated mice was due to a low viral load, which could be below the immunohistochemistry detection limit, we performed highly sensitive RT–qPCR of the SARS-CoV-2 E gene in the cortex and hypothalamus, two brain regions that present high viral replication. Consistent with the histological analysis, high levels of SARS-CoV-2 subgenomic mRNA were found in the cortex and hypothalamus of SARS-CoV-2-challenged MVA-WT or unvaccinated mice (Fig. 5d,e). Importantly, SARS-CoV-2 subgenomic mRNA was not detected in any of the MVA-S-vaccinated mice, regardless of the vaccination regimen (one or two doses) or whether they were subjected to a single SARS-CoV-2 infection (Fig. 5d) or reinfection (Fig. 5e). As described previously, the protection against SARS-CoV-2 infection observed in MVA-S-vaccinated mice correlated with the high titers of SARS-CoV-2 neutralizing antibodies induced after immunization (about 4 × 10² and 2 × 10³ NT50 titers in mice vaccinated with 1 and 2 doses, respectively), with neutralizing antibody titers above 3 × 10² inducing protection from lung infection31.
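An NT50 titer is the reciprocal serum dilution at which neutralization falls to 50%, usually read off a dilution series. As an illustration only (the study's actual curve-fitting procedure is not described here), the 50% point can be located by log-linear interpolation between the two bracketing dilutions; the series below is invented:

```python
import math

# Illustrative NT50 estimation: the reciprocal serum dilution giving 50%
# neutralization, by log-linear interpolation between the two dilutions that
# bracket 50%. The dilution series and percentages are made up; the study's
# actual fitting procedure may differ.

def nt50(dilutions, pct_neutralization):
    """dilutions: reciprocal dilutions, ascending; pct: matching % neutralized."""
    for (d_lo, p_lo), (d_hi, p_hi) in zip(zip(dilutions, pct_neutralization),
                                          zip(dilutions[1:], pct_neutralization[1:])):
        if p_lo >= 50.0 >= p_hi:  # 50% is crossed between these two dilutions
            frac = (p_lo - 50.0) / (p_lo - p_hi)
            log_d = math.log10(d_lo) + frac * (math.log10(d_hi) - math.log10(d_lo))
            return 10 ** log_d
    return None  # 50% never crossed within the tested range

# Hypothetical two-fold dilution series starting at 1:50.
dils = [50, 100, 200, 400, 800, 1600]
pcts = [98, 95, 85, 60, 35, 12]
print(f"NT50 ≈ {nt50(dils, pcts):.0f}")
```

For a two-fold series, the interpolation reduces to d_lo · 2^frac, so a crossing 40% of the way between the 1:400 and 1:800 wells lands at roughly 1:530.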
In MVA-S-vaccinated mice, these neutralizing antibody titers were maintained at late times after the first SARS-CoV-2 infection31,32; in the case of mice vaccinated with two doses of MVA-S, similarly high titers of SARS-CoV-2 neutralizing antibodies were induced before and after the first and second viral infections, indicating the lack of breakthrough infection31,32. Together, these results clearly demonstrate that MVA-S vaccination confers complete and sustained protection against SARS-CoV-2 cerebral infection. We also evaluated the efficacy of MVA-S vaccination in protecting against the brain damage induced by severe SARS-CoV-2 infection. As expected from the absence of viral infection in MVA-S-vaccinated mice, the stereological quantification of the density of hypothalamic NeuN+ (Fig. 6a) and cortical Nissl+ (Fig. 6b) neurons clearly demonstrated that MVA-S vaccination, either with one or two doses, protects against the neuronal death induced by the encephalitis caused by SARS-CoV-2 infection. Furthermore, analysis of apoptotic cells revealed the absence of c-casp3+ cells in all brains of MVA-S-vaccinated mice, with the exception of the physiological hippocampal apoptosis also detected in uninfected mice (data not shown). Quantification of the number of c-casp3+ cells in the hypothalamus confirmed that MVA-S vaccination confers complete protection against the CNS cellular apoptosis induced by SARS-CoV-2 infection (Fig. 6c). The quantitative morphological analysis of microglial IBA1+ cells revealed morphological changes compatible with microglial activation in MVA-S-vaccinated mice (Fig. 6d,e and Extended Data Fig. 8), despite the total absence of cerebral infection observed in these animals (Fig. 5c,e and Extended Data Fig. 7). Moreover, microglial morphological changes in vaccinated mice were more pronounced in the hypothalamus (Fig. 6d and Extended Data Fig.
8), a region with a permeable BBB, and after two doses of the MVA-S vaccine, which induce a stronger immune response31,32, suggesting that the systemic immune response induced by MVA-S vaccination also activated brain-resident immune cells, reinforcing the idea of active communication between the peripheral and central compartments of the immune system42. Furthermore, the analysis of brain blood vessels after IB4 staining also showed protection in MVA-S-vaccinated mice against the appearance of abnormal brain blood vessels after SARS-CoV-2 infection (Fig. 6f and Extended Data Fig. 8). Taken together, these data demonstrate that MVA-S vaccination confers complete protection against SARS-CoV-2 brain infection and the associated neuropathological damage (neuronal loss and vascular damage), even after a second viral infection. Interestingly, the cerebral protection induced by the MVA-S vaccine candidate is achieved similarly with one or two doses.

## Discussion

After respiratory symptoms, neuropsychiatric manifestations are the second most common symptoms in patients with COVID-19 (refs. 2, 26). Despite the clinical relevance of the brain damage caused by COVID-19, it is still unknown whether the different COVID-19 vaccine candidates can prevent SARS-CoV-2 neuroinvasion or the associated damage. In this study, we show that a vaccine candidate against COVID-19 based on the poxvirus MVA vector expressing the SARS-CoV-2 S protein (MVA-CoV2-S) confers complete protection against SARS-CoV-2 brain infection. To test the efficacy of the MVA-S vaccine candidate, we used the K18-hACE2 mouse model of severe COVID-19 (refs. 11, 12). This transgenic mouse model has increased cerebral hACE2 expression43, making the brain significantly permissive to SARS-CoV-2 replication. Our histological analysis revealed that ventral areas of the brain (basal forebrain, hypothalamus and amygdala) are the first cerebral regions infected by SARS-CoV-2, with virus replication being detected at 4 dpi.
On the other hand, the olfactory bulbs, which have been proposed as one of the main ports of viral CNS entry6,15,16, presented mild SARS-CoV-2 infection at 4 dpi and only showed severe viral infection at 6 dpi, when SARS-CoV-2 replication had spread to most of the brain regions. The molecular study of viral neurotropism failed to show earlier or higher levels of viral replication in the olfactory bulb. These data are consistent with recent studies that failed to detect significant levels of viral replication in the olfactory bulbs of patients who died a few days after viral infection44. In addition, the fact that the hypothalamus, where there are highly fenestrated BBB capillaries39, is one of the brain regions with the highest and earliest viral replication levels suggests that the hematogenous route is the main route of entry of SARS-CoV-2 into the CNS45. Another relevant finding of our analysis of SARS-CoV-2 infection in K18-hACE2 mice is that brain viral replication occurs primarily in neurons, inducing significant neuronal cell death. These findings are consistent with the detection of SARS-CoV-2 in cortical neurons from deceased patients with COVID-19 and with the induction of neuronal apoptosis in infected human brain organoids5. Our study clearly demonstrates that MVA-CoV2-S vaccination confers sterilizing immunity against brain viral replication and damage. In previous studies, we reported that the MVA-CoV2-S vaccine candidate induced in mice robust SARS-CoV-2-specific humoral and cellular immune responses, producing high titers of binding IgG antibodies against the S and receptor-binding domain proteins, high titers of neutralizing antibodies able to recognize different variants of concern and potent, broad and polyfunctional S-specific T cell immune responses30,31,32. Moreover, memory SARS-CoV-2-specific humoral and cellular immune responses were detected in mice even six months after the last MVA-CoV2-S immunization31.
We have also established that K18-hACE2 mice vaccinated with MVA-CoV2-S and challenged with SARS-CoV-2 are protected against mortality, body weight loss, viral lung replication and lung pathology and have reduced levels of pro-inflammatory cytokines, with the two-dose regimen being more effective than a single dose30,31,32. SARS-CoV-2 replication in K18-hACE2 mice is well described to occur primarily in the respiratory tract, during the first 2–4 dpi, and later in the cerebral tissue, between 3 and 7 dpi11,12. Probably, the exhaustive control exerted by MVA-CoV2-S vaccination on viral replication in the respiratory tract prevents neuroinvasion. The fact that immunization with a single dose of MVA-CoV2-S reduces but does not prevent virus infection in the lungs30,31,32 contrasts with the complete inhibition of brain viral infection in mice vaccinated with a single dose reported in this study, and suggests that the block of viral brain infection could be due to the broad specificity of the immune responses triggered by MVA-CoV2-S vaccination. This inhibition is probably the result of the combined action of SARS-CoV-2-specific neutralizing antibodies and of CD4+ and CD8+ T cell responses triggered by vaccination, in turn preventing virus access to the brain. Thus, vaccine protection from brain infection is an important requirement to block the spread of virus infection in tissues, long-term COVID-19 and mortality. Hence, the ability of a vaccine to prevent SARS-CoV-2 infection in the brain of susceptible animals should be an indicator for vaccine development against variants of concern and attenuated variants, like Omicron46. To the best of our knowledge, only three articles have addressed the efficacy of COVID-19 vaccine candidates in protecting against SARS-CoV-2 cerebral infection.
In these works, the efficacy of adenoviral47, lentiviral48 or vesicular stomatitis virus49 S-based vaccines against SARS-CoV-2 brain infection was analyzed using K18-hACE2 transgenic mice, with different outcomes. The adenoviral S-based vaccine candidate failed to control SARS-CoV-2 brain replication, reducing the brain viral load only when it was combined with a nucleocapsid-based vaccine candidate, whereas the lentiviral and vesicular stomatitis virus S-based vaccine candidates were able to block SARS-CoV-2 cerebral replication. Interestingly, our MVA-CoV2-S vaccine candidate not only completely abolished SARS-CoV-2 brain replication, even with a single dose, but also conferred sustained protection against a second viral infection, with all vaccinated mice being completely resistant to SARS-CoV-2 reinfection seven weeks after the first challenge. Interestingly, MVA-CoV2-S was able to induce memory SARS-CoV-2-specific humoral and CD4+ and CD8+ T cell immune responses even six months after the last dose31, highlighting the potent immunogenicity and durability of this vaccine candidate. An important aspect of our data is that MVA-CoV2-S vaccination conferred complete protection against the cerebral damage induced by a severe SARS-CoV-2 infection, independently of the one- or two-dose vaccination regimen, with no evidence of cellular apoptosis, neuronal death or vascular alterations in any of the vaccinated mice. In a very stringent COVID-19 model such as the K18-hACE2 mouse, where SARS-CoV-2 neurotropism is increased, most of the neuropathological alterations induced during viral infection should be produced by direct viral neuroinvasion3,4. Therefore, the complete protection exerted by the MVA-CoV2-S vaccine candidate against cerebral SARS-CoV-2 infection and replication should be the main cause of the lack of neuropathological signs observed in the brains of vaccinated mice.
Furthermore, the cytokine and chemokine storm produced by the systemic SARS-CoV-2 infection in many patients with COVID-19 has also been proposed to induce cerebral damage, producing neurological symptoms50. In this regard, we previously reported that MVA-CoV2-S vaccination prevented the increase in pro-inflammatory cytokines induced by SARS-CoV-2 infection in K18-hACE2 mice31,32, helping to reduce the potential cytokine-induced neurotoxicity in vaccinated K18-hACE2 mice. In summary, this study shows that the MVA-CoV2-S vaccine candidate confers complete and sustained protection against SARS-CoV-2 brain infection, replication and the associated damage. These results, together with the previously described potent immunogenicity and full efficacy of MVA-CoV2-S in different animal models30,31,32,33,34, support the evaluation of this COVID-19 vaccine candidate in clinical trials.

## Methods

### Animals

Transgenic female and male K18-hACE2 mice, expressing the human ACE2 gene, were obtained from the Jackson Laboratory (strain 034860, B6.Cg-Tg(K18-ACE2)2Prlmn/J; genetic background (C57BL/6J × SJL/J)F2; research resource identifier (RRID): IMSR_JAX:034860). Experiments were carried out at the biosafety level 3 facilities of the Centro de Investigación en Sanidad Animal (CISA)-Instituto Nacional de Investigaciones Agrarias (INIA)-Consejo Superior de Investigaciones Científicas (CSIC) (Madrid, Spain). Mice were housed at 22 ± 1 °C under a 12-h light–dark cycle, with ad libitum access to food and water. Animal experimentation was approved by the Ethical Committee of Animal Experimentation of the Centro Nacional de Biotecnología (CNB) (Madrid, Spain) and by the Division of Animal Protection of the Comunidad de Madrid (PROEX, nos. 169.4/20 and 161.5/20). All animal procedures were performed according to the European Directive 2010/63/EU and the Spanish directive RD/53/2013 for the protection of animals used for scientific purposes.
Sample sizes were determined on the basis of those reported in previous publications of our group30,31. Mice were allocated to the following experimental groups: controls (uninfected); SARS-CoV-2, 2-, 4- and 6 dpi; MVA-WT; MVA-S, 1 and 2 doses. Researchers were blinded to the details of the experimental groups during data collection and analysis.

### Viruses

The poxviruses used in this study included the attenuated MVA-WT strain, obtained from the Chorioallantois vaccinia virus Ankara strain after 586 serial passages in chicken embryo fibroblasts51, and the MVA-CoV2-S vaccine candidate, expressing a human codon-optimized full-length SARS-CoV-2 S protein30. The SARS-CoV-2 strain MAD6 (kindly provided by J. M. Honrubia and L. Enjuanes, CNB-CSIC) is a virus collected from a nasopharyngeal swab from a 69-year-old male patient with COVID-19 from Hospital 12 de Octubre, Madrid, Spain52. The growth and titration of the SARS-CoV-2 MAD6 isolate have been described previously30,31. The full-length virus genome was sequenced and was identical to the SARS-CoV-2 reference sequence (Wuhan-Hu-1 isolate, GenBank no. MN908947), except for the silent mutation C3037>T and two mutations leading to amino acid changes: C14408>T (in nsp12) and A23403>G (D614G in the S protein).

### MVA-S vaccination and SARS-CoV-2 infection in K18-hACE2 mice

For experiments analyzing SARS-CoV-2 brain infection and neuropathological damage, female and male K18-hACE2 mice (4–5 months old; n = 26) were infected with SARS-CoV-2 (MAD6 strain, 1 × 10^5 PFU in 50 μl of PBS, intranasally) as described previously30,31. Uninfected control mice received only 50 μl of PBS by the intranasal route (n = 9; 4 females and 5 males). Mice were euthanized at 2 (n = 8; 3 females and 5 males), 4 (n = 8; 3 females and 5 males) and 6 (n = 10; 5 females and 5 males) dpi; brains were extracted and fixed in 4% paraformaldehyde (PFA, Sigma-Aldrich) in PBS for at least 7 d.
MVA-CoV2-S immunization studies were carried out as indicated previously30,31. Briefly, in the single SARS-CoV-2 infection experiments, female K18-hACE2 mice (10 weeks old; n = 11 per group) received 1 or 2 doses of 1 × 10^7 PFU of MVA-CoV2-S in 100 μl of PBS (injected intramuscularly; 50 μl per leg) at 0 and 4 weeks. Mice primed and boosted with nonrecombinant MVA-WT were used as the control group. At week 9, mice were challenged with SARS-CoV-2 as specified above. For the reinfection experiments, MVA-S-vaccinated mice were additionally reinfected with SARS-CoV-2 (MAD6 strain; 1 × 10^5 PFU in 50 μl of PBS; administered intranasally) 7 weeks after the first viral infection31. In this second set of experiments, mice treated with nonrecombinant MVA-WT and nonvaccinated, SARS-CoV-2-infected mice were used as controls. Mice were euthanized at 4 and 6 dpi for the single SARS-CoV-2 infection (MVA-WT: n = 9, 3 females and 6 males; MVA-S 1 dose: n = 9, 3 females and 6 males; MVA-S 2 doses: n = 3 females) and for the reinfection experiments (MVA-WT: n = 5 females; SARS-CoV-2: n = 4 females; MVA-S 1 dose: n = 5 females; MVA-S 2 doses: n = 4 females), respectively. Subsequently, the brains were extracted and fixed in 4% PFA for longer than 7 d.

### Histological staining

Brains were cryoprotected in 30% sucrose (Sigma-Aldrich) in PBS and embedded in optimum cutting temperature compound (Tissue-Tek). Coronal sections (thickness 40 μm) were cut on a cryostat (Leica). SARS-CoV-2 N protein, NeuN, c-casp3, GFAP, IBA1, IB4, CaMKII, ChAT, DARPP32, oligodendrocytes (clone NS-1), tyrosine hydroxylase and parvalbumin immunohistological detection was performed as described previously41,53,54,55 using, respectively, mouse monoclonal anti-SARS-CoV-2, clone B46F (1:100 dilution, catalog no. MA1-7404, RRID: AB_1018422, Invitrogen); anti-NeuN (rabbit polyclonal, 1:500 dilution, catalog no.
ABN78, RRID: AB_10807945, Millipore and mouse monoclonal, clone A60, 1:200 dilution, catalog no. MAB377, RRID: AB_2298772, Millipore); rabbit polyclonal anti-c-casp3 (1:100 dilution, catalog no. 9661, RRID: AB_2341188, Cell Signaling Technology); polyclonal anti-GFAP (rabbit polyclonal, 1:500 dilution, catalog no. Z0334, RRID: AB_10013382, Dako and mouse monoclonal, clone G-A-5, 1:2,000 dilution, catalog no. G3893, RRID: AB_477010, Sigma-Aldrich); anti-IBA1 (rabbit polyclonal 1:500 dilution, cat, no. 019-19741, RRID: AB_839504, Wako Chemicals and rabbit polyclonal, 1:1,000, catalog no. 234003, RRID: AB_10641962, Synaptic System); anti-IB4 (biotinylated isolectin B4, 1:50 dilution, catalog no. L2140, RRID: AB_2313663, Sigma-Aldrich); anti-CaMKII alpha/beta/delta (rabbit polyclonal, 1:100 dilution, catalog no. PA5-38239, RRID: AB_2554841, Thermo Fisher Scientific); anti-ChAT (chicken polyclonal, 1:500 dilution, catalog no. G143, Applied Biological Materials); anti-DARPP32 (rabbit polyclonal, 1:250 dilution, catalog no. PA5-85787, RRID: AB_2792923, Thermo Fisher Scientific); anti-oligodendrocytes, clone NS-1 (mouse monoclonal, 1:1,000 dilution, catalog no. MAB1580, RRID: AB_94266, Merck Millipore); anti-tyrosine hydroxylase (chicken polyclonal, 1:1,000 dilution, catalog no. TYH, RRID: AB_10013440, Aves Labs); anti-parvalbumin (rabbit polyclonal, 1:5,000 dilution, catalog no. PV27, RRID: AB_2631173, Swant); and secondary peroxidase-conjugated antibody kits (catalog nos. NB-23-00029-1 and NB-23-00030-1, NeoBiotech) or fluorescence secondary antibodies (goat-anti-mouse Alexa Fluor 488, 1:400 dilution, catalog no. 115-545-003, Jackson ImmunoResearch; goat-anti-rabbit Alexa Fluor 647, 1:400 dilution, catalog no. 111-605-003, Jackson ImmunoResearch; goat-anti-rabbit Alexa Fluor 568, 1:400 dilution, catalog no. A-11011, Invitrogen; streptoavidin-Cy3, 1:500 dilution, catalog no. 016-160-084, Jackson ImmunoResearch). 
In the case of SARS-CoV-2 and c-casp3 staining, brain sections were subjected to citrate antigen retrieval (10 mM sodium citrate, pH 6.0, Sigma-Aldrich; 15 min at 97 °C); for SARS-CoV-2 immunodetection, brain sections were treated with mouse-on-mouse blocking reagent (catalog no. MKB2213-1, Vector Laboratories). In the immunofluorescence experiments, nuclei were stained with DAPI (1:1,000 dilution, Sigma-Aldrich). Nissl staining was performed as described previously56.

### Image analysis and stereology

Image acquisition and analysis were performed with light transmission microscopes (Olympus AX70 or BX61, both with a DP72 digital refrigerated camera, CellSens v.1.4.1) or confocal microscopes (Nikon A1R+ or Leica STELLARIS 8 scan head; NIS-Elements AR v.4.30.02 and LAS X v.4.3.024308, respectively) and their specific imaging software. Qualitative analysis of SARS-CoV-2 infection was performed by two independent blinded researchers. Imaging analyses of c-casp3, IBA1, GFAP, oligodendrocytes and IB4 were carried out as indicated previously41,54,55 using Fiji v.2.3.0 (National Institutes of Health) or Imaris microscopy image analysis software (Imaris ×64 v.9.6.0, Oxford Instruments). NeuN+ and Nissl+ neuronal density was estimated by systematic random sampling using the optical dissector method57. Briefly, reference volumes were outlined at low magnification (×4) and neurons were counted at high magnification (×40) using a 4,900 μm² × 30 μm optical dissector with a guard volume of 5 μm to avoid artifacts on the cut surface of the sections. All stereological procedures were performed using the New CAST system (Visiopharm) as described previously53,56. The confocal microglia images were analyzed in the Imaris software (×64 v.9.6.0). The microglial area and processes were measured using the Imaris surface and filament functions, respectively.
### Analysis of SARS-CoV-2 RNA by RT–qPCR

The regions corresponding to the olfactory bulb (bregma: +3.92–3.08 mm), cingulate cortex (bregma: +1.42 to −0.10 mm), hypothalamus (bregma: −1.82 to −2.18 mm) and brain stem (bregma: −5.50 to −7.08 mm) were microdissected from 3–6 coronal histological sections (thickness 40 μm) under a stereoscopic binocular microscope (Olympus SZX16) according to the mouse brain stereotaxic atlas58. RNA was isolated using the RecoverAll Total Nucleic Acid Isolation Kit (catalog no. AM1975, Invitrogen) according to the manufacturer's instructions. The concentration and purity of the total RNA samples were measured using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific). RNA integrity was assessed using the Agilent 2100 Bioanalyzer and the RNA 6000 LabChip kit (catalog no. 5067-1511; Agilent Technologies). For complementary DNA synthesis, 1 μg of RNA was reverse-transcribed with the QuantiTect Reverse Transcription Kit (catalog no. 205311, QIAGEN), according to the manufacturer's specifications. SARS-CoV-2 viral RNA content was determined using a previously validated set of primers and probes specific for the SARS-CoV-2 subgenomic RNA for the E protein59 and cellular 18S ribosomal RNA for normalization (catalog no. 4333760F, Thermo Fisher Scientific). Data were acquired with a 7500 real-time PCR system and analyzed with the 7500 software v.2.0.6 (Applied Biosystems). Relative RNA arbitrary units (a.u.) were quantified relative to the negative group (uninfected K18-hACE2 mice) using the $$2^{-\Delta\Delta C_{\mathrm{t}}}$$ method. All samples were tested in triplicate.

### Statistical analysis

The number of mice analyzed in each experimental group and the statistical tests applied are indicated in each figure legend. Data are presented as the mean ± s.e.m.
In all cases, normality and equal variance tests were performed; when passed, analysis of variance (ANOVA) with Dunnett, Tukey, Friedman or Fisher’s least significant difference (LSD) post hoc analysis for multiple groups, or unpaired t-tests for two-group comparisons, was carried out. In cases where normality or homoscedasticity tests failed, the nonparametric Kruskal–Wallis H test with post hoc Dunn’s test was performed. All statistical analyses were conducted using Prism v.8.0 (GraphPad Software). ### Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
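As an illustration of the relative-quantification step described in the RT–qPCR section, the $$2^{-\Delta\Delta C_{\mathrm{t}}}$$ computation can be sketched as follows; the Ct values below are invented for the example and are not data from this study:

```python
# Illustrative sketch of the 2^(-ΔΔCt) relative-quantification method.
# All Ct values are hypothetical, chosen only to make the arithmetic visible.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target RNA vs. the control group, normalized to a
    reference RNA (here playing the role of 18S rRNA)."""
    delta_ct_sample = ct_target - ct_ref              # normalize the sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize the control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical values: target amplifies 8 cycles earlier than in the control,
# so its relative abundance is 2^8 = 256-fold.
print(relative_expression(22.0, 12.0, 30.0, 12.0))   # → 256.0
```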
# C penalty in SVM - larger C increases the margin or reduces the margin?

I get contradictory information on what the penalty value C does in SVM. Pages 346–347 of the following book say that larger C means more misclassification is allowed and the margin will be larger. http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf

The link below (in the regularization section) says the same. https://www.datacamp.com/community/tutorials/svm-classification-scikit-learn-python

However, every other source I read says that C in the SVM class of Python stands for the penalty, and larger C means a higher penalty and thereby a lower margin. (But going by the above book and the optimization equations, C is the allowance for misclassification, and this seems correct - based on the formula).

• I think the first one is wrong. Larger cost means that misclassification is penalized more, which should imply a lower margin. – George Jul 31 '19 at 19:22
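In the scikit-learn convention the objective is 0.5·‖w‖² + C·Σ hinge, so larger C penalizes slack more and shrinks the margin (ISLR's C is a different quantity, a *budget* for violations, which is why the books appear to disagree). A minimal 1-D soft-margin SVM trained by subgradient descent demonstrates the sklearn convention; the data, learning rate and iteration count are made up for the demo:

```python
# Minimal 1-D soft-margin SVM, scikit-learn convention:
# minimize 0.5*w^2 + C * sum(max(0, 1 - y*(w*x + b))).
# Margin width is 2/|w|, so larger |w| means a narrower margin.

def train_svm_1d(xs, ys, C, lr=0.001, steps=20000):
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw, gb = w, 0.0                    # gradient of the 0.5*w^2 term
        for x, y in zip(xs, ys):
            if y * (w * x + b) < 1.0:      # hinge active for this point
                gw -= C * y * x
                gb -= C * y
        w -= lr * gw
        b -= lr * gb
    return w, b

# Two well-separated clusters plus two overlapping points near the boundary.
xs = [-3.0, -2.5, -2.0, 2.0, 2.5, 3.0, -0.5, 0.5]
ys = [-1, -1, -1, 1, 1, 1, 1, -1]

w_small, _ = train_svm_1d(xs, ys, C=0.01)
w_large, _ = train_svm_1d(xs, ys, C=10.0)

margin_small_C = 2.0 / abs(w_small)
margin_large_C = 2.0 / abs(w_large)
print(margin_small_C, margin_large_C)      # small C gives the wider margin
```

With small C the overlapping points are cheap to ignore and the margin stays wide; with large C the optimizer trades margin width for smaller hinge loss.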
# Figuring out battery life from amp draw?

I decided to take apart one of those cheap handheld fans. It takes 2 AA alkaline batteries. I wanted to hook it up to an Arduino as a way to get into electronics, and I wanted to know what kind of battery would be best for the setup. Hopefully I measured the amps correctly: it read out ~0.70 when the multimeter was set to 10 and to DC current. So, the device has a 7 A draw. From Wikipedia, alkaline batteries have about ~1800 mAh, so 2 AA would give 3600 mAh. So, total battery hours from 2 AA would be 3600 mAh / 7000 mA = 0.514..., which ends up being only half an hour. If I wanted the setup to last roughly 2 hours, that means I would need at least a 14,000 mAh battery, without considering the Arduino amp draw. That means I need at least 1 D battery or 8 AA batteries. Does that seem right?

0.7 means 0.7 amps! (i.e. 700 mA). I would be very worried if it was drawing 7 amps :) You are nearly there with your calculation though: with two AA batteries in series, the total capacity is still 1800 mAh. If you had them in parallel, then it would be 3600 mAh (but batteries in parallel is not a good idea in general, the exception being lead acid batteries and similar, under certain conditions). So 0.7 amps with 1800 mAh capacity means it would last 1.8/0.7 = 2.57 hours (2 hours and 34 minutes)

• 2.57 hours = 2 hours and 34 minutes. Otherwise, take my +1. Apr 21 '15 at 16:55
• @Jodes, I'm curious why you said putting batteries in parallel is not a good idea. Can you elaborate? Is it based on the (probably fair) assumption that the average consumer can't be trusted to put two equally charged batteries together? Apr 21 '15 at 17:26
• Your assumption is correct; but also batteries may not be identical, even if they are the same brand, same chemistry, same batch. So their discharge curves, voltages, etc. will differ.
Not only would it be bad to essentially charge a non-rechargeable battery, but similarly for rechargeable batteries it is dangerous to over-charge, over-discharge, etc., which putting batteries in parallel can easily do. Just do a YouTube search for exploding batteries! – CL22 Apr 21 '15 at 18:08
• @Dan, take a look at electronics.stackexchange.com/a/16354/49251 Apr 21 '15 at 18:13

If you configure your multimeter to 10 A, this is the maximum that can be measured. If it shows 0.7, this means 0.7 A (700 mA). Now to the batteries: if you connect them in series, the voltages add up, but the capacity (mAh) stays the same. If you connect them in parallel, the capacities add but the voltage stays the same. This is normally only done with rechargeable batteries. I don't know, but I think your motor will need 3 V (2 cells in series).
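The arithmetic in the answers can be sketched in a few lines (using the values measured in the question: 0.7 A draw, ~1800 mAh per AA cell, and series cells keeping a single cell's capacity):

```python
# Rough ideal-case battery-runtime estimate: hours = capacity(Ah) / draw(A).
# Real batteries deliver somewhat less, especially at higher discharge rates.

def runtime_hours(capacity_mah, draw_a):
    return (capacity_mah / 1000.0) / draw_a

capacity_series = 1800   # two AA cells in series: voltage doubles, mAh doesn't
print(round(runtime_hours(capacity_series, 0.7), 2))   # → 2.57
```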
# In Numbers...I am Lost and Found...Infinately never alone ARTIST: The Carpenters TITLE: Top of the World Lyrics and Chords Such a feelin's coming over me There is wonder in most every thing I see Not a cloud in the sky, got the sun in my eyes And I won't be surprised if it's a dream / C GF C - / Em DmG C - / F G Em A / Dm Fm G Gsus4 / Everything I want the world to be Is now coming true especially for me And the reason is clear, it's because you are here You're the nearest thing to heaven that I've seen {Refrain} I'm on the top of the world looking down on creation And the only explanation I can find Is the love that I've found ever since you've been around Your love's put me at the top of the world / C - F - / Em DmG C - / F G C F / C DmG C - / Something in the wind has learned my name And it's telling me that things are not the same In the leaves on the trees and the touch of the breeze There's a pleasin' sense of happiness for me There is only one wish on my mind When this day is through I hope that I will find That tomorrow will be just the same for you and me All I need will be mine if you are here {Refrain twice} ### Devas and avatars Krishna (left), the eighth incarnation (avatar) of Vishnu or svayam bhagavan, with his consort Radha, worshiped as Radha Krishna across a number of traditions - traditional painting from the 1700s. The Hindu scriptures refer to celestial entities called Devas (or devī in feminine form; devatā used synonymously for Deva in Hindi), "the shining ones", which may be translated into English as "gods" or "heavenly beings".[47] The devas are an integral part of Hindu culture and are depicted in art, architecture and through icons, and mythological stories about them are related in the scriptures, particularly in Indian epic poetry and the Puranas. 
They are, however, often distinguished from Ishvara, a supreme personal god, with many Hindus worshiping Ishvara in a particular form as their iṣṭa devatā, or chosen ideal.[48][49] The choice is a matter of individual preference,[50] and of regional and family traditions.[50] Hindu epics and the Puranas relate several episodes of the descent of God to Earth in corporeal form to restore dharma to society and to guide humans to moksha. Such an incarnation is called an avatar. The most prominent avatars are of Vishnu and include Rama (the protagonist in Ramayana) and Krishna (a central figure in the epic Mahabharata).

# Wright omega function

The Wright omega function along part of the real axis

In mathematics, the Wright omega function, denoted ω, is defined in terms of the Lambert W function as:

$\omega(z) = W_{\big \lceil \frac{\mathrm{Im}(z) - \pi}{2 \pi} \big \rceil}(e^z).$

## Uses

One of the main applications of this function is in the resolution of the equation z = ln(z), as the only solution is given by $z = e^{-\omega(\pi i)}$.

y = ω(z) is the unique solution, when $z \neq x \pm i \pi$ for x ≤ −1, of the equation y + ln(y) = z. Except on those two rays, the Wright omega function is continuous, even analytic.

## Properties

The Wright omega function satisfies the relation $W_k(z) = \omega(\ln(z) + 2 \pi i k)$.

It also satisfies the differential equation

$\frac{d\omega}{dz} = \frac{\omega}{1 + \omega}$

wherever ω is analytic (as can be seen by performing separation of variables and recovering the equation ln(ω) + ω = z), and as a consequence its integral can be expressed as:

$\int \omega^n \, dz = \begin{cases} \frac{\omega^{n+1} -1 }{n+1} + \frac{\omega^n}{n} & \mbox{if } n \neq -1, \\ \ln(\omega) - \frac{1}{\omega} & \mbox{if } n = -1. \end{cases}$

Its Taylor series around the point $a = \omega_a + \ln(\omega_a)$ takes the form:

$\omega(z) = \sum_{n=0}^{+\infty} \frac{q_n(\omega_a)}{(1+\omega_a)^{2n-1}}\frac{(z-a)^n}{n!}$

where

$q_n(w) = \sum_{k=0}^{n-1} \bigg \langle \! \!
\bigg \langle \begin{matrix} n+1 \\ k \end{matrix} \bigg \rangle \! \! \bigg \rangle (-1)^k w^{k+1}$ in which $\bigg \langle \! \! \bigg \langle \begin{matrix} n \\ k \end{matrix} \bigg \rangle \! \! \bigg \rangle$ is a second-order Eulerian number.

## Values

$\begin{array}{lll} \omega(0) &= W_0(1) &\approx 0.56714 \\ \omega(1) &= 1 & \\ \omega(-1 \pm i \pi) &= -1 & \\ \omega(-\frac{1}{3} + \ln \left ( \frac{1}{3} \right ) + i \pi ) &= -\frac{1}{3} & \\ \omega(-\frac{1}{3} + \ln \left ( \frac{1}{3} \right ) - i \pi ) &= W_{-1} \left ( -\frac{1}{3} e^{-\frac{1}{3}} \right ) &\approx -2.237147028 \\ \end{array}$

# Ohm's law

V, I, and R, the parameters of Ohm's law.

In electrical circuits, Ohm's law states that the current through a conductor between two points is directly proportional to the potential difference or voltage across the two points, and inversely proportional to the resistance between them, provided that the temperature remains constant.[1] The mathematical equation that describes this relationship is:[2]

$I = \frac{V}{R}$

where V is the potential difference measured across the resistance in units of volts, I is the current through the resistance in units of amperes and R is the resistance of the conductor in units of ohms. More specifically, Ohm's law states that the R in this relation is constant, independent of the current.[3] The law was named after the German physicist Georg Ohm, who, in a treatise published in 1827, described measurements of applied voltage and current through simple electrical circuits containing various lengths of wire. He presented a slightly more complex equation than the one above (see History section below) to explain his experimental results. The above equation is the modern form of Ohm's law. In physics, the term Ohm's law is also used to refer to various generalizations of the law originally formulated by Ohm.
The simplest example of this is:

$\boldsymbol{J} = \sigma \boldsymbol{E},$
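As a quick numeric illustration of the scalar form I = V/R above (component values chosen arbitrarily):

```python
# Ohm's law in scalar form: I = V / R. Values are illustrative only.

def current_amps(v_volts, r_ohms):
    return v_volts / r_ohms

print(current_amps(12.0, 4.0))   # → 3.0  (12 V across 4 Ω drives 3 A)
```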
# How do you find the perimeter and area of a rectangle with width of 2sqrt7-2sqrt5 and length of 3sqrt7+3sqrt5?

May 18, 2017

Perimeter $P = 10\sqrt{7} + 2\sqrt{5}$
Area $A = 12$

#### Explanation:

The perimeter $P$ of a rectangle is the sum of all side lengths. The formula is $P = 2(l + w)$, where $l$ is the length and $w$ is the width:

$\Rightarrow P = 2\left((3\sqrt{7} + 3\sqrt{5}) + (2\sqrt{7} - 2\sqrt{5})\right)$
$\Rightarrow P = 2(3\sqrt{7} + 2\sqrt{7} + 3\sqrt{5} - 2\sqrt{5})$
$\Rightarrow P = 2(5\sqrt{7} + \sqrt{5})$
$\therefore P = 10\sqrt{7} + 2\sqrt{5}$

The area $A$ of a rectangle is the product of the length and the width. The formula is $A = lw$:

$\Rightarrow A = (3\sqrt{7} + 3\sqrt{5}) \times (2\sqrt{7} - 2\sqrt{5})$
$\Rightarrow A = 3\sqrt{7} \times 2\sqrt{7} + 3\sqrt{7} \times (-2\sqrt{5}) + 3\sqrt{5} \times 2\sqrt{7} + 3\sqrt{5} \times (-2\sqrt{5})$
$\Rightarrow A = 6 \times 7 - 6\sqrt{35} + 6\sqrt{35} - 6 \times 5$
$\Rightarrow A = 42 - 30$
$\therefore A = 12$

Therefore, the perimeter is $10\sqrt{7} + 2\sqrt{5}$ and the area is $12$.

May 18, 2017

Perimeter of rectangle is $2(5\sqrt{7} + \sqrt{5})$ units. Area of rectangle is $12$ sq. units.

#### Explanation:

Length of rectangle is $l = 3\sqrt{7} + 3\sqrt{5}$
Width of rectangle is $w = 2\sqrt{7} - 2\sqrt{5}$

Perimeter of rectangle is $P = 2l + 2w = 2(3\sqrt{7} + 3\sqrt{5}) + 2(2\sqrt{7} - 2\sqrt{5})$
$P = \sqrt{7}(6 + 4) + \sqrt{5}(6 - 4) = 10\sqrt{7} + 2\sqrt{5} = 2(5\sqrt{7} + \sqrt{5})$ units

Area of rectangle is $A = l \cdot w = (3\sqrt{7} + 3\sqrt{5}) \cdot (2\sqrt{7} - 2\sqrt{5})$
$A = 6 \cdot 7 - \cancel{6\sqrt{35}} + \cancel{6\sqrt{35}} - 6 \cdot 5 = 42 - 30 = 12$ sq. units.
Perimeter of rectangle is $2(5\sqrt{7} + \sqrt{5})$ units. Area of rectangle is $12$ sq. units. [Ans]

May 18, 2017

12

#### Explanation:

We know area of rectangle = length × width sq. units. So,

$\text{Area} = [3\sqrt{7} + 3\sqrt{5}] \times [2\sqrt{7} - 2\sqrt{5}]$
$\Rightarrow 3[\sqrt{7} + \sqrt{5}] \times 2[\sqrt{7} - \sqrt{5}]$
$\Rightarrow 3 \times 2 \times [(\sqrt{7})^2 - (\sqrt{5})^2]$
$\Rightarrow 3 \times 2 \times [7 - 5] = 3 \times 2 \times 2 = 12$ sq. units

Perimeter $= 2 \times [\text{length} + \text{width}]$ units
$\Rightarrow 2[\{3\sqrt{7} + 3\sqrt{5}\} + \{2\sqrt{7} - 2\sqrt{5}\}]$
$\Rightarrow 2[3\sqrt{7} + 3\sqrt{5} + 2\sqrt{7} - 2\sqrt{5}]$
$\Rightarrow 2[5\sqrt{7} + \sqrt{5}]$

May 18, 2017

Area $= 12$ square units, $P = 2(5\sqrt{7} + \sqrt{5})$

#### Explanation:

$W = 2\sqrt{7} - 2\sqrt{5}$
$L = 3\sqrt{7} + 3\sqrt{5}$

Perimeter of a rectangle $= P = L + W + L + W$
$P = 2L + 2W = 2(L + W)$
$P = 2[(3\sqrt{7} + 3\sqrt{5}) + (2\sqrt{7} - 2\sqrt{5})]$
$P = 2[3\sqrt{7} + 3\sqrt{5} + 2\sqrt{7} - 2\sqrt{5}]$
$P = 2[5\sqrt{7} + \sqrt{5}]$

Area $= A = W \times L$
$A = (2\sqrt{7} - 2\sqrt{5}) \times (3\sqrt{7} + 3\sqrt{5})$
$A = 6 \times 7 - 6\sqrt{35} + 6\sqrt{35} - 6 \times 5 = 42 - 30 = 12$ square units
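A quick numerical check of the answers above:

```python
import math

# Verify the exact results numerically.
w = 2 * math.sqrt(7) - 2 * math.sqrt(5)   # width
l = 3 * math.sqrt(7) + 3 * math.sqrt(5)   # length

perimeter = 2 * (l + w)
area = l * w

print(round(area, 10))   # → 12.0  (the sqrt(35) cross terms cancel exactly)
print(abs(perimeter - (10 * math.sqrt(7) + 2 * math.sqrt(5))) < 1e-12)   # → True
```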
This post describes our recent work on probabilistic trajectory prediction for autonomous driving, presented at CoRL 2020. PLOP is a trajectory prediction method that intends to control an autonomous vehicle (ego vehicle) in urban environments while considering and predicting the intents of other road users (neighbors). We focus here on predicting multiple feasible future trajectories for both the ego vehicle and its neighbors through a probabilistic framework, and rely on a conditional imitation learning algorithm, conditioned by a navigation command for the ego vehicle (e.g., "turn right"). Our model processes only onboard sensor data (camera and lidars) along with detections of past and present objects, relaxing the necessity of an HD map, and is computationally efficient, as it can run in real time (25 fps) on an embedded board in the real vehicle. We evaluate our method offline on the publicly available nuScenes dataset (Caesar et al., 2020), achieving state-of-the-art performance, investigate the impact of our architecture choices in online simulated experiments, and show preliminary insights for real vehicle control.

Figure 1. Qualitative example of trajectory predictions on a test sample from the nuScenes dataset. The top image shows a bird's eye view of PLOP's predictions for the ego and neighbor vehicles (to be compared with the ground truth in green). The bottom row presents the input image (left), in which we added object correspondences with the bird's eye view, and the auxiliary semantic segmentation of this image (right).

Predicting the future positions of other agents of the road, or of the autonomous vehicle itself, is critical for autonomous driving. This trajectory prediction must not only respect the rules of the road, but also capture the interactions of the agents over time. It is also important to allow multiple possible predictions, as there is usually not a single valid trajectory.
Some approaches such as ChauffeurNet (Bansal et al., 2018) use a high-level scene representation (road map, traffic lights, speed limit, route, dynamic bounding boxes, etc.). More recently, MultiPath (Chai et al., 2019) uses trajectory anchors, as in one-step object detection, extracted from the training data for ego vehicle prediction. (Hong et al., 2019) use a high-level representation which includes some dynamic context. In contrast, we choose to also leverage low-level sensor data, here lidar point clouds and the camera image. In that domain, recent approaches address the variation in agent behaviors by predicting multiple trajectories, often in a stochastic way. Many works, e.g., PRECOG (Rhinehart et al., 2019), MFP (Tang & Salakhutdinov, 2019), SocialGAN (Gupta et al., 2018) and others (Rhinehart et al., 2018), focus on this aspect through a probabilistic framework on the network output or latent representations, producing multiple trajectories for the ego vehicle, nearby vehicles or both. (Phan-Minh et al., 2020) generate a trajectory set, then classify correct trajectories. (Marchetti et al., 2020) generate multiple futures from encodings of similar trajectories stored in a memory. (Ohn-Bar et al., 2020) learn a weighted mixture of expert policies trained to mimic agents with specific behaviors. In PRECOG, (Rhinehart et al., 2019) advance a probabilistic formulation that explicitly models interactions between agents, using latent variables to model their plausible reactions, with the possibility to precondition the trajectory of the ego vehicle by a goal.

## PLOP method

### Contributions

Our main goal is to produce a trajectory prediction which can be used to drive the ego vehicle, relying on a conditional imitation learning algorithm conditioned by a navigation command for the ego vehicle (e.g., "turn right").
To do so, we propose a single-shot, anchor-less trajectory prediction method, based on Mixture Density Networks (MDNs) and polynomial trajectory constraints, relying only on on-board sensors, which relaxes the HD map requirement and allows more flexibility for driving in the real world. The polynomial formulation ensures that the predicted trajectories are coherent and smooth, while providing more learning flexibility through the extra parameters. We find that this mitigates the training instability and mode collapse that are common with MDNs (Cui et al., 2019). PLOP is trainable end-to-end by imitation learning, where data is relatively easy to obtain, and it is computationally efficient during both training and inference as it predicts trajectory coefficients in a single step, without requiring an RNN-based decoder. The polynomial trajectory coefficients eschew the need for anchors (Chai et al., 2019), whose quality can vary across datasets. We propose an extensive evaluation of PLOP and show its effectiveness across datasets and settings. We conduct a comparison showing the improvement over the state-of-the-art PRECOG (Rhinehart et al., 2019) on the public nuScenes dataset (Caesar et al., 2020); then, for a better evaluation of the driving capacities of PLOP, we study closed loop performance for the ego vehicle, in simulation and with preliminary insights for real vehicle control. ### Network architecture PLOP takes as inputs: the ego and neighbor vehicles' past positions, represented as time sequences of x and y over the last 2 seconds; the frontal camera image of the ego vehicle; and a 2-second history of bird's eye views, with a cell resolution of 1 m square, containing the lidar point cloud and the object detection information represented in Figure 2. The object detections are the output of a state-of-the-art perception algorithm. Figure 2. Image and Bird's eye view. 
The left image is an example of a front camera input image of PLOP, and the diagram on the right is a representation of the bird's eye view input. We pass these inputs through a multibranch neural network, represented in Figure 3, to predict the ego vehicle's future trajectory together with two auxiliary tasks: future trajectory prediction for the neighbor vehicles and semantic segmentation of the camera image. Figure 3. PLOP's Architecture. PLOP's architecture is represented on the left, while the polynomial multimodal Gaussian trajectory representation is on the right. The front camera image features, the bird's eye view features and the ego vehicle past position features are passed to a conditional fully connected architecture that outputs multiple future trajectories for the ego vehicle according to the current navigation order. The trajectories are predicted using MDNs, where the Gaussian means are generated using polynomial functions of degree 4 over x and y. To improve the stability of our training and inject awareness of the scene layout into the camera features, we pass them through a U-Net decoder that outputs a semantic segmentation, trained with an auxiliary cross entropy loss. To improve the encoding of interactions between the different agents of the scene in the bird's eye features, we predict the possible future trajectories of each neighbor, feeding the bird's eye view encoding and its past positions, encoded through an LSTM layer, to a small fully connected network. The weights of the LSTMs and fully connected layers are shared between all neighbors. This output gives us useful information about the ego vehicle's environment that can be used online to improve driving, for example with safety collision checks. ## Offline evaluation To evaluate PLOP, we use the nuScenes dataset to train the trajectory loss, along with the Audi dataset (Geyer et al., 2019) to train the semantic segmentation loss. 
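To make the trajectory representation concrete, here is a minimal sketch of decoding such polynomial MDN outputs into waypoints (our illustration with made-up names, not the actual PLOP code): each of the K mixture components stores degree-4 polynomial coefficients for x(t) and y(t), and the component's mean trajectory is obtained by evaluating those polynomials at future time steps.

```python
# Illustrative decoding of polynomial-MDN trajectory parameters (names ours).
import numpy as np

def decode_trajectories(coeffs_x, coeffs_y, horizon=4.0, steps=8):
    """coeffs_*: (K, 5) polynomial coefficients, highest degree first.
    Returns (K, steps, 2) waypoints covering `horizon` seconds."""
    t = np.linspace(horizon / steps, horizon, steps)
    xs = np.stack([np.polyval(c, t) for c in coeffs_x])  # (K, steps)
    ys = np.stack([np.polyval(c, t) for c in coeffs_y])
    return np.stack([xs, ys], axis=-1)

# One mode driving straight ahead at 10 m/s: x(t) = 10 t, y(t) = 0.
traj = decode_trajectories(np.array([[0, 0, 0, 10, 0.0]]),
                           np.array([[0, 0, 0, 0, 0.0]]))
print(traj.shape)        # (1, 8, 2)
print(traj[0, -1, 0])    # 40.0  (x after 4 s)
```

In a real MDN head the network would also predict per-component weights and covariances; only the means are sketched here.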
We compare our method with the DESIRE (Lee et al., 2017) baseline and against two state-of-the-art methods, PRECOG and ESP (Rhinehart et al., 2019), using the minimum mean squared deviation metric to avoid penalizing valid trajectories that do not match the ground truth. For one agent, meaning ego vehicle only, PRECOG and ESP have access to the future desired target position; PRECOG returns significantly better results than PLOP, but PLOP still reaches results similar to ESP. For multiple agents, PLOP outperforms the other presented methods. We note that the comparison is fairer for neighbor trajectories, and the performance there is relevant since those predictions are by definition open loop. Figure 4. Comparison with the state of the art: against DESIRE, ESP and PRECOG for predicting a trajectory 4 seconds into the future. However, we argue that such an evaluation is not fully relevant for controlling the ego vehicle in real conditions. Such metrics do not weight the situations in which errors are made: failing to brake at a traffic light is a critical error, for example, but it is quick and represents a very small part of the test set, so it barely affects the overall metric. Conversely, a small constant error such as driving 2 kph too slow over the whole test set might be acceptable and harmless, yet leads to a considerable overall error. Also, using only offline metrics, where the method cannot control the vehicle, does not allow us to evaluate its capacity to react to its own mistakes. ## Online evaluation through simulation To simulate driving, we developed a data-driven simulator that allows us to use real driving data to simulate applying the prediction to the ego vehicle. We can generate the input data that corresponds to the new vehicle position after following the trajectory using reprojections (for the image and the point cloud), then use it to predict a new trajectory, and so on. 
This allows us to measure the performance in closed loop, and in particular to count failures which would have resulted in a takeover. We rely on 3 metrics: counts of lateral errors (>1 m from the expert), high speed errors (catching up to a vehicle 15% faster than the real vehicle up to 0.6 s in the future) and low speed errors (>20 kph under the expert speed). Figure 5. Evaluation using the simulator. Comparison with PLOP without the semantic segmentation loss, a constant velocity baseline and a multi-layer perceptron baseline in the table on the left. Additional qualitative results about the error positions on the different test tracks are on the right. We trained PLOP on an internal dataset combining both open road and urban test track data, and compared PLOP, PLOP without the auxiliary semantic loss, the constant velocity baseline and an MLP baseline in our simulator using test data. We note that semantic segmentation improves the driving performance, and that the MLP has better offline metrics than the constant velocity approach but still performs worse in the simulated driving conditions. As expected, offline metrics are not discriminating enough for the online behavior, since the best model checkpoints in simulation are not necessarily the ones with the best offline metrics. An additional ablation study, where we remove essential information such as the camera image input, shows that it may even be dangerous to trust offline metrics blindly. ## Conclusion In this work, we demonstrate the interest of our multi-input multimodal approach PLOP for vehicle trajectory prediction in an urban environment. Our architecture leverages frontal camera and lidar inputs to produce multiple trajectories using reparameterized Mixture Density Networks, with an auxiliary semantic segmentation task. We show that we can improve on open loop state-of-the-art performance in a multi-agent setting, by evaluating vehicle trajectories on the nuScenes dataset. 
We also provide a simulated closed loop evaluation, as a step towards real vehicle online application. Please check out our paper along with its supplementary materials for more details about our approach and experiments, and feel free to contact us with any questions. ## References 1. Caesar, H., Bankiti, V., Lang, A. H., Vora, S., Liong, V. E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., & Beijbom, O. (2020). nuScenes: A Multimodal Dataset for Autonomous Driving. CVPR. 2. Bansal, M., Krizhevsky, A., & Ogale, A. S. (2018). ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst. CoRR. 3. Chai, Y., Sapp, B., Bansal, M., & Anguelov, D. (2019). MultiPath: Multiple Probabilistic Anchor Trajectory Hypotheses for Behavior Prediction. 4. Hong, J., Sapp, B., & Philbin, J. (2019). Rules of the Road: Predicting Driving Behavior with a Convolutional Model of Semantic Interactions. CoRR. 5. Rhinehart, N., McAllister, R., Kitani, K., & Levine, S. (2019). PRECOG: Prediction conditioned on goals in visual multi-agent settings. ICCV. 6. Tang, C., & Salakhutdinov, R. R. (2019). Multiple futures prediction. Advances in Neural Information Processing Systems, 15424–15434. 7. Gupta, A., Johnson, J., Fei-Fei, L., Savarese, S., & Alahi, A. (2018). Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks. CoRR. 8. Rhinehart, N., McAllister, R., & Levine, S. (2018). Deep Imitative Models for Flexible Inference, Planning, and Control. CoRR. 9. Phan-Minh, T., Grigore, E. C., Boulton, F. A., Beijbom, O., & Wolff, E. M. (2020). CoverNet: Multimodal behavior prediction using trajectory sets. CVPR. 10. Marchetti, F., Becattini, F., Seidenari, L., & Bimbo, A. D. (2020). MANTRA: Memory augmented networks for multiple trajectory prediction. CVPR. 11. Ohn-Bar, E., Prakash, A., Behl, A., Chitta, K., & Geiger, A. (2020). Learning Situational Driving. CVPR. 12. 
Cui, H., Radosavljevic, V., Chou, F.-C., Lin, T.-H., Nguyen, T., Huang, T.-K., Schneider, J., & Djuric, N. (2019). Multimodal trajectory predictions for autonomous driving using deep convolutional networks. ICRA. 13. Geyer, J., Kassahun, Y., Mahmudi, M., Ricou, X., Durgesh, R., Chung, A. S., Hauswald, L., Pham, V. H., Mühlegg, M., Dorn, S., Fernandez, T., Jänicke, M., Mirashi, S., Savani, C., Sturm, M., Vorobiov, O., & Schuberth, P. (2019). A2D2: AEV Autonomous Driving Dataset. http://www.a2d2.audi 14. Lee, N., Choi, W., Vernaza, P., Choy, C., Torr, P., & Chandraker, M. (2017). DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents. 2165–2174. https://doi.org/10.1109/CVPR.2017.233
Computing the structure of the group completion of an abelian monoid, how hard can it be? - MathOverflow Question by Ryan Budney (2010-02-03): <p>Cherry Kearton, Bayer-Fluckiger and others have results that say the monoid of isotopy classes of smooth oriented embeddings of $S^n$ in $S^{n+2}$ is not a free commutative monoid provided $n \geq 3$. The monoid structure I'm referring to is the connect sum of knots.</p> <p>Bayer-Fluckiger has a result in particular that says you can satisfy these equations $$a+b=a+c, \ \ \ \ b \neq c$$ where $a,b,c$ are isotopy classes of knots and $+$ is connect sum.</p> <p>When $n=1$ it's an old result of Horst Schubert's that the monoid of knots is free commutative on countably-infinite many generators. </p> <p>What I'm wondering is, does anyone have an idea of how difficult it might be to compute the structure of the group completion of the monoid of knots, say, for $n \geq 3$? That's not really my question for the forum, though. </p> <p>It's this: Do people have good examples where it's "easy" to compute the group-completion of a commutative monoid, but for which the monoid itself is still rather mysterious? Meaning, one where rather minimal amounts of information are required to compute the group completion? Presumably there are examples where it's painfully difficult to say anything about the group completion? For example, can it be hard to say if there's torsion in the group completion?</p> Answer by Pete L. 
Clark (2010-02-03): <p>(Warning: the examples that I give here are all quite trivial compared to the motivating example.)</p> <p>In general the group completion of a commutative monoid can have a much simpler structure than the monoid itself. An extreme example is the case of a monoid $M$ with an <a href="http://en.wikipedia.org/wiki/Absorbing%5Felement" rel="nofollow">absorbing element</a>, i.e., an element $z$ with $z*x = x*z = z$ for all $x \in M$. Then the group completion will just be the trivial group.</p> <p>There are natural examples of monoids with absorbing elements. For instance, on p.5 </p> <p><a href="http://math.uga.edu/~pete/settheorypart2.pdf" rel="nofollow">http://math.uga.edu/~pete/settheorypart2.pdf</a></p> <p>I give the example of the commutative monoid of cardinalities of at most countable sets. This is the usual natural numbers together with an additional (absorbing) point at infinity. It has the natural structure of a semiring, so it is somewhat disappointing that its ring completion is trivial.</p> <p>More generally, if you take a submonoid $M$ (in particular, a subset!) of the cardinal numbers under addition such that $M$ contains infinite cardinals, then you need not have an absorbing element but nevertheless the group completion will be trivial. </p> <p>This essentially amounts to the example of a totally ordered set $(X,\leq)$ with a least element $0$ made into a commutative monoid via $x+y = \max(x,y)$. </p> <p><b>Addendum</b>: As Yemon Choi has pointed out below, a yet weaker condition for a commutative monoid to have trivial group completion is that $x+x = x$ for all $x \in M$. 
There are very rich classes of monoids satisfying this property!</p> Answer by Reid Barton (2010-02-03): <blockquote> <p>Do people have good examples where it's "easy" to compute the group-completion of a commutative monoid, but for which the monoid itself is still rather mysterious?</p> </blockquote> <p>This happens all the time in K-theory $K^0(X)$, both algebraic and topological. Perhaps it is even the reason that K-theory is a useful tool.</p> <p>For a striking algebraic example, take $X = \mathbb{A}^n_k$ where $k$ is a field. Then $K^0(X)$ is the group completion of the commutative monoid $M$ of isomorphism classes of finitely generated projective modules over $R = k[x_1, \ldots, x_n]$. In 1955 Serre asked whether every such module was free, i.e., whether $M = \mathbb{N}$. This question became known as <a href="http://en.wikipedia.org/wiki/Quillen-Suslin_theorem" rel="nofollow">Serre's conjecture</a>. Serre proved in 1957 that every finitely generated projective $R$-module is <em>stably</em> free, i.e., $K^0(X) = \mathbb{Z}$. However, it was not until 1976 that Quillen and Suslin independently proved Serre's original conjecture. So between 1957 and 1976, $M$ was an example of a commutative monoid whose group completion was known but which itself was not known. This is only a historical example, because $M = \mathbb{N}$ turns out to be very simple; however, it illustrates the difficulty of the question in general.</p> <p>A topological example where the commutative monoid is not so simple is given by $KO^0(S^n)$. Let us take $n$ congruent to 3, 5, 6, or 7 modulo 8, so that $KO^0(S^n) = \mathbb{Z}$ by Bott periodicity (the generator being given by the trivial one-dimensional real vector bundle). 
Let $T$ be the tangent bundle to $S^n$. In $KO^0(S^n)$, of course, the class of $T$ is equal to its dimension $n$. But if we let $M$ be the commutative monoid of isomorphism classes of finite-dimensional real vector bundles on $S^n$ (so that $KO^0(S^n)$ is the group completion of $M$) then the class of $T$ is not equal to the class of the trivial $n$-dimensional vector bundle unless $S^n$ is parallelizable, which only happens when $n$ is equal to (0 or 1 or) 3 or 7. So for all other values of $n$, $M$ is not simply $\mathbb{N}$; there are extra vector bundles which get killed by the group completion process. Understanding these monoids $M$ for all $n$ amounts to understanding the homotopy groups of all the groups $O(m)$, which I expect is not much easier than understanding unstable homotopy groups of spheres.</p> <p>Finally, Pete's example of the monoid of cardinalities of at most countable sets and its absorbing element also makes an appearance in K-theory; here it is called the <a href="http://en.wikipedia.org/wiki/Eilenberg_swindle" rel="nofollow">Eilenberg swindle</a> and it explains why we restrict ourselves to <em>finitely-generated</em> projective modules.</p> Answer by David Speyer (2010-02-03): <p>This example is similar to Reid's: If $G$ is a finite group, then $K^0(G-\mathrm{rep})$ is just the class functions on $G$. But the question of whether a specific class function is the character of a representation, or only of a virtual representation, can be very hard. 
In a sense, Mark Haiman got tenure at Berkeley for proving that <a href="http://arxiv.org/abs/math.AG/0010246" rel="nofollow">certain class functions</a> on $S_n$ were characters.</p> Answer by Tom Leinster (2010-02-03): <p>Here's a strange result that can help in computing the group completion of a commutative monoid.</p> <p>Let $M$ be a commutative monoid. Call an element $h \in M$ <strong>high</strong> if for all $x \in M$, there exists $y \in M$ such that $h = x + y$. Write $H(M)$ for the set of high elements of $M$.</p> <p>Examples: </p> <ul> <li>If $M$ is a group then $H(M) = M$ (and conversely).</li> <li>Any join-semilattice (i.e. a poset in which every finite subset has a least upper bound) can be viewed as a commutative monoid $M$, with the least upper bound of two elements as $+$ and the least element as $0$. Then $H(M)$ has at most one element, which is the greatest element if such exists.</li> <li>If $M = \mathbb{N}$, with the usual addition, then $H(M) = \emptyset$.</li> </ul> <p><strong>Proposition</strong> If $H(M) \neq \emptyset$ then $H(M)$ is a group, under the same binary operation $+$ as $M$, <em>but not necessarily the same zero</em>.</p> <p>For a rather trivial example of why the zero might not be the same, consider a nontrivial join-semilattice with a greatest element. For a proof and nontrivial examples, see <a href="http://arxiv.org/abs/math.CT/0212377" rel="nofollow">this paper</a> by Marcelo Fiore and me. (The proof's in section 3.)</p> <p>Now: </p> <p><strong>Theorem</strong> $H(M)$ is, if not empty, the group completion of $M$.</p> <p>How does this work? Write $z$ for the zero element of $H(M)$. Then there is a monoid homomorphism $\pi = z + (\ ): M \to H(M)$. 
It's not too hard to show that every homomorphism from $M$ to a group factors uniquely through $\pi$. Indeed, given a map $\phi: M \to A$, with $A$ a group, the corresponding map $\bar{\phi}: H(M) \to A$ is simply the restriction of $\phi$.</p> <p>The theorem only helps when there's at least one high element, though. There are nontrivial situations when there are no high elements, as the example above of $\mathbb{N}$ illustrates.</p> Answer by Oscar Randal-Williams (2010-02-03): <p>In a paper (<a href="http://arxiv.org/abs/0905.2855" rel="nofollow">here</a>) with Soren Galatius, we compute the topological group completions of certain topological monoids made up of moduli spaces of surfaces with various structures. Taking $\pi_0$ of these statements leads to examples of such group completions of discrete monoids.</p> <p>As a particular example, take the discrete monoid <code>$\mathcal{M} := \coprod_{g \geq 0} [\Sigma_{g, 1}, \partial; Y, *] / \Gamma(\Sigma_{g, 1})$</code>. Here the square brackets denote the set of homotopy classes of maps from the genus $g$ surface with one boundary component $\Sigma_{g,1}$ to a path connected space $Y$ (sending the boundary of the surface to a basepoint $* \in Y$), and we quotient this set by the action of the mapping class group of the surface (rel boundary). The monoid structure is by pair-of-pants gluing of surfaces. This is a fairly complicated monoid, but its group completion turns out to be $$MTSO(2)_0(Y),$$ the degree zero part of a certain homology theory applied to the space $Y$. (It is the homology theory associated to the spectrum occurring in the Madsen--Weiss theorem.) 
In particular it only sees $Y$ "stably", so if you do something drastic like plus-construct $Y$, the group completion does not change (but the monoid certainly does!).</p> <p>Thus for example if you take $Y$ to be the Poincare sphere (which plus-constructs to $S^3$), one finds that the group completion is simply $\mathbb{Z}$ (which is always in there: there is a surjection $\mathcal{M} \to \mathbb{N}$ that sends a surface to its genus). I don't know if one can see this directly from the monoid.</p> Answer by Allen Hatcher (2010-02-04): <p>An ultra-classical example: the failure of unique factorization in algebraic number fields. Here one looks at the multiplicative monoid of nonzero algebraic integers in a finite extension field of $\mathbb Q$. Factoring out units gives a quotient monoid $M$, and this is free (abelian) on the irreducible elements exactly when the unique factorization property holds. The monoid $M$ embeds in the ideal group, the free abelian group generated by the prime ideals, and the group completion of $M$ is the finite-index subgroup of the ideal group generated by the principal ideals. This subgroup is also free, but without a canonical basis. One can think of this subgroup as a somewhat skewed lattice in the ideal group, and $M$ is the intersection of this lattice with the positive "orthant" of the ideal group. </p> <p>I'm not an expert on this stuff, so please correct any inaccuracies in what I said above. I gather that the structure of $M$ as a monoid can be rather complicated, even though it's a submonoid of a free monoid. Perhaps this complexity is why the monoid structure seems rarely to be discussed explicitly. 
Does anyone know any references describing the monoid structure?</p>
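As an aside, Tom Leinster's "high element" criterion above is straightforward to check computationally on a small finite commutative monoid; a quick sketch (our code, purely illustrative): an element h is high iff for every x there is some y with x + y = h.

```python
# Brute-force check of the "high element" criterion on finite commutative
# monoids given as (elements, binary operation).
def high_elements(elems, op):
    elems = list(elems)
    return {h for h in elems
            if all(any(op(x, y) == h for y in elems) for x in elems)}

# max on {0,1,2,3} is a join-semilattice; the only high element is the top.
print(high_elements(range(4), max))                       # {3}
# Z/4 under addition is a group, so every element is high.
print(high_elements(range(4), lambda a, b: (a + b) % 4))  # {0, 1, 2, 3}
```

Consistent with the proposition, H(M) for the semilattice is the one-element (trivial) group, matching the trivial group completion discussed in Pete Clark's answer.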
Symbols:Greek/Zeta Zeta The $6$th letter of the Greek alphabet. Minuscule: $\zeta$ Majuscule: $\Zeta$ The $\LaTeX$ code for $\zeta$ is \zeta . The $\LaTeX$ code for $\Zeta$ is \Zeta . Zenith Distance $\zeta$ Let $X$ be the position of a star (or other celestial body) on the celestial sphere. The zenith distance $\zeta$ of $X$ is defined as the angle subtended by the arc of the vertical circle through $X$ between $X$ and the zenith. The $\LaTeX$ code for $\zeta$ is \zeta . Riemann Zeta Function $\map \zeta s$ The Riemann Zeta Function $\zeta$ is the complex function defined on the half-plane $\map \Re s > 1$ as the series: $\displaystyle \map \zeta s = \sum_{n \mathop = 1}^\infty \frac 1 {n^s}$ The $\LaTeX$ code for $\map \zeta s$ is \map \zeta s .
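For real $s > 1$ the defining series above converges, so partial sums give a quick numerical check (an illustration of the definition, not part of the symbol listing), e.g. against the classical value $\map \zeta 2 = \pi^2/6$:

```python
# Partial sums of the series defining the Riemann zeta function for real s > 1.
from math import pi

def zeta_partial(s, terms=100000):
    """Sum of 1/n^s for n = 1 .. terms; the tail is O(terms**(1 - s))."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

print(abs(zeta_partial(2) - pi**2 / 6) < 1e-4)  # True
```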
# Coupled Simulations and Fluid-Structure Interaction A generic Computational Continuum Mechanics library like OpenFOAM is a natural platform for Fluid-Structure Interaction (FSI): both fluid and structural solvers already exist. Furthermore, doing a simulation in a single piece of software simplifies the operation: there is no need for software-to-software coupling between separate programs. The fact that all OpenFOAM solvers and discretisation methods share the base mesh and matrix support, and that various mesh-to-mesh mapping tools are already implemented, further simplifies the problem. Early FSI work in FOAM/OpenFOAM was performed at Imperial College in the late 1990s - but it wasn't easy. With the introduction of multi-zonal support and mesh-based field registration, FSI in the new version is much easier. In this session, we will present the FSI-relevant capabilities and examples of application. ## Background and Tutorials: OpenFOAM Capabilities Supporting FSI In this session we will review some components relevant for programming FSI in OpenFOAM and review the new FSI demonstration solver. • Capability talk by Hrvoje Jasak Capability talk slides • Multi-Domain Support in the Solver • Mesh-to-Mesh Coupling Tools: surface and volume interpolation • Interaction with external software: custom boundary conditions or file-based coupling • 6-DOF Rigid Motion Solver Development by Dubravko Matijasevic of the University of Zagreb, Croatia Abstract Slides Information on automatic mesh motion will be included in the Engines modelling session ## Examples of FSI Simulations in OpenFOAM ### Self-contained solver for FSI with large structural displacements • Considering interaction between an incompressible Newtonian fluid and a St. 
Venant-Kirchhoff elastic solid • Fluid flow is modelled using the incompressible Navier-Stokes equations in ALE formulation • Elastic solid deformation is described by the geometrically nonlinear momentum equation in an updated Lagrangian formulation • Both models are discretised in space using a second-order accurate finite volume method • Temporal discretisation of both models is performed using a fully implicit, second-order accurate, three-time-level differencing scheme • Coupling is performed using a loosely coupled staggered algorithm Structural solver validation The temporal and spatial accuracy of the structural dynamics solver is validated before using it in the FSI solver. The picture below shows the temporal variation of the beam tip deflection as a result of a suddenly applied traction force at the beam end ($\mathcal{F}$ is the dimensionless traction force). The most deformed shape of the beam is shown in the picture below. The surface of the beam is coloured by the equivalent Cauchy stress. FSI results The FSI solver is tested on the flow past a cantilevered elastic square beam. The frequency of the inlet flow velocity pulsation is equal to the first natural frequency of the beam. The picture below shows the streamline pattern and the equivalent Cauchy stress at the beam boundary. This calculation is done for a solid-fluid density ratio of 100:1. For lower density ratios the loosely coupled algorithm becomes unstable.
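The staggered coupling idea can be illustrated with a deliberately tiny toy model (our sketch, not OpenFOAM code; the "fluid" is reduced to a drag force on a 1-DOF spring-mass "solid"): within each time step the fluid force is evaluated with the lagged interface velocity, and the structural equation is then advanced with that force.

```python
# Toy loosely coupled (staggered) FSI loop: fluid force from the lagged
# interface state, then a structural update. Parameters are arbitrary.
def simulate(steps=10000, dt=1e-3, k=100.0, m=1.0, c_fluid=1.0):
    x, v = 1.0, 0.0                      # initial tip deflection, velocity
    for _ in range(steps):
        f_fluid = -c_fluid * v           # "fluid solve" with lagged velocity
        a = (f_fluid - k * x) / m        # solid momentum equation
        v += dt * a                      # semi-implicit Euler update
        x += dt * v
    return x

print(abs(simulate()) < 0.05)  # True: fluid drag damps the oscillation
```

In a real partitioned FSI solver each "solve" is a full PDE solve with interface data exchange, and, as noted above, loose coupling of this kind loses stability when the solid-fluid density ratio drops (the added-mass effect).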
# Reset directory ## Recommended Posts I open a file using OPENFILENAME. Then later on when I save a file using SDL_SaveBMP(), it saves to the directory from OPENFILENAME. How do I reset it back to the directory where the program is being run? ##### Share on other sites Taken from MSDN:

#include <direct.h>
#include <stdlib.h>
#include <stdio.h>

char g_buffer[_MAX_PATH];

int main( int argc, char* argv[] )
{
    /* Get the current working directory: */
    _getcwd( g_buffer, _MAX_PATH );

    /* ... Init SDL and stuff ... */
    /* ... OPENFILENAME call (the open-file dialog can change the
       working directory unless OFN_NOCHANGEDIR is set) ... */

    /* Restore the startup directory before saving: */
    _chdir( g_buffer );
    SDL_SaveBMP( /* ... */ );
}

Give that a try. It is what I had to do when working with Win32 and saving/opening files in a similar way. I used _getcwd rather than argv[0] because if you need it for a Win32 App, it will still be the same. [smile] ##### Share on other sites It seems that those two functions were just what I was looking for, thanks! I got my app working now.
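For reference, the same save-and-restore pattern looks like this in Python terms (our illustration, not from the original thread): remember the startup directory before anything that may change the process working directory, then restore it before writing files with relative paths.

```python
# Save the startup working directory, simulate a directory change (as an
# open-file dialog might cause), then restore before saving output files.
import os
import tempfile

startup_dir = os.getcwd()           # equivalent of _getcwd at program start
os.chdir(tempfile.gettempdir())     # e.g. a file dialog changed the cwd
os.chdir(startup_dir)               # restore before saving
print(os.getcwd() == startup_dir)   # True
```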
# 3D Finite difference normal calculation for sphere ## Recommended Posts Hi everyone, I'm currently adapting a planar quadtree terrain system to handle spherical terrain (i.e. I want to render planets as opposed to just an endless terrain stretching in the X-Z plane). At the moment, I use this algorithm to calculate the normal for a given point on the terrain from the height information (the heights are generated using simplex noise):

public Vector3 GetNormalFromFiniteOffset(float x, float z, float sampleOffset)
{
    float hL = GetHeight(x - sampleOffset, z);
    float hR = GetHeight(x + sampleOffset, z);
    float hD = GetHeight(x, z - sampleOffset);
    float hU = GetHeight(x, z + sampleOffset);

    Vector3 normal = new Vector3(hL - hR, 2, hD - hU);
    normal.Normalize();

    return normal;
}

The above works fine for my planar quadtree, but of course it won't work for spherical terrain because it assumes the Y direction is always up. I guess I need to move the calculation into the plane which lies at a tangent on the sphere for the point I'm evaluating. I was wondering if anyone knows of a good or proven way to transform this calculation for any point on a sphere, or if there's a better way to calculate the normal for a point on a sphere which has been radially displaced using a noise-based height field? I'm using XNA by the way. Thanks very much for looking! ##### Share on other sites

public Vector3 GetNormalFromFiniteOffset(Vector3 location, float sampleOffset)
{
    Vector3 normalisedLocation = Vector3.Normalize(location);
    Vector3 arbitraryUnitVector = Math.Abs(normalisedLocation.Y) > 0.999f ? Vector3.UnitX : Vector3.UnitY;

    Vector3 tangentVector1 = Vector3.Cross(arbitraryUnitVector, normalisedLocation);
    tangentVector1.Normalize();
    Vector3 tangentVector2 = Vector3.Cross(tangentVector1, normalisedLocation);
    tangentVector2.Normalize();

    float hL = GetHeight(location - tangentVector1*sampleOffset);
    float hR = GetHeight(location + tangentVector1*sampleOffset);
    float hD = GetHeight(location - tangentVector2*sampleOffset);
    float hU = GetHeight(location + tangentVector2*sampleOffset);

    Vector3 normal = 2*normalisedLocation + (hL - hR)*tangentVector1 + (hD - hU)*tangentVector2;
    normal.Normalize();

    return normal;
}

I can't test it yet, but I wonder if anyone thinks this looks like a decent approach, or are there obvious issues with how I'm doing this? Thanks again! ##### Share on other sites I assume you get discontinuities because of the arbitrary tangents. It's impossible to move an orientation over the surface of a sphere without singularities. If you use normals at discrete offsets (e.g. per vertex or per face) the discontinuity might not matter, but if you do it per pixel I expect a visible seam. In the latter case the solution is to consider the singularities when calculating the normal - similar to a cubemap where in the corner you need to sample from 3 instead of 4 texels, and projected angles become 120 instead of 90 degrees. (The orthogonality you assume in your code breaks at the corners.) ##### Share on other sites Thanks for the reply - yes, I thought about what would happen if I used the same arbitrary vectors for every position, which is why I choose between the X direction or the Y direction depending on the Y component of the position. I was thinking this would ensure that the tangents always form an orthonormal basis around the point on the sphere. It will result in different tangents for different positions, but I was thinking this wouldn't matter as long as I was taking samples across two perpendicular directions in the tangent plane. 
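For reference, here is a runnable sketch of the tangent-basis finite-difference idea from this thread, in Python rather than C#/XNA (function and variable names are ours, and the height function is a stand-in for the simplex-noise sampler):

```python
# Finite-difference normal on a radially displaced sphere: build two tangents
# at the sample point, sample heights along them, then tilt the radial
# direction by the height differences.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def get_normal(location, sample_offset, get_height):
    n = normalize(location)                      # radial "up" direction
    arbitrary = (1, 0, 0) if abs(n[1]) > 0.999 else (0, 1, 0)
    t1 = normalize(cross(arbitrary, n))          # first tangent
    t2 = normalize(cross(t1, n))                 # second tangent
    off = lambda t, s: tuple(p + s * c for p, c in zip(location, t))
    hL = get_height(off(t1, -sample_offset)); hR = get_height(off(t1, sample_offset))
    hD = get_height(off(t2, -sample_offset)); hU = get_height(off(t2, sample_offset))
    return normalize(tuple(2*nc + (hL - hR)*t1c + (hD - hU)*t2c
                           for nc, t1c, t2c in zip(n, t1, t2)))

# Sanity check: with a constant height field the normal must be purely radial.
n = get_normal((0.0, 0.0, 5.0), 0.01, lambda p: 1.0)
print(n)  # (0.0, 0.0, 1.0)
```

This reproduces the structure of the C# snippet above, including the branch on the Y component; as discussed in the replies, that branch introduces a tangent-orientation discontinuity, which this sketch does not attempt to fix.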
##### Share on other sites

The tangents are orthogonal, but if you rotated them slightly you would get slightly different results, because you would sample from different neighbouring positions. And this is what happens when the branch alternates between X and Y: you get large discontinuities in tangent directions at the branch point, and this can cause a visible seam. The problem is unavoidable - you can't cover a sphere with a net of orthonormal isolines without introducing singularities, so you can't calculate an orthonormal basis around the sphere without discontinuities in orientation. You could, however, work around the issue by ensuring the jumps in orientation are always 90 degrees. (Might be more efficient than special cases for edges / corners, but the math won't be super easy.) You would be surprised how much impact this problem has on topics like surface parametrization, remeshing etc. But as said, chances are that in practice you won't notice the seam - I'm just nitpicking. See how your solution works for you...

### Similar Content

• Hi, I am a CAM developer who has worked with C++ and C# for the past 5 years. I started working with DirectX 6 months ago. I developed a touch-screen control viewer using Direct2D, and I am currently working on a 3D viewer; my progress with Direct3D is slow. I want to be a game developer. As I am new to this, I want to know what the possibilities are to explore in this area. How do I start developing game engines? Is it through tutorials? I have heard suggestions from friends that going for an MS helps. I am not sure which path to choose. Is it better to go for higher studies and start exploring? I am currently working in India. I want to go to Canada and settle there. Are there any good universities there to learn about graphics programming? Sorry if I am asking too many questions, but I want to know my options for getting ahead.
• Hi, Can anyone point me in a good direction on how to resolve this? I have a flat mesh made from many quads (size 1x1 each), each split into 2 triangles (generated procedurally). What I want to achieve is to "merge" small quads into bigger ones (shown in picture 01). English is not my mother language and my search got no results... maybe I'm just forming the question wrong. I have an array[][] where I store "map" information; for now I look for blobs of the same value in it, then for each position I create 1 quad, and at the end I create a mesh from all of them. Is there any good algorithm for creating a mesh between random points on the same plane? The fewer triangles the better. Or for "de-tessellating" this into bigger/fewer triangles/quads? Also, I would like to find "edges" and create "faces" between edge points (picture 02 shows what I want to achieve). No need for whole code; it would be nice if someone could just point me in a good direction. Thanks

• Hi, I am working on a project where I'm trying to use Forward Plus Rendering on point lights. I have a simple reflective scene with many point lights moving around it. I am using an effects file (.fx) to keep my shaders in one place. I am having a problem with the compute shader code: I cannot get it to calculate the tiles and lighting properly. Is there anyone willing to help me set up my compute shader? Thank you in advance for any replies and interest!

• Hi, I have a procedurally generated tiled landscape, and want to apply 'regional' information to the tiles at runtime; so forests, roads - pretty much anything that could be defined as a 'region'. Up until now I've done this by creating a mesh defining the 'region' on the CPU and interrogating that mesh during landscape tile generation; I then add regional information to the landscape tile via a series of vertex boolean properties. For each landscape tile vertex I do a ray-mesh intersect into the 'region' mesh and get some value from that mesh.
For example, my landscape vertex could be:

```csharp
struct Vtx
{
    Vector3 Position;

    bool IsForest;
    bool IsRoad;
    bool IsRiver;
}
```

I would then have a region mesh defining a forest, another defining rivers, etc. When generating my landscape vertices I do an intersect check against the various 'region' meshes to see what kind of landscape each vertex falls within. My ray-mesh intersect code isn't particularly fast, and there may be many 'region' meshes to interrogate, so I want to see if I can move this work onto the GPU, so that when I create a set of tile vertices I can call a compute/other shader, pass the region mesh to it, and interrogate that mesh inside the shader. The output would be a buffer where all the landscape vertex boolean values have been filled in.

The way I see this being done is to pass two RWStructuredBuffers to a compute shader, one containing the landscape vertices and the other containing some definition of the region mesh (possibly the region might consist of two buffers containing a set of positions and indexes). The compute shader would do a ray-mesh intersect check on each landscape vertex and would set the boolean flags on a corresponding output buffer. In theory this is a parallelisable operation (no one landscape vertex relies on another for its values), but I've not seen any examples of a ray-mesh intersect being done in a compute shader; so I'm wondering if my approach is wrong, and whether the reason I've not seen any examples is that no-one does it that way.

If anyone can comment on:

• Is this a really bad idea?
• If no-one does it that way, does everyone use a texture to define this kind of 'region' information? If so, given I've only got a small number of possible types of region, what texture format would be appropriate, as 32 bits seems really wasteful?
• Is there another common approach to adding information to a basic height-mapped tile system that would perform well for runtime-generated tiles?
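Since the regions described above are essentially flat shapes in the X-Z plane, one CPU-side alternative to a full ray-mesh intersect is a 2D point-in-polygon test against each region's outline. A minimal even-odd ray-casting sketch (hypothetical helper, not the poster's code):

```python
def point_in_polygon(px, pz, polygon):
    """Even-odd ray-casting test in the X-Z plane: cast a ray in +X and count
    how many polygon edges it crosses. An odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, z1 = polygon[i]
        x2, z2 = polygon[(i + 1) % n]
        if (z1 > pz) != (z2 > pz):
            # X coordinate where this edge crosses the horizontal line z = pz
            x_cross = x1 + (pz - z1) * (x2 - x1) / (z2 - z1)
            if x_cross > px:
                inside = not inside
    return inside

forest = [(0, 0), (10, 0), (10, 10), (0, 10)]   # a square 'forest' region outline
assert point_in_polygon(5, 5, forest) is True
assert point_in_polygon(15, 5, forest) is False
```

Each landscape vertex would then set its `IsForest`-style flag from one such test per region outline; it is the same idea as the ray cast in the question, collapsed to 2D.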
Thanks
Phillip

• By GytisDev
Hello, without going into any details I am looking for any articles or blogs or advice about city-building and RTS games in general. I tried to search for these on my own, but would like to see your input also. I want to make a very simple version of a game like Banished or Kingdoms and Castles, where I would be able to place two types of buildings, make farms and cut trees for resources while controlling a single worker. I have some problems understanding how these games work in the back-end: how various data can be stored about the map and objects, how grids work, implementing a work system (like a little cube (human) walks to a tree and cuts it) and so on. I am also pretty confident in my programming capabilities for such a game. Sorry if I make any mistakes, English is not my native language.
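For the back-end questions above (storing map data, grids, a simple work system), a toy sketch can make the ideas concrete. All names here are hypothetical illustrations, not taken from any of the games mentioned:

```python
# Minimal sketch: a 2D tile grid stored as nested lists, plus a one-step
# "work system" where a worker walks to the nearest tree and cuts it.
TREE, GRASS = "tree", "grass"

grid = [[GRASS for _ in range(8)] for _ in range(8)]  # 8x8 map, row-major
grid[2][5] = TREE

def find_nearest(pos, kind):
    """Brute-force search for the nearest tile of a given kind (Manhattan distance)."""
    candidates = [(abs(r - pos[0]) + abs(c - pos[1]), (r, c))
                  for r, row in enumerate(grid)
                  for c, tile in enumerate(row) if tile == kind]
    return min(candidates)[1] if candidates else None

def chop(worker, wood):
    """Walk the worker to the nearest tree, remove it from the map, gain 1 wood."""
    target = find_nearest(worker, TREE)
    if target is None:
        return worker, wood
    r, c = target
    grid[r][c] = GRASS          # the tree is removed from the map
    return (r, c), wood + 1     # the worker now stands on that tile

worker, wood = (0, 0), 0
worker, wood = chop(worker, wood)
print(worker, wood)  # -> (2, 5) 1
```

A real game would replace the brute-force search with pathfinding and split `chop` into queued tasks, but the storage question is the same: the map is just a grid of tile states that jobs read and mutate.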
# Zinc: Protons, Neutrons and Electrons

## Characteristics and Properties

Under standard conditions zinc is a hard and brittle metal with a bluish-white color. In some respects zinc is chemically similar to magnesium: both elements exhibit only one normal oxidation state (+2), and the Zn2+ and Mg2+ ions are of similar size. The atomic radius of a zinc atom is 122 pm (covalent radius); note that atoms lack a well-defined outer boundary, so there are various non-equivalent definitions of atomic radius.

## Counting the Particles

The total number of protons in the nucleus is called the atomic number of the atom and is given the symbol Z. Since zinc's atomic number is 30, a zinc atom has 30 protons, and the nucleus of the most abundant isotope consists of 30 protons and 34 neutrons. For all atoms with no charge, the number of electrons is equal to the number of protons:

number of protons = number of electrons = 30

To find the number of neutrons, subtract the atomic number from the mass number; whichever two of the three quantities you know, you can recover the third. For ions, adjust the electron count by the charge. Example: Zn2+ means the ion has 2 more protons than electrons, so it has 30 protons and 30 − 2 = 28 electrons.

## Key Data

- Symbol: Zn
- Atomic number: 30
- Atomic mass: 65.39 atomic mass units
- Number of protons: 30
- Number of neutrons (zinc-65): 35
- Number of electrons: 30
- Melting point: 419.58 °C
- Boiling point: 907.0 °C
- Density: 7.133 g/cm³
- Normal phase: solid
- Family: transition metals
- Period: 4

The most common isotope is Zn-64. Each electron is influenced by the electric fields produced by the positive nuclear charge and the other (Z − 1) negative electrons in the atom.

## Additional Practice

Find the number of protons, neutrons and electrons in: 1 atom of iodine-127; 1 atom of hydrogen-3; a ²⁷Al³⁺ ion; a ¹⁹F⁻ ion. Then try these on your own: ⁷⁸Se²⁻ and ³⁹K⁺.
## Atomic Number and Mass Number

The atomic number (the number at the top of the element's entry in the periodic table) is the number of protons and, for a neutral atom, also the number of electrons. The mass number (the number at the bottom) is the number of protons and neutrons added together. Zinc is a chemical element with atomic number 30, which means there are 30 protons in its nucleus; since the number of electrons and their arrangement are responsible for the chemical behavior of atoms, the atomic number identifies the various chemical elements. The total electrical charge of the nucleus is therefore +Ze, where e (the elementary charge) equals 1.602 × 10⁻¹⁹ coulombs.

## Finding the Number of Neutrons

First find the mass number of the isotope: the number of neutrons is calculated from the mass number, which is the numerical value written after the dash in a name such as zinc-65. The answer depends on which isotope of zinc you are considering; Zn-65 means the atomic number plus the number of neutrons equals 65, so:

neutrons = 65 − 30 = 35

Zinc atoms have 30 electrons and 30 protons, with 34 neutrons in the most abundant isotope (Zn-64); Zn-66 has 36 neutrons. Compare copper: an atom of Cu-63 has 34 neutrons and an atom of Cu-65 has 36 neutrons.

## Isotopes of Zinc

For stable elements, there is usually a variety of stable isotopes. Naturally occurring zinc (₃₀Zn) is composed of the 5 stable isotopes ⁶⁴Zn, ⁶⁶Zn, ⁶⁷Zn, ⁶⁸Zn, and ⁷⁰Zn, with ⁶⁴Zn being the most abundant (48.6% natural abundance). Twenty-five radioisotopes have been characterised, the most stable being ⁶⁵Zn with a half-life of 244.26 days, and ⁷²Zn with a half-life of 46.5 hours.

The first ionization energy of zinc is 9.3941 eV, and the electronegativity of zinc is χ = 1.65. Zinc becomes less brittle and more malleable above 100 °C, and has relatively low melting and boiling points for a metal.
## The Nucleus

The atomic mass is carried by the atomic nucleus, which occupies only about 10⁻¹² of the total volume of the atom or less, but contains all the positive charge and at least 99.95% of the total mass of the atom. The atomic mass or relative isotopic mass refers to the mass of a single particle, and is therefore tied to one specific isotope of an element. The difference between the neutron number and the atomic number is known as the neutron excess: D = N − Z = A − 2Z.

## Electrons and Electron Configuration

The number of electrons in each element's electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. The configuration of these electrons follows from the principles of quantum mechanics; in the periodic table, the elements are listed in order of increasing atomic number Z. The electron configuration of zinc is [Ar] 3d¹⁰ 4s². Zinc crystallises in a hexagonal structure; its melting point of 419.58 °C corresponds to 692.73 K, and its boiling point of 907.0 °C to 1180.15 K.

In chemistry and atomic physics, the electron affinity of an atom or molecule is defined as the change in energy (in kJ/mole) of a neutral atom or molecule (in the gaseous phase) when an electron is added to the atom to form a negative ion:

X + e⁻ → X⁻ + energy        Affinity = −ΔH

In other words, it can be expressed as the neutral atom's likelihood of gaining an electron.

To find the number of protons in zinc, first locate the element on the periodic table; next, find the atomic number, which is located above the element's symbol.
## Worked Example: Zinc-65

How many protons, electrons, and neutrons are in an atom of zinc-65? The number of protons in an atom is not changeable (it identifies the element), while electrons can be added or removed to give the atom a charge. The mass number, 65, is the sum of the protons and the neutrons:

number of protons = 30, number of electrons = 30, number of neutrons = 65 − 30 = 35

A single, neutral atom of zinc-65 therefore has 30 protons, 35 neutrons, and 30 electrons. Mass numbers of typical isotopes of zinc are 64; 66–68; 70. (Compare: the total number of electrons in an atom of Cu-65 is 29, because copper has atomic number 29, which simply means that an atom of copper has 29 protons and 29 electrons.)

Further worked examples, including ions:

- ³⁷Cl: protons = 17; neutrons = 37 − 17 = 20; electrons = 17 − 0 = 17
- ³²S²⁻: protons = 16 (the atomic number is not given, but can be found on the periodic table); neutrons = 32 − 16 = 16; electrons = 16 − (−2) = 18

## Ionization Energy, Electronegativity and Density

Ionization energy, also called ionization potential, is the energy necessary to remove an electron from the neutral atom:

X + energy → X⁺ + e⁻

where X is any atom or molecule capable of being ionized, X⁺ is that atom or molecule with an electron removed (a positive ion), and e⁻ is the removed electron. Ionization energies measure the tendency of a neutral atom to resist the loss of electrons; electron affinities are more difficult to measure than ionization energies.

Electronegativity, symbol χ, is a chemical property that describes the tendency of an atom to attract electrons towards itself. For this purpose, a dimensionless quantity, the Pauling scale, is the one most commonly used.

Density is defined as mass per unit volume; it is an intensive property, mathematically defined as mass divided by volume. Typical densities of various substances are given at atmospheric pressure.
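The counting rules on this page reduce to two subtractions and a difference. A small illustrative helper (a sketch; the function name is hypothetical):

```python
def particle_counts(atomic_number, mass_number, charge=0):
    """Apply the counting rules above:
    protons   = atomic number
    neutrons  = mass number - atomic number
    electrons = protons - charge (a positive ion has lost electrons)."""
    protons = atomic_number
    neutrons = mass_number - atomic_number
    electrons = atomic_number - charge
    return protons, neutrons, electrons

print(particle_counts(30, 65))       # zinc-65            -> (30, 35, 30)
print(particle_counts(30, 64, +2))   # Zn-64 2+ ion       -> (30, 34, 28)
print(particle_counts(16, 32, -2))   # sulfur-32 2- ion   -> (16, 16, 18)
```

The three calls reproduce the zinc-65, Zn2+, and ³²S²⁻ examples worked out in the text.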
# Source code for cvxpy.atoms.lambda_min

```python
"""
This file is part of CVXPY.

CVXPY is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

CVXPY is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with CVXPY.  If not, see <http://www.gnu.org/licenses/>.
"""

from cvxpy.expressions.expression import Expression
from cvxpy.atoms.lambda_max import lambda_max


def lambda_min(X):
    """Minimum eigenvalue; :math:`\lambda_{\min}(A)`.
    """
    X = Expression.cast_to_const(X)
    return -lambda_max(-X)
```
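The identity the atom relies on, λ_min(X) = −λ_max(−X), holds because negating a symmetric matrix negates each of its eigenvalues. A quick numerical check with NumPy (a sketch, independent of CVXPY):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(5, 5))
X = (A + A.T) / 2                      # symmetric matrix, so eigenvalues are real

eigenvalues = np.linalg.eigvalsh(X)    # sorted ascending for symmetric input
lam_min = eigenvalues[0]
lam_max_of_neg = np.linalg.eigvalsh(-X)[-1]

# The eigenvalues of -X are the negated eigenvalues of X,
# hence the minimum eigenvalue of X equals -(max eigenvalue of -X).
assert np.isclose(lam_min, -lam_max_of_neg)
```

Expressing `lambda_min` this way lets CVXPY reuse the concavity/curvature machinery already implemented for `lambda_max` instead of duplicating it.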
# Two different results on solving a differential equation by variation of parameters vs. undetermined coefficients

I'm trying to solve: $$y'' - 4xy' + (4x^2 - 1)y = -3e^{x^2} \sin(2x)$$ which has the general form $$y'' + P(x)y' + Q(x)y = R(x)$$ I reduced it to normal form using the substitution $$y = u\, e^{-\frac{1}{2}\int P(x)\,dx}$$ which gives $$u'' + I(x)u = S(x)$$ where $$I(x) = Q - \frac{P'}{2} - \frac{P^2}{4}$$ and $$S(x) = R(x)\, e^{\frac{1}{2}\int P(x)\,dx}$$ I end up with: $$u'' + u = -3\sin(2x)$$ I solve first for the complementary function of the homogeneous equation: $$u_{C.F} = A \cos(x) + B \sin(x)$$ Then, to find the particular solution, I first decided to use variation of parameters: $$u_{P.I} = v_1y_1 + v_2y_2$$ where $$y_1$$ and $$y_2$$ are solutions of the homogeneous equation; I let $$y_1 = \cos(x)$$ and $$y_2 = \sin(x)$$. I compute the Wronskian of the two functions, since they are linearly independent solutions: $$W(y_1(x),y_2(x)) = \left|\begin{matrix}y_1 & y_2 \\ y_1' & y_2' \end{matrix}\right| = \left|\begin{matrix}\cos(x) & \sin(x) \\ - \sin(x) & \cos(x) \end{matrix}\right| = 1$$ $$(v_2)' = \frac {-y_2 S(x)}{W} = \frac {- \sin(x) (-3 \sin(2x))}{1} = 6 \sin^2(x) \cos(x)$$ $$(v_1)' = \frac {y_1 S(x)}{W} = \frac { \cos(x) (-3 \sin(2x))}{1} = -6 \cos^2(x) \sin(x)$$ Integrating, I get $$v_1 = 2 \cos^3(x), \qquad v_2 = 2\sin^3(x)$$ Substituting into $$u_{P.I} = v_1y_1 + v_2y_2$$ gives $$u_{P.I} = 2 [ \cos^4(x) + \sin^4(x)]$$ To recover $$y_{P.I}$$: $$y_{P.I} = u_{P.I}\, e^{-\frac{1}{2}\int P(x)\,dx}$$ Therefore, $$y_{P.I} = 2e^{x^2} [ \cos^4(x) + \sin^4(x) ]$$ <------ This is the solution using variation of parameters. Now, when I try undetermined coefficients on $$u'' + u = -3 \sin(2x)$$: let $$u_{P.I} = A \sin(2x) + B \cos(2x)$$ Skipping some steps, we arrive at $$A = 1$$ and $$B = 0$$, so $$u_{P.I} = \sin(2x)$$ Therefore, $$y_{P.I} = e^{x^2} \sin(2x)$$ <------ These are two different particular solutions; where's the problem?

Ok, I got it: when calculating $$(v_2)'$$ and $$(v_1)'$$, I had the formulas swapped. They should be: $$(v_2)' = \frac {y_1 S(x)}{W} = \frac { \cos(x) (-3 \sin(2x))}{1} = -6 \cos^2(x) \sin(x)$$ $$(v_1)' = \frac {-y_2 S(x)}{W} = \frac { -\sin(x) (-3 \sin(2x))}{1} = 6 \sin^2(x) \cos(x)$$ Then $$v_2 = 2 \cos^3(x)$$ and $$v_1 = 2 \sin^3(x)$$, so $$u_{P.I} = v_1 y_1 + v_2 y_2 = 2 \sin^3(x) \cos(x) + 2 \cos^3(x) \sin(x) = 2 \sin(x) \cos(x) [ \sin^2(x) + \cos^2(x) ] = 2 \sin(x) \cos(x) = \sin(2x)$$ which is identical to the result from the method of undetermined coefficients.
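The corrected particular solution can also be verified symbolically. A quick check with SymPy (assuming it is available) that $y = e^{x^2}\sin(2x)$ really satisfies the original ODE:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.exp(x**2) * sp.sin(2*x)  # the particular solution found above

# Plug y into the left-hand side y'' - 4x y' + (4x^2 - 1) y
lhs = sp.diff(y, x, 2) - 4*x*sp.diff(y, x) + (4*x**2 - 1)*y

# The residual against the right-hand side -3 e^{x^2} sin(2x) should vanish
residual = sp.simplify(lhs - (-3*sp.exp(x**2)*sp.sin(2*x)))

assert residual == 0  # y is indeed a particular solution
```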
D.6 libxm Copyright (c) 2014-2018 Ilya Kaliman Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. BASIS Specifies the electronic basis sets to be used. TYPE: STRING DEFAULT: No default basis set OPTIONS: General, Gen User defined ($basis keyword required). Symbol Use standard basis sets as per Chapter 8. Mixed Use a mixture of basis sets (see Chapter 8). RECOMMENDATION: Consult literature and reviews to aid your selection. CONCENTRIC_REF_BASIS Specify the projection basis (PB) in the concentric localization procedure TYPE: STRING DEFAULT: NONE OPTIONS: Parsed in the same way as BASIS; if unspecified, the working basis (WB) will be used as PB. RECOMMENDATION: WB is usually a good choice; a smaller basis can be chosen with caution to further reduce the computational cost. CONCENTRIC_VIRTS_ZETA Specify the size of the truncated virtual space TYPE: INTEGER DEFAULT: 2 OPTIONS: $m$ The total number of the CL-truncated virtuals is $m\times n_{\text{occ}}^{\text{active}}$ RECOMMENDATION: Use the default; set it to a larger value if higher accuracy is requested. CONCENTRIC_VIRTS Use the concentric localization (CL) scheme to truncate the virtual space TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE Use the CL scheme to truncate the virtual space FALSE Leave the virtual space untruncated RECOMMENDATION: Use CL truncation for WFT-in-DFT calculations.
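As a hedged illustration, the truncation variables above might be combined with the embedding-related variables described later in this appendix (ENV_METHOD, GEN_SCFMAN_EMBED) in a `$rem` input along these lines; the method and basis values are placeholders, not recommendations from this manual:

```
$rem
   METHOD                 hf        ! high-level method for the active region (placeholder)
   ENV_METHOD             pbe       ! low-level mean-field method for the environment (placeholder)
   BASIS                  cc-pvdz   ! working basis (placeholder)
   GEN_SCFMAN_EMBED       true     ! run a projector-based embedding calculation
   CONCENTRIC_VIRTS       true     ! truncate the virtual space via concentric localization
   CONCENTRIC_VIRTS_ZETA  2        ! keep 2 x n_occ(active) truncated virtuals (the default)
$end
```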
DIRECT_DIAG Perform direct diagonalization to obtain all the NEO excitation energies. TYPE: INTEGER DEFAULT: 0 Use Davidson algorithm. OPTIONS: 1 Do the direct diagonalization. 0 Use Davidson algorithm. RECOMMENDATION: Only use this option when Davidson solutions are not stable. EDA_ALIGN_FRGM_SPIN Turn on the fragment spin alignment procedure TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not perform the spin alignment procedure (turned on by default in unrestricted cases) 1 Perform fragment spin alignment; use GDM for the polarization step preceding the MOM calculations 2 Perform fragment spin alignment; use GDM and perform stability analysis for the polarization step RECOMMENDATION: Use 1 or 2 when the radical is of highly symmetric structure. EDA_NOCV Perform the NOCV analysis and plot the significant NOCVs TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Do not perform NOCV analysis TRUE Perform NOCV analysis RECOMMENDATION: None EDA_PLOT_DIFF_DEN Plot changes in electron density due to POL and CT TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Do not make EDD plots TRUE Make EDD plots RECOMMENDATION: None EIGSLV_METH Control the method for solving the ALMO-CIS eigen-equation TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Explicitly build the Hamiltonian then diagonalize (full-spectrum). 1 Use the Davidson method (currently only available for restricted cases). RECOMMENDATION: None ENV_METHOD Specify the low-level theory in a projector-based embedding calculation TYPE: STRING DEFAULT: NONE OPTIONS: Parsed in the same way as the $rem variable METHOD. RECOMMENDATION: A mean-field method (pure or hybrid density functional) should be chosen. EX_EDA Perform an ALMO-EDA calculation with one or more fragments excited. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE Perform EDA with excited-state molecule(s) taken into account.
FALSE RECOMMENDATION: None FODFT_DONOR Specify the donor fragment in FODFT calculation TYPE: INTEGER DEFAULT: 1 OPTIONS: 1 First fragment as donor 2 Second fragment as donor RECOMMENDATION: With FODFT_METHOD = 1, the charged fragment needs to be the donor fragment FODFT_METHOD Specify the flavor of FODFT method TYPE: INTEGER DEFAULT: 1 OPTIONS: 1 FODFT($\mathrm{2n-1}$)@$D^{+}A$ (HT) / FODFT($\mathrm{2n+1}$)@$D^{-}A$ (ET) 2 FODFT($\mathrm{2n}$)@$DA$ 3 FODFT($\mathrm{2n-1}$)@$DA$ (HT) / FODFT($\mathrm{2n+1}$)@$D^{-}A^{-}$ (ET) RECOMMENDATION: The default approach shows the best overall performance. FRAG_DIABAT_DOHT Specify whether hole or electron transfer is considered TYPE: BOOLEAN DEFAULT: TRUE OPTIONS: TRUE Do hole transfer FALSE Do electron transfer RECOMMENDATION: Needs to be specified for POD and FODFT calculations FRAG_DIABAT_METHOD Specify fragment based diabatization method TYPE: STRING DEFAULT: NONE OPTIONS: ALMO_MSDFT Perform ALMO(MSDFT) diabatization POD Perform projection operator diabatization ESID The energy-split-in-dimer method,Valeev:2006 which is equivalent to the FMO approach introduced in Sec. 10.15.2.5 FODFT Calculate electronic coupling using fragment orbital DFT RECOMMENDATION: None FRAG_DIABAT_PRINT Specify the print level for fragment based diabatization calculations TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 No additional prints 1 Currently it can be used to print out the entire $\bar{\mathbf{F}}_{da}$ in POD RECOMMENDATION: Use 1 if electron/hole transfer between multiple orbital pairs needs to be considered in POD GAP_TOL HOMO/LUMO gap threshold to control whether to shift the diagonal elements of the virtual block of the Fock matrix or not. If the HOMO/LUMO gap is less than this threshold, at a given SCF iteration, then the diagonal elements of the virtual block of the Fock matrix are shifted. Otherwise no level-shift is applied. TYPE: INTEGER DEFAULT: 300 OPTIONS: User-defined RECOMMENDATION: The input number must be an integer between 0 and 9999.
The actual threshold is equal to GAP_TOL divided by 1000, in Hartree. The default value is provided to make the level-shifting calculation run and should not be taken as optimal for any specific problem. Trial and error may be required to find the optimal threshold. Larger values of GAP_TOL generally lead to level-shifting being used more frequently during the SCF convergence process. GEN_SCFMAN_EMBED Run a projector-based embedding calculation using the implementation based on GEN_SCFMAN TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE Perform a projector-based embedding calculation FALSE Do not perform an embedding calculation RECOMMENDATION: None JOBTYPE Specifies the calculation. TYPE: STRING DEFAULT: Default is single-point, which should be changed to one of the following options. OPTIONS: OPT Equilibrium structure optimization. TS Transition structure optimization is currently not available in NEO. RPATH Intrinsic reaction path following is currently not available in NEO. RECOMMENDATION: Application-dependent. Always use SYM_IGNORE = 1 with geometry optimization. LEVEL_SHIFT Determine whether to invoke level-shifting or not together with DIIS. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE, FALSE RECOMMENDATION: Use TRUE if level-shifting is necessary to accelerate SCF convergence. LOCAL_CIS Invoke ALMO-CIS/ALMO-CIS+CT. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Regular CIS 1 ALMO-CIS/ALMO-CIS+CT without RI (slow) 2 ALMO-CIS/ALMO-CIS+CT with RI RECOMMENDATION: 2 if ALMO-CIS is desired. LSHIFT Constant shift applied to all diagonal elements of the virtual block of the Fock matrix. TYPE: INTEGER DEFAULT: 200 OPTIONS: User-defined RECOMMENDATION: The input number must be an integer between 0 and 9999. The actual shift is equal to LSHIFT divided by 1000, in Hartree. The default value is provided to make the level-shifting calculation run and should not be taken as optimal for any specific problem. Trial and error may be required to find the optimal threshold.
Larger level shifts make the SCF process more stable but also slow down convergence, thus requiring more SCF cycles. MAX_LS_CYCLES The maximum number of DIIS iterations with level-shifting when SCF_ALGORITHM = LS_DIIS. See also THRESH_LS_SWITCH. TYPE: INTEGER DEFAULT: MAX_SCF_CYCLES OPTIONS: 1 Only a single DIIS step with level-shifting, and no level-shifting for the remaining DIIS steps. $n$ $n$ DIIS iterations with level-shifting before turning level-shifting off. RECOMMENDATION: None MAX_SCF_CYCLES Controls the maximum number of SCF iterations permitted. TYPE: INTEGER DEFAULT: 50 OPTIONS: $n$ $n>0$ User-selected. RECOMMENDATION: Increase for slowly converging systems such as those containing transition metals. METHOD Specifies the exchange-correlation functional. TYPE: STRING DEFAULT: No default OPTIONS: NAME Use METHOD = NAME, where NAME is one of the following: HF for Hartree-Fock theory; one of the DFT methods listed in Section 5.3.4. RECOMMENDATION: In general, consult the literature to guide your selection. Our recommendations for DFT are indicated in bold in Section 5.3.4. MOM_METHOD Determines the target orbitals with which to maximize the overlap on each SCF cycle. TYPE: STRING DEFAULT: MOM OPTIONS: MOM Maximize overlap with the orbitals from the previous SCF cycle. IMOM Maximize overlap with the initial guess orbitals. RECOMMENDATION: If appropriate guess orbitals can be obtained, then IMOM can provide more reliable convergence to the desired solution.Barca:2018 MSDFT_METHOD Specify the scheme for ALMO(MSDFT) TYPE: INTEGER DEFAULT: 2 OPTIONS: 1 The original MSDFT scheme [Eq. (10.110)] 2 The ALMO(MSDFT2) approach [Eq. (10.113)] RECOMMENDATION: Use the default method. Note that the method will be automatically reset to 1 if a meta-GGA functional is requested.
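Putting the level-shifting controls together, a sketch of a `$rem` block for a hard-to-converge SCF might look like the following; the values shown are the documented defaults or illustrative choices, not tuned recommendations:

```
$rem
   SCF_ALGORITHM     ls_diis  ! DIIS with level-shifting (see MAX_LS_CYCLES / THRESH_LS_SWITCH)
   LEVEL_SHIFT       true     ! invoke level-shifting together with DIIS
   LSHIFT            200      ! shift the virtual diagonal by 200/1000 = 0.2 hartree
   GAP_TOL           300      ! shift only while the HOMO/LUMO gap < 0.3 hartree
   MAX_LS_CYCLES     10       ! at most 10 level-shifted DIIS iterations (illustrative)
   THRESH_LS_SWITCH  4        ! turn shifting off once the DIIS error drops below 10^-4
$end
```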
MSDFT_PINV_THRESH Set the threshold for pseudo-inverse of the interstate overlap TYPE: INTEGER DEFAULT: 4 OPTIONS: $n$ Set the threshold to 10${}^{-n}$ RECOMMENDATION: Use the default value NEO_BASIS_LIN_DEP_THRESH This keyword is used to set the linear dependency threshold for nuclear basis sets. It is defined as $10^{-\mathrm{NEO\_BASIS\_LIN\_DEP\_THRESH}}$. TYPE: DOUBLE DEFAULT: 5.0 OPTIONS: User-defined RECOMMENDATION: No recommendation. NEO_EPC Specifies the electron-proton correlation functional. TYPE: STRING DEFAULT: No default OPTIONS: NAME Use NEO_EPC = NAME, where NAME can be either epc172 or epc19. RECOMMENDATION: Consult the NEO literature to guide your selection. NEO_E_CONV Energy convergence criteria in the NEO-SCF calculations so that the difference in energy between electronic and protonic iterations is less than $10^{-\mathrm{NEO\_E\_CONV}}$. TYPE: INTEGER DEFAULT: 8 OPTIONS: User-defined RECOMMENDATION: Tighter criteria for geometry optimization are recommended. NEO_ISOTOPE Enable calculations of different types of isotopes. Only one type of isotope is allowed at present. TYPE: INTEGER DEFAULT: 1 Default is the proton isotope. OPTIONS: 1 This NEO calculation uses the proton isotope. 2 This NEO calculation uses the deuterium isotope. 3 This NEO calculation uses the tritium isotope. RECOMMENDATION: Refer to the NEO literature for the best performance on the isotope effects calculations. NEO_N_SCF_CONVERGENCE NEO-SCF is considered converged when the nuclear wave function error is less than $10^{-\mathrm{NEO\_N\_SCF\_CONVERGENCE}}$. TYPE: INTEGER DEFAULT: 7 OPTIONS: User-defined RECOMMENDATION: None. NEO_PURECART This keyword is used to specify Cartesian or spherical Gaussians for nuclear basis functions. TYPE: INTEGER DEFAULT: 2222 OPTIONS: User-defined RECOMMENDATION: The default is Cartesian Gaussians. 1111 would define spherical Gaussians similar to keyword PURECART.
Current NEO calculations do not support Cartesian electronic or nuclear basis sets with h angular momentum. NEO_VPP Remove $J-K$ terms from the nuclear Fock matrix and the corresponding kernel terms for NEO excited state methods for the case of one quantum proton. TYPE: INTEGER DEFAULT: 0 OPTIONS: 1 Enable this option. 0 Disable this option. RECOMMENDATION: Use this only in the case of one quantum hydrogen. NEO Enable a NEO-SCF calculation. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE Enable a NEO-SCF calculation. FALSE Disable a NEO-SCF calculation. RECOMMENDATION: Set to TRUE if desired. NN_THRESH The distance cutoff for neighboring fragments (between which CT is enabled). TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not include interfragment transitions (ALMO-CIS). $n$ Include interfragment excitations between pairs of fragments whose separation is smaller than $n$ Bohr (ALMO-CIS+CT). RECOMMENDATION: None RR_NO_NORMALISE Controls whether the frequency job calculates resonance-Raman intensities TYPE: LOGICAL DEFAULT: False OPTIONS: False Normalise RR intensities True Don’t normalise RR intensities RECOMMENDATION: False SCF_ALGORITHM Algorithm used for converging the SCF. TYPE: STRING DEFAULT: DIIS Pulay DIIS. OPTIONS: DIIS Pulay DIIS. DM Direct minimizer. DIIS_DM Use DIIS initially, switching to direct minimizer for later iterations (see THRESH_DIIS_SWITCH, MAX_DIIS_CYCLES). DIIS_GDM Use DIIS and then later switch to geometric direct minimization (see THRESH_DIIS_SWITCH, MAX_DIIS_CYCLES). GDM Geometric Direct Minimization. RCA Relaxed constraint algorithm RCA_DIIS Use RCA initially, switching to DIIS for later iterations (see THRESH_RCA_SWITCH and MAX_RCA_CYCLES described later in this chapter) ROOTHAAN Roothaan repeated diagonalization. RECOMMENDATION: In the NEO methods, the GDM procedure is recommended. SCF_CONVERGENCE NEO-SCF is considered converged when the electronic wave function error is less than $10^{-\mathrm{SCF\_CONVERGENCE}}$.
Adjust the value of THRESH at the same time. (Starting with Q-Chem 3.0, the DIIS error is measured by the maximum error rather than the RMS error as in earlier versions.) TYPE: INTEGER DEFAULT: 5 For single point energy calculations. 8 For geometry optimizations. OPTIONS: User-defined RECOMMENDATION: None. SET_ROOTS Sets the number of NEO excited state roots to find by Davidson or display the number of roots obtained by direct diagonalization. TYPE: INTEGER DEFAULT: 0 Do not look for any excited states. OPTIONS: $n$ $n>0$ Looks for $n$ NEO excited states. RECOMMENDATION: None SET_RPA Do a NEO-TDDFT or NEO-TDHF calculation. TYPE: LOGICAL/INTEGER DEFAULT: FALSE OPTIONS: FALSE Do a NEO-TDA or NEO-CIS calculation. TRUE Do a NEO-TDDFT or NEO-TDHF calculation. RECOMMENDATION: Consult the NEO literature to guide your selection. SPADE_PARTITION Use the SPADE approach to determine the initial set of embedded (active) orbitals TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE Use SPADE to partition the occupied space FALSE Use the Pipek-Mezey localization + Mulliken population to assign occupied orbitals RECOMMENDATION: Use SPADE if a significant gap in the spectrum of singular values can be detected. THRESH_LS_SWITCH The threshold for turning off level-shifting in DIIS is $10^{-\mbox{{\small THRESH\_LS\_SWITCH}}}$ when SCF_ALGORITHM is set to LS_DIIS. See also MAX_LS_CYCLES. TYPE: INTEGER DEFAULT: 4 OPTIONS: User-defined. RECOMMENDATION: None UNRESTRICTED Controls the use of restricted or unrestricted orbitals. TYPE: LOGICAL DEFAULT: FALSE Closed-shell systems. TRUE Open-shell systems. OPTIONS: FALSE Constrain the spatial part of the alpha and beta orbitals to be the same. TRUE Do not constrain the spatial part of the alpha and beta orbitals. RECOMMENDATION: The ROHF method is not available.
Note that for unrestricted calculations on systems with an even number of electrons it is usually necessary to break $\alpha$/$\beta$ symmetry in the initial guess, by using SCF_GUESS_MIX or providing $occupied information (see Section 4.4 on initial guesses). XC_GRID Specifies the type of grid to use for DFT calculations. TYPE: INTEGER DEFAULT: Functional-dependent; see Table 5.3. OPTIONS: 0 Use SG-0 for H, C, N, and O; SG-1 for all other atoms. $n$ Use SG-$n$ for all atoms, $n=1,2$, or 3 $XY$ A string of two six-digit integers $X$ and $Y$, where $X$ is the number of radial points and $Y$ is the number of Lebedev angular points, which must be an allowed value from Table 5.2 in Section 5.5. $-XY$ Similar format for Gauss-Legendre grids, with the six-digit integer $X$ corresponding to the number of radial points and the six-digit integer $Y$ providing the number of Gauss-Legendre angular points, $Y=2N^{2}$. RECOMMENDATION: Use the default unless numerical integration problems arise. Larger grids may be required for optimization and frequency calculations. HARM_FORCE Sets the force constant for the harmonic confiner TYPE: INTEGER DEFAULT: No default OPTIONS: User defined RECOMMENDATION: None HARM_OPT Controls whether the job uses confining potentials TYPE: LOGICAL DEFAULT: False OPTIONS: False Do not use the potential True Use the potential RECOMMENDATION: False HOATOMS Controls the number of confined atoms TYPE: INTEGER DEFAULT: No default OPTIONS: User defined RECOMMENDATION: None NOCI_DETGEN Control how the multiple determinants for NOCI are created. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Use only the initial reference determinants. 1 Generate CIS excitations from each reference determinant. 2 Generate all FCI excitations from each reference determinant. 3 Generate $n$ multiple determinants using SCF metadynamics, where $n$ is given by SCF_SAVEMINIMA = $n$.
4 Generate all CAS excitations from each reference determinant, where the active orbitals are specified using $active_orbitals. RECOMMENDATION: By default, these multiple determinants are optimized at the SCF level before running NOCI. This behaviour can be turned off by specifying SKIP_SCFMAN = TRUE. NOCI_NEIGVAL The number of NOCI eigenvalues to be printed. TYPE: INTEGER DEFAULT: 10 OPTIONS: $n$ Positive integer RECOMMENDATION: Increase this to print progressively higher NOCI energies. NOCI_REFGEN Control how the initial reference determinants are created. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Generate initial reference determinant from a single SCF calculation. 1 Read (multiple) initial reference determinants from a previous calculation. RECOMMENDATION: The specific reference determinants to be read from a previous calculation can be indicated using the $scf_read keyword. SCF_EESCALE_ARG Control the phase angle of the complex $\lambda$ electron-electron scaling. TYPE: INTEGER DEFAULT: $00000$ meaning $0.0000$ OPTIONS: $abcde$ corresponding to $a.bcde$ RECOMMENDATION: A complex phase angle of $00500$, meaning $0.0500$, is usually sufficient to follow a solution safely past the Coulson-Fischer point and onto its complex holomorphic counterpart. SCF_EESCALE_MAG Control the magnitude of the $\lambda$ electron-electron scaling. TYPE: INTEGER DEFAULT: $10000$ meaning $1.0000$ OPTIONS: $abcde$ corresponding to $a.bcde$ RECOMMENDATION: For holomorphic Hartree-Fock orbitals, only the magnitude of the input is used, while for real Hartree-Fock orbitals, the input sign indicates the sign of $\lambda$. SCF_HOLOMORPHIC Turn on the use of holomorphic Hartree-Fock orbitals. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE Holomorphic Hartree-Fock is turned off TRUE Holomorphic Hartree-Fock is turned on. RECOMMENDATION: If TRUE, holomorphic Hartree-Fock complex orbital coefficients will always be used. If FALSE, but COMPLEX = TRUE, complex Hermitian orbitals will be used.
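The NOCI controls above might be combined as in the following sketch, together with the USE_LIBNOCI switch described below; all values are illustrative, and SCF_SAVEMINIMA is only referenced, not documented, in this excerpt:

```
$rem
   USE_LIBNOCI     true  ! run NOCI through the LIBNOCI implementation
   NOCI_REFGEN     0     ! reference determinant from a single SCF calculation
   NOCI_DETGEN     3     ! build determinants via SCF metadynamics...
   SCF_SAVEMINIMA  4     ! ...keeping 4 metadynamics solutions (illustrative)
   NOCI_NEIGVAL    10    ! print the lowest 10 NOCI eigenvalues (the default)
$end
```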
USE_LIBNOCI Turn on the use of LIBNOCI for running NOCI calculations. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: False Do not use LIBNOCI (uses original Q-Chem implementation). True Use the LIBNOCI implementation. RECOMMENDATION: The $rem variables detailed below are only available in LIBNOCI. PLOT_ALMO_FRZ Plot ALMOs at the frozen stage of EDA2 calculations TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Do not plot frozen ALMOs TRUE Plot frozen ALMOs RECOMMENDATION: None PLOT_ALMO_POL Plot ALMOs after the polarization calculation TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Do not plot polarized ALMOs TRUE Plot polarized ALMOs RECOMMENDATION: None FDIFF_STEPSIZE Displacement used for calculating derivatives by finite difference. TYPE: INTEGER DEFAULT: 1 Corresponding to $1.88973\times 10^{-5}$ a.u. OPTIONS: $n$ Use a step size of $n$ times the default value. RECOMMENDATION: Use the default unless problems arise. RESPONSE_POLAR Control the use of analytic or numerical polarizabilities. TYPE: INTEGER DEFAULT: 0 or $-$1 = 0 for HF or DFT, $-$1 for all other methods OPTIONS: 0 Perform an analytic polarizability calculation. $-$1 Perform a numeric polarizability calculation even when analytic 2nd derivatives are available. RECOMMENDATION: None ADC_CVS Activates the use of the CVS approximation for the calculation of CVS-ADC core-excited states. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Activates the CVS approximation. FALSE Do not compute core-excited states using the CVS approximation. RECOMMENDATION: Set to TRUE to obtain core-excited states for the simulation of X-ray absorption spectra. In the case of TRUE, the $rem variable CC_REST_OCC has to be defined as well. ADC_C_C Set the spin-opposite scaling parameter $c_{c}$ for the ADC(2) calculation. The parameter value is obtained by multiplying the given integer by $10^{-3}$.
TYPE: INTEGER DEFAULT: 1170 Optimized value $c_{c}=1.17$ for ADC(2)-s or 1000 $c_{c}=1.0$ for ADC(2)-x OPTIONS: $n$ Corresponding to $n\cdot 10^{-3}$ RECOMMENDATION: Use the default. ADC_C_T Set the spin-opposite scaling parameter $c_{T}$ for an SOS-ADC(2) calculation. The parameter value is obtained by multiplying the given integer by $10^{-3}$. TYPE: INTEGER DEFAULT: 1300 Optimized value $c_{T}=1.3$. OPTIONS: $n$ Corresponding to $n\cdot 10^{-3}$ RECOMMENDATION: Use the default. ADC_C_X Set the spin-opposite scaling parameter $c_{x}$ for the ADC(2)-x calculation. The parameter value is obtained by multiplying the given integer by $10^{-3}$. TYPE: INTEGER DEFAULT: 900 Optimized value $c_{x}=0.9$ for ADC(2)-x. OPTIONS: $n$ Corresponding to $n\cdot 10^{-3}$ RECOMMENDATION: Use the default. ADC_DAVIDSON_CONV Controls the convergence criterion of the Davidson procedure. TYPE: INTEGER DEFAULT: $6$ Corresponding to $10^{-6}$ OPTIONS: $n\leq 12$ Corresponding to $10^{-n}$. RECOMMENDATION: Use the default unless higher accuracy is required or convergence problems are encountered. ADC_DAVIDSON_MAXITER Controls the maximum number of iterations of the Davidson procedure. TYPE: INTEGER DEFAULT: 60 OPTIONS: $n$ Number of iterations RECOMMENDATION: Use the default unless convergence problems are encountered. ADC_DAVIDSON_MAXSUBSPACE Controls the maximum subspace size for the Davidson procedure. TYPE: INTEGER DEFAULT: $5\times$ the number of excited states to be calculated. OPTIONS: $n$ User-defined integer. RECOMMENDATION: Should be at least $2-4\times$ the number of excited states to calculate. The larger the value the more disk space is required. ADC_DAVIDSON_THRESH Controls the threshold for the norm of expansion vectors to be added during the Davidson procedure. TYPE: INTEGER DEFAULT: Twice the value of ADC_DAVIDSON_CONV, but at maximum $10^{-14}$. OPTIONS: $n\leq 14$ Corresponding to $10^{-n}$ RECOMMENDATION: Use the default unless convergence problems are encountered.
The threshold value $10^{-n}$ should always be smaller than the convergence criterion ADC_DAVIDSON_CONV. ADC_DIIS_ECONV Controls the convergence criterion for the excited state energy during DIIS. TYPE: INTEGER DEFAULT: 6 Corresponding to $10^{-6}$ OPTIONS: $n$ Corresponding to $10^{-n}$ RECOMMENDATION: None ADC_DIIS_MAXITER Controls the maximum number of DIIS iterations. TYPE: INTEGER DEFAULT: 50 OPTIONS: $n$ User-defined integer. RECOMMENDATION: Increase in case of slow convergence. ADC_DIIS_RCONV Convergence criterion for the residual vector norm of the excited state during DIIS. TYPE: INTEGER DEFAULT: 6 Corresponding to $10^{-6}$ OPTIONS: $n$ Corresponding to $10^{-n}$ RECOMMENDATION: None ADC_DIIS_SIZE Controls the size of the DIIS subspace. TYPE: INTEGER DEFAULT: 7 OPTIONS: $n$ User-defined integer RECOMMENDATION: None ADC_DIIS_START Controls the iteration step at which DIIS is turned on. TYPE: INTEGER DEFAULT: 1 OPTIONS: $n$ User-defined integer. RECOMMENDATION: Set to a large number to switch off DIIS steps. ADC_DO_DIIS Activates the use of the DIIS algorithm for the calculation of ADC(2) excited states. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Use DIIS algorithm. FALSE Do diagonalization using Davidson algorithm. RECOMMENDATION: None. ADC_NGUESS_DOUBLES Controls the number of excited state guess vectors which are double excitations. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ User-defined integer. RECOMMENDATION: None ADC_NGUESS_SINGLES Controls the number of excited state guess vectors which are single excitations. If the number of requested excited states exceeds the total number of guess vectors (singles and doubles), this parameter is automatically adjusted, so that the number of guess vectors matches the number of requested excited states. TYPE: INTEGER DEFAULT: Equal to the number of excited states requested. OPTIONS: $n$ User-defined integer. RECOMMENDATION: Increase if there are convergence problems.
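A sketch combining the Davidson-related ADC controls above; how the excited states themselves are requested is not covered in this excerpt, so that keyword is omitted, and the guess-vector counts are illustrative:

```
$rem
   ADC_DO_DIIS           false  ! diagonalize with the Davidson algorithm
   ADC_DAVIDSON_CONV     6      ! residual convergence 10^-6 (the default)
   ADC_DAVIDSON_MAXITER  60     ! iteration cap (the default)
   ADC_NGUESS_SINGLES    8      ! extra singles guess vectors (illustrative)
   ADC_NGUESS_DOUBLES    2      ! plus two doubles guess vectors (illustrative)
$end
```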
ADC_PRINT Controls the amount of printing during an ADC calculation. TYPE: INTEGER DEFAULT: 1 Basic status information and results are printed. OPTIONS: 0 Quiet: almost only results are printed. 1 Normal: basic status information and results are printed. 2 Debug: more status information, extended information on timings. RECOMMENDATION: Use the default. ADC_PROP_ES2ES Controls the calculation of transition properties between excited states (currently only transition dipole moments and oscillator strengths), as well as the computation of two-photon absorption cross-sections of excited states using the sum-over-states expression. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Calculate state-to-state transition properties. FALSE Do not compute transition properties between excited states. RECOMMENDATION: Set to TRUE if state-to-state properties or sum-over-states two-photon absorption cross-sections are required. ADC_PROP_ES Controls the calculation of excited state properties (currently only dipole moments). TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Calculate excited state properties. FALSE Do not compute state properties. RECOMMENDATION: Set to TRUE if properties are required. ADC_PROP_TPA Controls the calculation of two-photon absorption cross-sections of excited states using matrix inversion techniques. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Calculate two-photon absorption cross-sections. FALSE Do not compute two-photon absorption cross-sections. RECOMMENDATION: Set to TRUE to obtain two-photon absorption cross-sections. ADD_CHARGED_CAGE Add a point charge cage of a given radius and total charge. TYPE: INTEGER DEFAULT: 0 No cage. OPTIONS: 0 No cage. 1 Dodecahedral cage. 2 Spherical cage. RECOMMENDATION: Spherical cage is expected to yield more accurate results, especially for small radii. AFSSH Adds a decoherence approximation to the surface hopping calculation. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Traditional surface hopping, no decoherence.
1 Use augmented fewest-switches surface hopping (AFSSH). RECOMMENDATION: AFSSH will increase the cost of the calculation, but may improve accuracy for some systems. See Refs. Subotnik:2011a, Subotnik:2011b, Landry:2012 for more detail. AIFDEM_CTSTATES Include charge-transfer-like cation/anion pair states in the AIFDEM basis. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Include CT states. FALSE Do not include CT states. RECOMMENDATION: None AIFDEM_EMBED_RANGE Specifies the size of the QM region for charge embedding TYPE: INTEGER DEFAULT: FULL_QM OPTIONS: FULL_QM No charge embedding. 0 Treat only excited fragments with QM. $n$ Range (in Å) from excited fragments within which to treat other fragments with QM. RECOMMENDATION: The minimal threshold of zero Å typically maintains accuracy while significantly reducing computational time. AIFDEM_NTOTHRESH Controls the number of NTOs that are retained in the exciton-site basis states. TYPE: INTEGER DEFAULT: 99 OPTIONS: $n$ Threshold percentage of the norm of fragment NTO amplitudes. RECOMMENDATION: A threshold of $85\%$ gives a good trade-off of computational time and accuracy for organic molecules. AIFDEM Perform an AIFDEM calculation. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE Do not perform an AIFDEM calculation. TRUE Perform an AIFDEM calculation. RECOMMENDATION: False AIMD_FICT_MASS Specifies the value of the fictitious electronic mass $\mu$, in atomic units, where $\mu$ has dimensions of (energy)$\times$(time)${}^{2}$. TYPE: INTEGER DEFAULT: None OPTIONS: User-specified RECOMMENDATION: Values in the range of 50–200 a.u. have been employed in test calculations; consult Ref. Herbert:2004 for examples and discussion. AIMD_INIT_VELOC Specifies the method for selecting initial nuclear velocities. TYPE: STRING DEFAULT: None OPTIONS: THERMAL Random sampling of nuclear velocities from a Maxwell-Boltzmann distribution. The user must specify the temperature in Kelvin via the $rem variable AIMD_TEMP.
ZPE Choose velocities in order to put zero-point vibrational energy into each normal mode, with random signs. This option requires that a frequency job be run beforehand. QUASICLASSICAL Puts vibrational energy into each normal mode. In contrast to the ZPE option, here the vibrational energies are sampled from a Boltzmann distribution at the desired simulation temperature. This also triggers several other options, as described below. RECOMMENDATION: This variable need only be specified in the event that velocities are not specified explicitly in a $velocity section. AIMD_LANGEVIN_TIMESCALE Sets the timescale (strength) of the Langevin thermostat TYPE: INTEGER DEFAULT: none OPTIONS: $n$ Thermostat timescale, as $n$ fs RECOMMENDATION: Smaller values (roughly 100) equate to tighter thermostats but may inhibit rapid sampling. Larger values ($\geq 1000$) allow for more rapid sampling but may take longer to reach thermal equilibrium. AIMD_METHOD Selects an ab initio molecular dynamics algorithm. TYPE: STRING DEFAULT: BOMD OPTIONS: BOMD Born-Oppenheimer molecular dynamics. CURVY Curvy-steps Extended Lagrangian molecular dynamics. RECOMMENDATION: BOMD yields exact classical molecular dynamics, provided that the energy is tolerably conserved. ELMD is an approximation to exact classical dynamics whose validity should be tested for the properties of interest. AIMD_MOMENTS Requests that multipole moments be output at each time step. TYPE: INTEGER DEFAULT: 0 Do not output multipole moments. OPTIONS: $n$ Output the first $n$ multipole moments. RECOMMENDATION: None AIMD_NUCL_DACF_POINTS Number of time points to use in the dipole auto-correlation function for an AIMD trajectory TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not compute dipole auto-correlation function. $1\leq n\leq\mbox{{\small AIMD\_STEPS}}$ Compute dipole auto-correlation function for last $n$ timesteps of the trajectory. RECOMMENDATION: If the DACF is desired, set equal to AIMD_STEPS.
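The AIMD variables above can be sketched in a thermostatted BOMD input like the following; the step count, temperature, and thermostat timescale are illustrative values, not recommendations:

```
$rem
   AIMD_METHOD              bomd      ! Born-Oppenheimer molecular dynamics
   AIMD_STEPS               500       ! number of MD steps (illustrative)
   AIMD_INIT_VELOC          thermal   ! Maxwell-Boltzmann sampling of initial velocities...
   AIMD_TEMP                300       ! ...at 300 K
   AIMD_THERMOSTAT          langevin  ! stochastic Langevin thermostat (NVT sampling)
   AIMD_LANGEVIN_TIMESCALE  100       ! tight thermostat coupling, 100 fs (illustrative)
$end
```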
AIMD_NUCL_SAMPLE_RATE The rate at which sampling is performed for the velocity and/or dipole auto-correlation function(s). Specified as a multiple of steps; i.e., sampling every step is 1. TYPE: INTEGER DEFAULT: None. OPTIONS: $1\leq n\leq\mbox{{\small AIMD\_STEPS}}$ Update the velocity/dipole auto-correlation function every $n$ steps. RECOMMENDATION: Since the velocity and dipole moment are routinely calculated for ab initio methods, this variable should almost always be set to 1 when the VACF/DACF are desired. AIMD_NUCL_VACF_POINTS Number of time points to use in the velocity auto-correlation function for an AIMD trajectory TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not compute velocity auto-correlation function. $1\leq n\leq\mbox{{\small AIMD\_STEPS}}$ Compute velocity auto-correlation function for last $n$ time steps of the trajectory. RECOMMENDATION: If the VACF is desired, set equal to AIMD_STEPS. AIMD_QCT_INITPOS Chooses the initial geometry in a QCT-MD simulation. TYPE: INTEGER DEFAULT: 0 OPTIONS: $0$ Use the equilibrium geometry. $n$ Picks a random geometry according to the harmonic vibrational wave function. $-n$ Generates $n$ random geometries sampled from the harmonic vibrational wave function. RECOMMENDATION: None. AIMD_QCT_WHICH_TRAJECTORY Picks a set of vibrational quantum numbers from a random distribution. TYPE: INTEGER DEFAULT: 1 OPTIONS: $n$ Picks the $n$th set of random initial velocities. $-n$ Uses an average over $n$ random initial velocities. RECOMMENDATION: Pick a positive number if you want the initial velocities to correspond to a particular set of vibrational occupation numbers and choose a different number for each of your trajectories. If initial velocities are desired that correspond to an average over $n$ trajectories, pick a negative number. AIMD_SHORT_TIME_STEP Specifies a shorter electronic time step for FSSH calculations.
TYPE: INTEGER DEFAULT: TIME_STEP OPTIONS: $n$ Specify an electronic time step duration of $n$/AIMD_TIME_STEP_CONVERSION a.u. If $n$ is less than the nuclear time step variable TIME_STEP, the electronic wave function will be integrated multiple times per nuclear time step, using a linear interpolation of nuclear quantities such as the energy gradient and derivative coupling. Note that $n$ must divide TIME_STEP evenly. RECOMMENDATION: Make AIMD_SHORT_TIME_STEP as large as possible while keeping the trace of the density matrix close to unity during long simulations. Note that while specifying an appropriate duration for the electronic time step is essential for maintaining accurate wave function time evolution, the electronic-only time steps employ linear interpolation to estimate important quantities. Consequently, a short electronic time step is not a substitute for a reasonable nuclear time step. AIMD_STEPS Specifies the requested number of molecular dynamics steps. TYPE: INTEGER DEFAULT: None. OPTIONS: User-specified. RECOMMENDATION: None. AIMD_TEMP Specifies a temperature (in Kelvin) for Maxwell-Boltzmann velocity sampling. TYPE: INTEGER DEFAULT: None OPTIONS: User-specified number of Kelvin. RECOMMENDATION: This variable is only useful in conjunction with AIMD_INIT_VELOC = THERMAL. Note that the simulations are run at constant energy, rather than constant temperature, so the mean nuclear kinetic energy will fluctuate in the course of the simulation. AIMD_THERMOSTAT Applies thermostatting to AIMD trajectories. TYPE: STRING DEFAULT: none OPTIONS: LANGEVIN Stochastic, white-noise Langevin thermostat NOSE_HOOVER Time-reversible, Nosé-Hoover chain thermostat RECOMMENDATION: Use either thermostat for sampling the canonical (NVT) ensemble. AIMD_TIME_STEP_CONVERSION Modifies the molecular dynamics time step to increase granularity. TYPE: INTEGER DEFAULT: 1 OPTIONS: $n$ The molecular dynamics time step is TIME_STEP/$n$ a.u.
RECOMMENDATION: None ANHAR_SEL Select a subset of normal modes for subsequent anharmonic frequency analysis. TYPE: LOGICAL DEFAULT: FALSE Use all normal modes OPTIONS: TRUE Select subset of normal modes RECOMMENDATION: None ANHAR Performs various nuclear vibrational theory (TOSH, VPT2, VCI) calculations to obtain vibrational anharmonic frequencies. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Carry out the anharmonic frequency calculation. FALSE Do harmonic frequency calculation. RECOMMENDATION: Since this calculation involves the third and fourth derivatives at the minimum of the potential energy surface, it is recommended that the GEOM_OPT_TOL_DISPLACEMENT, GEOM_OPT_TOL_GRADIENT and GEOM_OPT_TOL_ENERGY tolerances be set tighter. Note that VPT2 calculations may fail if the system involves accidentally degenerate resonances. See the VCI $rem variable for more details about increasing the accuracy of anharmonic calculations. ANTIBOND Triggers Antibond subroutine to generate antibonding orbitals after a converged SCF TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Does not localize the virtual space. 1 Localizes the virtual space, one antibonding orbital for every bond. 2,3 Fill the virtual space with antibonding-orbital-like guesses. 4 Does Frozen Natural Orbitals and leaves them on scratch for future jobs or visualization. RECOMMENDATION: None ARI_R0 Determines the value of the inner fitting radius (in Ångstroms) TYPE: INTEGER DEFAULT: 4 A value of 4 Å will be added to the atomic van der Waals radius. OPTIONS: $n$ User defined radius. RECOMMENDATION: For some systems the default value may be too small and the calculation will become unstable. ARI_R1 Determines the value of the outer fitting radius (in Ångstroms) TYPE: INTEGER DEFAULT: 5 A value of 5 Å will be added to the atomic van der Waals radius. OPTIONS: $n$ User defined radius. RECOMMENDATION: For some systems the default value may be too small and the calculation will become unstable.
This value also determines, in part, the smoothness of the potential energy surface. ARI Toggles the use of the atomic resolution-of-the-identity (ARI) approximation. TYPE: LOGICAL DEFAULT: FALSE ARI will not be used by default for an RI-JK calculation. OPTIONS: TRUE Turn on ARI. RECOMMENDATION: For large (especially 1D and 2D) molecules the approximation may yield significant improvements in Fock evaluation time. ASCI_CDETS Specifies the number of determinants to search over during ASCI wavefunction growth steps. TYPE: INTEGER DEFAULT: -5 OPTIONS: $N>0$ search from the top $N$ determinants $N<0$ search from the top determinants whose cumulative weight in the wavefunction corresponds to $1-2^{N}$ RECOMMENDATION: Using a dynamically determined value ($N<0$) gives better results. ASCI_DAVIDSON_GUESS Specifies the truncated CI guess used for ASCI’s Davidson solver. TYPE: INTEGER DEFAULT: 2 OPTIONS: $N$ Order of the truncated CI solved explicitly for the ASCI Davidson guess. RECOMMENDATION: Accurate excited states and rapid convergence of the ground state benefit from a good zero-order guess for the low energy spectrum. The default is often sufficient. ASCI_DIAG Specifies the diagonalization procedure. TYPE: INTEGER DEFAULT: 2 OPTIONS: 1 Davidson solver 2 Eigen sparse matrix solver RECOMMENDATION: Use 2 for best trade-off of speed and memory usage. If memory usage becomes too great, switch to 1. ASCI_NDETS Specifies the number of determinants to include in the ASCI wavefunction. TYPE: INTEGER DEFAULT: 0 OPTIONS: $N$ for a wavefunction with $N$ determinants RECOMMENDATION: Typical ASCI expansions range from 50,000 to 2,000,000 determinants depending on active space size, complexity of problem, and desired accuracy. ASCI_RESTART Specifies whether to initialize the ASCI wavefunction with the “wf_data” file.
TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE read CI coefficients from the “wf_data” file FALSE do not read the CI coefficients from disk RECOMMENDATION: None ASCI_SKIP_PT2 Specifies whether ASCI PT2 correction should be calculated. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE compute ASCI PT2 contribution TRUE do not compute ASCI PT2 contribution RECOMMENDATION: The PT2 correction is essential to obtaining converged ASCI energies. ASCI_SPIN_PURIFY Indicates whether or not the ASCI wavefunction should be augmented with missing determinants to ensure a spin-pure state. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE augment the wavefunction with determinants to ensure a spin eigenstate FALSE do not augment the wavefunction RECOMMENDATION: None ASCI_USE_NAT_ORBS Specifies whether rotation to a natural orbital basis should be carried out between growth steps. TYPE: BOOLEAN DEFAULT: TRUE OPTIONS: TRUE rotate to a natural orbital basis between wavefunction growth steps FALSE do not rotate to a natural orbital basis RECOMMENDATION: Natural orbital rotations significantly improve the compactness and therefore accuracy of the ASCI wavefunction. AUX_BASIS_CORR Sets the auxiliary basis set for RI-MP2 to be used or invokes RI-MP2 in the case of double-hybrid DFT or MP2 TYPE: STRING DEFAULT: No default auxiliary basis set OPTIONS: General, Gen User-defined. As for BASIS Symbol Use standard auxiliary basis sets as in the table below Mixed Use a combination of different basis sets RECOMMENDATION: Consult literature and EMSL Basis Set Exchange to aid your selection. AUX_BASIS_J Sets the auxiliary basis set for RI-J to be used or invokes RI-J TYPE: STRING DEFAULT: No default auxiliary basis set OPTIONS: General, Gen User-defined. As for BASIS Symbol Use standard auxiliary basis sets as in the table below Mixed Use a combination of different basis sets RECOMMENDATION: Consult literature and EMSL Basis Set Exchange to aid your selection.
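As a sketch of how an auxiliary basis is paired with the orbital basis for an RI calculation, the fragment below requests an RI-MP2 single point; the specific method and basis names are common choices used for illustration, not prescriptions.

```
$rem
JOBTYPE    sp
METHOD     rimp2
BASIS      cc-pVTZ
AUX_BASIS  rimp2-cc-pVTZ   ! auxiliary basis matched to BASIS
$end
```

In general the auxiliary basis should be one optimized for the chosen orbital basis; mismatched pairs degrade the RI approximation.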
AUX_BASIS_K Sets the auxiliary basis set for RI-K or occ-RI-K to be used or invokes occ-RI-K TYPE: STRING DEFAULT: No default auxiliary basis set OPTIONS: General, Gen User-defined. As for BASIS Symbol Use standard auxiliary basis sets as in the table below Mixed Use a combination of different basis sets RECOMMENDATION: Consult literature and EMSL Basis Set Exchange to aid your selection. AUX_BASIS Sets the auxiliary basis set to be used TYPE: STRING DEFAULT: No default auxiliary basis set OPTIONS: General, Gen User-defined. As for BASIS Symbol Use standard auxiliary basis sets as in the table below Mixed Use a combination of different basis sets RECOMMENDATION: Consult literature and EMSL Basis Set Exchange to aid your selection. BASIS2 Defines the (small) second basis set. TYPE: STRING DEFAULT: No default for the second basis set. OPTIONS: Symbol Use standard basis sets as for BASIS. BASIS2_GEN General BASIS2 BASIS2_MIXED Mixed BASIS2 RECOMMENDATION: BASIS2 should be smaller than BASIS. There is little advantage to using a basis larger than a minimal basis when BASIS2 is used for initial guess purposes. Larger, standardized BASIS2 options are available for dual-basis calculations as discussed in Section 4.7 and summarized in Table 4.2. BASISPROJTYPE Determines which method to use when projecting the density matrix of BASIS2 TYPE: STRING DEFAULT: FOPPROJECTION (when DUAL_BASIS_ENERGY=false) OVPROJECTION (when DUAL_BASIS_ENERGY=true) OPTIONS: FOPPROJECTION Construct the Fock matrix in the second basis OVPROJECTION Projects MOs from BASIS2 to BASIS. RECOMMENDATION: None BASIS_LIN_DEP_THRESH Sets the threshold for determining linear dependence in the basis set TYPE: INTEGER DEFAULT: 6 Corresponding to a threshold of $10^{-6}$ OPTIONS: $n$ Sets the threshold to $10^{-n}$ RECOMMENDATION: Set to 5 or smaller if you have a poorly behaved SCF and you suspect linear dependence in your basis set. Lower values (larger thresholds) may affect the accuracy of the calculation.
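For instance, BASIS2 can supply a small-basis initial guess for an SCF in a larger basis. A minimal sketch (the method and basis choices are arbitrary placeholders):

```
$rem
JOBTYPE  sp
METHOD   b3lyp
BASIS    6-311++G**
BASIS2   sto-3g       ! minimal basis used only for the initial guess
$end
```

For full dual-basis calculations, set DUAL_BASIS_ENERGY = TRUE and use one of the larger, standardized BASIS2 options mentioned in the BASIS2 recommendation above.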
BASIS Sets the basis set to be used TYPE: STRING DEFAULT: No default basis set OPTIONS: General, Gen User-defined. See section below Symbol Use standard basis sets as in the table below Mixed Use a combination of different basis sets RECOMMENDATION: Consult literature and reviews to aid your selection. BECKE_SHIFT Controls atomic cell shifting in determination of Becke weights. TYPE: STRING DEFAULT: UNSHIFTED OPTIONS: UNSHIFTED Use the original weighting scheme of Becke (bisection point). BRAGG_SLATER Use the empirically derived Bragg-Slater radii. UNIVERSAL_DENSITY Use the ab initio derived Pacios radii. RECOMMENDATION: If interested in the partitioning of the default atomic quadrature, use UNSHIFTED. If using for physical interpretation, choose BRAGG_SLATER or UNIVERSAL_DENSITY. BONDED_EDA Use the bonded ALMO-EDA. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not perform bonded ALMO-EDA. 1 Perform ALMO-EDA with non-orthogonal CI. 2 Perform ALMO-EDA with spin-projected formalism. RECOMMENDATION: Set to 2 for all cases where the supersystem is closed shell, only use 1 for cases where the fragments have more than one unpaired spin each. BOYSCALC Specifies that Boys localized orbitals are to be calculated TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not localize the occupied space. 1 Allow core-valence mixing in Boys localization. 2 Localize core and valence separately. RECOMMENDATION: None BOYS_CIS_NUMSTATE Define how many states to mix with Boys localized diabatization. These states must be specified in the $localized_diabatization section. TYPE: INTEGER DEFAULT: 0 Do not perform Boys localized diabatization. OPTIONS: 2 to N where N is the number of CIS states requested (CIS_N_ROOTS) RECOMMENDATION: It is usually not wise to mix adiabatic states that are separated by more than a few eV or a typical reorganization energy in solvent. CAGE_CHARGE Defines the total charge of the cage. TYPE: INTEGER DEFAULT: 400 Add a cage charge of +4e.
OPTIONS: $n$ Total charge of the cage is $n/100$ a.u. RECOMMENDATION: None CAGE_POINTS Defines number of point charges for the spherical cage. TYPE: INTEGER DEFAULT: 100 OPTIONS: $n$ Number of point charges to use. RECOMMENDATION: None CAGE_RADIUS Defines radius of the charged cage. TYPE: INTEGER DEFAULT: 225 OPTIONS: $n$ radius is $n/100$ Å. RECOMMENDATION: None CALC_NAC Whether or not non-adiabatic couplings will be calculated for the EOM-CC, CIS, and TDDFT wave functions. TYPE: INTEGER DEFAULT: 0 (do not compute NAC) OPTIONS: 1 NYI for EOM-CC 2 Compute NACs using Szalay’s approach (this is what needs to be specified for EOM-CC). RECOMMENDATION: Additional response equations will be solved and gradients for all EOM states and for summed states will be computed, which increases the cost of calculations. Request only when needed and do not ask for too many EOM states. CALC_SOC Whether or not the spin-orbit couplings between CC/EOM/ADC/CIS/TDDFT electronic states will be calculated. In the CC/EOM-CC suite, by default the couplings are calculated between the CCSD reference and the EOM-CCSD target states. In order to calculate couplings between EOM states, CC_STATE_TO_OPT must specify the initial EOM state. If NTO analysis is requested, analysis of spinless transition density matrices will be performed and the spin–orbit integrals over NTO pairs will be printed. TYPE: INTEGER/LOGICAL DEFAULT: FALSE (no spin-orbit couplings will be calculated) OPTIONS: 0/FALSE (no spin-orbit couplings will be calculated) 1/TRUE Activates SOC calculation. EOM-CC/EOM-MP2 only: spin-orbit couplings will be computed with the new code with L+/L- averaging 2 EOM-CC/EOM-MP2 only: spin-orbit couplings will be computed with the new code without L+/L- averaging 3 EOM-CC/EOM-MP2 only: spin-orbit couplings will be computed with the legacy code RECOMMENDATION: CCMAN2 supports several variants of SOC calculation for EOM-CC/EOM-MP2 methods.
One-electron and mean-field two-electron SOCs will be computed by default. To enable full two-electron SOCs, two-particle EOM properties must be turned on (see CC_EOM_PROP_TE). CALC_SOC Controls whether to calculate the SOC constants for EOM-CC, ADC, TDDFT/TDA and TDDFT. TYPE: INTEGER/LOGICAL DEFAULT: FALSE OPTIONS: FALSE Do not perform the SOC calculation. TRUE Perform the SOC calculation. RECOMMENDATION: Although TRUE/FALSE values will work, EOM-CC code has more variants of SOC evaluations. For details, consult the EOM section. CAS_DAVIDSON_MAXVECTORS Specifies the maximum number of vectors to augment the Davidson search space in CAS. TYPE: INTEGER DEFAULT: 10 OPTIONS: $N$ sets the maximum Davidson subspace size to $N$+CAS_N_ROOTS RECOMMENDATION: The default should be suitable in most cases. CAS_DAVIDSON_TOL Specifies the tolerance for the Davidson solver used in CAS. TYPE: INTEGER DEFAULT: 5 OPTIONS: $N$ for a threshold of $10^{-N}$ RECOMMENDATION: The default should be suitable in most cases. CAS_METHOD Indicates whether orbital optimization is requested. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Not running a CAS calculation 1 CAS-CI (no orbital optimization) 2 CASSCF (orbital optimization) RECOMMENDATION: Use 2 for best accuracy, but such computations may become infeasible for large active spaces. CAS_M_S The number of unpaired electrons desired in the CAS wavefunction. TYPE: INTEGER DEFAULT: 0 OPTIONS: $N$ for a wavefunction with $N$ unpaired electrons RECOMMENDATION: None CAS_N_ELEC Specifies the number of active electrons. TYPE: INTEGER DEFAULT: 0 OPTIONS: $N$ include $N$ electrons in the active space -1 include all electrons in the active space RECOMMENDATION: Use the smallest active space possible for the given system. CAS_N_ORB Specifies the number of active orbitals. TYPE: INTEGER DEFAULT: 0 OPTIONS: $N$ include $N$ orbitals in the active space -1 include all orbitals in the active space RECOMMENDATION: Use the smallest active space possible for the given system.
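Taken together, the CAS_* variables define the active-space job. A CASSCF(6,6) sketch for three electronic states might read as follows, alongside the usual BASIS and SCF settings; the active-space size and root count here are purely illustrative.

```
$rem
BASIS        6-31G*   ! placeholder basis
CAS_METHOD   2        ! 2 = CASSCF (orbital optimization)
CAS_N_ELEC   6        ! six active electrons
CAS_N_ORB    6        ! six active orbitals
CAS_N_ROOTS  3        ! solve for three roots
$end
```

Per the recommendations above, keep the active space as small as the chemistry allows, since CASSCF cost grows steeply with active-space size.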
CAS_N_ROOTS Specifies the number of electronic states to determine. TYPE: INTEGER DEFAULT: 1 OPTIONS: $N$ solve for $N$ roots of the Hamiltonian RECOMMENDATION: None CAS_SAVE_NAT_ORBS Save the CAS natural orbitals in place of the reference orbitals. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE overwrite the reference orbitals with CAS natural orbitals FALSE do not save the CAS natural orbitals RECOMMENDATION: None CAS_SOLVER Specifies the solver to be used for the active space. TYPE: INTEGER DEFAULT: 1 OPTIONS: 1 CAS-CI/CASSCF 2 ASCI (see Section 6.18) 3 Truncated CI (CIS, CISD, CISDT, etc.) RECOMMENDATION: None CAS_THRESH Specifies the threshold for matrix elements to be included in the CAS Hamiltonian. TYPE: INTEGER DEFAULT: 12 OPTIONS: $N$ for a threshold of $10^{-N}$ RECOMMENDATION: None CAS_USE_RI Indicates whether the resolution of the identity approximation should be used. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Compute 2-electron integrals analytically TRUE Use the RI approximation for 2-electron integrals RECOMMENDATION: Analytic integrals are more accurate; RI integrals are faster. CCVB_GUESS Specifies the initial guess for CCVB calculations TYPE: INTEGER DEFAULT: NONE OPTIONS: 1 Standard GVBMAN guess (orbital localization via GVB_LOCAL + Sano procedure). 2 Use orbitals from previous GVBMAN calculation, along with SCF_GUESS = read. 3 Convert UHF orbitals into pairing VB form. RECOMMENDATION: Option 1 is the most useful overall. The success of GVBMAN methods is often dependent on localized orbitals, and this guess shoots for these. Option 2 is useful for comparing results to other GVBMAN methods, or if other GVBMAN methods are able to obtain a desired result more efficiently. Option 3 can be useful for bond-breaking situations when a pertinent UHF solution has been found. It works best for small systems, or if the unrestriction is a local phenomenon within a larger molecule.
If the unrestriction is non-local and the system is large, this guess will often produce a solution that is not the global minimum. Any UHF solution has a certain number of pairs that are unrestricted, and this will be output by the program. If GVB_N_PAIRS exceeds this number, the standard GVBMAN initial-guess procedure will be used to obtain a guess for the excess pairs. CCVB_METHOD Optionally modifies the basic CCVB method TYPE: INTEGER DEFAULT: 1 OPTIONS: 1 Standard CCVB model 3 Independent electron pair approximation (IEPA) to CCVB 4 Variational PP (the CCVB reference energy) RECOMMENDATION: Option 1 is generally recommended. Option 4 is useful for preconditioning, and for obtaining localized-orbital solutions, which may be used in subsequent calculations. It is also useful for cases in which the regular GVBMAN PP code becomes variationally unstable. Option 3 is a simple independent-amplitude approximation to CCVB. It avoids the cubic-scaling amplitude equations of CCVB, and also is able to reach the correct dissociation energy for any molecular system (unlike regular CCVB, which does so only for cases in which UHF can reach a correct dissociation limit). However, the IEPA approximation to CCVB is sometimes variationally unstable, which we have yet to observe in regular CCVB. CC_BACKEND Used to specify the computational back-end of CCMAN2. TYPE: STRING DEFAULT: VM Default shared-memory disk-based back-end OPTIONS: XM libxm shared-memory disk-based back-end CTF Distributed-memory back-end for MPI jobs RECOMMENDATION: Use XM for large jobs with limited memory or when the performance of the default disk-based back-end is not satisfactory; use CTF for MPI jobs. CC_CANONIZE_FINAL Whether to semi-canonicalize orbitals at the end of the ground state calculation. TYPE: LOGICAL DEFAULT: FALSE unless required OPTIONS: TRUE/FALSE RECOMMENDATION: Should not normally have to be altered. CC_CANONIZE_FREQ The orbitals will be semi-canonicalized every $n$ theta resets.
The thetas (orbital rotation angles) are reset every CC_RESET_THETA iterations. The counting of iterations differs for active space (VOD, VQCCD) calculations, where the orbitals are always canonicalized at the first theta-reset. TYPE: INTEGER DEFAULT: 50 OPTIONS: $n$ User-defined integer RECOMMENDATION: Smaller values can be tried in cases that do not converge. CC_CANONIZE Whether to semi-canonicalize orbitals at the start of the calculation (i.e. Fock matrix is diagonalized in each orbital subspace) TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE/FALSE RECOMMENDATION: Should not normally have to be altered. CC_CONVERGENCE Overall convergence criterion for the coupled-cluster codes. This is designed to ensure at least $n$ significant digits in the calculated energy, and automatically sets the other convergence-related variables (CC_E_CONV, CC_T_CONV, CC_THETA_CONV, CC_THETA_GRAD_CONV) [$10^{-n}$]. TYPE: INTEGER DEFAULT: 6 Energies. 7 Gradients. OPTIONS: $n$ Corresponding to $10^{-n}$ convergence criterion. Amplitude convergence is set automatically to match energy convergence. RECOMMENDATION: Use the default. CC_DIIS12_SWITCH When to switch from DIIS2 to DIIS1 procedure, or when DIIS2 procedure is required to generate DIIS guesses less frequently. Total value of DIIS error vector must be less than $10^{-n}$, where $n$ is the value of this option. TYPE: INTEGER DEFAULT: 5 OPTIONS: $n$ User-defined integer RECOMMENDATION: None CC_DIIS_FREQ DIIS extrapolation will be attempted every $n$ iterations. However, DIIS2 will be attempted every iteration while total error vector exceeds CC_DIIS12_SWITCH. DIIS1 cannot generate guesses more frequently than every 2 iterations. TYPE: INTEGER DEFAULT: 2 OPTIONS: $N$ User-defined integer RECOMMENDATION: None CC_DIIS_MAX_OVERLAP DIIS extrapolations will not begin until square root of the maximum element of the error overlap matrix drops below this value.
TYPE: DOUBLE DEFAULT: 100 Corresponding to 1.0 OPTIONS: $abcde$ Integer code is mapped to $abc\times 10^{-de}$ RECOMMENDATION: None CC_DIIS_MIN_OVERLAP The DIIS procedure will be halted when the square root of smallest element of the error overlap matrix is less than $10^{-n}$, where $n$ is the value of this option. Small values of the B matrix mean it will become near-singular, making the DIIS equations difficult to solve. TYPE: INTEGER DEFAULT: 11 OPTIONS: $n$ User-defined integer RECOMMENDATION: None CC_DIIS_SIZE Specifies the maximum size of the DIIS space. TYPE: INTEGER DEFAULT: 7 OPTIONS: $n$ User-defined integer RECOMMENDATION: Larger values involve larger amounts of disk storage. CC_DIIS_START Iteration number when DIIS is turned on. Set to a large number to disable DIIS. TYPE: INTEGER DEFAULT: 3 OPTIONS: $n$ User-defined RECOMMENDATION: Occasionally DIIS can cause optimized orbital coupled-cluster calculations to diverge through large orbital changes. If this is seen, DIIS should be disabled. CC_DIIS Specify the version of Pulay’s Direct Inversion of the Iterative Subspace (DIIS) convergence accelerator to be used in the coupled-cluster code. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Activates procedure 2 initially, and procedure 1 when gradients are smaller than CC_DIIS12_SWITCH. 1 Uses error vectors defined as differences between parameter vectors from successive iterations. Most efficient near convergence. 2 Error vectors are defined as gradients scaled by square root of the approximate diagonal Hessian. Most efficient far from convergence. RECOMMENDATION: DIIS1 can be more stable. If DIIS problems are encountered in the early stages of a calculation (when gradients are large) try DIIS1.
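As an illustration of the DIIS controls just described, a CCSD job that forces the more stable DIIS1 procedure and delays extrapolation might use the fragment below; the numerical values are plausible troubleshooting starting points, not defaults, and the basis is a placeholder.

```
$rem
METHOD         ccsd
BASIS          cc-pVDZ
CC_DIIS        1     ! DIIS1: more stable when gradients are large
CC_DIIS_START  5     ! delay extrapolation until iteration 5
CC_DIIS_SIZE   10    ! larger subspace (costs more disk storage)
$end
```

If DIIS itself drives an orbital-optimized calculation to diverge, setting CC_DIIS_START to a very large number disables extrapolation entirely, as noted in the CC_DIIS_START entry.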
CC_DIRECT_RI Controls use of RI and Cholesky integrals in conventional (undecomposed) form TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE use all integrals in decomposed format TRUE transform all RI or Cholesky integrals back to conventional format RECOMMENDATION: By default all integrals are used in decomposed format allowing significant reduction of memory use. If all integrals are transformed back (TRUE option) no memory reduction is achieved and decomposition error is introduced; however, the integral transformation is performed significantly faster and conventional CC/EOM algorithms are used. CC_DOV_THRESH Specifies minimum allowed values for the coupled-cluster energy denominators. Smaller values are replaced by this constant during early iterations only, so the final results are unaffected, but initial convergence is improved when the HOMO-LUMO gap is small or when non-conventional references are used. TYPE: INTEGER DEFAULT: 0 OPTIONS: $abcde$ Integer code is mapped to $ab\times 10^{-de}$, e.g., $2501$ corresponds to 0.025, $99001$ corresponds to 0.99, etc. RECOMMENDATION: Increase to 0.25, 0.5 or 0.75 for non-convergent coupled-cluster calculations. CC_DO_DYSON_EE Whether excited-state or spin-flip state Dyson orbitals will be calculated for EOM-IP/EA-CCSD calculations with CCMAN. TYPE: LOGICAL DEFAULT: FALSE (the option must be specified to run this calculation) OPTIONS: TRUE/FALSE RECOMMENDATION: none CC_DO_DYSON CCMAN2: starts all types of Dyson orbitals calculations. Desired type is determined by requesting corresponding EOM-XX transitions CCMAN: whether the reference-state Dyson orbitals will be calculated for EOM-IP/EA-CCSD calculations. TYPE: LOGICAL DEFAULT: FALSE (the option must be specified to run this calculation) OPTIONS: TRUE/FALSE RECOMMENDATION: none CC_EOM_2PA Whether or not the transition moments and cross-sections for two-photon absorption will be calculated.
By default, the transition moments are calculated between the CCSD reference and the EOM-CCSD target states. In order to calculate transition moments between a set of EOM-CCSD states and another EOM-CCSD state, the CC_STATE_TO_OPT must be specified for this state. If 2PA NTO analysis is requested, the CC_EOM_2PA value is redundant as long as CC_EOM_2PA $>0$. TYPE: INTEGER DEFAULT: 0 (do not compute 2PA transition moments) OPTIONS: 1 Compute 2PA using the fastest algorithm (use $\tilde{\sigma}$-intermediates for canonical and $\sigma$-intermediates for RI/CD response calculations). 2 Use $\sigma$-intermediates for 2PA response equation calculations. 3 Use $\tilde{\sigma}$-intermediates for 2PA response equation calculations. RECOMMENDATION: Additional response equations (6 for each target state) will be solved, which increases the cost of calculations. The cost of 2PA moments is about 10 times that of energy calculation. Use the default algorithm. Setting CC_EOM_2PA $>0$ turns on CC_TRANS_PROP. CC_EOM_PROP_TE Request for calculation of non-relaxed two-particle EOM-CC properties. The two-particle properties currently include $\langle S^{2}\rangle$. The one-particle properties also will be calculated, since the additional cost of the one-particle properties calculation is minor compared to the cost of $\langle S^{2}\rangle$. The variable CC_EOM_PROP must also be set to TRUE. Alternatively, CC_CALC_SSQ can be used to request $\langle S^{2}\rangle$ calculation. TYPE: LOGICAL DEFAULT: FALSE (no two-particle properties will be calculated) OPTIONS: FALSE, TRUE RECOMMENDATION: The two-particle properties are computationally expensive since they require calculation and use of the two-particle density matrix (the cost is approximately the same as the cost of an analytic gradient calculation). Do not request the two-particle properties unless you really need them.
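A two-photon absorption request is added on top of an ordinary EOM-EE-CCSD excited-state job. A minimal sketch follows; the basis and the number of requested states are illustrative, and the bracketed EE_STATES form assumes a single irreducible representation ($C_{1}$ symmetry).

```
$rem
METHOD      eom-ccsd
BASIS       aug-cc-pVDZ
EE_STATES   [2]      ! two EOM-EE target states (C1 symmetry assumed)
CC_EOM_2PA  1        ! default (fastest) 2PA algorithm
$end
```

As noted above, any CC_EOM_2PA value greater than zero also activates CC_TRANS_PROP, and each target state adds six response equations to the cost.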
CC_EOM_PROP Whether or not the non-relaxed (expectation value) one-particle EOM-CCSD target state properties will be calculated. The properties currently include permanent dipole moment, angular momentum projections, the second moments $\langle X^{2}\rangle$, $\langle Y^{2}\rangle$, and $\langle Z^{2}\rangle$ of electron density, and the total $\langle R^{2}\rangle=\langle X^{2}\rangle+\langle Y^{2}\rangle+\langle Z^{2}\rangle$ (in atomic units). Incompatible with JOBTYPE=FORCE, OPT, FREQ. TYPE: LOGICAL DEFAULT: FALSE (no one-particle properties will be calculated) OPTIONS: FALSE, TRUE RECOMMENDATION: Additional equations (EOM-CCSD equations for the left eigenvectors) need to be solved for properties, approximately doubling the cost of calculation for each irrep. The cost of the one-particle properties calculation itself is low. The one-particle density of an EOM-CCSD target state can be analyzed with NBO or libwfa packages by specifying the state with CC_STATE_TO_OPT and requesting NBO = TRUE and CC_EOM_PROP = TRUE. CC_EOM_RIXS Whether or not the RIXS scattering moments and cross-sections will be calculated. TYPE: INTEGER DEFAULT: 0 do not compute RIXS cross-sections OPTIONS: 1 Perform RIXS within fc-CVS-EOM-EE-CCSD using the response wave functions of the CCSD reference state only 2 Perform RIXS within fc-CVS-EOM-EE-CCSD response theory along with the wave-function analysis of RIXS transition density matrices 11 Perform RIXS within the standard EOM-EE-CCSD using the response wave functions of the CCSD reference state only 12 Use $\sigma$-intermediates for RIXS response calculations within the standard EOM-EE-CCSD RECOMMENDATION: Use 1 to deploy fc-CVS-EOM-EE-CCSD with robust convergence. CC_ERASE_DP_INTEGRALS Controls storage of requisite objects computed with double precision in a single-precision calculation TYPE: INTEGER DEFAULT: 0 store OPTIONS: 1 do not store RECOMMENDATION: Do not erase integrals if clean-up in double precision is intended.
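CC_EOM_PROP is likewise combined with an EOM state request. The sketch below asks for unrelaxed one-particle properties (dipole moments, second moments) of EOM-EE-CCSD states; the state count and basis are illustrative, and the bracketed EE_STATES form again assumes $C_{1}$ symmetry.

```
$rem
METHOD       eom-ccsd
BASIS        6-31+G*
EE_STATES    [3]     ! three EOM-EE target states (C1 symmetry assumed)
CC_EOM_PROP  true    ! unrelaxed one-particle state properties
$end
```

Because the left eigenvectors must also be solved for, expect roughly double the cost per irrep relative to an energies-only job, as the entry above notes.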
CC_E_CONV Convergence desired on the change in total energy between iterations. TYPE: INTEGER DEFAULT: 10 OPTIONS: $n$ $10^{-n}$ convergence criterion. RECOMMENDATION: None CC_FNO_THRESH Initialize the FNO truncation and sets the threshold to be used for both cutoffs (OCCT and POVO) TYPE: INTEGER DEFAULT: None OPTIONS: range 0000-10000 $abcd$ Corresponding to $ab.cd$% RECOMMENDATION: None CC_FNO_USEPOP Selection of the truncation scheme TYPE: INTEGER DEFAULT: 1 OCCT OPTIONS: 0 POVO RECOMMENDATION: None CC_FULLRESPONSE Fully relaxed properties (including orbital relaxation terms) will be computed. The variable CC_REF_PROP must also be set to TRUE. TYPE: LOGICAL DEFAULT: FALSE (no orbital response will be calculated) OPTIONS: FALSE, TRUE RECOMMENDATION: Not available for non-UHF/RHF references and for the methods that do not have analytic gradients (e.g., QCISD). CC_HESS_THRESH Minimum allowed value for the orbital Hessian. Smaller values are replaced by this constant. TYPE: DOUBLE DEFAULT: 102 Corresponding to 0.01 OPTIONS: $abcde$ Integer code is mapped to $abc\times 10^{-de}$ RECOMMENDATION: None CC_INCL_CORE_CORR Whether to include the correlation contribution from frozen core orbitals in non-iterative (2) corrections, such as OD(2) and CCSD(2). TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE FALSE RECOMMENDATION: Use the default unless no core-valence or core correlation is desired (e.g., for comparison with other methods or because the basis used cannot describe core correlation). CC_ITERATE_ON In active space calculations, use a “mixed” iteration procedure if the value is greater than 0. Then if the RMS orbital gradient is larger than the value of CC_THETA_GRAD_THRESH, micro-iterations will be performed to converge the occupied-virtual mixing angles for the current active space. The maximum number of space iterations is given by this option.
TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ Up to $n$ occupied-virtual iterations per overall cycle RECOMMENDATION: Can be useful for non-convergent active space calculations CC_ITERATE_OV In active space calculations, use a “mixed” iteration procedure if the value is greater than 0. Then, if the RMS orbital gradient is larger than the value of CC_THETA_GRAD_THRESH, micro-iterations will be performed to converge the occupied-virtual mixing angles for the current active space. The maximum number of such iterations is given by this option. TYPE: INTEGER DEFAULT: 0 No “mixed” iterations OPTIONS: $n$ Up to $n$ occupied-virtual iterations per overall cycle RECOMMENDATION: Can be useful for non-convergent active space calculations. CC_MAX_ITER Maximum number of iterations to optimize the coupled-cluster energy. TYPE: INTEGER DEFAULT: 200 OPTIONS: $n$ up to $n$ iterations to achieve convergence. RECOMMENDATION: None CC_MEMORY Specifies the maximum size, in MB, of the buffers for in-core storage of block-tensors in CCMAN and CCMAN2. TYPE: INTEGER DEFAULT: 50% of MEM_TOTAL. If MEM_TOTAL is not set, use 1.5 GB. A minimum of 192 MB is hard-coded. OPTIONS: $n$ Integer number of MB RECOMMENDATION: Larger values can give better I/O performance and are recommended for systems with large memory (add to your .qchemrc file). When running CCMAN2 exclusively on a node, CC_MEMORY should be set to 75–80% of the total available RAM. CC_MP2NO_GRAD If CC_MP2NO_GUESS is TRUE, what kind of one-particle density matrix is used to make the guess orbitals? TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE 1 PDM from MP2 gradient theory. FALSE 1 PDM expanded to 2${}^{\mathrm{nd}}$ order in perturbation theory. RECOMMENDATION: The two definitions give generally similar performance. CC_MP2NO_GUESS Will guess orbitals be natural orbitals of the MP2 wave function? Alternatively, it is possible to use an effective one-particle density matrix to define the natural orbitals. 
TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Use natural orbitals from an MP2 one-particle density matrix (see CC_MP2NO_GRAD). FALSE Use current molecular orbitals from SCF. RECOMMENDATION: None CC_ORBS_PER_BLOCK Specifies target (and maximum) size of blocks in orbital space. TYPE: INTEGER DEFAULT: 16 OPTIONS: $n$ Orbital block size of $n$ orbitals. RECOMMENDATION: None CC_OSFNO Activation of OSFNO. Available only for open-shell references. TYPE: LOGICAL DEFAULT: FALSE do not activate OPTIONS: TRUE activate RECOMMENDATION: Use for EOM-SF-CCSD calculations from open-shell references. Available in CCMAN2 only. CC_POL Specifies the approach for calculating the polarizability of the CCSD wave function. TYPE: INTEGER DEFAULT: 0 (CCSD polarizability will not be calculated) OPTIONS: 1 (analytic-derivative or response-theory mixed symmetric-asymmetric approach) 2 (analytic-derivative or response-theory asymmetric approach) 3 (expectation-value approach with right response intermediates) 4 (expectation-value approach with left response intermediates) RECOMMENDATION: CCSD polarizabilities are expensive since they require solving three/six (for static) or six/twelve (for dynamical) additional response equations. Do not request this property unless you need it. CC_PRECONV_FZ In active space methods, whether to pre-converge other wave function variables for a fixed initial guess of the active space. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 No pre-iterations before active space optimization begins. $n$ Maximum number of pre-iterations via this procedure. RECOMMENDATION: None CC_PRECONV_T2Z_EACH Whether to pre-converge the cluster amplitudes before each change of the orbitals in optimized orbital coupled-cluster methods. The maximum number of iterations in this pre-convergence procedure is given by the value of this parameter. TYPE: INTEGER DEFAULT: 0 (FALSE) OPTIONS: 0 No pre-convergence before orbital optimization. $n$ Up to $n$ iterations in this pre-convergence procedure. 
RECOMMENDATION: A very slow last-resort option for jobs that do not converge. CC_PRECONV_T2Z Whether to pre-converge the cluster amplitudes before beginning orbital optimization in optimized orbital cluster methods. TYPE: INTEGER DEFAULT: 0 (FALSE) 10 If CC_RESTART, CC_RESTART_NO_SCF or CC_MP2NO_GUESS are TRUE OPTIONS: 0 No pre-convergence before orbital optimization. $n$ Up to $n$ iterations in this pre-convergence procedure. RECOMMENDATION: Experiment with this option in cases of convergence failure. CC_PRINT Controls the output from the post-MP2 coupled-cluster module of Q-Chem TYPE: INTEGER DEFAULT: 1 OPTIONS: $0-7$ higher values can lead to deforestation… RECOMMENDATION: Increase if you need more output and don’t like trees CC_QCCD_THETA_SWITCH QCCD calculations switch from OD to QCCD when the rotation gradient is below this threshold [$10^{-n}$] TYPE: INTEGER DEFAULT: 2 $10^{-2}$ switchover OPTIONS: $n$ $10^{-n}$ switchover RECOMMENDATION: None CC_REF_PROP_TE Request for calculation of non-relaxed two-particle CCSD properties. The two-particle properties currently include $\langle S^{2}\rangle$. The one-particle properties will also be calculated, since the additional cost of the one-particle properties calculation is small compared to the cost of $\langle S^{2}\rangle$. The variable CC_REF_PROP must also be set to TRUE. TYPE: LOGICAL DEFAULT: FALSE (no two-particle properties will be calculated) OPTIONS: FALSE, TRUE RECOMMENDATION: The two-particle properties are computationally expensive, since they require calculation and use of the two-particle density matrix (the cost is approximately the same as the cost of an analytic gradient calculation). Do not request the two-particle properties unless you really need them. CC_REF_PROP Whether or not the non-relaxed (expectation value) or full response (including orbital relaxation terms) one-particle CCSD properties will be calculated. 
The properties currently include permanent dipole moment, the second moments $\langle X^{2}\rangle$, $\langle Y^{2}\rangle$, and $\langle Z^{2}\rangle$ of electron density, and the total $\langle R^{2}\rangle=\langle X^{2}\rangle+\langle Y^{2}\rangle+\langle Z^{2}\rangle$ (in atomic units). Incompatible with JOBTYPE=FORCE, OPT, FREQ. TYPE: LOGICAL DEFAULT: FALSE (no one-particle properties will be calculated) OPTIONS: FALSE, TRUE RECOMMENDATION: Additional equations need to be solved (lambda CCSD equations) for properties with the cost approximately the same as CCSD equations. Use the default if you do not need properties. The cost of the properties calculation itself is low. The CCSD one-particle density can be analyzed with NBO package by specifying NBO=TRUE, CC_REF_PROP=TRUE and JOBTYPE=FORCE. CC_RESET_THETA The reference MO coefficient matrix is reset every n iterations to help overcome problems associated with the theta metric as theta becomes large. TYPE: INTEGER DEFAULT: 15 OPTIONS: $n$ $n$ iterations between resetting orbital rotations to zero. RECOMMENDATION: None CC_RESTART_NO_SCF Should an optimized orbital coupled cluster calculation begin with optimized orbitals from a previous calculation? When TRUE, molecular orbitals are initially orthogonalized, and CC_PRECONV_T2Z and CC_CANONIZE are set to TRUE while other guess options are set to FALSE TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE/FALSE RECOMMENDATION: None CC_RESTART Allows an optimized orbital coupled cluster calculation to begin with an initial guess for the orbital transformation matrix U other than the unit vector. The scratch file from a previous run must be available for the U matrix to be read successfully. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE Use unit initial guess. TRUE Activates CC_PRECONV_T2Z, CC_CANONIZE, and turns off CC_MP2NO_GUESS RECOMMENDATION: Useful for restarting a job that did not converge, if files were saved. 
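A minimal sketch of a ground-state CCSD property request combining the keywords above (the basis set shown is an illustrative assumption):

```
$rem
   METHOD           ccsd
   BASIS            cc-pvdz   ! assumed basis; choose as appropriate
   CC_REF_PROP      TRUE      ! non-relaxed one-particle properties
   CC_FULLRESPONSE  TRUE      ! optional: include orbital-relaxation terms
$end
```

Add CC_REF_PROP_TE = TRUE as well if the (expensive) two-particle property $\langle S^{2}\rangle$ is needed.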
CC_RESTR_AMPL Controls the restriction on amplitudes if there are restricted orbitals TYPE: INTEGER DEFAULT: 1 OPTIONS: 0 All amplitudes are in the full space 1 Amplitudes are restricted, if there are restricted orbitals RECOMMENDATION: None CC_RESTR_TRIPLES Controls which space the triples correction is computed in TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Triples are computed in the full space 1 Triples are restricted to the active space RECOMMENDATION: None CC_REST_AMPL Forces the integrals, $T$, and $R$ amplitudes to be determined in the full space even though the CC_REST_OCC and CC_REST_VIR keywords are used. TYPE: LOGICAL DEFAULT: TRUE OPTIONS: FALSE Do apply restrictions TRUE Do not apply restrictions RECOMMENDATION: None CC_REST_OCC Sets the number of restricted occupied orbitals including active core occupied orbitals. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ Restrict $n$ energetically lowest occupied orbitals to correspond to the active core space. RECOMMENDATION: Example: cytosine with the molecular formula C${}_{4}$H${}_{5}$N${}_{3}$O includes one oxygen atom. To calculate O 1s core-excited states, $n$ has to be set to 1, because the 1s orbital of oxygen is the energetically lowest. To obtain the N 1s core excitations, the integer $n$ has to be set to 4, because the 1s orbital of the oxygen atom is included as well, since it is energetically below the three 1s orbitals of the nitrogen atoms. Accordingly, to simulate the C 1s spectrum of cytosine, $n$ must be set to 8. CC_REST_TRIPLES Restricts $R_{3}$ amplitudes to the active space, i.e., one electron should be removed from the active occupied orbital and one electron should be added to the active virtual orbital. TYPE: INTEGER DEFAULT: 1 OPTIONS: 1 Applies the restrictions RECOMMENDATION: None CC_REST_VIR Sets the number of restricted virtual orbitals including frozen virtual orbitals. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ Restrict $n$ virtual orbitals. 
RECOMMENDATION: None CC_SCALE_AMP If not 0, scales down the step for updating coupled-cluster amplitudes in cases of problematic convergence. TYPE: INTEGER DEFAULT: 0 no scaling OPTIONS: $abcd$ Integer code is mapped to $abcd\times 10^{-2}$, e.g., $90$ corresponds to 0.9 RECOMMENDATION: Use 0.9 or 0.8 for non-convergent coupled-cluster calculations. CC_SINGLE_PREC Precision selection for CCSD calculation. Available in CCMAN2 only. TYPE: INTEGER DEFAULT: 0 double-precision calculation OPTIONS: 1 single-precision calculation 2 single-precision calculation followed by double-precision clean-up iterations RECOMMENDATION: Do not set too tight convergence thresholds when using single precision CC_SP_DM Precision selection for CCSD and EOM-CCSD intermediates, density matrices, gradients, and $S^{2}$ TYPE: INTEGER DEFAULT: 0 double-precision calculation OPTIONS: 1 single-precision calculation RECOMMENDATION: NONE CC_SP_E_CONV Energy convergence criterion in single precision in CCSD calculations. TYPE: INTEGER DEFAULT: 5 OPTIONS: $n$ Corresponding to $10^{-n}$ convergence criterion RECOMMENDATION: Set to 6 to be consistent with the default double-precision threshold in a pure single-precision calculation. When used with the clean-up variant, it should be looser than the double-precision threshold so as not to introduce extra iterations. CC_SP_T_CONV Amplitude convergence threshold in single precision in CCSD calculations. TYPE: INTEGER DEFAULT: 3 OPTIONS: $n$ Corresponding to $10^{-n}$ convergence criterion RECOMMENDATION: Set to 4 to be consistent with the default double-precision threshold in a pure single-precision run. When used with the clean-up variant, it should be looser than the double-precision threshold so as not to introduce extra iterations. CC_STATE_TO_OPT Specifies which state to optimize. TYPE: INTEGER ARRAY DEFAULT: None OPTIONS: [$i$,$j$] optimize the $j$th state of the $i$th irrep. RECOMMENDATION: None CC_SYMMETRY Activates point-group symmetry in the ADC calculation. 
TYPE: LOGICAL DEFAULT: TRUE If the system possesses any point-group symmetry. OPTIONS: TRUE Employ point-group symmetry FALSE Do not use point-group symmetry RECOMMENDATION: None CC_THETA_CONV Convergence criterion on the RMS difference between successive sets of orbital rotation angles [$10^{-n}$]. TYPE: INTEGER DEFAULT: 5 Energies 6 Gradients OPTIONS: $n$ $10^{-n}$ convergence criterion. RECOMMENDATION: Use default CC_THETA_GRAD_CONV Convergence desired on the RMS gradient of the energy with respect to orbital rotation angles [$10^{-n}$]. TYPE: INTEGER DEFAULT: 7 Energies 8 Gradients OPTIONS: $n$ $10^{-n}$ convergence criterion. RECOMMENDATION: Use default CC_THETA_GRAD_THRESH RMS orbital gradient threshold [$10^{-n}$] above which “mixed iterations” are performed in active space calculations if CC_ITERATE_OV is TRUE. TYPE: INTEGER DEFAULT: 2 OPTIONS: $n$ $10^{-n}$ threshold. RECOMMENDATION: Can be made smaller if convergence difficulties are encountered. CC_THETA_STEPSIZE Scale factor for the orbital rotation step size. The optimal rotation steps should be approximately equal to the gradient vector. TYPE: INTEGER DEFAULT: $100$ Corresponding to 1.0 OPTIONS: $abcde$ Integer code is mapped to $abc\times 10^{-de}$ If the initial step is smaller than 0.5, the program will increase the step when gradients are smaller than the value of THETA_GRAD_THRESH, up to a limit of 0.5. RECOMMENDATION: Try a smaller value in cases of poor convergence and very large orbital gradients. For example, a value of 01001 translates to 0.1 CC_TRANS_PROP Whether or not the transition dipole moment (in atomic units) and oscillator strength for the EOM-CCSD target states will be calculated. By default, the transition dipole moment and angular momentum matrix elements are calculated between the CCSD reference and the EOM-CCSD target states. 
In order to calculate transition dipole moment and angular momentum matrix elements between one EOM-CCSD state and a set of other EOM-CCSD states, CC_STATE_TO_OPT must be specified for that state. TYPE: INTEGER DEFAULT: 0 (no transition properties will be calculated) OPTIONS: 1 (calculate transition properties between all computed EOM states and the reference state) 2 (calculate transition properties between all pairs of EOM states) RECOMMENDATION: Additional equations (for the left EOM-CCSD eigenvectors, plus the lambda CCSD equations when transition properties between the CCSD reference and EOM-CCSD target states are requested) need to be solved for transition properties, approximately doubling the computational cost. The cost of the transition properties calculation itself is low. CC_T_CONV Convergence criterion on the RMS difference between successive sets of coupled-cluster doubles amplitudes [$10^{-n}$] TYPE: INTEGER DEFAULT: 8 energies 10 gradients OPTIONS: $n$ $10^{-n}$ convergence criterion. RECOMMENDATION: Use default CC_Z_CONV Convergence criterion on the RMS difference between successive doubles $Z$-vector amplitudes [$10^{-n}$]. TYPE: INTEGER DEFAULT: 8 Energies 10 Gradients OPTIONS: $n$ $10^{-n}$ convergence criterion. RECOMMENDATION: Use default CDFTCI_PRINT Controls level of output from CDFT-CI procedure to Q-Chem output file. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Only print energies and coefficients of CDFT-CI final states 1 Level 0 plus CDFT-CI overlap, Hamiltonian, and population matrices 2 Level 1 plus eigenvectors and eigenvalues of the CDFT-CI population matrix 3 Level 2 plus promolecule orbital coefficients and energies RECOMMENDATION: Level 3 is primarily for program debugging; levels 1 and 2 may be useful for analyzing the coupling elements CDFTCI_RESTART To be used in conjunction with CDFTCI_STOP, this variable causes CDFT-CI to read already-converged states from disk and begin SCF convergence on later states. 
Note that the same $cdft section must be used for the stopped calculation and the restarted calculation. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ Start calculations on state $n+1$ RECOMMENDATION: Use this setting in conjunction with CDFTCI_STOP. CDFTCI_SKIP_PROMOLECULES Skips promolecule calculations and allows fractional charge and spin constraints to be specified directly. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Standard CDFT-CI calculation is performed. TRUE Use the given charge/spin constraints directly, with no promolecule calculations. RECOMMENDATION: Setting to TRUE can be useful for scanning over constraint values. CDFTCI_STOP The CDFT-CI procedure involves performing independent SCF calculations on distinct constrained states. It sometimes occurs that the same convergence parameters are not successful for all of the states of interest, so that a CDFT-CI calculation might converge one of these diabatic states but not the next. This variable allows a user to stop a CDFT-CI calculation after a certain number of states have been converged, with the ability to restart later on the next state, with different convergence options. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ Stop after converging state $n$ (the first state is state $1$) $0$ Do not stop early RECOMMENDATION: Use this setting if some diabatic states converge but others do not. CDFTCI_SVD_THRESH By default, a symmetric orthogonalization is performed on the CDFT-CI matrix before diagonalization. If the CDFT-CI overlap matrix is nearly singular (i.e., some of the diabatic states are nearly degenerate), then this orthogonalization can lead to numerical instability. When computing $\mathbf{S}^{-1/2}$, eigenvalues smaller than $10^{-\mathrm{CDFTCI\_SVD\_THRESH}}$ are discarded. TYPE: INTEGER DEFAULT: 4 OPTIONS: $n$ for a threshold of $10^{-n}$. RECOMMENDATION: Can be decreased if numerical instabilities are encountered in the final diagonalization. 
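The stop/restart workflow described above can be sketched as a pair of jobs; this is an illustrative assumption of how the two keywords pair up (the SCF_ALGORITHM change stands in for the “different convergence options”, and the shared $cdft section is omitted here but must be identical in both jobs):

```
! Job 1: converge diabatic states 1 and 2, then stop
$rem
   CDFTCI       TRUE
   CDFTCI_STOP  2
$end

! Job 2: read states 1-2 from disk, resume SCF on state 3
$rem
   CDFTCI          TRUE
   CDFTCI_RESTART  2      ! start calculations on state 3
   SCF_ALGORITHM   GDM    ! assumed alternative convergence option
$end
```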
CDFTCI Initiates a constrained DFT-configuration interaction calculation TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Perform a CDFT-CI Calculation FALSE No CDFT-CI RECOMMENDATION: Set to TRUE if a CDFT-CI calculation is desired. CDFT_BECKE_POP Whether the calculation should print the Becke atomic charges at convergence TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE Print Populations FALSE Do not print them RECOMMENDATION: Use the default. Note that the Mulliken populations printed at the end of an SCF run will not typically add up to the prescribed constraint value. Only the Becke populations are guaranteed to satisfy the user-specified constraints. CDFT_CRASHONFAIL Whether the calculation should crash or not if the constraint iterations do not converge. TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE Crash if constraint iterations do not converge. FALSE Do not crash. RECOMMENDATION: Use the default. CDFT_LAMBDA_MODE Allows CDFT potentials to be specified directly, instead of being determined as Lagrange multipliers. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Standard CDFT calculations are used. TRUE Instead of specifying target charge and spin constraints, use the values from the input deck as the value of the Becke weight potential RECOMMENDATION: Should usually be set to FALSE. Setting to TRUE can be useful to scan over different strengths of charge or spin localization, as convergence properties are improved compared to regular CDFT(-CI) calculations. CDFT_POP Sets the charge partitioning scheme for cDFT in SAPT/cDFT TYPE: STRING DEFAULT: FBH OPTIONS: FBH Fragment-Based Hirshfeld partitioning BECKE Atomic Becke partitioning RECOMMENDATION: None CDFT_POSTDIIS Controls whether the constraint is enforced after DIIS extrapolation. 
TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE Enforce constraint after DIIS FALSE Do not enforce constraint after DIIS RECOMMENDATION: Use the default unless convergence problems arise, in which case it may be beneficial to experiment with setting CDFT_POSTDIIS to FALSE. With this option set to TRUE, energies should be variational after the first iteration. CDFT_PREDIIS Controls whether the constraint is enforced before DIIS extrapolation. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Enforce constraint before DIIS FALSE Do not enforce constraint before DIIS RECOMMENDATION: Use the default unless convergence problems arise, in which case it may be beneficial to experiment with setting CDFT_PREDIIS to TRUE. Note that it is possible to enforce the constraint both before and after DIIS by setting both CDFT_PREDIIS and CDFT_POSTDIIS to TRUE. CDFT_THRESH Threshold that determines how tightly the constraint must be satisfied. TYPE: INTEGER DEFAULT: 5 OPTIONS: N Constraint is satisfied to within $10^{-N}$. RECOMMENDATION: Use the default unless problems occur. CDFT Initiates a constrained DFT calculation TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Perform a Constrained DFT Calculation FALSE No Density Constraint RECOMMENDATION: Set to TRUE if a Constrained DFT calculation is desired. CD_ALGORITHM Determines the algorithm for MP2 integral transformations. TYPE: STRING DEFAULT: Program determined. OPTIONS: DIRECT Uses fully direct algorithm (energies only). SEMI_DIRECT Uses disk-based semi-direct algorithm. LOCAL_OCCUPIED Alternative energy algorithm (see 6.4.1). RECOMMENDATION: Semi-direct is usually most efficient, and will normally be chosen by default. CFMM_ORDER Controls the order of the multipole expansions in CFMM calculation. TYPE: INTEGER DEFAULT: 15 For single point SCF accuracy 25 For tighter convergence (optimizations) OPTIONS: $n$ Use multipole expansions of order $n$ RECOMMENDATION: Use the default. 
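A minimal constrained-DFT sketch tying the CDFT keywords above together (the functional, basis, and the particular $cdft constraint are illustrative assumptions; consult the CDFT documentation for the exact $cdft syntax):

```
$rem
   METHOD       b3lyp    ! assumed functional
   BASIS        6-31g*   ! assumed basis
   CDFT         TRUE
   CDFT_THRESH  5        ! constraint satisfied to within 1e-5 (default)
$end

$cdft
   1.0
   1  1  7              ! hypothetical: constrain +1 charge on atoms 1-7
$end
```

Per the CDFT_BECKE_POP entry, only the Becke populations printed at convergence are guaranteed to satisfy this constraint; the Mulliken populations generally will not.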
CHARGE_CHARGE_REPULSION The repulsive Coulomb interaction parameter for YinYang atoms. TYPE: INTEGER DEFAULT: 550 OPTIONS: $n$ Use Q = $n\times 10^{-3}$ RECOMMENDATION: The repulsive Coulomb potential maintains bond lengths involving YinYang atoms with the potential $V(r)=Q/r$. The default is parameterized for carbon atoms. CHELPG_DX Sets the rectangular grid spacing for the traditional Cartesian ChElPG grid or the spacing between concentric Lebedev shells (when the variables CHELPG_HA and CHELPG_H are specified as well). TYPE: INTEGER DEFAULT: 6 OPTIONS: $N$ Corresponding to a grid space of $N/20$, in Å. RECOMMENDATION: Use the default, which corresponds to the “dense grid” of Breneman and Wiberg [Breneman:1990], unless the cost is prohibitive, in which case a larger value can be selected. Note that this default value is set with the Cartesian grid in mind and not the Lebedev grid. In the Lebedev case, a larger value can typically be used. CHELPG_HA Sets the Lebedev grid to use for non-hydrogen atoms. TYPE: INTEGER DEFAULT: NONE OPTIONS: $N$ Corresponding to a number of points in a Lebedev grid (see Section 5.5.1). RECOMMENDATION: None. CHELPG_HEAD Sets the “head space” [Breneman:1990] (radial extent) of the ChElPG grid. TYPE: INTEGER DEFAULT: 30 OPTIONS: $N$ Corresponding to a head space of $N/10$, in Å. RECOMMENDATION: Use the default, which is the value recommended by Breneman and Wiberg [Breneman:1990]. CHELPG_H Sets the Lebedev grid to use for hydrogen atoms. TYPE: INTEGER DEFAULT: NONE OPTIONS: $N$ Corresponding to a number of points in a Lebedev grid. RECOMMENDATION: CHELPG_H must always be less than or equal to CHELPG_HA. If it is greater, it will automatically be set to the value of CHELPG_HA. CHELPG Controls the calculation of CHELPG charges. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE Do not calculate ChElPG charges. TRUE Compute ChElPG charges. RECOMMENDATION: Set to TRUE if desired. 
For large molecules, there is some overhead associated with computing ChElPG charges, especially if the number of grid points is large. CHILD_MP_ORDERS The multipole orders included in the prepared FERFs. The last digit specifies how many multipoles to compute, and the digits in the front specify the multipole orders: 2: dipole (D); 3: quadrupole (Q); 4: octopole (O). Multipole order 1 is reserved for monopole FERFs, which can be used to separate the effect of orbital contraction [Levine:2017a]. TYPE: INTEGER DEFAULT: 0 OPTIONS: 21 D 232 DQ 2343 DQO RECOMMENDATION: Use 232 (DQ) when FERF is needed. CHILD_MP Compute FERFs for fragments and use them as the basis for SCFMI calculations. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Do not compute FERFs (use the full AO span of each fragment). TRUE Compute fragment FERFs. RECOMMENDATION: Use FERFs to compute polarization energy when large basis sets are used. In an “EDA2” calculation, this $rem variable is set based on the given option automatically. CHOLESKY_TOL Tolerance of Cholesky decomposition of two-electron integrals TYPE: INTEGER DEFAULT: 3 OPTIONS: $n$ Corresponds to a tolerance of $10^{-n}$ RECOMMENDATION: 2 - qualitative calculations, 3 - appropriate for most cases, 4 - quantitative (error in total energy typically less than 1 $\mu$hartree) CISTR_PRINT Controls level of output. TYPE: LOGICAL DEFAULT: FALSE Minimal output. OPTIONS: TRUE Increase output level. RECOMMENDATION: None CIS_AMPL_ANAL Perform additional analysis of CIS and TDDFT excitation amplitudes, including generation of natural transition orbitals, excited-state multipole moments, and Mulliken analysis of the excited state densities and particle/hole density matrices. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Perform additional amplitude analysis. FALSE Do not perform additional analysis. RECOMMENDATION: None CIS_AMPL_PRINT Sets the threshold for printing CIS and TDDFT excitation amplitudes. 
TYPE: INTEGER DEFAULT: 15 OPTIONS: $n$ Print if $|x_{ia}|$ or $|y_{ia}|$ is larger than $0.01\times n$. RECOMMENDATION: Use the default unless you want to see more amplitudes. CIS_CONVERGENCE CIS is considered converged when the error is less than $10^{-\mathrm{CIS\_CONVERGENCE}}$ TYPE: INTEGER DEFAULT: 6 CIS convergence threshold 10${}^{-6}$ OPTIONS: $n$ Corresponding to $10^{-n}$ RECOMMENDATION: None CIS_DER_NUMSTATE Determines the number of states among which non-adiabatic couplings are calculated. These states must be specified in the $derivative_coupling section. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not calculate non-adiabatic couplings. $n$ Calculate $n(n-1)/2$ pairs of non-adiabatic couplings. RECOMMENDATION: None. CIS_DIABATH_DECOMPOSE Decide whether or not to decompose the diabatic coupling into Coulomb, exchange, and one-electron terms. TYPE: LOGICAL DEFAULT: FALSE Do not decompose the diabatic coupling. OPTIONS: TRUE RECOMMENDATION: These decompositions are most meaningful for electronic excitation transfer processes. Currently available only for CIS, not for TDDFT diabatic states. CIS_DYNAMIC_MEM Controls whether to use static or dynamic memory in CIS and TDDFT calculations. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE Partly use static memory TRUE Fully use dynamic memory RECOMMENDATION: The default control requires static memory (MEM_STATIC) to hold a temporary array whose minimum size is $OV\times\mbox{{\small CIS\_N\_ROOTS}}$. For a large calculation, one has to specify a large value for MEM_STATIC, which is not recommended (see Chapter 2). Therefore, it is recommended to use dynamic memory for large calculations. CIS_GUESS_DISK_TYPE Determines the type of guesses to be read from disk TYPE: INTEGER DEFAULT: Nil OPTIONS: 0 Read triplets only 1 Read triplets and singlets 2 Read singlets only RECOMMENDATION: Must be specified if CIS_GUESS_DISK is TRUE. CIS_GUESS_DISK Read the CIS guess from disk (previous calculation). 
TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE Create a new guess. TRUE Read the guess from disk. RECOMMENDATION: Requires a guess from a previous calculation. CIS_MOMENTS Controls calculation of excited-state (CIS or TDDFT) multipole moments. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE Do not calculate excited-state moments. TRUE Calculate moments for each excited state. RECOMMENDATION: Set to TRUE if excited-state moments are desired. (This is a trivial additional calculation.) The MULTIPOLE_ORDER variable controls how many multipole moments are printed. CIS_MULLIKEN Controls Mulliken and Löwdin population analyses for excited-state particle and hole density matrices. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE Do not perform particle/hole population analysis. TRUE Perform both Mulliken and Löwdin analysis of the particle and hole density matrices for each excited state. RECOMMENDATION: Set to TRUE if desired. This represents a trivial additional calculation. CIS_N_ROOTS Sets the number of excited state roots to find TYPE: INTEGER DEFAULT: 0 Do not look for any excited states OPTIONS: $n$ $n>0$ Looks for $n$ excited states RECOMMENDATION: None CIS_RELAXED_DENSITY Use the relaxed CIS density for attachment/detachment density analysis as well as for the general excited-state analysis of Section 10.2.6. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE Do not use the relaxed CIS density in analysis. TRUE Use the relaxed CIS density in analysis. RECOMMENDATION: None CIS_S2_THRESH Determines whether a state is a singlet or triplet in unrestricted calculations. TYPE: INTEGER DEFAULT: 120 OPTIONS: $n$ Sets the $\langle\hat{S}^{2}\rangle$ threshold to $n/100$ RECOMMENDATION: For the default case, states with $\langle\hat{S}^{2}\rangle>1.2$ are treated as triplet states and other states are treated as singlets. 
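A minimal excited-state sketch combining the CIS keywords above (the method and basis are illustrative assumptions):

```
$rem
   EXCHANGE      hf        ! assumed: a plain CIS calculation
   BASIS         6-31+g*   ! assumed basis
   CIS_N_ROOTS   4         ! solve for four excited states
   CIS_MOMENTS   TRUE      ! excited-state multipole moments
   CIS_MULLIKEN  TRUE      ! particle/hole population analysis
$end
```

Both analysis keywords add only trivial cost on top of the excited-state solve itself, per the entries above.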
CIS_SINGLETS Solve for singlet excited states (ignored for spin unrestricted systems) TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE Solve for singlet states FALSE Do not solve for singlet states. RECOMMENDATION: None CIS_STATE_DERIV Sets CIS state for excited state optimizations and vibrational analysis. TYPE: INTEGER DEFAULT: 0 Does not select any of the excited states. OPTIONS: $n$ Select the $n$th state. RECOMMENDATION: Check to see that the states do not change order during an optimization, due to state crossings. CIS_TRIPLETS Solve for triplet excited states (ignored for spin unrestricted systems) TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE Solve for triplet states FALSE Do not solve for triplet states. RECOMMENDATION: None CM5 Controls running of CM5 population analysis. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Calculate CM5 populations. FALSE Do not calculate CM5 populations. RECOMMENDATION: None COMBINE_K Controls separate or combined builds for short-range and long-range K TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE (or 0) Build short-range and long-range K separately (twice as expensive as a global hybrid) TRUE (or 1) Build short-range and long-range K together ($\approx$ as expensive as a global hybrid) RECOMMENDATION: Most pre-defined range-separated hybrid functionals in Q-Chem use this feature by default. However, if a user-specified RSH is desired, it is necessary to manually turn this feature on. COMPLEX_CCMAN Requests complex-scaled or CAP-augmented CC/EOM calculations. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Engage complex CC/EOM code. RECOMMENDATION: Not available in CCMAN. Need to specify CAP strength or complex-scaling parameter in $complex_ccman section. COMPLEX_MIX Mix a certain percentage of the real part of the HOMO to the imaginary part of the LUMO. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0–100 The mix angle = $\pi\cdot$COMPLEX_MIX/100. RECOMMENDATION: It may help find the stable complex solution (similar idea as SCF_GUESS_MIX). 
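An excited-state geometry optimization combining CIS_N_ROOTS, CIS_STATE_DERIV, and the singlet/triplet selectors can be sketched as follows (method and basis are illustrative assumptions):

```
$rem
   JOBTYPE          opt
   EXCHANGE         hf       ! assumed method
   BASIS            6-31g*   ! assumed basis
   CIS_N_ROOTS      3
   CIS_STATE_DERIV  1        ! optimize the lowest excited state
   CIS_TRIPLETS     FALSE    ! restrict the calculation to singlets
$end
```

As the CIS_STATE_DERIV entry warns, verify that the states do not change order during the optimization due to state crossings.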
COMPLEX Run an SCF calculation with complex MOs using GEN_SCFMAN. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE Use complex orbitals. FALSE Use real orbitals. RECOMMENDATION: Set to TRUE if desired. CORE_CHARACTER Selects how the core orbitals are determined in the frozen-core approximation. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Use energy-based definition. 1-4 Use Mulliken-based definition (see Table 6.1 for details). RECOMMENDATION: Use the default, unless performing calculations on molecules with heavy elements. CORE_IONIZE Indicates how orbitals are specified for reduced excitation spaces. TYPE: INTEGER DEFAULT: 1 OPTIONS: 1 all valence orbitals are listed in the $solute section 2 only hole(s) are specified; all other occupations are the same as in the ground state RECOMMENDATION: For MOM + TDDFT this specifies the input form of the $solute section. If set to 1, all occupied orbitals must be specified; if set to 2, only the hole(s) to be ionized need be specified. CORRELATION Specifies the correlation level of theory handled by CCMAN/CCMAN2. TYPE: STRING DEFAULT: None No Correlation OPTIONS: CCMP2 Regular MP2 handled by CCMAN/CCMAN2 MP3 CCMAN and CCMAN2 MP4SDQ CCMAN MP4 CCMAN CCD CCMAN and CCMAN2 CCD(2) CCMAN CCSD CCMAN and CCMAN2 CCSD(T) CCMAN and CCMAN2 CCSD(2) CCMAN CCSD(fT) CCMAN and CCMAN2 CCSD(dT) CCMAN CCVB-SD CCMAN2 QCISD CCMAN and CCMAN2 QCISD(T) CCMAN and CCMAN2 OD CCMAN OD(T) CCMAN OD(2) CCMAN VOD CCMAN VOD(2) CCMAN QCCD CCMAN QCCD(T) CCMAN QCCD(2) CCMAN VQCCD CCMAN VQCCD(T) CCMAN VQCCD(2) CCMAN RECOMMENDATION: Consult the literature for guidance. CPSCF_NSEG Controls the number of segments used to calculate the CPSCF equations. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not solve the CPSCF equations in segments. $n$ User-defined. Use $n$ segments when solving the CPSCF equations. RECOMMENDATION: Use the default. 
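A minimal sketch of selecting a correlation level via the CORRELATION keyword (the basis set and frozen-core request are illustrative assumptions, not requirements of this keyword):

```
$rem
   CORRELATION    ccsd(t)   ! handled by CCMAN and CCMAN2 per the table above
   BASIS          cc-pvtz   ! assumed basis
   N_FROZEN_CORE  fc        ! assumed: freeze core orbitals
$end
```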
CUBEFILE_STATE Determines which excited state is used to generate cube files TYPE: INTEGER DEFAULT: None OPTIONS: $n$ Generate cube files for the $n$th excited state RECOMMENDATION: None CUDA_RI-MP2 Enables GPU implementation of RI-MP2 TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE GPU-enabled MGEMM off TRUE GPU-enabled MGEMM on RECOMMENDATION: Necessary to set to 1 in order to run GPU-enabled RI-MP2 CUTOCC Specifies occupied orbital cutoff. TYPE: INTEGER DEFAULT: 50 OPTIONS: 0-200 CUTOFF = CUTOCC/100 RECOMMENDATION: None CUTVIR Specifies virtual orbital cutoff. TYPE: INTEGER DEFAULT: 0 No truncation OPTIONS: 0-100 CUTOFF = CUTVIR/100 RECOMMENDATION: None CVS_EOM_PRECONV_SINGLES When not zero, singly excited vectors are converged prior to a full excited states calculation (CVS states only). Sets the maximum number of iterations for pre-converging procedure. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 do not pre-converge 1 pre-converge singles RECOMMENDATION: Sometimes helps with problematic convergence. CVS_EOM_SHIFT Specifies energy shift in CVS-EOM calculations. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ corresponds to $n\cdot 10^{-3}$ hartree shift (i.e., 11000 = 11 hartree); solve for eigenstates around this value. RECOMMENDATION: Improves the stability of the calculations. DEA_SINGLETS Sets the number of singlet DEA roots to find. Valid only for closed-shell references. TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any singlet DEA states. OPTIONS: $[i,j,k\ldots]$ Find $i$ DEA singlet states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None DEA_STATES Sets the number of DEA roots to find. For closed-shell reference, defaults into DEA_SINGLETS. For open-shell references, specifies all low-lying states. TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any DEA states. OPTIONS: $[i,j,k\ldots]$ Find $i$ DIP states in the first irrep, $j$ states in the second irrep etc. 
RECOMMENDATION: None DEA_TRIPLETS Sets the number of triplet DEA roots to find. Valid only for closed-shell references. TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any DEA triplet states. OPTIONS: $[i,j,k\ldots]$ Find $i$ DEA triplet states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None DELTA_GRADIENT_SCALE Scales the gradient of $\Delta$ by N/100, which can be useful for cases with troublesome convergence by reducing the step size. TYPE: INTEGER DEFAULT: 100 OPTIONS: N RECOMMENDATION: Use default. For problematic cases 50, 25, 10 or even 1 could be useful. DEUTERATE Requests that all hydrogen atoms be replaced with deuterium. TYPE: LOGICAL DEFAULT: FALSE Do not replace hydrogens. OPTIONS: TRUE Replace hydrogens with deuterium. RECOMMENDATION: Replacing hydrogen atoms reduces the fastest vibrational frequencies by a factor of 1.4, which allows for a larger fictitious mass and time step in ELMD calculations. There is no reason to replace hydrogens in BOMD calculations. DFPT_EXCHANGE Specifies the secondary functional in a HFPC/DFPC calculation. TYPE: STRING DEFAULT: None OPTIONS: None RECOMMENDATION: See reference for recommended basis set, functional, and grid pairings. DFPT_XC_GRID Specifies the secondary grid in a HFPC/DFPC calculation. TYPE: STRING DEFAULT: None OPTIONS: None RECOMMENDATION: See reference for recommended basis set, functional, and grid pairings. DFTVDW_ALPHA1 Parameter in XDM calculation with higher-order terms TYPE: INTEGER DEFAULT: 83 OPTIONS: 10-1000 RECOMMENDATION: None DFTVDW_ALPHA2 Parameter in XDM calculation with higher-order terms. TYPE: INTEGER DEFAULT: 155 OPTIONS: 10-1000 RECOMMENDATION: None DFTVDW_JOBNUMBER Basic vdW job control TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not apply the XDM scheme. 1 Add vdW as energy/gradient correction to SCF. 2 Add vdW as a DFT functional and do full SCF (this option only works with XDM6). 
RECOMMENDATION: None DFTVDW_KAI Damping factor $k$ for $C_{6}$-only damping function TYPE: INTEGER DEFAULT: 800 OPTIONS: 10–1000 RECOMMENDATION: None DFTVDW_METHOD Choose the damping function used in XDM TYPE: INTEGER DEFAULT: 1 OPTIONS: 1 Use Becke’s damping function including $C_{6}$ term only. 2 Use Becke’s damping function with higher-order ($C_{8}$ and $C_{10}$) terms. RECOMMENDATION: None DFTVDW_MOL1NATOMS The number of atoms in the first monomer in dimer calculation TYPE: INTEGER DEFAULT: 0 OPTIONS: 0–$N_{\rm atoms}$ RECOMMENDATION: None DFTVDW_PRINT Printing control for VDW code TYPE: INTEGER DEFAULT: 1 OPTIONS: 0 No printing. 1 Minimum printing (default) 2 Debug printing RECOMMENDATION: None DFTVDW_USE_ELE_DRV Specify whether to add the gradient correction to the XDM energy. Only valid with Becke’s $C_{6}$ damping function using the interpolated BR89 model. TYPE: LOGICAL DEFAULT: 1 OPTIONS: 1 Use density correction when applicable. 0 Do not use this correction (for debugging purposes). RECOMMENDATION: None DFT_C Controls whether the DFT-C empirical BSSE correction should be added. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE (or 0) Do not apply the DFT-C correction TRUE (or 1) Apply the DFT-C correction RECOMMENDATION: NONE DFT_D3_3BODY Controls whether the three-body interaction in Grimme’s DFT-D3 method should be applied (see Eq. (14) in Ref. [Grimme:2010]). TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE (or 0) Do not apply the three-body interaction term TRUE Apply the three-body interaction term RECOMMENDATION: NONE DFT_D3_A1 The nonlinear parameter $\alpha_{1}$ in Eqs. (5.27), (5.28), (5.29), and (5.30). Used in DFT-D3(BJ), DFT-D3(CSO), DFT-D3M(0), DFT-D3M(BJ), and DFT-D3(op). TYPE: INTEGER DEFAULT: 100000 OPTIONS: $n$ Corresponding to $\alpha_{1}=n/100000$. RECOMMENDATION: NONE DFT_D3_A2 The nonlinear parameter $\alpha_{2}$ in Eqs. (5.27) and (5.30). Used in DFT-D3(BJ), DFT-D3M(BJ), and DFT-D3(op). 
TYPE: INTEGER DEFAULT: 100000 OPTIONS: $n$ Corresponding to $\alpha_{2}=n/100000$. RECOMMENDATION: NONE DFT_D3_POWER The nonlinear parameter $\beta_{6}$ in Eq. (5.30). Used in DFT-D3(op). Must be greater than or equal to 6 to avoid divergence. TYPE: INTEGER DEFAULT: 600000 OPTIONS: $n$ Corresponding to $\beta_{6}=n/100000$. RECOMMENDATION: NONE DFT_D3_RS6 The nonlinear parameter $s_{r,6}$ in Eqs. (5.26) and Eq. (5.29). Used in DFT-D3(0) and DFT-D3M(0). TYPE: INTEGER DEFAULT: 100000 OPTIONS: $n$ Corresponding to $s_{r,6}=n/100000$. RECOMMENDATION: NONE DFT_D3_RS8 The nonlinear parameter $s_{r,8}$ in Eqs. (5.26) and Eq. (5.29). Used in DFT-D3(0) and DFT-D3M(0). TYPE: INTEGER DEFAULT: 100000 OPTIONS: $n$ Corresponding to $s_{r,8}=n/100000$. RECOMMENDATION: NONE DFT_D3_S6 The linear parameter $s_{6}$ in eq. (5.25). Used in all forms of DFT-D3. TYPE: INTEGER DEFAULT: 100000 OPTIONS: $n$ Corresponding to $s_{6}=n/100000$. RECOMMENDATION: NONE DFT_D3_S8 The linear parameter $s_{8}$ in Eq. (5.25). Used in DFT-D3(0), DFT-D3(BJ), DFT-D3M(0), DFT-D3M(BJ), and DFT-D3(op). TYPE: INTEGER DEFAULT: 100000 OPTIONS: $n$ Corresponding to $s_{8}=n/100000$. RECOMMENDATION: NONE DFT_D_A Controls the strength of dispersion corrections in the Chai–Head-Gordon DFT-D scheme, Eq. (5.24). TYPE: INTEGER DEFAULT: 600 OPTIONS: $n$ Corresponding to $a=n/100$. RECOMMENDATION: Use the default. DFT_D Controls the empirical dispersion correction to be added to a DFT calculation. 
TYPE: LOGICAL DEFAULT: None OPTIONS: FALSE (or 0) Do not apply the DFT-D2, DFT-CHG, or DFT-D3 scheme EMPIRICAL_GRIMME DFT-D2 dispersion correction from Grimme [Grimme:2006b] EMPIRICAL_CHG DFT-CHG dispersion correction from Chai and Head-Gordon [Chai:2008b] EMPIRICAL_GRIMME3 DFT-D3(0) dispersion correction from Grimme (deprecated as of Q-Chem 5.0) D3_ZERO DFT-D3(0) dispersion correction from Grimme et al. [Grimme:2010] D3_BJ DFT-D3(BJ) dispersion correction from Grimme et al. [Grimme:2011a] D3_CSO DFT-D3(CSO) dispersion correction from Schröder et al. [Schroder:2015] D3_ZEROM DFT-D3M(0) dispersion correction from Smith et al. [Smith:2016] D3_BJM DFT-D3M(BJ) dispersion correction from Smith et al. [Smith:2016] D3_OP DFT-D3(op) dispersion correction from Witte et al. [Witte:2017b] D3 Automatically select the "best" available D3 dispersion correction RECOMMENDATION: Use the D3 option, which selects the empirical potential based on the density functional specified by the user. DH Controls the application of DH-DFT scheme. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE (or 0) Do not apply the DH-DFT scheme TRUE (or 1) Apply DH-DFT scheme RECOMMENDATION: NONE DIIS_ERR_RMS Changes the DIIS convergence metric from the maximum to the RMS error. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE, FALSE RECOMMENDATION: Use the default; the maximum error provides a more reliable criterion. DIIS_PRINT Controls the output from DIIS SCF optimization. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Minimal print out. 1 Chosen method and DIIS coefficients and solutions. 2 Level 1 plus changes in multipole moments. 3 Level 2 plus Multipole moments. 4 Level 3 plus extrapolated Fock matrices. RECOMMENDATION: Use the default DIIS_SEPARATE_ERRVEC Controls optimization of the DIIS error vector in unrestricted calculations. TYPE: LOGICAL DEFAULT: FALSE Use a combined $\alpha$ and $\beta$ error vector. OPTIONS: FALSE Use a combined $\alpha$ and $\beta$ error vector. TRUE Use separate error vectors for the $\alpha$ and $\beta$ spaces. 
RECOMMENDATION: When using DIIS in Q-Chem a convenient optimization for unrestricted calculations is to sum the $\alpha$ and $\beta$ error vectors into a single vector which is used for extrapolation. This is often extremely effective, but in some pathological systems with symmetry breaking, can lead to false solutions being detected, where the $\alpha$ and $\beta$ components of the error vector cancel exactly giving a zero DIIS error. While an extremely uncommon occurrence, if it is suspected, set DIIS_SEPARATE_ERRVEC = TRUE to check. DIIS_SUBSPACE_SIZE Controls the size of the DIIS and/or RCA subspace during the SCF. TYPE: INTEGER DEFAULT: 15 OPTIONS: User-defined RECOMMENDATION: None DIP_SINGLETS Sets the number of singlet DIP roots to find. Valid only for closed-shell references. TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any singlet DIP states. OPTIONS: $[i,j,k\ldots]$ Find $i$ DIP singlet states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None DIP_STATES Sets the number of DIP roots to find. For closed-shell reference, defaults into DIP_SINGLETS. For open-shell references, specifies all low-lying states. TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any DIP states. OPTIONS: $[i,j,k\ldots]$ Find $i$ DIP states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None DIP_TRIPLETS Sets the number of triplet DIP roots to find. Valid only for closed-shell references. TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any DIP triplet states. OPTIONS: $[i,j,k\ldots]$ Find $i$ DIP triplet states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None DIRECT_SCF Controls direct SCF. TYPE: LOGICAL DEFAULT: Determined by program. OPTIONS: TRUE Forces direct SCF. FALSE Do not use direct SCF. RECOMMENDATION: Use the default; direct SCF switches off in-core integrals. DISP_FREE_C Specify the employed “dispersion-free" correlation functional. 
TYPE: STRING DEFAULT: NONE OPTIONS: Correlation functionals supported by Q-Chem. RECOMMENDATION: Put the appropriate correlation functional paired with the chosen exchange functional (e.g. put PBE if DISP_FREE_X is revPBE); put NONE if DISP_FREE_X is set to an exchange-correlation functional. DISP_FREE_X Specify the employed “dispersion-free” exchange functional. TYPE: STRING DEFAULT: HF OPTIONS: Exchange functionals (e.g. revPBE) or exchange-correlation functionals (e.g. B3LYP) supported by Q-Chem. RECOMMENDATION: HF is recommended for hybrid (primary) functionals (e.g. $\omega$B97X-V) and revPBE for semi-local ones (e.g. B97M-V). Other reasonable options (e.g. B3LYP for B3LYP-D3) can also be applied. DOMODSANO Specifies whether to do modified Sano or the original one TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Does original Sano procedure (similar to GVBMAN). 1 Does an improved Sano procedure that’s more localized. 2 Does another variation of Sano. RECOMMENDATION: 1 is always better DORAMAN Controls calculation of Raman intensities. Requires JOBTYPE to be set to FREQ TYPE: LOGICAL DEFAULT: FALSE OPTIONS: FALSE Do not calculate Raman intensities. TRUE Do calculate Raman intensities. RECOMMENDATION: None DSF_STATES Sets the number of doubly spin-flipped target states roots to find. TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any DSF states. OPTIONS: $[i,j,k\ldots]$ Find $i$ doubly spin-flipped states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None DUAL_BASIS_ENERGY Activates dual-basis SCF (HF or DFT) energy correction. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: Analytic first derivative available for HF and DFT (see JOBTYPE) Can be used in conjunction with MP2 or RI-MP2 See BASIS, BASIS2, BASISPROJTYPE RECOMMENDATION: Use dual-basis to capture large-basis effects at smaller basis cost. Particularly useful with RI-MP2, in which HF often dominates. Use only proper subsets for small-basis calculation. 
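The DUAL_BASIS_ENERGY correction might be requested as in the following hypothetical fragment. The particular basis pairing and auxiliary basis are illustrative assumptions only; as noted above, BASIS2 must be a proper subset of BASIS:

```
$rem
   EXCHANGE           hf
   CORRELATION        rimp2             ! dual-basis is particularly useful with RI-MP2
   BASIS              6-311++G(3df,3pd) ! large target basis (illustrative)
   BASIS2             6-311G*           ! smaller subset basis for the SCF (illustrative)
   AUX_BASIS          rimp2-cc-pvtz     ! auxiliary basis for RI-MP2 (illustrative)
   DUAL_BASIS_ENERGY  true
$end
```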
D_CPSCF_PERTNUM Specifies whether to do the perturbations one at a time, or all together. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Perturbed densities to be calculated all together. 1 Perturbed densities to be calculated one at a time. RECOMMENDATION: None D_SCF_CONV_1 Sets the convergence criterion for the level-1 iterations. This preconditions the density for the level-2 calculation, and does not include any two-electron integrals. TYPE: INTEGER DEFAULT: 4 corresponding to a threshold of $10^{-4}$. OPTIONS: $n<10$ Sets convergence threshold to $10^{-n}$. RECOMMENDATION: The criterion for level-1 convergence must be less than or equal to the level-2 criterion, otherwise the D-CPSCF will not converge. D_SCF_CONV_2 Sets the convergence criterion for the level-2 iterations. TYPE: INTEGER DEFAULT: 4 Corresponding to a threshold of $10^{-4}$. OPTIONS: $n<10$ Sets convergence threshold to $10^{-n}$. RECOMMENDATION: None D_SCF_DIIS Specifies the number of matrices to use in the DIIS extrapolation in the D-CPSCF. TYPE: INTEGER DEFAULT: 11 OPTIONS: $n$ $n$ = 0 specifies no DIIS extrapolation is to be used. RECOMMENDATION: Use the default. D_SCF_MAX_1 Sets the maximum number of level-1 iterations. TYPE: INTEGER DEFAULT: 100 OPTIONS: $n$ User defined. RECOMMENDATION: Use the default. D_SCF_MAX_2 Sets the maximum number of level-2 iterations. TYPE: INTEGER DEFAULT: 30 OPTIONS: $n$ User defined. RECOMMENDATION: Use the default. EA_STATES Sets the number of attached target states roots to find. By default, $\alpha$ electron will be attached (see EOM_EA_ALPHA). TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any EA states. OPTIONS: $[i,j,k\ldots]$ Find $i$ EA states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None ECP Defines the effective core potential and associated basis set to be used TYPE: STRING DEFAULT: No ECP OPTIONS: General, Gen User defined. ($ecp keyword required) Symbol Use standard ECPs discussed above. 
RECOMMENDATION: ECPs are recommended for first row transition metals and heavier elements. Consult the reviews for more details. EDA2 Switch on EDA2 and specify the option set number. TYPE: INTEGER DEFAULT: 2 OPTIONS: 0 Do not run through EDA2. 1 Frozen energy decomposition + nDQ-FERF polarization (the standard EDA2 option) 2 Frozen energy decomposition + (AO-block-based) ALMO polarization (old scheme with the addition of frozen decomposition) 3 Frozen energy decomposition + oDQ-FERF polarization (NOT commonly used) 4 Frozen wave function relaxation + Frozen energy decomposition + nDQ-FERF polarization (NOT commonly used) 5 Frozen energy decomposition + polMO polarization (NOT commonly used). 10 No preset. Completely controlled by user’s $rem input (for developers only) RECOMMENDATION: Turn on EDA2 for Q-Chem’s ALMO-EDA jobs unless CTA with the old scheme is desired. Option 1 is recommended in general, especially when substantially large basis sets are employed. The original ALMO scheme (option 2) can be used when the employed basis set is of small or medium size (arguably no larger than augmented triple-$\zeta$). The other options are rarely used for routine applications. EDA_BSSE Calculates the BSSE correction when performing the energy decomposition analysis. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE/FALSE RECOMMENDATION: Set to TRUE unless a very large basis set is used. EDA_CLS_DISP Compute the DISP contribution without performing the orthogonal decomposition, which will then be subtracted from the classical PAULI term. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Use the DISP term computed with orthogonal decomposition (if available). TRUE Use the DISP term computed using undistorted monomer densities. RECOMMENDATION: Set it to TRUE when orthogonal decomposition is not performed. EDA_CLS_ELEC Perform the classical decomposition of the frozen term. 
TYPE: BOOLEAN DEFAULT: FALSE (automatically set to TRUE by EDA2 options 1–5) OPTIONS: FALSE Do not compute the classical ELEC and PAULI terms. TRUE Perform the classical decomposition. RECOMMENDATION: TRUE EDA_CONTRACTION_ANAL Perform analysis separating orbital contraction from the rest of POL. TYPE: BOOLEAN DEFAULT: 0 OPTIONS: FALSE Do not perform contraction analysis. TRUE Perform contraction analysis. RECOMMENDATION: No recommendation EDA_COVP Perform COVP analysis when evaluating the RS or ARS charge-transfer correction. COVP analysis is currently implemented only for systems of two fragments. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE/FALSE RECOMMENDATION: Set to TRUE to perform COVP analysis in an EDA or SCF MI(RS) job. EDA_PRINT_COVP Replace the final MOs with the COVP orbitals at the end of the run. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE/FALSE RECOMMENDATION: Set to TRUE to print COVP orbitals instead of conventional MOs. EE_SINGLETS Controls the number of singlet excited states to calculate. TYPE: INTEGER/ARRAY DEFAULT: 0 Do not perform an ADC calculation of singlet excited states OPTIONS: $n>0$ Number of singlet states to calculate for each irrep or $[n_{1},n_{2},...]$ Compute $n_{1}$ states for the first irrep, $n_{2}$ states for the second irrep, … RECOMMENDATION: Use this variable to define the number of excited states in case of restricted calculations of singlet states. In unrestricted calculations it can also be used, if EE_STATES is not set. Then, it has the same effect as setting EE_STATES. EE_STATES Controls the number of excited states to calculate. TYPE: INTEGER/ARRAY DEFAULT: 0 Do not perform an ADC calculation OPTIONS: $n>0$ Number of states to calculate for each irrep or $[n_{1},n_{2},...]$ Compute $n_{1}$ states for the first irrep, $n_{2}$ states for the second irrep, … RECOMMENDATION: Use this variable to define the number of excited states in case of unrestricted or open-shell calculations. 
In restricted calculations it can also be used, if neither EE_SINGLETS nor EE_TRIPLETS is given. Then, it has the same effect as setting EE_SINGLETS. EE_TRIPLETS Controls the number of triplet excited states to calculate. TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not perform an ADC calculation of triplet excited states OPTIONS: $n>0$ Number of triplet states to calculate for each irrep or $[n_{1},n_{2},...]$ Compute $n_{1}$ states for the first irrep, $n_{2}$ states for the second irrep, … RECOMMENDATION: Use this variable to define the number of excited states in case of restricted calculations of triplet states. EFP_COORD_XYZ Use coordinates of three atoms instead of Euler angles to specify position and orientation of the fragments TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE FALSE RECOMMENDATION: None EFP_DIRECT_POLARIZATION_DRIVER Use direct solver for EFP polarization TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE FALSE RECOMMENDATION: Direct polarization solver provides stable convergence of induced dipoles which may otherwise become problematic in case of closely lying or highly polar or charged fragments. The computational cost of direct polarization versus iterative polarization becomes higher for systems containing more than 10000 polarizable points. 
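The EFP on/off switches and damping controls in this section might be combined as in the following hypothetical fragment for a pure-EFP job; the method and basis are illustrative placeholders, and the damping values shown are the documented defaults:

```
$rem
   EXCHANGE             hf        ! illustrative QM settings
   BASIS                6-31G*
   EFP                  true      ! request an EFP calculation
   EFP_FRAGMENTS_ONLY   true      ! no QM part; all fragments treated by EFP
   EFP_ELEC_DAMP        2         ! exponential electrostatic damping (default)
   EFP_DISP_DAMP        2         ! overlap-based dispersion damping (default)
   EFP_POL_DAMP         1         ! Tang-Toennies polarization screening (default)
$end
```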
EFP_DISP_DAMP Controls fragment-fragment dispersion screening in EFP TYPE: INTEGER DEFAULT: 2 OPTIONS: 0 switch off dispersion screening 1 use Tang-Toennies screening, with fixed parameter $b=1.5$ 2 use overlap-based damping RECOMMENDATION: None EFP_DISP Controls fragment-fragment dispersion in EFP TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE switch on dispersion FALSE switch off dispersion RECOMMENDATION: None EFP_ELEC_DAMP Controls fragment-fragment electrostatic screening in EFP TYPE: INTEGER DEFAULT: 2 OPTIONS: 0 switch off electrostatic screening 1 use overlap-based damping correction 2 use exponential damping correction if screening parameters are provided in the EFP potential RECOMMENDATION: Overlap-based damping is recommended EFP_ELEC Controls fragment-fragment electrostatics in EFP TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE switch on electrostatics FALSE switch off electrostatics RECOMMENDATION: None EFP_ENABLE_LINKS Enable fragment links in EFP region TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE FALSE RECOMMENDATION: None EFP_EXREP Controls fragment-fragment exchange repulsion in EFP TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE switch on exchange repulsion FALSE switch off exchange repulsion RECOMMENDATION: None EFP_FRAGMENTS_ONLY Specifies whether there is a QM part TYPE: LOGICAL DEFAULT: FALSE QM part is present OPTIONS: TRUE Only MM part is present: all fragments are treated by EFP FALSE QM part is present: do QM/MM EFP calculation RECOMMENDATION: None EFP_INPUT Specifies the format of EFP input TYPE: LOGICAL DEFAULT: FALSE Dummy atom (e.g., He) in $molecule section should be present OPTIONS: TRUE A format without dummy atom in $molecule section FALSE A format with dummy atom in $molecule section RECOMMENDATION: None EFP_POL_DAMP Controls fragment-fragment polarization screening in EFP TYPE: INTEGER DEFAULT: 1 OPTIONS: 0 switch off polarization screening 1 use Tang-Toennies screening RECOMMENDATION: None EFP_POL Controls fragment-fragment polarization in EFP 
TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE switch on polarization FALSE switch off polarization RECOMMENDATION: None EFP_QM_DISP Controls QM-EFP dispersion TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE switch on QM-EFP dispersion FALSE switch off QM-EFP dispersion RECOMMENDATION: None EFP_QM_ELEC_DAMP Controls QM-EFP electrostatics screening in EFP TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 switch off electrostatic screening 1 use overlap based damping correction RECOMMENDATION: None EFP_QM_ELEC Controls QM-EFP electrostatics TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE switch on QM-EFP electrostatics FALSE switch off QM-EFP electrostatics RECOMMENDATION: None EFP_QM_EXREP Controls QM-EFP exchange-repulsion TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE switch on QM-EFP exchange-repulsion FALSE switch off QM-EFP exchange-repulsion RECOMMENDATION: None EFP_QM_POL Controls QM-EFP polarization TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE switch on QM-EFP polarization FALSE switch off QM-EFP polarization RECOMMENDATION: None EFP Specifies that EFP calculation is requested TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE FALSE RECOMMENDATION: The keyword should be present if excited state calculation is requested EMBEDMAN Turns density embedding on. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not use density embedding. 1 Turn on density embedding. RECOMMENDATION: Use EMBEDMAN for QM/QM density embedded calculations. EMBED_MU Specifies exponent value of projection operator scaling factor, $\mu$ [Eq. (11.91) and (11.93)]. TYPE: INTEGER DEFAULT: 7 OPTIONS: n $\mu=10^{n}$. RECOMMENDATION: Values of 2 - 7 are recommended. A higher value of $\mu$ leads to better orthogonality of the fragment MOs but $\mu>10^{7}$ introduces numerical noise. $\mu<10^{2}$ results in non-additive terms becoming too large. Energy corrections are fairly insensitive to changes in $\mu$ within the range of $10^{2}-10^{7}$. EMBED_THEORY Specifies post-DFT method performed on fragment one. 
TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 No post HF method, only DFT on fragment one. 1 Perform CCSD(T) calculation on fragment one. 2 Perform MP2 calculation on fragment one. RECOMMENDATION: This should be 1 or 2 for the high-level QM calculation of fragment 1-in-2, and 0 for fragment 2-in-1 low-level QM calculation. EMBED_THRESH Specifies threshold cutoff for AO contribution used to determine which MOs belong to which fragments TYPE: INTEGER DEFAULT: 500 OPTIONS: n Threshold $=n/1000$ RECOMMENDATION: Acceptable values range from 0 to 1000. Should only need to be tuned for non-highly localized MOs EOM_ARESP_SINGLE_PREC Precision selection for amplitude response EOM equations. Available in CCMAN2 only. TYPE: INTEGER DEFAULT: 0 double-precision calculation OPTIONS: 1 single-precision calculation RECOMMENDATION: NONE EOM_CORR Specifies the correlation level. TYPE: STRING DEFAULT: None No correction will be computed OPTIONS: SD(DT) EOM-CCSD(dT), available for EE, SF, and IP SD(FT) EOM-CCSD(fT), available for EE, SF, IP, and EA SD(ST) EOM-CCSD(sT), available for IP RECOMMENDATION: None EOM_DAVIDSON_CONVERGENCE Convergence criterion for the RMS residuals of excited state vectors. TYPE: INTEGER DEFAULT: 5 Corresponding to $10^{-5}$ OPTIONS: $n$ Corresponding to $10^{-n}$ convergence criterion RECOMMENDATION: Use the default. Normally this value should be the same as EOM_DAVIDSON_THRESHOLD. EOM_DAVIDSON_MAXVECTORS Specifies maximum number of vectors in the subspace for the Davidson diagonalization. TYPE: INTEGER DEFAULT: 60 OPTIONS: $n$ Up to $n$ vectors per root before the subspace is reset RECOMMENDATION: Larger values increase disk storage but accelerate and stabilize convergence. EOM_DAVIDSON_MAX_ITER Maximum number of iterations allowed for the Davidson diagonalization procedure. 
TYPE: INTEGER DEFAULT: 30 OPTIONS: $n$ User-defined number of iterations RECOMMENDATION: Default is usually sufficient EOM_DAVIDSON_THRESHOLD Specifies threshold for including a new expansion vector in the iterative Davidson diagonalization. Their norm must be above this threshold. TYPE: INTEGER DEFAULT: 00103 Corresponding to 0.00001 OPTIONS: $abcde$ Integer code is mapped to $abc\times 10^{-(de+2)}$, i.e., 02505->2.5$\times 10^{-6}$ RECOMMENDATION: Use the default unless convergence problems are encountered. Should normally be set to the same value as EOM_DAVIDSON_CONVERGENCE; if convergence problems arise, try setting to a value slightly larger than EOM_DAVIDSON_CONVERGENCE. EOM_EA_ALPHA Sets the number of attached target states derived by attaching $\alpha$ electron (M${}_{s}$=${{1}\over{2}}$, default in EOM-EA). TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any EA states. OPTIONS: $[i,j,k\ldots]$ Find $i$ EA states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None EOM_EA_BETA Sets the number of attached target states derived by attaching $\beta$ electron (M${}_{s}$=$-{{1}\over{2}}$, EA-SF). TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any EA states. OPTIONS: $[i,j,k\ldots]$ Find $i$ EA states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None EOM_FAKE_IPEA If TRUE, calculates fake EOM-IP or EOM-EA energies and properties using the diffuse orbital trick. Default for EOM-EA and Dyson orbital calculations in CCMAN. TYPE: LOGICAL DEFAULT: FALSE (use proper EOM-IP code) OPTIONS: FALSE, TRUE RECOMMENDATION: None. This feature only works for CCMAN. EOM_IPEA_FILTER If TRUE, filters the EOM-IP/EA amplitudes obtained using the diffuse orbital implementation (see EOM_FAKE_IPEA). Helps with convergence. 
TYPE: LOGICAL DEFAULT: FALSE (EOM-IP or EOM-EA amplitudes will not be filtered) OPTIONS: FALSE, TRUE RECOMMENDATION: None EOM_IP_ALPHA Sets the number of ionized target states derived by removing $\alpha$ electron (M${}_{s}=-{{1}\over{2}}$). TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any IP/$\alpha$ states. OPTIONS: $[i,j,k\ldots]$ Find $i$ ionized states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None EOM_IP_BETA Sets the number of ionized target states derived by removing $\beta$ electron (M${}_{s}$=${{1}\over{2}}$, default for EOM-IP). TYPE: INTEGER/INTEGER ARRAY DEFAULT: 0 Do not look for any IP/$\beta$ states. OPTIONS: $[i,j,k\ldots]$ Find $i$ ionized states in the first irrep, $j$ states in the second irrep etc. RECOMMENDATION: None EOM_NGUESS_DOUBLES Specifies number of excited state guess vectors which are double excitations. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ Include $n$ guess vectors that are double excitations RECOMMENDATION: This should be set to the expected number of doubly excited states, otherwise they may not be found. EOM_NGUESS_SINGLES Specifies number of excited state guess vectors that are single excitations. TYPE: INTEGER DEFAULT: Equal to the number of excited states requested OPTIONS: $n$ Include $n$ guess vectors that are single excitations RECOMMENDATION: Should be greater than or equal to the number of excited states requested. EOM_POL Specifies the approach for calculating the polarizability of the EOM-CCSD wave function. 
TYPE: INTEGER DEFAULT: 0 (EOM-CCSD polarizability will not be calculated) OPTIONS: 1 (analytic-derivative or response-theory mixed symmetric-asymmetric approach) 2 (analytic-derivative or response-theory asymmetric approach) 3 (expectation-value approach with right response intermediates) 4 (expectation-value approach with left response intermediates) RECOMMENDATION: EOM-CCSD polarizabilities are expensive since they require solving three/nine (for static) or six/eighteen (for dynamical) additional response equations. Do not request this property unless you need it. EOM_PRECONV_DOUBLES When not zero, doubly excited vectors are converged prior to a full excited states calculation. Sets the maximum number of iterations for pre-converging procedure TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not pre-converge N Perform N Davidson iterations pre-converging doubles. RECOMMENDATION: Occasionally necessary to ensure a doubly excited state is found. Also used in DSF, DIP, and DEA calculations instead of EOM_PRECONV_SINGLES EOM_PRECONV_SD When not zero, EOM vectors are pre-converged prior to a full excited states calculation. Sets the maximum number of iterations for pre-converging procedure. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 do not pre-converge N perform N Davidson iterations pre-converging singles and doubles. RECOMMENDATION: Occasionally necessary to ensure that all low-lying states are found. Also, very useful in EOM(2,3) calculations. EOM_PRECONV_SINGLES When not zero, singly excited vectors are converged prior to a full excited states calculation. Sets the maximum number of iterations for pre-converging procedure. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 do not pre-converge 1 pre-converge singles RECOMMENDATION: Sometimes helps with problematic convergence. EOM_SHIFT Specifies energy shift in EOM calculations. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ corresponds to $n\cdot 10^{-3}$ hartree shift (i.e., 11000 = 11 hartree); solve for eigenstates around this value. 
RECOMMENDATION: Not available in CCMAN. EOM_SINGLE_PREC Precision selection for EOM-CC/MP2 calculations. Available in CCMAN2 only. TYPE: INTEGER DEFAULT: 0 double-precision calculation OPTIONS: 1 single-precision calculation 2 single-precision calculation is followed by double-precision clean-up iterations RECOMMENDATION: Do not set convergence criteria too tight when using single precision EOM_USER_GUESS Specifies if user-defined guess will be used in EOM calculations. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Solve for a state that has maximum overlap with a transition specified in $eom_user_guess. RECOMMENDATION: The orbitals are ordered by energy, as printed in the beginning of the CCMAN2 output. Not available in CCMAN. EPAO_ITERATE Controls iterations for EPAO calculations (see PAO_METHOD). TYPE: INTEGER DEFAULT: 0 Use non-iterated EPAOs based on atomic blocks of SPS. OPTIONS: $n$ Optimize the EPAOs for up to $n$ iterations. RECOMMENDATION: Use the default. For molecules that are not too large, one can test the sensitivity of the results to the type of minimal functions by the use of optimized EPAOs in which case a value of $n=500$ is reasonable. EPAO_WEIGHTS Controls algorithm and weights for EPAO calculations (see PAO_METHOD). TYPE: INTEGER DEFAULT: 115 Standard weights, use 1${}^{\mathrm{st}}$ and 2${}^{\mathrm{nd}}$ order optimization OPTIONS: 15 Standard weights, with 1${}^{\mathrm{st}}$ order optimization only. RECOMMENDATION: Use the default, unless convergence failure is encountered. ERCALC Specifies that the Edmiston-Ruedenberg localized orbitals are to be calculated TYPE: INTEGER DEFAULT: 06000 OPTIONS: $aabcd$ $aa$ specifies the convergence threshold. If $aa>3$, the threshold is set to $10^{-aa}$. The default is 6. If $aa=1$, the calculation is aborted after the guess, allowing Pipek-Mezey orbitals to be extracted. $b$ specifies the guess: 0 Boys localized orbitals. This is the default 1 Pipek-Mezey localized orbitals. 
$c$ specifies restart options (if restarting from an ER calculation): 0 No restart. This is the default. 1 Read in MOs from the last ER calculation. 2 Read in MOs and RI integrals from the last ER calculation. $d$ specifies how to treat core orbitals: 0 Do not perform ER localization. This is the default. 1 Localize core and valence together. 2 Do separate localizations on core and valence. 3 Localize only the valence electrons. 4 Use the $localize section. RECOMMENDATION: ERCALC 1 will usually suffice, which uses threshold $10^{-6}$. ER_CIS_NUMSTATE Defines how many states to mix with ER localized diabatization. These states must be specified in the $localized_diabatization section. TYPE: INTEGER DEFAULT: 0 Do not perform ER localized diabatization. OPTIONS: 2 to N, where N is the number of CIS states requested (CIS_N_ROOTS). RECOMMENDATION: It is usually not wise to mix adiabatic states that are separated by more than a few eV or a typical reorganization energy in solvent. ESP_EFIELD Triggers the calculation of the electrostatic potential (ESP) and/or the electric field at the positions of the MM charges. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Compute the ESP only. 1 Compute the ESP and the electric field. 2 Compute the electric field only. RECOMMENDATION: None. ESP_GRID Controls evaluation of the electrostatic potential on a grid of points. If enabled, the output is in an ASCII file, plot.esp, in the format $x,y,z,$ esp for each point. TYPE: INTEGER DEFAULT: none No electrostatic potential evaluation OPTIONS: $-3$ Same as the option $-1$, but in connection with STATE_ANALYSIS = TRUE. This computes the ESP for all excited-state densities, transition densities, and electron/hole densities. 
$-2$ Same as the option $-1$, plus evaluate the ESP of the $external_charges. $-1$ Read grid input via the $plots section of the input deck. $0$ Generate the ESP values at all nuclear positions. +$n$ Read $n$ grid points in bohr from the ASCII file ESPGrid. RECOMMENDATION: None ESP_TRANS Controls the calculation of the electrostatic potential of the transition density. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Compute the electrostatic potential of the excited-state transition density. FALSE Compute the electrostatic potential of the excited-state electronic density. RECOMMENDATION: None EXCHANGE Specifies the exchange functional (or most exchange-correlation functionals, for backwards compatibility). TYPE: STRING DEFAULT: No default OPTIONS: NAME Use EXCHANGE = NAME, where NAME is either: 1) One of the exchange functionals listed in Section 5.3.2 2) One of the XC functionals listed in Section 5.3.4 that is not marked with an asterisk. 3) GEN, for a user-defined functional (see Section 5.3.6). RECOMMENDATION: In general, consult the literature to guide your selection. Our recommendations are indicated in bold in Sections 5.3.4 and 5.3.2. FAST_XC Controls direct variable thresholds to accelerate exchange-correlation (XC) in DFT. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Turn FAST_XC on. FALSE Do not use FAST_XC. RECOMMENDATION: Caution: FAST_XC improves the speed of a DFT calculation, but may occasionally cause the SCF calculation to diverge. FDE Turns density embedding on. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE Perform an FDE-ADC calculation. FALSE Do not perform an FDE-ADC calculation. RECOMMENDATION: Set the $rem variable FDE to TRUE to start an FDE-ADC calculation. FDIFF_DER Controls what types of information are used to compute higher derivatives. The default uses a combination of energy, gradient, and Hessian information, which makes the force-field calculation faster. TYPE: INTEGER DEFAULT: 3 for jobs where analytical 2nd derivatives are available. 0 for jobs with ECPs. 
OPTIONS: 0 Use energy information only. 1 Use gradient information only. 2 Use Hessian information only. 3 Use energy, gradient, and Hessian information. RECOMMENDATION: When the molecule is larger than benzene with a small basis set, FDIFF_DER = 2 may be faster. Note that FDIFF_DER will be set lower if analytic derivatives of the requested order are not available. Please refer to IDERIV. FDIFF_STEPSIZE_QFF Displacement used for calculating third and fourth derivatives by finite difference. TYPE: INTEGER DEFAULT: 5291 Corresponding to 0.1 bohr. For calculating third and fourth derivatives. OPTIONS: $n$ Use a step size of $n\times 10^{-5}$. RECOMMENDATION: Use the default, unless the potential surface is very flat, in which case a larger value should be used. FDIFF_STEPSIZE Displacement used for calculating derivatives by finite difference. TYPE: INTEGER DEFAULT: 100 Corresponding to 0.001 Å. For calculating second derivatives. OPTIONS: $n$ Use a step size of $n\times 10^{-5}$. RECOMMENDATION: Use the default, except in cases where the potential surface is very flat, in which case a larger value should be used. See FDIFF_STEPSIZE_QFF for third and fourth derivatives. FD_MAT_VEC_PROD Compute the Hessian-vector product using the finite-difference technique. TYPE: BOOLEAN DEFAULT: FALSE (TRUE when the employed functional contains NLC) OPTIONS: FALSE Compute the Hessian-vector product analytically. TRUE Use finite difference to compute the Hessian-vector product. RECOMMENDATION: Set it to TRUE when an analytical Hessian is not available. Note:  For simple R and U calculations, it can always be set to FALSE, which indicates that only the NLC part will be computed with finite difference. 
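To illustrate how the finite-difference step-size controls above are set in practice (the method, basis, and step value here are illustrative placeholder choices, not recommendations), a frequency job that doubles the default displacement might look like:

```
$rem
   JOBTYPE          freq
   METHOD           b3lyp
   BASIS            6-31G*
   FDIFF_STEPSIZE   200     ! 200 x 10^-5 = 0.002 Angstrom displacement
$end
```

Recall that the value is interpreted in units of $10^{-5}$, so 200 corresponds to a 0.002 Å displacement for second derivatives.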
FEFP_EFP Specifies that an fEFP_EFP calculation is requested, to compute the total interaction energies between a ligand (the last fragment in the $efp_fragments section) and the protein (represented by fEFP). TYPE: STRING DEFAULT: OFF OPTIONS: OFF Disables fEFP. LA Enables fEFP with the Link Atom (HLA or CLA) scheme (only electrostatics and polarization). MFCC Enables fEFP with MFCC (only electrostatics). RECOMMENDATION: The keyword should be invoked if EFP/fEFP is requested (interaction energy calculations). This keyword has to be employed with EFP_FRAGMENT_ONLY = TRUE. To switch electrostatics or polarization interactions on/off, the usual EFP controls are employed. FEFP_QM Specifies that an fEFP_QM calculation is requested, to perform a QM/fEFP computation. The fEFP part is a fractionated macromolecule. TYPE: STRING DEFAULT: OFF OPTIONS: OFF Disables fEFP_QM and performs a QM/EFP calculation. LA Enables fEFP_QM with the Link Atom scheme. RECOMMENDATION: The keyword should be invoked if QM/fEFP is requested. This keyword has to be employed with EFP_FRAGMENT_ONLY = FALSE. Only electrostatics is available. FOA_FUNDGAP Compute the frozen-orbital approximation of the fundamental gap. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Do not compute the FOA derivative discontinuity and fundamental gap. TRUE Compute and print FOA fundamental-gap information. Implies KS_GAP_PRINT. RECOMMENDATION: Use in conjunction with KS_GAP_UNIT if TRUE. FOCK_EXTRAP_ORDER Specifies the polynomial order $N$ for Fock matrix extrapolation. TYPE: INTEGER DEFAULT: 0 Do not perform Fock matrix extrapolation. OPTIONS: $N$ Extrapolate using an $N$th-order polynomial ($N>0$). RECOMMENDATION: None FOCK_EXTRAP_POINTS Specifies the number $M$ of old Fock matrices that are retained for use in extrapolation. TYPE: INTEGER DEFAULT: 0 Do not perform Fock matrix extrapolation. 
OPTIONS: $M$ Save $M$ Fock matrices for use in extrapolation $(M>N)$. RECOMMENDATION: Higher-order extrapolations with more saved Fock matrices are faster and conserve energy better than low-order extrapolations, up to a point. In many cases, the scheme ($N$ = 6, $M$ = 12), in conjunction with SCF_CONVERGENCE = 6, is found to provide about a 50% savings in computational cost while still conserving energy. FOLLOW_ENERGY Adjusts the energy window for near-lying states. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Use dynamic thresholds, based on the energy difference between steps. $n$ Search over the selected state $E_{\rm est}\pm n\times 10^{-6}\;E_{h}$. RECOMMENDATION: Use a wider energy window to follow a state diabatically, or a smaller window to remain on the adiabatic state most of the time. FOLLOW_OVERLAP Adjusts the threshold for states of similar character. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Use dynamic thresholds, based on the energy difference between steps. $n$ Percentage overlap between the previous step and the current step. RECOMMENDATION: Use a higher value to require that states have a higher degree of similarity to be considered the same (more often selected based on energy). FON_E_THRESH DIIS error below which occupations will be kept constant. TYPE: INTEGER DEFAULT: 4 OPTIONS: $n$ Freeze occupations below a DIIS error of $10^{-n}$. RECOMMENDATION: This should be one or two numbers bigger than the desired SCF convergence threshold. FON_NORB Number of orbitals above and below the Fermi level that are allowed to have fractional occupancies. TYPE: INTEGER DEFAULT: 4 OPTIONS: $n$ Number of active orbitals. RECOMMENDATION: The number of valence orbitals is a reasonable choice. FON_T_END Final electronic temperature for a FON calculation. TYPE: INTEGER DEFAULT: 0 OPTIONS: Any desired final temperature. RECOMMENDATION: Pick the temperature either to reproduce experimental conditions (e.g., room temperature) or as low as possible to approach zero temperature. FON_T_METHOD Selects the cooling algorithm. 
TYPE: INTEGER DEFAULT: 1 OPTIONS: 1 Temperature is scaled by a factor in each cycle. 2 Temperature is decreased by a constant number in each cycle. RECOMMENDATION: Our experience with a constant cooling rate has been slightly better. However, choose constant temperature when in doubt. FON_T_SCALE Determines the step size for the cooling. TYPE: INTEGER DEFAULT: 90 OPTIONS: $n$ Temperature is scaled by $0.01\cdot n$ in each cycle (cooling method 1). $n$ Temperature is decreased by $n$ K in each cycle (cooling method 2). RECOMMENDATION: The cooling rate should be neither too slow nor too fast. Too slow may lead to final energies that are at undesirably high temperatures; too fast may lead to convergence issues. Reasonable choices for methods 1 and 2 are 98 and 50, respectively. When in doubt, use constant temperature. FON_T_START Initial electronic temperature (in K) for a FON calculation. TYPE: INTEGER DEFAULT: 1000 OPTIONS: Any desired initial temperature. RECOMMENDATION: Pick the temperature either to reproduce experimental conditions (e.g., room temperature) or as low as possible to approach zero temperature. FORCE_FIELD Specifies the force field for MM energies in QM/MM calculations. TYPE: STRING DEFAULT: NONE OPTIONS: AMBER99 AMBER99 force field CHARMM27 CHARMM27 force field OPLSAA OPLSAA force field RECOMMENDATION: None. FRACTIONAL_ELECTRON Add or subtract a fraction of an electron. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Use an integer number of electrons. $n$ Add $n/1000$ electrons to the system. RECOMMENDATION: Use only if trying to generate $E(N)$ plots. If $n<0$, a fraction of an electron is removed from the system. FRAGMO_GUESS_MODE Decides what to do regarding the FRAGMO guess in the present job (for GEN_SCFMAN only). TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Spawn fragment jobs sequentially and collect the results as the FRAGMO guess at the end. 1 Generate fragment inputs in folders “FrgX” under the scratch directory of the present job and then terminate. 
Users can then take advantage of a queuing system to run these jobs simultaneously, using “FrgX” as their scratch folders (this should be handled with scripting). 2 Read in the available fragment data. RECOMMENDATION: Consider using “1” if the fragment calculations are evenly expensive. Use “2” when the FRAGMO guess is pre-computed. FRGM_LPCORR Specifies a correction method performed after the locally-projected equations are converged. TYPE: STRING DEFAULT: NONE OPTIONS: ARS Approximate Roothaan-step perturbative correction. RS Single Roothaan-step perturbative correction. EXACT_SCF Full SCF variational correction. ARS_EXACT_SCF Both ARS and EXACT_SCF in a single job. RS_EXACT_SCF Both RS and EXACT_SCF in a single job. RECOMMENDATION: For large basis sets use ARS; use RS if ARS fails. FRGM_METHOD Specifies a locally-projected method. TYPE: STRING DEFAULT: NONE OPTIONS: STOLL Locally-projected SCF equations of Stoll are solved. GIA Locally-projected SCF equations of Gianinetti are solved. NOSCF_RS Single Roothaan-step correction to the FRAGMO initial guess. NOSCF_ARS Approximate single Roothaan-step correction to the FRAGMO initial guess. NOSCF_DRS Double Roothaan-step correction to the FRAGMO initial guess. NOSCF_RS_FOCK Non-converged SCF energy of the single Roothaan-step MOs. RECOMMENDATION: STOLL and GIA are for variational optimization of the ALMOs. The NOSCF options are for computationally fast corrections of the FRAGMO initial guess. FRZ_GEOM Compute forces on the frozen PES. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Do not compute forces on the frozen PES. TRUE Compute forces on the frozen PES. RECOMMENDATION: Set it to TRUE when optimized geometries or vibrational frequencies on the frozen PES are desired. FRZ_ORTHO_DECOMP_CONV Convergence criterion for the minimization problem that gives the orthogonal fragment densities. TYPE: INTEGER DEFAULT: 6 OPTIONS: $n$ $10^{-n}$ RECOMMENDATION: Use the default unless tighter convergence is preferred. 
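As a sketch of how the locally-projected SCF keywords above combine (the water-dimer geometry, method, and basis are illustrative choices; fragments are separated by “--” lines in the $molecule section):

```
$molecule
   0 1
--
   0 1
   O   -1.551007  -0.114520   0.000000
   H   -1.934259   0.762503   0.000000
   H   -0.599677   0.040712   0.000000
--
   0 1
   O    1.350625   0.111469   0.000000
   H    1.680398  -0.373741  -0.758561
   H    1.680398  -0.373741   0.758561
$end

$rem
   JOBTYPE       sp
   METHOD        hf
   BASIS         6-31G*
   FRGM_METHOD   STOLL   ! variational ALMO optimization
   FRGM_LPCORR   ARS     ! approximate Roothaan-step correction
$end
```

Here STOLL provides the variational ALMO solution and ARS adds the perturbative correction recommended above for large basis sets.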
FRZ_ORTHO_DECOMP Perform the decomposition of the frozen interaction energy based on the orthogonal decomposition of the 1PDM associated with the frozen wave function. TYPE: BOOLEAN DEFAULT: FALSE (automatically set to TRUE by EDA2 options 1–5) OPTIONS: FALSE Do not perform the orthogonal decomposition. TRUE Perform the frozen-energy decomposition using orthogonal fragment densities. RECOMMENDATION: Use the default value automatically set by EDA2. Note that users are allowed to turn off the orthogonal decomposition by setting FRZ_ORTHO_DECOMP to -1. Also, for calculations that involve ECPs, it is automatically set to FALSE, since unreasonable results will be produced otherwise. FSM_MODE Specifies the method of interpolation. TYPE: INTEGER DEFAULT: 2 OPTIONS: 1 Cartesian 2 LST RECOMMENDATION: In most cases, LST is superior to Cartesian interpolation. FSM_NGRAD Specifies the number of perpendicular gradient steps used to optimize each node. TYPE: INTEGER DEFAULT: Undefined OPTIONS: $N$ Number of perpendicular gradients per node. RECOMMENDATION: Anything between 2 and 6 should work; increasing the number is only needed for difficult reaction paths. FSM_NNODE Specifies the number of nodes along the string. TYPE: INTEGER DEFAULT: Undefined OPTIONS: $N$ Number of nodes in the FSM calculation. RECOMMENDATION: $N=15$. Use 10 to 20 nodes for a typical calculation. Reaction paths that connect multiple elementary steps should be separated into individual elementary steps, and one FSM job run for each pair of intermediates. Use a higher number when the FSM is followed by an approximate-Hessian-based transition-state search (Section 9.2.2). FSM_OPT_MODE Specifies the method of optimization. TYPE: INTEGER DEFAULT: Undefined OPTIONS: 1 Conjugate gradients 2 Quasi-Newton method with BFGS Hessian update RECOMMENDATION: The quasi-Newton method is more efficient when the number of nodes is high. FSSH_CONTINUE Restart a FSSH calculation from a previous run, using the file 396.0. 
When this is enabled, the initial conditions of the surface hopping calculation will be set, including the correct wave-function amplitudes, the initial surface, and the position/momentum moments (if AFSSH), from the final step of some prior calculation. TYPE: INTEGER DEFAULT: 0 OPTIONS: $0$ Start a fresh calculation. $1$ Restart from a previous run. RECOMMENDATION: None FSSH_INITIALSURFACE Specifies the initial state in a surface hopping calculation. TYPE: INTEGER DEFAULT: None OPTIONS: $n$ An integer between FSSH_LOWESTSURFACE and FSSH_LOWESTSURFACE $+$ FSSH_NSURFACES $-1$. RECOMMENDATION: None FSSH_LOWESTSURFACE Specifies the lowest-energy state considered in a surface hopping calculation. TYPE: INTEGER DEFAULT: None OPTIONS: $n$ Only states $n$ and above are considered in a FSSH calculation. RECOMMENDATION: None FSSH_NSURFACES Specifies the number of states considered in a surface hopping calculation. TYPE: INTEGER DEFAULT: None OPTIONS: $n$ $n$ states are considered in the surface hopping calculation. RECOMMENDATION: Any states which may come close in energy to the active surface should be included in the surface hopping calculation. FTC_CLASS_THRESH_MULT Together with FTC_CLASS_THRESH_ORDER, determines the cutoff threshold for including a shell pair in the $dd$ class, i.e., the class that is expanded in terms of plane waves. TYPE: INTEGER DEFAULT: 5 Multiplicative part of the FTC classification threshold. Together with the default value of FTC_CLASS_THRESH_ORDER, this leads to the $5\times 10^{-5}$ threshold value. OPTIONS: $n$ User-specified. RECOMMENDATION: Use the default. If diffuse basis sets are used and the molecule is relatively big, then a tighter FTC classification threshold has to be used. According to our experiments using Pople-type diffuse basis sets, the default $5\times 10^{-5}$ value provides accurate results for an alanine5 molecule, while a $1\times 10^{-5}$ threshold value has to be used for alanine10 and a $5\times 10^{-6}$ value for alanine15. 
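To show how the FSM keywords above fit together (the method, basis, and node counts are illustrative placeholders, not recommendations; the reactant and product geometries must also be supplied as described in the FSM section of the manual), the $rem block of a freezing-string job might look like:

```
$rem
   JOBTYPE        fsm
   METHOD         b3lyp
   BASIS          6-31G*
   FSM_NNODE      15    ! 10-20 nodes is typical
   FSM_NGRAD      4     ! perpendicular gradient steps per node
   FSM_MODE       2     ! LST interpolation
   FSM_OPT_MODE   2     ! quasi-Newton with BFGS update
$end
```

With 15 nodes, the quasi-Newton optimizer (FSM_OPT_MODE = 2) is the natural pairing, per the recommendation above.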
FTC_CLASS_THRESH_ORDER Together with FTC_CLASS_THRESH_MULT, determines the cutoff threshold for including a shell pair in the $dd$ class, i.e., the class that is expanded in terms of plane waves. TYPE: INTEGER DEFAULT: 5 Logarithmic part of the FTC classification threshold. Corresponds to $10^{-5}$. OPTIONS: $n$ User-specified. RECOMMENDATION: Use the default. FTC_SMALLMOL Controls whether or not the operator is evaluated on a large grid and stored in memory to speed up the calculation. TYPE: INTEGER DEFAULT: 1 OPTIONS: 1 Use a big pre-calculated array to speed up the FTC calculations. 0 Use this option to save some memory. RECOMMENDATION: Use the default if possible, and use 0 (or buy some more memory) when needed. FTC Controls the overall use of the FTC. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not use FTC in the Coulomb part. 1 Use FTC in the Coulomb part. RECOMMENDATION: Use FTC when larger and/or diffuse basis sets are used. GAUSSIAN_BLUR Enables the use of Gaussian-delocalized external charges in a QM/MM calculation. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Delocalize external charges with Gaussian functions. FALSE Point charges RECOMMENDATION: None GAUSS_BLUR_WIDTH Delocalization width for external MM Gaussian charges in a Janus calculation. TYPE: INTEGER DEFAULT: NONE OPTIONS: $n$ Use a width of $n\times 10^{-4}$ Å. RECOMMENDATION: Blur all MM external charges in a QM/MM calculation with the specified width. Gaussian blurring is currently incompatible with PCM calculations. Values of 1.0–2.0 Å are recommended in Ref. Das:2002. GEN_SCFMAN_ALGO_1 The first algorithm to be used in a hybrid-algorithm calculation. TYPE: STRING DEFAULT: 0 OPTIONS: All the available SCF_ALGORITHM options, including the GEN_SCFMAN additions (Section 4.3.1). RECOMMENDATION: None GEN_SCFMAN_CONV_1 The convergence criterion given to the first algorithm. If reached, switch to the next algorithm. 
TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ 10${}^{-n}$ RECOMMENDATION: None GEN_SCFMAN_HYBRID_ALGO Use multiple algorithms in an SCF calculation based on GEN_SCFMAN. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Use a single SCF algorithm (given by SCF_ALGORITHM). TRUE Use multiple SCF algorithms (to be specified). RECOMMENDATION: Set it to TRUE when the use of more than one algorithm is desired. GEN_SCFMAN_ITER_1 Maximum number of iterations given to the first algorithm. If used up, switch to the next algorithm. TYPE: INTEGER DEFAULT: 50 OPTIONS: User-defined RECOMMENDATION: None GEN_SCFMAN Use GEN_SCFMAN for the present SCF calculation. TYPE: BOOLEAN DEFAULT: TRUE OPTIONS: FALSE Use the previous SCF code. TRUE Use GEN_SCFMAN. RECOMMENDATION: Set to FALSE in cases where features not yet supported by GEN_SCFMAN are needed. GEOM_OPT_CHARAC_CONV Override the built-in convergence criterion for the Davidson solver. TYPE: INTEGER DEFAULT: 0 (use the built-in default value 10${}^{-5}$) OPTIONS: $n$ Set the convergence criterion to 10${}^{-n}$. RECOMMENDATION: Use the default. If it fails to converge, consider loosening the criterion with caution. GEOM_OPT_CHARAC Use the finite-difference Davidson method to characterize the resulting energy minimum/transition state. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: FALSE Do not characterize the resulting stationary point. TRUE Perform a characterization of the stationary point. RECOMMENDATION: Set it to TRUE when the character of a stationary point needs to be verified, especially for a transition structure. GEOM_OPT_COORDS Controls the type of optimization coordinates. TYPE: INTEGER DEFAULT: $-$1 OPTIONS: 0 Optimize in Cartesian coordinates. 1 Generate and optimize in internal coordinates; if this fails, abort. $-$1 Generate and optimize in internal coordinates; if this fails at any stage of the optimization, switch to Cartesian and continue. 2 Optimize in $Z$-matrix coordinates; if this fails, abort. 
$-$2 Optimize in $Z$-matrix coordinates; if this fails during any stage of the optimization, switch to Cartesians and continue. RECOMMENDATION: Use the default, as delocalized internals are more efficient. Note that optimization in $Z$-matrix coordinates requires that the input be specified in $Z$-matrix format. GEOM_OPT_DMAX Maximum allowed step size. The value supplied is multiplied by 10${}^{-3}$. TYPE: INTEGER DEFAULT: 300 = 0.3 OPTIONS: $n$ User-defined cutoff. RECOMMENDATION: Use the default. GEOM_OPT_HESSIAN Determines the initial Hessian status. TYPE: STRING DEFAULT: DIAGONAL OPTIONS: DIAGONAL Set up a diagonal Hessian. READ Have an exact or initial Hessian. Use as is if Cartesian, or transform if internals. RECOMMENDATION: An accurate initial Hessian will improve the performance of the optimizer, but is expensive to compute. GEOM_OPT_LINEAR_ANGLE Threshold for near-linear bond angles (degrees). TYPE: INTEGER DEFAULT: 165 degrees. OPTIONS: $n$ User-defined level. RECOMMENDATION: Use the default. GEOM_OPT_MAX_CYCLES Maximum number of optimization cycles. TYPE: INTEGER DEFAULT: 50 OPTIONS: $n$ User-defined positive integer. RECOMMENDATION: The default should be sufficient for most cases. Increase if the initial guess geometry is poor, or for systems with shallow potential wells. GEOM_OPT_MAX_DIIS Controls the maximum size of the subspace for GDIIS. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Do not use GDIIS. -1 Default size = min(NDEG, NATOMS, 4), where NDEG = number of molecular degrees of freedom. $n$ Size specified by user. RECOMMENDATION: Use the default, or do not set $n$ too large. GEOM_OPT_MODE Determines the Hessian mode followed during a transition-state search. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Mode following off. $n$ Maximize along mode $n$. RECOMMENDATION: Use the default for geometry optimizations. GEOM_OPT_PRINT Controls the amount of Optimize print output. TYPE: INTEGER DEFAULT: 3 Error messages, summary, warning, standard information, and gradient printout. 
OPTIONS: 0 Error messages only. 1 Level 0 plus summary and warning printout. 2 Level 1 plus standard information. 3 Level 2 plus gradient printout. 4 Level 3 plus Hessian printout. 5 Level 4 plus iterative printout. 6 Level 5 plus internal-generation printout. 7 Debug printout. RECOMMENDATION: Use the default. GEOM_OPT_SYMFLAG Controls the use of symmetry in Optimize. TYPE: LOGICAL DEFAULT: TRUE OPTIONS: TRUE Make use of point-group symmetry. FALSE Do not make use of point-group symmetry. RECOMMENDATION: Use the default. GEOM_OPT_TOL_DISPLACEMENT Convergence on maximum atomic displacement. TYPE: INTEGER DEFAULT: 1200 $\equiv 1200\times 10^{-6}$ tolerance on maximum atomic displacement. OPTIONS: $n$ Integer value (tolerance = $n\times 10^{-6}$). RECOMMENDATION: Use the default. To converge, GEOM_OPT_TOL_GRADIENT and one of GEOM_OPT_TOL_DISPLACEMENT and GEOM_OPT_TOL_ENERGY must be satisfied. GEOM_OPT_TOL_ENERGY Convergence on the energy change of successive optimization cycles. TYPE: INTEGER DEFAULT: 100 $\equiv 100\times 10^{-8}$ tolerance on the maximum (absolute) energy change. OPTIONS: $n$ Integer value (tolerance = $n\times 10^{-8}$). RECOMMENDATION: Use the default. To converge, GEOM_OPT_TOL_GRADIENT and one of GEOM_OPT_TOL_DISPLACEMENT and GEOM_OPT_TOL_ENERGY must be satisfied. GEOM_OPT_TOL_GRADIENT Convergence on the maximum gradient component. TYPE: INTEGER DEFAULT: 300 $\equiv 300\times 10^{-6}$ tolerance on the maximum gradient component. OPTIONS: $n$ Integer value (tolerance = $n\times 10^{-6}$). RECOMMENDATION: Use the default. To converge, GEOM_OPT_TOL_GRADIENT and one of GEOM_OPT_TOL_DISPLACEMENT and GEOM_OPT_TOL_ENERGY must be satisfied. GEOM_OPT_UPDATE Controls the Hessian update algorithm. TYPE: INTEGER DEFAULT: -1 OPTIONS: -1 Use the default update algorithm. 0 Do not update the Hessian (not recommended). 1 Murtagh-Sargent update. 2 Powell update. 3 Powell/Murtagh-Sargent update (TS default). 4 BFGS update (OPT default). 
5 BFGS with safeguards to ensure retention of positive definiteness (GDIIS default). RECOMMENDATION: Use the default. GEOM_PRINT Controls the amount of geometric information printed at each step. TYPE: LOGICAL DEFAULT: FALSE OPTIONS: TRUE Print out all geometric information: bond distances, angles, torsions. FALSE Normal printing of the distance matrix. RECOMMENDATION: Use if you want to be able to quickly examine geometric parameters at the beginning and end of optimizations. Only prints at the beginning of single-point energy calculations. GHF Run a generalized Hartree-Fock calculation with GEN_SCFMAN. TYPE: BOOLEAN DEFAULT: FALSE OPTIONS: TRUE Run a GHF calculation. FALSE Do not use GHF. RECOMMENDATION: Set to TRUE if desired. GRAIN Controls the number of lowest-level boxes in one dimension for CFMM. TYPE: INTEGER DEFAULT: -1 Program decides the best value, turning on CFMM when useful. OPTIONS: -1 Program decides the best value, turning on CFMM when useful. 1 Do not use CFMM. $n\geq 8$ Use CFMM with $n$ lowest-level boxes in one dimension. RECOMMENDATION: This is an expert option; either use the default, or use a value of 1 if CFMM is not desired. GVB_AMP_SCALE Scales the default orbital-amplitude iteration step size by $n$/1000 for IP/RCC. PP amplitude equations are solved analytically, so this parameter does not affect PP. TYPE: INTEGER DEFAULT: 1000 Corresponding to 100% OPTIONS: $n$ User-defined, 0–1000 RECOMMENDATION: The default is usually fine, but in some highly correlated systems it can help convergence to use smaller values. GVB_DO_ROHF Sets the number of Unrestricted-in-Active-Pairs pairs to be kept restricted. 
TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ User-defined RECOMMENDATION: If $n$ is the same value as GVB_N_PAIRS, this returns the ROHF solution for GVB. It only works with the UNRESTRICTED = TRUE implementation of GVB with GVB_OLD_UPP = 0 (its default value). GVB_DO_SANO Sets the scheme used in determining the active virtual orbitals in an Unrestricted-in-Active-Pairs GVB calculation. TYPE: INTEGER DEFAULT: 2 OPTIONS: 0 No localization or Sano procedure 1 Only localize the active virtual orbitals 2 Use the Sano procedure RECOMMENDATION: Different initial guesses can sometimes lead to different solutions. Disabling this can sometimes aid in finding more non-local solutions for the orbitals. GVB_GUESS_MIX Similar to SCF_GUESS_MIX, it breaks alpha/beta symmetry for UPP by mixing the alpha HOMO and LUMO orbitals according to the user-defined fraction of LUMO to add to the HOMO. 100 corresponds to a 1:1 ratio of HOMO and LUMO in the mixed orbitals. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ User-defined, $0\leq n\leq 100$ RECOMMENDATION: 25 often works well to break symmetry without overly impeding convergence. GVB_LOCAL Sets the localization scheme used in the initial-guess wave function. TYPE: INTEGER DEFAULT: 2 Pipek-Mezey orbitals OPTIONS: 0 No localization 1 Boys localized orbitals 2 Pipek-Mezey orbitals RECOMMENDATION: Different initial guesses can sometimes lead to different solutions. It can be helpful to try both to ensure the global minimum has been found. GVB_N_PAIRS An alternative to CC_REST_OCC and CC_REST_VIR for setting the active-space size in GVB and valence coupled-cluster methods. TYPE: INTEGER DEFAULT: PP active space (1 occupied and 1 virtual orbital for each valence electron pair) OPTIONS: $n$ User-defined RECOMMENDATION: Use the default unless one wants to study a special active space. When using small active spaces, it is important to ensure that the proper orbitals are incorporated in the active space. If not, use the $reorder_mo feature to adjust the SCF orbitals appropriately. 
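As a sketch of how the GVB guess and active-space keywords above combine (the basis and pair count are illustrative placeholder values, not recommendations), a GVB perfect-pairing job might look like:

```
$rem
   EXCHANGE      hf
   CORRELATION   pp      ! perfect-pairing GVB
   BASIS         6-31G*
   GVB_N_PAIRS   4       ! 4 active pairs instead of the full valence default
   GVB_LOCAL     2       ! Pipek-Mezey guess orbitals (the default)
$end
```

As noted above, with a small active space like this one should verify that the intended orbitals landed in the active space, reordering them via $reorder_mo if necessary.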
GVB_OLD_UPP Selects which unrestricted algorithm to use for GVB. TYPE: INTEGER DEFAULT: 0 OPTIONS: 0 Use the Unrestricted-in-Active-Pairs algorithm described in Ref. Lawler:2010 1 Use the unrestricted implementation described in Ref. Beran:2005 RECOMMENDATION: Only works for unrestricted PP and no other GVB model. GVB_ORB_CONV The GVB-CC wave function is considered converged when the root-mean-square orbital gradient and orbital step sizes are less than $10^{-\mathrm{GVB\_ORB\_CONV}}$. Adjust THRESH simultaneously. TYPE: INTEGER DEFAULT: 5 OPTIONS: $n$ User-defined RECOMMENDATION: Use 6 for PP(2) jobs or geometry optimizations. Tighter convergence (i.e., 7 or higher) cannot always be reliably achieved. GVB_ORB_MAX_ITER Controls the number of orbital iterations allowed in GVB-CC calculations. Some jobs, particularly unrestricted PP jobs, can require 500–1000 iterations. TYPE: INTEGER DEFAULT: 256 OPTIONS: User-defined number of iterations. RECOMMENDATION: The default is typically adequate, but some jobs, particularly UPP jobs, can require 500–1000 iterations if converged tightly. GVB_ORB_SCALE Scales the default orbital step size by $n$/1000. TYPE: INTEGER DEFAULT: 1000 Corresponding to 100% OPTIONS: $n$ User-defined, 0–1000 RECOMMENDATION: The default is usually fine, but for some stretched geometries it can help convergence to use smaller values. GVB_POWER Exponent for GVB_IP exchange-type amplitude regularization, used to improve the convergence of the amplitude equations, especially for spin-unrestricted amplitudes near dissociation. This is the exponent $p$ in the amplitude-dampening term included in the energy denominator: $-(c/10000)(e^{t_{ij}^{p}}-1)/(e^{1}-1)$ TYPE: INTEGER DEFAULT: 6 OPTIONS: $p$ User-defined RECOMMENDATION: Should be decreased if unrestricted amplitudes do not converge or converge slowly at dissociation, and should be kept even-valued. GVB_PRINT Controls the amount of information printed during a GVB-CC job. 
TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ User-defined RECOMMENDATION: Should never need to go above 0 or 1. GVB_REGULARIZE Coefficient for GVB_IP exchange-type amplitude regularization, used to improve the convergence of the amplitude equations, especially for spin-unrestricted amplitudes near dissociation. This is the leading coefficient $c$ in the amplitude-dampening term ${-(c/10000)(e^{t_{ij}^{p}}-1)/(e^{1}-1)}$ TYPE: INTEGER DEFAULT: 0 For restricted 1 For unrestricted OPTIONS: $c$ User-defined RECOMMENDATION: Should be increased if unrestricted amplitudes do not converge or converge slowly at dissociation. Set this to zero to remove all dynamically valued amplitude regularization. GVB_REORDER_1 Tells the code which two pairs to swap first. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ User-defined XXXYYY RECOMMENDATION: This is in the format of two 3-digit pair indices that tell the code to swap pair XXX with pair YYY; for example, swapping pairs 1 and 2 corresponds to the input 001002. Must be specified if GVB_REORDER_PAIRS $\geq$ 1. GVB_REORDER_2 Tells the code which two pairs to swap second. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ User-defined XXXYYY RECOMMENDATION: This is in the format of two 3-digit pair indices that tell the code to swap pair XXX with pair YYY; for example, swapping pairs 1 and 2 corresponds to the input 001002. Must be specified if GVB_REORDER_PAIRS $\geq$ 2. GVB_REORDER_3 Tells the code which two pairs to swap third. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ User-defined XXXYYY RECOMMENDATION: This is in the format of two 3-digit pair indices that tell the code to swap pair XXX with pair YYY; for example, swapping pairs 1 and 2 corresponds to the input 001002. Must be specified if GVB_REORDER_PAIRS $\geq$ 3. GVB_REORDER_4 Tells the code which two pairs to swap fourth. TYPE: INTEGER DEFAULT: 0 OPTIONS: $n$ User-defined XXXYYY RECOMMENDATION: This is in the format of two 3-digit pair indices that tell the code to swap pair XXX with pair YYY; for example, swapping pairs 1 and 2 corresponds to the input 001002. 
GVB_REORDER_PAIRS must be set $\geq$ 4.

GVB_REORDER_5
Tells the code which two pairs to swap fifth.
TYPE: INTEGER
DEFAULT: 0
OPTIONS: $n$ User-defined, in the format XXXYYY
RECOMMENDATION: Two concatenated 3-digit pair indices that tell the code to swap pair XXX with pair YYY; for example, swapping pairs 1 and 2 gives the input 001002. GVB_REORDER_PAIRS must be set $\geq$ 5.

GVB_REORDER_PAIRS
Tells the code how many GVB pairs to switch around.
TYPE: INTEGER
DEFAULT: 0
OPTIONS: $n$ $0\leq n\leq 5$
RECOMMENDATION: Allows the user to change the order in which the active pairs are placed after the orbitals are read in or guessed using localization and the Sano procedure. Up to 5 sequential pair swaps can be made, but it is best to leave this alone.

GVB_RESTART
Restart a job from previously converged GVB-CC orbitals.
TYPE: LOGICAL
DEFAULT: FALSE
OPTIONS: TRUE/FALSE
RECOMMENDATION: Useful when trying to converge to the same GVB solution at slightly different geometries, for example.

GVB_SHIFT
Value for a statically valued energy shift, $n/10000$, in the energy denominator used to solve the coupled-cluster amplitude equations.
TYPE: INTEGER
DEFAULT: 0
OPTIONS: $n$ User-defined
RECOMMENDATION: The default is fine; can be used in lieu of the dynamically valued amplitude regularization if that does not aid convergence.

GVB_SYMFIX
Should GVB use a symmetry-breaking fix?
TYPE: INTEGER
DEFAULT: 0
OPTIONS:
0 No symmetry-breaking fix
1 Symmetry-breaking fix with virtual orbitals spanning the active space
2 Symmetry-breaking fix with virtual orbitals spanning the whole virtual space
RECOMMENDATION: It is best to stick with type 1 to get a symmetry-breaking correction, with the best results coming from CORRELATION = NP and GVB_SYMFIX = 1.

GVB_SYMPEN
Sets the pre-factor for the amplitude regularization term for the SB amplitudes.
TYPE: INTEGER
DEFAULT: 160
OPTIONS: $\gamma$ User-defined
RECOMMENDATION: Sets the pre-factor $\gamma$ in the SB amplitude regularization term $-(\gamma/1000)(e^{(c*100)*t^{2}}-1)$.

GVB_SYMSCA
Sets the weight for the amplitude regularization term for the SB amplitudes.
TYPE: INTEGER
DEFAULT: 125
OPTIONS: $c$ User-defined
RECOMMENDATION: Sets the weight $c$ in the SB amplitude regularization term $-(\gamma/1000)(e^{(c*100)*t^{2}}-1)$.

GVB_TRUNC_OCC
Controls how many pairs' occupied orbitals are truncated from the GVB active space.
TYPE: INTEGER
DEFAULT: 0
OPTIONS: $n$ User-defined
RECOMMENDATION: Allows asymmetric GVB active spaces, removing the $n$ lowest-energy occupied orbitals from the GVB active space while leaving their paired virtual orbitals in the active space. Only the models including the SIP and DIP amplitudes (i.e., NP and 2P) benefit from this; for all other models it is equivalent to simply reducing the total number of pairs.

GVB_TRUNC_VIR
Controls how many pairs' virtual orbitals are truncated from the GVB active space.
TYPE: INTEGER
DEFAULT: 0
OPTIONS: $n$ User-defined
RECOMMENDATION: Allows asymmetric GVB active spaces, removing the $n$ highest-energy virtual orbitals from the GVB active space while leaving their paired occupied orbitals in the active space. Only the models including the SIP and DIP amplitudes (i.e., NP and 2P) benefit from this; for all other models it is equivalent to simply reducing the total number of pairs.

GVB_UNRESTRICTED
Controls restricted versus unrestricted PP jobs. Usually handled automatically.
TYPE: LOGICAL
DEFAULT: Same value as UNRESTRICTED
OPTIONS: TRUE/FALSE
RECOMMENDATION: Set this variable explicitly only to do a UPP job from an RHF or ROHF initial guess.
Leave this variable alone and specify UNRESTRICTED = TRUE to access the new unrestricted-in-active-pairs GVB code, which can return an RHF or ROHF solution if used with GVB_DO_ROHF.

HESS_AND_GRAD
Enables the evaluation of both the analytical gradient and the Hessian in a single job.
TYPE: LOGICAL
DEFAULT: FALSE
OPTIONS:
TRUE Evaluates both gradient and Hessian.
FALSE Evaluates Hessian only.
RECOMMENDATION: Use only in a frequency (and thus Hessian) evaluation.

HFK_LR_COEF
Sets the coefficient for long-range HF exchange.
TYPE: INTEGER
DEFAULT: 100000000
OPTIONS: $n$ Corresponding to $n/100000000$
RECOMMENDATION: None

HFK_SR_COEF
Sets the coefficient for short-range HF exchange.
TYPE: INTEGER
DEFAULT: 0
OPTIONS: $n$ Corresponding to $n/100000000$
RECOMMENDATION: None

HFPT_BASIS
Specifies the secondary basis in a HFPC/DFPC calculation.
TYPE: STRING
DEFAULT: None
OPTIONS: None
RECOMMENDATION: See reference for recommended basis set, functional, and grid pairings.

HFPT
Activates HFPC/DFPC calculation.
TYPE: LOGICAL
DEFAULT: FALSE
OPTIONS: Single-point energy only
RECOMMENDATION: Use Dual-Basis to capture large-basis effects at smaller-basis cost. See reference for recommended basis set, functional, and grid pairings.

HF_LR
Sets the fraction of Hartree-Fock exchange at $r_{12}=\infty$.
TYPE: INTEGER
DEFAULT: No default
OPTIONS: $n$ Corresponding to HF_LR = $n/1000$
RECOMMENDATION: None

HF_SR
Sets the fraction of Hartree-Fock exchange at $r_{12}=0$.
TYPE: INTEGER
DEFAULT: No default
OPTIONS: $n$ Corresponding to HF_SR = $n/1000$
RECOMMENDATION: None

HIRSHFELD_CONV
Sets a different SCF convergence criterion for the single-atom Hirshfeld calculations.
TYPE: INTEGER
DEFAULT: Same as SCF_CONVERGENCE
OPTIONS: $n$ Corresponding to $10^{-n}$
RECOMMENDATION: 5

HIRSHFELD_READ
Switch to force reading in of isolated atomic densities.
TYPE: LOGICAL
DEFAULT: FALSE
OPTIONS:
TRUE Read in isolated atomic densities from a previous Hirshfeld calculation from disk.
FALSE Generate new isolated atomic densities.
RECOMMENDATION: Use the default unless the system is large. Note that atoms should be in the same order, with the same basis set, as in the previous Hirshfeld calculation (although coordinates can change). The previous calculation should be run with the -save switch.

HIRSHFELD_SPHAVG
Controls whether atomic densities should be spherically averaged in the pro-molecule.
TYPE: LOGICAL
DEFAULT: TRUE
OPTIONS:
TRUE Spherically average atomic densities.
FALSE Do not spherically average.
RECOMMENDATION: Use the default.

HIRSHFELD
Controls running of Hirshfeld population analysis.
TYPE: LOGICAL
DEFAULT: FALSE
OPTIONS:
TRUE Calculate Hirshfeld populations.
FALSE Do not calculate Hirshfeld populations.
RECOMMENDATION: None

HIRSHITER_THRESH
Controls the convergence criterion of iterative Hirshfeld population analysis.
TYPE: INTEGER
DEFAULT: 5
OPTIONS: $N$ Corresponding to a convergence criterion of $N/10000$, in $e$.
RECOMMENDATION: Use the default, which is the value recommended in Ref. Bultinck:2007.

HIRSHITER
Controls running of iterative Hirshfeld population analysis.
TYPE: LOGICAL
DEFAULT: FALSE
OPTIONS:
TRUE Calculate iterative Hirshfeld populations.
FALSE Do not calculate iterative Hirshfeld populations.
RECOMMENDATION: None

HIRSHMOD
Apply modifiers to the free-atom volumes used in the calculation of the scaled TS-vdW parameters.
TYPE: INTEGER
DEFAULT: 4
OPTIONS:
0 Do not apply modifiers to the Hirshfeld volumes.
1 Apply built-in modifier to H.
2 Apply built-in modifier to H and C.
3 Apply built-in modifier to H, C and N.
4 Apply built-in modifier to H, C, N and O.
RECOMMENDATION: Use the default.

IDERIV
Controls the order of derivatives that are evaluated analytically. The user is not normally required to specify a value, unless numerical derivatives are desired. The derivatives will be evaluated numerically if IDERIV is set lower than JOBTYPE requires.
TYPE: INTEGER
DEFAULT: Set to the order of derivative that JOBTYPE requires
OPTIONS:
2 Analytic second derivatives of the energy (Hessian).
1 Analytic first derivatives of the energy.
0 Analytic energies only.
RECOMMENDATION: Usually set to the maximum possible for efficiency. Note that IDERIV will be set lower if analytic derivatives of the requested order are not available.

IGNORE_LOW_FREQ
Low frequencies that should be treated as rotation can be ignored during an anharmonic-correction calculation.
TYPE: INTEGER
DEFAULT: 300 Corresponding to 300 cm${}^{-1}$.
OPTIONS: $n$ Any mode with harmonic frequency less than $n$ will be ignored.
RECOMMENDATION: Use the default.
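As a minimal sketch of how such $rem variables are assembled into an input deck, the fragment below uses the Hirshfeld keywords described above; the job type, method, and basis shown are illustrative choices, not values prescribed by this reference section:

```
$rem
   JOBTYPE     sp
   METHOD      b3lyp
   BASIS       6-31G*
   HIRSHFELD   TRUE      ! run Hirshfeld population analysis
   HIRSHITER   TRUE      ! use the iterative variant as well
$end
```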
## Journal of Green Engineering

Vol: 7    Issue: 4    Published In: October 2017

### Power Generation Using Permanent Magnet Synchronous Generator (PMSG) Based Variable Speed Wind Energy Conversion System (WECS): An Overview

Article No: 2    Page: 477-504    doi: https://doi.org/10.13052/jge1904-4720.742

Anjana Jain1,*, S. Shankar1 and V. Vanitha2

• 1Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India
• 2Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India

E-mail: anjanajain79@gmail.com; *Corresponding Author

Received 11 December 2017; Accepted 14 March 2018; Publication 27 March 2018

## Abstract

In recent times, Permanent-Magnet Synchronous-Generator (PMSG) based variable-speed Wind-Energy Conversion-Systems (WECS) have become very attractive to many researchers. The research aim is to analyse different synchronous machines and compare them based on their maximum power generation. This paper reviews various aspects of PMSG, such as topologies with controlled and uncontrolled rectifiers, grid-connected and standalone modes of operation with various control methods of PMSG-based WECS, and recent optimization approaches. The performance of PMSG can be enhanced by adopting a number of control mechanisms with the benefit of advanced optimization techniques. A comparative analysis is carried out based on the techniques used, and their corresponding advantages and drawbacks are discussed.
## Keywords

• PMSG
• WECSs
• power generation
• synchronous generator
• grid-connected PMSG
• standalone mode of PMSG
• grid side converter (GSC)
• machine side converter (MSC)

## List of Abbreviations

PMSG – Permanent Magnet Synchronous Generator
WECS – Wind Energy Conversion System
BESS – Battery Energy Storage System
SG – Synchronous Generator
PM – Permanent Magnet
WT – Wind Turbine
HAWT – Horizontal Axis Wind Turbine
VAWT – Vertical Axis Wind Turbine
GSC – Grid Side Converter
MSC – Machine Side Converter
CSC – Current Source Converter
MPPT – Maximum Power Point Tracking
RAPS – Remote Area Power Supply
PLL – Phase Locked Loop
PCC – Point of Common Coupling
FLC – Fuzzy Logic Controller
AFLC – Adaptive Fuzzy Logic Controller
ANN – Artificial Neural Network
DTC – Direct Torque Control
PSO – Particle Swarm Optimization
GA – Genetic Algorithm
BFO – Bacterial Foraging Optimization

## 1 Introduction

Nowadays, PMSGs are the most popular machines for power generation, as they have high efficiency [15]. For instance, the electrical efficiency of PMSGs is higher than that of synchronous generators (SGs) in moderate-size marine diesel gen-sets [6]. As PMSGs lack excitation control, voltage regulation in island operation is challenging. The flux density of a permanent magnet (PM) reduces with a rise in temperature, so voltage control becomes complicated. Further difficulties of PMs are their high cost and their handling during manufacturing [7]. Variable-speed operation of the WECS is essential for extracting maximum wind power. Modern control-based tracking of power or torque helps to achieve the best utilization of wind energy [8, 9]. Control strategies are developed based on wind velocity to acquire the required shaft speed. These schemes involve high cost and reduced reliability for a small-scale WECS. The current vector of an interior-type PMSG optimizes operation at variable wind velocity, which needs control of six active switches [10].
Switch-mode rectifiers have also been designed for PMSMs [11]. For standalone operation, the load-side converter voltage needs to be controlled in terms of amplitude and frequency [12]. Grid-connected PMSG-based WECSs have also been proposed and implemented. The possibility of attaining a small pole pitch permits the machine to run at low speed and removes the gearbox, or allows using a single-stage gear for a more compact design. This paper reviews various PMSG techniques with the aim of maximum power generation [13, 14]. The objective of this paper is to discuss different methods and approaches for control of PMSG-based WECS. In Section 2 of the paper, WECS and the modelling of PMSG are discussed. Section 3 presents various topologies for the converters used for PMSG-based WECS. In Section 4, control methods for grid-connected PMSG-based WECS are discussed. Section 5 discusses the standalone mode of operation of PMSG-based WECS. In Section 6, some advanced control methods for PMSG-based WECS are discussed. Section 7 presents recent optimization approaches for the system. Section 8 concludes with a comparative analysis of the different methods and approaches discussed in the previous sections.

## 2 Wind Energy Conversion System (WECS) and Modelling

Non-conventional energy sources have become an alternative or supplement to conventional sources of energy. With the endless potential of wind energy and its environmental merits, it has become the most popular source of renewable energy. A WECS based on a wind turbine (WT) is categorised as a fixed- or variable-speed system. Initially the fixed-speed WECS was the popular one; nowadays, variable-speed generators are more effective. PMSG is more effective and efficient compared to other generators and is best suited for WECS due to its high torque-to-size ratio, low maintenance requirements, omission of slip rings, and reduced overall cost.
Permanent magnets (PM) instead of electromagnets make the stator direct flux constant [65]. The modelling of the wind-based power generation system is discussed below.

### 2.1 Modelling of the Wind Turbine

The rotor blades of the WT convert the kinetic energy of the wind into mechanical energy. The generator, as the electrical system, then transforms mechanical power into electrical power. WTs generally used for WECS are the vertical-axis wind turbine (VAWT) and the horizontal-axis wind turbine (HAWT). HAWT has the following advantages over VAWT:

• it offers flexible blade pitch, so that blades can operate at the optimum angle of attack, extracting more wind energy;
• it captures wind energy efficiently during the whole rotation, as the blades' rotation is perpendicular to the wind;
• it is self-starting, whereas VAWT needs an initial starting torque.

The kinetic energy extracted from the wind impinges on the turbine blade area. According to the principle of energy-mass conservation in wind, the maximum extractable wind power is given as [65]:

$$P_{wind} = \frac{1}{2}\rho A v_w^3 \quad (1)$$

where $v_w$ is the wind velocity, $\rho$ is the density of air, and $A$ is the swept area of the turbine blades. The power coefficient $C_p$ is defined as the ratio of turbine power to the extracted wind power:

$$C_p = \frac{P_{turbine}}{P_{wind}}$$

Hence the turbine power is given by:

$$P_{turbine} = C_p P_{wind} = \frac{1}{2}\rho A v_w^3 C_p \quad (2)$$

The turbine power with respect to wind transients is given by:

$$P_{turbine} = C_p P_{wind} = \frac{1}{2}\rho A v_w^3 C_p(\lambda, \beta) \quad (3)$$

where $\lambda$ is the tip-speed ratio of the turbine:

$$\lambda = \frac{\omega_r r}{v_w} \quad (4)$$

with $\omega_r$ the rotational speed of the rotor and $r$ its radius, and

$$C_p(\lambda, \beta) = C_1\left(\frac{C_2}{\lambda_i} - C_3\beta - C_4\right)e^{-C_5/\lambda_i} + C_6\lambda \quad (5)$$

$$\frac{1}{\lambda_i} = \frac{1}{\lambda + 0.08\beta} - \frac{0.035}{\beta^3 + 1} \quad (6)$$

where $\beta$ stands for the blade pitch angle. The coefficients $C_1$ to $C_6$ are: $C_1$ = 0.5176, $C_2$ = 116, $C_3$ = 0.4, $C_4$ = 5, $C_5$ = 21, $C_6$ = 0.0068. Figure 1 shows the relationship between the power coefficient $C_p$ and the tip-speed ratio $\lambda$.

Figure 1 Power coefficient $C_p(\lambda, \beta)$ vs. tip-speed ratio $\lambda$.
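As a quick numerical sketch of Eqs. (1)–(6), the script below evaluates $C_p(\lambda,\beta)$ and the turbine power using the coefficients quoted above; the air density and rotor radius are illustrative values, not taken from the paper:

```python
import math

# Empirical power-coefficient coefficients C1..C6 quoted in the text.
C1, C2, C3, C4, C5, C6 = 0.5176, 116, 0.4, 5, 21, 0.0068

def power_coefficient(lam, beta):
    """Cp(lambda, beta) from Eqs. (5)-(6)."""
    inv_lam_i = 1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0)
    return C1 * (C2 * inv_lam_i - C3 * beta - C4) * math.exp(-C5 * inv_lam_i) + C6 * lam

def turbine_power(v_w, lam, beta, rho=1.225, radius=2.0):
    """Pturbine = 0.5 * rho * A * v^3 * Cp, Eq. (3).

    rho (kg/m^3) and radius (m) are example values for illustration.
    """
    area = math.pi * radius**2  # swept area A of the blades
    return 0.5 * rho * area * v_w**3 * power_coefficient(lam, beta)
```

With these coefficients the $C_p$ curve at $\beta = 0$ peaks near $\lambda \approx 8$, consistent with the shape sketched in Figure 1, and always stays below the Betz limit of 0.593.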
The governing formula for the mechanical analysis of a directly coupled PMSG is given as:

$$\frac{d\omega_m}{dt} = \frac{1}{J}\left(T_m - T_{gen} - B\omega_m\right) \quad (7)$$

$$\omega_e = P\omega_m \quad (8)$$

$$\int \omega_e \, dt = \theta_e \quad (9)$$

where $\theta_e$ = electrical angle (required for the abc↔d-q transformation), $T_m$ = generated turbine mechanical torque (Nm), $T_{gen}$ ($=T_e$) = generated electromagnetic torque (Nm), $P$ = number of pole pairs, $\omega_m$ = rotor mechanical speed (rad/sec), $\omega_e$ = rotor electrical speed (elec. rad/sec), $J$ = moment of inertia (kg m${}^2$), $B$ = viscous friction coefficient (can be ignored for a small WT).

### 2.2 Modelling of the PMSG

Figure 2 shows the d-q axes 'Park' model. In a PMSG the rotor is made up of PMs and is not fed by an external source for producing the magnetic field. Hence a rotor voltage equation need not be developed, as there is no variation of rotor flux with time. The stator voltage equations are as follows [65]:

$$V_{sd} = R_s I_{sd} + \frac{d\phi_{sd}}{dt} - \omega_e \phi_{sq} \quad (10)$$

$$V_{sq} = R_s I_{sq} + \frac{d\phi_{sq}}{dt} + \omega_e \phi_{sd} \quad (11)$$

Figure 2 PMSG model (a) d (direct)-axis, (b) q (quadrature)-axis.

The stator fluxes are given by:

$$\phi_{sd} = L_d I_{sd} + \phi_m \quad (12)$$

$$\phi_{sq} = L_q I_{sq} \quad (13)$$

where $R_s$ = stator winding resistance, $L_d$ = d-axis stator inductance, $L_q$ = q-axis stator inductance, $\phi_m$ = flux linkage, $V_{sd}$ & $I_{sd}$ = d-axis stator voltage & current, $V_{sq}$ & $I_{sq}$ = q-axis stator voltage & current. From Equations 10–13:

$$V_{sd} = R_s I_{sd} + L_d \frac{dI_{sd}}{dt} - \omega_e L_q I_{sq} \quad (14)$$

$$V_{sq} = R_s I_{sq} + L_q \frac{dI_{sq}}{dt} + \omega_e L_d I_{sd} + \omega_e \phi_m \quad (15)$$

The electromagnetic torque can be written as:

$$T_e = \frac{3}{2} P \left(\phi_m I_{sq} + (L_d - L_q) I_{sd} I_{sq}\right) \quad (16)$$

For a surface-mounted PMSG, we can assume $L_d = L_q$; then $T_e$ can be written as:

$$T_e = \frac{3}{2} P \phi_m I_{sq} \quad (17)$$

For the steady-state condition, the real power $P_s$ and reactive power $Q_s$ of the PMSG are as follows:

$$P_s = V_{sd} I_{sd} + V_{sq} I_{sq} \quad (18)$$

$$Q_s = V_{sq} I_{sd} - V_{sd} I_{sq} \quad (19)$$

## 3 Converter Topologies for Permanent Magnet Synchronous Generator (PMSG) Based Wind Energy Conversion System (WECS)

PMSG is the most popular among the synchronous and asynchronous generators due to its lower weight and size and its self-excitation.
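The torque and power relations of Section 2.2 can be exercised numerically with a minimal sketch; the machine parameters used in the examples are illustrative only, not taken from the paper:

```python
def electromagnetic_torque(P, phi_m, i_sd, i_sq, L_d, L_q):
    """Te = (3/2) * P * (phi_m * Isq + (Ld - Lq) * Isd * Isq), Eq. (16)."""
    return 1.5 * P * (phi_m * i_sq + (L_d - L_q) * i_sd * i_sq)

def stator_powers(v_sd, v_sq, i_sd, i_sq):
    """Steady-state real and reactive power, Eqs. (18)-(19)."""
    p_s = v_sd * i_sd + v_sq * i_sq
    q_s = v_sq * i_sd - v_sd * i_sq
    return p_s, q_s
```

For a surface-mounted machine ($L_d = L_q$) the reluctance term vanishes, Eq. (16) reduces to Eq. (17), and the torque becomes independent of $I_{sd}$.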
Its low maintenance cost and lack of a gearbox give it high efficiency and power factor compared to the wound-rotor synchronous generator (WRSG), squirrel-cage induction generator (SCIG), and doubly-fed induction generator (DFIG). DFIGs are a promising choice due to their high wind-power extraction capability with low converter losses [69]. A PMSG with diode-bridge rectifier (AC/DC), boost converter (DC/DC) and inverter (DC/AC) was proposed for power extraction. The DC/DC converter controls the machine's voltage. Cost and control complexity are reduced by the diode-bridge rectifier [66]. Due to the effect of diode commutation at higher wind velocity and discontinuous operation of the DC/DC converter at low wind velocity, the extracted wind power decreases. Therefore, for power in the range of tens of kW, a back-to-back converter is the better choice, which leads to a 5%–15% rise in power [67]. Various techniques have been proposed and implemented by researchers to enhance the performance of PMSG connected to variable-speed WECS, such as PMSG with controlled and uncontrolled rectifiers for grid-connected or stand-alone systems. Figure 3(a) shows a grid-connected PMSG through AC/DC-boost-DC/AC converters and Figure 3(b) shows a grid-connected PMSG through a bidirectional (back-to-back) converter. The bidirectional converter consists of two VSI-PWM converters and a DC-link capacitor. The MSC works as a controlled rectifier and the grid-side converter as an inverter. The DC-link voltage can be kept constant by controlling the power flow of the GSC. The MSC is controlled to suit the magnetization and the reference speed/torque. This topology provides a decoupling effect between MSC and GSC through the capacitor. The decoupling effect allows independent converter control with some protection [68]. Figure 4(a) shows the schematic diagram for the controlled-rectifier-based MSC, Figure 4(b) shows the MPPT control, and Figure 4(c) shows the phase-locked loop (PLL) used to find the angular velocity.
Dynamic modelling of PMSG-based WECS and MPPT-based controller design using a fuzzy-logic controller (FLC) is presented for constant and variable wind conditions [16]. An Artificial-Neural-Network (ANN) based Reinforcement-Learning (RL) MPPT algorithm has been proposed. Initially, the MPPT algorithm learns the ideal relation between the rotor speed and electrical power of the PMSG by the integration of an ANN and the Q-learning algorithm. The MPPT algorithm is then switched from online RL to relation-based online MPPT. The online learning scheme permits the WECS to perform like an intelligent agent (with memory) that learns from its specific knowledge, thereby enhancing the learning efficiency. The online RL algorithm is reactivated whenever the genuine optimal relationship diverges from the learned one due to system aging [17]. Figure 5 [17] shows the schematic diagram of the ANN-based RL algorithm. The major problem in a series-connected current-source converter (CSC) is the insulation level, due to monopolar operation. Bipolar operation, which halves the insulation requirement, has been investigated to solve this degradation of the system. The paths for the DC-link current produce a concern for appropriate operation of the bipolar system. Bipolar operation with the support of optimized DC-link current control is characterized by lower insulation, higher reliability, efficiency and flexibility [18]. An inclusive analysis of power-converter topologies with technical details for MW-range PMSG-WECS, fault-ride-through compliance approaches and digital control approaches is presented in [19]. In WECS, traditional frequency-control techniques impose a severe stress on the system. An enhanced frequency-response scheme for PMSG-based WECS to control the frequency of a Remote Area Power Supply (RAPS) system with combined ultra-capacitors has been proposed [20]. Here the frequency-response scheme is based on droop-control and virtual-inertia methods. It efficiently controls the RAPS frequency with a high rate of change of power.
A Medium-Frequency Transformer (MFT) based WECS for offshore wind farms using current source converters has been proposed [21]. This configuration includes a medium-voltage PMSG connected to a low-cost passive rectifier, an onshore current source inverter and an MFT-based cascaded converter. Both simulation and experimental results show better performance compared with conventional methods.

Figure 3 (a) WECS with uncontrolled diode rectifier, boost converter, VSC; (b) WECS with controlled rectifier and VSC.

Figure 4 (a) Controlled-rectifier-based machine side converter, (b) MPPT control, (c) PLL.

Figure 5 Schematic diagram of the ANN-based reinforcement learning (RL) algorithm.

In [22], a robust and reliable power system is proposed, combining a machine-side 3-switch buck rectifier and a grid-side Z-source inverter as a bridge between the grid and the machine. Figure 6 shows the schematic diagram of a PMSG-based WECS with Z-source converter. Space-vector modulation and Z-source network operation principles are utilized to develop the modulation scheme. Two control approaches, unity-power-factor control and rotor-flux-orientation control, are considered to develop the optimized proposed control. Decoupled active- and reactive-power control is attained independently. MPPT is achieved through control of the shoot-through duty cycles of the Z-source network.

Figure 6 Schematic diagram of PMSG based WECS with Z-source converter.

In [23], solutions based on diode rectifiers are evaluated with respect to the waveforms of electrical and mechanical quantities, efficiency and torque ripple, with various connections (6-pulse and 12-pulse rectifiers). Rectifiers with a high number of pulses can be utilized without using special phase-shifting transformers, which makes them very useful for low- to medium-power applications. A scheme comprising three individual thyristors and two 3-$\phi$ diode bridges has been implemented for high-power variable-speed PMSG [24].
Here each diode-bridge rectifier is supplied by a 3-$\phi$ power source and their outputs are connected in parallel. Three individual thyristors connect the corresponding input phases of the rectifiers. The rectifier's DC output voltage is equal to the output of a single diode bridge if the thyristors are off. If the thyristors are controlled and turned on, the outputs of the two diode bridges are cascaded and the total DC voltage is doubled. This rectifier has some important properties such as low power loss, low cost, simple control and high efficiency [24]. The effect of Vienna-rectifier voltage vectors on PMSG torque and stator flux is derived via direct torque control (DTC) [25]. In [26], the operational challenges of a 3-$\phi$ surface-mount PMSG connected to a diode rectifier are evaluated, and maximum power transfer is obtained using an analytical steady-state model of the system. In [27], a linearized average dynamic model of the WECS including PMSG, diode rectifier and boost converter is presented. The relationship between electromagnetic torque and converter current is also extracted; the system's control loops are then developed using the linearized model, and the small-signal stability of the overall system is presented. Here the effect of the speed controller on the stability of the system is observed theoretically and with simulations. In [28], a novel scheme with a fuzzy fractional-order proportional integral + I (FFOPI+I) controller for grid-connected PMSG-based variable-speed WECS is proposed. The controller is employed to control the system with nonlinear load through a bidirectional converter. The MSC aims to extract maximum power under fluctuating wind speed. The controller combines an FLC in parallel with fractional-order PI (FOPI) and conventional PI controllers. The initial parameters of the FFOPI+I controller are computed using a frequency approach to produce a search space; a particle-swarm-optimization (PSO) algorithm is then applied to choose the optimal parameters.
The performance of the FFOPI+I controller is evaluated under steady-state and transient conditions. The simulation results demonstrate the effectiveness of FFOPI+I over FOPI and its enhancement of the grid-side power factor for a wide range of wind speeds. Figure 7 shows the schematic diagram of the control scheme with FFOPI+I controller [28].

Figure 7 Schematic diagram of the control scheme with FFOPI+I controller.

## 4 Grid Connected Permanent Magnet Synchronous Generator (PMSG) Based Wind Energy Conversion System (WECS)

In [29], intelligent controllers are proposed for a Switched-Reluctance Generator (SRG) based WECS to obtain maximum power. These controllers are based on fuzzy-logic (FL) and ANN techniques. The controller adjusts the WT rotational speed by fixing the turn-on angle and varying the turn-off angle of the SRG. Simulation results show the effectiveness of the ANN controller in terms of efficiency and accuracy compared to the FL controller. In [30], a model-predictive control is proposed which provides better dynamic response and permits flexible operation of parallel-connected generators by removing the dependency on voltage and frequency synchronization. The scheme shows the operational ability of the microgrid under islanding from the distribution grid. An enhanced rapid dynamic system for regulating a Matrix Converter (MC) is proposed with a modified hysteresis current controller and an optimally tuned PI controller in [31]. An enhanced Bacterial-Foraging-Optimization (BFO) algorithm is also applied for active and reactive current control of the PMSG to achieve maximum power. Using BFO, active and reactive power can be supplied to the grid in normal and fault situations. A dynamic limiter with the BFO controller controls the reactive power injected into the grid and improves the system stability. A pitch-angle controller with rate limiter is also developed to protect the WECS from mechanical damage. An overall control technique for a hybrid wind/PV distributed generation system is presented in [32].
Figure 8 Control scheme of grid side converter (GSC).

Various energy sources are integrated into the utility grid via a DC bus. A meta-heuristic Firefly algorithm (FA) based controller is utilised for voltage and frequency control at the point of common coupling (PCC). The gains of PI and PID controllers are concurrently tuned by the powerful FA and their performance is evaluated; the dynamic response of the PID controller shows better performance compared to the PI controller. A Maximum-Power-Extraction Algorithm (MPEA) is proposed for a grid-connected PMSG-based WECS, which is feasible to implement in practice without any mechanical sensors [33]. Comprehensive models have been proposed to analyse power and voltage fluctuations, which depend on the grid parameters. Flicker emissions can be reduced by activating the developed voltage-regulation loop [34]. The q- and d-axis currents control the active and reactive power, respectively. The utility-voltage phase angle is identified by a software PLL in the synchronous reference frame. This method provides high-quality, low-cost power conversion for WECS [35]. Figure 8 shows the control scheme of a grid-connected PMSG system. In [36], two control schemes, based on sliding-mode control and classical PI controllers, are proposed to control both MSC and GSC for wind farms (WF) using PMSGs connected to a DC-bus system. The control scheme integrates a pitch-control method and MPPT to extract more power from the WF. In [37], a hardware implementation of a WECS based on three parallel-connected PMSGs is proposed, which minimizes the converter ratings and shows better performance compared to traditional schemes.

## 5 Standalone Permanent Magnet Synchronous Generator (PMSG) Based Wind Energy Conversion System (WECS)

Figure 9 shows a PMSG-based standalone WECS. Here the PMSG is connected to a load via a 3-phase AC/DC converter, a BESS, and a 3-phase DC/AC converter. Generally, standalone WECS are supported by a BESS and/or a super-capacitor/ultra-capacitor.
Figure 9 PMSG based Standalone WECS.

In [38], an effective energy-management algorithm for a standalone PMSG using the DC-link voltage is proposed. Here the variable-speed WECS with PMSG includes a battery, dump load and fuel cell. By maintaining the DC-link voltage constant at its reference value, constant inverter output is gained. An effective control scheme developed based on a PWM technique provides balanced line voltages at the PCC even for unbalanced loads. A control method for the battery with its DC/DC converter is also developed to minimize the torque pulsation of the machine, while the scheme maintains MPPT. In [39], a voltage and frequency controller is implemented using a battery energy storage system (BESS). The BESS controls the frequency and provides load levelling for varying wind velocity. Machine voltage control is achieved by supplying reactive power under variable loads and wind velocity. The performance of the system is verified as a harmonic compensator, a load balancer and leveller, and a voltage and frequency controller. An adaptive control method based on a neural-network identifier (NNI) for MPPT of a stand-alone PMSG-based WECS is proposed in [40]. This method provides an accurate mechanical-torque signal, and off-line training is not required to acquire its optimal parameter values. A block back-stepping controller is also proposed to attain the optimal rotor speed. In [41], an energy-management algorithm (EMA) for enhancing the performance of a hybrid energy-storage system with a super-capacitor is discussed. Figure 10 shows a PMSG-based standalone WECS with BESS and supercapacitor. A synchronous condenser offers reactive power and inertial aid to the system. The developed coordinated control manages the active- and reactive-power flow between system elements. The results show robust voltage and frequency regulation, effective management and maximum wind-power extraction.

Figure 10 PMSG based standalone WECS with BESS and supercapacitor [41].
In [42], a hybrid system is proposed which includes a PMSG and a DFIG integrated with battery storage. The simulation results verify both systems' voltage- and frequency-control capabilities. In [43], a parameter-independent intelligent power-management controller, which includes MPPT and a power-limit-search (PLS) algorithm for a standalone PMSG, is proposed. The PLS algorithm helps to reduce surplus energy production and minimize heat-dissipation needs by finding an optimal operating point delivering the required power in place of maximum power.

## 6 Advanced Control Technique for Permanent Magnet Synchronous Generator (PMSG) Based Wind Energy Conversion System (WECS)

In [44], a systematic loss formula in terms of the quadrature- and direct-axis currents and speed, together with an evaluation of the loss model of a high-speed PMSG, is proposed. The total loss is a concave function of the d-axis current and speed. Therefore, a mathematical derivation solving for the extreme value is employed, and an explicit expression of the two independent variables (direct-axis current and speed) at the minimum-loss point is obtained. Considering the effect of the circuit's maximum-current limit on the system's speed operating mode, a combined minimum-loss speed and direct-current control strategy is proposed. The proposed algorithm enhances the efficiency by 16.91% compared to conventional algorithms. Next, a power-control technique for PMSG-based WECS is proposed. At sub-synchronous speed, the optimal reference torque is found from the rotor dynamic characteristics without wind-velocity sensors. At super-synchronous speed, flux weakening helps to use the maximum torque under given current and voltage limits. SVM-based direct torque control (SVM-DTC) helps to generate the torque-angle and flux references [45]. Direct model-predictive control (DMPC), without an extra modulator, takes care of the power converter's switching non-linearity. Its one-switching-vector-per-control-interval nature causes big ripples in the control variables.
To overcome this issue, multiple-vector direct model predictive power control (MV-DMPPC) is presented for the GSC based on an FPGA [46]. MV-DMPPC shows improved performance over DMPPC with duty-cycle optimization. Three current control methods for the MSC based on rotor-flux-oriented control are presented in [47]. An integral sliding-mode controller (ISMC), consisting of two integral switching functions for the stator d-q axis currents, controls the currents in the synchronous reference frame. Then finite-control-set model predictive control (FCS-MPC) replaces the ISMC to control the stator currents; in FCS-MPC, the switching action minimizes a predefined cost function over the next sampling interval. Finally, a conventional PI controller is developed to benchmark the new controllers. In [48], a feedback-linearization-based particle swarm optimization (PSO) for selecting the optimal operating point for MPPT of a PMSG-based WECS is proposed and verified. A Vienna-rectifier-based MPC evaluates the eight possible voltage vectors of the rectifier to enhance the performance of the PMSG-based WECS [49]. A cost function selects the optimal voltage vector, which is used for ripple minimization, and the final switching set is chosen with the neutral-point voltage-unbalance issue in mind. Output-power smoothing in the low- and high-frequency regions, based on coordinated control of the dc-link voltage and the pitch angle of the wind turbine, is proposed in [50]. Blade stress is alleviated because the pitch action is reduced at high frequencies and handles only the low-frequency component, while the dc-link capacitor size is reduced by limiting its charge/discharge duty. A new voltage and frequency control method for a standalone WECS handling variable loads is proposed in [51]. The scheme controls the GSC for maximum power extraction from the wind; a dynamic model of the dc bus and a small-signal analysis are also presented. In [52], predictive current control that predicts the generator behaviour using a mathematical model is proposed.
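Model-based prediction of this kind reduces to a one-step discretization of the stator RL equation, evaluated with the controller's parameter estimates. The sketch below runs the same step once with "true" parameters and once with a deliberately mismatched model inductance; all numbers are illustrative assumptions:

```python
# One-step current prediction for an RL machine model, as used in predictive
# current control. The parameter mismatch below shows how a wrong inductance
# skews the prediction. All values are illustrative assumptions.

DT = 1e-4  # prediction horizon, one sampling period (s)

def predict(i, v, e, R, L):
    """Euler discretization of L * di/dt = v - R*i - e."""
    return i + DT / L * (v - R * i - e)

# "true" machine vs. the controller's model of it
R_TRUE, L_TRUE = 0.5, 5e-3
R_MODEL, L_MODEL = 0.5, 4e-3      # 20% inductance error, assumed

i, v, e = 10.0, 200.0, 100.0      # present current (A), applied voltage, EMF
i_actual = predict(i, v, e, R_TRUE, L_TRUE)
i_predicted = predict(i, v, e, R_MODEL, L_MODEL)
print(round(i_actual, 3), round(i_predicted, 3))
```

Even this small inductance error shifts the one-step prediction noticeably, which is the error source the correction scheme targets.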
The model parameters may differ from their actual values, which leads to inaccurate prediction and degrades the predictive algorithm. An extension of the algorithm to enhance the prediction accuracy is presented, which decreases the current ripple and improves the robustness of the system against parameter uncertainties. A new sliding-mode observer (SMO) for sensorless control of the PMSG is proposed in [53]. The observer is constructed from a back-EMF model, and accurate estimation of the rotor position and speed is achieved. In [54], output-power control based on a combined high-order sliding-mode (HOSM) controller is proposed for a PMSG-based WECS in a standalone hybrid generation system with a BESS and other generation subsystems. Figure 11 shows the proposed control scheme. The controller presents chattering-free behaviour, simplicity, and robustness with respect to disturbances.

Figure 11 Schematic diagram for combined high-order-sliding-mode (HOSM) control [54].

In [55], a direct-current-based d-q vector control method integrating fuzzy, adaptive, and conventional PID control is proposed, which enhances the optimal performance of the system. A PMSG control based on a direct-current vector control process for the MSC and GSC is proposed to capture maximum power from the wind [56]; this system achieves excellent performance under various conditions. A unified power control is presented for PMSG-based WECS operating under various grid conditions [57]. The designed scheme yields quicker and more accurate power responses than the variable-structure control scheme, which is advantageous for grid recovery. A prototype of a mechanically sensorless control technique for a 20-kW PMSG for maximum power tracking is proposed in [58].

Figure 12 Flowchart of optimum design process.

## 7 Optimization Approach for Permanent Magnet Synchronous Generator (PMSG) Based Wind Energy Conversion System (WECS)

Two intelligent techniques, an adaptive fuzzy logic controller (AFLC) and PSO, are proposed in [59] to enhance the DTC performance of a PMSG-based WECS.
The AFLC replaces the traditional comparators and switching table and regulates the PI parameters in real time. PSO, which keeps the switching frequency constant, is used as an alternative way to tune the PI parameters. These controllers reduce flux and torque ripples and improve dynamic and steady-state efficiency. Pitch control combined with optimization, delay-perturbation approximation, and signal-compensation methods is proposed in [60]. A direct-search-optimization-based controller provides a delay-free pitch model, a delay estimator calculates the perturbation due to the delay, and the signal-compensation method eliminates the effect of the delay perturbation on the turbine output. A multi-physics design optimization of the PMSG, whose goal is to reduce cost, is proposed in [61]. The multi-physics machine model and the cost and loss models are considered in the design. The converter's control scheme, which affects the system cost, is analysed; here, the optimization determines the phase angle of the generator current. Fuzzy sliding-mode loss-minimization control and an efficient online-training radial basis function network (RBFN) for turbine pitch-angle control are presented in [62]. MPPT algorithms for optimal wind-energy capture using an RBFN and a torque-observer MPPT algorithm are proposed in [63]. Here, an efficient RBFN (based on back-propagation) and a modified PSO regulate the controller for sensorless control of the PMSG. In [64], an optimal design procedure for the PI controller parameters of the frequency converter using genetic algorithms (GAs) is presented. Figure 12 shows the control algorithm for the GA. The GA also enhances fault ride-through capability.

## 8 Conclusion

In this paper, various PMSG topologies (with controlled and uncontrolled rectifiers, grid-connected operation, and standalone operation), different control algorithms, and optimization techniques for PMSG have been discussed with respect to maximum power generation. Each technique is assessed according to the required specification in terms of the parameters used.
Also, a comparative analysis of various PMSG-based variable-speed WECS techniques, with advantages and future recommendations, is given in Table 1. The table summarizes the advantages and the research gap of each technique. The performance of the PMSG can be enhanced by adopting several control mechanisms with the aid of advanced optimization techniques. This study thus serves as useful background for future research directions.

Table 1 Comparative analysis of PMSG techniques

| Sl. No. | Paper Title | Topology Used | Outcomes | Research Gap |
| --- | --- | --- | --- | --- |
| 1 | A medium-frequency transformer-based WECS used for current-source converter-based offshore wind farm [18]. | PMSG with controlled & uncontrolled rectifier | Low power loss, low cost, simple control, more efficient; achieved better performance. | More complexity. |
| 2 | Experimental enhancement of fuzzy fractional order PI+I controller of grid connected variable speed wind energy conversion system [28]. | Grid-connected PMSG | Improved the grid-side power factor for a wide range of wind speeds. | Average cost and high complexity. |
| 3 | A novel online training neural network-based algorithm for wind speed estimation and adaptive control of PMSG wind turbine system for maximum power extraction [40]. | Stand-alone PMSG | Provides good accuracy. | More time consumption due to the training phase of the neural network. |
| 4 | A control approach for small-scale PMSG-based WECS in the whole wind speed range [45]. | Control techniques | 10 kW wind turbine for commercial applications. | High cost. |
| 5 | A comparative experimental study of direct torque control based on adaptive fuzzy logic controller and particle swarm optimization algorithms of a PMSG [59]. | Optimization approach | Keeps a constant switching frequency, which enhances the PMSM drive control performance. | More time consumption. |

## References

[1] Sindhya, K., Manninen, A., Miettinen, K., and Pippuri, J. (2017).
Design of a Permanent Magnet Synchronous Generator Using Interactive Multiobjective Optimization. IEEE Transactions on Industrial Electronics, 64(12), 9776–9783.
[2] Dehghan, S. M., Mohamadian, M., and Varjani, A. Y. (2009). A new variable-speed wind energy conversion system using permanent-magnet synchronous generator and Z-source inverter. IEEE Transactions on Energy Conversion, 24(3), 714–724.
[3] Nakano, M., Kometani, H., and Kawamura, M. (2006). A study on eddy-current losses in rotors of surface permanent-magnet synchronous machines. IEEE Transactions on Industry Applications, 42(2), 429–435.
[4] Qiao, W., Qu, L., and Harley, R. G. (2009). Control of IPM synchronous generator for maximum wind power generation considering magnetic saturation. IEEE Transactions on Industry Applications, 45(3), 1095–1105.
[5] Semken, R. S., et al. (2012). Direct-drive permanent magnet generators for high-power wind turbines: Benefits and limiting factors. IET Renewable Power Generation, 6(1), 1–8.
[6] Bernardes, T., Montagner, V. F., Gründling, H. A., and Pinheiro, H. (2014). Discrete-time sliding mode observer for sensorless vector control of permanent magnet synchronous machine. IEEE Transactions on Industrial Electronics, 61(4), 1679–1691.
[7] Chen, P. Y., Hu, K. W., Lin, Y. G., and Liaw, C. M. (2017). Development of a prime mover emulator using a permanent-magnet synchronous motor drive. IEEE Transactions on Power Electronics (Early Access).
[8] Tan, K., and Islam, S. (2004). Optimum control strategies in energy conversion of PMSG wind turbine system without mechanical sensors. IEEE Transactions on Energy Conversion, 19(2), 392–399.
[9] Chinchilla, M., Arnaltes, S., and Burgos, J. C. (2006). Control of permanent-magnet generators applied to variable-speed wind-energy systems connected to the grid. IEEE Transactions on Energy Conversion, 21(1), 130–135.
[10] Morimoto, S., Nakayama, H., Sanada, M., and Takeda, Y. (2005).
Sensorless output maximization control for variable-speed wind generation system using IPMSG. IEEE Transactions on Industry Applications, 41(1), 60–67.
[11] Soong, W. L., and Ertugrul, N. (2004). Inverterless high-power interior permanent-magnet automotive alternator. IEEE Transactions on Industry Applications, 40(4), 1083–1091.
[12] Bhende, C. N., Mishra, S., and Malla, S. G. (2011). Permanent magnet synchronous generator-based standalone wind energy supply system. IEEE Transactions on Sustainable Energy, 2(4), 361–373.
[13] Polinder, H., Van der Pijl, F. F., De Vilder, G. J., and Tavner, P. J. (2006). Comparison of direct-drive and geared generator concepts for wind turbines. IEEE Transactions on Energy Conversion, 21(3), 725–733.
[14] Grabic, S., Celanovic, N., and Katic, V. A. (2008). Permanent magnet synchronous generator cascade for wind turbine application. IEEE Transactions on Power Electronics, 23(3), 1136–1142.
[15] Izadbakhsh, M., Rezvani, A., Gandomkar, M., and Mirsaeidi, S. (2015). Dynamic analysis of PMSG wind turbine under variable wind speeds and load conditions in the grid connected mode. Indian Journal of Science and Technology, 8(14), 1.
[16] Senjyu, T., Sakamoto, R., Urasaki, N., Funabashi, T., and Sekine, H. (2006). Output power leveling of wind farm using pitch angle control with fuzzy neural network. In IEEE Power Engineering Society General Meeting.
[17] Wei, C., Zhang, Z., Qiao, W., and Qu, L. (2016). An adaptive network-based reinforcement learning method for MPPT control of PMSG wind energy conversion systems. IEEE Transactions on Power Electronics, 31(11), 7837–7848.
[18] Wei, Q., Wu, B., Xu, D., and Zargari, N. R. (2018). Bipolar operation investigation of current source converter based wind energy conversion systems. IEEE Transactions on Power Electronics, 33(2), 1294–1302.
[19] Yaramasu, V., Dekka, A., Durán, M. J., Kouro, S., and Wu, B. (2017).
PMSG-based wind energy conversion systems: survey on power converters and controls. IET Electric Power Applications, 11(6), 956–968.
[20] Tan, Y., Muttaqi, K. M., Ciufo, P., and Meegahapola, L. (2017). Enhanced frequency response strategy for a PMSG-based wind energy conversion system using ultracapacitor in remote area power supply systems. IEEE Transactions on Industry Applications, 53(1), 549–558.
[21] Wei, Q., Wu, B., Xu, D., and Zargari, N. R. (2017). A medium-frequency transformer-based wind energy conversion system used for current-source converter-based offshore wind farm. IEEE Transactions on Power Electronics, 32(1), 248–259.
[22] Zhang, S., Tseng, K. J., Vilathgamuwa, D. M., Nguyen, T. D., and Wang, X. Y. (2011). Design of a robust grid interface system for PMSG-based wind turbine generators. IEEE Transactions on Industrial Electronics, 58(1), 316–328.
[23] Di Gerlando, A., Foglia, G., Iacchetti, M. F., and Perini, R. (2012). Analysis and test of diode rectifier solutions in grid-connected wind energy conversion systems employing modular permanent-magnet synchronous generators. IEEE Transactions on Industrial Electronics, 59(5), 2135–2146.
[24] Wang, J., Xu, D., Wu, B., and Luo, Z. (2011). A low-cost rectifier topology for variable-speed high-power PMSG wind turbines. IEEE Transactions on Power Electronics, 26(8), 2192–2200.
[25] Rajaei, A., Mohamadian, M., and Varjani, A. Y. (2013). Vienna-rectifier-based direct torque control of PMSG for wind energy application. IEEE Transactions on Industrial Electronics, 60(7), 2919–2929.
[26] Iacchetti, M. F., Foglia, G. M., Di Gerlando, A., and Forsyth, A. J. (2015). Analytical evaluation of surface-mounted PMSG performances connected to a diode rectifier. IEEE Transactions on Energy Conversion, 30(4), 1367–1375.
[27] Rahimi, M. (2017). Modeling, control and stability analysis of grid connected PMSG based wind turbine assisted with diode rectifier and boost converter.
International Journal of Electrical Power & Energy Systems, 93, 84–96.
[28] Beddar, A., Bouzekri, H., Babes, B., and Afghoul, H. (2016). Experimental enhancement of fuzzy fractional order PI+I controller of grid connected variable speed wind energy conversion system. Energy Conversion and Management, 123, 569–580.
[29] Rahmanian, E., Akbari, H., and Sheisi, G. H. (2017). Maximum power point tracking in grid connected wind plant by using intelligent controller and switched reluctance generator. IEEE Transactions on Sustainable Energy, 8(3), 1313–1320.
[30] Tan, K. T., Sivaneasan, B., Peng, X. Y., and So, P. L. (2016). Control and operation of a dc grid-based wind power generation system in a microgrid. IEEE Transactions on Energy Conversion, 31(2), 496–505.
[31] Saad, N. H., El-Sattar, A. A., and Marei, M. E. (2017). Improved bacterial foraging optimization for grid connected wind energy conversion system based PMSG with matrix converter. Ain Shams Engineering Journal.
[32] Chaurasia, G. S., Singh, A. K., Agrawal, S., and Sharma, N. K. (2017). A meta-heuristic firefly algorithm based smart control strategy and analysis of a grid connected hybrid photovoltaic/wind distributed generation system. Solar Energy, 150, 265–274.
[33] Duan, R. Y., Lin, C. Y., and Wai, R. J. (2006). Maximum-power-extraction algorithm for grid-connected PMSG wind generation system. In 32nd Annual Conference on IEEE Industrial Electronics (IECON), 4248–4253.
[34] Alaboudy, A. H. K., Daoud, A. A., Desouky, S. S., and Salem, A. A. (2013). Converter controls and flicker study of PMSG-based grid connected wind turbines. Ain Shams Engineering Journal, 4(1), 75–91.
[35] Song, S. H., Kang, S. I., and Hahm, N. K. (2003). Implementation and control of grid connected AC-DC-AC power converter for variable speed wind energy conversion system. In Applied Power Electronics Conference and Exposition (APEC’03), Vol. 1, 154–158.
[36] Errami, Y., Ouassaid, M., and Maaroufi, M. (2015).
A performance comparison of a nonlinear and a linear control for grid connected PMSG wind energy conversion system. International Journal of Electrical Power & Energy Systems, 68, 180–194.
[37] Park, K. W., and Lee, K. B. (2010). Hardware simulator development for a 3-parallel grid-connected PMSG wind power system. Journal of Power Electronics, 10(5), 555–562.
[38] Bhende, C. N., Mishra, S., and Malla, S. G. (2011). Permanent magnet synchronous generator-based standalone wind energy supply system. IEEE Transactions on Sustainable Energy, 2(4), 361–373.
[39] Sharma, S., and Singh, B. (2012). Control of permanent magnet synchronous generator-based stand-alone wind energy conversion system. IET Power Electronics, 5(8), 1519–1526.
[40] Jaramillo-Lopez, F., Kenne, G., and Lamnabhi-Lagarrigue, F. (2016). A novel online training neural network-based algorithm for wind speed estimation and adaptive control of PMSG wind turbine system for maximum power extraction. Renewable Energy, 86, 38–48.
[41] Mendis, N., Muttaqi, K. M., and Perera, S. (2014). Management of battery-supercapacitor hybrid energy storage and synchronous condenser for isolated operation of PMSG based variable-speed wind turbine generating systems. IEEE Transactions on Smart Grid, 5(2), 944–953.
[42] Mendis, N., Muttaqi, K. M., Sayeef, S., and Perera, S. (2012). Standalone operation of wind turbine-based variable speed generators with maximum power extraction capability. IEEE Transactions on Energy Conversion, 27(4), 822–834.
[43] Hui, J. C., Bakhshai, A., and Jain, P. K. (2016). An energy management scheme with power limit capability and an adaptive maximum power point tracking for small standalone PMSG wind energy systems. IEEE Transactions on Power Electronics, 31(7), 4861–4875.
[44] Duan, J., Fan, S., Zhang, K., An, Q., Sun, L., and Wang, G. (2017). Minimum loss control of high-speed PMSG with variable speed operation. International Journal of Electronics, 104(9), 1562–1577.
[45] Shafiei, A., Dehkordi, B. M., Kiyoumarsi, A., and Farhangi, S. (2017). A control approach for a small-scale PMSG-based WECS in the whole wind speed range. IEEE Transactions on Power Electronics, 32(12), 9117–9130.
[46] Zhang, Z., Fang, H., Gao, F., Rodriguez, J., and Kennel, R. (2017). Multiple-vector model predictive power control for grid-tied wind turbine system with enhanced steady-state control performance. IEEE Transactions on Industrial Electronics, 64(8), 6287–6298.
[47] Shehata, E. G. (2017). A comparative study of current control schemes for a direct-driven PMSG wind energy generation system. Electric Power Systems Research, 143, 197–205.
[48] Soufi, Y., Kahla, S., and Bechouat, M. (2016). Feedback linearization control based particle swarm optimization for maximum power point tracking of wind turbine equipped by PMSG connected to the grid. International Journal of Hydrogen Energy, 41(45), 20950–20955.
[49] Lee, J. S., Bak, Y., Lee, K. B., and Blaabjerg, F. (2016). MPC-SVM method for Vienna rectifier with PMSG used in wind turbine systems. In Applied Power Electronics Conference and Exposition (APEC), 3416–3421.
[50] Uehara, A., Pratap, A., Goya, T., Senjyu, T., Yona, A., Urasaki, N., and Funabashi, T. (2011). A coordinated control method to smooth wind power fluctuations of a PMSG-based WECS. IEEE Transactions on Energy Conversion, 26(2), 550–558.
[51] Haque, M. E., Negnevitsky, M., and Muttaqi, K. M. (2008). A novel control strategy for a variable speed wind turbine with a permanent magnet synchronous generator. In Industry Applications Society Annual Meeting, IAS’08, 1–8.
[52] Siami, M., Khaburi, D. A., Abbaszadeh, A., and Rodríguez, J. (2016). Robustness improvement of predictive current control using prediction error correction for permanent-magnet synchronous machines. IEEE Transactions on Industrial Electronics, 63(6), 3458–3466.
[53] Qiao, Z., Shi, T., Wang, Y., Yan, Y., Xia, C., and He, X. (2013).
New sliding-mode observer for position sensorless control of permanent-magnet synchronous motor. IEEE Transactions on Industrial Electronics, 60(2), 710–719.
[54] Valenciaga, F., and Puleston, P. F. (2008). High-order sliding control for a wind energy conversion system based on a permanent magnet synchronous generator. IEEE Transactions on Energy Conversion, 23(3), 860–867.
[55] Li, S., Haskew, T. A., and Xu, L. (2010). Conventional and novel control designs for direct driven PMSG wind turbines. Electric Power Systems Research, 80(3), 328–338.
[56] Li, S., Haskew, T. A., Swatloski, R. P., and Gathings, W. (2012). Optimal and direct-current vector control of direct-driven PMSG wind turbines. IEEE Transactions on Power Electronics, 27(5), 2325–2337.
[57] Geng, H., Yang, G., Xu, D., and Wu, B. (2011). Unified power control for PMSG-based WECS operating under different grid conditions. IEEE Transactions on Energy Conversion, 26(3), 822–830.
[58] Tan, K., and Islam, S. (2004). Optimum control strategies in energy conversion of PMSG wind turbine system without mechanical sensors. IEEE Transactions on Energy Conversion, 19(2), 392–399.
[59] Mesloub, H., Benchouia, M. T., Golea, A., Golea, N., and Benbouzid, M. E. H. (2017). A comparative experimental study of direct torque control based on adaptive fuzzy logic controller and particle swarm optimization algorithms of a permanent magnet synchronous motor. The International Journal of Advanced Manufacturing Technology, 90(1–4), 59–72.
[60] Gao, R., and Gao, Z. (2016). Pitch control for wind turbine systems using optimization, estimation and compensation. Renewable Energy, 91, 501–515.
[61] Bazzo, T. D. P. M., Kölzer, J. F., Carlson, R., Wurtz, F., and Gerbaud, L. (2017). Multiphysics design optimization of a permanent magnet synchronous generator. IEEE Transactions on Industrial Electronics, 64(12), 9815–9823.
[62] Lin, W. M., Hong, C. M., Ou, T. C., and Chiu, T. M. (2011).
Hybrid intelligent control of PMSG wind generation system using pitch angle control with RBFN. Energy Conversion and Management, 52(2), 1244–1251.
[63] Hong, C. M., Chen, C. H., and Tu, C. S. (2013). Maximum power point tracking-based control algorithm for PMSG wind generation system without mechanical sensors. Energy Conversion and Management, 69, 58–67.
[64] Hasanien, H. M., and Muyeen, S. M. (2012). Design optimization of controller parameters used in variable speed wind energy conversion system by genetic algorithms. IEEE Transactions on Sustainable Energy, 3(2), 200–208.
[65] Patel, A., Arya, S. R., and Jain, A. (2016). Variable step learning based control algorithm for power quality in PMSG based power generation system. In Power India International Conference (PIICON), 1–6.
[66] Di Gerlando, A., Foglia, G., Iacchetti, M. F., and Perini, R. (2012). Analysis and test of diode rectifier solutions in grid-connected wind energy conversion systems employing modular permanent-magnet synchronous generators. IEEE Transactions on Industrial Electronics, 59(5), 2135–2146.
[67] Bianchini, C., Immovilli, F., Lorenzani, E., Bellini, A., and Buticchi, G. (2012). Micro wind turbine system integration guidelines: PMSG and inverter front end choices. In IECON 2012, 38th Annual Conference of the IEEE Industrial Electronics Society, 1073–1078.
[68] Orlando, N. A., Liserre, M., Mastromauro, R. A., and Dell’Aquila, A. (2013). A survey of control issues in PMSG-based small wind-turbine systems. IEEE Transactions on Industrial Informatics, 9(3), 1211–1221.
[69] Jain, A., Vijay, C. V., Shravanthi, S., and Gokul, S. (2016). Comparative analysis of direct power control (DPC) and direct voltage control (DVC) for control of doubly fed induction generator (DFIG) connected to a variable speed wind turbine. International Journal of Control Theory and Applications, 9(18), 8961–8971.
## Biographies

Anjana Jain completed her BE in EEE in 2001 and her ME in Control Systems in 2005 from Jabalpur Engineering College, Jabalpur, RDVV, MP, India. She is currently working as an Assistant Professor in the Dept. of EEE, Amrita Vishwa Vidyapeetham, Bengaluru, and is pursuing her PhD. She has 13 years of total teaching experience, and her areas of research include renewable energy (wind generation) and power electronics.

Dr. S. Shankar completed his BE in Instrumentation Technology in 2005 from KNS Institute of Technology, Bangalore, Karnataka, India, and his MTech in Power Electronics in 2008 from RV College of Engineering, Bangalore, Karnataka, India. He received his PhD degree from IIT Delhi, India, in 2014 and is currently working in the Dept. of EEE, Amrita Vishwa Vidyapeetham, Bengaluru, India. He has six years of combined teaching and industry experience. His areas of research include renewable energy (wind, solar) and power electronics.

Dr. V. Vanitha received her BE in EEE in 1992 from Madurai Kamaraj University, Madurai, India, and her ME in Power Systems in 1993 from Bharathidasan University, India. She received her PhD degree from Anna University, Chennai, India, and is currently working in the Dept. of EEE, Amrita School of Engineering, Coimbatore. She has 17 years of teaching experience. Her research interests are in the areas of power systems, electrical machines, renewable energy sources, and power quality.
TheInfoList In mathematics, a real-valued function is a function Function or functionality may refer to: Computing * Function key A function key is a key on a computer A computer is a machine that can be programmed to carry out sequences of arithmetic or logical operations automatically. Modern comp ... whose values In ethics Ethics or moral philosophy is a branch of philosophy Philosophy (from , ) is the study of general and fundamental questions, such as those about Metaphysics, existence, reason, Epistemology, knowledge, Ethics, values, Philoso ... are real number In mathematics Mathematics (from Greek: ) includes the study of such topics as numbers ( and ), formulas and related structures (), shapes and spaces in which they are contained (), and quantities and their changes ( and ). There is no g ... s. In other words, it is a function that assigns a real number to each member of its domain Domain may refer to: Mathematics *Domain of a function In mathematics, the domain of a Function (mathematics), function is the Set (mathematics), set of inputs accepted by the function. It is sometimes denoted by \operatorname(f), where is th ... . Real-valued functions of a real variable (commonly called ''real functions'') and real-valued functions of several real variables In mathematical analysis its applications, a function of several real variables or real multivariate function is a function Function or functionality may refer to: Computing * Function key A function key is a key on a computer A comp ... are the main object of study of calculus Calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the mathematics, mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations ... and, more generally, real analysis 200px, The first four partial sums of the Fourier series for a square wave. 
Fourier series are an important tool in real analysis.">square_wave.html" ;"title="Fourier series for a square wave">Fourier series for a square wave. Fourier series are a ... . In particular, many function space In mathematics Mathematics (from Greek: ) includes the study of such topics as numbers (arithmetic and number theory), formulas and related structures (algebra), shapes and spaces in which they are contained (geometry), and quantities and th ... s consist of real-valued functions. # Algebraic structure Let $\left(X,\right)$ be the set of all functions from a set to real numbers $\mathbb R$. Because $\mathbb R$ is a field Field may refer to: Expanses of open ground * Field (agriculture), an area of land used for agricultural purposes * Airfield, an aerodrome that lacks the infrastructure of an airport * Battlefield * Lawn, an area of mowed grass * Meadow, a grassl ... , $\left(X,\right)$ may be turned into a vector space In mathematics Mathematics (from Greek: ) includes the study of such topics as numbers (arithmetic and number theory), formulas and related structures (algebra), shapes and spaces in which they are contained (geometry), and quantities a ... and a commutative algebra Commutative algebra is the branch of algebra Algebra (from ar, الجبر, lit=reunion of broken parts, bonesetting, translit=al-jabr) is one of the areas of mathematics, broad areas of mathematics, together with number theory, geometry ... over the reals with the following operations: *$f+g: x \mapsto f\left(x\right) + g\left(x\right)$ vector addition In mathematics Mathematics (from Ancient Greek, Greek: ) includes the study of such topics as quantity (number theory), mathematical structure, structure (algebra), space (geometry), and calculus, change (mathematical analysis, analysis). I ... 
*$\mathbf: x \mapsto 0$ additive identity In mathematics Mathematics (from Greek: ) includes the study of such topics as numbers (arithmetic and number theory), formulas and related structures (algebra), shapes and spaces in which they are contained (geometry), and quantities and the ... *$c f: x \mapsto c f\left(x\right),\quad c \in \mathbb R$ scalar multiplication In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra (or more generally, a module (mathematics), module in abstract algebra). In common geometrical contexts, scalar multiplication of a re ... *$f g: x \mapsto f\left(x\right)g\left(x\right)$ pointwiseIn mathematics, the qualifier pointwise is used to indicate that a certain property is defined by considering each value f(x) of some function f. An important class of pointwise concepts are the ''pointwise operations'', that is, operations defined o ... multiplication These operations extend to partial function In mathematics Mathematics (from Greek: ) includes the study of such topics as numbers (arithmetic and number theory), formulas and related structures (algebra), shapes and spaces in which they are contained (geometry), and quantities an ... s from to $\mathbb R,$ with the restriction that the partial functions and are defined only if the domains of and have a nonempty intersection; in this case, their domain is the intersection of the domains of and . Also, since $\mathbb R$ is an ordered set, there is a partial order upright=1.15, Fig.1 The set of all subsets of a three-element set \, ordered by set inclusion">inclusion Inclusion or Include may refer to: Sociology * Social inclusion, affirmative action to change the circumstances and habits that leads to s ... 
*$\ f \le g \quad\iff\quad \forall x: f\left(x\right) \le g\left(x\right),$ on $\left(X,\right),$ which makes $\left(X,\right)$ a partially ordered ring In abstract algebra, a partially ordered ring is a Ring (mathematics), ring (''A'', +, ·), together with a ''compatible partial order'', i.e. a partial order \leq on the underlying set ''A'' that is compatible with the ring operations in the sense ... . # Measurable The σ-algebra of Borel set In mathematics Mathematics (from Greek: ) includes the study of such topics as numbers (arithmetic and number theory), formulas and related structures (algebra), shapes and spaces in which they are contained (geometry), and quantities and the ... s is an important structure on real numbers. If has its σ-algebra and a function is such that the preimage In mathematics Mathematics (from Ancient Greek, Greek: ) includes the study of such topics as quantity (number theory), mathematical structure, structure (algebra), space (geometry), and calculus, change (mathematical analysis, analysis). ... of any Borel set belongs to that σ-algebra, then is said to be measurable In mathematics Mathematics (from Ancient Greek, Greek: ) includes the study of such topics as quantity (number theory), mathematical structure, structure (algebra), space (geometry), and calculus, change (mathematical analysis, analysis). ... . Measurable functions also form a vector space and an algebra as explained above in . Moreover, a set (family) of real-valued functions on can actually ''define'' a σ-algebra on generated by all preimages of all Borel sets (or of intervals only, it is not important). This is the way how σ-algebras arise in ( Kolmogorov's) probability theory Probability theory is the branch of mathematics Mathematics (from Greek: ) includes the study of such topics as numbers (arithmetic and number theory), formulas and related structures (algebra), shapes and spaces in which they are containe ... 
, where real-valued functions on the sample space are real-valued random variables.

# Continuous

Real numbers form a topological space and a complete metric space. Continuous real-valued functions (which implies that $X$ is a topological space) are important in theories of topological spaces and of metric spaces. The extreme value theorem states that for any real continuous function on a compact space its global maximum and minimum exist. The concept of metric space itself is defined with a real-valued function of two variables, the ''metric'', which is continuous. The space of continuous functions on a compact Hausdorff space has a particular importance. Convergent sequences also can be considered as real-valued continuous functions on a special topological space. Continuous functions also form a vector space and an algebra as explained above, and are a subclass of measurable functions because any topological space has the σ-algebra generated by open (or closed) sets.

# Smooth

Real numbers are used as the codomain to define smooth functions.
A domain of a real smooth function can be the real coordinate space (which yields a real multivariable function), a topological vector space, an open subset of them, or a smooth manifold. Spaces of smooth functions also are vector spaces and algebras as explained above, and are subspaces of the space of continuous functions.

# Appearances in measure theory

A measure on a set is a non-negative
real-valued functional on a σ-algebra of subsets. (Actually, a measure may have values in $[0, +\infty]$: see the extended real number line.) L''p'' spaces on sets with a measure are defined from the aforementioned real-valued measurable functions, although they are actually quotient spaces. More precisely, whereas a function satisfying an appropriate summability condition defines an element of an L''p'' space, in the opposite direction, for any $f \in L^p(X)$ and $x \in X$ which is not an atom, the value $f(x)$ is undefined. Though, real-valued L''p'' spaces still have some of the structure described above. Each of the L''p'' spaces is a vector space and has a partial order, and there exists a pointwise multiplication of "functions" which changes $p$, namely
:$\cdot: L^{1/\alpha} \times L^{1/\beta} \to L^{1/(\alpha+\beta)},\quad 0 \le \alpha,\beta \le 1,\quad\alpha+\beta \le 1.$
For example, the pointwise product of two L''2'' functions belongs to L''1''.

# Other appearances

Other contexts where real-valued functions and their special properties are used include monotonic functions (on ordered sets), convex functions (on vector and affine spaces), harmonic and subharmonic functions (on Riemannian manifolds), analytic functions (usually of one or more real variables), algebraic functions (on real algebraic varieties), and polynomials (of one or more real variables).

# See also

* Real analysis
* Partial differential equations, a major user of real-valued functions
* Norm (mathematics)
* Scalar (mathematics)

# References

* Gerald Folland
, ''Real Analysis: Modern Techniques and Their Applications'', Second Edition, John Wiley & Sons, Inc., 1999.
# L varies jointly as a and square root of b, and L = 72 when a = 8 and b = 9. Find L when a = 1/2 and b = 36? Y varies jointly as the cube of x and the square root of w, and Y = 128 when x = 2 and w = 16. Find Y when x = 1/2 and w = 64?

Jul 24, 2017

$L = 9 \text{ and } y = 4$

#### Explanation:

The initial statement is $L \propto a \sqrt{b}$. To convert this to an equation, multiply by $k$, the constant of variation:

$\Rightarrow L = k a \sqrt{b}$

To find $k$, use the given conditions: $L = 72$ when $a = 8$ and $b = 9$.

$L = k a \sqrt{b} \Rightarrow k = \frac{L}{a \sqrt{b}} = \frac{72}{8 \times \sqrt{9}} = \frac{72}{24} = 3$

The equation is $L = 3 a \sqrt{b}$.

When $a = 1/2$ and $b = 36$:

$L = 3 \times \frac{1}{2} \times \sqrt{36} = 3 \times \frac{1}{2} \times 6 = 9$

Similarly,

$y = k {x}^{3} \sqrt{w}$

with $y = 128$ when $x = 2$ and $w = 16$:

$k = \frac{y}{{x}^{3} \sqrt{w}} = \frac{128}{8 \times 4} = \frac{128}{32} = 4$

The equation is $y = 4 {x}^{3} \sqrt{w}$.

When $x = 1/2$ and $w = 64$:

$y = 4 \times {\left(\frac{1}{2}\right)}^{3} \times \sqrt{64} = 4 \times \frac{1}{8} \times 8 = 4$
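The arithmetic above can be double-checked with a few lines of Ruby (an illustrative sketch, not part of the original answer):

```ruby
# Check both joint-variation answers numerically.
k1    = 72.0 / (8 * Math.sqrt(9))       # constant of variation for L => 3.0
l_new = k1 * 0.5 * Math.sqrt(36)        # L when a = 1/2, b = 36

k2    = 128.0 / (2**3 * Math.sqrt(16))  # constant of variation for y => 4.0
y_new = k2 * 0.5**3 * Math.sqrt(64)     # y when x = 1/2, w = 64
```

Both results agree with the worked solution: L = 9 and y = 4.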
# How do you find the vertex and the intercepts for f(x)=-x^2+2x+4?

Jul 30, 2018

$\text{vertex } = \left(1 , 5\right) , \quad x = 1 \pm \sqrt{5}$

#### Explanation:

The equation of a parabola in vertex form is

$y = a {\left(x - h\right)}^{2} + k$

where $\left(h , k\right)$ are the coordinates of the vertex and $a$ is a multiplier. To obtain this form, complete the square:

$y = - \left({x}^{2} - 2 x + 1 - 1 - 4\right)$

$\quad = - {\left(x - 1\right)}^{2} + 5$

$\text{vertex } = \left(1 , 5\right)$

Let $x = 0$ for the y-intercept:

$y = - 1 + 5 = 4 \leftarrow \text{y-intercept}$

Let $y = 0$ for the x-intercepts:

$- {\left(x - 1\right)}^{2} + 5 = 0$

${\left(x - 1\right)}^{2} = 5$

$x - 1 = \pm \sqrt{5}$

$x = 1 \pm \sqrt{5} \leftarrow \text{exact values}$
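A quick numeric check of the vertex and intercepts (an illustrative sketch, not part of the original answer):

```ruby
# Verify the vertex, y-intercept, and roots of f(x) = -x^2 + 2x + 4.
f = ->(x) { -x**2 + 2 * x + 4 }

vertex = [1.0, f.(1.0)]                       # completing the square gives (1, 5)
y_int  = f.(0.0)                              # y-intercept
roots  = [1 - Math.sqrt(5), 1 + Math.sqrt(5)] # x-intercepts (exact values)
```

Evaluating f at each root returns zero to within floating-point error.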
# Windowless percentile tracking

This is a method driven by the same constraints as the exponential moving average, but for the median and other percentiles.

We present an algorithm for tracking the value of a percentile, such as the median, in an infinite stream of observations whose distribution can change over time. A self-imposed constraint is that we cannot afford to keep the last N observations explicitly in memory. In our algorithm, the cost of updating the estimated percentile is $$O(1)$$ and the memory usage is also $$O(1)$$. A single parameter denoted $$r$$ is specified by the user and expresses the trade-off between accuracy and reactivity (to changes in the distribution of the signal).

Notations: in this document, all sequences and arrays are indexed starting from 0.

## Background: window-based algorithms

Given a fixed window length N, it is relatively straightforward to maintain a sorted array $$W$$ of the latest N observations, indexed $$0\dots N-1$$. The p-percentile is computed exactly over that window by looking up the pair of values found at index $$\lfloor (N-1)p \rfloor$$ and at index $$\lceil (N-1)p \rceil$$, and by taking their average. For example, the median $$m$$ (0.5-percentile) using a window of length $$N=100$$ is calculated as follows once we’ve read 100 observations or more:

$m = \frac{W_{49} + W_{50}}{2}$

Such algorithms are not acceptable for us since they require storing the latest N values, where N is typically chosen greater than 100.

## Background: exponential moving average

An exponential moving average is a weighted average of the previous observations, in which the weight of the latest observation is always set to a fixed parameter $$r$$ ($$0 \lt r \le 1$$) and the weight of the other values is $$(1-r)$$ times their previous weight. After reading $$\lfloor 1/r \rfloor + 1$$ observations or more, the moving average for observation i is computed as follows:

$m_i = r \cdot x_i + (1-r) \cdot m_{i-1}$

In the initial phase, where $$i \lt \lfloor 1/r \rfloor$$, $$m_i$$ is calculated as the mean of all the observations so far. The rationale is that the weight of the latest observation is the nearest possible to $$r$$ without being smaller than the weight of any of the past observations. This avoids having to set $$m_{-1}$$ to an arbitrary guess with a lasting impact. Updating m in this initial phase is done as follows:

$\hat{r}_i = \frac{1}{i+1}$

$m_i = \hat{r}_i \cdot x_i + (1-\hat{r}_i) \cdot m_{i-1}$

This exponential moving average algorithm is used in the moving percentile algorithm described below to estimate the standard deviation of the signal.

## Moving percentile algorithm

The $$p$$-percentile is represented by the variable $$m$$. It is initialized with the value of the first observation $$x_0$$:

$m_0 = x_0$

Subsequent values are updated as follows, for some value of $$\delta$$ discussed later. If $$x_i < m_{i-1}$$ then

$m_i = m_{i-1} - \frac{\delta}{p}$

else if $$x_i > m_{i-1}$$ then

$m_i = m_{i-1} + \frac{\delta}{1-p}$

else $$x_i = m_{i-1}$$ and we keep the previous value:

$m_i = m_{i-1}$

If $$\delta$$ is not too large and not too small, $$m$$ is a good estimate of the $$p$$-percentile. Choosing a good value for $$\delta$$ depends on the distribution of values around the $$p$$-percentile. Excessive values of $$\delta$$ result in big jumps of $$m$$ and limit the accuracy, while a $$\delta$$ that’s too small may take too much time to converge to the $$p$$-percentile as computed exactly using a window of a reasonable length. In order to express the trade-off between accuracy and convergence speed, we express $$\delta$$ as the product of a user-chosen constant $$r$$ and the estimated standard deviation $$\sigma$$ of the input signal:

$\delta_i = \sigma_i \cdot r$

where $$\sigma_i$$ is the square root of the variance estimated by a moving average of the sequence $$(\mu_i - x_i)^2$$, and $$\mu$$ is estimated by a moving average of x.
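As a sketch (not from the original post), the warm-up rule can be folded into a single update by taking the larger of the fixed rate $$r$$ and $$\hat{r}_i = 1/(i+1)$$:

```ruby
# Exponential moving average with the warm-up phase described above.
# While 1/(i+1) > r, the update reduces to a plain running mean;
# afterwards it is the usual EMA with fixed rate r.
def ema_update(m_prev, x, i, r)
  r_hat = [r, 1.0 / (i + 1)].max
  i.zero? ? x.to_f : r_hat * x + (1 - r_hat) * m_prev
end

# With r = 0.01, the first few estimates are exact running means.
m = nil
[1.0, 2.0, 3.0].each_with_index { |x, i| m = ema_update(m, x, i, 0.01) }
```

After the three observations above, m equals their mean, 2.0, as the warm-up rule requires.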
We find that reasonable values of $$r$$ for many applications range from 0.001 to 0.01.

The chart below shows our sample signal that was generated randomly in 3 phases:

• phase 1 (0-999): Uniform(0,1); expected 0.9-percentile = 0.9
• phase 2 (1000-1999): Uniform(2,4); expected 0.9-percentile = 3.8
• phase 3 (2000-2999): Uniform(0,1); expected 0.9-percentile = 0.9

The output for window-based percentile estimators and for our moving percentile are shown here:

Our moving 0.9-percentile, shown on the chart in green, reacts quickly when the signal shifts upward, because each update shifts the moving percentile upward by $$\frac{\delta}{0.1}$$. However, when the signal shifts downward, it takes the moving percentile more time to react, because each update shifts it downward by $$\frac{\delta}{0.9}$$, which is 9 times less than in the other direction. Additionally, we can see that the downward shift is pretty steep initially and then gets less steep. This is due to the delayed update of the standard deviation $$\sigma$$: the value of $$\delta$$ is divided by two only once the estimated $$\sigma$$ catches up, reflecting the new distribution in phase 3.

A sample implementation is available on GitHub.
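The update rule can be sketched in a few lines of Ruby. The GitHub implementation mentioned above is the authoritative version; this is an independent sketch, and details such as applying the same warm-up rule to the mean and variance estimates are my assumption:

```ruby
# Windowless moving percentile: O(1) update, O(1) memory.
# sigma is estimated via exponential moving averages of x and (x - mu)^2.
class MovingPercentile
  def initialize(p, r)
    @p = p   # target percentile, e.g. 0.9
    @r = r   # accuracy/reactivity trade-off parameter
    @i = 0   # number of observations seen so far
  end

  def update(x)
    if @i.zero?
      @m, @mu, @var = x.to_f, x.to_f, 0.0
    else
      # EMA warm-up: use 1/(i+1) while it is still larger than r.
      r_hat = [@r, 1.0 / (@i + 1)].max
      @mu  = r_hat * x + (1 - r_hat) * @mu
      @var = r_hat * (x - @mu)**2 + (1 - r_hat) * @var
      delta = Math.sqrt(@var) * @r      # delta_i = sigma_i * r
      if x < @m
        @m -= delta / @p
      elsif x > @m
        @m += delta / (1 - @p)
      end
    end
    @i += 1
    @m
  end
end
```

Feeding a few thousand Uniform(0,1) observations with p = 0.9 and r = 0.01 drives the estimate into the neighborhood of 0.9, matching phase 1 of the experiment above.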
# Interesting Fibonacci Goodness

Last year I was working on deriving a matrix-based method for translating and rotating various polygons around pentagons. In the process of this I got sidetracked and started to look into the golden mean. People swoon over how it defines the most beautiful rectangles! It crops up inside pentagons and various 3-D polyhedra. It’s glorious. It’s just like how a circle has the ratio pi, and it has a name too: phi. I’ll be writing it down in Greek from here on out (it’s originally from there anyway). So learn this shape: $\varphi$

Sometimes people write it out like this: $\phi$

Either way it’s just a variable name. Sometimes it refers to something other than the golden mean, but here it’s the golden mean. As I found powers of $\varphi$, I kept noticing certain values popping up in sequence. It turns out that one can find powers of $\varphi$ through the following formula:

$\displaystyle\frac{\sqrt{5}F_n + L_n}{2} = \varphi^n$

Where $F_n$ and $L_n$ are the nth Fibonacci and Lucas numbers, respectively. What are these numbers? Well, you probably know about the Fibonacci numbers, but I’ll go over them here. They follow a certain sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144… and so on. Typically, one starts counting with one, but they really start with zero. Start with a zero and a one. The next number is 0 + 1 = 1. The next number is 1 + 1 = 2. After that, 1 + 2 = 3. One keeps adding the current number to the previous number to get the next number. These numbers crop up in pine cones, sunflowers, and a myriad of other places.

The other, not so well known, sequence, the Lucas numbers, are defined the same exact way, but with different starting numbers: 2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199, 322, 521… forever and ever, amen. Start with 2 and 1. 2 + 1 = 3. 1 + 3 = 4. 3 + 4 = 7. If you’ve read Gödel, Escher, Bach: An Eternal Golden Braid you may have run across this sequence. They grow the same way the Fibonacci numbers do.
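Before going further, the power formula above is easy to sanity-check numerically; here is a quick sketch in Ruby (floating point, so only to modest precision):

```ruby
# Check phi**n == (sqrt(5) * F_n + L_n) / 2 for n = 0..20.
phi = (1 + Math.sqrt(5)) / 2

fib, luc = [0, 1], [2, 1]
(2..20).each do |n|
  fib << fib[-1] + fib[-2]   # Fibonacci recurrence
  luc << luc[-1] + luc[-2]   # same recurrence, Lucas starting values
end

max_err = (0..20).map { |n|
  (phi**n - (Math.sqrt(5) * fib[n] + luc[n]) / 2).abs
}.max
```

The maximum discrepancy over the first twenty powers is down at floating-point noise.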
If one takes the ratio of the Lucas numbers to the Fibonacci numbers as they grow, one will find it approximates the square root of five. I wasn’t certain that I was writing things out properly, so I started writing out everything again. Then, I decided to take my knowledge of the binomial theorem and apply it to this question. The binomial theorem is really awesome. It’s related to Pascal’s triangle. If you think that’s cool, you’re in luck. If you haven’t heard of it yet, fearest not, I shall walk thee through it. Pascal’s triangle looks like this:

$\displaystyle 1$

$\displaystyle 1 \qquad 1$

$\displaystyle 1 \qquad 2 \qquad 1$

$\displaystyle 1 \qquad 3 \qquad 3 \qquad 1$

$\displaystyle 1 \qquad 4 \qquad 6 \qquad 4 \qquad 1$

If you add two adjacent numbers in a row together, you get the number that goes between the two numbers on the next row down. It can be expressed in terms of a nifty little operation known as the factorial. The factorial takes a number and multiplies that number by every counting number less than it. So, if I take the factorial of 3, written 3!, I get 1 x 2 x 3 = 6. If I want to find the kth number of the nth row of Pascal’s triangle, I use the choose function. It is written out like this:

$\displaystyle{n \choose k} = \frac{n!}{k!(n-k)!}$

This tells me how many ways I can choose k items out of n total items. It also tells me what the kth coefficient is for a binomial raised to the nth power. If I have the binomial $x + y$ and I square it, I end up with $x^2 + 2xy + y^2$. Let me make something more obvious: $(1)x^2 + (2)xy + (1)y^2$. Notice a pattern yet? Try this one, $(1)x^3 + (3)x^2y + (3)xy^2 + (1)y^3$. See the rows in the triangle appearing?
Let me arrange those, starting with raising the binomial to the power of 0:

$\displaystyle (1)$

$\displaystyle (1)x + (1) y$

$\displaystyle (1)x^2 + (2)xy + (1)y^2$

$\displaystyle (1)x^3 + (3)x^2y + (3)xy^2 + (1)y^3$

$\displaystyle (1)x^4 + (4)x^3y +(6)x^2y^2 + (4)xy^3 + (1)y^4$

Okay, okay, I’ll stop. It should be entirely too obvious by now. Hold onto this because I’ll come back to it in a moment. First let’s talk about a number I mentioned before. The golden mean is defined mathematically as the ratio:

$\displaystyle \varphi = \frac{1+\sqrt{5}}{2}$

This, my friends, is a binomial. It’s a really special binomial: it is easy to work with and has some cool properties when the binomial theorem is applied to it. Now, let me introduce you to another formula, called the Binet formula. It gives one the nth number of the Fibonacci sequence.

$\displaystyle F_n = \frac{\varphi^n - (1-\varphi)^n}{\sqrt{5}}$

It’s a nice little gem of a formula, but let’s write it out in terms of the binomials it contains:

$\displaystyle F_n = \frac{(1+\sqrt{5})^n-(1-\sqrt{5})^n}{2^n\sqrt{5}}$

It may look more frightening, but trust me, it’s simpler this way. There is some “magic” we can do with the binomials on the top. Let me show you, but first I’ll need to introduce you to the summation sign. He’s a simple fellow who might look scary. All he does is add things together. Let’s watch him add all the numbers from 1 to 5.

$\displaystyle 1 + 2+ 3 +4 + 5 = \sum_{i=1}^5{i}$

That’s all there is to it: just start with the number 1 in the variable i, evaluate the expression, increment i, evaluate the expression, add it to the previous result, rinse, repeat as needed. Now, I’ll continue. The term $(1 + \sqrt{5})^n$ can be written as a series sum $\sum_{i=0}^n{{n \choose i}\sqrt{5}^i}$ and the term $(1 - \sqrt{5})^n$ can be written as $\sum_{i=0}^n{{n \choose i}(-\sqrt{5})^i}$. Put these together and several things happen because of the minus sign.
It actually sorts out the odd-powered terms, getting rid of the even terms (because the even terms are subtracted from each other, while the odd terms are added to each other). That leaves:

$\displaystyle F_n = \frac{1}{2^n\sqrt{5}}\sum_{i \nmid 2}^n{{n \choose i}2\sqrt{5}^i}$,

which reduces to

$\displaystyle F_n = \frac{1}{2^{n-1}}\sum_{i \nmid 2}^n {{n \choose i}5^{\frac{i-1}{2}}}$,

and the notation $\displaystyle i\nmid2$ means all i not divisible by 2, or all i with i mod 2 = 1. It’s the notation used by Ireland and Rosen, not to be confused with other notation meaning the first number divides the second number. Now you look at me and ask, “How did that get any simpler?” It didn’t really, but this has the property of proving that all numbers in the sequence will be integers, otherwise known as nice round whole numbers! How? Notice that we are summing over all odd numbers up to n, starting with 1; if one subtracts 1 from an odd number one gets an even number back. An even number is divisible by two, so that fraction in 5’s exponent reduces to an integer. Oh, I guess I still need to prove that 2 raised to n-1 evenly divides the sum. In the case of n = 0, the sum equals zero and 0 times 2 = 0. For n = 1, we have 1 times 1 = 1. Now, for every other case there is a 1 plus an odd number multiplied by a coefficient that grows faster than 2 raised to n-1. I give up for now. 🙂 I just know that there are enough twos being multiplied and added together that 2 raised to n-1 divides out evenly. The corresponding formulas for the Lucas series are:

$\displaystyle L_n = \frac{(1+\sqrt{5})^n+(1-\sqrt{5})^n}{2^n}$

and

$\displaystyle L_n = \frac{1}{2^{n-1}}\sum_{i \mid 2}^n {{n \choose i}5^{\frac{i}{2}}}$

where the summation is over all even integers up to n, starting with zero, i.e. where the condition i mod 2 = 0 holds. The next exploration I have in mind for this is to derive something called a q-analog of these series. Also, (and I’m stoked about this!)
I want to look at what the Fibonacci and Lucas sequences look like when extended into the complex plane.

#### Updated code:

Here is some Ruby code for calculating the Fibonacci and Lucas numbers using my derived methods. I took advantage of the functional aspects of Ruby to simplify writing out the computation, sorting the odd and even terms before applying the rest of the computations. Another optimization I made was to create a factorial lookup table. Instead of computing the factorial every time, it calculates all factorials in an array and looks up the number (resulting in quite a speed increase). Also, note that this version uses my fully simplified identities and will obviously only return integers. The past version suffered from rounding errors, and the result had to be cast back into an integer. Just copy and paste the following into fibonacci.rb, and type ruby fibonacci.rb 100 to get the first one hundred numbers in each sequence.

base = 5
num = ARGV[0].to_i

def is_even(n)
  n % 2 == 0
end

def is_odd(n)
  n % 2 == 1
end

def sum(set)
  set.inject(0) { |total, i| total + i }
end

def setup_factorial(n)
  f = Array(0..n)
  f.map! { |i| if i == 0 then 1 else i * f[i-1] end }
end

def choose(n, k, f)
  f[n] / (f[k] * f[n-k])
end

def lucas(base, range, f)
  lucas = Array(range)
  range.each do |n|
    # n = 0 is special-cased to avoid a negative power of two below
    if n == 0
      lucas[n] = 2
      next
    end
    # compute lucas terms
    lucas[n] = (0..n).select { |i| is_even(i) }
    lucas[n].map! { |i| choose(n, i, f) * base**(i/2) }
    lucas[n] = sum(lucas[n]) / (2**(n-1))
  end
  lucas
end

def fibonacci(base, range, f)
  fibonacci = Array(range)
  range.each do |n|
    # n = 0 is special-cased to avoid a negative power of two below
    if n == 0
      fibonacci[n] = 0
      next
    end
    # compute fibonacci terms
    fibonacci[n] = (0..n).select { |i| is_odd(i) }
    fibonacci[n].map! { |i| choose(n, i, f) * base**((i-1)/2) }
    fibonacci[n] = sum(fibonacci[n]) / (2**(n-1))
  end
  fibonacci
end

puts "Setting up factorial lookup table..."
factorial = setup_factorial(num)
puts "Done."
puts "Calculating series..."
fibonacci(base, 0..num, factorial).each { |i| print i, ", " }
puts

#### Old Code

Here is some Ruby code for calculating the Fibonacci and Lucas numbers using my derived methods. Please note that it inefficiently sums over both even and odd numbers in both series, but multiplies the unneeded results by zero. It also cheats the rounding errors by casting the floats back into an integer (which might actually cause some numbers to be wrong).

def is_odd(n)
  n % 2
end

def is_even(n)
  (n + 1) % 2
end

def sum(range)
  range.inject(0) { |total, i| total + yield(i) }
end

def factorial(n)
  if n == 0
    1
  else
    n * factorial(n-1)
  end
end

def choose(n, k)
  factorial(n) / (factorial(k) * factorial(n-k))
end

def fibonacci(n)
  (sum(0..n) { |j| is_odd(j) * choose(n, j) * (Math.sqrt(5)**(j-1)) / 2**(n - 1) }).to_i
end

def lucas(n)
  (sum(0..n) { |j| is_even(j) * choose(n, j) * (Math.sqrt(5)**j) / 2**(n - 1) }).to_i
end

(0..100).each { |j| print fibonacci(j), ", " }
puts
(0..100).each { |j| print lucas(j), ", " }
puts
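One more sanity check, not in the original post: the simplified identities can be recomputed with exact integer arithmetic and compared against the defining recurrences (the function names here are mine, chosen to avoid clashing with the code above):

```ruby
# Exact-arithmetic versions of the derived closed forms:
#   F_n = (1/2^(n-1)) * sum over odd  i <= n of C(n,i) * 5^((i-1)/2)
#   L_n = (1/2^(n-1)) * sum over even i <= n of C(n,i) * 5^(i/2)
def choose_exact(n, k)
  # incremental binomial coefficient; each intermediate division is exact
  (1..k).reduce(1) { |acc, i| acc * (n - k + i) / i }
end

def fib_closed(n)
  return 0 if n.zero?
  s = (1..n).select(&:odd?).sum { |i| choose_exact(n, i) * 5**((i - 1) / 2) }
  s / 2**(n - 1)
end

def lucas_closed(n)
  return 2 if n.zero?
  s = (0..n).select(&:even?).sum { |i| choose_exact(n, i) * 5**(i / 2) }
  s / 2**(n - 1)
end
```

Comparing against the recurrences F(n) = F(n-1) + F(n-2) and L(n) = L(n-1) + L(n-2) for the first sixteen terms confirms both closed forms, including the claimed divisibility by 2^(n-1).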
Importing RSEM with tximport

harelarik:

Hi, I have one RSEM file. I am trying to import it with the following commands (R is invoked in the same directory as the input file):

library(tximport)
txi.rsem <- tximport(files, type = "rsem", txIn = FALSE, txOut = FALSE)

But I receive this error output:

Error in computeRsemGeneLevel(files, importer, geneIdCol, abundanceCol, : all(c(geneIdCol, abundanceCol, lengthCol) %in% names(raw)) is not TRUE
Unnamed col_types should have the same length as col_names. Using smaller of the two.

Potential issues (after inspecting https://bioconductor.riken.jp/packages/3.7/bioc/vignettes/tximport/inst/doc/tximport.html#session-info):

* I am using one file only. Is that the problem? Should a single file be imported in a different way?
* Maybe the file must be zipped?
* Maybe it must have this suffix: "genes.results.gz"?
* Must one round the RSEM numbers before importing with tximport?

Name treat1Repaet1 treat1Repaet2 treat1Repaet3 treat2Repaet1 treat2Repaet2 treat2Repaet3 trea3Repaet1 treat3Repaet2 treat3Repaet3
Org1_PredGene_000123456.p1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Org1_PredGene_000123457 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Org1_PASA_asmbl_000123458.p1 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00
Org1_TRINITY_000123459 0.00 8.67 7.80 14.85 42.49 0.00 8.50 0.00 2.41 10.57

mikelove:

With a single file, you are better off using read.delim.

harelarik:

I am not sure I understood. Do you mean that with a single file I'd better not use tximport?

mikelove:

Yes. tximport is really designed for aggregating data across samples and for summing to gene level. With only one sample, and using RSEM which outputs the gene summary file, there is no point to tximport.
harelarik:

I have many samples in the single input file. See the matrix above: each column is a sample, each row a transcript id. In fact, I have many more columns than illustrated above. Actually the matrix contains over 90 samples (columns) for over a million transcripts (rows). However, I was advised to use tximport since I have RSEM data. Would you recommend using tximport for this case? Currently we are using isoform-level counts.

mikelove:

You can just read in the data from the file. Someone must have compiled it. So don't use tximport.

harelarik:

Dear Michael, thank you very much. A colleague of mine generated the RSEM file. If there is any advantage to using tximport in our study, the files can be changed to any other format. However, it is important to note that we are using meta-transcriptome data (containing hosts, microbiome, etc.), and therefore we were concerned about unifying orthologs of several organisms into one gene if we used gene-level data. Therefore the data produced by RSEM is isoform-level counts. In this regard, I have also tried to use the following command but received errors:

txi.rsem <- tximport(files, type = "rsem", txIn = FALSE, txOut = FALSE)

mikelove:

I'll just say, you can only use tximport to aggregate across many files, each with data for a single sample. You have already aggregated RSEM output, which bypasses the need for tximport, so you don't need to use tximport. If you want gene-level results, use your own custom scripts on the gene-level output files from RSEM.

harelarik:

Thank you very much. Arik
# Elliptic Integrals

by boarie
Tags: elliptic, integrals

Sci Advisor HW Helper P: 11,915
Here's the trick. You need to use some notations: $$R^{2}+r^{2} =p^{2}$$ and $$2rR\sin\vartheta =u$$. One has that $p^2 >0 \ ,\ u>0$. Then the integral becomes $$B_{r}(r,\vartheta) =C \int_{0}^{2\pi} \frac{d\phi}{\left(p^{2}-u \sin\phi\right)^{\frac{3}{2}}} = C$$ times the result below. The notation for the complete elliptic integrals is the one Mathematica uses. You can check it out on the Wolfram site and compare it to the standard one (for example the one in Gradshteyn & Ryzhik). Daniel.
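The closed form the post alludes to ("the result below") did not survive in this copy. For reference, here is a sketch of the standard reduction, reconstructed by the editor rather than taken from the original post; it assumes $p^{2}>u$ and the known identity $\int_{0}^{\pi/2}(1-k^{2}\sin^{2}\psi)^{-3/2}\,d\psi = E(k)/(1-k^{2})$:

```latex
\begin{aligned}
\int_{0}^{2\pi}\frac{d\phi}{\left(p^{2}-u\sin\phi\right)^{3/2}}
  &= 2\int_{0}^{\pi}\frac{d\phi}{\left(p^{2}+u\cos\phi\right)^{3/2}}
     && \text{(shift of origin and symmetry)}\\
  &= \frac{4}{\left(p^{2}+u\right)^{3/2}}
     \int_{0}^{\pi/2}\frac{d\psi}{\left(1-k^{2}\sin^{2}\psi\right)^{3/2}},
     \qquad k^{2}=\frac{2u}{p^{2}+u}
     && \text{(half-angle, } \phi=2\psi\text{)}\\
  &= \frac{4\,E(k)}{\left(p^{2}-u\right)\sqrt{p^{2}+u}}.
\end{aligned}
```

Here $E(k)$ is the complete elliptic integral of the second kind with modulus $k$. Note that Mathematica's EllipticE takes the parameter $m=k^{2}$ rather than the modulus $k$, which is precisely the convention mismatch the post warns about.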
# Reconstructing Ocean Circulation & Hydroclimate in the Subtropical Atlantic ### Paleoclimate Science Biological proxies such as diatoms, foraminifers, ostracodes, and pollen allow scientists to make inferences about climate conditions in the past.
## The prime $$k$$-tuplets in arithmetic progressions. (English) Zbl 0797.11076

Let $$k \geq 2$$, $$a_ j$$ be non-zero integers and $$b_ j$$ be integers for $$0 \leq j \leq k-1$$. Put $${\mathfrak a} = (a_ 0, \dots, a_{k-1}, b_ 0)$$, $${\mathfrak b} = (b_ 1, \dots, b_{k-1})$$, $N(x,{\mathfrak b}) = \{n:1 \leq a_ j n + b_ j \leq x, \quad j = 0,\dots, k-1\},$ $\Psi (x,{\mathfrak b},a,q) = \sum_{{n \in N (x,{\mathfrak b}) \atop n \equiv a \pmod q}} \prod^{k-1}_{j=0} \Lambda (a_ jn + b_ j),$ $$Z(x) = \{{\mathfrak b}:| N (x,{\mathfrak b}) | \neq 0\}$$ and consider the inequality $\sum_{q \leq Q} \max_{1 \leq a \leq q} \sum_{{\mathfrak b} \in Z(x)} | \Psi (x,{\mathfrak b}, a,q) - \text{ expected main term} | \ll x^ k (\log x)^{-A} \tag{1}$ for fixed $${\mathfrak a}$$ and any fixed $$A>0$$. Using the circle method, A. Balog [Analytic number theory, Prog. Math. 85, 47-75 (1990; Zbl 0719.11066)] proved that (1) holds for any $$k \geq 2$$ when $$Q \leq x^{1/3} (\log x)^{-B}$$, $$B=B(A)>0$$, and H. Mikawa [Tsukuba J. Math. 10, 377-387 (1992; Zbl 0778.11053)] improved Balog’s result to $$Q \leq x^{1/2} (\log x)^{-B}$$, $$B = B(A)>0$$, in the case $$k=2$$. The author extends Mikawa’s result to the general case $$k \geq 2$$, i.e. he proves (1) for any fixed $${\mathfrak a}$$ and $$A>0$$, $$k \geq 2$$ and $$Q \leq x^{1/2} (\log x)^{-B}$$, $$B = B(A,k)>0$$. He also proves a short intervals version of (1), where $$N(x,{\mathfrak b})$$ is replaced by $$N(x,y,{\mathfrak b}) = \{n:x-y<a_ jn + b_ j\leq x$$, $$j = 0,\dots, k-1\}$$ and analogously for $$\Psi (x,y, {\mathfrak b}, a,q)$$ and $$Z(x,y)$$, and the right hand side of (1) is replaced by $$y^ k (\log x)^{-A}$$, provided $$x^{2/3} (\log x)^ c<y \leq x$$ and $$Q \leq yx^{-1/2} (\log x)^{- B}$$, $$B=B(A,k)>0$$.
Reviewer: A.Perelli (Genova) ### MSC: 11P32 Goldbach-type theorems; other additive questions involving primes 11N13 Primes in congruence classes ### Keywords: Prime $$k$$-tuplets on average ### Citations: Zbl 0719.11066; Zbl 0778.11053 Full Text:
## Friday, October 02, 2009 If you're younger than 45, chances are that as a schoolkid, you were taught that Australopithecus afarensis was the oldest known human ancestor. That's because Lucy, a female representative of that ethnic group, was found by Donald Johanson in Hadar, Ethiopia, in 1974. Lucy lived 3.2 million years ago. It was widely believed - in fact, since the very era of Charles Darwin - that even earlier ancestors had to be closer to chimps: this hypothetical link remained missing but it was expected that more chimp-like ancestors would be found soon. The status of this speculation just changed dramatically, assuming that the new findings will pass the tests that they should pass. An older ur-woman was recently reconstructed from bones found in the Ethiopian desert between 1992 and 2009. Ms Ardi from the Ardipithecus ramidus species lived 4.4 million years ago but she looked nothing like chimps. (See her extra pictures and don't ask me why they think that her boobs were so small). This fact makes it somewhat more likely that the common ancestor of the humans and the apes looked like neither. You may check an up-to-date sketch of the evolutionary tree. The improvement may be just quantitative but it will surely require some textbooks to be rewritten. The experts claim that the skeleton looks very primitive to them. Let me admit that I don't see the "huge difference in simplicity" from the contemporary humans. Do you?
# zbMATH — the first resource for mathematics

Blocks in homogeneous effect algebras and MV-algebras. (English) Zbl 1065.06007

The author clarifies the relations between numerous notions in effect algebras: various definitions of compatibility, several conditions generalizing lattice-ordered effect algebras, etc. Particular attention is paid to the question of when an effect algebra can be covered by MV-algebras (blocks) and to the relation between $$m$$-completeness conditions for algebras, resp. their blocks. The paper is very useful because it helps the reader find their bearings among the numerous recent papers on related subjects.

##### MSC:

06C15 Complemented lattices, orthocomplemented lattices and posets
03G12 Quantum logic
06D35 MV-algebras

Full Text:

##### References:

[1] BERAN L.: Orthomodular Lattices - Algebraic Approach. Academia, Czechoslovak Academy of Sciences/D. Reidel Publishing Company, Praha/Dordrecht, 1984. · Zbl 0558.06008 [2] BUSCH P.-LAHTI P. J.-MITTELSTADT P.: The Quantum Theory of Measurement. Lecture Notes in Phys. New Ser. m Monogr. 31, Springer-Verlag, Berlin-Heidelberg-New York-London-Budapest, 1991. [3] BUSCH P.-GRABOWSKI M.-LAHTI P. J.: Operational Quantum Physics. Springer-Verlag, Berlin, 1995. · Zbl 0863.60106 [4] CHANG C. C.: Algebraic analysis of many-valued logic. Trans. Amer. Math. Soc. 88 (1958), 467-490. · Zbl 0084.00704 [5] CHOVANEC F.-KÔPKA F.: $$D$$-lattices. Internat. J. Theoret. Phys. 34 (1995), 1297-1302. · Zbl 0840.03046 [6] CHOVANEC F.- KÔPKA F.: Boolean $$D$$-posets. Tatra Mt. Math. Publ. 10 (1997), 183-197. · Zbl 0915.03052 [7] DVUREČENSKIJ A.: On effect algebras that can be covered by $$MV$$-algebras. Internat. J. Theoret. Phys. 41 (2002), 221-229. · Zbl 1022.06005 [8] DVUREČENSKIJ A.-PULMANNOVÁ S.: New Trends in Quantum Structures. Kluwer Academic Publ./Ister Science, Dordrecht-Boston-London/Bratislava, 2000. · Zbl 0987.81005 [9] FOULIS D. J.-BENNETT M. K.: Effect algebras and unsharp quantum logics. Found. Phys. 24 (1994), 1331-1362.
· Zbl 1213.06004 [10] FOULIS D. J.-GREECHIE R.-RÜTTIMANN G.: Filters and supports in orthoalgebras. Internat. J. Theoret. Phys. 35 (1995), 789-802. · Zbl 0764.03026 [11] GIUNTINI R.-GRUEULING H.: Toward a formal language for unsharp properties. Found. Phys. 19 (1994), 769-780. [12] JENČA G.: Blocks in homogeneous effect algebras. Bull. Austral. Math. Soc. 64 (2001), 81-98. · Zbl 0985.03063 [13] JENČA G.: A Cantor-Bernstein type theorem for effect algebras. Algebra Universalis 48 (2002), 399-411. · Zbl 1061.06020 [14] JENČA G.-PULMANNOVÁ S.: Quotients of partial abelian monoids and the Riesz decomposition property. Algebra Universalis 47 (2002), 443-477. · Zbl 1063.06011 [15] JENČA G.-RIEČANOVÁ Z.: On sharp elements in lattice ordered effect algebras. BUSEFAL 80 (1999), 24-29. [16] KALMBACH G.: Orthomodular Lattices. Academic Press, London-New York, 1983. · Zbl 0528.06012 [17] KÔPKA F.: Compatibility in $$D$$-posets. Internat. J. Theoret. Phys. 34 (1995), 1525-1531. · Zbl 0851.03020 [18] KÔPKA F.-CHOVANEC F.: $$D$$-posets. Math. Slovaca 44 (1994), 21-34. · Zbl 0789.03048 [19] LOCK P. L.-HARDEGREE G. M.: Connections among quantum logics: Part 2. Quantum event logic. Internat. J. Theoret. Phys. 24 (1985), 55-61. · Zbl 0592.03052 [20] PTÁK P.-PULMANNOVÁ S.: Orthomodular Structures as Quantum Logics. Kluwer Academic Publ., Dordrecht-Boston-London, 1991. · Zbl 0743.03039 [21] PULMANNOVÁ S.: On connections among some orthomodular structures. Demonstratio Math. 30 (1997), 313-328. · Zbl 0947.06004 [22] PULMANNOVÁ S.: Compatibility and decompositions of effects. J. Math. Phys. 43 (2002), 2817-2830. · Zbl 1059.81016 [23] PULMANNOVÁ S.: A note on observables on $$MV$$-algebras. Soft Computing 4 (2000), 45-48. · Zbl 1005.06006 [24] RAVINDRAN K.: On a Structure Theory of Effect Algebras. PhD Theses, Kansas State Univ., Manhattan, Kansas, 1996. [25] RIEČANOVÁ Z.: A generalization of blocks for lattice effect algebras. Internat. J. Theoret. Phys. 39 (2000), 855-865.
[26] RIEČANOVÁ Z.: On order topological continuity of effect algebra operations. Contributions to General Algebra 12, Verlag Johannes Heyn, Klagenfurt, 2000, pp. 349-354. · Zbl 0960.03054 [27] RIEČANOVÁ Z.: Orthogonal sets in effect algebras. Demonstratio Math. 34 (2001), 525-531. · Zbl 0989.03071 [28] SARYMSAKOV T. A., et al.: Uporyadochennye algebry. FAN, Tashkent, 1983. · Zbl 0542.46001 [29] SCHRÖDER B.: On three notions of orthosummability in orthoalgebras. Internat. J. Theoret. Phys. 34 (1999), 3305-3313. · Zbl 0957.03061 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
# Good Questions

### Great Ways to Differentiate Math Instruction (2nd ed.)

Marian Small

Using differentiated instruction in the classroom can be a challenge, especially when teaching mathematics. This book cuts through the difficulties with two powerful and universal strategies that teachers can use across all mathematics content: Open Questions and Parallel Tasks. Specific strategies and examples for grades Kindergarten - 8 are organised around the content strands used by the National Council of Teachers of Mathematics in the US: Number and Operations, Geometry, Measurement, Algebra, and Data Analysis and Probability. This resource will help teachers create a more inclusive classroom with mathematical talk that engages students from all levels.

Members: $38.00 inc. GST
Others: $47.50 inc. GST

ISBN-13: 978-0-8077-5313-2
Year Levels: F - 8
Publisher: Teachers College Press
Page Count: 238
Cover type: Soft cover
Publication date: 2009
Product number: NCT1070
Keywords: Classroom material, Number
# MOX Pore Velocity Calculation

Calculates pore speed. Used with the kernel MOXPoreContinuity.

## Description

Pore speed is calculated in this class [MOXPoreVelocity], which is used in the kernel MOXPoreContinuity and appears in the second term of equation (1) (equation not rendered in this copy), in which the porosity, the pore velocity, the temperature, and the diffusion coefficient MOXPoreDiffusion appear. Usually, the temperature gradient is included in the pore velocity term, but here the temperature gradient is written separately to emphasize the dependence of pore migration on the temperature gradient. The equation for pore speed, (2), is taken from Sens (1972) (equation not rendered in this copy).

Note: This class is still under development! See bison/test/mox_pore_velocity/ for examples of how this class should be used.

## Input Parameters

• temperature: Coupled Temperature. C++ Type: std::vector.

### Required Parameters

• scale_factor: scale the velocity to account for uncertainty. Default: 1. C++ Type: double.
• limit: limit for pore velocity in m/s. Default: 1. C++ Type: double.
• compute: When false, MOOSE will not call compute methods on this material. The user must call computeProperties() after retrieving the Material via MaterialPropertyInterface::getMaterial(). Non-computed Materials are not sorted for dependencies. Default: True. C++ Type: bool.
• boundary: The list of boundary IDs from the mesh where this boundary condition applies. C++ Type: std::vector.
• pore_velocity: calculated in materials property. C++ Type: double.
• block: The list of block ids (SubdomainID) that this object will be applied to. C++ Type: std::vector.

### Optional Parameters

• enable: Set the enabled status of the MooseObject. Default: True. C++ Type: bool.
• use_displaced_mesh: Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used. Default: False. C++ Type: bool.
• control_tags: Adds user-defined labels for accessing object parameters via control logic. C++ Type: std::vector.
• seed: The seed for the master random number generator. Default: 0. C++ Type: unsigned int.
• implicit: Determines whether this object is calculated using an implicit or explicit form. Default: True. C++ Type: bool.
• constant_on: When ELEMENT, MOOSE will only call computeQpProperties() for the 0th quadrature point, and then copy that value to the other qps. When SUBDOMAIN, MOOSE will only call computeSubdomainProperties() for the 0th quadrature point, and then copy that value to the other qps.
Evaluations on element qps will be skipped. Default: NONE. C++ Type: MooseEnum.
• output_properties: List of material properties, from this material, to output (outputs must also be defined to an output type). C++ Type: std::vector.
• outputs: Vector of output names where you would like to restrict the output of variable(s) associated with this object. Default: none. C++ Type: std::vector.

## References

1. P. F. Sens. The kinetics of pore movement in UO2 fuel rods. Journal of Nuclear Materials, 43:293–307, 1972. URL: http://www.sciencedirect.com/science/article/pii/002231157290061X, doi:10.1016/0022-3115(72)90061-X.
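Since the rendered pore-speed equations did not survive in this copy, the only behavior that can be reconstructed from the parameter table is how `scale_factor` and `limit` act on a computed raw pore speed. The sketch below is a hypothetical illustration of that post-processing, not BISON's actual implementation; the function name and the `raw_speed` input are editor-invented stand-ins for the (unshown) Sens (1972) expression:

```python
def apply_pore_velocity_controls(raw_speed, scale_factor=1.0, limit=1.0):
    """Scale a raw pore speed for uncertainty, then cap its magnitude.

    `raw_speed` stands in for the unshown Sens (1972) pore-speed value;
    only the scaling and limiting mirror the documented `scale_factor`
    and `limit` input parameters (both defaulting to 1, limit in m/s).
    """
    scaled = scale_factor * raw_speed
    # Clamp the magnitude so the pore-migration term stays numerically tame.
    if abs(scaled) > limit:
        scaled = limit if scaled > 0 else -limit
    return scaled
```

With the defaults, speeds below 1 m/s pass through unchanged and anything larger is clamped to 1 m/s, which matches the parameter descriptions above.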
# Sphere cuts

At what distance from the center does a plane intersect a sphere of radius R = 56, if the area of the cut and the area of the sphere's great circle are in the ratio 1/2?

Correct result: x = 39.6

#### Solution:

$x=R\sqrt{1-\frac{a}{b}}=56\sqrt{1-\frac{1}{2}}=39.6$

Tips to related online calculators: the Pythagorean theorem is the basis of the right triangle calculator.

## Next similar math problems:

• Here is
Here is a data set (n=117) that has been sorted. 10.4 12.2 14.3 15.3 17.1 17.8 18 18.6 19.1 19.9 19.9 20.3 20.6 20.7 20.7 21.2 21.3 22 22.1 22.3 22.8 23 23 23.1 23.5 24.1 24.1 24.4 24.5 24.8 24.9 25.4 25.4 25.5 25.7 25.9 26 26.1 26.2 26.7 26.8 27.5 27.6 2
• Sphere parts, segment
A sphere with a diameter of 20.6 cm, the cut is a circle with a diameter of 16.2 cm. What are the volume of the segment and the surface of the segment?
• Top of the tower
The top of the tower has the shape of a regular hexagonal pyramid. The base edge has a length of 1.2 m, the pyramid height is 1.6 m. How many square meters of sheet metal is needed to cover the top of the tower if 15% extra sheet metal is needed for joints?
• Common chord
Two circles with radius 17 cm and 20 cm intersect at two points. Their common chord is 27 cm long. What is the distance of the centers of these circles?
• Two circles
Two circles with the same radius r = 1 are given. The center of the second circle lies on the circumference of the first. What is the area of a square inscribed in the intersection of the given circles?
• Eq triangle minus arcs
In an equilateral triangle with a 2cm side, the arcs of three circles are drawn from the centers at the vertices and radii 1cm.
Calculate the content of the shaded part - a formation that makes up the difference between the triangle area and circular cuts • Two chords In a circle with radius r = 26 cm two parallel chords are drawn. One chord has a length t1 = 48 cm and the second has a length t2 = 20 cm, with the center lying between them. Calculate the distance of two chords. • Chocolate roll The cube of 5 cm chocolate roll weighs 30 g. How many calories will contain the same chocolate roller of a prism shape with a length of 0.5 m whose cross section is an isosceles trapezoid with bases 25 and 13 cm and legs 10 cm. You know that 100 g of this • Hexagonal pyramid Calculate the surface area of a regular hexagonal pyramid with a base inscribed in a circle with a radius of 8 cm and a height of 20 cm. • Cone A2V Surface of cone in the plane is a circular arc with central angle of 126° and area 415 cm2. Calculate the volume of a cone. • 9-gon pyramid Calculate the volume and the surface of a nine-sided pyramid, the base of which can be inscribed with a circle with radius ρ = 7.2 cm and whose side edge s = 10.9 cm. • Canopy Mr Peter has metal roof cone shape with a height of 127 cm and radius 130 cm over well. He needs paint the roof with anticorrosion. How many kg of color must he buy if the manufacturer specifies the consumption of 1 kg to 3.3 m2? • Center of the cube Center of the cube has distance 16 cm from each vertex. Calculate the volume V and surface area S of the cube. • Circles In the circle with a radius 7.5 cm are constructed two parallel chord whose lengths are 9 cm and 12 cm. Calculate the distance of these chords (if there are two possible solutions write both). • The diagram 2 The diagram shows a cone with slant height 10.5cm. If the curved surface area of the cone is 115.5 cm2. Calculate correct to 3 significant figures: *Base Radius *Height *Volume of the cone • Tangent spheres A sphere with a radius of 1 m is placed in the corner of the room. 
What is the largest sphere size that fits into the corner behind it? Additional info: Two spheres are placed in a corner of a room. The spheres are each tangent to the walls and floor and • Chord In a circle with radius r=60 cm is chord 4× longer than its distance from the center. What is the length of the chord?
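The sphere-cut computation above can be checked numerically: the cut circle's area is half the great circle's area, so its radius satisfies r_cut² = R²/2, and the Pythagorean theorem then gives the distance of the cutting plane from the center.

```python
import math

R = 56.0            # sphere radius
ratio = 1.0 / 2.0   # (cut area) : (great-circle area)

# pi * r_cut^2 = ratio * pi * R^2  =>  r_cut = R * sqrt(ratio)
r_cut = R * math.sqrt(ratio)

# Right triangle: R^2 = x^2 + r_cut^2
x = math.sqrt(R**2 - r_cut**2)   # equals R * sqrt(1 - ratio)

print(round(x, 1))  # 39.6
```

Note that with ratio 1/2 the distance equals the cut circle's radius, x = R/√2, which is why both evaluate to 39.6.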
## AxIMMS_ROS / Supervoxel-for-3D-point-clouds

This repository doesn't specify a license. Without the author's permission, this code is only for learning and cannot be used for other purposes.

# Supervoxel for 3D point clouds

## Introduction

We present a simple but effective supervoxel segmentation method for point clouds, which formalizes supervoxel segmentation as a subset selection problem. We develop a heuristic algorithm that utilizes local information to efficiently solve the subset selection problem. The proposed method can produce supervoxels with adaptive resolutions, and does not rely on the selection of seed points. The method is fully tested on three publicly available point cloud segmentation benchmarks, which cover the major point cloud types. The experimental results show that, compared with the state-of-the-art supervoxel segmentation methods, the supervoxels extracted using our method preserve the object boundaries and small structures more effectively, which is reflected in a higher boundary recall and lower under-segmentation error. The details can be found in the following ISPRS 2018 paper.

## Citing our work

If you find our work useful in your research, please consider citing:

Lin Y, Wang C, Zhai D, Li W, and Li J. Toward better boundary preserved supervoxel segmentation for 3D point clouds. ISPRS Journal of Photogrammetry & Remote Sensing, vol. 143, pages 39-47, 2018.
### BibTex

@article{Lin2018Supervoxel,
title = "Toward better boundary preserved supervoxel segmentation for 3D point clouds",
journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
volume = "143",
pages = "39 - 47",
year = "2018",
note = "ISPRS Journal of Photogrammetry and Remote Sensing Theme Issue “Point Cloud Processing”",
issn = "0924-2716",
doi = "https://doi.org/10.1016/j.isprsjprs.2018.05.004",
url = "http://www.sciencedirect.com/science/article/pii/S0924271618301370",
author = "Lin, Yangbin and Wang, Cheng and Zhai, Dawei and Li, Wei and Li, Jonathan",
keywords = "Supervoxel segmentation, Point clouds, Subset selection, Over-segmentation"
}

## Install & compile

Please directly copy the code into your workspace and compile it with any compiler that supports C++11. It does not require linking any additional libraries.

## Sample usage:

cl::geometry::point_cloud::SupervoxelSegmentation(points, neighbors, resolution, metric, &supervoxels, &labels);

Where 'points' is the input 3D point cloud. It can be read from an XYZ file by calling:

cl::geometry::io::ReadXYZPoints(filename.c_str(), &points);

'neighbors' gives the neighborhood for each point. It can be constructed by computing the k-nearest neighbors of each point. For example:

const int k_neighbors = 15;
cl::Array<cl::Array<int> > neighbors(n_points);
// A k-d tree built over 'points' is assumed to exist here.
for (int i = 0; i < n_points; ++i) {
    kdtree.FindKNearestNeighbors(kdtree.points()[i], k_neighbors, &neighbors[i]);
}

'resolution' is used to determine the number of supervoxels you want.

'metric' is used to evaluate the feature distance between two points. In our paper, we use the following metric, which is the same as the one used by VCCS.
class VCCSMetric {
public:
    explicit VCCSMetric(double resolution)
        : resolution_(resolution) {}

    double operator() (const PointWithNormal& p1,
                       const PointWithNormal& p2) const {
        return 1.0 - std::fabs(p1.normal * p2.normal)
               + cl::geometry::Distance(p1, p2) / resolution_ * 0.4;
    }

private:
    double resolution_;
};

The output 'supervoxels' is an array that stores the indices of the representative points, and 'labels' denotes which supervoxel owns the i-th point. Please see main.cc for more details. The file "test.xyz" can be found in test_data.

## Sample results

The first column is the original point cloud with ground-truth annotation. The second column is the supervoxel segmentation by VCCS. The third column is the VCCS method with the kNN variation. And the last column is the result obtained by this library.

## Contact

Please feel free to leave suggestions or comments to Dr. Lin (yblin@jmu.edu.cn) or Prof. Wang (cwang@xmu.edu.cn).
I just installed a new server from HP, a ProLiant DL180 G6. Here are some notes about the setup.

To check the hardware status you need to install the ProLiant Support Package. Running Debian/Ubuntu, you should add the HP PSP mirror to your sources.list; it can be found here, and you might include something like:

After an aptitude update you'll find some new packages. I recommend installing hpacucli to talk to your RAID controllers and hp-health to interact with your hardware.

With hpacucli you can ask the RAID controllers for some information, so you get an idea of your storage. The hp-health package comes with a tool called hpasmcli, which is used to query all the hardware states. Both tools are very easy to use and give a great overview of the hardware's health. So I immediately developed a monitoring plugin that parses the output of those runs.

Then I hit a snag: I wasn't able to find any documentation about the hpasmcli tool. Most of its output was clear, but I don't know what happens if a fan breaks. So what if a fan is broken? Is it still Present and the Speed string just changes to NONE or something like that? I sent a support request to HP, but all they responded with was a premium-rate number to call. It seems that my understanding of service differs from theirs. Since I don't know what the output looks like in an error case (I don't want to stick pencils into new machines), the plugin can't decide whether the fans are OK. If you want to use my plugin you need to skip fan checks until HP publishes a document with possible values. IMHO a public tool should be open source, so I can get that information on my own, or at least be well documented!

Andreas N | Permalink | 2011-08-03 22:42:39
Seems the URL got swallowed by Wordpress. I meant to say: Have a look at http://labs.consol.de/lang/en/nagios/check_hpasm/ for a well-working Nagios plugin to check the health of an HP server.
Martin Scharm | Permalink | 2011-08-08 00:43:34
Hi Andreas, thanks for the link; it looks like my previous searches were too weak. I'll add it to the plugin site.
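A monitoring check along the lines the post describes could parse the fan table defensively. Since the post's sample output was lost in this copy and HP never documented the failure strings, the column layout below is an assumption invented for illustration; the safe move, as the post concludes, is to flag any value the parser does not recognize instead of guessing what a broken fan looks like:

```python
def check_fans(hpasmcli_output):
    """Return (status, problems) for a hypothetical 'show fans' dump.

    Assumed columns per fan line: id (starting with '#'), location,
    present (Yes/No), speed. Absent fans and unrecognized speed strings
    are reported as problems, since HP documents no failure output.
    """
    problems = []
    for line in hpasmcli_output.splitlines():
        parts = line.split()
        if len(parts) < 4 or not parts[0].startswith("#"):
            continue  # skip headers, prompts, and blank lines
        fan, location, present, speed = parts[0], parts[1], parts[2], parts[3]
        if present.lower() != "yes":
            problems.append(f"{fan} ({location}) not present")
        elif speed.upper() not in ("NORMAL", "HIGH", "LOW"):
            problems.append(f"{fan} ({location}) unknown speed '{speed}'")
    return ("OK" if not problems else "WARNING", problems)
```

Treating the unknown case as WARNING rather than OK mirrors the post's point: without documented failure values, a plugin can only refuse to call the fans healthy.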
# on 14-Jan-2017 (Sat) #### Annotation 1436284226828 #electromagnetism #physics As you can imagine, the smaller you make the area segments, the closer this gets to the true mass, since your approximation of constant sigma is more accurate for smaller segments. If you let the segment area dA approach zero and N approach infinity, the summation becomes integration, and you have Mass = $$\int_{S}\sigma(x, y)dA$$ : This is the area integral of the scalar function sigma(x, y) over the surface S. #### pdf cannot see any pdfs #### Flashcard 1438021192972 Tags #italian #italian-grammar Question Italian has three types of article: the definite article il, lo; the indefinite article un, una; and the [...] dei, delle, degli ‘some, any’. (For example: il ragazzo ‘the boy’; una lezione ‘a lesson’; dei bambini ‘some children’.) partitive status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Italian has three types of article: the definite article il, lo (etc.) ‘the’; the indefinite article un, una (etc.) ‘a’; and the partitive dei, delle, degli (etc.) ‘some, any’. (For example: il ragazzo ‘the boy’; una lezione ‘a lesson’; dei bambini ‘some children’.) #### Original toplevel document (pdf) cannot see any pdfs #### Annotation 1438452419852 #python #sicp Functions that manipulate functions are called higher-order functions. 1.6 Higher-Order Functions t functions. These patterns can also be abstracted, by giving them names. To express certain general patterns as named concepts, we will need to construct functions that can accept other functions as arguments or return functions as values. <span>Functions that manipulate functions are called higher-order functions. This section shows how higher-order functions can serve as powerful abstraction mechanisms, vastly increasing the expressive power of our language. 
1.6.1 Functions as Arguments #### Annotation 1438456614156 #python #sicp With higher-order functions, we begin to see a more powerful kind of abstraction: some functions express general methods of computation, independent of the particular functions they call. 1.6 Higher-Order Functions 153589902 1.6.2 Functions as General Methods Video: Show Hide We introduced user-defined functions as a mechanism for abstracting patterns of numerical operations so as to make them independent of the particular numbers involved. <span>With higher-order functions, we begin to see a more powerful kind of abstraction: some functions express general methods of computation, independent of the particular functions they call. Despite this conceptual extension of what a function means, our environment model of how to evaluate a call expression extends gracefully to the case of higher-order functions, withou #### Annotation 1438459235596 #python #sicp naming and functions allow us to abstract away a vast amount of complexity 1.6 Higher-Order Functions func improve(update, close, guess) func golden_update(guess) func square_close_to_successor(guess) func approx_eq(x, y, tolerance) This example illustrates two related big ideas in computer science. First, <span>naming and functions allow us to abstract away a vast amount of complexity. While each function definition has been trivial, the computational process set in motion by our evaluation procedure is quite intricate. Second, it is only by virtue of the fact that w #### Annotation 1438460546316 #python #sicp it is only by virtue of the fact that we have an extremely general evaluation procedure for the Python language that small components can be composed into complex processes. 1.6 Higher-Order Functions puter science. First, naming and functions allow us to abstract away a vast amount of complexity. While each function definition has been trivial, the computational process set in motion by our evaluation procedure is quite intricate. 
Second, <span>it is only by virtue of the fact that we have an extremely general evaluation procedure for the Python language that small components can be composed into complex processes. Understanding the procedure of interpreting programs allows us to validate and inspect the process we have created. As always, our new general method improve needs a test to check i #### Annotation 1438464216332 #python #sicp Like local assignment, local def statements only affect the current local frame. These functions are only in scope while sqrt is being evaluated. Consistent with our evaluation procedure, these local def statements don't even get evaluated until sqrt is called. 1.6 Higher-Order Functions de the body of other definitions. >>> def sqrt(a): def sqrt_update(x): return average(x, a/x) def sqrt_close(x): return approx_eq(x * x, a) return improve(sqrt_update, sqrt_close) <span>Like local assignment, local def statements only affect the current local frame. These functions are only in scope while sqrt is being evaluated. Consistent with our evaluation procedure, these local def statements don't even get evaluated until sqrt is called. Lexical scope. Locally defined functions also have access to the name bindings in the scope in which they are defined. In this example, sqrt_update refers to the name a , which is #### Annotation 1438465264908 #python #sicp This discipline of sharing names among nested definitions is called lexical scoping. Critically, the inner functions have access to the names in the environment where they are defined (not where they are called). 1.6 Higher-Order Functions ed. Lexical scope. Locally defined functions also have access to the name bindings in the scope in which they are defined. In this example, sqrt_update refers to the name a , which is a formal parameter of its enclosing function sqrt . <span>This discipline of sharing names among nested definitions is called lexical scoping. 
Critically, the inner functions have access to the names in the environment where they are defined (not where they are called). We require two extensions to our environment model to enable lexical scoping. Each user-defined function has a parent environment: the environment in which it was defined. When a us #### Annotation 1438466313484 #python #sicp We require two extensions to our environment model to enable lexical scoping. Each user-defined function has a parent environment: the environment in which it was defined.When a user-defined function is called, its local frame extends its parent environment. 1.6 Higher-Order Functions enclosing function sqrt . This discipline of sharing names among nested definitions is called lexical scoping. Critically, the inner functions have access to the names in the environment where they are defined (not where they are called). <span>We require two extensions to our environment model to enable lexical scoping. Each user-defined function has a parent environment: the environment in which it was defined. When a user-defined function is called, its local frame extends its parent environment. Previous to sqrt , all functions were defined in the global environment, and so they all had the same parent: the global environment. By contrast, when Python evaluates the first tw #### Annotation 1438470507788 #python #sicp The sqrt_update function carries with it some data: the value for a referenced in the environment in which it was defined. Because they "enclose" information in this way, locally defined functions are often called closures. 1.6 Higher-Order Functions r than the global environment. A local function can access the environment of the enclosing function, because the body of the local function is evaluated in an environment that extends the evaluation environment in which it was defined. <span>The sqrt_update function carries with it some data: the value for a referenced in the environment in which it was defined. 
Because they "enclose" information in this way, locally defined functions are often called closures. 1.6.4 Functions as Returned Values Video: Show Hide We can achieve even more expressive power in our programs by creating functions whose returned values are themselves funct #### Annotation 1438471556364 #python #sicp An important feature of lexically scoped programming languages is that locally defined functions maintain their parent environment when they are returned. 1.6 Higher-Order Functions ocally defined functions are often called closures. 1.6.4 Functions as Returned Values Video: Show Hide We can achieve even more expressive power in our programs by creating functions whose returned values are themselves functions. <span>An important feature of lexically scoped programming languages is that locally defined functions maintain their parent environment when they are returned. The following example illustrates the utility of this feature. Once many simple functions are defined, function composition is a natural method of combination to include in our progr #### Annotation 1438472604940 #python #sicp We can use higher-order functions to convert a function that takes multiple arguments into a chain of functions that each take a single argument. More specifically, given a function f(x, y) , we can define a function g such that g(x)(y) is equivalent to f(x, y) . Here, g is a higher-order function that takes in a single argument x and returns another function that takes in a single argument y . This transformation is called currying. 1.6 Higher-Order Functions d is a powerful general computational method for solving differentiable equations. Very fast algorithms for logarithms and large integer division employ variants of the technique in modern computers. 1.6.6 Currying Video: Show Hide <span>We can use higher-order functions to convert a function that takes multiple arguments into a chain of functions that each take a single argument. 
More specifically, given a function f(x, y) , we can define a function g such that g(x)(y) is equivalent to f(x, y) . Here, g is a higher-order function that takes in a single argument x and returns another function that takes in a single argument y . This transformation is called currying. As an example, we can define a curried version of the pow function: >>> def curried_pow(x): def h(y): return pow(x, y) return h >>&g #### Annotation 1438473653516 #python #sicp we can compute a*b + c*d without having to name the subexpressions a*b or c*d , or the full expression. In Python, we can create function values on the fly using lambda expressions, which evaluate to unnamed functions. A lambda expression evaluates to a function that has a single return expression as its body. Assignment and control statements are not allowed. 1.6 Higher-Order Functions 1.6.7 Lambda Expressions Video: Show Hide So far, each time we have wanted to define a new function, we needed to give it a name. But for other types of expressions, we don't need to associate intermediate values with a name. That is, <span>we can compute a*b + c*d without having to name the subexpressions a*b or c*d , or the full expression. In Python, we can create function values on the fly using lambda expressions, which evaluate to unnamed functions. A lambda expression evaluates to a function that has a single return expression as its body. Assignment and control statements are not allowed. >>> def compose1(f, g): return lambda x: f(g(x)) We can understand the structure of a lambda expression by constructing a corresponding English sentence: #### Annotation 1438474702092 #python #sicp >>> def compose1 ( f , g ): return lambda x : f ( g ( x )) We can understand the structure of a lambda expression by constructing a corresponding English sentence: lambda x : f(g(x)) "A function that takes x and returns f(g(x))" The result of a lambda expression is called a lambda function. 
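The `compose1` function from the text, written once with a `def` statement wrapping a lambda (the book's spelling) and once as an all-lambda one-liner for comparison:

```python
def compose1(f, g):
    # Returns an unnamed function that applies g, then f.
    return lambda x: f(g(x))

# The all-lambda spelling from the text, terse but harder to read:
compose1_terse = lambda f, g: lambda x: f(g(x))
```

Both produce a function of one argument: `compose1(square, inc)(2)` first increments 2, then squares the result.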
It has no intrinsic name (and so Python prints for the name), but otherwise it behaves like any other function. #### Annotation 1438475750668 #python #sicp compound lambda expressions are notoriously illegible, despite their brevity. 1.6 Higher-Order Functions compose1(f, g) [parent=Global] func λ(x) [parent=Global] func λ(y) [parent=Global] func λ(x) [parent=f1] Some programmers find that using unnamed functions from lambda expressions to be shorter and more direct. However, <span>compound lambda expressions are notoriously illegible, despite their brevity. The following definition is correct, but many programmers have trouble understanding it quickly. >>> compose1 = lambda f,g: lambda x: f(g(x)) In general, Python style pr #### Annotation 1438476799244 #python #sicp In general, Python style prefers explicit def statements to lambda expressions, but allows them in cases where a simple function is needed as an argument or return value. 1.6 Higher-Order Functions ver, compound lambda expressions are notoriously illegible, despite their brevity. The following definition is correct, but many programmers have trouble understanding it quickly. >>> compose1 = lambda f,g: lambda x: f(g(x)) <span>In general, Python style prefers explicit def statements to lambda expressions, but allows them in cases where a simple function is needed as an argument or return value. Such stylistic rules are merely guidelines; you can program any way you wish. However, as you write programs, think about the audience of people who might read your program one day. #### Annotation 1438477847820 #python #sicp Elements with the fewest restrictions are said to have first-class status. Some of the "rights and privileges" of first-class elements are: They may be bound to names.They may be passed as arguments to functions.They may be returned as the results of functions.They may be included in data structures. 
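Each of the four "rights and privileges" listed above can be demonstrated in a few lines (the helper names here are illustrative, not from the text):

```python
def twice(f, x):
    # Functions may be passed as arguments.
    return f(f(x))

def make_doubler():
    # Functions may be returned as results.
    return lambda n: 2 * n

# Functions may be bound to names ...
inc = lambda n: n + 1

# ... and included in data structures.
ops = {'inc': inc, 'double': make_doubler()}
```

Looking a function up in `ops` and calling it works exactly like calling a named function, which is what full first-class status means.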
Python awards functions full first-class status, and the resulting gain in expressive power is enormous. 1.6 Higher-Order Functions ns explicitly as elements in our programming language, so that they can be handled just like other computational elements. In general, programming languages impose restrictions on the ways in which computational elements can be manipulated. <span>Elements with the fewest restrictions are said to have first-class status. Some of the "rights and privileges" of first-class elements are: They may be bound to names. They may be passed as arguments to functions. They may be returned as the results of functions. They may be included in data structures. Python awards functions full first-class status, and the resulting gain in expressive power is enormous. 1.6.9 Function Decorators Video: Show Hide Python provides special syntax to apply higher-order functions as part of executing a def statement, called a decorator. Perhaps #### Annotation 1438530014476 #python #sicp A function is called recursive if the body of the function calls the function itself, either directly or indirectly 1.7 Recursive Functions 1.7.1 The Anatomy of Recursive Functions 1.7.2 Mutual Recursion 1.7.3 Printing in Recursive Functions 1.7.4 Tree Recursion 1.7.5 Example: Partitions 1.7 Recursive Functions Video: Show Hide <span>A function is called recursive if the body of the function calls the function itself, either directly or indirectly. That is, the process of executing the body of a recursive function may in turn require applying that function again. Recursive functions do not use any special syntax in Python, but t #### Annotation 1438532111628 #python #sicp it is often clearer to think about recursive calls as functional abstractions. 1.7 Recursive Functions the standard definition of the mathematical function for factorial: (n−1)!n!n!=(n−1)⋅(n−2)⋅⋯⋅1=n⋅(n−1)⋅(n−2)⋅⋯⋅1=n⋅(n−1)!(n−1)!=(n−1)⋅(n−2)⋅⋯⋅1n!=n⋅(n−1)⋅(n−2)⋅⋯⋅1n!=n⋅(n−1)! 
While we can unwind the recursion using our model of computation, <span>it is often clearer to think about recursive calls as functional abstractions. That is, we should not care about how fact(n-1) is implemented in the body of fact ; we should simply trust that it computes the factorial of n-1 . Treating a recursive call as a #### Annotation 1438533160204 #python #sicp Treating a recursive call as a functional abstraction has been called a recursive leap of faith. We define a function in terms of itself, but simply trust that the simpler cases will work correctly when verifying the correctness of the function. 1.7 Recursive Functions putation, it is often clearer to think about recursive calls as functional abstractions. That is, we should not care about how fact(n-1) is implemented in the body of fact ; we should simply trust that it computes the factorial of n-1 . <span>Treating a recursive call as a functional abstraction has been called a recursive leap of faith. We define a function in terms of itself, but simply trust that the simpler cases will work correctly when verifying the correctness of the function. In this example, we trust that fact(n-1) will correctly compute (n-1)! ; we must only check that n! is computed correctly if this assumption holds. In this way, verifying the corre #### Annotation 1438534208780 #python #sicp Recursive functions leverage the rules of evaluating call expressions to bind names to values, often avoiding the nuisance of correctly assigning local names during iteration. For this reason, recursive functions can be easier to define correctly. However, learning to recognize the computational processes evolved by recursive functions certainly requires practice. 1.7 Recursive Functions . The state of the computation is entirely contained within the structure of the environment, which has return values that take the role of total , and binds n to different values in different frames rather than explicitly tracking k . 
<span>Recursive functions leverage the rules of evaluating call expressions to bind names to values, often avoiding the nuisance of correctly assigning local names during iteration. For this reason, recursive functions can be easier to define correctly. However, learning to recognize the computational processes evolved by recursive functions certainly requires practice. 1.7.2 Mutual Recursion Video: Show Hide When a recursive procedure is divided among two functions that call each other, the functions are said to be mutually recursive. As an #### Annotation 1438535257356 #python #sicp When a recursive procedure is divided among two functions that call each other, the functions are said to be mutually recursive. 1.7 Recursive Functions For this reason, recursive functions can be easier to define correctly. However, learning to recognize the computational processes evolved by recursive functions certainly requires practice. 1.7.2 Mutual Recursion Video: Show Hide <span>When a recursive procedure is divided among two functions that call each other, the functions are said to be mutually recursive. As an example, consider the following definition of even and odd for non-negative integers: a number is even if it is one more than an odd number a number is odd if it is one more tha #### Annotation 1438537092364 #python #sicp A function with multiple recursive calls is said to be tree recursive because each call branches into multiple smaller calls, each of which branches into yet smaller calls, just as the branches of a tree become smaller but more numerous as they extend from the trunk. 1.7 Recursive Functions Global This recursive definition is tremendously appealing relative to our previous attempts: it exactly mirrors the familiar definition of Fibonacci numbers. 
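The mutually recursive definition of even and odd described earlier (a number is even if it is one more than an odd number, and vice versa, with 0 as the base case) can be written as:

```python
def is_even(n):
    # 0 is even; otherwise n is even iff n-1 is odd.
    if n == 0:
        return True
    return is_odd(n - 1)

def is_odd(n):
    # 0 is not odd; otherwise n is odd iff n-1 is even.
    if n == 0:
        return False
    return is_even(n - 1)
```

Each function defers to the other on a smaller argument until one of the base cases is reached.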
<span>A function with multiple recursive calls is said to be tree recursive because each call branches into multiple smaller calls, each of which branches into yet smaller calls, just as the branches of a tree become smaller but more numerous as they extend from the trunk. We were already able to define a function to compute Fibonacci numbers without tree recursion. In fact, our previous attempts were more efficient, a topic discussed later in the text. #### Annotation 1438538140940 #python #sicp We can think of a tree-recursive function as exploring different possibilities. 1.7 Recursive Functions -m, m) + count_partitions(n, m-1) >>> count_partitions(6, 4) 9 >>> count_partitions(5, 5) 7 >>> count_partitions(10, 10) 42 >>> count_partitions(15, 15) 176 >>> count_partitions(20, 20) 627 <span>We can think of a tree-recursive function as exploring different possibilities. In this case, we explore the possibility that we use a part of size m and the possibility that we do not. The first and second recursive calls correspond to these possibilities. Im #### Annotation 1439144217868 #deeplearning #protein #proteomics #research Sander14 developed a DSSP algorithm to classify SS into 8 fine-grained states. In particular, DSSP assigns 3 types for helix (G for 310 helix, H for alpha-helix, and I for pi-helix), 2 types for strand (E for beta-strand and B for beta-bridge), and 3 types for coil (T for beta-turn, S for high curvature loop, and L for irregular). Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields : Scientific Reports S) refers to the local conformation of the polypeptide backbone of proteins. There are two regular SS states: alpha-helix (H) and beta-strand (E), as suggested by Pauling 13 more than 60 years ago, and one irregular SS type: coil region (C). <span>Sander 14 developed a DSSP algorithm to classify SS into 8 fine-grained states. 
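The `count_partitions` calls quoted above come from the book's tree-recursive definition, restated here in full: the two recursive calls explore the possibility of using a part of size m and the possibility of not using one.

```python
def count_partitions(n, m):
    """Count partitions of n using parts up to size m."""
    if n == 0:
        return 1            # one way to partition 0: use no parts
    elif n < 0:
        return 0            # overshot; not a valid partition
    elif m == 0:
        return 0            # no parts left to use
    else:
        # Either use at least one part of size m, or use none.
        return count_partitions(n - m, m) + count_partitions(n, m - 1)
```

Each call branches into two smaller calls, producing the tree-shaped process the text describes.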
In particular, DSSP assigns 3 types for helix (G for 3 10 helix, H for alpha-helix, and I for pi-helix), 2 types for strand (E for beta-strand and B for beta-bridge), and 3 types for coil (T for beta-turn, S for high curvature loop, and L for irregular). Overall, protein secondary structure can be regarded as a bridge that links the primary sequence and tertiary structure and thus, is used by many structure and functional analysis tools #### Flashcard 1439739022604 Tags Question Section 2 addresses questions such as: How does a chosen [...] and [...] evolve into specific decisions that affect the profitability of the firm? pricing and output strategy The answers to these questions are related to the forces of the market structure within which the firm operates. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it determines the degree of competition associated with each market structure? Given the degree of competition associated with each market structure, what decisions are left to the management team developing corporate strategy? How does a chosen <span>pricing and output strategy evolve into specific decisions that affect the profitability of the firm? The answers to these questions are related to the forces of the market structure within which the firm operates #### Original toplevel document 1. INTRODUCTION ts are possible even in the long run; in the short run, any outcome is possible. Therefore, understanding the forces behind the market structure will aid the financial analyst in determining firms’ short- and long-term prospects. <span>Section 2 introduces the analysis of market structures. The section addresses questions such as: What determines the degree of competition associated with each market structure? Given the degree of competition associated with each market structure, what decisions are left to the management team developing corporate strategy? 
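The 8-to-3 grouping of DSSP secondary-structure states described earlier (helix: G, H, I; strand: E, B; coil: T, S, L) can be captured as a small lookup table; a sketch whose function name is illustrative, not from the paper:

```python
# Map DSSP's 8 fine-grained states onto the 3 coarse classes.
DSSP8_TO_3 = {
    'G': 'H', 'H': 'H', 'I': 'H',   # 3-10, alpha, and pi helices
    'E': 'E', 'B': 'E',             # beta-strand and beta-bridge
    'T': 'C', 'S': 'C', 'L': 'C',   # beta-turn, high-curvature loop, irregular
}

def coarse_ss(states):
    """Collapse a string of 8-state labels to 3-state labels."""
    return ''.join(DSSP8_TO_3[s] for s in states)
```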
How does a chosen pricing and output strategy evolve into specific decisions that affect the profitability of the firm? The answers to these questions are related to the forces of the market structure within which the firm operates. Sections 3, 4, 5, and 6 analyze demand, supply, optimal price and output, and factors affecting long-run equilibrium for perfect competition, monopolistic competition, olig #### Flashcard 1442919615756 Tags #estructura-interna-de-las-palabras #formantes-morfológicos #gramatica-española #la #morfología #tulio Question Del inventario de formantes reconocidos, reconoceremos dos clases: a. Los formantes léxicos pertenecen a una [...] de palabras: clase particular status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it y>Del inventario de formantes reconocidos, reconoceremos dos clases: a. Los formantes léxicos: tienen un significado léxico, que se define en el diccionario: gota, cuenta. Se agrupan en clases abiertas. Pertenecen a una clase particular de palabras: sustantivos (gota), adjetivos (útil), adverbios (ayer), verbos (cuenta). Pueden ser: - palabras simples (gota, útil, ayer); - base a la que se adosan los afijos en palabras #### Original toplevel document La estructura interna de la palabra #### Annotation 1443072183564 #cfa-level-1 #fra-introduction #reading-22-financial-statement-analysis-intro #study-session-7 Reading 22 is organized as follows: Section 2 discusses the scope of financial statement analysis. Section 3 describes the sources of information used in financial statement analysis, including the primary financial statements. Section 4 provides a framework for guiding the financial statement analysis process. 1. INTRODUCTION dited) commentary by management. Basic financial statement analysis—as presented in this reading—provides a foundation that enables the analyst to better understand information gathered from research beyond the financial reports. 
<span>This reading is organized as follows: Section 2 discusses the scope of financial statement analysis. Section 3 describes the sources of information used in financial statement analysis, including the primary financial statements (balance sheet, statement of comprehensive income, statement of changes in equity, and cash flow statement). Section 4 provides a framework for guiding the financial statement analysis process. A summary of the key points and practice problems in the CFA Institute multiple-choice format conclude the reading. <span><body><html> #### Flashcard 1444415933708 Tags #pao Question 02 El Canelo Boxing gloves status measured difficulty not learned 37% [default] 0 #### Flashcard 1444608609548 Question Mit [default - edit me] status measured difficulty not learned 37% [default] 0 Lesson 20. Dative und Accusative Prepositions | Yes German e used with Dative Case only, no matter what. They are given in an order that is best to memorize. Mit, nach, aus, zu, von, bei, seit, außer, entgegen, gegenüber In the table you will find English equivalents for these prepositions: <span>Mit With, by Nach After, to Aus From, out of Zu To, at Von From, by Bei At, near Seit Since, for Außer Except for, besides Entgegen Towards, toward Gegenüber Opposite, across from Example #### Flashcard 1444610182412 Question [default - edit me] With, by status measured difficulty not learned 37% [default] 0 Lesson 20. Dative und Accusative Prepositions | Yes German ed with Dative Case only, no matter what. They are given in an order that is best to memorize. 
Mit, nach, aus, zu, von, bei, seit, außer, entgegen, gegenüber In the table you will find English equivalents for these prepositions: Mit <span>With, by Nach After, to Aus From, out of Zu To, at Von From, by Bei At, near Seit Since, for Außer Except for, besides Entgegen Towards, toward Gegenüber Opposite, across from Examples of Use: #### Flashcard 1444629056780 Question What is an international application [default - edit me] status measured difficulty not learned 37% [default] 0 PCT Guide nternational Phase of the PCT Applicant's Guide ← Table of Contents → CHAPTER 5: FILING AN INTERNATIONAL APPLICATION GENERAL Article 2(vii) 3(1) 5.001. <span>What is an international application? An application is “international” when it is filed under and with reference to the PCT. It is the first step towards obtaining a patent in or for a State party to the PCT: “in” su #### Flashcard 1444630629644 Question [default - edit me] An application is “international” when it is filed under and with reference to the PCT. It is the first step towards obtaining a patent in or for a State party to the PCT: “in” such a State when a national patent is desired; “for” such a State when a regional patent (ARIPO, Eurasian, European or OAPI patent) is desired. status measured difficulty not learned 37% [default] 0 PCT Guide s Guide ← Table of Contents → CHAPTER 5: FILING AN INTERNATIONAL APPLICATION GENERAL Article 2(vii) 3(1) 5.001. What is an international application? <span>An application is “international” when it is filed under and with reference to the PCT. It is the first step towards obtaining a patent in or for a State party to the PCT: “in” such a State when a national patent is desired; “for” such a State when a regional patent (ARIPO, Eurasian, European or OAPI patent) is desired. Article 2(i) and (ii) 3(1) 5.002. What may be the subject of an international application? 
An international application must be an application for the protection of #### Flashcard 1446673517836 Tags #av #elektryka #g1 Question The base unit of current intensity is 1 [...] ampere (A) status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it The base unit of current intensity is 1 ampere (A) #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446675090700 Tags #av #elektryka #g1 Question The base unit of current [...] is 1 ampere (A) intensity status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it The base unit of current intensity is 1 ampere (A) #### Original toplevel document (pdf) cannot see any pdfs #### Annotation 1446676663564 #_av #b21 #elektryka #g1 #m_michalski #seo In conductors, electric current flows from higher potential to lower potential. For this to be possible, a closed circuit must contain an element that carries charge carriers from points at lower potential to points at higher potential, i.e. against the electric field acting on them. This requires a supply of energy and takes place in elements called current sources. The role of a momentary energy source in a circuit can also be played by an inertial element (one capable of storing energy) – a previously charged capacitor, or an inductor with energy stored in its magnetic field. #### pdf cannot see any pdfs #### Annotation 1446678236428 #_av #b21 #elektryka #g1 #m_michalski #seo A current source is a device that supplies electrical energy to power other electrical devices. A current source may produce electrical energy at the expense of other forms of energy, e.g.: • chemical (a chemical cell) • thermal (the Seebeck effect) • mechanical (a generator) • light (a photovoltaic cell) The electrical power grid is also referred to as a current source, as are power supplies, which often act as converters of mains current.
A distinction is made between alternating-current (AC) and direct-current (DC) power supplies. #### pdf cannot see any pdfs #### Annotation 1446680071436 #_av #b21 #elektryka #g1 #m_michalski #seo Direct current is a current whose intensity is constant as a function of time. In the circuit, the electrons move continuously in a single direction #### pdf cannot see any pdfs #### Annotation 1446681644300 #_av #b21 #elektryka #g1 #m_michalski #seo Direct current always flows in the same direction – the polarity of the current does not change => constant current intensity and voltage. It can be stored, e.g. in an accumulator (a galvanic cell), which can be recharged after it has been discharged. #### pdf cannot see any pdfs #### Annotation 1446683217164 #_av #b21 #elektryka #g1 #m_michalski #seo Variable current is a current whose intensity varies as a function of time. #### pdf cannot see any pdfs #### Annotation 1446684790028 #_av #b21 #elektryka #g1 #m_michalski #seo Periodically variable current: a variable current whose changes repeat over time. #### pdf cannot see any pdfs #### Annotation 1446686362892 #_av #b21 #elektryka #g1 #m_michalski #seo Alternating current is a characteristic case of periodically variable electric current in which the instantaneous values change in a repeating, periodic way with a defined frequency. The instantaneous values of the intensity of an alternating current alternate between positive and negative values. Sinusoidal current and voltage are of by far the greatest practical importance. #### pdf cannot see any pdfs #### Annotation 1446687935756 #_av #b21 #elektryka #g1 #m_michalski #seo Periodically alternating current: an alternating current whose changes repeat over time. #### pdf cannot see any pdfs #### Annotation 1446689508620 #_av #b21 #elektryka #g1 #m_michalski #seo Sinusoidal current: an alternating current whose intensity and direction vary as a sine (cosine) function.
#### pdf cannot see any pdfs #### Flashcard 1446738791692 Tags #cfa-level-1 #economics #has-images #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question In which market structure does this happen? Perfect competition status measured difficulty not learned 37% [default] 0 #### Annotation 1446740626700 #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 The quantity or quantity demanded variable is the amount of the product that consumers are willing and able to buy at each price level. The quantity sold can be affected by the business through such activities as sales promotion, advertising, and competitive positioning of the product that would take place under the market model of imperfect competition. Under perfect competition, however, total quantity in the market is influenced strictly by price, while non-price factors are not important. Once consumer preferences are established in the market, price determines the quantity demanded by buyers. Together, price and quantity constitute the firm’s demand curve, which becomes the basis for calculating the total, average, and marginal revenue. 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS (AR) Marginal Revenue (MR) 0 100 0 — — 1 100 100 100 100 2 100 200 100 100 3 100 300 100 100 4 100 400 100 100 5 100 500 100 100 6 100 600 100 100 7 100 700 100 100 8 100 800 100 100 9 100 900 100 100 10 100 1,000 100 100 <span>The quantity or quantity demanded variable is the amount of the product that consumers are willing and able to buy at each price level. The quantity sold can be affected by the business through such activities as sales promotion, advertising, and competitive positioning of the product that would take place under the market model of imperfect competition. 
Under perfect competition, however, total quantity in the market is influenced strictly by price, while non-price factors are not important. Once consumer preferences are established in the market, price determines the quantity demanded by buyers. Together, price and quantity constitute the firm’s demand curve, which becomes the basis for calculating the total, average, and marginal revenue. In Exhibit 4, price is the market price as established by the interactions of the market demand and supply factors. Since the firm is a price taker, price is fixed at 100 #### Annotation 1446741675276 #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 The quantity or quantity demanded variable is the amount of the product that consumers are willing and able to buy at each price level. #### Parent (intermediate) annotation Open it The quantity or quantity demanded variable is the amount of the product that consumers are willing and able to buy at each price level. The quantity sold can be affected by the business through such activities as sales promotion, advertising, and competitive positioning of the product that would take place under the mar #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS (AR) Marginal Revenue (MR) 0 100 0 — — 1 100 100 100 100 2 100 200 100 100 3 100 300 100 100 4 100 400 100 100 5 100 500 100 100 6 100 600 100 100 7 100 700 100 100 8 100 800 100 100 9 100 900 100 100 10 100 1,000 100 100 <span>The quantity or quantity demanded variable is the amount of the product that consumers are willing and able to buy at each price level. The quantity sold can be affected by the business through such activities as sales promotion, advertising, and competitive positioning of the product that would take place under the market model of imperfect competition. 
Under perfect competition, however, total quantity in the market is influenced strictly by price, while non-price factors are not important. Once consumer preferences are established in the market, price determines the quantity demanded by buyers. Together, price and quantity constitute the firm’s demand curve, which becomes the basis for calculating the total, average, and marginal revenue. In Exhibit 4, price is the market price as established by the interactions of the market demand and supply factors. Since the firm is a price taker, price is fixed at 100 #### Flashcard 1446743248140 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question The quantity sold can be affected by the business through such activities as sales promotion, advertising, and competitive positioning of the product that would take place under the market model of [...]. imperfect competition status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it willing and able to buy at each price level. The quantity sold can be affected by the business through such activities as sales promotion, advertising, and competitive positioning of the product that would take place under the market model of <span>imperfect competition. Under perfect competition, however, total quantity in the market is influenced strictly by price, while non-price factors are not important. Once consumer preferences are established i #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS (AR) Marginal Revenue (MR) 0 100 0 — — 1 100 100 100 100 2 100 200 100 100 3 100 300 100 100 4 100 400 100 100 5 100 500 100 100 6 100 600 100 100 7 100 700 100 100 8 100 800 100 100 9 100 900 100 100 10 100 1,000 100 100 <span>The quantity or quantity demanded variable is the amount of the product that consumers are willing and able to buy at each price level. 
The quantity sold can be affected by the business through such activities as sales promotion, advertising, and competitive positioning of the product that would take place under the market model of imperfect competition. Under perfect competition, however, total quantity in the market is influenced strictly by price, while non-price factors are not important. Once consumer preferences are established in the market, price determines the quantity demanded by buyers. Together, price and quantity constitute the firm’s demand curve, which becomes the basis for calculating the total, average, and marginal revenue. In Exhibit 4, price is the market price as established by the interactions of the market demand and supply factors. Since the firm is a price taker, price is fixed at 100 #### Flashcard 1446745607436 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question Under perfect competition total quantity in the market is influenced strictly by [...] price Once consumer preferences are established in the market, price determines the quantity demanded by buyers. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it ties as sales promotion, advertising, and competitive positioning of the product that would take place under the market model of imperfect competition. Under perfect competition, however, total quantity in the market is influenced strictly by <span>price, while non-price factors are not important. Once consumer preferences are established in the market, price determines the quantity demanded by buyers. Together, price and quantity const #### Original toplevel document 3. 
#### Flashcard 1446747966732

Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Question
Together, price and quantity constitute the firm’s [...], which becomes the basis for calculating the total, average, and marginal revenue.

demand curve
#### Flashcard 1446750326028

Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Question
For any firm that sells at a uniform price, [...] will equal price.

average revenue

Average revenue (AR) is total revenue divided by quantity sold. The result of this calculation is simply the price that the firm receives in the market for selling a given quantity. For any firm that sells at a uniform price, average revenue will equal price.
For example, AR at 3 units is 100 (calculated as 300 ÷ 3 units); at 8 units it is also 100 (calculated as 800 ÷ 8 units). Marginal revenue (MR) is the change in total revenue divided by the change in quantity sold; it is simply the additional revenue from selling one more unit. For example, in Exhibit 4, MR at 4 units is 100 [calculated as (400 – 300) ÷ (4 – 3)]; at 9 units it is also 100 [calculated as (900 – 800) ÷ (9 – 8)].

#### Flashcard 1446752685324

Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Question
In a competitive market in which price is constant to the individual firm regardless of the amount of output offered, [...] is equal to average revenue, where both are the same as the [...].

marginal revenue; market price

Reviewing the revenue data in Exhibit 4, price, average revenue, and marginal revenue are all equal to 100. In the case of imperfect competition, MR declines with greater output.

#### Annotation 1446759501068

TR, AR, MR under imperfect competition
Tags
#cfa-level-1 #economics #has-images #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Total revenue increases with a greater quantity, but the rate of increase in TR (as measured by marginal revenue) declines as quantity increases. Average revenue and marginal revenue decrease when output increases, with MR falling faster than price and AR. Average revenue is equal to price at each quantity level.
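The TR, AR, and MR arithmetic above can be sketched as a small Python helper. The function name and structure are illustrative, not part of the reading; the price of 100 follows Exhibit 4.

```python
# Revenue schedule for a price-taking firm at the constant market price of
# 100 from Exhibit 4: TR = P * Q, AR = TR / Q, MR = change in TR from
# selling one more unit.

def revenue_schedule(price, max_qty):
    """Return (Q, TR, AR, MR) rows for quantities 1..max_qty."""
    rows = []
    prev_tr = 0
    for q in range(1, max_qty + 1):
        tr = price * q          # total revenue
        ar = tr / q             # average revenue: equals price at a uniform price
        mr = tr - prev_tr       # marginal revenue: additional revenue per unit
        rows.append((q, tr, ar, mr))
        prev_tr = tr
    return rows

# Under perfect competition, AR and MR both equal the market price at every Q.
for q, tr, ar, mr in revenue_schedule(price=100, max_qty=10):
    assert ar == 100 and mr == 100
```

Because price is constant to the firm, every row reproduces AR = MR = P = 100, matching the Exhibit 4 data (e.g., AR at 3 units is 300 ÷ 3 = 100, MR at 4 units is 400 − 300 = 100).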
#### Annotation 1446766316812

Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

### 3.1.2. Factors of Production

Revenue generation occurs when output is sold in the market. However, costs are incurred before revenue generation takes place as the firm purchases resources, or what are commonly known as the factors of production, in order to produce a product or service that will be offered for sale to consumers. Factors of production, the inputs to the production of goods and services, include:

• land, as in the site location of the business;
• labor, which consists of the inputs of skilled and unskilled workers as well as the inputs of firms’ managers;
• capital, which in this context refers to physical capital—such tangible goods as equipment, tools, and buildings. Capital goods are distinguished as inputs to production that are themselves produced goods; and
• materials, which in this context refers to any goods the business buys as inputs to its production process.

For example, a business that produces solid wood office desks needs to acquire lumber and hardware accessories as raw materials and hire workers to construct and assemble the desks using power tools and equipment.
The factors of production are the inputs to the firm’s process of producing and selling a product or service, where the goal of the firm is to maximize profit by satisfying the demand of consumers. The types and quantities of resources or factors used in production, their respective prices, and how efficiently they are employed in the production process determine the cost component of the profit equation.

Clearly, in order to produce output, the firm needs to employ factors of production. While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. The relationship between the flow of output and the two factors of production is called the production function, and it is represented generally as:

Equation (5)

Q = f(K, L)

where Q is the quantity of output, K is capital, and L is labor. The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. A more general production function is stated as:

Equation (6)

Q = f(x1, x2, …, xn)

where xi represents the quantity of the ith input, subject to xi ≥ 0, for n different inputs.

Exhibit 9 illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. Moreover, in Region 1 (L0–L1), TP is increasing at an increasing rate, typically because specialization allows laborers to become increasingly productive. In Region 2 (L1–L2), however, TP is increasing at a decreasing rate because capital is fixed and labor experiences diminishing marginal returns. The firm would want to avoid Region 3 if at all possible because total product would be declining rather than increasing with additional input: there is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized.

Exhibit 9. A Firm’s Production Function
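The three regions can be illustrated numerically. A cubic total-product curve is a standard textbook device for producing this shape; the functional form and coefficients below are illustrative assumptions, not taken from the reading.

```python
# Short-run production with capital fixed: a cubic TP curve reproduces the
# three regions described for Exhibit 9. The coefficients are arbitrary
# assumptions chosen only for illustration.

def total_product(labor):
    """TP(L) = 6L^2 - 0.2L^3: rises at an increasing, then decreasing
    rate, and eventually declines."""
    return 6 * labor**2 - 0.2 * labor**3

def marginal_product(labor):
    """Discrete marginal product: extra output from one more unit of labor."""
    return total_product(labor) - total_product(labor - 1)

# Region 1: MP rising (specialization makes added labor more productive).
assert marginal_product(5) > marginal_product(4)
# Region 2: MP falling but positive (diminishing marginal returns).
assert 0 < marginal_product(15) < marginal_product(14)
# Region 3: MP negative -- TP itself declines as labor is added.
assert marginal_product(25) < 0
```

With these coefficients, marginal product rises through roughly L = 10 (Region 1), falls but stays positive until TP peaks at L = 20 (Region 2, ending at Point A), and turns negative beyond that (Region 3).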
#### Annotation 1446769462540

Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Revenue generation occurs when output is sold in the market.
However, costs are incurred before revenue generation takes place as the firm purchases the factors of production in order to produce a product that will be offered for sale to consumers.
#### Annotation 1446772346124

Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Factors of production, the inputs to the production of goods and services, include:

• land, as in the site location of the business;
• labor, which consists of the inputs of skilled and unskilled workers as well as the inputs of firms’ managers;
• capital, which in this context refers to physical capital—such tangible goods as equipment, tools, and buildings. Capital goods are distinguished as inputs to production that are themselves produced goods; and
• materials, which in this context refers to any goods the business buys as inputs to its production process.
#### Annotation 1446773918988

Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

The types and quantities of resources used in production, their prices, and how efficiently they are employed in the production process determine the cost component of the profit equation.
#### Flashcard 1446776278284

Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Question
The types and quantities of resources used in production, their prices, and how efficiently they are employed in the production process determine the [...] of the [...].

cost component; profit equation
#### Annotation 1446777851148

Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

In order to produce output, the firm needs to employ factors of production.
However, costs are incurred before revenue generation takes place as the firm purchases resources, or what are commonly known as the factors of production, in order to produce a product or service that will be offered for sale to consumers. Factors of production, the inputs to the production of goods and services, include: land, as in the site location of the business; labor, which consists of the inputs of skilled and unskilled workers as well as the inputs of firms’ managers; capital, which in this context refers to physical capital—such tangible goods as equipment, tools, and buildings. Capital goods are distinguished as inputs to production that are themselves produced goods; and materials, which in this context refers to any goods the business buys as inputs to its production process.1 For example, a business that produces solid wood office desks needs to acquire lumber and hardware accessories as raw materials and hire workers to construct and assemble the desks using power tools and equipment. The factors of production are the inputs to the firm’s process of producing and selling a product or service where the goal of the firm is to maximize profit by satisfying the demand of consumers. The types and quantities of resources or factors used in production, their respective prices, and how efficiently they are employed in the production process determine the cost component of the profit equation. Clearly, in order to produce output, the firm needs to employ factors of production. While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. 
The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. A more general production function is stated as: Equation (6)  Q = f (x 1 , x 2 , … x n ) where x i represents the quantity of the ith input subject to x i ≥ 0 for n number of different inputs. Exhibit 9illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. Moreover, in Region 1 (L 0 – L 1 ), TP is increasing at an increasing rate, typically because specialization allows laborers to become increasingly productive. In Region 2, however, (L 1 – L 2 ), TP is increasing at a decreasing rate because capital is fixed, and labor experiences diminishing marginal returns. The firm would want to avoid Region 3 if at all possible because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized. Exhibit 9. A Firm’s Production Function EXAMPLE 3 Factors of Production A group of business investor #### Annotation 1446779424012 #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. 
#### Parent (intermediate) annotation Open it n, their respective prices, and how efficiently they are employed in the production process determine the cost component of the profit equation. Clearly, in order to produce output, the firm needs to employ factors of production. <span>While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  &# #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS ationships among the revenue variables presented in Exhibit 7. Exhibit 8. Total Revenue, Average Revenue, and Marginal Revenue for Exhibit 7 Data <span>3.1.2. Factors of Production Revenue generation occurs when output is sold in the market. However, costs are incurred before revenue generation takes place as the firm purchases resources, or what are commonly known as the factors of production, in order to produce a product or service that will be offered for sale to consumers. Factors of production, the inputs to the production of goods and services, include: land, as in the site location of the business; labor, which consists of the inputs of skilled and unskilled workers as well as the inputs of firms’ managers; capital, which in this context refers to physical capital—such tangible goods as equipment, tools, and buildings. 
Capital goods are distinguished as inputs to production that are themselves produced goods; and materials, which in this context refers to any goods the business buys as inputs to its production process.1 For example, a business that produces solid wood office desks needs to acquire lumber and hardware accessories as raw materials and hire workers to construct and assemble the desks using power tools and equipment. The factors of production are the inputs to the firm’s process of producing and selling a product or service where the goal of the firm is to maximize profit by satisfying the demand of consumers. The types and quantities of resources or factors used in production, their respective prices, and how efficiently they are employed in the production process determine the cost component of the profit equation. Clearly, in order to produce output, the firm needs to employ factors of production. While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. A more general production function is stated as: Equation (6)  Q = f (x 1 , x 2 , … x n ) where x i represents the quantity of the ith input subject to x i ≥ 0 for n number of different inputs. Exhibit 9illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. 
Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. Moreover, in Region 1 (L 0 – L 1 ), TP is increasing at an increasing rate, typically because specialization allows laborers to become increasingly productive. In Region 2, however, (L 1 – L 2 ), TP is increasing at a decreasing rate because capital is fixed, and labor experiences diminishing marginal returns. The firm would want to avoid Region 3 if at all possible because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized. Exhibit 9. A Firm’s Production Function EXAMPLE 3 Factors of Production A group of business investor #### Flashcard 1446780472588 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, [...], are employed. capital and labor status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS ationships among the revenue variables presented in Exhibit 7. Exhibit 8. Total Revenue, Average Revenue, and Marginal Revenue for Exhibit 7 Data <span>3.1.2. Factors of Production Revenue generation occurs when output is sold in the market. 
However, costs are incurred before revenue generation takes place as the firm purchases resources, or what are commonly known as the factors of production, in order to produce a product or service that will be offered for sale to consumers. Factors of production, the inputs to the production of goods and services, include: land, as in the site location of the business; labor, which consists of the inputs of skilled and unskilled workers as well as the inputs of firms’ managers; capital, which in this context refers to physical capital—such tangible goods as equipment, tools, and buildings. Capital goods are distinguished as inputs to production that are themselves produced goods; and materials, which in this context refers to any goods the business buys as inputs to its production process.1 For example, a business that produces solid wood office desks needs to acquire lumber and hardware accessories as raw materials and hire workers to construct and assemble the desks using power tools and equipment. The factors of production are the inputs to the firm’s process of producing and selling a product or service where the goal of the firm is to maximize profit by satisfying the demand of consumers. The types and quantities of resources or factors used in production, their respective prices, and how efficiently they are employed in the production process determine the cost component of the profit equation. Clearly, in order to produce output, the firm needs to employ factors of production. While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. 
The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. A more general production function is stated as: Equation (6)  Q = f (x 1 , x 2 , … x n ) where x i represents the quantity of the ith input subject to x i ≥ 0 for n number of different inputs. Exhibit 9illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. Moreover, in Region 1 (L 0 – L 1 ), TP is increasing at an increasing rate, typically because specialization allows laborers to become increasingly productive. In Region 2, however, (L 1 – L 2 ), TP is increasing at a decreasing rate because capital is fixed, and labor experiences diminishing marginal returns. The firm would want to avoid Region 3 if at all possible because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized. Exhibit 9. A Firm’s Production Function EXAMPLE 3 Factors of Production A group of business investor #### Annotation 1446782045452 #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. #### Parent (intermediate) annotation Open it ction. 
While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. <span>The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. A more general production function is stated as: Equation (6)  Q = f (x 1 , x 2 , … x n ) #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS ationships among the revenue variables presented in Exhibit 7. Exhibit 8. Total Revenue, Average Revenue, and Marginal Revenue for Exhibit 7 Data <span>3.1.2. Factors of Production Revenue generation occurs when output is sold in the market. However, costs are incurred before revenue generation takes place as the firm purchases resources, or what are commonly known as the factors of production, in order to produce a product or service that will be offered for sale to consumers. Factors of production, the inputs to the production of goods and services, include: land, as in the site location of the business; labor, which consists of the inputs of skilled and unskilled workers as well as the inputs of firms’ managers; capital, which in this context refers to physical capital—such tangible goods as equipment, tools, and buildings. Capital goods are distinguished as inputs to production that are themselves produced goods; and materials, which in this context refers to any goods the business buys as inputs to its production process.1 For example, a business that produces solid wood office desks needs to acquire lumber and hardware accessories as raw materials and hire workers to construct and assemble the desks using power tools and equipment. 
The factors of production are the inputs to the firm’s process of producing and selling a product or service where the goal of the firm is to maximize profit by satisfying the demand of consumers. The types and quantities of resources or factors used in production, their respective prices, and how efficiently they are employed in the production process determine the cost component of the profit equation. Clearly, in order to produce output, the firm needs to employ factors of production. While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. A more general production function is stated as: Equation (6)  Q = f (x 1 , x 2 , … x n ) where x i represents the quantity of the ith input subject to x i ≥ 0 for n number of different inputs. Exhibit 9illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. Moreover, in Region 1 (L 0 – L 1 ), TP is increasing at an increasing rate, typically because specialization allows laborers to become increasingly productive. In Region 2, however, (L 1 – L 2 ), TP is increasing at a decreasing rate because capital is fixed, and labor experiences diminishing marginal returns. 
The firm would want to avoid Region 3 if at all possible because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized. Exhibit 9. A Firm’s Production Function EXAMPLE 3 Factors of Production A group of business investor #### Flashcard 1446783094028 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: [...] Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS ationships among the revenue variables presented in Exhibit 7. Exhibit 8. Total Revenue, Average Revenue, and Marginal Revenue for Exhibit 7 Data <span>3.1.2. Factors of Production Revenue generation occurs when output is sold in the market. However, costs are incurred before revenue generation takes place as the firm purchases resources, or what are commonly known as the factors of production, in order to produce a product or service that will be offered for sale to consumers. 
Factors of production, the inputs to the production of goods and services, include: land, as in the site location of the business; labor, which consists of the inputs of skilled and unskilled workers as well as the inputs of firms’ managers; capital, which in this context refers to physical capital—such tangible goods as equipment, tools, and buildings. Capital goods are distinguished as inputs to production that are themselves produced goods; and materials, which in this context refers to any goods the business buys as inputs to its production process.1 For example, a business that produces solid wood office desks needs to acquire lumber and hardware accessories as raw materials and hire workers to construct and assemble the desks using power tools and equipment. The factors of production are the inputs to the firm’s process of producing and selling a product or service where the goal of the firm is to maximize profit by satisfying the demand of consumers. The types and quantities of resources or factors used in production, their respective prices, and how efficiently they are employed in the production process determine the cost component of the profit equation. Clearly, in order to produce output, the firm needs to employ factors of production. While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. 
A more general production function is stated as: Equation (6)  Q = f (x 1 , x 2 , … x n ) where x i represents the quantity of the ith input subject to x i ≥ 0 for n number of different inputs. Exhibit 9illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. Moreover, in Region 1 (L 0 – L 1 ), TP is increasing at an increasing rate, typically because specialization allows laborers to become increasingly productive. In Region 2, however, (L 1 – L 2 ), TP is increasing at a decreasing rate because capital is fixed, and labor experiences diminishing marginal returns. The firm would want to avoid Region 3 if at all possible because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized. Exhibit 9. A Firm’s Production Function EXAMPLE 3 Factors of Production A group of business investor #### Annotation 1446785453324 #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 The inputs in the production function are subject to the constraint that K ≥ 0 and L ≥ 0. #### Parent (intermediate) annotation Open it utput and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. <span>The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. 
A more general production function is stated as: Equation (6)  Q = f (x 1 , x 2 , … x n ) where x i represents the quantity of the ith input subj #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS ationships among the revenue variables presented in Exhibit 7. Exhibit 8. Total Revenue, Average Revenue, and Marginal Revenue for Exhibit 7 Data <span>3.1.2. Factors of Production Revenue generation occurs when output is sold in the market. However, costs are incurred before revenue generation takes place as the firm purchases resources, or what are commonly known as the factors of production, in order to produce a product or service that will be offered for sale to consumers. Factors of production, the inputs to the production of goods and services, include: land, as in the site location of the business; labor, which consists of the inputs of skilled and unskilled workers as well as the inputs of firms’ managers; capital, which in this context refers to physical capital—such tangible goods as equipment, tools, and buildings. Capital goods are distinguished as inputs to production that are themselves produced goods; and materials, which in this context refers to any goods the business buys as inputs to its production process.1 For example, a business that produces solid wood office desks needs to acquire lumber and hardware accessories as raw materials and hire workers to construct and assemble the desks using power tools and equipment. The factors of production are the inputs to the firm’s process of producing and selling a product or service where the goal of the firm is to maximize profit by satisfying the demand of consumers. The types and quantities of resources or factors used in production, their respective prices, and how efficiently they are employed in the production process determine the cost component of the profit equation. Clearly, in order to produce output, the firm needs to employ factors of production. 
While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. A more general production function is stated as: Equation (6)  Q = f (x 1 , x 2 , … x n ) where x i represents the quantity of the ith input subject to x i ≥ 0 for n number of different inputs. Exhibit 9illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. Moreover, in Region 1 (L 0 – L 1 ), TP is increasing at an increasing rate, typically because specialization allows laborers to become increasingly productive. In Region 2, however, (L 1 – L 2 ), TP is increasing at a decreasing rate because capital is fixed, and labor experiences diminishing marginal returns. The firm would want to avoid Region 3 if at all possible because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized. Exhibit 9. 
A Firm’s Production Function EXAMPLE 3 Factors of Production A group of business investor #### Flashcard 1446787288332 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question The inputs in the production function are subject to the constraint that [...] and [...] K ≥ 0 L ≥ 0. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it The inputs in the production function are subject to the constraint that K ≥ 0 and L ≥ 0. #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS ationships among the revenue variables presented in Exhibit 7. Exhibit 8. Total Revenue, Average Revenue, and Marginal Revenue for Exhibit 7 Data <span>3.1.2. Factors of Production Revenue generation occurs when output is sold in the market. However, costs are incurred before revenue generation takes place as the firm purchases resources, or what are commonly known as the factors of production, in order to produce a product or service that will be offered for sale to consumers. Factors of production, the inputs to the production of goods and services, include: land, as in the site location of the business; labor, which consists of the inputs of skilled and unskilled workers as well as the inputs of firms’ managers; capital, which in this context refers to physical capital—such tangible goods as equipment, tools, and buildings. Capital goods are distinguished as inputs to production that are themselves produced goods; and materials, which in this context refers to any goods the business buys as inputs to its production process.1 For example, a business that produces solid wood office desks needs to acquire lumber and hardware accessories as raw materials and hire workers to construct and assemble the desks using power tools and equipment. 
The factors of production are the inputs to the firm’s process of producing and selling a product or service where the goal of the firm is to maximize profit by satisfying the demand of consumers. The types and quantities of resources or factors used in production, their respective prices, and how efficiently they are employed in the production process determine the cost component of the profit equation. Clearly, in order to produce output, the firm needs to employ factors of production. While firms may use many different types of labor, capital, raw materials, and land, an analyst may find it more convenient to limit attention to a more simplified process in which only the two factors, capital and labor, are employed. The relationship between the flow of output and the two factors of production is called the production function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. A more general production function is stated as: Equation (6)  Q = f (x 1 , x 2 , … x n ) where x i represents the quantity of the ith input subject to x i ≥ 0 for n number of different inputs. Exhibit 9illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. Moreover, in Region 1 (L 0 – L 1 ), TP is increasing at an increasing rate, typically because specialization allows laborers to become increasingly productive. In Region 2, however, (L 1 – L 2 ), TP is increasing at a decreasing rate because capital is fixed, and labor experiences diminishing marginal returns. 
The firm would want to avoid Region 3 if at all possible because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized. Exhibit 9. A Firm’s Production Function EXAMPLE 3 Factors of Production A group of business investor #### Annotation 1446789909772 #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 A more general production function is stated as: Equation (6)  Q = f (x1, x2, … xn) where xi represents the quantity of the ith input subject to xi ≥ 0 for n number of different inputs. #### Parent (intermediate) annotation Open it n function , and it is represented generally as: Equation (5)  Q = f (K, L) where Q is the quantity of output, K is capital, and L is labor. The inputs are subject to the constraint that K ≥ 0 and L ≥ 0. <span>A more general production function is stated as: Equation (6)  Q = f (x 1 , x 2 , … x n ) where x i represents the quantity of the ith input subject to x i ≥ 0 for n number of different inputs. Exhibit 9illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). The production function has #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS ationships among the revenue variables presented in Exhibit 7. Exhibit 8. Total Revenue, Average Revenue, and Marginal Revenue for Exhibit 7 Data <span>3.1.2. Factors of Production Revenue generation occurs when output is sold in the market. However, costs are incurred before revenue generation takes place as the firm purchases resources, or what are commonly known as the factors of production, in order to produce a product or service that will be offered for sale to consumers. 
Factors of production, the inputs to the production of goods and services, include: land, as in the site location of the business; labor, which consists of the inputs of skilled and unskilled workers as well as the inputs of firms’ managers; capital, which in this context refers to physical capital: such tangible goods as equipment, tools, and buildings. Capital goods are distinguished as inputs to production that are themselves produced goods; and materials, which in this context refers to any goods the business buys as inputs to its production process.1 For example, a business that produces solid wood office desks needs to acquire lumber and hardware accessories as raw materials and hire workers to construct and assemble the desks using power tools and equipment.
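The short-run production function described in the reading (one variable input, capital held fixed) can be sketched numerically. The cubic functional form below is a hypothetical illustration, not from the CFA reading; it is chosen only because it reproduces the three regions of Exhibit 9: TP rising at an increasing rate, then at a decreasing rate, then falling.

```python
# Hypothetical short-run production function with capital fixed:
#   TP(L) = 6*L**2 - 0.2*L**3
# Region 1 (L < 10):   TP rises at an increasing rate (specialization).
# Region 2 (10 to 20): TP rises at a decreasing rate (diminishing marginal returns).
# Region 3 (L > 20):   TP falls; marginal product is negative.

def total_product(labor: float) -> float:
    """Output Q as a function of labor, holding capital constant."""
    return 6 * labor**2 - 0.2 * labor**3

def marginal_product(labor: float) -> float:
    """Discrete marginal product: extra output from one more unit of labor."""
    return total_product(labor + 1) - total_product(labor)

if __name__ == "__main__":
    for L in (2, 10, 25):
        print(f"L={L:>2}  TP={total_product(L):7.1f}  MP={marginal_product(L):6.1f}")
```

Running the sketch shows marginal product rising in Region 1, positive but falling in Region 2, and negative in Region 3, which is why a firm would avoid Region 3: total product peaks (Point A in Exhibit 9) where marginal product crosses zero.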
#### Flashcard 1446790958348 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question A more general production function is stated as: Equation (6) Q = f(x1, x2, …, xn) where xi represents [...] subject to xi ≥ 0 for n number of different inputs.
the quantity of the ith input status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS
Article 1446807997708 つき【月】tsuki #has-images つき【月】tsuki The Moon: Earth’s satellite and the celestial body nearest to Earth. Its mean distance from Earth is 384,400 km. The Moon’s radius, 1,738 km at the equator, is about one quarter of Earth’s and 1/400 of the Sun’s, so from Earth the Sun and Moon appear roughly the same size, with a mean apparent radius of 15′33″. Its mass is 1/81.3 of Earth’s, 7.35×10^22 kg; its mean density is 3.34 g/cm³ (about 0.6 times Earth’s); its surface gravity is about one sixth of Earth’s. 〔The lunar surface〕 The Moon shows bright and dark regions. The bright regions, called highlands, have rugged terrain covered with roughly circular depressions called craters. Craters were formed by meteorite impacts; the largest, the Aitken basin, is about 2,500 km across. Elevation differences on the near side are about 3,000 m, but on the far side about 20,000 m. The dark regions are low and flat, have few craters, and are called maria (“seas”). The highlands are made of whitish anorthosite, the maria of dark basalt; the highlands are the older of the two. The maria are thought to be craters excavated by giant impacts and later flooded by basaltic lava from the interior. Surface temperatures range from about 120 °C by day to −170 °C at night; there is no atmosphere and no life. 〔The Moon’s motion〕 The Moon’s orbital period around Earth measured against the stars, the sidereal month, is 27.321662 days, while the cycle of phases seen from Earth, the synodic month, is 29.530589 days. The sidereal month is shorter because Earth is also revolving around the Sun. The Moon rotates as it revolves in the same direction as Earth’s revolution, and its rotation period equals its orbital period, so the far side is never visible from Earth. In fact the Moon nods slightly up, down, left, and right, so 59% of its surface can be seen from Earth. The Moon’s orbital path, the hakudō (白道), is inclined 5°8′ to the ecliptic. Lunar eclipses occur at full moon, but not at every full moon, because of this 5°8′ inclination of the orbit. 〔Phases of the Moon〕 The Moon waxes and wanes about every 29.5 days, because the sunlit hemisphere is viewed from changing angles as the Moon revolves (the full disk can be made out with binoculars). Seen from Earth, the Moon is new when it lies in the same direction as the Sun, and when it comes to the side opposite the Sun Article 1446814027020 えいせい【衛星】 #has-images えいせい【衛星】 Satellite: a celestial body that orbits a planet*. Most satellites revolve in the same direction as their planet’s rotation (prograde), though some are retrograde. A satellite’s mass is generally very small compared with its planet’s, though some, such as the Moon (1/81 of Earth’s mass), are very massive. The planets without satellites are Mercury and Venus. Article 1446819007756 わくせい【惑星】 #has-images わくせい【惑星】 Planet: one of the eight comparatively large bodies that orbit the Sun* along fixed paths. In order of increasing orbital radius: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune. Mercury, Venus, Mars, Jupiter, and Saturn are easily visible to the naked eye and have been known since antiquity; seen from Earth, however, they wander continually among the fixed stars on the celestial sphere, moving prograde and then retrograde, which the ancients found hard to explain, hence the name “planet” (wandering star). 〔Inferior and superior planets〕
Planets whose orbits lie inside Earth’s are called inferior planets; those outside, superior planets. The inferior planets Mercury and Venus never stray far from the Sun, so they are visible only in the evening or before dawn, whereas superior planets such as Mars, Jupiter, and Saturn are visible all night and may cross the meridian* at midnight. The inferior planets Mercury and Venus show phases from full to new; of the superior planets, Mars shows only a slightly gibbous phase, and the others show no phases at all. 〔Terrestrial and Jovian planets〕 By structure, planets divide into terrestrial planets, made mainly of rock and of high density, and Jovian planets, made of hydrogen and helium gas and of low density. 〔Other bodies in the solar system〕 The solar system contains many bodies besides the planets. Numerous asteroids lie between Mars and Jupiter. The planets have many satellites revolving around them, and many comets also orbit the Sun with fixed periods. #### Flashcard 1446821891340 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question Factors of production, the inputs to the production of goods and services, include: • [...], • labor, • [...] • materials, land capital, status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS
#### Flashcard 1446824250636 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question Factors of production include: • land, as in [...] the site location of the business; status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS
#### Flashcard 1446827396364 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question The inputs to the production of goods and services include: • labor, which consists of [...] as well as [...]
the inputs of skilled and unskilled workers the inputs of firms’ managers; status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS
#### Flashcard 1446830542092 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question Factors of production include: • capital, which in this context refers to [...] physical capital, such tangible goods as equipment, tools, and buildings. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS
#### Annotation 1446832901388 #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Capital goods are distinguished as inputs to production that are themselves produced goods; and #### Parent (intermediate) annotation Open it
Capital goods are distinguished as inputs to production that are themselves produced goods; and materials, which in this context refers to any goods the business buys as inputs to its production process. #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS
#### Flashcard 1446833949964 Tags #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question [...] are distinguished as inputs to production that are themselves produced goods Capital goods status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS
The factors of production are the inputs to the firm’s process of producing and selling a product or service, where the goal of the firm is to maximize profit by satisfying the demand of consumers. The types and quantities of resources or factors used in production, their respective prices, and how efficiently they are employed in the production process determine the cost component of the profit equation.
#### Flashcard 1446836309260

Tags
#cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Question
Factors of production, the inputs to the production of goods and services, include:
• materials, which in this context refers to [...]

Answer
any goods the business buys as inputs to its production process.
#### Flashcard 1446838668556

Tags
#estructura-interna-de-las-palabras #formantes-morfológicos #gramatica-española #la #morfología #tulio

Question
Some affixes are placed after the base: they are the [...]

Answer
suffixes (sufijos)

#### Parent (intermediate) annotation
Some affixes follow the base (gota), as in our examples: they are the suffixes (sufijos). Other affixes precede it: in-útil, des-contento, a-político: they are the prefixes (prefijos). Words that contain an affix are called complex words (palabras complejas).
#### Original toplevel document
La estructura interna de la palabra

#### Flashcard 1446841027852

Tags
#estructura-interna-de-las-palabras #formantes-morfológicos #gramatica-española #la #morfología #tulio

Question
Some affixes precede the base: they are the [...]

Answer
prefixes (prefijos): in-útil, des-contento, a-político

#### Flashcard 1446843387148

Tags
#estructura-interna-de-las-palabras #formantes-morfológicos #gramatica-española #la #morfología #tulio

Question
Words that contain an affix are called [...]

Answer
complex words (palabras complejas)

#### Flashcard 1446847057164

Tags
#3-1-profit-maximization #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Question
There are three approaches to calculate the point of profit maximization. First, given that [...], maximum profit occurs at the output level where this difference is the greatest.
Answer
profit is the difference between total revenue and total costs

#### Original toplevel document

3. ANALYSIS OF REVENUE, COSTS, AND PROFITS

Average variable cost (AVC): Total variable cost divided by quantity; (TVC ÷ Q)
Average total cost (ATC): Total cost divided by quantity; (TC ÷ Q) or (AFC + AVC)
Marginal cost (MC): Change in total cost divided by change in quantity; (∆TC ÷ ∆Q)

3.1. Profit Maximization

In free markets, and even in regulated market economies, profit maximization tends to promote economic welfare and a higher standard of living, and creates wealth for investors. Profit motivates businesses to use resources efficiently and to concentrate on activities in which they have a competitive advantage. Most economists believe that profit maximization promotes allocational efficiency: that resources flow into their highest valued uses. Overall, the functions of profit are as follows:

- Rewards entrepreneurs for risk taking when pursuing business ventures to satisfy consumer demand.
- Allocates resources to their most-efficient use; input factors flow from sectors with economic losses to sectors with economic profit, where profit reflects goods most desired by society.
- Spurs innovation and the development of new technology.
- Stimulates business investment and economic growth.

There are three approaches to calculate the point of profit maximization. First, given that profit is the difference between total revenue and total costs, maximum profit occurs at the output level where this difference is the greatest. Second, maximum profit can also be calculated by comparing revenue and cost for each individual unit of output that is produced and sold.
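The per-unit cost definitions (AVC, ATC, MC) can be applied to a small cost schedule. The total-cost figures below are invented for illustration and are not from the reading.

```python
# Hypothetical total-cost schedule; fixed cost is total cost at Q = 0.
quantities = [1, 2, 3, 4, 5]
total_costs = [150, 190, 240, 300, 380]
fixed_cost = 100  # TFC: incurred even at zero output (assumed)

rows = []
prev_tc = fixed_cost
for q, tc in zip(quantities, total_costs):
    afc = fixed_cost / q              # AFC = TFC / Q
    avc = (tc - fixed_cost) / q       # AVC = TVC / Q
    atc = tc / q                      # ATC = TC / Q, which equals AFC + AVC
    mc = tc - prev_tc                 # MC = change in TC for one more unit
    rows.append((q, afc, avc, atc, mc))
    prev_tc = tc

for q, afc, avc, atc, mc in rows:
    print(f"Q={q}  AFC={afc:6.2f}  AVC={avc:6.2f}  ATC={atc:6.2f}  MC={mc:3d}")
```

Note that at every quantity the two ways of computing ATC agree: TC ÷ Q gives the same number as AFC + AVC.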
A business increases profit through greater sales as long as per-unit revenue exceeds per-unit cost on the next unit of output sold. Profit maximization takes place at the point where the last individual output unit breaks even. Beyond this point, total profit decreases because the per-unit cost is higher than the per-unit revenue from successive output units. A third approach compares the revenue generated by each resource unit with the cost of that unit. Profit contribution occurs when the revenue from an input unit exceeds its cost. The point of profit maximization is reached when resource units no longer contribute to profit. All three approaches yield the same profit-maximizing quantity of output. (These approaches will be explained in greater detail later.) Because profit is the difference between revenue and cost, an understanding of profit maximization requires that we examine both of those components. Revenue comes from the demand for the firm’s products, and cost comes from the acquisition and utilization of the firm’s inputs in the production of those products.

3.1.1. Total, Average, and Marginal Revenue

This section briefly examines demand and revenue in preparation for addressing cost. Unless the firm is a pu

#### Flashcard 1446848630028

Tags
#3-1-profit-maximization #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Question
There are three approaches to calculate the point of profit maximization. Second, maximum profit can also be calculated by comparing revenue and cost for [...]

Answer
each individual unit of output that is produced and sold.
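The claim that the approaches agree can be checked numerically. The demand and cost schedules below are invented for illustration; the sketch compares the total approach (maximize TR − TC) with the per-unit approach (produce while marginal revenue covers marginal cost).

```python
# Hypothetical schedules: price falls with output, marginal cost rises.
prices = [10, 9, 8, 7, 6, 5, 4]        # price when Q = 1..7 units are sold
marginal_costs = [2, 3, 4, 5, 6, 7, 8]  # cost of each successive unit

total_revenue = [p * (q + 1) for q, p in enumerate(prices)]
total_cost = [sum(marginal_costs[:q + 1]) for q in range(len(marginal_costs))]
profit = [tr - tc for tr, tc in zip(total_revenue, total_cost)]

# Approach 1: output level where the TR - TC gap is greatest.
q_star_1 = profit.index(max(profit)) + 1

# Approach 2: count the units whose marginal revenue covers marginal cost.
marginal_revenue = [total_revenue[0]] + [
    total_revenue[q] - total_revenue[q - 1] for q in range(1, len(total_revenue))
]
q_star_2 = sum(1 for mr, mc in zip(marginal_revenue, marginal_costs) if mr >= mc)

print(q_star_1, q_star_2, max(profit))  # both approaches pick the same quantity
```

With these numbers both approaches select three units of output; past that point each extra unit costs more than the revenue it brings in, so total profit falls.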
#### Flashcard 1446850989324

Tags
#3-1-profit-maximization #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Question
There are three approaches to calculate the point of profit maximization. The third approach compares the [...] with the [...]

Answer
revenue generated by each resource unit; cost of that unit. Profit contribution occurs when the revenue from an input unit exceeds its cost. The point of profit maximization is reached when resource units no longer contribute to profit.
#### Flashcard 1446853348620

Tags

Question
Financial analysis is the process of examining a company’s performance in the context of [...] in order to arrive at a decision or recommendation.

Answer
its industry and economic environment

#### Original toplevel document

1. INTRODUCTION

Financial analysis is the process of examining a company’s performance in the context of its industry and economic environment in order to arrive at a decision or recommendation. Often, the decisions and recommendations addressed by financial analysts pertain to providing capital to companies, specifically, whether to invest in the company’s debt or equity securi

#### Flashcard 1446856232204

Tags

Question
Overall, a central focus of financial analysis is evaluating the company’s ability to earn a return on its capital that is at least equal to [...].
Answer
the cost of that capital, to profitably grow its operations, and to generate enough cash to meet obligations and pursue opportunities.

#### Original toplevel document

1. INTRODUCTION

…nterest and to repay the principal lent. An investor in equity securities is an owner with a residual interest in the company and is concerned about the company’s ability to pay dividends and the likelihood that its share price will increase. Overall, a central focus of financial analysis is evaluating the company’s ability to earn a return on its capital that is at least equal to the cost of that capital, to profitably grow its operations, and to generate enough cash to meet obligations and pursue opportunities. Fundamental financial analysis starts with the information found in a company’s financial reports.

#### Flashcard 1446858591500

Tags

Question
Basic financial statement analysis provides a foundation that enables the analyst to better understand information gathered [...] beyond the financial reports.

Answer
from research
These financial reports include audited financial statements, additional disclosures required by regulatory authorities, and any accompanying (unaudited) commentary by management. Basic financial statement analysis, as presented in this reading, provides a foundation that enables the analyst to better understand information gathered from research beyond the financial reports. This reading is organized as follows: Section 2 discusses the scope of financial statement analysis. Section 3 describes the sources of information used in financial statement analysis, including the primary financial statements (balance sheet, statement of comprehensive income, statement of changes in equity, and cash flow statement). Section 4 provides a framework for guiding the financial statement analysis process.

#### Flashcard 1446860164364

Tags

Question
Reading 22 is organized as follows: Section 2 discusses [...] of financial statement analysis. Section 3 describes the sources of information used in financial statement analysis, including the primary financial statements. Section 4 provides a framework for guiding the financial statement analysis process.

Answer
the scope
A summary of the key points and practice problems in the CFA Institute multiple-choice format conclude the reading.

#### Flashcard 1446861999372

Tags

Question
Is cash flow a complete measure of performance?

Answer
No. Cash flow in any given period is not a complete measure of performance for that period.

#### Original toplevel document

2. SCOPE OF FINANCIAL STATEMENT ANALYSIS

…d to earn that income. Overall, profit (or loss) equals income minus expenses, and its recognition is mostly independent from when cash is received or paid. Example 1 illustrates the distinction between profit and cash flow.

EXAMPLE 1 Profit versus Cash Flow

Sennett Designs (SD) sells furniture on a retail basis. SD began operations during December 2009 and sold furniture for €250,000 in cash. The furniture sold by SD was purchased on credit for €150,000 and delivered by the supplier during December. The credit terms granted by the supplier required SD to pay the €150,000 in January for the furniture it received during December. In addition to the purchase and sale of furniture, in December, SD paid €20,000 in cash for rent and salaries.

1. How much is SD’s profit for December 2009 if no other transactions occurred?
2. How much is SD’s cash flow for December 2009?
3. If SD purchases and sells exactly the same amount in January 2010 as it did in December and under the same terms (receiving cash for the sales and making purchases on credit that will be due in February), how much will the company’s profit and cash flow be for the month of January?

Solution to 1: SD’s profit for December 2009 is the excess of the sales price (€250,000) over the cost of the goods that were sold (€150,000) and rent and salaries (€20,000), or €80,000.
Solution to 2: The December 2009 cash flow is €230,000, the amount of cash received from the customer (€250,000) less the cash paid for rent and salaries (€20,000).

Solution to 3: SD’s profit for January 2010 will be identical to its profit in December: €80,000, calculated as the sales price (€250,000) minus the cost of the goods that were sold (€150,000) and minus rent and salaries (€20,000). SD’s cash flow in January 2010 will also equal €80,000, calculated as the amount of cash received from the customer (€250,000) minus the cash paid for rent and salaries (€20,000) and minus the €150,000 that SD owes for the goods it had purchased on credit in the prior month.

Although profitability is important, so is a company’s ability to generate positive cash flow. Cash flow is important because, ultimately, the company needs cash to pay employees, suppliers, and others in order to continue as a going concern. A company that generates positive cash flow from operations has more flexibility in funding needed for investments and taking advantage of attractive business opportunities than an otherwise comparable company without positive operating cash flow. Additionally, a company needs cash to pay returns (interest and dividends) to providers of debt and equity capital. Therefore, the expected magnitude of future cash flows is important in valuing corporate securities and in determining the company’s ability to meet its obligations. The ability to meet short-term obligations is generally referred to as liquidity, and the ability to meet long-term obligations is generally referred to as solvency. Cash flow in any given period is not, however, a complete measure of performance for that period because, as shown in Example 1, a company may be obligated to make future cash payments as a result of a transaction that generates positive cash flow in the current period. Profits may provide useful information about cash flows, past and future.
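The arithmetic behind the three solutions can be written out directly; every figure below comes from Example 1.

```python
# Sennett Designs (SD), from Example 1 (all amounts in EUR).
sales_cash = 250_000          # furniture sold for cash in December
cost_of_goods = 150_000       # purchased on credit, payable in January
rent_and_salaries = 20_000    # paid in cash during December

# Profit recognizes the cost of goods sold even though no cash has left yet.
profit_dec = sales_cash - cost_of_goods - rent_and_salaries

# Cash flow counts only cash actually received and paid in December.
cash_flow_dec = sales_cash - rent_and_salaries

# January 2010: identical activity, but the December payable now comes due.
profit_jan = sales_cash - cost_of_goods - rent_and_salaries
cash_flow_jan = sales_cash - rent_and_salaries - cost_of_goods

print(profit_dec, cash_flow_dec, profit_jan, cash_flow_jan)
```

The gap between December's €80,000 profit and €230,000 cash flow is exactly the €150,000 payable that accrual accounting recognizes immediately but cash accounting defers to January.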
If the transaction of Example 1 were repeated month after month, the long-term average monthly cas

#### Flashcard 1446865407244

Tags
#3-1-profit-maximization #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4

Question
Overall, the functions of profit are as follows:
• Spurs [...] and [...]

Answer
innovation; the development of new technology.
Unless the firm is a pu #### Flashcard 1446867766540 Tags #3-1-profit-maximization #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question Overall, the functions of profit are as follows: • Rewards entrepreneurs for [...] to [...] risk taking when pursuing business ventures satisfy consumer demand. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Overall, the functions of profit are as follows: Rewards entrepreneurs for risk taking when pursuing business ventures to satisfy consumer demand. Allocates resources to their most-efficient use; input factors flow from sectors with economic losses to sectors with economic profit, where pr #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS Total variable cost divided by quantity; (TVC ÷ Q) Average total cost (ATC) Total cost divided by quantity; (TC ÷ Q) or (AFC + AVC) Marginal cost (MC) Change in total cost divided by change in quantity; (∆TC ÷ ∆Q) <span>3.1. Profit Maximization In free markets—and even in regulated market economies—profit maximization tends to promote economic welfare and a higher standard of living, and creates wealth for investors. Profit motivates businesses to use resources efficiently and to concentrate on activities in which they have a competitive advantage. Most economists believe that profit maximization promotes allocational efficiency—that resources flow into their highest valued uses. Overall, the functions of profit are as follows: Rewards entrepreneurs for risk taking when pursuing business ventures to satisfy consumer demand. Allocates resources to their most-efficient use; input factors flow from sectors with economic losses to sectors with economic profit, where profit reflects goods most desired by society. Spurs innovation and the development of new technology. 
Stimulates business investment and economic growth. There are three approaches to calculate the point of profit maximization. First, given that profit is the difference between total revenue and total costs, maximum profit occurs at the output level where this difference is the greatest. Second, maximum profit can also be calculated by comparing revenue and cost for each individual unit of output that is produced and sold. A business increases profit through greater sales as long as per-unit revenue exceeds per-unit cost on the next unit of output sold. Profit maximization takes place at the point where the last individual output unit breaks even. Beyond this point, total profit decreases because the per-unit cost is higher than the per-unit revenue from successive output units. A third approach compares the revenue generated by each resource unit with the cost of that unit. Profit contribution occurs when the revenue from an input unit exceeds its cost. The point of profit maximization is reached when resource units no longer contribute to profit. All three approaches yield the same profit-maximizing quantity of output. (These approaches will be explained in greater detail later.) Because profit is the difference between revenue and cost, an understanding of profit maximization requires that we examine both of those components. Revenue comes from the demand for the firm’s products, and cost comes from the acquisition and utilization of the firm’s inputs in the production of those products. 3.1.1. Total, Average, and Marginal Revenue This section briefly examines demand and revenue in preparation for addressing cost. Unless the firm is a pu #### Flashcard 1446870125836 Tags #3-1-profit-maximization #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question Overall, the functions of profit are as follows: • [...] 
to their most-efficient use; input factors flow from sectors with economic losses to sectors with economic profit, where profit reflects goods most desired by society. Allocates resources status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Overall, the functions of profit are as follows: Rewards entrepreneurs for risk taking when pursuing business ventures to satisfy consumer demand. Allocates resources to their most-efficient use; input factors flow from sectors with economic losses to sectors with economic profit, where profit reflects goods most desired by society. Spu #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS Total variable cost divided by quantity; (TVC ÷ Q) Average total cost (ATC) Total cost divided by quantity; (TC ÷ Q) or (AFC + AVC) Marginal cost (MC) Change in total cost divided by change in quantity; (∆TC ÷ ∆Q) 3.1. Profit Maximization In free markets—and even in regulated market economies—profit maximization tends to promote economic welfare and a higher standard of living, and creates wealth for investors. Profit motivates businesses to use resources efficiently and to concentrate on activities in which they have a competitive advantage. Most economists believe that profit maximization promotes allocational efficiency—that resources flow into their highest valued uses. Overall, the functions of profit are as follows: Rewards entrepreneurs for risk taking when pursuing business ventures to satisfy consumer demand. Allocates resources to their most-efficient use; input factors flow from sectors with economic losses to sectors with economic profit, where profit reflects goods most desired by society. Spurs innovation and the development of new technology. Stimulates business investment and economic growth. There are three approaches to calculate the point of profit maximization. 
First, given that profit is the difference between total revenue and total costs, maximum profit occurs at the output level where this difference is the greatest. Second, maximum profit can also be calculated by comparing revenue and cost for each individual unit of output that is produced and sold. A business increases profit through greater sales as long as per-unit revenue exceeds per-unit cost on the next unit of output sold. Profit maximization takes place at the point where the last individual output unit breaks even. Beyond this point, total profit decreases because the per-unit cost is higher than the per-unit revenue from successive output units. A third approach compares the revenue generated by each resource unit with the cost of that unit. Profit contribution occurs when the revenue from an input unit exceeds its cost. The point of profit maximization is reached when resource units no longer contribute to profit. All three approaches yield the same profit-maximizing quantity of output. (These approaches will be explained in greater detail later.) Because profit is the difference between revenue and cost, an understanding of profit maximization requires that we examine both of those components. Revenue comes from the demand for the firm’s products, and cost comes from the acquisition and utilization of the firm’s inputs in the production of those products. 3.1.1. Total, Average, and Marginal Revenue This section briefly examines demand and revenue in preparation for addressing cost. Unless the firm is a pu #### Flashcard 1446872485132 Tags #3-1-profit-maximization #cfa-level-1 #economics #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Question Overall, the functions of profit are as follows: • Stimulates [...] and [...] economic growth. 
status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it ent use; input factors flow from sectors with economic losses to sectors with economic profit, where profit reflects goods most desired by society. Spurs innovation and the development of new technology. Stimulates business investment and economic growth. #### Original toplevel document 3. ANALYSIS OF REVENUE, COSTS, AND PROFITS Total variable cost divided by quantity; (TVC ÷ Q) Average total cost (ATC) Total cost divided by quantity; (TC ÷ Q) or (AFC + AVC) Marginal cost (MC) Change in total cost divided by change in quantity; (∆TC ÷ ∆Q) 3.1. Profit Maximization In free markets—and even in regulated market economies—profit maximization tends to promote economic welfare and a higher standard of living, and creates wealth for investors. Profit motivates businesses to use resources efficiently and to concentrate on activities in which they have a competitive advantage. Most economists believe that profit maximization promotes allocational efficiency—that resources flow into their highest valued uses. Overall, the functions of profit are as follows: Rewards entrepreneurs for risk taking when pursuing business ventures to satisfy consumer demand. Allocates resources to their most-efficient use; input factors flow from sectors with economic losses to sectors with economic profit, where profit reflects goods most desired by society. Spurs innovation and the development of new technology. Stimulates business investment and economic growth. There are three approaches to calculate the point of profit maximization. First, given that profit is the difference between total revenue and total costs, maximum profit occurs at the output level where this difference is the greatest. Second, maximum profit can also be calculated by comparing revenue and cost for each individual unit of output that is produced and sold. 
A business increases profit through greater sales as long as per-unit revenue exceeds per-unit cost on the next unit of output sold. Profit maximization takes place at the point where the last individual output unit breaks even. Beyond this point, total profit decreases because the per-unit cost is higher than the per-unit revenue from successive output units. A third approach compares the revenue generated by each resource unit with the cost of that unit. Profit contribution occurs when the revenue from an input unit exceeds its cost. The point of profit maximization is reached when resource units no longer contribute to profit. All three approaches yield the same profit-maximizing quantity of output. (These approaches will be explained in greater detail later.) Because profit is the difference between revenue and cost, an understanding of profit maximization requires that we examine both of those components. Revenue comes from the demand for the firm’s products, and cost comes from the acquisition and utilization of the firm’s inputs in the production of those products. 3.1.1. Total, Average, and Marginal Revenue This section briefly examines demand and revenue in preparation for addressing cost. Unless the firm is a pu #### Flashcard 1446874844428 Tags #rules-of-formulating-knowledge Question you should make sure that you are not deprived of the said emotional clues at the moment when you need to [...] retrieve a given memory in a real-life situation status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it you should make sure that you are not deprived of the said emotional clues at the moment when you need to retrieve a given memory in a real-life situation #### Original toplevel document 15. Rely on emotional states fies the means. Use objects that evoke very specific and strong emotions: love, sex, war, your late relative, object of your infatuation, Linda Tripp, Nelson Mandela, etc. 
It is well known that emotional states can facilitate recall; however, you should make sure that you are not deprived of the said emotional clues at the moment when you need to retrieve a given memory in a real-life situation Harder item Q: a light and joking conversation A: banter Easier item Q: a light and joking conversation (e.g. Mandela #### Flashcard 1446876679436 Tags Question A company's financial reports include audited financial statements, additional [...] required by regulatory authorities, and any accompanying (unaudited) commentary by management. disclosures status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it A company's financial reports include audited financial statements, additional disclosures required by regulatory authorities, and any accompanying (unaudited) commentary by management. #### Original toplevel document 1. INTRODUCTION to the cost of that capital, to profitably grow its operations, and to generate enough cash to meet obligations and pursue opportunities. Fundamental financial analysis starts with the information found in a company’s financial reports. These financial reports include audited financial statements, additional disclosures required by regulatory authorities, and any accompanying (unaudited) commentary by management. Basic financial statement analysis—as presented in this reading—provides a foundation that enables the analyst to better understand information gathered from research beyond the financi #### Annotation 1446890310924 #cfa-level-1 #economics #has-images #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 This image illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). 
The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. Moreover, in Region 1 (L0 – L1), TP is increasing at an increasing rate, typically because specialization allows laborers to become increasingly productive. In Region 2, however, (L1 – L2), TP is increasing at a decreasing rate because capital is fixed, and labor experiences diminishing marginal returns. The firm would want to avoid Region 3 if at all possible because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized. #### Annotation 1446895553804 #cfa-level-1 #economics #has-images #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. Open it This image illustrates the shape of a typical input–output relationship using labor (L) as the only variable input (all other input factors are held constant). The production function has three distinct regions where both the direction of change and the rate of change in total product (TP or Q, quantity of output) vary as production changes. Regions 1 and 2 have positive changes in TP as labor is added, but the change turns negative in Region 3. 
Moreover, in Region 1 (L0 – L1), TP is increasing at an increasing rate, ty #### Annotation 1446906301708 #cfa-level-1 #economics #has-images #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 The firm would want to avoid Region 3 because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Open it rate, typically because specialization allows laborers to become increasingly productive. In Region 2, however, (L1 – L2), TP is increasing at a decreasing rate because capital is fixed, and labor experiences diminishing marginal returns. The firm would want to avoid Region 3 if at all possible because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized. #### Annotation 1446911020300 #cfa-level-1 #economics #has-images #microeconomics #reading-15-demand-and-supply-analysis-the-firm #section-3-analysis-of-revenue-costs-and-profit #study-session-4 Point A is where TP is maximized. Open it oid Region 3 if at all possible because total product or quantity would be declining rather than increasing with additional input: There is so little capital per unit of labor that additional laborers would possibly “get in each other’s way”. Point A is where TP is maximized. #### Flashcard 1446914690316 Question [...] 
is how much warmth we can have for ourselves, especially when we’re going through a difficult experience Self-compassion status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Self-compassion is how much warmth we can have for ourselves, especially when we’re going through a difficult experience #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446916263180 Question Self-compassion is [...], especially when we’re going through a difficult experience how much warmth we can have for ourselves status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Self-compassion is how much warmth we can have for ourselves, especially when we’re going through a difficult experience #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446918098188 Question [...] is our belief in our ability to do or to learn how to do something. Self-esteem is how much we approve of or value ourselves. It’s often a comparison-based Self-confidence status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Self-confidence is our belief in our ability to do or to learn how to do something. Self-esteem is how much we approve of or value ourselves. It’s often a comparison-based #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446919671052 Question Self-confidence is [...]. Self-esteem is how much we approve of or value ourselves. It’s often a comparison-based our belief in our ability to do or to learn how to do something status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Self-confidence is our belief in our ability to do or to learn how to do something. Self-esteem is how much we approve of or value ourselves. 
It’s often a comparison-based #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446921243916 Question Self-confidence is our belief in our ability to do or to learn how to do something. [...] is how much we approve of or value ourselves. It’s often a comparison-based Self-esteem status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Self-confidence is our belief in our ability to do or to learn how to do something. Self-esteem is how much we approve of or value ourselves. It’s often a comparison-based #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446922816780 Question Self-confidence is our belief in our ability to do or to learn how to do something. Self-esteem is [...]. It’s often a comparison-based how much we approve of or value ourselves status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Self-confidence is our belief in our ability to do or to learn how to do something. Self-esteem is how much we approve of or value ourselves. It’s often a comparison-based #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446924389644 Tags #charisma #myth Question How do you invoke compassion for another person? 1. Imagine their past. What if you had been born in their circumstances, with their family and upbringing? What was it like growing up in their family situation with whatever they experienced as a child? It’s often said that everyone you meet has stories to tell, and that everyone has a few that would break your heart. Consider also that if you had experienced everything they have experienced, perhaps you would have turned out just like they have. 2. Imagine their present. Really try to put yourself in their shoes right now. Imagine what it feels like to be them today. Put yourself in their place, be in their skin, see through their eyes. 
Imagine what they might be feeling right now—all the emotions they might be holding inside. 3. If you really need compassion dynamite, look at them and ask: What if this were their last day alive? You can even imagine their funeral. You’re at their funeral, and you’re asked to say a few words about them. You can also imagine what you’d say to them after they’d already died status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it everyone has a few that would break your heart. Consider also that if you had experienced everything they have experienced, perhaps you would have turned out just like they have. 2. Imagine their present. Really try to put yourself in their shoes right now. Imagine what it feels like to be them today. Put yourself in their place, be in their skin, see through their eyes. Imagine what they might be feeling right now—all the emotions they might be holding inside. 3. If you really need compassion dynamite, look at them and ask: What if this were their last day alive? You can even imagine their funeral. You’re at their funeral, and you’re asked to say a few words about them. You can also imagine what you’d say to them after they’d already died #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446931991820 Tags #charisma Question What is empathy? 
the ability to understand what someone is feeling status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Paul Gilbert, one of the main researchers in the field of compassion, describes the process of accessing compassion as follows: first comes empathy, the ability to understand what someone is feeling, to detect distress; second, sympathy, being emotionally moved by distress; and third, compassion, which arises with the desire to care for the well-being of the distressed person. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446934351116 Tags #charisma Question What is sympathy? being emotionally moved by distress status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Paul Gilbert, one of the main researchers in the field of compassion, describes the process of accessing compassion as follows: first comes empathy, the ability to understand what someone is feeling, to detect distress; second, sympathy, being emotionally moved by distress; and third, compassion, which arises with the desire to care for the well-being of the distressed person. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446938283276 Tags #charisma Question What is compassion? the desire to care for the well-being of the distressed person. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it cribes the process of accessing compassion as follows: first comes empathy, the ability to understand what someone is feeling, to detect distress; second, sympathy, being emotionally moved by distress; and third, compassion, which arises with the desire to care for the well-being of the distressed person. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446940642572 Tags #charisma Question What are the stages to compassion? 1. 
First comes empathy, the ability to understand what someone is feeling, to detect distress; 2. Second, sympathy, being emotionally moved by distress 3. And third, compassion, which arises with the desire to care for the well-being of the distressed person. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Paul Gilbert, one of the main researchers in the field of compassion, describes the process of accessing compassion as follows: first comes empathy, the ability to understand what someone is feeling, to detect distress; second, sympathy, being emotionally moved by distress; and third, compassion, which arises with the desire to care for the well-being of the distressed person. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446945361164 Tags #charisma Question What's a maxim for inducing goodwill towards others? Of all the options open to me right now, which one would bring the most love into this world? status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Remind yourself of these maxims several times a day, and notice the shift this can make in your mind and body. Another saying people often find equally effective: Of all the options open to me right now, which one would bring the most love into this world? #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446947720460 Tags #charisma Question What's an "angelic" visualization for inducing warmth? in any interaction, imagine the person you’re speaking to, and all those around you, as having invisible angel wings/halo. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it >First, a visualization. This one comes from neuroscientist Dr. Privahini Bradoo, a highly charismatic person whose radiating warmth and happiness I’ve long admired. 
I was grateful when she shared one of her secrets with me: in any interaction, imagine the person you’re speaking to, and all those around you, as having invisible angel wings. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446950079756 Tags #charisma Question What is a simple technique to feel goodwill toward someone? (three) One simple but effective way to start is to try to find three things you like about the person you want to feel goodwill toward. Even trivial things like their shoes are tied. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it One simple but effective way to start is to try to find three things you like about the person you want to feel goodwill toward. No matter whom it is you’re talking to, find three things to appreciate or approve of—even if these are as small as “their shoes are shined” or “they were on time.” When you start searc #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446952701196 Tags #charisma Question How would you visualize your own funeral? (for self compassion) 1. Sit or lie down, close your eyes, and set the scene. Where is your funeral being held? What day of the week, what time of day? What is the weather like? See the building where the ceremony is being held. See people arriving. Who’s coming? What are they wearing? Now move into the building and look around inside. Do you see flowers? If so, smell the flowers’ scent heavy on the air. See people coming through the door. What are they thinking? What kind of chairs are they sitting in? What do these chairs feel like? 2. Your funeral starts. Think of the people you care most about or whose opinions matter most to you. What are they thinking? See them stepping up one after another and delivering their eulogy. What are they saying? What regrets do they have for you? Now think: What would you like them to have said? What regrets do you have for yourself? 3. 
See people following your coffin to the cemetery and gathering around your grave. What would you like to see written on your tombstone? 4. Almost everyone, of all ages, genders, and seniority levels, gets a bit teary-eyed by the end. You might feel moved, touched, stirred. Stay with these emotions as much as you can and aim to get comfortable with them. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it ♦ Sit or lie down, close your eyes, and set the scene. Where is your funeral being held? What day of the week, what time of day? What is the weather like? See the building where the ceremony is being held. See people arriving. Who’s coming? What are they wearing? Now move into the building and look around inside. Do you see flowers? If so, smell the flowers’ scent heavy on the air. See people coming through the door. What are they thinking? What kind of chairs are they sitting in? What do these chairs feel like? ♦ Your funeral starts. Think of the people you care most about or whose opinions matter most to you. What are they thinking? See them stepping up one after another and delivering their eulogy. What are they saying? What regrets do they have for you? Now think: What would you like them to have said? What regrets do you have for yourself? ♦ See people following your coffin to the cemetery and gathering around your grave. What would you like to see written on your tombstone? ♦ Almost everyone, of all ages, genders, and seniority levels, gets a bit teary-eyed by the end. You might feel moved, touched, stirred. Stay with these emotions as much as you can and aim to get comfortable with them. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446955322636 Tags #charisma Question What is a technique for inducing self-gratitude? ♦ Start to describe your life as if you were an outside observer, and focus on all the positive aspects you can think of. 
♦ Write about your job—the work you do and the people you work with. Describe your personal relationships and the good things friends and family members would say about you. Mention a few positive things that have happened today and the tasks you have already accomplished. ♦ Take the time to write down this narrative. Just thinking about it won’t be as effective status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Use a third-person lens: For this technique, you’ll need just a few minutes to sit down, a pen, and some paper. ♦ Start to describe your life as if you were an outside observer, and focus on all the positive aspects you can think of. ♦ Write about your job—the work you do and the people you work with. Describe your personal relationships and the good things friends and family members would say about you. Mention a few positive things that have happened today and the tasks you have already accomplished. ♦ Take the time to write down this narrative. Just thinking about it won’t be as effective #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446957681932 Tags #charisma Question How do you maintain positive body language even when annoyed? The next time you find yourself annoyed at some minor thing, remember that letting your mind focus on the annoyance could impair your body language. To counter this, follow the suggestions below: ♦ Sweep through your body from head to toe and find three abilities you approve of. You could be grateful that you have feet and toes that allow you to walk. You might appreciate your ability to read. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Focus on the present: The next time you find yourself annoyed at some minor thing, remember that letting your mind focus on the annoyance could impair your body language. 
To counter this, follow the suggestions below: ♦ Sweep through your body from head to toe and find three abilities you approve of. You could be grateful that you have feet and toes that allow you to walk. You might appreciate your ability to read. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446960303372 Tags #charisma Question What did Napoleon Hill visualize as his counselors? Nineteenth-century author Napoleon Hill would regularly visualize nine famous men as his personal counselors, including Ralph Waldo Emerson, Thomas Edison, Charles Darwin, and Abraham Lincoln. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Nineteenth-century author Napoleon Hill would regularly visualize nine famous men as his personal counselors, including Ralph Waldo Emerson, Thomas Edison, Charles Darwin, and Abraham Lincoln. He wrote: “Every night… I held an imaginary council meeting with this group whom I called my ‘Invisible Counselors.’… I now go to my imaginary counselors with every difficult problem th #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446964235532 Tags #charisma Question How do you induce oxytocin release? A twenty- second hug is enough to send oxytocin coursing through your veins, and that you can achieve the same effect just by imagining the hug. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it One of my favorite neuroscience resources, the Wise Brain Bulletin, suggested that a twenty- second hug is enough to send oxytocin coursing through your veins, and that you can achieve the same effect just by imagining the hug. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446966594828 Tags #charisma Question What are some maxims for accessing serenity in times of distress? A week from now, or a year from now, will any of this matter? This, too, shall pass. 
Look for little miracles unfolding right now. What if you could trust the Universe, even with this? status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it help you access calm and serenity. They run the gamut of tastes and styles, so you may find that some raise your hackles while others strongly resonate: A week from now, or a year from now, will any of this matter? This, too, shall pass. Yes, it will. Look for little miracles unfolding right now. Love the confusion. What if you could trust the Universe, even with this? #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446968954124 Tags #charisma Question How do you induce a confident state? 1. Close your eyes and relax. 2. Remember a past experience when you felt absolutely triumphant—for example, the day you won a contest or an award. 3. Hear the sounds in the room: the murmurs of approval, the swell of applause. 4. See people’s smiles and expressions of warmth and admiration. 5. Feel your feet on the ground and the congratulatory handshakes. 6. Above all, experience your feelings, the warm glow of confidence rising within you status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it ♦ Close your eyes and relax. ♦ Remember a past experience when you felt absolutely triumphant—for example, the day you won a contest or an award. ♦ Hear the sounds in the room: the murmurs of approval, the swell of applause. ♦ See people’s smiles and expressions of warmth and admiration. ♦ Feel your feet on the ground and the congratulatory handshakes. ♦ Above all, experience your feelings, the warm glow of confidence rising within you #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446972886284 Question Compound statements typically span [...] and start with a one-line header ending in a colon, which identifies the type of statement. 
multiple lines status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. #### Original toplevel document 1.5 Control ver, much of the interesting work of computation comes from evaluating expressions. Statements govern the relationship among different expressions in a program and what happens to their results. 1.5.2 Compound Statements In general, <span>Python code is a sequence of statements. A simple statement is a single line that doesn't end in a colon. A compound statement is so called because it is composed of other statements (simple and compound). Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. Together, a header and an indented suite of statements is called a clause. A compound statement consists of one or more clauses: <span> : ... : ... ... We can understand t #### Flashcard 1446974459148 Question Compound statements typically span multiple lines and start with a [...ending with a...], which identifies the type of statement. one-line header ending in a colon status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. #### Original toplevel document 1.5 Control ver, much of the interesting work of computation comes from evaluating expressions. Statements govern the relationship among different expressions in a program and what happens to their results. 1.5.2 Compound Statements In general, <span>Python code is a sequence of statements. A simple statement is a single line that doesn't end in a colon. A compound statement is so called because it is composed of other statements (simple and compound). 
Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. Together, a header and an indented suite of statements is called a clause. A compound statement consists of one or more clauses: <span> : ... : ... ... We can understand t #### Flashcard 1446976032012 Question A compound statement consists of one or more [...]: clauses status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it A compound statement consists of one or more clauses: #### Original toplevel document 1.5 Control ver, much of the interesting work of computation comes from evaluating expressions. Statements govern the relationship among different expressions in a program and what happens to their results. 1.5.2 Compound Statements In general, <span>Python code is a sequence of statements. A simple statement is a single line that doesn't end in a colon. A compound statement is so called because it is composed of other statements (simple and compound). Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. Together, a header and an indented suite of statements is called a clause. A compound statement consists of one or more clauses: <span> : ... : ... ... We can understand t #### Flashcard 1446977604876 Question a header and an indented suite of statements is called a [...]. clause status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it a header and an indented suite of statements is called a clause. #### Original toplevel document 1.5 Control ver, much of the interesting work of computation comes from evaluating expressions. Statements govern the relationship among different expressions in a program and what happens to their results. 1.5.2 Compound Statements In general, <span>Python code is a sequence of statements. 
A simple statement is a single line that doesn't end in a colon. A compound statement is so called because it is composed of other statements (simple and compound). Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. Together, a header and an indented suite of statements is called a clause. A compound statement consists of one or more clauses: <span> : ... : ... ... We can understand t #### Flashcard 1446979177740 Question [two things] is called a clause. a header and an indented suite of statements status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it a header and an indented suite of statements is called a clause. #### Original toplevel document 1.5 Control ver, much of the interesting work of computation comes from evaluating expressions. Statements govern the relationship among different expressions in a program and what happens to their results. 1.5.2 Compound Statements In general, <span>Python code is a sequence of statements. A simple statement is a single line that doesn't end in a colon. A compound statement is so called because it is composed of other statements (simple and compound). Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. Together, a header and an indented suite of statements is called a clause. A compound statement consists of one or more clauses: <span> : ... : ... ... We can understand t #### Flashcard 1446981537036 Question A compound statement is so called because it is [...] 
composed of other statements (simple and compound) status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it A compound statement is so called because it is composed of other statements (simple and compound) #### Original toplevel document 1.5 Control ver, much of the interesting work of computation comes from evaluating expressions. Statements govern the relationship among different expressions in a program and what happens to their results. 1.5.2 Compound Statements In general, <span>Python code is a sequence of statements. A simple statement is a single line that doesn't end in a colon. A compound statement is so called because it is composed of other statements (simple and compound). Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. Together, a header and an indented suite of statements is called a clause. A compound statement consists of one or more clauses: <span> : ... : ... ... We can understand t #### Flashcard 1446983109900 Question A [...] statement is a single line that doesn't end in a colon. simple status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it A simple statement is a single line that doesn't end in a colon. #### Original toplevel document 1.5 Control ver, much of the interesting work of computation comes from evaluating expressions. Statements govern the relationship among different expressions in a program and what happens to their results. 1.5.2 Compound Statements In general, <span>Python code is a sequence of statements. A simple statement is a single line that doesn't end in a colon. A compound statement is so called because it is composed of other statements (simple and compound). Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. 
Together, a header and an indented suite of statements is called a clause. A compound statement consists of one or more clauses: <span> : ... : ... ... We can understand t #### Flashcard 1446984682764 Question A simple statement is a [(literal description in terms of what you write)] single line that doesn't end in a colon. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it A simple statement is a single line that doesn't end in a colon. #### Original toplevel document 1.5 Control ver, much of the interesting work of computation comes from evaluating expressions. Statements govern the relationship among different expressions in a program and what happens to their results. 1.5.2 Compound Statements In general, <span>Python code is a sequence of statements. A simple statement is a single line that doesn't end in a colon. A compound statement is so called because it is composed of other statements (simple and compound). Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. Together, a header and an indented suite of statements is called a clause. A compound statement consists of one or more clauses: <span> : ... : ... ... We can understand t #### Flashcard 1446986255628 Question Python code is a sequence of [...]. statements status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Python code is a sequence of statements. #### Original toplevel document 1.5 Control ver, much of the interesting work of computation comes from evaluating expressions. Statements govern the relationship among different expressions in a program and what happens to their results. 1.5.2 Compound Statements In general, <span>Python code is a sequence of statements. A simple statement is a single line that doesn't end in a colon. 
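The statement terminology in the cards above (simple statement, compound statement, header, suite, clause) can be illustrated with a minimal Python sketch; the variable names here are invented for illustration:

```python
# A simple statement: a single line that doesn't end in a colon.
x = 10

# A compound statement: spans multiple lines. Each clause is a
# one-line header ending in a colon plus an indented suite.
if x > 5:            # header of the first clause
    result = "big"   # indented suite of that clause
else:                # header of the second clause
    result = "small"

print(result)  # -> big
```

The `if` statement above consists of two clauses; the colon at the end of each header is what marks the line as the start of a compound statement.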
A compound statement is so called because it is composed of other statements (simple and compound). Compound statements typically span multiple lines and start with a one-line header ending in a colon, which identifies the type of statement. Together, a header and an indented suite of statements is called a clause. A compound statement consists of one or more clauses: <span> : ... : ... ... We can understand t #### Flashcard 1446988352780 Question [...] ligands are most needed for these unexplored receptors, as they enable both the biochemical characterization of the receptors in vitro and ligand-guided homology model refinement in silico. Surrogate status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Surrogate ligands are most needed for these unexplored receptors, as they enable both the biochemical characterization of the receptors in vitro and ligand-guided homology model refinement #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446989925644 Question Surrogate ligands are most needed for these unexplored receptors, as they enable both [...] and ligand-guided homology model refinement in silico.
status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Surrogate ligands are most needed for these unexplored receptors, as they enable both the biochemical characterization of the receptors in vitro and ligand-guided homology model refinement in silico. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446993071372 Question [...] data is inherently biased toward closest phylogenetic relatives because of the common practice of testing ligand selectivity profiles only on homologous receptors ChEMBL status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it ChEMBL data is inherently biased toward closest phylogenetic relatives because of the common practice of testing ligand selectivity profiles only on homologous receptors #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446994644236 Question ChEMBL data is inherently biased toward [...] because of the common practice of testing ligand selectivity profiles only on homologous receptors closest phylogenetic relatives status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it ChEMBL data is inherently biased toward closest phylogenetic relatives because of the common practice of testing ligand selectivity profiles only on homologous receptors #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446996217100 Question ChEMBL data is inherently biased toward closest phylogenetic relatives because of the common practice of [...]
testing ligand selectivity profiles only on homologous receptors status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it ChEMBL data is inherently biased toward closest phylogenetic relatives because of the common practice of testing ligand selectivity profiles only on homologous receptors #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1446999887116 Tags #gpcr-coinpocket Question The Coinpocket method has residue contacts based on energy rather than distance (T/F) are distance based, not energy based status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it residue contact strengths (1) are distance based, not energy based; (2) represent a coarse-grain approximation of actual interaction energies; (3) have improved signal-to-noise ratios through analysis of multiple ligands and crystallographic confo #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447002246412 Tags #gpcr-coinpocket Question What is the GPCR Pocketome? the set of annotated GPCR binding pockets that have been crystallographically characterized to date status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it Here we sought to further refine the comparisons of GPCR binding sites by analyzing both the persistence and the strength of residue–ligand interactions observed across the GPCR Pocketome—the set of annotated GPCR binding pockets that have been crystallographically characterized to date 16. By enriching the binding-site comparisons with ligand contact strengths, a method we term GPCR–CoINPocket (GPCR contact-informed neighboring pocket), we organized and clu #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447004605708 Tags #gpcr-coinpocket Question How are homologs of GPCRs (and other proteins) traditionally determined?
homologs are traditionally determined by sequence alignment and calculation of amino acid identity or similarity at each position. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it homologs are traditionally determined by sequence alignment and calculation of amino acid identity or similarity at each position. #### Original toplevel document (pdf) cannot see any pdfs #### Annotation 1447007751436 #biochem the interaction energy for a negative charge that is separated from a positive charge by 3 Å in vacuum turns out to be about −500 kJ•mol⁻¹ #### Parent (intermediate) annotation Open it the interaction energy for a negative charge that is separated from a positive charge by 3 Å in vacuum turns out to be about −500 kJ•mol⁻¹ (Figure 1.10A; we shall explain how such a calculation is done in Chapter 6). This result might make it appear that electrostatic interactions are extremely strong (this value is 200 t #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447010110732 Tags #biochem Question When two oppositely charged groups are close to each other, the interaction is called an [...or...] ion pair or a salt bridge, status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it When two oppositely charged groups are close to each other, the interaction is called an ion pair or a salt bridge, #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447012470028 Tags #biochem Question the foot pads of geckos contain millions of tiny hair-like protrusions with flat tips, called [...]. spatulae status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it the foot pads of geckos contain millions of tiny hair-like protrusions with flat tips, called spatulae.
#### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447014042892 Tags #biochem Question [...], which is a measure of the likelihood of a particular arrangement of molecules. entropy status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it entropy, which is a measure of the likelihood of a particular arrangement of molecules. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447015615756 Tags #biochem Question entropy, which is a measure of [...] the likelihood of a particular arrangement of molecules. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it entropy, which is a measure of the likelihood of a particular arrangement of molecules. #### Original toplevel document (pdf) cannot see any pdfs #### Annotation 1447017188620 #biochem the function of a molecule depends on its structure and biological macromolecules can assemble spontaneously into functional structures #### Parent (intermediate) annotation Open it There are two central themes underlying the concepts in this book. The first is that the function of a molecule depends on its structure and that biological macromolecules can assemble spontaneously into functional structures. The second theme is that any biological macromolecule must work together with other molecules to carry out its particular functions in the cell, and this depends on the ability of mole #### Original toplevel document (pdf) cannot see any pdfs #### Annotation 1447019547916 #biochem any biological macromolecule must work together with other molecules to carry out its particular functions in the cell, and this depends on the ability of molecules to recognize each other specifically #### Parent (intermediate) annotation Open it two central themes underlying the concepts in this book.
The first is that the function of a molecule depends on its structure and that biological macromolecules can assemble spontaneously into functional structures. The second theme is that <span>any biological macromolecule must work together with other molecules to carry out its particular functions in the cell, and this depends on the ability of molecules to recognize each other specifically. Clearly, to understand the molecular mechanism of any biological process, we must understand the energy of the physical and chemical interactions that drive the formation of specific s #### Original toplevel document (pdf) cannot see any pdfs #### Annotation 1447021120780 #biochem to understand the molecular mechanism of any biological process, we must understand the energy of the physical and chemical interactions that drive the formation of specific structures and promote molecular recognition #### Parent (intermediate) annotation Open it ures. The second theme is that any biological macromolecule must work together with other molecules to carry out its particular functions in the cell, and this depends on the ability of molecules to recognize each other specifically. Clearly, <span>to understand the molecular mechanism of any biological process, we must understand the energy of the physical and chemical interactions that drive the formation of specific structures and promote molecular recognition<span><body><html> #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447022693644 Tags #biochem Question A typical protein molecule is made from [how many?] amino acids. ~300 status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it A typical protein molecule is made from ~300 amino acids. 
The total number of different sequences possible for proteins of this length is 20^300 ≈ 10^390 #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447025052940 Tags #biochem Question A typical protein molecule is made from ~300 amino acids. The total number of different sequences possible for proteins of this length is [...] 20^300 ≈ 10^390 status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it A typical protein molecule is made from ~300 amino acids. The total number of different sequences possible for proteins of this length is 20^300 ≈ 10^390 #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447027412236 Tags #biochem Question A DNA molecule of this length (4.5 million) corresponds to [...] possible sequences ~10^2,700,000 status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it A DNA molecule of this length (4.5 million) corresponds to ~10^2,700,000 possible sequences #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447028985100 Tags #python #scip Question The '=' symbol is called the [...] in Python (and many other languages) assignment operator status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it The = symbol is called the assignment operator in Python (and many other languages) #### Original toplevel document 1.2 Elements of Programming = and a value to the right: >>> radius = 10 >>> radius 10 >>> 2 * radius 20 Names are also bound via import statements. >>> from math import pi >>> pi * 71 / 223 1.0002380197528042 <span>The = symbol is called the assignment operator in Python (and many other languages). Assignment is our simplest means of abstraction, for it allows us to use simple names to refer to the results of compound operations, such as the area computed above.
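The assignment cards above (the `=` operator binds a name; a name then evaluates to its bound value in the current environment) can be sketched directly in Python, reusing the `radius` example quoted from the source text:

```python
from math import pi

# The = symbol binds a name to a value.
radius = 10

# A name abstracts the result of a compound operation.
area = pi * radius * radius

# A name evaluates to the value bound to it in the current environment.
print(radius)      # -> 10
print(2 * radius)  # -> 20
```

Without the binding of `radius`, the expression `2 * radius` would be meaningless, which is the role the environment plays in giving names their values.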
In this way, co #### Annotation 1447030557964 #python #scip A numeral evaluates to the number it names, #### Parent (intermediate) annotation Open it eated application of the first step brings us to the point where we need to evaluate, not call expressions, but primitive expressions such as numerals (e.g., 2) and names (e.g., add ). We take care of the primitive cases by stipulating that <span>A numeral evaluates to the number it names, A name evaluates to the value associated with that name in the current environment. Notice the important role of an environment in determining the meaning of the symbols in expression #### Original toplevel document 1.2 Elements of Programming rule is applied, and the result of that expression. Viewing evaluation in terms of this tree, we can imagine that the values of the operands percolate upward, starting from the terminal nodes and then combining at higher and higher levels. <span>Next, observe that the repeated application of the first step brings us to the point where we need to evaluate, not call expressions, but primitive expressions such as numerals (e.g., 2) and names (e.g., add ). We take care of the primitive cases by stipulating that A numeral evaluates to the number it names, A name evaluates to the value associated with that name in the current environment. Notice the important role of an environment in determining the meaning of the symbols in expressions. In Python, it is meaningless to speak of the value of an expression such as >>> add(x, 1) without specifying any information about the environment that would provide a m #### Annotation 1447032130828 #python #scip A name evaluates to the value associated with that name in the current environment. #### Parent (intermediate) annotation Open it s to the point where we need to evaluate, not call expressions, but primitive expressions such as numerals (e.g., 2) and names (e.g., add ). 
We take care of the primitive cases by stipulating that A numeral evaluates to the number it names, <span>A name evaluates to the value associated with that name in the current environment. Notice the important role of an environment in determining the meaning of the symbols in expressions<span><body><html> #### Original toplevel document 1.2 Elements of Programming rule is applied, and the result of that expression. Viewing evaluation in terms of this tree, we can imagine that the values of the operands percolate upward, starting from the terminal nodes and then combining at higher and higher levels. <span>Next, observe that the repeated application of the first step brings us to the point where we need to evaluate, not call expressions, but primitive expressions such as numerals (e.g., 2) and names (e.g., add ). We take care of the primitive cases by stipulating that A numeral evaluates to the number it names, A name evaluates to the value associated with that name in the current environment. Notice the important role of an environment in determining the meaning of the symbols in expressions. In Python, it is meaningless to speak of the value of an expression such as >>> add(x, 1) without specifying any information about the environment that would provide a m #### Annotation 1447033703692 #python #sicp Function names are lowercase, with words separated by underscores. #### Parent (intermediate) annotation Open it Function names are lowercase, with words separated by underscores. Descriptive names are encouraged. Function names typically evoke operations applied to arguments by the interpreter (e.g., print , add , square ) or the name of the quantity that result #### Original toplevel document 1.3 Defining New Functions (non-rebellious) Python programmers. A shared set of conventions smooths communication among members of a developer community. As a side effect of following these conventions, you will find that your code becomes more internally consistent. 
<span>Function names are lowercase, with words separated by underscores. Descriptive names are encouraged. Function names typically evoke operations applied to arguments by the interpreter (e.g., print , add , square ) or the name of the quantity that results (e.g., max , abs , sum ). Parameter names are lowercase, with words separated by underscores. Single-word names are preferred. Parameter names should evoke the role of the parameter in the function, not just the kind of argument that is allowed. Single letter parameter names are acceptable when their role is obvious, but avoid "l" (lowercase ell), "O" (capital oh), or "I" (capital i) to avoid confusion with numerals. There are many exceptions to these guidelines, even in the Python standard library. Like the vocabulary of the English language, Python has inherited words from a variety of contribut #### Flashcard 1447041830156 Tags #electromagnetism #physics Question What is an example of a scalar field? the distribution of temperature in a room, contour map status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it So whereas the distribution of temperature in a room is an example of a scalar field, the speed and direction of the flow of a fluid at each point in a stream is an example of a vector field. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447044189452 Tags #electromagnetism #physics Question Give an example of a vector field speed and direction of the flow of a fluid at each point in a stream status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it So whereas the distribution of temperature in a room is an example of a scalar field, the speed and direction of the flow of a fluid at each point in a stream is an example of a vector field. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447046548748 Tags #electromagnetism #physics Question What is a vector field? 
a vector field is a distribution of quantities in space – a field – and these quantities have both magnitude and direction, meaning that they are vectors. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it What’s a vector field? As the name suggests, a vector field is a distribution of quantities in space – a field – and these quantities have both magnitude and direction, meaning that they are vectors. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447048908044 Tags #electromagnetism #physics Question In Gauss’s law, the surface integral is applied to what? In Gauss’s law, the surface integral is applied not to a scalar function (such as the density of a surface) but to a vector field. status measured difficulty not learned 37% [default] 0 #### Parent (intermediate) annotation Open it In Gauss’s law, the surface integral is applied not to a scalar function (such as the density of a surface) but to a vector field. #### Original toplevel document (pdf) cannot see any pdfs #### Flashcard 1447052315916 Tags #electromagnetism #physics Question For individual segments with area density ri and area dAi, the mass of each segment is ri * dAi, and the mass of the entire surface of N segments is given by [...]
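The surface-mass card above sums per-segment contributions ri * dAi over N segments, which in the limit becomes a surface integral. A minimal numerical sketch under invented values (the sphere, its radius, and the uniform density here are hypothetical, chosen only so the sum has a known closed form to check against):

```python
import math

# Approximate the mass of a surface by summing rho_i * dA_i over
# N segments; here a sphere of radius R with uniform area density rho.
R, rho = 2.0, 3.0               # hypothetical radius and density
N = 100_000
total_area = 4 * math.pi * R**2
dA = total_area / N             # equal-area segments
mass = sum(rho * dA for _ in range(N))

# For uniform density the sum should recover rho * total_area.
print(abs(mass - rho * total_area) < 1e-6)  # -> True
```

With a non-uniform density, `rho` would become a per-segment value and the same sum would approximate the surface integral of the density.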
1I/2017 U1 (Oumuamua) is Hot: Imaging, Spectroscopy and Search of Meteor Activity @article{Ye20171I2017U, title={1I/2017 U1 (Oumuamua) is Hot: Imaging, Spectroscopy and Search of Meteor Activity}, author={Quanzhi Ye and Qicheng Zhang and Michael S. P. Kelley and Peter Brown}, journal={arXiv: Earth and Planetary Astrophysics}, year={2017} } • Q. Ye, +1 author P. Brown • Published 7 November 2017 • Physics • arXiv: Earth and Planetary Astrophysics 1I/2017 U1 (Oumuamua), a recently discovered asteroid in a hyperbolic orbit, is likely the first macroscopic object of extrasolar origin identified in the solar system. Here, we present imaging and spectroscopic observations of ‘Oumuamua using the Palomar Hale Telescope as well as a search of meteor activity potentially linked to this object using the Canadian Meteor Orbit Radar. We find that ‘Oumuamua exhibits a moderate spectral gradient of 10% ± 6% (100… New Insights into Interstellar Object 1I/2017 U1 (‘Oumuamua) from SOHO/STEREO Nondetections • Physics • 2019 Object 1I/2017 U1 (Oumuamua) is the first interstellar small body ever discovered in the solar system. By the time of discovery, it had already passed perihelion. To investigate the behavior of On the Anomalous Acceleration of 1I/2017 U1 ‘Oumuamua • Physics The Astrophysical Journal • 2019 We show that the P ~ 8 hr photometric period and the astrometrically measured A_(ng) ~ 2.5 × 10^(−4) cm s^(−2) non-gravitational acceleration (at r ~ 1.4 au) of the interstellar object 1I/2017 Anomalous Sun Flyby of 1I/2017 U1 (Oumuamua) • Physics • 2020 The findings of Micheli et al.
(Nature 2018, 559, 223–226) that 1I/2017 U1 (Oumuamua) showed anomalous orbital accelerations have motivated us to apply an impact model of gravity in search for an 1I/2017 'Oumuamua-like

Interstellar Asteroids as Possible Messengers from Dead Stars
Discovery of the first interstellar asteroid (ISA) - 1I/2017 'Oumuamua - raised a number of questions regarding its origin. Many of them relate to its lack of cometary activity, suggesting refractory

The origin of interstellar asteroidal objects like 1I/2017 U1 • Physics • 2017
With the recently discovered interstellar object 1I/2017U1 (1I/'Oumuamua) we have to realize that the Solar System is not isolated, but part of a larger environment with which we interact. We compare

Spectroscopy and thermal modelling of the first interstellar object 1I/2017 U1 'Oumuamua
During the formation and evolution of the Solar System, significant numbers of cometary and asteroidal bodies were ejected into interstellar space1,2. It is reasonable to expect that the same

Implications for planetary system formation from interstellar object 1I/2017 U1 (Oumuamua)
The recently discovered minor body 1I/2017 U1 (Oumuamua) is the first known object in our Solar System that is not bound by the Sun's gravity. Its hyperbolic orbit (eccentricity greater than unity)

Interstellar Interloper 1I/2017 U1: Observations from the NOT and WIYN Telescopes
We present observations of the interstellar interloper 1I/2017 U1 ('Oumuamua) taken during its 2017 October flyby of Earth. The optical colors B-V = 0.70$\pm$0.06, V-R = 0.45$\pm$0.05, overlap those

Non-gravitational acceleration in the trajectory of 1I/2017 U1 ('Oumuamua)
'Oumuamua—the first known interstellar object to have entered the Solar System—is probably a comet, albeit with unusual dust and chemical properties owing to its origin in a distant solar system.
Col-OSSOS: Colors of the Interstellar Planetesimal 1I/'Oumuamua
The recent discovery by Pan-STARRS1 of 1I/2017 U1 ('Oumuamua), on an unbound and hyperbolic orbit, offers a rare opportunity to explore the planetary formation processes of other stars, and the

References (showing 1–10 of 19)

Kinematics of the Interstellar Vagabond 1I/'Oumuamua (A/2017 U1)
The initial Galactic velocity vector for the recently discovered hyperbolic asteroid 1I/'Oumuamua (A/2017 U1) is calculated for before its encounter with our solar system. The latest orbit (JPL-13)

Detection of the Phoenicids meteor shower in 2014
Abstract An appearance of the Phoenicids meteor shower was predicted in 2014 by using a dust trail simulation of an outburst of 1956. We detected Phoenicids meteors on December 2 through multiple

Dormant comets among the near-Earth object population: a meteor-based survey • Physics • 2016
Dormant comets in the near-Earth object (NEO) population are thought to be involved in the terrestrial accretion of water and organic materials. Identification of dormant comets is difficult as they

When comets get old: A synthesis of comet and meteor observations of the low activity comet 209P/LINEAR
Abstract It is speculated that some weakly active comets may be transitional objects between active and dormant comets. These objects are at a unique stage of the evolution of cometary nuclei, as

Realistic Detectability of Close Interstellar Comets • Physics • 2011
During the planet formation process, billions of comets are created and ejected into interstellar space. The detection and characterization of such interstellar comets (also known as extra-solar

Large Synoptic Survey Telescope: Overview • J. Tyson • Physics, Environmental Science • SPIE Astronomical Telescopes + Instrumentation • 2002
A large wide-field telescope and camera with optical throughput over 200 m2 deg2 -- a factor of 50 beyond what we currently have -- would enable the detection of faint moving or bursting optical

Palomar Optical Spectrum of Hyperbolic Near-Earth Object A/2017 U1
We present optical spectroscopy of the recently discovered hyperbolic near-Earth object A/2017 U1, taken on 25 Oct 2017 at Palomar Observatory. Although our data are at a very low signal-to-noise,

Simultaneous radar and video meteors—I: Metric comparisons • Physics • 2012
Abstract Simultaneous radar and video measurements were made using the Canadian Meteor Orbit Radar (CMOR) and several Gen-III image-intensified CCD cameras to observationally validate metric
# Solving Second Order Differential Equations.

How would you go about solving the following system of ODEs:
\begin{align*}
x''(t) - \frac{2}{y}\,x'(t)\,y'(t) &= 0 \\
y''(t) + \frac{1}{y}\big(x'(t) - y'(t)\big) &= 0
\end{align*}
Any help would be very much appreciated!

- I have managed to reduce it to the single equation $y''(t) + \frac{1}{y}((y'(t))^2 + 1) = 0.$ – Alex Kite Apr 7 '12 at 2:42
- So the first equation gives $\ln x' = 2\ln y+C$, plugging that into the second equation gives $y''(t)+\frac{1}{y}(C'y'(t)^2-y')$. How did you get the $+1$ term? – Alex R. Apr 7 '12 at 2:48

From the first equation we can conclude $\ln x'(t)= 2\ln y +C$, so $x'(t)=C_1\cdot y^2$. Plugging this into the second equation gives:
$$y''(t)+\frac{1}{y}(C_1 \cdot y^2-y'(t))=0$$
Now substitute $y'(t)=v$, where $v$ is a function of the variable $y$, so that $y''(t)=v'_{y}\cdot v$. Hence:
$$v'_{y}\cdot v+\frac{1}{y}(C_1 \cdot y^2-v)=0$$
This equation is equivalent to the first-order nonlinear ODE:
$$v'_y+C_1\cdot y \cdot v^{-1} -\frac{1}{y}=0$$
which can be solved using numerical methods.
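The reduction $x'(t) = C_1 y^2$ can be sanity-checked numerically. A sketch using SciPy, with arbitrary illustrative initial conditions (chosen only so that $y$ stays away from zero; they are not part of the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    # State s = [x, x', y, y'] for the system
    #   x'' = (2/y) x' y',   y'' = -(x' - y') / y
    x, xp, y, yp = s
    return [xp, 2.0 * xp * yp / y, yp, -(xp - yp) / y]

# Illustrative initial conditions: x(0)=0, x'(0)=1, y(0)=1, y'(0)=0.5
sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0, 1.0, 0.5],
                rtol=1e-10, atol=1e-12)

# The first equation integrates to ln x' = 2 ln y + C, i.e. x' = C1 * y^2,
# so the ratio x'/y^2 should stay constant along the trajectory.
ratio = sol.y[1] / sol.y[2] ** 2
print(ratio.max() - ratio.min())
```

The spread of the ratio is at the level of the integrator tolerances, consistent with the first integral derived above.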
# Proof that the thermal interpretation of QM is wrong

Science Advisor Gold Member

## Main Question or Discussion Point

Preface

After a lengthy discussion of the thermal interpretation of quantum physics in https://www.physicsforums.com/threads/the-thermal-interpretation-of-quantum-physics.967116/, now I think I can prove that it is wrong, i.e. that it doesn't solve the measurement problem in the way it claims it does. Since the following is supposed to be a final proof, I don't want it to be lost among many other posts in the thread above. That's why I open a separate thread.

Introduction

Here I want to prove that the thermal interpretation, contrary to its claim, cannot solve the measurement problem. For definiteness I will present the measurement problem in the form of the Schrodinger cat paradox, but it can be presented in other forms as well. I will prove that the Schrodinger cat "paradox" is a true paradox within the thermal interpretation that does not have a solution within that interpretation. Let ##\rho^{\rm (cat)}## be the density matrix describing the cat degrees of freedom. In principle, it is determined by the density matrix ##\rho## of the whole Universe as $$\rho^{\rm (cat)}={\rm Tr}_{\rm no\,cat} \rho$$ where ##{\rm Tr}_{\rm no\,cat}## denotes the trace over all degrees of freedom except those of the cat. Since ##\rho^{\rm (cat)}## describes an open system, its dynamics is very complicated and nonlinear. Since the details of the influence of the environment ("no cat") degrees of freedom on the cat are not known in practice, the evolution of ##\rho^{\rm (cat)}## can in practice be described by stochastic equations. The thermal interpretation conjectures (without an actual proof) that this complicated, nonlinear and effectively stochastic evolution can explain why the superposition of an alive and a dead cat is unstable, so that the system exhibits a fast decay towards either a dead or an alive cat.
Here I prove that this conjecture is wrong. The central idea of my proof is to consider the problem from the point of view of the whole Universe, instead of from the point of view of the cat. Even though the whole Universe is in principle much more complicated than the cat, this actually simplifies the analysis, because it is known that the whole Universe evolves unitarily, with the unitary evolution operator $$U(t)=e^{-iHt}$$ where ##H## is the Hamiltonian of the Universe.

The proof

Let ##\rho(t)## be the density matrix of the whole Universe. In general, it evolves with time according to ##\rho(t)=U(t)\rho(0)U^{\dagger}(t)##. Now suppose that initially ##\rho(0)=\rho_{\rm alive}##, where ##\rho_{\rm alive}## is the state of the Universe with an alive cat. The alive state is stable, i.e. a cat that is initially alive will stay alive for a long time. Hence we can write $$U(t)\rho_{\rm alive}U^{\dagger}(t)=\rho_{\rm alive}(t)$$ where ##\rho_{\rm alive}(t)## is the state of the Universe with a cat alive during a long time. Similarly, if initially ##\rho(0)=\rho_{\rm dead}## then we have a dead cat for a long time, so we can write $$U(t)\rho_{\rm dead}U^{\dagger}(t)=\rho_{\rm dead}(t)$$ But what if initially we have a superposition of a dead and an alive cat? It is certainly possible as an initial condition, but the question is what happens to such a superposition later? Is it stable or unstable? To simplify the analysis we shall assume that the initial superposition is incoherent, i.e. that $$\rho(0)=\frac{1}{2}\rho_{\rm alive}+\frac{1}{2}\rho_{\rm dead}$$ without the interference term. (We shall show later that inclusion of the interference terms does not change the final results.) Hence the linearity of evolution for the whole Universe implies $$U(t)\rho(0)U^{\dagger}(t)=\frac{1}{2}\rho_{\rm alive}(t)+\frac{1}{2}\rho_{\rm dead}(t)$$ This proves that the superposition is stable, i.e. that there is no decay to ##\rho_{\rm alive}(t)## or ##\rho_{\rm dead}(t)##.
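The linearity step above can be illustrated numerically. A minimal sketch with a random finite-dimensional toy "universe" (the dimension, Hamiltonian, and density matrices are illustrative stand-ins, not a model of a cat):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 8  # toy Hilbert-space dimension

# Random Hermitian Hamiltonian and the unitary U(t) = exp(-iHt), hbar = 1
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
U = expm(-1j * H * 0.7)

def random_density(rng, n):
    # A random density matrix: positive semidefinite with unit trace
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = B @ B.conj().T
    return rho / np.trace(rho)

rho_alive = random_density(rng, n)
rho_dead = random_density(rng, n)

# Evolving the 50/50 mixture gives exactly the mixture of the evolved states:
rho0 = 0.5 * rho_alive + 0.5 * rho_dead
lhs = U @ rho0 @ U.conj().T
rhs = 0.5 * U @ rho_alive @ U.conj().T + 0.5 * U @ rho_dead @ U.conj().T
print(np.max(np.abs(lhs - rhs)))  # numerically zero
```

The mixture weights are preserved exactly under unitary evolution, which is the linearity used in the argument.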
Now what about beables in the thermal interpretation? All beables in the thermal interpretation are of the form $$\langle O(t)\rangle = {\rm Tr}O\rho(t)$$ where ##O## are hermitian observables. So if ##O## is a cat observable that describes some actual properties of the cat, we see that the actual property of the cat is $$\langle O(t)\rangle = \frac{ \langle O(t)\rangle_{\rm alive} + \langle O(t)\rangle_{\rm dead}}{2}$$ which is neither ##\langle O(t)\rangle_{\rm alive}\equiv {\rm Tr}O\rho_{\rm alive}(t)## nor ##\langle O(t)\rangle_{\rm dead}\equiv {\rm Tr}O\rho_{\rm dead}(t)##. This proves that beables of the thermal interpretation cannot solve the Schrodinger cat paradox. By a straightforward generalization of this proof, one can see that thermal interpretation cannot resolve the measurement problem of quantum physics in general. Comments Note that the cat beable can also be written as $$\langle O(t)\rangle = {\rm Tr}_{\rm cat}O\rho^{\rm (cat)}(t)$$ where ##\rho^{\rm (cat)}(t)## (given by the first equation in Introduction above) satisfies a nonlinear equation and ##{\rm Tr}_{\rm cat}## denotes tracing over cat degrees of freedom. The thermal interpretation conjectures that this nonlinearity can somehow cause the decay towards an either dead or alive cat. What our proof shows is that this conjecture is not true, which is a consequence of the fact that the Universe as a whole obeys a linear evolution. No matter how complicated and apparently stochastic behavior of a subsystem may be, the unitary evolution of the whole Universe implies that it cannot solve the measurement problem within the thermal interpretation. Finally a note on the ignored interference terms. 
If the initial state of the Universe is a coherent superposition $$\frac{ |{\rm alive}\rangle + |{\rm dead}\rangle }{\sqrt{2}}$$ then the initial ##\rho(0)## has the additional interference term $$\rho_{\rm interf}=\frac{1}{2}|{\rm alive}\rangle\langle {\rm dead}| + \frac{1}{2}|{\rm dead}\rangle\langle {\rm alive}|$$ In principle this contributes to beables via terms of the form $${\rm Tr}O|{\rm alive}\rangle\langle {\rm dead}|$$ However, if ##O## is an observable that distinguishes a dead cat from an alive one, then terms of the above form are negligible. For instance, if the dead cat is distinguished from an alive one by having a closed/open eye, then ##O## can be taken to be the position operator ##x## describing the position of the eyelid, while ##|{\rm alive}\rangle## and ##|{\rm dead}\rangle## are proportional to two different eigenstates of ##x##, in which case it's easy to see that the term above vanishes.

Last edited: microsansfil

## Answers and Replies

A. Neumaier Science Advisor 2019 Award

Finally a note on the ignored interference terms. If the initial state of the Universe is a coherent superposition $$\frac{ |{\rm alive}\rangle + |{\rm dead}\rangle }{\sqrt{2}}$$ then the initial ##\rho(0)## has the additional interference term $$\rho_{\rm interf}=\frac{1}{2}|{\rm alive}\rangle\langle {\rm dead}| + \frac{1}{2}|{\rm dead}\rangle\langle {\rm alive}|$$ In principle this contributes to beables via terms of the form $${\rm Tr}O|{\rm alive}\rangle\langle {\rm dead}|$$ However, if ##O## is an observable that distinguishes a dead cat from an alive one, then terms of the above form are negligible.
For instance, if the dead cat is distinguished from an alive one by having a closed/open eye, then ##O## can be taken to be the position operator ##x## describing the position of the eyelid, while ##|{\rm alive}\rangle## and ##|{\rm dead}\rangle## are proportional to two different eigenstates of ##x##, in which case it's easy to see that the term above vanishes.

In your final note, which is the only part relevant to the problem, you assumed the link between measurement results of O and eigenstates, which is not valid in the thermal interpretation.

Science Advisor Gold Member

In your final note, which is the only part relevant to the problem, you assumed the link between measurement results of O and eigenstates, which is not valid in the thermal interpretation.

That's not essential at all. Alternatively, I can take ##|{\rm dead}\rangle## and ##|{\rm alive}\rangle## to be proportional to two macroscopically different coherent states ##|p,x_1\rangle## and ##|p,x_2\rangle##, in which case my argument that the interference term is negligible applies without having position eigenstates.

But what if initially we have the superposition of a dead and an alive cat? It is certainly possible as an initial condition, but the question is what happens with such a superposition

You are assuming this ridiculous fallacy without any physical justification or evidence. There is no such thing.

Demystifier Science Advisor Gold Member

You are assuming this ridiculous fallacy without any physical justification or evidence. There is no such thing.

If you are suspicious about cats (despite the fact that Schrodinger proved that it is possible if the Schrodinger equation is always true), consider spin in a superposition of up and down. The proof doesn't change.

A. Neumaier Science Advisor 2019 Award

That's not essential at all.
Alternatively, I can take ##|{\rm dead}\rangle## and ##|{\rm alive}\rangle## to be proportional to two macroscopically different coherent states ##|p,x_1\rangle## and ##|p,x_2\rangle##, in which case my argument that the interference term is negligible applies without having position eigenstates.

But then your argument about properties is no longer valid. O is not a quantity you can freely choose in your argument.

stevendaryl Staff Emeritus Science Advisor

You are assuming this ridiculous fallacy without any physical justification or evidence. There is no such thing.

That's harsh, and also unhelpful (I think). It's true that there aren't actual states ##|alive\rangle## and ##|dead\rangle##, but is this oversimplification important for the point @Demystifier is making? If so, can you show how a more careful treatment of cats would lead to a different conclusion? If not, then your remark is unhelpful. What I think a more careful treatment would look like is something like this: Presumably, a macroscopic configuration (a description at the level of cats and cyanide canisters) corresponds to some equivalence class of microscopic states. Some microscopic states are incompatible with there being a live cat. So I assume that for a macroscopic configuration ##c## (a description of the locations, types, shapes, and health of cats and so forth) there is a corresponding projection operator ##\Pi_c## such that if microstate ##|\psi\rangle## is compatible with configuration ##c##, then ##\Pi_c |\psi\rangle = |\psi\rangle##, and if ##|\psi\rangle## is incompatible with ##c##, then ##\Pi_c |\psi\rangle = 0##. Then instead of talking about the states ##|alive\rangle## and ##|dead\rangle##, we can talk about the projection operators.

eloheim, Geofleur and Demystifier

stevendaryl Staff Emeritus Science Advisor

But what if initially we have the superposition of a dead and an alive cat?
It is certainly possible as an initial condition, but the question is what happens to such a superposition later? Is it stable or unstable? To simplify the analysis we shall assume that the initial superposition is incoherent, i.e. that $$\rho(0)=\frac{1}{2}\rho_{\rm alive}+\frac{1}{2}\rho_{\rm dead}$$ without the interference term. (We shall show later that inclusion of the interference terms does not change the final results.) Hence the linearity of evolution for the whole Universe implies $$U(t)\rho(0)U^{\dagger}(t)=\frac{1}{2}\rho_{\rm alive}(t)+\frac{1}{2}\rho_{\rm dead}(t)$$ This proves that the superposition is stable, i.e. that there is no decay to ##\rho_{\rm alive}(t)## or ##\rho_{\rm dead}(t)##.

I don't know about the thermal interpretation, but in some interpretations of quantum mechanics, the density matrix is interpreted to include subjective uncertainty. So being a mix of "alive" and "dead" is compatible with the cat being alive or dead, and you just don't know which (until you peek, to resolve the subjective uncertainty). If you consider the "density matrix of the universe" to be the most complete information there can be about the state of the universe, then I guess that isn't possible.

Science Advisor Gold Member

I don't know about the thermal interpretation, but in some interpretations of quantum mechanics, the density matrix is interpreted to include subjective uncertainty.

That's not the case with the thermal interpretation.

Science Advisor Gold Member

But then your argument about properties is no longer valid. O is not a quantity you can freely choose in your argument.

All my argument requires is that O is a quantity that distinguishes a dead cat from an alive one. This requirement is indeed necessary if one wants the corresponding beable to determine whether the cat is dead or alive.

A.
Neumaier Science Advisor 2019 Award

I don't know about the thermal interpretation, but in some interpretations of quantum mechanics, the density matrix is interpreted to include subjective uncertainty.

In the thermal interpretation, the density operator is fully objective. It encodes true properties, not what we know about them. The latter would be additional uncertainty about the true density operator.

A. Neumaier Science Advisor 2019 Award

All my argument requires is that O is a quantity that distinguishes a dead cat from an alive one. This requirement is indeed necessary if one wants the corresponding beable to determine whether the cat is dead or alive.

But O is a macroscopic pointer reading, not an operator acting on the cat state as you pretend! You also ignore what happens to the additional interference terms during the evolution of the state of the universe. They become very complicated stuff about which you have no information, but just postulate that they should remain zero. This is wishful thinking.

Science Advisor Gold Member

But O is a macroscopic pointer reading, not an operator acting on the cat state as you pretend!

No. ##O## is an operator, ##\langle O\rangle## is a pointer reading.

You also ignore what happens to the additional interference terms during the evolution of the state of the universe. They become very complicated stuff about which you have no information, but just postulate that they should remain zero. This is wishful thinking.

No. I have shown that their contribution is negligible in situations of interest. Note that from $$|{\rm dead}(t)\rangle =U(t) |{\rm dead}(0)\rangle$$ $$|{\rm alive}(t)\rangle =U(t) |{\rm alive}(0)\rangle$$ $$\langle {\rm alive} (0)|{\rm dead} (0)\rangle \ll 1$$ it follows that $$\langle {\rm alive} (t)|{\rm dead} (t) \rangle \ll 1$$ for all ##t##.
The interference terms give rise to contributions of the form $${\rm Tr}O|{\rm alive}(t)\rangle\langle {\rm dead}(t)|=\langle {\rm dead}(t)|O|{\rm alive}(t)\rangle$$ which are small for observables ##O## which distinguish dead from alive. Last edited: A. Neumaier Science Advisor 2019 Award If you are suspicious about cats (despite the fact that Schrodinger proved that it is possible if Schrodinger equation is always true) , consider spin in a superposition up and down. The proof doesn't change. I prefer to discuss this version, since one sees better your hidden assumptions. No. ##O## is an operator, ##\langle O\rangle## is a pointer reading. I meant that you took for O an operator on the system (the spin state), not one on the detector (which you don't model at all)! Thus O is not a pointer. Your spin measures itself! No. The interference terms give rise to contributions of the form $${\rm Tr}O|{\rm alive}(t)\rangle\langle {\rm dead}(t)|=\langle {\rm dead}(t)|O|{\rm alive}(t)\rangle$$ which are small for observables ##O## which distinguish dead from alive. This is only true because you don't consider the detector at all. Nothing in your setting measures anything. Science Advisor Gold Member This is only true because you don't consider the detector at all. Nothing in your setting measures anything. That is simply not true. My ##\rho## describes the whole universe, which includes the detector. My ##O## is any operator that distinguishes one state from the other. If those states are states of the detector (instead of eyelids of the cat), then your objection does not apply. A. Neumaier Science Advisor 2019 Award My ##\rho## describes the whole universe, which includes the detector. No. The devil is in the details! You assumed in your final note that the whole state of the universe is a pure state: If the initial state of the Universe is a coherent superposition But the state of a system containing a detector (which has positive temperature) is never pure. 
Thus your assumption excludes a detector. Your ''proof'' would invalidate the analysis of every bistable quantum system, of which many are realizable experimentally: There are quite a number of papers on optical bistability that show how coarse-grained bistability arises from a quantum model by projecting out irrelevant degrees of freedom. See, e.g., the time-honored papers • P.D. Drummond and D.F. Walls, Quantum theory of optical bistability I. Nonlinear polarisability model, J. Physics A Math. Gen. 13 (1980), 725--741. • M.L. Steyn-Ross and C.W. Gardiner, Quantum theory of excitonic optical bistability, Phys. Rev. A 27 (1983), 310--325. julcab12 and dextercioby DarMM Science Advisor Gold Member In the thermal interpretation, the density operator is fully objective. It encodes true properties, not what we know about them. The latter would be additional uncertainty about the true density operator. Not only in the thermal interpretation, but I think the idea that density matrices reflect uncertainties about the "true" pure state are basically untenable. If they really were uncertainty about the pure state you'd expect ##\mathcal{L}^{1}\left(\mathcal{H}\right)## as the space of mixed states rather than ##Tr\left(\mathcal{H}\right)##. Not to mention the absence of pure states in QFT. dextercioby Science Advisor Gold Member You assumed in your final note that the whole state of the universe is a pure state: But the state of a system containing a detector (which has positive temperature) is never pure. Thus your assumption excludes a detector. My final note is not at all essential for the proof. The proof itself is presented in the subsection called "The proof", where no assumption that the state is pure is used. In fact, in this section I explicitly assume that the state is not pure (which simplifies the calculation), while the only purpose of the final note is to demonstrate that the result is essentially the same even when the state is pure. 
You are barking up the wrong tree. And by the way, even though it is not essential for the argument, note that even a pure state can have a temperature. To see that, consider the state of the form $$\sum_n e^{i\varphi_n}e^{-\beta E_n/2}|n\rangle$$ Thus it is not true that a system with a detector cannot be pure. Science Advisor Gold Member Your ''proof'' would invalidate the analysis of every bistable quantum system, of which many are realizable experimentally: There are quite a number of papers on optical bistability that show how coarse-grained bistability arises from a quantum model by projecting out irrelevant degrees of freedom. See, e.g., the time-honored papers • P.D. Drummond and D.F. Walls, Quantum theory of optical bistability I. Nonlinear polarisability model, J. Physics A Math. Gen. 13 (1980), 725--741. • M.L. Steyn-Ross and C.W. Gardiner, Quantum theory of excitonic optical bistability, Phys. Rev. A 27 (1983), 310--325. Those papers assume that semi-classical approximation can be used. An interpretation of QM must explain why the semi-classical approximation can be used, without assuming it. My proof shows that the thermal interpretation cannot explain that. In other words, I am not saying that those bistable systems don't exist. I am saying that the thermal interpretation cannot explain it. PeroK A. Neumaier Science Advisor 2019 Award My final note is not at all essential for the proof. The proof itself is presented in the subsection called "The proof", where no assumption that the state is pure is used. Instead you make the ridiculous assumption that the state of the universe is the uniform mixture of the states of two universes, one with a dead cat and one with an alive cat. Let ##t_A## and ##t_D## be the times where the universe is in the alive-cat and dead-cat state, respectively, then at no time the universe is in the state you analyze, since this state develops deterministically according to the unitary dynamics. 
Thus "The proof" is completely bogus.

while the only purpose of the final note is to demonstrate that the result is essentially the same even when the state is pure.

At the time of preparation, a Schrödinger cat is in a pure state while the remaining universe is always in a state approximately described by a local equilibrium state (at each time different). This enters the standard analysis of open systems, resulting without any interpretation (pure formal manipulation) in dissipative subsystem behavior. You, however, replace these physical (preparable!) states by weird unphysical states, from which one of course gets only nonsense, ''proving'' that all statistical mechanics is bogus.

And by the way, even though it is not essential for the argument, note that even a pure state can have a temperature. To see that, consider the state of the form $$\sum_n e^{i\varphi_n}e^{-\beta E_n/2}|n\rangle$$ Thus it is not true that a system with a detector cannot be pure.

You thoroughly misunderstand the meaning of temperature in statistical mechanics. It is a Lagrange multiplier in the expression for the density operator, not just a variable multiplying energies. That your pure state depends on such a variable ##\beta## doesn't endow this variable with the physical (measurable) meaning of temperature.

Last edited: A. Neumaier Science Advisor 2019 Award

An interpretation of QM must explain why the semi-classical approximation can be used, without assuming it.

Oh, really? Then how does Bohmian mechanics explain it - I have never seen this before. You cannot invoke decoherence since the decoherence arguments also apply to the thermal interpretation, which you just ''proved'' wrong! Instead I have seen the standard textbook expositions of traditional interpretations make liberal use of semiclassical approximations without any explanation other than that it makes sense to use it.
The Stern-Gerlach experiment only treats the spin by quantum mechanics, everything else is treated semiclassically. Long distance photon experiments use semiclassical paths to analyze the setups. Without assuming semiclassical approximations for all but the relevant core of a quantum system, nothing at all can be done with traditional interpretations. Last edited: A. Neumaier Science Advisor 2019 Award The proof itself is presented in the subsection called "The proof" Thus "The proof" is completely bogus. For a true proof you'd have to assume the following: The universe is at each time in a state ##\rho(t)## following a unitary dynamics . At some preparation time ##t_0##, the reduced density matrix of some spin to be measured is ##\rho^S##. The basic claim of the thermal interpretation is that, if conditions characterizing a measurement situation hold, there is a pointer variable ##X## on the remaining universe such that at some reasonable time ##t>t_0##, the q-expectation of ##X## allows one to read off spin up or down with the Born probabilities determined by ##\rho^S##. To prove the thermal interpretation wrong you need to show that this is impossible, i.e., that such a pointer variable never exists. Your alleged proof is light years away from achieving your goal. Last edited: vanhees71 Science Advisor Gold Member 2019 Award Well, it's hard to prove something which is not clearly formulated. So I can't say whether #1 disproves anything or not, because it's still not clarified what the interpretation of the "thermal interpretation" really is (you only told us what it is not ;-)). The one thing I can say is that of course the SG experiment can be completely described quantum mechanically without recourse to semiclassical approximations for the motion of the atoms. This quantum mechanical description, however, shows of course that the semiclassical approximation is justified very well. 
Anyway, here's a paper using numerics to solve the Schrödinger equation; see, e.g., https://dx.doi.org/10.1103/PhysRevA.71.052106 and https://arxiv.org/abs/quant-ph/0409206

A. Neumaier Science Advisor 2019 Award

the SG experiment can be completely described quantum mechanically without recourse to semiclassical approximations for the motion of the atoms.

It still treats the electromagnetic field and the detector semiclassically. Of course, all this can be justified, without recourse to interpretation, in contrast to what Demystifier assumes. Unless one wants to derive everything from the unitary dynamics of the universe, semiclassical approaches are necessary and usually justified.

Hi Arnold, is this an okay place for me to ask you some clarifying questions on the papers?
## Calculus (3rd Edition)

$$f'(x)=0.$$

Since $f(x)=e^3$ is a constant function ($e^3$ does not depend on $x$), its derivative is $$f'(x)=0.$$
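A quick symbolic check of the constant rule, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(3)         # e^3 is a constant: it does not depend on x

# The derivative of a constant with respect to x is 0.
print(sp.diff(f, x))  # -> 0
```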
A space is *indiscrete* if its only open sets are the empty set and the space itself. Such spaces are commonly called indiscrete, anti-discrete, or codiscrete, and the topology is also called the trivial topology. Intuitively, all points of the space are "lumped together" and cannot be distinguished by topological means. In other words, for any non-empty set $X$, the collection $\tau = \{\emptyset, X\}$ is always a topology on $X$, called the indiscrete topology, and the space $(X, \tau)$ is the indiscrete topological space. Every function *to* a space with the indiscrete topology is continuous, every subset of such a space is sequentially compact, and an indiscrete space is path connected (hence connected), though arc connected only if it is uncountable or has at most a single point.

With the indiscrete topology, every set is compact. By contrast, a space $X$ with the discrete topology is compact if and only if $X$ is finite, and Lindelöf if and only if $X$ is countable; more generally, any finite topological space is compact and any countable topological space is Lindelöf. In fact no infinite set with the discrete topology is compact: one-point sets are open, so the cover by one-point sets has no proper subcover — removing just one element of the cover breaks the cover. The interval $[0,1]$ with its usual topology is compact.

MATH31052 Topology Problems 6: Compactness

1. Prove that if $A$ is a subset of a topological space $X$ with the indiscrete topology, then $A$ is a compact subset.
2. Prove that if $K_1$ and $K_2$ are compact subsets of a topological space $X$, then so is $K_1 \cup K_2$. Hence prove, by induction, that a finite union of compact subsets of $X$ is compact.

In mathematics, general topology (also called point-set topology) is the branch of topology that deals with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. A topological space $(X, \mathcal{T})$ is called metrisable if there exists a metric on $X$ such that the topology $\mathcal{T}$ is induced by this metric. The discrete topology on $X$ is metrisable — it is induced by the discrete metric — and is the finest topology that can be given on a set, i.e., it makes every subset open.

(An aside from the source discussion: in derived categories in the homotopy-category sense, e.g. triangulated categories, directed colimits don't generally exist, so compactness there is a different notion anyway, although in almost all cases the abstract categorical meaning coincides with the concrete topological one.)
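The claim that every subset of an indiscrete space is compact has essentially a one-line proof; here is a sketch in the notation of the problem set above:

```latex
\textbf{Claim.} If $X$ carries the indiscrete topology $\tau = \{\emptyset, X\}$,
then every subset $A \subseteq X$ is compact.

\textbf{Proof sketch.} Let $\{U_i\}_{i \in I}$ be a cover of $A$ by open sets
of $X$. Each $U_i$ is either $\emptyset$ or $X$. If $A = \emptyset$, the empty
subcover suffices. Otherwise some $U_{i_0}$ must equal $X$, since the empty
sets cover nothing, and then $\{U_{i_0}\}$ is a finite subcover. $\blacksquare$
```

The same argument shows the union bound in Problem 2 is not needed here: a single open set always suffices.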
# Does MATLAB take care of sparsity itself?

Suppose I have a 1000×1000 matrix with 70% sparsity, and I perform multiplication and inverse operations on it. While computing, does MATLAB check whether the matrix is sparse and perform operations accordingly (e.g. skipping multiplications when one of the elements is zero), or do I need to say so explicitly? What are the possible ways to do so?

Suppose I have a sparse matrix A and I want its inverse. Usually I would call inv(A). Since the matrix is sparse, I might instead use the following operations:

sparse_A = sparse(A);
inv_sparse_A = inv(sparse_A);
inv_A = full(inv_sparse_A);

The inverse of a sparse matrix is usually dense (try inverting a tridiagonal matrix sometime...), so forget about hoping that inv(sparse(something)) is sparse. If all you want is to solve a sparse linear system, it is better to use lu() instead (or chol() if the matrix is symmetric positive definite); it's about as convenient as having an inverse, without needing to worry about fill-in.
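The underlying point — that a sparse storage format lets the computation touch only the stored nonzeros — can be sketched in a few lines of plain Python (not MATLAB; the dict-of-keys layout below is an illustration of the idea, not MATLAB's actual internal format):

```python
def sparse_matvec(entries, x, n):
    """Multiply an n x n sparse matrix by the vector x.

    entries: dict mapping (row, col) -> nonzero value; zero entries are
    simply absent, so the loop below does O(nnz) work, not O(n^2).
    """
    y = [0.0] * n
    for (i, j), v in entries.items():
        y[i] += v * x[j]
    return y

# 3x3 matrix with only three nonzeros.
A = {(0, 0): 2.0, (1, 2): -1.0, (2, 1): 4.0}
x = [1.0, 2.0, 3.0]
print(sparse_matvec(A, x, 3))  # [2.0, -3.0, 8.0]
```

This is exactly why converting with sparse() helps in MATLAB: operations on the sparse type iterate over stored nonzeros, whereas a dense matrix makes every operation pay for the zeros too.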
# Data Analysis using Regression

#### Machine Learning (ML) regression

Reading time: 25 minutes

Linear and logistic regression are usually the first algorithms people learn in data science. Because of their popularity, plenty of practitioners end up believing that they are the only forms of regression, and those who are slightly more experienced often think they are the most important among all types of regression analysis. In reality, there are countless types of regression that can be performed. Each form has its own significance and specific conditions under which it is best applied. In this article, I explain the most commonly used kinds of regression in data science. Through this article, I also hope that readers develop a sense of the breadth of regression, rather than simply applying linear/logistic regression to every problem they come across and hoping it will just fit! Furthermore, if you are new to data science and searching for a place to start your journey, a data science course covering the core topics of Python, statistics, and predictive modeling is as good a place as any to begin.

### What is Regression Analysis?

Regression analysis is a predictive, parametric modeling technique that estimates the relationship between a dependent (target) variable and one or more independent variables (predictors). This technique is used for forecasting, time-series modeling, and finding causal-effect relationships between variables. For instance, the relationship between reckless driving and the number of road accidents caused by a driver is best studied through regression. Regression analysis is an important tool for modeling and analyzing data.
Here, we fit a curve/line to the data points in such a way that the differences between the data points and the curve or line are minimized. I'll explain this in more detail in the coming sections.

### Sample Regression Line

This is how a simple linear regression line looks:

### Why do we use Regression Analysis?

As mentioned above, regression analysis estimates the relationship between two or more variables. Let's understand this with a simple example: suppose you want to estimate the growth in sales of a company based on current economic conditions. You have recent company data showing that sales growth tracks a fixed multiple of the growth in the economy. Using this insight, we can predict the company's future sales based on current and past information.

There are many benefits of using regression analysis, including the following:

1. It indicates the significant relationships between the dependent variable and the independent variables.
2. It indicates the strength of the impact of multiple independent variables on a dependent variable.

Regression analysis also allows us to compare the effects of variables measured on different scales, for example the effect of price changes versus the number of promotional activities. These benefits help market researchers, data analysts, and data scientists to evaluate and select the best set of variables to use for building predictive models.

### What types of regression methods do we have?

There are various kinds of regression techniques available for making predictions. These techniques are mostly determined by three dimensions: the number of independent variables, the type of dependent variable, and the shape of the regression line. We'll examine them in detail in the following sections.
For the inventive ones, you can even come up with new regressions, if you want to use a combination of the parameters above that people haven't used before. But before you start, let us understand the most commonly used regressions:

1. Linear Regression
2. Logistic Regression
3. Polynomial Regression
4. Stepwise Regression
5. Ridge Regression
6. Lasso Regression
7. Elastic Net Regression

### Example of a Simple Linear Regression Model:

Problem Statement : In this example we have a sample dataset of 15 human weights and waist sizes, and we are predicting the general trend between them.

#### Code :

# import the libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import linear_model

# load the dataset
data = pd.read_csv('Book1.csv')

Link to download the dataset : Click here

# inspect the loaded data
data

# scatter plot of the given dataset
# (note: the column name 'Wieght_kg' is spelled as it appears in the CSV)
data.plot(kind='scatter', x='Waist_cm', y='Wieght_kg')

# correlation between the two columns
data.corr()

# putting each column into its own DataFrame variable
Waist = pd.DataFrame(data['Waist_cm'])
Weight = pd.DataFrame(data['Wieght_kg'])

# building the linear model for this dataset
linear = linear_model.LinearRegression()
model = linear.fit(Waist, Weight)

# coefficient and intercept of the fitted line
model.coef_
model.intercept_

coeff : array([[1.13470708]])
intercept: array([[-29.62009537]])

# R^2 score of the fitted model
model.score(Waist, Weight)

# predicting values with the trained model
Waist_new = [82, 94, 95]
Waist_new = pd.DataFrame(Waist_new)
Wieght_new = model.predict(Waist_new)
Wieght_new

# converting the predicted values into a DataFrame and concatenating the waist and weight values
Wieght_new = pd.DataFrame(Wieght_new)
df = pd.concat([Waist_new, Wieght_new], axis=1, keys=['Waist_new', 'Wieght_predicted'])

# plotting the data with the regression line
data.plot(kind='scatter', x='Waist_cm', y='Wieght_kg')

# plotting the regression line
plt.plot(Waist, model.predict(Waist), color='green', linewidth=2)

# plotting the predicted values
plt.plot(Waist_new, Wieght_new, color='black', linewidth=4)

The green line is the best-fit line, which is unique to each dataset: it is chosen so that the total difference between the data points and the line is minimized.

### Time Complexity of Regression Models :

Let n be the number of training examples and W the number of weights, and suppose each weight is searched over m possible values (a brute-force grid search over the weight space, where each weight iterates through its m candidate values). Then every one of the m^W weight combinations is evaluated on all n examples, so the time complexity of this naive search is O(m^W · n).

### Applications of regression models :

Regression has many kinds of applications, of which these are the most common in data science:

1. Predictive analysis: the most prominent application, used in business growth and demand analysis. It helps industries predict the demand for an item in the market.
2. Operational efficiency: regression can also be used to optimize business processes.

### How to choose the correct regression model?

Life is generally simple when you know only a couple of techniques. One training institution I am aware of tells its students: if the outcome is continuous, apply linear regression; if it is binary, use logistic regression! However, the more options available to us, the more difficult it becomes to pick the right one. A similar situation occurs with regression models. Among the various kinds of regression models, it is important to choose the most suitable technique based on the types of independent and dependent variables, the dimensionality of the data, and other fundamental characteristics of the data. The following are the key factors to consider when choosing the correct regression model:

1.
Data exploration is an unavoidable part of building a predictive model. It should be your first step before selecting a model: identify the relationships among variables and their impact. 2. To compare the goodness of fit of different models, we can analyze various metrics such as the statistical significance of parameters, R-squared, adjusted R-squared, AIC, BIC, and the error term. Another is Mallow's Cp criterion, which checks for possible bias in your model by comparing it with all possible submodels (or a careful selection of them). 3. Cross-validation is the best way to evaluate models used for prediction. Here you divide your dataset into two groups (train and validate). A simple mean squared difference between the observed and predicted values gives you a measure of the prediction accuracy. 4. If your dataset has multiple confounding variables, you should not pick an automatic model-selection method, because you would not want to put all of these in a model at the same time. 5. It will also depend on your objective. A less powerful model may be easier to implement compared with a highly statistically significant one. 6. Regression regularization methods (Lasso, Ridge, and Elastic Net) work well in the case of high dimensionality and multicollinearity among the variables in the dataset.

#### priyansh gupta

An Aspiring Data Scientist.
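The best-fit line and the train/validate idea from the list above can be sketched in plain Python, without sklearn. The tiny waist/weight dataset here is made up for illustration (it is not the article's Book1.csv):

```python
def fit_line(xs, ys):
    # Closed-form least-squares fit for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def mse(xs, ys, a, b):
    # Mean squared error of predictions a*x + b against observed y.
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Made-up waist (cm) / weight (kg) pairs, split into train and validation.
train_x, train_y = [70, 80, 90, 100], [55, 65, 75, 85]
valid_x, valid_y = [75, 95], [60, 80]

a, b = fit_line(train_x, train_y)
print(a, b)                          # slope and intercept of the best-fit line
print(mse(valid_x, valid_y, a, b))   # held-out error estimates accuracy
```

Here the made-up data happens to lie exactly on a line, so the held-out error is zero; on real data the validation MSE is the number you would compare across candidate models.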
# 24.5: Analysis 1. Which of your wires was the best conductor? 2. Which of your wires has the higher specific heat capacity?  What is your evidence? 3. Which container (Glass Beaker, Metal Can, or Styrofoam Cup) was the best conductor? 4. Which container (Glass Beaker, Metal Can, or Styrofoam Cup) was the best insulator?
• Research Report •

### Responses of morphological structure and dry matter allocation of spring wheat to drought stress at different developmental stages in the irrigation district of Ningxia.

WANG Chen1, WANG Lian-xi1*, MA Guo-fei2,3, ZHANG Xiao-yu2,3, LI Qi1

1. (1School of Applied Meteorology, Nanjing University of Information Science & Technology, Nanjing 210044, China; 2Key Laboratory for Meteorological Disaster Monitoring and Early Warning and Risk Management of Characteristic Agriculture in Arid Regions, Yinchuan 750002, China; 3Ningxia Institute of Meteorological Science, Yinchuan 750002, China).
• Online: 2019-07-10 Published: 2019-07-10

Abstract: We examined the responses of morphological structure and dry matter distribution of spring wheat to drought stress at different developmental stages in 2018, using Yongchun No. 4, the main cultivated variety in Ningxia, as the experimental material. There were six different irrigation treatments. The results showed that the treatment with no irrigation at any growth stage shortened the whole growth period by 11 days. Drought at the tillering and jointing stages shortened the lower stem internodes below the spike, reduced leaf area and plant height, and advanced leaf yellowing. Water shortage at the tillering stage had the strongest effect on leaf area, while water shortage at the jointing stage had the greatest effect on plant height. Drought affected the accumulation and distribution of dry matter in spring wheat. When spring wheat was under drought stress at the tillering stage, the proportion of leaves in the total dry matter decreased by 6.6%, and the proportion of leaf sheath in the total dry matter increased by 9.0%.
Compared with the control, under no irrigation at any growth stage the spike length, spike number, grain number per spike, and thousand-kernel weight were reduced by 5.9%, 43.4%, 9.6%, and 7.6%, respectively. Mild drought at the tillering stage significantly decreased the spike number, but increased the number of grains per spike and the thousand-kernel weight by 9.2% and 4.7%, respectively. Drought from flowering to the filling stage had the greatest effect on the thousand-kernel weight, with a decrease of 17.4%.
# The stationary tower and supercompactness I'm currently reading through Larson's book on the stationary tower and a point confused me. Let $\delta$ be Woodin and let $j:V\to M\subseteq V[G]$ be an elementary embedding associated with the (full) stationary tower $\mathbb P_{<\delta}$. Then $V[G]\models {^{<\delta}}M\subseteq M$. This seems close to $\operatorname{crit}j$ being $\theta$-supercompact for every $\theta<\delta$, in $V[G]$, with the exception that the generic extension might contain more subsets of $\operatorname{crit}j$, so that the measure on $\operatorname{crit}j$ isn't total. Now say there is a proper class of Woodins, and let $j:V\to V[G]$ be an elementary embedding associated with the stationary tower $\mathbb P_\infty$. Again, is $\operatorname{crit}j$ now 'close' to being supercompact in $V[G]$? Is this the same thing as $\operatorname{crit}j$ being generically supercompact? Since we can choose $\operatorname{crit}j$ to be any inaccessible, does this then mean that a proper class of Woodins implies a proper class of generically supercompacts? • This is probably weaker than it sounds. By the same arguments, one Woodin cardinal is enough in order to make $\omega_1$ "generically almost-huge". It is not as useful as it sounds, since the forcing that adds this embedding is very wild. Nov 20 '16 at 15:20 • @Yair: So the generic multiverse of a model with a proper class of Woodins is "Where the Wild Things Are"? Nov 20 '16 at 21:40 • I would call the $\mathbb P_\infty$ embedding "virtually Reinhardt"! Apr 5 '18 at 11:53 Apparently this has a name. To any large cardinal notion, there is a virtual variant, in which the elementary embeddings in question lie in some generic extension. Reformulating my scenario, I pointed out that a Woodin $\delta$ consistency-wise implies a virtually $\theta$-supercompact for every $\theta<\delta$. 
Using the countable tower instead, as Yair pointed out, we also get the result that a Woodin consistency-wise implies that $\omega_1$ is virtually almost huge. But as is pointed out in Gitman's slides, all virtual large cardinals lie consistency-wise below $0^\sharp$, so this observation (that Woodins lie above virtuals) is pretty moot.
# Crooked Maps in Finite Fields

Abstract : We consider the maps $f:\mathbb{F}_{2^n} \to \mathbb{F}_{2^n}$ with the property that the set $\{ f(x+a)+ f(x): x \in \mathbb{F}_{2^n}\}$ is a hyperplane or a complement of a hyperplane for every $a \in \mathbb{F}_{2^n}^*$. The main goal of the talk is to show that almost all maps $f(x) = \sum_{b \in B} c_b (x+b)^d$, where $B \subset \mathbb{F}_{2^n}$ and $\sum_{b \in B} c_b \neq 0$, are not of that type. In particular, the only such power maps have exponents $2^i+2^j$ with $\gcd(n, i-j)=1$. We also give a geometrical characterization of these maps.

Keywords :
Document type : Conference papers
Domain :
Cited literature [15 references]
https://hal.inria.fr/hal-01184348
Contributor : Coordination Episciences Iam
Submitted on : Friday, August 14, 2015 - 11:36:40 AM
Last modification on : Monday, May 30, 2022 - 5:36:02 PM
Long-term archiving on: : Sunday, November 15, 2015 - 10:57:39 AM

### File

dmAE0133.pdf
Publisher files allowed on an open archive

### Citation

Gohar Kyureghyan. Crooked Maps in Finite Fields. 2005 European Conference on Combinatorics, Graph Theory and Applications (EuroComb '05), 2005, Berlin, Germany. pp.167-170, ⟨10.46298/dmtcs.3392⟩. ⟨hal-01184348⟩