Category: { Statistics } Category: { Math } Summary: Bayes' Theorem is stated as $$ P(A\mid B) = \frac{P(B \mid A) P(A)}{P(B)} $$ $P(A\mid B)$: posterior probability of A given B; $P(B\mid A)$: likelihood of B given A; $P(A)$: marginal probability of A. There is a nice tree diagram for Bayes' theorem on Wikipedia.
Category: { Statistics } Summary: Definition: given two series of data $X$ and $Y$, take a pair of observations $(x_i, y_i)$ and $(x_j, y_j)$, where we assume $i<j$. concordant: $x_i < x_j$ and $y_i < y_j$, or $x_i > x_j$ and $y_i > y_j$; the count of such pairs is denoted $C$. discordant: $x_i < x_j$ and $y_i > y_j$, or $x_i > x_j$ and $y_i < y_j$; the count is denoted $D$. neither concordant nor discordant: whenever a tie (equality) occurs. Kendall's tau is defined as $$\tau = \frac{C - D}{\text{all possible pairs of comparison}} = \frac{C - D}{n(n-1)/2}$$
Category: { Statistics } Summary: Jackknife resampling method.
Category: { Math } Summary: Variance, also known as the second central moment, is a measurement of the spread.
Category: { Statistics } Summary: Gamma Distribution PDF: $$ \frac{\beta^\alpha x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)} $$
Category: { Statistics } Summary: Cauchy-Lorentz Distribution: the ratio of two independent normally distributed random variables with mean zero. Source: https://en.wikipedia.org/wiki/Cauchy_distribution The Lorentz distribution is frequently used in physics. PDF: $$ \frac{1}{\pi\gamma} \left( \frac{\gamma^2}{ (x-x_0)^2 + \gamma^2} \right) $$ The median and mode of the Cauchy-Lorentz distribution are always $x_0$. $\gamma$ is the half-width at half-maximum (HWHM).
Category: { Statistics } Summary: By generalizing the Bernoulli distribution to $k$ states, we get a categorical distribution. The sample space is $\{s_1, s_2, \cdots, s_k\}$. The corresponding probabilities for each state are $\{p_1, p_2, \cdots, p_k\}$ with the constraint $\sum_{i=1}^k p_i = 1$.
Category: { Statistics } Summary: The number of successes in $n$ independent trials, where each trial has a success probability of $p$.
PMF: $$ C_n^k p^k (1-p)^{n-k} $$
Category: { Statistics } Summary: Beta Distribution.
Category: { Statistics } Summary: Two categories with probability $p$ and $1-p$ respectively. For each experiment, the sample space is $\{A, B\}$. The probability for state $A$ is given by $p$ and the probability for state $B$ is given by $1-p$. The Bernoulli distribution describes the probability of $K$ results with state $s$ being $s=A$ and $N-K$ results with state $s$ being $B$ after $N$ experiments, $$ P\left( \sum_i^N s_i = K \right) = C_N^K p^K (1 - p)^{N-K}. $$
Category: { Statistics } Summary: Arcsine Distribution. The PDF is $$ \frac{1}{\pi\sqrt{x(1-x)}} $$ for $x\in [0,1]$. It can also be generalized to $$ \frac{1}{\pi\sqrt{(x-a)(b-x)}} $$ for $x\in [a,b]$.
Category: { Statistics } Summary: In a multiple comparisons problem, we deal with multiple statistical tests simultaneously. Examples: We see such problems a lot in IT companies. Suppose we have a website and would like to test whether a new design of a button leads to changes in five different KPIs (e.g., view-to-click rate, click-to-book rate, …). In multi-horizon time series forecasting, we sometimes choose to forecast multiple future data points in one shot. To properly find the confidence intervals of our predictions, one approach is the so-called conformal prediction method. This becomes a multiple comparisons problem because we have to tell whether we can reject at least one true null hypothesis.
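The danger in the multiple comparisons problem above can be quantified: with $m$ independent tests, each run at significance level $\alpha$, the probability of at least one false rejection is $1-(1-\alpha)^m$. A minimal sketch (the five-test scenario mirrors the five-KPI website example above):

```python
# Family-wise error rate (FWER) for m independent tests at level alpha:
# P(at least one false rejection among m true nulls) = 1 - (1 - alpha)^m
def family_wise_error_rate(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

# Testing five KPIs at alpha = 0.05, as in the button-redesign example above:
print(round(family_wise_error_rate(5), 4))  # 0.2262, far above the nominal 5%
```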
Category: { Statistics } Summary: The Bonferroni correction is very useful in multiple comparison problems.
Category: { Math } Summary: The conditional probability table is also called a CPT.
Summary: $$ \mathrm{NML} = \frac{ p(y\mid \hat \theta(y)) }{ \int_X p( x\mid \hat \theta (x) ) \, dx } $$
Category: { Statistics } Summary: MDL is a measure of how well a model compresses data by minimizing the combined cost of the description of the model and the misfit.
Summary: Description of Data. The measurement of complexity is based on the observation that the compressibility of data doesn't depend much on the "language" used to describe the compression process. This makes it possible for us to find a universal language, such as a universal computer language, to quantify the compressibility of the data. One intuitive idea is to use a programming language to describe the data. If we have the sequence of data 0, 1, 2, 3, …, 9999, it takes a lot of space to show the complete sequence. However, our math intuition tells us that this is nothing but a list of consecutive numbers from 0 to 9999.
Summary: FIA is a method to describe the minimum description length ( [[MDL]] Minimum Description Length: MDL is a measure of how well a model compresses data by minimizing the combined cost of the description of the model and the misfit. ) of models, $$ \mathrm{FIA} = -\ln p(y \mid \hat\theta) + \frac{k}{2} \ln \frac{n}{2\pi} + \ln \int_\Theta \sqrt{ \det I(\theta) } \, d\theta $$ where $I(\theta)$ is the Fisher information matrix of sample size 1, $$ I_{i,j}(\theta) = E\left( \frac{\partial \ln p(y\mid \theta)}{\partial \theta_i}\frac{ \partial \ln p (y \mid \theta) }{ \partial \theta_j } \right). $$
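The 0-to-9999 example above can be checked directly. Counting characters is only a crude stand-in for a real description-length measure, but it shows the gap between the literal data and a short program that regenerates it (a rough sketch, not a formal MDL computation):

```python
# Literal description: the full comma-separated sequence 0,1,2,...,9999
literal = ",".join(str(n) for n in range(10000))

# Programmatic description: source code that regenerates the same string
program = '",".join(str(n) for n in range(10000))'

print(len(literal))   # 48889 characters
print(len(program))   # a few dozen characters, over a thousand times shorter
```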
How Infinite Series Reveal the Unity of Mathematics | Quanta Magazine

Maggie Chiang for Quanta Magazine

For sheer brilliance, it was hard to beat John von Neumann. An architect of the modern computer and inventor of game theory, von Neumann was legendary, above all, for his lightning-fast mental calculations. The story goes that one day somebody challenged him with a puzzle. Two bicyclists start at opposite ends of a road 20 miles long. Each cyclist travels toward the other at 10 miles per hour. When they begin, a fly sitting on the front wheel of one of the bikes takes off and races at 15 miles per hour toward the other bike. As soon as it gets there, it instantly turns around and zips back toward the first bike, then back to the second, and so on. It keeps flying back and forth until it's finally squished between their front tires when the bikes collide. How far did the fly travel, in total, before it was squished?

It sounds hard. The fly's back-and-forth journey consists of infinitely many parts, each shorter than the one preceding it. Adding them up seems like a daunting task. But the problem becomes easy if you think about the bicyclists, not the fly. On a road that's 20 miles long, two cyclists approaching each other at 10 miles per hour will meet in the middle after 1 hour. And during that hour, no matter what path the fly takes, it must have traveled 15 miles, since it was going 15 miles an hour. When von Neumann heard the puzzle, he instantly replied, "15 miles." His disappointed questioner said, "Oh, you saw the trick." "What trick?" said von Neumann. "I just summed the infinite series."

Infinite series — the sum of infinitely many numbers, variables or functions that follow a certain rule — are bit players in the great drama of calculus. While derivatives and integrals rightly steal the show, infinite series modestly stand off to the side. When they do make an appearance, it's near the end of the course, as everyone's dragging themselves across the finish line.
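Von Neumann's series can also be brute-forced. Here is a short sketch that simulates the fly's journey leg by leg under the puzzle's numbers (20-mile road, bikes at 10 mph, fly at 15 mph):

```python
# Sum the fly's legs one at a time until the bikes (effectively) collide.
gap = 20.0         # miles between the fly and the oncoming bike
bike_speed = 10.0  # mph, each bicycle
fly_speed = 15.0   # mph
total = 0.0

while gap > 1e-12:
    # The fly and the oncoming bike close the gap at their combined speed.
    t = gap / (fly_speed + bike_speed)
    total += fly_speed * t
    # Both bikes kept moving during this leg, so the separation shrinks:
    gap -= 2 * bike_speed * t

print(round(total, 6))  # 15.0 miles, von Neumann's answer
```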
So why study them? Infinite series are helpful for finding approximate solutions to difficult problems, and for illustrating subtle points of mathematical rigor. But unless you’re an aspiring scientist, that’s all a big yawn. Plus, infinite series are often presented without any real-world applications. The few that do appear — annuities, mortgages, the design of chemotherapy regimens — can seem remote to a teenage audience. The most compelling reason for learning about infinite series (or so I tell my students) is that they’re stunning connectors. They reveal ties between different areas of mathematics, unexpected links between everything that came before. It’s only when you get to this part of calculus that the true structure of math — all of math — finally starts to emerge. Before I explain, let’s look at another puzzle involving an infinite series. Solving it step by step will clarify how von Neumann solved the fly problem, and it will set the stage for thinking about infinite series more broadly. Suppose you want to buy a fancy hat from a street vendor. He’s asking $24. “How about $12?” you say. “Let’s split the difference,” he replies, “$18.” Often that settles it. Splitting the difference seems reasonable, but not for you, because you’ve read the same negotiation manual, “The Art of Infinite Haggling.” You counter with your own offer to split the difference, except now it’s between $12 and the last number on the table, $18. “So how about it?” you say, “$15 and it’s a deal.” “Oh no, my friend, let’s split the difference again, $16.50,” says the vendor. This goes on ad absurdum until you converge on the same price. What is that ultimate price? The answer is the sum of an infinite series. 
To see what it is, observe that the successive offers follow an orderly pattern:

24 (his asking price)
12 = 24 − 12 (your first offer)
18 = 24 − 12 + 6 (splitting the difference between 12 and 24)
15 = 24 − 12 + 6 − 3 (splitting it between 12 and 18)

The key is that the numbers on the left side of the equal sign are built up systematically from the ever-lengthening series of numbers on the right. Each number appearing in the sequence (24, −12, 6, −3…) is half the number that precedes it, but with the opposite sign. So in the limit, the price P that you and the vendor will agree to is

P = 24 − 12 + 6 − 3 + …

where the three dots mean the series continues forever. Rather than trying to wrap our minds around such an infinitely long expression, we can perform a cunning trick that makes the problem easy. It allows us to cancel out that bewilderingly infinite collection of terms, leaving us with something much simpler to calculate. Specifically, let's double P. That would also double all the numbers on the right. Thus, 2P = 48 − 24 + 12 − 6 + …. How does this help? Observe that the infinite chain of terms in 2P is almost the same as that in P itself, except that we have a new leading number (48), and all the plus and minus signs for our original numbers are reversed. So if we add the series for P to the series for 2P, the 24s and the 12s and everything else will cancel out in pairs, except for the 48, which has no counterpart to cancel it. So 2P + P = 48, meaning 3P = 48 and therefore P = $16. That's what you'd pay for the hat after haggling forever. The problem of the fly and the two bicycles follows a similar mathematical pattern. With a bit of effort, you could deduce that each leg of the fly's back-and-forth journey is one-fifth as long as the previous leg. Von Neumann would have found it child's play to sum the resulting "geometric series," the special kind of series we've been considering, in which all consecutive terms have the same ratio.
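The $16 limit can also be confirmed by iterating the negotiation directly, since each new offer is just the midpoint of the previous two (a small numerical sketch):

```python
# Each counteroffer splits the difference between the last two offers:
# 24, 12, 18, 15, 16.5, ... -> converges to the price P.
prev, curr = 24.0, 12.0
for _ in range(60):
    prev, curr = curr, (prev + curr) / 2

print(round(curr, 6))  # 16.0, matching P = $16 from the series trick
```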
For the fly problem, that ratio is $\frac{1}{5}$. For the haggling problem, it's $-\frac{1}{2}$. In general, any geometric series S has the form

S = a + ar^2 + ar^3 + … wait, no: S = a + ar + ar^2 + ar^3 + …

where r is the ratio and a is what's called the leading term. If the ratio r lies between −1 and 1, as it did in our two problems, the trick used above can be adapted by multiplying not by 2 but by r to show that the sum of the series is S = $\frac{a}{1-r}$. Specifically, for the haggling problem, a was $24 and r was $-\frac{1}{2}$. Plugging those numbers into the formula gives S = $\frac{24}{3/2}$, which equals $16, as before. For the fly problem, we have to work a bit to find the leading term, a. It's the distance traveled by the fly on the first leg of its back-and-forth journey, so to calculate it we must figure out where the fly traveling at 15 miles an hour first meets the bicycle approaching it at 10 miles an hour. Because their speeds form the ratio 15:10, or 3:2, they meet when the fly has traveled $\frac{3}{3+2}$ of the initial 20-mile separation, which tells us a = $\frac{3}{5}$ × 20 = 12 miles. Similar reasoning reveals that the legs shrink by a ratio of r = $\frac{1}{5}$ each time the fly turns around. Von Neumann saw all of this instantly and, using the $\frac{a}{1-r}$ formula above, he found the total distance traveled by the fly: S = $\frac{12}{1-\frac{1}{5}}$ = $\frac{12}{4/5}$ = $\frac{60}{4}$ = 15 miles. Now back to the larger point: How do series like this serve to connect the various parts of math? To see this, we need to enlarge our point of view about formulas like

1 + r + r^2 + r^3 + … = $\frac{1}{1-r}$,

which is the same formula as before with a equal to 1. Instead of thinking of r as a specific number like $\frac{1}{5}$ or $-\frac{1}{2}$, think of r as a variable.
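Both results can be checked against partial sums of S = a + ar + ar^2 + …, as a sanity test of the a/(1 − r) formula (a quick sketch):

```python
def geometric_partial_sum(a, r, terms=200):
    """Partial sum a + a*r + a*r**2 + ... for |r| < 1."""
    return sum(a * r ** k for k in range(terms))

# Haggling: a = 24, r = -1/2  ->  24 / (1 + 1/2) = 16
print(round(geometric_partial_sum(24, -0.5), 6))  # 16.0
# Fly:      a = 12, r =  1/5  ->  12 / (1 - 1/5) = 15
print(round(geometric_partial_sum(12, 0.2), 6))   # 15.0
```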
Then the equation says something amazing; it expresses a kind of mathematical alchemy, as if lead could be turned into gold. It asserts that a given function of r (here, 1 divided by 1 − r) can be turned into something much simpler, a combination of simple powers of r, like r^2 and r^3 and so on. What's fantastic is that the same is true for an enormous number of other functions that come up virtually everywhere in science and engineering. The pioneers of calculus discovered that all the functions they were familiar with — sines and cosines, logarithms and exponentials — could be converted into the universal currency of "power series," a kind of beefed-up version of a geometric series where the coefficients may now also change. And when they made these conversions, they noticed startling coincidences. Here, for example, are the power series for the cosine, sine and exponential functions (don't worry about where they came from; just look at their appearance):

$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots$

$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$

$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots$

Besides all the exultant and well-deserved exclamation points (which actually stand for factorials; 4! means 4 × 3 × 2 × 1, for example), notice that the series for $e^x$ comes tantalizingly close to being a mashup of the two formulas above it. If only the alternation of positive and negative signs in $\cos x$ and $\sin x$ could somehow harmonize with the all-positive signs of $e^x$, everything would match up.
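The three expansions above are easy to test numerically; here is a sketch comparing truncated series against the standard library functions:

```python
import math

def cos_series(x, terms=12):
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k) for k in range(terms))

def sin_series(x, terms=12):
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1) for k in range(terms))

def exp_series(x, terms=24):
    return sum(x ** k / math.factorial(k) for k in range(terms))

x = 1.3  # an arbitrary test point
print(abs(cos_series(x) - math.cos(x)) < 1e-12)  # True
print(abs(sin_series(x) - math.sin(x)) < 1e-12)  # True
print(abs(exp_series(x) - math.exp(x)) < 1e-12)  # True
```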
That coincidence, and that kind of wishful thinking, led Leonhard Euler to the discovery of one of the most marvelous and far-reaching formulas in the history of mathematics:

$e^{ix} = \cos x + i \sin x$,

where i is the imaginary number defined as $i = \sqrt{-1}$. Euler's formula expresses an outrageous connection. It asserts that sines and cosines, the embodiment of cycles and waves, are secret relatives of the exponential function, the embodiment of growth and decay — but only when we consider raising the number e to an imaginary power (whatever that means). Euler's formula, spawned directly by infinite series, is now indispensable in electrical engineering, quantum mechanics and all technical disciplines concerned with waves and cycles. Having come this far, we can take one last step, which brings us to the equation often described as the most beautiful in all of mathematics, for the special case of Euler's formula where x = π:

$e^{i\pi} + 1 = 0$.

It connects a handful of the most celebrated numbers in mathematics: 0, 1, π, i and e. Each symbolizes an entire branch of math, and in that way the equation can be seen as a glorious confluence, a testament to the unity of math. Zero represents nothingness, the void, and yet it is not the absence of number — it is the number that makes our whole system of writing numbers possible. Then there's 1, the unit, the beginning, the bedrock of counting and numbers and, by extension, all of elementary school math. Next comes π, the symbol of circles and perfection, yet with a mysterious dark side, hinting at infinity in the cryptic pattern of its digits, never-ending, inscrutable. There's i, the imaginary number, an icon of algebra, embodying the leaps of creative imagination that allowed number to break the shackles of mere magnitude. And finally e, the mascot of calculus, a symbol of motion and change. When I was a boy, my dad told me that math is like a tower. One thing builds on the next.
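Euler's formula and the special case x = π can be verified with Python's complex math module (a two-line numerical check):

```python
import cmath

x = 0.75  # any real value works
# e^{ix} versus cos x + i sin x
print(abs(cmath.exp(1j * x) - (cmath.cos(x) + 1j * cmath.sin(x))) < 1e-15)  # True
# The special case x = pi: e^{i*pi} + 1 = 0, up to floating-point rounding
print(abs(cmath.exp(1j * cmath.pi) + 1) < 1e-15)  # True
```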
Addition builds on numbers. Subtraction builds on addition. And on it goes, ascending through algebra, geometry, trigonometry and calculus, all the way up to “higher math” — an appropriate name for a soaring edifice. But once I learned about infinite series, I could no longer see math as a tower. Nor is it a tree, as another metaphor would have it. Its different parts are not branches that split off and go their separate ways. No — math is a web. All its parts connect to and support each other. No part of math is split off from the rest. It’s a network, a bit like a nervous system — or, better yet, a brain.
MODEIF and MEDIANIF (conditional mode and median)
There's not an inbuilt function to cater for the conditional mode or median, so we need to use an array function. If you have supplier names in column A and their delivery lead times in column B, the above formula finds out what the mode of those delivery times is for the supplier referenced in cell D1. Change MODE to MEDIAN with obvious results. Because the formula itself references an array, you'll need to hit CTRL+SHIFT+ENTER instead of just ENTER when you've finished writing the formula. Excel will put some lovely curly brackets around the formula for you, reminding itself that arrays are contained therein. Happy days.
29 Responses to MODEIF and MEDIANIF (conditional mode and median)
1. Hi Dan, found your website really helpful, great stuff. I'm trying to construct a COUNTIF formula that looks at two columns and counts the number of duplicates. Problem is I'm trying to get the formula to only count the duplicates total in column B if the text in column A is "X".
X 12345
X 56789
X 12345
X 11111
X 22222
X 12345
Y 22222
Y 22222
Y 33333
Y 44444
Any ideas would be greatly appreciated.
2. I have a sample table as below and want to calculate the Average, Mode, Min, Max, and Median. I entered each of the formulas using CTRL+SHIFT+ENTER:
=AVERAGE(IF(Table2[Column1]=2, Table2[Column2]))
=MIN(IF(Table2[Column1]=2, Table2[Column2]))
=MAX(IF(Table2[Column1]=2, Table2[Column2]))
=MEDIAN(IF(Table2[Column1]=2, Table2[Column2]))
=MODE(IF(Table2[Column1]=2, Table2[Column2]))
Using C+S+E, a pair of braces was placed around each formula. All of them except Mode return the correct value. Mode returns a #N/A. Removing the {} from the Mode returns a #VALUE. What am I not seeing? Excel version is 2007. Column1 Column2
□ Never mind, I found the problem. There is no mode in this table. Duh!
□ Thank you, this was very helpful.
3. Hi Frank, I need some help on this one please.
I want to construct a formula to give me a value in a cell if a condition is met. The conditions are: if the value in cell C1 is greater than cell D2, then E2 should be a positive value, and if C1 is less than D2, then E2 should also give a positive value. It has to be a positive value so that other cells on the sheet work correctly. At the moment I get a - value or a + value, any ideas?
□ You need ABS(C1-D2) I think, Bob.
4. I work with a hospital system that retrieves multiple data (28 facilities). I have three columns: column A (28 different facilities), column B (various discharge dispositions), column C (LOS in minutes). I am trying to capture the median LOS in minutes when I filter on a particular hospital showing all discharge dispositions. For example:
Col A / Col B / Col C
Hosp A / Routine Discharge / 150
Hosp A / Admit to Inpatient / 99
Hosp A / Transferred to Beh Health / 360
Hosp B / Routine Discharge / 75
Hosp B / Routine Discharge / 88
Hosp B / Admit to Inpatient / 80
Hosp B / Admit to Inpatient / 101
Hosp B / Transferred / 55
Hosp B / Transferred / 99
Hosp C / Expired / 75
Hosp C / Admitted / 66
5. Can you also use this same approach to create multiple conditions like the AVERAGEIFS function allows you to do? If I replace the IF with IFS in the example you gave when producing a MEDIAN, will that work, to your knowledge?
6. Good morning Dan. Do you know if Median works with the new IFS function in Excel 2016? Look forward to hearing from you. Kind regards
7. Hey, is there a formula to calculate the median if the mode formula returns #N/A as a value? Or selects the value from the neighboring cell?
8. I need some advice. I am trying to track performance metrics for my team of case managers. Part of our performance is based on the median earnings during the program year. I can manage an equation that counts the median earnings of the entire case load (thanks to your equation above) but I am having trouble constraining the results to a time frame.
This is what I have so far: =(countifs(Dashboard!$B:$B,"CaseManager",Dashboard!$T:$T,">0",Dashboard!$J:$J,">="&$M$1,Dashboard!$J:$J,"<="&$N$1)) B is the case manager name, T is the median earnings, J is their date of program exit, M is the beginning of the program year and N is the end. The results are dumped onto a "summary" page. The problem is that, no matter the earnings, the equation returns 1 for each entry that it counts instead of the median. I'm doing this through Google Sheets. I'm willing to accept that it is not robust enough to handle this operation. Thank you for any help.
□ I apologize, I grabbed the wrong equation from my spreadsheet and I cannot see a way to delete my prior post. This is the formula that is giving me trouble:
☆ I don't think you'd want a COUNTIFS here, simply an IF(AND(... But I can't get this to work with multiple arguments. Sorry I can't be more helpful.
○ Thank you for looking into it for me. I have a fairly quick method I can use to calculate the medians I need manually, so it's not a big loss. I just can't generate the information real-time. It's all part of the learning process.
■ No worries. I'm wondering whether you may be able to use some form of RANK function to get the rankings of the values that can then be used to find the halfway point.
9. Hello, Dan. I am having trouble figuring out how to do a conditional median. Specifically, I have three cells of which I need the median (say, A, B, and C), but if any one of those cells is empty because of lack of data, I need to pull a median from a fourth cell (say, D). Cell D becomes a consideration if and only if one of cells A, B, or C is missing. Typically cell B is the missing cell, so even if a formula only considered B as the "if," I'd be grateful. My goal is either to have the cell containing the median automatically highlight, or for it to calculate in a 5th cell (say, E). Any help or direction would be appreciated. Thank you,
10. Hi, I need some help here.
I'm trying to find the mode of one column, but only if a condition in a different column is met. Here's an example: x 3, x 6, y 5, x 3, y 3, x 5, y 6, x 3, y 3, x 5, x 6, x 3, y 6, y 3, x 5. I need to find the mode of values in the second column, but only if the corresponding cell in the first column has an x in it. Please help!
□ You need the following formula: when you've typed it, hit CTRL+Shift+Enter to make it an array formula. This basically creates a temporary value of the value multiplied by 1 (which equals the value) if there's an x in column A, or creates a random number if not, and takes the mode of those. The reason I've used a random number is to make sure that the numbers are all different. If I used a zero, then you would have lots of zeroes, and the mode would therefore be zero. By making it a random number for all non-x values, it means that they'll only appear once (hopefully) and so it will create a mode of only the whole numbers associated with the x entries. Hope that helps.
11. Hi there, I need advice on how to pull the median along with other data elements. So for example, I want the median salary along with location; is that possible to do?
12. Hi there, struggling with an Excel formula... How would you extract the mode of a range based on meeting an identifying criteria within a large range?
□ The simplest way to do this is to create a secondary column that contains the value if it meets the criteria, or blank otherwise, and do a mode of that. So the secondary column is =IF(A2>5,A2,"") and then do a mode of that.
13. Hi Dan, I have 2 columns, one with stock codes and the other with a length. The codes remain the same and the lengths keep changing. What formula can we use to pull out the most commonly used length? Please advise.
□ It's not clear what you're needing, Shiraz. If you want to know the most common length, then it's just MODE(B:B) if your lengths are in column B.
14.
Hi Dan, I want my formula to calculate the mode, but if there is no mode, to return the max value. Below is the formula I'm using. The data in the 4 cells is 2, 2, 3, 3. Since there is no single mode (there are 2 modes), I want 3 to be returned since it is the max.
15. Dan, in Sheets, column A will have a cost of between 0-100, 101-200, 201-300, 301-500, 501-1000 and 1001-?. I will add to that, in order, 50%, 40%, 30%, 25%, 15%, and 10% in column B. I want the total. Can you help please with a formula?
16. I'm looking for a way to find the median of a mid-range of values. I want to leave out a set of lower values, say <200, and also leave out higher values, >600. In other words, the median of values between 200 and 600, for example.
17. I am attempting to identify the mode of column C, if column Y is >= 775 and column Y is <= 825.
=MODE('Raw Data'!C2:C10000, IF('Raw Data'!$Y$2:$Y$10000 >="775", 'Raw Data'!$Y$2:$Y$10000 <="825"))
and
=MODE(IF('Raw Data'!F2:F10000, 'Raw Data'!$Y$2:$Y$10000 >="775", 'Raw Data'!$Y$2:$Y$10000 <="825"))
I know this is not correct, but am struggling to identify the solution.
18. Hi there, I am using this formula and it works great! However, I am having one minor issue that is skewing the results of the formula. I have several 0 values in my data that are accurate, but I also have several that are null values and this is also valid. This formula seems to count the null values as zeros and this results in the formula not producing the correct results. Any ideas? Thank you!
19. Hello, Dan. I have used Excel to find the mode within entries in several columns of data. However, I am trying to find a way to utilize Excel to calculate the 2nd most frequently occurring data entries, 3rd most frequently occurring, 4th most frequently occurring, and so forth. I have attempted to use the mode function while excluding the mode value to find the 2nd most frequently occurring value but failed. HELPPPPP!!! Thank you.
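For anyone wanting the same MODEIF/MEDIANIF behaviour outside Excel, here is a Python sketch of what the array formulas compute; the supplier names and lead times below are made-up illustration data:

```python
from statistics import median, mode

# Hypothetical data: column A = supplier name, column B = delivery lead time (days)
suppliers = ["Acme", "Bolt", "Acme", "Acme", "Bolt"]
lead_times = [5, 9, 5, 7, 9]

def median_if(keys, values, criterion):
    """Like MEDIANIF: median of the values whose key equals the criterion."""
    return median(v for k, v in zip(keys, values) if k == criterion)

def mode_if(keys, values, criterion):
    """Like MODEIF: mode of the values whose key equals the criterion."""
    return mode(v for k, v in zip(keys, values) if k == criterion)

print(median_if(suppliers, lead_times, "Acme"))  # 5
print(mode_if(suppliers, lead_times, "Acme"))    # 5
```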
How to compute simple interest rate

The general formula for computing simple interest is I = Prt (interest = principal × rate × time, with the rate written as a decimal). For example, if you borrow $1,000 from a friend and agree to pay 6% simple interest for two years, the formula tells you that you'll pay 1,000 × 0.06 × 2 = $120 in interest.

Another example: calculate the simple interest for a loan or principal amount of Rs. 5000 with an interest rate of 10% per annum and a time period of 5 years. Here P = 5000, R = 10% and T = 5 years. Applying the values in the formula, you get a simple interest of Rs. 2500 by multiplying the loan amount by the interest rate and the time period.

Simple interest is money you can earn by initially investing some money (the principal). A percentage (the interest) of the principal is added to the principal, making your initial investment grow!

A simple interest calculator can solve for interest, principal, rate or time using the interest-only formula I = Prt, and for the final investment value using A = P(1 + rt). Banks apply interest on the reduced balance of the principal amount; if interest is charged at a simple rate, a separate account is kept for it.

To compute simple interest in a program: 1. Take in the values for the principal amount, rate and time. 2. Using the formula, compute the simple interest. 3. Print the computed value.

Say you open a savings account for a kid. The bank plans to pay interest on the balance: Interest = Principal * Rate * Time, which is also written as I = P*R*T.
Now that we have a procedure and a formula, we can solve problems like the IOU problem above. Simple interest is calculated by multiplying the principal by the interest rate by the number of payment periods over the life of the loan. In the usual notation, i = rate of interest and n = number of periods. Example 1: a loan of $10,000 has been issued for 6 years; compute the amount to be repaid.

An interest rate formula is used to calculate the repayment amounts for loans and the interest earned on investments such as fixed deposits and mutual funds; it is also used to calculate interest on a credit card. Interest can be calculated as simple interest or compound interest. Compound interest takes into consideration the amount of money that will be earned on interest that gets added to the account. To calculate interest, you need to know the amount in the account, the interest rate on the account, and how long the money remains in the account.

With simple interest, interest is not calculated on the interest amount accruing on the loan, unlike with the compound interest formula. To calculate simple interest, we need the amount borrowed, the period for which it has been borrowed, and the rate of interest.

To use a simple interest calculator, enter your starting amount along with the annual interest rate and the start date, then select a period of time for the calculation to run, or enter an end date. Once you click the 'calculate' button, the calculator will show the interest. To find simple interest, multiply the amount borrowed by the percentage rate, expressed as a decimal.
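The worked examples above ($1,000 at 6% for two years; Rs. 5000 at 10% for five years) both follow directly from I = Prt; a minimal Python sketch:

```python
def simple_interest(principal, annual_rate, years):
    """I = P * r * t, with the rate given as a decimal."""
    return principal * annual_rate * years

print(round(simple_interest(1000, 0.06, 2)))  # 120  (dollars of interest)
print(round(simple_interest(5000, 0.10, 5)))  # 2500 (rupees of interest)
```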
To calculate compound interest, use the formula A = P(1 + r)^n, where P is the principal, r is the interest rate expressed as a decimal and n is the number of periods during which the interest will be compounded. An interest rate determines the amount of interest a borrower will pay over the course of the loan, on top of the original loan balance. When taking out a new loan, keep track of the interest rate, especially if it's a variable interest rate. While simple interest is generally simple to calculate over the life of a loan or investment, it can also be useful to know how much interest is accruing on a daily (per diem) basis. If a payment is less than 31 days late, a simple daily interest calculator applies the same idea with a daily rate; for example, a payment 10 days late at an annual rate of 6.625% accrues ten days' worth of daily interest.
Choose whether you want to calculate simple interest (I), principal (P), interest rate (r) or duration/period (t). Fill in the blue boxes with Odeh discusses the Mathematics of Money beginning with a definition of the Time Value of Money. Calculating simple and compound interest rates are P= 5000 #Principal Amount; R=15 #Rate; T=1 #Time; SI = (P*R*T)/100; # Simple Interest calculation; print("Simple Interest is :");; print(SI); #prints Simple Interest.
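Both formulas can be wrapped as small Python functions; the loan figures in the example call are illustrative, not taken from a specific source.

```python
def simple_interest(principal, rate, periods):
    """Simple interest earned: I = P * r * t, with the rate as a decimal."""
    return principal * rate * periods

def compound_amount(principal, rate, periods):
    """Amount after compounding: A = P * (1 + r) ** n, rate per period as a decimal."""
    return principal * (1 + rate) ** periods

# Illustrative figures: $10,000 at 5% per year over 6 years.
interest = simple_interest(10_000, 0.05, 6)   # 3000.0 of simple interest
amount = compound_amount(10_000, 0.05, 6)     # ~13400.96 owed with compounding
print(interest, round(amount, 2))
```

Comparing the two outputs shows the extra roughly $400.96 produced by interest-on-interest over the same term.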
Describing the Graph of The Internet Users from 2005 to 2013 Question: The graph below shows the Internet users from 2005 to 2013. Describe the graph in 150 words. You should highlight and summarise the information given in the graph. [Graph: The Internet Users from 2005 to 2013] Answer: The graph shows the percentage of people using the Internet from 2005 to 2013. At a glance, the graph shows a tremendous rise in Internet users over the years. In 2005, only 5% of people used the Internet. The next year, 2006, this rose to 8%, which means that in one year's time 3% more people began to use the Internet. This trend of growth accelerated in the following years: in 2007, 11% of people used the Internet, and in 2008 the figure became 16%, an increase of 5% in one year. Studying the graph further, we find that the percentage of people using the Internet increased fastest between 2009 and 2010: in 2009 the percentage was only 21, while it rose to 32 in 2010, which means that in one year's time 11% more people began to use the Internet. However, there was only a small increase between 2010 and 2011, from 32% to 35%, a growth of only 3%. From 2011, the trend of growth accelerated again: between 2011 and 2012 there was an increase of 6% (41% - 35%), and in 2013 the percentage rose to 54 from 41, that is, a 13% increase between 2012 and 2013. To sum up, within a span of eight years the share of Internet users rose from 5% to 54%, a very significant growth in Internet use over time.
Downscaled bioclimatic indicators for selected regions from 1950 to 2100, derived from climate projections. Each indicator is listed below as Name (units): description; BIO numbers identify the corresponding official BIOCLIM variables.

Annual mean temperature, BIO01 (K): Annual mean of the daily mean temperature near the surface. Corresponds to the official BIOCLIM variable BIO01, which is used in ecological niche modelling.
Annual precipitation, BIO12 (m s^-1): Annual mean of the daily mean precipitation rate (both liquid and solid phases). To compute the total precipitation sum over the year, a conversion factor of 3600x24x365x1000 should be applied (mm year^-1).
Aridity annual mean (dimensionless): Monthly potential evaporation divided by the monthly mean precipitation, averaged over the year.
Aridity coldest / driest / warmest / wettest quarter (dimensionless): As above, averaged over the corresponding quarter.
Cloud cover (dimensionless): Fraction of the grid cell for which the sky is covered with clouds. Clouds at any height above the surface are considered.
Dry days (day): Number of days within a year where total daily precipitation does not exceed 2 mm.
Evaporative fraction annual mean (dimensionless): Monthly surface latent heat flux divided by the monthly total sensible and latent heat flux, averaged over the year.
Evaporative fraction coldest / driest / warmest / wettest quarter (dimensionless): As above, averaged over the corresponding quarter.
Frost days (day): Number of days during the growing season with minimum temperature below 273 K (0 degC). The data is aggregated over the months.
Growing degree days (K day year^-1): The sum of daily degrees above the daily mean temperature of 278 K (5 degC). The data is aggregated over the months.
Growing degree days during growing season (K day year^-1): Growing degree days in the growing season.
Growing season end (day of year): The first day of a period of 5 consecutive days in the second half of the year with a mean daily temperature below 278 K (5 degC).
Growing season length (day): Number of days between the start and the end of the growing season.
Growing season start (day of year): The first day of the year of a period of 5 consecutive days with a mean daily temperature above 278 K (5 degC).
Isothermality, BIO03 (%): Monthly mean diurnal range divided by the temperature annual range, multiplied by 100.
Koeppen-Geiger class (dimensionless): A climate classification that divides worldwide climates into separate classes depending on temperature and precipitation thresholds.
Maximum 2m temperature (K): Mean of the daily maximum temperature near the surface. Aggregated as an average over the months and as an average and a maximum over the year.
Maximum length of dry spells (day): Maximum number of consecutive days of the dry spells within a year.
Maximum precipitation (m s^-1): Maximum of the daily mean precipitation, aggregated over the year. To compute the total precipitation sum over the year, a conversion factor of 3600x24x365x1000 should be applied.
Maximum temperature of the warmest month, BIO05 (K): Maximum daily temperature of the month with the highest monthly mean of daily mean temperature.
Mean diurnal range, BIO02 (K): Mean of the daily maximum temperature minus the daily minimum temperature. The data is aggregated over the months.
Mean intensity of dry spells (day): Determine the consecutive dry days at each day in a year, then take the average of these daily values over the year.
Mean length of dry spells (day): Mean length of dry spells with a minimum of 5 days within a year.
Mean precipitation (m s^-1): Average over the daily mean precipitation, aggregated over the month and the year. To compute the total precipitation sum over the aggregation period, a conversion factor of 3600x24x1000 multiplied by 30.4 (average number of days per month) or by 365 should be applied.
Mean temperature (K): Mean of the daily mean temperature near the surface, aggregated over the months and the year (= BIO01).
Mean temperature of coldest quarter, BIO11 (K): The mean of monthly mean temperature during the coldest quarter, defined as the quarter with the lowest monthly mean (of the daily mean) temperature using a moving average of 3 consecutive months.
Mean temperature of driest quarter, BIO09 (K): As above, for the quarter with the lowest monthly mean (of the daily mean) precipitation.
Mean temperature of warmest quarter, BIO10 (K): As above, for the quarter with the highest monthly mean (of the daily mean) temperature.
Mean temperature of wettest quarter, BIO08 (K): As above, for the quarter with the highest monthly mean (of the daily mean) precipitation.
Minimum temperature (K): Mean of the daily minimum temperature near the surface. Aggregated as an average over the months and as an average and a minimum over the year.
Minimum temperature of the coldest month, BIO06 (K): Minimum daily temperature of the month with the lowest monthly mean of daily mean temperature.
Number of dry spells (dimensionless): Number of dry spells with a minimum of 5 days that occur in a year.
Potential evaporation annual mean (m s^-1): Annual averaged amount of water that would evaporate and transpire if there were an unlimited water supply.
Potential evaporation coldest / driest / warmest / wettest quarter (m s^-1): As above, averaged over the corresponding quarter.
Precipitation in coldest quarter, BIO19 (m s^-1): The mean of monthly mean precipitation during the coldest quarter, defined as the quarter with the lowest monthly mean (of the daily mean) temperature using a moving average of 3 consecutive months. To compute the total precipitation sum over the quarter, a conversion factor of 3600x24x91.3 (average number of days per quarter) x 1000 should be applied.
Precipitation in driest quarter, BIO17 (m s^-1): As above, for the quarter with the lowest monthly mean (of the daily mean) precipitation.
Precipitation in warmest quarter, BIO18 (m s^-1): As above, for the quarter with the highest monthly mean (of the daily mean) temperature.
Precipitation in wettest quarter, BIO16 (m s^-1): As above, for the quarter with the highest monthly mean (of the daily mean) precipitation.
Precipitation of driest month, BIO14 (m s^-1): Minimum of the monthly precipitation rate. To compute the total precipitation sum over the month, a conversion factor of 3600x24x30.4 (average number of days per month) x 1000 should be applied.
Precipitation of wettest month, BIO13 (m s^-1): Maximum of the monthly precipitation rate, with the same conversion factor as above.
Precipitation seasonality, BIO15 (%): Annual coefficient of variation of the monthly precipitation sums.
Summer days (day): Number of days in a year for which the daily maximum temperature is not lower than 298.15 K (25 degC).
Surface latent heat flux annual mean (W m^-2): The transfer of latent heat (resulting from water phase changes, such as evaporation or condensation) between the Earth's surface and the atmosphere through the effects of turbulent air motion, averaged over the year. The vector component is positive when directed upward (negative downward).
Surface latent heat flux coldest / driest / warmest / wettest quarter (W m^-2): As above, averaged over the corresponding quarter.
Surface sensible heat flux annual mean (W m^-2): The transfer of heat between the Earth's surface and the atmosphere through the effects of turbulent air motion, averaged over the year. The vector component is positive when directed upward (negative downward).
Surface sensible heat flux coldest / driest / warmest / wettest quarter (W m^-2): As above, averaged over the corresponding quarter.
Temperature annual range, BIO07 (K): Maximum temperature of the warmest month minus minimum temperature of the coldest month.
Temperature seasonality, BIO04 (K): Standard deviation of the monthly mean temperature multiplied by 100.
Volumetric soil water layer 1 annual mean (m^3 m^-3): The volume of water in soil layer 1 (0-7 cm; the surface is at 0 cm) averaged over the year. The ECMWF Integrated Forecasting System model has a four-layer representation of soil: layer 1: 0-7 cm; layer 2: 7-28 cm; layer 3: 28-100 cm; layer 4: 100-289 cm. The volumetric soil water is associated with the soil texture (or classification), soil depth, and the underlying groundwater level.
Volumetric soil water layer 1 coldest / driest / warmest / wettest quarter (m^3 m^-3): As above, averaged over the corresponding quarter.
Water vapor pressure (Pa): Contribution to the total atmospheric pressure provided by the water vapor over the period 00-24h local time.
Wind speed (m s^-1): Magnitude of the two-dimensional horizontal air velocity near the surface.
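The recurring conversion factor in the indicator descriptions (3600 x 24 x number-of-days x 1000) turns a mean precipitation rate in m s^-1 into a precipitation sum in mm. A minimal sketch, with an illustrative rate value:

```python
SECONDS_PER_DAY = 3600 * 24

def precipitation_sum_mm(mean_rate_m_per_s, days):
    """Convert a mean precipitation rate (m s^-1) into a precipitation
    sum in mm over `days` days, i.e. rate * 3600 * 24 * days * 1000."""
    return mean_rate_m_per_s * SECONDS_PER_DAY * days * 1000

# Illustrative annual mean rate of 2.5e-8 m s^-1 (a BIO12-style annual sum):
print(precipitation_sum_mm(2.5e-8, 365))   # about 788.4 mm over the year
```

The same function covers the monthly (days=30.4) and quarterly (days=91.3) factors quoted in the table.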
Blockchain governance - Part 6 Part 6: Testing governance hypotheses After December 2017 the cryptoasset markets sustained a severe devaluation against fiat currencies. Because cryptoassets have different characteristics (affinity demand, scarcity, issuance rules, governance model, hashpower security, technology, development activity and expectations), the demand curve is different in each case. A few hypotheses related to Decred Project governance can be tested, such as whether this bear market could be related to a possible decrease in the liquidity of Decred, assuming that holders do not want to sell while expecting a bull market and are meanwhile earning a passive income from PoS mining, and whether this price variance could be related to an impact on the project's governance. The period investigated in this section runs from the launch of the Decred network on 08 Feb 2016, when the genesis block (the name given to the first block) was mined, to 10 Feb 2020, 4 years after the first exchange rate in USD available in Yahoo Finance^1. Feb 2020 also avoids significant market impacts that may be related to the Covid-19 pandemic^2. The Figures and Tables in this section show correlation of data, which does not imply causality. The data was extracted from dcrdata (Decred's own block explorer), from Yahoo! Finance and the Blockchain.com website, and is available at the Decred Governance Data Analysis Github code repository^3, along with the R and SQL scripts used to extract and process the data. Table 16 shows the first assessments made before testing the hypotheses. The average number of blocks per day is important for the network to keep the pace in terms of coin issuance and transaction processing, while the average daily ticket pool size shows whether the ticket pricing algorithm is working as expected, making tickets more expensive when demand is high and lowering the price when demand drops.
Having the observed values close to the expected ones shows that the blockchain is working as defined by the consensus rules. These numbers will be used in the coming assessments. The expected average of 288 blocks per day follows from the 5-minute block target (24 x 60 / 5 = 288).

From 08 Feb 2016 to 10 Feb 2020 Expected Observed
Average blocks per day 288 288.5479
Average daily ticket pool size 40 960 41 257.3012
Table 16 – Decred blockchain expected and observed results Source: own elaboration

Hypothesis 1: Decred staked percentage is related to coin issuance

The staked percentage corresponds to the amount of coins locked up in tickets. Figure 5 and Table 17 show a strong correlation between coin issuance and staked percentage, indicating that even though more coins are issued, stakeholders keep buying tickets, which drives up the ticket price, and then participate in the governance process, being financially rewarded for it.

Figure 5 – Issued Decred coins and staked percentage Source: own elaboration

Selected sample from All time > 10 Feb 2017
N 1464 1096
Pearson's Correlation 0.8994 0.9045
Significance test p-value < 2.2e-16 < 2.2e-16
Table 17 – Correlation between issued Decred coins and staked percentage Source: own elaboration

Hypothesis 2: Decred staked percentage is not related to price variance

Because stakeholders get a reward from the governance process, and because the cryptocurrency market as a whole suffered a severe devaluation at the end of 2017, it is possible that investors who bought Decred when the price was higher now have an incentive to buy tickets and receive a reward while waiting for the price to possibly go back up. Figure 6 and Table 18 show the Decred price in USD and the staked percentage of issued Decred coins. One year after the project was launched, on 10 Feb 2017, 33% of all issued coins were locked up in tickets, and this number increased to 50% by 10 Feb 2020.
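Pearson coefficients like those in Table 17 can be computed in a few lines; this is a generic sketch on toy data, since the actual dcrdata series are only available in the linked repository.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Toy check: an exactly linear relation gives r = 1, its mirror gives r = -1.
xs = [1.0, 2.0, 3.0, 4.0]
print(pearson_r(xs, [2 * v + 1 for v in xs]))   # 1.0
print(pearson_r(xs, [-v for v in xs]))          # -1.0
```

The p-values reported in the tables come from a significance test on r, which uses the statistic t = r * sqrt((n - 2) / (1 - r^2)) against a t distribution with n - 2 degrees of freedom.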
In summary, i) additional coins are issued every day, ii) investors keep showing long-term interest in the project, locking up their coins for up to 143 days to get a reward, and iii) this reward goes down as time goes by, because the reward drops by circa 1% every 21.33 days, as shown in Figure 3 in Part 4 - Decred Project. On the other hand, the slight increase in the staked percentage between Feb 2017 and Feb 2020 (period selected due to the curve shown in Figure 6) does not explain the price variation during the same period. Additional questions should consider whether i) the price could increase in case of a reduced offer of Decred in the market, ii) the price could decrease if the market understands that there is not enough Decred liquidity for it to be regularly used as currency, or iii) the staked percentage could increase or decrease if the price decreases or increases, respectively, because stakeholders might want to take a profit if the price increases, or stake more if the price decreases and sell later when/if the price rebounds.

Figure 6 – Decred price and staked percentage (PoS security) Source: own elaboration

Selected sample from > 10 Feb 2016 > 10 Feb 2017
N 1462 1096
Pearson's Correlation 0.4970 0.123
Significance test p-value 0.0176 1
Table 18 – Correlation between Decred price and staked percentage Source: own elaboration

Figure 7 and Table 19 explore the correlation between the percentage of staked coins and the average return in USD per ticket, to investigate a relation between a possible drop in return and the incentives investors have to keep staking.

Figure 7 – Return in USD per ticket and staked percentage (PoS security) Source: own elaboration

Against expectations, the return in USD shows a weak inverse correlation with the staked percentage, indicating that even with returns dropping, investors keep staking and participating in the governance process.
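The roughly 1% decay every 21.33 days mentioned above can be sketched as a step-wise exponential schedule. The specific parameters (a 100/101 reduction every 6 144 blocks, i.e. 6144 / 288 blocks per day, about 21.33 days) come from Decred's documented subsidy algorithm and are assumed here; the initial subsidy value in the example is purely illustrative.

```python
REDUCTION_INTERVAL = 6144     # blocks; 6144 / 288 blocks-per-day ~ 21.33 days
REDUCTION_FACTOR = 100 / 101  # ~0.99, i.e. circa 1% lower each interval

def block_subsidy(initial_subsidy, height):
    """Total block subsidy at `height` under the decaying schedule (sketch)."""
    return initial_subsidy * REDUCTION_FACTOR ** (height // REDUCTION_INTERVAL)

# Illustrative initial subsidy of 100 DCR, not the real launch value:
print(block_subsidy(100.0, 0))                # 100.0 at launch
print(round(block_subsidy(100.0, 6144), 4))   # 99.0099, circa 1% lower
```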
The orange line represents the Return on Investment (ROI): it measures the return in DCR for each ticket bought, considering the ticket price, which is adjusted by the consensus rules to keep an average of 40 960 tickets in the ticket pool. The corresponding scale is on the right axis and starts on 21 Feb 2016 with a ROI of 93.59% and ends on 10 Feb 2020 with 0.69%. This means that on 21 Feb 2016, when the ticket price was DCR 2.00, investors received a return of 93.59% (or DCR 1.8717) for each ticket bought, every 20 days on average.

Selected sample from > 21 Feb 2016 > 10 Feb 2017
N 1450 1083
Pearson's Correlation -0.0711 -0.3177
Significance test p-value 1 1
Table 19 – Correlation between return in USD per ticket and staked percentage Source: own elaboration

Hypothesis 3: PoW security is not related to price

Although there is no unique method for cryptocurrency valuation, it is well known in crypto financial markets that a cryptocurrency's value comes from its utility (or the utility investors derive from it), as with traditional money, as explained in Part 2 - The tech behind digital money, not from its cost. As shown in Part 5 - Assessing blockchain governance, subsection 'Security increases opportunity costs', an increase in security (or utility) could increase the perceived value and drive the price up. Miners might also turn off mining devices if the cryptocurrency price is not enough to cover the costs of running them to provide security to the network, driving the price down. Figure 8 and Table 20 show only a weak correlation between price and network difficulty for Decred, for selected samples from 2019 onwards. There is no correlation between these variables when the samples include the whole history or run until 31 Dec 2018.
Figure 8 – Decred price and network difficulty (PoW security) Source: own elaboration

Selected sample from All time < 30 Dec 2018 > 01 Jan 2019
N 1464 1058 406
Pearson's Correlation -0.0939 0.1138 0.5308
Significance test p-value < 2.2e-16 < 2.2e-16 0.0102
Table 20 – Correlation between Decred price and network difficulty Source: own elaboration

Figure 9 and Table 21 also show a weak correlation between price and network difficulty for Bitcoin, no matter the period selected. The second Bitcoin 'halving' event (the name given to the programmed periodic change that cuts the coin issuance in half), which falls within the sample, does not seem to have caused any impact, according to Figure 9, although at that time cryptocurrencies were unknown to most people and the market was not as developed as it is today.

Figure 9 – Bitcoin price and network difficulty (PoW security) Source: own elaboration

Selected sample from All time < 30 Dec 2018 > 01 Jan 2019
N 1464 1058 406
Pearson's Correlation 0.5924 0.5376 0.5486
Significance test p-value < 2.2e-16 < 2.2e-16 < 2.2e-16
Table 21 – Correlation between Bitcoin price and block mining difficulty Source: own elaboration

Hypothesis 4: Privacy participation is not related to Decred price variance

In the cryptocurrency market, just like in regular enterprise valuation, new features (or contracts) may be related to an increase in price, at least for a very short time, maybe only on release day. A decrease in price is also possible, with some investors selling their positions once the event happens, because it was already 'priced in' before the event. Another explanation for a decrease in price would be a loss of governance transparency added by the privacy feature: it is no longer possible to track the founders' expenditure, and they might be heavily invested (staked), causing imbalances in the decision-making process by taking advantage of their initial position size (pre-mine).
Figure 10 and Table 22 show a weak inverse correlation between the Decred price (close) in USD and the release of the privacy feature.

Figure 10 – Decred price variance and coins in private transactions Source: own elaboration

N 366
Pearson's Correlation -0.5286
Significance test p-value < 2.2e-16
Table 22 – Correlation between Decred price (close) and privacy mix rate Source: own elaboration

Hypothesis 5: Observed coin issuance is related to expected coin issuance

The introduction of Part 2 - The tech behind the digital money and its subsection 'Incentives and inflation' detailed the properties of money, like scarcity, and argued that 'good and trustworthy coins' must be issued in a predictable manner and according to their value. Consensus rules in the protocol, kept in place by good governance, are responsible for the issuance of new coins, which are the incentive for miners to provide security to the blockchain and for holders to stake coins in PoS tickets. Figure 11 and Table 23 show a perfect correlation between the expected coin issuance (dashed blue curve) defined in the Decred blockchain consensus rules and the observed coin issuance (continuous black curve) from 08 Feb 2016 to 10 Feb 2020.

Figure 11 – Expected and observed Decred coin issuance Source: own elaboration

N 1464
Pearson's Correlation 1
Significance test p-value < 2.2e-16
Table 23 – Correlation between expected and observed Decred coin issuance Source: own elaboration

The difference between expected and observed coin issuance can be explained by Table 24, which shows the number of voting tickets per block. The blockchain protocol defines that a ticket drawn in one block must sign a transaction validating the next block, thus adding security to the network. Besides that, these stakeholders can vote on the on-chain and off-chain proposals, and for that they are rewarded with 30% of the coin issuance for that block (6% for each of the 5 tickets/votes per block).
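The 6%-per-vote split just described can be sketched directly; the function name is the sketch's own, and the behaviour encoded is only what the text states.

```python
def pos_issuance_share(block_issuance, votes_included):
    """PoS share of a block's coin issuance: 6% per included vote (max 5),
    i.e. the full 30% only when all 5 drawn tickets vote."""
    assert 0 <= votes_included <= 5
    return block_issuance * 0.06 * votes_included

print(pos_issuance_share(100.0, 5))   # 30.0 -> the full 30% share
print(pos_issuance_share(100.0, 3))   # 18.0 -> reduced when votes are missing
```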
If the wallet (the name given to the software that manages the private keys that sign transactions) is offline when the ticket is drawn, it will not sign the transaction and will not be rewarded with new coins. This also reduces the PoW reward, which reaches 100% only when 5 voting tickets are mined.

From 08 Feb 2016 to 10 Feb 2020 Count Percentage
Blocks with 0 voting tickets 4 096 0.0097
Blocks with only 3 voting tickets 5 526 0.0131
Blocks with only 4 voting tickets 21 881 0.0518
Blocks with all 5 voting tickets 391 045 0.9254
Table 24 – Decred ticket voters per block Source: own elaboration

Hypothesis 6: Tickets are drawn as expected in the official documentation

As explained before, predictability and scarcity play their role in the process of building a trustworthy coin. To keep stakeholders interested in the staking process, which is fundamental to governance, the voting chance must meet expectations and the financial reward must be paid according to the consensus rules. The official Decred documentation^4 displays a chart with the probability of a ticket voting by day. Table 25 shows these expected values for the ticket vote chance, as documented by the project, and the observed values from 08 Feb 2016 to 10 Feb 2020. Table 26 shows a very strong correlation between those values, which indicates that ticket vote chances, and investor rewards, happen in practice as described in theory. Figure 12 below shows the cumulative vote chance curve.
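The documented vote chances follow from a simple geometric model: assuming each block draws 5 winning tickets uniformly from the target pool of 40 960, a ticket votes in any single block with probability 5/40960, so over d days (at 288 blocks per day):

```python
P_PER_BLOCK = 5 / 40960   # 5 tickets drawn per block, target pool size 40,960
BLOCKS_PER_DAY = 288      # one block roughly every 5 minutes

def vote_chance(days):
    """Probability that a ticket has been drawn to vote within `days` days."""
    return 1 - (1 - P_PER_BLOCK) ** (BLOCKS_PER_DAY * days)

for d in (10, 20, 28, 45, 60, 90):
    print(d, round(vote_chance(d), 3))
# 0.296, 0.505, 0.626, 0.794, 0.879, 0.958
```

These match the 'Expected' column of Table 25; the small gaps to the observed values are consistent with the pool size floating slightly above its 40 960 target.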
Figure 12 – Observed cumulative frequency of Decred ticket vote chance Source: own elaboration

From 08 Feb 2016 to 10 Feb 2020 Expected Observed
Chance of having the ticket voted in 10 days 0.296 0.273
Chance of having the ticket voted in 20 days 0.505 0.490
Chance of having the ticket voted in 28 days 0.626 0.617
Chance of having the ticket voted in 45 days 0.794 0.790
Chance of having the ticket voted in 60 days 0.879 0.877
Chance of having the ticket voted in 90 days 0.958 0.958
Chance of having the ticket expired (after 143 days) 0.005 0.007
Table 25 – Expected and observed cumulative Decred ticket vote chance Source: own elaboration

N 7
Pearson's Correlation 0.9998
Significance test p-value < 2.2e-16
Table 26 – Correlation between expected and observed Decred ticket vote chance Source: own elaboration

Hypothesis 7: Decred price variance is related to Bitcoin price variance

Over the past years, empirical observation has established that Bitcoin and the alternative currencies tend to move together during some periods of the year and go opposite ways in others. Figure 13 and Table 27 show that this was true for Decred and Bitcoin until 01 Jun 2018, when Decred ASIC devices hit the market, which may or may not be a significant event for this comparison. After 01 Jun 2018 and until 10 Feb 2020, the Decred and Bitcoin exchange rates to USD are uncorrelated. As shown in Figure 13, Bitcoin price movements do not seem to influence Decred price movements as they did before that date, when the two were strongly correlated.
Figure 13 – Decred and Bitcoin prices (close) in USD
Source: own elaboration

Selected sample from           All time    < 30 May 2018   > 01 June 2018
N                              1464        844             620
Pearson’s Correlation          0.7233      0.9156          0.0886
Significance test p-value      < 2.2e-16   < 2.2e-16       < 2.2e-16
Table 27 – Correlation between Decred and Bitcoin prices (close) in USD
Source: own elaboration

When this information is analysed together with Table 20 and Figure 8 (the correlation between Decred price and Decred network difficulty), it is possible to notice a relation between the timelines. The data shows that at some point between June 2018 (when Decred ASIC devices hit the market) and Jan 2019, Decred price was no longer correlated to Bitcoin price, but to Decred network difficulty. One possible explanation for this event is the fixed cost of running the ASIC devices, which makes PoW miners the biggest sellers of coins. Bitcoin and Decred ASIC devices are different because of the one-way hashing algorithm used by each project: Bitcoin uses SHA-256^6 and Decred uses BLAKE-256^7.

Hypothesis 8: Decred voting turnout is higher on consensus rules

Decred stakeholders (PoS) seem to be more interested in long-term decisions, like the ones that change the consensus rules (on-chain governance), than in project management and network funding decisions (off-chain governance). Figures 14 and 15 and Table 28 show PoS voting turnout and approvals for 57 off-chain and 5 on-chain governance proposals.
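For reference, the statistic behind Tables 26 and 27 is the sample Pearson correlation coefficient, which can be computed directly. The series below are made-up stand-ins for illustration, not the study's price data:

```python
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Perfectly correlated toy series give r = 1; anti-correlated give r = -1.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
print(pearson_r([1, 2, 3], [3, 2, 1]))
```

A correlation near 0, as in the post-June-2018 column of Table 27, means the linear relationship between the two series has essentially vanished.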
Figure 14 – Votes by proposal in Decred Politeia voting platform (off-chain)
Source: own elaboration

Figure 15 – Votes by proposal to change consensus rules (on-chain)
Source: own elaboration

While Figure 14 shows 3 proposals that did not reach quorum and another 5 that barely made it, from a total of 57, Figure 15 shows that all 5 on-chain proposals had a higher average turnout and approval rate. This may also be indicative of the opinion of stakeholders regarding the objectives being financed (off-chain proposals) or the quality of the proposals written by core developers (on-chain proposals), given that average disapproval rates of on-chain proposals were minimal compared to average disapproval rates of off-chain proposals. The average voting turnout was calculated as the average number of ‘yes’ or ‘no’ votes cast per proposal divided by the average number of live daily pool tickets, shown in Table 28. Table 28 also shows the average voting turnout and approval for both voting processes. For on-chain governance, while the approval percentages may seem low when compared to Figure 15 (only 67.99% on average), the calculations consider the rate of ‘yes’ votes among ‘no’ votes and abstentions, with this last ‘choice’ deliberately not shown in Figure 15.

.                                  Off-chain Gov.   On-chain Gov.
N Proposals                        57               5
Average daily ticket pool size     41 257.3012      41 257.3012
Average voting turnout             0.2894           0.6153
Average proposal approval          0.5965           0.6799
Table 28 – Observed average of turnout and approval percentage
Source: own elaboration

OLS model and regression tree

Table 29 shows an Ordinary Least Squares (OLS) model taking into consideration data from 01 Jan 2019 to 10 Feb 2020, where Decred price is the dependent variable. The quality of adjustment is good, as we can observe from the R-squared and F-statistics. We may observe that Staked Percentage might have negatively influenced Decred price, while Average Hashrate and Coin Issuance have a positive impact.
Transaction Count and Transaction Sum seem to have no impact on Decred price. It might be interpreted as a sign that long-term investors must reduce their positions in staking to be able to transact Decred. Hashrate may increase price but may also be a sign that miners are turning off their devices when prices are too low.

Residuals:
Min        1Q        Median    3Q        Max
-9.6921    -2.0604   0.0554    2.2413    11.2513

.                    Coefficient   Std. Error   t value   Pr(>|t|)
Intercept            7.758e+01     9.471e+00     8.191    3.53e-15  ***
Coin Issuance        1.598e-03     5.555e-04     2.877    0.00423   **
Average Hashrate     4.330e-08     2.322e-09    18.646    < 2.2e-16 ***
Staked Percentage   -1.625e+02     1.736e+01    -9.362    < 2.2e-16 ***
Transaction Count   -3.820e-04     4.306e-04    -0.887    0.37557
Transaction Sum      1.003e-07     4.998e-07     0.201    0.84108
Significance: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘’ 1
Residual standard error: 3.654, Degrees of freedom: 400
R-squared: 0.4803, Adjusted R-squared: 0.4738
F-statistic: 73.94 (p-value: < 2.2e-16)
Table 29 – OLS model: Decred Price (USD) as dependent variable
Source: own elaboration

Figure 16 and Table 30 show the relation between price, average hashrate and staked percentage using a regression tree, a machine learning algorithm built through a process known as binary recursive partitioning, an iterative process that splits the data into partitions or branches (Breiman, Friedman, Olshen, & Stone, 1984). Lower prices were observed when the network had lower average hashrates; when hashrates were higher, lower prices coincided with staked percentages that were diminishing compared to their maximum values. First, the data was divided into 70% training and 30% testing samples. Then, a hyperparameter model with 240 combinations of different values for minimum split of branches, maximum number of leaves and complexity parameter was generated. All models were tested, and the values that produced the least cross-validated error were chosen as the optimal model to generate the regression tree.
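An OLS fit like the one reported in Table 29 can be sketched with a least-squares solver. The data below are synthetic stand-ins whose coefficients are hypothetical, chosen only to echo the signs in Table 29 — this is not the study's dataset or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for three regressors (hypothetical, standardized):
# issuance, hashrate, staked percentage.
n = 400
X = rng.normal(size=(n, 3))
true_beta = np.array([0.0016, 4.3, -1.6])   # illustrative signs only
y = 77.6 + X @ true_beta + rng.normal(scale=0.1, size=n)

# OLS: prepend an intercept column and solve min ||A b - y||^2.
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta_hat.round(3))   # recovers intercept and slopes up to noise
```

With low noise and n = 400, the estimates land very close to the generating coefficients; the study's table additionally reports standard errors and t-statistics for each coefficient.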
The Mean Absolute Error calculated using the optimal model against the testing sample was 2.6096 and the Root Mean Square Error was 3.4690. Table 31 shows the importance each variable had in the calculations. We may conclude that average hashrate and staked percentage are the most important variables for understanding price levels. The regression tree was optimised to select the number of tree leaves and levels that reduce the mean absolute error, and it considered coin issuance, transaction count and transaction sum irrelevant to predict price. The market showed some correlation between low hashrates and low prices, and also between high hashrates and staked percentages below 50%; these patterns accounted for almost two thirds of observed prices between 01 Jan 2019 and 10 Feb 2020.

Figure 16 – Regression Tree with Hashrate and Staked Percentage
Source: own elaboration

AvgPrice   Hashrate                  Staked Percentage    Cover
17.78      < 395.9e+06               .                    32.08%
18.86      434.7e+06 to 508.2e+06    >= 0.4991            9.47%
18.92      395.9e+06 to 434.7e+06    >= 0.4978            12.98%
19.90      >= 508.2e+06              0.4991 to 0.5018     3.16%
20.94      395.9e+06 to 434.7e+06    < 0.4833             2.46%
23.66      >= 508.2e+06              >= 0.5018            6.67%
27.78      >= 434.7e+06              < 0.4991             28.42%
28.35      395.9e+06 to 434.7e+06    0.4833 to 0.4978     3.86%
Table 30 – Regression Tree Rule Coverage
Source: own elaboration

Variable             Importance
obs_avg_hashrate     3675.12
perc_staked          3372.37
obs_coin_issuance    1137.87
tx_sum               1010.45
tx_count             642.22
Table 31 – Regression Tree, Variable Importance Score
Source: own elaboration

Figures 17 and 18 show two linear models: one with price and staked percentage, and the other with price and hashpower. As shown in these figures, price tends to increase as hashpower increases but tends to decrease slightly when staked percentages go up. Both figures display data from 01 Jan 2019 until 10 Feb 2020.
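The splitting step at the heart of binary recursive partitioning can be illustrated in a few lines. This is a generic sketch of the idea on toy data, not the chain data used above: each candidate threshold is scored by the summed squared error of the two resulting leaves, each leaf predicting its mean.

```python
def best_split(xs, ys):
    """One step of binary recursive partitioning on a single feature:
    return the threshold minimising the summed squared error of the two
    leaves, each leaf predicting the mean of its targets."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    pairs = sorted(zip(xs, ys))
    best_err, best_thr = float("inf"), None
    for i in range(1, len(pairs)):
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        err = sse(left) + sse(right)
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2   # midpoint between neighbours
        if err < best_err:
            best_err, best_thr = err, thr
    return best_thr

# Price-like toy data: low values below x = 4, high values above it.
xs = [1, 2, 3, 5, 6, 7]
ys = [17.8, 18.9, 18.9, 27.8, 28.4, 27.8]
print(best_split(xs, ys))   # splits between 3 and 5 -> 4.0
```

A full regression tree applies this search recursively to each resulting partition, over all features, until a stopping rule (minimum split size, maximum leaves, complexity penalty) halts the recursion — the hyperparameters tuned in the study.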
Figure 17 – Linear Model with Price and Staked Percentage
Source: own elaboration

Figure 18 – Linear Model with Price and Hashpower
Source: own elaboration

^1 DCRUSD exchange rates, available at https://finance.yahoo.com/quote/DCR-USD/history?period1=1454716800&period2=1586131200&interval=1d&filter=history&frequency=1d
^2 “Bitcoin Jumps to Highest Level Since March’s Coronavirus Crash”, available at https://www.bloomberg.com/news/articles/2020-04-23/bitcoin-jumps-to-highest-level-since-march-s-coronavirus-crash
^3 Decred Governance Data Analysis, available at https://github.com/mmartins000/decred-gov-data-analysis
^4 Available at https://docs.decred.org/proof-of-stake/overview/
^5 “Crypto Faithful Flip Out on Speculation Satoshi Sold Bitcoin”, available at https://www.bloomberg.com/news/articles/2020-05-20/crypto-faithful-freak-out-amid-speculation-satoshi-sold-bitcoin
^6 NIST Hash Functions, available at https://csrc.nist.gov/projects/hash-functions
^7 BLAKE-256 hash function, available at https://docs.decred.org/research/blake-256-hash-function/
A decision maker is working on a problem that requires her to study the uncertainty surrounding the payoff of an investment. There are three possible levels of payoff — $1,000, $5,000, and $10,000. As a rough approximation, the decision maker believes that each possible payoff is equally likely. But she is not fully comfortable with the assessment that each probability is exactly 1/3, and so would like to conduct a sensitivity analysis. In fact, she believes that each probability could range from 0 to 1/2.
a. Show how a Monte Carlo simulation could facilitate a sensitivity analysis of the probabilities of the payoffs.
b. Suppose the decision maker is willing to say that each of the three probabilities could be chosen from a uniform distribution between 0 and 1. Could you incorporate this information into your simulation? If so, how? If not, explain why not, or what additional information you would need.
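One way to set up the simulation for part (b) — a hedged sketch that assumes the three uniform weights are normalised so the probabilities sum to 1, which is one of several defensible modelling choices — is to draw a random probability vector on each iteration and record the resulting expected payoff:

```python
import random

random.seed(42)

PAYOFFS = [1_000, 5_000, 10_000]

def sample_emv():
    """Draw one random probability vector and return the expected payoff.
    Each weight is uniform on (0, 1); normalising the vector to sum to 1
    is an assumption, since three independent uniforms need not be a
    valid probability distribution on their own."""
    w = [random.random() for _ in PAYOFFS]
    total = sum(w)
    p = [wi / total for wi in w]
    return sum(pi * v for pi, v in zip(p, PAYOFFS))

emvs = [sample_emv() for _ in range(100_000)]
print(min(emvs), sum(emvs) / len(emvs), max(emvs))
```

The spread of the resulting expected-payoff distribution is exactly the sensitivity information sought in part (a); by symmetry each normalised probability averages 1/3, so the mean lands near (1000 + 5000 + 10000)/3 ≈ 5333. For part (a)'s tighter constraint, one could instead reject draws where any probability exceeds 1/2.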
Early orbit methods
The estimator algorithm in the Goddard Trajectory Determination System (GTDS) requires an a priori estimate of the spacecraft position and velocity in order to initiate the iterative estimation process. GTDS has been provided with the capability to determine a starting value of position and velocity from a limited number of discrete tracking data samples. Three techniques are optionally provided to perform this function:
1. The Gauss method and 2. the double r-iteration method. These deterministic methods use three sets of chronologically ordered gimbal angle observation pairs to solve for the six Cartesian position and velocity components at an epoch time equal to that of the second observation.
3. The range and angles method. This method uses multiple (more than two) sets of simultaneously measured range and gimbal angle data from the GRARR, ATSR, USB or C-Band radar systems. Two-body equations are regressively fitted to the transformed data to yield epoch values of the spacecraft position and velocity.
In its Mathematical Theory of the Goddard Trajectory Determination System, 31 p. (SEE N76-24291 15-13). Pub Date: April 1976.
Keywords: Orbital Position Estimation; Orbital Velocity; Spacecraft Trajectories; Trajectory Optimization; Algorithms; Iterative Solution; Radar Tracking; Time Measurement; Astrodynamics
Data Analytics R Programming Certificate - Newelec
Data Analytics R Programming Certificate
A year ago, I dropped out of one of the best computer science programs in Canada. I started creating my own data science master’s program using online resources. I realized that I could learn everything I needed through edX, Coursera, and Udacity instead.
• This is the first course in the 9-part Data Science professional certificate program offered by HarvardX on the edX platform.
• This is a beginner-level course and assumes knowledge of basic statistics.
• DeZyre’s Data Scientist Courses will prepare you for the job role of a data scientist and will help you gain the data scientist skillset by learning data science using analytical tools like Python and R.
• Statistics with R certification is one of the best courses to master statistics with R.
• If Christmas comes but once a year, so does the chance to see how strategic decisions impacted the bottom line.
This course will make you familiar with the R programming language, its terminology, features, syntax, and other essentials. This article is a collection of such free R programming courses. I compiled this list for learning Data Science and Machine Learning with R, but it is equally useful for people learning R programming for statistics and graphics. Additional benefits include the opportunity to work on real-world projects with industry experts and receive project comments from reviewers with extensive experience. One of the most popular and well-regarded course series is this one. The following 4 projects will be your focus during the Nanodegree program. We compiled the average rating and number of reviews from Class Central and other review sites to calculate a weighted average rating for each course. We read text reviews and used this feedback to supplement the numerical ratings.
A Great Python
In the module on R programming, you will start by understanding common use cases of R and why it’s popular, along with installation and setup of the R environment. You will learn to represent and store data using R data types and variables, and use conditionals and loops to control the flow of programs. You’ll also learn about complex data structures like lists to store collections of related data. Additionally, you’ll learn to write your own custom functions, write scripts, and handle errors. With big data becoming the lifeblood of business, data analysts and data scientists with expertise in Hadoop, NoSQL, and the Python and R languages are hard to come by. Anyway, without any further ado, here is my list of some of the best free online courses to learn the R programming language. In the past, I have shared some machine learning courses on Python, and today, I am going to share some of the free courses to learn the R programming language as well as Data Science and deep learning using R. The statistics required for various machine learning algorithms and data science are not covered in this course series, which instead focuses on providing the skills to manipulate, analyze, and visualize data with a programming language well suited to statistical analysis. Let’s look at the other alternatives, sorted by descending rating. Below you’ll find several R-focused courses, if you are set on an introduction in that language. Reviewers love the instructor’s delivery and the organization of the content.
Advanced Statistics For Data Science Specialization
On completing the data science course, data science projects submitted by students are evaluated by industry experts, based on which a data science certificate is awarded to the students by ProjectPro. Data Science Online Training with ProjectPro aims at moulding students or professionals who want to make it big as enterprise data scientists.
ProjectPro helps students learn data science from industry experts by encapsulating a lot of projects in Python and R to provide experiential learning.
The 12 Best Data Scientist Courses and Online Training for 2022 – Solutions Review. Posted: Thu, 29 Sep 2022 07:00:00 GMT [source]
Data scientists are in demand and well paid – so why is there a skills gap? Caret’s train() function produces models that can use a selected subset of features; these models have built-in feature selection. The predictors() method can be called on these models to return a vector containing the predictors/features used in the final model. Built-in feature selection typically couples the predictor search algorithm with the parameter estimation, and is usually optimized with a single objective function.
The Data Science Course 2022: Complete Data Science Bootcamp
It is designed in such a way that you can succeed at it even without any statistical background. It takes you step-by-step through the steep learning curve of R. You will be using specifically designed datasets to practice the skills you learn in the course. Prior to the start of the term, students in DATA 1010 are required to complete an online, pre-course module that takes approximately 3-5 hours. The course starts with the installation of R and RStudio and then explains R and ggplot skills as they are needed, as you progress toward an understanding of linear regression. In short, one of the best free courses to learn R programming this year. R-bloggers.com offers daily e-mail updates about R news and tutorials about learning R and many other topics. Click here if you’re looking to post or find an R/data-science job. There will be 45 hours of live interactive online webinars with the faculty. You will also be working on practical assignments throughout the duration of the course.
At the end of the course, you will need to submit a final project. It would be great if a project summary were present in each project so that one doesn’t have to go through each and every project. Learn to communicate the results of data analysis and findings through various data visualization techniques. Finally, it has a vast collection of technology platforms covered. They provide end-to-end project solutions with functional and technical backgrounds. I have reviewed projects in the financial domain such as probabilities, credit risk, statistical analyses, and mathematical computations for financial applications. Projects have enhanced my knowledge in technology adoption and my confidence in technology platforms. I have been able to show my skills in my employment to my clients. You will be tidying several messy real-life data sets, and you will learn how to combine multiple operations in an intuitive way by using the pipe operator. In this section of the course, we will go through some of the fundamental tools you need to learn when programming with R. We will cover relational operators, logical operators, vectors, IF, ELSE, and different types of loops in R. Some of these topics will have already been introduced to you in our Python training, but here you will have the chance to reinforce what you have learned and see things with R in mind. It provides an aggregation of machine learning algorithms for data mining and it’s written in Java. Both R and Python offer open-source toolkits that assist in Natural Language Processing. Knowledge of the GGPlot2 package, dataframes, vectors and vectorized operations is also recommended. The certificate is open to any University of Chicago graduate student. Harris students, please indicate your intent to pursue this certificate using the Harris Certificate Application Form.
Top 10 Psychology Certifications, Courses & Classes Online In 2021
In this course, you will learn how to start with R programming and use the excellent graphics package for R, ggplot2. Along the way, you will also learn Data Science concepts like the basics of simple linear regression. This is a very short yet excellent course to get a general overview of the R programming language, and I strongly suggest you go through this course before starting with any other class. By the way, for those who are not familiar with R: it’s a programming language and a free software environment popular among statisticians and data miners for developing statistical software. The price varies depending on Udemy discounts, which are frequent, so you may be able to purchase access for as little as $10. We will install RStudio and packages, learn the layout and basic commands of R, practice writing basic R scripts, and inspect data sets. EDA, which comes before formal hypothesis testing and modeling, makes use of visual methods to analyze and summarize data sets. Here, we will be talking about data transformation with the dplyr package. More specifically, how to filter(), arrange(), mutate(), and transmute() your data, as well as how to sample() fractions and a fixed number of elements from it. You will also learn what tidy data is, why it is extremely important for the efficiency of your work to tidy your data sets in the most meaningful way, and how to achieve this by using the tidyr package.
R Basics
Unfortunately, it has no review data on the major review sites that we used for this analysis, so we can’t recommend it over the above two options yet. Is the course taught using popular programming languages like Python and/or R? These aren’t necessary, but helpful in most cases, so slight preference is given to these courses. I’ve taken many data science-related courses and audited portions of many more.
I know the options out there, and what skills are needed for learners preparing for a data analyst or data scientist role. A few months ago, I started creating a review-driven guide that recommends the best courses for each subject within data science. This course also covers the essentials of statistics for data science in Python. In this Python online course for data science, you will solve real data science problems across multiple domains using Python. Upon successful completion of the data science projects, you will be awarded an online Data Science Certificate for Python. This is a very comprehensive R course with over 100 HD video lectures, detailed code notebooks for every lecture, 8 articles and 3 downloadable resources.
Ask Admissions: Data And Policy Summer Scholar (DPSS) Program
DeZyre’s Data Scientist Courses will prepare you for the job role of a data scientist and will help you gain the data scientist skillset by learning data science using analytical tools like Python and R. This data science specialization is best suited for beginners as well as experienced professionals who would like to use Python for doing data science. This R certification training for data science will help you master practical data science with R, using statistical computing and machine learning through a series of data science projects. Upon successful completion of the data science projects, you will be awarded an online Data Science Certificate for R.
Top 10 Online Data Science Programs
This is the only Data Science learning experience where you start coding immediately in the first class. Once you complete the project, we will issue the certificate based on your performance. You will learn advanced statistical principles in this class, as well as the inner workings of important data science modeling techniques, including least squares and linear regression. Anyone interested in beginning a career in data science should obtain this professional certificate from IBM.
It also suits those who are just learning a programming language at the beginner level. So let’s get to selecting the best Data Science programs without spending any more time. You can find some of the top free data science programs at the conclusion of this post. Our #1 pick had a weighted average rating of 4.5 out of 5 stars over 3,068 reviews.
What is 116/15 as a decimal? | Thinkster Math
A fraction is written in terms of two parts: the number on top is called the numerator and the number on the bottom is called the denominator. We can use the division method to solve this question. To get a decimal, simply divide the numerator 116 by the denominator 15:
116 (numerator) ÷ 15 (denominator) = 7.7333… ≈ 7.73
As a result, you get 7.73 (rounded to two decimal places) as your answer when you convert 116/15 to a decimal.
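The same computation takes two lines of Python, with round() handling the two-decimal result shown above:

```python
numerator, denominator = 116, 15
value = numerator / denominator   # 7.7333...
print(round(value, 2))            # 7.73
```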
Lecture 15: 11.02.05 Phase Changes and Phase Diagrams of Single-Component Materials

polymers — Article
Phase Diagrams of Ternary π-Conjugated Polymer Solutions for Organic Photovoltaics
Jung Yong Kim
School of Chemical Engineering and Materials Science and Engineering, Jimma Institute of Technology, Jimma University, Post Office Box 378, Jimma, Ethiopia; [email protected]
Abstract: Phase diagrams of ternary conjugated polymer solutions were constructed based on Flory-Huggins lattice theory with a constant interaction parameter. For this purpose, the poly(3-hexylthiophene-2,5-diyl) (P3HT) solution as a model system was investigated as a function of temperature, molecular weight (or chain length), solvent species, processing additives, and electron-accepting small molecules. Then, other high-performance conjugated polymers such as PTB7 and PffBT4T-2OD were also studied in the same vein of demixing processes. Herein, the liquid-liquid phase transition proceeds through the nucleation and growth of the metastable phase or the spontaneous spinodal decomposition of the unstable phase. As a result, the versatile binodal, spinodal, tie line, and critical point were calculated depending on the Flory-Huggins interaction parameter as well as the relative molar volume of each component. These findings may pave the way to rationally understand the phase behavior of solvent-polymer-fullerene (or nonfullerene) systems at the interface of organic photovoltaics and molecular thermodynamics.
Keywords: conjugated polymer; phase diagram; ternary; polymer solutions; polymer blends; Flory-Huggins theory; polymer solar cells; organic photovoltaics; organic electronics
Citation: Kim, J.Y. Phase Diagrams of Ternary π-Conjugated Polymer Solutions for Organic Photovoltaics. Polymers 2021, 13, 983.
1. Introduction
https://doi.org/10.3390/polym13060983
Since Flory-Huggins lattice theory was conceived in 1942, it has been widely used because of its capability of capturing the phase behavior of polymer solutions and blends [1–3].
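For a binary solvent-polymer pair, the spinodal and critical point mentioned in the abstract follow from the second derivative of the Flory-Huggins free energy of mixing. The sketch below uses the standard textbook expressions, not code from the paper; here phi is the polymer volume fraction and n1, n2 are the relative molar volumes (chain lengths) of solvent and polymer, with a constant interaction parameter chi:

```python
from math import sqrt

def chi_spinodal(phi, n1=1, n2=1):
    """Flory-Huggins spinodal condition d2(dG_mix)/dphi2 = 0 solved for chi:
    chi_s(phi) = (1/2) * (1/(n1*(1-phi)) + 1/(n2*phi))."""
    return 0.5 * (1.0 / (n1 * (1.0 - phi)) + 1.0 / (n2 * phi))

def critical_point(n1=1, n2=1):
    """Critical composition and interaction parameter (minimum of the
    spinodal): phi_c = sqrt(n1)/(sqrt(n1)+sqrt(n2)),
    chi_c = (1/2)*(1/sqrt(n1) + 1/sqrt(n2))**2."""
    phi_c = sqrt(n1) / (sqrt(n1) + sqrt(n2))
    chi_c = 0.5 * (1.0 / sqrt(n1) + 1.0 / sqrt(n2)) ** 2
    return phi_c, chi_c

# Solvent (n1 = 1) with a 100-unit chain: phi_c = 1/11, chi_c = 0.605.
phi_c, chi_c = critical_point(1, 100)
print(phi_c, chi_c)
```

As the chain length n2 grows, the critical composition shifts toward the solvent-rich side and chi_c approaches 1/2 — the qualitative trend the paper exploits when varying molecular weight.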
✔ eliminate match in goal
Hey all, I'm following the Logical Foundations book. In an exercise of the second chapter, I get the goal of a proof into the following state.
```
nat_to_bin (bin_to_nat c) = nat_to_bin (bin_to_nat c)
 | Z
 | _ => nat_to_bin (bin_to_nat c)
```
Why is cbn or simpl not able to simplify the match? Is there a tactic that would transform the left-hand side of the equality into nat_to_bin (bin_to_nat c)?
cbn and simpl perform computations, but in your case, there is nothing left to compute, since c is an abstract term (I suppose). What you can do is a case analysis on the matched term: `destruct (nat_to_bin (bin_to_nat c)).`
Thank you! That worked :smile:
Ricardo has marked this topic as resolved.
Last updated: Oct 13 2024 at 01:02 UTC
Generalised Polynomial Algebra
Summary: In the literature on uncertainty propagation, intrusive approaches are methods that require a modification of either the system model or the operators used to evaluate the quantities of interest. On the other hand, non-intrusive approaches build a representation of the quantity of interest, or of its probability distribution, from a set of samples. Non-intrusive approaches, therefore, treat the system through which uncertainty is propagated as a black box. While in the literature on uncertainty propagation in computational fluid dynamics non-intrusive schemes, mainly based on Polynomial Chaos Expansions, have become more popular than their intrusive counterpart, in orbital mechanics the use of Taylor polynomial algebra has gained popularity as a valid alternative to direct Monte Carlo simulations. Taylor polynomial algebra starts from variationals of the uncertain quantities with respect to a nominal value. Variationals initially describe a hypercube in the uncertain space, with uncorrelated variables, that is mapped into a generally non-convex region through the dynamical model. The propagation is performed by introducing an algebra on Taylor polynomials that replaces the standard computer algebra on real numbers. This technique is based on the so-called Truncated Power Series Algebra (TPSA), introduced by Berz in 1986 and extended to rigorous numerics in 1997 with the introduction of Taylor Models. Numerical integration and evaluation of the dynamics are performed in the TPSA, hence at each integration time-step the full polynomial representation of the current state is available. The same idea can be generalised to a different set of basis functions, provided that the corresponding algebraic rules between monomials can be defined. The idea of a more generic TPSA dates back to 1982 under the name of Ultra Arithmetic.
However, in 2003 the possibility of using a TPSA alternative to the one built on the Taylor basis was discarded because of several drawbacks related to polynomial multiplication and the growth of the magnitude of the coefficients. It was with the work of Brisebarre and Joldes in 2010 that a formal comparison of the TPSA with Taylor, Tchebycheff and Newton bases was given. The results proved, for the univariate case, that the interpolation polynomials constructed in the Tchebycheff and Newton basis algebrae were able to achieve smaller remainders than Taylor models for the same order of expansion, requiring nevertheless more computing time. One of the main advantages of using Tchebycheff series expansions is the uniform convergence over the interval of expansion, which guarantees near-minimax approximation. The complexity and accuracy of the proposed Generalised Polynomial Algebra (GPA), its generalisation to the multivariate case, and the comparison to its non-intrusive counterpart are under investigation in this current research.
Riccardi, A., Tardioli, C., Vasile, M., “An intrusive approach to uncertainty propagation in orbital mechanics based on Tchebycheff polynomial algebra”, 13 Aug 2015, Proceedings AAS/AIAA Astrodynamics Specialist Conference, ASC 2015.
Ortega Absil, C., Serra, R., Riccardi, A., Vasile, M., “De-Orbiting and Re-entry Analysis with Generalised Intrusive Polynomial Expansions”, Proceedings of the 67th International Astronautical Congress, Guadalajara, Mexico, 2016.
Timeframe: January 2015 – ongoing
People: Annalisa Riccardi, Carlos Ortega Absil, Massimiliano Vasile, Romain Serra
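The "algebraic rules between monomials" that make a Tchebycheff-basis TPSA possible reduce, for multiplication, to the product identity T_m · T_n = (T_{m+n} + T_{|m−n|}) / 2. The minimal univariate sketch below is illustrative only, not the project's implementation:

```python
from math import cos, acos

def cheb_mul(a, b):
    """Multiply two Chebyshev series (lists of coefficients of T_0, T_1, ...)
    using the monomial rule T_m * T_n = (T_{m+n} + T_{|m-n|}) / 2."""
    out = [0.0] * (len(a) + len(b) - 1)
    for m, am in enumerate(a):
        for n, bn in enumerate(b):
            c = 0.5 * am * bn
            out[m + n] += c
            out[abs(m - n)] += c
    return out

def cheb_eval(coeffs, x):
    """Evaluate sum_k c_k T_k(x) on [-1, 1] via T_k(cos t) = cos(k t)."""
    t = acos(x)
    return sum(c * cos(k * t) for k, c in enumerate(coeffs))

a = [1.0, 2.0, 0.5]   # 1*T0 + 2*T1 + 0.5*T2
b = [0.0, 1.0]        # T1
prod = cheb_mul(a, b)
x = 0.3
# The product series evaluates to the product of the factors:
print(cheb_eval(prod, x), cheb_eval(a, x) * cheb_eval(b, x))
```

Replacing real-number arithmetic with operations like cheb_mul (plus truncation to a fixed order) is exactly what turns a numerical integrator into a polynomial-propagating one.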
{"url":"http://icelab.uk/projects/research-projects/gpa/","timestamp":"2024-11-03T00:19:21Z","content_type":"text/html","content_length":"132242","record_id":"<urn:uuid:cd48bb86-e131-489f-9456-4f4c6ce1d396>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00277.warc.gz"}
Covariance – Probability – Mathigon Just as mean and variance are summary statistics for the distribution of a single random variable, covariance is useful for summarizing how two random variables $X$ and $Y$ are jointly distributed. The covariance of two random variables $X$ and $Y$ is defined to be the expected product of their deviations from their respective means: $$\operatorname{Cov}(X,Y) = \mathbb{E}[(X - \mathbb{E}[X])(Y - \mathbb{E}[Y])].$$ The covariance of two independent random variables is zero, because the expectation distributes across the product on the right-hand side in that case. Roughly speaking, if $X$ and $Y$ tend to deviate from their means positively or negatively together, then their covariance is positive. If they tend to deviate oppositely (that is, $X$ is above its mean and $Y$ is below, or vice versa), then their covariance is negative. Identify each of the following joint distributions as representing positive covariance, zero covariance, or negative covariance. The size of a dot at $(x, y)$ represents the probability that $X = x$ and $Y = y$. Solution. The first graph shows negative covariance, since the deviations of $X$ and $Y$ from their means have opposite sign for the top-left mass and for the bottom-right mass, and the contributions of the other two points are smaller since these points are close to the mean. The second graph shows positive covariance, since the top-right and bottom-left points contribute positively, and the middle point contributes much less. The third graph shows zero covariance, since the points contribute to the sum defining $\operatorname{Cov}(X,Y)$ in two cancelling pairs. Does $\operatorname{Cov}(X,Y) = 0$ imply that $X$ and $Y$ are independent? Hint: consider the previous exercise. Alternatively, consider a random variable $X$ which is uniformly distributed on $[-1,1]$ and an independent random variable $S$ which is uniformly distributed on $\{-1,1\}$. Set $Y = SX$. Consider the pair $(X, Y)$. Solution. The third example in the previous exercise shows a non-independent pair of random variables which has zero covariance. Alternatively, the suggested random variables $X$ and $Y$ have zero covariance, but they are not independent since, for example, $\mathbb{P}(|X| < \tfrac{1}{2} \text{ and } |Y| > \tfrac{1}{2}) = 0$ even though $\mathbb{P}(|X| < \tfrac{1}{2})$ and $\mathbb{P}(|Y| > \tfrac{1}{2})$ are both positive.
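The zero-covariance-without-independence phenomenon in the exercise above can be checked numerically. The construction below (a uniform variable multiplied by an independent random sign) is one standard example of this kind; the sample size and threshold 1/2 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(-1.0, 1.0, n)      # X ~ Uniform[-1, 1]
s = rng.choice([-1.0, 1.0], n)     # independent random sign
y = s * x                          # Y = SX: uncorrelated with X, but dependent

# Empirical covariance is near zero, since Cov(X, SX) = E[S] E[X^2] = 0.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))

# Independence fails: |Y| always equals |X|, so this joint event is impossible
# even though both marginal events have positive probability.
p_joint = np.mean((np.abs(y) > 0.5) & (np.abs(x) < 0.5))
p_marg = np.mean(np.abs(y) > 0.5) * np.mean(np.abs(x) < 0.5)
```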
The correlation of two random variables $X$ and $Y$ is defined to be their covariance normalized by the product of their standard deviations: $$\operatorname{Corr}(X,Y) = \frac{\operatorname{Cov}(X,Y)}{\sigma_X \sigma_Y}.$$ In this problem, we will show that the correlation of two random variables is always between $-1$ and $1$. Let $\mu_X = \mathbb{E}[X]$, and let $\mu_Y = \mathbb{E}[Y]$. Consider the following quadratic polynomial in $t$: $$\mathbb{E}\big[((X-\mu_X)t + (Y-\mu_Y))^2\big] = \operatorname{Var}(X)\,t^2 + 2\operatorname{Cov}(X,Y)\,t + \operatorname{Var}(Y),$$ where $t$ is a variable. Explain why this polynomial is nonnegative for all $t$. Recall that a quadratic polynomial $at^2 + bt + c$ with $a > 0$ is nonnegative for all $t$ if and only if the discriminant $b^2 - 4ac$ is nonpositive (this follows from the quadratic formula). Use this fact to show that $$\operatorname{Cov}(X,Y)^2 \le \operatorname{Var}(X)\operatorname{Var}(Y).$$ Conclude that $|\operatorname{Corr}(X,Y)| \le 1$. The polynomial is nonnegative because the left-hand side of the given equation is the expectation of a nonnegative random variable. Substituting $\operatorname{Var}(X)$ for $a$, $2\operatorname{Cov}(X,Y)$ for $b$, and $\operatorname{Var}(Y)$ for $c$, the inequality $b^2 - 4ac \le 0$ implies $$4\operatorname{Cov}(X,Y)^2 - 4\operatorname{Var}(X)\operatorname{Var}(Y) \le 0,$$ which implies the desired inequality. Dividing both sides of the preceding inequality by $\operatorname{Var}(X)\operatorname{Var}(Y)$ and taking the square root of both sides, we find that $|\operatorname{Cov}(X,Y)| / (\sigma_X \sigma_Y) \le 1$, which implies $|\operatorname{Corr}(X,Y)| \le 1$. Show that $\operatorname{Var}(X+Y) = \operatorname{Var}(X) + \operatorname{Var}(Y)$ if $X$ and $Y$ are independent random variables. Solution. The expectation of $(X+Y)^2$ is the sum of the values in this table: [table omitted in the source]. The square of the expectation of $X+Y$ is the sum of the values in this table: [table omitted in the source]. Subtracting these two tables entry-by-entry, we get the variances on the right-hand side from the diagonal terms, and all of the off-diagonal terms cancel, by the independence product formula $\mathbb{E}[XY] = \mathbb{E}[X]\mathbb{E}[Y]$. Exercise (Mean and variance of the sample mean) Suppose that $X_1, \ldots, X_n$ are independent random variables with the same distribution, with mean $\mu$ and variance $\sigma^2$. Find the mean and variance of $$\bar{X} = \frac{X_1 + \cdots + X_n}{n}.$$ Solution. By linearity of expectation, we have $\mathbb{E}[\bar{X}] = \mu$, and by independence, $\operatorname{Var}(\bar{X}) = \sigma^2 / n$. The covariance matrix of a vector $\mathbf{X} = (X_1, \ldots, X_n)$ of random variables defined on the same probability space is defined to be the matrix $\Sigma$ whose $(i,j)$th entry is equal to $\operatorname{Cov}(X_i, X_j)$. Show that $\Sigma = \mathbb{E}[\mathbf{X}\mathbf{X}^\top]$ if all of the random variables have mean zero. (Note: expectation operates on a matrix or vector of random variables entry-by-entry.) Solution. The definition of matrix multiplication implies that the $(i,j)$th entry of $\mathbf{X}\mathbf{X}^\top$ is equal to $X_i X_j$. Therefore, the $(i,j)$th entry of $\mathbb{E}[\mathbf{X}\mathbf{X}^\top]$ is equal to $\mathbb{E}[X_i X_j]$, which in turn is equal to $\operatorname{Cov}(X_i, X_j)$ since the random variables have zero mean.
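Two of the facts above are easy to sanity-check by simulation: correlation always lands in $[-1, 1]$, and the sample mean of $n$ iid draws with variance $\sigma^2$ has variance $\sigma^2/n$. A small Monte Carlo sketch (all distributions and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlation of a strongly dependent pair still lies in [-1, 1];
# for Y = 2X + noise, the theoretical value is 2/sqrt(5) ~ 0.894.
x = rng.normal(size=50_000)
y = 2.0 * x + rng.normal(size=50_000)
corr = np.corrcoef(x, y)[0, 1]

# Variance of the sample mean of n iid draws: sigma^2 / n = 9 / 25 = 0.36.
n, sigma = 25, 3.0
means = rng.normal(0.0, sigma, size=(20_000, n)).mean(axis=1)
var_of_mean = means.var()
```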
{"url":"https://nl.mathigon.org/course/intro-probability/covariance","timestamp":"2024-11-09T00:08:11Z","content_type":"text/html","content_length":"494091","record_id":"<urn:uuid:162bfa29-f349-4967-bf98-8300e6828f25>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00140.warc.gz"}
Colonies of the arboreal turtle ant create networks of trails that link nests and food sources on the graph formed by branches and vines in the canopy of the tropical forest. Ants put down a volatile pheromone on the edges as they traverse them. At each vertex, the next edge to traverse is chosen using a decision rule based on the current pheromone level. There is a bidirectional flow of ants around the network. In a previous field study, it was observed that the trail networks approximately minimize the number of vertices, thus solving a variant of the popular shortest path problem without any central control and with minimal computational resources. We propose a biologically plausible model, based on a variant of the reinforced random walk on a graph, which explains this observation and suggests surprising algorithms for the shortest path problem and its variants. Through simulations and analysis, we show that when the rate of flow of ants does not change, the dynamics converges to the path with the minimum number of vertices, as observed in the field. The dynamics converges to the shortest path when the rate of flow increases with time, so the colony can solve the shortest path problem merely by increasing the flow rate. We also show that to guarantee convergence to the shortest path, bidirectional flow and a decision rule dividing the flow in proportion to the pheromone level are necessary, but convergence to approximately short paths is possible with other decision rules.
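The pheromone-proportional decision rule described in the abstract can be caricatured in a few lines. This is a deliberately simplified "binary bridge" toy, not the authors' model: one ant per step chooses between a short and a long path in proportion to pheromone, pheromone evaporates, and the deposit per trip is inversely proportional to path length (a crude stand-in for shorter trips feeding back into the flow sooner). All parameter values are illustrative.

```python
import random

def binary_bridge(l_short=2, l_long=4, n_ants=2000, rho=0.02, seed=7):
    """Toy pheromone-reinforced choice between two nest-food paths."""
    tau = [1.0, 1.0]                 # pheromone on (short, long) path
    lengths = [l_short, l_long]
    rng = random.Random(seed)
    for _ in range(n_ants):
        # decision rule: divide the flow in proportion to pheromone
        p_short = tau[0] / (tau[0] + tau[1])
        chosen = 0 if rng.random() < p_short else 1
        tau = [(1.0 - rho) * t for t in tau]   # evaporation (volatility)
        tau[chosen] += 1.0 / lengths[chosen]   # length-weighted deposit
    return tau

tau = binary_bridge()
```

Because the short path receives a larger deposit per trip, the proportional rule creates positive feedback that concentrates pheromone on it over time.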
{"url":"https://par.nsf.gov/search/author:%22Gordon,%20Deborah%20M%22","timestamp":"2024-11-12T20:06:07Z","content_type":"text/html","content_length":"299299","record_id":"<urn:uuid:12c8641a-2e9e-40cb-b146-066ebdb2b1a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00118.warc.gz"}
Elements of dynamic and 2-SAT programming: paths, trees, and cuts
If you make use of this material, you may credit the authors as follows: Bentert Matthias, "Elements of dynamic and 2-SAT programming: paths, trees, and cuts", Universitätsverlag der Technischen Universität Berlin, 2021, DOI: 10.14279/depositonce-11462, License: https:/
This thesis presents faster (in terms of worst-case running times) exact algorithms for special cases of graph problems through dynamic programming and 2-SAT programming. Dynamic programming describes the procedure of breaking down a problem recursively into overlapping subproblems, that is, subproblems with common subsubproblems. Given optimal solutions to these subproblems, the dynamic program then combines them into an optimal solution for the original problem. 2-SAT programming refers to the procedure of reducing a problem to a set of 2-SAT formulas, that is, boolean formulas in conjunctive normal form in which each clause contains at most two literals. Computing whether such a formula is satisfiable (and computing a satisfying truth assignment, if one exists) takes linear time in the formula length. Hence, when satisfying truth assignments to some 2-SAT formulas correspond to a solution of the original problem and all formulas can be computed efficiently, that is, in polynomial time in the input size of the original problem, then the original problem can be solved in polynomial time. We next describe our main results. Diameter asks for the maximal distance between any two vertices in a given undirected graph. It is arguably among the most fundamental graph parameters. We provide both positive and negative parameterized results for distance-from-triviality-type parameters and parameter combinations that were observed to be small in real-world applications. In Length-Bounded Cut, we search for a bounded-size set of edges that intersects all paths between two given vertices of at most some given length.
We confirm a conjecture from the literature by providing a polynomial-time algorithm for proper interval graphs which is based on dynamic programming. k-Disjoint Shortest Paths is the problem of finding (vertex-)disjoint paths between given vertex terminals such that each of these paths is a shortest path between the respective terminals. Its complexity for constant k > 2 has been an open problem for over 20 years. Using dynamic programming, we show that k-Disjoint Shortest Paths can be solved in polynomial time for each constant k. The problem Tree Containment asks whether a phylogenetic tree T is contained in a phylogenetic network N. A phylogenetic network (or tree) is a leaf-labeled single-source directed acyclic graph (or tree) in which each vertex has in-degree at most one or out-degree at most one. The problem stems from computational biology in the context of the tree of life (the history of speciation). We introduce a particular variant that resembles certain types of uncertainty in the input. We show that if each leaf label occurs at most twice in a phylogenetic tree N, then the problem can be solved in polynomial time and if labels can occur up to three times, then the problem becomes NP-hard. Lastly, Reachable Object is the problem of deciding whether there is a sequence of rational trades of objects among agents such that a given agent can obtain a certain object. A rational trade is a swap of objects between two agents where both agents profit from the swap, that is, they receive objects they prefer over the objects they trade away. This problem can be seen as a natural generalization of the well-known and well-studied Housing Market problem where the agents are arranged in a graph and only neighboring agents can trade objects. 
We prove a dichotomy result that states that the problem is polynomial-time solvable if each agent prefers at most two objects over its initially held object and it is NP-hard if each agent prefers at most three objects over its initially held object. We also provide a polynomial-time 2-SAT program for the case where the graph of agents is a cycle.
Graph Diameter, Shortest Paths Computation, Flow And Cut Problems, Resource Allocation, Computational Biology
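The linear-time 2-SAT subroutine the thesis builds on is usually implemented via the implication graph: each clause (a OR b) contributes the implications (NOT a → b) and (NOT b → a), and the formula is satisfiable iff no variable shares a strongly connected component with its negation. A generic sketch (not code from the thesis), using Kosaraju's two-pass SCC algorithm:

```python
def solve_2sat(n, clauses):
    """Decide satisfiability of a 2-CNF over variables 1..n.

    Each clause is a pair (a, b) of non-zero ints: k means x_k, -k means
    NOT x_k. Returns a satisfying assignment (list of bools) or None.
    Runs in time linear in the formula length.
    """
    def node(lit):  # x_v -> 2(v-1), NOT x_v -> 2(v-1)+1
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    N = 2 * n
    g = [[] for _ in range(N)]   # implication graph
    gr = [[] for _ in range(N)]  # its transpose
    for a, b in clauses:         # (a OR b) == (~a -> b) AND (~b -> a)
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            g[u].append(v)
            gr[v].append(u)

    # Pass 1: iterative DFS, vertices listed in order of increasing finish time.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(g[v]):
                stack.append((v, i + 1))
                w = g[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)

    # Pass 2: sweep the transpose in reverse finish order; components are
    # numbered in topological order of the condensation.
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in gr[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    # Unsatisfiable iff some variable and its negation share an SCC;
    # otherwise, a literal is true iff its component comes later topologically.
    if any(comp[2 * v] == comp[2 * v + 1] for v in range(n)):
        return None
    return [comp[2 * v] > comp[2 * v + 1] for v in range(n)]
```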
{"url":"https://es.jecasa-ltd.com/book/joab-1875/elements-of-dynamic-and-2-sat-programming%3A-paths%2C-trees%2C-and-cuts","timestamp":"2024-11-12T06:54:04Z","content_type":"text/html","content_length":"1050386","record_id":"<urn:uuid:076ad51b-4bff-4350-bbdb-331cbae1e0cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00744.warc.gz"}
DRBEM Solution for Unsteady Natural Convection Flow in Primitive Variables with Fractional Step Time Advancement
9th International Conference on Mathematical Problems in Engineering, Aerospace and Sciences, Vienna, Austria, 10 - 14 July 2012, vol.1493, pp.871-877
• Publication Type: Conference Paper / Full Text
• Volume: 1493
• Doi Number: 10.1063/1.4765590
• City: Vienna
• Country: Austria
• Page Numbers: pp.871-877
• Middle East Technical University Affiliated: Yes
In this study, the two-dimensional, transient flow of an incompressible, laminar, viscous fluid in a cavity is considered in the presence of heat flux (the temperature is not constant). The governing equations, which are the continuity, momentum and energy equations, describe natural convection in differentially heated cavities in terms of primitive variables (velocities, temperature, and pressure). The no-slip condition for the velocity is imposed on the cavity walls. The left and right walls are heated and cooled, respectively, and the top and bottom walls are adiabatic. The dual reciprocity boundary element method (DRBEM) is employed for solving the natural convection flow equations, utilizing a fractional step for the time derivatives. This uncouples the velocities from the pressure. Then, the predicted velocity and the pseudo-pressure equations, together with the energy equation, are all solved by using DRBEM with constant elements. DRBEM transforms the differential equations directly into boundary integral equations, and thus only the boundary of the problem has to be discretized. This saves considerable computational work. Velocities and the pressure are obtained iteratively in the time direction with a predictor-corrector scheme. Temperature is also obtained in the iteration procedure by using a relaxation parameter between consecutive time levels. In the iterative procedure the nonlinear convective terms are approximated explicitly from the two previous steps.
The present numerical procedure gives quite accurate results for Rayleigh number values up to 10^4. It has the advantage of treating the primitive unknowns directly and obtaining the pressure field as well. Since the time derivatives are discretized at the beginning of the procedure, the solution is obtained iteratively at all transient levels, and also at steady state, with a considerably large time increment compared to other explicit time integration procedures. The proposed numerical scheme is also computationally cheap, since DRBEM discretizes only the boundary of the region, resulting in small systems compared to other domain-type numerical methods.
{"url":"https://avesis.metu.edu.tr/yayin/6c12067d-d69b-4ea4-81ed-0943eba6f610/drbem-solution-for-unsteady-natural-convection-flow-in-primitive-variables-with-fractional-step-time-advancement","timestamp":"2024-11-12T05:45:47Z","content_type":"text/html","content_length":"52229","record_id":"<urn:uuid:34c1a840-16df-4f97-b7aa-79c740fc55b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00389.warc.gz"}
Business Mathematics: A Textbook
by Edward I. Edgerton, Wallace E. Bartholomew
Publisher: The Ronald Press Co. 1921
ISBN/ASIN: B002WUJN7M
Number of pages: 331
The course in applied mathematics outlined in this book is much more advanced and thorough than the usual course in commercial arithmetic. The attempt has been made to construct a practical course which will contain all the essential mathematical knowledge required in a business career, either as employee, manager, or employer.
Download or read it online for free here: Download link (multiple formats)
Similar books
Applied Mathematics by Example (Jeremy Pickles, BookBoon): This book approaches the subject from an oft-neglected historical perspective. A particular aim is to make accessible to students Newton's vision of a single system of law governing the falling of an apple and the orbital motion of the moon.
Computational Modeling and Mathematics Applied to the Physical Sciences (National Academy Press): We examine several deep theoretical problems, including turbulence and combustion. At the frontiers of attack on these problems we discover the limitations imposed by our current understanding of model formulation and computational capability.
Lectures On Approximation By Polynomials (J.G. Burkill, Tata Institute of Fundamental Research): From the table of contents: Weierstrass's Theorem; The Polynomial of Best Approximation; Chebyshev Polynomials; Approximations to abs(x); Trigonometric Polynomials; Inequalities, etc.; Approximation in Terms of Differences.
Introductory Maths for Chemists (J. E. Parker, Bookboon): This volume teaches Maths from a 'chemical' perspective and is the first of a three-part series of texts taken during a first-year university course. It is the Maths required by a Chemist, or Chemical Engineer, Chemical Physicist, Biochemist, ...
{"url":"https://www.e-booksdirectory.com/details.php?ebook=10321","timestamp":"2024-11-02T03:15:29Z","content_type":"text/html","content_length":"11234","record_id":"<urn:uuid:95df80dd-068d-4fda-af0f-8c6a9ce9a9a0>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00431.warc.gz"}
Mathematics and Computer Science
Maria Serafina MADONIA
Associate Professor of Informatics [INFO-01/A]
Maria Serafina Madonia, born in Palermo on 31 December 1962, has been an associate professor (S.S.D. INF/01) at the Department of Mathematics and Computer Science (DMI) of the University of Catania since 1 January 2022. From May 1996 to December 2021 she was a confirmed researcher (S.S.D. INF/01) at the same department. In 1985, she graduated in Mathematics (applied track) at the University of Palermo and, in 1991, she obtained a PhD in Mathematics (III cycle - consortium of the Universities of Palermo, Catania and Messina). She teaches in Bachelor's and Master's degree courses in Computer Science and carries out research in theoretical computer science. Her research lies in the field of formal language theory; more precisely, she is interested in algebraic and combinatorial problems in the theories of automata, two-dimensional languages, and codes. In particular, she has addressed the following topics:
- Recognizable two-dimensional languages
- Picture codes
- Automata on unary alphabets
- "Covering" of words
- Z-monoids and z-codes
- Rational relations
- Reducibility of binary trees
The complete list of publications is available at the link http://www.dmi.unict.it/madonia/ricerche.html. She has worked, and still works, as a scientific reviewer for numerous international journals and for numerous national and international conferences. From 1998 to today she has participated in various research projects funded by MIUR, INDAM and the University of Catania.
{"url":"https://dmi.unict.it/faculty/maria.serafina.madonia","timestamp":"2024-11-07T12:32:40Z","content_type":"text/html","content_length":"30730","record_id":"<urn:uuid:394ae2ee-69b5-4402-90b5-a6d8750f113f>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00552.warc.gz"}
TR06-029 | 21st February 2006 00:00 Eisenberg-Gale Markets: Rationality, Strongly Polynomial Solvability, and Competition Monotonicity We study the structure of EG[2], the class of Eisenberg-Gale markets with two agents. We prove that all markets in this class are rational and they admit strongly polynomial algorithms whenever the polytope containing the set of feasible utilities of the two agents can be described via a combinatorial LP. This helps resolve positively the status of two markets left as open problems by [JV]: the capacity allocation market in a directed graph with two source-sink pairs and the network coding market in a directed network with two sources. Our algorithms for solving the corresponding nonlinear convex programs are fundamentally different from those obtained by [JV]; whereas they use the primal-dual schema, we use a carefully constructed binary search. We also settle a third open problem of [JV], that of determining whether the notion of competition monotonicity characterizes the class of SUA markets within UUA markets. We give a positive resolution of this problem as well.
{"url":"https://eccc.weizmann.ac.il/report/2006/029/","timestamp":"2024-11-12T13:51:23Z","content_type":"application/xhtml+xml","content_length":"21445","record_id":"<urn:uuid:9f0b1088-0a3f-4744-b923-8fb48ac9a323>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00011.warc.gz"}
(a) Explain the two sources of magnetic moments for electrons. (b) Do all electrons have a net magnetic moment? Why or why not? (c) Do all atoms have a net magnetic moment? Why or why not?
Short Answer
Question: Explain the two sources of magnetic moments for electrons and discuss whether all electrons and atoms have a net magnetic moment.
Answer: There are two main sources of magnetic moments for electrons: (1) the orbital magnetic moment, which originates from the electron's orbital motion around the nucleus, and (2) the spin magnetic moment, which arises from the intrinsic property of the electron known as spin. Not all electrons have a net magnetic moment, as the total magnetic moment may be zero in some cases due to the cancelation of orbital and spin magnetic moments. Similarly, not all atoms have a net magnetic moment. Atoms with an even number of paired electrons typically do not have a net magnetic moment, while atoms with unpaired electrons possess a net magnetic moment.
Step by step solution
Part (a): Sources of Magnetic Moments for Electrons
There are two main sources of magnetic moments for electrons:
1. Orbital Magnetic Moment: This magnetic moment originates from the electron's orbital motion around the nucleus. As the electron moves around the nucleus, it creates a current loop that produces a magnetic field. The orbital magnetic moment is proportional to the angular momentum of the electron and can be expressed as: \(\mu_L = -\frac{e}{2m}L\), where \(\mu_L\) is the orbital magnetic moment, \(e\) is the electron charge, \(m\) is the mass of the electron, and \(L\) is the orbital angular momentum.
2. Spin Magnetic Moment: The second source of the magnetic moment is the intrinsic property of the electron known as spin. Electrons behave like tiny spinning charged particles, and their spin creates a magnetic field.
The spin magnetic moment is also proportional to the electron's spin angular momentum and can be expressed as: \(\mu_S = -g_s \frac{e}{2m}S\) Where \(\mu_S\) is the spin magnetic moment, \(g_s\) is the electron spin g-factor (\(g_s \approx 2\)), and S is the spin angular momentum. Part (b): Do All Electrons Have a Net Magnetic Moment? Not all electrons have a net magnetic moment. The total magnetic moment of an electron is the vector sum of its orbital magnetic moment and spin magnetic moment: \(\mu = \mu_L + \mu_S\) In some atoms or atomic configurations, the total magnetic moment of electrons can be zero due to the cancelation of the orbital and spin magnetic moments. This can happen when electrons are paired in an atomic orbital, as the magnetic moments of the two electrons with opposite spins will cancel each other out. In such cases, there is no net magnetic moment for the electron pair. Part (c): Do All Atoms Have a Net Magnetic Moment? Not all atoms have a net magnetic moment. Atoms that have an even number of electrons with electrons in pairs (such as noble gases) typically have no net magnetic moment since the individual magnetic moments of the electrons cancel each other out. Additionally, the total angular momentum (sum of orbital and spin angular momentum of electrons) of the atom is often zero for these atom configurations, leading to no net magnetic moment. However, atoms with unpaired electrons, such as transition metal ions and some rare earth elements, possess a net magnetic moment, making them magnetic. The magnetic properties of materials are due to the collective behavior of atoms with net magnetic moments.
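The natural scale for both moments above is the Bohr magneton \(\mu_B = e\hbar/2m\). A quick back-of-envelope computation with rounded CODATA constants shows its magnitude and the corresponding z-component of the spin moment for \(m_s = 1/2\) with \(g_s \approx 2\):

```python
# CODATA values, rounded
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
hbar = 1.054571817e-34   # reduced Planck constant, J*s

# Bohr magneton: mu_B = e*hbar / (2*m_e) ~ 9.274e-24 J/T
mu_B = e * hbar / (2 * m_e)

# z-component of the spin moment: mu_z = g_s * mu_B * m_s ~ mu_B
g_s = 2.0
mu_z = g_s * mu_B * 0.5
```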
{"url":"https://www.vaia.com/en-us/textbooks/physics/materials-science-and-engineering-an-introduction-9-edition/chapter-20/problem-4-a-explain-the-two-sources-of-magnetic-moments-for-/","timestamp":"2024-11-14T15:17:40Z","content_type":"text/html","content_length":"256766","record_id":"<urn:uuid:44bb266c-5796-42b1-89ad-9619fd651bdf>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00073.warc.gz"}
How to Calculate Torque Motor for Optimal Performance
Torque is a fundamental concept in mechanical and electrical engineering. It plays a key role in motor applications. Understanding how to calculate motor torque is essential to designing efficient and effective motor drive systems. This guide will walk you through the basics of motor torque and show you how to perform the calculations.
1. Understanding Torque
Torque represents the rotational force produced by a motor. It determines how much force a motor can apply to turn an object. The standard unit for measuring torque is the Newton-meter (Nm), but it is also commonly expressed in pound-feet (lb-ft) in some regions. Essentially, torque measures how efficiently a motor performs work, especially in applications that require rotational motion.
2. Basics of Motor Torque
Motor torque is affected by a number of factors, including the motor's power output and operating speed. The basic formula for calculating torque (T) in Newton-meters is T = P/ω, where P is the output power in watts and ω is the angular speed in radians per second. To apply this formula, you need to convert the motor speed from revolutions per minute (RPM) to radians per second using the conversion factor ω = RPM × (2π/60). This step is critical because it adjusts RPM, a common motor speed measurement, to the angular velocity used in the torque formula. For example, if the motor has an output power of 100 watts and runs at 3000 RPM, first convert the RPM: ω = 3000 × (2π/60) = 314.16 rad/s. Then calculate the torque: T = 100/314.16 ≈ 0.32 Nm. This calculation provides a theoretical torque value under ideal operating conditions. Realistic factors such as motor efficiency and mechanical losses may require adjustments to these calculations.
3. Step-by-Step Calculation of Torque
Calculating torque for motors involves understanding how to handle different motor specifications and operational conditions. Let's break down the steps:
Determine Motor Power: This is typically given in watts (W). Check the motor specifications for this value.
Obtain Motor Speed: Find the motor speed in RPM from the specifications.
Convert RPM to Radians Per Second: Use the conversion formula ω = RPM × (2π/60). This adjusts the speed into the angular velocity necessary for the torque calculation.
Calculate Torque: Apply the formula T = P/ω. Substitute the power and the converted speed into the formula to find the torque.
Example: If a motor has a power of 200 watts and a speed of 1500 RPM:
Convert speed: ω = 1500 × (2π/60) = 157.08 rad/s
Calculate torque: T = 200/157.08 ≈ 1.27 Nm
4. Factors Affecting Motor Torque
When calculating torque, consider the following factors to ensure accuracy:
• Motor Efficiency: Not all electrical energy supplied to a motor is converted to mechanical energy due to losses such as heat and friction. Efficiency ratings can be used to adjust the power value in the torque formula.
• Gear Reduction: If the motor uses gears, this can change the effective torque output under load. Gears can increase torque but reduce speed proportionally.
• Load Characteristics: The nature of the load (constant, variable, peak load) affects the required torque and motor selection.
5. Real-World Applications and Examples
Understanding how to calculate torque is critical for a variety of applications:
• Robotics: Ensure that the motor has enough torque to handle robotic arm movements without stalling or losing precision.
• Automotive: In electric vehicles, accurately calculated torque ensures optimal performance and battery efficiency.
• Industrial Machinery: Correct torque calculations ensure machinery operates within safe and efficient parameters, preventing mechanical failure.
• Consumer Electronics: In devices such as autonomous vacuum cleaners, accurate torque calculations help design efficient and durable products.
6. Common Mistakes and Misunderstandings
• Ignoring Efficiency: Many people forget to consider motor efficiency. This can lead to an overestimation of the actual available torque.
• Ignoring Environmental Factors: Conditions such as temperature and mechanical resistance can affect motor performance. This can be easily overlooked in calculations. • Misunderstanding Gear Effects: Gears can significantly change torque output. Not properly considering gear ratios can result in a mismatch between expected and actual performance. • Confusing Power Ratings: Peak and Continuous Power Ratings can be different. Using the wrong power rating can result in under-calculation of torque. Mastering how to calculate torque for motors is crucial for optimizing machine performance. This guide has offered a clear path to understand and apply torque calculations effectively. Remember, correct torque calculations enhance the efficiency and reliability of your machines. Always consider motor specifics and operational conditions for accurate results. To delve deeper into this topic or refine your skills, seek additional resources and tools. Your mastery of calculating motor torque can significantly impact the success of your engineering projects.
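The step-by-step procedure above, together with the efficiency caveat, can be wrapped into a small helper; the function name and the 85% efficiency figure are illustrative:

```python
import math

def motor_torque_nm(power_w, rpm, efficiency=1.0):
    """Shaft torque in N*m: T = (efficiency * P) / omega, omega in rad/s."""
    omega = rpm * 2.0 * math.pi / 60.0   # convert RPM to rad/s
    return efficiency * power_w / omega

# The two worked examples from the text:
t1 = motor_torque_nm(100, 3000)   # ~0.32 N*m
t2 = motor_torque_nm(200, 1500)   # ~1.27 N*m

# With an assumed 85% motor efficiency, usable torque drops accordingly:
t2_eff = motor_torque_nm(200, 1500, efficiency=0.85)
```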
{"url":"https://www.cskmotions.com/blogs/news/how-to-calculate-torque-motor-for-optimal-performance","timestamp":"2024-11-03T07:33:24Z","content_type":"text/html","content_length":"169863","record_id":"<urn:uuid:c2cd1a45-97e6-42e7-a952-bf9ef2166b26>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00619.warc.gz"}
10 Years of NASA's Pi Day Challenge
Teachable Moment · 5 min read
This year marks the 10th installment of the NASA Pi Day Challenge. Celebrated on March 14, Pi Day is the annual holiday that pays tribute to the mathematical constant pi – the number that results from dividing any circle's circumference by its diameter. Every year, Pi Day gives us a reason to celebrate the mathematical wonder that helps NASA explore the universe and enjoy our favorite sweet and savory pies. Students can join in the fun once again by using pi to explore Earth and space themselves in the NASA Pi Day Challenge. Read on to learn more about the science behind this year's challenge and find out how students can put their math mettle to the test to solve real problems faced by NASA scientists and engineers as we explore Earth, Mars, asteroids, and beyond! Dividing any circle’s circumference by its diameter gives you an answer of pi, which is usually rounded to 3.14. Because pi is an irrational number, its decimal representation goes on forever and never repeats. In 2022, mathematician Simon Plouffe discovered a formula to calculate any single digit of pi. In the same year, teams around the world used cloud computing technology to calculate pi to 100 trillion digits. But you might be surprised to learn that for space exploration, NASA uses far fewer digits of pi. Here at NASA, we use pi to measure the area of telescope mirrors, determine the composition of asteroids, and calculate the volume of rock samples. But pi isn’t just used for exploring the cosmos. Since pi can be used to find the area or circumference of round objects and the volume or surface area of shapes like cylinders, cones, and spheres, it is useful in all sorts of ways. Transportation teams use pi when determining the size of new subway tunnels. Electricians can use pi when calculating the current or voltage passing through circuits.
And you might even use pi to figure out how much fencing is needed around a circular school garden bed. In the United States, March 14 can be written as 3.14, which is why that date was chosen for celebrating all things pi. In 2009, the U.S. House of Representatives passed a resolution officially designating March 14 as Pi Day and encouraging teachers and students to celebrate the day with activities that teach students about pi. And that's precisely what the NASA Pi Day Challenge is all about.
The Science Behind the 2023 NASA Pi Day Challenge
This 10th installment of the NASA Pi Day Challenge includes four noodle-nudgers that get students using pi to calculate the amount of rock sampled by the Perseverance Mars rover, the light-collecting power of the James Webb Space Telescope, the composition of asteroid (16) Psyche, and the type of solar eclipse we can expect in October. Read on to learn more about the science and engineering behind each problem or click the link below to jump right into the challenge. NASA’s Mars rover, Perseverance, was designed to collect rock samples that will eventually be brought to Earth by a future mission. Sending objects from Mars to Earth is very difficult and something we've never done before. To keep the rock cores pristine on the journey to Earth, the rover hermetically seals them inside a specially designed sample tube. Once the samples are brought to Earth, scientists will be able to study them more closely with equipment that is too large to make the trip to Mars. In Tubular Tally, students use pi to determine the volume of a rock sample collected in a single tube. When NASA launched the Hubble Space Telescope in 1990, scientists hoped that the telescope, with its large mirror and sensitivity to ultraviolet, visible, and near-infrared light, would unlock secrets of the universe from an orbit high above the atmosphere. Indeed, their hope became reality.
Hubble’s discoveries, which are made possible in part by its mirror, rewrote astronomy textbooks. In 2022, the next great observatory, the James Webb Space Telescope, began exploring the infrared universe with an even larger mirror from a location beyond the orbit of the Moon. In Rad Reflection, students use pi to gain a new understanding of our ability to peer deep into the cosmos by comparing the area of Hubble’s primary mirror with the one on Webb. Orbiting the Sun between Mars and Jupiter, the asteroid (16) Psyche is of particular interest to scientists because its surface may be metallic. Earth and other terrestrial planets have metal cores, but they are buried deep inside the planets, so they are difficult to study. By sending a spacecraft to study Psyche up close, scientists hope to learn more about terrestrial planet cores and our solar system’s history. That's where NASA's Psyche comes in. The mission will use specialized tools to study Psyche's composition from orbit. Determining how much metal exists on the asteroid is one of the key objectives of the mission. In Metal Math, students will do their own investigation of the asteroid's makeup, using pi to calculate the approximate density of Psyche and compare that to the density of known terrestrial materials. On Oct. 14, 2023, a solar eclipse will be visible across North and South America, as the Moon passes between Earth and the Sun, blocking the Sun's light from our perspective. Because Earth’s orbit around the Sun and the Moon’s orbit around Earth are not perfect circles, the distances between them change throughout their orbits. Depending on those distances, the Sun's disk area might be fully or only partially blocked during a solar eclipse. In Eclipsing Enigma, students get a sneak peek at what to expect in October by using pi to determine how much of the Sun’s disk will be eclipsed by the Moon and whether to expect a total or annular eclipse. 
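Each of the challenge problems above reduces to a standard circle or sphere formula. The sketch below runs those formulas with stand-in numbers: the Hubble (2.4 m) and Webb (6.5 m) mirror diameters are commonly cited figures, but the rock-core and asteroid values are placeholders, not the official challenge inputs.

```python
import math

# All input values below are illustrative stand-ins, NOT the official
# NASA Pi Day Challenge numbers (except Hubble's 2.4 m and Webb's
# 6.5 m mirror diameters, which are commonly cited figures).

# Tubular Tally: volume of a cylindrical rock core.
core_diameter_cm = 1.3          # assumed tube inner diameter
core_length_cm = 6.0            # assumed core length
core_volume = math.pi * (core_diameter_cm / 2) ** 2 * core_length_cm

# Rad Reflection: light-collecting power scales with mirror area.
hubble_d_m, webb_d_m = 2.4, 6.5
area_ratio = (webb_d_m / hubble_d_m) ** 2   # Webb area / Hubble area

# Metal Math: mean density of a sphere-approximated asteroid.
radius_m = 110e3                # assumed mean radius
mass_kg = 2.3e19                # assumed mass
density = mass_kg / ((4 / 3) * math.pi * radius_m ** 3)  # kg/m^3

print(round(core_volume, 2), round(area_ratio, 1), round(density))
```

With these placeholder inputs the density lands near that of rock-metal mixtures, which is the kind of comparison the Metal Math problem asks students to make.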
Celebrate Pi Day by getting students thinking like NASA scientists and engineers to solve real-world problems in the NASA Pi Day Challenge. In addition to solving this year’s challenge, you can also dig into the more than 30 puzzlers from previous challenges available in our Pi Day collection. Completing the problem set and reading about other ways NASA uses pi is a great way for students to see the importance of the M in STEM. About the Author Lyle Tavernier Educational Technology Specialist, NASA-JPL Education Office Lyle Tavernier is an educational technology specialist at NASA's Jet Propulsion Laboratory. When he’s not busy working in the areas of distance learning and instructional technology, you might find him running with his dog, cooking or planning his next trip. Teachable Moment Last Updated: Oct. 12, 2024
The Tail of a Sequence of Real Numbers

We will now look at an important aspect of a sequence known as the tail of a sequence.

Definition: Let $(a_n) = (a_1, a_2, ... )$ be a sequence of real numbers. Then for any $m \in \mathbb{N}$, the $m$-Tail of $(a_n)$ is the subsequence $(a_{m+1}, a_{m+2}, ... ) = (a_{m+n} : n \in \mathbb{N})$.

Recall that a sequence $(a_n)_{n=1}^{\infty}$ converges to the real number $L$, written $\lim_{n \to \infty} a_n = L$, if $\forall \epsilon > 0$ there exists a natural number $N \in \mathbb{N}$ such that if $n \geq N$ then $\mid a_n - L \mid < \epsilon$. For any given positive $\epsilon$ we can therefore find an $m$-tail of $(a_n)$ all of whose terms are within an $\epsilon$-distance of the limit $L$.

The following theorem tells us that the $m$-tail of a sequence converges to the limit $L$ if and only if the parent sequence $(a_n)$ converges to $L$.

Theorem 1: Let $(a_n)$ be a sequence of real numbers. Then $(a_n)$ converges to $L$ if and only if for any $m \in \mathbb{N}$ the $m$-tail of $(a_n)$ converges to $L$.
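A quick numerical illustration (not a proof) of the definition: for the sequence $a_n = 1/n$ with limit $L = 0$, once $m$ is large enough that $1/m < \epsilon$, every sampled term of the $m$-tail lies within $\epsilon$ of the limit.

```python
# Numeric illustration: a_n = 1/n converges to L = 0, so for
# epsilon = 0.01 the 100-tail (a_101, a_102, ...) stays within
# epsilon of the limit.
L = 0.0
epsilon = 0.01
m = 100  # 1/m = 0.01, so every later term is strictly smaller

tail = [1 / n for n in range(m + 1, m + 10001)]  # finite window of the tail
assert all(abs(a - L) < epsilon for a in tail)
print("all sampled tail terms within", epsilon, "of the limit")
```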
Discrete warping restraint – Consteel Knowledge base

June 11, 2021 · 5 min read

Theoretical background

According to beam-column theory, two types of torsional effects exist.

The Saint-Venant torsional component: some closed thin-walled cross-sections produce only uniform St. Venant torsion if subjected to torsion. For these, only the shear stress τ_t occurs.

Fig. 1: rotated section [1]

The non-uniform torsional component: open cross-sections might also produce normal stresses as a result of torsion [1].

Fig. 2: effect of the warping in a thin-walled open section [1]

Warping causes in-plane bending moments in the flanges. From these bending moments arise both shear and normal stresses, as can be seen in Fig. 2 above.

Discrete warping restraint

The load-bearing capacity of a thin-walled open section against lateral-torsional buckling can be increased by improving the section's warping stiffness. This can be done by adding additional stiffeners to the section at the right locations, which reduce the relative rotation between the flanges thanks to the torsional stiffness of the stiffener. In Consteel, such a stiffener can be added to a Superbeam using the special Stiffener tool. Consteel will automatically create a warping support at the position of the stiffener, the stiffness of which is calculated using the formulas below. Of course, a warping support can also be defined manually by specifying the correct stiffness value, calculated with the same formulas (see literature [3]).
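As background for the stiffness formulas, the Saint-Venant torsion constant of a thin-walled open section is commonly approximated as I_t ≈ (1/3) Σ bᵢtᵢ³ over the plate segments. The sketch below applies this textbook approximation to the welded I-section of the numerical example further down; it is a rough hand calculation, not Consteel's internal method.

```python
# Thin-walled open-section approximation of the Saint-Venant torsion
# constant: I_t ~ (1/3) * sum(b_i * t_i**3) over the plate segments.
# Dimensions are the welded I-section from the numerical example
# (flanges 200x10 mm, web 300x10 mm). Textbook sketch only; the
# restraint stiffness R_omega additionally involves G and h.

def torsion_constant(segments):
    """segments: iterable of (width_mm, thickness_mm) plate strips."""
    return sum(b * t ** 3 / 3 for b, t in segments)

I_t = torsion_constant([(200, 10), (300, 10), (200, 10)])  # mm^4
print(I_t)  # ~2.33e5 mm^4
```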
The following types of stiffeners can be used:

• Web stiffeners
• T-stiffener
• L-stiffener
• Box stiffener
• Channel stiffener

The general formula which can be used to determine the stiffness of the discrete warping restraint uses the following quantities:

R_ω = the stiffness of the discrete warping restraint
G = shear modulus
GI_t = the Saint-Venant torsional stiffness of the stiffener
h = height of the stiffener

Effect of the different stiffener types

Web stiffener
b = width of the web stiffener [mm]
t = thickness of the web stiffener [mm]
h = height of the web stiffener [mm]
Fig. 3: web stiffener

T-stiffener
b_1 = width of the battens [mm]
t_1 = thickness of the battens [mm]
b_2 = width of the web stiffener [mm]
t_2 = thickness of the web stiffener [mm]
h = height of the web stiffener [mm]
Fig. 4: T-stiffener

L-stiffener
b = width of the L-section [mm]
t = thickness of the L-section [mm]
h = height of the L-section [mm]
Fig. 5: L-stiffener

Channel stiffener
b_1 = width of channel web [mm]
t_1 = thickness of channel web [mm]
b_2 = width of channel flange [mm]
t_2 = thickness of channel flange [mm]
h = height of the web stiffener [mm]
Fig. 6: Channel stiffener

Numerical example

The following example shows the increase of the lateral-torsional buckling resistance of a simply supported structural beam strengthened with box stiffeners. The effect of such additional plates is clearly visible when shell finite elements are used.

Shell model

Fig. 7 shows a simply supported (fork supports) structural member with a welded cross-section, modeled with shell finite elements and subjected to a uniform load along the member length acting at the level of the top flange. Table 1 and Table 2 contain the geometric parameters and material properties of the doubly symmetric I-section. The total length of the beam member is 5000 mm, and the eccentricity of the line load is 150 mm in direction z.

Fig. 7: simply supported, doubly symmetric structural member modeled by shell elements

Table 1: geometric parameters
Width of the top flange [mm]: 200
Thickness of the top flange [mm]: 10
Web height [mm]: 300
Web thickness [mm]: 10
Width of the bottom flange [mm]: 200
Thickness of the bottom flange [mm]: 10

Table 2: material properties
Elastic modulus [N/mm^2]: 200
Poisson ratio [-]: 10
Yield strength [N/mm^2]: 300

Box stiffener

The box stiffeners are located near the supports, as can be seen in Fig. 8. Table 3 contains the geometric parameters of the box stiffeners.

Fig. 8: the structural shell member with added box stiffeners

Table 3: geometric parameters of the box stiffeners
Width of the web stiffener [mm]: 100
Thickness of the battens [mm]: 100
Total width of the box stiffener [mm]: 200
Height of the plates [mm]: 300
Thickness of the plates [mm]: 10

7DOF beam model

The same effect in a model using 7DOF beam finite elements can be obtained when discrete warping spring supports are defined at the location of the box stiffeners.

Fig. 9: beam member supported with fork supports and loaded with eccentric uniform load

Discrete warping stiffness calculated by hand
gravity energy storage construction cost analysis

• Gravity energy storage systems: Energy systems are rapidly and permanently changing, and with increased low carbon generation there is an expanding need for dynamic, long-life energy storage to ensure stable supply. Gravity energy storage systems, using weights lifted and lowered by electric winches to store energy, have great potential to deliver valuable energy storage

• The structure and control strategies of hybrid solid gravity energy storage: In this paper, we propose a hybrid solid gravity energy storage system (HGES), which realizes the complementary advantages of energy-based energy storage (gravity energy storage) and power-based energy storage (e.g., supercapacitor) and has a promising future application. First, we investigate various possible system structure

• Optimal capacity configuration of the wind-photovoltaic-storage hybrid power system based on gravity energy storage: Due to the site selection and construction scale, the existing energy storage systems (ESS) such as battery energy storage system (BESS) and compressed air energy storage system (CAES) are limited. Gravity energy storage system (GESS), as a unique energy storage way, can depend on the mountain, which is a natural

• Gravity energy storage: In this design, pioneered by the California-based company Advanced Rail Energy Storage (ARES) in 2010 (ARES North America, The Power of Gravity, n.d.; Letcher, 2016), the excess power of the renewable plants or off-peak electricity of the grid is used to lift some heavy masses (concrete blocks here) by a

• Solid gravity energy storage: A review: Large-scale energy storage technology is crucial to maintaining a high-proportion renewable energy power system stability and addressing the energy crisis and environmental problems. Solid gravity energy storage technology (SGES) is a

• Research Status and Prospect Analysis of Gravity Energy Storage: It is estimated that the total amount of energy storage is 817 billion kilowatt-hours. The piston pump system was proposed by Heindl Energy, Gravity Power and EscoVale in 2016. It uses the gravity potential energy of a piston to form water pressure in a well-sealed channel for energy storage and release.

• (PDF) Types, applications and future developments of gravity energy storage: This paper firstly presents the types of gravity energy storage and analyzes various technical routes. Secondly, analysis is given to the practical applications of gravity energy storage in real

• Financial and economic modeling of large-scale gravity energy storage: A lifecycle cost analysis of differently sized gravity energy storage systems coupled to a wind farm has been performed in Ref. [31]. After reviewing the existing literature, it could be perceived that most studies examine the technical and economic performance while ignoring the financial performance indicators.

• Design optimisation and cost analysis of linear vernier electric machine-based gravity energy storage: energy capacity and duration. The economic comparison can be made by using various methods, such as a levelised cost of storage (LCOS) analysis [7], [8]. In recent years, gravity energy storage using

• (PDF) Types, applications and future developments of gravity energy storage: Among different forms of stored energy, gravity energy storage, as a kind of physical energy storage with competitive environmental protection and economy, has received wide attention

• Solid gravity energy storage: A review: Solid gravity energy storage technology (SGES) is a promising mechanical energy storage technology suitable for large-scale applications. However, no systematic summary of this technology's research and application progress has been seen. Therefore, the basic concept of SGES is introduced and a bibliometric study between 2010 and 2021 is conducted.

• Potential of different forms of gravity energy storage: This paper conducts a comparative analysis of four primary gravity energy storage forms in terms of technical principles, application practices, and potentials. These forms include Tower Gravity Energy Storage (TGES), Mountain Gravity Energy Storage (MGES), Advanced Rail Energy Storage (ARES), and Shaft Gravity Energy Storage

• (PDF) Gravitational Energy Storage With Weights: High-level schematic diagrams for weight-based gravitational energy storage system designs proposed by (a) Gravity Power, (b) Gravitricity, (c) Energy

• Energy Storage Cost and Performance Database | PNNL: Cost and performance metrics for individual technologies track the following to provide an overall cost of ownership for each technology: cost to procure, install, and connect an energy storage system; associated Levelized Cost of Storage

• Levelized Cost of Storage – Gravity Storage: Results – LCOS values for Gravity Storage. Levelized cost of storage for Gravity Storage systems decreases as a function of system size. While systems of 1 GWh energy storage capacity and 125 MW power capacity discharge electricity at 204 US$/MWh, systems of 5 GWh and 625 MW discharge electricity at 113 US$/MWh, and systems of 10 GWh and

• Optimal sizing and deployment of gravity energy storage system: The construction cost of the system reaches 6.7 M€, with the piston representing the largest cost share. The cost of electricity was found equal to 0.19 €/kWh. In addition, this work

• Energies | Free Full-Text | Underground Gravity Energy Storage: A Solution for Long-Term Energy Storage: UGES offers weekly to pluriannual energy storage cycles with energy storage investment costs of about 1 to 10 USD/kWh. The technology is estimated to have a global energy storage potential of 7 to 70 TWh and can support sustainable development, mainly by providing seasonal energy storage services.

• (PDF) Gravitational Energy Storage With Weights: Gravitational Energy Storage with Weights. Thomas Morstyn (a), Christoff D. Botha (a). (a) School of Engineering, University of Edinburgh, Edinburgh, EH9 3JL, United Kingdom. (b) University of

• Analytical and quantitative assessment of capital expenditures for construction of an aboveground suspended weight energy storage: For the first time, an analytical foundational correlation was found between capital expenditures of gravity energy storage, its energy capacity, and storage power. The correlation reveals that capex can be expressed as the sum of three components: one inversely proportional to discharge duration, another inversely proportional to the square

• Advantages and challenges in converting abandoned mines for energy storage: Indeed, this is the case for all energy storage devices – batteries, pumped hydro and so on – as there is always some loss of energy as it is converted between forms, according to Green Gravity Founder and CEO, Mark Swinnerton. "Energy storage technologies can see efficiency levels of 50–90% depending on their nature," says

• System design and economic performance of gravity energy storage: This system stores electricity in the form of gravitational potential energy. This work presents an approach to size gravity storage technically and economically. It performs an economic analysis to determine the levelized cost of energy (LCOE) for this technology, and then compares it to other storage alternatives.

• [PDF] The Principle Efficiency of the New Gravity Energy Storage Power: energy storage technology provides an important means to address this contradiction, among which gravity energy storage technology has become a pertinent

• Gravitricity based on solar and gravity energy storage for residential applications | International Journal of Energy: This study proposes a design model for conserving and utilizing energy affordably and intermittently, considering the wind rush experienced in the patronage of renewable energy sources for cheaper generation of electricity and the solar energy potential, especially in the continents of Africa and Asia. Essentially, the global quest for

• Situation Analysis of Gravity Energy Storage Research Based on: Gravity energy storage is a physical method of storing energy that offers advantages such as system safety, flexibility in location, and environmental friendliness. In addition, it boasts a long lifespan, low cost, zero self-discharge rate, large energy storage capacity, and high discharge depth.

• System Design and Economic Performance of Gravity Energy Storage: The LCOE analysis has included costs incurred for the operation, construction, equipment, maintenance, and investment. The study showed that the Pumped-Storage Hydropower System (PHS) and the GESS are

• Electric truck gravity energy storage: An alternative to seasonal energy storage: Energy Storage is a new journal for innovative energy storage research, covering ranging storage methods and their integration with conventional & renewable systems. Abstract: The global shift toward a sustainable and eco-friendly energy landscape necessitates the adoption of long-term, high-capacity energy storage solutions.

• Gravity Energy Storage Systems with Weight Lifting: Gravity energy storage (GES) is an innovative technology to store electricity as the potential energy of solid weights lifted against the Earth's gravity force. When surplus electricity is available, it is used to lift weights. When electricity demand is high, the weights descend by the force of gravity and potential energy converts back into

• Adaptive energy management strategy for optimal integration of wind/PV system with hybrid gravity/battery energy storage: Mechanical energy storage systems, such as pumped hydro storage [28], and electrochemical energy storage technologies [29] hold great significance in the progression of renewable energy. Currently, pumped hydro energy storage (PHES) dominates ES technologies, with ~95% of the global storage capacity [30].

• Pacific Northwest National Laboratory | PNNL

• Life-cycle assessment of gravity energy storage systems for large: The LCC of gravity energy storage was analyzed by conducting a market study of the system construction and installation considering recent cost data. The cost

• Assessment of the round-trip efficiency of gravity energy storage system: Analytical and numerical analysis of energy: Compressed air energy storage relies on natural storage cavities for large-scale applications and is theoretically still limited to less than 70% cycle efficiency due to unavoidable heat losses

• Financial and economic modeling of large-scale gravity energy storage: In addition, for a 1 GW power capacity and 125 MWh energy capacity system, gravity energy storage has an attractive LCOS of 202 $/MWh. The LCOS comparison has shown that the GES system is a cost

• 2022 Grid Energy Storage Technology Cost and Performance Assessment: The 2022 Cost and Performance Assessment analyzes storage systems at additional 24- and 100-hour durations. In September 2021, DOE launched the Long-Duration Storage Shot, which aims to reduce costs by 90% in

• Research on Site Selection of Slope Gravity Energy Storage: The Austrian IIASA Institute [] proposed a mountain cable ropeway structure in 2019 (Fig. 2), an energy storage system that utilizes cables to suspend heavy loads for charging and discharging, and can reduce the construction cost by utilizing the natural mountain slopes and adopting sand and gravel as the energy storage medium.

• Gravity energy storage systems: Fig. 5.11 below demonstrates that Gravitricity's levelized cost of storage in $/kWh for a 25-year lifetime project will be $171, which is less than half that of lithium-ion batteries at the time of writing. The long-life nature of this technology also contributes to the low price per kWh installed.
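All of the systems surveyed above store the same quantity: gravitational potential energy, E = mgh. A back-of-envelope sketch with illustrative numbers (not taken from any of the cited studies) shows the scale involved:

```python
# Back-of-envelope energy content of a lifted mass: E = m * g * h.
# Values are illustrative only, not from any cited system.
m = 500_000.0   # kg (a 500 t weight)
g = 9.81        # m/s^2
h = 800.0       # m of available drop (e.g. a deep mine shaft)

energy_joules = m * g * h
energy_kwh = energy_joules / 3.6e6
print(round(energy_kwh, 1))  # ~1090 kWh before conversion losses
```

The small number explains why the surveyed designs either scale the mass (trains of weights, sand on mountain slopes) or the height (mine shafts) to reach grid-relevant capacities.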
Seed costs for a fa… - QuestionCove

Mathematics

OpenStudy (anonymous): Seed costs for a farmer are $25 per acre for corn and $27 per acre for soybeans. How many acres of each crop should the farmer plant if she wants to spend no more than $3000 on seed? Express your answer as a linear inequality with appropriate nonnegative restrictions and draw its graph. Let x be the number of acres planted with corn and let y be the number of acres planted with soybeans. Choose the correct inequality below.
a. 25x + 27y ≤ 3000, x ≥ 0, y ≥ 0
b. 25x + 27y < 3000, x ≥ 0, y ≥ 0
c. 25x + 27y ≥ 3000, x ≥ 0, y ≥ 0
d. 25x + 27y > 3000, x ≥ 0, y ≥ 0

OpenStudy (anonymous): I chose C. Can someone verify that with me?
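For the record, "spend no more than $3000" caps the total cost, so the budget constraint is 25x + 27y ≤ 3000 with x, y ≥ 0, which is option (a), not (c). A quick sketch to sanity-check the inequality on a couple of plantings:

```python
# Budget constraint: corn costs $25/acre, soybeans $27/acre, and the
# total must not exceed the $3000 cap (option a: 25x + 27y <= 3000).
def within_budget(x, y, cap=3000):
    return 25 * x + 27 * y <= cap and x >= 0 and y >= 0

print(within_budget(60, 50))   # 1500 + 1350 = 2850 <= 3000 -> True
print(within_budget(100, 30))  # 2500 + 810 = 3310 > 3000 -> False
```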
Capacitor Coupling for Minimum Impedance

Source: Signal Integrity Journal article, January 2, 2018, Anto Davis and Steve Sandler.

A negative coupling coefficient to optimize capacitor placement may not always lead to lower impedance.

It is well known that the mounting loop of a capacitor accounts for the major share of its parasitic inductance [1]. A practical capacitor mounted on a printed circuit board has a parasitic inductance (L) associated with it. Its equivalent circuit is a series RLC circuit, where R represents the loss associated with the capacitor (C). It has a self (series) resonant frequency given by

f_res = 1 / (2π√(LC)).

A mounting loop is formed by the two vias connecting a capacitor to the power-ground planes. A small form factor capacitor reduces this area and helps to minimize the parasitic inductance. Closer power-ground planes reduce the plane spreading inductance.

Geometry for Negative Coupling Coefficient

With two capacitors in parallel, we can generate a negative coupling coefficient to reduce the parasitic inductance. This is shown in Figure 1. Points A and E are on the same plane and points B and F are on the same plane. Assume that the planes are power and ground, separated by a thin dielectric (2-3 mils thick). Figure 1(b) shows the geometrical connection for a negative coupling coefficient (M and k are negative). When power and ground planes are present, this geometry can be achieved by changing via locations: B and E are on the same plane and A and F are on the same plane. Figure 2 shows the equivalent circuit diagram of two capacitors in parallel. Writing the circuit equations for this network yields Eq. 1 and Eq. 2.

1. Identical Capacitors

For large values of ω, Eq. 2 reduces to the classic equation for coupled parallel inductors,

L_eq = (L1·L2 - M²) / (L1 + L2 ∓ 2M),

where the upper (-) and lower (+) signs correspond to positive and negative coupling, respectively. For equal loop areas (L1 = L2 = L), the inductance is given by

L_eq = (L ± M) / 2.

Experiments were conducted with two ceramic disc capacitors of value C1 = C2 = 4.7 nF. Measurement with a capacitance meter gives their values as 4.30 nF.
The leads are cut and made with insulated copper wire of SWG 21 (diameter = 0.813 mm). The leads form a loop of area 1.5 cm × 0.5 cm. The measurement device is a Rohde & Schwarz vector network analyzer (VNA). The measured scattering parameters are converted to impedance values. The VNA is set with the following values: resolution bandwidth (RBW) = 10 Hz; number of points = 1000; power = -15 dBm. The two loops are kept at a distance of 1 mm (edge to edge). The coupling coefficient between the loops (k = M/L) is calculated to be 0.4, where L is 17.6 nH and M is 6.97 nH [2]. Simulation and experimental results are shown in Fig. 4(a) and Fig. 4(b) respectively. At 100 MHz, the inductance is 2.5 times (8 dB) lower than in the positive coupling case. No surprises so far. How about the case where the two capacitor values are different?

Non-identical Capacitors

When the capacitor values are different, the parallel combination produces anti-resonance peaks, as shown in Figure 5. The anti-resonance peak is lower for the positively coupled case. It is assumed that the capacitors have nearly equal mounting inductance and that their sizes are comparable. Experiments were conducted with the same values as the previous experiment, except that one capacitor was changed to 390 pF. A measurement with a capacitance meter gives 380 pF for this capacitor. Experimental results (Figure 5(b)) show that the anti-resonance peak is lower for the positively coupled case by a factor of 3.2. When both capacitors become inductive, the equivalent inductance is lower for the negative coupling case. A negative coupling coefficient therefore produces a larger anti-resonant peak than a positive coupling coefficient, even though the equivalent inductance is lower. Multiple anti-resonant peaks are capable of generating rogue waves [7, 8], and suppressing them is becoming more and more important. For low-noise circuits, power distribution network (PDN) resonance is an important design issue and should be suppressed!
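The identical-capacitor case can be sanity-checked numerically with the measured loop values quoted above (L = 17.6 nH, M = 6.97 nH). This is a quick sketch of the L_eq = (L ± M)/2 result only, not the full frequency-dependent model:

```python
# Equivalent inductance of two identical coupled mounting loops in
# parallel: L_eq = (L + M)/2 for positive coupling and (L - M)/2 for
# negative coupling. L and M are the measured values from the text.
L = 17.6e-9   # H
M = 6.97e-9   # H (k = M/L ~ 0.4)

L_pos = (L + M) / 2   # positively coupled loops
L_neg = (L - M) / 2   # negatively coupled loops
print(L_pos / L_neg)  # ~2.3x lower inductance with negative coupling
```

The ratio of roughly 2.3 is consistent with the measured "2.5 times (8 dB)" improvement reported at 100 MHz.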
How about the tolerance of two identical ceramic capacitors connected in parallel? "It depends" on the type of capacitors, which determines the capacitance variation. Plotting Eq. 2 with worst-case values will give the answer. The authors leave this to the curious reader to explore!

A. K. Davis (ECE, Georgia Tech, Atlanta, USA) & S. M. Sandler (Picotest, Phoenix, AZ 85085, USA)

References

[1] Roy, T., Smith, L., and Prymak, J.: 'ESR and ESL of ceramic capacitor applied to decoupling applications', IEEE 7th Top. Meet. Elect. Perform. Electron. Packag., 1998, pp. 213-216.
[2] Paul, C. R.: 'Effectiveness of multiple decoupling capacitors', IEEE Trans. Electromagn. Compat., 1992, 34, (2), pp. 130-133.
[3] Davis, A.K.: 'Effect of magnetic coupling between the mounting loops of two parallel capacitors on antiresonance', IET Sci. Meas. Tech., 2016, 10, (8), pp. 889-899.
[4] Davis, A.K., and Gunasekaran, M.K.: 'Microprocessor-conducted noise reduction with switched supercapacitors', Electron. Letters, 2014, 51, (1), pp. 92-94.
[5] Novak, I., Pannala, S., and Miller, J. R.: 'Overview of some options to create low-Q controlled-ESR bypass capacitors', IEEE 13th Top. Meet. Elect. Perform. Electron. Packag., 2004, pp. 55-58.
[6] Davis, A.K.: 'Effect of a magnetically coupled resistive loop on antiresonance', Electron. Letters, 2016, 52, (13), pp. 1162-1164.
[7] Steve Sandler: 'Target impedance based solutions for PDN may not provide realistic assessment', available https://www.edn.com/design/test-and-measurement/4413192/
[8] Eric Bogatin, Istvan Novak, Steve Sandler, Larry Smith, Brad Brim, and Steve Weir: 'Target Impedance and Rogue Waves', available http://www.electrical-integrity.com/Paper_download_files/
HUD No. 17-103 For Release Office of Public Affairs Friday (202) 708-0685 November 17, 2017 U.S. Census Bureau Raemeka Mayo or Stephen Cooper, Economic Indicators Division (301) 763-5160 WASHINGTON - The U.S. Department of Housing and Urban Development (HUD) and the U.S. Census Bureau jointly announced the following new residential construction statistics for October 2017. Building Permits: Privately owned housing units authorized by building permits in October were at a seasonally adjusted annual rate of 1,297,000. This is 5.9 percent (±1.4 percent) above the revised September rate of 1,225,000 and is 0.9 percent (±1.6 percent)* above the October 2016 rate of 1,285,000. Single-family authorizations in October were at a rate of 839,000; this is 1.9 percent (±1.7 percent) above the revised September figure of 823,000. Authorizations of units in buildings with five units or more were at a rate of 416,000 in October. Housing Starts: Privately owned housing starts in October were at a seasonally adjusted annual rate of 1,290,000. This is 13.7 percent (±10.5 percent) above the revised September estimate of 1,135,000 but is 2.9 percent (±10.1 percent)* below the October 2016 rate of 1,328,000. Single-family housing starts in October were at a rate of 877,000; this is 5.3 percent (±12.1 percent)* above the revised September figure of 833,000. The October rate for units in buildings with five units or more was 393,000. Housing Completions: Privately owned housing completions in October were at a seasonally adjusted annual rate of 1,232,000. This is 12.6 percent (±12.2 percent) above the revised September estimate of 1,094,000 and is 15.5 percent (±11.7 percent) above the October 2016 rate of 1,067,000. Single-family housing completions in October were at a rate of 793,000; this is 2.6 percent (±11.1 percent)* above the revised September rate of 773,000. The October rate for units in buildings with five units or more was 433,000. 
The November report is scheduled for release on December 19, 2017. Read more about new residential construction activity (https://www.census.gov/construction/nrc/index.html). In interpreting changes in the statistics in this release, note that month-to-month changes in seasonally adjusted statistics often show movements which may be irregular. It may take three months to establish an underlying trend for building permit authorizations, six months for total starts, and six months for total completions. The statistics in this release are estimated from sample surveys and are subject to sampling variability as well as nonsampling error including bias and variance from response, nonreporting, and undercoverage. Estimated relative standard errors of the most recent data are shown in the tables. Whenever a statement such as "2.5 percent (±3.2 percent) above" appears in the text, this indicates the range (-0.7 to +5.7 percent) in which the actual percentage change is likely to have occurred. All ranges given for percentage changes are 90 percent confidence intervals and account only for sampling variability. If a range does not contain zero, the change is statistically significant. If it does contain zero, the change is not statistically significant; that is, it is uncertain whether there was an increase or decrease. The same policies apply to the confidence intervals for percentage changes shown in the tables. On average, the preliminary seasonally adjusted estimates of total building permits, housing starts and housing completions are revised 3 percent or less. Explanations of confidence intervals and sampling variability can be found at the Census Bureau's website (https://www.census.gov/construction/nrc/ * The 90 percent confidence interval includes zero. In such cases, there is insufficient statistical evidence to conclude that the actual change is different from zero.
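The significance rule described above (a change is statistically significant only when its 90 percent confidence interval excludes zero) can be sketched directly with the release's own example figures:

```python
# A reported change of p percent with a 90% confidence margin of
# +/- m percent spans (p - m, p + m); the release calls the change
# statistically significant only if that range excludes zero.
def interval(p, m):
    return (round(p - m, 1), round(p + m, 1))

def significant(p, m):
    lo, hi = interval(p, m)
    return not (lo <= 0 <= hi)

# The release's worked example: 2.5% (+/- 3.2%) -> (-0.7, 5.7), not
# significant. The October housing starts change: 13.7% (+/- 10.5%)
# -> (3.2, 24.2), significant.
print(interval(2.5, 3.2), significant(2.5, 3.2))
print(interval(13.7, 10.5), significant(13.7, 10.5))
```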
Texas Administrative Code

(a) Introduction.

(1) The desire to achieve educational excellence is the driving force behind the Texas essential knowledge and skills for mathematics, guided by the college and career readiness standards. By embedding statistics, probability, and finance, while focusing on computational thinking, mathematical fluency, and solid understanding, Texas will lead the way in mathematics education and prepare all Texas students for the challenges they will face in the 21st century.

(2) The process standards describe ways in which students are expected to engage in the content. The placement of the process standards at the beginning of the knowledge and skills listed for each grade and course is intentional. The process standards weave the other knowledge and skills together so that students may be successful problem solvers and use mathematics efficiently and effectively in daily life. The process standards are integrated at every grade level and course. When possible, students will apply mathematics to problems arising in everyday life, society, and the workplace. Students will use a problem-solving model that incorporates analyzing given information, formulating a plan or strategy, determining a solution, justifying the solution, and evaluating the problem-solving process and the reasonableness of the solution. Students will select appropriate tools such as real objects, manipulatives, algorithms, paper and pencil, and technology and techniques such as mental math, estimation, number sense, and generalization and abstraction to solve problems. Students will effectively communicate mathematical ideas, reasoning, and their implications using multiple representations such as symbols, diagrams, graphs, computer programs, and language. Students will use mathematical relationships to generate solutions and make connections and predictions. Students will analyze mathematical relationships to connect and communicate mathematical ideas. Students will display, explain, or justify mathematical ideas and arguments using precise mathematical language in written or oral communication.

(3) The primary focal areas in Grade 6 are number and operations; proportionality; expressions, equations, and relationships; and measurement and data. Students use concepts, algorithms, and properties of rational numbers to explore mathematical relationships and to describe increasingly complex situations. Students use concepts of proportionality to explore, develop, and communicate mathematical relationships. Students use algebraic thinking to describe how a change in one quantity in a relationship results in a change in the other. Students connect verbal, numeric, graphic, and symbolic representations of relationships, including equations and inequalities. Students use geometric properties and relationships, as well as spatial reasoning, to model and analyze situations and solve problems. Students communicate information about geometric figures or situations by quantifying attributes, generalize procedures from measurement experiences, and use the procedures to solve problems. Students use appropriate statistics, representations of data, and reasoning to draw conclusions, evaluate arguments, and make recommendations. While the use of all types of technology is important, the emphasis on algebra readiness skills necessitates the implementation of graphing technology.

(4) Statements that contain the word "including" reference content that must be mastered, while those containing the phrase "such as" are intended as possible illustrative examples.

(b) Knowledge and skills.

(1) Mathematical process standards. The student uses mathematical processes to acquire and demonstrate mathematical understanding. The student is expected to:
(A) apply mathematics to problems arising in everyday life, society, and the workplace;
(B) use a problem-solving model that incorporates analyzing given information, formulating a plan or strategy, determining a solution, justifying the solution, and evaluating the problem-solving process and the reasonableness of the solution;
(C) select tools, including real objects, manipulatives, paper and pencil, and technology as appropriate, and techniques, including mental math, estimation, and number sense as appropriate, to solve problems;
(D) communicate mathematical ideas, reasoning, and their implications using multiple representations, including symbols, diagrams, graphs, and language as appropriate;
(E) create and use representations to organize, record, and communicate mathematical ideas;
(F) analyze mathematical relationships to connect and communicate mathematical ideas; and
(G) display, explain, and justify mathematical ideas and arguments using precise mathematical language in written or oral communication.

(2) Number and operations. The student applies mathematical process standards to represent and use rational numbers in a variety of forms. The student is expected to:
(A) classify whole numbers, integers, and rational numbers using a visual representation such as a Venn diagram to describe relationships between sets of numbers;
(B) identify a number, its opposite, and its absolute value;
(C) locate, compare, and order integers and rational numbers using a number line;
(D) order a set of rational numbers arising from mathematical and real-world contexts; and
(E) extend representations for division to include fraction notation such as a/b represents the same number as a ÷ b where b ≠ 0.

(3) Number and operations. The student applies mathematical process standards to represent addition, subtraction, multiplication, and division while solving problems and justifying solutions. The student is expected to:
(A) recognize that dividing by a rational number and multiplying by its reciprocal result in equivalent values;
(B) determine, with and without computation, whether a quantity is increased or decreased when multiplied by a fraction, including values greater than or less than one;
(C) represent integer operations with concrete models and connect the actions with the models to standardized algorithms;
(D) add, subtract, multiply, and divide integers fluently; and
(E) multiply and divide positive rational numbers fluently.

(4) Proportionality. The student applies mathematical process standards to develop an understanding of proportional relationships in problem situations. The student is expected to:
(A) compare two rules verbally, numerically, graphically, and symbolically in the form of y = ax or y = x + a in order to differentiate between additive and multiplicative relationships;
(B) apply qualitative and quantitative reasoning to solve prediction and comparison of real-world problems involving ratios and rates;
(C) give examples of ratios as multiplicative comparisons of two quantities describing the same attribute;
(D) give examples of rates as the comparison by division of two quantities having different attributes, including rates as quotients;
(E) represent ratios and percents with concrete models, fractions, and decimals;
(F) represent benchmark fractions and percents such as 1%, 10%, 25%, 33 1/3%, and multiples of these values using 10 by 10 grids, strip diagrams, number lines, and numbers;
(G) generate equivalent forms of fractions, decimals, and percents using real-world problems, including problems that involve money; and
(H) convert units within a measurement system, including the use of proportions and unit rates.

(5) Proportionality. The student applies mathematical process standards to solve problems involving proportional relationships. The student is expected to:
(A) represent mathematical and real-world problems involving ratios and rates using scale factors, tables, graphs, and proportions;
(B) solve real-world problems to find the whole given a part and the percent, to find the part given the whole and the percent, and to find the percent given the part and the whole, including the use of concrete and pictorial models; and
(C) use equivalent fractions, decimals, and percents to show equal parts of the same whole.

(6) Expressions, equations, and relationships. The student applies mathematical process standards to use multiple representations to describe algebraic relationships. The student is expected to:
(A) identify independent and dependent quantities from tables and graphs;
(B) write an equation that represents the relationship between independent and dependent quantities from a table; and Cont'd...
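Standard (5)(B) above names three related percent problems: find the part, find the whole, and find the percent. A minimal sketch of the three (function names are my own, not from the standard):

```python
# The three percent problems of standard (5)(B), as plain arithmetic.

def find_part(whole, percent):
    """What is `percent` of `whole`?"""
    return whole * percent / 100

def find_whole(part, percent):
    """`part` is `percent` of what whole?"""
    return part * 100 / percent

def find_percent(part, whole):
    """`part` is what percent of `whole`?"""
    return part / whole * 100

print(find_part(80, 25))     # 25% of 80 -> 20.0
print(find_whole(20, 25))    # 20 is 25% of what? -> 80.0
print(find_percent(20, 80))  # 20 out of 80 -> 25.0
```

Each function is the same proportion part/whole = percent/100, solved for a different unknown.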
Richard Feynman - Probability and Uncertainty

Probability and Uncertainty - the Quantum Mechanical View of Nature
Chapter 6 of The Character of Physical Law (annotated)

In the beginning of the history of experimental observation, or any other kind of observation on scientific things, it is intuition, which is really based on simple experience with everyday objects, that suggests reasonable explanations for things. But as we try to widen and make more consistent our description of what we see, as it gets wider and wider and we see a greater range of phenomena, the explanations become what we call laws instead of simple explanations. One odd characteristic is that they often seem to become more and more unreasonable and more and more intuitively far from obvious. To take an example, in the relativity theory the proposition is that if you think two things occur at the same time that is just your opinion, someone else could conclude that of those events one was before the other, and that therefore simultaneity is merely a subjective impression. There is no reason why we should expect things to be otherwise, because the things of everyday experience involve large numbers of particles, or involve things moving very slowly, or involve other conditions that are special and represent in fact a limited experience with nature. It is a small section only of natural phenomena that one gets from direct experience. It is only through refined measurements and careful experimentation that we can have a wider vision. And then we see unexpected things: we see things that are far from what we would guess — far from what we could have imagined. Our imagination is stretched to the utmost, not, as in fiction, to imagine things which are not really there, but just to comprehend those things which are there. It is this kind of situation that I want to discuss.

Let us start with the history of light. At first light was assumed to behave very much like a shower of particles, of corpuscles, like rain, or like bullets from a gun. Then with further research it was clear that this was not right, that the light actually behaved like waves, like water waves for instance. Then in the twentieth century, on further research, it appeared again that light actually behaved in many ways like particles. In the photo-electric effect you could count these particles — they are called photons now. Electrons, when they were first discovered, behaved exactly like particles or bullets, very simply. Further research showed, from electron diffraction experiments for example, that they behaved like waves. As time went on there was a growing confusion about how these things really behaved — waves or particles, particles or waves? Everything looked like both.

[Annotation: In 1925 quantum mechanics discovered the equations that let us calculate physical properties to extraordinary accuracy, but the founders did not provide us with an intuitive picture of what is going on at the quantum level.]

This growing confusion was resolved in 1925 or 1926 with the advent of the correct equations for quantum mechanics. Now we know how the electrons and light behave. But what can I call it? If I say they behave like particles I give the wrong impression; also if I say they behave like waves. They behave in their own inimitable way, which technically could be called a quantum mechanical way. They behave in a way that is like nothing that you have ever seen before. Your experience with things that you have seen before is incomplete. The behaviour of things on a very tiny scale is simply different. An atom does not behave like a weight hanging on a spring and oscillating. Nor does it behave like a miniature representation of the solar system with little planets going around in orbits. Nor does it appear to be somewhat like a cloud or fog of some sort surrounding the nucleus. It behaves like nothing you have ever seen before.

There is one simplification at least. Electrons behave in this respect in exactly the same way as photons; they are both screwy, but in exactly the same way. How they behave, therefore, takes a great deal of imagination to appreciate, because we are going to describe something which is different from anything you know about. In that respect at least this is perhaps the most difficult lecture of the series, in the sense that it is abstract, in the sense that it is not close to experience. I cannot avoid that. Were I to give a series of lectures on the character of physical law, and to leave out from this series the description of the actual behaviour of particles on a small scale, I would certainly not be doing the job. This thing is completely characteristic of all of the particles of nature, and of a universal character, so if you want to hear about the character of physical law it is essential to talk about this particular aspect. It will be difficult. But the difficulty really is psychological and exists in the perpetual torment that results from your saying to yourself, 'But how can it be like that?' which is a reflection of uncontrolled but utterly vain desire to see it in terms of something familiar. I will not describe it in terms of an analogy with something familiar; I will simply describe it.

There was a time when the newspapers said that only twelve men understood the theory of relativity. I do not believe there ever was such a time. There might have been a time when only one man did, because he was the only guy who caught on, before he wrote his paper. But after people read the paper a lot of people understood the theory of relativity in some way or other, certainly more than twelve. On the other hand, I think I can safely say that nobody understands quantum mechanics. So do not take the lecture too seriously, feeling that you really have to understand in terms of some model what I am going to describe, but just relax and enjoy it. I am going to tell you what nature behaves like. If you will simply admit that maybe she does behave like this, you will find her a delightful, entrancing thing. Do not keep saying to yourself, if you can possibly avoid it, 'But how can it be like that?' because you will get 'down the drain', into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that.

So then, let me describe to you the behaviour of electrons or of photons in their typical quantum mechanical way. I am going to do this by a mixture of analogy and contrast. If I made it pure analogy we would fail; it must be by analogy and contrast with things which are familiar to you. So I make it by analogy and contrast, first to the behaviour of particles, for which I will use bullets, and second to the behaviour of waves, for which I will use water waves. What I am going to do is to invent a particular experiment and first tell you what the situation would be in that experiment using particles, then what you would expect to happen if waves were involved, and finally what happens when there are actually electrons or photons in the system.

[Annotation: We will show that the (one) mystery of quantum mechanics is how mere "probabilities" (immaterial information) can causally control (statistically) the positions of material particles.]

I will take just this one experiment, which has been designed to contain all of the mystery of quantum mechanics, to put you up against the paradoxes and mysteries and peculiarities of nature one hundred per cent. Any other situation in quantum mechanics, it turns out, can always be explained by saying, 'You remember the case of the experiment with the two holes? It's the same thing'. I am going to tell you about the experiment with the two holes. It does contain the general mystery; I am avoiding nothing; I am baring nature in her most elegant and difficult form.

We start with bullets (fig. 28). Suppose that we have some source of bullets, a machine gun, and in front of it a plate with a hole for the bullets to come through, and this plate is armour plate. A long distance away we have a second plate which has two holes in it — that is the famous two-hole business. I am going to talk a lot about these holes, so I will call them hole No. 1 and hole No. 2. You can imagine round holes in three dimensions — the drawing is just a cross section. A long distance away again we have another screen which is just a backstop of some sort on which we can put in various places a detector, which in the case of bullets is a box of sand into which the bullets will be caught so that we can count them. I am going to do experiments in which I count how many bullets come into this detector or box of sand when the box is in different positions, and to describe that I will measure the distance of the box from somewhere, and call that distance 'x', and I will talk about what happens when you change 'x', which means only that you move the detector box up and down.

First I would like to make a few modifications from real bullets, in three idealizations. The first is that the machine gun is very shaky and wobbly and the bullets go in various directions, not just exactly straight on; they can ricochet off the edges of the holes in the armour plate. Secondly, we should say, although this is not very important, that the bullets have all the same speed or energy. The most important idealization in which this situation differs from real bullets is that I want these bullets to be absolutely indestructible, so that what we find in the box is not pieces of lead, of some bullet that broke in half, but we get the whole bullet. Imagine indestructible bullets, or hard bullets and soft armour plate.

The first thing that we shall notice about bullets is that the things that arrive come in lumps. When the energy comes it is all in one bulletful, one bang. If you count the bullets, there are one, two, three, four bullets; the things come in lumps. They are of equal size, you suppose, in this case, and when a thing comes into the box it is either all in the box or it is not in the box. Moreover, if I put up two boxes I never get two bullets in the boxes at the same time, presuming that the gun is not going off too fast and I have enough time between them to see. Slow down the gun so it goes off very slowly, then look very quickly in the two boxes, and you will never get two bullets at the same time in the two boxes, because a bullet is a single identifiable lump.

Now what I am going to measure is how many bullets arrive on the average over a period of time. Say we wait an hour, and we count how many bullets are in the sand and average that. We take the number of bullets that arrive per hour, and we can call that the probability of arrival, because it just gives the chance that a bullet going through a slit arrives in the particular box. The number of bullets that arrive in the box will vary of course as I vary 'x'. On the diagram I have plotted horizontally the number of bullets that I get if I hold the box in each position for an hour. I shall get a curve that will look more or less like curve N[12], because when the box is behind one of the holes it gets a lot of bullets, and if it is a little out of line it does not get as many, they have to bounce off the edges of the holes, and eventually the curve disappears. The curve looks like curve N[12], and the number that we get in an hour when both holes are open I will call N[12], which merely means the number which arrive through hole No. 1 and hole No. 2. I must remind you that the number that I have plotted does not come in lumps. It can have any size it wants. It can be two and a half bullets in an hour, in spite of the fact that bullets come in lumps. All I mean by two and a half bullets per hour is that if you run for ten hours you will get twenty-five bullets, so on the average it is two and a half bullets. I am sure you are all familiar with the joke about the average family in the United States seeming to have two and a half children. It does not mean that there is a half child in any family — children come in lumps. Nevertheless, when you take the average number per family it can be any number whatsoever, and in the same way this number N[12], which is the number of bullets that arrive in the container per hour, on the average, need not be an integer. What we measure is the probability of arrival, which is a technical term for the average number that arrive in a given length of time.

Finally, if we analyse the curve N[12] we can interpret it very nicely as the sum of two curves, one which will represent what I will call N[1], the number which will come if hole No. 2 is closed by another piece of armour plate in front, and N[2], the number which will come through hole No. 2 alone, if hole No. 1 is closed. We discover now a very important law, which is that the number that arrive with both holes open is the number that arrive by coming through hole No. 1, plus the number that come through hole No. 2. This proposition, the fact that all you have to do is to add these together, I call 'no interference'.

N[12] = N[1] + N[2] (no interference)

That is for bullets, and now we have done with bullets we begin again, this time with water waves (fig. 29). The source is now a big mass of stuff which is being shaken up and down in the water.
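Before moving on to waves, the 'no interference' law for bullets can be sketched numerically. The bump-shaped curves below are illustrative stand-ins for the measured single-hole counts N[1] and N[2] (the real curves come from counting bullets, not from a formula):

```python
import math

# Hypothetical single-hole arrival rates, modeled as bumps centred behind
# hole No. 1 and hole No. 2; the true curves are measured, not derived.
def n1(x):
    return math.exp(-(x - 1.0) ** 2)   # bullets per hour with hole No. 2 closed

def n2(x):
    return math.exp(-(x + 1.0) ** 2)   # bullets per hour with hole No. 1 closed

def n12(x):
    """For classical bullets the two-hole curve is just the sum: no interference."""
    return n1(x) + n2(x)

# Additivity holds at every detector position x.
for x in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    assert n12(x) == n1(x) + n2(x)

print(round(n12(0.0), 3))   # 0.736 -- a non-integer average, even though bullets come in lumps
```

Note the average rate need not be an integer, exactly as the two-and-a-half-bullets-per-hour discussion says.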
The armour plate becomes a long line of barges or jetties with a gap in the water in between. Perhaps it would be better to do it with ripples than with big ocean waves; it sounds more sensible. I wiggle my finger up and down to make waves, and I have a little piece of wood as a barrier with a hole for the ripples to come through. Then I have a second barrier with two holes, and finally a detector. What do I do with the detector? What the detector detects is how much the water is jiggling. For instance, I put a cork in the water and measure how it moves up and down, and what I am going to measure in fact is the energy of the agitation of the cork, which is exactly proportional to the energy carried by the waves. One other thing: the jiggling is made very regular and perfect so that the waves are all the same space from one another.

One thing that is important for water waves is that the thing we are measuring can have any size at all. We are measuring the intensity of the waves, or the energy in the cork, and if the waves are very quiet, if my finger is only jiggling a little, then there will be very little motion of the cork. No matter how much it is, it is proportional. It can have any size; it does not come in lumps; it is not all there or nothing. What we are going to measure is the intensity of the waves, or, to be precise, the energy generated by the waves at a point. What happens if we measure this intensity, which I will call 'I' to remind you that it is an intensity and not a number of particles of any kind?

The curve I[12], that is when both holes are open, is shown in the diagram (fig. 29). It is an interesting, complicated looking curve. If we put the detector in different places we get an intensity which varies very rapidly in a peculiar manner. You are probably familiar with the reason for that. The reason is that the ripples as they come have crests and troughs spreading from hole No. 1, and they have crests and troughs spreading from hole No. 2. If we are at a place which is exactly in between the two holes, so that the two waves arrive at the same time, the crests will come on top of each other and there will be plenty of jiggling. We have a lot of jiggling right in dead centre. On the other hand, if I move the detector to some point further from hole No. 2 than hole No. 1, it takes a little longer for the waves to come from 2 than from 1, and when a crest is arriving from 1 the crest has not quite reached there yet from hole 2, in fact it is a trough from 2, so that the water tries to move up and it tries to move down, from the influences of the waves coming from the two holes, and the net result is that it does not move at all, or practically not at all. So we have a low bump at that place. Then if it moves still further over we get enough delay so that crests come together from both holes, although one crest is in fact a whole wave behind, and so you get a big one again, then a small one, a big one, a small one... depending upon the way the crests and troughs 'interfere'.

The word interference again is used in science in a funny way. We can have what we call constructive interference, as when both waves interfere to make the intensity stronger. The important thing is that I[12] is not the same as I[1] plus I[2], and we say it shows constructive and destructive interference.
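The wave bookkeeping can be sketched numerically: heights from the two holes add, the intensity is the square of the total height, and that is why I[12] differs from I[1] + I[2]. The cosine wave and the path lengths below are illustrative assumptions, not values from the lecture:

```python
import math

# Heights add; intensities are squares of heights. The "wave" here is a
# simple cosine of the path length, an illustrative model only.
wavelength = 1.0
k = 2 * math.pi / wavelength   # wave number

def height(path_length):
    return math.cos(k * path_length)

def intensities(r1, r2):
    """Intensity from each hole alone, and with both holes open."""
    h1, h2 = height(r1), height(r2)
    i1, i2 = h1 ** 2, h2 ** 2
    i12 = (h1 + h2) ** 2           # square the SUM of heights
    return i1, i2, i12

# Equal path lengths: crest meets crest -> constructive interference.
i1, i2, i12 = intensities(10.0, 10.0)
print(i12 > i1 + i2 - 1e-9)        # True: I12 = 4*I1, bigger than the plain sum

# Paths differing by half a wavelength: crest meets trough -> cancellation.
i1, i2, i12 = intensities(10.0, 10.5)
print(round(i12, 9))               # 0.0: plenty from each hole alone, nothing together
```

The "big one, small one" pattern Feynman describes is exactly this squared sum swinging between 4 times a single-hole intensity and zero as the path difference changes.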
We can find out what I[1] U.T.Place and I[2] look like by closing hole No. 2 to find I[1], and closing hole No. 1 to find I[2]. Plato The intensity that we get if one hole is closed is simply the waves from one hole, with no Karl Popper interference, and the curves are shown in fig. 2. You will notice that I[1] is the same as Porphyry N[1], and I[2] the same as N[2] and yet I[12] is quite different from N[12]. As a matter of Huw Price fact, the mathematics of the curve I[12] is rather interesting. What is true is that the H.A.Prichard height of the water, which we will call h, when both holes are open is equal to the height Protagoras that you would get from No. 1 open, plus the height that you would get from No. 2 open. Hilary Putnam Thus, if it is a trough the height from No. 2 is negative and cancels out the height from Willard van Orman No. 1. You can represent it by talking about the height of the water, but it turns out that Quine the intensity in any case, for instance when both holes are open, is not the same as the Frank Ramsey height but is proportional to the square of the height. It is because of the fact that we Ayn Rand are dealing with squares that we get these very interesting curves. Michael Rea h[12] = h[1] + h[2] Thomas Reid I[12] ≠ I[1] + I[2] (Interference) Charles Renouvier I[12] = (h[12])^2 Nicholas Rescher I[1] = (h[1])^2 C.W.Rietdijk I[2] = (h[2])^2 Richard Rorty That was water. Now we start again, this time with electrons (fig. 30). The source is a Josiah Royce filament, the barriers tungsten plates, these are holes in the tungsten plate, and for a Bertrand Russell detector we have any electrical system which is sufficiently sensitive to pick up the Paul Russell charge of an electron arriving with whatever energy the source has. 
If you would prefer it, Gilbert Ryle we could use photons with black paper instead of the tungsten plate — in fact black paper Jean-Paul Sartre is not very good because the fibres do not make sharp holes, so we would have to have Kenneth Sayre something better — and for a detector a photo-multiplier capable of detecting the T.M.Scanlon individual photons arriving. What happens with either case? I will discuss only the Moritz Schlick electron case, since the case with photons is exactly the same. First, what we receive in John Duns Scotus the electrical detector, with a sufficiently powerful amplifier behind it, are clicks, Arthur lumps, absolute lumps. When the click comes it is a certain size, and the size is always Schopenhauer the same. If you turn the source weaker the clicks come further apart, but it is the same John Searle sized click. If you turn it up they come so fast that they jam the amplifier. You have to Wilfrid Sellars turn it down enough so, that there are not too many clicks for the machinery that you are David Shiang using for the detector. Next, if you put another detector in a different place and listen Alan Sidelle to both of them you will never get two clicks at the same time, at least if the source is Ted Sider weak enough and the precision with which you measure the time is good enough. If you cut Henry Sidgwick down the intensity of the source so that the electrons come few and far between, they never Walter give a click in both detectors at once. That means that the thing which is coming comes in Sinnott-Armstrong lumps — it has a definite size, and it only comes to one place at a time. Right, so Peter Slezak electrons, or photons, come in lumps. Therefore what we can do is the same thing as we did J.J.C.Smart for bullets: we can measure the probability of arrival. 
What we do is hold the detector in various places — actually, if we wanted to, although it is expensive, we could put detectors all over at the same time and make the whole curve simultaneously — but we hold the detector in each place, say for an hour, and we measure at the end of the hour how many electrons came, and we average it. What do we get for the number of electrons that arrive? The same kind of N[12] as with bullets? Figure 30 shows what we get for N[12], that is what we get with both holes open. That is the phenomenon of nature, that she produces the curve which is the same as you would get for the interference of waves. She produces this curve for what? Not for the energy in a wave but for the probability of arrival of one of these lumps. The mathematics is simple. You change I to N, so you have to change h to something else, which is new — it is not the height of anything — so we invent an 'a', which we call a probability amplitude, because we do not know what it means. In this case a[1] is the probability amplitude to arrive from hole No. 1, and a[2] the probability amplitude to arrive from hole No. 2. To get the total probability amplitude to arrive you add the two together and square it. This is a direct imitation of what happens with the waves, because we have to get the same curve out so we use the same mathematics. I should check on one point though, about the interference. I did not say what happens if we close one of the holes. Let us try to analyse this interesting curve by presuming that the electrons came through one hole or through the other. We close one hole, and measure how many come through hole No. 1, and we get the simple curve N[1].
Or we can close the other hole and measure how many come through hole No. 2, and we get the N[2] curve. But these two added together do not give the same as N[12]; it does show interference. In fact the mathematics is given by this funny formula that the probability of arrival is the square of an amplitude which itself is the sum of two pieces, N[12] = (a[1] + a[2])^2. The question is how it can come about that when the electrons go through hole No. 1 they will be distributed one way, when they go through hole No. 2 they will be distributed another way, and yet when both holes are open you do not get the sum of the two. For instance, if I hold the detector at the point q with both holes open I get practically nothing, yet if I close one of the holes I get plenty, and if I close the other hole I get something. I leave both holes open and I get nothing; I let them come through both holes and they do not come any more. Or take the point at the centre; you can show that that is higher than the sum of the two single hole curves. You might think that if you were clever enough you could argue that they have some way of going around through the holes back and forth, or they do something complicated, or one splits in half and goes through the two holes, or something similar, in order to explain this phenomenon. Nobody, however, has succeeded in producing an explanation that is satisfactory, because the mathematics in the end are so very simple, the curve is so very simple (fig. 30).

Feynman only adds to the mystery by saying a particle is both a wave and a particle.
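The amplitude rule can be sketched numerically. The unit amplitudes and the phase parameter here are made-up illustrations (the lecture later notes that the amplitudes are really complex numbers, which is what the phase factor below uses):

```python
import cmath

# Two-hole amplitude arithmetic: the probability is the squared magnitude
# of the summed amplitudes, N[12] = |a1 + a2|^2, which differs from
# N[1] + N[2] by an interference (cross) term.

def two_hole(phase):
    """Return (N1 + N2, N12) at a detector position where the two
    paths differ in phase by `phase` (made-up unit amplitudes)."""
    a1 = 1.0                     # amplitude via hole 1
    a2 = cmath.exp(1j * phase)   # amplitude via hole 2, phase-shifted
    n1, n2 = abs(a1) ** 2, abs(a2) ** 2
    n12 = abs(a1 + a2) ** 2      # both holes open, nobody watching
    return n1 + n2, n12

print(two_hole(0.0))        # in phase: the central peak is twice the sum
print(two_hole(cmath.pi))   # out of phase: practically nothing arrives
```

The in-phase case reproduces the remark that the point at the centre is higher than the sum of the two single-hole curves, and the out-of-phase case reproduces the point q where both holes open give practically nothing.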
The wave is just abstract information (a theoretical and statistical prediction) about the distribution of paths and positions of particles over large numbers of experiments. There is no "it" in the wave.

I will summarize, then, by saying that electrons arrive in lumps, like particles, but the probability of arrival of these lumps is determined as the intensity of waves would be. It is in this sense that the electron behaves sometimes like a particle and sometimes like a wave. It behaves in two different ways at the same time (fig. 31). That is all there is to say. I could give a mathematical description to figure out the probability of arrival of electrons under any circumstances, and that would in principle be the end of the lecture — except that there are a number of subtleties involved in the fact that nature works this way. There are a number of peculiar things, and I would like to discuss those peculiarities because they may not be self-evident at this point. To discuss the subtleties, we begin by discussing a proposition which we would have thought reasonable, since these things are lumps. Since what comes is always one complete lump, in this case an electron, it is obviously reasonable to assume that either an electron goes through hole No. 1 or it goes through hole No. 2. It seems very obvious that it cannot do anything else if it is a lump. I am going to discuss this proposition, so I have to give it a name; I will call it 'proposition A'. Now we have already discussed a little what happens with proposition A. If it were true that an electron either goes through hole No. 1 or through hole No. 2, then the total number to arrive would have to be analysable as the sum of two contributions.
The total number which arrive will be the number that come via hole 1, plus the number that come via hole 2. Since the resulting curve cannot be easily analysed as the sum of two pieces in such a nice manner, and since the experiments which determine how many would arrive if only one hole or the other were open do not give the result that the total is the sum of the two parts, it is obvious that we should conclude that this proposition is false.

We can show that the electron can go through just one hole and yet proposition A is not false, because Feynman has ignored something very important - the wave function that determines the probabilities of finding particles is different when both holes are open. The information that generates interference comes from the surrounding environment.

If it is not true that the electron either comes through hole No. 1 or hole No. 2, maybe it divides itself in half temporarily or something. So proposition A is false. That is logic. Unfortunately, or otherwise, we can test logic by experiment. We have to find out whether it is true or not that the electrons come through either hole 1 or hole 2, or maybe they go round through both holes and get temporarily split up, or something.

Why interference patterns show up when both holes are open, even when particles go through just one hole, though we cannot know which hole or we lose the interference.

All we have to do is watch them. And to watch them we need light. So we put behind the holes a source of very intense light. Light is scattered by electrons, bounced off them, so if the light is strong enough you can see electrons as they go by.
We stand back, then, and we look to see whether when an electron is counted we see, or have seen the moment before the electron is counted, a flash behind hole 1 or a flash behind hole 2, or maybe a sort of half flash in each place at the same time. We are going to find out now how it goes, by looking. We turn on the light and look, and lo, we discover that every time there is a count at the detector we see either a flash behind No. 1 or a flash behind No. 2. What we find is that the electron comes one hundred per cent, complete, through hole 1 or through hole 2 — when we look. A paradox! Let us squeeze nature into some kind of a difficulty here. I will show you what we are going to do. We are going to keep the light on and we are going to watch and count how many electrons come through. We will make two columns, one for hole No. 1 and one for hole No. 2, and as each electron arrives at the detector we will note in the appropriate column which hole it came through. What does the column for hole No. 1 look like when we add it all together for different positions of the detector? If I watch behind hole No. 1 what do I see? I see the curve N[1] (fig. 30). That column is distributed just as we thought when we closed hole 2, much the same way whether we are looking or not. If we close hole 2 we get the same distribution in those that arrive as if we were watching hole No. 1; likewise the number that have arrived via hole No. 2 is also a simple curve N[2]. Now look, the total number which arrive has to be the total number. It has to be the sum of the number N[1] plus the number N[2]; because each one that comes through has been checked off in either column 1 or column 2.
The total number which arrive absolutely has to be the sum of these two. It has to be distributed as N[1] + N[2]. But I said it was distributed as the curve N[12]. No, it is distributed as N[1] + N[2]. It really is, of course; it has to be and it is. If we mark with a prime the results when a light is lit, then we find that N[1]' is practically the same as N[1] without the light, and N[2]' is almost the same as N[2]. But the number N[12]' that we see when the light is on and both holes are open is equal to the number that we see through hole 1 plus the number that we see through hole 2. This is the result that we get when the light is on. We get a different answer according to whether I turn on the light or not. If I have the light turned on, the distribution is the curve N[1] + N[2]. If I turn off the light, the distribution is N[12]. Turn on the light and it is N[1] + N[2] again. So you see, nature has squeezed out! We could say, then, that the light affects the result. If the light is on you get a different answer from that when the light is off. You can say too that light affects the behaviour of electrons. If you talk about the motion of the electrons through the experiment, which is a little inaccurate, you can say that the light affects the motion, so that those which might have arrived at the maximum have somehow been deviated or kicked by the light and arrive at the minimum instead, thus smoothing the curve to produce the simple N[1] + N[2] curve. Electrons are very delicate. When you are looking at a baseball and you shine a light on it, it does not make any difference, the baseball still goes the same way. But when you
shine a light on an electron it knocks him about a bit, and instead of doing one thing he does another, because you have turned the light on and it is so strong. Suppose we try turning it weaker and weaker, until it is very dim, then use very careful detectors that can see very dim lights, and look with a dim light. As the light gets dimmer and dimmer you cannot expect the very very weak light to affect the electron so completely as to change the pattern a hundred per cent from N[12] to N[1] + N[2]. As the light gets weaker and weaker, somehow it should get more and more like no light at all. How then does one curve turn into another? But of course light is not like a wave of water. Light also comes in particle-like character, called photons, and as you turn down the intensity of the light you are not turning down the effect, you are turning down the number of photons that are coming out of the source. As I turn down the light I am getting fewer and fewer photons. The least I can scatter from an electron is one photon, and if I have too few photons sometimes the electron will get through when there is no photon coming by, in which case I will not see it. A very weak light, therefore, does not mean a small disturbance, it just means a few photons. The result is that with a very weak light I have to invent a third column under the title 'didn't see'. When the light is very strong there are few in there, and when the light is very weak most of them end in there. So there are three columns, hole 1, hole 2, and didn't see. You can guess what happens. The ones I do see are distributed according to the curve N[1] + N[2]. The ones I do not see are distributed as the curve N[12].
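The three-column bookkeeping amounts to a weighted mixture of the two distributions, which is why the observed curve changes continuously as the light is dimmed. A small sketch (the curve values and the seen-fraction are made-up sample numbers, not data from the lecture):

```python
# Mixture of the two curves as the light is dimmed.
# p_seen: fraction of electrons that scatter a photon and are seen.
# Seen electrons follow N[1] + N[2]; unseen electrons follow N[12].

def observed(p_seen, n_sum, n_12):
    """Observed count at one detector position."""
    return p_seen * n_sum + (1.0 - p_seen) * n_12

n_sum, n_12 = 2.0, 4.0  # sample values at the central maximum
for p_seen in (1.0, 0.5, 0.0):
    print(p_seen, observed(p_seen, n_sum, n_12))
# strong light (p_seen = 1.0) gives the no-interference value 2.0;
# no light (p_seen = 0.0) gives the full interference value 4.0.
```

As p_seen slides from 1 toward 0 the result interpolates linearly between the two curves, matching the "continuous fashion" described in the next passage.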
As I turn the light weaker and weaker I see less and less and a greater and greater fraction are not seen. The actual curve in any case is a mixture of the two curves, so as the light gets weaker it gets more and more like N[12] in a continuous fashion. I am not able here to discuss a large number of different ways which you might suggest to find out which hole the electron went through. It always turns out, however, that it is impossible to arrange the light in any way so that you can tell through which hole the thing is going without disturbing the pattern of arrival of the electrons, without destroying the interference. Not only light, but anything else — whatever you use, in principle it is impossible to do it. You can, if you want, invent many ways to tell which hole the electron is going through, and then it turns out that it is going through one or the other. But if you try to make that instrument so that at the same time it does not disturb the motion of the electron, then what happens is that you can no longer tell which hole it goes through and you get the complicated result again. Heisenberg noticed, when he discovered the laws of quantum mechanics, that the new laws of nature that he had discovered could only be consistent if there were some basic limitation to our experimental abilities that had not been previously recognized. In other words, you cannot experimentally be as delicate as you wish. Heisenberg proposed his uncertainty principle which, stated in terms of our own experiment, is the following. (He stated it in another way, but they are exactly equivalent, and you can get from one to the other.)
'It is impossible to design any apparatus whatsoever to determine through which hole the electron passes that will not at the same time disturb the electron enough to destroy the interference pattern.' No one has found a way around this. I am sure you are itching with inventions of methods of detecting which hole the electron went through; but if each one of them is analysed carefully you will find out that there is something the matter with it. You may think you could do it without disturbing the electron, but it turns out there is always something the matter, and you can always account for the difference in the patterns by the disturbance of the instruments used to determine through which hole the electron comes. This is a basic characteristic of nature, and tells us something about everything. If a new particle is found tomorrow, the kaon — actually the kaon has already been found, but to give it a name let us call it that — and I use kaons to interact with electrons to determine which hole the electron is going through, I already know, ahead of time — I hope — enough about the behaviour of a new particle to say that it cannot be of such a type that I could tell through which hole the electron would go without at the same time producing a disturbance on the electron and changing the pattern from interference to no interference. The uncertainty principle can therefore be used as a general principle to guess ahead at many of the characteristics of unknown objects. They are limited in their likely character.

The Copenhagen interpretation insists we know nothing about a path when not looking (measuring). Our measurements create the path, they said. But Einstein said that goes too far.
We can say, and know, things like the particle is conserving its mass, momentum, energy, spin, etc. along its path. It cannot divide in two and go through both holes!

Let us return to our proposition A — 'Electrons must go either through one hole or another'. Is it true or not? Physicists have a way of avoiding the pitfalls which exist. They make their rules of thinking as follows. If you have an apparatus which is capable of telling which hole the electron goes through (and you can have such an apparatus), then you can say that it either goes through one hole or the other. It does; it always is going through one hole or the other — when you look. But when you have no apparatus to determine through which hole the thing goes, then you cannot say that it either goes through one hole or the other. (You can always say it — provided you stop thinking immediately and make no deductions from it. Physicists prefer not to say it, rather than to stop thinking at the moment.) To conclude that it goes either through one hole or the other when you are not looking is to produce an error in prediction. That is the logical tight-rope on which we have to walk if we wish to interpret nature. This proposition that I am talking about is general. It is not just for two holes, but is a general proposition which can be stated this way. The probability of any event in an ideal experiment — that is just an experiment in which everything is specified as well as it can be — is the square of something, which in this case I have called 'a', the probability amplitude. When an event can occur in several alternative ways, the probability amplitude, this 'a' number, is the sum of the 'a's for each of the various alternatives.
If an experiment is performed which is capable of determining which alternative is taken, the probability of the event is changed; it is then the sum of the probabilities for each alternative. That is, you lose the interference. The question now is, how does it really work? What machinery is actually producing this thing? Nobody knows any machinery. Nobody can give you a deeper explanation of this phenomenon than I have given; that is, a description of it. They can give you a wider explanation, in the sense that they can do more examples to show how it is impossible to tell which hole the electron goes through and not at the same time destroy the interference pattern. They can give a wider class of experiments than just the two slit interference experiment. But that is just repeating the same thing to drive it in. It is not any deeper; it is only wider. The mathematics can be made more precise; you can mention that they are complex numbers instead of real numbers, and a couple of other minor points which have nothing to do with the main idea. But the deep mystery is what I have described, and no one can go any deeper today.

We can not know the path or the position of an individual particle. If we do measure it to learn its path, the experimental results change. There is no longer interference.

What we have calculated so far is the probability of arrival of an electron. The question is whether there is any way to determine where an individual electron really arrives? Of course we are not averse to using the theory of probability, that is calculating odds, when a situation is very complicated.
We throw up a dice into the air, and with the various resistances, and atoms, and all the complicated business, we are perfectly willing to allow that we do not know enough details to make a definite prediction; so we calculate the odds that the thing will come this way or that way. But here what we are proposing, is it not, is that there is probability all the way back: that in the fundamental laws of physics there are odds. Suppose that I have an experiment so set up that with the light out I get the interference situation. Then I say that even with the light on I cannot predict through which hole an electron will go. I only know that each time I look it will be one hole or the other; there is no way to predict ahead of time which hole it will be. The future, in other words, is unpredictable. It is impossible to predict in any way, from any information ahead of time, through which hole the thing will go, or which hole it will be seen behind. That means that physics has, in a way, given up, if the original purpose was — and everybody thought it was — to know enough so that given the circumstances we can predict what will happen next. Here are the circumstances: electron source, strong light source, tungsten plate with two holes: tell me, behind which hole shall I see the electron?
One theory is that the reason you cannot tell through which hole you are going to see the electron is that it is determined by some very complicated things back at the source: it has internal wheels, internal gears, and so forth, to determine which hole it goes through; it is fifty-fifty probability, because, like a die, it is set at random; physics is incomplete, and if we get a complete enough physics then we shall be able to predict through which hole it goes. That is called the hidden variable theory. That theory cannot be true; it is not due to lack of detailed knowledge that we cannot make a prediction.

Feynman is wrong. We can say it went through either hole 1 or hole 2. We just cannot say which hole without destroying the interference pattern!

I said that if I did not turn on the light I should get the interference pattern. If I have a circumstance in which I get that interference pattern, then it is impossible to analyse it in terms of saying it goes through hole 1 or hole 2, because that interference curve is so simple, mathematically a completely different thing from the contribution of the two other curves as probabilities. If it had been possible for us to determine through which hole the electron was going to go if we had the light on, then whether we have the light on or off is nothing to do with it. Whatever gears there are at the source, which we observed, and which permitted us to tell whether the thing was going to go through 1 or 2, we could have observed with the light off, and therefore we could have told with the light off through which hole each electron was going to go.
But if we could do this, the resulting curve would have to be represented as the sum of those that go through hole 1 and those that go through hole 2, and it is not. It must then be impossible to have any information ahead of time about which hole the electron is going to go through, whether the light is on or off, in any circumstance when the experiment is set up so that it can produce the interference with the light off. It is not our ignorance of the internal gears, of the internal complications, that makes nature appear to have probability in it. It seems to be somehow intrinsic. Someone has said it this way — 'Nature herself does not even know which way the electron is going to go'.

The same conditions do not always produce the same results. It is this quantum indeterminism that breaks the causal chain of physical determinism.

A philosopher once said 'It is necessary for the very existence of science that the same conditions always produce the same results'. Well, they do not. You set up the circumstances, with the same conditions every time, and you cannot predict behind which hole you will see the electron. Yet science goes on in spite of it — although the same conditions do not always produce the same results. That makes us unhappy, that we cannot predict exactly what will happen. Incidentally, you could think up a circumstance in which it is very dangerous and serious, and man must know, and still you cannot predict.

Sad these tragic examples scientists imagine, like Schrödinger's Cat.

For instance we could cook up — we'd better not, but we could — a scheme by which we set up a photo cell, and one electron to go through, and if we see it behind hole No.
1 we set off the atomic bomb and start World War III, whereas if we see it behind hole No. 2 we make peace feelers and delay the war a little longer.

The future is unpredictable.

Then the future of man would be dependent on something which no amount of science can predict. The future is unpredictable. What is necessary 'for the very existence of science', and what the characteristics of nature are, are not to be determined by pompous preconditions, they are determined always by the material with which we work, by nature herself. We look, and we see what we find, and we cannot say ahead of time successfully what it is going to look like. The most reasonable possibilities often turn out not to be the situation. If science is to progress, what we need is the ability to experiment, honesty in reporting results — the results must be reported without somebody saying what they would like the results to have been — and finally — an important thing — the intelligence to interpret the results. An important point about this intelligence is that it should not be sure ahead of time what must be. It can be prejudiced and say 'That is very unlikely; I don't like that'. Prejudice is different from absolute certainty. I do not mean absolute prejudice — just bias. As long as you are only biased it does not make any difference, because if your bias is wrong a perpetual accumulation of experiments will perpetually annoy you until they cannot be disregarded any longer. They can only be disregarded if you are absolutely sure ahead of time of some precondition that science has to have.
In fact it is necessary for the very existence of science that minds exist which do not allow that nature must satisfy some preconceived conditions, like those of our philosopher.

See also The Distinction of Past and Future
05/08/13 15:01:57 (12 years ago) • v92 → v93

181 181 The initial conditions for test 4 produce a left facing (originating from the left initial state) shock wave, traveling very slowly to the right, a right traveling contact, and a right traveling shock wave. This makes intuitive sense from the initial conditions:

183 [[Image(test.png, 50%)]] → 183 [[Image(test.png, 0%)]]
184 [[Image(test4.png, 50%)]] → 184 [[Image(test4.png, 0%)]]

Now the EXACT solution at t=0.035 could be computed from the Exact Riemann Solver alone, and this would be by looping through the grid at different dx/dt = dx/0.035, to determine the relative position of this 'grid speed' to the shocks and contact waves generated at the boundary. If the grid speed is ahead of all shocks (far enough to the right, given all waves in this solution are traveling to the right), then the initial state is not yet disturbed, and so the EXACT solution there is just the initial condition there. The same holds true if the grid speed were far enough to the left, such that the entire system of generated waves has not interacted with the initial state. Now the tricky thing is to get the solution inside the wave structure, and honestly, it seems like something you'd only ever want to do with the help of a computer program, looping through tests and conditions to determine the new values of the disturbed initial condition. Here is the plot from Toro of the exact solution of Test 4 using the Exact Riemann Solver:
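The sampling loop described above can be sketched as follows. The wave speeds and star-region states below are placeholders, not the actual Test 4 values; a real Exact Riemann Solver (such as the one in Toro) would supply them:

```python
# Sample an exact Riemann solution at time t by comparing each cell's
# "grid speed" x/t with the wave speeds of the solution.
# All numbers below are placeholders; an exact Riemann solver would
# compute the star-region states and the shock/contact speeds.

S_LEFT, S_CONTACT, S_RIGHT = 0.1, 1.0, 2.5     # placeholder wave speeds (all > 0)
W_L      = {"rho": 1.0, "u": 0.0, "p": 1.0}    # left initial state
W_STAR_L = {"rho": 0.6, "u": 1.0, "p": 0.5}    # placeholder star states
W_STAR_R = {"rho": 0.4, "u": 1.0, "p": 0.5}
W_R      = {"rho": 0.1, "u": 0.0, "p": 0.1}    # right initial state

def sample(x, t):
    """State at position x, time t (waves emanate from x = 0)."""
    s = x / t                # the "grid speed" dx/dt
    if s < S_LEFT:           # left of all waves: undisturbed left state
        return W_L
    if s < S_CONTACT:        # between the left shock and the contact
        return W_STAR_L
    if s < S_RIGHT:          # between the contact and the right shock
        return W_STAR_R
    return W_R               # ahead of all waves: undisturbed right state

# Far enough to the right at t = 0.035 the initial state is not yet disturbed:
print(sample(0.5, 0.035))
```

Looping sample(x, t) over all cell centres reproduces the procedure in the text: cells whose grid speed lies outside the wave fan keep the initial condition, and cells inside it take the star-region values.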
HARP provides a simple expression syntax with which one can specify operations that need to be performed on a product. A list of operations is always provided as a single string where individual operations are separated by a semi-colon (;). Each operation can be either a comparison filter, a membership test filter, or a function call. Strings used in operations should be quoted with double quotes.

Comparison filter

variable operator value [unit]

    Filter a dimension for all variables in the product such that items for which the value of the provided variable does not match the expression get excluded. The variable should be one dimensional or two dimensional (with the first dimension being time). The dimension that gets filtered is the last dimension of the referenced variable.

    Supported operators are:
        == !=
        < <= >= > (for numerical values only)
        =& =| !& (bitfield operators, for integer values only)

    Bitfield operators work such that a =& 5 returns true if both bits 1 and 3 in a are set, a =| 5 returns true if either bits 1 or 3 or both in a are set, and a !& 5 returns true if neither bits 1 and 3 in a are set.

    If a unit is specified, the comparison will be performed in the specified unit. Otherwise, it will be performed in the unit that the variable currently has. Units can only be specified for numerical variables.

Comparison operations can also be performed on the index of a dimension:

index(dimension) operator value

    For instance, index(vertical) == 0 will keep only the lowest vertical level of a vertical profile.

Membership test filter

variable in (value, ...) [unit]
variable not in (value, ...) [unit]

    Exclude measurements that occur or do not occur in the specified list. If a unit is specified, the comparison will be performed in the specified unit. Otherwise, it will be performed in the unit that the variable currently has. Units can only be specified for numerical variables.

Function call

Supported functions:

area_covers_area((lat, ...) [unit], (lon, ...)
[unit]) Exclude measurements whose area does not fully cover the given polygon. Example: Exclude measurements whose area does not fully cover one of the areas of the area mask file. area_covers_point(latitude [unit], longitude [unit]) Exclude measurements whose area does not cover the given point. Example: area_covers_point(52.012, 4.357) area_inside_area((lat, ...) [unit], (lon, ...) [unit]) Exclude measurements whose area is not inside the given polygon. Example: Exclude measurements whose area is not inside one of the areas of the area mask file. area_intersects_area((lat, ...) [unit], (lon, ...) [unit], minimum-overlap-fraction) Exclude measurements whose area does not overlap at least the specified fraction with the given polygon. The fraction is calculated as area(intersection)/min(area(x),area(y)) area_intersects_area(area-mask-file, minimum-overlap-fraction) Exclude measurements whose area does not overlap at least the specified fraction with one the areas of the area mask file. area_intersects_area((lat, ...) [unit], (lon, ...) [unit]) Exclude measurements whose area does not overlap with the given polygon. Exclude measurements whose area does not overlap with one of the areas of the area mask file. For all variables in a product perform an averaging in the time dimension such that all samples end up in a single bin. For all variables in a product perform an averaging in the time dimension such that all samples in the same bin get averaged. A bin is defined by all samples of the given variable that have the same value. Example: bin((variable, ...)) Same as above, but each bin is defined by the unique combinations of values from the variables in the list. Example: bin(collocation-result-file, a|b) For all variables in a product perform an averaging in the time dimension such that all samples in the same bin get averaged. A bin is defined by all samples having the same collocated sample from the dataset that is indicated by the second parameter. 
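The bitfield comparison operators described above (=&, =|, !&) behave like plain bitwise tests. The following Python sketch illustrates their documented semantics (it is an illustrative re-implementation, not HARP code; the function names are made up for this example):

```python
def bitfield_and(a, mask):
    # a =& mask: true if ALL bits of mask are set in a
    return (a & mask) == mask

def bitfield_or(a, mask):
    # a =| mask: true if AT LEAST ONE bit of mask is set in a
    return (a & mask) != 0

def bitfield_not(a, mask):
    # a !& mask: true if NONE of the bits of mask are set in a
    return (a & mask) == 0

# With mask 5 (binary 101, i.e. bits 1 and 3):
print(bitfield_and(0b111, 5))  # True  (both bits set)
print(bitfield_or(0b100, 5))   # True  (one of the bits set)
print(bitfield_not(0b010, 5))  # True  (neither bit set)
```

In an operations string these would be written as, for example, `flag =& 5` on an integer-valued variable.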
Example: bin("collocation-result.csv", b) (the product is part of dataset A and the collocated sample that defines the bin is part of dataset B)

bin_spatial((lat_edge, lat_edge, ...), (lon_edge, lon_edge, ...))
For all variables in a product map all time samples onto a spatial latitude/longitude grid. The latitude/longitude grid is defined by the list of edge values.
Example: bin_spatial((-90,-60,-30,0,30,60,90), (-180,0,180)) (bin data onto latitude bands, separated into an eastern and western hemisphere)

bin_spatial(lat_edge_length, lat_edge_offset, lat_edge_step, lon_edge_length, lon_edge_offset, lon_edge_step)
For all variables in a product map all time samples onto a spatial latitude/longitude grid. The latitude/longitude grid is defined by the length, offset, and step of the edge values.
Example: bin_spatial(7, -90, 30, 3, -180, 180) (this is the same as bin_spatial((-90,-60,-30,0,30,60,90),(-180,0,180)))

clamp(dimension, axis-variable unit, (lower_bound, upper_bound))
Reduce the given dimension such that values of the given axis-variable and associated <axis-variable>_bounds fall within the given lower and upper bounds. The operation will use a derive(axis-variable {[time,]dimension} unit) and derive(<axis-variable>_bounds {[time,]dimension} unit) to determine the current grid and boundaries. These grid+boundaries are then updated to fall within the given lower and upper limits. The updated grid+boundaries are then used to regrid the product in the given dimension. The values +inf and -inf can be used to indicate an unbound edge.
Example:
clamp(vertical, altitude [km], (-inf, 60))
clamp(vertical, pressure [hPa], (+inf, 200))

collocate_left(collocation-result-file)
Apply the specified collocation result file as an index filter assuming the product is part of dataset A.

collocate_left(collocation-result-file, min_collocation_index)
Same as the regular collocate_left operation but only include collocations where collocation_index >= min_collocation_index

collocate_left(collocation-result-file, min_collocation_index, max_collocation_index)
Same as the regular collocate_left operation but only include collocations where min_collocation_index <= collocation_index <= max_collocation_index

collocate_right(collocation-result-file)
Apply the specified collocation result file as an index filter assuming the product is part of dataset B.

collocate_right(collocation-result-file, min_collocation_index)
Same as the regular collocate_right operation but only include collocations where collocation_index >= min_collocation_index

collocate_right(collocation-result-file, min_collocation_index, max_collocation_index)
Same as the regular collocate_right operation but only include collocations where min_collocation_index <= collocation_index <= max_collocation_index

derive(variable [datatype] [unit])
The derive operation without a dimension specification can be used to change the data type or unit of an already existing variable. A variable with the given name should therefore already be in the product (with any kind of dimensions). If a unit conversion is performed and no data type is specified, the variable will be converted to double values.
Example:
derive(altitude [km])
derive(latitude float)

derive(variable [datatype] {dimension-type, ...} [unit])
The derive operation with a dimension specification is used to derive the specified variable from other variables found in the product (i.e. a variable with that name and dimension does not have to exist yet). The --list-derivations option of harpdump can be used to list available variable conversions. The algorithms behind all the conversions are described in the Algorithms section of the documentation. If the datatype is not provided then the default result data type for a conversion will be used (usually double).
If a variable with the given name and dimension specification already exists then this operation will just perform a data type and/or unit conversion on that variable.
Example:
derive(number_density {time,vertical} [molec/m3])
derive(latitude float {time})

derive_smoothed_column(variable {dimension-type, ...} [unit], axis-variable unit, collocation-result-file, a|b, dataset-dir)
Derive the given integrated column value by first deriving a partial column profile variant of the variable and then smoothing/integrating this partial column profile using the column averaging kernel (and a-priori, if available) from a collocated dataset. The fourth parameter indicates which dataset contains the averaging kernel. Before smoothing, the partial column profile is regridded to the grid of the column averaging kernel using the given axis-variable (see also regrid()).
Example: derive_smoothed_column(O3_column_number_density {time} [molec/cm2], altitude [km], "collocation-result.csv", b, "./correlative_data/")

derive_smoothed_column(variable {dimension-type, ...} [unit], axis-variable unit, collocated-file)
Derive the given integrated column value by first deriving a partial column profile variant of the variable and then smoothing/integrating this partial column profile using the column averaging kernel (and a-priori, if available) from a single merged collocated product. Both the product and the collocated product need to have a collocation_index variable that will be used to associate the right collocated grid/avk/apriori to each sample. Before smoothing, the partial column profile is regridded to the grid of the column averaging kernel using the given axis-variable (see also regrid()).
Example: derive_smoothed_column(O3_column_number_density {time} [molec/cm2], altitude [km], "./collocated_file.nc")

exclude(variable, ...)
Mark the specified variable(s) for exclusion from the ingested product. All variables marked for exclusion will be excluded from the ingested product; all other variables will be kept. Variables that do not exist will be ignored. Instead of a variable name, a pattern using ‘*’ and ‘?’ can be provided.
Example: exclude(datetime, *uncertainty*)

flatten(dimension)
Flatten a product for a certain dimension by collapsing the given dimension into the time dimension. The time dimension will thus grow by a factor equal to the length of the given dimension and none of the variables in the product will depend on the given dimension anymore. If the length of the flattened dimension does not equal 1 then: variables that depend more than once on the given dimension will be removed, the index and collocation_index variables will be removed, and time independent variables are made time dependent. Independent dimensions and the time dimension cannot be flattened.
Example:
flatten(latitude);flatten(longitude) (turn a 2D lat/lon grid into a series of individual points)
regrid(vertical, altitude [km], (20));flatten(vertical) (vertically slice the product at 20 km altitude)

keep(variable, ...)
Mark the specified variable(s) for inclusion in the ingested product. All variables marked for inclusion will be kept in the ingested product; all other variables will be excluded. Trying to keep a variable that does not exist will result in an error. Instead of a variable name, a pattern using ‘*’ and ‘?’ can be provided (unmatched patterns will not result in an error).
Example: keep(datetime*, latitude, longitude)

longitude_range(minimum [unit], maximum [unit])
Exclude measurements of which the longitude of the measurement location falls outside the specified range. This function correctly handles longitude ranges that cross the international date line. It checks whether wrap(longitude, minimum, minimum + 360) <= maximum.
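The date-line handling above can be sketched in plain Python. This is an illustrative re-implementation of the documented check (the wrap formula min + (value - min) % (max - min) is taken from the wrap() operation described further below), not HARP's actual code:

```python
def wrap(value, minimum, maximum):
    # Documented wrap() result: min + (value - min) % (max - min)
    return minimum + (value - minimum) % (maximum - minimum)

def in_longitude_range(longitude, minimum, maximum):
    # Documented check: wrap(longitude, minimum, minimum + 360) <= maximum
    return wrap(longitude, minimum, minimum + 360.0) <= maximum

# longitude_range(179.0, 181.0): a 2-degree band around the date line
print(in_longitude_range(180.0, 179.0, 181.0))   # True
print(in_longitude_range(-179.5, 179.0, 181.0))  # True (-179.5 wraps to 180.5)
print(in_longitude_range(0.0, 179.0, 181.0))     # False
```

Because the longitude is wrapped into [minimum, minimum + 360) before the comparison, a measurement at -179.5 degrees correctly falls inside the range 179..181.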
Example:
longitude_range(179.0, 181.0) (select a 2 degree range around the international date line)
longitude_range(-181.0, -179.0) (gives the exact same result as the first example)

point_distance(latitude [unit], longitude [unit], distance [unit])
Exclude measurements whose point location is situated further than the specified distance from the given location.
Example: point_distance(52.012, 4.357, 3 [km])

point_in_area((lat, ...) [unit], (lon, ...) [unit])
Exclude measurements whose point location does not fall inside the given area.
Example: point_in_area((50,50,54,54) [degN],(3,8,8,3) [degE])

point_in_area(area-mask-file)
Exclude measurements whose point location does not fall inside one of the areas from the area mask file.

rebin(dimension, axis-bounds-variable unit, (value, ...))
Regrid all variables in the product for the given dimension using the given axis boundaries variable as target grid. The operation will use a derive(axis-variable {[time,]dimension,2} unit) to determine the current grid. The target grid is specified as a list of N+1 boundary edge values (for N adjacent intervals). Rebinning uses a weighted average of the overlapping intervals of the current grid with the intervals of the target grid.
Example:
rebin(longitude, longitude_bounds [degree_east], (-180, -90, 0, 90, 180))
rebin(vertical, altitude [km], (0.0, 1.5, 3.0, 7.0))

rebin(dimension, axis-bounds-variable unit, length, offset, step)
Regrid all variables in the product for the given dimension using the given axis boundaries variable as target grid. The operation will use a derive(axis-variable {[time,]dimension,2} unit) to determine the current grid. The N+1 edges of the target grid are specified as length, offset, and step parameters. Rebinning uses a weighted average of the overlapping intervals of the current grid with the intervals of the target grid.
Example:
rebin(longitude, longitude_bounds [degree_east], 5, -180, 90)
rebin(vertical, altitude [km], 11, 0, 1.0)

regrid(dimension, axis-variable unit, (value, ...))
Regrid all variables in the product for the given dimension using the given axis variable as target grid. The operation will use a derive(axis-variable {[time,]dimension} unit) to determine the current grid. The target grid is specified as a list of values.
Example: regrid(vertical, altitude [km], (1.0, 2.0, 5.0, 10.0, 15.0, 20.0, 30.0))

regrid(dimension, axis-variable unit, (value, ...), (value, ...))
Regrid all variables in the product for the given dimension using the given axis variable as target grid. The operation will use a derive(axis-variable {[time,]dimension} unit) and derive(<axis-variable>_bounds {[time,]dimension} unit) to determine the current grid and boundaries. The target grid mid points are specified by the first list of values and the target grid boundaries by the second list of values. If there are N mid points, then the list of boundary values can either contain N+1 points if the boundaries are adjacent or 2N points to define each boundary pair separately.
Example:
regrid(vertical, altitude [km], (1.0, 2.0, 5.0), (0.0, 1.5, 3.0, 7.0))
regrid(vertical, altitude [km], (1.0, 2.0, 5.0), (0.5, 1.5, 1.5, 2.5, 4.0, 6.0))

regrid(dimension, axis-variable unit, length, offset, step)
Regrid all variables in the product for the given dimension using the given axis variable as target grid. The operation will use a derive(axis-variable {[time,]dimension} unit) to determine the current grid. The target grid is specified using length, offset, and step parameters.
Example:
regrid(vertical, altitude [km], 10, 0.5, 1.0) (indicating a grid of altitudes 0.5, 1.5, …, 9.5)
regrid(time, datetime [hours since 2017-04-01], 23, 0.5, 1.0)

regrid(dimension, axis-variable unit, collocation-result-file, a|b, dataset-dir)
Regrid all variables in the product for the given dimension using the target grid taken from a collocated dataset. The fourth parameter indicates which dataset contains the target grid.
Example: regrid(vertical, altitude [km], "collocation-result.csv", b, "./correlative_data/")

regrid(dimension, axis-variable unit, collocated-file)
Regrid all variables in the product for the given dimension using the target grid taken from a single merged collocated product. Both the product and the collocated product need to have a collocation_index variable that will be used to associate the right collocated grid to each sample.
Example: regrid(vertical, altitude [km], "./collocated_file.nc")

rename(variable, new_name)
Rename the variable to the new name. Note that this operation should be used with care since it will change the meaning of the data (potentially interpreting it incorrectly in further operations). It is primarily meant to add/remove prefixes (such as surface/tropospheric/etc.) to allow the variable to be used in a more specific (with prefix) or generic (without prefix) way. If a product does not have a variable with the source name but already has a variable with the target name then the rename operation will do nothing (assuming that the target state is already satisfied).
Example: rename(surface_temperature, temperature)

set(option, value)
Set a specific option in HARP. Both the option and value parameters need to be provided as string values (using double quotes). Options will be set ‘globally’ in HARP and will persist for all further operations in the list. After termination of the list of operations, all HARP options will be reverted back to their initial values. Available options are:

afgl86
Determine whether to use the AFGL86 climatology in variable conversions. Possible values are:
○ disabled (default): disable the use of AFGL86 climatology in variable conversions
○ enabled: enable the use of AFGL86 climatology in variable conversions (using seasonal and latitude band dependence)
○ usstd76: enable AFGL86 using US Standard profiles

collocation_datetime
Determine whether to create a collocation_datetime variable when a collocate_left or collocate_right operation is performed. The collocation_datetime variable will contain the datetime of the sample from the other dataset for the collocated pair. Possible values are:
○ disabled (default): variable will not be created
○ enabled: variable will be created

propagate_uncertainty
Determine how to propagate uncertainties for operations that support this (and where there is a choice). Possible values are:
○ uncorrelated (default): assume fully uncorrelated uncertainties
○ correlated: assume fully correlated uncertainties

regrid_out_of_bounds
Determine how to deal with interpolation of target grid values that fall outside the source grid range. Possible values are:
○ nan (default): set values outside the range to NaN
○ edge: use the nearest edge value
○ extrapolate: perform extrapolation

Example:
set("afgl86", "enabled")
set("regrid_out_of_bounds", "extrapolate")

smooth(variable, dimension, axis-variable unit, collocation-result-file, a|b, dataset-dir)
Smooth the given variable in the product for the given dimension using the averaging kernel (and a-priori profile, if available) from a collocated dataset. The fifth parameter indicates which dataset contains the averaging kernel. Before smoothing, the product is regridded to the grid of the averaging kernel using the given axis-variable (see also regrid()).
Example: smooth(O3_number_density, vertical, altitude [km], "collocation-result.csv", b, "./correlative_data/")

smooth((variable, variable, ...), dimension, axis-variable unit, collocation-result-file, a|b, dataset-dir)
Same as above, but then providing a list of variables that need to be smoothed.
For each variable an associated averaging kernel (and associated a-priori, if applicable) needs to be present in the collocated dataset.

smooth(variable, dimension, axis-variable unit, collocated-file)
Smooth the given variable in the product for the given dimension using the averaging kernel (and a-priori profile, if available) from a single merged collocated product. Both the product and the collocated product need to have a collocation_index variable that will be used to associate the right collocated grid/avk/apriori to each sample. Before smoothing, the product is regridded to the grid of the averaging kernel using the given axis-variable (see also regrid()).
Example: smooth(O3_number_density, vertical, altitude [km], "./collocated_file.nc")

smooth((variable, variable, ...), dimension, axis-variable unit, collocated-file)
Same as above, but then providing a list of variables that need to be smoothed. For each variable an associated averaging kernel (and associated a-priori, if applicable) needs to be present in the merged collocated file.

sort(variable)
Reorder a dimension for all variables in the product such that the variable provided as parameter ends up being sorted. The variable should be one dimensional, and the dimension that gets reordered is this dimension of the referenced variable.

sort((variable, ...))
Same as above, but use a list of variables for sorting.

squash(dimension, variable)
Remove the given dimension for the variable, assuming that the content for all items in the given dimension is the same. If the content is not the same an error will be raised.

squash(dimension, (variable, ...))
Same as above, but then providing a list of variables that need to be squashed.

valid(variable)
Filter a dimension for all variables in the product such that invalid values for the variable provided as parameter get excluded (values outside the valid range of the variable, or NaN). This operation is executed similarly to a comparison filter.
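Conceptually, a valid()-style filter keeps only the samples whose value is not NaN and lies within the variable's valid range. A minimal Python sketch of that idea (illustrative only; the function name and the representation of the valid range as explicit min/max arguments are assumptions, not HARP's API):

```python
import math

def valid_filter(values, valid_min=None, valid_max=None):
    """Return the indices of values that are not NaN and fall in [valid_min, valid_max]."""
    keep = []
    for i, v in enumerate(values):
        if math.isnan(v):
            continue  # NaN values are always invalid
        if valid_min is not None and v < valid_min:
            continue  # below the valid range
        if valid_max is not None and v > valid_max:
            continue  # above the valid range
        keep.append(i)
    return keep

pressure = [1013.0, float("nan"), -5.0, 850.0]
print(valid_filter(pressure, valid_min=0.0))  # [0, 3]
```

In HARP itself the valid range comes from the variable's metadata rather than being passed in by the caller.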
wrap(variable [unit], minimum, maximum)
Wrap the values of the variable to the range given by minimum and maximum. The result is: min + (value - min) % (max - min)
Example: wrap(longitude [degree_east], -180, 180)

Collocation result file

The format of the collocation result file is described in the conventions section of the HARP documentation.

Area mask file

A comma separated (csv) file is used as input for area filters. It has the following format: it starts with a header containing latitude, longitude column headers (this header will be skipped by HARP), and each further line then defines a polygon, consisting of the vertices as defined on that line.

Examples

derive(altitude {time} [km]); pressure > 3.0 [bar]
point_distance(-52.5 [degree], 1.0 [rad], 1e3 [km])
index in (0, 10, 20, 30, 40); valid(pressure)

Formal definition

digit = '0'|'1'|'2'|'3'|'4'|'5'|'6'|'7'|'8'|'9' ;
sign = '+'|'-' ;
alpha = 'A'|'B'|'C'|'D'|'E'|'F'|'G'|'H'|'I'|'J'|'K'|'L'|'M'|'N'|'O'|'P'|'Q'|'R'|'S'|'T'|'U'|'V'|'W'|'X'|'Y'|'Z'|
        'a'|'b'|'c'|'d'|'e'|'f'|'g'|'h'|'i'|'j'|'k'|'l'|'m'|'n'|'o'|'p'|'q'|'r'|'s'|'t'|'u'|'v'|'w'|'x'|'y'|'z' ;
character = alpha | digit | ' '|'!'|'"'|'#'|'$'|'%'|'&'|"'"|'('|')'|'*'|'+'|','|'-'|'.'|'/'|':'|';'|'<'|'='|'>'|'?'|'@'|'['|'\'|']'|'^'|'_'|'`'|'{'|'|'|'}'|'~' ;
identifier = alpha, [{alpha | digit | '_'}] ;
variable = identifier ;
variablelist = variable | variablelist, ',', variable ;
intvalue = [sign], {digit} ;
floatvalue = [sign], ('N' | 'n'), ('A' | 'a'), ('N' | 'n') |
             [sign], ('I' | 'i'), ('N' | 'n'), ('F' | 'f') |
             (intvalue, '.', [{digit}] | '.', {digit}), [('D' | 'd' | 'E' | 'e'), intvalue] ;
stringvalue = '"', [{character-('\', '"') | '\' character}], '"' ;
value = intvalue | floatvalue | stringvalue ;
intvaluelist = intvalue | intvaluelist, ',', intvalue ;
floatvaluelist = floatvalue | floatvaluelist, ',', floatvalue ;
stringvaluelist = stringvalue | stringvaluelist, ',', stringvalue ;
valuelist = intvaluelist | floatvaluelist | stringvaluelist ;
unit = '[', [{character-(']')}], ']' ;
datatype = 'int8' | 'int16' | 'int32' | 'float' | 'double' | 'string' ;
dimension = 'time' | 'latitude' | 'longitude' | 'vertical' | 'spectral' | 'independent' ;
dimensionlist =
dimension | dimensionlist, ',', dimension ;
dimensionspec = '{' dimensionlist '}' ;
bit_mask_operator = '=&' | '!&' ;
operator = '==' | '!=' | '>=' | '<=' | '<' | '>' ;
functioncall =
  'area_covers_area', '(', '(', floatvaluelist, ')', [unit], '(', floatvaluelist, ')', [unit], ')' |
  'area_covers_area', '(', stringvalue, ')' |
  'area_covers_point', '(', floatvalue, [unit], ',', floatvalue, [unit], ')' |
  'area_inside_area', '(', '(', floatvaluelist, ')', [unit], '(', floatvaluelist, ')', [unit], ')' |
  'area_inside_area', '(', stringvalue, ')' |
  'area_intersects_area', '(', '(', floatvaluelist, ')', [unit], '(', floatvaluelist, ')', [unit], ',', floatvalue, ')' |
  'area_intersects_area', '(', stringvalue, ',', floatvalue, ')' |
  'area_intersects_area', '(', '(', floatvaluelist, ')', [unit], '(', floatvaluelist, ')', [unit], ')' |
  'area_intersects_area', '(', stringvalue, ')' |
  'bin', '(', [variable], ')' |
  'bin', '(', variablelist, ')' |
  'bin', '(', stringvalue, ',', ( 'a' | 'b' ), ')' |
  'bin_spatial', '(', '(', floatvaluelist, ')', '(', floatvaluelist, ')', ')' |
  'bin_spatial', '(', intvalue, ',', floatvalue, ',', floatvalue, ',', intvalue, ',', floatvalue, ',', floatvalue, ')' |
  'clamp', '(', dimension, ',', variable, [unit], '(', floatvalue, ',', floatvalue, ')', ')' |
  'collocate_left', '(', stringvalue, ')' |
  'collocate_left', '(', stringvalue, ',', intvalue, ')' |
  'collocate_left', '(', stringvalue, ',', intvalue, ',', intvalue, ')' |
  'collocate_right', '(', stringvalue, ')' |
  'collocate_right', '(', stringvalue, ',', intvalue, ')' |
  'collocate_right', '(', stringvalue, ',', intvalue, ',', intvalue, ')' |
  'derive', '(', variable, [datatype], [dimensionspec], [unit], ')' |
  'derive_smoothed_column', '(', variable, dimensionspec, [unit], ',', variable, unit, ',', stringvalue, ',', ( 'a' | 'b' ), ',', stringvalue, ')' |
  'derive_smoothed_column', '(', variable, dimensionspec, [unit], ',', variable, unit, ',', stringvalue, ')' |
  'exclude', '(', variablelist, ')' |
  'flatten', '(', dimension, ')' |
  'keep', '(', variablelist, ')' |
  'longitude_range', '(', floatvalue, [unit], ',', floatvalue, [unit], ')' |
  'point_distance', '(', floatvalue, [unit], ',', floatvalue, [unit], ',', floatvalue, [unit], ')' |
  'point_in_area', '(', '(', floatvaluelist, ')', [unit], '(', floatvaluelist, ')', [unit], ')' |
  'point_in_area', '(', stringvalue, ')' |
  'rebin', '(', dimension, ',', variable, unit, ',', '(', floatvaluelist, ')', ')' |
  'rebin', '(', dimension, ',', variable, unit, ',', intvalue, ',', floatvalue, ',', floatvalue, ')' |
  'regrid', '(', dimension, ',', variable, unit, ',', '(', floatvaluelist, ')', ')' |
  'regrid', '(', dimension, ',', variable, unit, ',', '(', floatvaluelist, ')', ',', '(', floatvaluelist, ')', ')' |
  'regrid', '(', dimension, ',', variable, unit, ',', intvalue, ',', floatvalue, ',', floatvalue, ')' |
  'regrid', '(', dimension, ',', variable, unit, ',', stringvalue, ',', ( 'a' | 'b' ), ',', stringvalue, ')' |
  'regrid', '(', dimension, ',', variable, unit, ',', stringvalue, ')' |
  'rename', '(', variable, ',', variable, ')' |
  'set', '(', stringvalue, ',', stringvalue, ')' |
  'smooth', '(', variable, ',', dimension, ',', variable, unit, ',', stringvalue, ',', ( 'a' | 'b' ), ',', stringvalue, ')' |
  'smooth', '(', '(', variablelist, ')', ',', dimension, ',', variable, unit, ',', stringvalue, ',', ( 'a' | 'b' ), ',', stringvalue, ')' |
  'smooth', '(', variable, ',', dimension, ',', variable, unit, ',', stringvalue, ')' |
  'smooth', '(', '(', variablelist, ')', ',', dimension, ',', variable, unit, ',', stringvalue, ')' |
  'sort', '(', variable, ')' |
  'sort', '(', variablelist, ')' |
  'squash', '(', dimension, ',', variable, ')' |
  'squash', '(', dimension, ',', variablelist, ')' |
  'valid', '(', variable, ')' |
  'wrap', '(', variable, [unit], ',', floatvalue, ',', floatvalue, ')' ;
operationexpr =
  variable, bit_mask_operator, intvalue |
  variable, operator, value, [unit] |
  variable, ['not'], 'in', '(', valuelist, ')', [unit] |
  'index', '(', dimension, ')', operator, intvalue |
  'index', '(', dimension, ')', ['not'], 'in', '(', intvaluelist, ')' |
  functioncall |
  operationexpr, ';', operationexpr ;
operations = operationexpr ';' | operationexpr ;
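Since a list of operations is just a semicolon-separated string, it is easy to build programmatically. A small Python sketch (composing the string is plain Python; the commented harp.import_product call and the file name "sample.nc" are assumptions that require the HARP Python bindings to be installed):

```python
# Compose a HARP operations string from individual operations.
operations = "; ".join([
    "derive(altitude {time} [km])",
    "pressure > 3.0 [bar]",
    "keep(datetime*, latitude, longitude, pressure, altitude)",
])
print(operations)
# derive(altitude {time} [km]); pressure > 3.0 [bar]; keep(datetime*, latitude, longitude, pressure, altitude)

# With the HARP Python bindings installed, the string would typically be
# passed when importing a product, e.g.:
#   import harp
#   product = harp.import_product("sample.nc", operations=operations)
```

Building the string from a list keeps each operation readable and makes it easy to enable or disable individual filters.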
Reviews and Reflections on Math

I am always intrigued by studies that point to the one subject where many American children, homeschooled or not, struggle: mathematics. I have read that, especially in the case of homeschoolers, the struggle is often not with math concepts, but computation—in other words, speed. Enter my children. Funny, given my love for math, I assumed that I’d have at least one child who shared my excitement over solving for x, but as of now, all three seem to be in loathing rather than in love. (Well, loathing might be strong, but they definitely don’t run for their math studies). Take my youngest, as one example. She’s just begun this week learning how to borrow, as in take-one-from-the-tens’-column-and-add-the-ten-to-whatever-is-in-the-ones’-column. When she saw the page, she immediately noticed that she was being asked to do something that was different. She is also currently preoccupied with multiplication, or “times,” as she calls it. Thus, anything in math that looks different to her, she assumes is “times.” “They want me to do times,” she wails, then goes into a whine about how it would be soooo hard, and woe is my life, etc. We went through the concept a few times (the curriculum, Horizons, had introduced the concept in pieces for several days), and by the third problem, she had it. That’s the quick grasp of a concept. Now comes the part that disturbs me far more than slow understanding: she gets bored, or maybe just tired, and the distractions come. I’m glad we’re using Horizons, where several concepts are introduced at once. With most curriculums, children learn one concept to the point of mastery, then the next, then the next… We once used Making Math Meaningful with the older two children. It was word problem heavy, which I loved because, well, that makes math meaningful.
It wasn’t drill intensive; I had the revelation with my oldest that drills had taught her the mechanics of how to complete a math problem, but she had no clue of what she was doing or why. Our first year of homeschool I spent re-teaching much of what she’d been “taught” in school. Making Math Meaningful didn’t prepare our kids well for higher level mathematics, however, and I hated the way that particular curriculum taught multiplication. I learned all of my facts by memorization, up to 12×12; MMM suggested learning the facts of 0-5 and then using those to expand upon all the other facts. Thus, 7×8 becomes 5×8 + 2×8—not a bad method, but different (read uncomfortable) for me, and seemingly more time-consuming. Our son still doesn’t have the command of his higher facts that I would like; he adds extremely fast at the higher levels. We drill on these occasionally, but it occurs to me that he might not ever “get” them as I’d like. As a bit of an aside, one of the homeschooling groups we participated in would often joke about the “math police,” as we called them—an imaginary phantom that would come to the door and punish us for being poor math teachers. When we switched the older two to Teaching Textbooks, we had much better success in terms of understanding, and especially in the area of independent learning. I think, though, that part of the dilemma with building speed is that homeschooling, by its very nature, doesn’t rush a child to complete anything in a given time. We must create those artificial deadlines. In our home, we’ve used Calcu-ladder to allow the kids an opportunity to build speed in computation. But recently, Calcu-ladder went to a CD format, meaning that I have to think ahead of time of when I want the kids to complete drills, determine which drill (a somewhat time-consuming search through the CD), then print the materials so that they’re ready in time. 
Another digression, but bear with me: does anyone else think we lost something significant as home educators in all of the e-text information that is now available? The other day, I needed to print out lapbook materials for my oldest, but I needed a color cartridge. By the time I bought the cartridge, she’d moved to a new chapter. So now, do I make her go back so that the lapbook is complete, or do I live with the gap and move forward? I’ll contemplate that one over our break. So we move forward. I love to think that slow and steady wins the race, but as the oldest takes more of the pre-college exams, I know better. At least, I should say, slow and steady won’t win that particular race. I’m curious, though: how are your kids doing with math concepts? How about speed? Finally, this is a recent shot of my youngest learning to tie her shoes. I love two things about this picture. The first is the way she looks over her glasses, like a much older woman. In truth, she’s not learned to keep her fingers off the lens, so her glasses are often smudged such that she sees better not looking through them. The second thing I love about this picture is her intensity. She is determined to master tying shoes. One day I may see that same look with math, or if not, I’ll be okay with the reason why. 5 thoughts on “Reviews and Reflections on Math” 1. Interesting insights and observations. My children, thus far, enjoy math- although I am noticing less interest in my older child- who is 9 and just beginning multi-digit multiplication and long division. I was a math major in college. I also was a public school teacher and was a site math rep at the school where I taught. Naturally, when I began to homeschool, I presumed math would be easy for me to teach my own children. For the most part, it has been, both conceptually and the computation, especially for my 7 year old son. He can compute in his head with great accuracy and speed- he doesn’t always use traditional algorithms either.
He has this way of manipulating numbers. My children have learned their basic facts through games. I didn’t give them a ton of paper and pencil drills, but we worked on speed and memorization. Once they learned the facts and the concepts behind the operations, they had all they needed to be a problem solver. My daughter prefers doing word problems to pure computation. I think that children in general need to understand that mastery of the facts is like phonics in reading. Mastering phonics helps them decode words and sentences that enable them to extract meaning when they read. When we teach our kids to read, we want the phonics to be automatic and not have the words sounded out each time. It’s the same way with basic math facts. We don’t want our children to be forever counting on their fingers to solve problems. We want the facts to be automatic so that they can be drawn up at any time to work out more complicated or tedious problems. I don’t know how well children would understand this analogy- but if they did, maybe they’d be more motivated to commit those facts to memory. 2. Belinda, I think you would really enjoy reading Outliers, by Malcolm Gladwell. He explains, among many other things, the effect of culture (and other things) on success in such areas as math, for example, why Asians, even American ones, are stereotypically good at it, and why the rest of us typically aren’t, and etc. Very fascinating book. Forgive the grammatical structure of my second sentence. Too lazy today to fix it. When I read here that Making Math Meaningful has lots of word problems, I was excited and thought maybe this was our next series. And then I kept reading… thank you for your input. My sixth grader just finished TT pre-algebra. She is NOT ready for Algebra I, I know, so I’m wondering where to go with her now. Maybe some logic and critical thinking. Math is a puzzle, both literally and figuratively. 🙂 Love the photo of your youngest.
Emily shares her penchant for touching her lenses.

3. Sally, you might try the “Keys to…” workbooks as an interim program. The books are less than $5, and there are several “keys”: to algebra, to fractions, to geometry, etc. We used the Keys to Measurement one summer after some standardized testing revealed that my kids, much younger then, didn’t know how to distinguish the length of different objects. We had a lot of fun measuring the height of stop signs (which I never realized were so tall!) and all kinds of objects around the house. Most importantly, the kids learned much in a little bit of time. I’ll try to see if I can find a link on the internet, but if you have a local homeschool store, they’re sure to carry it. Thanks for stopping by. Your visit always blesses me.

4. Thanks, Belinda. I have considered the Keys to… series. We don’t have a homeschool store nearby, and the one near my parents’ closed. I’ll have to do some sleuthing so I can see those in person. I have to see them before I will buy! Betsy has fractions down, as well as decimals and percents, but I’d love to see what else is available! Especially for that price. I had not checked the box to have follow-up comments emailed to me, so I’m glad you followed through all the way and went to my blog. 🙂 It is now checked, just in case.

5. Hello friend. Long time no see, huh? Now, you know I HAD to comment on a math-related post, right? *LOL* I love math and can’t imagine why anyone wouldn’t! Fortunately, at least one of my girls enjoys math. It’s too soon to tell with the Kindergartener. I love your comments on the various math curricula. I’ve only used two – Saxon and Math-U-See. I like the mastery concept; however, I have learned that “mastery” at one point in time does not necessarily mean “mastery” at a later point in time. It’s true that “if you don’t use it, you lose it”, so review is vital. I agree that we MUST enforce time restrictions – especially on exams.
I have seen my kids crash and burn on standardized tests because they weren’t used to working quickly. I'd love to hear your two cents!!
Square Micrometer to Square Kilometer

Units of measurement use the International System of Units (SI), which provides a standard for measuring the physical properties of matter. Measurements of area are used everywhere from education to industry. Be it buying groceries or cooking, units play a vital role in our daily life, and hence so do their conversions. unitsconverters.com helps with the conversion of different units of measurement, like µm² to km², through multiplicative conversion factors. When you are converting area, you need a Square Micrometers to Square Kilometers converter that is thorough and still easy to use. Converting Square Micrometer to Square Kilometer is easy: select the units and the value you want to convert. If you encounter any issues, this tool gives you the exact conversion of units. You can also see the formula used in the Square Micrometer to Square Kilometer conversion, along with a table representing the entire conversion.
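A conversion like this is just a multiplicative factor: since 1 µm = 10⁻⁶ m and 1 km = 10³ m, it follows that 1 µm² = 10⁻¹² m², 1 km² = 10⁶ m², and therefore 1 µm² = 10⁻¹⁸ km². A minimal sketch (the function name is mine, not from the converter site):

```python
def um2_to_km2(area_um2: float) -> float:
    """Convert an area from square micrometers to square kilometers.

    1 km² = 1e18 µm², so we divide by 1e18.
    """
    return area_um2 / 1e18

print(um2_to_km2(5e18))  # 5.0 km²
```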
Tune XGBoost "subsample" Parameter

The subsample parameter in XGBoost controls the fraction of training samples used in each iteration of the boosting process. It introduces randomness into the training process, which can help prevent overfitting and improve generalization. Smaller values of subsample use fewer samples per iteration, introducing more diversity across the ensemble of trees, while larger values use more samples, making each tree more similar to the others. This example demonstrates how to tune the subsample hyperparameter using grid search with cross-validation to find the optimal value that balances model performance and training time.

```python
import xgboost as xgb
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, KFold

# Create a synthetic dataset
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)

# Configure cross-validation
cv = KFold(n_splits=5, shuffle=True, random_state=42)

# Define hyperparameter grid
param_grid = {
    'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
}

# Set up XGBoost regressor
model = xgb.XGBRegressor(n_estimators=100, learning_rate=0.1, random_state=42)

# Perform grid search
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=cv,
                           scoring='neg_mean_squared_error', n_jobs=-1, verbose=1)
grid_search.fit(X, y)

# Get results
print(f"Best subsample: {grid_search.best_params_['subsample']}")
print(f"Best CV MSE: {-grid_search.best_score_:.4f}")

# Plot subsample vs. MSE
import matplotlib.pyplot as plt

results = grid_search.cv_results_
plt.figure(figsize=(10, 6))
plt.plot(param_grid['subsample'], -results['mean_test_score'],
         marker='o', linestyle='-', color='b')
plt.fill_between(param_grid['subsample'],
                 -results['mean_test_score'] + results['std_test_score'],
                 -results['mean_test_score'] - results['std_test_score'],
                 alpha=0.1, color='b')
plt.title('Subsample vs. MSE')
plt.xlabel('Subsample')
plt.ylabel('CV Average MSE')
plt.show()
```

The resulting plot may look as follows: [figure: CV average MSE versus subsample, with a shaded band for the standard deviation]

In this example, we create a synthetic regression dataset using scikit-learn’s make_regression function. We then set up a KFold cross-validation object to split the data into training and validation sets.

We define a hyperparameter grid param_grid that specifies the range of subsample values we want to test. In this case, we consider values from 0.5 to 1.0. We create an instance of the XGBRegressor with some basic hyperparameters set, such as n_estimators and learning_rate. We then perform the grid search using GridSearchCV, providing the model, parameter grid, cross-validation object, scoring metric (negative mean squared error), and the number of CPU cores to use for parallel computation.

After fitting the grid search object with grid_search.fit(X, y), we can access the best subsample value and the corresponding best cross-validation mean squared error using grid_search.best_params_ and grid_search.best_score_, respectively.

Finally, we plot the relationship between the subsample values and the cross-validation average mean squared error scores using matplotlib. We retrieve the results from grid_search.cv_results_ and plot the mean MSE scores along with a shaded band showing the standard deviation. This visualization helps us understand how the choice of subsample affects the model’s performance and guides us in selecting an appropriate value.

By tuning the subsample hyperparameter using grid search with cross-validation, we can find the optimal value that balances the model’s performance and training time. This helps prevent overfitting and ensures that the model generalizes well to unseen data while also considering computational efficiency.
How to calculate the Uncertainty in chemistry? | Chemistry Questions

Calculating uncertainty is not an easy process, and many people find it challenging to estimate. The eight-step process below will help you calculate it quickly and properly.

How to calculate uncertainty?

1. Specify the measurement process

Before getting into the calculation, it is essential to have a plan: good planning is the best start toward appropriate outcomes. First of all, identify the measurement process. This focuses your uncertainty analysis on what matters most. To specify the measurement process:
• Select the measurement function to evaluate
• Select the procedure or measurement method to be used
• Select the equipment to be used
• Select the desired range of the measurement function
• Determine the test points to be evaluated

2. Identify uncertainty sources

Now that you have identified the measurement process to be evaluated, identify the factors influencing uncertainty in the measurement results. Finding uncertainty sources can be complicated and requires time and effort; this stage is usually the most time-consuming part of evaluating measurement uncertainty, so be patient. To find uncertainty sources:
• Evaluate the measurement process, calibration procedure, or test method
• Evaluate the measurement equations
• Evaluate reference standards, reagents, and equipment
• Identify the minimum required uncertainty sources
• Research various information sources
• Consult an expert

3. Quantify uncertainty sources

Before calculating measurement uncertainty, you first need to determine the magnitude of each contributing factor. This requires some data analysis and reduction:
• Collect data and information
• Select the correct data after appropriate evaluation
• Analyze the data
• Quantify the uncertainty components

4. Characterize uncertainty sources

Characterize each factor by an uncertainty type and a probability distribution:
• Categorize each uncertainty source as Type A or Type B
• Assign a probability distribution to each component

5. Convert uncertainties to standard deviations

After assigning probability distributions, identify the equation required to convert each uncertainty contributor to a standard-deviation equivalent. This reduces each uncertainty source to a 1-sigma level:
• Assign a probability distribution to each uncertainty source
• Find the divisor for the selected probability distribution
• Divide each uncertainty source by its respective divisor

6. Calculate combined uncertainty

After converting the uncertainty sources, calculate the combined uncertainty by the root-sum-of-squares (RSS) method:
• Square the value of each uncertainty component
• Add together all the results from the first step
• Take the square root of the result

7. Calculate expanded uncertainty

You have reached the phase where you are almost done with the uncertainty estimation:
• Calculate the combined uncertainty
• Calculate the effective degrees of freedom
• Select or find a coverage factor (k)
• Multiply the combined uncertainty by the coverage factor

8. Evaluate your uncertainty budget

Now that the expanded uncertainty is calculated, evaluate the uncertainty estimate for appropriateness. Ensure that your measurement uncertainty estimate properly represents the measurement process and is neither under- nor over-estimated.
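Steps 5–7 above can be sketched numerically. The source values and the coverage factor k = 2 below are illustrative assumptions, not taken from the article:

```python
import math

def combined_uncertainty(std_uncertainties):
    """Step 6: root-sum-of-squares (RSS) of the standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in std_uncertainties))

def expanded_uncertainty(u_combined, k=2.0):
    """Step 7: multiply the combined uncertainty by a coverage factor k.
    k = 2 corresponds to roughly 95% coverage for a normal distribution."""
    return k * u_combined

# Three sources already reduced to a 1-sigma level (step 5), e.g. a
# rectangular distribution's half-width divided by sqrt(3):
sources = [0.3, 0.4, 1.2]
u_c = combined_uncertainty(sources)          # sqrt(0.09 + 0.16 + 1.44) = 1.3
U = expanded_uncertainty(u_c, k=2.0)         # 2 * 1.3 = 2.6
print(round(u_c, 3), round(U, 3))
```

Note how the RSS is dominated by the largest component (1.2): small contributors barely move the combined result, which is why identifying the dominant sources in step 2 matters.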
Interest rates are expressed as a percentage of

What Are Interest Rates?

An interest rate is the proportion of a loan that is charged as interest to the borrower, typically expressed as an annual percentage of the principal (e.g., 15%). The annual percentage rate (APR), on the other hand, is the interest rate plus certain additional costs, making it a broader measure of the cost of a loan. Consumers usually pay a price for the goods and services they buy; the cost to buy the right to use someone else's money for a period of time is called the interest rate.

Money borrowed from a financial institution in the form of a loan attracts interest expressed as a percentage of the principal, or original amount borrowed. Conversely, money deposited with financial institutions for longer terms may earn the customer interest, depending on the type of account the customer holds with the lending institution.

For example, if you owe $20,000 on a bank loan at a 6% annual interest rate, and the bank compounds interest monthly, you will owe $100 in interest on your next monthly statement: six percent of $20,000 is $1,200 per year, which translates to $100 on a monthly basis. Likewise, an 8% interest rate for borrowing $100 for a year obligates the borrower to repay $108.

When market interest rates rise, so do bank funding costs. The effect of higher interest rates on banks' net interest margins (the difference between banks' interest income and interest expense, expressed as a percentage of average earning assets) is therefore ambiguous.

The APR of a loan is the interest you pay each year represented as a percentage of the loan balance. For example, if your loan has an APR of 10%, you would pay $100 annually per $1,000 borrowed. The interest rate itself is the cost of borrowing the principal loan amount; the rate can be variable or fixed, but it is always expressed as a percentage. The APR on a mortgage is a better indication of the total cost because it takes additional charges into account and expresses them in terms of an interest rate.

Simple interest is calculated as I = Prt, written I = P × r × t, where P is the principal, r is the rate of interest per period, and t is the number of time periods; the rate and time should be in the same time units, such as months or years. R = r × 100 expresses the rate as a percent.

A 12 percent nominal interest rate translates to a 1 percent monthly periodic rate, or a 0.033 percent daily periodic rate (DPR). The DPR is the nominal rate divided by either 360 days (called "ordinary interest") or 365 days (called "exact interest"), depending on the borrowing terms. Your credit card purchases are subject to a standard interest rate called the annual percentage rate, or APR; this number varies from card to card and person to person depending on factors such as credit scores. The interest charged for a billing cycle is the balance times the DPR times the number of days in the cycle: $3,500 × 0.06944% × 30 days, or $72.91, on a card with a $3,500 balance and a 25% interest rate.

For compound interest, the effective annual rate depends on the nominal ("stated") rate and the number of compounding periods: an investment earning a nominal 7% per year compounded monthly uses 12 periods per year to find the effective rate for one year; compounded quarterly, it uses 4.

The Federal Reserve on Sunday evening cut short-term interest rates to zero. That's not a typo: the Fed's cut drops rates to a range of 0.0% to 0.25%, a decrease of 0.50% from current levels.
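The arithmetic above can be sketched as follows (the function names are mine, for illustration only):

```python
def monthly_interest(principal: float, annual_rate: float) -> float:
    """Interest accrued in one month when an annual rate is applied monthly."""
    return principal * annual_rate / 12

def daily_periodic_rate(annual_rate: float, basis_days: int = 365) -> float:
    """Daily periodic rate (DPR): the nominal annual rate divided by the
    day-count basis (365 for 'exact interest', 360 for 'ordinary interest')."""
    return annual_rate / basis_days

def billing_cycle_interest(balance: float, annual_rate: float,
                           days: int, basis_days: int = 360) -> float:
    """Interest for one billing cycle: balance x DPR x days in the cycle."""
    return balance * daily_periodic_rate(annual_rate, basis_days) * days

# $20,000 at 6% per year, compounded monthly -> $100 of interest this month
print(round(monthly_interest(20_000, 0.06), 2))           # 100.0
# $3,500 balance at 25% APR over a 30-day cycle (360-day basis) -> about $72.92
print(round(billing_cycle_interest(3_500, 0.25, 30), 2))
```

The small difference from the article's $72.91 comes from it truncating the DPR to 0.06944% before multiplying.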
Euclid's Elements

The Elements (Greek: Στοιχεῖα Stoikheîa) is a mathematical treatise consisting of 13 books attributed to the ancient Greek mathematician Euclid c. 300 BC. It is a collection of definitions, postulates, propositions (theorems and constructions), and mathematical proofs of the propositions. The books cover plane and solid Euclidean geometry, elementary number theory, and incommensurable lines. Elements is the oldest extant large-scale deductive treatment of mathematics. It has proven instrumental in the development of logic and modern science, and its logical rigor was not surpassed until the 19th century. Euclid's Elements has been referred to as the most successful^[a]^[b] and influential^[c] textbook ever written. It was one of the very earliest mathematical works to be printed after the invention of the printing press and has been estimated to be second only to the Bible in the number of editions published since the first printing in 1482,^[1] the number reaching well over one thousand.^[d] For centuries, when the quadrivium was included in the curriculum of all university students, knowledge of at least part of Euclid's Elements was required of all students. Not until the 20th century, by which time its content was universally taught through other school textbooks, did it cease to be considered something all educated people had read.

A fragment of Euclid's Elements on part of the Oxyrhynchus papyri

Double-page from Ishaq ibn Hunayn's Arabic translation of the Elements. Iraq, 1270.
Chester Beatty Library

Basis in earlier work

An illumination from a manuscript based on Adelard of Bath's translation of the Elements, c. 1309–1316; Adelard's is the oldest surviving translation of the Elements into Latin, a 12th-century work translated from Arabic.^[2]

Scholars believe that the Elements is largely a compilation of propositions based on books by earlier Greek mathematicians.^[3] Proclus (412–485 AD), a Greek mathematician who lived around seven centuries after Euclid, wrote in his commentary on the Elements: "Euclid, who put together the Elements, collecting many of Eudoxus' theorems, perfecting many of Theaetetus', and also bringing to irrefragable demonstration the things which were only somewhat loosely proved by his predecessors". Pythagoras (c. 570–495 BC) was probably the source for most of books I and II, Hippocrates of Chios (c. 470–410 BC, not the better known Hippocrates of Kos) for book III, and Eudoxus of Cnidus (c. 408–355 BC) for book V, while books IV, VI, XI, and XII probably came from other Pythagorean or Athenian mathematicians.^[4] The Elements may have been based on an earlier textbook by Hippocrates of Chios, who also may have originated the use of letters to refer to figures.^[5] Other similar works are also reported to have been written by Theudius of Magnesia, Leon, and Hermotimus of Colophon.

Transmission of the text

In the 4th century AD, Theon of Alexandria produced an edition of Euclid which was so widely used that it became the only surviving source until François Peyrard's 1808 discovery at the Vatican of a manuscript not derived from Theon's. This manuscript, the Heiberg manuscript, is from a Byzantine workshop around 900 and is the basis of modern editions.^[8] Papyrus Oxyrhynchus 29 is a tiny fragment of an even older manuscript, but only contains the statement of one proposition.
Although Euclid was known to Cicero, for instance, no record exists of the text having been translated into Latin prior to Boethius in the fifth or sixth century.^[2] The Arabs received the Elements from the Byzantines around 760; this version was translated into Arabic under Harun al-Rashid (c. 800).^[2] The Byzantine scholar Arethas commissioned the copying of one of the extant Greek manuscripts of Euclid in the late ninth century.^[9] Although known in Byzantium, the Elements was lost to Western Europe until about 1120, when the English monk Adelard of Bath translated it into Latin from an Arabic translation.^[e] A relatively recent discovery was made of a Greek-to-Latin translation from the 12th century at Palermo, Sicily. The name of the translator is not known other than that he was an anonymous medical student from Salerno who was visiting Palermo in order to translate the Almagest into Latin. The Euclid manuscript is extant and quite complete.^[11] After the translation by Adelard of Bath (known as Adelard I), there was a flurry of translations from Arabic. Notable translators in this period include Herman of Carinthia, who wrote an edition around 1140; Robert of Chester (his manuscripts are referred to collectively as Adelard II, written on or before 1251); Johannes de Tinemue,^[12] possibly also known as John of Tynemouth (his manuscripts are referred to collectively as Adelard III), late 12th century; and Gerard of Cremona (sometime after 1120 but before 1187). The exact details concerning these translations are still an active area of research.^[13] Campanus of Novara relied heavily on these Arabic translations to create his edition (sometime before 1260), which ultimately came to dominate Latin editions until the availability of Greek manuscripts in the 16th century.
There are more than 100 pre-1482 Campanus manuscripts still available today.^[14]^[15]

Euclidis – Elementorum libri XV. Paris, Hieronymum de Marnef & Guillaume Cavelat, 1573 (second edition after the 1557 ed.); in 8:350, (2)pp. THOMAS–STANFORD, Early Editions of Euclid's Elements, n°32. Mentioned in T. L. Heath's translation. Private collection Hector Zenil.

The first printed edition appeared in 1482 (based on Campanus's translation),^[16] and since then it has been translated into many languages and published in about a thousand different editions. Theon's Greek edition was recovered and published in 1533^[17] based on Paris gr. 2343 and Venetus Marcianus 301.^[18] In 1570, John Dee provided a widely respected "Mathematical Preface", along with copious notes and supplementary material, to the first English edition by Henry Billingsley. Copies of the Greek text still exist, some of which can be found in the Vatican Library and the Bodleian Library in Oxford. The manuscripts available are of variable quality, and invariably incomplete. By careful analysis of the translations and originals, hypotheses have been made about the contents of the original text (copies of which are no longer available). Ancient texts which refer to the Elements itself, and to other mathematical theories that were current at the time it was written, are also important in this process. Such analyses are conducted by J. L. Heiberg and Sir Thomas Little Heath in their editions of the text. Also of importance are the scholia, or annotations to the text. These additions, which often distinguished themselves from the main text (depending on the manuscript), gradually accumulated over time as opinions varied upon what was worthy of explanation or further study.

A page with marginalia from the first printed edition of Elements, printed by Erhard Ratdolt in 1482

The Elements is still considered a masterpiece in the application of logic to mathematics.
In historical context, it has proven enormously influential in many areas of science. Scientists Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, Albert Einstein and Sir Isaac Newton were all influenced by the Elements, and applied their knowledge of it to their work.^[19]^[20] Mathematicians and philosophers, such as Thomas Hobbes, Baruch Spinoza, Alfred North Whitehead, and Bertrand Russell, have attempted to create their own foundational "Elements" for their respective disciplines, by adopting the axiomatized deductive structures that Euclid's work introduced. The austere beauty of Euclidean geometry has been seen by many in western culture as a glimpse of an otherworldly system of perfection and certainty. Abraham Lincoln kept a copy of Euclid in his saddlebag, and studied it late at night by lamplight; he related that he said to himself, "You never can make a lawyer if you do not understand what demonstrate means; and I left my situation in Springfield, went home to my father's house, and stayed there till I could give any proposition in the six books of Euclid at sight".^[21]^[22] Edna St. Vincent Millay wrote in her sonnet "Euclid alone has looked on Beauty bare", "O blinding hour, O holy, terrible day, / When first the shaft into his vision shone / Of light anatomized!". Albert Einstein recalled a copy of the Elements and a magnetic compass as two gifts that had a great influence on him as a boy, referring to the Euclid as the "holy little geometry book".^[23]^[24] The success of the Elements is due primarily to its logical presentation of most of the mathematical knowledge available to Euclid. Much of the material is not original to him, although many of the proofs are his. However, Euclid's systematic development of his subject, from a small set of axioms to deep results, and the consistency of his approach throughout the Elements, encouraged its use as a textbook for about 2,000 years. The Elements still influences modern geometry books.
Furthermore, its logical, axiomatic approach and rigorous proofs remain the cornerstone of mathematics.

In modern mathematics

One of the most notable influences of Euclid on modern mathematics is the discussion of the parallel postulate. In Book I, Euclid lists five postulates, the fifth of which stipulates:

If a line segment intersects two straight lines forming two interior angles on the same side that sum to less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles.

The different versions of the parallel postulate result in different geometries. This postulate plagued mathematicians for centuries due to its apparent complexity compared with the other four postulates. Many attempts were made to prove the fifth postulate based on the other four, but they never succeeded. Eventually in 1829, mathematician Nikolai Lobachevsky published a description of acute geometry (or hyperbolic geometry), a geometry which assumed a different form of the parallel postulate. It is in fact possible to create a valid geometry without the fifth postulate entirely, or with different versions of the fifth postulate (elliptic geometry). If one takes the fifth postulate as a given, the result is Euclidean geometry.

• Book 1 contains 5 postulates (including the infamous parallel postulate) and 5 common notions, and covers important topics of plane geometry such as the Pythagorean theorem, equality of angles and areas, parallelism, the sum of the angles in a triangle, and the construction of various geometric figures.
• Book 2 contains a number of lemmas concerning the equality of rectangles and squares, sometimes referred to as "geometric algebra", and concludes with a construction of the golden ratio and a way of constructing a square equal in area to any rectilineal plane figure.
• Book 3 deals with circles and their properties: finding the center, inscribed angles, tangents, the power of a point, Thales' theorem.
• Book 4 constructs the incircle and circumcircle of a triangle, as well as regular polygons with 4, 5, 6, and 15 sides.
• Book 5, on proportions of magnitudes, gives the highly sophisticated theory of proportion probably developed by Eudoxus, and proves properties such as "alternation" (if a : b :: c : d, then a : c :: b : d).
• Book 6 applies proportions to plane geometry, especially the construction and recognition of similar figures.
• Book 7 deals with elementary number theory: divisibility, prime numbers and their relation to composite numbers, Euclid's algorithm for finding the greatest common divisor, and finding the least common multiple.
• Book 8 deals with the construction and existence of geometric sequences of integers.
• Book 9 applies the results of the preceding two books and gives the infinitude of prime numbers and the construction of all even perfect numbers.
• Book 10 proves the irrationality of the square roots of non-square integers (e.g. $\sqrt{2}$) and classifies the square roots of incommensurable lines into thirteen disjoint categories. Euclid here introduces the term "irrational", which has a different meaning than the modern concept of irrational numbers. He also gives a formula to produce Pythagorean triples.^[25]
• Book 11 generalizes the results of book 6 to solid figures: perpendicularity, parallelism, volumes and similarity of parallelepipeds.
• Book 12 studies the volumes of cones, pyramids, and cylinders in detail by using the method of exhaustion, a precursor to integration, and shows, for example, that the volume of a cone is a third of the volume of the corresponding cylinder. It concludes by showing that the volume of a sphere is proportional to the cube of its radius (in modern language) by approximating its volume by a union of many pyramids.
• Book 13 constructs the five regular Platonic solids inscribed in a sphere and compares the ratios of their edges to the radius of the sphere.

Summary of the contents of Euclid's Elements:

Book            I   II  III  IV  V   VI  VII  VIII  IX  X    XI  XII  XIII | Totals
Definitions     23  2   11   7   18  4   22   –     –   16   28  –    –    | 131
Postulates      5   –   –    –   –   –   –    –     –   –    –   –    –    | 5
Common Notions  5   –   –    –   –   –   –    –     –   –    –   –    –    | 5
Propositions    48  14  37   16  25  33  39   27    36  115  39  18   18   | 465

Euclid's method and style of presentation

• "To draw a straight line from any point to any point."
• "To describe a circle with any center and distance."

Euclid, Elements, Book I, Postulates 1 & 3.^[26]

An animation showing how Euclid constructed a hexagon (Book IV, Proposition 15). Every two-dimensional figure in the Elements can be constructed using only a compass and straightedge.^[26]

Scan of pages demonstrating the Pythagorean theorem from a manuscript held in the Vatican Library.

Euclid's axiomatic approach and constructive methods were widely influential. Many of Euclid's propositions were constructive, demonstrating the existence of some figure by detailing the steps he used to construct the object using a compass and straightedge. His constructive approach appears even in his geometry's postulates, as the first and third postulates stating the existence of a line and circle are constructive. Instead of stating that lines and circles exist per his prior definitions, he states that it is possible to 'construct' a line and circle. It also appears that, for him to use a figure in one of his proofs, he needs to construct it in an earlier proposition. For example, he proves the Pythagorean theorem by first inscribing a square on the sides of a right triangle, but only after constructing a square on a given line one proposition earlier. As was common in ancient mathematical texts, when a proposition needed proof in several different cases, Euclid often proved only one of them (often the most difficult), leaving the others to the reader.
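The Euclid's algorithm for the greatest common divisor, mentioned under Book 7, is still the standard algorithm for this task today. A minimal sketch in Python (the function name and the modern remainder formulation are mine; Euclid phrased the procedure in terms of repeated subtraction of magnitudes):

```python
def gcd(a, b):
    # Elements, Book VII: repeatedly replace the pair (a, b) by (b, a mod b);
    # the last nonzero value reached is the greatest common divisor.
    while b:
        a, b = b, a % b
    return a

# e.g. gcd(1071, 462) passes through the remainders 147 and 21
# before terminating with 21.
```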
Later editors such as Theon often interpolated their own proofs of these cases. Euclid's presentation was limited by the mathematical ideas and notations in common currency in his era, and this causes the treatment to seem awkward to the modern reader in some places. For example, there was no notion of an angle greater than two right angles,^[28] the number 1 was sometimes treated separately from other positive integers, and as multiplication was treated geometrically he did not use the product of more than 3 different numbers. The geometrical treatment of number theory may have been because the alternative would have been extremely awkward.

The presentation of each result is given in a stylized form, which, although not invented by Euclid, is recognized as typically classical. It has six different parts: First is the 'enunciation', which states the result in general terms (i.e., the statement of the proposition). Then comes the 'setting-out', which gives the figure and denotes particular geometrical objects by letters. Next comes the 'definition' or 'specification', which restates the enunciation in terms of the particular figure. Then the 'construction' or 'machinery' follows. Here, the original figure is extended to forward the proof. Then, the 'proof' itself follows. Finally, the 'conclusion' connects the proof to the enunciation by stating the specific conclusions drawn in the proof, in the general terms of the enunciation.^[30]

No indication is given of the method of reasoning that led to the result, although the Data does provide instruction about how to approach the types of problems encountered in the first four books of the Elements.^[4] Some scholars have tried to find fault in Euclid's use of figures in his proofs, accusing him of writing proofs that depended on the specific figures drawn rather than the general underlying logic, especially concerning Proposition II of Book I.
However, Euclid's original proof of this proposition is general, valid, and does not depend on the figure used as an example to illustrate one given configuration.^[31]

Euclid's Elements contains errors. Some of the foundational theorems are proved using axioms that Euclid did not state explicitly. A few proofs have errors, by relying on assumptions that are intuitive but not explicitly proven. Mathematician and historian W. W. Rouse Ball put the criticisms in perspective, remarking that "the fact that for two thousand years [the Elements] was the usual text-book on the subject raises a strong presumption that it is not unsuitable for that purpose."^[28] Later editors have added Euclid's implicit axiomatic assumptions in their list of formal axioms.^[32] For example, in the first construction of Book 1, Euclid used a premise that was neither postulated nor proved: that two circles with centers at the distance of their radius will intersect in two points.^[33] Known errors in Euclid date to at least 1882, when Pasch published his missing axiom. Early attempts to find all the errors include Hilbert's geometry axioms and Tarski's. In 2018, Michael Beeson et al. used computer proof assistants to create a new set of axioms similar to Euclid's and generate proofs that were valid with those axioms.^[34] Beeson et al. checked only Book I and found these errors: missing axioms, superfluous axioms, gaps in logic (such as failing to prove points were collinear), missing theorems (such as an angle cannot be less than itself), and outright bad proofs. The bad proofs were in Book I, Proposition 7 and Book I, Proposition 9.

It was not uncommon in ancient times to attribute to celebrated authors works that were not written by them. It is by these means that the apocryphal books XIV and XV of the Elements were sometimes included in the collection.^[35] The spurious Book XIV was probably written by Hypsicles on the basis of a treatise by Apollonius.
The book continues Euclid's comparison of regular solids inscribed in spheres, with the chief result being that the ratio of the surfaces of the dodecahedron and icosahedron inscribed in the same sphere is the same as the ratio of their volumes, the ratio being

$$ \sqrt{\frac{10}{3(5-\sqrt{5})}} = \sqrt{\frac{5+\sqrt{5}}{6}}. $$

The spurious Book XV was probably written, at least in part, by Isidore of Miletus. This book covers topics such as counting the number of edges and solid angles in the regular solids, and finding the measure of dihedral angles of faces that meet at an edge.^[f]

The Italian Jesuit Matteo Ricci (left) and the Chinese mathematician Xu Guangqi (right) published the first Chinese edition of Euclid's Elements (Jīhé yuánběn 幾何原本) in 1607.

Proof of the Pythagorean theorem in Byrne's The Elements of Euclid, published in a colored version in 1847.

• 4th century, Theon of Alexandria, 888 AD manuscript extant.
• 9th century, Pre-Theon Peyrard Vat. gr. 190
• Many medieval editions, pre 1482
• 1460s, Regiomontanus (incomplete)
• 1482, Erhard Ratdolt (Venice), editio princeps (in Latin)^[36]^[37]
• 1533, editio princeps of the Greek text by Simon Grynäus^[38]
• 1557, by Jean Magnien and Pierre de Montdoré, reviewed by Stephanus Gracilis (only propositions, no full proofs, includes original Greek and the Latin translation)
• 1572, Commandinus Latin edition
• 1574, Christoph Clavius
• 1883–1888, Johan Ludvig Heiberg

1. 1570, Henry Billingsley
2. 1651, Thomas Rudd
3. 1660, Isaac Barrow
4. 1661, John Leeke and Geo. Serle
5. 1685, William Hallifax
6. 1705, Charles Scarborough
7. 1708, John Keill
8. 1714, W. Whiston
9. 1756, Robert Simson
10. 1781, 1788 James Williamson
11. 1781, William Austin
12. 1795, John Playfair
13. 1826, George Phillips
14. 1828, Dionysius Lardner
15. 1833, Thomas Perronet Thompson
16. 1862, Isaac Todhunter
17. 1908, Thomas Little Heath (revised in 1926) from Johan Ludvig Heiberg's edition
18.
1939, R. Catesby Taliaferro

Other languages

• 1505, Bartolomeo Zamberti (Latin)
• 1543, Nicolo Tartaglia (Italian)
• 1557, Jean Magnien and Pierre de Montdoré, reviewed by Stephanus Gracilis (Greek to Latin)
• 1558, Johann Scheubel (German)
• 1562, Jacob Kündig (German)
• 1562, Wilhelm Holtzmann (German)
• 1564–1566, Pierre Forcadel de Béziers (French)
• 1572, Commandinus (Latin)
• 1575, Commandinus (Italian)
• 1576, Rodrigo de Zamorano (Spanish)
• 1594, Typographia Medicea (edition of the Arabic translation of The Recension of Euclid's "Elements"^[39])
• 1604, Jean Errard de Bar-le-Duc (French)
• 1606, Jan Pieterszoon Dou (Dutch)
• 1607, Matteo Ricci, Xu Guangqi (Chinese)
• 1613, Pietro Cataldi (Italian)
• 1615, Denis Henrion (French)
• 1617, Frans van Schooten (Dutch)
• 1637, L. Carduchi (Spanish)
• 1639, Pierre Hérigone (French)
• 1651, Heinrich Hoffmann (German)
• 1663, Domenico Magni (Italian from Latin)
• 1672, Claude François Milliet Dechales (French)
• 1680, Vitale Giordano (Italian)
• 1689, Jacob Knesa (Spanish)
• 1690, Vincenzo Viviani (Italian)
• 1694, Ant. Ernst Burkh v. Pirckenstein (German)
• 1695, Claes Jansz Vooght (Dutch)
• 1697, Samuel Reyher (German)
• 1702, Hendrik Coets (Dutch)
• 1714, Chr. Schessler (German)
• 1720s, Jagannatha Samrat (Sanskrit, based on the Arabic translation of Nasir al-Din al-Tusi)^[40]
• 1731, Guido Grandi (abbreviation to Italian)
• 1738, Ivan Satarov (Russian from French)
• 1744, Mårten Strömer (Swedish)
• 1749, Dechales (Italian)
• 1749, Methodios Anthrakitis (Μεθόδιος Ανθρακίτης) (Greek)
• 1745, Ernest Gottlieb Ziegenbalg (Danish)
• 1752, Leonardo Ximenes (Italian)
• 1763, Pibo Steenstra (Dutch)
• 1768, Angelo Brunelli (Portuguese)
• 1773, 1781, J. F. Lorenz (German)
• 1780, Baruch Schick of Shklov (Hebrew)^[41]
• 1789, Pr. Suvoroff nad Yos. Nikitin (Russian from Greek)
• 1803, H. C. Linderup (Danish)
• 1804, François Peyrard (French).
Peyrard discovered in 1808 the Vaticanus Graecus 190, which enabled him to provide a first definitive version in 1814–1818.

• 1807, Józef Czech (Polish based on Greek, Latin and English editions)
• 1807, J. K. F. Hauff (German)
• 1818, Vincenzo Flauti (Italian)
• 1820, Benjamin of Lesbos (Modern Greek)
• 1828, Joh. Josh and Ign. Hoffmann (German)
• 1833, E. S. Unger (German)
• 1836, H. Falk (Swedish)
• 1844, 1845, 1859, P. R. Bråkenhjelm (Swedish)
• 1850, F. A. A. Lundgren (Swedish)
• 1850, H. A. Witt and M. E. Areskong (Swedish)
• 1865, Sámuel Brassai (Hungarian)
• 1873, Masakuni Yamada (Japanese)
• 1880, Vachtchenko-Zakhartchenko (Russian)
• 1897, Thyra Eibe (Danish)
• 1901, Max Simon (German)
• 1907, František Servít (Czech)^[42]
• 1953, 1958, 1975, Evangelos Stamatis (Ευάγγελος Σταμάτης) (Modern Greek)
• 1999, Maja Hudoletnjak Grgić (Book I-VI) (Croatian)^[43]
• 2009, Irineu Bicudo (Portuguese)
• 2019, Ali Sinan Sertöz (Turkish)^[44]
• 2022, Ján Čižmár (Slovak)

Book I Editions

• 1886, Euclid Book I Hall & Stevens (English)
• 1891, 1896, The Harpur Euclid by Edward Langley and Seys Phillips (English)
• 1949, Henry Regnery Company (English)

Selected editions currently in print

Selected editions based on Oliver Byrne's edition

• The first six books of the Elements of Euclid, edited by Werner Oechslin, Taschen, 2010, ISBN 3836517752, a facsimile of Byrne (1847).
• Oliver Byrne's Elements of Euclid, Art Meets Science, 2022, ISBN 978-1528770439, a facsimile of Byrne (1847).
• Euclid's Elements: Completing Oliver Byrne's work, Kronecker Wallis, 2019, a modern redrawing extended to the rest of the Elements, originally launched on Kickstarter.

Free versions

• Euclid's Elements Redux, Volume 1, contains books I–III, based on John Casey's translation.^[45]
• Euclid's Elements Redux, Volume 2, contains books IV–VIII, based on John Casey's translation.^[45]

See also

1. ^ Wilson 2006, p.
278 states, "Euclid's Elements subsequently became the basis of all mathematical education, not only in the Roman and Byzantine periods, but right down to the mid-20th century, and it could be argued that it is the most successful textbook ever written." 2. ^ Boyer1991, p. 100 notes, "As teachers at the school he called a band of leading scholars, among whom was the author of the most fabulously successful mathematics textbook ever written – the Elements (Stoichia) of Euclid". 3. ^ Boyer1991, p. 119 notes, "The Elements of Euclid not only was the earliest major Greek mathematical work to come down to us, but also the most influential textbook of all times. [...]The first printed versions of the Elements appeared at Venice in 1482, one of the very earliest of mathematical books to be set in type; it has been estimated that since then at least a thousand editions have been published. Perhaps no book other than the Bible can boast so many editions, and certainly no mathematical work has had an influence comparable with that of Euclid's Elements". 4. ^ Bunt,Jones&Bedient1988, p. 142 state, "the Elements became known to Western Europe via the Arabs and the Moors. There, the Elements became the foundation of mathematical education. More than 1000 editions of the Elements are known. In all probability, it is, next to the Bible, the most widely spread book in the civilization of the Western world." 5. ^ One older work claims Adelard disguised himself as a Muslim student to obtain a copy in Muslim Córdoba.^[10] However, more recent biographical work has turned up no clear documentation that Adelard ever went to Muslim-ruledSpain, although he spent time in Norman-ruled Sicily and Crusader-ruled Antioch, both of which had Arabic-speaking populations. Charles Burnett, Adelard of Bath: Conversations with his Nephew (Cambridge, 1999); Charles Burnett, Adelard of Bath (University of London, 1987). 6. ^ Boyer1991, pp. 
118–119 writes, "In ancient times it was not uncommon to attribute to a celebrated author works that were not by him; thus, some versions of Euclid's Elements include a fourteenth and even a fifteenth book, both shown by later scholars to be apocryphal. The so-called Book XIV continues Euclid's comparison of the regular solids inscribed in a sphere, the chief results being that the ratio of the surfaces of the dodecahedron and icosahedron inscribed in the same sphere is the same as the ratio of their volumes, the ratio being that of the edge of the cube to the edge of the icosahedron, that is, $\sqrt{10/[3(5-\sqrt{5})]}$. It is thought that this book may have been composed by Hypsicles on the basis of a treatise (now lost) by Apollonius comparing the dodecahedron and icosahedron. [...] The spurious Book XV, which is inferior, is thought to have been (at least in part) the work of Isidore of Miletus (fl. ca. A.D. 532), architect of the cathedral of Holy Wisdom (Hagia Sophia) at Constantinople. This book also deals with the regular solids, counting the number of edges and solid angles in the solids, and finding the measures of the dihedral angles of faces meeting at an edge.
1. ^ Boyer 1991, p. 100.
2. ^ ^a ^b ^c Russell 2013, p. 177.
3. ^ Van der Waerden 1975, p. 197.
4. ^ ^a ^b Ball 1915, p. 54.
5. ^ Unguru, S. (1985). Digging for Structure into the Elements: Euclid, Hilbert, and Mueller. Historia Mathematica 12, 176
6. ^ Zhmud, L. (1998). Plato as "Architect of Science". Phronesis 43, 211
7. ^ The Earliest Surviving Manuscript Closest to Euclid's Original Text (Circa 850); an image Archived 2009-12-20 at the Wayback Machine of one page
8. ^ Reynolds & Wilson 1991, p. 57.
9. ^ Murdoch, John E. (1967). "Euclides Graeco-Latinus: A Hitherto Unknown Medieval Latin Translation of the Elements Made Directly from the Greek". Harvard Studies in Classical Philology. 71: 249–302. doi:10.2307/310767. JSTOR 310767.
10. ^ Knorr, Wilbur R. (1990).
"John of Tynemouth alias John of London: Emerging Portrait of a Singular Medieval Mathematician". The British Journal for the History of Science. 23 (3): 293–330. doi:10.1017/S0007087400044009. ISSN 0007-0874. JSTOR 4026757. S2CID 144172844.
11. ^ Menso, Folkerts (1989). Euclid in Medieval Europe (PDF). Benjamin catalogue.
12. ^ Campanus, Pal. lat. 1348. "DigiVatLib". digi.vatlib.it. Retrieved 20 November 2023.
13. ^ Busard 2005, p. 1.
14. ^ "Mathematical Treasures - Greek Edition of Euclid's Elements | Mathematical Association of America". maa.org.
15. ^ Thomas, Heath (1956). The thirteen books of Euclid's Elements. Vol. 1: Introduction and books I, II (Second revised with additions ed.). New York: Dover Publications. ISBN 978-0-486-60088-8.
16. ^ Liptak, Andrew (2 September 2017). "One of the world's most influential math texts is getting a beautiful, minimalist edition". The Verge.
17. ^ Grabiner, Judith. "How Euclid once ruled the world". Plus Magazine.
18. ^ Herschbach, Dudley. "Einstein as a Student" (PDF). Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA. p. 3. Archived from the original (PDF) on 2009-02-26: about Max Talmud, who visited on Thursdays for six years.
19. ^ Prindle, Joseph. "Albert Einstein – Young Einstein". www.alberteinsteinsite.com. Archived from the original on 10 June 2017. Retrieved 29 April 2018.
20. ^ Joyce, D. E. (June 1997), "Book X, Proposition XXIX", Euclid's Elements, Clark University
21. ^ ^a ^b Hartshorne 2000, p. 18.
22. ^ Hartshorne 2000, pp. 18–20.
23. ^ ^a ^b Ball 1915, p. 55.
24. ^ Ball 1915, pp. 54, 58, 127.
25. ^ Heath 1963, p. 216.
26. ^ Toussaint 1993, pp. 12–23.
27. ^ Heath 1956a, p. 62.
28. ^ Heath 1956a, p. 242.
29. ^ Beeson, Michael; Narboux, Julien; Wiedijk, Freek (October 18, 2018). "Proof-checking Euclid" (PDF). Retrieved 25 September 2024.
30. ^ Boyer 1991, pp. 118–119.
31. ^ Alexanderson & Greenwalt 2012, p. 163
32.
^ "Editio Princeps of Euclid's Elements, the Most Famous Textbook Ever Published: History of Information". www.historyofinformation.com. Retrieved 2023-07-28.
33. ^ "The First Printed Edition of the Greek Text of Euclid is also the First Edition to Include the Diagrams within the Text: History of Information". historyofinformation.com. Retrieved
34. ^ Sarma 1997, pp. 460–461.
35. ^ "JNUL Digitized Book Repository". huji.ac.il. 22 June 2009. Archived from the original on 22 June 2009. Retrieved 29 April 2018.
• Alexanderson, Gerald L.; Greenwalt, William S. (2012), "About the cover: Billingsley's Euclid in English", Bulletin of the American Mathematical Society, New Series, 49 (1): 163–167, doi:10.1090/
• Artmann, Benno: Euclid – The Creation of Mathematics. New York, Berlin, Heidelberg: Springer 1999, ISBN 0-387-98423-2
• Ball, Walter William Rouse (1915) [1st ed. 1888]. A Short Account of the History of Mathematics (6th ed.). MacMillan.
• Boyer, Carl B. (1991). "Euclid of Alexandria". A History of Mathematics (Second ed.). John Wiley & Sons. ISBN 0-471-54397-7.
• Bunt, Lucas Nicolaas Hendrik; Jones, Phillip S.; Bedient, Jack D. (1988). The Historical Roots of Elementary Mathematics. Dover.
• Busard, H. L. L. (2005). "Introduction to the Text". Campanus of Novara and Euclid's Elements. Stuttgart: Franz Steiner Verlag. ISBN 978-3-515-08645-5.
• Callahan, Daniel; Casey, John (2015). Euclid's "Elements" Redux.
• Dodgson, Charles L.; Hagar, Amit (2009). "Introduction". Euclid and His Modern Rivals. Cambridge University Press. ISBN 978-1-108-00100-7.
• Hartshorne, Robin (2000). Geometry: Euclid and Beyond (2nd ed.). New York, NY: Springer. ISBN 9780387986500.
• Heath, Thomas L. (1956a). The Thirteen Books of Euclid's Elements. Vol. 1. Books I and II (2nd ed.). New York: Dover Publications. OL 22193354M.
• Heath, Thomas L. (1956b). The Thirteen Books of Euclid's Elements. Vol. 2. Books III to IX (2nd ed.). New York: Dover Publications. OL 7650092M.
• Heath, Thomas L. (1956c). The Thirteen Books of Euclid's Elements. Vol. 3.
Books X to XIII and Appendix (2nd ed.). New York: Dover Publications. OCLC 929205858. Heath's authoritative translation plus extensive historical research and detailed commentary throughout the text.
• Heath, Thomas L. (1963). A Manual of Greek Mathematics. Dover Publications. ISBN 978-0-486-43231-1.
• Ketcham, Henry (1901). The Life of Abraham Lincoln. New York: Perkins Book Company.
• Nasir al-Din al-Tusi (1594). Kitāb taḥrīr uṣūl li-Uqlīdus [The Recension of Euclid's "Elements"] (in Arabic).
• Reynolds, Leighton Durham; Wilson, Nigel Guy (9 May 1991). Scribes and scholars: a guide to the transmission of Greek and Latin literature (2nd ed.). Oxford: Clarendon Press. ISBN
• Russell, Bertrand (2013). History of Western Philosophy: Collectors Edition. Routledge. ISBN 978-1-135-69284-1.
• Sarma, K. V. (1997). Selin, Helaine (ed.). Encyclopaedia of the history of science, technology, and medicine in non-western cultures. Springer. ISBN 978-0-7923-4066-9.
• Servít, František (1907). Eukleidovy Zaklady (Elementa) [Euclid's Elements] (PDF) (in Czech).
• Sertöz, Ali Sinan (2019). Öklidin Elemanlari: Ciltli [Euclid's Elements] (in Turkish). Tübitak. ISBN 978-605-312-329-3.
• Toussaint, Godfried (1993). "A new look at Euclid's second proposition". The Mathematical Intelligencer. 15 (3): 12–24. doi:10.1007/BF03024252. ISSN 0343-6993. S2CID 26811463.
• Van der Waerden, Bartel Leendert (1975). Science awakening. Noordhoff International. ISBN 978-90-01-93102-5.
• Wilson, Nigel Guy (2006). Encyclopedia of Ancient Greece. Routledge.
• Euklid (1999). Elementi I-VI. Translated by Hudoletnjak Grgić, Maja. KruZak. ISBN 953-96477-6-2.

This page was last edited on 25 September 2024, at 15:47.
JCA 1033
Journal of Convex Analysis 10 (2003), No. 2, 531--539
Copyright Heldermann Verlag 2003

On Weak*-Extreme Points in Banach Spaces

S. Dutta, Stat-Math Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata 700108, India, sudipta_r@isical.ac.in
T. S. S. R. K. Rao, Stat-Math Unit, Indian Statistical Institute, R. V. College Post, Bangalore 560059, India, tss@isibang.ac.in

We study the extreme points of the unit ball of a Banach space that remain extreme when considered, under the canonical embedding, in the unit ball of the bidual. We give an example of a strictly convex space whose unit vectors are extreme points in the unit ball of the second dual, but none are extreme points in the unit ball of the fourth dual. For the space of vector-valued continuous functions on a compact set, we show that any function whose values are weak*-extreme points is a weak*-extreme point. We explore the relation between weak*-extreme points and the dual notion of very smooth points. We show that if a Banach space X has a very smooth point in every equivalent norm, then X* has the Radon-Nikodym property.

Keywords: Higher duals, M-ideals, extreme points.
MSC 2000: 46B20.

FullText-pdf (285 KB)
Fly by Night Physics: How Physicists Use the Backs of Envelopes
Published (US): Oct 27, 2020
Published (UK): Nov 17, 2020
7 x 10 in. 76 b/w illus. 2 tables.
Physics & Astronomy

Presented in A. Zee's incomparably engaging style, this book introduces physics students to the practice of using physical reasoning and judicious guesses to get at the crux of a problem. An essential primer for advanced undergraduates and beyond, Fly by Night Physics reveals the simple and effective techniques that researchers use to think through a problem to its solution, or failing that, to smartly guess the answer, before starting any calculations. In typical physics classrooms, students seek to master an enormous toolbox of mathematical methods, which are necessary to do the precise calculations used in physics. Consequently, students often develop the unfortunate impression that physics consists of well-defined problems that can be solved with tightly reasoned and logical steps. Idealized textbook exercises and homework problems reinforce this erroneous impression. As a result, even the best students can find themselves completely unprepared for the challenges of doing actual research. In reality, physics is replete with back-of-the-envelope estimates, order of magnitude guesses, and fly by night leaps of logic. Including exciting problems related to cutting-edge topics in physics, from Hawking radiation to gravity waves, this indispensable book will help students more deeply understand the equations they have learned and develop the confidence to start flying by night to arrive at the answers they seek. For instructors, a solutions manual is available upon request.
Separation of the monotone NC hierarchy

We prove tight lower bounds, of up to n^ε, for the monotone depth of functions in monotone-P. As a result we achieve the separation of the following classes.

1. monotone-NC ≠ monotone-P.
2. For all i ≥ 1, monotone-NC^i ≠ monotone-NC^{i+1}.
3. More generally: For any integer function D(n), up to n^ε (for some ε > 0), we give an explicit example of a monotone Boolean function that can be computed by polynomial size monotone Boolean circuits of depth D(n), but that cannot be computed by any (fan-in 2) monotone Boolean circuits of depth less than Const·D(n) (for some constant Const).

Only a separation of monotone-NC^1 from monotone-NC^2 was previously known. Our argument is more general: we define a new class of communication complexity search problems, referred to below as DART games, and we prove a tight lower bound for the communication complexity of every member of this class. As a result we get lower bounds for the monotone depth of many functions. In particular, we get the following bounds:

1. For st-connectivity, we get a tight lower bound of Ω(log^2 n). That is, we get a new proof of Karchmer-Wigderson's theorem, as an immediate corollary of our general result.
2. For the k-clique function, with k ≤ n^ε, we get a tight lower bound of Ω(k log n). Only a bound of Ω(k) was previously known.
A157655 - OEIS

If we allow a zero digit in p, we generate . One could conjecture that the digit 1 must always appear in the entries of this sequence. The idea for this sequence and the description was motivated by

The digits of 11411 add up to 8. The product of the digits is 4. So 11411 + 8 + 4 = 11423, the next prime after 11411. So 11411 is in the sequence.

zpQ[n_] := Module[{idn = IntegerDigits[n]}, FreeQ[idn, 0] && NextPrime[n] == n + Total[idn] + Times @@ idn];
Select[Prime[Range[11*10^7]], zpQ] (* Harvey P. Dale, Jan 14 2016 *)

(Other) The link has the GCC/GMP program that was used to generate this sequence.
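The membership test above can also be sketched in Python. This is a hedged translation of the Mathematica code, not the official OEIS program: the function names are mine, and a simple trial-division primality check stands in for Mathematica's NextPrime.

```python
def is_prime(n):
    # Trial division; adequate for small terms like 11411.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def next_prime(n):
    m = n + 1
    while not is_prime(m):
        m += 1
    return m

def in_sequence(p):
    # p qualifies if it is a zeroless prime and the next prime equals
    # p + (digit sum of p) + (digit product of p).
    ds = [int(d) for d in str(p)]
    if 0 in ds or not is_prime(p):
        return False
    prod = 1
    for d in ds:
        prod *= d
    return next_prime(p) == p + sum(ds) + prod
```

For the worked example above: in_sequence(11411) is True, since sum 8 plus product 4 carries 11411 exactly to the next prime, 11423.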
Bordism theory and actions of finite groups

Organizers: Andrés Ángel (ja.angel908@uniandes.edu.co), Ana González de los Santos (anagon@fing.edu.uy), Rita Jiménez Rolland (rita@im.unam.mx), Carlos Segovia (csegovia@matem.unam.mx)

• Wednesday 15

15:00 - 15:45 Diffeomorphisms of reducible three manifolds and bordisms of group actions on the torus
Sam Nariman (Purdue University, United States)
I will first talk about joint work with K. Mann on certain rigidity results for group actions on the torus. In particular, we show that if the torus action on itself extends to a \(C^0\) action on a three manifold \(M\) that bounds the torus, then \(M\) is homeomorphic to the solid torus. This also leads to the first example of a smooth action on the torus via diffeomorphisms that is isotopic to the identity but nontrivial in the bordisms of group actions. Time permitting, I will also talk about certain finiteness results for the classifying space of reducible three manifolds which came out of the cohomological aspect of the above project.

15:45 - 16:30 \(Z_p\)-bordism and the mod(\(p\)) Borsuk-Ulam Theorem
Alice Kimie Miwa Libardi (Universidad Estatal Paulista, Brazil)
Crabb-Gonçalves-Libardi-Pergher classified, for given integers \(m, n \geq 1\), the bordism class of a closed smooth \(m\)-manifold \(X^m\) with a free smooth involution \(\tau\) with respect to the validity of the Borsuk-Ulam property that for every continuous map \(\varphi: X^m \to R^n\), there exists a point \(x \in X^m\) such that \(\varphi(x) = \varphi(\tau(x))\). In this work, together with Barbaresco-de Mattos-dos Santos-da Silva, we consider the same problem for free \(Z_p\) actions.

16:45 - 17:30 Extending free group actions on surfaces
Carlos Segovia (Universidad Nacional Autónoma de México, Mexico)
We are interested in the question of when a free action of a finite group on a closed oriented surface extends to a non-necessarily free action on a 3-manifold.
We show the answer to this question is affirmative for abelian, dihedral, symmetric and alternating groups. We also present the proof for finite Coxeter groups.

17:30 - 18:15 Stolz's positive scalar curvature surgery exact sequence and low dimensional group homology
Noé Bárcenas (Universidad Nacional Autónoma de México, Mexico)
We will show how positive knowledge about the Baum-Connes Conjecture for a group, together with a Pontryagin character and knowledge about the conjugacy classes of finite subgroups and their low dimensional homology, provides an estimation of the degree of non-rigidity of positive scalar curvature metrics on spin high dimensional manifolds with the given fundamental group.

• Thursday 16

15:00 - 15:45 On the evenness conjecture for equivariant unitary bordism
Bernardo Uribe (Universidad del Norte, Colombia)
The evenness conjecture for equivariant unitary bordism states that these homology groups are free modules of even degree with respect to the unitary bordism ring. This conjecture is known to be true for all finite abelian groups and some semidirect products of abelian groups. In this talk I will discuss some progress made with the group of quaternions of order 8, together with an approach of Eric Samperton and Carlos Segovia to construct a free action on a surface which does not bound equivariantly. This last construction might provide a counterexample to the evenness conjecture in dimension 2.

15:45 - 16:30 The Loch Ness Monster as Homology Covers
Rubén A. Hidalgo (Universidad de la Frontera, Chile)
The Loch Ness Monster (LNM) is, up to homeomorphism, the unique orientable, second countable, connected and Hausdorff surface of infinite genus and exactly one end. In this talk I would like to discuss some properties of the LNM. In particular, we note that the LNM is the homology cover of most surfaces, and also that it is the derived cover of uniformizations of Riemann surfaces (with a few exceptions).
This is joint work, in progress, with Ara Basmajian.

16:45 - 17:30 When does a free action of a finite group on a surface extend to a (possibly non-free) action on a 3-manifold? Eric G. Samperton (University of Illinois, United States)

Dominguez and Segovia recently asked the question in the title, and showed that the answer is “always” for many examples of finite groups, including symmetric, alternating and abelian groups. Surprisingly, in joint work with Segovia, we have found the first examples of finite groups that admit free actions on surfaces that do NOT extend to actions on 3-manifolds. Even more surprisingly, these groups are already known in algebraic geometry as counterexamples to the Noether conjecture over the complex numbers. In this talk, I will explain how to find these groups, and, more generally, how to decide algorithmically whether any fixed finite group admits a non-extending action.

17:30 - 18:15 Signature of manifolds with a single fixed point set of an abelian normal group. Quitzeh Morales (Universidad Pedagógica de Oaxaca, Mexico)

In this talk we will report progress on the task of reducing the computation of the signature of a (possibly non-compact) oriented smooth manifold with an orientation-preserving, co-compact, smooth, proper action of a discrete group, with a single non-empty fixed-point submanifold of fixed dimension with respect to the action of a (finite) abelian normal subgroup. The aim of this reduction is to give a formula for the signature in terms of the signature of a manifold with free action and the signature of a disc neighborhood of its fixed-point set with quasi-free action. A necessary step in this reduction is the generalization of Novikov’s additivity to this context.
Cipher algorithm

The signature algorithm is the second algorithm in a TLS 1.2 cipher suite. One more thing: sometimes people refer to the type of SSL certificate on the basis of its signing algorithm. For instance, when someone says they have an RSA SSL certificate or an Elliptic Curve SSL certificate, they’re alluding to the signing algorithm.

The Vigenère cipher is a simple polyalphabetic cipher, in which the ciphertext is obtained by modular addition of a (repeating) key phrase and the plaintext (both of the same length). The encryption can be described by the formula $C_i = (P_i + K_i) \bmod 26$, where $C_i$, $P_i$ and $K_i$ are the numeric values (A = 0, ..., Z = 25) of the i-th ciphertext, plaintext and key letters.

Block Cipher Modes - Cryptographic Algorithm Validation. Algorithm specifications for current FIPS-approved and NIST-recommended block cipher modes are available from the Cryptographic Toolkit. Current testing includes the following block cipher modes: CMAC (SP 800-38B), XTS-AES (SP 800-38E), CCM (SP 800-38C), KW / KWP / TKW (SP 800-38F) (key wrap using AES and Triple-DES), GCM / GMAC / XPN (SP 800-38D and CMVP Annex A) …

What is the Caesar cipher? In cryptography, the Caesar cipher is one of the simplest and most widely known encryption techniques. It is also known by other names such as Caesar’s cipher, the shift cipher, Caesar’s code or Caesar shift. This encryption technique is used to …

A Lightweight Image Encryption Algorithm Based on Message. The popularization of 5G and the development of cloud computing further promote the application of images. The storage of images in an untrusted environment carries a great risk of privacy leakage. This paper outlines a design for a lightweight image encryption algorithm based on a message-passing algorithm with a chaotic external message.
The message-passing (MP) algorithm allows simple …

In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code.

As per Wikipedia, the Hill cipher is a polygraphic substitution cipher based on linear algebra, invented by Lester S. Hill in 1929. Basically, the Hill cipher is a cryptography algorithm to encrypt and decrypt data to ensure data security.
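The Vigenère rule described earlier (modular addition of a repeating key phrase and the plaintext) can be sketched in Python. This is an illustrative sketch added here, not code from the original page; note that with a single-letter key it reduces to the Caesar shift cipher also mentioned in this post:

```python
# Vigenere cipher sketch: C_i = (P_i + K_i) mod 26, with A=0, ..., Z=25.
# Decryption subtracts the key instead of adding it.
from itertools import cycle

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    sign = -1 if decrypt else 1
    out = []
    # cycle() repeats the key phrase so it matches the text length
    for p, k in zip(text.upper(), cycle(key.upper())):
        c = (ord(p) - 65 + sign * (ord(k) - 65)) % 26
        out.append(chr(c + 65))
    return "".join(out)
```

For example, `vigenere("ATTACKATDAWN", "LEMON")` gives `"LXFOPVEFRNHR"`, and decrypting the result with the same key recovers the plaintext.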
Understanding the Earth’s system of waves and currents

Nature and Technology, June 19, 2020

Mathematician Adrian Constantin is the winner of the Wittgenstein Award 2020. He receives Austria’s most highly endowed science award for his trailblazing contributions to the mathematics of wave propagation, which are applied to natural phenomena such as tsunamis or El Niño. © Daniel Novotny/FWF

FWF: What answers do you hope your research will be able to provide in the coming years?

Adrian Constantin: I am going to use mathematical methods to study wave motions and currents in the oceans and in the atmosphere. These large-scale movements in the water and air are the basis for many weather- and climate-related phenomena that we need to understand better. I am particularly interested in the interactions between waves – i.e. oscillations that we find not only in water but also in the atmosphere – and currents over large, geophysical dimensions. This field still raises many questions as well as aspects that have been described only in a very simplified form. One case in point is the spherical geometry of the Earth – i.e. the fact that its surface is curved. In most studies of oceanic and atmospheric currents and waves, this fact is not taken into account, and the Earth is treated as if it were flat. Given that we are dealing with phenomena that may extend over thousands of kilometres, such a view is too inaccurate. And then there is the Coriolis force, which is a result of the rotation of the Earth and changes the effects of gravity. This is not a minor disturbance that causes deviations, but a major characteristic of the currents and waves that can be observed in the oceans and in the atmosphere. In addition to the general equations applied to such physical phenomena, I would like to be able to account for individual circumstances – such as whether an air current is over land or sea.
The aim is to refine the analysis in this field using various mathematical approaches.

FWF: What will the first steps be?

Constantin: I plan to invite several scientists who are interested in my area of research to Vienna to engage in some kind of collaboration. I hope that the course of the coronavirus pandemic will now permit this. There are also a few young people who would like to get involved in this research. I hope to be able to make plans for the next few years starting in the summer holidays.

Personal details

Adrian Constantin is a Professor at the Department of Mathematics, University of Vienna. The fields of research of the Romanian-born scientist include non-linear partial differential equations in the field of fluid motion and the resulting mathematical descriptions of natural phenomena. Noted for the particularly high number of citations his work in the field of mathematics has obtained, Constantin has received numerous awards and honours, including an Advanced Grant from the European Research Council (ERC) in 2010.

FWF: What does the Wittgenstein Award mean for your research activities?

Constantin: The freedom that this award provides for my future research is incredibly important. I am a mathematician, but I am interested in the phenomena of physics and would therefore like to exchange ideas with relevant experts – not only from the field of physics, but also from engineering and technology. I want to work with people who can help me understand the data from their fields. It takes time and flexibility to establish contacts, invite experienced colleagues from other fields and involve them in the research. Now I know that I will have the necessary elbow room to deal with new topics. I can engage young people in my work, and they, in turn, will benefit from intensive international exchanges. I hope that I will be able to use the many opportunities offered by the Wittgenstein Award to make a relevant contribution.
FWF: What is the source of motivation for your research?

Constantin: There are phenomena – in meteorology, for instance – that are simply amazing. To give you an example: we regularly find clouds on the coast of Australia that are hundreds of kilometres long, shaped like rollers and moving at an extraordinary speed – they are called Morning Glory clouds. I am inspired by the idea that I could gain insights into such phenomena using mathematical methods. Natural phenomena hold great fascination for me and if Galileo Galilei is to be believed, mathematics is the language in which nature is written. It never ceases to amaze me how much this is actually the case. I would like to find out whether I can gain new insights with new approaches in this field.

About the project

Numerous large-scale movements occur in the atmosphere and oceans that can be described as currents or waves. Previous modelling is greatly simplified and fails to take account of many aspects of geophysical relevance. Adrian Constantin wants to bridge these gaps and present detailed mathematical descriptions of the physical processes.

Wittgenstein Award

The Wittgenstein Award, Austria's most generously endowed science award, is bestowed on outstanding researchers of any discipline. Endowed with EUR 1.5 million, the award supports the awardees in their research, guaranteeing them utmost freedom and flexibility and enabling them to develop their research activities at the highest international level.
Rational And Irrational Numbers Worksheet Doc

Rational And Irrational Numbers Worksheet Doc serve as fundamental tools in the world of mathematics, offering an organized yet flexible platform for learners to explore and understand mathematical principles. These worksheets provide an organized strategy for comprehending numbers, nurturing a strong foundation upon which mathematical proficiency grows. From the simplest counting exercises to the intricacies of advanced calculations, Rational And Irrational Numbers Worksheet Doc cater to students of varied ages and skill levels.

Introducing the Essence of Rational And Irrational Numbers Worksheet Doc

A sample exercise: 2. (a) Assume a is rational and b is irrational. What can you say about a + b? What about ab? (b) Come up with examples of a and b, both irrational, such that (i) a + b is rational or (ii) ab is rational. (c) If a is rational and a ≠ 0, what can you say about b if you know that the product ab is rational? 3.

Rational Number: a number that CAN be written as a RATIO of 2 integers (repeating decimals, terminating decimals). Irrational Number: a number that CANNOT be written as a RATIO of 2 integers (non-repeating, non-terminating decimals). Square Root: a number that produces a specified quantity when multiplied by itself (7 is the square root of 49).

At their core, Rational And Irrational Numbers Worksheet Doc are vehicles for conceptual understanding. They encapsulate a myriad of mathematical principles, guiding learners through the labyrinth of numbers with a collection of engaging and purposeful exercises. These worksheets transcend the boundaries of standard rote learning, motivating active engagement and promoting an intuitive understanding of mathematical relationships.
Nurturing Number Sense and Reasoning

Rational and Irrational Numbers. Equipment needed: calculator, pen. Guidance: Read each question carefully before you begin answering it. Check your answers seem right. Always show your workings. Video tutorial: www.corbettmaths.com (contents).

Rational and Irrational Numbers Categorizing Worksheet, created by Mr Slope Guy: "Hello Math Teachers! Get ready for a fun and engaging activity on rational and irrational numbers with this worksheet."

The heart of Rational And Irrational Numbers Worksheet Doc lies in cultivating number sense: a deep comprehension of numbers' meanings and interconnections. They encourage exploration, inviting students to dissect arithmetic operations, decode patterns, and unlock the mysteries of sequences. With thought-provoking challenges and logical puzzles, these worksheets become gateways to honing reasoning abilities, nurturing the analytical minds of budding mathematicians.

From Theory to Real-World Application

Rational numbers can be written as a fraction of two integers, while irrational numbers cannot. Help students learn to correctly identify each with this eighth grade number sense worksheet. In this Rational vs Irrational Numbers worksheet, students will gain practice differentiating between rational and irrational numbers. This helpful math …

MATHEMATICAL GOALS: This lesson unit is intended to help you assess how well students are able to distinguish between rational and irrational numbers. In particular, it aims to help you identify and assist students who have difficulties in classifying numbers as rational or irrational.

Rational And Irrational Numbers Worksheet Doc function as channels bridging theoretical abstractions with the tangible realities of daily life.
By instilling practical scenarios into mathematical exercises, learners witness the significance of numbers in their environments. From budgeting and measurement conversions to understanding statistical data, these worksheets empower students to apply their mathematical prowess beyond the boundaries of the classroom.

Diverse Tools and Techniques

Adaptability is inherent in Rational And Irrational Numbers Worksheet Doc, offering a collection of instructional tools to suit different learning styles. Visual aids such as number lines, manipulatives, and digital resources serve as companions in visualizing abstract principles. This diverse approach ensures inclusivity, accommodating learners with different preferences, strengths, and cognitive styles.

Inclusivity and Cultural Relevance

In an increasingly diverse world, Rational And Irrational Numbers Worksheet Doc embrace inclusivity. They go beyond cultural borders, incorporating examples and problems that resonate with learners from diverse backgrounds. By incorporating culturally relevant contexts, these worksheets cultivate an atmosphere where every student feels represented and valued, enriching their connection with mathematical concepts.

Crafting a Path to Mathematical Mastery

Rational And Irrational Numbers Worksheet Doc chart a course towards mathematical fluency. They instill perseverance, critical thinking, and problem-solving skills, essential traits not just in mathematics but in many aspects of life. These worksheets encourage students to navigate the intricate terrain of numbers, nurturing a profound appreciation for the elegance and logic inherent in mathematics.

Embracing the Future of Education

In an era marked by technological innovation, Rational And Irrational Numbers Worksheet Doc seamlessly adapt to digital platforms. Interactive interfaces and digital resources augment traditional learning, offering immersive experiences that transcend spatial and temporal boundaries.
This combination of traditional techniques with technological innovation promises a new era in education, cultivating a more vibrant and engaging learning environment.

Conclusion: Embracing the Magic of Numbers

Rational And Irrational Numbers Worksheet Doc exemplify the magic inherent in mathematics: an enchanting journey of exploration, discovery, and mastery. They go beyond standard pedagogy, acting as catalysts for sparking the fires of interest and inquiry. Via Rational And Irrational Numbers Worksheet Doc, students embark on an odyssey, opening the enigmatic world of numbers, one problem and one solution at a time.

Pre Algebra Unit 2 (Chambersburg Area School District): Rational Number: a number that CAN be written as a RATIO of 2 integers (repeating decimals, terminating decimals). Irrational Number: a number that CANNOT be written as a RATIO of 2 integers (non-repeating, non-terminating decimals). Square Root: a number that produces a specified quantity when multiplied by itself (7 is the square root of 49).

The Rational Number System Worksheet (Montgomery): Classify these numbers as rational or irrational and give your reason: (a) 7.329 (b) 4; (a) 0.95832758941 (b) 0.5287593593593. Give an example of a number that would satisfy these rules: 3. a number that is real, rational, whole, an integer, and natural.
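The classification rule quoted above (a decimal expansion that terminates or repeats comes from a rational number) can be illustrated with Python's fractions module. This sketch is an illustration added here, not part of any of the quoted worksheets; it uses the fact that a fraction in lowest terms has a terminating decimal expansion exactly when its denominator has no prime factors other than 2 and 5:

```python
from fractions import Fraction

def is_terminating(q: Fraction) -> bool:
    """A rational number has a terminating decimal expansion iff its
    reduced denominator contains no prime factors other than 2 and 5."""
    d = q.denominator  # Fraction always stores the reduced form
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

# Every Fraction is rational by construction: a ratio of two integers.
print(Fraction("7.329"))                  # 7329/1000, a terminating decimal
print(is_terminating(Fraction("7.329")))  # True
print(is_terminating(Fraction(1, 3)))     # False: 0.333... repeats forever
```

Irrational numbers such as √2 or π cannot be represented as a `Fraction` at all, which is precisely the worksheet's definition of irrationality.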
Lead–lead dating

Lead–lead dating is a method for dating geological samples, normally based on 'whole-rock' samples of material such as granite. For most dating requirements it has been superseded by uranium–lead dating (U–Pb dating), but in certain specialized situations (such as dating meteorites and the age of the Earth) it is more important than U–Pb dating.

Decay equations for common Pb–Pb dating

There are three stable "daughter" Pb isotopes that result from the radioactive decay of uranium and thorium in nature; they are 206Pb, 207Pb, and 208Pb. 204Pb is the only non-radiogenic lead isotope and therefore is not one of the daughter isotopes. These daughter isotopes are the final decay products of U and Th radioactive decay chains beginning from 238U, 235U and 232Th respectively. With the progress of time, the final decay product accumulates as the parent isotope decays at a constant rate. This shifts the ratio of radiogenic Pb versus non-radiogenic 204Pb (207Pb/204Pb or 206Pb/204Pb) in favor of radiogenic 207Pb or 206Pb. This can be expressed by the following decay equations:

$$ \left(\frac{^{207}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_P = \left(\frac{^{207}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_I + \frac{^{235}\mathrm{U}}{^{204}\mathrm{Pb}}\left(e^{\lambda_{235} t} - 1\right) $$

$$ \left(\frac{^{206}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_P = \left(\frac{^{206}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_I + \frac{^{238}\mathrm{U}}{^{204}\mathrm{Pb}}\left(e^{\lambda_{238} t} - 1\right) $$

where the subscripts P and I refer to present-day and initial Pb isotope ratios, λ235 and λ238 are decay constants for 235U and 238U, and t is the age.

The concept of common Pb–Pb dating (also referred to as whole rock lead isotope dating) was deduced through mathematical manipulation of the above equations.^[1] It was established by dividing the first equation above by the second, under the assumption that the U/Pb system was undisturbed. This rearrangement gives:

$$ \frac{\left(^{207}\mathrm{Pb}/^{204}\mathrm{Pb}\right)_P - \left(^{207}\mathrm{Pb}/^{204}\mathrm{Pb}\right)_I}{\left(^{206}\mathrm{Pb}/^{204}\mathrm{Pb}\right)_P - \left(^{206}\mathrm{Pb}/^{204}\mathrm{Pb}\right)_I} = \frac{1}{137.88}\left(\frac{e^{\lambda_{235} t} - 1}{e^{\lambda_{238} t} - 1}\right) $$

where the factor of 137.88 is the present-day 238U/235U ratio. As evident from the equation, the initial Pb isotope ratios and the age of the system are the two factors which determine the present-day Pb isotope compositions.
If the sample behaved as a closed system then graphing the difference between the present and initial ratios of 207Pb/204Pb versus 206Pb/204Pb should produce a straight line. The distance the point moves along this line is dependent on the U/Pb ratio, whereas the slope of the line depends on the time since Earth's formation. This was first established by Nier et al., 1941.^[1]

The formation of the Geochron

The development of the Geochron was mainly attributed to Clair Cameron Patterson’s application of Pb–Pb dating on meteorites in 1956. The Pb ratios of three stony and two iron meteorites were measured.^[2] The dating of meteorites would then help Patterson in determining not only the age of these meteorites but also the age of Earth's formation. By dating meteorites Patterson was directly dating the age of various planetesimals. Assuming the process of elemental differentiation is identical on Earth as it is on other planets, the core of these planetesimals would be depleted of uranium and thorium, while the crust and mantle would contain higher U/Pb ratios. As planetesimals collided, various fragments were scattered and produced meteorites. Iron meteorites were identified as pieces of the core, while stony meteorites were segments of the mantle and crustal units of these various planetesimals.

• Iron meteorite found in Canyon Diablo
• Meteorite impact
• Figure 1. Pb–Pb isochron diagram

Samples of iron meteorite from Canyon Diablo (Meteor Crater), Arizona were found to have the least radiogenic composition of any material in the solar system. The U/Pb ratio was so low that no radiogenic decay was detected in the isotopic composition.^[3] As illustrated in figure 1, this point defines the lower (left) end of the isochron. Therefore, troilite found in Canyon Diablo represents the primeval lead isotope composition of the solar system, dating back to 4.55 +/- 0.07 Byr.
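The relation between isochron slope and age can be inverted numerically. The sketch below is an illustration added here, not from the article; it assumes the standard decay constants λ238 ≈ 1.55125e-10 per year and λ235 ≈ 9.8485e-10 per year, and solves for the age by bisection:

```python
import math

# Standard decay constants (per year) for 238U and 235U.
LAMBDA_238 = 1.55125e-10
LAMBDA_235 = 9.8485e-10
U_RATIO = 137.88  # present-day 238U/235U ratio used in the article

def isochron_slope(t: float) -> float:
    """Slope of a Pb-Pb isochron for a closed system of age t (years)."""
    return (math.exp(LAMBDA_235 * t) - 1) / (U_RATIO * (math.exp(LAMBDA_238 * t) - 1))

def age_from_slope(slope: float, lo: float = 1.0, hi: float = 1e10) -> float:
    """Invert isochron_slope by bisection (the slope increases with t)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if isochron_slope(mid) < slope:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A slope near 0.62 corresponds to roughly 4.55 billion years.
print(age_from_slope(isochron_slope(4.55e9)) / 1e9)
```

The round trip through `isochron_slope` and `age_from_slope` recovers the 4.55 Byr meteorite age quoted above, which is why Patterson could read the age directly from the slope of his isochron.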
Stony meteorites, however, exhibited very high 207Pb/204Pb versus 206Pb/204Pb ratios, indicating that these samples came from the crust or mantle of the planetesimal. Together, these samples define an isochron, whose slope gives the age of meteorites as 4.55 Byr. Patterson also analyzed terrestrial sediment collected from the ocean floor, which was believed to be representative of the Bulk Earth composition. Because the isotope composition of this sample plotted on the meteorite isochron, it suggested that earth had the same age and origin as meteorites, therefore solving the age of the Earth and giving rise to the name 'geochron'.

Lead isotope isochron diagram used by C. C. Patterson to determine the age of the Earth in 1956. Animation shows progressive growth over 4550 million years (Myr) of the lead isotope ratios for two stony meteorites (Nuevo Laredo and Forest City) from initial lead isotope ratios matching those of the Canyon Diablo iron meteorite.

Precise Pb–Pb dating of meteorites

Chondrules and calcium–aluminium-rich inclusions (CAIs) are spherical particles that make up chondritic meteorites and are believed to be the oldest objects in the solar system. Hence precise dating of these objects is important to constrain the early evolution of the solar system and the age of the earth. The U–Pb dating method can yield the most precise ages for early solar-system objects due to the optimal half-life of 238U. However, the absence of zircon or other uranium-rich minerals in chondrites, and the presence of initial non-radiogenic Pb (common Pb), rules out direct use of the U-Pb concordia method.
Therefore, the most precise dating method for these meteorites is the Pb–Pb method, which allows a correction for common Pb.^[3] When the abundance of 204Pb is relatively low, this isotope has larger measurement errors than the other Pb isotopes, leading to very strong correlation of errors between the measured ratios. This makes it difficult to determine the analytical uncertainty on the age. To avoid this problem, researchers^[5] developed an 'alternative Pb–Pb isochron diagram' (see figure) with reduced error correlation between the measured ratios. In this diagram the 204Pb/206Pb ratio (the reciprocal of the normal ratio) is plotted on the x-axis, so that a point on the y-axis (zero 204Pb/206Pb) would have infinitely radiogenic Pb. The ratio plotted on this axis is the 207Pb/206Pb ratio, corresponding to the slope of a normal Pb/Pb isochron, which yields the age. The most accurate ages are produced by samples near the y-axis, which was achieved by step-wise leaching and analysis of the samples. Previously, when applying the alternative Pb–Pb isochron diagram, the 238U/235U isotope ratios were assumed to be invariant among meteoritic material. However, it has been shown that 238U/235U ratios are variable among meteoritic material.^[6] To accommodate this, U-corrected Pb–Pb dating analysis is used to generate ages for the oldest solid material in the solar system using a revised 238U/235U value of 137.786 ± 0.013 to represent the mean 238U/235U isotope ratio of bulk inner solar system materials.^[4] The result of U-corrected Pb–Pb dating has produced ages of 4567.35 ± 0.28 My for CAIs (A) and chondrules with ages between 4567.32 ± 0.42 and 4564.71 ± 0.30 My (B and C) (see figure). This supports the idea that CAI crystallization and chondrule formation occurred around the same time during the formation of the solar system. However, chondrules continued to form for approximately 3 My after CAIs.
Hence the best age for the original formation of the solar system is 4567.7 My. This date also represents the time of initiation of planetary accretion. Successive collisions between accreted bodies led to the formation of larger and larger planetesimals, finally forming the Earth–Moon system in a giant impact event. The age difference between CAIs and chondrules measured in these studies verifies the chronology of the early solar system derived from extinct short-lived nuclide methods such as 26Al-26Mg, thus improving our understanding of the development of the solar system and the formation of the earth.

References

[1] Nier, Alfred O.; Thompson, Robert W.; Murphey, Byron F. (1941). "The Isotopic Constitution of Lead and the Measurement of Geological Time. III". Physical Review. 60 (2): 112–116. doi:10.1103/PhysRev.60.112.
[2] Patterson, Claire (1956). "Age of meteorites and the earth". Geochimica et Cosmochimica Acta. 10 (4): 230–237. doi:10.1016/0016-7037(56)90036-9.
[3] Dickin, Alan P. (2005). Radiogenic Isotope Geology. p. 117. doi:10.1017/CBO9781139165150. ISBN 9781139165150.
[4] Connelly, J. N.; Bizzarro, M.; Krot, A. N.; Nordlund, A.; Wielandt, D.; Ivanova, M. A. (2012). "The Absolute Chronology and Thermal Processing of Solids in the Solar Protoplanetary Disk". Science. 338 (6107): 651–655. doi:10.1126/science.1226919. PMID 23118187.
[5] Amelin, Y.; Krot, Alexander N.; Hutcheon, Ian D.; Ulyanov, Alexander A. (2002). "Lead Isotopic Ages of Chondrules and Calcium-Aluminum-Rich Inclusions". Science. 297 (5587): 1678–1683. doi:10.1126/science.1073950. PMID 12215641.
[6] Brennecka, G. A.; Weyer, S.; Wadhwa, M.; Janney, P. E.; Zipfel, J.; Anbar, A. D. (2010). "238U/235U Variations in Meteorites: Extant 247Cm and Implications for Pb-Pb Dating". Science. 327 (5964): 449–451. doi:10.1126/science.1180871. PMID 20044543.
Open Problems

Real problems are things related to unexplained observed phenomena, like dark matter or dark energy. In addition, there are things that we can describe but do not understand. These are not really problems but rather puzzles. Examples are the still not understood masses and mixing angles of the elementary particles, the hierarchy puzzle, and the question of why strong interactions do not violate CP symmetry. For more information, see the chapter "The Five Great Problems in Theoretical Physics" in the book "Time Reborn" by Lee Smolin.

The laws of physics seem to be composed out of five fundamental ingredients:

1. Identical particles.
2. Gauge interactions.
3. Fermi statistics.
4. Chiral fermions.
5. Gravity.

The question is whether one can find a “deeper structure” that gives rise to all five of these phenomena. In addition to being consistent with our current understanding of the universe, such a structure would be quite appealing from a theoretical point of view: it would unify and explain the origin of these seemingly mysterious and disconnected phenomena. The U(1)×SU(2)×SU(3) Standard Model fails to provide such a complete story for even the first four phenomena. Although it describes identical particles, gauge interactions, Fermi statistics, and chiral fermions in a single theory, each of these components is introduced independently and by hand. For example, field theory is introduced to explain identical particles, vector gauge fields are introduced to describe gauge interactions (Yang and Mills, 1954), and anticommuting fields are introduced to explain Fermi statistics. One wonders: where do these mysterious gauge symmetries and anticommuting fields come from? Why does nature choose such peculiar things as fermions and gauge bosons to describe itself?
Colloquium: Photons and electrons as emergent phenomena
Michael Levin and Xiao-Gang Wen
https://journals.aps.org/rmp/pdf/10.1103/RevModPhys.77.871
Approximate Vector Magnitude Computation - DSP LOG

In this post, let us discuss a simple, implementation-friendly scheme for computing the absolute value of a complex number $Z = X+jY$. The technique, called the $\alpha Max + \beta Min$ (alpha Max + beta Min) algorithm, is discussed in Chapter 13.2 of Understanding Digital Signal Processing by Richard Lyons (Digital Signal Processing Tricks – High-speed vector magnitude approximation).

The magnitude of a complex number $Z = X+jY$ is $|Z| = \sqrt{X^2 + Y^2}$. The simplified computation of the absolute value is $|Z| \approx \alpha Max + \beta Min$, where $Max = \max \left( |X|, |Y| \right)$ and $Min = \min \left( |X|, |Y| \right)$.

Different values of $\alpha$ and $\beta$ can be tried out to understand the performance. For the analysis we can use a complex number with magnitude 1 and phase from 0 to 180 degrees. The options considered are:

$\alpha$ = 1, $\beta$ = 1/2: $|Z| \approx Max + \frac{Min}{2}$
$\alpha$ = 1, $\beta$ = 1/4: $|Z| \approx Max + \frac{Min}{4}$
$\alpha$ = 1, $\beta$ = 3/8: $|Z| \approx Max + \frac{3Min}{8}$
$\alpha$ = 7/8, $\beta$ = 7/16: $|Z| \approx \frac{7Max}{8} + \frac{7Min}{16}$
$\alpha$ = 15/16, $\beta$ = 15/32: $|Z| \approx \frac{15Max}{16} + \frac{15Min}{32}$

Simulation Model

The script performs the following.
(a) Generate a complex number with phase varying from 0 to 180 degrees.
(b) Find the absolute value using the above 5 options.
(c) For each option, find the maximum error, average error and root mean square error.

Click here to download the Matlab/Octave script for computing the approximate value of the magnitude of a complex number.

Figure: Plot of approximate value of magnitude of a complex number

Option | alpha | beta  | Maximum Error % | Average Error % | RMS Error %
1      | 1     | 1/2   | 11.80340        | 8.67667         | 9.21159
2      | 1     | 1/4   | -11.60134       | -0.64520        | 4.15450
3      | 1     | 3/8   | 6.80005         | 4.01573         | 4.76143
4      | 7/8   | 7/16  | -12.50000       | -4.90792        | 5.60480
5      | 15/16 | 15/32 | -6.25000        | 1.88438         | 3.45847

Table: Error in the approximate value computation with various values of $\alpha$, $\beta$

1. The chosen values of $\alpha$, $\beta$ facilitate a simple multiplier-less implementation of the approximate computation (it can be implemented using only bit shifts and additions).
2. For Options (1) and (3) the maximum error is positive, i.e. the approximation can exceed the true magnitude. Hence we need to allocate extra bits for the output to prevent overflow.
3. The error in the approximate magnitude computation repeats every 90 degrees.

Chapter 13.2 of Understanding Digital Signal Processing, Richard Lyons

7 thoughts on “Approximate Vector Magnitude Computation”
1. nice, thanks a lot.
2. Hi Krishna… I am Firdaus from Indonesia. I have read your book “array signal processing” and I am interested in the ESPRIT algorithm… But I have a problem: how to change the TLS ESPRIT algorithm for a uniform linear array to a uniform circular array. Would you please help me to solve the problem??? Best regards
1. @Firdaus: Are you sure it’s me? I have never written a book on Array Signal Processing. Hopefully, I write a book some day… Anyhow, I am not familiar with the ESPRIT algorithm.
3. Good, thanks!
4. Thanks a lot Krishna!! Could not find info related to this topic anywhere else.
5. Good post. Noticed a typo – in the description of Option #3, Beta has to be 3/8, and not 1/4.
1. @Sudeep: Thanks, I corrected the typo 🙂
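The $\alpha Max + \beta Min$ error sweep described in the post is easy to reproduce. Below is a small Python sketch (a stand-in for the downloadable Matlab/Octave script, not the author's code; the function names are my own) that sweeps the phase of a unit-magnitude complex number from 0 to 180 degrees in 0.1-degree steps and reports the maximum, average and RMS percentage error for a given $(\alpha, \beta)$ pair:

```python
import math

# (alpha, beta) pairs from the post; chosen so that the multiplications
# reduce to bit shifts and additions in a hardware implementation.
OPTIONS = [(1, 1/2), (1, 1/4), (1, 3/8), (7/8, 7/16), (15/16, 15/32)]

def approx_mag(x, y, alpha, beta):
    """alpha*Max + beta*Min approximation of |x + jy| = sqrt(x^2 + y^2)."""
    ax, ay = abs(x), abs(y)
    return alpha * max(ax, ay) + beta * min(ax, ay)

def error_stats(alpha, beta, steps=1801):
    """Percent-error statistics over unit vectors with phase 0..180 degrees."""
    errs = []
    for k in range(steps):
        theta = math.radians(180.0 * k / (steps - 1))  # 0.1 degree grid
        x, y = math.cos(theta), math.sin(theta)
        # True magnitude is exactly 1, so the signed percent error is:
        errs.append((approx_mag(x, y, alpha, beta) - 1.0) * 100.0)
    max_err = max(errs, key=abs)  # largest-magnitude (signed) error
    avg_err = sum(errs) / len(errs)
    rms_err = math.sqrt(sum(e * e for e in errs) / len(errs))
    return max_err, avg_err, rms_err

for alpha, beta in OPTIONS:
    print((alpha, beta), error_stats(alpha, beta))
```

For $\alpha$ = 1, $\beta$ = 1/2 this sweep gives a maximum error of about +11.8%, consistent with the first row of the table. Since the error pattern repeats every 90 degrees, sweeping 0 to 180 degrees covers all cases.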
ENERGISE Project Publications 1. Y. Liu, Z. Qu, H. Xin and D. Gan, “Distributed Real-Time Optimal Power Flow Control in Smart Grid,” in IEEE Transactions on Power Systems, vol. 32, no. 5, pp. 3403-3414, Sept. 2017, doi: 10.1109/ 2. A. Gusrialdi and Z. Qu, “Distributed Estimation of All the Eigenvalues and Eigenvectors of Matrices Associated with Strongly Connected Digraphs,” in IEEE Control Systems Letters, vol. 1, no. 2, pp. 328-333, Oct. 2017, doi: 10.1109/LCSYS.2017.2717799. 3. A. Gusrialdi, Z. Qu and M. A. Simaan, “Distributed Scheduling and Cooperative Control for Charging of Electric Vehicles at Highway Service Stations,” in IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 10, pp. 2713-2727, Oct. 2017, doi: 10.1109/TITS.2017.2661958. 4. P. M. Baidya, and W. Sun, “Effective restoration strategies of interdependent power system and communication network,” The Journal of Engineering, 2017(13), 1760-1764, Nov. 2017. 5. A. Golshani, W. Sun, Q. Zhou, Q. P. Zheng and J. Tong, “Two-Stage Adaptive Restoration Decision Support System for a Self-Healing Power Grid,” in IEEE Transactions on Industrial Informatics, vol. 13, no. 6, pp. 2802-2812, Dec. 2017, doi: 10.1109/TII.2017.2712147. 6. E. Dall’Anese, S. S. Guggilam, A. Simonetto, Y. C. Chen and S. V. Dhople, “Optimal Regulation of Virtual Power Plants,” in IEEE Transactions on Power Systems, vol. 33, no. 2, pp. 1868-1881, March 2018, doi: 10.1109/TPWRS.2017.2741920. 7. H. V. Haghi and Z. Qu, “A Kernel-Based Predictive Model of EV Capacity for Distributed Voltage Control and Demand Response,” in IEEE Transactions on Smart Grid, vol. 9, no. 4, pp. 3180-3190, July 2018, doi: 10.1109/TSG.2016.2628367. 8. A. Gusrialdi, Z. Qu and M. A. Simaan, “Competitive Interaction Design of Cooperative Systems Against Attacks,” in IEEE Transactions on Automatic Control, vol. 63, no. 9, pp. 3159-3166, Sept. 2018, doi: 10.1109/TAC.2018.2793164. 9. A. Golshani, W. Sun, Q. Zhou, Q. P. Zheng, J. Wang and F. 
Qiu, “Coordination of Wind Farm and Pumped-Storage Hydro for a Self-Healing Power Grid,” in IEEE Transactions on Sustainable Energy, vol. 9, no. 4, pp. 1910-1920, Oct. 2018, doi: 10.1109/TSTE.2018.2819133. 10. W. Sun, N. Kadel, I. Alvarez-Fernandez, R. Roofegari, and A. Golshani, “Optimal Distribution System Restoration using PHEVs,” IET Smart Grid, 2(1), 42-49, Oct. 2018. 11. A. Bernstein, C. Wang, E. Dall’Anese, J. Le Boudec and C. Zhao, “Load Flow in Multiphase Distribution Networks: Existence, Uniqueness, Non-Singularity and Linear Models,” in IEEE Transactions on Power Systems, vol. 33, no. 6, pp. 5832-5843, Nov. 2018, doi: 10.1109/TPWRS.2018.2823277. 12. W. Sun, S. Ma, I. Alvarez-Fernandez, R. Roofegari, and A. Golshani, “Optimal Self-healing Strategy for Microgrid Islanding,” IET Smart Grid, 1(4), 143-150, Dec. 2018. 13. A. Golshani, W. Sun, Q. Zhou, Q. P. Zheng and Y. Hou, “Incorporating Wind Energy in Power System Restoration Planning,” in IEEE Transactions on Smart Grid, vol. 10, no. 1, pp. 16-28, Jan. 2019, doi: 10.1109/TSG.2017.2729592. 14. A. Gusrialdi, Z. Qu and S. Hirche, “Distributed Link Removal Using Local Estimation of Network Topology,” in IEEE Transactions on Network Science and Engineering, vol. 6, no. 3, pp. 280-292, 1 July-Sept. 2019, doi: 10.1109/TNSE.2018.2813426. 15. W. Lin, C. Li, Z. Qu, and M. A. Simaan, “Distributed Formation Control with Open-Loop Nash Strategy,” Automatica, vol.106, pp.266-273, Aug. 2019. 16. C. Li, Z. Qu, D. Qi, and F. Wang, “Distributed finite-time estimation of the bounds on algebraic connectivity for directed graphs,” Automatica, vol. 107, pp. 289-295, Sept. 2019. 17. R. Roofegari Nejad and W. Sun, “Distributed Load Restoration in Unbalanced Active Distribution Systems,” in IEEE Transactions on Smart Grid, vol. 10, no. 5, pp. 5759-5769, Sept. 2019, doi: 18. E. Weitenberg, Y. Jiang, C. Zhao, E. Mallada, C. De Persis and F. 
Dörfler, “Robust Decentralized Secondary Frequency Control in Power Systems: Merits and Tradeoffs,” in IEEE Transactions on Automatic Control, vol. 64, no. 10, pp. 3967-3982, Oct. 2019, doi: 10.1109/TAC.2018.2884650. 19. A. Golshani, W. Sun and K. Sun, “Real-Time Optimized Load Recovery Considering Frequency Constraints,” in IEEE Transactions on Power Systems, vol. 34, no. 6, pp. 4204-4215, Nov. 2019, doi: 20. R. Roofegari nejad, W. Sun and A. Golshani, “Distributed Restoration for Integrated Transmission and Distribution Systems with DERs,” in IEEE Transactions on Power Systems, vol. 34, no. 6, pp. 4964-4973, Nov. 2019, doi: 10.1109/TPWRS.2019.2920123. 21. T. Shinohara, T. Namerikawa and Z. Qu, “Resilient Reinforcement in Secure State Estimation Against Sensor Attacks With A Priori Information,” in IEEE Transactions on Automatic Control, vol. 64, no. 12, pp. 5024-5038, Dec. 2019, doi: 10.1109/TAC.2019.2904438. 22. H. Panamtash, Q. Zhou, T. Hong, Z. Qu, K. Davis, “A copula-based Bayesian method for probabilistic solar power forecasting,” Solar Energy, 196: 336-45, Jan. 2020. 23. R. Harvey, Z. Qu and T. Namerikawa, “An Optimized Input/Output-Constrained Control Design with Application to Microgrid Operation,” in IEEE Control Systems Letters, vol. 4, no. 2, pp. 367-372, April 2020, doi: 10.1109/LCSYS.2019.2929159. 24. M. Song, W. Sun, Y. Wang, M. Shahidehpour, Z. Li and C. Gao, “Hierarchical Scheduling of Aggregated TCL Flexibility for Transactive Energy in Power Systems,” in IEEE Transactions on Smart Grid, vol. 11, no. 3, pp. 2452-2463, May 2020, doi: 10.1109/TSG.2019.2955852. 25. X. Zhou, Z. Liu, C. Zhao and L. Chen, “Accelerated Voltage Regulation in Multi-Phase Distribution Networks Based on Hierarchical Distributed Algorithm,” in IEEE Transactions on Power Systems, vol. 35, no. 3, pp. 2047-2058, May 2020, doi: 10.1109/TPWRS.2019.2948978. 26. X. Zhou, E. Dall’Anese and L. 
Chen, “Online Stochastic Optimization of Networked Distributed Energy Resources,” in IEEE Transactions on Automatic Control, vol. 65, no. 6, pp. 2387-2401, June 2020, doi: 10.1109/TAC.2019.2927925. 27. L. Wang, Q. Zhou and S. Jin, “Physics-guided Deep Learning for Power System State Estimation,” in Journal of Modern Power Systems and Clean Energy, vol. 8, no. 4, pp. 607-615, July 2020, doi: 28. R.-P. Liu, S. Lei, C. Peng, W. Sun and Y. Hou, “Data-Based Resilience Enhancement Strategies for Electric-Gas Systems Against Sequential Extreme Weather Events,” in IEEE Transactions on Smart Grid, vol. 11, no. 6, pp. 5383-5395, Nov. 2020, doi: 10.1109/TSG.2020.3007479. 29. X. Zhou, Z. Liu, Y. Guo, C. Zhao, J. Huang and L. Chen, “Gradient-Based Multi-Area Distribution System State Estimation,” in IEEE Transactions on Smart Grid, vol. 11, no. 6, pp. 5325-5338, Nov. 2020, doi: 10.1109/TSG.2020.3003897. 30. G. Tian, Q. Zhou, R. Birari, J. Qi and Z. Qu, “A Hybrid-Learning Algorithm for Online Dynamic State Estimation in Multimachine Power Systems,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 12, pp. 5497-5508, Dec. 2020, doi: 10.1109/TNNLS.2020.2968486. 31. M. Song, W. Sun, M. Shahidehpour, M. Yan and C. Gao, “Multi-Time Scale Coordinated Control and Scheduling of Inverter-Based TCLs With Variable Wind Generation,” in IEEE Transactions on Sustainable Energy, vol. 12, no. 1, pp. 46-57, Jan. 2021, doi: 10.1109/TSTE.2020.2971271. 32. S. Fan, G. He, X. Zhou and M. Cui, “Online Optimization for Networked Distributed Energy Resources with Time-Coupling Constraints,” in IEEE Transactions on Smart Grid, vol. 12, no. 1, pp. 251-267, Jan. 2021, doi: 10.1109/TSG.2020.3010866. 33. M. Rayati, A. Sheikhi, A. M. Ranjbar and W. Sun, “Optimal Equilibrium Selection of Price-Maker Agents in Performance-Based Regulation Market,” in Journal of Modern Power Systems and Clean Energy, doi: 10.35833/MPCE.2019.000559. 34. X. Zhou, M. Farivar, Z. Liu, L. Chen and S. H. 
Low, “Reverse and Forward Engineering of Local Voltage Control in Distribution Networks,” in IEEE Transactions on Automatic Control, vol. 66, no. 3, pp. 1116-1128, March 2021, doi: 10.1109/TAC.2020.2994184. 35. Y. Xu, Z. Qu, R. Harvey and T. Namerikawa, “Data-Driven Wide-Area Control Design of Power System Using the Passivity Shortage Framework,” in IEEE Transactions on Power Systems, vol. 36, no. 2, pp. 830-841, March 2021, doi: 10.1109/TPWRS.2020.3009630. 36. X. Zhou, C. -Y. Chang, A. Bernstein, C. Zhao and L. Chen, “Economic Dispatch With Distributed Energy Resources: Co-Optimization of Transmission and Distribution Systems,” in IEEE Control Systems Letters, vol. 5, no. 6, pp. 1994-1999, Dec. 2021, doi: 10.1109/LCSYS.2020.3044542. 37. S. Meng, R. Roofegari nejad and W. Sun, “Robust Distribution System Load Restoration with Time-Dependent Cold Load Pickup,” in IEEE Transactions on Power Systems, doi: 10.1109/TPWRS.2020.3048036. 38. S. Mahdavi, H. Panamtash, A. Dimitrovski and Q. Zhou, “Predictive Coordinated and Cooperative Voltage Control for Systems with High Penetration of PV,” in IEEE Transactions on Industry Applications, doi: 10.1109/TIA.2021.3064910. 1. R. Harvey, Y. Xu, Z. Qu and T. Namerikawa, “Dissipativity-based design of local and wide-area DER controls for large-scale power systems with high penetration of renewables,” 2017 IEEE Conference on Control Technology and Applications (CCTA), Maui, HI, USA, 2017, pp. 2180-2187, doi: 10.1109/CCTA.2017.8062775. 2. Y. Okawa, T. Namerikawa and Z. Qu, “Passivity-based stability analysis of dynamic electricity pricing with power flow,” 2017 IEEE 56th Annual Conference on Decision and Control (CDC), Melbourne, VIC, Australia, 2017, pp. 813-818, doi: 10.1109/CDC.2017.8263760. 3. A. Gusrialdi, A. Chakrabortty and Z. Qu, “Distributed Learning of Mode Shapes in Power System Models,” 2018 IEEE Conference on Decision and Control (CDC), Miami, FL, USA, 2018, pp. 4002-4007, doi: 10.1109/CDC.2018.8618949. 4. R. R. 
nejad and W. Sun, “Chance-constrained Service Restoration for Distribution Networks with Renewables,” 2018 IEEE International Conference on Probabilistic Methods Applied to Power Systems (PMAPS), Boise, ID, USA, 2018, pp. 1-6, doi: 10.1109/PMAPS.2018.8440520. 5. A. C. Melhorn and A. Dimitrovski, “Correlation between EVs and Other Loads in Probabilistic Load Flow for Distribution Systems,” 2018 IEEE International Conference on Probabilistic Methods Applied to Power Systems (PMAPS), Boise, ID, USA, 2018, pp. 1-6, doi: 10.1109/PMAPS.2018.8440435. 6. H. Panamtash and Q. Zhou, “Coherent Probabilistic Solar Power Forecasting,” 2018 IEEE International Conference on Probabilistic Methods Applied to Power Systems (PMAPS), Boise, ID, USA, 2018, pp. 1-6, doi: 10.1109/PMAPS.2018.8440483. 7. R. Sarkar and Z. Qu, “Restoration using Distributed Energy Resources for Resilient Power Distribution Networks,” 2018 IEEE Power & Energy Society General Meeting (PESGM), Portland, OR, 2018, pp. 1-5, doi: 10.1109/PESGM.2018.8586564. 8. R. R. nejad, A. Golshani and W. Sun, “Integrated Transmission and Distribution Systems Restoration with Distributed Generation Scheduling,” 2018 IEEE Power & Energy Society General Meeting (PESGM), Portland, OR, USA, 2018, pp. 1-5, doi: 10.1109/PESGM.2018.8586657. 9. G. Tian, Q. Zhou and L. Du, “Deep Convolutional Neural Networks for Distribution System Fault Classification,” 2018 IEEE Power & Energy Society General Meeting (PESGM), Portland, OR, USA, 2018, pp. 1-5, doi: 10.1109/PESGM.2018.8586547. 10. O. Ceylan, A. Dimitrovski, M. Starke and K. Tomsovic, “A Novel Approach for Voltage Control in Electrical Power Distribution Systems,” 2018 IEEE Power & Energy Society General Meeting (PESGM), Portland, OR, USA, 2018, pp. 1-5, doi: 10.1109/PESGM.2018.8586421. 11. R. Harvey, Z. Qu and T. Namerikawa, “Coordinated Optimal Control of Constrained DERs,” 2018 IEEE Conference on Control Technology and Applications (CCTA), Copenhagen, Denmark, 2018, pp.
224-229, doi: 10.1109/CCTA.2018.8511494. 12. H. Wang and Z. Qu, “Narrowing Frequency Probability Density Function for Achieving Minimized Uncertainties in Power Systems Operation – a Stochastic Distribution Control Perspective,” 2018 IEEE Conference on Control Technology and Applications (CCTA), Copenhagen, Denmark, 2018, pp. 211-216, doi: 10.1109/CCTA.2018.8511533. 13. K. Muto, T. Namerikawa and Z. Qu, “Passivity-Short-based Stability Analysis on Electricity Market Trading System Considering Negative Price,” 2018 IEEE Conference on Control Technology and Applications (CCTA), Copenhagen, Denmark, 2018, pp. 418-423, doi: 10.1109/CCTA.2018.8511465. 14. X.Chen, C. Zhao and N. Li, “Distributed Automatic Load-Frequency Control with Optimality in Power Systems,” 2018 IEEE Conference on Control Technology and Applications (CCTA), Copenhagen, Denmark, 2018, pp. 24-31, doi: 10.1109/CCTA.2018.8511543. 15. M. Yoshihara, T. Namerikawa and Z. Qu, “Non-Cooperative Optimization of Charging Scheduling of Electric Vehicle via Stackelberg Game,” 2018 57th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), Nara, Japan, 2018, pp. 1658-1663, doi: 10.23919/SICE.2018.8492699. 16. M. Rathbun, Y. Xu, R. R. nejad, Z. Qu, and W. Sun, “Impact Studies and Cooperative Voltage Control for High PV Penetration,” 10^th IFAC Symposium on Control of Power and Energy Systems, Sep 04-06, 2018, Tokyo, Japan. 17. X. Zhou, Z. Liu, W. Wang, C. Zhao, F. Ding and L. Chen, “Hierarchical Distributed Voltage Regulation in Networked Autonomous Grids,” 2019 American Control Conference (ACC), Philadelphia, PA, USA, 2019, pp. 5563-5569, doi: 10.23919/ACC.2019.8814670. 18. H. Haggi, R. R. nejad, M. Song and W. Sun, “A Review of Smart Grid Restoration to Enhance Cyber-Physical System Resilience,” 2019 IEEE Innovative Smart Grid Technologies – Asia (ISGT Asia), Chengdu, China, 2019, pp. 4008-4013, doi: 10.1109/ISGT-Asia.2019.8881730. 19. S. Talebi, M. A. Simaan and Z. 
Qu, “Decision-Making in Complex Dynamical Systems of Systems With One Opposing Subsystem,” 2019 18th European Control Conference (ECC), Naples, Italy, 2019, pp. 2789-2795, doi: 10.23919/ECC.2019.8796292. 20. E. Weitenberg, Y. Jiang, C. Zhao, E. Mallada, F. Dörfler and C. De Persis, “Robust decentralized frequency control: A leaky integrator approach,” 2018 European Control Conference (ECC), Limassol, Cyprus, 2018, pp. 764-769, doi: 10.23919/ECC.2018.8550060. 21. S. Mahdavi and A. Dimitrovski, “Coordinated Voltage Regulator Control in Systems with High-level Penetration of Distributed Energy Resources,” 2019 North American Power Symposium (NAPS), Wichita, KS, USA, 2019, pp. 1-6, doi: 10.1109/NAPS46351.2019.9000253. 22. S. Talebi, M. A. Simaan and Z. Qu, “Cooperative Design of Systems of Systems Against Attack on One Subsystem,” 2019 IEEE 58th Conference on Decision and Control (CDC), Nice, France, 2019, pp. 7313-7318, doi: 10.1109/CDC40024.2019.9029565. 23. L. Wang and Q. Zhou, “Physics-Guided Deep Learning for Time-Series State Estimation Against False Data Injection Attacks,” 2019 North American Power Symposium (NAPS), Wichita, KS, USA, 2019, pp. 1-6, doi: 10.1109/NAPS46351.2019.9000305. 24. E. Elliott, N. Shanklin, S. Zehtabian, Q. Zhou and D. Turgut, “Peer-to-Peer Energy Trading and Grid Impact Studies in Smart Communities,” 2020 International Conference on Computing, Networking and Communications (ICNC), Big Island, HI, USA, 2020, pp. 674-678, doi: 10.1109/ICNC47757.2020.9049665. 25. P. M. Baidya, W. Sun and A. Perkins, “A survey on social media to enhance the cyber-physical-social resilience of smart grid,” 8th Renewable Power Generation Conference (RPG 2019), Shanghai, China, 2019, pp. 1-6, doi: 10.1049/cp.2019.0602. 26. H. Haggi, W. Sun and J. Qi, “Multi-Objective PMU Allocation for Resilient Power System Monitoring,” 2020 IEEE Power & Energy Society General Meeting (PESGM), Montreal, QC, Canada, 2020, pp. 1-5, doi: 10.1109/PESGM41954.2020.9281963. 27. J. 
Xie, I. Alvarez-Fernandez and W. Sun, “A Review of Machine Learning Applications in Power System Resilience,” 2020 IEEE Power & Energy Society General Meeting (PESGM), Montreal, QC, Canada, 2020, pp. 1-5, doi: 10.1109/PESGM41954.2020.9282137. 28. Y. Guo, X. Zhou, C. Zhao, Y. Chen, T. Summers and L. Chen, “Solving Optimal Power Flow for Distribution Networks with State Estimation Feedback,” 2020 American Control Conference (ACC), Denver, CO, USA, 2020, pp. 3148-3155, doi: 10.23919/ACC45564.2020.9147992. 29. S. Mahdavi and A. Dimitrovski, “Integrated Coordination of Voltage Regulators with Distributed Cooperative Inverter Control in Systems with High Penetration of DGs,” 2020 IEEE Texas Power and Energy Conference (TPEC), College Station, TX, USA, 2020, pp. 1-6, doi: 10.1109/TPEC48276.2020.9042560. 30. A. Gusrialdi, Y. Xu, Z. Qu and M. A. Simaan, “Resilient Cooperative Voltage Control for Distribution Network with High Penetration Distributed Energy Resources,” 2020 European Control Conference (ECC), St. Petersburg, Russia, 2020, pp. 1533-1539, doi: 10.23919/ECC51009.2020.9143684. 31. Y. Xu, Z. Qu and J. Qi, “State-Constrained Grid-Forming Inverter Control for Robust Operation of AC Microgrids,” 2020 European Control Conference (ECC), St. Petersburg, Russia, 2020, pp. 471-474, doi: 10.23919/ECC51009.2020.9143845. 32. S. Mahdavi, H. Panamtash, A. Dimitrovski and Q. Zhou, “Predictive and Cooperative Voltage Control with Probabilistic Load and Solar Generation Forecasting,” 2020 International Conference on Probabilistic Methods Applied to Power Systems (PMAPS), Liege, Belgium, 2020, pp. 1-6, doi: 10.1109/PMAPS47429.2020.9183699. 33. G. Tian, S. Faddel, X. Jin and Q. Zhou, “Probabilistic Power Consumption Modeling for Commercial Buildings Using Logistic Regression Markov Chain,” 2020 IEEE Power & Energy Society General Meeting (PESGM), Montreal, QC, Canada, 2020, pp. 1-5, doi: 10.1109/PESGM41954.2020.9281751. 34. D. Grover, Y. P. Fallah, Q. Zhou and P. E. 
Ian LaHiff, “Data-Driven Modeling and Optimization of Building Energy Consumption: a Case Study,” 2020 IEEE Power & Energy Society General Meeting (PESGM), Montreal, QC, Canada, 2020, pp. 1-5, doi: 10.1109/PESGM41954.2020.9281663. 35. G. Tian, Y. Gu, X. Lu, D. Shi, Q. Zhou, Z. Wang, and J. Li, “Estimation Matrix Calibration of PMU Data-driven State Estimation Using Neural Network,” 2020 IEEE Power & Energy Society General Meeting (PESGM), Montreal, QC, Canada, 2020, pp. 1-5, doi: 10.1109/PESGM41954.2020.9282092. 36. A. Gusrialdi and Z. Qu, “Data-Driven Distributed Algorithms for Estimating Eigenvalues and Eigenvectors of Interconnected Dynamical Systems, ” 21^st IFAC World Congress (IFAC 2020), Berlin, Germany, 12-17 July, 2020. 37. C.-Y. Chang, X. Zhou and A. Bernstein, “Computation-Efficient Algorithm for Distributed Feedback Optimization of Distribution Grids,” 2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), Tempe, AZ, USA, 2020, pp. 1-6, doi: 10.1109/SmartGridComm47815.2020.9302966. 38. X. Zhou, Y. Chen, Z. Liu, C. Zhao and L. Chen, “Multi-Level Optimal Power Flow Solver in Large Distribution Networks,” 2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), Tempe, AZ, USA, 2020, pp. 1-6, doi: 10.1109/SmartGridComm47815.2020.9303000. 39. Y. Chen, D. Matthews, S. Sadoyama and L. R. Roose, “A Data-Driven Method for Estimating Behind-the-Meter Photovoltaic Generation in Hawaii,” 2021 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT), Washington, DC, USA, 2021, pp. 1-5, doi: 10.1109/ISGT49243.2021.9372214. 40. Y. Xu and Z. Qu, “A Novel State-Constrained Primary Control for Grid-Forming Inverters,” 2021 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT), Washington, DC, USA, 2021, pp. 1-5, doi: 10.1109/ISGT49243.2021.9372231. Books and Book Chapters 1. J. 
Stoustrup, A. Annaswamy, A. Chakrabortty, and Z. Qu, Smart Grid Control: An Overview and Research Opportunities, Springer, 2018, ISBN 978-3-319-98310-3. 2. A. Gusrialdi and Z. Qu, “Smart Grid Security: Attacks and Defenses,” in Smart Grid Control: An Overview and Research Opportunities, J. Stoustrup, A. Annaswamy, A. Chakrabortty, and Z. Qu (Eds.), Springer Verlag, 2018. 3. A. Gusrialdi and Z. Qu, “Towards Resilient Operation of Smart Grid,” in Smart Grid Control: An Overview and Research Opportunities, J. Stoustrup, A. Annaswamy, A. Chakrabortty, and Z. Qu (Eds.), Springer Verlag, 2018. 4. A. Annaswamy and Z. Qu, The Role of Control Systems Research in Smart Grids, A Whitepaper for IEEE Smart Grid R&D Committee, September 2018. 5. K. Sun, Y. Hou, W. Sun, and J. Qi, Power System Control under Cascading Failures: Understanding and Mitigation of Cascading Failures and System Restoration. Wiley-IEEE Press, Jan. 2019, ISBN: 6. A. Gusrialdi, Y. Xu, Z. Qu, and M. Simaan, “A Real-Time Big-Data Control-Theoretical Framework for Cyber-Physical-Human Systems,” in Computational Intelligence and Optimization Methods for Control Engineering, M. J. Blondin, P. M. Pardalos, and J. S. Saez (Eds.), pp.149-172, Springer Verlag, 2019. 7. Y. Xu, W. Sun, and Z. Qu, “Renewable Energy Integration and System Operation Challenge: Control and Optimization of Millions of Devices,” in New Technologies for Power System Operation and Analysis, Huaiguang Jiang, Yingchen Zhang, and Eduard Muljadi (Eds.), Elsevier Inc., 2020. 8. A. Gusrialdi and Z. Qu, “Resilient Hierarchical Networked Control Systems: Secure Controls for Critical Locations and at Edge,” in Security and Resilience of Control Systems: Theory and Applications, Q. Zhu and H. Ishii (Eds.), Lecture Notes on Control and Information Sciences., Springer Verlag, 2021.
AI Advanced Algebra Calculator - MathHives, AI Math Simplifier

The AI advanced algebra solver solves advanced algebraic math problems with detailed steps, covering topics such as exponents, radicals, parentheses, factoring, linear equations, quadratic equations, logarithms, inequalities, division of polynomials, and more.
The Burden Of Proof: James R. Meyer Says Kurt Gödel's Famous Theorem Best known for his Incompleteness Theorem, Kurt Gödel (1906-1978) is considered one of the most important mathematicians and logicians of the 20th century. By showing that the establishment of a set of axioms encompassing all of mathematics would never succeed, he revolutionized the world of mathematics, logic, and philosophy. An Irishman, James R. Meyer, worked first as a veterinarian and, later, as an engineer. He became interested in what he sees as discrepancies in Gödel’s Incompleteness Theorem and set out to investigate the flaws. His novel, The Shackles of Conviction, questions Gödel’s widely accepted theorem. Simply Charly: What prompted you to write a novel on Kurt Gödel and his Incompleteness Theorem? James R. Meyer: The idea of a novel began when I was working on Gödel’s theorem when I was reading a lot, not just about the theorem, but articles about Gödel’s life as well. I started to become intrigued when these articles portrayed Gödel as a stereotypical cold and impassive logician—because while he was working towards his famous theorem, he also managed to have two affairs with married women, one of whom he eventually married. So it seemed to me that there was more to Gödel than the conventional “cold fish” image. And the more I read, the more the facts about Gödel’s life just didn’t add up. One day the thought came to me—what if Gödel had suspected that there was something wrong with his proof of his famous theorem? And suddenly, everything clicked into place. Instead of a cardboard cutout character, I now saw Gödel as a genuinely human character, with human failings, human worries, and subjected both to the triumphs and disappointments of life. I saw a man trapped in a situation from which he could envisage no escape, a situation that made him ever more eccentric and which would eventually drive him to the brink of insanity. 
From that moment on, I knew I had to put this idea into words. And when I had discovered the real truth about Gödel’s theorem, I couldn’t resist the opportunity of putting my ideas about Kurt Gödel’s life and his theorem together in a novel. SC: In your novel, The Shackles of Conviction, you make a compelling claim that Gödel’s theorem is wrong despite its widespread acceptance by the mathematical and philosophical community. How did you arrive at this conclusion? JM: I first came across Gödel’s proof of his theorem about twenty years ago when I was reading a popular book on curious mathematical ideas. From the description of the theorem as it was given in the book, I felt that there had to be something wrong—either with Gödel’s theorem or the book’s description of it. I started to study Gödel’s actual proof, but I had to put it aside because it was taking up too much of my time. Then, about three years ago, quite by chance, I came across another book about Gödel’s theorem, and I was hooked again, and this time I started to study it in depth. It didn’t take me long to discover that Gödel’s proof threw up logical contradictions. But it took me a lot longer to work out completely what was going on and to understand fully how these contradictions were coming from a fundamental flaw in Gödel’s proof. Knowing what I know now when I look back, I find it hard to believe that it took me so long. But then I am consoled by the fact that if hundreds of logicians and mathematicians weren’t able to see the flaw for seventy-five years, then it was always going to be difficult. SC: Your novel seems to be partly autobiographical as it mimics what you have demonstrated in real life, which is, that there is a fundamental flaw in Gödel’s theorem. Would you say this is true? 
JM: Oh no, the novel isn’t in any way autobiographical, although I’m sure that I’ve instilled something of myself into the main character of the book—or perhaps that should be how I would like people to imagine what sort of character I am! And, of course, the main character in the book is spurred on by the promise of an enormous sum of money as a reward—unfortunately, this wasn’t the case for me!

SC: Why did you choose the vehicle of fiction to tell this story rather than as a work of non-fiction?

JM: There were several reasons. The main reason was that once I had the ideas about Kurt Gödel’s life that would show him as a real human character, I knew I wanted to write a novel about it. That fitted in well with my discoveries about Gödel’s theorem. There were other reasons, of course. I hoped that I could reach a wider audience in this way than I could ever hope to do with a non-fictional account. And it was always going to be a battle trying to get any paper published when it flies in the face of conventional mathematical wisdom. Before I had worked out exactly where the flaw was in Gödel’s proof, I had already submitted a paper to several journals where I showed how Gödel’s proof led to logical contradictions. These were all rejected on the basis that “lots of people have looked at Gödel’s proof and found nothing wrong with it—therefore there is nothing wrong with it.” So I knew at that stage that trying to get a paper on the flaw published in a journal could take years, maybe decades, maybe never. I hope that the book shows that its creation was prompted not by a fleeting whimsy, but that this book has something important to say. I hope it can show people how to really understand how Gödel’s proof works (or doesn’t), without the need for any complicated mathematics.
And maybe for some of those people, they will have the thrill of experiencing that special moment where you get a sudden flash of insight, and you say, “Oh my God, I see it now!” If I succeed in doing that, then the effort involved in writing the book will have been worthwhile. SC: On your website, you’ve published a paper that purports to show a comprehensive demonstration of the flaw in Gödel’s proof of his Incompleteness Theorem. Has anyone challenged your claim? JM: Oh yes, lots of people have challenged it. But no one has come up with any logical reason for why what I have to say about Gödel’s proof is wrong. I don’t blame anyone for the knee-jerk reaction when they first hear of my paper and my book. There are a lot of cranks and crackpots out there who regularly claim that they have found an error in some area of mathematics, so it’s not surprising that the first reaction is one of skepticism when Gödel’s proof has been accepted as correct for seventy-five years. What I do get annoyed about is when people, logicians, and mathematicians included, are completely illogical in their responses to my discovery of a flaw in Gödel’s proof. Some people take it upon themselves to show that my demonstration of a flaw in Gödel’s proof is wrong, believing that it will be easy to find where I have “gone wrong.” And then, when they find that there is nothing wrong with it, they resort to some other tactic. For example, they may say that they know of another incompleteness proof, and say that they’ll accept that Gödel’s proof is wrong if I can find a flaw in that one—which is absurd, since if they won’t accept my discovery of a flaw in Gödel’s proof, they aren’t likely to accept my demonstration of a flaw in any other proof. Or they say that my argument must be wrong because Gödel’s proof is obviously correct—since if no one has found a flaw in it in seventy-five years, then there can’t be a flaw in it!
None of these responses show any appreciation that the only sure way to refute a logical argument is to show an error in it. Instead of pointing out an error in my argument, they simply resort to arguments that sidestep the real issue. One professor who is highly respected in the areas of logic, the foundations of mathematics, and computer science wrote: “I did download your paper and have to say that you describe your arguments clearly, so that for those (essentially all logicians) who hold the validity of Gödel’s argument, one could relatively easily point at a misunderstanding in your writing (as I believe there must be).” He didn’t point out any error in my paper, but he passed on my paper to a colleague who also could not find any error in my paper and who eventually conceded that: “Perhaps Gödel made a mistake in his proof. I don’t know. I have not read his original (translated) proof in careful detail. I’m not here to defend Gödel’s original proof … Even if Gödel’s proof has an error in it, it would only be of historical interest.” That’s just one example (by the way, the flaw in Gödel’s proof isn’t some sort of minor problem that can be overcome by rewriting a few lines of the proof. It’s a fundamental flaw that can’t be ignored and from which we can learn so much). So while one logician says it must be relatively easy to point at the alleged “misunderstanding” in my paper, no one has actually been able to do so. If I’m getting those sorts of responses from people who are purportedly experts on Gödel’s theorem, it’s not surprising that it will take a while to overcome the resistance to the idea that there can be any problem with Gödel’s proof. SC: If what you say is true of Gödel’s proof, then why isn’t it more widely known? After all, your paper would seem to pull the rug from under what has been regarded as the predominant pillar of thinking for the past 75 years. It would be catastrophic, don’t you think?
JM: I think you have to remember that the circle of mathematicians and logicians that are professionally employed and who work in detail on these areas is extremely small, and as such, it is a community in which a wrong move can spell disaster for a future career. And since the ones at the top have got where they are by adhering to the common core of acceptance, anyone who tries to go against the current will have a very hard time doing so. This isn’t anything new. Battles to be recognized as the bearer of the most logical argument in areas of mathematics and logic have been going on like this for well over two hundred years. SC: Are any renegade mathematicians/logicians disputing the theorem of incompleteness, and if so, on what grounds? JM: The exact opposite is the case. No one disputes Gödel’s theorem, but people insist that it has no philosophical impact, no bearing on how math should actually be done, and does not change the traditional formalist mathematical stance believing in static black-and-white absolute truth attained through logic and the axiomatic method. So now, in a reversal of fortune, it requires daring to claim that incompleteness is significant. (I myself am such a renegade.) So although there are many logicians of repute who are well aware of my demonstration of the flaw in Gödel’s theorem, no one wants to be the first to admit it publicly. And although they privately believe that there is something wrong with Gödel’s proof, they are too scared to damage their reputations by stepping out of line. They are scared that perhaps they have missed something and that maybe I am wrong, although they cannot see how that might be the case. It’s a big problem, and I know it will take time. The problem is that, because Gödel’s proof has been accepted as completely correct, proofs that are different but somewhat similar have been subject to little scrutiny and criticism. They have been accepted because they appear to fall in line with conventional thinking.
And that has reinforced the viewpoint that Gödel’s proof must be correct. I just wonder how long logicians will continue to insist that it must be easy to refute my argument at the same time as not doing so. Isn’t it going to become more and more embarrassing as time passes? Eventually, people are going to have to accept what logic dictates, and that Gödel’s proof is just the same as all other proofs. And that means that it gives a result that depends completely on all the assumptions and rules that are used to generate it. And if some of those assumptions and rules are not logically acceptable, then you cannot accept the result of the proof. Gödel’s proof includes steps that have been accepted because they appear to be intuitively correct, but those intuitive assumptions turn out to be incorrect. And in a way that is very ironic, because so many philosophers have claimed that Gödel’s result shows that intuition is in some way superior to formal reasoning—but Gödel’s result is actually a result of faulty intuition. And will it be catastrophic? No, of course not. How could it ever be a catastrophe to discover something that pushes our understanding forward? That can’t be a catastrophe; it is a learning experience. And in the case of Gödel’s theorem, it shows that there is a fundamental error in the way that we think about certain things, and learning that can only be beneficial—it is an opportunity to learn how we can prevent similar errors in reasoning in the future. So rather than being catastrophic, I think the discovery will be of great benefit. People will be forced to rethink many deeply held convictions, and in many cases, these convictions will be found wanting. That can only be a good thing. When I started to work on Gödel’s proof, I suspected that if I found the flaw in the proof, I would have learned something very important. And that has indeed been the case.
The flaw in Gödel’s proof teaches us fundamental things about logic and language that will be of great benefit to logic and mathematics. So it’s not the fact that Gödel’s proof has a flaw in it that is so important, but knowing how that flaw operates because that shows us how we can avoid similar problems in the future. SC: What exactly is Gödel’s Incompleteness Theorem? JM: Before you can understand what Gödel’s Incompleteness Theorem is, you have to have some idea of what a formal mathematical system is. And basically, a formal mathematical system consists of three things. First, there is a fully defined language, so that the alphabet of the language is fully determined, and there are definite rules that decide what are valid statements in that alphabet and what aren’t. Secondly, there is a collection of fundamental statements in the language that aren’t proven, but they are taken to be the fundamentally “true” statements of the language. These are called the axioms of the system. And thirdly, there are proof rules—rules that decide how a statement can be proven from one or more other statements in the language. So, once you have such a system, you have the basis for making a complete proof in that language. And what Gödel’s Incompleteness Theorem says is that it doesn’t matter which formal mathematical language system you use, you can always come up with an actual statement about numbers in that formal language which cannot be proven to be true by that formal system, and nor can it be proven to be false (hence the name “incomplete”). That in itself may not sound particularly significant, nor very interesting. But it is not the claim of incompleteness in itself that makes Gödel’s proof so interesting. The really intriguing thing about Gödel’s proof is that it also purportedly “proves” that this actual statement about numbers is actually true. 
And it is this that has fascinated so many people, including myself—because Gödel’s proof is itself expressed in a language, and it relies on certain assumptions. And if that language and those assumptions can prove this actual statement about numbers to be actually true, I have to ask, what is it about that language system that makes it capable of such things, when no fully defined mathematical language could do so? And when you come down to it, this is what demonstrates the paradoxical nature of Gödel’s result—that for any formal system there is a formula which is unproven by the formal system but is provable by the language of Gödel’s proof. And if Gödel’s proof was actually correct, wouldn’t that indicate that Gödel’s proof could never be stated in a formal language? But on the other hand, if Gödel’s proof is a logically coherent argument from given first principles, then, ultimately, should it not be possible to translate this logical argument into a precisely defined formal language? What I find so amazing is that logicians who have studied Gödel’s proof seem to be content to set this question aside. They are content to sit back and simply say that Gödel’s proof proves that formal languages can never prove all “true statements.” But when they do that they are saying in effect that ordinary language must be somehow superior to any formal language, since it can prove a statement of the formal system that the formal system cannot. Surely, I ask myself, any logician would want to understand why this is the case. SC: Can you summarize the essence of your argument that Gödel’s proof contains a flaw? JM: Well, this follows on from the last question. The flaw arises from the very fact that Gödel’s proof isn’t expressed with the same strict precision of a fully defined mathematical language. The astonishing thing is that at the crucial point in Gödel’s proof, the point where the flaw occurs, Gödel simply doesn’t bother to give a fully detailed proof. 
All he does is suggest a rough outline of how you might do a detailed proof. Gödel justifies this by saying that this part of the proof doesn’t need a detailed proof, that it’s all intuitively obvious. But it’s a fundamental principle in mathematics and logic that you can’t replace a logical argument by intuition. Otherwise, there would be no need for any mathematical proof at all. Intuition is fine as the basis for the idea for a proof, but that intuition has to be backed up by a logical argument. And sometimes intuition is wrong—there have been several other cases in mathematics where this has been the case. And that is why the crucial part of Gödel’s proof where he simply relies on intuition has to be looked at very carefully. If you go through Gödel’s outline for a proof, you can indeed build a detailed proof that superficially appears to be completely logical, and which gives the same result as Gödel’s intuitive outline. But if you examine it carefully, you see that it involves a confusion of language. Gödel’s proof at the crucial point actually involves not two, but three separate languages. One is the language of the formal system, the second is another language that involves number relationships, and the third language is the language that talks about these two other languages. The error that Gödel makes is that he confuses the second language and the language that talks about that language. His intuitive outline refers to a statement that is actually a statement in that second mathematical number language, but Gödel makes the error of assuming that it is a statement in the language that talks about that second language. And that’s what is intrinsically wrong with Gödel’s proof. It’s not something that arises from the way that I have filled in the details of the proof. It means that the result that Gödel got hasn’t arisen from any aspect of the formal system. It comes from the ambiguities inherent in the language that Gödel uses for his proof.
By the way, I now have on my website a simplified explanation of Gödel’s proof and the flaw in it that I have written to be easy to follow, but which includes the essentials of the proof. SC: What are the implications of your discovery? How do you think it will affect mathematics going forward? JM: I think it will eventually be accepted, more so perhaps by the incoming generation of mathematicians and logicians, rather than those who are inflexible in their fixed beliefs. In my mind, I see two possible scenarios, and I don’t know which will prevail in the short term. On the one hand, I see the possibility of an exciting new future for mathematics, one that is firmly based on reason and logic and in which many beliefs of the past—those that have no basis in reason—are finally shrugged off as we move into a mathematical land free of such myths. You know, when I look at the whole field of scientific knowledge, I find it somewhat ironic that the area of mathematics and logic should be among the last of the scientific subjects to finally free themselves of myths, even though that area should be the most rigorous of all scientific subjects. The failure of logicians to see that Gödel’s proof had to be wrong might be seen as an embarrassing failure. How could such a thing happen? Isn’t it worrying that the crowd has followed blindly the decree that Gödel’s proof must be correct? That must stimulate self-examination within the cozy world of mathematics and logic, one which may be uncomfortable for those who feel quite settled in that world. But it will have to come, and the result will be a better environment for new students to work in, a new environment in which everything must be subjected to rigorous logic, where nothing is taken for granted. And that is how it should be. The other possibility I see is that mathematicians will bury their heads in the sand and ignore any possibilities that their deeply cherished beliefs might be wrong. 
Currently, I see a problem because new ideas that do not conform to the accepted norm are rejected. There are so many submissions to journals that those who try to review such papers cannot possibly check every such submission in depth. The result is that papers that conform to the accepted norm are published, while those that do not are rejected—not because the reviewer can find anything wrong with the logical argument of the submission, but on the basis that “Your paper cannot be correct because it contradicts [ here you fill in some commonly accepted result].” The problem is that once a paper is published in a journal, that effectively makes it “correct.” It is rare for a published paper to be later found to be wrong. And that means that the reviewers who look at papers and who have to judge them are under great pressure not to make a mistake. They don’t dare to let through a paper that will later be found to be wrong. And the big problem is that this has become so ingrained that even short, simple submissions that can be easily examined are rejected on that basis. The end effect is that the journals will not publish anything that contradicts the status quo. And this may continue to be the case for some time. But I think eventually the cracks will become so big that they can no longer be ignored, even by those who push their heads in the sand as deeply as possible. SC: Mathematician Gregory Chaitin has said that “Gödel’s incompleteness theorem is a reductio ad absurdum (reduction to absurdity) of David Hilbert’s traditional formalist view that math is based on logical reasoning and the axiomatic method.” Do you think this still holds true after your discovery? JM: It was always the case, and it still is, that mathematics is considered to be something that is based on logical reasoning. 
I think my discovery will finally show the absurdity of the approach of Chaitin and similar people when they say that you can use informal, intuitive language that isn’t fully defined to “prove” that a fully defined formal language is somehow “not as good” as this informal, intuitive language. Chaitin and others like him revel in producing paradoxes, and insist that such things are a fundamental part of all mathematics. They’re not. They are indications that there is something wrong with the system of mathematics you are using. It does not mean that every mathematical system is in some way inherently paradoxical. SC: Some mathematicians feel that Gödel’s theorem hasn’t really affected how math should actually be done, and does not change the traditional formalist mathematical stance believing in static black-and-white absolute truth attained through logic and the axiomatic method. Do you think this is true? JM: That’s pretty much the way I think of it—except that I think if the mathematical system that you use is to be able to say anything about the real world, then the axioms of that system have to be seen as actually applying to the real world (the axioms being the statements of the system that are the fundamental “true” statements of the system). There is a viewpoint that modern mathematics is divided into two quite distinct sides, where one side is devoted to practical mathematics, such as mathematics for engineering, computing, physics, and so on, and another side which is not interested in whether their mathematics is based on reality, and which creates proofs that say nothing about the real world we live in. The problem is that in practice, there is no clear distinction between the two sides, so that we now have professors in computer science who, contrary to what you might expect, are deeply immersed in working with mathematical systems whose fundamental axioms are not based on any experience of the real world.
The problem is that they don’t make their students aware of this distinction. SC: What are you currently working on? JM: It’s not easy finding the time to do all that I would like to do. I have to carry on with a life that has some semblance of normality. At the moment, when I do get the time, I am working on a number of areas in the foundations of logic and mathematics. Some of them follow directly from my findings of how the flaw in Gödel’s proof operates. I’m trying to bring them all together, and hopefully, I will eventually manage to get it all organized into a book on faulty reasoning in areas of logic and mathematics, which I think will be quite a revelation. I wish I could devote more time to the foundations of mathematics and logic. But unless some far-sighted benefactor is willing to sponsor me, what I can do is limited. But I will keep at it and get there in the end.
Kurt Gödel – On Formally Undecidable Propositions of Principia Mathematica and Related Systems (English translation by Martin Hirzel)
James R. Meyer – The Fundamental Flaw in Gödel’s Proof of his Incompleteness Theorem
Home of ON7AMI - Jean Paul Mertens - (former ON1AMI) - Zevergem - Radio and Astronomy

Physics For Engineers And Scientists - Solutions

Physics For Engineers And Scientists International Student Edition (Extended Third Edition). Chapters 1-41 Paperback – 19 Jun 2007 by Hans C. Ohanian (Author), John T. Markert (Author)

The text presents a modern view of classical mechanics and electromagnetism for today's science and engineering students, including coverage of optics and quantum physics and emphasizing the relationship between macroscopic and microscopic phenomena. Organized to address specific concepts and then build on them, this highly readable text divides each chapter into short, focused sections followed by review questions. Using real-world examples, the authors offer a glimpse of the practical applications of physics in science and engineering, developing a solid conceptual foundation before introducing mathematical results and derivations (a basic knowledge of derivatives and integrals is assumed). Designed for the introductory calculus-based physics course, Physics for Engineers and Scientists is distinguished by its lucid exposition and accessible coverage of fundamental physical concepts.

My own experience: A very useful book, as a companion to the MIT 8.01 Physics course. All of the lectures can be found on YouTube: Lectures by Walter Lewin. They will make you ♥ Physics. The book contains a lot of problems; the results of the odd-numbered problems are in the back of the book. I'll try to deliver here all solutions, explained in Jupyter Notebook format, using Python to calculate the results. A book not to be missed on the shelf if you want to delve into Physics ...

Solutions (PFE) to the problems

If you discover errors in the solutions, let me know so that I can correct them. The solutions are as they are; I do NOT take any responsibility for their correctness. Before you peek at them, try to solve by yourself.
This is the only way to learn Physics and to get into the habit of solving problems. Only once you have your own results should you compare them to these.

Chapter 1: Previous solutions were made in MATLAB and in Dutch, so I have to translate them and transcribe them to Jupyter Notebook. I will first finish the other chapters and in between add the first solutions in reverse order.

Q062 Q063 Q064 Q065 Q066 Q067 Q068 Q069 Q070 Q071 Q072 Q073 Q074 Q075 Q076 Q077 Q078 Q079 Q080 Q081 Q082 Q083

Chapter 2:

Q001 Q002 Q003 Q004 Q005 Q006 Q007 Q008 Q009 Q010 Q011 Q012 Q013 Q014 Q015 Q016 Q017 Q018 Q019 Q020 Q021 Q022 Q023 Q024 Q025 Q026 Q027 Q028 Q029 Q030 Q031 Q032 Q033 Q034 2.3 - 2.4
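As a taste of the notebook style described above, here is a minimal sketch of how one such Python solution cell might look. The problem below is invented purely for illustration; it is not one of the book's numbered problems, and the variable names are my own.

```python
import math

# Made-up example (not from the book): a ball is dropped from rest
# at height h = 45 m. How long does it fall, and how fast is it
# moving at impact?
g = 9.81                   # gravitational acceleration, m/s^2
h = 45.0                   # drop height, m

t = math.sqrt(2 * h / g)   # solve h = (1/2) g t^2 for t
v = g * t                  # impact speed from v = g t

print(f"t = {t:.2f} s, v = {v:.1f} m/s")
```

Running the cell prints t = 3.03 s and v = 29.7 m/s, which you would then check against the answer in the back of the book.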
Optimize PDF files - Part II

Your applications can write incredibly small PDF files if you know what you're doing. This article is intended for programmers who create PDF files programmatically using custom routines. Read Part I if you are a user saving PDFs and also to gain a general understanding of PDF optimization. Amaze your users by saving small, high-quality PDF files. Users expecting multimegabyte PDFs will be pleased to find out your application requires only tens of kilobytes, even just a few kilobytes, for a simple PDF report.

This article assumes you are writing PDF files programmatically. It further assumes you are doing this with a PDF writer module or class for which you have the source code so that you can actually fine tune the PDF output. You need to know quite a bit about the PDF file format to take advantage of these techniques. The article focuses on PDF v1.3. The optimizations are potentially as useful with other versions as well. Get PDF Reference, Adobe Portable Document Format Version 1.3, to follow the tricks.

Font optimization

Let's start with the obvious optimization: fonts.

Optimization #1: Don't embed fonts

Did you know fonts don't need to be embedded in PDF? Font embedding is optional. The PDF standard allows you to use any font, whether or not it exists on the reader's machine. If a required font is not found, PDF reader applications use font metrics (in /FontDescriptor) to find a reasonable replacement font. Indeed, in many cases font embedding will unnecessarily bloat the file. Consider an application that creates reports, which are mainly used by the same user on the same PC. They will display perfectly well as long as they are on the same PC (unless the user happens to uninstall the required fonts). When common fonts are used, you have good chances the fonts will always show up correctly.

Optimization #2: Use standard fonts

PDF comes with 5 standard font families.
The families are Times, Helvetica, Courier, Symbol and ZapfDingbats. All PDF readers support these standard fonts. Except for ZapfDingbats, the other fonts are similar to standard Windows fonts. The standard fonts will not need embedding. That's why you can safely use them.

PDF standard fonts and their replacements:

  PDF font      Windows font     Sample
  Times         Times New Roman  Times is a serif font
  Helvetica     Arial            Helvetica is a sans-serif font
  Courier       Courier New      Courier is a fixed-width font
  Symbol        Symbol           Symbol is, well, a symbol font
  ZapfDingbats  (ZapfDingbats)   ZapfDingbats includes symbols and ornaments

Optimized representation of text and numeric values

Don't bloat PDFs by representing text and numeric values with too many bytes. You can potentially do the same with less.

Optimization #3: Use PDFDocEncoding

This is a relatively small optimization, but simple enough. Text strings, such as /Subject in file info or /Title in /Outlines, can be in either Unicode or PDFDocEncoding. Unicode takes twice the space: two bytes per character compared to one byte with PDFDocEncoding. Use Unicode only when the content cannot be represented in PDFDocEncoding. Note that PDFDocEncoding contains a wider range of characters than WinAnsiEncoding or MacRomanEncoding. It's good news for optimizers.

Optimization #4: Optimize number of decimal digits

This optimization is for representing all numeric values in PDF. Use only as few decimals as required. It's unnecessary to bloat the file with too many useless decimals. Write a utility function that rounds values for you. Supposing you need 2 decimals precision, the function should round like this:

  1.2345 → 1.23
  1.2000 → 1.2
  1.0000 → 1

Stream optimization

Now we get to optimizing streams, the actual page content.

Optimization #5: Clip to viewable area

This rule is especially important if you're drawing a part of a larger graphic into PDF. When drawing graphics objects or text it's a good idea to check for page boundaries.
If no part of the object will be visible, there's no point adding the respective drawing operations in the PDF file. The result will be invisible anyway. Besides, hidden data in a PDF is a security concern.

Optimization #6: Don't repeat operators unnecessarily

PDF keeps track of the currently selected color, line width, font and so on. You don't need to select the color each time you draw a line. Only set the color when it needs to change. The same goes for line width, line cap style and other drawing attributes. Keep track of the current attributes. Only change them when you need to.

Optimization #7: Close polygons

When drawing a polygon, there is no need to draw the last edge (with the l operator). Close polygons with the h operator instead. This closes a subpath by appending a straight line segment from the current point to the starting point. Even better optimizations are available. Instead of h, use s to close and stroke the path and b to close, fill and stroke. There are even more options; see Path-painting operators in the PDF Reference.

Optimization #8: Use shortcuts for splines

The default way to draw a spline curve from the current point to (x3,y3) with (x1,y1) and (x2,y2) as the control points is this:

  x1 y1 x2 y2 x3 y3 c

If the current point and (x1,y1) are the same, there is a shorter form:

  x2 y2 x3 y3 v

If points (x2,y2) and (x3,y3) are the same, use this shorter form:

  x1 y1 x3 y3 y

Optimization #9: Use color shortcuts

The standard operators to set color are:

  0.123 0.123 0.123 rg
  0.123 0.123 0.123 RG

These operators take 3 values: Red, Green and Blue. For black, gray or white colors you don't need the full RGB color space. Grayscale is enough. To select black, use one of these operators:

  0 g
  0 G

For white, use these:

  1 g
  1 G

You can do the same for any shade of gray. To select 0.123 gray, use one of the following:

  0.123 g
  0.123 G

Optimization #10: Compress

Compress streams to get the size down. Get a copy of the zlib library to do the compression for you.
zlib is relatively straightforward to use. Visual Basic note: VB6 cannot call the regular zlib.dll, but you can use zlibwapi.dll instead.

Small PDF samples

Here are small PDF samples with vector graphics and text. The graphic was originally created with Visustin.

• Uncompressed PDF (3987 bytes) is an optimized, but not compressed, PDF.
• Compressed PDF (2274 bytes) is the same document compressed with zlib.
• Word PDF (4641 bytes) is the same document saved by Word 2007 (with optimal settings). Not large, yet twice the size!

The difference is largely due to the stream (page content). Word also unnecessarily added character widths in 9 0 obj, even though the document should really use a PDF standard font with built-in widths.
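Several of the optimizations above (trimming decimals, grayscale shortcuts, closing paths with a single operator, and Flate compression) can be sketched together in a few lines. This is illustrative Python, not code from the article: the helper name pdf_num is made up, and Python's built-in zlib module stands in for the zlib C library mentioned in the text.

```python
import zlib

def pdf_num(value, places=2):
    """Format a number with at most `places` decimals and no
    trailing zeros, as suggested in Optimization #4."""
    s = f"{value:.{places}f}".rstrip("0").rstrip(".")
    return s or "0"

# A tiny content stream using the shortcuts above: grayscale instead
# of RGB (Optimization #9) and s to close-and-stroke (Optimization #7).
stream = " ".join([
    pdf_num(0.123), "g",                # "0.123 g", not "0.123 0.123 0.123 rg"
    pdf_num(72.0), pdf_num(72.0), "m",  # move to the polygon's first point
    pdf_num(144.5), pdf_num(72.0), "l",
    pdf_num(144.5), pdf_num(120.25), "l",
    "s",                                # close the subpath and stroke it
]).encode("ascii")

deflated = zlib.compress(stream, 9)     # Optimization #10: Flate-compress the body
```

In a real PDF writer, the deflated bytes would become the body of a stream object with /Filter /FlateDecode, and /Length would be set to len(deflated).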
{"url":"https://www.aivosto.com/articles/pdf-optimize2.html","timestamp":"2024-11-12T16:59:11Z","content_type":"text/html","content_length":"10921","record_id":"<urn:uuid:7e5e4aed-3a3b-4be2-9f0e-abbaed0fa407>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00659.warc.gz"}
Lenny Conundrum

We rate this game medium. (This is our rating, not Neopets')
NP Ratio: 1.00 (1000 pts :: 1000 NP)

NOTE: The Lenny Conundrum was discontinued on July 25, 2013.

Every Wednesday evening, the Lenny Conundrum Lenny releases a new puzzle to the Neopian public. Neopians everywhere rush to solve his puzzle in hopes of earning a trophy, a rare item prize, and an avatar.

Types of Puzzles

• Logic: Logic puzzles require logical thinking skills. One example of a logic-based Lenny Conundrum is decoding a message. The key to these puzzles is to not over-think things. The answer could be straight in front of you, so think but don't over-think. For an example, see Lenny Conundrum Round #385 or Lenny Conundrum Round #16.
• Math Equation: This requires math skills. For this, you could be working with formulas, angles, or even shapes. It might be best to get out some paper and sketch it out. For example, see Lenny Conundrum Round #386.
• Guessing: These puzzles are pure guessing. For example, sometimes the Lenny will ask how many marbles are in the jar. There is no way to get the answer here, so type in anything.
• Sequences: These require both math and logic skills. The Lenny will post a sequence of maybe numbers/words and your job is to find out the next term in the sequence or what all of them have in common. For example, see Lenny Conundrum Round #393.
• Information: The answer to the puzzle can sometimes be found elsewhere on the site. This may require going back into the Lenny Conundrum Archives or to another place on the site. For example, see Lenny Conundrum Round #3.
• Combination: Sometimes puzzles are a combination of the other types, such as needing to find information and also solve a sequence, or work through a math equation while also using some logic. For example, see Lenny Conundrum Round #395 or Lenny Conundrum Round #400.
All of the people who submit the right answer split a 4 million NP prize pool, so the fewer winners there are, the more Neopoints each one gets. In addition, the first 250 users who correctly answer the question win a trophy and an item.

• Always read the question twice. You might miss something the first time.
• Double-check your answer. You only get one shot, so if you make a silly mistake, your answer will still be wrong.
• Look for special instructions. If you didn't read that you should round your answer and you submit a decimal, it won't count.
• Stay away from the Neoboards. You have to work out the answer on your own; if you go to the Neoboards for the answer, people will most likely get mad at you.
• Practice. Practice makes perfect. You can't expect to walk in and come out with a 1st place trophy without thinking.

The Lenny Conundrum can be a fun little brainteaser for people who like puzzles. Some puzzles or puzzle types are easier than others, but with practice you could have a good shot at the rewards. Good luck!
Statistical framework for estimating GNSS bias

Juha Vierinen, Anthea J. Coster, William C. Rideout, and Philip J. Erickson (Haystack Observatory, Massachusetts Institute of Technology, Westford, MA, USA), and Johannes Norberg (Finnish Meteorological Institute, Helsinki, Finland)

Atmospheric Measurement Techniques (AMT), 9, 1303–1312, published 30 March 2016 by Copernicus Publications. doi:10.5194/amt-9-1303-2016. Licensed under a Creative Commons Attribution 3.0 Unported License (http://creativecommons.org/licenses/by/3.0/). Full text: https://amt.copernicus.org/articles/9/1303/2016/amt-9-1303-2016.pdf

Abstract. We present a statistical framework for estimating global navigation satellite system (GNSS) non-ionospheric differential time delay bias. The biases are estimated by examining differences of measured line-integrated electron densities (total electron content: TEC) that are scaled to equivalent vertical integrated densities. The spatiotemporal variability, instrumentation-dependent errors, and errors due to inaccurate ionospheric altitude profile assumptions are modeled as structure functions.
These structure functions determine how the TEC differences are weighted in the linear least-squares minimization procedure, which is used to produce the bias estimates. A method for automatic detection and removal of outlier measurements that do not fit into a model of receiver bias is also described. The same statistical framework can be used for a single receiver station, but it also scales to a large global network of receivers. In addition to the Global Positioning System (GPS), the method is also applicable to other dual-frequency GNSS systems, such as GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema). The use of the framework is demonstrated in practice through several examples. A specific implementation of the methods presented here is used to compute GPS receiver biases for measurements in the MIT Haystack Madrigal distributed database system. Results of the new algorithm are compared with the current MIT Haystack Observatory MAPGPS (MIT Automated Processing of GPS) bias determination algorithm. The new method is found to produce estimates of receiver bias that have reduced day-to-day variability and more consistent coincident vertical TEC values.

A dual-frequency global navigation satellite system (GNSS) receiver can measure the line-integrated ionospheric electron density between the receiver and the GNSS satellite by observing the transionospheric propagation time difference between two different radio frequencies. Ignoring instrumental effects, this propagation delay difference is directly proportional to the line integral of electron density. Received GNSS signals are noisy and contain systematic instrumental effects, which result in errors when determining the relative time delay between the two frequencies. The main instrumental effects are frequency-dependent delays that occur in the GNSS transmitter and receiver, arising from dispersive hardware components such as filters, amplifiers, and antennas.
Loss of satellite signal can also cause unwanted jumps in the measured relative time delay and cause unwanted nonzero mean errors in the relative time delay measurement. Because line-integrated electron density is determined from this relative time delay, it is important to be able to characterize and estimate these non-ionospheric sources of relative time delay. The non-ionospheric relative time delay due to hardware is commonly referred to as bias in the literature. For the specific case of GPS measurements, the bias is often separated into two parts ordered by the source of delay: satellite bias and receiver bias.

A GNSS measurement of relative propagation time delay difference including the line-integrated electron density effect can be written as

$$ m = b + c + \int_S N_e(s)\,\mathrm{d}s + \xi, $$

where $m$ is the measurement, $b$ is the receiver bias, $c$ is the satellite bias, $S$ is the path between the receiver and the satellite, $N_e(s)$ is the ionospheric electron density at position $s$, and $\xi$ is the measurement noise. The measurement is scaled to total electron content (TEC) units, i.e., $10^{16}\,\mathrm{m}^{-2}$, and therefore the bias terms also have units of TEC.

For ionospheric research with GNSS receivers that perform measurements of the form shown above, the quantity of interest is usually the three-dimensional electron density function $N_e(s)$. However, this quantity is challenging to derive from GNSS measurements alone, as we only observe one-dimensional line integrals through the ionosphere. The problem is an ill-posed inverse problem called the limited-angle tomography problem. The difficulty arises from the fact that line integrals are measured only at a small number of selected viewing angles, and this information is not sufficient to fully determine the unknown electron density distribution without making further assumptions about the unknown measurable $N_e(s)$.
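As a minimal sketch of the measurement equation above, a slant TEC observation can be simulated by adding receiver bias, satellite bias, and noise to a line-integrated density. All numerical values here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_measurement(slant_tec, b, c, noise_std=0.3, rng=None):
    """m = b + c + integral_S Ne ds + xi, all in TEC units (1 TECu = 1e16 m^-2)."""
    rng = np.random.default_rng(0) if rng is None else rng
    xi = rng.normal(0.0, noise_std)  # receiver noise
    return b + c + slant_tec + xi

# illustrative values: 35 TECu slant integral, -4.2 TECu receiver bias,
# 1.7 TECu satellite bias
m = simulate_measurement(35.0, b=-4.2, c=1.7)
```

Note that the ionospheric term and the two bias terms are indistinguishable in a single measurement, which is why the paper works with differences of measurements.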
These assumptions often impose horizontal and vertical smoothness, as well as temporal continuity. A considerable number of prior studies have attempted to solve this tomographic inversion problem in three dimensions for beacon satellites as well as for GPS satellites. Because of the large computational costs and complexities associated with full tomographic solvers, much of the practical research is done using a reduced quantity called the vertical total electron content (VTEC). As we will describe in more detail below, VTEC in essence results from a reduced parameterization of the ionosphere that is used to simplify the tomography problem and make it more well-posed. VTEC processing is only concerned with the integrated column density, and therefore the measurements are reported in TEC units.

The fundamental assumption for vertical TEC processing is that a slanted line integral measurement of electron density can be converted into an equivalent vertical line integral measurement with a parameterized scaling factor $v(\alpha)$:

$$ \int_V N_e(s)\,\mathrm{d}s \approx v(\alpha) \int_S N_e(s)\,\mathrm{d}s, $$

where $V$ is a vertical path, $S$ is the associated slanted path, $\alpha$ is the elevation angle, and $v(\alpha)$ is the scaling factor that relates a slanted integral to a vertical line integral. There are several ways that $v(\alpha)$ can be derived without resorting to full tomographic reconstruction of the altitude profile shape. Typically, the ionosphere above a certain geographic point is assumed to be described with some vertical shape profile $p(h)$ multiplied by a scalar: $N_e(h) = N p(h)$. One example of an often-used shape profile is the Chapman profile:

$$ p(h) = \exp\left(1 - z(h) - e^{-z(h)}\right), $$

where $z(h) = (h - h_m)/H$, $h_m$ is the peak altitude of the ionosphere, and $H$ is the scale height. Another example is a slab with exponential top and bottom side ramps. The accompanying figure depicts the geometry and profile shape assumptions in vertical TEC processing.
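The Chapman profile and a mapping function can be sketched as follows. The thin-shell form of $v(\alpha)$ below is a common illustrative choice under a single-layer assumption, not necessarily the exact mapping function used in MAPGPS; the peak altitude, scale height, and shell height are assumed values.

```python
import numpy as np

def chapman(h_km, hm_km=350.0, H_km=60.0):
    """Chapman shape profile p(h) = exp(1 - z - e^(-z)) with z = (h - hm)/H."""
    z = (h_km - hm_km) / H_km
    return np.exp(1.0 - z - np.exp(-z))

def mapping_function(elev_deg, shell_km=350.0, re_km=6371.0):
    """Thin-shell scaling factor v(alpha) relating a slant integral to a
    vertical one: the cosine of the zenith angle at the ionospheric
    pierce point (an illustrative single-layer assumption)."""
    alpha = np.radians(elev_deg)
    sin_chi = re_km * np.cos(alpha) / (re_km + shell_km)
    return np.sqrt(1.0 - sin_chi**2)
```

The profile peaks at $h_m$ with $p(h_m) = 1$, and a zenith ray needs no scaling: $v(90^\circ) = 1$, with $v$ shrinking toward the horizon.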
Figure: A scaled altitude profile model of the ionosphere assumes that the ionosphere locally has a fixed horizontally stratified altitude profile shape multiplied by a scalar. This makes it possible to relate slanted line integrals to equivalent vertical line integrals using an elevation-dependent scaling factor called the mapping function. The pierce point is located where the ray pierces the peak of the electron density profile.

In more advanced models, the mapping function can be parameterized not only by elevation angle but also by factors such as time of day, geographic location, solar activity, and the azimuth of the observation ray. In practice, this can be done by using a first-principles ionospheric model to derive a more physically motivated mapping function. Although the vertical TEC assumptions described above are not as flexible as a full tomographic model that attempts to determine the altitude profile, they provide model-to-data fits that are to first order good enough to produce measurements that are useful for studies of the ionosphere. The utility of this simplified model derives from the fact that it results in an overdetermined, well-posed problem that can be inverted with relatively stable results. The main practical difficulties in data reduction using the simplified model are estimating the receiver and satellite biases $b$ and $c$, as well as handling possible model errors.

In this paper, a novel statistical framework for deriving these GNSS measurement biases is described. The method is based on examining large numbers of differences between slanted TEC measurements that are scaled with the mapping function $v(\alpha)$. The differences between pairs of measurements are assumed to be Gaussian normal random variables with a variance that is determined by the properties of the two measurements, i.e., spacing in time, geographic distance, and elevation angle.
While the Gaussian assumption results in a numerically efficient system of equations, this assumption is also supported by numerical evidence, which suggests that the distribution of the differences of vertical TEC measurements is close to a zero-mean normal distribution, with the standard deviation increasing when the geographic distance between pierce points is increased, the temporal distance is increased, or the elevation angle of the measurement is decreased. We will show how this general statistical framework can be used to estimate biases in multiple special cases and finally compare the newly presented method with an existing bias determination scheme within the MIT Haystack MAPGPS (MIT Automated Processing of GPS) algorithm. We will refer to this new method for bias determination as weighted linear least squares of independent differences (WLLSID).

Let us write the measurement equation in a more compact form, but now with index $i$ denoting a measurement, $j$ denoting the receiver, and $k$ denoting the satellite:

$$ m_i = b_{j(i)} + c_{k(i)} + n_i + \xi_i. $$

Here $n_i$ is the line integral of electron density through the ionosphere for measurement $i$. The receiver and satellite indices associated with measurement $i$ are given by $j(i)$ and $k(i)$. Receiver noise is represented with $\xi_i$.

Now consider subtracting slanted TEC measurements $i$ and $i'$, which are scaled with the corresponding mapping function values $v_i$ and $v_{i'}$ that convert slanted TEC to equivalent vertical TEC. In this analysis, it does not matter if these measurements are associated with the same receiver or the same satellite, or even if they occur at the same time:

$$ v_i m_i - v_{i'} m_{i'} = v_i n_i - v_{i'} n_{i'} + v_i b_{j(i)} - v_{i'} b_{j(i')} + v_i c_{k(i)} - v_{i'} c_{k(i')} + v_i \xi_i - v_{i'} \xi_{i'}. $$

This type of difference equation has several benefits. If measurements $i$ and $i'$ are performed at times close to each other ($t_i \approx t_{i'}$) and have closely located pierce points ($x_i \approx x_{i'}$), then we can make the assumption that $v_i n_i \approx v_{i'} n_{i'}$, i.e., that the vertical TEC is similar.
We can statistically model this similarity by assuming that the difference of equivalent vertical line-integrated electron content between two measurements is a normally distributed random variable:

$$ v_i n_i - v_{i'} n_{i'} = \tilde{\xi}_{i,i'} \sim N(0, S_{i,i'}), $$

where $S_{i,i'}$ is the structure function that indicates what we assume to be the variance of the difference of the two measurements $i$ and $i'$. This structure function would be our best guess of how different we expect these two measurements to be. We assume the structure function depends on the following factors: (1) the geographic distance between pierce points $d_{i,i'} = |x_i - x_{i'}|$, (2) the difference in time between when the measurements were made $\tau_{i,i'} = |t_i - t_{i'}|$, (3) the receiver noise of both measurements $\xi_i + \xi_{i'}$, and (4) modeling errors that depend on the elevation angles $\alpha_i$ and $\alpha_{i'}$ of the measurements. The modeling errors in (4) are caused by inaccuracies in the assumption that we can scale a slanted measurement into an equivalent vertical measurement. The following subsections describe the structure function behaviors for each dependent variable.

In order to model the variability of electron density as a function of geographic location, we assume the difference between two measurements to be a random variable

$$ v_i n_i - v_{i'} n_{i'} \sim N(0, D(d_{i,i'})), $$

where in this work we use $D(d) = 0.5 d$ in units of TECu per 100 km. This implies that we assume the standard deviation of the difference of two vertical TEC measurements to grow at a rate of 0.5 TEC units per 100 km of spacing between pierce points. For the results in this paper, we use the functional form above, but this can be improved in future work by a more complicated spatial structure function $D(x_i, x_{i'}, t_i, t_{i'})$, which is a function of the pierce point locations $x_i$ and $x_{i'}$, as well as the times of the measurements $t_i$ and $t_{i'}$. This function could, for example, be derived experimentally from vertical TEC measurements themselves.
This would allow more accurate modeling of sunrise and sunset phenomena, as well as meridional and zonal gradients.

Two measurements do not necessarily have to occur at the same time, but one would expect the two measurements to differ more if they were taken further apart in time. This difference can also be modeled as a normal random variable:

$$ v_i n_i - v_{i'} n_{i'} \sim N(0, T(\tau_{i,i'})), $$

where $T(\tau_{i,i'})$ is a structure function that statistically describes the difference in vertical TEC from one measurement to the other when the time difference between the two measurements is $\tau_{i,i'} = |t_i - t_{i'}|$. In this work, we use $T(\tau) = 20\tau$ in units of TECu per hour. This makes the assumption that the standard deviation of the difference of two vertical TEC measurements grows at the rate of 20 TEC units for each hour. Again, an improved version of this time structure function could also be obtained by estimating it from data, but this is the subject of a future study.

There are modeling errors caused by our assumption that we can scale a slanted line integral to a vertical line integral. First of all, this assumption does not correctly take into account that the slanted path cuts through different latitudes and longitudes and thus averages vertical TEC over a geographic area. In addition, our mapping function assumes an altitude profile for the ionosphere that is hopefully close to reality, but never perfect. The ionosphere can have several local electron density maxima and can have horizontal structure in the form of, e.g., traveling ionospheric disturbances, or typical ionospheric phenomena such as the Appleton anomaly at the Equator or the ionospheric trough at high latitudes. In addition, GNSS receivers often have difficulty with low-elevation measurements arising from near-field multi-path propagation, which is different for the two frequencies. These errors can in some cases severely affect vertical TEC estimation and thus also bias estimation.
To first order, the errors caused by the inadequacies of the model assumptions or anomalous near-field propagation increase proportionally to the zenith angle. It is useful to include this modeling error in the equations as yet another random variable. We have done this by assuming the elevation-angle-dependent errors to be a random variable of the following form:

$$ v_i n_i - v_{i'} n_{i'} \sim N(0, E(\alpha_i) + E(\alpha_{i'})). $$

Here $E(\alpha_i)$ is the structure function that indicates the modeling error variance as a function of elevation angle. In this work, we use a structure function where the variance grows rapidly as the elevation angle approaches the horizon:

$$ E(\alpha_i) = 20 (\cos \alpha_i)^4. $$

This form penalizes lower elevations more heavily. The structure function that takes into account vertical TEC scaling errors and receiver issues at low elevations can also be determined from vertical TEC estimates, e.g., by forming a histogram of coincident measurements of vertical TEC:

$$ E(\alpha) \approx \left\langle \left| v_i n_i - v_{i'} n_{i'} \right|^2 \right\rangle \quad \text{for all } i, i' \text{ with } |x_i - x_{i'}| < \epsilon_d \text{ and } |\alpha_{i'} - \alpha| < \epsilon_\alpha. $$

Here $\epsilon_d$ determines the threshold for the distance between pierce points that we consider coincidental, and $\epsilon_\alpha$ determines the resolution of the histogram on the $\alpha$ axis. The angle brackets $\langle \cdot \rangle$ denote a sample average operator.

If we assume that all random variables in the structure functions of the previous sections are independent, we can simply add them together to obtain the full structure function $S_{i,i'}$.

The differences can be expressed in matrix form as

$$ m = Ax + \xi, $$

with the measurement vector containing the differences between vertically scaled measurements,

$$ m = \left( \cdots, v_i m_i - v_{i'} m_{i'}, \cdots \right)^T, $$

and the unknown vector $x$ containing the receiver and satellite biases,

$$ x = \left( b_0, \cdots, b_N, c_0, \cdots, c_M \right)^T. $$

For $x$, $N$ indicates the number of receivers and $M$ indicates the number of satellites. The random variable vector $\xi \sim N(0, \Sigma)$ has a diagonal covariance matrix defined by the structure function of each measurement pair used to form differences:

$$ \Sigma = \mathrm{diag}\left( S_{i,i'}, \cdots \right). $$
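Using the example forms above, the combined structure function might be sketched as follows. The paper's wording leaves the variance-versus-standard-deviation convention for $D$ and $T$ slightly ambiguous; here the spatial and temporal rates are read as standard-deviation growth rates and squared, which is an assumption.

```python
import numpy as np

def structure_function(d_km, tau_h, elev_i_deg, elev_ip_deg):
    """Variance S_{i,i'} of the difference of two mapped VTEC measurements.
    Spatial: 0.5 TECu per 100 km; temporal: 20 TECu per hour (read here as
    standard deviations and squared); elevation: E(a) = 20 cos(a)^4.
    The error sources are assumed independent, so the variances add."""
    sd_spatial = 0.5 * d_km / 100.0
    sd_temporal = 20.0 * tau_h
    var_elev = (20.0 * np.cos(np.radians(elev_i_deg)) ** 4
                + 20.0 * np.cos(np.radians(elev_ip_deg)) ** 4)
    return sd_spatial**2 + sd_temporal**2 + var_elev
```

Two simultaneous zenith measurements at the same pierce point then get variance near zero (maximum weight), while distant, time-separated, or low-elevation pairs are down-weighted.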
The theory matrix $A$ forms the forward model for the measurements as a linear function of the biases. This type of measurement is known as a linear statistical inverse problem, and it has a closed-form solution for the maximum-likelihood estimator of the unknown $x$, which in this case is a vector of receiver and satellite biases:

$$ \hat{x} = (A^T \Sigma^{-1} A)^{-1} A^T \Sigma^{-1} m. $$

This matrix equation is often not practical to compute directly due to the typically large number of rows in $A$. However, because the matrix $A$ is very sparse, the solution can be obtained using sparse linear least-squares solvers. In this work, we use the LSQR package for minimizing $|\tilde{A}x - \tilde{m}|^2$, where $\tilde{A}$ and $\tilde{m}$ are scaled versions of the matrix $A$ and vector $m$. Each row of $A$ and $m$ is scaled with the square root of the variance of the associated measurement $S_{i,i'}$ in order to whiten the noise. In practice, this performs a linear transformation with a matrix $P$ that projects the linear system into a space where the covariance matrix is an identity matrix: $P^T \Sigma P = I$.

When a maximum-likelihood solution has been obtained, a useful diagnostic examines the residuals $r = |\tilde{A}\hat{x} - \tilde{m}|$. If the residuals are larger than a certain threshold, the corresponding measurements can be determined not to fit the model consistently, i.e., to be outliers. Outliers can be caused by several different mechanisms. They can be of ionospheric origin, where vertical TEC gradients are sharper than our structure function expects them to be. They can also simply be caused by a loss of lock in the receiver, which can result in a large erroneous jump in slanted TEC. These outlying measurements can be detected and removed by a statistical test, for example

$$ |\tilde{A}\hat{x} - \tilde{m}| > 4\sigma, $$

where $\sigma$ is the standard deviation of the residuals estimated with $\sigma = \mathrm{median}(|\tilde{A}\hat{x} - \tilde{m}|)$. After the removal of problematic measurements, another improved maximum-likelihood solution, one not contaminated by outliers, can be obtained.
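The whitened sparse solve with iterative $4\sigma$ outlier rejection can be sketched with SciPy's LSQR solver. The toy matrix, thresholds, and iteration count below are illustrative; a real run would build $A$ from thousands of scaled TEC differences.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import lsqr

def solve_biases(A, m, S, n_iter=3, k_sigma=4.0):
    """Minimize |A~ x - m~|^2 where rows are divided by sqrt(S_{i,i'})
    to whiten the noise, iteratively rejecting rows whose whitened
    residual exceeds k_sigma times the median residual."""
    keep = np.ones(A.shape[0], dtype=bool)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(S[keep])               # row whitening weights
        x = lsqr(diags(w) @ A[keep], w * m[keep])[0]
        r = np.abs(A @ x - m) / np.sqrt(S)       # whitened residuals, all rows
        keep = r < k_sigma * np.median(r[keep])  # outlier test
    return x, keep

# toy problem: five differences of two receiver biases, one bad measurement
A = csr_matrix(np.array([[1.0, -1.0]] * 5))
m = np.array([3.0, 3.1, 2.9, 3.0, 15.0])  # last row is an outlier
S = np.ones(5)
x, keep = solve_biases(A, m, S)
# x[0] - x[1] converges to about 3.0 and the outlier row is rejected
```

Only the bias difference is observable in this toy problem, so LSQR returns the minimum-norm solution; in a full network, overlapping satellite and receiver pairs tie the unknowns together.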
The procedure for outlier removal can be repeated over several iterations to ensure that no problematic data are used for bias estimation.

Figure: Bias estimation using time differences of measurements obtained with a single receiver. The top panel shows the residuals of the maximum-likelihood fit to the data. The points shown in red are automatically determined to be outliers and are not used for determining the receiver bias; these mostly occur during daytime at low elevations. The center panel shows vertical TEC estimated with the original MAPGPS receiver bias determination algorithm, while the bottom panel shows vertical TEC measurements obtained using only time differences with the new method described in this paper, assuming constant receiver bias and known satellite bias. The VTEC results do not differ significantly.

The previous section described the general method for estimating bias by using differences of slanted TEC measurements scaled by the mapping function. However, in practice this general form rarely needs to be used. In the following sections we describe several important and practical special cases, including known satellite bias, single receiver bias estimation, and multiple biases for each receiver.

If the satellite bias is known a priori to a good accuracy, then it can be subtracted from the measurements, and the difference equation reduces to

$$ v_i m_i - v_{i'} m_{i'} = v_i n_i - v_{i'} n_{i'} + v_i b_{j(i)} - v_{i'} b_{j(i')} + v_i \xi_i - v_{i'} \xi_{i'}. $$

This form results in the same linear measurement equations, except that the satellite biases are not unknown parameters. In this case, the theory matrix will have at most two nonzero elements per row.
For GPS receivers, satellite biases are known to a good accuracy using a separate and comprehensive analysis technique, and therefore this special case is appropriate for bias determination for GPS receivers.

If the satellite bias is known a priori and there is furthermore only one receiver, then the matrix has only one column with the unknown bias for the receiver. This still results in an overdetermined problem that can be solved. The solution of this special case mathematically resembles a known analysis procedure that is often referred to as "scalloping" (P. Doherty, personal communication, 2003). This latter technique depends on the assumption that the concave or convex shape of all zenith TEC estimates collected by a single receiver observed over a 24 h period should be minimized. The same goal is obtained when time differences are minimized. The main difference in this work is that the statistical framework uses a structure function that weights differences of measurements based on the time between the measurements, the elevation angle, and the pierce point distance.

The figure above shows an example receiver bias determined using only data from a single receiver. In this case, time differences with $\tau_{i,i'}$ less than 2 h were used, in order to keep the number of measurements manageable. We also used differences of measurements between different satellites. A comparison with measurements obtained with the standard MAPGPS algorithm shows quite similar results between the two techniques.

There are several reasons for considering the use of multiple biases for the same satellite and receiver. This special case can also be handled by the same framework. If there is a loss of phase lock on a receiver, this might result in a discontinuity in the relative time-of-flight measurement, which appears as a discrete jump in the slanted TEC curve.
Rather than attempting to realign the curve by assuming continuity, it is possible, using our framework, to simply assign an independent bias parameter to each continuous part of a TEC curve. As long as there are enough overlapping measurements, the biases can be estimated.

For GNSS implementations other than GPS, it is possible that satellite biases are not known or cannot be treated as a single satellite bias. For example, the GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema) network uses a different frequency for each satellite, which means that any relative time delays between frequencies caused by the receiver or transmitter hardware will most likely be different for each satellite–receiver pair. Because of this, it is natural to combine the satellite bias and receiver bias into a combined bias, which is unique for each satellite–receiver pair.

Receiver biases are also known to depend on temperature, because the dispersive properties of different parts of the receiver can change as a function of temperature. If an independent bias term is assigned to, e.g., each satellite pass, this also allows temperature-dependent effects to be accounted for, as a single satellite pass lasts only part of the day.

Multiple bias terms can be added to the model in a straightforward manner using the same difference equation as in the known-satellite-bias special case. Here, $b_{j(i)}$ can be interpreted as an unknown relative bias term that can vary from one continuous slanted TEC curve to another. The meaning of $j(i)$ in this case is different: it is a function that assigns bias terms to measurements $i$. Each receiver does not necessarily need to have one unknown bias parameter; it can have many.

An example of a measurement where the same satellite is observed using a single receiver is shown in the corresponding figure. In this case, the satellite is first measured in the morning, and during the pass there is a discontinuity in the TEC curve, most likely due to loss of lock.
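Assigning an independent bias index $j(i)$ to each continuous arc can be sketched by starting a new arc at every data gap or discrete jump. The gap and jump thresholds below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def assign_arc_biases(t_s, stec, max_gap_s=300.0, max_jump_tecu=5.0):
    """Return an index j[i] mapping each slant TEC sample to a bias term.
    A new bias term starts whenever the time series has a gap or the TEC
    curve jumps discontinuously (e.g., after a loss of phase lock)."""
    j = np.zeros(len(stec), dtype=int)
    for i in range(1, len(stec)):
        new_arc = (t_s[i] - t_s[i - 1] > max_gap_s
                   or abs(stec[i] - stec[i - 1]) > max_jump_tecu)
        j[i] = j[i - 1] + int(new_arc)
    return j

# a jump mid-pass and a long gap each start a new bias term
t = np.array([0.0, 30.0, 60.0, 90.0, 7200.0, 7230.0])
stec = np.array([10.0, 10.2, 10.1, 22.0, 12.0, 12.1])
arcs = assign_arc_biases(t, stec)  # -> [0, 0, 0, 1, 2, 2]
```

Each distinct index then gets its own column in the theory matrix, exactly as a separate receiver bias would.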
We give the measurements before ∼05:00 UTC and from ∼05:00 to 06:00 UTC independent bias terms $b_0$ and $b_1$. The same satellite is seen again in the evening at 19:00 UTC, and we again assign a new bias term to it: $b_2$.

Figure: An example of a measurement of a single satellite collected by a single receiver. A loss of phase lock occurs during the first pass of the satellite, resulting in two receiver biases for that pass ($b_0$: blue curve; $b_1$: green curve). During the next pass, a drift in the receiver bias could have occurred, so another receiver bias ($b_2$: red curve) is determined when the satellite is measured at the end of the day.

Another multiple-bias example displays measurements from 19 neighboring receivers in China. A few of these receivers have discrete jumps in the slanted TEC curves that make it impossible to assume a constant receiver bias during the course of the entire day. This can be seen as a poor fit using the standard MIT Haystack MAPGPS algorithm. When multiple bias terms are introduced, the measurements from these stations can be recovered.

Figure: Vertical TEC with satellite bias estimated using the current version of the MIT Haystack Observatory MAPGPS algorithm (Rideout and Coster, 2006), shown above. Multiple receivers have problems with receiver stability, which makes the assumption of an unchanging receiver bias problematic and causes the receiver bias determination to fail. Vertical TEC with receiver biases obtained using the multiple-biases assumption is shown below; the new method produces a more consistent baseline. The red dots show the stations that are plotted. The algorithm uses all of the data from the 19 stations marked with orange and red dots; the stations marked with orange are used to assist in reconstruction by covering a larger geographic area.
In order to test the framework in practice for a large network of GPS receivers, we implemented the framework described in this paper as a new bias determination algorithm for the MIT Haystack MAPGPS software, which analyzes data from over 5000 receivers on a daily basis. We used the MAPGPS program to obtain slanted TEC estimates. Then, instead of using the MAPGPS routines for determining receiver biases, we used the new methods described in this paper. We label results obtained using the new bias determination algorithm WLLSID. When fitting for receiver bias, we assumed a fixed receiver bias for each station over 24 h. We also assumed a known satellite bias, which was removed from the slanted measurement. To keep the size of the matrix manageable, we selected sets of 11 neighboring receiver stations and considered each combination of measurements across receivers and satellites occurring within 5 min of each other as differences that went into the linear least-squares solution. For this comparison, we did not use time differences.

Figure: Probability density and cumulative density functions for 192,360 coincidences where vertical TEC was measured within the same 30 s time interval with pierce points less than 50 km apart from one another. The new method (labeled WLLSID) has significantly more <1 TEC unit differences than the old method.

Figure: Global TEC map produced using two different methods for the St. Patrick's Day storm on 17 March 2015. Top: a map produced with the MAPGPS method. Bottom: a map produced with the new WLLSID bias determination method.

To estimate the goodness of the new receiver bias determination, we compared the method with the existing MAPGPS algorithm for determining receiver bias, which utilizes a combination of scalloping, zero-TEC, and differential linear least-squares methods. At latitudes higher than 70°, the zero-TEC method is used. This method finds the value of bias such that the minimum value of TEC is 0.
At low latitudes and midlatitudes, scalloping is used first. Scalloping finds the bias by finding the optimally flat vertical TEC ±2 h around local noon. After finding the bias with either zero TEC or scalloping, the values are refined using the differential TEC method.

As a measurement of goodness, we used the absolute difference between two simultaneous geographically coincident measurements of vertical TEC, $|v_i n_i - v_{i'} n_{i'}|$. The two measurements were considered coincident if the distance between the pierce points was less than 50 km and the measurements occurred within 30 s of each other. We also required that the two measurements were not obtained using the same receiver. As a figure of merit, we used the mean value of the absolute differences:

$$ F = \frac{1}{N} \sum_{i \neq i'} |v_i n_i - v_{i'} n_{i'}|. $$

This figure of merit measures the self-consistency of the measurements, i.e., how well the vertical TEC measurements obtained with different receivers agree with one another. The smaller the value, the more consistent the vertical TEC measurements are. All in all, we found 192,360 such coincidences for the 5220 GPS receivers in the database over a 24 h period starting at midnight on 15 March 2015. Biases for the measurements were obtained with both the new and the existing bias determination methods (WLLSID and MAPGPS). The figure of merit for the existing MAPGPS method was 2.25 TEC units, while the WLLSID method had a figure of merit of 1.62 TEC units, which is about 30% better. The probability density function and cumulative density function estimates for the coincident vertical TEC differences show that the new method results in significantly more <1 TEC unit differences than the old method. It is also evident from the cumulative distribution function that both methods result in some coincidences that are in large disagreement with each other.
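The self-consistency figure of merit can be sketched by pairing coincident pierce points and averaging the absolute VTEC differences. The brute-force pairing and the example data below are illustrative; a production version would use a spatial index rather than an $O(n^2)$ loop.

```python
import numpy as np

def figure_of_merit(x_km, t_s, rx, vtec, max_d_km=50.0, max_dt_s=30.0):
    """F = mean |v_i n_i - v_i' n_i'| over coincident pairs: pierce points
    < max_d_km apart, < max_dt_s apart in time, different receivers."""
    diffs = []
    for i in range(len(vtec)):
        for k in range(i + 1, len(vtec)):
            if (rx[i] != rx[k]
                    and abs(t_s[i] - t_s[k]) <= max_dt_s
                    and np.hypot(*(x_km[i] - x_km[k])) <= max_d_km):
                diffs.append(abs(vtec[i] - vtec[k]))
    return float(np.mean(diffs)) if diffs else np.nan

# two coincident receivers agreeing to 1 TECu; a far-away station is ignored
x = np.array([[0.0, 0.0], [10.0, 0.0], [500.0, 0.0]])
t = np.array([0.0, 10.0, 5.0])
rx = np.array([0, 1, 2])
vtec = np.array([20.0, 21.0, 40.0])
F = figure_of_merit(x, t, rx, vtec)  # only the first pair counts -> F = 1.0
```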
The result occurs at least in part due to our inclusion of elevations down to 10∘ in the comparison, and it is therefore expected that some low-elevation measurements will be significantly different from one another. We also investigated receiver bias variation from day to day. We arbitrarily selected two consecutive quiet days: days 140 and 141 of 2015. We calculated the sample mean day-to-day change in receiver bias across all receivers: $\delta_b = \frac{1}{N}\sum_{i=1}^{N} \left( b_{i,140} - b_{i,141} \right)$, where $N$ is the number of receivers. In addition to this, we calculated the standard deviation using the sample variance: $\sigma_b = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N} \left( (b_{i,140} - b_{i,141}) - \delta_b \right)^2}$. For the MAPGPS method, we found overall that δb=-0.2±0.05(2σ) TEC units and σb=1.6 TEC units. With the new WLLSID method, we found that δb=0.02±0.05(2σ) TEC units and σb=1.3 TEC units. This indicates not only that the day-to-day variability is slightly smaller with the new method but also that the old method has a statistically significant nonzero mean day-to-day change in receiver bias, which is not seen with the new method. When the data are broken down into high and equatorial latitudes, the result is similar. In order to qualitatively compare the MAPGPS bias determination method with the WLLSID method, we produced a global TEC map with the WLLSID method and the existing MAPGPS bias determination method. The processing involved with making these TEC maps is described by Rideout and Coster (2006). To highlight the differences between the two methods, we chose a geomagnetic storm day (17 March 2015), where we would expect large gradients and more issues with data quality. Because of this, the bias determination problem is more challenging than on a geomagnetically quiet day. The two maps are shown in Fig. . The two TEC maps show no major differences in broad general features, which is to be expected. The main difference between the two images is that there are visibly fewer outliers produced by the new method.
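The day-to-day statistics above amount to a sample mean and sample standard deviation of the per-receiver bias change; a minimal sketch (function name and toy data are mine):

```python
from math import sqrt

def day_to_day_bias_stats(bias_day1, bias_day2):
    """Sample mean (delta_b) and standard deviation (sigma_b) of the
    day-to-day receiver bias change.

    bias_day1 / bias_day2: dicts mapping receiver id -> bias (TEC units)
    on two consecutive days; only receivers present on both days are used.
    """
    common = sorted(set(bias_day1) & set(bias_day2))
    n = len(common)
    deltas = [bias_day1[r] - bias_day2[r] for r in common]
    mean = sum(deltas) / n                                 # delta_b
    var = sum((d - mean) ** 2 for d in deltas) / (n - 1)   # sample variance
    return mean, sqrt(var)                                 # (delta_b, sigma_b)
```

A statistically significant nonzero mean, as the paper reports for the old method, would show up here as a `delta_b` whose magnitude exceeds its 2-sigma uncertainty.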
For example, the Asian and European sectors are significantly smoother with the new method. The ionospheric trough associated with the subauroral ion drift (SAID) stretching from Asia to northern Europe is much more clearly seen with the new method. Probably due to the strong gradients associated with the storm, the old method fails to derive good bias values for a large number of receivers in China, resulting in negative TEC values, which are not plotted. The new method finds these values of bias more reliably. The polar regions have slightly more TEC when using WLLSID. This is because MAPGPS uses the zero-TEC method for receiver bias determination at high latitudes, whereas the WLLSID method is applied in the same way everywhere. In this paper, we describe a statistical framework for estimating the bias of GNSS receivers by examining differences between measurements. We show that the framework results in a linear model, which can be solved using linear least squares. We describe a way that the method can be efficiently implemented using a sparse matrix solver with a very low memory footprint, which is necessary when estimating receiver biases for extremely large networks of GNSS receivers. We compare our method for bias determination with the existing MIT Haystack MAPGPS method and find that the new method results in smaller day-to-day variability in receiver bias, as well as a more self-consistent vertical TEC map. Qualitatively, the new method reproduces the same general features as the existing MAPGPS method that we compared with, but it is generally less noisy and contains fewer outliers. The weighting of the measurement differences is done using a structure function. We outline a few ways to do this, but these are not guaranteed to be the best ones.
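The linear least-squares structure of the difference equations can be illustrated with a toy three-receiver system. The numbers here are invented; in practice the matrix is large and sparse and would be solved with an iterative sparse solver such as LSQR (Paige and Saunders, cited in the references) rather than a dense `lstsq`, and each row would carry a weight from the structure function.

```python
import numpy as np

# Toy difference system: each coincident pair of measurements from receivers
# i and j contributes one equation  b_i - b_j ≈ d_k.  Differences alone only
# determine the biases up to a common additive constant, so for this toy
# example we pin receiver 0's bias to zero as a gauge constraint.
pairs = [(0, 1, -1.0), (1, 2, -1.0), (0, 2, -2.0)]  # (i, j, observed b_i - b_j)
n_rx = 3

rows, rhs = [], []
for i, j, d in pairs:
    r = np.zeros(n_rx)
    r[i], r[j] = 1.0, -1.0   # +1/-1 sparsity pattern of a difference row
    rows.append(r)
    rhs.append(d)

g = np.zeros(n_rx)
g[0] = 1.0                   # gauge row: b_0 = 0
rows.append(g)
rhs.append(0.0)

A = np.vstack(rows)
b_hat, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
print(b_hat)                 # ≈ [0., 1., 2.]
```

The +1/−1 rows are what make the full problem extremely sparse: each of the millions of difference equations touches only a handful of the thousands of unknown bias terms.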
Future improvements to the method can be obtained by coming up with a better structure function, which can possibly be determined from the data themselves, e.g., using histograms, empirical orthogonal function analysis, or similar methods. While we describe how differences result in a linear model, we do not explore in depth the possible ways in which differences can be formed between measurements. Because of the large number of measurements, not all of the possible differences can be included in the model. In this study, we only explored two types of differences: (1) differences between geographically separated, temporally simultaneous measurements obtained with tens of receivers located near each other and (2) differences in time of less than 2h performed with a single receiver. There are countless other possibilities, and it is a topic of future work to explore what differences to include to obtain better results. We describe several important special cases of the method: known satellite bias, single receiver and known satellite bias, and the case of multiple bias terms per receiver. The first two are applicable for GPS receivers, and the last one is applicable to GLONASS measurements, as well as measurements where a loss of satellite signal has caused a step-like error in the TEC curve. GPS TEC analysis and the Madrigal distributed database system are supported at MIT Haystack Observatory by the activities of the Atmospheric Sciences Group, including National Science Foundation grants AGS-1242204 and AGS-1025467 to the Massachusetts Institute of Technology. Vertical TEC measurements using the standard MAPGPS algorithm are provided free of charge to the scientific community through the Madrigal system at http://madrigal.haystack.mit.edu. Edited by: M. Portabella Bust, G. S. and Mitchell, C. N.: History, current state, and future directions of ionospheric imaging, Rev. Geophys., 46, 1–23, 2008. Carrano, C. S.
and Groves, K.: The GPS Segment of the AFRL-SCINDA Global Network and the Challenges of Real-Time TEC Estimation in the Equatorial Ionosphere, Proceedings of the 2006 National Technical Meeting of The Institute of Navigation, Monterey, CA, 2006. Coster, A., Williams, J., Weatherwax, A., Rideout, W., and Herne, D.: Accuracy of GPS total electron content: GPS receiver bias temperature dependence, Radio Sci., 48, 190–196, 10.1002/rds.20011, 2013. Coster, A. J., Gaposchkin, E. M., and Thornton, L. E.: Real-time ionospheric monitoring system using the GPS, MIT Lincoln Laboratory, Technical Report, 954, 1992. Davies, K.: Ionospheric Radio Propagation, National Bureau of Standards, 278–279, 1965. Dyrud, L., Jovancevic, A., Brown, A., Wilson, D., and Ganguly, S.: Ionospheric measurement with GPS: Receiver techniques and methods, Radio Sci., 43, RS6002, 10.1029/2007RS003770, 2008. Feltens, J.: Chapman profile approach for 3-D global TEC representation, IGS Presentation, 1998. Gaposchkin, E. M. and Coster, A. J.: GPS L1-L2 Bias Determination, Lincoln Laboratory Technical Report, 971, 1993. Kaipio, J. and Somersalo, E.: Statistical and Computational Inverse Problems, Springer, New York, USA, 2005. Komjathy, A., Sparks, L., Wilson, B. D., and Mannucci, A. J.: Automated daily processing of more than 1000 ground-based GPS receivers for studying intense ionospheric storms, Radio Sci., 40, RS6006, 10.1029/2005RS003279, 2005. Mannucci, A. J., Wilson, B. D., Yuan, D. N., Ho, C. H., Lindqwister, U. J., and Runge, T. F.: A global mapping technique for GPS-derived ionospheric total electron content measurements, Radio Sci., 33, 565–582, 1998. Paige, C. C. and Saunders, M. A.: LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Transactions on Mathematical Software (TOMS), 8, 43–71, 1982. Rideout, W. and Coster, A.: Automated GPS processing for global total electron content data, GPS Solutions, 10, 219–228, 2006. Vierinen, J., Norberg, J., Lehtinen, M.
S., Amm, O., Roininen, L., Väänänen, A., and McKay-Bukowski, D. J.: Beacon satellite receiver for ionospheric tomography, Radio Sci., 49, 1141–1152, 2014.
Universal Gravitation | Quizalize
• Q1 What is the force of attraction between a sample of matter and all other matter in the universe? strong nuclear weak nuclear
• Q2 Mrs. Helmer stands on the top of a ladder and drops a ping pong ball and a tennis ball at the same time. Which will hit the ground first? ping pong ball tennis ball They will both hit the ground at the same time.
• Q3 Is there gravity in space? We don't have enough information to know.
• Q4 Match the value with the correct unit.
• Q5 What value correctly labels the x-axis?
• Q6 What value correctly labels the x-axis?
• Q7 What is the relationship between force and distance? When force decreases, distance decreases. When force increases, distance increases. When force increases, distance decreases.
• Q8 What is the relationship between force and mass? When force increases, mass increases. When force increases, mass decreases. When force decreases, mass increases.
• Q9 Suppose that two objects exert a gravitational force of 30 N on one another. What would that force be if the mass of one of the objects were doubled?
• Q10 Two objects exert a gravitational force of 7 N on one another when they are 20 m apart. What would that force be if the distance between the two objects were reduced to 10 m?
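Q9 and Q10 apply Newton's law of universal gravitation, F = G·m1·m2/r²: force scales linearly with either mass and with the inverse square of the separation. A quick check (the function name is mine):

```python
def scaled_force(f0, mass_factor=1.0, distance_factor=1.0):
    """Rescale a gravitational force F = G*m1*m2 / r^2.

    Scaling one mass by mass_factor scales F linearly; scaling the
    separation by distance_factor scales F by 1/distance_factor**2.
    """
    return f0 * mass_factor / distance_factor ** 2

print(scaled_force(30, mass_factor=2))       # Q9: doubling one mass -> 60.0 N
print(scaled_force(7, distance_factor=0.5))  # Q10: halving the distance -> 28.0 N
```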
Perimeter of a Square Worksheets Explore this set of printable worksheets centered around finding the perimeter of a square, especially appropriate for children of grade 2 through grade 5. Find ample PDFs to practice finding the perimeter of squares with integer, decimal and fraction dimensions. Learn to find the diagonal and the side length using the perimeter and compute the perimeter using the diagonal and solve word problems as well. Commence your practice with our free worksheets. Perimeter of a Square | Integers - Type 1 Introduce 2nd grade and 3rd grade children to the concept of finding the perimeter of a square using the formula P = 4a, where 'a' represents the side of the square. Included are integers ≤ 20 in Level 1, while Level 2 offers integers ≥ 10. Perimeter of a Square | Integers - Type 2 Bolster practice in finding the perimeter of a square with the type 2 worksheets offering problems as geometric shapes and in word format; and categorized into two levels based on the range of numbers used. Perimeter of a Square | Decimals - Type 1 The side length of each square is offered as decimals. Multiply the length by 4 to compute the perimeter of each square featured here as geometric shapes with side measures. Perimeter of a Square | Decimals - Type 2 If a side of a square is 2.1 units in length, what is its perimeter? Yes, it’s 8.4 units. Work out this set of perimeter of squares pdf worksheets with a threefold practice and answer such questions in no time! Perimeter of a Square | Fractions - Type 1 The dimensions of the squares in this set of printable worksheets for grade 4 and grade 5 are presented as fractions.
Assign the side measure in the formula and swiftly compute the perimeter of a square. Perimeter of a Square | Fractions - Type 2 With word-format questions to visualize the shape and illustrated figures to recap the process, this set helps ease into finding the perimeter of squares with fractional and mixed-number side measures. Find the Side Length using the Perimeter Reinforce skills in finding the side length of the square using the perimeter. Divide the perimeter by 4 to solve for the side length in this batch of grade 3 pdf worksheets featuring the formula and a solved example. Diagonal using Perimeter and vice versa Substitute the measure of the diagonal in the formula P = 2√2 * d to compute the perimeter of the square. Replace P in the formula d = √2 * P/4 with the given value of the perimeter and solve for the diagonal length. Perimeter of Squares | Word Problems Apply the concept of finding the perimeter of a square in real-life situations, employing this batch of perimeter of a square word problems for 4th grade and 5th grade children. Use appropriate formulas to solve the problems featured here.
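The formulas these worksheets rely on (P = 4a; the diagonal of a square is d = a√2, hence P = 2√2·d and d = √2·P/4) can be collected in a short sketch (function names are mine):

```python
from math import sqrt

def perimeter_from_side(a):
    """P = 4a."""
    return 4 * a

def side_from_perimeter(p):
    """a = P / 4."""
    return p / 4

def perimeter_from_diagonal(d):
    """Diagonal of a square is a*sqrt(2), so P = 4*d/sqrt(2) = 2*sqrt(2)*d."""
    return 2 * sqrt(2) * d

def diagonal_from_perimeter(p):
    """d = sqrt(2) * P / 4."""
    return sqrt(2) * p / 4
```

For the Decimals - Type 2 example above: a side of 2.1 units gives a perimeter of 4 × 2.1 = 8.4 units.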
Networks of curves evolving by curvature in the plane The network flow is the evolution of a regular network of embedded curves under curve shortening flow in the plane, where it is allowed that at triple points three curves meet under a 120 degree condition. A network is called non-regular if at multiple points more than three embedded curves can meet, without any angle condition but with distinct unit tangents. Studying the singularity formation under the flow of regular networks one expects that at the first singular time a non-regular network forms. In this course we will present recent work together with Tom Ilmanen and Andre Neves, showing that starting from any non-regular initial network there exists a flow of regular networks. The lectures will cover the following material: 1) Short-time existence and higher interior estimates (based on work of Mantegazza, Novaga and Tortorelli). 2) Singularity formation, generalised self-similar shrinking networks and local regularity. 3) Self-similarly expanding networks and their dynamical stability. 4) Desingularising non-regular initial networks and short-time existence. 5) Towards an evolution through singularities.
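For reference, the evolution law underlying the course can be stated in standard form (this formulation is standard in the literature on the network flow, not quoted from the abstract): each curve $\gamma_i$ of the network moves by curve shortening flow, and at triple junctions the three unit tangents balance, which is equivalent to the 120 degree condition.

```latex
\partial_t \gamma_i(u,t) = \kappa_i(u,t)\,\nu_i(u,t),
\qquad
\tau_1 + \tau_2 + \tau_3 = 0 \quad \text{at each triple junction},
```

where $\kappa_i$ is the curvature and $\nu_i$ the unit normal of the $i$-th curve, and $\tau_1, \tau_2, \tau_3$ are the unit tangents of the three concurring curves.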
Maths with fruit – Science in School Have fun with fruit while helping your students to explore the concepts of area and volume, and learn more about their real-world applications. It’s often easy to estimate the area of a flat surface, but most things in the world aren’t flat. The activities described in this article give students the opportunity to think about areas and volume in terms of some everyday, irregular-shaped items: fruit and vegetables. The materials in these activities are easy to find and also very familiar, thus making the link between mathematical equations and situations encountered in everyday life. By using similar-shaped fruit and vegetables, we can show how linear dimensions, areas and volumes change in different ways as we scale up the structures. The activities, which were presented at the Science on Stage Festival 2019, are very suitable for students aged 11–14 as an extension and application of the mathematics of geometric figures to everyday life. Activities 1 and 2 can also be used to introduce the topics of area and volume to younger pupils in a very concrete way. Students can work in pairs for the activities, which can be completed in two one-hour classes. Activity 1: Measuring the area of your hand In this simple activity, students use squared paper to find the approximate surface area of an irregular object – the palm of their own hand. Each pair of students will need: • Some sheets of paper marked in 1 cm squares • Two marker pens of different colours 1. Start by asking students to guess the surface area in cm^2 (or number of squares) and record their guess. At the end of the activity, they can check back and see how close they were. 2. Ask students to place one hand on the graph paper, and use the other hand to draw the outline around it using one of the pens (figure 1). Figure 1: Drawing around the hand Image courtesy of Maria Teresa Gallo 3.
To find the area, each student should first count only the whole squares completely inside the outline. 4. Then they count all the squares that are wholly or only partly inside the outline. They can use a different pen colour to draw the outlines of the different counting methods (figure 2). 5. Find the average between the two counted areas (by adding the two numbers together and dividing by 2). This number is quite close to the actual surface area of the palm. 6. Students can then compare this figure to the one they guessed at the start. Figure 2: The hand outline, with the whole-squares and part-squares outlines Image courtesy of Maria Teresa Gallo Use the following questions to review the students’ results: • Is there much difference between the guesses at the start of the activity and the final figures obtained for the surface area of the palm? • Which student made the best guess? You can have a little competition! Activity 2: Measuring the surface area and volume of fruit A clementine and an orange Image courtesy of Maria Teresa Gallo In this activity, students measure the diameter, surface area and volume of some more irregular objects: fruit and vegetables. This provides a basis for a more mathematical consideration of areas and volumes for older students in activity 3. Each pair of students will need the following materials: • Two easily peeled items of fruit or vegetables, with similar shapes but different sizes: one item should have a diameter approximately double that of the other (e.g. an orange and a clementine, a large and a small apple, or large and small potatoes of a similar shape) • Peeler • Some sheets of 1 cm squared paper • Marker pen • Tall plastic container • Water • Measuring jug • Calliper (or a ruler) • Optional: transparent food-wrap film (to cover the squared paper and keep it dry during the activities) Students should follow all the steps below.
It’s best to start with finding diameters and volumes, as these can be measured with whole fruit, whereas the fruit need to be peeled to find the surface area, which reduces their volume. Safety note Warn students not to eat the fruit at the end of the activity, as it has not been prepared hygienically. You can provide a little bit of clean fruit to eat before or after the activity. To find the diameters: 1. Measure the diameter of each item of fruit or vegetable using callipers (figure 3) and make a note. 2. If you have only a ruler, place the item between two cuboid-shaped objects (such as paperback books, tissue boxes or shoe boxes) arranged so that they are parallel and touching the item, then measure the distance between the objects. Figure 3: Measuring the diameter of a clementine Image courtesy of Maria Teresa Gallo To find the volumes: 1. Before peeling the fruit or vegetables, put one item in a tall container. 2. Add enough water to cover the item, pushing it down with a fork (or pen) so that it is completely covered with water. Mark the water level reached (figure 4, left). 3. Remove the fruit, and mark the lower water level (figure 4, right). 4. Then put some water in the measuring jug and note the volume. Pour water from the measuring jug into the tall container to reach the higher level (with the fruit immersed) previously marked, and note the new, reduced volume in the measuring jug. 5. To find the volume of the fruit, subtract the second measuring jug volume from the first volume. This is the amount of water added to replace the volume of the fruit. 6. Do the same thing with the other (smaller or larger) fruit or vegetable item, recording the volume for each item. Figure 4: Marking containers to measure fruit volumes Image courtesy of Maria Teresa Gallo To find the surface area: 1. Peel the larger fruit or vegetable item very carefully, to obtain strips that are as long and as wide as possible. 2.
Place all the strips of peel on the squared paper, placing the edges of the pieces as close as possible together to avoid empty spaces. 3. Using a marker pen, draw a closed line around the shape created (figure 5). 4. Do the same thing with the smaller fruit or vegetable item. Be careful to keep all the peel pieces from one item separate from the other item. 5. Count the squares covered by the peel for each fruit or vegetable and record this number on the squared paper. This is the surface area, in cm^2. Figure 5: Finding the surface area of fruit by peeling and counting squares Image courtesy of Maria Teresa Gallo Record all your measurements in a simple table, similar to table 1.
Table 1: Measurements of the diameter, surface area and volume of fruit
│ Object │ Diameter (cm) │ Area (cm^2) │ Volume (cm^3) │
│ Clementine (small fruit) │ 4 │ 56 │ 50 │
│ Orange (large fruit) │ 8 │ 220 │ 380 │
In this activity, students discover one way of measuring the surface areas of objects (such as fruit) that aren’t flat: peeling them. You can ask them to think of other ways of measuring areas, lengths and volumes of irregularly shaped objects. For example, they could make squares from masking tape and cover part of their body (e.g. their forearm) with the squares, to measure its area. Can they think of other examples? Students can also begin to understand how areas and volumes change in relation to the length or diameter of the fruit. Ask them to consider: how do the surface area and volume of a fruit change when you double the diameter (or other linear dimensions such as length or width)? From their measurements (table 1), it should be clear that doubling the diameter of a fruit increases the surface area and volume by factors that are much greater than double, even though the measurements are approximate. Students will investigate this question further in the next activity.
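Since the clementine and orange are roughly spherical, the measured values in table 1 can be compared against the sphere formulas A = 4πr² and V = (4/3)πr³. This is a rough cross-check only: real fruit are not perfect spheres, and the activity's measurements are approximate.

```python
from math import pi

def sphere_area(d):
    """Surface area of a sphere of diameter d: 4*pi*r^2."""
    r = d / 2
    return 4 * pi * r ** 2

def sphere_volume(d):
    """Volume of a sphere of diameter d: (4/3)*pi*r^3."""
    r = d / 2
    return 4 / 3 * pi * r ** 3

# Measured values from table 1: (name, diameter, area, volume)
for name, d, area, vol in [("clementine", 4, 56, 50), ("orange", 8, 220, 380)]:
    print(f"{name}: sphere area ≈ {sphere_area(d):.0f} cm^2 (measured {area}),"
          f" sphere volume ≈ {sphere_volume(d):.0f} cm^3 (measured {vol})")
```

Doubling the diameter multiplies a sphere's surface area by 4 and its volume by 8, which is exactly the pattern activity 3 goes on to explore.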
Activity 3: Comparing the area and volume of fruit In this activity, the students discover how an object’s area and volume change when its linear measurements increase. Older students can also explore how the mathematical formulae for the surface area and volume of regular solids can be related to real, irregularly shaped objects such as fruit, if they are already familiar with these formulae (Part 2). • Sets of interlocking 1 cm unit cubes (e.g. Regoli®, MathLink Cubes®) • Graph paper • Some fruit of different shapes (e.g., orange, apple, pear) • For reference, mathematical formulae for surface areas of regular solids (cubes, spheres, cones, cylinders etc.) Figure 6: Cubes of increasing size made from unit cubes Image courtesy of Maria Teresa Gallo Part 1: Unit cube modelling 1. Students should follow the steps below. They can work individually or in groups, depending on how many sets of unit cubes are available. 2. Using the unit cubes, build several cubes with sides of 1, 2, 3 and 4 units. 3. Count the surface area of each cube, and record this and the side length in a table (table 2). 4. Work out the volume using the formula l^3 (l x l x l) where l is the side length, and enter this in the table. 5. Now check this result by taking the cube apart and counting the number of cubes.
Table 2: Linear, surface area and volume values for cubes of increasing size
│ Length of cube side (cm) │ Area of cube base (cm^2) │ Total surface area of cube (cm^2) │ Volume of cube (cm^3) │
│ 1 │ 1 │ 6 │ 1 │
│ 2 │ 4 │ 24 │ 8 │
│ 3 │ 9 │ 54 │ 27 │
│ 4 │ 16 │ 96 │ 64 │
6. Plot two graphs using the values in table 2: use the length of the cube side on the x axes, and the total surface area and volume on the y axes. What do you notice about how (i) surface area and (ii) volume increase in relation to the side length? Part 2: Fruit and regular solids calculations 1. Choose two fruit (e.g. orange and pear). Thinking about regular mathematical solids (e.g.
spheres, cylinders, cones), work out how to model the approximate shape of your fruit by using combinations of different regular solids (figure 7). 2. Use the formulae for these solids to estimate the surface area and volume of the fruit. You can use the method in activity 2 to find the length and/or diameter of your fruit. Figure 7: Approximating the irregular shape of fruit using regular solids Image courtesy of Maria Teresa Gallo The following questions can be used to help students reflect on what they have learned in the activities: • When the side length of a cube doubles, what happens to its surface area? • When the side length of a cube doubles, what happens to its volume? • Look back at your results from activity 2. Do you think the values you obtained approximately match these rules? • How would the surface area and volume of a cube increase if its side length trebled? • For cubes, what shape are the graphs of (i) surface area and (ii) volume against side length? • What are the mathematical formulae for these shapes? • How do you think surface area and volume increase with size for other shapes, e.g. spheres (in relation to their diameter) or cones (in relation to their base and height)? • How closely do your calculated values match the real values of surface area and volume for irregularly shaped fruit from activity 2? Looking at the table of values for cubes (table 2), students can verify that the area of one face of the cube (and the total surface area) grows according to the square of the side length (l^2), while the volume grows according to the cube of the side length (l^3), so volume grows faster than surface area in relation to increasing side length.
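The values in table 2 come straight from the formulas base area = l², total surface area = 6l², and volume = l³; a minimal sketch reproduces them and the scaling rules from the reflection questions:

```python
def cube_stats(l):
    """(base area, total surface area, volume) of a cube with side length l."""
    return l * l, 6 * l * l, l ** 3

# Reproduce table 2 for sides 1 to 4
for l in range(1, 5):
    base, surface, volume = cube_stats(l)
    print(f"side {l}: base {base}, surface {surface}, volume {volume}")

# Doubling the side multiplies the surface area by 4 and the volume by 8
assert cube_stats(4)[1] == 4 * cube_stats(2)[1]
assert cube_stats(4)[2] == 8 * cube_stats(2)[2]
```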
From the graphs, students can see that the plot for area against side length forms a parabola (quadratic relationship), while those for volume against side length form a cubic curve (cubic relationship). • Find more information on Science on Stage: https://www.science-on-stage.eu/ • Discover a way to measure the surface area of your skin, and the pressure on it: www.exploratorium.edu/snacks/skin-size Science on Stage Text released under the Creative Commons CC-BY license. Images: please see individual descriptions
How Hard is A-Level Maths Compared to GCSE Maths? Maths is widely acknowledged to be one of the hardest subjects you study at school. Each year, thousands of students struggle to get their heads around equations, graphs and algebra. However, maths is incredibly useful in a range of careers, and it is compulsory in England for all students to take this subject until they are at least 16. Many employers ask for students to have a GCSE Maths qualification, even if it isn’t directly related to the job. Beyond this, plenty of students choose to take A-Level Maths, which is a useful subject to have for many university courses. However, you may be wondering about the difference between these two qualifications. Having taken both GCSE and A-Level Maths, I know there are various challenges involved in both stages. Maths as a whole is known to be difficult, but how do the GCSE and A-Level qualifications compare? Of course, everyone will have a different opinion about how hard a certain subject is, as we all have different strengths. Have your say in the poll below – do you think GCSE or A-Level Maths is harder? Overall, it is generally thought that A-Level Maths is considerably harder than the GCSE qualification. There is more content to learn and harder concepts to get your head around. If you are considering taking A-Level Maths, you will likely have been warned about how much of a step up it is from GCSE. However, this definitely doesn’t mean GCSE Maths is easy. You are studying lots of other subjects alongside GCSE Maths, and it is probably the first set of important exams you study for. In fact, the pass rates are generally lower for GCSE than A-Level Maths. There are clearly lots of things to consider when talking about how hard GCSE and A-Level Maths are. Whether you are thinking of taking the A-Level, or are just curious, keep reading to find out more about how these two qualifications compare, in terms of content, lessons, exams and more.
Is GCSE Maths or A-Level Maths harder? Generally speaking, A-Level Maths is considered to be the harder qualification. There is more content to learn, and much of it involves difficult new concepts. The exams themselves are also longer, with questions that require you to apply your knowledge in more abstract ways than at GCSE level. However, this topic isn’t quite that black and white. For example, although the content is harder at A-Level, students are also older and have more mathematical experience. The transition from GCSE to A-Level Maths is arguably a similar step up in difficulty as when you first start learning GCSE content. Although GCSE Maths may be easier, it may not seem that way when you first start the course! Both GCSE and A-Level qualifications are designed to be harder than previous maths studies, so could be considered equally difficult. Additionally, at GCSE level, students have far more other subjects to focus on. Many students take 9 or 10 GCSEs, while it is rare to study more than 3 A-Levels. This can mean you have less time to learn and revise maths, making it more difficult. Check out this Think Student article to find out how many A-Levels you should take. What are the pass rates for GCSE and A-Level Maths? Perhaps surprisingly, the pass rates are actually lower for GCSE Maths than for the A-Level. Have a look at the table below for the exact statistics from 2022, taken from Ofqual’s website and the JCQ examination results page.
Subject | Pass rate (%) | % of A* (% of 8s & 9s for GCSE)
GCSE Maths | 64.9 | 11.5
A-Level Maths | 78.5 | 22.8
Most likely the main reason for this lower GCSE pass rate is that GCSE Maths is a compulsory subject. At A-Level, everyone there has chosen to do the subject. If a student has really struggled with it at GCSE and doesn’t intend to follow a career path that needs maths, they are very unlikely to take the A-Level.
This means that the people taking A-Level Maths are largely good at the subject, or need it for their next steps, in which case there is more incentive to work hard and achieve a pass grade. At GCSE, on the other hand, there is a wider range of abilities, as every student has to take it. Overall, this can lower the pass rate. Additionally, many schools and colleges have a minimum GCSE Maths grade requirement if you want to continue studying the subject at A-Level. Most will require a pass grade at GCSE (grade 4), but others will need you to have achieved at least a grade 6 or even higher. This is to ensure everyone taking it has the base level of understanding needed to succeed at A-Level. Although a good GCSE grade doesn’t necessarily guarantee you will pass the A-Level, it still increases the A-Level pass rate. What are the exam board pass rates for GCSE and A-Level Maths? Edexcel is by far the most popular exam board for GCSE and A-Level Maths qualifications. AQA is another relatively common exam board for this subject. While other boards offer these exams, such as OCR and WJEC, pass rates are shown below for the two main exam boards.
Exam board | GCSE pass rate (% with grade 4 or higher) | GCSE % of 8s & 9s | A-Level pass rate (% with grade C or higher) | A-Level % of A*
AQA Maths | 63.5 | 11.5 | 76.3 | 19.4
Edexcel Maths | 65.8 | 11.7 | 77.6 | 22.2
This data has been taken from official exam board reports on results statistics from summer 2022. To have a look at the full reports, click here for AQA GCSEs, here for AQA A-Levels, here for Edexcel GCSEs, and here for Edexcel A-Levels. As you can see from these statistics, Edexcel consistently has slightly higher pass rates than AQA. However, the difference is very small. This is because the content of these standardised exams has to be similar between exam boards. Usually, you do not get a choice as to which exam board you do. Your school or college should choose this for you.
This isn’t something to worry about – a particular exam board should not affect your chances of getting the grades you want.

What are the grade boundaries for GCSE and A-Level Maths?

Have a look at the table below to compare grade boundaries for GCSE and A-Level Maths from summer 2022, from the two most common exam boards, AQA and Edexcel.

Exam board   Subject         % needed to pass (C for A-Level, 4 for GCSE)   % needed for an A* (8/9 for GCSE)
AQA          GCSE Maths      21.3                                           77.1
AQA          A-Level Maths   35.3                                           73.3
Edexcel      GCSE Maths      15.8                                           68.8
Edexcel      A-Level Maths   32.0                                           72.3

For the source of these statistics and other grade boundaries, click here for AQA GCSEs, here for AQA A-Levels, here for Edexcel GCSEs, and here for Edexcel A-Levels. These numbers might look confusing, but essentially, Edexcel grade boundaries are slightly lower than AQA’s, and GCSE grade boundaries are lower than A-Level boundaries for a pass, but similar for the highest grades. Overall, this data should be used along with pass rates to get a sense of how difficult something is. Even if a grade boundary is low, it doesn’t mean it’s easier to pass – it could be that the test was very difficult, so the same proportion of people passed despite the lower boundaries.

How hard is the content for GCSE and A-Level Maths?

The content of maths exams is undoubtedly difficult. While A-Level content is naturally harder, and there is more of it, it is not necessarily as difficult a jump as it may sound. Although there is more content to cover at A-Level, you will also have far more lessons dedicated to this content, as well as fewer other subjects to focus on. Unfortunately, though, this doesn’t make the work any easier! It just means you have more time to spend to make sure you understand the concepts and apply them to questions. Actually, one of the main changes you will have to deal with at A-Level is this independent study time.
It is up to you to organise your time wisely, and get the necessary work done to get the grades you want. Have a look at this article from Think Student for plenty of tips about time management. With this in mind, the qualifications are designed to align with your level of study. A-Level content is harder than GCSE, to match the fact that you are an older student, with more mathematical knowledge, and more time dedicated to this qualification. Keep reading for more detail about how the content of GCSE and A-Level maths compares. What content is covered in GCSE Maths? The GCSE Maths specification can be split into 6 main topics. Although the content required may differ slightly by exam board, there is actually very little variation. As mentioned, this is because content for these national exams is mostly standardised, to make it fair for all students regardless of the exam board they do. These 6 topics are: 1. Number – for example, indices, fractions and prime numbers. 2. Algebra – for example, rearranging equations, straight line graphs and sequences. 3. Ratio, proportion and rates of change – for example, percentages and compound units. 4. Geometry and measures – for example, angle properties, circles and vectors. 5. Probability – for example, frequency trees and Venn diagrams. 6. Statistics – for example, bar charts and correlation. This is just a brief overview of the topics – for a more complete guide, have a look at this article from exampapersplus.co.uk, or check out the specific syllabus on your exam board’s website. Some of the content listed in GCSE specifications is knowledge that students already have, but there is a lot of new content as well. Most students notice an increase in difficulty when they move from Key Stage 3 content to the GCSE course. Not only is there new, harder content, but you have to get more familiar with things like exam technique and specific question styles. 
As these are likely the first major, national exams students prepare for, this adds another layer of difficulty to the qualification. What content is covered in A-Level Maths? As you may expect, there is far more content to cover at A-Level, and specifications split this into considerably more than 6 topics. For the full list, have a look at the official specifications – click here for AQA, and here for Edexcel. A lot of the topics build on existing knowledge, for example, you use and expand on the trigonometry knowledge you have from GCSE. However, this can make it difficult if you are not confident with GCSE content. As mentioned, many schools and colleges have a minimum grade requirement for GCSE Maths if you want to take A-Level. This makes sure your background knowledge is at a suitable level, so you are ready to build further on these topics. As well as this, new concepts and topics are introduced at A-Level, for example, calculus (involving differentiation and integration). These can be really tricky at first, and it can take a lot of time and hard work to get your head around the new ideas. One thing many people don’t realise is that there is also a step up in difficulty from Year 12 to Year 13, when you finish learning the AS content (year 1 of the course) and move on to A2 (year 2 of the course). However, this can be a good thing – the harder content is learnt when you are older, rather than near the start of the course. How are exams structured for GCSE and A-Level Maths? For both GCSE and A-Level Maths, you are working towards a set of final exams at the end of the course. The set up for these is similar at GCSE and A-Level, and the two main exam boards typically use the same structure, but it’s useful to be aware of the differences. At GCSE, you will sit three exams that make up your grade. Each of them can include questions about any part of the specification and lasts for an hour and a half. 
Check out this page of the AQA website for more information about exam structure. The first exam will be non-calculator, while for the other two, you will need a suitable calculator to be able to answer many of the questions. Have a look at this Think Student article for advice on getting the best calculator for GCSE Maths. Spoilers: the best is the Casio FX-991EX. A-Level Maths also involves three exams, but these are longer, at two hours each. This makes each exam more strenuous, as you have to focus for longer. However, you are able to use your calculator for all three exams. The best calculator for A-Level Maths is the Casio FX-991EX Advanced. Check out this Think Student article for more about calculators for A-Level Maths. The content is also organised slightly differently at A-Level. The main course is split into two sections: pure maths, and applied maths (mechanics and statistics). For AQA, one paper is all about pure maths, one is pure maths plus statistics, and one is pure maths plus mechanics. For more information, check out their website here. For Edexcel, two of the papers are just about pure maths, and all the applied maths content is tested on the third exam. For more on this, have a look at the specification here.

How different are GCSE and A-Level Maths exam question styles?

Another thing that increases in difficulty at A-Level is the styles of questions you get in the exam. All the exam skills that you have learned at GCSE will still be needed, but there are extra layers of difficulty. For example, the actual numbers involved in the question tend to be less easy to work with at A-Level. Be prepared for your answers to be surds or long fractions or decimals, rather than just neat whole numbers. Additionally, there are more individual questions worth a lot of marks. Rather than guiding you through each stage of a question, at A-Level, you are more likely to have to come up with the methods and steps to reach an answer yourself.
How independent is A-Level Maths compared to GCSE Maths?

A-Level Maths is much more independent than GCSE Maths. It requires a lot more solo study and hours of practice, and you have fewer available resources than at GCSE. But just how independent do you have to be? It’s recommended that you do an hour of revision for every hour and a half spent in class. This is so you can keep up with the intense content that the course provides. It also prepares you for university – where you must be almost completely independent with your studies. This is quite an increase in the amount of independence compared to GCSE: much of the time you would once have spent in class is instead spent studying on your own. The best way to make sure you’re taking in all the information is by testing yourself. Again, this means that the work you do will mostly be independent. You’re not completely on your own, though! Having said that the resource materials for A-Level are scarce compared to GCSE, the ones you do have are very useful. The amount of revision you need to do for your A-Levels compared to your GCSEs is much higher, but you still have the same kind of support, whereby you can talk to your peers for help. In conclusion, A-Level Maths is a lot more independent than GCSE Maths. Most of your learning comes through independent study, and so you can see that this differs from GCSE.

How should you revise for GCSE and A-Level Maths?

Hopefully, this article has given you a good idea of how GCSE and A-Level Maths compare, and the aspects of both qualifications which may make them difficult. Ultimately, maths has never been an easy subject – but there are lots of revision tips and exam advice to help you along the way. The main tip for maths in particular is to do plenty of practice questions. Of course, learning the content itself, including any formulas and methods you need to know, is an important first step. However, the bulk of your revision is likely to be practising questions.
This lets you put the mathematical skills you have been learning into context, as well as being closest to the real exam. Full practice papers are easily available on exam board websites. Additionally, many textbooks and free online resources have additional questions and revision guides, often organised by topic. Have a look at this Think Student article for some of the best websites for maths revision. If you are looking for general revision advice, have a look at this helpful guide. For more specific tips on acing your maths exams, check out this article for GCSE and this article for A-Level, all from Think Student. Which textbooks do you need for A-Level Maths? I don’t think I need to tell you that you can’t use your GCSE Maths textbooks, when studying A-Level Maths… With that being said, what A-Level Maths textbooks should you use? Firstly, there are two types of A-Level textbooks you should get if you want the best chances of doing well in A-Level Maths. These two types are: revision guides and classroom textbooks. Revision guides don’t really explain concepts in detail, they are more used for recapping content you already know. Hence the name, revision guides – they are used for revision, not for learning. When you start revising for A-Level Maths your revision guide will be your revision bible. Providing you get the right one, your revision guide can help you so much. So, which revision guide should you get for A-Level Maths? You have to make sure that your revision guide is written for your exam board! So below are my revision guide recommendations for each exam board: I recommend you get the ones above as they cover both Year 1 and Year 2 A-Level Maths content. In direct contrast, class textbooks do explain concepts in massive detail. Class textbooks are normally used for learning concepts that you didn’t understand in class. Therefore, they can also be extremely helpful during the 2nd year learning period for A-Level Maths. 
Once again, you must make sure that you get the right class textbook for your exam board. Below are three lists (for each exam board) and you need to get every textbook under your exam board list. Unless of course you are happy going without classroom textbooks, but they really do help. Classroom textbooks for Edexcel A-Level Maths: Classroom textbooks for AQA A-Level Maths: Disclaimer: If you are trying to spend as little as possible, the classroom textbooks are not essential, they’re just really helpful. So, if you are on a budget, just get the revision guides as they are the most helpful out of the two types of textbook by far. What other A-Levels pair well with A-Level Maths? So, you’ve decided that A-Level Maths is the right A-Level for you. Now the only problem is, what other A-Levels can you choose to accompany it? It mostly depends on what you want to do after college. For potential university students, it’s best to choose A-Levels that will get you onto the course you want. For example, an A-Level Maths student who wants to go into accounting might take maths, accounting, and business studies. A-Level Maths also pairs up quite well with the sciences – e.g., biology, chemistry, and physics. Taking maths with these other A-Levels works if you plan to work in an area of science after college. A-Level Maths also works with another science, too – Computer Science. These two A-Levels go well together if you want to go into programming and web development. Just make sure, as I’ve said before, that you choose A-Levels you’re comfortable with. A-Level Maths is one of the hardest A-Levels, and so it’s essential that you pick other A-Levels that will make your life easier. For a list of the hardest A-Levels check out this Think Student article. At the end of the day, it’s your choice on what A-Levels you pick. If you decide to pair it with drama and dance, or art and architecture, then that’s up to you! 
If you want to see an extensive list of great A-Level combinations, check out this Think Student article.

3 Comments

5 years ago: ‘This way, you know that all the content in the book is what you need to know – and nothing has been left out.’ You’d think so, but AQA’s textbook doesn’t cover parametric integration, which is also not mentioned in the syllabus on their website and yet is actually in the syllabus. I only noticed the difference because I was procrastinating on TSR. I emailed the maths department and it ended up saving them a lot of bother.

4 years ago: At first when I wrote my mathematics I got a U, and when I wrote again I got an A in mathematics and an A in accounts. Should I go and start A-Level Maths?

4 years ago: Looks like you have the A level and GCSE pass rates transposed.
Generic and $q$-Rational Representation Theory | EMS Press

Generic and $q$-Rational Representation Theory

• Edward Cline, University of Oklahoma, Norman, USA
• Brian Parshall, University of Virginia, Charlottesville, USA
• Leonard Scott, University of Virginia, Charlottesville, USA

Part I of this paper develops various general concepts in generic representation and cohomology theories. Roughly speaking, we provide a general theory of orders in non-semisimple algebras applicable to problems in the representation theory of finite and algebraic groups, and we formalize the notion of a “generic” property in representation theory. Part II makes new contributions to the non-describing representation theory of finite general linear groups. First, we present an explicit Morita equivalence connecting with the theory of $q$-Schur algebras, extending a unipotent block equivalence of Takeuchi [T]. Second, we apply this Morita equivalence to study the cohomology groups , when is an irreducible module in non-describing characteristic. The generic theory of Part I then yields stability results for various groups , reminiscent of our general theory [CPSK] with van der Kallen of generic cohomology in the describing characteristic case. (In turn, the stable value of such a cohomology group can be expressed in terms of the cohomology of the affine Lie algebra .) The arguments entail new applications of the theory of tilting modules for $q$-Schur algebras. In particular, we obtain new complexes involving tilting modules associated to endomorphism algebras obtained from general finite Coxeter groups.

Cite this article

Edward Cline, Brian Parshall, Leonard Scott, Generic and $q$-Rational Representation Theory. Publ. Res. Inst. Math. Sci. 35 (1999), no. 1, pp. 31–90. DOI 10.2977/PRIMS/1195144189
Maximum Likelihood Estimation is Sensitive to Starting Points

October 21, 2016

In my previous post, I derive a formulation to use maximum likelihood estimation (MLE) in a simple linear regression case. Looking at the formulation for MLE, I had the suspicion that the MLE will be much more sensitive to the starting points of a gradient optimization than other linear regression methods. To demonstrate the sensitivity to the starting points, I ran 10,000 linear regressions. For each starting point I ran a MLE and a root mean square minimization to determine the optimum quadratic parameters to fit a polynomial to the data. As it turns out, the root mean square optimizations were just as good, or better than the MLE for every case. All of the code for this comparison is available here. Perhaps one of the simplest and most commonly used alternative objective functions for linear regression is the root mean square error (RMSE). RMSE is the square root of the mean square error, and mean square error is the average of the squared residuals \( r_i \). For this purpose, RMSE is defined in the following equation. $$ RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^n r_i^2} $$ The goal of the linear regression would be to find which parameters minimize the RMSE. As it turns out for my particular model and dataset, the optimum of the MLE will be the optimum of minimizing the RMSE. With this in mind I generated 10,000 random starting points. From each starting point I ran a MLE and a minimization of RMSE using gradient optimizations. It turns out that for the BFGS algorithm, MLE is more sensitive to starting points than minimizing RMSE. I concluded this because, of my 10,000 runs, the RMSE method performed better 7,668 times. Interestingly enough, the MLE and RMSE were equivalent 2,332 times, and never did the MLE produce a better match than the RMSE method. I’ve included a pie chart which helps to visualize the results. A bit of a disclaimer: these results are very model dependent.
What happens with my log-likelihood equation is that it has a tendency to go to infinity with a poor starting point, which creates problems with the gradient optimization method.
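The blow-up described above is easy to reproduce in a small sketch. The snippet below is my own illustration (not the author's original code, which used a quadratic model and 10,000 random starts): it compares the RMSE and the Gaussian negative log-likelihood for the same poor slope/intercept guess on a toy linear dataset. The RMSE does not depend on the noise parameter at all, while a tiny starting value of \( \sigma \) sends the negative log-likelihood toward infinity.

```python
import math

# Toy data from a simple linear model y = 2x + 1 (noise-free, for clarity).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]

def rmse(a, b):
    """Root mean square error of the fit y = a*x + b; sigma plays no role."""
    n = len(xs)
    return math.sqrt(sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / n)

def neg_log_likelihood(a, b, sigma):
    """Negative Gaussian log-likelihood; unlike RMSE it also depends on sigma."""
    n = len(xs)
    sse = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return 0.5 * n * math.log(2 * math.pi * sigma ** 2) + sse / (2 * sigma ** 2)

# Same (poor) slope/intercept guess, two different sigma starting points:
print(rmse(0.0, 0.0))                      # finite, independent of sigma
print(neg_log_likelihood(0.0, 0.0, 1.0))   # moderate objective value
print(neg_log_likelihood(0.0, 0.0, 1e-6))  # explodes for a tiny sigma guess
```

Because BFGS follows gradients of the objective, starting on such a steep cliff of the likelihood surface can stall or misdirect the search, which is consistent with the sensitivity reported above.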
Cool Math Stuff When talking about great mathematicians of the past, many will rank the top three as , and Newton. I have posted stories about the others, but none about Isaac Newton. So, I think now is a good time. Newton is best known for his work in physics, but he also made huge contributions to calculus, algebra, geometry, and infinite series. Many mathematicians expand their expertise to different diverse branches of math, but Newton stuck to the things that applied the most to his physics and are currently ruling the American school system. Let me tell you an interesting story about Newton. When he was a young student, he was very shy and not at all the genius that he is known as today. One day at recess, a bully came up to him and punched him in the stomach. Newton chose to fight back, and proceeded to shove his face in the mud. All of his classmates, who did not like this kid, cheered him on as he proved his superiority to the bully. After this incident, he decided that physical prestige wasn't enough for him, and he wanted mental prestige as well. So, he started working much harder at his schoolwork, and soon after became top of the class, proving to everyone that he was smarter than the bully as well. This motivation could have been what turned him into one of the best scientists and mathematicians of all time. I think this story shows that anyone who has drive and dedication can become a genius, and it also is a story themed around the negativity of bullying. I also like it because it is an interesting aspect about a mathematician's childhood, which help people get to know who is behind what they are learning and practicing. In school, the teacher is always on top of you for checking your work. When you do a subtraction problem, solve the reversed addition problem and make sure it is right, when you do an algebra problem, make sure you plug your solution back into the original equation. 
These are all things that are drilled into our heads, but never quite executed. I did post a year and a half ago about checking your work in algebra problems: plugging the answer into the original equation (click here to see how to do that). But there is also a shortcut for checking work on plain arithmetic problems as well. Let's take the problem 138 + 253. I would have gone smaller, but the method will be easier to demonstrate with larger numbers. If we add that up normally, we would get 391. How do we know if that is correct? Well, we do something called mod sums. What that means is we add up the digits in the number, and then add up the digits in this sum, and keep going until we find a single digit number. This is called the number's mod sum or digital root. So, what is the mod sum of 138? Well, we add up the digits. 1 + 3 + 8 = 12, and 1 + 2 = 3. So, the mod sum or digital root of 138 is 3. Let's find it for 253. 2 + 5 + 3 = 10, and 1 + 0 = 1. The mod sum of 253 is therefore 1. Let's find the mod sum of the total and see if you notice the pattern. 3 + 9 + 1 = 13, and 1 + 3 = 4. So, the two addends have mod sums of 3 and 1. The sum has a mod sum of 4. What is the pattern? That's right, the mod sum of the answer is the sum of the mod sums of the addends. What about a subtraction problem? Take 924 − 643. The answer to this problem is 281. But how do we confirm it? The mod sum of 924 is 6 (9+2+4=15 and 1+5=6) and the mod sum of 643 is 4 (6+4+3=13 and 1+3=4). So, the mod sum of the difference must be the difference of the two mod sums. The mod sum of 281 is 2 (2+8+1=11 and 1+1=2), which is the difference of 6 and 4. So, the answer was correct. What about a multiplication problem? Say 71 x 55. If you do the math, you will find that the answer is 3905. But let's check it with mod sums. The mod sum of 71 is 8, the mod sum of 55 is 1, and the mod sum of 3905 is 8. Since 8 x 1 = 8, it is correct. There are some glitches in the technique, but this is the basis of it.
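The whole procedure fits in a few lines of code. Here is a quick sketch (function names are my own) that computes digital roots and uses them to sanity-check the addition and multiplication worked above:

```python
def digital_root(n):
    """Add up the digits of a positive integer, repeating until one digit is left."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

def check_addition(a, b, claimed_sum):
    """The digital root of the claimed sum must match the digital root
    of the sum of the addends' digital roots."""
    return digital_root(digital_root(a) + digital_root(b)) == digital_root(claimed_sum)

def check_multiplication(a, b, claimed_product):
    """Same idea, with a product of digital roots instead of a sum."""
    return digital_root(digital_root(a) * digital_root(b)) == digital_root(claimed_product)

print(digital_root(138), digital_root(253), digital_root(391))  # 3 1 4
print(check_addition(138, 253, 391))        # True
print(check_multiplication(71, 55, 3905))   # True
print(check_addition(138, 253, 381))        # False -- catches the slip
```

One of the "glitches" mentioned above is worth spelling out: a passing check does not prove the answer right (swapping two digits, say 319 instead of 391, leaves the digital root unchanged), but a failing check always proves it wrong.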
You might run into scenarios that I didn't quite explain how to deal with, but feel free to comment. I will be happy to respond with some more specific pointers. Have fun actually checking your work now! One of the things that lots of people seem to be oblivious to is that mathematics is developing and innovating just as much as any other discipline, which I allude to in many of my presentations. There are many conjectures, or unsolved problems, out there that mathematicians are working on and trying to prove or solve. Rota's Conjecture was a problem like this, in the branch of matroid theory. This is a diverse area of mathematics that isn't taught or mentioned in the American school system (another concept I allude to in my presentations). So when I read this article about Geoff Whittle solving the problem, I thought it would make for a great post. Here is the story: Game theory and proofs are two of my favorite areas of mathematics; game theory is practical and fun while proofs are interesting and insightful. So, when I learned about this problem that combines the two, I thought that it was definitely worth a post. This game is called Chomp. It is normally played with just a table of squares, but I find it easier to understand by thinking of a chocolate bar. The mouth-watering Chomp playing board Chomp is played where the first player chooses a square on the board, and then takes away everything above and to the right of it (essentially taking a bite out of the top right corner of the chocolate bar). The second player would do the same thing with another remaining square. This process keeps continuing until all that remains is the bottom left square. Whoever is forced to take that square loses. To better understand how the game works, click here to practice playing it. You will see how easy it is to play and understand. At this point, any game theorist would be wondering if there is an optimal strategy for this game. 
From what we saw a couple weeks ago with Anti Tic-Tac-Toe, you might be wondering if symmetry is involved in this game. And yes, you can win this game by playing symmetrical moves in the end game. However, the board is not square, it is a rectangle. So, there cannot be full symmetry. I do not know what the actual optimal strategy is. But, I do know that one exists that would enable player one to always force a win. I will demonstrate this by an "existence proof" where you prove it exists without finding the actual thing. Pretend player one just took the top right corner square. This is either a good position or a bad position. If it is a good position, then by definition, player one can continue to play perfectly and force a win. If it is a bad position, then player two must have a responding move that will force them to win. But, this responding move must be a square that player one could have hit on their first move. Since the top right square really doesn't have an effect on the rest of the board, this would not be a problem. So, player one could have played this strategy, which would allow them to force a win as well. In either of these situations, player one wins. So, there is our proof. I find these existence proofs really interesting because you don't always think you can know if a statement is true without being able to see an example, but with mathematics, it can be done.
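The bite rule is simple enough to sketch in a few lines of code (this is my own toy model of the game, not taken from the post): represent the bar as a set of (row, column) squares with (0, 0) as the poisoned bottom-left square, and let a move delete the chosen square along with everything above and to the right of it.

```python
def new_board(rows, cols):
    """A full chocolate bar: every (row, col) square is still present."""
    return {(r, c) for r in range(rows) for c in range(cols)}

def bite(board, row, col):
    """Take the square at (row, col) and everything above and to its right.
    Rows grow upward and columns grow rightward, so a square survives only
    if it is strictly below the bitten row or strictly left of the bitten column."""
    return {(r, c) for (r, c) in board if r < row or c < col}

board = new_board(2, 3)          # a 2-by-3 bar, 6 squares
board = bite(board, 1, 1)        # bite the square one up, one right
print(sorted(board))             # [(0, 0), (0, 1), (0, 2), (1, 0)]
```

The strategy-stealing proof above needs none of this code, but with it you can brute-force small boards and confirm by exhaustive search that the first player always has a winning move.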
The hybrid solver is designed to detect whether a multigrid preconditioner is needed when solving a linear system and possibly avoid the expensive setup of a preconditioner if a system can be solved efficiently with a diagonally scaled Krylov solver, e.g. a strongly diagonally dominant system. It first uses a diagonally scaled Krylov solver, which can be chosen by the user (the default is conjugate gradient, but one should use GMRES if the matrix of the linear system to be solved is nonsymmetric). It monitors how fast the Krylov solver converges. If there is not sufficient progress, the algorithm switches to a preconditioned Krylov solver. If used through the Struct interface, the solver is called StructHybrid and can be used with the preconditioners SMG and PFMG (default). It is called ParCSRHybrid, if used through the IJ interface and is used here with BoomerAMG. The user can determine the average convergence speed by setting a convergence tolerance \(0 \leq \theta < 1\) via the routine HYPRE_StructHybridSetConvergenceTol or HYPRE_ParCSRHybridSetConvergenceTol. The default setting is 0.9. The average convergence factor \(\rho_i = \left({{\| r_i \|} \over {\| r_0 \|}}\right)^{1/i}\) is monitored within the chosen Krylov solver, where \(r_i = b - Ax_{i}\) is the \(i\)-th residual. Convergence is considered too slow when \[\left( 1 - {{|\rho_i - \rho_{i-1}|} \over { \max(\rho_i, \rho_{i-1})}} \right) \rho_i > \theta .\] When this condition is fulfilled the hybrid solver switches from a diagonally scaled Krylov solver to a preconditioned solver.
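As a rough illustration, the switching test above can be written out in a few lines. This is a hand-rolled sketch of the criterion as stated, not hypre's actual source: given the residual norms \(\|r_0\|, \|r_1\|, \dots\) recorded so far and a tolerance theta (default 0.9, as in the documentation), it reports whether the hybrid solver would abandon the diagonally scaled Krylov phase.

```python
def switch_to_preconditioner(res_norms, theta=0.9):
    """Return True when convergence is considered too slow, i.e. when
    (1 - |rho_i - rho_{i-1}| / max(rho_i, rho_{i-1})) * rho_i > theta,
    with rho_i = (||r_i|| / ||r_0||) ** (1/i)."""
    i = len(res_norms) - 1          # current iteration index
    if i < 2:
        return False                # need two average factors to compare
    rho_i = (res_norms[i] / res_norms[0]) ** (1.0 / i)
    rho_prev = (res_norms[i - 1] / res_norms[0]) ** (1.0 / (i - 1))
    return (1.0 - abs(rho_i - rho_prev) / max(rho_i, rho_prev)) * rho_i > theta

print(switch_to_preconditioner([1.0, 0.1, 0.01, 0.001]))  # False: converging fast
print(switch_to_preconditioner([1.0, 0.99, 0.98, 0.97]))  # True: stagnating
```

The damping factor in front of \(\rho_i\) keeps the test from firing while the average convergence factor is still changing rapidly between iterations; it only triggers once the factor has settled near a value above theta.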
Understanding Mathematical Functions: What Does The Mod Function Do Mathematical functions are an essential part of understanding and interpreting the relationships between various mathematical quantities. One such function that is commonly used is the mod function, also known as the modulo operation. This function calculates the remainder of a division between two numbers and is widely used in various mathematical and programming applications. Key Takeaways • The mod function, also known as the modulo operation, calculates the remainder of a division between two numbers. • It is widely used in mathematical and programming applications. • The mod function simplifies complex calculations and improves efficiency in certain algorithms. • It has applications in computer programming and is relevant in number theory. • Misconceptions about the mod function should be addressed to understand its purpose and how it differs from other mathematical operations. Understanding Mathematical Functions: What does the mod function do The mod function, short for modulo, is a fundamental mathematical operation that calculates the remainder of a division between two numbers. It is denoted by the symbol "%". A. Definition of the mod function The mod function takes two numbers, the dividend and the divisor, and returns the remainder when the dividend is divided by the divisor. In other words, it calculates what is left over after the division process. B. Explain what the mod function does For example, in the expression 7 % 3, the dividend is 7 and the divisor is 3. When 7 is divided by 3, the quotient is 2 with a remainder of 1. Therefore, 7 % 3 equals 1. Another way to look at it is that the mod function returns the amount that is "left over" or "unused". For instance, in the expression 10 % 4, the quotient is 2 with a remainder of 2. So, 10 % 4 equals 2. C. 
Provide examples of how the mod function is used • One common application of the mod function is in programming, particularly when dealing with loops or conditional statements. It can be used to check if a number is even or odd, or to perform tasks at regular intervals. • It is also used in cryptography, where it plays a role in encryption and decryption algorithms. • In mathematics, the mod function is used in various fields such as number theory, abstract algebra, and computer science. Understanding Mathematical Functions: What does the mod function do Mathematical functions play a crucial role in various fields such as computer science, engineering, and finance. One such function that is commonly used is the mod function, which serves the purpose of finding the remainder. In this chapter, we will delve into how the mod function works and its applications in different contexts. How the mod function works The mod function, short for modulo, is used to find the remainder when one number is divided by another. It can be denoted as "a mod b" where 'a' is the dividend and 'b' is the divisor. The result of 'a mod b' is the remainder when 'a' is divided by 'b'. For example, when 10 is divided by 3, the quotient is 3 and the remainder is 1. This can be expressed as 10 mod 3 = 1. The mod function is particularly useful in scenarios where we need to work with remainders, such as finding the day of the week or cycling through a sequence of numbers. Discuss the process of taking the remainder When using the mod function to compute the remainder, the process involves dividing the first number by the second and taking the leftover amount as the result. This operation can be performed on integers, floating-point numbers, and even negative numbers. For instance, if we have -7 mod 3, the result would be 2, as the remainder of -7 divided by 3 is 2. Similarly, 5.5 mod 2 would yield 1.5, signifying the remainder when 5.5 is divided by 2. 
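These uses are easy to demonstrate. The short Python sketch below (illustrative names of my own) shows the even/odd test and the wrap-around indexing mentioned above. One caveat worth knowing: Python's % follows the mathematical convention for negative numbers (-7 % 3 is 2), whereas C-family languages truncate toward zero and would give -1.

```python
# Remainders: what is "left over" after division
assert 7 % 3 == 1
assert 10 % 4 == 2

def is_even(n):
    """A number is even exactly when dividing it by 2 leaves no remainder."""
    return n % 2 == 0

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def day_after(start_index, offset):
    """Cycle through the week, wrapping back to the start after Sunday."""
    return days[(start_index + offset) % len(days)]

print(is_even(10), is_even(7))   # True False
print(day_after(5, 3))           # Tue: three days after Sat
print(-7 % 3)                    # 2 in Python (C would give -1)
```

The wrap-around pattern in `day_after` is the same one used to cycle through array elements or restart an iteration from the beginning, as described in the applications above.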
How the mod function is applied in different contexts

The mod function is widely used in a variety of applications. In computer programming, it is used to determine if a number is even or odd, as well as to cycle through elements in an array. In finance, the mod function can be utilized to calculate interest payments or determine recurring patterns in financial data. Furthermore, the mod function finds extensive use in cryptography, where it is employed to generate secure keys and implement algorithms for secure communication.

Applications of the mod function

The mod function is a fundamental mathematical operation that finds its applications in various fields. Let's delve into how this function is used in computer programming and its significance in number theory.

A. Use in computer programming

The mod function is extensively used in computer programming for a wide range of applications. It is commonly used to determine whether a number is even or odd: by using the mod function with 2 as the divisor, programmers can easily check if a number is divisible by 2. This is particularly useful in algorithms that involve sorting, searching, or manipulation of data.

Moreover, the mod function is invaluable in handling cyclic behavior in algorithms. For instance, in a program that needs to iterate through a sequence of elements and restart from the beginning once the end is reached, the mod function comes in handy to wrap around the index.

Additionally, the mod function is often applied in encryption algorithms and data validation processes. It plays a crucial role in generating hash codes, validating checksums, and ensuring the integrity of transmitted data.

B. Relevance in number theory

The mod function holds significant relevance in number theory, a branch of mathematics that deals with the properties and relationships of numbers. It is particularly useful in the study of divisibility, congruences, and prime numbers.

1. Divisibility

When a number is divided by another number using the mod function, the remainder obtained provides valuable information about the divisibility of the two numbers: a mod b equals 0 exactly when b divides a. This concept is fundamental in number theory, where the mod function is used to explore the properties of divisors and multiples.

2. Congruences

In number theory, the mod function is employed to define congruences between integers. Two numbers are said to be congruent modulo n if their difference is divisible by n. This concept forms the basis of modular arithmetic, which has diverse applications in cryptography, algebra, and computer science.

3. Prime numbers

The mod function is crucial in the identification and verification of prime numbers. Through the application of various theorems and algorithms that utilize the mod function, mathematicians can efficiently identify prime numbers and study their intricate patterns and properties.

Advantages of using the mod function

When it comes to mathematical functions, the mod function plays a crucial role in simplifying complex calculations and improving efficiency in various algorithms. Its ability to handle remainders and divisibility makes it a valuable tool in a wide range of mathematical and programming applications.

A. Ability to simplify complex calculations

• Handling remainders: The mod function allows for the efficient handling of remainders in mathematical calculations. This is particularly useful when dealing with large numbers or complex operations, as it provides a clear and concise way to represent the remainder in a division.
• Modular arithmetic: By using the mod function, complex arithmetic operations can be simplified through modular arithmetic. This can be especially beneficial in cryptography, computer graphics, and number theory, where the mod function helps to reduce the complexity of calculations and ensure accuracy in the results.

B. Improving efficiency in certain algorithms

• Data organization: In algorithms and programming, the mod function is commonly used to organize and distribute data into different categories or groups (for example, assigning keys to hash-table buckets). This can significantly improve the efficiency of sorting and searching algorithms, as well as enhance the performance of data structures and databases.
• Optimizing loops and iterations: By utilizing the mod function, iterations and repetitive calculations in algorithms can be optimized to reduce the number of operations and improve overall performance. This is particularly beneficial in cases where the efficiency of the algorithm is critical, such as in real-time systems or resource-constrained environments.

Common misconceptions about the mod function

When it comes to mathematical functions, the mod function often tends to be misunderstood. Let's address some of the common misconceptions about this function and clarify its purpose and how it differs from other mathematical operations.

A. Misunderstandings about its purpose

One common misconception about the mod function is that it is simply a fancy way to divide numbers. However, the mod function serves a specific purpose that goes beyond simple division: it returns the remainder of a division operation, rather than the quotient itself.

B. How it differs from other mathematical operations

Another source of confusion is the relationship between the mathematical mod function and the remainder operators found in programming languages. They often agree, but they are not always identical: the mathematical definition (floored division) always yields a result with the sign of the divisor, while some languages' remainder operators truncate toward zero and yield a result with the sign of the dividend, so the two can disagree for negative operands. Unlike operations such as addition, subtraction, multiplication, and division, the mod function specifically deals with remainders and has its own distinct properties and applications.
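The two programming uses named above — the parity test and wrapping an index to get cyclic behavior — can be sketched in a few lines of Python (the helper names here are mine, chosen for illustration):

```python
def is_even(n: int) -> bool:
    # A number is even exactly when its remainder on division by 2 is 0.
    return n % 2 == 0

# Wrapping an index: cycle through a sequence and restart from the
# beginning once the end is reached.
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def day_after(start_index: int, offset: int) -> str:
    # (start + offset) may run past the end of the list; % wraps it around.
    return DAYS[(start_index + offset) % len(DAYS)]
```

For example, `day_after(5, 4)` starts at "Sat" (index 5), moves 4 days forward, wraps past "Sun", and lands on "Wed".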
Summarizing, the mod function, short for modulo, is a mathematical operation that returns the remainder when one number is divided by another. It is a useful tool for applications such as finding patterns in numbers, solving equations, and cryptography. The mod function can be written as "a % b" in programming languages like Python, JavaScript, and C++. It is important to understand how the mod function works in order to use it effectively in mathematical and programming contexts. For those interested in delving further into mathematical functions, there are plenty of resources available to explore, from online tutorials and lectures to textbooks and peer-reviewed journals. Keep exploring, learning, and applying mathematical functions to expand your knowledge and problem-solving skills.
Finance Assignment: Understanding Portfolio Investment Analysis Finance Assignment: Understanding Portfolio Investment Analysis Task: Assessment purpose: To allow students to demonstrate an understanding of various portfolio investment analysis calculation techniques, applicable to real world situations. Facility with these techniques will assist students to construct an investment portfolio, analyse the expected and actual performance of selected securities and to review and change - if deemed desirable - the content of a portfolio. The assessment will reflect the analysis which would be expected of students if, after graduating, they are working in a modern accounting practice or a fund manager’s office. Topic: Portfolio Investment Management Calculations and their Application to Decision-making. Task Details: This finance assignment consists of five questions (some with multiple parts) and students should answer all questions. Please note that you must solve each problem using the appropriate formula/e, which must be shown. All calculations and workings must also be shown. QUESTION 1. a. At 15 October, 2020, the share prices of Coal Ltd and Wood Ltd were $30 and $105 respectively. One year later, the respective share prices were $35 and $110. i. Calculate the price-weighted percentage average return for these two stocks over the year to 15 October, 2021. ii. Suppose instead, at 15 October, 2021, the final price of Coal Ltd was $35 (as above), but the price of Wood Ltd had fallen to $95. Calculate the revised price-weighted percentage average return for these two stocks over the year to 15 October, 2021. b. What are both the payoff and the profit or loss per share for an investor in the following two situations? i. Jean buys the June, 2022 expiration Paypal call option for $6.40 with an exercise price of $120, if the Paypal stock price at the expiration date is $132? ii. 
Joan buys a Paypal put option for $4.50 with the same expiration date and exercise price as Jean’s call option, and the Paypal stock price is also $132 at the expiration date? c. A large investor resident in your country seeks your advice on global investments. i. State briefly two reasons why he/she should include international equities in his or her investment portfolio. ii. Identify two risks which apply to the investor if he/she invests in international equities. d. Two corporate bonds, issued respectively by F Ltd and G Ltd, have the same face value of $10,000 and the same term to maturity of 7 years. F Ltd’s bonds have a coupon rate of 8% per annum, payable half-yearly, and G Ltd’s bonds have a coupon rate of 7.8% per annum, payable bi-monthly (that is, every 2 months). Calculate the effective annual return (EAR) on each bond. [Show each answer as a percentage, correct to 2 decimal places.] e. Asif is a fund manager with a share portfolio currently valued at $1 billion under management. He considers that the share market is much over-priced and fears a sharp downturn of 20% in the market by June, 2022, which will badly affect his share portfolio’s value and performance, which he wishes to protect. He seeks your advice as to whether he should take a short position in futures or buy a put option, each with an exercise price of $1 billion (the current value of his share portfolio). Explain each of the two strategies, and state your recommendation which Asif should follow, with reasons. QUESTION 2. a. The expected return of the market index over 2022 is 10%. The standard deviation of returns of the market index is expected to remain at its long-term average of 18%. The risk-free rate is 4%. i. the degree of risk aversion (commonly denoted by ‘A’) for an investor in the market index. ii. the Sharpe ratio of the market index portfolio. b. The expected return of a risky portfolio in New Zealand over 2022 is 15%, while the risk-free rate is 7%. 
Terry wishes to set up a complete portfolio, with y (the proportion invested in the risky portfolio) = 0.75. i. Define a “complete portfolio”. ii. Describe the mix (or asset allocation) of Terry’s complete portfolio, including the percentages of each asset held. iii. What is the expected return of Terry’s complete portfolio? iv. What is the standard deviation of returns for Terry’s complete portfolio? v. What is the Sharpe ratio for Terry’s complete portfolio? c. Mabel is more risk averse than Terry, and her degree of risk aversion, A, is 4.0. Using the data supplied at the beginning of part b. above, calculate the percentages of each asset class you would recommend she should hold in her optimal complete portfolio. [Show percentages correct to 2 decimal places.] QUESTION 3. a. Historical data for the All Ordinaries Index indicates that: - the standard deviation of returns from the Index has been 17%; and - the degree of risk aversion (A) of an investor in the Index is 3.6. i. What market risk premium is consistent with the above historical standard deviation? ii. If the market risk premium is 12%, what would be the historical standard deviation? b. The expected return of the market in Iceland is 15%. Stock H has a beta of 1.3 and the risk-free rate is 5%. i. What is the expected return of Stock H, according to the CAPM? ii. What is the alpha of a stock? (Definition or explanation required.) iii. What is the alpha of Stock H, if Iceland Stockbrokers, investors in - and researchers of - the stock, believe that Stock H will provide a return this year of: I. 20%; or alternatively, if they consider the return this year will be: II. 14%? c. Based on your answers to part b. iii. above, is Stock H over-priced, underpriced or fairly priced in each of the situations I. and II.? Would you recommend that Iceland Brokers buy more of – or sell – or just hold Stock H in each of these situations? d. 
Jackie, an analyst with Betta Brokers, uses a two-factor (F1 and F2) CAPM index method to evaluate the expected return of stock in Z Ltd. The model uses the following data: E(R) of F1 = 12%; E(R) of F2 = 8%; β (beta) of F1 = 1.3; β (beta) of F2 = 0.4; and Rf (risk-free rate) = 5%. What is the expected return of a share in Z Ltd? QUESTION 4. A. The yield curve for Government-guaranteed zero-coupon bonds is as follows (yields to maturity of 8%, 9% and 10% for terms of 1, 2 and 3 years respectively): i. What are the implied one-year forward rates for years 1, 2 and 3 respectively? ii. If the expectations hypothesis of the term structure of interest rates is correct, in one year's time, what will be the yield to maturity on a one-year zero-coupon bond? iii. Based on the same hypothesis as in ii. above, in one year's time, what will be the yield to maturity on a two-year zero-coupon bond? B. On 15 January, 2021, you bought a Government bond, with a face value of $1,000; a term to maturity of 5 years; a coupon rate of 6% per annum payable yearly, and a yield to maturity of 5% per annum. You paid the market price of $1,043.76 for the bond. On 15 January, 2022, you sold the bond to Jill, providing her with a yield to maturity of 4% per annum. [NOTE: You bought and sold the bond immediately after payment of the interest coupon due on 15 January each year – that is, the interest payments due on 15 January in 2021 and 2022 are not included in the bond prices.] i. What price would Jill have paid for the bond? [Show answer correct to the nearer cent.] ii. What is your holding period return for holding the bond for one year, receiving the January, 2022 interest coupon, then selling the bond? [Show answer as a percentage, correct to 2 decimal places.] C. With the aid of hypothetical illustrative examples, briefly explain each of the Expectations and the Liquidity preference hypotheses relating to the term structure of interest rates. Which of the two hypotheses do you consider to be the more relevant? Why? QUESTION 5. A.
Briefly explain the following concepts relating to bond portfolio management. i. Duration. ii. Convexity. iii. Immunisation. B. Illustrate your answer to A. above with the calculation of the duration and convexity of a bond with a face value of $1,000, term to maturity of 3 years, a coupon rate of 6% per annum, payable yearly, and a yield to maturity of 4% per annum. [NOTE: As a by-product of these calculations, you should calculate the current market price of the bond, which price should be used as a base or starting point to your answers required in C. i. and C. ii. below.] C. Calculate the expected price of the bond described in B. above, if the yield to maturity fell immediately to 3% per annum, by each of the following 3 methods. i. The duration adjustment method. ii. The duration-with-convexity adjustment method. iii. The present value of future cash flows method. D. Which of the methods listed in C. above is most accurate? Why? E. Explain how a pension fund can use zero-coupon bonds to immunize its obligation to pay out $10 million a year in pensions in perpetuity, if the forecast long-term interest / discount rate is 5% a year forever. Finance Assignment Part 1 a. i. Price-weighted percentage average return The share price of Coal Ltd (Beginning of the year) = $30 Share price of Wood Ltd (Beginning of the year) = $105 Sum of the share price of Coal Ltd and Wood Ltd (Beginning of the year) = $(30 + 105) = $135 Share price of Coal Ltd (End of the year) = $35 Share price of Wood Ltd (End of the year) = $110 Sum of the share price of Coal Ltd and Wood Ltd (End of the year) = $(35 + 110) = $145 Price weighted percentage average return of the two stocks = [(145 – 135) / 135] * 100% = 7.40% a. ii. 
Revised price-weighted percentage average return
Price of Wood Ltd = $95
Sum of the share prices of Coal Ltd and Wood Ltd (end of the year) = $35 + $95 = $130
Sum of the share prices of Coal Ltd and Wood Ltd (beginning of the year) = $(30 + 105) = $135
Price-weighted percentage average return of the two stocks = [(130 - 135) / 135] * 100% = -3.70%
b. i. Payoff and profit or loss per share for an investor in the expiring PayPal call option
Call option price (premium) = $6.40
Exercise price = $120
Stock price at the expiration date = $132
Payoff per share:
= Stock price at expiration - strike price
= $132 - $120 = $12
Profit per share:
= Payoff per share - premium
= $12 - $6.40 = $5.60
b. ii. Payoff and profit or loss per share for an investor in the expiring PayPal put option
Put option price (premium) = $4.50
Stock price at the expiration date = $132
Payoff = Zero, because the strike price ($120) is below the stock price ($132), so the put expires worthless.
Loss per share:
= Payoff per share - premium
= $0 - $4.50 = -$4.50
c. i. Two reasons for including international equities in his or her investment portfolio
The two reasons for investing in international equities are discussed below:
1. Diversification: Domestic index funds give only limited exposure to international stocks, so on their own they do not provide complete diversification (Torrente and Uberti, 2021). To gain real exposure to international stocks and make the portfolio more diversified, the investor needs to invest in international equities directly; international stock markets represent over 40% of the world's equity investment.
2. Geographic advantages: International equities can help the investor boost returns by exposing capital to faster-growing economies. Different nations have different conditions, such as government leadership, taxation and access to natural resources (Belderbos, Tong and Wu, 2020). These factors affect stock performance and can result in more rapid growth.
c. ii.
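The option arithmetic in Question 1 b. follows directly from the standard payoff formulas; a short Python sketch (function names are mine):

```python
def call_profit(spot: float, strike: float, premium: float):
    """Payoff and profit per share for a long call held to expiration."""
    payoff = max(spot - strike, 0.0)   # exercised only if spot > strike
    return payoff, payoff - premium

def put_profit(spot: float, strike: float, premium: float):
    """Payoff and profit per share for a long put held to expiration."""
    payoff = max(strike - spot, 0.0)   # exercised only if spot < strike
    return payoff, payoff - premium

# Jean's call: strike $120, premium $6.40, stock at $132 -> payoff $12, profit $5.60
# Joan's put:  strike $120, premium $4.50, stock at $132 -> payoff $0, loss $4.50
```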
Two risks applying to an investor in international equities
The two risks that apply to investors investing in international equities are discussed below:
1. Currency risk: If the dollar weakens, international holdings provide a hedge against the currency movement (Wiriadinata, 2018). However, if the dollar strengthens, the performance of international stocks in dollar terms weakens.
2. Geopolitical risk: In the case of political unrest, a nation can also suffer an economic downturn (Smales, 2021). On such an occasion, international stocks might fall, lowering overall returns.
d. Effective annual return on each bond
Face value of bond issued by F Ltd = $10,000
Time to maturity = 7 years
Coupon rate = 8% p.a. (payable half-yearly)
Effective annual return of the bond issued by F Ltd:
= (1 + coupon rate/compounding periods) ^ compounding periods - 1
= (1 + 8%/2) ^ 2 - 1
= 8.16%
Face value of bond issued by G Ltd = $10,000
Time to maturity = 7 years
Coupon rate = 7.8% p.a. (payable bi-monthly, i.e. six times a year)
Effective annual return of the bond issued by G Ltd:
= (1 + coupon rate/compounding periods) ^ compounding periods - 1
= (1 + 7.8%/6) ^ 6 - 1
= 8.06%
e. Explaining the two strategies and recommending one to Asif
The two strategies are discussed below:
1. Short position in futures: A short hedge means taking a short position in futures contracts. In a sharp downturn, prices fall; since Asif has sold the futures short, he gains on the futures when the market falls (Wang et al., 2021). However, if prices rise, Asif suffers a loss on the futures position.
2. Buy a put option: Buying a put option limits the loss that Asif might incur. If there is a downturn, the put option pays off and reduces the effective loss, at the cost of the premium paid up front (Wang et al., 2021).
It is advised that Asif take a short position in futures.
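The effective annual return calculation in part d. is the standard compounding conversion, EAR = (1 + r/m)^m − 1; a minimal Python sketch (function name is mine):

```python
def effective_annual_rate(nominal_rate: float, periods_per_year: int) -> float:
    """EAR for a nominal annual rate compounded m times per year."""
    return (1 + nominal_rate / periods_per_year) ** periods_per_year - 1

# F Ltd: 8% p.a. compounded half-yearly  -> about 8.16%
# G Ltd: 7.8% p.a. compounded bi-monthly (6 periods/year) -> about 8.06%
```

The example shows why F Ltd's bond has the higher effective return despite the coupon-rate gap being only 0.2%: more frequent compounding on G's side is not enough to overcome the lower nominal rate.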
Then, if there is an economic downturn, Asif will close out the short futures position at a gain, which effectively offsets the loss his portfolio experiences in the downturn. On the other hand, if futures prices increase, Asif will suffer a loss on the futures, but the increase in the portfolio's value will offset that loss.
Part 2
a. i. Degree of risk aversion
Expected return in 2022 = 10%
Standard deviation = 18%
Risk-free rate = 4%
Degree of risk aversion:
= (Expected return - risk-free rate) / (standard deviation) ^ 2
= (10% - 4%) / 18% ^ 2
= 1.85
a. ii. Sharpe ratio
Sharpe ratio:
= (Portfolio return - risk-free rate) / standard deviation
= (10% - 4%) / 18%
= 33.33%
b. i. Complete portfolio
A complete portfolio is a combination of a risky portfolio, with return Rp, and a risk-free asset, with return Rf (Kostadinova et al., 2021). Its expected return is E(Rc) = Wp * E(Rp) + (1 - Wp) * Rf.
b. ii. Mix of Terry's complete portfolio
Terry's portfolio can be categorised into two kinds of assets: risky and risk-free. The allocation to the risky portfolio is 75%, and the allocation to the risk-free asset is 25%. Hence, Terry tilts the allocation towards the risky portfolio.
b. iii. Expected return of Terry's portfolio
Expected return:
= y * expected return of risky portfolio + (1 - y) * risk-free rate
= 0.75 * 15% + (1 - 0.75) * 7%
= 13%
b. iv. Standard deviation of return
The standard deviation of a complete portfolio is the risky proportion times the risky portfolio's standard deviation:
σc = y * σp = 0.75 * σp
The question does not state σp directly; the working value σc = 20% used below implies σp ≈ 26.67% for the risky portfolio.
b. v. Sharpe ratio
Sharpe ratio:
= (Portfolio return - risk-free rate) / standard deviation
= (13% - 7%) / 20%
= 30%
c.
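The complete-portfolio arithmetic in Question 2 b. reduces to two one-line formulas; a Python sketch (function names are mine):

```python
def complete_portfolio_return(y: float, risky_return: float, risk_free: float) -> float:
    """E(Rc) = y * E(Rp) + (1 - y) * Rf for a risky/risk-free mix."""
    return y * risky_return + (1 - y) * risk_free

def sharpe_ratio(portfolio_return: float, risk_free: float, std_dev: float) -> float:
    """Excess return per unit of total risk."""
    return (portfolio_return - risk_free) / std_dev

# Terry: y = 0.75, risky E(R) = 15%, Rf = 7%  ->  E(Rc) = 13%
# Market index (part a): E(R) = 10%, Rf = 4%, sigma = 18%  ->  Sharpe ~ 0.3333
```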
Percentages of each asset class to be recommended to Mabel
A = 4
Standard deviation of the risky portfolio = 24.5%
Optimal proportion in the risky portfolio:
y* = (Expected return - risk-free rate) / (A * (standard deviation) ^ 2)
= (15% - 7%) / (4 * 0.245 ^ 2)
= 0.08 / 0.2401
= 33.32%
Therefore, Mabel should hold 33.32% of her complete portfolio in the risky portfolio and the remaining 66.68% in the risk-free asset. Being more risk averse than Terry, she holds a much smaller risky proportion than his 75%.
Finance Assignment Part 3
a. i. Market risk premium
Expected return - risk-free rate = A * (standard deviation) ^ 2
Also, market risk premium = expected return - risk-free rate
Therefore, market risk premium:
= A * (standard deviation) ^ 2
= 3.6 * (17%) ^ 2
= 10.40%
a. ii. Historical standard deviation
Market risk premium = 12%
Standard deviation:
= (Market risk premium / risk-aversion coefficient) ^ 1/2
= (0.12 / 3.6) ^ 1/2
= 18.26%
b. i. Expected return of stock H
Beta = 1.3
Risk-free rate = 5%
Expected market return = 15%
Expected return of stock H, by the CAPM:
= Risk-free rate + beta * (market return - risk-free rate)
= 5% + 1.3 * (15% - 5%)
= 18%
b. ii. Alpha of a stock
Alpha is the excess of a stock's actual (or forecast) return over the return required by the CAPM for its level of systematic risk:
Alpha = Return of stock - [risk-free rate + beta * (return of market - risk-free rate)]
b. iii. Alpha of stock H
I. Return = 20%
Alpha = 20% - [5% + 1.3 * (15% - 5%)] = 20% - 18% = +2%
II. Return = 14%
Alpha = 14% - [5% + 1.3 * (15% - 5%)] = 14% - 18% = -4%
c. Recommending stock H to Iceland Brokers
I. When the forecast return is 20%, the alpha is +2%: the stock offers more than its CAPM-required return, so it is under-priced and Iceland Brokers should buy more of it.
II. When the forecast return is 14%, the alpha is -4%: the stock offers less than its CAPM-required return, so it is over-priced and should be sold.
d.
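The Part 3 relations — the risk-aversion identity E(Rm) − Rf = A·σ² and the textbook CAPM form E(R) = Rf + β(Rm − Rf), with alpha as the forecast return minus the CAPM-required return — can be sketched in Python (function names are mine):

```python
def market_risk_premium(risk_aversion: float, std_dev: float) -> float:
    """E(Rm) - Rf = A * sigma^2 for an investor fully in the market index."""
    return risk_aversion * std_dev ** 2

def capm_expected_return(risk_free: float, beta: float, market_return: float) -> float:
    """Security market line: Rf + beta * (Rm - Rf)."""
    return risk_free + beta * (market_return - risk_free)

def alpha(forecast_return: float, risk_free: float, beta: float, market_return: float) -> float:
    """Forecast return in excess of the CAPM-required return."""
    return forecast_return - capm_expected_return(risk_free, beta, market_return)

# A = 3.6, sigma = 17%  -> market risk premium ~ 10.40%
# Stock H: Rf = 5%, beta = 1.3, E(Rm) = 15% -> required return 18%
```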
Expected return of a share in Z Ltd
E(R) of F1 = 12%; beta on F1 = 1.3
E(R) of F2 = 8%; beta on F2 = 0.4
Risk-free rate = 5%
Under a two-factor model, the expected return is the risk-free rate plus the sum, over factors, of beta times that factor's risk premium:
E(R) = Rf + β1 * [E(RF1) - Rf] + β2 * [E(RF2) - Rf]
= 5% + 1.3 * (12% - 5%) + 0.4 * (8% - 5%)
= 5% + 9.1% + 1.2%
= 15.30%
Part 4
A. i. Implied one-year forward rates for years 1, 2 and 3
Formula for the implied forward rate:
fn = (1 + YTMn) ^ n / (1 + YTMn-1) ^ (n-1) - 1
Year 1: YTM = 8%; the forward rate for year 1 is the one-year spot rate, 8%
Year 2: YTM = 9%; f2 = (1 + 9%) ^ 2 / (1 + 8%) ^ 1 - 1 = 10.01%
Year 3: YTM = 10%; f3 = (1 + 10%) ^ 3 / (1 + 9%) ^ 2 - 1 = 12.03%
A. ii. YTM on a one-year zero-coupon bond
Under the expectations hypothesis, the one-year yield in one year's time equals the implied forward rate f2, i.e. 10.01%.
A. iii. YTM on a two-year zero-coupon bond
Under the same hypothesis, the two-year yield in one year's time is the geometric average of the forward rates f2 and f3:
= [(1 + 10.01%) * (1 + 12.03%)] ^ 1/2 - 1 = 11.01%
B. i. Price paid for the bond
Face value = $1,000
Remaining term to maturity at the date of sale = 4 years
Coupon = 6% p.a., payable yearly = $60
Jill's yield to maturity = 4% p.a.
Price paid by Jill for the bond:
= Coupon * [1 - (1 + YTM) ^ -n] / YTM + face value / (1 + YTM) ^ n
= 60 * [1 - (1.04) ^ -4] / 0.04 + 1,000 / (1.04) ^ 4
= $217.79 + $854.80
= $1,072.60
B. ii. Holding period return
Holding period return:
= (sale price - purchase price + coupon received) / purchase price
= (1,072.60 - 1,043.76 + 60) / 1,043.76 * 100%
= 8.51%
C. Expectations and liquidity preference hypotheses
The expectations hypothesis holds that long-term interest rates reflect the market's expectations of future short-term rates (Caldeira and Smaniotto, 2019). For example, if the one-year rate is 8% and the one-year rate expected for the following year is 10%, the two-year rate is set so that an investor earns the same from a two-year bond as from rolling over two consecutive one-year bonds. The liquidity preference hypothesis adds that investors demand a premium on longer maturities: because investors prefer cash or highly liquid assets, all other factors being equal, a long-maturity bond must offer a rate above the pure expectations rate (Xie, Wang and Meng, 2019); for instance, the two-year rate might exceed the average of the current and expected one-year rates by a liquidity premium of, say, 0.25%. The liquidity preference hypothesis is generally considered the more relevant, since observed yield curves slope upward more often than pure expectations alone would predict.
Part 5
A. i.
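The Part 4 calculations rest on two standard formulas: the implied forward rate f_n = (1 + y_n)^n / (1 + y_{n−1})^{n−1} − 1, and the coupon-bond price as an annuity plus a discounted face value. A Python sketch (function names are mine):

```python
def implied_forward_rate(spot_n: float, n: int, spot_prev: float) -> float:
    """One-year forward rate for year n from the n- and (n-1)-year spot rates."""
    return (1 + spot_n) ** n / (1 + spot_prev) ** (n - 1) - 1

def bond_price(face: float, coupon_rate: float, ytm: float, years: int) -> float:
    """Price of a bond paying annual coupons, valued just after a coupon date."""
    coupon = face * coupon_rate
    pv_coupons = coupon * (1 - (1 + ytm) ** -years) / ytm   # annuity of coupons
    pv_face = face / (1 + ytm) ** years                     # discounted face value
    return pv_coupons + pv_face

# Spot curve 8%/9%/10% -> f2 ~ 10.01%, f3 ~ 12.03%
# Jill's bond: $1,000 face, 6% coupon, 4 years left, 4% YTM -> ~ $1,072.60
```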
Duration
Duration measures the sensitivity of the price of a bond (or other debt instrument) to a change in interest rates (Di Asih and Abdurakhman, 2021). Duration can be confused with time to maturity, since it is also measured in years; however, a bond's term is a linear concept, whereas duration is non-linear.
A. ii. Convexity
Convexity measures the curvature of the relationship between a bond's price and its yield (Homaifar and Michello, 2019). It is a tool for measuring and managing a portfolio's exposure to market risk, and it establishes how the duration of a bond changes as interest rates change. When the duration of a bond increases as yields increase, the bond is said to have negative convexity.
A. iii. Immunisation
Immunisation is a risk-mitigation strategy that matches the duration of assets and liabilities in order to reduce the impact of interest-rate changes on net worth (Lapshin, 2019). By matching durations, the portfolio's value is protected against changes in the rate of interest.
B. Calculation of duration and convexity of the bond
Face value = $1,000
Term to maturity = 3 years
Coupon rate = 6% p.a., payable yearly
YTM = 4% p.a.
Duration of the bond:

Term | Cash flow | Present value at 4% | PV × term
1 | 60 | 60/(1.04)^1 = 57.69 | 57.69
2 | 60 | 60/(1.04)^2 = 55.47 | 110.95
3 | 1,060 | 1,060/(1.04)^3 = 942.34 | 2,827.01
Sum | | 1,055.50 | 2,995.65

Price of the bond = $1,055.50
Duration = sum of time-weighted PVs / price of the bond = 2,995.65 / 1,055.50 = 2.84 years

Convexity (weighting each cash flow's PV by t(t+1)):

Term | PV at 4% | PV × t(t+1)
1 | 57.69 | 115.38
2 | 55.47 | 332.84
3 | 942.34 | 11,308.03
Sum | 1,055.50 | 11,756.25

Convexity = [sum of PV × t(t+1)] / [price × (1 + YTM)^2] = 11,756.25 / (1,055.50 × 1.04^2) = 10.30

C. i. Duration adjustment method
Modified duration = duration / (1 + YTM) = 2.84 / 1.04 = 2.73
Percentage change in price = -modified duration × change in yield = -2.73 × (-1%) = +2.73%
Change in price of the bond = $1,055.50 × 0.0273 = +$28.80
Price of the bond = $1,055.50 + $28.80 = $1,084.30
C. ii. Duration-with-convexity adjustment method
Convexity adjustment = 0.5 × convexity × (change in yield)^2 = 0.5 × 10.30 × (0.01)^2 = 0.000515 = 0.05%
Percentage change in price = 2.73% + 0.05% = 2.78%
Price of the bond = $1,055.50 × 1.0278 = $1,084.85
C. iii. Present value of future cash flows method
Repricing the bond at a YTM of 3%:
60/(1.03)^1 = 58.25
60/(1.03)^2 = 56.56
1,060/(1.03)^3 = 970.05
Price of the bond = 58.25 + 56.56 + 970.05 = $1,084.86
D. Most accurate method
The present value of future cash flows method is the most accurate, because it reprices the bond exactly from its cash flows. The duration method is only a first-order approximation, and adding the convexity adjustment makes it a second-order approximation; as the figures above show, the duration-with-convexity price is very close to the exact price, while the duration-only estimate is slightly further off.
E.
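The duration and convexity figures can be reproduced from the cash flows; a Python sketch (the function name is mine, and the convexity uses the common t(t+1) weighting, noting that convexity conventions vary across texts):

```python
def bond_metrics(face: float, coupon_rate: float, ytm: float, years: int):
    """Price, Macaulay duration and convexity of an annual-coupon bond."""
    # Cash flows: level coupons, with the face value added at maturity.
    cfs = [face * coupon_rate] * (years - 1) + [face * (1 + coupon_rate)]
    pvs = [cf / (1 + ytm) ** t for t, cf in enumerate(cfs, start=1)]
    price = sum(pvs)
    # Macaulay duration: PV-weighted average time to each cash flow.
    duration = sum(t * pv for t, pv in enumerate(pvs, start=1)) / price
    # Convexity with t(t+1) weights, scaled by price * (1+y)^2.
    convexity = sum(t * (t + 1) * pv for t, pv in enumerate(pvs, start=1)) / (price * (1 + ytm) ** 2)
    return price, duration, convexity

# 3-year 6% annual-coupon bond at 4% YTM:
# price ~ $1,055.50, duration ~ 2.84 years, convexity ~ 10.30
```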
Pension fund using zero-coupon bonds to immunise its obligation
The obligation is a perpetuity of $10 million a year. At a long-term discount rate of 5%, its present value is $10 million / 0.05 = $200 million, and the duration of a perpetuity is (1 + y)/y = 1.05/0.05 = 21 years. The fund can immunise the obligation by holding zero-coupon bonds with a market value of $200 million and a portfolio duration of 21 years. Because the value and duration of the assets then match those of the liability, movements in interest rates have virtually no net impact on the funded position, and the fund manager can meet the $10 million annual pension payments regardless of rate fluctuations. The position must be rebalanced periodically, since durations drift with the passage of time and with changes in rates.
References
Belderbos, R., Tong, T.W. and Wu, S., 2020. Portfolio configuration and foreign entry decisions: A juxtaposition of real options and risk diversification theories. Strategic Management Journal, 41(7), pp.1191-1209. https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/smj.3151
Caldeira, J.F. and Smaniotto, E.N., 2019. The expectations hypothesis of the term structure of interest rates: The Brazilian case revisited. Applied Economics Letters, 26(8), pp.633-637.
Di Asih, I.M. and Abdurakhman, A., 2021. Delta-Normal Value at Risk Using Exponential Duration with Convexity for Measuring Government Bond Risk. DLSU Business & Economics Review, 31(1), pp.72-80.
Homaifar, G.A. and Michello, F.A., 2019. A generalized algorithm for duration and convexity of option embedded bonds. Applied Economics Letters, 26(10), pp.835-842.
Kostadinova, V., Georgiev, I., Mihova, V. and Pavlov, V., 2021. An application of Markov chains in stock price prediction and risk portfolio optimization. In AIP Conference Proceedings (Vol. 2321, No. 1, p. 030018). AIP Publishing LLC.
https://aip.scitation.org/doi/pdf/10.1063/5.0041119
Lapshin, V., 2019. A nonparametric approach to bond portfolio immunization. Mathematics, 7(11), p.1121. https://www.mdpi.com/2227-7390/7/11/1121/htm
Smales, L.A., 2021. Geopolitical risk and volatility spillovers in oil and stock markets. The Quarterly Review of Economics and Finance, 80, pp.358-366.
Torrente, M.L. and Uberti, P., 2021. Geometric Diversification in Portfolio Theory.
Wang, L., Ahmad, F., Luo, G.L., Umar, M. and Kirikkaleli, D., 2021. Portfolio optimization of financial commodities with energy futures. Annals of Operations Research, pp.1-39. https://link.springer.com/article/10.1007/s10479-021-04283-x
Wiriadinata, U., 2018. External debt, currency risk, and international monetary policy transmission. The University of Chicago.
Xie, Y., Wang, Z. and Meng, B., 2019. Stability and bifurcation of a delayed time-fractional order business cycle model with a general liquidity preference function and investment function. Mathematics, 7(9), p.846. https://www.mdpi.com/2227-7390/7/9/846/htm
How Time and Annual Returns Affect Saving Rates

I recently watched an interesting YouTube video about time and the rate of return by finance columnist Preet Banerjee. He explains the relationship between an investor's time horizon and his or her rate of return. Here's a question to illustrate: if our goal is to accumulate $100,000 by age 65, how much do we need to save per month? Here's a table that breaks down how much investors need to put away each month depending on their current age and the expected rate of return. For example, we can see that someone at age 30 who expects their portfolio to return 4% a year should save $109 per month.

As we can see, the rate of return is a much stronger factor for investors with a longer time horizon. The 20-year-old still has 45 years until retirement, and how much he has to save is heavily influenced by his average investment returns. On the other hand, the 60-year-old investor only has 5 more years before retirement, so most of his accumulated wealth will come from savings rather than from investment gains. This means he shouldn't take on more investment risk and reach for higher returns, since his rate of return simply doesn't matter very much. This is why I have a relatively high risk tolerance, even though some people don't approve. It's because I'm in my twenties, and if I can get that higher rate of return on my portfolio now, my life would be so much better in the future. 😀

The other thing to note about the table is that time trumps rate of return in most cases. If one person starts to invest at age 40 and earns a 2% rate of return, and someone else starts just 10 years later but earns a 6% rate of return, then the first person would still come out ahead despite making 4% a year less. In other words, if we start investing 10 years earlier than our peers, that will have the same effect as outperforming their investments by more than 4% each year. Wow! Let that sink in.
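The table's figures follow from the future value of an ordinary annuity. The sketch below reproduces the $109-a-month example (age 30, 4% a year, $100,000 by 65), assuming monthly deposits and monthly compounding; the function name is mine, not from the article.

```python
def monthly_saving(target, years, annual_return):
    """Monthly deposit whose future value reaches `target` after `years`,
    assuming end-of-month deposits compounding monthly at `annual_return`."""
    r = annual_return / 12   # monthly rate
    n = years * 12           # number of deposits
    if r == 0:
        return target / n
    # Future value of an ordinary annuity: PMT * ((1+r)^n - 1) / r = target
    return target * r / ((1 + r) ** n - 1)

# Age 30, retiring at 65 (35 years), 4% a year -> about $109 per month.
print(round(monthly_saving(100_000, 35, 0.04), 2))
```

Because the formula is linear in `target`, tripling the goal to $300,000 simply triples the required monthly deposit, which is exactly the scaling the post describes.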
We can also track our retirement progress with this information. For example, if we plan to retire in 15 years we can use the AGE 50 row of numbers in the table above. Let's say we want to accumulate an extra $300,000 between now and our retirement date. We know the table is based on an accumulation of $100,000. So to find out how much we need to save each month we just multiply the green numbers by 3, since $300,000 is 3 times $100,000. The tricky part is guessing which rate of return we are most likely to see over the next 15 years, but I think 4% sounds like a reasonable assumption. So start investing as early as possible and front-load more risk to the earlier stages of wealth accumulation. 🙂

Random Useless Fact:

6 Comments

05/09/2016 11:08 am
I think also 80% live within 100 km of the southern border

05/10/2016 8:16 am
That's a crazy fact… Time is your friend when it comes to investing… and I would also say for those starting out, getting to the first $100K invested is a big milestone… then you can scale things back and put it on auto pilot from there. I still feel like I'm playing catch up at 34 with only $650 NW… But I started my savings game late around age 27… Wish I would have jumped on the bandwagon at 21 or even 18… cheers!

05/11/2016 6:10 pm
Compound interest is an amazing thing but it needs time to be the most effective. Rate of return matters but definitely loses its importance during short time periods!
THE MATHEMATICS OF MONEY MANAGEMENT: RISK ANALYSIS TECHNIQUES FOR TRADERS
by Ralph Vince
Published by John Wiley & Sons, Inc.

Library of Congress Cataloging-in-Publication Data
Vince, Ralph, 1958-
The mathematics of money management: risk analysis techniques for traders / by Ralph Vince.
Includes bibliographical references and index.
ISBN 0-471-54738-7
1. Investment analysis - Mathematics. 2. Risk management - Mathematics. 3. Program trading (Securities)
HG4529.V56 1992
332.6'01'51-dc20

Preface and Dedication

The favorable reception of Portfolio Management Formulas exceeded even the greatest expectation I ever had for the book. I had written it to promote the concept of optimal f and begin to immerse readers in portfolio theory and its missing relationship with optimal f. Besides finding friends out there, Portfolio Management Formulas was surprisingly met by quite an appetite for the math concerning money management. Hence this book.

I am indebted to Karl Weber, Wendy Grau, and others at John Wiley & Sons who allowed me the necessary latitude this book required. There are many others with whom I have corresponded in one sort or another, or who in one way or another have contributed to, helped me with, or influenced the material in this book. Among them are Florence Bobeck, Hugo Rourdssa, Joe Bristor, Simon Davis, Richard Firestone, Fred Gehm (whom I had the good fortune of working with for awhile), Monique Mason, Gordon Nichols, and Mike Pascaul. I also wish to thank Fran Bartlett of G & H Soho, whose masterful work has once again transformed my little mountain of chaos, my little truckload of kindling, into the finished product that you now hold in your hands. This list is nowhere near complete as there are many others who, to varying degrees, influenced this book in one form or another.

This book has left me utterly drained, and I intend it to be my last.
Considering this, I'd like to dedicate it to the three people who have influenced me the most. To Rejeanne, my mother, for teaching me to appreciate a vivid imagination; to Larry, my father, for showing me at an early age how to squeeze numbers to make them jump; to Arlene, my wife, partner, and best friend. This book is for all three of you. Your influences resonate throughout it.

Chagrin Falls, Ohio
R. V.
March 1992

Index

Introduction
    Scope of this book
    Some prevalent misconceptions
    Worst-case scenarios and strategy
    Mathematics notation
    Synthetic constructs in this text
    Optimal trading quantities and optimal f
Chapter 1 - The Empirical Techniques
    Deciding on quantity
    Basic concepts
    The runs test
    Serial correlation
    Common dependency errors
    Mathematical Expectation
    To reinvest trading profits or not
    Measuring a good system for reinvestment: the Geometric Mean
    How best to reinvest
    Optimal fixed fractional trading
    Kelly formulas
    Finding the optimal f by the Geometric Mean
    To summarize thus far
    Geometric Average Trade
    Why you must know your optimal f
    The severity of drawdown
    Modern portfolio theory
    The Markowitz model
    The Geometric Mean portfolio strategy
    Daily procedures for using optimal portfolios
    Allocations greater than 100%
    How the dispersion of outcomes affects geometric growth
    The Fundamental Equation of trading
Chapter 2 - Characteristics of Fixed Fractional Trading and Salutary Techniques
    Optimal f for small traders just starting out
    Threshold to geometric
    One combined bankroll versus separate bankrolls
    Treat each play as if infinitely repeated
    Efficiency loss in simultaneous wagering or portfolio trading
    Time required to reach a specified goal and the trouble with fractional f
    Comparing trading systems
    Too much sensitivity to the biggest loss
    Equalizing optimal f
    Dollar averaging and share averaging ideas
    The Arc Sine Laws and random walks
    Time spent in a drawdown
Chapter 3 - Parametric Optimal f on the Normal Distribution
    The basics of probability distributions
    Descriptive measures of distributions
    Moments of a distribution
    The Normal Distribution
    The Central Limit Theorem
    Working with the Normal Distribution
    Normal Probabilities
    Further Derivatives of the Normal
    The Lognormal Distribution
    The parametric optimal f
    The distribution of trade P&L's
    Finding optimal f on the Normal Distribution
    The mechanics of the procedure
Chapter 4 - Parametric Techniques on Other Distributions
    The Kolmogorov-Smirnov (K-S) Test
    Creating our own Characteristic Distribution Function
    Fitting the Parameters of the distribution
    Using the Parameters to find optimal f
    Performing "What Ifs"
    Equalizing f
    Optimal f on other distributions and fitted curves
    Scenario planning
    Optimal f on binned data
    Which is the best optimal f?
Chapter 5 - Introduction to Multiple Simultaneous Positions under the Parametric Approach
    Estimating Volatility
    Ruin, Risk and Reality
    Option pricing models
    A European options pricing model for all distributions
    The single long option and optimal f
    The single short option
    The single position in The Underlying Instrument
    Multiple simultaneous positions with a causal relationship
    Multiple simultaneous positions with a random relationship
Chapter 6 - Correlative Relationships and the Derivation of the Efficient Frontier
    Definition of The Problem
    Solutions of Linear Systems using Row-Equivalent Matrices
    Interpreting The Results
Chapter 7 - The Geometry of Portfolios
    The Capital Market Lines (CMLs)
    The Geometric Efficient Frontier
    Unconstrained portfolios
    How optimal f fits with optimal portfolios
    Threshold to The Geometric for Portfolios
    Completing The Loop
Chapter 8 - Risk Management
    Asset Allocation
    Reallocation: Four Methods
    Why reallocate?
    Portfolio Insurance - The Fourth Reallocation Technique
    The Margin Constraint
    Rotating Markets
    To summarize
    Application to Stock Trading
    A Closing Comment
APPENDIX A - The Chi-Square Test
APPENDIX B - Other Common Distributions
    The Uniform Distribution
    The Bernoulli Distribution
    The Binomial Distribution
    The Geometric Distribution
    The Hypergeometric Distribution
    The Poisson Distribution
    The Exponential Distribution
    The Chi-Square Distribution
    The Student's Distribution
    The Multinomial Distribution
    The stable Paretian Distribution
APPENDIX C - Further on Dependency: The Turning Points and Phase Length Tests

Introduction

SCOPE OF THIS BOOK

I wrote in the first sentence of the Preface of Portfolio Management Formulas, the forerunner to this book, that it was a book about mathematical tools. This is a book about machines. Here, we will take tools and build bigger, more elaborate, more powerful tools-machines, where the whole is greater than the sum of the parts.
We will try to dissect machines that would otherwise be black boxes in such a way that we can understand them completely without having to cover all of the related subjects (which would have made this book impossible). For instance, a discourse on how to build a jet engine can be very detailed without having to teach you chemistry so that you know how jet fuel works. Likewise with this book, which relies quite heavily on many areas, particularly statistics, and touches on calculus. I am not trying to teach mathematics here, aside from that necessary to understand the text. However, I have tried to write this book so that if you understand calculus (or statistics) it will make sense and if you do not there will be little, if any, loss of continuity, and you will still be able to utilize and understand (for the most part) the material covered without feeling lost. Certain mathematical functions are called upon from time to time in statistics. These functions-which include the gamma and incomplete gamma functions, as well as the beta and incomplete beta functions-are often called functions of mathematical physics and reside just beyond the perimeter of the material in this text. To cover them in the depth necessary to do the reader justice is beyond the scope, and away from the direction of, this book. This is a book about account management for traders, not mathematical physics, remember? For those truly interested in knowing the "chemistry of the jet fuel" I suggest Numerical Recipes, which is referred to in the Bibliography. I have tried to cover my material as deeply as possible considering that you do not have to know calculus or functions of mathematical physics to be a good trader or money manager. It is my opinion that there isn't much correlation between intelligence and making money in the markets. By this I do not mean that the dumber you are the better I think your chances of success in the markets are. 
I mean that intelligence alone is but a very small input to the equation of what makes a good trader. In terms of what input makes a good trader, I think that mental toughness and discipline far outweigh intelligence. Every successful trader I have ever met or heard about has had at least one experience of a cataclysmic loss. The common denominator, it seems, the characteristic that separates a good trader from the others, is that the good trader picks up the phone and puts in the order when things are at their bleakest. This requires a lot more from an individual than calculus or statistics can teach a person. In short, I have written this as a book to be utilized by traders in the real-world marketplace. I am not an academic. My interest is in real-world utility before academic purity. Furthermore, I have tried to supply the reader with more basic information than the text requires in hopes that the reader will pursue concepts farther than I have here. One thing I have always been intrigued by is the architecture of music - music theory. I enjoy reading and learning about it. Yet I am not a musician. To be a musician requires a certain discipline that simply understanding the rudiments of music theory cannot bestow. Likewise with trading. Money management may be the core of a sound trading program, but simply understanding money management will not make you a successful trader. This is a book about music theory, not a how-to book about playing an instrument. Likewise, this is not a book about beating the markets, and you won't find a single price chart in this book. Rather, it is a book about mathematical concepts, taking that important step from theory to application, that you can employ. It will not bestow on you the ability to tolerate the emotional pain that trading inevitably has in store for you, win or lose. This book is not a sequel to Portfolio Management Formulas. Rather, Portfolio Management Formulas laid the foundations for what will be covered here.
Readers will find this book to be more abstruse than its forerunner. Hence, this is not a book for beginners. Many readers of this text will have read Portfolio Management Formulas. For those who have not, Chapter 1 of this book summarizes, in broad strokes, the basic concepts from Portfolio Management Formulas. Including these basic concepts allows this book to "stand alone" from Portfolio Management Formulas. Many of the ideas covered in this book are already in practice by professional money managers. However, the ideas that are widespread among professional money managers are not usually readily available to the investing public. Because money is involved, everyone seems to be very secretive about portfolio techniques. Finding out information in this regard is like trying to find out information about atom bombs. I am indebted to numerous librarians who helped me through many mazes of professional journals to fill in many of the gaps in putting this book together. This book does not require that you utilize a mechanical, objective trading system in order to employ the tools to be described herein. In other words, someone who uses Elliott Wave for making trading decisions, for example, can now employ optimal f. However, the techniques described in this book, like those in Portfolio Management Formulas, require that the sum of your bets be a positive result. In other words, these techniques will do a lot for you, but they will not perform miracles. Shuffling money cannot turn losses into profits. You must have a winning approach to start with. Most of the techniques advocated in this text are techniques that are advantageous to you in the long run. Throughout the text you will encounter the term "an asymptotic sense" to mean the eventual outcome of something performed an infinite number of times, whose probability approaches certainty as the number of trials continues. In other words, something we can be nearly certain of in the long run.
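The optimal f referred to above is developed at length later in the book. As a hedged preview of the empirical idea, the sketch below scans for the fixed fraction f that maximizes the terminal wealth relative (the product of holding-period returns) over a list of past trade P&Ls, with each trade scaled by the biggest historical loss; the code and function names are my illustration, not the book's listings.

```python
def terminal_wealth_relative(f, trades):
    """Product of holding-period returns at fixed fraction f, where each
    trade's P&L is scaled by the biggest historical loss."""
    worst = min(trades)                          # assumed to be negative
    twr = 1.0
    for pnl in trades:
        twr *= 1.0 + f * (pnl / -worst)          # one holding-period return
    return twr

def empirical_optimal_f(trades, step=0.01):
    """Coarse grid search over f in (0, 1) for the maximum TWR."""
    candidates = [round(i * step, 10) for i in range(1, int(1 / step))]
    return max(candidates, key=lambda f: terminal_wealth_relative(f, trades))

# Classic two-to-one coin toss (+2, -1): the known optimum is f = 0.25.
print(empirical_optimal_f([2, -1]))
```

The grid search is deliberately crude; it only illustrates why a fraction exists that maximizes geometric growth, with too little bet leaving money on the table and too much destroying the account.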
The root of this expression is the mathematical term "asymptote," which is a straight line considered as a limit to a curved line in the sense that the distance between a moving point on the curved line and the straight line approaches zero as the point moves an infinite distance from the origin. Trading is never an easy game. When people study these concepts, they often get a false feeling of power. I say false because people tend to get the impression that something very difficult to do is easy when they understand the mechanics of what they must do. As you go through this text, bear in mind that there is nothing in this text that will make you a better trader, nothing that will improve your timing of entry and exit from a given market, nothing that will improve your trade selection. These difficult exercises will still be difficult exercises even after you have finished and comprehended this book. Since the publication of Portfolio Management Formulas I have been asked by some people why I chose to write a book in the first place. The argument usually has something to do with the marketplace being a competitive arena, and writing a book, in their view, is analogous to educating your adversaries. The markets are vast. Very few people seem to realize how huge today's markets are. True, the markets are a zero sum game (at best), but as a result of their enormity you, the reader, are not my adversary. Like most traders, I myself am most often my own biggest enemy. This is not only true in my endeavors in and around the markets, but in life in general. Other traders do not pose anywhere near the threat to me that I myself do. I do not think that I am alone in this. I think most traders, like myself, are their own worst enemies. 
In the mid 1980s, as the microcomputer was fast becoming the primary tool for traders, there was an abundance of trading programs that entered a position on a stop order, and the placement of these entry stops was often a function of the current volatility in a given market. These systems worked beautifully for a time. Then, near the end of the decade, these types of systems seemed to collapse. At best, they were able to carve out only a small fraction of the profits that these systems had just a few years earlier. Most traders of such systems would later abandon them, claiming that if "everyone was trading them, how could they work anymore?" Most of these systems traded the Treasury Bond futures market. Consider now the size of the cash market underlying this futures market. Arbitrageurs in these markets will come in when the prices of the cash and futures diverge by an appropriate amount (usually not more than a few ticks), buying the less expensive of the two instruments and selling the more expensive. As a result, the divergence between the price of cash and futures will dissipate in short order. The only time that the relationship between cash and futures can really get out of line is when an exogenous shock, such as some sort of news event, drives prices to diverge farther than the arbitrage process ordinarily would allow for. Such disruptions are usually very short-lived and rather rare. An arbitrageur capitalizes on price discrepancies, one type of which is the relationship of a futures contract to its underlying cash instrument. As a result of this process, the Treasury Bond futures market is intrinsically tied to the enormous cash Treasury market. The futures market reflects, at least to within a few ticks, what's going on in the gigantic cash market. The cash market is not, and never has been, dominated by systems traders. Quite the contrary. 
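The arbitrage mechanism described above can be sketched in a few lines: when the futures price diverges from the cash price by more than some threshold, buy the cheap leg and sell the rich one. The prices and threshold below are invented, and real basis trades must also account for carry costs, which this toy omits.

```python
def arb_signal(cash_price, futures_price, threshold):
    """Toy cash-futures arbitrage signal based on the raw basis."""
    basis = futures_price - cash_price
    if basis > threshold:
        return "sell futures, buy cash"    # futures rich relative to cash
    if basis < -threshold:
        return "buy futures, sell cash"    # futures cheap relative to cash
    return "no trade"

print(arb_signal(100.0, 100.5, 0.25))
```

Traders acting on such signals are what pins the futures price to the enormous cash market, which is the point of the paragraph above.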
Returning now to our argument, it is rather inconceivable that the traders in the cash market all started trading the same types of systems as those who were making money in the futures market at that time! Nor is it any more conceivable that these cash participants decided to all gang up on those who were profiteering in the futures market. There is no valid reason why these systems should have stopped working, or stopped working as well as they had, simply because many futures traders were trading them. That argument would also suggest that a large participant in a very thin market be doomed to the same failure as traders of these systems in the bonds were. Likewise, it is silly to believe that all of the fat will be cut out of the markets just because I write a book on account management concepts. Cutting the fat out of the market requires more than an understanding of money management concepts. It requires discipline to tolerate and endure emotional pain to a level that 19 out of 20 people cannot bear. This you will not learn in this book or any other. Anyone who claims to be intrigued by the "intellectual challenge of the markets" is not a trader. The markets are as intellectually challenging as a fistfight. In that light, the best advice I know of is to always cover your chin and jab on the run. Whether you win or lose, there are significant beatings along the way. But there is really very little to the markets in the way of an intellectual challenge. Ultimately, trading is an exercise in self-mastery and endurance. This book attempts to detail the strategy of the fistfight. As such, this book is of use only to someone who already possesses the necessary mental toughness.

SOME PREVALENT MISCONCEPTIONS

You will come face to face with many prevalent misconceptions in this text. Among these are:

− Potential gain to potential risk is a straight-line function. That is, the more you risk, the more you stand to gain.
− Where you are on the spectrum of risk depends on the type of vehicle you are trading in.
− Diversification reduces drawdowns (it can do this, but only to a very minor extent - much less than most traders realize).
− Price behaves in a rational manner.

The last of these misconceptions, that price behaves in a rational manner, is probably the least understood of all, considering how devastating its effects can be. By "rational manner" is meant that when a trade occurs at a certain price, you can be certain that price will proceed in an orderly fashion to the next tick, whether up or down - that is, if a price is making a move from one point to the next, it will trade at every point in between. Most people are vaguely aware that price does not behave this way, yet most people develop trading methodologies that assume that price does act in this orderly fashion. But price is a synthetic perceived value, and therefore does not act in such a rational manner. Price can make very large leaps at times when proceeding from one price to the next, completely bypassing all prices in between. Price is capable of making gigantic leaps, and far more frequently than most traders believe. To be on the wrong side of such a move can be a devastating experience, completely wiping out a trader. Why bring up this point here? Because the foundation of any effective gaming strategy (and money management is, in the final analysis, a gaming strategy) is to hope for the best but prepare for the worst.

WORST-CASE SCENARIOS AND STRATEGY

The "hope for the best" part is pretty easy to handle. Preparing for the worst is quite difficult and something most traders never do. Preparing for the worst, whether in trading or anything else, is something most of us put off indefinitely. This is particularly easy to do when we consider that worst-case scenarios usually have rather remote probabilities of occurrence. Yet preparing for the worst-case scenario is something we must do now.
If we are to be prepared for the worst, we must do it as the starting point in our money management strategy. You will see as you proceed through this text that we always build a strategy from a worst-case scenario. We always start with a worst case and incorporate it into a mathematical technique to take advantage of situations that include the realization of the worst case. Finally, you must consider this next axiom. If you play a game with unlimited liability, you will go broke with a probability that approaches certainty as the length of the game approaches infinity. Not a very pleasant prospect. The situation can be better understood by saying that if you can only die by being struck by lightning, eventually you will die by being struck by lightning. Simple. If you trade a vehicle with unlimited liability (such as futures), you will eventually experience a loss of such magnitude as to lose everything you have. Granted, the probabilities of being struck by lightning are extremely small for you today and extremely small for you for the next fifty years. However, the probability exists, and if you were to live long enough, eventually this microscopic probability would see realization. Likewise, the probability of experiencing a cataclysmic loss on a position today may be extremely small (but far greater than being struck by lightning today). Yet if you trade long enough, eventually this probability, too, would be realized. There are three possible courses of action you can take. One is to trade only vehicles where the liability is limited (such as long options). The second is not to trade for an infinitely long period of time. Most traders will die before they see the cataclysmic loss manifest itself (or before they get hit by lightning). The probability of an enormous winning trade exists, too, and one of the nice things about winning in trading is that you don't have to have the gigantic winning trade. Many smaller wins will suffice. 
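The lightning analogy above can be made quantitative: for any fixed per-trade probability of a catastrophic event, however tiny, the chance of at least one occurrence tends to certainty as trades accumulate. The probability used below is a made-up figure for illustration only.

```python
# Probability of at least one catastrophic event in n independent trades,
# given a tiny per-trade probability p (an assumed figure, not from the text).
p = 0.0001

for n in (100, 10_000, 1_000_000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>9} trades: {at_least_one:.4f}")
```

At 100 trades the risk is roughly 1%; by a million trades it is indistinguishable from certainty, which is the axiom's point and the reason the text insists on planning for the worst case before it happens.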
Therefore, if you aren't going to trade in limited liability vehicles and you aren't going to die, make up your mind that you are going to quit trading unlimited liability vehicles altogether if and when your account equity reaches some prespecified goal. If and when you achieve that goal, get out and don't ever come back. We've been discussing worst-case scenarios and how to avoid, or at least reduce the probabilities of, their occurrence. However, this has not truly prepared us for their occurrence, and we must prepare for the worst. For now, consider that today you had that cataclysmic loss. Your account has been tapped out. The brokerage firm wants to know what you're going to do about that big fat debit in your account. You weren't expecting this to happen today. No one who ever experiences this ever does expect it. Take some time and try to imagine how you are going to feel in such a situation. Next, try to determine what you will do in such an instance. Now write down on a sheet of paper exactly what you will do, who you can call for legal help, and so on. Make it as definitive as possible. Do it now, so that if it happens you'll know what to do without having to think about these matters. Are there arrangements you can make now to protect yourself before this possible cataclysmic loss? Are you sure you wouldn't rather be trading a vehicle with limited liability? If you're going to trade a vehicle with unlimited liability, at what point on the upside will you stop? Write down what that level of profit is. Don't just read this and then keep plowing through the book. Close the book and think about these things for a while. This is the point from which we will build. The point here has not been to get you thinking in a fatalistic way. That would be counterproductive, because to trade the markets effectively will require a great deal of optimism on your part to make it through the inevitable prolonged losing streaks.
The point here has been to get you to think about the worst-case scenario and to make contingency plans in case such a worst-case scenario occurs. Now, take that sheet of paper with your contingency plans (and with the amount at which you will quit trading unlimited liability vehicles altogether written on it) and put it in the top drawer of your desk. Now, if the worst-case scenario should develop, you know you won't be jumping out of the window. Hope for the best but prepare for the worst. If you haven't done these exercises, then close this book now and keep it closed. Nothing can help you if you do not have this foundation to build upon.

MATHEMATICS NOTATION

Since this book is filled with mathematical equations, I have tried to make the mathematical notation as easy to understand, and as easy to take from the text to the computer keyboard, as possible. Multiplication will always be denoted with an asterisk (*), and exponentiation will always be denoted with a raised caret (^). Therefore, the square root of a number will be denoted as ^(1/2). You will never have to encounter the radical sign. Division is expressed with a slash (/) in most cases. Since the radical sign and the horizontal division line can also act as grouping operators in place of parentheses, using these conventions for division and exponentiation avoids that confusion. Parentheses will be the only grouping operator used, and they may be used to aid in the clarity of an expression even if they are not mathematically necessary. At certain special times, braces ({ }) may also be used as a grouping operator. Most of the mathematical functions used are quite straightforward (e.g., the absolute value function and the natural log function). One function that may not be familiar to all readers, however, is the exponential function, denoted in this text as EXP().
This is more commonly expressed mathematically as the constant e, equal to 2.7182818285, raised to the power of the function. Thus:

EXP(X) = e^X = 2.7182818285^X

The main reason I have opted to use the function notation EXP(X) is that most computer languages have this function in one form or another. Since much of the math in this book will end up transcribed into computer code, I find this notation more straightforward.

SYNTHETIC CONSTRUCTS IN THIS TEXT

As you proceed through the text, you will see that there is a certain geometry to this material. However, in order to get to this geometry we will have to create certain synthetic constructs. For one, we will convert trade profits and losses over to what will be referred to as holding period returns, or HPRs for short. An HPR is simply 1 plus what you made or lost on the trade as a percentage. Therefore, a trade that made a 10% profit would be converted to an HPR of 1+.10 = 1.10. Similarly, a trade that lost 10% would have an HPR of 1+(-.10) = .90. Most texts, when referring to a holding period return, do not add 1 to the percentage gain or loss. However, throughout this text, whenever we refer to an HPR, it will always be 1 plus the gain or loss as a percentage.

Another synthetic construct we must use is that of a market system. A market system is any given trading approach on any given market (the approach need not be a mechanical trading system, but often is). For example, say we are using two separate approaches to trading two separate markets, and say that one of our approaches is a simple moving average crossover system. The other approach takes trades based upon our Elliott Wave interpretation. Further, say the two markets we are trading are Treasury Bonds and heating oil. We therefore have a total of four different market systems: the moving average system on bonds, the Elliott Wave trades on bonds, the moving average system on heating oil, and the Elliott Wave trades on heating oil.
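The HPR conversion described above is easy to mechanize. Here is a minimal sketch in Python (the function and variable names are mine, not the book's): a trade's percentage result, expressed as a decimal fraction, becomes 1 plus that fraction.

```python
def to_hpr(pct_result):
    """Holding period return: 1 plus the trade's gain or loss as a percentage.

    A +10% trade (0.10) becomes 1.10; a -10% trade (-0.10) becomes 0.90.
    """
    return 1.0 + pct_result

# Three hypothetical trade results: +10%, -10%, +25%
trades = [0.10, -0.10, 0.25]
hprs = [to_hpr(t) for t in trades]  # approximately [1.10, 0.90, 1.25]
```

Multiplying a run of HPRs together gives the multiple made on the starting stake, which is what the text later calls the Terminal Wealth Relative.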
A market system can be further differentiated by other factors, one of which is dependency. For example, say that in our moving average system we discern (through methods discussed in this text) that winning trades beget losing trades and vice versa. We would, therefore, break our moving average system on any given market into two distinct market systems. One of the market systems would take trades only after a loss (because of the nature of this dependency, this is a more advantageous system), the other market system only after a profit. Referring back to our example of trading this moving average system in conjunction with Treasury Bonds and heating oil and using the Elliott Wave trades also, we now have six market systems: the moving average system after a loss on bonds, the moving average system after a win on bonds, the Elliott Wave trades on bonds, the moving average system after a win on heating oil, the moving average system after a loss on heating oil, and the Elliott Wave trades on heating oil. Pyramiding (adding on contracts throughout the course of a trade) is viewed in a money management sense as separate, distinct market systems rather than as the original entry. For example, if you are using a trading technique that pyramids, you should treat the initial entry as one market system. Each add-on, each time you pyramid further, constitutes another market system. Suppose your trading technique calls for you to add on each time you have a $1,000 profit in a trade. If you catch a really big trade, you will be adding on more and more contracts as the trade progresses through these $1,000 levels of profit. Each separate add-on should be treated as a separate market system. There is a big benefit in doing this. The benefit is that the techniques discussed in this book will yield the optimal quantities to have on for a given market system as a function of the level of equity in your account.
By treating each add-on as a separate market system, you will be able to use the techniques discussed in this book to know the optimal amount to add on for your current level of equity. Another very important synthetic construct we will use is the concept of a unit. The HPRs that you will be calculating for the separate market systems must be calculated on a "1 unit" basis. In other words, if they are futures or options contracts, each trade should be for 1 contract. If it is stocks you are trading, you must decide how big 1 unit is. It can be 100 shares or it can be 1 share. If you are trading cash markets or foreign exchange (forex), you must decide how big 1 unit is. By using results based upon trading 1 unit as input to the methods in this book, you will be able to get output results based upon 1 unit. That is, you will know how many units you should have on for a given trade. It doesn't matter what size you decide 1 unit to be, because it's just a hypothetical construct necessary in order to make the calculations. For each market system you must figure how big 1 unit is going to be. For example, if you are a forex trader, you may decide that 1 unit will be one million U.S. dollars. If you are a stock trader, you may opt for a size of 100 shares. Finally, you must determine whether you can trade fractional units or not. For instance, if you are trading commodities and you define 1 unit as being 1 contract, then you cannot trade fractional units (i.e., a unit size less than 1), because the smallest denomination in which you can trade futures contracts is 1 unit (you can possibly trade quasifractional units if you also trade minicontracts). If you are a stock trader and you define 1 unit as 1 share, then you cannot trade the fractional unit. However, if you define 1 unit as 100 shares, then you can trade the fractional unit, if you're willing to trade the odd lot.
If you are trading futures you may decide to have 1 unit be 1 minicontract, and not allow the fractional unit. Now, assuming that 2 minicontracts equal 1 regular contract, if you get an answer from the techniques in this book to trade 9 units, that would mean you should trade 9 minicontracts. Since 9 divided by 2 equals 4.5, you would optimally trade 4 regular contracts and 1 minicontract here. Generally, it is very advantageous from a money management perspective to be able to trade the fractional unit, but this isn't always true. Consider two stock traders. One defines 1 unit as 1 share and cannot trade the fractional unit; the other defines 1 unit as 100 shares and can trade the fractional unit. Suppose the optimal quantity to trade in today for the first trader is to trade 61 units (i.e., 61 shares) and for the second trader for the same day it is to trade 0.61 units (again 61 shares). I have been told by others that, in order to be a better teacher, I must bring the material to a level which the reader can understand. Often these other people's suggestions have to do with creating analogies between the concept I am trying to convey and something they already are familiar with. Therefore, for the sake of instruction you will find numerous analogies in this text. But I abhor analogies. Whereas analogies may be an effective tool for instruction as well as arguments, I don't like them because they take something foreign to people and (often quite deceptively) force fit it to a template of logic of something people already know is true. Here is an example: The square root of 6 is 3 because the square root of 4 is 2 and 2+2 = 4. Therefore, since 3+3 = 6, then the square root of 6 must be 3. Analogies explain, but they do not solve. Rather, an analogy makes the a priori assumption that something is true, and this "explanation" then masquerades as the proof. You have my apologies in advance for the use of the analogies in this text. 
I have opted for them only for the purpose of instruction. OPTIMAL TRADING QUANTITIES AND OPTIMAL F Modern portfolio theory, perhaps the pinnacle of money management concepts from the stock trading arena, has not been embraced by the rest of the trading world. Futures traders, whose technical trading ideas are usually adopted by their stock trading cousins, have been reluctant to accept ideas from the stock trading world. As a consequence, modern portfolio theory has never really been embraced by futures traders. Whereas modern portfolio theory will determine optimal weightings of the components within a portfolio (so as to give the least variance to a prespecified return or vice versa), it does not address the notion of optimal quantities. That is, for a given market system, there is an optimal amount to trade in for a given level of account equity so as to maximize geometric growth. This we will refer to as the optimal f. This book proposes that modern portfolio theory can and should be used by traders in any markets, not just the stock markets. However, we must marry modern portfolio theory (which gives us optimal weights) with the notion of optimal quantity (optimal f) to arrive at a truly optimal portfolio. It is this truly optimal portfolio that can and should be used by traders in any markets, including the stock markets. In a nonleveraged situation, such as a portfolio of stocks that are not on margin, weighting and quantity are synonymous, but in a leveraged situation, such as a portfolio of futures market systems, weighting and quantity are different indeed. In this book you will see an idea first roughly introduced in Portfolio Management Formulas, that optimal quantities are what we seek to know, and that this is a function of optimal weightings. Once we amend modern portfolio theory to separate the notions of weight and quantity, we can return to the stock trading arena with this now reworked tool. 
We will see how almost any nonleveraged portfolio of stocks can be improved dramatically by making it a leveraged portfolio, and marrying the portfolio with the risk-free asset. This will become intuitively obvious to you. The degree of risk (or conservativeness) is then dictated by the trader as a function of how much or how little leverage the trader wishes to apply to this portfolio. This implies that where a trader is on the spectrum of risk aversion is a function of the leverage used and not a function of the type of trading vehicle used. In short, this book will teach you about risk management. Very few traders have an inkling as to what constitutes risk management. It is not simply a matter of eliminating risk altogether. To do so is to eliminate return altogether. It isn't simply a matter of maximizing potential reward to potential risk either. Rather, risk management is about decision-making strategies that seek to maximize the ratio of potential reward to potential risk within a given acceptable level of risk. To learn this, we must first learn about optimal f, the optimal quantity component of the equation. Then we must learn about combining optimal f with the optimal portfolio weighting. Such a portfolio will maximize potential reward to potential risk. We will first cover these concepts from an empirical standpoint (as was introduced in Portfolio Management Formulas), then study them from a more powerful standpoint, the parametric standpoint. In contrast to an empirical approach, which utilizes past data to come up with answers directly, a parametric approach utilizes past data to come up with parameters. These are certain measurements about something. These parameters are then used in a model to come up with essentially the same answers that were derived from an empirical approach. The strong point about the parametric approach is that you can alter the values of the parameters to see the effect on the outcome from the model.
This is something you cannot do with an empirical technique. However, empirical techniques have their strong points, too. The empirical techniques are generally more straightforward and less math intensive. Therefore they are easier to use and comprehend. For this reason, the empirical techniques are covered first. Finally, we will see how to implement the concepts within a user-specified acceptable level of risk, and learn strategies to maximize this situation further. There is a lot of material to be covered here. I have tried to make this text as concise as possible. Some of the material may not sit well with you, the reader, and perhaps may raise more questions than it answers. If that is the case, then I have succeeded in one facet of what I have attempted to do. Most books have a single "heart," a central concept that the entire text flows toward. This book is a little different in that it has many hearts. Thus, some people may find this book difficult when they go to read it if they are subconsciously searching for a single heart. I make no apologies for this; this does not weaken the logic of the text; rather, it enriches it. This book may take you more than one reading to discover many of its hearts, or just to be comfortable with it. One of the many hearts of this book is the broader concept of decision making in environments characterized by geometric consequences. An environment of geometric consequence is an environment where a quantity that you have to work with today is a function of prior outcomes. I think this covers most environments we live in! Optimal f is the regulator of growth in such environments, and the by-products of optimal f tell us a great deal of information about the growth rate of a given environment. In this text you will learn how to determine the optimal f and its by-products for any distributional form. This is a statistical tool that is directly applicable to many real-world environments in business and science.
I hope that you will seek to apply the tools for finding the optimal f parametrically in other fields where there are such environments, for numerous different distributions, not just for trading the markets. For years the trading community has discussed the broad concept of "money management." Yet by and large, money management has been characterized by a loose collection of rules of thumb, many of which were incorrect. Ultimately, I hope that this book will have provided traders with exactitude under the heading of money management.

Chapter 1 - The Empirical Techniques

This chapter is a condensation of Portfolio Management Formulas. The purpose here is to bring those readers unfamiliar with these empirical techniques up to the same level of understanding as those who are.

DECIDING ON QUANTITY

Whenever you enter a trade, you have made two decisions: Not only have you decided whether to enter long or short, you have also decided upon the quantity to trade in. This decision regarding quantity is always a function of your account equity. If you have a $10,000 account, don't you think you would be leaning into the trade a little if you put on 100 gold contracts? Likewise, if you have a $10 million account, don't you think you'd be a little light if you only put on one gold contract? Whether we acknowledge it or not, the decision of what quantity to have on for a given trade is inseparable from the level of equity in our account. It is a very fortunate fact for us, though, that an account will grow the fastest when we trade a fraction of the account on each and every trade; in other words, when we trade a quantity relative to the size of our stake. However, the quantity decision is not simply a function of the equity in our account; it is also a function of a few other things. It is a function of our perceived "worst-case" loss on the next trade. It is a function of the speed with which we wish to make the account grow.
It is a function of dependency on past trades. More variables than these just mentioned may be associated with the quantity decision, yet we try to agglomerate all of these variables, including the account's level of equity, into a subjective decision regarding quantity: How many contracts or shares should we put on? In this discussion, you will learn how to make the mathematically correct decision regarding quantity. You will no longer have to make this decision subjectively (and quite possibly erroneously). You will see that there is a steep price to be paid by not having on the correct quantity, and this price increases as time goes by. Most traders gloss over this decision about quantity. They feel that it is somewhat arbitrary in that it doesn't much matter what quantity they have on. What matters is that they be right about the direction of the trade. Furthermore, they have the mistaken impression that there is a straight-line relationship between how many contracts they have on and how much they stand to make or lose in the long run. This is not correct. As we shall see in a moment, the relationship between potential gain and quantity risked is not a straight line. It is curved. There is a peak to this curve, and it is at this peak that we maximize potential gain per quantity at risk. Furthermore, as you will see throughout this discussion, the decision regarding quantity for a given trade is as important as the decision to enter long or short in the first place. Contrary to most traders' misconception, whether you are right or wrong on the direction of the market when you enter a trade does not dominate whether or not you have the right quantity on. Ultimately, we have no control over whether the next trade will be profitable or not. Yet we do have control over the quantity we have on. Since one does not dominate the other, our resources are better spent concentrating on putting on the right quantity.
On any given trade, you have a perceived worst-case loss. You may not even be conscious of this, but whenever you enter a trade you have some idea in your mind, even if only subconsciously, of what can happen to this trade in the worst case. This worst-case perception, along with the level of equity in your account, shapes your decision about how many contracts to trade. Thus, we can now state that there is a divisor of this biggest perceived loss, a number between 0 and 1 that you will use in determining how many contracts to trade. For instance, if you have a $50,000 account, if you expect, in the worst case, to lose $5,000 per contract, and if you have on 5 contracts, your divisor is .5, since:

50,000/(5,000/.5) = 5

In other words, you have on 5 contracts for a $50,000 account, so you have 1 contract for every $10,000 in equity. You expect in the worst case to lose $5,000 per contract, thus your divisor here is .5. If you had on only 1 contract, your divisor in this case would be .1, since:

50,000/(5,000/.1) = 1

Figure 1-1: 20 sequences of +2, -1 (TWR as a function of f values).

This divisor we will call by its variable name f. Thus, whether consciously or subconsciously, on any given trade you are selecting a value for f when you decide how many contracts or shares to put on. Refer now to Figure 1-1. This represents a game where you have a 50% chance of winning $2 versus a 50% chance of losing $1 on every play. Notice that here the optimal f is .25, where the TWR is 10.55 after 40 bets (20 sequences of +2, -1). TWR stands for Terminal Wealth Relative. It represents the return on your stake as a multiple. A TWR of 10.55 means you would have made 10.55 times your original stake, or 955% profit. Now look at what happens if you bet only 15% away from the optimal .25 f. At an f of .1 or .4 your TWR is 4.66. This is not even half of what it is at .25, yet you are only 15% away from the optimal and only 40 bets have elapsed! How much are we talking about in terms of dollars?
At f = .1, you would be making 1 bet for every $10 in your stake. At f = .4, you would be making 1 bet for every $2.50 in your stake. Both make the same amount with a TWR of 4.66. At f = .25, you are making 1 bet for every $4 in your stake. Notice that if you make 1 bet for every $4 in your stake, you will make more than twice as much after 40 bets as you would if you were making 1 bet for every $2.50 in your stake! Clearly it does not pay to overbet. At 1 bet per every $2.50 in your stake you make the same amount as if you had bet a quarter of that amount, 1 bet for every $10 in your stake! Notice that in a 50/50 game where you win twice the amount that you lose, at an f of .5 you are only breaking even! That means you are only breaking even if you made 1 bet for every $2 in your stake. At an f greater than .5 you are losing in this game, and it is simply a matter of time until you are completely tapped out! In other words, if your f in this 50/50, 2:1 game is .25 beyond what is optimal, you will go broke with a probability that approaches certainty as you continue to play. Our goal, then, is to objectively find the peak of the f curve for a given trading system. In this discussion certain concepts will be illuminated in terms of gambling illustrations. The main difference between gambling and speculation is that gambling creates risk (and hence many people are opposed to it) whereas speculation is a transference of an already existing risk (supposedly) from one party to another. The gambling illustrations are used to illustrate the concepts as clearly and simply as possible. The mathematics of money management and the principles involved in trading and gambling are quite similar. The main difference is that in the math of gambling we are usually dealing with Bernoulli outcomes (only two possible outcomes), whereas in trading we are dealing with the entire probability distribution that the trade may take.
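The numbers in the 2:1 coin-toss illustration can be verified directly. In 20 sequences of +2, -1, each win multiplies the stake by (1 + 2f) and each loss multiplies it by (1 - f), so the TWR as a function of f reduces to a one-line formula. A minimal sketch (function name mine):

```python
def twr(f, sequences=20):
    """TWR after `sequences` repetitions of one +2 win and one -1 loss,
    risking the fraction f of the stake on each bet."""
    return ((1 + 2 * f) * (1 - f)) ** sequences

print(round(twr(0.25), 2))  # 10.55 -- the peak of the f curve
print(round(twr(0.10), 2))  # 4.66
print(round(twr(0.40), 2))  # 4.66 -- same as f = .10
print(round(twr(0.50), 2))  # 1.0  -- break even
```

Note that f = .10 and f = .40 produce identical TWRs because (1.2)(0.9) = (1.8)(0.6) = 1.08, which is why the curve falls away on both sides of the .25 peak.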
BASIC CONCEPTS

A probability statement is a number between 0 and 1 that specifies how probable an outcome is, with 0 being no probability whatsoever of the event in question occurring and 1 being that the event in question is certain to occur. An independent trials process (sampling with replacement) is a sequence of outcomes where the probability statement is constant from one event to the next. A coin toss is an example of just such a process. Each toss has a 50/50 probability regardless of the outcome of the prior toss. Even if the last 5 flips of a coin were heads, the probability of this flip being heads is unaffected and remains .5. Naturally, the other type of random process is one in which the outcome of prior events does affect the probability statement, and naturally, the probability statement is not constant from one event to the next. These types of events are called dependent trials processes (sampling without replacement). Blackjack is an example of just such a process. Once a card is played, the composition of the deck changes. Suppose a new deck is shuffled and a card removed, say, the ace of diamonds. Prior to removing this card the probability of drawing an ace was 4/52 or .07692307692. Now that an ace has been drawn from the deck, and not replaced, the probability of drawing an ace on the next draw is 3/51 or .05882352941. Try to think of the difference between independent and dependent trials processes as simply whether the probability statement is fixed (independent trials) or variable (dependent trials) from one event to the next based on prior outcomes. This is, in fact, the only difference between the two types of processes.

THE RUNS TEST

When we do sampling without replacement from a deck of cards, we can determine by inspection that there is dependency. For certain events (such as the profit and loss stream of a system's trades) where dependency cannot be determined upon inspection, we have the runs test.
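The blackjack probabilities above (sampling without replacement) can be checked with exact fractions; a minimal sketch:

```python
from fractions import Fraction

# Sampling without replacement: drawing aces from a 52-card deck.
p_first_ace = Fraction(4, 52)  # before any card is removed
p_next_ace = Fraction(3, 51)   # after one ace is drawn and not replaced

print(float(p_first_ace))  # 0.07692307692307693
print(float(p_next_ace))   # 0.058823529411764705
```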
The runs test will tell us if our system has more (or fewer) streaks of consecutive wins and losses than a random distribution. The runs test is essentially a matter of obtaining the Z scores for the win and loss streaks of a system's trades. A Z score is how many standard deviations you are away from the mean of a distribution. Thus, a Z score of 2.00 is 2.00 standard deviations away from the mean (the expectation of a random distribution of streaks of wins and losses). The Z score is simply the number of standard deviations the data is from the mean of the Normal Probability Distribution. For example, a Z score of 1.00 would mean that the data you are testing is within 1 standard deviation from the mean. Incidentally, this is perfectly normal. The Z score is then converted into a confidence limit, sometimes also called a degree of certainty. The area under the curve of the Normal Probability Function at 1 standard deviation on either side of the mean equals 68% of the total area under the curve. So we take our Z score and convert it to a confidence limit, the relationship being that the Z score is a number of standard deviations from the mean and the confidence limit is the percentage of area under the curve occupied at so many standard deviations.

Confidence Limit (%)   Z Score
99.73                  3.00
99                     2.58
98                     2.33
97                     2.17
96                     2.05
95.45                  2.00
95                     1.96
90                     1.64

With a minimum of 30 closed trades we can now compute our Z scores. What we are trying to answer is this: How many streaks of wins (losses) can we expect from a given system? Are the win (loss) streaks of the system we are testing in line with what we could expect? If not, is there a high enough confidence limit that we can assume dependency exists between trades; i.e., is the outcome of a trade dependent on the outcome of previous trades?

Here then is the equation for the runs test, the system's Z score:

(1.01) Z = (N*(R-.5)-X)/((X*(X-N))/(N-1))^(1/2)

where N = The total number of trades in the sequence.
R = The total number of runs in the sequence.
X = 2*W*L
W = The total number of winning trades in the sequence.
L = The total number of losing trades in the sequence.

Here is how to perform this computation:

1. Compile the following data from your run of trades:
A. The total number of trades, hereafter called N.
B. The total number of winning trades and the total number of losing trades. Now compute what we will call X: X = 2*Total Number of Wins*Total Number of Losses.
C. The total number of runs in the sequence. We'll call this R.

Let's construct an example to follow along with. Assume the following 12 trades:

-3, +2, +7, -4, +1, -1, +1, +6, -1, 0, -2, +1

The net profit is +7. The total number of trades is 12, so N = 12, to keep the example simple. We are not now concerned with how big the wins and losses are, but rather how many wins and losses there are and how many streaks. Therefore, we can reduce our run of trades to a simple sequence of pluses and minuses. Note that a trade with a P&L of 0 is regarded as a loss. We now have:

- + + - + - + + - - - +

As can be seen, there are 6 profits and 6 losses; therefore, X = 2*6*6 = 72. As can also be seen, there are 8 runs in this sequence; therefore, R = 8. We define a run as anytime you encounter a sign change when reading the sequence as just shown from left to right (i.e., chronologically). Assume also that you start at 1. You would thus count this sequence as follows:

- + + - + - + + - - - +
1 2 2 3 4 5 6 6 7 7 7 8

2. Solve the expression: N*(R-.5)-X. For our example this would be:

12*(8-.5)-72
12*7.5-72
90-72
18

3. Solve the expression: (X*(X-N))/(N-1). For our example this would be:

(72*(72-12))/(12-1)
(72*60)/11
4320/11
392.727272

4. Take the square root of the answer in number 3. For our example this would be:

392.727272^(1/2) = 19.81734777

5. Divide the answer in number 2 by the answer in number 4. This is your Z score. For our example this would be:

18/19.81734777 = .9082951063

6. Now convert your Z score to a confidence limit. The distribution of runs is binomially distributed.
However, when there are 30 or more trades involved, we can use the Normal Distribution to very closely approximate the binomial probabilities. Thus, if you are using 30 or more trades, you can simply convert your Z score to a confidence limit based upon Equation (3.22) for 2-tailed probabilities in the Normal Distribution. The runs test will tell you if your sequence of wins and losses contains more or fewer streaks (of wins or losses) than would ordinarily be expected in a truly random sequence, one that has no dependence between trials. Since we are at such a relatively low confidence limit in our example, we can assume that there is no dependence between trials in this particular sequence. If your Z score is negative, simply convert it to positive (take the absolute value) when finding your confidence limit. A negative Z score implies positive dependency, meaning fewer streaks than the Normal Probability Function would imply and hence that wins beget wins and losses beget losses. A positive Z score implies negative dependency, meaning more streaks than the Normal Probability Function would imply and hence that wins beget losses and losses beget wins. What would an acceptable confidence limit be? Statisticians generally recommend selecting a confidence limit at least in the high nineties. Some statisticians recommend a confidence limit in excess of 99% in order to assume dependency, some recommend a less stringent minimum of 95.45% (2 standard deviations). Rarely, if ever, will you find a system that shows confidence limits in excess of 95.45%. Most frequently the confidence limits encountered are less than 90%. Even if you find a system with a confidence limit between 90 and 95.45%, this is not exactly a nugget of gold. To assume that there is dependency involved that can be capitalized upon to make a substantial difference, you really need to exceed 95.45% as a bare minimum. 
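Equation (1.01) and the six-step computation above can be sketched as follows; the confidence-limit conversion here uses the error function, a standard equivalent of the two-tailed Normal conversion the text cites as Equation (3.22). Function names are mine:

```python
from math import erf, sqrt

def runs_test_z(n, r, wins, losses):
    """Z score of Equation (1.01): Z = (N*(R-.5)-X) / (X*(X-N)/(N-1))^(1/2),
    where X = 2*W*L."""
    x = 2 * wins * losses
    return (n * (r - 0.5) - x) / sqrt(x * (x - n) / (n - 1))

def confidence_limit(z):
    """Two-tailed confidence limit (fraction of area under the Normal curve
    within |z| standard deviations of the mean)."""
    return erf(abs(z) / sqrt(2))

# The worked example: 12 trades, 8 runs, 6 wins, 6 losses.
z = runs_test_z(n=12, r=8, wins=6, losses=6)
print(round(z, 4))                       # 0.9083
print(round(confidence_limit(1.96), 4))  # 0.95
```

The example's Z score of about .91 corresponds to a confidence limit well under 90%, which is why the text concludes no dependency can be assumed in that sequence.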
As long as the dependency is at an acceptable confidence limit, you can alter your behavior accordingly to make better trading decisions, even though you do not understand the underlying cause of the dependency. If you could know the cause, you could then better estimate when the dependency was in effect and when it was not, as well as when a change in the degree of dependency could be expected. So far, we have only looked at dependency from the point of view of whether the last trade was a winner or a loser. We are trying to determine if the sequence of wins and losses exhibits dependency or not. The runs test for dependency automatically takes the percentage of wins and losses into account. However, in performing the runs test on runs of wins and losses, we have accounted for the sequence of wins and losses but not their size. In order to have true independence, not only must the sequence of the wins and losses be independent, the sizes of the wins and losses within the sequence must also be independent. It is possible for the wins and losses to be independent, yet their sizes to be dependent (or vice versa). One possible solution is to run the runs test on only the winning trades, segregating the runs in some way (such as those that are greater than the median win and those that are less), and then look for dependency among the size of the winning trades. Then do this for the losing trades.

SERIAL CORRELATION

There is a different, perhaps better, way to quantify this possible dependency between the size of the wins and losses. The technique to be discussed next looks at the sizes of wins and losses from an entirely different perspective mathematically than the runs test does, and hence, when used in conjunction with the runs test, measures the relationship of trades with more depth than the runs test alone could provide.
This technique utilizes the linear correlation coefficient, r, sometimes called Pearson's r, to quantify the dependency/independency relationship.

Now look at Figure 1-2. It depicts two sequences that are perfectly correlated with each other. We call this effect positive correlation.

Figure 1-2 Positive correlation (r = +1.00).

Now look at Figure 1-3. It shows two sequences that are perfectly negatively correlated with each other. When one line is zigging the other is zagging. We call this effect negative correlation.

Figure 1-3 Negative correlation (r = -1.00).

The formula for finding the linear correlation coefficient, r, between two sequences, X and Y, is as follows (a bar over a variable, shown here as X[] and Y[], means the arithmetic mean of the variable):

(1.02) r = (∑[a = 1,N](Xa-X[])*(Ya-Y[]))/((∑[a = 1,N](Xa-X[])^2)^(1/2)*(∑[a = 1,N](Ya-Y[])^2)^(1/2))

Here is how to perform the calculation:

1. Average the X's and the Y's (shown as X[] and Y[]).
2. For each period find the difference between each X and the average X and each Y and the average Y.
3. Now calculate the numerator. To do this, for each period multiply the answers from step 2 - in other words, for each period multiply together the differences between that period's X and the average X and between that period's Y and the average Y.
4. Total up all of the answers to step 3 for all of the periods. This is the numerator.
5. Now find the denominator. To do this, take the answers to step 2 for each period, for both the X differences and the Y differences, and square them (they will now all be positive numbers).
6. Sum up the squared X differences for all periods into one final total. Do the same with the squared Y differences.
7. Take the square root of the sum of the squared X differences you just found in step 6. Now do the same with the Y's by taking the square root of the sum of the squared Y differences.
8. Multiply together the two answers you just found in step 7 - that is, multiply together the square root of the sum of the squared X differences by the square root of the sum of the squared Y differences. This product is your denominator.
9. Divide the numerator you found in step 4 by the denominator you found in step 8. This is your linear correlation coefficient, r.

The value for r will always be between +1.00 and -1.00. A value of 0 indicates no correlation whatsoever.

Now look at Figure 1-4. It represents the following sequence of 21 trades:

1, 2, 1, -1, 3, 2, -1, -2, -3, 1, -2, 3, 1, 1, 2, 3, 3, -1, 2, -1, 3

Figure 1-4 Individual outcomes of 21 trades.

We can use the linear correlation coefficient in the following manner to see if there is any correlation between the previous trade and the current trade. The idea here is to treat the trade P&L's as the X values in the formula for r. Superimposed over that we duplicate the same trade P&L's, only this time we skew them by 1 trade and use these as the Y values in the formula for r. In other words, the Y value is the previous X value. (See Figure 1-5.)

Figure 1-5 Individual outcomes of 21 trades skewed by 1 trade.

The calculation runs as follows, with X[] = .8 and Y[] = .7:

A (X)   B (Y)   C (X-X[])   D (Y-Y[])   E (C*D)   F (C^2)   G (D^2)
2       1       1.2         0.3         0.36      1.44      0.09
1       2       0.2         1.3         0.26      0.04      1.69
-1      1       -1.8        0.3         -0.54     3.24      0.09
3       -1      2.2         -1.7        -3.74     4.84      2.89
2       3       1.2         2.3         2.76      1.44      5.29
-1      2       -1.8        1.3         -2.34     3.24      1.69
-2      -1      -2.8        -1.7        4.76      7.84      2.89
-3      -2      -3.8        -2.7        10.26     14.44     7.29
1       -3      0.2         -3.7        -0.74     0.04      13.69
-2      1       -2.8        0.3         -0.84     7.84      0.09
3       -2      2.2         -2.7        -5.94     4.84      7.29
1       3       0.2         2.3         0.46      0.04      5.29
1       1       0.2         0.3         0.06      0.04      0.09
2       1       1.2         0.3         0.36      1.44      0.09
3       2       2.2         1.3         2.86      4.84      1.69
3       3       2.2         2.3         5.06      4.84      5.29
-1      3       -1.8        2.3         -4.14     3.24      5.29
2       -1      1.2         -1.7        -2.04     1.44      2.89
-1      2       -1.8        1.3         -2.34     3.24      1.69
3       -1      2.2         -1.7        -3.74     4.84      2.89
Totals:                                 0.80      73.20     68.20
The averages differ because you only average those X's and Y's that have a corresponding X or Y value (i.e., you average only those values that overlap), so the last Y value (3) and the first X value (1) are not figured into the averages. The numerator is the total of all entries in column E (0.8). To find the denominator, we take the square root of the total in column F (73.20), which is 8.555699, and the square root of the total in column G (68.20), which is 8.258329, and multiply them together to obtain a denominator of 70.65578. We now divide our numerator of 0.8 by our denominator of 70.65578 to obtain .011322. This is our linear correlation coefficient, r. The linear correlation coefficient of .011322 in this case is hardly indicative of anything, but it is pretty much in the range you can expect for most trading systems. High positive correlation (at least .25) generally suggests that big wins are seldom followed by big losses and vice versa. Negative correlation readings (below -.25 to -.30) imply that big losses tend to be followed by big wins and vice versa. The correlation coefficients can be translated, by a technique known as Fisher's Z transformation, into a confidence level for a given number of trades. This topic is treated in Appendix C. Negative correlation is just as helpful as positive correlation. For example, if there appears to be negative correlation and the system has just suffered a large loss, we can expect a large win and would therefore have more contracts on than we ordinarily would. If this trade proves to be a loss, it will most likely not be a large loss (due to the negative correlation). Finally, in determining dependency you should also consider out-of-sample tests. That is, break your data segment into two or more parts. If you see dependency in the first part, then see if that dependency also exists in the second part, and so on.
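The lag-1 calculation just described can be reproduced in a few lines. This sketch (variable names are my own) pairs each trade with the previous trade and applies Equation (1.02):

```python
import math

# The 21 trades from Figure 1-4.
trades = [1, 2, 1, -1, 3, 2, -1, -2, -3, 1, -2, 3, 1, 1, 2, 3, 3, -1, 2, -1, 3]

xs = trades[1:]   # current trade (first trade has no predecessor)
ys = trades[:-1]  # previous trade (last trade is never a "previous" value)

x_bar = sum(xs) / len(xs)  # .8
y_bar = sum(ys) / len(ys)  # .7

numerator = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
denominator = (math.sqrt(sum((x - x_bar) ** 2 for x in xs))
               * math.sqrt(sum((y - y_bar) ** 2 for y in ys)))
r = numerator / denominator  # ≈ .011322
```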
This will help eliminate cases where there appears to be dependency when in fact no dependency exists. Using these two tools (the runs test and the linear correlation coefficient) can help answer many of these questions. However, they can only answer them if you have a high enough confidence limit and/or a high enough correlation coefficient. Most of the time these tools are of little help, because all too often the universe of futures system trades is dominated by independency. If you get readings indicating dependency, and you want to take advantage of it in your trading, you must go back and incorporate a rule in your trading logic to exploit the dependency. In other words, you must go back and change the trading system logic to account for this dependency (i.e., by passing certain trades or breaking up the system into two different systems, such as one for trades after wins and one for trades after losses). Thus, we can state that if dependency shows up in your trades, you haven't maximized your system. In other words, dependency, if found, should be exploited (by changing the rules of the system to take advantage of the dependency) until it no longer appears to exist. The first stage in money management is therefore to exploit, and hence remove, any dependency in trades. For more on dependency than was covered in Portfolio Management Formulas and reiterated here, see Appendix C, "Further on Dependency: The Turning Points and Phase Length Tests."

We have been discussing dependency in the stream of trade profits and losses. You can also look for dependency between an indicator and the subsequent trade, or between any two variables. For more on these concepts, the reader is referred to the section on statistical validation of a trading system under "The Binomial Distribution" in Appendix B.

As traders we must generally assume that dependency does not exist in the marketplace for the majority of market systems. That is, when trading a given market system, we will usually be operating in an environment where the outcome of the next trade is not predicated upon the outcome(s) of prior trade(s).
That is not to say that there is never dependency between trades for some market systems (because for some market systems dependency does exist), only that we should act as though dependency does not exist unless there is very strong evidence to the contrary. Such would be the case if the Z score and the linear correlation coefficient indicated dependency, and the dependency held up across markets and across optimizable parameter values. If we act as though there is dependency when the evidence is not overwhelming, we may well just be fooling ourselves and causing more self-inflicted harm than good as a result. Even if a system showed dependency to a 95% confidence limit for all values of a parameter, it still is hardly a high enough confidence limit to assume that dependency does in fact exist between the trades of a given market or system. A type I error is committed when we reject an hypothesis that should be accepted. If, however, we accept an hypothesis when it should be rejected, we have committed a type II error. Absent knowledge of whether an hypothesis is correct or not, we must decide on the penalties associated with a type I and type II error. Sometimes one type of error is more serious than the other, and in such cases we must decide whether to accept or reject an unproven hypothesis based on the lesser penalty. Suppose you are considering using a certain trading system, yet you're not extremely sure that it will hold up when you go to trade it real-time. Here, the hypothesis is that the trading system will hold up real-time. You decide to accept the hypothesis and trade the system. If it does not hold up, you will have committed a type II error, and you will pay the penalty in terms of the losses you have incurred trading the system real-time. On the other hand, if you choose to not trade the system, and it is profitable, you will have committed a type I error. In this instance, the penalty you pay is in forgone profits. 
Which is the lesser penalty to pay? Clearly it is the latter, the forgone profits of not trading the system. Although from this example you can conclude that if you're going to trade a system real-time it had better be profitable, there is an ulterior motive for using this example. If we assume there is dependency, when in fact there isn't, we will have committed a type II error. Again, the penalty we pay will not be in forgone profits, but in actual losses. However, if we assume there is not dependency when in fact there is, we will have committed a type I error and our penalty will be in forgone profits. Clearly, we are better off paying the penalty of forgone profits than undergoing actual losses. Therefore, unless there is absolutely overwhelming evidence of dependency, you are much better off assuming that the profits and losses in trading (whether with a mechanical system or not) are independent of prior outcomes. There seems to be a paradox presented here. First, if there is dependency in the trades, then the system is suboptimal. Yet dependency can never be proven beyond a doubt. Now, if we assume and act as though there is dependency (when in fact there isn't), we have committed a more expensive error than if we assume and act as though dependency does not exist (when in fact it does). For instance, suppose we have a system with a history of 60 trades, and suppose we see dependency to a confidence level of 95% based on the runs test. We want our system to be optimal, so we adjust its rules accordingly to exploit this apparent dependency. After we have done so, say we are left with 40 trades, and dependency no longer is apparent. We are therefore satisfied that the system rules are optimal. These 40 trades will now have a higher optimal f than the entire 60 (more on optimal f later in this chapter).
If you go and trade this system with the new rules to exploit the dependency, and the higher concomitant optimal f, and if the dependency is not present, your performance will be closer to that of the 60 trades, rather than the superior 40 trades. Thus, the f you have chosen will be too far to the right, resulting in a big price to pay on your part for assuming dependency. If dependency is there, then you will be closer to the peak of the f curve by assuming that the dependency is there. Had you decided not to assume it when in fact there was dependency, you would tend to be to the left of the peak of the f curve, and hence your performance would be suboptimal (but a lesser price to pay than being to the right of the peak). In a nutshell, look for dependency. If it shows to a high enough degree across parameter values and markets for that system, then alter the system rules to capitalize on the dependency. Otherwise, in the absence of overwhelming statistical evidence of dependency, assume that it does not exist, (thus opting to pay the lesser penalty if in fact dependency does exist). MATHEMATICAL EXPECTATION By the same token, you are better off not to trade unless there is absolutely overwhelming evidence that the market system you are contemplating trading will be profitable-that is, unless you fully expect the market system in question to have a positive mathematical expectation when you trade it realtime. Mathematical expectation is the amount you expect to make or lose, on average, each bet. In gambling parlance this is sometimes known as the player's edge (if positive to the player) or the house's advantage (if negative to the player): (1.03) Mathematical Expectation = ∑[i = 1,N](Pi*Ai) where P = Probability of winning or losing. A = Amount won or lost. N = Number of possible outcomes. The mathematical expectation is computed by multiplying each possible gain or loss by the probability of that gain or loss and then summing these products together. 
Let's look at the mathematical expectation for a game where you have a 50% chance of winning $2 and a 50% chance of losing $1 under this formula:

Mathematical Expectation = (.5*2)+(.5*(-1)) = 1+(-.5) = .5

In such an instance, of course, your mathematical expectation is to win 50 cents per toss on average. Consider betting on one number in roulette, where your mathematical expectation is:

ME = ((1/38)*35)+((37/38)*(-1)) = (.02631578947*35)+(.9736842105*(-1)) = (.9210526315)+(-.9736842105) = -.05263157903

Here, if you bet $1 on one number in roulette (American double-zero) you would expect to lose, on average, 5.26 cents per roll. If you bet $5, you would expect to lose, on average, 26.3 cents per roll. Notice that different amounts bet have different mathematical expectations in terms of amounts, but the expectation as a percentage of the amount bet is always the same. The player's expectation for a series of bets is the total of the expectations for the individual bets. So if you go play $1 on a number in roulette, then $10 on a number, then $5 on a number, your total expectation is:

ME = (-.0526*1)+(-.0526*10)+(-.0526*5) = -.0526-.526-.263 = -.8416

You would therefore expect to lose, on average, 84.16 cents. This principle explains why systems that try to change the sizes of their bets relative to how many wins or losses have been seen (assuming an independent trials process) are doomed to fail. The summation of negative expectation bets is always a negative expectation! The most fundamental point that you must understand in terms of money management is that in a negative expectation game, there is no money-management scheme that will make you a winner. If you continue to bet, regardless of how you manage your money, it is almost certain that you will be a loser, losing your entire stake no matter how large it was to start. This axiom is not only true of a negative expectation game, it is true of an even-money game as well.
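As a sketch (the function name is my own), Equation (1.03) applied to the two games above:

```python
def math_expectation(outcomes):
    """Equation (1.03): sum over all possible outcomes of probability * amount."""
    return sum(p * a for p, a in outcomes)

coin_game = math_expectation([(.5, 2), (.5, -1)])       # .5 per toss
roulette = math_expectation([(1/38, 35), (37/38, -1)])  # about -.0526 per $1 bet

# Expectation for a $1, then $10, then $5 bet is the sum of the individual
# expectations: about -.8421 (the text's -.8416 uses the rounded -.0526).
series = roulette * (1 + 10 + 5)
```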
Therefore, the only game you have a chance at winning in the long run is a positive arithmetic expectation game. Then, you can only win if you either always bet the same constant bet size or bet with an f value less than the f value corresponding to the point where the geometric mean HPR is less than or equal to 1. (We will cover the second part of this, regarding the geometric mean HPR, later on in the text.) - 13 - This axiom is true only in the absence of an upper absorbing barrier. For example, let's assume a gambler who starts out with a $100 stake who will quit playing if his stake grows to $101. This upper target of $101 is called an absorbing barrier. Let's suppose our gambler is always betting $1 per play on red in roulette. Thus, he has a slight negative mathematical expectation. The gambler is far more likely to see his stake grow to $101 and quit than he is to see his stake go to zero and be forced to quit. If, however, he repeats this process over and over, he will find himself in a negative mathematical expectation. If he intends on playing this game like this only once, then the axiom of going broke with certainty, eventually, does not apply. The difference between a negative expectation and a positive one is the difference between life and death. It doesn't matter so much how positive or how negative your expectation is; what matters is whether it is positive or negative. So before money management can even be considered, you must have a positive expectancy game. If you don't, all the money management in the world cannot save you 1. On the other hand, if you have a positive expectation, you can, through proper money management, turn it into an exponential growth function. It doesn't even matter how marginally positive the expectation is! In other words, it doesn't so much matter how profitable your trading system is on a 1 contract basis, so long as it is profitable, even if only marginally so. 
If you have a system that makes $10 per contract per trade (once commissions and slippage have been deducted), you can use money management to make it far more profitable than a system that shows a $1,000 average trade (once commissions and slippage have been deducted). What matters, then, is not how profitable your system has been, but rather how certain it is that the system will show at least a marginal profit in the future. Therefore, the most important preparation a trader can do is to make as certain as possible that he has a positive mathematical expectation in the future. The key to ensuring that you have a positive mathematical expectation in the future is to not restrict your system's degrees of freedom. You want to keep your system's degrees of freedom as high as possible to ensure the positive mathematical expectation in the future. This is accomplished not only by eliminating, or at least minimizing, the number of optimizable parameters, but also by eliminating, or at least minimizing, as many of the system rules as possible. Every parameter you add, every rule you add, every little adjustment and qualification you add to your system diminishes its degrees of freedom. Ideally, you will have a system that is very primitive and simple, and that continually grinds out marginal profits over time in almost all the different markets. Again, it is important that you realize that it really doesn't matter how profitable the system is, so long as it is profitable. The money you will make trading will be made by how effective the money management you employ is. The trading system is simply a vehicle to give you a positive mathematical expectation on which to use money management. Systems that work (show at least a marginal profit) on only one or a few markets, or have different rules or parameters for different markets, probably won't work real-time for very long.
The problem with most technically oriented traders is that they spend too much time and effort having the computer crank out run after run of different rules and parameter values for trading systems. This is the ultimate "woulda, shoulda, coulda" game. It is completely counterproductive. Rather than concentrating your efforts and computer time toward maximizing your trading system profits, direct the energy toward maximizing the certainty level of a marginal profit.

This rule is applicable to trading one market system only. When you begin trading more than one market system, you step into a strange environment where it is possible to include a market system with a negative mathematical expectation as one of the markets being traded and actually have a higher net mathematical expectation than the net mathematical expectation of the group before the inclusion of the negative expectation system! Further, it is possible that the net mathematical expectation for the group with the inclusion of the negative mathematical expectation market system can be higher than the mathematical expectation of any of the individual market systems! For the time being we will consider only one market system at a time, so we must have a positive mathematical expectation in order for the money-management techniques to work.

TO REINVEST TRADING PROFITS OR NOT

Let's call the following system "System A." In it we have 2 trades: the first making 50%, the second losing 40%. If we do not reinvest our returns, we make 10%. If we do reinvest, the same sequence of trades loses 10%.

System A
             No Reinvestment        With Reinvestment
Trade No.    P&L     Cumulative     P&L      Cumulative
                     100                     100
1            +50     150            +50      150
2            -40     110            -60      90

Now let's look at System B, a gain of 15% and a loss of 5%, which also nets out 10% over 2 trades on a nonreinvestment basis, just like System A. But look at the results of System B with reinvestment: Unlike System A, it makes money.

System B
             No Reinvestment        With Reinvestment
Trade No.    P&L     Cumulative     P&L      Cumulative
                     100                     100
1            +15     115            +15      115
2            -5      110            -5.75    109.25

An important characteristic of trading with reinvestment that must be realized is that reinvesting trading profits can turn a winning system into a losing system but not vice versa! A winning system is turned into a losing system in trading with reinvestment if the returns are not consistent enough.

Changing the order or sequence of trades does not affect the final outcome. This is not only true on a nonreinvestment basis, but also true on a reinvestment basis (contrary to most people's misconception).

System A (trades reversed)
             No Reinvestment        With Reinvestment
Trade No.    P&L     Cumulative     P&L      Cumulative
                     100                     100
1            -40     60             -40      60
2            +50     110            +30      90

System B (trades reversed)
             No Reinvestment        With Reinvestment
Trade No.    P&L     Cumulative     P&L      Cumulative
                     100                     100
1            -5      95             -5       95
2            +15     110            +14.25   109.25

As can obviously be seen, the sequence of trades has no bearing on the final outcome, whether viewed on a reinvestment or a nonreinvestment basis. (One side benefit to trading on a reinvestment basis is that the drawdowns tend to be buffered. As a system goes into and through a drawdown period, each losing trade is followed by a trade with fewer and fewer contracts.)

By inspection it would seem you are better off trading on a nonreinvestment basis than you are reinvesting because your probability of winning is greater. However, this is not a valid assumption, because in the real world we do not withdraw all of our profits and make up all of our losses by depositing new cash into an account. Further, the nature of investment or trading is predicated upon the effects of compounding. If we do away with compounding (as in the nonreinvestment basis), we can plan on doing little better in the future than we can today, no matter how successful our trading is between now and then. It is compounding that takes the linear function of account growth and makes it a geometric function. If a system is good enough, the profits generated on a reinvestment basis will be far greater than those generated on a nonreinvestment basis, and that gap will widen as time goes by. If you have a system that can beat the market, it doesn't make any sense to trade it in any other way than to increase your amount wagered as your stake increases.

MEASURING A GOOD SYSTEM FOR REINVESTMENT: THE GEOMETRIC MEAN

So far we have seen how a system can be sabotaged by not being consistent enough from trade to trade. Does this mean we should close up and put our money in the bank? Let's go back to System A, with its first 2 trades. For the sake of illustration we are going to add two winners of 1 point each.

System A
             No Reinvestment        With Reinvestment
Trade No.    P&L     Cumulative     P&L       Cumulative
                     100                      100
1            +50     150            +50       150
2            -40     110            -60       90
3            +1      111            +0.9      90.9
4            +1      112            +0.909    91.809
Percentage of Wins       75%        75%
Avg. Trade               3          -2.04775
Risk/Rew.                1.3        0.86
Std. Dev.                31.88      39.00
Avg. Trade/Std. Dev.     0.09       -0.05

Now let's take System B and add 2 more losers of 1 point each.

System B
             No Reinvestment        With Reinvestment
Trade No.    P&L     Cumulative     P&L        Cumulative
                     100                       100
1            +15     115            +15        115
2            -5      110            -5.75      109.25
3            -1      109            -1.0925    108.1575
4            -1      108            -1.08157   107.0759
Percentage of Wins       25%        25%
Avg. Trade               2          1.768981
Risk/Rew.                2.14       1.89
Std. Dev.                7.68       7.87
Avg. Trade/Std. Dev.     0.26       0.22

Now, if consistency is what we're really after, let's look at a bank account, the perfectly consistent vehicle (relative to trading), paying 1 point per period. We'll call this series System C.

System C
             No Reinvestment        With Reinvestment
Trade No.    P&L     Cumulative     P&L        Cumulative
                     100                       100
1            +1      101            +1         101
2            +1      102            +1.01      102.01
3            +1      103            +1.0201    103.0301
4            +1      104            +1.030301  104.0604
Percentage of Wins       1.00       1.00
Avg. Trade               1          1.015100
Risk/Rew.                Infinite   Infinite
Std. Dev.                0.00       0.01
Avg. Trade/Std. Dev.     Infinite   89.89

Our aim is to maximize our profits under reinvestment trading.
With that as the goal, we can see that our best reinvestment sequence comes from System B. How could we have known that, given only information regarding nonreinvestment trading? By percentage of winning trades? By total dollars? By average trade? The answer to these questions is "no," because answering "yes" would have us trading System A (but this is the solution most futures traders opt for). What if we opted for most consistency (i.e., highest ratio average trade/standard deviation or lowest standard deviation)? How about highest risk/reward or lowest drawdown? These are not the answers either. If they were, we should put our money in the bank and forget about trading. System B has the right mix of profitability and consistency. Systems A and C do not. That is why System B performs the best under reinvestment trading. What is the best way to measure this "right mix"? It turns out there is a formula that will do just that - the geometric mean. This is simply the Nth root of the Terminal Wealth Relative (TWR), where N is the number of periods (trades). The TWR is simply what we've been computing when we figure what the final cumulative amount is under reinvestment. In other words, the TWRs for the three systems we just saw are:

System      TWR
System A    .91809
System B    1.070759
System C    1.040604

Since there are 4 trades in each of these, we take the TWRs to the 4th root to obtain the geometric mean:

System      Geometric Mean
System A    0.978861
System B    1.017238
System C    1.009999

(1.04) TWR = ∏[i = 1,N]HPRi

(1.05) Geometric Mean = TWR^(1/N)

where
N = Total number of trades.
HPR = Holding period returns (equal to 1 plus the rate of return - e.g., an HPR of 1.10 means a 10% return over a given period, bet, or trade).
TWR = The number of dollars of value at the end of a run of periods/bets/trades per dollar of initial investment, assuming gains and losses are allowed to compound.
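Equations (1.04) and (1.05) can be sketched as follows (function names are my own); the HPR lists encode the four trades of each system as 1 plus each percentage return:

```python
def twr(hprs):
    """Equation (1.04): the TWR is the product of the holding period returns."""
    result = 1.0
    for hpr in hprs:
        result *= hpr
    return result

def geometric_mean(hprs):
    """Equation (1.05): the Nth root of the TWR, N = number of periods."""
    return twr(hprs) ** (1.0 / len(hprs))

system_a = [1.50, 0.60, 1.01, 1.01]  # +50%, -40%, +1%, +1%
system_b = [1.15, 0.95, 0.99, 0.99]
system_c = [1.01, 1.01, 1.01, 1.01]
```

Ranking the three systems by geometric mean reproduces the conclusion above: System B first, then System C, then System A (whose geometric mean is below 1, a losing system under reinvestment).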
Here is another way of expressing these variables:

(1.06) TWR = Final Stake/Starting Stake

The geometric mean (G) equals your growth factor per play, or:

(1.07) G = (Final Stake/Starting Stake)^(1/Number of Plays)

Think of the geometric mean as the "growth factor per play" of your stake. The system or market with the highest geometric mean is the system or market that makes the most profit trading on a reinvestment of returns basis. A geometric mean less than one means that the system would have lost money if you were trading it on a reinvestment basis. Investment performance is often measured with respect to the dispersion of returns. Measures such as the Sharpe ratio, Treynor measure, Jensen measure, Vami, and so on, attempt to relate investment performance to dispersion. The geometric mean here can be considered another of these types of measures. However, unlike the other measures, the geometric mean measures investment performance relative to dispersion in the same mathematical form as that in which the equity in your account is affected. Equation (1.04) bears out another point. If you suffer an HPR of 0, you will be completely wiped out, because anything multiplied by zero equals zero. Any big losing trade will have a very adverse effect on the TWR, since it is a multiplicative rather than additive function. Thus we can state that in trading you are only as smart as your dumbest mistake.
Return now to a positive expectation game. We determined at the outset of this discussion that on any given trade, the quantity that a trader puts on can be expressed as a factor, f, between 0 and 1, that represents the trader's quantity with respect to both the perceived loss on the next trade and the trader's total equity. If you know you have an edge over N bets but you do not know which of those N bets will be winners (and for how much), and which will be losers (and for how much), you are best off (in the long run) treating each bet exactly the same in terms of what percentage of your total stake is at risk. This method of always trading a fixed fraction of your stake has shown time and again to be the best staking system. If there is dependency in your trades, where winners beget winners and losers beget losers, or vice versa, you are still best off betting a fraction of your total stake on each bet, but that fraction is no longer fixed. In such a case, the fraction must reflect the effect of this dependency (that is, if you have not yet "flushed" the dependency out of your system by creating system rules to exploit it).

"Wait," you say. "Aren't staking systems foolish to begin with? Haven't we seen that they don't overcome the house advantage, they only increase our total action?" This is absolutely true for a situation with a negative mathematical expectation. For a positive mathematical expectation, it is a different story altogether. In a positive expectancy situation the trader/gambler is faced with the question of how best to exploit the positive expectation.

We have spent the course of this discussion laying the groundwork for this section. We have seen that in order to consider betting or trading a given situation or system you must first determine if a positive mathematical expectation exists.
We have seen that what is seemingly a "good bet" on a mathematical expectation basis (i.e., the mathematical expectation is positive) may in fact not be such a good bet when you consider reinvestment of returns, if you are reinvesting too high a percentage of your winnings relative to the dispersion of outcomes of the system. Reinvesting returns never raises the mathematical expectation as a percentage, although it can raise the mathematical expectation in terms of dollars, which it does geometrically, which is why we want to reinvest. If there is in fact a positive mathematical expectation, however small, the next step is to exploit this positive expectation to its fullest potential. For an independent trials process, this is achieved by reinvesting a fixed fraction of your total stake.2

And how do we find this optimal f? Much work has been done in recent decades on this topic in the gambling community, the most famous and accurate of which is known as the Kelly Betting System. This is actually an application of a mathematical idea developed in early 1956 by John L. Kelly, Jr.3 The Kelly criterion states that we should bet that fixed fraction of our stake (f) which maximizes the growth function G(f):

(1.08) G(f) = P*ln(1+B*f)+(1-P)*ln(1-f)

where f = The optimal fixed fraction.
P = The probability of a winning bet or trade.
B = The ratio of amount won on a winning bet to amount lost on a losing bet.
ln() = The natural logarithm function.

Thus far we have discussed reinvestment of returns in trading whereby we reinvest 100% of our stake on all occasions. Although we know that in order to maximize a potentially profitable situation we must use reinvestment, a 100% reinvestment is rarely the wisest thing to do. Take the case of a fair bet (50/50) on a coin toss. Someone is willing to pay you $2 if you win the toss but will charge you $1 if you lose. Our mathematical expectation is .5. In other words, you would expect to make 50 cents per toss, on average.
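The growth function in Equation (1.08) can be checked numerically. A minimal sketch in Python, maximizing G(f) by a brute-force grid (the grid search is my own illustration, not part of the text), applied to the 2:1 coin toss just described:

```python
import math

def growth(f, p, b):
    """Equation (1.08): G(f) = P*ln(1+B*f) + (1-P)*ln(1-f)."""
    return p * math.log(1.0 + b * f) + (1.0 - p) * math.log(1.0 - f)

def best_f(p, b):
    """Grid-search f over .01, .02, ..., .99 for the maximum of G(f)."""
    candidates = [i / 100.0 for i in range(1, 100)]
    return max(candidates, key=lambda f: growth(f, p, b))

# 2:1 coin toss: win 2 units with probability .5, lose 1 unit otherwise.
f = best_f(0.5, 2.0)
print(f)                      # the Kelly fraction for this game
print(growth(f, 0.5, 2.0))    # positive: the stake grows at this f
```

Note that G(f) is the expected logarithm of the growth per play, so the f that maximizes it is the same f that maximizes the geometric mean.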
This is true of the first toss and all subsequent tosses, provided you do not step up the amount you are wagering. But in an independent trials process this is exactly what you should do. As you win you should commit more and more to each toss. Suppose you begin with an initial stake of one dollar. Now suppose you win the first toss and are paid two dollars. Since you had your entire stake ($1) riding on the last bet, you bet your entire stake (now $3) on the next toss as well. However, this next toss is a loser and your entire $3 stake is gone. You have lost your original $1 plus the $2 you had won. If you had won the last toss, it would have paid you $6, since you had three $1 bets on it. The point is that if you are betting 100% of your stake, you'll be wiped out as soon as you encounter a losing wager, an inevitable event.

If we were to replay the previous scenario and you had bet on a nonreinvestment basis (i.e., constant bet size) you would have made $2 on the first bet and lost $1 on the second. You would now be net ahead $1 and have a total stake of $2. Somewhere between these two scenarios lies the optimal betting approach for a positive expectation.

However, we should first discuss the optimal betting strategy for a negative expectation game. When you know that the game you are playing has a negative mathematical expectation, the best bet is no bet. Remember, there is no money-management strategy that can turn a losing game into a winner. However, if you must bet on a negative expectation game, the next best strategy is the maximum boldness strategy. In other words, you want to bet on as few trials as possible (as opposed to a positive expectation game, where you want to bet on as many trials as possible). The more trials, the greater the likelihood that the positive expectation will be realized, and hence the greater the likelihood that betting on the negative expectation side will lose.
Therefore, the negative expectation side has a lesser and lesser chance of losing as the length of the game is shortened.

2 For a dependent trials process, just as for an independent trials process, the idea of betting a proportion of your total stake also yields the greatest exploitation of a positive mathematical expectation. However, in a dependent trials process you optimally bet a variable fraction of your total stake, the exact fraction for each individual bet being determined by the probabilities and payoffs involved for each individual bet. This is analogous to trading a dependent trials process as two separate market systems.

3 Kelly, J. L., Jr., A New Interpretation of Information Rate, Bell System Technical Journal, pp. 917-926, July, 1956.

As it turns out, for an event with two possible outcomes, this optimal f4 can be found quite easily with the Kelly formulas.

KELLY FORMULAS

Beginning around the late 1940s, Bell System engineers were working on the problem of data transmission over long-distance lines. The problem facing them was that the lines were subject to seemingly random, unavoidable "noise" that would interfere with the transmission. Some rather ingenious solutions were proposed by engineers at Bell Labs. Oddly enough, there are great similarities between this data communications problem and the problem of geometric growth as it pertains to gambling money management (as both problems are the product of an environment of favorable uncertainty). One of the outgrowths of these solutions is the first Kelly formula. The first equation here is:

(1.09a) f = 2*P-1

or

(1.09b) f = P-Q

where f = The optimal fixed fraction.
P = The probability of a winning bet or trade.
Q = The probability of a loss (or the complement of P, equal to 1-P).

Both forms of Equation (1.09) are equivalent. Equation (1.09a) or (1.09b) will yield the correct answer for optimal f provided the quantities are the same for both wins and losses.
As an example, consider the following stream of bets:

-1, +1, +1, -1, -1, +1, +1, +1, +1, -1

There are 10 bets, 6 winners, hence:

f = (.6*2)-1 = 1.2-1 = .2

If the winners and losers were not all the same size, then this formula would not yield the correct answer. Such a case would be our two-to-one coin-toss example, where all of the winners were for 2 units and all of the losers for 1 unit. For this situation the Kelly formula is:

(1.10a) f = ((B+1)*P-1)/B

where f = The optimal fixed fraction.
P = The probability of a winning bet or trade.
B = The ratio of amount won on a winning bet to amount lost on a losing bet.

In our two-to-one coin-toss example:

f = ((2+1)*.5-1)/2 = (3*.5-1)/2 = (1.5-1)/2 = .5/2 = .25

This formula will yield the correct answer for optimal f provided all wins are always for the same amount and all losses are always for the same amount. If this is not so, then this formula will not yield the correct answer. The Kelly formulas are applicable only to outcomes that have a Bernoulli distribution. A Bernoulli distribution is a distribution with two possible, discrete outcomes. Gambling games very often have a Bernoulli distribution. The two outcomes are how much you make when you win, and how much you lose when you lose. Trading, unfortunately, is not this simple. To apply the Kelly formulas to a non-Bernoulli distribution of outcomes (such as trading) is a mistake. The result will not be the true optimal f. For more on the Bernoulli distribution, consult Appendix B.

Consider the following sequence of bets/trades:

+9, +18, +7, +1, +10, -5, -3, -17, -7

Since this is not a Bernoulli distribution (the wins and losses are of different amounts), the Kelly formula is not applicable. However, let's try it anyway and see what we get. Since 5 of the 9 events are profitable, then P = .555. Now let's take averages of the wins and losses to calculate B (here is where so many traders go wrong).4 The average win is 9, and the average loss is 8. Therefore we say that B = 1.125. Plugging in the values we obtain:

f = ((1.125+1)*.555-1)/1.125 = (2.125*.555-1)/1.125 = (1.179375-1)/1.125 = .179375/1.125 = .159444444

So we say f = .16. You will see later in this chapter that this is not the optimal f. The optimal f for this sequence of trades is .24. Applying the Kelly formula when all wins are not for the same amount and/or all losses are not for the same amount is a mistake, for it will not yield the optimal f.

Notice that the numerator in this formula equals the mathematical expectation for an event with two possible outcomes as defined earlier. Therefore, we can say that as long as all wins are for the same amount and all losses are for the same amount (whether or not the amount that can be won equals the amount that can be lost), the optimal f is:

(1.10b) f = Mathematical Expectation/B

where f = The optimal fixed fraction.
B = The ratio of amount won on a winning bet to amount lost on a losing bet.

The mathematical expectation is defined in Equation (1.03), but since we must have a Bernoulli distribution of outcomes we must make certain in using Equation (1.10b) that we only have two possible outcomes. Equation (1.10a) is the most commonly seen of the forms of Equation (1.10) (which are all equivalent). However, the formula can be reduced to the following simpler form:

(1.10c) f = P-Q/B

where f = The optimal fixed fraction.
P = The probability of a winning bet or trade.
Q = The probability of a loss (or the complement of P, equal to 1-P).

4 As used throughout the text, f is always lowercase and in roman type. It is not to be confused with the universal constant, F, equal to 4.669201609…, pertaining to bifurcations in chaotic systems.

FINDING THE OPTIMAL F BY THE GEOMETRIC MEAN

In trading we can count on our wins being for varying amounts and our losses being for varying amounts. Therefore the Kelly formulas could not give us the correct optimal f.
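The Kelly computations above can be reproduced directly. A sketch in Python (the book gives no code; the numbers are the text's own examples):

```python
def kelly(p, b):
    """Equation (1.10a): f = ((B+1)*P - 1)/B.
    With B = 1 this reduces to Equation (1.09a), f = 2*P - 1."""
    return ((b + 1.0) * p - 1.0) / b

# Ten even-money bets with 6 winners: f = .2
print(kelly(0.6, 1.0))

# Two-to-one coin toss: f = .25
print(kelly(0.5, 2.0))

# The mistaken application: averaging the wins (9) and losses (8) of the
# non-Bernoulli stream +9,+18,+7,+1,+10,-5,-3,-17,-7 gives B = 1.125 and
# P = .555, hence f = .1594... -- which, as the text shows, is NOT optimal.
print(kelly(0.555, 1.125))
```

The reduced form (1.10c), f = P - Q/B, gives identical results, since ((B+1)*P - 1)/B = P + (P - 1)/B = P - Q/B.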
How then can we find our optimal f to know how many contracts to have on and have it be mathematically correct? Here is the solution. To begin with, we must amend our formula for finding HPRs to incorporate f:

(1.11) HPR = 1+f*(-Trade/Biggest Loss)

where f = The value we are using for f.
-Trade = The profit or loss on a trade (with the sign reversed so that losses are positive numbers and profits are negative).
Biggest Loss = The P&L that resulted in the biggest loss. (This should always be a negative number.)

And again, TWR is simply the geometric product of the HPRs, and the geometric mean (G) is simply the Nth root of the TWR.

(1.12) TWR = ∏[i = 1,N](1+f*(-Tradei/Biggest Loss))

(1.13) G = (∏[i = 1,N](1+f*(-Tradei/Biggest Loss)))^(1/N)

where f = The value we are using for f.
-Tradei = The profit or loss on the ith trade (with the sign reversed so that losses are positive numbers and profits are negative).
Biggest Loss = The P&L that resulted in the biggest loss. (This should always be a negative number.)
N = The total number of trades.
G = The geometric mean of the HPRs.

By looping through all values for f between .01 and 1, we can find that value for f which results in the highest TWR. This is the value for f that would provide us with the maximum return on our money trading fixed fraction. We can also state that the optimal f is the f that yields the highest geometric mean. It matters not whether we look for the highest TWR or the highest geometric mean, as both are maximized at the same value for f. Doing this with a computer is easy, since both the TWR curve and the geometric mean curve are smooth with only one peak. You simply loop from f = .01 to f = 1.0 by .01. As soon as you get a TWR that is less than the previous TWR, you know that the f corresponding to the previous TWR is the optimal f. You can employ many other search algorithms to facilitate this process of finding the optimal f in the range of 0 to 1.
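Before turning to fancier search methods, the simple loop just described (Equations (1.11) through (1.13)) can be sketched in Python, using the chapter's nine-trade sequence:

```python
def twr_for_f(f, trades):
    """Equation (1.12): TWR = product of HPRs, HPR = 1 + f*(-trade/biggest loss)."""
    biggest_loss = min(trades)   # the largest losing trade (here -17)
    product = 1.0
    for trade in trades:
        product *= 1.0 + f * (-trade / biggest_loss)
    return product

def optimal_f(trades, step=0.01):
    """Loop f = .01, .02, ..., 1.0 and keep the f yielding the highest TWR."""
    candidates = [i * step for i in range(1, int(round(1.0 / step)) + 1)]
    return max(candidates, key=lambda f: twr_for_f(f, trades))

trades = [9, 18, 7, 1, 10, -5, -3, -17, -7]
f = optimal_f(trades)                              # .24 for this sequence,
print(f)                                           # not the misapplied Kelly's .16
print(twr_for_f(f, trades) ** (1.0 / len(trades)))  # Equation (1.13): geometric mean
print(min(trades) / -f)    # dollars per unit: biggest loss / -f, about $71
```

Note that because the TWR curve has a single peak, the grid search and the "stop at the first decline" loop described above find the same f.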
One of the fastest ways is with the parabolic interpolation search procedure detailed in Portfolio Management Formulas.

TO SUMMARIZE THUS FAR

You have seen that a good system is the one with the highest geometric mean. Yet to find the geometric mean you must know f. You may find this confusing. Here now is a summary and clarification of the process:

1. Take the trade listing of a given market system.
2. Find the optimal f, either by testing various f values from 0 to 1 or through iteration. The optimal f is that which yields the highest TWR.
3. Once you have found f, take the Nth root of the TWR that corresponds to your f, where N is the total number of trades. This is your geometric mean for this market system. You can now use this geometric mean to make apples-to-apples comparisons with other market systems, as well as use the f to know how many contracts to trade for that particular market system.

Once the optimal f is found, it can readily be turned into a dollar amount by dividing the biggest loss by the negative optimal f. For example, if our biggest loss is $100 and our optimal f is .25, then -$100/-.25 = $400. In other words, we should bet 1 unit for every $400 we have in our stake.

If you're having trouble with some of these concepts, try thinking in terms of betting in units, not dollars (e.g., one $5 chip or one futures contract or one 100-share unit of stock). The number of dollars you allocate to each unit is calculated by dividing your biggest loss by the negative optimal f.

The optimal f is a result of the balance between a system's profit-making ability (on a constant 1-unit basis) and its risk (on a constant 1-unit basis). Most people think that the optimal fixed fraction is the percentage of your total stake to bet. This is absolutely false. There is an interim step involved. Optimal f is not in itself the percentage of your total stake to bet; it is the divisor of your biggest loss.
The quotient of this division is what you divide your total stake by to know how many bets to make or contracts to have on.

You will also notice that margin has nothing whatsoever to do with the mathematically optimal number of contracts to have on. Margin doesn't matter, because the sizes of individual profits and losses are not the product of the amount of money put up as margin (they would be the same whatever the size of the margin). Rather, the profits and losses are the product of the exposure of 1 unit (1 futures contract). The amount put up as margin is further made meaningless in a money-management sense, because the size of the loss is not limited to the margin.

Most people incorrectly believe that the f curve is a straight line rising up and to the right, because that would mean that the more you are willing to risk, the more you stand to make. People reason this way because they think that a positive mathematical expectancy is just the mirror image of a negative expectancy. They mistakenly believe that if increasing your total action in a negative expectancy game results in losing faster, then increasing your total action in a positive expectancy game will result in winning faster. This is not true. At some point in a positive expectancy situation, further increasing your total action works against you. That point is a function of both the system's profitability and its consistency (i.e., its geometric mean), since you are reinvesting the returns back into the system.

It is a mathematical fact that when two people face the same sequence of favorable betting or trading opportunities, if one uses the optimal f and the other uses any different money-management system, then the ratio of the optimal f bettor's stake to the other person's stake will increase as time goes on, with higher and higher probability.
In the long run, the optimal f bettor will have infinitely greater wealth than any other money-management system bettor, with a probability approaching 1. Furthermore, if a bettor has the goal of reaching a specified fortune and is facing a series of favorable betting or trading opportunities, the expected time to reach the fortune will be lower (faster) with optimal f than with any other betting system.

Let's go back and reconsider the following sequence of bets (trades):

+9, +18, +7, +1, +10, -5, -3, -17, -7

Recall that we determined earlier in this chapter that the Kelly formula was not applicable to this sequence, because the wins were not all for the same amount and neither were the losses. We also decided to average the wins and average the losses and take these averages as our values into the Kelly formula (as many traders mistakenly do). Doing this we arrived at an f value of .16. It was stated that this is an incorrect application of Kelly, that it would not yield the optimal f. The Kelly formula must be specific to a single bet. You cannot average your wins and losses from trading and obtain the true optimal f using the Kelly formula.

Our highest TWR on this sequence of bets (trades) is obtained at .24, or betting $1 for every $71 in our stake. That is the optimal geometric growth you can squeeze out of this sequence of bets (trades) trading fixed fraction. Let's look at the TWRs at different points along 100 loops through this sequence of bets. At 1 loop through (9 bets or trades), the TWR for f = .16 is 1.085, and for f = .24 it is 1.096. This means that for 1 pass through this sequence of bets an f = .16 made 99% of what an f = .24 would have made.
To continue:

Passes Through    Total Bets or Trades    TWR for f=.24    TWR for f=.16    Percentage Difference
1                 9                       1.096            1.085            1
10                90                      2.494            2.261            9.4
40                360                     38.694           26.132           32.5
100               900                     9313.312         3490.761         62.5

As can be seen, using an f value that we mistakenly figured from Kelly only made 37.5% as much as did our optimal f of .24 after 900 bets or trades (100 cycles through the series of 9 outcomes). In other words, our optimal f of .24, which is only .08 different from .16 (50% beyond it), made almost 267% the profit that f = .16 did after 900 bets! Let's go another 11 cycles through this sequence of trades, so that we now have a total of 999 trades. Now our TWR for f = .16 is 8563.302 (not even what it was for f = .24 at 900 trades) and our TWR for f = .24 is 25,451.045. At 999 trades f = .16 is only 33.6% of f = .24, or f = .24 is 297% of f = .16! As you see, using the optimal f does not appear to offer much advantage over the short run, but over the long run it becomes more and more important. The point is, you must give the program time when trading at the optimal f and not expect miracles in the short run. The more time (i.e., bets or trades) that elapses, the greater the difference between using the optimal f and any other money-management strategy.

GEOMETRIC AVERAGE TRADE

At this point the trader may be interested in figuring his or her geometric average trade - that is, the average garnered per contract per trade assuming profits are always reinvested and fractional contracts can be purchased. This is the mathematical expectation when you are trading on a fixed fractional basis. This figure shows you the effect of losers occurring when you have many contracts on and winners occurring when you have fewer contracts on. In effect, this approximates how a system would have fared per contract per trade doing fixed fraction. (Actually the geometric average trade is your mathematical expectation in dollars per contract per trade.
The geometric mean minus 1 is your mathematical expectation per trade - a geometric mean of 1.025 represents a mathematical expectation of 2.5% per trade, irrespective of size.) Many traders look only at the average trade of a market system to see if it is high enough to justify trading the system. However, they should be looking at the geometric average trade (GAT) in making their decision.

(1.14) GAT = G*(Biggest Loss/-f)

where G = Geometric mean - 1.
f = Optimal fixed fraction.
(and, of course, our biggest loss is always a negative number).

For example, suppose a system has a geometric mean of 1.017238, the biggest loss is $8,000, and the optimal f is .31. Our geometric average trade would be:

GAT = (1.017238-1)*(-$8,000/-.31) = .017238*$25,806.45 = $444.85

WHY YOU MUST KNOW YOUR OPTIMAL F

The graph in Figure 1-6 further demonstrates the importance of using optimal f in fixed fractional trading. Recall our f curve for a 2:1 coin-toss game, which was illustrated in Figure 1-1. Let's increase the winning payout from 2 units to 5 units, as is demonstrated in Figure 1-6. Here your optimal f is .4, or to bet $1 for every $2.50 in your stake. After 20 sequences of +5, -1 (40 bets), your $2.50 stake has grown to $127,482, thanks to optimal f. Now look what happens in this extremely favorable situation if you miss the optimal f by .2. At f values of .6 and .2 you don't make a tenth as much as you do at .4. This particular situation, a 50/50 bet paying 5 to 1, has a mathematical expectation of (5*.5)+(1*(-.5)) = 2, yet if you bet using an f value greater than .8 you lose money.

[Figure 1-6: 20 sequences of +5, -1 - TWR as a function of f.]

Two points must be illuminated here. The first is that whenever we discuss a TWR, we assume that in arriving at that TWR we allowed fractional contracts along the way. In other words, the TWR assumes that you are able to trade 5.4789 contracts if that is called for at some point.
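Equation (1.14) with the text's numbers (geometric mean 1.017238, biggest loss -$8,000, f = .31) can be checked in a few lines. A sketch, not part of the original text:

```python
def geometric_average_trade(geo_mean, biggest_loss, f):
    """Equation (1.14): GAT = (geometric mean - 1) * (biggest loss / -f)."""
    return (geo_mean - 1.0) * (biggest_loss / -f)

gat = geometric_average_trade(1.017238, -8000.0, 0.31)
print(round(gat, 2))   # the text's $444.85 per contract per trade
```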
It is because the TWR calculation allows for fractional contracts that the TWR will always be the same for a given set of trade outcomes regardless of their sequence. You may argue that in real life this is not the case. In real life you cannot trade fractional contracts. Your argument is correct. However, I am allowing the TWR to be calculated this way because in so doing we represent the average TWR for all possible starting stakes. If you require that all bets be for integer amounts, then the amount of the starting stake becomes important. However, if you were to average the TWRs from all possible starting stake values using integer bets only, you would arrive at the same TWR value that we calculate by allowing the fractional bet. Therefore, the TWR value as calculated is more realistic than if we were to constrain it to integer bets only, in that it is representative of the universe of outcomes of different starting stakes.

Furthermore, the greater the equity in the account, the more trading on an integer contract basis will resemble trading on a fractional contract basis. The limit here is an account with an infinite amount of capital, where the integer bet and the fractional bet are for exactly the same amounts. This is interesting in that generally the closer you can stick to optimal f, the better. That is to say that the greater the capitalization of an account, the greater will be the effect of optimal f. Since optimal f will make an account grow at the fastest possible rate, we can state that optimal f will make itself work better and better for you at the fastest possible rate.

The graphs (Figures 1-1 and 1-6) bear out a few more interesting points. The first is that at no other fixed fraction will you make more money than you will at optimal f. In other words, it does not pay to bet $1 for every $2 in your stake in the earlier example of a 5:1 game. In such a case you would make more money if you bet $1 for every $2.50 in your stake.
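The 5:1 coin-toss figures above can be verified numerically; a sketch using Equation (1.10a) and the per-sequence HPRs (the book gives no code; this is an illustration):

```python
def kelly(p, b):
    """Equation (1.10a): f = ((B+1)*P - 1)/B."""
    return ((b + 1.0) * p - 1.0) / b

def twr_after_sequences(f, win, loss, sequences):
    """TWR after `sequences` repetitions of one win then one loss,
    with bet sizes normalized so the loss is 1 unit."""
    per_sequence = (1.0 + f * win) * (1.0 - f * loss)
    return per_sequence ** sequences

f = kelly(0.5, 5.0)                         # .4 for the 50/50, 5:1 game
print(f)
print(twr_after_sequences(f, 5, 1, 20))     # about 127,482 after 40 bets
print(twr_after_sequences(0.2, 5, 1, 20))   # missing optimal f by .2 is costly
print(twr_after_sequences(0.6, 5, 1, 20))   # ...on either side of the peak
```

Note the symmetry here is accidental to this game: both f = .2 and f = .6 give a per-sequence growth of 1.6 versus 1.8 at f = .4, and that small per-play gap compounds into more than a tenfold difference over 40 bets.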
It does not pay to risk more than the optimal f; in fact, you pay a price to do so! Obviously, the greater the capitalization of an account, the more accurately you can stick to optimal f, as the dollars per single contract required are a smaller percentage of the total equity. For example, suppose optimal f for a given market system dictates you trade 1 contract for every $5,000 in an account. If an account starts out with $10,000 in equity, it will need to gain (or lose) 50% before a quantity adjustment is necessary. Contrast this to a $500,000 account, where there would be a contract adjustment for every 1% change in equity. Clearly the larger account can better take advantage of the benefits provided by optimal f than can the smaller account.

Theoretically, optimal f assumes you can trade in infinitely divisible quantities, which is not the case in real life, where the smallest quantity you can trade in is a single contract. In the asymptotic sense this does not matter. But in the real-life integer-bet scenario, a good case could be presented for trading a market system that requires as small a percentage of the account equity as possible, especially for smaller accounts. But there is a trade-off here as well. Since we would be trading in markets that require us to trade in greater multiples than other markets, we would pay greater commissions, execution costs, and slippage. Bear in mind that the amount required per contract in real life is the greater of the initial margin requirement and the dollar amount per contract dictated by the optimal f.

The finer you can cut it (i.e., the more frequently you can adjust the size of the positions you are trading so as to align yourself with what the optimal f dictates), the better off you are. Most accounts would therefore be better off trading the smaller markets. Corn may not seem like a very exciting market to you compared to the S&P's.
Yet for most people the corn market can get awfully exciting if they have a few hundred contracts on. Those who trade stocks or forwards (such as forex traders) have a tremendous advantage here. Since you must calculate your optimal f based on the outcomes (the P&Ls) on a 1-contract (1 unit) basis, you must first decide what 1 unit is in stocks or forex. As a stock trader, say you decide that 1 unit will be 100 shares. You will use the P&L stream generated by trading 100 shares on each and every trade to determine your optimal f. When you go to trade this particular stock (and let's say your system calls for trading 2.39 contracts or units), you will be able to trade the fractional part (the .39 part) by putting on 239 shares. Thus, by being able to trade the fractional part of 1 unit, you are able to take more advantage of optimal f. Likewise for forex traders, who must first decide what 1 contract or unit is. For the forex trader, 1 unit may be one million U.S. dollars or one million Swiss francs.

THE SEVERITY OF DRAWDOWN

It is important to note at this point that the drawdown you can expect with fixed fractional trading, as a percentage retracement of your account equity, historically would have been at least as much as f percent. In other words, if f is .55, then your drawdown would have been at least 55% of your equity (leaving you with 45% at one point). This is so because if you are trading at the optimal f, as soon as your biggest loss was hit, you would experience the drawdown equivalent to f. Again, assuming that f for a system is .55 and assuming that translates into trading 1 contract for every $10,000, this means that your biggest loss was $5,500. As should by now be obvious, when the biggest loss was encountered (again we're speaking historically, of what would have happened), you would have lost $5,500 for each contract you had on, and you would have had 1 contract on for every $10,000 in the account. At that point, your drawdown is 55% of equity.
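The drawdown arithmetic above can be sketched in a few lines. The numbers are the text's (f = .55, one contract per $10,000, biggest loss -$5,500); the function is my own illustration:

```python
def drawdown_from_biggest_loss(equity, dollars_per_contract, biggest_loss):
    """Fraction of equity lost when the biggest loss hits every contract on.
    At optimal f, dollars_per_contract = biggest_loss / -f, so this fraction
    works out to f itself."""
    contracts = equity // dollars_per_contract
    return (contracts * -biggest_loss) / equity

# Text's numbers: f = .55 -> 1 contract per $10,000, biggest loss -$5,500.
print(drawdown_from_biggest_loss(10000.0, 10000.0, -5500.0))   # 0.55
```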
Moreover, the drawdown might continue: The next trade or series of trades might draw your account down even more. Therefore, the better a system, the higher the f. The higher the f, generally the higher the drawdown, since the drawdown (in terms of a percentage) can never be any less than the f as a percentage. There is a paradox involved here in that if a system is good enough to generate an optimal f that is a high percentage, then the drawdown for such a good system will also be quite high. Whereas optimal f allows you to experience the greatest geometric growth, it also gives you enough rope to hang yourself with.

Most traders harbor great illusions about the severity of drawdowns. Further, most people have fallacious ideas regarding the ratio of potential gains to dispersion of those gains. We know that if we are using the optimal f when we are fixed fractional trading, we can expect substantial drawdowns in terms of percentage equity retracements. Optimal f is like plutonium. It gives you a tremendous amount of power, yet it is dreadfully dangerous. These substantial drawdowns are truly a problem, particularly for novices, in that trading at the optimal f level gives them the chance to experience a cataclysmic loss sooner than they ordinarily might have.

Diversification can greatly buffer the drawdowns. This it does, but the reader is warned not to expect it to eliminate drawdown. In fact, the real benefit of diversification is that it lets you get off many more trials, many more plays, in the same time period, thus increasing your total profit. Diversification, although usually the best means by which to buffer drawdowns, does not necessarily reduce drawdowns, and in some instances may actually increase them! Many people have the mistaken impression that drawdown can be completely eliminated if they diversify effectively enough.
To an extent this is true, in that drawdowns can be buffered through effective diversification, but they can never be completely eliminated. Do not be deluded. No matter how good the systems employed are, no matter how effectively you diversify, you will still encounter substantial drawdowns. The reason is that no matter how uncorrelated your market systems are, there comes a period when most or all of the market systems in your portfolio zig in unison against you when they should be zagging. You will have enormous difficulty finding a portfolio with at least 5 years of historical data to it, and with all market systems employing the optimal f, that has had any less than a 30% drawdown in terms of equity retracement! This is regardless of how many market systems you employ. If you want to be in this and do it mathematically correctly, you had better expect to be nailed for 30% to 95% equity retracements. This takes enormous discipline, and very few people can emotionally handle it.

When you dilute f, although you reduce the drawdowns arithmetically, you also reduce the returns geometrically. Why commit funds to futures trading that aren't necessary, simply to flatten out the equity curve at the expense of your bottom-line profits? You can diversify cheaply somewhere else.

Any time a trader deviates from always trading the same constant contract size, he or she encounters the problem of what quantities to trade in. This is so whether the trader recognizes this problem or not. Constant contract trading is not the solution, as you can never experience geometric growth trading constant contract. So, like it or not, the question of what quantity to take on the next trade is inevitable for everyone. To simply select an arbitrary quantity is a costly mistake. Optimal f is factual; it is mathematically correct.

MODERN PORTFOLIO THEORY

Recall the paradox of the optimal f and a market system's drawdown. The better a market system is, the higher the value for f.
Yet the drawdown (historically) if you are trading the optimal f can never be lower than f. Generally speaking, then, the better the market system is, the greater the drawdown will be as a percentage of account equity if you are trading optimal f. That is, if you want to have the greatest geometric growth in an account, then you can count on severe drawdowns along the way.

Effective diversification among other market systems is the most effective way in which this drawdown can be buffered and conquered while still staying close to the peak of the f curve (i.e., without having to trim back to, say, f/2). When one market system goes into a drawdown, another one that is being traded in the account will come on strong, thus canceling the drawdown of the other. This also provides for a catalytic effect on the entire account. The market system that just experienced the drawdown (and now is getting back to performing well) will have no fewer funds to start with than it did when the drawdown began (thanks to the other market system canceling out the drawdown). Diversification won't hinder the upside of a system (quite the reverse - the upside is far greater, since after a drawdown you aren't starting back with fewer contracts), yet it will buffer the downside (but only to a very limited extent).

There exists a quantifiable, optimal portfolio mix given a group of market systems and their respective optimal fs. Although we cannot be certain that the optimal portfolio mix of the past will be optimal in the future, that is more likely than that the optimal system parameters of the past will be optimal or near optimal in the future. Whereas optimal system parameters change quite quickly from one time period to another, optimal portfolio mixes change very slowly (as do optimal f values). Generally, the correlations between market systems tend to remain constant.
This is good news to a trader who has found the optimal portfolio mix, the optimal diversification among market systems.

THE MARKOWITZ MODEL

The basic concepts of modern portfolio theory emanate from a monograph written by Dr. Harry Markowitz.5 Essentially, Markowitz proposed that portfolio management is one of composition, not individual stock selection as is more commonly practiced. Markowitz argued that diversification is effective only to the extent that the correlation coefficient between the markets involved is negative. If we have a portfolio composed of one stock, our best diversification is obtained if we choose another stock such that the correlation between the two stock prices is as low as possible. The net result would be that the portfolio, as a whole (composed of these two stocks with negative correlation), would have less variation in price than either one of the stocks alone.

Markowitz proposed that investors act in a rational manner and, given the choice, would opt for a portfolio with the same return as the one they have but with less risk, or opt for a portfolio with a higher return than the one they have but with the same risk. Further, for a given level of risk there is an optimal portfolio with the highest yield, and likewise for a given yield there is an optimal portfolio with the lowest risk. Investors with a portfolio whose yield could be increased with no resultant increase in risk, or with a portfolio whose risk could be lowered with no resultant decrease in yield, are said to have inefficient portfolios. Figure 1-7 shows all of the available portfolios under a given study. If you hold portfolio C, you would be better off with portfolio A, where you would have the same return with less risk, or portfolio B, where you would have more return with the same risk.
[Figure 1-7: Modern portfolio theory. Reward (roughly 1.090 to 1.130) plotted against risk (roughly 0.290 to 0.330), showing the available portfolios, with portfolios A and C marked.]

In describing this, Markowitz described what is called the efficient frontier. This is the set of portfolios that lie on the upper and left sides of the graph. These are portfolios whose yield can no longer be increased without increasing the risk and whose risk cannot be lowered without lowering the yield. Portfolios lying on the efficient frontier are said to be efficient portfolios. (See Figure 1-8.)

5 Markowitz, H., Portfolio Selection—Efficient Diversification of Investments. Yale University Press, New Haven, Conn., 1959.

[Figure 1-8: The efficient frontier, plotted on the same reward and risk axes.]

Those portfolios lying high and off to the right and low and to the left are generally not very well diversified among very many issues. Those portfolios lying in the middle of the efficient frontier are usually very well diversified. Which portfolio a particular investor chooses is a function of the investor's risk aversion, his or her willingness to assume risk. In the Markowitz model any portfolio that lies upon the efficient frontier is said to be a good portfolio choice, but where on the efficient frontier is a matter of personal preference (later on we'll see that there is an exact optimal spot on the efficient frontier for all investors).

The Markowitz model was originally introduced as applying to a portfolio of stocks that the investor would hold long. Therefore, the basic inputs were the expected returns on the stocks (defined as the expected appreciation in share price plus any dividends), the expected variation in those returns, and the correlations of the different returns among the different stocks.
If we were to transport this concept to futures, it would stand to reason (since futures don't pay any dividends) that we measure the expected price gains, variances, and correlations of the different futures. The question arises, "If we are measuring the correlation of prices, what if we have two systems on the same market that are negatively correlated?" In other words, suppose we have systems A and B. There is a perfect negative correlation between the two. When A is in a drawdown, B is in a drawup and vice versa. Isn't this really an ideal diversification? What we really want to measure, then, is not the correlations of prices of the markets we're using. Rather, we want to measure the correlations of daily equity changes between the different market systems.

Yet this is still an apples-and-oranges comparison. Say that two of the market systems we are going to examine the correlations on are both trading the same market, yet one of the systems has an optimal f corresponding to 1 contract per every $2,000 in account equity and the other system has an optimal f corresponding to 1 contract per every $10,000 in account equity. To overcome this and incorporate the optimal fs of the various market systems under consideration, as well as to account for fixed fractional trading, we convert the daily equity changes for a given market system into daily HPRs. The HPR in this context is how much a particular market system made or lost for a given day on a 1-contract basis relative to what the optimal f for that system is. Here is how this can be solved. Say the market system with an optimal f of $2,000 made $100 on a given day. The HPR then for that market system for that day is 1.05. To find the daily HPR, then:

(1.15) Daily HPR = (A/B)+1

where A = Dollars made or lost that day.
B = Optimal f in dollars.

We begin by converting the daily dollar gains and losses for the market systems we are looking at into daily HPRs relative to the optimal f in dollars for a given market system.
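The conversion in Equation (1.15) can be sketched in a few lines of code; the $2,000 optimal f and $100 gain are the figures from the example in the text:

```python
def daily_hpr(dollars_made_or_lost, optimal_f_in_dollars):
    """Equation (1.15): Daily HPR = (A / B) + 1, where A is the day's
    dollar gain or loss on a 1-contract basis and B is the optimal f
    in dollars for that market system."""
    return dollars_made_or_lost / optimal_f_in_dollars + 1

# The text's example: optimal f of $2,000, a $100 gain on the day.
print(daily_hpr(100, 2_000))   # 1.05
```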
In so doing, we make quantity irrelevant. In the example just cited, where your daily HPR is 1.05, you made 5% that day on that money. This is 5% regardless of whether you had on 1 contract or 1,000 contracts.

Now you are ready to begin comparing different portfolios. The trick here is to compare every possible portfolio combination, from portfolios of 1 market system (for every market system under consideration) to portfolios of N market systems. As an example, suppose you are looking at market systems A, B, and C. Every combination would be:

A
B
C
AB
AC
BC
ABC

But you do not stop there. For each combination you must figure each percentage allocation as well. To do so you will need to have a minimum percentage increment. The following example, continued from the portfolio A, B, C example, illustrates this with a minimum portfolio allocation of 10% (.10):

A    100%
B    100%
C    100%
AB   90% 10%, 80% 20%, 70% 30%, 60% 40%, 50% 50%, 40% 60%, 30% 70%, 20% 80%, 10% 90%
AC   90% 10%, 80% 20%, 70% 30%, 60% 40%, 50% 50%, 40% 60%, 30% 70%, 20% 80%, 10% 90%
BC   90% 10%, 80% 20%, 70% 30%, 60% 40%, 50% 50%, 40% 60%, 30% 70%, 20% 80%, 10% 90%
ABC  80% 10% 10%, 70% 20% 10%, 70% 10% 20%, ..., 10% 30% 60%, 10% 20% 70%, 10% 10% 80%

Now for each CPA we go through each day and compute a net HPR for each day. The net HPR for a given day is the sum of each market system's HPR for that day times its percentage allocation. For example, suppose for systems A, B, and C we are looking at percentage allocations of 10%, 50%, 40% respectively. Further, suppose that the individual HPRs for those market systems for that day are .9, 1.4, and 1.05 respectively. Then the net HPR for this day is:

Net HPR = (.9*.1)+(1.4*.5)+(1.05*.4)
= .09+.7+.42
= 1.21

We must now perform two necessary tabulations. The first is that of the average daily net HPR for each CPA. This comprises the reward or Y axis of the Markowitz model.
The second necessary tabulation is that of the standard deviation of the daily net HPRs for a given CPA; specifically, the population standard deviation. This measure corresponds to the risk or X axis of the Markowitz model. Modern portfolio theory is often called E-V Theory, corresponding to the other names given the two axes. The vertical axis is often called E, for expected return, and the horizontal axis V, for variance in expected returns.

From these first two tabulations we can find our efficient frontier. We have effectively incorporated various markets, systems, and f factors, and we can now see quantitatively which of our CPAs are best (i.e., which CPAs lie along the efficient frontier).

THE GEOMETRIC MEAN PORTFOLIO STRATEGY

Which particular point on the efficient frontier you decide to be on (i.e., which particular efficient CPA) is a function of your own risk-aversion preference, at least according to the Markowitz model. However, there is an optimal point to be at on the efficient frontier, and finding this point is mathematically solvable. If you choose that CPA which shows the highest geometric mean of the HPRs, you will arrive at the optimal CPA! We can estimate the geometric mean from the arithmetic mean HPR and the population standard deviation of the HPRs (both of which are calculations we already have, as they are the X and Y axes for the Markowitz model!). Equations (1.16a) and (1.16b) give us the formula for the estimated geometric mean (EGM). This estimate is very close (usually within four or five decimal places) to the actual geometric mean, and it is acceptable to use the estimated geometric mean and the actual geometric mean interchangeably.

(1.16a) EGM = (AHPR^2-SD^2)^(1/2)
or
(1.16b) EGM = (AHPR^2-V)^(1/2)

where EGM = The estimated geometric mean.
AHPR = The arithmetic average HPR, or the return coordinate of the portfolio.
SD = The standard deviation in HPRs, or the risk coordinate of the portfolio.
V = The variance in HPRs, equal to SD^2.
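The whole procedure so far (enumerate the CPAs at a minimum percentage increment, compute the net daily HPRs, tabulate the AHPR and population standard deviation, then select the CPA with the highest estimated geometric mean) can be sketched as follows. The three daily HPR series are made-up illustration data, not figures from the text:

```python
import statistics
from itertools import product

# Hypothetical daily HPRs for three market systems (illustration only).
daily_hprs = {
    "A": [1.02, 0.98, 1.01, 0.99, 1.03],
    "B": [0.97, 1.04, 1.00, 1.02, 0.98],
    "C": [1.01, 1.00, 0.99, 1.02, 1.00],
}

def cpas(names, step=0.10):
    """Yield every combination of percentage allocations (in `step`
    increments) summing to 100%, from single systems up to all of them."""
    n_steps = round(1 / step)
    for combo in product(range(n_steps + 1), repeat=len(names)):
        if sum(combo) == n_steps:
            yield {name: c * step for name, c in zip(names, combo) if c}

def evaluate(allocation):
    """Return (AHPR, population SD, EGM) for one CPA.  AHPR is the
    reward (Y) axis, SD the risk (X) axis; EGM is Equation (1.16a)."""
    days = len(next(iter(daily_hprs.values())))
    net = [sum(daily_hprs[name][d] * w for name, w in allocation.items())
           for d in range(days)]
    ahpr = statistics.fmean(net)
    sd = statistics.pstdev(net)
    return ahpr, sd, (ahpr ** 2 - sd ** 2) ** 0.5

best = max(cpas(list(daily_hprs)), key=lambda a: evaluate(a)[2])
print(best, evaluate(best))
```

With a 10% increment and three systems this enumerates 66 CPAs, covering the single, two-system, and three-system combinations listed earlier.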
Both forms of Equation (1.16) are equivalent. The CPA with the highest geometric mean is the CPA that will maximize the growth of the portfolio value over the long run; furthermore, it will minimize the time required to reach a specified level of equity.

DAILY PROCEDURES FOR USING OPTIMAL PORTFOLIOS

At this point, there may be some question as to how you implement this portfolio approach on a day-to-day basis. Again an example will be used to illustrate. Suppose your optimal CPA calls for you to be in three different market systems. In this case, suppose the percentage allocations are 10%, 50%, and 40%. If you were looking at a $50,000 account, your account would be "subdivided" into three accounts of $5,000, $25,000, and $20,000 for each market system (A, B, and C) respectively. For each market system's subaccount balance you then figure how many contracts you could trade. Say the f factors dictated the following:

Market system A, 1 contract per $5,000 in account equity.
Market system B, 1 contract per $2,500 in account equity.
Market system C, 1 contract per $2,000 in account equity.

You would then be trading 1 contract for market system A ($5,000/$5,000), 10 contracts for market system B ($25,000/$2,500), and 10 contracts for market system C ($20,000/$2,000). Each day, as the total equity in the account changes, all subaccounts are recapitalized. What is meant here is, suppose this $50,000 account dropped to $45,000 the next day. Since we recapitalize the subaccounts each day, we then have $4,500 for market system subaccount A, $22,500 for market system subaccount B, and $18,000 for market system subaccount C, from which we would trade zero contracts the next day on market system A ($4,500/$5,000 = .9, or, since we always floor to the integer, 0), 9 contracts for market system B ($22,500/$2,500), and 9 contracts for market system C ($18,000/$2,000). You always recapitalize the subaccounts each day regardless of whether there was a profit or a loss.
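The daily recapitalization arithmetic above can be sketched directly, using the $50,000 account, the 10%/50%/40% allocations, and the f values from the example (floor-to-integer contract counts):

```python
import math

def contracts(total_equity, allocations, f_in_dollars):
    """Recapitalize each subaccount from total equity, then floor each
    subaccount balance divided by its f in dollars to whole contracts."""
    return [math.floor(total_equity * alloc / f)
            for alloc, f in zip(allocations, f_in_dollars)]

allocs = [0.10, 0.50, 0.40]            # market systems A, B, C
fs = [5_000, 2_500, 2_000]             # optimal f in dollars for A, B, C

print(contracts(50_000, allocs, fs))   # [1, 10, 10]
print(contracts(45_000, allocs, fs))   # [0, 9, 9]  (A floors .9 down to 0)
```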
Do not be confused. Subaccount, as used here, is a mental construct. Another way of doing this that will give us the same answers and that is perhaps easier to understand is to divide a market system's optimal f amount by its percentage allocation. This gives us a dollar amount that we then divide the entire account equity by to know how many contracts to trade. Since the account equity changes daily, we recapitalize this daily to the new total account equity. In the example we have cited, market system A, at an f value of 1 contract per $5,000 in account equity and a percentage allocation of 10%, yields 1 contract per $50,000 in total account equity ($5,000/.10). Market system B, at an f value of 1 contract per $2,500 in account equity and a percentage allocation of 50%, yields 1 contract per $5,000 in total account equity ($2,500/.50). Market system C, at an f value of 1 contract per $2,000 in account equity and a percentage allocation of 40%, yields 1 contract per $5,000 in total account equity ($2,000/.40).

Thus, if we had $50,000 in total account equity, we would trade 1 contract for market system A, 10 contracts for market system B, and 10 contracts for market system C. Tomorrow we would do the same thing. Say our total account equity got up to $59,000. In this case, dividing $59,000 by $50,000 yields 1.18, which floored to the integer is 1, so we would trade 1 contract for market system A tomorrow. For market system B, we would trade 11 contracts ($59,000/$5,000 = 11.8, which floored to the integer = 11). For market system C we would also trade 11 contracts, since market system C also trades 1 contract for every $5,000 in total account equity.

Suppose we have a trade on from market system C yesterday and we are long 10 contracts. We do not need to go in and add another today to bring us up to 11 contracts. Rather, the amounts we are calculating using the equity as of the most recent close mark-to-market are for new positions only.
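The divide-by-allocation shortcut just described can be sketched as follows; it reproduces the same contract counts at $50,000 and $59,000 of total equity:

```python
import math

allocs = {"A": 0.10, "B": 0.50, "C": 0.40}
f_dollars = {"A": 5_000, "B": 2_500, "C": 2_000}

# Optimal f in dollars divided by percentage allocation gives dollars of
# *total* account equity per contract: A -> $50,000, B and C -> $5,000.
per_contract = {name: f_dollars[name] / allocs[name] for name in allocs}

for equity in (50_000, 59_000):
    counts = {name: math.floor(equity / d) for name, d in per_contract.items()}
    print(equity, counts)
# $50,000 -> A: 1, B: 10, C: 10
# $59,000 -> A: 1, B: 11, C: 11
```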
So for tomorrow, since we have 10 contracts on, if we get stopped out of this trade (or exit it on a profit target), we will be going 11 contracts on a new trade if one should occur. Determining our optimal portfolio using the daily HPRs means that we should go in and alter our positions on a day-by-day rather than a trade-by-trade basis, but this really isn't necessary unless you are trading a longer-term system, and then it may not be beneficial to adjust your position size on a day-by-day basis due to increased transaction costs. In a pure sense, you should adjust your positions on a day-by-day basis. In real life, you are usually almost as well off to alter them on a trade-by-trade basis, with little loss of accuracy.

This matter of implementing the correct daily positions is not such a problem. Recall that in finding the optimal portfolio we used the daily HPRs as input. We should therefore adjust our position size daily (if we could adjust each position at the price it closed at yesterday). In real life this becomes impractical, however, as transaction costs begin to outweigh the benefits of adjusting our positions daily and may actually cost us more than the benefit of adjusting daily. We are usually better off adjusting only at the end of each trade. The fact that the portfolio is temporarily out of balance after day 1 of a trade is a lesser price to pay than the cost of adjusting the portfolio daily.

On the other hand, if we take a position that we are going to hold for a year, we may want to adjust such a position daily rather than adjust it more than a year from now when we take another trade. Generally, though, on longer-term systems such as this we are better off adjusting the position each week, say, rather than each day. The reasoning here again is that the loss in efficiency by having the portfolio temporarily out of balance is less of a price to pay than the added transaction costs of a daily adjustment.
You have to sit down and determine which is the lesser penalty for you to pay, based upon your trading strategy (i.e., how long you are typically in a trade) as well as the transaction costs involved.

How long a time period should you look at when calculating the optimal portfolios? Just like the question, "How long a time period should you look at to determine the optimal f for a given market system?" there is no definitive answer here. Generally, the more back data you use, the better your result should be (i.e., the more closely the near-optimal portfolios of the future should resemble what your study concluded were the near-optimal portfolios). However, correlations do change, albeit slowly. One of the problems with using too long a time period is that there will be a tendency to use what were yesterday's hot markets. For instance, if you ran this program in 1983 over 5 years of back data you would most likely have one of the precious metals show very clearly as being a part of the optimal portfolio. However, the precious metals did very poorly for most trading systems for quite a few years after the 1980-1981 markets. So you see there is a tradeoff between using too much past history and too little in the determination of the optimal portfolio of the future.

Finally, the question arises as to how often you should rerun this entire procedure of finding the optimal portfolio. Ideally you should run it on a continuous basis. However, rarely will the portfolio composition change. Realistically you should probably run it about every 3 months. Even by running this program every 3 months there is still a high likelihood that you will arrive at the same optimal portfolio composition, or one very similar to it, that you arrived at before.

ALLOCATIONS GREATER THAN 100%

Thus far, we have been restricting the sum of the percentage allocations to 100%.
It is quite possible for the sum of the percentage allocations for the portfolio that would result in the greatest geometric growth to exceed 100%. Consider, for instance, two market systems, A and B, that are identical in every respect, except that there is a negative correlation (R<0) between them. Assume that the optimal f, in dollars, for each of these market systems is $5,000. Suppose the optimal portfolio (based on highest geomean) proves to be that portfolio that allocates 50% to each of the two market systems. This would mean that you should trade 1 contract for every $10,000 in equity for market system A and likewise for B. When there is negative correlation, however, it can be shown that the optimal account growth is actually obtained by trading 1 contract for an amount less than $10,000 in equity for market system A and/or market system B. In other words, when there is negative correlation, you can have the sum of the percentage allocations exceed 100%. Further, it is possible, although not too likely, that the individual percentage allocations to the market systems may each exceed 100%.

It is interesting to consider what happens when the correlation between two market systems approaches -1.00. When such an event occurs, the amount to finance trades by for the market systems tends to become infinitesimal. This is so because the portfolio, the net result of the market systems, tends to never suffer a losing day (since an amount lost by a market system on a given day is offset by the same amount being won by a different market system in the portfolio that day). Therefore, with diversification it is possible to have the optimal portfolio allocate a smaller f factor in dollars to a given market system than trading that market system alone would. To accommodate this, you can divide the optimal f in dollars for each market system by the number of market systems you are running.
In our example, rather than inputting $5,000 as the optimal f for market system A, we would input $2,500 (dividing $5,000, the optimal f, by 2, the number of market systems we are going to run), and likewise for market system B. Now when we use this procedure to determine the optimal geomean portfolio as being the one that allocates 50% to A and 50% to B, it means that we should trade 1 contract for every $5,000 in equity for market system A ($2,500/.5) and likewise for B.

You must also make sure to use cash as another market system. This is non-interest-bearing cash, and it has an HPR of 1.00 for every day. Suppose in our previous example that the optimal growth is obtained at 50% in market system A and 40% in market system B. In other words, we would trade 1 contract for every $5,000 in equity for market system A and 1 contract for every $6,250 for B ($2,500/.4). If we were using cash as another market system, this would be a possible combination (showing the optimal portfolio as having the remaining 10% in cash). If we were not using cash as another market system, this combination wouldn't be possible.

If the answer obtained by using this procedure does not include the non-interest-bearing cash as one of the output components, then you must raise the factor you are using to divide the optimal fs in dollars you are using as input. Returning to our example, suppose we used non-interest-bearing cash with the two market systems A and B. Further suppose that our resultant optimal portfolio did not include at least some percentage allocation to non-interest-bearing cash. Instead, suppose that the optimal portfolio turned out to be 60% in market system A and 40% in market system B (or any other percentage combination, so long as the percentage allocations for the two market systems added up to 100%) and 0% allocated to non-interest-bearing cash.
This would mean that even though we divided our optimal fs in dollars by 2, that was not enough. We must instead divide them by a number higher than 2. So we will go back and divide our optimal fs in dollars by 3 or 4, until we get an optimal portfolio which includes a certain percentage allocation to non-interest-bearing cash. This will be the optimal portfolio. Of course, in real life this does not mean that we must actually allocate any of our trading capital to non-interest-bearing cash. Rather, the non-interest-bearing cash was used to derive the optimal amount of funds to allocate for 1 contract to each market system, when viewed in light of each market system's relationship to each other market system.

Be aware that the percentage allocations of the portfolio that would have resulted in the greatest geometric growth in the past can be in excess of 100% and usually are. This is accommodated in this technique by dividing the optimal f in dollars for each market system by a specific integer (which usually is the number of market systems) and including non-interest-bearing cash (i.e., a market system with an HPR of 1.00 every day) as another market system. The correlations of the different market systems can have a profound effect on a portfolio. It is important that you realize that a portfolio can be greater than the sum of its parts (if the correlations of its component parts are low enough). It is also possible that a portfolio may be less than the sum of its parts (if the correlations are too high).

Consider again a coin-toss game, a game where you win $2 on heads and lose $1 on tails. Such a game has a mathematical expectation (arithmetic) of fifty cents. The optimal f is .25, or bet $1 for every $4 in your stake, and results in a geometric mean of 1.0607. Now consider a second game, one where the amount you can win on a coin toss is $.90 and the amount you can lose is $1.10.
Such a game has a negative mathematical expectation of -$.10; thus, there is no optimal f, and therefore no geometric mean either. Consider what happens when we play both games simultaneously. If the second game had a correlation coefficient of 1.0 to the first, that is, if the two coins always came up either both heads or both tails, then the two possible net outcomes would be that we win $2.90 on heads or lose $2.10 on tails. Such a game would have a mathematical expectation then of $.40, an optimal f of .14, and a geometric mean of 1.013. Obviously, this is an inferior approach to just trading the positive mathematical expectation game.

Now assume that the games are negatively correlated. That is, when the coin on the game with the positive mathematical expectation comes up heads, we lose the $1.10 of the negative expectation game and vice versa. Thus, the net of the two games is a win of $.90 if the coins come up heads and a loss of $.10 if the coins come up tails. The mathematical expectation is still $.40, yet the optimal f is .44, which yields a geometric mean of 1.67. Recall that the geometric mean is the growth factor on your stake on average per play. This means that on average in this game we would expect to make more than 10 times as much per play as in the outright positive mathematical expectation game. Yet this result is obtained by taking that positive mathematical expectation game and combining it with a negative expectation game. The reason for the dramatic difference in results is the negative correlation between the two market systems. Here is an example where the portfolio is greater than the sum of its parts.

Yet it is also important to bear in mind that your drawdown, historically, would have been at least as high as f percent in terms of percentage of equity retraced. In real life, you should expect that in the future it will be higher than this.
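The three coin-toss games above (the outright $2/-$1 game, the perfectly correlated combination, and the negatively correlated combination) can be checked with a brute-force search over f. The outcome lists come from the text; the grid search itself is my own sketch, not the author's procedure:

```python
def geomean(outcomes, f):
    """Geometric mean HPR at fraction f, for equally likely outcomes,
    where each HPR = 1 + f * outcome / |biggest loss|."""
    worst = abs(min(outcomes))
    prod = 1.0
    for o in outcomes:
        prod *= 1 + f * o / worst
    return prod ** (1 / len(outcomes))

def optimal_f(outcomes):
    """Brute-force f over a fine grid; returns (f, geometric mean)."""
    best = max((i / 10_000 for i in range(1, 10_000)),
               key=lambda f: geomean(outcomes, f))
    return best, geomean(outcomes, best)

print(optimal_f([2.0, -1.0]))    # outright game: f ~ .25, geomean ~ 1.0607
print(optimal_f([2.9, -2.1]))    # correlation +1: f ~ .14, geomean ~ 1.013
print(optimal_f([0.9, -0.1]))    # correlation -1: f ~ .44, geomean ~ 1.67
```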
An f of .44 means that the combination of the two market systems, even though they are negatively correlated, would have resulted in at least a 44% equity retracement. This is higher than the outright positive mathematical expectation game, which had an optimal f of .25, and therefore a minimum historical drawdown of at least a 25% equity retracement. The moral is clear. Diversification, if done properly, is a technique that increases returns. It does not necessarily reduce worst-case drawdowns. This is absolutely contrary to the popular notion. Diversification will buffer many of the little pullbacks from equity highs, but it does not reduce worst-case drawdowns. Further, as we have seen with optimal f, drawdowns are far greater than most people imagine. Therefore, even if you are very well diversified, you must still expect substantial equity retracements.

However, let's go back and look at the results if the correlation coefficient between the two games were 0. In such a game, whatever the result of one toss was would have no bearing on the result of the other toss. Thus, there are four possible outcomes:

Game 1            Game 2            Net
Outcome  Amount   Outcome  Amount   Outcome  Amount
Win      $2.00    Win      $.90     Win      $2.90
Win      $2.00    Lose     -$1.10   Win      $.90
Lose     -$1.00   Win      $.90     Lose     -$.10
Lose     -$1.00   Lose     -$1.10   Lose     -$2.10

The mathematical expectation is thus:

ME = 2.9*.25+.9*.25-.1*.25-2.1*.25
= .725+.225-.025-.525
= .4

Once again, the mathematical expectation is $.40. The optimal f on this sequence is .26, or 1 bet for every $8.08 in account equity (since the biggest loss here is -$2.10). Thus, the least the historical drawdown may have been was 26% (about the same as with the outright positive expectation game). However, here is an example where there is buffering of the equity retracements. If we were simply playing the outright positive expectation game, the third sequence would have hit us for the maximum drawdown. Since we are combining the two systems, the third sequence is buffered.
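The same brute-force approach (a sketch of mine, not the author's procedure) applied to the four equally likely net outcomes above reproduces the optimal f and bet size stated in the text:

```python
def geomean(outcomes, f):
    """Geometric mean HPR at fraction f for equally likely outcomes."""
    worst = abs(min(outcomes))
    prod = 1.0
    for o in outcomes:
        prod *= 1 + f * o / worst
    return prod ** (1 / len(outcomes))

net_outcomes = [2.90, 0.90, -0.10, -2.10]   # from the table above

best_f = max((i / 1_000 for i in range(1, 1_000)),
             key=lambda f: geomean(net_outcomes, f))
print(round(best_f, 2), round(geomean(net_outcomes, best_f), 4))  # ~.26, ~1.025
print(round(2.10 / round(best_f, 2), 2))    # ~$8.08 of equity per $1 bet
```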
But that is the only benefit. The resultant geometric mean is 1.025, less than half the rate of growth of playing just the outright positive expectation game. We placed 4 bets in the same time as we would have placed 2 bets in the outright positive expectation game, but as you can see, we still didn't make as much money:

1.0607^2 = 1.12508449
1.025^4 = 1.103812891

Clearly, when you diversify you must use market systems that have as low a correlation in returns to each other as possible, and preferably a negative one. You must realize that your worst-case equity retracement will hardly be helped out by the diversification, although you may be able to buffer many of the other lesser equity retracements. The most important thing to realize about diversification is that its greatest benefit is in what it can do to improve your geometric mean. The technique for finding the optimal portfolio by looking at the net daily HPRs eliminates having to look at how many trades each market system accomplished in determining optimal portfolios. Using the technique allows you to look at the geometric mean alone, without regard to the frequency of trading. Thus, the geometric mean becomes the single statistic of how beneficial a portfolio is. There is no benefit to be obtained by diversifying into more market systems than that which results in the highest geometric mean. This may mean no diversification at all if a portfolio of one market system results in the highest geometric mean. It may also mean combining market systems that you would never want to trade by themselves.

HOW THE DISPERSION OF OUTCOMES AFFECTS GEOMETRIC GROWTH

Once we acknowledge the fact that whether we want to or not, whether consciously or not, we determine our quantities to trade in as a function of the level of equity in an account, we can look at HPRs instead of dollar amounts for trades. In so doing, we can give money management specificity and exactitude.
We can examine our money-management strategies, draw rules, and make conclusions. One of the big conclusions, one that will no doubt spawn many others for us, regards the relationship of geometric growth and the dispersion of outcomes (HPRs).

This discussion will use a gambling illustration for the sake of simplicity. Consider two systems: System A, which wins 10% of the time and has a 28 to 1 win/loss ratio, and System B, which wins 70% of the time and has a 1 to 1 win/loss ratio. Our mathematical expectation, per unit bet, for A is 1.9 and for B is .4. We can therefore say that for every unit bet System A will return, on average, 4.75 times as much as System B. But let's examine this under fixed fractional trading. We can find our optimal fs here by dividing the mathematical expectations by the win/loss ratios. This gives us an optimal f of .0678 for A and .4 for B. The geometric means for each system at their optimal f levels are then:

System   % Wins   Win:Loss   ME    f       Geomean
A        10       28:1       1.9   .0678   1.0441768
B        70       1:1        .4    .4      1.0857629

As you can see, System B, although it has less than one quarter the mathematical expectation of A, makes almost twice as much per bet (returning 8.57629% of your entire stake per bet on average when you reinvest at the optimal f levels) as does A (which returns 4.4176755% of your entire stake per bet on average when you reinvest at the optimal f levels). Now assuming that a 50% drawdown on equity will require a 100% gain to recoup, 1.044177 to the power of X equals 2.0 at approximately X = 16, so System A needs more than 16 trades to recoup from a 50% drawdown. Contrast this to System B, where 1.0857629 to the power of X equals 2.0 at approximately X = 8.4, or 9 trades for System B to recoup from a 50% drawdown.

What's going on here? Is this because System B has a higher percentage of winning trades?
The reason B is outperforming A has to do with the dispersion of outcomes and its effect on the growth function. Most people have the mistaken impression that the growth function, the TWR, is:

(1.17) TWR = (1+R)^N

where R = The interest rate per period (e.g., 7% = .07).
N = The number of periods.

Since 1+R is the same thing as an HPR, we can say that most people have the mistaken impression that the growth function,6 the TWR, is:

(1.18) TWR = HPR^N

This function is only true when the return (i.e., the HPR) is constant, which is not the case in trading. The real growth function in trading (or any event where the HPR is not constant) is the multiplicative product of the HPRs. Assume we are trading coffee, our optimal f is 1 contract for every $21,000 in equity, and we have 2 trades, a loss of $210 and a gain of $210, for HPRs of .99 and 1.01 respectively. In this example our TWR would be:

TWR = 1.01*.99 = .9999

An insight can be gained by using the estimated geometric mean (EGM), Equation (1.16a):

(1.16a) EGM = (AHPR^2-SD^2)^(1/2)
or
(1.16b) EGM = (AHPR^2-V)^(1/2)

Now we take Equation (1.16a) or (1.16b) to the power of N to estimate the TWR. This will very closely approximate the "multiplicative" growth function, the actual TWR:

(1.19a) Estimated TWR = ((AHPR^2-SD^2)^(1/2))^N
or
(1.19b) Estimated TWR = ((AHPR^2-V)^(1/2))^N

where N = The number of periods.
AHPR = The arithmetic mean HPR.
SD = The population standard deviation in HPRs.
V = The population variance in HPRs.

The two equations in (1.19) are equivalent. The insight gained is that we can see here, mathematically, the tradeoff between an increase in the arithmetic average trade (the AHPR) and the variance in the HPRs, and hence the reason that the 70% 1:1 system did better than the 10% 28:1 system!
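Both points in this section can be checked numerically: first, Equation (1.19a) applied to the coffee example's two HPRs, and second, the exact (probability-weighted) geometric means of Systems A and B at the optimal f values given earlier. The figures are from the text; the code is a sketch of mine:

```python
import statistics

# Part 1: the coffee example, HPRs of .99 and 1.01.
hprs = [0.99, 1.01]
actual_twr = hprs[0] * hprs[1]                          # multiplicative product
ahpr = statistics.fmean(hprs)
sd = statistics.pstdev(hprs)                            # population SD
est_twr = ((ahpr ** 2 - sd ** 2) ** 0.5) ** len(hprs)   # Equation (1.19a)
print(actual_twr, est_twr)                              # both ~ .9999

# Part 2: exact geometric mean HPR of a win/loss game at fraction f.
def exact_geomean(p_win, win_loss_ratio, f):
    return ((1 + f * win_loss_ratio) ** p_win) * ((1 - f) ** (1 - p_win))

g_a = exact_geomean(0.10, 28, 1.9 / 28)   # System A: ~1.0442
g_b = exact_geomean(0.70, 1, 0.4)         # System B: ~1.0858
print(g_a, g_b)                            # B grows faster despite lower ME
```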
Our goal should be to maximize the coefficient of this function, to maximize: (1.16b) EGM = (AHPR^2-V)^(1/2) Expressed literally, our goal is "To maximize the square root of the quantity HPR squared minus the population variance in HPRs." The exponent of the estimated TWR, N, will take care of itself. That is to say that increasing N is not a problem, as we can increase the number of markets we are following, can trade more short-term types of systems, and so on. However, these statistical measures of dispersion, variance and standard deviation (V and SD respectively), are difficult for most nonstatisticians to envision. What many people therefore use in lieu of these measures is known as the mean absolute deviation (which we'll call M). Essentially, to find M you simply take the average absolute value of the difference of each data point from the average of the data points: (1.20) M = ∑ABS(Xi-X̄)/N In a bell-shaped distribution (as is almost always the case with the distribution of P&L's from a trading system) the mean absolute deviation equals about .8 of the standard deviation (in a Normal Distribution, it is .7979). Therefore, we can say: (1.21) M = .8*SD and (1.22) SD = 1.25*M We will denote the arithmetic average HPR with the variable A, and the geometric average HPR with the variable G.

[Footnote 6: Many people mistakenly use the arithmetic average HPR in the equation for HPR^N. As is demonstrated here, this will not give the true TWR after N plays. What you must use is the geometric, rather than the arithmetic, average HPR. This will give you the true TWR. If the standard deviation in HPRs is 0, then the arithmetic average HPR and the geometric average HPR are equivalent, and it matters not which you use.]
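The M ≈ .8*SD rule of Equations (1.21) and (1.22) can be checked numerically. This sketch is our own, not from the text: it draws a large normal sample and computes the ratio, which should land near the .7979 figure quoted above.

```python
import random
import statistics

random.seed(1)                          # fixed seed for repeatability
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean = statistics.fmean(xs)
m = sum(abs(x - mean) for x in xs) / len(xs)   # Equation (1.20)
sd = statistics.pstdev(xs)
ratio = m / sd                                 # near .7979 for a Normal Distribution
```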
Using Equation (1.16b), we can express the estimated geometric mean as: (1.16b) G = (A^2-V)^(1/2) From this equation, we can obtain: (1.23) G^2 = (A^2-V) Now substituting the standard deviation squared for the variance [as in (1.16a)]: (1.24) G^2 = A^2-SD^2 From this equation we can isolate each variable, as well as isolating zero, to obtain the fundamental relationships between the arithmetic mean, geometric mean, and dispersion, expressed as SD^2 here: (1.25) A^2-G^2-SD^2 = 0 (1.26) G^2 = A^2-SD^2 (1.27) SD^2 = A^2-G^2 (1.28) A^2 = G^2+SD^2 In these equations, the value SD^2 can also be written as V or as (1.25*M)^2. This brings us to the point where we can envision exactly what the relationships are. Notice that the last of these equations is the familiar Pythagorean Theorem: the hypotenuse of a right-angle triangle squared equals the sum of the squares of its sides! But here the hypotenuse is A, and we want to maximize one of the legs, G. In maximizing G, any increase in D (the dispersion leg, equal to SD or V^(1/2) or 1.25*M) will require an increase in A to offset. When D equals zero, then A equals G per Equation (1.26), thus conforming to the misconstrued growth function TWR = (1+R)^N. So, in terms of their relative effect on G, we can state that an increase in A^2 is equal to a decrease of the same amount in (1.25*M)^2: (1.29) ∆A^2 = -∆((1.25*M)^2) To see this, consider when A goes from 1.1 to 1.2:

A     SD      M        G          A^2    SD^2 = (1.25*M)^2
1.1   .1      .08      1.095445   1.21   .01
1.2   .4899   .39192   1.095445   1.44   .24
                                  .23    .23

When A = 1.1, we are given an SD of .1. When A = 1.2, to get an equivalent G, SD must equal .4899 per Equation (1.27). Since M = .8*SD, then M = .3919. If we square the values and take the differences, they are both equal to .23, as predicted by Equation (1.29).
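The table above can be verified directly from Equations (1.26), (1.27), and (1.29); the helper names below are ours:

```python
# Pythagorean relationships between A, G, and SD.
def g_from(a, sd):
    return (a ** 2 - sd ** 2) ** 0.5      # Equation (1.26)

def sd_from(a, g):
    return (a ** 2 - g ** 2) ** 0.5       # Equation (1.27)

g1 = g_from(1.1, 0.1)                     # 1.095445
sd_needed = sd_from(1.2, g1)              # .4899, the SD giving the same G at A = 1.2
m_needed = 0.8 * sd_needed                # ~ .3919, via Equation (1.21)

delta_a2 = 1.2 ** 2 - 1.1 ** 2            # .23
delta_sd2 = sd_needed ** 2 - 0.1 ** 2     # .23, as Equation (1.29) predicts
```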
Consider the following:

A     SD      M       G          A^2    SD^2 = (1.25*M)^2
1.1   .25     .2      1.071214   1.21   .0625
1.2   .5408   .4327   1.071214   1.44   .2925
                                 .23    .23

Notice in the previous example, where we started with lower dispersion values (SD or M), how much proportionally greater an increase was required to yield the same G. Thus we can state that the more you reduce your dispersion, the better, with each reduction providing greater and greater benefit. It is an exponential function, with a limit at the dispersion equal to zero, where G is then equal to A. A trader who is trading on a fixed fractional basis wants to maximize G, not necessarily A. In maximizing G, the trader should realize that the standard deviation, SD, affects G in the same proportion as does A, per the Pythagorean Theorem! Thus, when the trader reduces the standard deviation (SD) of his or her trades, it is equivalent to an equal increase in the arithmetic average HPR (A), and vice versa!

THE FUNDAMENTAL EQUATION OF TRADING

We can glean a lot more here than just how trimming the size of our losses improves our bottom line. We return now to Equation (1.19a): (1.19a) Estimated TWR = ((AHPR^2-SD^2)^(1/2))^N We again replace AHPR with A, representing the arithmetic average HPR. Also, since (X^Y)^Z = X^(Y*Z), we can further simplify the exponents in the equation, thus obtaining: (1.19c) Estimated TWR = (A^2-SD^2)^(N/2) This last equation, the simplification for the estimated TWR, we call the fundamental equation for trading, since it describes how the different factors, A, SD, and N, affect our bottom line in trading. A few things are readily apparent. The first of these is that if A is less than or equal to 1, then regardless of the other two variables, SD and N, our result can be no greater than 1. If A is less than 1, then as N approaches infinity, the TWR approaches zero.
This means that if A is less than or equal to 1 (mathematical expectation less than or equal to zero, since mathematical expectation = A-1), we do not stand a chance of making profits. In fact, if A is less than 1, it is simply a matter of time (i.e., as N increases) until we go broke. Provided that A is greater than 1, we can see that increasing N increases our total profits. For each increase of 1 trade, the TWR is further multiplied by the square root of the coefficient. For instance, suppose your system showed an arithmetic mean of 1.1 and a standard deviation of .25. Thus: Estimated TWR = (1.1^2-.25^2)^(N/2) = (1.21-.0625)^(N/2) = 1.1475^(N/2) Each time we can increase N by 1, we increase our TWR by a factor equivalent to the square root of the coefficient. In the case of our example, where we have a coefficient of 1.1475, then 1.1475^(1/2) = 1.071214264. Thus every trade increase, every 1-point increase in N, is equivalent to multiplying our final stake by 1.071214264. Notice that this figure is the geometric mean. Each time a trade occurs, each time N is increased by 1, the coefficient is multiplied by the geometric mean. Herein is the real benefit of diversification expressed mathematically in the fundamental equation of trading. Diversification lets you get more N off in a given period of time. The other important point to note about the fundamental trading equation is that it shows that if you reduce your standard deviation more than you reduce your arithmetic average HPR, you are better off. It stands to reason, therefore, that cutting your losses short, if possible, benefits you. But the equation demonstrates that at some point you no longer benefit by cutting your losses short. That point is the point where you would be getting stopped out of too many trades with a small loss that later would have turned profitable, thus reducing your A to a greater extent than your SD.
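The per-trade multiplier argument can be restated in a few lines (variable names ours), using the A = 1.1, SD = .25 example:

```python
A, SD = 1.1, 0.25
coeff = A ** 2 - SD ** 2                  # 1.1475
g = coeff ** 0.5                          # 1.071214..., the geometric mean

def est_twr(n):
    return coeff ** (n / 2)               # Equation (1.19c)

# Raising N by 1 multiplies the estimated TWR by the geometric mean:
ratio = est_twr(7) / est_twr(6)           # equals g
```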
Along these same lines, reducing big winning trades can help your program if it reduces your SD more than it reduces your A. In many cases, this can be accomplished by incorporating options into your trading program. Having an option position that goes against your position in the underlying (either by buying long an option or writing an option) can possibly help. For instance, if you are long a given stock (or commodity), buying a put option (or writing a call option) may reduce your SD on this net position more than it reduces your A. If you are profitable on the underlying, you will be unprofitable on the option, but profitable overall, only to a lesser extent than had you not had the option position. Hence, you have reduced both your SD and your A. If you are unprofitable on the underlying, you will have increased your A and decreased your SD. All told, you will tend to have reduced your SD to a greater extent than you have reduced your A. Of course, transaction costs are a large consideration in such a strategy, and they must always be taken into account. Your program may be too short-term oriented to take advantage of such a strategy, but it does point out the fact that different strategies, along with different trading rules, should be looked at relative to the fundamental trading equation. In doing so, we gain an insight into how these factors will affect the bottom line, and what specifically we can work on to improve our method. Suppose, for instance, that our trading program was long-term enough that the aforementioned strategy of buying a put in conjunction with a long position in the underlying was feasible and resulted in a greater estimated TWR. Such a position, a long position in the underlying and a long put, is the equivalent of simply being outright long the call. Hence, we are better off simply to be long the call, as it will result in considerably lower transaction costs7 than being both long the underlying and long the put option.
To demonstrate this, we'll use the extreme example of the stock indexes in 1987. Let's assume that we can actually buy the underlying OEX index. The system we will use is a simple 20-day channel breakout. Each day we calculate the highest high and lowest low of the last 20 days. Then, throughout the day, if the market comes up and touches the high point, we enter long on a stop. If the market comes down and touches the low point, we go short on a stop. If the daily opens are through the entry points, we enter on the open. The system is always in the market:

Date     Position   Entry   P&L      Cumulative   Volatility
870106   L          24107   0        0            .1516987
870414   S          27654   35.47    35.47        .2082573
870507   L          29228   -15.74   19.73        .2182117
870904   S          31347   21.19    40.92        .1793583
871001   L          32067   -7.2     33.72        .1848783
871012   S          30281   -17.86   15.86        .2076074
871221   L          24294   59.87    75.73        .3492674

If we were to determine the optimal f on this stream of trades, we would find its corresponding geometric mean, the growth factor on our stake per play, to be 1.12445. Now we will take the exact same trades, only, using the Black-Scholes stock option pricing model from Chapter 5, we will convert the entry prices to theoretical option prices. The inputs into the pricing model are the historical volatility determined on a 20-day basis (the calculation for historical volatility is also given in Chapter 5), a risk-free rate of 6%, and a 260.8875-day year (this is the average number of weekdays in a year). Further, we will assume that we are buying options with exactly .5 of a year left till expiration (6 months) and that they are at-the-money. In other words, there is a strike price corresponding to the exact entry price.
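The text prices the options with the stock option pricing model of Chapter 5, which is not reproduced here. As a stand-in, below is a textbook Black-Scholes call price; because the Chapter 5 model and its day-count conventions differ in detail, this sketch will not reproduce the exact premiums of the trade stream that follows (such as the 9.623 entry) and is meant only to show the shape of the calculation. The parameter values are taken from the example; the function name is ours.

```python
import math

def bs_call(s, k, t, r, v):
    """Textbook Black-Scholes European call price.
    s: underlying, k: strike, t: years to expiration,
    r: risk-free rate, v: annualized volatility."""
    d1 = (math.log(s / k) + (r + v * v / 2.0) * t) / (v * math.sqrt(t))
    d2 = d1 - v * math.sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

# At-the-money, 6 months out, r = 6%, first volatility reading of the example:
price = bs_call(241.07, 241.07, 0.5, 0.06, 0.1516987)
```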
Buying long a call when the system goes long the underlying, and buying long a put when the system goes short the underlying, using the parameters of the option pricing model mentioned, would have resulted in a trade stream as follows:

Date     Position   Entry    P&L      Cumulative   Underlying   Action
870106   L          9.623    0        0            24107        LONG CALL
870414   F          35.47    25.846   25.846       27654
870414   L          15.428   0        25.846       27654        LONG PUT
870507   F          8.792    -6.637   19.21        29228
870507   L          17.116   0        19.21        29228        LONG CALL
870904   F          21.242   4.126    23.336       31347
870904   L          14.957   0        23.336       31347        LONG PUT
871001   F          10.844   -4.113   19.223       32067
871001   L          15.797   0        19.223       32067        LONG CALL
871012   F          9.374    -6.423   12.8         30281
871012   L          16.839   0        12.8         30281        LONG PUT
871221   F          61.013   44.173   56.974       24294
871221   L          23       0        56.974       24294        LONG CALL

If we were to determine the optimal f on this stream of trades, we would find its corresponding geometric mean, the growth factor on our stake per play, to be 1.2166, which compares to the geometric mean at the optimal f for the underlying of 1.12445. This is an enormous difference. Since there are a total of 6 trades, we can raise each geometric mean to the power of 6 to determine the TWR on our stake at the end of the 6 trades. This returns a TWR on the underlying of 2.02 versus a TWR on the options of 3.24. Subtracting 1 from each TWR translates these results to percentage gains on our starting stake, or a 102% gain trading the underlying and a 224% gain making the same trades in the options. The options are clearly superior in this case, as the fundamental equation of trading testifies. Trading long the options outright as in this example may not always be superior to being long the underlying instrument. This example is an extreme case, yet it does illuminate the fact that trading strategies (as 7 There is another benefit here that is not readily apparent but has enormous merit. That is that we know, in advance, what our worst-case loss is.
Considering how sensitive the optimal f equation is to what the biggest loss in the future is, such a strategy can have us be much closer to the peak of the f curve in the future by allowing us to predetermine with certainty what our largest loss can be. Second, the problem of a loss of 3 standard deviations or more having a much higher probability of occurrence than the Normal Distribution implies is eliminated. It is the gargantuan losses in excess of 3 standard deviations that kill most traders. An options strategy such as this can totally eliminate such terminal losses. well as what option series to buy) should be looked at in light of the fundamental equation for trading in order to be judged properly. As you can see, the fundamental trading equation can be utilized to dictate many changes in our trading. These changes may be in the way of tightening (or loosening) our stops, setting targets, and so on. These changes are the results of inefficiencies in the way we are carrying out our trading as well as inefficiencies in our trading program or methodology. I hope you will now begin to see that the computer has been terribly misused by most traders. Optimizing and searching for the systems and parameter values that made the most money over past data is, by and large, a futile process. You only need something that will be marginally profitable in the future. By correct money management you can get an awful lot out of a system that is only marginally profitable. In general, then, the degree of profitability is determined by the money management you apply to the system more than by the system itself. Therefore, you should build your systems (or trading techniques, for those opposed to mechanical systems) around how certain you can be that they will be profitable (even if only marginally so) in the future. This is accomplished primarily by not restricting a system or technique's degrees of freedom.
The second thing you should do regarding building your system or technique is to bear the fundamental equation of trading in mind. It will guide you in the right direction regarding inefficiencies in your system or technique, and when it is used in conjunction with the principle of not restricting the degrees of freedom, you will have obtained a technique or system on which you can now employ the money-management techniques. Using these money-management techniques, whether empirical, as detailed in this chapter, or parametric (which we will delve into starting in Chapter 3), will determine the degree of profitability of your technique or system.

Chapter 2 - Characteristics of Fixed Fractional Trading and Salutary Techniques

We have seen that the optimal growth of an account is achieved through optimal f. This is true regardless of the underlying vehicle. Whether we are trading futures, stocks, or options, or managing a group of traders, we achieve optimal growth at the optimal f, and we reach a specified goal in the shortest time. We have also seen how to combine various market systems at their optimal f levels into an optimal portfolio from an empirical standpoint. That is, we have seen how to combine optimal f and portfolio theory, not from a mathematical model standpoint, but from the standpoint of using the past data directly to determine the optimal quantities to trade in for the components of the optimal portfolio. Certain important characteristics about fixed fractional trading still need to be mentioned. We now cover these characteristics.

OPTIMAL F FOR SMALL TRADERS JUST STARTING OUT

How does a very small account, an account that is going to start out trading 1 contract, use the optimal f approach? One suggestion is that such an account start out by trading 1 contract not for every optimal f amount in dollars (biggest loss/-f), but rather that the drawdown and margin must be considered in the initial phase.
The amount of funds allocated towards the first contract should be the greater of the optimal f amount in dollars or the margin plus the maximum historic drawdown (on a 1-unit basis): (2.01) A = MAX {(Biggest Loss /-f), (Margin+ABS(Drawdown))} where A = The dollar amount to allocate to the first contract. f = The optimal f (0 to 1). Margin = The initial speculative margin for the given contract. Drawdown = The historic maximum drawdown. MAX{} = The maximum value of the bracketed values. ABS() = The absolute value function. With this procedure an account can experience the maximum drawdown again and still have enough funds to cover the initial margin on another trade. Although we cannot expect the worst-case drawdown in the future not to exceed the worst-case drawdown historically, it is rather unlikely that we will start trading right at the beginning of a new historic drawdown. A trader utilizing this idea will then subtract the amount in Equation (2.01) from his or her equity each day. With the remainder, he or she will then divide by (Biggest Loss/-f). The answer obtained will be rounded down to the integer, and 1 will be added. The result is how many contracts to trade. An example may help clarify. Suppose we have a system where the optimal f is .4, the biggest historical loss is -$3,000, the maximum drawdown was -$6,000, and the margin is $2,500. Employing Equation (2.01) then: A = MAX{( -$3,000/-.4), ($2,500+ABS( -$6,000))} = MAX(($7,500), ($2,500+$6,000)) = MAX($7,500, $8,500) = $8,500 We would thus allocate $8,500 for the first contract. Now suppose we are dealing with $22,500 in account equity. 
We therefore subtract this first contract allocation from the equity: $22,500-$8,500 = $14,000 We then divide this amount by the optimal f in dollars: $14,000/$7,500 = 1.867 Then we take this result down to the integer: INT(1.867) = 1 and add 1 to the result (the 1 contract represented by the $8,500 we have subtracted from our equity): 1+1 = 2 We therefore would trade 2 contracts. If we were just trading at the optimal f level of 1 contract for every $7,500 in account equity, we would have traded 3 contracts ($22,500/$7,500). As you can see, this technique can be utilized no matter how large an account's equity is (yet the larger the equity, the closer the two answers will be). Further, the larger the equity, the less likely it is that we will eventually experience a drawdown that will have us eventually trading only 1 contract. For smaller accounts, or for accounts just starting out, this is a good idea to employ.

THRESHOLD TO GEOMETRIC

Here is another good idea for accounts just starting out, one that may not be possible if you are employing the technique just mentioned. This technique makes use of another by-product calculation of optimal f called the threshold to the geometric. The by-products of the optimal f calculation include calculations, such as the TWR, the geometric mean, and so on, that were derived in obtaining the optimal f, and that tell us something about the system. The threshold to the geometric is another of these by-product calculations. Essentially, the threshold to the geometric tells us at what point we should switch over to fixed fractional trading, assuming we are starting out constant-contract trading. Refer back to the example of a coin toss where we win $2 if the toss comes up heads and we lose $1 if the toss comes up tails. We know that our optimal f is .25, or to make 1 bet for every $4 we have in account equity. If we are starting out trading on a constant-contract basis, we know we will average $.50 per unit per play.
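The small-account procedure just worked through, Equation (2.01) plus the divide-and-round step, fits in a few lines. This is a sketch with our own function names, using the example's numbers.

```python
def first_contract_allocation(biggest_loss, f, margin, drawdown):
    return max(biggest_loss / -f, margin + abs(drawdown))   # Equation (2.01)

def contracts_to_trade(equity, biggest_loss, f, margin, drawdown):
    a = first_contract_allocation(biggest_loss, f, margin, drawdown)
    f_dollars = biggest_loss / -f            # optimal f in dollars
    return int((equity - a) / f_dollars) + 1

# The example: f = .4, biggest loss -$3,000, drawdown -$6,000, margin $2,500.
alloc = first_contract_allocation(-3000, 0.4, 2500, -6000)   # $8,500
n = contracts_to_trade(22500, -3000, 0.4, 2500, -6000)       # 2 contracts
plain = int(22500 / (-3000 / -0.4))                          # 3 at straight optimal f
```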
However, if we start trading on a fixed fractional basis, we can expect to make the geometric average trade of $.2428 per unit per play. Assume we start out with an initial stake of $4, and therefore we are making 1 bet per play. Eventually, when we get to $8, the optimal f would have us step up to making 2 bets per play. However, 2 bets times the geometric average trade of $.2428 is $.4856. Wouldn't we be better off sticking with 1 bet at the equity level of $8, whereby our expectation per play would still be $.50? The answer is, "Yes." The reason is that the optimal f is figured on the basis of contracts that are infinitely divisible, which may not be the case in real life. We can find that point where we should move up to trading two contracts by the formula for the threshold to the geometric, T: (2.02) T = AAT/GAT*Biggest Loss/-f where T = The threshold to the geometric. AAT = The arithmetic average trade. GAT = The geometric average trade. f = The optimal f (0 to 1). In our example of the 2-to-1 coin toss: T = .50/.2428*-1/-.25 = 8.24 Therefore, we are better off switching up to trading 2 contracts when our equity gets to $8.24 rather than $8.00. Figure 2-1 shows the threshold to the geometric for a game with a 50% chance of winning $2 and a 50% chance of losing $1. [Figure 2-1: Threshold to the geometric for the 2:1 coin toss; the threshold in dollars plotted against f values, with its trough of $8.24 at the optimal f of .25.] Notice that the trough of the threshold to the geometric curve occurs at the optimal f. This means that since the threshold to the geometric is the optimal level of equity at which to go to trading 2 units, you go to 2 units at the lowest level of equity, optimally, when incorporating the threshold to the geometric at the optimal f. Now the question is, "Can we use a similar approach to know when to go from 2 cars to 3 cars?"
Also, "Why can't the unit size be 100 cars starting out, assuming you are starting out with a large account, rather than simply a small account starting out with 1 car?" To answer the second question first, it is valid to use this technique when starting out with a unit size greater than 1. However, it is valid only if you do not trim back units on the downside before switching into the geometric mode. The reason is that before you switch into the geometric mode you are assumed to be trading in a constant-unit size. Assume you start out with a stake of 400 units in our 2-to-1 coin-toss game. Your optimal f in dollars is to trade 1 contract (make 1 bet) for every $4 in equity. Therefore, you will start out trading 100 contracts (making 100 bets) on the first trade. Your threshold to the geometric is at $8.24, and therefore you would start trading 101 contracts at an equity level of $404.24. You can convert your threshold to the geometric, which is computed on the basis of advancing from 1 contract to 2, as: (2.03) Converted T = EQ+T-(Biggest Loss/-f) where EQ = The starting account equity level. T = The threshold to the geometric for going from 1 car to 2. f = The optimal f (0 to 1). Therefore, since your starting account equity is $400, your T is $8.24, your biggest loss -$1, and your f is .25: Converted T = 400+8.24-(-1/-.25) = 400+8.24-4 = 404.24 Thus, you would progress to trading 101 contracts (making 101 bets) if and when your account equity reached $404.24. We will assume you are trading in a constant-contract mode until your account equity reaches $404.24, at which point you will begin the geometric mode. Therefore, until your account equity reaches $404.24, you will trade 100 contracts on the next trade regardless of the remaining equity in your account.
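Equations (2.02) and (2.03) can be sketched together (function names ours), reproducing the $8.24 and $404.24 figures:

```python
def threshold_to_geometric(aat, gat, biggest_loss, f):
    return aat / gat * biggest_loss / -f     # Equation (2.02)

def converted_threshold(eq, t, biggest_loss, f):
    return eq + t - biggest_loss / -f        # Equation (2.03)

# 2:1 coin toss: AAT = $.50, GAT = $.2428, biggest loss -$1, f = .25
t = threshold_to_geometric(0.50, 0.2428, -1, 0.25)   # about 8.24
ct = converted_threshold(400, t, -1, 0.25)           # about 404.24
```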
If, after you cross the geometric threshold (that is, after your account equity hits $404.24), you suffer a loss and your equity drops below $404.24, you will go back to trading on a constant 100-contract basis if and until you cross the geometric threshold again. This inability to trim back contracts on the downside when you are below the geometric threshold is the drawback to using this procedure when you are at an equity level of trading more than 2 contracts. If you are only trading 1 contract, the geometric threshold is a very valid technique for determining at what equity level to start trading 2 contracts (since you cannot trim back any further than 1 contract should you experience an equity decline). However, it is not a valid technique for advancing from 2 contracts to 3, because the technique is predicated upon the fact that you are currently trading on a constant-contract basis. That is, if you are trading 2 contracts, unless you are willing not to trim back to 1 contract if you suffer an equity decline, the technique is not valid, and likewise if you start out trading 100 contracts. You could do just that (not trim back the number of contracts you are presently trading if you experience an equity decline), in which case the threshold to the geometric, or its converted version in Equation (2.03), would be the valid equity point at which to add the next contract. The problem with doing this (not trimming back on the downside) is that you will make less (your TWR will be less) in an asymptotic sense. You will not make as much as if you simply traded the full optimal f. Further, your drawdowns will be greater and your risk of ruin higher. Therefore, the threshold to the geometric is only beneficial if you are starting out in the lowest denomination of bet size (1 contract) and advancing to 2, and it is only a benefit if the arithmetic average trade is more than twice the size of the geometric average trade.
Furthermore, it is beneficial to use only when you cannot trade fractional units.

ONE COMBINED BANKROLL VERSUS SEPARATE BANKROLLS

Some very important points regarding fixed fractional trading must be covered before we discuss the parametric techniques. First, when trading more than one market system simultaneously, you will generally do better in an asymptotic sense using only one combined bankroll from which to figure your contract sizes, rather than separate bankrolls for each. It is for this reason that we "recapitalize" the subaccounts on a daily basis as the equity in an account fluctuates. What follows is a run of two similar systems, System A and System B. Both have a 50% chance of winning, and both have a payoff ratio of 2:1. Therefore, the optimal f dictates that we bet $1 for every $4 in equity. The first run we see shows these two systems with positive correlation to each other. We start out with $100, splitting it into 2 subaccount units of $50 each. After a trade is registered, it only affects the cumulative column for that system, as each system has its own separate bankroll. The size of each system's separate bankroll is used to determine bet size on the subsequent play:

        System A                        System B
Trade   P&L      Cumulative     Trade   P&L      Cumulative
                 50.00                           50.00
2       25.00    75.00          2       25.00    75.00
-1      -18.75   56.25          -1      -18.75   56.25
2       28.13    84.38          2       28.13    84.38
-1      -21.09   63.28          -1      -21.09   63.28
2       31.64    94.92          2       31.64    94.92
-1      -23.73   71.19          -1      -23.73   71.19
                 -50.00                          -50.00
Net Profit:      21.19          Net Profit:      21.19

Total net profit of the two banks = $42.38
Each trade for either system affects the combined bank, and it is the combined bank that is used to determine bet size on the subsequent play:

System A          System B
Trade   P&L       Trade   P&L       Combined Bank
                                    100.00
2       25.00     2       25.00     150.00
-1      -18.75    -1      -18.75    112.50
2       28.13     2       28.13     168.75
-1      -21.09    -1      -21.09    126.56
2       31.64     2       31.64     189.84
-1      -23.73    -1      -23.73    142.38
                                    -100.00

Total net profit of the combined bank = $42.38

Notice that using either a combined bank or a separate bank in the preceding example shows a profit on the $100 of $42.38. Yet what was shown is the case where there is positive correlation between the two systems. Now we will look at negative correlation between the same two systems, first with both systems operating from their own separate bankrolls:

        System A                        System B
Trade   P&L      Cumulative     Trade   P&L      Cumulative
                 50.00                           50.00
2       25.00    75.00          -1      -12.50   37.50
-1      -18.75   56.25          2       18.75    56.25
2       28.13    84.38          -1      -14.06   42.19
-1      -21.09   63.28          2       21.09    63.28
2       31.64    94.92          -1      -15.82   47.46
-1      -23.73   71.19          2       23.73    71.19
                 -50.00                          -50.00
Net Profit:      21.19          Net Profit:      21.19

Total net profit of the two banks = $42.38

As you can see, when operating from separate bankrolls, both systems net out making the same amount regardless of correlation. However, with the combined bank:

System A          System B
Trade   P&L       Trade   P&L       Combined Bank
                                    100.00
2       25.00     -1      -12.50    112.50
-1      -14.06    2       28.12     126.56
2       31.64     -1      -15.82    142.38
-1      -17.80    2       35.59     160.18
2       40.05     -1      -20.02    180.20
-1      -22.53    2       45.00     202.73
                                    -100.00

Total net profit of the combined bank = $102.73

With the combined bank, the results are dramatically improved. When using fixed fractional trading you are best off operating from a single combined bank.

TREAT EACH PLAY AS IF INFINITELY REPEATED

The next axiom of fixed fractional trading regards maximizing the current event as though it were to be performed an infinite number of times in the future.
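The combined-versus-separate comparison of the preceding section can be reproduced with a short simulation (our own sketch), using the negatively correlated outcome streams from the tables:

```python
# Outcome streams from the negative-correlation tables (2 = win, -1 = loss).
a_outcomes = [2, -1, 2, -1, 2, -1]
b_outcomes = [-1, 2, -1, 2, -1, 2]

# Separate $50 banks, each betting $1 per $4 of its own equity.
bank_a = bank_b = 50.0
for oa, ob in zip(a_outcomes, b_outcomes):
    bank_a += bank_a / 4 * oa
    bank_b += bank_b / 4 * ob
separate_profit = bank_a + bank_b - 100.0    # about $42.38

# One combined $100 bank, betting $1 per $8 of combined equity per system.
bank = 100.0
for oa, ob in zip(a_outcomes, b_outcomes):
    stake = bank / 8
    bank += stake * oa + stake * ob
combined_profit = bank - 100.0               # about $102.73
```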
We have determined that for an independent trials process, you should always bet that f which is optimal (and constant), and likewise when there is dependency involved, only with dependency f is not constant. Suppose we have a system where there is dependency in like begetting like, and suppose that this is one of those rare gems where the confidence limit is at an acceptable level for us, that we feel we can safely assume that there really is dependency here. For the sake of simplicity we will use a payoff ratio of 2:1. Our system has shown that, historically, if the last play was a win, then the next play has a 55% chance of being a win. If the last play was a loss, our system has a 45% chance of the next play being a win. Thus, if the last play was a win, then from the Kelly formula, Equation (1.10), for finding the optimal f (since the payoff ratio is Bernoulli distributed): (1.10) f = ((2+1)*.55-1)/2 = (3*.55-1)/2 = .65/2 = .325 After a losing play, our optimal f is: f = ((2+1)*.45-1)/2 = (3*.45-1)/2 = .35/2 = .175 Now dividing our biggest losses (-1) by these negative optimal fs dictates that we make 1 bet for every 3.076923077 units in our stake after a win, and make 1 bet for every 5.714285714 units in our stake after a loss. In so doing we will maximize the growth over the long run. Notice that we treat each individual play as though it were to be performed an infinite number of times. Notice in this example that betting after both the wins and the losses still has a positive mathematical expectation individually. What if, after a loss, the probability of a win was .3? In such a case, the mathematical expectation is negative, hence there is no optimal f and as a result you shouldn't take this play: (1.03) ME = (.3*2)+(.7*-1) = .6-.7 = -.1 In such circumstances, you would bet the optimal amount only after a win, and you would not bet after a loss.
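The two Kelly calculations above can be sketched as (function name ours):

```python
def kelly_f(p, b=2):
    """Equation (1.10) for win probability p and payoff ratio b."""
    return ((b + 1) * p - 1) / b

f_after_win = kelly_f(0.55)        # .325 -> 1 bet per 3.076923... units
f_after_loss = kelly_f(0.45)       # .175 -> 1 bet per 5.714285... units
units_after_win = 1 / f_after_win
units_after_loss = 1 / f_after_loss

# After a loss, a win probability of .3 gives a negative expectation,
# so no bet is made:
me = 0.3 * 2 + 0.7 * -1            # Equation (1.03): -.1
```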
If there is dependency present, you must segregate the trades of the market system based upon the dependency and treat the segregated trades as separate market systems. The same principle, namely that asymptotic growth is maximized if each play is considered to be performed an infinite number of times into the future, also applies to simultaneous wagering (or trading a portfolio). Consider two betting systems, A and B. Both have a 2:1 payoff ratio, and both win 50% of the time. We will assume that the correlation coefficient between the two systems is 0, but that is not relevant to the point being illuminated here. The optimal fs for both systems (if they were being traded alone, rather than simultaneously) are .25, or to make 1 bet for every 4 units in equity. The optimal fs for trading both systems simultaneously are .23, or 1 bet for every 4.347826087 units in account equity.1 System B only trades two-thirds of the time, so some trades will be done when the two systems are not trading simultaneously. This first sequence is demonstrated with a starting combined bank of 1,000 units, and each bet for each system is performed with an optimal f of 1 bet per every 4.347826087 units:

System A            System B
Trade   P&L         Trade   P&L       Combined Bank
                                      1,000.00
-1      -230.00                       770.00
2       354.20      -1      -177.10   947.10
-1      -217.83     2       435.67    1,164.93

[Footnote 1: The method we are using here to arrive at these optimal bet sizes is described in Chapters 6 and 7. We are, in effect, using 3 market systems: Systems A and B as described here, both with an arithmetic HPR of 1.125 and a standard deviation in HPRs of .375, and null cash, with an HPR of 1.0 and a standard deviation of 0. The geometric average is thus maximized at approximately f = .23, where the weightings for A and B are both .92. Thus, the optimal fs for both A and B translate to 1 bet for every 4.347826 units. Using such factors will maximize growth in this game.]
   System A              System B
 Trade     P&L        Trade     P&L        Combined Bank
   2     +535.87      (no bet)     --           1,700.80
  -1     -391.18        -1     -391.18            918.43
   2     +422.48         2     +422.48          1,763.39

Next we see the same exact thing, the only difference being that when A is betting alone (i.e., when B does not have a bet at the same time as A), we make 1 bet for every 4 units in the combined bank for System A, since that is the optimal f on the single, individual play. On the plays where the bets are simultaneous, we are still betting 1 unit for every 4.347826087 units in account equity for both A and B. Notice that in so doing we are taking each bet, whether it is individual or simultaneous, and applying that optimal f which would maximize the play as though it were to be performed an infinite number of times in the future.

   System A              System B
 Trade     P&L        Trade     P&L        Combined Bank
                                                1,000.00
  -1     -250.00      (no bet)     --             750.00
   2     +345.00        -1     -172.50            922.50
  -1     -212.17         2     +424.35          1,134.67
   2     +567.34      (no bet)     --           1,702.01
  -1     -391.46        -1     -391.46            919.09
   2     +422.78         2     +422.78          1,764.65

As can be seen, there is a slight gain to be obtained by doing this, and the more trades that elapse, the greater the gain. The same principle applies to trading a portfolio where not all components of the portfolio are in the market all the time. You should trade at the optimal levels for the combination of components (or single component) that results in the optimal growth as though that combination of components (or single component) were to be traded an infinite number of times in the future.

EFFICIENCY LOSS IN SIMULTANEOUS WAGERING OR PORTFOLIO TRADING

Let's again return to our 2:1 coin-toss game. Let's again assume that we are going to play two of these games, which we'll call System A and System B, simultaneously and that there is zero correlation between the outcomes of the two games. We can determine our optimal fs for such a case as betting 1 unit for every 4.347826 in account equity when the games are played simultaneously.
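The joint optimal f of about .23 cited in the footnote can be checked numerically. The sketch below is my own brute-force search, not the book's method (the book derives it via the techniques of Chapters 6 and 7): it grid-searches the per-game bet fraction that maximizes the geometric mean HPR over the four equally likely joint outcomes of two independent 2:1 coin-toss games.

```python
# Sketch (my own): grid-search the common bet fraction f, wagered on each
# of two simultaneous independent 2:1 coin-toss games, that maximizes the
# geometric mean holding-period return.
import itertools

joint = list(itertools.product([2, -1], repeat=2))  # 4 equally likely pairs

def geo_mean_hpr(f):
    g = 1.0
    for a, b in joint:
        g *= 1 + f * a + f * b      # both games bet the same fraction f
    return g ** (1 / len(joint))

best_f = max((i / 1000 for i in range(1, 400)), key=geo_mean_hpr)
print(best_f, 1 / best_f)           # ~0.23, about 1 bet per 4.35 units per game
```

The search lands on f ≈ .23 per game, in agreement with the 1 bet per 4.347826087 units used in the tables above.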
When starting with a bank of 100 units, notice that we finish with a bank of 156.86 units:

Optimal f is 1 unit for every 4.347826 in equity:

   System A           System B
 Trade    P&L       Trade    P&L        Bank
                                      100.00
  -1    -23.00       -1    -23.00      54.00
   2    +24.84       -1    -12.42      66.42
  -1    -15.28        2    +30.55      81.70
   2    +37.58        2    +37.58     156.86

Optimal f is 1 unit for every 8.00 in equity:

   System A           System B
 Trade    P&L       Trade    P&L        Bank
                                      100.00
  -1    -12.50       -1    -12.50      75.00
   2    +18.75        2    +18.75     112.50
  -1    -14.06       -1    -14.06      84.38
   2    +21.09        2    +21.09     126.56

Now let's consider System C. This would be the same as Systems A and B, only we're going to play this game alone, without another game going simultaneously. We're also going to play it for 8 plays, as opposed to the previous endeavor, where we played 2 games for 4 simultaneous plays. Now our optimal f is to bet 1 unit for every 4 units in equity. What we have is the same 8 outcomes as before, but a different, better end result:

System C, optimal f is 1 unit for every 4.00 in equity:

 Trade    P&L        Bank
                    100.00
  -1    -25.00       75.00
   2    +37.50      112.50
  -1    -28.13       84.38
   2    +42.19      126.56
   2    +63.28      189.84
   2    +94.92      284.77
  -1    -71.19      213.57
  -1    -53.39      160.18

The end result here is better not because the optimal fs differ slightly (both are at their respective optimal levels), but because there is a small efficiency loss involved with simultaneous wagering. This inefficiency is the result of not being able to recapitalize your account after every single wager as you could betting only 1 market system. In the simultaneous 2-bet case, you can only recapitalize 3 times, whereas in the single 8-bet case you recapitalize 7 times. Hence the efficiency loss in simultaneous wagering (or in trading a portfolio of market systems). We just witnessed the case where the simultaneous bets were not correlated.
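The ending banks of 156.86 and 160.18 can be replayed in a few lines. This sketch is mine, not the book's; it simply applies each play's holding-period return to the running bank, which is exactly the recapitalization schedule described above:

```python
# Sketch: replay simultaneous two-game betting vs. single-game betting.
# f = 0.23 is 1 bet per 4.347826 units; f = 0.25 is 1 bet per 4 units.
def simultaneous_bank(bank, pairs, f):
    for a, b in pairs:            # both bets sized off the same bank
        bank *= 1 + f * a + f * b
    return bank

def single_bank(bank, outcomes, f):
    for x in outcomes:            # recapitalize after every single play
        bank *= 1 + f * x
    return bank

pairs = [(-1, -1), (2, -1), (-1, 2), (2, 2)]          # Systems A and B
singles = [-1, 2, -1, 2, 2, 2, -1, -1]                # System C, 8 plays
print(round(simultaneous_bank(100, pairs, 0.23), 2))  # 156.86
print(round(single_bank(100, singles, 0.25), 2))      # 160.18
```

Both runs use the same eight outcomes; only the ability to recapitalize between them differs, which is the efficiency loss in question.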
Let's look at what happens when we deal with positive (+1.00) correlation. Notice that after 4 simultaneous plays where the correlation between the market systems employed is +1.00, the result is a bank of 126.56 on a starting stake of 100 units. This equates to a TWR of 1.2656, or a geometric mean, a growth factor per play (even though these are combined plays), of 1.2656^(1/4) = 1.06066. Now refer back to the single-bet case. Notice here that after 4 plays, the outcome is 126.56, again on a starting stake of 100 units. Thus, the geometric mean is again 1.06066. This demonstrates that the rate of growth is the same when trading at the optimal fractions for perfectly correlated markets. As soon as the correlation coefficient comes down below +1.00, the rate of growth increases. Thus, we can state that when combining market systems, your rate of growth will never be any less than with the single-bet case, no matter how high the correlations are, provided that the market system being added has a positive arithmetic mathematical expectation. Recall the first example in this section, where there were 2 market systems that had a zero correlation coefficient between them. That combination turned 100 units into 156.86 after 4 plays, for a geometric mean of (156.86/100)^(1/4) = 1.119. Let's now look at a case where the correlation coefficients are -1.00. Since there is never a losing play under the following scenario, the optimal amount to bet is an infinitely high amount (in other words, bet 1 unit for every infinitely small amount of account equity). But, rather than getting that greedy, we'll just make 1 bet for every 4 units in our stake so that we can make the illustration here:

Optimal f is 1 unit for every 0.00 in equity (shown is 1 for every 4):

   System A           System B
 Trade    P&L       Trade    P&L        Bank
                                      100.00
  -1    -12.50        2    +25.00     112.50
   2    +28.13       -1    -14.06     126.56
  -1    -15.82        2    +31.64     142.38
   2    +35.60       -1    -17.80     160.18

There are two main points to glean from this section.
The first is that there is a small efficiency loss with simultaneous betting or portfolio trading, a loss caused by the inability to recapitalize after every individual play. The second point is that combining market systems, provided they have a positive mathematical expectation, and even if they have perfect positive correlation, never decreases your total growth per time period. However, as you continue to add more and more market systems, the efficiency loss becomes considerably greater. If you have, say, 10 market systems and they all suffer a loss simultaneously, that loss could be terminal to the account, since you have not been able to trim back size for each loss as you would have had the trades occurred sequentially. Therefore, we can say that there is a gain from adding each new market system to the portfolio provided that the market system has a correlation coefficient less than 1 and a positive mathematical expectation, or a negative expectation but a low enough correlation to the other components in the portfolio to more than compensate for the negative expectation. There is a marginally decreasing benefit to the geometric mean for each market system added. That is, each new market system benefits the geometric mean to a lesser and lesser degree. Further, as you add each new market system, there is a greater and greater efficiency loss caused as a result of simultaneous rather than sequential outcomes. At some point, adding another market system will do more harm than good.

TIME REQUIRED TO REACH A SPECIFIED GOAL AND THE TROUBLE WITH FRACTIONAL F

Suppose we are given the arithmetic average HPR and the geometric average HPR for a given system. We can determine the standard deviation in HPRs from the formula for estimated geometric mean:

(1.19a) EGM = (AHPR^2-SD^2)^(1/2)

where AHPR = The arithmetic mean HPR.
      SD = The population standard deviation in HPRs.
Therefore, we can estimate the standard deviation, SD, as:

(2.04) SD^2 = AHPR^2-EGM^2

Returning to our 2:1 coin-toss game, we have a mathematical expectation of $.50, and an optimal f of betting $1 for every $4 in equity, which yields a geometric mean of 1.06066. We can use Equation (2.05) to determine our arithmetic average HPR:

(2.05) AHPR = 1+(ME/f$)

where AHPR = The arithmetic average HPR.
      ME = The arithmetic mathematical expectation in units.
      f$ = The biggest loss/-f.
      f = The optimal f (0 to 1).

Thus, we would have an arithmetic average HPR of:

AHPR = 1+(.5/(-1/-.25))
     = 1+(.5/4)
     = 1+.125
     = 1.125

Now, since we have our AHPR and our EGM, we can employ Equation (2.04) to determine the estimated standard deviation in the HPRs:

(2.04) SD^2 = AHPR^2-EGM^2
            = 1.125^2-1.06066^2
            = 1.265625-1.124999636
            = .140625364

Thus SD^2, which is the variance in HPRs, is .140625364. Taking the square root of this yields a standard deviation in these HPRs of .140625364^(1/2) = .3750004853. You should note that this is the estimated standard deviation because it uses the estimated geometric mean as input. It is probably not completely exact, but it is close enough for our purposes. However, suppose we want to convert these values for the standard deviation (or variance), arithmetic, and geometric mean HPRs to reflect trading at a fractional f. These conversions are now given:

(2.06) FAHPR = (AHPR-1)*FRAC+1
(2.07) FSD = SD*FRAC
(2.08) FGHPR = (FAHPR^2-FSD^2)^(1/2)

where FRAC = The fraction of optimal f we are solving for.
      AHPR = The arithmetic average HPR at the optimal f.
      SD = The standard deviation in HPRs at the optimal f.
      FAHPR = The arithmetic average HPR at the fractional f.
      FSD = The standard deviation in HPRs at the fractional f.
      FGHPR = The geometric average HPR at the fractional f.

For example, suppose we want to see what values we would have for FAHPR, FGHPR, and FSD at half the optimal f (FRAC = .5) in our 2:1 coin-toss game.
Here, we know our AHPR is 1.125 and our SD is .3750004853. Thus:

(2.06) FAHPR = (AHPR-1)*FRAC+1
             = (1.125-1)*.5+1
             = .125*.5+1
             = .0625+1
             = 1.0625

(2.07) FSD = SD*FRAC
           = .3750004853*.5
           = .1875002427

(2.08) FGHPR = (FAHPR^2-FSD^2)^(1/2)
             = (1.0625^2-.1875002427^2)^(1/2)
             = (1.12890625-.03515634101)^(1/2)
             = 1.093749909^(1/2)
             = 1.04582499

Thus, for an optimal f of .25, or making 1 bet for every $4 in equity, we have values of 1.125, 1.06066, and .3750004853 for the arithmetic average, geometric average, and standard deviation of HPRs, respectively. Now we have solved for a fractional (.5) f of .125, or making 1 bet for every $8 in our stake, yielding values of 1.0625, 1.04582499, and .1875002427 for the arithmetic average, geometric average, and standard deviation of HPRs, respectively. We can now take a look at what happens when we practice a fractional f strategy. We have already determined that under fractional f we will make geometrically less money than under optimal f. Further, we have determined that the drawdowns and variance in returns will be less with fractional f. What about the time required to reach a specific goal? We can quantify the expected number of trades required to reach a specific goal. This is not the same thing as the expected time required to reach a specific goal, but since our measurement is in trades we will use the two notions of time and trades elapsed interchangeably here:

(2.09) N = ln(Goal)/ln(Geometric Mean)

where N = The expected number of trades to reach a specific goal.
      Goal = The goal in terms of a multiple on our starting stake, a TWR.
      ln() = The natural logarithm function.

Returning to our 2:1 coin-toss example. At optimal f we have a geometric mean of 1.06066, and at half f this is 1.04582499. Now let's calculate the expected number of trades required to double our stake (goal = 2).
At full f:

N = ln(2)/ln(1.06066)
  = .6931471/.05889134
  = 11.76993

Thus, at the full f amount in this 2:1 coin-toss game, we anticipate it will take us 11.76993 plays (trades) to double our stake. Now, at the half f amount:

N = ln(2)/ln(1.04582499)
  = .6931471/.04480602
  = 15.46996

Thus, at the half f amount, we anticipate it will take us 15.46996 trades to double our stake. In other words, trading half f in this case will take us 31.44% longer to reach our goal. Well, that doesn't sound too bad. By being more patient, allowing 31.44% longer to reach our goal, we cut our drawdown in half and cut the standard deviation of our returns in half. Half f is a seemingly attractive way to go. The smaller the fraction of optimal f that you use, the smoother the equity curve, and hence the less time you can expect to be in the worst-case drawdown. Now, let's look at it in another light. Suppose you open two accounts, one to trade the full f and one to trade the half f. After 12 plays, your full f account will have more than doubled to 2.02728259 (1.06066^12) times your starting stake. After 12 trades your half f account will have grown to 1.712017427 (1.04582499^12) times your starting stake. This half f account will double at 16 trades to a multiple of 2.048067384 (1.04582499^16) times your starting stake. So, by waiting about one-third longer, you have achieved the same goal as with full optimal f, only with half the commotion. However, by trade 16 the full f account is now at a multiple of 2.565777865 (1.06066^16) times your starting stake. Full f will continue to pull out and away. By trade 100, your half f account should be at a multiple of 88.28796546 times your starting stake, but the full f will be at a multiple of 361.093016! So anyone who claims that the only thing you sacrifice by trading at a fractional rather than full f is the time required to reach a specific goal is completely correct. Yet time is what it's all about.
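The arithmetic above, Equations (2.06) through (2.09), can be scripted in a few lines. A sketch (the function names are mine, not the book's):

```python
import math

# Sketch of Equations (2.06)-(2.09); function names are my own.
def fractional_f_stats(ahpr, sd, frac):
    fahpr = (ahpr - 1) * frac + 1           # (2.06)
    fsd = sd * frac                         # (2.07)
    fghpr = (fahpr ** 2 - fsd ** 2) ** 0.5  # (2.08)
    return fahpr, fsd, fghpr

def trades_to_goal(goal, geo_mean):
    return math.log(goal) / math.log(geo_mean)  # (2.09)

fahpr, fsd, fghpr = fractional_f_stats(1.125, 0.3750004853, 0.5)
print(fahpr, fsd, fghpr)            # 1.0625, 0.1875002427, ~1.04582499
print(trades_to_goal(2, 1.06066))   # ~11.77 trades to double at full f
print(trades_to_goal(2, fghpr))     # ~15.47 trades to double at half f
```

Running it reproduces the 11.76993 and 15.46996 figures in the text to rounding.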
We can put our money in Treasury Bills and they will reach a specific goal in a certain time with an absolute minimum of drawdown and variance! Time truly is of the essence.

COMPARING TRADING SYSTEMS

We have seen that two trading systems can be compared on the basis of their geometric means at their respective optimal fs. Further, we can compare systems based on how high their optimal fs themselves are, with the higher optimal f being the riskier system. This is because the minimum drawdown experienced will be at least an f percent equity retracement. So, there are two basic measures for comparing systems: the geometric means at the optimal fs, with the higher geometric mean being the superior system, and the optimal fs themselves, with the lower optimal f being the superior system. Thus, rather than having a single, one-dimensional measure of system performance, we see that performance must be measured on a two-dimensional plane, one axis being the geometric mean, the other being the value for f itself. The higher the geometric mean at the optimal f, the better the system. Also, the lower the optimal f, the better the system. Geometric mean does not imply anything regarding drawdown. That is, a higher geometric mean does not mean a higher (or lower) drawdown. The geometric mean only pertains to return. The optimal f is the measure of minimum expected historical drawdown as a percentage of equity retracement. A higher optimal f does not mean a higher (or lower) return. We can also use these benchmarks to compare a given system at a fractional f value to another given system at its full optimal f value. Therefore, when looking at systems, you should look at them in terms of how high their geometric means are and what their optimal fs are. For example, suppose we have System A, which has a 1.05 geometric mean and an optimal f of .8. Also, we have System B, which has a geometric mean of 1.025 and an optimal f of .4.
System A at the half f level will have the same minimum historical worst-case equity retracement (drawdown) of 40% as System B at its full f, but System A's geometric mean at half f will still be higher than System B's at the full f amount. Therefore, System A is superior to System B. "Wait a minute," you say, "I thought the only thing that mattered was that we had a geometric mean greater than 1, that the system need be only marginally profitable, that we can make all the money we want through money management!" That's still true. However, the rate at which you will make the money is still a function of the geometric mean at the f level you are employing. The expected variability will be a function of how high the f you are using is. So, although it's true that you must have a system with a geometric mean at the optimal f that is greater than 1 (i.e., a positive mathematical expectation) and that you can still make virtually an unlimited amount with such a system after enough trades, the rate of growth (the number of trades required to reach a specific goal) is dependent upon the geometric mean at the f value employed. The variability en route to that goal is also a function of the f value employed. Yet these considerations, the degree of the geometric mean and the f employed, are secondary to the fact that you must have a positive mathematical expectation, although they are useful in comparing two systems or techniques that have positive mathematical expectations and an equal confidence of their working in the future.

TOO MUCH SENSITIVITY TO THE BIGGEST LOSS

A recurring criticism with the entire approach of optimal f is that it is too dependent on the biggest losing trade. This seems to be rather disturbing to many traders. They argue that the amount of contracts you put on today should not be so much a function of a single bad trade in the past.
Numerous different algorithms have been worked up by people to alleviate this apparent oversensitivity to the largest loss. Many of these algorithms work by adjusting the largest loss upward or downward to make the largest loss be a function of the current volatility in the market. The relationship seems to be a quadratic one. That is, the absolute value of the largest loss seems to get bigger at a faster rate than the volatility. (Volatility is usually defined by these practitioners as the average daily range of the last few weeks, or the average absolute value of the daily net change of the last few weeks, or any of the other conventional measures of volatility.) However, this is not a deterministic relationship. That is, just because the volatility is X today does not mean that our largest loss will be X^Y. It simply means that it usually is somewhere near X^Y. If we could determine in advance what the largest possible loss would be going into today, we could then have a much better handle on our money management.2 Here again is a case where we must consider the worst-case scenario and build from there. The problem is that we do not know exactly what our largest loss can be going into today. An algorithm that can predict this is really not very useful to us because of the one time that it fails.

2 This is where using options in a trading strategy is so useful. Either buying a put or call outright in opposition to the underlying position to limit the loss to the strike price of the options, or simply buying options outright in lieu of the underlying, gives you a floor, an absolute maximum loss. Knowing this is extremely handy from a money-management, particularly an optimal f, standpoint. Further, if you know what your maximum possible loss is in advance (e.g., a day trade), then you can always determine what the f is in dollars perfectly for any trade by the relation dollars at risk per unit/optimal f. For example, suppose a day trader knew her optimal f was .4.
Her stop today, on a 1-unit basis, is going to be $900. She will therefore optimally trade 1 unit for every $2,250 ($900/.4) in account equity. Consider for instance the possibility of an exogenous shock occurring in a market overnight. Suppose the volatility were quite low prior to this overnight shock, and the market then went locked-limit against you for the next few days. Or suppose that there were no price limits, and the market just opened an enormous amount against you the next day. These types of events are as old as commodity and stock trading itself. They can and do happen, and they are not always telegraphed in advance by increased volatility. Generally, then, you are better off not to "shrink" your largest historical loss to reflect a current low-volatility marketplace. Furthermore, there is the concrete possibility of experiencing a loss larger in the future than what was the historically largest loss. There is no mandate that the largest loss seen in the past is the largest loss you can experience today.3 This is true regardless of the current volatility coming into today. The problem is that, empirically, the f that has been optimal in the past is a function of the largest loss of the past. There's no getting around this. However, as you shall see when we get into the parametric techniques, you can budget for a greater loss in the future. In so doing, you will be prepared if the almost inevitable larger loss comes along. Rather than trying to adjust the largest loss to the current climate of a given market so that your empirical optimal f reflects the current climate, you will be much better off learning the parametric techniques. The technique that follows is a possible solution to this problem, and it can be applied whether we are deriving our optimal f empirically or, as we shall learn later, parametrically.

EQUALIZING OPTIMAL F

Optimal f will yield the greatest geometric growth on a stream of outcomes. This is a mathematical fact.
Consider the hypothetical stream of outcomes:

+2, -3, +10, -5

This is a stream from which we can determine our optimal f as .17, or to bet 1 unit for every $29.41 in equity. Doing so on such a stream will yield the greatest growth on our equity. Consider for a moment that this stream represents the trade profits and losses on one share of stock. Optimally, we should buy one share of stock for every $29.41 that we have in account equity, regardless of what the current stock price is. But suppose the current stock price is $100 per share. Further, suppose the stock was $20 per share when the first two trades occurred and was $50 per share when the last two trades occurred. Recall that with optimal f we are using the stream of past trade P&L's as a proxy for the distribution of expected trade P&L's currently. Therefore, we can preprocess the trade P&L data to reflect this by converting the past trade P&L data to a commensurate percentage gain or loss based upon the current price. For our first two trades, which occurred at a stock price of $20 per share, the $2 gain corresponds to a 10% gain and the $3 loss corresponds to a 15% loss. For the last two trades, taken at a stock price of $50 per share, the $10 gain corresponds to a 20% gain and the $5 loss corresponds to a 10% loss. The formulas to convert raw trade P&L's to percentage gains and losses for longs and shorts are as follows:

(2.10a) P&L% = Exit Price/Entry Price-1 (for longs)
(2.10b) P&L% = 1-Exit Price/Entry Price (for shorts)

or we can use the following formula to convert both longs and shorts:

(2.10c) P&L% = P&L in Points/Entry Price

Thus, for our 4 hypothetical trades, we now have the following stream of percentage gains and losses (assuming all trades are long trades):

+.1, -.15, +.2, -.1

We call this new stream of translated P&L's the equalized data, because it is equalized to the price of the underlying instrument when the trade occurred.
To account for commissions and slippage, you must adjust the exit price downward in Equation (2.10a) for an amount commensurate with the amount of the commissions and slippage. Likewise, you should adjust the exit price upward in (2.10b). If you are using (2.10c), you must deduct the amount of the commissions and slippage (in points again) from the numerator P&L in Points. Next we determine our optimal f on these percentage gains and losses. The f that is optimal is .09. We must now convert this optimal f of .09 into a dollar amount based upon the current stock price. This is accomplished by the following formula: (2.11) f$ = Biggest % Loss*Current Price*$ per Point/-f Thus, since our biggest percentage loss was -.15, the current price is $100 per share, and the number of dollars per full point is 1 (since we are only dealing with buying 1 share), we can determine our f$ as: f$ = -.15*100*1/-.09 = -15/-.09 = 166.67 Thus, we would optimally buy 1 share for every $166.67 in account equity. If we used 100 shares as our unit size, the only variable affected would have been the number of dollars per full point, which would have been 100. The resulting f$ would have been $16,666.67 in equity for every 100 shares. Suppose now that the stock went down to $3 per share. Our f$ equation would be exactly the same except for the current price variable which would now be 3. Thus, the amount to finance 1 share by becomes: f$ = -.15*3*1/-.09 = -.45/-.09 = 5 We optimally would buy 1 share for every $5 we had in account equity. Notice that the optimal f does not change with the current price of the stock. It remains at .09. However, the f$ changes continuously as the price of the stock changes. This doesn't mean that you must alter a position you are already in on a daily basis, but it does make it more likely to be beneficial that you do so. 
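The optimal fs of .17 and .09 and the f$ values above can be verified numerically. This sketch is my own brute-force search, not the book's method: it grid-searches the f that maximizes the terminal wealth relative (TWR) of each stream, then applies Equation (2.11).

```python
# Sketch (my own): grid-search the optimal f for a P&L stream, then
# compute f$ per Equation (2.11).
def optimal_f(pnls, step=0.01):
    biggest_loss = min(pnls)                    # a negative number
    def twr(f):
        prod = 1.0
        for p in pnls:
            prod *= 1 + f * p / -biggest_loss   # HPR of each trade at f
        return prod
    return max((i * step for i in range(1, int(1 / step))), key=twr)

raw = [2, -3, 10, -5]
equalized = [0.1, -0.15, 0.2, -0.1]
print(optimal_f(raw))        # ~0.17 -> f$ = 5/.17 = $29.41 per share
print(optimal_f(equalized))  # ~0.09

# Equation (2.11): f$ = Biggest % Loss * Current Price * $ per Point / -f
print(-0.15 * 100 * 1 / -0.09)  # ~166.67: 1 share per $166.67 at $100/share
print(-0.15 * 3 * 1 / -0.09)    # ~5.00: 1 share per $5 at $3/share
```

Note that the optimal f of .09 stays fixed while f$ floats with the current price, exactly as the text describes.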
As an example, if you are long a given stock and it declines, the dollars that you should allocate to 1 unit (100 shares in this case) of this stock will decline as well, with the optimal f determined off of equalized data. If your optimal f is determined off of the raw trade P&L data, it will not decline. In both cases, your daily equity is declining. Using the equalized optimal f makes it more likely that adjusting your position size daily will be beneficial. Equalizing the data for your optimal f necessitates changes in the by-products.4 We have already seen that both the optimal f and the geometric mean (and hence the TWR) change. The arithmetic average trade changes because now it, too, must be based on the idea that all trades in the past must be adjusted as if they had occurred from the current price. Thus, in our hypothetical example of outcomes on 1 share of +2, -3, +10, and -5, we have an average trade of $1. When we take our percentage gains and losses of +.1, -.15, +.2, and -.1, we have an average trade (in percent) of +.05. At $100 per share, this translates into an average trade of 100*.05 or $5 per trade. At $3 per share, the average trade becomes $.15 (3*.05). The geometric average trade changes as well. Recall Equation (1.14) for the geometric average trade:

(1.14) GAT = G*(Biggest Loss/-f)

where G = Geometric mean-1.
      f = Optimal fixed fraction.

(and, of course, our biggest loss is always a negative number). This equation is the equivalent of:

GAT = (geometric mean-1)*f$

4 Prudence requires that we use a largest loss at least as big as the largest loss seen in the past. As the future unfolds and we obtain more and more data, we will derive longer runs of losses. For instance, if I flip a coin 100 times I might see it come up tails 12 times in a row at the longest run of tails. If I go and flip it 1,000 times, I most likely will see a longer run of tails. This same principle is at work when we trade.
Not only should we expect longer streaks of losing trades in the future, we should also expect a bigger largest losing trade. Risk-of-ruin equations, although not directly addressed in this text, must also be adjusted to reflect equalized data when being used. Generally, risk-of-ruin equations use the raw trade P&L data as input. However, when you use equalized data, the new stream of percentage gains and losses must be multiplied by the current price of the underlying instrument and the resulting stream used. Thus, a stream of percentage gains and losses such as .1, -.15, .2, -.1 translates into a stream of 10, -15, 20, -10 for an underlying at a current price of $100. This new stream should then be used as the data for the risk-of-ruin equations. We have already obtained a new geometric mean by equalizing the past data. The f$ variable, which is constant when we do not equalize the past data, now changes continuously, as it is a function of the current underlying price. Hence our geometric average trade changes continuously as the price of the underlying instrument changes. Our threshold to the geometric also must be changed to reflect the equalized data. Recall Equation (2.02) for the threshold to the geometric:

(2.02) T = AAT/GAT*Biggest Loss/-f

where T = The threshold to the geometric.
      AAT = The arithmetic average trade.
      GAT = The geometric average trade.
      f = The optimal f (0 to 1).

This equation can also be rewritten as:

T = AAT/GAT*f$

Now, not only do the AAT and GAT variables change continuously as the price of the underlying changes, so too does the f$ variable. Finally, when putting together a portfolio of market systems we must figure daily HPRs. These too are a function of f$:

(2.12) Daily HPR = D$/f$+1

where D$ = The dollar gain or loss on 1 unit from the previous day. This is equal to (Tonight's Close-Last Night's Close)*Dollars per Point.
      f$ = The current optimal f in dollars, calculated from Equation (2.11).
Here, however, the current price variable is last night's close. For example, suppose a stock tonight closed at $99 per share. Last night it was $102 per share. Our biggest percentage loss is -.15. If our f is .09, then our f$ is:

f$ = -.15*102*1/-.09
   = -15.3/-.09
   = 170

Since we are dealing with only 1 share, our dollars-per-point value is $1. We can now determine our daily HPR for today by Equation (2.12) as:

(2.12) Daily HPR = (99-102)*1/170+1
                 = -3/170+1
                 = -.01764705882+1
                 = .9823529412

Return now to what was said at the outset of this discussion. Given a stream of trade P&L's, the optimal f will make the greatest geometric growth on that stream (provided it has a positive arithmetic mathematical expectation). We use the stream of trade P&L's as a proxy for the distribution of possible outcomes on the next trade. Along this line of reasoning, it may be advantageous for us to equalize the stream of past trade profits and losses to be what they would be if they were performed at the current market price. In so doing, we may obtain a more realistic proxy of the distribution of potential trade profits and losses on the next trade. Therefore, we should figure our optimal f from this adjusted distribution of trade profits and losses. This does not mean that we would have made more by using the optimal f off of the equalized data.
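The daily HPR example above, Equations (2.11) and (2.12), as a short sketch (variable and function names are my own):

```python
# Sketch of Equations (2.11) and (2.12). Names are my own, not the book's.
def f_dollars(biggest_pct_loss, price, dollars_per_point, f):
    return biggest_pct_loss * price * dollars_per_point / -f    # (2.11)

def daily_hpr(close_tonight, close_last_night, dollars_per_point, fd):
    d = (close_tonight - close_last_night) * dollars_per_point  # D$
    return d / fd + 1                                           # (2.12)

fd = f_dollars(-0.15, 102, 1, 0.09)   # price input is last night's close
print(fd)                             # 170.0
print(daily_hpr(99, 102, 1, fd))      # ~0.98235
```

Because f$ is recomputed from each night's close, the daily HPRs automatically reflect the equalized data.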
We would not have, as the following demonstration shows:

At f = .09, trading the equalized method:

 P&L   Percentage   Underlying   f$       Number       Cumulative
                    Price                 of Shares
                                                       $10,000.00
  +2      .1          20        $33.33    300          $10,600.00
  -3     -.15         20        $33.33    318           $9,646.00
 +10      .2          50        $83.33    115.752      $10,803.52
  -5     -.1          50        $83.33    129.642      $10,155.31

At f = .17, trading the nonequalized method:

 P&L   Percentage   Underlying   f$       Number       Cumulative
                    Price                 of Shares
                                                       $10,000.00
  +2      .1          20        $29.41    340.02       $10,680.04
  -3     -.15         20        $29.41    363.14        $9,590.61
 +10      .2          50        $29.41    326.1        $12,851.61
  -5     -.1          50        $29.41    436.98       $10,666.71

However, if all of the trades were figured off of the current price (say $100 per share), the equalized optimal f would have made more than the raw optimal f. Which then is the better to use? Should we equalize our data and determine our optimal f (and its by-products), or should we just run everything as it is? This is more a matter of your beliefs than it is mathematical fact. It is a matter of what is more pertinent in the item you are trading, percentage changes or absolute changes. Is a $2 move in a $20 stock the same as a $10 move in a $100 stock? What if we are discussing dollars and deutsche marks? Is a .30-point move at .4500 the same as a .40-point move at .6000? My personal opinion is that you are probably better off with the equalized data. Often the matter is moot, in that if a stock has moved from $20 per share to $100 per share and we want to determine the optimal f, we want to use current data. The trades that occurred at $20 per share may not be representative of the way the stock is presently trading, regardless of whether they are equalized or not. Generally, then, you are better off not using data where the underlying was at a dramatically different price than it presently is, as the characteristics of the way the item trades may have changed as well.
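The two equity streams in the table can be replayed directly. A sketch (mine, not the book's; fractional shares are allowed, as in the table):

```python
# Sketch: replay the 4 trades, sizing positions off a per-price f$ (equalized)
# versus a fixed f$ (raw). Names are my own.
def replay(trades, equity, f_dollars_for_price):
    for pnl, entry_price in trades:
        shares = equity / f_dollars_for_price(entry_price)
        equity += shares * pnl
    return equity

trades = [(2, 20), (-3, 20), (10, 50), (-5, 50)]   # (points P&L, entry price)

# Equalized: f = .09, f$ = .15 * price / .09, recomputed at each entry price.
equalized = replay(trades, 10_000, lambda price: 0.15 * price / 0.09)
# Nonequalized: f = .17, f$ fixed at 5/.17 = $29.41 regardless of price.
raw = replay(trades, 10_000, lambda price: 5 / 0.17)

print(round(equalized, 2))  # 10155.31
print(round(raw, 2))        # 10666.71
```

This reproduces the cumulative columns above: on this historical stream the raw f made more, which is exactly the point of the demonstration.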
In that sense, the optimal f off of the raw data and the optimal f off of the equalized data will be identical if all trades occurred at the same underlying price. So we can state that if it does matter a great deal whether you equalize your data or not, then you're probably using too much data anyway. You've gone so far into the past that the trades generated back then probably are not very representative of the next trade. In short, we can say that it doesn't much matter whether you use equalized data or not, and if it does, there's probably a problem. If there isn't a problem, and there is a difference between using the equalized data and the raw data, you should opt for the equalized data. This does not mean that the optimal f figured off of the equalized data would have been optimal in the past. It would not have been. The optimal f figured off of the raw data would have been optimal in the past. However, in terms of determining the as-yet-unknown answer to the question of what will be the optimal f (or closer to it) tomorrow, the optimal f figured off of the equalized data makes better sense, as the equalized data is a fairer representation of the distribution of possible outcomes on the next trade. Equations (2.10a) through (2.10c) will give different answers depending upon whether the trade was initiated as a long or a short. For example, if a stock is bought at 80 and sold at 100, the percentage gain is 25%. However, if a stock is sold short at 100 and covered at 80, the gain is only 20%. In both cases, the stock was bought at 80 and sold at 100, but the sequence (the chronology of these transactions) must be accounted for. As the chronology of transactions affects the distribution of percentage gains and losses, we assume that the chronology of transactions in the future will be more like the chronology in the past than not. Thus, Equations (2.10a) through (2.10c) will give different answers for longs and shorts.
Of course, we could ignore the chronology of the trades (using 2.10c for longs and using the exit price in the denominator of 2.10c for shorts), but to do so would be to reduce the information content of the trade's history. Further, the risk involved with a trade is a function of the chronology of the trade, a fact we would be forced to ignore.

DOLLAR AVERAGING AND SHARE AVERAGING IDEAS

Here is an old, underused money-management technique that is an ideal tool for dealing with situations where you are absent knowledge. Consider a hypothetical motorist, Joe Putzivakian, case number 286952343. Every week, he puts $20 of gasoline into his auto, regardless of the price of gasoline that week. He always gets $20 worth, and every week he uses the $20 worth no matter how much or how little that buys him. When the price for gasoline is higher, it forces him to be more austere in his driving. As a result, Joe Putzivakian will have gone through life buying more gasoline when it is cheaper, and buying less when it is more expensive. He will have therefore gone through life paying a below-average cost per gallon of gasoline. In other words, if you averaged the cost of a gallon of gasoline for all of the weeks during which Joe was a motorist, that average would have been higher than the average price Joe paid.

Now consider his hypothetical cousin, Cecil Putzivakian, case number 286952344. Whenever he needs gasoline, he just fills up his pickup and complains about the high price of gasoline. As a result, Cecil has used a consistent amount of gas each week, and has therefore paid the average price for it throughout his motoring lifetime.

Now let's suppose you are looking at a long-term investment program. You decide that you want to put money into a mutual fund to be used for your retirement many years down the road. You believe that when you retire the mutual fund will be at a much higher value than it is today.
That is, you believe that in an asymptotic sense the mutual fund will be an investment that makes money (of course, in an asymptotic sense, lightning does strike twice). However, you do not know if it is going to go up or down over the next month, or the next year. You are absent knowledge about the nearer-term performance of the mutual fund.

To cope with this, you can dollar average into the mutual fund. Say you want to space your entry into the mutual fund over the course of two years. Further, say you have $36,000 to invest. Therefore, every month for the next 24 months you will invest $1,500 of this $36,000 into the fund, until after 24 months you are completely invested. By so doing, you have obtained a below-average cost into the fund. "Average" as it is used here refers to the average price of the fund over the 24-month period during which you are investing. It doesn't necessarily mean that you will get a price that is cheaper than if you put the full $36,000 into it today, nor does it guarantee that at the end of these 24 months of entering the fund you will show a profit on your $36,000. The amount you have in the fund at that time may be less than the $36,000. What it does mean is that if you simply entered arbitrarily at some point along the next 24 months with your full $36,000 in one shot, you would probably have ended up buying fewer mutual fund shares, and hence have paid a higher price than if you dollar averaged in.

The same is true when you go to exit a mutual fund, only the exit side works with share averaging rather than dollar averaging. Say it is now time for you to retire and you have a total of 1,000 shares in this mutual fund. You don't know if this is a good time for you to be getting out or not, so you decide to take two years (24 months) to average out of the fund. Here's how you do it. You take the total number of shares you have (1,000) and divide it by the number of periods you want to get out over (24 months).
Therefore, since 1,000/24 = 41.67, you will sell 41.67 shares every month for the next 24 months. In so doing, you will have ended up selling your shares at a higher price than the average price over the next 24 months. Of course, this is no guarantee that you will have sold them for a higher price than you could have received for them today, nor does it guarantee that you will have sold your shares at a higher price than what you might get if you were to sell all of your shares 24 months from now. What you will get is a higher price than the average over the time period that you are averaging out over. That is guaranteed.

These same principles can be applied to a trading account. By dollar averaging money into a trading account, as opposed to simply "taking the plunge" at some point during the time period you are averaging over, you will have gotten into the account at a better average price. Absent knowledge of what the near-term equity changes in the account will be, you are better off, on average, to dollar average into a trading program. Don't just rely on your gut and your nose; use the measures of dependency discussed in Chapter 1 on the monthly equity changes of a trading program. Try to see if there is dependency in the monthly equity changes. If there is dependency to a high enough confidence level, so that you can plunge in at a favorable point, then do so. However, if there isn't a high enough confidence in the dependency of the monthly equity changes, then dollar average into (and share average out of) a trading program. In so doing, you will be ahead in an asymptotic sense.

The same is true for withdrawing money from an account. The way to share average out of a trading program (when there aren't any shares, as in a commodity account) is to decide upon a date to start averaging out, as well as how long a period of time to average out for. On the date when you are going to start averaging out, divide the equity in the account by 100.
This gives you the value of "1 share." Now, divide 100 by the number of periods that you want to average out over. Say you want to average out of the account weekly over the next 20 weeks. That makes 20 periods. Dividing 100 by 20 gives 5. Therefore, you are going to average out of your account by 5 "shares" per week. Multiply the value you had figured for 1 share by 5, and that will tell you how much money to withdraw from your trading account this week.

Now, going into next week, you must keep track of how many shares you have left. Since you got out of 5 shares last week, you are left with 95. When the time comes along for withdrawal number 2, divide the equity in your account by 95 and multiply by 5. This will give you the value of the 5 shares you are "cashing in" this week. You will keep on doing this until you have zero shares left, at which point no equity will be left in your account. By doing this, you have probably obtained a better average price for getting out of your account than you would have received had you gotten out of the account at some arbitrary point along this 20-week withdrawal period.

This principle of averaging in and out of a trading account is so simple, you have to wonder why no one ever does it. I always ask the accounts that I manage to do this. Yet I have never had anyone, to date, take me up on it. The reason is simple. The concept, although completely valid, requires discipline and time in order to work: exactly the same ingredients as those required to make the concept of optimal f work. Just ask Joe Putzivakian. It's one thing to understand the concepts and believe in them. It's another thing to do it.

THE ARC SINE LAWS AND RANDOM WALKS

Now we turn the discussion toward drawdowns. First, however, we need to study a little bit of theory in the way of the first and second arc sine laws. These are principles that pertain to random walks. The stream of trade P&L's that you are dealing with may not be truly random.
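The dollar-averaging and share-averaging ideas above can be sketched numerically (hypothetical prices, not data from the text; mathematically, dollar averaging in pays the harmonic mean of the prices while share averaging out receives their arithmetic mean, and the harmonic mean can never exceed the arithmetic mean):

```python
# Hypothetical monthly fund prices over the averaging period.
prices = [10.0, 12.5, 8.0, 11.0, 9.5, 10.5]

# Dollar averaging in: a fixed dollar amount each period buys more shares
# when the price is low, so the cost per share is the harmonic mean.
budget_per_period = 1500.0
shares_bought = sum(budget_per_period / p for p in prices)
avg_cost = budget_per_period * len(prices) / shares_bought   # ≈ 10.06 per share

# Share averaging out: a fixed number of shares each period, so the
# price received per share is the plain arithmetic average.
shares_per_period = shares_bought / len(prices)
proceeds = sum(shares_per_period * p for p in prices)
avg_sale = proceeds / shares_bought                          # = arithmetic mean

mean_price = sum(prices) / len(prices)                       # 10.25
print(round(avg_cost, 4), round(avg_sale, 4), round(mean_price, 4))
```

The gap between `avg_cost` and `mean_price` is exactly Joe Putzivakian's edge at the gas pump.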
The degree to which the stream of P&L's you are using differs from being purely random is the degree to which this discussion will not pertain to your stream of profits and losses. Generally, though, most streams of trade profits and losses are nearly random, as determined by the runs test and the linear correlation coefficient (serial correlation).

Furthermore, not only do the arc sine laws assume that you know in advance the amount that you can win or lose, they also assume that the amount you can win is equal to the amount you can lose, and that this is always a constant amount. In our discussion, we will assume that the amount you can win or lose is $1 on each play. The arc sine laws also assume that you have a 50% chance of winning and a 50% chance of losing. Thus, the arc sine laws assume a game where the mathematical expectation is 0. These caveats make for a game that is considerably different, and considerably simpler, than trading is. However, the first and second arc sine laws are exact for the game just described. To the degree that trading differs from the game just described, the arc sine laws do not apply. For the sake of learning the theory, however, we will not let these differences concern us for the moment.

Imagine a truly random sequence such as coin tossing[5], where we win 1 unit when we win and we lose 1 unit when we lose. If we were to plot out our equity curve over X tosses, we could refer to a specific point (X,Y), where X represents the Xth toss and Y our cumulative gain or loss as of that toss. We define positive territory as anytime the equity curve is above the X axis, or on the X axis when the previous point was above the X axis. Likewise, we define negative territory as anytime the equity curve is below the X axis, or on the X axis when the previous point was below the X axis. We would expect the total number of points in positive territory to be close to the total number of points in negative territory.
But this is not the case. If you were to toss the coin N times, your probability (Prob) of spending K of the events in positive territory is:

(2.13) Prob ~ 1/(Pi*K^.5*(N-K)^.5)

where Pi = 3.141592654.

The symbol ~ means that both sides tend to equality in the limit. In this case, as either K or (N-K) approaches infinity, the two sides of the equation will tend toward equality. Thus, if we were to toss a coin 10 times (N = 10), we would have the following probabilities of being in positive territory for K of the tosses:[6]

K    Probability
0    .14795
1    .1061
2    .0796
3    .0695
4    .065
5    .0637
6    .065
7    .0695
8    .0796
9    .1061
10   .14795

[5] Although empirical tests show that coin tossing is not a truly random sequence due to slight imperfections in the coin used, we will assume here, and elsewhere in the text when referring to coin tossing, that we are tossing an ideal coin with exactly a .5 chance of landing heads or tails.

[6] Note that since neither K nor N-K may equal 0 in Equation (2.13) (as you would then be dividing by 0), we can discern the probabilities corresponding to K = 0 and K = N by summing the probabilities from K = 1 to K = N-1 and subtracting this sum from 1. Dividing this difference by 2 will give us the probabilities associated with K = 0 and K = N.

You would expect to be in positive territory for 5 of the 10 tosses, yet that is the least likely outcome! In fact, the most likely outcomes are that you will be in positive territory for all of the tosses or for none of them! This principle is formally detailed in the first arc sine law, which states: for a fixed A (0 < A < 1), the probability that the fraction of time K/N spent in positive territory is less than or equal to A tends to:

(2.14) Prob{K/N <= A} ~ 2/Pi * arcsin(A^.5)

The same reasoning applies to the second law, where, rather than looking for an absolute maximum and minimum, we are looking for a maximum above the mathematical expectation and a minimum below it.
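Equation (2.13) and the probability table above can be reproduced directly (a quick sketch; the K = 0 and K = N entries use the summing trick from the footnote, and come out near .148 at full precision rather than the .14795 obtained from the rounded table entries):

```python
import math

# Equation (2.13): Prob ~ 1/(Pi * K^.5 * (N-K)^.5), valid for 0 < K < N.
N = 10
probs = {k: 1 / (math.pi * math.sqrt(k * (N - k))) for k in range(1, N)}

# Footnote [6]: K = 0 and K = N each get half of the leftover probability.
tail = (1 - sum(probs.values())) / 2
probs[0] = probs[N] = tail

for k in range(N + 1):
    print(k, round(probs[k], 4))
```

The even split (K = 5) really is the least likely outcome, and the extremes (K = 0 and K = 10) the most likely.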
The minimum below the mathematical expectation could be greater than the maximum above it if the minimum happened later and the arithmetic mathematical expectation was a rising line (as in trading) rather than a horizontal line at zero. Thus, we can interpret the spirit of the arc sine laws as applying to trading in the following ways. (However, rather than imagining the important line as being a horizontal line at zero, we should imagine a line that slopes upward at the rate of the arithmetic average trade, if we are constant-contract trading. If we are fixed fractional trading, the line will be one that curves upward, getting ever steeper, at such a rate that the next point equals the current point times the geometric mean.)

We can interpret the first arc sine law as stating that we should expect to be on one side of the mathematical expectation line for far more trades than we spend on the other side of the mathematical expectation line. Regarding the second arc sine law, we should expect the maximum deviations from the mathematical expectation line, either above or below it, as being most likely to occur near the beginning or the end of the equity curve graph and least likely near the center of it.

You will notice another characteristic that happens when you are trading at the optimal f levels. This characteristic concerns the length of time you spend between two equity high points. If you are trading at the optimal f level, whether you are trading just 1 market system or a portfolio of market systems, the time the longest drawdown[7] (not necessarily the worst, or deepest, drawdown) takes to elapse is usually 35 to 55% of the total time you are looking at. This seems to be true no matter how long or short a time period you are looking at! (Again, time in this sense is measured in trades.) This is not a hard-and-fast rule. Rather, it is the effect of the spirit of the arc sine laws at work.
It is perfectly natural, and should be expected. This principle appears to hold true no matter how long or short a period we are looking at. This means that we can expect to be in the largest drawdown for approximately 35 to 55% of the trades over the life of a trading program we are employing! This is true whether we are trading 1 market system or an entire portfolio. Therefore, we must learn to expect to be within the maximum drawdown for 35 to 55% of the life of a program that we wish to trade. Knowing this before the fact allows us to be mentally prepared to trade through it.

Whether you are about to manage an account, about to have one managed by someone else, or about to trade your own account, you should bear in mind the spirit of the arc sine laws and how they work on your equity curve relative to the mathematical expectation line, along with the 35% to 55% rule. By so doing you will be tuned to reality regarding what to expect as the future unfolds.

We have now covered the empirical techniques entirely. Further, we have discussed many characteristics of fixed fractional trading and have introduced some salutary techniques, which will be used throughout the sequel. We have seen that by trading at the optimal levels of money management, not only can we expect substantial drawdowns, but the time spent between two equity highs can also be quite substantial. Now we turn our attention to studying the parametric techniques, the subject of the next chapter.

TIME SPENT IN A DRAWDOWN

Recall the arc sine laws and the caveats involved with them. In a nutshell, the second arc sine law states that the maximum or minimum of the equity curve are most likely to occur near its endpoints and least likely to occur in its center. The laws assume a 50% chance of winning and a 50% chance of losing. Further, they assume that you win or lose the exact same amounts and that the generating stream is purely random. Trading is considerably more complicated than this.
Thus, the arc sine laws don't apply in a pure sense, but they do apply in spirit. Consider that the arc sine laws worked on an arithmetic mathematical expectation of 0. Thus, with the first law, we can interpret the percentage of time on either side of the zero line as the percentage of time on either side of the arithmetic mathematical expectation. Likewise with the second law: the maximum and minimum are interpreted relative to the mathematical expectation line rather than the zero line.

[7] By "longest drawdown" here is meant the longest time, in terms of the number of elapsed trades, between one equity peak and the time (or number of elapsed trades) until that peak is equaled or exceeded.

Chapter 3 - Parametric Optimal f on the Normal Distribution

Now that we are finished with our discussion of the empirical techniques, as well as the characteristics of fixed fractional trading, we enter the realm of the parametric techniques. Simply put, these techniques differ from the empirical in that they do not use the past history itself as the data to be operated on. Rather, we observe the past history to develop a mathematical description of the distribution of that data. This mathematical description is based upon what has happened in the past as well as what we expect to happen in the future. In the parametric techniques we operate on these mathematical descriptions rather than on the past history itself.

The mathematical descriptions used in the parametric techniques are most often what are referred to as probability distributions. Therefore, if we are to study the parametric techniques, we must study probability distributions (in general) as a foundation. We will then move on to studying a certain type of distribution, the Normal Distribution. Then we will see how to find the optimal f and its by-products on the Normal Distribution.

THE BASICS OF PROBABILITY DISTRIBUTIONS

Imagine, if you will, that you are at a racetrack and you want to keep a log of the position in which the horses in a race finish.
Specifically, you want to record whether the horse in the pole position came in first, second, and so on for each race of the day. You will only record ten places. If the horse came in worse than tenth place, you will record it as a tenth-place finish. If you do this for a number of days, you will have gathered enough data to see the distribution of finishing positions for a horse starting out in the pole position. Now you take your data and plot it on a graph. The horizontal axis represents where the horse finished, with the far left being the worst finishing position (tenth) and the far right being a win. The vertical axis will record how many times the pole position horse finished in the position noted on the horizontal axis. You would begin to see a bell-shaped curve develop.

Under this scenario, there are ten possible finishing positions for each race. We say that there are ten bins in this distribution. What if, rather than using ten bins, we used five? The first bin would be for a first- or second-place finish, the second bin for a third- or fourth-place finish, and so on. What would have been the result? Using fewer bins on the same set of data would have resulted in a probability distribution with the same profile as one determined on the same data with more bins. That is, they would look pretty much the same graphically. However, using fewer bins does reduce the information content of a distribution. Likewise, using more bins increases the information content of a distribution. If, rather than recording the finishing position of the pole position horse in each race, we record the time the horse ran in, rounded to the nearest second, we will get more than ten bins, and thus the information content of the distribution obtained will be greater.

If we recorded the exact finish time, rather than rounding to the nearest second, we would be creating what is called a continuous distribution. In a continuous distribution, there are no bins.
Think of a continuous distribution as a series of infinitely thin bins (see Figure 3-1). A continuous distribution differs from a discrete distribution, the type we discussed first, in that a discrete distribution is a binned distribution. Although binning does reduce the information content of a distribution, in real life it is often necessary to bin data. Therefore, in real life it is often necessary to lose some of the information content of a distribution, while keeping the profile of the distribution the same, so that you can process the distribution. Finally, you should know that it is possible to take a continuous distribution and make it discrete by binning it, but it is not possible to take a discrete distribution and make it continuous.

Figure 3-1 A continuous distribution is a series of infinitely thin bins.

When we are discussing the profits and losses of trades, we are essentially discussing a continuous distribution. A trade can take a multitude of values (although we could say that the data is binned to the nearest cent). In order to work with such a distribution, you may find it necessary to bin the data into, for example, one-hundred-dollar-wide bins. Such a distribution would have a bin for trades that made nothing to $99.99, the next bin would be for trades that made $100 to $199.99, and so on. There is a loss of information content in binning this way, yet the profile of the distribution of the trade profits and losses remains relatively unchanged.

DESCRIPTIVE MEASURES OF DISTRIBUTIONS

Most people are familiar with the average, or more specifically the arithmetic mean. This is simply the sum of the data points in a distribution divided by the number of data points:

(3.01) A = (∑[i = 1,N] Xi)/N

where

A = The arithmetic mean.
Xi = The ith data point.
N = The total number of data points in the distribution.

The arithmetic mean is the most common of the measures of location, or central tendency, of a body of data, a distribution.
However, you should be aware that the arithmetic mean is not the only available measure of central tendency, and often it is not the best. The arithmetic mean tends to be a poor measure when a distribution has very broad tails. Suppose you randomly select data points from a distribution and calculate their mean. If you continue to do this, you will find that the arithmetic means thus obtained converge poorly, if at all, when you are dealing with a distribution with very broad tails.

Another important measure of location of a distribution is the median. The median is described as the middle value when data are arranged in an array according to size. The median divides a probability distribution into two halves such that the area under the curve of one half is equal to the area under the curve of the other half. The median is frequently a better measure of central tendency than the arithmetic mean. Unlike the arithmetic mean, the median is not distorted by extreme outlier values. Further, the median can be calculated even for open-ended distributions. An open-ended distribution is a distribution in which all of the values in excess of a certain bin are thrown into one bin. An example of an open-ended distribution is the one we were compiling when we recorded the finishing position in horse racing for the horse starting out in the pole position. Any finishes worse than tenth place were recorded as a tenth-place finish. Thus, we had an open-ended distribution. The median is extensively used by the U.S. Bureau of the Census.

The third measure of central tendency is the mode: the most frequent occurrence. The mode is the peak of the distribution curve. In some distributions there is no mode and sometimes there is more than one mode. Like the median, the mode can often be regarded as a superior measure of central tendency. The mode is completely independent of extreme outlier values, and it is more readily obtained than the arithmetic mean or the median.
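A small numerical illustration of these three measures (hypothetical data; the single outlier drags the arithmetic mean but leaves the median and mode untouched):

```python
from statistics import mean, median, mode

data = [3, 4, 4, 5, 6, 7, 200]   # hypothetical data with one extreme outlier

print(mean(data))    # ≈ 32.71: badly distorted by the outlier
print(median(data))  # 5: unaffected
print(mode(data))    # 4: the most frequent value, also unaffected
```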
We have seen how the median divides the distribution into two equal areas. In the same way a distribution can be divided by three quartiles (to give four areas of equal size or probability), or nine deciles (to give ten areas of equal size or probability), or 99 percentiles (to give 100 areas of equal size or probability). The 50th percentile is the median, and along with the 25th and 75th percentiles it gives us the quartiles. Finally, another term you should become familiar with is that of a quantile. A quantile is any of the N-1 variate values that divide the total frequency into N equal parts.

We now return to the mean. We have discussed the arithmetic mean as a measure of central tendency of a distribution. You should be aware that there are other types of means as well. These other means are less common, but they do have significance in certain applications.

First is the geometric mean, which we saw how to calculate in the first chapter. The geometric mean is simply the Nth root of all the data points multiplied together.

(3.02) G = (∏[i = 1,N] Xi)^(1/N)

where

G = The geometric mean.
Xi = The ith data point.
N = The total number of data points in the distribution.

The geometric mean cannot be used if any of the variate values is zero or negative. We can state that the arithmetic mathematical expectation is the arithmetic average outcome of each play (on a constant 1-unit basis) minus the bet size. Likewise, we can state that the geometric mathematical expectation is the geometric average outcome of each play (on a constant 1-unit basis) minus the bet size.

Another type of mean is the harmonic mean. This is the reciprocal of the mean of the reciprocals of the data points.

(3.03) 1/H = 1/N ∑[i = 1,N] 1/Xi

where

H = The harmonic mean.
Xi = The ith data point.
N = The total number of data points in the distribution.

The final measure of central tendency is the quadratic mean or root mean square.

(3.04) R = (1/N ∑[i = 1,N] Xi^2)^(1/2)

where R = The root mean square.
Xi = The ith data point.
N = The total number of data points in the distribution.

You should realize that the arithmetic mean (A) is always greater than or equal to the geometric mean (G), and the geometric mean is always greater than or equal to the harmonic mean (H):

(3.05) H <= G <= A

where

H = The harmonic mean.
G = The geometric mean.
A = The arithmetic mean.

MOMENTS OF A DISTRIBUTION

The central value or location of a distribution is often the first thing you want to know about a group of data, and often the next thing you want to know is the data's variability or "width" around that central value. We call the measures of a distribution's central tendency the first moment of a distribution. The variability of the data points around this central tendency is called the second moment of a distribution. Hence the second moment measures a distribution's dispersion about the first moment. As with the measures of central tendency, many measures of dispersion are available. We cover seven of them here, starting with the least common measures and ending with the most common.

The range of a distribution is simply the difference between the largest and smallest values in a distribution. Likewise, the 10-90 percentile range is the difference between the 90th and 10th percentile points. These first two measures of dispersion measure the spread from one extreme to the other. The remaining five measures of dispersion measure the departure from the central tendency (and hence measure the half-spread).

The semi-interquartile range or quartile deviation equals one half of the distance between the first and third quartiles (the 25th and 75th percentiles). This is similar to the 10-90 percentile range, except that with this measure the range is commonly divided by 2.

The half-width is an even more frequently used measure of dispersion. Here, we take the height of a distribution at its peak, the mode.
If we find the point halfway up this vertical measure and run a horizontal line through it perpendicular to the vertical line, the horizontal line will touch the distribution at one point to the left and one point to the right. The distance between these two points is called the half-width. Next, the mean absolute deviation or mean deviation is the arithmetic average of the absolute value of the difference between the data points and the arithmetic average of the data points. In other words, as its name implies, it is the average distance that a data point is from the mean. Expressed mathematically: (3.06) M = 1/N ∑[i = 1,N] ABS (Xi-A) where M = The mean absolute deviation. N = The total number of data points. Xi = The ith data point. A = The arithmetic average of the data points. ABS() = The absolute value function. Equation (3.06) gives us what is known as the population mean absolute deviation. You should know that the mean absolute deviation can also be calculated as what is known as the sample mean absolute deviation. To calculate the sample mean absolute deviation, replace the term 1/N in Equation (3.06) with 1/(N-1). You use the sample version when you are making judgments about the population based on a sample of that population. The next two measures of dispersion, variance and standard deviation, are the two most commonly used. Both are used extensively, so we cannot say that one is more common than the other; suffice to say they are both the most common. Like the mean absolute deviation, they can be calculated two different ways, for a population as well as a sample. The population version is shown, and again it can readily be altered to the sample version by replacing the term 1/N with 1/(N-1). The variance is the same thing as the mean absolute deviation except that we square each difference between a data point and the average of the data points. 
As a result, we do not need to take the absolute value of each difference, since multiplying each difference by itself makes the result positive whether the difference was positive or negative. Further, since each distance is squared, extreme outliers will have a stronger effect on the variance than they would on the mean absolute deviation. Mathematically expressed:

(3.07) V = 1/N ∑[i = 1,N] ((Xi-A)^2)

where

V = The variance.
N = The total number of data points.
Xi = The ith data point.
A = The arithmetic average of the data points.

Finally, the standard deviation is related to the variance (and hence the mean absolute deviation) in that the standard deviation is simply the square root of the variance.

The third moment of a distribution is called skewness, and it describes the extent of asymmetry about a distribution's mean (Figure 3-2). Whereas the first two moments of a distribution have values that can be considered dimensional (i.e., having the same units as the measured quantities), skewness is defined in such a way as to make it nondimensional. It is a pure number that represents nothing more than the shape of the distribution.

Figure 3-2 Skewness.

Figure 3-4 Kurtosis.

A positive value for skewness means that the tails are thicker on the positive side of the distribution, and vice versa. A perfectly symmetrical distribution has a skewness of 0.

Figure 3-3 Skewness alters location.

In a symmetrical distribution the mean, median, and mode are all at the same value. However, when a distribution has a nonzero value for skewness, this changes, as depicted in Figure 3-3. The relationship for a skewed distribution (any distribution with a nonzero skewness) is:

(3.08) Mean - Mode = 3*(Mean - Median)

As with the first two moments of a distribution, there are numerous measures for skewness, which most frequently will give different answers.
These measures now follow:

(3.09) S = (Mean - Mode)/Standard Deviation

(3.10) S = (3*(Mean - Median))/Standard Deviation

These last two equations, (3.09) and (3.10), are often referred to as Pearson's first and second coefficients of skewness, respectively. Skewness is also commonly determined as:

(3.11) S = 1/N ∑[i = 1,N] (((Xi-A)/D)^3)

where

S = The skewness.
N = The total number of data points.
Xi = The ith data point.
A = The arithmetic average of the data points.
D = The population standard deviation of the data points.

Finally, the fourth moment of a distribution, kurtosis (see Figure 3-4), measures the peakedness or flatness of a distribution (relative to the Normal Distribution). Like skewness, it is a nondimensional quantity. A curve less peaked than the Normal is said to be platykurtic (kurtosis will be negative), and a curve more peaked than the Normal is called leptokurtic (kurtosis will be positive). When the peak of the curve resembles the Normal Distribution curve, kurtosis equals zero, and we call this type of peak on a distribution mesokurtic. Like the preceding moments, kurtosis has more than one measure. The two most common are:

(3.12) K = Q/P

where

K = The kurtosis.
Q = The semi-interquartile range.
P = The 10-90 percentile range.

(3.13) K = (1/N ∑[i = 1,N] (((Xi-A)/D)^4)) - 3

where

K = The kurtosis.
N = The total number of data points.
Xi = The ith data point.
A = The arithmetic average of the data points.
D = The population standard deviation of the data points.

Finally, it should be pointed out that there is a lot more "theory" behind the moments of a distribution than is covered here. For a more in-depth discussion you should consult one of the statistics books mentioned in the Bibliography. The depth of discussion about the moments of a distribution presented here will be more than adequate for our purposes throughout this text.

Thus far, we have covered data distributions in a general sense.
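Equations (3.06), (3.07), (3.11), and (3.13) translate almost symbol for symbol into code (a sketch on hypothetical data, population versions):

```python
import math

def moments(xs):
    """Population mean, mean absolute deviation, variance, skewness, kurtosis."""
    n = len(xs)
    a = sum(xs) / n                                   # (3.01) arithmetic mean
    m = sum(abs(x - a) for x in xs) / n               # (3.06) mean absolute deviation
    v = sum((x - a) ** 2 for x in xs) / n             # (3.07) variance
    d = math.sqrt(v)                                  # population standard deviation
    s = sum(((x - a) / d) ** 3 for x in xs) / n       # (3.11) skewness
    k = sum(((x - a) / d) ** 4 for x in xs) / n - 3   # (3.13) kurtosis
    return a, m, v, s, k

# Hypothetical data with a long right tail:
a, m, v, s, k = moments([2.0, 3.0, 3.0, 4.0, 9.0])
print(a, m, v, round(s, 4), round(k, 4))  # mean 4.2, MAD 1.92, variance 6.16, s > 0
```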
Now we will cover the specific distribution called the Normal Distribution.

THE NORMAL DISTRIBUTION

Frequently the Normal Distribution is referred to as the Gaussian distribution, or de Moivre's distribution, after those who are believed to have discovered it-Karl Friedrich Gauss (1777-1855) and, about a century earlier and far more obscurely, Abraham de Moivre (1667-1754). The Normal Distribution is considered to be the most useful distribution in modeling. This is due to the fact that the Normal Distribution accurately models many phenomena. Generally speaking, we can measure heights, weights, intelligence levels, and so on from a population, and these will very closely resemble the Normal Distribution. Let's consider what is known as Galton's board (Figure 3-5). This is a vertically mounted board in the shape of an isosceles triangle. The board is studded with pegs, one on the top row, two on the second, and so on. Each row down has one more peg than the previous row. The pegs are arranged in a triangular fashion such that when a ball is dropped in, it has a 50/50 probability of going right or left with each peg it encounters. At the base of the board is a series of troughs to record the exit gate of each ball.

Figure 3-5 Galton's board.

The balls falling through Galton's board and arriving in the troughs will begin to form a Normal Distribution. The "deeper" the board is (i.e., the more rows it has) and the more balls are dropped through, the more closely the final result will resemble the Normal Distribution. The Normal is useful in its own right, but also because it tends to be the limiting form of many other types of distributions. For example, if X is distributed binomially, then as N tends toward infinity, X tends to be Normally distributed.
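Galton's board is easy to simulate: each ball makes one 50/50 left-or-right decision per row of pegs, so the trough a ball lands in is simply its count of rightward bounces, a binomially distributed quantity. A minimal sketch (the function and variable names here are hypothetical, not from the text):

```python
import random
from collections import Counter

def galton(rows, balls, seed=0):
    """Drop `balls` through a Galton board with `rows` rows of pegs.
    Each peg sends the ball right with probability 1/2, so the trough
    index equals the number of rightward bounces: Binomial(rows, .5)."""
    rng = random.Random(seed)
    troughs = Counter()
    for _ in range(balls):
        troughs[sum(rng.random() < 0.5 for _ in range(rows))] += 1
    return troughs

counts = galton(rows=10, balls=10000)
```

With 10 rows and 10,000 balls, the trough counts peak near the middle trough and taper off symmetrically, the bell shape described above.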
Further, the Normal Distribution is also the limiting form of a number of other useful probability distributions such as the Poisson and the Student's (or t) distribution. In other words, as the data (N) used in these other distributions increases, these distributions increasingly resemble the Normal Distribution.

THE CENTRAL LIMIT THEOREM

One of the most important applications for statistical purposes involving the Normal Distribution has to do with the distribution of averages. The averages of samples of a given size, taken such that each sampled item is selected independent of the others, will yield a distribution that is close to Normal. This is an extremely powerful fact, for it means that you can generalize about an actual random process from averages computed using sample data. Thus, we can state that if N random samples are drawn from a population, then the sums (or averages) of the samples will be approximately Normally distributed, regardless of the distribution of the population from which the samples are drawn. The closeness to the Normal Distribution improves as N (the number of samples) increases. As an example, consider the distribution of numbers from 1 to 100. This is what is known as a uniform distribution: all elements (numbers in this case) occur only once. The number 82 occurs once and only once, as does 19, and so on. Suppose now that we take a sample of five elements and we take the average of these five sampled elements (we can just as well take their sums). Now, we replace those five elements back into the population, and we take another sample and calculate the sample mean. If we keep on repeating this process, we will see that the sample means are Normally distributed, even though the population from which they are drawn is uniformly distributed. Furthermore, this is true regardless of how the population is distributed!
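The resampling experiment just described can be sketched in a few lines of Python (the function and variable names are my own): draw many samples of five from the uniform population 1 through 100 and collect the sample means.

```python
import random
import statistics

def sample_means(population, n, trials, seed=0):
    """Means of `trials` samples of size `n`, drawn with replacement
    from `population`; per the Central Limit Theorem the means cluster
    around the population mean in an approximately Normal shape."""
    rng = random.Random(seed)
    return [statistics.mean(rng.choices(population, k=n)) for _ in range(trials)]

means = sample_means(range(1, 101), n=5, trials=2000)
```

The means concentrate near the population mean of 50.5, with far less spread than the uniform population itself.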
The Central Limit Theorem allows us to treat the distribution of sample means as being Normal without having to know the distribution of the population. This is an enormously convenient fact for many areas of study. If the population itself happens to be Normally distributed, then the distribution of sample means will be exactly (not approximately) Normal. This is true because how quickly the distribution of the sample means approaches the Normal, as N increases, is a function of how close the population is to Normal. As a general rule of thumb, if a population has a unimodal distribution-any type of distribution where there is a concentration of frequency around a single mode, and diminishing frequencies on either side of the mode (i.e., it is convex)-or is uniformly distributed, using a value of 20 for N is considered sufficient, and a value of 10 for N is considered probably sufficient. However, if the population is distributed according to the Exponential Distribution (Figure 3-6), then it may be necessary to use an N of 100 or so. Even the means of samples taken from the Exponential Distribution will tend to be Normally distributed.

Figure 3-6 The Exponential Distribution and the Normal.

The Central Limit Theorem, this amazingly simple and beautiful fact, validates the importance of the Normal Distribution.

WORKING WITH THE NORMAL DISTRIBUTION

In using the Normal Distribution, we most frequently want to find the percentage of area under the curve at a given point along the curve. In the parlance of calculus this would be called the integral of the function for the curve itself. Likewise, we could call the function for the curve itself the derivative of the function for the area under the curve. Derivatives are often noted with a prime after the variable for the function. Therefore, if we have a function, N(X), that represents the percentage of area under the curve at a given point, X, we can say that the derivative of this function, N'(X) (called N prime of X), is the function for the curve itself at point X. We will begin with the formula for the curve itself, N'(X).
This function is represented as: (3.14) N'(X) = 1/(S*(2*3.1415926536)^(1/2))*EXP(-((X-U)^2)/(2*S^2)) where U = The mean of the data. S = The standard deviation of the data. X = The observed data point. EXP() = The exponential function. This formula will give us the Y axis value, or the height of the curve if you will, at any given X axis value. Often it is easier to refer to a point along the curve with reference to its X coordinate in terms of how many standard deviations it is away from the mean. Thus, a data point that was one standard deviation away from the mean would be said to be one standard unit from the mean. Further, it is often easier to subtract the mean from all of the data points, which has the effect of shifting the distribution so that it is centered over zero rather than over the mean. Therefore, a data point that was one standard deviation to the right of the mean would now have a value of 1 on the X axis. When we make these conversions, subtracting the mean from the data points, then dividing the difference by the standard deviation of the data points, we are converting the distribution to what is called the standardized normal, which is the Normal Distribution with mean = 0 and variance = 1. Now, N'(Z) will give us the Y axis value (the height of the curve) for any value of Z: (3.15a) N'(Z) = 1/((2*3.1415926536)^(1/2))*EXP(-(Z^2/2)) = .398942*EXP(-(Z^2/2)) where (3.16) Z = (X-U)/S and U = The mean of the data. S = The standard deviation of the data. X = The observed data point. EXP() = The exponential function. Equation (3.16) gives us the number of standard units that the data point corresponds to-in other words, how many standard deviations away from the mean the data point is. When Equation (3.16) equals 1, it is called the standard normal deviate. A standard deviation or a standard unit is sometimes referred to as a sigma.
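Equations (3.15a) and (3.16) can be expressed as a short Python sketch (the function names are mine, not the text's):

```python
import math

def z_score(x, u, s):
    """Equation (3.16): number of standard units (sigmas) of data point x,
    given mean u and standard deviation s."""
    return (x - u) / s

def n_prime(z):
    """Equation (3.15a): height of the standardized Normal curve at z,
    i.e., EXP(-z^2/2) / SQRT(2*pi)."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
```

At the peak, n_prime(0) returns the .398942 constant used throughout this section.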
Thus, when someone speaks of an event being a "five sigma event," they are referring to an event whose probability of occurrence is the probability of being beyond five standard deviations.

Figure 3-7 The Normal Probability density function.

Consider Figure 3-7, which shows this equation for the Normal curve. Notice that the height of the standard Normal curve is .39894. From Equation (3.15a), the height is: (3.15a) N'(Z) = .398942*EXP(-(Z^2/2)) N'(0) = .398942*EXP(-(0^2/2)) N'(0) = .398942 Notice that the curve is continuous-that is, there are no "breaks" in the curve as it runs from minus infinity on the left to positive infinity on the right. Notice also that the curve is symmetrical, the side to the right of the peak being the mirror image of the side to the left of the peak. Suppose we had a group of data where the mean of the data was 11 and the standard deviation of the group of data was 20. To see where a data point in that set would be located on the curve, we could first calculate it as a standard unit. Suppose the data point in question had a value of -9. To calculate how many standard units this is we first must subtract the mean from this data point: -9-11 = -20 Next we need to divide the result by the standard deviation: -20/20 = -1 We can therefore say that the number of standard units is -1, when the data point equals -9, and the mean is 11, and the standard deviation is 20. In other words, we are one standard deviation away from the peak of the curve, the mean, and since this value is negative we know that it means we are one standard deviation to the left of the peak.
To see where this places us on the curve itself (i.e., how high the curve is at one standard deviation left of center, or what the Y axis value of the curve is for a corresponding X axis value of -1), we need to now plug this into Equation (3.15a): (3.15a) N'(Z) = .398942*EXP(-(Z^2/2)) = .398942*2.7182818285^(-((-1)^2/2)) = .398942*2.7182818285^(-1/2) = .398942*.6065307 = .2419705705 Thus we can say that the height of the curve at X = -1 is .2419705705. The function N'(Z) is also often expressed as: (3.15b) N'(Z) = EXP(-(Z^2/2))/((8*ATN(1))^(1/2)) = EXP(-(Z^2/2))/((8*.7853983)^(1/2)) = EXP(-(Z^2/2))/2.506629 where (3.16) Z = (X-U)/S and ATN() = The arctangent function. U = The mean of the data. S = The standard deviation of the data. X = The observed data point. EXP() = The exponential function. Nonstatisticians often find the concept of the standard deviation (or its square, variance) hard to envision. A remedy for this is to use what is known as the mean absolute deviation and convert it to and from the standard deviation in these equations. The mean absolute deviation is exactly what its name implies. The mean of the data is subtracted from each data point. The absolute values of each of these differences are then summed, and this sum is divided by the number of data points. What you end up with is the average distance each data point is away from the mean. The conversions for mean absolute deviation and standard deviation are given now: (3.17) M = S*((2/3.1415926536)^(1/2)) = S*.7978845609 where M = The mean absolute deviation. S = The standard deviation. Thus we can say that in the Normal Distribution, the mean absolute deviation equals the standard deviation times .7979. Likewise: (3.18) S = M*1/.7978845609 = M*1.253314137 where S = The standard deviation. M = The mean absolute deviation. So we can also say that in the Normal Distribution the standard deviation equals the mean absolute deviation times 1.2533.
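The conversions of Equations (3.17) and (3.18) amount to multiplying or dividing by SQRT(2/pi). A small Python sketch (function names are mine):

```python
import math

SQRT_2_OVER_PI = math.sqrt(2 / math.pi)   # .7978845609

def mad_from_std(s):
    """Equation (3.17): mean absolute deviation of a Normal Distribution,
    given its standard deviation s."""
    return s * SQRT_2_OVER_PI

def std_from_mad(m):
    """Equation (3.18): standard deviation from the mean absolute
    deviation m (equivalent to m * 1.253314137)."""
    return m / SQRT_2_OVER_PI
```

The two functions are exact inverses of each other, as the text's .7979 and 1.2533 factors suggest.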
Since the variance is always the standard deviation squared (and standard deviation is always the square root of variance), we can make the conversion between variance and mean absolute deviation. (3.19) M = V^(1/2)*((2/3.1415926536)^(1/2)) = V^(1/2)*.7978845609 where M = The mean absolute deviation. V = The variance. (3.20) V = (M*1.253314137)^2 where V = The variance. M = The mean absolute deviation. Since the standard deviation in the standard normal curve equals 1, we can state that the mean absolute deviation in the standard normal curve equals .7979. Further, in a bell-shaped curve like the Normal, the semi-interquartile range equals approximately two-thirds of the standard deviation, and therefore the standard deviation equals about 1.5 times the semi-interquartile range. This is true of most bell-shaped distributions, not just the Normal, as are the conversions given for the mean absolute deviation and standard deviation.

NORMAL PROBABILITIES

We now know how to convert our raw data to standard units and how to form the curve N'(Z) itself (i.e., how to find the height of the curve, or Y coordinate, for a given standard unit) as well as N'(X) (Equation (3.14), the curve itself without first converting to standard units). To really use the Normal Probability Distribution though, we want to know what the probabilities of a certain outcome happening are. This is not given by the height of the curve. Rather, the probabilities correspond to the area under the curve. These areas are given by the integral of this N'(Z) function which we have thus far studied. We will now concern ourselves with N(Z), the integral
to N'(Z), to find the areas under the curve (the probabilities).1 (3.21) N(Z) = 1-N'(Z)*((1.330274429*Y^5)-(1.821255978*Y^4)+(1.781477937*Y^3)-(.356563782*Y^2)+(.31938153*Y)) If Z<0 then N(Z) = 1-N(Z) (3.15a) N'(Z) = .398942*EXP(-(Z^2/2)) where Y = 1/(1+.2316419*ABS(Z)) and ABS() = The absolute value function. EXP() = The exponential function.

1 The actual integral to the Normal probability density does not exist in closed form, but it can very closely be approximated by Equation (3.21).

We will always convert our data to standard units when finding probabilities under the curve. That is, we will not describe an N(X) function, but rather we will use the N(Z) function where: (3.16) Z = (X-U)/S and U = The mean of the data. S = The standard deviation of the data. X = The observed data point. Refer now to Equation (3.21). Suppose we want to know what the probability is of an event not exceeding +2 standard units (Z = +2). Y = 1/(1+.2316419*ABS(+2)) = 1/1.4632838 = .6833944331 (3.15a) N'(Z) = .398942*EXP(-(+2^2/2)) = .398942*EXP(-2) = .398942*.1353353 = .05399093525 Notice that this tells us the height of the curve at +2 standard units. Plugging these values for Y and N'(Z) into Equation (3.21) we can obtain the probability of an event not exceeding +2 standard units: N(Z) = 1-N'(Z)*((1.330274429*Y^5)-(1.821255978*Y^4)+(1.781477937*Y^3)-(.356563782*Y^2)+(.31938153*Y)) = 1-.05399093525*((1.330274429*.1490587)-(1.821255978*.2181151)+(1.781477937*.3191643)-(.356563782*.467028)+(.31938153*.6833944331)) = 1-.05399093525*(.198288977-.3972434298+.5685841587-.16652527+.2182635596) = 1-.05399093525*.4213679955 = 1-.02275005216 = .9772499478 Thus we can say that we can expect 97.72% of the outcomes in a Normally distributed random process to fall shy of +2 standard units. This is depicted in Figure 3-8.
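Equation (3.21) is straightforward to code. The sketch below (function names are mine) folds in Equation (3.15a) and the "If Z < 0" provision:

```python
import math

def n_prime(z):
    """Equation (3.15a): height of the standardized Normal curve at z."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def n_cdf(z):
    """Equation (3.21): polynomial approximation to the area under the
    Normal curve to the left of z (the cumulative probability)."""
    y = 1 / (1 + .2316419 * abs(z))
    p = 1 - n_prime(z) * ((1.330274429 * y**5) - (1.821255978 * y**4)
                          + (1.781477937 * y**3) - (.356563782 * y**2)
                          + (.31938153 * y))
    return 1 - p if z < 0 else p          # the "If Z < 0" provision
```

Running n_cdf(2) reproduces the .9772499478 result worked out above.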
Figure 3-8 Equation (3.21) showing probability with Z = +2.

If we wanted to know what the probabilities were for an event equaling or exceeding a prescribed number of standard units (in this case +2), we would simply amend Equation (3.21), taking out the 1- in the beginning of the equation and doing away with the -Z provision (i.e., doing away with "If Z < 0 then N(Z) = 1-N(Z)"). Therefore, the second to last line in the last computation would be changed from = 1-.02275005216 to simply .02275005216 We would therefore say that there is about a 2.275% chance that an event in a Normally distributed random process would equal or exceed +2 standard units. This is shown in Figure 3-9.

Figure 3-9 Doing away with the 1- and -Z provision in Equation (3.21).

Thus far we have looked at areas under the curve (probabilities) where we are only dealing with what are known as "1-tailed" probabilities. That is to say we have thus far looked to solve such questions as, "What are the probabilities of an event being less (more) than such-and-such standard units from the mean?" Suppose now we were to pose the question as, "What are the probabilities of an event being within so many standard units of the mean?" In other words, we wish to find out what the "2-tailed" probabilities are.

Figure 3-10 A two-tailed probability of an event being + or - 2 sigma.

Consider Figure 3-10. This represents the probabilities of being within 2 standard units of the mean. Unlike Figure 3-8, this probability computation does not include the extreme left tail area, the area of less than -2 standard units. To calculate the probability of being within Z standard units of the mean, you must first calculate the 1-tailed probability of the absolute value of Z with Equation (3.21).
This will be your input to the next Equation, (3.22), which gives us the 2-tailed probabilities (i.e., the probabilities of being within ABS(Z) standard units of the mean): (3.22) 2-tailed probability = 1-((1-N(ABS(Z)))*2) If we are considering what our probabilities of occurrence within 2 standard deviations are (Z = 2), then from Equation (3.21) we know that N(2) = .9772499478, and using this as input to Equation (3.22): 2-tailed probability = 1-((1-.9772499478)*2) = 1-(.02275005216*2) = 1-.04550010432 = .9544998957 Thus we can state from this equation that the probability of an event in a Normally distributed random process falling within 2 standard units of the mean is about 95.45%.

Just as with Equation (3.21), we can eliminate the leading 1- in Equation (3.22) to obtain (1-N(ABS(Z)))*2, which represents the probabilities of an event falling outside of ABS(Z) standard units of the mean. This is depicted in Figure 3-11. For the example where Z = 2, we can state that the probabilities of an event in a Normally distributed random process falling outside of 2 standard units is: 2-tailed probability (outside) = (1-.9772499478)*2 = .02275005216*2 = .04550010432

Figure 3-11 Two-tailed probability of an event being beyond 2 sigma.

Finally, we come to the case where we want to find what the probabilities (areas under the N'(Z) curve) are for two different values of Z.

Figure 3-12 The area between -1 and +2 standard units.

Suppose we want to find the area under the N'(Z) curve between -1 standard unit and +2 standard units. There are a couple of ways to accomplish this. To begin with, we can compute the probability of not exceeding +2 standard units with Equation (3.21), and from this we can subtract the probability of not exceeding -1 standard units (see Figure 3-12). This would give us: .9772499478-.1586552595 = .8185946883 Another way we could have performed this is to take the number 1, representing the entire area under the curve, and then subtract the sum of the probability of not exceeding -1 standard unit and the probability of exceeding +2 standard units: = 1-(.022750052+.1586552595) = 1-.1814053117 = .8185946883 With the basic mathematical tools regarding the Normal Distribution thus far covered in this chapter, you can now use your powers of reasoning to figure any probabilities of occurrence for Normally distributed random variables.

FURTHER DERIVATIVES OF THE NORMAL

Sometimes you may want to know the second derivative of the N(Z) function. Since the N(Z) function gives us the area under the curve at Z, and the N'(Z) function gives us the height of the curve itself at Z, then the N''(Z) function gives us the instantaneous slope of the curve at a given Z: (3.23) N''(Z) = -Z/2.506628274*EXP(-(Z^2/2)) where EXP() = The exponential function. To determine what the slope of the N'(Z) curve is at +2 standard units: N''(Z) = -2/2.506628274*EXP(-(+2^2)/2) = -2/2.506628274*EXP(-2) = -2/2.506628274*.1353353 = -.1079968336 Therefore, we can state that the instantaneous rate of change in the N'(Z) function when Z = +2 is -.1079968336. This represents rise/run, so we can say that when Z = +2, the N'(Z) curve is rising -.1079968336 for every 1 unit run in Z. This is depicted in Figure 3-13.

Figure 3-13 N''(Z) giving the slope of the line tangent to N'(Z) at Z = +2.

For the reader's own reference, further derivatives are now given. These will not be needed throughout the remainder of this text, but are provided for the sake of completeness: (3.24) N'''(Z) = (Z^2-1)/2.506628274*EXP(-(Z^2)/2) (3.25) N''''(Z) = ((3*Z)-Z^3)/2.506628274*EXP(-(Z^2)/2) (3.26) N'''''(Z) = (Z^4-(6*Z^2)+3)/2.506628274*EXP(-(Z^2)/2) As a final note regarding the Normal Distribution, you should be aware that the distribution is nowhere near as "peaked" as the graphic examples presented in this chapter imply. The real shape of the Normal Distribution is depicted in Figure 3-14.

Figure 3-14 The real shape of the Normal Distribution.

Notice that here the scales of the two axes are the same, whereas in the other graphic examples they differ so as to exaggerate the shape of the distribution.

THE LOGNORMAL DISTRIBUTION

Many of the real-world applications in trading require a small but crucial modification to the Normal Distribution. This modification takes the Normal, and changes it to what is known as the Lognormal Distribution. Consider that the price of any freely traded item has zero as a lower limit.2 Therefore, as the price of an item drops and approaches zero, it should in theory become progressively more difficult for the item to get lower. For example, consider the price of a hypothetical stock at $10 per share. If the stock were to drop $5, to $5 per share, a 50% loss, then according to the Normal Distribution it could just as easily drop from $5 to $0. However, under the Lognormal, a similar drop of 50% from a price of $5 per share to $2.50 per share would be about as probable as a drop from $10 to $5 per share. The Lognormal Distribution, Figure 3-15, works exactly like the Normal Distribution except that with the Lognormal we are dealing with percentage changes rather than absolute changes.

Figure 3-15 The Normal and Lognormal distributions.

Consider now the upside. According to the Lognormal, a move from $10 per share to $20 per share is about as likely as a move from $5 to $10 per share, as both moves represent a 100% gain. That isn't to say that we won't be using the Normal Distribution.
The purpose here is to introduce you to the Lognormal, show you its relationship to the Normal (the Lognormal uses percentage price changes rather than absolute price changes), and point out that it usually is used when talking about price moves, or anytime that the Normal would apply but be bounded on the low end at zero. To use the Lognormal distribution, you simply convert the data you are working with to natural logarithms.3 Now the converted data will be Normally distributed if the raw data was Lognormally distributed. For instance, if we are discussing the distribution of price changes as being Lognormal, we can use the Normal distribution on it. First, we must divide each closing price by the previous closing price. Suppose in this instance we are looking at the distribution of monthly closing prices (we could use any time period-hourly, daily, yearly, or whatever). Suppose we now see $10, $5, $10, $10, then $20 per share as our first five months' closing prices. This would then equate to a loss of 50% going into the second month, a gain of 100% going into the third month, a gain of 0% going into the fourth month, and another gain of 100% into the fifth month. Respectively then, we have quotients of .5, 2, 1, and 2 for the monthly price changes of months 2 through 5. These are the same as HPRs from one month to the next in succession. We must now convert to natural logarithms in order to study their distribution under the math for the Normal Distribution. Thus, the natural log of .5 is -.6931473, of 2 it is .6931471, and of 1 it is 0. We are now able to apply the mathematics pertaining to the Normal distribution to this converted data.

2 This idea that the lowest an item can trade for is zero is not always entirely true. For instance, during the stock market crash of 1929 and the ensuing bear market, the shareholders of many failed banks were held liable to the depositors in those banks.
Persons who owned stock in such banks not only lost their full investment, they also realized liability beyond the amount of their investment. The point here isn't to say that such an event can or cannot happen again. Rather, we cannot always say that zero is the absolute low end of what a freely traded item can be priced at, although it usually is.

3 The distinction between common and natural logarithms is reiterated here. A common log is a log base 10, while a natural log is a log base e, where e = 2.7182818285. The common log of X is referred to mathematically as log(X) while the natural log is referred to as ln(X). The distinction gets blurred when we observe BASIC programming code, which often utilizes a function LOG(X) to return the natural log. This is diametrically opposed to mathematical convention. BASIC does not have a provision for common logs, but the natural log can be converted to the common log by multiplying the natural log by .4342917. Likewise, we can convert common logs to natural logs by multiplying the common log by 2.3026.

THE PARAMETRIC OPTIMAL F

Now that we have studied the mathematics of the Normal and Lognormal distributions, we will see how to determine an optimal f based on outcomes that are Normally distributed. The Kelly formula is an example of a parametric optimal f in that the optimal f returned is a function of two parameters. In the Kelly formula the input parameters are the percentage of winning bets and the payoff ratio. However, the Kelly formula only gives you the optimal f when the possible outcomes have a Bernoulli distribution. In other words, the Kelly formula will only give the correct optimal f when there are only two possible outcomes.
When the outcomes do not have a Bernoulli distribution, such as Normally distributed outcomes (which we are about to study), the Kelly formula will not give you the correct optimal f.4 When they are applicable, parametric techniques are far more powerful than their empirical counterparts. Assume we have a situation that can be described completely by the Bernoulli distribution. We can derive our optimal f here by way of either the Kelly formula or the empirical technique detailed in Portfolio Management Formulas. Suppose in this instance we win 60% of the time. Say we are tossing a coin that is biased, that we know that in the long run 60% of the tosses will be heads. We are therefore going to bet that each toss will be heads, and the payoff is 1:1. The Kelly formula would tell us to bet a fraction of .2 of our stake on the next bet. Further suppose that of the last 20 tosses, 11 were heads and 9 were tails. If we were to use these last 20 tosses as the input into the empirical techniques, the result would be that we should risk .1 of our stake on the next bet. Which is correct, the .2 returned by the parametric technique (the Kelly formula in this Bernoulli distributed case) or the .1 returned empirically by the last 20 tosses? The correct answer is .2, the answer returned from the parametric technique. The reason is that the next toss has a 60% probability of being heads, not a 55% probability as the last 20 tosses would indicate. Although we are only discussing a 5% probability difference, 1 toss in 20, the effect on how much we should bet is dramatic. Generally, the parametric techniques are inherently more accurate in this regard than are their empirical counterparts (provided we know the distribution of the outcomes). This is the first advantage of the parametric to the empirical. This is also a critical proviso-that we must know what the distribution of outcomes is in the long run in order to use the parametric techniques.
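For the 1:1 payoff used in this example, the Kelly formula reduces to f = 2*p-1, where p is the probability of winning. A minimal sketch of the comparison just described (the function name is mine):

```python
def kelly_even_money(p):
    """Kelly fraction for a 1:1 payoff: f = 2*p - 1, where p is the
    probability of winning (equivalently, f = p - q)."""
    return 2 * p - 1

# Parametric: the known long-run probability of heads.
f_parametric = kelly_even_money(0.60)    # .2 of the stake

# Empirical: the frequency observed in the last 20 tosses (11 heads).
f_empirical = kelly_even_money(11 / 20)  # .1 of the stake
```

The small gap between the true 60% and the observed 55% doubles the recommended bet size, which is the point of the example.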
This is the biggest drawback to using the parametric techniques. The second advantage is that the empirical technique requires a past history of outcomes whereas the parametric does not. Further, this past history needs to be rather extensive. In the example just cited, we can assume that if we had a history of 50 tosses we would have arrived at an empirical optimal f closer to .2. With a history of 1,000 tosses, it would be even closer according to the law of averages. The fact that the empirical techniques require a rather lengthy stream of past data has almost restricted them to mechanical trading systems. Someone trading anything other than a mechanical trading system, be it by Elliott Wave or fundamentals, has almost been shut out from using the optimal f technique. With the parametric techniques this is no longer true. Someone who wishes to blindly follow some market guru, for instance, now has a way to employ the power of optimal f. Therein lies the third advantage of the parametric technique over the empirical-it can be used by any trader in any market. There is a big assumption here, however, for someone not employing a mechanical trading system. The assumption is that the future distribution of profits and losses will resemble the distribution in the past (which is what we figure the optimal f on). This may be less likely than with a mechanical system. This also sheds new light on the expected performance of any technique that is not purely mechanical. Even the best practitioners of such techniques, be it by fundamentals, Gann, Elliott Wave, and so on, are doomed to fail if they are too far beyond the peak of (to the right of) the f curve. 
If they are too far to the left of the peak, they are going to end up with geometrically lower profits than their expertise in their area should have made for them. Furthermore, practitioners of techniques that are not purely mechanical must realize that everything said about optimal f and the purely mechanical techniques applies. This should be considered when contemplating expected drawdowns of such techniques. Remember that the drawdowns will be substantial, and this fact does not mean that the technique should be abandoned. The fourth and perhaps the biggest advantage of the parametric over the empirical method of determining optimal f is that the parametric method allows you to do "what if" types of modeling. For example, suppose you are trading a market system that has been running very hot. You want to be prepared for when that market system stops performing so well, as you know it inevitably will. With the parametric techniques, you can vary your input parameters to reflect this and thereby put yourself at what the optimal f will be when the market system cools down to the state that the parameters you input reflect. The parametric techniques are therefore far more powerful than the empirical ones. So why use the empirical techniques at all? The empirical techniques are more intuitively obvious than the parametric ones are. Hence, the empirical techniques are what one should learn first before moving on to the parametric. We have now covered the empirical techniques in detail and are therefore prepared to study the parametric techniques.

4 We are speaking of the Kelly formulas here in a singular sense even though there are, in fact, two different Kelly formulas, one for when the payoff ratio is 1:1, and the other for when the payoff is any ratio. In the examples of Kelly in this discussion we are assuming a payoff of 1:1, hence it doesn't matter which of the two Kelly formulas we are using.
THE DISTRIBUTION OF TRADE P&L'S

Consider the following sequence of 232 trade profits and losses in points. It doesn't matter what the commodity is or what system generated this stream-it could be any system on any market.

Trade#/P&L
1. 0.18    2. -1.11   3. 0.42    4. -0.83   5. 1.42    6. 0.42    7. -0.99   8. 0.87
9. 0.92    10. -0.4   11. -1.48  12. 1.87   13. 1.37   14. -1.48  15. -0.21  16. 1.82
17. 0.15   18. 0.32   19. -1.18  20. -0.43  21. 0.42   22. 0.57   23. 4.72   24. 12.42
25. 0.15   26. 0.15   27. -1.14  28. 1.12   29. -1.88  30. 0.17   31. 0.57   32. 0.47
33. -1.88  34. 0.17   35. -1.93  36. 0.92   37. 1.45   38. 0.17   39. 1.87   40. 0.52
41. 0.67   42. -1.58  43. -0.5   44. 0.17   45. 0.17   46. -0.65  47. 0.96   48. -0.88
49. 0.17   50. -1.53  51. 0.15   52. -0.93  53. 0.42   54. 2.77   55. 8.52   56. 2.47
57. -2.08  58. -1.88  59. -1.88  60. 1.67   61. -1.88  62. 3.72   63. 2.87   64. 2.17
65. 1.37   66. 1.62   67. 0.17   68. 0.62   69. 0.92   70. 0.17   71. 1.52   72. -1.78
73. 0.22   74. 0.92   75. 0.32   76. 0.17   77. 0.57   78. 0.17   79. 1.18   80. 0.17
81. 0.72   82. -3.33  83. -4.13  84. -1.63  85. -1.23  86. 1.62   87. 0.27   88. 1.97
89. -1.72  90. 1.47   91. -1.88  92. 1.72   93. 1.02   94. 0.67   95. 0.67   96. -1.18
97. 3.22   98. -4.83  99. 8.42   100. -1.58 101. -1.88 102. 1.23  103. 1.72  104. 1.12
105. -0.97 106. -1.88 107. -1.88 108. 1.27  109. 0.16  110. 1.22  111. -0.99 112. 1.37
113. 0.18  114. 0.18  115. 2.07  116. 1.47  117. 4.87  118. -1.08 119. 1.27  120. 0.62
121. -1.03 122. 1.82  123. 0.42  124. -2.63 125. -0.73 126. -1.83 127. 0.32  128. 1.62
130. 1.02  131. -0.81 132. -0.74 133. 1.09  134. -1.13 135. 0.52  136. 0.18  137. 0.18
138. 1.47  139. -1.07 140. -0.98 141. 1.07  142. -0.88 143. -0.51 144. 0.57  145. 2.07
146. 0.55  147. 0.42  148. 1.42  149. 0.97  150. 0.62  151. 0.32  152. 0.67  153. 0.77
154. 0.67  155. 0.37  156. 0.87  157. 1.32  158. 0.16  159. 0.18  160. 0.52  161. -2.33
162. 1.07  163. 1.32  164. 1.42  165. 2.72  166. 1.37  167. -1.93 168. 2.12  169. 0.62
170. 0.57  171. 0.42  172. 1.58  173. 0.17  174. 0.62  175. 0.77  176. 0.37  177. -1.33
178. -1.18 179. 0.97  180. 0.70  181. 1.64  182. 0.57  183. 0.24  184. 0.57  185. 0.35
186. 1.57  187. -1.73 188. -0.83 189. -1.18 190. -0.65 191. -0.78 192. -1.28 193. 0.32
194. 1.24  195. 2.05  196. 0.75  197. 0.17  198. 0.67  199. -0.56 200. -0.98 201. 0.17
202. -0.96 203. 0.35  204. 0.52  205. 0.77  206. 1.10  207. -1.88 208. 0.35  209. 0.92
210. 1.55  211. 1.17  212. 0.67  213. 0.82  214. -0.98 215. -0.85 216. 0.22  217. -1.08
218. 0.25  219. 0.14  220. 0.79  221. -0.55 222. 0.32  223. -1.30 224. 0.37  225. -0.51
226. 0.34  227. -1.28 228. 1.80  229. 2.12  230. 0.77  231. -1.33 232. 1.52

If we wanted to determine an equalized parametric optimal f we would now convert these trade profits and losses to percentage gains and losses [based on Equations (2.10a) through (2.10c)]. Next, we would convert these percentage profits and losses by multiplying them by the current price of the underlying instrument. For example, P&L #1 is .18. Suppose that the entry price to this trade was 100.50. Thus, the percentage gain on this trade would be .18/100.50 = .001791044776. Now suppose that the current price of this underlying instrument is 112.00. Multiplying .001791044776 by 112.00 translates into an equalized P&L of .2005970149. If we were seeking to do this procedure on an equalized basis, we would perform this operation on all 232 trade profits and losses. Whether or not we are going to perform our calculations on an equalized basis (in this chapter we will not operate on an equalized basis), we must now calculate the mean (arithmetic) and population standard deviation of these 232 individual trade profits and losses as .330129 and 1.743232 respectively (again, if we were doing things on an equalized basis, we would need to determine the mean and standard deviation on the equalized trade P&L's).
With these two numbers we can use Equation (3.16) to translate each individual trade profit and loss into standard units.

(3.16) Z = (X-U)/S

where
U = The mean of the data.
S = The standard deviation of the data.
X = The observed data point.

Thus, to translate trade #1, a profit of .18, to standard units:

Z = (.18-.330129)/1.743232
= -.150129/1.743232
= -.08612106708

Likewise, the next three trades of -1.11, .42, and -.83 translate into -.8261258398, .05155423948, and -.6655046488 standard units respectively. If we are using equalized data, we simply standardize by subtracting the mean of the data and dividing by the data's standard deviation. Once we have converted all of our individual trade profits and losses over to standard units, we can bin the now standardized data. Recall that with binning there is a loss of information content about a particular distribution (in this case the distribution of the individual trades), but the character of the distribution remains unchanged. Suppose we were to now take these 232 individual trades and place them into 10 bins. We are choosing arbitrarily here-we could have chosen 9 bins or 50 bins. In fact, one of the big arguments about binning data is that most frequently there is considerable arbitrariness as to how the bins should be chosen. Whenever we bin something, we must decide on the ranges of the bins. We will therefore select a range of -2 to +2 sigmas, or standard deviations. This means we will have 10 equally spaced bins between -2 and +2 standard units. Since there are 4 standard units in total between -2 and +2 standard units, and we are dividing this space into 10 equal regions, we have 4/10 = .4 standard units as the size or "width" of each bin.
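The standardization of Equation (3.16) can be sketched in a few lines of code (Python is used here only for illustration; the function name and the hard-coded mean and standard deviation, taken from the 232-trade example, are my own choices):

```python
# Equation (3.16): convert trade P&L's to standard units.
# Mean and population standard deviation are those computed in the
# text for the 232-trade example.
MEAN = 0.330129
STDEV = 1.743232

def to_standard_units(x, u=MEAN, s=STDEV):
    """Z = (X - U) / S"""
    return (x - u) / s

# The first four trades of the sequence:
for pnl in (0.18, -1.11, 0.42, -0.83):
    print(round(to_standard_units(pnl), 10))
```

Running this on the first four trades reproduces the standard units given above.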
Therefore, our first bin, the one "farthest to the left," will contain those trades that were within -2 to -1.6 standard units, the next one trades from -1.6 to -1.2, then -1.2 to -.8, and so on, until our final bin contains those trades that were 1.6 to 2 standard units. Those trades that are less than -2 standard units or greater than +2 standard units will not be binned in this exercise, and we will ignore them. If we so desired, we could have included them in the extreme bins, placing those data points less than -2 in the -2 to -1.6 bin, and likewise for those data points greater than 2. Of course, we could have chosen a wider range for binning, but since these trades are beyond the range of our bins, we have chosen not to include them. In other words, we are eliminating from this exercise those trades with P&L's less than .330129-(1.743232*2) = -3.156335 or greater than .330129+(1.743232*2) = 3.816593.

What we have created now is a distribution of this system's trade P&L's. Our distribution contains 10 data points because we chose to work with 10 bins. Each data point represents the number of trades that fell into that bin. Each trade could not fall into more than 1 bin, and if the trade was beyond 2 standard units either side of the mean (P&L's < -3.156335 or > 3.816593), then it is not represented in this distribution.

Figure 3-16 232 individual trades in 10 bins from -2 to +2 sigma versus the Normal Distribution.

Figure 3-16 shows this distribution as we have just calculated it. "Wait a minute," you say. "Shouldn't the distribution of a trading system's P&L's be skewed to the right because we are probably going to have a few large profits?" This particular distribution of 232 trade P&L's happens to be from a system that very often takes small profits via a target. Many people have the mistaken impression that P&L distributions are going to be skewed to the right for all trading systems.
This is not at all true, as Figure 3-16 attests. Different market systems will have different distributions, and you shouldn't expect them all to be the same. Also in Figure 3-16, superimposed over the distribution we have just put together, is the Normal Distribution as it would look for 232 trade P&L's if they were Normally distributed. This was done so that you can compare, graphically, the trade P&L's as we have just calculated them to the Normal. The Normal Distribution here is calculated by first taking the boundaries of each bin. For the leftmost bin in our example this would be Z = -2 and Z = -1.6. Now we run these Z values through Equation (3.21) to convert these boundaries to a cumulative probability. In our example, this corresponds to .02275 for Z = -2 and .05479932 for Z = -1.6. Next, we take the absolute value of the difference between these two values, which gives us ABS (.02275-.05479932) = .03204932 for our example. Last, we multiply this answer by the number of data points, which in this case is 232 because there are 232 total trades (we still must use 232 even though some have been eliminated because they were beyond the range of our bins). Therefore, we can state that if the data were Normally distributed and placed into 10 bins of equal width between -2 and +2 sigmas, then the leftmost bin would contain .03204932*232 = 7.43544224 elements. If we were to calculate this for each of the 10 bins, we would calculate the Normal curve superimposed in Figure 3-16. FINDING OPTIMAL F ON THE NORMAL DISTRIBUTION Now we can construct a technique for finding the optimal f on Normally distributed data. Like the Kelly formula, this will be a parametric technique. However, this technique is far more powerful than the Kelly formula, because the Kelly formula allows for only two possible outcomes for an event whereas this technique allows for the full spectrum of the outcomes (provided that the outcomes are Normally distributed). 
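The expected count per bin under Normality can be sketched as follows. This is a hedged illustration: it uses the exact cumulative Normal via the error function rather than the polynomial approximation of Equation (3.21), so the counts agree with the text's 7.43544224 only to several decimal places:

```python
import math

def cumulative_normal(z):
    # Exact standard Normal CDF via the error function; a stand-in for
    # the polynomial approximation of Equation (3.21) used in the text.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

N_TRADES = 232
edges = [-2.0 + 0.4 * i for i in range(11)]  # 10 bins of width .4 from -2 to +2

for lo, hi in zip(edges, edges[1:]):
    # Expected number of the 232 trades in this bin if Normally distributed.
    expected = (cumulative_normal(hi) - cumulative_normal(lo)) * N_TRADES
    print(f"{lo:+.1f} to {hi:+.1f}: {expected:.4f} trades expected")
```

The loop over all 10 bins traces out the superimposed Normal curve of Figure 3-16.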
The beauty of Normally distributed outcomes (aside from the fact that they so frequently occur, since they are the limit of many other distributions) is that they can be described by 2 parameters. The Kelly formulas will give you the optimal f for Bernoulli distributed outcomes by inputting the 2 parameters of the payoff ratio and the probability of winning. The technique about to be described likewise needs only two parameters as input, the average and the standard deviation of the outcomes, to return the optimal f.

Recall that the Normal Distribution is a continuous distribution. In order to use this technique we need to make this distribution discrete. Further recall that the Normal Distribution is unbounded. That is, the distribution runs from minus infinity on the left to plus infinity on the right. Therefore, the first two steps that we must take to find the optimal f on Normally distributed data are to determine (1) at how many sigmas from the mean of the distribution we truncate the distribution, and (2) into how many equally spaced data points we will divide the range between the two extremes determined in (1). For instance, we know that 99.73% of all the data points will fall between plus and minus 3 sigmas of the mean, so we might decide to use 3 sigmas as our parameter for (1). In other words, we are deciding to consider the Normal Distribution only between minus 3 sigmas and plus 3 sigmas of the mean. In so doing, we will encompass 99.73% of all of the activity under the Normal Distribution. Generally we will want to use a value of 3 to 5 sigmas for this parameter. Regarding step (2), the number of equally spaced data points, we will generally want to use a bare minimum of ten times the number of sigmas we are using in (1). If we select 3 sigmas for (1), then we should select at least 30 equally spaced data points for (2).
This means that we are going to take the horizontal axis of the Normal Distribution, of which we are using the area from minus 3 sigmas to plus 3 sigmas from the mean, and divide that axis into 30 equally spaced points. Since there are 6 sigmas between minus 3 sigmas and plus 3 sigmas, and we want to divide this into 30 equally spaced points, we must divide 6 by 30-1, or 29. This gives us .2068965517. So, our first data point will be minus 3, and we will add .2068965517 to each previous point until we reach plus 3, at which point we will have created 30 equally spaced data points between minus 3 and plus 3. Therefore, our second data point will be -3+.2068965517 = -2.793103448, our third data point -2.793103448+.2068965517 = -2.586206896, and so on. In so doing, we will have determined the 30 horizontal input coordinates to this system. The more data points you decide on, the better will be the resolution of the Normal curve. Using ten times the number of sigmas is a rough rule for determining the bare minimum number of data points you should use. Recall that the Normal Distribution is a continuous distribution. However, we must make it discrete in order to find the optimal f on it. The greater the number of equally spaced data points we use, the closer our discrete model will be to the actual continuous distribution itself, with the discrete model approaching the continuous one exactly in the limit as the number of equally spaced data points approaches infinity. Why not use an extremely large number of data points? The more data points you use in the Normal curve, the more calculations will be required to find the optimal f on it. Even though you will usually be using a computer to solve for the optimal f, it will still be slower the more data points you use. Further, each data point added resolves the curve to a lesser degree than the previous data point did. We will refer to these first two input parameters as the bounding parameters.
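The construction of the equally spaced data points can be sketched as below (variable names are mine):

```python
SIGMAS = 3
N_POINTS = 30  # at least ten times the number of sigmas

# Divide the 6 sigmas of width into N_POINTS - 1 = 29 steps.
step = (2.0 * SIGMAS) / (N_POINTS - 1)            # 6 / 29 = .2068965517...
points = [-SIGMAS + i * step for i in range(N_POINTS)]

print(step)       # about .2068965517
print(points[1])  # about -2.793103448
```

The first point is exactly minus 3 and the last is plus 3, with 28 points evenly spaced in between.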
Now, the third and fourth steps are to determine the arithmetic average trade and the population standard deviation for the market system we are working on. If you do not have a mechanical system, you can get these numbers from your brokerage statements or you can estimate them. That is one of the real benefits of this technique-you don't need to have a mechanical system; you don't even need brokerage statements or paper trading results to use it. The technique can be used by simply estimating these two inputs, the arithmetic mean average trade (in points or in dollars) and the population standard deviation of trades (in points or in dollars, so long as it's consistent with what you use for the arithmetic mean trade). Be forewarned, though, that your results will only be as accurate as your estimates.

If you are having difficulty estimating your population standard deviation, then simply try to estimate by how much, on average, a trade will differ from the average trade. By estimating the mean absolute deviation in this way, you can use Equation (3.18) to convert your estimated mean absolute deviation into an estimated standard deviation:

(3.18) S = M*1/.7978845609 = M*1.253314137

where
S = The standard deviation.
M = The mean absolute deviation.

We will refer to these two parameters, the arithmetic mean average trade and the standard deviation of the trades, as the actual input parameters. Now we want to take all of the equally spaced data points from step (2) and find their corresponding price values, based on the arithmetic mean and standard deviation. Recall that our equally spaced data points are expressed in terms of standard units. For each of these equally spaced data points we will find the corresponding price as:

(3.27) D = U+(S*E)

where
D = The price value corresponding to a standard unit value.
E = The standard unit value.
S = The population standard deviation.
U = The arithmetic mean.
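Equations (3.18) and (3.27) are one-liners; a minimal sketch (function names are my own):

```python
def mad_to_stdev(m):
    """Equation (3.18): estimate the standard deviation from the mean
    absolute deviation (the factor is sqrt(pi/2) for a Normal)."""
    return m * 1.253314137  # = 1 / .7978845609

def price_value(u, s, e):
    """Equation (3.27): D = U + (S * E)."""
    return u + s * e

# At -3 standard units with the example's dollar-based inputs:
print(price_value(330.129, 1743.232, -3))  # about -4899.567
```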
Once we have determined all of the price values corresponding to each data point we have truly accomplished a great deal. We have now constructed the distribution that we expect the future data points to tend to. However, this technique allows us to do a lot more than that. We can incorporate two more parameters that will allow us to perform "What if" types of scenarios about the future. These parameters, which we will call the "What if" parameters, allow us to see the effect of a change in our average trade or a change in the dispersion (standard deviation) of our trades.

The first of these parameters, called shrink, affects the average trade. Shrink is simply a multiplier on our average trade. Recall that when we find the optimal f we also obtain other calculations, which are useful by-products of the optimal f. Such calculations include the geometric mean, TWR, and geometric average trade. Shrink is the factor by which we will multiply our average trade before we perform the optimal f technique on it. Hence, shrink lets us see what the optimal f would be if our average trade were affected by shrink, as well as how the other by-product calculations would be affected.

For example, suppose you are trading a system that has been running very hot lately. You know from past experience that the system is likely to stop performing so well in the future. You would like to see what would happen if the average trade were cut in half. By using a shrink value of .5 (since shrink is a multiplier, the average trade times .5 equals the average trade cut in half) you can perform the optimal f technique to determine what your optimal f should be if the average trade were to be cut in half. Further, you can see how such changes affect your geometric average trade, and so on. By using a shrink value of 2, you can also see the effect that a doubling of your average trade would have. In other words, the shrink parameter can also be used to increase (unshrink?) your average trade.
What's more, it lets you take an unprofitable system (that is, a system with an average trade less than zero), and, by using a negative value for shrink, see what would happen if that system became profitable. For example, suppose you have a system that shows an average trade of -$100. If you use a shrink value of -.5, this will give you your optimal f for this distribution as if the average trade were $50, since -100*-.5 = 50. If we used a shrink factor of -2, we would obtain the distribution centered about an average trade of $200. You must be careful in using these "What if" parameters, for they make it easy to mismanage performance. Mention was just made of how you can turn a system with a negative arithmetic average trade into a positive one. This can lead to problems if, for instance, in the future, you still have a negative expectation.

The other "What if" parameter is one called stretch. This is not, as its name would imply, the opposite of shrink. Rather, stretch is the multiplier to be used on the standard deviation. You can use this parameter to determine the effect on f and its by-products of an increase or decrease in the dispersion. Also, unlike shrink, stretch must always be a positive number, whereas shrink can be positive or negative (so long as the average trade times shrink is positive). If you want to see what will happen if your standard deviation doubles, simply use a value of 2 for stretch. To see what would happen if the dispersion quieted down, use a value less than 1. You will notice in using this technique that lowering the stretch toward zero will tend to increase the by-product calculations, resulting in a more optimistic assessment of the future, and vice versa. Shrink works in an opposite fashion, as lowering the shrink toward zero will result in more pessimistic assessments about the future, and vice versa.
Once we have determined what values we want to use for stretch and shrink (and for the time being we will use values of 1 for both, which means to leave the actual parameters unaffected) we can amend Equation (3.27) to:

(3.28) D = (U*Shrink)+(S*E*Stretch)

where
D = The price value corresponding to a standard unit value.
E = The standard unit value.
S = The population standard deviation.
U = The arithmetic mean.

To summarize thus far, the first two steps are to determine the bounding parameters: the number of sigmas either side of the mean we are going to use, as well as how many equally spaced data points we are going to use within this range. The next two steps are the actual input parameters of the arithmetic average trade and population standard deviation. We can derive these parameters empirically by looking at the results of a given trading system or by using brokerage statements or paper trading results. We can also derive these figures by estimation, but remember that the results obtained will only be as accurate as your estimates. The fifth and sixth steps are to determine the factors to use for stretch and shrink if you are going to perform a "What if" type of scenario. If you are not, simply use values of 1 for both stretch and shrink. Once you have completed these six steps, you can use Equation (3.28) to perform the seventh step: convert the equally spaced data points from standard values to an actual amount of either points or dollars (depending on whether you used points or dollars as input for your arithmetic average trade and population standard deviation).

Now the eighth step is to find the associated probability for each of the equally spaced data points. This probability is determined by using Equation (3.21):

(3.21) N(Z) = 1-N'(Z)*((1.330274429*Y^5)-(1.821255978*Y^4)+(1.781477937*Y^3)-(.356563782*Y^2)+(.31938153*Y))

If Z<0 then N(Z) = 1-N(Z)

where
Y = 1/(1+.2316419*ABS(Z))
ABS() = The absolute value function.
N'(Z) = .398942*EXP(-(Z^2/2))
EXP() = The exponential function.

However, we will use Equation (3.21) without its leading "1-" as the first term in the equation and without the -Z provision (i.e., without the "If Z<0 then N(Z) = 1-N(Z)"), since we want to know what the probabilities are for an event equaling or exceeding a prescribed amount of standard units. So we go along through each of our equally spaced data points. Each point has a standard value, which we will use as the Z parameter in Equation (3.21), and a dollar or point amount. Now there will be another variable corresponding to each equally spaced data point: the associated probability.

THE MECHANICS OF THE PROCEDURE

The procedure will now be demonstrated on the trading example introduced earlier in this chapter. Since our 232 trades are currently in points, we should convert them to their dollar representations. However, since the market is not specified, we will assign an arbitrary value of $1,000 per point. Thus, the average trade of .330129 now becomes .330129*$1,000, or an average trade of $330.13. Likewise, the population standard deviation of 1.743232 is also multiplied by $1,000 per point to give $1,743.23.

Now we construct the matrix. First, we must determine the range, in sigmas from the mean, that we want our calculations to encompass. For our example we will choose 3 sigmas, so our range will go from minus 3 sigmas to plus 3 sigmas. Note that you should use the same amount to the left of the mean that you use to the right of the mean. That is, if you go 3 sigmas to the left (minus 3 sigmas), then you should not go only 2 or 4 sigmas to the right, but rather you should go 3 sigmas to the right as well (i.e., plus 3 sigmas from the mean). Next we must determine how many equally spaced data points to divide this range into. Choosing 61 as our value gives a data point at every tenth of a standard unit. Thus we can determine our column of standard values.
Now we must determine the arithmetic mean that we are going to use as input. We determine this empirically from the 232 trades as $330.13. Further, we must determine the population standard deviation, which we also determine empirically from the 232 trades as $1,743.23.

Now to determine the column of associated P&L's. That is, we must determine a P&L amount for each standard value. Before we can determine our associated P&L column, we must decide on values for stretch and shrink. Since we are not going to perform any "What if" types of scenarios at this time, we will choose a value of 1 for both stretch and shrink.

Arithmetic mean = 330.13
Population standard deviation = 1,743.23
Stretch = 1
Shrink = 1

Using Equation (3.28) we can calculate our associated P&L column. We do this by taking each standard value and using it as E in Equation (3.28) to get the column of associated P&L's:

(3.29) D = (U*Shrink)+(S*E*Stretch)

where
D = The price value corresponding to a standard unit value.
E = The standard unit value.
S = The population standard deviation.
U = The arithmetic mean.

For the -3 standard value, the associated P&L is:

D = (U*Shrink)+(S*E*Stretch)
= (330.129*1)+(1743.232*(-3)*1)
= 330.129+(-5229.696)
= 330.129-5229.696
= -4899.567

Thus, our associated P&L column at a standard value of -3 equals -4899.567. We now want to construct the associated P&L for the next standard value, which is -2.9, so we simply perform the same Equation, (3.29), again-only this time we use a value of -2.9 for E.

Now to determine the associated probability column. This is calculated using the standard value column as the Z input to Equation (3.21) without the preceding "1-" and without the -Z provision (i.e., without the "If Z<0 then N(Z) = 1-N(Z)"). For the standard value of -3 (Z = -3), this is:

N(Z) = N'(Z)*((1.330274429*Y^5)-(1.821255978*Y^4)+(1.781477937*Y^3)-(.356563782*Y^2)+(.31938153*Y))

where
Y = 1/(1+.2316419*ABS(Z))
ABS() = The absolute value function.
N'(Z) = .398942*EXP(-(Z^2/2))
EXP() = The exponential function.

Thus:

N'(3) = .398942*EXP(-((-3)^2/2))
= .398942*EXP(-(9/2))
= .398942*EXP(-4.5)
= .398942*.011109
= .004431846678

Y = 1/(1+.2316419*ABS(-3))
= 1/(1+.2316419*3)
= 1/(1+.6949257)
= 1/1.6949257
= .5899963639

N(-3) = .004431846678*((1.330274429*.5899963639^5)-(1.821255978*.5899963639^4)+(1.781477937*.5899963639^3)-(.356563782*.5899963639^2)+(.31938153*.5899963639))
= .004431846678*((1.330274429*.07149022693)-(1.821255978*.1211706)+(1.781477937*.2053752)-(.356563782*.3480957094)+(.31938153*.5899963639))
= .004431846678*(.09510162081-.2206826796+.3658713876-.1241183226+.1884339414)
= .004431846678*.3046059476
= .001349966857

Note that even though Z is negative (Z = -3), we do not adjust N(Z) here by making N(Z) = 1-N(Z). Since we are not using the -Z provision, we just let the answer be. Now for each value in the standard value column there will be a corresponding entry in the associated P&L column and in the associated probability column. This is shown in the following table. Once you have these three columns established you are ready to begin the search for the optimal f and its by-products.
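The three columns can be generated as follows. This is a sketch, not the author's code: the function and variable names are mine, and the constants are the ones printed in the text (including the truncated .398942 for 1 over the square root of 2 pi):

```python
import math

def tail_probability(z):
    # Equation (3.21) without its leading "1-" and without the Z < 0
    # flip: the probability of equaling or exceeding ABS(z) standard
    # units. Constants are the ones printed in the text.
    y = 1.0 / (1.0 + 0.2316419 * abs(z))
    n_prime = 0.398942 * math.exp(-(z * z) / 2.0)
    poly = (1.330274429 * y**5 - 1.821255978 * y**4 + 1.781477937 * y**3
            - 0.356563782 * y**2 + 0.31938153 * y)
    return n_prime * poly

# Actual input parameters, plus the "What if" parameters
# (1 = leave the inputs unaffected), as in the example.
MEAN, STDEV = 330.129, 1743.232
STRETCH, SHRINK = 1.0, 1.0

std_values = [-3.0 + 0.1 * i for i in range(61)]
assoc_pnl = [MEAN * SHRINK + STDEV * e * STRETCH for e in std_values]  # Eq. (3.28)
assoc_prob = [tail_probability(e) for e in std_values]

print(assoc_pnl[0])   # about -4899.57
print(assoc_prob[0])  # about .00135
```

The 61 standard values, associated P&L's, and associated probabilities produced here match the table below.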
STD VALUE   ASSOCIATED P&L   ASSOCIATED PROBABILITY   ASSOCIATED HPR AT f = .01
-3.0        ($4,899.57)      0.001350                 0.9999864325
-2.9        ($4,725.24)      0.001866                 0.9999819179
-2.8        ($4,550.92)      0.002555                 0.9999761557
-2.7        ($4,376.60)      0.003467                 0.9999688918
-2.6        ($4,202.27)      0.004661                 0.9999598499
-2.5        ($4,027.95)      0.006210                 0.9999487404
-2.4        ($3,853.63)      0.008198                 0.9999352717
-2.3        ($3,679.30)      0.010724                 0.9999191675
-2.2        ($3,504.98)      0.013903                 0.9999001875
-2.1        ($3,330.66)      0.017864                 0.9998781535
-2.0        ($3,156.33)      0.022750                 0.9998529794
-1.9        ($2,982.01)      0.028716                 0.9998247051
-1.8        ($2,807.69)      0.035930                 0.9997935316
-1.7        ($2,633.37)      0.044565                 0.9997598578
-1.6        ($2,459.04)      0.054799                 0.9997243139
-1.5        ($2,284.72)      0.066807                 0.9996877915
-1.4        ($2,110.40)      0.080757                 0.9996514657
-1.3        ($1,936.07)      0.096800                 0.9996168071
-1.2        ($1,761.75)      0.115070                 0.9995855817
-1.1        ($1,587.43)      0.135666                 0.999559835
-1.0        ($1,413.10)      0.158655                 0.9995418607
-0.9        ($1,238.78)      0.184060                 0.9995341524
-0.8        ($1,064.46)      0.211855                 0.9995393392
-0.7        ($890.13)        0.241963                 0.999560108
-0.6        ($715.81)        0.274253                 0.9995991135
-0.5        ($541.49)        0.308537                 0.9996588827
-0.4        ($367.16)        0.344578                 0.9997417168
-0.3        ($192.84)        0.382088                 0.9998495968
-0.2        ($18.52)         0.420740                 0.9999840984
-0.1        $155.81          0.460172                 1.0001463216
 0.0        $330.13          0.500000                 1.0003368389
 0.1        $504.45          0.460172                 1.0004736542
 0.2        $678.78          0.420740                 1.00058265
 0.3        $853.10          0.382088                 1.0006649234
 0.4        $1,027.42        0.344578                 1.0007220715
 0.5        $1,201.75        0.308537                 1.0007561259
 0.6        $1,376.07        0.274253                 1.0007694689
 0.7        $1,550.39        0.241963                 1.0007647383
 0.8        $1,724.71        0.211855                 1.0007447264
 0.9        $1,899.04        0.184060                 1.0007122776
 1.0        $2,073.36        0.158655                 1.0006701921
 1.1        $2,247.68        0.135666                 1.0006211392
 1.2        $2,422.01        0.115070                 1.0005675842
 1.3        $2,596.33        0.096800                 1.0005117319
 1.4        $2,770.65        0.080757                 1.0004554875
 1.5        $2,944.98        0.066807                 1.0004004351
 1.6        $3,119.30        0.054799                 1.0003478328
 1.7        $3,293.62        0.044565                 1.0002986228
 1.8        $3,467.95        0.035930                 1.0002534528
 1.9        $3,642.27        0.028716                 1.0002127072
 2.0        $3,816.59        0.022750                 1.0001765438
 2.1        $3,990.92        0.017864                 1.000144934
 2.2        $4,165.24        0.013903                 1.0001177033
 2.3        $4,339.56        0.010724                 1.0000945697
 2.4        $4,513.89        0.008198                 1.0000751794
 2.5        $4,688.21        0.006210                 1.0000591373
 2.6        $4,862.53        0.004661                 1.0000460328
 2.7        $5,036.86        0.003467                 1.0000354603
 2.8        $5,211.18        0.002555                 1.0000270338
 2.9        $5,385.50        0.001866                 1.0000203976
 3.0        $5,559.83        0.001350                 1.0000152327

By-products at f = .01:
TWR = 1.0053555695
Sum of the probabilities = 7.9791232176
Geomean = 1.0006696309
GAT = $328.09

Here is how you go about finding the optimal f. First, you must determine the search method for f. You can simply loop from 0 to 1 by a predetermined amount (e.g., .01), use an iterative technique, or use the technique of parabolic interpolation described in Portfolio Management Formulas. What you seek to find is what value for f (between 0 and 1) will result in the highest geometric mean. Once you have decided upon a search technique, you must determine what the worst-case associated P&L is in your table. In our example it is the P&L corresponding to -3 standard units, -4,899.57. You will need to use this particular value repeatedly throughout the calculations.

In order to find the geometric mean for a given f value, for each value of f that you are going to process in your search for the optimal, you must convert each associated P&L and probability to an HPR. Equation (3.30) shows the calculation for the HPR:

(3.30) HPR = (1+(L/(W/(-f))))^P

where
L = The associated P&L.
W = The worst-case associated P&L in the table (this will always be a negative value).
f = The tested value for f.
P = The associated probability.

Working through an example now where we use the value of .01 for the tested value for f, we will find the associated HPR at the standard value of -3. Here, our worst-case associated P&L is -4899.57, as is our associated P&L.
Therefore, our HPR here is:

HPR = (1+(-4899.57/(-4899.57/(-.01))))^.001349966857
= (1+(-4899.57/489957))^.001349966857
= (1+(-.01))^.001349966857
= .99^.001349966857
= .9999864325

Now we move down to our next standard value, of -2.9, where we have an associated P&L of -4725.24 and an associated probability of .001866. Our associated HPR here will be:

HPR = (1+(-4725.24/(-4899.57/(-.01))))^.001866
= (1+(-4725.24/489957))^.001866
= (1+(-.009644193266))^.001866
= .990355807^.001866
= .9999819

Once we have calculated an associated HPR for each standard value for a given test value of f (.01 in our example table), you are ready to calculate the TWR. The TWR is simply the product of all of the HPRs for a given f value multiplied together:

(3.31) TWR = ∏[i = 1,N]HPRi

where
N = The total number of equally spaced data points.
HPRi = The HPR corresponding to the ith data point, given by Equation (3.30).

So for our test value of f = .01, the TWR will be:

TWR = .9999864325*.9999819179*...*1.0000152327 = 1.0053555695

We can readily convert a TWR into a geometric mean by taking the TWR to the power of 1 divided by the sum of all of the associated probabilities.

(3.32) G = TWR^(1/∑[i = 1,N]Pi)

where
N = The number of equally spaced data points.
Pi = The associated probability of the ith data point.

Note that if we sum the column that lists the 61 associated probabilities it equals 7.979105. Therefore, our geometric mean at f = .01 is:

G = 1.0053555695^(1/7.979105) = 1.0053555695^.1253273393 = 1.00066963

We can also calculate the geometric average trade (GAT). This is the amount you would have made, on average per contract per trade, if you were trading this distribution of outcomes at a specified f value.

(3.33) GAT = (G(f)-1)*(W/(-f))

where
G(f) = The geometric mean for a given f value.
f = The given f value.
W = The worst-case associated P&L.
In the case of our example, the f value is .01:

GAT = (1.00066963-1)*(-4899.57/(-.01))
= .00066963*489957
= 328.09

Therefore, we would expect to make, on average per contract per trade, $328.09.

Now we go to our next value for f that must be tested according to our chosen search procedure for the optimal f. In the case of our example we are looping from 0 to 1 by .01 for f, so our next test value for f is .02. We will do the same thing again. We will calculate a new associated HPRs column, and calculate our TWR and geometric mean. The f value that results in the highest geometric mean is that value for f which is the optimal based on the input parameters we have used. In our example, if we were to continue with our search for the optimal f, we would find the optimal at f = .744 (I am using a step increment of .001 in my search for the optimal f here). This results in a geometric mean of 1.0265. Therefore, the corresponding geometric average trade is $174.45.

It is important to note that the TWR itself doesn't have any real meaning as a by-product. Rather, when we are calculating our geometric mean parametrically, as we are here, the TWR is simply an interim step in obtaining that geometric mean. Now, we can figure what our TWR would be after X trades by taking the geometric mean to the power of X. Therefore, if we want to calculate our TWR for 232 trades at a geometric mean of 1.0265, we would raise 1.0265 to the power of 232, obtaining 431.79. So we can state that trading at an optimal f of .744, we would expect to make 43,079% ((431.79-1)*100) on our stake after 232 trades.

Another by-product we will calculate is our threshold to geometric, Equation (2.02):

Threshold to geometric = 330.13/174.45*-4899.57/-.744 = 12,462.32

Notice that the arithmetic average trade of $330.13 is not something that we have calculated with this technique; rather, it is a given, as it is one of the input parameters.
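The whole search can be sketched as below. The names are my own; the block rebuilds the three columns, implements Equations (3.30) through (3.33), and loops f from .001 to .999 by .001 as described in the text:

```python
import math

def tail_probability(z):
    # Equation (3.21) without the leading "1-" and without the Z < 0 flip.
    y = 1.0 / (1.0 + 0.2316419 * abs(z))
    return 0.398942 * math.exp(-(z * z) / 2.0) * (
        1.330274429 * y**5 - 1.821255978 * y**4
        + 1.781477937 * y**3 - 0.356563782 * y**2 + 0.31938153 * y)

MEAN, STDEV = 330.129, 1743.232
std_values = [-3.0 + 0.1 * i for i in range(61)]
pnls = [MEAN + STDEV * e for e in std_values]   # Eq. (3.28), stretch = shrink = 1
probs = [tail_probability(e) for e in std_values]
worst = min(pnls)                               # worst-case associated P&L
sum_probs = sum(probs)

def geo_mean(f):
    # Eq. (3.30): HPR = (1 + L/(W/(-f)))^P; Eq. (3.31): TWR = product of
    # the HPRs; Eq. (3.32): G = TWR^(1 / sum of the probabilities).
    twr = 1.0
    for pnl, p in zip(pnls, probs):
        twr *= (1.0 + pnl / (worst / -f)) ** p
    return twr ** (1.0 / sum_probs)

def geo_avg_trade(f):
    # Eq. (3.33): GAT = (G(f) - 1) * (W / (-f)).
    return (geo_mean(f) - 1.0) * (worst / -f)

# Loop from .001 to .999 by .001 and keep the f with the highest geomean.
f_opt = max((i / 1000.0 for i in range(1, 1000)), key=geo_mean)
print(f_opt, geo_mean(f_opt), geo_avg_trade(f_opt))
```

At f = .01 this reproduces the geometric mean and GAT of the example table, and the search lands at the optimal f of roughly .744 reported above.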
We can now convert our optimal f into how many contracts to trade by the equations:

(3.34) K = E/Q

where
K = The number of contracts to trade.
E = The current account equity.

(3.35) Q = W/(-f)

where
W = The worst-case associated P&L.
f = The optimal f value.

Note that this variable, Q, represents a number that you can divide your account equity by as your equity changes on a day-by-day basis to know how many contracts to trade. Returning now to our example:

Q = -4,899.57/-.744 = $6,585.44

Therefore, we will trade 1 contract for every $6,585.44 in account equity. For a $25,000 account this means we would trade:

K = 25000/6585.44 = 3.796253553

Since we cannot trade in fractional contracts, we must round this figure of 3.796253553 down to the nearest integer. We would therefore trade 3 contracts for a $25,000 account. The reason we always round down rather than up is that the price extracted for being slightly below optimal is less than the price for being slightly beyond it.

Notice how sensitive the optimal number of contracts to trade is to the worst loss. This worst loss is solely a function of how many sigmas you have decided to go to the left of the mean. This bounding parameter, the range of sigmas, is very important in this calculation. We have chosen three sigmas in our calculation. This means that we are, in effect, budgeted for a three-sigma loss. However, a loss greater than three sigmas can really hurt us, depending on how far beyond three sigmas it is. Therefore, you should be very careful what value you choose for this range bounding parameter. You'll have a lot riding on it.

Notice that for the sake of simplicity in illustration, we have not deducted commissions and slippage from these figures. If you wanted to incorporate commissions and slippage, you should deduct X dollars in commissions and slippage from each of the 232 trades at the outset of this exercise.
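Equations (3.34) and (3.35), with the round-down rule, can be sketched as (the function name is mine):

```python
import math

def contracts_to_trade(equity, worst_pnl, f):
    # Equation (3.35): Q = W/(-f); Equation (3.34): K = E/Q,
    # rounded DOWN to a whole number of contracts.
    q = worst_pnl / -f
    return math.floor(equity / q), q

k, q = contracts_to_trade(25000, -4899.57, 0.744)
print(q)  # about 6585.44
print(k)  # 3
```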
You would calculate your arithmetic average trade and population standard deviation from this set of 232 adjusted trades, and then perform the exercise exactly as described. We could now go back and perform a "What if" type of scenario here. Suppose we want to see what will happen if the system begins to perform at only half the profitability it is now (shrink = .5). Further, assume that the market that the system we are looking at is in gets very volatile, and that as a consequence the dispersion among the trades increases by 60% (stretch = 1.6). By pumping these parameters through this system we can see what the optimal f will be, so that we can make adjustments to our trading before these changes become history. In so doing we find that the optimal f now becomes .262, or to trade 1 contract for every $31,305.92 in account equity (since the worst-case associated P&L is strongly affected by changes in stretch and shrink). This is quite a change. This means that if these changes in the market system start to materialize, we are going to have to do some altering in our money management regarding that system. The geometric mean will drop to 1.0027, the geometric average trade will be cut to $83.02, and the TWR over 232 trades will be 1.869. This is not even close to what it presently would be. All of this is predicated upon a 50% decrease in average trade and a 60% increase in standard deviation. This quite possibly could happen. It is also quite possible that the future could work out more favorably than the past. We can test this out, too. Suppose we want to see what will happen if our average profit increases by only 10%. We can check this by inputting a shrink value of 1.1. These "What if" parameters, stretch and shrink, really give us a great deal of power in our money management. The closer your distribution of trade P&L's is to Normal to begin with, the better the technique will work for you.
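One plausible way to sketch the stretch and shrink adjustments in code is shown below. This is an illustrative implementation of the idea only (the exact equations from earlier chapters are not reproduced here): shrink scales the average trade, and stretch scales the dispersion of each trade about that average.

```python
def what_if(trades, stretch=1.0, shrink=1.0):
    # Hypothetical sketch: new trade = shrunk mean + stretched deviation.
    mean = sum(trades) / len(trades)
    return [mean * shrink + (t - mean) * stretch for t in trades]

# Halve the average trade and widen dispersion by 60%:
adjusted = what_if([100.0, -50.0, 250.0], stretch=1.6, shrink=0.5)
```

Under this construction the adjusted series has exactly half the original mean and 1.6 times the original deviation about it.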
The problem with almost any money management technique is that there is a certain amount of "slop" involved. Here, we can define slop as the difference between the Normal Distribution and the distribution we are actually using. The more slop there is, the less effective the technique becomes. To illustrate, recall that using this method we have determined that to trade 1 contract for every $6,585.44 in account equity is optimal. However, if we were to go over these trades and find our optimal f empirically, we would find that the optimal is to trade 1 contract for every $7,918.04 in account equity. As you can see, using the Normal Distribution technique here would have us slightly to the right of the f curve, trading slightly more contracts than the empirical would suggest. However, as we shall see, there is a lot to be said for expecting the future distribution of prices to be Normally distributed. When someone buys or sells an option, the assumption that the future distribution of the log of price changes in the underlying instrument will be Normal is built into the price of the option. Along this same line of reasoning, someone who is entering a trade in a market and is not using a mechanical system can be said to be looking at the same possible future distribution. The technique detailed in this chapter was shown using data that was not equalized. We can also use this very same technique on equalized data by incorporating the following changes:

1. Before the data is standardized, it should be equalized by first converting all of the trade profits and losses to percentage profits and losses per Equations (2.10a) through (2.10c). Then these percentage profits and losses should be translated into percentages of the current price by simply multiplying them by the current price.

2. When you go to standardize this data, standardize the now equalized data by using the mean and standard deviation of the equalized data.
3. The rest of the procedure is the same as written in this chapter in terms of determining the optimal f, geometric mean, and TWR. The geometric average trade, arithmetic average trade, and threshold to the geometric are only valid for the current price of the underlying instrument. When the price of the underlying instrument changes, the procedure must be done again, going back to step 1 and multiplying the percentage profits and losses by the new underlying price. When you go to redo the procedure with a different underlying price, you will obtain the same optimal f, geometric mean, and TWR. However, your arithmetic average trade, geometric average trade, and threshold to the geometric will differ, depending on the new price of the underlying instrument.

The number of contracts to trade as given in Equation (3.34) must be changed. The worst-case associated P&L, the W variable in Equation (3.34) [as subequation (3.35)], will be different as a result of the changes caused in the equalized data by a different current price.

In this chapter we have learned how to find the optimal f on a probability distribution. We have used the Normal Distribution because it shows up so frequently in many naturally occurring processes and because it is easier to work with than many other distributions, since its cumulative density function, Equation (3.21), exists.[5] Yet the Normal is often regarded as a poor model for the distribution of trade profits and losses. What then is a good model for our purposes? In the next chapter we will address this question and build upon the techniques we have learned in this chapter to work for any type of probability distribution, whether its cumulative density function is known or not.

[5] Again, the cumulative density function to the Normal Distribution does not really exist, but rather is very closely approximated by Equation (3.21).
However, the cumulative density of the Normal can at least be approximated by an equation, a luxury which not all distributions possess.

Chapter 4 - Parametric Techniques on Other Distributions

We have seen in the previous chapter how to find the optimal f and its by-products on the Normal Distribution. The same technique can be applied to any other distribution where the cumulative density function is known. Many of these more common distributions and their cumulative density functions are covered in Appendix B. Unfortunately, most distributions of trade P&L's do not fit neatly into the Normal or other common distribution functions. In this chapter we first treat this problem of the undefined nature of the distribution of trade P&L's and later look at the technique of scenario planning, a natural outgrowth of the notion of optimal f. This technique has many broad applications. This then leads into finding the optimal f on a binned distribution, which leads us to the next chapter regarding both options and multiple simultaneous positions. Before we attempt to model the real distribution of trade P&L's, we must have a method for comparing two distributions.

THE KOLMOGOROV-SMIRNOV (K-S) TEST

The chi-square test is no doubt the most popular of all methods of comparing two distributions. Since many market-oriented applications other than the ones we perform in this chapter often use the chi-square test, it is discussed in Appendix A. However, the best test for our purposes may well be the K-S test. This very efficient test is applicable to unbinned distributions that are a function of a single independent variable (profit per trade in our case). All cumulative density functions have a minimum value of 0 and a maximum value of 1. What goes on in between differentiates them. The K-S test measures a very simple variable, D, which is defined as the maximum absolute value of the difference between two distributions' cumulative density functions.
To perform the K-S test is relatively simple. N objects (trades in our case) are standardized (by subtracting the mean and dividing by the standard deviation) and sorted in ascending order. As we go through these sorted and standardized trades, the cumulative probability is however many trades we've gone through divided by N. When we get to our first trade in the sorted sequence, the trade with the lowest standard value, the cumulative density function (CDF) is equal to 1/N. With each standard value that we pass along the way up to our highest standard value, 1 is added to the numerator until, at the end of the sequence, our CDF is equal to N/N, or 1. For each standard value we can compute the theoretical distribution that we wish to compare to. Thus, we can compare our actual cumulative density to any theoretical cumulative density. The variable D, the K-S statistic, is equal to the greatest distance between any standard value's actual cumulative density and the value of the theoretical distribution's CDF at that standard value. Whichever standard value results in the greatest difference is assigned to the variable D. When comparing our actual CDF at a given standard value to the theoretical CDF at that standard value, we must also compare the previous standard value's actual CDF to the current standard value's theoretical CDF. The reason is that the actual CDF breaks upward instantaneously at the data points, and, if the actual is below the theoretical, the difference between the lines is greater the instant before the actual jumps up.

[Figure 4-1 The K-S test: the actual and theoretical CDFs plotted as cumulative probability against standard values, with points A and B marked.]

To see this, look at Figure 4-1. Notice that at point A the actual line is above the theoretical. Therefore, we want to compare the current actual CDF value to the current theoretical value to find the greatest difference. Yet at point B, the actual line is below the theoretical.
Therefore, we want to compare the previous actual value to the current theoretical value. The rationale is that we are measuring the greatest distance between the two lines. Since we are measuring at the instant the actual jumps up, we can consider using the previous value for the actual as the current value for the actual the instant before it jumps. In summary, then, for each standard value, we want to take the absolute value of the difference between the current actual CDF value and the current theoretical CDF value. We also want to take the absolute value of the difference between the previous actual CDF value and the current theoretical CDF value. By doing this for all standard values, all points where the actual CDF jumps up by 1/N, and taking the greatest difference, we will have determined the variable D. The lower the value of D, the more the two distributions are alike. We can readily convert the D value to a significance level by the following formula:

(4.01) SIG = ∑[j = 1,∞] ((j%2)*4-2)*EXP(-2*j^2*(N^(1/2)*D)^2)

where SIG = The significance level for a given D and N. D = The K-S statistic. N = The number of trades that the K-S statistic is determined over. % = The modulus operator, the remainder from division. As it is used here, j%2 yields the remainder when j is divided by 2. EXP() = The exponential function.

There is no need to keep summing the values until j gets to infinity. The equation converges (in short order, usually) to a value. Once the convergence is obtained to a close enough user tolerance, there is no need to continue summing values. To illustrate Equation (4.01) by example, suppose we had 100 trades that yielded a K-S statistic of .04:

J1 = ((1%2)*4-2)*EXP(-2*1^2*(100^(1/2)*.04)^2) = (1*4-2)*EXP(-2*1^2*(10*.04)^2) = 2*EXP(-2*1^2*.4^2) = 2*EXP(-2*1*.16) = 2*EXP(-.32) = 2*.726149 = 1.452298

So our first value is 1.452298.
Now to this we will add the next pass through the equation, and as such we must increment j by 1, so that j now equals 2:

J2 = ((2%2)*4-2)*EXP(-2*2^2*(100^(1/2)*.04)^2) = (0*4-2)*EXP(-2*2^2*(10*.04)^2) = -2*EXP(-2*2^2*.4^2) = -2*EXP(-2*4*.16) = -2*EXP(-1.28) = -2*.2780373 = -.5560746

Adding this value of -.5560746 back into our running sum of 1.452298 gives us a new running sum of .8962234. We again increment j by 1, so it equals 3, and perform the equation. We take the resulting value and add it to our running total of .8962234. We keep on doing this until we converge to a value within a close enough tolerance. For our example, this point of convergence will be right around .997, depending upon how many decimal places we want to be accurate to. This answer means that for 100 trades where the greatest distance between the two cumulative density functions was .04, we can be 99.7% certain that the actual distribution was generated by the theoretical distribution function. In other words, we can be 99.7% certain that the theoretical distribution function represents the actual distribution. Incidentally, this is a very good significance level.

CREATING OUR OWN CHARACTERISTIC DISTRIBUTION FUNCTION

We have determined that the Normal Probability Distribution is generally not a very good model of the distribution of trade profits and losses. Further, none of the more common probability distributions are either. Therefore, we must create a function to model the distribution of our trade profits and losses ourselves. The distribution of the logs of price changes is generally assumed to be of the stable Paretian variety (for a discussion of the stable Paretian distribution, refer to Appendix B). The distribution of trade P&L's can be regarded as a transformation of the distribution of prices. This transformation occurs as a result of trading techniques such as traders trying to cut their losses and let their profits run.
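Stepping back, the full K-S procedure described above (the empirical CDF, the D statistic with the previous-value comparison, and the significance sum of Equation (4.01)) can be sketched as follows. The function names are mine, and `theoretical_cdf` is any callable returning the theoretical CDF at a standard value:

```python
import math

def ks_statistic(standard_values, theoretical_cdf):
    """D: the greatest distance between the actual and theoretical CDFs.
    At each sorted point we test both the current actual CDF, (i+1)/N, and
    the previous one, i/N, since the actual CDF jumps up at data points."""
    xs = sorted(standard_values)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        theo = theoretical_cdf(x)
        d = max(d, abs((i + 1) / n - theo), abs(i / n - theo))
    return d

def ks_significance(d, n, tol=1e-12):
    """Equation (4.01): sum ((j%2)*4-2)*exp(-2*j^2*N*D^2) until it converges."""
    total = 0.0
    for j in range(1, 1000):
        term = ((j % 2) * 4 - 2) * math.exp(-2 * j * j * n * d * d)
        total += term
        if abs(term) < tol:
            break
    return total

# The example in the text: 100 trades, D = .04, converging to about .997.
print(round(ks_significance(0.04, 100), 4))
```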
Hence, the distribution of trade P&L's can also be regarded as of the stable Paretian variety. What we are about to study, however, is not the stable Paretian. The stable Paretian, like all other distributional functions, models a specific probability phenomenon. The stable Paretian models the distribution of sums of independent, identically distributed random variables. The distributional function we are about to study does not model a specific probability phenomenon. Rather, it models other unimodal distributional functions. As such, it can replicate the shape, and therefore the probability densities, of the stable Paretian as well as any other unimodal distribution. Now we will create this function. To begin with, consider the following equation:

(4.02) Y = 1/(X^2+1)

This equation graphs as a general bell-shaped curve, symmetric about the Y axis, as is shown in Figure 4-2.

[Figure 4-2 LOC = 0 SCALE = 1 SKEW = 0 KURT = 2]

We will thus build from this general equation. The variable X can be thought of as the number of standard units we are either side of the mean, or Y axis. We can affect the first moment of this "distribution," the location, by adding a value to represent a change in location to X. Thus, the equation becomes:

(4.03) Y = 1/((X-LOC)^2+1)

where Y = The ordinate of the characteristic function. X = The standard value amount. LOC = A variable representing the location, the first moment of the distribution.

Thus, if we wanted to alter location by moving it to the left by 1/2 of a standard unit, we would set LOC to -.5. This would give us the graph depicted in Figure 4-3.

[Figure 4-3 LOC = -.5 SCALE = 1 SKEW = 0 KURT = 2]

Likewise, if we wanted to shift location to the right, we would use a positive value for the LOC variable. Keeping LOC at zero will result in no shift in location, as depicted in Figure 4-2. The exponent in the denominator affects kurtosis. Thus far, we have seen the distribution with the kurtosis set to a value of 2, but we can control the kurtosis of the distribution by changing the value of the exponent. This alters our characteristic function, which now appears as:

(4.04) Y = 1/((X-LOC)^KURT+1)

where Y = The ordinate of the characteristic function. X = The standard value amount. LOC = A variable representing the location, the first moment of the distribution. KURT = A variable representing kurtosis, the fourth moment of the distribution.

Figures 4-4 and 4-5 demonstrate the effect of the kurtosis variable on our characteristic function. Note that the higher the exponent, the more flat-topped and thin-tailed the distribution (platykurtic), and the lower the exponent, the more pointed the peak and thicker the tails of the distribution (leptokurtic).

[Figure 4-4 LOC = 0 SCALE = 1 SKEW = 0 KURT = 3]

[Figure 4-5 LOC = 0 SCALE = 1 SKEW = 0 KURT = 1]

So that we do not run into problems with irrational numbers when KURT<1, we will use the absolute value of the coefficient in the denominator. This does not affect the shape of the curve. Thus, we can rewrite Equation (4.04) as:

(4.04) Y = 1/(ABS(X-LOC)^KURT+1)

We can put a multiplier on the coefficient in the denominator to allow us to control the scale, the second moment of the distribution. Thus, our characteristic function has now become:

(4.05) Y = 1/(ABS((X-LOC)*SCALE)^KURT+1)

where Y = The ordinate of the characteristic function. X = The standard value amount. LOC = A variable representing the location, the first moment of the distribution. SCALE = A variable representing the scale, the second moment of the distribution. KURT = A variable representing kurtosis, the fourth moment of the distribution.

Figures 4-6 and 4-7 demonstrate the effect of the scale parameter. The effect of this parameter can be thought of as moving the horizontal axis up or down on the distribution. When the axis is moved up (by decreasing scale), the graph is also enlarged. This results in what we have in Figure 4-6. This has the effect of moving the horizontal axis up and enlarging the distribution curve. The result is as though we were looking at the "cap" of the distribution. Figure 4-7 does just the opposite. As is borne out in the figure, the effect is that the horizontal axis has been moved down and the distribution curve shrunken.

[Figure 4-6 LOC = 0 SCALE = .5 SKEW = 0 KURT = 2]

[Figure 4-7 LOC = 0 SCALE = 2 SKEW = 0 KURT = 2]

We now have a characteristic function to a distribution whereby we have complete control over three of the first four moments of the distribution. Presently, the distribution is symmetric about the location. What we now need is to be able to incorporate a variable for skewness, the third moment of the distribution, into this function. To account for skewness, we must amend our function further. Our characteristic function has now evolved to:

(4.06) Y = (1/(ABS((X-LOC)*SCALE)^KURT+1))^C

where C = The exponent for skewness, calculated as:

(4.07) C = (1+(ABS(SKEW)^ABS(1/(X-LOC))*sign(X)*sign(SKEW)))^.5

Y = The ordinate of the characteristic function. X = The standard value amount. LOC = A variable representing the location, the first moment of the distribution. SCALE = A variable representing the scale, the second moment of the distribution. SKEW = A variable representing the skewness, the third moment of the distribution. KURT = A variable representing kurtosis, the fourth moment of the distribution. sign() = The sign function, equal to 1 or -1. The sign of X is calculated as X/ABS(X) for X not equal to 0. If X is equal to zero, the sign should be regarded as positive.

Figures 4-8 and 4-9 demonstrate the effect of the skewness variable on our distribution.

[Figure 4-8 LOC = 0 SCALE = 1 SKEW = -.5 KURT = 2]

[Figure 4-9 LOC = 0 SCALE = 1 SKEW = +.5 KURT = 2]

A few important notes on the four parameters LOC, SCALE, SKEW, and KURT. With the exception of the variable LOC (which is expressed as the number of standard values to offset the distribution by), the other three variables are nondimensional - that is, their values are pure numbers which have meaning only in a relative context, characterizing the shape of the distribution, and are relevant only to this distribution. Furthermore, the parameter values are not the same values you would get if you employed any of the standard measuring techniques detailed in "Descriptive Measures of Distributions" in Chapter 3. For instance, if you determined one of Pearson's coefficients of skewness on a set of data, it would not be the same value that you would use for the variable SKEW in the adjustable distributions here. The values for the four variables are unique to our distribution and have meaning only in a relative context. Also of importance is the range that the variables can take. The SCALE variable must always be positive with no upper bound, and likewise with KURT. In application, though, you will generally use values between .5 and 3, and in extreme cases between .05 and 5. However, you can use values beyond these extremes, so long as they are greater than zero. The LOC variable can be positive, negative, or zero. The SKEW parameter must be greater than or equal to -1 and less than or equal to +1. When SKEW equals +1, the entire right side of the distribution (right of the peak) is equal to the peak, and vice versa when SKEW equals -1. The ranges on the variables are summarized as:

(4.08) -infinity < LOC < +infinity
(4.09) SCALE > 0
(4.10) -1 <= SKEW <= +1
(4.11) KURT > 0

Figures 4-2 through 4-9 demonstrate just how pliable our distribution is. We can fit these four parameters such that the resultant distribution can fit to just about any other distribution.
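The finished characteristic function, Equations (4.06) and (4.07), translates directly to code. One assumption of mine: the skew exponent C is taken as 1 both when SKEW is 0 (the symmetric case) and when X equals LOC, where 1/(X-LOC) in Equation (4.07) is undefined:

```python
def sign(x):
    # X/ABS(X); the sign of zero is regarded as positive, per the text.
    return 1.0 if x >= 0 else -1.0

def char_func(x, loc=0.0, scale=1.0, skew=0.0, kurt=2.0):
    """Equation (4.06): the ordinate of the adjustable distribution at x."""
    if skew == 0.0 or x == loc:
        c = 1.0  # assumption: exponent defaults to 1 at the location / zero skew
    else:
        # Equation (4.07): the exponent for skewness.
        c = (1 + abs(skew) ** abs(1 / (x - loc)) * sign(x) * sign(skew)) ** 0.5
    return (1 / (abs((x - loc) * scale) ** kurt + 1)) ** c

# The peak height is 1 at the location; with the parameter values used in the
# N'(X) table later in the chapter, char_func(-3) reproduces its entry.
print(char_func(-3.0, loc=0.02, scale=2.76, skew=0.0, kurt=1.78))
```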
FITTING THE PARAMETERS OF THE DISTRIBUTION

Just as with the process described in Chapter 3 for finding our optimal f on the Normal Distribution, we must convert our raw trades data over to standard units. We do this by first subtracting the mean from each trade, then dividing by the population standard deviation. From this point forward, we will be working with the data in standard units rather than in its raw form. After we have our trades in standard values, we can sort them in ascending order. With our trades data arranged this way, we will be able to perform the K-S test on it. Our objective now is to find what values for LOC, SCALE, SKEW, and KURT best fit our actual trades distribution. To determine this "best fit" we rely on the K-S test. We estimate the parameter values by employing the "twentieth-century brute force technique." We run every combination for KURT from 3 to .5 by -.1 (we could just as easily run it from .5 to 3 by .1, as it doesn't matter whether we ascend or descend through the values). We also run every combination for SCALE from 3 to .5 by -.1. For the time being we leave LOC and SKEW at 0. Thus, we are going to run the following combinations:

LOC   SCALE   SKEW   KURT
0     3       0      3
0     3       0      2.9
0     3       0      2.8
0     3       0      2.7
0     3       0      2.6
0     3       0      2.5
0     3       0      2.4
0     3       0      2.3
0     3       0      2.2
0     3       0      2.1
0     3       0      2
0     3       0      1.9
...
0     2.9     0      3
0     2.9     0      2.9
...
0     .5      0      .6
0     .5      0      .5

We perform the K-S test for each combination. The combination that results in the lowest K-S statistic we assume to be our optimal best-fitting parameter values for SCALE and KURT (for the time being). To perform the K-S test for each combination, we need both the actual distribution and the theoretical distribution (determined from the parameters for the adjustable distribution that we are testing). We already have seen how to construct the actual cumulative density as X/N, where N is the total number of trades and X is the ranking (between 1 and N) of a given trade.
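The brute-force grid itself is trivial to enumerate. With both SCALE and KURT stepped from 3 down to .5 by .1, and LOC and SKEW pinned at 0 for now, there are 26 * 26 = 676 combinations to run the K-S test over:

```python
# Every (LOC, SCALE, SKEW, KURT) combination for the first brute-force pass.
combos = [(0.0, round(s * 0.1, 1), 0.0, round(k * 0.1, 1))
          for s in range(30, 4, -1)    # SCALE: 3.0 down to 0.5 by 0.1
          for k in range(30, 4, -1)]   # KURT:  3.0 down to 0.5 by 0.1

print(len(combos), combos[0], combos[-1])
```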
Now we need to calculate the CDF (the function for what percentage of the area of the characteristic function a certain point constitutes) for our theoretical distribution for the given LOC, SCALE, SKEW, and KURT parameter values we are presently looping through. We have the characteristic function for our adjustable distribution. This is Equation (4.06). To obtain a CDF from a distribution's characteristic function we must find the integral of the characteristic function. We define the integral, the percentage of area under the characteristic function at point X, as N(X). Thus, since Equation (4.06) gives us the first derivative to the integral, we define Equation (4.06) as N'(X). Often you may not be able to derive the integral of a function, even if you are proficient in calculus. Therefore, rather than determining the integral to Equation (4.06), we are going to rely on a different technique, one that, although a bit more labor intensive, is hardier than the technique of finding the integral. The respective probabilities can always be estimated for any point on the function's characteristic line by making the distribution be a series of many bars. Then, for any given bar on the distribution, you can calculate the probability associated with that bar by taking the sum of the areas of all those bars to the left of your bar, including your bar, and dividing it by the sum of the areas of all the bars in the distribution. The more bars you use, the more accurate your estimated probabilities will be. If you could use an infinite number of bars, your estimate would be exact. We now discuss the procedure for finding the areas under our adjustable distribution by way of an example. Assume we wish to find probabilities associated with every .1 increment in standard values from -3 to +3 sigmas of our adjustable distribution. Notice that our table (p.
163) starts at -5 standard units and ends at +5 standard units, the reason being that you should begin and end 2 sigmas beyond the bounding parameters (-3 and +3 sigmas in this case) to get more accurate results. Therefore, we begin our table at -5 sigmas and end it at +5 sigmas. Notice that X represents the number of standard units that we are away from the mean. This is then followed by the four parameter values. The next column is the N' (X) column, the height of the curve at point X given these parameter values. N'(X) is calculated as Equation (4.06). We now work with Equation (4.06). Assume that we want to calculate N'(X) for X at -3, with the values for the parameters of .02, 2.76, 0, and 1.78 for LOC, SCALE, SKEW, and KURT respectively. First, we calculate the exponent of skewness, C in Equation (4.06)-given as Equation (4.07)-as: x -5.0 -4.9 -4.8 -4.7 -4.6 -4.5 -4.4 -4.3 -4.2 -4.1 -4.0 -3.9 -3.8 -3.7 -3.6 -3.5 -3.4 -3.3 -3.2 -3.1 -3.0 -2.9 -2.8 -2.7 -2.6 -2.5 -2.4 -2.3 -2.2 -2.1 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 LOC SCA LE 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 SKE W 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 KURT N'(X)Eq.(4.06) RUNNINGSUM 1.78 0.0092026741 0.0092026741 1.78 0.0095350519 0.018737726 1.78 0.0098865117 0.0286242377 1.78 0.01025857 0.0388828077 1.78 0.0106528988 0.0495357065 1.78 0.0110713449 0.0606070514 1.78 0.0115159524 0.0721230038 1.78 0.0119889887 0.08411I9925 1.78 0.0124929748 0.0966049673 1.78 0.0130307203 0.I096356876 1.78 0.0136053639 0.1232410515 1.78 0.0142204209 0.1374614724 1.78 0.0148798398 0.1523413122 1.78 0.0155880672 
0.1679293795 1.78 0.0163501266 0.184279506 1.78 0.0171717099 0.2014512159 1.78 0.0180592883 0.2195105042 1.78 0.0190202443 0.2385307485 1.78 0.0200630301 0.2585937786 1.78 0.0211973606 0.2797911392 1.78 0.0224344468 0.302225586 1.78 0.0237872819 0.3260128679 1.78 0.0252709932 0.3512838612 1.78 0.0269032777 0.3781871389 1.78 0.0287049446 0.4068920835 1.78 0.0307005967 0.4375926802 1.78 0.032919491I 0.4705121713 1.78 0.0353966362 0.5059088075 1.78 0.0381742015 0.544083009 1.78 0.041303344 0.5853863529 1.78 0.0448465999 0.6302329529 1.78 0.0488810452 0.6791139981 1.78 0.0535025185 0.7326165166 1.78 0.0588313292 0.7914478458 1.78 0.0650200649 0.8564679107 1.78 0.0722644105 0.9287323213 1.78 0.080818341 1.0095506622 1.78 0.0910157581 1.1005664203 1.78 0.1033017455 1.2038681658 1.78 0.I182783502 1.322146516 N(X) 0.000388 0.001178 0.001997 0.002847 0.003729 0.004645 0.005598 0.006590 0.007622 0.008699 0.009823 0.010996 0.012224 0.013509 0.014856 0.016270 0.017756 0.019320 0.020969 0.022709 0.024550 0.026499 0.028569 0.030770 0.033115 0.035621 0.038305 0.041186 0.044290 0.047642 0.051276 0.055229 0.059548 0.064287 0.06951I 0.075302 0.081759 0.089007 0.097204 0.106550 x -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 5.0 LOC SCA LE 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 
2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 SKE W 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 KURT N'(X)Eq.(4.06) RUNNINGSUM 1.78 0.1367725028 1.4589190187 1.78 0.1599377464 1.6188567651 1.78 0.1894070001 1.8082637653 1.78 0.2275190511 2.0357828164 1.78 0.2776382822 2.3134210986 1.78 0.3445412618 2.6579623604 1.78 0.4346363128 3.0925986732 1.78 0.5550465747 3.6476452479 1.78 0.7084848615 4.3561301093 1.78 0.8772840491 5.2334141584 1.78 1 6.2334141584 1.78 0.9363557429 7.1697699013 1.78 0.776473162 7.9462430634 1.78 0.6127219404 8.5589650037 1.78 0.4788099392 9.0377749429 1.78 0.377388991 9.4151639339 1.78 0.3020623672 9.7172263011 1.78 0.2458941852 9.9631204863 1.78 0.2034532796 10.1665737659 1.78 0.1708567846 10.3374305505 1.78 0.1453993995 10.48282995 1.78 0.1251979811 10.6080279311 1.78 0.1089291462 10.7169570773 1.78 0.0956499316 10.8126070089 1.78 0.0846780659 10.8972850748 1.78 0.0755122067 10.9727972814 1.78 0.0677784099 11.0405756913 1.78 0.0611937787 11.10176947 1.78 0.0555414402 11.1573109102 1.78 0.0506530744 11.2079639847 1.78 0.0463965419 11.2543605266 1.78 0.0426670018 11.2970275284 1.78 0.0393804519 11.3364079803 1.78 0.0364689711 11.3728769515 1.78 0.0338771754 11.4067541269 1.78 0.0315595472 11.4383136741 1.78 0.0294784036 11.4677920777 1.78 0.0276023341 11.4953944118 1.78 0.0259049892 11.5212994011 1.78 0.0243641331 11.5456635342 1.78 0.0229608959 11.5686244301 1.78 0.0216791802 11.5903036102 1.78 0.0205051855 11.6108087957 1.78 0.0194270256 11.6302358213 1.78 0.0184344179 11.6486702392 1.78 0.0175184304 11.6661886696 1.78 0.0166712734 11.682859943 1.78 0.0158861285 11.6987460714 1.78 0.0151570063 11.7139030777 1.78 0.014478628 11.7283817056 1.78 0.0138463263 11.742228032 1.78 0.0132559621 11.7554839941 1.78 0.012703854 11.7681878481 1.78 0.0121867187 11.7803745668 1.78 0.0117016203 11.7920761871 1.78 0.0112459269 11.8033221139 1.78 0.0108172734 11.8141393873 
1.78 0.0104135298 11.8245529171 1.78 0.0100327732 11.8345856903 1.78 0.0096732643 11.8442589547 1.78 0.0093334265 11.8535923812 = .02243444681 Thus, at the point X = -3, the N'(X) value is .02243444681. (Notice that we calculate an N'(X) column, which corresponds to every value of X). The next step we must perform, the next column, is the running sum of the N'(X)'s as we advance up through the X's. This is straight forward enough. Now we calculate the N(X) column, the resultant probabilities associated with each value of X, for the given parameter values. To do this, we must perform Equation (4.12): (4.12) N(C) = (∑[i = 1,C]N'(Xi)+∑[i = 1,C-1]N'(Xi))/2/ ∑[i = 1,M]N'(Xi) where C = The current X value. M = The total count of X values. Equation (4.12) says, literally, to add the running sum at the current value of X to the running sum at the previous value of X as we advance up through the X's. Now divide this sum by 2. Then take the new quotient and divide it by the last value in the column of the running sum of the N'(X)'s (the total of the N'(X) column). This gives us the resultant probabilities for a given value of X, for given parameter values. Thus, for the value of -3 for X, the running sum of the N'(X)'s at -3 is .302225586, and the previous X, -3.1, has a running sum value of .2797911392. Summing these two running sums together gives us 5820167252. Dividing this by 2 gives us .2910083626. Then dividing this by the last value in the running sum column, the total of all of the N'(X)'s, 11.8535923812, gives us a quotient of .02455022522. This is the associated probability, N(X), at the standard value of X = -3. Once we have constructed cumulative probabilities for each trade in the actual distribution and probabilities for each standard value increment in our adjustable distribution, we can perform the K-S test for the parameter values we are currently using. Before we do, however, we must make adjustments for a couple of other preliminary considerations. 
In the example of the table of cumulative probabilities shown earlier for our adjustable distribution, we calculated probabilities at every .1 increment in standard values. This was for the sake of simplicity. In practice, you can obtain a greater degree of accuracy by using a smaller step increment. I find that using .01 standard values is a good step increment.

A word on how to determine your bounding parameters in actual practice; that is, how many sigmas either side of the mean you should go in determining your probabilities for our adjustable distribution. In our example we were using 3 sigmas either side of the mean, but in reality you must use the absolute value of the farthest point from the mean. For our 232-trade example, the extreme left (lowest) standard value is -2.96 standard units and the extreme right (highest) is 6.935321 standard units. Since 6.935321 is greater than ABS(-2.96), we must take the 6.935321. Now, we add at least 2 sigmas to this value, for the sake of accuracy, and construct probabilities for a distribution from -8.94 to +8.94 sigmas. Since we want a good deal of accuracy, we will use a step increment of .01.
Therefore, we will figure probabilities for standard values of -8.94, -8.93, -8.92, -8.91, and so on, up through +8.94.

[Column of N(X) cumulative probability values, 0.117308 through 0.999606.]

Now, the last thing we must do before we can actually perform our K-S statistic is to round the actual standard values of the sorted trades to the nearest .01 (since we are using .01 as our step value on the theoretical distribution). For example, the value 6.935321 will not have a corresponding theoretical probability associated with it, since it is in between the step values 6.93 and 6.94. Since 6.94 is closer to 6.935321, we round 6.935321 to 6.94. Before we can begin the procedure of optimizing our adjustable distribution parameters to the actual distribution by employing the K-S test, we must round our actual sorted standardized trades to the nearest step increment.

In lieu of rounding the standard values of the trades to the nearest Xth decimal place, you can use linear interpolation on your table of cumulative probabilities to derive probabilities corresponding to the actual standard values of the trades. For more on linear interpolation, consult a good statistics book, such as some of the ones suggested in the bibliography, or Commodity Market Money Management by Fred Gehm.
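A linear-interpolation lookup can be sketched in Python; the table entries below are hypothetical stand-ins for the full cumulative-probability table:

```python
import bisect

def interp_prob(x, xs, probs):
    """Linearly interpolate a cumulative probability for x from a table
    of step values xs (sorted ascending) and their probabilities."""
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return probs[0]
    if i == len(xs):
        return probs[-1]
    x0, x1 = xs[i - 1], xs[i]
    p0, p1 = probs[i - 1], probs[i]
    return p0 + (p1 - p0) * (x - x0) / (x1 - x0)

# Hypothetical table entries around the 6.93/6.94 steps:
xs = [6.92, 6.93, 6.94, 6.95]
probs = [0.99640, 0.99645, 0.99650, 0.99655]
print(interp_prob(6.935321, xs, probs))  # between .99645 and .99650
```

This avoids the small error introduced by rounding 6.935321 to 6.94.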
(4.07) C = (1 + (ABS(SKEW)^ABS(1/(X-LOC)) * sign(X) * sign(SKEW)))^.5
= (1 + (ABS(0)^ABS(1/(-3-.02)) * -1 * -1))^.5
= (1 + 0)^.5
= 1

Thus, substituting 1 for C in Equation (4.06):

(4.06) Y = (1/(ABS((X-LOC)*SCALE)^KURT + 1))^C
= (1/(ABS((-3-.02)*2.76)^1.78 + 1))^1
= (1/((3.02*2.76)^1.78 + 1))^1
= (1/(8.3352^1.78 + 1))^1
= (1/(43.57431058 + 1))^1
= (1/44.57431058)^1
= .02243444681

Thus far, we have been optimizing only for the best-fitting KURT and SCALE values. Logically, it would seem that if we standardized our data, as we have, then the LOC parameter should be kept at 0 and the SCALE parameter should be kept at 1. This is not necessarily true, as the true location of the distribution may not be the arithmetic mean, and the true optimal value for scale may not be at 1. The KURT and SCALE values have a very strong relationship to one another. Thus, we first try to isolate the "neighborhood" of best-fitting parameter values for KURT and SCALE. For our 232 trades this occurs at SCALE equal to 2.7 and KURT equal to 1.9.

Now we progressively try to zero in on the best-fitting parameter values. This is a computer-time-intensive process. We run our next pass through, cycling the LOC parameter from .1 to -.1 by -.05, the SCALE parameter from 2.6 to 2.8 by .05, the SKEW parameter from .1 to -.1 by -.05, and the KURT parameter from 1.86 to 1.92 by .02. The results of this cycle through give the optimal (lowest K-S statistic) at LOC = 0, SCALE = 2.8, SKEW = 0, and KURT = 1.86. Thus we perform a third cycle through. This time we run LOC from .04 to -.04 by -.02, SCALE from 2.76 to 2.82 by .02, SKEW from .04 to -.04 by -.02, and KURT from 1.8 to 1.9 by .02. The results of the third cycle through show optimal values at LOC = .02, SCALE = 2.76, SKEW = 0, and KURT = 1.8. Now we have zeroed right in on the optimal neighborhood, the areas where the parameters make for the best fit of our adjustable characteristic function to the actual data.
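Each cycle through is just an exhaustive search over the four parameter grids. A minimal Python sketch of one cycle, using the third-cycle ranges above; the objective function here is a hypothetical stand-in with its minimum placed at the parameter values the text ultimately finds, whereas in practice it would compute the K-S statistic between the fitted and actual distributions:

```python
import itertools

def best_fit(objective, loc_rng, scale_rng, skew_rng, kurt_rng):
    """Cycle through every combination of parameter values and keep
    the combination yielding the lowest objective value."""
    best, best_val = None, float("inf")
    for params in itertools.product(loc_rng, scale_rng, skew_rng, kurt_rng):
        val = objective(*params)
        if val < best_val:
            best_val, best = val, params
    return best, best_val

# Hypothetical stand-in objective; a real implementation would compute
# the K-S statistic for each parameter set against the 232 trades.
def toy_ks(loc, scale, skew, kurt):
    return (loc - .02)**2 + (scale - 2.76)**2 + skew**2 + (kurt - 1.78)**2

# Third-cycle ranges from the text:
locs = [.04, .02, 0, -.02, -.04]
scales = [2.76, 2.78, 2.8, 2.82]
skews = [.04, .02, 0, -.02, -.04]
kurts = [1.8, 1.82, 1.84, 1.86, 1.88, 1.9]
params, ks = best_fit(toy_ks, locs, scales, skews, kurts)
print(params)  # (0.02, 2.76, 0, 1.8), matching the text's third-cycle result
```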
For our last cycle through we are going to run LOC from 0 to .03 by .01, SCALE from 2.76 to 2.73 by -.01, SKEW from .01 to -.01 by -.01, and KURT from 1.8 to 1.75 by -.01. The results of this final pass show optimal parameters for our 232 trades at LOC = .02, SCALE = 2.76, SKEW = 0, and KURT = 1.78.

USING THE PARAMETERS TO FIND OPTIMAL F

Now that we have found the best-fitting parameter values, we can find the optimal f on this distribution. We can take the same procedure we used to find the optimal f on the Normal Distribution discussed in the last chapter. The only difference now is that the associated probabilities for each standard value (X value) are calculated per the procedure described for Equations (4.06) and (4.12). With the Normal Distribution, we find our associated probabilities column (probabilities corresponding to a certain standard value) by using Equation (3.21). Here, to find our associated probabilities, we must follow the procedure detailed previously:

1. For a given standard value, X, we figure its corresponding N'(X) by Equation (4.06).
2. For each standard value, we also have the interim step of keeping a running sum of the N'(X)'s corresponding to each value of X.
3. Now, to find N(X), the resultant probability for a given X, add together the running sum corresponding to the X value with the running sum corresponding to the previous X value. Divide this sum by 2. Then divide this quotient by the sum total of the N'(X)'s, the last entry in the column of running sums. This new quotient is the associated 1-tailed probability for a given X.

Since we now have a procedure to find the associated probabilities for a given standard value, X, for a given set of parameter values, we can find our optimal f. The procedure is exactly the same as that detailed for finding the optimal f on the Normal Distribution. The only difference is that we calculate the associated probabilities column differently.
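The three steps above can be sketched as a minimal Python implementation of Equations (4.06), (4.07), and (4.12). It assumes X never equals LOC (the exponent in (4.07) is undefined there) and |SKEW| <= 1:

```python
def n_prime(x, loc, scale, skew, kurt):
    """N'(X) per Equations (4.06) and (4.07). Assumes x != loc; the
    sign convention at skew == 0 is immaterial since the term is 0."""
    sign = lambda v: -1.0 if v < 0 else 1.0
    c = (1 + abs(skew) ** abs(1 / (x - loc)) * sign(x) * sign(skew)) ** .5
    return (1 / (abs((x - loc) * scale) ** kurt + 1)) ** c

def associated_probs(xs, loc, scale, skew, kurt):
    """N(X) per Equation (4.12): average of the current and previous
    running sums of N'(X), divided by the grand total."""
    nprimes = [n_prime(x, loc, scale, skew, kurt) for x in xs]
    run, running = 0.0, []
    for val in nprimes:
        run += val
        running.append(run)
    total = running[-1]
    out, prev = [], 0.0
    for s in running:
        out.append((s + prev) / 2 / total)
        prev = s
    return out

# Reproduce the worked value N'(-3) for LOC=.02, SCALE=2.76, SKEW=0, KURT=1.78:
print(n_prime(-3, .02, 2.76, 0, 1.78))  # ≈ .02243444681
```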
In our 232-trade example, the parameter values that result in the lowest K-S statistic are .02, 2.76, 0, and 1.78 for LOC, SCALE, SKEW, and KURT respectively. We arrived at these parameter values by using the optimization procedure outlined in this chapter. This resulted in a K-S statistic of .0835529 (meaning that at its worst point, the two distributions were apart by 8.35529%), and a significance level of 7.8384%. Figure 4-10 shows the distribution function for those parameter values that best fit our 232 trades.

[Figure 4-10: Adjustable distribution fit to the 232 trades. N(X) plotted from 0 to 1.2 against P&L values from -$4,899.56 to $3,716.59.]

If we take these parameters and find the optimal f on this distribution, bounding the distribution from +3 to -3 sigmas and using 100 equally spaced data points, we arrive at an optimal f value of .206, or 1 contract for every $23,783.17. Compare this to the empirical method, which showed that optimal growth is obtained at 1 contract for every $7,918.04 in account equity. But that is the result we get if we bound the distribution at 3 sigmas either side of the mean. In reality, in the empirical stream of trades, we had a worst-case loss of 2.96 sigmas and a best-case gain of 6.94 sigmas. Now if we go back and bound our distribution at 2.96 sigmas on the left (negative side) of the mean and 6.94 on the right (and we'll use 300 equally spaced data points this time), we obtain an optimal f of .954, or 1 contract for every $5,062.71 in account equity. Why does this differ from the empirical optimal f of $7,918.04? The difference is in the "roughness" of the actual distribution. Recall that the significance level of our best-fitting parameters was only 7.8384%. Let us take our 232-trade distribution and bin it into 12 bins from -3 to +3 sigmas.
Bin             Number of Trades
-3.0 to -2.5     2
-2.5 to -2.0     1
-2.0 to -1.5     2
-1.5 to -1.0    24
-1.0 to -0.5    39
-0.5 to  0.0    43
 0.0 to  0.5    69
 0.5 to  1.0    38
 1.0 to  1.5     7
 1.5 to  2.0     2
 2.0 to  2.5     0
 2.5 to  3.0     2

Notice that out on the tails of the distribution are gaps, areas or bins where there isn't any empirical data. These areas invariably get smoothed over when we fit our adjustable distribution to the data, and it is these smoothed-over areas that cause the difference between the parametric and the empirical optimal fs.

Why doesn't our distribution fit the observed better, especially in light of how malleable it is? The reason has to do with the observed distribution having too many points of inflection. A parabola can be cupped upward or downward. Yet over the extent of a parabola, the direction of the cup, whether it points upward or downward, is unchanged. We define a point of inflection as any time the direction of the concavity changes from up to down. Therefore, a parabola has 0 points of inflection, since the direction of the concavity never changes. An object shaped like the letter S lying on its side has one point of inflection, one point where the concavity changes from up to down.

[Figure 4-11: Points of inflection on a bell-shaped distribution, showing the concave-down center and concave-up tails.]

Figure 4-11 shows the Normal Distribution. Notice there are two points of inflection in a bell-shaped curve such as the Normal Distribution.
Depending on the value for SCALE, our adjustable distribution can have zero points of inflection (if SCALE is very low) or two points of inflection. The reason our adjustable distribution does not fit the actual distribution of trades any better than it does is that the actual distribution has too many points of inflection. Does this mean that our fitted adjustable distribution is wrong? Probably not. If we were so inclined, we could create a distribution function that allowed for more than two points of inflection, which would better curve-fit to the actual observed distribution. If we created a distribution function that allowed for as many points of inflection as we desired, we could fit to the observed distribution perfectly. Our optimal f derived therefrom would then be nearly the same as the empirical. However, the more points of inflection we were to add to our distribution function, the less robust it would be (i.e., it would probably be less representative of the trades in the future). However, we are not trying to fit the parametric f to the observed exactly. We are trying to determine how the observed data is distributed so that we can determine with a fair degree of accuracy what the optimal f in the future will be if the data is distributed as it were in the past. When we look at the adjustable distribution that has been fit to our actual trades, the spurious points of inflection are removed.

An analogy may clarify this. Suppose we are using Galton's board. We know that asymptotically the distribution of the balls falling through the board will be Normal. However, we are only going to see 4 balls rolled through the board. Can we expect the outcomes of the 4 balls to be perfectly conformable to the Normal? How about 5 balls? 50 balls? In an asymptotic sense, we expect the observed distribution to flesh out to the expected as the number of trades increases.
Fitting our theoretical distribution to every point of inflection in the actual will not give us any greater degree of accuracy in the future. As more trades occur, we can expect the observed distribution to converge toward the expected, as we can expect the extraneous points of inflection to be filled in with trades as the number of trades approaches infinity. If the process generating the trades is accurately modeled by our parameters, the optimal f derived from the theoretical will be more accurate over the future sequence of trades than the optimal f derived empirically over the past trades.

In other words, if our 232 trades are a proxy of the distribution of the trades in the future, then we can expect the trades in the future to arrive in a distribution more like the theoretical one that we have fit than like the observed with its extraneous points of inflection and its roughness due to not having an infinite number of trades. In so doing, we can expect the optimal f in the future to be more like the optimal f obtained from the theoretical distribution than it is like the optimal f obtained empirically over the observed distribution. So, we are better off in this case to use the parametric optimal f rather than the empirical. The situation is analogous to the 20-coin-toss discussion of the previous chapter. If we expect 60% wins at a 1:1 payoff, the optimal f is correctly .2. However, if we only had empirical data of the last 20 tosses, 11 of which were wins, our optimal f would show as .1, even though .2 is what we should optimally bet on the next toss since it has a 60% chance of winning. We must assume that the parametric optimal f ($5,062.71 in this case) is correct because it is the optimal f on the generating function.
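This is easy to verify: for a 1:1 payoff the optimal f reduces to f = 2p - 1, the formula from the previous chapter's coin-toss discussion:

```python
def optimal_f_even_money(p):
    """Optimal f for a 1:1 payoff game: f = 2p - 1."""
    return 2 * p - 1

print(round(optimal_f_even_money(.60), 10))  # 0.2, the true 60% game
print(round(optimal_f_even_money(.55), 10))  # 0.1, i.e., 11 wins seen in 20 tosses
```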
As with the coin-toss game just mentioned, we must assume that the optimal f for the next trade is determined parametrically by the generating function, even though this may differ from the empirical optimal f.

Obviously, the bounding parameters have a very important effect on the optimal f. Where should you place the bounding parameters so as to obtain the best results? Look at what happens as we move the upper bound up. The following table is compiled by bounding the lower end at 3 sigmas, and using 100 equally spaced data points and the optimal parameters to our 232 trades:

Upper Bound    f       f$
3 Sigmas      .206    $23,783.17
4 Sigmas      .588     $8,332.51
5 Sigmas      .784     $6,249.42
6 Sigmas      .887     $5,523.73
7 Sigmas      .938     $5,223.41
8 Sigmas      .963     $5,087.81
100 Sigmas    .999     $4,904.46

Notice that, keeping the lower bound constant, the higher up we move the upper bound, the more the optimal f approaches 1. Thus, the more we move the upper bound up, the more the optimal f in dollars will approach the lower bound (worst-case expected loss) exactly. In this case, where our lower bound is at -3 sigmas, the more we move the upper bound up, the more the optimal f in dollars will approach the lower bound as a limit: $330.13 - ($1,743.23 * 3) = -$4,899.56.

Now observe what happens when we keep the upper bound constant (at 3), but move the lower bound lower. Very soon into this process the arithmetic mathematical expectation turns negative. This happens because more than 50% of the area under the characteristic function is to the left of the zero axis. Consequently, as we move the lower bounding parameter lower, the optimal f quickly goes to zero.

Now consider what happens when we move both bounding parameters out at the same rate. Here we are using the optimal parameter set of .02, 2.76, 0, and 1.78 on our distribution of 232 trades, and 100 equally spaced data points:

Upper and Lower Bound    f       f$
3 Sigmas                .206     $23,783.17
4 Sigmas                .158     $42,040.42
5 Sigmas                .126     $66,550.75
6 Sigmas                .104     $97,387.87
10 Sigmas               .053    $322,625.17

Notice that our optimal f approaches 0 as we move both bounding parameters out to plus and minus infinity. Furthermore, since our worst-case loss gets greater and greater, and gets divided by a smaller and smaller optimal f, our f$, the amount to finance 1 unit by, approaches infinity as well.
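That limit arithmetic can be checked directly. The sketch below assumes, per the text, an arithmetic mean trade of $330.13 and a standard deviation of $1,743.23, with f$ computed as the absolute worst-case loss divided by the optimal f:

```python
mean_trade = 330.13   # arithmetic mean trade, from the text
sd_trade = 1743.23    # standard deviation of trades, from the text

# Worst-case loss at the -3 sigma lower bound:
lower_bound = mean_trade - 3 * sd_trade
print(round(lower_bound, 2))  # -4899.56

# f$ is the absolute worst-case loss divided by the optimal f (here f = .206):
f_dollars = abs(lower_bound) / .206
print(round(f_dollars, 2))  # ≈ $23,784; the text's $23,783.17, to rounding in f
```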
The problem of where the best place is to put the bounding parameters is best rephrased as, "Where, in the extreme case, do we expect the best and worst trades in the future (over the course of which we are going to trade this market system) to occur?" The tails of the distribution itself actually go to plus and minus infinity. To account for this, we would optimally finance each contract by an infinitely high amount (as in our last example, where we moved both bounds outward). If we were going to trade for an infinitely long time into the future, our optimal f in dollars would be infinite. But we're not going to trade this market system forever. The optimal f in the future over which we are going to trade this market system is a function of what the best and worst trades in that future are.

Recall that if we flip a coin 100 times and record what the longest streak of consecutive tails is, then flip the coin another 100 times, the longest streak of consecutive tails at the end of 200 flips will more than likely be greater than it was after only the first 100 flips. Similarly, if the worst-case loss seen over our 232-trade history was a 2.96-sigma loss (let's say a 3-sigma loss), then we should expect a loss of greater than 3 sigmas in the future over which we are going to trade this market system. Therefore, rather than bounding our distribution at what the bounds of the past history of trades were (-2.96 and +6.94 sigmas), we will bound it at -4 and +6.94 sigmas. We should perhaps expect the high-end bound to be violated in the future, much as we expect the low-end bound to be violated. However, we won't make this assumption for a couple of reasons. The first is that trading systems notoriously do not trade as well into the future, in general, as they have over historical data, even when there are no optimizable parameters involved. It gets back to the principle that mechanical trading systems seem to suffer from a continually deteriorating edge.
Second, the fact that we pay a lesser penalty for erring in optimal f if we err to the left of the peak of the f curve than if we err to the right of it suggests that we should err on the conservative side in our prognostications about the future. Therefore, we will determine our parametric optimal f by using the bounding parameters of -4 and +6.94 sigmas and use 300 equally spaced data points. However, in calculating the probabilities at each of the 300 equally spaced data points, it is important that we begin our distribution 2 sigmas before and after our selected bounding parameters. We therefore determine the associated probabilities by creating bars from -6 to +8.94 sigmas, even though we are only going to use the bars between -4 and +6.94 sigmas. In so doing, we have enhanced the accuracy of our results. Using our optimal parameters of .02, 2.76, 0, and 1.78 now yields an optimal f of .837, or 1 contract per every $7,936.41.

So long as our selected bounding parameters are not violated, our model of reality is accurate in terms of the bounds selected. That is, so long as we do not see a loss greater than 4 sigmas, $330.13 - ($1,743.23 * 4) = -$6,642.79, or a profit greater than 6.94 sigmas, $330.13 + ($1,743.23 * 6.94) = $12,428.15, we have accurately modeled the bounds of the distribution of trades in the future. The possible divergence between our model and reality is our blind spot. That is, the optimal f derived from our model (with our selected bounding parameters) is the optimal f for our model, not necessarily for reality. If our selected bounding parameters are violated in the future, our selected optimal f cannot then be the optimal. We would be smart to defend this blind spot with techniques, such as long options, that limit our liability to a prescribed amount. While we are discussing weaknesses with the method, one final weakness should be pointed out.
Once you have obtained your parametric optimal f, you should be aware that the actual distribution of trade profits and losses is one in which the parameters are constantly changing, albeit slowly. You should frequently run the technique on your trade profits and losses for each market system you are trading to monitor these dynamics of the distributions.

PERFORMING "WHAT IFS"

Once you have obtained your parametric optimal f, you can perform "What if" types of scenarios on your distribution function by altering the parameters LOC, SCALE, SKEW, and KURT of the distribution function to replicate different expected outcomes in the near future (different distributions the future might take) and observe the effects. Just as we can tinker with stretch and shrink on the Normal Distribution, so, too, can we tinker with the parameters LOC, SCALE, SKEW, and KURT of our adjustable distribution. The "What if" capabilities of the parametric technique are the strengths that help to offset the weaknesses of the actual distribution of trade P&L's moving around. The parametric techniques allow us to see the effects of changes in the distribution of actual trade profits and losses before they occur, and possibly to budget for them.

When tinkering with the parameters, a suggestion is in order. When finding the optimal f, rather than tinkering with the LOC, the location parameter, you are better off tinkering with the arithmetic average trade in dollars that you are using as input. The reason is illustrated in Figure 4-12.

[Figure 4-12: Altering location parameters, contrasting altering shrink or the average trade with altering the location parameter.]

Notice that in Figure 4-12, changing the location parameter LOC moves the distribution right or left in the "window" of the bounding parameters. But the bounding parameters do not move with the distribution.
Thus, a change in the LOC parameter also affects how many equally spaced data points will be left of the mode and right of the mode of the distribution. By changing the actual arithmetic mean (or using the shrink variable in the Normal Distribution search for f), the window of the bounding parameters moves also. When you alter the arithmetic average trade as input, or alter the shrink variable in the Normal Distribution mechanism, you still have the same number of equally spaced data points to the right and left of the mode of the distribution that you had before the alteration.

EQUALIZING F

The technique detailed in this chapter was shown using data that was not equalized. We can also use this very same technique on equalized data. If we want to determine an equalized parametric optimal f, we would convert the raw trade profits and losses over to percentage gains and losses, based on Equations (2.10a) through (2.10c). Next, we would convert these percentage profits and losses by multiplying them by the current price of the underlying instrument. For example, P&L number 1 is .18. Suppose the entry price to this trade was 100.50. The percentage gain on this trade would be .18/100.50 = .001791044776. Now suppose that the current price of this underlying instrument is 112.00. Multiplying .001791044776 by 112.00 translates into an equalized P&L of .2005970149.

If we were seeking to do this procedure on an equalized basis, we would perform this operation on all 232 trade profits and losses. We would then calculate the arithmetic mean and population standard deviation on the equalized trades and would use Equation (3.16) to standardize the trades. Next, we could find the optimal parameter set for LOC, SCALE, SKEW, and KURT on the equalized data exactly as was shown in this chapter for nonequalized data. The rest of the procedure is the same as in this chapter in terms of determining the optimal f, geometric mean, and TWR.
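The worked example above can be checked in a couple of lines of Python:

```python
# Equalizing a single P&L, per the worked example in the text:
pnl = .18
entry_price = 100.50
current_price = 112.00

pct = pnl / entry_price          # percentage gain on the trade
equalized = pct * current_price  # equalized P&L at the current price
print(pct)        # ≈ .001791044776
print(equalized)  # ≈ .2005970149
```

In practice this operation would be applied to every one of the 232 trades before standardizing.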
The by-products of the geometric average trade, arithmetic average trade, and threshold to the geometric are only valid for the current price of the underlying instrument. When the price of the underlying instrument changes, the procedure must be done again, going back to step one and multiplying the percentage profits and losses by the new underlying price. When you go to redo the procedure with a different underlying price, you will obtain the same optimal f, geometric mean, and TWR. However, your arithmetic average trade, geometric average trade, and threshold to the geometric will be different, based upon the new price of the underlying instrument. The number of contracts to trade as given in Equation (3.34) must be changed. The worst-case associated P&L, the W variable of Equation (3.35), will be different in Equation (3.34) as a result of the changes caused in the equalized data by a different current price.

OPTIMAL F ON OTHER DISTRIBUTIONS AND FITTED CURVES

At this point you should realize that there are many other ways you can determine your parametric optimal f. We have covered a procedure for finding the optimal f on Normally distributed data in the previous chapter. Thus we have a procedure that will give us the optimal f for any Normally distributed phenomenon. That same procedure can be used to find the optimal f on data of any distribution, so long as the cumulative density function of the selected distribution is available (these functions are given for many other common distributions in Appendix B). When the cumulative density function is not available, the optimal f can be found for any other function by the integration method used in this chapter to approximate the cumulative densities, the areas under the curve. I have elected in this chapter to model the actual distribution of trades by way of our adjustable distribution.
This amounts to little more than finding a function and its appropriate parameter values, which model the actual density function of the trade P&L's with a maximum of 2 points of inflection. You could use or create many other functions and methods to do this, such as polynomial interpolation and extrapolation, rational function (quotients of polynomials) interpolation and extrapolation, or using splines to fit a theoretical function to the actual. Once any theoretical function is found, the associated probabilities can be determined by the same method of integral estimation as was used in finding the associated probabilities of our adjustable distribution, or by using integration techniques of calculus.

There is a problem with fitting any of these other functions. Part of the thrust of this book has been to allow users of systems that are not purely mechanical to have the same account management power that users of purely mechanical systems have. As such, the adjustable distribution route that I took only requires estimates for the parameters. These parameters pertain to the first four moments of the distribution. It is these moments, location, scale, skewness, and kurtosis, that describe the distribution. Thus, someone trading on some not purely mechanical basis (e.g., Elliott wave) could estimate the parameters and have access to optimal f and its by-product calculations. A past history of trades is not a prerequisite for estimating these parameters. If you were to use any of the other fitting techniques mentioned, you wouldn't necessarily need a past history of trades either, but the estimates for the parameters of those fitting techniques do not necessarily pertain to the moments of the distribution. What they pertain to is a function of the particular function you are using. These other techniques would not necessarily allow you to see what would happen if kurtosis increased or skewness changed or the scale were altered, and so on.
Our adjustable distribution is the logical choice for a theoretical function to fit to the actual, since the parameters not only measure the moments of the distribution, they give us control over those moments when prognosticating about future changes to the distribution. Furthermore, estimating the parameters of our adjustable distribution is easier than with fitting any other function of which I am aware.

SCENARIO PLANNING

People who forecast for a living (economists, stock market forecasters, weathermen, government agencies, etc.) have a notorious history of incorrect forecasts, but most decisions anyone must make in life usually require making a forecast about the future. A couple of pitfalls immediately crop up here. To begin with, people generally make assumptions about the future that are more optimistic than the actual probabilities. Most people feel that they are far more likely to win the lottery this month than they are to die in an auto accident, even though the probabilities of the latter are greater. This is not only true on the level of the individual, it is even more pronounced on the level of the group. When people work together, they tend to see a favorable outcome as the most likely result (everyone else seems to, otherwise they wouldn't be working here); otherwise they would quit the project they are a part of (unless, of course, we have all become automatons mindlessly slaving away on sinking ships).

The second and more harmful pitfall is that people make straight-line forecasts into the future. People try to predict the price of a gallon of gas two years from now, predict what will happen with their jobs, who will be the next president, what the next styles will be, and on and on. Whenever we think of the future, we tend to think in terms of a single, most likely outcome.
As a result, whenever we must make decisions, whether as an individual or a group, we tend to make these decisions based on what we think will be the single most likely outcome in the future. As a consequence, we are extremely vulnerable to unpleasant surprises.

Scenario planning is a partial solution to this problem. A scenario is simply a possible forecast, a story about one way that the future might unfold. Scenario planning is a collection of scenarios to cover the spectrum of possibilities. Of course, the complete spectrum can never be covered, but the scenario planner wants to cover as many possibilities as he or she can. By acting in this manner, as opposed to a straight-line forecast of the most likely outcome, the scenario planner can prepare for the future as it unfolds. Furthermore, scenario planning allows the planner to be prepared for what might otherwise be an unexpected event. Scenario planning is tuned to reality in that it recognizes that certainty is an illusion.

Suppose you are involved in long-run planning for your company. Say you make a particular product. Rather than making a single-most-likely-outcome, straight-line forecast, you decide to exercise scenario planning. You will need to sit down with the other planners and brainstorm for possible scenarios. What if you cannot get enough of the raw materials to make your product? What if one of your competitors fails? What if a new competitor emerges? What if you have severely underestimated demand for this product? What if a war breaks out on such-and-such a continent? What if it is a nuclear war? Because each scenario is only one of several, each scenario can be considered seriously. But what do you do once you have defined these scenarios? To begin with, you must determine what goal you would like to achieve for each given scenario. Depending upon the scenario, the goal need not be a positive one. For instance, under a bleak scenario your goal may simply be damage control.
Once you have defined a goal for a given scenario, you then need to draw up the contingency plans pertaining to that scenario to achieve the desired goal. For instance, in the rather unlikely bleak scenario where your goal is damage control, you need to have plans formulated so that you can minimize the damage. Above all else, scenario planning provides the planner with a course of action to take should a certain scenario develop. It forces you to make plans before the fact; it forces you to be prepared for the unexpected.

Scenario planning can do a lot more, however. There is a hand-in-glove fit between scenario planning and optimal f. Optimal f allows us to determine the optimal quantity to allocate to a given set of possible scenarios. We can exist in only one scenario at a time, even though we are planning for multiple futures (multiple scenarios). Scenario planning puts us in a position where we must make a decision regarding how much of a resource to allocate today given the possible scenarios of tomorrow. This is the true heart of scenario planning: quantifying it.

We can use another parametric method for optimal f to determine how much of a certain resource to allocate given a certain set of scenarios. This technique will maximize the utility obtained in an asymptotic geometric sense. First, we must define each unique scenario. Second, we must assign a number to the probability of that scenario's occurrence. Being a probability means that this number is between 0 and 1. Scenarios with a probability of 0 we need not consider any further. Note that these probabilities are not cumulative. In other words, the probability assigned to a given scenario is unique to that scenario. Suppose we are a decision maker for XYZ Manufacturing Corporation. Two of the many scenarios we have are as follows.
In one scenario XYZ Manufacturing files for bankruptcy, with a probability of .15; in the other scenario XYZ is being put out of business by intense foreign competition, with a probability of .07. Now, we must ask if the first scenario, filing for bankruptcy, includes filing for bankruptcy due to the second scenario, intense foreign competition. If it does, then the probabilities in the first scenario have not taken the probabilities of the second scenario into account, and we must amend the probabilities of the first scenario to be .08 (.15-.07). Note also that just as important as the uniqueness of each probability to each scenario is that the sum of the probabilities of all of the scenarios we are considering must equal 1 exactly, not 1.01 nor .99, but 1. For each scenario we now have assigned a probability of just that scenario occurring. We must also assign an outcome result. This is a numerical value. It can be dollars made or lost as a result of a scenario manifesting itself, it can be units of utility, medication, or anything. However, our output is going to be in the same units that we put in as input. You must have at least one scenario with a negative outcome in order to use this technique. This is mandatory. Since we are trying to answer the question "How much of this resource should we allocate today given the possible scenarios of tomorrow?", if there is no negative outcome scenario, then we should allocate 100% of this resource. Further, without a negative outcome scenario it is questionable how tuned to reality this set of scenarios really is. A last prerequisite to using this technique is that the mathematical expectation, the sum of all of the outcome results times their respective probabilities, must be greater than zero:

(1.03) ME = ∑[i = 1,N] (Pi*Ai)

where

Pi = The probability associated with the ith scenario.
Ai = The result of the ith scenario.
N = The total number of scenarios under consideration.
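The three prerequisites just listed (probabilities summing to exactly 1, at least one negative-outcome scenario, and a positive mathematical expectation per Equation 1.03) can be checked mechanically. A minimal sketch in Python; the function names and the demo scenario set are invented for illustration only:

```python
# Check the three prerequisites for applying optimal f to a scenario set:
# probabilities sum to exactly 1, at least one scenario has a negative
# outcome, and the mathematical expectation (Equation 1.03) is positive.

def mathematical_expectation(scenarios):
    """Equation (1.03): ME = sum over i of Pi * Ai."""
    return sum(p * a for p, a in scenarios)

def is_usable(scenarios, tol=1e-9):
    """Return True only if all three prerequisites hold."""
    total_prob = sum(p for p, _ in scenarios)
    has_negative = any(a < 0 for _, a in scenarios)
    return (abs(total_prob - 1.0) < tol
            and has_negative
            and mathematical_expectation(scenarios) > 0)

# (probability, outcome) pairs (a made-up example set)
demo = [(0.08, -400_000), (0.07, -250_000), (0.85, 100_000)]
```

A set that lacks a negative-outcome scenario, or whose probabilities do not sum to 1, fails the check and should be revised before the optimal f technique is applied.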
If the mathematical expectation equals zero or is negative, the following technique cannot be used. That's not to say that scenario planning itself cannot be used. It can and should. However, optimal f can only be incorporated with scenario planning when there is a positive mathematical expectation. When the mathematical expectation is zero or negative, we ought not allocate any of this resource at this time. Lastly, you must try to cover as much of the spectrum of outcomes as possible. In other words, you really want to account for 99% of the possible outcomes. This may sound nearly impossible, but many scenarios can be made broader so that you don't need 10,000 scenarios to cover 99% of the spectrum. In making your scenarios broader, you must avoid the common pitfall of three scenarios: an optimistic one, a pessimistic one, and a third where things remain the same. This is too simple, and the answers derived therefrom are often too crude to be of any value. Would you want to find your optimal f for a trading system based on only three trades? So even though there may be an unknowably large number of scenarios covering the entire spectrum, we can cover what we believe to be about 99% of the spectrum of outcomes. If this makes for an unmanageably large number of scenarios, we can make the scenarios broader to trim down their number. However, by trimming down their number we lose a certain amount of information. When we trim the number of scenarios (by broadening them) down to only three, a common pitfall, we have effectively eliminated so much information that this technique is severely hampered in its effectiveness. What is a good number of scenarios to have then? As many as you can and still manage them. Here, a computer is a great asset. Assume again that we are making decisions for XYZ. We are looking at marketing a new product of ours in a primitive, remote little country.
We are looking at five possible scenarios (in reality you should have many more than this, but we'll use five for the sake of illustration). These five scenarios portray what we perceive as possible futures for this primitive, remote country, their probabilities of occurrence, and the gain or loss of investing there.

Scenario     Probability   Result
War          .10           -$500,000
Trouble      .20           -$200,000
Stagnation   .20           $0
Peace        .45           $500,000
Prosperity   .05           $1,000,000
Sum          1.00

The sum of our probabilities equals 1. We have at least one scenario with a negative result, and our mathematical expectation is positive:

(.1*-$500,000)+(.2*-$200,000)+(.2*$0)+(.45*$500,000)+(.05*$1,000,000) = $185,000

We can therefore use the technique on this set of scenarios. Notice first, however, that if we used the single most likely outcome method we would conclude that peace will be the future of this country, and we would then act as though peace were to occur, as though it were a certainty, only vaguely remaining aware of the other possibilities. Returning to the technique, we must determine the optimal f. The optimal f is that value for f (between 0 and 1) which maximizes the geometric mean:

(4.13) Geometric mean = TWR^(1/∑[i = 1,N] Pi)

and

(4.14) TWR = ∏[i = 1,N] HPRi

and

(4.15) HPRi = (1+(Ai/(W/-f)))^Pi

therefore

(4.16) Geometric mean = (∏[i = 1,N] (1+(Ai/(W/-f)))^Pi)^(1/∑[i = 1,N] Pi)

Finally, then, we can compute the real TWR as:

(4.17) TWR = Geometric mean^X

where

N = The number of different scenarios.
TWR = The terminal wealth relative.
HPRi = The holding period return of the ith scenario.
Ai = The outcome of the ith scenario.
Pi = The probability of the ith scenario.
W = The worst outcome of all N scenarios.
f = The value for f which we are testing.
X = However many times we want to "expand" this scenario out, that is, what we would expect to make if we invested f amount into these possible scenarios X times.

The TWR returned by Equation (4.14) is just an interim value we must have in order to obtain the geometric mean.
Once we have this geometric mean, the real TWR can be obtained by Equation (4.17). Here is how to perform these equations. To begin with, we must decide on an optimization scheme, a way of searching through the f values to find that f which maximizes our equation. Again, we can do this with a straight loop with f from .01 to 1, through iteration, or through parabolic interpolation. Next, we must determine the worst possible result for a scenario of all of the scenarios we are looking at, regardless of how small the probability of that scenario's occurrence is. In the example of XYZ Corporation this is -$500,000. Now for each possible scenario, we must first divide the worst possible outcome by negative f. In our XYZ Corporation example, we will assume that we are going to loop through f values from .01 to 1. Therefore we start out with an f value of .01. Now, if we divide the worst possible outcome of the scenarios under consideration by the negative value for f:

-$500,000/-.01 = $50,000,000

Negative values divided by negative values yield positive results, so our result in this case is positive. As we go through each scenario, we divide the outcome of the scenario by the result just obtained. Since the outcome to the first scenario is also the worst scenario, a loss of $500,000, we now have:

-$500,000/$50,000,000 = -.01

The next step is to add this value to 1. This gives us:

1+(-.01) = .99

Lastly, we take this answer to the power of the probability of its occurrence, which in our example is .1:

.99^.1 = .9989954713

Next, we go to the next scenario, labeled "Trouble," where there is a .2 probability of a loss of $200,000. Our worst-case result is still -$500,000.
The f value we are working on is still .01, so the value we want to divide this scenario's result by is still $50,000,000:

-$200,000/$50,000,000 = -.004

Working through the rest of the steps to obtain our HPR:

1+(-.004) = .996
.996^.2 = .9991987169

If we continue through the scenarios for this test value of .01 for f, we will find the three HPRs corresponding to the last three scenarios:

Stagnation   1.0
Peace        1.004487689
Prosperity   1.000990622

Once we have turned each scenario into an HPR for the given f value, we must multiply these HPRs together:

.9989954713*.9991987169*1.0*1.004487689*1.000990622 = 1.003667853

This gives us the interim TWR, which in this case is 1.003667853. Our next step is to take this to the power of 1 divided by the sum of the probabilities. Since the sum of the probabilities is 1, we can state that we must raise the TWR to the power of 1 to give us the geometric mean. Since anything raised to the power of 1 equals itself, we can say that our geometric mean equals the TWR in this case. We therefore have a geometric mean of 1.003667853. If, however, we relaxed the constraint that each scenario must have a unique probability, then we could allow the sum of the probabilities of the scenarios to be greater than 1. In such a case, we would have to raise our TWR to the power of 1 divided by this sum of the probabilities in order to derive the geometric mean. The answer we have just obtained in our example is our geometric mean corresponding to an f value of .01. Now we move on to an f value of .02, and repeat the whole process until we have found the geometric mean corresponding to an f value of .02. We keep on proceeding until we arrive at that value for f which yields the highest geometric mean. In our example we find that the highest geometric mean is obtained at an f value of .57, which yields a geometric mean of 1.1106. Dividing our worst possible outcome to a scenario (-$500,000) by the negative optimal f yields a result of $877,192.35.
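The search just described can be sketched in a few lines. This is a minimal illustration, assuming the simple .01-step loop over f mentioned in the text (iteration or parabolic interpolation would work as well); it uses the five XYZ scenarios, and the variable names are our own:

```python
# Grid search for the optimal f over the five XYZ scenarios in the text,
# using Equations (4.15)-(4.16). Scenarios are (probability, outcome) pairs.
scenarios = [(0.10, -500_000), (0.20, -200_000), (0.20, 0),
             (0.45, 500_000), (0.05, 1_000_000)]
W = min(outcome for _, outcome in scenarios)   # worst outcome: -500,000

def geometric_mean(f):
    """Product of HPRi = (1 + (Ai/(W/-f)))^Pi; the probabilities here
    sum to 1, so no further root needs to be taken."""
    gm = 1.0
    for p, a in scenarios:
        gm *= (1.0 + a / (W / -f)) ** p
    return gm

# Step f from .01 to 1.00 and keep the value with the highest geometric mean.
best_f = max((round(i / 100, 2) for i in range(1, 101)), key=geometric_mean)
allocation = W / -best_f   # dollars to commit: worst outcome / -optimal f
```

On this scenario set the loop lands on f = .57 with a geometric mean of about 1.1106, matching the figures in the text, and dividing the worst outcome by the negative optimal f gives the roughly $877,000 commitment the text discusses.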
In other words, if XYZ Corporation wants to commit to marketing this new product in this remote country, they will optimally commit this amount to this venture at this time. As time goes by and things develop, so do the scenarios, and as their resultant outcomes and probabilities change, so does this f amount change. The more XYZ Corporation keeps abreast of these changing scenarios, and the more accurate the scenarios they develop as input are, the more accurate their decisions will be. Note that if XYZ Corporation cannot commit this $877,192.35 to this undertaking at this time, then they are too far beyond the peak of the f curve. It is equivalent to the trader who has too many commodity contracts on with respect to what the optimal f says he or she should have on. If XYZ Corporation commits more than this amount to this project at this time, the situation would be analogous to a commodity trader with too few contracts on. Furthermore, although the quantity discussed here is a quantity of money, it could be a quantity of anything and the technique would be just as valid. The approach can be used for any quantitative decision in an environment of favorable uncertainty. If you create different scenarios for the stock market, the optimal f derived from this methodology will give you the correct percentage to be invested in the stock market at any given time. For instance, if the f returned is .65, then that means that 65% of your equity should be in the stock market with the remaining 35% in, say, cash. This approach will provide you with the greatest geometric growth of your capital in the long run. Of course, again, the output is only as accurate as the input you have provided the system with in terms of scenarios, their probabilities of occurrence, and resultant payoffs and costs. Furthermore, recall that everything said about optimal f applies here, and that also means that the expected drawdowns will approach a 100% equity retracement.
If you exercise this scenario planning approach to asset allocation, you can expect close to 100% of the assets allocated to the endeavor in question to be depleted at any one time in the future. For example, suppose you are using this technique to determine what percentage of investable funds should be in the stock market and what percentage should be in a risk-free asset. Assume that the answer is to have 65% invested in the stock market and the remaining 35% in the risk-free asset. You can expect the drawdowns in the future to approach 100% of the amount allocated to the stock market. In other words, you can expect to see, at some point in the future, almost 100% of your entire 65% allocated to the stock market to be gone. Yet this is how you will achieve maximum geometric growth. This same process can be used as an alternative parametric technique for determining the optimal f for a given trade. Suppose you are making your trading decisions based on fundamentals. If you wanted to, you could outline the different scenarios that the trade may take. The more scenarios, and the more accurate the scenarios, the more accurate your results would be. Say you are looking to buy a municipal bond for income, but you're not planning on holding the bond to maturity. You could outline numerous different scenarios of how the future might unfold and use these scenarios to determine how much to invest in this particular bond issue. This concept of using scenario planning to determine the optimal f can be used for everything from military strategies to deciding the optimal level to participate in an underwriting to the optimal down payment on a house. For our purposes, this technique is perhaps the best technique, and certainly the easiest to employ for someone not using a mechanical means of entering and exiting the markets.
Those who trade on fundamentals, weather patterns, Elliott waves, or any other approach that requires a degree of subjective judgment can easily discern their optimal fs with this approach. This approach is easier than determining distributional parameter values. The arithmetic average HPR of a group of scenarios can be computed as:

(4.18) AHPR = (∑[i = 1,N] ((1+(Ai/(W/-f)))*Pi)) / ∑[i = 1,N] Pi

where

N = the number of scenarios.
f = the f value employed.
Ai = the outcome (gain or loss) associated with the ith scenario.
Pi = the probability associated with the ith scenario.
W = the most negative outcome of all the scenarios.

The AHPR will be important later in the text when we will need to discern the efficient frontier of numerous market systems. We will need to determine the expected return (arithmetic) of a given market system. This expected return is simply AHPR-1. The technique need not be applied parametrically, as detailed here; it can also be applied empirically. In other words, we can take the trade listing of a given market system and use each of those trades as a scenario that might occur in the future, the profit or loss amount of the trade being the outcome result of the given scenario. Each scenario (trade) would have an equal probability of occurrence: 1/N, where N is the total number of trades (scenarios). This will give us the optimal f empirically. This technique bridges the gap between the empirical and the parametric. There is not a fine line that delineates the two schools. As you can see, there is a gray area. When we are presented with a decision where there is a different set of scenarios for each facet of the decision, selecting the scenario whose geometric mean corresponding to its optimal f is greatest will maximize our decision in an asymptotic sense. Often this flies in the face of conventional decision-making rules such as the Hurwicz rule, maximax, minimax, minimax regret, and greatest mathematical expectation.
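Equation (4.18) is straightforward to compute. A sketch, reusing the five XYZ scenarios and the optimal f of .57 found earlier; the function name is our own:

```python
# Equation (4.18): the arithmetic average HPR of a scenario set at a given
# f, shown on the five XYZ scenarios with the optimal f of .57 found
# earlier in the chapter.
scenarios = [(0.10, -500_000), (0.20, -200_000), (0.20, 0),
             (0.45, 500_000), (0.05, 1_000_000)]
W = min(outcome for _, outcome in scenarios)   # most negative outcome

def ahpr(f):
    """AHPR = (sum of (1 + (Ai/(W/-f)))*Pi) / (sum of Pi)."""
    numerator = sum((1.0 + a / (W / -f)) * p for p, a in scenarios)
    denominator = sum(p for p, _ in scenarios)   # equals 1 here
    return numerator / denominator

expected_return = ahpr(0.57) - 1.0   # arithmetic expected return, AHPR - 1
```

At f = .57 this gives an AHPR of 1.2109, that is, an arithmetic expected return of about 21% per holding period on this scenario set.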
For example, suppose we must decide between two possible choices. We could have many possible choices, but for the sake of simplicity we choose two, which we call "white" and "black." If we select the decision labeled "white," we determine that it will present the following possible future scenarios to us:

White Decision
Scenario   Probability   Result
A          .3            -20
B          .4            0
C          .3            30

Mathematical expectation = $3.00
Optimal f = .17
Geometric mean = 1.0123

It doesn't matter what these scenarios are, they can be anything, and to further illustrate this they will simply be assigned letters, A, B, C, in this discussion. Further, it doesn't matter what the result is, it can be just about anything. The black decision will present the following scenarios:

Black Decision
Scenario   Probability   Result
A          .3            -10
B          .4            5
C          .15           6
D          .15           20

Mathematical expectation = $2.90
Optimal f = .31
Geometric mean = 1.0453

Many people would opt for the white decision, since it is the decision with the higher mathematical expectation. With the white decision you can expect, "on average," a $3.00 gain versus black's $2.90 gain. Yet the black decision is actually the correct decision, because it results in a greater geometric mean. With the black decision, you would expect to make 4.53% (1.0453-1) "on average" as opposed to white's 1.23% gain. When you consider the effects of reinvestment, the black decision makes more than three times as much, on average, as does the white decision! "Hold on, pal," you say. "We're not doing this thing over again, we're doing it only once. We're not reinvesting back into the same future scenarios here. Won't we come out ahead if we always select the highest arithmetic mathematical expectation for each set of decisions that present themselves this way to us?" The only time we want to be making decisions based on greatest arithmetic mathematical expectation is if we are planning on not reinvesting the money risked on the decision at hand.
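The comparison between the two decisions can be reproduced with a short routine. A sketch assuming the same .01-step grid search over f used earlier in the chapter; the function names are ours:

```python
# The "white" and "black" decisions from the text as (probability, result)
# pairs: white has the higher arithmetic expectation, black the higher
# geometric mean at its optimal f.
white = [(0.3, -20), (0.4, 0), (0.3, 30)]
black = [(0.3, -10), (0.4, 5), (0.15, 6), (0.15, 20)]

def expectation(scenarios):
    """Arithmetic mathematical expectation, Equation (1.03)."""
    return sum(p * a for p, a in scenarios)

def optimal_f(scenarios):
    """Return (f, geometric mean), with f found on a .01-step grid."""
    worst = min(a for _, a in scenarios)
    def gm(f):
        out = 1.0
        for p, a in scenarios:
            out *= (1.0 + a / (worst / -f)) ** p
        return out
    best = max((round(i / 100, 2) for i in range(1, 101)), key=gm)
    return best, gm(best)

f_white, gm_white = optimal_f(white)   # .17, geometric mean about 1.0123
f_black, gm_black = optimal_f(black)   # .31, geometric mean about 1.0453
```

White wins on arithmetic expectation ($3.00 versus $2.90), yet black wins on geometric mean at its optimal f, which is what matters under reinvestment.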
Since, in almost every case, the money risked on an event today will be risked again on a different event in the future, and money made or lost in the past affects what we have available to risk today (i.e., an environment of geometric consequences), we should decide based on the geometric mean to maximize the long-run growth of our money. Even though the scenarios that present themselves tomorrow won't be the same as those of today, by always deciding based on greatest geometric mean we are maximizing our decisions. It is analogous to a dependent trials process such as a game of blackjack. Each hand the probabilities change, and therefore the optimal fraction to bet changes as well. By always betting what is optimal for that hand, however, we maximize our long-run growth. Remember that to maximize long-run growth, we must look at the current contest as one that expands infinitely into the future. In other words, we must look at each individual event as though we were to play it an infinite number of times over if we want to maximize growth over many plays of different contests. As a generalization, whenever the outcome of an event has an effect on the outcome(s) of subsequent event(s), we are best off to maximize for greatest geometric expectation. In the rare cases where the outcome of an event has no effect on subsequent events, we are then best off to maximize for greatest arithmetic expectation. Mathematical expectation (arithmetic) does not take the variance between the outcomes of the different scenarios into account, and therefore can lead to incorrect decisions when reinvestment is considered, or in any environment of geometric consequences. Using this method in scenario planning gets you quantitatively positioned with respect to the possible scenarios, their outcomes, and the likelihood of their occurrence. The method is inherently more conservative than positioning yourself per the greatest arithmetic mathematical expectation.
Equation (3.05) showed that the geometric mean is never greater than the arithmetic mean. Likewise, this method can never have you take a greater position (a greater commitment) than selecting by the greatest arithmetic mathematical expectation would. In the asymptotic sense, the long-run sense, this is not only a superior method of positioning yourself, as it achieves greatest geometric growth, it is also a more conservative one than positioning yourself per the greatest arithmetic mathematical expectation, which would invariably put you to the right of the peak of the f curve. Since reinvestment is almost always a fact of life (except on the day before you retire¹) - that is, you reuse the money that you are using today - we must make today's decision under the assumption that the same decision will present itself a thousand times over in order to maximize the results of our decision. We must make our decisions and position ourselves in order to maximize geometric expectation. Further, since the outcomes of most events do in fact have an effect on the outcomes of subsequent events, we should make our decisions and position ourselves based on maximum geometric expectation. This tends to lead to decisions and positions that are not always apparently obvious.

OPTIMAL F ON BINNED DATA

Now we come to the case of finding the optimal f and its by-products on binned data. This approach is also something of a hybrid between the parametric and the empirical techniques. Essentially, the process is almost identical to the process of finding the optimal f on different scenarios, only rather than different payoffs for each bin (scenario), we use the midpoint of each bin. Therefore, for each bin we have an associated probability figured as the total number of elements (trades) in that bin divided by the total number of elements (trades) in all the bins. Further, for each bin we have an associated result of an element ending up in that bin.
The associated results are calculated as the midpoint of each bin. For example, suppose we have 3 bins of 10 trades. The first bin we will define as those trades where the P&L's were -$1,000 to -$100. Say there are 2 elements in this bin. The next bin, we say, is for those trades which are -$100 to $100. This bin has 5 trades in it. Lastly, the third bin has 3 trades in it and is for those trades that have P&L's of $100 to $1,000.

Bin Start   Bin End   Trades   Associated Probability   Associated Result
-1,000      -100      2        .2                       -550
-100        100       5        .5                       0
100         1,000     3        .3                       550

Now it is simply a matter of solving for Equation (4.16), where each bin represents a different scenario. Thus, for the case of our 3-bin example here, we find that our optimal f is at .2, or 1 contract for every $2,750 in equity (our worst-case loss being the midpoint of the first bin, or (-$1,000+-$100)/2 = -$550). This technique, though valid, is also very rough. To begin with, it assumes that the biggest loss is the midpoint of the worst bin. This is not always the case. Often it is helpful to make a single extra bin to hold the worst-case loss.

¹ There are certain times when you will want to maximize for greatest arithmetic mathematical expectation instead of geometric. Such a case is when an entity is operating in a "constant-contract" kind of way and wants to switch over to a "fixed fractional" mode of operating at some favorable point in the future. This favorable point can be determined as the geometric threshold, where the arithmetic average trade that is used as input is calculated as the arithmetic mathematical expectation (the sum of the outcome of each scenario times its probability of occurrence) divided by the sum of the probabilities of all of the scenarios. Since the sum of the probabilities of all of the scenarios usually equals 1, we can state that the arithmetic average "trade" is equal to the arithmetic mathematical expectation.
As applied to our 3-bin example, suppose we had a trade that was a loss of $1,000. Such a trade would fall into the -$1,000 to -$100 bin, and would be recorded as -$550, the midpoint of the bin. Instead we can bin this same data as follows:

Bin Start   Bin End   Trades   Associated Probability   Associated Result
-1,000      -1,000    1        .1                       -1,000
-999        -100      1        .1                       -550
-100        100       5        .5                       0
100         1,000     3        .3                       550

Now, the optimal f is .04, or 1 contract for every $25,000 in equity. Are you beginning to see how rough this technique is? So, although this technique will give us the optimal f for binned data, we can see that the loss of information involved in binning the data to begin with can make our results so inaccurate as to be useless. If we had more data points and more bins to start with, the technique would not be rough at all. In fact, if we had infinite data and an infinite number of bins, the technique would be exact. (Another way in which this method could be exact is if the data in each of the bins equaled the midpoints of their respective bins exactly.) The other problem with this technique is that the average element in a bin is not necessarily the midpoint of the bin. In fact, the average of the elements in a bin will tend to be closer to the mode of the entire distribution than the midpoint of the bin is. Hence, the dispersion tends to be greater with this technique than is really the case. There are ways to correct for this, but these corrections themselves can often be incorrect, depending upon the shape of the distribution. Again, this problem would be alleviated and the results would be exact if we had an infinite number of elements (trades) and an infinite number of bins. If you happen to have a large enough number of trades and a large enough number of bins, you can use this technique with a fair degree of accuracy if you so desire.
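Treating each bin midpoint as a scenario outcome, the two binnings above can be run through the same .01-step grid search used earlier in the chapter. A minimal sketch (variable names are ours):

```python
# Optimal f from binned trade data, treating each bin midpoint as a
# scenario outcome. Both binnings from the text are shown: the rough
# 3-bin version and the 4-bin version with a separate worst-case bin.
three_bins = [(0.2, -550), (0.5, 0), (0.3, 550)]
four_bins = [(0.1, -1_000), (0.1, -550), (0.5, 0), (0.3, 550)]

def optimal_f(bins):
    """Maximize Equation (4.16) over a .01-step grid of f values."""
    worst = min(result for _, result in bins)
    def gm(f):
        out = 1.0
        for p, a in bins:
            out *= (1.0 + a / (worst / -f)) ** p
        return out
    return max((round(i / 100, 2) for i in range(1, 101)), key=gm)

f3 = optimal_f(three_bins)   # .2  -> 1 contract per 550/.2    = $2,750
f4 = optimal_f(four_bins)    # .04 -> 1 contract per 1,000/.04 = $25,000
```

The jump from $2,750 to $25,000 per contract between the two binnings illustrates just how sensitive the answer is to where the worst-case loss is assumed to lie.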
You can do "what if" types of simulations by altering the number of elements in the various bins and get a fair approximation of the effects of such changes.

WHICH IS THE BEST OPTIMAL F?

We have now seen that we can find our optimal f from an empirical procedure as well as from a number of different parametric procedures for both binned and unbinned data. Further, we have seen that we can equalize the data as a means of preprocessing, to find what our optimal f should be if all trades occurred at the present underlying price. At this point you are probably asking for the real optimal f to please stand up. Which optimal f is really optimal? For starters, the straight (nonequalized) empirical optimal f will give you the optimal f on past data. Using the empirical optimal f technique detailed in Chapter 1 and in Portfolio Management Formulas will yield the optimal f that would have realized the greatest geometric growth on a past stream of outcomes. However, we want to discern what the value for this optimal f will be in the future (specifically, over the next trade), considering that we are absent knowledge regarding the outcome of the next trade. We do not know whether it will be a profit, in which case the optimal f would be 1, or a loss, in which case the optimal f would be 0. Rather, we can only express the outcome of the next trade as an estimate of the probability distribution of outcomes for the next trade. That being said, our best estimate for traders employing a mechanical system is most likely to be obtained by using the parametric technique on our adjustable distribution function as detailed in this chapter, on either equalized or nonequalized data. If there is a material difference in using equalized versus nonequalized data, then there is most likely too much data, or not enough data at the present price level. For non-system traders, the scenario planning approach is the easiest to employ accurately.
In my opinion, these techniques will result in the best estimate of the probability distribution of outcomes on the next trade. You now have a good conception of both the empirical and parametric techniques, as well as some hybrid techniques for finding the optimal f. In the next chapter, we consider finding the optimal f (parametrically) when more than one position is running.

Chapter 5 - Introduction to Multiple Simultaneous Positions under the Parametric Approach

Mention has already been made in this text of the idea of using options, either by themselves or in conjunction with a position in the underlying, to improve returns. Buying a long put in conjunction with a long position in the underlying (or simply buying a call in lieu of both), or sometimes even writing (selling short) a call in conjunction with a long position in the underlying, can increase asymptotic geometric growth. This happens as the result of incorporating the options into the position, which then often (but not always) reduces dispersion to a greater degree than it reduces arithmetic average return. Per the fundamental equation of trading, this then results in a greater estimated TWR. Options can be used in a variety of ways, both among themselves and in conjunction with positions in the underlying, to manage risk. In the future, as traders concentrate more and more on risk management, options will very likely play an ever greater role. Portfolio Management Formulas discussed the relationship of optimal f and options.¹ In this chapter we pick up on that discussion and carry it further into an introduction of multiple simultaneous positions, especially with regard to options. This chapter gives us another method for finding the optimal fs for positions that are not entered and exited by using a mechanical system.
The parametric techniques discussed thus far could be utilized by someone not trading by means of a mechanical system, but aside from the scenario planning approach, they still have some rough edges. For example, someone not using a mechanical system who was using the technique described in Chapter 4 would need an estimate of the kurtosis of his or her trades. This may not be too easy to come by (at least, an accurate estimate of this may not be readily available). Therefore, this chapter is for those who are using purely nonmechanical means of entering and exiting their trades. Users of these techniques will not need parameter estimates for the distribution of trades. However, they will need parameter estimates for both the volatility of the underlying instrument and the trader's forecast for the price of the underlying instrument. For a trader not utilizing a mechanical, objective system, these parameters are far easier to come by than parameter estimates for the distribution of trades that have not yet occurred. This discussion of optimal f and its by-products for those traders not utilizing a mechanical, objective system comes at a convenient stage in the book, as it is the perfect entree for multiple simultaneous positions. Does this mean that someone who is using a mechanical means to enter and exit trades cannot engage in multiple simultaneous positions? No. Chapter 6 will show us a method for finding optimal multiple simultaneous positions for traders whether they are using a mechanical system or not. This chapter introduces the concept of multiple simultaneous positions, but the standpoint is that of someone not using a mechanical system, and possibly using options as well as the underlying instruments.

ESTIMATING VOLATILITY

One important parameter a trader wishing to use the following concepts must input is volatility. We discuss two ways to determine volatility. The first is to use the estimate that has been determined by the marketplace.
This is called implied volatility. The option valuation models introduced in this chapter use volatility as one of their inputs to derive the fair theoretical price of an option. Implied volatility is determined by assuming that the market price of an option is equivalent to its fair theoretical price. Solving for the volatility value that yields a fair theoretical price equal to the market price determines the implied volatility. This value for volatility is arrived at by iteration. The second method of estimating volatility is to use what is known as historical volatility, which is determined by the actual price changes in the underlying instrument. Although volatility as input to the options pricing models is an annualized figure, a much shorter period of time, usually 10 to 20 days, is used when determining historical volatility, and the resulting answer is annualized. Here is how to calculate a 20-day annualized historical volatility.

Step 1: Divide tonight's close by the previous market day's close.

Step 2: Take the natural log of the quotient obtained in step 1. Thus, for the March 1991 Japanese yen on the night of 910225 (this is known as YYMMDD format, for February 25, 1991), we take the close of 74.82 and divide it by the 910222 close of 75.52:

74.82/75.52 = .9907309322

We then take the natural log of this answer. Since the natural log of .9907309322 is -.009312258, our answer to step 2 is -.009312258.

Step 3: After 21 days of back data have elapsed, you will have 20 values for step 2. Now you can start running a 20-day moving average of the answers from step 2.

Step 4: You now want to run a 20-day sample variance for the data from step 2.

¹ There were some minor formulative problems with the options material in Portfolio Management Formulas. These have since been resolved, and the corrected formulations are presented here. My apologies for whatever confusion this may have caused.
For a 20-day variance you must first determine the moving average for the last 20 days. This was done in step 3. Then, for each day of the last 20 days, you take the difference between today's moving average and that day's answer to step 2. In other words, for each of the last 20 days you will subtract the moving average from that day's answer to step 2. Now you square this difference (multiply it by itself). In so doing, you convert all negative answers to positives, so that all answers are now positive. Once that is done, you add up all of these positive differences for the last 20 days. Finally, you divide this sum by 19, and the result is your sample variance for the last 20 days.

The following spreadsheet shows how to find the 20-day sample variance for the March 1991 Japanese yen for a single day, 901226 (December 26, 1990):

A        B       C        D        E         F        G            H
Date     Close   LN       20-Day   Col C -   Col E    Sum of Last  Col G
                 Change   Average  (-.0029)  Squared  20 Values    Divided
                                                      of Col F     by 19
901127   77.96
901128   76.91   -0.0136           -0.0107   0.000113
901129   74.93   -0.0261           -0.0232   0.000537
901130   75.37    0.0059            0.0088   0.000076
901203   74.18   -0.0159           -0.0130   0.000169
901204   74.72    0.0073            0.0102   0.000103
901205   74.57   -0.0020            0.0009   0.000000
901206   75.42    0.0113            0.0142   0.000202
901207   76.44    0.0134            0.0163   0.000266
901210   75.54   -0.0118           -0.0089   0.000079
901211   75.37   -0.0023            0.0006   0.000000
901212   75.9     0.0070            0.0099   0.000098
901213   75.57   -0.0044           -0.0015   0.000002
901214   75.08   -0.0065           -0.0036   0.000012
901217   75.11    0.0004            0.0033   0.000010
901218   74.99   -0.0016            0.0013   0.000001
901219   74.52   -0.0063           -0.0034   0.000011
901220   74.06   -0.0062           -0.0033   0.000010
901221   73.91   -0.0020            0.0009   0.000000
901224   73.49   -0.0057           -0.0028   0.000007
901226   73.5     0.0001  -.0029    0.0030   0.000009  .001716     .00009

As you can see, the 20-day sample variance for 901226 is .00009. You need to do this for every day, so that you will have determined the 20-day sample variance for every single day.

Step 5 Once you have determined the 20-day sample variance for every single day, you must convert this into a 20-day sample standard deviation.
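The calculation through step 4, together with the square root and annualization that complete it, can be sketched in Python. This is a minimal sketch using only the standard library; variable names are illustrative, and the closes are those in the spreadsheet above:

```python
import math

# Closes from the spreadsheet (901127 through 901226).
closes = [77.96, 76.91, 74.93, 75.37, 74.18, 74.72, 74.57, 75.42, 76.44,
          75.54, 75.37, 75.9, 75.57, 75.08, 75.11, 74.99, 74.52, 74.06,
          73.91, 73.49, 73.5]

# Steps 1 and 2: natural log of each close divided by the prior close.
ln_changes = [math.log(b / a) for a, b in zip(closes, closes[1:])]

# Step 3: the 20-day moving average of the log changes.
mean = sum(ln_changes) / len(ln_changes)

# Step 4: the 20-day sample variance (sum of squared deviations, divided by 19).
variance = sum((x - mean) ** 2 for x in ln_changes) / (len(ln_changes) - 1)

# The remaining steps: take the square root, then annualize by sqrt(252).
std_dev = math.sqrt(variance)
annualized = std_dev * math.sqrt(252)

print(f"{variance:.5f} {annualized:.4f}")  # 0.00009 0.1509
```

Small differences from the spreadsheet's printed figures come from rounding in the intermediate columns (the spreadsheet carries the variance to only four decimal places before taking the square root).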
This is easily accomplished by taking the square root of the variance for each day. Thus, for 901226, taking the square root of the variance (which was shown to be .00009) gives us a 20-day sample standard deviation of .009486832981.

Step 6 Now we must "annualize" the data. Since we are using daily data, and we'll suppose that there are 252 trading days in the yen per year (approximately), we must multiply the answers from step 5 by the square root of 252, or 15.87450787. Thus, for 901226, the 20-day sample standard deviation is .009486832981, and multiplying by 15.87450787 gives us an answer of .1505988048. This answer is the historical volatility, in this case 15.06%, and can be used as the volatility input to the Black-Scholes option pricing model.

The following spreadsheet shows how to go through the steps to get to this 20-day annualized historical volatility. You will notice that the interim steps in determining variance for a given day, which were detailed on the previous spreadsheet, are not on this one. This was done in order for you to see the whole process. Therefore, bear in mind that the variance column in this spreadsheet is determined for each row exactly as in the previous spreadsheet.

A        B       C        D        E         F       G
DATE     CLOSE   LN       20-Day   20-Day    20-Day  Annualized
                 Change   Average  Variance  SD      (SD*15.8745)
901127   77.96
901128   76.91   -0.0136
901129   74.93   -0.0261
901130   75.37    0.0059
901203   74.18   -0.0159
901204   74.72    0.0073
901205   74.57   -0.0020
901206   75.42    0.0113
901207   76.44    0.0134
901210   75.54   -0.0118
901211   75.37   -0.0023
901212   75.9     0.0070
901213   75.57   -0.0044
901214   75.08   -0.0065
901217   75.11    0.0004
901218   74.99   -0.0016
901219   74.52   -0.0063
901220   74.06   -0.0062
901221   73.91   -0.0020
901224   73.49   -0.0057
901226   73.5     0.0001  -0.0029  0.0001    0.0095  0.1508
901227   73.34   -0.0022  -0.0024  0.0001    0.0092  0.1460
901228   74.07    0.0099  -0.0006  0.0001    0.0077  0.1222
901231   73.84   -0.0031  -0.0010  0.0001    0.0076  0.1206

RUIN, RISK AND REALITY

Recall the following axiom from the Introduction to this text: if you play a game with unlimited liability, you will go broke with a probability that approaches certainty as the length of the game approaches infinity. What constitutes a game with unlimited liability? The answer is a distribution of outcomes where the left tail (the adverse outcomes) is unbounded and goes to minus infinity. Long option positions allow us to bound the adverse tail of the distribution of outcomes.

You may take issue with this axiom. It seems irreconcilable that the risk of ruin can be less than 1 (i.e., ruin is not certain), yet I contend that in trading an instrument with unlimited liability on any given trade, ruin is certain. In other words, my contention here is that if you trade anything other than options and you are looking at trading for an infinite length of time, your real risk of ruin is 1. Ruin is certain under such conditions. This can be reconciled with risk-of-ruin equations in that the equations used for risk of ruin take empirical data as input. That is, the input to risk-of-ruin equations comes from a finite sample of trades. My contention of certain ruin for playing an infinitely long game with unlimited liability on any given trade is derived from a parametric standpoint. The parametric standpoint encompasses the large losing trades, those trades way out on the left tail of the distribution, which have not yet occurred and are therefore not a part of the finite sample used as input into the risk-of-ruin equations.

To picture this, assume for a moment a trading system being performed under constant-contract trading. Each trade is taken with only 1 contract. To plot out where we would expect the equity to be X trades into the future, we simply multiply X by the average trade. Thus, if our system has an average trade of $250, and we want to know where we can expect our equity to be, say, 7 trades into the future, we can determine this as $250*7 = $1,750. Notice that this line of arithmetic mathematical expectation is a straight-line function. Now, on any given trade, a certain amount can be lost, thus dropping us down (temporarily) from this expected line. In this hypothetical situation we have a limit to what we can lose on any given trade. Since our line is always higher than the most we can lose on a given trade, we cannot be ruined on one trade. However, a prolonged losing streak could drop us far enough down from this line that we could not continue to trade, hence we would be "ruined." The probability of this diminishes as more trades elapse, as the line of expectation gets higher and higher. A risk-of-ruin equation can tell us what the probability of ruin is before we start out trading this system.

If we were trading this system on a fixed fractional basis, the line would curve upward, getting steeper and steeper with each elapsed trade. However, the amount we could drop off of this line is always commensurate with how high we are on the line. That is, the probability of ruin does not diminish as more and more trades elapse. In theory, though, the risk of ruin in fixed fractional trading is zero, because we can trade in infinitely divisible units. In real life this is not necessarily so. In real life, the risk of ruin in fixed fractional trading is always a little higher than in the same system under constant-contract trading. In reality, there is no limit on how much you can lose on any given trade. In reality, the equity expectation lines we are talking about can be retraced completely in one trade, regardless of how high they are. Thus, the risk of ruin, if we are to trade for an infinitely long period of time in an instrument with unlimited liability, regardless of whether we are trading on a constant-contract or a fixed fractional basis, is 1. Ruin is certain. The only way to defuse this is to be able to put a cap on the maximum loss. This can be accomplished by trading options where the position is initiated at a debit.2

OPTION PRICING MODELS

Imagine an underlying instrument (it can be a stock, bond, foreign currency, commodity, or anything else) that can trade up or down by 1 tick on the next trade. If, say, we measure where this instrument will be 100 ticks down the road, and if we do this over and over, we will find that the distribution of outcomes is Normal. This, per Galton's board, is as we would expect it to be. If we then figured the price of the option based on this principle, such that you could not make a profit by buying these options, or by selling them short, we would have arrived at the Binomial Option Pricing Model (Binomial Model or Binomial). This is sometimes also called the Cox-Ross-Rubinstein model after those who devised it. Such an option price is based on its expected value (its arithmetic mathematical expectation), since you cannot make a profit by either buying these options repeatedly and holding them to expiration, or selling them repeatedly and holding the position till expiration, losing on some and winning on others but netting out a profit in the end. Thus, the option is said to be fairly priced.

We will not cover the specific mathematics of the Binomial Model. Rather, we will cover the mathematics of the Black-Scholes Stock Option Model and the Black Futures Option Model. You should be aware that, aside from these three models, there are other valid options pricing models, which will not be covered here either, although the concepts discussed in this chapter apply to all options pricing models. Finally, the best reference I know of regarding the mathematics of options pricing models is Option Volatility and Pricing Strategies by Sheldon Natenberg.
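The fair-price-as-expectation idea behind the Binomial can be illustrated with a toy version of the tick model just described. This is a sketch, not the full Cox-Ross-Rubinstein model: an assumed underlying starts at 100, moves up or down 1 tick with equal probability for 100 ticks, and an at-the-money call is priced at its expected terminal intrinsic value:

```python
from math import comb

def fair_call_by_expectation(start=100.0, strike=100.0, n_ticks=100, tick=1.0):
    """Expected intrinsic value at expiration of a call, when the underlying
    makes n_ticks independent +/- tick moves with probability 1/2 each."""
    fair = 0.0
    for k in range(n_ticks + 1):                  # k = number of up ticks
        prob = comb(n_ticks, k) * 0.5 ** n_ticks  # binomial probability
        terminal = start + (2 * k - n_ticks) * tick
        fair += prob * max(terminal - strike, 0.0)
    return fair

print(fair_call_by_expectation())  # about 3.98
```

Because the terminal distribution here is symmetric about the strike, the same routine with intrinsic value max(strike - terminal, 0) returns the identical put value: neither repeated buying nor repeated selling at this price nets a profit, which is the sense in which the option is "fairly priced."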
Natenberg's book covers the mathematics for many of the options pricing models (including the Binomial Model) in great detail. The math for the Black-Scholes Stock Option Model and the Black Futures Option Model, which we are about to discuss, comes from Natenberg. These topics would take an entire text to discuss, more space than we have here. Those readers who want to pursue the concepts of optimal f and options are referred to Natenberg for foundational material regarding options. We must cover the pricing models on a level sufficient to work the optimal f techniques about to be discussed on option prices.

Therefore, we will now discuss the Black-Scholes Stock Option Pricing Model (hereafter, Black-Scholes). This model is named after those who devised it, Fischer Black at the University of Chicago and Myron Scholes at M.I.T., and appeared in the May-June 1973 Journal of Political Economy. Black-Scholes is considered the limiting form of the Binomial Model (hereafter, Binomial). In other words, with the Binomial, you must determine how many up or down ticks you are going to use before you record where the price might end up. The following little diagram shows the idea.

[Diagram: a binomial tree branching out from the initial price.]

Here, you start out at an initial price, from which the price can branch off in 2 directions for the next period. The period after that, there are 4 directions in which the price might end up.

2 We will see later in this chapter that underlying instruments are identical to call options with infinite time till expiration. Therefore, if we are long the underlying instrument, we can assume that our worst-case loss is the full value of the instrument. In many cases, this can be regarded as a loss of such magnitude as to be synonymous with a cataclysmic loss. However, being short the underlying instrument is analogous to being short a call option with infinite time remaining till expiration, and liability is truly unlimited in such a situation.
Ultimately, with the Binomial you must determine in advance how many periods in total you are going to use to figure the fair price of the option. Black-Scholes is considered the limiting form of the Binomial because it assumes an infinite number of periods (in theory). That is, Black-Scholes assumes that this little diagram will keep on branching out and to the right infinitely. If you determine an option's fair price via Black-Scholes, then you will tend toward the same answer with the Binomial as the number of periods used in the Binomial tends toward infinity. (The fact that Black-Scholes is the limiting form of the Binomial would imply that the Binomial Model appeared first. Oddly enough, the Black-Scholes model appeared first.)

The mathematics of Black-Scholes are quite straightforward. The fair value of a call on a stock option is given as:

(5.01) C = U*EXP(-R*T)*N(H)-E*EXP(-R*T)*N(H-V*T^(1/2))

and for a put:

(5.02) P = -U*EXP(-R*T)*N(-H)+E*EXP(-R*T)*N(V*T^(1/2)-H)

where
C = The fair value of a call option.
P = The fair value of a put option.
U = The price of the underlying instrument.
E = The exercise price of the option.
T = Decimal fraction of the year left to expiration.3
V = The annual volatility in percent.
R = The risk-free rate.
ln() = The natural logarithm function.
N() = The cumulative Normal density function, as given in Equation (3.21).

(5.03) H = ln(U/(E*EXP(-R*T)))/(V*T^(1/2))+(V*T^(1/2))/2

For stocks that pay dividends, you must adjust the variable U to reflect the current price of the underlying minus the present value of the expected dividends:

(5.04) U = U-∑[i = 1,N] Di*EXP(-R*Wi)

where
Di = The ith expected dividend payout.
Wi = The time (decimal fraction of a year) to the ith payout.

One of the very nice things about the Black-Scholes Model is the exact calculation of the delta, the first derivative of the price of the option.
This is the option's instantaneous rate of change with respect to a change in U, the price of the underlying:

(5.05) Call Delta = N(H)
(5.06) Put Delta = -N(-H)

These deltas become quite important in Chapter 7, when we discuss portfolio insurance. Black went on to make the model applicable to futures options, which have a stock-type settlement.4 The Black futures option pricing model is the same as the Black-Scholes stock option pricing model except for the variable H:

(5.07) H = ln(U/E)/(V*T^(1/2))+(V*T^(1/2))/2

The only other difference in the futures model is the deltas, which are:

(5.08) Call Delta = EXP(-R*T)*N(H)
(5.09) Put Delta = -EXP(-R*T)*N(-H)

For example, suppose we are looking at a futures option that has a strike price of 600, a current market price of 575 on the underlying, and an annual volatility of 25%. We will use the commodity options model, a 252-day year, and a risk-free rate of 0 for simplicity. Further, we will assume that the expiration day of the options is September 15, 1991 (910915), and that the day on which we are observing these options is August 1, 1991 (910801). To begin with, we will calculate the variable T, the decimal fraction of the year left to expiration. First, we must convert both 910801 and 910915 to their Julian day equivalents. To do this, we must use the following algorithm.

1. Set variable 1 equal to the year (1991), variable 2 equal to the month (8), and variable 3 equal to the day (1).
2. If variable 2 is less than 3 (i.e., the month is January or February), then set variable 1 equal to the year minus 1 and set variable 2 equal to the month plus 13.
3. If variable 2 is greater than 2 (i.e., the month is March or after), then set variable 2 equal to the month plus 1.
4. Set variable 4 equal to variable 3 plus 1720995 plus the integer of the quantity 365.25 times variable 1 plus the integer of the quantity 30.6001 times variable 2. Mathematically:

V4 = V3+1720995+INT(365.25*V1)+INT(30.6001*V2)

5. Set variable 5 equal to the integer of the quantity .01 times variable 1. Mathematically:

V5 = INT(.01*V1)

6. Now obtain the Julian date as variable 4 plus 2 minus variable 5 plus the integer of the quantity .25 times variable 5. Mathematically:

JULIAN DATE = V4+2-V5+INT(.25*V5)

So to convert our date of 910801 to Julian:

Step 1 V1 = 1991, V2 = 8, V3 = 1

Step 2 Since it is later in the year than January or February, this step does not apply.

Step 3 Since it is later in the year than January or February, this step does apply. Therefore V2 = 8+1 = 9.

Step 4 Now we set V4 as:

V4 = V3+1720995+INT(365.25*V1)+INT(30.6001*V2)
   = 1+1720995+INT(365.25*1991)+INT(30.6001*9)
   = 1+1720995+INT(727212.75)+INT(275.4009)
   = 1+1720995+727212+275
   = 2448483

Step 5 Now we set V5 as:

V5 = INT(.01*V1)
   = INT(.01*1991)
   = INT(19.91)
   = 19

Step 6 Now we obtain the Julian date as:

JULIAN DATE = V4+2-V5+INT(.25*V5)
            = 2448483+2-19+INT(.25*19)
            = 2448483+2-19+INT(4.75)
            = 2448483+2-19+4
            = 2448470

Thus, we can state that the Julian date for August 1, 1991, is 2448470.

3 Most often, only market days are used in calculating the fraction of a year in options. The number of weekdays in a year (Gregorian) can be determined as 365.2425/7*5 = 260.8875 weekdays on average per year. Due to holidays, the actual number of trading days in a year is usually somewhere between 250 and 252. Therefore, if we are using a 252-trading-day year, and there are 50 trading days left to expiration, the decimal fraction of the year left to expiration, T, would be 50/252 = .1984126984.

4 Futures-type settlement requires no initial cash payment, although the required margin must be posted. Additionally, all profits and losses are realized immediately, even if the position is not liquidated. These points are in direct contrast to stock-type settlement. In stock-type settlement, purchase requires full and immediate payment, and profits (or losses) are not realized until the position is liquidated.
Now if we convert the expiration date of September 15, 1991, to Julian, we obtain a Julian date of 2448515. If we were using a 365-day year (or 365.2425, the Gregorian calendar length), we could find the time left until expiration by simply taking the difference between these two Julian dates, subtracting 1, and dividing the result by 365 (or 365.2425). However, we are not using a 365-day year; rather, we are using a 252-day year, as we are only counting days when the exchange is open (weekdays less holidays). Here is how we account for this. We must examine each day between the two Julian dates to see if it is a weekend. We can determine what day of the week a given Julian date is by adding 1 to the Julian date, dividing by 7, and taking the remainder (the modulus operation). The remainder will be a value of 0 through 6, corresponding to Sunday through Saturday. Thus, for August 1, 1991, where the Julian date is 2448470:

Day of week = (2448470+1) % 7
            = 2448471 % 7
            = ((2448471/7)-INT(2448471/7))*7
            = (349781.5714-349781)*7
            = .5714*7
            = 4

Since 4 corresponds to Thursday, we can state that August 1, 1991, is a Thursday. We now proceed through each Julian date up to and including the expiration date. We count up all of the weekdays between those two dates and find that there are 32 weekdays between (and including) August 1, 1991 and September 15, 1991. From our final answer we must subtract 1, as we count day one when August 2, 1991 arrives. Therefore, we have 31 weekdays between 910801 and 910915. Now we must subtract holidays, when the exchange is closed. Monday, September 2, 1991, is Labor Day in the United States. Even though we may not live in the United States, the exchange where this particular option is traded, being in the United States, will be closed on September 2, and therefore we must subtract 1 from our count of days. Therefore, we determine that we have 30 "tradeable" days before expiration.
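The Julian-date algorithm, the weekday test, and the weekday count can be sketched in Python as follows (function names are illustrative; holidays such as Labor Day still have to be subtracted by hand):

```python
def julian(year, month, day):
    """Julian day number per the six-step algorithm in the text."""
    v1, v2 = year, month
    if month < 3:
        v1, v2 = year - 1, month + 13
    else:
        v2 = month + 1
    v4 = day + 1720995 + int(365.25 * v1) + int(30.6001 * v2)
    v5 = int(.01 * v1)
    return v4 + 2 - v5 + int(.25 * v5)

def day_of_week(jd):
    """0 = Sunday through 6 = Saturday."""
    return (jd + 1) % 7

def weekdays_between(jd1, jd2):
    """Weekdays after jd1, up to and including jd2."""
    return sum(1 for jd in range(jd1 + 1, jd2 + 1)
               if day_of_week(jd) not in (0, 6))

jd_now, jd_exp = julian(1991, 8, 1), julian(1991, 9, 15)
print(jd_now, jd_exp, day_of_week(jd_now))   # 2448470 2448515 4
weekdays = weekdays_between(jd_now, jd_exp)  # 31; less Labor Day gives 30
T = (weekdays - 1) / 252                     # decimal fraction of the year
```

The count of 31 already excludes August 1 itself (day one is counted when August 2 arrives), matching the subtraction of 1 described above.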
Now we divide the number of tradeable days before expiration by the length of what we have determined the year to be. Since we are using a 252-day year, we divide 30 by 252 to obtain .119047619. This is the decimal fraction of the year left to expiration, the variable T.

Next, we must determine the variable H for the pricing model. Since we are using the futures model, we must calculate H as in Equation (5.07):

(5.07) H = ln(U/E)/(V*T^(1/2))+(V*T^(1/2))/2
         = ln(575/600)/(.25*.119047619^(1/2))+(.25*.119047619^(1/2))/2
         = ln(575/600)/(.25*.119047619^.5)+(.25*.119047619^.5)/2
         = ln(575/600)/(.25*.3450327796)+(.25*.3450327796)/2
         = ln(575/600)/.0862581949+.0862581949/2
         = ln(.9583333)/.0862581949+.0862581949/2
         = -.04255961442/.0862581949+.0862581949/2
         = -.4933979255+.0862581949/2
         = -.4933979255+.04312909745
         = -.4502688281

In Equation (5.01) you will notice that we need to use Equation (3.21) on two occasions. The first is where we set the variable Z in Equation (3.21) to the variable H as we have just calculated it; the second is where we set it to the expression H-V*T^(1/2). We know that V*T^(1/2) is equal to .0862581949 from the last expression, so H-V*T^(1/2) equals -.4502688281-.0862581949 = -.536527023. We therefore must use Equation (3.21) with the input variable Z as -.4502688281 and as -.536527023. From Equation (3.21), this yields .3262583 and .2957971 respectively (Equation (3.21) was demonstrated in Chapter 3, so we need not repeat it here). Notice, however, that we have now obtained the delta, the instantaneous rate of change of the price of the option with respect to the price of the underlying. The delta is N(H), or the variable H pumped through as Z in Equation (3.21). Our delta for this option is therefore .3262583. We now have all of the inputs required to determine the theoretical option price.
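These quantities can be checked with a short Python sketch. The cumulative Normal of Equation (3.21) is replaced here by the exact math.erf form (assumed equivalent; (3.21) is a polynomial approximation of the same function):

```python
import math

def norm_cdf(z):
    # Cumulative Normal distribution via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Inputs from the example: U = 575, E = 600, V = .25, R = 0, T = 30/252.
u, e, v, r, t = 575.0, 600.0, .25, .0, 30 / 252

h = math.log(u / e) / (v * t ** .5) + (v * t ** .5) / 2   # Equation (5.07)
n_h = norm_cdf(h)                        # N(H), which is also the delta
n_h2 = norm_cdf(h - v * t ** .5)         # N(H - V*T^(1/2))

# Call price per Equation (5.01); the EXP(-R*T) terms equal 1 since R = 0.
c = u * math.exp(-r * t) * n_h - e * math.exp(-r * t) * n_h2

print(round(h, 6), round(n_h, 6), round(c, 2))  # -0.450269 0.326258 10.12
```

The value of c agrees with the hand calculation of Equation (5.01) carried out in the text.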
Plugging our values into Equation (5.01):

(5.01) C = U*EXP(-R*T)*N(H)-E*EXP(-R*T)*N(H-V*T^(1/2))
         = 575*EXP(-0*.119047619)*N(-.4502688281)-600*EXP(-0*.119047619)*N(-.4502688281-.25*.119047619^(1/2))
         = 575*EXP(-0*.119047619)*.3262583-600*EXP(-0*.119047619)*.2957971
         = 575*EXP(0)*.3262583-600*EXP(0)*.2957971
         = 575*1*.3262583-600*1*.2957971
         = 575*.3262583-600*.2957971
         = 187.5985225-177.47826
         = 10.1202625

Thus, the fair price of the 600 call option that expires September 15, 1991, with the underlying at 575 on August 1, 1991, with volatility at 25%, and using a 252-day year and the Black futures model with R = 0, is 10.1202625.

It is interesting to note the relationship between options and their underlying instruments by using these pricing models. We know that 0 is the limiting downside price of an option, but on the upside the limiting price is the price of the underlying instrument itself. The models demonstrate this in that the theoretical fair price of an option approaches its upside limiting value, the value of the underlying, U, if any or all of the three variables T, R, or V are increased. This would mean, for instance, that if we increased T, the time till expiration of the option, to an infinitely high amount, then the price of the option would equal that of the underlying instrument. In this regard, we can state that all underlying instruments are really the same as options, only with infinite T. Thus, what follows in this discussion is not only true of options; it can likewise be said to be true of the underlying, as though it were an option with infinite T.

Both the Black-Scholes stock option model and the Black futures model are based on certain assumptions. The developers of these models were aware of these assumptions, and so should you be. Nonetheless, despite whatever shortcomings are involved in the assumptions, these models are still very accurate, and option prices will tend to these models' values.
The first of these assumptions is that the option cannot be exercised until the exercise date. This European style of options settlement tends to underprice certain options as compared to the American style, where the options can be exercised at any time.

Some of the other assumptions in these models are that we actually know the future volatility of the underlying instrument and that it will remain constant throughout the life of the option. Not only will this not happen (i.e., the volatility will change), but the distribution of volatility changes is lognormal, an issue that the models do not address.5

Another thing the models assume is that the risk-free interest rate will remain constant throughout the life of an option. This also is unlikely. Furthermore, short-term rates appear to be lognormally distributed. Since the higher the short-term rates are, the higher the resultant option prices will be, this assumption regarding short-term rates being constant may further undervalue the fair price of the option (the price returned by the models) relative to its expected value (its true arithmetic mathematical expectation).

Finally, another point (perhaps the most important point) that might undervalue the model-generated fair value of the option relative to the true expected value regards the assumption that the logs of price changes are Normally distributed. If, rather than having a time frame in which they expired, options had a given number of up and down ticks before they expired, and could only change by 1 tick at a time, and if each tick were statistically independent of the last tick, we could rightly make this assumption of Normality. The logs of price changes, however, do not have these clean characteristics.

5 The fact that the distribution of volatility changes is lognormal is not a very widely considered fact.
In light of how extremely sensitive option prices are to the volatility of the underlying instrument, this certainly makes the prospect of buying a long option (put or call) more appealing in terms of mathematical expectation.

All of these assumptions made by the pricing models aside, the theoretical fair prices returned by the models are monitored by professionals in the marketplace. Even though many are using models that differ from those detailed here, most models return similar theoretical fair prices. When actual prices diverge from the models to the extent that an arbitrageur has a profit opportunity, they will begin to converge again to what the models claim is the theoretical fair price. This fact, that we can predict with a fair degree of accuracy what the price of an option will be given the various inputs (time to expiration, price of the underlying instrument, etc.), allows us to perform the exercises regarding optimal f and its by-products on options and mixed positions. The reader should bear in mind that all of these techniques are based on the assumptions just noted about the options pricing models themselves.

A EUROPEAN OPTIONS PRICING MODEL FOR ALL DISTRIBUTIONS

We can create our own pricing model devoid of any assumptions regarding the distribution of price changes. First, the term "theoretically fair" needs to be defined when referring to an options price. This definition is given as the arithmetic mathematical expectation of the option at expiration, expressed in terms of its present worth, assuming no directional bias in the underlying. This is our options pricing model in literal terms. The frame of reference employed here is "What is this option worth to me today as an options buyer?"

In mathematical terms, recall that the mathematical expectation (arithmetic) is defined as Equation (1.03):

(1.03) Mathematical expectation = ∑[i = 1,N] (pi*ai)

where
p = Probability of winning or losing the ith trial.
a = Amount won or lost on the ith trial.
N = Number of possible outcomes (trials).

The mathematical expectation is computed by multiplying each possible gain or loss by the probability of that gain or loss and then summing these products. When the sum of the probabilities, the pi terms, is greater than 1, Equation (1.03) must then be divided by the sum of the probabilities, the pi terms.

In a nutshell, our options pricing model will take all those discrete price increments that have a probability greater than or equal to .001 of occurring at expiration and determine an arithmetic mathematical expectation on them:

(5.10) C = ∑(pi*ai)/∑pi

where
C = The theoretically fair value of an option, or an arithmetic mathematical expectation.
pi = The probability of being at price i on expiration.
ai = The intrinsic value associated with the underlying instrument being at price i.

In using this model, we first begin at the current price and work up 1 tick at a time, summing the values in both the numerator and denominator until the price, i, has a probability, pi, less than .001 (you can use a value less than this, but I find .001 to be a good value to use; it implies finding a fair value assuming you are going to have 1,000 option trades in your lifetime). Then, starting at the value 1 tick below the current price, we work down 1 tick at a time, summing values for both the numerator and denominator until the price, i, results in a probability, pi, less than .001. Note that the probabilities we are using are 1-tailed probabilities, where if a probability is greater than .5, we subtract that probability from 1.

Of interest to note is that the pi terms, the probabilities, can be discerned from whatever distribution the user feels is applicable, not just the Normal. That is, the user can derive a theoretically fair value of an option for any distributional form!
Thus, this model frees us to use the stable Paretian, Student's t, Poisson, our own adjustable distribution, or any other distribution we feel price conforms to in determining fair options values. We still need to amend the model to express the arithmetic mathematical expectation at expiration as a present value:

(5.11) C = (∑(pi*ai)*EXP(-R*T))/∑pi

where
C = The theoretically fair value of an option, or the present value of the arithmetic mathematical expectation at time T.
pi = The probability of being at price i on expiration.
ai = The intrinsic value associated with the underlying instrument being at price i.
R = The current risk-free rate.
T = Decimal fraction of a year remaining till expiration.

Equation (5.11) is the options pricing model for all distributions, returning the present worth of the arithmetic mathematical expectation of the option at expiration.6 Note that the model can be used for put values as well, the only difference being in discerning the intrinsic values, the ai terms, at each price increment, i. When dividends are involved, Equation (5.04) should be employed to adjust the current price of the underlying. This adjusted current price is then used in determining the probabilities associated with being at a given price, i, at expiration.

An example of using Equation (5.11) is as follows. Suppose we determine that the Student's t distribution is a good model of the distribution of the log of price changes7 for a hypothetical commodity that we are considering buying options on. Now we use the K-S test to determine the best-fitting parameter value for the degrees of freedom parameter of the Student's t distribution. We will assume that 5 degrees of freedom provides the best fit to the actual data per the K-S test. We will assume that we are discerning the fair price for a call option on 911104 that expires 911220, where the price of the underlying is 100 and the strike price is 100.
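Before working through this example's numbers, the summation procedure behind Equations (5.10) and (5.11) can be sketched in Python. As a stand-in for the Student's t snippet of Appendix B, this sketch uses the Normal distribution for the 1-tailed probabilities; the .10 tick, the 20% volatility, the 5% rate, and the .001 cutoff are the example's parameters, and T is taken here as an assumed 34 weekdays over a 260.8875-day year:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def one_tailed_prob(i, u, v, t):
    """Equation (5.12) standardization, then the 1-tailed probability of
    deviating from the current price (bounded between 0 and .5)."""
    z = math.log(i / u) / (v * t ** .5)
    cf = norm_cdf(z)
    return min(cf, 1.0 - cf)

def fair_value(u, e, v, t, r, tick=.10, cutoff=.001, call=True):
    """Equation (5.11): present value of the arithmetic mathematical
    expectation, summing 1 tick at a time until pi drops below cutoff."""
    num = den = 0.0
    for direction in (tick, -tick):
        i = u if direction > 0 else u - tick   # up from U, down from U - 1 tick
        while i > 0:
            p = one_tailed_prob(i, u, v, t)
            if p < cutoff:
                break
            intrinsic = max(i - e, 0.0) if call else max(e - i, 0.0)
            num += p * intrinsic
            den += p
            i += direction
    return num * math.exp(-r * t) / den

u = e = 100.0
c = fair_value(u, e, .20, 34 / 260.8875, .05, call=True)
p = fair_value(u, e, .20, 34 / 260.8875, .05, call=False)
```

Even with the Normal substituted for the Student's t, the call comes out worth more than the put, for the same reason discussed below: equal log deviations correspond to larger price moves on the upside than on the downside.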
We will assume an annualized volatility of 20%, a risk-free rate of 5%, and a 260.8875-day year (the average number of weekdays in a year; we therefore ignore holidays that fall on a weekday, for example, Thanksgiving in the United States). Further, we will assume that the minimum tick this hypothetical commodity can trade in is .10.

If we perform Equations (5.01) and (5.02) using (5.07) for the variable H, we obtain fair values of 2.861 for both the 100 call and the 100 put. These options prices are thus the fair values according to the Black commodity options model, which assumes a lognormal distribution of prices. If, however, we use Equation (5.11), we must figure the pi terms. These we obtain from the snippet of BASIC code in Appendix B. Note that the snippet of code requires a standard value, given the variable name Z, and the degrees of freedom, given the variable name DEGFDM. Before we call this snippet of code, we can convert the price, i, to a standard value by the following formula:

(5.12) Z = ln(i/current underlying price)/(V*T^.5)

where
i = The price associated with the current status of the summation process.
V = The annualized volatility as a standard deviation.
T = Decimal fraction of a year remaining till expiration.
ln() = The natural logarithm function.

Equation (5.12) can be expressed in BASIC as:

Z = LOG(I/U)/(V*T^.5)

The variable U represents the current underlying price (adjusted for dividends, if necessary).

6 Notice that Equation (5.11) does not differentiate stock from commodity options. Conventional thinking has it that, embedded in the price of a stock option, is the interest on a pure discount bond that matures at expiration with a face value equal to the strike price. Commodity options, it is believed, see an interest rate of 0 on this, so it is as if they do not have it. From our frame of reference, that is, "What is this option worth to me today as an options buyer?", we disregard this.
If both a stock and a commodity have exactly the same expected distribution of outcomes, their arithmetic mathematical expectations are the same, and the rational investor would opt for buying the less expensive. This situation is analogous to someone considering buying one of two identical houses where one is priced higher because the seller has paid a higher interest rate on the mortgage. 7 The Student's t distribution is generally a poor model of the distribution of price changes. However, since the only other parameter, aside from volatility as an annualized standard deviation, which needs to be considered in using the Student's t distribution is the degrees of freedom, and since the probabilities associated with the Student's t distribution are easily ascertained by the snippet of BASIC code in Appendix B, we will use the Student's t distribution here for the sake of simplicity and demonstration. Lastly, once we have obtained a probability from the Student's t distribution BASIC code snippet in Appendix B, the probability returned is a 2-tailed one. We need to make it a 1-tailed probability and express it as a probability of deviating from the current price (i.e., bound it between 0 and .5). These two procedures are performed by the following two lines of BASIC:

CF = 1-((1-CF)/2)
IF CF > .5 THEN CF = 1-CF

Doing this with the option parameters we have specified, and 5 degrees of freedom, yields a fair call option value of 3.842 and a fair put value of 2.562. These values differ considerably from those of the more conventional models for a number of reasons. First, the fatter tails of the Student's t distribution with 5 degrees of freedom make for a higher fair call value. Generally, the thicker the tails of the distribution used, the greater the call value returned. Had we used 4 degrees of freedom, we would have obtained an even greater fair call value.
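The standardization of Equation (5.12) and the two BASIC lines above can be sketched together in Python (a sketch with assumed inputs; the Appendix B t-distribution routine itself is not reproduced here):

```python
import math

def standardize(i, u, v, t):
    """Equation (5.12): convert price i to a standard value Z, given current
    underlying price u, annualized volatility v, and t, the decimal fraction
    of a year remaining till expiration."""
    return math.log(i / u) / (v * t ** 0.5)

def one_tailed(cf):
    """Convert the 2-tailed probability CF returned by the Appendix B snippet
    into a 1-tailed probability of deviating from the current price, bounded
    between 0 and .5 -- mirroring the two BASIC lines:
        CF = 1-((1-CF)/2)
        IF CF > .5 THEN CF = 1-CF"""
    cf = 1 - ((1 - cf) / 2)
    if cf > 0.5:
        cf = 1 - cf
    return cf
```

For example, a 2-tailed probability of .80 becomes a 1-tailed probability of .10, i.e., a 10% chance of deviating at least that far in the given direction.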
Second, the put value and the call value differ substantially, whereas with the more conventional model the put and call values were equivalent. This difference requires some discussion. The fair value of a put can be determined from a call option with the same strike and expiration (or vice versa) by the put-call parity formula:

(5.13) P = C+(E-U)*EXP(-R*T)

where P = The fair put value.
C = The fair call value.
E = The strike price.
U = The current price of the underlying instrument.
R = The risk-free rate.
T = Decimal fraction of a year remaining till expiration.

When Equation (5.13) is not true, an arbitrage opportunity exists. From (5.13) we can see that the conventional model's prices, being equivalent, would appear to be correct, since the expression E-U is 0 and therefore P = C. However, let's consider the variable U in Equation (5.13) as the expected price of the underlying instrument at expiration. The expected value of the underlying can be discerned by (5.10), except the ai term simply equals i. For our example with DEGFDM = 5, the expected value for the underlying instrument = 101.288467. This happens because the least a commodity can trade for in this model is 0, whereas there is no upside limit. A move from a price of 100 to a price of 50 is as likely as a move from a price of 100 to 200. Hence, call values will be priced greater than put values. It comes as no surprise, then, that the expected value of the underlying instrument at expiration should be greater than its current value. This seems to be consistent with our experience with inflation.
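Equation (5.13) can be checked directly (a sketch; the 33-day remaining term and the prices are the ones used in the text's example):

```python
import math

def put_from_call(c, e, u, r, t):
    """Put-call parity, Equation (5.13): P = C + (E - U)*EXP(-R*T)."""
    return c + (e - u) * math.exp(-r * t)

t = 33 / 260.8875
# With the strike at the current price, E-U = 0 and P = C:
p_conventional = put_from_call(2.861, 100.0, 100.0, 0.05, t)
# Using the expected value 101.288467 in place of U recovers the
# Student's t put value of roughly 2.5617:
p_t = put_from_call(3.842, 100.0, 101.288467, 0.05, t)
```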
When we replace the U in Equation (5.13), the current price of the underlying instrument, with its expected value at expiration, we can derive our fair put value from (5.13) as:

P = 3.842+(100-101.288467)*EXP(-.05*33/260.8875)
= 3.842+-1.288467*EXP(-.006324565186)
= 3.842+-1.288467*.9936954
= 3.842+-1.280343731
= 2.561656269

This value is consistent with the put value discerned by using Equation (5.11) for the current value of the arithmetic mathematical expectation of the put at expiration. There's only one problem. If both the put and call options for the same strike and expiration are fairly priced per (5.11), then an arbitrage opportunity exists. In the real world the U in (5.13) is the current price of the underlying, not the expected value of the underlying at expiration. In other words, if the current price is 100 and the December 100 call is 3.842 and the 100 put is 2.561656269, then an arbitrage opportunity exists per (5.13). The absence of put-call parity would suggest, given our newly derived options prices, that rather than buy the call for 3.842 we instead obtain an equivalent position by buying the put for 2.562 and buying the underlying. The problem is resolved if we first calculate the expected value of the underlying, discerned by Equation (5.10) except that the ai term simply equals i (for our example with DEGFDM = 5, the expected value for the underlying instrument equals 101.288467), and subtract the current price of the underlying from this value. This gives us 101.288467-100 = 1.288467. Now if we subtract this value from each ai term, each intrinsic value in (5.11) (setting any resultant values less than 0 to 0), then Equation (5.11) will yield theoretical values that are consistent with (5.13). This procedure has the effect of forcing the arithmetic mathematical expectation on the underlying to equal the current price of the underlying.
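The adjustment just described can be sketched as follows (the four-point distribution and the helper name are hypothetical, chosen so the shift is easy to see):

```python
def shifted_intrinsics(prices, probs, current, strike, kind="call"):
    """Subtract (expected underlying value - current price) from each
    intrinsic value and floor the result at 0, forcing the arithmetic
    mathematical expectation on the underlying to equal its current price."""
    exp_u = sum(p * i for p, i in zip(probs, prices)) / sum(probs)
    shift = exp_u - current
    out = []
    for i in prices:
        a = max(i - strike, 0.0) if kind == "call" else max(strike - i, 0.0)
        out.append(max(a - shift, 0.0))
    return out

# An upwardly skewed distribution: expectation 105 vs. current price 100,
# so each call intrinsic value is reduced by 5 (and floored at 0)
adj = shifted_intrinsics([90.0, 100.0, 110.0, 120.0],
                         [0.25, 0.25, 0.25, 0.25],
                         current=100.0, strike=100.0)
```

Feeding these adjusted intrinsic values back into Equation (5.11) is what restores consistency with put-call parity.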
In the case of our example using the Student's t distribution with 5 degrees of freedom, we obtain a value for both the 100 put and call of 3.218. Thus our answer is consistent with Equation (5.13), and an arbitrage opportunity no longer exists between these two options and their underlying instrument. Whenever we are using a distribution that results in an arithmetic mathematical expectation at expiration on the underlying which differs from the current value of the underlying, we must subtract the difference (expectation-current value) from the intrinsic value at expiration of the options and floor those resultant intrinsic values less than 0 to 0. In so doing, Equation (5.11) will give us, for any distributional form we care to use, the present worth of the arithmetic mathematical expectation of the option at expiration, given an arithmetic mathematical expectation on the underlying instrument equivalent to its current price (i.e., assuming no directional bias in the underlying instrument).

THE SINGLE LONG OPTION AND OPTIMAL F

Let us assume here that we are speaking about the simple outright purchase of a call option. Rather than taking a full history of option trades that a given market system produced and deriving our optimal f therefrom, we are going to take a look at all the possible outcomes of what this particular option might do throughout the term that we hold it. We are going to weight each outcome by the probability of its occurrence. This probability-weighted outcome will be derived as an HPR relative to the purchase price of the option. Finally, we will look at the full spectrum of outcomes (i.e., the geometric mean) for each value for f until we obtain the optimal value. In almost all of the good options pricing models the input variables that have the most effect on the theoretical options price are (a) the time remaining till expiration, (b) the strike price, (c) the underlying price, and (d) the volatility.
Different models have different inputs, but basically these four have the greatest bearing on the theoretical value returned. Of the four basic inputs, two, the time remaining till expiration and the underlying price, are certain to change. One, volatility, may change, yet rarely to the extent of the underlying price or the time till expiration, and certainly not as definitely as these two. One, the strike price, is certain not to change. Therefore, we must look at the theoretical price returned by our model for all of these different values of different underlying prices and different times left till expiration. The HPR for an option is thus a function not only of the price of the underlying, but also of how much time is left on the option:

(5.14) HPR(T,U) = (1+f*(Z(T,U-Y)/S-1))^P(T,U)

where HPR(T,U) = The HPR for a given test value for T and U.
f = The tested value for f.
S = The current price of the option.
Z(T,U-Y) = The theoretical option price if the underlying were at price U-Y with time T remaining till expiration. This can be discerned by whatever pricing model the user deems appropriate.
P(T,U) = The 1-tailed probability of the underlying being at price U by time T remaining till expiration. This can be discerned by whatever distributional form the user deems appropriate.
Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given by Equation (5.10), and the current price.

This formula will give us the HPR (which is weighted by the probability of the outcome) of one possible outcome for this option: that the underlying instrument will be at price U by time T. In the preceding equation the variable T represents the decimal part of the year remaining until option expiration. Therefore, at expiration T = 0. If 1 year is left to expiration, T = 1. The variable Z(T,U-Y) is found via whatever option model you are using.
The only other variable you need to calculate is P(T,U), the probability of the underlying being at price U with time T left in the life of the option. If we are using the Black-Scholes model or the Black commodity model, we can calculate P(T,U) as:

if U <= Q:
(5.15a) P(T,U) = N((ln(U/Q))/(V*(L^(1/2))))

if U > Q:
(5.15b) P(T,U) = 1-N((ln(U/Q))/(V*(L^(1/2))))

where U = The price in question.
Q = Current price of the underlying instrument.
V = The annual volatility of the underlying instrument.
L = Decimal fraction of the year elapsed since the option was put on.
N() = The Cumulative Normal Distribution Function. This is given as Equation (3.21).
ln() = The natural logarithm function.

Having performed these equations, we can derive a probability-weighted HPR for a particular outcome in the option. A broad range of outcomes is possible, but fortunately, these outcomes are not continuous. Take the time remaining till expiration. This is not a continuous function. Rather, a discrete number of days are left till expiration. The same is true for the price of the underlying. If a stock is at a price of, say, 35 and we want to know how many possible price outcomes there are between the possible prices of 30 and 40, and if the stock is traded in eighths, then we know that there are 81 possible price outcomes between 30 and 40 inclusive. What we must now do is calculate all of the probability-weighted HPRs on the option for the expiration date or for some other mandated exit date prior to the expiration date. Say we know we will be out of the option no later than a week from today. In such a case we do not need to calculate HPRs for the expiration day, since that is immaterial to the question of how many of these options to buy, given all of the available information (time to expiration, time we expect to remain in the trade, price of the underlying instrument, price of the option, and volatility).
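Equations (5.15a) and (5.15b) can be sketched with the normal CDF expressed through the error function (a sketch; parameter names follow the text):

```python
import math

def norm_cdf(x):
    # Cumulative Normal Distribution Function, Equation (3.21),
    # expressed via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_t_u(u, q, v, l):
    """Probability per (5.15a)/(5.15b) of the underlying being at price u:
    q = current underlying price, v = annual volatility, l = decimal
    fraction of the year elapsed since the option was put on."""
    z = math.log(u / q) / (v * l ** 0.5)
    if u <= q:
        return norm_cdf(z)        # (5.15a)
    return 1.0 - norm_cdf(z)      # (5.15b)
```

At u = q the two branches meet at .5; prices equidistant from q in log terms (such as 90 and 100²/90) get equal probabilities, reflecting the lognormal assumption.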
If we do not have a set time when we will be out of the trade, then we must use the expiration day as the date on which to calculate probability-weighted HPRs. Once we know how many days to calculate for (and we will assume here that we will calculate up to the expiration day), we must calculate the probability-weighted HPRs for all possible prices for that market day. Again, this is not as overwhelming as you might think. In the Normal Probability Distribution, 99.73% of all outcomes will fall within three standard deviations of the mean. The mean here is the current price of the underlying instrument. Therefore, we really only need to calculate the probability-weighted HPRs for a particular market day for each discrete price between -3 and +3 standard deviations. This should get us acceptably close to the correct answer. Of course, if we wanted to we could go out to 4, 5, 6 or more standard deviations, but that would not be much more accurate. Likewise, if we wanted to, we could contract the price window by looking at only 2 or 1 standard deviations. There is no gain in accuracy by doing this, though. The point is that 3 standard deviations is not set in stone, but should provide for sufficient accuracy. If we are using the Black-Scholes model or the Black futures option model, we can determine how much 1 standard deviation is above a given underlying price, U:

(5.16) Std. Dev. = U*EXP(V*(T^(1/2)))

where U = Current price of the underlying instrument.
V = The annual volatility of the underlying instrument.
T = Decimal fraction of the year elapsed since the option was put on.
EXP() = The exponential function.

Notice that the standard deviation is a function of the time elapsed in the trade (i.e., you must know how much time has elapsed in order to know where the three standard deviation points are). Building upon this equation, the point that is X standard deviations above the current underlying price is: (5.17a) +X Std. Dev.
= U*EXP(X*(V*T^(1/2)))

Likewise, X standard deviations below the current underlying price is found by:

(5.17b) -X Std. Dev. = U*EXP(-X*(V*T^(1/2)))

where U = Current price of the underlying instrument.
V = The annual volatility of the underlying instrument.
T = Decimal fraction of the year elapsed since the option was put on.
EXP() = The exponential function.
X = The number of standard deviations away from the mean you are trying to discern probabilities on.

Remember, you must first determine how old the trade is, as a fraction of a year, before you can determine what price constitutes X standard deviations above or below a given price U. Here, then, is a summary of the procedure for finding the optimal f for a given option.

Step 1 Determine if you will be out of the option by a definite date. If not, then use the expiration date.
Step 2 Counting the first day as day 1, determine how many days you will have been in the trade by the date in step 1. Now convert this number of days into a decimal fraction of a year.
Step 3 For the day in step 1, calculate those points that are within +3 and -3 standard deviations of the current underlying price.
Step 4 Convert these ranges of prices in step 3 to discrete values. In other words, using increments of 1 tick, determine all of the possible prices between and including those values in step 3 that bound the range.
Step 5 For each of these outcomes now calculate the Z(T,U-Y)'s and P(T,U)'s for the probability-weighted HPR equation. In other words, for each of these outcomes now calculate the resultant theoretical option price as well as the probability of the underlying instrument being at that price by the date in question.
Step 6 After you have completed step 5, you now have all of the input required to calculate the probability-weighted HPRs for all of the outcomes:

(5.14) HPR(T,U) = (1+f*(Z(T,U-Y)/S-1))^P(T,U)

where f = The tested value for f.
S = The current price of the option.
Z(T,U-Y) = The theoretical option price if the underlying were at price U-Y with time T remaining till expiration. This can be discerned by whatever pricing model the user deems appropriate.
P(T,U) = The 1-tailed probability of the underlying being at price U by time T remaining till expiration. This can be discerned by whatever distributional form the user deems appropriate.
Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given by (5.10), and the current price.

You should note that the distributional form used for the variable P(T,U) need not be the same distributional form used by the pricing model employed to discern the values for Z(T,U-Y). For example, suppose you are using the Black-Scholes stock option model to discern the values for Z(T,U-Y). This model assumes a lognormal distribution of price changes. However, you can correctly use another distributional form to determine the corresponding P(T,U). Literally, this translates as follows: You know that if the underlying goes to price U, the option's price will tend to the value given by Black-Scholes. Yet the probability of the underlying going to price U from here may be greater than the lognormal distribution would indicate.

Step 7 Now you can begin the process of finding the optimal f. Again you can do this by iteration, by looping through all of the possible f values between 0 and 1, by parabolic interpolation, or by any other one-dimensional search algorithm. By plugging the test values for f into the HPRs (and you have an HPR for each of the possible price increments between +3 and -3 standard deviations on the expiration date or mandated exit date) you can find your geometric mean for a given test value of f.
The way you now obtain this geometric mean is to multiply all of these HPRs together and then take the resulting product to the power of 1 divided by the sum of the probabilities:

(5.18a) G(f,T) = {∏[U = -3SD,+3SD]HPR(T,U)}^(1/∑[U = -3SD,+3SD]P(T,U))

Therefore:

(5.18b) G(f,T) = {∏[U = -3SD,+3SD](1+f*(Z(T,U-Y)/S-1))^P(T,U)}^(1/∑[U = -3SD,+3SD]P(T,U))

where G(f,T) = The geometric mean HPR for a given test value for f and a given time remaining till expiration from a mandated exit date.
f = The tested value for f.
S = The current price of the option.
Z(T,U-Y) = The theoretical option price if the underlying were at price U-Y with time T remaining till expiration. This can be discerned by whatever pricing model the user deems appropriate.
P(T,U) = The probability of the underlying being at price U by time T remaining till expiration. This can be discerned by whatever distributional form the user deems appropriate.
Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given by (5.10), and the current price.

The value for f that results in the greatest geometric mean is the value for f that is optimal. We can optimize for the optimal mandated exit date as well. In other words, say we want to find what the optimal f is for a given option for each day between now and expiration. That is, we run this procedure over and over, starting with tomorrow as the mandated exit date and finding the optimal f, then starting the whole process over again with the next day as the mandated exit date. We keep moving the mandated exit date forward until the mandated exit date is the expiration date. We record the optimal f's and geometric means for each mandated exit date. When we are through with this entire procedure, we can find the mandated exit date that results in the highest geometric mean.
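Steps 3 through 7 can be sketched end to end in Python (a sketch under simplifying assumptions: a hypothetical intrinsic-value stand-in is used in place of a real pricing model for Z, Y is taken as 0, and the P values are the lognormal tail probabilities of (5.15a)/(5.15b)):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def geo_mean_hpr(f, s, z_vals, p_vals):
    """Equation (5.18b): geometric mean HPR over all price outcomes."""
    prod, psum = 1.0, 0.0
    for z, p in zip(z_vals, p_vals):
        prod *= (1.0 + f * (z / s - 1.0)) ** p
        psum += p
    return prod ** (1.0 / psum)

# Build the discrete price grid between -3 and +3 standard deviations
u0, v, t_elapsed, tick = 100.0, 0.2, 33 / 260.8875, 0.1
lo = u0 * math.exp(-3 * v * math.sqrt(t_elapsed))   # (5.17b)
hi = u0 * math.exp(+3 * v * math.sqrt(t_elapsed))   # (5.17a)
grid = [round(lo + k * tick, 1) for k in range(int((hi - lo) / tick) + 1)]

# Hypothetical stand-ins: intrinsic value for Z, lognormal tails for P
s = 2.861
z_vals = [max(u - 100.0, 0.01) for u in grid]
p_vals = [norm_cdf(math.log(u / u0) / (v * math.sqrt(t_elapsed)))
          if u <= u0 else
          1.0 - norm_cdf(math.log(u / u0) / (v * math.sqrt(t_elapsed)))
          for u in grid]

# Step 7: brute-force search for the f with the highest geometric mean
best_f = max((i / 100 for i in range(101)),
             key=lambda f: geo_mean_hpr(f, s, z_vals, p_vals))
```

In a real application, z_vals would come from the chosen pricing model at the mandated exit date, and Y would be subtracted as the text prescribes; the search structure is unchanged.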
Now we know the date by which we must be out of the option position in order to have the highest mathematical expectation (i.e., the highest geometric mean). We also know how many contracts to buy by using the f value that corresponds to the highest geometric mean. We now have a mathematical technique whereby we can blindly go out and buy an option and (as long as we are out of it by the mandated exit date that has the highest geometric mean, provided that it is greater than 1.0, of course, and buy the number of contracts indicated by the optimal f corresponding to that highest geometric mean) be in a positive mathematical expectation. Furthermore, these are geometric positive mathematical expectations. In other words, the geometric mean (minus 1.0) is the mathematical expectation when you are reinvesting returns. (The true arithmetic positive mathematical expectation would of course be higher than the geometric.) Once you know the optimal f for a given option, you can readily turn this into how many contracts to buy based on the following equation:

(5.19) K = INT(E/(S/f))

where K = The optimal number of option contracts to buy.
f = The value for the optimal f (0 to 1).
S = The current price of the option.
E = The total account equity.
INT() = The integer function.

The answer derived from this equation must be floored to the integer. In other words, for example, if the answer is to buy 4.53 contracts, you would buy 4 contracts. We can also determine the TWR for the option trade. To do so we must know how many times we would perform this same trade over and over. In other words, if our geometric mean is 1.001 and we want to find the TWR that corresponds to making this same play over and over 100 times, our TWR would be 1.001^100 = 1.105115698. We would therefore expect to make 10.5115698% on our stake if we were to make this same options play 100 times over.
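Equation (5.19) and the TWR conversion can be sketched as follows (the $25,000 account size is a hypothetical figure; the option price of $286.10 and f of .0806 come from the worked example later in this chapter):

```python
def contracts_to_buy(equity, option_price, f):
    """Equation (5.19): K = INT(E/(S/f)), floored to the integer."""
    return int(equity / (option_price / f))

# One contract per $286.10/.0806 = $3,549.63 in equity
k = contracts_to_buy(25000.0, 286.10, 0.0806)

def twr(geo_mean, x):
    """Equation (4.18): TWR = Geometric Mean^X, the expected multiple on
    the stake after making the same play X times."""
    return geo_mean ** x
```

With $25,000 of equity this yields 7 contracts (25,000 / 3,549.63 = 7.04, floored to 7), and twr(1.001, 100) reproduces the roughly 10.5% figure in the text.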
The formula to convert from a geometric mean to a TWR was given as Equation (4.18): (4.18) TWR = Geometric Mean^X where TWR = The terminal wealth relative. X = However many times we want to "expand" this play out. That is, what we would expect to make if we invested f amount into these possible scenarios X times. Further, we can determine our other by-products, such as the geometric mathematical expectation, as the geometric mean minus 1. If we take the biggest loss possible (the cost of the option itself), divide this by the optimal f, and multiply the result by the geometric mathematical expectation, the result will yield the geometric average trade. As you have seen, when applied to options positions such as this, the optimal f technique has the added by-product of discerning what the optimal exit date is. We have discussed the options position in its pure form, devoid of any underlying bias we may have in the direction of the price of the underlying. For a mandated exit date, the points of 3 standard deviations above and below are calculated from the current price. This assumes that we know nothing of the future direction of the underlying. According to the mathematical pricing models, we should not be able to find positive arithmetic mathematical expectations if we were to hold these options to expiration. However, as we have seen, through the use of this technique it is possible to find positive geometric mathematical expectations if we put on a certain quantity and exit the position on a certain date. If you have a bias toward the direction of the underlying, that can also be incorporated. Suppose we are looking at options on a particular underlying instrument, which is currently priced at 100. Further suppose that our bias, generated by our analysis of this market, suggests a price of 105 by the expiration date, which is 40 market days from now. We expect the price to rise by 5 points in 40 days. 
If we assume a straight-line basis for this advance, we can state that the price should rise, on average, .125 points per market day. Therefore, for the mandated exit day of tomorrow, we will figure a value of U of 100.125. For the next mandated exit date, U will be 100.25. Finally, by the time that the mandated exit date is the expiration date, U will be 105. If the underlying is a stock, you should subtract the dividends from this adjusted U via Equation (5.04). The bias is applied to the process by having a different value for U each day because of our forecast. Because they affect the outcomes of Equations (5.17a) and (5.17b), these different values for U will dramatically affect our optimal f and by-product calculations. Notice that because Equations (5.17a) and (5.17b) are affected by the new value for U each day, there is an automatic equalization of the data. Hence, the optimal f's we obtain are based on equalized data. As you work with this optimal f idea and options, you will notice that each day the numbers change. Suppose you buy an option today at a certain price that has a given mandated exit date. Suppose the option has a different price tomorrow. If you run the optimal f procedure again on this new option, it, too, may have a positive mathematical expectation and a different mandated exit date. What does this mean? The situation is analogous to a horse race where you can still place bets after the race has begun, until the race is finished. The odds change continuously, and you can cash your ticket at any time; you need not wait until the race is over. Say you bet $2 on a horse before the race begins, based on a positive mathematical expectation that you have for that horse, and the horse is running next to last by the first turn. You make time stop (because you can do that in hypothetical situations) and now you look at the tote board. Your $2 ticket on this horse is now only worth $1.50.
You determine the mathematical expectation on your horse again, considering how much of the race is already finished, the current odds on your horse, and where it presently is in the field. You determine that the current price of that $1.50 ticket on your horse is 10% undervalued. Therefore, since you could cash your $2 ticket that you bought before the race for $1.50 right now, taking a loss, and you could also purchase the $1.50 ticket on the horse right now with a positive mathematical expectation, you do nothing. The current situation is thus that you have a positive mathematical expectation, but on the basis of a $1.50 ticket, not a $2 ticket. This same analogy holds for our option trade, which is now slightly underwater but has a positive mathematical expectation on the basis of the new price. You should use the new optimal f on the new price, adjusting your current position if necessary, and go with the new optimal exit date. In so doing, you will have incorporated the latest price information about the underlying instrument. Often, doing this may have you take the position all the way into expiration. There will be many inevitable losses along the way when following this technique of optimal f on options. Why you should be able to find positive mathematical expectations in options that are theoretically fairly priced in the first place may seem like a paradox or simply quackery to you. However, there is a very valid reason why this is so: Inefficiencies are a function of your frame of reference. Let's start by stating that theoretical option prices as returned by the models do not give a positive mathematical expectation (arithmetic) to either the buyer or seller. In other words, the models are theoretically fair. The missing caveat here is "if held till expiration." It is this missing caveat that allows an option to be fairly priced per the models, yet have a positive expectation if not held till expiration.
Consider that options decay at the rate of the square root of the time remaining till expiration. Thus, the day with the least expected time premium decay will always be the first day you are in the option. Now consider Equations (5.17a) and (5.17b), the price corresponding to a move of X standard deviations after so much time has elapsed. Notice that each day the window returned by these formulas expands, but by less and less. The day of the greatest rate of expansion is the first day in the option. Thus, for the first day in the option, the time premium will shrink the least, and the window of X standard deviations will expand the fastest. The less the time decay, the more likely we are to have a positive expectation in a long option. Further, the wider the window of X standard deviations, the more likely we are to have a positive expectation, as the downside is fixed with an option but the upside is not. There is a constant tug-of-war going on between the window of X standard deviations getting wider and wider with each passing day (at a slower and slower rate, though) and time decaying the premium faster and faster with each passing day. What happens is that the first day sees the most positive mathematical expectation, although it may not be positive. In other words, the mathematical expectation (arithmetic and geometric) is greatest after you have been in the option 1 day (it's actually greatest the first instant you put on the option and decays gradually thereafter, but we are looking at this thing at discrete intervals-each day's close). Each day thereafter the expectation gets lower, but at a slower rate. The following table depicts this decay of expectation of a long option. The table is derived from the option discussed earlier in this chapter. This is the 100 call option where the underlying is at 100, and it expires 911220. The volatility is 20% and it is now 911104. 
We are using the Black commodity option formula (H discerned as in Equation (5.07) and R = 5%) and a 260.8875-day year. We are using 8 standard deviations to calculate our optimal f's from, and we are using a minimum tick increment of .1 (which will be explained shortly).

Exit Date        AHPR        GHPR        f
Tue. 911105      1.000409    1.000195    .0806
Wed. 911106      1.000001    1.000000    .0016
Thu. 911107      <1          <1          0

The AHPR column is the arithmetic average HPR (the calculation of which will be discussed later on in this chapter), and GHPR is the geometric mean HPR. The f column is the optimal f from which the AHPR and GHPR columns were derived. The arithmetic mathematical expectation, as a percentage, is simply the AHPR minus 1, and the geometric mathematical expectation, as a percentage, is the GHPR minus 1. Notice that the greatest mathematical expectations occur on the day after we put the option on (although this example has a positive mathematical expectation, not all options will show a positive mathematical expectation). Each day thereafter the expectations themselves decay. The rate of decay also gets slower and slower each day. After 911106 the mathematical expectations (HPR-1) go negative. Therefore, if we wanted to trade on this information, we could elect to enter today (911104) and exit on the close tomorrow (911105). The fair option price is 2.861. If we assume it is traded at a price of $100 per full point, the cost of the option is 2.861*$100 = $286.10. Dividing this price by the optimal f of .0806 tells us to buy one option for every $3,549.63 in equity. If we wanted to hold the option till the close of 911106, the last day that still has a positive mathematical expectation, we would have to initiate the position today using the f value corresponding to the optimal f for an exit on 911106, .0016. We would therefore enter today (911104) with 1 contract for every $178,812.50 in account equity ($286.10/.0016).
Notice that to do so has a much lower expectation than if we entered with 1 contract for every $3,549.63 in account equity and exited on the close tomorrow, 911105. The rate of change between the two functions, time premium decay and the expanding window of X standard deviations, may create a positive mathematical expectation for being long a given option. This expectation is at its greatest the first instant in the position and declines at a decreasing rate from there. Thus, an option that is priced fairly to expiration based on the models can be found to have a positive expectation if exited early on in the premium decay. The next table looks at this same 100 call option again, only this time we look at it using different-sized windows (different amounts of standard deviations):

Number of Standard Deviations    AHPR        GHPR        f          Cutoff
2                                1.000102    1.000047    .043989    911105
3                                1.000379    1.00018     .0781      911105
5                                1.000409    1.000195    .0806      911106
8                                1.000409    1.000195    .0806      911106
10                               1.000409    1.000195    .0806      911106

The AHPR and GHPR pertain to the arithmetic and geometric HPRs at the optimal f values if you exit the trade on the close of 911105 (the most opportune date to exit, because it has the highest AHPR and GHPR). The f corresponds to the optimal f for 911105. The heading Cutoff pertains to the last date when a positive expectation (i.e., AHPR and GHPR both greater than 1) exists. The interesting point to note is that the four values AHPR, GHPR, f, and Cutoff all converge to given points as we increase the number of standard deviations toward infinity. Beyond 5 standard deviations the values hardly change at all. Beyond 8 standard deviations they seem to stop changing. The tradeoff in using more standard deviations is the extra computer time required. This seems a small price to pay, but as we get into multiple simultaneous positions in this chapter, you will notice that each additional leg of a multiple simultaneous position increases the time required exponentially.
For one leg we can argue that using 8 standard deviations is ideal. However, for more than one leg simultaneously, we may find it necessary to trim back this number of standard deviations. Furthermore, this 8 standard deviation rule applies only when we assume Normality in the logs of price changes.

THE SINGLE SHORT OPTION

Everything stated about the single long option holds true for a single short option position. The only difference is in regard to Equation (5.14):

(5.14) HPR(T,U) = (1+f*(Z(T,U-Y)/S-1))^P(T,U)

where HPR(T,U) = The HPR for a given test value for T and U.
f = The tested value for f.
S = The current price of the option.
Z(T,U-Y) = The theoretical option price if the underlying were at price U with time T remaining till expiration.
P(T,U) = The probability of the underlying being at price U by time T remaining till expiration.
Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given by (5.10), and the current price.

For a single short option position this equation now becomes:

(5.20) HPR(T,U) = (1+f*(1-Z(T,U-Y)/S))^P(T,U)

where HPR(T,U) = The HPR for a given test value for T and U.
f = The tested value for f.
S = The current price of the option.
Z(T,U-Y) = The theoretical option price if the underlying were at price U with time T remaining till expiration.
P(T,U) = The probability of the underlying being at price U by time T remaining till expiration.
Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given by (5.10), and the current price.

You will notice that the only difference between Equation (5.14), the equation for a single long option position, and Equation (5.20), the equation for a single short option position, is in the expression (Z(T,U-Y)/S-1), which becomes (1-Z(T,U-Y)/S) for the single short option position. Aside from this change, everything else detailed about the single long option position holds for the single short option position.
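The two HPR formulas, Equations (5.14) and (5.20), can be sketched as Python functions (names and argument order are illustrative):

```python
def hpr_long(f, Z, S, P):
    """Equation (5.14): HPR of a single long option at one price point U.
    Z is the theoretical option price there, S the current option price,
    P the probability of the underlying being at that price."""
    return (1 + f * (Z / S - 1)) ** P

def hpr_short(f, Z, S, P):
    """Equation (5.20): the single short option; the payoff term is mirrored."""
    return (1 + f * (1 - Z / S)) ** P
```

A price point where the option gains value (Z > S) yields an HPR above 1 for the long position and below 1 for the short position, and vice versa.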
THE SINGLE POSITION IN THE UNDERLYING INSTRUMENT

In Chapter 3 we detailed the math of finding the optimal f parametrically. Now we can use the same method as with a single long option, only our calculation of the HPR is taken from Equation (3.30).

(3.30) HPR(U) = (1+(L/(W/(-f))))^P

where HPR(U) = The HPR for a given U.
L = The associated P&L.
W = The worst-case associated P&L in the table (this will always be a negative value).
f = The tested value for f.
P = The associated probability.

The variable L, the associated P&L, is discerned by taking the price of the underlying at a given price U, minus the price at which the trade was initiated, S, for a long position:

(5.21a) L for a long position = U-S

For a short position, the associated P&L is figured just the reverse:

(5.21b) L for a short position = S-U

where S = The current price of the underlying instrument.
U = The price of the underlying instrument for this given HPR.

We could also figure the optimal f for a single position in the underlying instrument using Equation (5.14). When doing so we must realize that the optimal f returned can be greater than 1. For example, consider an underlying instrument at a price of 100. We determine that the five following outcomes might occur:

Outcome   Probability   P&L
110       .15           10
105       .30           5
100       .50           0
 95       .25           -5
 90       .10           -10

Note that per Equation (5.10), our arithmetic mathematical expectation on the underlying is 100.576923077. This means that the variable Y in (5.14) is equal to .576923077, since 100.576923077-100 = .576923077. If we were to figure the optimal f using the P&L column and the Equation (3.30) method, we derive an f of .19, or 1 unit for every $52.63 in equity.
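The f of .19 quoted above can be reproduced with a brute-force search over the Equation (3.30) HPRs, weighting each by its associated (unnormalized) probability; this sketch assumes the geometric mean is the product of the HPRs rooted by the sum of the probabilities:

```python
pnls  = [10, 5, 0, -5, -10]
probs = [.15, .30, .50, .25, .10]   # associated probability weights
W = min(pnls)                       # worst-case P&L: -10

def geo_mean_hpr(f):
    """Geometric mean of the (3.30) HPRs: prod of HPR(U)^P, rooted by sum(P)."""
    g = 1.0
    for pnl, p in zip(pnls, probs):
        g *= (1 + pnl / (W / -f)) ** p
    return g ** (1 / sum(probs))

# Coarse grid search for the f in (0, 1) with the highest geometric mean.
f_opt = max((i / 10000 for i in range(1, 10000)), key=geo_mean_hpr)
print(round(f_opt, 2))   # 0.19, i.e. 1 unit per -W/f = $52.63 in equity
```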
If instead we used Equation (5.14) on the outcome column, whereby the variable S is therefore equal to 100, and we do not subtract the value of Y (the arithmetic mathematical expectation of the underlying minus its current value) from U in discerning our Z(T,U-Y) variable, we find our optimal f at approximately 1.9. This translates again into 1 unit for every $52.63 in equity, as 100/1.9 = 52.63. On the other hand, if we subtract the value of Y, the arithmetic mathematical expectation on the underlying per Equation (5.10), in the Z(T,U-Y) term of (5.14), we end up with a mathematical expectation on the underlying equal to its current value, and therefore we do not have an optimal f. This is what we must do: subtract the value of Y in the Z(T,U-Y) term of Equation (5.14) in order to be consistent with the options calculations as well as the put/call parity formula. If we are using the Equation (3.30) method instead of the Equation (5.14) method, then each value for U in (5.21a) and (5.21b) must have the arithmetic mathematical expectation of the underlying, Y, subtracted from it. That is, we must subtract the value of Y from each P&L. Doing so again yields a situation where there is not a positive mathematical expectation, and therefore there is no value for f that is optimal. Literally, this means only that if we blindly go out and take a position in the underlying instrument, we do not get a positive mathematical expectation (as we do with some options), and therefore there is no f that is optimal in this case. We can have an optimal f only if we have a positive mathematical expectation. We can have this only if we have a bias in the underlying. Now we have a methodology that can be used to give us the optimal f (and its by-products) for options, whether long or short, as well as trades in the underlying instrument (from a number of different methods).
Note that the methods used in this chapter to discern the optimal f's and by-products for either options or the underlying instrument are predicated upon not necessarily using a mechanical system to enter your trades. For instance, the empirical method for finding optimal f used an empirical stream of trade P&L's generated by a mechanical system. In Chapter 3 we learned of a parametric technique to find the optimal f from data that was Normally distributed. This same technique can be used to find the optimal f from data of any distribution, so long as the distribution in question has a cumulative density function. In Chapter 4 we learned of a method to find the optimal f parametrically for distributions that do not have a cumulative density function, such as the distribution of trade P&L's (whether a mechanical system is used or not) or the scenario planning approach. In this chapter we have learned of a method for finding the optimal f when not using a mechanical system. You will notice that all of the calculations thus far assume that you are, in effect, blindly entering a position at some point in time and exiting at some unknown future point. Usually the method is shown where there isn't a bias in the price of the underlying; that is, the method is shown devoid of any price forecast in the underlying. We have seen, however, that we can incorporate our price forecast into the process simply by changing the value of the underlying used as input into Equations (5.17a) and (5.17b) each day as the trade progresses. Even a slight bias changes the expectation function dramatically. The optimal exit date may now very well not be the market day immediately after the entry day. In fact, the optimal exit date may well become the expiration day. In such a case, the option has a positive mathematical expectation even if held to expiration.
Not only is the expectation function altered dramatically by even a slight bias in the price of the underlying, so, too, are the optimal f's, AHPRs, and GHPRs. For instance, the following table is once again derived from the option discussed earlier in this chapter. This is the 100 call option where the underlying is at 100, and it expires 911220. The volatility is 20% and it is now 911104. We are using the Black commodity option formula (H discerned as in Equation (5.07) and R = 5%) and a 260.8875-day year. We will again use 8 standard deviations to calculate our optimal f's from (to be consistent with the previous tables showing no bias in the underlying, or bias = 0), and we are using a minimum tick increment of .1. Here, however, we will assume a bias of .01 points (one tenth of one tick) upward per day in the price of the underlying:

Exit Date       AHPR        GHPR        f
Tue. 911105     1.000744    1.000357    .1081663
Wed. 911106     1.000149    1.000077    .0377557
Thu. 911107     1.000003    1.000003    .0040674
Fri. 911108     <1          <1          0

Notice how a tiny .01-point upward bias per day changes the results. Our optimal exit date is still 911105, and our optimal f is .1081663, which translates into 1 contract for every $2,645.00 in account equity (2.861*100/.1081663). Also notice that a positive expectation is obtained in this option all the way until the close of 911107. Had we had a stronger bias than simply .01 point upward per day, the results would be changed to an even more pronounced degree. The last point that needs to be addressed is the cost of commissions. In the price of the option obtained with Equation (5.14), the variable Z(T,U-Y) must be adjusted downward to reflect the commissions involved in the transaction (if you are charged commissions on the entry side also, then you must adjust the variable S in Equation (5.14) upward by the amount of the commissions). We have covered finding the optimal f and its by-products when we are not using a mechanical system.
We can now begin to combine multiple positions.

MULTIPLE SIMULTANEOUS POSITIONS WITH A CAUSAL RELATIONSHIP

As we begin our discussion of multiple simultaneous positions, it is important to differentiate between causal relationships and correlative relationships. In the causal relationship, there is a factual, connective explanation of the correlation between two or more items. That is, a causal relationship is one where there is correlation, and the correlation can be explained or accounted for in some logical, connective fashion. This is in contrast to a correlative relationship where there is, of course, correlation, but there is no causal, connective explanation of the correlation. As an example of a causal relationship, let's look at put options on IBM and call options on IBM. Certainly the correlation between the IBM puts and the IBM calls is -1 (or very close to it), but there is more to the relationship than simply correlation. We know for a fact that when there is upward pressure on the IBM calls, there will be downward pressure on the puts (all else remaining constant, including volatility). This logical, connective relationship means that there is a causal relationship between IBM calls and IBM puts. When there is correlation but no cause, we simply say that there is a correlative relationship (as opposed to a causal relationship). Usually, correlative relationships will not have correlation coefficients whose absolute values are close to 1. Usually, the absolute value of the correlation coefficient will be closer to 0. For example, corn and soybeans tend to move in tandem. Although their correlation coefficients are not exactly equal to 1, there is still a causal relationship because both markets are affected by things that affect the grains. If we look at IBM calls and Digital Equipment puts (or calls), we cannot say that the relationship is completely a causal relationship.
Surely there is somewhat of a causal relationship, as both of the underlying stocks are members of the computer group, but just because IBM goes up (or down) is not an absolute mandate that Digital Equipment will also. As you can see, there is not a fine line that differentiates causal and correlative relationships. This "clouding" of causal relationships and those that are simply correlative will make our work more difficult. For the time being, we will deal only with causal relationships, or what we believe are causal relationships. In the next chapter we will deal with correlative relationships, which encompass causal relationships as well. You should be aware right now that the techniques mentioned in the next chapter on correlative relationships are also applicable to, or can be used in lieu of, the techniques for causal relationships about to be discussed. The reverse is not true. That is, it is erroneous to apply the following techniques on causal relationships to relationships that are simply correlative. A causal relationship is one where the correlation coefficient between the prices of two items is 1 or -1. To simplify matters, a causal relationship almost always consists of any two tradeable items (stock, commodity, option, etc.) that have the same underlying instrument. This includes, but is not limited to, options spreads, straddles, strangles, and combinations, as well as covered writes or any other position where you are using the underlying in conjunction with one or more of its options, or one or more options on the same underlying instrument, even if you do not have a position in that underlying instrument. In its simplest form, multiple simultaneous positions consisting of only options (no position in the underlying), when the position is put on at a debit, can be solved for by using Equation (5.14). By solved for I mean that we can determine the optimal f for the entire position and its by-products (including the optimal exit date).
The only differences are that the variable S will now represent the net of the legs of the position at the trade's inception, and the variable Z(T,U-Y) will now represent the net of the legs at price U by time T remaining till expiration. Likewise, multiple simultaneous positions consisting of only options (no position in the underlying), when the position is put on at a credit, can be solved for by using Equation (5.20). Again, we must alter the variables S and Z(T,U-Y) to reflect the net of the legs of the position. For example, suppose we are looking to put on a long option straddle, the purchase of a put and a call on the same underlying instrument with the same strike price and expiration date. Further suppose that the optimal f returned by this technique was 1 contract for every $2,000. This would mean that for every $2,000 in account equity we should buy 1 straddle; for every $2,000 in account equity we should buy 1 of the puts and 1 of the calls. The optimal f returned by this technique pertains to financing 1 unit of the entire position, no matter how large that position is. This fact will be true for all the multiple simultaneous techniques discussed throughout this chapter. We can now devise an equation for multiple simultaneous positions, whether or not a position in the underlying instrument is included. We can use this generalized form for multiple simultaneous positions with a causal relationship:

(5.22) HPR(T,U) = (1+∑[i = 1,N]Ci(T,U))^P(T,U)

where N = The number of legs in the position.
HPR(T,U) = The HPR for a given test value for T and U.
Ci(T,U) = The coefficient of the ith leg at a given value for U, at a given time T remaining till expiration:

For an option leg put on at a debit or a long position in the underlying:

(5.23a) Ci(T,U) = f*(Z(T,U-Y)/S-1)

For an option leg put on at a credit or a short position in the underlying:

(5.23b) Ci(T,U) = f*(1-Z(T,U-Y)/S)

where f = The tested value for f.
S = The current price of the option or underlying instrument.
Z(T,U-Y) = The theoretical option price if the underlying were at price U with time T remaining till expiration.
P(T,U) = The probability of the underlying being at price U by time T remaining till expiration.
Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given by (5.10), and the current price.

Equation (5.22) can be used if you are planning on putting these legs all on at once, one for one, and you only need to iterate for the optimal f and optimal exit date of the entire position (that is what is meant by "multiple simultaneous positions"). For each value of U you will have an HPR given by Equation (5.22). For each value of f you will have a geometric mean, composed of all of the HPRs per Equation (5.18a):

(5.18a) G(f,T) = {∏[U = -8SD,+8SD]HPR(T,U)}^(1/∑[U = -8SD,+8SD]P(T,U))

where G(f,T) = The geometric mean HPR for a given test value for f and a given time remaining till expiration from a mandated exit date.

Those values of f and T (the values of the optimal f and mandated exit date) that result in the highest geometric mean are the ones that you should use on the net position of the legs. To summarize the entire procedure: we want to find the optimal f for each day, using each market day between now and expiration as the mandated exit date. For each mandated exit date you will determine those discrete prices between plus and minus X standard deviations (ordinarily we will let X equal 8) from the base price of the underlying instrument. The base price can be the current price of the underlying instrument, or it can be altered to reflect a particular bias you might have regarding that market's direction. You now need to find the value between 0 and 1 for f that results in the greatest geometric mean HPR, using an HPR for each of the discrete prices between plus and minus X standard deviations of the base price for that mandated exit date.
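The search just summarized can be sketched as follows. The two-point price grid and leg values below are toy stand-ins chosen only to keep the example self-contained; they are not the Black-model numbers used in the text:

```python
def net_hpr(f, legs, P):
    """Equation (5.22): one HPR for the whole position at one price point U.
    legs: list of (Z, S, is_debit) per leg; coefficient (5.23a) is used for
    debit legs and (5.23b) for credit legs."""
    total = 0.0
    for Z, S, is_debit in legs:
        total += f * (Z / S - 1) if is_debit else f * (1 - Z / S)
    return (1 + total) ** P

def geo_mean(f, grid):
    """Equation (5.18a): geometric mean HPR over the discrete price grid.
    grid: list of (legs, P) pairs, one per discrete price point U."""
    g, psum = 1.0, 0.0
    for legs, P in grid:
        g *= net_hpr(f, legs, P)
        psum += P
    return g ** (1 / psum)

# Toy grid: one debit leg bought at 10 that is worth 15 (prob .3) or 8 (prob .7).
grid = [([(15.0, 10.0, True)], 0.3), ([(8.0, 10.0, True)], 0.7)]
f_opt = max((i / 1000 for i in range(1, 1000)), key=lambda f: geo_mean(f, grid))
print(round(f_opt, 2))   # 0.1
```

In practice this inner search would be repeated for each mandated exit date, with the date whose geometric mean is highest taken as the optimal exit date.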
Therefore, for each mandated exit date you will have an optimal f and a corresponding geometric mean. The mandated exit date that has the greatest geometric mean is the optimal exit date for the position, and the f corresponding to that geometric mean is the f that is optimal. The "nesting" of the logic of this procedure is as follows:

For each mandated exit date (weekday) between now and expiration
    For each value of f (until the optimal is found)
        For each market system
            For each tick between +8 and -8 std. devs.
                Determine the HPR

Finally, you should note that in this section we have been attempting, among other things, to discern the optimal exit date, which we have looked upon as a single date at which to close down all of the legs of the position. You can apply the same procedure to determine the optimal exit date for each leg in the position. This compounds the number of computations geometrically, but it can be accomplished. This would alter the logic to appear as:

For each market system
    For each mandated exit date (weekday) between now and expiration
        For each value of f (until the optimal is found)
            For each market system
                For each tick between +8 and -8 std. devs.
                    Determine the HPR

We have thus covered multiple simultaneous positions with a causal relationship. Now we can move on to a similar situation where the relationship is random.

MULTIPLE SIMULTANEOUS POSITIONS WITH A RANDOM RELATIONSHIP

You should be aware that, as with the causal relationships already discussed, the techniques mentioned in the next chapter on correlative relationships are also applicable to, or can be used in lieu of, the techniques for random relationships about to be discussed. This is not true the other way around. That is, it is erroneous to apply the techniques on random relationships that follow in this chapter to relationships that are correlative (unless the correlation coefficients equal 0).
A random relationship is one where the correlation coefficient between the prices of two items is 0. A random relationship exists between any two tradeable items (stock, futures, options, etc.) whose prices are independent of one another, where the correlation coefficient between the two prices is zero, or is expected to be zero in an asymptotic sense. When there is a correlation coefficient of 0 between every combination of 2 legs in a multiple simultaneous position, the HPR for the net position is given as:

(5.24) HPR(T,U) = (1+∑[i = 1,N]Ci(T,U))^∏[i = 1,N]Pi(T,U)

where N = The number of legs in the position.
HPR(T,U) = The HPR for a given test value for T and U.
Ci(T,U) = The coefficient of the ith leg at a given value for U, at a given time remaining till expiration of T:

For an option leg put on at a debit or a long position in the underlying instrument:

(5.23a) Ci(T,U) = f*(Z(T,U-Y)/S-1)

For an option leg put on at a credit or a short position in the underlying instrument:

(5.23b) Ci(T,U) = f*(1-Z(T,U-Y)/S)

where f = The tested value for f.
S = The current price of the option.
Z(T,U-Y) = The theoretical option price if the underlying were at price U with time T remaining till expiration.
Pi(T,U) = The probability of the ith underlying being at price U by time remaining till expiration of T.
Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given by (5.10), and the current price.

We can now figure the geometric mean for random-relationship HPRs as:

(5.25) G(f,T) = {∏[U1 = -8SD,+8SD]...∏[UN = -8SD,+8SD]{(1+∑[i = 1,N]Ci(T,U))^∏[i = 1,N]Pi(T,U)}}^{1/(∑[U1 = -8SD,+8SD]...∑[UN = -8SD,+8SD]∏[i = 1,N]Pi(T,U))}

where G(f,T) = The geometric mean HPR for a given test value for f and a given time remaining till expiration from a mandated exit date.

Once again, the f and T that result in the greatest geometric mean are optimal.
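The one change from the causal case is the exponent: per Equation (5.24), each joint HPR is raised to the product of the legs' probabilities. A minimal sketch (the function name and argument layout are illustrative):

```python
def joint_hpr(f, legs):
    """Equation (5.24): HPR of a multi-leg position with independent legs.
    legs: list of (Z, S, is_debit, P_i) per leg. The coefficients (5.23a/b)
    are summed, while the probabilities multiply because the legs are
    independent of one another."""
    total, p_joint = 0.0, 1.0
    for Z, S, is_debit, P in legs:
        total += f * (Z / S - 1) if is_debit else f * (1 - Z / S)
        p_joint *= P
    return (1 + total) ** p_joint
```

For two debit legs each carrying probability .5 at their respective price points, the exponent on the joint HPR is .25, rather than the .5 a single leg would carry.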
The "nesting" of the logic of this procedure is exactly the same as with the causal relationships:

For each mandated exit date (weekday) between now and expiration
    For each value of f (until the optimal is found)
        For each market system
            For each tick between +8 and -8 std. devs.
                Determine the HPR

The only difference between the procedure for solving for random relationships and that for causal relationships is that the exponent to each HPR in the random relationship is calculated by multiplying together the probabilities of all of the legs being at the given price of the particular HPR. These probability products, used as exponents for each HPR, are themselves summed so that, when all of the HPRs are multiplied together to obtain the interim TWR, it can be raised to the power of 1 divided by the sum of the exponents used in the HPRs. And again, the outer loop of the logic could be amended to accommodate a search for the optimal exit date for each leg in the position. Complicated as Equation (5.25) looks, it still does not address the problem of a linear correlation coefficient between the prices of any two components that is not 0. As you can see, solving for the optimal mixture of components is quite a task! In the next few chapters you will see how to find the right quantities for each leg in a multiple position (using stock, commodities, options, or any other tradeable item) regardless of the relationship (causal, random, or correlative). The inputs you will need for a given option position in the next chapter are (1) the correlation coefficient of its average daily HPR on a 1-contract basis to each of the other positions in the portfolio, and (2) its arithmetic average HPR and standard deviation in HPRs. Equations (5.14) and (5.20) detailed how to find the HPR for long options and short options respectively. Equation (5.18a) then showed how to turn this into a geometric mean.
Now, we can also discern the arithmetic mean as follows. For long options, options put on at a debit:

(5.26a) AHPR = {∑[U = -8SD,+8SD]((1+f*(Z(T,U-Y)/S-1))*P(T,U))}/∑[U = -8SD,+8SD]P(T,U)

For short options, options put on at a credit:

(5.26b) AHPR = {∑[U = -8SD,+8SD]((1+f*(1-Z(T,U-Y)/S))*P(T,U))}/∑[U = -8SD,+8SD]P(T,U)

where AHPR = The arithmetic average HPR.
f = The optimal f (0 to 1).
S = The current price of the option.
Z(T,U-Y) = The theoretical option price if the underlying were at price U with time T remaining till expiration.
P(T,U) = The probability of the underlying being at price U with time T remaining till expiration.
Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given by (5.10), and the current price.

Once you have the geometric average HPR and the arithmetic average HPR, you can readily discern the standard deviation in HPRs:

(5.27) SD = (A^2-G^2)^(1/2)

where A = The arithmetic average HPR.
G = The geometric average HPR.
SD = The standard deviation in HPRs.

In this chapter we have learned of yet another way to calculate the optimal f. The technique shown was for nonsystem traders and used the distribution of outcomes on the underlying instrument by a certain date in the future as input. As a side benefit, this approach allows us to find the optimal f on both options and multiple simultaneous positions. However, one of the drawbacks of this technique is that the relationships between all of the positions must be random or causal. Does this mean we cannot use the techniques for finding the optimal f, discussed in earlier chapters, on multiple simultaneous positions or options? No; again, which method you choose is a matter of utility to you. The methods detailed in this chapter have certain drawbacks as well as benefits (such as the ability to discern optimal exit times).
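Equation (5.27) is easy to verify numerically; the A and G values below are illustrative, not from the text:

```python
def hpr_std_dev(A, G):
    """Equation (5.27): standard deviation in HPRs from the arithmetic
    mean HPR A and the geometric mean HPR G."""
    return (A ** 2 - G ** 2) ** 0.5

print(hpr_std_dev(1.01, 1.005))   # ~0.1004
```

Note that the arithmetic mean always equals or exceeds the geometric mean, so the quantity under the square root is never negative.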
In the next chapter, we will begin to delve into optimal portfolio construction, which will later allow us to perform multiple simultaneous positions using the techniques detailed earlier. There are many different directions of study we could head off into at this juncture. However, the goal in this text is to study portfolios of different markets, portfolios of different market systems, and different tradeable items. This being the case, we will part from the trail of theoretical option prices and head in the direction of optimal portfolio construction.

Chapter 6 - Correlative Relationships and the Derivation of the Efficient Frontier

We have now covered finding the optimal quantities to trade for futures, stocks, and options, trading them either alone or in tandem with another item, when there is either a random or a causal relationship between the prices of the items. That is, we have defined the optimal set when the linear correlation coefficient between any two elements in the portfolio equals 1, -1, or 0. Yet the relationships between any two elements in a portfolio, whether we look at the correlation of prices (in a nonmechanical means of trading) or equity changes (in a mechanical system), are rarely at such convenient values of the linear correlation coefficient. In the last chapter we looked at trading these items from the standpoint of someone not using a mechanical trading system. Because a mechanical trading system was not employed, we were looking at the correlative relationship of the prices of the items. This chapter provides a method for determining the efficient frontier of portfolios of market systems when the linear correlation coefficient between any two portfolio components under consideration is any value between -1 and 1 inclusive. Herein is the technique employed by professionals for determining optimal portfolios of stocks. In the next chapter we will adapt it for use with any tradeable instrument.
In this chapter, an important assumption is made regarding these techniques. The assumption is that the generating distributions (the distributions of returns) have finite variance. These techniques are effective only to the extent that the input data used has finite variance.[1]

DEFINITION OF THE PROBLEM

For the moment we are dropping the entire idea of optimal f; it will catch up with us later. It is easier to understand the derivation of the efficient frontier parametrically if we begin from the assumption that we are discussing a portfolio of stocks. These stocks are in a cash account and are paid for completely. That is, they are not on margin. Under such a circumstance, we derive the efficient frontier of portfolios. That is, for a given level of expected gain, we want to find those portfolios with the lowest level of expected risk, the given levels being determined by the particular investor's aversion to risk. Hence, this basic theory of Markowitz (aside from the general reference to it as Modern Portfolio Theory) is often referred to as E-V theory (Expected return-Variance of return). Note that the inputs are based on returns. That is, the inputs to the derivation of the efficient frontier are the returns we would expect on a given stock and the variance we would expect of those returns. Generally, returns on stocks can be defined as the dividends expected over a given period of time plus the capital appreciation (or minus depreciation) over that period of time, expressed as a percentage gain (or loss). Consider four potential investments, three of which are stocks and one a savings account paying 8.5% per year. Notice that we are defining the length of a holding period, the period over which we measure returns and their variances, as 1 year in this example:

Investment        Expected Return   Expected Variance of Return
Toxico            9.5%              10%
Incubeast Corp.   13%               25%
LA Garb           21%               40%
Savings Account   8.5%              0%

We can express expected returns as HPRs by adding 1 to them.
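This add-1 conversion, together with the square-root step described next, can be sketched directly from the table above:

```python
# name: (expected return, expected variance of return), from the table above
investments = {
    "Toxico":          (0.095, 0.10),
    "Incubeast Corp.": (0.13,  0.25),
    "LA Garb":         (0.21,  0.40),
    "Savings Account": (0.085, 0.00),
}

for name, (ret, var) in investments.items():
    hpr = 1 + ret          # expected return expressed as an HPR
    sd = var ** 0.5        # expected standard deviation of return
    print(name, hpr, round(sd, 9))
```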
Also, we can express expected variance of return as expected standard deviation of return by taking the square root of the variance. In so doing, we transform our table to:

Investment        Expected Return as an HPR   Expected Standard Deviation of Return
Toxico            1.095                       .316227766
Incubeast Corp.   1.13                        .5
LA Garb           1.21                        .632455532
Savings Account   1.085                       0

[Footnote 1: For more on this, see Fama, Eugene F., "Portfolio Analysis in a Stable Paretian Market," Management Science 11, pp. 404-419, 1965. Fama has demonstrated techniques for finding the efficient frontier parametrically for stably distributed securities possessing the same characteristic exponent, A, when the returns of the components all depend upon a single underlying market index. Readers should be aware that other work has been done on determining the efficient frontier when there is infinite variance in the returns of the components in the portfolio. These techniques are not covered here other than to refer interested readers to pertinent articles. For more on the stable Paretian distribution, see Appendix B. For a discussion of infinite variance, see "The Student's Distribution" in Appendix B.]

The time horizon involved is irrelevant so long as it is consistent for all components under consideration. That is, when we discuss expected return, it doesn't matter if we mean over the next year, quarter, 5 years, or day, as long as the expected returns and standard deviations for all of the components under consideration all have the same time frame. (That is, they must all be for the next year, or they must all be for the next day, and so on.) Expected return is synonymous with potential gains, while variance (or standard deviation) in those expected returns is synonymous with potential risk. Note that the model is two-dimensional.
In other words, we can say that the model can be represented on the upper right quadrant of the Cartesian plane (see Figure 6-1) by placing expected return along one axis (generally the vertical or Y axis) and expected variance or standard deviation of returns along the other axis (generally the horizontal or X axis). There are other aspects to potential risk, such as the potential risk of (probability of) a catastrophic loss, which E-V theory does not differentiate from variance of returns in regard to defining potential risk. While this may very well be true, we will not address this concept any further in this chapter so as to discuss E-V theory in its classic sense. However, Markowitz himself stated that a portfolio derived from E-V theory is optimal only if the utility, the "satisfaction," of the investor is a function of expected return and variance in expected return only. Markowitz indicated that investor utility may very well encompass moments of the distribution higher than the first two (which are what E-V theory addresses), such as skewness and kurtosis of expected returns.

[Figure 6-1 The upper-right quadrant of the Cartesian plane.]

Potential risk is still a far broader and more nebulous thing than what we have tried to define it as. Whether potential risk is simply variance on a contrived sample, or is represented on a multidimensional hypercube, or incorporates further moments of the distribution, we try to define potential risk to account for our inability to really put our finger on it. That said, we will go forward defining potential risk as the variance in expected returns. However, we must not delude ourselves into thinking that risk is simply defined as such. Risk is far broader, and its definition far more elusive.
So the first step that an investor wishing to employ E-V theory must make is to quantify his or her beliefs regarding the expected returns and variance in returns of the securities under consideration for a certain time horizon (holding period) specified by the investor. These parameters can be arrived at empirically. That is, the investor can examine the past history of the securities under consideration and calculate the returns and their variances over the specified holding periods. Again, the term returns means not only the dividends in the underlying security, but any gains in the value of the security as well. This is then specified as a percentage. Variance is the statistical variance of the percentage returns. A user of this approach would often perform a linear regression on the past returns to determine the return (the expected return) in the next holding period. The variance portion of the input would then be determined by calculating the variance of each past data point from what would have been predicted for that past data point (and not from the regression line calculated to predict the next expected return). Rather than gathering these figures empirically, the investor can also simply estimate what he or she believes will be the future returns and variances in those returns. Perhaps the best way to arrive at these parameters is to use a combination of the two. The investor should gather the information empirically, then, if need be, interject his or her beliefs about the future of those expected returns and their variances. The next parameters the investor must gather in order to use this technique are the linear correlation coefficients of the returns. Again, these figures can be arrived at empirically, by estimation, or by a combination of the two. In determining the correlation coefficients, it is important to use data points of the same time frame as was used to determine the expected returns and variance in returns.
In other words, if you are using yearly data to determine the expected returns and variance in returns (on a yearly basis), then you should use yearly data in determining the correlation coefficients. If you are using daily data to determine the expected returns and variance in returns (on a daily basis), then you should use daily data in determining the correlation coefficients. It is also very important to realize that we are determining the correlation coefficients of returns (gains in the stock price plus dividends), not of the underlying price of the stocks in question.

Consider our example of four alternative investments: Toxico, Incubeast Corp., LA Garb, and a savings account. We designate these with the symbols T, I, L, and S respectively. Next we construct a grid of the linear correlation coefficients as follows:

      T      I      L
I    -.15
L     .05    .25
S     0      0      0

From the parameters the investor has input, we can calculate the covariance between any two securities as:

(6.01) COVa,b = Ra,b*Sa*Sb

where COVa,b = The covariance between the ath security and the bth one. Ra,b = The linear correlation coefficient between a and b. Sa = The standard deviation of the ath security. Sb = The standard deviation of the bth security.

The standard deviations, Sa and Sb, are obtained by taking the square root of the variances in expected returns for securities a and b. Returning to our example, we can determine the covariance between Toxico (T) and Incubeast (I) as:

COVT,I = -.15*.10^(1/2)*.25^(1/2) = -.15*.316227766*.5 = -.02371708245

Thus, given a covariance and the comprising standard deviations, we can calculate the linear correlation coefficient as:

(6.02) Ra,b = COVa,b/(Sa*Sb)

where COVa,b = The covariance between the ath security and the bth one. Ra,b = The linear correlation coefficient between a and b. Sa = The standard deviation of the ath security. Sb = The standard deviation of the bth security.
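Equations (6.01) and (6.02) can be sketched in code. This is a minimal illustration (Python and the function names are assumptions, not from the text); the figures are the Toxico (T) and Incubeast (I) values from the example.

```python
# Sketch of Equations (6.01) and (6.02): covariance from the linear
# correlation coefficient and the component standard deviations.
# Function names are illustrative only.

def covariance(r_ab, s_a, s_b):
    """(6.01) COVa,b = Ra,b * Sa * Sb"""
    return r_ab * s_a * s_b

def correlation(cov_ab, s_a, s_b):
    """(6.02) Ra,b = COVa,b / (Sa * Sb)"""
    return cov_ab / (s_a * s_b)

s_t = 0.10 ** 0.5    # Toxico: standard deviation = square root of variance .10
s_i = 0.25 ** 0.5    # Incubeast: square root of variance .25
cov_ti = covariance(-0.15, s_t, s_i)
print(round(cov_ti, 11))                        # -0.02371708245, as in the text
print(round(correlation(cov_ti, s_t, s_i), 2))  # -0.15, recovering Ra,b
```

Equation (6.02) simply inverts (6.01), so feeding the computed covariance back in recovers the original correlation coefficient.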
Notice that the covariance of a security to itself is the variance, since the linear correlation coefficient of a security to itself is 1:

(6.03) COVX,X = 1*SX*SX = 1*SX^2 = SX^2 = VX

where COVX,X = The covariance of a security to itself. SX = The standard deviation of a security. VX = The variance of a security.

We can now create a table of covariances for our example of four investment alternatives:

      T        I        L        S
T     .1      -.0237    .01      0
I    -.0237    .25      .079     0
L     .01      .079     .4       0
S     0        0        0        0

Again, estimating variance can be quite tricky. An easier way is to estimate the mean absolute deviation, then multiply this by 1.25 to arrive at the standard deviation. Now multiplying this standard deviation by itself, squaring it, gives the estimated variance.

We now have compiled the basic parametric information, and we can begin to state the basic problem formally. First, the sum of the weights of the securities comprising the portfolio must be equal to 1, since this is being done in a cash account and each security is paid for in full:

(6.04) ∑[i = 1,N]Xi = 1

where N = The number of securities comprising the portfolio. Xi = The percentage weighting of the ith security.

It is important to note that in Equation (6.04) each Xi must be nonnegative. That is, each Xi must be zero or positive. The next equation defining what we are trying to do regards the expected return of the entire portfolio. This is the E in E-V theory. Essentially what it says is that the expected return of the portfolio is the sum of the returns of its components times their respective weightings:

(6.05) ∑[i = 1,N]Ui*Xi = E

where E = The expected return of the portfolio. N = The number of securities comprising the portfolio. Xi = The percentage weighting of the ith security. Ui = The expected return of the ith security.

Finally, we come to the V portion of E-V theory, the variance in expected returns.
This is the sum of the variances contributed by each security in the portfolio plus the sum of all the possible covariances in the portfolio.

(6.06a) V = ∑[i = 1,N]∑[j = 1,N]Xi*Xj*COVi,j
(6.06b) V = ∑[i = 1,N]∑[j = 1,N]Xi*Xj*Ri,j*Si*Sj
(6.06c) V = (∑[i = 1,N]Xi^2*Si^2)+2*∑[i = 1,N]∑[j = i+1,N]Xi*Xj*COVi,j
(6.06d) V = (∑[i = 1,N]Xi^2*Si^2)+2*∑[i = 1,N]∑[j = i+1,N]Xi*Xj*Ri,j*Si*Sj

where V = The variance in the expected returns of the portfolio. N = The number of securities comprising the portfolio. Xi = The percentage weighting of the ith security. Si = The standard deviation of expected returns of the ith security. COVi,j = The covariance of expected returns between the ith security and the jth security. Ri,j = The linear correlation coefficient of expected returns between the ith security and the jth security.

All four forms of Equation (6.06) are equivalent. The final answer to Equation (6.06) is always expressed as a positive number. We can now consider that our goal is to find those values of Xi, which when summed equal 1, that result in the lowest value of V for a given value of E.

When confronted with a problem such as trying to maximize (or minimize) a function, H(X,Y), subject to another condition or constraint, such as G(X,Y), one approach is to use the method of Lagrange. To do this, we must form the Lagrangian function, F(X,Y,L):

(6.07) F(X,Y,L) = H(X,Y)+L*G(X,Y)

Note the form of Equation (6.07). It states that the new function we have created, F(X,Y,L), is equal to the Lagrangian multiplier, L (a slack variable whose value is as yet undetermined), multiplied by the constraint function G(X,Y). This result is added to the original function H(X,Y), whose extreme we seek to find. Now, the simultaneous solution to the three equations will yield those points (X1,Y1) of relative extreme:

FX(X,Y,L) = 0
FY(X,Y,L) = 0
FL(X,Y,L) = 0

For example, suppose we seek to maximize the product of two numbers, given that their sum is 20.
We will let the variables X and Y be the two numbers. Therefore, H(X,Y) = X*Y is the function to be maximized given the constraining function G(X,Y) = X+Y-20 = 0. We must form the Lagrangian function:

F(X,Y,L) = X*Y+L*(X+Y-20)
FX(X,Y,L) = Y+L
FY(X,Y,L) = X+L
FL(X,Y,L) = X+Y-20

Now we set FX(X,Y,L) and FY(X,Y,L) both equal to zero and solve each for L:

Y+L = 0
Y = -L

and

X+L = 0
X = -L

Now setting FL(X,Y,L) = 0 we obtain X+Y-20 = 0. Lastly, we replace X and Y by their equivalent expressions in terms of L:

(-L)+(-L)-20 = 0
-2*L = 20
L = -10

Since Y equals -L, we can state that Y equals 10, and likewise with X. The maximum product is 10*10 = 100.

The method of Lagrangian multipliers has been demonstrated here for two variables and one constraint function. The method can also be applied when there are more than two variables and more than one constraint function. For instance, the following is the form for finding the extreme when there are three variables and two constraint functions:

(6.08) F(X,Y,Z,L1,L2) = H(X,Y,Z)+L1*G1(X,Y,Z)+L2*G2(X,Y,Z)

In this case, you would have to find the simultaneous solution for five equations in five unknowns in order to solve for the points of relative extreme. We will cover how to do that a little later on. We can restate the problem here as one where we must minimize V, the variance of the entire portfolio, subject to the two constraints that:

(6.09) (∑[i = 1,N]Xi*Ui)-E = 0

and

(6.10) (∑[i = 1,N]Xi)-1 = 0

where N = The number of securities comprising the portfolio. E = The expected return of the portfolio. Xi = The percentage weighting of the ith security. Ui = The expected return of the ith security.

The minimization of a restricted multivariable function can be handled by introducing these Lagrangian multipliers and differentiating partially with respect to each variable. Therefore, we express our problem in terms of a Lagrangian function, which we call T.
Let:

(6.11) T = V+L1*((∑[i = 1,N]Xi*Ui)-E)+L2*((∑[i = 1,N]Xi)-1)

where V = The variance in the expected returns of the portfolio, from Equation (6.06). N = The number of securities comprising the portfolio. E = The expected return of the portfolio. Xi = The percentage weighting of the ith security. Ui = The expected return of the ith security. L1 = The first Lagrangian multiplier. L2 = The second Lagrangian multiplier.

The minimum variance (risk) portfolio is found by setting the first-order partial derivatives of T with respect to all variables equal to zero. Let us again assume that we are looking at four possible investment alternatives: Toxico, Incubeast Corp., LA Garb, and a savings account. If we take the first-order partial derivative of T with respect to X1 we obtain:

(6.12) δT/δX1 = 2*X1*COV1,1+2*X2*COV1,2+2*X3*COV1,3+2*X4*COV1,4+L1*U1+L2

Setting this equation equal to zero and dividing both sides by 2 yields:

X1*COV1,1+X2*COV1,2+X3*COV1,3+X4*COV1,4+.5*L1*U1+.5*L2 = 0

Likewise:

δT/δX2 = X1*COV2,1+X2*COV2,2+X3*COV2,3+X4*COV2,4+.5*L1*U2+.5*L2 = 0
δT/δX3 = X1*COV3,1+X2*COV3,2+X3*COV3,3+X4*COV3,4+.5*L1*U3+.5*L2 = 0
δT/δX4 = X1*COV4,1+X2*COV4,2+X3*COV4,3+X4*COV4,4+.5*L1*U4+.5*L2 = 0

And we already have δT/δL1 as Equation (6.09) and δT/δL2 as Equation (6.10). Thus, the problem of minimizing V for a given E can be expressed in the N-component case as N+2 equations involving N+2 unknowns. For the four-component case, the generalized form is:

X1*U1     +X2*U2     +X3*U3     +X4*U4                           = E
X1        +X2        +X3        +X4                              = 1
X1*COV1,1 +X2*COV1,2 +X3*COV1,3 +X4*COV1,4 +.5*L1*U1 +.5*L2     = 0
X1*COV2,1 +X2*COV2,2 +X3*COV2,3 +X4*COV2,4 +.5*L1*U2 +.5*L2     = 0
X1*COV3,1 +X2*COV3,2 +X3*COV3,3 +X4*COV3,4 +.5*L1*U3 +.5*L2     = 0
X1*COV4,1 +X2*COV4,2 +X3*COV4,3 +X4*COV4,4 +.5*L1*U4 +.5*L2     = 0

where E = The expected return of the portfolio. Xi = The percentage weighting of the ith security. Ui = The expected return of the ith security. COVA,B = The covariance between securities A and B.
L1 = The first Lagrangian multiplier. L2 = The second Lagrangian multiplier.

This is the generalized form, and you use this basic form for any number of components. For example, if we were working with the case of three components (i.e., N = 3), the generalized form would be:

X1*U1     +X2*U2     +X3*U3                           = E
X1        +X2        +X3                              = 1
X1*COV1,1 +X2*COV1,2 +X3*COV1,3 +.5*L1*U1 +.5*L2     = 0
X1*COV2,1 +X2*COV2,2 +X3*COV2,3 +.5*L1*U2 +.5*L2     = 0
X1*COV3,1 +X2*COV3,2 +X3*COV3,3 +.5*L1*U3 +.5*L2     = 0

You need to decide on a level of expected return (E) to solve for, and your solution will be that combination of weightings which yields that E with the least variance. Once you have decided on E, you now have all of the input variables needed to construct the coefficients matrix. The E on the right-hand side of the first equation is the E you have decided you want to solve for (i.e., it is a given by you). The first line simply states that the sum of all of the expected returns times their weightings must equal the given E. The second line simply states that the sum of the weights must equal 1. Shown here is the matrix for a three-security case, but you can use the general form when solving for N securities. However, these first two lines are always the same.

The next N lines then follow the prescribed form. Now, using our expected returns and covariances (from the covariance table we constructed earlier), we plug the coefficients into the generalized form. We thus create a matrix that represents the coefficients of the generalized form. In our four-component case (N = 4), we thus have 6 rows (N+2):

X1       X2       X3       X4                        Answer
.095     .13      .21      .085                      = E
1        1        1        1                         = 1
.1       -.0237   .01      0        .095     1       = 0
-.0237   .25      .079     0        .13      1       = 0
.01      .079     .4       0        .21      1       = 0
0        0        0        0        .085     1       = 0

Note that the expected returns are not expressed in the matrix as HPRs; rather, they are expressed in their "raw" decimal state. Notice that we also have 6 columns of coefficients.
Adding the answer portion of each equation onto the right, and separating it from the coefficients with a line, creates what is known as an augmented matrix, which is constructed by fusing the coefficients matrix and the answer column, which is also known as the right-hand side vector. Notice that the coefficients in the matrix correspond to our generalized form of the problem:

X1*U1     +X2*U2     +X3*U3     +X4*U4                           = E
X1        +X2        +X3        +X4                              = 1
X1*COV1,1 +X2*COV1,2 +X3*COV1,3 +X4*COV1,4 +.5*L1*U1 +.5*L2     = 0
X1*COV2,1 +X2*COV2,2 +X3*COV2,3 +X4*COV2,4 +.5*L1*U2 +.5*L2     = 0
X1*COV3,1 +X2*COV3,2 +X3*COV3,3 +X4*COV3,4 +.5*L1*U3 +.5*L2     = 0
X1*COV4,1 +X2*COV4,2 +X3*COV4,3 +X4*COV4,4 +.5*L1*U4 +.5*L2     = 0

The matrix is simply a representation of these equations. To solve for the matrix, you must decide upon a level for E that you want to solve for. Once the matrix is solved, the resultant answers will be the optimal weightings required to minimize the variance in the portfolio as a whole for our specified level of E.

Suppose we wish to solve for E = .14, which represents an expected return of 14%. Plugging .14 into the matrix for E and putting in zeros for the variables L1 and L2 in the first two rows to complete the matrix gives us a matrix of:

X1       X2       X3       X4       L1       L2      Answer
.095     .13      .21      .085     0        0       = .14
1        1        1        1        0        0       = 1
.1       -.0237   .01      0        .095     1       = 0
-.0237   .25      .079     0        .13      1       = 0
.01      .079     .4       0        .21      1       = 0
0        0        0        0        .085     1       = 0

By solving the matrix we will solve for the N+2 unknowns in the N+2 equations.

SOLUTIONS OF LINEAR SYSTEMS USING ROW-EQUIVALENT MATRICES

A polynomial is an algebraic expression that is the sum of one or more terms. A polynomial with only one term is called a monomial; with two terms a binomial; with three terms a trinomial. Polynomials with more than three terms are simply called polynomials. The expression 4*A^3+A^2+A+2 is a polynomial having four terms. The terms are separated by a plus (+) sign. Polynomials come in different degrees. The degree of a polynomial is the value of the highest degree of any of the terms.
The degree of a term is the sum of the exponents on the variables contained in the term. Our example is a third-degree polynomial since the term 4*A^3 is raised to the power of 3, and that is a higher power than any of the other terms in the polynomial are raised to. If this term read 4*A^3*B^2*C, we would have a sixth-degree polynomial since the sum of the exponents of the variables (3+2+1) equals 6. A first-degree polynomial is also called a linear equation, and it graphs as a straight line. A second-degree polynomial is called a quadratic, and it graphs as a parabola. Third-, fourth-, and fifth-degree polynomials are also called cubics, quartics, and quintics, respectively. Beyond that there aren't any special names for higher-degree polynomials. The graphs of polynomials greater than second degree are rather unpredictable. Polynomials can have any number of terms and can be of any degree. Fortunately, we will be working only with linear equations, first-degree polynomials, here. When we have more than one linear equation that must be solved simultaneously we can use what is called the method of row-equivalent matrices. This technique is also often referred to as the Gauss-Jordan procedure or the Gaussian elimination method. To perform the technique, we first create the augmented matrix of the problem by combining the coefficients matrix with the right-hand side vector as we have done. Next, we want to use what are called elementary transformations to obtain what is known as the identity matrix. An elementary transformation is a method of processing a matrix to obtain a different but equivalent matrix. Elementary transformations are accomplished by what are called row operations. (We will cover row operations in a moment.) An identity matrix is a square coefficients matrix where all of the elements are zeros except for a diagonal line of ones starting in the upper left corner.
For a six-by-six coefficients matrix such as we are using in our example, the identity matrix would appear as:

1 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 0 1

This type of matrix, where the number of rows is equal to the number of columns, is called a square matrix. Fortunately, due to the generalized form of our problem of minimizing V for a given E, we are always dealing with a square coefficients matrix. Once an identity matrix is obtained through row operations, it can be regarded as equivalent to the starting coefficients matrix. The answers then are read from the right-hand side vector. That is, in the first row of the identity matrix, the 1 corresponds to the variable X1, so the answer in the right-hand side vector for the first row is the answer for X1. Likewise, the second row of the right-hand side vector contains the answer for X2, since the 1 in the second row corresponds to X2. By using row operations we can make elementary transformations to our original matrix until we obtain the identity matrix. From the identity matrix, we can discern the answers, the weights X1, ..., XN, for the components in a portfolio. These weights will produce the portfolio with the minimum variance, V, for a given level of expected return, E.

Three types of row operations can be performed:

1. Any two rows may be interchanged.
2. Any row may be multiplied by any nonzero constant.
3. Any row may be multiplied by any nonzero constant and added to the corresponding entries of any other row.

Using these three operations, we seek to transform the coefficients matrix to an identity matrix, which we do in a very prescribed manner. The first step, of course, is to simply start out by creating the augmented matrix. Next, we perform the first elementary transformation by invoking row operations rule 2. Here we take the value in the first row, first column, which is .095, and we want to convert it to the number 1. To do so, we multiply each value in the first row by the constant 1/.095.
Since any number times 1 divided by that number yields 1, we have obtained a 1 in the first row, first column. We have also multiplied every entry in the first row by this constant, 1/.095, as specified by row operations rule 2. Thus, we have obtained elementary transformation number 1. Our next step is to invoke row operations rule 3 for all rows except the one we have just used rule 2 on. Here, for each row, we take the value of that row corresponding to the column we just invoked rule 2 on. In elementary transformation number 2, for row 2, we will use the value of 1, since that is the value of row 2, column 1, and we just performed rule 2 on column 1. We now make this value negative (or positive if it is already negative). Since our value is 1, we make it -1. We now multiply by the corresponding entry (i.e., same column) of the row we just performed rule 2 on. Since we just performed rule 2 on row 1, we will multiply this -1 by the value of row 1, column 1, which is 1, thus obtaining -1. Now we add this value back to the value of the cell we are working on, which is 1, and obtain 0. Now on row 2, column 2, we take the value of that row corresponding to the column we just invoked rule 2 on. Again we will use the value of 1, since that is the value of row 2, column 1, and we just performed rule 2 on column 1. We again make this value negative (or positive if it is already negative). Since our value is 1, we make it -1. Now multiply by the corresponding entry (i.e., same column) of the row we just performed rule 2 on. Since we just performed rule 2 on row 1, we will multiply this -1 by the value of row 1, column 2, which is 1.3684, thus obtaining -1.3684. Again, we add this value back to the value of the cell we are working on, row 2, column 2, which is 1, obtaining 1+(-1.3684) = -.3684. We proceed likewise for the value of every cell in row 2, including the value of the right-hand side vector of row 2. 
Then we do the same for all other rows until the column we are concerned with, column 1 here, is all zeros. Notice that we need not invoke row operations rule 3 for the last row, since that already has a value of zero for column 1. When we are finished, we will have obtained elementary transformation number 2. Now the first column is already that of the identity matrix. Now we proceed with this pattern, and in elementary transformation 3 we invoke row operations rule 2 to convert the value in the second row, second column to a 1. In elementary transformation number 4, we invoke row operations rule 3 to convert the remainder of the rows to zeros for the column corresponding to the column we just invoked row operations rule 2 on. We proceed likewise, converting the values along the diagonals to ones per row operations rule 2, then converting the remaining values in that column to zeros per row operations rule 3 until we have obtained the identity matrix on the left. The right-hand side vector will then be our solution set.

That is, these weights will produce the portfolio with a minimum V for a given E only to the extent that our inputs of E and V for each component and the linear correlation coefficient of every possible pair of components are accurate and variance in returns is finite.
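The whole procedure just described can be sketched in code. The following is a minimal illustration (Python and the function name gauss_jordan are assumptions, not from the text): row operations rule 2 scales each pivot row so the diagonal is 1, and rule 3 adds multiples of that row to zero out the rest of the column, exactly as in the walkthrough above. The input is the four-component augmented matrix for E = .14.

```python
# A minimal sketch of the row-operations procedure described above.
# The unknowns, in order, are X1..X4, .5*L1, .5*L2 (per the
# generalized form, the last two answers must be divided by .5 to
# obtain the Lagrangians).

def gauss_jordan(m):
    """Reduce augmented matrix m (a list of rows) to the identity;
    the right-hand side column then holds the solution set."""
    n = len(m)
    for col in range(n):
        pivot = m[col][col]
        m[col] = [v / pivot for v in m[col]]            # row operations rule 2
        for row in range(n):
            if row != col:
                factor = -m[row][col]                   # row operations rule 3
                m[row] = [a + factor * b for a, b in zip(m[row], m[col])]
    return [r[-1] for r in m]

# Four-component example for E = .14:
aug = [
    [.095,   .13,   .21,  .085, 0,    0, .14],   # sum of Ui*Xi = E
    [1,      1,     1,    1,    0,    0, 1],     # sum of the weights = 1
    [.1,    -.0237, .01,  0,    .095, 1, 0],
    [-.0237, .25,   .079, 0,    .13,  1, 0],
    [.01,    .079,  .4,   0,    .21,  1, 0],
    [0,      0,     0,    0,    .085, 1, 0],
]
sol = gauss_jordan(aug)
print([round(v, 4) for v in sol[:4]])                 # weights, approx. [.1239, .1279, .384, .3642]
print(round(sol[4] / .5, 3), round(sol[5] / .5, 3))   # L1 approx. -2.639, L2 approx. .224
```

Note that this sketch never uses row operations rule 1 (interchanging rows); that suffices here only because no zero pivot arises along the diagonal in this example.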
Starting Augmented Matrix

X1       X2       X3       X4       L1       L2    |  Answer
.095     .13      .21      .085     0        0     |  .14
1        1        1        1        0        0     |  1
.1       -.0237   .01      0        .095     1     |  0
-.0237   .25      .079     0        .13      1     |  0
.01      .079     .4       0        .21      1     |  0
0        0        0        0        .085     1     |  0

Elementary Transformation Number 1

1        1.3684   2.2105   .8947    0        0     |  1.47368     row1*(1/.095)
1        1        1        1        0        0     |  1
.1       -.0237   .01      0        .095     1     |  0
-.0237   .25      .079     0        .13      1     |  0
.01      .079     .4       0        .21      1     |  0
0        0        0        0        .085     1     |  0

Elementary Transformation Number 2

1        1.3684   2.2105   .8947    0        0     |  1.47368
0        -.3684   -1.2105  .1052    0        0     |  -.4736      row2+(-1*row1)
0        -.1605   -.2110   -.0894   .095     1     |  -.1473      row3+(-.1*row1)
0        .2824    .1313    .0212    .13      1     |  .0349       row4+(.0237*row1)
0        .0653    .3778    -.0089   .21      1     |  -.0147      row5+(-.01*row1)
0        0        0        0        .085     1     |  0

The remaining elementary transformations, numbers 3 through 12, proceed in the same prescribed manner, converting each value along the diagonal to a one per row operations rule 2, then converting the remaining values in that column to zeros per rule 3:

Elementary Transformation Number 3: row2*(1/-.36842)
Elementary Transformation Number 4: row1+(-1.36842*row2), row3+(.16054*row2), row4+(.282431*row2), row5+(-.065315*row2)
Elementary Transformation Number 5: row3*(1/.31643)
Elementary Transformation Number 6: row1+(2.2857*row3), row2+(-3.28571*row3), row4+(.7966*row3), row5+(-.16328*row3)
Elementary Transformation Number 7: row4*(1/-.238)
Elementary Transformation Number 8: row1+(-.30806*row4), row2+(1.119669*row4), row3+(.42772*row4), row5+(-.079551*row4)
Elementary Transformation Number 9: row5*(1/.2839)
Elementary Transformation Number 10: row1+(-1.16248*row5), row2+(-.74455*row5), row3+(.3610*row5), row4+(1.5458*row5), row6+(-.085*row5)
Elementary Transformation Number 11: row6*(1/.50434)
Elementary Transformation Number 12: row1+(-4.98265*row6), row2+(-1.76821*row6), row3+(1.0352*row6), row4+(5.7158*row6), row5+(-5.83123*row6)

Matrix Obtained

1 0 0 0 0 0  |  0.12391                           = X1
0 1 0 0 0 0  |  0.12787                           = X2
0 0 1 0 0 0  |  0.38407                           = X3
0 0 0 1 0 0  |  0.36424                           = X4
0 0 0 0 1 0  |  -1.3197   -1.3197/.5 = -2.6394    = L1
0 0 0 0 0 1  |  0.11217   0.11217/.5 = .22434     = L2

INTERPRETING THE RESULTS

Once we have obtained the identity matrix, we can interpret its meaning. Here, given the inputs of expected returns and expected variance in returns for all of the components under consideration, and given the linear correlation coefficients of each possible pair of components, for an expected yield of 14% this solution set is optimal. Optimal, as used here, means that this solution set will yield the lowest variance for a 14% yield. In a moment, we will determine the variance, but first we must interpret the results. The first four values, the values for X1 through X4, tell us the weights (the percentages of investable funds) that should be allocated to these investments to achieve this optimal portfolio with a 14% expected return.
Hence, we should invest 12.391% in Toxico, 12.787% in Incubeast, 38.407% in LA Garb, and 36.424% in the savings account. If we are looking at investing $50,000 per this portfolio mix:

Stock              Percentage    (*50,000 = ) Dollars to Invest
Toxico             .12391        $6,195.50
Incubeast          .12787        $6,393.50
LA Garb            .38407        $19,203.50
Savings account    .36424        $18,212.00

Thus, for Incubeast, we would invest $6,393.50. Now assume that Incubeast sells for $20 a share. We would optimally buy 319.675 shares (6393.5/20). However, in the real world we cannot run out and buy fractional shares, so we would say that optimally we would buy either 319 or 320 shares. Now, the odd lot, the 19 or 20 shares remaining after we purchased the first 300, we would have to pay up for. Odd lots are usually marked up a small fraction of a point, so we would have to pay extra for those 19 or 20 shares, which in turn would affect the expected return on our Incubeast holdings, which in turn would affect the optimal portfolio mix. We are often better off to just buy the round lot, in this case, 300 shares. As you can see, more slop creeps into the mechanics of this. Whereas we can identify what the optimal portfolio is down to the fraction of a share, the real-life implementation requires again that we allow for slop.

Furthermore, the larger the equity you are employing, the more closely the real-life implementation of the approach will resemble the theoretical optimal. Suppose, rather than looking at $50,000 to invest, you were running a fund of $5 million. You would be looking to invest 12.787% in Incubeast (if we were only considering these four investment alternatives), and would therefore be investing 5,000,000*.12787 = $639,350. Therefore, at $20 a share, you would buy 639,350/20 = 31,967.5 shares. Again, if you restricted it down to the round lot, you would buy 31,900 shares, deviating from the optimal number of shares by about 0.2%.
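The share-count arithmetic above can be sketched as follows. This is an illustration only: the 100-share round-lot size and the function name are assumptions, not from the text.

```python
# Sketch of the round-lot arithmetic above (assumed 100-share lots;
# names are illustrative, not from the text).

def shares_for_weight(equity, weight, price, lot=100):
    """Fractional optimal share count and the round lot at or below it."""
    optimal = equity * weight / price
    round_lot = int(optimal // lot) * lot
    return optimal, round_lot

# $50,000 account, 12.787% in Incubeast at $20 a share:
opt, lot = shares_for_weight(50_000, 0.12787, 20)
print(round(opt, 3), lot)          # 319.675 optimal, 300 as a round lot

# $5,000,000 fund: rounding to the lot costs only about 0.2%.
opt, lot = shares_for_weight(5_000_000, 0.12787, 20)
print(round(1 - lot / opt, 4))     # approx. 0.0021
```

As the text notes, the same round-lot constraint that forces a 6.5% deviation on a $50,000 account shrinks to roughly 0.2% on a $5 million fund.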
Contrast this to the case where you have $50,000 to invest and buy 300 shares versus the optimal of 319.675. There you are deviating from the optimal by about 6.5%.

The Lagrangian multipliers have an interesting interpretation. To begin with, the Lagrangians we are using here must be divided by .5 after the identity matrix is obtained before we can interpret them. This is in accordance with the generalized form of our problem. The L1 variable equals -δV/δE. This means that L1 represents the marginal variance in expected returns. In the case of our example, where L1 = -2.6394, we can state that V is changing at a rate of -L1, or -(-2.6394), or 2.6394 units for every unit in E instantaneously at E = .14.

To interpret the L2 variable requires that the problem first be restated. Rather than having ∑Xi = 1, we will state that ∑Xi = M, where M equals the dollar amount of funds to be invested. Then L2 = δV/δM. In other words, L2 represents the marginal risk of increased or decreased investment.

Returning now to what the variance of the entire portfolio is, we can use Equation (6.06) to discern the variance. Although we could use any variation of Equation (6.06a) through (6.06d), here we will use variation a:

(6.06a) V = ∑[i = 1,N]∑[j = 1,N]Xi*Xj*COVi,j

Plugging in the values and performing Equation (6.06a) gives:

Xi          Xj          COVi,j
0.12391  *  0.12391  *   0.1      =  0.0015353688
0.12391  *  0.12787  *  -0.0237   = -0.0003755116
0.12391  *  0.38407  *   0.01     =  0.0004759011
0.12391  *  0.36424  *   0        =  0
0.12787  *  0.12391  *  -0.0237   = -0.0003755116
0.12787  *  0.12787  *   0.25     =  0.0040876842
0.12787  *  0.38407  *   0.079    =  0.0038797714
0.12787  *  0.36424  *   0        =  0
0.38407  *  0.12391  *   0.01     =  0.0004759011
0.38407  *  0.12787  *   0.079    =  0.0038797714
0.38407  *  0.38407  *   0.4      =  0.059003906
0.38407  *  0.36424  *   0        =  0
0.36424  *  0.12391  *   0        =  0
0.36424  *  0.12787  *   0        =  0
0.36424  *  0.38407  *   0        =  0
0.36424  *  0.36424  *   0        =  0
                                     ------------
                                     .0725872809

Thus, we see that at the value of E = .14, the lowest value for V is obtained at V = .0725872809.
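The tabulation above can be checked in a few lines. This is a sketch (Python assumed); the weights and covariance table are the figures from the text.

```python
# A quick check of Equation (6.06a) with the solved weights and the
# covariance table from the text.

weights = [0.12391, 0.12787, 0.38407, 0.36424]   # X1..X4 for E = .14
cov = [
    [0.1,     -0.0237, 0.01,  0],
    [-0.0237,  0.25,   0.079, 0],
    [0.01,     0.079,  0.4,   0],
    [0,        0,      0,     0],
]

# (6.06a) V = sum over all i, j of Xi * Xj * COVi,j
V = sum(weights[i] * weights[j] * cov[i][j]
        for i in range(4) for j in range(4))
print(round(V, 7))   # approx. .0725873, matching the text's .0725872809
```

Because the double sum runs over all pairs (i, j), each off-diagonal covariance is counted twice, which is exactly why forms (6.06c) and (6.06d) carry the factor of 2.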
Now suppose we decided to input a value of E = .18. Again, we begin with the augmented matrix, which is exactly the same as in the last example of E = .14, only the upper rightmost cell, that is, the first cell in the right-hand side vector, is changed to reflect this new E of .18:

X1       X2       X3       X4       L1       L2    |  Answer
.095     .13      .21      .085     0        0     |  .18
1        1        1        1        0        0     |  1
.1       -.0237   .01      0        .095     1     |  0
-.0237   .25      .079     0        .13      1     |  0
.01      .079     .4       0        .21      1     |  0
0        0        0        0        .085     1     |  0

Through the use of row operations... the identity matrix is obtained:

1 0 0 0 0 0  |  0.21401                           = X1
0 1 0 0 0 0  |  0.22106                           = X2
0 0 1 0 0 0  |  0.66334                           = X3
0 0 0 1 0 0  |  -.0981                            = X4
0 0 0 0 1 0  |  -2.2795   -2.2795/.5 = -4.559     = L1
0 0 0 0 0 1  |  .19373    .19373/.5 = .3875       = L2

We then go about solving the matrix exactly as before, only this time we get a negative answer in the fourth cell down of the right-hand side vector. This means we should allocate a negative proportion, a disinvestment of 9.81%, to the savings account. To account for this, whenever we get a negative answer for any of the Xi's, which means if any of the first N rows of the right-hand side vector is less than or equal to zero, we must pull that variable's row (its row number plus 2) and that variable's column out of the starting augmented matrix, and solve for the new augmented matrix. If either of the last 2 rows of the right-hand side vector is less than or equal to zero, we don't need to do this. These last 2 entries in the right-hand side vector always pertain to the Lagrangians, no matter how many or how few components there are in total in the matrix. The Lagrangians are allowed to be negative.

Since the variable returning with the negative answer corresponds to the weighting of the fourth component, we pull out the fourth column and the sixth row from the starting augmented matrix. We then use row operations to perform elementary transformations until, again, the identity matrix is obtained:

X1       X2       X3       L1       L2    |  Answer
.095     .13      .21      0        0     |  .18
1        1        1        0        0     |  1
.1       -.0237   .01      .095     1     |  0
-.0237   .25      .079     .13      1     |  0
.01      .079     .4       .21      1     |  0

Through the use of row operations...
the identity matrix is obtained, with the right-hand side vector:

0.1283688 = X1
0.1904699 = X2
0.6811613 = X3
-2.38/.5 = -4.76 = L1
0.210944/.5 = .4219 = L2

When you must pull out a row and column like this, it is important that you remember which rows correspond to which variables, especially when you have more than one row and column to pull. Again, using an example to illustrate, suppose we want to solve for E = .1965. The first identity matrix we arrive at will show negative values for the weighting of Toxico, X1, and the savings account, X4. Therefore, we return to our starting augmented matrix:

Starting Augmented Matrix
  X1      X2      X3      X4      L1      L2   |  Answer  |  Pertains to
  .095    .13     .21     .085    0       0    |  .1965   |
  1       1       1       1       0       0    |  1       |
  .1      -.023   .01     0       .095    1    |  0       |  Toxico
  -.023   .25     .079    0       .13     1    |  0       |  Incubeast
  .01     .079    .4      0       .21     1    |  0       |  LA Garb
  0       0       0       0       .085    1    |  0       |  Savings

(The columns X1 through X4 likewise pertain to Toxico, Incubeast, LA Garb, and the savings account, and the last two answers pertain to L1 and L2.)

Now we pull out row 3 and column 1, the ones that pertain to Toxico, and also pull row 6 and column 4, the ones that pertain to the savings account. So we will be working with the following matrix:

Starting Augmented Matrix
  X2      X3      L1      L2   |  Answer  |  Pertains to
  .13     .21     0       0    |  .1965   |
  1       1       0       0    |  1       |
  .25     .079    .13     1    |  0       |  Incubeast
  .079    .4      .21     1    |  0       |  LA Garb

Through the use of row operations the identity matrix is obtained, with the right-hand side vector:

.169 = X2 (Incubeast)
.831 = X3 (LA Garb)
-2.97/.5 = -5.94 = L1
.2779695/.5 = .555939 = L2

Another method we can use to solve the matrix is to use the inverse of the coefficients matrix. An inverse matrix is a matrix that, when multiplied by the original matrix, yields the identity matrix. This technique will be explained without discussing the details of matrix multiplication. In matrix algebra, a matrix is often denoted with a boldface capital letter. For example, we can denote our coefficients matrix as C. The inverse to a matrix is denoted by superscripting -1 to it. The inverse matrix to C then is C^-1.
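The pull-out-and-resolve procedure lends itself to a small loop: solve the starting augmented matrix, and if any weight comes back negative, delete that component's column and covariance row and solve again. Here is a sketch in Python with NumPy (an illustration, not the book's own code; it uses the fuller covariance figure -.0237 from the variance table earlier):

```python
import numpy as np

u = np.array([0.095, 0.13, 0.21, 0.085])        # expected returns (HPR - 1)
cov = np.array([[ 0.1,    -0.0237, 0.01,  0.0],
                [-0.0237,  0.25,   0.079, 0.0],
                [ 0.01,    0.079,  0.4,   0.0],
                [ 0.0,     0.0,    0.0,   0.0]])
E = 0.18                                        # target expected return

active = list(range(len(u)))                    # components still in the matrix
while True:
    n = len(active)
    uu = u[active]
    cc = cov[np.ix_(active, active)]
    A = np.zeros((n + 2, n + 2))
    A[0, :n] = uu                               # expected-return constraint row
    A[1, :n] = 1.0                              # sum-of-the-weights constraint row
    A[2:, :n] = cc                              # covariance rows
    A[2:, n] = uu                               # L1 column
    A[2:, n + 1] = 1.0                          # L2 column
    b = np.zeros(n + 2)
    b[0], b[1] = E, 1.0
    sol = np.linalg.solve(A, b)
    w = sol[:n]
    if w.min() >= 0:                            # Lagrangians may be negative; weights may not
        break
    active = [i for i, wi in zip(active, w) if wi >= 0]
```

For E = .18 this drops the savings account on the first pass and re-solves, reproducing the hand row operations above.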
To use this method, we need to first discern the inverse matrix to our coefficients matrix. To do this, rather than start by augmenting the right-hand side vector onto the coefficients matrix, we augment the identity matrix itself onto the coefficients matrix. For our 4-stock example:

Starting Augmented Matrix
  X1      X2      X3      X4      L1      L2   |  Identity Matrix
  .095    .13     .21     .085    0       0    |  1 0 0 0 0 0
  1       1       1       1       0       0    |  0 1 0 0 0 0
  .1      -.023   .01     0       .095    1    |  0 0 1 0 0 0
  -.023   .25     .079    0       .13     1    |  0 0 0 1 0 0
  .01     .079    .4      0       .21     1    |  0 0 0 0 1 0
  0       0       0       0       .085    1    |  0 0 0 0 0 1

Now we proceed using row operations to transform the coefficients matrix into an identity matrix. In the process, since every row operation performed on the left is also performed on the right, we will have transformed the identity matrix on the right-hand side into the inverse matrix C^-1 of the coefficients matrix C. In our example, the result of the row operations yields:

C^-1
   2.2527   -0.1915   10.1049    0.9127   -1.1370   -9.8806
   2.3248   -0.1976    0.9127    4.1654   -1.5726   -3.5056
   6.9829   -0.5935   -1.1370   -1.5726    0.6571    2.0524
 -11.5603    1.9826   -9.8806   -3.5056    2.0524   11.3337
 -23.9957    2.0396    2.2526    2.3248    6.9829  -11.5603
   2.0396   -0.1734   -0.1915   -0.1976   -0.5935    1.9826

Now we can take the inverse matrix, C^-1, and multiply it by our original right-hand side vector. Recall that our right-hand side vector is:

E
S
0
0
0
0

Whenever we multiply a matrix by a columnar vector (such as this) we multiply all elements in the first column of the matrix by the first element in the vector, all elements in the second column of the matrix by the second element in the vector, and so on. If our vector were a row vector, we would multiply all elements in the first row of the matrix by the first element in the vector, all elements in the second row of the matrix by the second element in the vector, and so on.
Since our vector is columnar, and since the last four elements are zeros, we need only multiply the first column of the inverse matrix by E (the expected return for the portfolio) and the second column of the inverse matrix by S (the sum of the weights). This yields the following set of equations, into which we can plug values for E and S to obtain the optimal weightings:

E*2.2527+S*-0.1915 = Optimal weight for first stock
E*2.3248+S*-0.1976 = Optimal weight for second stock
E*6.9829+S*-0.5935 = Optimal weight for third stock
E*-11.5603+S*1.9826 = Optimal weight for fourth stock
E*-23.9957+S*2.0396 = .5 of first Lagrangian
E*2.0396+S*-0.1734 = .5 of second Lagrangian

Thus, to solve for an expected return of 14% (E = .14) with the sum of the weights equal to 1:

.14*2.2527+1*-0.1915 = .315378-.1915 = .1239 Toxico
.14*2.3248+1*-0.1976 = .325472-.1976 = .1279 Incubeast
.14*6.9829+1*-0.5935 = .977606-.5935 = .3841 LA Garb
.14*-11.5603+1*1.9826 = -1.618442+1.9826 = .3641 Savings
.14*-23.9957+1*2.0396 = -3.359398+2.0396 = -1.319798; *2 = -2.6396 L1
.14*2.0396+1*-0.1734 = .285544-.1734 = .112144; *2 = .2243 L2

Once you have obtained the inverse to the coefficients matrix, you can quickly solve for any value of E, provided that your answers, the optimal weights, are all positive. If not, again you must create the coefficients matrix without that item, and obtain a new inverse matrix.

Thus far we have looked at investing in stocks from the long side only. How can we consider short sale candidates in our analysis? To begin with, you would be looking to sell short a stock if you expected it would decline. Recall that the term "returns" means not only the dividends in the underlying security, but any gains in the value of the security as well. This figure is then specified as a percentage.
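In code, the inverse-matrix method is one inversion followed by a matrix-vector product, so any number of E values can be priced cheaply once C^-1 is in hand. A sketch in Python with NumPy (an illustration under the same coefficients used in this chapter):

```python
import numpy as np

# Coefficients matrix C for the 4-component problem
C = np.array([
    [ 0.095,   0.13,   0.21,  0.085, 0.0,   0.0],
    [ 1.0,     1.0,    1.0,   1.0,   0.0,   0.0],
    [ 0.1,    -0.0237, 0.01,  0.0,   0.095, 1.0],
    [-0.0237,  0.25,   0.079, 0.0,   0.13,  1.0],
    [ 0.01,    0.079,  0.4,   0.0,   0.21,  1.0],
    [ 0.0,     0.0,    0.0,   0.0,   0.085, 1.0],
])

Cinv = np.linalg.inv(C)                  # the inverse matrix C^-1

E, S = 0.14, 1.0                         # expected return, sum of the weights
rhs = np.array([E, S, 0.0, 0.0, 0.0, 0.0])
sol = Cinv @ rhs
w = sol[:4]                              # optimal weights
L1, L2 = sol[4] / 0.5, sol[5] / 0.5      # divide by .5 to interpret the Lagrangians
```

For any new E you only recompute `Cinv @ rhs`; the inversion itself is done once.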
Thus, in determining the returns of a short position, you would have to estimate what percentage gain you would expect to make on the declining stock, and from that you would then need to subtract the dividend (however many dividends go ex-date over the holding period you are calculating your E and V on) as a percentage.4 Lastly, any linear correlation coefficients of which the stock you are looking to short is a member must be multiplied by -1. Therefore, since the linear correlation coefficient between Toxico and Incubeast is -.15, if you were looking to short Toxico, you would multiply this by -1. In such a case you would use -.15*-1 = .15 as the linear correlation coefficient. If you were looking to short both of these stocks, the linear correlation coefficient between the two would be -.15*-1*-1 = -.15. In other words, if you are looking to short both stocks, the linear correlation coefficient between them remains unchanged, as it would if you were looking to go long both stocks.

4 In this chapter we are assuming that all transactions are performed in a cash account. So, though a short position is required to be performed in a margin account as opposed to a cash account, we will not calculate interest on the margin.

Thus far we have sought to obtain the optimal portfolio, and its variance V, when we know the expected return, E, that we seek. We can also solve for E when we know V. The simplest way to do this is by iteration using the techniques discussed thus far in this chapter.

There is much more to matrix algebra than is presented in this chapter. There are other matrix algebra techniques to solve systems of linear equations. Often you will encounter reference to techniques such as Cramer's Rule, the Simplex Method, or the Simplex Tableau. These are techniques similar to the ones described in this chapter, although more involved.
There are a multitude of applications in business and science for matrix algebra, and the topic is considerably involved. We have only scratched the surface, just enough for what we need to accomplish. For a more detailed discussion of matrix algebra and its applications in business and science, the reader is referred to Sets, Matrices, and Linear Programming, by Robert L. Childress.

The next chapter covers utilizing the techniques detailed in this chapter for any tradeable instrument, not just stocks, while incorporating optimal f as well as a mechanical system.

Chapter 7 - The Geometry of Portfolios

We have now covered how to find the optimal fs for a given market system from a number of different standpoints. Also, we have seen how to derive the efficient frontier. In this chapter we show how to combine the two notions of optimal f and the efficient frontier to obtain a truly efficient portfolio for which geometric growth is maximized. Furthermore, we will delve into an analytical study of the geometry of portfolio construction.

THE CAPITAL MARKET LINES (CMLS)

In the last chapter we saw how to determine the efficient frontier parametrically. We can improve upon the performance of any given portfolio by combining a certain percentage of the portfolio with cash. Figure 7-1 shows this relationship graphically.

Figure 7-1 Enhancing returns with the risk-free asset. (The chart plots return against standard deviation: the CML line runs from the risk-free asset at point A, at zero standard deviation, up through its tangent point B on the efficient frontier.)

In Figure 7-1, point A represents the return on the risk-free asset. This would usually be the return on 91-day Treasury Bills. Since the risk, the standard deviation in returns, is regarded as nonexistent, point A is at zero on the horizontal axis. Point B represents the tangent portfolio. It is the only portfolio lying upon the efficient frontier that would be touched by a line drawn from the risk-free rate of return on the vertical axis and zero on the horizontal axis.
Any point along line segment AB will be composed of the portfolio at point B and the risk-free asset. At point B, all of the assets would be in the portfolio, and at point A all of the assets would be in the risk-free asset. Anywhere in between points A and B represents having a portion of the assets in both the portfolio and the risk-free asset. Notice that any portfolio along line segment AB dominates any portfolio on the efficient frontier at the same risk level, since being on the line segment AB has a higher return for the same risk. Thus, an investor who wanted a portfolio less risky than portfolio B would be better off to put a portion of his or her investable funds in portfolio B and a portion in the risk-free asset, as opposed to owning 100% of a portfolio on the efficient frontier at a point less risky than portfolio B.

The line emanating from point A (the risk-free rate on the vertical axis and zero on the horizontal axis) to the right, tangent to one point on the efficient frontier, is called the capital market line (CML). To the right of point B, the CML line represents portfolios where the investor has gone out and borrowed more money to invest further in portfolio B. Notice that an investor who wanted a portfolio with a greater return than portfolio B would be better off to do this, as being on the CML line right of point B dominates (has higher return than) those portfolios on the efficient frontier with the same level of risk.

Usually, point B will be a very well-diversified portfolio. Most portfolios high up and to the right and low down and to the left on the efficient frontier have very few components. Those in the middle of the efficient frontier, where the tangent point to the risk-free rate is, usually are very well diversified.

It has traditionally been assumed that all rational investors will want to get the greatest return for a given risk and take on the lowest risk for a given return.
Thus, all investors would want to be somewhere on the CML line. In other words, all investors would want to own the same portfolio, only with differing degrees of leverage. This distinction between the investment decision and the financing decision is known as the separation theorem.1

We assume now that the vertical scale, the E in E-V theory, represents the arithmetic average HPR (AHPR) for the portfolios and the horizontal, or V, scale represents the standard deviation in the HPRs. For a given risk-free rate, we can determine where this tangent point portfolio on our efficient frontier is: it is the portfolio whose coordinates (AHPR, SD) maximize the following function:

(7.01a) Tangent Portfolio = MAX{(AHPR-(1+RFR))/SD}

where
MAX{} = The maximum value.
AHPR = The arithmetic average HPR. This is the E coordinate of a given portfolio on the efficient frontier.
SD = The standard deviation in HPRs. This is the V coordinate of a given portfolio on the efficient frontier.
RFR = The risk-free rate.

In Equation (7.01a), the formula inside the braces ({ }) is known as the Sharpe ratio, a measurement of risk-adjusted returns. Expressed literally, the Sharpe ratio for a portfolio is a measure of the ratio of the expected excess returns to the standard deviation. The portfolio with the highest Sharpe ratio, therefore, is the portfolio where the CML line is tangent to the efficient frontier for a given RFR.

The Sharpe ratio, when multiplied by the square root of the number of periods over which it was derived, equals the t statistic. From the resulting t statistic it is possible to obtain a confidence level that the AHPR exceeds the RFR by more than chance alone, assuming finite variance in the returns.

The following table shows how to use Equation (7.01a) and demonstrates the entire process discussed thus far. The first two columns represent the coordinates of different portfolios on the efficient frontier.
The coordinates are given in (AHPR, SD) format, which corresponds to the Y and X axes of Figure 7-1. The third column is the answer obtained for Equation (7.01a) assuming a 1.5% risk-free rate (equating to an AHPR of 1.015; we assume that the HPRs here are quarterly HPRs, thus a 1.5% risk-free rate for the quarter equates to roughly a 6% risk-free rate for the year). Thus, to work out (7.01a) for the third set of coordinates, (1.002, .00013):

(AHPR-(1+RFR))/SD = (1.002-(1+.015))/.00013
= (1.002-1.015)/.00013
= -.013/.00013
= -100

The process is completed for each point along the efficient frontier. Equation (7.01a) peaks out at .502265, which is at the coordinates (1.03, .02986). These coordinates are the point where the CML line is tangent to the efficient frontier, corresponding to point B in Figure 7-1. This tangent point is a certain portfolio along the efficient frontier. The Sharpe ratio is the slope of the CML, with the steepest slope being the tangent line to the efficient frontier.

Efficient Frontier           CML Line (RFR = .015)
AHPR      SD        Eq. (7.01a)   Percentage   AHPR
1.00000   0.00000      0            0.00%      1.0150
1.00100   0.00003   -421.902        0.11%      1.0150
1.00200   0.00013   -100.000        0.44%      1.0151
1.00300   0.00030    -40.1812       1.00%      1.0152
1.00400   0.00053    -20.7184       1.78%      1.0153
1.00500   0.00083    -12.0543       2.78%      1.0154
1.00600   0.00119     -7.53397      4.00%      1.0156
1.00700   0.00163     -4.92014      5.45%      1.0158
1.00800   0.00212     -3.29611      7.11%      1.0161
1.00900   0.00269     -2.23228      9.00%      1.0164
1.01000   0.00332     -1.50679     11.11%      1.0167
1.01100   0.00402     -0.99622     13.45%      1.0170
1.01200   0.00476     -0.62783     16.00%      1.0174
1.01300   0.00561     -0.35663     18.78%      1.0178
1.01400   0.00650     -0.15375     21.78%      1.0183
1.01500   0.00747      0           25.00%      1.0188
1.01600   0.00849      0.117718    28.45%      1.0193
1.01700   0.00959      0.208552    32.12%      1.0198
1.01800   0.01075      0.279036    36.01%      1.0204
1.01900   0.01198      0.333916    40.12%      1.0210

1 See Tobin, James, "Liquidity Preference as Behavior Towards Risk," Review of Economic Studies 25, pp. 65-85, February 1958.
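The scan down the table can be expressed compactly in code. A sketch in Python (an illustration; only a handful of frontier points near the peak are included, and the .08296 projection uses the table's last row):

```python
# (AHPR, SD) points from the efficient-frontier table
frontier = [(1.028, 0.02602), (1.029, 0.02791), (1.030, 0.02986),
            (1.031, 0.03189), (1.032, 0.03398)]
RFR = 0.015

# Equation (7.01a): the tangent portfolio maximizes the Sharpe ratio
AT, ST = max(frontier, key=lambda p: (p[0] - (1 + RFR)) / p[1])
sharpe = (AT - (1 + RFR)) / ST          # ≈ .5023 at (1.03, .02986)

# Equation (7.02): percentage in the tangent portfolio at SD = .08296
P = 0.08296 / ST                        # ≈ 2.7782, i.e., 277.82%

# Equation (7.03): the AHPR of the CML line at that risk level
ACML = AT * P + (1 + RFR) * (1 - P)     # ≈ 1.0567
```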
Efficient Frontier           CML Line (RFR = .015)
AHPR      SD        Eq. (7.01a)       Percentage   AHPR
1.02000   0.01327    0.376698          44.45%      1.0217
1.02100   0.01463    0.410012          49.01%      1.0224
1.02200   0.01606    0.435850          53.79%      1.0231
1.02300   0.01755    0.455741          58.79%      1.0238
1.02400   0.01911    0.470073          64.01%      1.0246
1.02500   0.02074    0.482174          69.46%      1.0254
1.02600   0.02243    0.490377          75.12%      1.0263
1.02700   0.02419    0.496064          81.01%      1.0272
1.02800   0.02602    0.499702          87.12%      1.0281
1.02900   0.02791    0.501667          93.46%      1.0290
1.03000   0.02986    0.502265 (peak)  100.02%      1.0300
1.03100   0.03189    0.501742         106.79%      1.0310
1.03200   0.03398    0.500303         113.80%      1.0321
1.03300   0.03614    0.498114         121.02%      1.0332
1.03400   0.03836    0.495313         128.46%      1.0343
1.03500   0.04065    0.492014         136.13%      1.0354
1.03600   0.04301    0.488313         144.02%      1.0366
1.03700   0.04543    0.484287         152.13%      1.0378
1.03800   0.04792    0.480004         160.47%      1.0391
1.03900   0.05047    0.475517         169.03%      1.0404
1.04000   0.05309    0.470873         177.81%      1.0417
1.04100   0.05578    0.466111         186.81%      1.0430
1.04200   0.05853    0.461264         196.03%      1.0444
1.04300   0.06136    0.456357         205.48%      1.0458
1.04400   0.06424    0.451416         215.14%      1.0473
1.04500   0.06720    0.446458         225.04%      1.0488
1.04600   0.07022    0.441499         235.15%      1.0503
1.04700   0.07330    0.436554         245.48%      1.0518
1.04800   0.07645    0.431634         256.04%      1.0534
1.04900   0.07967    0.426747         266.82%      1.0550
1.05000   0.08296    0.421902         277.82%      1.0567

The next column over, "Percentage," represents what percentage of your assets must be invested in the tangent portfolio if you are on the CML line at that standard deviation coordinate. In other words, for the last entry in the table, being on the CML line at the .08296 standard deviation level corresponds to having 277.82% of your assets in the tangent portfolio (i.e., being fully invested and borrowing another $1.7782 for every dollar already invested to invest further). This percentage value is calculated from the standard deviation of the tangent portfolio as:

(7.02) P = SX/ST

where
SX = The standard deviation coordinate for a particular point on the CML line.
ST = The standard deviation coordinate of the tangent portfolio.
P = The percentage of your assets that must be invested in the tangent portfolio to be on the CML line for a given SX.

Thus, the CML line at the standard deviation coordinate .08296, the last entry in the table, is divided by the standard deviation coordinate of the tangent portfolio, .02986, yielding 2.7782, or 277.82%.

The last column in the table, the CML line AHPR, is the AHPR of the CML line at the given standard deviation coordinate. This is figured as:

(7.03) ACML = (AT*P)+((1+RFR)*(1-P))

where
ACML = The AHPR of the CML line at a given risk coordinate, or a corresponding percentage figured from (7.02).
AT = The AHPR at the tangent point, figured from (7.01a).
P = The percentage in the tangent portfolio, figured from (7.02).
RFR = The risk-free rate.

On occasion you may want to know the standard deviation of a certain point on the CML line for a given AHPR. This linear relationship can be obtained as:

(7.04) SD = P*ST

where
SD = The standard deviation at a given point on the CML line corresponding to a certain percentage, P, corresponding to a certain AHPR.
P = The percentage in the tangent portfolio, figured from (7.02).
ST = The standard deviation coordinate of the tangent portfolio.

THE GEOMETRIC EFFICIENT FRONTIER

The problem with Figure 7-1 is that it shows the arithmetic average HPR. When we are reinvesting profits back into the program we must look at the geometric average HPR for the vertical axis of the efficient frontier. This changes things considerably. The formula to convert a point on the efficient frontier from an arithmetic HPR to a geometric one is:

(7.05) GHPR = (AHPR^2-V)^(1/2)

where
GHPR = The geometric average HPR.
AHPR = The arithmetic average HPR.
V = The variance coordinate. (This is equal to the standard deviation coordinate squared.)
Figure 7-2 The efficient frontier with/without reinvestment. (The chart plots the AHPR efficient frontier and, beneath it, the GHPR efficient frontier.)

In Figure 7-2 you can see the efficient frontier corresponding to the arithmetic average HPRs as well as that corresponding to the geometric average HPRs. You can see what happens to the efficient frontier when reinvestment is involved. By graphing your GHPR line, you can see which portfolio is the geometric optimal (the highest point on the GHPR line). You could also determine this portfolio by converting the AHPRs and Vs of each portfolio along the AHPR efficient frontier into GHPRs per Equation (7.05) and seeing which had the highest GHPR. Again, that would be the geometric optimal. However, given the AHPRs and the Vs of the portfolios lying along the AHPR efficient frontier, we can readily discern which portfolio would be geometric optimal: the one that solves the following equality:

(7.06a) AHPR-1-V = 0

where
AHPR = The arithmetic average HPR. This is the E coordinate of a given portfolio on the efficient frontier.
V = The variance in HPRs. This is the V coordinate of a given portfolio on the efficient frontier. This is equal to the standard deviation squared.

Equation (7.06a) can also be written in any one of the following three forms:

(7.06b) AHPR-1 = V
(7.06c) AHPR-V = 1
(7.06d) AHPR = V+1

A brief note on the geometric optimal portfolio is in order here. Variance in a portfolio is generally directly and positively correlated to drawdown, in that higher variance is generally indicative of a portfolio with higher drawdown. Since the geometric optimal portfolio is that portfolio for which E and V are equal (with E = AHPR-1), we can assume that the geometric optimal portfolio will see high drawdowns. In fact, the greater the GHPR of the geometric optimal portfolio, that is, the more the portfolio makes, the greater will be its drawdown in terms of equity retracements, since the GHPR is directly positively correlated with the AHPR. Here again is a paradox.
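Equation (7.05) and condition (7.06a) are easy to check numerically. A sketch in Python (an illustration; the E = V = .1688965 test point is the geometric optimal portfolio found later in this chapter):

```python
def ghpr(ahpr, v):
    # Equation (7.05): geometric average HPR from the arithmetic HPR and variance
    return (ahpr ** 2 - v) ** 0.5

# At the geometric optimal portfolio, AHPR - 1 - V = 0 (Equation 7.06a)
E = V = 0.1688965
ahpr = 1 + E                 # so AHPR - 1 = V by construction
g = ghpr(ahpr, V)            # ≈ 1.094268, the geometric mean quoted for that portfolio
```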
We want to be at the geometric optimal portfolio. Yet the higher the geometric mean of a portfolio, the greater will be the drawdowns in terms of percentage equity retracements, generally. Hence, when we perform the exercise of diversification, we should view it as an exercise to obtain the highest geometric mean rather than the lowest drawdown, as the two tend to pull in opposite directions! The geometric optimal portfolio is one where a line drawn from (0,0), with slope 1, intersects the AHPR efficient frontier.

Figure 7-2 demonstrates the efficient frontiers on a one-trade basis. That is, it shows what you can expect on a one-trade basis. We can convert the geometric average HPR to a TWR by the equation:

(7.07) GTWR = GHPR^N

where
GTWR = The vertical axis corresponding to a given GHPR after N trades.
GHPR = The geometric average HPR.
N = The number of trades we desire to observe.

Thus, after 50 trades a GHPR of 1.0154 would be a GTWR of 1.0154^50 = 2.15. In other words, after 50 trades we would expect our stake to have grown by a multiple of 2.15. We can likewise project the efficient frontier of the arithmetic average HPRs into ATWRs as:

(7.08) ATWR = 1+N*(AHPR-1)

where
ATWR = The vertical axis corresponding to a given AHPR after N trades.
AHPR = The arithmetic average HPR.
N = The number of trades we desire to observe.

Thus, after 50 trades, an arithmetic average HPR of 1.03 would have made 1+50*(1.03-1) = 1+50*.03 = 1+1.5 = 2.5 times our starting stake. Note that this shows what happens when we do not reinvest our winnings back into the trading program. Equation (7.08) is the TWR you can expect when constant-contract trading.

Figure 7-3 The efficient frontier with/without reinvestment.

Just as Figure 7-2 shows the TWRs, both arithmetic and geometric, for one trade, Figure 7-3 shows them for a few trades later. Notice that the GTWR line is approaching the ATWR line. At some point for N, the geometric TWR will overtake the arithmetic TWR.
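The two projections above can be placed side by side. A sketch in Python (an illustration of Equations (7.07) and (7.08), using the worked numbers just given):

```python
def gtwr(ghpr, n):
    # Equation (7.07): TWR when reinvesting profits over n trades
    return ghpr ** n

def atwr(ahpr, n):
    # Equation (7.08): TWR when constant-contract trading over n trades
    return 1 + n * (ahpr - 1)

g = gtwr(1.0154, 50)   # ≈ 2.15: the stake grows by a multiple of about 2.15
a = atwr(1.03, 50)     # = 2.5: the multiple without reinvestment
```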
Figure 7-4 shows the arithmetic and geometric TWRs after more trades have elapsed. Notice that the geometric has overtaken the arithmetic. If we were to continue with more and more trades, the geometric TWR would continue to outpace the arithmetic. Eventually, the geometric TWR becomes infinitely greater than the arithmetic.

The logical question is, “How many trades must elapse until the geometric TWR surpasses the arithmetic?” Recall Equation (2.09a), which tells us the number of trades required to reach a specific goal:

(2.09a) N = ln(Goal)/ln(Geometric Mean)

where
N = The expected number of trades to reach a specific goal.
Goal = The goal in terms of a multiple on our starting stake, a TWR.
ln() = The natural logarithm function.

We let the AHPR at the same V as our geometric optimal portfolio be our goal and use the geometric mean of our geometric optimal portfolio in the denominator of (2.09a). We can now discern how many trades are required to make our geometric optimal portfolio match one trade in the corresponding arithmetic portfolio. Thus:

N = ln(1.031)/ln(1.01542)
= .0305292/.0153023
= 1.995075

We would thus expect 1.995075, or roughly 2, trades for the optimal GHPR to be as high up as the corresponding (same V) AHPR after one trade. The problem is that the ATWR needs to reflect the fact that two trades have elapsed. In other words, as the GTWR approaches the ATWR, the ATWR is also moving upward, albeit at a constant rate (compared to the GTWR, which is accelerating). We can relate this problem to Equations (7.07) and (7.08), the geometric and arithmetic TWRs respectively, and express it mathematically:

(7.09) GHPR^N => 1+N*(AHPR-1)

Since we know that when N = 1, G will be less than A, we can rephrase the question to “At how many N will G equal A?”
Mathematically this is:

(7.10a) GHPR^N = 1+N*(AHPR-1)

which can be written as:

(7.10b) 1+N*(AHPR-1)-GHPR^N = 0

or

(7.10c) 1+N*AHPR-N-GHPR^N = 0

or

(7.10d) N = (GHPR^N-1)/(AHPR-1)

The N that solves (7.10a) through (7.10d) is the N that is required for the geometric HPR to equal the arithmetic. All four forms are equivalent. The solution must be arrived at by iteration. Taking our geometric optimal portfolio of a GHPR of 1.01542 and a corresponding AHPR of 1.031, if we were to solve any of Equations (7.10a) through (7.10d), we would find the solution at N = 83.49894. That is, at 83.49894 elapsed trades, the geometric TWR will overtake the arithmetic TWR for those TWRs corresponding to a variance coordinate of the geometric optimal portfolio.

Figure 7-4 The efficient frontier with/without reinvestment.

Just as the AHPR has a CML line, so too does the GHPR. Figure 7-5 shows both the AHPR and the GHPR with a CML line for both, calculated from the same risk-free rate.

Figure 7-5 AHPR, GHPR, and their CML lines. (The chart plots the ATWR and GHPR curves against SD.)

The CML for the GHPR is calculated from the CML for the AHPR by the following equation:

(7.11) CMLG = (CMLA^2-VT*P)^(1/2)

where
CMLG = The E coordinate (vertical) of the CML line to the GHPR for a given V coordinate corresponding to P.
CMLA = The E coordinate (vertical) of the CML line to the AHPR for a given V coordinate corresponding to P.
P = The percentage in the tangent portfolio, figured from (7.02).
VT = The variance coordinate of the tangent portfolio.

You should know that, for any given risk-free rate, the tangent portfolio and the geometric optimal portfolio are not necessarily (and usually are not) the same. The only time that these portfolios will be the same is when the following equation is satisfied:

(7.12) RFR = GHPROPT-1

where
RFR = The risk-free rate.
GHPROPT = The geometric average HPR of the geometric optimal portfolio. This is the E coordinate of the portfolio on the efficient frontier.
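Returning to Equations (7.10a) through (7.10d), the iteration is a one-dimensional root search, which a simple bisection handles. A sketch in Python (an illustration; the bracketing interval [1, 500] is an assumption), using the GHPR of 1.01542 and AHPR of 1.031 from above:

```python
def crossover_n(ghpr, ahpr, lo=1.0, hi=500.0, tol=1e-9):
    # Equation (7.10b): f(N) = 1 + N*(AHPR-1) - GHPR^N.
    # f is positive at small N (arithmetic ahead) and negative at large N
    # (geometric ahead), so bisect for the sign change.
    f = lambda n: 1 + n * (ahpr - 1) - ghpr ** n
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n = crossover_n(1.01542, 1.031)    # ≈ 83.5 trades, as stated in the text
```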
Only when the GHPR of the geometric optimal portfolio minus 1 is equal to the risk-free rate will the geometric optimal portfolio and the portfolio tangent to the CML line be the same. If RFR > GHPROPT-1, then the geometric optimal portfolio will be to the left of (have less variance than) the tangent portfolio. If RFR < GHPROPT-1, then the tangent portfolio will be to the left of (have less variance than) the geometric optimal portfolio. In all cases, though, the tangent portfolio will, of course, never have a higher GHPR than the geometric optimal portfolio. Note also that the point of tangency for the CML to the GHPR and for the CML to the AHPR is at the same SD coordinate. We could use Equation (7.01a) to find the tangent portfolio of the GHPR line by substituting the AHPR in (7.01a) with GHPR. The resultant equation is:

(7.01b) Tangent Portfolio = MAX{(GHPR-(1+RFR))/SD}

where
MAX{} = The maximum value.
GHPR = The geometric average HPR. This is the E coordinate of a given portfolio on the efficient frontier.
SD = The standard deviation in HPRs. This is the SD coordinate of a given portfolio on the efficient frontier.
RFR = The risk-free rate.

UNCONSTRAINED PORTFOLIOS

Now we will see how to enhance returns beyond the GCML line by lifting the sum of the weights constraint. Let us return to geometric optimal portfolios. If we look for the geometric optimal portfolio among our four market systems (Toxico, Incubeast, LA Garb, and a savings account), we find it at E equal to .1688965 and V equal to .1688965, thus conforming with Equations (7.06a) through (7.06d). The geometric mean of such a portfolio would therefore be 1.094268, and the portfolio's composition would be:

Toxico             18.89891%
Incubeast          19.50386%
LA Garb            58.58387%
Savings Account      .03014%

In using Equations (7.06a) through (7.06d), you must iterate to the solution. That is, you try a test value for E (halfway between the highest and the lowest AHPRs, minus 1, is a good starting point) and solve the matrix for that E.
If your variance is higher than E, it means the tested-for value of E was too high, and you should lower it for the next attempt. Conversely, if your variance is less than E, you should raise E for the next pass. You determine the variance for the portfolio by using one of Equations (6.06a) through (6.06d). You keep on repeating the process until whichever of Equations (7.06a) through (7.06d) you choose to use is solved. Then you will have arrived at your geometric optimal portfolio. (Note that all of the portfolios discussed thus far, whether on the AHPR efficient frontier or the GHPR efficient frontier, are determined by constraining the sum of the percentages, the weights, to 100% or 1.00.)

Recall Equation (6.10), the equation used in the starting augmented matrix to find the optimal weights in a portfolio. This equation dictates that the sum of the weights equal 1:

(6.10) (∑[i = 1,N]Xi)-1 = 0

where
N = The number of securities comprising the portfolio.
Xi = The percentage weighting of the ith security.

The equation can also be written as:

(∑[i = 1,N]Xi) = 1

By allowing the left side of this equation to be greater than 1, we can find the unconstrained optimal portfolio. The easiest way to do this is to add another market system, called non-interest-bearing cash (NIC), into the starting augmented matrix. This market system, NIC, will have an arithmetic average daily HPR of 1.0 and a population standard deviation (as well as variance and covariances) in those daily HPRs of 0. What this means is that each day the HPR for NIC will be 1.0. The correlation coefficients for NIC to any other market system are always 0.

Now we set the sum of the weights constraint to some arbitrarily high number, greater than 1. A good initial value is 3 times the number of market systems (without NIC) that you are using. Since we have 4 market systems (when not counting NIC) we should set this sum of the weights constraint to 4*3 = 12.
Note that we are not really lifting the constraint that the sum of the weights be below some number, we are just setting this constraint at an arbitrarily high value. The difference between this arbitrarily high value and what the sum of the weights actually comes out to be will be the weight assigned to NIC. We are not going to really invest in NIC, though. It's just a null entry that we are pumping through the matrix to arrive at the unconstrained weights of our market systems. Now, let's take the parameters of our four market systems from Chapter 6 and add NIC as well:

Investment        Expected Return as an HPR   Expected Standard Deviation of Return
Toxico            1.095                       .316227766
Incubeast Corp.   1.13                        .5
LA Garb           1.21                        .632455532
Savings Account   1.085                       0
NIC               1.00                        0

The covariances among the market systems, with NIC included, are as follows:

       T        I        L        S        N
T      .1       -.0237   .01      0        0
I      -.0237   .25      .079     0        0
L      .01      .079     .4       0        0
S      0        0        0        0        0
N      0        0        0        0        0

Thus, when we include NIC, we are now dealing with 5 market systems; therefore, the generalized form of the starting augmented matrix is:

X1*U1+X2*U2+X3*U3+X4*U4+X5*U5 = E
X1+X2+X3+X4+X5 = S
X1*COV1,1+X2*COV1,2+X3*COV1,3+X4*COV1,4+X5*COV1,5+.5*L1*U1+.5*L2 = 0
X1*COV2,1+X2*COV2,2+X3*COV2,3+X4*COV2,4+X5*COV2,5+.5*L1*U2+.5*L2 = 0
X1*COV3,1+X2*COV3,2+X3*COV3,3+X4*COV3,4+X5*COV3,5+.5*L1*U3+.5*L2 = 0
X1*COV4,1+X2*COV4,2+X3*COV4,3+X4*COV4,4+X5*COV4,5+.5*L1*U4+.5*L2 = 0
X1*COV5,1+X2*COV5,2+X3*COV5,3+X4*COV5,4+X5*COV5,5+.5*L1*U5+.5*L2 = 0

where
E = The expected return of the portfolio.
S = The sum of the weights constraint.
COVA,B = The covariance between securities A and B.
Xi = The percentage weighting of the ith security.
Ui = The expected return of the ith security.
L1 = The first Lagrangian multiplier.
L2 = The second Lagrangian multiplier.
Thus, once we have included NIC, our starting augmented matrix appears as follows (the L1 and L2 columns carry the .5*Ui and .5 coefficients from the equations above):

  X1        X2        X3       X4       X5     L1       L2    | Answer
  .095      .13       .21      .085     0      0        0     |   E
  1         1         1        1        1      0        0     |  12
  .1       -.0237     .01      0        0      .0475    .5    |   0
 -.0237     .25       .079     0        0      .065     .5    |   0
  .01       .079      .4       0        0      .105     .5    |   0
  0         0         0        0        0      .0425    .5    |   0
  0         0         0        0        0      0        .5    |   0

Note that the answer column of the second row, the sum of the weights constraint, is 12, as we determined it to be by multiplying the number of market systems (not including NIC) by 3. When you are using NIC, it is important that you include it as the last, the Nth market system of N market systems, in the starting augmented matrix. Now, the object is to obtain the identity matrix by using row operations to produce elementary transformations, as was detailed in Chapter 6. You can now create an unconstrained AHPR efficient frontier and an unconstrained GHPR efficient frontier. The unconstrained AHPR efficient frontier represents using leverage but not reinvesting. The GHPR efficient frontier represents using leverage and reinvesting the profits. Ideally, we want to find the unconstrained geometric optimal portfolio. This is the portfolio that will result in the greatest geometric growth for us. We can use Equations (7.06a) through (7.06d) to solve for which of the portfolios along the efficient frontier is geometric optimal. In so doing, we find that no matter what value we try to solve E for (the value in the answer column of the first row), we get the same portfolio, comprised of only the savings account levered up to give us whatever value for E we want. This happens because the savings account gives us the lowest V (in this case, zero) for any given E. What we must do, then, is take the savings account out of the matrix and start over. This time we will try to solve for only four market systems, Toxico, Incubeast, LA Garb, and NIC, and we set our sum of the weights constraint to 9.
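Rather than performing the row operations by hand, the same elimination can be scripted. The sketch below (plain Python, no libraries) builds the four-market-system matrix just described, Toxico, Incubeast, LA Garb, and NIC, with E = .2457 and S = 9, and reduces it by Gauss-Jordan elimination; the coefficients follow the .5*L1*Ui + .5*L2 form of the generalized equations above.

```python
def gauss_jordan(aug):
    """Reduce an augmented matrix to the identity via row operations
    (with partial pivoting) and return the answer column."""
    n = len(aug)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[-1] for row in aug]

E, S = .2457, 9.0
U = [.095, .13, .21, 0.0]                 # Toxico, Incubeast, LA Garb, NIC
COV = [[.1, -.0237, .01, 0.0],
       [-.0237, .25, .079, 0.0],
       [.01, .079, .4, 0.0],
       [0.0, 0.0, 0.0, 0.0]]

rows = [U + [0.0, 0.0, E],                # expected-return constraint
        [1.0] * 4 + [0.0, 0.0, S]]        # sum-of-weights constraint
for i in range(4):                        # one row per market system
    rows.append(COV[i] + [.5 * U[i], .5, 0.0])

x = gauss_jordan(rows)
# x[0:4] come out near (1.026, .490, .403, 7.081), and x[4] (the first
# Lagrangian multiplier) comes out close to -2, matching the text.
print([round(v, 5) for v in x[:4]], round(x[4], 3))
```

This reproduces (to rounding) the weights quoted in the text, with NIC absorbing the difference between the invested percentages and S = 9.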
Whenever you have a component in the matrix with zero variance and an AHPR greater than 1, you'll end up with the optimal portfolio as that component levered up to meet the required E. Now, solving the matrix, we find Equations (7.06a) through (7.06d) satisfied at E equals .2457. Since this is the geometric optimal portfolio, V is also equal to .2457. The resultant geometric mean is 1.142833. The portfolio is:

Toxico       102.5982%
Incubeast     49.00558%
LA Garb       40.24979%
NIC          708.14643%

"Wait," you say. "How can you invest over 100% in certain components?" We will return to this in a moment. If NIC is not one of the components in the geometric optimal portfolio, then you must make your sum of the weights constraint, S, higher. You must keep on making it higher until NIC becomes one of the components of the geometric optimal portfolio. Recall that if there are only two components in a portfolio, if the correlation coefficient between them is -1, and if both have positive mathematical expectation, you will be required to finance an infinite number of contracts. This is so because such a portfolio would never have a losing day. Now, the lower the correlation coefficients are between the components in the portfolio, the higher the percentage required to be invested in those components is going to be. The difference between the percentages invested and the sum of the weights constraint, S, must be filled by NIC. If NIC doesn't show up in the percentage allocations for the geometric optimal portfolio, it means that the portfolio is running into a constraint at S and is therefore not the unconstrained geometric optimal. Since you are not going to be actually investing in NIC, it doesn't matter how high a percentage it commands, as long as it is listed as part of the geometric optimal portfolio.
HOW OPTIMAL F FITS WITH OPTIMAL PORTFOLIOS

In Chapter 6 we saw that we must determine an expected return (as a percentage) and an expected variance in returns for each component in a portfolio. Generally, the expected returns (and the variances) are determined from the current price of the stock. An optimal percentage (weighting) is then determined for each component. The equity of the account is then multiplied by a component's weighting to determine the number of dollars to allocate to that component, and this dollar allocation is then divided by the current price per share to determine how many shares to have on. That generally is how portfolio strategies are currently practiced. But it is not optimal. Here lies one of this book's many hearts. Rather than determining the expected return and variance in expected return from the current price of the component, the expected return and variance in returns should be determined from the optimal f, in dollars, for the component. In other words, as input you should use the arithmetic average HPR and the variance in the HPRs. Here, the HPRs used should be not of trades, but of a fixed time length such as days, weeks, months, quarters, or years, as we did in Chapter 1 with Equation (1.15):

(1.15) Daily HPR = (A/B) + 1

where

A = Dollars made or lost that day.
B = Optimal f in dollars.

We need not necessarily use days. We can use any time length we like so long as it is the same time length for all components in the portfolio (and the same time length is used for determining the correlation coefficients between these HPRs of the different components). Say the market system with an optimal f of $2,000 made $100 on a given day. Then the HPR for that market system for that day is 1.05. If you are figuring your optimal f based on equalized data, you must use Equation (2.12) in order to obtain your daily HPRs:

(2.12) Daily HPR = D$/f$ + 1

where

D$ = The dollar gain or loss on 1 unit from the previous day.
This is equal to (Tonight's Close - Last Night's Close)*Dollars per Point.
f$ = The current optimal f in dollars, calculated from Equation (2.11). Here, however, the current price variable is last night's close.

In other words, once you have determined the optimal f in dollars for 1 unit of a component, you then take the daily equity changes on a 1-unit basis and convert them to HPRs per Equation (1.15), or, if you are using equalized data, you can use Equation (2.12). When you are combining market systems in a portfolio, all the market systems should be the same in terms of whether their data, and hence their optimal fs and by-products, have been equalized or not. Then we take the arithmetic average of the HPRs. Subtracting 1 from the arithmetic average will give us the expected return to use for that component. Taking the variance of the daily (weekly, monthly, etc.) HPRs will give the variance input into the matrix. Lastly, we determine the correlation coefficients between the daily HPRs for each pair of market systems under consideration. Now here is the critical point. Portfolios whose parameters (expected returns, variance in expected returns, and correlation coefficients of the expected returns) are selected based on the current price of the component will not yield truly optimal portfolios. To discern the truly optimal portfolio you must derive the input parameters based on trading 1 unit at the optimal f for each component. You cannot be more at the peak of the optimal f curve than optimal f itself: to base the parameters on the current market price of the component is to base your parameters arbitrarily, and, as a consequence, not necessarily optimally. Now let's return to the question of how you can invest more than 100% in a certain component. One of the basic premises of this book is that weight and quantity are not the same thing.
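As a sketch of this preparation, the snippet below converts daily dollar gains and losses for two market systems into HPRs per Equation (1.15), then derives the expected returns, variances, and the correlation coefficient. The P&L figures are invented for illustration; only system A's first day ($100 made at an optimal f of $2,000, an HPR of 1.05) comes from the example above.

```python
# Turn daily P&L at the optimal f into the matrix inputs described above.
from statistics import mean, pstdev

f_dollars = {"A": 2000.0, "B": 2500.0}            # optimal f per unit, in dollars
daily_pl = {                                      # hypothetical daily equity changes
    "A": [100.0, -50.0, 20.0, 80.0, -40.0],
    "B": [-30.0, 60.0, 10.0, -20.0, 90.0],
}

def hprs(pl, f):
    """Equation (1.15): Daily HPR = (A/B) + 1."""
    return [x / f + 1.0 for x in pl]

h = {k: hprs(daily_pl[k], f_dollars[k]) for k in daily_pl}

expected_return = {k: mean(v) - 1.0 for k, v in h.items()}   # AHPR - 1
variance = {k: pstdev(v) ** 2 for k, v in h.items()}         # variance of HPRs

def corr(a, b):
    """Pearson correlation coefficient of two HPR series."""
    ma, mb = mean(a), mean(b)
    cov = mean([(x - ma) * (y - mb) for x, y in zip(a, b)])
    return cov / (pstdev(a) * pstdev(b))

print(round(expected_return["A"], 6), round(corr(h["A"], h["B"]), 4))
```

The expected returns, variances, and correlations computed this way (rather than from current prices) are the inputs to the starting augmented matrix.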
The weighting that you derive from solving for a geometric optimal portfolio must be reflected back into the optimal f's of the portfolio's components. The way to do this is to divide the optimal f for each component by its corresponding weight. Assume we have the following optimal f's (in dollars):

Toxico       $2,500
Incubeast    $4,750
LA Garb      $5,000

(Note that, if you are equalizing your data, and hence obtaining an equalized optimal f and by-products, then your optimal f's in dollars will change each day based upon the previous day's closing price and Equation (2.11).) We now divide these f's by their respective weightings:

Toxico       $2,500/1.025982 = $2,436.69
Incubeast    $4,750/.4900558 = $9,692.77
LA Garb      $5,000/.4024979 = $12,422.43

Thus, by trading in these new "adjusted" f values, we will be at the geometric optimal portfolio. In other words, suppose Toxico represents a certain market system. By trading 1 contract under this market system for every $2,436.69 in equity (and doing the same with the other market systems at their new adjusted f values) we will be at the geometric optimal unconstrained portfolio. Likewise, if Toxico is a stock, and we regard 100 shares as "1 contract," we will trade 100 shares of Toxico for every $2,436.69 in account equity. For the moment, disregard margin completely. In the next chapter we will address the potential problem of margin requirements. "Wait a minute," you protest. "If you take an optimal portfolio and change it by using optimal f, you have to prove that it is still optimal. But if you treat the new values as a different portfolio, it must fall somewhere else on the return coordinate, not necessarily on the efficient frontier. In other words, if you keep reevaluating f, you cannot stay optimal, can you?" We are not changing the f values. That is, our f values (the number of units put on for so many dollars in equity) are still the same.
We are simply performing a shortcut through the calculations, which makes it appear as though we are "adjusting" our f values. We derive our optimal portfolios based on the expected returns and variance in returns of trading 1 unit of each of the components, as well as on the correlation coefficients. We thus derive optimal weights (optimal percentages of the account to trade each component with). Thus, if a market system had an optimal f of $2,000 and an optimal portfolio weight of .5, we would trade 50% of our account at the full optimal f level of $2,000 for this market system. This is exactly the same as if we said we will trade 100% of our account at the optimal f divided by the optimal weighting ($2,000/.5) of $4,000. In other words, we are going to trade the optimal f of $2,000 per unit on 50% of our equity, which in turn is exactly the same as saying we are going to trade the adjusted f of $4,000 on 100% of our equity. The AHPRs and SDs that you input into the matrix are determined from the optimal f values in dollars. If you are doing this on stocks, you can compute your values for AHPR, SD, and optimal f on a 1-share or a 100-share basis (or any other basis you like). You dictate the size of one unit. In a nonleveraged situation, such as a portfolio of stocks that are not on margin, weighting and quantity are synonymous. Yet in a leveraged situation, such as a portfolio of futures market systems, weighting and quantity are different indeed. You can now see the idea first roughly introduced in Portfolio Management Formulas: that optimal quantities are what we seek to know, and that this is a function of optimal weightings. When we figure the correlation coefficients on the HPRs of two market systems, both with a positive arithmetic mathematical expectation, we find a slight tendency toward positive correlation. This is because the equity curves (the cumulative running sum of daily equity changes) both tend to rise up and to the right.
This can be bothersome to some people. One solution is to determine a least squares regression line for each equity curve (before equalization, if employed) and then take the difference at each point in time between the equity curve and its regression line. Next, convert this now detrended equity curve back to simple daily equity changes (noncumulative, i.e., the daily change in the detrended equity curve). If you are equalizing the data, you would then do it at this point in the sequence of events. Lastly, you figure your correlations on this processed data. This technique is valid so long as you are using the correlations of daily equity changes and not prices. If you use prices, you may do yourself more harm than good. Very often prices and daily equity changes are linked; an example would be a long-term moving average crossover system. This detrending technique must always be used with caution. Also, the daily AHPR and standard deviation in HPRs must always be figured off of non-detrended data. A final problem that happens when you detrend your data occurs with systems that trade infrequently. Imagine two day-trading systems that give one trade per week, each on a different day. The correlation coefficient between them may be only slightly positive. Yet when we detrend their data, we get very high positive correlation. This mistakenly happens because their regression lines are rising a little each day. Yet on most days the equity change is zero. Therefore, the difference is negative. The preponderance of slightly negative days with both market systems then mistakenly results in high positive correlation.

THRESHOLD TO THE GEOMETRIC FOR PORTFOLIOS

Now let's address the problem of incorporating the threshold to the geometric with the given optimal portfolio mix. This problem is readily handled simply by dividing the threshold to the geometric for each component by its weighting in the optimal portfolio.
This is done in exactly the same way as the optimal f's of the components are divided by their respective weightings to obtain a new value representative of the optimal portfolio mix. For example, assume that the threshold to the geometric for Toxico is $5,100. Dividing this by its weighting in the optimal portfolio mix of 1.025982 gives us a new adjusted threshold to the geometric of:

Threshold = $5,100/1.025982 = $4,970.85

Since the weighting for Toxico is greater than 1, both its optimal f and its threshold to the geometric will be reduced, for they are divided by this weighting. In this case, if we cannot trade the fractional unit with Toxico, and if we are trading only 1 unit of Toxico, we will switch up to 2 units only when our equity gets up to $4,970.85. Recall that our new adjusted f value in the optimal portfolio mix for Toxico is $2,436.69 ($2,500/1.025982). Since twice this amount equals $4,873.38, we would ordinarily move up to trading two contracts at that point. However, our threshold to the geometric, being greater than twice the f allocation in dollars, tells us there isn't any benefit to switching to trading 2 units before our equity reaches the threshold to the geometric of $4,970.85. Again, if you are equalizing your data, and hence obtaining an equalized optimal f and by-products, including the threshold to the geometric, then your optimal f's in dollars and your thresholds to the geometric will change each day, based upon the previous day's closing price and Equation (2.11).

COMPLETING THE LOOP

One thing you will readily notice about unconstrained portfolios (portfolios for which the sum of the weights is greater than 1 and NIC shows up as a market system in the portfolio) is that the portfolio is exactly the same for any given level of E, the only difference being the degree of leverage. This is not true for portfolios lying along the efficient frontier(s) when the sum of the weights is constrained.
In other words, the ratios of the weightings of the different market systems to each other are always the same for any point along the unconstrained efficient frontiers (AHPR or GHPR). For example, the ratios of the different weightings between the different market systems in the geometric optimal portfolio can be calculated. The ratio of Toxico to Incubeast is 102.5982% divided by 49.00558%, which equals 2.0936. We can thus determine the ratios of all the components in this portfolio to one another:

Toxico/Incubeast = 2.0936
Toxico/LA Garb = 2.5490
Incubeast/LA Garb = 1.2175

Now, we can go back to the unconstrained portfolio and solve for different values for E. What follows are the weightings for the components of the unconstrained portfolios that have the lowest variances for the given values of E. You will notice that the ratios of the weightings of the components are exactly the same:

             E = .1      E = .3
Toxico       .4175733    1.252726
Incubeast    .1994545    .5983566
LA Garb      .1638171    .49145

Thus, we can state that the unconstrained efficient frontiers are the same portfolio at different levels of leverage. This portfolio, the one that gets levered up and down with E when the sum of the weights constraint is lifted, is the portfolio that has a value of zero for the second Lagrangian multiplier when the sum of the weights equals 1. Therefore, we can readily determine what our unconstrained geometric optimal portfolio will be. First, we find the portfolio that has a value of zero for the second Lagrangian multiplier when the sum of the weights is constrained to 1.00. One way to find this is through iteration. The resultant portfolio will be that portfolio which gets levered up (or down) to satisfy any given E in the unconstrained portfolio. That value for E which satisfies any of Equations (7.06a) through (7.06d) will be the value for E that yields the unconstrained geometric optimal portfolio.
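The constancy of these ratios is easy to verify from the figures just given; a quick check in code:

```python
# Weightings quoted above for the unconstrained portfolios at E = .1
# and E = .3; the component ratios should match at both levels of
# leverage (and match the geometric optimal ratios).
w_e1 = {"Toxico": .4175733, "Incubeast": .1994545, "LA Garb": .1638171}
w_e3 = {"Toxico": 1.252726, "Incubeast": .5983566, "LA Garb": .49145}

pairs = [("Toxico", "Incubeast"), ("Toxico", "LA Garb"), ("Incubeast", "LA Garb")]
for a, b in pairs:
    r1 = w_e1[a] / w_e1[b]
    r3 = w_e3[a] / w_e3[b]
    print(a, "/", b, "=", round(r1, 4), "and", round(r3, 4))
```

Each pair prints the same ratio (2.0936, 2.5490, 1.2175) at both values of E, confirming that only the leverage differs.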
Another way to solve for which portfolio along the unconstrained AHPR efficient frontier is geometric optimal is to use the first Lagrangian multiplier that results in determining a portfolio along any particular point on the unconstrained AHPR efficient frontier. Recall from Chapter 6 that one of the by-products in determining the composition of a portfolio by the method of row-equivalent matrices is the first Lagrangian multiplier. The first Lagrangian multiplier represents the instantaneous rate of change in variance with respect to expected return, sign reversed. A first Lagrangian multiplier equal to -2 means that at that point the variance is changing at twice the rate of the expected return, sign reversed. Such a portfolio is geometric optimal:

(7.06e) L1 = -2

where L1 = The first Lagrangian multiplier of a given portfolio along the unconstrained AHPR efficient frontier.[2]

Now it gets interesting as we tie these concepts together. The portfolio that gets levered up and down the unconstrained efficient frontiers (arithmetic or geometric) is the portfolio tangent to the CML line emanating from an RFR of 0 when the sum of the weights is constrained to 1.00 and NIC is not employed. Therefore, we can also find the unconstrained geometric optimal portfolio by first finding the tangent portfolio to an RFR equal to 0 where the sum of the weights is constrained to 1.00, then levering this portfolio up to the point where it is the geometric optimal. But how can we determine how much to lever this constrained portfolio up to make it the equivalent of the unconstrained geometric optimal portfolio? Recall that the tangent portfolio is found by taking the portfolio along the constrained efficient frontier (arithmetic or geometric) that has the highest Sharpe ratio, which is Equation (7.01).
Now we lever this portfolio up, and we multiply the weights of each of its components by a variable named q, which can be approximated by:

(7.13) q = (E - RFR)/V

where

E = The expected return (arithmetic) of the tangent portfolio.
RFR = The risk-free rate at which we assume you can borrow or loan.
V = The variance in the tangent portfolio.

Equation (7.13) actually is a very close approximation for the actual optimal q. An example may help illustrate the role of optimal q. Recall that our unconstrained geometric optimal portfolio is as follows:

Component    Weight
Toxico       1.025955
Incubeast    .4900436
LA Garb      .4024874

This portfolio, we found, has an AHPR of 1.245694 and variance of .2456941. Throughout the remainder of this discussion we will assume for simplicity's sake an RFR of 0. (Incidentally, the Sharpe ratio of this portfolio, (AHPR-(1+RFR))/SD, is .49568.) Now, if we were to input the same returns, variances, and correlation coefficients of these components into the matrix and solve for which portfolio was tangent to an RFR of 0 when the sum of the weights is constrained to 1.00 and we do not include NIC, we would obtain the following portfolio:

Component    Weight
Toxico       .5344908
Incubeast    .2552975
LA Garb      .2102117

This particular portfolio has an AHPR of 1.128, a variance of .066683, and a Sharpe ratio of .49568. It is interesting to note that the Sharpe ratio of the tangent portfolio, a portfolio for which the sum of the weights is constrained to 1.00 and we do not include NIC, is exactly the same as the Sharpe ratio for our unconstrained geometric optimal portfolio.

[Footnote 2: Thus, we can state that the geometric optimal portfolio is that portfolio which, when the sum of the weights is constrained to 1, has a second Lagrangian multiplier equal to 0, and when unconstrained has a first Lagrangian multiplier of -2. Such a portfolio will also have a second Lagrangian multiplier equal to 0 when unconstrained.]
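With the figures above, Equation (7.13) and the levering of the constrained tangent portfolio can be checked numerically (the weights and statistics are those quoted in the text):

```python
# Tangent portfolio statistics quoted above (RFR assumed to be 0).
E_tan, V_tan, RFR = 0.128, 0.066683, 0.0

q = (E_tan - RFR) / V_tan                      # Equation (7.13)

constrained = {"Toxico": .5344908, "Incubeast": .2552975, "LA Garb": .2102117}
unconstrained = {k: w * q for k, w in constrained.items()}

print(round(q, 9))                             # close to 1.919529715
for k, w in unconstrained.items():
    print(k, round(w, 6))                      # near the geometric optimal weights
```

Multiplying each constrained weight by q reproduces (to rounding) the unconstrained geometric optimal weights listed earlier.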
Subtracting 1 from our AHPRs gives us the arithmetic average return of the portfolio. Doing so we notice that in order to obtain the same return for the constrained tangent portfolio as for the unconstrained geometric optimal portfolio, we must multiply the former by 1.9195:

.245694/.128 = 1.9195

Now if we multiply each of the weights of the constrained tangent portfolio by 1.9195, the portfolio we obtain is virtually identical to the unconstrained geometric optimal portfolio:

Component    Weight      * 1.9195 =   Weight
Toxico       .5344908                 1.025955
Incubeast    .2552975                 .4900436
LA Garb      .2102117                 .4035013

The factor 1.9195 was arrived at by dividing the return on the unconstrained geometric optimal portfolio by the return on the constrained tangent portfolio. Usually, though, we will want to find the unconstrained geometric optimal portfolio knowing only the constrained tangent portfolio. This is where optimal q comes in.[3] If we assume an RFR of 0, we can determine the optimal q on our constrained tangent portfolio as:

(7.13) q = (E - RFR)/V = (.128 - 0)/.066683 = 1.919529715

A few notes on the RFR. To begin with, we should always assume an RFR of 0 when we are dealing with futures contracts. Since we are not actually borrowing or lending funds to lever our portfolio up or down, there is effectively an RFR of 0. With stocks, however, it is a different story. The RFR you use should be determined with this fact in mind. Quite possibly, the leverage you employ does not require you to use an RFR other than 0. You will often be using AHPRs and variances for portfolios that were determined by using daily HPRs of the components. In such cases, you must adjust the RFR from an annual rate to a daily one. This is quite easy to accomplish. First, you must be certain that this annual rate is what is called the effective annual interest rate. Interest rates are typically stated as annual percentages, but frequently these annual percentages are what is referred to as the nominal annual interest rate.
When interest is compounded semiannually, quarterly, monthly, and so on, the interest earned during a year is greater than if it were compounded annually (the nominal rate is based on compounding annually). When interest is compounded more frequently than annually, an effective annual interest rate can be determined from the nominal interest rate. It is the effective annual interest rate that concerns us and that we will use in our calculations. To convert the nominal rate to an effective rate we can use:

(7.14) E = (1 + R/M)^M - 1

where

E = The effective annual interest rate.
R = The nominal annual interest rate.
M = The number of compounding periods per year.

Assume that the nominal annual interest rate is 9%, and suppose that it is compounded monthly. Therefore, the corresponding effective annual interest rate is:

(7.14) E = (1 + .09/12)^12 - 1 = (1 + .0075)^12 - 1 = 1.0075^12 - 1 = 1.093806898 - 1 = .093806898

Therefore, our effective annual interest rate is a little over 9.38%. Now if we figured our HPRs on the basis of weekdays, we can state that there are 365.2425/7*5 = 260.8875 weekdays, on average, in a year. Dividing .093806898 by 260.8875 gives us a daily RFR of .0003595683887. If we determine that we are actually paying interest to lever our portfolio up, and we want to determine from the constrained tangent portfolio what the unconstrained geometric optimal portfolio is, we simply input the value for the RFR into the Sharpe ratio, Equation (7.01), and the optimal q, Equation (7.13). Now to close the loop. Suppose you determine that the RFR for your portfolio is not 0, and you want to find the geometric optimal portfolio without first having to find the constrained portfolio tangent to your applicable RFR. Can you just go straight to the matrix, set the sum

[Footnote 3: Latane, Henry, and Donald Tuttle, "Criteria for Portfolio Building," Journal of Finance 22, September 1967, pp. 362-363.]
of the weights to some arbitrarily high number, include NIC, and find the unconstrained geometric optimal portfolio when the RFR is greater than 0? Yes, this is easily accomplished by subtracting the RFR from the expected returns of each of the components, but not from NIC (i.e., the expected return for NIC remains at 0, or an arithmetic average HPR of 1.00). Now, solving the matrix will yield the unconstrained geometric optimal portfolio when the RFR is greater than 0. Since the unconstrained efficient frontier is the same portfolio at different levels of leverage, you cannot put a CML line on the unconstrained efficient frontier. You can only put CML lines on the AHPR or GHPR efficient frontiers if they are constrained (i.e., if the sum of the weights equals 1). It is not logical to put CML lines on the AHPR or GHPR unconstrained efficient frontiers. We have seen numerous ways of arriving at the geometric optimal portfolio. For starters, we can find it empirically, as was detailed in Portfolio Management Formulas and recapped in Chapter 1 of this text. We have seen how to find it parametrically in this chapter, from a number of different angles, for any value of the risk-free rate. Now that we know how to find the geometric optimal portfolio, we must learn how to use it in real life. The geometric optimal portfolio will give us the greatest possible geometric growth. In the next chapter we will go into techniques to use this portfolio within given risk constraints.

Chapter 8 - Risk Management

We now know how to find the optimal portfolios by numerous different methods. Further, we now have a thorough understanding of the geometry of portfolios and the relationship of optimal quantities and optimal weightings.
We can now see that the best way to trade any portfolio of any underlying instrument is at the geometric optimal level. Doing so on a reinvestment of returns basis will maximize the ratio of expected gain to expected risk. In this chapter we discuss how to use these geometric optimal portfolios within the risk constraints that we specify. Thus, whatever vehicles we are trading in, we can align ourselves anywhere we desire on the risk spectrum. In so doing, we will obtain the maximum rate of geometric growth for a given level of risk.

ASSET ALLOCATION

You should be aware that the optimal portfolio obtained by this parametric technique will always be almost, if not exactly, the same as the portfolio that would be obtained by using an empirical technique such as the one detailed in the first chapter or in Portfolio Management Formulas. As such, we can expect tremendous drawdowns on the entire portfolio in terms of equity retracement. Our only guard against this is to dilute the portfolio somewhat. What this amounts to is combining the geometric optimal portfolio with the risk-free asset in some fashion. This we call asset allocation. The degree of risk and safety for any investment is not a function of the investment itself, but rather a function of asset allocation. Even portfolios of blue-chip stocks, if traded at their unconstrained geometric optimal portfolio levels, will show tremendous drawdowns. Yet these blue-chip stocks must be traded at these levels to maximize potential geometric gain relative to dispersion (risk) and also provide for attaining a goal in the least possible time. When viewed from such a perspective, trading blue-chip stocks is as risky as pork bellies, and pork bellies are no less conservative than blue-chip stocks. The same can be said of a portfolio of commodity trading systems and a portfolio of bonds.
The object now is to achieve the desired level of potential geometric gain to dispersion (risk) by combining the risk-free asset with whatever it is we are trading, be it a portfolio of blue-chip stocks, bonds, or commodity trading systems. When you trade a portfolio at unconstrained fractional f, you are on the unconstrained GHPR efficient frontier, but to the left of the geometric optimal point, the point that satisfies any of Equations (7.06a) through (7.06e). Thus, you have less potential gain relative to the dispersion than you would if you were at the geometric optimal point. This is one way you can combine a portfolio with the risk-free asset. Another way you can practice asset allocation is by splitting your equity into two subaccounts, an active subaccount and an inactive subaccount. These are not two separate accounts; rather, they are a way of splitting a single account in theory. The technique works as follows. First, you must decide upon an initial fractional level. Suppose that, initially, you want to emulate an account at the half f level. Your initial fractional level is .5 (the initial fractional level must be greater than zero and less than 1). This means you will split your account, with half the equity in your account going into the inactive subaccount and half going into the active subaccount. Assume you are starting out with a $100,000 account. Initially, $50,000 is in the inactive subaccount and $50,000 is in the active subaccount. It is the equity in the active subaccount that you use to determine how many contracts to trade. These subaccounts are not real; they are a hypothetical construct you are creating in order to manage your money more effectively. You always use the full optimal f's with this technique. Any equity changes are reflected in the active portion of the account.
Therefore, each day you must look at the account's total equity (closed equity plus open equity, marking open positions to the market), and subtract the inactive amount (which will remain constant from day to day). The difference is your active equity, and it is on this difference that you will calculate how many contracts to trade at the full f levels. Now suppose that the optimal f for market system A is to trade 1 contract for every $2,500 in account equity. You come into the first day with $50,000 in active equity, and therefore you will look to trade 20 contracts. If you were using the straight half f strategy, you would end up with the same number of contracts on day one. At half f, you would trade 1 contract for every $5,000 in account equity ($2,500/.5), and you would use the full $100,000 account equity to figure how many contracts to trade. Therefore, under the half f strategy, you would trade 20 contracts on this day as well. However, as soon as the equity in the account changes, the number of contracts you will trade changes as well. Assume now that you make $5,000 this next day, thus pushing the total equity in the account up to $105,000. Under the half f strategy, you will now be trading 21 contracts. However, with the split-equity technique, you must subtract the now-constant inactive amount of $50,000 from your total equity of $105,000. This leaves an active equity portion of $55,000, from which you will figure your contract size at the optimal f level of 1 contract for every $2,500 in equity. Therefore, with the split-equity technique, you will now look to trade 22 contracts. The procedure works the same way on the downside of the equity curve, with the split-equity technique peeling off contracts at a faster rate than the fractional f strategy does. Suppose you lost $5,000 on the first day of trading, putting the total account equity at $95,000. With the fractional f strategy you would now look to trade 19 contracts ($95,000/$5,000).
However, with the split-equity technique you are now left with $45,000 of active equity, and thus you will look to trade 18 contracts ($45,000/$2,500). Notice that with the split-equity technique, the exact fraction of optimal f that we are using changes with the equity changes. We specify the fraction we want to start at. In our example we used an initial fraction of .5. When the equity increases, this fraction of the optimal f increases too, approaching 1 as a limit as the account equity approaches infinity. On the downside, this fraction approaches 0 as a limit at the level where the total equity in the account equals the inactive portion. The fact that portfolio insurance is built into the split-equity technique is a tremendous benefit and will be discussed at length later in this chapter. Because the split-equity technique has a fraction for f that moves, we refer to it as a dynamic fractional f strategy, as opposed to the straight fractional f (static fractional f) strategy. The static fractional f strategy puts you on the CML line somewhere to the left of the optimal portfolio if you are using a constrained portfolio. Throughout the life of the account, regardless of equity changes, the account will stay at that point on the CML line. If you are using an unconstrained portfolio (as you rightly should), you will be on the unconstrained efficient frontier (since there are no CML lines with unconstrained portfolios) at some point to the left of the optimal portfolio. As the equity in the account changes, you stay at the same point on the unconstrained efficient frontier. With the dynamic fractional f technique, you start at these same points for the constrained and unconstrained portfolios. However, as the account equity increases, the portfolio moves up and to the right, and as the equity decreases, the portfolio moves down and to the left.
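The split-equity bookkeeping just described can be sketched in a few lines. This is an illustrative sketch only, not code from the text; the function names and the truncation of contract counts to whole numbers are my assumptions.

```python
def contracts_static(total_equity, full_f_dollars, frac):
    # Static fractional f: trade 1 contract for every (full_f_dollars / frac)
    # of TOTAL account equity (e.g., $2,500/.5 = $5,000 at the half f level).
    return int(total_equity // (full_f_dollars / frac))

def contracts_dynamic(total_equity, inactive, full_f_dollars):
    # Split-equity (dynamic fractional f): subtract the constant inactive
    # amount, then trade the full optimal f on the active equity only.
    return int((total_equity - inactive) // full_f_dollars)

# Day one: $100,000 total, $50,000 inactive, 1 contract per $2,500 at full f.
# Both methods agree at 20 contracts. After a $5,000 gain the static method
# trades 21 while the split-equity method trades 22; after a $5,000 loss the
# static trades 19 while the split-equity method peels back to 18.
```

Running the three equity levels from the example ($100,000, $105,000, $95,000) through these two functions reproduces the 20/20, 21/22, and 19/18 contract counts given in the text.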
The limits are at the peak of the curve to the right where the fraction of f equals 1, and on the left at the point where the fraction of f equals 0. With the static f method of asset allocation, the dispersion remains constant, since the fraction of optimal f used is constant. Unfortunately, this is not true with the dynamic fractional f technique. Here, as the account equity increases, so does the dispersion, as the fraction of optimal f used increases. The upper limit to this dispersion is the dispersion at full f as the account equity approaches infinity. On the downside, the dispersion diminishes rapidly as the fraction of optimal f used approaches zero, which happens as the total account equity approaches the inactive subaccount equity. Here, the lower limit to the dispersion is zero. Using the dynamic fractional f technique is analogous to trading an account full out at the optimal f levels, where the initial size of the account is the active equity portion. So we see that there are two ways to dilute an account down from the full geometric optimal portfolio, two ways to exercise asset allocation. We can trade a static fractional f or a dynamic fractional f. The dynamic fractional f will also have dynamic variance, a slight negative, but it also provides for portfolio insurance (more on this later). Although the two techniques are related, you can also see that they differ. Which is best? Assume we have a system in which the average daily arithmetic HPR is 1.0265. The standard deviation in these daily HPRs is .1211, so the geometric mean is 1.01933. Now, we look at the numbers for a .2 static fractional f and a .1 static fractional f by using Equations (2.06) through (2.08): (2.06) FAHPR = (AHPR-1)*FRAC+1 (2.07) FSD = SD*FRAC (2.08) FGHPR = (FAHPR^2-FSD^2)^(1/2) where FRAC = The fraction of optimal f we are solving for. AHPR = The arithmetic average HPR at the optimal f. SD = The standard deviation in HPRs at the optimal f. FAHPR = The arithmetic average HPR at the fractional f.
FSD = The standard deviation in HPRs at the fractional f. FGHPR = The geometric average HPR at the fractional f. The results then are:

        Full f     .2 f       .1 f
AHPR    1.0265     1.0053     1.00265
SD      .1211      .02422     .01211
GHPR    1.01933    1.005      1.002577

Now recall Equation (2.09a), the time expected to reach a specific goal: (2.09a) N = ln(Goal)/ln(Geometric Mean) where N = The expected number of trades to reach a specific goal. Goal = The goal in terms of a multiple on our starting stake, a TWR. ln() = The natural logarithm function. Now, we compare trading at the .2 static fractional f strategy, with a geometric mean of 1.005, to the .2 dynamic fractional f strategy (20% as initial active equity) with a daily geometric mean of 1.01933. The time (number of days, since the geometric means are daily) required to double the static fractional f is given by Equation (2.09a) as: ln(2)/ln(1.005) = 138.9751 To double the dynamic fractional f requires setting the goal to 6. This is because if you initially have 20% of the equity at work, and you start out with a $100,000 account, then you initially have $20,000 at work. The goal is to make the active equity equal $120,000. Since the inactive equity remains at $80,000, you will then have a total of $200,000 in your account. Thus, to make a $20,000 account grow to $120,000 means you need to achieve a TWR of 6. Therefore, the goal is 6 in order to double a .2 dynamic fractional f: ln(6)/ln(1.01933) = 93.58634 Notice that it took 93 days for the dynamic fractional f versus 138 days for the static fractional f. Now look at the .1 fraction. The number of days expected in order for the static technique to double is: ln(2)/ln(1.002577) = 269.3404 Compare this to doubling a dynamic fractional f that is initially set to .1 active.
You need to achieve a TWR of 11, so the number of days required for the comparative dynamic fractional f strategy is: ln(11)/ln(1.01933) = 125.2458 To double the account equity at the .1 level of fractional f takes 269 days for our static example, as compared to 125 days for the dynamic. The lower the fraction for f, the faster the dynamic will outperform the static technique. Now take a look at tripling the .2 fractional f. The number of days expected by the static technique to triple is: ln(3)/ln(1.005) = 220.2704 This compares to its dynamic counterpart, which requires: ln(11)/ln(1.01933) = 125.2458 days To make 400% profit (i.e., a goal or TWR of 5) requires of the .2 static technique: ln(5)/ln(1.005) = 322.6902 days Which compares to its dynamic counterpart: ln(21)/ln(1.01933) = 159.0201 days The dynamic technique takes almost half as much time as the static to reach the goal of 400% in this example. However, if you look out in time 322.6902 days to where the static technique made 400%, the dynamic technique would be at a TWR of: TWR = .8+(1.01933^322.6902)*.2 = .8+482.0659576*.2 = 97.21319 This represents making over 9,600% in the time it took the static to make 400%. We can now amend Equation (2.09a) to accommodate both the static and dynamic fractional f strategies to determine the expected length required to achieve a specific goal as a TWR. To begin with, for the static fractional f, we can create Equation (2.09b): (2.09b) N = ln(Goal)/ln(A) where N = The expected number of trades to reach a specific goal. Goal = The goal in terms of a multiple on our starting stake, a TWR. A = The adjusted geometric mean. This is the geometric mean, run through Equation (2.08), to determine the geometric mean for a given static fractional f. ln() = The natural logarithm function. For a dynamic fractional f, we have Equation (2.09c): (2.09c) N = ln(((Goal-1)/ACTV)+1)/ln(Geometric Mean) where N = The expected number of trades to reach a specific goal.
Goal = The goal in terms of a multiple on our starting stake, a TWR. ACTV = The active equity percentage. Geometric Mean = This is simply the raw geometric mean; there is no adjustment performed on it as there is in (2.09b). ln() = The natural logarithm function. To illustrate the use of (2.09c), suppose we want to determine how long it will take an account to double (i.e., TWR = 2) at .1 active equity and a geometric mean of 1.01933: (2.09c) N = ln(((Goal-1)/ACTV)+1)/ln(Geometric Mean) = ln(((2-1)/.1)+1)/ln(1.01933) = ln((1/.1)+1)/ln(1.01933) = ln(10+1)/ln(1.01933) = ln(11)/ln(1.01933) = 2.397895273/.01914554872 = 125.2455758 Thus, if our geometric mean is determined on a daily basis, we can expect to double in about 125 days. If our geometric mean is determined on a trade-by-trade basis, we can expect to double in about 125 trades. So long as you are dealing with an N great enough such that (2.09c) is less than (2.09b), then you are benefiting from dynamic fractional f trading. Figure 8-1 Static versus dynamic fractional f: ultimately, the dynamic makes infinitely more than the static fractional f strategy for the same initial level of risk. Figure 8-1 demonstrates the relationship between trading at a static versus a dynamic fractional f strategy over time. The more the time that elapses, the greater the difference between the static fractional f and the dynamic fractional f strategy. Asymptotically, the dynamic fractional f strategy provides infinitely greater wealth than its static counterpart. In the long run you are better off to practice asset allocation in a dynamic fractional f technique. That is, you determine an initial level, a percentage, to allocate as active equity. The remainder is inactive equity. The day-to-day equity changes are reflected in the active portion only. The inactive dollar amount remains constant. Therefore, each day you subtract the constant inactive dollar amount from your total account equity.
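The calculations in Equations (2.06) through (2.09c) can be collected into a short sketch. This is illustrative only; the function names are mine, not the text's.

```python
import math

def fractional_hprs(ahpr, sd, frac):
    # Equations (2.06)-(2.08): dilute the full-f HPR statistics down to a
    # static fractional f.
    fahpr = (ahpr - 1.0) * frac + 1.0          # (2.06) FAHPR
    fsd = sd * frac                            # (2.07) FSD
    fghpr = (fahpr ** 2 - fsd ** 2) ** 0.5     # (2.08) FGHPR
    return fahpr, fsd, fghpr

def n_static(goal, adjusted_gmean):
    # (2.09b): periods to reach a goal TWR at a static fractional f, where
    # adjusted_gmean is the FGHPR produced by Equation (2.08).
    return math.log(goal) / math.log(adjusted_gmean)

def n_dynamic(goal, actv, gmean):
    # (2.09c): periods to reach a goal TWR at a dynamic fractional f, where
    # gmean is the RAW geometric mean at full f and actv is the initial
    # active equity percentage.
    return math.log((goal - 1.0) / actv + 1.0) / math.log(gmean)
```

With AHPR = 1.0265 and SD = .1211, fractional_hprs reproduces the table above (an FGHPR of about 1.005 at the .2 f and about 1.002577 at the .1 f), while n_static(2, 1.005) and n_dynamic(2, .2, 1.01933) give the 138.98-day and 93.59-day doubling times computed earlier.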
This difference is the active portion, and it is on this active portion that you will figure your quantities to trade, based on the optimal f levels. Eventually, if things go well for you, your active portion will dwarf your inactive portion, and you'll have the same problem of excessive variance and potential drawdown that you would have had initially at the full optimal f level. We now discuss four ways to treat this "problem." There are no fine lines delineating these four methods, and it is possible to mix methods to meet your specific needs.

REALLOCATION: FOUR METHODS

First, a word about the risk-free asset. Throughout this chapter the risk-free asset has been treated as though it were simply cash, or near-cash equivalents such as Treasury Bills or money market funds (assuming that there is no risk in any of these). The risk-free asset can also be any asset which the investor believes has no risk, or risk so negligible as to be nonexistent. This may include long-term government and corporate bonds. These can be coupon bonds or zeros. Holders may even write call options against these risk-free assets to further enhance their returns. Many trading programs employ zero coupon bonds as the risk-free asset. For every dollar invested in such a program, a dollar's worth of face value zero coupon bonds is bought in the account. Such a bond, if it were to mature in, say, 5 years, would surely cost less than a dollar. The difference between the dollar face value of the bond and its actual cost is the return the bond will generate over its remaining life. This difference is then applied toward the trading program. If the program loses all of this money, the bonds will still mature at their full face value. At the time of the bond maturity, the investor is then paid an amount equal to his initial investment, although he would not have seen any return on that initial investment over the term that the money was in the program (5 years in the case of this example).
Of course, this is predicated upon the managers of the program not losing an amount in excess of the difference between the face value of the bond and its market cost. This same principle can be applied by any trader. Further, you need not use zero coupon bonds. Any kind of interest-generating vehicle can be used. The point is that the risk-free asset need not be simply "dead" cash. It can be an actual investment program, designed to provide a real yield, and this yield can be made to offset potential losses in the program. The main consideration is that the risk-free asset be regarded as risk-free (i.e., treated as though safety of principal were the primary concern). Now on with our discussion of allocating between the risk-free asset, the "inactive" portion of the account, and the active, trading portion. The first, and perhaps the crudest, way to determine what the active/inactive percentage split will be initially, and when to reallocate back to this percentage, is the investor utility method. This can also be referred to as the gut feel method. Here, we assume that the drawdowns to be seen will be equal to a complete retracement of active equity. Therefore, if we are willing to see a 50% drawdown, we initially allocate 50% to active equity. Likewise, if we are willing to see a 10% drawdown, we initially split the account into 10% active, 90% inactive. Basically, with the investor utility method you are trying to allocate as high a percentage to active equity as you are willing to risk losing. Now, it is possible that the active portion may be completely wiped out, at which point the trader no longer has any active portion of his account left with which to continue trading. At such a point, it will be necessary for the trader to decide whether to keep on trading, and if so, what percentage of the remaining funds in the account (the inactive subaccount) to allocate as new active equity.
This new active equity can also be lost, so it is important that the trader bear in mind at the outset of this program that the initial active equity is not the maximum amount that can be lost. Furthermore, in any trading where there is unlimited liability on a given position (such as a futures trade) the entire account is at risk, and even the trader's assets outside of the account are at risk! The reader should not be deluded into thinking that he or she is immune from a string of locked limit days, or an enormous opening gap that could take the entire account into a deficit position, regardless of what the "active" equity portion of the account is. This approach also makes a distinction between a drawdown in blood and a drawdown in diet cola. For instance, if a trader decides that a 25% equity retracement is the most that the trader would initially care to sit through, he or she should initially split the account into 75% inactive, 25% active. Suppose the trader is starting out with a $100,000 account. Initially, therefore, $25,000 is active and $75,000 is inactive. Now suppose that the account gets up to $200,000. The trader still has $75,000 inactive, but now the active portion is up to $125,000. Since he or she is trading at the full f amount on this $125,000, it is very possible to lose a good portion, if not all of this amount by going into an historically typical drawdown at this point. Such a drawdown would represent greater than a 25% equity retracement, even though the amount of the initial starting equity that would be lost would be 25% if the total account value plunged down to the inactive $75,000. An account that starts out at a lower percentage of active equity will therefore be able to reallocate sooner than an account trading the same market systems starting out at a higher percentage of active equity.
Therefore, not only does the account that starts out at a lower percentage of active equity have a lower potential drawdown on initial margin, but also, since the trader can reallocate sooner, he is less likely to get into awkward ratios of active to inactive equity (assuming an equity runup) than if he started out at a higher initial active equity percentage. As a trader, you are also faced with the question of when to reallocate, whether you are using the crude investor utility method or one of the more sophisticated methods about to be described. You should decide in advance at what point in your equity, both on the upside and on the downside, you want to reallocate. For instance, you may decide that if you get a 100% return on your initial investment, it would be a good time to reallocate. Likewise, you should also decide in advance at what point on the downside you will reallocate. Usually this point is the point where there is either no active equity left or the active equity left doesn't allow for even 1 contract in any of the market systems you are using. You should decide, preferably in advance, whether to continue trading if this downside limit is hit, and if so, what percentage to reallocate to active equity to start anew. Also, you may decide to reallocate with respect to time, particularly for professionally managed accounts. For example, you may decide to reallocate every quarter. This could be incorporated with the equity limits of reallocation. You may decide that if the active portion is completely wiped out, you will stop trading altogether until the quarter is over. At the beginning of the next quarter, the account is reallocated with X% as active equity and 100-X% as inactive equity. It is not beneficial to reallocate too frequently. Ideally, you will never reallocate. Ideally, you will let the fraction of optimal f you are using keep approaching 1 as your account equity grows.
In reality, however, you most likely will reallocate at some point in time. It is to be hoped you will not reallocate so frequently that it becomes a problem. Consider the case of reallocating after every trade or every day. Such is the case with static fractional f trading. Recall again Equation (2.09a), the time required to reach a specific goal. Let's return to our system, which we are trading with a .2 active portion and a geometric mean of 1.01933. We will compare this to trading at the static fractional .2 f, where the resultant geometric mean is 1.005. If we start with a $100,000 account and we want to reallocate at $110,000 total equity, the number of days (since our geometric means here are on a per day basis) required by the static fractional .2 f is: ln(1.1)/ln(1.005) = 19.10956 This compares to using $20,000 of the $100,000 total equity at the full f amount and trying to get the total account up to $110,000. This would represent a goal of 1.5 times the $20,000: ln(1.5)/ln(1.01933) = 21.17807 At lower goals, the static fractional f strategy grows faster than its corresponding dynamic fractional f counterpart. As time elapses, the dynamic overtakes the static, until eventually the dynamic is infinitely farther ahead. Figure 8-1 displays this relationship between the static and dynamic fractional fs graphically. If you reallocate too frequently you are only shooting yourself in the foot, as the technique would then be inferior to its static fractional f counterpart. Therefore, since you are best off in the long run to use the dynamic fractional f approach to asset allocation, you are also best off to reallocate funds between the active and inactive subaccounts as infrequently as possible. Ideally, you will make this division between active and inactive equity only once, at the outset of the program. Generally, the dynamic fractional f will overtake its static counterpart faster the lower the portion of initial active equity.
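The reallocation-frequency comparison above can be checked numerically. This is a sketch using the constants of the running example; the names are my own, not the text's.

```python
import math

GM_FULL = 1.01933      # daily geometric mean at full optimal f
GM_STATIC_2F = 1.005   # daily geometric mean at the static .2 f
ACTV = 0.2             # initial active equity percentage

def days_static(goal):
    # Equation (2.09b), using the adjusted (.2 f) geometric mean
    return math.log(goal) / math.log(GM_STATIC_2F)

def days_dynamic(goal):
    # Equation (2.09c), using the raw full-f geometric mean
    return math.log((goal - 1.0) / ACTV + 1.0) / math.log(GM_FULL)

# Modest goal (reallocating at a 10% gain): static is faster,
# about 19.1 days versus about 21.2 days for the dynamic.
# Larger goal (doubling): the dynamic has long since overtaken
# the static, about 93.6 days versus about 139 days.
```

This makes the crossover concrete: frequent reallocation keeps resetting you to the short-horizon regime, where the static fractional f is the faster of the two.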
In other words, a portfolio with an initial active equity of .1 will overcome its static counterpart faster than a portfolio with an initial active equity allocation of .2 will overtake its static counterpart. At an initial active equity allocation of 100% (1.0), the dynamic never overtakes the static fractional f (rather they grow at the same rate). Also affecting the rate at which the dynamic fractional f overtakes its static counterpart is the geometric mean of the portfolio itself. The higher the geometric mean, the sooner the dynamic will overtake the static. At a geometric mean of 1.0, the dynamic never overtakes its static counterpart. A second method for determining initial active equity amounts and reallocation is the scenario planning method. Under this method the amount allocated initially is determined mathematically as a function of the different scenarios, their outcomes, and their probabilities of occurrence, for the performance of the account. This exercise, too, can be performed at regular intervals. The technique involves the scenario planning method detailed in Chapter 4. As an example, suppose you are pondering three possible scenarios for the next quarter:

Scenario      Probability   Result
Drawdown      50%           -100%
No gain       25%           0%
Good runup    25%           +300%

The result column pertains to the results on the account's active equity. Thus, there is a 50% chance here of a 100% loss of active equity, a 25% chance of the active equity remaining unchanged, and a 25% chance of a 300% gain on the active equity. In reality you should consider more than three scenarios, but for simplicity, only three are used here. You input the three different scenarios, their probabilities of occurrence, and their results in units, where each unit represents a percentage point. The results are determined based on what you see happening for each scenario if you were trading at the full optimal f amount. Inputting these three scenarios yields an optimal f of .11.
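As a sketch of the scenario planning arithmetic, a simple grid search over f for the greatest expected logarithmic growth reproduces the .11 figure. The search approach and variable names here are my own illustration, not the exact procedure of Chapter 4.

```python
import math

# Each scenario: (result on active equity as a multiple, probability)
scenarios = [(-1.00, 0.50),   # drawdown: complete loss of active equity
             ( 0.00, 0.25),   # no gain
             ( 3.00, 0.25)]   # good runup: +300%

best_f, best_g = 0.0, float("-inf")
for i in range(1, 1000):
    f = i / 1000.0
    # expected log growth per period at this f
    g = sum(p * math.log(1.0 + f * r) for r, p in scenarios)
    if g > best_g:
        best_f, best_g = f, g
```

The search lands on f of about .111 (the exact maximizer of .5*ln(1-f) + .25*ln(1+3f) is 1/9), which rounds to the .11 allocation quoted in the text.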
Don't confuse this optimal f with the optimal fs of the components of the portfolio you are trading. They are different. Optimal f here pertains to the optimal f of the scenario planning exercise you just performed, which also told you the optimal amount to allocate as active equity for your given parameters. Therefore, given these three scenarios, you are best off in an asymptotic sense to allocate 11% to active equity and the remaining 89% to inactive. At the beginning of the next quarter, you perform this exercise again, and determine your new allocations at that time. Since the amount of funds you have to reallocate for a given quarter is a function of how you have allocated them for the previous quarter, you are best off to use this optimal f amount, as it will provide you with the greatest geometric growth in the long run. (Again, that's provided that your input, the scenarios, their probabilities, and the corresponding results, is accurate.) This scenario planning method of asset allocation is also useful if you are trying to incorporate the opinion of more than one adviser. In our example, rather than pondering three possible scenarios for the next quarter, you might want to incorporate the opinions of three different advisers. The probability column corresponds to how much faith you have in each different adviser. So in our example, the first scenario, a 50% probability of a 100% loss on active equity, corresponds to a very bearish adviser whose opinion deserves twice the weight of the other two advisers. Recall the share average method of pulling out of a program, which was examined in Chapter 2. We can incorporate this concept here as a reallocation method. In so doing, we will be creating a technique that systematically takes profits out of a program advantageously and also takes us out of a losing program. The program calls for pulling out a regular periodic percentage of the total equity in the account (active equity + inactive equity).
Therefore, each month, quarter, or whatever time period you are using, you will pull out X% of your equity. Remember, though, that you want to get enough time in each period to make certain that you are benefiting, at least somewhat, by dynamic fractional f. Any value for N that is enough to satisfy Equation (8.01) is a value for N that we can use and be certain that we are benefiting from dynamic fractional f: (8.01) FG^N <= G^N*FRAC+1-FRAC where FG = The geometric mean for the fractional f, found by Equation (2.08). N = The number of periods, with G and FG figured on the basis of 1 period. G = The geometric mean at the optimal f level. FRAC = The active equity percentage. If we are using an active equity percentage of 20% (i.e., FRAC = .2), then FG must be figured on the basis of a .2 f. Thus, for the case where our geometric mean at full optimal f is 1.01933, and the .2 f (FG) is 1.005, we want a value for N that satisfies the following: 1.005^N <= 1.01933^N*.2+1-.2 We figured our geometric mean for optimal f (G), and therefore also our geometric mean for the fractional f (FG), on a daily basis, and we want to see if 1 quarter is enough time. Since there are about 63 trading days per quarter, we want to see if an N of 63 is enough time to benefit by dynamic fractional f. Therefore, we check Equation (8.01) at a value of 63 for N: 1.005^63 <= 1.01933^63*.2+1-.2 1.369184237 <= 3.340663933*.2+1-.2 1.369184237 <= .6681327866+1-.2 1.369184237 <= 1.6681327866-.2 1.369184237 <= 1.4681327866 The equation is satisfied, since the left side is less than or equal to the right side. Thus, we can reallocate on a quarterly basis under the given values here and be benefiting from using dynamic fractional f. And where do you put this now pulled-out equity? Why, it goes right back into the account as inactive equity. Each period you will figure X% of the total value of your account, and transfer that amount from active to inactive equity. Thus, there is reallocation.
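Equation (8.01) reduces to a one-line check. This is an illustrative sketch; the function name is my own.

```python
def benefits_from_dynamic(g, fg, frac, n):
    # Equation (8.01): True if, over n periods, the dynamic fractional f
    # TWR (g^n on the active fraction, the rest idle) is at least the
    # static fractional f TWR (fg^n on the whole account).
    return fg ** n <= g ** n * frac + 1.0 - frac
```

With G = 1.01933, FG = 1.005, and FRAC = .2, an N of 63 (about one quarter of daily periods) satisfies the inequality, whereas very small N, such as a single day, does not, which is why reallocating too frequently forfeits the benefit.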
For example, again assume a $100,000 account where $20,000 is regarded as the active amount. Say you are share averaging out on a quarterly basis, and the quarterly percentage you pull out is 2%. Now assume that at the beginning of the following quarter the account still stands at $100,000 total equity, of which $20,000 is active equity. You now take out 2% of the total account equity of $100,000 and transfer that amount from active to inactive equity. Therefore, you transfer $2,000 from active to inactive equity, and your $100,000 account now has $18,000 active equity and $82,000 inactive. We hope that the program will outpace the periodic percentage withdrawals to the upside. Suppose that in our last example, our $100,000 account goes to $110,000 at the end of the quarter. Now, when we go to reallocate 2%, $2,200, we debit our active equity amount of $30,000 and credit our inactive amount of $80,000. Thus, we have $27,800 active equity and $82,200 inactive. Since our active equity after the reallocation is still greater than it was at the beginning of the previous period, we can say that the program has outpaced the reallocation. On the other hand, if the program loses money, or if the program goes nowhere (in which case you are risking money repeatedly, yet not making any upward progress on your equity), this technique has you eventually end up with the entire account equity as inactive equity. At that point, you have automatically ceased trading a losing program. Naturally, two questions must now crop up. The first is, "What must this periodic percentage reduction be such that if the account equity were to stagnate after N periodic deductions from active equity, the program would automatically terminate (i.e., active equity equal to 0)?" The solution is given by Equation (8.02): (8.02) P = 1-INACTIVE^ (1/N) where P = The periodic percentage of the total account equity that should be transferred from active to inactive equity. 
INACTIVE = The inactive percent of account equity. N = The number of periods we want the program to terminate in if the equity stagnates. Thus, if we were to make quarterly transfers of equity from active to inactive, and we were using an initial allocation of 80% as inactive equity, and we wanted the program to terminate in 2.5 years (10 quarters, i.e., N = 10), the quarterly percentage would be: P = 1-.8^(1/10) = 1-.8^.1 = 1-.9779327685 = .0220672315 Thus, we should pull out 2.20672315% of the total equity each quarter, and transfer that from active to inactive equity. The second question to arise is, "If we are pulling out a certain given percentage, what must the number of periods be in order for the active equity to equal 0?" In other words, if we know we want to pull out P% each period (again we assume that the periods here are quarters) and if the account equity stagnates, over how many periods, N, must we make these equity transfers until the active equity equals 0. The solution is given by Equation (8.03): (8.03) N = ln(INACTIVE)/ln(1-P) where P = The periodic percentage of the total account equity that will be transferred from active to inactive equity. INACTIVE = The inactive percentage of account equity. N = The number of periods it will take for the program to terminate if the equity stagnates. Again, assume that the initial inactive equity is allocated as 80% and that you are pulling out 2.20672315% per quarter. Therefore, the number of periods, quarters in this case, required until the program terminates if the equity stagnates is: N = ln(.8)/ln(1-.0220672315) = ln(.8)/ln(.9779327685) = -.2231435513/-.0223143551 = 10 For the given values, it would thus take 10 periods for the program to terminate. Share averaging will get us out of a portfolio over time at an above-average price, just as dollar averaging will get us into a portfolio over time at a below-average cost.
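The transfer mechanics of the preceding example, together with Equations (8.02) and (8.03), can be sketched as follows. This is illustrative only; the function names are mine.

```python
import math

def share_average_out(total, inactive, pct):
    # Transfer pct of TOTAL equity (active + inactive) from the active
    # subaccount to the inactive subaccount; returns (active, inactive).
    transfer = total * pct
    return total - inactive - transfer, inactive + transfer

def periodic_pct(inactive_frac, n):
    # (8.02): periodic percentage such that, with flat performance, the
    # active equity reaches zero after n periods.
    return 1.0 - inactive_frac ** (1.0 / n)

def periods_to_zero(inactive_frac, p):
    # (8.03): number of flat periods until active equity reaches zero
    # at withdrawal rate p.
    return math.log(inactive_frac) / math.log(1.0 - p)
```

share_average_out reproduces the example splits above ($18,000/$82,000 on a flat quarter, $27,800/$82,200 after the $110,000 quarter), periodic_pct(.8, 10) gives the 2.20672315% quarterly figure, and feeding that figure back through periods_to_zero(.8, p) returns the 10 periods, confirming that (8.02) and (8.03) are inverses of one another.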
Consider now that most people do just the opposite of this; hence, they are getting into and out of a portfolio at prices worse than average. When someone opens an account to trade, they dump all the trading capital in and just start trading. When they want to add funds, they will almost invariably add in single blocks of cash, rather than making equal dollar deposits over time. A trader trying to live off trading profits will generally withdraw enough money from the account on a periodic basis to cover his living expenses, regardless of what percentage of his account this constitutes. This is exactly what he should not do. Suppose that the trader's living expenses are constant from one month to the next, so he is withdrawing a constant dollar amount. By doing this he is accomplishing the exact opposite of share averaging in that he will be withdrawing a larger percentage of his funds when the account balance is lower, and a smaller percentage when the account balance is higher. In short, he is slowly getting out of the portfolio (or a portion of it) over time at a below-average price. Rather, the trader should withdraw a constant percentage (of total account equity, active plus inactive) each month. The withdrawn funds can be put into a middle account, a simple demand deposit account. Then from this demand deposit account the trader can withdraw a constant dollar amount each month to meet his living expenses. If the trader were to bypass this middle account and withdraw a constant dollar amount directly from the trading account, it would cause the ideas of share averaging and dollar averaging to work against him. Recall from Chapter 2 the observation that when you are trading at the optimal f levels you can expect to be in the worst-case drawdown 35 to 55% of the time period you are looking at. Generally, this doesn't sit well with most traders.
Most traders want or need a much smoother equity curve, either to satisfy the needs of their living expenses or for other, more emotional, reasons. What trader wouldn't like to make a steady $X per day from trading? This 35 to 55% principle is true on a full optimal f basis, and therefore is true on a dynamic fractional f basis as well, but is not true on a static fractional f basis. Since the dynamic is asymptotically better than its static fractional f counterpart, we can expect this 35 to 55% principle to apply to us if we are going to trade our account in the mathematically optimal fashion, that is, at full optimal f for a given level of initial risk (our initial active equity). The establishment of a buffer demand deposit account allows for the account to be traded in the mathematically optimal fashion (dynamic optimal f) while it also allows the share averaging method of reallocation to work (i.e., cash is transferred to the buffer demand deposit account) and allows for a steady dollar outflow from the buffer demand deposit account, thus meeting the trader's needs. Thus, if a trader needs $X per day to meet his needs, be they living expenses or otherwise, these can be satisfied without sabotaging the mathematics in the account by establishing and administering a buffer demand deposit account, and share averaging funds on a periodic basis from the trading program to this buffer account. The trader then makes regular withdrawals of a constant dollar amount from this buffer account. Of course, the regular dollar withdrawals must be for an amount less than the smallest amount transferred from the trading account to the buffer account. For example, if we are looking at a $500,000 account, we are withdrawing 1% per month, and we start out with 20% initial active equity, then we know that our smallest withdrawal from the trading account will be .01*500,000*(1-.2) = .01*500,000*.8 = $4,000.
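The worst-case transfer computed above comes from a one-line floor. This is a sketch; the function name is my own.

```python
def smallest_transfer(start_equity, active_frac, monthly_pct):
    # Worst case: the active portion is entirely lost, so total equity
    # bottoms out at the inactive amount; the smallest periodic transfer
    # to the buffer account is monthly_pct of that floor.
    return monthly_pct * start_equity * (1.0 - active_frac)
```

For the $500,000 account with 20% initial active equity and 1% monthly withdrawals, this returns the $4,000 floor computed above.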
Therefore, our constant dollar withdrawal from the buffer account should be for an amount no greater than $4,000. The buffer account can also be the inactive subaccount. Before we come to the fourth asset allocation technique, a certain confusion must be cleared up. With optimal fixed fractional trading, you can see that you add more and more contracts when your equity increases, and vice versa when it decreases. This technique makes for the greatest geometric growth of your equity in the long run.

WHY REALLOCATE?

Reallocation seems to do just the opposite of what we want to do, in that reallocation trims back after a runup in equity, or adds more equity to the active portion after a period where the equity has been run down. Reallocation is a compromise between the theoretical ideal and the real-life implementation. These techniques allow us to make the most of this compromise. Ideally, you would never reallocate. When your humble little $10,000 account grew to $10 million, it would never go through reallocation. Ideally, you would sit through the drawdown that took your account back down to $50,000 from the $10 million mark before it shot up to $20 million. Ideally, if your active equity were depleted down to 1 dollar, you would still be able to trade a fractional contract (a "microcontract"?). In an ideal world, all of these things would be possible. In real life, you are going to reallocate at some point on the upside or the downside. Given that you are going to do this, you might as well do it in a systematic, beneficial way. In reallocating, or compromising, you "reset" things back to a state you would be at if you were starting the program all over again, only at a different equity level. Then you let the outcome of the trading dictate where the fraction of f used floats to, by using a dynamic fractional f in between reallocations. Things can get levered up awfully fast, even when you start out with an active equity allocation of only 20%.
Remember, you are using the full optimal f on this 20%, and if your program does modestly well, you'll be trading in substantial quantities relative to the total equity in the account in short order.

PORTFOLIO INSURANCE – THE FOURTH REALLOCATION TECHNIQUE

Assume for a moment that you are managing a stock fund. Figure 8-2 depicts a typical portfolio insurance strategy (also known as dynamic hedging). The floor in this example is the current portfolio value of 100 (dollars per share). The typical portfolio follows the equity market 1 for 1. This is represented by the unbroken line. The insured portfolio is depicted here by the dotted line. Note that the dotted line is below the unbroken line when the portfolio is at or above its initial value (100). This difference represents the cost of the portfolio insurance. Otherwise, as the portfolio falls in value, portfolio insurance provides a floor on the value of the portfolio at a desired floor value (in this case the present value of 100) minus the cost of performing the strategy. In a nutshell, portfolio insurance is akin to buying a put option on the portfolio. Suppose the fund you are managing consists of only 1 stock, which is currently priced at 100. Buying a put option on this stock, with a strike price of 100, at a cost of 10, would replicate the dotted line in Figure 8-2. The worst that could happen now to your portfolio of 1 stock and a put option on it is that you could exercise the put, which sells your stock at 100, and you lose the value of the put, 10. Thus, the worst that this portfolio can be worth is 90, no matter how far down the underlying stock goes.
On the upside, your insured portfolio suffers somewhat in that the value of the portfolio is always reduced by the cost of the put.

[Figure 8-2: Portfolio insurance. Insured and uninsured portfolio values plotted against the underlying portfolio value, from 80 to 120.]

Clearly, looking at Figure 8-2 and considering the fundamental equation for trading, the estimated TWR of Equation (1.19c), you can intuitively see that an insured portfolio is superior to an uninsured portfolio in an asymptotic sense. In other words, if you're only as smart as your dumbest mistake, you have put a floor on that dumbest mistake by portfolio insurance. Now consider that being long a call option will give you the same profile as being long the underlying and long a put option with the same strike price and expiration date as the call option. Here, when we speak of the same profile, we mean an equivalent position in terms of the risk/reward characteristics at different values for the underlying. Thus, the dotted line in Figure 8-2 can also represent a portfolio comprised of simply being long the 100 call option at expiration. Here is how dynamic hedging works to provide portfolio insurance. Suppose you buy 100 shares of a single stock for your fund, at a price of $100 per share. You now replicate the call option by using this underlying stock. You do this by determining an initial floor for the stock. The floor you choose is, say, 100. You also determine an expiration date for the hypothetical option you are going to create. Say the expiration date you choose is the date on which this quarter ends. Now you figure the delta for this 100 call option with the chosen expiration date.
You can use Equation (5.05) to find the delta of a call option on a stock (you can use the delta for whatever option model you are using; we're using the Black-Scholes Stock Option Model here). Suppose the delta is .5. This means that you should be 50% invested in the given stock. You would thus have only 50 shares of stock on rather than the 100 shares you would have on if you were not practicing portfolio insurance. As the value of the stock increases, so will the delta, and likewise the number of shares you hold. The upside limit is a delta of 1, where you would be 100% invested. In our example, at a delta of 1 you would have on 100 shares. As the stock price decreases, so does the delta, and so does the size of your position in the stock. The downside limit is at a delta of 0 (where the put delta is -1), at which point you wouldn't have any position in the stock. Operationally, stock fund managers have used noninvasive methods of dynamic hedging. Such a technique involves not having to trade the cash portfolio. Rather, the portfolio as a whole is adjusted to what the current delta should be as dictated by the model by using futures, and sometimes put options. One benefit of using futures is low transaction costs. Selling short futures against the portfolio is equivalent to selling off part of the portfolio and putting it into cash. As the portfolio falls, more futures are sold, and as it rises, these short positions are covered. The loss to the portfolio as it goes up and the short futures positions are covered is what accounts for the portfolio insurance cost, the cost of the replicated put options. Dynamic hedging, though, has the benefit of allowing us to closely estimate this cost at the outset. To managers trying to implement such a strategy, it allows the portfolio to remain untouched while the appropriate asset allocation shifts are performed through futures and/or options trades.
This noninvasive technique of using futures and/or options permits the separation of asset allocation and active portfolio management. To implement portfolio insurance, you must continuously adjust the portfolio to the appropriate delta. This means that, say each day, you must input into the option pricing model the current portfolio value, time of expiration, interest rate levels, and portfolio volatility to determine the delta of the put option you are trying to replicate. Adding this delta (which is a number between 0 and -1) to 1 will give you the corresponding call's delta. This is the hedge ratio, the percentage that you should be invested in the fund. You must make sure that you stay as close to this hedge ratio as possible. Suppose your hedge ratio for the present moment is .46. Say that the size of the fund you are managing is the equivalent to 50 S&P futures contracts. Since you only want to be 46% invested, you want to be 54% dis-invested. Fifty-four percent of 50 contracts is 27 contracts. Therefore, at the present price level of the fund, at this point in time, for the given interest rate and volatility levels, the fund should be short 27 S&P contracts along with its long position in cash stocks. Because the delta needs to be recomputed on an ongoing basis, and portfolio adjustments constantly monitored, the strategy is called a dynamic hedging strategy. One problem with using futures in the strategy is that the futures market does not exactly track the cash market. Further, the portfolio you are selling futures against may not exactly follow the cash index upon which the futures market is traded. These tracking errors can add to the expense of a portfolio insurance program. Furthermore, when the option being replicated gets very near to expiration and the portfolio value is near the strike price, the gamma of the replicated option goes up astronomically. Gamma is the instantaneous rate of change of the delta or hedge ratio. 
In other words, gamma is the delta of the delta. If the delta is changing very fast (i.e., if the replicated option has a high gamma), portfolio insurance becomes increasingly more cumbersome to perform. There are numerous ways to work around this problem, some of which are very sophisticated. One of the simplest involves not only trying to match the delta of the replicated option, but using futures and options together to match both the delta and gamma of the replicated option. Again, this high gamma usually becomes a problem only when expiration draws near and the portfolio value and the replicated option's strike price are very close. There is a very interesting relationship between optimal f and portfolio insurance. When you enter a position, you can state that f percent of your funds are invested. For example, consider a gambling game in which your optimal f is .5, your biggest loss is -1, and your bankroll is $10,000. In such a case, you would bet $1 for every $2 in your stake, since -1, the biggest loss, divided by -.5, the negative optimal f, is 2. Dividing $10,000 by 2 yields $5,000. You would therefore bet $5,000 on the next bet, which is f percent, 50%, of your bankroll. Had you multiplied your bankroll of $10,000 by f, .5, you would have arrived at the same $5,000 result. Hence, you have bet f percent of your bankroll. Likewise, if your biggest loss were $250 and everything else remained the same, you would be making 1 bet for every $500 in your bankroll (since -$250/-.5 = $500). Dividing $10,000 by $500 means that you would make 20 bets. Since the most you can lose on any one bet is $250, you have thus risked f percent, 50%, of your stake in risking $5,000 ($250*20). We can therefore state that f equals the percentage of our funds at risk, or f equals the hedge ratio.
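The gambling example above can be checked with a short sketch (all figures are the ones given in the text):

```python
# Optimal f = .5, biggest loss -$250, bankroll $10,000, per the text.
f = 0.5
biggest_loss = -250.0
bankroll = 10_000.0

f_dollars = biggest_loss / -f            # dollars of bankroll per bet: $500
bets = bankroll / f_dollars              # number of bets: 20
amount_at_risk = bets * -biggest_loss    # $5,000, i.e. f percent (50%) of the bankroll
print(f_dollars, bets, amount_at_risk)
```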
Since f is only applied on the active portion of our portfolio in a dynamic fractional f strategy, the hedge ratio of the portfolio is:

(8.04a) H = f*A/E

where H = The hedge ratio of the portfolio.
      f = The optimal f (0 to 1).
      A = The active portion of funds in an account.
      E = The total equity of the account.

Equation (8.04a) gives us the hedge ratio for a portfolio being traded on a dynamic fractional f strategy. Portfolio insurance is also at work in a static fractional f strategy, only the quotient A/E equals 1, and the value for f, the optimal f, is multiplied by whatever value we are using for the fraction of f. Thus, in a static fractional f strategy the hedge ratio is:

(8.04b) H = f*FRAC

where H = The hedge ratio of the portfolio.
      f = The optimal f (0 to 1).
      FRAC = The fraction of optimal f that you are using.

Since there is usually more than one market system working in an account, we must account for this. When this is the case, the variable f in Equation (8.04a) or (8.04b) must be calculated as:

(8.05) f = ∑[i = 1,N] fi*Wi

where f = The f (0 to 1) to be input in Equation (8.04a) or (8.04b).
      N = The total number of market systems in the portfolio.
      Wi = The weighting of the ith component in the portfolio (from the identity matrix).
      fi = The f factor (0 to 1) of the ith component in the portfolio.

We can state that in trading an account on a dynamic fractional f basis we are performing portfolio insurance. Here, the floor is equal to the initial inactive equity plus the cost of performing the insurance. However, it is often simpler to refer to the floor of a dynamic fractional f strategy as simply the initial inactive equity of an account. We can state that Equation (8.04a) or (8.04b) equals the delta of the call option in the terms used in portfolio insurance. Further, we find that this delta changes much the way a call option that is deep out-of-the-money and very far from expiration changes.
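Equations (8.04a) and (8.05) can be sketched as follows; the component f values, weightings, and equity figures here are hypothetical, chosen only to illustrate the computation:

```python
def portfolio_f(fs, weights):
    """Equation (8.05): composite f as the sum of each component's f
    times its portfolio weighting."""
    return sum(fi * wi for fi, wi in zip(fs, weights))

def hedge_ratio(f, active, total_equity):
    """Equation (8.04a): H = f*A/E."""
    return f * active / total_equity

# Hypothetical two-component portfolio, 20% active equity.
f = portfolio_f([0.5, 0.4], [0.6, 0.8])
H = hedge_ratio(f, active=20_000, total_equity=100_000)
print(f, H)
```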
Thus, by using a constant inactive dollar amount, trading an account on a dynamic fractional f strategy is equivalent to owning a put option on the portfolio that is deep in-the-money and very far out in time. Equivalently, we can state that trading a dynamic fractional f strategy is the same as owning a call option on the portfolio that doesn't expire for a very long time and is very far out-of-the-money, rather than the portfolio itself. This quality, this relationship to portfolio insurance, is true for any dynamic fractional f strategy, whether we are using share averaging, scenario planning, or investor utility. It is also possible to use portfolio insurance as a reallocation technique to "steer" performance somewhat. This steering may be analogous to trying to steer a tanker with a rowboat oar, but this is a valid reallocation technique. The method involves setting parameters for the program initially. First you must determine a floor value. Once this has been chosen, you must decide upon an expiration date, a volatility level, and other input parameters for the particular option model you intend to use. These inputs will give you the option's delta at any given point in time. Once the delta is known, you can determine what your active equity should be. Since the delta for the account, the variable H in Equation (8.04a), must equal the delta for the call option being replicated, D, we can replace H in Equation (8.04a) with D:

D = f*A/E

Therefore:

(8.06) D/f = A/E if D < f (otherwise A/E = 1)

where D = The delta of the call option being replicated.
      f = The f (0 to 1) from Equation (8.05).
      A = The active portion of funds in an account.
      E = The total equity of the account.

Since A/E is equal to the percentage of active equity, we can state that the percentage of the total account equity that we should have in active equity is equal to the delta on the call option divided by the f determined in Equation (8.05).
However, you will note that if D is greater than f, then it is suggesting that you allocate greater than 100% of an account's equity as active. Since this is not possible, there is an upper limit of 100% of the account's equity that can be used as active equity. You can use Equation (5.05) to find the delta of a call option on a stock, or Equation (5.08) to find the delta of a call option on a future. The problem with implementing portfolio insurance as a reallocation technique, as detailed here, is that reallocation is taking place constantly. This detracts from the fact that a dynamic fractional f strategy will asymptotically dominate a static fractional f strategy. As a result, trying to steer performance by way of portfolio insurance as a dynamic fractional f reallocation strategy probably isn't such a good idea. However, any time you use dynamic fractional f, you are employing portfolio insurance. We now cover an example of portfolio insurance. Recall our geometric optimal portfolio of Toxico, Incubeast, and LA Garb. We found the geometric optimal portfolio to exist at V = .2457. We must now convert this portfolio variance into the volatility input for the option pricing model. Recall that this input is described as the annualized standard deviation. Equation (8.07) allows us to convert between the portfolio variance and the volatility estimate for an option on the portfolio:

(8.07) OV = (V^.5)*ACTV*YEARDAYS^.5

where OV = The option volatility input for an option on the portfolio.
      V = The variance on the portfolio.
      ACTV = The current active equity portion of the account.
      YEARDAYS = The number of market days in a year.

If we assume a year of 251 market days and an active equity percentage of 100% (1.00) for the sake of simplicity:

OV = (.2457^.5)*1*251^.5
   = .4956813493*15.84297952
   = 7.853069464

This corresponds to a volatility of over 785%!
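Equation (8.07), computed with the figures from the text:

```python
import math

# Portfolio variance V = .2457, 100% active equity, and 251 market days
# per year, per the text.
V, ACTV, YEARDAYS = 0.2457, 1.0, 251

OV = math.sqrt(V) * ACTV * math.sqrt(YEARDAYS)
print(OV)  # roughly 7.853, i.e. an annualized volatility above 785%
```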
Remember, this is the annualized volatility on the portfolio being traded at the optimal f level with 100% of the account designated as active equity. As a result, we are going to get very high volatility readings. Since we are going to demonstrate portfolio insurance as a reallocation technique, we must use 1.00 as the value for ACTV. Equation (5.05) will give us the delta on a particular call option as:

(5.05) Call Delta = N(H)

The H term in (5.05) is given by (5.03) as:

(5.03) H = ln(U/(E*EXP(-R*T)))/(V*T^(1/2))+(V*T^(1/2))/2

where U = The price of the underlying instrument.
      E = The exercise price of the option.
      T = Decimal fraction of the year to expiration.
      V = The annual volatility in percent.
      R = The risk-free rate.
      ln() = The natural logarithm function.
      N() = The cumulative Normal density function, as given in Equation (3.21).

Notice that we are using the stock option pricing model here. We now use our answer for OV as the volatility input, V, in Equation (5.03). If we assume the risk-free rate, R, to be 6% and the decimal fraction of the year left till expiration, T, to be .25, Equation (5.03) yields:

H = ln(100/(100*EXP(-.06*.25)))/(7.853069464*.25^.5)+(7.853069464*.25^.5)/2
  = ln(100/(100*EXP(-.015)))/(7.853069464*.5)+(7.853069464*.5)/2
  = ln(100/(100*.9851119396))/(7.853069464*.5)+(7.853069464*.5)/2
  = ln(100/98.51119396)/3.926534732+3.926534732/2
  = ln(1.015113065)/3.926534732+1.963267366
  = .015/3.926534732+1.963267366
  = .00382+1.963267366
  = 1.967087528

This answer represents the H portion of (5.05).
We must now run this through Equation (3.21) as the Z variable to obtain the actual call delta:

(3.21) N(Z) = 1-N'(Z)*((1.330274429*Y^5)-(1.821255978*Y^4)+(1.781477937*Y^3)-(.356563782*Y^2)+(.31938153*Y))

where Y = 1/(1+.2316419*ABS(Z))
      N'(Z) = .398942*EXP(-(Z^2/2))

Thus:

Y = 1/(1+.2316419*ABS(1.967087528))
  = 1/(1+.4556598925)
  = 1/1.4556598925
  = .6869736574

Now solving for the term N'(1.967087528):

N'(1.967087528) = .398942*EXP(-(1.967087528^2/2))
  = .398942*EXP(-(3.869433343/2))
  = .398942*EXP(-1.934716672)
  = .398942*.1444651941
  = .05763323346

Now, plugging the values for Y and N'(1.967087528) into (3.21) to obtain the actual call delta as given by Equation (5.05):

N(Z) = 1-.05763323346*((1.330274429*.6869736574^5)-(1.821255978*.6869736574^4)+(1.781477937*.6869736574^3)-(.356563782*.6869736574^2)+(.31938153*.6869736574))
  = 1-.05763323346*((1.330274429*.1530031)-(1.821255978*.2227205)+(1.781477937*.3242054)-(.356563782*.4719328)+(.31938153*.6869736))
  = 1-.05763323346*(.2035361115-.405631042+.5775647672-.168274144+.2194066794)
  = 1-.05763323346*.4266023721
  = 1-.02458647411
  = .9754135259

Thus, we have a delta of .9754135259 on our hypothetical call option for a portfolio trading at a price of 100%, with a strike price of 100%, with .25 of a year left to expiration, a risk-free rate of 6%, and a volatility on this portfolio of 785.3069464%. Now recall that the sum of the weights on this geometric optimal portfolio consisting of Toxico, Incubeast, and LA Garb, per Equation (8.05), is 1.9185357. Thus, per Equation (8.06), we would reallocate to 50.84156244% (.9754135259/1.9185357) active equity if we were using portfolio insurance to reallocate. "What is the cost of this insurance?" That depends upon the volatility that will actually be seen over the life of the replicated option.
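The hand computation above, Equation (5.03) for H, the polynomial approximation of Equation (3.21) for N(Z), and Call Delta = N(H) per Equation (5.05), can be verified with a short sketch:

```python
import math

def cumulative_normal(z):
    """Equation (3.21): polynomial approximation to the cumulative Normal
    distribution (the formula as written applies for z >= 0; it is
    mirrored for z < 0)."""
    y = 1.0 / (1.0 + 0.2316419 * abs(z))
    n_prime = 0.398942 * math.exp(-z * z / 2.0)
    poly = (1.330274429 * y**5 - 1.821255978 * y**4 + 1.781477937 * y**3
            - 0.356563782 * y**2 + 0.31938153 * y)
    n = 1.0 - n_prime * poly
    return n if z >= 0 else 1.0 - n

def call_delta(U, E, T, V, R):
    """Equations (5.03) and (5.05): Call Delta = N(H)."""
    H = (math.log(U / (E * math.exp(-R * T))) / (V * math.sqrt(T))
         + V * math.sqrt(T) / 2.0)
    return cumulative_normal(H)

delta = call_delta(U=100, E=100, T=0.25, V=7.853069464, R=0.06)
active_pct = delta / 1.9185357  # Equation (8.06): delta over the summed f of the weights
print(delta, active_pct)        # about .9754 and .5084, matching the text
```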
For instance, if the equity in the account were not to fluctuate at all over the life of the replicated option (volatility equal to 0), the replicated option, the insurance, would cost us nothing. This is a great benefit of portfolio insurance versus outright buying a put option (assuming one was available on our portfolio). We pay the actual theoretical price of the option for the volatility actually encountered, not the volatility perceived by the marketplace before the fact, as would be the case with actually buying the put option. Further, actually buying the put option (again assuming one was available) entails a bid-ask spread that is circumvented by replicating the option.

THE MARGIN CONSTRAINT

Here is a problem that continuously crops up when we take any of the fixed fractional trading techniques out of its theoretical context and apply it in the real world. We have seen that anytime an additional market system is added to the portfolio, so long as the linear correlation coefficient of daily equity changes between that market system and another market system in the portfolio is less than +1, the portfolio is improved. That is to say that the geometric mean of daily HPRs is increased. Thus, it stands to reason that you would want to have as many market systems as possible in a portfolio. Naturally, at some point, margin considerations become a problem. Even if you are trading only 1 market system, margin considerations can often be a problem. Consider that the optimal f in dollars is very often less than the initial margin requirement for a given market. Now, depending on what fraction of f you are using at the moment, whether you are using a static or dynamic fractional f strategy, you will encounter a margin call if the fraction is too high. When you trade a portfolio of market systems, the problem of a margin call becomes even more likely. With an unconstrained portfolio, the sum of the weights is often considerably greater than 1.
When you trade only 1 market system, the weight is, de facto, 1. If the sum of the weights of the market systems you are trading is, say, 3, then the likelihood of a margin call is 3 times as great as it would be if you were trading just 1 market. What is needed is a way to reconcile how to create an optimal portfolio within the bounds of the margin requirements on the components in the portfolio. This can very easily be found. The way to accomplish this is to find what fraction of f you can use as an upper limit. This upper limit, U, is given by Equation (8.08) as:

(8.08) U = ∑[i = 1,N] fi$/((∑[i = 1,N] margini$)*N)

where U = The upside fraction of f. At this particular fraction of f you are trading the optimal portfolio as aggressively as possible without incurring an initial margin call.
      fi$ = The optimal f in dollars for the ith market system.
      margini$ = The initial margin requirement of the ith market system.
      N = The total number of market systems in the portfolio.

If U is greater than 1, then use 1 as the answer for U. For instance, suppose we have a portfolio with the three market systems as follows, with the following optimal fs in dollars and the following initial margin requirements. (Note: the f$ are the optimal fs in dollars for each market system in the portfolio. This represents the market system's individual optimal f$ divided by its weighting in the portfolio.)

Market System    f$        Initial Margin
A                $2,500    $2,000
B                $2,000    $2,000
C                $3,000    $2,000
Sums             $7,500    $6,000

Now, per Equation (8.08), we use the sum of the f$ column in the numerator, which is $7,500, and divide by the sum of the initial margin requirements, $6,000, times the number of markets, N, which is 3:

U = $7,500/($6,000*3)
  = 7,500/18,000
  = .4167

Therefore, we can determine that, as an upside limit, our fraction of f cannot exceed 41.67% in this case (that is, if we are employing a dynamic fractional f strategy).
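Equation (8.08) applied to the three-market-system example above:

```python
# Optimal f in dollars and initial margin per market system, from the text.
f_dollars = [2500, 2000, 3000]
margins = [2000, 2000, 2000]
N = len(f_dollars)

U = sum(f_dollars) / (sum(margins) * N)
U = min(U, 1.0)  # if U is greater than 1, use 1 as the answer for U
print(round(U, 4))  # 0.4167
```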
Therefore, we must reallocate when our active equity divided by our total equity in the account equals or exceeds .4167. If, however, you are still employing a static fractional f strategy (despite my protestations), then the highest you should set that fraction to is .4167. This will put you on the unconstrained geometric efficient frontier, to the left of the optimal portfolio, but as far to the right as possible without encountering a margin call. To see this, suppose we have a $100,000 account. We set our fractional f values to a .4167 fraction of optimal. Therefore, for each market system:

Market System    f$        f$/.4167 = New f$
A                $2,500    $6,000
B                $2,000    $4,800
C                $3,000    $7,200

For a $100,000 account, we will trade 16 contracts of market system A (100,000/6,000), 20 contracts of market system B (100,000/4,800), and 13 contracts of market system C (100,000/7,200). The resulting margin requirement for such a portfolio is:

16*$2,000 = $32,000
20*$2,000 = $40,000
13*$2,000 = $26,000
Initial margin requirement = $98,000

Notice that using this formula (8.08) yields the highest fraction for f (without incurring an initial margin call) that gives you the same ratios of the different market systems to one another. Hence, Equation (8.08) returns the unconstrained optimal portfolio at its least diluted state without incurring an initial margin call. Notice in the previously cited example that if you are trading a fractional f strategy, the value returned from Equation (8.08) is the maximum fraction for f you can get to without incurring an initial margin call. Again consider a $100,000 account. Assume that at one time, when you opened this account, it had $70,000 in it. Further assume that of that initial $70,000 you allocated $58,330 as inactive equity. Thus, you initially started out at a roughly 83:17 percentage split between inactive and active equity. You have traded the active portion at the full optimal f values. Now your account stands at $100,000.
You still have $58,330 as inactive equity; therefore your active equity is $41,670, which is .4167 of your total equity. This should now be the maximum fraction you can use, the maximum ratio of active to total equity, without incurring a margin call. Recall that you are trading at the full f levels. Therefore, you will trade 16 contracts of market system A (41,670/2,500), 20 contracts of market system B (41,670/2,000), and 13 contracts of market system C (41,670/3,000). The resultant margin requirement for such a portfolio is:

16*$2,000 = $32,000
20*$2,000 = $40,000
13*$2,000 = $26,000
Initial margin requirement = $98,000

Again we can see that this is pushing it as much as possible without incurring a margin call, since we have $100,000 total equity in the account. Recall from Chapter 2 the fact that adding more and more market systems results in higher and higher geometric means for the portfolio as a whole. However, there is a tradeoff in that each market system adds marginally less benefit to the geometric mean, but marginally more detriment in the way of efficiency loss due to simultaneous rather than sequential outcomes. Therefore, you do not want to trade an infinite number of market systems. What's more, theoretically optimal portfolios run into the real-life application problem of margin constraints. In other words, you are better off to trade 3 market systems at the full optimal f levels than to trade 300 market systems at dramatically reduced levels as a result of Equation (8.08). Usually, you will find that the optimal number of market systems to trade in, particularly when you have many orders to place and the potential for mistakes, is but a handful. If one or more market systems in the portfolio have optimal weightings greater than 1, a potential problem emerges. For example, assume a market system with an optimal f of .8 and a biggest loss of $4,000. Therefore, f$ is $5,000. Let's suppose the optimal weighting for this component of the portfolio is 1.25.
Therefore you will trade one unit of this component for every $4,000 ($5,000/1.25) in account equity. As you can see, as soon as the component sees its largest loss, all of the active equity in the account will be wiped out (unless profits are sufficient in the other market systems to salvage some active equity). This problem tends to crop up for systems that trade infrequently. For example, recall that if we could have two market systems with perfect negative correlation and a positive expectation, we would optimally have on an infinite number of contracts. When one of the components lost, the other would win an equal or greater amount. Thus, we would always have a net profit on each play. However, these market systems are always having a simultaneous play. The situation being discussed is analogous to this hypothetical situation when one of these components is not active on a certain play. Now there's only one market system active on a given play, and that market system has on an infinite number of contracts. A loss is catastrophic. The solution is to divide 1 by the highest weighting of any of the components in the portfolio and use the answer as the upper limit on active equity if the answer is less than the answer to Equation (8.08). This ensures that if a loss is encountered in the future of the same magnitude as the largest loss over which f was derived, it will not wipe out the account. For example, suppose the highest weighting of any component in our portfolio is 1.25. Then if Equation (8.08) does not give us an answer less than .8 (1/1.25), we will use .8 as our upper limit on our active equity percentage. This is unlikely to be a problem if you start with a low active equity percentage. However, a more aggressive trader may encounter this problem. 
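The safeguard above amounts to taking the lesser of the Equation (8.08) limit and 1 divided by the largest component weighting; a minimal sketch, using the .4167 and 1.25 figures from the text:

```python
U_margin = 0.4167      # upper limit from Equation (8.08)
max_weight = 1.25      # largest component weighting in the portfolio

active_limit = min(U_margin, 1.0 / max_weight)
print(active_limit)  # 0.4167 here, since 1/1.25 = .8 is the larger of the two
```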
An alternative solution is to set additional constraints in the portfolio matrix (such as constraints on the maximum weighting for each market system being set to 1, as well as constraints pertaining to margin). These additional linear programming constraints may be slightly beneficial to the aggressive trader, but the matrix solutions can be involved. Interested readers are again referred to Childress.

ROTATING MARKETS

Many traders use systems or techniques that have them monitoring many markets all the time, filtering for what they feel are the best markets for the systems at the moment. For example, some traders may prefer to monitor the volatility in all of the futures markets and trade only those markets whose volatility exceeds a certain amount. Sometimes they will be in many markets, sometimes they won't be in any. Further, the markets that they are in are constantly changing. This changing composition seems to be a particular problem for stock fund managers. How can we manage such a thing and still be at the optimal portfolio? The solution is really quite simple. Anytime a market is added to or deleted from the portfolio, the new unconstrained geometric optimal portfolio is calculated as detailed in this chapter. Any adjustments to existing positions, in terms of the quantity that should be on in light of the newly added or deleted market system, ought to be made as well. In a nutshell, it is all right to have a constantly changing portfolio in terms of components. The goal for the manager of such a portfolio, however, is to have the portfolio always be the unconstrained geometric optimal of the components involved and to keep the inactive equity amount constant. In so doing, a constantly changing portfolio composition can be managed in a manner that is asymptotically optimal. There is a potential problem with this type of trading from a portfolio standpoint. An example may help illustrate. Imagine two highly correlated markets, such as gold and silver.
Now imagine that your system trades so infrequently that you have never had a position in both of these markets on the same day. When you determine the correlation coefficients of the daily equity changes, it is quite possible that the correlation coefficient you will show between gold and silver is 0. However, if in the future you have a trade in both markets simultaneously, you can expect them to have a high positive correlation. To solve this problem, it is helpful to edit your correlation coefficients with an eye toward this type of situation. In short, don't be afraid to edit the correlation coefficients upward. However, be wary of moving them lower. Suppose you show the correlation coefficient between Bonds and Soybeans as 0, but you feel it should be lower, say -.25. You really should not adjust correlation coefficients lower, as lower correlation coefficients tend to have you increase position size. In short, if you're going to err in the correlation coefficients, err by moving them upward rather than downward. Moving them upward will tend to move the portfolio to the left of the peak of the portfolio's f curve, while moving them lower will tend to move you to the right of the peak of the portfolio's f curve. Often people try to filter trades in such a manner as to have them in a particular market during certain times and out at others, in an attempt to lower drawdown. If the filtering technique works, that is, if it lowers drawdown on a one-unit basis, then the f that is optimal for the filtered trades will be higher (and f$ lower) than for the entire series of trades before filtering. If the trader applies the optimal f derived over the entire prefiltered series to the postfiltered series, she will find herself at a fractional f on the postfiltered series and hence cannot be obtaining a geometric optimal portfolio.
On the other hand, if the trader applies the optimal f on the postfiltered series, she can obtain the geometric optimal portfolio, but she is right back to the problem of impending large drawdowns at the optimal f. She seems to have defeated the purpose of her filter. This illustrates the fallacy of filters from a money-management standpoint. Filters might work (reduce drawdown on a one-unit basis) only because they cause the trader to be at a fraction of the optimal f. Why filter at all? We could state that we benefit by filtering if our answer to the fundamental equation of trading on postfiltered trades at the prefiltered optimal f is greater than the answer to the fundamental equation of trading on prefiltered trades at the prefiltered optimal f. It is important to note when making such a comparison that the postfiltered trades are fewer in number (have a lower N) than the prefiltered trades.

TO SUMMARIZE

We have seen that trading on a fixed fractional basis makes the most money in an asymptotic sense. It maximizes the ratio of potential gain to potential loss. Once we have an optimal f value, we can convert our daily equity changes on a 1-unit basis to HPRs, determine the arithmetic average HPR and the standard deviation in those HPRs, and calculate the correlation coefficient of the HPRs between any two market systems. We can then use these parameters as inputs in determining the optimal weightings for an optimal portfolio. (Since we are using leveraged vehicles, weighting and quantity are not synonymous, as they would be if there were no leverage involved.) These weightings then are reflected back into the f values, the amounts we should finance each contract by, as the f values are divided by their respective weightings. This gives us new f values, which result in the greatest geometric growth with respect to the intercorrelations of the other market systems and their weightings.
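The conversion just described can be sketched as follows. This is a minimal illustration with made-up one-unit daily equity changes and made-up f values; it assumes the usual optimal-f conversion HPR = 1 + f * (change / abs(biggest loss)).

```python
from statistics import mean, pstdev

def hprs(daily_changes, f):
    # HPR for each day at a given f: 1 + f * (change / abs(biggest loss))
    biggest_loss = abs(min(daily_changes))
    return [1 + f * (c / biggest_loss) for c in daily_changes]

def corr(xs, ys):
    # plain Pearson correlation coefficient of two equal-length series
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# hypothetical 1-unit daily equity changes for two market systems
a = [125, -50, 300, -200, 75]
b = [90, -40, 260, -150, 60]
ha, hb = hprs(a, 0.35), hprs(b, 0.40)

# the three inputs for this pair: average HPR, std dev, correlation
print(mean(ha), pstdev(ha), corr(ha, hb))
```

Note that the HPRs are an affine transform of the raw changes, so the correlation of the HPRs equals the correlation of the underlying daily equity changes.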
The greatest geometric growth is obtained by using that set of weightings whose sum is unconstrained and whose arithmetic average HPR minus its standard deviation in HPRs squared (its variance) equals 1 [Equation (7.06c)]. Rather than being diluted (which only puts you farther left on the unconstrained efficient frontier), as is the case with a static fractional f strategy, this portfolio is traded full out with only a fraction of the funds in the account. Such a technique is called a dynamic fractional f strategy. The remaining funds, the inactive equity, are left untouched by the activity that goes on in the active funds. Since this active portion is being traded at the optimal levels, fluctuations in this active equity will be swift. As a result, at some point on the upside or downside in the equity fluctuations, or at some point in time, you will likely find it necessary, even if only from an emotional standpoint, to reallocate funds between the active and inactive portions. Four methods of doing so have been explained, although other, possibly better, methods may exist:

1. Investor Utility.
2. Scenario Planning.
3. Share Averaging.
4. Portfolio Insurance.

The fourth method, portfolio insurance or dynamic hedging, is inherent in any dynamic fractional f strategy, but it can also be utilized as a reallocation method. We have further seen that taking the unconstrained geometric optimal portfolio and applying it in real time will most likely encounter a problem in terms of the initial margin requirements. This problem can be alleviated by determining an upper limit for the ratio of active equity to total account equity.

APPLICATION TO STOCK TRADING

The techniques that have been described in this book apply not only to futures traders, but to traders in any market. Even someone trading a portfolio of only blue chip stocks is not immune from the principles and the consequences discussed in this book.
You have seen that such a portfolio of blue chip stocks has an optimal level of leverage where the ratio of potential gains to potential losses in equity is maximized. At such a level, the drawdowns to be expected are also quite severe, and therefore the portfolio ought to be diluted, preferably by way of a dynamic fractional f strategy. The entire procedure can be performed exactly as though the stock being traded were a commodity market system. For instance, suppose Toxico were trading at $40 per share. The cost of 100 shares of Toxico would be $4,000. This 100-share block of Toxico can be treated as 1 contract of the Toxico market system. Thus, if we were operating in a cash account, we could replace the margin$ variable in Equation (8.08) with the value of 100 shares of Toxico ($4,000 in this example). In so doing, we can determine the upper limit on the fraction of f to use such that we never have to even perform the procedure in a margin account. When you are doing this type of exercise, remember that you are replicating a leveraged situation, but there isn't really any borrowing or lending going on. Therefore, you should use an RFR of 0 in any calculations (such as the Sharpe ratio) that require an RFR. On the other hand, if we perform the procedure in a margin account, and if initial margin levels are, say, 50%, then we would use a value of $2,000 for the margin$ variable for Toxico in (8.08). Traditionally, stock fund managers have used portfolios where the sum of the weights is constrained to 1. Then they opt for that portfolio composition which gives the lowest variance for a given level of arithmetic return. The resultant portfolio composition is expressed in the form of the weights, or percentages of the trading account, to apply to each component of the portfolio. By lifting this sum-of-the-weights constraint and opting for the single portfolio that is geometric optimal, we get the optimal leveraged portfolio.
Here, the weights and quantities are completely different. We now divide the optimal amount to finance 1 unit of each component by its respective weighting; the result is the optimal leverage for each component in the portfolio. Now, we can dilute this portfolio by marrying it to the risk-free asset. We can dilute the portfolio to the point where there really isn't any leverage involved. That is, we are leveraging the active equity portion of the portfolio, but the active equity portion is actually borrowing its own money, interest-free, from the inactive equity portion. The result is a portfolio, and a method of adding to and trimming back positions as the equity in the account changes, that will yield the greatest geometric growth. Since such a method maximizes the ratio of potential geometric growth to potential loss, and allows the maximum acceptable loss to be essentially specified at the outset, it can also be argued to be a superior means of managing a stock portfolio. The current generally accepted procedure for determining the efficient frontier will not really yield the efficient frontier, much less the portfolio that is geometric optimal (the geometric optimal portfolio always lies on the efficient frontier). This can be derived only by incorporating the optimal f. Further, the generally accepted procedure yields a portfolio that gets traded on a static f basis rather than on a dynamic basis, the latter being asymptotically infinitely more powerful.

A CLOSING COMMENT

This is a very exciting time to be in this field. New concepts have been emerging nearly continuously since the mid-1950s. We have witnessed an avalanche of great ideas from the academic community building upon the E-V model. Among the ideas presented has been the E-S model.
With the E-S model the measure of risk is semivariance in lieu of variance.1 Semivariance is defined as the variation beneath some target level of return, which could be the expected return, zero return, or any other fixed level of return. When this target level of return equals the expected return and the distribution of returns is symmetrical (without skew), the E-S efficient frontier is the same as the E-V efficient frontier. Other portfolio models have been presented using other measures for risk than variance in returns. Still other portfolio models have been presented using moments of the distribution of returns beyond the first two moments. Of particular interest in this regard have been the stochastic dominance approaches, which encompass the entire distribution of returns and hence can be considered the limiting case of multidimensional portfolio analysis as the number of moments incorporated approaches infinity.2 This approach may be particularly useful when the variance in returns is infinite or undefined. Again, I am not a so-called academic. This is neither a boast nor an apology. I am no more an academic than I am a ventriloquist or a TV wrestler. Academics want a model to explain how the markets work. As a nonacademic, I don't care how they work. For example, many people in the academic community argue that the efficient market hypothesis is flawed because there is no such thing as a rational investor. They argue that people do not behave rationally, and therefore conventional portfolio models, such as E-V theory (and its offshoots) and the Capital Asset Pricing model, are poor models of how the markets operate. While I agree that people certainly do not behave rationally, it does not mean that we shouldn't behave rationally or that we cannot benefit by behaving rationally. When variance in returns is finite, we can certainly benefit by being on the efficient frontier. 
There has been much debate in recent years over the usefulness of current portfolio models in light of the fact that the distribution of the logs of price changes appears to be stable Paretian with infinite (or undefined) variance. Yet many studies demonstrate that the markets in recent years have seen a move toward Normality (therefore finite variance) and independence, which the portfolio models being criticized assume.3 Further, the portfolio models use the distribution of returns as input, not the distribution of the logs of price changes. Whereas the distribution of returns is a transformed distribution of the logs of price changes (transformed by techniques such as cutting losses short and letting profits run), they are not necessarily the same distribution, and the distribution of returns may not be a member of the stable Paretian (which is why we modeled the distribution of trade P&L's in Chapter 4 with our adjustable distribution). Furthermore, there are derivative products such as options that have finite semivariance (if long) or finite variance altogether.

1 Markowitz, Harry, Portfolio Selection: Efficient Diversification of Investments. New York: John Wiley, 1959.
2 See Quirk, J. P., and R. Saposnik, "Admissibility and Measurable Utility Functions," Review of Economic Studies, 29(79):140-146, February 1962. Also see Reilly, Frank K., Investment Analysis and Portfolio Management. Hinsdale, IL: The Dryden Press, 1979.
3 See Helms, Billy P., and Terrence F. Martell, "An Examination of the Distribution of Commodity Price Changes," Working Paper Series. New York: Columbia University Center for the Study of Futures Markets, CFSM-76, April 1984. Also see Hudson, Michael A., Raymond M. Leuthold, and Gboroton F. Sarassoro, "Commodity Futures Price Changes: Distribution, Market Efficiency, and Pricing Commodity Options," Working Paper Series. New York: Columbia University Center for the Study of Futures Markets, CFSM-127, June 1986.
For example, a vertical option spread put on at a debit guarantees finite variance in returns. I'm not defending against the attacks on the current portfolio models. Rather, I am playing devil's advocate here. The current portfolio models can be employed provided we are aware of their shortcomings. We no doubt need better portfolio models. It is not my contention that the current portfolio models are adequate. Rather, it is my contention that the input to the portfolio models, current and future, for whatever portfolio models we use, should be based on trading one unit at the optimal level, or what we believe will be the optimal level for that item in the future, as though we were trading only that item. For example, if we are employing E-V theory, the Markowitz model, the inputs are the expected return, variance in returns, and correlation of returns to other market systems. These inputs must be determined from trading one unit on each market system at the optimal f level. Portfolio models other than E-V may require different input parameters. These parameters must be discerned based on trading one unit of the market systems at their optimal f levels. Portfolio models are but one facet of money management, but they are a facet where debate is certain to rage for quite some time. This book could not be definitive in that regard, as newer, better models are yet to be formulated. We most likely will never have a model we all agree upon as being adequate. That should make for a healthy and stimulating debate.

APPENDIX A - The Chi-Square Test

There exist a number of statistical tests designed to determine if two samples come from the same population. Essentially, we want to know if two distributions are different. Perhaps the most well known of these tests is the chi-square test, devised by Karl Pearson around 1900. It is perhaps the most popular of all statistical tests used to determine whether two distributions are different.
The chi-square statistic, X2, is computed as:
(A.01) X2 = ∑[i = 1,N] (Oi-Ei)^2/Ei
where N = The total number of bins.
Oi = The number of events observed in the ith bin.
Ei = The number of events expected in the ith bin.
A large value for the chi-square statistic indicates that it is unlikely that the two distributions are the same (i.e., the two samples are not drawn from the same population). Likewise, the smaller the value for the chi-square statistic, the more likely it is that the two distributions are the same (i.e., the two samples were drawn from the same population). Note that the observed values, the Oi's, will always be integers. However, the expected values, the Ei's, can be nonintegers. Equation (A.01) gives the chi-square statistic when both the expected and observed values are integers. When the expected values, the Ei's, are permitted to be nonintegers, we must use a different equation, known as Yates' correction, to find the chi-square statistic:
(A.02) X2 = ∑[i = 1,N] (ABS(Oi-Ei)-.5)^2/Ei
where N = The total number of bins.
Oi = The number of events observed in the ith bin.
Ei = The number of events expected in the ith bin.
ABS() = The absolute value function.
If we are comparing the number of events observed in a bin to what the Normal Distribution dictates should be in that bin, we must employ Yates' correction. That is because the number of events expected,1 the Ei's, are nonintegers. We now work through an example of the chi-square statistic for the data corresponding to Figure 3-16. This is the 232 trades, converted to standard units, placed in 10 bins from -2 to +2 sigma, and plotted versus what the data would be if it were Normally distributed.
Note that we must use Yates' correction:

Bin#   Observed   Expected     ((ABS(O-E)-.5)^2)/E
 1         1       7.435423      4.738029
 2        17      13.98273        .4531787
 3        25      22.45426        .1863813
 4        27      30.79172        .3518931
 5        38      36.05795        .05767105
 6        61      36.05795      16.56843
 7        37      30.79172       1.058229
 8        12      22.45426       4.41285
 9         4      13.98273       6.430941
10         2       7.435423      3.275994
                                X2 = 37.5336

We can convert a chi-square statistic such as 37.5336 to a significance level. In the sense we are using it here, a significance level is a number between 0, representing that the two distributions are different, and 1, meaning that the two distributions are the same. We can never be 100% certain that two distributions are the same (or different), but we can determine how alike or different two distributions are to a certain significance level. There are two ways in which we can find the significance level. The first, and by far the simplest, way is by using tables. The second way to convert a chi-square statistic to a significance level is to perform the math yourself (which is how the tables were drawn up in the first place). However, the math requires the use of incomplete gamma functions, which, as was mentioned in the Introduction, will not be treated in this text. Interested readers are referred to the Bibliography, in particular to Numerical Recipes. However, most readers who would want to know how to calculate a significance level from a given chi-square statistic would want to know this because tables are rather awkward to use from a programming standpoint. Therefore, what follows is a snippet of BASIC language code to convert from a given chi-square statistic to a significance level.

1 As detailed in Chapter 3, this is determined by the Normal Distribution per Equation (3.21) for each boundary of the bin, taking the absolute value of the differences, and multiplying by the total number of events.

1000 REM INPUT NOBINS%, THE NUMBER OF BINS AND
CHISQ, THE CHI-SQUARE STATISTIC
1010 REM OUTPUT IS CONF, THE CONFIDENCE LEVEL FOR A GIVEN NOBINS% AND CHISQ
1020 PRINT "CHI SQUARE STATISTIC AT"NOBINS%-3"DEGREES FREEDOM IS"CHISQ
1030 REM HERE WE CONVERT FROM A GIVEN CHISQ TO A SIGNIFICANCE LEVEL, CONF
1040 X1 = 0:X2 = 0:X3# = 0:X4 = 0:X5 = 0:X6 = 0:CONF = 0
1050 IF CHISQ < 31 OR (NOBINS%-3) > 2 THEN X6 = (NOBINS%-3)/2-1:X1 = 1 ELSE CONF = 1:GOTO 1110
1060 FOR X2 = 1 TO ((NOBINS%-3)/2-.5):X1 = X1*X6:X6 = X6-1:NEXT
1070 IF (NOBINS%-3) MOD 2 <> 0 THEN X1 = X1*1.77245374942627#
1080 X7 = 1:X4 = 1:X3# = ((CHISQ/2)^((NOBINS%-3)/2))*2/(EXP(CHISQ/2)*X1*(NOBINS%-3)):X5 = NOBINS%-3+2
1090 X4 = X4*CHISQ/X5:X7 = X7+X4:X5 = X5+2:IF X4 > 0 THEN 1090
1100 CONF = 1-X3#*X7
1110 PRINT "FOR A SIGNIFICANCE LEVEL OF ";USING".#########";CONF

Whether you determine your significance levels via a table or calculate them yourself, you will need two parameters to determine a significance level. The first of these parameters is, of course, the chi-square statistic itself. The second is the number of degrees of freedom. Generally, the number of degrees of freedom is equal to the number of bins minus 1, minus the number of population parameters that have to be estimated for the sample statistics. Since there are ten bins in our example and we must use the arithmetic mean and standard deviation of the sample to construct the Normal curve, we must therefore subtract 3 degrees of freedom. Hence, we have 7 degrees of freedom. The significance level of a chi-square statistic of 37.5336 at 7 degrees of freedom is .000002419. Since this significance level is so much closer to zero than one, we can safely assume that our 232 trades from Chapter 3 are not Normally distributed. What follows is a small table for converting between chi-square values and degrees of freedom to significance levels.
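As a cross-check on the worked example, here is a short Python sketch using only the standard library. Two caveats: the first observed count is taken as 1, back-solved from that bin's printed contribution (4.738029), and the tail probability is computed exactly, so its trailing digits may differ slightly from the rounded figures quoted in the text.

```python
import math

# Observed and expected counts for the 10 bins of the worked example;
# expected counts are symmetric about the two center bins.
observed = [1, 17, 25, 27, 38, 61, 37, 12, 4, 2]
expected = [7.435423, 13.98273, 22.45426, 30.79172, 36.05795,
            36.05795, 30.79172, 22.45426, 13.98273, 7.435423]

# Equation (A.02): chi-square statistic with Yates' correction
x2 = sum((abs(o - e) - 0.5) ** 2 / e for o, e in zip(observed, expected))

def chi2_sf(x, dof):
    """P(chi-square > x) at dof degrees of freedom, via the regularized
    upper incomplete gamma function Q(dof/2, x/2): start from
    Q(1/2, y) = erfc(sqrt(y)) (odd dof) or Q(1, y) = exp(-y) (even dof)
    and apply Q(a+1, y) = Q(a, y) + y^a * exp(-y) / gamma(a+1)."""
    y = x / 2.0
    if dof % 2:
        q, a = math.erfc(math.sqrt(y)), 0.5
    else:
        q, a = math.exp(-y), 1.0
    while a < dof / 2.0:
        q += y ** a * math.exp(-y) / math.gamma(a + 1.0)
        a += 1.0
    return q

# the statistic and its significance level (tiny, hence not Normal)
print(round(x2, 4), chi2_sf(x2, 7))
```

The same `chi2_sf` routine reproduces the small table that follows; for instance, 11.1 at 5 degrees of freedom comes out near .05.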
More elaborate tables may be found in many of the statistics books mentioned in the Bibliography:

VALUES OF X2

Degrees of        Significance Level
Freedom      .20     .10     .05     .01
 1           1.6     2.7     3.8     6.6
 2           3.2     4.6     6.0     9.2
 3           4.6     6.3     7.8    11.3
 4           6.0     7.8     9.5    13.3
 5           7.3     9.2    11.1    15.1
10          13.4    16.0    18.3    23.2
20          25.0    28.4    31.4    37.6

You should be aware that the chi-square test can do a lot more than is presented here. For instance, you can use the chi-square test on a 2 x 2 contingency table (actually on any N x M contingency table). If you are interested in learning more about the chi-square test on such a table, consult one of the statistics books mentioned in the Bibliography. Finally, there is the problem of the arbitrary way we have chosen our bins as regards both their number and their range. Recall that binning data involves a certain loss of information about that data, but generally the profile of the distribution remains relatively the same. If we choose to work with only 3 bins, or if we choose to work with 30, we will likely get somewhat different results. It is often a helpful exercise to bin your data in several different ways when conducting statistical tests that rely on binned data. In so doing, you can be rather certain that the results obtained were not due solely to the arbitrary nature of how you chose your bins. In a purely statistical sense, in order for our number of degrees of freedom to be valid, it is necessary that the number of elements in each of the expected bins, the Ei's, be at least five. When there is a bin with less than five expected elements in it, theoretically the number of bins should be reduced until all of the bins have at least five expected elements in them. Often, when only the lowest and/or highest bin has less than five expected elements in it, the adjustment can be made by making these groups "all less than" and "all greater than."

APPENDIX B - Other Common Distributions

One of the many hearts of this book is the broader concept of decision making in environments characterized by geometric consequences. Optimal f is the regulator of growth in such environments, and the by-products of optimal f tell us a great deal about the growth rate of a given environment. You may seek to apply the tools for finding the optimal f parametrically to other fields where there are such environments. For this reason this appendix has been included. This appendix covers many of the other common distributions aside from the Normal. This text has shown how to find the optimal f and its by-products on any distribution. We have seen in Chapter 3 how to find the optimal f and its by-products on the Normal distribution. We can use the same technique to find the optimal f on any other distribution where the cumulative density function is known. It matters not whether the distribution is continuous or discrete. When the distribution is discrete, the equally spaced data points are simply the discrete points along the cumulative density curve itself. When the distribution is continuous, we must contrive these equally spaced data points as we did with the Normal Distribution in Chapter 3. Further, it matters not whether the tails of the distribution go out to plus and minus infinity or are bounded at some finite number. When the tails go to plus and minus infinity, we must determine the bounding parameters (i.e., how far to the left extreme and right extreme we are going to operate on the distribution). The farther out we go, the more accurate our results. If the distribution is bounded on its tails at some finite point already, then these points become the bounding parameters. Finally, in Chapter 4 we learned a technique to find the optimal f and its by-products for the area under any curve (not necessarily just our adjustable distribution) when we do not know the cumulative density function, so we can find the optimal f and its by-products for any process regardless of the distribution. The hardest part is determining what the distribution in question is for a particular process, what the cumulative density function is for that process, and what parameter value(s) are best for our application.

THE UNIFORM DISTRIBUTION

The Uniform Distribution, sometimes referred to as the Rectangular Distribution from its shape, occurs when all items in a population have equal frequency. A good example is the 10 digits 0 through 9. If we were to randomly select one of these digits, each possible selection has an equal chance of occurrence. Thus, the Uniform Distribution is used to model truly random events. A particular type of Uniform Distribution where A = 0 and B = 1 is called the Standard Uniform Distribution, and it is used extensively in generating random numbers. The Uniform Distribution is a continuous distribution. The probability density function, N'(X), is described as:
(B.01) N'(X) = 1/(B-A) for A <= X <= B
       N'(X) = 0 otherwise
where B = The rightmost limit of the interval AB.
A = The leftmost limit of the interval AB.
The cumulative density of the Uniform is given by:
(B.02) N(X) = 0 for X < A
       N(X) = (X-A)/(B-A) for A <= X <= B
       N(X) = 1 for X > B
where B = The rightmost limit of the interval AB.
A = The leftmost limit of the interval AB.

Figure B-1 Probability density functions for the Uniform Distribution (A = 2, B = 7).
Figure B-2 Cumulative probability functions for the Uniform Distribution (A = 2, B = 7).

Figures B-1 and B-2 illustrate the probability density and cumulative probability (i.e., cdf) respectively of the Uniform Distribution. Other qualities of the Uniform Distribution are:
(B.03) Mean = (A+B)/2
(B.04) Variance = (B-A)^2/12
where B = The rightmost limit of the interval AB.
A = The leftmost limit of the interval AB.

THE BERNOULLI DISTRIBUTION

Another simple, common distribution is the Bernoulli Distribution. This is the distribution when the random variable can have only two possible values.
Examples of this are heads and tails, defective and nondefective articles, success or failure, hit or miss, and so on. Hence, we say that the Bernoulli Distribution is a discrete distribution (as opposed to being a continuous distribution). The distribution is completely described by one parameter, P, which is the probability of the first event occurring. The variance in the Bernoulli is:
(B.05) Variance = P*Q
where
(B.06) Q = 1-P

Figure B-3 Probability density functions for the Bernoulli Distribution (P = .5).
Figure B-4 Cumulative probability functions for the Bernoulli Distribution (P = .5).

Figures B-3 and B-4 illustrate the probability density and cumulative probability (i.e., cdf) respectively of the Bernoulli Distribution.

THE BINOMIAL DISTRIBUTION

The Binomial Distribution arises naturally when sampling from a Bernoulli Distribution. The probability density function, N'(X), of the Binomial (the probability of X successes in N trials or X defects in N items or X heads in N coin tosses, etc.) is:
(B.07) N'(X) = (N!/(X!*(N-X)!))*(P^X)*(Q^(N-X))
where N = The number of trials.
X = The number of successes.
P = The probability of a success on a single trial.
Q = 1-P.
It should be noted here that the exclamation point after a variable denotes the factorial function:
(B.08a) X! = X*(X-1)*(X-2)*...*1
which can also be written as:
(B.08b) X! = ∏[J = 0,X-1] (X-J)
Further, by convention:
(B.08c) 0! = 1
The cumulative density function for the Binomial is:
(B.09) N(X) = ∑[J = 0,X] (N!/(J!*(N-J)!))*(P^J)*(Q^(N-J))
where N = The number of trials.
X = The number of successes.
P = The probability of a success on a single trial.
Q = 1-P.

Figure B-5 Probability density functions for the Binomial Distribution (N = 5, P = .5).
Figure B-6 Cumulative probability functions for the Binomial Distribution (N = 5, P = .5).
Figures B-5 and B-6 illustrate the probability density and cumulative probability (i.e., cdf) respectively of the Binomial Distribution. The Binomial is also a discrete distribution. Other properties of the Binomial Distribution are:
(B.10) Mean = N*P
(B.11) Variance = N*P*Q
where N = The number of trials.
P = The probability of a success on a single trial.
Q = 1-P.
As N becomes large, the Binomial tends to the Normal Distribution, with the Normal being the limiting form of the Binomial. Generally, if N*P and N*Q are both greater than 5, you could use the Normal in lieu of the Binomial as an approximation. The Binomial Distribution is often used to statistically validate a gambling system. An example will illustrate. Suppose we have a gambling system that has won 51% of the time. We want to determine what the winning percentage would be if it performs in the future at a level of 3 standard deviations worse. Thus, the variable of interest here, P, is equal to .51, the probability of a winning trade. The variable of interest need not always be the probability of a win. It can be the probability of an event being in one of two mutually exclusive groups. We can now perform the first necessary equation in the test:
(B.12) L = P-Z*((P*(1-P))/(N-1))^.5
where L = The lower boundary for P to be at Z standard deviations.
P = The variable of interest representing the probability of being in one of two mutually exclusive groups.
Z = The selected number of standard deviations.
N = The total number of events in the sample.
Suppose our sample consisted of 100 plays.
Thus:
L = .51-3*((.51*(1-.51))/(100-1))^.5
= .51-3*((.51*.49)/99)^.5
= .51-3*(.2499/99)^.5
= .51-3*.0025242424^.5
= .51-3*.05024183938
= .51-.1507255181
= .3592744819
Based on our history of 100 plays, which generated a 51% win rate, we can state that it would take a 3-sigma event for the population of plays (the future, if we play an infinite number of times into the future) to have less than 35.92744819 percent winners. What kind of a confidence level does this represent? That is a function of N, the total number of plays in the sample. We can determine the confidence level of achieving 35 or 36 wins in 100 tosses by Equation (B.09). However, (B.09) is clumsy to work with as N gets large because of all of the factorial functions it contains. Fortunately, the Normal Distribution, Equation (3.21) for 1-tailed probabilities, can be used as a very close approximation for the Binomial probabilities. In the case of our example, using Equation (3.21), 3 standard deviations translates into 99.865% confidence. Thus, if we were to play this gambling system over an infinite number of times, we could be 99.865% sure that the percentage of wins would be greater than or equal to 35.92744819%. This technique can also be used for statistical validation of trading systems. However, the method is only valid when the following assumptions are true. First, the N events (trades) are all independent and randomly selected. This can easily be verified for any trading system. Second, the N events (trades) can all be classified into two mutually exclusive groups (wins and losses, trades greater than or less than the median trade, etc.). This assumption, too, can easily be satisfied. The third assumption is that the probability of an event being classified into one of the two mutually exclusive groups is constant from one event to the next.
This is not necessarily true in trading, and the technique becomes inaccurate to the degree that this assumption is false. Be that as it may, the technique still can have value for traders. Not only can it be used to determine the confidence level for a certain method being profitable, it can also be used to determine the confidence level for a given market indicator. For instance, if you have an indicator that will forecast the direction of the next day's close, you then have two mutually exclusive groups: correct forecasts and incorrect forecasts. You can now express the reliability of your indicator to a certain confidence level. This technique can also be used to discern how many trials are necessary for a system to be profitable to a given confidence level. For example, suppose we have a gambling system that wins 51% of the time on a game that pays 1 to 1. We want to know how many trials we must observe to be certain, to a given confidence level, that the system will be profitable in an asymptotic sense. Thus we can restate the problem as, "If the system wins 51% of the time, how many trials must I witness, and have it show a 51% win rate, to know that it will be profitable to a given confidence level?" Since the payoff is 1:1, the system must win in excess of 50% of the time to be considered profitable. Let's say we want the given confidence level to again be 99.865%, or 3 standard deviations (although we are using 3 standard deviations in this discussion, we aren't restricted to that amount; we can use any number of standard deviations that we want). How many trials must we now witness to be 99.865% confident that at least 51% of the trials will be winners? If .51-X = .5, then X = .01. Therefore, the right-hand factor of Equation (B.12), Z*((P*(1-P))/(N-1))^.5, must equal .01.
Since Z = 3 in this case, and .01/3 = .0033, then:

((P*(1-P))/(N-1))^.5 = .0033

We know that P equals .51, thus:

((.51*(1-.51))/(N-1))^.5 = .0033

Squaring both sides gives us:

((.51*(1-.51))/(N-1)) = .00001111

To continue:

(.51*.49)/(N-1) = .00001111
.2499/(N-1) = .00001111
.2499/.00001111 = N-1
.2499/.00001111+1 = N
22,491+1 = N
N = 22,492

Thus, we need to witness a 51% win rate over 22,492 trials to be 99.865% certain that we will see at least 51% wins.

THE GEOMETRIC DISTRIBUTION

Like the Binomial, the Geometric Distribution, also a discrete distribution, occurs as a result of N independent Bernoulli trials. The Geometric Distribution measures the number of trials before the first success (or failure). The probability density function, N'(X), is:

(B.13) N'(X) = Q^(X-1)*P

where
P = The probability of success for a given trial.
Q = The probability of failure for a given trial.

In other words, N'(X) here measures the number of trials until the first success. The cumulative density function for the Geometric is therefore:

(B.14) N(X) = ∑[J = 1,X] Q^(J-1)*P

where
P = The probability of success for a given trial.
Q = The probability of failure for a given trial.

Figures B-7 and B-8 illustrate the probability density and cumulative probability (i.e., cdf) respectively of the Geometric Distribution.

Figure B-7 Probability density functions for the Geometric Distribution (P = .6).
Figure B-8 Cumulative probability functions for the Geometric Distribution (P = .6).

Other properties of the Geometric are:

(B.15) Mean = 1/P
(B.16) Variance = Q/P^2

where
P = The probability of success for a given trial.
Q = The probability of failure for a given trial.

Suppose we are discussing tossing a single die. If we are talking about having the outcome of 5, how many times will we have to toss the die, on average, to achieve this outcome? The mean of the Geometric Distribution tells us this. If we know the probability of throwing a 5 is 1/6 (.1667), then the mean is 1/.1667 = 6. Thus we would expect, on average, to toss a die six times in order to get a 5. If we kept repeating this process and recorded how many tosses it took until a 5 appeared, plotting these results would yield the Geometric Distribution function formulated in (B.13).

THE HYPERGEOMETRIC DISTRIBUTION

Another type of discrete distribution related to the preceding distributions is termed the Hypergeometric Distribution. Recall that in the Binomial Distribution it is assumed that each draw in succession from the population has the same probabilities. That is, suppose we have a deck of 52 cards, 26 of which are black and 26 of which are red. If we draw a card and record whether it is black or red, we then put the card back into the deck for the next draw. This "sampling with replacement" is what the Binomial Distribution assumes. Now for the next draw, there is still a .5 (26/52) probability of the next card being black (or red).

The Hypergeometric Distribution assumes almost the same thing, except there is no replacement after sampling. Suppose we draw the first card and it is red, and we do not replace it back into the deck. Now, the probability of the next draw being red is reduced to 25/51 or .4901960784. In the Hypergeometric Distribution there is dependency, in that the probabilities of the next event are dependent on the outcome(s) of the prior event(s). Contrast this to the Binomial Distribution, where an event is independent of the outcome(s) of the prior event(s).

The basic functions N'(X) and N(X) of the Hypergeometric are the same as those for the Binomial, (B.07) and (B.09) respectively, except that with the Hypergeometric the variable P, the probability of success on a single trial, changes from one trial to the next.

It is interesting to note the relationship between the Hypergeometric and Binomial Distributions. As N becomes larger, the differences between the computed probabilities of the Hypergeometric and the Binomial draw closer to each other. Thus we can state that as N approaches infinity, the Hypergeometric approaches the Binomial as a limit.

If you want to use the Binomial probabilities as an approximation of the Hypergeometric, as the Binomial is far easier to compute, how big must the population be? It is not easy to state with any certainty, since the desired accuracy of the result will determine whether the approximation is successful or not. Generally, though, a population to sample size of 100 to 1 is usually sufficient to permit approximating the Hypergeometric with the Binomial.

THE POISSON DISTRIBUTION

The Poisson Distribution is another important discrete distribution. This distribution is used to model arrival distributions and other seemingly random events that occur repeatedly yet haphazardly. These events can occur at points in time or at points along a wire or line (one dimension), along a plane (two dimensions), or in any N-dimensional construct. Figure B-9 shows the arrival of events (the X's) along a line, or in time. The Poisson Distribution was originally developed to model incoming telephone calls to a switchboard. Other typical situations that can be modeled by the Poisson are the breakdown of a piece of equipment, the completion of a repair job by a steadily working repairman, a typing error, the growth of a colony of bacteria on a Petri plate, a defect in a long ribbon or chain, and so on.

The main difference between the Poisson and the Binomial Distributions is that the Binomial is not appropriate for events that can occur more than once within a given time frame. Such an example might be the probability of an automobile accident over the next 6 months.
In the Binomial we would be working with two distinct cases: either an accident occurs, with probability P, or it does not, with probability Q (i.e., 1-P). However, in the Poisson Distribution we can also account for the fact that more than one accident can occur in this time period.

The probability density function of the Poisson, N'(X), is given by:

(B.17) N'(X) = (L^X*EXP(-L))/X!

where
L = The parameter of the distribution.
EXP() = The exponential function.

Note that X must take discrete values. Suppose that calls to a switchboard average four calls per minute (L = 4). The probability of three calls (X = 3) arriving in the next minute is:

N'(3) = (4^3*EXP(-4))/3!
= (64*EXP(-4))/(3*2)
= (64*.01831564)/6
= 1.17220096/6
= .1953668267

So we can say there is about a 19.5% chance of getting 3 calls in the next minute. Note that this is not cumulative; that is, this is not the probability of getting 3 calls or fewer, it is the probability of getting exactly 3 calls. If we wanted to know the probability of getting 3 calls or fewer, we would have to use the N(3) formula [which is given in (B.20)].

Other properties of the Poisson Distribution are:

(B.18) Mean = L
(B.19) Variance = L

where
L = The parameter of the distribution.

In the Poisson Distribution, both the mean and the variance equal the parameter L. Therefore, in our example case we can say that the mean is 4 calls and the variance is 4 calls (or, the standard deviation is 2 calls, the square root of the variance, 4).

When this parameter, L, is small, the distribution is shaped like a reversed J, and when L is large, the distribution is not dissimilar to the Binomial. Actually, the Poisson is the limiting form of the Binomial as N approaches infinity and P approaches 0. Figures B-10 through B-13 show the Poisson Distribution with parameter values of .5 and 4.5.

Figure B-11 Cumulative probability functions for the Poisson Distribution (L = .5).
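The switchboard arithmetic above is easy to check numerically. A minimal sketch of Equation (B.17) (the function name is mine, not from the text):

```python
import math

def poisson_pdf(x, lam):
    """Equation (B.17): probability of exactly x arrivals when the mean rate is lam."""
    return lam ** x * math.exp(-lam) / math.factorial(x)

# Exactly 3 calls in the next minute when L = 4:
print(round(poisson_pdf(3, 4), 10))                        # 0.1953668148
# 3 calls or fewer, i.e. the cumulative N(3) of Equation (B.20):
print(round(sum(poisson_pdf(j, 4) for j in range(4)), 4))  # 0.4335
```

The tiny difference from the text's .1953668267 comes only from the text's rounded value of EXP(-4).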
Figure B-10 Probability density functions for the Poisson Distribution (L = .5).
Figure B-12 Probability density functions for the Poisson Distribution (L = 4.5).
Figure B-13 Cumulative probability functions for the Poisson Distribution (L = 4.5).

The cumulative density function of the Poisson, N(X), is given by:

(B.20) N(X) = ∑[J = 0,X] (L^J*EXP(-L))/J!

where
L = The parameter of the distribution.
EXP() = The exponential function.

Related to the Poisson Distribution is a continuous distribution with a wide utility called the Exponential Distribution, sometimes also referred to as the Negative Exponential Distribution. This distribution is used to model interarrival times in queuing systems, service times on equipment, and sudden, unexpected failures such as equipment failures due to defects in manufacturing, light bulbs burning out, the time that it takes for a radioactive particle to decay, and so on. (There is a very interesting relationship between the Exponential and the Poisson Distributions. The arrival of calls to a queuing system follows a Poisson Distribution, with arrival rate L. The interarrival distribution (the time between the arrivals) is Exponential with parameter 1/L.)

The probability density function N'(X) for the Exponential Distribution is given as:

(B.21) N'(X) = A*EXP(-A*X)

where
A = The single parametric input, equal to 1/L in the Poisson Distribution. A must be greater than 0.
EXP() = The exponential function.

The integral of (B.21), N(X), the cumulative density function for the Exponential Distribution, is given as:

(B.22) N(X) = 1-EXP(-A*X)

where
A = The single parametric input, equal to 1/L in the Poisson Distribution. A must be greater than 0.
EXP() = The exponential function.

Figures B-14 and B-15 show the functions of the Exponential Distribution. Note that once you know A, the distribution is completely determined.
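A short sketch of Equation (B.22) and of the "forgetfulness property" discussed below (the function name and the choice of rate and time values are mine):

```python
import math

def expon_cdf(x, a):
    """Equation (B.22): probability that the waiting time is <= x (rate parameter a > 0)."""
    return 1.0 - math.exp(-a * x)

a, s, t = 1.0, 2.0, 0.5
# Forgetfulness: P(X > s+t | X > s) = P(X > t). Having seen no arrivals for
# s time units tells us nothing about the wait still to come.
lhs = (1 - expon_cdf(s + t, a)) / (1 - expon_cdf(s, a))
rhs = 1 - expon_cdf(t, a)
print(abs(lhs - rhs) < 1e-12)  # True
```

Because the two conditional probabilities agree for any s and t, the Exponential is the only continuous distribution with this memoryless behavior.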
Figure B-14 Probability density functions for the Exponential Distribution (A = 1).
Figure B-15 Cumulative probability functions for the Exponential Distribution (A = 1).

The mean and variance of the Exponential Distribution are:

(B.23) Mean = 1/A
(B.24) Variance = 1/A^2

Again, A is the single parametric input, equal to 1/L in the Poisson Distribution, and must be greater than 0.

Another interesting quality of the Exponential Distribution is that it has what is known as the "forgetfulness property." In terms of a telephone switchboard, this property states that the probability of a call in a given time interval is not affected by the fact that no calls may have taken place in the preceding interval(s).

THE CHI-SQUARE DISTRIBUTION

A distribution that is used extensively in goodness-of-fit testing is the Chi-Square Distribution (pronounced ki square, from the Greek letter chi, and hence often represented as the X^2 distribution). Appendix A shows how to perform the chi-square test to determine how alike or unalike two different distributions are.

Assume that K is a standard normal random variable (i.e., it has mean 0 and variance 1). If we square K, giving J = K^2, then J will be a continuous random variable. However, we know that J cannot be less than zero, so its density function will differ from the Normal. The Chi-Square Distribution gives us the density function of J:

(B.25) N'(J) = (J^((V/2)-1)*EXP(-J/2))/(2^(V/2)*GAM(V/2))

where
J = The chi-square variable.
V = The number of degrees of freedom, which is the single input parameter.
EXP() = The exponential function.
GAM() = The standard gamma function.

A few notes on the gamma function are in order. This function has the following properties:

1. GAM(1) = 1
2. GAM(1/2) = The square root of pi, or 1.772453851
3. GAM(N) = (N-1)*GAM(N-1); therefore, if N is an integer, GAM(N) = (N-1)!
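Python's `math.gamma` can stand in for GAM(). A sketch checking the gamma identities above and evaluating the standard chi-square density (the function name is mine):

```python
import math

def chi_square_pdf(j, v):
    """Density of the Chi-Square Distribution with v degrees of freedom at j > 0."""
    return j ** (v / 2 - 1) * math.exp(-j / 2) / (2 ** (v / 2) * math.gamma(v / 2))

print(abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12)        # GAM(1/2) = sqrt(pi)
print(abs(math.gamma(5) - math.factorial(4)) < 1e-9)            # GAM(N) = (N-1)!
# With two degrees of freedom the density is .5*EXP(-j/2), a pure Exponential shape:
print(abs(chi_square_pdf(2.0, 2) - 0.5 * math.exp(-1.0)) < 1e-12)
```

All three lines print True, and the last one illustrates the resemblance between the low-degree-of-freedom chi-square and the Exponential noted in the next paragraph.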
Notice in Equation (B.25) that the only input parameter is V, the number of degrees of freedom. Suppose that rather than just taking one independent random variable squared (K^2), we take M independent random variables squared, and take their sum:

JM = K1^2+K2^2+...+KM^2

Now JM is said to have the Chi-Square Distribution with M degrees of freedom. It is the number of degrees of freedom that determines the shape of a particular Chi-Square Distribution. When there is one degree of freedom, the distribution is severely asymmetric and resembles the Exponential Distribution (with A = 1). At two degrees of freedom the distribution begins to look like a straight line going down and to the right, with just a slight concavity to it. At three degrees of freedom, a convexity starts taking shape and we begin to have a unimodal-shaped distribution. As the number of degrees of freedom increases, the density function gradually becomes more and more symmetric. As the number of degrees of freedom becomes very large, the Chi-Square Distribution begins to resemble the Normal Distribution, per the Central Limit Theorem.

THE STUDENT'S DISTRIBUTION

The Student's Distribution, sometimes called the t Distribution or Student's t, is another important distribution used in hypothesis testing that is related to the Normal Distribution. When you are working with fewer than 30 samples of a near-Normally distributed population, the Normal Distribution can no longer be accurately used. Instead, you must use the Student's Distribution. This is a symmetrical distribution with one parametric input, again the degrees of freedom. The degrees of freedom usually equals the number of elements in a sample minus one (N-1). The shape of this distribution closely resembles the Normal except that the tails are thicker and the peak of the distribution is lower.
As the number of degrees of freedom approaches infinity, this distribution approaches the Normal in that the tails lower and the peak increases to resemble the Normal Distribution. When there is one degree of freedom, the tails are at their thickest and the peak at its smallest. At this point, the distribution is called Cauchy. It is interesting that if there is only one degree of freedom, then the mean of this distribution is said not to exist. If there is more than one degree of freedom, then the mean does exist and is equal to zero, since the distribution is symmetrical about zero.

The variance of the Student's Distribution is infinite if there are fewer than three degrees of freedom. The concept of infinite variance is really quite simple. Suppose we measure the variance in daily closing prices for a particular stock for the last month. We record that value. Now we measure the variance in daily closing prices for that stock for the next year and record that value. Generally, it will be greater than our first value, which was simply last month's variance. Now let's go back over the last 5 years and measure the variance in daily closing prices. Again, the variance has gotten larger. The farther back we go (that is, the more data we incorporate into our measurement of variance), the greater the variance becomes. Thus, the variance increases without bound as the size of the sample increases. This is infinite variance.

The distribution of the log of daily price changes appears to have infinite variance, and thus the Student's Distribution is sometimes used to model the log of price changes. (That is, if C0 is today's close and C1 yesterday's close, then ln(C0/C1) will give us a value symmetrical about 0. The distribution of these values is sometimes modeled by the Student's Distribution.)

If there are three or more degrees of freedom, then the variance is finite and is equal to:

(B.26) Variance = V/(V-2) for V > 2
(B.27) Mean = 0 for V > 1

where
V = The degrees of freedom.
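The thicker tails can be quantified with closed forms. This comparison is mine, not from the text: the two-tailed probability of landing beyond 2 standard units under a Cauchy (Student's t with one degree of freedom) versus a standard Normal:

```python
import math

z = 2.0
# P(|X| > z) for the Cauchy: 1 - (2/pi)*atan(z)
cauchy_tail = 1 - 2 / math.pi * math.atan(z)
# P(|X| > z) for the standard Normal, via the error function
normal_tail = 1 - math.erf(z / math.sqrt(2.0))
print(round(cauchy_tail, 4), round(normal_tail, 4))  # 0.2952 0.0455
```

Roughly 30% of Cauchy outcomes fall beyond 2 standard units versus about 4.6% for the Normal, which is what "the tails are at their thickest" means in practice.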
Suppose we have two independent random variables. The first of these, Z, is standard normal (mean of 0 and variance of 1). The second of these, which we call J, is Chi-Square distributed with V degrees of freedom. We can now say that the variable T, equal to Z/((J/V)^(1/2)), is distributed according to the Student's Distribution. We can also say that the variable T will follow the Student's Distribution with N-1 degrees of freedom if:

T = N^(1/2)*((X-U)/S)

where
X = A sample mean.
S = A sample standard deviation.
N = The size of a sample.
U = The population mean.

The probability density function for the Student's Distribution, N'(X), is given as:

(B.28) N'(X) = (GAM((V+1)/2)/(((V*P)^(1/2))*GAM(V/2)))*((1+((X^2)/V))^(-(V+1)/2))

where
P = pi, or 3.1415926536.
V = The degrees of freedom.
GAM() = The standard gamma function.

The mathematics of the Student's Distribution are related to the incomplete beta function. Since we aren't going to plunge into functions of mathematical physics such as the incomplete beta function, we will leave the Student's Distribution at this point. Before we do, however, you still need to know how to calculate probabilities associated with the Student's Distribution for a given number of standard units (Z score) and degrees of freedom. You can use published tables to find these values. Yet, if you're as averse to tables as I am, you can simply use the following snippet of BASIC code to discern the probabilities.
You'll note that as the degrees of freedom variable, DEGFDM, approaches infinity, the values returned, the probabilities, converge to the Normal as given by Equation (3.22):

1000 REM 2 TAIL PROBABILITIES ASSOCIATED WITH THE STUDENT'S T DISTRIBUTION
1010 REM INPUT ZSCORE AND DEGFDM, OUTPUTS CF
1020 ST = ABS(ZSCORE):R8 = ATN(ST/SQR(DEGFDM)):RC8 = COS(R8):X8 = 1:R28 = RC8*RC8:RS8 = SIN(R8)
1030 IF DEGFDM MOD 2 = 0 THEN 1080
1040 IF DEGFDM = 1 THEN Y8 = R8:GOTO 1070
1050 Y8 = RC8:FOR Z8 = 3 TO (DEGFDM-2) STEP 2:X8 = X8*R28*(Z8-1)/Z8:Y8 = Y8+X8*RC8:NEXT
1060 Y8 = R8+RS8*Y8
1070 CF = Y8*.6366197723657157#:GOTO 1100
1080 Y8 = 1:FOR Z8 = 2 TO (DEGFDM-2) STEP 2:X8 = X8*R28*(Z8-1)/Z8:Y8 = Y8+X8:NEXT
1090 CF = Y8*RS8
1100 PRINT CF

THE F DISTRIBUTION

Next we come to another distribution, related to the Chi-Square Distribution, that also has important uses in statistics. The F Distribution, sometimes referred to as Snedecor's Distribution or Snedecor's F, is useful in hypothesis testing. Let A and B be independent chi-square random variables with degrees of freedom of M and N respectively. Now the random variable:

F = (A/M)/(B/N)

can be said to have the F Distribution with M and N degrees of freedom. The density function, N'(X), of the F Distribution is given as:

(B.29) N'(X) = (GAM((M+N)/2)*((M/N)^(M/2))*X^((M/2)-1))/(GAM(M/2)*GAM(N/2)*((1+M*X/N)^((M+N)/2)))

where
M = The number of degrees of freedom of the first parameter.
N = The number of degrees of freedom of the second parameter.
GAM() = The standard gamma function.

THE MULTINOMIAL DISTRIBUTION

The Multinomial Distribution is related to the Binomial, and likewise is a discrete distribution. Unlike the Binomial, which assumes two possible outcomes for an event, the Multinomial assumes that there are M different outcomes for each trial. The probability density function, N'(X), is given as:

(B.30) N'(X) = (N!/(∏[i = 1,M] Ni!))*∏[i = 1,M] Pi^Ni

where
N = The total number of trials.
Ni = The number of times outcome number i occurs.
Pi = The probability that outcome number i will be the result of any one trial. The summation of all Pi's equals 1.
M = The number of possible outcomes on each trial.

For example, consider a single die where there are 6 possible outcomes on any given roll (M = 6). What is the probability of rolling a 1 once, a 2 twice, and a 3 three times out of 10 rolls of a fair die? The probabilities of rolling a 1, a 2, or a 3 are each 1/6. We must consider a fourth alternative to keep the sum of the probabilities equal to 1, and that is the probability of not rolling a 1, 2, or 3, which is 3/6. Therefore, P1 = P2 = P3 = 1/6, and P4 = 3/6. Also, N1 = 1, N2 = 2, N3 = 3, and N4 = 10-3-2-1 = 4. Therefore, Equation (B.30) can be worked through as:

N'(X) = (10!/(1!*2!*3!*4!))*(1/6)^1*(1/6)^2*(1/6)^3*(3/6)^4
= (3628800/(1*2*6*24))*.1667*.0278*.00463*.0625
= (3628800/288)*.000001341
= 12600*.000001341
= .0168966

Note that this is the probability of rolling exactly a 1 once, a 2 twice, and a 3 three times, not the cumulative density. This is a type of distribution that uses more than one random variable, hence its cumulative density cannot be drawn out nicely and neatly in two dimensions as you could with the other distributions discussed thus far. We will not be working with other distributions that have more than one random variable, but you should be aware that such distributions and their functions do exist.

THE STABLE PARETIAN DISTRIBUTION

The stable Paretian Distribution is actually an entire class of distributions, sometimes referred to as "Pareto-Levy" distributions. The probability density function N'(U) is given as:

(B.31) ln(N'(U)) = i*D*U-V*ABS(U)^A*Z

where
U = The variable of the stable distribution.
A = The kurtosis parameter of the distribution.
B = The skewness parameter of the distribution.
D = The location parameter of the distribution.
V = The scale parameter of the distribution.
i = The imaginary unit, (-1)^(1/2).
Z = 1-i*B*(U/ABS(U))*tan(A*3.1415926536/2) when A <> 1, and 1+i*B*(U/ABS(U))*2/3.1415926536*log(ABS(U)) when A = 1.
ABS() = The absolute value function.
tan() = The tangent function.
ln() = The natural logarithm function.

The limits on the parameters of Equation (B.31) are:

(B.32) 0 < A <= 2, -1 <= B <= 1, V > 0

The cumulative density functions for the stable Paretian are not known to exist in closed form. For this reason, evaluation of the parameters of this distribution is complex, and work with this distribution is made more difficult. It is interesting to note that the stable Paretian parameters A, B, V, and D correspond to the fourth, third, second, and first moments of the distribution respectively. This gives the stable Paretian the power to model many types of real-life distributions, in particular those where the tails of the distribution are thicker than they would be in the Normal, or those with infinite variance (i.e., when A is less than 2). For these reasons, the stable Paretian is an extremely powerful distribution with applications in economics and the social sciences, where data distributions often have those characteristics (fatter tails and infinite variance) that the stable Paretian addresses.

This infinite variance characteristic makes the Central Limit Theorem inapplicable to data that is distributed per the stable Paretian distribution when A is less than 2. This is a very important fact if you plan on using the Central Limit Theorem. One of the major characteristics of the stable Paretian is that it is invariant under addition. This means that the sum of independent stable variables with characteristic exponent A will be stable, with approximately the same characteristic exponent.
Thus we have the Generalized Central Limit Theorem, which is essentially the Central Limit Theorem, except that the limiting form of the distribution is the stable Paretian rather than the Normal, and the theorem applies even when the data has infinite variance (i.e., A < 2), which is when the Central Limit Theorem does not apply. For example, the heights of people have finite variance. Thus we could model the heights of people with the Normal Distribution. The distribution of people's incomes, however, does not have finite variance and is therefore modeled by the stable Paretian distribution rather than the Normal Distribution. It is because of this Generalized Central Limit Theorem that the stable Paretian Distribution is believed by many to be representative of the distribution of price changes.1

There are many more probability distributions that we could still cover (the Negative Binomial Distribution, the Gamma Distribution, the Beta Distribution, etc.); however, they become increasingly more obscure as we continue from here. The distributions we have covered thus far are, by and large, the main common probability distributions.

Efforts have been made to catalogue the many known probability distributions. Undeniably, one of the better efforts in this regard has been done by Karl Pearson, but perhaps the most comprehensive work done on cataloguing the many known probability distributions has been presented by Frank Haight.2 Haight's "Index" covers almost all of the known distributions on which information was published prior to January, 1958. Haight lists most of the mathematical functions associated with most of the distributions. More important, references to books and articles are given so that a user of the index can find what publications to consult for more in-depth matter on the particular distribution of interest.

Haight's index categorizes distributions into ten basic types:

1. Normal
2. Type III
3. Binomial
4. Discrete
5. Distributions on (A, B)
6. Distributions on (0, infinity)
7. Distributions on (-infinity, infinity)
8. Miscellaneous Univariate
9. Miscellaneous Bivariate
10. Miscellaneous Multivariate

Of the distributions we have covered in this Appendix, the Chi-Square and Exponential (Negative Exponential) are categorized by Haight as Type III. The Binomial, Geometric, and Bernoulli are categorized as Binomial. The Poisson and Hypergeometric are categorized as Discrete. The Rectangular is under Distributions on (A, B), the F Distribution as well as the Pareto are under Distributions on (0, infinity), the Student's is regarded as a Distribution on (-infinity, infinity), and the Multinomial as a Miscellaneous Multivariate.

It should also be noted that not all distributions fit cleanly into one of these ten categories, as some distributions can actually be considered subclasses of others. For instance, the Student's distribution is catalogued as a Distribution on (-infinity, infinity), yet the Normal can be considered a subclass of the Student's, and the Normal is given its own category entirely. As you can see, there really isn't any "clean" way to categorize distributions. However, Haight's index is quite thorough. Readers interested in learning more about the different types of distributions should consult Haight as a starting point.

1 Do not confuse the stable Paretian Distribution with our adjustable distribution discussed in Chapter 4. The stable Paretian is a real distribution because it models a probability phenomenon. Our adjustable distribution does not. Rather, it models other (2-dimensional) probability distributions, such as the stable Paretian.

2 Haight, F. A., "Index to the Distributions of Mathematical Statistics," Journal of Research of the National Bureau of Standards-B. Mathematics and Mathematical Physics, 65B, No. 1, pp. 23-60, January-March 1961.
APPENDIX C - Further on Dependency: The Turning Points and Phase Length Tests

There exist statistical tests of dependence other than those mentioned in Portfolio Management Formulas and reiterated in Chapter 1. The turning points test is an altogether different test for dependency. Going through the stream of trades, a turning point is counted if a trade is for a greater P&L value than both the trade before it and the trade after it. A trade can also be counted as a turning point if it is for a lesser P&L value than both the trade before it and the trade after it. Notice that we are using the individual trades, not the equity curve (the cumulative values of the trades). The number of turning points is totaled up for the entire stream of trades. Note that we must start with the second trade and end with the next to last trade, as we need a trade on either side of the trade we are considering as a turning point.

Consider now three values (1, 2, 3) in a random series, whereby each of the six possible orderings is equally likely:

1, 2, 3    2, 3, 1
1, 3, 2    3, 1, 2
2, 1, 3    3, 2, 1

Of these six, four will result in a turning point. Thus, for a random stream of trades, the expected number of turning points is given as:

(C.01) Expected number of turning points = 2/3*(N-2)

where
N = The total number of trades.

We can derive the variance in the number of turning points of a random series as:

(C.02) Variance = (16*N-29)/90

The standard deviation is the square root of the variance. Taking the difference between the actual number of turning points counted in the stream of trades and the expected number, and then dividing the difference by the standard deviation, gives us a Z score, which is then expressed as a confidence limit. The confidence limit is discerned from Equation (3.22) for 2-tailed Normal probabilities.
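The counting procedure and Equations (C.01) and (C.02) can be sketched as follows (the function name and the sample P&L stream are mine):

```python
def turning_points_test(trades):
    """Count turning points in a trade P&L stream and return (count, expected,
    Z score) per Equations (C.01) and (C.02)."""
    n = len(trades)
    count = 0
    for i in range(1, n - 1):  # need a trade on either side of the candidate
        prev, cur, nxt = trades[i - 1], trades[i], trades[i + 1]
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            count += 1
    expected = 2.0 * (n - 2) / 3.0          # (C.01)
    variance = (16.0 * n - 29.0) / 90.0     # (C.02)
    return count, expected, (count - expected) / variance ** 0.5

count, expected, z = turning_points_test([1, 3, 2, 5, 4, 6, 2, 7])
print(count, expected)  # 6 4.0 -- every interior trade here is a local high or low
```

In this toy stream the count exceeds the expectation, which, by the inspection rule discussed next, would suggest like begetting unlike if the Z score cleared the confidence threshold.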
Thus, if our stream of trades is very far away (very many standard deviations from the expected number), it is unlikely that our stream of trades is random; rather, dependency is present. If dependency appears to a high confidence limit (at least 95%) with the turning points test, you can determine from inspection whether like begets like (if there are fewer actual turning points than expected) or whether like begets unlike (if there are more actual turning points than expected).

Another test for dependence is the phase length test. This is a statistical test similar to the turning points test. Rather than counting up the number of turning points between (but not including) trade 1 and the last trade, the phase length test looks at how many trades have elapsed between turning points. A "phase" is the number of trades that elapse between a turning point high and a turning point low, or a turning point low and a turning point high. It doesn't matter which occurs first, the high turning point or the low turning point. Thus, if trade number 4 is a turning point (high or low) and trade number 5 is a turning point (high or low, so long as it's the opposite of what the last turning point was), then the phase length is 1, since the difference between 5 and 4 is 1.

With the phase length test you add up the number of phases of length 1, 2, and 3 or more. Therefore, you will have 3 categories: 1, 2, and 3+. Thus, phase lengths of 4 or 5, and so on, are all totaled under the group of 3+. It doesn't matter if a phase goes from a high turning point to a low turning point or from a low turning point to a high turning point; the only thing that matters is how many trades the phase is comprised of. To figure the phase length, simply take the trade number of the latter phase (what number it is in sequence from 1 to N, where N is the total number of trades) and subtract the trade number of the prior phase.
For each of the three categories you will have the total number of complete phases that occurred between (but not including) the first and the last trades. Each of these three categories also has an expected number of counts. The expected number of phases of length D is:

(C.03) E(D) = 2*(N-D-2)*(D^2+3*D+1)/(D+3)!

where
D = The length of the phase.
E(D) = The expected number of counts.
N = The total number of trades.

Once you have calculated the expected number of counts for the three categories of phase length (1, 2, and 3+), you can perform the chi-square test. According to Kendall and colleagues,1 you should use 2.5 degrees of freedom here in determining the significance levels, as the lengths of the phases are not independent. Remember that the phase length test doesn't tell you about the nature of the dependence (like begetting like, etc.), but rather whether or not there is dependence or randomness.

Lastly, this discussion of dependence addresses converting a correlation coefficient to a confidence limit. The technique employs what is known as Fisher's Z transformation, which converts a correlation coefficient, r, to a Normally distributed variable:

(C.04) F = .5*ln((1+r)/(1-r))

where
F = The transformed variable, now Normally distributed.
r = The correlation coefficient of the sample.
ln() = The natural logarithm function.

The distribution of these transformed variables will have a variance of:

(C.05) V = 1/(N-3)

where
V = The variance of the transformed variables.
N = The number of elements in the sample.

The mean of the distribution of these transformed variables is discerned by Equation (C.04), only instead of being the correlation coefficient of the sample, r is the correlation coefficient of the population. Thus, since our population has a correlation coefficient of 0 (which we assume, since we are testing deviation from randomness), Equation (C.04) gives us a value of 0 for the mean of the population.
Now we can determine how many standard deviations the adjusted variable is from the mean by dividing the adjusted variable by the square root of the variance, Equation (C.05). The result is the Z score associated with a given correlation coefficient and sample size. For example, suppose we had a correlation coefficient of .25, and this was discerned over 100 trades. Thus, we can find our Z score as Equation (C.04) divided by the square root of Equation (C.05), or:

(C.06) Z = (.5*ln((1+r)/(1-r)))/(1/(N-3))^.5

Which, for our example, is:

Z = (.5*ln((1+.25)/(1-.25)))/(1/(100-3))^.5
= (.5*ln(1.25/.75))/(1/97)^.5
= (.5*ln(1.6667))/.010309^.5
= (.5*.51085)/.1015346165
= .25541275/.1015346165
= 2.515523856

Now we can translate this into a confidence limit by using Equation (3.22) for a Normal Distribution 2-tailed confidence limit. For our example this works out to a confidence limit in excess of 98.8%. If we had had 30 trades or less, we would have had to discern our confidence limit by using the Student's Distribution with N-1 degrees of freedom.

¹ Kendall, M. G., A. Stuart, and J. K. Ord. The Advanced Theory of Statistics, Vol. III. New York: Hafner Publishing, 1983.
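The worked example above can be checked with a short sketch (mine, not part of the original text; the function name is my own).

```python
import math

def correlation_z_score(r, n):
    """Z score for testing a sample correlation r (from n observations)
    against a population correlation of 0, per Equations (C.04)-(C.06)."""
    f = 0.5 * math.log((1 + r) / (1 - r))    # Fisher's Z transform, (C.04)
    v = 1.0 / (n - 3)                        # variance of transformed values, (C.05)
    return f / math.sqrt(v)                  # Z score, (C.06)

print(correlation_z_score(0.25, 100))  # ~2.5155, matching the example
```

As noted above, with 30 trades or fewer the Student's Distribution would be used instead of the Normal when converting this to a confidence limit.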
{"url":"https://docer.tips/mathematics-money-management-ralph-vince.html","timestamp":"2024-11-10T12:21:31Z","content_type":"text/html","content_length":"815605","record_id":"<urn:uuid:e816fc20-aaac-4119-9798-7acf0be72e2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00456.warc.gz"}
Given the function $f(x) = x^2$, how do you determine whether f satisfies the hypotheses of the Mean Value Theorem on the interval [-1,1] and find the c?

The Mean Value Theorem has two hypotheses:

H1: $f$ is continuous on the closed interval $[a, b]$
H2: $f$ is differentiable on the open interval $(a, b)$.

In this question, $f(x) = x^2$, $a = -1$ and $b = 1$.

This function is a polynomial, so it is continuous on its domain, $(-\infty, \infty)$. In particular, $f$ is continuous on $[-1, 1]$, so H1 holds.

$f'(x) = 2x$, which exists for all real $x$; in particular, it exists for all $x$ in $(-1, 1)$, so $f$ is differentiable on $(-1, 1)$ and H2 holds.

So, both hypotheses are satisfied. Because the hypotheses are satisfied, we can be sure that there is a $c$ (at least one $c$) in $(a, b)$ with

$f'(c) = \frac{f(b) - f(a)}{b - a}$.

To find the $c$ (or $c$'s) mentioned in the conclusion, solve

$f'(x) = \frac{f(1) - f(-1)}{1 - (-1)}$.

If there are solutions outside the interval $(-1, 1)$, they are not values of $c$ mentioned in the conclusion of the MVT. In this case, $2x = \frac{1 - 1}{2} = 0$, so the only solution is $0$, and $c = 0$.
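The conclusion can also be checked numerically. This short sketch (mine, not part of the original answer) scans a grid of interior points for places where $f'(c)$ matches the secant slope.

```python
def f(x):
    return x ** 2

def fprime(x):
    return 2 * x

a, b = -1.0, 1.0
secant_slope = (f(b) - f(a)) / (b - a)   # (1 - 1) / 2 = 0

# Scan a grid of interior points of (a, b) for f'(c) equal to the secant slope
grid = [a + i * (b - a) / 10000 for i in range(1, 10000)]
cs = [c for c in grid if abs(fprime(c) - secant_slope) < 1e-9]
print(cs)  # [0.0]: the single c guaranteed by the MVT
```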
{"url":"https://socratic.org/questions/given-the-function-f-x-x-2-how-do-you-determine-whether-f-satisfies-the-hypothes","timestamp":"2024-11-03T22:38:15Z","content_type":"text/html","content_length":"35900","record_id":"<urn:uuid:5bb87129-6605-4602-9055-733b354db538>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00668.warc.gz"}
How Do You Multiply and Divide Numbers with Different Signs? Multiplying and dividing numbers takes a good amount of thinking, and it's easy to make a mistake. But you can make sure that you're on the right track if you check whether the answer should be positive or negative. In this tutorial you'll see exactly how to tell if your answer will be positive or negative, even if you don't know the exact value of the answer. That way you'll always be able to check your answers!
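The rule this tutorial describes (matching signs give a positive result, different signs give a negative one, for both multiplication and division) can be expressed as a tiny sketch, assuming neither number is zero; the function name is mine.

```python
def result_sign(x, y):
    """Sign of a product or quotient: positive when the signs match,
    negative when they differ (assumes neither argument is zero)."""
    return "positive" if (x > 0) == (y > 0) else "negative"

print(result_sign(-6, 3))    # negative: the signs differ
print(result_sign(-6, -3))   # positive: the signs match
print(result_sign(6, 3))     # positive: the signs match
```

This gives a quick way to check any answer: -6 * 3 must come out negative, while -6 / -3 must come out positive.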
{"url":"https://virtualnerd.com/texas-digits/tx-digits-grade-6/integer-operations/multiplying-integers/multiplying-or-dividing-with-different-signs","timestamp":"2024-11-03T13:16:19Z","content_type":"text/html","content_length":"23968","record_id":"<urn:uuid:8b8e9c95-3715-4081-b577-cd3276509efe>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00240.warc.gz"}
# write a program to read the scores of n players and display total score, min score, max score (max, min score explanation needed)

    n = int(input("Enter how many players?"))   # e.g. 5
    scores = []
    for i in range(n):                          # i = 0 1 2 3 4
        score = int(input("Enter score"))
        scores.append(score)
    print(f'Scores are {scores}')

    total = 0
    for s in scores:
        total = total + s
    print(f'Total score is {total}')

    maxscore = scores[0]   # start from the first score, not 0, so
    minscore = 0           # negative scores are handled correctly too
    for s in scores:
        if s > maxscore:
            maxscore = s
    print(f'maximum score {maxscore}')

    for i in range(n):
        if i == 0:
            minscore = scores[0]
        elif scores[i] < minscore:
            minscore = scores[i]
    print(f'minimum score {minscore}')

Check in your code, from:

    for i in range(5):
        print(i, scores[i])   # i is used as the index; scores[i] is the value

"I need the minimum score explanation, can anyone help?"

First it sets minscore = scores[0]. Every other value is then tested with scores[i] < minscore; if true, it updates minscore = scores[i], else it continues.

Example: scores 3 4 5 2 7

for i in range(5):
i = 0 => setting min = 3
i = 1 => 4 < 3 false
i = 2 => 5 < 3 false
i = 3 => 2 < 3 true, so set min = 2
i = 4 => 7 < 2 false

So min = 2.

You may wonder why min is not just set to 0 initially. If it were, then for any input with no negative values, scores[i] < 0 would always be false, so minscore would remain 0 after the loop even though no score is 0. Hope it helps.

"It's more helpful, brother ❤️"

"I understood everything. In my solution, in the if statement, why i == 0? Is it an index or is it the score itself? As zero initially, I'm a little confused."

It is the loop index: on the first pass (i == 0), minscore is initialized to scores[0]; after that, the elif branch does the comparisons.

"Okay, got it ❤️❤️❤️❤️❤️"
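For comparison, here is a more idiomatic version using Python's built-ins, with sample data in place of input() so it runs non-interactively (a sketch of mine, not from the thread).

```python
def score_summary(scores):
    """Total, minimum, and maximum of a non-empty list of scores."""
    return sum(scores), min(scores), max(scores)

scores = [3, 4, 5, 2, 7]   # sample data standing in for input()
total, lo, hi = score_summary(scores)
print(f'Total score is {total}')    # 21
print(f'minimum score {lo}')        # 2
print(f'maximum score {hi}')        # 7
```

Unlike initializing a running max or min at 0, the built-ins also behave correctly when every score is negative.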
{"url":"https://www.sololearn.com/en/discuss/2972465/write-a-program-to-read-the-scores-of-n-players-and-display-total-score-min-score-max-score-max-min","timestamp":"2024-11-02T22:06:18Z","content_type":"text/html","content_length":"936087","record_id":"<urn:uuid:a19f8a3d-9020-4048-9907-ea07ea0a1a74>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00013.warc.gz"}
UA Mathematicians Predict Patterns in Fingerprints, Cacti Patterns in nature can be seen every day, yet in many cases, little is understood about how and why they form. Now University of Arizona mathematicians have found a way to predict natural patterns, including fingerprints and the spirals seen in cacti. UA graduate student Michael Kuecken developed a mathematical model that can reproduce fingerprint patterns, while UA graduate student Patrick Shipman created a mathematical model to explain the arrangement of repeated units in various plants. Shipman's report on his work will be published in an upcoming issue of Physical Review Letters. Even though the use of fingerprints for identification began more than 2000 years ago in China and they have been studied experimentally for over two hundred years, there is no widely accepted explanation for their occurrence. Likewise, the reasons behind nature’s choice of patterns in plants have been difficult for mathematicians to explain, despite these patterns having been identified centuries ago. “What I like about this research is the interplay between math and biology. It is actually quite difficult, because the disciplines require a somewhat different mindset and biology is notoriously bewildering and full of detail,” Kuecken said. “In a way, dealing with this problem was like putting together a jigsaw puzzle of facts. I had to try out different things and could use math, and sometimes common sense, to see if the pieces actually fit.” Human skin has multiple layers, including the outermost epidermis and the inner dermis. The outer and inner layers are separated by the basal layer, which is composed of cells that constantly divide. Growth occurs in a similar fashion in plants, which have areas of continuous cell growth, such as the tip of a cactus, that allow the plant to grow larger. Human fingerprint patterns are created because basal skin grows faster than surface skin, which then buckles, forming ridges. 
The basal layer in human skin and the equivalent layer in plant skin grow at a faster rate than either the surface layers or the thick dermis layer. As the basal layer continues to grow, pressure increases. In both plants and fingertips, the growing layer buckles inward toward the softer inner layer of tissue, relieving the stress. As a result, ridges are formed on the surface. The undulations from the buckling form fingerprints and various patterns in plants, from the ridges in saguaro cacti to the hexagons in pineapples.

The way a pattern is formed, regardless of whether it is a fingerprint or a plant, is related to the forces imposed during ridge formation. The basic properties responsible for the mechanism of buckling in plants and fingerprints happen in other materials as well. Kuecken and Shipman's graduate advisor, UA professor of mathematics Alan Newell, said, "In material science, high-temperature superconductors seem to be connected with stresses that compress to build the structures in various high-temperature materials. Indeed, the idea that buckling and surface stresses would have something to do with the patterns you see in plants is fairly recent."

Kuecken developed a mathematical model that can generate patterns like this one, which looks like a fingerprint.

In fingerprints, ridge formation is influenced by discrete elevations of the skin on the fingertips, called volar pads, which first appear in human embryos at about six and a half weeks. The volar pads' location is where the epidermal ridges for fingerprints will arise later in development. Kuecken explained that as the volar pads shrink, stress is placed on the skin layers. The ridges then form perpendicular to this stress. There are three basic patterns of fingerprints, known as arches, loops and whorls, that form in response to the different directions of stress caused by shrinking of the volar pads.
Other research on ridge formation has already shown that if a person has a high, rounded volar pad, they will end up with a whorl pattern. Kuecken's mathematical model was able to reproduce these large patterns, as well as the little intricacies that make an individual fingerprint unique.

Shipman's model, like Kuecken's, also took into account the stresses that influence ridge formation. In plants, forces acting in multiple directions result in complex patterns. For example, when buckling occurs in three different directions, all three ridges will appear together and form a hexagonal pattern.

"I've looked at cacti all my life, I really like them, and I'd really like to understand them," Shipman said. To study these patterns, Shipman looked at the stickers on a cactus or the florets on other plants. When a line is drawn from sticker to sticker on a cactus in a clockwise or in a counterclockwise direction, the line ends up spiraling around the plant. This occurs in many plants, including pineapples and cauliflower. When these spirals are counted, the result is numbers that belong to the Fibonacci sequence, a series of numbers that appears frequently when scientists and mathematicians analyze natural patterns.

Shipman found that cactus stickers predictably align in spiral patterns.

From his model, Shipman found that the initial curvature of a plant near its growth tip influences whether it will form ridges or hexagons. He found that plants with a flat top, or less curved top, such as saguaro cacti, will always form ridges and tend not to have Fibonacci sequences. Plants that have a high degree of curvature will produce hexagonal configurations, such as those in pinecones, and the number of spirals will always be numbers in the Fibonacci sequence. Newell says that Shipman's mathematical model demonstrates that the shapes chosen by nature are those that take the least energy to make. "Of all possible shapes you can have, what nature picked minimizes the energy in the plant."
{"url":"http://www.fractal.org/Life-Science-Technology/Publications/Patterns-in-Fingerprints-and-Cacti.htm","timestamp":"2024-11-11T21:01:50Z","content_type":"text/html","content_length":"16365","record_id":"<urn:uuid:c1e78014-d1c8-4649-a433-a16256a1a765>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00822.warc.gz"}
What’s Not What in Differential Privacy – R-Craft

This article is originally published at https://matloff.wordpress.com

I take my title here from the "too clever by half" paper, "What's Not What with Statistics" of many years ago. Or I just as appropriately could have borrowed the old Leo Breiman classic title "A Tale of Two Cultures," comparing the statistics and computer science (CS) communities.

Differential privacy (DP) is an approach to maintaining the privacy of individual records in a database, while still allowing statistical analysis. It is now perceived as the go-to method in the data privacy area, enjoying adoption by the US Census Bureau and several major firms in industry, as well as a highly visible media presence. DP has developed a vast research literature. On the other hand, it is also the subject of controversy, and now, of lawsuits.

Here is a summary of this essay:

• DP overpromises, a "solution to the data privacy problem, which provides a quantified guarantee of privacy."
• On the contrary, DP fails to deliver on the guarantee for a large class of queries on quantities typically arising in business, industry, medicine, education and so on. The promise is illusory, with the method often producing biased results and inaccurate guarantees.
• The problems are in large part due to DP having been developed by CS researchers, rather than by statisticians or other get-your-hands-dirty data analysis professionals.

Some preparatory remarks: I've been doing research in the data privacy area off and on for many years, e.g. IEEE Symposium on Security and Privacy; ACM Trans. on Database Systems; several book chapters; and current work in progress, arXiv. I was an appointed member of the IFIP Working Group 11.3 on Database Security in the 1990s. The ACM TODS paper was funded in part by the Census Bureau.

I will take as a running example one that was popular in the classic privacy literature.
Say we have an employee database, and an intruder knows there is just one female electrical engineer. The intruder may then submit a query for the mean salary of all female EEs, and thus illicitly obtain this worker’s salary. Notation: the database consists of n records on p variables. The R package diffpriv makes use of standard DP methods easy to implement, and is recommended for any reader who wishes to investigate these issues further. What is DP? Technically DP is just a criterion, not a method, but the term generally is taken to mean methods whose derivation is motivated by that criterion. DP is actually based on a very old and widely-used approach to data privacy, random noise perturbation. It’s quite simple. Say we have a database that includes a salary variable, which is considered confidential. We add random, mean-0 noise, to hide a person’s real income from intruders. The motivation is that, since the added noise has mean 0, researchers doing legitimate statistical analysis can still do their work. They work with averages, and the average salary in the database in noisy form should be pretty close to that of the original data, with the noise mostly “canceling out.” (We will see below, though, that this view is overly simple.) In DP methods, the noise is typically added to the final statistic, e.g. to a mean of interest, rather than directly to the variables. One issue is whether to add a different noise value each time a query arrives at the data server, vs. adding noise just once and then making that perturbed data open to public use. DP methods tend to do the former, while classically the latter approach is used. A related problem is that a DP version needs to be developed for every statistical method. If a user wants, say, to perform quantile regression, she must check whether a DP version has been developed and code made available for it. 
With classical privacy methods, once the dataset has been perturbed, users can apply any statistical method they wish. I liken it to an amusement park. Classical methods give one a “day pass” which allows one to enjoy any ride; DP requires a separate ticket for each ride. Privacy/Accuracy Tradeoffs: With any data privacy method, DP or classical, there is no perfect solution. One can only choose a “dial setting” in a range of tradeoffs. The latter come in two main types: • There is a tradeoff between protecting individual privacy on the one hand, and preserving the statistical accuracy for researchers. The larger the variance of added noise, the greater the privacy but the larger the standard errors in statistical quantities computed from the perturbed data. • Equally important, though rarely mentioned, there is the problem of attenuation of relationships between the variables. This is the core of most types of data analysis, finding and quantifying relationships; yet the more noise we add to the data, the weaker the reported relationships will be. This problem arises in classical noise addition, and occurs in some DP methods, such as ones that add noise to counts in contingency tables. So here we have not only a variance problem but also a bias problem; the absolute values of correlations, regression coefficients and so on are biased downward. A partial solution is to set the noise correlation structure equal to that of the data, but that doesn’t apply to categorical variables (where the noise addition approach doesn’t make much sense anyway). Other classical statistical disclosure control methods: Two other major data privacy methods should be mentioned here. 1. Cell suppression: Any query whose conditions are satisfied by just one record in the database is disallowed. In the example of the female EE above, for instance, that intruder’s query simply would not be answered. One problem with this approach is that it is vulnerable to set-differencing attacks. 
The intruder could query the total salaries of all EEs, then query the male EEs, and then subtract to illicitly obtain the female EE's salary. Elaborate methods have been developed to counter such attacks.

2. Data swapping: For a certain subset of the data (either randomly chosen, or chosen according to a record's vulnerability to attack), some of the data for one record is swapped with that of a similar record. In the female EE example, we might swap occupations or salaries, say.

Note that neither of these methods avoids the problem of privacy/accuracy tradeoffs. In cell suppression, the more suppression we impose, the greater the problems of variance and bias in stat analyses. Data swapping essentially adds noise, again causing variance and bias.

The DP privacy criterion: Since DP adds random noise, the DP criterion is couched in probabilistic terms. Consider two datasets, D and D', with the same variables and the same number of records n, but differing in 1 record. Consider a given query Q. Denote the responses by Q(D) and Q(D'). Then the DP criterion is, for any set S in the image of Q,

P(Q(D) in S) ≤ exp(ε) P(Q(D') in S)

for all possible (D,D') pairs and for a small tuning parameter ε. The smaller ε, the greater the privacy. Note that the definition involves all possible (D,D') pairs; D here is NOT just the real database at hand (though there is a concept of local sensitivity in which D is indeed our actual database). On the other hand, in processing a query, we ARE using the database at hand, and we compute the noise level for the query based on n for this D.

DP-compliant methods have been developed for various statistical quantities, producing formulas for the noise variance as a function of ε and an anticipated upper bound on |Q(D) – Q(D')|. Again, that upper bound must apply to all possible (D,D') pairs. For human height, say, we know that no one will have height, say, 300 cm, which we divide by n for a mean; it's a rather sloppy bound, but it would work.
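As a concrete illustration (mine, not from the essay), here is a minimal sketch of the standard Laplace mechanism for releasing a mean: noise with scale sensitivity/ε, where the sensitivity comes from assuming every value lies in a known range [lower, upper]. All names and numbers are my own.

```python
import math
import random

def laplace_noise(scale):
    """One Laplace(0, scale) draw via inverse-CDF sampling."""
    u = random.random() - 0.5          # u in [-0.5, 0.5)
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_mean(values, lower, upper, epsilon):
    """epsilon-DP release of a mean; each value is clipped to
    [lower, upper], so the mean's sensitivity is (upper - lower)/n."""
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)

random.seed(0)
print(dp_mean([52_000, 61_000, 58_000], 0, 200_000, epsilon=0.5))
```

A smaller ε means a larger noise scale, hence more privacy but less accuracy — exactly the tradeoff discussed above. Note also how the noise scale shrinks as n grows, which is the behavior the next section criticizes for conditional queries.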
Problems with DP's claims of quantifiable guaranteed privacy: (Many flaws have been claimed for DP, but to my knowledge, this analysis is new.)

Again consider the female EE example. One problem that arises right away is that, since this is a conditional mean, Q(D) and/or Q(D') will often be undefined. Say we try to address that issue by expressing Q() as an unconditional mean, E(YA), divided by an unconditional probability, E(A), where Y is salary and A is the indicator variable for the condition. At the data level, these are both averages, and we take Q() to mean querying the two averages separately.

We then run into a more formidable problem, as follows. For even moderately large n, the amount of noise added to each of these averages will be small, which is not what we want at all. If there is only 1 female EE, we want the amount of noise added to her salary to be large.

Another approach would be to consider all possible pairs of databases D and D', each consisting of female EEs. Remember, in the definition of DP, we are anticipating all potential (D,D') pairs. Then, in computing the noise level, we would have n = 1, so the noise level would be appropriately large. But that won't work. We would still have the problem described for the cell suppression method above: An intruder could employ set-differencing, say querying total salaries for all EE workers, then the total for all male EEs; here the added noise would definitely be small (after dividing by n). Then the intruder wins.

The fact that the above analysis involves just a single record is irrelevant. A similar analysis can be made for any conditional mean or probability. Again consider the employee database. Say only 10% of the workers are women. Then the same reasoning shows that the noise added to a query involving women will either be too small or vulnerable to a set-differencing attack.

So, the much-vaunted claims of DP advocates that DP gives "quantifiable guarantees of privacy" are simply not true.
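The set-differencing attack described above can be made concrete with a toy sketch (mine; the salaries and the noise model are illustrative, not a calibrated DP mechanism).

```python
import random

random.seed(1)

def noisy_total(values, scale):
    """Total with a small amount of centered noise added (a toy model
    of a low-noise query response, not a calibrated DP mechanism)."""
    return sum(values) + random.uniform(-scale, scale)

male_ee = [90_000, 85_000, 110_000]
female_ee = [95_000]                      # the single female EE
all_ee = male_ee + female_ee

# The intruder differences two individually harmless-looking queries:
estimate = noisy_total(all_ee, 100) - noisy_total(male_ee, 100)
print(estimate)  # within a few hundred of the true 95,000 salary
```

Since each response carries noise of at most ±100 here, the difference recovers the lone salary to within ±200, illustrating why small per-query noise fails to protect such a record.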
Yes, they are fine for univariate statistics, but they fall apart in conditional cases. And conditional cases are the bread and butter of virtually any statistical study.

Bias problems: As noted, DP methods that work on contingency tables by adding independent noise values to cell counts can attenuate correlation and thus produce bias. The bias will be substantial for small tables. Another issue, also in DP contingency table settings, is that bias may occur from post-processing. If Laplacian noise is added to counts, some counts may be negative. As shown in Zhu et al, post-processing to achieve nonnegativity can result in bias.

The US Census Bureau's adoption of DP: The Census Bureau's DP methodology replaces the swapping-based approach used in past census reports. Though my goal in this essay has been mainly to discuss DP in general, I will make a few comments.

First, what does the Bureau intend to do? They will take a 2-phase approach. They view the database as one extremely large contingency table ("histogram," in their terminology). Then they add noise to the cell counts. Next, they modify the cell counts to satisfy nonnegativity and certain other constraints, e.g. taking the total number of residents in a census block to be invariant. The final perturbed histogram is released to the public.

Why are they doing this? The Bureau's simulations indicate that, with very large computing resources and possibly external data, an intruder could reconstruct much of the original, unperturbed data.

The Bureau concedes that the product is synthetic data. Isn't any perturbed data synthetic? Yes, but here ALL of the data is perturbed, as opposed to swapping, where only a small fraction of the data is perturbed.

Needless to say, then, use of synthetic data has many researchers up in arms. They don't trust it, and have offered examples of undesirable outcomes, substantial distortions that could badly affect research work in business, industry and science.
There is also concern that there will be serious impacts on next year's congressional redistricting, which strongly relies on census data, though one analysis is more optimistic. There has already been one lawsuit against the Bureau's use of DP. Expect a flurry more, after the Bureau releases its data, and after redistricting is done based on that data.

So it once again boils down to the privacy/accuracy tradeoff. Critics say the Bureau's reconstruction scenarios are unlikely and overblown. Again, add to that the illusory nature of DP's privacy guarantees, and the problem gets even worse.

Final comments: How did we get here? As seen above, DP has some very serious flaws. Yet it has largely become entrenched in the data privacy field. In addition to being chosen as the basis of the census data, it is used somewhat in industry. Apple, for instance, uses classic noise addition, applied to raw data, but with a DP privacy budget.

As noted, early DP development was done mainly by CS researchers. CS people view the world in terms of algorithms, so that, for example, they feel very comfortable with investigating the data reconstruction problem and applying mathematical optimization techniques. But the CS people tend to have poor insight into the problems that users of statistical databases pursue in their day-to-day data analysis activities. Some mathematical statisticians entered the picture later, but by then DP had acquired great momentum, and some intriguing theoretical problems had arisen for the math stat people to work on. Major practical issues, such as that of conditional quantities, were overlooked. In other words, in my view, the statistician input into the development of DP came too little, too late.

Also, to be frank, the DP community has not always been open to criticism, such as the skeptical material in Bambauer, Muralidhar and Sarathy. Statistical disclosure control is arguably one of the most important data science issues we are facing today.
Bambauer et al in the above link sum up the situation quite well:

The legal community has been misled into thinking that differential privacy can offer the benefits of data research without sacrificing privacy. In fact, differential privacy will usually produce either very wrong research results or very useless privacy protections. Policymakers and data stewards will have to rely on a mix of approaches: perhaps differential privacy where it is well-suited to the task, and other disclosure prevention techniques in the great majority of situations where it isn't.

A careful reassessment of the situation is urgently needed.
{"url":"https://r-craft.org/whats-not-what-in-differential-privacy/","timestamp":"2024-11-08T21:00:22Z","content_type":"text/html","content_length":"110862","record_id":"<urn:uuid:473137e8-016e-4b7d-ab6a-3e1a43d09c7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00638.warc.gz"}
Ultimate Guide to Linear Discriminant Analysis (LDA) - Dataaspirant

Linear Discriminant Analysis (LDA) isn't just a tool for dimensionality reduction or classification. It has its roots in the world of Guinness and beer! Sir Ronald A. Fisher, the father of LDA, originally developed it in the context of distinguishing between two species of iris flowers. Later, it was used to classify the origin of ancient vases and even to differentiate between various types of brewed beverages. So next time you're sipping on your favorite drink, you can ponder how data science might have once played a role in its categorization!

Dive deeper into LDA in this blog and discover how it's more than just math: a bridge between nature, history, and modern technology. Whether you're a beginner exploring the world of machine learning or an aspiring data scientist, this guide provides a comprehensive introduction to Linear Discriminant Analysis.

LDA is widely used for dimensionality reduction and classification tasks, offering a robust framework for extracting meaningful features and maximizing class separability. By leveraging the statistical properties of data, LDA reveals hidden patterns, enhancing our understanding and prediction capabilities.

This guide will take you through the fundamental concepts, techniques, and applications of Linear Discriminant Analysis. Starting with the core principles and assumptions, we'll cover the step-by-step process, including data preprocessing, feature extraction, and the mathematical formulation of LDA. Additionally, we'll explore performance evaluation metrics to assess the effectiveness of LDA in data classification.
This knowledge will empower you to optimize your models and make informed decisions.

Introduction to Linear Discriminant Analysis

Linear Discriminant Analysis (LDA) is a powerful technique in the field of machine learning and data analysis. It provides a structured approach to data classification, enabling us to extract valuable insights and make accurate predictions.

What is Linear Discriminant Analysis?

Linear Discriminant Analysis, also known as Fisher's Linear Discriminant, is a statistical method used for dimensionality reduction and classification tasks. It aims to find a linear combination of features that maximally separates different classes in the data. By focusing on discriminative information, LDA helps us identify the most relevant features that contribute to class separation, improving the accuracy of classification models.

Why is Linear Discriminant Analysis important for data classification?

Data classification plays a fundamental role in various domains, from image recognition and natural language processing to fraud detection and sentiment analysis. Linear Discriminant Analysis offers a systematic approach to enhance data classification accuracy by reducing the dimensionality of the data while preserving class discrimination. This allows us to handle high-dimensional datasets effectively and make informed decisions based on the extracted features.

Key benefits and applications of Linear Discriminant Analysis

Improved classification accuracy: By focusing on the most discriminative features, LDA helps improve the accuracy of classification models. It maximizes the separability between classes, reducing the risk of misclassification and enhancing overall predictive performance.

Dimensionality reduction: LDA transforms high-dimensional data into a lower-dimensional space while preserving class discrimination. This not only simplifies the data representation but also reduces computational complexity, making it easier to analyze and interpret the results.
Feature selection and interpretation: LDA identifies the most relevant features for classification, providing valuable insights into the underlying data structure. This feature selection capability helps in understanding the factors that contribute significantly to differentiating classes, leading to more interpretable models.

Robustness to multicollinearity: Linear Discriminant Analysis is less sensitive to multicollinearity, a common issue where predictor variables are highly correlated. Unlike some other classification algorithms, LDA can handle multicollinearity without compromising performance, making it a reliable choice for complex datasets.

Wide-ranging applications: Linear Discriminant Analysis finds applications across diverse domains. It has been successfully employed in image recognition, text classification, sentiment analysis, medical diagnosis, and many other areas where accurate data classification is crucial.

Foundations of Linear Discriminant Analysis

Before diving into the practical aspects of LDA, it is important to establish a solid understanding of the foundations of linear discriminant analysis.

Core Concepts and LDA Assumptions

LDA operates under the assumption that the data follows a multivariate normal distribution, and each class has its own distribution with distinct mean vectors and a shared covariance matrix. The key concept in LDA is to find a linear combination of features that maximizes the separation between classes while minimizing the within-class scatter. This is achieved by calculating discriminant functions, which are linear combinations of the input features that provide the best separation between classes.

Understanding the Discriminant Function

The discriminant function is the heart of linear discriminant analysis. It represents a mathematical expression used to transform the input features into a value representing the probability of belonging to a particular class.
The discriminant function is calculated based on the estimated class means, the covariance matrix, and the prior probabilities for each class. The goal of the discriminant function is to map the input data into a lower-dimensional space where class separation is maximized. By comparing the discriminant function values for different classes, we can assign each data point to the most probable class label.

Difference between LDA and PCA (Principal Component Analysis)

Before getting into the technical differences, let's build intuition with a crayon example.

Imagine you have a big box of crayons and you want to organize them. PCA (Principal Component Analysis) is like trying to line up all your crayons by how similar their colors are. You want to see which colors are most different from each other and which ones look kind of the same. So, with PCA, you might end up with a line of crayons that goes from lightest to darkest, but you're not really worried if they are from the same pack or different packs.

LDA (Linear Discriminant Analysis) is a bit different. Let's say some of the crayons have stickers on them, like star stickers or heart stickers. With LDA, you're trying to put the crayons in a line where crayons with the same stickers are close together, and crayons with different stickers are far apart. So, you're focusing more on the stickers than just the colors.

In short:

• PCA is like organizing your crayons by color.

• LDA is like organizing your crayons by the stickers on them.

Now let's look at the difference in more technical terms. PCA focuses on finding the directions (principal components) that capture the most variation in the data, regardless of class labels. It tries to represent the data in a new coordinate system where the components are uncorrelated. On the other hand, LDA tries to find a linear combination of features that maximizes the separation between classes while minimizing the within-class variance.
Unlike PCA, LDA uses the class labels and focuses on maximizing class separation.

How to Prepare Data for Linear Discriminant Analysis

Data preprocessing is a crucial step in the data analysis pipeline before applying Linear Discriminant Analysis (LDA). It involves handling missing values, addressing outliers, and performing feature scaling and normalization. These steps ensure that the data is in an appropriate format and that the LDA algorithm can effectively extract discriminative information from the features. Let's dive deeper into each aspect of data preparation for LDA.

Handling Missing Values

Missing values can arise due to various reasons, such as data collection errors or incomplete records. Dealing with missing values is important to avoid biased or inaccurate results. There are several approaches to handle missing values, including:

• Removal: If the number of instances with missing values is relatively small compared to the overall dataset, you can choose to remove those instances. However, caution should be exercised as removing too many instances can lead to a loss of valuable information.

• Imputation: Imputation involves filling in missing values with estimated values based on other observed data. Simple imputation methods include replacing missing values with the mean, median, or mode of the respective feature. More advanced techniques, such as k-nearest neighbors or regression-based imputation, can also be employed to infer missing values based on the relationships between variables.

Handling Outliers

Outliers are data points that deviate significantly from the majority of the dataset. They can arise due to data entry errors, measurement issues, or represent genuine extreme observations. Outliers can potentially affect the LDA results, as they can distort the estimation of class means and covariance matrices.
Here are some approaches to handling outliers:

• Removal: If outliers are the result of data collection errors or measurement issues, it may be appropriate to remove them from the dataset. However, it is crucial to carefully evaluate the impact of removing outliers and consider the potential loss of valuable information.

• Robust statistics: Robust statistical techniques, such as the median absolute deviation or Winsorization, can be used to estimate robust measures of central tendency and dispersion. These methods are less influenced by extreme values and provide more reliable estimates.

Feature scaling and normalization

Feature scaling is important to ensure that features have a similar scale, as LDA is sensitive to the relative size of features. Here are common techniques for scaling and normalizing features:

• Standardization: Standardization, also known as z-score normalization, transforms the data to have a mean of 0 and a standard deviation of 1. It subtracts the mean of each feature and divides by its standard deviation. This technique ensures that features have zero mean and equal variance.

• Min-max scaling: Min-max scaling rescales the values of each feature to a fixed range, usually between 0 and 1. It subtracts the minimum value and divides by the range (maximum value minus minimum value). Min-max scaling preserves the relative relationships between data points.

How to use LDA for Feature Extraction and Dimensionality Reduction

Linear Discriminant Analysis (LDA) not only serves as a classification technique but also offers powerful feature extraction and dimensionality reduction capabilities. By leveraging the statistical properties of the data, LDA can identify the most discriminative features and project the data onto a lower-dimensional space that preserves class separability.
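Before extraction, the preprocessing steps just described (imputation followed by scaling) can be sketched with scikit-learn. The toy feature matrix and the choice of mean imputation here are illustrative, not values from the article:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Toy feature matrix with one missing value (illustrative only)
X = np.array([[1.0, 200.0],
              [2.0, np.nan],
              [3.0, 600.0]])

# Mean imputation: fill the NaN with the column mean of observed values
X_imputed = SimpleImputer(strategy="mean").fit_transform(X)

# Standardization: subtract each column's mean, divide by its std
X_std = StandardScaler().fit_transform(X_imputed)

# Min-max scaling: (x - min) / (max - min), mapping each column to [0, 1]
X_minmax = MinMaxScaler().fit_transform(X_imputed)
```

After these transforms, each standardized column has zero mean, and each min-max-scaled column spans exactly [0, 1].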
Let's delve deeper into the intuition behind feature extraction, the process of reducing dimensionality using LDA, and how to interpret the results of the LDA transformation.

Intuition behind feature extraction

Feature extraction aims to transform the original set of features into a new set of features that capture the most discriminative information for classification. LDA achieves this by finding linear combinations of the original features that maximize the separability between classes. The intuition is to project the data onto a lower-dimensional space where the distances between classes are maximized while minimizing the scatter within each class.

Reducing dimensionality using LDA

LDA accomplishes dimensionality reduction by projecting the original high-dimensional feature space onto a lower-dimensional space. The number of dimensions in the reduced space is determined by the number of classes in the dataset (at most the number of classes minus one). The reduced space is designed in a way that maximizes the separation between classes, making it easier to classify new instances. The LDA transformation involves two main steps:

1. Computation of class means and scatter matrices: LDA computes the mean vectors for each class and computes scatter matrices that capture within-class and between-class variance. These matrices provide valuable insight into the distribution and variability of the data.

2. Solving the eigenvalue problem: LDA solves an eigenvalue problem to find the linear discriminants, also known as eigenvectors, which represent the directions in the feature space where the data show the most differences between classes. These eigenvectors are associated with the largest eigenvalues and indicate the best directions of projection.

Interpreting LDA transformation results

The results of the LDA transformation can be interpreted in several ways:

• Separation of classes: The goal of the LDA transformation is to maximize the separation between classes.
A larger distance between classes in the reduced space provides better separation and indicates that the LDA transform successfully captures the discriminative information.

• Discrimination values: The LDA transformation provides discriminant function values for each instance, indicating closeness to each class. Instances with higher discriminant function values for a particular class are more likely to belong to that class.

• Feature importance: LDA provides insight into the importance of individual features for separating the classes. The higher the absolute value of a coefficient in the linear discriminant functions, the greater the influence of the corresponding feature on the classification.

Mathematical Formulation of Linear Discriminant Analysis

Linear discriminant analysis (LDA) is a statistical technique that aims to find a linear combination of features that maximizes the separation between classes. Using the statistical properties of the data, LDA can efficiently identify the most discriminating directions in the feature space. In this section, we take a closer look at the mathematical formulation of LDA, including the equations involved in computing the class means, the covariance matrices, and the eigenvectors and eigenvalues used in the LDA transformation.

Linear Discriminant Analysis equations

Linear Discriminant Analysis (LDA) involves several equations that play a crucial role in the calculation and transformation of the data. These equations help us compute class means, covariance matrices, and the eigenvectors and eigenvalues used in LDA.

Calculating class means and covariance matrices

To begin with, LDA involves computing the class means and covariance matrices. The class mean vector for each class represents the average feature values of instances belonging to that class.
For a dataset with C classes and N instances, the mean vector for class c, denoted as μc, is calculated as the sum of the feature vectors divided by the number of instances in that class: μc = (1/Nc) * ∑xi Here, Nc represents the number of instances in class c, and xi represents the feature vector of instance i. Next, LDA involves calculating the within-class scatter matrix (Sw), which captures the spread or variance of the data within each class. The within-class scatter matrix is obtained by summing up the covariance matrices for each class. The covariance matrix for class c, denoted as Sc, is computed as: Sc = ∑(xi - μc)(xi - μc)ᵀ In this equation, xi represents the feature vector of instance i belonging to class c, and μc is the mean vector of class c. By summing up the covariance matrices for all classes, we obtain the within-class scatter matrix Sw. The between-class scatter matrix (Sb) quantifies the separation between classes and is computed by considering the differences between the class means. The between-class scatter matrix is defined as: Sb = ∑(μc - μ)(μc - μ)ᵀ Here, μ represents the overall mean vector calculated as the average of all class means: μ = (1/C) * ∑μc Eigenvectors and eigenvalues in LDA Once the class means and covariance matrices are computed, the next step in LDA involves finding the eigenvectors and eigenvalues of the matrix (Sw^(-1)) * Sb. These eigenvectors represent the directions in the feature space along which the data exhibits the most separation between classes. To obtain the eigenvectors, we solve the eigenvalue problem: (Sw^(-1)) * Sb * w = λ * w In this equation, w represents the eigenvector, and λ represents the corresponding eigenvalue. The eigenvectors derived from this eigenvalue problem represent the optimal directions of projection that maximize class separability. The eigenvalues associated with each eigenvector indicate the importance or discriminative power of that eigenvector. 
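The quantities defined above (class means μc, the within-class scatter Sw, the between-class scatter Sb, and the eigenvectors of Sw⁻¹Sb) can be computed directly with NumPy. This is a minimal sketch on synthetic two-class data: the data locations are made up for illustration, and Sb follows the unweighted form given above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic 2-D classes (illustrative data only)
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(50, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

classes = np.unique(y)
# Overall mean mu = average of the class means, as in the text
mu = np.mean([X[y == c].mean(axis=0) for c in classes], axis=0)

Sw = np.zeros((2, 2))  # within-class scatter: sum of (xi - mu_c)(xi - mu_c)^T
Sb = np.zeros((2, 2))  # between-class scatter: sum of (mu_c - mu)(mu_c - mu)^T
for c in classes:
    Xc = X[y == c]
    mu_c = Xc.mean(axis=0)
    Sw += (Xc - mu_c).T @ (Xc - mu_c)
    d = (mu_c - mu).reshape(-1, 1)
    Sb += d @ d.T

# Eigenvectors of Sw^(-1) Sb give the discriminant directions; the
# eigenvector with the largest eigenvalue separates the classes best.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
w = eigvecs[:, np.argmax(eigvals.real)].real
```

Projecting the data onto w (i.e. computing X @ w) should place the two synthetic classes far apart along that single direction.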
Higher eigenvalues suggest greater separation between classes along the corresponding eigenvector.

Implementing Linear Discriminant Analysis: Step-by-Step Guide

In this section, we walk through implementing LDA step by step in Python with scikit-learn.

Data splitting for training and testing

Before implementing LDA, it is important to split our dataset into training and testing subsets. This allows us to train the LDA model on a portion of the data and evaluate its performance on unseen data. We can use the train_test_split function from scikit-learn to achieve this.

Linear Discriminant Analysis Implementation with Python

To implement LDA, we can utilize the LinearDiscriminantAnalysis class from scikit-learn. This class provides the necessary functionality for dimensionality reduction and classification using LDA.

Training and fitting the LDA model

Next, we can train and fit the LDA model using the training data. This involves learning the discriminant information from the data to find the optimal projection vectors.

Making predictions and evaluating performance

After training the LDA model, we can evaluate its performance on the testing data. This helps us assess how well the model generalizes to unseen instances.

Evaluating the Performance of Linear Discriminant Analysis

Once you have trained and tested your Linear Discriminant Analysis (LDA) model, it's essential to evaluate its performance.
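Putting the steps above together (splitting, fitting, predicting, and scoring), a minimal end-to-end sketch with scikit-learn might look like the following. The test size and random seed are arbitrary choices, not values from the article:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 30% of the data for testing (illustrative split)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Fit LDA on the training data and classify the held-out instances
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
y_pred = lda.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)
```

The same fitted object can also be used for dimensionality reduction via lda.transform(X), which projects onto at most (number of classes - 1) discriminant axes.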
This section discusses several metrics and techniques commonly used for evaluating the performance of a classification model, including LDA.

Metrics for model evaluation

When assessing the performance of a classification model, you can consider various metrics, depending on your specific requirements. Some commonly used metrics include:

• Classification Accuracy: It measures the proportion of correctly classified instances out of the total number of instances.

• Confusion Matrix: A table that provides a detailed breakdown of the model's predicted and actual class labels, enabling the calculation of various metrics.

• Precision: It calculates the ratio of correctly predicted positive instances to the total predicted positive instances, indicating the model's ability to avoid false positives.

• Recall: Also known as sensitivity or true positive rate, it calculates the ratio of correctly predicted positive instances to the total actual positive instances, indicating the model's ability to identify all positive instances.

• F1 Score: It combines precision and recall into a single metric, providing a balanced measure of the model's performance.

Using Linear Discriminant Analysis on the Iris Classification Dataset

What is the Iris flower classification dataset?

The Iris dataset is a well-known dataset in machine learning, consisting of measurements of four features (sepal length, sepal width, petal length, and petal width) from three different species of iris flowers (setosa, versicolor, and virginica). In this case study, we will explore how Linear Discriminant Analysis (LDA) can be used to classify the iris flowers based on their features.

Step 1: Data Exploration and Preprocessing: We start by loading the Iris dataset and examining its features and target classes. We then preprocess the data by standardizing the feature matrix using StandardScaler() to ensure that all features have zero mean and unit variance.
Step 2: Data Splitting: Next, we split the preprocessed data into training and testing subsets using train_test_split() from scikit-learn. This allows us to train the LDA model on a portion of the data and evaluate its performance on unseen data.

Step 3: Linear Discriminant Analysis: We create an instance of LinearDiscriminantAnalysis() as lda and fit the LDA model using the training data. This step involves learning the discriminant information from the data and finding the optimal projection vectors.

Step 4: Data Visualization: To visualize the LDA-transformed data, we plot a scatter plot where each class is represented by a different color for the original data as well as the data after LDA. This helps us visualize the separation of the iris flowers in the LDA space.

Step 5: Classification and Evaluation: We transform the testing data using transform() to obtain the LDA-transformed features. Then, we use predict() to classify the LDA-transformed testing data. Finally, we calculate the accuracy of the LDA model by comparing the predicted classes with the true classes.

• Accuracy: 1.0
• Precision: 1.0
• Recall: 1.0
• F1 Score: 1.0
• Confusion Matrix: [[10, 0, 0], [0, 9, 0], [0, 0, 11]]

FAQ on Linear Discriminant Analysis (LDA)

1. What is LDA? LDA, or Linear Discriminant Analysis, is a statistical method used to find the "direction" that maximizes the separation between multiple classes in a dataset.

2. Why is LDA used? LDA is primarily used for dimensionality reduction and classification tasks, especially when you want to separate and classify data into distinct groups or classes.

3. How is LDA different from PCA? While both LDA and PCA are used for dimensionality reduction, PCA focuses on explaining variance in data, irrespective of classes. In contrast, LDA aims to maximize the separation between different classes.

4. Can LDA be used for regression problems? No, LDA is specifically designed for classification problems.
For regression tasks, other techniques, such as linear regression, should be used.

5. Does LDA assume anything about the data? Yes, LDA assumes that the features (variables) in your dataset are normally distributed and have the same covariance matrix for all classes.

6. How many components can LDA extract? The number of components LDA can extract is one less than the number of classes in the data. So, for a dataset with three classes, LDA can extract up to two components.

7. Is LDA sensitive to feature scaling? Yes. Just like many other machine learning algorithms, LDA can be sensitive to feature scaling. It's often a good practice to scale your data before applying LDA.

8. Can LDA handle non-linear data? LDA, by its very nature, is linear. If the classes in the data have a non-linear boundary, other techniques, such as kernel methods or neural networks, might be more suitable.

9. What are the advantages of LDA? LDA is simple and fast: it has a closed-form solution with no hyperparameters to tune, and it performs dimensionality reduction and classification within a single framework. The resulting discriminant directions are easy to interpret, and when its assumptions of normality and a shared covariance matrix approximately hold, it is a strong baseline classifier.

In this beginner's guide to Linear Discriminant Analysis (LDA), we have covered the foundations, implementation, and evaluation of LDA for data classification and dimensionality reduction. LDA is a powerful technique that improves classification accuracy and facilitates data interpretation. We discussed the importance of LDA, its key benefits, and real-world applications. We explored the core concepts, assumptions, and data preparation techniques for LDA. Additionally, we provided an overview of the mathematical formulation of LDA and presented a step-by-step guide to implementing LDA in Python. By understanding LDA, beginners can effectively apply it for data analysis tasks and further enhance their knowledge in machine learning.

I hope you like this post.
If you have any questions, or want me to write an article on a specific topic, feel free to comment below.
Multivalued ALU - Patent 0305583

This invention relates to a programmable multivalued ALU (arithmetic and logic unit), which is a fundamental building block for constructing a multivalued logic system, and which readily lends itself to integrated circuit techniques. Extensive research in the field of multivalued logic and associated arithmetic circuits is underway with the aim of compensating for or overcoming the several limitations of 2-valued (binary) logic, which is the foundation of many digital circuit systems, the foremost of which is the computer. Whereas 2-valued logic deals with the two values 0 and 1 and the signals employed by a 2-valued logic circuit system have two levels corresponding to these two values, multivalued logic deals with three or more values and the signals used by a multivalued logic circuit system have three or more levels.

Multivalued logic (and a multivalued logic circuit system) has the following advantages over two-valued logic (and a two-valued logic circuit system):

1) It is possible to describe an indeterminate state between 0 and 1 (as by employing three values).

2) The wiring area on an IC substrate and the number of pins can be reduced to enable a higher degree of effective integration. In the case of 64 values, for instance, one sixth the wiring area of a two-valued logic circuit is sufficient.

3) The realization of a 10-valued (decimal) machine would make it possible to employ logic the same as that used by human beings, so that the encoders and decoders required by two-valued machines would be unnecessary.

The most fundamental building block for constructing a multivalued logic system having these advantages, particularly a system supplanting the conventional 2-valued computer, is a multivalued ALU. A multivalued ALU must be an ALU capable of performing any multivalued logical operation.
There are ten types of multivalued logical operations even if we consider only those that are well known, such as MAX, MIN, addition, subtraction, multiplication and division. For n values and r variables, the number of types of multivalued logical operations that exists is n raised to the n^r power (i.e. n^(n^r)). For example, if there are three values and two variables, the number of types of multivalued logical operations is 3^9 = 19683; if there are four values and two variables, the number of types of multivalued logical operations is approximately 4.3 x 10^9. A multivalued ALU capable of executing such an enormous number of multivalued logical operations is virtually impossible to design in the ordinary manner.

An object of the present invention is to provide a programmable multivalued ALU capable of executing any operation. Another object of the present invention is to readily obtain a multivalued logic function circuit for performing an operation in accordance with a truth table of a multivalued logic function when such a truth table is given, the circuit being used in order to realize the abovementioned multivalued ALU. A further object of the present invention is to obtain a multiple-function circuit, namely a multivalued logic function circuit for performing operations in accordance with truth tables of multivalued logic functions when such truth tables are given, wherein a plurality of different ones of the multivalued logic operations can be performed simultaneously, the circuit being used in order to realize the abovementioned multivalued ALU. Still another object of the present invention is to provide a simply constructed multivalued memory suitable for construction of a multivalued computer system.
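The function counts cited above follow from a simple argument: an n-valued function of r variables has n**r possible input combinations, and each combination may be assigned any of n output values, giving n**(n**r) distinct truth tables. A small sketch (the function name is ours, not the patent's):

```python
def num_functions(n: int, r: int) -> int:
    """Number of distinct n-valued logic functions of r variables.

    Each of the n**r input combinations can map to any of n output
    values, so there are n ** (n ** r) possible truth tables.
    """
    return n ** (n ** r)

three_valued = num_functions(3, 2)  # 3 ** 9 = 19683
four_valued = num_functions(4, 2)   # 4 ** 16 = 4294967296, about 4.3e9
```

Even for modest n and r the count explodes, which is why the patent argues a programmable circuit is the only practical route to a general multivalued ALU.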
A multivalued logic function circuit in accordance with the present invention comprises: a plurality of multivalued signal sources for generating signals respectively representing a plurality of logic values of applied multivalued logic; a memory array including a plurality of signal lines connected to respective ones of the signal sources, and a plurality of address lines each connected to any one of the signal lines at a node by being programmed in accordance with a given truth table; an addressed switch for selectively rendering any one of the address lines conductive by a multivalued input signal; and an output line connected to all of the address lines on the output side of the addressed switch.

In case of a field-programmable circuit, only the memory array is programmed in accordance with the truth table. For most mask-programmable circuits, the memory array and addressed switch would be programmed. In any case, it will suffice to write a program in accordance with a truth table, and programming can readily be accomplished if a truth table is given. A circuit for implementing any multivalued logic function is thus realized.

A multivalued memory according to the present invention comprises a plurality of signal lines connected to respective ones of a plurality of multivalued signal sources for generating signals respectively representing a plurality of logic values of applied multivalued logic, and a plurality of address lines each connected to any one of the signal lines at a node by being programmed. When the multivalued memory is used, it is possible, by designating desired address lines, to obtain from the address lines multivalued signals representing previously programmed values. Since current or voltage sources can be used as the signal sources, it is possible to extract a multivalued voltage signal or multivalued current signal from the multivalued memory.
It will be readily understood that the multivalued memory of the invention can be utilized in order to realize the above-mentioned multivalued logic function circuit.

A multivalued ALU in accordance with the present invention comprises: a plurality of multivalued signal sources for generating signals respectively representing a plurality of logic values of applied multivalued logic; a memory array including a plurality of signal lines connected to respective ones of the signal sources, and a plurality of address line groups provided for respective ones of different functions to be implemented, each group comprising a plurality of address lines each connected to any one of the signal lines at a node by being programmed in accordance with a truth table of each function; first selecting means for selecting any one of the address line groups by a selection signal which is for selecting a function to be implemented; second selecting means for selecting at least one of the address lines in the selected address line group by a multivalued input signal; and an output circuit for outputting a signal representing a logic value obtained from the address line selected by the first and second selecting means.

Execution of the multivalued logic operation is equivalent to generation of a multivalued logic function representing the results of the operation. Plural types of desired multivalued logic functions can be readily programmed in the abovementioned memory array in accordance with the truth tables thereof. One multivalued logic function among the plurality thereof can be selected by the first selecting means.
By applying an input signal representing a variable, a signal representing a function value corresponding to the variable can be obtained from the selected multivalued logic function circuit. Thus, it is possible by using truth tables to program a multivalued ALU which implements any multivalued logic function, and programmed operation results can be derived by operating the multivalued ALU.

These and other characterizing features of the present invention will become clear from a description of preferred embodiments with reference to the accompanying drawings. Fig. 1 is a view illustrating examples of truth tables of a 4-value 3-input multivalued logic function; Fig. 2 is a circuit diagram illustrating a multivalued logic function circuit which outputs a multivalued function expressed by the truth tables shown in Fig. 1; Fig. 3 is a view for describing a symbol used in Fig. 2; Figs. 4a and 4b are circuit diagrams illustrating examples of a specific circuit construction of current signal sources shown in Fig. 2; Fig. 5 is a circuit diagram depicting a decoder shown in Fig. 2 and illustrates a specific example of operation of the same in a voltage mode; Fig. 6 illustrates a basic circuit for describing the circuit of Fig. 5; Fig. 7 is a view showing the relationship between the threshold value of drain current of a FET shown in Fig. 5 and the threshold level of the circuit; Fig. 8 is a circuit diagram showing a specific example of a decoder operated in a current mode; Figs. 9a through 9d are graphs of output signals from the circuit of Fig. 8; Fig. 10 is a circuit diagram illustrating another example of an addressed switch; Figs. 11 and 12 are circuit diagrams showing multivalued logic function circuits obtained through simplification of the circuit of Fig. 2 by adopting a mask-programmable arrangement; Fig. 13a is a view showing an example of a multiple-function truth table; Fig. 13b is a view showing the correlation between four functions and truth table values; Fig.
14 is a circuit diagram showing an example of a multiple-function circuit for realizing the truth table of Fig. 13; Fig. 15 is a circuit diagram of a 4-value 2-input ALU, which illustrates one example of a multivalued ALU; Fig. 16 is a block diagram illustrating the multivalued ALU of Fig. 15 constructed on a single chip by IC techniques; Fig. 17 is a circuit diagram illustrating an example of a multivalued ALU constituted by a memory array and bilateral T-gates; Fig. 18 is a block diagram in which a multivalued ALU capable of implementing 16 functions is realized by using five ALU chips; Fig. 19 is a circuit diagram illustrating the detailed wiring of an ALU used as a bilateral T-gate in the circuit of Fig. 18; Fig. 20 is a view showing the relationship between the inputs and outputs of a multivalued MAX/MIN function circuit block; Fig. 21 is a block diagram of a plural-digit MAX/MIN circuit constructed by cascade-connecting MAX/MIN circuits; Fig. 22 is a view showing truth tables of a 4-valued MAX/MIN function; Fig. 23 is a connection diagram of a 4-valued MAX/MIN function circuit; Fig. 24 is a connection diagram for a case where the circuit of Fig. 23 is simplified; Fig. 25 illustrates the relationship between the inputs and outputs of a multivalued full adder circuit; Fig. 26 is a block diagram of a full adder circuit for a plurality of digits and is constructed by cascade-connecting full adder circuits; Fig. 27 is a view showing truth tables of 4-valued full addition; Fig. 28 is a connection diagram of a 4-valued full adder circuit; Fig. 29 is a connection diagram showing a simplification of the circuit of Fig. 28; Fig. 30 illustrates the relationship between the inputs and outputs of a multivalued full subtracter circuit block of a first type; Fig. 31 is a block diagram showing a full subtracter circuit for a plurality of digits and is constructed by cascade-connecting the full subtracter circuits of the first type; Fig.
32 is a view illustrating truth tables of 4-valued full subtraction in a full subtracter circuit of the first type; Fig. 33 is a connection diagram showing a 4-valued full subtracter circuit of the first type; Fig. 34 is a connection diagram showing a simplification of the circuit of Fig. 33; Fig. 35 is a view illustrating a special subtracter circuit; Fig. 36 is a block diagram of a circuit obtained by connecting the circuit of Fig. 35 in a 4-digit cascade; Fig. 37 is a view illustrating the relationship between the inputs and outputs of a full subtracter circuit block of a second type; Fig. 38 is a block diagram of a full subtracter circuit for a plurality of digits and is constructed by cascade-connecting full subtracter circuits of the second type; Fig. 39 is a view showing truth tables of 4-valued full subtraction in the full subtracter circuit of the second type; Fig. 40 illustrates the relationship between the inputs and outputs of a multivalued full multiplier circuit block; Fig. 41 is a block diagram illustrating an example in which a multiplier circuit for 4-digit and 1-digit variables is constructed by cascade-connecting full multiplier circuits; Fig. 42 is a view showing truth tables of 4-valued full multiplication; Fig. 43 is a connection diagram of a 4-valued full multiplier circuit; Fig. 44 is a connection diagram showing a simplification of the circuit of Fig. 43; Fig. 45 is a block diagram showing an example of a circuit in which a 4-digit variable and 2-digit variable are multiplied at once; Fig. 46 shows a block in which several of the elements of Fig. 45 are consolidated; Fig. 47 is a view showing the relationship between the inputs and outputs of a multivalued full divider circuit block; Fig. 48 is a block diagram illustrating an example in which a divider circuit which divides a 4-digit variable by a 1-digit variable is constructed by cascade-connecting the full divider circuits; Fig. 
49 is a view showing truth tables of 4-valued full division; Fig. 50 is a connection diagram of a 4-valued full divider circuit; Fig. 51 is a connection diagram showing a simplification of the circuit of Fig. 50; Fig. 52 is a block diagram showing the concept of a circuit for dividing a 4-digit variable by a 2-digit variable; Fig. 53a illustrates an n-channel MOSFET switching circuit; Fig. 53b illustrates waveforms associated with the circuit of Fig. 53a; Fig. 54a illustrates a CMOSFET switching circuit; Fig. 54b illustrates waveforms associated with the circuit of Fig. 54a; Fig. 55 is a circuit diagram illustrating an example of a CMOS multivalued logic function circuit and CMOS multivalued ALU; and Fig. 56 is a circuit diagram illustrating an example of a multivalued binary hybrid circuit. (1) Definition of a multivalued logic function A multivalued logic function can be defined in two ways. One method is to use an equation that expresses the function. For example, a multivalued logic function MAX in which x and y are variables is expressed by the following: MAX(x, y) = x (when x ≥ y), y (when x < y) ... (1) and a multivalued logic function MIN in which x and y are variables is expressed by the following: MIN(x, y) = x (when x ≤ y), y (when x > y) ... (2) The other method of defining a multivalued logic function is to use a truth table. Fig. 1 illustrates an example of truth tables of a multivalued logic function (a 4-value function) having the three variables x, y and z, each of which takes on the four values of 0, 1, 2 and 3. Since any value can be written in a truth table, using the truth table makes it possible to readily define any multivalued logic function. Whereas there is a limitation on the types of functions capable of being defined in accordance with the definition method that relies upon an equation of the function, all functions can be defined if truth tables are utilized. This is a significant advantage of the method using truth tables.
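The two ways of defining a multivalued logic function can be mirrored in software. The sketch below (Python, chosen only for illustration; none of it appears in the disclosure) defines 4-valued MAX and MIN by equation, then tabulates the same functions exhaustively, which corresponds to the truth-table definition:

```python
# Minimal sketch (illustrative only): a 4-valued MAX/MIN defined two ways,
# by equation (Eqs. (1) and (2)) and by exhaustive truth table.

R = 4  # radix: logic values 0, 1, 2, 3

def mv_max(x, y):
    """Eq. (1): multivalued MAX."""
    return x if x >= y else y

def mv_min(x, y):
    """Eq. (2): multivalued MIN."""
    return x if x <= y else y

# The same functions as truth tables: any function can be tabulated this way.
MAX_TABLE = {(x, y): mv_max(x, y) for x in range(R) for y in range(R)}
MIN_TABLE = {(x, y): mv_min(x, y) for x in range(R) for y in range(R)}
```

Any function, including one with no closed-form equation, can be written directly as such a table, which is the advantage the text describes.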
Furthermore, since a function defined using an equation can be replaced by a truth table, the method using truth tables can be said to subsume the method relying upon equations. Accordingly, in the description that follows, multivalued logic functions are in principle defined by using truth tables. It is possible for a multivalued logic function defined by a truth table to be expressed by the formula given below using the values in the truth table. Specifically, a multivalued logic function f(x,y,z) in which x, y and z are variables is expressed by the following if the value in the truth table is Ctmn: f(x,y,z) = * [Ctmn . x^t . y^m . z^n] taken over all t, m, n ... (3) where x^t is a literal of x which is asserted only when x = t. If there are r values, x, y, z, t, m and n each take on the values 0, 1, ..., (r-1). As for a multivalued logic function circuit which outputs a multivalued signal indicative of the function f(x,y,z) of Eq. (3), the variables x, y, z would be given by a multivalued input signal representing these values. The symbol "." would be realized by an AND logic operation, and the symbol "*" would be realized by switching. In order to obtain an output signal representing any multivalued logic function by such a multivalued logic function circuit, the circuit must be programmable. Any multivalued logic function may be obtained by programming the abovementioned values of the truth table, i.e. by programming a circuit that generates the value Ctmn. Two methods are available for programming a circuit that generates the value Ctmn of a truth table. One method involves programming performed in the process of fabricating the circuit through IC manufacture, in which case the circuit is referred to as a mask-programmable circuit. The other is a method in which an unprogrammed circuit is programmed by the user using a PROM writer or the like. This circuit is referred to as a field-programmable circuit. (2) Field-programmable multivalued logic function circuit Fig. 2 shows a circuit which executes the operation expressed by Eq.
(3) to obtain an output signal representing the function f(x,y,z). The values Ctmn of the truth table are programmed in a field-programmable circuit. The circuit of Fig. 2 is a 4-valued 3-input (3-variable) multivalued logic function circuit. Though the circuit shown in Fig. 2 is capable of operating in either a current mode or voltage mode, the circuit will first be described premised on the current mode of operation. The circuit will then be described for a case where it has been switched over to the voltage mode of operation. The multivalued logic function circuit shown in Fig. 2 includes four signal sources 10 - 13, a node memory array 21 connected to these signal sources 10 - 13, an addressed switch 23 connected to the memory array 21, and an output line 5 connected to the output side of the addressed switch 23. The signal sources 10 - 13 are current sources, the specific construction of which will be described later. Since the present embodiment is a 4-valued logic function circuit, the signal sources 10 - 13 output currents indicative of the four values 0, 1, 2 and 3, respectively. The current source 10 for the current representing the value 0 is not always necessary. Lines (signal lines) connected to respective ones of the signal sources 10 - 13 in the node memory array 21 constitute rows and are referred to as 0th - 3rd row lines, respectively. Lines (address lines) constituting 64 columns are arranged with respect to the lines forming these rows and are referred to as the 1st column line, 2nd column line, ..., 64th column line, starting from the left side of Fig. 2. Each line forming a row and each line forming a column are connected to each other at only one location, thereby forming a node. This memory array is of the field-programmable type and generally has its rows and columns arranged to cross each other three-dimensionally on a silicon chip.
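The operation the circuit executes, namely the expansion of Eq. (3), can be sketched behaviorally as follows. This is a model only; the convention that a literal takes the value r-1 on a match and 0 otherwise is an assumption consistent with the 1-of-4 decoder outputs described later:

```python
# Sketch of Eq. (3) (assumed form): f(x,y,z) is recovered from the
# truth-table values Ctmn by ANDing literals and combining the 64 terms.

R = 4

def literal(v, t):
    """1-of-r literal: r-1 when v == t, else 0 (the decoder's role)."""
    return R - 1 if v == t else 0

def f_from_table(C, x, y, z):
    """C is a dict {(t, m, n): value}; '.' is realized as MIN (AND),
    '*' as MAX over all terms (realized by switching in the circuit)."""
    terms = (min(C[(t, m, n)], literal(x, t), literal(y, m), literal(z, n))
             for t in range(R) for m in range(R) for n in range(R))
    return max(terms)

# Programming the table with f = MAX(x, y, z) as a check:
C = {(t, m, n): max(t, m, n)
     for t in range(R) for m in range(R) for n in range(R)}
```

Only the one term whose three literals all match survives, so the expansion returns exactly the programmed table entry.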
Like an ordinary programmable ROM, the memory array can be of the type in which the nodes are formed by pn junction breakdown or insulation layer breakdown produced by application of a large current or voltage, the type in which fuses at unnecessary node portions are melted away to leave only the nodes that are necessary, etc. Any of these memory arrays can be employed. In any event, the memory array 21 is programmed to have the values of a truth table representing the desired multivalued logic function, such as illustrated in Fig. 1. The 1st column line corresponds to x=0, y=0, z=0, the 2nd column line corresponds to x=1, y=0, z=0, the 3rd column line corresponds to x=2, y=0, z=0, the 9th column line corresponds to x=0, y=2, z=0, and so on. Thus, all columns are in one-to-one correspondence with all combinations of the variables x, y, z. Each column line is connected by a node to a row line from a current source representing the function f(x,y,z) in which the corresponding value of x, y, z is the variable, namely from a current source representing the value Ctmn of the truth table. The circuit combining the current sources 10 - 13 and the memory array 21 is equivalent to 64 current sources. Since the memory array 21 is programmable, it is possible to program values of any truth table. A circuit which outputs a signal representing any multivalued logic function f(x,y,z) may thus be obtained. The addressed switch 23 selectively extracts or reads (switches on) currents representing the function f(x,y,z), which has been set in the memory array 21, in dependence upon inputs x, y, z applied to input terminals 1, 2, 3, respectively. The addressed switch 23 includes decoders (1-of-4 decoders) 31, 32 and 33 to which the inputs x, y and z are respectively applied, and an AND array 22 connected to the output sides of these decoders.
The decoder 31 outputs an H-level signal at its t = 0 output terminal and an L-level signal at its other output terminals when the input x (irrespective of the current or voltage mode of operation) is indicative of the logic value 0. Similarly, when the input x is logical 1, 2, 3, the H-level signal output appears at the t = 1, 2, 3 output terminal, respectively, with the other output terminals delivering the L level. The other decoders 32, 33 operate in the same manner. Extending from each of the decoders 31, 32, 33 are control lines constituting four rows. These control lines intersect the column lines extending from the memory array 21 or connected to the column lines thereof. Provided at each intersection is a switching element comprising an n-channel MOSFET, indicated by the symbol denoted by A (see Fig. 3). These switching elements A are connected serially in threes to each column line and are controlled by respective ones of the control lines constituting the rows extending from the decoders 31, 32, 33. The control lines that control the three switching elements in each column differ from one column to the next. For instance, when x=0, y=0, z=0 holds, all three switching elements in the 1st column turn on so that the line is rendered conductive. Similarly, the 2nd column line is rendered conductive when x=1, y=0, z=0 holds, the 3rd column line is rendered conductive when x=2, y=0, z=0 holds, and the 10th column line is rendered conductive when x=1, y=2, z=0 holds. In other words, the addressed switch 23 is so adapted that among the 64 column lines, only the column line addressed by the combination of inputs x, y, z is rendered conductive. It goes without saying that the addressed switch 23 or at least the AND array 22 can be realized by integrated circuitry. The 64 column lines are connected to the single output line 5 on the output side of the addressed switch 23. The output line 5 is provided with an output terminal 4 for the function f(x,y,z).
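The one-to-one addressing of columns by (x, y, z) can be modeled as below; the 1-based column numbering is inferred from the correspondences given in the text (column 1 for x=0, y=0, z=0, column 9 for x=0, y=2, z=0, column 10 for x=1, y=2, z=0):

```python
# Sketch (assumed numbering from the text): among the 64 column lines, the
# addressed switch renders conductive only the one selected by (x, y, z).

R = 4

def column_number(x, y, z):
    """1-based column index: column 1 is x=0,y=0,z=0; column 2 is
    x=1,y=0,z=0; column 9 is x=0,y=2,z=0; and so on."""
    return 1 + x + R * y + R * R * z

def addressed_switch(column_currents, x, y, z):
    """column_currents[i] is the programmed current on column i+1; only
    the addressed column conducts to the single output line 5."""
    return column_currents[column_number(x, y, z) - 1]
```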
Accordingly, a column line designated by the inputs x, y, z is rendered conductive by the addressed switch 23, so that the current of the previously programmed value flows through the designated column line from the corresponding node of the memory array 21 and then through the output line 5 to appear at the output terminal 4 as an output current representing the corresponding function f(x,y,z). In the arrangement of Fig. 4a, a current I0 corresponding to the value 1 in multivalued logic is applied to an input terminal 9 and enters a six-output current mirror 14 comprising n-channel MOSFETs. One output current of the current mirror 14 has its direction reversed by a current mirror 15 comprising p-channel MOSFETs and appears as an output current I0 at an output terminal 11a. This output current I0 corresponds to the output current of the current source 11 shown in Fig. 2. Connecting together two of the output drains of the current mirror 14 forms a current of value 2I0, which is reversed by a current mirror 16 before appearing at an output terminal 12a. Connecting together the other three output drains of current mirror 14 produces a current of value 3I0, which is reversed by a current mirror 17 before appearing at an output terminal 13a. The output currents 2I0, 3I0 of the output terminals 12a, 13a correspond to the output currents from the current sources 12, 13, respectively, of Fig. 2. In the arrangement of Fig. 4b, a three-output current mirror 14B outputs three currents of value I0, which are applied to respective current mirrors 15B, 16B, 17B. Since the current mirror 15B is a one-output current mirror, it outputs a current the value of which is equal to that of the input current I0. The current mirrors 16B and 17B are 2- and 3-output current mirrors, respectively, and each has its output drains connected together. As a result, these current mirrors provide currents of values 2I0, 3I0, respectively. In Fig.
2, a series circuit composed of a switching element 34 comprising an n-channel MOSFET and a resistor 35 is connected in parallel with the output line 5. The resistor 35 is grounded. The switching element 34 has its on/off action controlled by a voltage signal applied to a mode select terminal 7. If the circuit of Fig. 2 is to be operated in the current mode, as described above, then the switching element 34 is turned off in advance by applying an L-level voltage to the terminal 7. If the circuit of Fig. 2 is to be operated in the voltage mode, then the switching element 34 is turned on in advance by applying an H-level voltage to the terminal 7. When this is done, the current indicative of the function f(x,y,z) flows into the resistor 35 via the switching element 34, whereby a voltage drop is produced that appears at the output terminal 4. It is assumed here that the input impedance of the succeeding circuit (not shown) connected to the output terminal 4 is high. Thus, without altering the current sources 10 - 13 in any way (though it is preferred that the 0th row line connected to the current source 10 be grounded beforehand), it is possible to switch between the current mode of operation and voltage mode of operation merely by changing over the mode select signal applied to the terminal 7. It is also permissible to connect the switching element 34 and resistor 35 to each row line of the memory array 21 rather than the output line 5. Furthermore, it goes without saying that the circuit of Fig. 2 can be converted into a voltage mode circuit if voltage sources are used instead of the current sources 10 - 13. The arrangement of Fig. 2 is also provided with a circuit for programming the memory array 21.
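The effect of the mode select terminal 7 can be summarized by a small behavioral model; the resistor value here is an illustrative assumption, not a value from the disclosure:

```python
# Sketch: switching between current mode and voltage mode. In voltage mode
# the output current flows through resistor 35 and the voltage drop across
# it becomes the output signal.

def output_signal(i_out_amps, voltage_mode, r_load_ohms=1000.0):
    """Current mode returns the current itself; voltage mode returns I * R,
    assuming the succeeding stage has a high input impedance."""
    if voltage_mode:
        return i_out_amps * r_load_ohms  # drop across resistor 35
    return i_out_amps
```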
The circuit is composed of switching elements 40 - 43 connected at one terminal thereof to respective ones of the row lines of memory array 21, a terminal 6 connected to the switching elements 40 - 43 at the other terminal thereof, and a decoder 44 for generating a signal to control the on/off action of the switching elements 40 - 43. The decoder 44 has a terminal 8 to which a line select signal is applied. It is preferred that the line select signal also be a 4-valued signal. As for the method of programming the memory array 21, any one of the four row lines of memory array 21 is selected by the line select signal applied to the terminal 8. The switching element (any one of the switching elements 40 - 43) corresponding to the selected row line is turned on. In addition, a signal for selecting a column line is applied as the inputs x, y, z to the input terminals 1, 2, 3 to render the desired column line conductive. After these operations have been performed, a large current or voltage is impressed across the output terminal 4 and the terminal 6. Depending upon the type of arrangement, this will cause a node to be formed or to be cut open at the point where the selected row line and column line intersect. If this operation is repeated for all nodes to be formed or cut open, the memory will be programmed in its entirety, thus completing the programming procedure. The entirety of the circuit of Fig. 2 in which each current source 11 - 13 is replaced by the circuit of either Fig. 4a or 4b can be realized in the form of an integrated circuit, in which case the number of input terminals can be greatly reduced. 
Specifically, the input terminals 1, 2, 3, output terminal 4, voltage or current application terminal 6 for programming, mode select terminal 7, line select terminal 8, input terminal 9 for the unit current I0 and the terminals for the required operating power supply V and for ground can be realized by a total of 10 pins, and it is highly advantageous if these ten pins are provided on one chip. (3) Decoder A specific example of the decoders 31, 32, 33, 44 shown in Fig. 2 will now be described. As these decoders are structurally identical, they will be discussed taking decoder 31 as an example. Fig. 5 illustrates an example of a decoder applied when the input x is a voltage signal. The four output terminals t = 0 - 3 are indicated by x0 - x3. Since the circuit of Fig. 5 is obtained by successively connecting basic circuits, the basic circuit will be described with reference to Fig. 6. In Fig. 6, two p-channel MOSFETs are connected in series. The power supply voltage, namely an H-level voltage V, is applied to the drain of one FET. The other FET has its source connected to its gate and to ground. An output terminal is provided at the point where the two FETs are connected together. An output voltage E is obtained at this output terminal. The output voltage E of this basic circuit attains the H level only when an H-level voltage is applied to the source of the supply-side FET and an L-level input voltage is applied to its gate. Returning to Fig. 5, it is seen that the circuit is obtained by successively connecting or "cascading" six of the above-described basic circuits. The power supply voltage V (e.g. 5 V) is applied to one FET in each of three of these basic circuits, namely the FETs Q1, Q2, Q3, and the FETs Q1, Q2, Q3 are controlled by the input voltage signal x. The supply-side FETs of the other two basic circuits are connected to the drain side of the x-controlled FETs. The power supply V is applied to the supply-side FET in the remaining basic circuit.
Assuming that the right side of each basic circuit in Fig. 5 is the preceding stage, the gates of the supply-side FETs will be controlled by the output voltage (corresponding to E in Fig. 6) of the respective basic circuits constituting the preceding stage. As shown in Fig. 7, the threshold voltages of the FETs Q1, Q2, Q3 are set to mutually different values -θ1, -θ2, -θ3, respectively. For example, θ1 = 0.83 V, θ2 = 2.50 V, θ3 = 4.17 V. The values θ1, θ2 and θ3 serve as threshold voltages Th1, Th2, Th3 for subjecting the input voltage x to level discrimination. The threshold voltages of the other FETs Q4 to Q12 should be not less than 1 V. If the input voltage x is less than the threshold voltage Th1, the result is that an L-level voltage is applied to the gate of FET Q1 in the basic circuit of the first stage; hence, the output voltage x0 of this basic circuit attains the H level. Since the gates of the FETs in the basic circuits for generating the other outputs x1, x2, x3 will be at the H level, the outputs x1 - x3 will be at the L level. When the input voltage x satisfies the inequality Th2>x>Th1, the voltage at each point changes in the manner indicated within the parentheses. That is, when the gate of FET Q1 attains the H level, the output x0 assumes the L level. As a result, the gate of the FET in the following basic circuit assumes the L level, so that the output x1 attains the H level. The other outputs x2, x3 remain at the L level. Thus, only the output x2 attains the H level when Th3>x>Th2 holds, and only the output x3 attains the H level when Th3<x holds. Though the circuit of Fig. 5 is shown to be constituted by p-channel MOSFETs, it goes without saying that the decoder can be constructed in the same manner even if n-channel MOSFETs are used. An example of a decoder for a case where the input signal x is a current will now be described. This decoder can be constructed utilizing a plurality of literal circuits which operate in the current mode.
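The level discrimination performed by the Fig. 5 decoder can be modeled behaviorally as follows, using the example thresholds Th1 - Th3 given above (transistor-level behavior is not modeled):

```python
# Sketch of the Fig. 5 behavior: the input voltage x is level-discriminated
# against Th1 < Th2 < Th3 and exactly one of the outputs x0 - x3 goes high.

TH1, TH2, TH3 = 0.83, 2.50, 4.17  # volts, as given in the text

def one_of_four(x_volts):
    """Return (x0, x1, x2, x3) with exactly one H (True) output."""
    if x_volts < TH1:
        return (True, False, False, False)
    if x_volts < TH2:
        return (False, True, False, False)
    if x_volts < TH3:
        return (False, False, True, False)
    return (False, False, False, True)
```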
A current-mode literal circuit is described in detail in the specification of Japanese Patent Application No. 60-16897 (for which the corresponding U.S. Application Serial No. is 821,289) filed previously by the applicant. A brief discussion of this circuit now follows. Fig. 8 illustrates an example of a one-of-four decoder utilizing four literal circuits. The input current x is inputted to a multiple-output current mirror 54, where six currents each having a value equal to that of the input current x are produced. One of these outputs is delivered to a literal circuit 50 for generating an output x0, two of the outputs are delivered to a literal circuit 51 for generating an output x1, two of the outputs are delivered to a literal circuit 52 for generating an output x2, and the remaining output is delivered to a literal circuit 53 for generating an output x3. The operation of the literal circuit 51 for generating an output signal x1 (a = 0.5, b = 1.5) will now be described. A p-channel MOSFET and an n-channel MOSFET are connected in series between a current source 59, which represents a value of r-1 (where r is the radix), and the output terminal. Current sources 55, 56 are provided for producing currents representing the respective values a, b. The output side of the multiple-output current mirror 54 is connected to the output sides of the current sources 55, 56 at respective nodes 57, 58. The p-channel and n-channel FETs are controlled by the voltages at these nodes 57, 58, respectively. When x>a holds, the potential at node 57 falls to the L level and the p-channel FET is turned on. When x<b holds, the potential at node 58 attains the H level and the n-channel FET is turned on. Accordingly, a current representing the value (r-1) is obtained at the output terminal only when a<x<b holds, as shown in Fig. 9b. Similarly, in the literal circuit 52, an output current is obtained from the output terminal if 1.5<x<2.5 holds, since the settings are such that a = 1.5, b = 2.5.
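The windowing behavior of these literal circuits, and the one-of-four decoding built from them, can be sketched as below; passing a = None or b = None models the end circuits in which one comparator is omitted:

```python
# Sketch of the current-mode literal circuit of Fig. 8: output (r-1) only
# when a < x < b; the four circuits 50 - 53 differ only in their windows.

R = 4

def literal_out(x, a=None, b=None):
    """a=None omits the lower comparator; b=None omits the upper one."""
    if a is not None and not (x > a):
        return 0
    if b is not None and not (x < b):
        return 0
    return R - 1

def decoder(x):
    """1-of-4 decoder: (x0, x1, x2, x3) for the windows in the text."""
    return (literal_out(x, b=0.5),
            literal_out(x, a=0.5, b=1.5),
            literal_out(x, a=1.5, b=2.5),
            literal_out(x, a=2.5))
```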
In literal circuit 50, the components corresponding to current source 55 and the p-channel FET are omitted. Here the setting is such that b = 0.5. As a result, an output current is obtained at x<0.5. In literal circuit 53, the components corresponding to current source 56 and the n-channel FET are omitted. Here the setting is such that a=2.5. As a result, an output current is obtained at 2.5<x. These output currents are depicted in Figs. 9a - 9d. Thus, the circuit of Fig. 8 provides an output current at any one of its four output terminals, depending upon the value of the input current x. The output current flows to the ground side via a resistor, as shown by the dashed lines in Fig. 8, thereby producing a voltage drop across the resistor. The corresponding switch A in the AND array 22 of Fig. 2 is controlled by this voltage drop. If the circuit of Fig. 8 is adopted as the decoder, the value (r-1) of current source 59 may be any value. (4) Another example of an addressed switch The addressed switch 23 in the multivalued logic function circuit of Fig. 2 can be replaced by other circuitry, e.g. several bilateral T-gates, as illustrated in Fig. 10. For the sake of simplification, the circuit shown in Fig. 10 is illustrated for two inputs x, y. In addition, voltage sources 11A - 13A are used in place of the current sources 11 - 13 of Fig. 2. Obviously, current sources can also be provided, in which case a current/voltage changeover switch and a resistor would be furnished as well. A bilateral T-gate is adapted to select a plurality of input signals by a select signal and deliver one of them as an output signal. In the arrangement of Fig. 10, the addressed switch is constituted by four bilateral T-gates 61 - 64, each receiving the input x as a select signal and having four input terminals, and a bilateral T-gate 65 receiving the outputs of the bilateral T-gates 61 - 64 as inputs and the input y as a select signal.
The 1st through 4th, 5th through 8th, 9th through 12th and 13th through 16th column lines are connected to the input sides of the bilateral T-gates 61, 62, 63, 64, respectively. The output of the bilateral T-gate 65 represents the output signal f(x,y). (5) Mask-programmable multivalued logic function circuit In principle, the only difference between a field-programmable multivalued logic function circuit and a mask-programmable multivalued logic function circuit is in which step of a series of manufacturing steps the nodes of the memory array, indicated at numeral 21 in Fig. 2, are programmed. Accordingly, there is absolutely no difference between them as far as circuitry is concerned (though the circuits do differ in terms of IC structure). In the case of the mask-programmable multivalued logic function circuit, however, the circuit pattern is decided at the design stage. Consequently, such an arrangement is advantageous in that the circuit pattern can be simplified, though adaptability is sacrificed. Simplification of the circuit pattern will be described by comparing the arrangement of Fig. 2, which shows an example of the field-programmable circuit, and Fig. 11, which shows an example of a circuit which results when the arrangement of Fig. 2 is simplified by adopting the mask-programmable configuration. In Fig. 11, portions similar to those shown in Fig. 2 are designated by like reference characters. Also, the decoder 44 and the switching elements 40 - 43 controlled thereby are unnecessary. It will be understood from the truth tables of Fig. 1 that f(x,y,z) = 1, irrespective of the value of x, when z=0, y=2 holds. This portion is expressed as the lines of the 9th to 12th columns in the circuit of Fig. 2. Since the nodes of these four column lines all lie on the 1st row line of memory array 21, it is possible to delete three of the column lines and represent the four column lines by a single column line.
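The column-merging idea can be sketched in software. The model below performs only the merge over x (columns whose programmed value does not depend on x collapse to one); the further merge over x and y together, as for z = 2 in Fig. 1, is analogous. The example function is an illustrative stand-in, not the table of Fig. 1:

```python
# Sketch of the mask-programmable simplification: where the programmed
# value is the same for every x (with y, z fixed), the four x-addressed
# columns can be replaced by one column with no x switching element.

R = 4

def count_columns_after_x_merge(table):
    """table: {(x, y, z): value}. Returns the number of column lines left
    after merging x-independent groups, starting from the full 64."""
    columns = 0
    for z in range(R):
        for y in range(R):
            if len({table[(x, y, z)] for x in range(R)}) == 1:
                columns += 1  # one merged column suffices
            else:
                columns += R  # keep all four x-addressed columns
    return columns

# Example resembling the text: f = 1 when z=0, y=2 (any x); f = 2 when z=2.
def example_f(x, y, z):
    if z == 2:
        return 2
    if z == 0 and y == 2:
        return 1
    return x  # elsewhere the value depends on x (illustrative choice)

table = {(x, y, z): example_f(x, y, z)
         for x in range(R) for y in range(R) for z in range(R)}
```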
These four column lines are thus represented by a new 9th column line in Fig. 11. Since the output f(x,y,z) takes on the same value irrespective of the value of input x, the new 9th column line in the AND array 22 is not provided with a switching element controlled by the input x. The new 9th column line is provided only with switching elements A2, A3 controlled by the input y (=2) and the input z (=0), respectively. Similarly, for z = 2 in the truth tables of Fig. 1, f(x,y,z) = 2, irrespective of the values of x, y. Accordingly, the corresponding 33rd through 48th column lines in Fig. 2 can be represented by a new 30th column line, as shown in Fig. 11. Since f(x,y,z) takes on the value 2, irrespective of the inputs x, y, when z=2 holds, the new 30th column line is provided solely with a switching element A3 controlled by the input z. Further, as shown in Fig. 12, it is possible to delete the column line connected to the 0th row line, namely the line extending from the current source 10 for the value 0, in the memory array 21. Thus, the multivalued logic function circuit of Fig. 2 can be greatly simplified, ultimately to the form shown in Fig. 12, if the above-described approach of deleting components is adopted. It should be noted that the mask-programmable circuit requires programming not only of the memory array 21 but also of the AND array 22. (6) Multiple-function circuit A multiple-function circuit is one which generates a plurality of function outputs for a certain combination of inputs. Figs. 13a and 13b show an example of a 2-input 4-valued multiple-function truth table, and Fig. 14 illustrates an example of a multiple-function circuit for executing the functions of the truth table. In Fig. 13a, four function values are illustrated in the form of a matrix for combinations of input variables x and y. These function values indicate values of the functions f1, f2, f3 and f4 shown in Fig. 13b. In Fig.
14, there are provided four row lines (output lines) connected to output terminals 35 - 38 for outputting the four functions f1 - f4. The row lines intersect the column lines from the memory array 21. The output line of the first row corresponding to function f1 is connected to the 1st through 4th column lines, the output line of the second row corresponding to function f2 is connected to the 5th through 14th column lines, the output line of the third row corresponding to function f3 is connected to the 15th through 28th column lines, and the output line of the fourth row corresponding to function f4 is connected to the 29th through 42nd column lines. Nodes are formed at the points where these connections are made. These four row lines and nodes construct a sum array 24. The four column lines connected to the output row line corresponding to function f1 in the sum array have switching elements controlled by the outputs of the decoders 31, 32 in the AND array 22, and nodes with the three row lines extending from the voltage signal sources 11A - 13A in the memory array 21. These switching elements and nodes are programmed in advance in the manner described above. It will be understood that when a certain combination of inputs x and y is applied to the circuit shown in Fig. 14, there will be obtained signals representing the four different functions f1 - f4 indicating the results of operating on the inputs x, y in accordance with the truth table of Fig. 13a. With a multiple-function circuit of this type, it is required that the sum array 24 be programmed in advance in addition to the memory array 21 and AND array 22. It is possible to reduce the number of column lines in the circuit of Fig. 14. For example, the 2nd column line and 40th column line are both for generating output signals of value 3 when x=2, y=1, and these signals appear at output terminals 35, 38, respectively.
Accordingly, if the 2nd column line and the 4th row line, which is connected to the output terminal 38, are connected together at the node indicated by B, output signals exactly the same as those mentioned above will be obtained at the output terminals 35, 38 even if the 40th column line is eliminated. Thus, for those column lines that generate function outputs of the same value for the same combination of inputs between different functions, it is possible to leave just one line and delete the others. (7) Multivalued ALU Fig. 15 depicts a field-programmable 4-valued 2-input ALU constructed by utilizing the above-described memory array and addressed switch (the decoders and AND array). Portions identical with those described so far are designated by like reference characters. This ALU has two input signals, namely x and y, and is capable of performing four different arithmetic operations. The operations or the results thereof are indicated by f1, f2, f3, f4. Since performing an operation is the same as finding the value of the function representing the operation, the aforementioned f1 - f4 shall be referred to as functions in the following description when appropriate. In the arrangement of Fig. 15, the memory array 21 is programmed to have truth tables of the four different functions f1 - f4. An AND array portion is provided with respect to the inputs x and y, just as illustrated in Fig. 2, for the memory array portion of each function. The aggregate of these AND array portions constructs the AND array 22. All column lines of the AND array 22 are connected to the single output line 5. A select terminal 37 is provided in order to select the aforementioned four operations or functions and has a 4-valued select signal applied thereto. The select terminal 37 is connected to the input side of the 1-of-4 decoder 36, from which four control lines emerge. Any one of the control lines designated by the inputted 4-valued select signal attains the H level.
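The selection behavior just described can be modeled as follows; the four programmed operations are illustrative choices, not the functions of any figure:

```python
# Sketch of the Fig. 15 ALU behavior: the memory array holds four truth
# tables f1 - f4 and the select signal s picks which one drives the output.

R = 4

def make_alu(tables):
    """tables: a list of four dicts {(x, y): value}, one per function."""
    def alu(x, y, s):
        return tables[s][(x, y)]
    return alu

# Programming the ALU with four example operations (illustrative choices):
tables = [
    {(x, y): max(x, y) for x in range(R) for y in range(R)},    # f1: MAX
    {(x, y): min(x, y) for x in range(R) for y in range(R)},    # f2: MIN
    {(x, y): (x + y) % R for x in range(R) for y in range(R)},  # f3: mod-4 sum
    {(x, y): (x * y) % R for x in range(R) for y in range(R)},  # f4: mod-4 product
]
alu = make_alu(tables)
```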
Each column line is provided with a single switching element between the AND array 22 and memory array 21. These switching elements are controlled by the control lines of decoder 36 that designate the corresponding function and construct a selection array 25. The latter, it should be noted, can be provided between the AND array 22 and the output line 5 rather than in the position shown in Fig. 15. Thus, the inputs x, y and the select signal S are applied to the ALU of Fig. 15. The arithmetic operation selected by the select signal S is executed for the inputs x, y, and the results of the operation are obtained at the output terminal 4 via the output line 5. It is possible to adopt an arrangement in which the ALU is switched between the current and voltage modes of operation, as shown in Fig. 2, and the ALU can be equipped with the decoder 44 and switching elements 40 - 43 for programming. The ALU can also be converted into an ALU of the mask-programmable type. In accordance with the present invention, the ALU can easily be expanded to handle three or more inputs and multiple values other than four. It is possible for the multivalued ALU of Fig. 15 to be fabricated on a single chip by adopting IC techniques, in which case the chip would appear as shown in Fig. 16. The minimum number of pins necessary is ten, namely input pins for the x and y inputs, an input pin for the select signal S, an output pin for the operational result f, current input pins for inputting currents indicative of the four logic values 0 through 3, a pin for applying the operating voltage, and a pin for the ground terminal. If necessary, a pin for the programming terminal 6 and a pin for the current/voltage mode changeover signal input terminal 7 (both shown in Fig. 2) can be added. Fig. 17 illustrates an example in which the addressed switch and selection array of the multivalued ALU shown in Fig. 15 are realized by bilateral T-gates.
Sixteen bilateral T-gates 71 controlled by the input x are provided, each having four different column lines of memory array 21 connected to its input side. Four bilateral T-gates 72 controlled by the y input are also provided, each having the output lines of four different bilateral T-gates 71 connected to its input side. The output lines of the bilateral T-gates 72 are connected to the input side of a bilateral T-gate 73 controlled by the selection signal S. The bilateral T-gate 73 has an output line connected to the output terminal 4. Thus, it is possible to construct a multivalued ALU by using bilateral T-gates. Fig. 18 depicts an example of a 4-valued 2-input ALU for 16 functions constructed by using five of the single-chip ALUs for four functions illustrated in Fig. 16. In Fig. 18, four 4-function single-chip ALUs 75 are connected in parallel and are programmed in advance to have a total of 16 different functions. The inputs x, y are applied to the respective input terminals 1, 2 of these ALUs, and currents representing the logic values of 0 - 3 are applied to current source terminals. The 16 functions can be selected by a 4-valued 2-digit signal S1, S0. The lower order digit (i.e. a digit having a lower significance) S0 of the select signal S1 S0 is applied to the selection terminal (designated by numeral 37a in Fig. 18) of each ALU 75. As a result, the four ALUs 75 output four functions F1 - F4 selected by the signal S0. Any one of these functions F1 - F4 is selected by an additional single ALU 76 to which the signal S1 of the higher digit (i.e. a digit having a higher significance) is applied. The ALU 76 is one type of bilateral T-gate, and a bilateral T-gate can of course be used in place of the ALU 76. Fig. 19 illustrates the specific connections of the ALU 76 acting as a bilateral T-gate which selects one of four inputs.
Rather than currents representing logic values, currents indicative of the functions F1 - F4 outputted by the abovementioned ALUs 75 are inputted to respective ones of the row lines of memory array 21. Only the 1st through 4th column lines controlled by the control line connected to the 0 output terminal of decoder 32 and the control line connected to the 0 output terminal of decoder 36 are programmed so as to have a node with one each of the four row lines in memory array 21. Further, the input sides of the decoders 32, 36 are connected together and to a terminal 37b to which a signal of logic value 0 is applied. Accordingly, an H-level control signal appears at the 0 output terminals of decoders 32, 36 to turn on the switching elements corresponding to the 1st through 4th column lines. The select signal S1 of the higher order digit is applied to the decoder 31 and selects any one of the 1st through 4th column lines. As a result, any one of the functions F1 - F4 inputted to the memory array 21 is delivered to the output terminal 4. An ALU capable of implementing 16 different functions is thus realized. Thus, almost all functions currently defined can be covered. (8) Multivalued MAX/MIN function circuit A multivalued MAX function and multivalued MIN function are expressed by the aforementioned Eqs. (1) and (2), respectively. A multivalued 2-variable MAX operation involves selecting whichever of the variables x, y has the larger value and adopting the larger value as the result of the operation. A multivalued 2-variable MIN operation involves selecting whichever of the variables x, y has the lesser value and adopting the lesser value as the result of the operation. Fig. 20 depicts a circuit for performing both the MAX and MIN operations.
The circuit, denoted by reference numeral 85, has two input terminals for signals representing the variables x and y, an input terminal for a flag input Fin, an output terminal for the result A of the MAX operation, an output terminal for the result I of the MIN operation, and an output terminal for a flag output Fout. The flag output Fout is indicative of the size relationship of the inputs x and y; Fout = 0, 1, 2 when x = y, x < y, x > y, respectively. The input terminal for the flag input Fin is for introducing the flag output Fout of a preceding stage to the flag input of the next circuit stage in a case where MAX/MIN function circuits of this type are cascade-connected. The MAX/MIN circuit shown in Fig. 20 can be realized by modifying the multiple-function circuit of Fig. 14 into a 3-input circuit. It can also be realized as a portion of the multivalued ALU shown in Fig. 15. In the latter case, it would be possible to select either the MAX output A or MIN output I by the select signal S. As shown in Fig. 21, a plurality of MAX/MIN circuits 80 - 83 are cascade-connected by connecting the flag output of one stage to the flag input of the next. This makes it possible to perform MAX and MIN operations involving plural-digit multivalued variables (x3 x2 x1 x0) and (y3 y2 y1 y0). The initial flag input Fin0 is 0. The last flag output Fout indicates which of the abovementioned 4-digit multivalued variables is larger. The truth tables of these MAX/MIN multivalued functions for a case where four values are involved are as shown in Fig. 22. For Fin = 0, the output A or I indicates which of the inputs x, y is larger or which is lesser. For Fin = 1 (i.e. y > x), the output A represents the value of the input y as such and the output I represents the value of input x as such. Further, Fout = 1. Fig. 23 is a connection diagram of a 4-valued MAX/MIN function circuit constructed using the truth tables of Fig. 22.
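The cascaded MAX/MIN behaviour can be sketched as a per-digit function that either decides the comparison or passes through a decision already made. This is a software model of the truth-table behaviour described above, under the assumption (not stated explicitly in the text) that the flag propagates from the most significant digit downward, which is what makes the multi-digit outputs come out right; the flag convention Fout = 0, 1, 2 for x = y, x < y, x > y is taken from the text, and the function names are ours.

```python
def max_min_stage(x, y, fin):
    """One MAX/MIN stage: returns (A, I, fout) for one pair of digits.
    fin/fout: 0 = equal so far, 1 = y larger, 2 = x larger."""
    if fin == 1:            # y already known larger: pass y to A, x to I
        return y, x, 1
    if fin == 2:            # x already known larger
        return x, y, 2
    if x > y:               # fin == 0: this digit decides
        return x, y, 2
    if x < y:
        return y, x, 1
    return x, y, 0          # digits equal: still undecided

def max_min_cascade(xs, ys):
    """Digit lists, MOST significant digit first; the initial flag input is 0."""
    fin, A, I = 0, [], []
    for xd, yd in zip(xs, ys):
        a, i, fin = max_min_stage(xd, yd, fin)
        A.append(a)
        I.append(i)
    return A, I, fin
```

For example, with x = (1, 3) and y = (3, 1) in radix 4 (values 7 and 13), the cascade yields A = (3, 1), I = (1, 3) and a final flag of 1, i.e. y is the larger.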
Though nodes and switching elements are both indicated by the black circles in the diagram, the general features of this arrangement will be readily understood by comparing it with the multifunction circuit of Fig. 14. This arrangement is slightly different from that of Fig. 14 in that there are three inputs, namely x, y and Fin; there are three outputs, namely A, I and Fout; and current sources are used as the signal sources, the direction of these currents being opposite to that of those shown in Fig. 2. Fig. 24 shows a simplification of the circuit depicted in Fig. 23. This is a mask-programmable type circuit as mentioned above. (9) Multivalued full adder circuit Fig. 25 illustrates a block of a full adder circuit FA, the relationship between the input and output thereof, and an arithmetic formula expressing full addition. The inputs to the circuit are the variables x, y and a carry-in Cin from the lower order digit. The outputs are the sum S and a carry Cout to the higher order digit. Fig. 26 depicts a circuit for adding 4-digit variables (x3 x2 x1 x0) and (y3 y2 y1 y0), as well as an arithmetic formula representing this addition. Four circuits FA0, FA1, FA2, FA3 equivalent to the full adder circuit block of Fig. 25 are serially connected in such a manner that Cout of the lower order digit block is connected to Cin of the higher order digit block. Fig. 28 is a detailed connection diagram of a full adder circuit programmed on the basis of these truth tables. For four values, the carry-in Cin takes on only two values, namely 0 and 1. Accordingly, two of the four column lines connected to the output side of the decoder for Cin in the AND array of Fig. 28 are meaningless. In addition, since only two outputs exist, namely S and Cout, the remaining output line of the three output lines is unnecessary in actual practice. (10) Multivalued full subtracter circuit Fig.
30 illustrates the relationship between the inputs and outputs of a full subtracter circuit FS of a first type, as well as an arithmetic formula of full subtraction. The inputs to the circuit are the variables x, y and a loan Bin to a lower order digit. The outputs are the difference D and a borrow Bout to a higher order digit. Bin and Bout take on a value of 0 or 1. The circuit is effective only when x > y holds. Fig. 32 illustrates the truth tables of full subtraction for a case r = 4, i.e. for a case where there are four values. Fig. 31 shows a circuit constructed by serially connecting four full subtracter circuits FS0, FS1, FS2, FS3, each of which has a function the same as that of the full subtracter circuit of Fig. 30, in such a manner that Bout of the lower order digit is connected to Bin of the higher order digit. This arrangement is capable of subtracting a 4-digit variable (y3 y2 y1 y0) from a 4-digit variable (x3 x2 x1 x0). The foregoing holds with the proviso (x3 x2 x1 x0) > (y3 y2 y1 y0). Fig. 33 is a detailed connection diagram showing a 4-valued full subtracter circuit, and Fig. 34 is a connection diagram showing a simplification of the circuit of Fig. 33. Fig. 35 shows a special subtracter circuit. If the flag input Fin and loan Bin are not taken into consideration, the larger of the inputs x, y is selected by the MAX circuit, the lesser is selected by the MIN circuit, the selected values are inputted to the full subtracter circuit FS, and a difference output D = x - y (x ≧ y) or D = y - x (y ≧ x) is obtained. This is an absolute difference. If Fin, Bin are taken into consideration, the circuit becomes highly specialized. Fig. 36 shows a circuit obtained by connecting the circuit of Fig. 35 in a four-digit cascade. Fig. 37 is a view illustrating the relationship between the inputs and outputs of a multivalued full subtracter circuit of a second type.
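Arithmetically, one full adder stage of section (9) computes S = (x + y + Cin) mod 4 with Cout = (x + y + Cin) div 4, and one first-type subtracter stage computes D = (x - y - Bin) mod 4 with Bout = 1 whenever x - y - Bin is negative. A minimal software sketch of both cascades (the function names are ours, not the patent's):

```python
R = 4  # radix

def full_add(x, y, cin):
    """One full adder stage: sum digit S and carry-out Cout."""
    t = x + y + cin
    return t % R, t // R

def full_sub(x, y, bin_):
    """One full subtracter stage: difference digit D and borrow-out Bout.
    Python's % already maps a negative t into the range 0..R-1."""
    t = x - y - bin_
    return t % R, (1 if t < 0 else 0)

def add_multidigit(xs, ys):
    """Digit lists, least significant digit first, as in the FA0-FA3 cascade."""
    carry, out = 0, []
    for xd, yd in zip(xs, ys):
        s, carry = full_add(xd, yd, carry)
        out.append(s)
    return out, carry

def sub_multidigit(xs, ys):
    """Digit lists, least significant first; valid, like the first-type
    circuit, only when x >= y overall."""
    borrow, out = 0, []
    for xd, yd in zip(xs, ys):
        d, borrow = full_sub(xd, yd, borrow)
        out.append(d)
    return out, borrow
```

Note that the carry and borrow never exceed 1 in radix 4 (3 + 3 + 1 = 7 still carries only 1), which is why two of the four Cin column lines in Fig. 28 are meaningless.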
The circuit can be applied in a case where either the input x or y is the larger and is capable of outputting either a + or - sign representing the result of a magnitude comparison. The sign is represented by a flag F (Fin, Fout). The flag takes on the value 2 when x > y holds, the value 0 when x = y holds, and the value 1 when x < y holds. The flag input Fin is an input from a lower order digit, and the flag output Fout is an output to a higher order digit. The other input and output are the same as those depicted in Fig. 30. Fig. 39 illustrates the truth tables of the above-described full subtracter circuit in a case where there are four values. Fig. 38 depicts a subtracter circuit which performs a 4-digit subtraction (x3 x2 x1 x0) - (y3 y2 y1 y0) using the full subtracter circuit shown in Fig. 37. A determination is made as to which of (x3 x2 x1 x0), (y3 y2 y1 y0) is larger or smaller, or as to whether they are the same, by the MAX/MIN circuits for four digits shown in Fig. 21, and the results of the determination are represented by the flag output Fout of the circuit MAX/MIN3. The four full subtracter circuits FS0 - FS3 are cascade-connected by connecting the lower order digit Fout with the higher order digit Fin and the lower order digit Bout with the higher order digit Bin. Fout of the circuit MAX/MIN3 is inputted to the full subtracter circuit FS0 of the least significant digit as Fin0. Fout of circuit FS3 of the most significant digit represents the sign S of the result of the final subtraction operation. (11) Multivalued full multiplier circuit The inputs and outputs of a multivalued full multiplier circuit are shown in Fig. 40. The inputs are the two variables x, y and a carry-in Cin from a lower order digit. The outputs are the product P and a carry Cout to a higher order digit. Fig. 42 shows the truth tables of a full multiplier circuit in a case where there are four values. Fig.
43 illustrates the details of the connections programmed using these truth tables, and a simplified version of this circuit is depicted in Fig. 44. Fig. 41 illustrates an example of a 4-digit × 1-digit full multiplier circuit constructed using four of the abovementioned multiplier circuits. The circuit is formed by cascade-connecting 1-digit full multiplier circuits Mul0 - Mul3 in such a manner that Cout of the lower order digit is connected to Cin of the higher order digit. Since the operation (x3 x2 x1 x0) × y0 is performed, the inputs to the circuits Mul0 - Mul3 are x0, y0; x1, y0; x2, y0; x3, y0, respectively. The results of the multiplication operation are represented by (Cout3 P3 P2 P1 P0), using the products P0 - P3 of the multiplier circuits Mul0 - Mul3 and the carry Cout3 of circuit Mul3. Cout3 may be replaced by P4 in the results of the multiplication. A multiplication between a plural-digit variable and a plural-digit variable may be achieved by repeating multiplications between the plural-digit variable and a single digit and successively adding the results of these multiplication operations digit by digit. However, it is also possible to multiply a plurality of digits by a plurality of digits at one time. An example of a circuit for executing such a multiplication operation is illustrated in Fig. 45 in the form of a multiplier circuit for multiplying a 4-digit variable (x3 x2 x1 x0) by a 2-digit variable (y1 y0). The circuit of Fig. 45 comprises circuits Mul00 - Mul03 for performing the operation (x3 x2 x1 x0) × y0, circuits Mul10 - Mul13 for performing the operation (x3 x2 x1 x0) × y1, and circuits FA0 - FA4 for adding the corresponding digits of the outputs of the above circuits. The two multiplier circuits Mul10, Mul01 and the adder circuit FA0 enclosed by the dashed lines in Fig. 45 can be consolidated in the form of a 7-input 4-output circuit, as shown in Fig. 46, and it is possible for this circuit to be programmed in accordance with the corresponding truth tables. (12) Multivalued full divider circuit Fig.
47 shows the relationship between the inputs and outputs of a multivalued full divider circuit FD. The inputs to the circuit are the variables x, y and a borrow Bin from a higher order digit, and the outputs are the quotient Q and remainder R. The truth tables of a 4-valued full divider circuit are shown in Fig. 49. Fig. 50 illustrates an example of the full divider circuit connections programmed using these truth tables, and Fig. 51 shows an example of the connections in simplified form. Fig. 48 illustrates a circuit in which four full divider circuits FD0 - FD3, the construction whereof is identical with that of the above-described full divider circuit, are cascade-connected for dividing a 4-digit variable (x3 x2 x1 x0) by a 1-digit variable y0. The remainder R of an upper order digit is inputted as the borrow Bin of a lower order digit. Though the one inputs of these circuits are x0, x1, x2, x3, the other inputs are the same, namely y0. Performing the division operation gives the result (Q3 Q2 Q1 Q0), with a remainder of R0. If a plural-digit variable is to be divided by a plural-digit variable, it is preferred that a circuit capable of performing the division at one time be constructed beforehand in accordance with truth tables conforming to the operation. Fig. 52 illustrates the concept of a circuit for dividing a 4-digit variable (x3 x2 x1 x0) by a 2-digit variable (y1 y0). The circuit is composed of four full divider circuits FD0 - FD3 connected in cascade form, each full divider circuit having five inputs and three outputs. Specifically, the inputs are xi, the 2-digit variable y1, y0 and the 2-digit borrows Bin1, Bin0, and the outputs are the quotient Qi and the 2-digit remainders R1, R0 (i = 0 - 3). Truth tables of a circuit of this kind are somewhat complicated but can be prepared with ease.
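The multiplier and divider stages reduce to similar digit arithmetic: one full multiplier stage computes P = (x·y + Cin) mod 4 with a carry Cout = (x·y + Cin) div 4 that is itself multivalued (it can reach 2 in the radix-4 cascade), and one full divider stage performs a step of ordinary long division on the partial dividend Bin·4 + x. A sketch of both single-digit cascades; the function names are ours:

```python
R = 4  # radix

def full_mul(x, y, cin):
    """One full multiplier stage: product digit P and multivalued carry Cout."""
    t = x * y + cin
    return t % R, t // R

def full_div(x, y, bin_):
    """One full divider stage: quotient digit Q and the remainder that is
    passed to the next lower digit as its Bin."""
    t = bin_ * R + x
    return t // y, t % y

def mul_by_digit(xs, y):
    """Digit list LEAST significant first, multiplied by a single digit y."""
    carry, out = 0, []
    for xd in xs:
        p, carry = full_mul(xd, y, carry)
        out.append(p)
    return out, carry

def div_by_digit(xs, y):
    """Digit list MOST significant first, divided by a nonzero digit y,
    matching the remark that an upper digit's remainder feeds a lower digit."""
    rem, out = 0, []
    for xd in xs:
        q, rem = full_div(xd, y, rem)
        out.append(q)
    return out, rem
```

For example, (3 3 3 3) × 3 in radix 4 (255 × 3) gives the digits (1, 3, 3, 3) with a final carry of 2, and (1 2 1 0) ÷ 3 (that is, 100 ÷ 3) gives (0 2 0 1) remainder 1.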
(13) CMOS multivalued logic function circuit and CMOS multivalued ALU In the above-described embodiments, n-channel MOSFETs mainly are used as the switching elements in the AND array 22 and selection array 25, though these can be replaced by p-channel MOSFETs. When a CMOSFET is employed, however, it is possible to achieve higher speed and prevent the occurrence of errors. A circuit using a CMOS will now be described. Figs. 53a, 53b illustrate a switching operation using an n-channel MOS, in which Fig. 53a is a circuit diagram and Fig. 53b a waveform diagram. In Fig. 53a, a switching element comprising an n-channel MOSFET QN is connected between a power supply E and an output terminal (output voltage Vo), and the output terminal is connected to ground via a resistor R having a large resistance value. The FET QN has its on/off action controlled by a control signal Vc. When the control voltage Vc is at the H level (e.g. 5 V), the FET QN is turned on and an H-level output voltage Vo appears. When the control voltage Vc falls to the L level (0 V), the FET QN turns off and the output Vo also falls to the L level (0 V). In the switching operation, the rise in the output voltage Vo when the FET QN turns on is extremely rapid, as shown in Fig. 53b. However, the power supply voltage E is voltage-divided by the resistance of the FET QN and the resistor R at conduction of the FET, so that the output voltage Vo assumes a value lower than that of the voltage E. This is a source of error and, while presenting no particular problems in 3- or 4-valued circuits, is a major concern when circuits involve on the order of ten values. Furthermore, when the FET QN turns off, an electric charge stored in the substrate-source capacitance C of the FET QN is discharged through the resistor R. The problem that results is a very slow response. Figs. 54a and 54b illustrate a solution to the above-mentioned problems achieved by using a CMOSFET, in which Fig. 54a is a circuit diagram and Fig.
54b is a waveform diagram. In Fig. 54a, a p-channel MOSFET Qp is connected between the voltage source E and output terminal, and an n-channel MOSFET QN is connected between the output terminal and ground. The FETs Qp, QN are both controlled by the control voltage Vc. The control voltage Vc is bipolar (e.g. +5 V to -5 V), as shown in Fig. 54b. Since the FET Qp is off and the FET QN is on when the control voltage Vc is at the H level (+5 V), the output voltage Vo is at the L level (0 V) at such time. Conversely, when the control voltage Vc attains the L level (-5 V), FET Qp turns on and FET QN turns off. As a result, the power supply voltage E appears at the output terminal, so that the output voltage Vo attains the H level (E). Thus, the switching circuit shown in Fig. 54a has a quick response (e.g. several nsec) at both rise and decay and is error-free. This makes it possible to construct a circuit for a high radix such as 10. Another advantage is that the output resistance is low since either FET Qp or FET QN is conducting at all times. Though the voltage relationship would differ, it goes without saying that the FETs Qp, QN in Fig. 54a may be interchanged. Fig. 55 illustrates a multivalued logic function circuit, which corresponds to the circuits shown in Fig. 2 and the like, constructed by utilizing the CMOSFET-based switching circuit depicted in Fig. 54a. Only a portion of the circuit is shown. A p-channel MOS AND array 22A and an n-channel MOS AND array 22B are provided in place of the aforementioned AND array 22. In the p-channel MOS AND array 22A, three p-channel MOSFETs (e.g. Qpx, Qpy, Qpz) are serially connected in each column line. These FETs are controlled by those outputs of the decoders 31, 32, 33 that arrive from the programmed connections. Similarly, in the n-channel MOS AND array 22B, three n-channel MOSFETs (e.g. Qnx, Qny, Qnz) are serially connected in each column line.
These FETs are controlled by the decoder outputs applied to the corresponding p-channel MOSFETs of the p-channel MOS AND array 22A. One end of each column line of the n-channel MOS AND array 22B is grounded, and the other end is connected to the output line 5. One end of each column line of the p-channel MOS AND array 22A is connected to the output line 5, and the other is connected by a program to one row line of the memory array 21. The decoders 31 - 33 are for outputting the abovementioned bipolar control signals. For example, when the input x is 0, the 0 output terminal of decoder 31 delivers -5 V and the other output terminals (1, 2, 3) deliver +5 V. Voltage sources 11A - 13A (e.g. 1 V, 2 V, 3 V, respectively) are employed as the signal sources. The portion of the circuit shown in Fig. 55 is so adapted that when x = 0, y = 2, z = holds, a voltage signal (2 V) representing a function value of logic value 2 is generated at the output terminal. A multivalued ALU of the kind shown in Fig. 15 can also be readily constructed by using a CMOS switching circuit. For example, a 2-input 4-valued ALU can be constructed as follows using the arrangement of Fig. 55: The decoder 31 (36) is adopted for the select signal S, and the decoders 32 (31), 33 (32) are adopted for the inputs x, y. Furthermore, the p-channel MOS AND array 22A is functionally divided into a p-channel MOS AND array (22a) and a p-channel MOS select array (25a). Similarly, an n-channel MOS AND array (22b) and an n-channel MOS select array (25b) are constructed by the n-channel MOS AND array 22B. It will be clear from a comparison with Fig. 15 that portions in Fig. 55 identical with or related to those shown in Fig. 15 are designated by like or related reference characters (characters with the suffix a or b attached) enclosed by parentheses. With the circuit portion shown in Fig.
55, the function (operation) f1 is selected and a signal representing the operational result f1(x,y) = f1(2,3) = 2 (V) appears at the output terminal 4. (14) Multivalued binary hybrid circuit In a case where the memory array 21 is to be used as a simple multivalued ROM, it will suffice if the input signals x, y, z of the addressed switch 23 are binary rather than multivalued signals, as is shown in Fig. 56. In other words, the input signals x, y, z should take on values of 0, 1. In such case a decoder 86 contained in the addressed switch 23 would be a well-known binary decoder. A circuit of this type is referred to as a multivalued binary hybrid circuit.
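The hybrid arrangement can be pictured as a lookup table whose address bits are binary but whose stored words are multivalued. A trivial sketch; the stored values below are an arbitrary example, not a programming given in the text:

```python
# Multivalued binary hybrid ROM: binary address bits (x, y, z), 4-valued words.
rom = {
    (0, 0, 0): 2, (0, 0, 1): 0, (0, 1, 0): 3, (0, 1, 1): 1,
    (1, 0, 0): 1, (1, 0, 1): 3, (1, 1, 0): 0, (1, 1, 1): 2,
}

def read(x, y, z):
    """The inputs take on only the binary values 0 and 1, while the word
    read out is a 4-valued logic value."""
    return rom[(x, y, z)]
```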
Fields Industrial Optimization Seminar

The inaugural meeting of the Fields Industrial Optimization Seminar took place on November 2, 2004. This series will meet once a month, on the first Tuesday, in the early evening. Each meeting will comprise two related lectures on a topic in optimization; typically, one speaker will be a university-based researcher and the other from the private or government sector. We welcome the participation of everyone in the academic or industrial community with an interest in optimization -- theory or practice, expert or student. Please subscribe to the Fields mail list to be informed of upcoming seminars. UPCOMING SEMINARS: Seminars start at 5:00 pm at the Fields Institute, 222 College Street, Room 230. The Spring seminars will include seminars on: • Financial Optimization • Optimization of Energy Systems • Chemical Process Optimization • Convex Optimization in Electrical Engineering June 5, 2007 Christine A. Shoemaker, Cornell University Optimization, Calibration, and Uncertainty Analysis of Multi-modal, Computationally Expensive Models with Environmental Applications Many important problems in engineering and science require optimization of computationally expensive (costly) functions. These applications include calibration of simulation model parameters to data and optimizing a design or operational plan to meet an economic objective. With costly functions (like nonlinear systems of PDEs, partial differential equations), this optimization is made difficult by the limited number of model simulations that can be done because each simulation takes a long time (e.g. an hour or more). The optimization problem is even more difficult if it has multiple local optima, thereby requiring a global optimization algorithm.
Our new algorithms use function approximation methods and experimental design to approximate the objective function based on previous costly function evaluations. In our optimization algorithm, function approximation is combined with metrics based on the locations of previous costly function evaluations in order to select iteratively the next costly function evaluation. We have different algorithms for unimodal and multimodal functions, both of which have theorems of convergence to the global minimum, as will be described. Numerical algorithm comparisons will be presented for test functions and for an environmentally based partial differential equation model that requires 1.5 hours to run for each simulation. This nonlinear model (based on fluid mechanics and chemical reactions) describes the fate and transport of water and pollutants in a groundwater aquifer. The optimization is used for calibration of the model by selecting the parameter values (decision variables) that best fit measured data from a military field site in Florida. The parameter surface is multi-modal, so this is a global optimization problem. The results indicate that the Regis and Shoemaker (2006a) method generally gives better results for global optimization test problems and the environmental model than alternative methods when the number of model simulations is limited. It is especially effective at dimensions higher than 6. Related parallel algorithms will also be briefly discussed (Regis and Shoemaker, 2006b). Working with David Ruppert's statistics group, we have also expanded the use of function approximation to Bayesian analysis of uncertainty for costly functions. In this NSF project, we are combining optimization for calibration with an assessment of the uncertainty in calibrated parameter estimates and in calibrated model output based on input data.
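The select-next-evaluation idea can be illustrated with a deliberately simplified sketch: a cheap interpolant built from past evaluations, plus a distance bonus that rewards candidates far from already-sampled points. This is not the Regis-Shoemaker algorithm (which uses radial basis function surrogates and carefully designed candidate weighting); the inverse-distance surrogate, the test function and all names below are our own illustrative assumptions.

```python
import math
import random

def expensive_f(x):
    """Stand-in for a costly simulation (a real one might take hours)."""
    return (x - 0.3) ** 2 + 0.1 * math.sin(20 * x)

def surrogate(x, pts):
    """Inverse-distance-weighted interpolant of past evaluations --
    a crude stand-in for a radial basis function surrogate."""
    num = den = 0.0
    for xi, fi in pts:
        d = abs(x - xi)
        if d < 1e-12:
            return fi
        w = 1.0 / d ** 2
        num += w * fi
        den += w
    return num / den

def propose(pts, w_dist, n_cand=200):
    """Score random candidates by surrogate value minus a bonus for being
    far from evaluated points; return the lowest-scoring candidate."""
    best_x, best_score = None, float("inf")
    for _ in range(n_cand):
        x = random.random()
        dmin = min(abs(x - xi) for xi, _ in pts)
        score = surrogate(x, pts) - w_dist * dmin
        if score < best_score:
            best_x, best_score = x, score
    return best_x

random.seed(0)
pts = [(x, expensive_f(x)) for x in (0.0, 0.5, 1.0)]   # initial design
for k in range(15):                                    # budget of 15 extra runs
    w = 1.0 if k % 2 == 0 else 0.0                     # alternate explore/exploit
    x = propose(pts, w)
    pts.append((x, expensive_f(x)))
x_best, f_best = min(pts, key=lambda p: p[1])
```

The alternation between a large and a zero distance weight mimics, very loosely, the balancing of global exploration against local refinement that the stochastic response-surface methods formalize.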
Numerical results for an environmental PDE problem based on a hypothetical chemical spill demonstrated good accuracy and a 60-fold reduction in costly simulations over that required for conventional MCMC analysis for Bayesian uncertainty analysis. All this research has been done jointly with Rommel Regis and was funded by NSF. Parts of the seminar will discuss research also done with Dr. Pradeep Mugunthan, Prof. David Ruppert, and Nikolai Blizniouk.

Mugunthan, P., C.A. Shoemaker, and R.G. Regis, "Comparison of Function Approximation, Heuristic and Derivative-based Methods for Automatic Calibration of Computationally Expensive Groundwater Bioremediation Models," Water Resources Research, Vol. 41, W11427, doi:10.1029/2005WR004134, Dec. 2005.
Regis, R.G., and C.A. Shoemaker, "A Stochastic Radial Basis Function Method for the Global Optimization of Expensive Functions," INFORMS Journal on Computing, in press 2006a.
Regis, R.G., and C.A. Shoemaker, "Parallel Radial Basis Function Methods for the Global Optimization of Expensive Functions," European Journal of Operational Research, in press 2006b.
Blizniouk, N., D. Ruppert, C.A. Shoemaker, R.G. Regis, S. Wild, and P. Mugunthan, "Bayesian Calibration of Computationally Expensive Models Using Optimization and Radial Basis Function Approximation," Journal of Computational and Graphical Statistics, requested revision submitted 2007.

Ron Dembo, Zerofootprint The World Needs Optimization NOW! Climate crisis is now a fact of life, whereas only a few years ago it was an abstract concept. We are clearly faced with a massive problem in planning for the future. How do we adapt to a world of finite resources and a surplus of carbon dioxide in the atmosphere? How do we continue to bring the world out of poverty and not destroy it in the process? How do we supply the future energy needs of the world? If ever there was a time when Optimization techniques and principles were needed, it is now.
We will have to minimize the use of fossil fuel, maximize carbon sequestration, optimize our use of energy. It is a long list. This talk will address these subjects and pose more problems than solutions. It is an amazing time for the field but it will require strong leadership and vision for Optimization to achieve the prominence it deserves. May 1, 2007 (audio of talks) Michael C. Ferris, University of Wisconsin Optimization of Gamma Knife Radiosurgery and Beyond The Gamma Knife is a highly specialized treatment unit that provides an advanced stereotactic approach to the treatment of tumors, vascular malformations, and pain disorders within the head. Inside a shielded treatment unit, beams from 201 radioactive sources are focused so that they intersect at the same location in space, resulting in a spherical region of high dose referred to as a shot of radiation. The location and width of the shots can be adjusted using focussing helmets. By properly combining a set of shots, larger treatment volumes can be successfully treated with the Gamma Knife. The goal of this project is to automate the treatment planning process. For each patient, an optimization seeks to produce a homogeneous dose distribution that conforms closely to the treatment volume, while avoiding overdosing nearby sensitive structures. The variables in the optimization can include the number of shots of radiation along with the size, the location, and the weight assigned to each. Formulation of such problems using a variety of mathematical programming models is described, and the solution of several test and real-patient examples is demonstrated. If time allows, we will describe commonalities of this approach with other treatment devices and outline some of the challenges for the future. 
Doug Moseley, Department of Radiation Oncology, University of Toronto The Data Tsunami in Radiation Therapy Recent technical innovations in radiation therapy such as intensity modulated radiation therapy (IMRT) and image-guided radiation therapy (IGRT) have given clinicians the ability to exquisitely sculpt and deliver the therapeutic dose of ionizing radiation to the patient. This revolution in radiotherapy is generating vast amounts of patient specific information in the form of volumetric computed tomography (CT) images. Analyzing and responding to these images presents an onerous task. This talk will briefly introduce the basics of radiation therapy and highlight some of the recent developments and technical challenges in patient care taking place at Princess Margaret Hospital. April 3, 2007 Theme: Energy Markets: Hydroelectric Dispatch Rick Adams, Operating Manager, Niagara Plant Group, Ontario Power Generation. Niagara, from Water to Wire This presentation provides the basics of how the fuel for hydroelectric generators (water) and the product from them (electricity) are managed in the Niagara Peninsula. Hans J.H. Tuenter, Senior Model Developer, Planning & Analysis, Ontario Power Generation Hydroelectric dispatch from a system perspective. This talk places the hydroelectric dispatch problem within the setting of optimally dispatching a fleet of generators, and discusses what types of optimization problems this gives rise to. Ranendra Ponrajah, Senior Market Risk Analyst, Corporate Finance, Ontario Power Generation Optimal unit commitment at hydroelectric facilities. The presentation will focus on the unit commitment problem as it applies to hydroelectric facilities. Topics discussed will include the unit performance relations, characteristics of hydroelectric units, the input parameters and operating constraints.
Typical benefits of unit commitment will be discussed, followed by schematics outlining how one can implement unit commitment in an operational environment. In the latter part of the presentation the theory of Lagrangian relaxation as implemented in the unit commitment algorithm will be presented. This implementation was co-developed with assistance from McGill University. The presentation will conclude with a demonstration of the unit commitment solution. March 6, 2007 Revenue management optimization Gilles Savard, École Polytechnique Montréal Pricing a segmented market subject to congestion* Price optimization fits naturally within the framework of bilevel programming, where a leader integrates within its decision process the reaction of rational customers. In this presentation we address the problem of setting profit-maximizing tolls on a congested transportation network involving several user classes. At the upper level, the firm (leader) sets tolls on a subset of arcs and strives to maximize its revenue. At the lower level, each user minimizes its generalized travel cost, expressed as a linear combination of travel time and out-of-pocket travel cost. We assume the existence of a probability density function that describes the distribution of the value-of-time (VOT) parameter throughout the population. The resulting non-convex infinite-dimensional problem is solved by a hybrid algorithm that alternates between global (combinatorial) and local (descent) phases. We will briefly present real-life applications in the airline and rail industries. *This is joint work with M. Fortin, P. Marcotte and A. Schoeb. Jean-François Pagé, Air Canada Revenue management in the airline industry: where do we come from and where do we go All experts agree on the benefits of revenue management in the airline industry (between 5% and 10% more revenue).
First we will define revenue management in the airline industry from its three major components (scheduling, pricing and seat allocation). We will then revise quickly the classical approach to solve the seat allocation problem and discuss the current issues faced by the industry. We will conclude by presenting the application of the model proposed by Gilles Savard in the context of Air Canada. February 6, 2007 Audio of talks Virginia Torczon, Department of Computer Science, College of William & Mary Generating Set Search Methods for Nonlinear Optimization The nonlinear optimization problems encountered in industrial settings often have features that can foil gradient-based nonlinear optimization methods. These features include a lack of reliable derivative/adjoint information, discretization errors introduced by the computational simulations that define the optimization problem, discontinuities that either are inherent to the optimization problem or are introduced by discretization, and the need for global, versus local, minimizers. Generating set search (GSS) methods are derivative-free optimization techniques that, under ideal circumstances, deliver convergence guarantees comparable to those for gradient-based optimization techniques. This talk will focus on how the analysis for GSS methods accommodates heuristics, such as the use of response surface models, to tackle the less than ideal circumstances often encountered when attempting to solve industrial optimization problems. Don Jones, General Motors A Taxonomy of Global Optimization Methods Based on Response Surfaces This paper presents a taxonomy of existing approaches for using response surfaces for global optimization. Each method is illustrated with a simple numerical example that brings out its advantages and disadvantages. The central theme is that methods that seem quite reasonable often have non-obvious failure modes. 
Understanding these failure modes is essential for the development of practical algorithms that fulfill the intuitive promise of the response surface approach. December 5, 2006 Nicholas G. Hall, The Ohio State University The Coordination of Pricing and Production Decisions This talk has two purposes. First, it provides an overview of current and future research on the coordination of pricing and production decisions. Second, it considers a specific new model for the coordination of pricing and scheduling decisions. In this model, we assume knowledge of a deterministic demand function which is nonincreasing in price. We consider three standard measures of scheduling cost: total weighted completion time of jobs, total weight of jobs delivered late to customers, and overall schedule length, respectively. The objective is to maximize the total net profit, i.e. revenue less scheduling cost, resulting from the pricing and scheduling decisions. We develop computationally efficient optimal algorithms for solving the three coordinated pricing and scheduling problems. We show that much faster algorithms are not possible. We also develop a fast approximation method for each problem. Managerial insights are obtained from a detailed computational study which compares solutions obtained by using various levels of coordination. Our results estimate the value of coordination between pricing and scheduling decisions, and provide tools with which such coordination can easily be achieved. Eddie Hsu, Sr. Developer (Optimization), Workbrain Inc. Vinh Quan (Formerly) Sr. Solutions Engineer, Workbrain Inc. Optimization in Practice: A Retail Labor Scheduling Example Retail labour scheduling can be described as the process of producing optimized timetables for employees.
The challenge faced by store managers is to ensure that there is an optimal staffing level in place to accommodate fluctuating sales volumes and customer traffic, subject to various constraints that involve employee availability and qualifications, compliance with labour laws and regulation, as well as payroll costs and budgetary considerations. The problem can be formulated as a mixed integer programming model and solved using Dash Optimization's solver suite. This seminar will discuss in detail the different constraint requirements for the retail labour scheduling problem and how one can reduce the time for solving such problems in practice. These include (a) Optimizing the use of the modeling language itself (b) Alternative model formulations and (c) Employing a dynamic termination scheme. October 31, 2006 Optimization in the Airline Industry Diego Klabjan, University of Illinois at Urbana-Champaign Recent Advances in the Crew Pairing Optimization The first breakthrough in airline crew pairing optimization occurred approximately ten years ago. Since then several solution methodologies have been developed. The most important driving force is the step from a local view to the global view in how a solution is constructed. Many of these techniques are now successfully used by commercial software vendors. In the talk, modern solution methodologies for crew pairing will be presented. We will also discuss the implications of being able to quickly solve relatively large crew pairing problems to problems in related domains, such as robustness and integration with other problems. Dr. Barry Smith, Sabre Holdings, Inc Revenue Management in the Airline Industry: A Review of Development and Some Bumps along the Way We'll review the development of revenue management and its impact on the airline industry, describing how changes in the airline business environment have driven the evolution of revenue management through multiple generations of approaches and applications.
We'll consider how the current airline environment is likely to shape future revenue management theory and practice. October 3, 2006 R. Baker Kearfott, University of Louisiana at Lafayette A Current Assessment of Interval Techniques in Global Optimization Interval techniques have been used for several decades in global optimization, for □ computing bounds on ranges in branch and bound processes, and □ providing mathematical rigor (to encompass roundoff error and algorithmic approximations), leading to mathematically rigorous bounds on the optimum and guarantees that all optimizers have been bounded. "Classical" interval global optimization methods include bounding the range of the objective with interval arithmetic, various types of constraint propagation, and use of interval Newton methods both to narrow sub-regions and to prove existence and uniqueness of critical points within tight bounds. These methods, effective for some problems, had not been able to tackle a number of important practical problems for which alternate techniques have made headway. However, exciting developments have occurred during the past decade. For example, while foregoing mathematical rigor, some global optimization algorithm developers have used interval arithmetic effectively in selected places, resulting in highly competitive codes. Other theoretical developments, such as the Neumaier / Shcherbina technique for supplying a rigorous lower bound, given an approximate solution to a linear program, allow non-rigorous techniques to become rigorous, thus narrowing the gap between codes without guarantees and codes with guarantees. Refinement of implementation details, particularly in constraint propagation, has also led to advances, in both rigorous and non-rigorous codes. These classical and emerging techniques will be briefly surveyed. The talk will also include some speculation about advances based on literature that seems to have been overlooked to date. Janos D. Pinter, Pinter Consulting Services, Inc.
Adjunct Professor, Dalhousie University Global Optimization: Software Development and Advanced Applications Global optimization (GO) is aimed at finding the absolutely best solution in nonlinear decision models that frequently may have an unknown number of local optima. Finding such solutions numerically requires non-traditional, global scope search algorithms and software. For over two decades, we have been developing GO algorithms followed by software implementations for (C and FORTRAN) compilers, optimization modeling environments (AIMMS, Excel, GAMS, MPL), and the scientific-technical computing platforms Maple, Mathematica, and MATLAB. In this presentation we review the key technical background, followed by software implementation examples and a glimpse at various scientific and engineering applications.
An elephant has a mass of 156,000 kg and a volume of 6 kL. What is the elephant's density? | Socratic
The numbers are not realistic! But see below....
Density is simply mass per unit volume. Typical units are $\mathrm{g\ cm^{-3}}$ or $\mathrm{kg\ m^{-3}}$. 1 cubic metre is equivalent to 1000 litres, so units of "thousands of litres" (kL) and units of cubic metres are numerically identical. So the mass is 156,000 kg, and the volume is 6 $\mathrm{m^3}$. The density is therefore 156,000/6 = 26,000 $\mathrm{kg\ m^{-3}}$. In reality, this is an unrealistic figure though. The density of water is 1000 $\mathrm{kg\ m^{-3}}$ - a human being typically has a density a bit lower, which is why we can swim in water. If an elephant had a density 26 times that of water it would sink like a stone whenever it went into a river! As a comparison, solid lead has a density of around 11,500 $\mathrm{kg\ m^{-3}}$. So whoever set you this question needs to use some slightly more realistic numbers, but you can see the methodology anyway.
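The methodology can be checked with a two-line script (the function name is only for illustration):

```python
def density(mass_kg, volume_kl):
    # 1 kL = 1 m^3, so mass [kg] / volume [kL] is already in kg/m^3
    return mass_kg / volume_kl

print(density(156_000, 6))  # → 26000.0 kg/m^3
```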
Further correspondences between plane piezoelectricity and generalized plane strain in elasticity We consider an anisotropic body bounded by a cylindrical surface. Suppose that the body is infinitely long in the axial direction and is loaded by boundary conditions which do not vary along the generator. We show that our previous correspondence relations between plane piezoelectricity and generalized plane strain in elasticity can be extended to a more general loading situation. This is accomplished by incorporating a constant axial strain and a uniform temperature change in the formulation. Specifically, we show that by setting a linkage of 21 elastic (electroelastic) constants and seven thermal quantities, the deformation of a general anisotropic elastic solid can be characterized by a certain deformation mode of a piezoelectric solid with identical geometry. Applied to inhomogeneous media, they imply that the correspondence also holds for effective tensors. The formulation suggests that, for a solid whose microstructure and fields are invariant with the x3-axis, the derivations can be much clarified if the constitutive equations are properly rearranged. Applied to two-phase fibrous composites, we demonstrate that several known exact relations can be reconstructed in a systematic manner based on the uniform field approach. These include Hill's universal relations, Levin's correspondence results between thermal and mechanical properties, and Rosen and Hashin's formula between specific heat and thermo-mechanical moduli.
[Solved] A building has been purchased by a person at a cost of Rs. 25,000. The useful life of the building is 40 years and the scrap value of the building is Rs. 3,000. Calculate the annual sinking fund (Rs.) at the rate of 5% interest. (Take 1.05^40 = 7.04)
Answer (Detailed Solution Below)
Option 3 : 182
Concept:
The annual sinking fund (I) is given by:
\(I = \frac{Si}{(1 + i)^n - 1}\)
where
S = total amount of sinking fund to be accumulated
i = rate of interest
n = number of years required to accumulate the sinking fund
Calculation:
S = Cost - Scrap value = 25,000 - 3,000 = Rs. 22,000
n = 40 years
i = 5% = 0.05
\(I = \frac{22000 \times 0.05}{(1 + 0.05)^{40} - 1} = \frac{1100}{7.04 - 1} \approx 182.1\)
∴ The annual sinking fund is Rs. 182/-
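The same calculation can be sketched as a short script (the function name is illustrative); using the exact value of 1.05^40 instead of the rounded 7.04 changes the result only in the second decimal:

```python
def annual_sinking_fund(cost, scrap_value, rate, years):
    # I = S*i / ((1 + i)^n - 1), where S is the amount to be accumulated
    s = cost - scrap_value
    return s * rate / ((1 + rate) ** years - 1)

print(round(annual_sinking_fund(25_000, 3_000, 0.05, 40)))  # → 182
```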
1 Introduction
The spin–spin coupling analysis of 1D NMR spectra describes how nuclei are scalarly coupled within molecules. Scalar couplings are important NMR parameters that provide constraints for the building of 3-D molecular structures [1]. The high magnetic fields at which NMR is performed nowadays reduce the probability of strongly coupled homonuclear spin systems, so that first-order analysis is most often sufficient for measuring coupling constants. Each multiplet can therefore be analyzed independently of the others within the same spin system. The automatic extraction of coupling constants is a problem to which solutions have already been proposed ([2–11], and references cited therein). Human interpretation of a multiplet structure relies on the recognition of elementary peak cluster shapes [12]. Automated methods for first order multiplet analysis mimic this process [11]. A method based on time-domain analysis has already been described by our group [13,14]. Its basic principle was later extended to increase its performance. This communication describes the process that is presently implemented in the AUJ (automatic J couplings) computer software (www.univ-reims.fr/LSD/JmnSoft/Auj). AUJ is implemented as a GIFA [15] macro command that performs some pre-processing and calls a binary program whose source file is written in C language. The latter uses a library that contains the truly active part of the AUJ algorithm and that is designed to be invoked from any type of NMR processing software.
2 Process outline
Analysis starts with data point extraction of the user-selected multiplet. The multiplet is then converted to complex time-domain data by inverse Fourier transformation.
Alternatively, a column of a 2-D J-resolved spectrum can also be simply extracted by the user and back-converted to real time-domain data. The AUJ algorithm is able to handle both input types. Complex data is supposed to originate from a non-centered non-symmetrical multiplet and therefore must be transformed to real data (step 1). The ‘log-abs’ algorithm [13,14] is then used (step 2) to produce a 1-D J-spectrum that is then optimized (step 3) and analyzed to produce a first set of coupling constants and multiplicities (step 4). At this stage, J values can be accurate but multiplicities are usually not. The latter are optimized (step 5) and the resulting values used to refine the final J values (step 6). Finally, the centered multiplet (when input data is complex), the reconstructed time-domain data, and the J values are exported back to the calling GIFA macro command and made visible through its graphical user interface, so that an algorithm failure is immediately visible. The ratio of the spectral noise level with the root-mean-square deviation between original and reconstructed multiplets is also a good performance indicator. The analysis process depends on empirically adjusted parameters whose default values, provided in the following paragraphs, can easily be adjusted by means of a single control panel. In practice, the proposed default parameter values fit well with most of the situations that were tested by the authors.
2.1 Step 1: Multiplet centering and symmetrization
Time-domain data is first zero-filled (16 times by default) to obtain a high resolution multiplet by Fourier transformation. Then, a pivot frequency is searched so that frequency reversal around the pivot point leads to a spectrum that looks, at best, identical to the original one. Reversal and comparison are not directly performed on the high-resolution spectrum but on a binary-valued version of it that is built as follows. The real part of the spectrum is first divided by its Euclidean norm.
Each value in the normalized spectrum is replaced by 1 if its value is greater than a given threshold (0.03 by default) and, if not, replaced by 0. The pivot position is selected so that the number of ones is maximum in the point-by-point product of the binary spectrum by its reversed version. The optimum pivot frequency value is used to frequency shift the original time-domain signal by multiplying it by the adequate linear phase ramp function. Frequency symmetrization is achieved de facto by setting the imaginary part of the resulting time-domain data to zero. Centering failure may occur for strongly unsymmetrical multiplets and threshold adjustment may be necessary.
2.2 Step 2: ‘log-abs’ analysis
A first-order multiplet is described in the time domain by:

$$s(t_j) = \bar{s}(t_j) + \epsilon_j \qquad (1)$$

$$\bar{s}(t_j) = A \exp(-t_j/T_2^*) \prod_{i=1}^{N} \left[\cos(\pi \bar{J}_i t_j)\right]^{\bar{n}_i} \qquad (2)$$

in which $t_j$ ($1 \le j \le M$) is the signal measurement time, $\epsilon_j$ the noise, $\bar{s}$ the signal model in the absence of noise, $A$ the signal amplitude, $T_2^*$ the apparent transverse relaxation time, $\bar{J}_i$ ($1 \le i \le N$) the $i$th possible coupling constant value, and $\bar{n}_i$ the associated multiplicity. If $\bar{n}_i = 0$, then $\bar{J}_i$ is not a coupling constant of the multiplet, while $\bar{n}_i = 1$ means that $\bar{J}_i$ gives rise to a doublet, $\bar{n}_i = 2$ means that $\bar{J}_i$ gives rise to a triplet, and so on. The model described by equations (1) and (2) is slightly different from the usual one, because it imposes $N$ (80 by default) predefined $J$ values:

$$\bar{J}_i = \bar{J}_{\min} + \frac{i}{N}\left(\bar{J}_{\max} - \bar{J}_{\min}\right) \qquad (3)$$

so that $\bar{J} \in \left]\bar{J}_{\min}, \bar{J}_{\max}\right]$. However, in this model, $A$, $T_2^*$ and all $\bar{n}_i$ can be found through the ‘log-abs’ transformation. The equation:

$$\log|\bar{s}(t_j)| = \log|A| - \frac{1}{T_2^*}\, t_j + \sum_{i=1}^{N} \bar{n}_i \log\left|\cos(\pi \bar{J}_i t_j)\right| \qquad (4)$$

indicates that the unknowns can be found by a linear fit of $\log|s(t)|$ with a set of $N+2$ basis functions: $1$, $t$, and $\log|\cos(\pi \bar{J}_i t)|$. The data points $s(t_j)$ having the smallest absolute values are those whose logarithm is the most affected by noise.
Only $M'$ ($N \le M' \le M$) $t_j$ values are kept for the ‘log-abs’ analysis, those for which the $|s(t_j)|$ values are the highest. The $M'/M$ ratio is defined by the user (1/2 by default). In order to avoid $t_j$ and $\bar{J}_i$ combinations for which $\cos(\pi \bar{J}_i t_j)$ equals zero, $\bar{J}_{\min}$ and $\bar{J}_{\max}$ are chosen so that $\pi \bar{J}_{\min}$ and $\pi \bar{J}_{\max}$ (respectively 2 and 60 s$^{-1}$ by default) are rational numbers. The function $\bar{n}(\bar{J})$ is called the 1-D multiplet J-spectrum.
2.3 Step 3: Optimization of the 1-D J-spectrum
The crude 1-D J-spectrum does not exploit all available data and is strongly affected by noise. Its refinement is possible through the minimization of the least squares residue $R$:

$$R = \sum_{j=1}^{M} \left( |s(t_j)| - |\bar{s}(t_j)| \right)^2 \qquad (6)$$

considered as a function of $A$, $T_2^*$ and all $\bar{n}_i$. The absolute values in the expression of $R$ are necessary to be able to calculate non-integer powers of $\cos(\pi \bar{J}_i t_j)$. Minimization of $R$ is achieved through an iterative conjugate gradient algorithm. The $\bar{n}_i$ values found at step 2 are not directly used as a minimization starting point. Those that are less than a given threshold (0.05 by default) are replaced by zeros. The threshold value can be increased if the multiplet is not ideal, for example, if it presents a non-Lorentzian lineshape or a strong noise level.
2.4 Step 4: Analysis of the refined 1-D J-spectrum
Each series of contiguous $\bar{n}_i$ values ($i_{\min} \le i \le i_{\max}$) so that all $\bar{n}_i$ are greater than a threshold (0.05 by default) is viewed as a peak in the 1-D J-spectrum. The position of the mass center of the $k$th peak ($1 \le k \le K^*$) is considered as the $J_k$ coupling constant value and the non-integer peak integral, $n_k^*$, rounded to the closest integer value, as the associated $n_k$ multiplicity [14]:

$$n_k^* = \sum_{i=i_{\min}}^{i_{\max}} \bar{n}_i \qquad (7)$$

$$J_k = \frac{1}{n_k^*} \sum_{i=i_{\min}}^{i_{\max}} \bar{n}_i \bar{J}_i \qquad (8)$$

2.5 Step 5: Multiplicity and $T_2^*$ optimization
This step is a grid search of the best integer multiplicities and $T_2^*$ values so that the linear fit residue of $s(t)$ with:

$$s^*(t) = \exp(-t/T_2^*) \prod_{k=1}^{K^*} \left[\cos(\pi J_k t)\right]^{n_k} \qquad (9)$$

is minimum.
The $n_k$ values from step 4 are simply ignored (they were useful to find all $J_k$) and systematically replaced by values drawn from the $[0, n_{\max}]$ interval ($n_{\max} = 4$ by default), while $T_2^*$ values are drawn from a predefined set ({0.1 s, 0.2 s, 0.4 s, 0.7 s, 1.1 s} by default). It often happens that a particular $n_k$ is at best equal to zero. The corresponding $J_k$ value is removed from the set of $J$ values and $K^*$ decreased accordingly.
2.6 Step 6: $J_k$ optimization
The low resolution in the 1-D J-spectrum may lead one to believe that two different $J$ values are identical. Therefore, the multiplet is considered as being produced by the effect of $K$ independent couplings with:

$$K = \sum_{k=1}^{K^*} n_k$$

Each $J_k$ value is perturbed by addition of a small deviation drawn from a random number generator (±0.05 Hz by default). The $J_k$ and $T_2^*$ obtained in step 5 are used as the starting point for a conjugate gradient residue minimization of the linear fit of $s(t)$ with $s^*(t)$. The user is left to decide whether two very close final $J$ values are the same or not. This is a difficult decision when no standard deviation values have been evaluated. As already mentioned in [16], error evaluation is strongly dependent on the noise autocorrelation properties and therefore is beyond the scope of this Communication.
3 Results
The proton NMR spectrum of sucrose in DMSO-d$_6$ was recorded at 500 MHz. The quadruplet-like signal at δ = 3.81 ppm (Fig. 1, left) is obviously more complex than a regular quadruplet. This particular multiplet would clearly be a difficult problem to solve for any method based on peak list analysis.
Fig. 1 The multiplet at δ = 3.81 ppm (TMS as reference) in the $^1$H NMR spectrum of sucrose dissolved in perdeuterated DMSO (left). The multiplet that is reconstructed from the AUJ analysis, with J = 5.37 Hz, 7.04 Hz, 8.14 Hz and $T_2^*$ = 0.1 s (right).
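The reconstructed multiplet of Fig. 1 (right) follows the time-domain model of equation (9). A minimal sketch of such a reconstruction is shown below; only the three J values and $T_2^*$ come from the analysis, while the amplitude, dwell time and number of points are assumed for the illustration:

```python
import numpy as np

# Rebuild the Fig. 1 (right) multiplet: decaying FID modulated by one cosine
# per coupling (all three couplings are doublets here), then Fourier transform.
J = [5.37, 7.04, 8.14]          # Hz, couplings found for the sucrose multiplet
T2 = 0.1                        # s, fitted apparent T2*

t = np.arange(8192) * 2.5e-4    # acquisition times (dwell time assumed: 0.25 ms)
fid = np.exp(-t / T2)
for j in J:
    fid *= np.cos(np.pi * j * t)

# Centered doublet-of-doublets-of-doublets pattern in the frequency domain
spectrum = np.fft.fftshift(np.fft.fft(fid)).real
```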
The 1-D J-spectrum is analyzed as J = 1.47 Hz (0.55), 2.13 Hz (0.54), 5.57 Hz (1.32), 7.20 Hz (0.72), and 8.25 Hz (0.80), where the numbers in parentheses are the non-integer $\bar{n}$ multiplicity values. Multiplicity refinement eliminates the two lowest J values and proposes the remaining ones to correspond to doublets. This behavior is very common, as the not strictly Lorentzian peak shape may be seen by the algorithm as originating from small, non-resolved coupling constants. The final refinement of the J values produces three close but different coupling constants: J = 5.37 Hz, 7.04 Hz, and 8.14 Hz, giving rise to a reconstructed multiplet that is very similar to the original one (Fig. 1, right). A more difficult problem was given to AUJ, the analysis of a multiplet that is simulated on the basis of results presented in [11] for the methine proton of 3-bromo-2-methyl-1-propanol (Fig. 2, left). The multiplet is a quadruplet of quintets with J = 5.43 Hz and 6.76 Hz, respectively. Some computer-generated noise is added to the spectrum and the line width is chosen so that peak clusters are poorly resolved. The 1-D J-spectrum is interpreted as J = 1.10 Hz (0.77), 5.46 Hz (1.24), and 6.69 Hz (1.80). Again, the noise introduces unwanted small coupling constants in the 1-D J-spectrum. The multiplicity optimization step finds the correct result and the final refinement produces $J_1$ = 5.39 Hz, $J_2$ = 5.40 Hz, $J_3$ = 5.41 Hz, $J_4$ = $J_5$ = $J_6$ = 6.78 Hz, $J_7$ = 6.79 Hz, a result that compares well to what is expected.
Fig. 2 Experiment on simulated data, using parameters published in [11] for the methine proton of 3-bromo-2-methyl-1-propanol. Simulated spectrum, using $J_1$ = $J_2$ = $J_3$ = 5.43 Hz, $J_4$ = $J_5$ = $J_6$ = $J_7$ = 6.76 Hz and $T_2^*$ = 0.25 s. Some computer-generated Gaussian noise was added (left). The multiplet that was reconstructed from the AUJ analysis, with $J_1$ = 5.39 Hz, $J_2$ = 5.40 Hz, $J_3$ = 5.41 Hz, $J_4$ = $J_5$ = $J_6$ = 6.78 Hz, $J_7$ = 6.79 Hz, and $T_2^*$ = 0.254 s (right).
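The multiplicity recovery behind these results rests on the linear ‘log-abs’ fit of equation (4). The following is a minimal noiseless illustration, not AUJ's actual code: the coarse J grid is invented and happens to contain the true Fig. 2 couplings, whereas AUJ itself uses N = 80 grid points and noisy data.

```python
import numpy as np

# Simulate the Fig. 2 multiplet in the time domain, then recover multiplicities
# by least-squares fitting log|s(t)| with the basis {1, t, log|cos(pi*J_i*t)|}.
T2 = 0.25
J_true = {5.43: 3, 6.76: 4}                  # quadruplet of quintets
t = (np.arange(2048) + 0.5) * 0.97e-3        # offset grid avoids exact cosine zeros

s = np.exp(-t / T2)
for J, n in J_true.items():
    s *= np.cos(np.pi * J * t) ** n

J_grid = np.array([2.0, 5.43, 6.76, 9.0])    # candidate couplings (Hz), invented
keep = np.abs(s) >= np.median(np.abs(s))     # keep the largest-|s| half (M'/M = 1/2)

# Design matrix with columns 1, t, log|cos(pi*J_i*t)| at the kept points
X = np.column_stack([np.ones(keep.sum()), t[keep]] +
                    [np.log(np.abs(np.cos(np.pi * J * t[keep]))) for J in J_grid])
coef, *_ = np.linalg.lstsq(X, np.log(np.abs(s[keep])), rcond=None)

print(np.round(coef[2:], 3))   # multiplicities at the grid J values, ~ [0, 3, 4, 0]
```

The fitted slope `coef[1]` estimates $-1/T_2^*$, and the multiplicity coefficients vanish at grid values that are not couplings of the multiplet, which is how the 1-D J-spectrum separates true couplings from candidates.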
4 Conclusion
This Communication shows that the AUJ algorithm provides a pertinent way to analyze complex multiplets. The modeling of time-domain data ensures a reliable result on poorly resolved signals, even though non-ideal lineshapes and high noise levels may lead the user to modify the default algorithm parameters. However, it should always be remembered that the best first order analysis algorithm ever written cannot provide a safe and useful result if carried out on a part of a strongly coupled spin system. Further development of AUJ will deal with parameter selection improvement, the interfacing of the algorithm with commercial NMR data processing software, as well as its extension to slices of 2-D NMR spectra, different from J-resolved ones, in order to analyze nearly or fully superimposed multiplets. We thank Dr. Karen Plé for linguistic improvement.
I Switched to the Gnome Desktop I’ve been using SuSE or openSUSE since Caldera Open Linux shut down about 10 years ago. I liked the KDE desktop and YAST (Yet Another Setup Tool). YAST concentrates all the system tools into a common graphic interface, which makes configuring the system easy. I’ve since upgraded my hardware many times and now have a dual monitor display. I have a slide show of photographs I’ve taken as the background image spanning both monitors. KDE3 was a very comfortable desktop. It was much more flexible than Gnome. I had a number of icons scattered around the periphery of the desktop so that I could get to frequently used applications with one click. My panel spanned both screens at the bottom. OpenSUSE 11.1 and 11.2 had early versions of KDE4 which were not ready for prime time. So I stuck with 11.0 and KDE3. I did have Qt4 installed and have converted my programs to run on it. Now openSUSE 11.0 has reached the end of support. With some trepidation, I installed 11.3 with KDE4. I quickly discovered that configuring it was a real PITA. It does not handle multiple screens well, if at all. It will not allow a background image to span the two monitors. It will not let you move icons from one display to the other. Just getting them on the desktop is frustrating. Panels cannot span both displays. Wobbly windows do not improve productivity. Over the years, Gnome has improved a lot. I reinstalled openSUSE 11.3 with the Gnome desktop. I still needed some KDE4 libraries to run KMyMoney, the best open source replacement for Quicken. I do have my pictures in a slide show spanning the two displays and icons scattered around the edge of the desktop. It won’t let a panel span the two displays. So, I put a panel at the bottom of each display. Most of the fixed stuff goes on the second screen.
Need help with complex Antenna Theory assignments – where to go? | Pay Someone To Do Electrical Engineering Assignment Need help with complex Antenna Theory assignments – where to go? On a daily basis I have a large thought. The problem is to understand some things I can: (a) How to understand what a particular thing is? (b) How to use a certain element that is set to c but not set to d. (c) What happens after taking an element? (d) If I wanted to understand what this code was supposed to do, could you help me with a simple example? Concisely, I wondered if something like the following could be of use: f (d*(x*x)?y?c y) = f (d) x / y / (x*y / y)?./ f (d) x / (x*y / y)?./ My questions are now: What if x*y/y /c my site be taken as a finite element? or at least represent a special-purpose element that is set to d but not set to c (therefore set to d?)? What if x*y/y/c are elements? If x*y/y/c takes an element such that y is finite, what if such elements can be taken as an element and not take any other finite elements? Should I be interested in (a) The definition of truth, (b) the different-functional application of truth and truth for cases where truth is equal to falsity? or (c) the structure of truth relation and functional programming, (d) the function-mapping among elements, and (e) the specific application of a set-to-fitness of a function/element? Most of what I have written down is quite confusing (except a few of my many-word comments). Still, I have made it clear that my questions are not a need of any help or query. My first thought when I was asking this is in the context of a function-typing/element-set rule such as f[numeric]. This rule is sometimes useful in code-savvy applications whenever there is a set-to-fitness, though it does not really ensure that the predefined functions and sets that might be given to a particular result (which I do not look at more info “for” and “for” in this context) would work properly. 
On the other hand, knowing that one can take a particular finite element for example (e.g. with the x/y/c sequence with f[numeric] (=(x * y + x)) as a filter sequence, which is not quite what I would say) is sufficient for even finding an element that accepts one of the predefined x/y/c elements, and even non-zero for some of the others, though (at least temporarily) some elements could be not even given them. Consider that you had defined the q*s a function f[Need help with complex Antenna Theory assignments – where to go? After further investigation, I came across a post I viewed several times. The original post was about antenna theory in physics. I decided to go back with some more technical information and try to replicate in two different ways: Steps I needed a simple antenna model that I can scale to deal with real-world complexity issues in different ways without resorting to solutions to a given set of problems. I spent the bulk of the week hoping to find out more about the network model and used the net model example provided in the previous post as an example. As example. Steps 2-4: The antenna model I used for step 2 is a function of my analysis. The results I received were: Simple and noncommutative, it’s unknown if or how our model is actually built over a given network model, provided its physical description is correct or not. While some degree of uncertainty about how the model works is unlikely, based on the antenna model I’ve explored, I have seen it occur approximately 2/3 out of an area. (CDF) Steps 5-12: The task is still: How can we build the model if it’s not well-understood? What methods is there to solve the questions of the antenna theory problem? 
(For example, I've tried for a while to choose without too much thought…) I did all my work using NODESV-style learning; this simple algorithm is quite useful. An initial idea would be to use a filter set in your learning exercise to find the ground truth of your antenna model (same idea as in the previous step). I thought that since MTF is an all-of-your-sensory task, this approach can be quite fun, and workarounds are much more suited for more complex settings. Next, I thought that if the analysis had been similar but the antenna theory didn't measure well in this case, then we can apply the network model example in a somewhat different way. In this part, however, I opted to select a more realistic network model without too much worry about ground truth accuracy, so that we could look at the antenna theory problem a little more closely and pick our antenna theory in this respect. Now I wanted to go further and see if our antenna theory was the correct antenna theory when it's shown to work at the actual level and be more meaningful on the actual level. One of my favorite antennaists' suggestions for more specific network models for more complex tasks was to look at it closer (and in more ways; there's just really not much you can do to take the same approach).

2.1. For your needs

Once I found out what was true about the antenna theory for the given network model case, I went further and further and looked more closely at

Need help with complex Antenna Theory assignments – where to go?

To make your Antenna Theory homework, you can write a simple script, or use the [helpful] MODE file to help you through any complicated mathematics questions.

Step 1: Write the script
Step 2: Set up the text block
Step 3: Create a file named Antenna.txt that will contain a calculator application – written in Java code – or whatever you want.
If you haven't launched your Antenna project today, you should now have a working Antenna application. Here's how to build a program for it:

- Create a file named Antenna.java which should store all of your inputs
- Create a new class called Calculator.java which should be used to calculate the numbers
- Change the name of your class to Calculator.java
- Save the class file in place in your file manager
- Process your file.

Step 4: Use a short list of the inputs, calculate them, and step through this 30-minute program.

Step 5: Make a list that will work by looking at it and making a little selection of input shapes/functions. Next, set up a small program named Calculator.chart that will open a calculator application (check if you have a choice) and can calculate the system inputs based on the inputs. To do that, take a picture of a calculator application, on a Mac, and check if you have a choice. Next, create a small class called Calculator that reads inputs you would like to calculate and changes them so it works whenever you run the program. In the original picture, you can find the inputs on the screen; then it builds a list based on the set of values for inputs and changes the results to some others. On the list you'll then see what the inputs for the functions are, but don't worry about the design of the calculator. Just add the code for the function before running the program, so there is no need to open/close the application.

Step 6: Make a file containing your simple program. Now that you have a file, edit it and add an appended class with name Calculator to it. Now you can build your program again. To do that, simply paste our main program in the main program file using the main program text in the search box.

Step 7: Pick out the most advanced Calc function. To use your Calculator application, start it by running the program in the Calculator app from the menu bar.
Now you’ll be able to take a picture of your Calc function and you can tell it to use this function when you press the “calculate” button. We can see that you put yourself in the “Calculator” position. Let’s change the name to Calc_and then we just
{"url":"https://electricalassignments.com/need-help-with-complex-antenna-theory-assignments-where-to-go","timestamp":"2024-11-10T19:26:13Z","content_type":"text/html","content_length":"145627","record_id":"<urn:uuid:789915e7-d9aa-436c-9d35-0f81b4e4607e>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00428.warc.gz"}
Re: st: qnorm

Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.

From: Maarten Buis <[email protected]>
To: [email protected]
Subject: Re: st: qnorm
Date: Mon, 5 Mar 2012 10:17:02 +0100

On Mon, Mar 5, 2012 at 4:26 AM, amir gahremanpour wrote:
> In one of my lectures about distributions, I was taught that in QQ-plot all points should be within z=+/-2 !, If we have such a definition we should be able
> to calculate p-value for QQ plot, right? visual assessment of qqplot is very subjective !

I would say that the rule +/- 2 is very subjective. Not only is your test subjective, it is also wrong. A normal distribution is a distribution for a variable that can take values from -infinity to +infinity, not -2*standard_deviation to +2*standard_deviation. In fact, we know that if your variable truly follows a normal/Gaussian distribution then we would expect that about 4.5% of the observations should fall outside that range.

A statistical test requires a null hypothesis, and the null hypothesis has to refer to a concrete statistic, so it cannot be "the distribution is normally distributed". In your case the statistic would be the range of your variable and the null hypothesis would be "the range is less than or equal to (-2*standard_deviation, +2*standard_deviation)", which, as I said above, does not make any sense when you want to test for normality.

There are other tests that make more sense, and these look for example for specific values of the skewness (0) and the kurtosis (3), but those focus on very particular aspects of the normal distribution. In that sense statistical tests are more subjective than looking at plots; tests look at some aspects but ignore others, and the choice of which aspect to look at is typically guided by computational convenience rather than a substantive argument.

Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
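Maarten's 4.5% figure is straightforward to verify numerically. The following sketch (mine, not part of the original thread, and in Python rather than Stata) draws a large normal sample and counts the share of points beyond ±2 standard deviations:

```python
import random
import statistics

# For a true normal distribution, 2*(1 - Phi(2)) ~ 4.55% of observations
# lie outside mean +/- 2*sd, so a QQ-plot "rule" that flags any point
# beyond +/-2 would reject genuinely normal data almost every time.
random.seed(1)
n = 100_000
draws = [random.gauss(0, 1) for _ in range(n)]

mu = statistics.fmean(draws)
sd = statistics.stdev(draws)
share = sum(1 for x in draws if abs(x - mu) > 2 * sd) / n

print(f"share outside +/- 2 sd: {share:.3%}")  # close to 4.5%
```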
{"url":"https://www.stata.com/statalist/archive/2012-03/msg00178.html","timestamp":"2024-11-09T14:42:35Z","content_type":"text/html","content_length":"11457","record_id":"<urn:uuid:824190e8-5127-473a-8259-8c1b88bc1e3d>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00791.warc.gz"}
General Chemistry Learning Objectives By the end of this module, you will be able to: • Explain the form and function of a rate law • Use rate laws to calculate reaction rates • Use rate and concentration data to identify reaction orders and derive rate laws As described in the previous module, the rate of a reaction is affected by the concentrations of reactants. Rate laws or rate equations are mathematical expressions that describe the relationship between the rate of a chemical reaction and the concentration of its reactants. In general, a rate law (or differential rate law, as it is sometimes called) takes this form: [latex]\text{rate}=k{\left[A\right]}^{m}{\left[B\right]}^{n}{\left[C\right]}^{p}\dots [/latex] in which [A], [B], and [C] represent the molar concentrations of reactants, and k is the rate constant, which is specific for a particular reaction at a particular temperature. The exponents m, n, and p are usually positive integers (although it is possible for them to be fractions or negative numbers). The rate constant k and the exponents m, n, and p must be determined experimentally by observing how the rate of a reaction changes as the concentrations of the reactants are changed. The rate constant k is independent of the concentration of A, B, or C, but it does vary with temperature and surface area. The exponents in a rate law describe the effects of the reactant concentrations on the reaction rate and define the reaction order. Consider a reaction for which the rate law is: If the exponent m is 1, the reaction is first order with respect to A. If m is 2, the reaction is second order with respect to A. If n is 1, the reaction is first order in B. If n is 2, the reaction is second order in B. If m or n is zero, the reaction is zero order in A or B, respectively, and the rate of the reaction is not affected by the concentration of that reactant. The overall reaction order is the sum of the orders with respect to each reactant. 
If m = 1 and n = 1, the overall order of the reaction is second order (m + n = 1 + 1 = 2). The rate law [latex]\text{rate}=k\left[{\text{H}}_{2}{\text{O}}_{2}\right][/latex] describes a reaction that is first order in hydrogen peroxide and first order overall. The rate law [latex]\text{rate}=k{\left[{\text{C}}_{4}{\text{H}}_{6}\right]}^{2}[/latex] describes a reaction that is second order in C[4]H[6] and second order overall. The rate law [latex]\text{rate}=k\left[{\text{H}}^{+}\right]\left[{\text{OH}}^{-}\right][/latex] describes a reaction that is first order in H^+, first order in OH^–, and second order overall.

Example 1: Writing Rate Laws from Reaction Orders

An experiment shows that the reaction of nitrogen dioxide with carbon monoxide: [latex]{\text{NO}}_{2}\text{(}g\text{)}+\text{CO(}g\text{)}\rightarrow\text{NO(}g\text{)}+{\text{CO}}_{2}\text{(}g\text{)}[/latex] is second order in NO[2] and zero order in CO at 100 °C. What is the rate law for the reaction? Show Answer

Check Your Learning

The rate law for the reaction: has been determined to be rate = k[NO]^2[H[2]]. What are the orders with respect to each reactant, and what is the overall order of the reaction? Show Answer

Check Your Learning

In a transesterification reaction, a triglyceride reacts with an alcohol to form an ester and glycerol. Many students learn about the reaction between methanol (CH[3]OH) and ethyl acetate (CH[3]CH[2]OCOCH[3]) as a sample reaction before studying the chemical reactions that produce biodiesel: The rate law for the reaction between methanol and ethyl acetate is, under certain conditions, determined to be: What is the order of reaction with respect to methanol and ethyl acetate, and what is the overall order of reaction? Show Answer

It is sometimes helpful to use a more explicit algebraic method, often referred to as the method of initial rates, to determine the orders in rate laws. To use this method, we select two sets of rate data that differ in the concentration of only one reactant and set up a ratio of the two rates and the two rate laws. After canceling terms that are equal, we are left with an equation that contains only one unknown, the coefficient of the concentration that varies. We then solve this equation for the coefficient.
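To make the algebra concrete, here is a short numerical version of the method of initial rates. The trial data (concentrations and rates) are invented for illustration and are not taken from this module; the reaction is assumed to follow rate = k[A]^m[B]^n:

```python
import math

# Method of initial rates with made-up trial data for rate = k[A]^m [B]^n.
trials = [
    # ([A] mol/L, [B] mol/L, initial rate mol/L/s)
    (0.10, 0.10, 0.0030),
    (0.20, 0.10, 0.0120),   # only [A] changes between trials 1 and 2
    (0.10, 0.20, 0.0060),   # only [B] changes between trials 1 and 3
]

def order(c1, c2, r1, r2):
    """Solve (r2/r1) = (c2/c1)^x for the unknown exponent x."""
    return math.log(r2 / r1) / math.log(c2 / c1)

(a1, b1, r1), (a2, _, r2), (_, b3, r3) = trials
m = order(a1, a2, r1, r2)        # rate quadruples when [A] doubles -> m = 2
n = order(b1, b3, r1, r3)        # rate doubles when [B] doubles -> n = 1
k = r1 / (a1 ** m * b1 ** n)     # back-substitute one trial to get k

print(f"m = {m:.1f}, n = {n:.1f}, k = {k:.2f} L^2 mol^-2 s^-1")
```

Dividing two rate laws cancels k and every concentration that was held constant, leaving one unknown exponent; taking a logarithm, as `order()` does, solves for it directly.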
Example 2: Determining a Rate Law from Initial Rates

Ozone in the upper atmosphere is depleted when it reacts with nitrogen oxides. The rates of the reactions of nitrogen oxides with ozone are important factors in deciding how significant these reactions are in the formation of the ozone hole over Antarctica. One such reaction is the combination of nitric oxide, NO, with ozone, O[3]: [latex]\text{NO(}g\text{)}+{\text{O}}_{3}\text{(}g\text{)}\rightarrow{\text{NO}}_{2}\text{(}g\text{)}+{\text{O}}_{2}\text{(}g\text{)}[/latex]

This reaction has been studied in the laboratory, and the following rate data were determined at 25 °C.

Trial | [NO] (mol/L) | [O[3]] (mol/L) | [latex]\frac{\Delta\left[{\text{NO}}_{2}\right]}{\Delta t}\left(\text{mol}{\text{L}}^{-1}{\text{s}}^{-1}\right)[/latex]
1 | 1.00 × 10^−6 | 3.00 × 10^−6 | 6.60 × 10^−5
2 | 1.00 × 10^−6 | 6.00 × 10^−6 | 1.32 × 10^−4
3 | 1.00 × 10^−6 | 9.00 × 10^−6 | 1.98 × 10^−4
4 | 2.00 × 10^−6 | 9.00 × 10^−6 | 3.96 × 10^−4
5 | 3.00 × 10^−6 | 9.00 × 10^−6 | 5.94 × 10^−4

Determine the rate law and the rate constant for the reaction at 25 °C. Show Answer

Check Your Learning

Acetaldehyde decomposes when heated to yield methane and carbon monoxide according to the equation: [latex]{\text{CH}}_{3}\text{CHO}\rightarrow{\text{CH}}_{4}+\text{CO}[/latex] Determine the rate law and the rate constant for the reaction from the following experimental data:

Trial | [CH[3]CHO] (mol/L) | [latex]-\frac{\Delta\left[{\text{CH}}_{3}\text{CHO}\right]}{\Delta t}\left(\text{mol}{\text{L}}^{-1}{\text{s}}^{-1}\right)[/latex]
1 | 1.75 × 10^−3 | 2.06 × 10^−11
2 | 3.50 × 10^−3 | 8.24 × 10^−11
3 | 7.00 × 10^−3 | 3.30 × 10^−10

Show Answer

Example 3: Determining Rate Laws from Initial Rates

Using the initial rates method and the experimental data, determine the rate law and the value of the rate constant for this reaction: [latex]\text{2NO(}g\text{)}+{\text{Cl}}_{2}\text{(}g\text{)}\rightarrow\text{2NOCl(}g\text{)}[/latex]

Trial | [NO] (mol/L) | [Cl[2]] (mol/L) | [latex]-\frac{\Delta\left[\text{NO}\right]}{\Delta t}\left(\text{mol}{\text{L}}^{-1}{\text{s}}^{-1}\right)[/latex]
1 | 0.10 | 0.10 | 0.00300
2 | 0.10 | 0.15 | 0.00450
3 | 0.15 | 0.10 | 0.00675

Show Answer

Check Your Learning

Use the provided initial rate data to derive the rate law for the reaction whose equation is: [latex]{\text{OCl}}^{-}+{\text{I}}^{-}\rightarrow{\text{IO}}^{-}+{\text{Cl}}^{-}[/latex]

Trial | [OCl^–] (mol/L) | [I^–] (mol/L) | Initial Rate (mol/L/s)
1 | 0.0040 | 0.0020 | 0.00184
2 | 0.0020 | 0.0040 | 0.00092
3 | 0.0020 | 0.0020 | 0.00046

Determine the rate law expression and the value of the rate constant k with appropriate units for this reaction. Show Answer

Reaction Order and Rate Constant Units

In some of our examples, the reaction orders in the rate law happen to be the same as the coefficients in the chemical equation for the reaction. This is merely a coincidence and very often not the case. Rate laws may exhibit fractional orders for some reactants, and negative reaction orders are sometimes observed when an increase in the concentration of one reactant causes a decrease in reaction rate. A few examples illustrating these points are provided:

[latex]{\text{NO}}_{2}+\text{CO}\rightarrow\text{NO}+{\text{CO}}_{2}\qquad\text{rate}=k{\left[{\text{NO}}_{2}\right]}^{2}[/latex]
[latex]{\text{CH}}_{3}\text{CHO}\rightarrow{\text{CH}}_{4}+\text{CO}\qquad\text{rate}=k{\left[{\text{CH}}_{3}\text{CHO}\right]}^{2}[/latex]
[latex]{\text{2N}}_{2}{\text{O}}_{5}\rightarrow{\text{2NO}}_{2}+{\text{O}}_{2}\qquad\text{rate}=k\left[{\text{N}}_{2}{\text{O}}_{5}\right][/latex]
[latex]{\text{2NO}}_{2}+{\text{F}}_{2}\rightarrow{\text{2NO}}_{2}\text{F}\qquad\text{rate}=k\left[{\text{NO}}_{2}\right]\left[{\text{F}}_{2}\right][/latex]

It is important to note that rate laws are determined by experiment only and are not reliably predicted by reaction stoichiometry. Reaction orders also play a role in determining the units for the rate constant k. In Example 2, a second-order reaction, we found the units for k to be [latex]\text{L}{\text{mol}}^{-1}{\text{s}}^{-1},[/latex] whereas in Example 3, a third order reaction, we found the units for k to be mol^−2 L^2/s. More generally speaking, the units for the rate constant for a reaction of order [latex]\left(m+n\right)[/latex] are [latex]{\text{mol}}^{1-\left(m+n\right)}{\text{L}}^{\left(m+n\right)-1}{\text{s}}^{-1}[/latex]. Table 1 summarizes the rate constant units for common reaction orders. Table 1.
Rate Constants for Common Reaction Orders

Reaction Order | Units of k
[latex]\left(m+n\right)[/latex] | [latex]{\text{mol}}^{1-\left(m+n\right)}{\text{L}}^{\left(m+n\right)-1}{\text{s}}^{-1}[/latex]
zero | mol/L/s
first | s^−1
second | L/mol/s
third | mol^−2 L^2 s^−1

Note that the units in the table can also be expressed in terms of molarity (M) instead of mol/L. Also, units of time other than the second (such as minutes, hours, days) may be used, depending on the situation.

Key Concepts and Summary

Rate laws provide a mathematical description of how changes in the amount of a substance affect the rate of a chemical reaction. Rate laws are determined experimentally and cannot be predicted by reaction stoichiometry. The order of reaction describes how much a change in the amount of each substance affects the overall rate, and the overall order of a reaction is the sum of the orders for each substance present in the reaction. Reaction orders are typically first order, second order, or zero order, but fractional and even negative orders are possible.

1. How do the rate of a reaction and its rate constant differ?
2. Doubling the concentration of a reactant increases the rate of a reaction four times. With this knowledge, answer the following questions:
   1. What is the order of the reaction with respect to that reactant?
   2. Tripling the concentration of a different reactant increases the rate of a reaction three times. What is the order of the reaction with respect to that reactant?
3. Tripling the concentration of a reactant increases the rate of a reaction nine times. With this knowledge, answer the following questions:
   1. What is the order of the reaction with respect to that reactant?
   2. Increasing the concentration of a reactant by a factor of four increases the rate of a reaction four times. What is the order of the reaction with respect to that reactant?
4.
How much and in what direction will each of the following affect the rate of the reaction: [latex]\text{CO(}g\text{)}+{\text{NO}}_{2}\text{(}g\text{)}\rightarrow{\text{CO}}_{2}\text{(}g\text{)}+\text{NO(}g\text{)}[/latex] if the rate law for the reaction is [latex]\text{rate}=k{\left[{\text{NO}}_{2}\right]}^{2}?[/latex]
   1. Decreasing the pressure of NO[2] from 0.50 atm to 0.250 atm.
   2. Increasing the concentration of CO from 0.01 M to 0.03 M.
5. How will each of the following affect the rate of the reaction: [latex]\text{CO(}g\text{)}+{\text{NO}}_{2}\text{(}g\text{)}\rightarrow{\text{CO}}_{2}\text{(}g\text{)}+\text{NO(}g\text{)}[/latex] if the rate law for the reaction is [latex]\text{rate}=k\left[{\text{NO}}_{2}\right]\left[\text{CO}\right][/latex]?
   1. Increasing the pressure of NO[2] from 0.1 atm to 0.3 atm
   2. Increasing the concentration of CO from 0.02 M to 0.06 M.
6. Regular flights of supersonic aircraft in the stratosphere are of concern because such aircraft produce nitric oxide, NO, as a byproduct in the exhaust of their engines. Nitric oxide reacts with ozone, and it has been suggested that this could contribute to depletion of the ozone layer. The reaction [latex]\text{NO}+{\text{O}}_{3}\rightarrow{\text{NO}}_{2}+{\text{O}}_{2}[/latex] is first order with respect to both NO and O[3] with a rate constant of 2.20 × 10^7 L/mol/s. What is the instantaneous rate of disappearance of NO when [NO] = 3.3 × 10^−6 M and [O[3]] = 5.9 × 10^−7 M?
7. Radioactive phosphorus is used in the study of biochemical reaction mechanisms because phosphorus atoms are components of many biochemical molecules. The location of the phosphorus (and the location of the molecule it is bound in) can be detected from the electrons (beta particles) it produces: [latex]{}_{15}^{32}\text{P}\rightarrow{}_{16}^{32}\text{S}+{\text{e}}^{-}[/latex] Rate = 4.85 × 10^−2 day^−1 [^32P]. What is the instantaneous rate of production of electrons in a sample with a phosphorus concentration of 0.0033 M?
8.
The rate constant for the radioactive decay of ^14C is 1.21 × 10^−4 year^−1. The products of the decay are nitrogen atoms and electrons (beta particles): [latex]{}_{6}^{14}\text{C}\rightarrow{}_{7}^{14}\text{N}+{\text{e}}^{-}[/latex] [latex]\text{rate}=k\left[{}_{6}^{14}\text{C}\right][/latex] What is the instantaneous rate of production of N atoms in a sample with a carbon-14 content of 6.5 × 10^−9 M?
9. The decomposition of acetaldehyde is a second order reaction with a rate constant of 4.71 × 10^−8 L/mol/s. What is the instantaneous rate of decomposition of acetaldehyde in a solution with a concentration of 5.55 × 10^−4 M?
10. Alcohol is removed from the bloodstream by a series of metabolic reactions. The first reaction produces acetaldehyde; then other products are formed. The following data have been determined for the rate at which alcohol is removed from the blood of an average male, although individual rates can vary by 25–30%. Women metabolize alcohol a little more slowly than men:

[C[2]H[5]OH] (M): 4.4 × 10^−2 | 3.3 × 10^−2 | 2.2 × 10^−2
Rate (mol/L/h): 2.0 × 10^−2 | 2.0 × 10^−2 | 2.0 × 10^−2

Determine the rate equation, the rate constant, and the overall order for this reaction.
11. Under certain conditions the decomposition of ammonia on a metal surface gives the following data:

[NH[3]] (M): 1.0 × 10^−3 | 2.0 × 10^−3 | 3.0 × 10^−3
Rate (mol/L/h): 1.5 × 10^−6 | 1.5 × 10^−6 | 1.5 × 10^−6

Determine the rate equation, the rate constant, and the overall order for this reaction.
12. Nitrosyl chloride, NOCl, decomposes to NO and Cl[2]. [latex]\text{2NOCl(}g\text{)}\rightarrow\text{2NO(}g\text{)}+{\text{Cl}}_{2}\text{(}g\text{)}[/latex] Determine the rate equation, the rate constant, and the overall order for this reaction from the following data:

[NOCl] (M): 0.10 | 0.20 | 0.30
Rate (mol/L/h): 8.0 × 10^−10 | 3.2 × 10^−9 | 7.2 × 10^−9

13.
From the following data, determine the rate equation, the rate constant, and the order with respect to A for the reaction [latex]A\rightarrow 2C.[/latex]

[A] (M): 1.33 × 10^−2 | 2.66 × 10^−2 | 3.99 × 10^−2
Rate (mol/L/h): 3.80 × 10^−7 | 1.52 × 10^−6 | 3.42 × 10^−6

14. Nitrogen(II) oxide reacts with chlorine according to the equation: [latex]\text{2NO(}g\text{)}+{\text{Cl}}_{2}\text{(}g\text{)}\rightarrow\text{2NOCl(}g\text{)}[/latex] The following initial rates of reaction have been observed for certain reactant concentrations:

[NO] (mol/L) | [Cl[2]] (mol/L) | Rate (mol/L/h)
0.50 | 0.50 | 1.14
1.00 | 0.50 | 4.56
1.00 | 1.00 | 9.12

What is the rate equation that describes the rate's dependence on the concentrations of NO and Cl[2]? What is the rate constant? What are the orders with respect to each reactant?
15. Hydrogen reacts with nitrogen monoxide to form dinitrogen monoxide (laughing gas) according to the equation: [latex]{\text{H}}_{2}\text{(}g\text{)}+\text{2NO(}g\text{)}\rightarrow{\text{N}}_{2}\text{O(}g\text{)}+{\text{H}}_{2}\text{O(}g\text{)}[/latex] Determine the rate equation, the rate constant, and the orders with respect to each reactant from the following data:

[NO] (M): 0.30 | 0.60 | 0.60
[H[2]] (M): 0.35 | 0.35 | 0.70
Rate (mol/L/s): 2.835 × 10^−3 | 1.134 × 10^−2 | 2.268 × 10^−2

16. For the reaction [latex]A\rightarrow B+C,[/latex] the following data were obtained at 30 °C:

[A] (M): 0.230 | 0.356 | 0.557
Rate (mol/L/s): 4.17 × 10^−4 | 9.99 × 10^−4 | 2.44 × 10^−3

   1. What is the order of the reaction with respect to [A], and what is the rate equation?
   2. What is the rate constant?
17.
For the reaction [latex]Q\rightarrow W+X,[/latex] the following data were obtained at 30 °C:

[Q][initial] (M): 0.170 | 0.212 | 0.357
Rate (mol/L/s): 6.68 × 10^−3 | 1.04 × 10^−2 | 2.94 × 10^−2

   1. What is the order of the reaction with respect to [Q], and what is the rate equation?
   2. What is the rate constant?
18. The rate constant for the first-order decomposition at 45 °C of dinitrogen pentoxide, N[2]O[5], dissolved in chloroform, CHCl[3], is 6.2 × 10^−4 min^−1. [latex]{\text{2N}}_{2}{\text{O}}_{5}\rightarrow{\text{4NO}}_{2}+{\text{O}}_{2}[/latex] What is the rate of the reaction when [N[2]O[5]] = 0.40 M?
19. The annual production of HNO[3] in 2013 was 60 million metric tons. Most of that was prepared by the following sequence of reactions, each run in a separate reaction vessel.
   (a) [latex]{\text{4NH}}_{3}\text{(}g\text{)}+{\text{5O}}_{2}\text{(}g\text{)}\rightarrow\text{4NO(}g\text{)}+{\text{6H}}_{2}\text{O(}g\text{)}[/latex]
   (b) [latex]\text{2NO(}g\text{)}+{\text{O}}_{2}\text{(}g\text{)}\rightarrow{\text{2NO}}_{2}\text{(}g\text{)}[/latex]
   (c) [latex]{\text{3NO}}_{2}\text{(}g\text{)}+{\text{H}}_{2}\text{O(}l\text{)}\rightarrow{\text{2HNO}}_{3}\text{(}aq\text{)}+\text{NO(}g\text{)}[/latex]
20. The first reaction is run by burning ammonia in air over a platinum catalyst. This reaction is fast. The reaction in equation (c) is also fast. The second reaction limits the rate at which nitric acid can be prepared from ammonia. If equation (b) is second order in NO and first order in O[2], what is the rate of formation of NO[2] when the oxygen concentration is 0.50 M and the nitric oxide concentration is 0.75 M? The rate constant for the reaction is 5.8 × 10^−6 L^2/mol^2/s.
21.
The following data have been determined for the reaction: [latex]{\text{I}}^{-}+{\text{OCl}}^{-}\rightarrow{\text{IO}}^{-}+{\text{Cl}}^{-}[/latex]

[latex]{\left[{\text{I}}^{-}\right]}_{\text{initial}}[/latex] (M): 0.10 | 0.20 | 0.30
[latex]{\left[{\text{OCl}}^{-}\right]}_{\text{initial}}[/latex] (M): 0.050 | 0.050 | 0.010
Rate (mol/L/s): 3.05 × 10^−4 | 6.20 × 10^−4 | 1.83 × 10^−4

Determine the rate equation and the rate constant for this reaction. Show Selected Answers

method of initial rates: use of a more explicit algebraic method to determine the orders in a rate law
overall reaction order: sum of the reaction orders for each substance represented in the rate law
rate constant (k): proportionality constant in the relationship between reaction rate and concentrations of reactants
rate law (also, rate equation): mathematical equation showing the dependence of reaction rate on the rate constant and the concentration of one or more reactants
reaction order: value of an exponent in a rate law, expressed as an ordinal number (for example, zero order for 0, first order for 1, second order for 2, and so on)
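The general unit rule for the rate constant can be tabulated mechanically. This short sketch (illustrative, not part of the original module) reproduces the common entries from the mol^{1−(m+n)} L^{(m+n)−1} s^{−1} formula:

```python
# For an overall reaction order (m + n), the rate constant's units are
# mol^(1-(m+n)) L^((m+n)-1) s^-1 -- this reproduces the table's entries.
def k_units(overall_order):
    return f"mol^{1 - overall_order} L^{overall_order - 1} s^-1"

for order in range(4):      # zero- through third-order reactions
    print(order, k_units(order))
# order 2 gives mol^-1 L^1 s^-1, i.e. the familiar L/mol/s
```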
{"url":"https://courses.lumenlearning.com/suny-binghamton-chemistry/chapter/rate-laws/","timestamp":"2024-11-07T13:59:35Z","content_type":"text/html","content_length":"96028","record_id":"<urn:uuid:462b2ec3-cd74-4952-8b84-5656fe8923d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00539.warc.gz"}
Asynchronous Cake Cutting - a Fair Algorithm

Written by Mike James
Friday, 30 March 2012

The problem to be solved sounds trivial - cut up a cake so that each person thinks they get a fair share. You also have to throw in the observation that people cheat and are greedy to see that there might be a problem. Now we have an algorithm that works even when the cake is being shared over the internet.

Cutting up a cake might not sound like an important problem, but if you rephrase it as sharing resources or territory, then you can quickly see that it has lots of practical applications. It might even come in useful when there is a real cake to share, but I suspect only if the party involves a group of programmers.

Before you decide that cake cutting is simple, let's go over a simple case - just two people to share a single cake. If both value the cake in the same way, then it is obvious that they both want to get an equal amount of cake. To make things interesting, let's suppose that the two people have utility functions u[1] and u[2] that measure how much they value different amounts of cake. Introducing utility functions makes the problem more interesting because each participant has a different view of the value or "size" of the portions of cake. What matters in this problem is that all of the participants think that they have got at least their fair share, or more, of the cake as measured by their utility function.

So now how do you share the cake between two people? One solution is the "divide and choose" algorithm. One of the participants cuts the cake and the other chooses. The cutter will divide the cake into two portions that are equal as measured by their own utility function. The chooser will then select the slice that is bigger according to their utility function (or at random if they appear equal). You should be able to see that this algorithm has a number of desirable properties.
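Here is a toy simulation of divide and choose (my own discretisation and numbers, not from the article): the cake is modelled as eight equal slices, and each player assigns a value to each slice.

```python
# Divide and choose, two players: the cutter halves the cake by their own
# measure; the chooser then takes whichever half they value more, so both
# end up with at least half the cake as they each measure it.
cutter_vals = [1, 1, 1, 1, 1, 1, 1, 1]    # cutter: uniform utility
chooser_vals = [1, 1, 1, 1, 2, 2, 2, 2]   # chooser: prefers the right end

def best_cut(vals):
    """Cut index k that best equalises the two pieces under `vals`."""
    total = sum(vals)
    return min(range(len(vals) + 1),
               key=lambda k: abs(2 * sum(vals[:k]) - total))

k = best_cut(cutter_vals)                 # cutter's cut point
left = sum(chooser_vals[:k])
right = sum(chooser_vals[k:])
chooser_share = max(left, right) / sum(chooser_vals)
print(k, chooser_share)
```

With these numbers the cutter cuts after slice 4 (half the cake by their uniform measure), and the chooser takes the right half, worth two thirds of the cake by their own measure - both at or above the fair threshold of one half.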
The cutter's best play is to divide the cake equally because then, no matter which piece the chooser takes, it appears to the cutter that the cake has been divided equally. No matter what the difference is between the chooser's and the cutter's utility functions, the chooser will always come away happy because they get to choose the biggest, or at least an equal, piece of cake. So the cake is divided "fairly" from both players' point of view - they each get what they measure to be half the cake or better.

To generalize to n people: an algorithm is simply fair if each participant gets a slice that they value at 1/n or more of the whole. Thus for n=2 the "divide and choose" algorithm is good and provably "simply fair".

There are algorithms that are simply fair for n>2. For example, the "Dubins-Spanier" moving knife algorithm makes use of a trusted third party (TTP). The TTP moves the knife steadily along the cake - which for simplicity we now assume is a rectangular bar. As the knife moves, the slice S to be allocated grows in size. Each of the participants works out u[i](S), and the first to notice that their u[i](S)=u[i](cake)/n calls stop and is awarded the slice. They then drop out of the procedure, which repeats with the n-1 remaining participants and the remainder of the cake. Notice that each participant gets a slice worth at least 1/n of the total cake and there is no incentive to cheat, because you could end up with a smaller rather than larger slice.

The moving knife algorithm seems simple, but it can be difficult to see that it really is simply fair, and proving that it is takes a bit of work. There are two problems with this algorithm. The first is that it works only if the division is continuous and if all of the participants can see what is happening without any delay and can shout stop without any delay. Now we have a new algorithm that works asynchronously and so is suitable for sharing algorithms implemented over the internet, say.
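The moving-knife procedure is easy to simulate once the cake is discretised into small slices. The sketch below (my own toy model and numbers, not code from the article or the paper) sweeps a "knife" over unit slices and awards a piece to the first player whose accumulated value reaches their proportional share of what remains:

```python
def moving_knife(valuations):
    """valuations[i][j] = value player i assigns to atom j of the cake.
    Returns {player: (start, end)} half-open slice allocations."""
    players = list(range(len(valuations)))
    n_atoms = len(valuations[0])
    start = 0
    allocation = {}
    while len(players) > 1:
        # Each remaining player wants 1/len(players) of the remaining cake
        # by their own measure.
        shares = {i: sum(valuations[i][start:]) / len(players) for i in players}
        acc = {i: 0.0 for i in players}
        for j in range(start, n_atoms):      # the knife sweeps rightwards
            for i in players:
                acc[i] += valuations[i][j]
            callers = [i for i in players if acc[i] >= shares[i]]
            if callers:                      # first to call "stop" wins
                winner = callers[0]          # ties: lowest index wins
                allocation[winner] = (start, j + 1)
                players.remove(winner)
                start = j + 1
                break
    allocation[players[0]] = (start, n_atoms)  # last player takes the rest
    return allocation

vals = [[1, 1, 1, 1, 1, 1],    # player 0: uniform
        [0, 0, 1, 1, 2, 2],    # player 1: values the right end
        [3, 1, 1, 1, 0, 0]]    # player 2: values the left end
print(moving_knife(vals))
```

Player 2 stops the knife first and takes the left end, then player 0, leaving the right end for player 1; each receives a piece worth at least a third of the whole cake by their own valuation.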
The details are slightly complicated because it involves a cryptographically secure auction where each participant bids in an encrypted form, making it possible to know the maximum bid and who made it without revealing the other bids - this requires homomorphic encryption.

With this cryptographic auction in place, the new discrete asynchronous algorithm works as follows. First assume that the cake to be shared runs from 0 to 2^m, where m is a large integer, and each cutting point is an integer.

1. Each player works out their preferred integer cutting point x[i] such that the slice that they would get is worth 1/n of the total cake - by their utility measure.
2. This cut point is encrypted and broadcast to the other participants as a bid in a secure auction.
3. All the players work out the maximum bid and the player who made it - without revealing the other bids - and the winner gets the piece they bid for.

Again, just like the moving knife algorithm, the result is that the participant gets a slice worth at least 1/n of the entire cake by their measure. As in the moving knife algorithm, the participant awarded the piece drops out and the procedure repeats with the remainder of the cake. There are some subtleties that have to be taken care of to make the procedure efficient, but this is the general idea. The algorithm is simply fair, and none of the events needs to be executed synchronously.

So the next time you organize an internet party and need to share the cake, you can do the job asynchronously.

More Information

A Cryptographic Moving-Knife Cake-Cutting Protocol: arXiv:1202.4507v1

If you would like a more general introduction to cake cutting algorithms - Cake Cutting Mechanisms: arXiv:1203.0100v1
Last Updated ( Saturday, 07 April 2012 )
{"url":"https://www.i-programmer.info/news/181-algorithms/3872-asynchronous-cake-cuttting-a-fair-algorithm.html","timestamp":"2024-11-12T15:32:09Z","content_type":"text/html","content_length":"37449","record_id":"<urn:uuid:d080f3e2-cb75-4480-9fed-d6c995b896b3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00371.warc.gz"}
Low frequency sound field reconstruction in a non-rectangular room using a small number of microphones
Acta Acustica, Volume 4, Number 2, 2020, Article Number 5, 14 pages, Section Room Acoustics
DOI: https://doi.org/10.1051/aacus/2020006
Published online 12 May 2020
Scientific Article
Signal Processing Laboratory LTS2, EPFL, CH-1015 Lausanne, Switzerland
^* Corresponding author: thachphamvu@gmail.com
Received: 11 November 2019. Accepted: 23 April 2020.
An accurate knowledge of the sound field distribution inside a room is required to identify and optimally locate corrective measures for room acoustics. However, a full spatial recovery of the sound field would require an impractically high number of microphones in the room. Fortunately, at low frequencies, the possibility to rely on a sparse description of sound fields can help reduce the total number of measurement points without affecting the accuracy of the reconstruction. In this paper, the use of greedy-algorithm and global curve-fitting techniques is proposed, in order to first recover the modal parameters of the room, and then to reconstruct the entire enclosed sound field at low frequencies, using a reasonably low number of measurements. First, numerical investigations are conducted on a non-rectangular room configuration with different acoustic properties, in order to analyze various aspects of the reconstruction frameworks such as accuracy and robustness. The model is then validated with an experimental study in an actual reverberation chamber. The study yields promising results in which the enclosed sound field can be faithfully reconstructed using a practically feasible number of microphones, even in complex-shaped and damped rooms. Key words: Reconstruction / Room acoustics / Low frequency / Modal equalization / Modal identification © T. Pham Vu and H.
Lissek, Published by EDP Sciences, 2020. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 1 Introduction In room acoustics, sound field reconstruction generally consists of retrieving the entire enclosed sound field by performing a limited number of measurements. While interpolations of the room impulse responses (RIRs) are commonly used for the purpose of auralization and sound reproduction, at low frequencies a precise knowledge of their frequency domain equivalent – the room frequency responses (RFRs) – can provide useful information on the spatial distribution of sound pressure caused by the resonances of the room (room modes) [1]. In the low-frequency range, room modes strongly affect the sound field in the room, yielding irregularities in both the spatial and frequency domains which give rise to coloration as well as masking effects and eventually alter the listening experience. An accurate depiction of the spatial sound field in a room can provide important information for applying ad hoc treatments for room mode correction [2]. It has been shown that, at low frequencies, knowledge of the modal properties and sound pressure distribution in the room helps improve the design of different passive corrective measures [3–5]. This becomes even more crucial in the case of active strategies for room mode correction [6–8], where control settings could be adjusted based on knowledge of the resulting sound pressure distribution. This highlights the need for a practical method to accurately reconstruct the sound field in the room at low frequencies. Each RFR reveals the acoustic transfer from a given source to a given receiver in the room, in the frequency domain.
Such RFRs embed the main properties of room modes, namely the resonance frequencies and modal decay times, as well as the mode shapes of the room. To retrieve this information for a fixed source position, multiple measurements should be performed at different locations in the room, and a reconstruction framework is required to recover the entire spatial information of the aforementioned quantities. The most intriguing question is how to faithfully reconstruct the spatial sound field in a room using the least number of measurements possible. A regular space and time sampling of the RFRs generally results in an impractically dense microphone grid. It has been shown that, in the framework of the Plenacoustic function in free field [9], the inherent sparsity of the space–time representation of the governing function allows a more effective sampling approach of the sound field. Several studies have also addressed the different sparse properties of enclosed sound fields. In a room with closed boundaries, the sound field is fully dependent on the physics of the room, including its geometry and acoustic properties. Furthermore, at low frequencies, the wave equation is governed by a discrete number of eigenmodes, which gives rise to an additional sparse approximation. In [10], the spatial RFRs in a rectangular room are interpolated on a line based on the fact that these transfer functions share the same common poles, with the only difference being their amplitudes (also known as residues) [11, 12]. Mignot et al. [13] retrieved the low-frequency RIRs in a rectangular room using a finite number of measurement points, by exploiting a low-rank approximation using matching pursuit.
In [14], a more conventional Compressed Sensing technique using a sensing matrix has been used in combination with plane waves expansion techniques to tackle the block-sparse properties of the acoustic field in a rectangular room. In this paper, we focus on these inherent sparse properties in room acoustics at low frequencies, using approximation techniques such as matching pursuit and global curve-fitting to obtain the low-frequency information of a non-rectangular room from an extensive point of view, where the spatial distribution of sound pressure in a large volume inside the room can be reconstructed and analyzed using a practically small number of microphones. In practice, not every room can be considered as a rectangular room, especially in the case of a conventional listening room or private cinema. Non-rectangular rooms certainly possess a more complex distribution of eigenmodes frequency-wise, and the mode shapes are also harder to predict. This practical challenge is the main motivation to investigate here a model of a non-rectangular reverberation chamber. A first numerical study of this facility is then followed by the experimental validation inside the actual reverberation chamber. The analysis of the reconstruction results emphasizes the frequency and spatial aspects of the responses in the room. As can be seen in [15–17], recent techniques for room modes equalization require an accurate knowledge of the sound field. For instance, an active electroacoustic absorber system [17], aiming at equalizing and flattening the frequency response of a room at low frequencies, requires an accurate model of the room to optimize the active acoustic impedance. In former studies, the actual efficiency of these low-frequency absorbers has been validated with a limited number of measurements inside the room, especially addressing the performance in terms of modal decay times reduction.
With the possible help of the reconstruction framework proposed in this paper, the performance of the absorbers can be assessed space-wise. In addition, the framework can also provide valuable information on how to adapt the acoustic impedance to be assigned at the diaphragm of the active electroacoustic absorbers. This motivates investigating how to minimize the number of measurement points for such a reconstruction framework, not only because it allows the reconstruction of the sound field within a specific bandwidth with limited equipment, but also because it saves processing time, which will eventually allow potential real-time and online active-control applications. The outline of the paper is as follows. Section 2 first introduces a sparse representation of room acoustics at low frequencies. The reconstruction method, which is composed of two steps, is then introduced in Section 3. The first part of the method consists of the modal identification of the room, for which two different approaches, respectively in the time and frequency domains, are suggested. The second part aims at recovering mode shape functions through plane wave approximation techniques. Following the descriptions of the reconstruction mechanism, Section 4 is dedicated to the validation of the method using both numerical models and experimental measurements in the actual reverberation chamber at EPFL (non-rectangular room) to emphasize the robustness of the algorithm. Several discussions are raised concerning the accuracy of the sound field spatial recovery as well as the requirements for a faithful reconstruction. Concluding remarks are finally presented in Section 5. 2 Sparsity in room acoustics The main motivation of this study is to propose a simple, yet practical, experimental framework allowing a thorough characterization of the room behavior in the low-frequency domain.
Regardless of the method used to reduce the number of measurement points, such a framework should rely on a sparse representation of the wave equation in a room at low frequencies. This could be an exact sparsity that inherently emerges from the physics of the room, or an approximate sparsity that requires an approximation framework to reduce the degrees of freedom in the wave equation. In this section, several sparse aspects of room acoustics at low frequencies are investigated using the modal decomposition form of the wave equation and the mode shape approximation theorem. The objective is to obtain a governing equation of the spatial distribution of sound pressure in a room where the number of variables is well defined and quantifiable. This serves as the target for the reconstruction framework that follows in Section 3. 2.1 Modal decomposition At low frequencies, where wavelengths are of the same order of magnitude as the room dimensions, room walls are mostly reflective, which gives rise to standing-wave phenomena. This creates the so-called room modes that occur at discrete resonance frequencies where most of the acoustic energy is concentrated [18]. There exists a formulation of room modes at low frequencies that presents an inherent sparsity, corresponding to a limited number of discrete resonance frequencies bounded by the Schroeder cutoff frequency [1]. In this sparse representation, the solutions of the wave equation can be decomposed as a discrete sum of damped harmonic eigenmodes: $p(t, \vec{X}) = \sum_n A_n \Phi_n(\vec{X})\, g_n(t)$ (1), where $\Phi_n$ are the space-dependent mode shape functions (eigenfunctions of the Helmholtz equation) for each mode $n$ of the room, $\vec{X}$ is the position in the room, $g_n(t)$ is the harmonic time-dependent decaying function and $A_n$ is the corresponding complex expansion coefficient of mode $n$.
Each eigenmode of a room is uniquely represented by a complex wavenumber $k_n = (\omega_n + j\delta_n)/c_0$ (eigenvalues of the Helmholtz equation), where $c_0$ is the speed of sound in air, $\omega_n$ is the modal angular frequency and $\delta_n > 0$ is the corresponding damping factor [18]. The harmonic decaying function $g_n(t)$ can be fully expressed as: $g_n(t) = e^{j k_n c_0 t} = e^{j(\omega_n + j\delta_n)t} = e^{j\omega_n t} e^{-\delta_n t}$ (2). It is worth noticing that while $\vec{X}$ is a variable in equation (1), as the location of the point/microphone of interest, the location of the source and its properties are not explicitly written here. This information is however accounted for in the complex coefficients $A_n$, and will be kept implicit in the following derivations. This is motivated by the fact that, in the case investigated here, only a single fixed source is considered, and hence the location of the source is not a variable. 2.2 Mode shape approximation The previous derivation introduced a structured sparsity originating from the limited discrete modal decomposition of the wave equation at low frequencies. For a room with ideally rigid walls, $\Phi_n$ is a space-dependent function that corresponds to the exact solution of the Helmholtz equation [1]: $\Delta\Phi_n + k_n^2 \Phi_n = 0$ (3). It has been shown in [19] that these mode shape functions can be further approximated with spherical harmonics and spherical Bessel functions. Accordingly, any mode shape function can be approximated by a finite sum of plane waves sharing the same wavenumber $|k_n|$, pointing in various directions. Each individual mode shape can then be formulated using the $R$-th order approximation: $\Phi_n(\vec{X}) \approx \sum_{r=1}^{R} B_{n,r}\, e^{j \vec{k}_{n,r} \cdot \vec{X}}$ (4), within which $\vec{k}_{n,r}$ are the 3D wavevectors sharing the same wavenumber $\|\vec{k}_{n,r}\|_2 = |k_n|$. Note that, in opposition to the exact sparsity of the previous section, this is an approximate sparsity.
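As a concrete illustration of equations (1)-(4), the following sketch (assuming NumPy; all modal frequencies, dampings and plane-wave coefficients below are made up for illustration, not taken from the paper) synthesizes a pressure signal at one position as a sum of damped harmonic modes whose mode shapes are approximated by plane waves:

```python
# Sketch of equations (1)-(4): a pressure signal as a discrete sum of
# damped harmonic eigenmodes, each mode shape approximated by R plane
# waves sharing the wavenumber omega_n / c0. Values are illustrative.
import numpy as np

fs = 1000.0                        # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)      # 1 s time vector (1000 samples)
c0 = 343.0                         # speed of sound, m/s

modes = [                          # (omega_n, delta_n) pairs, made up
    (2 * np.pi * 40.5, 2.0),
    (2 * np.pi * 55.1, 3.5),
]

def mode_shape(X, omega_n, R=8, seed=0):
    """R-th order plane-wave approximation of one mode shape, eq. (4)."""
    rng = np.random.default_rng(seed)
    k = omega_n / c0
    dirs = rng.normal(size=(R, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    B = rng.normal(size=R) + 1j * rng.normal(size=R)     # coefficients B_{n,r}
    return np.sum(B * np.exp(1j * k * dirs @ X))

def pressure(X, A):
    """Eq. (1): p(t, X) = sum_n A_n * Phi_n(X) * g_n(t)."""
    p = np.zeros_like(t, dtype=complex)
    for (omega, delta), a in zip(modes, A):
        g = np.exp(1j * omega * t) * np.exp(-delta * t)  # g_n(t), eq. (2)
        p += a * mode_shape(np.asarray(X), omega) * g
    return p.real
```

The envelope of each modal contribution decays as $e^{-\delta_n t}$, which is what the identification step in Section 3 will exploit.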
This decomposition not only provides an approximation for each of the mode shapes, but also allows a closed-form interpretation of the mode shape function regardless of the type of the modes in the room. Assuming now that we restrict this representation below a given upper frequency limit, a finite number $R$ of wavevectors is enough to closely approximate every mode shape function within this frequency range. Using equations (2) and (4), equation (1) can be expanded as: $p(t, \vec{X}) = \sum_{n,r} C_{n,r}\, e^{j\omega_n t} e^{-\delta_n t} e^{j \vec{k}_{n,r} \cdot \vec{X}}$ (5), where $C_{n,r} = A_n B_{n,r}$ with $r \le R$. Hence, through a series of derivations, the expression in equation (1) can be interpreted as a discrete sum of space-time damped harmonics with the expansion coefficients $C_{n,r}$. This expansion form directly links the acoustic response of the receiver to its location. 3 Reconstruction framework The role of the reconstruction framework is to identify and estimate the values of the unknown parameters of equation (5) from a limited set of measurements. The proposed algorithm addresses the general case of a non-rectangular room, the modal behavior of which is less predictable than in a shoe-box room. Figure 1 shows the geometry of the studied room with two simulated room modes. Inside the room, $M$ microphones are randomly placed at different locations to acquire the RIR measurements. Depending on the frequency range of interest, these measurements can be filtered as well as downsampled to reduce computational cost. Calling $N_t$ the length of the time vector of each microphone measurement, the $(N_t \times M)$ matrix $S$ of signals is defined as the input of the framework. The output of the reconstruction framework, in short, should be all the unknowns present in equation (5), excluding the predefined parameters, namely, the number of modes $N$ and the list of wavevectors $\vec{k}_{n,r}$ for each mode shape approximation.
The outputs, hence, include the angular frequency $\omega_n$ and the exponential damping factor $\delta_n$ for each eigenmode, as well as the $N \times R$ expansion coefficients $C_{n,r}$. Once all these values are determined, it is possible to interpolate the response at any position $\vec{X}_{int}$ in the room by simply plugging it into equation (5). Figure 1 Examples of room modes in a non-rectangular room. The detailed framework can be divided into two steps. The first one is called modal identification, aiming at estimating the modal wavenumbers $k_n$ for the $N$ room modes. Once these are identified, the second step approximates the expansion coefficients $C_{n,r}$ for a set of predefined wavevectors $\vec{k}_{n,r}$ through projection. 3.1 Modal identification Two alternative approaches are introduced here, processing the input signals either in the time or in the frequency domain. The first approach is the simultaneous orthogonal matching pursuit (SOMP) method [20] for damped sinusoids [21]. This method is based on a greedy algorithm that recursively estimates each modal parameter of the room from the matrix of input time signals. The second method is based on the rational fraction polynomials (RFP) global curve-fitting method [22] which, contrary to the iterative SOMP, simultaneously estimates the modal parameters of the room from a set of input RFRs of the room. 3.1.1 Time domain approach This method has been successfully used in [13] to locally interpolate the RIRs in a rectangular room at low frequencies. From a pre-defined set of damped sinusoids, this method finds the ones that are highly correlated with the matrix of input signals using a low-rank approximation approach. To begin, two sets of $\omega$ and $\delta$ values, with $\omega_{min} < \omega < \omega_{max}$ and $\delta_{min} < \delta < \delta_{max}$, are formed. The range of variation of each set is roughly estimated based on available knowledge of the room.
Combining every pair of entries of the two sets produces an overly redundant set of complex components $(j\omega_q - \delta_q)$, in which $q \in [1, Q]$ with $Q$ the total number of possible combinations. Each entry of this set is then used to form a time vector of length $N_t$ of the decaying damped sinusoid $\theta_q = e^{j\omega_q t} e^{-\delta_q t}$. Using the normalized vectors $\bar{\theta}_q = \theta_q / \|\theta_q\|_2$ as column vectors produces an $(N_t \times Q)$ array $\bar{\Theta}$. The algorithm performs an iterative matching procedure. Every loop indexed $i$ starts with an $(N_t \times M)$ residue matrix $R_i$ which is the result of the previous loop. At the first loop, $R_1$ is set equal to the predefined signal matrix $S$. Through the searching procedure, the damped sinusoid with the highest correlation to the residue matrix (representing a pair of $\omega_n$ and $\delta_n$) is chosen. The new residue matrix $R_{i+1}$ for the following loop is then formed by extracting the contribution of this chosen sinusoid from $R_i$. The algorithm at a generic $i$-th iteration is detailed below: • Compute the $(Q \times M)$ correlation matrix $\Xi_i = |\bar{\Theta}^H R_i|$. Each row $q$ of $\Xi_i$ is composed of the $M$ correlation values between the $q$-th normalized damped sinusoid and each of the $M$ measurements. • By summing the energy of this set of values, compute the aggregate correlation value $\sigma_q$ between the $q$-th damped sinusoid and the entire set of measurements: $\sigma_q = \sum_{m=1}^{M} (\Xi_i[q, m])^2$. • Out of the $Q$ available $\sigma_q$, choose the maximum one, which points to the pole with the highest correlation to the measurements. • The identified index (namely, $q_i$) yields the chosen modal wavenumber of this loop: $k_i = (\omega_{q_i} + j\delta_{q_i})/c_0$. • After a modal wavenumber is found, following the orthogonalization and projection of SOMP in [20], the residue matrix for the next loop is $R_{i+1} = R_i - P_i R_i$, in which $P_i$ is the projection onto the chosen damped sinusoid. • Repeat with $i = i + 1$ until $i = N$.
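The iterative loop above can be sketched in a few lines of NumPy. This is a deliberately simplified version: for brevity it deflates the residual with the newest atom only, whereas full SOMP orthogonalizes against all previously selected atoms, and the (ω, δ) candidate grid is left to the caller.

```python
# Minimal sketch of the SOMP loop for damped sinusoids described above.
# Dictionary atoms are theta_q = exp(j*omega_q*t) * exp(-delta_q*t);
# the caller supplies the candidate grids of omega and delta values.
import numpy as np

def somp_damped(S, t, omegas, deltas, n_modes):
    """S: (Nt x M) real signals. Returns the chosen (omega, delta) pairs."""
    grid = [(w, d) for w in omegas for d in deltas]
    Theta = np.stack([np.exp(1j * w * t) * np.exp(-d * t) for w, d in grid],
                     axis=1)                     # (Nt x Q) dictionary
    Theta /= np.linalg.norm(Theta, axis=0)       # normalize each atom
    R = S.astype(complex)                        # residue matrix R_1 = S
    chosen = []
    for _ in range(n_modes):
        Xi = np.abs(Theta.conj().T @ R)          # (Q x M) correlations
        sigma = (Xi ** 2).sum(axis=1)            # energy over microphones
        q = int(np.argmax(sigma))                # best-matching atom
        chosen.append(grid[q])
        atom = Theta[:, [q]]                     # (Nt x 1), unit norm
        P = atom @ atom.conj().T                 # projector onto the atom
        R = R - P @ R                            # deflate the residual
    return chosen
```

On synthetic signals built from well-separated damped sinusoids, this sketch recovers the underlying (ω, δ) pairs from their correlation energies alone.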
At the end of the procedure, a group of complex wavenumbers corresponding to the eigenmodes of the room is determined. 3.1.2 Frequency domain approach As room modes are mostly visible in the RFRs, it seems pragmatic to investigate a frequency-domain approach for room modes identification. One particular example is the global curve-fitting method in the frequency domain using the RFP form [22]. This has been used in [23] to estimate the modal parameters by curve-fitting the RFR measurements. Curve-fitting methods are usually processed locally, operating on a single function at a time. The method in [22], however, performs curve-fitting procedures on multiple frequency response functions at different locations simultaneously to identify the model of the system. The method assumes the linearity of the RFRs and that they can be formulated as a ratio of two polynomials. These RFRs share the same denominator, whose poles contain information on the modal angular frequencies ($\omega_n$) and damping ($\delta_n$) of the room. The method then performs a concurrent curve-fitting on the set of measured RFRs (see Appendix) to acquire the modal parameters of the room within a given bandwidth. 3.2 Projection onto spherically sampled wavevectors Up to now, only the eigenfrequency parameters of the room modes, namely $\omega_n$ and $\delta_n$ in equation (5), have been identified. The remaining parameters to be determined are the expansion coefficients $C_{n,r}$, for which the following algorithm is used: • The first step is the separation of the currently known and unknown parameters. Note that the time-varying terms in equation (1) have been identified by the former algorithm, and can be considered known from now on. Using a matrix form consistent with the measurement matrix $S$ gives: $S^T = \Psi G$, with $G$ the $(N \times N_t)$ matrix in which each row is a modal damped sinusoid $g_n(t) = e^{j\omega_n t} e^{-\delta_n t} = e^{j k_n c_0 t}$.
Furthermore, $\Psi$ is the $(M \times N)$ space-dependent matrix of modes including the expansion coefficients $A_n$ that appear in equation (1): $\Psi[m, n] = A_n \Phi_n(\vec{X}_m)$ (6), • with $\vec{X}_m$ the $M$ position vectors of the input measurements in $S$. • If $N_t > N$ (which is usually the case), the system of equation (6) is over-determined, with $(M \times N)$ unknowns and $(M \times N_t)$ equations. Hence, it is possible to estimate the $(M \times N)$ matrix $\Psi$ through the least-squares estimate: $\Psi \approx S^T G^H (G G^H)^{-1}$ (7). • Based on the expression in equation (5), $\Psi$ can be further expanded using plane-wave expansion. □ First, the list of component wavevectors needs to be defined. For each mode shape function, a set of $R$ wavevectors $\vec{k}_{n,r}$ is created whose norms and directions match a uniform sampling over a sphere of radius $|\omega_n / c_0|$. Spherical sampling (proposed in [24]) is chosen here because the room is non-rectangular and hence there is no preferred basis for the formation of mode shape functions. □ Each column $\psi_n$ of the matrix $\Psi$ can be treated individually, as the columns are associated with different modes. Calling $\rho_n$ the $(M \times R)$ matrix of plane-wave harmonics for mode $n$, in which $\rho_n[m, r] = e^{j \vec{k}_{n,r} \cdot \vec{X}_m}$, each column vector $\psi_n$ can be individually characterized as: $\psi_n = \rho_n C_n$ (8), • with $C_n$ the $(R \times 1)$ vector consisting of the $R$ expansion coefficients $C_{n,r}$ of mode $n$. First, assuming that $R < M$ and taking $\rho_n$ as the basis, $\psi_n$ can be projected onto this basis to derive the coefficient vector $C_n$ using the least-squares projection: $C_n \approx (\rho_n^H \rho_n)^{-1} \rho_n^H \psi_n$ (9). • As mentioned above, this derivation is only available when the number of sampled plane waves is lower than the number of microphones. As can be seen in [19, 25, 26], the convergence of the plane wave approximation is highly dependent on the number of plane waves available, especially in 3D.
Hence, in the case where the number of measurement points is fairly low, restricting $R < M$ could affect the reconstruction of the mode shape functions. One possibility is to allow $R > M$ and derive the coefficient vector using a least-norm optimization: $C_n \approx \rho_n^H (\rho_n \rho_n^H)^{-1} \psi_n$ (10). • Further studies are needed to verify the limitations of this solution as well as the optimal choice of $R$. In our case, for a low number of microphones, several trials have shown that choosing $R > M$ estimates the mode shapes better and increases the overall correlation. Regardless of the method used, in practice, the applicability of this step can always be cross-checked using a number of evaluation microphones. • Repeating the technique for each mode $n \le N$ returns the set of expansion coefficients $C_{n,r}$ required for the reconstruction. 4 Reconstruction results In this section, the results of the sound field reconstruction framework are analyzed using both numerical and experimental data. In the numerical simulations, an FEM model of a non-rectangular room is built for initial analysis. This first numerical study allows the assessment of multiple aspects of the reconstruction framework. It provides access to a very fine distribution of microphone placements, and the input data, such as wall impedance, can be changed straightforwardly. Furthermore, the FEM simulation not only provides the input but can also be used as the ground-truth reference for cross-checking the reconstruction results. In the second step, measurements are performed in the actual reverberation chamber at Ecole Polytechnique Fédérale de Lausanne (EPFL), with the same geometry as the one considered in the simulations, to confirm the validity and robustness of the framework. 4.1 Numerical simulation The FEM model consists of a non-rectangular room with a maximum height of 4.6 m, maximum width of 9.8 m and maximum length of 6.6 m that replicates the actual reverberation chamber at EPFL.
The damping of the walls is initially considered very low, with a uniform absorption coefficient of $\alpha = 0.01$, approaching that of the actual reverberation chamber. The source is chosen to be a monopole point source and is placed close to one corner of the room in order to excite all the modes of the room. The measurement points are spread randomly in the room (refer to Fig. 2). This placement of microphones, although possibly not the most ideal placement strategy for a given geometry, is guaranteed to capture enough information about the sound field and its modal properties, assuming there is no readily available knowledge about the room. Figure 2 Geometry of the FEM model. The black dots represent the measurement points that are spread randomly in the room. 4.2 Modal identification In this section, the modal identification is performed using two different methods, namely SOMP and RFP, and their results are compared with each other. However, instead of directly comparing the retrieved modal properties $\omega_n$ and $\delta_n$, the focus is put on two other useful properties in modal analysis: the eigenfrequency ($f_n$) and the modal decay time ($MT60_n$), which is defined as [27, 28]: $MT60_n = \frac{3 \ln(10)}{\delta_n}$ (11). These two properties reflect the modal properties of the room and are directly linked to $\omega_n$ and $\delta_n$. Using the same number of microphones, the modal decay times estimated by the SOMP and RFP methods for the first 12 modes of the room are compared with those computed from the baseline FEM analysis, considered as the ground truth (see Fig. 3). After a few initial tests, it is observed that both methods perform equally well in identifying the frequency $f_n$ (in Hz) of each mode of the room. As the values of $f_n$ obtained using the two methods do not differ much, the comparison is illustrated here only in terms of the modal decay times, to compare the performance of the methods with respect to modal damping estimation.
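Equation (11) is a direct conversion between the exponential damping factor and the 60 dB modal decay time; a minimal helper (the inverse relation is added here for convenience):

```python
# Modal decay time from equation (11): MT60_n = 3 ln(10) / delta_n.
# With the amplitude envelope exp(-delta*t), the level in dB is
# 20*log10(exp(-delta*t)) = -20*delta*t/ln(10), which reaches -60 dB
# at t = 3*ln(10)/delta.
import math

def mt60(delta_n):
    """Time (s) for a mode's level to decay by 60 dB."""
    return 3.0 * math.log(10.0) / delta_n

def delta_from_mt60(mt):
    """Inverse relation, handy when a target decay time is given."""
    return 3.0 * math.log(10.0) / mt
```

For example, a damping factor of about 2.3 s^-1 corresponds to a modal decay time of roughly 3 s.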
Using the numerical results from the FEM analysis as the reference, it can be seen in Figure 3 that both the RFP and SOMP methods are capable of identifying the room's eigenmodes, except that SOMP, on average, may underestimate the damping for the mode at 40.5 Hz, which will be discussed at a later stage. Figure 3 Modal decay times for the eigenfrequencies of the non-rectangular reverberant room as estimated by the SOMP and RFP methods, in comparison with the FEM analysis (reference). Generally, it can be observed that the RFP method performs slightly better than SOMP. However, there are significant differences between the two methods regarding robustness. Although both methods require a manual input regarding the total number of modes in a limited bandwidth, they process this information differently. For the global curve fitting using RFP, if the total number of modes within a frequency range is not accurately known, a considerable amount of trial and error is required to eventually come up with a coherent curve-fitting result. Furthermore, as can be seen at a later stage, without a meticulous consistency check, the interpolation results from RFP can end up with a higher amount of error. This vulnerability, in most cases, does not exist for SOMP. This is due to the fact that the modal parameters are found simultaneously in RFP, whereas in SOMP they are found iteratively in a residual manner: the room modes that have the highest contributions to the collected signals are estimated first, followed by those with less. This gives SOMP an advantage for the reconstruction procedure, as the results do not deviate much from reality even when underestimating or overestimating the number of modes within the frequency range of interest. The number that users enter only alters how many times the algorithm is repeated, but does not affect the result of each individual loop.
In this particular case, the damping underestimation by SOMP that sometimes occurs at 40.5 Hz also comes from the fact that this algorithm processes residues at each computing step. The modes that are found in the later iterations of the algorithm are prone to higher errors, and their correlation with the measurements is likely to be lower than for the ones found earlier in the algorithm. When there are two modes that are very close together, such as the particular cases at 40.5 Hz and 40.9 Hz respectively, depending on the set of input measurements, one of them may be found at the very far end of the algorithm compared to the other. Since one mode has been found earlier in the process, and its contribution to the residual has already been extracted, the error that occurs for the other one has minimal effect on the overall reconstruction result in the next stage. The same situation also occurs when users overestimate the number of modes. Then, around the final loops, the algorithm will certainly find some frequencies that do not correspond to any mode. As long as the overestimation is not too far from reality, this error in SOMP has negligible effects on the reconstruction results in the next stage, because the contributions of the few mismatched modes are generally very small compared to the correct ones. Although being less robust, the RFP curve-fitting method does have a clear advantage over the SOMP method regarding computational cost. SOMP not only performs an iterative mode-finding process that requires a regular refreshing of the residual matrix, but also does so using multiple costly matrix operations. The RFP method developed in [29], on the other hand, does not perform an iterative process and has taken into account several computational simplifications.
For instance, on a conventional workstation with 32 GB of RAM and a four-core 3.4 GHz CPU, in order to produce the results in Figure 3 using 25 microphones, the SOMP method would usually take 4–7 min to finish, while the RFP method would finish in 5–10 s. This significant difference further increases if the number of input measurements or the number of modes increases. Overall, it can be seen that SOMP is a robust method that works best in cases where not much information about the room is available or where a blind estimation is required. RFP, on the other hand, requires more a priori information about the modes in the room to produce a coherent result. However, RFP generally takes much less processing time than SOMP and hence can potentially be beneficial in certain applications such as online estimation or real-time sound field control. In terms of accuracy, it should be noted that under sufficient conditions, both methods are capable of producing a good estimation of the modal information of the room. 4.3 Local interpolation From the outcomes of the algorithm, it is now possible to process and interpolate the responses at any point inside the geometry. The RFRs correspond to the transmissions between the source volume flow rate (in m^3/s) and the sound pressure (in Pa) acquired at the measurement points. One example can be seen in Figure 4 for an arbitrary point far from the walls but also not too close to the center of the room. The interpolation was processed using both the SOMP and RFP methods with the same set of 25 microphone positions in the room. It can be seen that both methods produce an accurate interpolation of the response at this particular point. Figure 4 Reconstruction of the RFR at a point inside the room using SOMP and RFP in comparison with the FEM ground truth reference. To illustrate the difference between RFP and SOMP, one example of interpolation is plotted in Figure 5, where the total number of modes was underestimated.
As an iterative process, SOMP still gives a good estimation of all the modes except the ones it does not find, whereas RFP adds some modes that do not belong to the real system and hence leads to higher errors. However, running a few different trials for RFP can solve this problem, and this method should therefore not be overlooked: its computation time is short, which can be advantageous in many situations. Figure 5 Comparison between the RFRs interpolated with RFP (top) and with SOMP (bottom), when underestimating the number of modes, at a given virtual microphone position. The black curve represents the reference given by FEM simulation. It must be noted, however, that the high level of accuracy seen in Figure 4 for both methods is not yet guaranteed for every interpolated point in the room, and the error might be higher depending on the position of the point as well as on the precision of the modal identification results. This, once again, highlights the need for a spatial representation of the sound field to confirm the global validity of the algorithm.

4.4 Sound field reconstruction

In this section, the interpolation process is extended to a large number of points inside the room to acquire a series of processed time responses of the room. The RFRs of the room can then be produced through the Fourier transform of these time responses. The resulting RFRs allow the reconstruction of the spatial response of the room at any given frequency of interest. It is known that, for a room with non-ideally rigid boundaries ($α>0$), the Helmholtz equation is less valid close to the room walls [1]. Hence, the initial sound field reconstruction is performed for a shoe-box volume inside the room with each face being at least 1 m away from the walls of the room. It is then possible to compare these results with frequency-domain simulations, obtained using FEM software, considered as the ground truth.
Figure 6 shows three examples of the sound field reconstruction using 25 microphones, compared to this reference, at three different frequencies and at very high spatial resolution. It can be observed that the reconstruction of the sound field yields qualitatively highly accurate results. The mode shapes are also clearly observed in all three examples. This proves that the spherical sampling technique for wave vectors is a powerful tool for rooms with complex geometries. Furthermore, this high level of accuracy is maintained in every direction of the 3D depiction since the input measurement points are spread randomly in the room. A few initial trials using a regular grid of microphones did not achieve such global precision in the results. This, once again, emphasizes the advantage of the much-recommended randomness that is used in common sparse and low-rank approximation frameworks. It is worth noticing that although there can be small differences when comparing the local sound pressure point by point, the general shapes as well as the separation between areas of high and low sound pressure are nevertheless precisely depicted. Furthermore, the reconstruction of the sound pressure field is accurate not just at the eigenfrequencies (45.25 Hz and 55.08 Hz) but also for frequencies in between two consecutive modes (e.g., at 38 Hz). Figure 6 Sound field reconstruction (bottom) at different frequencies for a rectangular area inside the room in comparison with the reference sound fields from numerical simulation (top). The normalized Pearson correlation coefficient for the amplitude of the frequency responses, calculated as

$$\mathrm{COR}\% = 100\,\frac{\left|\left\langle |S_f|,\ |\tilde{S}_f| \right\rangle\right|}{\left\| |S_f| \right\| \, \left\| |\tilde{S}_f| \right\|}, \qquad (12)$$

can be used to evaluate the overall accuracy of the reconstructed frequency response $\tilde{S}_f$ with respect to the reference response $S_f$.
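As a concrete illustration, the correlation of equation (12) can be computed on sampled frequency-response magnitudes in a few lines. This is a minimal sketch, not code from the paper; the function name and array conventions are our own choices.

```python
import numpy as np

def cor_percent(S_ref, S_rec):
    """Normalized correlation (in %) between the magnitudes of a reference
    and a reconstructed frequency response, following Eq. (12)."""
    a = np.abs(np.asarray(S_ref, dtype=complex))
    b = np.abs(np.asarray(S_rec, dtype=complex))
    # inner product of the magnitude vectors, normalized by their norms
    return 100.0 * np.abs(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
```

Because both the numerator and the denominator scale linearly with each signal, the measure is insensitive to a global amplitude mismatch between reconstruction and reference; a perfect (or perfectly scaled) reconstruction gives 100%.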
Computing this coefficient over the regular grid of 11 × 11 × 11 points that samples the inner rectangular volume yields an average correlation of 99.3% with a standard deviation of 0.8%. It is worth noting that $\mathrm{COR}\%$ is a good indication of the overall fit of the reconstructed signals over a bandwidth, but does not provide accurate clues for interpreting the precision frequency-wise. A global error evaluation will be introduced later in this section to address this subject. So far, the analysis has shown good results for the sound field reconstruction of a lightly damped room with $α=0.01$. In order to further assess its robustness in more conventional situations with acoustic treatments, the algorithm is tested with various room absorption conditions. To this end, a uniform absorption coefficient α is considered for the room walls, set first at 0.1 and then increased to 0.3 to better represent a damped room. Using the same number of microphones, the reconstruction of the sound field for these cases is performed as in the preceding case. Figure 7 shows the comparison of the reconstruction at the same room mode (but slightly different eigenfrequencies due to the resulting change of modal damping) between different values of wall absorption. The reconstruction results for these cases still present a good agreement with the reference. Not only does the framework correctly capture the reduction in terms of energy in the room, but it also succeeds in rendering the smoothing of the spatial distribution as the room becomes more damped. Figure 8 shows a global comparison of the dimensionless normalized errors defined as

$$\bar{e}(f) = \frac{\overline{\left|S(f)-\tilde{S}(f)\right|}}{\int S(f)\,\mathrm{d}f / \Delta f}, \qquad (13)$$

which quantify the error of the reconstruction result at each frequency, normalized by $\int S(f)\,\mathrm{d}f / \Delta f$ to discard the dependence on the difference in acoustic energy in the room between different room absorption conditions.
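Under the assumption that $S(f)$ in the denominator of equation (13) denotes the reference response magnitude averaged over the evaluation grid (the text does not spell this out), the error metric can be sketched as follows; the function name and array layout are illustrative choices, not from the paper.

```python
import numpy as np

def normalized_error(S_ref, S_rec):
    """Normalized error per frequency, in the spirit of Eq. (13).

    S_ref, S_rec: arrays of shape (n_points, n_freqs) holding reference and
    reconstructed response magnitudes on a spatial grid. The frequency
    average of the reference, integral(S df)/Delta_f, is approximated by a
    plain mean over uniformly spaced frequency bins."""
    S_ref = np.abs(np.asarray(S_ref, dtype=float))
    S_rec = np.abs(np.asarray(S_rec, dtype=float))
    mean_err = np.abs(S_ref - S_rec).mean(axis=0)  # spatial average per frequency
    norm = S_ref.mean()                            # ~ integral(S df) / Delta_f
    return mean_err / norm
```

A perfect reconstruction gives zero at every frequency, and scaling both signals by the same factor leaves the result unchanged, which is exactly the property used in the paper to compare rooms with different absorption.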
This quantity is then relevant for comparing the performance of the reconstruction as a function of the room damping, because it accounts for the acoustic energy not absorbed by the room, which, for the same reference source, decreases as the room gets more damped. The non-normalized error $|S(f)-\tilde{S}(f)|$ is computed for each recovery point in an 11 × 11 × 11 point grid that spatially samples the aforementioned shoe-box test volume. Averaging the error within the spatial grid then gives the average error at each frequency, $\overline{|S(f)-\tilde{S}(f)|}$. Figure 8 shows a decrease in terms of accuracy as the damping of the walls increases. This is explainable as the orthogonality assumption of the mode shape functions in equation (3) becomes weaker with higher damping in the room. Furthermore, the modal identification on the RFRs is generally more challenging in a room with high damping than in a lightly damped room. Regarding the correlation, the average $\mathrm{COR}\%$ is still high at 98.3% with a standard deviation of 1.8% ($α=0.1$) and at 98.1% with a standard deviation of 2.1% ($α=0.3$). Figure 8 also shows the error on the reconstructed sound field for a plane very close to a wall (maximum distance from the wall of 0.1 m). It can be observed that the sound field reconstruction near the wall induces higher errors, as can be anticipated. This is due to the aforementioned non-orthogonality of the mode shape functions, as well as to the higher errors induced by extrapolation instead of interpolation, since the concerned points lie mostly outside of the microphone domain. The results from this evaluation are especially meaningful for modal equalization. They show that this particular reconstruction framework can be used to effectively assess the sound field within a room before and after a given equalization method has been applied, which paves the way for a new tool for assessing the in situ space-wise performance of low-frequency room-mode treatments [7].
Figure 7 Sound field reconstruction (bottom) compared to the reference (top) for different cases of wall damping (α = 0.01, 0.10, and 0.30 from left to right) at the eigenfrequency around 35.3 Hz. Figure 8 Comparison of the normalized error between different cases of room absorption for the reconstruction of the rectangular volume and a plane near a wall of the room. Figure 8 also shows that the reconstruction error generally increases at higher frequencies. As the algorithm does not particularly favor lower-order modes over higher ones, this indicates that something in the parameter estimation step affects the accuracy level. The first likely cause is the complexity of the mode shape functions. For a room with complex geometry, the complexity of the mode shape functions increases for higher-order room modes, which generally require a higher number of plane waves to converge. Even when using a least-norm method to increase the number of plane waves, the compromise between regularization and instability of the mode shape approximation [30, 31] means that the accuracy still relies heavily on the number of measurements available. Furthermore, as the frequency gets higher, the modal density increases, which means that the average distance (in Hz) between two consecutive modes becomes smaller, making the modal estimation more difficult. So far, the number of measurement points (microphones) has not been discussed. Figure 9 compares the Pearson correlation criteria (space-wise average and standard deviation) processed for different numbers of evaluation microphones and for different absorption coefficients of the walls. For each case, the algorithm is repeated multiple times with the same number of measurement points, but each time the locations of the input measurements are chosen randomly from a set of 600 random points.
This procedure is chosen so as to eliminate the bias that could emanate from the placement of the microphones, especially in the cases where the number of microphones is considerably low. For each case, the Pearson correlation is calculated for the 11 × 11 × 11 grid that samples the shoebox-shaped reconstruction region. As can be seen, the correlation value gets higher as the number of microphones increases. The standard deviation value mentioned in this figure specifies the standard deviation of the correlation value between different interpolation points in the rectangular reconstruction region. A high standard deviation indicates a highly uneven reconstruction accuracy, in which the correlation of the reconstruction varies significantly depending on the location of the interpolation. Conversely, a low standard deviation indicates that the spatial reconstruction result is stable and can be trusted. Figure 9 shows that the correlation values improve as the number of measurement points increases. Furthermore, the standard deviation also decreases significantly when more measurement points are used for the framework. This shows that while the reconstruction gets more accurate, the estimation accuracy also becomes uniformly higher across all interpolation points in the volume. One of the reasons is that the more measurement points are available, the better the chance of correctly estimating the room mode information. It can also be observed that, for the analysis within a fixed bandwidth, the performance typically becomes stable and reliable when a certain number of measurement points is reached. In the case of a lightly damped room, for instance, a grid of size 1331 within a volume of 40 m³ can be reconstructed with a high accuracy of 98.5% using just 30 input measurement points, which is an effective result for a practical number of microphones.
Furthermore, even with only 20 microphones, the result is still considered stable with a trusted average correlation around 95%. Figure 9 Analysis of the Pearson correlation of the reconstructed sound field with respect to the number of measurement points.

4.5 Experimental results

The reconstruction framework is now applied to actual measurements inside the reverberation chamber at EPFL, which has the same geometry as the FEM model (Fig. 10). The source, a custom-made subwoofer in a closed wooden cabinet, is located at a corner of the room to excite all room modes at low frequencies. The microphones (PCB 378B02 1/2″ microphones) are spread randomly in the room to replicate the previous numerical analysis. The reference velocity of the source is measured with a laser velocimeter (Polytec OFV 500) placed in front of the loudspeaker diaphragm. Figure 10 Measurement setup in a real reverberation chamber in the laboratory. Two main methods can be used to evaluate the reconstruction results. One method is to directly compare the reconstructed sound field to the simulated one in FEM. This method can certainly verify the faithfulness of the spatial reproduction results, but it is not recommended for a point-by-point comparison, as it is difficult to match the FEM model with the real room: the model relies on the absorbing properties of the room, which are not accurately known. Moreover, the reference used for processing the RFRs can differ between the simulation (volume flow) and the actual case (velocity), and it is difficult to accurately match the source excitation as well as its position. Thus, besides this method, a small part of the measurement points can be reserved to serve as an evaluation set. Combining these two evaluation methods provides a more concrete analysis of the reconstruction framework with the experimental data.
In this experiment, the signals from 25 microphone positions are used as the inputs of the algorithm to reproduce the sound field up to 75 Hz (within which about 20 modes can be observed). The microphones are located randomly in the room, but the positions were chosen so that they are practically evenly distributed space-wise to cover the reconstructed rectangular volume. Figure 11 shows the spatial comparison between the reconstructed sound field and the reference obtained from numerical simulation for the same eigenmodes. As the numerical model cannot be perfectly matched with the real room, there is a small difference in the exact frequency of the eigenmodes. Comparing these results, it can be seen that, similarly to the numerical results in the previous section, the sound fields reconstructed from real measurements yield highly accurate spatial recovery. The mode shapes are visible and the locations of nodal lines are correctly depicted with high spatial resolution. Once again, small mismatches at a few points are to be expected, but the overall spatial representation remains faithful. Figure 11 Sound field reconstruction from real measurements (bottom) at two eigenfrequencies (left: near 35 Hz, right: near 51 Hz) as compared to the same eigenmodes from simulations (top). Using the evaluation set of 30 other microphone signals within the domain of interest, the results also agree with the previous simulation validation. Using SOMP for modal estimation, the average correlation stays at 97.8% with a standard deviation of 1.89%. As expected, this result is slightly less accurate than the average correlation obtained with simulations but is still highly reliable. As mentioned earlier, SOMP is particularly robust and can perform well even when a priori information is missing.
On the other hand, under the same circumstances, a semi-supervised RFP curve fitting gives a slightly lower average correlation of 96.9% with a higher standard deviation of 2.3%. This result also agrees with the analysis in Section 4.2 regarding the different nature of the two methods. Three examples of the RFRs reconstructed by both RFP and SOMP are plotted in Figure 12 and compared to the actual measurements from the evaluation set. Generally, without detailed supervision and calibration, the RFP method returns a slightly less accurate result than SOMP, as can be observed from the figure. However, its processing speed is much faster and hence could allow for re-calibration depending on the situation. Figure 12 RFR reconstruction from 25 measurements for three different evaluation points in the room using RFP and SOMP (correlation ranging between 97% and 99%). It should be noted that in practice, when the room geometry and wall absorption coefficient are not known, an evaluation set like this, along with comparison metrics such as the correlation values, is among the few available indications of whether the reconstruction results are reliable. Therefore, it is practically advisable to always reserve an evaluation set inside the domain of interest in order to determine the adequate number of microphones required for a given objective. Regarding the experimental set-up, as the number of microphones is practically small, a blind random placement of microphones might leave out crucial areas of the room. Hence, out of all possible randomizations, it is advised to choose a placement that does not leave out crucial areas of the region of interest. Moreover, placement techniques like the one suggested in [30] might also be used to improve the recovery results. Lastly, it should be noted that the measurement was conducted in a reverberation chamber without removing the reflective diffusing panels (Fig. 10).
This shows that the framework is robust enough to perform well even in a practical non-empty room.

5 Conclusion

In this paper, we have investigated a robust sound field reconstruction framework for a room at low frequencies. Through modal decomposition and plane wave approximation of the mode shape functions, the framework allows recovering the entire sound pressure distribution of the room at any frequency within the concerned bandwidth, from a limited set of measurements. Within the framework, the performance of two different modal estimation methods in the time and frequency domains, namely SOMP and RFP, is compared. Both methods are shown to allow retrieving the modal parameters of the room. Between the two approaches, SOMP has proven to be more robust, whereas RFP has significant advantages in terms of computational cost. The space-wise analysis of the reconstruction results confirms the practical applicability of the framework in the field of modal equalization. The reconstruction is performed inside a non-rectangular reverberation chamber using 20–30 microphones, which are proven sufficient to address the bandwidth of interest (containing around 20 modes). The results first show that the reconstruction is highly accurate for a lightly damped room. The framework is further tested by increasing the global absorption of the room walls. For these cases, the reconstruction shows a slight reduction in accuracy, especially for positions close to the walls. This slight drop in accuracy is anticipated, as it is generally more challenging to retrieve the modal parameters of a highly damped room. Nevertheless, the overall reconstruction results retain a sufficiently high level of reliability. This means that the framework may be used to assess the space-wise performance of existing passive and active modal equalization methods.
More importantly, the results of the method can be used as input for on-the-fly reconfiguration of active low-frequency absorbers, such as the electroacoustic absorbers developed in [7]. Such in situ reconfigurability of active devices presents interesting potential for optimizing room mode equalization in real rooms, and should be further investigated. This paper tackles the case where a single source is fixed inside the room. Further work should focus on retrieving the entire sound field for multiple source positions in the room. The microphones in this research are spread randomly in the room. However, considering a low number of microphones, the accuracy could benefit from a predefined microphone placement strategy, which should be further studied. This project was supported by the Swiss National Science Foundation (SNSF) under grant agreement 200021_169360.

Conflict of interest

Author declared no conflict of interests.

Rational fraction polynomials and global curve-fitting

Assuming that a system is linear and of second dynamical order, its frequency response measurements can be represented as a ratio of two polynomials in the Laplace domain ($s=j\omega$) using the RFP form:

$$H(\omega) = \frac{\sum_{i=0}^{m} a_i s^i}{\sum_{k=0}^{n} b_k s^k}, \quad a_i, b_k \in \mathbb{R}. \qquad (14)$$

Furthermore, if the system is resonant, which means that the response of the system is governed by its resonances, the frequency response function can be reformulated in partial fraction form to highlight the poles of the system:

$$H(\omega) = \sum_{k=0}^{n/2} \left[ \frac{r_k}{s-p_k} + \frac{r_k^{*}}{s-p_k^{*}} \right], \qquad (15)$$

where $p_k = j\omega_k - \delta_k$ is the k-th pole of the system and $r_k$ is the corresponding residue. The curve fitting procedure focuses on minimizing the squared error J between the analytical and measured responses, computed from the error $e_i$ at each and every frequency bin:

$$J = \sum_{i=1}^{L} e_i^{*} e_i, \qquad (16)$$

in which L is the length of the frequency vector.
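To make the pole-residue form concrete, the sketch below evaluates equation (15) on a frequency axis given a set of poles and residues. This is an illustrative snippet, not code from the paper; the function name and the convention of supplying only one pole per conjugate pair are our own choices.

```python
import numpy as np

def rfr_from_poles(freqs, poles, residues):
    """Evaluate the partial-fraction model of Eq. (15) on a frequency axis.

    freqs: frequencies in Hz. poles/residues: one complex pole
    p_k = j*omega_k - delta_k and residue r_k per mode; the conjugate term
    of each pair is added explicitly, as written in Eq. (15)."""
    s = 2j * np.pi * np.asarray(freqs, dtype=float)  # s = j*omega
    H = np.zeros(s.shape, dtype=complex)
    for p, r in zip(poles, residues):
        H += r / (s - p) + np.conj(r) / (s - np.conj(p))
    return H
```

For a single lightly damped mode at 10 Hz (pole $-1 + 2\pi \cdot 10\,j$), the magnitude of the resulting response peaks near 10 Hz, as expected of a resonant term.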
Now, assuming that multiple different frequency response functions of the system were measured, they will contain the same inherent poles, and hence the denominator of each and every measurement should contain the same characteristic polynomial. This is a valid claim since, for a resonant system such as the one in room acoustics, the modal frequencies and modal dampings are the same regardless of where they are measured within the room. This simplifies the curve fitting procedure and also allows a global curve fitting [22] of the entire set of measurements to recover the parameters in equations (14) or (15). To further avoid ill-conditioned problems, the frequency response function is reformulated using orthogonal polynomials only in the positive domain of the frequency axis using the Forsythe method [32]. This greatly simplifies the computation of the problem, and although the final result is expressed in the orthogonal function expansion form, it is always possible to trace back to the form in equation (15) for information regarding the poles and residues. This method, detailed in [29], is non-iterative and sufficiently fast. Furthermore, by reasonably choosing m and n, it can compensate the effects created by out-of-band modes and hence reduce the fitting error. Like any other curve fitting method in the frequency domain, it can suffer from over-fitting as well as from a lack of frequency resolution. Cite this article as: Pham Vu T & Lissek H. 2020. Low frequency sound field reconstruction in a non-rectangular room using a small number of microphones. Acta Acustica, 4, 5.

All Figures

Figure 1 Examples of room modes in a non-rectangular room.
Figure 2 Geometry of the FEM model. The black dots represent the measurement points that are spread randomly in the room.
Figure 3 Modal decay times for the eigenfrequencies of the non-rectangular reverberant room as estimated by the SOMP and RFP methods, in comparison with the FEM analysis (reference).
How much HP can a 253 make?

The engine had a displacement of 253 cu. in. (4.146 litres), a compression ratio of 9:1, a maximum horsepower of 185, and a maximum torque of 262 lb-ft.

How many kW does a 253 have?

The 253 made its public debut in the Holden Hurricane concept car at the 1969 Melbourne International Motor Show, albeit in a highly modified form featuring increased 10.1:1 compression, a big cam and solid lifters, and producing over 250 hp (190 kW).

Is a 253 motor a V8?

The 253 was the economy V8 and was intended to compete with the 250-cubic-inch six-cylinder engine in the Falcon. Apart from its smaller capacity, achieved by a smaller bore, it had a two-barrel carburettor instead of the four-barrel carburettor that was fitted to the 308, the performance version of the V8.

How much horsepower does a 308 have?

The Holden 308: Later, a ‘308’ version was announced. This was essentially an enlarged version of the 253 and gave 240 bhp (180 kW) at 4000 rpm and maximum torque of 315 lb-ft (420 Nm) at 3000 rpm. It had a 9:1 compression ratio and a 4-barrel Quadrajet carbie.

Can you put VN heads on a 253?

Don’t forget you need a VN-head-style cam, as your early-head cam will not work with the VN heads.

Do 308 heads fit a 253?

The 253 heads are the same casting as the 308; therefore, they have the same original chamber volume. With the block bored and honed to size, the ACL Duralites were fitted to the original rods and a trial assembly performed. It’s now that you find out if everything fits!

Are 253 and 308 heads the same?

The 253 heads are the same casting as the 308; therefore, they have the same original chamber volume.

What oil does a Holden 253 take?

CASTROL GTX 20W-50 is a good choice.

How much horsepower does a 304 V8 have?

210 hp. The 304 has a displacement of 304 cu in (5.0 L), which produced 210 hp (157 kW; 213 PS) (gross rating) in 1970–71; it was built starting in 1970.

Will 308 heads fit a 253?

Are 253 and 308 pistons the same?
253 and 308s have the same stroke; it is the bore size that is different! 308s have a 4-inch bore; 253s have a 3.625-inch bore. You can’t bore a 253 out to a 308.

Will a 308 crank fit a 253?

How much oil does a Holden 253 take?

3-speed, use CASTROL MANUAL VMX-M 75W-85, 2.1 litres; 4-speed, use CASTROL UNIVERSAL 80W-90, 1.6 litres.

How much oil does a VE V8 take?

Fill with 8.3 litres of your favourite oil, then replace the cap. Start the engine and allow it to idle for 5 minutes, then take it for a steady cruise until normal operating temperature is reached. Return home and check for leaks.

How much HP does an AMC 401 have?

The practical limit of the plebeian AMC 401 is about 500 hp at 7,500 rpm. The main caps will begin to walk above that, so limit these engines to healthy street mills if you want to keep the price down and use stock parts.

How much HP does an AMC 360 have?

AMC 360 Engine Build – 370-Inch 480HP AMC.

Can you put a 308 crank on a 253 block?

How many litres does an LS1 take?

Holden, and the latest workshop I am using (which specialises in a lot of LS1s), have always put 6 litres in mine from new. If left to settle overnight it shows a slight overfill; if checked after stopping for a minute or so it shows pretty much on the full line. Sump capacity changed over the years.

What oil is best for a VE Commodore?

CASTROL MAGNATEC FUEL SAVER DX 5W-30.

What oil should I use in my VE SS?

CASTROL MAGNATEC FUEL SAVER DX 5W-30. They cling to the engine like a magnet, providing an extra layer of protection. Castrol MAGNATEC’s formulation protects from the moment you start, dramatically reducing* engine wear for protection you can see, hear and feel.
HARMEAN Function

• number1 - First value or reference.
• number2 - [optional] Second value or reference.

How to use

The Excel HARMEAN function returns the harmonic mean for a set of numeric values. The harmonic mean is a kind of numeric average, calculated by dividing the number of values in a list by the sum of the reciprocals of those values. In other words, the harmonic mean is the reciprocal of the average of the reciprocals. Because the harmonic mean tends toward the smallest values in a set of data, it limits the impact of large outliers, but exaggerates the impact of small outliers. The harmonic mean is always less than or equal to the geometric mean (GEOMEAN), which is always less than or equal to the arithmetic mean (AVERAGE). The HARMEAN function takes multiple arguments in the form number1, number2, number3, etc. up to 255 total. Arguments can be a hardcoded constant, a cell reference, or a range. Often, a single range or array is used instead of multiple arguments, as seen in the example worksheet. The average of 1, 2, and 6 is 3. The harmonic mean of 1, 2, and 6 is 1.8:

=AVERAGE(1,2,6) // returns 3
=HARMEAN(1,2,6) // returns 1.8

In the example shown, the formulas in E5 and E6 are, respectively:

Note that the harmonic mean reduces the impact of the larger outliers in the data set.

• Arguments can be numbers, names, arrays, or references that contain numbers.
• Empty cells, and cells that contain text or logical values, are ignored.
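Outside Excel, the same calculation and the ordering HM ≤ GM ≤ AM can be checked with Python's standard library; this is an illustrative sketch, not part of the Excel documentation.

```python
from statistics import harmonic_mean, geometric_mean, mean

values = [1, 2, 6]

# harmonic mean: count divided by the sum of reciprocals
# 3 / (1/1 + 1/2 + 1/6) = 3 / (5/3) = 1.8
h = harmonic_mean(values)
print(h)             # 1.8

# the three means always satisfy harmonic <= geometric <= arithmetic
print(geometric_mean(values))  # ~2.289 (cube root of 12)
print(mean(values))            # 3
```

Note how the harmonic mean (1.8) sits well below the arithmetic mean (3), pulled toward the smallest value in the list, which matches the outlier behavior described above.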
Iterative Graph Processing

This documentation is for an older version of Apache Flink. We recommend visiting the latest stable version.

Gelly exploits Flink’s efficient iteration operators to support large-scale iterative graph processing. Currently, we provide implementations of the vertex-centric, scatter-gather, and gather-sum-apply models. In the following sections, we describe these abstractions and show how you can use them in Gelly.

Vertex-Centric Iterations

The vertex-centric model, also known as “think like a vertex” or “Pregel”, expresses computation from the perspective of a vertex in the graph. The computation proceeds in synchronized iteration steps, called supersteps. In each superstep, each vertex executes one user-defined function. Vertices communicate with other vertices through messages. A vertex can send a message to any other vertex in the graph, as long as it knows its unique ID. The computational model is shown in the figure below. The dotted boxes correspond to parallelization units. In each superstep, all active vertices execute the same user-defined computation in parallel. Supersteps are executed synchronously, so that messages sent during one superstep are guaranteed to be delivered in the beginning of the next superstep. To use vertex-centric iterations in Gelly, the user only needs to define the vertex compute function, ComputeFunction. This function and the maximum number of iterations to run are given as parameters to Gelly’s runVertexCentricIteration. This method will execute the vertex-centric iteration on the input Graph and return a new Graph, with updated vertex values. An optional message combiner, MessageCombiner, can be defined to reduce communication costs. Let us consider computing Single-Source-Shortest-Paths with vertex-centric iterations. Initially, each vertex has a value of infinite distance, except for the source vertex, which has a value of zero.
During the first superstep, the source propagates distances to its neighbors. During the following supersteps, each vertex checks its received messages and chooses the minimum distance among them. If this distance is smaller than its current value, it updates its state and produces messages for its neighbors. If a vertex does not change its value during a superstep, then it does not produce any messages for its neighbors for the next superstep. The algorithm converges when there are no value updates or the maximum number of supersteps has been reached. In this algorithm, a message combiner can be used to reduce the number of messages sent to a target vertex.

// read the input graph
Graph<Long, Double, Double> graph = ...

// define the maximum number of iterations
int maxIterations = 10;

// Execute the vertex-centric iteration
Graph<Long, Double, Double> result = graph.runVertexCentricIteration(
        new SSSPComputeFunction(), new SSSPCombiner(), maxIterations);

// Extract the vertices as the result
DataSet<Vertex<Long, Double>> singleSourceShortestPaths = result.getVertices();

// - - - UDFs - - - //

public static final class SSSPComputeFunction extends ComputeFunction<Long, Double, Double, Double> {

    public void compute(Vertex<Long, Double> vertex, MessageIterator<Double> messages) {

        double minDistance = (vertex.getId().equals(srcId)) ? 0d : Double.POSITIVE_INFINITY;

        for (Double msg : messages) {
            minDistance = Math.min(minDistance, msg);
        }

        if (minDistance < vertex.getValue()) {
            setNewVertexValue(minDistance);
            for (Edge<Long, Double> e : getEdges()) {
                sendMessageTo(e.getTarget(), minDistance + e.getValue());
            }
        }
    }
}

// message combiner
public static final class SSSPCombiner extends MessageCombiner<Long, Double> {

    public void combineMessages(MessageIterator<Double> messages) {

        double minMessage = Double.POSITIVE_INFINITY;
        for (Double msg : messages) {
            minMessage = Math.min(minMessage, msg);
        }
        sendCombinedMessage(minMessage);
    }
}

// read the input graph
val graph: Graph[Long, Double, Double] = ...

// define the maximum number of iterations
val maxIterations = 10

// Execute the vertex-centric iteration
val result = graph.runVertexCentricIteration(new SSSPComputeFunction, new SSSPCombiner, maxIterations)

// Extract the vertices as the result
val singleSourceShortestPaths = result.getVertices

// - - - UDFs - - - //

final class SSSPComputeFunction extends ComputeFunction[Long, Double, Double, Double] {

  override def compute(vertex: Vertex[Long, Double], messages: MessageIterator[Double]) = {

    var minDistance = if (vertex.getId.equals(srcId)) 0 else Double.MaxValue

    while (messages.hasNext) {
      val msg = messages.next
      if (msg < minDistance) {
        minDistance = msg
      }
    }

    if (vertex.getValue > minDistance) {
      setNewVertexValue(minDistance)
      for (edge: Edge[Long, Double] <- getEdges) {
        sendMessageTo(edge.getTarget, minDistance + edge.getValue)
      }
    }
  }
}

// message combiner
final class SSSPCombiner extends MessageCombiner[Long, Double] {

  override def combineMessages(messages: MessageIterator[Double]) {

    var minMessage = Double.MaxValue
    while (messages.hasNext) {
      val msg = messages.next
      if (msg < minMessage) {
        minMessage = msg
      }
    }
    sendCombinedMessage(minMessage)
  }
}

Configuring a Vertex-Centric Iteration

A vertex-centric iteration can be configured using a VertexCentricConfiguration object. Currently, the following parameters can be specified:

• Name: The name for the vertex-centric iteration. The name is displayed in logs and messages and can be specified using the setName() method.
• Parallelism: The parallelism for the iteration. It can be set using the setParallelism() method.
• Solution set in unmanaged memory: Defines whether the solution set is kept in managed memory (Flink’s internal way of keeping objects in serialized form) or as a simple object map. By default, the solution set runs in managed memory. This property can be set using the setSolutionSetUnmanagedMemory() method.
• Aggregators: Iteration aggregators can be registered using the registerAggregator() method.
An iteration aggregator combines all aggregates globally once per superstep and makes them available in the next superstep. Registered aggregators can be accessed inside the user-defined ComputeFunction.
• Broadcast Variables: DataSets can be added as Broadcast Variables to the ComputeFunction, using the addBroadcastSet() method.

Graph<Long, Long, Double> graph = ...

// configure the iteration
VertexCentricConfiguration parameters = new VertexCentricConfiguration();

// set the iteration name
parameters.setName("Gelly Iteration");

// set the parallelism
parameters.setParallelism(16);

// register an aggregator
parameters.registerAggregator("sumAggregator", new LongSumAggregator());

// run the vertex-centric iteration, also passing the configuration parameters
Graph<Long, Long, Double> result = graph.runVertexCentricIteration(
        new Compute(), null, maxIterations, parameters);

// user-defined function
public static final class Compute extends ComputeFunction {

    LongSumAggregator aggregator = new LongSumAggregator();

    public void preSuperstep() {
        // retrieve the Aggregator
        aggregator = getIterationAggregator("sumAggregator");
    }

    public void compute(Vertex<Long, Long> vertex, MessageIterator inMessages) {

        // do some computation
        Long partialValue = ...

        // aggregate the partial value
        aggregator.aggregate(partialValue);

        // update the vertex value
        setNewVertexValue(...);
    }
}

val graph: Graph[Long, Long, Double] = ...

val parameters = new VertexCentricConfiguration

// set the iteration name
parameters.setName("Gelly Iteration")

// set the parallelism
parameters.setParallelism(16)

// register an aggregator
parameters.registerAggregator("sumAggregator", new LongSumAggregator)

// run the vertex-centric iteration, also passing the configuration parameters
val result = graph.runVertexCentricIteration(new Compute, new Combiner, maxIterations, parameters)

// user-defined function
final class Compute extends ComputeFunction {

  var aggregator = new LongSumAggregator

  override def preSuperstep {
    // retrieve the Aggregator
    aggregator = getIterationAggregator("sumAggregator")
  }

  override def compute(vertex: Vertex[Long, Long], inMessages: MessageIterator[Long]) = {

    // do some computation
    val partialValue = ...

    // aggregate the partial value
    aggregator.aggregate(partialValue)

    // update the vertex value
    setNewVertexValue(...)
  }
}

Scatter-Gather Iterations

The scatter-gather model, also known as the “signal/collect” model, expresses computation from the perspective of a vertex in the graph. The computation proceeds in synchronized iteration steps, called supersteps. In each superstep, a vertex produces messages for other vertices and updates its value based on the messages it receives. To use scatter-gather iterations in Gelly, the user only needs to define how a vertex behaves in each superstep:

• Scatter: produces the messages that a vertex will send to other vertices.
• Gather: updates the vertex value using received messages.

Gelly provides methods for scatter-gather iterations. The user only needs to implement two functions, corresponding to the scatter and gather phases. The first function is a ScatterFunction, which allows a vertex to send out messages to other vertices. Messages are received during the same superstep as they are sent. The second function is a GatherFunction, which defines how a vertex will update its value based on the received messages. These functions and the maximum number of iterations to run are given as parameters to Gelly’s runScatterGatherIteration.
This method will execute the scatter-gather iteration on the input Graph and return a new Graph, with updated vertex values.

A scatter-gather iteration can be extended with information such as the total number of vertices, the in-degree and out-degree. Additionally, the neighborhood type (in/out/all) over which to run the scatter-gather iteration can be specified. By default, the updates from the in-neighbors are used to modify the current vertex’s state and messages are sent to out-neighbors.

Let us consider computing Single-Source-Shortest-Paths with scatter-gather iterations on the following graph and let vertex 1 be the source. In each superstep, each vertex sends a candidate distance message to all its neighbors. The message value is the sum of the current value of the vertex and the edge weight connecting this vertex with its neighbor. Upon receiving candidate distance messages, each vertex calculates the minimum distance and, if a shorter path has been discovered, it updates its value. If a vertex does not change its value during a superstep, then it does not produce messages for its neighbors for the next superstep. The algorithm converges when there are no value updates.

// read the input graph
Graph<Long, Double, Double> graph = ...
// define the maximum number of iterations
int maxIterations = 10;

// Execute the scatter-gather iteration
Graph<Long, Double, Double> result = graph.runScatterGatherIteration(
        new MinDistanceMessenger(), new VertexDistanceUpdater(), maxIterations);

// Extract the vertices as the result
DataSet<Vertex<Long, Double>> singleSourceShortestPaths = result.getVertices();

// - - - UDFs - - - //

// scatter: messaging
public static final class MinDistanceMessenger extends ScatterFunction<Long, Double, Double, Double> {

    public void sendMessages(Vertex<Long, Double> vertex) {
        for (Edge<Long, Double> edge : getEdges()) {
            sendMessageTo(edge.getTarget(), vertex.getValue() + edge.getValue());
        }
    }
}

// gather: vertex update
public static final class VertexDistanceUpdater extends GatherFunction<Long, Double, Double> {

    public void updateVertex(Vertex<Long, Double> vertex, MessageIterator<Double> inMessages) {

        Double minDistance = Double.MAX_VALUE;

        for (double msg : inMessages) {
            if (msg < minDistance) {
                minDistance = msg;
            }
        }

        if (vertex.getValue() > minDistance) {
            setNewVertexValue(minDistance);
        }
    }
}

// read the input graph
val graph: Graph[Long, Double, Double] = ...

// define the maximum number of iterations
val maxIterations = 10

// Execute the scatter-gather iteration
val result = graph.runScatterGatherIteration(new MinDistanceMessenger, new VertexDistanceUpdater, maxIterations)

// Extract the vertices as the result
val singleSourceShortestPaths = result.getVertices

// - - - UDFs - - - //

// messaging
final class MinDistanceMessenger extends ScatterFunction[Long, Double, Double, Double] {

  override def sendMessages(vertex: Vertex[Long, Double]) = {
    for (edge: Edge[Long, Double] <- getEdges) {
      sendMessageTo(edge.getTarget, vertex.getValue + edge.getValue)
    }
  }
}

// vertex update
final class VertexDistanceUpdater extends GatherFunction[Long, Double, Double] {

  override def updateVertex(vertex: Vertex[Long, Double], inMessages: MessageIterator[Double]) = {

    var minDistance = Double.MaxValue

    while (inMessages.hasNext) {
      val msg = inMessages.next
      if (msg < minDistance) {
        minDistance = msg
      }
    }

    if (vertex.getValue > minDistance) {
      setNewVertexValue(minDistance)
    }
  }
}

Configuring a Scatter-Gather Iteration

A scatter-gather iteration can be configured using a ScatterGatherConfiguration object. Currently, the following parameters can be specified:

• Name: The name for the scatter-gather iteration. The name is displayed in logs and messages and can be specified using the setName() method.
• Parallelism: The parallelism for the iteration. It can be set using the setParallelism() method.
• Solution set in unmanaged memory: Defines whether the solution set is kept in managed memory (Flink’s internal way of keeping objects in serialized form) or as a simple object map. By default, the solution set runs in managed memory. This property can be set using the setSolutionSetUnmanagedMemory() method.
• Aggregators: Iteration aggregators can be registered using the registerAggregator() method. An iteration aggregator combines all aggregates globally once per superstep and makes them available in the next superstep.
Registered aggregators can be accessed inside the user-defined ScatterFunction and GatherFunction.
• Broadcast Variables: DataSets can be added as Broadcast Variables to the GatherFunction and ScatterFunction, using the addBroadcastSetForUpdateFunction() and addBroadcastSetForMessagingFunction() methods, respectively.
• Number of Vertices: Accessing the total number of vertices within the iteration. This property can be set using the setOptNumVertices() method. The number of vertices can then be accessed in the vertex update function and in the messaging function using the getNumberOfVertices() method. If the option is not set in the configuration, this method will return -1.
• Degrees: Accessing the in/out degree for a vertex within an iteration. This property can be set using the setOptDegrees() method. The in/out degrees can then be accessed in the vertex update function and in the messaging function, per vertex, using the getInDegree() and getOutDegree() methods. If the degrees option is not set in the configuration, these methods will return -1.
• Messaging Direction: By default, a vertex sends messages to its out-neighbors and updates its value based on messages received from its in-neighbors. This configuration option allows users to change the messaging direction to EdgeDirection.IN, EdgeDirection.OUT, or EdgeDirection.ALL. The messaging direction also dictates the update direction, which would be EdgeDirection.OUT, EdgeDirection.IN and EdgeDirection.ALL, respectively. This property can be set using the setDirection() method.

Graph<Long, Double, Double> graph = ...
// configure the iteration
ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();

// set the iteration name
parameters.setName("Gelly Iteration");

// set the parallelism
parameters.setParallelism(16);

// register an aggregator
parameters.registerAggregator("sumAggregator", new LongSumAggregator());

// run the scatter-gather iteration, also passing the configuration parameters
Graph<Long, Double, Double> result = graph.runScatterGatherIteration(
        new Messenger(), new VertexUpdater(), maxIterations, parameters);

// user-defined functions
public static final class Messenger extends ScatterFunction { ... }

public static final class VertexUpdater extends GatherFunction {

    LongSumAggregator aggregator = new LongSumAggregator();

    public void preSuperstep() {
        // retrieve the Aggregator
        aggregator = getIterationAggregator("sumAggregator");
    }

    public void updateVertex(Vertex<Long, Long> vertex, MessageIterator inMessages) {

        // do some computation
        Long partialValue = ...

        // aggregate the partial value
        aggregator.aggregate(partialValue);

        // update the vertex value
        setNewVertexValue(...);
    }
}

val graph: Graph[Long, Double, Double] = ...

val parameters = new ScatterGatherConfiguration

// set the iteration name
parameters.setName("Gelly Iteration")

// set the parallelism
parameters.setParallelism(16)

// register an aggregator
parameters.registerAggregator("sumAggregator", new LongSumAggregator)

// run the scatter-gather iteration, also passing the configuration parameters
val result = graph.runScatterGatherIteration(new Messenger, new VertexUpdater, maxIterations, parameters)

// user-defined functions
final class Messenger extends ScatterFunction { ... }

final class VertexUpdater extends GatherFunction {

  var aggregator = new LongSumAggregator

  override def preSuperstep {
    // retrieve the Aggregator
    aggregator = getIterationAggregator("sumAggregator")
  }

  override def updateVertex(vertex: Vertex[Long, Long], inMessages: MessageIterator[Long]) = {

    // do some computation
    val partialValue = ...

    // aggregate the partial value
    aggregator.aggregate(partialValue)

    // update the vertex value
    setNewVertexValue(...)
  }
}

The following example illustrates the usage of the degree as well as the number of vertices options.

Graph<Long, Double, Double> graph = ...

// configure the iteration
ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();

// set the number of vertices option to true
parameters.setOptNumVertices(true);

// set the degree option to true
parameters.setOptDegrees(true);

// run the scatter-gather iteration, also passing the configuration parameters
Graph<Long, Double, Double> result = graph.runScatterGatherIteration(
        new Messenger(), new VertexUpdater(), maxIterations, parameters);

// user-defined functions
public static final class Messenger extends ScatterFunction {
    ...
    // retrieve the vertex out-degree
    long outDegree = getOutDegree();
    ...
}

public static final class VertexUpdater extends GatherFunction {
    ...
    // get the number of vertices
    long numVertices = getNumberOfVertices();
    ...
}

val graph: Graph[Long, Double, Double] = ...

// configure the iteration
val parameters = new ScatterGatherConfiguration

// set the number of vertices option to true
parameters.setOptNumVertices(true)

// set the degree option to true
parameters.setOptDegrees(true)

// run the scatter-gather iteration, also passing the configuration parameters
val result = graph.runScatterGatherIteration(new Messenger, new VertexUpdater, maxIterations, parameters)

// user-defined functions
final class Messenger extends ScatterFunction {
  ...
  // retrieve the vertex out-degree
  val outDegree = getOutDegree
  ...
}

final class VertexUpdater extends GatherFunction {
  ...
  // get the number of vertices
  val numVertices = getNumberOfVertices
  ...
}

The following example illustrates the usage of the edge direction option. Vertices update their values to contain a list of all their in-neighbors.

Graph<Long, HashSet<Long>, Double> graph = ...

// configure the iteration
ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();

// set the messaging direction
parameters.setDirection(EdgeDirection.IN);

// run the scatter-gather iteration, also passing the configuration parameters
DataSet<Vertex<Long, HashSet<Long>>> result = graph.runScatterGatherIteration(
        new Messenger(), new VertexUpdater(), maxIterations, parameters).getVertices();

// user-defined functions
public static final class Messenger extends ScatterFunction { ... }

public static final class VertexUpdater extends GatherFunction { ... }

val graph: Graph[Long, HashSet[Long], Double] = ...

// configure the iteration
val parameters = new ScatterGatherConfiguration

// set the messaging direction
parameters.setDirection(EdgeDirection.IN)

// run the scatter-gather iteration, also passing the configuration parameters
val result = graph.runScatterGatherIteration(new Messenger, new VertexUpdater, maxIterations, parameters).getVertices

// user-defined functions
final class Messenger extends ScatterFunction { ... }

final class VertexUpdater extends GatherFunction { ... }

Gather-Sum-Apply Iterations

Like the scatter-gather model, Gather-Sum-Apply (GSA) also proceeds in synchronized iterative steps, called supersteps. Each superstep consists of the following three phases:

• Gather: a user-defined function is invoked in parallel on the edges and neighbors of each vertex, producing a partial value.
• Sum: the partial values produced in the Gather phase are aggregated to a single value, using a user-defined reducer.
• Apply: each vertex value is updated by applying a function on the current value and the aggregated value produced by the Sum phase.

Let us consider computing Single-Source-Shortest-Paths with GSA on the following graph and let vertex 1 be the source. During the Gather phase, we calculate the new candidate distances by adding each vertex value to the edge weight. In Sum, the candidate distances are grouped by vertex ID and the minimum distance is chosen.
In Apply, the newly calculated distance is compared to the current vertex value and the minimum of the two is assigned as the new value of the vertex. Notice that, if a vertex does not change its value during a superstep, it will not calculate candidate distances during the next superstep. The algorithm converges when no vertex changes value.

To implement this example in Gelly GSA, the user only needs to call the runGatherSumApplyIteration method on the input graph and provide the GatherFunction, SumFunction and ApplyFunction UDFs. Iteration synchronization, grouping, value updates and convergence are handled by the system:

// read the input graph
Graph<Long, Double, Double> graph = ...

// define the maximum number of iterations
int maxIterations = 10;

// Execute the GSA iteration
Graph<Long, Double, Double> result = graph.runGatherSumApplyIteration(
        new CalculateDistances(), new ChooseMinDistance(), new UpdateDistance(), maxIterations);

// Extract the vertices as the result
DataSet<Vertex<Long, Double>> singleSourceShortestPaths = result.getVertices();

// - - - UDFs - - - //

// Gather
private static final class CalculateDistances extends GatherFunction<Double, Double, Double> {

    public Double gather(Neighbor<Double, Double> neighbor) {
        return neighbor.getNeighborValue() + neighbor.getEdgeValue();
    }
}

// Sum
private static final class ChooseMinDistance extends SumFunction<Double, Double, Double> {

    public Double sum(Double newValue, Double currentValue) {
        return Math.min(newValue, currentValue);
    }
}

// Apply
private static final class UpdateDistance extends ApplyFunction<Long, Double, Double> {

    public void apply(Double newDistance, Double oldDistance) {
        if (newDistance < oldDistance) {
            setResult(newDistance);
        }
    }
}

// read the input graph
val graph: Graph[Long, Double, Double] = ...

// define the maximum number of iterations
val maxIterations = 10

// Execute the GSA iteration
val result = graph.runGatherSumApplyIteration(new CalculateDistances, new ChooseMinDistance, new UpdateDistance, maxIterations)

// Extract the vertices as the result
val singleSourceShortestPaths = result.getVertices

// - - - UDFs - - - //

// Gather
final class CalculateDistances extends GatherFunction[Double, Double, Double] {

  override def gather(neighbor: Neighbor[Double, Double]): Double = {
    neighbor.getNeighborValue + neighbor.getEdgeValue
  }
}

// Sum
final class ChooseMinDistance extends SumFunction[Double, Double, Double] {

  override def sum(newValue: Double, currentValue: Double): Double = {
    Math.min(newValue, currentValue)
  }
}

// Apply
final class UpdateDistance extends ApplyFunction[Long, Double, Double] {

  override def apply(newDistance: Double, oldDistance: Double) = {
    if (newDistance < oldDistance) {
      setResult(newDistance)
    }
  }
}

Note that gather takes a Neighbor type as an argument. This is a convenience type which simply wraps a vertex with its neighboring edge.

For more examples of how to implement algorithms with the Gather-Sum-Apply model, check the GSAPageRank and GSAConnectedComponents library methods of Gelly.

Configuring a Gather-Sum-Apply Iteration

A GSA iteration can be configured using a GSAConfiguration object. Currently, the following parameters can be specified:

• Name: The name for the GSA iteration. The name is displayed in logs and messages and can be specified using the setName() method.
• Parallelism: The parallelism for the iteration. It can be set using the setParallelism() method.
• Solution set in unmanaged memory: Defines whether the solution set is kept in managed memory (Flink’s internal way of keeping objects in serialized form) or as a simple object map. By default, the solution set runs in managed memory. This property can be set using the setSolutionSetUnmanagedMemory() method.
• Aggregators: Iteration aggregators can be registered using the registerAggregator() method.
An iteration aggregator combines all aggregates globally once per superstep and makes them available in the next superstep. Registered aggregators can be accessed inside the user-defined GatherFunction, SumFunction and ApplyFunction.
• Broadcast Variables: DataSets can be added as Broadcast Variables to the GatherFunction, SumFunction and ApplyFunction, using the addBroadcastSetForGatherFunction(), addBroadcastSetForSumFunction() and addBroadcastSetForApplyFunction() methods, respectively.
• Number of Vertices: Accessing the total number of vertices within the iteration. This property can be set using the setOptNumVertices() method. The number of vertices can then be accessed in the gather, sum and/or apply functions by using the getNumberOfVertices() method. If the option is not set in the configuration, this method will return -1.
• Neighbor Direction: By default, values are gathered from the out-neighbors of the vertex. This can be modified using the setDirection() method.

The following example illustrates the usage of the number of vertices option.

Graph<Long, Double, Double> graph = ...

// configure the iteration
GSAConfiguration parameters = new GSAConfiguration();

// set the number of vertices option to true
parameters.setOptNumVertices(true);

// run the gather-sum-apply iteration, also passing the configuration parameters
Graph<Long, Long, Long> result = graph.runGatherSumApplyIteration(
        new Gather(), new Sum(), new Apply(), maxIterations, parameters);

// user-defined functions
public static final class Gather {
    ...
    // get the number of vertices
    long numVertices = getNumberOfVertices();
    ...
}

public static final class Sum {
    ...
    // get the number of vertices
    long numVertices = getNumberOfVertices();
    ...
}

public static final class Apply {
    ...
    // get the number of vertices
    long numVertices = getNumberOfVertices();
    ...
}

val graph: Graph[Long, Double, Double] = ...

// configure the iteration
val parameters = new GSAConfiguration

// set the number of vertices option to true
parameters.setOptNumVertices(true)

// run the gather-sum-apply iteration, also passing the configuration parameters
val result = graph.runGatherSumApplyIteration(new Gather, new Sum, new Apply, maxIterations, parameters)

// user-defined functions
final class Gather {
  ...
  // get the number of vertices
  val numVertices = getNumberOfVertices
  ...
}

final class Sum {
  ...
  // get the number of vertices
  val numVertices = getNumberOfVertices
  ...
}

final class Apply {
  ...
  // get the number of vertices
  val numVertices = getNumberOfVertices
  ...
}

The following example illustrates the usage of the edge direction option.

Graph<Long, HashSet<Long>, Double> graph = ...

// configure the iteration
GSAConfiguration parameters = new GSAConfiguration();

// set the messaging direction
parameters.setDirection(EdgeDirection.IN);

// run the gather-sum-apply iteration, also passing the configuration parameters
DataSet<Vertex<Long, HashSet<Long>>> result = graph.runGatherSumApplyIteration(
        new Gather(), new Sum(), new Apply(), maxIterations, parameters).getVertices();

val graph: Graph[Long, HashSet[Long], Double] = ...

// configure the iteration
val parameters = new GSAConfiguration

// set the messaging direction
parameters.setDirection(EdgeDirection.IN)

// run the gather-sum-apply iteration, also passing the configuration parameters
val result = graph.runGatherSumApplyIteration(new Gather, new Sum, new Apply, maxIterations, parameters).getVertices

Iteration Abstractions Comparison

Although the three iteration abstractions in Gelly seem quite similar, understanding their differences can lead to more performant and maintainable programs. Among the three, the vertex-centric model is the most general and supports arbitrary computation and messaging for each vertex. In the scatter-gather model, the logic of producing messages is decoupled from the logic of updating vertex values. Thus, programs written using scatter-gather are sometimes easier to follow and maintain.
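To make the decoupling concrete, here is a hypothetical plain-Python sketch of SSSP in scatter-gather form (not the Gelly API; the function names and `edges` format are illustrative). Message production and value update are separate callbacks that never touch each other's data structures, mirroring ScatterFunction and GatherFunction:

```python
INF = float("inf")

def scatter(vertex_value, out_edges):
    """Scatter phase: produce (target, message) pairs from the vertex state only."""
    return [(target, vertex_value + weight) for target, weight in out_edges]

def gather(vertex_value, messages):
    """Gather phase: compute a new value from received messages only."""
    candidate = min(messages, default=INF)
    return candidate if candidate < vertex_value else vertex_value

def scatter_gather_sssp(edges, src, max_supersteps=20):
    """edges: {vertex: [(neighbor, edge_weight), ...]} -- hypothetical input format."""
    values = {v: (0.0 if v == src else INF) for v in edges}
    active = {src}  # only vertices whose value changed scatter in the next superstep
    for _ in range(max_supersteps):
        inbox = {v: [] for v in edges}
        for v in active:
            for target, msg in scatter(values[v], edges[v]):
                inbox[target].append(msg)
        active = set()
        for v in edges:
            new_value = gather(values[v], inbox[v])
            if new_value != values[v]:
                values[v] = new_value
                active.add(v)
        if not active:
            break  # convergence: no value updates
    return values
```

Because `scatter` sees only the vertex value and `gather` sees only the messages, neither callback needs concurrent access to both the inbox and the outbox, which is exactly the memory advantage (and the expressiveness restriction) discussed here.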
Separating the messaging phase from the vertex value update logic not only makes some programs easier to follow but may also have a positive impact on performance. Scatter-gather implementations typically have lower memory requirements, because concurrent access to the inbox (messages received) and outbox (messages to send) data structures is not required. However, this characteristic also limits expressiveness and makes some computation patterns non-intuitive. Naturally, if an algorithm requires a vertex to concurrently access its inbox and outbox, then expressing this algorithm in scatter-gather might be problematic. Strongly Connected Components and Approximate Maximum Weight Matching are examples of such graph algorithms. A direct consequence of this restriction is that vertices cannot generate messages and update their states in the same phase. Thus, deciding whether to propagate a message based on its content would require storing it in the vertex value, so that the gather phase has access to it in the following iteration step. Similarly, if the vertex update logic includes computation over the values of the neighboring edges, these have to be included inside a special message passed from the scatter to the gather phase. Such workarounds often lead to higher memory requirements and inelegant, hard-to-understand algorithm implementations.

Gather-sum-apply iterations are also quite similar to scatter-gather iterations. In fact, any algorithm which can be expressed as a GSA iteration can also be written in the scatter-gather model. The messaging phase of the scatter-gather model is equivalent to the Gather and Sum steps of GSA: Gather can be seen as the phase where the messages are produced and Sum as the phase where they are routed to the target vertex. Similarly, the value update phase corresponds to the Apply step.
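The Gather/Sum/Apply decomposition can be sketched the same way. The following hypothetical plain-Python sketch (not the Gelly API; names and the `edges` format are illustrative) computes SSSP with a per-edge Gather, a min-reduce Sum grouped by target vertex, and an Apply that keeps the smaller value:

```python
INF = float("inf")

def gsa_sssp(edges, src, max_iterations=20):
    """edges: {vertex: [(neighbor, edge_weight), ...]} -- hypothetical input format."""
    values = {v: (0.0 if v == src else INF) for v in edges}
    for _ in range(max_iterations):
        # Gather: one partial value per edge (Gelly parallelizes over edges here)
        partials = [(target, values[v] + weight)
                    for v in edges if values[v] != INF
                    for target, weight in edges[v]]
        # Sum: reduce the partial values grouped by target vertex with min()
        reduced = {}
        for target, candidate in partials:
            reduced[target] = min(candidate, reduced.get(target, INF))
        # Apply: keep the smaller of the old value and the reduced candidate
        changed = False
        for v, candidate in reduced.items():
            if candidate < values[v]:
                values[v] = candidate
                changed = True
        if not changed:
            break  # convergence: no vertex changed value
    return values
```

Since `min` is associative and commutative, the Sum step can be evaluated incrementally over the partial values, which is precisely why a reduce (with a combiner) can be more efficient than collecting the whole message group.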
The main difference between the two implementations is that the Gather phase of GSA parallelizes the computation over the edges, while the messaging phase distributes the computation over the vertices. Using the SSSP examples above, we see that in the first superstep of the scatter-gather case, vertices 1, 2 and 3 produce messages in parallel. Vertex 1 produces 3 messages, while vertices 2 and 3 produce one message each. In the GSA case, on the other hand, the computation is parallelized over the edges: the three candidate distance values of vertex 1 are produced in parallel. Thus, if the Gather step contains “heavy” computation, it might be a better idea to use GSA and spread out the computation, instead of burdening a single vertex. Another case when parallelizing over the edges might prove more efficient is when the input graph is skewed (some vertices have many more neighbors than others).

Another difference between the two implementations is that the scatter-gather implementation uses a coGroup operator internally, while GSA uses a reduce. Therefore, if the function that combines neighbor values (messages) requires the whole group of values for the computation, scatter-gather should be used. If the update function is associative and commutative, then GSA’s reducer is expected to give a more efficient implementation, as it can make use of a combiner.

Another thing to note is that GSA works strictly on neighborhoods, while in the vertex-centric and scatter-gather models a vertex can send a message to any vertex, given that it knows its vertex ID, regardless of whether it is a neighbor. Finally, in Gelly’s scatter-gather implementation, one can choose the messaging direction, i.e. the direction in which updates propagate. GSA does not support this yet, so each vertex will be updated based on the values of its in-neighbors only.

The main differences among the Gelly iteration models are shown in the table below.
Iteration Model   | Update Function              | Update Logic               | Communication Scope | Communication Logic
Vertex-Centric    | arbitrary                    | arbitrary                  | any vertex          | arbitrary
Scatter-Gather    | arbitrary                    | based on received messages | any vertex          | based on vertex state
Gather-Sum-Apply  | associative and commutative  | based on neighbors' values | neighborhood        | based on vertex state
APS April Meeting 2015
Bulletin of the American Physical Society
Volume 60, Number 4
Saturday–Tuesday, April 11–14, 2015; Baltimore, Maryland

Session R13: Gravitational Waveforms and Perturbation Theory
Sponsoring Units: GGR
Chair: Scott Field, Cornell University
Room: Key 9

Monday, April 13, 2015, 10:45AM–10:57AM
R13.00001: Secular gravitational-wave phasing to 3PN order for low-eccentricity inspiraling binaries
Blake Moore, Marc Favata, K. G. Arun, Chandra Mishra
While gravitational waves cause binaries to circularize, several astrophysical scenarios suggest that some binaries will have non-negligible eccentricities when entering the LIGO frequency band. Time-domain waveforms for arbitrary eccentricity and to 3PN order are provided by the quasi-Keplerian formalism, but are computationally costly. Here we use a simplification of the quasi-Keplerian formalism to produce a fast, analytic waveform for the secular phasing of low-eccentricity binaries to 3PN order. We will discuss how this waveform is constructed, its domain of validity, and possible applications.

Monday, April 13, 2015, 10:57AM–11:09AM
R13.00002: Eccentric Post-Newtonian Bursts
Nicholas Loutrel, Nicolas Yunes, Frans Pretorius
Gravitational wave emission from eccentric compact binaries is highly peaked around pericenter passage. As such, the gravitational wave signal looks like a sequence of discrete bursts in time-frequency space, as opposed to a continuous signal. Due to the relatively low power contained in each burst, standard matched filtering techniques may be impractical for extracting the parameters of the signal. Alternatively, one can stack the power within each burst, creating an enhanced data product and amplifying the signal-to-noise ratio. In order to do this, however, one must have some prior knowledge of where the bursts will occur in time-frequency space, i.e. a burst model. We here discuss a new method of constructing burst models that allows for a formulation at generic post-Newtonian (PN) order. We discuss its implementation at 3PN order and the accuracy of the full 3PN model by comparison to different eccentric PN Taylor approximants.

Monday, April 13, 2015, 11:09AM–11:21AM
R13.00003: Spin effects in the nonlinear gravitational-wave memory from inspiralling binaries
Marc Favata, Xinyi Guo
The gravitational-wave memory effect is a time-varying but non-oscillatory contribution to the gravitational-wave amplitude. The nonlinear form of the memory arises from the gravitational waves produced by previously emitted gravitational waves. Despite the fact that it originates from higher-order interactions, it modifies the gravitational waveform at leading (0PN) order. Understanding the memory is important for building accurate knowledge of the gravitational-wave signal in order to probe the nonlinearity of general relativity. Previous analytic calculations of spinning binary waveforms have neglected the memory component. Here we compute the memory corrections to the waveform due to spin-orbit interactions. We consider both aligned and precessing spin configurations.

Monday, April 13, 2015, 11:21AM–11:33AM
R13.00004: Nonlinear gravitational-wave memory from merging binary black holes
Goran Dojcinoski, Marc Favata
The nonlinear memory effect is a nonoscillatory piece of the gravitational-wave signal that arises when gravitational waves themselves produce gravitational waves. Merging binary black holes produce the strongest nonlinear memory signal. However, many numerical relativity simulations have difficulty computing the memory modes. We use a semianalytic procedure to construct the memory modes from the nonmemory modes of several nonspinning, quasicircular black hole binaries. We then fit analytic functions to these numerically generated waveforms. Our results could be used to improve estimates of the detectability of the memory effect.

Monday, April 13, 2015, 11:33AM–11:45AM
R13.00005: Effective potentials and morphological transitions for binary black-hole spin precession
Michael Kesden, Davide Gerosa, Richard O'Shaughnessy, Emanuele Berti, Ulrich Sperhake
We derive an effective potential $\xi_\pm(S)$ for binary black-hole (BBH) spin precession as a function of the magnitude of the total spin $S$. This allows us to solve the 2PN orbit-averaged spin-precession equations analytically for arbitrary BBH mass ratios and spins. These solutions are quasiperiodic functions of time: after a period $\tau$ the spins return to their initial relative orientations and precess about the total angular momentum by an angle $\alpha$. We classify BBH spin precession into three distinct morphologies between which BBHs can transition during their inspiral. Our new solutions constitute fundamental progress in our understanding of BBH spin precession and also have important applications to astrophysical BBHs. We derive a precession-averaged evolution equation for the total angular momentum that can be integrated on the radiation-reaction time, allowing us to statistically track BBH spins from formation to merger far more efficiently than was possible with previous orbit-averaged precession equations. This will greatly help us predict the signatures of BBH formation in the GWs emitted near merger and the distributions of final spins and gravitational recoils. The solutions may also help efforts to model and interpret GWs from generic BBH mergers.

Monday, April 13, 2015, 11:45AM–11:57AM
R13.00006: Higher order spin effects in inspiralling compact objects binaries
Sylvain Marsat
We present recent progress on higher order spin effects in the post-Newtonian dynamics of compact object binaries. We first present an extension of a Lagrangian formalism for point particles with spins, where finite size effects are represented by an additional multipolar structure. When applied to the case of a spin-induced octupole, the formalism allows for the computation of the cubic-in-spin effects that enter at the order 3.5PN. We also report on results obtained for quadratic-in-spin effects at the next-to-leading order 3PN. In both cases, we recover existing results for the dynamics, and derive for the first time the gravitational wave energy flux and orbital phasing. These results will be useful for the data analysis of the upcoming generation of advanced detectors of gravitational waves.

Monday, April 13, 2015, 11:57AM–12:09PM
R13.00007: Tidal invariants for compact binaries on quasi-circular orbits
Niels Warburton, Sam Dolan, Patrick Nolan, Adrian Ottewill, Barry Wardell
We extend the gravitational self-force approach to encompass `self-interaction' tidal effects for a compact body of mass $\mu$ on a quasi-circular orbit around a black hole of mass $M \gg \mu$. Specifically, we define and calculate at $O(\mu)$ (conservative) shifts in the eigenvalues of the electric- and magnetic-type tidal tensors, and a (dissipative) shift in a scalar product between their eigenbases. This approach yields four gauge-invariant functions, from which one may construct other tidal quantities such as the curvature scalars and the speciality index. First, we analyze the general case of a geodesic in a regular perturbed vacuum spacetime admitting a helical Killing vector and a reflection symmetry. Next, we specialize to focus on circular orbits in the equatorial plane of Kerr spacetime at $O(\mu)$. We present accurate numerical results for the Schwarzschild case for orbital radii up to the light-ring, calculated via independent implementations in Lorenz and Regge-Wheeler gauges. We show that our results are consistent with leading-order post-Newtonian expansions, and demonstrate the existence of additional structure in the strong-field regime. We anticipate that our strong-field results will inform (e.g.) effective one-body models for the gravitational two-body problem.

Monday, April 13, 2015, 12:09PM–12:21PM
R13.00008: Computing the dissipative part of the gravitational self force: I. Formalism
Eanna Flanagan, Tanja Hinderer, Scott A. Hughes, Uchupol Ruangsri
The computation of the gravitational self-force acting on a point particle inspiralling into a spinning black hole is a subject of much current research, and is relevant to future gravitational wave observations. We develop a formalism for numerically computing the dissipative part of the self-force for generic orbits, omitting the conservative part. The dissipative part contains both orbit-averaged and oscillatory pieces, is sufficient to compute the leading order, adiabatic inspiral, and will also yield information about the kicks to the particle's energy and angular momentum that occur during transient resonances. The dissipative self-force can be computed from the half-advanced minus half-retarded prescription, for which no regularization is needed. The method involves a simple modification of frequency domain Teukolsky codes that compute the retarded linearized metric perturbation, in which a more general type of mode amplitude is computed. In the future, it may be possible to develop complementary methods to compute the conservative part only, which are simpler than methods currently under development that aim to compute the entire first order self-force.

Monday, April 13, 2015, 12:21PM–12:33PM
R13.00009: Computing the dissipative part of the gravitational self force: II. Numerical implementation and preliminary results
Scott Hughes, Eanna Flanagan, Tanja Hinderer, Uchupol Ruangsri
We describe how we have modified a frequency-domain Teukolsky-equation solver, previously used for computing orbit-averaged dissipation, in order to compute the dissipative piece of the gravitational self force on orbits of Kerr black holes. This calculation involves summing over a large number of harmonics. Each harmonic is independent of all others, so it is well suited to parallel computation. We show preliminary results for equatorial eccentric orbits and circular inclined orbits, demonstrating convergence of the harmonic expansion, as well as interesting phenomenology of the self force's behavior in the strong field. We conclude by discussing plans for using this force to study generic orbits, with a focus on the behavior of orbital resonances.
Adaptive Deadbeat Predictive Control for PMSM-based solar-powered electric vehicles with enhanced stator resistance compensation
Sci. Tech. Energ. Transition, Volume 78, 2023 (Power Components For Electric Vehicles), Article Number 35, 10 pages
DOI: https://doi.org/10.2516/stet/2023033
Published online 29 November 2023
© The Author(s), published by EDP Sciences, 2023

Nomenclature

$E_{dq}$ : dq-axis prediction errors
$i_{dq}[k]$ : dq-axis measured currents at time $t = kT_s$
$i_{dq}^*[k]$ : dq-axis reference currents at time $t = kT_s$
$i_{dq}[k+1]$ : predicted currents for the next sampling time $(k+1)T_s$
$L_0$ : nominal stator inductance
$\Delta L$ : error between nominal and actual inductance
$L_d$, $L_q$ : dq-axis stator inductances
$L$ : stator inductance for a surface-mounted PMSM ($L = L_d = L_q$)
$R_0$ : nominal stator phase resistance
$\Delta R$ : error between nominal and actual resistance
$T$ : electrical time constant ($T = L/R$)
$T_0$ : PMSM winding temperature (°C) corresponding to $R_0$
$U_{dq}$ : dq-axis applied voltages
$U_{dq}^{opt}[k]$ : dq-axis optimum motor voltage vector
$U_{abc}^{opt}$ : abc-axis optimum motor voltage vector
$\omega$ : mechanical angular velocity ($\omega = \omega_e/p$)
$\phi_0$ : nominal flux linkage of the permanent magnets
$\phi$ : flux linkage of the permanent magnets
$\Delta\phi$ : error between nominal and actual flux linkage
$\alpha$ : thermal coefficient of copper electrical resistivity ($\alpha = 4.29 \times 10^{-3}$ /°C at $T_0 = 20$ °C)

1 Introduction

At present, there is a growing trend towards the proliferation and adoption of electric vehicles (EVs) [1, 2]. This arises from the worldwide imperative to decrease carbon emissions and lessen our reliance on fossil fuels [3]. Moreover, it reflects the swift shift toward sustainable energy alternatives, particularly in the transportation industry. Additionally, owing to the remarkable advancements in photovoltaic technology, electric vehicles now incorporate photovoltaic modules to harness an abundant and eco-friendly energy source [4].
These innovative solar-powered electric vehicles (SEVs) not only help reduce transportation expenses but also contribute to environmental preservation by enabling "zero pollution" mobility [5, 6]. Permanent-Magnet Synchronous Motors (PMSMs) have found extensive use in electric vehicles due to their impressive attributes such as high power density, exceptional efficiency, compact design, and dependable performance [7]. Moreover, their widespread application in conventional SEVs can be attributed to the rapid advancements in permanent magnet materials and numerous improved design features, including reduced noise and vibration, increased power density, and more compact dimensions [8, 9]. The conventional field-oriented control (FOC) method has been extensively utilized within the PMSM drive system to attain the targeted control execution [10]. In this control scheme, a dual cascade loop controller is employed; typically, the inner loop is dedicated to current control. Given the interdependence between torque and current responses, achieving high-performance current control becomes imperative. Numerous current control approaches have been explored to attain both steady-state accuracy and excellent transient performance. These strategies comprise fuzzy control [11], H∞ control [12], predictive control [13], and hysteresis control [14]. Within this array of approaches, the conventional deadbeat predictive control (CDPC) distinguishes itself with its notable feature of high-speed response. Nonetheless, the permanent-magnet flux linkage, the stator resistance, and the stator inductance play pivotal roles in shaping both the steady-state and transient performance of the conventional deadbeat predictive control. Discrepancies in these parameters often stem from factors like magnetic saturation and temperature fluctuations.
Additionally, the specificity of the SEV, marked by frequent stops and starts, particularly in urban environments, can further impact these parameters. To strengthen the robustness of solar-powered electric vehicles against parameter disparities, numerous studies have been presented in the existing literature. In [15], an innovative approach to enhance the robustness of predictive current control for PMSMs is introduced. This approach involves incorporating parallel compensation expressions into the conventional deadbeat predictive control to mitigate the impact of multi-parameter disparities. In [16], a solution to the parameter dependency issue of CDPC is proposed; this improved model predictive current control method relies on an incremental model of the Permanent-Magnet Synchronous Motor. In [17], the authors investigate a DPC that resolves the parameter-disparity problem by continuously identifying the stator inductance and resistance across different magnetic-saturation levels. The present study delves into the application of Adaptive Deadbeat Predictive Control (ADPC) for Permanent-Magnet Synchronous Motors within solar-powered electric vehicles. This approach aims to harness the advantages of predictive control, such as enhanced precision and expanded control flexibility, while mitigating its primary drawback: sensitivity to parametric variations. This is achieved through the real-time estimation of PMSM parameters. The paper is organized as follows: after the introduction, the PMSM model is described; then the conventional PMSM control and the effect of PMSM parameter variation are presented; after that, the proposed control is detailed; finally, simulation results are presented and discussed.

2 Solar-powered electric vehicle power system

Figure 1 illustrates the electrical model of the solar-powered vehicle.
The core components of this model include photovoltaic (PV) panels, acting as the primary renewable energy source, along with batteries and supercapacitors that serve as energy storage systems. These components are interconnected to a DC bus using DC/DC power converters. Another critical component in the SEV is the PMSM motor, which is powered by a DC–AC converter. The PMSM motor plays a vital role in driving the vehicle's propulsion system, converting electrical energy into mechanical motion. Additionally, there are auxiliary systems connected to the DC bus that provide various functionalities beyond propulsion. These auxiliary systems cater to essential features such as climate control, lighting, infotainment, power steering, and braking, enhancing the overall driving experience and safety of the vehicle. The integration of these components and systems creates an efficient and eco-friendly solar-powered electric vehicle, offering sustainable transportation solutions with reduced environmental impact. This study focuses on the PMSM motor and its corresponding DC/AC converter. The PMSM used in this research consists of three windings on the stator, along with permanent magnets either embedded inside the rotor for an interior PMSM or mounted on the rotor surface for a surface-mounted PMSM. In both cases, the continuous-time model of the PMSM in the rotating dq-axis coordinate system is represented by equations (1) and (2). The adoption of a rotating coordinate system, rather than the natural coordinate system, is a deliberate choice due to its ability to achieve decoupling control and effectively reduce the number of variables involved in the motor's control scheme.
This approach allows for enhanced control and improved efficiency of the PMSM, enabling precise manipulation of the motor's performance and behavior.

$U_d(t) = R\, i_d(t) - L_q\, \omega_e(t)\, i_q(t) + L_d \dfrac{d i_d(t)}{dt}$, (1)

$U_q(t) = R\, i_q(t) + L_d\, \omega_e(t)\, i_d(t) + L_q \dfrac{d i_q(t)}{dt} + \phi\, \omega_e(t)$. (2)

To analyze the relationship between current and stator voltage, equations (1) and (2) can be rewritten as equations (3) and (4), respectively, with the dq-axis stator currents selected as the state variables:

$\dfrac{d i_d(t)}{dt} = -\dfrac{1}{T} i_d(t) + \omega_e(t)\, i_q(t) + \dfrac{1}{L} U_d(t)$, (3)

$\dfrac{d i_q(t)}{dt} = -\dfrac{1}{T} i_q(t) - \omega_e(t)\, i_d(t) + \dfrac{1}{L} U_q(t) - \dfrac{\phi}{L} \omega_e(t)$. (4)

3 Conventional PMSM control

Within the realm of conventional PMSM control, numerous strategies exist in the literature. However, in this particular study, our attention is directed towards the FOC and CDPC methods.

3.1 Field oriented control

Figure 2 illustrates the FOC for the dq-axis PMSM currents. This control strategy primarily focuses on precisely regulating the motor current components $i_d$ and $i_q$. In this work, the d-axis reference current is set to zero since the d-axis is aligned with the magnetic field of the permanent magnets. By setting the d-axis current to zero, the magnetic flux in the d-axis remains constant and aligned with the permanent magnets' field, which ensures maximum torque at minimum current. The q-axis stator current reference is computed by the external speed loop, which uses a PI controller. PI controllers are then used to regulate the d-axis and q-axis current components: each controller compares the desired current value with the actual current and adjusts the voltage applied to the motor accordingly. Once the dq-axis voltage references are computed, they are transformed back to the stationary reference frame to generate the necessary voltage commands for the inverter that drives the motor.
These voltage commands are used to modulate the inverter's switching signals to control the motor.

3.2 Conventional deadbeat predictive control

Figure 3 depicts the CDPC applied to the dq-axis currents of a PMSM. The primary objective of this control approach is to achieve precise regulation of the motor's current components $i_d$ and $i_q$. A PI controller is used to regulate the motor speed. The optimal voltage vector, which is applied to the PMSM through a Space Vector Modulation (SVM) process, is computed from the PMSM state model (Eqs. (3) and (4)) and the forward-Euler discretization (Eq. (5)). The resulting discrete prediction equations are given by equations (6) and (7):

$\dfrac{d i_{dq}(t)}{dt} \approx \dfrac{i_{dq}[k+1] - i_{dq}[k]}{T_s}$, (5)

$i_d[k+1] = \left(1 - \dfrac{R T_s}{L}\right) i_d[k] + p T_s\, \omega[k]\, i_q[k] + \dfrac{T_s}{L} U_d[k]$, (6)

$i_q[k+1] = \left(1 - \dfrac{R T_s}{L}\right) i_q[k] - p T_s\, \omega[k]\, i_d[k] + \dfrac{T_s}{L} U_q[k] - \dfrac{p \phi T_s}{L} \omega[k]$. (7)

The CDPC is founded on the prediction criterion outlined in equations (8) and (9) for the d and q axes, respectively. This criterion requires that the error between the predicted current components ($i_d[k+1]$ and $i_q[k+1]$) and their references ($i_d^*[k]$ and $i_q^*[k]$) be driven to zero:

$i_d[k+1] = i_d^*[k] \Leftrightarrow i_d[k+1] - i_d^*[k] = 0$, (8)

$i_q[k+1] = i_q^*[k] \Leftrightarrow i_q[k+1] - i_q^*[k] = 0$. (9)

Combining the two previous equations with (6) and (7), the reference voltages are expressed as follows:

$U_d^{opt}[k] = a\, i_d^*[k] + b\, i_d[k] - p L\, \omega[k]\, i_q[k]$, (10)

$U_q^{opt}[k] = a\, i_q^*[k] + b\, i_q[k] + p L\, \omega[k]\, i_d[k] + c\, \omega[k]$, (11)

with $a = L/T_s$, $b = R - a$, and $c = p\phi$.

According to equations (10) and (11), the accuracy of current prediction relies heavily on the precise estimation and calibration of model parameters.
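As an illustration of how the deadbeat criterion turns the current references into voltage commands, the computation of equations (10) and (11) can be sketched in a few lines of Python; the default parameter values below are illustrative, not taken from the paper:

```python
# Minimal sketch of the deadbeat voltage computation, Eqs. (10)-(11).
# The default PMSM parameters are illustrative, not taken from the paper.
def deadbeat_voltages(id_k, iq_k, id_ref, iq_ref, omega,
                      R=0.5, L=2e-3, phi=0.1, p=4, Ts=1e-4):
    """Return (Ud, Uq) that drive the one-step current predictions of
    Eqs. (6)-(7) to their references in a single sampling period Ts."""
    a = L / Ts       # a = L / Ts
    b = R - a        # b = R - a
    c = p * phi      # back-EMF coefficient
    Ud = a * id_ref + b * id_k - p * L * omega * iq_k
    Uq = a * iq_ref + b * iq_k + p * L * omega * id_k + c * omega
    return Ud, Uq
```

Substituting these voltages back into the prediction equations reproduces the current references exactly, which is precisely the deadbeat property.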
When these parameters are correctly determined, the stator currents at the next sampling instant closely align with the predicted values, resulting in enhanced control performance. However, real-world conditions introduce various factors that can affect these parameters. Mechanical vibrations, magnetic saturation, temperature variations, and measurement errors are some of the key factors leading to changes in model parameters over time. These variations can significantly impact the control performance by introducing discrepancies between the predicted and actual currents. Consequently, the control system may not respond as expected, potentially leading to reduced efficiency and less accurate torque and speed control. In the following sections, we delve into the effects of parameter uncertainty on current prediction. The study will analyze and discuss how these uncertainties impact the overall control performance.

4 PMSM parameter uncertainty effects

Based on [10], the parameters that exert the most significant influence are the resistance, the inductance, and the flux linkage.
To explore the impact of parameter uncertainty on current prediction, we represent these parameters as follows:

$R = R_0 + \Delta R$, (12)

$L = L_0 + \Delta L$, (13)

$\phi = \phi_0 + \Delta\phi$. (14)

In the case of parameter mismatch, and based on equations (6) and (7), the current prediction model becomes:

$i_d'[k+1] = \left(1 - \dfrac{T_s (R_0 + \Delta R)}{L_0 + \Delta L}\right) i_d[k] + p T_s\, \omega[k]\, i_q[k] + \dfrac{T_s}{L_0 + \Delta L} U_d[k]$, (15)

$i_q'[k+1] = \left(1 - \dfrac{T_s (R_0 + \Delta R)}{L_0 + \Delta L}\right) i_q[k] - p T_s\, \omega[k]\, i_d[k] + \dfrac{T_s}{L_0 + \Delta L} U_q[k] - \dfrac{p T_s\, \omega[k] (\phi_0 + \Delta\phi)}{L_0 + \Delta L}$. (16)

Finally, the prediction errors between the model subject to parameter change, given by equations (15) and (16), and the nominal model, given by equations (6) and (7), are expressed as follows:

$E_d = i_d'[k+1] - i_d[k+1] = \dfrac{T_s (R_0 \Delta L - L_0 \Delta R)}{L_0 (L_0 + \Delta L)} i_d[k] - \dfrac{T_s \Delta L}{L_0 (L_0 + \Delta L)} U_d[k]$, (17)

$E_q = i_q'[k+1] - i_q[k+1] = \dfrac{T_s (R_0 \Delta L - L_0 \Delta R)}{L_0 (L_0 + \Delta L)} i_q[k] - \dfrac{T_s \Delta L}{L_0 (L_0 + \Delta L)} U_q[k] + \dfrac{p T_s\, \omega[k] (\phi_0 \Delta L - L_0 \Delta\phi)}{L_0 (L_0 + \Delta L)}$. (18)

Figure 4 illustrates the correlation between the mismatch of the stator inductance and resistance and the d-axis current error $E_d$. Figure 5 presents the relationship between the mismatch of the stator inductance and resistance and the q-axis current error $E_q$. Additionally, Figure 6 showcases the relationship between the flux-linkage mismatch and the q-axis current error $E_q$. Upon analyzing these figures, it becomes evident that both current errors $E_d$ and $E_q$ vary significantly when any single motor parameter mismatches. The variation of a single parameter induces errors in the predicted currents, thus degrading control performance.
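The mismatch sensitivity can be checked numerically. The sketch below (parameter values are illustrative, not the paper's) computes the d-axis error both directly, as the difference between the mismatched and nominal one-step predictions, and from its closed-form reduction, and verifies that the two agree:

```python
# Self-consistency check of the d-axis prediction-error expression.
# All parameter values passed to these functions are illustrative.
def ed_direct(id_k, iq_k, Ud, omega, R0, L0, dR, dL, p, Ts):
    """E_d as the mismatched one-step prediction minus the nominal one."""
    def pred(R, L):
        # One-step forward-Euler current prediction, d axis.
        return (1 - R * Ts / L) * id_k + p * Ts * omega * iq_k + Ts / L * Ud
    return pred(R0 + dR, L0 + dL) - pred(R0, L0)

def ed_closed(id_k, Ud, R0, L0, dR, dL, Ts):
    """Closed-form reduction of the same prediction difference."""
    den = L0 * (L0 + dL)
    return Ts * (R0 * dL - L0 * dR) / den * id_k - Ts * dL / den * Ud
```

Note that the speed-coupling term is identical in both predictions and cancels, which is why the rotor speed does not appear in the closed form of the d-axis error.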
5 Adaptive Deadbeat Predictive Control with consideration for stator resistance rise

For the PMSM motor used, the stator winding resistance is influenced by the internal temperature of the motor. During motor operation, various factors, such as the current flowing through the windings and mechanical losses, contribute to the generation of heat. As the internal temperature of the motor increases, the resistance of the motor windings changes. The relationship between temperature and resistance in PMSM motors is typically described by the positive temperature coefficient of resistance: in simple terms, as the temperature rises, the resistance of the stator windings increases proportionally. This change in resistance due to temperature variations has significant implications for motor control and overall performance. With increasing temperature, the resistance also increases, which, in turn, affects the current flowing through the motor windings. Consequently, the torque and speed characteristics of the motor can be impacted. To address the influence of temperature on motor performance and ensure accurate control, real-time resistance adjustments are necessary. The adaptive control strategy takes into account the estimated value of the temperature $T$ and dynamically adjusts the resistance according to the temperature changes, as described by equation (19) [18]. By incorporating temperature-compensated resistance adjustments into the motor control algorithm, the system can effectively adapt to varying operating conditions and temperature fluctuations, leading to enhanced motor performance and precise control. Real-time estimation of the temperature is treated in several references [19–21] and is not detailed in this paper.

$R = R_0 + \dfrac{\alpha R_0 (T - T_0)}{1 + \alpha T_0}$. (19)

The proposed adaptive ADPC is illustrated in Figures 7 and 8. In this control scheme, an additional input is introduced to the existing CDPC.
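The resistance update of equation (19) is simple enough to sketch directly; the $\alpha$ and $T_0$ defaults follow the nomenclature ($4.29 \times 10^{-3}$ /°C at 20 °C), while any $R_0$ value supplied by the caller is illustrative:

```python
# Temperature-compensated stator resistance, Eq. (19).
# alpha and T0 defaults follow the paper's nomenclature; R0 is caller-supplied.
def compensated_resistance(T, R0, T0=20.0, alpha=4.29e-3):
    """Stator resistance at winding temperature T (degrees C)."""
    return R0 + alpha * R0 * (T - T0) / (1 + alpha * T0)
```

At $T = T_0$ the update leaves $R_0$ unchanged; above $T_0$ the resistance grows linearly with temperature, and below $T_0$ it shrinks.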
This new input represents the real-time value of the actual stator resistance. To obtain this value, the PMSM winding temperature is measured using temperature sensors, or it can be estimated through sensorless temperature estimation methods or a prediction approach [22–24]. Once the winding temperature is known, the real-time stator resistance is deduced from equation (19). This enables the control system to account for the influence of temperature on the motor winding resistance. By incorporating this adaptive approach, the ADPC can dynamically adjust the control parameters and compensate for changes in the stator resistance due to temperature variations. The integration of real-time temperature-based resistance adjustments enhances the ADPC's ability to adapt to varying temperature conditions, ensuring more accurate and efficient motor control. This adaptive control strategy contributes to improved motor performance and robustness, enabling the PMSM to operate optimally across a wide range of operating conditions.

6 Simulation results and discussion

Simulations are established in MATLAB/Simulink. For the simulation tests, several scenarios are used: the first with a simple step reference input speed, and the second with two driving-cycle reference input speeds, namely the Urban Dynamometer Driving Schedule (UDDS) driving cycle and the Economic Commission for Europe Regulation 15 (ECE R15) driving cycle. The studied vehicle is a golf car with 2–4 passengers. Nowadays, this car is not limited to golf courses; it is used for short-distance trips in airports, businesses, universities, and residential areas. The vehicle and its characteristics are provided in Figure 9 and Table 1, respectively. The PMSM parameters are presented in Table 2.

6.1 Simple step reference input speed

The simulation section is divided into two parts. The first part shows the effectiveness of the CDPC against the FOC.
The second part proves the efficiency of the proposed ADPC compared to the CDPC in the case of stator resistance variation. During the simulation, the rated speed of the PMSM is regulated to 50 rpm. Figures 10–12 present a comparison of the PMSM behavior under FOC and CDPC; these figures illustrate the evolution of the currents $i_{abc}$, $i_d$ and $i_q$, respectively. As shown in both the three-phase currents and the dq current components, CDPC results in fewer current harmonics. In order to show the robustness of the proposed control against stator resistance mismatch, an uncertainty equal to 5R is added to the real stator resistance. Figure 13 shows the evolution of the current $i_q$ under CDPC and ADPC for a rotation-speed change at 0.1 s. As presented in this figure, the current ripple in the case of ADPC is reduced compared to the CDPC. Also, the current under ADPC takes less time to reach its reference than under CDPC. Figure 14 presents the evolution of the current $i_d$ under CDPC and ADPC for a rotation-speed change at 0.1 s. As presented in this figure, the current $i_d$ remains essentially equal to zero in the case of ADPC, which is not the case for the CDPC.

6.2 Driving cycle reference input speed

Real-world scenarios involve electric vehicles experiencing driving cycles of varying complexities, rather than simple step inputs. These driving cycles, especially city driving cycles, are characterized by intricate patterns, random fluctuations, and sudden changes, which significantly amplify the challenges of effective control. In the subsequent section, the effectiveness of the proposed ADPC is evaluated using diverse international driving cycles. For the purpose of this study, the electric vehicle's control system was subjected to two internationally recognized certified driving cycles: the UDDS driving cycle and the ECE R15 driving cycle.
These selected driving cycles provide a comprehensive assessment of the vehicle's behavior under different operational conditions, facilitating a rigorous evaluation of the control strategies.

6.2.1 UDDS driving cycle

Figure 15 shows the PMSM speed response for CDPC and ADPC, together with its reference, when the UDDS driving cycle is applied in the case of stator resistance mismatch. One can notice that the ADPC speed almost matches the reference, in contrast to the CDPC speed. The PMSM phase currents are shown in Figure 16; the zoomed view in that figure shows that they are purely sinusoidal and 120° apart. Figure 17 presents the PMSM $i_d$ and $i_q$ currents in the case of ADPC.

6.2.2 ECE R15 driving cycle

Figure 18 shows the speed response and its reference when the ECE R15 driving cycle is used as the input speed reference, in the case of PMSM stator resistance uncertainty. As shown in this figure, the best results are obtained with the ADPC compared to the CDPC. Figures 19 and 20 present the PMSM current response and the PMSM $i_d$ and $i_q$ currents for the ECE R15 driving cycle in the case of ADPC. In summary, the application of ADPC leads to results that align closely with the reference, compared to the utilization of conventional deadbeat predictive control.

7 Conclusion

This study introduces an Adaptive Deadbeat Predictive Control (ADPC) strategy for a Permanent Magnet Synchronous Motor (PMSM) integrated into a solar-powered electric vehicle (SEV). The central aim of this control is to mitigate the impact of stator resistance mismatches. The core principle revolves around real-time control adaptation through continuous estimation of the resistance value, considering the actual winding temperature. The simulation outcomes validate the effectiveness of the proposed ADPC strategy in enhancing control robustness against stator resistance variations.
By offering dynamic adjustments to the control parameters based on the changing temperature and its effect on resistance, the ADPC demonstrates its potential to significantly improve the stability and performance of PMSMs in the context of SEVs. This approach holds promise for advancing the reliability and efficiency of electric vehicle propulsion systems, contributing to the broader objective of sustainable transportation. In future work, an experimental setup, shown in Figure 21, will be used to illustrate the effectiveness of the proposed ADPC control strategy. It includes (1) a Permanent Magnet Synchronous Motor (PMSM) integrated in the wheel, functioning as the propulsion unit; (2) a Microchip DC/AC power converter responsible for converting direct current from the power source into alternating current; (3) an incremental encoder system, which provides instantaneous measurement of the wheel's rotational speed; (4) a measurement board that provides current measurements; and (5) a Texas Instruments F28335 digital signal controller, which is used for the control algorithm implementation.

This work was supported by the Tunisian Ministry of High Education and Research under Grant LSE-ENIT-LR 11ES15 and funded in part by the Programme d'Encouragement des Jeunes Chercheurs (PEJC) (Code 21PEJC D6P10).
From Stretched String to gyre

The numerical technique demonstrated in the Stretched String Problem section provides a powerful analog to how gyre solves the oscillation equations. The full details of gyre's approach are laid out in Townsend & Teitler (2013); in this section we briefly summarize it, highlighting similarities and differences with the stretched-string problem.

Similar to the stretched-string problem, gyre begins by separating variables in space and time. For the radial displacement perturbation \(\xir\), trial solutions take the form

\[\xir(r,\theta,\phi;t) = \operatorname{Re} \left[ \sqrt{4\pi} \, \txir(r) \, Y^{m}_{\ell}(\theta,\phi) \, \exp(-\ii \sigma t) \right]\]

(this is taken from the Separated Equations section). In addition to the same sinusoidal time dependence as in eqn. (3), a spherical harmonic term \(Y^{m}_{\ell}\) appears because we are separating in three (spherical) spatial coordinates rather than one.

As with the stretched-string problem, gyre discretizes the ODE governing \(\txir(r)\) and related quantities on a spatial grid \(\{x_{1},x_{2},\ldots,x_{N}\}\). However, a couple of important differences arise at this juncture. First, the oscillation equations are fourth order (sixth, in the non-adiabatic case). Rather than employing finite-difference approximations to high-order differential operators, gyre instead decomposes the problem into a system of coupled first-order equations. This system is written generically as

\[x \deriv{\vty}{x} = \mA \, \vty,\]

where \(\vty\) is a vector of \(\neqn\) dependent variables, and \(\mA\) is a \(\neqn \times \neqn\) Jacobian matrix. In the adiabatic case, \(\neqn=4\); in the non-adiabatic case, \(\neqn=6\). Second, while the above equation system can be discretized using a simple finite-difference approximation to the left-hand side, gyre offers more sophisticated approaches with higher orders of accuracy.
These include the Magnus schemes described in Townsend & Teitler (2013), and the implicit Runge-Kutta schemes mentioned in Townsend et al. (2018). The choice of scheme is set by the diff_scheme parameter of the &num namelist group. The discretization leads to difference equations of the form

\[\vty_{j+1} = \mY_{j+1;j} \, \vty_{j},\]

relating the dependent variable vector at adjacent grid points. The \(\neqn \times \neqn\) fundamental solution matrix \(\mY_{j+1;j}\) is evaluated from the value(s) of \(\mA\) within the interval \([x_{j},x_{j+1}]\) using the discretization scheme. There are \(N-1\) of these sets of difference equations. They are augmented with the boundary conditions

\[\subin{\mB} \, \vty_{1} = 0, \qquad\qquad \subout{\mB} \, \vty_{N} = 0,\]

where \(\subin{\mB}\) is a \(\nin \times \neqn\) matrix representing the \(\nin\) inner boundary conditions, and \(\subout{\mB}\) is a \(\nout \times \neqn\) matrix representing the \(\nout\) outer boundary conditions (note that \(\nin + \nout = \neqn\)). Together, the difference equations and boundary conditions comprise a linear system of \(\neqn N\) algebraic equations in \(\neqn N\) unknowns.

Linear System

The linear system can be written in the same form (cf. eqn. 4) as with the stretched-string problem. However, now \(\vu\) is the vector with components

\[\begin{split} \vu = \begin{pmatrix} \vty_{1} \\ \vty_{2} \\ \vdots \\ \vty_{N-1} \\ \vty_{N} \end{pmatrix}\end{split}\]

and the system matrix \(\mS\) is an \(\neqn N \times \neqn N\) block-staircase matrix with components

\[\begin{split}\mS = \begin{pmatrix} \subin{\mB} & \mz & \cdots & \mz & \mz \\ -\mY_{2;1} & \mI & \cdots & \mz & \mz \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \mz & \mz & \cdots & -\mY_{N;N-1} & \mI \\ \mz & \mz & \cdots & \mz & \subout{\mB} \end{pmatrix}.\end{split}\]

As before, the linear system (4) has non-trivial solutions only when the determinant of \(\mS\) vanishes.
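As a concrete, highly simplified illustration (not gyre's implementation), the following Python sketch assembles a block-staircase matrix of exactly this shape for the stretched-string analog: a two-variable first-order system y = (u, u'), Dirichlet conditions at both ends, and closed-form fundamental matrices, so the exact eigenfrequencies are n·π. The grid size and system are assumptions chosen for brevity.

```python
import numpy as np

def discriminant(sigma, N=64):
    # det(S) for the stretched string (c = L = 1) written as the first-order
    # system y' = A y with y = (u, u'); exact eigenfrequencies are n*pi.
    x = np.linspace(0.0, 1.0, N)
    S = np.zeros((2 * N, 2 * N))
    S[0, 0] = 1.0                      # inner boundary condition: u(0) = 0
    for j in range(N - 1):
        h = x[j + 1] - x[j]
        # fundamental matrix Y_{j+1;j} = exp(A h), known in closed form here
        Y = np.array([[np.cos(sigma * h), np.sin(sigma * h) / sigma],
                      [-sigma * np.sin(sigma * h), np.cos(sigma * h)]])
        r = 1 + 2 * j                  # rows encoding y_{j+1} - Y y_j = 0
        S[r:r + 2, 2 * j:2 * j + 2] = -Y
        S[r:r + 2, 2 * j + 2:2 * j + 4] = np.eye(2)
    S[-1, -2] = 1.0                    # outer boundary condition: u(1) = 0
    return np.linalg.det(S)
```

The determinant changes sign across each eigenfrequency, which is what makes the bracketing strategy described below workable.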
Thus, gyre finds eigenvalues of the oscillation equations by solving the characteristic equation

\[\Dfunc(\omega) \equiv \det(\mS) = 0,\]

where the dimensionless frequency

\[\omega \equiv \sqrt{\frac{R^{3}}{GM}} \, \sigma\]

is the product of the star's dynamical timescale and the oscillation frequency \(\sigma\). (Internally, gyre works extensively with such dimensionless quantities, as doing so improves the stability of the numerical algorithms.)

Scanning for Eigenfrequencies

In the adiabatic case, gyre searches for roots of the discriminant function \(\Dfunc\) using the same bracketing and refinement strategies as in the stretched-string problem. In the non-adiabatic case, a complication is that the discriminant function and the dimensionless frequency are both complex quantities. Solving the characteristic equation in the complex plane is computationally challenging because there is no direct equivalent to bracketing and refinement. gyre implements a couple of different approaches to this problem, as discussed in the Non-Adiabatic Oscillations section.
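The bracketing-and-refinement idea for the adiabatic (real-valued) case can be sketched in a few lines; this is an illustrative stand-in for the scanning step, not gyre's actual solver, and the function name is made up.

```python
import math

def scan_for_roots(D, omega_min, omega_max, n_scan=200, tol=1e-10):
    # Evaluate the discriminant D on a frequency grid; a sign change between
    # adjacent grid points brackets a root, which bisection then refines.
    # This only works when D is real-valued (the adiabatic case).
    grid = [omega_min + i * (omega_max - omega_min) / n_scan
            for i in range(n_scan + 1)]
    roots = []
    for lo, hi in zip(grid, grid[1:]):
        if D(lo) * D(hi) < 0:          # bracket found
            while hi - lo > tol:       # refine by bisection
                mid = 0.5 * (lo + hi)
                if D(lo) * D(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots
```

Run against a known discriminant such as sin(ω), this recovers the roots π, 2π, 3π on the interval [1, 10].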
Compound Interest Calculation: A Bright Future Awaits!

How can compound interest help you grow your savings? Constable Clancy deposited $56,000 into a savings account today. For how many months from now can he keep the money in the account if interest is 4.0% compounded quarterly? Final answer: Constable Clancy can keep the money in the savings account for approximately [number of months] months.

Compound interest is a powerful tool that can help you grow your savings over time. Because you earn interest not only on your initial deposit but also on the interest already earned, your money can snowball into a substantial sum.

To find the time period for compound interest, start from the future value relation FV = PV * (1 + r/n)^(n*t) and solve for the time in months:

t_months = (log(FV/PV) / log(1 + r/n)) * (12 / n)

where:
• t_months is the time period in months
• FV is the future value
• PV is the present value
• r is the annual interest rate
• n is the number of compounding periods per year

In this case, Constable Clancy deposited $56,000 at an interest rate of 4.0% compounded quarterly (r = 0.04, n = 4). The problem as stated does not give the target future value (FV), so that figure is needed before we can determine how many months he can keep the money in the savings account. Once the future value is known, substituting it into the formula above gives the number of months.

Compound interest is a valuable financial concept that can work in your favor to grow your savings and secure a brighter financial future. By understanding how compound interest works, you can make informed decisions about saving and investing your money.
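Since the problem statement above omits the target future value, here is a quick Python sketch of the month calculation using a hypothetical target of $60,000 (an assumption for illustration only):

```python
import math

def months_to_reach(pv, fv, annual_rate, periods_per_year):
    # Number of compounding periods k satisfying (1 + r/n)^k = FV/PV,
    # then converted from periods to months.
    k = math.log(fv / pv) / math.log(1 + annual_rate / periods_per_year)
    return k * 12 / periods_per_year

# Hypothetical target: grow $56,000 to $60,000 at 4.0% compounded quarterly.
months = months_to_reach(56_000, 60_000, 0.04, 4)  # roughly 20.8 months
```

Plugging the result back into FV = PV * (1 + r/n)^(n*t) confirms the balance reaches the target at that time.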
power calculation – Solved Problems

A simple circuit is solved and the power absorbed or supplied by each element is determined. KCL as well as Ohm's law are used in solving the circuit. The passive sign convention is used in determining element powers. It is shown and discussed how a source, here a current source, can be neither absorbing nor supplying power. It is also noted that resistors are passive elements and always absorb power.

Mesh Analysis – Supermesh
Mesh analysis is used to solve a circuit which has a supermesh. After solving the circuit, the power of the sources is determined.

Nodal Analysis – Circuit with Dependent Voltage Source
A 6-node circuit is solved with nodal analysis. It contains one dependent voltage source, two independent voltage sources, two independent current sources, and some resistors. The dependent voltage source causes two nodes to form a supernode.

Problem 1-8: Nodal Analysis – Power of Current Source
A simple DC resistive circuit with three resistors and two current sources is solved by nodal analysis, and the power of one current source is determined.
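As a small illustration of the nodal-analysis technique used in these problems, the following Python sketch solves a made-up two-node resistive circuit (component values are assumptions, not from any of the problems above) and checks that the power supplied by the source equals the power absorbed by the resistors:

```python
import numpy as np

# Example circuit (made up for illustration): a 1 A current source drives
# node 1; R1 = 2 ohm from node 1 to ground, R2 = 4 ohm between nodes 1 and 2,
# R3 = 4 ohm from node 2 to ground.
R1, R2, R3, Is = 2.0, 4.0, 4.0, 1.0

# Conductance matrix from KCL at each node (G v = I).
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2, 1/R2 + 1/R3]])
I = np.array([Is, 0.0])        # injected node currents
v = np.linalg.solve(G, I)      # node voltages: v[0] = 1.6 V, v[1] = 0.8 V

p_source = Is * v[0]           # power supplied by the current source
p_resistors = v[0]**2 / R1 + (v[0] - v[1])**2 / R2 + v[1]**2 / R3
```

The balance p_source = p_resistors reflects the point made above: resistors always absorb power, and every watt they absorb must be supplied by a source.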
Refining the Peak Oil Rosy Scenario Part 3: Hubbert's revised model Techniques of prediction of future events range from the completely irrational to the semi-rational to the highly rational. Rational techniques of predicting the behavior of a system require first an understanding of its mechanism and of the constraints under which it operates and evolves. This permits the development of appropriate theoretical relations, which, when applied to the data of the system, permit solutions of its future evolution with varying degrees of exactitude. Such a theoretical analysis provides an essential criterion for what data are significant and necessary for the solution. from M.K. Hubbert, TECHNIQUES OF PREDICTION AS APPLIED TO THE PRODUCTION OF OIL AND GAS, in Oil and Gas Supply Modeling, Proceedings of a Symposium held at the Department of Commerce, Washington, DC, June 15-20, 1980, Edited by: Saul l. Gass, Nat. Bur. Stand. (U.S.). Spec. Publ. 631, 778 pages (May 1982) (excerpt from Abstract) After his 1956 paper, Hubbert admitted that he ran into another problem that forced him to develop another predictive methodology, as he described in his 1980 paper: The weakness of this analysis arose from the lack of an objective method of estimating the magnitude of Q∞ from primary petroleum-industry data. The estimates extant in 1956 were largely intuitive judgments of people with wide knowledge and experience, and they were reasonably unbiased because of the comfortable prospects for the future they were thought to imply. When it was shown, however, that if Q∞ for crude oil should fall within the range of 150-200 billion barrels the date of peak-production rate would have to occur within about the next 10 to 15 years, this complacency was shattered. It soon became evident that the only way this unpleasant conclusion could be voided would be to increase the estimates of Q∞, not by fractions but by multiples.
Consequently, with insignificant new information, within a year published estimates began to be rapidly increased, and during the next 5 years, successively larger estimates of 250, 300, 400, and eventually 590 billion barrels were published. This lack of an objective means of estimating Q∞ directly, and the 4-fold range of such estimates, made it imperative that better methods of analysis, based directly upon the primary objective and publicly available data of the petroleum industry, should be derived..... "Techniques" p. 43-44

In "Techniques" (p. 45-50) Hubbert describes these better methods, which are mainly based on the idea that it was better to examine the production rate (dQ/dt) as a function of Q, the cumulative oil produced, rather than as a function of time:

...It is convenient, therefore, to consider the production rate P, or dQ/dt, as a function of Q, rather than of time. In this system of coordinates, dQ/dt is zero when Q = 0, and when Q = Q∞. Between these limits dQ/dt > 0, and outside these limits, equal to zero. While it is possible that during the production cycle dQ/dt could become zero during some interval of time, for any large region this never happens. Hence we shall assume that for 0 < Q < Q∞, dQ/dt > 0. (19)

The curve of dQ/dt versus Q between the limits 0 and Q∞ can be represented by the Maclaurin series,

dQ/dt = c_0 + c_1 Q + c_2 Q^2 + c_3 Q^3 + ... (20)

Since, when Q = 0, dQ/dt = 0, it follows that c_0 = 0. Then

dQ/dt = c_1 Q + c_2 Q^2 + c_3 Q^3 + ..., (21)

and, since the curve must return to zero when Q = Q∞, the minimum number of terms that will permit this, and the simplest form of the equation, becomes the second-degree equation,

dQ/dt = c_1 Q + c_2 Q^2. (22)

By letting a = c_1 and -b = c_2 this can be rewritten as

dQ/dt = aQ - bQ^2. (23)

Then, since when Q = Q∞, dQ/dt = 0,

aQ∞ - bQ∞^2 = 0, so b = a/Q∞, and

dQ/dt = a(Q - Q^2/Q∞).
(24)

After noting that equation (24) defines a parabola, Hubbert further noted: It is to be emphasized that the curve of dQ/dt versus Q does not have to be a parabola, but that a parabola is the simplest mathematical form that this curve can assume. We may accordingly regard the parabolic form as a sort of idealization for all such actual data curves, just as the Gaussian error curve is an idealization of actual probability distributions.

Hubbert also recognized that equation (24) can be converted into a linear equation with respect to (dQ/dt)/Q and Q: One further important property of equation (24) becomes apparent when we divide it by Q. We then obtain

(dQ/dt)/Q = a - (a/Q∞)Q. (27)

This is the equation of a straight line with a slope of -a/Q∞ which intersects the vertical axis at (dQ/dt)/Q = a and the horizontal axis at Q = Q∞. If the data, (dQ/dt)/Q versus Q, satisfy this equation, then the plotting of this straight line gives the values for its constants Q∞ and a.

The linear equation (27) was of critical importance to Hubbert because it allowed him to estimate Q∞, and hence Q∞/2, without the need to rely upon guesstimates from experts, as he did in 1956, and which Hubbert suspected were bogus after his 1956 paper. Hubbert comments: The virtue of the first of these two equations lies in the fact that it depends only upon the plotting of primary data, (dQ/dt)/Q versus Q, with no a priori assumptions whatever. Using actual data for Q and dQ/dt, it is to be expected that there will be a considerable scatter of the plotted points as Q → 0, because in that case both Q and dQ/dt are small quantities and even small irregularities of either quantity can produce a large variation in their ratio. For larger values of both quantities, as the production cycle evolves, these perturbations become progressively smaller and a comparatively smooth curve is produced.
If the data satisfy the linear equation, then a determinate straight line results whose extrapolation to the vertical axis as Q → 0 gives the constant a, and whose extrapolated intercept with the Q-axis gives Q∞. However, even if the data do not satisfy a linear equation, they will nevertheless produce a definite curve whose intercept with the Q-axis will still be at Q = Q∞. "Techniques" (p. 52)

I will not show Hubbert's derivation here, but Hubbert also showed that a form of the logistic equation could be derived from equation (24) above:

Q = Q∞/(1 + N_0 e^(-at)), where N_0 = (Q∞ - Q_0)/Q_0 (38)

Hubbert notes: In equation (38) the choice of the date for t = 0 is arbitrary so long as it is within the range of the production cycle so that N_0 will have a determinate finite value.

This is another important point for Hubbert, because it shows that his basic assumption — modeling dQ/dt versus Q with the simplest possible equation, a parabola — is equivalent to assuming a logistic growth curve for the change in Q (cumulative oil production) with time.
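Hubbert's linearization (equation 27) is easy to demonstrate on synthetic data: generate production rates from the parabolic relation (24) with assumed constants, fit the straight line (dQ/dt)/Q versus Q, and recover a and Q∞ from the intercept and slope. The numbers below are illustrative, not Hubbert's data.

```python
import numpy as np

# Assumed "true" constants for the synthetic example (illustrative only).
Q_inf_true, a_true = 163.0, 0.06

Q = np.linspace(10.0, 120.0, 50)          # cumulative production samples
rate = a_true * Q * (1 - Q / Q_inf_true)  # dQ/dt from eqn. (24)

# Fit the Hubbert line (dQ/dt)/Q = a - (a/Q_inf) Q, eqn. (27).
slope, intercept = np.polyfit(Q, rate / Q, 1)
a_est = intercept                         # vertical-axis intercept gives a
Q_inf_est = -intercept / slope            # Q-axis intercept gives Q_inf
```

With noiseless data the fit recovers the constants exactly; with real production data the scatter Hubbert describes near Q = 0 is why he emphasized the later, smoother part of the curve.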
In the case of crude oil, there is also the uncertainty regarding the magnitude of future improvements in extraction technology. With due regard for these uncertainties, estimates for crude oil that do not exceed that given here by more than 10 percent may still be within the range of geological uncertainties; estimates that do not exceed this by more than 20 percent may be within the combined range of geological and technological uncertainties. Estimates for natural gas that do not exceed the upper limit of the range given above by more than 10 percent may likewise be regarded as possible although improbable. But estimates for either oil or gas, such as those that have been published repeatedly during the last 25 years, which exceed the present estimates by multiples of 2, 3, or more, are so completely irreconcilable with the cumulative data of the petroleum industry as no longer to warrant being accorded the status of scientific respectability.
Comments on Computational Complexity: A natural function with very odd properties

Sasho (2012-09-25): A correction: I meant as delta goes to 0.

Sasho (2012-09-25): Gasarch, 1-d Brownian motion is usually defined as a probability distribution on functions f: R → R with the following properties:
* for any reals x, y, f(x) - f(y) is distributed like a Gaussian random variable with mean 0 and variance (x-y)^2
* the increments f(x_1) - f(x_2), f(x_2) - f(x_3), etc., for any x_1 < x_2 < x_3, ..., are independent
* f is continuous with probability 1
A classical construction by Levy shows that such things exist. Another classical result is that with probability 1, a function sampled from this distribution is continuous and non-differentiable. Intuitively, the random variable f is a continuous-time random walk on the real line: f(t) tells us the position at time t (well, and we have negative times, but let's restrict f to [0, 1) to make this easier). One can get the Brownian motion as the limit of discrete random walks with smaller and smaller steps. I.e., think of a random walk that proceeds in time steps of length delta and at each time i*delta moves to f((i-1)*delta) + N(0, delta), where N(0, delta) is a Gaussian with mean 0 and variance delta^2. This defines f at points that are integer multiples of delta. To define f everywhere, interpolate linearly. Now, as delta goes to infinity, f converges to Brownian motion in distribution. This in fact happens even if the steps are not Gaussian but follow any distribution with finite variance. This "central limit theorem" for random walks is the Donsker invariance principle. A great reference is the book by Moerters and Peres.

Boris Borcich (2012-08-21): #2 - iirc any non-empty measure 0 subset of the interval without isolated points is Cantor (modulo an increasing continuous map of the interval). Being essentially unique makes it natural, I'd say.

Anonymous (2012-08-08): "In general, any semi-martingale can be written as a deterministic process plus a brownian motion process plus a gamma process." Interesting. Can you give an accessible reference for this result?

Dean Foster (2012-08-01): The gamma process is such that each increment looks like a gamma random variable. So it consists of a few jumps mixed with more small jumps mixed with a huge number of tiny jumps, etc. One of my favorite properties of the gamma process is that it is a "perfect sun shade." Meaning if you were to sit under a gamma process you would be able to see in all directions EXCEPT straight up and down. So if the sun were perfectly overhead, it wouldn't block any views at all--but it would block the sun out completely. I've rethought my answer to #2. I'm going with it being natural also. I had forgotten that the zeros of a Brownian motion form a perfect set and are uncountable. So I'm happy with that being natural also. So all three have nice probabilistic answers and so show that the properties are "generic."

Anonymous (2012-08-01): Can't be true if you want continuous as well.

GASARCH (2012-08-01): 1) Please elaborate- what is the 1-d Brownian Motion function? I assume it's something like, a particle starts at 0 and at every stage it goes left or right (how much?) and so f(t) = prob that it's at place t. But I am unclear on this so please elaborate. I agree that something like that is clearly natural. 2) What is the Gamma Process?

Anonymous (2012-07-31): I agree with the previous anonymous comment: If you have ever viewed real numbers in terms of their decimal expansions I don't see how you can avoid acknowledging that the (say) decimal strings that only have 0 and 1 in their expansions is a natural set.

Dean Foster (2012-07-31): Answer to #6 is the gamma process! I'm still going with the stochastic process definition of natural. So I should mention the gamma process as what I think of as the "answer" to question 6: http://en.wikipedia.org/wiki/Gamma_process. It is a pure jump process and hence only has a countable number of jumps--this means its derivative is equal to zero almost everywhere. But, since there is a chance of a jump in any arbitrarily small time interval--it is a purely increasing process. It has lots of other nice properties--but I won't go into them now. (In general, any semi-martingale can be written as a deterministic process plus a brownian motion process plus a gamma process. I consider semi-martingales the most natural processes we can work with, and the french school of probability proves theorems of this nature.)

Anonymous (2012-07-31): Isn't the Cantor set natural when seen as {0,1}^Z, for example? (A very natural family of objects, used in many places.) And isn't the boundary of a fractal a natural thing? Fractals also appear in many places, where the goal is not only to "define a funny function".

John Sidles (2012-07-31): Please let me agree with Sasho, for the following pragmatic reason. A central teaching of modern quantum information science is that dynamical phenomena whose coarse-grained descriptions are naturally algebraic (e.g., projective quantum measurements) are associated to informatically equivalent fine-grained descriptions (e.g., Carmichael-style quantum trajectory unravellings) whose functional descriptors generically have the paradoxical traits that Lance's post describes. Hence, from a purely pragmatic point-of-view, students henceforth should be taught that these paradoxical functions are "natural". It's incredibly obvious, isn't it Mandrake? :)

Sasho (2012-07-31): For #4, as Konstantin and I pointed out last time, there is a probabilistic example that's at least as natural as the gambling game: the Wiener process, aka Brownian motion in one dimension. The gambling game is neat, and of a similar spirit. By analogy with the devil's staircase, maybe the points where F has derivative zero (or where the player loses money?) form an uncountable measure zero set. Then we'll have random walk based examples for all three beasts.

GASARCH (2012-07-31): Ah- good point. That raises the odd question- is there a strictly increasing function with deriv 0 at almost every point of [0,1], strictly increasing, and you can PROVE this easily (the function may be

Anonymous (2012-07-31): Doesn't "strictly increasing" mean x > y -> f(x) > f(y)? Cantor functions are nondecreasing.
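The discrete construction described in the Brownian-motion comment above — Gaussian steps over small time increments, interpolated between grid points — can be sketched in a few lines of Python. With the usual normalization, the position at time t has mean 0 and variance t; the step counts and sample sizes below are arbitrary choices for a quick illustrative simulation, not part of the discussion above.

```python
import random
import statistics

def brownian_path(n_steps, t_max=1.0, rng=random):
    # Random walk with Gaussian steps of variance dt; as n_steps grows this
    # converges in distribution to Brownian motion (Donsker's principle).
    dt = t_max / n_steps
    f, path = 0.0, [0.0]
    for _ in range(n_steps):
        f += rng.gauss(0.0, dt ** 0.5)
        path.append(f)
    return path

random.seed(1)
endpoints = [brownian_path(128)[-1] for _ in range(2000)]
var_hat = statistics.pvariance(endpoints)  # should be close to t_max = 1
```

The empirical variance of the endpoints lands near 1, matching Var[f(1)] = 1 for standard Brownian motion on [0, 1].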
Excel Formula: Translate Years into Days

In this tutorial, we will learn how to write an Excel formula that translates years into days. This formula is useful when you need to convert a given number of years into an approximate number of days: multiplying the number of years by 365 gives the estimate. Note that this does not account for leap years or the exact number of days in each year.

To translate years into days, use the multiplication operator (*) to multiply the value in a cell by 365, where the value in the cell represents the number of years you want to convert.

An Excel formula

=A1*365

Formula Explanation

This formula multiplies the value in cell A1 by 365 to translate years into days.

Detailed Explanation

1. The formula uses the multiplication operator (*) to multiply the value in cell A1 by 365.
2. The value in cell A1 represents the number of years that you want to convert into days.
3. Multiplying the number of years by 365 gives an approximation of the number of days.
4. The formula assumes that there are 365 days in a year. If you want a more accurate conversion, you can use a different value, such as 365.25, to account for leap years.

For example, if cell A1 contains the value 2, the formula =A1*365 returns 730: 2 years is approximately equal to 730 days. Similarly, if cell A1 contains 5, the formula returns 1825, indicating that 5 years is approximately equal to 1825 days. Keep in mind that this formula provides only an approximation and may not be precise in all cases.
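The same conversion is a one-liner in Python; this sketch mirrors the spreadsheet formula and exposes the days-per-year constant so 365.25 can be swapped in:

```python
def years_to_days(years, days_per_year=365):
    # Mirrors the spreadsheet formula =A1*365; pass days_per_year=365.25
    # to roughly account for leap years.
    return years * days_per_year
```

For instance, years_to_days(2) gives 730 and years_to_days(2, 365.25) gives 730.5.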