content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Value Left - Working with single and double digits
Hello, For the below formula I am attempting to grab the leading numbers from an alphanumeric field. the numbers will always be leading but will range between 1-and 10.
I am using the following with no success
=VALUE(LEFT([Probability Score]@row, 2))
If the cell contains 10 then it works. But if it contains a single-digit number it fails. If I use:
=VALUE(LEFT([Probability Score]@row))
Then I only get a single-digit return.
I have attempted to use helper columns but to no avail yet.
The intent is to multiple two fields together using the formula above. See screenshot for reference.
Best Answer
• This should remove the blank spaces on single digit values
=VALUE(SUBSTITUTE(LEFT([Probability Score]@row, 2), " ", ""))
• This should remove the blank spaces on single digit values
=VALUE(SUBSTITUTE(LEFT([Probability Score]@row, 2), " ", ""))
• Try this for your Column28. You should be able to use the same syntax for Column29
=VALUE(LEFT([Magnitude Score]@row, FIND("-", [Magnitude Score]@row) - 2))
Will this work for you?
• Many thanks for the quick assistance, but the options worked! | {"url":"https://community.smartsheet.com/discussion/90226/value-left-working-with-single-and-double-digits","timestamp":"2024-11-03T03:44:25Z","content_type":"text/html","content_length":"393318","record_id":"<urn:uuid:e5b4127e-48c4-487b-b408-39d477f66d36>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00586.warc.gz"} |
Now that fromRoman works properly with good input, it's time to fit in the last piece of the puzzle: making it work properly with bad input. That means finding a way to look at a string and determine
if it's a valid Roman numeral. This is inherently more difficult than validating numeric input in toRoman, but you have a powerful tool at your disposal: regular expressions.
If you're not familiar with regular expressions and didn't read Chapter 7, Regular Expressions, now would be a good time.
As you saw in Section 7.3, “Case Study: Roman Numerals”, there are several simple rules for constructing a Roman numeral, using the letters M, D, C, L, X, V, and I. Let's review the rules:
1. Characters are additive. I is 1, II is 2, and III is 3. VI is 6 (literally, “5 and 1”), VII is 7, and VIII is 8.
2. The tens characters (I, X, C, and M) can be repeated up to three times. At 4, you need to subtract from the next highest fives character. You can't represent 4 as IIII; instead, it is represented
as IV (“1 less than 5”). 40 is written as XL (“10 less than 50”), 41 as XLI, 42 as XLII, 43 as XLIII, and then 44 as XLIV (“10 less than 50, then 1 less than 5”).
3. Similarly, at 9, you need to subtract from the next highest tens character: 8 is VIII, but 9 is IX (“1 less than 10”), not VIIII (since the I character can not be repeated four times). 90 is XC,
900 is CM.
4. The fives characters can not be repeated. 10 is always represented as X, never as VV. 100 is always C, never LL.
5. Roman numerals are always written highest to lowest, and read left to right, so order of characters matters very much. DC is 600; CD is a completely different number (400, “100 less than 500”).
CI is 101; IC is not even a valid Roman numeral (because you can't subtract 1 directly from 100; you would need to write it as XCIX, “10 less than 100, then 1 less than 10”).
Example 14.12. roman5.py
This file is available in py/roman/stage5/ in the examples directory.
If you have not already done so, you can download this and other examples used in this book.
"""Convert to and from Roman numerals"""
import re
#Define exceptions
class RomanError(Exception): pass
class OutOfRangeError(RomanError): pass
class NotIntegerError(RomanError): pass
class InvalidRomanNumeralError(RomanError): pass
#Define digit mapping
romanNumeralMap = (('M', 1000),
('CM', 900),
('D', 500),
('CD', 400),
('C', 100),
('XC', 90),
('L', 50),
('XL', 40),
('X', 10),
('IX', 9),
('V', 5),
('IV', 4),
('I', 1))
def toRoman(n):
"""convert integer to Roman numeral"""
if not (0 < n < 4000):
raise OutOfRangeError, "number out of range (must be 1..3999)"
if int(n) <> n:
raise NotIntegerError, "non-integers can not be converted"
result = ""
for numeral, integer in romanNumeralMap:
while n >= integer:
result += numeral
n -= integer
return result
#Define pattern to detect valid Roman numerals
romanNumeralPattern = '^M?M?M?(CM|CD|D?C?C?C?)(XC|XL|L?X?X?X?)(IX|IV|V?I?I?I?)$' def fromRoman(s):
"""convert Roman numeral to integer"""
if not re.search(romanNumeralPattern, s): raise InvalidRomanNumeralError, 'Invalid Roman numeral: %s' % s
result = 0
index = 0
for numeral, integer in romanNumeralMap:
while s[index:index+len(numeral)] == numeral:
result += integer
index += len(numeral)
return result
This is just a continuation of the pattern you discussed in Section 7.3, “Case Study: Roman Numerals”. The tens places is either XC (90), XL (40), or an optional L followed by 0 to 3 optional X
characters. The ones place is either IX (9), IV (4), or an optional V followed by 0 to 3 optional I characters.
Having encoded all that logic into a regular expression, the code to check for invalid Roman numerals becomes trivial. If re.search returns an object, then the regular expression matched and the
input is valid; otherwise, the input is invalid.
At this point, you are allowed to be skeptical that that big ugly regular expression could possibly catch all the types of invalid Roman numerals. But don't take my word for it, look at the results:
Example 14.13. Output of romantest5.py against roman5.py
fromRoman should only accept uppercase input ... ok
toRoman should always return uppercase ... ok
fromRoman should fail with malformed antecedents ... ok
fromRoman should fail with repeated pairs of numerals ... ok
fromRoman should fail with too many repeated numerals ... ok
fromRoman should give known result with known input ... ok
toRoman should give known result with known input ... ok
fromRoman(toRoman(n))==n for all n ... ok
toRoman should fail with non-integer input ... ok
toRoman should fail with negative input ... ok
toRoman should fail with large input ... ok
toRoman should fail with 0 input ... ok
Ran 12 tests in 2.864s
One thing I didn't mention about regular expressions is that, by default, they are case-sensitive. Since the regular expression romanNumeralPattern was expressed in uppercase characters, the
re.search check will reject any input that isn't completely uppercase. So the uppercase input test passes.
More importantly, the bad input tests pass. For instance, the malformed antecedents test checks cases like MCMC. As you've seen, this does not match the regular expression, so fromRoman raises an
InvalidRomanNumeralError exception, which is what the malformed antecedents test case is looking for, so the test passes.
In fact, all the bad input tests pass. This regular expression catches everything you could think of when you made your test cases.
And the anticlimax award of the year goes to the word “OK”, which is printed by the unittest module when all the tests pass.
When all of your tests pass, stop coding. | {"url":"http://www.diveintopython.net/unit_testing/stage_5.html","timestamp":"2024-11-13T11:36:10Z","content_type":"text/html","content_length":"21993","record_id":"<urn:uuid:528d67a5-bf47-4b98-94b3-de403ded4f3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00583.warc.gz"} |
What are the characteristics of a three-dimensional work? – - La Cultura de los Mayas
What are the characteristics of a three-dimensional work? –
What are the characteristics of a three-dimensional work?
– Three-dimensional works of art have three dimensions: height, width and depth, whose shapes can be geometric and organic. – They can be appreciated from any angle or perspective, unlike
two-dimensional works of art, which can only be seen from the front.
What arts generate three-dimensional objects?
Sculpture is a true three-dimensional representation of an object that has height, width and depth, can be walked through and viewed from different angles. You can also bring three-dimensionality to
a drawing or painting through a method called perspective and shading.
What are three-dimensional images?
They are the three dimensions that make up the three-dimensional representation and, therefore, are present in any 3D animation project. In fact, in our reality everything is three-dimensional
because it has length, height and depth.
What is the name of the three dimensions?
Three dimensions of space: height, width and depth.
What are projections in space?
Informally, a projection is a drawing technique used to represent a three-dimensional object on a surface. Mathematically, a projection is an idempotent linear transformation onto a vector space.
How is the illusion of three-dimensional form achieved in drawing?
In figurative drawing, the illusion of three dimensions is usually named as volume. The essential resources to achieve this volume are the basic geometry and chiaroscuro. However, a truly
three-dimensional drawing involves considering more elements than just those that produce the volume.
What is a three-dimensional figure in arts?
Three-dimensional art is the one that is not flat, but has volume; unlike a drawing or a painting that are two-dimensional arts. Three-dimensional figures are also called solids, they are a portion
of space bounded by flat or curved faces.
How is three-dimensionality achieved in a plane?
Representation of three-dimensional space in the plane. Objects that are further away from the observer lose saturation, that is, they are seen as lighter or more gray. You can use resizing and
toning at the same time.
What is used to represent three-dimensionality or suggest volume and depth in the plane?
4- CONICAL PERSPECTIVE: It is a representation system studied by architects and draftsmen to create a sense of depth in the plane and has its own rules of representation.
What is depth in artistic drawing?
Depth is the distance of an element from a horizontal reference plane when said element is below the reference. When the opposite occurs, it is called elevation, level, or simply height.
What is form and depth in plastic composition?
How did they represent depth in ancient times?
In preclassical antiquity, in pre-Columbian paintings from Mesoamerica, and in Oriental painting, the usual means of suggesting depth is the different elevation of the figures relative to the lower
edge of the relief or painting. In Roman painting, we already find geometric and atmospheric perspective.
What are the elements of the sculpture?
CONSTITUTIVE ELEMENTS OF SCULPTURE The space exists, the material exists; there is gravity, proportions… While painting is characterized by its optical nature claimed by color, sculpture is
characterized by its physical nature, essentially tactile.
How is sculpture perceived?
Finish, texture and polychromy • The sculpture is perceived through the outer surface, its shape-surface and its complete perception should be through touch as well as sight. sculpture are
recognizable in nature but non-existent or unrecognizable volumes may appear. | {"url":"https://culturalmaya.com/what-are-the-characteristics-of-a-three-dimensional-work/","timestamp":"2024-11-12T09:01:58Z","content_type":"text/html","content_length":"47388","record_id":"<urn:uuid:f4ee97dc-43c5-4b3e-a069-61d84b1ebfb3>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00062.warc.gz"} |
Ramblings of a techie
I have been working mostly in Java for the past 4 years and my new work demands me to learn Ruby. So I've started looking into it for the past week and thought I would share my learning process so
that it would be useful for someone out there. And thats the objective of this series of posts for someone having a Java background and wanting to learn Ruby.
Lets get into business. I am really excited to see the way Ruby language syntax is and I am eager to learn this mouth-watering scripting language of coolest nature. During my initial analysis, I
learnt that Ruby is,
1. Implemented in many other high level languages such as C, Java, .Net etc.,
2. Is relatively slower.
3. Ruby is a high level language made of a high level language.
4. Not suitable for large applications.
5. Completely open source and is in a budding state.
6. Has a framework called Rails which is good for Agile development
7. Community out there is getting better day by day and finding help immediately should not be a problem as time goes by.
8. Has significant changes between releases which many developers wont welcome right away.
9. Running time cannot be easily estimated since the language has several underlying implementation in several languages. But there are various profilers available to test them.
10. Books are always outdated by the time when you finish them.
Now that the premise being established on what Ruby is, we shall learn it and have a comparison/understanding on how it relates to Java.
Now that google has removed the restriction on its documents, it is time for us to start exploiting it.
No need to upload your pictures in some free image webhosting websites where you wont be having the 100% surity of whether it might come up all time or you'l see a "bandwidth exceeded" message.
Upload it to your own google account and with some tweak, we can link it directly in the webpage. You can do the same with Picasa, but you will have to create an album every time and it becomes kind
of annoying to maintain it.
Same goes with the flash presentations as well. No need to host it to any unreliable free websites. Upload it to your own google document. Plus if you want any referring files you can very well
upload it to Google Docs. Yes, there is a limit on the size per account but still 7+ GB will become handy for small and medium bloggers.
Ok, the steps are very very simple.
1. Login to http://docs.google.com. Sign in with your google id and password.
2. Click on upload from the left top and select "any" file type you want only restriction is it cannot exceed 100 MB. Do not forget to uncheck "Convert documents, presentations, and spreadsheets to
the corresponding Google Docs formats" if you feel you dont want Google mess up your documents.
3. After you upload the file, select the file and click on Share and select "Get the link to share". You should be getting a link in a text bar as shown below
4. To append the file in your page all you got to do is add this to the url &export=open&type=.swf
This way you can directly embed your flash or jpg or png directly in your pages (the type=.xxx will vary depending on the file you've uploaded). If you want to give a direct download link append this
command at the end &export=download&confirm=no_antivirus
The last one &confirm=no_antivirus can be given to files of .exe and .zip extensions.
Hope this helped. A sample flash file embedded from google doc can be found here.
Hi friends,
This is one of the interview questions I faced recently and thought I would share it.
Given a matrix print all its value in a circular fashion like shown in the image below.
To attack this problem we definitely want to have a left-right, top-bottom, right-left and bottom-top spans. For attaining this, we obviously want to have 4 loops. The C++ code is given below.
// Name : circular_matrix_read.cpp
// Author : c++
// Description : output given matrix elements circularly
#include <iostream>
using namespace std;
#define HEIGHT 6
#define WIDTH 3
int main()
//int array[HEIGHT][WIDTH] = {{1,2,3,4,5},{16,17,18,19,6},{15,24,25,20,7},{14,23,22,21,8},{13,12,11,10,9}};
//int array[HEIGHT][WIDTH] = {{1,2,3,4,5,6},{14,15,16,17,18,7},{13,12,11,10,9,8}};
int array[HEIGHT][WIDTH] = {{1,2,3},{14,15,4},{13,16,5},{12,17,6},{11,18,7},{10,9,8}};
int maxh = HEIGHT-1, maxw = WIDTH-1;
int minh = 0, minw = 0;
if(minh > maxh || minw > maxw)
//top-left to top-right
for(int i=minh,j=minw;j<=maxw;j++)
cout << array[i][j] << endl;
if(minh > maxh || minw > maxw)
//top-right to bottom-right
for(int i=minh,j=maxw;i<=maxh;i++)
cout << array[i][j] << endl;
if(minh > maxh || minw > maxw)
//bottom-right to bottom-left
for(int i=maxh,j=maxw;j>=minw;j--)
cout << array[i][j] << endl;
if(minh > maxh || minw > maxw)
//bottom-left to top-left
for(int i=maxh,j=minw;i>=minh;i--)
cout << array[i][j] << endl;
return 0;
//1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
This is one of the three problems asked in the 2010 Edition of Google Codejam Qualification Round. The question is quite interesting and simple to solve and I did solve. But the problem here is the
running time. If not a proper strategy is followed, then the running time of the algorithm will be more and eventually you cannot submit the answer in time for large data set. Lets look at the
Roller coasters are so much fun! It seems like everybody who visits the theme park wants to ride the roller coaster. Some people go alone; other people go in groups, and don't want to board the
roller coaster unless they can all go together. And everyone who rides the roller coaster wants to ride again. A ride costs 1 Euro per person; your job is to figure out how much money the roller
coaster will make today.
The roller coaster can hold k people at once. People queue for it in groups. Groups board the roller coaster, one at a time, until there are no more groups left or there is no room for the next
group; then the roller coaster goes, whether it's full or not. Once the ride is over, all of its passengers re-queue in the same order. The roller coaster will run R times in a day.
For example, suppose R=4, k=6, and there are four groups of people with sizes: 1, 4, 2, 1. The first time the roller coaster goes, the first two groups [1, 4] will ride, leaving an empty seat (the
group of 2 won't fit, and the group of 1 can't go ahead of them). Then they'll go to the back of the queue, which now looks like 2, 1, 1, 4. The second time, the coaster will hold 4 people: [2, 1,
1]. Now the queue looks like 4, 2, 1, 1. The third time, it will hold 6 people: [4, 2]. Now the queue looks like [1, 1, 4, 2]. Finally, it will hold 6 people: [1, 1, 4]. The roller coaster has made a
total of 21 Euros!
The first line of the input gives the number of test cases, T. T test cases follow, with each test case consisting of two lines. The first line contains three space-separated integers: R, k and N.
The second line contains N space-separated integers g[i], each of which is the size of a group that wants to ride. g[0] is the size of the first group, g[1] is the size of the second group, etc.
For each test case, output one line containing "Case #x: y", where x is the case number (starting from 1) and y is the number of Euros made by the roller coaster.
1 ≤ T ≤ 50.
g[i] ≤ k.
Small dataset
1 ≤ R ≤ 1000.
1 ≤ k ≤ 100.
1 ≤ N ≤ 10.
1 ≤ g[i] ≤ 10.
Large dataset
1 ≤ R ≤ 10^8.
1 ≤ k ≤ 10^9.
1 ≤ N ≤ 1000.
1 ≤ g[i] ≤ 10^7.
Input Output
1 4 2 1 Case #1: 21
100 10 1 Case #2: 100
1 Case #3: 20
The solution for this problem may look simple and if we follow the problem as is, its pretty straight-forward. So, here is the solution that I initially arrived at,
private int solve(int R, int k, int[] groups){
int moneyMade = 0;
LinkedList<Integer> groupsQueue = new LinkedList<Integer>();
for(int i=0;i<groups.length;i++){
//calculate the people that can fit in
int eachSum = 0;
int i;
for(int j=0;j<i;j++){
return moneyMade;
If you look at the above solution, I have followed the problem by the word. But turned out this cannot take the large data input set simply because of the amount of time it takes to run. The mole in
the above solution is that I calculate the eachSum everytime for all R iterations. The large dataset is having R upto 10^8 and Group to be 10^7. So the runtime in worst case will be 10^16 which even
the fastest computers invented these days struggle to solve in small time.
So, I had to attack my problem in a different manner. All Google CodeJam problems dont have any space criterion, so its wise to make use of it. So, what would be ideal here is to precalculate the sum
and extent for each group before hand. This would take a max of 10^3 * 10^3 = 10^6 iterations. The results are stored in a separate array for faster recovery.
Then iterate through the entire R once and finish the problem in constant time. The matured code is given below.
private long solve(long R, long k, long[] groups){
int len = groups.length;
long[] sums = new long[len];
int[] span = new int[len];
//irrespective of R, calculate and keep the extent and sum in separate arrays
for(int x=0;x<len;x++){//Maximum 1000 runs
int sum = 0; int extent=0;
for(int y=x;;y++){
if(sum+groups[y%len]>k || len==extent){
sums[x] = sum;
span[x] = extent;
long totalSum = 0;
int i = 0;
for(int r=0;r<R;r++){
totalSum += sums[i];
return totalSum;
Files you may need.
1. Input file - Small dataset
2. Input file - Large dataset
3. Output file - Small dataset
4. Output file - Large dataset
5. Complete Source code in Java
This is the problem asked in 2009 Google Codejam's qualification round. Lets get into business straightaway.
After years of study, scientists at Google Labs have discovered an alien language transmitted from a faraway planet. The alien language is very unique in that every word consists of exactly L
lowercase letters. Also, there are exactly D words in this language.
Once the dictionary of all the words in the alien language was built, the next breakthrough was to discover that the aliens have been transmitting messages to Earth for the past decade.
Unfortunately, these signals are weakened due to the distance between our two planets and some of the words may be misinterpreted. In order to help them decipher these messages, the scientists have
asked you to devise an algorithm that will determine the number of possible interpretations for a given pattern.
A pattern consists of exactly L tokens. Each token is either a single lowercase letter (the scientists are very sure that this is the letter) or a group of unique lowercase letters surrounded by
parenthesis ( and ). For example: (ab)d(dc) means the first letter is either a or b, the second letter is definitely d and the last letter is either d or c. Therefore, the pattern (ab)d(dc) can stand
for either one of these 4 possibilities: add, adc, bdd, bdc.
The first line of input contains 3 integers, L, D and N separated by a space. D lines follow, each containing one word of length L. These are the words that are known to exist in the alien language.
N test cases then follow, each on its own line and each consisting of a pattern as described above. You may assume that all known words provided are unique.
For each test case, output
Case #X: K
where X is the test case number, starting from 1, and K indicates how many words in the alien language match the pattern.
Small dataset
1 ≤ L ≤ 10
1 ≤ D ≤ 25
1 ≤ N ≤ 10
Large dataset
1 ≤ L ≤ 15
1 ≤ D ≤ 5000
1 ≤ N ≤ 500
Case #1: 2
Case #2: 1
Case #3: 3
Case #4: 0
The solution for this problem is relatively simple compared to the other codejam problems that we are going to solve in the future. The solution is implemented in Perl language. It uses the regex
pattern matching method.
Input file of large dataset : Download
Output file of large dataset : Download
This is a problem that we had already seen. But it gives more kick if the input has negative elements and zero!!!
Given an array of integers (signed integers), find three numbers in that array which form the maximum product. [O(nlogn), O(n) solutions are available ].
int[] MaxProduct(int[] input, int size)
The solution involves in finding three maximum and two minimum numbers. If the minimum numbers are negatives and if their product is greater than the two maximum number's product, then they have to
considered for maximum product. Handling all these scenarios with comprehensive test cases, please find the code below for the problem implemented in C++
// Name : three_largest_elems.cpp
// Author : Prabhu Jayaraman
// Version : v2
// Copyright : open
// Description : To find three signed numbers in an array with max product
#include <iostream>
using namespace std;
#define MAX 10
int* MaxProduct(const int input[], const int size)
int* output = new int[3];
int negative = 0;
for(int i = 0; i < 3; i++)
output[i] = -999999;
int min[2] = {0,0};
for(int i=0;i<size;i++)
// find two smallest negative numbers
if(input[i] <= 0)
if(input[i] < min[0])
min[1] = min[0];
min[0] = input[i];
else if(input[i] < min[1])
min[1] = input[i];
// find three largest positive numbers
if(input[i] > output[0])
output[2] = output[1];
output[1] = output[0];
output[0] = input[i];
else if(input[i] > output[1])
output[2] = output[1];
output[1] = input[i];
else if(input[i] > output[2])
output[2] = input[i];
if(size != negative)
if((min[0] * min[1]) > (output[0] * output[1]) || (min[0] * min[1]) > (output[1] * output[2]))
output[1] = min[0];
output[2] = min[1];
return output;
int main()
const int input[MAX] = {-6,-1,-2,-33,-4,-15,-7,-28,-9,-10};
int* result = 0;
result = MaxProduct(input,MAX);
for(int i = 0; i < 3; i++)
cout << (i+1) << "# element : " << result[i] << endl;
const int input1[MAX] = {0,-1,-2,-33,4,15,-7,-28,-9,-10};
int* result1 = 0;
result1 = MaxProduct(input1,MAX);
for(int i = 0; i < 3; i++)
cout << (i+1) << "# element : " << result1[i] << endl;
const int input2[MAX] = {0,-1,-2,-33,-4,-15,-7,-28,-9,-10};
int* result2 = 0;
result2 = MaxProduct(input2,MAX);
for(int i = 0; i < 3; i++)
cout << (i+1) << "# element : " << result2[i] << endl;
const int input3[MAX] = {6,1,2,33,4,15,7,28,9,10};
int* result3 = 0;
result3 = MaxProduct(input3,MAX);
for(int i = 0; i < 3; i++)
cout << (i+1) << "# element : " << result3[i] << endl;
return 0;
This problem is quite frequently asked and we had already seen a solution in O(n) time but using a hash table here. However in this posting we shall see how to attack this problem when the input data
is sorted.
For example for the input,
[-12,-6,-4,-2,0,1,2,4,6,7,8,12,13,20,24] and the sum bein 0, the matching pair is -12 and 12
The solution for this problem relies upon the knowledge of the input data : sorted. We will be having two pointers, one from the start of the Array called as min and the other from the end of the
array called as max. We always calculate the sum with the values present in the minimum and maximum indexes. If the sum is greater than what is needed, we will be decrementing the max pointer and for
the other case, we will be incrementing the min pointer. This happens till min exceeds max.
Following the solution in C++
// Name : array_find_sum.cpp
// Author : Prabhu Jayaraman
// Version : v1
// Copyright : Free
// Description : Find elements in the array that equals to given sum
#include <iostream>
using namespace std;
#define MAX 15
int main()
int array[MAX] = {-12,-6,-4,-2,0,1,2,4,6,7,8,12,13,20,24};
const int find_sum = 0;
int max_index = MAX - 1;
int min_index = 0;
while(min_index < max_index)
if(array[min_index] + array[max_index-min_index] == find_sum)
cout << array[min_index] << " & " << array[max_index-min_index] << " Matched" << endl;
return 0;
if(array[min_index]+array[max_index-min_index] < find_sum)
if(array[min_index]+array[max_index-min_index] > find_sum)
cout << "NO MATCH" << endl;
return 0;
//-12 & 12 matched
This problem may look very simple and indeed it is because the solution can be arrived at very easily. But to arrive at a clean and an efficient may take sometime. This question is asked to test the
cleanliness and simplicity in providing solutions. Enough hype, here is the question.
Given an array of random numbers and -1 placed in-between, compact that array ie., all the -1s are to be removed and the final output should be the last valid index with the fresh array. For example.
You should not swap the values just the last valid index along with the array is enough to decipher the not -1 values.
The solution to this problem is seemingly simple and the same has been implemented in Java below. | {"url":"http://tech.bragboy.com/2010/05/","timestamp":"2024-11-13T06:28:19Z","content_type":"text/html","content_length":"95628","record_id":"<urn:uuid:9a0e89ac-4a2e-4bfb-89ce-9e4aa6af3e7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00853.warc.gz"} |
Sharpe Week: The Sharpe Ratio Broke Investors’ Brains | Portfolio for the Future | CAIA
By Richard Wiggins, CAIA, CFA, Managing Editor/Senior Advisor, Portfolio for the Future^TM
We’ve become the tool of our tool, and even Bill Sharpe wouldn’t like it.
Can we talk?
When Bobby Axelrod on the hit show Billions went to an institutional investor to raise funds for Axe Capital, the investor brought up a problem: “My people have a few questions. Your Sharpe ratio’s
very low.”
Real-life hedge fund managers can relate.
The Sharpe ratio is the asset management industry’s go-to statistic for summarizing achieved (or back-tested) performance. It is the most-cited reason to hire or fire individual money managers, in my
experience as an allocator.
The relationship between risk and return is an essential concept in finance, and the ratio captures this in a single number by gauging how much return investors earned for each unit of risk:
Conceptually uncomplicated — the bigger the number, the better the risk-adjusted performance — it’s become an institutional episteme of success. Go to Google Finance, Bloomberg, Thomson Reuters,
Morningstar, or any other provider of financial data, and you will find up-to-date Sharpe ratio rankings for virtually every mutual fund, hedge fund, trading strategy, and asset class. Professional
marketers lug around pitchbook binders and prospectuses salted with Sharpe ratios because the industry is built around it. But the scary part is — cue the theremin — that the Sharpe ratio is the most
misused financial statistic of all.
How Not to Do It
Most practitioners fail to understand that the Sharpe ratio is intended for one’s whole portfolio. Yet individuals and institutional investors have the bad habit of allocating as if high Sharpe
ratios are all it takes to build strong client portfolios, piece by optimized piece. Goldman Sachs makes this exact mistake with its High Sharpe Ratio index, which goes by the elegant acronym
GSTHSHRP, I kid you not. It’s a basket of equities selected by their individual Sharpe ratios — and Goldman should know better.
Looking at the individual Sharpe ratios of managers or investments inside a portfolio doesn’t make sense. Write that down.
Axe Capital’s ratio should not help that institutional pitch target decide whether to invest because the million-dollar question is how Axe Cap fits together with the rest of the portfolio. Comparing
Sharpe ratios in isolation is relatively meaningless because a fund with an itsy-bitsy one might increase the risk-adjusted return of the overall portfolio more than a fund with a high score if it
has a sufficiently lower correlation to the rest of the holdings.
A combination of good Sharpe ratios doesn’t necessarily result in a portfolio with a good Sharpe ratio. On the contrary, strategies and asset classes that have performed well over a period likely
share exposure to something in common. That’s the next important thing to remember.
How Can Everyone Get It So Wrong?
Sharpe ratios carry almost religious significance, despite so many ratio all-stars blowing up. Jack Bogle once said, “In terms of how the Sharpe ratio has done in evaluating mutual funds, I would say
the answer is poorly.” Smart man, that Mr. Bogle.
Many concentrated, technology-heavy funds like the Janus Twenty showed superior Sharpe ratios just before they plummeted in the first half of 2000. Long-Term Capital Management boasted a glowing 4.35
before it collapsed in 1998, nearly taking the financial system down with it. There’s a lesson here: The future isn’t what it used to be.
This metric was never intended as the end-all and be-all. It was meant to be quick and dirty. For example, if you are looking at an entire portfolio and have nothing else to go on and insist on only
one number, it can be useful. A unidimensional risk measure will never tell the whole story. This is an opportune point to pause and remind ourselves that we have machines called computers now and
can do so much more than this.
Sharper Sharpe Ratios
The main complaint against William Sharpe’s hallowed metric is that it treats all volatility the same, and volatility isn’t bad per se. By treating positive surprises in the same way as negative
surprises, the ratio penalizes strategies that have upside volatility — i.e., big positive returns. Newer, tail-based measures like the Calmar ratio, the Sterling ratio, the Burke ratio, the Pain
Index, and the Ulcer Index replace standard deviation in the denominator with a measure of drawdown performance. Drawdown can be measured in various ways — how deep, how long before recovery, the
so-called volume between the breakeven line and the drawdown line — but they’re ultimately pretty similar. Others, like the Sortino and Omega ratios, throw away the positive returns and measure
volatility only in the downward direction.
Yet the quest for a better Sharpe ratio confounds experts because distinguishing between good and bad volatility isn’t as easy — or fruitful — as one may think. Constructing portfolios based solely
on downside risk sounds like a revolutionary premise, but most investments have volatility that is more or less symmetrical. They result in rankings that are practically the same.
What’s true at the asset-class level is also true at the strategy level.
An article in The Journal of Portfolio Management’s special quant issue compared 3,168 different implementations of “value investing” and found that the wide range of portfolio construction choices
(signal definition, weighting scheme, sector adjustment, rebalancing frequency, etc.) make an enormous difference: Cumulative returns ranged from negative 69.9 percent to positive 393.4 percent. The
mind-boggling number of permutations and degrees of freedom in strategy design place risk and return combinations in different corners of an exceptionally voluminous cloud. In fact, the dispersion is
so broad that the correlation among some value strategies is so low as to suggest they’re not one value family after all.
Here’s an interesting exercise: If you sort the 3,000-plus varietals into ten buckets by their Sharpe ratio and then again by Sortino ratio, very little changes. The drawdown characteristics are
related in a near-linear manner to the Sharpe ratios. To quote Bill Murray, “It just doesn’t matter.”
In fact, maximum drawdown and attempts to create tail-risk ratios are simply noisier measures than the original because they rely on fewer observations to determine their values. Further, the Sharpe
ratio builds on a sound theoretical framework, so there are a wide range of statistical tests available for it, which cannot be said for many of these new measures. Viewed as a t-statistic, you can
test hypotheses with the ratio, get a handle on estimation error, and precisely quantify whether a manager was good or just lucky.
This whole debate centers on what to use as a measure of risk, but William Sharpe never claimed it should be volatility. The Sharpe ratio was originally called “reward-to-variability” because
volatility is not an identity for, nor an analogy to, risk. In 2007, volatility measures would have told you that U.S. equity funds had never been safer, on a risk-adjusted basis.
Sharpe’s famous paper addressed expected returns, but his metric’s near-universal application has been to historical returns — i.e., which manager was better over a time span. He never designed it to
certify the future performance of investments. Past Sharpe ratios are not indicative of future Sharpe ratios and — given the time-varying nature of asset class and risk premia — should never be taken
as a precise measure of anything.
Modern portfolio theory can be blamed for ingraining the goal of maximizing expected return for a given level of risk. Today, everybody promotes the Sharpe ratios of their funds in their marketing,
but high risk-adjusted returns don’t guarantee good, or safe, results because Sharpe ratios go up and down. One might be cautious of the sterling Sharpe scores, or even wonder if high Sharpe ratios
are predictive of blow-ups. It was the funds with impeccable track records that went bust during the 2008–09 rout. The more stable the return, the more likely there’s a big loss ahead? Volatile funds
lose money — but not as much as non-volatile ones. For example, six months before hedge fund Malachite Capital Management’s spectacular failure, consultants were recommending it as a “diversifying
strategy.” Malachite’s extremely attractive Sharpe (around 1.2) made it easy to sell, but certainly did not capture the fund’s true risk.
Bad Math
No investment group consistently boasts louder about its impressive Sharpe ratios than hedge funds, but the most commonly used method to calculate a strategy's Sharpe ratio misstates the true
investment risk. It’s easy to understand and easy to calculate . . . incorrectly. All of the large hedge fund indexes (Hedge Fund Research, Morningstar, Credit Suisse, Eurekahedge, and BarclayHedge),
consultants (Preqin, Albourne, Cambridge Associates, Aksia), and managers compute it the same way, but such Sharpe ratios are routinely overstated by as much as 70 percent.
The financial community is accustomed to estimating annual standard deviation by annualizing the monthly standard deviation. These are not the same. The formula of multiplying monthly estimates by
the square root of time traces to Albert Einstein in 1905, but is inapplicable in situations with serial correlation — for example, when one month of positive returns tends to be followed by another.
Careful readers will recall that Sharpe pointed this out on page 49 of the fall 1994 issue of The Journal of Portfolio Management.
Annualized standard deviation overstates a Sharpe ratio by as much as 65 percent. Properly computed using a private database, Malachite Capital’s standard deviation was 78 percent higher than
presented and its Sharpe ratio 44 percent lower.
Annualizing monthly returns makes sense if you don’t have much data, but in many cases you can compute the real standard deviation using annual quantities. The HFRI Fund of Funds Composite Index and
the Credit Suisse Broad Hedge Fund Index are the dominant providers of asset-weighted hedge fund data extending back to the early 1990s, and publicize lifetime Sharpe ratios of 0.81 and 0.80,
respectively, through December 2019. But one can easily calculate the actual measured standard deviations of annual returns, which are significantly higher than the annualized monthly standard
deviations calculated by HFRI and Credit Suisse. Correcting the statistical illusion drops their Sharpe ratios down to 0.45 and 0.52.
The Tyranny of Metrics
It’s not the ratio that has a problem. It’s us.
Some might say we are trying to quantify the unquantifiable, but there’s more than that. The salience and ideology of metrics has a flaw known as Goodhart’s Law: When a measure becomes a target, it
ceases to be a good measure. The more any quantitative indicator is used for decision-making, the more it becomes subject to corruption and apt to distort the processes it is intended to monitor.
Metric fixation invites gaming. Investment firms that manipulate their products to engineer a favorable Sharpe statistic are akin to teachers who juke the stats by teaching to the test. Similarly,
the returns reported by private-equity firms can be skewed to the extent that managers have used subscription credit lines to improve their internal rates of return. The IRR metric itself isn’t
flawed; we’re flawed.
The Sharpe ratio has changed investor behavior. We chase the metric rather than the underlying quality it is trying to assess. Drawn like Dostoevsky’s Raskolnikov to the flame, we can’t resist a
good-looking Sharpe ratio, despite knowing that past average experience may be a terrible predictor of future performance. Investors have begun to manipulate their Sharpe ratio — and their
value-at-risk — by loading up on asymmetric risk positions.
Strategies that generate slow, steady profits punctuated by periods of sharp losses are in vogue right now across a range of asset classes. As bond yields have tumbled since the financial crisis,
investors have looked for ways to increase returns. Shorting volatility has become an alternative to fixed income. The yield earned on an explicit short volatility position competes favorably with
most sovereign and corporate debt.
Selling stock-market volatility — insuring others against market moves — has been a consistent moneymaker. Carry trades like this deliver stable premiums each year with no apparent increase in
volatility, until the big disaster you’ve been writing insurance against materializes. The infrequency of losses increases the perceived “moneyness” of the strategy. But when losses do occur, they
tend to quickly spiral into giant, brutal wounds.
Some recent examples are reminiscent of Long-Term Capital Management, whose short gamma strategy worked until it did not. These strategies are vulnerable to surprise events that elude most
methodologies — even complex ones — for measuring risk. Like selling a credit-default swap, the chance of ever having to pay off is so minuscule, falling outside the 99 percent probability range,
that it disappears in the value-at-risk figure.
With the Sharpe ratio as yardstick, put-selling strategies look great compared to the S&P 500 because the premiums translate into immediate risk-adjusted alpha. Many retirement systems and nonprofits
reaped years of steady returns by selling short-term risk insurance (aka “harnessing volatility premia”). Viewed in that context, band wagoning into high Sharpe tailgating strategies that are certain
to eventually blow up makes all the sense in the world. To paraphrase Anchorman: 99 percent of the time, it works every time.
Key to deceiving the Sharpe ratio is optionality, the price of which is almost entirely determined by volatility. So, it can be said that optionality is volatility, and volatility is optionality. And
volatility hatched a multibillion-dollar business betting on volatility itself, morphing into a giant casino of its own. It has become a central obsession of the markets. The number of instruments
and risk-recycling strategies making bets on movement can only be described as extraordinary, and this giant trading ecosystem can magnify losses when turbulence hits. The violence of vol has never
been greater: Of the ten largest all-time highest VIX closes, five of them occurred in March.
Naively buying product for its shiny Sharpe ratio can be diametrically incorrect, warping institutions to favor strategies with the largest positive serial correlation and, therefore, the most hidden
risk. Strategies with asymmetric, highly skewed, dynamic distributional features potentially sucker investors into taking on frightening amounts of unknown risk. Big plans may be engaging in a
negative selection process that gravitates toward managers whose strategies imply a catastrophic loss of capital. As Alberta, Canada’s public investment arm recently learned, it’s not very hard to
lose a couple of billion selling volatility. That’s upward of C$480 ($363) per woman, man, and child in the province; but who’s counting? It seemed so safe. The Sharpe ratio was amazing. Until . . .
kaboom. This is the opposite of Moneyball.
Allocation decisions involving hundreds of billions of dollars and affecting millions of individuals hinge on the Sharpe ratio, but — if I may adopt a paternal tone here once again — it has become a
crutch for many investors. Asset allocators accustomed to comparing annualized Sharpe ratios across asset classes should be especially wary of the practical relevance of such comparisons. A manager
with an amazing-looking Sharpe ratio is guaranteed to get a close look from institutional investors. That shiny stat cannot replace a basic understanding of the return-generating processes of the
underlying strategies.
A high Sharpe ratio is a simulacrum of success. Yet what gets measured may have no relationship to what we really want to know.
We have become the tool of our tool.
About the Author:
Richard Wiggins, CAIA, CFA, Managing Editor/Senior Advisor, Portfolio for the Future^TM. Here's some recent stuff:
The Wall Street Math Hustle | Portfolio for the Future | CAIA
The Key to Energy Transition Lies in a Ron White Joke | Portfolio for the Future | CAIA
Original Article: The Sharpe Ratio Broke Investors’ Brains | Institutional Investor | {"url":"https://caia.org/blog/2022/02/19/sharpe-week-sharpe-ratio-broke-investors-brains","timestamp":"2024-11-03T03:38:19Z","content_type":"text/html","content_length":"57837","record_id":"<urn:uuid:89cc2625-b373-40cc-9b81-e88b22917a57>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00455.warc.gz"} |
Section: Research Program
Mean-field approaches
Modeling neural activity at scales integrating the effect of thousands of neurons is of central importance for several reasons. First, most imaging techniques are not able to measure individual
neuron activity (“microscopic” scale), but are instead measuring mesoscopic effects resulting from the activity of several hundreds to several hundreds of thousands of neurons. Second, anatomical
data recorded in the cortex reveal the existence of structures, such as the cortical columns, with a diameter of about 50μm to 1mm, containing of the order of one hundred to one hundred thousand
neurons belonging to a few different species. The description of this collective dynamics requires models which are different from individual neurons models. In particular, when the number of neurons
is large enough averaging effects appear, and the collective dynamics is well described by an effective mean-field, summarizing the effect of the interactions of a neuron with the other neurons, and
depending on a few effective control parameters. This vision, inherited from statistical physics requires that the space scale be large enough to include a large number of microscopic components
(here neurons) and small enough so that the region considered is homogeneous.
Our group is developing mathematical and numerical methods allowing on one hand to produce dynamic mean-field equations from the physiological characteristics of neural structure (neurons type,
synapse type and anatomical connectivity between neurons populations), and on the other so simulate these equations. These methods use tools from advanced probability theory such as the theory of
Large Deviations [7] and the study of interacting diffusions [1] . Our investigations have shown that the rigorous dynamics mean-field equations can have a quite more complex structure than the ones
commonly used in the literature (e.g. [67] ) as soon as realistic effects such as synaptic variability are taken into account. Our goal is to relate those theoretical results with experimental
measurement, especially in the field of optical imaging. For this we are collaborating with Institut des Neurosciences de la Timone, Marseille . | {"url":"https://radar.inria.fr/report/2014/neuromathcomp/uid13.html","timestamp":"2024-11-06T18:15:03Z","content_type":"text/html","content_length":"36817","record_id":"<urn:uuid:f6ad3e63-3b17-4e43-a079-056019197b18>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00561.warc.gz"} |
Edexcel Gcse Maths Paper 1 2022 Find Here To Learn More! - The News Heralds
Edexcel Gcse Maths Paper 1 2022 Find Here To Learn More!
This article gives pertinent details regarding the popular Edexcel Gcse Mathematics Exam 1-2022 The paper isthat’s receiving a lot of attention.
Have you had the opportunity to take this test? Are you aware of the tests given through Edexcel as well as the GCSE exam specifically? In the past, Edexcel was organized for students to take part in
the exam to qualify for. In the Mathematics exam students were given an issue that they considered difficult to answer.
Edexcel Gcse Maths Paper 1 2022 is trending because people are searching for this paper to identify this specific question. This question has become a viral topic across the United Kingdom and is now
the topic of some debate in the media and has been covered by news reports.
Information about Edexcel Gcse Maths Paper 1
This question is about the Mathematics section of the GCSE that is conducted by Edexcel. We will look into more important information about it in the following.
• One particular question in the Mathematics paper has caused shockwaves of excitement across the UK.
• Students were asked a complex question that they could not figure out.
• This Examle Gcse Maths Exam 1 in 2022 is now popular as students are searching for more details on this exam and the details.
• Students from UK United Kingdom couldn’t find a solution to this problem and used social media to voice their displeasure.
• According to sources, even teachers struggled with this problem and found it to be quite difficult.
• Students from universities and teachers online have also expressed their opinion that this essay was slightly above the level of difficulty that is expected for this type of paper.
• The issue was determining the radius of the shaded portion of a circle that all of the circles had a radius of 4.
Edexcel Gcse Maths Paper 1 2022
• People are also sharing their experiences in dealing with extremely difficult and difficult questions in their Mathematics test after this incident gained popularity.
• The GCSE refers to the General Certificate of Secondary Education A qualification in the field of education that is valid to England, Northern Ireland, and Wales.
• The GCSE certificate is valid for only one area: Mathematics, English, or any other .
• The exam is also free for students currently enrolled in schools. Additional fees could be applicable to students from other schools.
• Discussions about the hottest Gcse maths paper of Edexcel 1 2022 have been attracting interest on the internet, as the general consensus is that this test was challenging for 16-year-olds.
• Users are also able to find the answer to this issue on the internet.
Final Thoughts On The Topic
The GCSE exam conducted by Edexcel was recently held for Maths students. One particular question on the exam frightened nearly every student, as they could not figure it out. The question was deemed
to be somewhat difficult at this stage. Learn the full details about the GCSE on this page.
What do you think about the level of difficulty for this particular question? Share your thoughts on the question that is a viral Edexcel Gcse Maths Paper 1 2022 question in the comments below. | {"url":"https://thenewsheralds.com/edexcel-gcse-maths-paper-1-2022-find-here-to-learn-more/","timestamp":"2024-11-08T22:01:52Z","content_type":"text/html","content_length":"42209","record_id":"<urn:uuid:965caf31-1fdb-47ca-bd99-0477be0f3bb0>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00067.warc.gz"} |
Bubble Bots TE | stemtaught
top of page
Meet the Micro:bits
Coding is easy and fun when students meet the micro:bits. Learn to write and download code, so you are ready to create your own scientific tools!
Meet the Micro:bits
Coding is easy and fun when students meet the micro:bits. Learn to write and download code, so you are ready to create your own scientific tools!
Student Edition
Student Edition
Student Edition
Bubble Bots Teacher Edition
Programming Challenges
This page contains all the answers to the bubble bot programming challenges. Use these links to explore the solutions to the challenges your students are working on.
Straight Line (Right)
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Straight Line (Left)
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Straight Line (Up)
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Straight Line (Downward)
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Right Angle (Forward)
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Right Angle (Another Direction)
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Square (Coding Sequence)
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Square (Using a Loop)
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Acute Angle
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Obtuse Angle
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Equilateral Triangle
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Isosceles Angle
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Right Triangle
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Scalene Triangle
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Obtuse Triangle
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Acute Triangle
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Septagram Star
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Septagram Star
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
Prime Pentogram Star
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
The Bubble Bots are on the Move
Manipulate the coordinate axes
Solve the geometric puzzle background. Make your bubble bot trace the shape starting at "X" and ending at the ending circle.
bottom of page | {"url":"https://www.stemtaught.com/bubble-bots-te","timestamp":"2024-11-05T12:38:18Z","content_type":"text/html","content_length":"1050482","record_id":"<urn:uuid:cf0c06fa-626b-4321-b1ef-74340d44bfd8>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00086.warc.gz"} |
Chapter 5 in math glencoe book
Google visitors found us today by typing in these algebra terms:
Online logarithm solver, casio calculator programs, differential equations matlab.
Algebra and Trigonometry: Structure and Method, Book 2 answers, scale factor, getting r2 in calculator ti-83, change a mixed number to a decimal worksheet, practice problems for multiplying and
dividing scientific notation, trig calculator , Multiplication of exponents worksheets.
Pre algebra worksheets for first graders, free printable math worksheet inequalites 8th grade, using formulas to solve problems, ti 84 emulator rom, how to solve polynomials in solver excel 2007,
what is the difference between y=(x-4)^2 and the square root of x+4, algebra grid formulas.
Holt and rinehart algebra 1 ebook, 6th grade hands on equations worksheets, online calculators for finding rational zeros, fraction and mixed number to decimal converter, radical quadratics.
Free 6th grade order of operations worksheets, square roots exponents, ti 83 emulator download, Ti-86 3 equation substitution, McDougal Littell Biology online test.
Expanding logarithms online calculator, algebra 2 homeschool homework, java codes with primality testing, linear and nonlinear progressions worksheet.
The poem about 3 being a prime number, free cheats for math, multiplying and dividing fractions work sheets, algebra questions yr 8, free answers to math questions, simple ratio formula, math
properties quiz printable worksheet.
Linear equation with two variables notes, algebra II calculator, 9th grade algebra quiz, Factoring Practice Problems and Answers, teach* and math* and function*, college algbra software.
LCM SOLVER, free algebra 1 problems cheat, least common denominator calculator, free intermediate algebra calculator, determine if 2 fractions are equivalent worksheet.
Free algebra answers, alegbra test, "order of operations" word problems, ncert model questions and answers for class viii, intermediate algebra 2 course syllabus high school, TI For Loop Syntax.
- simultaneous equation calculator, free printable math sheets about volume, factoring cubed binomials.
Glencoe/mcgraw-hill practice worksheet answers, ti 83 graphing calculator(decimal to fraction), free download of aptitude test, Algebra with pizzazz pg 47 answer key, how to change fraction into
decimal and vise versa, how to convert fractions to decimal calculator.
Free 5th grade worksheets on commutative, associative property, TI 83 roots, answers for PRactice Workbook/prentice Hall Mathematicas/pre algebra, Divisibility Worksheet (Free Printable), algebra
formulas finding percentage, solve simultaneous nonlinear equations, worksheets equations for a parabola.
Converting decimals in algebraic equations, worksheets of algebra for 4th grade, pages that you can do and printand it has add and subtract, java linear equation solver, formula for probability grade
6, algebra 2 texas edition answers, simplifying exponential expressions.
Formula used to convert decimals to fractions, square and cubed roots calculater, FREE maths homework sheet 9 yrs old 10.
Worksheets linear graphs, factorization of equations, 5th grade math combination formula, solving quadratics games, write multiplication expressions using exponents.
Free download cpt question paper only for quantitative aptitude, algebra and square roots, steps on adding subtracting multiplying and dividing integers, how to solve equations using the distributive
Convert fraction into simplest form, mcdougal littell algebra 2 book answers, parabola worksheets, x 10 x 100 worksheet, change of base program for ti 89.
Mcdougal littell algebra 2 all answers, probability on my calculator Ti 83 plus, online student edition book code for glenco algebra 1, mathematica to show steps of a solution, "completing the
square" real life application, simplify square root equation.
UNDERSTANDING ALGEBRA YEAR 8, Formula To Calculate A Discount elementary grade, integer worksheets for grade 7, 11+ papers online to do.
Solved problem on permutation and combination, grade 11 Trigonometry student summary notes, using distributive property on printable math worksheets, SOLUTION OF THE BOOK ABSTRACT ALGEBRA BY FOOTE,
algebra with pizzazz worksheet 158, 55% convert to fraction, solving systems of linear equations in three variables.
Dummit and foote solutions, free answers on math fraction finding the least common denominator, when solving systems of equations graphically are we just guessing, divide integers game, solutions
online calculator "quadratic solution", s.o.l.v.e method math, associative property worksheet.
Online calculator to find LCM, Equations - Brain Teasers #512 6th grade, prentice hall mathematics worksheets, work sheet formulae and isolating variable, adding subtracting signed integers
worksheet, sum of the first 100 integers java.
Year 11 logarithm exam practise, free integers worksheets, printable gcse english past papers, online quiz graphing and solving logarithmic equations.
Changing mixed number to a decimal, simolifying square root, CHEATS ON MATH 6-3 PRATICE ARRAYS AND AN EXPANDED ALGORITHM, chapter 3 rudin solutions, mathematics for dummies.
6th grade math trivia questions, integration by parts calculator, solve simultaneous equation using excel, math variables and expressions worksheets, Basic Algebra Exponents Worksheet.
How is doing operations (adding, subtracting, multiplying, and dividing) with rational expressions similar to or different from doing operations with fractions?, Free English Term Papers FOR PRIMARY
ONE IN SINGAPORE, decimal as a mixed numbers in simplest form, least to greatest fraction, exam3 ppc, adding subtracting equations, slope of a quadratic equation.
Adding integers+ word problems worksheets+free, practice 6th grade algebra test, ladder method math.
Easy year seven maths, solve an algebraic expression that has a absolute value easy examples, FREE BEGINNING ALGEBRA WORKSHEETS, mastering physics answer key, completing the square worksheets,
solving quadratics by factoring powerpoint, adding like terms with positive and negative numbers.
Algebra herstein solution, "root" calculator, zero and negative exponents worksheet.
How to do percentages, algebra 2 optimization problems, quadratic sequences gcse, how to do verbal algebra problems, find the scale math questions.
Glencomcgrawhill, solving quadratic equations with ti-89, real life equations, test of genius worksheet answers.
How do you add fractions on the ti 83 plus, lineal metre, free algebra word problem solver, COMPASS TEST CHEATS, factor into binomial calculator.
Easy way to understand algebra, how to solve second order differential equations in matlab, math sheets grade 7, Printable First Grade Math Sheets, math 8 sol worksheets to printed, graph an
exponential, quadratic and cubic equation.
Java list positive integers and sum, ~emulator TI 84 calculator free, how do i solve -2 square root -x times -5 square root -y, factor problems online, non homogeneous second order differential
equations, fractions from least to greatest, codes to factor on a ti-84 plus.
Multiplying integers lesson practice 3A, free college algebra practice sheets, Lesson 8: 3 x 3 Matrices, Determinants, and Inverses test in the book of pearson prentice hall, cubed factoring.
Free algebra exercises ks2, 9th Grade Algebra, Calculator TI-86 software download, algebra and trigonometry structure and method book 2 help, math aptitude questions, free worksheets for 10 year old
kids, cube roots math practice.
Convert mixed fractions into decimals, help ordering numbers from least to greatest, formula fractions to decimal.
Math Problem Solver, Rudin Chapter 3 solution, ks2 maths - conversion graphs - miles to kilometres, adding and subtracting interactive games, how to solve quadratic equations with graphing.
6th grade math books- applications connections, free downloads of apptitude books, grade eight math : circle area worksheet.
How to solve equations with squared variables, least common multiples worksheets, interactive games square numbers, 4th grade algebra word problems worksheet.
How to find the scale factor, difference of square, solving algebraic equation practice test, when to use factoring, rational to mixed radical form.
Multipying equations, math worksheet Graphing linear Equations, worksheet for adding transition words, solve equation for given domain, radicals with exponent fractions, Free Algebra Lesson
"manual square root calculation, Improper Integrals drill, equation factor calculator, examples of 8th grade pre algebra word problems, maths for dummies, algebra 2 expression calculator, How to use
linear function to solve equations.
Decimal statistics fourth grade, school usable free online games, three digit integers in adding and subtracting, solving a set of coupled ode and pde in MATLAB, Fifth Grade Printable Worksheets,
Printable Third Grade Math Problems, worksheets for algebra tiles.
Grade 9 past examination papers, adding and subtracting worksheets grade 1, least common multiple fraction calculator, subtraction of fractions worksheet, factoring quadratics with non real factors.
Solving first order differential equations on matlab, EXAMPLARS FOR GRADE 10 ENGLISH 2007, solver for How do you find the least common multiples and denominators of two numbers, solving vector
algebra using TI 89 calculator, algebra with pizzazz worksheets, mcdougal littell algebra 1 answers key.
Coordinate Plane Graph Pictures, how to find DelVar in ti calculator, factor polynomial by grouping calculator, scale word problems, algebraic formulas.
Add subtract multiply divide integers free worksheets, square algrebraic fractions, solve 3 complex equation with three unknowns, program a quadratic formula on excel, introducing algebra +ks2 +free
worksheets, addition and subration 'rational expression solver.
Online calculator and how it's answers, six grade math, how to get decimal to square root on ti 83, arithmetic reasoning worksheets, pure physics mcq practice paper for o level, cheat on pre algebra,
general aptitude questions for net exam+pdf.
Application of linner systems of equaton, rules in subtracting algebraic expressions, solving equations containing rational expressions, free algebra test, Inverse Operation Quiz for 6th Graders,
probability that x is between how to solve.
Math scale factor, ALGEBRA CALCULATOR.COM, Algebra 2 Chapter 5 resource book.
TI-84 game making programs, What's the least common multiple of 25 and 32, addition equation worksheets, Adding Integers Worksheet, algebraic calculator that can do fractions, worksheet "factor
tree", ti rom code.
Middle school math with pizzazz book e E-12, example of a rational expression being used at home or work, calculating lineal meterage, Subtraction of 2 and 3 digit numbers, rule of multiplying signed
numbers, roots polynom c++.
Ppt solving system of 3 equations, algebraic expressions worksheets, subtracting decimals calculator, elementary algebra answer for college student answer sheet free, graphing calculators using third
root online, TI-84 F-test, solve complex radicals.
Macromedia flash mx 2004 hands on training worksheet, how to put in x,y variables in TI 83 plus?, simplify my algebra problem, math computer exercis, grade 7 fractions free exams.
E key+ti83, solve by elimination calculator, greatest common factor of 34, dividing fractions with a radical, rational expression solve for u.
Quadratic roots of polynomial problems with their solutions, free 8th grade math tutor online, Holt Modern Biology Online Study Guides, solve two-step and multi-step equations ask a questions, pre
algebra printouts.
Quadratic equtions worksheets, online radical simplifier, unit step function ti-89, maths - combinations for children, beginning algebra worksheets 4th graders, what is the difference between square
root and number squared, examples on expressions, equations, formula, functions.
Free math software for 6th graders, Free Algebra Homework Solver, ti-89 solver answer true.
Polynomial problem solver, define value ( exponents), how to write subtraction expression as an addition expression, Addition And Subtraction of Radical Expressions, work problems algebra equations,
year 8 algebra questions nz.
How do you solve a algebra problem, formula sheets in teaching mathematics, grade 6 Math Worksheet, adding subtracting minus, solve complex rational expression.
Adding, subtracting,absolute values, worksheet, powers of a fraction problems, modeling exponential growth on a ti 83 plus, third grade homework combination, free linear systems graphing solver
calculator online.
How to manually compute LCM, 5th grade algebra and graph lesson plans, saxon algebra 2 answers.
SAXON MATH TUTORS, mcDougal Littell answers, how to solve polynomials in matlab, permutation calculator with restrictions, TI-84 plus programming.
Factoring polynomials quiz worksheet, +Work and Power Physics Worksheets, Dividing Polynomials Calculator, 6th grade chapter 3 math test.
Algebra calculator solves for x, Greatest Common Factor Example using division ladder, Prentice hall online textbooks Algebra 2 questions.
Pdf convert ti 89, california high school past examination papers-math(free), factoring and simplifying algebraic expressions, cost accounting exercises, who invented slope formula?, free exponent
worksheets for middle school.
Ladder method in math, algebra 2 holt workbook, solution to walter rudin.
Gre study free math cheat sheet, 6TH GRADE MATH PRACTICE CONVERTING FRACTIONS TO DECIMALS, Grade 11 mathematics question papers, mcgraw-hill biology: the dynamics of life chapter 7 worksheet answer
key, system of equations in ti 83, mcdougal littell algebra 2 chapter 10 answers free.
Solving quadratic equations by factoring calculator, free exponent worksheets, past exam papers on C programming, standard form equation calculator, sample of Aptitude Test question and answers, give
me some 9th grade math problems, solving for a specified variable.
Show me the exam for science years 8, glencoe math answers, factoring polynomials solver, how to find number of integers java, maths mcqs.
Mathematical investigatory project, pre algebra with pizzazz answers, Balancing Chemical Equations CHeat, Instant answer to a solving a venn Diagram.
How do you solve radical operations, TI-84 plus directions, illinois prentice hall math books, intermediate algebra fourth edition. tussy gustafson solution manual.
Trinomial factor calculator, Pre Algebra Worksheet Inequalities, ti 89 interval equations.
"estimating the sum" worksheets, radical expressions solver, conversion of numbers to decimal place.
Solve this code in java plot the following parametric equations, printable quadratic equation group activity, quadratics games, clollege algebra tutor, free online inequality calculator, factoring
quadratic expressions calculator.
Fuction factoring online, find the missing proportion calculator, algebra 1 worksheet generator, Trick Algebra Problems involving 3 variables and exponents, quadratic equations games, how can a
Percentage be expressed as a Decimal or a Fraction.
Discrete maths on the ti-89, +calculation on RD india, math practice 6th grade fractions adding and subtracting, online algebra problems, to compute and simplify algebra expressions, base conversion
java decimal to binary code, mcdougal littell worksheet answers.
Prentice-hall, Inc. Activity answers, free algebra answers online, free online quiz graphing and solving logarithmic equations, solve for roots using casio calculators, download c Answer book.
How do you order fractions from least to greatest, foiling trigonomtry, solve function online, partial-sums method for addition lesson, basic algebra questions, decimal dividing calculator, 11+ maths
Answers Cheats to Accelerated Reader Tests, Check my algebraic expression as i go along, lesson plans for comparing and ordering positive and negative numbers, integration by substitution calculator.
Lcm ladder method, Need Help in Solving Radical Expressions, second order differential equation roots, expression of triangle.
Algebra tutorial radical multiplied by radical, free online slope calculator, how to do logarithum problems in ti 83, multiplication worksheets/ 11/ 12/ grade 4, online pre algebra calculator, online
year 9 exam papers.
Understanding changing a mixed fraction to a percent, simplify radical calculator, quadratic equation calculator k, mastering physics 6.42 answer, non-linear equation excel.
Softmath greater than less than games, worksheets on finding determinates, green's function to solve parabolic differential equation, ti-84 plus factoring polynomials completely.
Dividing mixed numbers and fractions worksheet, second orgder differential equation MATLAB, algebra chapter 4 review games and assessment mcdougal littell, how to solve polar equation on ti 89 \,
Rational expressions Solver.
KS3 Maths sheet and answers, how to solve fractions with factoring, Applications of Linear Programming Glencoe McGraw Hill Algebra 2, The Last Dance Math Project Algebra 1.
Math practice adding subtracting negative numbers, story math problems factoring right triangle, quadratic equation by extracting square root, Solve the polynominal equation in order to obtain the
first root, softmath.
Second order ode45 tutorial, function algebra worksheets, three simultaneous equation cheat.
Worksheet, exercice de maths, math symbols for adding,subtracting, multiplying and dividing, algebra: printable blank plot worksheet.
Factoring quadratic calculator, ti-89 vti rom image download, a variable over a variable is going to equal an exponent, graphing absolute values with one variable, free online algebra age problem
solver, coupled differential equations solve matlab, free online 7th grade algebra worksheets.
Adding like terms simple worksheets, adding and subtracting worksheets, 6TH GRADE CONVERT BASIC FRACTIONS INTO DECIMALS, Mcdougal littel algebra 1 assessment, best way to find combinations in a math
problem, online solver for algebra brackets.
Addition of mixed fractions worksheets, easiest way to find scale factor, How would I plot a simple quadratic equation where the values of x range from -3 to 3?, practice questions on algebra solving
equations, algebra programma.
Maths formulaes, free 5th grade algebra help, Finding the X And Y Intercept Solver, algebra with pizzazz steps and answers, how to calculate greatest common factor.
Parabola exponential or quadratic?, Solving indirect proportions, mathematic trivia, roots solver, solving equations video, add subtract multiply divide negative positive numbers.
"Synthetic Division" calculator, ti 84 plus FACTORING programs, msn 6th grade math calculator, decimal to mixed number, function table 4th grade worksheet, Factor functions online, mcdougal littell
florida edition algebra.
Printable math on solving probability problems using permutations and combinations, solving equations with 3 variables, solving algebra, free math problem solver algebra 2, math problems online scale
factors, free algebra worksheets.
Translation free worksheet, geometric series(distance) formula in matlab, free function table worksheets for elementary, balancing algebraic equations, commonly used college algebra textbooks,
www.practice makes perfect.word problems grade4.com.
Algebra 2 factoring calculator, simplifying exponential radical expressions, addition and subtraction 'rational expression solver, TI Calculator Roms, solving equations with Excel Solver.
Ti-83 plus emulator, simplifying radical answers, find factor for third degree quadratic equation in ti 89, ti-89 polar, proportion word problem solver.
World's hardest equation, converting fractions to greatest terms, calculator for differentiation implicit, T1-84 plus tutorials domain and range, life saver math 5th grade, adding,dividing,
multiplying, and subtracting decimal worksheets, solving nonlinear simultaneous equations using newton's method matlab code.
University of Chicago Graduate Problems in Physics with Solutions download, permutation and combination software, java "hybrd" Powell.
Prime factor 2, 3, 5, 5 calculator, leaner equation, ti 84 plus applications download, worksheet imaginary numbers, fraction to root, Grade 5 GCF math word problems, 3d coordinate worksheets for
Solving by elimination calculator, most common math formulas percent formulas, Algebra series in software, trignometery chart, multiply radicals calculator, simplfying square roots worksheet.
Answers to Prentice Hall Mathematics Oklahoma pre- Algebra, quadratic formula program ti-84, ti 84 decimal conversion, ti 86 emulator, adding and subtracting decimal numbers.
Algebra with pizzazz WORKSHEETS, linear equation occur in many real-life situations, al gebra baldor, probabillity algebra 1 worksheet.
Exponents and factoring 6th grade practice excersies, free How To Do Algebra, 3rd grade combination worksheets, matlab permutation combination, slope and quadratics.
Math printouts for kids, ti-89 solve system equations, rules in adding and subtracting algebraic expression, high school logic worksheets, solving equations with decimals variable denominator.
Convert to spherical coordinates calculator, simplify square root, mcdougal littell biology answers.
Graphing equations worksheet, LCM Answers, converting a mix number calculator.
Poems about dividing rational numbers, McDougal littell math book answers, 6th Grade Math Help, Gr.11 mathematics paper 2, free math worksheets gr 1 and 2, downloadable distributive property problems
or worksheets.
Divide rational expressions and equations, aptitude question papers.pdf, algebrator free, lesson plan. how to subtract positive and negative integers, convert decimal to fraction.
Year 7 intermediate homework sheet 2 chapter 2 answers english, algebra hungerford solution, Highest Common Factor for 441, ti-89 laplace transform.
Glencoe Algebra 2 Teacher Edition Page 357, online standard form calculator, Scientific methods free worksheets, free Worksheets square root and cubic roots grade 7, how to solve differential
Math 6th grade fraction answers, Converting mixed numbers to decimal, middle school math with pizzazz! book c answer sheet, linear programing/pdf.
TI 89 calculator download, math 3 graph linear inequalities [coordinate plane], ti-84 plus online, solve polynomial cubed, math word problems eight grade.
How to solve phase plane, in algebra what is vertex form, pre-algebra dividing expressions examples, statistics for beginners, factor quadratic with no b, olevels solved past papers 2004 english.
Solving third power polynomials, ALGEBRATOR, balancing linear equations, calculator steps.
Excel solver simultaneous equations, ti 84 plus calculator online, solve and graph, free worksheets two step equations.
Lcm worksheets + gr 8, radical calculator, Solving Algebra Problems Showing Work, star test 6th grade sample questions, qudratic, scott foreman biology the web of life teachers edition, module 7
maths exam past papers.
Sample Algebra Problems, what is the greatest common factor of 6,50, and 60, equations functions free worksheets, simplify the square root of 1.69, year 7 math test, 8th grade math worksheets slope.
3rd grade equations, +chemistry tutor answers online websites, online trig equation solver.
Year 8 practice maths test, Adding rational expressions tutorial, Algebra 2: Prentice Hall Mathematics textbook have homework page, greatest common factors with work.
Free Algebra Math Software, mcdougal littell world history online free printables, matric maths solutions.
Maths worksheet area ks2, real life use of coordinate planes, free downloadable book accounting, ti-83 log graphing, multiply out and simplify in, worksheet+linear functions, equations fifth grade
Prentice hall pre algebra answers book, Saxon Math Algebra 1 4th Edition Solutions Manual for sale, square root printables, how do i find the square root of a fraction, solve three order equation
Maths exam model papers for primary 1+singapore, algebra II pdf, demos of synthetic division of polynomials, math investigatory project, how to programs formulas on ti calculator, how to find
quadratic equation from table.
Download free games to your ti 84 plus calculator, java aptitude, ti-84 factoring.
Free worksheet for 9th grade science, solving two step equations worksheet, associative property math 3rd grade worksheets, ADDING LIKE TERMS ALGEBRA TILES WORKSHEET, a calculator that turns decimals
into a fraction.
Cube root on ti-83 plus, free printable lcd worksheets, decimal common factor worksheets, heaviside function ti-89, college math printable test, steps in balancing chemical equation, Lineal metre.
CALCULATOR TURN DECIMALS INto fractions, exponents for dummies, factoring polynomials with fractional exponents, mcdougal littell algebra 1 chapter review exercises.
Algebra test sheets, maths metric 12th question with solution in trichy, subtraction for variables raised to powers.
Shading fraction parts worksheet, how to solve functions, probability combination ti 84, cheats for fraction decimals and percents, transforming formulas in algebra worksheets, sixth grade math+help
with scale factor.
Worksheet for problems in permutation, evaluating expressions quiz test, year 11 maths, solver system of linear equations ti-83, ti83 plus log base, free worksheets from mcgrawhill.
Ti-83 Base, ti 84 downloads, ti 89 help with 3rd roots, math, grade 5, permutations, holt middle school math course 3 worksheet.
Worded problems with solutions on quadratic equation, what is the least common multiple of 33 55 and 44, fraction worksheets with negative, Hands on algebra volume 2, Free Online Algebra Help.
Java(inputting ten numbers using for loop and get Sum), radical and fractional expressions, free online ti 84 calculator, free ebook of indian advanced accountancy.
Add subtract multiply divide integers worksheet with negative, mastering physics answer key pdf, aptitude test solved papers.
6th grade fraction worksheets, simplifying radical expressions, 293's factors, factor cubed polynomials, timesing standard form, systems of nonlinear equations worksheet.
Simplifing radical worksheets, TI - 84 Determinant Video, prime factorization worksheets free, multiply by 6 worksheets.
Math 11 year, quadratic projectile factors, example sats problem solving ks2, Newton-Raphson MATLAB.
Solve system of complex equations on TI-89, cube root ti-89, rom codes for ti, polynominal, ti 83 plus emulator, graphing calculator second derivative, prentice hall pre algebra practice worksheets.
Grade 9 past papers, 9th grade algebra 1 book, decimal to square root, worksheet for permutation problem, algebra help search domain and range solver.
Dividing polynomials by a monomial worksheet, answers to Prentice hall mathematics algebra 1 workbook, dividing decimals calculator, Pre-Algebra Ordered Pairs and Graphing 97 cheat sheet, WHEN COST
Simultaneous quadratic equation, solving and graphing quadratic equations worksheets tests, high school statistics worksheets binomial probabilities, Division Ladder to find LCM, factorization of
equations of 5 grade.
Rearranging square root of formulae calculator, factoring hard trinomials, Real World units for Algebra I + Publishing + Company, grade three rounding numbers worksheet, programming texas ti-83
examples, free printable pregrade worksheets, adding and subtracting 3 integers free work sheets.
Online free calculator, standard form of a linear equation calculator, videos on how to Graph Linear Equations for Algebra 1, algebra division calculators using fractions, grade 11 Trigonometry
details notes, Middle school math with pizazz.com, algebra taking a power across.
Order, im so lost in my college algebra class, how do I get fractions for answers on ti86.
Math textbook solutions, Calculating Greatest Common Factors, grade 10 maths papers, mcdougal littell algebra 1 answers, factoring on the TI 83 plus, free math answers for greatest common factors.
Chemistry skeleton equation solver, free online algebra calculator, simplifying radical expression worksheets, aptitude question+answer,, convert fraction to decimal matlab, pre algebra problem set.
Word problems using quadratic equation, what are the two basic steps in simplifying rational algebraic expression/, fraction and mix number calculator, solve logarithms algebraically.
Balancing quadratic equations, worksheets for adding like terms in math, matlab nonlinear equations, physics reading quiz conceptual physics, compound interest year 10 non calculator test.
"simultaneous equations" word problems, getting rid of decimals to make standard form, ellipses calculator, kumon math worksheets free, how to convert hexadecimal to decimal using TI-84 plus, TI-89
cube root, proportions worksheets free.
How to factor quadratic equations on a ti 84, combining like terms test, problems in algebra 1 with multi step equations free.
Erb math test practice sixth grade, WHAT IS THE FORMULA TO CONVERT A DECIMAL INTO A FRACTION, ged mathmatics.com.
Formulas for cutting expenses, powers in fractions, aptitude questions free, integration basic help maths bbc, math test online for year 8, how to solve a fraction being cubed, free online kids
revision worksheets.
Vertex form, gre, free Algebra Problems guide, matlab for taylor second order, how to convert decimals to mixed numbers, mathematics algebra software, solving addition and subtraction equations.
Parabola formula's, online graphing calculator to solve directrix, and focus, convert java system time, permutation and combination "tutorial", Give me maths answer on functioning machine adding and
multiplying, general maths test questions and answers, pizzazz puzzles.
Factoring vs simplifying, ti calculator rom image, a 7 grade example of an equation, 6th grader integration word problems, "characteristic properties of the compounds of the s-block elements", square
roots radicals easy quick cheat, equations relations powerpoint.
How to store motion formula on TI-84, How do you simplify radical expressions with exponents in them, hyperbola inequalities, calculas 2, factoring 3rd order polynomials.
Answers to algebra 2, solving 2 variable equations, algebra helper'.
Square roots and exponents, how to convert a double to a 2 decimal precision in java, how to find a scale factor help for 7th grader, graphing inequality worksheets, radicals simplified chart,
algebra 2 problems, florida algebra 2 book.
Algebra calculator machine, determinants and matrices multiple choice questions and answersfrom universities, canadian grade 2 math - money, glencoe algebra homework for 7 grade, distance formula
simplifying the radical.
Dividing polynomials by quadratic trinomial, 84 plus rom image, ti-84 emulater.
Pre algebra answers glencoe, online calculator for division of rational expressions, simplify using multiplication methods, rational equation calculator, how is polynomial division used in real life,
distributive property with decimaLS.
Multiple variable solving, Permutation Math Problems, how to put 10 log base 10 in ti 89.
Multistep equations worksheet, 8% as a decimal, Basic Math Free lessons, Boolean logic algebra calculator, lowest common denominator fractions calculator, math secret code worksheets, radical
simplifying calculator with variables.
Subtracting tens, hard algebra equations, hyperbola to linear graph, sample problems solving decimal linear equations, free download software to study accounting, polynomials with two variables
How do you do mathematical word problems? grade 10, algebra I identity equality powerpoint, evaluating variable expressions worksheets, expresions for algibra.
Free maths test yr 8, grade five australian maths plus homework do it online, math, solving for x, worksheets, glencoe mcgraw-hill pre algebra online download books, double convert to time java,
decimal to fraction converter with square root.
How to teach permutation and combination, vertex form algebra 2, third expression to simplify that includes rational (fractional) exponents., 9 as a factor worksheets, Math Homework Answers, Basic
turning a decimal into a mixed number, simplify expressions worksheet.
Ti 89 disable axes, easy way in solving mathematical equation, solving quadratic equations negative exponents, order of operation worksheets, beginning algebra worksheets, long division + step by
step worksheet.
Math help/volume, free maths worksheets algebra substitution, equation system maple, how to cube root ti 83, convert decimal point to percent.
Calculating Common Factors, cube root on a graphing calculator, adding and subtracting fraction worksheet, solving basic inequalities worksheets.
TI-83 plus emulator, mcdougal littell world history workbook answers, free scale factor worksheets.
Mcdougal little, chemistry addison wesley review module chapter 8, apitutde question with answer with time calculation, Equations and Applications with fractions, home school algebra software,
algebraic equations for beginners, free level 5 algebra solver.
How to find cubed root on a t-89 calculator, multiplying and dividing scientific notation questions, on line with results 9th grade alegbra test, matlab second order differential equations, linear
and non linear functions powerpoints.
Rational expression calculator online, solve maths algebra rules, rearranging formulas worksheet, algebraic equation sample questions.
When subtracting integers is it always add the opposite, solving limit equation calculator, decimal to base 7, online inequality graphing calculator.
Minimum of quadratic equation, solve rational expressions, worksheet Converting Between Percents, Decimals and Fractions, free download accounting books.
University of phoenix math cheat sheet, ti-84 plus texas download, free geomtry printouts grade 7, greatest common factor machine, formulae on polynomials for 9th grade, WHAT IS THE FORMULA FOR
Math Exercise Grade6 Worksheet, free printouts for math, algebra problems, addition and subtraction to 13, complex square root calculator, algabra, Transforming Formulas.
Conceptual physics+answers+chapter 2, addison-wesley + college algebra worksheets, graphic calculator factoring, computing simultaneous equations in excel, fraction variable calc, free basic Algebra
problems for 4th grade, polynomial with multi variables in c++.
Free online 7th grade algebra 1 help, equation powers graphs, Who invented graphing Calculators, :"grade nine" word problems.
Algebra games- printable, "Euclid's method", HOW TO CALCULATE LINEAR FEET.
Glencoe computer concepts in action worksheet answers, turning rational equations to linear, "nonlinear differential equations" "maple", prentice hall math textbooks answer key.
Adding quad eqn in java, two digit two step equation printable worksheets, 6th grade worksheet comparing and ordering fractions and decimals, worksheet reviewing multiplying and dividing decimals,
rules for adding and subtracting integers.
Algebra 2 mcdougal littell answers, when would you use algebraic equations, third order polynomial equation, printable worksheets on Writing a mathematical expression, Math investigatory project,
convert roots to exponents, algebra 2 textbook grade 10 download.
Horizontal and vertical stretch and compression on a radical graph, answer math questions for free, gauss-jordan elimination program download ti-83.
Square root of a quadratic, how do you figure out $65. divided by 12 using two-digit divisors, fraction inequality calculator, free ks3 science past exam papers, interactive square number activity,
how to add fractions with like signs.
Factors and multiples 6th grade worksheet, cubic polynoms solver tools, what is a standard algebra equation, how to do scale factor, worksheet - solving equations, Singapore Primary 3 maths
worksheets, download worksheet factoring polynomials.
Math problem solver online, learningbasics of engineering drawing 1st year Chemical Engineering, Free Calculator Download, multiplying integers, adding +integers worksheets+word problems.
Adding subtracting algebraic expressions, using a bar graph to subtract integers, Prentice Hall Math Book Answers, cheat mcdougal littell, free printable polar graph paper, calculator to evaluate
exponents, algebra 1 problems cheat.
Holt workbook answers chemistry, mathematics trivia, excel solve equation.
Greatest common factor + 98, LCD calculator, cost accounting + free bookreading online+book, who invented multiplying equations, what is the E after a calculation with decimals, 9th grade printable
How to save formulas in a texas instrument Ti-83 plus, quad root +maths, multiplying decimals worksheet, free fourth grade math printable, greatest common denominator calculator, solving equations
with fractions calculator, ti rom 84 image.
Least common multiple gratest common factor pairs, 9th grade algebra download, polynomials with fractional exponents.
Rational expressions calculators, solving equations and inequalities involving absolute values, algerbric form, Mcdougal littell school math workbook answers, 100% free accounting books.
Simultaneous equations engineering, cheat on saxon math homework, iowa algebra aptitude test sample questions, convert mixed fractions to a decimal, coolmath4kids online graphing calculator, second
order differential equations matlab.
Algebra1 SOL PPT, square root of a fraction, how to find cube root on ti-83, online fraction calculator algebra.help, free fifth grade decimal worksheets, TI 85 factor quadratic equation.
Graph circles on ti84, free algebra 2 test maker, graph charts for algebra.
Free math calculator-step by step, Holt algebra 1 answers, Softmath Algebrator, how to change standard form to vertex form.
Square rooting fractions, algebra 1 prentice hall quizzes, finding cube root on TI-83.
Math websites games 9th graders free, cross product t183 plus, cross product matrix t183 plus, using casio calculator, how i know reading math question adding or multiplying, difference between
permutations vs combinations easy kids.
"scientific notation" powerpoint "lesson plan", positive and negative integer worksheet, how do you do fractions?, practice workbook prentice hall pre-algebra answers, Aptitude Question Answer, maths
tests yr 8, uses of quadratic equation.
Grade 4 decimal homework practice, linear programming ti 89, fraction rules calculator addition, FREE downloadable ebooks for 8th grade mathematics on 8th grade pre algebra, cubed polynomial, algebra
1 cheats, complex zeros of parabolas.
Math+scale factor, common denominator solver, converting decimals to a mixed numbers, patterns and algebra test problems, Fractional Exponents Worksheet, How to solve and graph an inequality
What is the square root of a decimal, aptitude question & answer', multiplying decimal poems.
Worksheets related to showing time on clock for primary classes, Algabra.com Solve using the quadratic formula x^2 - 2x = 5x - 2, cheat sheet for rules for order of operations, roots for third order
quadratic equation, quadratic + vertex form + a greater than 1, good work sheets for 13 year olds/free, balancing chemical equations with fractions.
Relative algebra, online limit calculator, a plus algebra helper free download, factoring polynomials printable worksheets.
Implicit differentiation calculator, Operations on Integers worksheet, Free download Heath Geometry: An Integrated Approach, algebra grade 7th equation generator, SUbtracting decimal worksheets,
formula to solve non homogeneous differential equations.
2 step equation with fractions worksheet, find the quadratic equation from a table, solving a forced second-order differential equation, free factoring solver and explanation, solving differential
equation calculator.
Square root of exponents, quadratic exam questions, factoring on a graphing calculator, mcdougal littell algebra 2 practice workbook answers, square roots with exponents, math work sheet for class
vii india.
Permutation and combination review, changing decimals to mixed numbers calculator, simplifying symbolic equations in matlab, root formula, what is the least common denominator for 6/16, exponents
lesson plan, 3rd order solver in matlab.
M. L. Bittinger's Introductory Algebra in pdf format free, factoring polynomials to the cubed, eigenvalues calculator workings, ti-89 solver, walter rudin ebook free, math calculators simplifying
Math revision sheet fourth primary, interval notation with parabolas examples, create your own integer worksheets, kumon answers online, Ti 83 Plus Graphing Calculator online.
Quadratic calculator app, glencoe integers, converting numbers to metres, online book+ cost accounting +download, the least common denominater of 3/8 and 2/3 using a factor tree, algebra equation
creator, holt math study guide.
Math factoring solver, solving equations with integers games, CPM Teacher Manual, factoring binomials worksheets, free square root addition calculator, Solving equations by multiplying or dividing.
Fractions and integers least to greatest, MIDDLE SCHOOL MATH WITH PIZZAZZ! BOOK B, square root of variable with power, mathematics year 1 working sheet, multiplying and adding percentages worksheer,
LU with TI89.
Scale factor of circle, simplifying square roots with fractions and exponents, "Grade nine math games", algebra word problems worksheet, add and subtract integers worksheet.
Solutions manual dummit algebra, HOUGHTON MIFFLEN ALGEBRA 9TH GRADE/STRUCTURE AND METHOD BOOK1, first degree equations "worksheets".
Least common multiple ladder method, math database radical exponent, solving radical expressions calculator, 9th grade math printables, gcse biology for ninth standard, related literature in college
algebra, Factoring Trinomial Equations the diamond way.
Downloadable maths tests, how take second order derivative on ti, year 4 maths factors and multiples, what way can we simplify exponent expressions where the base are different.
Rational expressions calculator, tutoring online for algebraic expressions, highschool math test online, second order differential equations in matlab, base 10 math homework, gr 8 math worksheet free
printables ontario, first grade math homework.
Convert mixed number to decimals, aptitude ques, Using quadratic equations to solve real life problems, Math practice sheets for adding, subtracting, multiplying, dividing, 6th grade level,
Mathematical aptitude sample papers, algebra calculator multiplying integers, game with slope worksheet.
How To Do Algebra, worksheets on reflection gcse, EXPONENTS calculator, mathmatical powers.
Quadratic equations with fractions, TI-84 Plus emulator, windows simultaneous equation solver programs, algebra + factoring + denominators, aptitude questions and solving answers, root solver
programs for t1-84 plus.
Square root of 85, how to simplify using distributive property and fractions, trig pocket pc calc.
Adding and subtracting negative powers, Worksheet Mixed Numbers Patterns, Ti-89 ROM скачать.
Enter and solve rational equations and functions, how to subract equations for beginning alegebra for grade 6, how to get cube roots on ti calculator, free worksheets solve relations and functions.
Factor x2+4x+13, applications of algebra in real life, square root calculator in radicals, how to do cubed root, properties of rational exponents TI calculators.
Poems with math terms, interconverting pH and hydronium ion, dividing rational exponents, interactive algebra online games.
Mcdougal littell middle school math answers, rules to add and subtract basic integers, one-step equations+worksheet, convert a mixed number to a decemal, sample algebra 1 workbooks pdf.
Laplace mathematic equation writer, first grade mathmatics, wronskian for differential equation, free erb math practice sixth grade, pre algebra multiplying exponents worksheet, cliffs problem solver
free download.
Ti 83 plus rom download -torrent, geometry worksheets free + mixtrue problems, variables, expressions, graphs and equations to solve problems + tutorial, Modern Algebra text Book by Hungerford, How
to Solve a Geometric Probability Problem, java sum of numbers.
Permutations and combinations book, evaluate square root expressions calculator, 6th grade equations worksheet, definitions for pre-algebra math, answers to factoring problems.
College Algebra (factoring rules), resolve problems of algebra 1, adding and subtracting fractions game puzzle, understand basic algebra.
Cost accounting books, vertex to standard form calculator, least to greatest calculator, pre-algebra software help.
Factoring pattern for quadratic equation ( how do positive or negative, clep college mathematics cheat sheet, rational calculator, properties worksheet free, interpreting decimals worksheet, general
aptitude tips, pre algebra coefficients.
Solving n equations worksheet, math websites for kids - adding negative fractions, online factoring program, algebra 1-absolute-value functions worksheets.
My algebra solver, gauss jordan ti89 calculator, fifth grade algebra, nth square calculator.
Free problems + division + mixed numbers, quadratic calculator programs, multiple choice aptitude questions with answer, .freeworksheet, www.math fractoins.com, how to factor cubed polynomials.
Poems about strings, maths, evaluate expressions exponent worksheet, ladder method for least common mul;tiple, simplify fractions expression calculator, pre-algebra with pizzazz 163 creative
publications, workbook with answer for algebra 1 houghton mifflin.
How to find y intercept in maple, fx-115MS simultaneous EQ, simplifying rational expressions calculator, ADDING RADICAL CALCULATER.
Graphing systems of linear equations worksheets, online nth term calculator, powerpoint presentation on linear equations in two variable, WORKSHEET ANSWERS, factoring out a 4th root.
Calculator for solving quadratic equations by factoring, conceptual understanding of the subtraction of fractions, 6th Grade math Division with decimals PRINTABLES FREE, glencoe mcgraw-hill algebra 1
permutations and combinations answer sheets.
Download free calculator texas instruments, compute third order polynomial, mcdougal littell geometry answers textbook.
Integers+ work sheet, slopes review grade 9 math, algebra percent formula.
An easy way to find the lcm, scott foresman addison wesley mathamatics 1st grade free printable samples, calculator with radicals, what is the least common factor of 25 and 30.
Nonhomogeneous wave equation PDE, ti 84 emulator software, aptitude question primary level, add or subtract square roots worksheet, trinomial factoring calculator, programing a ti 83 to convert
inches to centimeter, logic puzzles with chart 6th grade.
How to factor equations with the varibles cubed, Logarithmic Expression without calculator, simplify non perfect square roots no decimals, add and subtracting negative decimals, Discrete Mathematics
and Its applications 6th edition answer key download.
Free math worksheets on radicals, Answers to McDougal Littell U.S History Workbook, math problem grade 10 elimination, mathematics- exercise on expansion for year 8.
Algebra 1 2007 book answers, simplifying fractions with negative exponents calculator, ti 89 equation solver symbolic.
Online matrice solver, simplify derivative calculator, word problem subtracting negative numbers, finding solving systems of equations with graphs.
Math Formula percent, factor binomials calculator, simplified radicals chart, factor a quadratic equation, online lowest common denominator calculator, factor tree test.
Decomposing worksheets elementary, quadratic equation solver factoring, help me solve my algebra problem, math trivia and tricks, ti-83 greatest common factor.
Practice mathematics exam year 10, variation in fraction algebra, chemistry addison wesley test hack.
Online common denominator calculator, CALCULATE MOLECULAR COEFFICIENTS COMBUSTION, "college algebra" "excel projects", linear Equation in two variables.
Primary algebra worksheets, fractions into decimals formula, free printable worksheets on adding and subtracting using integers, convert linear metres to square metres, explain inequality in Algebra
Database symplifying WHERE expression, online systems of equation grapher, grade 9 algebra questions, algebra fractions.
Online ti-84 plus, turning fractions upside down negative power, pattern convert decimal number, using graphing calculator to find zeros ti-83, algebra addition pyramids worksheet, how to calculate
Decomposing math worksheets elementary, Creative Lesson Plans Permutations, solving addition and subtraction fraction equation power points for elementary students.
Associative property worksheet for sixth grade, algebra get a percentage to the other side, linear combination method with 3 variables, free printouts first grade.
"Production Possibilities Frontier" maple plot, how to solve some question on quadratic expresion in general mathematics text book, cube root on calc.
Stepped graphs equation, add, subtract, multiply, and divide radical expressions, Dividing Polynomials Free Printable Worksheets, online rational expression solver, TI-84 calculator helo with solving
permutations, translating words into mathematical expressions and formulas worksheets, Holt Algebra 1 Online.
Determine equation with 2 points worksheet, logarithmic expression calculator, addition and subtraction worksheet with fractions with same denominator, gaming activity "on order of" operations
printable, ti 84 calculator free, Algebra 1 Exponents with variables worksheets.
Solving quadratics by factoring test, software, Factoring Algebra equations.
Simple concept polynomial long division, sol quadratics, fractional answers on ti86, finding common denominator worksheets, finding wronskians, Mcdougal Littell "Algebra 2" free Answer Key,
simultaneous equations powerpoint.
Elementary statistics cheat sheet, factoring calculators, fractions + algebra worksheet, (x + 5) ² = ± 9 algerba, Using scale factor to find area worksheet, free online calculator fractions to
decimals, combining like terms.
Nonhomogeneous first order linear equation, using manipulative to teach algebraic equations 7th graders, simplifying radical functions, how to do cubed roots on calculator.
Intercept and slope curve explanation, substitution method calculator, homework sheets first grade.
Solving second order differential equation matlab, free pre-algebra quick reference cards, example 7th grade inequalities, polynomial roots ti 83+, multiplying and dividing integers board games.
Simplifying complex radicals, houghton mifflin geometry, ace practice tests, Ionic Bond+Nacl+Videos+Powerpoint+Animation, comparing like terms algebra practice, converting fractions into decimals
Solve my algebra problems, permutation and combination calculator programme, adding subtracting equation games, Printable Volume Worksheets, Basic Algerbra printouts and answer key.
Advanced algebra order of operations, simplifying notation (exponents), florida mcgraw 6 grade online math, fraction power algebra 2, math root equartion simplify.
How to calculate the volume on the parabolic cone, how do you put a number to a higher exponent than 2 on a TI-84 plus, factor cubed binomial, free drawing conclusions worksheets,
Solving +algerbra word problems, algebraic formula examples for calculating swine ration programs, Algebra tutor, summation on TI 83, converting decimals to fractions machine.
How to do vertex form, linear equations worksheet, eigenvector texas TI-82, free solvers, free printable math variable worksheets, 3rd order polynomial.
"research paper" distributive lattice, find solutions to equation using subtraction or addition trigonometry, What Is the Importance of Factoring Polynomials, how to calculate distance between
vectors with ti89, foil method gcse.
Solving equations involving rational expressions, parent help students in algebra, glencoe mathematics + what's math got to do with it?, add fractions+worksheet, 2 digit multiplying with decimals
practice worksheet.
Online matrix algebra calculator, ti 83 complete the square, intermediate algebra math trivia, generate fractions on computer programs, online calculator for expanding logarithms, graphical vector
worksheet, adding and subtracting algebraic expressions lesson plan.
Lowest common factor, t-83 graphing calculator+download, solving homogenous equations in matlab, Operations on Rationals and Radicals, Algebra 2 Finding Vertex.
College algebra gustafson solutions guide online, laddermethod for calculating greatest common factor, cubed polynomials, online Mcdougal algebra 1 book.
Math online worksheets, fortran solve non linear equation, general aptitude formula tips, practice worksheet for the end of course test for physical science, pictures with ordered pairs.
How To Factor A cubed expression, online inequality solver , what is a scale in math?, convert mixed fractions to decimals, different rules in subtracting integers, find factor for third degree
quadratic equation in ti 84.
Adding integers worksheet, easiest way to solve problems in appititude test, free math plot graphics for middle schoolers, free McDougal Littel worksheets, free algebra 2 help.
Cost accounting ebook, glencoe precalculus answers, mixed numbers as decimals, factor tree worksheets fractions.
9th grade algebra 1, Write the following expression in simplified radical form, java codes ( equation ), function rule worksheet, different ways of adding/subtracting fractions, steps to solving pre
algebra two step problems, multiple choice worksheet for adding fractions with like denominators.
Quadratic function factored form to vertex form, factors algebra worksheets, notes on permutation and combintion, permutation and combination study notes.
Maths hyperbola translations sheets, java remove punctuation, graphing calculater, matlab nonlinear solve, solve problems with positive exponents pre algebra.
Index of a square root, number sequences KS2 worksheet, check square circle overlap java geometry, adding and subtracting integer fractions, review test for algebra1 solving equation, worksheet
graphing linear equations, equations with fractions calculator.
Method square root, square root problem solver, dividing polynomials tI-83, divide mixed numbers caculator.
Nonliner Algebra, 3 variables completing the square, 3rd order quadratic equation, kumon tutorials, how to take percentage of a number formula, saxon algebra one answers, algebra with pizzazz answers
Divisibility by 2,5,and 10 games and quizzes, Merrill Physics textbook answer key, factoring difference of squares powerpoint, Substitution method algebra, nonlinear systems of equations on ti-89.
Bing visitors found us yesterday by using these keyword phrases :
+pearson +education +daily +practice symmetry +printable +2ND GRADE, long math equations to answer, imaginary numbers worksheet +free, ratios using cross products as fractions online calculators.
Multiplying dividing whole numbers decimals, second order equation convert to quadratic form in matlab, factoring differences of squares calculator, what is first grade algebra, differential equation
solve non linear, free algebra problems and key.
Grade 10 maths exercise with logarithm, integrated math chapter 2 and 3 test answers, solving 2nd order ODE - non-homogeneous, factoring trinomials calculator online, simplify expressions game in
Formula for figuring out a fraction of a fraction?, finding model for line graphing calculator, graphing pictures on the coordinate plane, how to get to vertex form to intercept form.
Hardest math equation, algebra review for dummies, multiplication of integers game, solve systems of equation by elimination calculator, multiplying absolute value, texas algebra 1 ANSWERS.
How to use logbase on TI-89, maths quiz questions ks3, ti 84 plus factors, games ti 84 plus, adding subtracting integers activity, maths equations quadratics games.
Graphing linear and non linear powerpoints, advanced algebra 2 book answers, free printable math worksheets ks3, download Glencoe Algebra 2 Answer Key, program that helps you with your math
Decimals under radical, rational number, real life word problems dealing with linear equations, simplifying algebraic expressions + ppt, exponent radicals algebra worksheet, Find Domain and range
using TI-83.
Number factor table, abstract and concrete goodman solution manual, solving substitution 6.2 worksheet answers, highest common factor of 32 and 24.
Adding and subtracting integers with fractions, long division polynomial challenging problems, qudratic equations.
Addison wesley math 6th grade course 1, square root tutorial, math integers worksheets, simplifying square roots with exponents, laplace solving systems with maple.
3rd order quadratic equations, base converter java code, algebra problems first grade.
Foiling trig, graphing calculator and square root of different numbers, multivariable graphic calculator, trig calculators, how to do the difference of two square?, mcdougal littel geometry book
practice online.
Second order linear ordinary differential equations - nonhomogeneous, Least Common Multiple Worksheet, pythagorean theorem worksheet joke #16, diff between rational and polynomial graphs, grade 9
exponents in algebra, online practice with permutations and combinations with sixth graders, printable 1st grade math.
English for math for junoir high school free download, square root worksheet, how to do addition and subtraction of rational expression, algebra lesson yr 6, cubed roots, dividing with exponential
Solving equations involving Rational Expressions, Fun with proportions worksheets, adding and subtracting fractions calculator online, algerbr, free online decimal test printable, Algebra the easy
way teacher book.
Solve polynomial on TI-83, CRBond code, how to multiply a squared radical equation, answers to math problems free, free accounting ebooks download pdf, find lowest common denominator, how to solve
combination problem on ti 84.
Factoring common terms worksheet, 3.1 as a mixed number, solving Differential algebraic equation using matlab.
Exponents and logs powerpoint, multiplying equations with exponents, x intercept cubed, saxon algebra 1 answers.
Factor trinomial calculator, integers worksheets, Free worksheet of simple interest for class 7 maths, punctuation symbol java count, subtracting integers for dummies, online answer key advanced
Quadratic factors in Matlab, Free polynomials problems, algebra for dummies and free, ti 89 quadratic, mathematica download free, Math Homework answers, slope worksheet key.
Texas instruments calculators instructions roots, quadratic equations interactive games, accounting free books, math poem, algebra problem of week ratio an "average".
Aleks cheats, fraction equation squares, algebra word questions for grade 9.
Free worksheets and ti-83 calculator, chemical equation steps, "Quadratic equation solver for TI-86", slope formula worksheet, FREE ALGEBRA EXERCISES, key of calculus 7th edition free download.
Samples of english test papers for grade 6, best algebra software learn, how to order decimals from least to greatest.
Algebra problems-distributive property, worksheet on translating phrases into algebraic expressions, counting punctuation within a string in java.
Finding the standard deviation on a ti 83 plus, binomials cubed, hard fractional exponents problems and answers.
College algebra solving for non-linear equations, mcgraw hill science chapter test for 6th grade, ask jeeves for kids division with remainder cheat sheet, Exams in solving logarithmic functions.
Solving equation trinomials, free download mathematical lesson notes with pdf, beginning and intermediate algebra free help, solving scientific notation with a online calculator, Permutation &
combination sample questions for CAT, free adding and subtracting fractions worksheets 6th grade.
Ti 84 plus online, give me some practise on solving the square, online evaluating expressions with graphing calculator, math activities for slope.
Multi step inequality equations exercises, solve nonhomogeneous first order differential equation, algebrator software download, how does simplifying a radical expression help solve an equation
easier, glencoe pre algebra answers.
Absolute value 7th grade worksheet, algebraic expressions worksheet, free negative integer calculator, 1st grade algebra worksheets.
13+ online exam papers, "a level maths" worksheet, download Ti 84, Exponets and multiplication, adding and subtracting with number lines worksheets.
Solving equations by multiplying or dividing decimals, how do you rewrite multiplication as division, square roots worksheet, dividing rational expression calculator, evaluating expressions integral.
Algebra 2 linear programming, download phenix to graphing calc, online graph fractions calculator, ti-84 solve 2 variable equations, ti 89 polar equations.
Log base ti 89, review solving equations problem, divisor calculator, exponents and square roots, solving systems of equations activity, how to pass college algebra, free ninth grade math sheets.
Online calculator trinomials, multiplying dividing scientific notation worksheets, example of parabola equation with points, softmath.com.
Alegebr, "TI-83 Plus" "inequalities", prentice hall online answer key advanced algebra, algebra and trigonometry structure and method book 2 teachers edition, x y graphing fun pictures worksheets,
lesson plan on exponents.
Factoring program algebra equations, answers to page 230 in texas algebra 1 book, how can you write a mixed or a fraction number as a decimal number, 1-100 square root table, work pages for third
CA ALG 1 UNIT 4 worksheet 8 ANSWERS, iamlooking math games, simple rules for adding and subtracting integers, online practice problems dividing and multiplying fractions.
Simplify square root calculator, how to find the equation of a given polynomial on a graph, 6th grade math conversions charts in oregon, 8th grade extend patterns worksheets, 4th grade algebra
worksheets, free rational expression calculator.
Algebra with pizzazz objective 1-a:, Teach Me Trigonometry, math trivia with answers mathematics, free ged worksheet to print, how to cheat on algebra math homework, divideing games, math multi step
word problems 5th grade.
Algebra solver, accounting books sample, order of operations and integers worksheets word problems, How to do pre-algebra for adults for free, radical simplify calculator.
Geography worksheets for 7th graders, list of complete the square questions and answers, transformations of quadratic functions from f(x)=x2 to vertex form, equations involving multiplying rational
Percent proportion, non linear equations matlab, multiplying rational square root function, graphing linear equations on the Ti-83, ti-83 roots.
McDougal online answer to textbooks, addition and subtraction to 18 worksheets, rational expression solutions, HWO TO FIND A PERCENT OF A NUMBER IN EXCEL, free grade 9 algebraic questions and word
Common factors formula, solving for a logarithm of fraction, modifying the pythagorean theorem in asia and africa, practice multiplication worksheets with variables, online calculator for solving
system by substitution method with fractions, math 10 radicals online quiz, lesson plan simultaneous equations junior secondary.
Linear Equation java, Algebra 2 Work Problems, example of mixed number to decimal conversion., multiplying decimals test, difference between solving and evaluating.
Completing the square TI-89, beginning algebra tutorial 26, algebra 1 practice 5 relating graphs to events, completing the square with multiple variables.
Examples of linear programming for matric, fraction and deimal worksheet, algerbra questions, probability combination word problems, cubed root of fraction, online geometry problem solvers for free.
Least common multiple calculator fractions, free maths questions for 11 years kids india, CONCEPTUAL PHYSICS ANSWER, free maths practice papers for ks3, eight grade calculator download, math
transformation questions for 6th grade.
Trigonometric problems with solution, free worksheets on adding and subtracting decimals,7th grade, multiplying scientific notation.
Find least common denominator calculator, solving quadratic equations activities, aptitude question with answer, calculating slope and y-intercept worksheets, gcf monomial calculator, yoshiwara
intermediate algebra textbook review.
1. Simplify cube roots, 2nd order nonhomogeneous, solve nonlinear differential equation, solving the quadratic equation by finding the square roots.
Test mathematic, prentice hall mathematics pre algebra page 200 answers, factoring polynomials to the third formula, I NEED HELP WITH COLLEGE ALGEBRA, online free math guides for 9th graders, what is
easy way to do spuare root with big number, Finding the Least common Denominator of an Equation.
HOw do i use probability on my ti86, simplifying functions calculator, hardest Alg 2 problem.
Algebraic expression solver, math combination solved problems, elementary algebra answer, hungerford solutions algebra, solve sets of linear equations with ti-89, rules in operating algebraic
expressions, slopes worksheet.
Quadratic equation with a complex, completing the square exam questions, formula for adding fractions, answer to algebraic expression problems, algebra 2 math solutions, 4th grade math expressions
Algebra and trigonometry structure and method book 2 answers, first grade CA history lesson, advance algebra of chicago school mathematics worksheets, +mathematics 6th grade dominators, ninth grade
math printables.
Integrar en ti 84, Revision rationalizing denominators of a fraction surds bitesize, Mathmatical extrapolation, linear expression calculator, square root formula, derivative solver online.
Simplifying square roots calculator, multiply divide fractions worksheet, prime factorization of the denominator, year seven fraction work sheet, radical square roots chart, 8th grade algebra 1 quiz
on eliminations.
Secondary math problem of KS3, eliminating the denominator calculator, factoring polynomial cubed, aptitude questions pdf, subtracting integer+powerpoint.
Solutions for rudin chapter 3, finding equations for asymptotes, precalc 2nd edition answer key.
9th class maths theorem test, ti-84 emulator, advanced algebra solver, rational expression online calculator, free grade 9 algebra online test, download clep college algebra, Finding the Least Common
What is the number six in base three, how to solve simultaneous equation with 12 unknowns, simplifying algebraic equations, simplifying numbers in exponential form, need free printable worksheets for
function tables, what is a decimal that never ends called, simultaneous equation solver matlab using sym.
Solving multistep equations fun worksheets, algebra 1 textbook prentice hall, permutations and combinations worksheet, on line free calulator turning fractions and mixed numbers into mixed decimals,
creative introduction to slope for 8th grade math, Solve for x using Casio calculator, log base 2 ti-89.
Lcm and gcm math tests, radical expressions adding, download aptitude materials, Calculate Common Denominator, what is a polynomial that is not factorable called?.
Pre-algebra equation solving, solve for y fractions algebra, Online Radical Calculator, electrical cheats TI-86, test question in definition of arithmetic sequence"algebra" in math 2.
Free Download Aptitude books, Math Problem solver order of operations, system of equation grapher, University level math problems bank, hyperbola formula + graph.
Dividing square root worksheet, presentations of linear equations, MULTIPLYING AND DIVIDING EQUATIONS.
Factorising quadratic double brackets indices year 10, percents to decimals + mixed number, solving differential equations n-th order, what's the rule for multiplying and dividing integers, texas
instrument Ti-83 plus how to save notes, factoring equations online.
Free math ppts, graph worksheet 2nd grade, polar to rectangular equation, second order differential equation into a system of equations, teach me college algebra.
Answer key for Gallian 5th ed. contemporary abstract alg., math worksheets - two step equations, steps on balancing chemical equations, online practice ks3 tests, softmath algebrator, how to change
to a mixed fraction percentage to just a fraction, decimals.
Learning basic algebra, worksheet bearing maths free, solve by the elimination method calculator, ordering number from greatest to lowest, Solving Rational Expressions on a TI-89, alg worksheets.
Algebra help / subtracting integers, 3 root x^6 z^3, glencoe geometry sequence worksheets, how to convert whole numbers to decimals, beginner algebra games.
Differential solving homogen second, free softmath, subtracting quadratics, mixed fraction converted to decimal.
Understanding Positive And Negative Numbers, accountancy books pdf download, inverse operations worksheets 6th grade.
Graphing absolute value on number lines worksheets, "kumon free", algebra factions, algebra for children grade 7.
Abstract algebra homework solutions, algebra multi step equation worksheets, "roots of real numbers worksheet", example of math trivia, solve recursive cubed sum.
Subtract 2 digit numbers worksheet, maths quiz questionsfor grades 8 to 11, fifth grade algebra games, linear graphs and equations worksheets, linear differential system calculator.
Free pre algabra how to worksheets, adding,subtracting,multiplying,and dividing decimals, java sum of first 100 integers, convert decimals with whole numbers to fractions worksheet, worksheet
multiplying dividing powers of 10.
Mixed number to a decimal, cost accounting ppt(book), how to solve partial fraction decomposition problems online, free algebra help rational numbers and equations., T1 83 Online Graphing Calculator.
8th grade math formula sheet, cubic formula program code calculator, error messages on graphing calculators, plot 2nd order ode matlab.
Printable SOL preparation worksheet second grade, algebra 1a homework helper/solving compound inequalities, ask jeeves maths decimal.
Solve fractions of algebra, coordinate picture worksheets, cube root on a ti 83, what is Descending Decimals, matlab solution of nonlinear differential equation.
Standard first-order linear differential equation, solve second order differential equation, 8th grade online calculators.
Common denominator between 5 and 15, hyperbola equation, test grade cauculator, Multiplying Radical Expressions.
Manual computing squere roots, find the square root by using factor method, free algebra worksheets solving systems of equations, solutions for problems in conceptual physics, 7th grade math
worksheets COORDINATE PLANE, quadratic with 1 unknowns 2 constants.
MATH TRIVIA, calculator equivalent fractions with least common denominator, McDougal Littell Algebra 1 problems online, finding the lowest common denominator tools, evaluation of expressions and
equations, Simplify the mathmatical expression.
Worksheets for least common multiple, algebraic equations with addition and subtraction, McDougal Littell Middle School Math answers, worksheets + teaching greatest common denominator, Samples Of Pre
Algebra Problems, 1806 square root, addd+subtract+integer+worksheet+grade 7.
Math trivias, FREE algebra 1a WORKSHEETS, addition problem solving worksheets.
Simplifying radicals calculator, mcdougal litell worksheets, reciprocal solver, Texas Instruments T1-81 Graphing Calculator, geometry triangle poems.
Practice math problems for georgia students in grades 6-8, worksheets for graphing negative integers on a number line, integer rules worksheet, javascript basic maths calculator.
Solving addition and subtraction fraction equation powerpoints for elementary students, finding common denominators in fractions rational expressions, quadratic formula third order, prentice hall pre
algebra, algebraic formula for mean, percentage equation, list of common math formulas.
MATHAMATICS SQURE, integer worksheets, ALEGEBRA MATH CHECKER, formula cheat sheet for graphing calculator, glencoe pre algebra free answers, cubed binomials, first order linear differential equation
electrical examples.
Positive and negative numbers, dividing polynomials by binomials, How is doing operations (adding, subtracting, multiplying, and dividing) with rational expressions similar to or different from doing
operations with fractions?, prentice hall mathematics algebra 1 answer sheet, Common Aptitude Test cheat Sheets, yr 8 paper for maths, java code not prime while loop.
Reading a fraction and decimal pie chart, algebra + equivalent formulas + worksheets, Aptitude Questions & Answers PDF.
Calculating first difference linear relations, ks2 ks3 algabra, sqaure root test questions, adding and subtracting negative and positive integers worksheets.
Fraction equations, finding the largest common denominator between 2 integers, trigonometry online calculator, algebra 2 problem solver, squaring quadratic calculator, cordinate point worksheets
third grade.
Algebra online cheater, rational expressions online calculator, graphing calculator matrix online, alegbra worksheets, HOW DO WE DIVIDE RAICALS SQUARE ROOT MATH A, free printable algebra
lessons-difference of squares.
Free algebra practice for third grade, online ti 84 graphing calculator emulator, how to cube root on a ti 83, fourth grade math money wor probelms, equations for sqaure yards, what is a mathmatical
pie, free year 7 maths worksheets.
How to write a function in vertex form, missing denominator calculator, algebra area, functions calculator solver, quadratic formula and the property of a, Orleans Hanna Algebra Prognosis Test-Third
Edition - Test download, prentice hall mathematics algebra 1 online book.
Hrw algebra 2 2004 online text, algebraic fraction powers series, standard form functions calculator, prentice hall conceptual physics textbook, rational expressions review worksheet, online gcse
practise statistic exam tests, prime factoring lesson 3rd grade.
Step ladder method in math, step by step lessons to teach algebraic expressions 5th grade, college math 0024 practice exit exams.
Physics trivia with answer, COST ACCOUNTING PPT (BOOK), decimal to fraction solve, how to simplify a fraction on a ti-84 plus calculator, solving 3rd order polynomial equations with matrix, why
simplify a rational expression makes solving more efficient.
Lcm monomial calculator, metre math work, prentice hall mathematics online book, google calculator for integers and monomials, mathematics aptitude questions, free printable maths angles sheets.
Gcse practice papers free download, transforming second order differential equations to first order, Basic Math Equations/Sq.ft, calculator easily converts decimal into fractions, equation solving
and dividing calculator.
Calculate gcd, solution of abstract algebra foote, graphing linear equations for dummies, trigonometric proof solver, fraction with variable calculator.
Differential equations solving matlab, intermediate algebra factoring worksheet, Material to be taught math for tutoring in spring texas, About T-89 calculators, pre-algebra decimals in expanded
form, pizzazz fun math riddles.
Formula turning decimals to fractions, coordinate plane worksheets, maths foundation general online games and quizzes, free integrated math worksheets, exponential expressions, prentice hall
california pre algebra powerpoints, how to solve algebra equations grade nine.
Ti calculator convert radicals to decimals, nth term calculator, holt ALGEBRA 1, algebra explanation worksheet, how do add and subtract integers, factor polynomials online calculator, partial
fractions 3rd order polynomials.
Multiplication and Division of Radical Expressions., ti 89 econ notes, solving for square roots no calculator.
Find domain of the equation, how to solve simultaneous quadratic equations, www.math book of class-8th.
Simultaneous equation solver online, websites with interactive activities to practice combining like terms in 8th grade algebra, learn algebra on-line, free online line regression calculator.
Discovery math lesson on slope, completing the square creative lesson plan, gr.11 quadratic equations and inequalities help, java how to convert fraction to decimal, Homework Abstract Algebra
Solutions, simple square root worksheets.
Example of integers rules of multiply, algerbra calculator, roots of 3rd order polynomial, variation and proportion AND math cheat, Integer worksheets for 11 yr old, what is an algebraic expression
that relates oxidation numbers and subscripts, algebraic calculator reduction.
Simplify rational expression calculator, Using Equations to solve Problems grade 5 algebra, Holt algebra one textbook.
Calculator that can factor polynomials, algebraic expressions with exponents, how to multiply a mixed number by a decimal, java exmple gradecalculater code, addition and subtraction with like
denominators worksheets, Solve My Algebra Problem.
Math worksheets with order of operation, how to code for solving variables, turning fractions into decimal calculator, Algebra With Pizzazz, quadratic expression definition, add, subtract, multiple,
divide + and - integers.
Pre-algebra online free, solve multiple equations multiple variables, ti-83 log base e graphing, interpolation on TI-83, 3rd grade math solving for 2 unknowns.
Perpendicular slope intercept solver, free ratio worksheets, simultaneous equations solver showing workings, free algebra 1 tutoring, solve for variable in denominator, why algebra, Convert Number To
Easy ways to learn algebra, decimal to fragment conversion, maths revision sheets year 10.
Balanced chemical equation formulas of gold metallurgy, creative publications test of genius answers, SCIENCE FOR 4TH GRADERS IN NEW JERSEY, FREE PRINTABLES, solving complex rational expressions.
How to calculate binomial distribution using T1-83 calculator, writing linear equations real world, sum += number java, worksheets on evaluating expressions, pre algebra for dummies, algebra>factor
entered equation, finding the quadratic polynomial linear algebra.
Ged matric maths paper 1, rudin analysis solutions, free worksheet on addition to 12, printable gcse worksheets free, online t-83, multiplying and dividing fractions grade 8, solve radical
Convert from a mixed number to a decimal, Basic algebraic graphs functions, ti-89 log key.
How to solve quadratics by the square root method, how to used t-83 to graph inequalities functions, subtracting negative integers worksheet.
Solving algebra problems, McDougal Littell Inc. worksheets History, answers for prentice hall, cost accounting + books, variables terms and expressions calculator, Excel solve 3 equations, logarithms
free exam paper.
Mastering the taks glencoe science, how to factor trinomials ax+by=c, quadratic functions game, what is the common greatest denominator of 9, free online california 7th grade algebra study aids,
linear equations with two variables worksheets, common denominator calculator 3 number.
Turn decimals into fractions online, greatest common divisor calculator, solving equations by dividing or multiplying decimals, solve a quadratic equation using excel solver, free software for
solving logarithm problems.
Year 7 algebra worksheet, radical form, math for 3rd grade sheets, basics and theorem of trignometry, maths aptitude test paper.
Basic scale factor worksheet, writing linear equations power point presentation, free printable worksheets on ratios.
Fraction least to greatest, algebra polynomial grade nine, download rudin "principle of mathematical analysis", adding and subtracting decimals worksheet, percent volume formulas.
Samples of accounting books, least common multiple word problem, lattice square printables, maths statistics problem solvers.
-f softmath, free online worksheet for basic math, solving addition and subtraction equations worksheets, assessment book mcdougal littell biology chapter test, quadratic equations grade 10, online
math aptitude exam.
Free kumon worksheets, fifth grade equations examples, excel hyperbola formula, factoring quadratic, games for practicing subtracting negative numbers, numerical solution of simultaneous nonlinear
partial differential equations, How to use the TI 84 for elipse.
A level adding fractions, 9th grade slope, lowest common multiple of 94 and 34, chemistry balancing equations prentice hall Chemical quantities, ladder method for calculating least common multiple,
Fifth Grade Mathematics Worksheet.
Download question & answer of c language, equivalent decimals and fractions practice test, how to solve problems with different variables, third root, algebraic expressions free worksheets, what is
the highest common factor of 48 and 108, glencoe physics online book.
Trinomial equation solver, solve equation 7th grade, mathematica show steps integrating, ti-83 plus solving third order equations.
Math trivias and puzzles, multiply and divide algebraic fractions 2 variables, What does it mean to solve by graphing?, adding and subtracting integers word problems, hard maths sheets, online
exponent calculator.
How to convert a decimal to a radical?, Glencoe Algebra Solving systems of equations, monomial prime factorization calc.
Www.algebrapractices, fun 5th grade algebra activities, Multiply a mixed number by a decimal, algebra rules radical.
MATLAB solve nonlinear equations jacobian, number grids ks3 revision, learning algebra free, solution of abstract algebra foote, pearson, prime factor fractions calculator, help with introductory
Free algebra using cubes for grade 1, glencoe algebra, Program quadratic formula into ti 84.
What is a lineal metre, rational expressions complex numbers, ti-84 plus emulator, free step by step solutions to college algebra questions, extracting the square root.
Symmetry Math Printable Worksheets Kids, Who Invented the Slope Formula, using the solver program on Ti-84.
Adding and subtracting fraction worksheets, volume problems worksheet middle school, worksheet 4-5 graphing linear equations, test of genius from pre algebra with pizzazz, how to order decimals and
fractions from least to greatest.
Algebrator Calculator, Balancing chemical equations illustrations, printable symmetry KS2.
Free 1st grad math work sheets, quadratic solving with zero calculator, "special numbers" maths worksheet puzzle, conceptual physics chapter 5 answers addison-wesley, convert equation of parabola
from standard form to vertex form.
Print off maths sats papers free, integers add and subtract fractions, ti 84 probability combination permutation, Finding patterns Writing formula, free solutions for geometry textbook by McDougal
Littell, Exponent rule to simplify an equation, free calculater software.
Powerpoint presentation on linear equation, multiplying and subtraction, Fundamentals of Differential Equations fourth edition free download, multiple presentation of linear and non linear functions,
cheat finding complex roots, how do you change subtraction into adding the opposite.
Java program for premutation and combination, convert price to lineal metres, how to do simple algebraic expressions.
Square root calculator simplest radical form, Square root review worksheets, printable math games for beginning algebra, general aptitude question, boolean algebra calculator, simplify exponents
Kinematic tolerance analysis slider crank, What's a factor ? in kids maths, in math, what is the best to solve a fraction equation?, world accounting free book, THE PYTHAGOREON THEORY CALCULATOR.
9th grade algebra games online, worksheet adding subtracting multiplying dividing integers, solve by elimination answers to practice 7-2 in the algerbra 2 book, Some Rules in Subtracting Algebraic
Expressions, proportion printable worksheets.
Formula on how to convert fraction to decimal, formula for the sample algebra, Scale Factors Math, turning Fractions to decimals calculator, multiply fraction with exponents calculator.
Simplify radical expressions, adding sum of an integer recursevly, partial fraction decomposition calculator.
Saxon math homework answers, cube roots square roots java, 7th grade adding,subtracting,multiplying,and dividing fraction games, creative publications algebra.
Factoring solution calculator, equations 5th grade, changing a mixed number to a decimal, 2004 algebra level 1 exam questions and answers, signed numbers worksheets.
List integers greatest to least, saxon algebra 2 answer key, complex rational expression solver.
Subtracting integers rule table, Like terms/pre-algebra, Glencoe Mathmatics answers, "Mathcad" tutorials "graphing circles", Real Life Examples of Absolute Value Graphs.
How to solve not simple trinomials, programs for casio trigonometry, subtracting mixed numbers pictures, the answers to chapter 4 quiz 6th grade pre-algebra, easy to learn algebra, radical
simplifier, texas ti-84 plus emulator download.
5th Grade base and exponent computation, t-83 graphic calculator free online, rational equation solver, quadriatic equations, Find Elementary Linear Algebra (8th) textbook problems explained
step-by-step, broyden java equations systems.
Division equations worksheet elementary school, pre algebra for first grade, graphing calculators for differential equations, permutation in vba, factoring fractions with exponents, 9th grade algebra
review, solving 6th grade fraction equations.
Mathmatics log example, convert decimal to fraction on TI 86, algebraic equation related 2 real life situations, least common multiple ladder mathod.
Converting decimals to fractions calculator, projects for solving addition and subtraction equations, logarithm for dummies, free math worksheets in solving equations, partial fraction decomposition
on TI-83, applications of trigonometry in our daily life, add fraction with variables.
Factor Polynomials Calculator, addition sign vs. subtraction sign worksheets, free multiply integers worksheets.
Convert decimal as rational number, help factoring online, how to solve properties of exponents, worksheets on LCD for fractions, hard math trivia, inverses of quadratic and square roots, multiply
radical expressions.
Math worksheets for computing averages, online factoring, basic canadian Grade 8 math test.
Free e-book on apitude by n.k. Mohanti, solving simultaneous equations in MATLAB, online t-83 calculator, free worksheets on using calculator to get equation of lines from a table, free worksheets
finding common denominator, Hands on Algebra Projects.
Multiply radical expressions calculator, Printable 3rd Grade Math, Free Algebra Solutions, ti-83, find slope of line, least common multiple chart.
Ti-83 cube root, absolute value radicals, permutation math.
Polynomial implementation program in java, logarithmic equation solver, saxon free math work sheet provided, solving quadratic equations by completing the square, online calculator for solving
quadratic equations by factoring, calculator dividing algebra equations, worksheets for adding and subtracting neg numbers.
Factoring trinomials calculator, exercises for finding the lowest common denominator, ti-84 radical expressions, McDougal Littell Algebra 1 book online, solving second order differential equations in
matlab, multiplication problem solver, 6th grade property worksheets.
Casio calculator formula, MATH TRIVIA WITH ANSWERS, Algelbra made simple .
Linear combination worksheet, pdf for ti 89 titanium, houghton mifflin algebra 1 answers, implicit differentiation calculator online, gcd algorithm+vhdl coding, trivias about trigonometry.
Algebra converter, prentice hall algebra 2 answers, how to rewrite algrebra problems.
Multiplication expression worksheets, Iowa Algebra Aptitude Test, 5th Ed. Test Booklets, Form A, NUMBERES WORKSHEET ONE TO TEN.
Ti-89 division show decimals, how to calculate a fraction, tricky aptitude question with answers, facoring quadratic expressions, step differential calculator, find lowest common factor worksheet,
how to solve systems of equations with complex numbers.
Factoring a four term polynomial, multi step equations with fractions worksheet, HOW TO CALCULATE VOLUME OF SLOPE.
Simplify exponential notation, who invented the graphing calculator?, algebra games for practise, pizzazz math worksheets, ordering fractions.
T-189 calculator- chemistry problems, square root + exponents, reducing fractions with square roots, fastest way to learn algebra, quadratics with variable exponents.
How we can find Greatest common denominator, physics prentice hall solutions, long division ti-89, When a Polynomial Is Not Factorable, free solve algebra problems step by step.
Combining like terms worksheet, powerpoint solving one variable inequalities, TI 83 emulator download, math investigatory project about statistics, permutation and combination for GRE, divide decimal
by whole number + worksheet.
Square roots and exponent, using a ti-83, how do you find the y intercept, online TI 84 plus, adding and subtracting practice sheets year 3, equation, solving systems nonhomogeneous equations matlab,
how do you do quadratic equations on ti 89.
Pdfs for adding and subtracting unlike fractions for algebra 1, math trivia samples, SQUARE ROUT OF A RATIONAL NUMBER, simplifying algebraic expressions free worksheets, differential equations SHow
that the transform linear system, square root symbol word equation in bluedocs, maths exam online.
Download ROM ti89, 6th grade order of operations worksheets, Diamond method in quadratic equation factoring, ti 84 plus simulator, free math sheets third grade, Scale Factor Problems Middle School.
Sylow third theorem.ppt, glencoe algebra 1 book answers, free interger worksheets.
Basic algebra worksheet, quadratic simultaneous equations solver, partial sums addition.
Www.mathtriviaproblems.com, grade 11 exam papers, maths permutations formular, convert 1 2/3" to decimal, graphing a line powerpoint, free online solver differential equations.
Things to include on your cheat sheet for year 11 probability, solve for two radical equations, Algebra Poems.
Year 8 maths questions download, how to graph a parabola on TI-84 plus, Dividing Integers printable worksheets, where did the concept of algebra begin, free online least common denominator
calculator, ti 83 plus prime factoring, +High Order Thinking Lessons for second graders.
Linear programing word problem examples, positive and negative fraction integers worksheets, 6th grade math test answers, solving equations containing fractions calculator, graphing an x value on the
TI 83, printable one step equation worksheets, lineal metre definition.
Practice on addition and subtraction of algebraic expressions, how to solve for a specified variable, Dividing Polynomials Online Worksheet, operations with radicals and rational exponents, algerbra
Permutation and combination linear programming, glencoe algebra 2 chapter 3 test form 1A, questions of maths class 7th, Fraction in java, simultaneous equations using exel solver.
Solving pde by D'Alembert, online nonlinear equation solver, online integer calculators, downloadable algebra problem games, rudin hw ch8.
Pdf accounting books, free equation factoring online, variables and expressions lesson plans, multiplying and dividing fractions worksheets, trigonometry answers, Simple and Compound Interest
Problems-for GMAT.
Rectangular Triangle Ratio Formula, prentice hall biology worksheets, mathtype laplace, graph pictures on a calculator, algebra trivia, graph the equation y=5x-3, Basic programs for permutation and
How to solve third order polynomial, add subtract multiply decimals worksheet, aptitude question and answer paper, factoring online, Algebra Equations for 12 year olds, Introductory Algebra Problem
Adding and subtracting "polynomials" 8th grade worksheets, Algebra 1 Answers, solve system ode matlab.
Adding dividing, adding and subtracting mixed #s free worksheets 7th grade, finding cube roots of large numbers on TI-83, accounting+ebooks+free downloads.
Coordinate plane, how to find out a mixed number, simplifying radicals calculator factor, Graph Hyperbola, mixed number to decimal.
6th grade test questions on probability, math worksheets cheating, differential nonlinear coefficients solving.
Holt mathematics estimating square roots answer sheet, Free typing sheet lesson paper for grade 6, free PRE-ALGEBRA WITH PIZZAZZ! answers to worksheets, Illinois basic skills testhelp, cost
accounting tutorials, mathmatics percentages.
Completing the square worksheet, easy algebra, MULTIPYING TWO SQUARE ROOTS WITH A VARIABLE, prentice hall mathematics pre algebra answers chapter 7, fun worksheet system of equations.
A graphical approach to compound interest project answers, learn algebra 1, clep college algebra test.
TI 84 plus quadratic, prentice hall algebra 1 book, Interactive KS3 collecting like terms, Calculate Linear Feet, dividing complex rational expressions, downloading free games for TI-84 graphing
Factorization worksheets brackets, multipying intergers worksheet, standard deviation function on TI 83 plus, algebra ratio worksheet.
Simplifying rational expressions using synthetic division, online surd calculator, radical equations add, least common multiple of 33, 28, and 23, Algebra graphing quiz worksheet, how to find
multiple fractions from least to greatest, arithmetic sequences powerpoint.
Least common denominator worksheets, ks3 substitution algebra resources, system of equations 3 variables and 3 equations on a ti 83 calculator, factor trinomials decomposition.
Gauss jordan elimination method"worksheet", add subtract multiply divide fractions and mixed numbers test, the history of exponents, mcdougal littell biology powerpoints, "california math workbook",
nyc advance algebra tutoring.
Free online aptitude question, least common multiple calculator, difference quotient for a quadratic equation, printable maths sheets australia, multiplying factors with two terms worksheets.
Evaluating variable expressions worksheet, rational expression solver, how to teach combinations and permutations in middle school, least common denomenator calculator.
What is the least common factor of 30 and 26, linear combination practice worksheet free, partial differential equation of non homogenous equation, how to solve algebra problems, how to do algerba.
Accounting book download, freeware enter algebra equation get a worked answer, free printable linear graphing worksheet, calculate simultaneous equations.
Addition and subtraction positive and negative fractions, what is the difference between pre-algebra and algebra 1, simplify algebraic fractions glencoe pre algebra, what happens if i have a radical
and the a division sign and a whole number.
Rules how to add,subtract,multiply and divide integers, free 9th grade algebra worksheets, ti 84 calculator Phoenix 4.0 cheat codes, LCM of algebraic expressions.
College algebra clep, graph linear equations worksheet, powerpoint presentation on polynomials based on 10th class syllabus, free algebraic expressions worksheet elementary, algebra of sums.
Q, answers key to McDougall Littell algebra 1 workbook, proportion worksheet.
Holt physics worksheet answers, dividing without calculator, scale factor free worksheet, use of trignometry in our daily life, free area worksheet grade 5, soliving decimal equations with integers
worksheets, solve fractions for y algebra equation.
Answers for chapter 3 in Holt algebra, simple algebra equations formula, fundamental math concepts used in evaluating an expression, "College Math teachers edition", free worksheets on prime
factorization, algebra balancing.
Fifth grade worksheet on factor trees, www.algebra with pizzazz.com, calculate even divisor.
Solving quadratic word problems by factoring, comparison between linear and quadratic equation, my java calculator prints out backwards, year 8 algebra test, third grade homework samples, how to do a
square root, simplifying complex numbers.
Solving for x using the laws of exponents calculater online, ti 83+ cube root, "online simultaneous equation solver", complex number examples polar equations, 5th grade algebra, divide polynomials
Quadratic factor calculator, rational number equations worksheets, adding and subtracting and multiplying and dividing integers quiz, square roots of imperfect squares, reverse FOIL when factoring.
Multistep equation free worksheets, 8th grade algebra chapter 5 review, 6th grade math workbook worksheets, trgonometry for dummies, Mixed numbers TI-83 Plus, 9th grade printable science.
Solutions to rudin chapter 3, algerbra for 6th graders, simplifying cubed roots, dividing polynomials tool.
TI 89 and first linear equation, very hard algebra problems\, adding and multiplying integers, sc pratice ged, How to calculate GCD.
Gnuplot linear regression, fourth grade logic problems worksheets, algebra 1 vs glencoe applications and concepts course 3.
Algebra 1 problems online, algebra II software, isolating x alegebra when x is on both sides of = sign.
Completing the square puzzle worksheet, Solve this equation (use factoring by grouping), Calculating Linear Foot.
Quadratic equation calculator complex solutions, free downloadable advanced scientific calculater, Intermediate Agebra made simple free lesson, polynomial standard form equation solving, how to
divide a fraction that cubed.
How to find root in graphic calculator, yr 8 free english test, square root property, applications of algebra used in everyday life.
Prentice hall mathematics pre algebra, subtracting integers, solve my algebra, middle school pre algebra PRINTABLE MATH worksheets.
What is the formula to find percents of numbers, great "intermediate algebra textbook", free online quadratic calculators.
Glencoe science 6th grade texas physical/chemical properties sheet, fraction formula, program solving simultaneous equation, linear algebra assignment solutions.
Caculator maths, rational funct graph, LCM solver.
Determining if is integer or decimal java, 7th grade mathmatics, general factoring-algebra 2 solutions.
SUARE ROOT, how do I enter rational exponents on TI-83 plus, online ti 84 plus, mathematical algebra software, basic cube root chart.
Rationalizing denominator worksheet, TI-84 plus online simulator, Math fractions print outs, greatest common factor worksheets.
GlencoeMcGraw-Hill Algebra 1 solutions, java ignore string punctuation, yr 9 math, worksheet compatible numbers fifth grade.
Coordinate plane powerpoints, subtraction of positive and negative fractions, square second order polynomial, highest common factors practice questions.
Algebra practice for third grade, quizzes in math 2 "algebra" in definition of arithmetic sequence, can anyone help me with my algebra homework?, divisibility worksheets.
Adding and subtracting integer worksheets, hard math worksheets, notes on elementry math, dividing radical expressions, combining like terms calculator, free pictograph tests for grade 3, maple
inequality nonlinear.
Solving math problems.com, free math answers for greatest common factors right away, worksheets for kids, "GED" "EXAM PDF", decimal square, advance math gauss visual command.
Solving a nonlinear function for x and y intercept, RATIONAL EXPRESSION CALCULATOR, fraction to a simplified ratio calculator, EXAMPLARS FOR 10TH GRADE MATHS 2007, advanced accounting ebook free
downloads, Simplify the following expression calcualtar.
Power of a fraction, ti-83 free, java find the sum of the first 100 integers, math fundamental skill workbooks 7th 8th grade, GCD calculation, worksheet over half-life algebra, learn basic allgebra
online free.
Algebra 2 free worksheets/ chapter 2, excel slope intercept, Advanced Algebra with the TI-89 Di Brendan Kelly ebook, number problems adding, subtracting, multiplying, dividing negative numbers, ti 84
plus online rechner, +Algebra Math Dictionary, what is the difference between domain and range in algebra.
Maths algebra fractions sheets, matric model papers of 9th free, printable math sheets, engineering mathmatic, radical expressions with variable calculator, multiplying polynomials worksheet
generator free.
Free online maths simplify, combining like terms ppt, free equations problem solving worksheets, 4th grade partial differences method, mcdougal littell biology handbook, example problems for scale
Balancing algebra equations, mixed number percentages to decimals calculator, how to solve fractions with radicals.
Online factoring equations, o'level add.maths key, Solving for a Specific Variable worksheets, fraction least to greatest.
Formulaes in probablity, nth power to the nth root, greatest common factors worksheet, SAMLE OF INVESTIGATORY PROJECT, Use of English Exams Papers - ebook.
Free online trigonometry "Homework Help"McDougal LITTELL, nature of solutions in algebra 2, free downloadable worksheets for maths inverse operations, free Solver for factoring, 8th grade algebra
test printouts, free downloads algebra worksheets.
Free 6 Grade Math Problems, dividing decimals worksheets grade 8, trig cheat sheet.
Basketball image on a ti-84, best algebra help, subtraction exam, free begginers algebra lessons, mcdougal littell online free reading.
Second order nonhomogeneous equations constant coefficients, free calculating simple interest worksheets, practise test paper on maths(quadratic sequence), download aptitude Question and answer,
cheats on writting decimal as fractions for 6 grade, radical equations and inequalities, algebra 2 finding vertex.
Algebra 2 help with solving ax2+bx+c=0 by factoring, glencoe algebra 2 answers, how to find the common denominator with variables, how do you change a mixed number to a decimal, math homework
answers, permutations worksheets.
Latest math trivia, Least Common Multiple Calculation, ti 89 heaviside, simplify square roots, adding fractions with integers.
Online fraction subtractor, using quadratic formula on ti-89, Solving Nonlinear SYSTEM OF Equations with MATLAB, mcdougall littell english california edition, algebra for kids.
Graphing parabolas work sheet, substitution calculator\, how do i solve algebraic equations?, Algebra 2 worksheets of translating and stretching graphs, algebra2 worksheets on equations and
inequalities using addition and subtraction.
Online calculator w/ logarithms, mcdougal scientific, graphing circle calculator online, Java program that determines if an integer is a prime, RATIONAL ALGEBRAIC EXPRESSION WITH DISCRIMINANT,
converting base 8 to base 16, quadratic formula from table.
Summation xy on ti-84 calculator, solving a second-order differential equation forced by dirac delta, 2-step equation worksheets, error 13 ti-86, WORKSHEET FIND THE LEAST COMMON FACTOR.
How to solve a quadratic equation with vertex, mcdougal littell math course 3 answers, yr 10 literature and numeracy test questions, algebra 1 CPM Teacher Manual.
Probability word problem solver, practise maths papers ks3 online, simplify expressions lisp.
Roots and radicals 2, subtraction learning center, books for dummies on +elipse software, how to solve for sum of squares errors ti 83.
Simplifying cube root to square root, prove that product of four consecutive non-zero numbers cannot be the square of an integer, solving third order polynomials roots, mark sheet free download
in-ms-excel, +Formula To Calculate A Discount elementary grade, free algebra calculator that shows work.
Grade 10 factoring polynomials practice test, free english test for yr 8, Free Calculus worksheets.
Multiplying fractions cheat, essentials to statistics formula sheet, two-step equations with fractions worksheet, teach yourself maths free.
First grade math sheet, Balancing Algebraic equations worksheets, pros and cons of solving graphs with substitutions elimination, free accounting books, graph of polynomial functionppt, prentice hall
mathmatics algebra 1 practice 4-3, free download the aptitude question.
Google users came to this page today by typing in these math terms :
│solving trinomials │TI-83 finding roots programs │quadratic equations with the calculator │enter homework problems }dividing radical expressions} │
│math test paper │free printable math worksheet solving │free online calculator to simplify │simplify quadratic equation calculator │
│ │inequalities 8th grade │rational expressions │ │
│matlab solve 4th order polynomial equation │algebrator controls │glenco algebra 2 skills practice │matric grade 10 past exams papers │
│do my algebra │explain hw to graph solutions on coordinate plane│how do you get rid of a radical │Quadratic Equations by factoring using the diamond method│
│How to do Algerbra │algebra 1 CPM │Math Practice, Lesson 4.3 6th │free download books in principle of accounting │
│McDougal Littell Biology Study Guide 08 Answers │TI-84 Calculator download │online equation calculator complex │online worksheets of cube of binomial │
│ │ │square │ │
│finding least common denominator worksheet │adding integers game │partial fraction decomposition ti 84 │finding the vertex worksheet │
│free answers to algebra show work │method of quadratic formula │who invented th slope intercept formula │ti 83 laplace │
│convert square root to decimal │free least common multiple tool for fractions │algebra 2 cpm book │calculater for finding slope │
│fractions greatest to least │eigenvalue program download for ti-84 │HOW TO DO ALGREBRA 2 STEP BY STEP │factoring numbers linux │
│answers for the book Algebra 1 for 9th grade │TI-84 ROM code generator │prentice hall conceptual physics │"cheat" "ti-84 plus" │
│ │ │solutions │ │
│using a calculator to factor trinomials using │6th grade line symmetry worksheet │gcf and difference between numbers │written answers basic college math 5th edition answers │
│FOIL │ │ │ │
│prentice hall college physics practice test │infinity limit calculator │mathmatical analysis pdf │cost accounting for dummies │
│free solutions to problems in my algebra book │parabolas for dummies │"trinomial practice problems" │glencoe algebra 1 answers practice workbook │
│How to add, subtract fractions for children │gcse logarithms rules without a calculator │SOLVING EQUATIONS WITH INTERGER NUMBERS │ti 89 how to solve multiple equations │
│ti-89 programs for "discrete math" │worksheets for divisibility │printable two step equations │Parabola Real Life Examples │
│3rd grade math permutations and combinations │sums of radicals help │one-step equations worksheets │easy ways to find out the square root │
│a calculator that will add and subtract radical │how to get equation using roots │ti 89 second order differential equation│write a mixed number as a decimal worksheet │
│expressions │ │at a point │ │
│inequality solver applet │graphing parabola │Algebra 2 (2007 Edition) homework help │triangle worksheets │
│help with 6th Grade Math, converting decimals │functions statistics and trigonometry chp 4 │How to solve for x on a graphing │pre algebra with pizzazz creative publications │
│and fractions │answers chp review answers │calculator │ │
│holt algebra online book │ks2 ks3 algebra │solution of nonhomogeneous partial │calculate common denominator │
│ │ │differential equation │ │
│using java solving ordinary derivatives │TI-83 symbols font │RADICAL PROPERTY LINEAR EQUATIONS │www.sciencetific method.com │
│ │ │"square root" │ │
│solving differential equations trial │simultaneous equation solver │how to pass college introduction to │complete the square algebra free │
│ │ │algebra │ │
│LCM program for ti 84 │adding decimal numbers ks2 worksheets │lcd calculator │algebra 1 helping workout problems │
│ucsmp "online textbooks" │fun solving "linear equations" │system of equation definition │holt 2006 physics book online │
│topic 7-b: Test of Genius answers │simplify by factoring │(prentice hall mathematics) answer key │equation for percentages │
│solving dividing rational expressions │math combinations solutions │ti 83 graphing systems linear equations │algebra division for 9thgrade │
│complex online calculator │free math answers │4th grade algebra division worksheets │year9 maths exponents │
│how to find a square root without a calculator │solve nonhomogeneous second order linear │how to write decimals as a fraction │saxson multiplication worksheets │
│elementary │differential equation │calculator │ │
│quadratic solver third order │free download book on cost accountancy │from decimal to mixed number │free McDougal Littell worksheets │
│graphing calculator online ellipse │6th grade math worksheets for intergers │algebra with pizzaz │10th grade math quizzes online │
│exam paper maths method unit 2 │origin of trigonometry chart? │ratio proportion worksheet + algebra │free Number Pattern solver │
│cubed root │intermediate algebraic calculator │second order differential equation │algebra* forumla │
│ │ │powerpoint │ │
│math 3 unknowns 2 knowns │Using Simulink to Solve Ordinary Differential │bc ged test cheat │math problems print hardest │
│ │Equations │ │ │
│Mathematical Intelligencer 1988 proof of │mixed number calculator │trig calculator download │algebra for college students.pdf │
│infinitude of prime numbers │ │ │ │
│java aptitude questions │Adding squared fractions │algebra 2 quiz answers │cost accounting tutorial │
│Iowa Algebra Aptitude Test Practice │trigonometry book print out │coupled mass-spring, tutorials │notes on how to find the roots of quadratic equations │
│free worksheets factoring polynomials with gcf │calculate GCD │polynomials two variables roots │limits solver online │
│triangle proportion worksheet │subtracting integers + worksheet │Pictures of Completing the square │MAC1147 calculator │
│hard math problems and answers │thrid grade math printable. │converting decimals to fractions; │free math problem solver │
│ │ │worksheets │ │
│multiplying adding subtracting and dividing │beginner basic algebra │Free Pre Algebra Problem Examples │11+ maths papers │
│fractions and decemals │ │ │ │
│Linear Programming+free lecture notes+pdf+ppt │My calculator does not have the SIN-1 function │what is the concept of algebra begin │square roots printable worksheets │
│accountant book download │online calculator to calculate the number of │solving 3 order polynom │algebra 2 radical problems │
│ │permutations │ │ │
│Ti-83 plus │Online Radical Expression Calculators │positive and negative numbers quiz │who invented geometry algebra │
│writing balanced equations worksheet │SOLVING FRACTIONAL INEQUALITIES WITH ABSOLUTE │cheat sheet for algebra final │Permutation solver │
│ │VALUE │ │ │
│download aptitude papers │synthetic division online calculator │Fluid mechanics revision notes │"introductory math" │
│download log log sheets for maths │christmas math games- printout worksheets │pre-algebra worksheet │time table flash cards print grade3 │
│aptitude papers with solutions │function and domain absolute values square root │solving linear equation worksheet │math anwsers │
│hardest algebra mathematical formula │how to do algebra │simplifying algebra in matlab │free algebra calculator │
│graphing calculator free download │modern algebra exam solutions │algèbre equation du second degré │square root calculators │
│ │ │factoriel │ │
│formula of vertex of absolute value equations │quadratic story problems │venn diagram solver │Complex Rational Expressions │
│handle decimal calculations in java │SAS free links for beginners │algerbra │solving aptitude papers │
│"everyday math" "grade five" chapter 6 │lesson plans on exponents │finging percents │CLEP cheat │
│algebra ks2 │how to do combination on TI 83 │Rational Expression Calculator │learn online/6th grade for free │
│free math worksheet ration and proportion │kumon answers │trig answers │t1-89 calculator │
│maths finding │probability statistic ebooks download free │cylinder fit matlab │adding negative and positive number worksheets │
│solving binomial expansions │to the power of a fraction │quadratic formula program with fractions│6 GRADE MATH DIVIDING DECIMALS │
│trig problem solving │1st steps of 9th grade algebraic problems for │"PRE ASSESSMENT">FREE>ALGEBRA>HIGHSCHOOL│solving nonlinear differential equations in matlab │
│ │studesnts to work out │ │ │
│free mathmatics tricks │quadratic formula program ti84 │free download 10th class quickbasic │factoring a cubic function by grouping │
│ │ │software │ │
│SAMPLE PAPERS OF CLASS 8 │saxon printable paper │free IQ papers & answers sheets │nonlinear regression using matlab │
│subtracting adding multiplying and dividing │algebraic quadric surface system of odes │www.4th grade math sheets.com │suare root of 4 │
│Multidimensional Newton's Method matlab │TI-84 print download │india billset previous question papers │"combining like terms" puzzle │
│ │ │free download │ │
│aptitude question papers │practice algebra software │online 10th grade mathematics practice │mixed percentages to fraction tutorial │
│basic algebra math trivia │Free eBook: Algebra 2 instruction guide │denominator algebra │prentice hall math algebra 1 │
│gelosia multiplication │greatest common factor finder │simultaneous equations solver excel │aptitude question solution │
│examples of math trivia with answers │modulo math solver │indianapolis tutor high school │substitution calculator │
│factoring square root of polinomials │easy algebra software │Elementary Algebra Radical Expression │balancing chemical equations using calculator │
│applications of conics in baseball │decimals worksheets │solve square cube │algebra dilation homework help │
│math term poetry │exponent worksheets │"algebra puzzles" investigations │download of ias previous year question papers │
│solve equasion │factoring practice worksheets difference of │10th trigonometry │ordinary third-order differential equation+MATLAB │
│ │squares │ │ │
│Formula to Find Square Root of a Number? │simplifying division negative exponent │mathematical fration │ti 89 binary conversions │
│simplify boolean online │TI-89 inverse log │adding integers+worksheets │"free 8th grade worksheets" │
│free printable simple geometric formulas │junior high school algebra problems;pdf │studying for intermediate algebra │free Algebraic Fractions cheat site │
│how to explain solving equations with integers │math quiz teaching online │lattice worksheets │linear algebra done right solution manual │
│Conceptual Physics workbook answers │aptitude questions and its solution │grad nine maths paper │online algebra answers │
│free Managerial Accounting Eighth Edition.pdf │common multiples chart free │decimals to fractions worksheets │algebraic equations in 3rd grade │
│simplifying a expression of a rectangle │simple sqare route maths explained │class viii question papers │converting functions to slope intercept │
│solve college algebra problems │division calculater │prentice hall mathematics answers │online graphing calculater │
│ │ │algebra 1 │ │
│free online equations to solve or work │free aptitude test download │algebra real life students │How to graph inequalities in Excel │
│math problems.com │online rational function graphing calculator │word for algebra rearrange formulas from│solve simultaneous equations calculator │
│ │ │words │ │
│Free 6th Grade Science Worksheets │free math worksheets on adding and subtracting │calculate cube root java │Free online math games for grade nines │
│ │negative numbers │ │ │
│Saxon Algebra I help │algebra word search worksheet │online square root calculator. │5th grade algebra exercise │
│highest common factor worksheets │for absolute value equations "free calculator" │pre algebra 9th grade │college algebra math poems │
│learn algebra easy │sample aptitude test paper │SIMULtaneous equation solver 4 │algebra reference free sites │
│printable algebra 1 9th grade worksheets │SOLVING SIMULTANEOUS EQUATIONS IN MATLAB │heath algebra ebook │java program + to find LCM │
│permutations and combinations project algebra 2 │Free Printable Math Problems │adding like terms for kids │free online algebra 1b tests (COM) │
│pre-algebra information cheat sheets │who discovered foil math │Math work sheet translating words to │subtracting integer worksheets │
│ │ │math symbols │ │
│9th grade geometry test online taks │how to square radicals │College Algebra CLEP │alberta grade 4 math review sheets │
│6th grade Probability Powerpoint │math-substitution & evaluation of algebraic │maths papers online KS2 │code for converting exponentials into numbers in excel │
│ │expressions │ │ │
│Multiplying rational expressions calculator │simplifying radical expressions with fractions │"download gmat test papers" │yr 10 eng past papers │
│pre-algebra with pizzazz │TI-83 interpolation calculator │decimal calculator into expressions │pre algebra sample video for dummies │
│plus and minus mixed facts free worksheet │8th grade algebra midterm │how to solve of a 4th order algebraic │algebra calculator free │
│ │ │equation easily │ │
│free tutorial question paper for both MAT & SAT │free download for aptitude │rational exponent division tutorial │complicated algebra practice │
│8th grademathquizes for students │hard algebraic equations │self teach yourself algebra │pdf+math aptitude question │
│mathbook florida │convert numbers to fractions │free kids trivia questions printable │larger root in algebra │
│school learning adding and subtracting │"combination solver" │Algebraic change to Difference Quotient │how to change a product of a number into a trinomial │
│worksheets │ │ │ │
│Algebra Lesson Interactive CD │algebra equations in excel │math+equation problems+sixth grade │Solving systems by linear combination │
│line │answer keys for algebra 2 glencoe books │grade │calculator algebraically solving system of equation │
│Contemporary Abstract Algebra 6th edition │free worksheets for teaching area to second grade│order fractions from least to greatest │free dividing exponent solver │
│solution │ │ │ │
│finding multiple roots in C++ │trigonometric addition │online antiderivative calculator │hardest 6 grade integer problem │
│free pythagorean theorem worksheets │finding cost equation │hardest math problem │Advanced Algebra Worksheets │
│teaching kids algebra │learn college algebra on cd │solved exercices of tutorial physics │printable worksheets + square roots │
│cost accounting books free │apply algebra in real life situations │free printable math for 6th graders │some sample question for phase II Real Estate Exam in │
│ │ │ │Ontario │
│steps in multiplying radicals │free download math 8 class book │common denominator calculator │algebra 1 fast │
│solving one step inequalities worksheets │7 grade TAKS math practice │factoring test GCF worksheet │graph to solve circle system │
│trigonomic │solving nonlinear simultaneous equations ellipse │primary school past exam papers │saxon online chemistry │
│"integers games" │download+aptitude math question sample │free math lesson in indianapolis │NC/algebra 1/ 4-3 skills practice relations │
│KUMON ANSWERS │absolute value application │Changing Difference formula │algebra worksheets- relations │
│test algebrator free │circle equation solver │how to do a summation on a TI-83 │What are some examples from real life in which you might │
│ │ │calculator │use polynomial division? │
│FREE ONLINE TESTS OF PHYSICS FOR 9 STANDARD │online calculaters │online usable graph │"slope intercept formula" │
│solving systems of 3 equations using a graphing │Christmas math code worksheet │algebra readiness worksheets 7th grade │permutations + grade 4 │
│calculator │ │ │ │
│free integers homework sheets and answers for │quadratic equation calculator apps │McDougal Littel World History Workbook │sixth grade math worksheets │
│printing │ │Teachers Addition │ │
│ti-84 rom emulator │Factorizing on a ti83 button │algebra formula cheat sheet │Learn Pre- +Algerbra and free online │ | {"url":"https://softmath.com/math-com-calculator/distance-of-points/chapter-5-in-math-glencoe-book.html","timestamp":"2024-11-12T23:01:46Z","content_type":"text/html","content_length":"190721","record_id":"<urn:uuid:c9a6210e-5e38-4884-8071-083c68f25341>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00249.warc.gz"} |
Category: { Statistics }
Category: { Math }
Summary: Bayes’ Theorem is stated as $$ P(A\mid B) = \frac{P(B \mid A) P(A)}{P(B)} $$ $P(A\mid B)$: likelihood of A given B $P(A)$: marginal probability of A There is a nice tree diagram for the
Bayes’ theorem on Wikipedia. Tree diagram of Bayes’ theorem
Category: { Statistics }
Summary: Definition two series of data: $X$ and $Y$ cooccurance of them: $(x_i, x_j)$, and we assume that $i<j$ concordant: $x_i < x_j$ and $y_i < y_j$; $x_i > x_j$ and $y_i > y_j$; denoted as $C$
discordant: $x_i < x_j$ and $y_i > y_j$; $x_i > x_j$ and $y_i < y_j$; denoted as $D$ neither concordant nor discordant: whenever equal sign happens Kendall’s tau is defined as $$ $$\tau = \frac{C- D}
{\text{all possible pairs of comparison}} = \frac{C- D}{n^2/2 - n/2}$$ $$
Category: { Statistics }
Category: { Statistics }
Summary: Jackknife resampling method
Category: { Math }
Summary: Also known as the second central moment is a measurement of the spread.
Category: { Statistics }
Summary: Gamma Distribution PDF: $$ \frac{\beta^\alpha x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)} $$ Visualize
Category: { Statistics }
Summary: Cauchy-Lorentz Distribution .. ratio of two independent normally distributed random variables with mean zero. Source: https://en.wikipedia.org/wiki/Cauchy_distribution Lorentz distribution
is frequently used in physics. PDF: $$ \frac{1}{\pi\gamma} \left( \frac{\gamma^2}{ (x-x_0)^2 + \gamma^2} \right) $$ The median and mode of the Cauchy-Lorentz distribution is always $x_0$. $\gamma$ is
the FWHM. Visualize
Category: { Statistics }
Summary: By generalizing the Bernoulli distribution to $k$ states, we get a categorical distribution. The sample space is $\{s_1, s_2, \cdots, s_k\}$. The corresponding probabilities for each state
are $\{p_1, p_2, \cdots, p_k\}$ with the constraint $\sum_{i=1}^k p_i = 1$.
Category: { Statistics }
Summary: The number of successes in $n$ independent events where each trial has a success rate of $p$. PMF: $$ C_n^k p^k (1-p)^{n-k} $$
Category: { Statistics }
Summary: Beta Distribution Interact Alpha Beta mode ((beta_mode)) median ((beta_median)) mean ((beta_mean)) ((makeGraph))
Category: { Statistics }
Summary: Two categories with probability $p$ and $1-p$ respectively. For each experiment, the sample space is $\{A, B\}$. The probability for state $A$ is given by $p$ and the probability for state
$B$ is given by $1-p$. The Bernoulli distribution describes the probability of $K$ results with state $s$ being $s=A$ and $N-K$ results with state $s$ being $B$ after $N$ experiments, $$ P\left(\
sum_i^N s_i = K \right) = C _ N^K p^K (1 - p)^{N-K}. $$
Category: { Statistics }
Summary: Arcsine Distribution The PDF is $$ \frac{1}{\pi\sqrt{x(1-x)}} $$ for $x\in [0,1]$. It can also be generalized to $$ \frac{1}{\pi\sqrt{(x-1)(b-x)}} $$ for $x\in [a,b]$. Visualize
Category: { Statistics }
Summary: In a multiple comparisons problem, we deal with multiple statistical tests simultaneously. Examples We see such problems a lot in IT companies. Suppose we have a website and would like to
test if a new design of a button can lead to some changes in five different KPIs (e.g., view-to-click rate, click-to-book rate, …). In multi-horizon time series forecasting, we sometimes choose
to forecast multiple future data points in one shot. To properly find the confidence intervals of our predictions, one approach is the so called conformal prediction method. This becomes a multiple
comparisons problem because we have to tell if we can reject at least one true null hypothesis.
Category: { Statistics }
Summary: Bonferroni correction is very useful in a multiple comparison problem
Category: { Math }
Summary: The conditional probability table is also called CPT
Summary: $$ \mathrm{NML} = \frac{ p(y| \hat \theta(y)) }{ \int_X p( x| \hat \theta (x) ) dx } $$
Category: { Statistics }
Summary: MDL is a measure of how well a model compresses data by minimizing the combined cost of the description of the model and the misfit.
Summary: Description of Data The measurement of complexity is based on the observation that the compressibility of data doesn’t depend on the “language” used to describe the compression process that
much. This makes it possible for us to find a universal language, such as a universal computer language, to quantify the compressibility of the data. One intuitive idea is to use a programming
language to describe the data. If we have a sequence of data, 0,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,…,9999 It takes a lot of space if we show the complete sequence. However, our
math intuition tells us that this is nothing but a list of consecutive numbers from 0 to 9999.
Summary: FIA is a method to describe the minimum description length ( [[MDL]] Minimum Description Length MDL is a measure of how well a model compresses data by minimizing the combined cost of the
description of the model and the misfit. ) of models, $$ \mathrm{FIA} = -\ln p(y | \hat\theta) + \frac{k}{2} \ln \frac{n}{2\pi} + \ln \int_\Theta \sqrt{ \operatorname{det}[I(\theta)] d\theta } $$ $I
(\theta)$: Fisher information matrix of sample size 1. $$I_{i,j}(\theta) = E\left( \frac{\partial \ln p(y| \theta)}{\partial \theta_i}\frac{ \partial \ln p (y | \theta) }{ \partial \theta_j } \right) | {"url":"https://datumorphism.leima.is/cards/statistics/","timestamp":"2024-11-14T08:17:27Z","content_type":"text/html","content_length":"141733","record_id":"<urn:uuid:563f1d71-eabc-4541-9ac4-e86c304d6261>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00894.warc.gz"} |
How do I find a natural log without a calculator? | Socratic
How do I find a natural log without a calculator?
1 Answer
You can approximate $\ln x$ by approximating ${\int}_{1}^{x} \frac{1}{t} \mathrm{dt}$ using Riemann sums with the trapezoidal rule or better with Simpson's rule.
For example, to approximate $\ln \left(7\right)$, split the interval $\left[1 , 7\right]$ into a number of strips of equal width, and sum the areas of the trapezoids with vertices:
$\left({x}_{n} , 0\right)$, $\left({x}_{n} , \frac{1}{x} _ n\right)$, $\left({x}_{n + 1} , 0\right)$, $\left({x}_{n + 1} , \frac{1}{{x}_{n + 1}}\right)$
If we use strips of width $1$, then we get six trapezoids with average heights: $\frac{3}{4}$, $\frac{5}{12}$, $\frac{7}{24}$, $\frac{9}{40}$, $\frac{11}{60}$, $\frac{13}{84}$.
If you add these, you find:
$\frac{3}{4} + \frac{5}{12} + \frac{7}{24} + \frac{9}{40} + \frac{11}{60} + \frac{13}{84} \approx 2.02$
If we use strips of width $\frac{1}{2}$, then we get twelve trapezoids with average heights: $\frac{5}{6}$, $\frac{7}{12}$, $\frac{9}{20}$,...,$\frac{25}{156}$,$\frac{27}{182}$
Then the total area is:
$\frac{1}{2} \times \left(\frac{5}{6} + \frac{7}{12} + \frac{9}{20} + \ldots + \frac{25}{156} + \frac{27}{182}\right) \approx 1.97$
Actually $\ln \left(7\right) \approx 1.94591$, so these are not particularly accurate approximations.
Simpson's rule approximates the area under a curve using a quadratic approximation. For a given $h > 0$, the area under the curve of $f \left(x\right)$ between ${x}_{0}$ and ${x}_{0} + 2 h$ is given
$\frac{h}{3} \left(f \left({x}_{0}\right) + 4 f \left({x}_{0} + h\right) + f \left({x}_{0} + 2 h\right)\right)$
Try improving the approximation using Simpson's rule, using $h = \frac{1}{2}$ then we get six approximate areas to sum:
$\frac{h}{3} \left(\frac{25}{6} + \frac{73}{30} + \frac{145}{84} + \frac{241}{180} + \frac{361}{330} + \frac{505}{546}\right) \approx 1.947$
That's somewhat better.
Impact of this question
19912 views around the world | {"url":"https://socratic.org/questions/how-do-i-find-a-natural-log-without-a-calculator","timestamp":"2024-11-12T00:21:51Z","content_type":"text/html","content_length":"36049","record_id":"<urn:uuid:f3d2658f-d599-401c-81d9-fd4ed50f3ddc>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00122.warc.gz"} |
Ascomycetes members are commonly calleda. Fission fungi
b. Clu... | Filo
Question asked by Filo student
Ascomycetes members are commonly calleda. Fission fungi b. Club-fungi c. Sac fungi d. Bread mould
Not the question you're searching for?
+ Ask your question
Sure, I can provide step by step solutions using LaTeX. Here's an example: Problem: Find the value of x in the equation 3x + 5 = 14. Solution: Step 1: Subtract 5 from both sides of the equation. Step
2: Divide both sides of the equation by 3. Therefore, the value of x in the equation 3x + 5 = 14 is 3.
Found 4 tutors discussing this question
Discuss this question LIVE for FREE
6 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on Classification, biodiversity, and conservation
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Biology tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text Ascomycetes members are commonly calleda. Fission fungi b. Club-fungi c. Sac fungi d. Bread mould
Updated On Feb 13, 2024
Topic Classification, biodiversity, and conservation
Subject Biology
Class Class 12
Answer Type Text solution:1 | {"url":"https://askfilo.com/user-question-answers-biology/ascomycetes-members-are-commonly-calleda-fission-fungi-b-36393039333535","timestamp":"2024-11-06T18:44:59Z","content_type":"text/html","content_length":"95570","record_id":"<urn:uuid:f00d3306-b17d-4b15-8855-a993c43b33cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00581.warc.gz"} |
Clarifications for US 26623
US 26623 version 5: Use number to solve problems
Updated October 2024 for version 5.
Appropriate technology (Guidance Information 5)
Any technology used must allow the learner to demonstrate the skills and understanding required by the standard. For example, if the learner has used a calculator to solve a multiplication problem,
the learner may demonstrate the multiplication skills required through judging the reasonableness of the solution.
Naturally occurring evidence (Guidance Information 2)
Evidence for this standard can come from formative and summative assessments, so long as the main purpose of those assessments is not the one-off assessment of standard 26623.
Reasonable solutions resulting from effective methods (Guidance Information 2 and 5, Performance criterion 1.1 and 1.2)
Evidence is needed that the learner has determined that the solution to the problems they have solved is appropriate. This applies regardless of whether or not the learner used a calculator (or other
technology), traditional algorithms, or mental strategies to complete the calculations and solve the problems.
How competence is demonstrated (Guidance Information 3)
Assessors are reminded that where competence has been demonstrated orally or visually, sufficient evidence of this competence must be captured and presented in submissions for national external
moderation to allow moderation to occur.
Evidence from activities that require higher levels of mathematics (Outcome 1)
Evidence occurring naturally may reflect performance at a higher level than is required for standard 26623. In such cases the learner’s problem solving and calculations – that are at the required
level for standard 26623 – may not be obvious in the learner work. Adequate documentation of these lower level calculations (e.g. annotations giving examples of workings) would be required for
national external moderation.
Effective strategies used to solve problems (Performance criteria 1.1)
Learners must select the strategy to use to solve the problem (as opposed to being guided to the strategy). This may rule out some sources of evidence (e.g. if directions as to what operations to
use, or step-by-step instructions as to how to solve the problem, are provided by task information).
Required range items (Performance criteria 1.1)
Across all of the activities, three instances of addition, three instances of subtraction, three instances of multiplication and three instances of division need to be demonstrated in total.
The three instances of each operation must be in activities which are different enough to show transfer of skills.
Calculations reflecting the required level of demand (Performance criteria 1.1 range and Guidance Information 4)
The problems solved must involve calculations that reflect the level of demand described by the additive and multiplicative strands of step 5 of the Learning Progressions for Adult Numeracy. At this
level, addition and subtraction calculations involve multi-digit numbers and/or decimals, and multiplication and division calculations involve multi-digit or decimal multipliers and divisors.
Multiplication and division by 10,100 and 1000 do not reflect step 5.
Learning Progressions for Adult Numeracy (external link)
Positive and/or negative integers (Performance criteria 1.1 range item)
Positive integers, negative integers, or both can be included in a learner’s portfolio of evidence. It is not necessary to include negative integers unless they occur naturally in the problem the
learner is attempting to solve.
Percentages and fractions (Performance criteria 1.1 range items)
Learners must demonstrate they can deal with fractions and percentages within the context of solving a problem. Unless it is required for the problem, the standard does not require learners to
express things as a percentage or in fractional form. Conversions between fractions, decimals and percentages do not provide appropriate evidence for the standard unless they are required by the
problem being solved.
Fina clarifications for standards 26622 and 26624-26627 | {"url":"https://www2.nzqa.govt.nz/ncea/subjects/litnum/26622-26627/clarifications/us-26623/","timestamp":"2024-11-13T04:55:53Z","content_type":"text/html","content_length":"56689","record_id":"<urn:uuid:8d33c281-bff1-4b9b-92d8-c6e8e059e56c>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00062.warc.gz"} |
Importance Sampling
Numerical integration is a central component of a complex radiometric simulation tool. In order to accurately perform integrations of discrete functions in a reasonable amount of time, the function
is usually sampled. Regular or uniform sampling of some discrete functions can give rise to artifacts including aliasing. In addition, uniform sampling can be inefficient because computing the
contribution for each sample of the function can be costly and uniform sample doesn’t prioritize getting the contributions for the "most important" regions of the function.
In order to illustrate the process for importance sampling a function, the following example function will be used:
\begin{equation*} f( \theta ) = \cos^5 \left ( \frac{|180 - \theta|}{2} \right ) \end{equation*}
This function approximates a "scattering phase function", which is a function that describes the probability of an incident photon interacting with a particle (or molecule) and scattering in a new
direction. The magnitude of the function as a function of angle describes the directional probability. The plot below shows the example phase function plotted in polar coordinates. The magnitude of
the function is significantly higher in the forward scattering direction (angles near 180 degrees).
Figure 1. Example Phase Function in Polar Coordinates
In the context of a ray tracer, an efficient way to compute the new scattering direction for a photon is sought. As the function illustrates, a photon should be scattered in the forward directions
with a significantly higher probability. Therefore, we want to method to use this function to drive a semi-random mechanism that will compute new photon directions (angles) such that the ensemble
direction statistics are correct. If a large number of new directions are pulled from this mechanism, the normalized histogram of the directions should reproduce the input function.
Probability Functions
The plot below illustrates the same phase function as a 1D Cartesian function. The function has been area normalized to produce a probability distribution function (PDF).
Figure 2. Probability Distribution Function (PDF)
The cumulative distribution function (CDF) is created by accumulating the PDF function. If the PDF was correctly area normalized, the CDF will eventually reach a peak value of 1.
Figure 3. Cumulative Probability Distribution Function (CDF)
Look-up Table
The key component in an importance sampling scheme is the function that will reproject an otherwise uniform distribution of random samples. This function is usually implemented as either an
analytical function (when one can be derived) or as a look-up table.
For this example, a look-up table (LUT) approach is used where a uniformly disributed random input values between 0 → 1 can be directly indexed into a finite array of output values. To create the LUT
to map input to output values, the input angles for the CDF were range normalized and then the CDF was transposed. The plot below shows the look-up table (LUT) created from the CDF.
Figure 4. Reprojection Look-up Table (LUT)
A diagonal line (slope = 1) would produce a null projection. In this example, it can be seen that an input value of 0.2 results in an output value of nearly 0.4. The shape of this LUT projects the
smaller (nearer 0) and larger (nearer 1) values closer to 0.5. This projection will end up remapping uniformly distributed points closer to the central value of 0.5, which is what is desired to get
more samples closer to the center lobe of the input PDF.
Uniform vs. Importance Samples
The plot below shows how the PDF would be sampled with 20 uniformly distributed samples. The green points on the curve show the locations of the 20 samples, which uniformly sample the regions of the
function with low magnitude as well as those regions where the magnitude is much higher.
Figure 5. Uniform Sampling of the PDF
The plot below shows how the same PDF would be sampled if the same 20 uniformly distributed samples are reprojected by the look-up table (LUT). Unlike the uniform sampling scheme, these samples are
clustered near the highest magnitude regions of the PDF.
Figure 6. Importance Sampling of the PDF using the LUT
Although the concept of importance sampling was illustrated here for a 1D function, these same techniques can be applied to multi-dimensional functions. The challange in developing importance
sampling schemes is creating an efficient redistribution mechanism.
The DIRSIG model using importance sampling in the following areas:
• Sampling (1D) scattering phase functions
• Sampling (2D) bi-directional reflectance distribution functions (BRDFs). | {"url":"http://dirsig.cis.rit.edu/docs/new/importance_sampling.html","timestamp":"2024-11-09T11:15:36Z","content_type":"text/html","content_length":"41260","record_id":"<urn:uuid:fae5a4ee-3d94-499e-ac5d-e185e3f85821>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00330.warc.gz"} |
What is the difference between a quantitative and a qualitative measurement? | Socratic
What is the difference between a quantitative and a qualitative measurement?
1 Answer
Quantitative data is numerical. (Quantities) This type of data results from measurements. Some examples of quantitative data would be:
the mass of a cylinder of aluminum was 14.23g
the length of the pencil was 9.18cm
Qualitative data is non-numerical. (Qualities) Some examples of qualitative data would be:
-the color of the sulfur sample was yellow
-the reaction produced a white solid
-the object felt soft
-the reaction produced a strong ammonia odor
Very good examples and nice pictures can be found here: http://regentsprep.org/regents/math/algebra/ad1/qualquant.htm
Impact of this question
44014 views around the world | {"url":"https://api-project-1022638073839.appspot.com/questions/what-is-the-difference-between-a-quantitative-and-a-qualitative-measurement","timestamp":"2024-11-03T22:56:55Z","content_type":"text/html","content_length":"34264","record_id":"<urn:uuid:f5384a7b-cdae-4d82-8b11-fcaf278e0767>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00155.warc.gz"} |
Agisoft PhotoScan 1.2.0 pre-release
Hello mwillis,
The procedure can be briefly described as follows:
- after building the orthomosaics open it in the Ortho tab by double-clicking on the corresponding line in the Workspace pane,
- choose Draw Polygon instrument on the toolbar,
- draw a polygon,
- select polygon with the left-click then right-click on it and choose Assign Images in the context menu,
- specify the image (or images) to be used for the current fragment,
- repeat previous steps any times you wish on different areas,
- when all the changes have been made press Update Orthomosaic button on the toolbar to apply the changes (it will take some time).
Hello Alexey,
I tried this workflow hoping that roofs will be computed with higher accuracy.
So I marked a roof with the polygon and assigned an image to it where the whole roof is visible.
But the jagged roof edge is still there.
I have attached a screenshot.
For clean roof lines, I first filter the building out of the dense point cloud before meshing and building orthomosaic. This method works well for small houses,sheds, and cars to keep their edges | {"url":"https://www.agisoft.com/forum/index.php?topic=4149.msg21749","timestamp":"2024-11-10T01:45:57Z","content_type":"application/xhtml+xml","content_length":"58234","record_id":"<urn:uuid:b12d217a-2808-4b80-a5ad-6159f90994ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00871.warc.gz"} |
Yards to Tons Calculator - Online Calculators
To convert cubic yards to tons, multiply the cubic yards by the material’s density. This calculation provides the weight in tons for a specified volume of material.
The Yards to Tons Calculator is a helpful tool for converting the volume of various materials. Those materials include gravel, sand, and concrete, from cubic yards to tons. This conversion is crucial
for construction, landscaping, and project planning, where it’s essential to know the weight of materials being transported or used.
Different materials have unique densities, affecting the conversion. By capitalizing on this calculator, project planners and contractors can ensure they’re ordering the correct quantity of material,
saving time and costs on any job site.
$T = CY \times D$
Variable Description
$T$ Total weight in tons
$CY$ Volume in cubic yards
$D$ Density of the material (tons per cubic yard)
Solved Calculations:
Example 1:
Convert 10 cubic yards of gravel to tons, with a density of 1.4 tons per cubic yard.
Step Calculation
1. $T = 10 \times 1.4$
2. $T = 14$
Answer: 14 tons
Example 2:
Convert 15 cubic yards of sand to tons, with a density of 1.2 tons per cubic yard.
Step Calculation
1. $T = 15 \times 1.2$
2. $T = 18$
Answer: 18 tons
What is a Yards To Tons Calculator?
The Yards to Tons Calculator is a convenient tool for converting cubic yards of materials, like gravel, sand, or asphalt, into tons. This conversion is majorly useful in construction, landscaping,
and roadwork, where precise material quantities are needed for projects.
For example, the weight of a cubic yard of gravel or sand can vary, so this calculator simplifies the process, ensuring that you get accurate results every time.
Moreover, to use this calculator, you simply enter the material type and volume in yards. It will then provide the equivalent weight in tons, making it easy to estimate costs and transportation
needs. Learning how to convert yards to tons helps ensure that you order the correct amount of materials, reducing waste and saving time.
Final Words:
Shortly, the Yards to Tons Calculator is essential for planning and managing material requirements efficiently, supporting smoother operations across various projects. | {"url":"https://areacalculators.com/yards-to-tons-calculator/","timestamp":"2024-11-04T01:08:48Z","content_type":"text/html","content_length":"105843","record_id":"<urn:uuid:1d4c48c6-12bd-4e17-a784-a130b3396406>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00742.warc.gz"} |
Confidence Interval for a Population mean, with an Unknown Population Variance - Finance Train
Confidence Interval for a Population mean, with an Unknown Population Variance
If the population variance is not known, then we do the following change to the above confidence interval formula:
1. Substitute the population variance (s) with the sample variance (s)
2. Us t-distribution instead of normal distribution (explained in the following pages)
We use t-distribution because the use of sample variance introduces extra uncertainty as s varies from sample to sample.
We take a sample of 16 stocks from a large population with a mean return of 5.2% and a standard deviation of 1.2%. The population standard deviation is not known.
Calculate the 95% confidence interval for the population mean.
The confidence interval will be:
The value of t is observed from the t-table.
Data Science in Finance: 9-Book Bundle
Master R and Python for financial data science with our comprehensive bundle of 9 ebooks.
What's Included:
• Getting Started with R
• R Programming for Data Science
• Data Visualization with R
• Financial Time Series Analysis with R
• Quantitative Trading Strategies with R
• Derivatives with R
• Credit Risk Modelling With R
• Python for Data Science
• Machine Learning in Finance using Python
Each book includes PDFs, explanations, instructions, data files, and R code for all examples.
Get the Bundle for $39 (Regular $57)
JOIN 30,000 DATA PROFESSIONALS
Free Guides - Getting Started with R and Python
Enter your name and email address below and we will email you the guides for R programming and Python. | {"url":"https://financetrain.com/confidence-interval-population-mean-unknown-population-variance","timestamp":"2024-11-04T05:40:35Z","content_type":"text/html","content_length":"99161","record_id":"<urn:uuid:d79704b4-de45-472b-baf3-e22dbd7d1e3f>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00419.warc.gz"} |
How to calculate natural and base 10 logarithm of a number through a Java program ?.
Program to demonstrate how to calculate natural and base 10 logarithm of a number in Java.
package com.hubberspot.code;
import java.util.Scanner;
public class LogarithmCalculation {
public static void main(String[] args) {
// Create a Scanner object which will read
// values from the console which user enters
Scanner scanner = new Scanner(System.in);
// Getting input from user from the console
System.out.println("Enter value of number ");
// Calling nextDouble method of scanner for
// taking a double value from user and storing
// it in number variable
double number = scanner.nextDouble();
System.out.println("Calculating base 10 logarithm of a number ... ");
// In order to calculate base 10 logarithm of a number
// we use Math class log10() static method which takes in a
// number and returns back the base 10 logarithm of a number
double result = Math.log10(number);
// printing the result on the console
System.out.println("Base 10 Logarithm of " + number + " is : " + result);
System.out.println("Calculating Natural logarithm of a number ... ");
// In order to calculate natural logarithm of a number
// we use Math class log() static method which takes in a
// number and returns back the natural logarithm of a number
result = Math.log(number);
// printing the result on the console
System.out.println("Natural Logarithm of " + number + " is : " + result);
Output of the program : | {"url":"https://www.hubberspot.com/2012/10/how-to-calculate-natural-and-base-10.html","timestamp":"2024-11-13T23:10:54Z","content_type":"application/xhtml+xml","content_length":"99774","record_id":"<urn:uuid:d0fff90c-e2bf-4190-bb2a-d02782584dda>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00631.warc.gz"} |
p-Value for a Chi-Square Test Related Calculators
cited in more than 3,000 scientific papers!
Related Calculators: p-Value for a Chi-Square Test
p-Value for a Chi-Square Test Related Calculators
Below you will find descriptions and links to 26 different statistics calculators that are related to the free p-value calculator for a chi-square test. The related calculators have been organized
into categories in order to make your life a bit easier. | {"url":"https://www.danielsoper.com/statcalc/related.aspx?id=11","timestamp":"2024-11-04T23:17:51Z","content_type":"text/html","content_length":"37530","record_id":"<urn:uuid:5c59944a-e20c-4d8e-8a69-2ba31f5de9e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00845.warc.gz"} |
Lecture 15: 11.02.05 Phase Changes and Phase Diagrams of Single- Component Materials
polymers Article Phase Diagrams of Ternary π-Conjugated Polymer Solutions for Organic Photovoltaics Jung Yong Kim School of Chemical Engineering and Materials Science and Engineering, Jimma Institute
of Technology, Jimma University, Post Office Box 378 Jimma, Ethiopia;
[email protected]
Abstract: Phase diagrams of ternary conjugated polymer solutions were constructed based on Flory-Huggins lattice theory with a constant interaction parameter. For this purpose, the poly(3-
hexylthiophene-2,5-diyl) (P3HT) solution as a model system was investigated as a function of temperature, molecular weight (or chain length), solvent species, processing additives, and electron-
accepting small molecules. Then, other high-performance conjugated polymers such as PTB7 and PffBT4T-2OD were also studied in the same vein of demixing processes. Herein, the liquid-liquid phase
transition is processed through the nucleation and growth of the metastable phase or the spontaneous spinodal decomposition of the unstable phase. Resultantly, the versatile binodal, spinodal, tie
line, and critical point were calculated depending on the Flory-Huggins interaction parameter as well as the relative molar volume of each component. These findings may pave the way to rationally
understand the phase behavior of solvent-polymer-fullerene (or nonfullerene) systems at the interface of organic photovoltaics and molecular thermodynamics. Keywords: conjugated polymer; phase
diagram; ternary; polymer solutions; polymer blends; Flory- Huggins theory; polymer solar cells; organic photovoltaics; organic electronics Citation: Kim, J.Y. Phase Diagrams of Ternary π-Conjugated
Polymer 1. Introduction Solutions for Organic Photovoltaics. Polymers 2021, 13, 983. https:// Since Flory-Huggins lattice theory was conceived in 1942, it has been widely used be- doi.org/10.3390/
polym13060983 cause of its capability of capturing the phase behavior of polymer solutions and blends [1–3]. | {"url":"https://docslib.org/doc/344272/lecture-15-11-02-05-phase-changes-and-phase-diagrams-of-single-component-materials","timestamp":"2024-11-12T22:12:38Z","content_type":"text/html","content_length":"64317","record_id":"<urn:uuid:d5072071-a9dd-41e0-bbe9-d5249f9984dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00699.warc.gz"} |
APS March Meeting 2019
Bulletin of the American Physical Society
APS March Meeting 2019
Volume 64, Number 2
Monday–Friday, March 4–8, 2019; Boston, Massachusetts
Session H36: Real-Space Methods for Large Scale Electronic Structure Problems Hide Abstracts
Sponsoring Units: DCOMP
Chair: Leeor Kronik, Weizmann Institute of Science
Room: BCEC 205C
Tuesday, H36.00001: Real-space numerical grid methods: The next generation of electronic structure codes
March 5, Invited Speaker: James Chelikowsky
2:30PM - Two physical ingredients, pseudopotentials and density functional theory, are widely used in electronic structure computations for a variety of materials applications. If we wish to address
3:06PM large, complex systems, the implementation of these ingredients on high performance computational platforms is vital. Real space grid methods offer a compelling vehicle for such
computations. These methods are mathematically robust, very accurate and well suited for modern, massively parallel computing resources [1]. I will illustrate the utility of these methods as
implemented in the PARSEC code [2]. Key algorithms in this code include subspace filtering based on Chebyshev polynomials for an accelerated eigenvalue solution, spectrum slicing for an
added level of parallelism, Cholesky QR algorithms to improve the performance of orthogonalization, and a 2D partition of the wave functions for efficient matrix-vector operations.
Applications will be illustrated for nanostructures containing tens of thousands of atoms.
[1] L. Frediani and D. Sundholm, Phys. Chem. Chem. Phys. 17, 31357 (2015).
[2] http://parsec.ices.utexas.edu
Tuesday, H36.00002: Large-scale density-functional calculations in real space and its application to bilayer graphene and semiconductor epitaxial growth
March 5, Invited Speaker: Atsushi Oshiyama
3:06PM - Facing current and future massively parallel architecture of supercomputers, we need to make close collaboration between the fields of physical science and computer science. Such
3:42PM collaboration we name COMPUTICS is already in progress (http://computics-material.jp/index-e.html). I here explain an example of such collaboration which allows us to perform total-energy
electronic-structure calculations based on the density-functional theory (DFT) in the real-space scheme for tens-of-thousands-atom systems and also the real-space Car-Parrinello Molecular
Dynamics simulations for thousands-of-atom systems. I first explain how we are able to perform such large-scale computations efficiently in our code named RSDFT. Recent development of the
device simulation combined with the non-equilibrium Green’s function (NEGF) method and its application to Si nanowire MOSFETs are also reported. As examples of the application to materials
science, I will discuss (1) the localization of Dirac electrons induced by moire pattern in twisted bilayer graphene, (2) ammonia decomposition and N incorporation on epitaxially grown GaN
films, (3) intrinsic carrier traps near SiC/SiO2 interfaces, and possibly (4) the formation of amorphous systems with thousands of atoms.
In collaboration with J.-I Iwata (Advance Soft), D. Takahashi (U Tsukuba), G. Milnikov (Osaka U), N. Mori (Osaka U), K. Uchida (Kyoto Inst Tech), Y.-i. Matsushita (Tokyo Inst Tech), K. M.
Bui (Nagoya U), M. Boero (Strasbourg U), and K. Shiraishi (Nagoya U).
Tuesday, H36.00003: Discontinuous projection method for large, accurate electronic structure calculations in real space
March 5, Invited Speaker: John Pask
3:42PM - For decades, the planewave (PW) pseudopotential method has been the method of choice for large, accurate Kohn-Sham calculations of condensed matter systems, in ab initio molecular dynamics
4:18PM simulations in particular. However, due to its reliance on a Fourier basis, the method has proven notoriously difficult to parallelize at scale, thus limiting the length and time scales
accessible. In this talk, we discuss new developments aimed at increasing the scales accessible substantially, while retaining the fundamental simplicity, systematic convergence, and
generality instrumental to the PW method's success in practice. The key idea is to release the constraint of continuity in the basis set, and with the freedom so obtained, employ a basis of
local Kohn-Sham eigenfunctions to solve the global Kohn-Sham problem. In so doing, the basis obtained is highly efficient, requiring just a few tens of basis functions per atom to attain
chemical accuracy, while simultaneously strictly local, orthonormal, and systematically improvable. We show how this basis can be employed to accelerate current state-of-the-art real-space
methods substantially by reducing the dimension of the real-space Hamiltonian by up to three orders of magnitude. Results for metallic and insulating systems of up to 27,000 atoms using up
to 38,000 processors demonstrate the scalability of the methodology in a discontinuous Galerkin formulation. Proceeding via projection of the real-space Hamiltonian instead promises to reach
larger scales still.
Tuesday, H36.00004: Efficient Computation of Hybrid and Screened Hybrid Functionals in Real-Space with Projection Operators
March 5, Invited Speaker: Amir Natan
4:18PM - The use of hybrid and screened hybrid functionals in Density Functional Theory (DFT) became popular as they allow to reduce the error between the calculated and experimentally measured
4:54PM properties. The calculation of the Fock exchange operator, required for those methods, is becoming a computationally prohibitive task with system size. An efficient approach, is to replace
the explicit calculation of the Fock operator with its projection on the Hilbert sub-space that is spanned by the previous self-consistent field (SCF) occupied eigenvectors. It is possible
to extend the method by projecting also on low lying empty eigenvectors to calculate also the empty eigenvalues. We have implemented this method^1 within the PARSEC^2,3 real-space code and
combined it with efficient Poisson solvers^4 and further hardware acceleration by Graphical Processing Units (GPUs) to achieve affordable hybrid calculations of atomistic structures with
1000 atoms on a single workstation^5. We demonstrate the efficiency of this method by calculating the electronic properties of silicon quantum dots (QD) and graphene nano-ribbons with hybrid
and screened hybrid functionals (e.g. PBE0 and HSE)^5. We show how the formalism can be equally applied in real-space to 3D^6 and 2D periodic systems.
[1] Nicholas M. Boffi, Manish Jain, and Amir Natan, J. Chem. Theory Comput. 12, no. 8 (2016): 3614-3622.
[2] James R. Chelikowsky, N. Troullier, and Y. Saad, Phys. Rev. Lett. 72, no. 8 (1994): 1240.
[3] Leeor Kronik, Adi Makmal, Murilo L. Tiago, M. M. G. Alemany, Manish Jain, Xiangyang Huang, Yousef Saad, and James R. Chelikowsky, Phys. Status Solidi (b) 243, no. 5 (2006): 1063-1079.
[4] D. Gabay, A. Boag, and A. Natan, Comput. Phys. Comm. 215 (2017): 1-6.
[5] D. Gabay, X. Wang, V. Lomakin, A. Boag, M. Jain, and A. Natan, Comput. Phys. Comm. 221 (2017): 95-101.
[6] Amir Natan, Phys. Chem. Chem. Phys, 17, no. 47 (2015): 31510-31515.
Tuesday, H36.00005: Computational Materials Discovery by RESCU - a KS-DFT method for solving thousands of atoms
March 5, Invited Speaker: Hong Guo
4:54PM - A major bottleneck for solving realistic materials problems is the lack of a first principles method that can accurately, efficiently and comfortably calculate condensed phase materials
5:30PM comprising thousands of atoms. Solving large systems is necessary when dealing with structures involving interfaces, surfaces, dilute impurities, grain boundaries, dislocations, domains,
solvents etc. Well-known methods of Kohn–Sham density functional theory (KS-DFT) can solve problems at a few hundred atoms level on a modest computer. For larger systems, supercomputers or
further approximations are necessary. Here I shall present our effort in developing a general-purpose KS-DFT solver called RESCU (stand for real space electronic structure calculator). We
demonstrate that RESCU can easily compute electronic structure for systems comprising thousands of atoms on a modest computer, for metals, semiconductors, insulators, liquids, moire patterns
in 2D heterjunction materials, dilute doped III-nitrides etc. For these problems and up to 14,000 atoms as we have used it for, RESCU converges KS-DFT in a few to ten wall-clock hours. RESCU
achieves high efficiency without compromising accuracy. I shall present the novel computational mathematics behind the efficiency gain^1, and apply it for property discovery of materials.
Acknowledgements: I wish to thank Dr. Lei Zhang who participated earlier work in this effort, and Dr. Xiaobin Chen for the phonon package. Many other researchers generously helped us to
improve and apply RESCU, and they will be acknowledged during the presentation. Funding from NSERC of Canada is gratefully acknowledged.
Engage My APS Information for The American Physical Society (APS) is a non-profit membership organization working to advance the knowledge of physics.
Become an APS Member Renew Membership Librarians
Submit a Meeting Abstract Join an APS Unit Authors
Submit a Manuscript Get My Member Number Referees
Find a Journal Article Update Contact Information Media
Donate to APS Students
© 2024 American Physical Society | All rights reserved | Terms of Use | Contact Us
Headquarters 1 Physics Ellipse, College Park, MD 20740-3844 (301) 209-3200
Editorial Office 100 Motor Pkwy, Suite 110, Hauppauge, NY 11788 (631) 591-4000
Office of Public Affairs 529 14th St NW, Suite 1050, Washington, D.C. 20045-2001 (202) 662-8700 | {"url":"https://meetings.aps.org/Meeting/MAR19/Session/H36?showAbstract","timestamp":"2024-11-07T23:50:57Z","content_type":"text/html","content_length":"23418","record_id":"<urn:uuid:17a40784-7377-4423-ae8f-5345346184c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00780.warc.gz"} |
K-Regular Matroids
posted on 2021-11-08, 20:27 authored by Semple, Charles A
The class of matroids representable over all fields is the class of regular matroids. The class of matroids representable over all fields except perhaps GF(2) is the class of near-regular matroids.
Let k be a non-negative integer. This thesis considers the class of k-regular matroids, a generalization of the last two classes. Indeed, the classes of regular and near-regular matroids coincide
with the classes of 0-regular and 1-regular matroids, respectively. This thesis extends many results for regular and near-regular matroids. In particular, for all k, the class of k-regular matroids
is precisely the class of matroids representable over a particular partial field. Every 3-connected member of the classes of either regular or near-regular matroids has a unique representability
property. This thesis extends this property to the 3-connected members of the class of k-regular matroids for all k. A matroid is [omega] -regular if it is k-regular for some k. It is shown that, for
all k [greater than or equal to] 0, every 3-connected k-regular matroid is uniquely representable over the partial field canonically associated with the class of [omega] -regular matroids. To prove
this result, the excluded-minor characterization of the class of k-regular matroids within the class of [omega] -regular matroids is first proved. It turns out that, for all k, there are a finite
number of [omega] -regular excluded minors for the class of k-regular matroids. The proofs of the last two results on k-regular matroids are closely related. The result referred to next is quite
different in this regard. The thesis determines, for all r and all k, the maximum number of points that a simple rank-r k-regular matroid can have and identifies all such matroids having this number.
This last result generalizes the corresponding results for regular and near-regular matroids. Some of the main results for k-regular matroids are obtained via a matroid operation that is a
generalization of the operation of [Delta] - Y exchange. This operation is called segment-cosegment exchange and, like the operation of [Delta] - Y exchange, has a dual operation. This thesis defines
the generalized operation and its dual, and identifies many of their attractive properties. One property in particular, is that, for a partial field P, the set of excluded minors for representability
over P is closed under the operations of segment-cosegment exchange and its dual. This result generalizes the corresponding result for [Delta] - Y and Y - [Delta] exchanges. Moreover, a consequence
of it is that, for a prime power q, the number of excluded minors for GF(q)-representability is at least 2q-4.
Copyright Date
Date of Award
Te Herenga Waka—Victoria University of Wellington
Rights License
Author Retains Copyright
Degree Discipline
Degree Grantor
Te Herenga Waka—Victoria University of Wellington
Degree Level
Degree Name
Doctor of Philosophy
Victoria University of Wellington Item Type
Awarded Doctoral Thesis
Victoria University of Wellington School
School of Mathematics, Statistics and Computer Science
Whittle, Geoff | {"url":"https://openaccess.wgtn.ac.nz/articles/thesis/K-Regular_Matroids/16958560","timestamp":"2024-11-12T03:45:58Z","content_type":"text/html","content_length":"149084","record_id":"<urn:uuid:205b8447-86db-41a9-83e1-fd6c417365de>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00407.warc.gz"} |
Topological vector space
Jump to navigation Jump to search
In mathematics, a topological vector space (also called a linear topological space) is one of the basic structures investigated in functional analysis. A topological vector space is a vector space
(an algebraic structure) which is also a topological space, thereby admitting a notion of continuity. More specifically, its topological space has a uniform topological structure, allowing a notion
of uniform convergence.
The elements of topological vector spaces are typically functions or linear operators acting on topological vector spaces, and the topology is often defined so as to capture a particular notion of
convergence of sequences of functions.
Hilbert spaces and Banach spaces are well-known examples.
Unless stated otherwise, the underlying field of a topological vector space is assumed to be either the complex numbers C or the real numbers R.
A topological vector space X is a vector space over a topological field K (most often the real or complex numbers with their standard topologies) that is endowed with a topology such that vector
addition X × X → X and scalar multiplication K × X → X are continuous functions (where the domains of these functions are endowed with product topologies).
Some authors (e.g., Walter Rudin) require the topology on X to be T[1]; it then follows that the space is Hausdorff, and even Tychonoff. The topological and linear algebraic structures can be tied
together even more closely with additional assumptions, the most common of which are listed below.
The category of topological vector spaces over a given topological field K is commonly denoted TVS[K] or TVect[K]. The objects are the topological vector spaces over K and the morphisms are the
continuous K-linear maps from one object to another.
Every normed vector space has a natural topological structure: the norm induces a metric and the metric induces a topology. This is a topological vector space because:
1. The vector addition + : V × V → V is jointly continuous with respect to this topology. This follows directly from the triangle inequality obeyed by the norm.
2. The scalar multiplication · : K × V → V, where K is the underlying scalar field of V, is jointly continuous. This follows from the triangle inequality and homogeneity of the norm.
Therefore, all Banach spaces and Hilbert spaces are examples of topological vector spaces.
There are topological vector spaces whose topology is not induced by a norm, but are still of interest in analysis. Examples of such spaces are spaces of holomorphic functions on an open domain,
spaces of infinitely differentiable functions, the Schwartz spaces, and spaces of test functions and the spaces of distributions on them. These are all examples of Montel spaces. An
infinite-dimensional Montel space is never normable.
A topological field is a topological vector space over each of its subfields.
Product vector spaces[edit]
A cartesian product of a family of topological vector spaces, when endowed with the product topology, is a topological vector space. For instance, the set X of all functions f : R → R: this set X can
be identified with the product space R^R and carries a natural product topology. With this topology, X becomes a topological vector space, endowed with a topology called the topology of pointwise
convergence. The reason for this name is the following: if (f[n]) is a sequence of elements in X, then f[n] has limit f in X if and only if f[n](x) has limit f(x) for every real number x. This space
is complete, but not normable: indeed, every neighborhood of 0 in the product topology contains lines, i.e., sets K f for f ≠ 0.
Topological structure[edit]
A vector space is an abelian group with respect to the operation of addition, and in a topological vector space the inverse operation is always continuous (since it is the same as multiplication by
−1). Hence, every topological vector space is an abelian topological group.
Let X be a topological vector space. Given a subspace M ⊂ X, the quotient space X/M with the usual quotient topology is a Hausdorff topological vector space if and only if M is closed.^[1] This
permits the following construction: given a topological vector space X (that is probably not Hausdorff), form the quotient space X / M where M is the closure of {0}. X / M is then a Hausdorff
topological vector space that can be studied instead of X.
In particular, topological vector spaces are uniform spaces and one can thus talk about completeness, uniform convergence and uniform continuity. (This implies that every Hausdorff topological vector
space is Tychonoff.^[2]) The vector space operations of addition and scalar multiplication are actually uniformly continuous. Because of this, every topological vector space can be completed and is
thus a dense linear subspace of a complete topological vector space.
The Birkhoff–Kakutani theorem gives that the following three conditions on a topological vector space V are equivalent:^[3]
A metric linear space means a (real or complex) vector space together with a metric for which addition and scalar multiplication are continuous. By the Birkhoff–Kakutani theorem, it follows that
there is an equivalent metric that is translation-invariant.
More strongly: a topological vector space is said to be normable if its topology can be induced by a norm. A topological vector space is normable if and only if it is Hausdorff and has a convex
bounded neighborhood of 0.^[4]
A linear operator between two topological vector spaces which is continuous at one point is continuous on the whole domain. Moreover, a linear operator f is continuous if f(V) is bounded (as defined
below) for some neighborhood V of 0.
A hyperplane on a topological vector space X is either dense or closed. A linear functional f on a topological vector space X has either dense or closed kernel. Moreover, f is continuous if and only
if its kernel is closed.
Let K be a non-discrete locally compact topological field, for example the real or complex numbers. A topological vector space over K is locally compact if and only if it is finite-dimensional, that
is, isomorphic to K^n for some natural number n.
Local notions[edit]
A subset E of a topological vector space X is said to be
• balanced if tE ⊂ E for every scalar |t| ≤ 1
• bounded if for every neighborhood V of 0, then E ⊂ tV when t is sufficiently large.^[5]
The definition of boundedness can be weakened a bit; E is bounded if and only if every countable subset of it is bounded. Also, E is bounded if and only if for every balanced neighborhood V of 0,
there exists t such that E ⊂ tV. Moreover, when X is locally convex, the boundedness can be characterized by seminorms: the subset E is bounded iff every continuous semi-norm p is bounded on E.
Every topological vector space has a local base of absorbing and balanced sets.
A sequence {x[n]} is said to be Cauchy if for every neighborhood V of 0, the difference x[m] − x[n] belongs to V when m and n are sufficiently large. Every Cauchy sequence is bounded, although Cauchy
nets or Cauchy filters may not be bounded. A topological vector space where every Cauchy sequence converges is called sequentially complete; in general, it may not be complete (in the sense that
Cauchy filters converge). Every compact set is bounded.
Depending on the application additional constraints are usually enforced on the topological structure of the space. In fact, several principal results in functional analysis fail to hold in general
for topological vector spaces: the closed graph theorem, the open mapping theorem, and the fact that the dual space of the space separates points in the space.
Below are some common topological vector spaces, roughly ordered by their niceness.
• F-spaces are complete topological vector spaces with a translation-invariant metric. These include L^p spaces for all p > 0.
• Locally convex topological vector spaces: here each point has a local base consisting of convex sets. By a technique known as Minkowski functionals it can be shown that a space is locally convex
if and only if its topology can be defined by a family of semi-norms. Local convexity is the minimum requirement for "geometrical" arguments like the Hahn–Banach theorem. The L^p spaces are
locally convex (in fact, Banach spaces) for all p ≥ 1, but not for 0 < p < 1.
• Barrelled spaces: locally convex spaces where the Banach–Steinhaus theorem holds.
• Bornological space: a locally convex space where the continuous linear operators to any locally convex space are exactly the bounded linear operators.
• Stereotype space: a locally convex space satisfying a variant of reflexivity condition, where the dual space is endowed with the topology of uniform convergence on totally bounded sets.
• Montel space: a barrelled space where every closed and bounded set is compact
• Fréchet spaces: these are complete locally convex spaces where the topology comes from a translation-invariant metric, or equivalently: from a countable family of semi-norms. Many interesting
spaces of functions fall into this class. A locally convex F-space is a Fréchet space.
• LF-spaces are limits of Fréchet spaces. ILH spaces are inverse limits of Hilbert spaces.
• Nuclear spaces: these are locally convex spaces with the property that every bounded map from the nuclear space to an arbitrary Banach space is a nuclear operator.
• Normed spaces and semi-normed spaces: locally convex spaces where the topology can be described by a single norm or semi-norm. In normed spaces a linear operator is continuous if and only if it
is bounded.
• Banach spaces: Complete normed vector spaces. Most of functional analysis is formulated for Banach spaces.
• Reflexive Banach spaces: Banach spaces naturally isomorphic to their double dual (see below), which ensures that some geometrical arguments can be carried out. An important example which is not
reflexive is L^1, whose dual is L^∞ but is strictly contained in the dual of L^∞.
• Hilbert spaces: these have an inner product; even though these spaces may be infinite-dimensional, most geometrical reasoning familiar from finite dimensions can be carried out in them. These
include L^2 spaces.
• Euclidean spaces: R^n or C^n with the topology induced by the standard inner product. As pointed out in the preceding section, for a given finite n, there is only one n-dimensional topological
vector space, up to isomorphism. It follows from this that any finite-dimensional subspace of a TVS is closed. A characterization of finite dimensionality is that a Hausdorff TVS is locally
compact if and only if it is finite-dimensional (therefore isomorphic to some Euclidean space).
Dual space[edit]
Every topological vector space has a continuous dual space—the set V* of all continuous linear functionals, i.e. continuous linear maps from the space into the base field K. A topology on the dual
can be defined to be the coarsest topology such that the dual pairing each point evaluation V* → K is continuous. This turns the dual into a locally convex topological vector space. This topology is
called the weak-* topology. This may not be the only natural topology on the dual space; for instance, the dual of a normed space has a natural norm defined on it. However, it is very important in
applications because of its compactness properties (see Banach–Alaoglu theorem). Caution: Whenever V is a not-normable locally convex space, then the pairing map V* × V → K is never continuous, no
matter which vector space topology one chooses on V*. | {"url":"https://static.hlt.bme.hu/semantics/external/pages/tenzorszorzatok/en.wikipedia.org/wiki/Topological_vector_spaces.html","timestamp":"2024-11-05T23:09:57Z","content_type":"text/html","content_length":"96362","record_id":"<urn:uuid:29c064ba-d72c-4854-a3ec-136a6db7b08a>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00577.warc.gz"} |
What is the Magnitude of the Gravitational Force Acting on the Earth Due to the Sun?
• Posted byby Harrie Wade
• Updated:
• 8 minute read
There’s a significant force at play between the Earth and the Sun that keeps our planet in its orbit. Understanding the magnitude of this gravitational force is imperative for grasping the mechanics
of our solar system. You may wonder how such an immense force influences your day-to-day life and the overall stability of Earth’s climate and environment. In this blog post, you’ll explore the
calculations and concepts behind this gravitational pull, shedding light on the fundamental relationship between our planet and its closest star.
Key Takeaways:
• Gravitational Force: The gravitational force between the Earth and the Sun is a fundamental interaction that governs planetary motion.
• Newton’s Law of Gravitation: This force can be calculated using Newton’s law, which states that the gravitational force is proportional to the product of the masses of the two objects and
inversely proportional to the square of the distance between them.
• Magnitude Calculation: The magnitude of the gravitational force acting on the Earth due to the Sun is approximately 3.6 x 10^22 N.
• Massive Sun: The Sun’s mass, about 1.989 x 10^30 kg, plays a key role in the strength of this gravitational force.
• Distance Factor: The average distance from the Earth to the Sun, roughly 1.496 x 10^11 meters, significantly impacts the magnitude of the gravitational force.
The Gravitational Force: An Overview
To understand the gravitational force, you need to recognize that it is one of the fundamental interactions of nature. Gravity is the attractive force that acts between two masses, influencing their
motion and behavior in the cosmos. The magnitude of this force varies depending on the masses involved and the distance separating them. In your exploration of gravitational forces, the relationship
between celestial bodies, such as the Earth and the Sun, plays a crucial role in maintaining the stability of our solar system.
Understanding Gravity
With gravity, you are encountering one of the most significant forces in the universe. It governs the motion of planets, stars, and galaxies, as well as the phenomena you observe in daily life. This
omnipresent force instills a sense of weight in objects, drawing them towards one another, and is imperative for maintaining the orbits of celestial bodies.
The Law of Universal Gravitation
Overview, the Law of Universal Gravitation is a principle formulated by Sir Isaac Newton, stating that every point mass attracts every other point mass in the universe with a force that is directly
proportional to the product of their masses and inversely proportional to the square of the distance between them. This law holds true for all objects, regardless of their size, making it fundamental
to your understanding of gravitational interactions.
Another key aspect of the Law of Universal Gravitation is its mathematical representation, often expressed as F = G(m1*m2)/r^2, where F is the gravitational force, G is the gravitational constant, m1
and m2 are the masses in question, and r is the distance between their centers. This equation allows you to quantify the gravitational force between any two bodies, providing a solid foundation for
understanding the gravitational attraction that occurs in your universe, from apples falling from trees to the Earth orbiting the Sun.
The Earth and Sun System
Little do many realize, the Earth and Sun are intricately linked in a delicate gravitational relationship that governs the dynamics of our solar system. This connection not only influences seasons
and climate on Earth but also maintains the intricate balance of celestial bodies, showcasing the power of gravity at work in the cosmos.
Distance between Earth and Sun
The average distance between the Earth and the Sun is approximately 93 million miles (150 million kilometers), a measurement defined as one astronomical unit (AU). This vast distance plays a crucial
role in the gravitational forces exerted between these two massive bodies, influencing the orbital mechanics that keep the Earth in stable orbit.
Mass of the Earth and Sun
On a cosmic scale, the mass of the Sun is around 333,000 times that of the Earth, emphasizing its dominant gravitational influence in the Earth-Sun system. While the Earth has a mass of about 5.97 x
10^24 kilograms, the Sun’s mass is approximately 1.99 x 10^30 kilograms, showcasing the significant disparity between these two astronomical entities.
Understanding the mass of both the Earth and the Sun is fundamental to grasping the dynamics within our solar system. The Sun’s immense mass generates a powerful gravitational pull that keeps the
Earth and other planets in orbit. This gravitational interplay is imperative for sustaining life on our planet, as it ensures a stable climate and conditions necessary for growth. By recognizing
these mass relationships, you can appreciate the complexities of orbital mechanics and the intricate balance that defines the cosmos.
Calculating Gravitational Force
Your exploration of gravitational forces begins with understanding how to calculate the gravitational force between two massive bodies. This is typically determined using Newton’s Law of Universal
Gravitation, which takes into account the masses of the objects and the distance separating them. By applying this formula, you can compute the gravitational attraction acting on the Earth due to the
Sun, providing insight into the dynamics of our solar system.
Using the Gravitational Formula
With Newton’s Gravitational Formula, F = G(m1 * m2) / r², you can accurately calculate the gravitational force. Here, F represents the gravitational force, G is the gravitational constant, m1 and m2
are the masses of the two bodies (Earth and Sun), and r is the distance between their centers. By substituting the appropriate values into this formula, you will reveal the strength of the
gravitational pull that the Sun exerts on the Earth.
Factors Affecting Gravitational Force
With several factors influencing gravitational force, it’s crucial to recognize their interplay. Key factors include:
• The masses of the objects involved.
• The distance between the centers of the two masses.
Thou must consider that any changes in mass or distance will directly impact the gravitational attraction between the Earth and the Sun.
Understanding these factors enhances your grasp of gravitational interactions. The magnitude of gravitational force depends on multiple elements that are vital for accurate calculations. Some
considerations are:
• The gravitational constant (G).
• Environmental influences, such as nearby celestial bodies.
Thou shall see that these factors play a significant role in shaping the gravitational dynamics within our solar system.
Implications of Gravitational Force on Earth
All the gravitational force exerted by the Sun plays a crucial role in maintaining the stability of Earth’s orbit and climate. This fundamental force not only governs the position of our planet in
the solar system but also influences various geological and astronomical phenomena that impact your daily life.
Effects on Earth’s Orbit
Force exerted by the Sun’s gravity keeps Earth in a nearly circular orbit, enabling you to experience seasons and providing a stable environment for life. Any shifts in this gravitational force due
to changes in the Sun or the Earth itself can affect your planet’s trajectory and climate over time.
Influence on Tides
Influence of the Sun’s gravitational pull extends to the ocean tides on Earth, which are primarily affected by both the Moon and the Sun. This interaction creates varying tidal patterns that affect
coastal ecosystems and influence your recreational activities by impacting water levels.
Plus, the Sun’s gravity amplifies the tidal forces caused by the Moon, leading to higher high tides and lower low tides during specific lunar phases. These cyclical changes are vital for maintaining
healthy marine life and can even affect your local fishing or boating conditions, highlighting the interconnectedness of gravitational forces and our everyday experiences.
Gravitational Force Compared to Other Celestial Bodies
Many factors influence the gravitational force experienced on Earth, not solely from the Sun. Understanding these influences can enhance your perspective on the forces at play in our solar system.
Gravitational Force Comparison
Celestial Body Gravitational Force (Newtons)
Sun 3.53 x 10^22
Moon 1.98 x 10^20
Jupiter 1.88 x 10^23
Comparison with Moon’s Gravitational Impact
Gravitational forces fluctuate between Earth and other celestial bodies, particularly the Moon. You will find that the Moon exerts a significant, yet comparatively weaker gravitational pull on Earth
than the Sun.
Moon’s Gravitational Influence
Aspect Details
Gravitational Pull 1.98 x 10^20 N
Tidal Effects Causes tides in Earth’s oceans
Other Planetary Influences
To fully appreciate these forces, you should also consider the gravitational influences of other planets. These celestial entities can create variations in the gravitational field that affect your
understanding of Earth’s position in the solar system.
Planetary gravitation varies significantly across the solar system. Each planet’s mass and distance from Earth contribute to its influence. For instance, while Venus and Jupiter may have less
gravitational impact than the Sun, their positions can alter gravitational pulls slightly, affecting not only tidal patterns but also Earth’s orbital mechanics. The combined gravitational attractions
create a dynamic setting that you, as an inquisitive observer, should appreciate.
Future Research Directions
For future research, you can explore the various factors that influence the gravitational force between celestial bodies, such as mass variations and distance anomalies. Understanding these dynamics
will not only enhance your knowledge of gravitational interactions but also improve predictions of agential movements in our solar system. As advancements continue, you could contribute to developing
methods that refine our knowledge of gravitational forces at larger scales.
Advancements in Astrophysics
With ongoing advancements in astrophysics, you can look forward to a deeper understanding of gravitational forces. Breakthroughs in theoretical models and observational techniques are reshaping our
comprehension of gravitational interactions, offering you fresh insights into the mechanics that govern celestial bodies. These advancements have the potential to clarify complex phenomena, making
them more accessible for both researchers and enthusiasts alike.
The Role of Technology in Calculating Gravitational Forces
On the technological front, you will find that advancements in computing power and data analysis significantly enhance the accuracy of gravitational force calculations. Sophisticated models now allow
you to simulate and predict interactions with exceptional precision, which is critical for space missions and astronomical research. The integration of technology with theoretical physics empowers
you to explore the intricacies of gravitational relationships more effectively than ever before.
Calculating gravitational forces between celestial bodies often requires advanced algorithms, sophisticated telescopes, and satellite data. These technologies enable you to gather vast amounts of
data, which you can analyze to improve your understanding of gravitational interactions. By leveraging high-performance computing, you can model complex scenarios, assess variations in mass
distributions, and account for various factors, such as the influence of nearby planets. As you engage with these technological advancements, your interpretation of gravitational forces will become
increasingly nuanced, pushing the boundaries of astrophysical research.
Final Words
From above, you can appreciate that the magnitude of the gravitational force acting on the Earth due to the Sun is approximately 3.54 x 10^22 Newtons. This immense force is what keeps the Earth in
its stable orbit around the Sun, highlighting the intricate balance of gravitational interactions in our solar system. Understanding this force not only deepens your knowledge of astrophysics but
also reinforces the interconnectedness of celestial bodies. Recognizing these fundamental forces enables you to better appreciate the cosmos in which you reside.
Q: What is the gravitational force acting on Earth due to the Sun?
A: The gravitational force acting on Earth due to the Sun can be calculated using Newton’s law of universal gravitation, which states that the force between two objects is proportional to the product
of their masses and inversely proportional to the square of the distance between their centers. The gravitational force \( F \) is given by the formula: \( F = G \frac{m_1 m_2}{r^2} \), where \( G \)
is the gravitational constant, \( m_1 \) is the mass of the Sun, \( m_2 \) is the mass of the Earth, and \( r \) is the distance between the centers of the two bodies. The magnitude of this force is
approximately \( 3.5 \times 10^{22} \) Newtons.
Q: What are the masses of the Earth and the Sun used in the gravitational force calculation?
A: The mass of the Sun is approximately \( 1.989 \times 10^{30} \) kilograms, while the mass of the Earth is roughly \( 5.972 \times 10^{24} \) kilograms. These values are crucial for accurately
calculating the gravitational force between the two celestial bodies.
Q: How does the distance between the Earth and the Sun affect the gravitational force?
A: The distance between the Earth and the Sun plays a critical role in determining the magnitude of the gravitational force. According to Newton’s law of universal gravitation, the force is inversely
proportional to the square of the distance. As the distance increases, the gravitational force decreases sharply. For example, the average distance from the Earth to the Sun is about \( 1.496 \times
10^{11} \) meters (1 Astronomical Unit), and any significant change in this distance would result in a noticeable change in the gravitational pull.
Q: Is the gravitational force between the Earth and the Sun constant?
A: While the gravitational force is based on fixed values for the masses of the Earth and the Sun, it is not entirely constant due to the elliptical shape of Earth’s orbit. The gravitational force
varies slightly throughout the year as the distance changes. At its closest point (perihelion), the gravitational force is slightly stronger, while at its farthest point (aphelion), it is slightly
weaker. However, these variations are relatively small compared to the overall force experienced by the Earth.
Q: What role does the gravitational force between the Earth and the Sun play in the Solar System?
A: The gravitational force between the Earth and the Sun is fundamental to maintaining the orbits of the planets in the Solar System. This force keeps the Earth in a stable orbit around the Sun,
allowing for the cyclical seasons and influencing various planetary phenomena. Moreover, it also affects smaller celestial bodies like comets and asteroids, guiding their trajectories as they
interact with the gravitational fields of larger objects in the Solar System.
Leave a Reply Cancel reply | {"url":"https://newsanalysis.net/gravitational-force-of-the-sun-on-earth/","timestamp":"2024-11-06T04:51:47Z","content_type":"text/html","content_length":"97525","record_id":"<urn:uuid:d282d226-d0c5-4713-adc3-e0fc5e1251ef>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00031.warc.gz"} |
Next: MINIMUM PHASE Up: One-sided functions Previous: One-sided functions
To understand causal filters better, we now take up the task of undoing what a causal filter has done. Consider the output y[t] of a filter b[t] is known but the input x[t] is unknown. See Figure 1.
Figure 1 Sometimes the input to a filter is unknown.
This is the problem that one always has with a transducer/recorder system. For example, the output of a seismometer is a wiggly line on a piece of paper from which the seismologist may wish to
determine the displacement, velocity, or acceleration of the ground. To undo the filtering operation of the filter B(Z), we will try to find another filter A(Z) as indicated in Figure 2.
Figure 2 The filter A(Z) is inverse to the filter B(Z).
To solve for the coefficients of the filter A(Z), we merely identify coefficients of powers of Z in B(Z)A(Z) = 1. For B(Z), a three-term filter, this is
The coefficients of 1) are From (2) one may get a[0] from b[0]. From (3) one may get a[1] from a[0] and the b[k]. From (4) one may get a[2] from a[1], a[0], and the b[k]. Likewise, in the general
case a[k] may be found from a[k-1], a[k-2], and the b[k]. Specifically, from (7) the a[k] may be determined recursively by
Consider the example where B(Z) = 1 - Z/2; then, by equations like (2) to (7), by the binomial theorem, by polynomial division, or by Taylor's power series formula we obtain
We see that there are an infinite number of filter coefficients but that they drop off rapidly in size so that approximation in a computer presents no problem. The situation is not so rosy with the
filter B(Z) = 1 - 2Z. Here we obtain The coefficients of the series increase without bound. The outputs of the filter A(Z) depend infinitely strongly on inputs of the infinitely distant past. [Recall
that the present output of A(Z) is a[0] times the present input x[1] plus a[1] times the previous input x[t-1], etc., so a[n] represents memory on n time units earlier.] The implication of this is
that some filters B(Z) will not have useful finite approximate inverses A(Z) determined from (2) to (8). We now seek ways to identify the good filters from the bad ones. With a two-pulse filter, the
criterion is merely that the first pulse in B(Z) be larger than the second. A more mathematical description of the state of affairs results from solving for the roots of B(Z), that is, find the
values of Z[0] for which B(Z[0]) = 0. For example 1 - Z/2 we find Z[0] = 2. For the example 1 - 2Z, we find Z[0] of B(Z[0]) = 0 lies inside the unit circle in the complex plane, then 1/B(Z) will have
coefficients which blow up; and if the root lies outside the unit circle, then the inverse 1/B(Z) will be bounded.
Figure 3 Factoring the polynomial B(Z) breaks the filter into many two-term filters. Each one should have a bounded inverse.
Recalling earlier discussion that a polynomial B(Z) of degree N may be factored into N subsystems and that the ordering of subsystems is unimportant (see Figure 3), we suspect that if any of the N
roots of B(Z) lies inside the unit circle we may have difficulty with A(Z). Actual proof of this suspicion relies on a theorem from complex-variable theory about absolutely convergent series. The
theorem is that the product of absolutely convergent series is convergent, and conversely the product of any convergent series with a divergent series is divergent. Another proof may be based upon
the fact that a power series for 1/B(Z) converges in a circle about the origin with a radius from the origin out to he first pole [the zero of B(Z) of smallest magnitude]. Convergence of A(Z) on the
unit circle means, in terms of filters, that the coefficients of A(Z) are decreasing. Thus, if all the zeros of B(Z) are outside the unit circle, we will get a convergent filter from (8).
Can anything at all be done if there is one root or more inside the circle? An answer is suggested by the example
Equation (11) is a series expansion in 1/Z, that is, a Taylor series about infinity. It converges from
In the general case, then, one must factor B(Z) into two parts: B(Z) = B[out](Z) B[in](Z) where B[out] contains roots outside the unit circle and B[in] contains the roots inside. Then the inverse of
B[out] is expressed as a Taylor series about the origin and the inverse of B[in] is expressed as a Taylor series about infinity. The final expression for 1/B(Z) is called a Laurent expansion for 1/B(
Z), and it converges on a ring surrounding the unit circle. Cases with zeros exactly on the unit circle present special problems. Sometimes you can argue yourself out of the difficulty but at other
times roots on or even near the circle may mean that a certain computing scheme won't work out well in practice.
Finally, let us consider a mechanical interpretation. The stress (pressure) in a material may be represented by x[t], and the strain (volume charge) may be represented by y[t]. The following two
statements are equivalent; that is, in some situations they are both true, and in other situations they are both false:
STATEMENT A The stress in a material may be expressed as a linear combination of present and past strains. Likewise, the strain may be deduced from present and past stresses.
STATEMENT B The filter which relates stress to strain and vice versa has all poles and zeros outside the unit circle.
1. Find the filter which is inverse to (2 - 5Z + 2Z^2). You may just drop higher-order powers of Z, but an exact expression for the coefficients of any power of Z is preferred. (Partial fractions is
a useful, though not necessary, technique.) Sketch the impulse response.
2. Show that multiplication by (1 - Z) in discretized time is analogous to time differentiation in continuous time. Show that dividing by (1 - Z) is analogous to integration. What are the limits on
the integral?
3. Describe a general method for determining A(Z) and B(Z) from a Taylor series of B(Z) and A(Z) are polynomials of unknown degree n and m, respectively. Work out the case HINT: Identify
coefficients of B(Z) = A(Z) C(Z).]
Next: MINIMUM PHASE Up: One-sided functions Previous: One-sided functions Stanford Exploration Project | {"url":"https://sepwww.stanford.edu/sep/prof/fgdp/c2/paper_html/node2.html","timestamp":"2024-11-10T19:15:57Z","content_type":"text/html","content_length":"17071","record_id":"<urn:uuid:bfeb5f64-6c21-4f44-b03e-5b1f90289a3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00284.warc.gz"} |
Unable to achieve better performance with transformer than LSTM
I am trying to train an ML model on time series data. The input is 10 timeseries which are essentially a sensor data. The output is another set of three time series. I feed the model with the window
of 100. So, the input shape becomes (100, 10). I want to predict output time series values for single time step. So, the output shape becomes (1, 3). (If I create mini batches of size say x, the
input and output shapes become (x, 100, 10) and (x, 1, 3)).
My approach is to first overfit the model on smaller number of records. See if model is indeed learning / able to overfit the data. Then add some regularization (mostly dropout) and then try to train
the model on full dataset.
First, I tried to overfit LSTM model (4 LSTM layers with 1400 hidden units each) on small dataset and visualised the outcome. It did well. So, I tried to train it on the whole dataset. It did
okayish, but still struggled at some places.
I tried adding dropouts too, but it did not yield any significant improvement. So, I tried to train PatchTST transformer model. First, I tried to overfit smaller model and did well. In fact, when I
visualized the output, I realised that it was able to get tighter overfit than the LSTM model. So, I tried to train it on the whole dataset.
The initial version of PatchTST I tried is as follows:
config = PatchTSTConfig(
model = PatchTSTForRegression(config)
# Model Trainer body looks like this:
patch_tst_y_hat = self.model(X)
y_hat = patch_tst_y_hat.regression_outputs
# for LSTM it was:
# y_hat = self.model(X)
loss = self.loss_fn(y_hat, Y)
With this as base config, I tried different changes to it for hyper parameter optimization:
1. layers = 7
2. d_model = 1000
3. d_model = 1000, num_attention_heads = 4
The validation loss curves as follows:
The training loss curves also look the same: image (Forum is allowing me to insert only one image)
Can someone suggest any idea to improve LSTM model performance further with PatchTST or some other transformer related changes?
PS: I have also tried LSTM with attention, it did yield tiny improvement, which is not sufficient. I am also open for any non-transformer related suggestions too. But will love to learn how can we
achieve better performance with transformer. | {"url":"https://discuss.pytorch.org/t/unable-to-achieve-better-performance-with-transformer-than-lstm/211734","timestamp":"2024-11-04T04:51:34Z","content_type":"text/html","content_length":"16516","record_id":"<urn:uuid:4007b4bb-b12a-40ec-9bf6-2946ecb4c924>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00673.warc.gz"} |
[Solved] Which of the following is not the collision resolution techn
Which of the following is not the collision resolution technique of hashing?
Answer (Detailed Solution Below)
Option 4 : Folding
DSSSB TGT Hindi Female 4th Sep 2021 Shift 2
14 K Users
200 Questions 200 Marks 120 Mins
Double Hashing method is based upon the idea that in the event of a collision we use an another hashing function with the key value as an input to find where in the open addressing scheme the data
should actually be placed at.
• In this case we use two hashing functions, such that the final hashing function looks like:
H(x, i) = (H1(x) + i*H2(x))%N
• Typically for H1(x) = x%N a good H2 is H2(x) = P - (x%P), where P is a prime number smaller than N.
Additional Information
Quadratic Probing method lies in the middle of great cache performance and the problem of clustering. The general idea remains the same, the only difference is that we look at the Q(i) increment at
each iteration when looking for an empty bucket, where Q(i) is some quadratic expression of i. A simple expression of Q would be Q(i) = i^2, H(x, i) = (H(x) + i^2)%N
Folding is not a collision resolution technique, but rather a technique for creating a hash value from a key. In folding, the key is divided into equal parts and then added together to form a hash
value. For example, if the key is "123456789", it might be divided into two parts: "12" and "34 56 78 9". The parts are then added together to form the hash value: 12 + 34 + 56 + 78 + 9 = 189.
Latest DSSSB TGT Updates
Last updated on Sep 3, 2024
-> The DSSSB TGT Exam for TGT Punjabi & TGT Urdu will be held on 13th October 2024.
-> The DSSSB TGT Notification was released for 4591 vacancies.
-> The selection of the DSSSB TGT is based on the CBT Test which will be held for 200 marks.
-> Candidates can check the DSSSB TGT Previous Year Papers which helps in preparation. Candidates can also check the DSSSB Test Series. | {"url":"https://testbook.com/question-answer/which-of-the-following-is-not-the-collision-resolu--611a42d75836b0fbd61e1a71","timestamp":"2024-11-06T21:52:25Z","content_type":"text/html","content_length":"197590","record_id":"<urn:uuid:3c1b1f83-bb88-4b82-b162-9b6e931017e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00873.warc.gz"} |
Mathematics and Computer Science
Maria Serafina MADONIA
Associate Professor of Informatics [INFO-01/A]
Maria Serafina Madonia, born in Palermo on 31 December 1962, has been an associate professor (S.S.D. INF/01) at the Department of Mathematics and Computer Science (DMI) of the University of Catania
since 1 January 2022. From May 1996 to December 2021 she is was confirmed researcher (S.S.D. INF/01) at the same department. In 1985, she graduated in Mathematics (Applicative address) at the
University of Palermo and, in 1991, she obtained the title of PhD in Mathematics (III cycle - Consortium Universities of Palermo, Catania and Messina). He teaches in Bachelor's and Master's degree
courses in Computer Science and carries out research activities in the field of theoretical computer science.
She carries out research in the field of formal language theory and, more exactly, she is interested in algebraic and combinatorial problems of the theories of automata, two-dimensional languages
and codes. In particular, she addressed the following topics:
- Recognizable two-dimensional languages
- Picture codes
- Automata on unary alphabets
- “Covering” of words
- Z-monoids and z-codes
- Rational relationships
- Reducibility of binary trees
The complete list of publications is available at the link http://www.dmi.unict.it/madonia/ricerche.html.
She has worked, and still does, as scientific reviewer for numerous international journals and for numerous national and international conferences.
From 1998 to today she has participated in various research projects funded by MIUR, INDAM and the University of Catania. | {"url":"https://dmi.unict.it/faculty/maria.serafina.madonia","timestamp":"2024-11-07T12:32:40Z","content_type":"text/html","content_length":"30730","record_id":"<urn:uuid:394ae2ee-69b5-4402-90b5-a6d8750f113f>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00552.warc.gz"} |
Evaluate:∫2x2+3x−2dx... | Filo
Not the question you're searching for?
+ Ask your question
Was this solution helpful?
Found 6 tutors discussing this question
Discuss this question LIVE
11 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice questions from similar books
Practice more questions from Integrals
View more
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text Evaluate:
Updated On Dec 5, 2022
Topic Integrals
Subject Mathematics
Class Class 12
Answer Type Text solution:1 Video solution: 1
Upvotes 165
Avg. Video Duration 25 min | {"url":"https://askfilo.com/math-question-answers/evaluate-displaystyle-int-frac-dx-sqrt-2x-2-3x-2","timestamp":"2024-11-05T02:17:31Z","content_type":"text/html","content_length":"498930","record_id":"<urn:uuid:119bd1e9-0ff5-488b-8b3e-eb601e36bcfb>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00194.warc.gz"} |
Enum PdfFillMode
Specifies how the interior of a closed path is filled.
Alternate = 0
The alternate fill mode (the even-odd rule) should be used.
Winding = 1
The winding fill mode (the nonzero winding number rule) should be used.
For a simple path, it is intuitively clear what region lies inside. However, for a more complex path - for example, a path that intersects itself or has one subpath that encloses another - it is not
always obvious which points lie inside the path. The path machinery uses one of two rules for determining which points lie inside a path: the nonzero winding number rule and the even-odd rule.
Even-Odd Rule
The even-odd rule determines whether a point is inside a path by drawing a ray from that point in any direction and simply counting the number of path segments that cross the ray, regardless of
direction. If this number is odd, the point is inside; if even, the point is outside. This yields the same results as the nonzero winding number rule for paths with simple shapes, but produces
different results for more complex shapes.
The image below shows the effects of applying the even-odd rule to complex paths. For the five-pointed star, the rule considers the triangular points to be inside the path, but not the pentagon in
the center. For the two concentric circles, only the doughnut shape between the two circles is considered inside, regardless of the directions in which the circles are drawn.
Nonzero Winding Number Rule
The nonzero winding number rule determines whether a given point is inside a path by conceptually drawing a ray from that point to infinity in any direction and then examining the places where a
segment of the path crosses the ray. Starting with a count of 0, the rule adds 1 each time a path segment crosses the ray from left to right and subtracts 1 each time a segment crosses from right to
left. After counting all the crossings, if the result is 0, the point is outside the path; otherwise, it is inside.
Note: The method just described does not specify what to do if a path segment coincides with or is tangent to the chosen ray. Since the direction of the ray is arbitrary, the rule simply chooses a
ray that does not encounter such problem intersections.
For simple convex paths, the nonzero winding number rule defines the inside and outside as one would intuitively expect. The more interesting cases are those involving complex or self-intersecting
paths like the ones shown below:
For a path consisting of a five-pointed star, drawn with five connected straight line segments intersecting each other, the rule considers the inside to be the entire area enclosed by the star,
including the pentagon in the center.
For a path composed of two concentric circles, the areas enclosed by both circles are considered to be inside, provided that both are drawn in the same direction.
If the circles are drawn in opposite directions, only the doughnut shape between them is inside, according to the rule; the doughnut hole is outside. | {"url":"https://api.docotic.com/pdffillmode","timestamp":"2024-11-10T17:27:39Z","content_type":"text/html","content_length":"11824","record_id":"<urn:uuid:be1229ec-b300-4c33-a8a5-5b427a866c20>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00007.warc.gz"} |
F.Dist: Excel Formulae Explained - ExcelAdept
Key Takeaways:
• The F.DIST function in Excel is used to calculate the probability of a random variable having a value less than or equal to a certain value, based on the F distribution.
• The syntax of the F.DIST function includes inputs for the value, degrees of freedom for the numerator and denominator, and a Boolean value for whether or not to calculate the cumulative
• Examples of using the F.DIST function in Excel include finding probabilities, calculating cumulative distribution, and determining inverse probabilities.
Are you overwhelmed by using Excel’s F.DIST formulae? Don’t worry, we have all the information you need to make your task easier! This article will help you understand how and when to use the F.DIST
formulae to make your work easier.
F.DIST Function in Excel
Gaining knowledge of the F.DIST function in Excel requires comprehension of its definition, syntax, and implementation.
What is the F.DIST function? How does the syntax work? How can it be utilized in Excel? These sub-sections explain the answer and provide an understanding of this formula.
What is F.DIST Function?
F.DIST is a statistical function in Excel used to calculate the cumulative distribution value of a random variable. It belongs to the family of functions that help in probability distribution
calculations. This function can be used for continuous variable distributions such as beta, normal and student-t.
In contrast to F.INV, which calculates an inverse distribution, F.DIST provides results for a forward cumulative distribution. With its ability to handle different distributions such as base, beta,
and Fisher-Snedecor’s F-distribution, it is useful for solving real-life mathematical problems related to investments or quality control.
One important thing to note about F.DIST is that it requires degrees of freedom (df) values, which significantly impact results. The higher the df value assigned for computations, the more accurate
and precise will be the results when calculating probabilities.
Did you know that F-Distribution plays a vital role in ANOVA (Analysis of Variance)? Statistical researchers use ANOVA for testing differences among group means while incorporating probability theory
principals into their analytical work. By using ANOVA with F-Distribution within Excel’s Statistical Functions library increases proficiency without any complex coding involved.
Get ready to embrace your inner maths nerd as we dive into the syntax of F.DIST function in Excel.
Syntax of F.DIST Function
The F.DIST function in Excel is used to calculate the cumulative distribution of a random variable. The syntax for this function involves using four parameters – x, degree_freedom1, degree_freedom2,
X refers to the value at which we want to evaluate the distribution, degree_freedom1 and degree_freedom2 are the degrees of freedom for the numerator and denominator respectively, and cumulative is a
logical value that determines whether we want to calculate the cumulative distribution or not.
To use this function effectively, it’s important to understand how to properly input values for each parameter. First, be sure to enter numerical values for x and both degrees of freedom.
Additionally, it’s crucial to set the cumulative parameter as either TRUE or FALSE – this will determine whether you’re calculating a one-tailed or two-tailed distribution.
When using the F.DIST function in Excel, take care when choosing between one-tailed and two-tailed distributions. If you’re unsure about which type of distribution is appropriate for your needs,
consult an expert or professional statistician for guidance.
In summary, understanding the syntax and usage of F.DIST in Excel is essential for accurate statistical analysis. By following these guidelines and seeking additional assistance as needed, you can
maximize your accuracy and confidence in working with this powerful tool.
Move over fortune tellers, F.DIST function in Excel can predict your future success with just a few clicks.
How to Use F.DIST Function in Excel
F.DIST Function in Excel is a crucial tool for statistical calculations. It helps find the probability of a random variable being less than or equal to a specified value, using the F distribution. To
use this function effectively, follow the steps below:
1. Select an empty cell to input the formula.
2. In the cell, type =F.DIST(value, degrees of freedom 1, degrees of freedom 2, cumulative), where value refers to the data point you want to evaluate and cumulative indicates if you want to
calculate a cumulative distribution or not.
3. Input appropriate values for degrees of freedom 1 and 2.
4. Press enter to analyze your data.
To improve your usage skills further, keep in mind that F.DIST Range should be between zero and one, inclusive of these values only.
Pro Tip – Always remember that this function estimates probabilities associated with one sided tests by default; calculation for two-sided tests requires slight modifications in syntax as well as
Get ready to see some F.DIST-urbing examples of statistical distribution in Excel.
Examples of F.DIST Function
Let’s explore the F.DIST function! We will look at three sub-sections. Firstly, finding probability. Secondly, getting cumulative distribution. Lastly, calculating inverse probability. Let’s dig in
and understand how to use F.DIST!
Example 1: Finding Probability using F.DIST Function
Finding probability using the F.DIST function in Excel requires the correct implementation of the formula. This is how to find probability using the F.DIST function:
1. First, select an empty cell and input ‘=F.DIST(x,deg_freedom,cumulative)’
2. Next, replace ‘x’ with a continuous random variable and ‘deg_freedom’ with degrees of freedom
3. Finally, set the cumulative argument to ‘TRUE’ for a cumulative distribution or ‘FALSE’ for a probability density function.
In addition, it is essential to ensure that all arguments are numeric values and that proper syntax is used when referring to cells.
It is said that Excel has more than 475 formulas that we can use. The F.DIST function is one such tool. Invented as recently as 2003 by Microsoft developers; this function efficiently finds the
left-tail (or lower-tail) value of the F-distribution without considering negative values.
Get ready to see the distribution of excitement on people’s faces when they realize you know how to use F.DIST function like a pro in Excel.
Example 2: Using F.DIST Function to Get Cumulative Distribution
The F.DIST function in Excel is used to calculate the cumulative probability of a random variable. To show you how this works, we’ll walk through an example that demonstrates using F.DIST to get
cumulative distribution.
1. First, open your Excel spreadsheet and click on the cell where you want to display the result.
2. In this cell, type the formula: =F.DIST(x, α, β, TRUE)
3. Replace x with your random variable’s value for which you want to find the cumulative probability.
4. Replace α and β with shape parameters that correspond to the distribution of your data set.
5. Finally, set TRUE for the last argument to produce a cumulative distribution value.
This five-step process shows how using F.DIST function makes it simple to calculate accurate results for any given data set.
What’s unique about using the F.DIST function to calculate a cumulative distribution is that it can handle many types of distributions like normal or binomial and offer more flexibility in choosing
shape parameters.
Fun fact: The widespread use of Microsoft Excel has made statistical computation easy for all levels of users from beginner reporters to statistical researchers (Forbes Magazine).
Finally, a mathematical function that can tell us the probability of our ex texting us back – and it’s called F.DIST!
Example 3: Calculating Inverse Probability using F.DIST Function
To calculate inverse probability using the F.DIST function, follow these steps:
1. Have a value ready that represents probability, say p.
2. Select the degree of freedom or df and decide the type of F distribution whether it’s Cumulative or Non-Cumulative.
3. Invoke the F.DIST function with input variables: p, df as Num and cumulative(bool).
4. Feed in values for p and df obtained in Steps 1 & 2 respectively.
5. Interpret the output to get probability associated with x.
A typical use case scenario for F.DIST is when we are looking to compute critical values associated with any underlying distribution. For this purpose, we can use F.DIST to find out at which point in
the distribution curve our certain level of significance lies.
Pro Tip: Ensure that data is correctly entered into cells referencing a function. Wrong data input could cause erroneous results.
Five Facts About F.DIST: Excel Formulae Explained:
• ✅ F.DIST is an Excel function that calculates the cumulative distribution function of the F-distribution, which is commonly used in statistical analysis. (Source: Excel Easy)
• ✅ The F-distribution is a continuous probability distribution that arises in the analysis of variances and the testing of statistical hypotheses. (Source: Investopedia)
• ✅ The F.DIST function takes three arguments: the input value, the degrees of freedom of the numerator, and the degrees of freedom of the denominator. (Source: ExcelJet)
• ✅ The F.DIST function returns the probability that an F-statistic is less than or equal to the input value. (Source: Corporate Finance Institute)
• ✅ The F.DIST function can be useful in hypothesis testing, confidence interval estimation, and other statistical applications. (Source: Microsoft Office Support)
FAQs about F.Dist: Excel Formulae Explained
What is F.DIST?
F.DIST is an Excel function that calculates the cumulative distribution function (CDF) of a given value using the F-distribution. It returns the probability that an F-value is less than or equal to a
certain value.
How do I use the F.DIST formula in Excel?
To use the F.DIST formula in Excel, start by selecting a cell where you want to display the result. Then, type “=F.DIST(x, degrees_freedom1, degrees_freedom2, cumulative)” into the formula bar,
replacing “x” with the value for which you want to find the probability, “degrees_freedom1” with the numerator degrees of freedom, “degrees_freedom2” with the denominator degrees of freedom, and
“cumulative” with either “TRUE” or “FALSE” to indicate whether you want to calculate the cumulative distribution function or the probability density function, respectively.
What are degrees of freedom in F.DIST formula?
Degrees of freedom (df) in the F.DIST formula represent the number of observations used to estimate a statistic. In the context of the F-distribution, there are two degrees of freedom: numerator
(df1) and denominator (df2). Numenator refers to the sample from which we calculate the variance, and denominator refers to the sample from which we calculate the denominator.
What does the F.DIST formula return?
The F.DIST formula returns the probability that an F-value is less than or equal to a certain value. This probability represents the area under the F-distribution curve to the left of the specified
value. If the value you specify is higher than the maximum value from the F-distribution, the F.DIST formula will return the value 1.0.
How do I interpret F.DIST result?
The F.DIST result represents the probability that an F-value is less than or equal to a certain value. A higher probability indicates that the F-value is more likely to occur, while a lower
probability indicates that the F-value is less likely to occur. You can use the F.DIST formula to determine critical values or p-values for hypothesis testing and confidence intervals.
Can F.DIST function be used for non-parametric tests?
No, the F.DIST function cannot be used for non-parametric tests as it requires the assumption of a continuous, normally distributed population. Non-parametric tests do not rely on this assumption and
therefore require different statistical procedures. | {"url":"https://exceladept.com/f-dist-excel-formulae-explained/","timestamp":"2024-11-07T06:07:17Z","content_type":"text/html","content_length":"66168","record_id":"<urn:uuid:b132b254-4ee2-40d4-a88d-4178f638d6b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00154.warc.gz"} |
2008 Paper 9 Question 11
Digital Signal Processing
(a) A radio system outputs signals with frequency components only in the range
2.5 MHz to 3.5 MHz. The analog-to-digital converter that you want to use
to digitise such signals can be operated at sampling frequencies that are an
integer multiple of 1 MHz. What is the lowest sampling frequency that you
can use without destroying information through aliasing?
[5 marks]
(b) Consider a digital filter with an impulse response for which the z-transform is
H(z) =
(z + 1)2
(z − 0.7 − 0.7j)(z − 0.7 + 0.7j)
(i ) Draw the location of zeros and poles of this function in relation to the
complex unit circle.
[2 marks]
(ii ) If this filter is operated at a sampling frequency of 48 kHz, which
(approximate) input frequency will experience the lowest attenuation?
[2 marks]
(iii ) Draw a direct form I block-diagram representation of this digital filter.
[5 marks]
(c) Make the following statements correct by changing one word or number in
each case. (Negating the sentence is not sufficient.)
(i ) Statistical independence implies negative covariance.
(ii ) Group 3 MH fax code uses a form of arithmetic coding.
(iii ) Steven’s law states that rational scales follow a logarithmic law.
(iv ) The Karhunen–Loève transform is commonly approximated by the
(v ) 40 dB corresponds to an 80× increase in voltage.
(vi ) The human ear has about 480 critical bands.
[6 marks] | {"url":"https://studylib.net/doc/18700636/2008-paper-9-question-11","timestamp":"2024-11-03T15:42:39Z","content_type":"text/html","content_length":"54652","record_id":"<urn:uuid:f0fceb84-b9db-479c-96d3-1f31faf7cf5c>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00746.warc.gz"} |
Bearing Word Problems Worksheet
Problem 1 :
Two ships leave a harbor at the same time. One ship travels on a bearing of S12°W at 14 miles per hour. The other ship travels on a bearing of N75°E at 10 miles per hour. How far apart will the ships
be after three hours? Round to the nearest tenth of a mile.
Problem 2 :
A plane leaves airport A and travels 580 miles to airport B on a bearing of N34°E. The plane later leaves airport B and travels to airport C 400 miles away on a bearing of S74°E. Find the distance
from airport A to airport C to the nearest tenth of a mile.
Problem 3 :
You are on a fishing boat that leaves its pier and heads east. After traveling for 25 miles, there is a report warning of rough seas directly south. The captain turns the boat and follows a bearing
of S40°W for 13.5 miles.
a. At this time, how far are you from the boat’s pier? Round to the nearest tenth of a mile.
b. What bearing could the boat have originally taken to arrive at this spot?
Problem 4 :
You are on a fishing boat that leaves its pier and heads east. After traveling for 30 miles, there is a report warning of rough seas directly south. The captain turns the boat and follows a bearing
of S45°W for 12 miles.
a. At this time, how far are you from the boat’s pier? Round to the nearest tenth of a mile.
b. What bearing could the boat have originally taken to arrive at this spot?
Problem 5 :
Two airplanes leave an airport at the same time on different runways. One flies on a bearing of N66°W at 325 miles per hour. The other airplane flies on a bearing of S26°W at 300 miles per hour. How
far apart will the airplanes be after two hours?
Answer Key
1) So, the distance between two ships after 3 hours is 61.68 approximately 62 miles.
2) Distance between airport A to C is 800 miles approximately.
At this time, i am 19.3 miles away from the boat’s pier.
b) To boat should take N40°W to reach the spot.
4) a) From boat pier, i will be there are 23.13 miles distance.
b) = 111.6
5) After two hours, the planes are approximately 869 miles apart. | {"url":"https://www.intellectualmath.com/bearing-word-problems-worksheet.html","timestamp":"2024-11-08T17:13:31Z","content_type":"text/html","content_length":"22877","record_id":"<urn:uuid:7cbede2d-2ab6-42fc-b21e-c1ba9a0b67b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00613.warc.gz"} |
Discrete warping restraint – ConsteelConsteel
Knowledge base Discrete warping restraint June 11, 2021 5 min read
Discrete warping restraint
Theoretical background
According to the beam-column theory, two types of torsional effects exist.
Saint-Venant torsional component
Some closed thin-walled cross-sections produce only uniform St. Venant torsion if subjected to torsion. For these, only shear stress τ[t ]occurs.
Fig. 1: rotated section [1.]
The non-uniform torsional component
Open cross-sections might produce also normal stresses as a result of torsion.[1.]
Fig. 2: effect of the warping in a thin-walled open section [1.]
Warping causes in-plane bending moments in the flanges. From the bending moment arise both shear and normal stresses as it can be seen in Fig. 2 above.
Discrete warping restraint
The load-bearing capacity of a thin-walled open section against lateral-torsional buckling can be increased by improving the section’s warping stiffness. This can be done by adding additional
stiffeners to the section at the right locations, which will reduce the relative rotation between the flanges due to the torsional stiffness of this stiffener. In Consteel, such stiffener can be
added to a Superbeam using the special Stiffener tool. Consteel will automatically create a warping support in the position of the stiffener, the stiffness of which is calculated using the formulas
below. Of course, warping support can also be defined manually by specifying the correct stiffness value, calculated with the same formulas (see literature [3]).
The following types of stiffeners can be used:
• Web stiffeners
• T - stiffener
• L - stiffener
• Box stiffener
• Channel –stiffener
The general formula which can be used to determine the stiffness of the discrete warping restraint is the following:
R[ω] = the stiffness of the discrete warping restraint
G = shear modulus
GI[t] = the Saint-Venan torsional constant
h = height of the stiffener
Effect of the different stiffener types
Web stiffener
b = width of the web stiffener [mm]
t = thickness of the web stiffener [mm]
h = height of the web stiffener [mm]
Fig. 3: web stiffener
T - stiffener
b[1] = width of the battens [mm]
t[1] = thickness of the battens [mm]
b[2] = width of the web stiffener [mm]
t[2] = thickness of the web stiffener [mm]
h = height of the web stiffener [mm]
Fig. 4: T–stiffener
b = width of the L-section [mm]
t = thickness of the L-section [mm]
h = height of the L-section [mm]
Fig. 5: L–stiffener
Channel stiffener
b[1] = width of channel web [mm]
t[1] = thickness of channel web [mm]
b[2] = width of channel flange [mm]
t[2] = thickness of channel flange [mm]
h = height of the web stiffener [mm]
Fig. 6: Channel stiffener
Numerical example
The following example will show the increase of the lateral-torsional buckling resistance of a simple supported structural beam strengthened with a box stiffeners. The effect of such additional
plates can be clearly visible when shell finite elements are used.
Shell model
Fig. 7 shows a simple fork supported structural member with welded cross-section modeled with shell finite elements and subjected to a uniform load along the member length acting at the level of the
top flange.
Table 1. and Table 2. contain the geometric parameters and material properties of the double symmetric I section. The total length of the beam member is 5000 mm, the eccentricity of the line load is
150 mm in direction z.
Fig. 7: simple supported, double symmetric structural member modeled by shell elements
Name Dimension Value
Width of the top Flange [mm] 200
Thickness of the top Flange [mm] 10
Web height [mm] 300
Web thickness [mm] 10
Width of the bottom Flange [mm] 200
Thickness of the bottom Flange [mm] 10
Table 1: geometric parameters
Name Dimension Value
Elastic modulus [N/mm^2] 200
Poisson ratio [-] 10
Yield strength [N/mm^2] 300
Table 2: material properties
Box stiffener
The box stiffeners are located near the supports as can be seen in Fig. 8. Table 3. contains the geometric parameters of the box stiffeners.
Fig. 8: the structural shell member with added box stiffeners
Name Dimension Value
Width of the web stiffener [mm] 100
Thickness of the battens [mm] 100
Total width of the box stiffener [mm] 200
Height of the plates [mm] 300
Thickness of the plates [mm] 10
Table 3: geometric parameters of the box stiffeners
7DOF beam model
The same effect in a model using 7DOF beam finite elements can be obtained when discrete warping spring supports are defined at the location of the box stiffeners.
Fig. 9: beam member supported with fork supports and loaded with eccentric uniform load
Discrete warping stiffness calculated by hand
Log in to view this content
Online service access and support options are based on subscription plans. Log in to view this content or contact our sales department to upgrade your subscription. If you haven’t tried Consteel yet,
try for free and get Pro access to our learning materials for 30 days! | {"url":"https://consteelsoftware.com/knowledgebase/discrete-warping-restraint/","timestamp":"2024-11-10T22:24:58Z","content_type":"text/html","content_length":"77076","record_id":"<urn:uuid:b4faca0e-6a1d-43da-9ab9-d3399ddeeec1>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00698.warc.gz"} |
Finite Antenna Array with Non-Linear Spacing
Calculate the radiation pattern for an array of arbitrarily placed pin-fed patch antennas. Use the finite array tool to construct the array and the numerical Green's function method (DGFM) to
minimize computational resources.
Figure 1. A 3D view of the finite antenna array with far field pattern in POSTFEKO | {"url":"https://2021.help.altair.com/2021.1/feko/topics/feko/example_guide/antenna_synthesis_analysis/finite_array_intro_feko_t.htm","timestamp":"2024-11-04T07:27:51Z","content_type":"application/xhtml+xml","content_length":"72999","record_id":"<urn:uuid:73a2ad27-9fc0-4afb-a354-88a11b3f909a>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00437.warc.gz"} |
Volume 1, Issue 1, 2020, Pages 41 - 43
Allometric Laws in Modular Dynamics: The Bauplan of Ontogenesis
Peter L. Antonelli^1, Solange F. Rutz^2^, *
Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Canada
Department of Mathematics, Federal University of Pernambuco, Recife, Brazil
Article History
Received 19 September 2019
Accepted 4 November 2019
Available Online 2 March 2020
In recent decades, a resurgence of allometry in ecology and its associated scaling laws has been observed, going under the name macroecology. It is reasonable to think that the plethora of
current works on experimental physiology using allometry is a continuation of the tradition of searching for the “Holy Grail” or Bauplan, the foundation of organic form and metabolic function.
The project our group focused on over several decades is the development of a corpus of geometric techniques, especially Finsler differential geometry, for the study of systems of second order
ordinary differential equations called Analytical Modular Dynamics, which seeks to describe interactions between cell populations of various organs. Thus, the Huxley/Needham Law in allometry
becomes a consequence of metabolism and physiological interactions. The models obtained contrast strongly with Riemannian theory. The geodesic coefficients for our example depend only on the x
-variables, as in all Riemannian geometries, but, is true in Finsler theory only for Berwald spaces.
© 2020 The Authors. Published by Atlantis Press SARL.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).
1. INTRODUCTION: THE RISE OF ALLOMETRY
Recently, there has been a flurry of excitement in ecology concerning the resurgence of allometry and its associated scaling laws going under the name macroecology [1]. The 20th century works on
allometry by Huxley [2], Needham [3], Laird [4], and Medawar [5], have formed a basis for ontogenic studies of large numbers of fauna, which ultimately has informed this recent interest. It is
reasonable to think that the plethora of current works on experimental physiology using allometry is a continuation of the tradition of searching for the “Holy Grail” or Bauplan, the foundation of
organic form and metabolic function.
Sir Joseph Needham called his discovery of allometry in his experimental studies of growing animal embryos, a “chemical ground plan” for development [3]. Also, Sir Julian Huxley, father of the
Neo-Darwinian paradigm, focused on allometric descriptions of morphology in animals, especially invertebrates [2], and Laird [4] proved the ubiquity of Gompertz growth curves for organ biomasses in
large numbers of vertebrate species. She discovered that virtually all organ biomasses of a given individual have the same Gompertz rate constant. Although these are different for different
individuals, they are characteristic of specific species. She concluded that the Huxley/Needham straight line allometric law holds for many sizes and kinds of vertebrate individuals.
Also in plants allometry is a major topic. Allometry in Plants: The Scaling of Form and Process [6], presents an important engineering perspective on a “design of plants” based on allometry between
their multitudes of organs. Such studies can be seen to relate historically to the 19th century work on the morphology of plants by von Goethe [7] with his botanist contemporary, de Candolle and
Agnes’s work in the 1950’s [8]. Application in plants was slow. Yet, allometry was used in American forestry as a method for estimating crown-biomass from trunk diameter in stands of trees, by the
1940’s [9]. Before that, allometry with respect to scaling and as differential growth was discussed in On Growth and Form, by Thompson [10].
Although, Harper’s 1967 magnum opus [11], on plant ecology brought flora into mathematical and quantitative ecology in a way comparable to what had already been achieved for fauna, no special role
had been singled out for allometric ideas. However, Harper in 1976 wrote “it appears therefore that the 3/2 thinning law describes an upper limiting condition which may not be exceeded by any
combination of surviving plant numbers and weights” [12]. In Harper’s concept of plants as clonal organisms, the modular unit or phytomer, is a piece of stem, a bud and a subtending leaf and this is
repeated throughout the development and lifetime of plants. The individual phytomers may, in the vegetative phase, consists of vegetative leaves, whereas in the generative phase petals and stamen can
be considered as variations on leaves. This great finding of von Goethe has been corroborated by molecular biology.
In the past two decades, allometry has been an active field of research in particular because of the West–Brown–Enquist model [1,13]. Moreover, datasets have become increasingly large, also in
plants. In a study on scaling relationships between leaf area and leaf shape in 12 species of Rosaceae, more than 3000 leaves were sampled, and for each leaf 500 data points were determined [14].
If we think of allometry as simply a collection of straight lines obtained by least squares fitting of pairs of data points on morphology, physiology, or ecology for individuals, or populations, with
a single regression line for each data pair, then allometry becomes a sort of species-specific blue-print of internal architecture, or of external architecture as observed in plant phyllotaxis [15],
the forms of flowers or in seashells [16]. The method has been applied for decades to non-biological objects. There is for instance, the famous “golden ratio” approach to fine art and architecture [
17]. There are also the lesser known Gutenberg–Richter Law for the frequency distribution of earthquakes, the Pareto Law for the distribution of incomes among individuals, the so-called Zipf’s Law
for the sizes of cities and the Kleiber Law for the basal metabolic rate in individual animals, especially humans [18].
The project our group focused on over several decades is the development of a corpus of geometric techniques, especially Finsler differential geometry, for the study of systems of second order
ordinary differential equations (SODE’s) called Analytical Modular Dynamics (AMD), which seeks to describe interactions between cell populations of various organs, each producing hormones, x’s,
effecting the set of organs in an individual during growth and development, and which exhibit the Huxley/Needham allometric law between the x’s produced. Thus, the Huxley/Needham Law becomes a
consequence of metabolism and physiological interactions describing the dynamics of hormone production in the different sets of modular units (i.e., cells of organs). We have also developed a theory
of Brownian motion in Finsler geometry in order to model internal noise during development [19].
Readers who are not familiar with Finsler geometry may recall the helpful dictum, “Finsler geometry is Riemannian geometry without the quadratic restriction”, uttered by the great geometer, Chern, at
the first AMS conference on Finsler Geometry, in 1995, in Seattle, Washington. It refers to the scalar product on tangent spaces of Riemannian manifolds being allowed relaxation from this quadratic
condition to become a norm on tangent spaces of Finsler manifolds.
One important finding in Finsler science is the equivalency to Hilbert’s 4th problem, that of classifying the Finsler geometries having straight lines as shortest distances between two points, where
straight lines are allometries holding globally [20–22]. The mathematician, Berwald, founder of Finsler geometry (along with Cartan and Finsler), is credited with solving this famous problem in
two-dimension, with the condition that the geodesic equations be quadratic, called nowadays, geodesics of Berwald space [23]. The general case remains unsolved to this day.
One proceeds in AMD by modelling a dictum from 19th century Russian botanists studying lichen symbiosis [24]. In the deep evolutionary past of lichens, the algal and fungal partners interacted
ecologically and gradually that interaction became more and more integrated due to genome modifications, thereby stabilizing chemical exchanges [24]. Following the early work of Volterra, Gause, Witt
and Lotka, it is natural to try constant coefficient quadratic equations to model this. But this must be coupled with chemical production, so that over time, these coefficients become dependent on
the products the alga and fungi produce. Furthermore, the symbiosis must have energy constraints and so SODE candidates must be Euler–Lagrange equations for a cost function, F, depending on x, dx,
and t.
Furthermore, the cost must be assumed to be first degree positively homogeneous in dx (just the norm condition mentioned above) so that the total cost over a time interval will be independent of how
time is measured. Moreover, if one assumes each partner reproduces its modular units at nearly the same rate (so each kind of cell is never isolated from the other), reparametrization of production
curves with S, with dS = F (x, dx), eliminates, t. In this way the Euler–Lagrange equations become geodesics of the Finsler geometry defined by F. If the assumption of quadratic F-geodesics (i.e.,
geodesics of Berwald spaces) is adhered to the two-dimensional Finsler geometries possible are essentially of three types and when the geodesic coefficients are all constants, are described by the
theorem known as the Finsler Gate [19–21; Appendix]. Such constant connection Berwald spaces have ecological meaning and broad applicability in ecology, evolution, physiology and epidemiology [20–23,
Among the three Berwald types one has an allometric Bauplan, i.e., has geodesics that are straight lines as in Hilbert’s fourth problem mentioned above. To briefly describe Berwald’s idea, we write
F(x,dxdt)=12[N1H1+N2H2]whereH1=FN1,andH2=FN2 (1)
Then, taking first order Taylor expansions by N^1/N^2 of H[1] and by N^2/N^1 of H[2], we arrive at the symmetric first order approximation of the cost F:
dSdt =N12[P0(x)+P1(x)N1N2]+N22[Q0(x)+Q1(x)N2N1] (2)
If we stipulate that P[0] = 4Z(x), P[1] = 2, Q[0] = 2Z^2(x) and Q[1] = 0, the F-geodesics conform to the Huxley/Needham allometric law, provided x = (x^1, x^2) are interpreted as log biomasses and Z
(x) is a smooth solution of Berwald’s equation for projective flatness, Z ∂[1](Z) = ∂[2]Z [25]. Here the subscripts indicate partial differentiation by either x^1 or x^2. Rewriting, we have the
Kropina-type Finsler metric expression
dSdt=(N1+Z(x)N2)2N2 (3)
There are many Z(x) that provide solutions and have positive Berwald-Gauss curvature scalar indicating Jacobi stability of solution trajectories [26,27]. A simple Riemannian geometric example of this
type of stability is great circle arcs on a sphere. They oscillate back and forth crossing any chosen arc at the poles. Geodesics on a trumpet-shaped surface diverge away and so the system, having
negative curvature is Jacobi unstable. Finally, along any solution curve of the Kropina metric the Huxley/Needham law holds true provided Berwald’s projective equation holds.
4. CONCLUSION
We have used the above procedure to obtain interaction schemes whose Finslerian cost functionals depend explicitly on the ratios N^1/N^2 while maintaining the Huxley/Needham law along solutions.
The Euler–Lagrange curves are geodesics of the Kropina metric above [27]. The curvature scalar is highly variable. This contrasts strongly with Riemannian theory where the projective geometries must
have constant curvature. The geodesic coefficients for our example depend only on the x-variables. Of course, this holds for all Riemannian geometries, but, is true in Finsler theory only for Berwald
spaces [23].
The authors declare they have no conflicts of interest.
JS Huxley, Problems of relative growth, 2nd ed, Dover Press, New York, 1972.
JA Needham, Heterogony, a chemical ground-plan for development, Biol Rev, Vol. 9, 1934, pp. 79-109.
PB Medawar, The growth, growth energy, and ageing of the chicken’s heart, Proc R Soc Lond B, Vol. 129, 1940, pp. 332-55.
KJ Niklas, Plant allometry: the scaling of form and process, University of Chicago Press, Chicago, 1994.
JW von Goethe, Goethe’s botany: the metamorphosis of plants, Sacred Science Library, 1790.
A de Candolle and A Agnes, The natural philosophy of plant form, Cambridge University Press, 1950.
J Kittredge, Estimation of the amount of foliage of trees and stands, J Forestry, Vol. 42, 1944, pp. 905-12.
DW Thompson, On growth and form: the complete revised Edition, Dover Press, 1942.
JL Harper, Population biology of plants, Academic Press, London, 1977, pp. 892.
R May and AR McLean (editors), Theoretical ecology: principles and applications, 3rd ed, Oxford University Press, 2007.
GB West, Scale: the universal laws of growth, innovation, sustainability, and the pace of life in organisms, cities, economies, and companies, Penguin Press, 2017.
RV Jean, Phyllotaxis: a systemic study in plant morphogenesis, Cambridge University Press, New York, 1994, pp. 386.
H Meinhardt, The algorithmic beauty of sea shells, Springer Science & Business Media, 2009.
RA Dunlap, The golden ratio and Fibonacci numbers, World Scientific Press, 1997.
PL Antonelli and CG Leandro, Phenotypic plasticity for allometric laws of ontogeny from cellular interactions, Univ J Appl Math Comput, Vol. 3, 2015, pp. 37-53.
PL Antonelli, CG Leandro, and SF Rutz, Stochastic canalization of phenotypic deformations during ontogenesis, Int J Appl Math, Vol. 29, 2016, pp. 655-71.
LN Khakhina, Concepts of symbiogenesis: a historical and critical study of the research of Russian botanists, Yale University Press, 1992.
Cite This Article
TY - JOUR
AU - Peter L. Antonelli
AU - Solange F. Rutz
PY - 2020
DA - 2020/03/02
TI - Allometric Laws in Modular Dynamics: The Bauplan of Ontogenesis
JO - Growth and Form
SP - 41
EP - 43
VL - 1
IS - 1
SN - 2589-8426
UR - https://doi.org/10.2991/gaf.k.200124.002
DO - https://doi.org/10.2991/gaf.k.200124.002
ID - Antonelli2020
ER - | {"url":"https://files.athena-publishing.com/journals/gandf/articles/15/view","timestamp":"2024-11-04T13:47:42Z","content_type":"text/html","content_length":"106404","record_id":"<urn:uuid:69e7757f-a7b9-438e-86a0-8880ca77cab0>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00080.warc.gz"} |
Sparse singularities
Algebraic geometry
Sparse curve singularities, singular loci of resultants, and Vandermonde matrices
Singularities, such as cusps or self-intersections, are points where mathematical objects such as functions or surfaces cease to be well-behaved. Extensively studied, important examples of these
points can, intriguingly, be described by polynomials with indeterminate coefficients. We deduce some general results for singularities of this form, including a formula for the delta invariant, an
index of their complexity. | {"url":"https://lims.ac.uk/paper/sparse-curve-singularities-singular-loci-of-resultants-and-vandermonde-matrices/","timestamp":"2024-11-06T15:19:04Z","content_type":"text/html","content_length":"85318","record_id":"<urn:uuid:2dd53430-09ef-4a54-b18d-420cc7d73b84>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00621.warc.gz"} |
Physics 1 Work and Energy Problems and Solutions
Physics 1: Work and Energy
AllNeeds SolutionEasyMediumHardVideo
The 100 kg box shown below is being pulled along the x-axis by a student. The box slides across a rough surface, and its position $x$ varies with time $t$ according to the equation $x = 0.5t^3 + 2t$
, where $x$ is in meters and $t$ is in seconds.
C. Calculate the net work done on the box in the interval $t$ = 0 to $t$ = 2 s would be greater than, less than, or equal to the answer in part (C). Justify your answer.
A nonlinear spring is compressed various distances x, and the force Frequired to compress it is measured for each distance. The data are shown in the table below.
In an experiment to determine the spring constant of an elastic cord of length 0.60 m, a student hangs the cord from a rod and then attaches a variety of weights to the cord. For each weight, the
student allows the weight to hang in equilibrium and then measures the entire length of the cord. The data are recorded in the table below:
iii. Calculate the maximum speed of the object.
A rubber ball of mass $m$ is dropped from a cliff. As the ball falls, it is subject to air drag (a resistive force caused by the air). The drag force on the ball has a magnitude $bv^2$ , where $b$ is
a constant drag coefficient and $v$ is the instantaneous speed of the ball. The drag coefficient $b$ is directly proportional to the cross-sectional area of the ball and the density of the air and
does not depend on the mass of the ball. As the ball falls, its speed approaches a constant value called the terminal speed.
A. Draw and label all the forces on the ball at some instant before it reaches terminal speed.
B. State whether the magnitude of the acceleration of the ball of mass $m$ increases, decreases, or remains the same as the ball approaches terminal speed. Explain.
C. Write, but do NOT solve, a differential equation for the instantaneous speed $v$ of the ball in terms of time $t$ , the given quantities, and fundamental constants.
D. Determine the terminal speed $v_t$ in terms of the given quantities and fundamental constants.
E. Determine the energy dissipated by the drag force during the fall if the ball is released at height $h$ and reaches its terminal speed before hitting the ground, in terms of the given quantities
and fundamental constants.
A small sphere is moving at a constant speed in a vertical circle. Which of the following quantities is changing?
i. kinetic energy ii. potential energy iii. momentum
i and ii only
i and iii only
i and ii only
ii only
iii only
ii and iii only
In the system of two blocks and a spring shown below, blocks 1 and 2 are connected by a string that passes over a pulley. The initially unstretched spring connects block 1 to a ridged wall. Block 1
is released from rest, initially slides to the right, and is eventually brought to rest by the spring and the friction on the horizontal surface. Which of the following is true of the energy of the
system during the process?
E. The potential energy lost by block 2 is greater in magnitude than the potential energy gained by the spring
In the situation above, after block 1 comes to rest, the force exerted on the rope must be equal in magnitude to
A. Zero
B. the frictional force on block 1
C. the vector sum of the force on block 1 due to the friction and tension in the spring
D. the sum of the weights of the two blocks
E. Shiang doesn't care, he just wants to sleep
F. the difference in the weights of the two blocks
A rope of length $L$ is attached to a support at point C. A person of mass $m_1$ sits on a ledge at position A holding the other end of the rope so that it is horizontal and taut, as shown below. The
person then drops off the ledge and swings down on the rope toward position B on a lower ledge where an object of mass $m_2$ is at rest. At position B the person grabs hold of the object and
simultaneously lets go of the rope. The person and object then land together in the lake at point D, which is a vertical distance $L$ below position B. Air resistance and the mass of the rope are
negligible. Derive expression for each of the following in terms of $m_1$ , $m_2$ , $L$ , and $g$.
E. The total horizontal displacement x of the person from position A until the person and object land in the water at point D.
A spherical non rotating planet has a radius R and a uniform density $ho$ throughout its volume. Suppose a narrow tunnel were drilled through the planet along one of its diameters as shown in the
figure below, in which a small ball of mass m could move freely under the influence of gravity. Let r be the distance of the ball from the center of the planet. Suppose the ball is dropped into the
tunnel from rest at the planet's surface.
F. Write an equation that could be used to calculate the time it takes the ball to move from point P to the center of the planet.
A solid brass block of mass m slides along a frictionless track when released from rest along the straight section. The circular loop has a radius R.
B. Assume the block is released at height h = 6.0 R, what are the magnitude and direction of the horizontal force competent acting on the block at point Q? | {"url":"https://www.practiceproblems.org/course/Physics_1/Work_and_Energy/1/clwl23om800047b9yazseoyzq","timestamp":"2024-11-11T11:34:37Z","content_type":"text/html","content_length":"100668","record_id":"<urn:uuid:a75ec7bb-31d4-47bb-bb9a-d6deae174b22>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00236.warc.gz"} |
Texas Administrative Code
(a) Introduction. (1) The desire to achieve educational excellence is the driving force behind the Texas essential knowledge and skills for mathematics, guided by the college and career readiness
standards. By embedding statistics, probability, and finance, while focusing on computational thinking, mathematical fluency, and solid understanding, Texas will lead the way in mathematics education
and prepare all Texas students for the challenges they will face in the 21st century. (2) The process standards describe ways in which students are expected to engage in the content. The placement of
the process standards at the beginning of the knowledge and skills listed for each grade and course is intentional. The process standards weave the other knowledge and skills together so that
students may be successful problem solvers and use mathematics efficiently and effectively in daily life. The process standards are integrated at every grade level and course. When possible, students
will apply mathematics to problems arising in everyday life, society, and the workplace. Students will use a problem-solving model that incorporates analyzing given information, formulating a plan or
strategy, determining a solution, justifying the solution, and evaluating the problem-solving process and the reasonableness of the solution. Students will select appropriate tools such as real
objects, manipulatives, algorithms, paper and pencil, and technology and techniques such as mental math, estimation, number sense, and generalization and abstraction to solve problems. Students will
effectively communicate mathematical ideas, reasoning, and their implications using multiple representations such as symbols, diagrams, graphs, computer programs, and language. Students will use
mathematical relationships to generate solutions and make connections and predictions. Students will analyze mathematical relationships to connect and communicate mathematical ideas. Students will
display, explain, or justify mathematical ideas and arguments using precise mathematical language in written or oral communication. (3) The primary focal areas in Grade 6 are number and operations;
proportionality; expressions, equations, and relationships; and measurement and data. Students use concepts, algorithms, and properties of rational numbers to explore mathematical relationships and
to describe increasingly complex situations. Students use concepts of proportionality to explore, develop, and communicate mathematical relationships. Students use algebraic thinking to describe how
a change in one quantity in a relationship results in a change in the other. Students connect verbal, numeric, graphic, and symbolic representations of relationships, including equations and
inequalities. Students use geometric properties and relationships, as well as spatial reasoning, to model and analyze situations and solve problems. Students communicate information about geometric
figures or situations by quantifying attributes, generalize procedures from measurement experiences, and use the procedures to solve problems. Students use appropriate statistics, representations of
data, and reasoning to draw conclusions, evaluate arguments, and make recommendations. While the use of all types of technology is important, the emphasis on algebra readiness skills necessitates the
implementation of graphing technology. (4) Statements that contain the word "including" reference content that must be mastered, while those containing the phrase "such as" are intended as possible
illustrative examples. (b) Knowledge and skills. (1) Mathematical process standards. The student uses mathematical processes to acquire and demonstrate mathematical understanding. The student is
expected to: (A) apply mathematics to problems arising in everyday life, society, and the workplace; (B) use a problem-solving model that incorporates analyzing given information, formulating a plan
or strategy, determining a solution, justifying the solution, and evaluating the problem-solving process and the reasonableness of the solution; (C) select tools, including real objects,
manipulatives, paper and pencil, and technology as appropriate, and techniques, including mental math, estimation, and number sense as appropriate, to solve problems; (D) communicate mathematical
ideas, reasoning, and their implications using multiple representations, including symbols, diagrams, graphs, and language as appropriate; (E) create and use representations to organize, record, and
communicate mathematical ideas; (F) analyze mathematical relationships to connect and communicate mathematical ideas; and (G) display, explain, and justify mathematical ideas and arguments using
precise mathematical language in written or oral communication. (2) Number and operations. The student applies mathematical process standards to represent and use rational numbers in a variety of
forms. The student is expected to: (A) classify whole numbers, integers, and rational numbers using a visual representation such as a Venn diagram to describe relationships between sets of numbers;
(B) identify a number, its opposite, and its absolute value; (C) locate, compare, and order integers and rational numbers using a number line; (D) order a set of rational numbers arising from
mathematical and real-world contexts; and (E) extend representations for division to include fraction notation such as a/b represents the same number as a ÷ b where b ≠ 0. (3) Number and operations.
The student applies mathematical process standards to represent addition, subtraction, multiplication, and division while solving problems and justifying solutions. The student is expected to: (A)
recognize that dividing by a rational number and multiplying by its reciprocal result in equivalent values; (B) determine, with and without computation, whether a quantity is increased or decreased
when multiplied by a fraction, including values greater than or less than one; (C) represent integer operations with concrete models and connect the actions with the models to standardized
algorithms; (D) add, subtract, multiply, and divide integers fluently; and (E) multiply and divide positive rational numbers fluently. (4) Proportionality. The student applies mathematical process
standards to develop an understanding of proportional relationships in problem situations. The student is expected to: (A) compare two rules verbally, numerically, graphically, and symbolically in
the form of y = ax or y = x + a in order to differentiate between additive and multiplicative relationships; (B) apply qualitative and quantitative reasoning to solve prediction and comparison of
real-world problems involving ratios and rates; (C) give examples of ratios as multiplicative comparisons of two quantities describing the same attribute; (D) give examples of rates as the comparison
by division of two quantities having different attributes, including rates as quotients; (E) represent ratios and percents with concrete models, fractions, and decimals; (F) represent benchmark
fractions and percents such as 1%, 10%, 25%, 33 1/3%, and multiples of these values using 10 by 10 grids, strip diagrams, number lines, and numbers; (G) generate equivalent forms of fractions,
decimals, and percents using real-world problems, including problems that involve money; and (H) convert units within a measurement system, including the use of proportions and unit rates. (5)
Proportionality. The student applies mathematical process standards to solve problems involving proportional relationships. The student is expected to: (A) represent mathematical and real-world
problems involving ratios and rates using scale factors, tables, graphs, and proportions; (B) solve real-world problems to find the whole given a part and the percent, to find the part given the
whole and the percent, and to find the percent given the part and the whole, including the use of concrete and pictorial models; and (C) use equivalent fractions, decimals, and percents to show equal
parts of the same whole. (6) Expressions, equations, and relationships. The student applies mathematical process standards to use multiple representations to describe algebraic relationships. The
student is expected to: (A) identify independent and dependent quantities from tables and graphs; (B) write an equation that represents the relationship between independent and dependent quantities
from a table; and Cont'd... | {"url":"https://texreg.sos.state.tx.us/public/readtac$ext.TacPage?sl=T&app=9&p_dir=P&p_rloc=158643&p_tloc=&p_ploc=9974&pg=3&p_tac=&ti=19&pt=2&ch=111&rl=27","timestamp":"2024-11-09T03:14:22Z","content_type":"text/html","content_length":"17401","record_id":"<urn:uuid:996f4ba4-c3dc-4ee6-b562-57cd28139a17>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00836.warc.gz"} |
Short multiplication and short division
Unit 4 – 6 weeks
Primary KS2 Year 5
The PowerPoint file contains slides you can use in the classroom to support each of the learning outcomes for this unit, listed below.
The slides are comprehensively linked to associated pedagogical guidance in theĀ NCETM Primary Mastery Professional Development materials. There are also links to the ready-to-progress criteria
detailed in theĀ DfE Primary Mathematics Guidance 2020.
Learning outcomes
# Title
1 Pupils multiply a two-digit number by a single-digit number using partitioning and representations (no regroups)
2 Pupils multiply a two-digit number by a single-digit number using partitioning and representations (one regroup)
3 Pupils multiply a two-digit number by a single-digit number using partitioning and representations (two regroups)
4 Pupils multiply a two-digit number by a single-digit number using partitioning
5 Pupils multiply a two-digit number by a single-digit number using expanded multiplication (no regroups)
6 Pupils multiply a two-digit number by a single-digit number using short multiplication (no regroups)
7 Pupils multiply a two-digit number by a single-digit number using expanded multiplication (regrouping ones to tens)
8 Pupils multiply a two-digit number by a single-digit number using short multiplication (regrouping ones to tens)
9 Pupils multiply a two-digit number by a single-digit number using expanded multiplication (regrouping tens to hundreds)
10 Pupils multiply a two-digit number by a single-digit number using short multiplication (regrouping tens to hundreds)
11 Pupils multiply a two-digit number by a single-digit number using both expanded and short multiplication (two regroups)
12 Pupils use estimation to support accurate calculation
13 Pupils multiply a three-digit number by a single-digit number using partitioning and representations
14 Pupils multiply a three-digit number by a single-digit number using partitioning
15 Pupils multiply a three-digit number by a single-digit number using expanded and short multiplication (no regroups)
16 Pupils multiply a three-digit number by a single-digit number using expanded and short multiplication (one regroup)
17 Pupils multiply a three-digit number by a single-digit number using expanded and short multiplication (multiple regroups)
18 Pupils use estimation to support accurate calculation
19 Pupils divide a two-digit number by a single-digit number using partitioning and representations (no remainders, no exchanging)
20 Pupils divide a two-digit number by a single-digit number using partitioning and representations (with exchanging)
21 Pupils divide a two-digit number by a single-digit number using partitioning and representations (with exchanging and remainders)
22 Pupils divide a two-digit number by a single-digit number using short division (no exchanging, no remainders)
23 Pupils divide a two-digit number by a single-digit number using short division (with exchanging)
24 Pupils divide a two-digit number by a single-digit number using short division (with exchanging and remainders)
25 Pupils divide a three-digit number by a single-digit number using partitioning and representations (no exchanging, no remainders)
26 Pupils divide a three-digit number by a single-digit number using partitioning and representations (one exchange, no remainders)
27 Pupils divide a three-digit number by a single-digit number using partitioning and representations (with exchanging and remainders)
28 Pupils divide a three-digit number by a single-digit number using short division
29 Pupils divide a three-digit number by a single-digit number using short division (with exchanging and remainders)
30 Pupils solve short division problems accurately when the hundreds digit is smaller than the divisor
31 Pupils will use efficient strategies of division to solve problems
Is this page useful? Yes No
Was this written in plain English? Yes No
Subscribe to our newsletter | {"url":"https://www.ncetm.org.uk/classroom-resources/cp-year-5-unit-4-short-multiplication-and-short-division/","timestamp":"2024-11-14T18:49:43Z","content_type":"text/html","content_length":"50890","record_id":"<urn:uuid:d3fa50ff-a92f-4ad4-9792-a5e18cf6f448>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00656.warc.gz"} |
TouchMath Extend - Lesson 34
Lesson 34: Expanded Multiplication
• Use objects, pictures, drawings, and number lines to understand multiplication
• See multiplication as repeated addition
• Use addition to find the total number of objects and dots in arrays
• Write equations of repeated addition and multiplication
• Apply skip counting and the TouchMath approach for finding products
• Apply vocabulary (factors and products) in explaining multiplication
• Apply the commutative property to multiplication
• Use arrays as a strategy for skip counting
• Use strategies, including repeated addition and skip counting, to find products
• Extend Workbook (Page 34)
• Counters or manipulatives
• Highlighters, markers, or crayons
• TouchMath’s Foam Numerals and TouchPoints (for more concrete)
• Skip counting multiplication songs (multiples of 5) (as needed)
Review prior vocabulary showing a few multiplication problems with visuals (e.g. arrays) and going through each part (factor x factor = product), look at the groups of dots and circle/ring, count the
number of items in each group to determine the factors (3 groups of 5 = 15 total items). Review that multiplication is repeated addition (5+5+5, but also if factors are reversed, the result is same
product (3+3+3+3+3). Optional: review skip counting songs for multiples of 5.
Step 2: Vocab Review (5 min)
Review prior vocabulary: repeated addition, arrays (an arrangement of a set of numbers or objects in rows and columns) products of multiplication problems, and factors (a number that you multiply
with another number to get a product).
Expanding on the warm-up, review several examples of commutative property of multiplication with visuals underneath both examples (5×4 and 4x5) and illustrating the separate repeated addition
problems, with the same products. Use counters and the whiteboard to physically break down this concept in the concrete and then representational. Show a few multiplication problems with multiples of
5 and teach skip counting by one number while touching the TouchPoints on the other number (e.g. if skip counting by 5’s, skip count on the 4 by 5’s in 5x4: 5, 10, 15, 20). Using TouchMath’s Foam
Numerals, show how to skip count on one number (place visual examples with counters underneath, as well).
Step 4: Guided Practice (5 min)
Pull out 100s Chart and multiples of 5 resource pages made in prior lessons, or make a resource page prior to this lesson. In groups, ask students to draw arrays with dots/circles/dots with multiples
of 2’s and 5’s for review (5×4, 6×2, 5×8, 2x9, etc.). Have students circle each group. Ask them, “how many groups of dots/stars did you circle? How many dots/stars are in each group?” Then have them
write down their multiplication sentences/problems next to their arrays (e.g. “We circled 8 groups of dots and there are 5 dots in each group. Therefore, my multiplication sentence is 8×5=42. There
are 42 total dots altogether. This means the product of 5+5+5+5+5+5+5+5 is 40”). Ask what the repeated addition statements would look like (e.g. 5+5+5+5+5+5+5+5 OR 8+8+8+8+8). Have students practice
skip counting by one number while touching the TouchPoints on the other number using TouchMath’s Foam Numerals.
Step 5: Student Practice (5 min)
Go to Student Workbook Page (34). Read the directions at the top of the page. Tell students they will be practicing repeated addition, skip counting, and creating multiplication problems based on the
repeated addition statements. For the first problem, ask students to count how many boxes of 5’s there are at the top of the page or how many 5’s there are in the repeated addition equation right
below it (4). Ask students to skip count 5 four times (5, 10, 15, 20) and then write 20 on the line provided. Next, have students fill out the statements below: There are 4 groups of 5. Then, remind
them this means 4×5, which is the same thing as 5+5+5+5, and the sum of 5+5+5+5 is the same as the product of 4×5 (20). Have students write 20 on the line provided. Repeat these steps for the next
two problems.
To wrap up the lesson, review the learning objectives and core vocabulary words again and ask your students about their experience. | {"url":"https://touchmath.com/extend/lesson-34/","timestamp":"2024-11-03T23:26:11Z","content_type":"text/html","content_length":"135699","record_id":"<urn:uuid:16a47a75-f801-4808-b4d9-1c9db18d7c4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00241.warc.gz"} |
Estimating Likelihood for Lightning Payments to be (in)feasible
Dear fellow Lightning Network developers,
as some of you are already aware (through out of band communication) I am currently developing and finalizing a paper with a mathematical theory of payment channel networks. For the paper I created a
small example that shows how to apply the rather technical and abstract geometric concepts to describe the Lightning Network. I thought this standalone example / application might be of interest for
The following is a summary of an ipython notebook from my Github reopository in which I develop the theory and research paper (Caution: The current latex file of the paper in the repository is a very
early / incomplete and wrong draft. I would not recomand to read it yet. I expect a much more complete and accurate version to be updated later this month):
In prior research we have provided a method of estimating the liquidity in payment channels. This was used to compute success probabilities for payments. The proposed model has several flaws:
1. It assumes the liquidity distributions in channels are independent of each other
2. it uses a uniform distribution to model the uncertainty
The second issue is being addressed through the introduction of bimodal models. However we believe the observed bimodal distributions emerge due to the geometry of payment channel network. This
happens in particular if the first assumption is dropped.
This notebook provides a tiny examples to demonstrate how one can compute the payment success rate between two peers on the lightning network for a payment amount even if no information about the
liquidity distribution is availble. We do however assume that the sender owns more coins that he wishes to send (Though we could easily drop that assumption).
For this we start from the assumption that all feasible wealth distributions (not channel states!) in the given topology are equally likely to occur. As payments are changes in the wealth
distribution we just have to test for all feasible wealth distributions if the change induced by the payment is still a feasible wealth distribution. The fraction of payments for which this is
possible is the expected success probability of a payment.
Math formulation
Let w=(w_1,\dots,w_n) be a wealth vector. Furth let b_1,\dots,b_n \in \mathbb{Z}^n be unit base vectors that span the space of feasible wealth distributions. Then a payment is the new wealth vector
w' which can be computed via the following:
w'= w -amt\cdot b_{src} + amt\cdot b_{dest}
if w' is feasible then the payment was feasible.
Caution this assumes that all feasible wealth distributions are equally likely. This is neither the same as assuming that the liquidity in channels is uniformly distributed nor the same as assuming
that the wealth is equally distributed. The reason is that the network topology makes many wealth distributions infeasible. Thus despite assuming uniformity among the feasible wealth distributions
this assumption is hopefully close enough to match the reality of a skew power law distributed wealth.
Important: The probabilities for a payment to be feasible is not related to attempt probabilities! Instead the probabilties computed with this method estimate weather it is likely that with
sufficient attempts a payment is feasible and thus eventually succesfully being deliverd.
Take the following Network
We get the blue curve that depicts how likely it is that a payment of a certain amount can be sent from any peer to any other peer. In comparison we depicted the likelihood of the min cost flow. Of
course the min cost flow probability has to be lower as the payment could still be feasible while the min cost flow fails:
Let us look at the likelihood of node 0 being able to send 5 coins to node 1. Given our methodology the likelihood that this payment is feasible on this network is: 39.22%
Using the methods of probabilistic pathfinding with independend and uniform liquidity distributions we see:
P1: s=0.00% (0)--- 3---->(1)
P2: s=21.87% (0)--- 7---->(2)---11---->(1)
This means of course node 0 cannot send 5 coins on P1 to node 1 as the path has only one channel with capacity 3.
The likelihood on the send path is 21.87%
The minmum cost flow would however split the payment and attempt to send coin along P1 and 4 coins along P2. This results in a 25% chance to settle this MPP attempt.
If on the other hand node 1 wanted to send 5 coins to node 2 then we get the following likelihoods:
5 coins from (1) to (2) expected success rate = 64.05%
P1: s=58.33% (1)---11---->(2)
P2: s=0.00% (1)--- 3---->(0)--- 7---->(2)
Not both P1 and P2 fail: 58.33%
MCF: s=43.75% 4 to P1 and 1 to P2
The increase makes sense as node 1 and 2 have access to more liquidity and thus should be more likeli to be able to conduct a payment.
We can produce a likelihood that estimates how likely a payment is feasible on the lightning network. We have seen that it is not necessary to estimate the liquidity in channels in order to do so. In
particular there are always feasible states from which the payment is not feasible. (for example the receiving node doesn’t have sufficient inbound liquidity)
Thus Lightning Network payments cannot work in 100% of the time.
Also we have seen that optimally reliable payment flows always have a lower success probability than the probability that the payment is feasible. This is to be expected. However relying purely on
flows and paths it seemed tricky to estimate the feasibility of a payment as one would have to simulate the payment loop.
Of course extending this method to a larger network is a bit more work as testing feasibility and sampling feasible wealth distributions is a bit more tricky (but possible in polynomial time as will
be explained in detail in the paper)
I am curious about your thoughts, feedback or questions.
with kind regards Rene Pickhardt
3 Likes
Thanks for this research, Rene! I was wondering if this is something you imagine being incorporated into user and business software. For example:
• Businessperson Bob typically receives payments up to 0.05 BTC. His node management software occasionally runs a background job that calculates the average likelihood of feasibility of a 0.05 BTC
payment from every node on the network to his node. The node management software also looks at current liquidity advertisements and simulates what would happen if Bob had a channel with the
advertiser, calculating a hypothetical alternative average likelihood of feasibility. If the hypothetical alternative with a new channel has a significantly higher average likelihood of
feasibility, Bob’s node management software automatically accepts the liquidity advertisement and opens the new channel.
• User Alice makes a regular monthly bill payment set up through BOLT12 offers. She’s configures her wallet to start to try paying 5 business days before the due date. The first try and first few
automatic retries don’t succeed. Before her wallet marks the payment attempt as a failure or takes other steps, it checks the likelihood of feasibility. If it’s low but still practical, it will
keep retrying at lengthening intervals for another few hours or days before finally marking the payment attempt as a failure.
1 Like
This is a very interesting use case which I haven’t thought of so far and it certainly could be done in this way. I do have a few additional thoughts on this use case:
1. If a user was interested in the likelihood of feasibility of a payment from all other nodes he might want to look at the histogram of amounts that he could receive from any user. So for a given
random wealth distribution (which can be sampled with this library as explained in this notebook) he could use this method to compute a feasible network state. From here one could use Gomory-Hu
Trees to compute the all pair max flow more efficiently than computing a max flow problem from every user to Bob. Similar to this notebook one could compute the distribution of max flows / min
cuts and have a more precise view. This is because besides the percentile of nodes that are below the amount which Bob wishes to receive one would also get confidence intervals (if for example
this method was repeated for several random wealth distributions). Important: Assuming random liquidity in channels as I have done in the notebook to compute the min cut distribution is probably
not as precise as starting from random wealth distributions.
2. Of course when sampling random wealth distributions Bob could take into account that he ownes x coins in his channels. Furthermore as Bob knows the capacity and state of his channels he could
also know what wealth his peers own at least. putting these constraint to the polytope of wealth distributions from which to sample can be done with the above mentioned library.
3. Similar to 2. Bob could use his local knowledge see the receiving problem as max flow problem with his peers as sinks (assuming he has enough inbound liquidity in those channels) This knowledge
would in particular be taken into account for the liquidity advertisement which he uses in his simulation. (Those are just som engineering / modelling optimizations. I am not sure how much
improvement they will bring)
4. The biggest issue is that with larger lightning networks it will be always harder to successfully sample feasible wealth distributions from where to start the above described computation. That is
because it seems that the likelihood for sampled Bitcoin wealth distributions to also be feasible on the lightning network declines when the network grows in size. Furthermore the test of
feasibility is also rather costly. As discussed (out of band) with Stefan Richter (who was first to state this in our conversation) testing if a sampled wealth distribution has a feasible state
cannot only be solved through linear integer programming but it boils down to solving a particular multi source multi sink max flow problem.
Yes absolutely. Knowing the likelihood of feasibility for a payment and being able to decide how often to attempt a payment and when to give up to make an on chain transaction to either pay someone
on chain or open a new channel is exactly the one application I had in mind.
1 Like
First of all, I want to congratulate @renepickhardt for pioneering this promising approach.
If I am allowed a little nitpicking that doesn’t in any way detract from the achievement, I believe that comparing the min cost flow probability (over the probability space of independently uniform
channel balances) and the probability of feasibility calculated here (over the probability space of uniformly chosing a feasible wealth distribution) doesn’t make much sense, because these are two
different models that give incomparable results.
In particular, if I am not mistaken, this observation here is not true in general:
To see this, consider the example network that Rene gave, but make it a little easier to compute by setting the capacity of channels 01 and 02 to 1, and 12 to 2. Then there are only 12 possible
overall channel states. Of these, there are 3 states that lead to the wealth distribution 121 (0 has 1, 1 has 2, 2 has 1 coin), and 3 states that lead to 112, so overall there are 8 feasible wealth
Now let’s ask ourselves how probable is it that node 1 can pay 1 coin to node 0 as well as 1 coin to node 2. This is feasible whenever node 1 has at least 2 coins, node 0 has at most 1 and node 2 has
at most 2. There are exactly 3 wealth distributions that apply: 121,022, and 031. So this is feasible with probability 3/8.
Now observe that there is only one flow that can achieve these two payments at the same time: 1 coin each directly from 1 to 0 as well as from 1 to 2. This is the min cost (=most probable) flow, and
there are 5 network states where it will succeed (the same as the ones coming from the feasible wealth distributions above, but 121 is counted 3 times), So its success probability is 5/12, which is
strictly greater than 3/8.
So the min cost flow in this example is more probable than the feasibility of these two payments, which makes no sense but is an artefact of the different probability spaces that are compared.
Thank you @stefanwouldgo for your thoughts.
1. I believe you know that I agree that the probability space of feasible payments and the space of likelihood for min costs flows are different.
2. I agree that my statement that the MCF probability has to be lower than the likelihood for a payment to be feasible this does not necessarily have to be true when various probability space are
taken into account and I was probably a bit sloppy in my formulation (In which I only meant to be a requirement for theories to be sound)
That being said. I don’t think your counter example is accurate:
If I have counted correctly we have 10 feasible wealth distributions (instead of 8):
| `node0` | `node1` | `node2` |
| 0 | 1 | 3 |
| 0 | 2 | 2 |
| 0 | 3 | 1 |
| 1 | 0 | 3 |
| 1 | 1 | 2 |
| 1 | 2 | 1 |
| 1 | 3 | 0 |
| 2 | 0 | 2 |
| 2 | 1 | 1 |
| 2 | 2 | 0 |
What about the distribution 130?
As seen in the following pictures I believe there are 4 feasible wealth distributions that allow node 1 to make a payment of 1 coin to both node 0 and node 2
→ thus we have 4 different wealth distributions that are feasible.
By the way, the only ones that occur twice and (and not three times) are: (1,1,2) and (1,2,1)
(Note: given that states with the same wealth distribution are created through circulations on the State Network and given the fact that the lowest capacity is 1 it makes sense that we can only find
at most two different states with the same wealth distribution)
Thus accordingly in your example the likelihood that the payment is feasible would be 4/10 and not 3/8.
The mcf computation in my comparison does not start from network states! It takes the assumption from our paper in which we assume liquidity is uniformly independently distributed in each channel.
Additinally there are actually two flows that can achieve this payment that u describe:
1. The flow that you described (node1 sends 1 coin to node 0 AND node 1 sends 1 coin to node 2
2. The flow in which node 1 sends 2 coins to node 2 and then node 2 sends 1 coin to node 0. (Flow conservation is fulfilled here at node 2 because its demand is 1 and not 0)
according to the uniform probability distributions we get the following probabilities:
• Flow 1: 1/2*2/3 = 1/3 < 4/10
• Flow 2: 1/3*1/2 = 1/6 < 4/10
So indeed the flow that you picked was the MCF but according to the model that I was using to make the computation the probability of the attempt is 33.3% which is indeed lower that the likelihood
that the payment is feasible (which according to my numbers is 40%)
To follow your flow computation: I only quickly counted how many feasible starting states exist and came to 4. I guess as the wealth distribution (1,2,1) should only be counted 2 times and not three
times thay my quick counting is correct. However in that case the result would be 4/12 = 1/3 (surprisingly the same as my MCF model suggests) In particular this would not be a counter example.
Thanks for checking my example. You‘re right, I miscounted, so there are 10 different states and this is probably not a counter example. Also thank you for acknowledging my point that you are
comparing different probability models. It remains to see if it is just harder or impossible to find an example where the min cost flow is more probable than the feasibility of wealth distributions.
Isn‘t equally weighting all network states equivalent to uniformly independently choosing channel balances?
1 Like
I don’t think so. A node balance is the sum of local liquidity in channels, but if those liquidity are uniformly distributed then it doesn’t mean their sum (node balance) it’s uniformly distributed.
Like if you roll 2 dice, each one is has an uniform value 1…6, but the resulting sum is not an uniform value in the range 2 - 12. And viceversa. | {"url":"https://test.delvingbitcoin.org/t/estimating-likelihood-for-lightning-payments-to-be-in-feasible/973","timestamp":"2024-11-06T05:46:12Z","content_type":"text/html","content_length":"50734","record_id":"<urn:uuid:8a85b25c-40af-4c52-8a48-e9609cad462d>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00610.warc.gz"} |
Force and laws of motion
Tanusri Gururaj, Academic content writer of Physics at Edumarz
If two or more bodies are in an isolated system, the total momentum of the objects before colliding is equal to the total momentum after colliding unless acted upon by an external force.
Proof of conservation of momentum:
Let us assume that there are two balls A and B travelling in the same direction on a straight path with different initial velocities.
For ball A:
Mass = mA
Initial velocity = uA
For ball B:
Mass = mB
Initial velocity = uB
Suppose the two balls collide together.
The force exerted by ball A on ball B is FAB and, the force exerted by ball B on ball A is FBA.
Now the final velocity of ball A becomes vA and, the final velocity of ball B becomes vB.
Rate of change of momentum of ball A = mA(vA – uA)/t = FAB (according to newton’s second law of motion )
Rate of change of momentum of ball B = mB(vB – uB)/t = FBA (according to newton’s second law of motion )
Where t is the time for which the collision between the two balls lasts.
According to Newton’s third law of motion,
FAB = -FBA
On equating these equations we get,
mAuA + mBuB = mAvA + mBvB
In the above equation, the left side represents the total momentum of the two bodies before the collision and, the right side represents the total momentum of the two bodies after the collision.
This example proves the law of conservation of momentum. | {"url":"https://edumarz.com/force-and-laws-of-motion-7/","timestamp":"2024-11-09T00:27:48Z","content_type":"text/html","content_length":"197031","record_id":"<urn:uuid:8a467914-f343-4650-ae4f-078088b59a5a>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00555.warc.gz"} |
mixOmics source: R/tune.pca.R
############################################################################################################# # Author : # Kim-Anh Le Cao, ARC Centre of Excellence in Bioinformatics, Institute for
Molecular Bioscience, University of Queensland, Australia # Leigh Coonan, Student, University of Quuensland, Australia # Fangzhou Yao, Student, University of Queensland, Australia # Florian Rohart,
The University of Queensland, The University of Queensland Diamantina Institute, Translational Research Institute, Brisbane, QLD # # created: 2011 # last modified: 21-04-2016 # # Copyright (C) 2011 #
# This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the
License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY
or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not,
write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. ###############################################################################################
############## tune.pca = function(X, ncomp = NULL, center = TRUE, # sets the mean of the data to zero, ensures that the first PC describes the direction of the maximum variance scale = FALSE, #
variance is unit accross different units max.iter = 500, tol = 1e-09, logratio = 'none',# one of ('none','CLR','ILR') V = NULL, multilevel = NULL) { result = pca(X = X, ncomp = ncomp, center =
center, scale = scale, max.iter = max.iter, tol = tol, logratio = logratio, V = V, multilevel = multilevel) is.na.X = is.na(X) na.X = FALSE if (any(is.na.X)) na.X = TRUE # list eigenvalues, prop. of
explained varience and cumulative proportion of explained variance prop.var = result$explained_variance cum.var = result$cum.var ind.show = min(10, ncomp) print(result) # Plot the principal
components and explained variance # note: if NA values, we have an estimation of the variance using NIPALS if(!na.X) { ylab = "Proportion of Explained Variance" } else{ ylab = "Estimated Proportion
of Explained Variance" } barplot(prop.var[1:result$ncomp], names.arg = 1:result$ncomp, xlab = "Principal Components", ylab = ylab) result$call = match.call() class(result) = "tune.pca" return
(invisible(result)) }
Any scripts or data that you put into this service are public.
For more information on customizing the embed code, read Embedding Snippets. | {"url":"https://rdrr.io/cran/mixOmics/src/R/tune.pca.R","timestamp":"2024-11-10T03:46:26Z","content_type":"text/html","content_length":"40363","record_id":"<urn:uuid:7b86ae8f-c5cf-4442-8ba8-739c8dfedff2>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00538.warc.gz"} |
Estimating Person Characteristics from IRT Data - 3PL Model
Original Code
# One of the basic tasks in item response theory is estimating the ability of a test taker from the responses to a series of items (questions).
# Let's draw the same pool of items that we have used on several previous posts:
# First let's imagine we have a pool of 100 3PL items.
npool = 500
pool = as.data.frame(cbind(item=1:npool, a=abs(rnorm(npool)*.25+1.1), b=rnorm(npool), c=abs(rnorm(npool)/7.5+.1)))
# Drawing on code from a previous post we can calculate several useful functions:
# (http://www.econometricsbysimulation.com/2012/11/matching-item-information.html)
# Each item has a item characteristic curve (ICC) of:
PL3 = function(theta,a, b, c) c+(1-c)*exp(a*(theta-b))/(1+exp(a*(theta-b)))
# Let's imagine that we have a single test taker and that test taker has taken the first 15 items from the pool.
items.count = 15
items.taken = pool[1:items.count,]
# And that the person has a latent theta ability of 1
theta = 1.3
# Let's calculate the cut points for each of the items.
items.cut = PL3(theta, items.taken$a, items.taken$b, items.taken$c)
# We can see how the cut point works by graphing
plot(0,0, type="n", xlim=c(-3,3),ylim=c(0,1), xlab=~theta,
ylab="Probability of Correct Response", yaxs = "i", xaxs = "i" , main="Item Characteristics Curves and Ability Level")
for(i in 1:items.count) {
lines(seq(-3,3,.1),PL3(seq(-3,3,.1), items.taken$a[i], items.taken$b[i], items.taken$c[i]), lwd=2)
abline(h=items.cut[i], col="blue")
abline(v=theta,col="red", lwd=3)
# Now let's draw a uniform draw that we will use to calculate whether each item as passed.
rdraw = runif(items.count)
# Finally, we will calculate item responses
item.responses = 0
item.responses = items.cut > rdraw
# Done with Simulation - Time for Estimation
# We want to use the information we know about the items (the item parameters and the responses) in order to estimate a best guess at the true ability of the test taker.
# First we must check if the person got all of the items either correct or incorrect.
# If this is either a 0 or a number equal to the number of items then we cannot esimate an interior maximum without additional assumptions.
# We will attempt to recover our theta value using the r command optim
# First we need to specify the function to optimize over.
MLE = function(theta) sum(log((item.responses==T)*PL3(theta, items.taken$a, items.taken$b, items.taken$c) +
(item.responses==F)*(1-PL3(theta, items.taken$a, items.taken$b, items.taken$c))))
# The optimization function takes as its argument the choice variables to be optimized (theta).
# The way the above optimization works is that you specify the probility of each response piecewise.
# If the response is correct, then you count the CDF of theta up to that point as contributing to the probability of observing a correct outcome. If the response is negative, then you count it as
contributing the the probability of a incorrect outcome. You then choose the theta that produces the greatest total probability.
MLEval = 0
theta.range = seq(-3,3,.1)
for(i in 1:length(theta.range)) MLEval[i] =MLE(theta.range[i])
plot(theta.range, MLEval, type="l", main="Maximim Likelihood function", xlab= ~theta, ylab="Sum of Log Likelihood")
abline(v=theta, col="blue")
# We can visually see that the maximum of the slope will not be at the true value though it will be close.
optim(0,MLE, method="Brent", lower=-6, upper=6, control=list(fnscale = -1))
abline(v=optim(0,MLE, method="Brent", lower=-6, upper=6, control=list(fnscale = -1))$par, col="red")
# We can see that we can estimate theta reasonably well with just 15 items from a paper test (red line estimate, blue line true). However, looking at the graph of the ICCs, we can see that for most
of the items, the steepest point (where they have the most discriminating power) is at an ability set lower than the test taker's ability. Thus, this test provides the most information about a person
who has a lower ability than the person with a theta=1.2.
# We can use R's optim function to find the ideal theta that would maximize the information from this test.
# Item information is:
PL3.info = function(theta, a, b, c) a^2 *(PL3(theta,a,b,c)-c)^2/(1-c)^2 * (1-PL3(theta,a,b,c))/PL3(theta,a,b,c)
# Notice, this is not the best way of defining the test information function since the items are not arguments.
test.info = function(theta) sum(PL3.info(theta, items.taken$a, items.taken$b, items.taken$c))
# Construct a vector to hold the test information
info = 0
for(i in 1:length(theta.range)) info[i]=test.info(theta.range[i])
plot(theta.range, info, type="l", main="Information Peaks Slightly Above 0", xlab= ~theta, ylab="Information")
abline(v=theta, col="blue")
# But we want to know about the test taker at theta
optim(0,test.info, method="Brent", lower=-6, upper=6, control=list(fnscale = -1))
# The person this test would be best suited to evaluate would have an ability rating of .19
abline(v=optim(0,test.info, method="Brent", lower=-6, upper=6, control=list(fnscale = -1))$par, col="red") | {"url":"http://www.econometricsbysimulation.com/2012/11/estimating-person-characteristics-from.html","timestamp":"2024-11-07T23:09:46Z","content_type":"text/html","content_length":"175467","record_id":"<urn:uuid:e2ff79ee-33cd-4058-aecc-cb00090d8395>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00770.warc.gz"} |
How to calculate the Uncertainty in chemistry? |Chemistry Questions
Calculating Uncertainty is not an easy process. People find it challenging to estimate Uncertainty. This is why we have picked up the topic here so that you will get to know the procedure that will
help you calculate it easily. All you need to do is focus on an eight-step process appropriately, and you will never find it difficult to calculate it quickly and adequately.
How to calculate Uncertainty?
1. Specify the measurement process
Before getting into the calculation process, it is essential to have a plan. The planning is undoubtedly an excellent start to get the appropriate outcomes. First of all, you need to identify the
measurement process. This will help you in uncertainty analysis and focus your kind attention on what matters the most.
How to specify the Measurement process?
Follow the below-mentioned instructions to specify the measurement process:
• Select the measurement function to evaluate
• Select the procedure or measurement method to be used
• Select the equipment to be used
• Select the desired range of measurement function
• Determine the test points to be evaluated.
1. Identify and characterize Uncertainty resources.
Now that you have figured out the measurement processes to be evaluated, you need to identify the factors influencing Uncertainty in measurement results. However, this process is not an easy one, so
be patient while working.
Finding Uncertainty sources
Finding uncertainty resources can be complicated and require a lot of time and effort. This stage is considered the most time-consuming process while evaluating the uncertainty measurement.
How to find Uncertainty sources
Follow the below-mentioned steps to find uncertainty sources
• Evaluate measurement process, calibration procedure, or test method
• Evaluate measurement equations
• Evaluate reference standards, reagents, and equipment
• Identify minimum required uncertainty resources
• Research for various information sources
• Consult an expert
1. Quantify Uncertainty resources
Before moving the calculation of measurement uncertainty, you need first to determine each contributing factor’s magnitude. To attain this, you need to perform such data analysis and reduction.
How to Quantify Uncertainty?
Follow the below-mentioned steps to quantify the Uncertainty
• Collect data and information
• Select the correct data after appropriate evaluation
• Data analysis
• Quantify Uncertainty components.
1. Characterize Uncertainty resources
Characterize each factor by a probability distribution and uncertainty type.
How to characterize Uncertainty sources?
Follow the procedure to characterize your uncertainty sources
• Categorize each uncertainty source: Type A or Type B
• Assign a probability distribution to each component
1. Convert Uncertainties to standard deviations
After the probability distribution, identify the equation required to convert each uncertainty contributor to a standard deviation equivalent. This will help to reduce the uncertainty source to a
1-sigma level.
How to convert Uncertainty to standard deviations?
Follow the below-mentioned steps to convert uncertainty components to standard deviations
• Assign a probability distribution to each uncertainty sources
• Find the divisor for the selected probability distribution
• Divide each uncertainty source by respective divisor.
1. Calculate Combined Uncertainty
After converting the uncertainty sources, it is time to calculate combined Uncertainty by the root sum of squares (RSS) method.
How to calculate the combined Uncertainty?
Follow the below-mentioned steps to calculate combined Uncertainty
• Square each uncertainty component’s value
• Add together all results obtained in the first step
• Calculate the square root of results obtained in step 2
1. Calculate expanded Uncertainty
You have reached the phase where you are almost done with the uncertainty estimation.
How to calculate the expanded Uncertainty?
Follow the steps to calculate expanded Uncertainty
• Calculate combined Uncertainty
• Calculate Freedom’s effective degrees
• Select or Find a coverage factor (k)
• Multiply combined Uncertainty by a coverage factor
1. Evaluate your Uncertainty budget
Now that you are done with the calculation of expanded Uncertainty, it is the best time to evaluate the uncertainty estimate for appropriateness. Ensure that your measurement uncertainty estimate
appropriately represents the measurement process and is not under or over-estimated.
Discover the exact logic behind the reactions!
Get a deeper understanding of every possible interaction between atoms, molecules and elements in an easy and fun-loving way. | {"url":"https://telgurus.co.uk/how-to-calculate-uncertainty-in-chemistry/","timestamp":"2024-11-08T20:08:51Z","content_type":"text/html","content_length":"129431","record_id":"<urn:uuid:c610c476-6c6d-4443-ab9d-52c4c4a11ba9>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00062.warc.gz"} |
The diagram shows a circuit containing two resistors of resista... | Filo
The diagram shows a circuit containing two resistors of resistance and .A voltmeter is connected across the resistor by connecting to .The reading on the voltmeter is .
P is moved to point Y in the circuit.What is the new reading on the voltmeter?
Not the question you're searching for?
+ Ask your question
Correct answer: Option (D)
Explanation for correct answer:
According to Ohm's law,
Case I: connecting P to X
In a series combination of resistors the current is same.
Therefore, Current in the circuit is
Case II: connecting P to Y
Effective resistance of two resistors connected in series
Was this solution helpful?
Found 6 tutors discussing this question
Discuss this question LIVE
14 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions from Elelctric Circuits
View more
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Physics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
The diagram shows a circuit containing two resistors of resistance and .A voltmeter is connected across the resistor by connecting to .The reading on the voltmeter is .
Question Text
P is moved to point Y in the circuit.What is the new reading on the voltmeter?
Topic Elelctric Circuits
Subject Physics
Class Grade 10
Answer Type Text solution:1
Upvotes 110 | {"url":"https://askfilo.com/physics-question-answers/the-diagram-shows-a-circuit-containing-two-resistors-of-resistance-1-0-omega-and-115248","timestamp":"2024-11-02T11:17:39Z","content_type":"text/html","content_length":"180784","record_id":"<urn:uuid:80d113b8-de06-4db6-83d4-d9b15648e537>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00682.warc.gz"} |
Two researchers analyse the exact same data set, using multiple
regression analyses to predict depression from...
Two researchers analyse the exact same data set, using multiple regression analyses to predict depression from...
Two researchers analyse the exact same data set, using multiple regression analyses to predict depression from secure attachment, neuroticism, and anxiety. Catherine finds that anxiety is a
significant predictor of depression in her analysis, while Sam finds that anxiety is not a significant predictor in her analysis. Assuming neither made a mistake, which of the following is true about
how the researchers conducted their analyses?
(a) Sam must have conducted a standard multiple regression (SMR); while Catherine must have conducted a hierarchical multiple regression (HMR), entering secure attachment and neuroticism in Block 1,
and anxiety in Block 2.
(b) Catherine must have conducted a standard multiple regression (SMR); while Sam must have conducted a hierarchical multiple regression (HMR), entering anxiety in Block 1, and secure attachment and
neuroticism in Block 2.
(c) Sam must have conducted a standard multiple regression (SMR); while Catherine must have conducted a hierarchical multiple regression (HMR), entering anxiety in Block 1, and secure attachment and
neuroticism in Block 2.
(d) Catherine must have conducted a standard multiple regression (SMR); while Sam must have conducted a hierarchical multiple regression (HMR), entering secure attachment and neuroticism in Block 1,
and anxiety in Block 2.
(c) Sam must have conducted a standard multiple regression (SMR); while Catherine must have conducted a hierarchical multiple regression (HMR), entering anxiety in Block 1, and secure attachment and
neuroticism in Block 2.
Anxiety as single predictor would be a significant predictor of depression. But Anxiety in multiple regression model with secure attachment, neuroticism may not be a significant predictor. This may
be because the variation of depression due to Anxiety is explained by secure attachment, neuroticism. | {"url":"https://justaaa.com/statistics-and-probability/250996-two-researchers-analyse-the-exact-same-data-set","timestamp":"2024-11-13T22:47:59Z","content_type":"text/html","content_length":"40622","record_id":"<urn:uuid:db522c88-df27-4884-a131-7439f9fecf5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00402.warc.gz"} |
In some ways path tracing is one of the simplest and most intuitive ways to do ray tracing.
Imagine you want to simulate how the photons from one or more light sources bounce around a scene before reaching a camera. Each time a photon hits a surface, we choose a new randomly reflected
direction and continue, adjusting the intensity according to how likely the chosen reflection is. Though this approach works, only a very tiny fraction of paths would terminate at the camera.
So instead, we might start from the camera and trace the ray from here and until we hit a light source. And, if the light source is large and slowly varying (for instance when using Image Based
Lighting), this may provide good results.
But if the light source is small, e.g. like the sun, we have the same problem: the chance that we hit a light source using a path of random reflections is very low, and our image will be very noisy
and slowly converging. There are ways around this: one way is to trace rays starting from both the camera and the lights, and connect them (bidirectional path tracing), another is to test for
possible direct lighting at each surface intersection (this is sometimes called ‘next event estimation’).
Even though the concept of path tracing might be simple, introductions to path tracing often get very mathematical. This blog post is an attempt to introduce path tracing as an operational tool
without going through too many formal definitions. The examples are built around Fragmentarium (and thus GLSL) snippets, but the discussion should be quite general.
Let us start by considering how light behaves when hitting a very simple material: a perfect diffuse material.
Diffuse reflections
A Lambertian material is an ideal diffuse material, which has the same radiance when viewed from any angle.
Imagine that a Lambertian surface is hit by a light source. Consider the image above, showing some photons hitting a patch of a surface. By pure geometrical reasoning, we can see that the amount of
light that hits this patch of the surface will be proportional to the cosine of the angle between the surface normal and the light ray:
cos(\theta)=\vec{n} \cdot \vec{l}
By definition of a Lambertian material this amount of incoming light will then be reflected with the same probability in all directions.
Now, to find the total light intensity in a given (outgoing) direction, we need to integrate over all possible incoming directions in the hemisphere:
L_{out}(\vec\omega_o) = \int K*L_{in}(\vec\omega_i)cos(\theta)d\vec\omega_i
where K is a constant that determines how much of the incoming light is absorbed in the material, and how much is reflected. Notice, that there must be an upper bound to the value of K – too high a
value would mean we emitted more light than we received. This is referred to as the ‘conservation of energy’ constraint, which puts the following bound on K:
\int Kcos(\theta)d\vec\omega_i \leq 1
Since K is a constant, this integral is easy to solve (see e.g. equation 30 here):
K \leq 1/\pi
Instead of using the constant K, when talking about a diffuse materials reflectivity, it is common to use the Albedo, defined as \( Albedo = K\pi \). The Albedo is thus always between 0 and 1 for a
physical diffuse materials. Using the Albedo definition, we have:
L_{out}(\vec\omega_o) = \int (Albedo/\pi)*L_{in}(\vec\omega_i)cos(\theta)d\vec\omega_i
The above is the Rendering Equation for a diffuse material. It describes how light scatters at a single point. Our diffuse material is a special case of the more general formula:
L_{out}(\vec\omega_o) = \int BRDF(\vec\omega_i,\vec\omega_o)*L_{in}(\vec\omega_i)cos(\theta)d\vec\omega_i
Where the BRDF (Bidirectional Reflectance Distribution Function) is a function that describes the reflection properties of the given material: i.e. do we have a shiny, metallic surface or a diffuse
Completely diffuse material (click for large version)
How to solve the rendering equation
An integral is a continuous quantity, which we must turn into something discrete before we can handle it on the computer.
To evaluate the integral, we will use Monte Carlo sampling, which is a very simple: to provide an estimate for an integral, we will take a number of samples and use the average values of these
samples multiplied by the integration interval length.
\int_a^b f(x)dx \approx \frac{b-a}{N}\sum _{i=1}^N f(X_i)
If we apply this to our diffuse rendering equation above, we get the following discrete summation:
L_{out}(\vec\omega_o) &= \int (Albedo/\pi)*L_{in}(\vec\omega_i)cos(\theta)d\vec\omega_i \\
& = \frac{2\pi}{N}\sum_{\vec\omega_i} (\frac{Albedo}{\pi}) L_{in}(\vec\omega_i) (\vec{n} \cdot \vec\omega_i) \\
& = \frac{2 Albedo}{N}\sum_{\vec\omega_i} L_{in}(\vec\omega_i) (\vec{n} \cdot \vec\omega_i)
Test render (click for large version)
Building a path tracer (in GLSL)
Now we are able to build a simple path tracer for diffuse materials. All we need to do is to shoot rays starting from the camera, and when a ray hits a surface, we will choose a random direction in
the hemisphere defined by the surface normal. We will continue with this until we hit a light source. Each time the ray changes direction, we will modulate the light intensity by the factor found
2*Color*Albedo*L_{in}(\vec\omega_i) (\vec{n} \cdot \vec\omega_i)
The idea is to repeat this many times for each pixel, and then average the samples. This is why the sum and the division by N is no longer present in the formula. Also notice, that we have added a
(material specific) color. Until now we have assumed that our materials handled all wavelengths the same way, but of course some materials absorb some wavelengths, while reflecting others. We will
describe this using a three-component material color, which will modulate the light ray at each surface intersection.
All of this boils down to very few lines of codes:
vec3 color(vec3 from, vec3 dir)
vec3 hit = vec3(0.0);
vec3 hitNormal = vec3(0.0);
vec3 luminance = vec3(1.0);
for (int i=0; i < RayDepth; i++) {
if (trace(from,dir,hit,hitNormal)) {
dir = getSample(hitNormal); // new direction (towards light)
luminance *= getColor()*2.0*Albedo*dot(dir,hitNormal);
from = hit + hitNormal*minDist*2.0; // new start point
} else {
return luminance * getBackground( dir );
return vec3(0.0); // Ray never reached a light source
The getBackground() method simulates the light sources in a given direction (i.e. infinitely far away). As we will see below, this fits nicely together with using Image Based Lighting.
But even when implementing getBackground() as a simple function returning a constant white color, we can get very nice images:
The above images were lightened only a constant white dome light, which gives the pure ambient occlusion like renders seen above.
Sampling the hemisphere in GLSL
The code above calls a 'getSample' function to sample the hemisphere.
dir = getSample(hitNormal); // new direction (towards light)
This can be a bit tricky. There is a nice formula for \(cos^n\) sampling of a hemisphere in the GI compendium (equation 36), but you still need to align the hemisphere with the surface normal. And
you need to be able to draw uniform random numbers in GLSL, which is not easy.
Below I use the standard approach of putting a seed into a noisy function. The seed should depend on the pixel coordinate and the sample number. Here is some example code:
vec2 seed = viewCoord*(float(subframe)+1.0);
vec2 rand2n() {
// implementation based on: lumina.sourceforge.net/Tutorials/Noise.html
return vec2(fract(sin(dot(seed.xy ,vec2(12.9898,78.233))) * 43758.5453),
fract(cos(dot(seed.xy ,vec2(4.898,7.23))) * 23421.631));
vec3 ortho(vec3 v) {
// See : http://lolengine.net/blog/2013/09/21/picking-orthogonal-vector-combing-coconuts
return abs(v.x) > abs(v.z) ? vec3(-v.y, v.x, 0.0) : vec3(0.0, -v.z, v.y);
vec3 getSampleBiased(vec3 dir, float power) {
dir = normalize(dir);
vec3 o1 = normalize(ortho(dir));
vec3 o2 = normalize(cross(dir, o1));
vec2 r = rand2n();
float oneminus = sqrt(1.0-r.y*r.y);
return cos(r.x)*oneminus*o1+sin(r.x)*oneminus*o2+r.y*dir;
vec3 getSample(vec3 dir) {
return getSampleBiased(dir,0.0); // <- unbiased!
vec3 getCosineWeightedSample(vec3 dir) {
return getSampleBiased(dir,1.0);
Importance Sampling
Now there are some tricks to improve the rendering a bit: Looking at the formulas above, it is clear that light sources in the surface normal direction will contribute the most to the final intensity
(because of the \( \vec{n} \cdot \vec\omega_i \) term).
This means we might want sample more in the surface normal directions, since these contributions will have a bigger impact on the final average. But wait: we are estimating an integral using Monte
Carlo sampling. If we bias the samples towards the higher values, surely our estimate will be too large. It turns out there is a way around that: it is okay to sample using a non-uniform
distribution, as long as we divide the sample value by the probability density function (PDF).
Since we know the diffuse term is modulated by the \( \vec{n} \cdot \vec\omega_i = cos(\theta) \), it makes sense to sample from a non-uniform cosine weighted distribution. According to GI compendium
(equation 35), this distribution has a PDF of \( cos(\theta) / \pi \), which we must divide by, when using cosine weighted sampling. In comparison, the uniform sampling on the hemisphere we used
above, can be thought of either to be multiplied by the integral interval length (\( 2\pi \)), or diving by a constant PDF of \( 1 / 2\pi \).
If we insert this, we end up with a simpler expression for the cosine weighted sampling, since the cosine terms cancel out:
vec3 color(vec3 from, vec3 dir)
vec3 hit = vec3(0.0);
vec3 hitNormal = vec3(0.0);
vec3 luminance = vec3(1.0);
for (int i=0; i < RayDepth; i++) {
if (trace(from,dir,hit,hitNormal)) {
dir =getCosineWeightedSample(hitNormal);
luminance *= getColor()*Albedo;
from = hit + hitNormal*minDist*2.0; // new start point
} else {
return luminance * getBackground( dir );
return vec3(0.0); // Ray never reached a light source
Image Based Lighting
It is now trivial to replace the constant dome light, with Image Based Lighting: just lookup the lighting from a panoramic HDR image in the 'getBackground(dir)' function.
This works nicely, at least if the environment map is not varying too much in light intensity. Here is an example:
Stereographic 4D Quaternion system (click for large version)
If, however, the environment has small, strong light sources (such as a sun), the path tracing will converge very slowly, since we are not likely to hit these by chance. But for some IBL images this
works nicely - I usually use a filtered (blurred) image for lighting, since this will reduce noise a lot (though the result is not physically correct). The sIBL archive has many great free HDR images
(the ones named '*_env.hdr' are prefiltered and useful for lighting).
Direct Lighting / Next Event Estimation
But without strong, localized light sources, there will be no cast shadows - only ambient occlusion like contact shadows. So how do we handle strong lights?
Test scene with IBL lighting
Let us consider the sun for a moment.
The sun has an angular diameter of 32 arc minutes, or roughly 0.5 degrees. How much of the hemisphere is this? The solid angle (which corresponds to the area covered of a unit sphere) is given by:
\Omega = 2\pi (1 - \cos {\theta} )
where \( \theta \) is half the angular diameter. Using this we get that the sun covers roughly \( 6*10^{-5} \) steradians or around 1/100000 of the hemisphere surface. You would actually need around
70000 samples, before there is even a 50% chance of a pixel actually catching some sun light (using \( 1-(1-10^{-5})^{70000} \approx 50\% \)).
Test scene: naive path tracing of a sun like light source (10000 samples per pixel!)
Obviously, we need to bias the sampling towards the important light sources in the scene - similar to what we did earlier, when we biased the sampling to follow the BRDF distribution.
One way to do this, is Direct Lighting or Next Event Estimation sampling. This is a simple extension: instead of tracing the light ray until we hit a light source, we send out a test ray in the
direction of the sun light source at each surface intersection.
Test scene with direct lighting (100 samples per pixel)
Here is some example code:
vec3 getConeSample(vec3 dir, float extent) {
// Formula 34 in GI Compendium
dir = normalize(dir);
vec3 o1 = normalize(ortho(dir));
vec3 o2 = normalize(cross(dir, o1));
vec2 r = rand2n();
float oneminus = sqrt(1.0-r.y*r.y);
return cos(r.x)*oneminus*o1+sin(r.x)*oneminus*o2+r.y*dir;
vec3 color(vec3 from, vec3 dir)
vec3 hit = vec3(0.0);
vec3 direct = vec3(0.0);
vec3 hitNormal = vec3(0.0);
vec3 luminance = vec3(1.0);
for (int i=0; i < RayDepth; i++) {
if (trace(from,dir,hit,hitNormal)) {
dir =getCosineWeightedSample(hitNormal);
luminance *= getColor()*Albedo;
from = hit + hitNormal*minDist*2.0; // new start point
// Direct lighting
vec3 sunSampleDir = getConeSample(sunDirection,1E-5);
float sunLight = dot(hitNormal, sunSampleDir);
if (sunLight>0.0 && !trace(hit + hitNormal*2.0*minDist,sunSampleDir)) {
direct += luminance*sunLight*1E-5;
} else {
return direct + luminance*getBackground( dir );
return vec3(0.0); // Ray never reached a light source
The 1E-5 factor is the hemisphere area covered by the sun. Notice, that you might run into precision errors with the single-precision floats used in GLSL when doing these calculations. For instance,
on my graphics card, cos(0.4753 degrees) is exactly equal to 1.0, which means a physically sized sun can easily introduce large numerical errors (remember the sun is roughly 0.5 degrees).
Sky model
To provide somewhat more natural lighting, an easy improvement is to combine the sun light with a blue sky dome.
A slightly more complex model is the Preetham sky model, which is a physically based model, taking different kinds of scattering into account. Based on the code from Simon Wallner I implemented a
Preetham model in Fragmentarium.
Here is an animated example, showing how the color of the sun light changes during the day:
Path tracing test from Syntopia (Mikael H. Christensen) on Vimeo.
Now finally, we are ready to apply path tracing to fractals. Technically, there is not much new to this - I have previously covered how to do the ray-fractal intersection in this series of blog
posts: Distance Estimated 3D fractals.
So the big question is whether it makes sense to apply path tracing to fractals, or whether the subtle details of multiple light bounces are lost on the complex fractal surfaces. Here is the
Mandelbulb, rendered with the sky model:
Path traced Mandelbulb (click for larger version)
Here path tracing provides a very natural and pleasant lighting, which improves the 3D perceptions.
Here are some more comparisons of complex geometry:
Default ray tracer in Fragmentarium
And another one:
Default ray tracer in Fragmentarium
What's the catch?
The main concern with path tracing is of course the rendering speed, which I have not talked much about, mainly because it depends on a lot of factors, making it difficult to give a simple answer.
First of all, the images above are distance estimated fractals, which means they are a lot slower to render than polygons (at least of you have a decent spatial acceleration structure for the
polygons, which is surprisingly difficult to implement on a GPU). But let me give some numbers anyway.
In general, the rendering speed will be (roughly) proportional to the number of pixels, the FLOPS of the GPU, and the number of samples per pixel.
On my laptop (a mobile mid-range NVIDIA 850M GPU) the Mandelbulb image above took 5 minutes to render at 2442x1917 resolution (with 100 samples per pixel). The simple test scene above took 30 seconds
at the same resolution (with 100 samples per pixel). But remember, that since we can show the render progressively, it is still possible to use this at interactive speeds.
What about the ray lengths (the number of light bounces)?
Here is a comparison as an animated GIF, showing direct light only (the darkest), followed by one internal light bounce, and finally two internal light bounces:
In terms of speed one internal bounce made the render 2.2x slower, while two bounces made it 3.5x slower. It should be noted that the visual effect of adding additional light bounces is normally
relatively small - I usually use only a single internal light bounce.
Even though the images above suggests that path tracing is a superior technique, it is also possible to create good looking images in Fragmentarium with the existing ray tracers. For instance, take a
look at this image:
(taken from the Knots and Polyhedra series)
It was ray traced using the 'Soft-Raytracer.frag', and I was not able to improve the render using the Path tracer. Having said that, the Soft-Raytracer is also a multi-sample ray tracer which has to
use lots of samples to produce the nice noise-free soft shadows.
The Fragmentarium path tracers are still Work-In-Progress, but they can be downloaded here:
Sky-Pathtracer.frag (which needs the Preetham model: Sunsky.frag).
and the image based lighting one:
The path tracers can be used by replacing an existing ray tracer '#include' in any Fragmentarium .frag file.
External resources
GI Total Compendium - very valuable collection of all formulas needed for ray tracing.
Vilém Otte's Bachelor Thesis on GPU Path Tracing is a good introduction.
Disney's BRDF explorer - Interactive display of different BRDF models - many examples included. The BRDF definitions are short GLSL snippets making them easy to use in Fragmentarium!
Inigo Quilez's path tracer was the first example I saw of using GPU path tracing of fractals.
Evan Wallace - the first WebGL Path tracer I am aware of.
Brigade is probably the most interesting real time path tracer: Vimeo video and paper.
I would have liked to talk a bit about unbiased and consistent rendering, but I don't understand these issues properly yet. It should be said, however, that since the examples I have given terminate
after a fixed number of ray bounces, they will not converge to a true solution of the rendering equation (and, are thus both biased and inconsistent). For consistency, a better termination criterion,
such as russian roulette termination, is needed.
Combining ray tracing and polygons
I have written a lot about distance estimated ray marching using OpenGL shaders on this blog.
But one of the things I have always left out is how to setup the camera and perspective projection in OpenGL. The traditional way to do this is by using functions such as ‘gluLookAt’ and
‘gluPerspective’. But things become more complicated if you want to combine ray marched shader graphics with the traditional OpenGL polygons. And if you are using modern OpenGL (the ‘core’ context),
there is no matrix stack and no ‘gluLookAt’ functions. This post goes through through the math necessary to combine raytraced and polygon graphics in shaders. I have seen several people implement
this, but I couldn’t find a thorough description of how to derive the math.
Here is the rendering pipeline we will be using:
It is important to point out, that in modern OpenGL there is no such thing as a model, view, or projection matrix. The green part on the diagram above is completely programmable, and it is possible
to do whatever you like there. Only the part after the green box of the diagram (starting with clip coordinates) is fixed by the graphics card. But the goal here is to precisely match the convention
of the fixed-function OpenGL pipeline matrices and the GLU functions gluLookAt and gluPerspective, so we will stick to the conventional model, view, and projection matrix terminology.
The object coords are the raw coordinates, for instance as specified in VBO buffers. This is the vertices of an 3D object in its local coordinate system. The next step is to position and orient the
3D object in the scene. This is accomplished by applying the model matrix, that transform the object coordinates to global world coordinates. The model transformation will be different for the
different objects that are placed in the scene.
The camera transformation
The next step is to transform the world coordinates into camera or eye space. Now, neither old nor modern OpenGL has any special support for implementing a camera. Instead the conventional
gluPerspective always assumes an origo centered camera facing the negative z-direction, and with an up-vector in the positive y-direction. So, in order to implement a generic, movable camera, we
instead find a camera-view matrix, and then apply the inverse transformation to our world coordinates – i.e. instead of moving/rotate the camera, we apply the opposite transformation to the world.
Personally, I prefer using a camera specified using a forward, up, and right vector, and a position. It is easy to understand, and the only problem is that you need to keep the vectors orthogonal at
all times. So we will use a camera identical to the one implemented in gluLookAt.
The camera-view matrix is then of the form:
r.x & u.x & -f.x & p.x \\
r.y & u.y & -f.y & p.y \\
r.z & u.z & -f.z & p.z \\
0 & 0 & 0 & 1
where r=right, u=up, f=forward, and p is the position in world coordinates. R, u, and f must be normalized and orthogonal.
Which gives an inverse of the form:
r.x & r.y & r.z & q.x \\
u.x & u.y & u.z & q.y \\
-f.x & -f.y & -f.z & q.z \\
0 & 0 & 0 & 1
By multiplying the matrices together and requiring the result is the identity matrix, the following relations between p and q can be established:
q.x = -dot(r,p), q.y = -dot(u,p), q.z = dot(f,p)
p = -vec3(vec4(q,0)*modelView);
As may be seen, the translation part (q) of this matrix is the position of the camera expressed in the R,u, and f coordinate system.
Now, per default, the OpenGL shaders use a column-major representation of matrices, in which the data is stored sequentially as a series of columns (notice, that this can be changed by specifying
‘layout (row_major) uniform;’ in the shader). So creating the model-view matrix as an array on the CPU side looks like this:
float[] values = new float[] {
r[0], u[0], -f[0], 0,
r[1], u[1], -f[1], 0,
r[2], u[2], -f[2], 0,
q[0], q[1], q[2], 1};
Don’t confuse this with the original camera-transformation: it is the inverse camera-transformation, represented in column-major format.
The Projection Transformation
The gluPerspective transformation uses the following matrix to transform from eye coordinates to clip coordinates:
f/aspect & 0 & 0 & 0 \\
0 & f & 0 & 0 \\
0 & 0 & \frac{(zF+zN)}{(zN-zF)} & \frac{(2*zF*zN)}{(zN-zF)} \\
0 & 0 & -1 & 0
where ‘f’ is cotangent(fovY/2) and ‘aspect’ is the width to height ratio of the output window.
(If you want to understand the form of this matrix, try this link)
Since we are going to raytrace the view frustum, consider what happens when we transform an direction of the form (x,y,-1,0) from eye space to clip coordinates and further to normalized device
coordinates. Since the clip.w in this case will be 1, the x and y part of the NDC will be:
ndc.xy = (x*f/aspect, y*f)
Since normalized device coordinates range from [-1;1], this means that when we ray trace our frustrum, our ray direction (in eye space) must be in the range:
eyeX = [-aspect/f ; aspect/f]
eyeY = [-1/f ; 1/f]
eyeZ = -1
where 1/f = tangent(fovY/2).
We now have the necessary ingredients to set up our raytracing shaders.
The polygon shaders
But let us start with the polygon shaders. In order to draw the polygons, we need to apply the model, view, and projection transformations to the object space vertices:
gl_Position = projection * modelView * vertex;
Notice, that we premultiply the model and view matrix on the CPU side. We don’t need them individually on the GPU side. If you wonder why we don’t combine the projection matrix as well, it is because
we want to use the modelView to transform the normals as well:
eyeSpaceNormal = mat3(modelView) * objectSpaceNormal;
Notice, that in general normals transform different from positions. They should be multiplied by the inverse of the transposed 3×3 part of the modelView matrix. But if we only do uniform scaling and
rotations, the above will work, since the rotational part of matrix is orthogonal, and the uniform scaling does not matter if we normalize our normals. But if you do non-uniform scaling in the model
matrix, the above will not work.
The raytracer shaders
The raytracing must be done in world coordinates. So in the vertex shader for the raytracer, we need figure out the eye position and ray direction (both in world coordinates) for each pixel. Assume
that we render a quad, with the vertices ranging from [-1,-1] to [1,1].
The eye position can be easily found from the formula found under ‘the camera transformation’:
eye = -(modelView[3].xyz)*mat3(modelView);
Similar, by transforming the ranges we found above from eye to world space we get that:
dir = vec3(vertex.x*fov_y_scale*aspect,vertex.y*fov_y_scale,-1.0)
where fov_y_scale = tangent(fovY/2) is an uniform calculated on the CPU side.
Normally, OpenGL takes care of filling the z-buffer. But for raytracing, we have to do it manually, which can be done by writing to gl_fragDepth. Now, the ray tracing takes place in world
coordinates: we are tracing from the eye position and into the camera-forward direction (mixed with camera-up and camera-right). But we need the z-coordinate of the hit position in eye coordinates.
The raytracing is of the form:
vec3 hit = p + rayDirection * distance; // hit in world coords
Converting the hit point to eye coordinates gives (the p and q terms cancel):
eyeHitZ = -distance * dot(rayDirection * cameraForward);
which in clip coordinates becomes:
clip.z = [(zF+zN)/(zN-zF)]*eyeHitZ + (2*zF*zN)/(zN-zF);
clip.w = -eyeHitZ;
Making the perspective divide, we arrive at normalized device coordinates:
ndcDepth = ((zF+zN) + (2*zF*zN)/eyeHitZ)/(zF-zN)
The ncdDepth is in the interval [-1;1]. The last step that remains is to convert into window coordinates. Here the depth value is mapped onto an interval determined by the gl_DepthRange.near and
gl_DepthRange.far parameters (usually these are just 0 and 1). So finally we arrive at the following:
gl_FragDepth =((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
Putting the pieces together, we arrive at the following for the ray tracing vertex shader:
void main(void)
gl_Position = vertex;
eye = -(modelView[3].xyz)*mat3(modelView);
dir = vec3(vertex.x*fov_y_scale*aspect,vertex.y*fov_y_scale,-1.0)
cameraForward = vec3(0,0,-1.0)*mat3(modelView);
and this code for the fragment shader:
void main (void)
vec3 rayDirection=normalize(dir);
trace(eye,rayDirection, distance, color);
fragColor = color;
float eyeHitZ = -distance *dot(cameraForward,rayDirection);
float ndcDepth = ((zFar+zNear) + (2.0*zFar*zNear)/eyeHitZ)
gl_FragDepth =((gl_DepthRange.diff * ndcDepth)
+ gl_DepthRange.near + gl_DepthRange.far) / 2.0;
The above is of course just snippets. I’m currently experimenting with a Java/JOGL implementation of the above (Github repo), with some more complete code.
Knots and Polyhedra
Over at Fractal Forums, DarkBeam came up with a Distance Estimator for a trefoil knot in this thread. Here are a few samples, created using the new ‘soft’ raytracer, I’m working on in Fragmentarium:
These kinds of knots are easy to describe by a parametrized curve, but making a distance estimator for them is impressive – I wouldn’t have guessed it was possible at all.
It is also possible to create several variations:
In the same thread, Knighty came up with an impressive figure-8 knot distance estimator:
In another thread at Fractal Forums, Knighty also published an interesting technique (“Fold and Cuts”) for creating a large variety of distance estimated polyhedra:
(If you wonder about the materials, I’ve added some 3D Perlin noise to the distance estimate – this is simple way to creature a structural texture, and it creates true displacements, not just surface
normal perturbations).
The threads linked to above contains Fragmentarium scripts with the relevant distance estimators.
Creating a Raytracer for Structure Synth (Part II)
When I decided to implement the raytracer in Structure Synth, I figured it would be an easy task – after all, it should be quite simple to trace rays from a camera and check if they intersect the
geometry in the scene.
And it turned out, that it actually is quite simple – but it did not produce very convincing pictures. The Phong-based lighting and hard shadows are really not much better than what you can achieve
in OpenGL (although the spheres are rounder). So I figured out that what I wanted was some softer qualities to the images. In particular, I have always liked the Ambient Occlusion and Depth-of-field
in Sunflow. One way to achieve this is by shooting a lot of rays for each pixel (so-called distributed raytracing). But this is obviously slow.
So I decided to try to choose a smaller subset of samples for estimating the ambient occlusion, and then do some intelligent interpolation between these points in screen space. The way I did this was
to create several screen buffers (depth, object hit, normal) and then sample at regions with high variations in these buffers (for instance at every object boundary). Then followed the non-trivial
task of interpolating between the sampled pixels (which were not uniformly distributed). I had an idea that I could solve this by relaxation (essentially iterative smoothing of the AO screen buffer,
while keeping the chosen samples fixed) – the same way the Laplace equation can be numerically solved.
While this worked, it had a number of drawbacks: choosing the condition for where to sample was tricky, the smoothing required many steps to converge, and the approach could not be easily
multi-threaded. But the worst problem was that it was difficult to combine with other stuff, such as anti-alias and depth-of-field calculations, so artifacts would show up in the final image.
I also played around with screen based depth-of-field. Again I thought it would be easy to apply a Gaussian blur based on the z-buffer depth (of course you have to prevent background objects from
blurring the foreground, which complicates things a bit). But once again, it turned out that creating a Gaussian filter for each particular depth actually gets quite slow. Of course you can bin the
depths, and reuse the Gaussian filters from a cache, but this approach got complicated, and the images still displayed artifacts. And a screen based method will always have limitations: for instance,
the blur from an object hidden behind another object will never be visible, because the object is not part of the screen buffers.
So in the end, I ended up discarding all the hacks, and settled for the much more satisfying solution of simply using a lot of rays for each pixel.
This may sound very slow: after all you need multiple rays for anti-alias, multiple rays for depth-of-field, multiple rays for ambient occlusion, for reflections, and so forth, which means you might
end up with a combinatorial explosion of rays per pixel. But in practice there is a nice shortcut: instead of trying all combinations, just choose some random samples from all the possible
This works remarkably well. You can simulate all these complex phenomena with a reasonably number of rays. And you can use more clever sampling strategies in order to reduce the noise (I use
stratified sampling in Structure Synth). The only drawback is, that you need a bit of book-keeping to prepare your stratified samples (between threads) and ensure you don’t get coherence between the
different dimensions you sample.
Another issue was how to accelerate the ray-object intersections. This is a crucial part of all raytracers: if you need to check your rays against every single object in the scene, the renders will
be extremely slow – the rendering time will be proportional to the number of objects. On the other hand spatial acceleration structures are often able to render a scene in a time proportional to the
logarithm of the number of objects.
For the raytracer in Structure Synth I chose to use a uniform grid (aka voxel stepping). This turned out to be a very bad choice. The uniform grid works very well, when the geometry is evenly
distributed in the scene. But for recursive systems, objects in a scene often appear at very different scales, making the cells in the grid very unevenly populated.
Another example of this is, that I often include a ground plane in my Structure Synth scenes (by using a flat box, such as “{ s 1000 1000 0.1 } box”). But this will completely kill the performance of
the uniform grid – most objects will end up in the same cell in the grid, and the acceleration structure gets useless. So in general, for generative systems with different scales, the use of a
uniform grid is a bad choice.
Not that is a lot of stuff, that didn’t work out well. So what is working?
As of now the raytracer in Structure Synth provides a nice foundation for things to come. I’ve gotten the multi-threaded part set correctly up, which includes a system for coordinating stratified
samples. Each thread have its own (partial) screen space buffer, which means I can do progressive rendering. This also makes it possible to implement more complex filtering (where the filtered
samples may contribute to more than one pixel – in which case the raytracer is not embarrassingly parallel anymore).
What is missing?
Materials. As of now there is only very limited control of materials. And things like transparency doesn’t work very well.
Filtering. As I mentioned above, the multi-threaded renderer supports working with filters, but I haven’t included any filters in the latest release. My first experiments (with a Gaussian filter)
were not particularly successful.
Lighting. As of now the only option is a single, white, point-like light source casting hard shadows. This rarely produce nice pictures.
In the next post I’ll talk a bit about what I’m planning for future versions of the raytracers.
Structure Synth 1.5.0 (Hinxton) Released
It has been more than a year since the last release of Structure Synth, but now a new version is finally ready.
The biggest additions are the new raytracer and the scripting interface for animations. The raytracer is not an attempt to create a feature complete renderer, but it makes it possible to create
images in a quality acceptable for printing without the complexity of setting up a scene in a conventional raytracer.
New features:
• JavaScript integration (for building animations).
• Native OBJ exporter (supporting all primitives and tagging)
Minor updates:
• Added support for preprocessor generated random numbers (uniform distributed).
• Added ‘show coordinate system’ option.
• Added ‘Autosave Eisenscript’ option to the Template Export Dialog. The autosaved script includes the random seed and camera settings.
• Context menu with command help in editor window.
• Proper sorting of transparent OpenGL objects.
• Added a patch by François Beaune to support Appleseed.
• GUI Refactoring.
Binaries for Windows (XP, Vista, and 7) and Mac OS X (10.4 and later, universal app). Linux is source only.
As something new, there is now an installer for Windows. It is still possible to just unzip the archived executable, but the Windows installer offers file associations for EisenScript files.
The raytracer and JavaScript interface are described in more details in the blog posts linked to above.
The OBJ exporter is simpe to use: Choose ‘Export | OBJ Export…’ from the menu. Since the OBJ format does not support spheres, these must be polygonized before export: it is possible to adjust the
resolution for this. OBJ does not directly support colors either, so I’ve made it possible to group the OBJ output into sections according to either color or tags (or both). The group and material
will be named after the OpenGL color, e.g.
g #f1ffe3
usemtl #f1ffe3
v 8.27049 2.4216 7.09626
Another new feature which probably requires a bit of explanation is the preprocessor generated random numbers. They can be used using the following syntax:
10 * { x 3 } 1 * { s random[1,3] } box
The above fragment will produce ten boxes, with a random size between 1 and 3. But notice that each box will have the same size: the ‘random[1,3]’ is substituted at compile-time, not run-time.
I’ve had several request for some way to produce random variation each time a rule is called, but this would require rather large changes to Structure Synth, since the EisenScript program is compiled
into a binary structure at compile time, and I would essentially need to turn the builder into a parser to accomodate this (which would be slow).
Well, that’s about it.
Download instructions at:
For more information see:
Scripting in Structure Synth
I’ve added a JavaScript interface to Structure Synth. This makes it possible to automate and script animations from within Structure Synth. Here is an example:
(Image links to YouTube video)
The JavaScript interface will be part of the next version of Structure Synth (which is almost complete), but it is possible to try out the new features right now, by checking out the sources from the
Subversion repository.
The rest of this post shows how to use the JavaScript interface.
Assorted Links
Pixel Bender 3D
Adobe has announced Pixel Bender 3D:
If I understand it correctly, it is a new API for Flash, and not as such a direct extension of the Pixel Bender Toolkit. So what does it do?
As far as I can tell, it is simply a way to write vertex and fragment shader for Flash. While this is certainly nice, I think Adobe is playing catchup with HTML5 here – many browsers already support
custom shaders through WebGL (in their development builds, at least). Or compare it to a modern 3D browser plugin such as Unity, with deferred lightning, depth-of-field, and occlusion culling…
And do we really need another shader language dialect?
Flash raytracer
Kris Temmerman (Neuro Productions) has created a raytracer in Flash, complete with ambient occlusion and depth-of-field:
Kris has also produced several other impressive works in Flash:
Chaos Constructions
Quite and Orange won the 4K demo at Chaos Constructions 2010 with the very impressive ‘CDAK’ demo:
(Link at Pouet, including executable).
Ex-Silico Fractals
This YouTube video shows how to produce fractals without a computer. I’ve seen video feedback before, but this is a clever setup using multiple projectors to create iterated function systems.
Vimeo Motion Graphics Award
‘Triangle’ by Onur Senturk won the Vimeo Motion Graphics Award. The specular black material looks good. Wonder if I could create something similar in Structure Synth’s internal raytracer?
Creating a Raytracer for Structure Synth
Updated november 17th, 2011
Structure Synth has quite flexible support for exporting geometry to third-party raytracers, but even though I’ve tried to make it as simple as possible, the Template Export system can be difficult
to use. It requires knowledge of the scene description format used by the target raytracer, and of the XML format used by the templates in Structure Synth. Besides that, exporting and importing can
be slow when dealing with complicated geometry.
So I decided to implement a simple raytracer inside Structure Synth. I probably could have integrated some existing open-source renderer, but I wanted to have a go at this myself. The design goal was
to create something reasonably fast, aiming for interesting, rather than natural, results.
The first version of the raytracer is available now in SVN, and will be part of the next Structure Synth release.
How to use the raytracer
The raytracer has a few settings which can be controlled by issuing ‘set’ commands in the EisenScript.
The following is a list of commands with their default values given as argument:
set raytracer::light [0.2,0.4,0.5]
Sets the position of a light in the scene. If a light source position is not specified, the default position is a light placed at the lower, left corner of the viewport. Notice that only a single
light source is possible as of now. This light source controls the specular and diffuse lightning, and the hard shadow positions. The point light source model will very likely disappear in future
versions of Structure Synth – I’d prefer environment lightning or something else that is easier to setup and use.
set raytracer::shadows true
This allows you to toggle hard shadows on or off. The shadow positions are determined by the light source position above.
Rendering without and with anti-alias enabled
set raytracer::samples 6
This sets the number of samples per pixel. Notice the actual number of samples is the square of this argument, i.e. a value of 2 means 2×2 camera rays will be traced for each pixel. The default value
is 6×6 samples for ‘Raytrace (in Window)’ and 8×8 samples for ‘Raytrace (Final)’. This may sound like a lot of samples per pixel, but the number of samples also control the quality of the
depth-of-field or ambient occlusion rendering. If the image appears noisy, increase the sample count.
To the left, a render with a single Phong light source. To the right, the same picture using ambient occlusion
set raytracer::ambient-occlusion-samples 1
Ambient occlusion is a Global Illumination technique for calculating soft shadows based on geometry alone. Per default the number of samples is set to 1. This may not sound like a lot, but each pixel
will be sampled multiple times courtesy of the ‘raytracer::samples’ count – this makes sense because the ‘raytracer::samples’ will be used to sample both the lens model (for anti-alias and
depth-of-field) and the ambient occlusion. And when I get a chance to implement some better shader materials, the samples can also be used there as well. Notice, that as above, the number refers to
samples per dimensions. Example: if ‘raytracer::ambient-occlusion-samples = 3’ and ‘raytracer::samples = 2’, a total of 3x3x2x2=36 samples will be used per pixel.
Depth-of-Field example
set raytracer::dof [0.23,0.2]
Enables Depth-Of-Field calculations. The first parameter determines the distance to the focal plane, in terms of the viewport coordinates. It is always a number between 0 and 1. The second parameter
determines how blurred objects away from the focal plane appear. Higher values correspond to more blurred foregrounds and backgrounds.
Hint: in order to get the viewport plane distance to a given object, right-click the OpenGL view, and ‘Toggle 3D Object Information’. This makes it possible to fit the focal plane exactly.
set raytracer::size [0x0]
Sets the size of the output window. If the size is 0x0 (the default choice), the output will match the OpenGL window size. If only one dimension is specified, e.g. ‘set raytracer::size [0x600]’, the
missing dimension will be calculated such that the aspect ratio of the OpenGL window will be matched. Be careful when specifying both dimensions: the aspect ratio may be different.
set raytracer::max-threads 0
This determines how many threads, Structure Synth will use during the rendering. The default value of 0, means the system will suggest an ideal number of threads. For a dual-core processor with
hyper-threading, this means four threads will be used. Lower the number of threads, if you want to use the computer for other tasks, and it is unresponsive.
set raytracer::voxel-steps 30
This determines the resolution of the uniform grid used to accelerate ray intersections. Per default a simple heuristic is used to control the resolution based on the number of objects. A value of 30
means that a grid with 30x30x30 cells will be used. The uniform grid used by the raytracer is not very efficient for non-uniform structure, and will likely be replaced in a future version of
Structure Synth.
set raytracer::max-depth 5
This is the maximum recursion depth of the raytracer (for transparent and reflective materials).
Finally two material settings are available:
set raytracer::reflection 0.0
Simple reflection. A value between 0 and 1.
set raytracer::phong [0.6,0.6,0.3]
The first number determines the ambient lightning, the second the diffuse, and the third the specular lightning. Diffuse and specular lightning depends on the location of the light source.
It is also possible to apply the materials to individual primitives in Structure Synth directly. This is done by tagging the objects.
Consider the following EisenScript fragment:
Rule R1 {
{ x 1 } sphere::mymaterial
The sphere above now belongs to the ‘mymaterial’ class, and its material settings may be set using the following syntax:
set raytracer::mymaterial::reflection 0.0
set raytracer::mymaterial::phong [0.6,0.6,0.0]
An important tip: writing these long parameter setting names is tedious and error-prone. But I’ve added a list with the most used EisenScript and Raytracer commands to the context menu in Structure
Synth editor window. Just right-click and select a command. | {"url":"https://blog.hvidtfeldts.net/index.php/category/raytracing/","timestamp":"2024-11-05T20:31:10Z","content_type":"text/html","content_length":"87218","record_id":"<urn:uuid:4ab6cda0-5a18-45c0-8a58-f10b143e5e5d>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00776.warc.gz"} |
Doppler Effect and Wavelength Changes Calculators | List of Doppler Effect and Wavelength Changes Calculators
List of Doppler Effect and Wavelength Changes Calculators
Doppler Effect and Wavelength Changes calculators give you a list of online Doppler Effect and Wavelength Changes calculators. A tool perform calculations on the concepts and applications for Doppler
Effect and Wavelength Changes calculations.
These calculators will be useful for everyone and save time with the complex procedure involved to obtain the calculation results. You can also download, share as well as print the list of Doppler
Effect and Wavelength Changes calculators with all the formulas. | {"url":"https://www.calculatoratoz.com/en/doppler-effect-and-wavelength-changes-Calculators/CalcList-12893","timestamp":"2024-11-05T06:39:15Z","content_type":"application/xhtml+xml","content_length":"89417","record_id":"<urn:uuid:63523e82-80cd-4caa-a324-5de16cbf3262>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00175.warc.gz"} |
Depth-Limited Search
There is a total of 255,168 possible Tic Tac Toe games, and 10²⁹⁰⁰⁰ possible games in Chess. Solving for the entire game tree is not possible nor tractable with vanilla Minimax, therefore we use a
depth-limited version of Minimax.
Need to go, but we are assuming a rational player in Alpha-Beta Pruning.
Depth-limited search considers only a pre-defined number of moves before it stops, without ever getting to a terminal state.
If you want to limit by breadth instead, you should useMonte-Carlo Tree Search.
Is this and Iterative Deepening Search the same thing??#todo
How do you get the value of non-leaf nodes? Use an Heurisitic Function.
What does a heuristic function look like? (Chess)
Input: Current configuration of the board
Output: expected utility (based on what pieces each player has and their locations on the board), and then return a positive or a negative value that represents how favorable the board is for one
player versus the other.
i dont know why this would be a problem? | {"url":"https://stevengong.co/notes/Depth-Limited-Search","timestamp":"2024-11-11T16:11:00Z","content_type":"text/html","content_length":"14987","record_id":"<urn:uuid:2f76a6a4-4f45-4ed6-8c32-a1370931cba3>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00164.warc.gz"} |
Two investment advisers are comparing performance. One averaged
a 16.27% rate of return and the other...
Two investment advisers are comparing performance. One averaged a 16.27% rate of return and the other...
Two investment advisers are comparing performance. One averaged a 16.27% rate of return and the other a 20.51% rate of return. However, the β of the first investor was 1.5, whereas that of the second
investor was 1.
Required: Suppose that the T-bill rate was 3% and the market return during the period was 15%. Aside from the issue of general movements in the market, outline the difference between the superior and
inferior portfolios.
Answer% Do not round intermediate calculations. Input your answer as a percent rounded to 2 decimal places (for example: 28.31%).
Two investment advisers are comparing performance. One averaged a 16.27% rate of return and the other a 20.51% rate of return. However, the β of the first investor was 1.5, whereas that of the second
investor was 1.
Required: Suppose that the T-bill rate was 3% and the market return during the period was 15%. Aside from the issue of general movements in the market, outline the difference between the superior and
inferior portfolios.
Answer% Do not round intermediate calculations. Input your answer as a percent rounded to 2 decimal places (for example: 28.31%).
Two investment advisers are comparing performance. One averaged a 16.27% rate of return and the other a 20.51% rate of return. However, the β of the first investor was 1.5, whereas that of the second
investor was 1.
Required: Suppose that the T-bill rate was 3% and the market return during the period was 15%. Aside from the issue of general movements in the market, outline the difference between the superior and
inferior portfolios.
Answer% Do not round intermediate calculations. Input your answer as a percent rounded to 2 decimal places (for example: 28.31%).
Required return for Portfolio 1 = 3% + (15% -3%)*1.5 = 21%
Required return for Portfolio 2 = 3% + (15% -3%)*1 = 15%
Alpha of portfolio 1= Actual return - Required return = 16.27% - 21% = -4.73%
Alpha of portfolio 2= Actual return - Required return = 20.51 - 15% = 5.51%
A superior portfolio is one which has generated excess returns over the required return and has lower beta. It has also generated higher returns and standard deviation is only slightly higher.
Considering the characteristics, Portfolio 2 is a superior portfolio. | {"url":"https://justaaa.com/finance/354663-two-investment-advisers-are-comparing-performance","timestamp":"2024-11-04T18:28:38Z","content_type":"text/html","content_length":"44933","record_id":"<urn:uuid:18109995-6d8a-47f1-826c-f7427e28d662>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00481.warc.gz"} |
Multiplication Questions Year 4 Worksheets - Worksheets Day
Multiplication Questions Year 4 Worksheets
What are multiplication questions?
Multiplication questions are mathematical problems that involve multiplying two or more numbers together to find the total or product. In Year 4, students are introduced to multiplication and are
expected to understand the concept and apply it to solve problems.
Why are multiplication questions important?
Multiplication is a fundamental mathematical operation that is used in everyday life. It helps in calculating quantities, determining total costs, and solving various real-life problems. Mastering
multiplication skills at a young age is crucial for future mathematical success.
What do Year 4 multiplication worksheets include?
Year 4 multiplication worksheets typically cover a range of topics, including:
– Multiplication tables: Students practice memorizing and applying multiplication facts up to 12 times 12.
– Multiplication using arrays: Students learn to represent multiplication problems using arrays or grids.
– Multiplication word problems: Students solve real-life word problems that require multiplication skills.
– Missing factors: Students fill in the missing numbers in multiplication equations.
How can Year 4 multiplication worksheets be used?
Year 4 multiplication worksheets can be used in various ways to enhance learning and practice. Teachers can incorporate them into classroom lessons, assign them as homework, or use them for
individual or group activities. Parents can also use these worksheets to support their child’s learning at home.
Tips for solving multiplication questions
– Memorize multiplication tables: Practice regularly to memorize multiplication facts.
– Understand the concept: Understand that multiplication is repeated addition or groups of equal numbers.
– Use visual aids: Draw arrays or use objects to help visualize and solve multiplication problems.
– Practice word problems: Solve word problems that involve multiplication to apply the concept to real-life situations.
– Review regularly: Continually review multiplication skills to maintain proficiency.
In conclusion
Year 4 multiplication worksheets are valuable resources for developing and reinforcing multiplication skills. By using these worksheets and following the provided tips, students can improve their
understanding and mastery of multiplication, setting a solid foundation for future mathematical success.
Short Multiplication Worksheets Year 4
Multiplication Practice Worksheets
Year 4 Times Tables Test Practice Online
4th Grade Multiplication Worksheets
Math Multiplication Worksheets Grade 4
Pin On Year 4 Maths Worksheets And Printable Pdf
Grade 4 Multiplication Worksheets
Multiplication Tables Check MTC Worksheets
Multiplication Worksheets Year 4 Printablemultiplicationcom
Multiplication Questions Year 4 Worksheets | {"url":"https://www.worksheetsday.com/multiplication-questions-year-4-worksheets/","timestamp":"2024-11-05T03:18:27Z","content_type":"text/html","content_length":"51987","record_id":"<urn:uuid:15127c12-778e-40cd-87b4-e7c9e24dd049>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00380.warc.gz"} |
Is applied math a good major?
If you are thinking of getting an applied math degree then you are probably wondering whether or not it is a good degree to get.
This post will show you whether or not applied math will be a good major for you and some things that you might want to consider.
So, is applied math a good major? Applied math can be a very marketable major provided that you take classes from the field that you want to enter. However, it is a challenging degree to obtain.
There are actually a number of things to consider if you are thinking of getting an applied math degree and there are a number of alternatives that you might prefer.
Job outlook
The Bureau of Labor Statistics predicts that the demand for mathematicians will rise by 33% by 2026. This is mainly due to the surge in data that companies have been receiving in recent years that
they need people with statistical skills to make sense of. This is good for an applied math major which will normally include some statistics as well as math.
According to Payscale, the average salary for someone that has an applied math degree is $73,000. This is $12,000 more than the average across all majors.
Jobs you can get with an applied math degree
Since applied math is used in many different fields and since it has many different use cases, an applied math degree will open you up to many different job opportunities.
With that being said, something to consider is that many of the more lucrative jobs will require a master’s degree.
However, there are many jobs that an applied math bachelor’s degree will qualify you for.
However, since applied math is a general degree not specific to a certain job field it would help to also take classes from the field that you want to enter. The reason for this is that, while an
applied math degree will qualify you for many jobs, having knowledge for that specific domain will be very useful.
For example, if you want to get into data science then taking lots of statistics, computer science and data analytics classes will help you a lot.
In addition to taking classes relevant to the field that you want to enter it would be very helpful to try and get some internships in that field while you are an undergrad.
You’ll likely be taking a number of computer science classes as part of an applied math degree. Many applied math majors go on to become software engineers since it is a highly rated job for people
with a bachelor’s degree.
If the applied math degree at your college does not require a data structures or an algorithms class it would broaden your job opportunities a lot if you were to choose them as electives.
Jobs that you could get with a master’s degree in applied math could include:
Jobs that you could get with a bachelor’s degree in applied math could include:
• Business analyst
• Financial analyst
• Insurance underwriter
• Data analyst
• Software engineer
• Market researcher
• Actuary
• Digital marketer
Classes you will be taking in applied math
An applied math degree will allow you to take a number of classes from a number of different fields.
Most of the classes you will take will be math classes including:
• Calculus
• Linear algebra
• Discrete math
• Differential equations
• Partial differential equations
• Graph theory
• Numerical analysis
• Combinatorics
Classes from other fields that you might take could include:
• Computer science
• Algorithms
• Statistics
• Probability
• Big data
• Physics
Alternatives to an applied math degree
If you are thinking of getting an applied math degree then there are a number of other degrees that you might want to consider which can include:
You can click on their links to see what I have written about them as majors themselves.
Is an applied math degree marketable?
How marketable an applied math degree will be will depend a lot on what you do in your time in the major and the classes you take.
If you take classes relevant to the types of jobs that you want to get upon graduating then it will be a very marketable degree. It will be especially marketable if you also can get some summer
internships and do some projects in the field that you want to enter.
If you just take the traditional courses, while in the major, it will still be marketable for many different jobs such as data analytics. But you will have to do some extra work to increase your
skills in that particular area.
For example, if you want to get a job in data analytics, but you didn’t take many data analysis classes then you could work through the material on websites such as Datacamp in your spare time.
Should I do applied math or statistics?
Applied math and statistics will both feature a number of the same courses. The difference between then will be that applied math will not require many statistics classes but more classes from
mathematics such as differential equations. Whereas, statistics will require fewer upper-division math classes but more stats classes.
The major that would be best for you will depend on what interests you and what you want to do after college. If you enjoy working with data and you want to work as a data scientist then statistics
would be marginally better (but applied math would still be good especially if you take stats and cs classes).
One thing to consider is that applied math degrees do tend to give you a lot of flexibility in the types of classes that you take so you could make the degree be very close to a stats degree if you
wanted to.
Should I do applied or pure math?
Applied math will allow you to take more classes from different fields such as statistics or computer science. Applied math will also feature fewer proof-based classes. These factors will generally
mean that an applied math degree will be more marketable and easier since most students struggle with proof-based classes.
However, if you want to go to graduate school for a mathematical subject then you will often find that a math degree would be more useful.
Is a bachelor’s degree in applied math sufficient?
As mentioned above, many of the more math-heavy jobs will require a master’s degree in mathematics.
However, there will be many jobs that an applied math degree will qualify you for. Before choosing your classes it would help to consider the job that you want and to pick your classes accordingly.
Applied math is a difficult major
Applied math is classed as a STEM major. Data shows that 35% of those who initially choose a STEM major switch out of the major within three years.
The reason for this is likely to be that STEM majors tend to require a much larger time commitment than most people are used to.
While an applied math degree will not feature as many proof-based classes and a pure math degree, the classes will still be challenging and you will have to study a lot in the major.
How difficult the degree will be for you will depend on how much math you have already done. If you have already taken a number of math classes in high school and done well in them then you will be
more likely to do well in an applied math major.
With that being said, the degree will be designed assuming that you are starting without much prerequisite knowledge so it will still be possible for you to do well in the major if you haven’t
studied much math. However, you will likely have to study a lot more than you are used to.
Another thing to consider is your level of interest in the degree. If you find things such as physics, math, data and computing interesting then an applied math degree will likely be easier and more
rewarding for you. This is because it will be easier for you to motivate yourself to study when things are getting challenging. | {"url":"https://college-corner.com/is-applied-math-a-good-major/","timestamp":"2024-11-11T11:41:00Z","content_type":"text/html","content_length":"63290","record_id":"<urn:uuid:70cba014-5220-4c17-8094-cca5b4ee1efa>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00655.warc.gz"} |
From Scholarpedia
James Murdock (2006), Scholarpedia, 1(12):1904. doi:10.4249/scholarpedia.1904 revision #91898 [link to/cite this article]
(Redirected from
A mathematical model of a real-world problem is usually based on idealized assumptions. If the model proves inadequate, it can be improved by adding small terms that were neglected at first. A model
obtained by adding small parameters to a given system is called an unfolding of the original system. (One may picture the various behaviors of the expanded system as being hidden or "folded up" when
the parameters are set to zero.)
For instance, a pair of coupled oscillators that are first modelled as a conservative system in exact resonance might be improved by adding three small parameters representing damping in each
oscillator and detuning of the resonance. Perturbation methods may then be used to obtain approximate solutions expanded in the small parameters, and bifurcation analysis may be used to determine the
qualitative changes in the behavior of the system in a neighborhood of the original model.
The mathematical theory of unfolding originated in the theory of singularities of mappings and in catastrophe theory. (For an introduction from this point of view, see Bruce and Giblin 1992.) In
dynamical systems, unfolding means the attempt to exhibit all possible behaviors for systems close to a given original system (sometimes called the organizing center of the unfolding) by adding a
finite number \(k\) of small parameters \(\mu_1,\dots,\mu_k\ .\) The number \(k\) is called the codimension of the organizing center. In order to begin, it is necessary to specify some space of
admissible systems (at least a topological space, usually a smooth manifold) and some equivalence relation on this space expressing the idea that two equivalent systems "have the same behavior".
Under these conditions it makes sense to specify an original system (or organizing center) and ask whether there exists a \(k\)-parameter family (for some \(k\)) of systems that intersects each
equivalence class in a neighborhood of the organizing center. If so, the goal of the theory can be achieved. If not, the organizing center is said to have "infinite codimension".
Unfoldings of Matrices
Many finite-dimensional linear systems can be represented by a square matrix, whether it be the matrix of a linear transformation or of a linear system of differential equations \(\dot x = Ax\ .\) In
either case, a natural equivalence relation is similarity. Suppose that \(A_0\) is a given \(n\times n\) matrix, taken as the organizing center. We wish to construct a family \(A(\mu_1,\dots,\mu_k)\)
of matrices that depends continuously (or better, smoothly) on \(\mu_1,\dots,\mu_k\ ,\) reduces to \(A_0\) when \(\mu_1=\cdots=\mu_k=0\ ,\) and intersects each similarity class near \(A_0\ .\) We may
assume \(A_0\) is in Jordan normal form, but it cannot always be the case that \(A(\mu_1,\dots,\mu_k)\) will be in Jordan form for all \(\mu_1,\dots,\mu_k\) near zero, because the Jordan form of a
matrix does not (always) depend continuously on the matrix. Since the similarity class \(M\) of \(A_0\) is a smooth submanifold of \(\R^{n^2}\) (the space of \(n\times n\) matrices), we require that
\(A(\mu_1,\dots,\mu_k)\ ,\) for \(\mu_1,\dots,\mu_k\) near zero, be a smoothly embedded submanifold transverse to \(M\ .\) Such an unfolding of \(A_0\) is called versal (an abbreviation of
transversal), and automatically intersects all similarity classes near \(A_0\) (even though these classes have various dimensions). The smallest possible number \(k\) of parameters will equal the
codimension (in the usual manifold sense) of \(M\) in \({\mathbb R}^{n^2}\ ;\) this explains the use of "codimension" as defined above. A versal unfolding of this kind is called miniversal.
If \(n=2\) and \(A_0=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\ ,\) the codimension is two and two different miniversal unfoldings are \[\begin{pmatrix} \mu_2&1\\ \mu_1&\mu_2\end{pmatrix}\] and \
(\begin{pmatrix} 0&1\\ \mu_1&\mu_2\end{pmatrix}\ .\) The first form is known as a striped matrix. We may observe that this striped matrix commutes with \(A_0^*\) (the adjoint or conjugate transpose
of \( A_0 \)), and it is always true that a miniversal unfolding of a matrix can be found by obtaining the most general matrix that commutes with \( A_0^*\ .\) The second form illustrates that the
striped matrix is not the simplest choice for the unfolding, if "simplest" is interpreted as "having the most zero entries". (The striped matrix has the advantage that this unfolding is not only
transverse to \(M\) but is orthogonal with respect to the inner product \(\langle P,Q\rangle = {\rm tr}\, PQ^*\ .\)) These unfoldings (for matrices of any size) are due to Arnold and are explained in
Arnold (1988, section 30), Wiggins (2003, section 20.5), and Murdock (2003, chapter 3).
There is a close relationship between unfoldings and normal form ideas. Any smooth one-parameter family \(A(\varepsilon)\) of matrices with \(A(0)=A_0\) can be embedded (up to similarity) into any
miniversal unfolding \(A(\mu_1,\dots,\mu_k)\) of \(A_0\ ;\) that is, there is a smooth family of matrices \(T(\varepsilon)\) with \(T(0)=I\) such that \[T(\varepsilon)^{-1}A(\varepsilon)T(\
varepsilon) = A(\mu_1(\varepsilon),\dots,\mu_k(\varepsilon))\ ,\] where the functions \(\mu_i(\varepsilon)\) are smooth. The form of the unfolding, as well as the power series expansions of \(\mu_i(\
varepsilon)\) can be computed by normal form methods. Writing \(U(\varepsilon)=I+\varepsilon U_1+\cdots\) and \(A(\varepsilon)=A_0+\varepsilon A_1 +\cdots\ ,\) and setting \(T(\varepsilon)^{-1}A(\
varepsilon)T(\varepsilon)=B(\varepsilon)=A_0 + \varepsilon B_1 +\cdots\ ,\) one finds that \[L_{A_0} U_1 = A_1-B_1\ ,\] where \(L_P Q = QP-PQ\ .\) Similar homological equations exist at higher
orders. Choosing a complement to the image of \(L_{A_0}\) (that is, choosing a normal form style) fixes the form of the unfolding to which \(B(\varepsilon)\) belongs, and the rest of the computation
determines the \(\mu_i(\varepsilon)\ .\) The striped matrix unfolding comes from the inner product normal form style \(\ker\Lambda_{A_0^*}\ ,\) and the second type of unfolding illustrated above
comes from the simplified normal form style.
For nonlinear dynamical systems, it is much more difficult to define an appropriate space of systems and equivalence relation with which to begin. Any suitable space of systems will be
infinite-dimensional, and under the most natural equivalence relations (either topological equivalence or topological conjugacy), most systems turn out to have infinite codimension, so a versal
unfolding is impossible. We must either restrict attention to those few (but important) systems that have finite codimension with respect to topological equivalence, or else adopt a coarser
equivalence relation. One often-used equivalence relation is static equivalence, in which attention is limited to the equilibrium solutions.
An unfolding of a dynamical system under static equivalence is one that exhibits all possible bifurcations of the equilibrium (rest) points, up to topological equivalence of the set of equilibria. It
is easiest to localize the problem to the bifurcations of a single equilibrium point of the organizing center. Since no bifurcations take place in hyperbolic directions, it is enough to unfold the
system on its center manifold. The various cases are classified by the eigenvalues of the Jacobian matrix (i.e., the linearized system at the equilibrium) on the imaginary axis.
A single zero eigenvalue
The simplest case is a system with a single zero eigenvalue at the equilibrium, leading to a center manifold of dimension one. Since the behavior of the system should be dominated by the lowest order
term, one considers a (scalar) organizing center of the form \(\dot x = x^k\) (for \(k\) a positive integer). An unfolding under static equivalence is \[\dot x = \mu_1 +\mu_2x+\cdots+\mu_{k-1}x^{k-2}
+ x^k\ ,\] the interesting point being the absence of \(x^{k-1}\ .\) For instance,
• the unfolding of \(\dot x = x^2\) is \(\dot x = \mu_1 + x^2\ ,\) which exhibits a saddle-node bifurcation as \(\mu_1\) is varied.
• The unfolding of \(\dot x = x^3\) is \(\dot x = \mu_1 + \mu_2x + x^3\ .\) If \(\mu_1=0\) this gives a pitchfork bifurcation as \(\mu_2\) is varied; \(\mu_1\) is an "imperfection parameter" that
splits the pitchfork into a saddle-node bifurcation and a continuation curve (i.e., a curve of equilibria that does not bifurcate).
This sort of analysis is very close to the original use of unfoldings in singularity theory. For further information see section 6.3 of Murdock (2003), and for complete details of this approach see
Golubitsky and Schaeffer (1985) and Golubitsky et al. (1988). (In the last references, one of the unfolding parameters is treated as the bifurcation parameter and is not counted in the codimension.)
A conjugate pair
The organizing center \[\begin{pmatrix} \dot x \\ \dot y\end{pmatrix} =\begin{pmatrix} 0 &-1\\1 &0\end{pmatrix} \begin{pmatrix} x\\y\end{pmatrix}+ \alpha(x^2+y^2) \begin{pmatrix} x\\y\end{pmatrix} +
\beta(x^2+y^2)\begin{pmatrix} -x\\y\end{pmatrix}\ ,\] which is in (semisimple) normal form truncated at the quadratic terms and has a conjugate pair of eigenvalues \(\pm \imath\ ,\) takes the form \
[\dot r = \alpha r^3\] \[\dot\theta = 1 + \beta r^2\ .\] If \(\alpha\ne 0\ ,\) an unfolding under local topological equivalence (but not topological conjugacy) is \[\dot r = \mu_1 r + \alpha r^3\] \
[\dot\theta = 1 + \beta r^2\ .\] This exhibits an Andronov-Hopf bifurcation as \(\mu_1\) is varied.
A nonsemisimple double eigenvalue
For the case of a double zero eigenvalue with a nonsemisimple linear part, the organizing center is \[\begin{pmatrix} \dot x\\ \dot y\end{pmatrix} = \begin{pmatrix} 0&1\\0&0\end{pmatrix}\begin
{pmatrix} x\\y\end{pmatrix} + \begin{pmatrix} 0\\ \alpha x^2+\beta xy\end{pmatrix}\ ,\] with quadratic term in (simplified) normal form. Assuming that \(\alpha\ne 0\ ,\) an unfolding is \[ \begin
{pmatrix} \dot x\\ \dot y\end{pmatrix} = \begin{pmatrix} 0\\ \mu_1\end{pmatrix} + \begin{pmatrix} 0&1\\0&\mu_2\end{pmatrix}\begin{pmatrix} x\\ y\end{pmatrix} + \begin{pmatrix} 0\\ \alpha x^2+\beta xy
\end{pmatrix}\ .\] It is remarkable that this can be proved to be an unfolding under topological equivalence. (The proof is difficult and uses one of or another of several "blowing-up" techniques.)
For further discussion see Bogdanov-Takens bifurcation.
Comparing this unfolding to the matrix unfolding of \(\begin{pmatrix} 0&1\\0&0\end{pmatrix}\) given above, it is seen that the codimension is the same but that one unfolding parameter appears in the
constant term rather than in the matrix. This phenomenon is typical, as can be seen using asymptotic unfoldings, sketched below.
Additional Examples
For additional examples of unfoldings presented in an elementary manner, see Kuznetsov (1998), Guckenheimer and Holmes (1986), and Wiggins (2003). For a detailed treatment of some unfoldings with
respect to topological equivalence, proved via blowup techniques, see Dumortier et al. (1991).
Asymptotic Unfoldings
As in the case of matrices, unfoldings of dynamical systems can be approached from a normal form viewpoint. Beginning with an organizing center \[\dot x = Ax + a_1(x) + a_2(x)+\cdots\] in normal form
(of some chosen style), consider an arbitrary one-parameter perturbation of the following form (where the degree of a term is the subscript plus \(1\)): \[\dot x = Ax + a_1(x) + a_2(x)+\cdots\ :\]
\[+\varepsilon(p + Bx + b_1(x) + b_2(x) + \cdots) + \cdots\ .\]
The final \(\cdots\) refer to higher powers of \(\varepsilon\ .\) Notice that the \(\varepsilon\) part contains a constant term \(p\ ,\) not present in the unperturbed system. Normal form methods can
be applied to simplify \(p\ ,\) \(B\ ,\) \(b_i\ ,\) and so forth. Whatever coefficients cannot be eliminated become unfolding parameters expressed as functions of \(\varepsilon\ .\) Stopping the
calculation at a finite degree in \(x\) gives an unfolding with finite codimension, but it is (usually) not a versal unfolding with respect to topological equivalence. Nevertheless, it is often
possible to prove that the unfolding correctly exhibits specific features of the behavior. Under generic hypotheses on the quadratic terms \(a_2\ ,\) the number of unfolding parameters in the
constant and linear terms (coming from \(p\) and \(B\)) always equals the codimension of the matrix unfolding of \(A\ ,\) explaining the remark in the last section. Asymptotic unfoldings have been
used informally without a name for many years, and a number of them are computed by Elphick et al. (1992). A general treatment is given in section 6.4 of Murdock (2003). (The restriction to the
simplified normal form style has since been removed, see Murdock and Malonza.) This approach to unfoldings makes the computation of unfoldings quite easy, as illustrated (in the references)by an
example of codimension 14. (Deriving useful dynamical conclusions from unfoldings of high codimension is another matter altogether.)
• Arnold V. I. (1988) Geometrical Methods in the Theory of Ordinary Differential Equations.Springer, New York, second edition.
• Bruce J. W. and Giblin P. G. (1992) Curves and Singularities. Cambridge University Press, Cambridge, England, second edition.
• Dumortier, F. and Roussarie, R. and Sotomayor, J. (1991) Generic three-parameter families of planar vector elds, unfoldings of saddle, focus, and elliptic singularities with nilpotent linear
parts. Lecture Notes in Mathematics, 1480:1-164.
• Elphick C., Tirapegui E., Brachet M. E., Coullet P., and Iooss G. (1987) A simple global characterization for normal forms of singular vector fields. Physica D, 29:95–127.
• Golubitsky M. and Schaeffer D.G. (1985) Singularities and Groups in Bifurcation Theory, volume 1. Springer, New York.
• Golubitsky M., Stewart I., and Schaeffer D.G. (1988) Singularities and Groups in Bifurcation Theory, volume 2. Springer, New York.
• Guckenheimer J. and Holmes P. (1986) Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Corrected second printing, Springer, N.Y.
• Kuznetsov Y.A. (1998) Elements of Applied Bifurcation Theory, Second edition, Springer, N.Y.
• Murdock J. (2003) Normal Forms and Unfoldings for Local Dynamical Systems. Springer, New York.
• Murdock J. and Malonza D.M. Improved computation of asymptotic unfoldings. In preparation.
• Wiggins S. (2003) Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer, New York, second edition.
Internal references
External Links
See Also
Bifurcation, Dynamical Systems, Equilibria, Jordan Normal Form, Normal Forms, Ordinary Differential Equations | {"url":"http://var.scholarpedia.org/article/Unfolding","timestamp":"2024-11-12T10:33:53Z","content_type":"text/html","content_length":"50098","record_id":"<urn:uuid:e03e39a3-5c6d-4499-b681-966283efcee8>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00223.warc.gz"} |
aive Bayes
Classification edge for naive Bayes classifier
e = edge(Mdl,tbl,ResponseVarName) returns the Classification Edge (e) for the naive Bayes classifier Mdl using the predictor data in table tbl and the class labels in tbl.ResponseVarName.
The classification edge (e) is a scalar value that represents the weighted mean of the Classification Margins.
e = edge(Mdl,tbl,Y) returns the classification edge for Mdl using the predictor data in table tbl and the class labels in vector Y.
e = edge(Mdl,X,Y) returns the classification edge for Mdl using the predictor data in matrix X and the class labels in Y.
e = edge(___,'Weights',Weights) returns the classification edge with additional observation weights supplied in Weights using any of the input argument combinations in the previous syntaxes.
Estimate Test Sample Edge of Naive Bayes Classifier
Estimate the test sample edge (the classification margin average) of a naive Bayes classifier. The test sample edge is the average test sample difference between the estimated posterior probability
for the predicted class and the posterior probability for the class with the next lowest posterior probability.
Load the fisheriris data set. Create X as a numeric matrix that contains four measurements for 150 irises. Create Y as a cell array of character vectors that contains the corresponding iris species.
load fisheriris
X = meas;
Y = species;
rng('default') % for reproducibility
Randomly partition observations into a training set and a test set with stratification, using the class information in Y. Specify a 30% holdout sample for testing.
cv = cvpartition(Y,'HoldOut',0.30);
Extract the training and test indices.
trainInds = training(cv);
testInds = test(cv);
Specify the training and test data sets.
XTrain = X(trainInds,:);
YTrain = Y(trainInds);
XTest = X(testInds,:);
YTest = Y(testInds);
Train a naive Bayes classifier using the predictors XTrain and class labels YTrain. A recommended practice is to specify the class names. fitcnb assumes that each predictor is conditionally and
normally distributed.
Mdl = fitcnb(XTrain,YTrain,'ClassNames',{'setosa','versicolor','virginica'})
Mdl =
ResponseName: 'Y'
CategoricalPredictors: []
ClassNames: {'setosa' 'versicolor' 'virginica'}
ScoreTransform: 'none'
NumObservations: 105
DistributionNames: {'normal' 'normal' 'normal' 'normal'}
DistributionParameters: {3x4 cell}
Mdl is a trained ClassificationNaiveBayes classifier.
Estimate the test sample edge.
e = edge(Mdl,XTest,YTest)
The margin average is approximately 0.87. This result suggests that the classifier labels predictors with high confidence.
Estimate Test Sample Weighted Edge of Naive Bayes Classifier
Estimate the test sample weighted edge (the weighted margin average) of a naive Bayes classifier. The test sample edge is the average test sample difference between the estimated posterior
probability for the predicted class and the posterior probability for the class with the next lowest posterior probability. The weighted sample edge estimates the margin average when the software
assigns a weight to each observation.
Load the fisheriris data set. Create X as a numeric matrix that contains four measurements for 150 irises. Create Y as a cell array of character vectors that contains the corresponding iris species.
load fisheriris
X = meas;
Y = species;
rng('default') % for reproducibility
Suppose that some of the measurements are lower quality because they were measured with older technology. To simulate this effect, add noise to a random subset of 20 measurements.
idx = randperm(size(X,1),20);
X(idx,:) = X(idx,:) + 2*randn(20,size(X,2));
Randomly partition observations into a training set and a test set with stratification, using the class information in Y. Specify a 30% holdout sample for testing.
cv = cvpartition(Y,'HoldOut',0.30);
Extract the training and test indices.
trainInds = training(cv);
testInds = test(cv);
Specify the training and test data sets.
XTrain = X(trainInds,:);
YTrain = Y(trainInds);
XTest = X(testInds,:);
YTest = Y(testInds);
Train a naive Bayes classifier using the predictors XTrain and class labels YTrain. A recommended practice is to specify the class names. fitcnb assumes that each predictor is conditionally and
normally distributed.
Mdl = fitcnb(XTrain,YTrain,'ClassNames',{'setosa','versicolor','virginica'});
Mdl is a trained ClassificationNaiveBayes classifier.
Estimate the test sample edge.
e = edge(Mdl,XTest,YTest)
The average margin is approximately 0.59.
One way to reduce the effect of the noisy measurements is to assign them less weight than the other observations. Define a weight vector that gives the better quality observations twice the weight of
the other observations.
n = size(X,1);
weights = ones(size(X,1),1);
weights(idx) = 0.5;
weightsTrain = weights(trainInds);
weightsTest = weights(testInds);
Train a naive Bayes classifier using the predictors XTrain, class labels YTrain, and weights weightsTrain.
Mdl_W = fitcnb(XTrain,YTrain,'Weights',weightsTrain,...
Mdl_W is a trained ClassificationNaiveBayes classifier.
Estimate the test sample weighted edge using the weighting scheme.
e_W = edge(Mdl_W,XTest,YTest,'Weights',weightsTest)
The weighted average margin is approximately 0.69. This result indicates that, on average, the weighted classifier labels predictors with higher confidence than the noise corrupted predictors.
Select Naive Bayes Classifier Features by Comparing Test Sample Edges
The classifier edge measures the average of the classifier margins. One way to perform feature selection is to compare test sample edges from multiple models. Based solely on this criterion, the
classifier with the highest edge is the best classifier.
Load the ionosphere data set. Remove the first two predictors for stability.
load ionosphere
X = X(:,3:end);
rng('default') % for reproducibility
Randomly partition observations into a training set and a test set with stratification, using the class information in Y. Specify a 30% holdout sample for testing.
cv = cvpartition(Y,'Holdout',0.30);
Extract the training and test indices.
trainInds = training(cv);
testInds = test(cv);
Specify the training and test data sets.
XTrain = X(trainInds,:);
YTrain = Y(trainInds);
XTest = X(testInds,:);
YTest = Y(testInds);
Define these two training data sets:
• fullXTrain contains all predictors.
• partXTrain contains the 10 most important predictors.
fullXTrain = XTrain;
idx = fscmrmr(XTrain,YTrain);
partXTrain = XTrain(:,idx(1:10));
Train a naive Bayes classifier for each predictor set.
fullMdl = fitcnb(fullXTrain,YTrain);
partMdl = fitcnb(partXTrain,YTrain);
fullMdl and partMdl are trained ClassificationNaiveBayes classifiers.
Estimate the test sample edge for each classifier.
fullEdge = edge(fullMdl,XTest,YTest)
partEdge = edge(partMdl,XTest(:,idx(1:10)),YTest)
The test sample edge of the classifier using the 10 most important predictors is larger.
Input Arguments
Weights — Observation weights
ones(size(X,1),1) (default) | numeric vector | name of a variable in tbl
Observation weights, specified as a numeric vector or the name of a variable in tbl. The software weighs the observations in each row of X or tbl with the corresponding weights in Weights.
If you specify Weights as a numeric vector, then the size of Weights must be equal to the number of rows of X or tbl.
If you specify Weights as the name of a variable in tbl, then the name must be a character vector or string scalar. For example, if the weights are stored as tbl.w, then specify Weights as 'w'.
Otherwise, the software treats all columns of tbl, including tbl.w, as predictors.
Data Types: double | char | string
More About
Classification Edge
The classification edge is the weighted mean of the classification margins.
If you supply weights, then the software normalizes them to sum to the prior probability of their respective class. The software uses the normalized weights to compute the weighted mean.
When choosing among multiple classifiers to perform a task such as feature section, choose the classifier that yields the highest edge.
Classification Margins
The classification margin for each observation is the difference between the score for the true class and the maximal score for the false classes. Margins provide a classification confidence measure;
among multiple classifiers, those that yield larger margins (on the same scale) are better.
Posterior Probability
The posterior probability is the probability that an observation belongs in a particular class, given the data.
For naive Bayes, the posterior probability that a classification is k for a given observation (x[1],...,x[P]) is
$\stackrel{^}{P}\left(Y=k|{x}_{1},..,{x}_{P}\right)=\frac{P\left({X}_{1},...,{X}_{P}|y=k\right)\pi \left(Y=k\right)}{P\left({X}_{1},...,{X}_{P}\right)},$
• $P\left({X}_{1},...,{X}_{P}|y=k\right)$ is the conditional joint density of the predictors given they are in class k. Mdl.DistributionNames stores the distribution names of the predictors.
• π(Y = k) is the class prior probability distribution. Mdl.Prior stores the prior distribution.
• $P\left({X}_{1},..,{X}_{P}\right)$ is the joint density of the predictors. The classes are discrete, so $P\left({X}_{1},...,{X}_{P}\right)=\sum _{k=1}^{K}P\left({X}_{1},...,{X}_{P}|y=k\right)\pi
Prior Probability
The prior probability of a class is the assumed relative frequency with which observations from that class occur in a population.
Classification Score
The naive Bayes score is the class posterior probability given the observation.
Extended Capabilities
Tall Arrays
Calculate with arrays that have more rows than fit in memory.
The edge function fully supports tall arrays. For more information, see Tall Arrays.
Version History
Introduced in R2014b | {"url":"https://de.mathworks.com/help/stats/classificationnaivebayes.edge.html","timestamp":"2024-11-05T00:50:31Z","content_type":"text/html","content_length":"120219","record_id":"<urn:uuid:52456fe1-fb3e-4a75-ab79-b82372e87d5e>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00046.warc.gz"} |
Reconstructing the calculating machine Z3
The project of reconstructing Konrad Zuse's calculating machine Z3 was initiated by Prof. Dr. Raúl Rojas (FU Berlin) and Dr. Horst Zuse (TU Berlin). Prof. Dr. Raúl Rojas was the technical leader and
was in charge of the construction of the memory and the processor for which he defined the block architecture. As foundation for this served the patent application of the Z391 of Konrad Zuse.
(published in: R. Rojas (ed.), Die Rechenmaschinen von Konrad Zuse, Springer-Verlag, Berlin, 1998).
On the basis of the block architecture Dr. Frank Darius (FU Berlin) conceptualized the concrete circuits with modern relays. The circuits were reviewed and adjusted to the existing hardware by Georg
Heyne (head of the electronic laboratory of Fritz-Haber-Institute, Max-Planck-Gesellschaft). Wolfram Däumel (Fritz-Haber-Institute) designed the layout of the boards and Lothar Schönbein and Torsten
Vetter (Fritz-Haber-Institute) did the microprogramming and assembly. With support of Bernhard Frötschl (FU Berlin) and Prof. Rojas Cüneyt Göktekin (FU Berlin) created a user interface to control the
Z3 via PC instead of the Z3-console. In addition to that Cüneyt Göktekin implemented this virtual console in the programming languages Java and C. The debugging of hard- and software was performed by
Dr. Darius, Bernhard Frötschl, Cüneyt Göktekin and Prof. Rojas.
With their teachers Thekla Lewandowki, Olaf Morgenbrod und Norbert Wagner pupils of the 1. Berufsschule Pankow welded the frames for the memory and the calculator unit. The pupils of the
Konrad-Zuse-Schule in Hünfeld constructed together with their teacher Uwe Trautrims the punched tape reader and the punching machine.
Video of the Z3 reconstruction
In this video the reconstruction of the Z3 is presented. It was shot on the 9.1.2002. On the left you can see the calculating unit and on the right the memory. The console of the Z3 with which the
next executive operation and the operands were chosen was substituted with a PC in the replica. Like the original Z3 the reconstruction can perform the following operations: addition, subtraction,
multiplication, division, square root and the conversion of decimal into binary digits and vice versa. Above the calculating unit you can see the stepping switches which are responsible for the
control of the operations. The memory stores 32 floating point digits per 22 bits (1 bit for the algebraic sign, 7 for the exponent and 14 for the mantissa).
Reconstruction of the Z3 adder
The addition circuit of the Z3 as an interactive demonstration object:
The addition circuit of the Z3 as a poster with additional information on the Z3:
The mantissa addition circuit of the Z3 was built on a DIN A3 sized board for exhibitions and lectures about Konrad Zuse.
According to the original documents a 10-bit electromechanical circuit was developed. It consists of two manually settable registers to enter the summands - this was realized just as in the original
using bistable relays - and of two XOR gates and a AND gate per bit for the calculation of the carry-over. The entered summands, all intermediate data of partial operations and the end result are
displayed with LEDs.
In the same manner as in the original machine each calculation was divided into three steps that can sequentially be triggered with a turning knob. A switch allows to choose between addition and
Acknowledgment: We like to thank the electronic laboratory of the Fritz-Haber-Institute (MPG) especially Georg Heyne, Wolfram Däumel and Viktor Platschkowski for their advices, the use of their
layout software and the digital milling machine.
of the reconstruction of the Z3 project. | {"url":"http://zuse.zib.de/reconstructionZ3;jsessionid=6C3E9EE3CBD784A7A221B6132E3609FF","timestamp":"2024-11-09T10:18:46Z","content_type":"application/xhtml+xml","content_length":"53443","record_id":"<urn:uuid:07bb5ae6-e816-4b32-bf65-4f0c78a2ecb6>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00385.warc.gz"} |
Arnoldi versus nonsymmetric Lanczos algorithms for solving matrix eigenvalue problems for BIT Numerical Mathematics
BIT Numerical Mathematics
Arnoldi versus nonsymmetric Lanczos algorithms for solving matrix eigenvalue problems
View publication
We present theoretical and numerical comparisons between Arnoldi and nonsymmetric Lanczos procedures for computing eigenvalues of nonsymmetric matrices. In exact arithmetic we prove that any type of
eigenvalue convergence behavior obtained using a nonsymmetric Lanczos procedure may also be obtained using an Arnoldi procedure but on a different matrix and with a different starting vector. In
exact arithmetic we derive relationships between these types of procedures and normal matrices which suggest some interesting questions regarding the roles of nonnormality and of the choice of
starting vectors in any characterizations of the convergence behavior of these procedures. Then, through a set of numerical experiments on a complex Arnoldi and on a complex nonsymmetric Lanczos
procedure, we consider the more practical question of the behavior of these procedures when they are applied to the same matrices. | {"url":"https://research.ibm.com/publications/arnoldi-versus-nonsymmetric-lanczos-algorithms-for-solving-matrix-eigenvalue-problems","timestamp":"2024-11-11T10:40:34Z","content_type":"text/html","content_length":"71101","record_id":"<urn:uuid:7aa7ec89-8995-4ea8-a77a-da8d52692e0d>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00558.warc.gz"} |
Alberto Calderón
Jump to navigation Jump to search
This article
needs additional citations for verification
(September 2016) (Learn how and when to remove this template message)
Alberto Pedro Calderón (September 14, 1920 – April 16, 1998) was an Argentinian mathematician. His name is associated with the University of Buenos Aires, but first and foremost with the University
of Chicago, where Calderón and his mentor, the analyst Antoni Zygmund, developed the theory of singular integral operators. This created the "Chicago School of (hard) Analysis" (sometimes simply
known as the "Calderón-Zygmund School"). Calderón's work ranged over a wide variety of topics: from singular integral operators to partial differential equations, from interpolation theory to Cauchy
integrals on Lipschitz curves, from ergodic theory to inverse problems in electrical prospection. Calderón's work has also had a powerful impact on practical applications including signal processing,
geophysics, and tomography.
Early life and education[edit]
Alberto Pedro Calderón was born on September 14, 1920, in Mendoza, Argentina, to Don Pedro Calderón, a physician (urologist), and Haydée. He had several siblings, including a younger brother, Calixto
Pedro Calderón, also a mathematician. His father encouraged his mathematical studies. After his mother's unexpected death when he was twelve, he spent two years at the Montana Knabeninstitut, a boys'
boarding school near Zürich in Switzerland, where he was mentored by Save Bercovici, who interested him in mathematics. He then completed his high school studies in Mendoza.
Persuaded by his father that he could not make a living as a mathematician, he entered the University of Buenos Aires, where he studied engineering. After graduating in civil engineering in 1947, he
got a job in the research laboratory of the geophysical division of the state-owned oil company, the YPF (Yacimientos Petrolíferos Fiscales).
While still working at YPF, Calderón became acquainted with the mathematicians at the University of Buenos Aires: Julio Rey Pastor, the first Professor in the Institute of Mathematics, his Assistant
Alberto González Domínguez (who became his mentor and friend), Luis Santaló and Manuel Balanzat. At the YPF Lab Calderón first conceived the possibility of determining the conductivity of a body by
making electrical measurements at the boundary; he did not publish his results until several decades later, in 1980, in his short Brazilian paper,^[1] see also On an inverse boundary value problem
and the Commentary by Gunther Uhlmann,^[2] which pioneered a whole new area of mathematical research on ''"inverse problems".
Calderón then took up a post at the University of Buenos Aires. Antoni Zygmund, one of the world's leading mathematical analysts and a professor at the University of Chicago, arrived at the
University of Buenos Aires in 1948 at the invitation of Alberto González Domínguez and Calderón was assigned to him as his assistant. Zygmund invited Calderón to come to Chicago to work with him. In
1949 Calderón arrived in Chicago with a Rockefeller Fellowship. He was encouraged by Marshall Stone to obtain a doctorate. Stone suggested that Calderón assemble three recently published papers into
a dissertation, allowing Calderón to obtain his Ph.D. in Mathematics under Zygmund's supervision in 1950, only a year after arriving in Chicago. The dissertation proved momentous: each of the three
papers solved a long-standing open problem in ergodic theory or harmonic analysis.
The collaboration begun by Zygmund and Calderón in 1948 reached fruition in the Calderón-Zygmund Theory of Singular Integrals and lasted more than three decades. The Calderón-Zygmund memoir^[3]
continues to be one of the most influential papers in the modern history of analysis; it laid the foundations of what became internationally known as the "Calderón-Zygmund School of Analysis" (or
Chicago School of (hard) Analysis) which developed methods with far-reaching consequences in many different areas of mathematics. A prime example of such a general method is one of their first joint
results, the Calderón-Zygmund decomposition lemma, invented to prove the "weak-type continuity" of singular integrals of integrable functions, which is now widely used throughout analysis and
probability theory. The Calderón-Zygmund Seminar has been for several decades and continues to be an important tradition in the mathematical life of Eckhart Hall at the University of Chicago.
By the mid 1960s the theory of singular integrals was firmly established by Calderón's contributions to the theory of differential equations, including his proof of the uniqueness in the Cauchy
problem^[4] using algebras of singular integral operators, his reduction of elliptic boundary value problems to singular integral equations on the boundary (the method of the Calderón projector),^[5]
and the crucial role played by algebras of singular integrals (through the work of Calderón's student R. Seeley) in the initial proof of the Atiyah-Singer Index Theorem,^[6] see also the Commentary
by Paul Malliavin.^[2] The development of pseudo-differential operators by Kohn-Nirenberg and Hörmander also owed a great deal to Calderón and his collaborators R. Vaillancourt and J. Alvarez-Alonso.
However Calderón insisted that the focus should be on algebras of singular integral operators with non-smooth kernels to solve actual problems arising in physics and engineering, where lack of
smoothness is a natural feature. This led to what is now known as the "Calderón program''" whose first important accomplishments were: Calderón's seminal study of the Cauchy integral on Lipschitz
curves,^[7] and Calderón's proof of the boundedness of the "first commutator".^[8] These papers stimulated considerable research by other mathematicians in the following decades; see also the later
paper by the Calderón brothers^[2]^[9] and the Commentary by Y. Meyer.^[2] Calderón's pioneering work in interpolation theory opened up a whole new area of research,^[10] see also the Commentary by
Charles Fefferman and Elias M. Stein,^[2] and in ergodic theory, his elementary but basic paper^[11] (see also the Commentary by Donald L. Burkholder,^[2] and^[12]) formulated a transference
principle that reduced the proof of maximal inequalities for abstract dynamical systems to the case of the dynamical system of the integers.
In his academic career, Calderón taught at many different universities, but primarily at the University of Chicago and the University of Buenos Aires. Calderón together with his mentor and
collaborator Zygmund, maintained close ties with Argentina and Spain, and through their doctoral students and their visits, strongly influenced the development of mathematics in these countries.^[2]
He was also visiting professor at universities including the University of Buenos Aires, Cornell University, Stanford University, National University of Bogotá, Colombia, Collège de France, Paris,
University of Paris (Sorbonne), Autónoma and Complutense Universities, Madrid, University of Rome and Göttingen University.
Personal life[edit]
In 1950, Calderón married Mabel Molinelli Wells, a mathematics graduate whom he had met while both were students at the University of Buenos Aires. They had a daughter, María Josefina, who now lives
in Paris and a son, Pablo, who lives in Connecticut. Calderón retired early from the University of Chicago, in 1985, and returned to Argentina, where his wife Mabel, who had been seriously ill, died.
In 1989 Calderón came back to the University of Chicago on a post-retirement appointment. He also remarried in 1989: his second wife was the mathematician Alexandra Bellow, now Professor Emeritus at
Northwestern University.^[2]
Calderón died on April 16, 1998, at the age of 78, in Chicago, after a brief illness.
Awards and honors[edit]
This section
needs additional citations for verification
(February 2017) (Learn how and when to remove this template message)
Calderón was recognized internationally for his outstanding contributions to Mathematics as attested to by his numerous prizes and membership in various academies. He gave many invited addresses to
universities and learned societies. In particular he addressed the International Congress of Mathematicians: a) as invited lecturer in Moscow in 1966 and b) as plenary lecturer in Helsinki in 1978.
The Instituto Argentino de Matemática (I.A.M.), based in Buenos Aires, a prime research center of the National Research Council of Argentina (CONICET), now honors Alberto Calderón by bearing his
name: Instituto Argentino de Matemática Alberto Calderón. In 2007, the Inverse Problems International Association (IPIA) instituted the Calderón Prize, named in honor of Alberto P. Calderón, and
awarded to a "researcher who has made distinguished contributions to the field of inverse problems broadly defined".
• 1958 Member, American Academy of Arts and Sciences, Boston, Massachusetts
• 1959 Correspondent Member, National Academy of Exact, Physical and Natural Sciences, Buenos Aires, Argentina
• 1968 Member, National Academy of Sciences of the U.S.A.
• 1970 Correspondent Member, Royal Academy of Sciences, Madrid, Spain
• 1983 Member, Latin American Academy of Sciences, Caracas, Venezuela
• 1984 Member, National Academy of Exact, Physical and Natural Sciences, Buenos Aires, Argentina
• 1984 Foreign Associate, Institut de France, Paris, France
• 1984 Member, Third World Academy of Sciences, Trieste, Italy
Honorary degrees[edit]
• 1969 Doctor Honoris Causa, University of Buenos Aires, Argentina
• 1989 Doctor of Science, Honoris Causa, Technion, Haifa, Israel
• 1995 Doctor of Science, Honoris Causa, Ohio State University, Columbus, Ohio
• 1997 Doctor Honoris Causa, Universidad Autónoma de Madrid, Spain
Selected papers[edit]
1. Calderón, A. P.; Zygmund, A. (1952), "On the existence of certain singular integrals", Acta Mathematica, 88 (1): 85–139, doi:10.1007/BF02392130, ISSN 0001-5962, MR 0052553, Zbl 0047.10201. This
is one of the key papers on singular integral operators.
2. Calderón, A. P. (1958). "Uniqueness in the Cauchy Problem for Partial Differential Equations". American Journal of Mathematics. 80: 16. doi:10.2307/2372819. JSTOR 2372819.
3. Calderón, A. P. (1963): "Boundary value problems for elliptic equations", Outlines for the Joint Soviet - American Symposium on Partial Differential Equations, Novosibirsk, pp. 303–304.
4. Calderón, A. P. (1977). "Cauchy integrals on Lipschitz curves and related operators". Proceedings of the National Academy of Sciences of the United States of America. 74 (4): 1324–1327. doi:
10.1073/pnas.74.4.1324. PMC 430741. PMID 16578748.
5. Calderón, A. P. (1980): "Commutators, Singular Integrals on Lipschitz curves and Applications", Proc. Internat. Congress of Math. 1978, Helsinki, pp. 85–96.
6. Calderón, A. P. (1964). "Intermediate spaces and interpolation, the complex Method" (PDF). Studia Mathematica. 24: 113–190.
7. Calderón, A. P. (1968). "Ergodic Theory and Translation-Invariant Operators". Proceedings of the National Academy of Sciences of the United States of America. 59 (2): 349–353. doi:10.1073/
pnas.59.2.349. PMC 224676. PMID 16591604.
8. Calderón, A. P. (1980). "On an inverse boundary value problem" (PDF). Seminar on Numerical Analysis and its Applications to Continuum Physics, Atas 12. Río de Janeiro: Sociedade Brasileira de
Matematica: 67–73. ISSN 0101-8205.
External links[edit] | {"url":"https://static.hlt.bme.hu/semantics/external/pages/John_McCarthy/en.wikipedia.org/wiki/Alberto_Calder%C3%B3n.html","timestamp":"2024-11-13T13:15:33Z","content_type":"text/html","content_length":"162410","record_id":"<urn:uuid:a1401c75-a705-4dd9-896a-551219cba366>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00244.warc.gz"} |
Cluster Analysis
Episode #4 of the course Business analysis fundamentals by Polina Durneva
Hello! Today, we will talk about cluster analysis, which is used to generate insights about data. Cluster analysis breaks a heterogeneous dataset into several smaller homogeneous datasets.
What Is Cluster Analysis?
As previously stated, cluster analysis is used to evaluate heterogeneity of data by breaking these data into clusters. There are two main tools used in cluster analysis: hierarchical clustering and
k-means clustering. In this course, we will cover only hierarchical clustering.
To better understand cluster analysis, let’s use an example. A professor wants to break their class into several groups in order to evaluate and compare different students. The graph below
illustrates 20 data points (students). The x-axis represents how hardworking a student is (based on the time spent studying outside of class), and the y-axis represents how creative a student is
(based on the originality of their work).
In hierarchical clustering, you can use two different approaches: agglomerative clustering and divisive clustering. In agglomerative clustering, you keep merging clusters until you create one big
cluster (your original heterogeneous dataset). In divisive clustering, you keep dividing your original heterogeneous dataset into the smallest possible number of clusters (the number of records in
your dataset). In our example, let’s use agglomerative clustering.
First, we assume that each individual record is a cluster. Then, we start creating clusters using the Euclidean distance. (The Euclidean distance is the distance between two points on the graph. For
instance, if you have point A(0;0) and point B(3;4), the distance between A and B is √((4-0)^2+ (3-0)^2) = 5.) Let’s create five clusters minimizing the Euclidean between the points (this is only the
If our professor wants to have fewer clusters, we can merge some clusters using the Euclidean distance again. (It’s important to note that the Euclidean distance is not the only way to break data
into clusters, but it is perhaps the most popular way.)
We can keep doing it until we have one huge cluster, which is basically our entire dataset.
The number of clusters really depends on the objective of cluster analysis. In our example, our professor decides how many groups of students they want to evaluate: 1, 3, 5, or 20.
It is also important to note that there are other factors to be considered in cluster analysis—the distance between clusters, for example. But this topic is bit more complex and won’t be covered in
this lesson.
Most Common Applications of Cluster Analysis
Here are the most common examples of the application of cluster analysis:
• Creating a stock portfolio. Before you create your stock portfolio, you can break all existing stocks into different clusters and select individual securities from each cluster in order to
diversify your portfolio.
• Segmenting a market. You can segment a market according to different demographic characteristics of your customers in order to target individual clusters with a specific marketing strategy.
• Analyzing industry and market structure. If you want to analyze firms in a certain industry or in the entire market, you can use cluster analysis. It can be quite helpful if you want to compare
different firms or industries.
That’s it for today. Tomorrow, we’ll talk about decision tree models.
See you soon,
Recommended book
Business Analysis and Leadership: Influencing Change by Penny Pullan, James Archer
Share with friends | {"url":"https://gohighbrow.com/cluster-analysis/","timestamp":"2024-11-02T21:42:54Z","content_type":"text/html","content_length":"68626","record_id":"<urn:uuid:735d5ab3-9f7a-4a7e-94b0-dd0b60022b38>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00838.warc.gz"} |
Relativity, theory of the nature of space, time, and matter. Albert Einstein's special theory of relativity (1905) is based on the premise that different observers moving at a constant speed with
respect to each other find the laws of physics to be identical, and, in particular, find the speed of light waves to be the same (the principle of relativity). Among its consequences are (1) that
events occurring simultaneously according to one observer may happen at different times according to an observer moving relative to the first (although the order of two causally related events is
never reversed), (2) that a moving object is shortened in the direction of its motion, (3) that time runs more slowly for a moving object, (4) that the velocity of a projectile emitted from a moving
body is less than the sum of the relative ejection velocity and the velocity of the body, (5) that a body has a greater mass when moving than when at rest, and (6) that no massive body can travel as
fast as, or faster than, the speed of light. These effects are too small to be noticed at normal velocities; they have nevertheless found ample experimental verification and are common considerations
in many physical calculations. The relationship between the position and time of a given event according to different observers is known (for H.A. Lorentz) as the Lorentz transformation. In this,
time mixes on a similar footing with the three spatial dimensions, and it is in this sense that time has been called the fourth dimension. The greater mass of a moving body implies a relationship
between kinetic energy and mass; Einstein made the bold additional hypothesis that all energy is equivalent to mass, according to the famous equation E = mc^2. The conversion of mass to energy is now
the basis of nuclear reactors and is indeed the source of the energy of the sun itself.
Einstein's general theory (1916) is of importance chiefly to cosmologists. It asserts the equivalence of the effects of acceleration and gravitational fields and that gravitational fields cause space
to become “curved,” so that light no longer travels in straight lines, while the wavelength of light falls as the light falls through a gravitational field. The direct verification of these last two
predictions, among others, has helped deeply to entrench the theory of relativity in the language of physics.
See also: Einstein, Albert.
Additional topics | {"url":"https://www.jrank.org/encyclopedia/pages/cm7ky7igbi/Relativity.html","timestamp":"2024-11-09T16:31:25Z","content_type":"text/html","content_length":"9297","record_id":"<urn:uuid:17c19c89-a459-4853-82be-106c189d778b>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00810.warc.gz"} |
How can I calculate viscosity of water at different temperatures? | Socratic
How can I calculate viscosity of water at different temperatures?
1 Answer
The rate of flow of a liquid through a pipe at a given temperature is:
R =$\frac{p \pi {a}^{4}}{8 \eta l}$
Where p = pressure
a = radius of pipe
l = length of pipe
$\eta$ = coefficient of viscosity.
You would need to measure the rate of flow of water through a pipe of known length and radius at different temperatures at a constant pressure.
The coefficient of viscosity = $\eta$
This will be for a particular value of T.
You would then repeat for different values of T.
Impact of this question
5400 views around the world | {"url":"https://socratic.org/questions/how-can-i-calculate-viscosity-of-water-at-different-temperatures","timestamp":"2024-11-09T06:48:22Z","content_type":"text/html","content_length":"33074","record_id":"<urn:uuid:c4a79632-eb58-4b7a-aa4d-ae1ab6a182de>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00635.warc.gz"} |
Polynomial Function Quiz Questions And Answers
Questions and Answers
Have you studied polynomials well during your math class in school? If you think you know it well, try out polynomial function quiz questions and answers. This quiz is all about polynomial function,
with all multiple choice. This will help you become a better learner in the basics and fundamentals of algebra. You could either practice or challenge yourself on this quiz to find out whether you
are a pro on the topic or you need more practice. All the best!
• 1.
The graph of a polynomial function is tangent to its?
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. X-axis
The graph of a polynomial function is tangent to its x-axis because when a polynomial function is tangent to the x-axis, it means that the function intersects the x-axis at exactly one point and
does not cross it. This occurs when the polynomial function has a root or zero, which is a value of x that makes the function equal to zero. Therefore, the graph of the polynomial function will
touch the x-axis at this root, resulting in a tangent.
• 2.
How many possible roots than fourth-degree polynomials can have?
Correct Answer
A. 4
A fourth-degree polynomial can have a maximum of four roots. This is because a polynomial of degree n can have at most n distinct roots. In this case, since the polynomial is of degree four, it
can have up to four roots. Therefore, the correct answer is 4.
• 3.
Is a linear function a polynomial?
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. Always
A linear function is always a polynomial. A polynomial is an algebraic expression consisting of variables, coefficients, and exponents, combined using addition, subtraction, and multiplication. A
linear function is a polynomial of degree 1, meaning it has one variable raised to the power of 1 and no other variables or exponents. Therefore, it falls under the category of polynomials.
• 4.
What do you call a polynomial of five degrees?
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. None of the above
The correct answer is "none of the above" because a polynomial of five degrees is called a quintic polynomial. "Pentanomial" refers to a polynomial with five terms, "heptanomial" refers to a
polynomial with seven terms, and "hexanomial" refers to a polynomial with six terms. None of these terms accurately describe a polynomial of five degrees.
• 5.
What is the graph of a polynomial function?
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. Continuous
A polynomial function is a mathematical function that consists of terms involving variables raised to non-negative integer powers. The graph of a polynomial function is continuous, meaning that
it has no breaks or gaps. It is a smooth curve that can be traced without lifting the pencil from the paper. This is because polynomial functions are defined for all real numbers, and there are
no restrictions or interruptions in their domain. Therefore, the correct answer is continuous.
• 6.
What are the x-intercepts of polynomials?
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. Roots
The x-intercepts of a polynomial are the values of x where the polynomial intersects the x-axis. These points are also known as the roots of the polynomial. Therefore, the correct answer is
• 7.
What do you call a polynomial with two as the highest degree 4?
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. Quadratic
A polynomial is an algebraic expression with multiple terms. The highest degree in a polynomial refers to the exponent of the term with the highest power. In this case, the polynomial has a
highest degree of 4, which means the term with the highest power is x^4. A quadratic polynomial is a polynomial with a highest degree of 2, so it does not fit the description given. Therefore,
the correct answer is quadratic.
• 8.
What is the degree of this given, x+2=0?
Correct Answer
B. 1
The degree of a given equation is determined by the highest power of the variable present in the equation. In this case, the equation x+2=0 is a linear equation, meaning it has a degree of 1. The
highest power of the variable x is 1, as there is no x^2 or any higher power term present in the equation. Therefore, the degree of the given equation x+2=0 is 1.
• 9.
Which is a polynomial of degree 1?
□ A.
□ B.
□ C.
Polynomial has no degree 1
□ D.
Correct Answer
A. X+1
The expression "x+1" is a polynomial of degree 1 because it consists of a variable (x) raised to the power of 1 (degree 1) and a constant term (1). The other options, "5+5", "polynomial has no
degree 1", and "quadratic", do not meet the criteria of being a polynomial of degree 1.
• 10.
What is the constant term in this equation, x+25?
Correct Answer
C. 25
The constant term in an equation is the term that does not contain any variables. In the equation x+25, the constant term is 25 because it does not have any variable attached to it.
• 11.
How do you define the maximum number of roots in a polynomial?
□ A.
Equal to the highest degree of a polynomial
□ B.
□ C.
□ D.
Correct Answer
A. Equal to the highest degree of a polynomial
The maximum number of roots in a polynomial is determined by the highest degree of the polynomial. The degree of a polynomial represents the highest power of the variable in the polynomial
expression. Since each root of a polynomial corresponds to a factor of the polynomial, the maximum number of roots can be equal to the degree of the polynomial.
• 12.
What theorem states that if x-c is a factor of f(x), then it becomes f(c)?
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. Remainder theorem
The remainder theorem states that if (x-c) is a factor of f(x), then it becomes f(c).
• 13.
What do you call the shortcut method in dividing polynomials?
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. Synthetic division
Synthetic division is a shortcut method used to divide polynomials. It is a process that simplifies the long division of polynomials by using coefficients and remainders. This method is
particularly useful when dividing by linear factors. By using synthetic division, one can quickly find the quotient and remainder of the division, making it an efficient technique for dividing
• 14.
What do you call a four-degree polynomial?
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. Bi-quadratic
A four-degree polynomial is called a bi-quadratic polynomial. This is because the prefix "bi-" means two, and a bi-quadratic polynomial has two quadratic terms.
• 15.
What are the roots of G(x)= (x+2)(x-5)(x-1)?
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. -2,5,1
The roots of a polynomial are the values of x that make the polynomial equal to zero. In this case, the polynomial G(x)=(x+2)(x-5)(x-1) will be equal to zero when any of the factors (x+2), (x-5),
or (x-1) are equal to zero. By setting each factor equal to zero and solving for x, we find that the roots are x=-2, x=5, and x=1. Therefore, the correct answer is -2, 5, 1.
• 16.
Is the graph of the polynomial function smooth?
□ A.
□ B.
Yes, continuous, and straight
□ C.
Yes, continuous, and curve
□ D.
Correct Answer
C. Yes, continuous, and curve
The graph of a polynomial function is smooth because it is continuous and has no breaks or jumps. Additionally, the graph of a polynomial function is typically curved, as it can have various
shapes such as parabolas, cubic curves, or higher-degree curves. Therefore, the correct answer is "yes, continuous, and curve."
• 17.
How do you define the turning points of the graph of polynomial functions?
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. N-1
The turning points of a graph of a polynomial function are defined as the points where the graph changes direction from increasing to decreasing or vice versa. In this case, the correct answer is
n-1, which suggests that the number of turning points in the graph is equal to n-1. This means that the graph will have one less turning point than the degree of the polynomial function. | {"url":"https://www.proprofs.com/quiz-school/story.php?title=quiz-on-polynomial-function","timestamp":"2024-11-06T11:15:53Z","content_type":"text/html","content_length":"645702","record_id":"<urn:uuid:e694e342-9bda-46ee-bf43-2179918cbe69>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00235.warc.gz"} |
Optimize PDF files - Part II
Your applications can write incredibly small PDF files if you know what you're doing. This article is intended for programmers who create PDF files programmatically using custom routines. Read Part I
if you are a user saving PDFs and also to gain a general understanding of PDF optimization.
Amaze your users by saving small, high-quality PDF files. Users expecting multimegabyte PDFs will be pleased to find out your application requires only tens of kilobytes, even just a few kilobytes,
for a simple PDF report.
This article assumes you are writing PDF files programmatically. It further assumes you are doing this with a PDF writer module or class for which you have the source code so that you can actually
fine tune the PDF output. You need to know quite a bit about the PDF file format to take advantage of these techniques.
The article focuses on PDF v1.3. The optimizations are potentially as useful with other versions as well. Get PDF Reference, Adobe Portable Document Format Version 1.3, to follow the tricks.
Font optimization
Let's start with the obvious optimization: fonts.
Optimization #1: Don't embed fonts
Did you know fonts don't need to be embedded in PDF? Font embedding is optional. The PDF standard allows you to use any font, whether or not it exists on the reader's machine. If a required font is
not found, PDF reader applications use font metrics (in /FontDescriptor) to find a reasonable replacement font.
Indeed, in many cases font embedding will unnecessarily bloat the file. Consider an application that creates reports, which are mainly used by the same user on the same PC. They will display
perfectly well as long as they are on the same PC (unless the user happens to uninstall the required fonts). When common fonts are used, you have good chances the fonts will always show up correctly.
Optimization #2: Use standard fonts
PDF comes with 5 standard font families. The families are Times, Helvetica, Courier, Symbol and ZapfDingbats. All PDF readers support these standard fonts. Except for ZapfDingbats, the other fonts
are similar to standard Windows fonts.
The standard fonts will not need embedding. That's why you can safely use them.
PDF standard fonts and their replacements
PDF font Windows font Sample
Times Times New Roman Times is a serif font
Helvetica Arial Helvetica is a sans-serif font
Courier Courier New Courier is a fixed-width font
Symbol Symbol Symbol Symbol is, well, a symbol font
ZapfDingbats (ZapfDingbats) ZapfDingbats includes symbols and ornaments
Optimized representation of text and numeric values
Don't bloat PDFs by representing text and numeric values with too many bytes. You can potentially do the same with less.
Optimization #3: Use PDFDocEncoding
This is a relatively small optimization, but simple enough. Text strings, such as /Subject in file info or /Title in /Outlines, can be in either Unicode or PDFDocEncoding. Unicode takes twice the
space: two bytes per character compared to one byte with PDFDocEncoding. Use Unicode only when the content cannot be represented in PDFDocEncoding.
Note that PDFDocEncoding contains a wider range of characters than WinAnsiEncoding or MacRomanEncoding. It's good news for optimizers.
Optimization #4: Optimize number of decimal digits
This optimization is for representing all numeric values in PDF. Use only as few decimals as required. It's unnecessary to bloat the file with too many useless decimals.
Write a utility function that rounds values for you. Supposing you need 2 decimals precision, the function should round like this:
1.2345 → 1.23
1.2000 → 1.2
1.0000 → 1
Stream optimization
Now we get to optimizing streams, the actual page content.
Optimization #5: Clip to viewable area
This rule is especially important if you're drawing a part of a larger graphic into PDF. When drawing graphics objects or text it's a good idea to check for page boundaries. If no part of the object
will be visible, there's no point adding the respective drawing operations in the PDF file. The result will be invisible anyway. Besides, hidden data in a PDF is a security concern.
Optimization #6: Don't repeat operators unnecessarily
PDF keeps track on the currently selected color, line width, font and so on. You don't need to select the color each time you draw a line. Only set the color when it needs to change. The same goes
for line width, line cap style and other drawing attributes. Keep track on the current attributes. Only change them when you need.
Optimization #7: Close polygons
When drawing a polygon, there is no need to draw the last edge (with the l operator). Close polygons with the h operator instead. This closes a subpath by appending a straight line segment from the
current point to the starting point.
Even better optimizations are available. Instead of h, use s to close and stroke the path and B to close, fill and stroke. There are even more options, see Path-painting operators in the PDF
Optimization #8: Use shortcuts for splines
The default way to draw a spline curve from the current point to (x3,x3) with (x1,y1) and (x2,y2) as the control points is this:
x1 y1 x2 y2 x3 y3 c
If the current point and (x1,y1) are the same, there is a shorter form:
x2 y2 x3 y3 v
If points (x2,y2) and (x3,y3) are the same, use this shorter form:
x1 y1 x3 y3 y
Optimization #9: Use color shortcuts
The standard operators to set color are:
0.123 0.123 0.123 rg
0.123 0.123 0.123 RG
These operators take 3 values: Red, Green and Blue.
For black, gray or white colors you don't need the full RGB color space. Grayscale is enough. To select black, use one of these operators:
0 g
0 G
For white, use these:
1 g
1 G
You can do the same for any shade of gray. To select 0.123 gray, use one of the following:
0.123 g
0.123 G
Optimization #10: Compress
Compress streams to get the size down. Get a copy of the zlib library to do the compression for you. zlib is relatively straightforward to use.
Visual Basic note: VB6 cannot call the regular zlib.dll, but you can use zlibwapi.dll instead.
Small PDF samples
Here are small PDF samples with vector graphics and text. The graphic was originally created with Visustin.
• Uncompressed PDF (3987 bytes) is an optimized, but not compressed, PDF.
• Compressed PDF (2274 bytes) is the same document compressed with zlib.
• Word PDF (4641 bytes) is the same document saved by Word 2007 (with optimal settings). Not large, yet twice the size! The difference is largely due to the stream (page content). Word also
unnecessarily added character widths in 9 0 obj, even though the document should really use a PDF standard font with built-in widths.
Optimize PDF files - Part II | {"url":"https://www.aivosto.com/articles/pdf-optimize2.html","timestamp":"2024-11-12T16:59:11Z","content_type":"text/html","content_length":"10921","record_id":"<urn:uuid:7e5e4aed-3a3b-4be2-9f0e-abbaed0fa407>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00659.warc.gz"} |
Statistical Physics Including Applications To Condensed Matter - PDF Free Download
Statistical Physics
Advanced Texts in Physics This program of advanced texts covers a broad spectrum of topics that are of current and emerging interest in physics. Each book provides a comprehensive and yet accessible
introduction to a field at the forefront of modern research. As such, these texts are intended for senior undergraduate and graduate students at the M.S. and Ph.D. levels; however, research
scientists seeking an introduction to particular areas of physics will also benefit from the titles in this collection.
Claudine Hermann
Statistical Physics Including Applications to Condensed Matter
With 63 Figures
Claudine Hermann Laboratoire de Physique de la Matie` re Condense´ e Ecole Polytechnique 91128 Palaiseau France
[email protected]
Library of Congress Cataloging-in-Publication Data is available.
ISBN 0-387-22660-5
Printed on acid-free paper.
© 2005 Springer Science+Business Media, Inc. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer
Science+Business Media, Inc., 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names,
trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.
Printed in the United States of America. 9 8 7 6 5 4 3 2 1 springeronline.com
SPIN 10955284
Table of Contents Introduction
1 Statistical Description of Large Systems. Postulates 1.1 Classical or Quantum Evolution of a Particle ; Phase Space 1.1.1 Classical Evolution . . . . . . . . . . . . . . . . . . . 1.1.2 Quantum
Evolution . . . . . . . . . . . . . . . . . . 1.1.3 Uncertainty Principle and Phase Space . . . . . . . . 1.1.4 Other Degrees of Freedom . . . . . . . . . . . . . . . 1.2 Classical Probability
Density ; Quantum Density Operator . 1.2.1 Statistical Approach for Macroscopic Systems . . . . 1.2.2 Classical Probability Density . . . . . . . . . . . . . 1.2.3 Density Operator in Quantum
Mechanics . . . . . . 1.3 Statistical Postulates ; Equiprobability . . . . . . . . . . . . 1.3.1 Microstate, Macrostate . . . . . . . . . . . . . . . . 1.3.2 Time Average and Ensemble Average . . . .
. . . . 1.3.3 Equiprobability . . . . . . . . . . . . . . . . . . . . . 1.4 General Properties of the Statistical Entropy . . . . . . . . 1.4.1 The Boltzmann Definition . . . . . . . . . . . . . .
1.4.2 The Gibbs Definition . . . . . . . . . . . . . . . . . . 1.4.3 The Shannon Definition of Information . . . . . . . Summary of Chapter 1 . . . . . . . . . . . . . . . . . . . . . . . . Appendix
1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix 1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix 1.3 . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . 2 The 2.1 2.2 2.3
. . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . .
Different Statistical Ensembles. General Methods Energy States of an N -Particle System . . . . . . . . . . . . . . Isolated System in Equilibrium : “Microcanonical Ensemble” . Equilibrium Conditions
for Two Systems in Contact . . . . . . 2.3.1 Equilibrium Condition : Equal β Parameters . . . . . . 2.3.2 Fluctuations of the Energy Around its Most Likely Value v
Table of Contents
Contact with a Heat Reservoir, “Canonical Ensemble” . . . . . 2.4.1 The Boltzmann Factor . . . . . . . . . . . . . . . . . . . 2.4.2 Energy, with Fixed Average Value . . . . . . . . . . . . 2.4.3
Partition Function Z . . . . . . . . . . . . . . . . . . . . 2.4.4 Entropy in the Canonical Ensemble . . . . . . . . . . . 2.4.5 Partition Function of a Set of Two Independent Systems 2.5 Grand
Canonical Ensemble . . . . . . . . . . . . . . . . . . . . 2.5.1 Equilibrium Condition : Equality of Both T ’s and µ’s . 2.5.2 Heat Reservoir and Particles Reservoir . . . . . . . . . . 2.5.3 Grand
Canonical Probability and Partition Function . . 2.5.4 Average Values . . . . . . . . . . . . . . . . . . . . . . . 2.5.5 Grand Canonical Entropy . . . . . . . . . . . . . . . . . 2.6 Other
Statistical Ensembles . . . . . . . . . . . . . . . . . . . . Summary of Chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . Appendix 2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . .
3 Thermodynamics and Statistical Physics 3.1 Zeroth Law of Thermodynamics . . . . . . . . . . . . . . . . . . 3.2 First Law of Thermodynamics . . . . . . . . . . . . . . . . . . . 3.2.1 Work . . . .
. . . . . . . . . . . . . . . . . . . . . . . . 3.2.2 Heat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.3 Quasi-Static General Process . . . . . . . . . . . . . . . 3.3 Second Law of
Thermodynamics . . . . . . . . . . . . . . . . . 3.4 Third Law of Thermodynamics . . . . . . . . . . . . . . . . . . 3.5 The Thermodynamical Potentials ; the Legendre Transformation 3.5.1 Isolated
System . . . . . . . . . . . . . . . . . . . . . . . 3.5.2 Fixed N , Contact with a Heat Reservoir at T . . . . . . 3.5.3 Contact with a Heat and Particle Reservoir at T . . . . 3.5.4 Transformation
of Legendre ; Other Potentials . . . . . . Summary of Chapter 3 . . . . . . . . . . . . . . . . . . . . . . . . . .
4 The Ideal Gas 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Kinetic Approach . . . . . . . . . . . . . . . . . . . . . . 4.2.1 Scattering Cross Section, Mean Free Path . .
. . 4.2.2 Kinetic Calculation of the Pressure . . . . . . . . 4.3 Classical or Quantum Statistics ? . . . . . . . . . . . . . 4.4 Classical Statistics Treatment of the Ideal Gas . . . . . 4.4.1
Calculation of the Canonical Partition Function . 4.4.2 Average Energy ; Equipartition Theorem . . . . . 4.4.3 Free Energy ; Physical Parameters (P, S, µ) . . . 4.4.4 Gibbs Paradox . . . . . . . . .
. . . . . . . . . . 4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . Summary of Chapter 4 . . . . . . . . . . . . . . . . . . . . . . Appendix 4.1 . . . . . . . . . . . . . . . . . .
. . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . .
Table of Contents
Appendix 4.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 5 Indistinguishability, the Pauli Principle 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . 5.2 States of Two
Indistinguishable Particles . . . . . 5.2.1 General Case . . . . . . . . . . . . . . . . 5.2.2 Independent Particles . . . . . . . . . . . 5.3 Pauli Principle ; Spin-Statistics Connection . . . 5.3.1
Pauli Principle ; Pauli Exclusion Principle 5.3.2 Theorem of Spin-Statistics Connection . . 5.4 Case of Two Particles of Spin 1/2 . . . . . . . . . 5.4.1 Triplet and Singlet Spin States . . . . . .
5.4.2 Wave Function of Two Spin 1/2 Particles 5.5 Special Case of N Independent Particles . . . . . 5.5.1 Wave Function . . . . . . . . . . . . . . . 5.5.2 Occupation Numbers . . . . . . . . . . . .
5.6 Return to the Introduction Examples . . . . . . 5.6.1 Fermions Properties . . . . . . . . . . . . 5.6.2 Bosons Properties . . . . . . . . . . . . . Summary of Chapter 5 . . . . . . . . . . . . .
. . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
6 General Properties of the Quantum Statistics 131 6.1 Use of the Grand Canonical Ensemble . . . . . . . . . . . . . . 132 6.1.1 2 Indistinguishable Particles at T , Canonical Ensemble . 132 6.1.2
Description in the Grand Canonical Ensemble . . . . . . 133 6.2 Factorization of the Grand PartitionFunction . . . . . . . . . . 134 6.2.1 Fermions and Bosons . . . . . . . . . . . . . . . . . . . .
134 6.2.2 Fermions . . . . . . . . . . . . . . . . . . . . . . . . . . 135 6.2.3 Bosons . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 6.2.4 Chemical Potential and Number of Particles .
. . . . . . 136 6.3 Average Occupation Number ;Grand Potential . . . . . . . . . . 136 6.4 Free Particle in a Box ; Density of States . . . . . . . . . . . . . 138 6.4.1 Quantum States of a Free
Particle in a Box . . . . . . . 138 6.4.2 Density of States . . . . . . . . . . . . . . . . . . . . . . 143 6.5 Fermi-Dirac Distribution ; Bose-Einstein Distribution . . . . . . 147 6.6 Average
Values of Physical Parameters at T . . . . . . . . . . . 148 6.7 Common Limit of the Quantum Statistics . . . . . . . . . . . . 149 6.7.1 Chemical Potential of the Ideal Gas . . . . . . . . . . . 149
6.7.2 Grand Canonical Partition Function of the Ideal Gas . . 151 Summary of Chapter 6 . . . . . . . . . . . . . . . . . . . . . . . . . . 153 7 Free Fermions Properties 155 7.1 Properties of
Fermions at Zero Temperature . . . . . . . . . . . 155 7.1.1 Fermi Distribution, Fermi Energy . . . . . . . . . . . . . 155 7.1.2 Internal Energy and Pressure at Zero Temperature . . . 158
Table of Contents
7.1.3 Magnetic Properties. Pauli Paramagnetism . . . . . . . Properties of Fermions at Non-Zero Temperature . . . . . . . . 7.2.1 Temperature Ranges and Chemical Potential Variation . 7.2.2 Specific
Heat of Fermions . . . . . . . . . . . . . . . . . 7.2.3 Thermionic Emission . . . . . . . . . . . . . . . . . . . . Summary of Chapter 7 . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix
7.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.2
8 Elements of Bands Theory and Crystal Conductivity 8.1 What is a Solid, a Crystal ? . . . . . . . . . . . . . . . . . 8.2 The Eigenstates for the Chosen Model . . . . . . . . . . . 8.2.1 Recall :
the Double Potential Well . . . . . . . . . 8.2.2 Electron on an Infinite and Periodic Chain . . . . 8.2.3 Energy Bands and Bloch Functions . . . . . . . . . 8.3 The Electron States in a Crystal . . .
. . . . . . . . . . . 8.3.1 Wave Packet of Bloch Waves . . . . . . . . . . . . 8.3.2 Resistance ; Mean Free Path . . . . . . . . . . . . . 8.3.3 Finite Chain, Density of States, Effective Mass . . 8.4
Statistical Physics of Solids . . . . . . . . . . . . . . . . . 8.4.1 Filling of the Levels . . . . . . . . . . . . . . . . . 8.4.2 Variation of Metal Resistance versus T . . . . . . . 8.4.3
Insulators’ Conductivity Versus T ; Semiconductors 8.5 Examples of Semiconductor Devices . . . . . . . . . . . . 8.5.1 The Photocopier : Photoconductivity Properties . 8.5.2 The Solar Cell : an
Illuminated p − n Junction . . 8.5.3 CD Readers : the Semiconductor Quantum Wells . Summary of Chapter 8 . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
9 Bosons : Helium 4, Photons,Thermal Radiation 201 9.1 Material Particles . . . . . . . . . . . . . . . . . . . . . . . . . . 201 9.1.1 Thermodynamics of the Boson Gas . . . . . . . . . . . . 202
9.1.2 Bose-Einstein Condensation . . . . . . . . . . . . . . . . 203 9.2 Bose-Einstein Distribution of Photons . . . . . . . . . . . . . . 206 9.2.1 Description of the Thermal Radiation ; the Photons
. . 206 9.2.2 Statistics of Photons, Bosons in Non-Conserved Number 207 9.2.3 Black Body Definition and Spectrum . . . . . . . . . . . 210 9.2.4 Microscopic Interpretation . . . . . . . . . . . . . .
. . . 212 9.2.5 Photometric Measurements : Definitions . . . . . . . . . 214 9.2.6 Radiative Balances . . . . . . . . . . . . . . . . . . . . . 217 9.2.7 Greenhouse Effect . . . . . . . . . . . . . . .
. . . . . . 218 Summary of Chapter 9 . . . . . . . . . . . . . . . . . . . . . . . . . . 221 Solving Exercises and Problems of Statistical Physics
Units and Physical Constants
Table of Contents
A few useful formulae
Exercises and Problems Ex. 2000 : Electrostatic Screening . . . . . . . . . . Ex. 2001 : Magnetic Susceptibility of a “Quasi-1D” Ex. 2002 : Entropies of the HC Molecule . . . . . Pr. 2001 : Quantum
Boxes and Optoelectronics . . Pr. 2002 : Physical Foundations of Spintronics . .
. . . . . . . Conductor . . . . . . . . . . . . . . . . . . . . .
. . . . .
. . . . .
. . . . .
Solution of the Exercises and Problems Ex. 2000 : Electrostatic Screening . . . . . . . . . . Ex. 2001 : Magnetic Susceptibility of a “Quasi-1D” Ex. 2002 : Entropies of the HC Molecule . . . . . Pr.
2001 : Quantum Boxes and Optoelectronics . . Pr. 2002 : Physical Foundations of Spintronics . .
. . . . . . . Conductor . . . . . . . . . . . . . . . . . . . . .
. . . . .
. . . . .
. . . . .
Introduction A glosssary at the end of this introduction defines the terms specific to Statistical Physics. In the text, these terms are marked by an asterisk in exponent. A course of Quantum
Mechanics, like the one taught at Ecole Polytechnique, is devoted to the description of the state of an individual particle, or possibly of a few ones. Conversely, the topic of this book will be the
study of systems ∗ containing very many particles, of the order of the Avogadro number N , for example the molecules in a gas, the components of a chemical reaction, the adsorption sites for a gas on
a surface, the electrons of a solid. You certainly previously studied this type of system, using Thermodynamics which is ruled by “exact” laws, such as the ideal gas one. Its physical parameters,
that can be measured in experiments, are macroscopic quantities like its pressure, volume, temperature, magnetization, etc. It is now well-known that the correct microscopic description of the state
of a system, or of its evolution, requires the Quantum Mechanics approach and the solution of the Schroedinger equation, but how can this equation be solved when such a huge number of particles comes
into play ? Printing on a listing the positions and velocities of the N molecules of a gas would take a time much longer that the one elapsed since the Big Bang ! A statistical description is the
only issue, which is the more justified as the studied system is larger, since the relative fluctuations are then very small (Ch. 1). This course is restricted to systems in thermal equilibrium ∗ : to
reach such an equilibrium it is mandatory that interaction terms should be present in the hamiltonian of the total system : even if they are weak, they allow energy exchanges between the system and
its environment (for example one may be concerned by the electrons of a solid, the environment being the ions of the same solid). These interactions provide the way to equilibrium. This approach
takes place during a characteristic time, the so-called “relaxation time”, with a range of values which depends on the considered system (the study of offequilibrium phenomena, in particular transport
phenomena, is another branch
of the field of Statistical Physics, which is not discussed in this book). Whether the system is in equilibrium or not, the parameters accessible to an experiment are just those of Thermodynamics. The
purpose of Statistical Physics is to bridge the gap between the microscopic modeling of the system and the macroscopic physical parameters that characterize it (Ch. 2). The studied system can be
found in a great many states, solutions of the Quantum Mechanics problem, which differ by the values of their microscopic parameters while satisfying the same specified macroscopic physical conditions,
for example a fixed number of particles and a given temperature. The aim is thus to propose statistical hypotheses on the likehood for a particular microscopic state or another one to be indeed
realized in these conditions : the statistical description of systems in equilibrium is based on the postulate on the quantity statistical entropy : it should be maximum consistently with the
constraints on the system under study. The treated problems concern all kinds of degrees of freedom : translation, rotation, magnetization, sites occupied by the particles, etc. For example, for a
system made of N spins, of given total magnetization M , from combinatory arguments one will look for all the microscopic spin configurations leading to M ; then one will make the basic hypothesis
that, in the absence of additional information, all the possible configurations are equally likely (Ch. 2). Obviously it will be necessary to verify the validity of the microscopic model, as this
approach must be consistent with the laws and results of Thermodynamics (Ch. 3) It is specified in the Quantum Mechanics courses that the limit of Classical Mechanics is justified when the de Broglie
wavelength associated with the wavefunction is much shorter than all the characteristic dimensions of the problem. When dealing with free indistinguishable mobile particles, the characteristic length
to be considered is the average distance between particles, which is thus compared to the de Broglie wavelength, or to the size of a typical wave packet, at the considered temperature. According to
this criterion, among the systems including a very large number of mobile particles, all described by Quantum Mechanics, two types will thus be distinguished, and this will lead to consequences on
their Statistical Physics properties : – in dilute enough systems the wave packets associated with two neighboring particles do not overlap. In such systems, the possible potential energy of a
particle expresses an external field force (for example gravity) or its confinement in a finite volume. This system constitutes THE “ideal gas” : this means that all diluted systems of mobile particles,
subjected to the same external potential, have the same properties, except for those related to the particle mass (Ch. 4) ;
– in the opposite case of a high density of particles, like the atoms of a liquid or the electrons of a solid, the wave packets of neighboring particles do overlap. Quantum Mechanics analyzes this
latter situation through the Pauli principle : you certainly learnt a special case of it, the Pauli exclusion principle, which applies to the filling of atomic levels by electrons in Chemistry. The
general expression of the Pauli principle (Ch. 5) specifies the conditions on the symmetry of the wavefunction for N identical particles. There are only two possibilities and to each of them is
associated a type of particle : - on the one hand, the fermions, such that only a single fermion can be in a given quantum state ; - on the other hand, the bosons, which can be in unlimited number in
a determined quantum state. The consequences of the Pauli principle on the statistical treatment of noninteracting indistinguishable particles, i.e., the two types of Quantum Statistics, that of
Fermi-Dirac and that of Bose-Einstein, are first presented in very general terms in Ch. 6. In the second part of this course (Ch. 7 and the following chapters), examples of systems following Quantum
Statistics are treated in detail. They are very important for the physics and technology of today. Some properties of fermions are presented using the example of electrons in metallic (Ch. 7) or,
more generally, crystalline (Ch. 8) solids. Massive boson particles, in conserved number, are studied using the examples of the superfluid helium and the Bose-Einstein condensation of atoms. Finally,
the thermal radiation, an example of a system of bosons in non-conserved number, will introduce us into very practical current problems (Ch. 9). A topic in physics can only be fully understood after
sufficient practice. A selection of exercises and problems with their solution is presented at the end of the book. This introductory course of Statistical Physics emphasizes the microscopic
interpretation of results obtained in the framework of Thermodynamics and illustrates its approach, as much as possible, through practical examples : thus the Quantum Statistics will provide an
opportunity to understand what is an insulator, a metal, a semiconductor, or what is the principle of the greenhouse effect that could deeply influence our life on earth (and particularly that of our
descendants !). The content of this book is influenced by the previous courses of Statistical Physics taught at Ecole Polytechnique : the one by Roger Balian From microphysics to macrophysics :
methods and application to statistical physics, volume I translated by D. ter Haar and J.F. Gregg, volume II translated by D.
ter Haar, Springer Verlag Berlin (1991), given during the 1980s ; the course by Edouard Brézin during the 1990s. I thank them here for all that they brought to me in the stimulating field of
Statistical Physics. Several discussions in the present book are inspired from the course by F. Reif Fundamentals of Statistical and Thermal Physics, Mac Graw-Hill (1965), a not so recent work, but
with a clear and practical approach, that should suit students attracted by the “physical” aspect of arguments. Many other Physical Statistics text books, of introductory or advanced level, are
edited by Springer. This one offers the point of view of a top French scientific Grande Ecole on the subject. The present course is the result of a collective work of the Physics Department of Ecole
Polytechnique. I thank the colleagues with whom I worked the past years, L. Auvray, G. Bastard, C. Bachas, B. Duplantier, A. Georges, T. Jolicoeur, M. Mézard, D. Quéré, J-C. Tolédano, and
particularly my coworkers from the course of Statistical Physics “A”, F. Albenque, I. Antoniadis, U. Bockelmann, J.-M. Gérard, C. Kopper, J.-Y. Marzin, who brought their suggestions to this book and
with whom it is a real pleasure to teach. Finally, I would like to thank M. Digot, M. Maguer and D. Toustou, from the Printing Office of Ecole Polytechnique, for their expert and good-humored help in
the preparation of the book.
Glossary We begin by recalling some definitions in Thermodynamics that will be very useful in the following. For convenience, we will also list the main definitions of Statistical Physics, introduced
in the next chapters of this book. This section is much inspired by chapter 1, The Language of Thermodynamics, from the book Thermodynamique, by J.-P. Faroux and J. Renault (Dunod, 1997). Some
definitions of Thermodynamics : The system is the object under study ; it can be of microscopic or macroscopic size. It is distinguished from the rest of the Universe, called the surroundings. A
system is isolated if it does not exchange anything (in particular energy, particles) with its surroundings. The parameters (or state variables) are independent quantities which define the macroscopic
state of the system : their nature can be mechanical (pressure, volume), electrical (charge, potential), thermal (temperature, entropy), etc. If these parameters take the same value at any point of
the system, the system is homogeneous. The external parameters are independent parameters (volume, electrical or magnetic field) which can be controlled from the outside and imposed to the system, to
the experimental accuracy. The internal parameters cannot be controlled and may fluctuate ; this is the case for example of the local repartition of density under an external constraint. The internal
parameters adjust themselves under the effect of a modification of the external parameters. A system is in equilibrium when all its internal variables remain constant in time and, in the case of a
system which is not isolated, if it has no exchange with its surroundings : that is, there is no exchange of energy, of electric charges, of particles. In Thermodynamics it is assumed that any
system, submitted to constant and uniform external conditions, evolves toward an equilibrium state that it can no longer spontaneously leave afterward. The thermal equilibrium between two systems is
realized after exchanges between themselves : this is not possible if the walls which separate them are adiabatical, i.e., they do not transmit any energy. xv
In a homogeneous system, the intensive parameters, such as its temperature, pressure, the difference in electrical potential, do not vary when the system volume increases. On the other hand, the
extensive parameters such as the volume, the internal energy, the electrical charge are proportional to the volume. Any evolution of the system from one state to another one is called a process or
transformation. An infinitesimal process corresponds to an infinitely small variation of the external parameters between the initial state and the final state of the system. A reversible transformation
takes place through a continuous set of equilibrium intermediate states, for both the system and the surroundings, i.e., all the parameters defining the system state vary continuously : it is then
possible to vary these parameters in the reverse direction and return to the initial state. A transformation which does not obey this definition is said to be irreversible. In a quasi-static
transformation, at any time the system is in internal equilibrium and its internal parameters are continuously defined. Contrarly to a reversible process, this does not imply anything about the
surroundings but only means that the process is slow enough with respect to the characteristic relaxation time of the system. Some definitions of Statistical Physics : A configuration defined by the
data of the microscopic physical parameters, given by Quantum or Classical Mechanics, is a microstate. A configuration defined by the data of the macroscopic physical parameters, given by
Thermodynamics, is a macrostate. An ensemble average is performed at a given time on an assembly of systems of the same type, prepared in the same macroscopic conditions. In the microcanonical
ensemble, each of these systems is isolated and its energy E is fixed, that is, it is lying in the range between E and E + δE. It contains a fixed number N of particles. In the canonical ensemble, each
system is in thermal contact with a large system, a heat reservoir, which imposes its temperature T ; the energy of each system is different but for macroscopic systems the average E is defined with
very good accuracy. Each system contains N particles. In the canonical ensemble the partition function is the quantity that norms the probabilities, which are Boltzmann factors. (see Ch. 2) In the
grand canonical ensemble, each system is in thermal contact with a heat
reservoir which imposes its temperature T , the energy of each system being different. The energies of the various systems are spread around an average value, defined with high accuracy in the case of
a macroscopic system. Each system is also in contact with a particle reservoir, which imposes its chemical potential : the number of particles differs according to the system, the average value N is
defined with high accuracy for a macroscopic system.
Chapter 1
Statistical Description of Large Systems. Postulates of Statistical Physics The aim of Statistical Physics is to bridge the gap between the microscopic and macroscopic worlds. Its first step consists
in stating hypotheses about the microscopic behavior of the particles of a macroscopic system, i.e., with characteristic dimensions very large with respect to atomic distances ; the objective is then
the prediction of macroscopic properties, which can be measured in experiments. The system under study may be a gas, a solid, etc., i.e., of physical or chemical or biological nature, and the
measurements may deal with thermal, electrical, magnetic, chemical, properties. In the present chapter we first choose a microscopic description, either through Classical or Quantum Mechanics, of an
individual particle and its degrees of freedom. The phase space is introduced, in which the time evolution of such a particle is described by a trajectory : in the quantum case, this trajectory is
defined with a limited resolution because of the Heisenberg uncertainty principle (§ 1.1). Such a description is then generalized to the case of the very many particles in a macroscopic system : the
complexity arising from the large number of particles will be suggested from the example of molecules in a gas (§ 1.2). For so large numbers, only a statistical description can be considered and §
1.3 presents the basic postulate of Statistical Physics and the concept of statistical entropy, as introduced by Ludwig Boltzmann (1844-1906), an Austrian physicist, at the end of the 19th century.
Chapter 1. Statistical Description of Large Systems
Classical or Quantum Evolution of a Particle ; Phase Space
For a single particle we describe the time evolution first in a classical framework, then in a quantum description. In both cases, this evolution is conveniently described in the phase space.
Classical Evolution px
t1 t0
Fig. 1.1 : Trajectory of a particle in the phase space. Consider a single classical particle, of momentum p0 , located at the coordinate r0 at time t0 . It is submitted to a force F0 (t). Its time
evolution can be predicted through the Fundamental Principle of Dynamics, since d p = F (t) dt
This evolution is deterministic since the set (r, p) can be deduced at any later time t. One introduces the one-particle “phase space”, at six dimensions, of coordinates (x, y, z, px , py , pz ). The
data (r, p) correspond to a given point of this space, the time evolution of the particle defines a trajectory, schematized on the Fig. 1.1 in the case of a one-dimension space motion. In the
particular case of a periodic motion, this trajectory is closed since after a period the particle returns at the same position with the same momentum : for example the abscissa x and momentum px of a
one-dimension harmonic oscillator of
Classical or Quantum Evolution of a Particle
mass m and frequency ω are linked by : 1 p2x + mω 2 x2 = E (1.2) 2m 2 In the phase space this relation is the equation of an ellipse, a closed trajectory periodically described (Fig. 1.2). Another
way of determining the time px
t0 t1 O
Fig. 1.2: Trajectory in the phase space for a one-dimension harmonic oscillator of energy E. evolution of a particle is to use the Hamilton equations (see for example the course of Quantum Mechanics
by J.-L. Basdevant and J. Dalibard, including a CDROM, Springer, 2002) : the motion is deduced from the position r and its corresponding momentum p. The hamiltonian function associated with the total
energy of the particle of mass m is given by : p2 + V (r) (1.3) 2m where the first term is the particle kinetic energy and V is its potential energy, from which the force in (1.1) is derived. The
Hamilton equations of motion are given by : ⎧ ∂h ⎪ ⎨ r˙ = ∂ p (1.4) ⎪ ⎩ p˙ = − ∂h ∂r and are equivalent to the Fundamental Relation of Dynamics (1.1). h=
Note : Statistical Physics also applies to relativistic particles which have a different energy expression.
Chapter 1. Statistical Description of Large Systems
Quantum Evolution
It is well known that the correct description of a particle and of its evolution requires Quantum Mechanics and that to the classical hamiltonian function h ˆ The Schroedinger equation corresponds
the quantum hamiltonian operator h. provides the time evolution of the state |ψ : i
∂|ψ ˆ = h|ψ ∂t
The squared modulus of the spatial wave function associated with |ψ gives the probability of the location of the particle at any position and any time. Both in Classical and Quantum Mechanics, it is
possible to reverse the time direction, i.e., to replace t by −t in the motion equations (1.1), (1.4) or (1.5), and thus obtain an equally acceptable solution.
Uncertainty Principle and Phase Space
You know that Quantum Mechanics introduces a probabilistic character, even when the particle is in a well-defined state : if the particle is not in an eiˆ the measurement result is uncertain. genstate
of the measured observable A, Indeed, for a single measurement the result is any of the eigenvalues aα of the considered observable. On the other hand, when this measurement is reproduced a large
number of times on identical, similarly prepared systems, the ˆ average of the results is ψ|A|ψ, a well-defined value. px
Fig. 1.3: A phase space cell, of area h for a one-dimension motion, corresponds to one quantum state.
Classical Probability Density ; Quantum Density Operator
In the phase space, this quantum particle has no well-defined trajectory since, at a given time t, in space and momentum coordinates there is no longer an exact position, but a typical extent of the
wave function inside which this particle can be found. Another way of expressing this difficulty is to state the Heisenberg uncertainty principle : if the particle abscissa x is known to ∆x, then the
corresponding component px of its momentum cannot be determined with an accuracy better than ∆px , such that ∆x · ∆px ≥ /2, where = h/2π, h being the Planck constant. The “accuracy” on the phase
space trajectory is thus limited, in other words this trajectory is “blurred” : the phase space is therefore divided into cells of area of the order of h for a one-dimension motion, h3 if the problem
is in three dimensions, and the state of the considered particle cannot be defined better than within such a cell (Fig. 1.3). This produces a discretization of the phase space. (In the case of a
classical particle, it is not a priori obvious that such a division is necessary, and if it is the case, that the same cell area should be taken. It will be stated in § 1.2.2 that, for sake of
consistency at the classical limit of the quantum treatment, this very area should be chosen.)
Other Degrees of Freedom
Up to now we have only considered the degrees of freedom related to the particle position, which take continuous values in Classical Mechanics and are described by the spatial part of the wave
function in Quantum Mechanics. If the particle carries a magnetic moment, the wave function includes an additional component, with discrete eigenvalues (see a Quantum Mechanics course).
Classical Probability Density ; Quantum Density Operator
When considering a macroscopic system, owing to the extremely large number of parameters, the description of the particles’ evolution can only be done through a statistical approach. The tools are
different for either the classical or the quantum description of the particles’ motion.
Chapter 1. Statistical Description of Large Systems
Necessity of a Statistical Approach for Macroscopic Systems
The physical parameters referring to individual particles, the evolution of which was described in the above section, are measured at the atomic scale (distances of the order of a Bohr radius of the
hydrogen atom, i.e., a tenth of a nm, energies of the order of a Rydberg or of an electron-volt, characteristic times in the range of the time between two collisions in a gas, that is about 10−10 sec
for N2 in standard conditions). For an object studied in Thermodynamics or measured in usual laboratory conditions, the scales are quite different : the dimensions are currently of the order of a mm
or a cm, the energies are measured in joules, the measurement time is of the order of a second. Thus two very different ranges are evidenced : the microscopic one, described by Classical or Quantum
Mechanics laws, unchanged when reversing the direction of time ; the macroscopic range, which is often the domain of irreversible phenomena, as you learnt in previous courses. Now we will see what
are the implications when shifting from the microscopic scale to the macroscopic one. For that we will first consider the case of a few particles, then of a number of particles of the order of the
Avogadro number. If the physical system under study contains only a few particles, for example an atom with several electrons, the behavior of each particle follows the Mechanics laws given above.
Once the position and momentum of each particle are known at t = t0 , together with the hamiltonian determining the system evolution, in principle one can deduce r and p at any further time t (yet
the calculations, which are the task of Quantum Chemistry in particular, quickly become very complex). Thus in principle the problem can be solved, but one is facing practical difficulties, which
quickly become “impossibilities”, as soon as the system under study contains “many” particles. Indeed in Molecular Dynamics, a branch of Physics, the evolution of particle assemblies is calculated
according to the Classical Mechanics laws, using extremely powerful computers. However, considering the computing capacities available today, the size of the sample is restricted to several thousands
of particles at most. Consider now a macroscopic situation, like those described by Thermodynamics : take a monoatomic gas of N molecules, where N = 6.02 × 1023 is the Avogadro number, and assume
that at time t0 the set of data (r0 , p0 ) is known for each particle. Without prejudging the time required for their calculation, just printing the coordinates of all these molecules at a later time
t, with a printer delivering the coordinates of one molecule per second, would take 2 × 1016 years ! The information issued in one day would be for 105 molecules only. Another way to understand the
enormity of such an information on N molecules is to wonder how many volumes would be needed to print it.
Classical Probability Density ; Quantum Density Operator
The answer is : many more than the total of books manufactured since the discovery of printing ! One thus understands the impossibility of knowing the position and momentum of each particle at any
time. In fact, the measurable physical quantities (for example volume, pressure, temperature, magnetization) only rarely concern the properties of an individual molecule at a precise time : if the
recent achievements of near-field optical microscopy or atomic-force microscopy indeed allow the observation of a single molecule, and only in very specific and favorable cases, the duration of such
experiments is on a “human” scale (in the range of a second). One almost always deals with physical quantities concerning an assembly of molecules (or particles) during the observation time. “It is
sufficient for the farmer to be certain that a cloud burst and watered the ground and of no use to know the way each distinct drop fell. Take another example : everybody understands the meaning of the
word “granite”, even if the shape, chemical composition of the different crystallites, their composition ratios and their colors are not exactly known. We thus always use concepts which deal with the
behavior of large scale phenomena without considering the isolated processes at the corpuscular scale.” (W. Heisenberg, “Nature in the contemporary physical science, ” from the French translation,
p.42 Idées NRF, Paris (1962)). Besides, some of the observed macroscopic properties may have no microscopic equivalent : “Gibbs was the first one to introduce a physical concept which can apply to a
natural object only if our knowledge of this object is incomplete. For example if the motions and positions of all the molecules in a gas were known, speaking of the temperature of this gas would no
longer retain a meaning. The temperature concept can only be used when a system is insufficiently determined and statistical conclusions are to be deduced from this incomplete knowledge.” (W.
Heisenberg, same reference, p.45). It is impossible to collect the detailed information on all the molecules of a gas, anyway all these data would not really be useful. A more global knowledge
yielding the relevant physical parameters is all that matters : a statistical approach is thus justified from a physical point of view. Statistics are directly related to probabilities, i.e., a large
number of experiments have to be repeated either in time or on similar systems (§ 1.2.2). Consequently, one will have to consider an assembly (or an “ensemble”) consisting of a large number of
systems, prepared in a similar way, in which the microscopic structures differ and the measured values of the macroscopic parameters are not necessarily identical : the probability of occurrence of a
particular “event” or a particular
Chapter 1. Statistical Description of Large Systems
measurement is equal to the fraction of the ensemble systems for which it occurs (see § 1.3.2).
Classical Probability Density
The hamiltonian is now a function of the positions and momenta of the N i , . . . p N ). particles, i.e., H(r1 , . . . ri , . . . rN , p1 , . . . p Using a description analogous to that of the
one-particle phase space of § 1.1.1, the system of N classical particles now has a trajectory in a new phase space, at 6N dimensions since there are 3N space and 3N momenta coordinates. As it is
impossible to exactly know the trajectory, owing to the lack of information on the coordinates of the individual particles (see § 1.2.1), one can only speak of the probability of finding the system
particles at time t in the neighborhood of a given point of the phase space, of coordinates N d3ri d3 pi . (r1 , . . . ri , . . . rN , p1 , . . . pi , . . . pN ) to i=1
On this phase space, a measure dτ and a probability density D have to be defined. Then the probability will be given by D(r1 , . . . , ri , . . . , p1 , . . . , pi , . . . t)dτ . From the Liouville
theorem (demonstrated in Appendix 1.1) the phase space N elementary volume d3ri d3 pi is conserved for any time t during the pari=1
ticles evolution. A consequence is that for a macroscopic equilibrium, the only situation considered in the present book, the probability density is timeindependent. The elementary volume of the N
-particle phase space is homogeneous to an action to the power 3N , that is, [mass × (length)2 × (time)−1 ]3N , and thus has the same dimension as h3N , the Planck constant to the same power. The
infinitesimal volume will always be chosen large with respect to the volume of the elementary cell of the N -particle phase space, so that it will contain a large number of such cells. (The volume of
the elementary cell will be chosen as h3N , from arguments similar to the ones for a single particle.) Finally, dτ may depend on N , the particle number, through a constant CN . Here we will take dτ
N CN 3 3 d ri d pi . The quantity dτ is thus dimensionh3N i=1
less and is related to the “number of states” inside the volume d3ri d3 pi ; the constant CN is determined in § 4.4.3 and 6.7 so that dτ should indeed give
Classical Probability Density ; Quantum Density Operator
the number of states in this volume : this choice has a interpretation in the quantum description, and the continuity between classical and quantum descriptions is achieved through this choice of CN
. The classical probability density D(r1 , . . . ri , . . . , p1 , . . . , pi , . . . , t) is then defined : it is such that D(r1 , . . . ri , . . . , p1 , . . . , pi , . . . , t) ≥ 0, and D(r1 , . .
. ri , . . . , p1 , . . . pi , . . . , t)dτ = 1 (1.6) on the whole phase space. The average value at a given time t of a physical parameter A(ri , p i , t) is then calculated using : A(t) = A(ri , p
i , t)D(ri , p i , t)dτ (1.7) In the same way, the standard deviation σA , representing the fluctuation of the parameter A around its average value at a given t, is calculated from : 2 (σA (t))2 = A2
(t) − A(t)2 = A(ri , p i , t) − A(ri , p i , t) D(ri , p i , t)dτ (1.8)
Density Operator in Quantum Mechanics
By analogy with the argument of § 1.1.2, if at time t0 the total system is in the state |ψ, now with N particles, the Schroedinger equation allows one to deduce its evolution through : i
∂|ψ ˆ = H|ψ ∂t
ˆ is the hamiltonian of the total system with N particles. where H It was already pointed out that, when the system is in a certain state |ψ, the result of a single measurement of the observable Aˆ
on this system is uncertain ˆ : the result is one of the eigenvalues (except when |ψ is an eigenvector of A) ˆ with the probability |ψ|ϕα |2 , associated with the projection of |ψ aα of A, on the
eigenvector |ϕα corresponding to the eigenvalue aα (that we take here to be nondegenerate, for sake of simplicity). If the measurement is repeated ˆ many times, the average of the results is ψ|A|ψ.
Now another origin of uncertainty, of a different nature, must also be included if the state of the system at time t is already uncertain. Assume that the
Chapter 1. Statistical Description of Large Systems
macroscopic system may be found in different orthonormal states |ψn , with probabilities pn (0 ≤ pn ≤ 1). In such a case the results average for repeated measurements will be : ˆ n = ˆ n = Tr (D ˆ A)
ˆ ˆ = pn ψn |A|ψ pn ψn |ψn ψn |A|ψ (1.10) A n
The density operator, defined by ˆ = D
pn |ψn ψn |
has been introduced (its properties are given in Appendix 1.2). This operator includes both our imperfect knowledge of the system’s quantum state, through the pn factors, and the fluctuations related
to the quantum measurement, which would already be present if the state |ψn was certain (this state appears through its projector |ψn ψn |). One verifies that ˆ n = A ˆ ˆ A) ˆ = pn ψn |ψn ψn |A|ψ
(1.12) Tr (D n
since ψn |ψn = δn n . In the current situations described in the present course, the projection states ˆ of the N -particle total |ψn will be the eigenstates of the hamiltonian H system, the
eigenvalues of which are the accessible energies of the system. In ˆ will thus be diagonal on such a basis : the problems considered the operator D each energy eigenstate |ψn will simply be weighted
by its probability pn . Now hypotheses are required to obtain an expression of D(ri , pi , t) in the classical description, or of the probabilities pn in the quantum one.
Statistical Postulates ; Equiprobability
We should now be convinced that a detailed microscopic treatment is impossible for a system with a very large number of particles : we will have to limit ourselves to a statistical approach. It is
necessary to define the microscopic and macroscopic states to which this description will apply.
Microstate, Macrostate
A configuration defined by the data of microscopic physical parameters (for example, positions and momenta of all the particles ; quantum numbers characterizing the state of each particle ;
magnetization of each paramagnetic
Statistical Hypotheses ; Equiprobability
atom localized in a solid) is a microstate ∗ .1 A configuration defined by the value of macroscopic physical parameters (total energy, pressure, temperature, total magnetization, etc.) is a macrostate
∗ . Obviously a macrostate is almost always realized by a very large number of microstates : for example, in a paramagnetic solid, permuting the values of the magnetic moments of two fixed atoms
creates a new microstate associated with the same macrostate. Assume that each magnetic moment can only take two values : +µ and −µ. The number of microstates W (p) all corresponding to the
macrostate = (2p − N ) of magnetization M µ, in which p magnetic moments µ are in a given direction and N − p in the opposite direction, is the number of choices of the p moments to be reversed among
the total of N . It is thus equal to N! p p . Since N is macroscopic and in general p too, CN = is a very CN p!(N − p)! large number. The scope of this book is the analysis of systems in equilibrium
∗ : this means that one has waited long enough so that macroscopic physical parameters now keep the same values in time within very small fluctuations (the macrostate is fixed), whereas microscopic
quantities may continue their evolution. Besides, even when the system is not isolated ∗ from its surroundings, if it has reached equilibrium, by definition there is no algebraical transport out of
the system of matter, energy, etc. The approach toward equilibrium is achieved through interactions, bringing very small amounts of energy into play, yet essential to thermalization : in a solid, for
example, electrons reach an equilibrium, among themselves and with the ions, through the collisions they suffer, for example, on ions vibrating at non-vanishing temperature, or impurities. Here the
characteristic time is very short, of the order of 10−14 sec in copper at room temperature. For other processes this time may be much longer : the time for a drink to reach its equilibrium
temperature in a refrigerator is of the order of an hour. If the system has no measurable evolution during the experiment, it is considered to be in equilibrium.
Time Average and Ensemble Average
Once equilibrium is reached, the time average of a physical parameter, which is likely to fluctuate in a macrostate, is calculated using the classical probability density (1.7) or the quantum density
operator (1.11), that is, for example
1 The
words marked by an asterisk are defined in the glossary preceding Chapter 1.
Chapter 1. Statistical Description of Large Systems
1 A = T
t +T
dt t
A(ri , p i , t)D(ri , p i , t)dτ,
Would the same result be obtained by calculating the average at a given time on an assembly of macrostates prepared in a similar way ? A special case, allowing this issue to be understood, is that of
an isolated system, of constant total energy E. In the N -particle phase space, the condition of constant energy defines a 6N − 1-dimension surface. The points representing the various systems
prepared at the same energy E are on this surface. Is the whole surface described ? What is the time required for that ? This equivalence of the time average with the average on an ensemble of
systems prepared in the same way is the object of the ergodic theorem, almost always applicable and that we will assume to be valid in all the cases treated in this course. In Statistical Physics,
the common practice since Josiah Willard Gibbs (18391903) is indeed to take the average, at a fixed time, on an assembly (an ensemble) of systems of the same nature, prepared under the same
macroscopic conditions : this is the so-called ensemble average. As an example, consider a system, replicated a large number of times, which is exchanging energy with a reservoir. The energy
distribution between the system and the reservoir differs according to the replica whereas, as will be shown in § 2.3.1, the system temperature is always that of the heat reservoir. Let N be the
number of prepared identical systems, where N is a very large number. We consider a physical parameter f that can fluctuate, in the present case the energy of each system. Among the N systems, this
parameter takes the value fl in Nl of them ; consequently, the ensemble average of the parameter f will be equal to 1 fN = Nl fl (1.14) N l
When N tends to infinity, Nl /N tends to the probability pl to achieve the value fl and the ensemble average tends to the average value calculated using these probabilities : lim fN = f = pl f l
(1.15) N →∞
On daily experience ( !), when throwing a die, in the absence of any other information, it is commonly assumed that every face is equally likely to occur, so that the probability of occurrence of a
given face (1 for example) is 1/6.
Statistical Hypotheses ; Equiprobability
In Statistical Physics, an analogous postulate is stated : one assumes that in the absence of additional information, for an isolated system of energy ranging between E and E + δE, where δE is the
uncertainty, all the accessible microstates are equally probable. (In a classical description, this is equivalent to saying that all the accessible phase space cells are equiprobable.) Considering
the example of the magnetic moments, this means that, to obtain the same magnetization value, it is equally likely to have a localized magnetic moment reversed on a particular site or on another one.
The larger the number of accessible states, the larger the uncertainty or disorder, the smaller the information about the system : a measure of the disorder is the number of accessible states, or the
size of the accessible volume, of the phase space.
Fig. 1.4 : Joule-Gay-Lussac expansion. Consider for example a container of volume Ω, split into two chambers of volume Ω/2 each, isolated from its surroundings (Fig. 1.4). If the gas at equilibrium
is completely located inside the left chamber, the disorder is smaller than if, at the same conditions, the equilibrium occurs in the total volume Ω. Opening the partition between the two chambers,
in the so-called JouleGay-Lussac free expansion, means relaxing the constraint on the molecules to be located in the left chamber. This is an irreversible ∗ process2 as the probability for a
spontaneous return, at a later time, to the situation where the N molecules all lie in the left chamber is extremely low (2−N ). It appears that a macroscopic initial state distinct from equilibrium
will most probably evolve toward a larger disorder, although the fluctuations in equilibrium are identical when the direction of time is reversed. The direction of time defined in Statistical Physics
is based on the idea that a statistically improbable state was built through an external action, requiring the expending of energy. The basic postulate of Statistical Physics, stated by Ludwig
Boltzmann in 1877 for an isolated system, consists in defining a macroscopic quantity from the number of accessible microstates. This is the statistical entropy S, given 2 A very interesting
discussion on the character of the time direction and on irreversibility in Statistical Physics can be found in the paper by V. Ambegaokar and A.A. Clerk, American Journal of Physics, vol 67, p. 1068
Chapter 1. Statistical Description of Large Systems
by S = kB ln W (E)
Here W (E) is the number of microstates with an energy value between E and E+δE ; the constant kB is a priori arbitrary. To make this statistical definition of the entropy coincide with its
thermodynamical expression (see § 3.3), one should take kB equal to the Boltzmann constant, which is the ratio of the ideal gas constant R = 8.31 J/K to the Avogadro number N = 6.02 × 1023, that is,
kB = R/N = 1.38 × 10−23J/K. This is what will be taken in this course. Take the example of the localized magnetic moments of a solid in the presence of an : in the configuration where p moments are
parallel external magnetic field B and N − p moments are in the opposite direction, the magnetization is to B = (2p − N ) ·B = (N − 2p)µ · B. It was M µ and the magnetic energy E = −M N! . For a large
system, shown in § 1.3.1 that in such a case W (E) = p!(N − p)! using the Stirling formula for the factorial expansion, one gets ïÅ ã Å ã E 2 kB N S = kB ln W (E) = 1+ ln 2 N µB B 1 + (E/N µB B) ã Å
ãò (1.17) Å 2 E ln + 1− N µB B 1 − (E/N µB B) When two independent, or weakly coupled, subsystems are associated, with respective numbers of microstates W1 and W2 , the number of microstates of the
combined system is W = W1 · W2 , since any microstate of subsystem 1 can be associated with any microstate of 2. Consequently, S = S1 + S2 , so that S is additive ∗ . A special case is obtained when
the two subsystems are identical, then S = 2S1 , that is, S is extensive ∗ . Formula (1.16) is fundamental in that it bridges the microscopic world (W ) and the macroscopic one (S). It will be shown
in § 3.3 that S is indeed identical to the entropy defined in Thermodynamics. Before stating the general properties of the statistical entropy S as defined by (1.16), we first discuss those of W (E).
The aim is to enumerate the microstates of given energy E, to the measurement accuracy δE. The number of accessible states is related to both the number of particles and to the number of degrees of
freedom of each particle. It is equal to the accessible volume of the phase space, to the factor CN /h3N (§ 1.2.2). The special case of free particles, in motion in a macroscopic volume, is treated
in Appendix 1.3. This example of macroscopic system allows one to understand : i) the extremely fast variation of W (E) with E, as a power of the particle number N , assumed to be very large (a
similar fast variation is also obtained for an assembly of N particles subjected to a potential) ; ii) the to-
General Properties of the Statistical Entropy
tally negligible effect of δE in practical cases ; iii) the effective proportionality of ln W (E), and thus of S, to N . In the general case, the particles may have other degrees of freedom (in the
example of diatomic molecules : rotation, vibration), they may be subjected to a potential energy, etc. Results i), ii), iii) are still valid.
General Properties of the Statistical Entropy
As just explained on a special case, the physical parameter that contains the statistical character of the system under study is the statistical entropy. We now define it on more general grounds and
relate it to the information on the system.
The Boltzmann Definition
The expression (1.16) of S given by Boltzmann relies on the postulate of equal probability of the W accessible microstates of an isolated system. S increases as the logarithm of the microstates
The Gibbs Definition
A more general definition of the statistical entropy was proposed by Gibbs : S = −kB
pi ln pi
This definition applies to an assembly of systems constructed under the same macroscopic conditions, made up in a similar way according to the process defined in § 1.3.2 ; in the ensemble of all the
reproduced systems, pi is the probability to attain the particular state i of the system, that is, the microstate i. It is this Gibbs definition of entropy that will be used in the remaining part of
this course. For example, for systems having reached thermal equilibrium at temperature T , we will see that the probability of realizing a microstate i is higher, the lower the microstate energy (pi
is proportional to the Boltzmann factor, introduced in § 2.4.1). The Boltzmann expression (1.16) of the entropy is a special case, which corresponds to the situation where all W microstates are
equiprobable, so that the probability of realizing each of them is pi = 1/W .
Chapter 1. Statistical Description of Large Systems
Then : S = −kB ln
1 ( pi ) = kB ln W W i
It is not possible to proceed the other way and to deduce the general Gibbs definition from the particular Boltzmann definition. Yet let us try to relate the Gibbs and the Boltzmann definitions using a
particular example : we are considering a system made up of n1 + n2 equiprobable microstates regrouped into two subsystems 1 and 2. Subsystem 1 contains n1 microstates, its probability of realization
is p1 = n1 /(n1 + n2 ). In the same way, subsystem 2 contains n2 microstates, its probability of realization is equal to p2 = n2 /(n1 + n2 ). From the Boltzmann formulation, the statistical entropy
of the total system is Stot = kB ln(n1 + n2 )
This entropy is related to the absence of information on the microstate of the total system in fact realized : – we do not know whether the considered microstate belongs subsystem 1 or 2 : this
corresponds to the entropy S12 , which is the Gibbs term that we want to find in this example ; – a part S1 of the entropy expresses the uncertainty inside system 1 owing to its division into n1
microstates S1 = p1 (kB ln n1 )
and is weighted by the probability p1 . In the same way S2 comes from the uncertainty inside system 2 S2 = p2 (kB ln n2 )
Since the various probabilities of realization are multiplicative, the corresponding entropies, which are related to the probabilities through logarithms, are additive : Stot = S1 + S2 + S12 kB ln(n1
+ n2 ) =
n1 n2 kB ln n1 + kB ln n2 + S12 n1 + n2 n1 + n2
General Properties of the Statistical Entropy
Then the S12 term is given by Å S12 = kB Å
n1 + n2 n1 + n2
ã ln(n1 + n2 )
− kB
ã Å ã n1 n2 ln n1 − kB ln n2 n1 + n2 n1 + n2
= −kB
n1 n2 n1 n2 ln − kB ln n1 + n2 n1 + n2 n1 + n2 n1 + n2
= −kB (p1 ln p1 + p2 ln p2 ) Indeed this expression is the application of the Gibbs formula to this particular case. On the other hand, expression (1.16) of the Boltzmann entropy is a special case of
SGibbs , which makes (1.18) maximum : the information on the system 1 for each is the smallest when all W microstates are equiprobable and pi = W i. The Boltzmann expression is the one that brings S
to its maximum with the constraint of the fixed value of the system energy. The expression (1.18) of S, as proposed by Gibbs, satisfies the following properties : S ≥ 0, for 0 ≤ pi ≤ 1 and ln pi ≤ 0 S
= 0 if the system state is certain : then p = 1 for this particular state, 0 for the other ones. S is maximum when all pi ’s are equal. S is additive : indeed, when two systems 1 and 2 are weakly
coupled, so that ⎧ ⎪ ⎪ ⎨ S1
= −kB
⎪ S ⎪ ⎩ 2
= −kB
p1i ln p1i p2j ln p2j
by definition, the entropy of the coupled system will be written : Stot = −kB
pl ln pl
Now, since the coupling is weak, each state of the total system is characterized
Chapter 1. Statistical Description of Large Systems
by two particular states of subsystems 1 and 2 and pl = p1i p2j . Consequently, Stot = −kB p1i p2j (ln p1i + ln p2j ) i,j
= −kB
p1i ln p1i
p2j + p2j ln p2j p1i
= S1 + S 2 One thus finds the additivity of S, its extensivity being a special case that is obtained by combining two systems of the same density, but of different volumes.
The Shannon Definition of Information
We just saw that the more uncertain the system state, or the larger the number of accessible microstates, the larger S ; in addition, S is zero if the system state is exactly known. The value of S is
thus related to the lack of information on the system state. The quantitative definition of information, given by Claude Shannon (1949), is closely copied from the Gibbs definition of entropy.3 It
analyzes the capacity of communication channels. One assumes that there are W possible distinct messages, that si (i = 0, 1, . . . W − 1) is the content of the message number i, and that the
probability for si to be emitted is pi . Then the information content I per sent message is I=−
W −1
pi log2 pi
This definition is, to the constant kB ln2, the same as that of the Gibbs entropy, since log2 pi = lnpi /ln2. The entropy is also related to the lack of information : probabilities are attributed to
the various microstates, since we do not exactly know in which of them is the studied system. These probabilities should be chosen in such a way that they do not include unjustified hypotheses, that
is, only the known properties of the system are introduced, and the entropy (the missing information) is maximized with the constraints imposed by the physical conditions of the studied system. This
is the approach that will be chosen in the arguments of § 2.3, 2.4, and 2.5. 3 See the paper by J. Machta in American Journal of Physics, vol 67, p.1074 (1999), which compares entropy, information,
and algorithmics using the example of meteorological data.
Summary of Chapter 1 The time evolution of an individual particle is associated with a trajectory in the six-dimensional phase space, of coordinates (r, p ). The Heisenberg uncertainty principle and
the classical limit of quantum properties require that a quantum state occupies a cell of area h3 in this space. At a given time t an N -particle state is represented by a point in the 6N
-dimensional phase space. The classical statistical description of a macroscopic system utilizes a probability density, such that the probability of finding the system of N particles N
1 , . . . , pN ), to Π d3ri d3 pi , is in the neighborhood of the point (r1 , . . . , rN , p i=1
equal to N CN 3 1 , . . . , pN ) 3N d ri d pi D(r1 , . . . , rN , p h i=1
where CN is a constant depending on N only. In Quantum Mechanics the density operator ˆ = D pn |ψn ψn | n
is introduced, which contains the uncertainties related both to the incomplete knowledge of the system and to the quantum measurement. A configuration defined by the data of the microscopic physical
parameters is a microstate. A macrostate is defined by the value of macroscopic physical parameters ; it is generally produced by a very large number of microstates. This course will only deal with
Statistical Physics in equilibrium. The time average of a fluctuating physical parameter is generally equivalent to the average on an assembly of identical systems prepared in the same way (“ensemble
average”). One assumes that, in the absence of additional information, in the case of an 19
Summary of Chapter 1
isolated system of energy between E and E + δE, where δE is the uncertainty, all the W (E) accessible microstates are equally probable. The statistical entropy S is then defined by S = kB ln W (E)
where kB = R/N is the Boltzmann constant, with R the ideal gas constant and N the Avogadro number. The general definition of the statistical entropy, valid for an ensemble average, is S = −kB pi ln pi
where pi is the probability of occurrence of the particular microstate i of the system. The expression for an isolated system is a special case of the latter definition. The statistical entropy is an
extensive parameter, which is equal to zero when the microscopic state of the system is perfectly known.
Appendix 1.1 The Liouville Theorem in Classical Mechanics There are two possible ways to consider the problem of the evolution between the times t and t + dt of an ensemble of systems taking part in
the ensemble average, each of them containing N particles, associated with the same macrostate and prepared in the same way. To one system, in its particular microstate, corresponds one point in the
6N -dimensional phase space. Either we may be concerned with the time evolution of systems contained in a small volume around the considered point (§ A) ; or we may evaluate the number of systems
that enter or leave a fixed infinitesimal volume of the phase space during a time interval (§ B). In both approaches it will be shown that the elementary volume of the phase space is conserved between
the times t and t + dt. A. Time evolution of an ensemble of systems At time t the systems occupy the phase space volume dτ around the point with coordinates (q1 , . . . qi , . . . , q3N , p1 , . . .
pi , . . . , p3N ) (Fig. 1.5). The infinitesimal 3N volume has the expression dτ = CN dpi dqi , where CN is a constant that i=1
only depends on N . To compare this volume to the one at the instant t + dt, one has to evaluate 3N 3N the products dpi (t)dqi (t) and dpi (t + dt)dqi (t + dt), that is, to calculate i=1
the Jacobian of the transformation ⎧ ⎪ ⎪ pi (t) → pi (t + dt) = pi (t) + dt · ∂pi (t) ⎨ ∂t ⎪ ∂q ⎪ ⎩ qi (t) → qi (t + dt) = qi (t) + dt · i (t) ∂t 21
Appendix 1.1
pi t + dt dτ t
dτ O
Fig. 1.5: The different systems, which were located in the volume dτ of the 6N -dimensional phase space at t, are in dτ at t + dt. i.e., the determinant of the 6N × 6N matrix ⎤ ⎡ ∂qi (t + dt) ∂pi (t +
dt) ⎢ ∂qj (t) ∂qj (t) ⎥ ⎥ ⎢ ⎥ ⎢ ⎣ ∂qi (t + dt) ∂pi (t + dt) ⎦ ∂pj (t) ∂pj (t)
The Hamilton equations are used, which relate the conjugated variables qi and pi through the N -particle hamiltonian and are the generalization of Eq. (1.4) : ⎧ ∂qi ∂H ⎪ ⎪ ⎨ ∂t = ∂p i (1.31) ⎪ ∂pi ∂H
⎪ ⎩ =− ∂t ∂qi One deduces
and in particular
⎧ ∂H ⎪ ⎪ ⎨ qi (t + dt) = qi (t) + dt · ∂p i ⎪ ∂H ⎪ ⎩ pi (t + dt) = pi (t) − dt · ∂qi
⎧ ∂2H ∂qi (t + dt) ⎪ ⎪ ⎨ ∂q (t) = δij + dt · ∂q ∂p j j i 2 ⎪ ∂p (t + dt) ∂ H ⎪ ⎩ i = δij − dt · ∂pj (t) ∂pj ∂qi
Appendix 1.1
The obtained determinant is developed to first order in dt : ˆ ) = 1 + dt · Tr M ˆ + O(dt2 ) Det(1 + dt · M with ˆ =− Tr M
3N 3N ∂2H ∂2H + =0 ∂pi ∂qi i=1 ∂pi ∂qi i=1
Consequently, the volume around a point of the 6N -dimensional phase space is conserved during the time evolution of this point ; the number of systems in this volume, i.e., D(. . . qi , . . . , . .
. pi , . . . , t)dτ , is also conserved. B. Number of systems versus time in a fixed volume dτ of the phase space In the volume dτ = CN
dqi · dpi , at time t, the number of systems present
is equal to D(q1 , . . . qi , . . . q3N , p1 , . . . pi , . . . p3N )dτ
Each system evolves over time, so that it may leave the considered volume while other systems may enter it. p1
q˙1 (t)dt dq1 dp1 (q˙1 (t) + O
∂ q˙1 (t) dq1 ) dt ∂q1 q1
Fig. 1.6: In the volume dq1 · dp1 (thick line), between the times t and t + dt, the systems which enter were in the light grey rectangle at t ; those which leave the volume were in the dark grey
hatched rectangle. It is assumed that q˙1 (t) is directed toward the right. Take a “box”, like the one in Fig. 1.6 which represents the situation for the only coordinates q1 , p1 . If only this
couple of conjugated variables is considered, the
Appendix 1.1
problem involves a single coordinate q1 in real space (so that the velocity q˙1 is necessarily normal to the “faces” at constant q1 ) and we determine the variation of the number of systems on the
segment for the q1 coordinate [q1 , q1 + dq1 ]. Considering this space coordinate, between the times t and t + dt, one calculates the difference between the number of systems which enter the volume
through the faces normal to dq1 and the number which leave through the same faces : the ones which enter were at time t at a distance of this face smaller or equal to q˙1 (t)dt (light grey area on
the Fig.), their density is D(q1 − dq1 , . . . qi , . . . qÅ 3N , p1 , . . . pi , . . . p3Nã). Those which leave were inside the ∂ q˙1 (t) dq1 dt from the face (dark grey hatched volume, at a
distance q˙1 (t) + ∂q1 area). The resulting change, between t and t + dt and for this set of faces, of the number of particles inside the considered volume, is equal to −
∂(q˙1 (t)D) dq1 dt ∂q1
For the couple of “faces” at constant p1 , an analogous evaluation is performed, also contributing to the variation of the number of particles in the considered “box.” Now at 3N dimensions, for all
the “faces,” the net increase of the number of particles in dτ during the time interval dt is −
3N ∂(Dq˙i (t)) i=1
dqi dt −
3N ∂(Dp˙ i (t)) i=1
dpi dt
This is related to a change in the particle density in the considered volume ∂D through to dtdτ . Thus the net rate of change is given by ∂t ã ã Å 3N Å 3N ∂D ∂D ∂D ∂ q˙i ∂ p˙ i (1.38) =− q˙i + p˙ i −
D + ∂t ∂qi ∂pi ∂qi ∂pi i=1 i=1 From the Hamilton equations, the factor of D is zero and the above equation is equivalent to ã 3N Å dD ∂D ∂D ∂D = + q˙i + p˙ i = 0 dt ∂t ∂qi ∂pi i=1
dD(q1 , . . . , qi , . . . q3N , p1 , . . . , pi , . . . p3N ) is the time derivative of the where dt probability density when one moves with the representative point of the system in the 6N
-dimensional phase space. This result is equivalent to the one of § A.
Appendix 1.1
As an example, consider the microcanonical ensemble, in which the density D is a constant for the energies between E and E + δE, zero elsewhere. The density D is function of the sole energy, a
constant in the microcanonical case, so that its derivative with respect to energy is zero between E and E + δE. Then D is independent of the coordinates qi and of the momenta pi , and from (1.39) it
is time-independent.
Appendix 1.2 Properties of the Density Operator in Quantum Mechanics ˆ defined by The operator D ˆ = D
pn |ψn ψn |
where pn is the probability to realize the microstate |ψn , has the following properties : ˆ =D ˆ+ i) it is hermitian : D Indeed, for any operator |u, ˆ u|D|u =
ˆ + |u pn |u|ψn |2 = u|D
ii) it is defined positive ˆ u|D|u =
pn |u|ψn |2 ≥ 0
iii) it is normed to unity, since ˆ= Tr D
pn = 1
Appendix 1.3 Estimation of the Number of Microstates for Free Particles Confined Within a Volume Ω Take the example of a free point particle, of energy and momentum linked p2 = ε ; in three
dimensions, the particle occupies the volume Ω. The by 2m constant energy surfaces correspond to a constant p. In the 6-dimensional (r, p ) phase space for a single particle, the volume associated
with the energies between ε and ε + δε is equal to : Ω4πp2 δp = Ω4πp.pδp, with … m pδp , that is, δp = δε (1.43) δε = m 2ε For a single particle, the number of accessible states between ε and ε + δε
is proportional to Ωp · pδp, i.e., Ωε1/2 δε, i.e., also to Ωε(3/2)−1 δε. For N independent free particles, the constant-energy sphere of the 6N dimensional phase space contains all the states of
energy smaller or equal to E, where E is of the order of N ε. The volume of such a sphere varies in ΩN p3N . The searched microstates, of energy between E and E + δE, correspond to a volume of the
order of ΩN p3N −1 δp, that is, ΩN p3N −2 δE, for pδp ∝ δE. The number W (E) of microstates with energy between E and E + δE is proportional to this volume, i.e., Å W (E) =
Ω Ω0
ãN Å
E E0
δE E
where the constants Ω0 and E0 provide the homogeneity of the expression and where δE, the uncertainty on the energy E, is very much smaller than this energy value. The increase of W (E) versus energy
and volume is extremely fast : for 29
Appendix 1.3
example, if N is of the order of the Avogadro number, N , Å W (E) =
Ω Ω0
ã6×1023 Å
E E0
δE E
which is such a huge number that it is difficult to imagine it ! The logarithm in the definition of the Boltzmann entropy S is here : ã Å Ω E 3N δE ln W (E) = N ln ln (1.46) + + ln Ω0 2 E0 E E is of the
order of N times the energy of an individual particle. For N in the N range, the dominant terms in ln W (E) are proportional to N : indeed for an experimental accuracy δE/E of the order of 10−4 , the
last logarithm is equal to −9.2 and is totally negligible with respect to the two first terms, of the order of 1023 . Note : the same type of calculation, which estimates the number of accessible
states in a given energy range, will also appear when calculating the density of states for a free particle (§ 6.4).
Chapter 2
The Different Statistical Ensembles. General Methods in Statistical Physics In the first chapter we convinced ourselves that the macroscopic properties of a physical system can only be analyzed through
a statistical approach, in the framework of Statistical Physics. Practically, the statistical description of a system with a very large number of particles in equilibrium, and the treatment of any
exercise or problem of Statistical Physics, always follow the two steps (see the section General method for solving exercises and problems, at the end of Ch. 9) : – First, determination of the
microscopic states (microstates) accessible by the N -particles system : this is a Quantum Mechanics problem, which may be reduced to a Classical Mechanics one in some specific cases. § 2.1 will
schematically present the most frequent situations. – Second, evaluation of the probability for the system to be in a particular microstate, in the specified physical conditions : it is at this point
that the statistical description comes in. To solve this type of problem, one has to express that in equilibrium the statistical entropy is maximum, under the constraints defined by the physical
situation under study : the system may be isolated, or in thermal contact 31
Chapter 2. The Different Statistical Ensembles. General Methods
with a heat reservoir (a very large system, which will dictate its temperature), the system may exchange particles or volume with another system, and so forth. The experimental conditions define the
constraint(s) from the conservation laws adapted to the physical situation (conserved total energy, conserved number of particles, etc.). In this chapter, the most usual statistical ensembles will be
presented : “microcanonical” ensemble in § 2.2, “canonical” ensemble in § 2.4, “grand canonical” ensemble in § 2.5, using the names introduced at the end of the 19th century. In all these statistical
ensembles, one reproduces the considered system in a thought experiment, thus realizing an assembly of macroscopic objects built under the same initial conditions, which allows average values of
fluctuating physical parameters to be defined. For a given physical condition, one will decide to use the statistical ensemble most adapted to the analysis of the problem. However, as will be shown,
for systems with a macroscopic number of particles (the only situation considered in this book), there is equivalence, to a extremely small relative fluctuation, between the various statistical
ensembles that will be described here : in the case of a macroscopic system, a measurement will not distinguish between a situation in which the energy is fixed and another one in which the
temperature is given. It is only the convenience of treatment of the problem that will lead us to choose one statistical ensemble rather than another. (On the contrary for systems with a small number
of particles, or of size intermediate between the microscopic and macroscopic ranges, the socalled “mesoscopic systems,” nowadays much studied in Condensed Matter Physics, the equivalence between
statistical ensembles is not always valid.)
Determination of the Energy States of an N -Particle System
For an individual particle i in translation, of momentum pi and mass m, under a potential energy Vi (ri ), the time-independent Schroedinger equation (eigenvalues equation) is Ç ˆ i |ψ αi = h i =
å pˆi 2 + Vi (ri ) |ψiαi 2m
αi i εα i |ψi
where αi expresses the different states accessible to this particle. As soon as several particles are present, one has to consider their interactions, which leads to a several-body potential energy.
Thus, the hamiltonian for N particles in motion, under a potential energy and mutual interactions, is given
Energy States of an N -Particle System
by ˆ = H
pˆi 2 i
+ Vi (ri ) +
Vij (ri , rj )
This is the case, for example, for the molecules of a gas, confined within a limited volume and mutually interacting through Van der Waals interactions ; this is also the situation of electrons in
solids in attractive Coulombic interaction with the positively charged nuclei (Vi term), and in repulsion between themselves (Vij term). The latter interactions, expressed in the Vij ’s, are
essential in the approach toward thermal equilibrium through the transitions they induce between microscopic states, but they complicate the solution of the N -particle eigenstate problem. Now they
mostly correspond to energy terms very small with respect to those associated with the remaining part of the hamiltonian. An analogous problem is faced when one is concerned by the magnetic moments
of a ferromagnetic material : the hamiltonian of an individual electronic intrinsic magnetic moment located in a magnetic field is given by ˆ = − h µˆB · B
ˆ by : where the Bohr magneton operator µˆB is related to the spin operator S e ˆ µ ˆB = − S m
and has for eigenvalues ∓e/2m (see a course of Quantum Mechanics). The ha contains, miltonian of an ensemble of magnetic moments in a magnetic field B ˆ other terms expressing their mutual interin
addition to terms similar to h, actions, i.e. ˆ =− + µˆBi · B ˆBi · µˆBj H Jij µ (2.5) i
where Jij is the coupling term between the moments localized at sites i and j. According to the site i or j, the moment orientation is different. All these types of hamiltonians, in which
several-particle terms appear, are generally treated through approximations, which allow them to be reduced to ˆ is written as a sum of similar hamiltonians, the simplest situation where H each
concerning a single particle. There are several methods to achieve such a reduction : – either neglecting the interactions between particles in the equilibrium state, as they correspond to very small
energies. As a consequence, in the above example of the gas molecules, the only term included in the potential energy
Chapter 2. The Different Statistical Ensembles. General Methods
expresses the confinement with a box (confinement potential) or an external field (gravity) ; the particles are then considered as “free” ; – or using the so-called “mean field” treatment : each particle
is subjected to a mean effect from the other ones. This will be a repulsive potential energy in the case of the Coulombic interaction between electrons in a solid, an effective magnetic field equivalent
to the interactions from the other magnetic moments in the case of a ferromagnetic material ; ˆ in order to decouple the – or changing of variable in the hamiltonian H, ˆ variables, so that H may be
re-written as a sum of terms (“normal modes” defining quasi-particles, for example, in the problem of the atomic vibrations in a crystal at a given temperature). From now on, let us assume that the
total-system hamiltonian is indeed expressed as the sum of similar individual-particle hamiltonians ˆ i ψ αi (ri ) = εαi ψ αi (ri ) ˆ i , with h ˆ = (2.6) h H i i i i
Then it will only be necessary to solve the problem for an individual particle : indeed you learnt in Quantum Mechanics that the solution of ˆ r1 , r2 , . . . , rN ) = Eψ(r1 , r2 , . . . , rN ) Hψ(
ˆj h
N α ψiαi (ri ) = εj j ψiαi (ri ) j
i.e., E=
i εα r1 , r2 , . . . , rN ) = i , ψ(
ψiαi (ri )
The total energy of the system is the sum of the individual particle energies and the N -particle eigenfunction is the product of the eigenfunctions for each single particle. In the first three
chapters of this book, the problems studied will concern discrete variables (for example„ spin magnetic moments) or particles distinguishable by their fixed coordinates, like the atoms of a solid
vibrating around their equilibrium position at finite temperature. The systems with translation degrees of freedom will be analyzed in the classical framework in chapter 4 (ideal gas), or in the
framework of the Quantum Statistics from chapter 5 until the end of the course : in the Quantum Statistics description of the accessible
Isolated System in Equilibrium : “Microcanonical Ensemble”
microstates, one has to account for the indistinguishability of the particles, as expressed by the Pauli principle. In all that follows, we will assume that the microscopic states of the considered
system are known. We are going to rely on the hypothesis of maximum entropy, as introduced in chapter 1, § 1.3 and § 1.4, under the specific physical constraints due to the given experimental
conditions ; we will then deduce the system macroscopic properties, using Statistical Physics.
Isolated System in Equilibrium : “Microcanonical Ensemble”
In such a system one assumes that there is no possible exchange with its surroundings (Fig. 2.1) : the volume is fixed, the number of particles N is given, the total energy assumes a fixed value E,
i.e., its lies in the range between E E + δE, where δE is the uncertainty. This situation is called “microcanonical ensemble.” It is the framework of application of the Boltzmann hypothesis of
equiprobability of the W (E) accessible microstates (§ 1.3.3). The probability to realize any of these microstates is equal to 1/W (E) ; the statistical entropy S is maximum under these conditions
(with respect to the situation where the probabilities would not be equal) and is given by S = kB ln W (E)
E to δE
Fig. 2.1 : Isolated system. Its energy has a fixed value. To solve this type of problem, one has to determine W (E) : this means performing a combinatory calculation of the number of occurrences of
the macroscopic energy E from the various configurations of the microscopic components of the system. For example, for an ensemble of localized magnetic moments located in an external magnetic field,
to each fixed value of E corresponds a value of the macroscopic magnetization. The corresponding number of microstates W (E) was evaluated in § 1.3.1. The other physical macroscopic parameters
Chapter 2. The Different Statistical Ensembles. General Methods
are deduced from S and its partial derivatives (see § 3.5). In particular a microcanonical temperature is defined from the partial derivative of S versus E : 1/T = ∂S/∂E.
Equilibrium Conditions for Two Systems in Contact, Only Exchanging Energy
E A
A ∂S = kB β ∂E
∂S = kB β ∂E
Fig. 2.2: The systems A and A only exchange energy, the combined system A0 =A+A is isolated. Two distinct systems A and A are brought into thermal contact. The system A0 , made up of the combination
of A and A (Fig. 2.2) is the isolated system, to which we can apply the above approach, i.e. look for the maximum of its statistical entropy. The systems in contact, A and A , can only exchange
energy and are weakly coupled, which means that a possible interaction energy is neglected. We can also say that the system A under study has the constraint to be coupled to A , the total energy E +
E being conserved. This is a very common situation, like the one of a drink can in a refrigerator, the ensemble (can-refrigerator) being taken as thermally insulated from the room in which the
refrigerator stands, the room representing the surrondings.
Equilibrium Condition : Equal β Parameters
The total energy and the total statistical entropy are shared between A and A : E0 = E + E SA0 (E0 , E) = SA (E) + SA (E0 − E)
(2.11) (2.12)
From the Boltzmann relation (2.10), the equation (2.12) is equivalent to W0 (E0 , E) = WA (E) · WA (E0 − E)
Equilibrium Conditions for Two Systems in Contact
since for weakly coupled systems the number of microstates of the combined system is the product of the numbers of microstates of the two subsystems. After a long enough time has elapsed, the
combined system (can-refrigerator) A0 reaches an equilibrium situation. Since the combination A0 of the two systems A and A is an isolated system, in equilibrium its entropy is maximum with respect
to the exchange of energy between A and A : SA0 (E0 , E) = SA (E) + SA (E0 − E)
∂SA0 (E0 , E) ∂SA0 (E0 , E) ∂SA (E) ∂SA (E0 − E) = 0 , i.e., = + =0 ∂E ∂E ∂E ∂E (2.15) Once the equilibrium has been reached : ∂SA (E0 − E) ∂SA (E ) ∂SA (E) =− = (2.16) ∂E ∂E ∂E The obtained
equilibrium condition is that the partial derivative of entropy versus energy takes the same value in both systems in contact, A and A , all the other parameters (volume, etc.) being kept constant.
This derivative will be written : ∂S = kB β (2.17) ∂E where kB is the Boltzmann constant. ˜ one In equilibrium, reached for an energy value of system A equal to E = E, thus has β = β
We know that the entropy S and the energy E are both extensive* parameters : by definition the values of extensive parameters are doubled when the system volume Ω is doubled together with the number N
of particles, while the density N/Ω, an intensive* parameter, is maintained in such a transformation. In such a process, the parameter β is not changed, it is an intensive parameter having the
dimension of reciprocal energy, the dimension of kB β being a reciprocal temperature. In § 3.3 the partial derivative of the entropy versus energy will be identified with 1/T , where T is the absolute
temperature, so that (2.18) expresses that in equilibrium T = T .
Fluctuations of the Energy Around its Most Likely Value
The number of microstates of the total system A0 realizing the splitting of the energies [E in A, E0 − E in A ] is proportional to the product
Chapter 2. The Different Statistical Ensembles. General Methods
WA (E) · WA (E0 − E). The probability of occurrence of the corresponding microstate is WA (E)WA (E0 − E) WA (E)WA (E0 − E) = p(E, E0 − E) = Wtot (E0 ) WA (E)WA (E0 − E)
Let us consider the variation of this probability versus E around the value ˜ achieving the entropy maximum with respect to an energy exchange betE ween A and A . In (2.19), only the numerator is a
function of E, since the denominator is a sum over all the possible values of E. For a macroscopic system, W (E) increases very fast with E, as already verified on the example of free particles in
Appendix 3 of Chapter 1. On the other hand, WA (E0 − E) strongly decreases with E if A is macroscopic. Their product goes through a very steep maximum, corresponding to the ˜ state of maximum
probability of the combined system, reached for E = E. This energy is very close to the average value E of the energy in system A, ˜ as the probability practically vanishes outside the immediate
vicinity of E. ˜ The probability variation around E is much faster than that of its logarithm, which varies versus E in the same way as SA0 (E) = kB ln[WA (E)WA (E0 − E)]
because the combined system A0 is isolated. Note that in the particular case of Å a free-particle system, where W (E) ∼ E N ã ∂S (see Appendix 3, Chapter 1), β = is of the order of N/E. Since the ∂E
Ω ˜ the average energy per particle is of the order total energy is of the order E, 1/β = kB T , to a numerical factor of order unity. The average energy of a free particle at temperature T is thus
of order kB T . Now we look for the probability variation versus the energy of system A ˜ The logarithm of this probability is for E close to the most likely value E. proportional to the entropy of
the combined system SA0 (E), considered as a function of E, as A0 is an isolated system to which the Boltzmann formula SA0 (E) = kB ln W0 (E) applies. In equilibrium, the entropy of the combined
system A0 is maximum with respect to an energy exchange between the two systems ; the entropy of A0 ˜ small : can be developed around this maximum, for E − E ã Å 2 Å ã ˜ + ∂SA0 ˜ + 1 ∂ S A0 ˜ 2
(2.21) (E − E) (E − E) SA0 (E) = SA0 (E) ∂E E˜ 2 ∂E 2 E˜
˜ 3 + O (E − E)
Equilibrium Conditions for Two Systems in Contact
Here the first derivative is equal to zero and the second derivative is negative, as the extremum is a maximum. This second derivative includes the contributions of both A and A and is equal to : ∂ 2
SA ∂ 2 S A ∂ 2 S A0 = + = kB 2 2 ∂E ∂E ∂E 2
∂β ∂β + ∂E ∂E
ã = −kB λ0
˜ In fact, since β = 1/kB T , staAll these terms are calculated for E = E. ting that ∂β/∂E is negative means that the internal energy increases with temperature : this is always realized in practical
cases.1 . From the entropy properties (2.21) and (2.22), using the Boltzmann relation one deduces those of the number of accessible microstates : ˜ exp − λ0 (E − E) ˜ 2 /2 W0 (E) ≈ W0 (E)
The probability of realizing the state with the sharing of the energies [E in system A and E0 − E in system A ] is proportional to W0 (E). It is given by
˜ E0 − E) ˜ exp − λ0 (E − E) ˜ 2 /2 p(E, E0 − E) ≈ p(E,
˜ This √ is a Gaussian function around the equilibrium state E. Its width ∆E = 1/ λ0 depends of the size of the total system like Å
∂β ∂β + ∂E ∂E
ã−1/2 (2.25)
that is, like the square root of a number of particles, since E and E are ˜ extensive and the parameters β and β intensive. The relative width ∆E/E varies as a reciprocal number of particles. For a
macroscopic system, where N and N ∼ 1023 , this relative width is of the order of 10−11 , thus unmeasurable : the fluctuations of the energy of system A around its equilibrium value are practically
null. This is equivalent to saying that, for a system with a very large number of particles, situations i), in which the energy of A is exactly ˜ and ii), in which system A reached an equilibrium
equal to the value E, ˜ energy E through exchanges with a heat reservoir, lead to the same physical measurement of the energy. 1 When the total energy has an upper bound, as in the case of the spins
in a paramagnetic solid, the maximum energy is reached when all the magnetic moments are antiparallel to the applied magnetic field. In the vicinity of this maximum, adding energy produces a decrease
in entropy, so that (2.17) leads to a negative β, thus to a “ negative temperature.” However in the situation of a real paramagnetic solid, the spin degree of freedom is in equilibrium with the other
degrees of freedom, like the ones related to the atoms’ vibrations around their equilibrium positions. For the latter ones, the energy is not limited and for the whole solid, the total energy indeed
increases with temperature.
Chapter 2. The Different Statistical Ensembles. General Methods
System in Contact with a Heat Reservoir, “Canonical Ensemble”
When two systems are brought into thermal contact, one being much larger than the other (the can is much smaller than the refrigerator) (Fig. 2.3), the larger system behaves like a heat reservoir of
energy E0 − E close to E0 , the energy E of the other system thus being very small. The statistical entropy, and, consequently, the parameter β of the larger system, are close to those of the total
system : Å Å 2 ã ã ã Å ∂SA (E) ∂ SA (E) ∂SA = − E + O(E 2 ) (2.26) ∂E E0 −E ∂E ∂E 2 E0 E0 Å 2 ã ∂ SA (E) + O(E 2 ) (2.27) = kB β0 − E ∂E 2 E0
E A
A β
Fig. 2.3: The system A is in contact with the heat reservoir A which dictates its parameter β to A.
When the equilibrium is reached, the parameter β of the smaller system is adjusted to the parameter β of the larger system, which itself takes a value close to that of the parameter β0 of the
ensemble : the larger system acts as a heat reservoir or thermostat, a body that keeps its value of β and dictates this parameter to the smaller system (we will see in chapter 3 that this is
equivalent to saying that it keeps its temperature when put in thermal contact : for example, in equilibrium the can takes the temperature of the refrigerator). The smaller system A can be a
macroscopic system, sufficiently smaller than A , or even a single microscopic particle, like an individual localized magnetic moment inside a paramagnetic solid, with respect to which the rest of the
macroscopic solid plays the role of a reservoir. The situation in which the system under study is in thermal contact with a heat reservoir that gives its temperature T to the system is called
“canonical,” since it is a very frequent situation among the studied problems. The ensemble
Canonical Ensemble
of systems in contact with the heat reservoir, on which we will calculate the ensemble average, is the “canonical ensemble.”
The Boltzmann Factor
We are thus in the situation where a smaller system A is coupled to a larger system A that dictates its value of the parameter β, which is close to the value for the combined isolated system. First
consider the probability pi that a specific microstate i of system A, of energy Ei , is produced. It only depends on the properties of the heat reservoir and is the ratio of the number of microstates
Wres (E0 − Ei ) of the reservoir with this energy to the sum of all the numbers of microstates for all the reservoir energies E0 − Ei : Wres (E0 − Ei ) pi = Wres (E0 − Ei )
Wres (E0 −Ei ) is calculated in the microcanonical ensemble for the given energy E0 − Ei and is related to Sres (E0 − Ei ) through the Boltzmann relation ln Wres (E0 − Ei ) =
1 Sres (E0 − Ei ) kB
Indeed, once the reservoir energy E0 − Ei is fixed, the different microstates in this situation are equally likely. Since
Å Sres (E0 − Ei ) = SA (E0 ) − Ei
∂SA ∂E
ã + ...
E =E0
ln Wres (E0 − Ei ) = ln WA (E0 ) − Ei β0 + . . .
= constant − β0 Ei + . . . the probability of occurrence of this microstate is proportional to Wres (E0 − Ei ), i.e., pi = C exp(−β0 Ei )
In the probability pi there appears the exponential of the energy Ei of the system. This is the so-called “Boltzmann factor” (1871), or “canonical distribution,” an expression that will be used very
often in this course, every time we will study an ensemble of distinguishable particles, in fixed number 1 (the case of indistinguishable particles will N , of given temperature T = kB β
Chapter 2. The Different Statistical Ensembles. General Methods
be treated in the framework of the Quantum Statistics in chapter 5 and the following ones). As an example, Fig. 2.4 schematizes the Boltzmann factor for a 4-microstate system. E
E β1 E4
β2 = 0, 5β1 (T2 = 2T1 )
Fig. 2.4: The considered system has four microstates of energies E1 to E4 . The abscissae are proportional to the probabilities of occurrence of one of the microstates, for two different values of β
(or of the temperature). If the total number of microstates of system A corresponding to the same energy E is W (E), then the probability that system A, coupled to A , has the energy E, is equal to :
p(E) = C W (E) exp(−β0 E)
where W (E) is the degeneracy (the number of ways to realize this energy) of the state of energy E. The probability p(E) is the product of the function W (E), very rapidly increasing with energy for
a macroscopic system, by a ˜ decreasing exponential : the probability has a very steep maximum for E = E.
Energy, with Fixed Average Value
We now show that another condition, i.e., the constraint of a given average value E of the energy of a system A, leads to a probability law analogous to
Canonical Ensemble
the one just found. The only difference is that the value of β, now a parameter, should be adjusted so that the average of energy indeed coincides with the given value E. We have to maximize the
statistical entropy of A while the value E is given. We thus have to realize SA = −kB
pi ln pi maximum, under the constraints
⎧ ⎪ pi = 1 ⎪ ⎨ i ⎪ pi Ei = E ⎪ ⎩
A general mathematical method allows one to solve this type of problem, it is the method of Lagrange multipliers explained in Appendix 2.1 : parameters are introduced, which are the Lagrange
multipliers (here β). These parameters are adjusted at the end of the calculation, in order that the average values be equal to those given by the physical properties of the system, here the value of
the average energy. One thus obtains for the probability of occurrence of the microstate of energy Ei : exp(−βEi ) pi = exp(−βEi )
that is, an expression of the “Boltzmann-factor” type (2.32) but here β does not characterize the temperature of a real heat reservoir. It is rather determined by the condition that the average
energy of the system A should be the one given by the problem conditions, that is :
Ei exp(−βEi )
exp(−βEi )
= E
This method of the Lagrange multipliers is followed in some Statistical Physics courses to obtain the general expression of the Boltzmann factor (for example, those given at Ecole polytechnique by R.
Balian, E. Brézin, or A. Georges and M. Mézard).
Chapter 2. The Different Statistical Ensembles. General Methods
Partition Function Z
The probability for the system in contact with a heat reservoir to be in a state of energy Ei is pi =
e−βEi 1 , withβ = ZN kB T
The term ZN =
which allows one to norm the Boltzmann factors is called the “canonical partition function for N particles” (in German “Zustandssumme,” which means “sum over the states”). Formulae (2.37) and (2.38)
are among the most important ones of this course ! The complete statistical information on the considered problem is contained in its partition function, since from Z and its partial derivatives all
the average values of the physical macroscopic parameters of the system can be calculated. We will show this property now on the examples of the average energy and the energy fluctuation around its
average (for the entropy see § 2.4.5). Indeed, the average energy is given by E =
pi Ei =
exp(−βEi ) Z
∂ ln Z 1 ∂Z =− =− Z ∂β ∂β It appears here that ln Z is extensive like E.
To determine the energy fluctuation around its average value, one calculates (∆E)2 = E − E2 = E 2 − E2
A procedure similar to that of (2.39) is followed to find E 2 : E 2 =
pi Ei2 =
(∆E)2 =
1 ∂2Z 1 − 2 Z ∂β 2 Z
∂Z ∂β
exp(−βEi ) 1 ∂2Z = Z Z ∂β 2 ã2 =
∂ ∂β
1 ∂Z Z ∂β
ã =
(2.41) ∂ 2 ln Z ∂β 2
One verifies here that, since ln Z varies as the number of particles of the system and β is intensive, ∆E varies as N 1/2 and ∆E/E as N −1/2 (in agreement
Canonical Ensemble
with the result of § 2.3.2), which corresponds to an extremely small value in a macroscopic system : for N = 1023 , ∆E/E is of the order of 10−11 ! Note 1 : A demonstration (outside the framework of
the present course, see, for example, R. Balian chapter 5 § 5) can be done of the equivalence between all the statistical ensembles in the case of macroscopic systems : it is valid in the
“thermodynamical limit,” which consists in simultaneously having several parameters tending toward infinity : the volume Ω (which means in particular that it is very much larger than typical atomic
volumes), the particle number N and the other extensive parameters, while keeping constant the particle density N/Ω and the other intensive parameters. In such a limit, it is equivalent to impose an
exact value of a physical parameter (like the energy in the microcanonical ensemble) or an average value of the same parameter (the energy in the canonical ensemble). In fact, the technique is
simpler in the canonical ensemble, where ln Z is calculated on all the states without any restriction, than in the microcanonical ensemble, where the calculation of ln W (E) requires the limitation
to the range of energies between E and E + δE. Besides, for macroscopic systems one understands that the predicted physical results, using either the microcanonical or the canonical statistical
ensemble, cannot be distinguished through measurements. Note 2 : The value of ZN “gives a hint” at the number of microstates that can be achieved at the experiment temperature. Indeed at very low
temperature, where βEi → ∞, only the fundamental state E0 is realized and ZN = 1. On the other hand, at high temperature where βEi → 0, many terms are of the order of unity, corresponding to
practically equally likely states.
Entropy in the Canonical Ensemble
We just showed that, using the partition function, we can calculate the average energy of system A at temperature T , together the fluctuation around this average energy. We now have to express the
statistical entropy of A in these conditions. Here the probabilities of occurrence of the different microstates are not equal, since they depend of their respective energies through Boltzmann factors.
Thus the definition of the statistical entropy to be used is no longer that of Boltzmann, but rather the one of Gibbs : SA = −kB
pi ln pi
Chapter 2. The Different Statistical Ensembles. General Methods
with pi =
1 −βEi e , pi = 1 Z i
Consequently, SA = −kB
pi (− ln Z − βEi )
SA = kB (ln Z + βE) = kB ln Z + kB βU
The average energy E of the macroscopic system is identified to the internal energy U of Thermodynamics (see § 3.2). Note that, for a large system in which the energy fluctuation is relatively very
small around E, the probability of occurrence of this energy value is very close to unity. The number of occurrences of this average energy is W (E), so that the Gibbs entropy (2.43) practically
reduces to the Boltzmann entropy (2.10) for this value E.
Partition Function of a Set of Two Independent Systems with the Same β Parameter (Same T )
In § 2.1 we saw that, in the N -particle problems solved in this book, we always decompose the hamiltonian of the total system into a sum of hamiltonians for individual particles ; a particle may
have several degrees of freedom, whence another sum of hamiltonians. The energy of a microstate is thus a sum of energies and now we see the consequence of this property on the partition function of
such a system in thermal equilibrium. This situation is schematized by simply considering a system A made up of two independent subsystems A1 and A2 , both in contact with a heat reservoir at
temperature T . A particular microstate of A1 has the energy E1i , while the energy of A2 is E2j and the energy of A is Eij . Then ⎧ Eij = E1i + E2j ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ 1 ⎪ ⎪ Z = Z1 Z2 = exp{−β(E1i + E2j )}
, with β = ⎪ ⎪ kB T ⎨ ij ln Z = ln Z1 + ln Z2 ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ E = E1 + E2 ⎪ ⎪ ⎪ ⎪ ⎩ S = S1 + S2
Canonical Ensemble
This property is very often used : when the energies of two independent systems at the same temperature sum, the corresponding partition functions multiply. Consequently, in the case of
distinguishable independent particles with several degrees of freedom, one will separately calculate the partition functions for the various degrees of freedom of an individual particle ; then one
will multiply the factors corresponding to the different degrees of freedom and particles to obtain the partition function Z of the total system. Finally, in the special case of the ideal gas, one
will introduce the factor CN into Z (see § 4.4.3 and 6.7). Take the example of the vibrations of atoms around their equilibrium positions, owing to thermal motion, in a solid in thermal equilibrium
at temperature T : this is a very classical exercise and here we only sketch its solution. In the model proposed by Einstein (1907), one assumes that the atoms, in number N , are points and that they
are all attracted toward their respective equilibrium position with the same restoring constant, that is, with the same frequency ω. The value of ω depends of the mechanical properties of the solid,
in particular of its stiffness. Then the hamiltonian for a particular atom in motion is given by 1 ˆ 2i ˆ xi + ˆhyi + h ˆ zi ˆi = p + mω 2ri2 = h h 2m 2
It is the sum of three hamiltonians of the “harmonic-oscillator” type, identical ˆ xi relative to the for each coordinate. The eigenvalues of the hamiltonian h coordinate x of site i are
1 n = nxi + ω Exi 2
The contribution to the partition function of this degree of freedom is equal to exp(−β(nxi + 1/2)ω) (2.50) zxi = n
The partition function for the total system is the product of 3N terms similar to this one. The average energy is deduced using (2.39), it is equal to 3N times the average value for the coordinate x
of site i. The parameter accessible to experiment is the specific heat at constant volume Cv , defined as the derivative of the average energy with respect to temperature. The above solution gives for
this lattice contribution Cv =
3N kB (Θ/T )2 ω , taking =Θ Θ/T 2 kB (e − 1)
This expression is sketched on Fig. 2.5.
Chapter 2. The Different Statistical Ensembles. General Methods
Cv /3N kB 1
T /Θ
Fig. 2.5: Variation of Cv versus temperature, according to the Einstein model. The dots are experimental data for diamond, the solid line is calculated for Θ = 1320 K (after A. Einstein, Annalen der
Physik vol.22, p.186 (1907)). Its high temperature limit Cv = 3N kB = 25 J.K−1mol−1 is independent of ω, thus universal : this is the Dulong and Petit law (1819), valid for metals at room
temperature. The low temperature limit Cv → 0 for T → 0 is in agreement with experiment. However, the above law of variation of Cv is not experimentally verified. The Einstein model is improved by
introducing a repartition of characteristic frequencies in a solid instead of a single ω : this is the Debye model (1913), in which Cv is proportional to T 3 at very low temperatures. The agreement
with experiment then becomes excellent for insulators, whereas for metals an additional term is due to the mobile electron contribution to the specific heat (see § (7.2.2)).
Exchange of Both Energy and Particles : “Grand Canonical Ensemble”
Here we consider two coupled systems A and A , separated by a porous partition through which energy and particles can be exchanged, the combined system A0 being isolated from its surroundings (Fig.
2.6). An example of such a situation is water molecules under two phases, liquid and gas, contained in
Grand Canonical Ensemble
∂S = −kB α ∂N A ∂S = kB β ∂E
∂S = −kB α ∂N A
∂S = kB β ∂E
Fig. 2.6 : The two systems A and A exchange energy and particles. a thermally insulated can of fixed volume, the water molecules belonging to either phase. One can also study hydrogen molecules as a
free gas in equilibrium with hydrogen molecules adsorbed on a solid palladium catalyst surface together with the total system being thermally insulated and the total number of molecules fixed. We have
to state that the entropy is maximum at equilibrium, following an approach very similar to the one of 2.3.1.
Equilibrium Condition : Equality of Both Temperatures and Chemical Potentials
One writes that the maximum of entropy for the combined system A0 = A+A takes place at constant total energy and fixed total number of particles, i.e., SA0 (E0 , N0 ) = SA (E, N ) + SA (E , N )
maximum under the two constraints
⎧ ⎨ E0 = E + E
⎩ N = N + N 0 Let us express the condition of entropy maximum of A0 : SA0 (E0 , N0 ) = SA (E, N ) + SA (E0 − E, N0 − N )
With respect to an energy exchange between the two systems : Å ã ∂SA0 (E, N ) =0, ∂E N Å i.e.,
∂SA0 (E, N ) ∂E
Å =
∂SA (E, N ) ∂E
Å +
∂SA (E0 − E, N0 − N ) ∂E
ã =0 N
Chapter 2. The Different Statistical Ensembles. General Methods
With respect to a particles exchange : Å ã ∂SA0 (E, N ) =0 ∂N E Å
∂SA0 (E, N ) i.e., ∂N
Å =
∂SA (E, N ) ∂N
Å + E
∂SA (E0 − E, N0 − N ) ∂N
ã = 0. E
(2.56) The thermal equilibrium condition (2.55) again provides the results of § 2.3.1, that is, β = β . The condition (2.56) on N implies that ã ã ã Å Å Å ∂SA (E0 − E, N0 − N ) ∂SA (E , N ) ∂SA (E, N
) =− = ∂N ∂N ∂N E E E (2.57) The second equilibrium condition is the equality, in the systems A and A in contact, of the partial derivatives of the entropy versus the number of particles, all the
other parameters being kept constant. This derivative will be written Å ã ∂S = −kB α (2.58) ∂N Ω,E where kB is the Boltzmann constant. It will be seen in § 3.5.1 that the parameter α is related to
the chemical potential µ through α = βµ = µ/kB T . The equilibrium condition between systems A and A thus simultaneously implies ⎧ ⎪ ⎨ β = β , i.e., T = T (2.59) ⎪ ⎩ α = α , i.e., µ = µ T T that is,
the equality of both the temperatures and the chemical potentials in both systems. Thus in equilibrium the chemical potential of hydrogen is the same, whether in gas phase or adsorbed on the
catalyst, and the temperatures are the same in both phases.
Heat Reservoir and Particles Reservoir ; Grand Canonical Probability and Partition Function
Similarly to the approach of § 2.4, one now considers that the system A under study is macroscopic, but much smaller that the system A (Fig. 2.7). Consequently, the parameters α and β of system A are
almost the same as those
Grand Canonical Ensemble
of the combined system A0 , and are very little modified when system A varies in energy or in number of particles : with respect to A the system A behaves like a heat reservoir and a particle
reservoir, it dictates both its temperature β = 1/kB T and its chemical potential α = µ/kB T to A.
E A
A N
Fig. 2.7: The system A is in contact with the heat reservoir and particles reservoir A , which dictates its parameters α and β to A. This situation in Statistical Physics, in which the system under
study A is coupled to a heat reservoir that dictates its value of β, i.e., its temperature, and to a particle reservoir that gives the value of α, is called the “grand canonical ensemble.”
Grand Canonical Probability Law and Partition Function
Now both the energy Ei and the number of particles Ni of the considered system enter into the Boltzmann-type probability pi . Indeed, consider the probability pi to reach a particular microstate i of
A, of energy Ei and particle number Ni . It is related to the ability for the reservoir to realize this physical situation at given energy and number of particles, that is, to the number Wres (E0 −
Ei , N0 − Ni ) of equally likely microstates, and thus to the reservoir entropy, through the Boltzmann entropy relation. Now Å ã ∂Sres Sres (E0 − Ei , N0 − Ni ) = SA0 (E0 , N0 ) − Ei ∂E E0 ,N0 Å ã
(2.60) ∂Sres − Ni + ... ∂E E0 ,N0 thus ln Wres (E0 − Ei , N0 − Ni ) = ln Wres (E0 , N0 ) − Ei β0 + Ni α0 + . . .
The probability of occurrence of this microstate is proportional to Wres , i.e., p(Ei , Ni ) =
1 exp(−β0 Ei + α0 Ni ) ZG
Chapter 2. The Different Statistical Ensembles. General Methods
which defines the grand partition function ZG , the quantity which norms the probabilities. It is now a sum over all energies and numbers of particles that can be achieved ; this sum can also be
regrouped according to the different numbers of particles : e−βEi +αNi = eαN ZN (2.63) ZG = i
The canonical partition function ZN for N particles, as defined in (2.38), e−βEi (2.64) ZN = i
thus appears in ZG with a weight equal to exp αN . (Note that in (2.63) the sum is performed over all microstates i with an arbitrary number of particles, whereas in (2.64) the sum is only over the
states with N particles, with N fixed.)
Average Values
Using p(Ei , Ni ) one calculates the average energy and the average number of particles of system A : this is conveniently performed from the partial derivatives of ZG . The approach is the same as
in (2.39) : E =
Ei p(Ei , Ni ) =
∂ ln ZG exp(−βEi + αNi ) 1 ∂ZG =− =− ZG ZG ∂β ∂β (2.65)
N =
Ni p(Ei , Ni ) =
∂ ln ZG exp(−βEi + αNi ) 1 ∂ZG = =+ ZG ZG ∂α ∂α (2.66)
To determine the energy fluctuations around its average value, one proceeds in a way similar to the calculation in the canonical ensemble (eqs. (2.41) and (2.42)). Let us calculate the fluctuation of
the particle number in system A : N 2 =
Ni2 p(Ei , Ni ) =
1 ∂ 2 ZG ZG ∂α2
1 ∂ 2 ZG − (∆N )2 = N 2 − N 2 = ZG ∂α2 Å ã 1 ∂ZG ∂ 2 ln ZG ∂ = = ∂α ZG ∂α ∂α2
(2.67) 1 ∂ZG ZG ∂α
ã2 (2.68)
Other Statistical Ensembles
Since ln ZG is extensive as it increases like E or N (see (2.65) and (2.66)), (∆N )2 varies proportionaly with the size of the system, or with its average number of particles. Consequently, ∆N/N is
of the order N −1/2 , it is thus very small for a macroscopic system. Thus by differentiating ZG one obtains the average physical parameters E and N and their fluctuations around these averages : in
fact ZG , which expresses the probabilities, contains the complete information on the system.
Grand Canonical Entropy
Using the Gibbs definition S = −kB
pi ln pi
one calculates the entropy in the grand canonical ensemble. Here pi =
1 −βEi +αNi e , pi = 1 ZG i
Consequently, S = −kB
pi (− ln ZG − βEi + αNi )
S = kB (ln ZG + βE − αN ) = kB ln ZG + kB βU − kB αN
In the present definition of the grand canonical ensemble, we have considered a macroscopic system in contact with a heat reservoir and a particle reservoir. The grand canonical ensemble is also used
when the system under study is specified by the average values of both its energy and its particle number and one then needs to determine β and α = βµ : the argument is the extension of the one in §
2.4.2. We will use the grand canonical ensemble in particular for the study of the Quantum Statistics : the Pauli principle introduces limitations to the occupation of the quantum states, which will
be more conveniently expressed in this ensemble.
Other Statistical Ensembles
As you imagine, one can generalize to various physical situations the approach consisting of considering two weakly coupled systems A and A , the combined system being isolated, i.e., the total
parameters being fixed. In particular one
Chapter 2. The Different Statistical Ensembles. General Methods
can study systems A and A exchanging both energy and volume, the total volume Ω0 = Ω + Ω being fixed. The equilibrium condition now implies that the partial derivatives of the entropy ∂S ∂E
= kB β
∂S ∂Ω
= kB γ
take the same value in both systems. Thermodynamics will show, in § 3.5.1, that kB γ = P/T where P is the pressure, so that the new equilibrium condition will be ® T = T (2.74) P = P that is, in
equilibrium the systems A and A will have the same temperature and the same pressure. If A is much larger than A, it will give its pressure to A. The chemistry experiments performed in isothermal and
isobar conditions (for example, at room temperature and under the atmospheric pressure) are in such a condition. Then Å ã P Ωi 1 1 Ei − exp(−βEi − γΩi ) = exp − (2.75) p(Ei , Ωi ) = ZT,P ZT,P kB T kB
T i represents the microstate in which the system has the energy Ei and occupies the volume Ωi . A new partition function ZT,P is defined by exp(−βEi − γΩi ) (2.76) ZT,P = i
(In fact the sum over the volume, a continuous variable, is rather an integral.) The entropy is deduced : S = −kB p(Ei , Ωi )(− ln ZT,P − βEi − γΩi ) (2.77) i
S = kB (ln ZT,P + βE + γΩ)
Summary of Chapter 2 In this chapter the general method for solving problems in Statistical Physics is presented. – First one needs to determine the quantum states of the system. In the cases we will
consider, the system hamiltonian can be decomposed into a sum of similar one-particle terms ˆi ˆ = H h i
Until chapter 4, the particles will be assumed to be distinguishable. – Only in a second step does one choose a statistical ensemble adapted to the physical conditions of the problem (or the most
convenient to solve the problem, since for macroscopic systems one cannot measure the fluctuations resulting from the choice of a specific ensemble rather than another one). One expresses that the
entropy is maximum in equilibrium, under the constraints set by the physical conditions. According to the chosen ensemble, a mathematical quantity deduced from the probabilities allows one to obtain
the macroscopic physical parameters that can be measured in experiments : – the microcanonical ensemble is used to treat the case of an isolated system : the number of microstates W (E) all producing
the same macrostate of energy in the range between E and E + δE, allows the entropy to be obtained S = kB ln W (E) – the canonical ensemble is adapted to the case of a system with a fixed number of
particles N , in contact with a heat reservoir at temperature T . The probability to be in a microstate of energy Ei is then given by the Boltzmann factor : e−βEi 1 , with β = pi = ZN kB T 55
Summary of Chapter 2
The canonical partition function for N particles is defined by ZN = e−βEi i
It is related to the average energy through E = −
∂ ln Z 1 ∂Z =− Z ∂β ∂β
– the grand canonical ensemble corresponds to the situation of a system in contact with a heat reservoir and a particle reservoir. The grand canonical partition function is introduced : ZG = eαN ZN N
The probability to be in a microstate of both energy Ei and number of particles Ni is given by p(Ei , Ni ) =
1 exp(−βEi + αNi ) ZG
The average energy and the average number of particles are then deduced : E = −
∂ ln ZG 1 ∂ZG ∂ ln ZG 1 ∂ZG =− , N = = ZG ∂β ∂β ZG ∂α ∂α
Appendix 2.1 Lagrange Multipliers One looks for the extremum of a function with several variables, satisfying constraints. Here we just treat the example of the entropy maximum with a fixed average
value of the energy, i.e., pi ln pi maximum, under the constraints S = −kB i
⎧ pi = 1 ⎪ ⎨ i ⎪ pi Ei = E ⎩
The process consists in introducing constants, the Lagrange multipliers, here λ and β, the values of which will be determined at the end of the calculation. Therefore one is looking for the maximum
of the auxiliary function : F (p1 , . . . , pi , . . . ) = − pi ln pi + λ( pi − 1) − β( Ei pi − E) (2.80) i
which expresses the above conditions, with respect to the variables pi . This is written ∂F = −(ln pi + 1) + λ − βEi = 0 ∂pi
ln pi = −βEi + λ − 1
pi = (eλ−1 ) e−βEi
The solution is
Appendix 2.1
The constants λ and β are then obtained by expressing the constraints : • pi = 1 , whence i
e−βEi = 1
eλ−1 = •
1 e
(2.84) 1 Z
pi Ei = E , i.e.,
1 Ei e−βEi = E Z i
which is indeed expression (2.36). Note that, if the only constraint is Σpi = 1, the choice of the pi maximizing S i
indeed consists in taking all the pi equal ; this is what is done for an isolated system using the Boltzmann formula.
Chapter 3
Thermodynamics and Statistical Physics in Equilibrium In Chapter 1 we stated the basic principles of Statistical Physics, which rely on a detailed microscopic description and on the hypothesis, for
an isolated system, of the equiprobability of the different microscopic occurrences of a given macroscopic state ; the concept of statistical entropy was also introduced. In Chapter 2 the study was
extended to other systems in equilibrium, in which the statistical entropy is maximized, consistently with the constraints defined by the physical conditions of the problem. It is now necessary to
link this statistical entropy S and the entropy which appears in the second law of Thermodynamics and that you became familiar with in your previous studies. It will be shown in this chapter that
these two entropies are identical. Thus the purpose of this chapter will first be (§ 3.1 to 3.4), mostly in the equilibrium situation, to rediscover the laws of Thermodynamics, which rule the behavior
of macroscopic physical properties. This will be done in the framework of the hypotheses and results of Statistical Physics, as obtained in the first two chapters of this book. Then we will be able to
use at best the different thermodynamical potentials (§ 3.5), each of them being directly related to a partition function, that is, to a microscopic description. They are convenient tools to solve the
Statistical Physics problems. We are going to state the successive laws of Thermodynamics and to see how each one is interpreted in the framework of Statistical Physics. The example chosen in
Thermodynamics will be the ideal gas, the properties of which you 59
Chapter 3. Thermodynamics and Statistical Physics
studied in previous courses. Its study in Statistical Physics will be presented in the next chapter 4.
Zeroth Law of Thermodynamics
According to this law, if two bodies at the same temperature T are brought into thermal contact, i.e. can exchange energy, they remain in equilibrium. In Statistical Physics, we saw in (2.3.1) that
if two systems are free to exchange energy, the combined system being isolated, the equilibrium condition is that the partial derivative of the statistical entropy versus energy should take the same
value in each system : ∂S ∂S = kB β, = kB β ; β = β (3.1) ∂E ∂E It was suggested that this partial derivative of the entropy is related to its temperature, as will be more precisely shown below in §
First Law of Thermodynamics
Its statement is as follows : Consider an infinitesimal process of a system with a fixed number of particles, exchanging an infinitesimal work δW and an infinitesimal amount of heat δQ with its
surroundings. Although both δW and δQ depend on the sequence of states followed by the system, the sum dU = δW + δQ
only depends on the initial state and the final state, i.e., it is independent of the process followed. This states that dU is the differential of a state function U , which is the internal energy of
the system. A property related to this first law is that the internal energy of an isolated system remains constant, as no exchange of work or heat is possible : dU = 0
In this Statistical Physics course, we introduced the energy of a macroscopic system : in the microcanonical ensemble it is given exactly ; in the canonical or grand canonical ensemble its value is
given by the average pi Ei (3.4) E = U = i
First Law of Thermodynamics
where pi is the probability of occurrence of the state of energy Ei . The differential of the internal energy pi dEi + Ei dpi (3.5) dU = i
consists in two terms that we are going to identify with the two contributions in dU , i.e., δW and δQ. For this purpose, two particular processes will first be analyzed, in which either work (§
3.2.1) or heat (§ 3.2.2) is exchanged with the surroundings. Then the case of a quasi-static general process will be considered (§ 3.2.3).
Work E
E4 + dE4 E4 E3 + dE3 E3
E2 E2 + dE2
E1 + dE1 E1 xj
xj + dxj
Fig. 3.1: Modification of the energies of the considered system when the external parameter varies from xj to xj + dxj . The quantum states of a system are the solutions of a hamiltonian which depends
of external parameters xj characteristic of the system : for example, you learnt in Quantum Mechanics, and will see again in chapter 6, that a gas molecule, free and confined within a fixed volume Ω,
assumes for energy
Chapter 3. Thermodynamics and Statistical Physics
eigenvalues 2 k 2 /2m, where the allowed values of k depend of the container volume like Ω−1/3 ; as for the probabilities of realizing these eigenstates, they are determined by the postulates of
Statistical Physics. Mechanical work will influence these external parameters : for example, the motion of a piston modifies the volume of the system, thus shifting its quantized states. More
generally, if only the external parameters of the hamiltonian are modified from xj to xj + dxj , the energy of the eigenstates is changed and a particular eigenvalue Ei of the hamiltonian becomes Ei +
dEi in such a process (Fig. 3.1). The variation of the average energy of the system is then ∂U dxj = −Xj dxj ∂xj
where Xj is the generalized force corresponding to the external parameter xj . In the special case of a gas, the variation of internal energy in a dilatation dL is equal to ∂U dL = −F dL = −P SdL =
−P dΩ ∂L
Here P is the external pressure that acts on the system surface S ; it is equal to the pressure of the system in the case of a reversible process (Fig. 3.2). This internal energy variation is
identified with the infinitesimal work received by the gas. In such a situation the variation dL of the characteristic length modifies the accessible energies. P dΩ
Fig. 3.2 : Dilatation of the volume Ω submitted to the external pressure P . In general the variation of average energy resulting from the variation of the external parameters is identified with the
infinitesimal work δW = − Xj dxj (3.8) j
and corresponds to the term
pi dEi
First Law of Thermodynamics
Assume that energy is brought into the system, while no work is delivered : the external parameters of the hamiltonian are unchanged, the energy levels Ei remain identical. The average energy of the
system increases, now due to higher probabilities of occurrence of the system states of higher energy. The probabilities pi are modified and now the variation of internal energy is identified with the
infinitesimal heat absorbed : δQ = Ei dpi (3.9) i
A way to confirm this identification of
Ei dpi is to consider a very slow pro-
cess in which no heat is exchanged with the surroundings, that is, the system is thermally insulated : this is an adiabatic ∗ process. In Thermodynamics, such a process is done at constant entropy.
In Statistical Physics, it can also be shown that the statistical entropy is conserved in this process : pi ln pi = constant (3.10) S = −kB i
Now as
dpi ln pi +
pi = 1, one has
dpi =0 pi
dpi = 0, whence
ln pi dpi = 0
One replaces ln pi by its expression in the canonical ensemble ln pi = −βEi − ln Z and deduces
Ei dpi = 0
which is indeed consistent with the fact that the heat δQ exchanged with the surroundings is zero in an adiabatic process.
Quasi-Static General Process
Consider an infinitesimal quasi-static ∗ process of a system, that is, slow enough to consist in a continuous and reversible ∗ sequence of equilibrium states.
Chapter 3. Thermodynamics and Statistical Physics
In the differential dU one still identifies the work with the term arising from the variation of the hamiltonian external parameters in this process. The complementary contribution to dU is identified
with δQ, the heat exchanged with the surroundings in the process.
Second Law of Thermodynamics
Let us recall its statement : Let us consider a system with a fixed number N of particles, in thermal contact with a heat source at temperature T . The system exchanges an amount of heat δQ with this
source in an infinitesimal quasi-static process. Then the entropy variation dSthermo of the considered system is equal to dSthermo =
δQ T
Although δQ depends on the particular path followed, the extensive quantity dSthermo is the differential of a state function, which thus only depends on the initial and final states of the considered
process. Besides, in a spontaneous, i.e. without any contribution from the surroundings, and irreversible process, the entropy variation dSthermo is positive or zero. The general definition of the
entropy in Statistical Physics is that of Gibbs : S = −kB
pi ln pi
Let us first examine the interpretation of an irreversible process in Statistical Physics. An example is the free expansion of a gas into vacuum, the so-called “Joule-Gay-Lussac expansion” : a N
-molecule gas was occupying a volume Ω (Fig. 3.3a), the partition is removed, then the gas occupies the previously empty volume Ω (Fig. 3.3b). The combined system is isolated and removing the
partition requires a negligible amount of work. The accessible volume becomes Ω+Ω and it is extremely unlikely that the molecules again gather in the sole volume Ω. In such a process, which is
irreversible in Thermodynamics, the accessible volume for the particles has increased, that is, the constraint on their positions has been removed ; the number of cells of the one-particle phase
space (r, p) has increased with the accessible volume in this process. Let us now consider an infinitesimal reversible process, corresponding to a mere modification of the probabilities of occurrence
of the same accessible
Second Law of Thermodynamics
Fig. 3.3 : Joule-Gay-Lussac expansion into vacuum. microstates. dS is obtained by differentiating (3.16) : pi ln pi dpi + dpi dS = −kB pi i i ln pi dpi = −kB i
since, as probabilities are normalized,
dpi = 0.
To be more precise, let us work in the canonical ensemble, in which the system is submitted to this process at constant number of particles, remaining at every time t in thermal equilibrium with a
heat reservoir at the temperature T (t). The probability to realize the microstate of energy Ei is equal to (see (2.37)) : pi (t) =
e−β(t)Ei (t) , i.e., ln pi (t) = −β(t)Ei (t) − ln Z(t) Z
Then dS = kB β(t)
Ei (t)dpi
From the above interpretation of the infinitesimal heat δQ, (3.19) can also be written dS = kB β(t)δQ
This expression is to be compared to (3.15). The statistical entropy defined by (3.16) and the thermodynamical entropy (3.15) can be identified under the condition that kB β, defined by (3.1) as the
partial derivative of the statistical 1 entropy S versus energy, is identical to , the reciprocal of the thermodynaT mical temperature : kB β ≡
1 T
Chapter 3. Thermodynamics and Statistical Physics
Indeed the integrand factor, which allows one to transform δQ into the total differential dS, is unique to a constant, which justifies this identification.
Third Law of Thermodynamics
The second law allows one to calculate entropy variations. From the third law, or Walther Nernst (1864-1941)’s law, the absolute value of S can be obtained : When the absolute temperature T of a
system tends to zero, its entropy tends to zero. From a microscopic point of view, when the temperature vanishes, the particles gather into the accessible states with the lowest energies. If the
minimum energy corresponds to a single state E0 , the probability of occupation of this fundamental state tends to unity while those for the other states vanish. Then the microscopic description
becomes certain and the entropy vanishes : for example, when the temperature tends to zero in a monoatomic ideal gas, one expects the molecules to be in a state of kinetic energy as small as possible
(yet different from zero, because of the Heisenberg uncertainty principle) ; in the case of electrons in a solid, as will be studied in chapter 7, the fundamental state is nondegenerate too. Let us
now consider a material which has two equivalent states at low temperature, with the same minimal energy, for example, two allotropic crystalline species, each occurring with the probability 1/2.
When T tends to zero, the corresponding entropy tends to S0 = kB ln 2
This quantity is different from zero, yet it is not extensive so that S0 /N becomes negligible. This is very different from the situation at finite T where S/N is finite and S extensive. Thus the
statement of the Nernst law consistent with Statistical Physics is that S/N tends to zero for N large when the temperature tends to the absolute zero. Notice that, in the case of the paramagnetic
solid at zero external magnetic field and very low temperature, one would expect to find S/N = kB ln 2, in contradiction with the Nernst law. In fact, this spin system cannot be strictly in zero
magnetic field because of the nuclear magnetic interactions that are always present (but are not very efficient at higher temperature where other phenomena imply much larger amounts of energy) : under
the effect of these
The Thermodynamical Potentials ; the Legendre Transformation
nuclear interactions the system is no longer at zero magnetic field and becomes ordered when T tends to zero. The present sections have put in correspondence the laws of Thermodynamics and the
statistical approach of chapters 1 and 2. Their conclusion is that the description based on Statistical Physics and its postulates is indeed equivalent to the three laws of Thermodynamics.
The Thermodynamical Potentials and their Differentials ; the Legendre Transformation
Now that we have identified the statistical entropy to the thermodynamical entropy, we can take advantage of our background in Thermodynamics to introduce the thermodynamical potential adapted to the
considered physical situation, and relate it to the statistical description and to the physical parameters. The thermodynamical potential will be a function of the parameters determined by the
experimental conditions and in equilibrium it will be extremum.
Isolated System
Here the entropy S is the thermodynamical potential to consider. During a spontaneous evolution toward equilibrium, S increases and reaches its maximum in equilibrium. In the microcanonical
description, the entropy is related to the number W of microstates of energies lying in the range between E and E + dE by S = kB ln W
dS and dU are linked through the differential expressions of the first and second laws of Thermodynamics : dS =
dU δW δQ = − T T T
so that 1 = T
∂S ∂U
ã (3.25) N,Ω
Chapter 3. Thermodynamics and Statistical Physics
In the case of the ideal gas where δW = −P dΩ
dS =
dU P dΩ + T T
The differential expression (3.27) shows that S, calculated for a fixed number N of particles, is a function of the variables U and Ω, both extensive. The pressure is deduced from the partial
derivative of S versus the volume : Å ã ã Å P ∂S kB ∂ ln W (U ) = = (3.28) T ∂Ω N,U ∂Ω N,U Note : we suggested in the Appendix 3 of chapter 1 that, in the case of free particles, the number of
microstates associated with the energy E was of the form W (E) ∝ ΩN χ(E)
so that P = T
kB ∂ ln W (U ) ∂Ω
ã = N,U
kB N Ω
P Ω = N kB T
(3.30) (3.31)
Here we find the ideal gas law, that will be again established in the next chapter. After having studied a given system, we now consider another similar system, with the same energy U and under the
same volume Ω, which only differs by its number of particles equal to N + dN . Its entropy is now S(U, Ω, N ) + dS, and we write µ dS = − dN T
This defines the chemical potential µ, related to the entropy variation of the system when an extra particle is added, at constant internal energy and volume. The chemical potential is an intensive
parameter. For a classical ideal gas, if N increases, one expects S to increase, so that µ is negative. Then, including the dependence in the particles number, the differential of S for the ideal gas
is written dS =
dU P dΩ µdN + − T T T
The Thermodynamical Potentials ; the Legendre Transformation
S is a function of extensive variables only ; the partial derivative of S versus N is equal to ã Å µ ∂S (3.34) =− ∂N Ω,U T We previously showed in § 2.5.1 that if the two systems are exchanging
particles, in equilibrium ∂S = −kB α ∂N takes the same value in both systems.
These two partial derivatives of the entropy versus the number of particles can be identified, so that µ (3.36) α= kB T Besides, from the differential (3.33) of S expressed for the ideal gas one
deduces the expression of the differential of its energy U : dU = −P dΩ + T dS + µdN
The last term can be interpreted as a “chemical work.”
System with a Fixed Number of Particles, in Contact with a Heat Reservoir at Temperature T
We are going to show that the corresponding thermodynamical potential is the Helmholtz free energy F , which is minimum in equilibrium and is related to the N -particle canonical partition function
ZN . Indeed the general definition (3.16) of S applies, with now probabilities pi equal to pi =
1 exp(−βEi ) ZN
Then i
pi ln pi =
Å pi
and β =
Ei − kB T
1 kB T
ã − ln ZN
U − ln ZN kB T
Consequently, in the canonical ensemble, U + kB ln ZN T U − T S = −kB T ln ZN S=
(3.40) (3.41)
Chapter 3. Thermodynamics and Statistical Physics
Now in Thermodynamics the free energy F is defined by F = U − TS
whence we deduce the relation between the thermodynamical potential F , and the partition function ZN expressing the Statistical Physics properties : F = −kB T ln ZN
Let us write the differential of the thermodynamical potential F in the case where the infinitesimal work is given by δW = − Xj dxj (see eq. 3.8) : j
dF = dU − T dS − SdT Xj dxj + δQ + µdN − T dS − SdT =−
Now δQ = T dS, so that dF = −
Xj dxj − SdT + µdN
The interpretation of (3.45) is that, in a specific infinitesimal reversible process at constant temperature and fixed number of particles, the free energy variation is equal to the work received by the
system, whence the name of Helmholtz “free energy,” i.e., available energy, for this thermodynamical potential. For a gas, the infinitesimal work is expressed as a function of pressure, so that dF =
−P dΩ − SdT + µdN
The entropy, the pressure, and the chemical potential are obtained as partial derivatives F or of ln ZN : ã ã Å Å ∂(kB T ln ZN ) ∂F = (3.47) S=− ∂T N,Ω ∂T N,Ω ã ã Å Å ∂(kB T ln ZN ) ∂F = (3.48) P =−
∂Ω N,T ∂Ω N,T ã ã Å Å ∂(kB T ln ZN ) ∂F µ= =− (3.49) ∂N Ω,T ∂N Ω,T
System in Contact with Both a Heat Reservoir at Temperature T and a Particle Reservoir
It was shown in 2.5.1 that, during such an exchange of energy and particles, the parameters taking the same value in equilibrium in both the system and
The Thermodynamical Potentials ; the Legendre Transformation
the reservoir are kB β = 1/T and kB α = µ/T : two Lagrange parameters thus come into the probabilities, which do not only depend on the energy but also on the number of particles. Thus the
probability of occurrence of a macroscopic state of energy Ei and number of particles Ni can be written (see § 2.5.3, eqs. (2.62) and (2.63)) : pi (Ei , Ni ) = so that
pi ln pi =
1 exp(−βEi + αNi ) ZG
Å ã Ei + αNi − ln ZG pi − kB T
U + αN − ln ZG kB T
(3.51) (3.52)
Consequently, U − αkB N + kB ln ZG T U − T S − αkB T N = −kB T ln ZG S=
(3.53) (3.54)
where N is the average number of particles in the system under study. The thermodynamical potential introduced here is the grand potential A, defined by A = U − T S − µN = F − µN
It is related to the grand canonical partition function through A = −kB T ln ZG
The differential of A is deduced from its definition : dA = dF − µdN − N dµ Xj dxj − SdT − N dµ =−
(3.57) (3.58)
Like the other thermodynamical potentials, A is extensive. The only extensive variable in A is the one in the infinitesimal work. In the case of a gas, it is the volume so that A = −P Ω
The entropy S and the average number of particles N are obtained by calculating the partial derivatives of A. As a conclusion of this section on the thermodynamical potentials, remember that to a
fixed physical situation, corresponding to data on specific external parameters, is associated a particular thermodynamical potential, which is extremum in equilibrium.
Chapter 3. Thermodynamics and Statistical Physics
Transformation of Legendre ; Other Thermodynamical Potentials
In § 3.5.2 and § 3.5.3 we saw two different physical situations, associated with two thermodynamical potentials with similar differentials, except for a change in one variable. The mathematical process
that shifts from F (N ) to A(µ) is called “transformation of Legendre.” The general definition of such a transformation is the following : one goes from a function Φ(x1 , x2 , ...) to a function Γ = Φ
(x1 , x2 , ...) − x1 y1 , with y1 =
∂Φ ∂x1
the new function Γ is considered as a function of the new variable y1 . Because of the change of variable and of function, the differentials of Φ and of Γ are respectively : ∂Φ ∂Φ dx1 + dx2 + . . .
∂x1 ∂x2 ∂Γ ∂Γ dy1 + dx2 + . . . dΓ = ∂y1 dx2
dΦ =
(3.61) (3.62)
Now ∂Γ ∂Γ ∂x1 = · = ∂y1 ∂x1 ∂y1
∂y1 ∂Φ ∂Φ − − x1 ∂x1 ∂x1 ∂x1
ã ·
∂x1 = −x1 ∂y1
Consequently, dΓ = −x1 dy1 +
∂Φ dx2 + . . . ∂x2
The partial derivatives of Γ versus the variables x2 . . . are not modified with respect to those of Φ for the same variables. In this process one has shifted from the variables (x1 , x2 . . . xn ) to
the variables (y1 , x2 . . . xn ). This is indeed the transformation that was performed for example, when shifting from U (S) to F (T ), the other variables Ω, N being conserved, or when shifting
from F (N ) to A(µ), the variables Ω and T being unchanged. Other thermodynamical potentials can be defined in a similar way, a specific potential corresponding to each physical situation. For example,
a frequent situation in Chemistry is that of experiments performed at constant temperature and constant pressure (isothermal and isobaric condition) : the exchanges occur between the studied system A
and the larger system A which acts as
The Thermodynamical Potentials ; the Legendre Transformation
an energy and volume reservoir. We saw that the equilibrium condition, expressing the maximum of entropy of the combined system with respect to exchanges of energy and volume, corresponds to the
equality of the temperatures and the pressures in both systems. The corresponding thermodynamical potential is the Gibbs free enthalpy G, in which the number of particles is fixed : G = U − T S + P Ω
dG = ΩdP − SdT + µdN
This potential is defined from the free energy F F = U − TS dF = −P dΩ − SdT + µdN
using a transformation of Legendre on the set of variables (volume Ω and pressure P ), that is G = F + P Ω
The chemical potential is interpreted here as the variation of the free enthalpy G when one particle is added to the system, at constant pressure and temperature. The free enthalpy, extensive like
all thermodynamical potentials, depends of a single extensive variable, so that G = µN
The chemical potential µ is here the free enthalpy per particle. The free enthalpy is related to the partition function which includes all the possible values of energy and volume of the considered
system (see § 2.6) : ZT,P =
e−βEi − γΩi
Note 1 : each time a transformation of Legendre is performed, an extensive variable is replaced by an intensive one. Now a thermodynamical potential has to be extensive, so that it should retain at
least one extensive variable. Consequently, there is a limit to the number of such tranfomations yielding a new thermodynamical potential. Note 2 : Appendix 4.1 presents a practical application of
thermodynamical potentials to the study of chemical reactions.
Summary of Chapter 3 We have expressed the laws of Thermodynamics in the language of Statistical Physics. – Zeroth law : For two systems A and A in thermal equilibrium ã Å 1 ∂S 1 = kB β = = kB β = T
∂E N,Ω T – First law : The differential of the average energy of a system : dU = d pi Ei = pi dEi + Ei dpi i
is split into two terms. The infinitesimal work is related to the modification of the energy states of the system due to the change of external parameters (volume, etc.) δW = pi dEi i
The infinitesimal heat exchanged δQ is associated with variations of probabilities of occurrence of the microstates Ei dpi δQ = i
– Second law : We have identified the thermodynamical entropy with the statistical entropy as introduced in the first chapters of this book and have shown that 1 kB β = . T 75
Summary of Chapter 3
In Statistical Physics, an irreversible process corresponds to an increase of the number of accessible microstates for the considered system, and consequently to an increase of disorder and of
entropy. – Third law : The entropy per particle S/N vanishes when the temperature tends to the absolute zero. – The adapted thermodynamical potential depends on the considered physical situation, it
is extremum in equilibrium. It is related to the microscopic parameters of Statistical Physics through the partition function ; its partial derivatives give access to the measurable physical
parameters. Using a transformation of Legendre, one gets a new thermodynamical potential, with one new variable (see the table next page).
dS =
P dU µ dΩ + − dN T T T 1 ∂S ... = T ∂U Ω,N
S = kB ln W
Thermodynamical Potential
∂F ∂T
dF = −P dΩ − SdT + µdN
F = −kB T ln ZN F = U − TS
ZN (β) =
W microscopic configurations at E
Statistical Description
1 kB T
exchange of energy with a heat reservoir at fixed N
isolated system E to δE
Physical Conditions
Lagrange Parameter(s)
eαN ZN (β)
N = −
∂A ∂µ
(average value)
dA = −P dΩ − SdT − N dµ
A = −kB T ln ZG A = U − T S − µN
1 µ , α = βµ = kB T kB T
ZG (α, β) =
exchange of energy with a heat reservoir and of particles with a particle reservoir
Grand Canonical
Summary of Chapter 3 77
Chapter 4
The Ideal Gas 4.1
After having stated the general methods in Statistical Physics and bridged its concepts with your previous background in Thermodynamics, we are now going to apply our know-how to the molecules of a
gas, with no mutual interaction. The properties of such a gas, called “ideal gas” (“perfect gas” in French) are discussed in Thermodynamics courses. Here the contribution of Statistical Physics will
be evidenced : starting from a microscopic description in Classical Mechanics, it allows to explain the observed macroscopic properties. A few notions of kinetic theory of gases will first be
introduced in § 4.2 and § 4.3 we will discuss to what extent classical Statistical Physics, as used up to this point in this book, can apply to free particles. This chapter is “doubly classical”
since, like in the three preceding chapters, Classical Statistics is used, but now for particles with an evolution described by Classical Mechanics : both the states themselves and their occurrence
are classical. The canonical ensemble is chosen in § 4.4 to solve the statistical properties of the ideal gas and physical consequences are deduced ; in the Appendices, all that we learnt in this
chapter and the preceding ones is applied in a practical problem : the interpretation of chemical reactions in gas phase is done in Thermodynamics (Appendix 4.1) and in Statistical Physics (Appendix
Chapter 4. The Ideal Gas
Kinetic Approach
This section, which has no relation with Quantum Mechanics or Statistical Physics in equilibrium, will allow us to obtain insight into the simplest physical parameters describing the motion of
molecules in a gas, and to introduce some orders of magnitude. If necessary, we will refer to the Thermodynamics courses on the ideal gas.
Scattering Cross Section, Mean Free Path
Consider a can filled with a gas, in the standard conditions of temperature and pressure (T = 273 K, P = 1 atmosphere). Each molecule has a radius r, defined by the average extension of the valence
orbitals, i.e., a fraction of a nanometer. If in its motion a molecule comes too close to another one, there will be a collision. One defines the scattering cross section which is the circle of radius
r around a molecule, inside which no other molecule can penetrate.
d Fig. 4.1: Definitions of the scattering cross section and of the mean free path. The mean distance between two collisions l, or mean free path, is deduced from the mean volume per particle Ω/N and
the scattering cross section by l(πr2 ) =
Ω = d3 N
where d is the average distance between particles (Fig. 4.1). From l, one deduces the mean time τ between two collisions, or collision time, by writing l = ντ where ν is the mean velocity at
temperature T , of the order of the justification in § 4.4.2).
(4.2) kB T /m (see
Kinetic Approach
For nitrogen molecules with r = 0.42 nm, at 273 K and under a pressure of one atmosphere one finds l ≈ 70 nm, ν ≈ 300 m.sec−1 , τ ≈ 2 × 10−10 sec. The length l is to be compared with the average
distance between particles d = 3.3 nm. These values show that in the gas collisions are very frequent at the time scale of a measure, which ensures that equilibrium is rapidly reached after a
perturbation of the gas ; at the molecular scale distances covered between collisions are very large (of the order of a hundred times the size of a molecule), thus the motion of each molecule mostly
occurs as if it was alone, that is, free.
Kinetic Calculation of the Pressure
Let us now recall the calculation, from kinetic arguments, of the pressure of a gas, i.e., of the force exerted on a wall of unit surface by the molecules. The aim is to evaluate the total momentum
transferred to this wall by the molecules impinging on it during the time interval ∆t : indeed in this chapter Classical Mechanics does apply, so that the total force on the wall is given by the
Fundamental Equation of Dynamics : Ftotal =
∆ pi i
Here the momentum brought by particle i is ∆ pi . The resulting force is normal since the wall is at rest, it is related to the pressure P and to the considered wall surface S through |Ftotal | = P S
One will now evaluate the total momentum transferred to the surface, which is the difference between the total momentum of the particles which reach the wall and that of the particles which leave it
(Fig. 4.2). pi f (r, p i , t) The incident particles, with pz > 0, bring a total momentum i
to the wall, chosen as xOy plane. The probability density in equilibrium f (r, pi , t) is assumed to be stationary, homogeneous, and isotropic in space (hypotheses stated by James Clerk Maxwell in
1859). It thus reduces to an even function f ( p) versus the three components of the momentum and satisfies to (4.5) f ( p)d3 p = 1 Consider the particles, of momentum equal to p to d3 p, incident on
the surface S during the time interval ∆t. They are inside an oblique cylinder along the
Chapter 4. The Ideal Gas
pz ∆t m S
Fig. 4.2 : Collisions of molecules on a wall. Spz pz ∆t and volume ∆t (the direction z is m m chosen along the normal to the surface). For a particles density ρ = N/Ω, the number of incident
particles is
direction of p, of basis S, height
Spz ∆t ρf ( p)d3 p m
Each of them transfers its momentum p to the wall. The total momentum brought by the incident particles is thus equal to S ∆ pi = ∆tρ pz pf ( p)d3 p (4.7) m p >0 z inc This sum is restricted to the
particles traveling toward the surface, of momentum pz > 0. A similar calculation leads to the total momentum of the particles which leave the surface, or in other words “were reflected,” by the wall.
The volume (−pz ) is S ∆t, the total momentum taken from the wall is equal to m S ∆ pi = ∆tρ (−pz ) pf ( p)d3 p (4.8) m pz 0. Indeed one will always perform an integration by parts of a seconddegree
term in energy in the numerator to express the corresponding factor of the denominator partition function, which will provide the factor kB T /2. But this property is not valid for a polynomial of
degree different than two, for example for an anharmonic term in x3i . This result constitutes the “theorem of energy equipartition”, established by James C. Maxwell as early as 1860 : For a system in
canonical equilibrium at temperature T , the average value of the energy per particle and per quadratic degree of 1 freedom is equal to kB T . 2 We have already used this property several times, in
particular in § 4.3 to find the root-mean-square momentum of the ideal gas at temperature T . It is also useful to determine the thermal fluctuations of an oscillation, providing 1 1 a potential energy
in kx2 for a spring, in Cϑ2 for a torsion pendulum, 2 2 and more generally to calculate the average value at temperature T of a hamiltonian of the form h= ai x2i + bj p2j (4.33) i
where ai is independent of xi , bj does not depend of pj , and all the coefficients ai and bj are positive (which allows to use this property for an energy development around a stable equilibrium
state). A case of application of the energy equipartition theorem is the specific heat at constant volume associated to U . We recall its expression ã Å ∂U Cv = (4.34) ∂T Ω,N
Classical Statistics Treatment of the Ideal Gas
In the case of a monoatomic ideal gas, Cv only contains the contribution of 3 the three translation degrees of freedom and is thus equal to N kB . 2
Free Energy ; Physical Parameters (P, S, µ)
We saw in chapter 3 that the canonical partition function Zc is related to the free energy through F = U − T S = −kB T ln Zc
In the case of an ideal gas, the differential of F is dF = −P dΩ − SdT + µdN
The volume Ω and the number of particles N are extensive variables, the temperature T is intensive. The chemical potential, already introduced in § 3.5.3, is here the variation of free energy of the
system at fixed temperature and volume when one particle is added. Substituting the expression of Zc obtained in § 4.4.1, one obtains : Å ã Ω F = −kB T N ln 3 (2πmkB T )3/2 + ln CN h ã Å Ω 1 3/2 F =
−N kB T ln ln C (2πmk T ) + ln N + B N N h3 N
(4.37) (4.38)
For the potential F to be extensive, the quantity between brackets must be Ω should appear in the first term and intensive. This dictates that the ratio N that the remaining terms should be constant :
ln N +
1 ln CN = constant N
i.e., ln CN = −N ln N + constant × N
From now on, we will take CN =
1 , i.e. ln CN = −N ln N + N + O(ln N ) N!
so that the constant in Eq. (4.39) is equal to unity. This choice will be justified in § 4.4.4 by a physical argument, the Gibbs paradox, and will be demonstrated in § 6.7. Then ï Å Å ã ò ã Ω eΩ 3/2
(2πmk T ) T ln +1 = −N k (4.42) F = −N kB T ln B B N h3 N λ3th
Chapter 4. The Ideal Gas
One deduces the pressure, the entropy and the chemical potential of the assembly of N particles : Å ã ∂F N kB T (4.43) = P =− ∂Ω N,T Ω ã ã Å Å 5 ∂F Ω + N kB S=− = N kB ln (4.44) ∂T N,Ω N λ3th 2 ã ã Å
Å Ω ∂F (4.45) = −kB T ln µ= ∂N T,Ω N λ3th One identifies the first of these relations as the ideal gas law, that we now just re-demonstrated using Statistical Physics. The expression obtained for S(Ω,
N, T ) is extensive but does not satisfy the Nernst principle, since it does not vanish when the temperature tends to zero : the entropy would become very negative if the de Broglie thermal
wavelength was becoming too large with respect to the mean distance between particles. This confirms what we already expressed in § 4.3, that is, the ideal gas model is no longer valid at very low
temperature. In the validity limit of the ideal gas model, λth d is equivalent to Ω/N λ3th very large, thus to µ very negative (µ → −∞). Remember that the measure of the classical N -particle phase
space must be N 1 d3ri d3 pi taken equal to to ensure the extensivity of the thermodynaN ! i=1 h3 mical potentials F and S. This will be justified in chapter 6 : the factor CN expresses the common
classical limit of both Quantum Statistics.
Gibbs Paradox
At the end of the 19th century, John Willard Gibbs used the following physical argument to deduce the requirement of the constant CN = N1 ! in the expression of the infinitesimal volume of the phase
space : consider two containers of volumes Ω1 and Ω2 , containing under the same density and at the same temperature T , respectively, N1 and N2 molecules of gas (Fig. 4.3). One then opens the
partition which separates the two containers, the gases mix and the final total entropy is compared to the initial total one. If the gases in the two containers were of different chemical natures, the
disorder and thus the entropy increase. On the other hand, if both gases are of the same nature and the molecules indistinguishable, the entropy is unchanged in the process. The detailed calculation
of the different terms in the total entropy, before
N1 et N2 of the same nature Ω 1 + Ω2 N1 + N2 T Ω1 N1 S1 T
Ω2 N2 S2 T
N1 N2 = Ω1 Ω2
S = S1 + S2 N1 et N2 of different natures Ω 1 + Ω2 N1 , N2 T S > S1 + S2
Fig. 4.3: Mixture of two gases at the same temperature and with the same density. and after opening the partition, is simple and uses the partial derivative with respect of temperature of expression
(4.38) of the free energy, including the constant CN : if one assumes that CN = 1, there will be a variation of the total entropy even for a mixture of molecules of the same nature. When one
introduces the factor CN = 1/N !, the result corresponds to the physical prediction. The choice of this factor is not surprising once one considers that molecules of the same chemical nature are
indistinguishable : for a system of N indistinguishable particles, there are N ! possible permutations of these particles among their one-particle states, providing the same macroscopic state. In
order to account only once for this state in the enumeration that will lead to the physical predictions, among others of the entropy, it is natural to choose the value CN = 1/N !.
Using Statistical Physics, we have been able to find again the well-known results of Thermodynamics on the ideal gas and to justify them. It seems interesting, now that we come to the end of the study
of Classical Statistics, to present the respective contributions of Thermodynamics (Appendix 4.1) and of Statistical Physics (Appendix 4.2) on a practical example. This will be done on the analysis
of the chemical reactions. In the Statistical Physics part, one will show in details the various microscopic terms of the hamiltonian of a gas of diatomic molecules ; this allows one to
Chapter 4. The Ideal Gas
understand the origin of a law very commonly used in Chemistry, that is, the law of mass action. Enjoy your reading ! Reading these two Appendixes is not a prerequisite to understanding the next
Summary of Chapter 4 First we have presented some orders of magnitude, relative to nitrogen molecules in standard conditions of temperature and pressure. They justify that the hypotheses of the ideal
gas, of molecules with no mutual interaction, is reasonable : indeed the distances traveled by a molecule between collisions are very large with respect to a molecule radius and the characteristic
time to reach equilibrium is very short as compared to a typical time of measurement. Then the kinetic calculation of a gas pressure was recalled. The domain of validity of the ideal gas model, in
the framework of both Classical Mechanics and Classical Statistics, is the range of high temperatures and low densities : the criterion to be fulfilled is Å ã1/3 Å ã1/3 Ω h Ω
d= 2πmkB T , or λth = √ h N N 2πmkB T where λth is the thermal de Broglie wavelength and d the mean distance between particles. The statistical treatment of the ideal gas is done in this chapter in
the canonical ensemble, the N -particle canonical partition function is given by ïÅ ã òN Ω 3/2 Zc = CN (2πmk T ) B h3 To ensure that the free energy and the entropy are extensive and to solve the
Gibbs paradox, one must take CN = 1/N !. The average value at temperature T of the energy associated to a quadratic 1 degree of freedom is equal to kB T (theorem of the equipartition of energy). 2
This applies in particular to the kinetic energy terms for each coordinate. The chemical potential of the ideal gas is very negative ; its entropy does not satisfy to the third law of Thermodynamics
(but the ideal gas model is no longer valid in the limit of very low temperatures). 93
Appendix 4.1 Thermodynamics and Chemical Reactions We will consider here macroscopic systems composed of several types of molecules or chemical species, possibly in several phases : one can study a
chemical reaction between gases, or between gases and solids, etc. The first question that will be raised is “What is the spontaneous evolution of such systems ?” ; then the conditions of chemical
equilibrium will be expressed. These types of analyses, only based on Thermodynamics, were already introduced in your previous studies. Now Appendix 4.2 will evidence the contribution of Statistical
Physics. 1. Spontaneous Evolution In the case of a thermally isolated system, the second law of Thermodynamics indicates that a spontaneous evolution will take place in a direction corresponding to
an increase in entropy S, S being maximum in equilibrium.
Q A A T0
Atot Fig. 4.4: The system A is in thermal contact with the heat reservoir A , with which it exchanges the heat Q. The combined system is isolated. 95
Appendix 4.1
Chemical reactions are mostly done at constant temperature : one then has to consider the case of a system A in thermal contact with a heat reservoir A at temperature T0 (Fig. 4.4). The combined
system Atot , made of A and A is an isolated system, the entropy Stot of which increases in a spontaneous process : ∆Stot ≥ 0
The entropy variation of the combined system ∆Stot = ∆S + ∆S
can be expressed as a function of parameters of the system A alone. Indeed, if the amount of heat Q is given to A by the heat reservoir, the reservoir absorbs −Q and its entropy variation is equal to
∆S = −Q/T0 : the transfer of Q does not produce a variation of T0 and one can imagine a reversible sequence of isothermal transformations at T0 producing this heat exchange. Besides, the internal
energy variation in A, i.e., ∆U , is, from the first law of Thermodynamics, ∆U = W + Q
where W is the work operated in the considered transformation. Consequently, ∆Stot = ∆S −
(∆U − W ) −∆F0 + W = T0 T0
where, by definition, F0 = U − T0 S, which is identical to the Helmholtz free energy F = U − T S of system A if the temperature of A is that of the heat reservoir (in an off-equilibrium situation T
differs from T0 and F is not identical to F ). From the condition of spontaneous evolution one deduces that −∆F0 ≥ (−W )
which expresses that the maximum work that may be delivered by A in a reversible process is (−∆F0 ), whence the name of “free energy.” If the volume of system A is maintained constant and there is no
other type of work, W = 0, so that the condition for spontaneous evolution is then given by ∆F0 ≤ 0
Appendix 4.1
This condition for a system in contact with a heat reservoir replaces the condition ∆Stot ≥ 0 for an isolated system. A more realistic situation is the one in which system A is kept at constant
temperature and constant pressure. This is the case in laboratory experiments when a chemical reaction is performed at fixed temperature T0 and under the atmospheric pressure P0 : system A is
exchanging heat and work with a very large reservoir A , which can thus give energy to A without any modification of T0 and also exchange a volume ∆Ω with A without any change of its pressure P0 (Fig.
4.5). A T0 P0
A Q T ∆Ω P
Atot Fig. 4.5: System A is in contact with the larger system A , with which it is exchanging the amount of heat Q and the volume ∆Ω. Similarly, one considers that the combined system made of A and A
is isolated, so that in a spontaneous process ∆Stot = ∆S + ∆S ≥ 0
If A absorbs Q from A , ∆S = −Q/T0 , where T0 is again the temperature of A . The variation of internal energy of A includes the received heat Q, the mechanical work −P0 ∆Ω received by A from A , and
also W ∗ , work of any other origin (electrical for example) that A may have received : ∆U = Q − P0 ∆Ω + W ∗
Consequently, ∆Stot = ∆S − i.e.,
∆Stot =
Q T0 ∆S − (∆U + P0 ∆Ω − W ∗ ) = T0 T0
−∆G0 + W ∗ T0
(4.54) (4.55)
Appendix 4.1
Here we defined G0 = U + P0 Ω − T0 S, which identifies to the Gibbs free enthalpy of system A, that is, G = U + P Ω − T S, when P = P0 and T = T0 . A spontaneous evolution will be expressed through the
condition −∆G0 + W ∗ ≥ 0
When all the external parameters of system A, except its volume, are kept fixed, W ∗ is null and the condition of spontaneous evolution is then ∆G0 ≤ 0
Consequently, when a system is in contact with a reservoir so that it remains at fixed temperature and pressure and can exchange work only with this reservoir, the stable equilibrium is characterized
by the condition of minimum of G0 . 2. Chemical Equilibrium, Law of Mass Action : Thermodynamical Approach Consider a homogeneous system (for example in gaseous phase) with several types of molecules
: here we will study the dissociation reaction of iodine vapor at temperatures in the 1000 K range : I2 2I In this reaction the number of iodine atoms is conserved between the species corresponding
to the two members of the chemical equation, which can also be expressed in the form 2I − I2 = 0 the species disappearing in the reaction carrying a negative sign, the one appearing a positive one.
In the same way, the most general chemical equilibrium will be written n
bi Ni = 0
The bi are the integral coefficients of the balanced reaction for the Bi molecules. The number Ni of molecules of Bi type in the system is changed if the equilibrium is displaced but the total number
of atoms of each species in the system is conserved. The variations of the numbers Ni are thus proportional to the coefficients bi of the chemical equilibrium dNi = bi dλ
for any i
where dλ is a proportionality constant. dNi is positive for the molecules appearing, negative for those vanishing in the reaction.
Appendix 4.1
Consider a chemical equilibrium at fixed temperature T and pressure P , thus characterized by dG(T, P ) = 0. In such conditions the equilibrium is expressed by : dG = At fixed T and P, dG =
n i=1 n
dGi =
(−Si dT + Ωi dP + µi dNi )
µi dNi = 0
ã ∂G Here µi = is the chemical potential of species Bi . Replacing dNi ∂Ni T,P by its expression versus λ, one obtains the relation characterizing the equilibrium : n
bi µi = 0
The chemical potentials depend on the experimental conditions, in particular on the concentrations of the different species. Assume that these reagents are ideal gases and look for the dependence of
µi on the partial pressure Pi of one of these gases. Since Gi is extensive and depends on a single extensive parameter Ni , the number of molecules of the species i, then one must have Gi = µi Ni
The dependence in temperature and pressure of µi is thus that of Gi . For an ideal gas, at fixed temperature, Gi = Ui + Pi Ω − T Si
introducing the partial pressure Pi of species i, such that Pi Ω = Ni kB T . Consequently, dGi = −Si dT + ΩdPi + µi dNi Å
∂Gi ∂Pi
Å = Ni
∂µi ∂Pi
ã =Ω= T,Ni
Ni kB T Pi
i.e., by integrating (4.64), µi = µ0i (T ) + kB T ln
Pi P0
Appendix 4.1
where µ0i (T ) is the value of the chemical potential of species i at temperature T and under the standard pressure P0 . The general condition of chemical equilibrium is thus given by Å ãbi n n Pi bi
µ0i + kB T ln =0 P0 i=1 i=1
One writes ∆G0m = bi µ0i N i=1 n
∆G0m , the molar free enthalpy at the temperature T of the reaction, is the difference between the molar free enthalpy of the synthesized products and that of the used reagents, N is the Avogadro
number. One also defines Kp =
ã n Å Pi bi i=1
For example, in the dissociation equilibrium of iodine molecules, Kp =
(PI /P0 )2 (PI2 )/P0
One recognizes in Kp the constant of mass action. The above equation gives 0
Kp = e−∆Gm /RT
where R = N kB is the ideal gases constant, R = 8.31 J.K−1 . From the tables of free enthalpy values deduced from experiments, one can deduce the value of Kp . From the experimental study of a
chemical equilibrium at temperature T , one can also deduce the value of Kp , and thus of ∆G0m , at temperature T : if the temperature of the experiment is modified from T to T , the partial pressures
are changed, Kp (T ) becomes Kp (T ), such that ln Kp (T ) = −∆G0m (T )/RT
By definition, at a given T , ∆G0m = ∆H 0 − T ∆S 0 , where ∆H 0 is the molar enthalpy of the reaction and ∆S 0 its entropy variation. If T and T are close enough, one makes the approximation that
neither ∆H 0 nor ∆S 0 noticeably vary between these temperatures. Then Å ã 1 1 (4.72) − ln Kp (T ) − ln Kp (T ) = −∆H 0 RT RT
Appendix 4.1
i.e., also ∆H 0 d ln Kp (T ) = dT RT 2
From the enthalpy ∆H 0 , one can predict the direction of displacement of the equilibrium when the temperature increases : if ∆H 0 > 0, that is, the enthalpy increases in the reaction and the
reaction is endothermal, the equilibrium is thus displaced toward the right side when T increases. This is in agreement with the principle of Le Châtelier : If a system is in stable equilibrium, any
spontaneous change of its parameters must lead to processes which tend to restore the equilibrium. As an application, the experimental data below on the dissociation of iodine allow to deduce the
enthalpy of this reaction in the vicinity of 1000 K. For a fixed total volume of 750.0 cm3 , in which a total number n of molecules I2 was introduced at temperature T , the equilibrium pressure is P :
T (K) P (10−2 atm) n(10−4 moles)
973 6.24 5.41
1 073 7.50 5.38
1 173 9.18 5.33
As an exercise, one can calculate Kp at each temperature and verify that ∆H 0 = 157 kJ.mole−1 . This reaction is endothermal. The example presented here suggests the rich domain, but also the
limitations of the applications of Thermodynamics in Chemistry : Thermodynamics does not reduce to a game between partial derivatives, as some students may believe ! It allows one to predict the
evolution of new reactions, from tables deduced from measures. However, only the contribution of Statistical Physics allows us to predict these evolutions from the sole spectroscopic data and
microscopic models. It is this latter approach that will be followed now. Thanks to the understanding acquired in this first part of the Statistical Physics course, we will be able to deduce the
constant of mass action from the quantum properties of the chemical species of the reaction.
Appendix 4.2 Statistical Physics and Chemical Reactions The same example of dissociation of the iodine molecule at 1000 K, in gas phase, will now be treated by applying the general methods of
Statistical Physics. The calculations will be completely developed, which may be regarded as tedious but yet they are instructive ! 1. Energy States : The Quantum Mechanics Problem Iodine I is a
monoatomic gas, its only degrees of freedom are those of translation. The electronic structure of its fundamental state is 2 P3/2 (notation 2S+1 LJ : orbital state = 1 whence the notation P ; spin 1/
2, i.e., 2S + 1 = 2 ; total kinetic moment equal to J = 3/2, therefore the degeneracy 4). Iodine has a nuclear moment J=5/2. The hamiltonian of the iodine atom in the gas phase is therefore given by
HI = HI transl + HI elec + HI nucl
The molecule I2 has six degrees of freedom, which can be decomposed into three degrees of translation of its center of mass, two of rotation and one of vibration : HI2 = HI2 transl + Hrot + Hvibr +
HI2 elec + HI2 nucl
Since this molecule is symmetrical, one should account for the fact that the two iodine atoms are indistinguishable inside the molecule and thus the physics of the system is invariant in a rotation
of 180 degrees around an axis perpendicular to the bond in its middle. The fundamental electronic state of molecular iodine is not degenerate. It lies lower than the fundamental electronic state of
atomic iodine. The dissociation energy, necessary at zero temperature to transform a molecule into two iodine 103
Appendix 4.2
atoms, is equal to Ed = 1.542 eV, which corresponds to 17,900 K : one should bring Ed /2 per atom to move from the state I2 to the state I. The nuclear spin of iodine has the value J = 5/2, both
under the atomic and molecular forms ; one assumes that no magnetic field is applied. The splittings between energy levels of I or I2 are known from spectroscopic measures : Raman or infrared
techniques give access to the distances between rotation or vibration levels ; the distance between electronic levels are generally in the visible (from 1.5 eV to 3 eV) or the ultraviolet range. One
now expresses the eigenvalues of the different terms of HI and HI2 . a. Translation States One has to find the hamiltonian eigenstates : in the case of the iodine atom of mass mI , Htr =
pˆ2 + V (r) 2mI
where V (r) is the confinement potential limiting the particle presence to a rectangular box of volume Ω = Lx Ly Lz . √ The eigenstates are plane waves ψ = (exp ik · r)/ Ω ; the k values are quantized
and for the Born-Von Kármán periodic limit conditions [ψ(x+Lx ) = ψ(x)] (see § 6.4.2) the k components satisfy : kx Lx = nx 2π ; ky Ly = ny 2π ; kz Lz = nz 2π with nx , ny , nz positive, negative or
null integers. The corresponding energies are : 2 2 k 2mI ñÅ ã2 Å ã2 Å ã2 ô nx h2 ny nz = + + 2mI Lx Ly Lz
ε I tr =
The translation energies εI2 tr of the molecule I2 are obtained by replacing mI by mI2 in the above expression of εItransl . All these states are extremely close : if Ω is of the order of 1 cm3 and
the dimensions of the order of 1 cm, the energy of the state nx = 1, ny = nz = 0 is given by, for I (mI = 127 g) Å ã2 a0 1 mH × (0.53 × 10−8 )2 = 3 × 10−18 eV ! · = 13.6 × 13.6 eV × mI Lx 127
Appendix 4.2
The distances between energy levels are in the same range, thus very small with respect to kB T , for T = 1000K. This discussion implies that the ideal gas model is valid at room temperature for I
and I2 (still more valid than for N2 , since these molecules are heavier). The partition functions for their translation states will take the form (4.26)(4.27).
b. Rotation States The iodine molecule is linear (Fig. 4.6), its inertia momentum around an Å ã2 Re axis normal to the bond is I = 2m = µRe2 with µ = mI /2 and Re 2 the equilibrium distance between
the two nuclei. Since mI = (127/N )g this momentum is particularly large at the atomic scale. The rotation hamiltonian is 1 ˆ2 J 2I and shows that the problem is isotropic in the center of mass
frame. Hrot =
Fig. 4.6 : Structure of the I2 molecule. Its eigenvalues are : εrot (J) =
2 J(J + 1) = hcBJ(J + 1), 2I
is given in cm−1 (4.79)
(one recalls the definition hν = hc/λ = hcσ ; the unit cm−1 for σ is particularly adapted to infrared spectroscopy). The selection rules ∆J = ±1 for allowed electric dipolar transitions between
rotation levels correspond to εrot (J + 1) − εrot (J) = hc · 2B(J + 1). For iodine B = 0.0373 cm−1 , 2hcB/kB = 0.107 K, to be compared to HC where B = 10.6 cm−1 , 2hcB/kB = 30.5 K. The rotation
levels are thus still very close at the thermal energy scale.
Appendix 4.2
c. Vibration States In a first approximation the vibration hamiltonian of the molecule I2 can be described as a harmonic oscillator hamiltonian, with a parabolic potential well centered at the
equilibrium distance Re between the two nuclei, that is, V (R) = 12 K(R−Re )2 (Fig. 4.7). (In this figure a realistic potential is sketched in full line ; the dotted line represent its approximated
harmonic potential.) ε
3 2 1 n=0 Re
Fig. 4.7 : Potential well associated to the vibration of the molecule I2 .
Hvibr =
1 pˆ2R + K(R − Re )2 2µ 2
∂ with pˆR = −i , µ being the reduced mass of the system. In this approxi∂R mation, the vibration energies are given by ã Å K 1 εvibr (n) = n + ω with ω = (4.81) 2 µ The transitions ∆ε = ω = hcσv
are in the infrared domain. For I2 σv = 214.36 cm−1 , which corresponds to λ = 46.7µm or hcσv /kB = 308 K (for F2 the corresponding temperature is 1280 K).
d. Electronic States The splitting between fundamental and excited states, for both I and I2 , is of the order of a few eV, corresponding to more than 10,000 K. The origin of the electronic energies
(Fig. 4.8) is taken at the I2 fundamental state, so that the energy of the I fundamental state is EI , with 2EI = Ed = 1.542 eV, where Ed is the dissociation energy.
Appendix 4.2
εe2 εe1 εe1 g=4
εe0 EI
g=1 I2
Fig. 4.8 : Electronic states of I2 and I. e. Iodine Nuclear Spin It is J = 5/2 on both species I and I2 . 2. Description of the Statistical State of the System : Calculation of the Partition
Functions It was stated in Ch.2 that for a macroscopic system the predictions for the experimental measurements of physical parameters were the same whatever the chosen statistical ensemble. Here we
will work in the canonical ensemble, where the calculations are simpler. At temperature T , the number of iodine molecules is NI2 , the number of atoms produced by the dissociation of the molecules
NI . The partition function for a single iodine molecule is written zI2 , that of an isolated atom zI . Because of the particles indistinguishability, the total canonical partition function Zc is
given by : Zc =
1 1 (zI )NI2 (zI )NI NI2 ! 2 NI !
It is related to the free energy F of the system F = −kB T ln Zc
the partial derivatives of which, with respect to the numbers of particles, are the chemical potentials : ã Å Å ã ∂F zI2 zI µI = −kB T ln (4.84) = −kB T ln µI2 = ∂NI2 NI2 NI Since the total energy of
an iodine atom or molecule is a sum of contributions, the partition function of a corresponding atom or a molecule is a product of
Appendix 4.2
factors : thus for an I2 molecule εI2 = εI2 tr + εrot (J) + εvibr (n) + εel + εnucl
zI2 = zI2 tr · zrot · zvibr · zel · znucl
with :
2 1 3 r d3 p e−βp /2mI2 d 3 h = gJ e−βεJ = (2J + 1)e−βhcBJ(J+1)
zI2 tr = zrot
zvibr =
gv e−βεv =
zel =
e−βω(n+ 2 )
ge e−βεe
znucl = (2J + 1)2 for I2 ,
znucl = 2J + 1 for I
The factors gr , gv , ge express the degeneracies of the rotation, vibration and electronic energy levels. The value of the nuclear spin is J , there are two nuclei in I2 . One will first evaluate the
various partition functions, noting that a large value of the partition function is related to a large number of accessible states. The examined chemical potentials will be sums of terms, associated
with the zI zI different z factors. Note that one has to calculate 2 (or ) (see (4.84)), NI2 NI thus to divide a single factor of zI2 by NI2 : we will associate the factor NI2 to the sole translation
factor, then it no longer comes into the partition functions for the other degrees of freedom. (The indiscernability factor is thus included in the translation degree of freedom.) Then : ln
ztr I2 zI2 = ln + ln zrot + ln zvibr + ln zel + ln znucl NI2 NI2
a. Translation The calculation of ztr I was already done for the ideal gas : Å
ztr I
2πmI kB T =Ω· h2
ã3/2 =
Ω NI kB T 1 = · (λth I )3 Pi (λth I )3
Pi is the partial pressure, such that Pi Ω = NI kB T and λth I is the thermal de Broglie wavelength at T for I. Since we are in the classical framework, ztr I
Appendix 4.2
must be very large. Numerically λth , expressed in picometers (10−12 m), is given by 1749 · 10−12 m λth = (TK × mg·mole−1 )1/2 At 1000 K, for the iodine atom mI = 127g,
λth I = 4, 91 10−12 m.
In the same condition, for the I2 molecule,
λth I2 = 3.47 10−12 m.
For a partial pressure equal to the atmospheric pressure and a temperature T = 1000 K, one obtains zI tr = 1.15 × 109 NI zI2 tr = 3.26 × 109 NI2 These numbers are huge !
b. Rotation The factor 2J + 1 is the degeneracy of the energy level hcBJ(J + 1). Besides, the linear molecule I2 is symmetrical, so that after a 180 degree rotation one gets a new state,
indistinguishable from the initial one (this would not be the case for HC for example). One thus has to divide the above expression (4.88) by two. This is equivalent to saying that the wave function
symmetry allows to keep only half of the quantum states. Since the energy distance between rotation levels is small with respect to kB T , one can replace the discrete sum by an integral 1 1 ∞ (2J +
1)e−βhcBJ(J+1) (2J + 1)e−βhcBJ(J+1) dJ 2 2 0 J (4.94) 1 1 kB T = 2 βhcB 2hcB This approximation implies that zrot is a large number. Numerically, the contribution of zrot , equal to 1/2βhcB, is the
ratio 1000 K/0.107 K=9346.
c. Vibration zvibr = e−βω/2
∞ n=0
e−βωn =
e−βω/2 1 − e−βω
At 1000 K, for I2 , the term βω is 0.308 : the quantum regime is valid and this expression cannot be approximated. It has the value 3.23.
Appendix 4.2
d. Electronic Term
ze =
ge e−βεe
Owing to the large distances between electronic levels, one will only consider the fundamental level, four times degenerated for I, nondegenerated for I2 . At 1000 K, the term zeI is equal to 4 e−(Ed
/2kB T ) = 4e−8.94 = 5.25 × 10−4 , whereas for the same choice of energy origin zeI2 = 1.
3. Prediction of the Action of Mass Constant Let us return to the definition relations (4.68) to (4.70) of Appendix 4.1, Kp = exp(−∆G0m /RT ) = exp(−
bi µ0i /kB T )
where the µ0i are the chemical potentials under the atmospheric pressure. For our example : Kp = exp[−(2µ0I − µ0I2 )/kB T ]
The chemical potentials are expressed versus the partition functions calculated above : Å 0 Å 0 ã ã 1 z I tr z I2 tr 0 0 (2µI − µI2 ) = −2[ln + ln zeI ] + ln + ln zrot + ln zvib kB T NI NI2 (4.99)
Here the translation partition functions are calculated under the atmospheric pressure (standard pressure) ; the terms associated to the nuclear spins 2 ln(2J + 1) exactly compensate and ln ze I2 =
0. From the data of the previous section one obtains : z 0 I tr = 20.87 , NI z 0 I2 tr ln = 21.91 , NI2 ln
ln zeI = −7.56 ln zrot = 9.14 , ln zvibr = 1.17
Kp = e−5.6 3.7 × 10−3
Appendix 4.2
One can also predict, from the calculation of the temperature variation of Kp , a value for the enthalpy of the iodine dissociation reaction : ∆H d ln Kp = dT RT 2
i.e., d dT
−2µ0I + µ0I2 kB T
å =
∆H RT 2
∆H is thus deduced from the logarithmic derivatives versus T , at constant P , of the different partition functions : the translation partition functions vary ã Å βω −1 . Consequently, like T 5/2 ,
zrot like T , zvibr like sinh 2 ã ò ã ò ï Å 0 ï Å 0 z I tr z I2 tr d d ∆H =2 + ln zeI − + ln zrot + ln zvibr ln ln RT 2 dT NI dT NI2 (4.102) ò ï ò ï 5 Ed 1 ω βω 5 − + + + (4.103) coth =2 2 2 2T 2kB T
2T T 2kB T 2 that is, Å ã 3RT βω ω ∆H = + N Ed − coth 2 2 2
The obtained value of ∆H is close to that of the dissociation energy per mole : ∆H = 152 kJ/mole. The Kp and ∆H values, calculated from spectroscopic data, are very close to those deduced from the
experiment quoted in the Appendix 4.1 (Kp 3.4 · 10−3 at 1000 K, ∆H = 158 kJ/mole). The discrepancies come in particular from the approximations done in the decomposition of the degrees of freedom of
the iodine molecule : we assumed that the vibration and rotation were independent (rigid rotator model) ; moreover, the potential well describing vibration is certainly not harmonic. Obviously, for
an experiment performed at constant temperature T and volume Ω, the equilibrium constant Kv could have been calculated in a similar way and the free energy F would have been used. The variation of Kv
with temperature provides the variation ∆U of internal energy in the chemical reaction.
4. Conclusion Through these examples, which were developed in detail, the contribution of Statistical Physics in the prediction of chemical reactions was shown. As just
Appendix 4.2
seen now, if the Statistical Physics concepts refer to very fundamental notions, this discipline (sometimes at the price of lengthy calculations !) brings useful data to solve practical problems.
Chapter 5
Indistinguishability, the Pauli Principle (Quantum Mechanics) The preceding chapter presented properties of the ideal gas which are valid in the limit where the average value, at temperature T , of
the de Broglie wavelength is small with respect to the average distance between the particles in motion. Another way to express the same concept is to consider that the wave functions of two
particles have a very small probability of overlapping in such conditions. From now on, we will work in the other limit, in which collisions between particles of the same nature are very likely. In
the present chapter the Quantum Mechanics properties applicable to this limit will be recalled. In Quantum Mechanics courses (see for example the one by J.-L. Basdevant and J. Dalibard) it is stated
that, for indistinguishable particles, the Pauli principle must be added to the general postulates of Quantum Mechanics. In fact here we are only re-visiting this principle but ideas are better
understood when they are tackled several times ! In chapter 6 the consequences in Statistical Physics will be drawn on the properties obtained in the present chapter 5. In the introduction (§ 5.1) we
will list very different physical phenomena, associated with the property of indistinguishability, which will be interpreted at the end of the chapter in the framework of Statistical Physics. To
introduce the issue, in § 5.2 some properties of the quantum states of two, then several, identical particles, will be recalled. In § 5.3, the Pauli principle and 113
Chapter 5. Indistinguishability, the Pauli Principle
its particular expression, the Pauli exclusion principle, will be stated ; the theorem of spin-statistics connection will allow one to recognize which type of statistics is applicable according to
the nature of the considered particles and to distinguish between fermions and bosons. In § 5.4, the special case of two identical particles of spin 1/2 will be analyzed. These results will be
extended to an arbitrary number of particles in § 5.5 and the description of a N indistinguishable particle state through the occupation numbers of its various energy levels will be introduced : this
is much more convenient than using the N -particle wave function. It is this approach that will be used in the following chapters of this book. Finally in § 5.6 we will return to the examples of the
introduction and will interpret them in the framework of the Pauli principle.
It is possible to distinguish fixed particles of the same nature, by specifying their coordinates : this is the case of magnetic moments localized on sites of a paramagnetic crystal, of hydrogen
molecules adsorbed on a solid catalyst. They are then distinguishable through their coordinates. D
θ (1)
Fig. 5.1: Two identical particles, a) represented by the wave packets which describe them in their center-of-mass frame, b) suffer a collision. c) After the collision, the wave function is spread in a
ring-shaped region the radius of which increases with time. When a particle is detected in D, it is impossible to know whether it was associated with wave packet (1) or (2) prior to the collision. On
the other hand, two electrons in motion in a metal, two protons in a nucleus, or more generally all particles of the same chemical nature, identical, in motion and thus likely to collide, cannot be
distinguished by their physical properties : these identical particles are said to be indistinguishable. Indeed, as explained in Quantum Mechanics courses, if these particles, the evolution of which
is described through Quantum Mechanics, do collide, their wave
functions overlap (fig. 5.1). Then, after the collision it is impossible to know whether (1 ) originated from (1) or from (2), since the notion of trajectory does not exist in Quantum Mechanics (Fig.
5.2). D
D (1 )
(2 )
(1) (2)
(2) (2 )
(1 ) a)
Fig. 5.2: Representation in term of “trajectory” of the event described on Fig. 5.1. The two particles being identical, one cannot distinguish between a) and b). The quantum treatment of such systems
refers to the Pauli principle. This principle allows the interpretation of a great variety of chemical and physical phenomena, among which are 1. the periodic classification of the elements ; 2. the
chemical binding ; 3. the ferromagnetism of some metals, like iron or cobalt, that is, their ability to evidence a macroscopic magnetization even in the absence of an external magnetic field ; 4. the
superconductivity of some solids, the electrical resistance of which strictly vanishes below a temperature called “critical temperature” ; 5. the special behavior of the isotope 42 He, which becomes
superfluid for temperatures lower than a critical temperature Tc , equal to 2.17 K under the atmospheric pressure : this means that no viscosity is restraining its motion ; the isotope 32 He does not
show such a property ; 6. the stimulated light emission and the laser effect.
Chapter 5. Indistinguishability, the Pauli Principle
States of Two Indistinguishable Particles (Quantum Mechanics) General Case
When N particles are indistinguishable, the physics of the system and the N particle hamiltonian are unchanged by permutation of some of these particles. Consequently, the hamiltonian commutes with
any permutation operator between these particles, so that they have a common basis of eigenvectors. It can be shown that any permutation of N particles can be decomposed into a product of exchanges
of two particles. Thus it is helpful to begin by recalling the properties of the operator which exchanges the particles of a system consisting in particle 1 at r1 and particle 2 at r2 : Pˆ12 ψ(r1 ,
r2 ) = ψ(r2 , r1 )
The operator Pˆ12 must satisfy : (Pˆ12 )2 = 1
since after two exchanges one returns to the initial state. This implies that the eigenvalue eiα of Pˆ12 must fulfill the condition : e2iα = 1, i.e., eiα = ±1
eiα = +1, ψ(r2 , r1 ) = ψ(r1 , r2 )
i) If
the eigenstate is symmetrical in the exchange (or transposition) of particles 1 and 2. ii) If eiα = −1, ψ(r2 , r1 ) = −ψ(r1 , r2 )
the wave function is antisymmetrical in the transposition of the two particles.
Independent Particles
If the indistinguishable particles are independent, the total hamiltonian for all the particles is written as a sum of one-particle hamiltonians : ˆN = H
N i=1
ˆi h
States of two Indistinguishable Particles (Quantum Mechanics)
The eigenstates of each one-particle hamiltonian satisfy : ˆ i χn (ri ) = εn χn (ri ) (i = 1, 2) h
The index n corresponds to the considered quantum state, for example for the electron of a hydrogen atom n = 1s, 2s, 2p . . . ; the particle i is located at ri . The one-particle eigenstate at the
lowest energy (fundamental state) satisfies ˆ 1 χ1 (r1 ) = ε1 χ1 (r1 ) h
The energy ε2 of the one-particle first excited state is such that ˆ 1 χ2 (r1 ) = ε2 χ2 (r1 ) h
Since the particles are assumed to be independent, the eigenstates of the ˆ 12 can be easily expressed with respect to the onetwo-particle hamiltonian H particle solutions : ˆ 12 ψnn (r1 , r2 ) = εnn
ψnn (r1 , r2 ) H
εnn = εn + εn and ψnn (r1 , r2 ) = χn (rn )χn (r2 )
Indeed remember that ˆ1 + h ˆ 2 )χn (r1 )χn (r2 ) = εn χn (r1 )χn (r2 ) + χn (r1 )εn χn (r2 ) (h = (εn + εn )χn (r1 )χn (r2 )
so that the two-particle energy is the sum of the one-particle energies and the two-particle wave function is the product of the one-particle wave functions. Any linear combination of wave functions
corresponding to the same energy ˆ 12 . εn + εn is also a solution of H Here one is looking for eigenfunctions with a symmetry determined by the indistinguishability properties. In the two-particle
fundamental state, each individual particle is in its fundamental state (fig. 5.3), this two-particle state has the energy 2ε1 , it is nondegenerate. Its wave function satisfies ˆ 12 ψ11sym (r1 , r2 )
= 2ε1 ψ11sym (r1 , r2 ) H
It is symmetrical and can be written ψ11sym (r1 , r2 ) = χ1 (r1 )χ1 (r2 ) .
Chapter 5. Indistinguishability, the Pauli Principle
fundamental state
first excited state
Fig. 5.3 : Occupation of two levels by two indistinguishable particles. In the two-particle first excited state, the total energy is ε1 + ε2 . This state is doubly degenerate, as two distinct
independent states can be built (particle 1 in ε1 , particle 2 in ε2 or the opposite). But since the particles are indistinguishable, one cannot tell which of the two particles is in the fundamental
state. ˆ 12 and Pˆ12 . Thus one looks for solutions which are common eigenstates of H The acceptable solutions are : for the symmetrical first excited state 1 ψ12sym (r1 , r2 ) = √ (χ1 (r1 )χ2 (r2 ) +
χ2 (r1 )χ1 (r2 )) 2
for the antisymmetrical first excited state 1 ψ12asym (r1 , r2 ) = √ (χ1 (r1 )χ2 (r2 ) − χ2 (r1 )χ1 (r2 )) 2
These wave functions (5.14) to (5.16) are determined, to a phase factor. Now consider the case of N particles. ˆ N , is then equal The total energy of the system, which is the eigenvalue of H to εN =
The eigenfunction is a product of one-particle eigenfunctions, N
ψN (r1 , . . . , rN ) = Π χni (ri ) i=1
Any permutation of N particles can be decomposed into a product of exchanges of particles by pair, it thus admits the eigenvalues +1 or −1 : Pˆ ψ(r1 , r2 , . . . ri , . . . rN ) = σψ(r1 , r2 , . . .
ri , . . . , rN )
Pauli Principle ; Spin-Statistics Connection
with σ = ±1. σ = +1 is associated with an “even” permutation σ = −1 is associated with an “odd” permutation Although the decomposition of the permutation into a product of exchanges is not unique,
the value of σ is well determined. Among the eigenfunctions of the permutation operators, two types will be distinguished : either the wave function is completely symmetrical, i.e., it is unchanged
by any permutation operator ; or it is a completely antisymmetrical eigenfunction which satisfies (5.19) and changes its sign for an odd permutation, but is not changed for an even permutation. The
eigenfunctions, common ˆ N of the N indistinto the permutation operators and to the hamiltonian H guishable particles, will necessarily be of one of these two types.
Pauli Principle ; Spin-Statistics Connection Pauli Principle ; Pauli Exclusion Principle
The Pauli (1900-1958) principle is a postulate which is stated as follows : The wave function of a set of N indistinguishable particles obeys either of the two properties below : – either it is
completely symmetrical, i.e., its sign does not change by permutation of two arbitrary particles : then these particles are called bosons ; – or it is completely antisymmetrical, and thus its sign is
modified by permutation of two particles : these particles are fermions. The property verified by the wave function only depends on the type of the considered particles, it is independent of their
number N and of the particular state Ψ they occupy. This principle is verified by all chemical types of particles. In particular, every time a new type of elementary particle was discovered, these new
particles indeed satisfied the Pauli principle. A special case of the Pauli principle is the Pauli exclusion principle : Two fermions cannot be in the same quantum state. Indeed, if they were in the
Chapter 5. Indistinguishability, the Pauli Principle
same one-particle state Ψ, the antisymmetrical expression of the wave function would be : 1 ψ(1, 2) = √ [ψ(1)ψ(2) − ψ(2)ψ(1)] = 0 2
One now needs to determine which particles, called “fermions”, satisfy the Fermi-Dirac statistics, and which particles, the “bosons”, follow the BoseEinstein statistics. There is no intuitive
justification to the following theorem, which provides the answer to this question.
Theorem of Spin-Statistics Connection
The particles with an integral or zero spin (S/ = 0, 1, 2 . . . ) are bosons. The wave function for N bosons is symmetrical in the transposition of two of them. The particles with an half-integral
spin (S/ = 1/2, 3/2, 5/2 . . . ) are fermions. The N -fermions wave function is antisymmetrical in the transposition of two of them. Among the elementary particles, electrons, protons, neutrons,
neutrinos, muons have a spin equal to 1/2 ; the spin of photons is 1, the spin of mesons is 0 or 1. When these particles are combined into one or several nuclei, the total angular momentum takes into
account all the orbital momenta and the spin intrinsic momenta. In Quantum Mechanics courses it is shown that the addition of two angular momenta J1 and J2 gives a parameter which has the properties
of a new angular momentum, with a quantum number J varying between | J1 −J2 | and | J1 + J2 | by unity steps. If two integral or half-integral momenta are added, only integral values of J are
obtained ; if an integral J1 is added to a half-integral J2 , the sum J is half-integral. It is the quantum number J which is identified with the spin of the ensemble. More generally if a composite
particle contains an even number of elementary particles of half-integral spin, its total angular moment (or its total spin) is integral, this is a boson ; if it contains an odd number, its total
angular moment is half-integral, and the composite particle is a fermion. For example, the atom 42 He has two electrons, two protons and two neutrons, i.e., six particles of half-integral spin : it
is a boson. On the other hand, the atom 32 He is composed of two electrons, two protons but a single neutron, thus it is a fermion. The statistical properties of these two isotopes are very different
(see the Introduction of this chapter, example 5), whereas the che-
Case of Two Particles of Spin 1/2
mical properties, determined by the number of electrons that can participate in chemical bonds, are the same.
Case of Two Particles of Spin 1/2
Here the properties of the wave functions of two indistinguishable particles of spin 1/2 are recalled, this is in particular the case for two electrons. From the theorem of spin-statistics
connection, this wave function must be antisymmetrical with respect to the transposition of these two particles. Now, in addition to their spin degree of freedom, these two particles also have an
orbital degree of freedom concerning the space coordinates ; from Quantum Mechanics courses, the wave function for a single particle of this type can be decomposed into tensor products concerning the
space and spin variables. The most general form of decomposition is : ψ(r1 , r2 ; σ1 , σ2 ) = ψ++ (r1 , r2 )| + + + ψ+− (r1 , r2 )| + − + ψ−+ (r1 , r2 )| − + + ψ−− (r1 , r2 )| − −
Triplet and Singlet Spin States
Let us first assume, for simplification, that this decomposition is reduced to a single term, i.e., ψ(r1 , r2 ; σ1 , σ2 ) = ψ(r1 , r2 ) ⊗ |σ1 , σ2 , with σ1 , σ2 = ±1/2
Since we are dealing with fermions, the two-particle wave function changes sign when particles 1 and 2 are exchanged. This sign change can occur in two different ways : – either it is the space part ψ
(r1 , r2 ) of the wave function which is antisymmetrical in this exchange, the spin function remaining unchanged ; – or it is |σ1 , σ2 which changes its sign in this exchange, the space function
remaining unchanged. Consider the spin states. Each particle has two possible spin states, noted | ↑ or | ↓. There are thus four basis states for the set of two spins, which can be chosen as | ↑↑, |
↑↓, | ↓↑, | ↓↓ (the order of notation is : particle 1, then particle 2).
Chapter 5. Indistinguishability, the Pauli Principle
From these states one deduces four new independent states, classified according to their symmetry, that is, their behavior when exchanging the spins of the two particles. There are three symmetrical
states, unchanged when the role of particles 1 and 2 is exchanged : ⎧ ⎪ | ↑↑ = |11 ⎪ ⎪ ⎪ ⎨ 1
√ | ↑↓ + | ↓↑ = |10 ⎪ ⎪ 2 ⎪ ⎪ ⎩ | ↓↓ = |1 − 1
They constitute the triplet state of total spin S = 1, with projections on the quantization axis Oz described by the respective quantum numbers Sz = +1, 0, −1 : the eigenvalue of the squared length
of this spin is equal to S(S + 1)2 , i.e., 22 when the eigenvalue of the projection along Oz is Sz , i.e., , 0 or −. There remains an antisymmetrical state, which changes its sign in the exchange of
the two spins, this is the singlet state 1
√ | ↑↓ − | ↓↑ = |00 2
in which S = Sz = 0. Even when the two-particle hamiltonian does not explicitly depend on spin, orbital wave functions with different symmetries and possibly different energy eigenvalues are associated
with the triplet and singlet states : for example, in the special case treated in § 5.2.2 the fundamental state, with a symmetrical orbital wave function [see Eq. (5.14)], must be a spin singlet
state, whereas in the first excited state one can have a spin triplet. This energy splitting between the symmetrical and antisymmetrical orbital solutions is called the exchange energy or exchange
interaction. The exchange interaction is responsible of ferromagnetism, as understood by Werner Heisenberg (1901-1976) as early as 1926 : indeed the dipolar interaction between two magnetic moments a
fraction of a nm apart, each of the order of the Bohr magneton, is much too weak to allow the existence of an ordered phase of magnetization at room temperature. On the contrary, owing to the Pauli
principle, in ferromagnetic systems the energy splittings between states with different spins correspond to separations between different orbital levels. These are of a fraction of an electron-volt,
which is equivalent to temperatures of the order of 1000 K, since these states are the ones responsible for valence bindings.
Case of Two Particles of Spin 1/2
In a ferromagnetic solid, magnetism can originate from mobile electrons in metals like in Fe, Ni, Co, or from ions in insulating solids, as in the magnetite iron oxide Fe3 O4 , and the spin can differ
from 1/2. However this interaction between the spins, which relies on the Pauli principle, is responsible for the energy splitting between different orbital states : one state in which the total
magnetic moment is different from zero, corresponding to the lower temperature state, and the other one in which it is zero.
General Properties of the Wave Function of Two Spin 1/2 Particles
It has just been shown that, because of the Pauli principle, space wave functions are associated with the triplet states, which are changed into their opposite in the exchange of the two particles
(antisymmetrical wave function). To the singlet state is associated a symmetrical wave function. The most general wave function for two spin 1/2 particles will be written, for a decomposition onto
the triplet and singlet states : A A A S |11 + ψ10 |10 + ψ1−1 |1 − 1 + ψ00 |00 ψ11 A The wave functions of the type ψ11 are spatial functions of r1 and r2 , the upper index recalls the symmetry of
each component.
The same wave function can be expressed on the spin basis |σ1 , σ2 , accounting for expressions (5.24) and (5.25) of the triplet and singlet wave functions. One thus obtains : A ψ(r, r2 ; σ1 , σ2 ) =
ψ11 | + + + A + ψ1−1 | − −
A ψA − ψS + ψS ψ10 √ 00 | + − + 10√ 00 | − + 2 2
It thus appears, by comparing to the general expression (5.21), that the Pauli principle dictates symmetry conditions on the spin 1/2 components : • ψ++ (r1 , r2 ) and ψ−− (r1 , r2 ) must be
antisymmetrical in the exchange of particles 1 and 2 • ψ+− (r1 , r2 ) and ψ−+ (r1 , r2 ) do not have a specific symmetry, but their sum is antisymmetrical and their difference symmetrical.
Chapter 5. Indistinguishability, the Pauli Principle
Special Case of N Independent Particles ; Occupation Numbers of the States
In the general case of an N -identical-particle system, the Pauli principle dictates a form for the N -particle wave function and occupation conditions for the energy levels which are very different
in the case of fermions or bosons. In this part of the course, we are only interested by independent particles : this situation is simpler and already allows a large number of physical problems to be
Wave Function
Fermions : We have seen that two fermions cannot be in the same quantum state and that their wave function must be antisymmetrical in the transposition of the two particles. We consider independent
particles, the N -particle wave function of which takes the form of a determinant called “Slater determinant,” which indeed ensures the change of sign by transposition and the nullity if two
particles are in the same state : χ1 (r1 ) χ2 (r1 ) χN (r1 ) .. ... ψ(r1 , . . . , rN ) ∝ ... . .. χ1 (rN ) . χN (rN )
If a state is already occupied by a fermion, another one cannot be added. A consequence is that at zero temperature all the particles cannot be in the minimum energy state : the different accessible
levels must be filled, beginning with the lowest one, each time putting one particle per level and thus each time moving to the next higher energy level. The process stops when the N th and last
particle is placed in the last occupied level : in Chemistry this process is called the “Aufbau principle” or construction principle ; in solids, the last occupied state at zero temperature is called
the Fermi level and is usually noted εF (see chapters 7 and 8). Bosons : On the other hand, the wave function for N bosons is symmetrical ; in particular it is possible to have all the particles on
the same level of minimum energy at zero temperature.
Special Case of N Independent Particles ; Occupation Numbers
Occupation Numbers
The N -fermion (or N -boson) wave functions are complicated to write because, as soon as a particle is put in a state of given energy, this induces conditions on the available states for the other
particles ; then the antisymmetrization of the total wave function has to be realized for the fermions (or the symmetrization for the bosons). Since these particles are indistinguishable, the only
interesting datum in Statistical Physics is the number of particles in a defined quantum state. Instead of writing the N -particle wave function, which will be of no use in what follows, we will
described the system as a whole by the occupation numbers of its different states, this is the so-called “Fock basis” : these one-particle states correspond to distinct εk , which can have the same
energy but different indexes if the state is degenerate (for example two states of the same energy and different spins will have different indexes k and k ) ; the states are ordered by increasing
energies (Fig. 5.4). εk
Fig. 5.4 : Definition of the occupation numbers of the one-particle states. The notation |n1 ,2 , . . . nk , . . . will express the N -particle state in which n1 particles occupy the state of energy
ε1 , n2 that of energy ε2 , . . . nk the state of energy εk . One must satisfy n1 + n2 + . . . + nk + . . . = N
the total number of particles in the system. Under these conditions, the total energy of the N -particle system is given by n1 ε1 + n2 ε2 + . . . + nk εk + . . . = EN
From the Pauli principle • for fermions : nk = 0 or 1
• for bosons
: nk = 0, 1, 2 . . . ∞
Chapter 5. Indistinguishability, the Pauli Principle
This description will be used in the remainder of the course. The whole physical information is included in the Fock notation. Indeed one cannot tell which particle is in which state, this would be
meaningless since the particles are indistinguishable. The only thing that can be specified is how many particles are in each one-particle state.
Return to the Introduction Examples
Now we are able to propose a preliminary interpretation of the physical phenomena described in the introduction, § 5.1, some of which will be analyzed in more detail in the following chapters :
Fermions Properties
1. In the periodic classification of the elements, the atomic levels are filled with electrons which are fermions. In a given orbital level only two electrons of distinct spin states can be placed,
i.e., according to the Pauli principle they are in the singlet spin state. 2. When a chemical bond is created, the electrons coming from the two atoms of the bond go into the atomic levels
originating from the coupling between the levels of the separated atoms. The resulting bonding and antibonding states must also be populated by electrons in different spin states, thus in the singlet
spin state. 3. The ferromagnetism arises, as already mentioned, from the energy difference between the singlet and the triplet spin states for a pair of electrons. If the fundamental state presents a
macroscopic magnetization, it must be made from microscopic states with a nonzero spin. In fact, in a ferromagnetic metal such as iron or nickel, the spins of two iron nuclei couple through
conduction electrons which travel from nucleus to nucleus.
Bosons Properties
4. In superconductivity the electrons of a solid couple by pairs due to their interaction with the solid vibrations. These pairs, called “Cooper pairs,” are bosons which gather into the same
fundamental state at low temperature, thus producing the phenomenon of superconductivity. 5. In the same way the 4 He atoms are bosons, which can occupy the same quantum state below a critical
temperature (see chapter 9). Their ma-
Return to the Introduction Examples
croscopic behavior expresses this collective property, whence the absence of viscosity. 6. Photons are also bosons and the stimulated emission, a specific property of bosons in nonconserved number,
will be studied in § 9.2.4.
Summary of Chapter 5 In Quantum Mechanics, the hamiltonian describing the states of identical particles is invariant under a permutation of these particles, the states before and after permutation
are physically indistinguishable. This means that the hamiltonian commutes with any permutation operator : one first looks for the permutations eigenstates, which are also eigenstates of the N
-particle hamiltonian. In the case of two indistinguishable particles the symmetrical and antisymmetrical eigenstates have been described. These states have a simple expression in the case of
independent particles. Any permutation of N particles can be decomposed into a product of exchanges (or transpositions) of particles by pair : a permutation is thus either even or odd. The Pauli
principle postulates that the wave functions for N identical particles can only be, according to the particle’s nature, either completely symmetrical (bosons) or completely antisymmetrical (fermions)
in a permutation of the particles. In particular, because their wave function is antisymmetrical, two fermions cannot occupy the same quantum state (“Pauli exclusion principle”). The theorem of
spin-statistics connection specifies that the particles of halfintegral total spin, the fermions, have an antisymmetrical wave function ; the particles of integral or zero total spin, the bosons, have
a completely symmetrical wave function. In the case of two spin-1/2 particles, the triplet spin state, symmetrical, and the singlet spin state, antisymmetrical, have been described. The spin
component of the wave function has to be combined with the orbital part, the global wave function being antisymmetrical to satisfy the Pauli principle. In Statistical Physics, to account for the
Pauli principle, the state of N particles will be described using the occupation numbers of the various energy 129
Summary of Chapter 5
levels, rather than by its wave function. A given state can be occupied at most by one fermion, whereas the number of bosons that can be located in a given state is arbitrary.
Chapter 6
General Properties of the Quantum Statistics In chapter 5 we learnt the consequences, in Quantum Mechanics, of the Pauli principle which applies to indistinguishable particles : they will now always
be taken as independent, that is, without any mutual interaction. We saw that it is very difficult to express the conditions stated by the Pauli principle on the N -particle wave function ; on the
other hand, a description through occupation numbers |n1 , n2 , . . . , nk , . . . of the different quantum states ε1 , ε2 , . . . εk . . . accessible to these indistinguishable particles is much more
convenient. In fact we assume that the states are nondegenerate and if there are several states with the same energy, we label them with different indexes. Besides, it was stated that there exists two
types of particles, the fermions with a half-integral spin, which cannot be more than one in any given one-particle state εk , whereas the bosons, with an integral or zero spin, can be in arbitrary
number in any one single-particle quantum state. In the present chapter we return to Statistical Physics (except in § 6.4, which is on Quantum Mechanics) : after having presented the study technique
applicable to systems referring to Quantum Statistics, we determine the average number of particles, at a given temperature, on an energy state of the considered system : this number will be very
different for fermions and bosons. This chapter presents general methods and techniques to analyze, using Statistical Physics, the properties of systems of indistinguishable particles following the
Quantum Statistics of either Fermi-Dirac or Bose-Einstein, while the following chapters will study these statistics more in detail on physical examples.
Chapter 6. General Properties of the Quantum Statistics
In § 6.1, we will show how the statistical properties of indistinguishable independent particles are more conveniently described in the grand canonical ensemble. In § 6.2 we will explain how the
grand canonical partition function ZG is factorized into terms, each one corresponding to a one-particle quantum state. In § 6.3 the Fermi-Dirac and the Bose-Einstein distributions will be deduced,
which give the average number of particles of the system occupying a quantum state of given energy at temperature T , and the associated thermodynamical potential, the grand potential, will be
expressed. The § 6.4 will be a (long) Quantum Mechanics parenthesis, it will be shown there that a macroscopic system, with extremely close energy levels, is conveniently described by a density of
states, that will be calculated for a free particle. In § 6.5, this density of states will be used to obtain the average value, at fixed temperature, of physical parameters in the case of
indistinguishable particles. Finally, in § 6.6, the criterion used in § 4.3 to define the domain of Classical Statistics will be justified and it will be shown that the Maxwell-Boltzmann Classical
Statistics is the common limit of both Quantum Statistics when the density decreases or the temperature increases.
Use of the Grand Canonical Ensemble
In chapter 5 we learnt the properties requested for the wave function of N indistinguishable particles, when these particles are fermions or when they are bosons. (From the present chapter until the
end of the book, the study will be limited to indistinguishable and independent particles.) Now we look for the statistical description of such a system in thermal equilibrium at temperature T and
first show, on the example of two particles, that the use of the canonical ensemble is not convenient.
Two Indistinguishable Particles in Thermal Equilibrium : Statistical Description in the Canonical Ensemble
Consider a system of two indistinguishable particles. The one-particle energy levels are noted ε1 , . . . , εk , . . . , and are assumed to be nondegenerate (a degenerate level is described by
several states at the same energy but with different indexes). The canonical partition function for a single particle at temperature T is written : 1 (6.1) e−βεk with β = Z1 = kB T k
Use of the Grand Canonical Ensemble
Now consider the two particles. For a given microscopic configuration, the total energy is εi + εj . After having recalled the results for two distinguishable particles, we will have to separately
consider the case of fermions and that of bosons. We have seen in § 2.4.5 that for distinguishable particles Z2disc = (Z1 )2
since there is no limitation to the levels occupation by the second particle. Fermions cannot be more than one to occupy the same state. Consequently, as soon as the level of energy εi is occupied by
the first particle, the second one cannot be on it. Thus 1 −β(εi +εj ) −β(εi +εj ) Z2fermions = e = e (6.3) 2 i<j i=j
Let us compare Z2fermions to (Z1 )2 : 2 −βεi −βεj (Z1 ) = e e e−β2εi + 2 e−β(εi +εj ) = i
(Z1 )2 =
e−β2εi + 2Z2fermions
In the case of bosons, in addition to the configurations possible for the fermions, the two particles can also lie on the same level : e−β(εi +εj ) (6.6) Z2bosons = i≤j 2
Let us relate (Z1 ) to Z2bosons : (Z1 )2 = Z2bosons +
1 −β(εi +εj ) e 2
(Z1 )2 = Z2bosons + Z2fermions
It was easy to determine the levels available for the particles in this example because they were only two. For three particles one must then consider which restrictions appear for the third one
because of the levels occupied by the first two particles. For N particles one should again proceed step by step, but this method is no longer realistic !
Description in the Grand Canonical Ensemble
To avoid complicated conditions on the occupation numbers nk one will work in the grand canonical ensemble, assuming in a first step that the total number
Chapter 6. General Properties of the Quantum Statistics
of particles of the system is arbitrary. Thus one will calculate ZG and the grand potential A ; then the value of the Lagrange parameter α = βµ (or that of the chemical potential µ) will be fixed in
such a way that the average number of particles N , deduced from the statistical calculation, coincides with the real number of particles in the system. We know (§ 2.5.4) that the √ relative
fluctuation on N introduced by this procedure is of the order of 1/ N , that is, of the order of a few 10−12 for N of the order of the Avogadro number.
Factorization of the Grand Partition Function Fermions and Bosons
By definition ∞
ZG (α, β) =
eαN ZN (β) =
N =0
eβµN ZN (β)
N =0
where N is one of the numbers of particles and where α and the chemical potential are related by α = βµ. The canonical partition function ZN (β) corresponding to N particles is given by
ZN (β) =
Let us introduce the occupation numbers nk of the quantum states εk :
ZG =
N =0
nk εk
nk =N
nk = N
and the energy for this configuration of the N particles being given by k
nk εk = EN
Factorization of the Grand Partition Function
As the sum is performed over all the total numbers N and all the particle distributions in the system, ZG is also written βµ nk −β nk εk k ZG = e k (6.14) {|n1 ...nk >}
nk (µ−εk )
{|n1 ...nk >}
In the latter expression, one separates the contribution of the state ε1 : this is the term eβ(µ−ε1 )n1 (6.16) n1
which multiplies the sum concerning all the other states, with n1 taking all possible values. One then proceeds state after state, so that ZG is written as a product of factors : ∞ β(µ−εk )nk ZG = e
(6.17) k=1
Each factor concerns a single state εk and its value depends on the number of particles which can be located in this state, i.e., of the nature, fermions or bosons, of the considered particles. Note
that, since ZG is a product of factors, ln ZG is a sum of terms, each concerning a one-particle state of the type εk .
In this case, the quantum state εk is occupied by 0 or 1 particle : there are only two terms in nk , the contribution of the state εk in ZG is the factor 1 + exp β(µ − εk ), whence ZG fermions =
(1 + eβ(µ−εk ) )
The state εk can now be occupied by 0, 1, 2,... particles, so that nk varies from zero to infinity. The contribution in ZG of the state εk is thus the factor ∞
eβ(µ−εk )nk =
nk =0
1 1 − eβ(µ−εk )
Chapter 6. General Properties of the Quantum Statistics
This geometrical series can be summed only if exp(α − βεk ) is smaller than unity, a condition to be fulfilled by all the states εk . One will thus have to verify that µ < ε1 , where ε1 is the
one-particle quantum state with the lowest energy, i.e., the fundamental state. This condition is also expressed through α = βµ as α < βε1 . Then the grand partition function takes the form : ZG
bosons =
∞ k=1
1 1−
eβ(µ−εk )
Chemical Potential and Number of Particles
The grand canonical partition function ZG is associated with the grand potential A such that A = −kB T ln ZG
As specified in § 3.5.3, the partial derivatives of the grand partition function provide the entropy, the pressure and the average number of particles N : dA = −SdT − P dΩ − N dµ
In particular the constraint on the total number of particles of the system is expressed from : Å ã ∂A N =− (6.23) ∂µ T,Ω
Average Occupation Number ; Grand Potential
From the grand partition function ZG one deduces average values at temperature T of physical parameters of the whole system, but also of the state εk . In particular the average value of the
occupation number nk of the state of energy εk is obtained through nk =
nk eβ(µ−εk )nk 1 ∂ ln ZG β(µ−ε )n =− k k β ∂εk e
Average Occupation Number ; Grand Potential
For fermions, the average occupation number, the so-called “Fermi-Dirac distribution,” is given by nk F D =
1 1 = βε −α e k +1 eβ(εk −µ) + 1
with α = βµ
For bosons, this average number is “the Bose-Einstein” distribution which takes the form nk BE =
1 1 = βε −α e k −1 eβ(εk −µ) − 1
Note : Remember the sign difference in the denominator between fermions and bosons ! Its essential physical consequences are developed in the next chapters. The average occupation numbers are related
to factors in the grand partition function ZG : indeed one will verify that 1 = 1 + eβ(µ−εk ) 1 − nk F D 1 1 + nk BE = 1 − eβ(µ−εk )
(6.27) (6.28)
which are the respective contributions of the level εk to ZG , in the cases of fermions or bosons. These occupation numbers yield the average values, at given temperature, thus at fixed β, of physical
parameters. Thus the average number of particles is related to α and β, which appear in f (ε), through N =
The total energy of the system at temperature T is obtained from U=
εk nk
The grand potential A is expressed, like the ZG factors, versus the occupation numbers of the states of energy εk : A = +kB T
ln(1 − nk ) fermions
ln(1 + nk ) bosons
A = −kB T
Chapter 6. General Properties of the Quantum Statistics
Free Particle in a Box ; Density of States (Quantum Mechanics)
The different physical parameters, like the average number of particles N , the internal energy U , the grand potential A, were expressed above as sums of contributions arising from the discrete
one-particle levels. Now we will first recall what these states are for a free particle confined within a volume of macroscopic characteristic dimensions, i.e., of the order of a mm or a cm (§ 6.4.1) :
such dimensions are extremely large with respect to atomic distances, so that the characteristic energy splittings are very small (§ 6.4.1a). The boundary conditions (§ 6.4.1b) of the wave function
imply quantization conditions, that we are going to express in two different ways. However, for both quantization conditions the same energy density of states D(ε) is defined (§ 6.4.2), such that the
number of allowed states between energies ε and ε + dε is equal to D(ε) dε.
Quantum States of a Free Particle in a Box
a) Eigenstates By definition a free particle feels no potential energy (except the one expressing the possible confinement) We first consider that the free particle can move through the entire space.
Its hamiltonian is given by pˆ 2 ˆ h= 2m
We are looking for its stationary states (see a course on Quantum Mechanics) : the eigenstate |ψ and the corresponding energy ε satisfy pˆ 2 |ψ = ε|ψ 2m
i.e., 2 ∆ψ(r) = εψ(r) 2m The time-dependent wave function is then deduced : −
Ψ(r, t) = ψ(r) exp(−iεt/)
It is known that the three space variables separate in eq. (6.35) and that it is enough to solve the one-dimension Schroedinger equation −
2 d2 ψ(x) = εx ψ(x) 2m dx2
Free Particle in a Box ; Density of States
which admits two equivalent types of solutions : i) either 1 ψ(x) = √ exp ikx x Lx with kx > 0, < 0 or zero. The time-dependent wave function Å ã 1 εx t Ψ(x, t) = √ exp i kx x − Lx
2 kx2 = ωx is a progressive wave : when the time t increases, with εx = 2m a state of given phase propagates toward increasing x’s for kx > 0, toward decreasing x’s for kx < 0 ; the space wave
function is a constant for kx = 0. ii) or ψ(x) = A cos kx x + B sin kx x
where A and B are constants. The time-dependent wave function ã Å εx t Ψ(x, t) = (A sin kx x + B cos kx x) exp −i
corresponds to the same energy εx or the same pulsation ωx as in (6.39). The solution of progressive-wave type (6.39) is found again if one chooses √ A = 1/ Lx , B = iA. For real coefficients A and B,
when changing the space origin in x there is indeed separation of the space and time variables in (6.41). Then a state of given space phase does not propagate in time, such a wave is stationary. At
three dimensions the kinetic energy terms along the three coordinates add up in the hamiltonian ; the energy is equal to ε = εx + εy + εz
and the wave function is the product of three wave functions, each concerning a single coordinate. Now we are going to express, through boundary conditions on the wave function, that the particle is
confined within the volume Ω. This will yield quantization conditions on the wave vector and the energy.
Chapter 6. General Properties of the Quantum Statistics
b) Boundary Conditions The volume Ω containing the particles has a priori an arbitrary shape. However, it can be understood that each particle in motion is most of the time at a large distance from
the container walls, so that the volume properties are not sensitive to the surface properties as soon as the volume is large enough. It is shown, and we will admit it, that the Statistical Physics
properties of a macroscopic system are indeed independent of the shape of the volume Ω. For convenience we will now assume that Ω is a box (Lx , Ly , Lz ), with macroscopic dimensions of the order of
a mm or a cm. We consider a one-dimension problem, on a segment of length Lx . The presence probability of the particle is zero outside the interval [O, Lx ], in the region where the particle cannot
be found. There are two ways to express this condition : i) Stationary boundary conditions : To indicate that the particle cannot leave the interval, one assumes that in x = 0 and x = Lx potential
barriers are present, which are infinitely high and thus impossible to overcome. Thus the wave function vanishes outside the interval [O, Lx ] and also at the extremities of the segment, to ensure its
continuity at these points ; since it vanishes at x = 0, it must be of the form ψ(x) = A sin kx x (Fig. 6.1). The cancellation at x = Lx yields kx Lx = nx π
where nx is an integer, so that π Lx π The wave vector is an integer multiple of , it is quantized. Lx kx = nx
For such values of kx , the time-dependent wave function is given by ã Å εx t Ψ(x, t) = A sin kx x exp −i
One understands that the physically distinct allowed values are restricted to nx > 0 : indeed taking nx < 0 is just changing the sign of the wave function, which does not change the physics of the
problem. The value nx = 0 is to be rejected since a wave function has to be normalized to unity and thus cannot cancel everywhere. In the same way ky and kz are quantized by the conditions of
cancellation of the wave function on the surface : finally the allowed vectors k for the particle confined within the volume Ω are of the form ã Å k = nx π , ny π , nz π (6.46) Lx Ly Lz
Free Particle in a Box ; Density of States
x O
Fig. 6.1: The free-particle wave function is zero outside the interval ]O, Lx [. the three integers nx , ny , nz being strictly positive. In the space of the wave vectors (Fig. 6.2), the extremities
ã are on a rectangular lattice Å of these vectors π π π k . The unit cell built on these defined by the three vectors i, j, Lx Ly Lz π3 π3 . = vectors is Lx Ly Lz Ω kz
π Ly π Lz
kx Fig. 6.2: Quantization of the k-space for boundary conditions of the stationary-wave type. Only the trihedral for which the three k-components are positive is to be considered. ii) Periodical
boundary [Born-Von Kármán (B-VK)] conditions : Here again one considers that when a solid is macroscopic, all that deals with its surface has little effect on its macroscopic physical parameters like
its pressure, its temperature. The solid will now be closed on itself (Fig. 6.3) and this will not noticeably modify its properties : one thus suppresses surface
Chapter 6. General Properties of the Quantum Statistics
effects, of little importance for a large system. Then at a given x the properties are the same, and the wave function takes the same value, whether the particle arrives at this point directly or
after having traveled one turn around before reaching x : ψ(x + Lx ) = ψ(x)
Lx O
Fig. 6.3 : The segment [0, Lx ] is closed on itself. This condition is expressed on the progressive-wave type wave function (6.39) through exp ikx (x + Lx ) = exp ikx x
for any x, so that the condition to be fulfilled is kx Lx = nx 2π 2π that is, kx = nx Lx
(6.49) (6.50)
where nx is integer. The same argument is repeated on the y and z coordinates. The wave vector k is again quantized, with basis vectors twice larger than for the condition i) above for stationary
waves, the allowed values now being ã Å k = n 2π n 2π n 2π (6.51) x Lx y Ly z Lz The integers nx , ny , nz are now positive, negative or null (Fig. 6.4). Indeed the time-dependent wave function is
written, to a phase, 2 k 2 1 Ψ(r, t) = √ exp i(k.r − ωt), with ω = ε = 2m Ω
Changing the sign of k, thus of the integers ni , produces a new wave which propagates in the opposite direction and is thus physically different. If one of these integers is zero, this means that the
wave is constant along this axis.
Free Particle in a Box ; Density of States
kz 2π Ly
2π Lz
kx Fig. 6.4: Quantization of the k-space for the periodic (B-VK) boundary conditions. Note that each dimension of the elementary box is twice that corresponding to the stationary boundary conditions
(Fig. 6.2), and that now the entire space should be considered.
Density of States
Under these quantization conditions what is the order of magnitude of the obtained energies, and of the spacing of the allowed levels. As an example consider the framework of the periodic boundary
conditions. The quantized allowed energies are given by ñ Å ã2 Å ã2 Å ã2 ô 2 2 2π 2 2π 2 2π (6.53) + ny + nz n ε= 2m x Lx Ly Lz For Lx for the order of 1 mm, a single coordinate and a mass equal to
that of the proton, 2 2m
2π Lx
ã2 = 13.6 eV.
ã2 Å (2π)2 0.5 × 10−10 ∼ . = 8 × 10−16 eV 1840 10−3
For a mass equal to that of the electron, the obtained value, 1840 times larger, is still extremely small at the electron-volt scale ! The spacing between the considered energy levels is thus very
small as compared to any realistic experimental accuracy, as soon as Lx is macroscopic, that is, large with respect to atomic distances (the Bohr orbit of the hydrogen atom is associated with
energies in the electron-volt range, see the above es-
Chapter 6. General Properties of the Quantum Statistics
timation). If the stationary boundary conditions had been chosen, the energy splitting would have been four times smaller. The practical question which is raised is : “What is the number of available
quantum states in a wave vector interval d3k around k fixed, or in an energy interval dε around a given ε ?” It is to answer this question that we are going to now define densities of states, which
summarize the Quantum Mechanics properties of the system. It is only in a second step (§ 6.5) that Statistical Physics will come into play through the occupation factors of these accessible states.
a) Wave Vector and Momentum Densities of States (B-VK Conditions) : One is estimating the number of quantum states dn of a free particle, confined within a volume Ω, of wave vector between k and k +
d3k, the volume d3k (2π)3 . The wave vector density of being assumed to be large as compared to Ω states Dk (k) is defined by dn = Dk (k)d3k
We know that the allowed states are uniformly spread in the k-space and that, in the framework of the B-VK periodic boundary conditions, for each (2π)3 there is a new quantum state (Fig. 6.4) :
indeed a elementary volume Ω parallelepiped has eight summits, shared between eight parallelepipeds. The searched number dn is thus equal to dn =
d3k Ω 3 = d k, (2π)3 /Ω (2π)3
which determines Dk (k) =
Ω (2π)3
The above estimation has been done from the Schroedinger equation solutions related to the space variable r (orbital solutions). If the particle carries a spin s, to a given solution correspond 2s +
1 states with different spins, degenerate in the absence of applied magnetic field, which multiplies the above value of the density of states [Eq. (6.57)] by (2s + 1). Thus for an electron, of spin 1/
2, there are two distinct spin values for a given orbital state. For a free particle, its wave vector and momentum are related by p = k d3 p = 3 d3k
Free Particle in a Box ; Density of States
The number dn of states for a free particle of momentum vector between p and p + d3 p, confined within the volume Ω, is written, from eq. (6.56), dn =
p Ω Ω d3 = 3 d3 p = Dp ( p)d3 p 3 3 (2π) h
Here the (three-dimensional) momentum density of states has been defined through Dp ( p) =
Ω h3
Note that the distance occupied by a quantum state, on the momentum axis of the phase space corresponding to the one-dimension motion of a single h . This is consistent with the fact that a cell of
this phase space particle, is Lx has the area h and that here ∆x = Lx , since, the wave function being of plane-wave type, it is delocalized on the whole accessible distance. b) Three-Dimensional
Energy Density of States (B-VK Conditions) : The energy of a free particle only contains the kinetic term inside the allowed volume, it only depends of its momentum or wave vector modulus : ε=
2 k 2 p2 = 2m 2m
All the states corresponding to a given … energy ε are, in the k-space, at the 2mε surface of a sphere of radius k = . If one does not account for the 2 particle spin, the number of allowed states
between the energies ε and ε + dε is the number of allowed values of k between the spheres of radii k and k + dk, 2 kdk. (For clarity in the sketch, Fig. dk being related to dε through dε = m 6.5 is
drawn in the case of a two-dimensional problem.) The concerned volume is 4πk 2 dk, the number dn of the searched states, without considering the spin variable, is equal to Ω Ω 4πk 2 dk = 4πk.kdk (2π)
3 (2π)3 … Ω 2mε m = 2 dε 2π 2 2
(6.62) (6.63)
Thus a density of states in energy is obtained, defined by dn(ε) = D(ε) dε, with Å ã √ (2s + 1) 2m 3/2 (6.64) D(ε) = CΩ ε with C = 4π 2 2
Chapter 6. General Properties of the Quantum Statistics
ky ε+ ε
kx O
2π Lx
2π Ly Fig. 6.5: Number of states of energy between ε and ε + dε, (two-dimensional problem). This expression of the energy density of states, which includes the spin degeneracy, is valid for any
positive energy ; the density of states vanishes for ε < 0, since then no state can exist, the energy being only of kinetic origin for the considered free particle. c) Densities of States
(Stationary-Waves Conditions) We have seen that, in the case of stationary-wave conditions, all the physically distinct solutions are obtained when restricting the wave vectors choice to those with
all three positive components. The volume corresponding to the states of energies between ε and ε + dε must thus be limited to the corresponding trihedral of the k-space. This is the 1/8th of the
space between the two 1 spheres of radii k and k + dk, i.e., 4πk 2 dk, dk being still related to dε by 8 dε =
2 kdk m
However the wave vector density of states is eight times larger for the stationary-wave quantization condition than for the B-VK condition (the allowed points of the k-space are twice closer in the
x, y and z directions). Then one obtains a number of orbital states, without including the spin degeneracy, of dn =
Ω1 4πk 2 dk π3 8
Fermi-Dirac Distribution ; Bose-Einstein Distribution
The density D(ε) thus has exactly the same value than in b) when the spin degeneracy (2s + 1) is introduced. One sees that both types of quantization condition are equivalent for a macroscopic
system. In most of the remaining parts of this course, the B-VK conditions will be used, as they are simpler because they consider the whole wave vector space. Obviously, in specific physical
conditions, one may have to use the stationary-wave quantization conditions. Notes : In presence of a magnetic potential energy, which does not affect the orbital part of the wave function, the
density of states Dk (k) is unchanged. On the contrary, D(ε) is shifted (see § 7.1.3). As an exercise, we now calculate the energy density of states for free particles in the two- or one-dimension
space. The relations between ε and k, between dε and dk are not modified [(6.61) and (6.65)]. – The two-dimension elementary area is 2πkdk, the k -density of states Lx Ly is given by without taking
the spin in account and the energy (2π)2 density of states is a constant. – In one dimension, the elementary length is 2dkx (kx and −kx correspond L to the same energy, whence the factor 2), the kx
-density of states is 2π 1 without spin, and D(ε) varies like √ . D(ε) diverges for ε tending to ε zero but its integral, which gives the total number of accessible states from ε = 0 to ε = ε0 , is
Fermi-Dirac Distribution ; Bose-Einstein Distribution
We have just seen that, for a macroscopic system, the accessible energy levels are extremely close and have defined a density of states, which is a discrete function of energy ε : in the large volume
limit, it is as if D(ε) was a function of the continuous variable ε. Let us now return to Statistical Physics in the framework of the large volumes limit. From the average number of occupation nk
(see § 6.3) of an energy level εk , one defines the energy function “ occupation factor” or “distribution,” of Fermi-Dirac or of Bose-Einstein according to the nature of the considered
Chapter 6. General Properties of the Quantum Statistics
particles : 1
fF D (ε) =
1 eβε−α
1 1 = eβ(ε−µ) − 1 eβε−α − 1
fBE (ε) =
(6.67) (6.68)
Recall that α and the chemical potential µ are related by α = βµ with 1 β= . kB T The consequences of the expression of their distribution will be studied in details in chapter 7 for free fermions
and in chapter 9 for bosons.
Average Values of Physical Parameters at T in the Large Volumes Limit
On the one hand, the energy density of states D(ε) summarizes the Quantum Mechanical properties of the system, i.e., it expresses the solutions of the Schroedinger equation for an individual
particle. On the other hand, the distribution f (ε) expresses the statistical properties of the considered particles. To calculate the average value of a given physical parameter, one writes that in
the energy interval between ε and ε + dε, the number of accessible states is D(ε)dε, and that at temperature T and for a parameter α or a chemical potential µ, these levels are occupied according to
the distribution f (ε) (of Fermi-Dirac or Bose-Einstein). Thus +∞ D(ε)f (ε) dε (6.69) N= −∞ +∞
εD(ε)f (ε) dε
The temperature and the chemical potential are included inside the distribution expression. In fact the first equation allows one to determine the chemical potential value. To obtain the grand
potential, one generalizes to a large system relations (6.31) and (6.32) that were established for discrete parameters : +∞ A = +kB T D(ε) ln(1 − f (ε)) dε for fermions (6.71) −∞ +∞
A = −kB T
D(ε) ln(1 + f (ε)) dε for bosons −∞
Common Limit of the Quantum Statistics
6.7 6.7.1
Common Limit of the Quantum Statistics Chemical Potential of the Ideal Gas
In the case of free particles (fermions or bosons) of spin s, confined in a volume Ω, one expresses the number of particles (6.69) using the expression (6.64) of the density of states D(ε) : √ εdε (2s
+ 1)Ω(2m)3/2 +∞ (6.73) N= 2 3 βε−α 4π e ∓1 0 where, in the denominator, the sign − is for bosons and the sign + for fermions. If the temperature is finite, the dimensionless variable of the integral
is βε = x, so that this relation can be written : √ ã Å N 2mkB T 3/2 (2s + 1) +∞ xdx = (6.74) x−α ∓ 1 Ω 2 4π 2 e 0 The physical parameters are then put into the left member, which depends of the
system density and temperature as √ Å ã3/2 h2 1 1 N 4π 2 N π 1 N (λth )3 = constant λ3th = Ω (2π)3 2s + 1 23/2 mkB T Ω 2 2s + 1 Ω (6.75) The thermal de Broglie wavelength λth appears, as defined in §
4.3. The quantity (6.75) must be equal to the right member integral, which is a function of α such that ∞ √ xdx (6.76) u∓ (α) = x−α e ∓1 0 √ In fact (6.75) expresses, to the factor π/2(2s+1), the
cube of the ratio of the thermal de Broglie wavelength λth to the average distance between particles d = (Ω/N )1/3 . It is this comparison which, in § 4.3, had allowed us to define the limit of the
classical ideal gas model. Since (6.75) and (6.76) are equal, one obtains : √ 1 πN (λth )3 (6.77) u∓ (α) = 2s + 1 2 Ω The integral u∓ (α) can be numerically calculated versus α, it is a monotonous
increasing function of α for either Fermi-Dirac (F D) or Bose-Einstein (BE) Quantum Statistics (Fig. 6.6). We already know the sign or the value of u(α) for some specific physical situations : It was
noted in § 6.2.2 that the expression of f (ε) for Bose-Einstein statistics implies that α < βε1 , i.e., α < 0 for free bosons, since then ε1 is zero. We
Chapter 6. General Properties of the Quantum Statistics
uF D
u± (α)
u+ (α) = uF D u− (α) = uBE uBE constant classical limit αBE
αF D O
N 3 λ Ω th α
Fig. 6.6: Sketch of the function u± (α) and determination of α from the physical data of the considered system.
will see in chapter 9 of this book that, in some situations, α vanishes, thus producing the Bose-Einstein condensation, which is a phase transition. In the case of free fermions at low temperature,
it will be seen in the next chapter that µ is larger than ε1 , which is equal to zero, i.e., α > 0. When α tends to minus infinity, the denominator term takes the same expression for both fermions and
bosons. The integration is exactly performed : u∓ (α) e
√ √ −x α π
1 xe dx = e 2
Considering the expression of the right member of 6.76, this limit does occur if the particle density n = N/Ω tends to zero or the temperature tends to infinity. In the framework of this classical
limit, one deduces eα =
N λ3th Ω (2s + 1)
which allows one to express the chemical potential µ, since α = µ/kB T . One
Common Limit of the Quantum Statistics
finds, if the spin degeneracy is neglected, i.e., if one takes 2s + 1 = 1, ñ Å ã ô Ω 2πmkB T 3/2 µ = −kB T ln (6.80) N h2 the very expression obtained in § 4.4.3 for the ideal gas.
Grand Canonical Partition Function of the Ideal Gas
Let us look for the expression of A in the limit where α tends to minus infinity. For both fermions and bosons, one gets the same limit, by developing the Naperian logarithm to first order : it is
equal to eα−βεk (6.81) A = −kB T k
One recognizes the one-particle partition function z1 = e−βεk
= −kB T e
(its value for the ideal gas was calculated in § 4.4.1), whence A = −kB T eα z1
A = −kB T ln ZG ,
By definition
thus in the common limit of both Quantum Statistics, i.e., in the classical limit conditions, ZG is equal to ZG = exp(eα z1 ) =
z1N N!
Now the general definition of the grand canonical partition function ZG , using the N -particle canonical partition function ZcN , is ZG = eαN ZcN (6.87) N
This shows that in the case of identical particles, for sake of consistency, the N -particles canonical partition function must contain the factor 1/N !, the
Chapter 6. General Properties of the Quantum Statistics
“memory” of the Quantum Statistics, to take into account the indistinguishability of the particles : ZcN =
1 N z N! 1
This factor ensures the extensivity of the thermodynamical functions, like S, and allows to solve the Gibbs paradox (there is no extra entropy when two volumes of gases of the same density and the
same nature are allowed to mix into the same container.) (see § 4.4.3). Notes : The above reasoning allows one to grasp how, in the special case of free particles, one goes from a quantum regime to a
classical one. We did not specify to which situation corresponds the room temperature, the most frequent experimental condition. We will see that for metals, for example, it corresponds to a very low
temperature regime, of quantum nature (see chapter 7).
Summary of Chapter 6 To account for the occupation conditions of the energy states as stated by the Pauli principle, it is simpler to work in the grand canonical ensemble, in which the number of
particles is only given in average : then, at the end of the calculation, the value of the chemical potential is adjusted so that this average number coincides with the real number of particles in
the system. As soon as the system is macroscopic, the fluctuations around the average particle number predicted in the grand canonical ensemble are much smaller than the resolution of any measure. The
grand partition function gets factorized by energy level and takes a different expression for fermions and for bosons. Remember the distribution, which provides the average number of particles, of
chemical potential equal to µ, on the energy level ε at temperature T = 1/kB β : for fermions fF D (ε) =
1 exp β(ε − µ) + 1
fBE (ε) =
1 exp β(ε − µ) − 1
for bosons
For a free particle in a box, we calculated the location of the allowed energy levels. The periodic boundary conditions of Born-Von Kármán (or in an analogous way the stationary-waves conditions)
evidence quantized levels, and for a box of macroscopic dimensions (“large volume limit”), the allowed levels are extremely close. One then defines an energy density of states such that the number of
allowed states between ε and ε + dε is equal to dn = D(ε)dε For a free particle in the three-dimensional space D(ε) ∝ 153
(6.91) √
Summary of Chapter 6
In the large volume limit the average values at temperature T of physical parameters are expressed as integrals, which account for the existence of allowed levels through the energy density of states
and for their occupation by the particles at this temperature through the distribution term : +∞ D(ε)f (ε) dε (6.92) N= −∞ +∞
εD(ε)f (ε) dε
We showed how for free particles the Classical Statistics of the ideal gas is the common limit of the Fermi-Dirac and Bose-Einstein statistics when the temperature is very high and/or the density
very low. The factor 1/N ! in the canonical partition function of the ideal gas is then the “memory” of the Quantum Statistics properties.
Chapter 7
Free Fermions Properties We now come to the detailed study of the properties of the Fermi-Dirac quantum statistics. First at zero temperature we will study the behavior of free fermions, which shows
the consequences of the Pauli principle. Then we will be concerned by electrons, considered as free, and will analyze two of their properties which are temperature-dependent, i.e., their specific heat
and the “ thermionic” emission. In the next chapter we will understand why this free electron model indeed properly describes the properties of the conduction electrons in metals. In fact, in the
present chapter we follow the historical order : the properties of electrons in metals, described by this model, were published by Arnold Sommerfeld (electronic specific heat, Spring 1927) and Wolfang
Pauli (electrons paramagnetism, December 1926), shortly after the statement of the Pauli principle at the beginning of 1925.
Properties of Fermions at Zero Temperature Fermi Distribution, Fermi Energy
The Fermi-Dirac distribution, presented in § 6.5, gives the average value of the occupation number of the one-particle state of energy ε, and is a function of the chemical potential µ. Its expression
is ï ò 1 β(ε − µ) 1 f (ε) = β(ε−µ) = 1 − tanh (7.1) 2 2 e +1 155
Chapter 7. Free Fermions Properties
1 It is plotted in Fig. 7.1 and takes the value for ε = µ for any value of β and 2 thus of the temperature. This particular point is a center of inversion of the curve, so that the probability of
“nonoccupation” for µ− δε, i.e., 1 − f (µ− δε), has the same value as the occupation probability for µ + δε, i.e., f (µ + δε). β At the point ε = µ the tangent slope is − : the lower the temperature,
the 4 larger this slope. f 1−f 1
2 kB T
1 − f (ε)
f (ε) 0
Fig. 7.1 : Variation versus energy of f (ε) and 1 − f (ε). In particular when T tends to zero, f (ε) tends to a “step function” (Heaviside function). One usually calls Fermi energy the limit of µ
when T tends to zero ; it is noted εF , its existence and its value will be specified below. Then one obtains at T = 0 : ® f (ε) = 1 if ε < εF (7.2) f (ε) = 0 if ε > εF At T = 0 K the filled states
thus begin at the lowest energies, and end at εF , the last occupied state. The states above εF are empty. This is in agreement with the Pauli principle : indeed in the fundamental state the
particles are in their state of minimum energy but two fermions from the same physical system cannot occupy the same quantum state, whence the requirement to climb up in energy until the particles
all located. In the framework of the large volume limit, to which this course is limited, the total number N of particles in the system satisfies ∞ D(ε)f (ε)dε (7.3) N= 0
where D(ε) in the one-particle density of states in energy. This condition determines the µ value.
Properties of Fermions at Zero Temperature
At zero temperature, the chemical potential, then called the “Fermi energy εF ,”, is thus deduced from the condition that εF N= D(ε)dε (7.4) 0
The number N corresponds to the hatched area of Fig. 7.2.
f (ε)
T = OK
Fig. 7.2: Density of states D(ε) of three-dimensional free particles and determination of the Fermi energy at zero temperature. The hatched area under the curve provides the number N of particles in
the system. For a three-dimensional system of free particles, with a spin s = 1/2, from (6.64) √ Ω (2m)3/2 ε 2π 2 3 D(ε) = 0
D(ε) =
for ε > 0 for
The integral (7.4) is then equal to N=
2 Ω 3/2 (2m)3/2 εF 3 2π 2 3
Å ã N 2/3 2 3π 2 2m Ω
that is, εF =
This expression leads to several comments :
Chapter 7. Free Fermions Properties
– N and Ω are extensive, thus εF is intensive ; – εF only depends on the density N/Ω ; – the factor of 2 /2m in (7.7) is equal to kF2 , where kF is the Fermi wave vector : in the k-space the surface
corresponding to the constant energy εF , called “Fermi surface,” contains all the filled states. In the case of free particles, it is a sphere, the “Fermi sphere,” of radius kF . One can √ also define
the momentum pF = kF = 2mεF . Then expression (7.6) can be obviously expressed versus kF or pF for spin-1/2 particles : N=
2Ω 4 3 2Ω 4π 3 πk = 3 p (2π)3 3 F h 3 F
p)), the one-particle density The first factor is equal to Dk (k) (resp. Dp ( of states in the wave vector- (resp. momentum-) space. Expression (7.8) is also directly obtained by writing : N= 0
Dk (k)d3k .
– εF corresponds, for metals, to energies of the order of a few eV. Indeed, let us assume now (see next chapter) that the only electrons to be considered are those of the last incomplete shell, which
participate in the conduction. Take the example of copper, of atomic mass 63.5 g and density 9 g/cm3 . There are thus 9N /63.5 = 0.14N copper atoms per cm3 , N being the Avogadro number. Since one
electron per copper atom participates in conductivity, the number of such electrons per cm3 is of 0.14N = 8.4 × 1022 /cm3 i.e., 8.4 × 1028 / m3 . This corresponds to a wave vector kF = 1.4 × 1010 m−1
, and to a velocity at the Fermi level kF of 1.5 × 106 m/sec. One deduces εF = 7 eV. vF = m Remember that the thermal energy at 300 K, kB T , is close to 25 meV, so that the Fermi energy εF
corresponds to a temperature TF such that kB TF = εF , in the 104 or 105 K range according to the metal (for copper 80, 000 K) !
Internal Energy and Pressure of a Fermi Gas at Zero Temperature
The internal energy at zero temperature is given by εF εD(ε)dε U= 0
Properties of Fermions at Zero Temperature
For the three-dimensional density of states (7.5), U is equal to : U=
2 Ω 5/2 (2m)3/2 εF 5 2π 2 3
i.e., by expressing N using (7.6), U= Besides,
3 N εF 5 Å
∂U P =− ∂Ω
ã (7.12) N
From (7.10) and (7.7), U varies like N 5/3 Ω−2/3 , so that at constant particles number N P =
2U 3Ω
For copper, one obtains P = 3.8 × 1010 N/m2 = 3.7 × 105 atmospheres. This very large pressure, exerted by the particles on the walls of the solid which contains them, exists even at zero temperature.
Although the state equation (7.13) is identical in the cases of fermions and of the ideal gas, the two situations are very different : for a classical gas, when the temperature tends to zero, both the
internal energy and the pressure vanish. For fermions, the Pauli principle dictates that levels of nonzero energies are filled. The fact that electrons in the same orbital state cannot be more than a
pair, and of opposite spins, leads to their mutual repulsion and to a large average momentum, at the origin of this high pressure.
Magnetic Properties. Pauli Paramagnetism
A gas of electrons, each of them carrying a spin magnetic moment equal to If N+ the Bohr magneton µ B , is submitted to an external magnetic field B. equal to +µB , is the number of electrons with a
moment projection along B N− that of moment −µB , the gas total magnetic moment (or magnetization) is along B M = µB (N+ − N− )
The density of states Dk (k) is the same as for free electrons. Since the potential energy of an electron depends on the direction of its magnetic moment, the total energy of an electron is ε± =
2k 2 ∓ µB B 2m
Chapter 7. Free Fermions Properties
The densities of states D+ (ε) and D− (ε) now differ for the two magnetic moment orientations, whereas in the absence of magnetic field, D+ (ε) = D− (ε) = D(ε)/2 : here ® D+ (ε) = 12 D(ε + µB B) (7.16)
D− (ε) = 12 D(ε − µB B) Besides, the populations of electrons with the two orientations of magnetic moment are in equilibrium, thus they have the same chemical potential. The solution of this problem
is a standard exercise of Statistical Physics. Here we just give the result, in order to comment upon it : one obtains the magnetization M = µB · µB B · D(εF )
This expresses that the moments with the opposite orientations compensate, except for a small energy range of width µB B around the Fermi energy. The 1 M corresponding (Pauli) susceptibility, defined
by lim , is equal to µ0 Ω B B→0 χ=
1 3 N µ2B kB T 1 3 N µ2B · = µ0 2 Ω εF µ0 2 Ω kB T εF
with µ0 = 4π × 10−7 . It is reduced with respect to the classical result (Curie law) by a factor of the order of kB T /εF , which expresses that only a very small fraction of the electrons
participate in the effect (in other words, only a few electrons can reverse their magnetic moment to have it directed along the external field).
Properties of Fermions at Non-Zero Temperature Temperature Ranges and Chemical Potential Variation
At nonvanishing temperature, one again obtains the chemical potential µ position from expression (7.3), which links µ to the total number of particles : ∞ D(ε)f (ε)dε N= 0 ∞ 1 = D(ε) β(ε−µ) dε (7.3)
e +1 0
Properties of Fermions at Non-Zero Temperature
D(ε) f (ε) T TF
f (ε)D(ε)
Fig. 7.3 : Determination of the chemical potential µ for T TF .
Relation (7.3) implies that µ varies with T . Fig. 7.3 qualitatively shows for kB T εF the properties which will be calculated in Appendix 7.1. Indeed the area under the curve representing the
product D(ε)f (ε) gives the number of particles. At T = 0 K, one has to consider the area under D(ε) up to the energy εF ; at small T , a few states above µ are filled and an equivalent number below µ
are empty. Since the function D(ε) is increasing, to maintain an area equal to N, µ has to be smaller than εF , of an amount proportional to D (µ). The exact relation allowing one to deduce µ(T ) for
the two Quantum Statistics was given in (6.74) : recalling that α = βµ, and substituting (7.5) into (7.3) one obtains for fermions √ ∞ Å ã3/2 xdx 2 2 N = 2π · (7.19) exp(x − α) + 1 Ω 2mkB T 0 Å ã 2
εF 3/2 = (7.20) 3 kB T In the integral we defined βε = x. Thus α is related to the quantity εF /kB T = TF /T . – When T vanishes, the integral (7.19) tends to infinity : the exponential in the
denominator no longer plays any role since α tends to infinity (εF is finite and β tends to infinity). The right member also tends to
Chapter 7. Free Fermions Properties
infinity. One thus has to return to equation (7.4). – We have just studied the low temperature regime T TF , which takes place in metals at room temperature (TF ∼ 104 K) and corresponds to Fig. 7.3.
In astrophysics, the “white dwarfs” enter in this framework. They are stars at the end of their existence, which have already burnt all their nuclear fuel and therefore have shrunk. Their size is of
the order of a planet (radius of about 5000 km) for a mass of 2 × 1030 kg (of the order of that of the sun), their temperature T of 107 K. Their high density in electrons determines εF . TF being in
the 109 K range, the low temperature regime is achieved ! This regime for which the Quantum Statistics apply is called the “degenerate” (in opposition to “classical”) regime : as already mentioned
about Pauli paramagnetism, and will be seen again below, the “active” particles all have an energy close to µ, this is almost as if this level was degenerate in the Quantum Mechanics meaning. – On
the other hand, when T TF , the integral (7.19) vanishes. This is realized either at low density (εF small), or at given density when T tends to infinity. Then e−α tends to infinity, and since α = βµ
with β tending to zero, this implies that µ tends to −∞. One then finds the Maxwell-Boltzmann Classical Statistics, as discussed in § 6.7.2. – Between these two extreme situations, µ thus decreases
from εF to −∞ when the ratio T /TF increases. When studying systems at room temperature, the regime is degenerate in most cases. For properties mainly depending on the Pauli principle, one can
consider then to be at zero temperature. This is the situation when filling the electronic shells in atoms according to the “Aufbau (construction) principle”, widely used in Chemistry courses : the
Quantum Mechanics problem is first solved in order to find the one-electron energy levels of the considered atom. The solutions ranked in increasing energies are the states 1s, 2s, 2p, 3s, 3p, 4s, 3d,
etc. (except some particular cases for some transition metals). These levels are then filled with electrons, shell after shell, taking into account the level degeneracy (K shell, n = 1, containing at
most two electrons, L shell of n = 2, with at most eight electrons, etc.). Since the energy splittings between levels are of the order of a keV for the deep shells and of an eV for the valence shell,
the thermal energy remains negligible with respect to the energies coming into play and it is as if the temperature were zero. On the other hand, if one is interested in temperature-dependent
properties, like the specific heat or the entropy, which are derivatives versus temperature of thermodynamical functions, more exact expressions are required and at low temperature one will proceed
through limited developments versus the variable T /TF .
Properties of Fermions at Non-Zero Temperature
Specific Heat of Fermions
The very small contribution of electrons to a metal specific heat, which follow the Dulong and Petit law (see Ch. 2, § (2.4.5)) only introducing the vibrations of the lattice atoms, presented a
theoretical challenge in the 1910s. This lead to a questioning of the electron conduction model, as proposed by Paul Drude in 1900. Å ã ∂U of a fermions assembly, one To calculate the specific heat Cv
= ∂T N,Ω needs to express the temperature dependence of the internal energy. One can already make an estimate of Cv from the observation of the shape of the Fermi distribution : the only electrons
which can absorb energy are those occupying the states in a region of a few kB T width around the chemical potential, and each of them will increase its energy by an amount of the order kB T to reach
an empty state. The number of concerned electrons is thus ∼ kB T · D(µ) and for a threekB T dimensional system at low temperature (
1, so that µ and εF are εF close), µ N∼ D(ε)dε ∝ µ3/2 , (7.21) 0
3N dN = = D(µ) i.e., dµ 2µ
The total absorbed energy is thus of the order of Å ã 3N U ∼ kB T · · kB T 2 µ Consequently, Cv =
dU ∼ N kB dT
kB T µ
ã (7.24)
These considerations evidence a reduction of the value of Cv with respect to the classical value N kB in a ratio of the order kB T /µ 1, because of the very small number of concerned electrons. The
exact approach refers to a general method of calculation of the integrals in which the Fermi distribution enters : the Sommerfeld development. The complete calculation is given in Appendix 7.1. Here
we just indicate its principle. At a temperature T small with respect to TF , one looks for a limited development in T /TF of the fermions internal energy U , which depends of T , and of
Chapter 7. Free Fermions Properties
the chemical Å ã potential, itself temperature-dependent. The searched quantity dU is then equal to Cv = dT N,Ω Cv =
3 π2 N kB · 2 3
kB T εF
ã (7.25)
which is indeed of the form (7.24), as far as µ is not very different from εF (see Appendix 7.1). The first factor in (7.25) is the specific heat of a monoatomic ideal gas ; the last factor is much
smaller than unity in the development conditions, it ranges between 10−3 and 10−2 at 300 K (for copper it is 3.6 × 10−3 ) and expresses the small number of effective electrons. The electron’s specific
heat in metals is measured in experiments performed at temperatures in the kelvin range. In fact the measured total specific heat includes the contributions of both the electrons and the lattice
Cvtotal = Cvel + Cvlattice
The contribution Cvlattice of the lattice vibrations to the solid specific heat was studied in Ch. 2, § 2.4.5. The Debye model was mentioned, which correctly interprets the low temperature results by
predicting that Cvlattice is proportional to T 3 , as observed in non-metallic materials. The lower the temperature, the less this contribution will overcome that of the electrons. The predicted
variation versus T for Cvtotal is thus given by Cvtotal = γT + aT 3
= γ + aT 2
π 2 kB 2 εF
T with, from (7.25),
γ = N kB
From the measurement of γ one deduces a value of εF of 5.3 eV for copper whereas in § 7.1.1 we calculated 7 eV from the value of the electrons concentration, using (7.7). In the case of silver the
agreement is much better since the experiment gives εF = 5.8 eV, close to the prediction of (7.7). The possible discrepancies arise from the fact that the mass m used in the theoretical expression of
the Fermi energy was that of the free electron. Now a conduction electron is submitted to interactions with the lattice ions, with the solid vibrations and the other electrons. Consequently, its
motion is altered,
Properties of Fermions at Non-Zero Temperature
which can be expressed by attributing it an effective mass m∗ . This concept will be explained in § 8.2.3 and § 8.3.3. Defining εF =
ã2/3 Å 2 2N , 3π 2m∗ V
one deduces from these experiments a mass m∗ = 1.32m for copper, m∗ = 1.1m for silver, where m is the free electron mass. Note that the fermion entropy can be easily deduced from the development of
the Appendix : we saw that A = −P Ω (see §Å3.5.3), ã so that, according to 2 ∂A (7.13), A = − U . Besides, from (3.5.3), S = − . One deduces 3 ∂T µ,Ω S = N kB
π 2 kB T = Cv 2 εF
The electrons entropy thus satisfies the third law of Thermodynamics since it vanishes with T .
Thermionic Emission ε
V εF
f (ε) 1
Fig. 7.4: Potential well for the electrons of a metal. In order to extract an electron at zero temperature, one illuminates the solid with a photon of energy higher than V − εF . The walls of the
“potential box” which confines the electrons in a solid are not really infinitely high. The photoelectrical effect, evidenced by Heinrich
Chapter 7. Free Fermions Properties
Hertz (1887), and interpreted by Albert Einstein (this was the argument for Einstein’s Nobel prize in 1921), allows one to extract electrons from a solid, at room temperature, by illumination with a
light of high enough energy : for zinc, a radiation in the near ultraviolet is required (Fig. 7.4). The energy’s origin is here the state of a free electron with zero kinetic energy, i.e., with such
a convention the Fermi level has a negative energy. The potential barrier is equal to V . To extract an electron at zero temperature, a photon must bring it at least V − εF , an energy usually in the
ultraviolet range. When heating this metal to a high temperature T , the Fermi distribution is spreading toward little negative or even positive energies, so that a few electrons are then able to
pass the potential barrier and become free (Fig. 7.5). One utilizes tungsten heated to about 3000 K to make the filaments of the electrons cannons of TV tubes or electronic microscopes, as its fusion
temperature is particularly high (3700 K). ε
V µ
f (ε)
Fig. 7.5: Filling of the electronic states in a metal heated to a high temperature T . One way to calculate the thermal current consists in expressing that a particle escapes the metal only if it can
overcome the potential barrier, i.e. if its kinetic energy in the direction perpendicular to the surface, taken as z axis, is large enough. It should satisfy p2 p2 ≥ z ≥V (7.32) 2m 2m We know that
the electronic energies, and thus V and V − µ, are of the order of a few eV : whatever the filament temperature, the system is in a regime
Properties of Fermions at Non-Zero Temperature
where kB T remains small with respect to the concerned energies, and where kB T
1. Consequently, µ is still close to εF , and in the Fermi distribution εF f (ε) = exp β
1 p2 2m
ä −µ +1
the denominator exponent is large for the electrons likely to be emitted. The Fermi distribution can then be approximated by a Maxwell-Boltzmann distribution p2
f (ε) eβµ e−β 2m
There remains to estimate the number ∆n of electrons moving toward the surface ∆S during a time ∆t. This is a standard problem of kinetic theory of gases (the detailed method of calculation is
analogous to the kinetic calculation of the pressure, § 4.2.2). One obtains a total current jz = ∆n/∆S∆t equal to jz =
2 ∆n = 3 2πm(kB T )2 e−β(V −µ) ∆S∆t h
This expression shows the very fast variation of the thermal emission current with the temperature T , in T 2 e−(V −µ)/kB T . It predicts that a 0.3 × 0.3 mm2 tungsten tip emits a current of 15 mA at
3000 K and of only 0.8µA at 2000 K. Practically, to obtain a permanent emission current, one must apply an external electric field : in the absence of such a field, the first electrons emitted into
vacuum create a space charge, associated with an antagonistic electric field, which prevents any further emission. Another way of calculating the emitted current consists in considering that in the TV
tube, in the absence of applied electrical potential, an equilibrium sets up at temperature T between the tungsten electrons which try to escape the metal and those outside the metal that try to
enter it. The equilibrium between electrons in both phases is expressed through the equality of their chemical potentials. Now, outside the solid the electron concentration is very small, so that one
deals with a classical gas which follows the Maxwell-Boltzmann statistics. The electron current toward the solid is calculated like for an ideal gas. When an electric field is applied, this current
vanishes but the emission current, which is its opposite, remains. Note that, in order to be rigorous, all these calculations should take into account the energy levels of the metal and its surface,
which would be very
Chapter 7. Free Fermions Properties
complicated ! Anyhow, the above estimates qualitatively describe the effect and lead to correct orders of magnitude. On this example of the thermionic emission we have shown a practical application,
in which electrons follow the high temperature limit of the Fermi-Dirac statistics.
Summary of Chapter 7 The properties of free fermions are governed by the Fermi-Dirac distribution : fF D (ε) =
1 exp β(ε − µ) + 1
At zero temperature the fermions occupy all the levels of energy lower or equal to εF , the Fermi energy, which is defined as the chemical potential µ for T = 0 K. The number N of particles of the
system and the Fermi energy are linked at T = 0 K through εF N= D(ε)dε 0
This situation, related to the Pauli principle, induces large values of the internal energy and of the pressure of a system of N fermions. An excitation, of thermal or magnetic origin, of a fermions
system, only affects the states of energy close to µ or εF . Consequently, the fermions’ magnetic susceptibility and specific heat are reduced with respect to their values in a classical system. In the
case of the specific heat, the reduction is a ratio of the order of kB T /µ, where µ is the chemical potential, of energy close to εF . The magnetic susceptibility and the specific heat of electrons in
metals are well described in the framework of the free electrons model developed in this chapter.
Appendix 7.1 Calculation of Cv Through the Sommerfeld Development ã dU of fermions is deduced from the expression dT N,Ω of the internal energy U (T ) versus temperature. Å
The specific heat Cv =
We have therefore to calculate, at nonvanishing temperature, ∞ f (ε)D(ε)dε N= 0 ∞ U= εf (ε)D(ε)dε
(7.36) (7.37)
that is, more generally, g(ε) =
g(ε)D(ε)f (ε)dε =
ϕ(ε)f (ε)dε
where ϕ(ε) is assumed to vary like a power law of ε for ε > εminimum, to be null for smaller values of ε. We know that, owing to the shape of the Fermi distribution, εF lim g(ε) = ϕ(ε)dε T →0
We write the difference
∆= −∞
ϕ(ε)f (ε)dε −
which is small when kB T µ (βµ 1). This difference introduces ϕ (µ) : if ϕ (µ) is zero, because of the symmetry of f (ε) around µ, the contributions on 171
Appendix 7.1
both sides of µ compensate. Besides, we understood in § 7.2.1 that, to maintain the same N when the temperature increases, µ gets closer to the origin, for a three-dimensional system for which ϕ (µ)
> 0 when D(ε) is different from √ zero (ϕ(ε) ∝ ε). Let us write ∆ explicitly : µ ∆= dεϕ(ε)f (ε) − −∞ µ
= −∞
1 eβ(ε−µ)
dεϕ(ε) +
Å dεϕ(ε)
dεϕ(ε)f (ε)
ã −1 +
1 eβ(ε−µ)
Here a symmetrical part is played by either the empty states below µ [the probability for a state to be empty is 1 − f (ε)], or the filled states above µ, all in small numbers if the temperature is
low. We take β(ε − µ) = x, βdε = dx ∞ 0 Å Å ã ã x x dx dx 1 1 ∆=− ϕ µ+ ϕ µ+ + −x β β 1 + e β β 1 + ex −∞ 0 ∞ ∞ Å Å ã ã x 1 x 1 dx dx ϕ µ− ϕ µ+ + =− x β β 1 + e β β 1 + ex ∞0 ï Å ã Å 0 ãò x x dx 1 ϕ
µ+ − ϕ µ− = x β 1 + e β β 0 ñ ô Å ã 1 ∞ dx x ϕ(3) (µ) x 3 = · 2 ϕ (µ) + + ... β 0 1 + ex β 3! β =
∞ ϕ(2n+1) (µ) In β 2n+2 n=0
(7.43) (7.44) (7.45) (7.46) (7.47)
with In =
2 (2n + 1)!
dx 0
x2n+1 ex + 1
The integrals In are deduced from the Riemann ζ functions (see the section “A few useful formulae”). In particular at low temperature the principal term π2 of ∆ is equal to (kB T )2 ϕ (µ). 6 Let us
return to the calculation of the specific heat Cv . For the internal energy of nonrelativistic particles, ϕ(ε) = ε · Kε1/2 = Kε3/2 for ε > 0, ϕ(ε) = 0 for ε < 0. Then µ π2 U (T ) = (kB T )2 (Kε3/2 )µ
+ O(T 4 ) Kε3/2 dε + (7.49) 6 0
Appendix 7.1
but µ depends on T since µ εF π2 1/2 2 1/2 4 N= (kB T ) (Kε )µ + O(T ) = Kε dε + Kε1/2 dε 6 0 0
expresses the conservation of the particles number. One deduces from (7.50) : ñ µ(T ) = εF
π2 1− 12
kB T εF
ã2 ô + O(T 4 )
Replacing the expression of µ(T ) into U (T ), one gets : 5/2 π
U (T ) − U (T = 0) KεF
kB T εF
ã2 = N εF
π2 4
kB T εF
ã2 (7.52)
One deduces Cv : 3 dU π2 = N kB · Cv = dT 2 3
kB T εF
ã (7.53)
Chapter 8
Elements of Bands Theory and Crystal Conductivity The previous chapter on free fermions allowed us to interpret some properties of metals. We considered that each electron was confined within the
solid by potential barriers, but did not introduce any specific description of its interactions with the lattice ions or with the other electrons. This model is not very realistic, in particular it
cannot explain why some solids are conducting the electrical current and others are not : at room temperature, the conductivity of copper is almost 108 siemens (Ω−1 · m−1 ) whereas that of quartz is
of 10−15 S, and of teflon 10−16 S. This physical parameter varies on 24 orders of magnitude according to the material, which is absolutely exceptional in nature. Thus we are going to reconsider the
study of electrons in solids from the start and to first address the characteristics of a solid and of its physical description. As we now know, the first problem to solve is a quantum mechanical one,
and we will rely on your previous courses. In particular we will be concerned by a periodic solid, that is, a crystal (§ 8.1). We will see that the electron eigenstates are regrouped into energy
bands, separated by energy gaps (§ 8.2 and § 8.3). We will have to introduce here very simplifying hypotheses : in Solid State Physics courses, more general assumptions are used to describe
electronic states of crystals. Using Statistical Physics (§ 8.4) we will see how electrons are filling energy bands. We will understand why some materials are conductors, other ones insulators. We
will go more into details on the statistics of semiconductors,
Chapter 8. Elements of Bands Theory and Crystal Conductivity
of high importance in our everyday life, and will end by a brief description of the principles of a few devices based on semiconductors (§ 8.5).
What is a Solid, a Crystal ?
A solid is a dense system with a specific shape of its own. This definition expresses mechanical properties ; but one can also refer to the solid’s visual aspect, shiny in the case of metals,
transparent in the case of glass, colored for painting pigments. These optical properties, as well as the electrical conduction and the magnetic properties, are due to the electrons. In a macroscopic
solid with a volume in the cm3 range, the numbers of electrons and nuclei present are of the order of the Avogadro number N = 6.02 × 1023 . The hamiltonian describing this solid contains for each
particle, either electron or nucleus, its kinetic energy, and its potential energy arising from its Coulombic interaction with all the other charged particles. It is easily guessed that the exact
solution will be impossible to get without approximations (and still very difficult to obtain using them !). One first notices that nuclei are much heavier than electrons and, consequently, much less
mobile : either the nuclei motion will be neglected by considering that their positions remain fixed, or the small oscillations of these nuclei around their equilibrium positions (solid vibrations)
will be treated seˆ will parately from the electron motions, so that the electron hamiltonian H ˆ will then be given by not include the nuclei kinetic energy. H ˆ = H
NZ N NZ N Z ˆ2 pi 1 1 Ze2 1 e2 − + n | 2 4πε0 2m 4πε0 i=1 n=1 |ri − R |r − rj | i=1 i,j=1 i
n and the where Z is the nuclear charge ; the nth nucleus is located at R electron i at ri . The second term expresses the Coulombian interaction of all the electrons with all the nuclei, the third
one the repulsion among electrons. Besides, we will assume that the solid is a perfect crystal, that is, its atoms are n of a three-dimensional periodic lattice. This regularly located on the sites R
will simplify the solution of the Quantum Mechanical problem, but many of our conclusions will also apply to disordered solids. ˆ depends of their probabilities of The repulsion term between
electrons in H presence, in turn given by the solution of the hamiltonian : this is a difficult self-consistent problem. The Hartree method, which averages the electronic repulsion, and the
Hartree-Fock method, which takes into account the wave function antisymmetrization as required by the Pauli principle, are used to
The Eigenstates for the Chosen Model
solve this problem by iteration, but they lead to complicated calculations. Phenomena like ferromagnetism, superconductivity, Mott insulators, mixedvalence systems, are based on the interaction
between electrons ; however in most cases one can account of the repulsion between electrons in a crystal through an average one-electron periodic term and we will limit ourselves to this situation
in the present course. Then it will be possible to split the ˆ into one-electron terms, this is the independent-electron aphamiltonian H proximation : ˆ = H
ˆi h
with ˆ2 p ˆ hi = i + V(ri ) 2m
The potential V(ri ) has the lattice periodicity, it takes into account the attractions and repulsions on electron i and is globally attractive. The eigenvalues ˆ are the sums of the eigenenergies of
the various electrons. of H ˆ i . We will Our task will consist in solving the one-electron hamiltonian h introduce an additional simplification by assuming that the lattice is onedimensional, i.e., V
(ri ) = V(xi ). This will allow us to extract the essential characteristics, at the cost of simpler calculations. In the “electron in the box” model that we have used up to now, V(xi ) only expressed
the potential barriers limiting the solid, and thus leveled out the potential of each atom. Here we will introduce the attractive potential on each lattice site. The energy levels will derive from
atomic levels, rather than from the free electron states which were the solutions in the “box” case.
8.2 8.2.1
The Eigenstates for the Chosen Model Recall : the Double Potential Well
First consider an electron on an isolated atom, located at the origin. Its hamiltonian ˆ h is given by 2 ˆ = pˆ + V (x) h 2m
where V is the even Coulombian potential of the nucleus (Fig. 8.1).
Chapter 8. Elements of Bands Theory and Crystal Conductivity
One of its eigenstate is such that : ˆ hϕ(x) = ε0 ϕ(x)
ϕ(x) is an atomic orbital, of s-type for example, which decreases from the origin with a characteristic distance λ. We will assume that the other orbitals do not have to be considered in this
problem, since they are very distant in energy from ε0 . V
Fig. 8.1: Coulombian potential of an isolated ion and probability of presence of an electron on this ion. Let us add another nucleus, that we locate at x = d (Fig. 8.2). The hamiltonian, still for a
single electron now sharing its presence between the two ions, 2 ˆ = pˆ + V (x) + V (x − d) h 2m
In the case of two 1s-orbital initial states, this corresponds to the description of the H+ 2 ion (two protons, one electron). One follows the standard method on the double potential well of the
Quantum Mechanics courses and defines |ϕL and |ϕR , the atomic eigenstates respectively located on the left nucleus (at x = 0 ) and the right nucleus (at x = d) : ⎧ñ ô ⎪ pˆ2 ⎪ ⎪ + V (x) |ϕL = ε0 |ϕL ⎪
⎪ ⎨ 2m (8.7) ⎪ ï 2 ò ⎪ ⎪ pˆ ⎪ ⎪ + V (x − d) |ϕR = ε0 |ϕR ⎩ 2m
The Eigenstates for the Chosen Model
x ε0 + A ε0 − A
Fig. 8.2: Potential due to two ions of the same nature, distant of d and corresponding bound states. Owing to the symmetry of the potential in (8.6) with respect to the point ˆ can be chosen either
symmetrical x = d/2, whatever d the eigenfunctions of h or antisymmetrical with respect to this mid-point. However, if d is very large ˆ has two with respect to λ, the coupling between the two sites
is very weak, h eigenvalues practically equal to ε0 and if at the initial time the electron is on one of the nuclei, for example the left one L, it will need a time overcoming the observation
possibilities to move to the right one R. One can then consider that the atomic states (8.7) are the eigenstates of the problem (although strictly they are not). On the other hand, you learnt in
Quantum Mechanics that if d is not too large with respect to λ, the L and R wells are coupled by the tunnel effect, which is expressed through the coupling matrix element ˆ L , −A = ϕR |H|ϕ
which decreases versus d approximately like exp(−d/λ). ˆ are equal The energy degeneracy is then lifted : the two new eigenvalues of h 1 to ε0 − A, corresponding to the symmetrical wave function |ψS
= √ (|ϕL + 2 1 |ϕR ), and ε0 + A, of antisymmetrical wave function |ψA = √ (|ϕL − |ϕR ) 2 (Fig. 8.2). These results are valid for a weak enough coupling (A |ε0 |). For a stronger coupling (d ∼ λ) one
should account for the overlap of the two atomic functions, expressed by ϕL |ϕR different from zero. Anyway the eigenstates are delocalized between both wells. For H+ 2 they correspond to the bonding
and antibonding states of the chemical bound.
Chapter 8. Elements of Bands Theory and Crystal Conductivity
Electron on an Infinite and Periodic Chain
In the independent electrons model, the hamiltonian describing a single electron in the crystal is given by +∞ 2 ˆ crystal = pˆ + V (x − xn ) h 2m n=−∞
where xn = nd, is the location of the nth nucleus. The potentials sum is periodic and plays the role of V(ri ) in formula (8.3) (Fig. 8.3). V
nd x
Fig. 8.3 : Potential periodic of a crystal. One is looking for the stationary solution, such that ˆ crystal ψ(x) = εψ(x) h
If the nuclei are far enough apart, there is no coupling : although the eigenstates are delocalized, if at the initial time the electron is on a given nucleus, an infinite time would be required for
this electron to move to the neighboring nucleus. The state of energy ε = ε0 is N times degenerate. On the other hand, in the presence of an even weak coupling, the eigenstates are delocalized. We
are going to assume that the coupling is weak and will note : ˆn + ˆ crystal = h V (x − n d) (8.11) h n =n
to show the atomic hamiltonian on the specific site n. To solve the eigenvalue problem, by analogy with the case of the double potential well, we will work in a basis of localized states, which are
not the eigenstates of the considered infinite periodic chain. Now every site plays a similar role, the problem is invariant under the translations which change x into x + nd = x + xn . The general
basis state is written |n, it corresponds to the wave function φn (x) located around xn and is deduced through the translation xn from the function centered at the origin, i.e., φn (x) = φ0 (x − xn
). These states are built through orthogonalization from the atomic functions ϕ(x − xn ) in order to constitute an orthonormal basis : n |n = δnn (this is
The Eigenstates for the Chosen Model
the so-called “Hückel model”). The functions φn (x) depend on the coupling. When the atoms are very distant, φn (x) is very close to ϕ(x − xn ), the atomic wave function. In a weak coupling
situation, where the tunnel effect only takes place between first neighbors, φn (x) is still not much different from the atomic function. One looks for a stationary solution of the form |ψ = cn |n
One needs to find the coefficients cn , expected to all have the same norm, since every site plays the same role : ˆ hcrist
+∞ cn |n = ε cn |n
One proceeds by analogy with the solution of the double-well potential problem : one assumes that the hamiltonian only couples the first neighboring sites. Then the coupling term is given by ˆ crist |
n + 1 = n − 1|h ˆ crist |n = −A n|h
For atomic s-type functions ϕ(x) A is negative. By multiplying (8.13) by the bra n|, one obtains ˆ cryst |n − 1 + cn n|h ˆ cryst |n + cn+1 n|h ˆ cryst |n + 1 = cn ε cn−1 n|h Now ˆ cryst |n = n|h ˆ n
|n + n| n|h
V (x − n d)|n
n =n
= ε0 + α ε0 The term α, which is the integral of the product of |φn (x)|2 by the sum of potentials of sites distinct from xn , is very small in weak coupling conditions [it varies in exp(−2d/λ),
whereas A is in exp(−d/λ)]. It will be neglected with respect to ε0 . The coefficients cn , cn−1 and cn+1 are then related through −cn−1 A + cn ε0 − cn+1 A = cn ε
One obtains analogous equations by multiplying (8.13) by the other bras, whence finally the set of coupled equations : ⎧ ⎪ ⎨. . . . . . (8.18) −cn−1 A + cn ε0 − cn+1 A = cn ε ⎪ ⎩ − cn A + cn+1 ε0 −
cn+2 A = cn+1 ε
Chapter 8. Elements of Bands Theory and Crystal Conductivity
Energy Bands and Bloch Functions
The system (8.18) assumes a nonzero solution if one chooses cn as a phase : cn = exp(ik · nd)
where k, homogeneous to a reciprocal length, is a wave vector and nd the coordinate of the considered site. By substituting (8.19) into (8.17) one obtains the dispersion relation, or dispersion law,
linking ε and k : ε(k) = ε0 − 2A cos kd
ε(k) is an even function, periodic in k, which describes the totality of its values, i.e. the interval [ε0 − 2A, ε0 + 2A], when k varies from −π/d to π/d (Fig. 8.4). The domain of accessible energies
ε constitutes an allowed band ; a value of ε chosen outside of the interval [ε0 − 2A, ε0 + 2A] does not correspond to a kvalue, it belongs to a forbidden domain. The range −π/d ≤ k < π/d is called
“the first Brillouin zone,” a concept introduced in 1930 by Léon Brillouin, a French physicist ; it is sufficient to study the physics of the problem. Note that ε(k)
ε0 + 2 A ε0
ε0 − 2 A
Fig. 8.4 : Dispersion relation in the crystal. in the neighborhood of k = 0, ε(k) ε0 − 2A + Ad2 k 2 + . . . which looks like the free electron dispersion relation, if an effective mass m∗ is defined
such that 2 /2m∗ = Ad2 . This mass also expresses the smaller or larger difficulty for an electron to move under the application of an external electric field, due to the interactions inside the solid
it is subjected to. The wave function associated with ε(k) is given by ψk (x) =
+∞ n=−∞
exp iknd · φ0 (x − nd)
The Eigenstates for the Chosen Model
This expression can be transformed into ψk (x) = exp ikx ·
exp −ik(x − nd) · φ0 (x − nd)
which is the product of a plane wave by a periodic function uk (x), of period d : indeed, for n integer, uk (x + n d) =
exp −ik(x + n d − nd) · φ0 (x + n d − nd)
exp −ik(x − n d) · φ0 (x − n d) = uk (x)
n =−∞
This type of solution of the Schroedinger stationary equation ψk (x) = eikx · uk (x)
can be generalized, for a three-dimensional periodic potential V (r), into
ψk (r) = eik·r · uk
This is the so-called Bloch function, from the name of Felix Bloch, who, in his doctoral thesis prepared in Leipzig under the supervision of Werner Heisenberg and defended in July 1928, was the first
to obtain the expression of a crystal wave function, using a method analogous to the one followed here. The expression results from the particular form of the differential equation to be solved
(second derivative and periodic potential) and is justified using the Floquet theorem. Expression (8.21) of ψk (x), established in the case of weak coupling between neighboring atomic sites, can be
interpreted as a Linear Combination of Atomic Orbitals (LCAO), since the φ0 ’s then do not differ much from the atomic solutions ϕ. This model is also called “tight binding” as it privileges the
atomic states. When the atomic states of two neighboring sites are taken as orthogonal, this is the so-called “Hückel model.” The ψk (x) functions are delocalized all over the crystal. The extreme
energy states, at the edges of the energy band, correspond to particular Bloch functions : the minimum energy state ε0 − 2A is associated with ψ0 (x) = +∞ n=−∞ φ0 (x−nd), the addition in phase of
each localized wave function : this is a situation comparable to the bonding symmetricalstate of the H+ 2 ion ; the n state of energy ε0 + 2A is associated with ψπ/d (x) = +∞ n=−∞ (−1) φ0 (x− nd),
alternate addition of each localized wave function and is analogous to the antibonding state of the H+ 2 ion.
Chapter 8. Elements of Bands Theory and Crystal Conductivity
One can notice that the band width 4A obtained for an infinite chain is only twice the value of the energy splitting for a pair of coupled sites. The reason is that, in the present model, a given site
only interacts with its first neighbors and not with the more distant sites Note : the dispersion relation (8.20) and the wave function (8.24), that we obtained under very specific hypotheses, can be
generalized in the case of three-dimensional periodic solids, as can be found in basic Solid State Physics courses, like the one taught at Ecole Polytechnique.
8.3 8.3.1
The Electron States in a Crystal Wave Packet of Bloch Waves
Bloch waves of the (8.21) or (8.24) type are delocalized and thus cannot describe the amplitude of probability of presence for an electron, as these waves cannot be normalized : although a localized
function φ0 (x) can be normed, ψk (x) spreads to infinity. To describe an electron behavior, one has to build a packet of Bloch waves
Ψ(x, t) = −∞
g(k)eikx uk (x)e−iε(k)t/ dk
centered around a wave vector k0 , of extent ∆k small with respect to π/d. From the Heisenberg uncertainty principle, the extent in x of the packet will be such that ∆x · ∆k ≥ 1, i.e., ∆x d : the
Bloch wave packet is spreading over a large number of elementary cells.
Resistance ; Mean Free Path
It can be shown that an electron described by a wave packet (8.26), in motion in a perfect infinite crystal, keeps its velocity value for ever and thus is not diffused by the periodic lattice ; in fact
the Bloch wave solution already includes all these coherent diffusions. Then there is absolutely no resistance to the electron motion or, in other words, the conductivity is infinite. modifies the
electron energy levels ; the proA static applied electric field E blem is complex, one should also account for the effect of the lattice ions and interact with the electron. which are also subjected to
the action of E Experimentally one notices in a macroscopic crystal that the less its defects,
The Electron States in a Crystal
like dislocations or impurities, the higher its conductivity. An infinite perfect three-dimensional crystal would have an infinite conductivity, a result predicted by Felix Bloch as early as 1928.
(These simple considerations do not apply to the recently elaborated, low-dimensional, systems.) The real finite conductivity of crystals arises from the discrepancies to the perfect infinite lattice :
thermal motion of the ions, defects, existence of the surface. In fact in copper at low temperature one deduces from the conductivity value σ = ne2 τ /m the mean time τ between two collisions and,
introducing the Fermi velocity vF (see §7.1.1), one obtains a mean free path = vF τ of the order of 40 nm, that is, about 100 interatomic distances. This does show that the lattice ions are not
responsible for the diffusion of conduction electrons.
Finite Chain, Density of States, Effective Mass
V (x) 0
(N + 1)d x
Fig. 8.5 : Potential for a N -ions chain. A real crystal is spreading over a macroscopic distance L = N d (Fig. 8.5) (the distance L is measured with an uncertainty much larger than the interatomic
distance d ; one can thus always assume that L contains an integer number N of interatomic distances). Obviously, if only the coupling to the nearest neighbors is considered, for most sites there is
no change with respect to the case of the infinite crystal. Thus the eigenstates of the hamiltonian for the finite crystal : 2 ˆ crystal = pˆ + V (x − xn ) h 2m n=1 N
should not differ much from those of the hamiltonian (8.9) for the infinite crystal : we are going to test the Bloch function obtained for the infinite crystal as a solution for the finite case. On the
other hand, for a macroscopic dimension L the exact limit conditions do not matter much. It is possible, as in the case of the free electron model, to close the crystal on itself by superposing the
sites 0 and N d and to express
Chapter 8. Elements of Bands Theory and Crystal Conductivity
the periodic limit conditions of Born-Von Kármán : ψ(x + L) = ψ(x) ψk (x + L) = e
(8.28) uk (x + L) = e
ψk (x)
as L contains an integer number of atoms, i.e. an integer number of atomic periods d, and uk (x) is periodic. One thus finds the same quantization condition as for a one-dimensional free particle : k
= p2π/L, where p is an integer, associated with an one-dimension orbital density of states L/2π ; the corresponding three-dimensional density of states is Ω/(2π)3 , where Ω is the crystal volume.
These densities are doubled when one accounts for the electron spin. The solutions of (8.27) are then ⎧ kd ⎪ ⎨ε = ε0 − 2A cos N 1 ψk (x) = √N n=1 exp iknd · φ0 (x − nd) ⎪ ⎩ with k = p · 2π p integer
When comparing to the results (8.20) and (8.21) for the infinite crystal, one ε is now quanfirst notices that ψk (x) as given by (8.30) is normed. Moreover π π tized (Fig. 8.6) : to describe the first
Brillouin zone − ≤ k < all the d d N N different states are obtained for p ranging between − and + − 1 (or 2 2 N −1 N −1 between − and − 1 if N is odd, but because of the order of 2 2 magnitude of N ,
ratio of the macroscopic distance L to the atomic one d, this does not matter much !). On this interval there are thus N possible states, i.e., as many as the number of sites. Half of the states
correspond to waves propagating toward increasing x’s (k > 0), the other ones to waves propagating toward decreasing x’s. Note that if the states are uniformly spread in k, their distribution is not
uniform in energy. One defines the energy density of states D(ε) such that the number of accessible states dn, of energies between ε and ε + dε, is equal to dn = D(ε)dε
One will note that, at one dimension, to a given value of ε correspond two opposite values of kx , associated with distinct quantum states. By observing Fig. 8.6, it can be seen that D(ε) takes very
large values at the vicinity of the band edges (ε close to ε0 ± 2A) ; in fact at one dimension D(ε) has a specific behavior [D(ε) diverges at the band edges, but the integral ε D(ε )dε (8.32) εmin
The Electron States in a Crystal
ε(k) ε +2A
ε0 − 2 A −π/d
2π 2π = Nd L
Fig. 8.6 : Allowed states in the energy band, for a N electrons system.
remains finite]. When this model is extended to a three-dimensional system, one finds again a “quasicontinuous” set of very close accessible states, uniformly located in k (but not in energy), in
number equal to that of the initial orbitals (it was already the case for two atoms for which we obtained the bonding and antibonding states). V x 2s
2s band
1s band
Fig. 8.7: In a N atom crystal, to each atomic state corresponds a band that can accommodate 2N times more electrons than the initial state. In a more realistic case, the considered atom has several
energy levels (1s, 2s, 2p, ...). Each energy level gives rise to a band, wider if the atomic state lies higher in energy (Fig. 8.7). The widths of the bands originating from the deep atomic states
are extremely small, one can consider that these atomic levels do not couple. On the other hand, the width of the band containing the electrons involved in the chemical bond is always of the order of
several electron-volts.
Chapter 8. Elements of Bands Theory and Crystal Conductivity
Again each band contains N times more orbital states than the initial atomic state, that is, 2N times more when the electron spin is taken into account. The density of states in k is uniform. One
defines an energy density of states D(ε) such that Å dn = D(ε)dε = 2
L 2π
d3k at three dimensions
The area under the curve representing the density of states of each band is equal to 2N times the number of electrons acceptable in the initial atomic orbital : 2N for s states, 6N for p states, etc.
(Fig. 8.8). The allowed bands are generally separated by forbidden bands, except when there is an overlap between the higher energy bands, which are broadened through the coupling (3s and 3p on the
Fig. 8.8). 1s D(ε)
2p 2s 3s, 3p
ε Fig. 8.8 : Density of states of a crystal. It can be shown that in the vicinity of a band extremum, the tangent of the D(ε) curve is vertical. In particular, at the bottom of a band the density of
state is rather similar to the one of a free electron in a box [see (6.64)], for √ Ω which we obtained D(ε) = 2 3 (2m)3/2 ε, m being the free electron mass. 2π Here again one will define the effective
mass in a given band, this time from the curvature of D(ε) [this mass is the same as the one in § 8.2.3, introduced from the dispersion relation (8.20)]. Indeed, setting D(ε)
√ Ω (2m∗ )3/2 ε − ε0 2π 2 3
in the vicinity of ε0 at the bottom of the band is equivalent to taking for dispersion relation ε − ε0
2 k 2 2m∗
2 d2 ε = m∗ dk 2
Statistical Physics of Solids
The effective mass is linked to the band width : from the dispersion relation (8.20) for instance, one obtains around k = 0 2Ad2 1 = ∗ m 2
One can understand that the electrons of the deep bands are “very heavy,” and thus not really mobile inside the lattice, that is, they are practically localized. Note : in non crystalline (or
amorphous) solids, like amorphous silicon, the coupling between the atomic states also produces energy bands.
Statistical Physics of Solids
Once the one-electron quantum states determined, one has to locate the N electrons in these states, in the framework of the independent electrons model.
Filling of the Levels
At zero temperature, the Pauli principle dictates one to fill the levels from the lowest energy one, putting two electrons, with different spin states, on each orbital state until the N electrons are
exhausted. We now take two examples, that of lithium (Z = 3), and that of carbon (Z = 6) crystallized into the diamond form. In lithium, each atom brings 3 electrons, for the whole solid there are 3N
electrons to be located. The 1s band is filled by 2N electrons. The complementary N electrons are filling the 2s band up to its half, until the energy ε = εF , the Fermi energy, which lies 4.7 eV above
the bottom of this band. Above εF , any state is empty at T = 0 K. When a small amount of energy is brought to the solid, for example using a voltage source of a few volts, the electrons in the 2s
band in the vicinity of εF can occupy states immediately above εF , whereas 1s electrons cannot react, having no empty states in their neighborhood. This energy modification of electrons in the 2s
band produces a global macroscopic reaction, expressed by an electric current, to the electric field : lithium is a metal or a conductor, the 2s band is its conduction band. In diamond, the initial
electronic states are not the 2s and 2p states of the atomic carbon but their sp3 tetrahedral hybrids (as in CH4 ) which separate into bonding hybrids with 4 electrons per atom (including spin) and
Chapter 8. Elements of Bands Theory and Crystal Conductivity
ding hybrids also with 4 electrons. In the crystal, the bonding states constitute a 4N -electron band, the same is true for the antibonding states. The 4N electrons of the N atoms exactly fill the
bonding band, which contains the electrons participating in the chemical bond, whence the name of valence band. The band immediately above, of antibonding type, is the conduction band. It is
separated from the valence band by the band gap energy, noted Eg : Eg = 5.4 eV for diamond and at T = 0 K the conduction band is empty, the valence band having exactly accommodated all the available
electrons. Under the effect of an applied voltage of a few volts, the valence electrons cannot be excited through the forbidden band, there is no macroscopic reaction, the diamond is an insulator.
Like carbon, silicon and germanium also belong to column IV of the periodic table, they have the same crystalline structure, they are also insulators. However these latter atoms are heavier, their
crystal unit cells are larger and the characteristic energies smaller : thus Eg = 1.1 eV in silicon and 0.75 eV in germanium. Remember that a full band has no conduction, the application of an
electric field producing no change of the macroscopic occupation of the levels. This is equally valid for the 1s band of the lithium metal and at zero temperature for the valence band of insulators
like Si or Ge.
Variation of Metal Resistance versus Temperature
We now are able to justify the methods and results on metals of the previous chapter, by reference to the case of lithium discussed above : their bands consist in one or several full bands, inactive
under the application of an electric field, and a conduction band partly filled at zero temperature. The density of states at the bottom of this latter band is analogous to that of an electron in a
box, under the condition that the free electron mass is replaced by the effective mass at the bottom of the conduction band. Indeed we already anticipated this result when we analyzed the specific
heats of copper and silver (7.2.2). The electrical conductivity σ is related to the linear effect of an applied field E on the conduction electrons. An elementary argument allows one to establish its
expression σ = ne2 τ /m∗ , where n is the electron concentration in the conduction band, e the electron charge, τ the mean time between two collisions and m∗ the effective mass. A more rigorous
analysis, in the framework of the transport theory, justifies this formula which is far from being obvious : the only excitable electrons are in the vicinity of the Fermi level, their velocity in the
absence of electric field is vF . The phenomenological τ term takes into account the various mechanisms
Statistical Physics of Solids
which limit the conductivity : in fact the probabilities of these different independent mechanisms are additive, so that 1 1 1 = + + ··· τ τvibr τimp
1/τvibr corresponds to the probability of diffusion by the lattice vibrations, which increases almost linearly with the temperature T , 1/τimp to the diffusion by neutral or ionized impurities, a
mechanism present even at zero temperature. Then 1 m∗ =ρ= 2 σ ne
1 τvibr
1 τimp
ã + ···
ρ = ρvibr (T ) + ρimp + · · · Thus the resistivity terms arising from the various mechanisms add up, this is the empirical Matthiessen law, stated in the middle of the 19th century. Consequently, the
resistance of a metal increases with temperature according to a law practically linear in T , the dominating mechanism being the electron’s diffusion on the lattice vibrations. This justifies the use
of the platinum resistance thermometer as a secondary temperature standard between −183◦ C and +630◦ C.
Variation of Insulator’s Conductivity Versus Temperature ; Semiconductors
a) Concentrations of Mobile Carriers in Equilibrium at Temperature T We have just seen that at zero temperature an electric field has no effect on an insulator, since its valence band is totally full
and its conduction band empty. However there are processes to excite such systems. The two main ones are : – an optical excitation : a photon of energy hν > Eg will promote an electron from the
valence band into the conduction band. This is the absorption phenomenon, partly responsible for the color of these solids. It creates a nonequilibrium thermal situation. – a thermal excitation : the
thermal motion is also at the origin of transitions from the valence band to the conduction band. We are now going to detail this latter phenomenon, which produces a thermal equilibrium at
temperature T .
Chapter 8. Elements of Bands Theory and Crystal Conductivity
When an electron is excited from the valence band, this band then makes a transition from the state (full band) to the state (full band minus one electron), which allows one to define a
quasi-particle, the “hole,” which is the lack of an electron. We already noted in chapter 7.1.1 the symmetry, due to the shape of the Fermi distribution, between the occupation factor f (ε) of energy
states above µ, and the probability 1 − f (ε) of the states below µ for being empty. The following calculation will show the similarity between the electron or hole role. The electrons inside the
insulator follow the Fermi-Dirac statistics and at zero temperature the conduction band is empty and the valence band full. This means that εF lies between εv , the top of the valence band and εc ,
the minimum of the conduction band. At weak temperature T (kB T Eg ) one expects that the chemical potential µ still lies inside the energy band gap. We are going to estimate n(T ), the number of
electrons present at T in the conduction band and p(T ), the number of electrons missing (or holes present) in the valence band, and to deduce the energy location µ(T ) of the chemical potential.
These very few carriers will be able to react to an applied electric : indeed in the conduction band the electrons are very scarce at usual field E temperatures and they are able to find vacant states
in their close neighbo ; in the same way the presence of several holes in rhood when excited by E the valence band permits the valence electrons to react to E. The volume concentration n(T ) of
electrons present in the conduction band at temperature T in the solid of volume Ω is such that εcmax εcmax 1 dε (8.39) D(ε)f (ε)dε = D(ε) Ωn(T ) = exp β(ε − µ) + 1 εc εc where εc is the energy of
the conduction band minimum, εcmax the energy of its maximum. One assumes, and it will be verified at the end of the calculation, that µ is distant of this conduction band of several kB T , which
leads to β(εc − µ) 1, so that one can approximate f (ε) by a Boltzmann factor. Then Ωn(T )
D(ε)e−β(ε−µ) dε
Because of the very fast exponential variation, the integrand has a significant value only if ε − εc does not exceeds a few kB T . Consequently, one can extend the upper limit of the integral to +∞,
as the added contributions are absolutely negligible. Moreover, only the expression of D(ε) at the bottom of the conduction band really matters : we saw in (8.34) that an effective mass, here mc , can
be introduced, which expresses the band curvature in the vicinity of
Statistical Physics of Solids
ε = εc . Whence
√ 1 (2mc )3/2 ε − εc e−β(ε−µ) dε 2 3 2π εc ∞ √ −x 1 −β(εc −µ) 3/2 e (2mc kT ) xe dx = 2π 2 3 0 ã ã Å Å 2πmc kB T 3/2 εc − µ =2 exp − h2 kB T
n(T )
In the same way one calculates the hole concentration p(T ) in the valence band, the absence of electrons at temperature T being proportional to the factor 1 − f (ε) : εv Ωp(T ) = D(ε)[1 − f (ε)]dε
(8.43) εvmin
One assumes that µ is at a distance of several kB T from εv , the maximum of the valence band, of minimum εvmin . The neighborhood of εv is the only region that will intervene in the integral
calculation because of the fast decrease of 1 − f (ε) e−β(µ−ε) . In this region D(ε) is parabolic and is approximated by D(ε)
√ Ω (2mv )3/2 εv − ε 2 3 2π
where mv > 0, the “hole effective mass,” expresses the curvature of D(ε) in the neighborhood of the valence band maximum. One gets : Å
2πmv kB T p(T ) = 2 h2
ã Å µ − εv exp − kB T
In the calculations between formulae (8.40) and (8.45) we did not make any assumption about the way n(T ) and p(T ) are produced. These expressions are general, as is the product of (8.42) by (8.45)
: Å n(T ) · p(T ) = 4
2πkB T h2
ã Å Eg (mc mv )3/2 exp − kB T
with Eg = εc − εv . b) Pure (or Intrinsic) Semiconductor In the case of a thermal excitation, for each electron excited into the conduction band a hole is left in the valence band. One deduces from
(8.46) : Å n(T ) = p(T ) = ni (T ) = 2
2πkB T h2
ã Å Eg (mc mv )3/4 exp − 2kB T
Chapter 8. Elements of Bands Theory and Crystal Conductivity
For this intrinsic mechanism, the variation of ni (T ) versus temperature is dominated by the exp(−Eg /2kB T ) term. In fact, here the thermal mechanism creates two charge carriers with a threshold
energy Eg . The very fast variation of the exponential is responsible for the distinction between insulators and semiconductors. Indeed at room temperature T = 300 1 eV, let us compare the diamond K,
which corresponds to approximately 40 situation where Eg = 5.2 eV [exp(−Eg /2kB T ) = exp(−104) = 6.8 × 10−46 ] and that of silicon, where Eg = 1.1 eV [exp(−22) = 2.8 × 10−10 ]. In 1 cm3 of diamond,
there is no electron in the conduction band, whereas ni (T ) is equal to 1.6 × 1016 m−3 at 300 K in silicon. A semiconductor is an insulator with a “not too wide” band gap, which allows the
excitation of some electrons into the conduction band at room temperature. The most commonly used semiconductors, apart from silicon, are germanium and gallium arsenide GaAs (Eg = 1.52 eV). The
chemical potential of an intrinsic semiconductor is determined by substituting expression (8.47) of ni (T ) into (8.41). One obtains : µ=
3 mv εc + εv + kB T ln 2 4 mc
The Fermi level εF , which is the chemical potential at T = 0 K, lies in the middle of the band gap and µ does not move much apart from this location when T increases (kB T Eg ). Consequently, the
hypotheses that µ lies very far from the band edges as compared to kB T , made when calculating n(T ) and p(T ), are valid. Under the effect of an external electric field, both electrons and holes
participate in the electrical current : indeed the current includes contributions from both the conduction and the valence bands. As will be studied in Solid State Physics courses, the holes behave
like positive charges and the total electrical conductivity in the semiconductor is the sum of the valence band and conduction band currents : σ = σn + σp ã Å τn τp = ne2 + pe2 mc mv
Here τn (respectively τp ) is the electron (respectively hole) collision time. The temperature variation of σ in an intrinsic semiconductor is dominated by the variation of exp(−Eg /2kT ), the other
factors varying like powers of T . Consequently, σ increases very fast with T , thus its reciprocal, the resistivity and the measured resistance of a semiconductor decrease with temperature because
the number of carriers increases.
Statistical Physics of Solids
c) Doped Semiconductor The density of intrinsic carriers at room temperature is very weak and practically impossible to attain : for instance in silicon about 1 electron out of 1012 is in the
conduction band due to thermal excitation. To achieve the intrinsic concentration the studied sample should contain neither impurities nor defects ! It is more realistic to control the conductivity
by selected impurities, the dopants. Thus phosphorus atoms can be introduced into a silicon crystal : phosphorus is an element of column V of the periodic table, which substitutes for silicon atoms
in some lattice sites. Each phosphorus atom brings one more electron than required by the tetrahedral bonds of Si. At low temperature this electron remains localized near the phosphorus nucleus, on a
level located 44 meV below the bottom of the conduction band. When the temperature increases, the electron is ionized into the conduction band. Phosphorus is a donor. The concentration of conduction
electrons will be determined by the concentration ND in P atoms, of typical value ND ∼ 1022 m−3 , chosen much larger than the intrinsic and the residual impurity concentrations, but much smaller than
the Si atoms concentration. In such a material, because of (8.46), the hole concentration is very small with respect to ni , the current is carried by the electrons, the material is called of n-type.
In a symmetrical way, the substitution in the silicon lattice of a Si atom by a boron atom creates an acceptor state : boron, an element of column III of the periodic table, only has three peripheral
electrons and thus must capture one extra electron in the valence band to satisfy the four tetrahedral bonds of the crystal. This is equivalent to creating a hole, which, at low temperature, is
localized 46 meV above the silicon valence band and at higher temperature is delocalized inside the valence band. In boron-doped silicon, the conductivity arises from the holes, the semiconductor is
of p-type. In Solid State Physics courses, the statistics of doped semiconductors is studied in more detail : one finds in particular that the chemical potential µ moves versus temperature inside the
band gap (in a n-type material, at low temperature it is close εc , in the vicinity of the donors levels and returns to the middle of the band gap, like in an intrinsic material, at very high
temperature). The semiconductors constituting the electronic compounds are generally doped.
Chapter 8. Elements of Bands Theory and Crystal Conductivity
Examples of Semiconductor Devices
Here we only give a flavor of the variety of configurations and uses. In specialized Solid State Physics courses, one will describe in more details the semiconductors properties. The aim here is to
show that, just using the basic elements of the present chapter, one can understand the principle of operation of three widespread devices.
The Photocopier : Photoconductivity Properties
Illuminating a semiconductor by light of energy larger than Eg promotes an electron into the conduction band and leaves a hole in the valence band. The conductivity increase is ∆σ, proportional to
the light flux. In the xerography process, the photocopier drum is made of selenium which, under the illumination effect, shifts from a very insulator state to a conducting state. This property is
utilized to fix the pigments (the toner black powder) which are next transferred to the paper and then cooked in an oven to be stabilized on the paper sheet.
The Solar Cell : an Illuminated p − n Junction
When, in the same solid, a p region is placed next to a n region, the electrons of the n region diffuse toward the holes of the p region : some of these mobile charges neutralize and a so-called space
charge region becomes emptied of mobile carriers. The dopants still remain in the region, their charged nuclei create an electric field which prevents any additional diffusion. This effect can also be
interpreted from the requirement of equal chemical potentials in the n and p region at equilibrium : in the neighborhood (a few µm) of the i . If region of doping change, there prevails an intense
internal electric field E the sun is illuminating this region, the excited electron and hole, of opposite i . An external use circuit will take advantage of charges, are separated by E the current
produced by this light excitation. We will reconsider these devices, called “solar cells,” at the end of next chapter when we will deal with thermal radiation.
Examples of Semiconductor Devices
Compact Disk Readers : Semiconductor Quantum Wells
It was seen in § 8.3.3 that starting from the energy levels of the constituting atoms one gets the energy bands of a solid. Their location depends of the initial atomic levels, thus of the chemical
nature of the material. For almost thirty years it has been possible, in particular using Molecular Beam Epitaxy (MBE), to sequentially crystallize different semiconductors, which have the same
crystal lattice but different chemical composition : a common example is GaAs and Ga1−x Alx As. In such semiconductor sandwiches the bottom of the conduction band (or the top of the valence band )
does not lie at the same energy in GaAs and in Ga1−x Alx As. One thus tailors quantum wells, to fashion. Their width usually ranges from 5 to 20 nm, their energy depth is of a few hundreds of meV,
and the energies of their localized levels is determined by their “geometry,” as taught in Quantum Mechanics courses. These quantum wells are much used in optoelectronics, for example as diodes or
lasers emitters, in the telecommunication networks by optical fibers, as readers of compact disks, of code-bars in supermarkets, etc.
Summary of Chapter 8 One has first to study the quantum states of a periodic solid. By analogy with the problem of the double potential well of the Quantum Mechanics course, one considers an electron
on an infinite periodic chain, in a model of independent electrons, of the Linear Combination of Atomic Orbitals type. The hamiltonian eigenstates are Bloch states, the eigenenergies are regrouped
into allowed bands separated by forbidden bands (or band gaps). The important parameter is the coupling term between neighboring sites. In a perfect crystal, the velocity of an electron would be
maintained forever. The finite macroscopic dimension of a crystal allows one to define a density of states and to show that the number of orbital states is the product of the number of cells in the
crystal by the number of orbital states corresponding to each initial atomic state. Filling these states together with considering the spin properties, allows one to distinguish between insulators
and metals. We calculated the number of mobile carriers of an insulator versus temperature. The resistance of an insulator exponentially decreases when T increases, the resistance of a metal
increases almost linearly with T . A semiconductor is an insulator with a “rather small” band gap energy.
Chapter 9
Bosons : Helium 4, Photons, Thermal Radiation We learnt in chapter 5 that bosons are particles of integer or null spin, with a symmetrical wave function, i.e. which is unmodified in the exchange of
two particles. Bosons can be found in arbitrary number on a one-particle state of given energy εk . We are going now to discuss in more detail the properties of systems following the Bose-Einstein
statistics. We will begin with the case of material particles and take the example of 42 He, the helium isotope made up of two protons and two neutrons. The Bose-Einstein condensation occurring in
this case is related to the helium property of superfluidity. Then we will study the statistics in equilibrium of photons, relativistic particles of zero mass. The photon statistics has an essential
importance for our life on the earth through the greenhouse effect. Moreover, it played a major conceptual part : indeed it is in order to interpret the experimental thermal radiation behavior that
Max Planck (1858-1947) produced the quanta hypothesis (1901). Finally we will take a macroscopic point of view and stress the importance of thermal radiation phenomena in our daily experience.
Material Particles
The ability of bosons of given non-zero mass to gather on the same level under a low enough temperature yields to a phase transition, the Bose-Einstein condensation.
Chapter 9. Bosons : Helium 4, Photons, Thermal Radiation
Thermodynamics of the Boson Gas
Let us recall the average occupation number nk [Eq.(6.26)] by bosons of 1 : chemical potential µ, of the energy level εk at temperature T = kB β 1
nk =
eβ(εk −µ)
As nk is positive or zero, the denominator exponent should be positive for any εk , which implies that the chemical potential µ must be smaller than any εk , including ε1 , the energy of the
fundamental state. At fixed T and µ, nk decreases when εk increases. The µ value is fixed by the condition
nk = N
where N is the total particles number of the considered physical system. From nk one can calculate all the thermodynamical parameters and in particular the average energy : U=
εk nk
the grand potential A and its partial derivatives S and P A = −kB T ln(1 + nk )
dA = −P dΩ − SdT − N dµ
In the large volume limit, (9.1) is replaced by the Bose-Einstein distribution, a continuous function given by f (ε) =
1 eβ(ε−µ) − 1
The physical parameters are calculated in this case using the density of states D(ε), then expressing the constraint replacing (9.2) on the particles number and verifying that µ < ε1 . Here (§ 9.1)
one is concerned by free bosons, with no p2 (apart from mutual interaction, therefore their energy is reduced to ε = 2m the “box” potential producing the quantization, leading to the expression of D
(ε)), so that ε1 = 0 and µ < 0. Then for particles with a spin s, in the three-dimensional space, one gets,
Material Particles
Å ã √ 2s + 1 2m 3/2 taking D(ε) = CΩ ε, where C = [cf. Eq. (6.64)], 4π 2 2 √ ∞ ∞ εdε N= f (ε)D(ε)dε = CΩ β(ε−µ) e −1 0 0 ∞ ∞ 3/2 ε dε U= εf (ε)D(ε)dε = CΩ β(ε−µ) − 1 e 0 0 ∞ A = −kB T ln[1 + f (ε)]D
(ε)dε 0 ∞ √ ε ln[1 − e−β(ε−µ) ]dε A = CΩkB T 2 = − CΩ 3
0 ∞
ε3/2 dε −1
(9.7) (9.8)
(9.9) (9.10)
Integrating (9.9) by parts, one finds (9.10) which is equal to (9.8) to the factor 2 − , so that 3 2 A = − U = −P Ω (9.11) 3 This is the state equation of the bosons gas, that is, 2U (9.12) 3Ω which
takes the same form as for the fermions (7.13), although the physical properties are totally different. P =
Bose-Einstein Condensation
The above expressions are only valid when the condition µ < 0 is fulfilled (see above). Now when T decreases at constant density N/Ω, since β increases the differences (ε − µ) must decrease in order
that N/Ω keeps its value (9.7). This means that µ gets closer to zero. Now consider the limit situation µ = 0. Relation (9.7) then becomes ∞ √ εdε N = CΩ βε − 1 e 0 √ Å ã CΩ ∞ xdx CΩ 1 N = 3/2 = 3/2
I x−1 e 2 0 βB βB
(9.13) (9.14)
i.e., using the mathematical tables of the present book, N=
CΩ 3/2
· 2, 315
Chapter 9. Bosons : Helium 4, Photons, Thermal Radiation
Relation (9.15) defines a specific temperature TB = 1/kB βB which is only related to the density n = N/Ω : ã2/3 Å N 1 1 kB CΩ 2, 315 n2/3 6, 632 = 2mkB (2s + 1)2/3
TB =
In (9.17) C was replaced by its expression C =
2s + 1 4π 2
spin of the 42 He particles is zero.
(9.16) (9.17) Å
2m 2
ã3/2 , in which the
The temperature TB , or Bose temperature, would correspond in the above calculation to µ = 0, whereas the situation of § 9.1.1 is related to µ < 0. Since the integrals in (9.7) and (9.13) are both
equal to N/CΩ, to maintain this value constant it is necessary that β (corresponding to µ < 0) be smaller than βB (associated to µ = 0). Consequently, (9.7) is only valid for T > TB . When lowering
the temperature to TB , µ tends to zero, thus getting closer to the fundamental state of energy ε1 = 0. For T < TB the analysis has to be reconsidered : the population of level ε1 has to be studied
separately, as it would diverge if one were using expression (9.1) without caution ! Now the population N1 of ε1 is at most equal to N . Expression (9.1) remains valid but µ cannot be strictly zero.
Assume that N1 is a macroscopic number of the order of N : N1 =
1 1 = −βµ e −1 eβ(ε1 −µ) − 1
The corresponding value of µ is given by : 1 N1 kB T µ− N1
e−βµ = 1 +
µ is very close to zero because it is the ratio of a microscopic energy to a macroscopic number of particles. The first excited level ε2 has the energy 2 ∝ Ω−2/3 , whereas the chemical potential is in
Ω−1 , so that µ is very 2mL2 small as compared to ε2 . The N − N1 particles which are not located in ε1 are distributed among the excited states, for which one can take µ = 0 without introducing a
large error,
Material Particles
and adapt the condition (9.7) on the particle number ∞ √ εdε N − N1 = CΩ β − 1 e ε 2∞ √ εdε CΩ βε − 1 e ε1 =0
(9.20) (9.21)
Indeed shifting the limit of the integral from ε2 to ε1 is adding a relative contribution to (9.20) of the order of (ε2 /kB T )1/2 . Since the volume Ω is macroscopic, this quantity is very small, ε2
being much less than kB T . One recognizes in (9.21) the integral (9.13) under the condition of replacing βB by β. Whence : N − N1 T 3/2 ñ Å ã ô T 3/2 N1 = N 1 − TB
3/2 TB
(9.22) (9.23)
Thus, when lowering the temperature, different regimes take place : – as long as T > TB , the chemical potential is negative and the particles are distributed among all the microscopic states of the
system according to the Bose-Einstein distribution (9.6) ; – when T = TB , µ almost vanishes (it is very small but negative) ; – when T < TB , the particles number N1 on the fundamental level becomes
macroscopic, this is the “Bose condensation.” One thus realizes a macroscopic quantum state. The distribution of particles on the excited levels corresponds to µ 0. When T continues to decrease, N1
increases and tends to N when T tends to zero ; µ is practically zero. Going through TB corresponds to a phase transition : for T < TB the Bose gas is degenerate, which means that a macroscopic
particles number is located in the same quantum state. This corresponds for 42 He to the superfluid state. The phenomenon of helium superfluidity is very complex and the interactions between atoms,
neglected up to now, play a major part in this phase. The above calculation predicts a transition to the superfluid state at 3.2 K in standard conditions (4 He specific mass =140 kg/m3 under the
atmospheric pressure) whereas the transition occurs at Tλ = 2.17 K. This is a second-order phase transition (the free energy is then finite, continuous and derivable ; the specific heat is
discontinuous). The superconductivity of some solids also corresponds to a Bose condensation (2003 Nobel prize of A. Abrikosov and V. Ginzburg). It was possible to observe the atoms’ Bose
condensation by trapping and cooling atomic beams at temperatures of the order of a mK in “optical molasses” (1997 Nobel prices of S. Chu, C. Cohen-Tannoudji and W.D. Phillips).
Chapter 9. Bosons : Helium 4, Photons, Thermal Radiation
Bose-Einstein Distribution of Photons
Our daily experience teaches us that a heated body radiates, as is the case for a radiator heating a room or an incandescent lamp illuminating us through its visible radiation. Understanding these
mechanisms, a challenge at the end of the 19th century, was at the origin of M. Planck’s quanta hypothesis (1901). For more clarity we will not follow the historical approach but rather use the tools
provided by this course. Once the results obtained, we will return to the problems Planck was facing.
Description of the Thermal Radiation ; the Photons
Consider a cavity of volume Ω heated at temperature T (for example an oven) and previously evacuated. The material walls of this cavity are made of atoms, the electrons of which are promoted into
excited states by the energy from the heat source. During their deexcitation, these electrons radiate energy, under the form of an electromagnetic field which propagates inside the cavity, is absorbed
by other electrons associated to other atoms, and so forth, and this process leads to a thermal equilibrium between the walls and the radiation enclosed inside the cavity. If the cavity walls are
perfect reflectors, classical electromagnetism teaches us is zero inside the perfect conductor and normal that the wave electrical field E tangential is contito the wall on the vacuum side (due to the
condition that E nuous), and that the corresponding magnetic field B must be tangential. In order to produce the resonance of this cavity and to establish stationary waves, the cavity dimension L
should contain an integer number of half-wavelengths λ. The wave vector k = 2π/λ and L are then linked. k) exp i(k ·r −ωt) An electromagnetic wave, given in complex notation by E(ω, is characterized
by its wave vector k and its frequency ω ; it can have two independent polarization states (either linear polarizations along two perpendicular directions, or left or right circular polarizations) on
which the direction of the electric field in the plane normal to k is projected. Finally its intensity k)|2 . is proportional to |E(ω, Since the interpretation of the photoelectric effect by A.
Einstein in 1905, it has been known that the light can be described both in terms of waves and of particles, which are the photons. The quantum description of the electromagnetic field is taught in
advanced Quantum Mechanics courses. Here we only specify that the photon is a relativistic particle of zero mass, so that
Bose-Einstein Distribution of Photons
the general relativistic relation between the energy ε and the momentum p, ε = p2 c2 + m20 c4 , where c is the light velocity in vacuum, reduces in this case to ε = pc. Although the photon spin is
equal to 1, due to its zero mass it only has two distinct spin states. The set of data consisting in p and the photon spin value defines a mode. The wave parameters k and ω and the photon parameters
are related through the Planck constant, as summarized in this table : electromagnetic wave wave vector k frequency ω k, ω)|2 intensity |E( polarization (2 states)
particle : photon p = k momentum ε = ω energy number of photons spin (2 values)
Let us return to the cavity in thermal equilibrium with the radiation it contains : its dimensions, in Ω1/3 , are very large with respect to the wavelength of the considered radiation. The exact
surface conditions do not matter much : as in the particles case (see § 6.4.1b) ), rather than using the stationary wave conditions, we prefer the periodic limit conditions (“Born-Von Kármán”
conditions) : in a thought experiment we close the system on itself, which quantizes k and consequently, p and the energy. Assuming the cavity is a box of dimensions Lx , Ly , Lz , one obtains : ã Å
k = (kx , ky , kz ) = 2π nx , ny , nz (9.24) Lx Ly Lz where nx , ny , nz are positive or negative integers, or zero. Then
Å p = h
nx ny nz , , Lx Ly Lz
ε = pc = ω = hν
ã (9.25) (9.26)
Statistics of Photons, Bosons in Non-Conserved Number
In the photon emission and absorption processes of the wall atoms, the number of photons varies, and there are more photons when the walls are hotter : in this system the constraint on the
conservation of particles number, that existed in § 9.1, is lifted. This constraint was defining the chemical potential µ as a Lagrange multiplier, which no longer appears in the probability law for
photons, i.e., the chemical potential is zero for photons.
Chapter 9. Bosons : Helium 4, Photons, Thermal Radiation
Let us consider the special case where only a single photon mode is possible, i.e., a single value of p, of the polarization and of ε in the cavity at thermal equilibrium at temperature T . Any
number of photons is possible, thus the partition function z(ε) is equal to z(ε) =
e−βnε =
1 1 − e−βε
and the average energy at T of the photons in this mode is given by nε = −
ε 1 ∂z(ε) = βε z(ε) ∂β e −1
For the occupation factor of this mode, one finds again the Bose-Einstein distribution in which µ = 0, i.e. f (ε) =
1 −1
[Note that in the study of the specific heat of solids (see § 2.4.5) a factor similar to (9.28) appeared : in the Einstein and Debye models the quantized solid vibrations are described by oscillators.
Changing the vibration
state of the solid from n + 12 ω to n + 12 + 1 ω is equivalent to creating a quasi-particle, called a phonon, which follows the Bose-Einstein statistics. The number of phonons is not conserved]. If
now one accounts for all the photons modes, for a macroscopic cavity volume p), D(ε) such that the Ω, one can define the densities of states Dk (k), Dp ( number dn of modes with a wave vector between
k and k + dk is equal to dn : Ω 3 d k = Dk (k)d3k (2π)3 Ω p = Dp ( p)d3 p dn = 2 · 3 d3 h dn = 2 ·
that is, Dk (k) =
Ω 4π 3
Dp ( p) =
2Ω h3
The factor 2 expresses the two possible values of the spin of a photon of fixed k (or p). Since the energy ε only depends of the modulus p, one obtains D(ε) from Ω Ω 4πp2 dp = 8π 3 3 ε2 dε 3 h h c 8πΩ
2 D(ε) = 3 3 ε h c
D(ε)dε = 2
Bose-Einstein Distribution of Photons
In this “large volume limit”, one can express the thermodynamical parameters. The thermodynamical potential to be considered here is the free energy F (since µ = 0) ∞ ln(1 − e−βε )D(ε)dε (9.31) F =
−kB T ln Z = kB T 0
The internal energy is written ∞ U= εD(ε)f (ε)dε 0 ∞ ∞ 8πΩ ε3 8πΩ hν 3 dν = dε = h3 c3 eβε − 1 c3 eβhν − 1 0 0 i.e.,
U= 0
with u(ν) =
8πh ν3 c3 eβhν − 1
The photon spectral density in energy u(ν) is defined from the energy contribution dU of the photons present inside the volume Ω, with a frequency between ν and ν + dν : dU = Ωu(ν)dν
The expression (9.33) of u(ν) is called the Planck law . The total energy for the whole spectrum is given by ∞ 3 8πΩ x dx U = 3 3 (kB T )4 h c ex − 1 0
where the dimensionless integral is equal to Γ(4)ζ(4) with Γ(4) = 3! and π4 ζ(4) = (see the section “Some useful formulae” in this book). Introducing 90 = h/2π, the total energy becomes U=
π 2 (kB T )4 Ω 15 (c)3
The free energy U and the internal energy F are easily related : in the integration by parts of (9.31), the derivation of the logarithm introduces
Chapter 9. Bosons : Helium 4, Photons, Thermal Radiation
the Bose-Einstein distribution, the integration of D(ε) provides a term in ε3 /3 = ε . ε2 /3. This exactly gives F =−
π 2 (kB T )4 U =− Ω 3 45 (c)3
[to be compared to (9.11), which is valid for material particles with a density of states in ε1/2 , whereas the photons density varies in ε2 ]. Since dF = −SdT − P dΩ
one deduces Å S=− Å P =− P =
∂F ∂T ∂F ∂Ω
ã = Ω
Å ã 4π 2 kB T 3 kB Ω 45 c
=− T
F Ω
π 2 (kB T )4 45 (c)3
(9.38) (9.39)
The pressure P created by the photons is called the radiation pressure.
Black Body Definition and Spectrum
The results obtained above are valid for a closed cavity, in which a measurement is strictly impossible because there is no access for a sensor. One is easily convinced that drilling a small hole
into the cavity will not perturb the photons distribution but will permit measurements of the enclosed thermal radiation, through the observation of the radiation emitted by the hole. Besides any
radiation coming from outside and passing through this hole will be trapped inside the cavity and will only get out after thermalization. This system is thus a perfect absorber, whence its name of
black body. In addition we considered that thermal equilibrium at temperature T is reached between the photons and the cavity. In such a system, at steady-state, for each frequency interval dν, the
energy absorbed by the walls exactly balances their emitted energy, which is found in the photon gas in this frequency range, that is, Ωu(ν)dν. Therefore the parameters we obtained previously for the
photon gas are also those characterizing the thermal emission from matter at temperature T , whether this emission takes place in a cavity or not, whether this matter is in contact with vacuum or
not. We will then be able to compare the laws already stated, and in the first place the Planck law, to experiments performed on thermal radiation.
Bose-Einstein Distribution of Photons
The expression (9.33) of u(ν) provides the spectral distribution of the thermal radiation versus the parameter β, i.e. versus the temperature T . For any T , u(ν) vanishes for ν = 0 and tends to zero
when ν tends to infinity. The variable is in fact βhν and one finds that the maximum of u(ν) is reached for βhνmax = 2.82, that is, νmax = 2.82
kB T h
Relation (9.41) constitutes the Wien law. When T increases, the frequency of the maximum increases and for T1 < T2 the whole curve u(ν) for T2 is above that for T1 (Fig. 9.1). u(ν)
5.10−15 T1 < T2 < T3
T1 0
ν 10 Hz 14
Fig. 9.1: Spectral density in energy of the thermal radiation for several temperatures.
For small values of ν, u(ν) kB T · for ν large, u(ν)
8π 2 ν c3
8πh 3 −βhν ν e c3
(9.42) (9.43)
At the end of the 19th century, photometric measures allowed one to obtain the u(ν) curves with a high accuracy. Using the then available MaxwellBoltzmann statistics, Lord Rayleigh and James Jeans
had predicted that the average at temperature T of the energy of oscillators should take the form (9.42), in kB T ν 2 . If this latter law does describe the low-frequency behavior of the thermal
radiation, it predicted an “ultraviolet catastrophe” [u(ν) would be an increasing function of ν, thus the U V and X emissions should be huge]. Besides, theoretical laws had been obtained by Wilhem
Wien and experimentally confirmed by Friedrich Paschen : an exponential behavior at
Chapter 9. Bosons : Helium 4, Photons, Thermal Radiation
high frequency like in (9.43) and the law (9.41) of the displacement of the distribution maximum. It was the contribution of Max Planck in 1900 to guess the expression (9.33) of u(ν), assuming that
the exchanges of energy between matter and radiation can only occur through discrete quantities, the quanta. Note that the ideas only slowly clarified until the advent in the mid 1920’s of Quantum
Mechanics, such as we know it now. It it remarkable that the paper by Bose, proposing the now called “Bose-Einstein statistics,” preceded by a few months the formulation by Erwin Schroedinger of its
equation ! The universe is immersed within an infrared radiation, studied in cosmology, the distribution of which very accurately follows a Planck law for T = 2.7 K. It is in fact a “fossil”
radiation resulting from the cooling, by adiabatic expansion of the universe, of a thermal radiation at a temperature of 3000 K which was prevailing billions of years ago : according to expression
(9.38) of the entropy, an adiabatic process maintains the product ΩT 3 . The universe radius has thus increased of a factor 1000 during this period. The sun is emitting toward us radiation with a
maximum in the very near infrared, corresponding to T close to 6000 K. A “halogen” bulb lamp is emitting a radiation corresponding to the temperature of its tungsten filament (around 3000 K), which is
immersed in a halogen gas to prevent its evaporation. We ourselves are emitting radiation with a maximum in the far infrared, corresponding to our temperature close to 300 K.
Microscopic Interpretation of the Bose-Einstein Distribution of Photons
Another schematic way to obtain the Bose-Einstein distribution, and consequently, the Planck law, consists in considering that the walls of the container are made of atoms with only two possible
energy levels ε0 and ε1 separated by ε1 − ε0 = hν, and in assuming that the combined system (atoms-photons of energy hν) is at thermal equilibrium (Fig. 9.2). It is possible to rigorously calculate
the transition probabilities between ε0 and ε1 (absorption), or between ε1 and ε0 (emission) under the effect of photons of energy hν. This is done in advanced Quantum Mechanics courses. Here we will
limit ourselves to a more qualitative argument, admitting that in Quantum Mechanics a transition is expressed by a matrix element, which is the product of a term 1|V |0 specific of the atoms with
another term due to the photons. Besides, we will admit that an assembly of n photons of energy hν is described by a harmonic oscillator, with energy levels splitting equal to hν, lying in the state
|n. The initial state contains N0 atoms in the fundamental state, N1 atoms in the state ε1 , and n photons of energy hν.
Bose-Einstein Distribution of Photons
N1 hν
N1 + 1 absorption N0 − 1
N1 − 1 hν spontaneous emission N0 + 1
N1 − 1 hν
hν N0
induced emission
N0 + 1
Fig. 9.2: Absorption, spontaneous and induced emission processes for an assembly of two-level atoms in thermal equilibrium with photons.
In the absorption process (Fig. 9.2 top), the number of atoms in the state of energy ε0 becomes N0 − 1, while N1 changes to N1 + 1 ; a photon is utilized, thus the photons assembly shifts from |n to
|n − 1. This is equivalent to considering the action on the state |n of the photon annihilation operator, i.e., according to the Quantum Mechanics course, a|n =
√ n |n − 1
The absorption probability per unit time P is proportional to the number of atoms N0 capable of absorbing a photon, to the square of the matrix element appearing in the transition, and is equal, to a
multiplicative constant, to : Pa = N0 |1|V |0|2 |n − 1|a|n|2 = N0 |1|V |0| n 2
(9.45) (9.46)
This probability is, as expected, proportional to the number n of present photons.
Chapter 9. Bosons : Helium 4, Photons, Thermal Radiation
In the emission process (Figs. 9.2 middle and bottom), using a similar argument, since the number of photons is changed from n to n + 1, one considers the action on the state |n of the photon
creation operator : √ (9.47) a+ |n = n + 1 |n + 1 The emission probability per unit time Pe is proportional to the number of atoms N1 that can emit a photon, to the square of the matrix element of
this transition and is given by Pe = N1 |0|V |1|2 |n + 1|a+ |n|2 = N1 |0|V |1| (n + 1) 2
(9.48) (9.49)
We find here that Pe is not zero even when n = 0 ( spontaneous emission), but that the presence of photons increases Pe (induced or stimulated emission) : another way to express this property is to
say that the presence of photons “stimulates” the emission of other photons. The induced emission was introduced by A. Einstein in 1916 and is at the origin of the laser effect. Remember that the name
“laser” stands for Light Amplification by Stimulated Emission of Radiation. In equilibrium at the temperature T , n photons of energy hν are present : the absorption and emission processes exactly
compensate, thus N0 n = N1 (n + 1)
Besides, for the atoms in equilibrium at T , the Maxwell-Boltzmann statistics allows one to write N1 = e−βhν (9.51) N0 One then deduces the value of n n=
1 eβhν
which is indeed the Bose-Einstein distribution of photons. This yields u(ν, T ) through the same calculation of the photons modes as above [formulae (9.29) to (9.33)]. This approach, which is based
on advanced Quantum Mechanical results, has the advantage of introducing the stimulated emission concept and the description of an equilibrium from a balance between two phenomena.
Photometric Measurements : Definitions
We saw that a “black body” is emitting radiation corresponding to the thermal equilibrium in the cavity at the temperature T . A few simple geometrical
Bose-Einstein Distribution of Photons
considerations allow us to relate the total power emitted by a black body to its temperature and to introduce the luminance, which expresses the “color” of a heated body : we will then be able to
justify that all black bodies at the same given temperature T are characterized by the same physical parameter and thus that one can speak of THE black body. Let us first calculate the total power P
emitted through the black-body aperture in the half-space outside the cavity. We consider the photons of frequency ν, to dν, which travel through the aperture of surface dS and have a velocity
directed toward the angle θ to dω with the normal n to dS. The number of these photons which pass the hole during dt are included in a cylinder of basis dS and height c cos θdt (Fig. 9.3). They carry
an energy
dω θ dS
Fig. 9.3: Radiation emitted through an aperture of surface dS in a cone dω around the direction θ.
dω c cos θ dt dS u(ν)dν (9.53) 4π The total power P radiated into the half-space, by unit time, through a hole of unit surface is given by 2π π2 dϕ ∞ cos θ sin θdθ c u(ν, T )dν (9.54) P = 4π 0 0 0 d2
P · dt =
P =
π 2 (kB T )4 cU = = σT 4 4Ω 60 3 c2
This power varies like T 4 , this is the Stefan-Boltzmann law. The Stefan constant σ is equal to 5.67 × 10−8 watts · m−2 · kelvins−4 . At 6000 K, the approximate temperature of the sun surface, one
gets P = 7.3 × 107 W· m−2 ; at 300 K one finds 456 W·m−2 . Another way to handle the question is to analyze the collection process of the radiation issued from an arbitrary source by a detector. It is
easy to
Chapter 9. Bosons : Helium 4, Photons, Thermal Radiation
understand that the collected signal first depends on the spectral response of the detector : for instance, a very specific detector is our eye, which is only sensitive between 0.4 µm and 0.70 µm, with
a maximum of sensitivity in the yellow-green at 0.55 µm. Therefore specific units have been defined in visual photometry, which take into account the physiological spectral response. But the signal
also depends on geometrical parameters, such as the detector surface dS , the solid angle under which the detector sees the source or, equivalently, the source surface dS and the source-detector
distance together with their respective directions (Fig. 9.4). A physical parameter, the luminance L, characterizes the source : in the visible detection range the colors of two sources of different
luminances are seen as different (a red-hot iron does not have the same aspect as the sun or as an incandescent lamp ; a halogen lamp looks whiter that an electric torch). To be more quantitative, in
energetic photometry one defines the infinitesimal power d2 P received by the elementary surface dS of a detector from the surface dS of a source (Fig. 9.4) by the equation d2 P = LdS cos θdωdν = LdS
cos θ cos θ dS dν = LdS cos θ dω dν r2
dS dω M
θ n
dω θ
Fig. 9.4: Definition of the geometrical parameters of the source and detector. The surface element of the source is centered on M , the detection element on M , with M M = r ; θ is the angle between M
M and the normal n to dS, dω the solid angle under which the detector is seen from the source dS. One notices that the definition is fully symmetrical and that alternatively one can introduce dS , the
solid angle dω under which the source is seen from the detector, the angle θ between M M and the normal to dS in M . The luminance depends of T , ν and possibly of θ, the direction of the emission,
that is, L(ν, T, θ). When the source is a black body of surface dS in thermal equilibrium at the temperature T , the power emitted toward dS in the frequency range (ν, ν+dν)
Bose-Einstein Distribution of Photons
is calculated from (9.53) d2 P =
c u(ν, T ) dS cos θ dω dν 4π
The comparison between (9.56) and (9.57) allows one to express the black body luminance L0 versus u(ν, T ) : c u(ν, T ) (9.58) L0 (ν, T ) = 4π Thus all black bodies at the same temperature T have the
same luminance, which allows one to define the black body. It is both the ideal source and detector since, once the equilibrium has been reached, it absorbs any radiation that it receives and re-emits
it. Its luminance does not depend on the direction θ of emission. Formulae (9.56) and (9.57) also express that the surface which comes into play is the apparent surface, i.e., the projection dS cos θ
of the emitting surface on the plane perpendicular to M M . For this whole surface c u(ν, T ) : this is the Lambert law the luminance is uniform and equal to 4π which explains why the sun, a sphere
radiating like the black body, appears to us like a planar disk. The heated bodies around us are generally not strictly black bodies, in the meaning that (9.58) is not fulfilled : their luminance(L(ν,
T, θ)) not only depends on the frequency and the temperature, but also on the emission direction. Besides, we saw that the black body is “trapping” any radiation which enters the cavity : it thus
behaves like a perfect absorber, which is not the most general case for an arbitrary body.
Radiative Balances
Consider an arbitrary body, exchanging energy only through radiation and having reached thermal equilibrium : one can only express a balance stating that the absorbed power is equal to the emitted
power (in particular a transparent body neither absorbs nor emits energy). Assume that the studied body is illuminated by a black body and has an absorption coefficient a(ν, T, θ) (equal to unity for
the black body). If the studied body does not diffuse light, that is, radiation is only reemitted in the image direction of the source, and if it remains at the frequency it was absorbed, one obtains
the relation c u(ν, T )a(ν, T, θ) = L(ν, T, −θ) (9.59) L0 (ν, T )a(ν, T, θ) = 4π emission absorption after simplifying the geometrical factors. Relation (9.59) links the black body luminance L0 (ν, T
) to that of the considered body, of absorption coefficient a(ν, T, θ). The black body luminance is
Chapter 9. Bosons : Helium 4, Photons, Thermal Radiation
larger than, or equal to, that of any thermal emitter at the same temperature, the black body is the best thermal emitter. This is the Kirchhoff law. In the case of a diffusing body (reemitting
radiation in a direction other than −θ) or of a fluorescent body (reemitting at a frequency different from that of its excitation), one can no longer express a detailed balance, angle by angle or
frequency by frequency, between the ambient radiation and the body radiation. However, in steady-state regime, the total absorbed power is equal to the emitted power and the body temperature remains
constant. The applicable relation is deduced from (9.59) after integration over the angles and/or over the frequencies. One can also say that the power, assumed to be emitted by a black body,
received by the studied body at thermal equilibrium at T , is either absorbed (and thus reemitted since T remains constant), or reflected. For instance for a diffusing body one writes : received power
= emitted power + reflected power which implies for the luminances : L0 (ν, T ) = L(ν, T ) + [L0 (ν, T ) − L(ν, T )] L0 (ν, T ) = L(ν, T ) + [1 − a(ν, T )]L0 (ν, T )
This is another way to obtain (9.59). These types of balances are comparisons between the body luminance L and the black body luminance L0 : if the interpretation of the L0 expression requires
Quantum Mechanics, the comparisons of luminances are only based on balances and were discovered during the 19th century. More generally speaking, the exchanges of energy are taking place according to
three processes : conduction, convection and radiation. Whereas the first two mechanisms require a material support, radiation which occurs through photons can propagate through vacuum : it is thanks
to radiation that we are in contact with the universe. Through the intersideral vacuum, stars are illuminating us and we can study them by analyzing the received radiation. Our star, the sun, is
sending a power of the order of a kW to each 1 m2 surface of earth. The researches on solar energy try to make the best use of this radiation, using either thermal sensors, or photovoltaic cells
(“solar cells”) (see §8.5.2) which directly convert it into electricity.
Greenhouse Effect
Before interpreting the radiative balance of the earth, let us consider the greenhouse effect which allows one to maintain inside a building, made of
Bose-Einstein Distribution of Photons
glass formerly, often of plastics nowadays, a temperature higher than the one outside and thus favors particular vegetables or flowers cultivation (Fig. 9.5). The origin of the effect arises from the
difference in absorption properties of the building walls for the visible and infrared radiations. In the visible range, where the largest part of the radiation emitted by the sun is found, the walls
are transparent, so that the sun radiation can reach the ground, where it is absorbed. The ground in turn radiates like a black body, with its frequency maximum in the infrared [Wien law, equation
(9.41)]. Now the walls are absorbing in the infrared, they reemit a part of this radiation toward the ground, the temperature of which rises, and so on. At equilibrium the ground temperature is thus
TG. If a steady state has been reached, the power getting into the greenhouse, i.e., P0 , is equal to the one that is getting out of it :
Wall in glass or plastics a
Ground TG
Fig. 9.5: Principle of the greenhouse effect.
P0 = σSTG4 (1 − a) if S is the ground surface and a the absorption coefficient, by the walls, of the ground infrared radiation. In the absence of greenhouse, the ground temperature would be TG , such
that P0 = σSTG4 The temperature TG inside the greenhouse is higher than TG since TG =
TG (1 − a)1/4
Here we assumed a perfect transparency of the walls in the visible and no reflection by the walls both in the visible and the infrared. “The greenhouse effect” related to a possible climate global
warming is an example of radiative balance applied to the earth. The earth is surrounded by a layer of gases, some of which corresponding to molecules with a permanent dipolar moment, due to their
unsymetrical arrangement of atoms : this is the case for H2 O, CO2 , O3 , etc. Thus the vibration and rotation spectra of these molecules evidence an infrared absorption. The sun illuminates the
earth with thermal radiation, the maximum of its spectrum being located in the visible [Eq. (9.41)] ; this radiation is warming
Chapter 9. Bosons : Helium 4, Photons, Thermal Radiation
the ground. The earth, of temperature around 300 K, reemits thermal radiation toward space, the frequency maximum of this reemission is in the infrared. A fraction of the earth radiation is absorbed
by the gases around the earth, which in turn reemit a fraction of it, and so on. Finally, according to the same phenomenon as the one described above for a greenhouse, the earth receives a power per
unit surface larger than the one it would have reached it if there had only been vacuum around it ; its average temperature is 288 K (instead of 255 K if it had been surrounded by vacuum). The global
climate change, possibly induced by human activity, is related to an increase in concentration of some greenhouse effects gases (absorbing fluorocarboned compounds, which are now forbidden for use, and
mainly CO2 ). If the absorption coefficient increases, the ground temperature will vary in the same way. This will produce effects on the oceans level, etc. . . 1 More advanced studies on the greenhouse
effect account for the spectra of absorption or transparency of the gases layer around the earth according to their chemical species, etc. In our daily life, we are surrounded by radiating objects
like heating radiators, incandescent lamps. On the contrary we wish other objects to maintain their energy without radiating : this is the case, in particular, for thermos bottles, Dewar cans. We
foresee here a very vast domain with technological, ecological and political involvements !
1 The Intergovernmental Panel on Climate Change, established by the World Meteorological Organization and the United Nations Environment program, is assessing the problem of potential global climate
change. An introduction brochure can be downloaded from its website (www.ipcc.ch).
Summary of Chapter 9 The chemical potential µ of material non-interacting particles following the Bose-Einstein statistics is located below their fundamental state ε1 . The occupation factor by
material bosons of an energy level ε is equal to f (ε) =
1 eβ(ε−µ) − 1
At given density N/Ω, below a temperature TB , which is a function of N/Ω, the Bose condensation takes place : a macroscopic number of particles then occupies the fundamental state ε1 , that is, the
same quantum state. The thermodynamical properties in this low temperature range are mainly due to the particles still in the excited states. The superfluidity of helium 4, the superconducting state
of some solids, are examples of systems in the condensed Bose state, which is observed for T < TB . Photons are bosons in non-conserved number, their Bose-Einstein distribution is given by 1 f (ε) =
βε e −1 The Planck law expresses the spectral density in energy or frequency of photons in an empty cavity of volume Ω, in the frequencies range [ν, ν + dν] : dU = Ωu(ν)dν = Ω
8πh ν 3 dν c3 eβhν − 1
This law gives the spectral distribution of the “black body” radiation, i.e., of thermal radiation at temperature T = 1/kB β. The black body is the perfect thermal emitter, its luminance is
proportional to u(ν). The thermal emission of a body heated at temperature T corresponds to a luminance at maximum equal to that of the black body. 221
Summary of Chapter 9
The greenhouse effect arises from the properties of the walls enclosing the greenhouse : higher transparency in the visible and larger infrared absorption. This phenomenon is also applicable to the
radiation balance of the earth.
General Method for Solving Exercises and Problems At the risk of stating obvious facts, we recall here that the text of a statistical physics exercise or problem is very precise, contains all the
required information and that each word really matters. Therefore it is recommended to first read the text with the highest attention. The logic of the present course will always be apparent, i.e, you
have to ask yourself the two following questions, in the order given here : 1. What are the microstates of the studied system ? 2. Which among these microstates are achieved in the problem conditions
? 1. The first question is generally solved through Quantum Mechanics, which reduces to Classical Mechanics in special cases (the ideal gas, for example). Mostly this is equivalent to the question :
“What are the energy states of the system ?” In fact they are often specified in the text. At this stage there are three cases : – distinguishable particles (for example of fixed coordinates) : see the
first part of the course – classical mobile particles : see the chapter on the ideal gas – indistinguishable particles : see the second part of the course, in which the study is limited to the case of
independent particles.
General Method for Solving Exercises and Problems
2. The second question specifically refers to Statistical Physics. According to the physical conditions of the problem one will work in the adapted statistical ensemble : – isolated system, fixed
energy and fixed particles number : microcanonical ensemble – temperature given by an energy reservoir and fixed number of particles : canonical ensemble – system in contact with an energy reservoir
(which dictates its temperature) and a particles reservoir (which dictates its chemical potential) : grand canonical ensemble The systems of indistinguishable particles obeying the Pauli principle
and studied in the framework of this course are solved in the grand-canonical ensemble. Their chemical potential is then set in such a way that the average number of particles coincides with the real
number of particles given by the physics of the system.
Units and physical constants (From R. Balian, “From microphysics to macrophysics : methods and applications of statistical physics”, Springer Verlag Berlin (1991)) We use the international system
(SI), legal in most countries and adopted by most international institutions. Its fundamental units are the meter (m), the kilogram (kg), the second (sec), the ampere (A), the kelvin (K), the mole
(mol), and the candela (cd), to which we add the radian (rad) and the steradian (sr). The derived units called by specific names are : the hertz (Hz = s−1 ), the newton (N = m kg sec−2 ), the pascal
(Pa = N m−2 ), the joule (J = N m), the watt (W = J sec−1 ), the coulomb (C = A s), the volt (V = W A−1 ), the farad (F = C V−1 ), the ohm (Ω = V A−1 ), the siemens (S = A V−1 ), the weber (Wb = V
s), the tesla (T = Wb m−2 ), the henry (H = Wb A−1 ), the Celsius degree (C), the lumen (lm = cd sr), the lux (lx = lm m−2 ), the becquerel (Bq = sec−1 ), the gray (Gy = J kg−1 ) and the sievert (Sv
= J kg−1 ). The multiples and submultiples are indicated by the prefixes deca (da = 10), hecto (h = 102 ), kilo (k = 103 ) ; mega (M = 106 ), giga (G = 109 ), tera (T = 1012), peta (P = 1015 ), exa (E
= 1018 ) ; deci (d = 10−1 ), centi (c = 10−2 ), milli (m = 10−3 ) ; micro (µ = 10−6 ), nano (n = 10−9 ), pico (p = 10−12 ), femto (f = 10−15 ), atto (a = 10−18 ).
Units and physical constants
Constants of electromagnetic units light velocity Planck constant Dirac constant Avogadro number Atomic mass unit neutron and proton masses electron mass Elementary charge Faraday constant Bohr
magneton nuclear magneton Fine structure constant Hydrogen atom : Bohr radius binding energy Rydberg constant Boltzmann constant molar constant of the gas Normal conditions : pressure temperature
molar volume Gravitation constant gravity acceleration Stefan constant
µ0 = 4π × 10−7 N A−2 (defines the ampere) ε0 = 1/µ0 c2 , 1/4πε0 9 × 109 N m2 C−2 c = 299 792 458 m sec−1 (defines the meter) c 3 × 108 m sec−1 h = 6.626 × 10−34 J sec = h/2π 1.055 × 10−34 J sec N 6.022
× 1023 mol−1 (by definition, the mass of one mole of 12 C is 12 g) 1 u=1 g/N 1.66 × 10−27 kg (or dalton or amu) mn 1.0014mp 1.0088 u m 1 u/1823 9.11 × 10−31 kg e 1.602 × 10−19 C N e 96 485 C mol−1 µB
= e/2m 9.27 × 10−24 J T−1 e/2mp 5 × 10−27 J T−1 e2 1 α= 4πε0 c 137 4πε0 2 = 0.53 Å mcα me2Å ã2 2 m e2 E0 = = 2 2ma20 2 4πε0 13.6 eV R∞ = E0 /hc 109 737 cm−1 kB 1.381 × 10−23 J K−1 R = N kB 8.316 J
K−1 mol−1 a0 =
1 atm = 760 torr = 1.01325× 105 Pa Triple point of water : 273.16 K ( definition of the kelvin) or 0.01C (definition of the Celsius scale) 22.4 × 10−3 m3 mol−1 G 6.67 × 10−11 m3 kg−1 sec−2 g 9.81 m
sec−2 π2 k4 603 c2 5.67 × 10−8 W m−2 K−4 σ=
Units and physical constants
Definition of the photometric units
A light power of 1 W, at the frequency of 540 THz, is equivalent to 683 lm
Energy units and equivalences :
1 erg = 10−7 J (not IS) 1 kWh=3.6×106 J 1 eV ↔ 1.602 × 10−19 J ↔ 11 600 K
electrical potential (electron-volt) heat (calorie) chemical binding temperature (kB T ) mass (mc2 ) wave number (hc/λ) frequency (hν)
1 cal = 4.184 J (not SI ; specific heat of 1 g water) 23 kcal mol−1 ↔ 1 eV (not SI) 1 290 K ↔ 40 eV (standard temperature) 9.11 × 10−31 kg ↔ 0.511 MeV (electron at rest) 109 700 cm−1 ↔ 13.6 eV
(Rydberg) 3.3 × 1015 Hz ↔ 13.6 eV
It is useful to remember these equivalencies to quickly estimate orders of magnitude. Various non-SI units
1 ångström (Å)=10−10 m (atomic scale) 1 fermi (fm)=10−15 m (nuclear scale) 1 barn (b)=10−28 m2 1 bar=105 Pa 1 gauss (G)=10−4 T 1 marine mile=1852 m 1 knot=1 marine mile per hour 0.51 m sec−1 1
astronomical unit (AU) 1.5 × 1011 m (Earth-Sun distance) 1 parsec (pc) 3.1 × 1016 m (I UA/sec arc) 1 light-year 0.95 × 1016 m
Data on the Sun
Radius 7 × 108 m=109 earth radii Mass 2 × 1030 kg Average density 1.4 g cm−3 Luminosity 3.8 × 1026 W
A few useful formulae Normalization of a Gaussian function : … +∞ π −ax2 dx e = a −∞ The derivation of this formula with respect to a yields the moments of the Gauss distribution.
Euler gamma function :
Γ(t) ≡
xt−1 e−x dx = (t − 1)Γ(t − 1)
(1 − x)s−1 xt−1 dx =
Γ(t)Γ(1 − t) = Stirling formula :
π sin πt
Γ(s)Γ(t) Γ(s + t)
Å ã √ 1 = π 2
√ t! = Γ(t + 1) ∼ tt e−t 2πt t→∞
Binomial series : (1 + x)t =
∞ n x
∞ (−x)n Γ(n − t) Γ(t + 1) = n! Γ(t + 1) − n) n=0 n! Γ(−t) n=0
Poisson formula : +∞
f (n) =
f˜(2πl) ≡
+∞ l=−∞
dxf (x)e2πilx
|x| < 1
A few useful formulae
Euler-Maclaurin formula : ε 1 ε3 1 a+ε f (x) |a+ε dxf (x) ≈ [f (a) + f (a + ε)] − f (x) |a+ε + +... a a ε a 2 12 720 ε 7ε3 ≈ f (a + 12 ε) + f (x) |a+ε f (x) |a+ε − +... a a 24 5760 This formula
allows one to calculate the difference between an integral and a sum over n, for a = nε. Constants e 2.718, π 3.1416 1 γ ≡ lim(1 + ... + − ln n) 0.577 Euler constant. n Riemann zeta function : ∞ t−1 ∞
1 x dx = Γ(t)ζ(t) ζ(t) ≡ , t n ex − 1 0 n=1
t ζ
1.5 2.612
xt−1 dx = (1 − 2−t+1 )Γ(t)ζ(t) ex + 1
2.5 1.341
1 2 6π
Dirac distribution : 1 2π
3 1.202
3.5 1.127
4 1 4 90 π
5 1.037
dx eixy/a = δ(y/a) = |a|δ(y)
+∞ 1 2πi y/a e = δ(y/a) |a|
sin tx 1 − cos tx = lim = πδ(x) t→∞ x tx2
f (x)δ(x) = f (0)δ(x)
f (x)δ (x) = −f (0)δ(x) + f (0)δ (x)
If f (x) = 0 at the x = xi points, one has 1 δ(x − xi ) δ[f (x)] = |f (xi )| i
Exercises and Problems A course can only be fully understood after practice through exercises and problems. There are many excellent collections of classical exercises in Statistical Physics. In what
follows, we reproduce original texts which were recently given as examinations at Ecole Polytechnique. An examination of Statistical Physics at Ecole Polytechnique usually consists in an exercise and
a problem. As examples, you will find here the exercises given in 2000, 2001 and 2002 and the problems of 2001 and 2002 which were to be solved by the students who had attended the course presented in
this book. Electrostatic Screening : exercise 2000 Magnetic Susceptibility of a “Quasi-1D” Conductor : exercise 2001 Entropies of the HC Molecule : exercise 2002 Quantum Boxes and Optoelectronics :
problem 2001 Physical Foundations of Spintronics : problem 2002
Exercise 2000 : Electrostatic Screening
Exercise 2000 : Electrostatic Screening I. Let us consider N free fermions of mass me and spin 1/2, without any mutual interaction, contained in a macroscopic volume Ω maintained at temperature T .
I.1. Recall the equation relating N/Ω to the chemical potential µ of these fermions through the density of states in k (do not calculate the integral). I.2. In the high temperature and low density
limit recall the approximate integral expression relating N/Ω to µ (now calculate the integral). I.3. The same fermions, at high temperature and small density, are now submitted to a potential energy
V (r). Express the total energy of one such particle versus k. Give the approximate expression relating N to µ and V (r). I.4. Deduce that the volume density of fermions is given by n(r) = n0 exp(−βV
where β = 1/kB T and n0 is a density that will be expressed as an integral. I.5. Show that at very small potential energy, such that βV (r) 1 for any r, then n0 = N/Ω. II. From now on, we consider an
illuminated semiconductor, of volume Ω, maintained at temperature T . It contains N electrons, of charge −e, in its conduction band together with P = N holes (missing electrons) in its valence band.
The holes will be taken as fermions, of spin 1/2, mass mh and charge +e. We assume that the high temperature limit is realized. Assume that a fixed point charge Q is present at the coordinates origin,
which, if alone, would create an electrostatic potential : Φext (r) =
Q 4πε0 εr r
where εr is the relative dielectric constant of the considered medium. In fact, the external charge Q induces inhomogeneous charge densities −en(r) and +ep(r) inside both fermion gases. These induced
charges in turn generate electrostatic potentials, which modify the densities of mobile charges, and so on. You are going to solve this problem by a self-consistent method in the following way : let
Φtot (r) be the total electrostatic potential, acting on the particles.
Exercises and Problems
II.1. Express the potential energies, associated with Φtot (r), to which the electrons and the holes are respectively subjected. Deduce from I. the expressions n(r) and p(r) of the corresponding
densities of mobile particles. II.2. The electrostatic potential Φtot (r) is a solution of the Poisson equation, in which the charge density includes all the charges of the problem. Admit that the
charge density Qδ(r) is associated with the point charge Q located at r, where δ(r) is the Dirac delta distribution. Show that, in the high temperature limit and to the lowest order in Φtot (r), the
total electrostatic potential is solution of ∆Φtot (r) =
Φtot (r) Qδ(r) − λ2 ε0 εr
where λ is a constant that will be expressed versus N/Ω and T . What is the unit of λ ? II.3. Estimate λ for N/Ω = 1022 m−3 , T = 300 K, εr = 12.4. II.4. The solution of this equation is Φtot (r) =
Q exp(−r/λ) 4πε0 εr r
Indicate the physical implications of this result in a few sentences.
Exercise 2001 : Magnetic Susceptibility of a “Quasi-1D” I. Consider an assembly of N magnetic ions at temperature T inside a volume Ω. These ions are independent and distinguishable. The electronic
structure of an ion consists in a fundamental non-degenerate level |0, of energy ε0 , and a doubly degenerate excited state of energy ε0 + λ. Under the effect of a static and uniform magnetic field B,
the fundamental level remains at the energy ε0 while the excited level splits into two sublevels |+ and |− of respective energies ε0 + λ + γB and ε0 + λ − γB. I.1. Without any calculation, plot the
location of the states |0, |+ and |− versus the applied magnetic field, in the following special cases : a) λ = 0 , kB T γB b) λ kB T γB c) kB T λ γB d) γB λ kB T
Exercise 2001 : Magnetic Susceptibility of a “Quasi-1D” Conductor
In each case indicate which states are occupied at temperature T . I.2. Calculate the system free energy, assuming that the only degrees of freedom are those, of electronic origin, related to the
three levels |0, |+ and |−. ∂F and Deduce the algebraic expressions of the system magnetization M = − ∂B M of its susceptibility χ = limB→0 (Ω is the volume). ΩB I.3. On the expressions of the
magnetization and of the susceptibility obtained γB = in I.2. only discuss the limit cases a), b), c), d) of question I.1 (take kB T x). Verify that the obtained results are in agreement with the
qualitative considerations of I.1.
I.4. The figure below represents the variation versus temperature of the paramagnetic susceptibility of the organic “quasi-one-dimensional” compound [HN(C2 H5 )3 ](TCNQ)2 . Using the results of the
present exercise can you qualitatively interpret the shape of this curve ?
Temperature (K)
Fig. 1: Magnetic susceptibility of the quasi-one-dimensional organic compound versus temperature. II. Let a paramagnetic ion have a total spin S. You will admit that, in presence the levels of this
ion take the energies of a static and uniform magnetic field B, εl = lγB, where −S ≤ l ≤ S, with l = −S, (−S + 1), (−S + 2), . . . + S. Consider an ion at thermal equilibrium at the given temperature
T . Calculate
Exercises and Problems
its free energy f and its average magnetic moment m = −
∂f . ∂B
Write the expressions of the magnetization in the following limit cases : high and low magnetic fields ; high and low temperatures. In each situation give a brief physical comment.
Exercise 2002 : Entropies of the HC Molecule 1. Let a three-dimensional ideal gas be made of N monoatomic molecules of mass m, confined inside the volume Ω and at temperature T . Without demonstration
recall the expressions of its entropy S3D and of its chemical potential µ3D . 2. For a two-dimensional ideal gas of N molecules, mobile on a surface A, of the same chemical species as in 1., give the
expressions, versus A, m and T , of the partition function Z2D , the entropy S2D and the chemical potential µ2D . To obtain these physical parameters, follow the same approach as in the course. Which
of these parameters determines whether a molecule from the ideal gas of question 1. will spontaneously adsorb onto surface A ? 3. Let us now assume that each of the N molecules from the gas is
adsorbed on surface A. What is, with respect to the gas phase, the variation per mole of translation entropy if the molecules now constitute a mobile adsorbed film on A ? 4. Numerically calculate the
value of S3D (translation entropy) for one mole of HC, of molar mass M = 36.5, at 300K under the atmospheric pressure. Calculate S2D (translation entropy on a plane) for one mole of such a gas,
adsorbed on a porous silicon surface : assume that, due to the surface corrugation, the average distance between molecules on the surface is equal to 1/30 of the distance in the gas phase considered
above. 5. In fact, in addition to its translation degrees of freedom, the HC molecule possesses rotation degrees of freedom. Write the general expression of the partition function of N molecules of
this gas versus ztr and zrot , the translation and rotation partition functions for a single molecule, accounting for the fact that the molecules are indistinguishable. The expressions of ztr and
zrot will be calculated in the next questions. 6. In this question one considers a molecule constrained to rotate in a plane around a fixed axis. Its inertia moment with respect to this axis is IA .
Problem 2001 : Quantum Boxes and Optoelectronics
rotation angle φ varies from 0 to 2π, the classical momentum associated with φ is pφ . The hamiltonian for one molecule corresponding to this motion is given by p2φ h= 2IA The differential elementary
volume of the corresponding phase space is dφ dpφ /h, where h is the Planck constant. Calculate the partition function z1rot . 7. In the case of the linear HC molecule, of inertia moment I, free to
rotate in space, the rotation hamiltonian for a single molecule is written, in spherical coordinates, p2φ p2 h= θ + 2I 2I sin2 θ Here θ is the polar angle (0 ≤ θ ≤ π), φ the azimuthal angle (0 ≤ φ ≤
2π) ; pθ (respectively pφ ) is the momentum associated with θ (respectively φ). The volume element of the phase space is now equal to dφ dpφ dθ dpθ /h2 . Calculate the corresponding partition
function z2rot . Show that the dependence in temperature and in inertia moment obtained for z2rot is the expected one, when this expression of the rotation energy is taken into account. Verify that
the expression found for z2rot is in agreement with the one given in the course for a linear molecule in the classical approximation (be careful, the HC molecule is not symmetrical). 8. Numerically
calculate the rotation entropy by mole at 300K of a HC mole. Take I = 2.7 × 10−40 g · cm 2 .
Problem 2001 : Quantum Boxes and Optoelectronics For more clarity, comments are written in italics and questions in standard characters. Semiconducting quantum boxes are much studied nowadays,
because of their possible applications in optoelectronics. The aim of the present problem is to study the rate of spontaneous emission and the radiative efficiency in quantum boxes : these are two
important physical parameters for the elaboration of emission devices (quantum–boxes lasers). We will particularly focus on the experimental dependence of these parameters.
Exercises and Problems
Fig. 1 : Electron microscope image of quantum boxes of InAs in GaAs.
Quantum boxes are made of three-dimensional semiconducting inclusions of nanometer size (see Fig.1). In the studied case, the inclusions consist of InAs and the surrounding material of GaAs. The InAs
quantum boxes confine the electrons to a nanometer scale in three dimensions and possess a discrete set of electronic valence and conduction states ; they are often called “artificial atoms.” In the
following we will adopt a simplified description of their electronic structure, as presented below in Fig. 2.
Fig. 2: Simplified model of the electronic structure of an InAs quantum box and of the filling of its states at thermal equilibrium at T = 0. We consider that the energy levels in the conduction band,
labeled by a positive integer n, are equally spaced and separated by ωc ; the energy of the fundamental level (n = 0) is noted Ec . The nth level is 2(n + 1) times degenerate, where the factor 2
accounts for the spin degeneracy and (n + 1) for the orbital degeneracy. Thus a conduction state is identified through the set of three in-
Problem 2001 : Quantum Boxes and Optoelectronics
1 teger quantum numbers |n, l, s, with n > 0 and 0 ≤ l ≤ n, s = ± . The first 2 valence states are described in a similar way, with analogous notations (Ev , ωv , n , l , s ).
I. Quantum Box in Thermal Equilibrium At zero temperature and thermal equilibrium, the valence states are occupied by electrons, whereas the conduction states are empty. We are going to show in this
section that the occupation factor of the conduction states of the quantum box in equilibrium remains small for temperatures up to 300K. Therefore, only in this part, the electronic structure of the
quantum box will be roughly described by just considering the conduction and valence band levels lying nearest to the band gap. Fig.3 illustrates the assumed electronic configuration of the quantum
box at zero temperature.
Fig. 3 : Electronic configuration at T = 0. I.1. Give the average number of electrons in each of these two (spindegenerate) levels versus β = 1/kB T , where T is the temperature, and the chemical
potential µ . I.2. Find an implicit relation defining µ versus β, by expressing the conservation of the total number of electrons in the thermal excitation process. I.3. Show that, in the framework of
this model, µ is equal to (Ec + Ev )/2 independently of temperature. I.4. Estimate the average number of electrons in the conduction level at 300K and conclude. For the numerical result one will take
Ec − Ev = 1 eV. In optoelectronic devices, like electroluminescent diodes or lasers, electrons are injected into the conduction band and holes into the valence band of the semiconducting active
medium : the system is then out of equilibrium. The return toward equilibrium takes place through electron transitions from the conduction band toward the valence band, which may be associated with
light emission.
Exercises and Problems
Although the box is not in equilibrium, you will use results obtained in the course (for systems in equilibrium) to study the distribution of electrons in the conduction band, and that of holes in
the valence band. Consider that, inside a given band, the carriers are in equilibrium among themselves, with a specific chemical potential for each band. This is justified by the fact that the
phenomena responsible for transitions between conduction states, or between valence states, are much faster than the recombination of electronhole pairs. One now assumes in sections II exactly
contains one electron-hole the total number of electrons contained equal to 1 ; the same assumption is valid in the valence states.
and III that the quantum box pair (regime of weak excitation) : in all the conduction states is thus for the total number of holes present
II. Statistical Study of the Occupation Factor of the Conduction Band States II.1. Write the Fermi distribution for a given electronic state |n, l, s versus β = 1/kB T and µ, the chemical potential
of the conduction electrons. Show that this distribution factor fn,l,s only depends of the index n ; it will be written fn in the following. II.2. Expressing that the total number of electrons is 1,
find a relation between µ and β. II.3. Show that µ < Ec . II.4. Show that fn is a decreasing function of n. Deduce that fn ≤
1 (n + 1)(n + 2)
(Hint : give an upper bound and a lower bound for the population of electronic states of energy smaller or equal to that of level n). II.5. Deduce that for states other than the fundamental one, it
is very reasonable to approximate the Fermi-Dirac distribution function by a MaxwellBoltzmann distribution and this independently of temperature. Deduce that the chemical potential is defined by the
following implicit relation : 1 = g(β, w) , with 2
w = eβ(µ−Ec )
and g(β, w) =
w +w (n + 1)e−βnωc w+1 n≥1
Problem 2001 : Quantum Boxes and Optoelectronics
II.6. Show that the function g defined in II.5 is an increasing function of w and a decreasing function of β. Deduce that the system fugacity w decreases when the temperature increases and that µ is a
decreasing function of T . II.7. Specify the chemical potential limits at high and low temperatures. II.8. Deduce that beyond a critical temperature Tc – to be estimated only in the next questions –
it becomes legitimate to make a Maxwell-Boltzmann-type approximation for the occupation factor of the fundamental level. II.9. Now one tries to estimate Tc . Show that expression (1) can also be
written under the form : w w 1 = −w+ 2 w+1 (1 − e−βωc )2 [Hint : as a preliminary, calculate the sum χ(β) =
(2) e−βnωc and compare
it to the sum entering into equation (1)]. II.10. Consider that the Maxwell-Boltzmann approximation is valid for the 1 fundamental state if f0 ≤ . Hence deduce Tc versus ωc . e+1 II.11. For a
temperature higher than Tc show that : fn = e−βnωc
(1 − e−βωc )2 2
III. Statistics of holes III.1. Give the average number of electrons in a valence band state |n , l , s of energy En , versus the chemical potential µv of the valence electrons. (Recall : µv is
different from µ because the quantum box is out of equilibrium when an electron-hole pair is injected into it.) III.2. Deduce the average number of holes hn ,l ,s present in this valence state of
energy En . Show that the holes follow a Fermi-Dirac statistics, if one associates the energy −En to this fictitious particle. Write the hole chemical potential versus µv . It will thus be possible to
easily adapt the results of section II, obtained for the conduction electrons, to the statistical study of the holes in the valence band.
Exercises and Problems
IV. Radiative lifetime of the confined carriers Consider a quantum box containing an electron-hole pair at the instant t = 0 ; several physical phenomena can produce its return to equilibrium. In the
present section IV we only consider the radiative recombination of the charge carriers through spontaneous emission : the deexcitation of the electron from the conduction band to the valence band
generates the emission of a photon. However an optical transition between a conduction state |n, l, s and a valence state |n , l , s is only allowed if specific selection rules are satisfied. For the
InAs quantum boxes, one can consider, to a very good approximation, that for an allowed transition n = n , l = l and s = s and that each allowed optical transition has the same strength.
Consequently, the probability per unit time of radiative recombination for the electron-hole pair is simply given by fn,l,s · hn,l,s 1 = τr τ0
The time τr is called the radiative lifetime of the electron-hole pair ; the time τ0 is a constant characteristic of the strength of the allowed optical transitions.
IV.1. Show that, if only radiative recombinations take place and if the box contains one electron-hole pair at t = 0, the average number of electrons in the conduction band can be written e−t/τr for
t > 0.
Fig. 4: Lifetime τ and radiative yield η measured for InAs quantum boxes ; the curves are the result of the theoretical model developed in the present problem. The scales of τ, η and T are
logarithmic. IV.2. Now the temperature is zero. Calculate the radiative lifetime using (4). The lifetime τ of the electron-hole pair can be measured. Its experimental variation versus T is shown in
Fig. 4.
Problem 2001 : Quantum Boxes and Optoelectronics
IV.3. Assume that at low temperature one can identify the lifetime τ and the radiative lifetime τr . Deduce τ0 from the data of Fig. 4. IV.4. From general arguments show that the radiative lifetime
τr is longer for T = 0 than for T = 0. (Do not try to calculate τr in this question.) Now the temperature is in the 100–300K range. For InAs quantum boxes, some studies show that the energy spacing ω
between quantum levels is of the order of 15 meV for the valence band and of 100 meV for the conduction band. IV.5. Estimate Tc for the conduction electrons and the valence holes using the result of
II.10. Show that a zero temperature approximation is valid at room temperature for the distribution function of the conduction states. What is the approximation then valid for the valence states ?
IV.6. Deduce the dependence of the radiative lifetime versus temperature T for T > Tc . IV.7. Compare to the experimental result for τ shown in Fig. 4. Which variation is explained by our model ?
What part of the variation remains unexplained ? IV.8. From the experimental values of τ at T = 200 K and at T = 0 K deduce an estimate of the energy spacing between hole levels. Compare to the value
of 15 meV suggested in IV.4.
V. Non-Radiative Recombinations
trap trap
Fig. 5 : A simple two-step mechanism of non-radiative recombination. The electron-hole pair can also recombine without emission of a photon (nonradiative recombination). A simple two-step mechanism
of non-radiative recombination is schematized as follows : (a) a trap located in the neighborhood of the quantum box captures the electron ; (b) the electron then recombines with
Exercises and Problems
the hole, without emission of a photon. The symmetrical mechanism relying on the initial capture of a hole is also possible. One expects that such a phenomenon will be the more likely as the captured
electron (or hole) lies in an excited state of the quantum box, because of its more extended wave function of larger amplitude at the trap location. The radiative and non-radiative recombinations are
mutually independent. Write 1/τnr for the probability per unit time of non-radiative recombination of the electron-hole pair ; the total recombination probability per unit time is equal to 1/τ . The
dependence of the emission quantum yield η of the InAs quantum boxes is plotted versus T in the Fig. 4 above : η is defined as the fraction of recombinations occuring with emission of a photon. V.1.
Consider a quantum box containing an electron-hole pair at the instant t = 0. Write the differential equation describing the evolution versus time of the average number of electrons in the conduction
band, assuming that the electron-hole pair can recombine either with or without emission of radiation. V.2. Deduce the expressions of τ and η versus τr and τnr . V.3. Comment on the experimental
dependence of η versus T . Why is it consistent with the experimental dependence of τ ? Assume that the carriers, electrons or holes, can be trapped inside the nonradiative recombination center only
if they lie in a level of the quantum box of large enough energy (index n higher than or equal to n0 ). Write the nonradiative recombination rate under the following form : 1 =γ τnr
(fn,l,s + tn,l,s )
n≥n0 ,l,s
V.4. From the experimental results deduce an estimate of the index n0 for the InAs quantum boxes. [Hint : work in a temperature range where the radiative recombination rate is negligible with respect
to the non-radiative recombination rate and write a simple expression versus 1/T , to first order in η, then in ln η. You can assume, and justify the validity of this approximation, that the
degeneracy of any level of index higher than or equal to n0 can be replaced by 2(n0 + 1)]. This simple description of the radiative and non-radiative recombination processes provides a very
satisfactory understanding of the experimental dependences of the electron-hole pairs lifetime and of the radiative yield of the InAs
Problem 2002 : Physical Foundations of Spintronics
quantum boxes (see Fig. 4). These experimental results and their modeling are taken from a study performed at the laboratory of Photonics and Nanostructures of the CNRS (French National Center of
Scientific Research) at Bagneux, France.
Problem 2002 : Physical Foundations of Spintronics A promising research field in material physics consists in using the electron spin observable as a medium of information (spintronics). This
observable has the advantage of not being directly affected by the electrostatic fluctuations, contrarly to the space observables (position, momentum) of the electron. Thus, whereas in a standard
semiconductor the momentum relaxation time of an electron does not exceed 10−12 sec, its spin relaxation time can reach 10−8 sec. Here we will limit ourselves to the two-dimensional motion of
electrons (quantum well structure). The electron effective mass is m. The accessible domain is the rectangle [0, Lx ] × [0, Ly ] of the xOy plane. The periodic limit conditions will be chosen at the
edge of this rectangle. Part II can be treated if the results indicated in part I are assumed.
Part I : Quantum Mechanics
Consider the motion of an electron described by the hamiltonian ˆ2 ˆ 0 = p + α (ˆ σx pˆx + σ H ˆy pˆy ) 2m
is the momentum operator of the electron and the σ where pˆ = −i∇ ˆi (i = x, y, z) represent the Pauli matrices. Do not try to justify this hamiltonian. Assume that at any time t the electron state
is factorized into a space part and a spin part, the space part being a plane wave of wave vector k. This normalized state is thus written
eik·r eik·r [a+ (t)|+ + a− (t)|−] = Lx Ly Lx Ly
ã a+ (t) a− (t)
with |a+ (t)|2 + |a− (t)|2 = 1
(1) (2)
where |± are the eigenvectors of σ ˆz . One will write kx = k cos φ, ky = k sin φ, with k ≥ 0, 0 ≤ φ < 2π.
Exercises and Problems
I.1. Briefly justify that one can indeed search a solution of the Schroedinger equation under the form given in (2), where k and φ are time-independent. Write the evolution equations of the coefficients
a± (t). I.2
ã a+ (t) des(a) For fixed k and φ, show that the evolution of vector a (t) Å− ã a+ (t) d = cribing the spin state can be put under the form : i dt a− (t) ã Å a+ (t) ˆs ˆ s is a 2 × 2 hermitian matrix,
that will be expliH where H a− (t) citly written versus k, φ, and the constants α, and m. ˆ s are : E± = (b) Show that the two eigenenergies of the spin hamiltonian H 2 k 2 ± αk with the
corresponding eigenstates : 2m ã ã Å Å 1 1 1 1 √ |χ (3) = |χ+ = √ − eiφ −eiφ 2 2 Å
(c) Deduce that the set of |Ψk,± states :
eik·r |Ψk,+ = 2LxLy
1 eiφ
eik·r |Ψk,− = 2Lx Ly
1 −eiφ
ã (4)
with ki = 2πni /Li (i = x, y and ni positive, negative or null integers) constitute a basis, well adapted to the problem, of the space of the one-electron states. I.3 Assume that at t = 0 the state
is of the type (2) and write ω = αk. (a) Decompose this initial state on the |Ψk,± basis. (b) Deduce the expression of a± (t) versus a± (0). I.4 One defines sz (t), the average value at time t of the
z-component of the ˆz . Show that : electron spin Sˆz = (/2) σ
sz (t) = sz (0) cos(2ωt) + sin(2ωt) Im a+ (0)∗ a− (0) e−iφ lying along the z-axis. The I.5 The system is installed in a magnetic field B on the orbital variables is neglected. The hamiltonian thus
becomes effect of B ˆ =H ˆ 0 − γB Sˆz . H (a) Why does the momentum k remain a good quantum number ? Deduce ˆ s(B) can still be defined. You that, for a fixed k, a spin hamiltonian H ˆ s , γ, B and σ
will express it versus H ˆz . (B) ˆ s are (b) Show that the eigenenergies of H (B)
» 2 k 2 ± α2 k 2 + γ 2 B 2 /4 2m
Problem 2002 : Physical Foundations of Spintronics
(γB αk). The expression of the eigenvectors (c) Consider a weak field B ˆ of Hs to first order in B are then given by (B)
|χ+ = |χ+ −
γB |χ− 4αk
|χ− = |χ− +
γB |χ+ 4αk
(B) Deduce that the average values of Sˆz in the states |χ± are (B)
χ+ |Sˆz |χ+ = −
γB 4αk
χ− |Sˆz |χ− = +
γB 4αk
(d) What are the values of |χ± and of the matrix elements χ± |Sˆz |χ± , (such that γB αk). in the case of a strong field B Part II : Statistical Physics In this part, we are dealing with the
properties of an assembly of N electrons, assumed to be without any mutual interaction, each of them being submitted to the hamiltonian studied in Part I. We will first assume that there is no
magnetic field ; the energy levels are then distributed into two branches as obtained in I.2 : E± (k) =
2 k 2 ± αk 2m
with k = |k|. We will write k0 = mα/ and 0 = 2 k02 /(2m) = mα2 /2. II.1. We first consider the case α = 0. Briefly recall why the density of states D0 () of either branch is independent of for this
two-dimensional problem. Show that D0 () = mLx Ly /(2π2 ). II.2. We return to the real problem with α > 0 and now consider the branch E+ (k). (a) Plot the dispersion law E+ (k) versus k. What is the
energy range accessible in this branch ? (b) Express k versus k0 , 0 , m, and the energy for this branch. Deduce the density of states : ã Å … 0 (6) D+ () = D0 () 1 − + 0 Plot D+ ()/D0 () versus /0 .
II.3. One is now interested by the branch E− (k). (a) Plot E− (k) versus k and indicate the accessible energy range. How many k’s are possible for a given energy ?
Exercises and Problems
(b) Show that the density of states of this branch is given by … 0 < 0 : D− () = 2D0 () + 0 Å ã … 0 > 0 : D− () = D0 () 1 + + 0
II.4. In the rest of the problem, the temperature is taken as zero. (a) Qualitatively explain how the Fermi energy changes with the electrons number N . Show that if N is small enough, only the
branch E− is filled. (b) What is the value of the Fermi energy F when the E+ branch begins to be filled ? Calculate the number of electrons N ∗ at this point, versus Lx , Ly , m, and 0 . (c) Where are
the wave vectors k corresponding to the filled states F < 0 located ? II.5. Assume in what follows that F > 0. (a) What are the wave vectors k for the occupied states in either branch E± ? (b)
Calculate the Fermi wave vectors kF+ and kF− versus F , k0 , 0 , m and . (c) For N > N ∗ , calculate N − N ∗ and deduce the relation : Lx Ly m (20 + F ) π2 II.6. A weak magnetic field is now applied
on the sample. N=
(a) Using the result of I.5, explain why the Fermi energy does not vary to first order in B. (b) Calculate the magnetization along z of the electrons gas, to first order in B. II.7. Now one analyzes
the possibility to maintain a magnetization in the system for some time in the absence of magnetic field. At time t = 0, each electron is prepared in the spin state |+z . The magnetization along z is
thus M (0) = N /2. For a given electron, the evolution of sz (t) in the absence of collisions was calculated in I.4. Now this evolution is modified due to the effect of collisions. Because of the
elastic collisions of an electron on the material defects, its k wave vector direction may be randomly modified. These collisions are modeled assuming that : – they occur at regular times : t1 = τ ,
t2 = 2τ ,..., tn = nτ ,... – the angle φn characterizing the k direction between the nth and the (n + 1)th collision is uniformly distributed over the range [0, 2π[ and decorrelated from the previous
angles φn−1 , φn−2 , ..., and thus from the coefficients a± (tn ). – the angles φn corresponding to different particles are not correlated.
Problem 2002 : Physical Foundations of Spintronics
Under these assumptions, the total magnetization after many collisions can be calculated by making an average over angles for each electron. One writes s¯(t) for the average of sz (t) over the angles
φn . Show that s¯(tn+1 ) = cos(2ωτ ) s¯(tn ). II.8. One assumes that ωτ 1. (a) Is there a large modification of the average spin s¯ between two collisions ? (b) One considers large times t with
respect to the interval τ between two collisions. Show that the average spin s¯(t) exponentially decreases and express the characteristic decrease time td versus ω and τ . (c) Take two materials :
the first one is a good conductor (a few collisions take place per unit time), the second one is a poor conductor (many collisions occur per unit time). In which of these materials is the average spin
better “protected,” i.e., has a slower decrease ? Comment on the result. (d) For τ = 10−12 sec and ω=5 µeV, estimate the spin relaxation time td . II.9. To maintain a stationary magnetization in the
population of N spins, one injects g± electrons per unit time, of respective spins sz = ±/2 into the system. For each spin value, the electrons leave the system with a probability per unit time equal
to 1/tr . Besides, one admits that the relaxation mechanism studied in the previous question is equivalent to the exchange between the + and − spin populations : a + spin is transformed into a − spin
with a probability per unit time equal to 1/td , and conversely a − spin is transformed into a + spin with a probability per unit time equal to 1/td. One calls f± (t) the populations of each of the
spin states at time t, td is the spin relaxation time. ∗ (a) Write the rate equations on df± /dt. What are the spin populations f± in steady state ? ∗ f ∗ − f− (b) Deduce the relative spin
population, that is, the quantity P = + ∗ ∗. f+ + f− In what type of material will the magnetization degrade not too fast ?
Solution of the Exercises and Problems
Electrostatic Screening : exercise 2000 Magnetic Susceptibility of a “Quasi-1D” Conductor : exercise 2001 Entropies of the HC Molecule : exercise 2002 Quantum Boxes and Optoelectronics : problem 2001
Physical Fundations of Spintronics : problem 2002
Exercise 2000 : Electrostatic Screening
Exercise 2000 : Electrostatic Screening I.1. The density N/Ω and the chemical potential µ are related by N 1 = Ω Ω
1 D(k)fF D (k)d3k = 4π 3
4πk 2 dk ã 2 k 2 exp β −µ +1 2me Å
I.2. In the high temperature and low density limit, the exponential in the denominator of the Fermi-Dirac distribution is large with respect to 1, so that Å 2 2 ã +∞ −β k − µ N 1 2me 4πk 2 dk e Ω 4π
3 0 Å ã N 2πme kB T 3/2 2eβµ Ω h2 I.3. The total energy in presence of a potential is written ε=
2 k 2 + V (r) 2me
At high temperature and low density
1 4π 3
e−βV (r) d3r
4πk 2 dk e
ã 2 k 2 −µ 2me
I.4. By definition one has
N= Ω
one thus identifies n(r) to n(r) = n0 e−βV (r) with n0 =
1 4π 3
4πk 2 dk e
I.5. If βV (r) 1 for any r, n is uniform, i.e., n = n0 =
N Ω
ã 2 k 2 −µ 2me
Solution of the Exercises and Problems
II.1. The potential energy of an electron inside the electrostatic potential Φtot (r) is −e Φtot (r), the energy of a hole +e Φtot (r). The electron and hole densities are respectively equal to n(r)
= n0 exp[−β(−e Φtot (r)] p(r) = p0 exp[−β(+e Φtot (r)] with n0 = p0 =
N . Ω
II.2. The total density of charges at r is equal to e[p(r) − n(r)]. The charge localized at the origin also enters into the Poisson equation : ∆Φtot (r) +
e[p(r) − n(r)] Qδ(r) + =0 ε0 εr ε0 εr
In the high temperature limit and to the lowest order p(r) − n(r) = n0 e−βeΦtot (r) − eβeΦtot (r) −2n0 eβΦtot (r) Whence the equation satisfied by Φtot (r) : ∆Φtot (r) = One writes λ2 = II.3. If
2n0 e2 β Q Φtot (r) − δ(r) ε0 εr ε0 εr
Ω ε0 εr kB T , this is the square of a length. N 2e2
N = 1022 m−3 T = 300 K, εr = 12.4, then λ = 30 nm. Ω
II.4. The effect of the charge located at the origin cannot be felt beyond a distance of the order of λ : at a larger distance Φtot (r) is rapidly constant, the electron and hole densities are
constant andopposite. The mobile electrons and holes have screened the charge at the origin.
Exercise 2001 : Magnetic Susceptibility of a “Quasi-1D” Conductor
Exercise 2001 : Magnetic Susceptibility of a “Quasi-One-Dimensional” Conductor I.1. a) λ = 0 kB T γB
b) λ kB T γB |+1>
ε0 + λ
|−1> |0 > B
|0 > B
|−1> Three levels in ε0 for B = 0 Comparable occupations of the three levels
At the temperature T , the upper levels are weakly occupied
c) kB T λ γB |+1> ε
ε0 + λ
d) γB λ kB T |+1>
ε0 + λ |−1>
|0 > B
|0 > B |−1>
Comparable occupations of the three states at T . Only level | − 1 is populated. Little influence of λ, this case The magnetization is saturated, each is almost equivalent to a) ion carries a moment
Solution of the Exercises and Problems
I.2. State |0, energy ε0 |+ ε0 + λ + γB |− ε0 + λ − γB For a single ion : z = e−βε0 + e−β(ε0 +λ+γB) + e−β(ε0 +λ−γB) = e−βε0 [1 + e−β(λ+γB) + e−β(λ−γB) ] For the whole system : Z = (z)N F = −kB T ln Z
¶ î ó© = −N kB T −βε0 + ln 1 + e−β(λ+γB) + e−β(λ−γB) ó î = N ε0 − N kB T ln 1 + e−β(λ+γB) + e−β(λ−γB) −βλ+βγB − e−βλ−βγB e ∂F = Nγ M =− ∂B 1 + e−β(λ+γB) + e−β(λ−γB) For B → 0, M ∼
N γ2βγBe−βλ i.e., 1 + 2e−βλ χ=
2N γ 2 βe−βλ Ω(1 + 2e−βλ )
I.3. Discussion of the limit cases : (a) λ = 0 ex − e−x with x = βγB M = Nγ 1 + ex + e−x 2N γ 2 (Curie law). In the limit x → 0, one obtains χ = 3ΩkB T (b) βλ 1 βγB = x M N γe−βλ (ex − e−x ) χ
2N γ 2 e−βλ ΩkB T
(Curie law attenuated by the thermal activation towards the magnetic level). (c) 1 βλ x M
2N γ 2 βB N γ[x − βλ + βλ + x] = 3 3 χ=
This situation is analogous to a).
2N γ 2 3ΩkB T
Exercise 2001 : Magnetic Susceptibility of a “Quasi-1D” Conductor
(d) βγB βλ 1 M Nγ The magnetization is saturated. I.4. The high temperature behavior of χ in the figure suggests a Curie law. The expression of χ obtained in I.2. vanishes for β → ∞ and for β → 0. It
is positive and goes through a maximum for β of the order of λ1 (exactly for 1 − βλ+ 2e−βλ = 0, i.e., βλ = 1.47). From the temperature of the χ maximum λ can be deduced. Note : in fact, for the
system studied in the present experiment, the ε0 + λ level contains 3 states, and not 2 like in this exercise based on a simplified model. II. z=
e−lβγB = eβSγB
e−lβγB = eβSγB
1 − e−(2S+1)βγB 1 − e−βγB
1 − e−(2S+1)x f = −kB T ln z = −kB T SβγB + ln 1 − e−x m=−
ô with x = βγB
∂f ∂f (2S + 1)γβe−(2S+1)x kB T βγe−x = − γβ = γS + kB T − ∂B ∂x 1 − e−x 1 − e−(2S+1)x 2S + 1 1 m = S + (2S+1)x − γ e − 1 ex − 1
3 limits : x → +∞ x → −∞ x → 0 (a) x → +∞ (high positive field, low temperature) m →S γ
(b) x → −∞ (high negative field, low temperature) m → S − (2S + 1) + 1 = −S γ
(c) x → 0 : (low field, high temperature). One expects m to be proportional to B, which allows to define a susceptibility. Therefore one has to develop m up to the term in x, which requires to express
Solution of the Exercises and Problems
the denominators to third order. 2S + 1 m S+ − 2 3 γ (2S + 1)x + (2S + 1)2 x2 + (2S + 1)3 x6 + ... x + S(S + 1) γBS(S + 1) m x = γ 3 3kB T
x2 2
1 3 + x6 + . . .
(Curie law)
Exercise 2002 : Entropies of the HC Molecule ò 5 Ω 3/2 + N kB (2πmk T ) B N h3 2 ò ï Ω 3/2 = −kB T ln (2πmk T ) B N h3 ï
S3D = N kB ln µ3D 2. Z2D S2D µ2D
ãN A 2πmkB T h2 ï ò A = N kB ln (2πmkB T ) + 2N kB N h2 ï ò A = −kB T ln (2πmk T ) B N h2 1 = N!
A molecule is spontaneously adsorbed if µ2D < µ3D . 3. If the adsorbed film is mobile on the surface : ∆S = S2D − S3D ï ò Ah 1 ∆S = N kB ln − N kB 2 Ω(2πmkB T )1/2 4. For the only translation degrees
of HC : S3D = 152.9 J.K−1 S2D = 48.2 J.K−1 The average 3D distance is of 3.3 nm, it is 0.11 nm between molecules on the surface (which is less than an interatomic distance, but one has also to
account for the large corrugation of the surface).
Problem 2001 : Quantum Boxes and Optoelectronics
5. ZN =
1 (zrot )N (ztr )N N!
Ç å βp2φ dφdpφ exp − z1rot = h 2IA å Ç +∞ βp2φ 2π dpφ exp − = h −∞ 2IA φ 2π = 2πIA kB T h å Ç ã Å βp2φ dθdpθ dφdpφ βp2θ exp − z2rot = exp − h2 2I 2I sin2 θ
8π 2 IkB T h2
4π 2 kB T = 2 IkB T , for a symmetrical mo2hcB h lecule, such that its physical situation is identical after a 180˚ rotation. But HC is not symmetrical The expression in the course is
8. Rotation entropy of HC : Frot = −N kB T ln z2rot (the factor for indistinguishability is included inside the translation term). ò ï ∂Frot ∂ ln z2rot = N kB ln z2rot + T Srot = − ∂T ∂T −1 Srot =
33.3 J · K
Problem 2001 : Quantum Boxes and Optoelectronics I.1. Owing to the spin degeneracies one has nEc =
2 eβ(Ec −µ)
and nEv =
2 eβ(Ev −µ)
Solution of the Exercises and Problems
I.2. At any temperature nEc + nEv = 2, in the present approximation in which the total number of electrons is 2. It follows that 1 eβ(Ec −µ)
1 eβ(Ev −µ)
= 1,
an implicit relation between µ and β
I.3. This equation can be reexpressed, noting x = eβµ , y = eβEc and z = eβEv : x x + =1 y+x z+x that is, zx + yx + 2x2 = zy + zx + yx + x2 , x2 = yz, whence the result Ec + Ev , independently of
temperature. µ= 2 I.4. One immediately deduces from I.3 nEc =
2 eβ(Ec −µ)
β(Ec −Ev ) 2
2 = 4 × 10−9 . e20 + 1
At 300 K with Ec − Ev = 1 eV, nEc =
At 300 K, at thermal equilibrium, the probability for the conduction levels to be occupied is thus extremely small. II.1. fnls =
1 eβ(Enls −µ)
As the energy Enls only depends of n, consequently we will write fnls = fn . II.2. Noting gn for the degeneracy of state n, one has gn fn = 2(n + 1)fn n = 1 = n
an implicit relation between µ and β. II.3. If one had µ > Ec , 2f0 would be larger than 1, in contradiction with n = 1. II.4. En is an increasing function of n, as is also eβ(En−µ) , whence the
result. One has f0 < 1/2, and fn+1 < fn , so that : 1≥
n i=0
gi fi >
gi fn = 2fn
(n + 1) = 2fn
i.e., fn ≤
1 (n + 1)(n + 2)
(n + 1)(n + 2) 2
Problem 2001 : Quantum Boxes and Optoelectronics
1 1 , f2 ≤ , etc. One then 6 12 β(E1 −µ) β(E2 −µ) > 5, in f2 e > 11, which justifies deduces that in f1 one has e the use of the Maxwell-Boltzmann approximation, i.e.,
II.5. The relation obtained in II.4 implies f1 ≤
fnls =
1 eβ(Enls−µ)
≈ e−β(En −µ)
for any excited state. Taking into account the validity of this approximation, the implicit relation between µ and β becomes 1 = 2f0 + 2
(n + 1)e−β(En−µ)
With En = Ec +nωc , and w = e−β(Ec −µ) , this expression can be transformed into ∞ 1 1 (n + 1)e−βnωc = g(β, w) = −1 +w 2 w +1 n=1
∞ w +w (n + 1)e−βnωc w+1 n=1
II.6. Obviously g is a decreasing function of β at fixed w. Moreover, ∞ ∂g 1 = + (n + 1)e−βnωc > 0, thus g is an increasing function ∂w (w + 1)2 n=1 of w. When T increases, β decreases. For g to
remain constant [equal to 1/2 from (1)], the fugacity w must decrease when T increases. 1 ∂w ∂µ = w 2 (Ec − µ) + β . From II.3., µ < E0 = Ec ∂T kT ∂T ∂w < 0 implies and the first term of the right
member is > 0. Consequently, ∂T ∂µ < 0. ∂T
Since w = e−β(Ec −µ) ,
II.7. From equation (1), – when β → 0,
(n + 1)e−βnωc tends to infinity, thus the solution w must
tend to 0. Then µ tends to −∞. (One finds back the classical limit at high temperature) ; w 1 = , so that w 2 w+1 tends to 1 in this limit : the low temperature limit of µ is then E0 = Ec .
– when T → 0, the sum tends to 0 and (1) becomes
Solution of the Exercises and Problems
1 . At low temperature, w tends to 1 and f0 tends to 1/2, w−1 + 1 whereas at high temperature w tends to 0 and the Maxwell-Boltzmann statistics is then applicable to the 0 level. Since w is a
monotonous function of T , this latter limit must be valid beyond a critical temperature Tc , so that then eβ(E0 −µ) 1 (w 1). II.8. f0 =
II.9. The function χ =
e−βnωc is equal to
e−βωc 1 . Its = βωc −βω c 1−e e −1
∞ ∂χ = −ωc ne−βnωc ∂β n=1
allows to conveniently calculate the sum in the right member of (1), which is equal to : ∞
(n + 1)e−βnωc = χ −
1 eβωc 1 ∂χ = βωc + βωc ωc ∂β e − 1 (e − 1)2
2eβωc − 1 e2βωc 1 = −1 + = −1 + βω 2 βω (e c − 1) (e c − 1)2 (1 − e−βωc )2
whence the result (2). II.10. This condition is equivalent to w < 1/e, and at T = Tc one has w = 1/e. For w = 1/e, (2) becomes 1 e(e − 1) = ωc 2 2(e + 1) 1 − exp − kB Tc ωc 2 2(e + 1) = 1 − exp −
i.e., 2 e +e+2 kB Tc 1+
which provides :
ωc 2(e + 1) = − ln 1 − kTc e2 + e + 2
Numerically kB Tc = 0.65 ωc. II.11. For T > Tc , the Maxwell-Boltzmann statistics is applicable to the 1 fundamental state and consequently to any state. f0 = −1 can be apw +1 proached by w, which
allows to write (2) as : w 1 = 2 (1 − e−βωc )2
Problem 2001 : Quantum Boxes and Optoelectronics
It follows that fn = e−β(E0 +nωc −µ) = we−βnωc = e−βnωc
(1 − e−βωc )2 2
III.1. The average number of electrons in a state of spin s of energy E in the valence band is also the Fermi-Dirac distribution function : n =
1 eβ(E−µv )
III.2. hn s = 1 − n =
eβ(E−µv ) 1 1 = −β(E−µ ) = β(−E+µ ) β(E−µ ) v v v + 1 e +1 e +1 e
hn s is indeed an expression of the Fermi-Dirac statistics, under the condition of writing Et = −E. The holes chemical potential is then given by µt = −µv . The results on the electrons in questions
II.1. to II.11. apply to the holes (provided this transposition for the energy and for µ). IV.1. If all recombinations are radiative, dn n =− dt τr whence the result : n = n0 e−t/τr with n0 = 1.
IV.2. At T = 0, both f0 and h0 are equal to 1/2, all the other occupation factors being equal to zero. Consequently, one has : f0,0,1 h0,0,1 + f0,0,−1 h0,0,−1 2f0 h0 1 1 = = = τr τ0 τ0 2τ0
i.e., τr = 2τ0
IV.3. One deduces from the figure τr = 2τ0 = 1 nsec, that is, τ0 = 0.5 nsec. IV.4. At any temperature one has fn,l,s hn,l,s 1 = τr τ0
Moreover the average numbers of electrons and holes are equal to 1. As the occupation factors are spin-independent, for a given value of s one gets : n,l
fn =
n ,l
hn =
One deduces :
Solution of the Exercises and Problems
f n hn ≤
1 hn = 4 n ,l
This limit being reached at T = 0, one then finds that at any temperature the radiative lifetime is larger than or equal to its value at T = 0. IV.5. For the electrons ωc 100 meV, whereas ωv 15 meV
for the holes. Tc is thus of the order of 780 K for the electrons and 117 K for the holes. Expression (2) shows that when kB T ω, w is of the order of 1, which indeed constitutes the zero temperature
approximation. The MaxwellBoltzmann approximation is valid for any hole state if T > Tc = 117 K, from II.10. IV.6. One thus has f0 = 1/2 and all the other fn ’s are equal to zero, whereas from (3),
(1 − e−βωv )2 h0 = 2 One then deduces that the only active transition is associated to the levels |001 and |00 − 1 (like at zero temperature) and that 1 2f0 h0 h0 (1 − e−βωv )2 = = = τr τ0 τ0 2τ0
IV.7. This model allows one to qualitatively understand the increase of the radiative lifetime with temperature, due to the redistribution of the hole probability of presence on the excited levels.
This is indeed observed above 250 K, a temperature range in which a decrease of the lifetime is observed in the experiment. IV.8. At 200 K one approximately gets τr (T ) 1 (1 − e−βωv )2 = , which
gives 2 1 βωv = − ln 1 − √ 2
2τr (0), that is,
i.e., a splitting between holes levels of 1.23 kB T for T = 200 K, that is, of 20 meV. 1 . This τr means that the variation of the average number of electrons per unit time, due to these radiative
recombinations, is given by
V.1. The probability per unit time of radiative recombination is
n dn =− dt τr
Problem 2001 : Quantum Boxes and Optoelectronics
In presence of non radiative recombinations at the rate becomes :
1 , the total variation τnr
dn n n =− − dt τr τnr V.2. This allows to define the total lifetime τ by dn n n n =− − =− dt τr τnr τ i.e., 1 1 1 = + τ τr τnr and a radiative yield η equal to the ratio between the number of
radiative recombinations per unit time and the total number of recombinations per unit time : η=
n τr n τ
τ = τr
1 τr 1 τr
1 τnr
1 r 1 + ττnr
V.3. When only radiative recombinations are present, τnr → ∞ and the radiative yield η is equal to 1, independently of the value of the radiative lifetime. Here the yield strongly decreases above 200
K, whereas it is constant for T < 150 K. This implies that a nonradiative recombination mechanism becomes active for T > 200 K. Correlatively, this is consistent with the decrease of the total
lifetime τ in the same temperature range, whereas the results of question IV.6. show that the radiative lifetime should increase with temperature. V.4. We found in V.2 : η =
In this expression
1 = γ τnr
1 τr 1 τr
1 τnr
(fn,l,s + hn,l,s ). At sufficiently high tem-
n≥n0 ,l,s
perature, around 350 K, τ ≈ 0.5 nsec whereas τr > 2 nsec. In this regime, τnr the nonradiative recombinations determine τ , and η ≈ . Besides, only the τr hole levels of index n0 > 0 are occupied,
they satisfy a Maxwell-Boltzmann statistics. This allows to write, using (3), (1 − e−βωv )2 1 =γ 2(n + 1)e−βnωv τnr 2 n≥n0
Solution of the Exercises and Problems
Taking an orbital degeneracy equal to n0 + 1 for each excited level, one has (1 − e−βωv )2 1 2(n0 + 1)e−βn0 ωv =γ e−βnωv τnr 2 n≥0
= γ(1 − e
)(n0 + 1)e
−βn0 ωv
Using the result of IV.6, 1 γ(1 − e−βωv )(n0 + 1)e−βn0 ωv (n0 + 1)e−βn0 ωv = = 2γτ0 −βωv )2 (1−e η (1 − e−βωv ) 2τ0
≈ 2γτ0 (n0 + 1)e−βn0 ωv In this temperature range ln(η) = constant +
n0 ωv . kT
The slope of η(T ) provides an approximation for n0 : between 200 and 350 K ã Å Å ã 1 n0 ωv 1 η200K − = ln(10) = ln η350K kB 200 350 which yields a value of n0 ranging between 5 and 6.
Problem 2002 : Physical Foundations of Spintronics Part I : Quantum Mechanics
I.1. A state having a plane wave eik·r for space part is an eigenstate of the momentum operator, with the eigenvalue k. The effect of the hamiltonian on this type of state gives : ñ Å ãô ï 2 2Å ã Å
−iφ ãò eik·r eik·r k a+ e a+ a− ˆ H = + kα a− a− eiφ a+ Lx Ly Lx Ly 2m One thus sees that, if the coefficients a± verify the evolution equations ia˙ ± =
2 k 2 a± + kα e∓iφ a∓ 2m
the proposed eigenvector is a solution of the Schroedinger equation ˙ = H|ψ. ˆ i|ψ
Problem 2002 : Physical Foundations of Spintronics
I.2. (a) For fixed k and φ, one is restricted to a two-dimensional problem, only on the spin variables, and the hamiltonian is equal to ã Å 2 k 2 0 e−iφ ˆ Hs = Id + kα eiφ 0 2m where Id represents the
identity matrix. (b) It can be immediately verified that the two proposed vectors are eigenˆ s with the eigenenergies E± . vectors of H (c) The space of the one-electron states is the tensor product
of the space associated to the electron motion in the rectangle [0, Lx] × [0, Ly ] with the spin space. The functions eik·r / Lx Ly are an orthonormal basis of the orbital space, provided that the
choice of k allows this function to satisfy the periodical limit conditions : this requires ki = 2πni /Li , with ni positive, negative or null integer. As for the two vectors |χ± , they constitute a
basis of the spin space. Thus the vectors |Ψk,± are an ˆ 0. orthonormal eigenbasis of the hamiltonian H I.3. (a) The development of the initial spin state on the pair of eigenstates |χ± is ã Å ã Å 1
a+ (0) = a+ (0) + a− (0)e−iφ a− (0) eiφ 2 Å ã 1
1 + a+ (0) − a− (0)e−iφ −eiφ 2 The initial state (orbital ⊕ spin) is thus given by |Ψ =
a+ (0) + a− (0)eiφ a+ (0) − a− (0)eiφ √ √ |Ψk,+ + |Ψk,− . 2 2
(b) One deduces that the electron spin state at time t is characterized by Å
a+ (t) a− (t)
ã = +
Å ã e−iE+ t/
1 −iφ a+ (0) + a− (0)e eiφ 2 Å ã e−iE− t/
1 −iφ a+ (0) − a− (0)e −eiφ 2
After development, one finds a± (t) = e−i
k t / 2m
a± (0) cos(ωt) − ia∓ (0)e∓iφ sin(ωt) .
Solution of the Exercises and Problems
I.4. A simple calculation gives sz (t) = =
|a+ (t)|2 − |a− (t)|2 2
cos(2ωt) |a+ (0)|2 − |a− (0)|2 2
+2 sin(2ωt) Im a∗+ (0)a− (0)e−iφ
One then deduces the result announced in the text. I.5. (a) The new term in the hamiltonian, arising from the magnetic field, commutes with the momentumÅ operator. One Å can thus ã ã always look for
eia+ a+ i k· r genstates of the form e , where is a two-component a− a− vector providing the spin state. The hamiltonian determining the evolution of this spin is written in the |±z basis : Å ã 2 k 2
−γB/2 kαe−iφ ˆ s − γB σ ˆ s(B) = H ˆz = Id + H kαeiφ γB/2 2 2m (b) One verifies that the eigenenergies of the above matrix are those given in the text. σz |χ∓ = 1. One deduces (c) One notices that χ±
|Sˆz |χ± = 0 and that χ± |ˆ the formulae given in the text. ˆ s(B) are practically equal to the (d) For a large B, the eigenstates of H ˆ eigenstates |±z of Sz . More precisely, for γ < 0 (electron),
one has (B) (B) (B) |χ± ±|±z . Consequently, χ± |Sˆz |χ± ±/2. Part II : Statistical Physics
II.1. For α = 0, one has E± = 2 k 2 /(2m). The number of quantum states in a given spin state, with a wave vector of coordinates in the range (kx to kx +dkx ; ky to ky + dky ) is equal to d2 N =
Lx Ly Lx Ly dkx dky = k dk dφ 2 (2π) (2π)2
i.e., by integration over the angle φ : dN =
Lx Ly Lx Ly m kdk = d 2π 2π 2
The density of states is thus independent of the energy and equal to D0 () = Lx Ly m/(2π2 ).
Problem 2002 : Physical Foundations of Spintronics
II.2. (a) The dispersion law in this branch is written E+ (k) =
2 (k + k0 )2 − 0 2m
This corresponds to a portion of parabola, plotted in Fig.1. The spectrum of the allowed energies is [0, ∞[.
Dispersion relations E
k F+
Fig. 1 : Typical behavior of the dispersion laws E± (k) in either branch.
(b) For the branch = E+ (k), … k = −k0 + and thus
2m ( + 0 ) 2
ã Å … 0 Lx Ly dk D+ () = k = D0 () 1 − 2π d + 0
The corresponding curve is plotted in 2.
Solution of the Exercises and Problems 4
Fig. 2: Densities of states D+ /D0 (lower curve) and D− /D0 (upper curve), plotted versus /0 . II.3. (a) For the branch = E− (k), E− (k) =
2 (k − k0 )2 − 0 2m
the variation of which is given in Fig.1. The spectrum of the permitted energies is [−0 , ∞]. – For −0 < ≤ 0, there are two possible wavevector moduli k for a given energy : … 2m k1,2 = k0 ± ( + 0 )
2 – For > 0, there is a single allowed wavevector modulus : … 2m ( + 0 ) k = k0 + 2 (b) One then deduces, for > 0 Å ã … 0 D− () = D0 () 1 + + 0 and for < 0 (adding both contributions corresponding
for either sign in the k1,2 formula) : … 0 D− () = 2D0 () + 0
Problem 2002 : Physical Foundations of Spintronics
The corresponding curve is plotted in Fig. 2. II.4. (a) Assume that the electrons are accommodated one next to the other in the available energy levels. Since the temperature is taken to be zero, the
first electron must be located on the fundamental level, the second one on the first excited level, and so forth. Thus the first levels to be filled are those of the E− branch, with k around k0 , the
Fermi energy F being a little larger than −0 for N small. When N increases, the energy F also increases. It reaches 0 when all the states of the E− branch, of wave vectors ranging between 0 and 2k0 ,
are filled. For larger numbers of electrons, both branches E± are filled. (b) The E+ branch remains empty until F reaches the value F = 0. There are then 0 2Lx Ly m D− ()d = 0 N∗ = π2 −0 electrons on
the E− branch. (c) In the E− branch, the filled states verify E− (k) k1 < k < k2 with : … 2m k1,2 = k0 ± (F + 0 ) 2 | {"url":"https://epdf.pub/statistical-physics-including-applications-to-condensed-matter414290fbf2e2786e52f30fbfa66bd6cc62238.html","timestamp":"2024-11-06T08:59:53Z","content_type":"text/html","content_length":"526837","record_id":"<urn:uuid:24e1a47f-5cc7-4fb8-b475-075c15a7b82a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00586.warc.gz"} |
Monty Hall Problem, Why 50–50 Is Impossible.
This article aims to reinforce the understanding of how the probability behind Monty Hall (or probability in general) works, it is for someone who agrees the answer is 66% but need a more persuasive
Monty Hall problem is settled, if you think you are smarter and wish to argue that the answer is other than 66%(the common wrong answer is 50–50), you should stop reading now, GTFO and google what is
Dunning–Kruger effect instead.
I would not reiterate what is Monty Hall problem, I suggest you google it if you are new to the problem.
To understand Monty Hall problem, most people explain why the answer is 66%, but I think a more effective way is to explain why 50–50(host reveal a door, 2 doors left, 50–50) is impossible, to
understand this, I am going to explain to you 2 critical points:
1.Why the probability changes?
2. Why the probability does not change?
And, there are few elements that lead to changes in Monty Hall problem,
1. switching.
2. changes in choices.
For smoother understanding, we will ignore switching for now, we going to focus on changes in choices first.
Now here is the main puzzle: what happens to the probability if the choices available changes before or after you made your choice?
To demonstrate this problem, we are going to use poker cards as exmaples, imagine there are four cards, J,K,Q,A on the table, the chance for you to flip the Ace is 25% obviously, what happens if:
A. BEFORE you pick a card:
i. I randomly remove 3 cards.
ii. I open all the non-Ace cards.
iii. I add more non-Ace cards, facing down of course.
B. AFTER you pick a card:
i. I randomly remove 3 cards, except your card.
ii. I open all the non-Ace cards, except your card.
iii. I add more non-Ace cards, facing down of course.
So what happens to the probability of getting Ace in Ai, Aii, Aiii, Bi, Bii and Biii? Keep in mind, no switching here for now.
Take your time to think before you continue to scroll down.
Ai: 25%(same)
Aii: 100%(increase)
Aiii: less than 25%(decrease)
Bi: 25%(same)
Bii: 25%(same)
Biii: 25%(same)
Did you get it right?
As you can see, the probability in all the cases of B remains the same, after you pick a card(you made your choice), whatever happens to the rest of the cards, is not your concern anymore, it WON’T
change your pick, which is the reason why 50–50 is impossible.
Mean while, before you make your choice, anything that happens to the pool (choices) WILL affect your pick:
Aii: I reveal information to you, hence you know what left in the face-down card, increasing probability.
Aiii: I added more cards to the pool, reducing your chance to flip ace, decreasing probability.
now you may wonder will the action in Ai affect the probability, yes it will, but normally in a math question like this, we assume the chance of every card gets removed is the same, hence the
probability remains the same in this case even though it is affected.
So what switching means? Switching is actually very simple, it is not even the main point here, it basically means you pick a new card in a new(sub) pool.
Now if you are sharp enough, by now you should notice, Monty Hall problem is made up of Bii and Aii cases, you start with Bii, if you choose to switch, you jump from Bii case to Aii(new pool)case. | {"url":"https://acidcoder.medium.com/monty-hall-problem-why-50-50-is-impossible-42c5aa2f6ff3?source=author_recirc-----5a7ad577b9d5----2---------------------1da5d643_339d_49d9_b686_f3e5ae69ed98-------","timestamp":"2024-11-12T16:43:14Z","content_type":"text/html","content_length":"100377","record_id":"<urn:uuid:6ba7dde4-e088-495b-be80-df25ad29740c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00470.warc.gz"} |
Mastering Insertion Sort: A Detailed Guide
Sorting is a fundamental operation in the field of computer science, and because of this, there are various algorithms available to solve this problem. Each one is chosen based on factors such as the
number of items to sort, the degree of order already present, the computer architecture where the algorithm will be executed, the type of storage device, among others. In this article, we will
explore the insertion sort algorithm, understanding its nuances, strengths, and limitations.
What is insertion sort?
Insertion sort is a comparison-based algorithm that constructs its output one element at a time. It works similarly to the method we use to sort a deck of cards: we take one card at a time, compare
it with the ones we already have in our hand, place the card in the correct position, and repeat this action until we finish our deck.
It is an adaptive algorithm, meaning it is efficient for small data sets, as well as other quadratic complexity algorithms ($O(n^2)$). It is simple to implement, requires a constant amount of memory,
as changes in the list are made in the list itself (without the need to create a new list, which would double the use of memory), and is capable of sorting the list as it receives it.
How does insertion sort work?
Initialization: We assume that the first element of our list is already sorted. We proceed to the next element, consider it as our key, and insert it in the correct position in the sorted part of the
Iteration: For each item in the list (starting from the second element), we store the current item (key) and its position. Then we compare the key with the elements in the sorted part of the list
(elements before the key);
Insertion: If the current element in the sorted part is greater than the key, we move that element one position up. This creates space for the new key to be inserted;
Repositioning the Key: We continue moving elements one position up until we find the correct position for the key. This position is found when we encounter an element that is less than or equal to
the key or when we reach the beginning of the list;
Repeat: The process is repeated for all the elements in the list.
Implementation in JavaScript
To better understand the algorithm, let's implement it in JavaScript:
* Sorts an array of numbers using the insertion sort algorithm.
* @param {number[]} numbers - The list of numbers to be sorted.
* @returns {number[]} - The sorted list of numbers.
function insertionSort(numbers) {
for (let i = 1; i < numbers.length; i++) {
const key = numbers[i]
let j = i - 1
while (j >= 0 && numbers[j] > key) {
numbers[j + 1] = numbers[j]
numbers[j + 1] = key
Complexity Analysis
Time Complexity
Best Case (Array is already sorted): $O(n)$. This is because the inner loop (while) is not executed at all; Average Case and Worst Case (Array is sorted in reverse order): $O(n^2)$. In the worst
case, each iteration will cause an element to be moved. This makes the algorithm inefficient for large data sets.
Space Complexity
Space Complexity: $O(1)$. Insertion sort is an in-place algorithm; it requires a constant amount of memory space. | {"url":"https://douglasmoura.dev/en-US/mastering-insertion-sort-a-detailed-guide","timestamp":"2024-11-09T10:19:43Z","content_type":"text/html","content_length":"75866","record_id":"<urn:uuid:26ebd5d7-8a44-4291-9818-147cf2ff5e2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00803.warc.gz"} |
Temporal specifications with accumulative values
There is recently a significant effort to add quantitative objectives to formal verification and synthesis. We introduce and investigate the extension of temporal logics with quantitative atomic
assertions, aiming for a general and flexible framework for quantitative-oriented specifications. In the heart of quantitative objectives lies the accumulation of values along a computation. It is
either the accumulated summation, as with the energy objectives, or the accumulated average, as with the mean-payoff objectives. We investigate the extension of temporal logics with the
prefix-accumulation assertions Sum(v) ≥ c and Avg(v) ≥ c, where v is a numeric variable of the system, c is a constant rational number, and Sum(v) and Avg(v) denote the accumulated sum and average of
the values of v from the beginning of the computation up to the current point of time. We also allow the path-accumulation assertions LimInfAvg(v) ≥ c and LimSupAvg(v) ≥ c, referring to the average
value along an entire computation. We study the border of decidability for extensions of various temporal logics. In particular, we show that extending the fragment of CTL that has only the EX, EF,
AX, and AG temporal modalities by prefix-accumulation assertions and extending LTL with path-accumulation assertions, result in temporal logics whose model-checking problem is decidable. The extended
logics allow to significantly extend the currently known energy and mean-payoff objectives. Moreover, the prefix-accumulation assertions may be refined with "controlled-accumulation", allowing, for
example, to specify constraints on the average waiting time between a request and a grant. On the negative side, we show that the fragment we point to is, in a sense, the maximal logic whose
extension with prefix-accumulation assertions permits a decidable model-checking procedure. Extending a temporal logic that has the EG or EU modalities, and in particular CTL and LTL, makes the
problem undecidable.
Original language English
Title of host publication Proceedings - 26th Annual IEEE Symposium on Logic in Computer Science, LICS 2011
Pages 43-52
Number of pages 10
State Published - 2011
Event 26th Annual IEEE Symposium on Logic in Computer Science, LICS 2011 - Toronto, ON, Canada
Duration: 21 Jun 2011 → 24 Jun 2011
Publication series
Name Proceedings - Symposium on Logic in Computer Science
ISSN (Print) 1043-6871
Conference 26th Annual IEEE Symposium on Logic in Computer Science, LICS 2011
Country/Territory Canada
City Toronto, ON
Period 21/06/11 → 24/06/11
Dive into the research topics of 'Temporal specifications with accumulative values'. Together they form a unique fingerprint. | {"url":"https://cris.huji.ac.il/en/publications/temporal-specifications-with-accumulative-values-13","timestamp":"2024-11-04T17:54:26Z","content_type":"text/html","content_length":"53317","record_id":"<urn:uuid:73d44464-b1bc-4395-96be-cd65dda3bc68>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00423.warc.gz"} |
Networks of curves evolving by curvature in the plane
The network flow is the evolution of a regular network of embedded curves under curve shortening flow in the plane, where it is allowed that at triple points three curves meet under a 120 degree
condition. A network is called non-regular if at multiple points more than three embedded curves can meet, without any angle condition but with distinct unit tangents. Studying the singularity
formation under the flow of regular networks one expects that at the first singular time a non-regular network forms.
In this course we will present recent work together with Tom Ilmanen and Andre Neves, showing that starting from any non-regular initial network there exists a flow of regular networks. The lectures
will cover the following material:
1) Short-time existence and higher interior estimates (based on work of Mantegazza, Novaga and Tortorelli).
2) Singularity formation, generalised self-similar shrinking networks and local regularity.
3) Self-similarly expanding networks and their dynamical stability.
4) Desingularising non-regular initial networks and short-time existence.
5) Towards an evolution through singularities. | {"url":"https://cvgmt.sns.it/seminar/339/","timestamp":"2024-11-11T07:35:28Z","content_type":"text/html","content_length":"8525","record_id":"<urn:uuid:0a3e711e-01f6-4cc5-a2b7-1dd5dc40dfb2>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00291.warc.gz"} |
manual page
radixsort, sradixsort — radix sort
#include <limits.h>
#include <stdlib.h>
radixsort(const u_char **base, int nmemb, const u_char *table, u_int endbyte);
sradixsort(const u_char **base, int nmemb, const u_char *table, u_int endbyte);
The radixsort() and sradixsort() functions are implementations of radix sort.
These functions sort an array of nmemb pointers to byte strings. The initial member is referenced by base. The byte strings may contain any values; the end of each string is denoted by the
user-specified value endbyte.
Applications may specify a sort order by providing the table argument. If non-null, table must reference an array of UCHAR_MAX + 1 bytes which contains the sort weight of each possible byte value.
The end-of-string byte must have a sort weight of 0 or 255 (for sorting in reverse order). More than one byte may have the same sort weight. The table argument is useful for applications which wish
to sort different characters equally; for example, providing a table with the same weights for A-Z as for a-z will result in a case-insensitive sort. If table is NULL, the contents of the array are
sorted in ascending order according to the ASCII order of the byte strings they reference and endbyte has a sorting weight of 0.
The sradixsort() function is stable; that is, if two elements compare as equal, their order in the sorted array is unchanged. The sradixsort() function uses additional memory sufficient to hold nmemb
The radixsort() function is not stable, but uses no additional memory.
These functions are variants of most-significant-byte radix sorting; in particular, see D.E. Knuth's Algorithm R and section 5.2.5, exercise 10. They take linear time relative to the number of bytes
in the strings.
Upon successful completion, the value 0 is returned; otherwise the value -1 is returned and the global variable errno is set to indicate the error.
The value of the endbyte element of table is not 0 or 255.
Additionally, the sradixsort() function may fail and set errno for any of the errors specified for the library routine malloc(3).
sort(1), qsort(3)
Knuth, D.E., Sorting and Searching, The Art of Computer Programming, Vol. 3, pp. 170-178, 1968.
Paige, R., Three Partition Refinement Algorithms, SIAM J. Comput., No. 6, Vol. 16, 1987.
McIlroy, P., Computing Systems, Engineering Radix Sort, Vol. 6:1, pp. 5-27, 1993.
The radixsort() function first appeared in 4.4BSD. | {"url":"https://man.openbsd.org/OpenBSD-6.6/radixsort.3","timestamp":"2024-11-08T12:29:11Z","content_type":"text/html","content_length":"11907","record_id":"<urn:uuid:b0a19e52-0531-4a47-a124-81a8a38a49a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00602.warc.gz"} |
Algebra 80 - Multiplication with Complex Numbers | Video Summary and Q&A | Glasp
Algebra 80 - Multiplication with Complex Numbers | Summary and Q&A
7.3K views
August 9, 2019
Algebra 80 - Multiplication with Complex Numbers
Complex numbers can be multiplied by using the distributive property and combining like terms.
Key Insights
• 🍉 Complex numbers can be multiplied using the distributive property and combining like terms.
• #️⃣ Multiplying a complex number by a real number scales the length of the vector without changing its direction.
• 🔄 Multiplying a complex number by the imaginary unit (i) rotates the vector counterclockwise by 90 degrees without changing its length.
• ✈️ The modulus of a complex number is its distance from the origin on the complex plane, and the argument is the angle of its vector from the positive real axis.
• ✖️ When two complex numbers are multiplied, their arguments are added, and their moduli are multiplied.
Hello. I'm Professor Von Schmohawk and welcome to Why U. In the previous lecture, we saw how complex numbers can be added or subtracted. In this lecture, we will demonstrate how complex numbers can
be multiplied. We will start with a simple example multiplying the complex number 3+2i times 2 . Since this complex number is a sum of two numbers, ... Read More
Questions & Answers
Q: How can complex numbers be multiplied using the distributive property?
Complex numbers can be multiplied by using the distributive property and combining like terms. Each term in the first complex number is multiplied by each term in the second complex number and then
combined. The result is a new complex number.
Q: What happens when a complex number is multiplied by a real number?
When a complex number is multiplied by a real number, the length of the vector representing the complex number is scaled by the real number. The direction of the vector remains unchanged.
Q: What happens when a complex number is multiplied by the imaginary unit (i)?
When a complex number is multiplied by the imaginary unit (i), the vector representing the complex number is rotated counterclockwise by 90 degrees. The length of the vector remains the same.
Q: How can complex numbers be graphically visualized using vectors?
Complex numbers can be graphically visualized using vectors on the complex plane. Each complex number is represented as a point on the plane, with the real part determining the horizontal position
and the imaginary part determining the vertical position. The length of the vector represents the modulus or absolute value of the complex number, and the angle of the vector from the positive real
axis represents the argument of the complex number.
Summary & Key Takeaways
• Complex numbers can be multiplied by using the distributive property and combining like terms.
• When a complex number is multiplied by a real number, the length of the vector is scaled without changing its direction.
• When a complex number is multiplied by the imaginary unit (i), the vector is rotated counterclockwise by 90 degrees, while the length remains the same.
Explore More Summaries from MyWhyU 📚 | {"url":"https://glasp.co/youtube/p/algebra-80-multiplication-with-complex-numbers","timestamp":"2024-11-05T15:13:30Z","content_type":"text/html","content_length":"356897","record_id":"<urn:uuid:4c75d9ec-d07a-497a-a179-703a23065758>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00582.warc.gz"} |
naginterfaces.library.tsa.kalman_unscented_state_revcom(irevcm, y, ly, x, st, xt, fxt, comm, lx=None, ropt=None)[source]¶
kalman_unscented_state_revcom applies the Unscented Kalman Filter to a nonlinear state space model, with additive noise.
kalman_unscented_state_revcom uses reverse communication for evaluating the nonlinear functionals of the state space model.
For full information please refer to the NAG Library document for g13ej
On initial entry: must be set to or .
If , it is assumed that , otherwise it is assumed that and that kalman_unscented_state_revcom has been called at least once before at an earlier time step.
On intermediate entry: must remain unchanged.
yfloat, array-like, shape
, the observed data at the current time point.
lyfloat, array-like, shape
, such that , i.e., the lower triangular part of a Cholesky decomposition of the observation noise covariance structure. Only the lower triangular part of is referenced.
If is time dependent, the value supplied should be for time .
xfloat, ndarray, shape , modified in place
On initial entry: the state vector for the previous time point.
On intermediate exit: when
On intermediate entry: must remain unchanged.
On final exit: the updated state vector.
stfloat, ndarray, shape , modified in place
On initial entry: , such that , i.e., the lower triangular part of a Cholesky decomposition of the state covariance matrix at the previous time point. Only the lower triangular part of is
On intermediate exit: when
, the lower triangular part of a Cholesky factorization of .
On intermediate entry: must remain unchanged.
On final exit: , the lower triangular part of a Cholesky factorization of the updated state covariance matrix.
xtfloat, ndarray, shape , modified in place
Note: the required extent for this argument in dimension 2 is determined as follows: if : ; if : ; otherwise: .
On initial entry: need not be set.
On intermediate exit: when , otherwise .
For the th sigma point, the value for the th parameter is held in , for , for .
On intermediate entry: must remain unchanged.
On final exit: the contents of are undefined.
fxtfloat, ndarray, shape , modified in place
Note: the required extent for this argument in dimension 1 is determined as follows: if : ; otherwise: .
Note: the required extent for this argument in dimension 2 is determined as follows: if : ; if : ; otherwise: .
On initial entry: need not be set.
On intermediate exit: the contents of are undefined.
On intermediate entry: when , otherwise for the values of and held in .
For the th sigma point the value for the th parameter should be held in , for .
When , and when , .
On final exit: the contents of are undefined.
commdict, communication object, modified in place
Communication structure.
On initial entry: need not be set.
lxNone or float, array-like, shape , optional
, such that , i.e., the lower triangular part of a Cholesky decomposition of the process noise covariance structure. Only the lower triangular part of is referenced.
If , there is no process noise ( for all ) and is not referenced.
If is time dependent, the value supplied should be for time .
roptNone or float, array-like, shape , optional
Options. The default value will be used for if . Setting will use the default values for all options and need not be set.
If set to then the second set of sigma points are redrawn, as given by equation [equation]. If set to then the second set of sigma points are generated via augmentation, as given by
equation [equation].
Default is for the sigma points to be redrawn (i.e., )
, value of used when constructing the first set of sigma points, .
Defaults to .
, value of used when constructing the first set of sigma points, .
Defaults to .
, value of used when constructing the first set of sigma points, .
Defaults to .
Value of used when constructing the second set of sigma points, .
Defaults to when and the second set of sigma points are augmented and otherwise.
Value of used when constructing the second set of sigma points, .
Defaults to .
Value of used when constructing the second set of sigma points, .
Defaults to .
On intermediate exit: or . The value of specifies what intermediate values are returned by this function and what values the calling program must assign to arguments of
kalman_unscented_state_revcom before re-entering the routine. Details of the output and required input are given in the individual argument descriptions.
On final exit:
(errno )
On entry, .
Constraint: , , or .
(errno )
On entry, .
Constraint: .
(errno )
has changed between calls.
On intermediate entry, .
On initial entry, .
(errno )
On entry, .
Constraint: .
(errno )
has changed between calls.
On intermediate entry, .
On initial entry, .
(errno )
On entry, augmented sigma points requested, and .
Constraint: .
(errno )
On entry, redrawn sigma points requested, and .
Constraint: .
(errno )
has changed between calls.
On intermediate entry, .
On intermediate exit, .
(errno )
On entry, .
Constraint: or .
(errno )
On entry, .
Constraint: .
(errno )
On entry, .
Constraint: .
(errno )
On entry, .
Constraint: .
(errno )
[‘icomm’] has been corrupted between calls.
(errno )
[‘rcomm’] has been corrupted between calls.
(errno )
A weight was negative and it was not possible to downdate the Cholesky factorization.
(errno )
Unable to calculate the Kalman gain matrix.
(errno )
Unable to calculate the Cholesky factorization of the updated state covariance matrix.
(errno )
On entry, and .
Constraint: and .
The minimum required values for and are returned in and respectively.
kalman_unscented_state_revcom applies the Unscented Kalman Filter (UKF), as described in Julier and Uhlmann (1997b) to a nonlinear state space model, with additive noise, which, at time , can
be described by:
where represents the unobserved state vector of length and the observed measurement vector of length . The process noise is denoted , which is assumed to have mean zero and covariance
structure , and the measurement noise by , which is assumed to have mean zero and covariance structure .
Unscented Kalman Filter Algorithm
Given , an initial estimate of the state and and initial estimate of the state covariance matrix, the UKF can be described as follows:
1. Generate a set of sigma points (see Sigma Points):
2. Evaluate the known model function :
The function is assumed to accept the matrix, and return an matrix, . The columns of both and correspond to different possible states. The notation is used to denote the th column of ,
hence the result of applying to the th possible state.
3. Time Update:
4. Redraw another set of sigma points (see Sigma Points):
5. Evaluate the known model function :
The function is assumed to accept the matrix, and return an matrix, . The columns of both and correspond to different possible states. As above is used to denote the th column of .
6. Measurement Update:
Here is the Kalman gain matrix, is the estimated state vector at time and the corresponding covariance matrix. Rather than implementing the standard UKF as stated above
kalman_unscented_state_revcom uses the square-root form described in the Haykin (2001).
Sigma Points
A nonlinear state space model involves propagating a vector of random variables through a nonlinear system and we are interested in what happens to the mean and covariance matrix of those
variables. Rather than trying to directly propagate the mean and covariance matrix, the UKF uses a set of carefully chosen sample points, referred to as sigma points, and propagates these
through the system of interest. An estimate of the propagated mean and covariance matrix is then obtained via the weighted sample mean and covariance matrix.
For a vector of random variables, , with mean and covariance matrix , the sigma points are usually constructed as:
When calculating the weighted sample mean and covariance matrix two sets of weights are required, one used when calculating the weighted sample mean, denoted and one used when calculating the
weighted sample covariance matrix, denoted . The weights and multiplier, , are constructed as follows:
where, usually and and are constants. The total number of sigma points, , is given by . The constant is usually set to somewhere in the range and for a Gaussian distribution, the optimal
values of and are and respectively.
Rather than redrawing another set of sigma points in (d) of the UKF an alternative method can be used where the sigma points used in (a) are augmented to take into account the process noise.
This involves replacing equation [equation] with:
Augmenting the sigma points in this manner requires setting to (and hence to ) and recalculating the weights. These new values are then used for the rest of the algorithm. The advantage of
augmenting the sigma points is that it keeps any odd-moments information captured by the original propagated sigma points, at the cost of using a larger number of points.
Haykin, S, 2001, Kalman Filtering and Neural Networks, John Wiley and Sons
Julier, S J, 2002, The scaled unscented transformation, Proceedings of the 2002 American Control Conference (Volume 6), 4555–4559
Julier, S J and Uhlmann, J K, 1997, A consistent, debiased method for converting between polar and Cartesian coordinate systems, Proceedings of AeroSense ‘97, International Society for Optics
and Phonotonics, 110–121
Julier, S J and Uhlmann, J K, 1997, A new extension of the Kalman Filter to nonlinear systems, International Symposium for Aerospace/Defense, Sensing, Simulation and Controls (Volume 3) (26) | {"url":"https://support.nag.com/numeric/py/nagdoc_latest/naginterfaces.library.tsa.kalman_unscented_state_revcom.html","timestamp":"2024-11-11T10:37:56Z","content_type":"text/html","content_length":"388791","record_id":"<urn:uuid:8d372346-2dd2-449e-8f43-ef39fcf760f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00413.warc.gz"} |
Nonnegativity of values of holomorphic functions #
We show that if f is holomorphic on an open disk B(c,r) and all iterated derivatives of f at c are nonnegative real, then f z ≥ 0 for all z ≥ c in the disk; see
DifferentiableOn.nonneg_of_iteratedDeriv_nonneg. We also provide a variant Differentiable.nonneg_of_iteratedDeriv_nonneg for entire functions and versions showing f z ≥ f c when all iterated
derivatives except f itseld are nonnegative.
A function that is holomorphic on the open disk around c with radius r and whose iterated derivatives at c are all nonnegative real has nonnegative real values on c + [0,r).
An entire function whose iterated derivatives at c are all nonnegative real has nonnegative real values on c + ℝ≥0.
An entire function whose iterated derivatives at c are all nonnegative real (except possibly the value itself) has values of the form f c + nonneg. real on the set c + ℝ≥0.
An entire function whose iterated derivatives at c are all real with alternating signs (except possibly the value itself) has values of the form f c + nonneg. real along the set c - ℝ≥0. | {"url":"https://leanprover-community.github.io/mathlib4_docs/Mathlib/Analysis/Complex/Positivity.html","timestamp":"2024-11-14T20:56:36Z","content_type":"text/html","content_length":"17130","record_id":"<urn:uuid:ba5dc294-dec8-44be-beb7-541e74248287>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00319.warc.gz"} |
Cool Math Stuff
When talking about great mathematicians of the past, many will rank the top three as
, and Newton. I have posted stories about the others, but none about Isaac Newton. So, I think now is a good time.
Newton is best known for his work in physics, but he also made huge contributions to calculus, algebra, geometry, and infinite series. Many mathematicians expand their expertise to different diverse
branches of math, but Newton stuck to the things that applied the most to his physics and are currently ruling the American school system.
Let me tell you an interesting story about Newton. When he was a young student, he was very shy and not at all the genius that he is known as today. One day at recess, a bully came up to him and
punched him in the stomach. Newton chose to fight back, and proceeded to shove his face in the mud. All of his classmates, who did not like this kid, cheered him on as he proved his superiority to
the bully.
After this incident, he decided that physical prestige wasn't enough for him, and he wanted mental prestige as well. So, he started working much harder at his schoolwork, and soon after became top of
the class, proving to everyone that he was smarter than the bully as well. This motivation could have been what turned him into one of the best scientists and mathematicians of all time.
I think this story shows that anyone who has drive and dedication can become a genius, and it also is a story themed around the negativity of bullying. I also like it because it is an interesting
aspect about a mathematician's childhood, which help people get to know who is behind what they are learning and practicing.
In school, the teacher is always on top of you for checking your work. When you do a subtraction problem, solve the reversed addition problem and make sure it is right, when you do an algebra
problem, make sure you plug your solution back into the original equation. These are all things that are drilled into our heads, but never quite executed.
I did post a year and a half ago about checking your work in algebra problems: plugging the answer into the original equation (click here to see how to do that). But there is also a shortcut for
checking work on plain arithmetic problems as well.
Let's take the problem 138 + 253. I would have went smaller, but the method will be easier to demonstrate with larger numbers.
+ 253
If we add that up normally, we would get:
+ 253
How do we know if that is correct? Well, we do something called mod sums. What that means is we add up the digits in the number, and then add up the digits in this sum, and keep going until we find a
single digit number. This is called the number's mod sum or digital root.
So, what is the mod sum of 138? Well, we add up the digits.
1 + 3 + 8 = 12
1 + 2 = 3
So, the mod sum or digital root of 138 is 3. Let's find it for 253.
2 + 5 + 3 = 10
1 + 0 = 1
The mod sum of 253 is therefore 1. Let's find the mod sum of the total and see if you notice the pattern.
3 + 9 + 1 = 13
1 + 3 = 4
So, the two addends have mod sums of 3 and 1. The sum has a mod sum of 4. What is the pattern? That's right, the mod sum of the answer is the sum of the mod sums of the addends. What about a
subtraction problem?
- 643
The answer to this problem is 281. But how do we confirm it?
The mod sum of 924 is 6 (9+2+4=15 and 1+5=6) and the mod sum of 643 is 4 (6+4+3=13 and 1+3=4). So, the mod sum of the difference must be the difference of the two mod sums. The mod sum of 281 is 2
(2+8+1=11 and 1+1=2), which is the difference of 6 and 4. So, the answer was correct.
What about a multiplication problem? Say 71 x 55. If you do the math, you will find that the answer is 3905. But let's check it with mod sums.
Mod Sum of 71 = 8
Mod Sum of 55 = 1
Mod Sum of 3905 = 8
8 x 1 = 8
So it is correct. There are some glitches in the technique, but this is the basis of it. You might run into scenarios that I didn't quite explain how to deal with, but feel free to comment. I will be
happy to respond with some more specific pointers. Have fun actually checking your work now!
One of the things that lots of people seem to be oblivious to is that mathematics is developing and innovating just as much as any other discipline, which I allude to in many of my presentations.
There are many conjectures, or unsolved problems, out there that mathematicians are working on and trying to prove or solve.
Rota's Conjecture was a problem like this, in the branch of matroid theory. This is a diverse area of mathematics that isn't taught or mentioned in the American school system (another concept I
allude to in my presentations). So when I read this article about Geoff Whittle solving the problem, I thought it would make for a great post. Here is the story:
Game theory and proofs are two of my favorite areas of mathematics; game theory is practical and fun while proofs are interesting and insightful. So, when I learned about this problem that combines
the two, I thought that it was definitely worth a post.
This game is called Chomp. It is normally played with just a table of squares, but I find it easier to understand by thinking of a chocolate bar.
The mouth-watering Chomp playing board
Chomp is played where the first player chooses a square on the board, and then takes away everything above and to the right of it (essentially taking a bite out of the top right corner of the
chocolate bar). The second player would do the same thing with another remaining square. This process keeps continuing until all that remains is the bottom left square. Whoever is forced to take that
square loses.
To better understand how the game works, click here to practice playing it. You will see how easy it is to play and understand.
At this point, any game theorist would be wondering if there is an optimal strategy for this game. From what we saw a couple weeks ago with Anti Tic-Tac-Toe, you might be wondering if symmetry is
involved in this game. And yes, you can win this game by playing symmetrical moves in the end game. However, the board is not square, it is a rectangle. So, there cannot be full symmetry.
I do not know what the actual optimal strategy is. But, I do know that one exists that would enable player one to always force a win. I will demonstrate this by an "existence proof" where you prove
it exists without finding the actual thing.
Pretend player one just took the top right corner square. This is either a good position or a bad position. If it is a good position, then by definition, player one can continue to play perfectly and
force a win. If it is a bad position, then player two must have a responding move that will force them to win.
But, this responding move must be a square that player one could have hit on their first move. Since the top right square really doesn't have an effect on the rest of the board, this would not be a
problem. So, player one could have played this strategy, which would allow them to force a win as well.
In either of these situations, player one wins. So, there is our proof. I find these existence proofs really interesting because you don't always think you can know if a statement is true without
being able to see an example, but with mathematics, it can be done. | {"url":"https://coolmathstuff123.blogspot.com/2013/10/","timestamp":"2024-11-03T23:17:37Z","content_type":"text/html","content_length":"86418","record_id":"<urn:uuid:e349766a-70be-49c9-a7b1-38b845211389>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00478.warc.gz"} |
Particle-based modeling in the Sciences
For more than a hundred years diverse processes and phenomena in the natural sciences have been modeled using random particle systems. Since the 19th century, many scientists have been modeling
things such as liquids, gases, light and [solid] materials as a huge number of particles with mutual interactions. A stochastic approach is often made, i.e. the particles are subject to stochastic
rules regarding their locations or their movements. Other ingredients in the models can be random environments or stochasticity in the interactions. We distinguish dynamic models in which the
particles move randomly (e.g. interacting diffusions), and static models in which they do not move (e.g. Gibbs point processes)
A main task is to study the macroscopic behavior of the system and to explain it mathematically and possibly bring it into coincidence with experimental data. In many cases, the task is to develop
methods or to made existing methods applicable for the description of the important order parameters and for the proof of their crucial properties. A typical example of an important order parameter
is the empirical average of the particles or the whole trajectories. In the hydrodynamic limit (many particles in a fixed box) it can be described approximately by a single equation, usually a
differential equation. In the thermodynamic limit (many particles in a large box with fixed positive density) often become exponential asymptotes with the help of variational formulas described and
then variational equations for the description of the minimizers of the formulas. Also, the probabilities of the Particles after averaging can create interesting equations suffice. Some such
equations have been studied long before they were derived from particle models. >
Particle models have become especially widespread in physics and chemistry as a good compromise between reality and tractability. For example, static, atomic many body systems are often described
through an energy function, which assigns every possible configuration an energy based on the interaction between the particles and then interprets the negative exponential of the energy as
proportional to the configuration probability. Such distributions are called Gibbs measures; they preferentially select configurations with low energies. An example is a salt crystal, which consists
of charged particles (ions) seeking to minimize their combined electrostatic potential energy. Other systems, especially at positive temperatures contain random walks or Brownian motions, which react
(e.g. coagulate) with each other when in close proximity (see the applied theme Coagulation). In this way we model, for example, the formation of soot particles in flames. A related class of
stochastic particle models are families of interacting stochastic (partial) differential equations, which have recently been used in the modeling of battery charging (see the applied theme
Thermodynamic models for electrochemical systems).
At the WIAS, special attention is paid to phase transitions, i.e., to the phenomenon that the behavior changes significantly when an order parameter exceeds a certain threshold. These are mostly
transitions from the form of the sudden occurrence of macroscopic structures. The most important ones are percolation (e.g. in random graphs), condensation (e.g. in the Bose gas), crystallization
(e.g. in atomistic point processes) and gelation (in spatial particle models with coagulation). The relevant order parameter is usually the temperature and/or the particle density.
Particle system methods have also received tremendous attention in research in optimization and machine learning, which is the focus of the Weierstrass Group DOC. For example, modern computational
algorithms often deal with big data sets in potentially high dimensions. Optimization with such data is not only computationally costly but also subject to local optima. One approach to theoretically
analyzing these computational algorithms is to view them as particle gradient descent ? modeling the computational algorithms as dynamical particle systems. In this direction, we are interested in
working with particle systems under the framework of optimal transport theory and gradient flow. The goal is to use the modeling insight for large-scale computation, especially for robust machine
learning algorithms and deep generative models.
Contribution of the Institute
Atomic, static models for interacting many body systems are described with Lennard-Jones potentials, which cause the particles to maintain a certain amount of separation and not to collapse onto a
single point. Another example is the Bose-gas in which every particle has a kinetic energy in addition to its position. The work of the WIAS on the first model deals with the formation of clusters
and crystallization, and for the Bose-gas with condensation phenomena; see the mathematical theme Large Deviations
A realisation of a many body system showing a small crystal in the lower right corner.
Models with many random particles are also used for the description of large wireless telecommunications systems; in this case the “particles” are the user-devices . When the movement of the users
does not have to be considered, the modeling of the device locations is typically via a Poisson point process, but when the motion of users becomes important it is not yet clear how to model user
paths especially as user behavior undergoes periodic qualitative changes (e.g. between day and night). The “particle” interactions depend on their separation since a message can only be effectively
transmitted when two devices are within range of each other; see the applied theme Mobile Communication Networks. The crucial phase transition is percolation, such that a message can be transmitted
in principle over very long distances, because the locations are suitably close to each other on a long scale. In this connection we have used methods from the theory of large deviations to analyze
the positions of the devices. By performing a constrained energy minimization we are able to characterize the most important particle distributions for which no effective network can be established.
For dynamic models a wide range of hydrodynamic limit results have been proved dealing with elastically colliding gas molecules, soot formation and chemical reactions and leading to kinetic equations
(see the Mathematical Theme Nonlinear kinetic equations). For a combined generalization of soot formation and chemical reactions, a dynamic large deviations principle was derived. With additional
analytic tools an entropy-like free energy and its dissipation potentials were identified. Together they form a gradient structure and provide a more detailed description of the dynamics and the
effect of perturbations.
While researching robustness for machine learning, we have modeled data distribution shift and perturbation as particle systems. Then, we invented algorithms based on variational approaches that can
learn from noisy data robustly. We have also applied particle methods to statistical inference and sampling, such as Stein geometry used for Markov chain Monte Carlo.
• B. Jahnel, W. König, Probabilistic Methods in Telecommunications, D. Mazlum, ed., Compact Textbooks in Mathematics, Birkhäuser Basel, 2020, XI, 200 pages, (Monograph Published), DOI 10.1007/
978-3-030-36090-0 .
This textbook series presents concise introductions to current topics in mathematics and mainly addresses advanced undergraduates and master students. The concept is to offer small books covering
subject matter equivalent to 2- or 3-hour lectures or seminars which are also suitable for self-study. The books provide students and teachers with new perspectives and novel approaches. They may
feature examples and exercises to illustrate key concepts and applications of the theoretical contents. The series also includes textbooks specifically speaking to the needs of students from
other disciplines such as physics, computer science, engineering, life sciences, finance.
• W. König, Große Abweichungen, Techniken und Anwendungen, M. Brokate, A. Heinze , K.-H. Hoffmann , M. Kang , G. Götz , M. Kerz , S. Otmar, eds., Mathematik Kompakt, Birkhäuser Basel, 2020, VIII,
167 pages, (Monograph Published), DOI 10.1007/978-3-030-52778-5 .
Die Lehrbuchreihe Mathematik Kompakt ist eine Reaktion auf die Umstellung der Diplomstudiengänge in Mathematik zu Bachelor- und Masterabschlüssen. Inhaltlich werden unter Berücksichtigung der
neuen Studienstrukturen die aktuellen Entwicklungen des Faches aufgegriffen und kompakt dargestellt. Die modular aufgebaute Reihe richtet sich an Dozenten und ihre Studierenden in Bachelor- und
Masterstudiengängen und alle, die einen kompakten Einstieg in aktuelle Themenfelder der Mathematik suchen. Zahlreiche Beispiele und Übungsaufgaben stehen zur Verfügung, um die Anwendung der
Inhalte zu veranschaulichen. Kompakt: relevantes Wissen auf 150 Seiten Lernen leicht gemacht: Beispiele und Übungsaufgaben veranschaulichen die Anwendung der Inhalte Praktisch für Dozenten: jeder
Band dient als Vorlage für eine 2-stündige Lehrveranstaltung
• P. Exner, W. König, H. Neidhardt, eds., Mathematical Results in Quantum Mechanics. Proceedings of the QMath12 Conference, World Scientific Publishing, Singapore, 2015, xii+383 pages, (Collection
Articles in Refereed Journals
• A. Erhardt, D. Peschka, Ch. Dazzi, L. Schmeller, A. Petersen, S. Checa, A. Münch, B. Wagner, Modeling cellular self-organization in strain-stiffening hydrogels, Computational Mechanics, published
online on 31.08.2024, DOI 10.1007/s00466-024-02536-7 .
We develop a three-dimensional mathematical model framework for the collective evolution of cell populations by an agent-based model (ABM) that mechanically interacts with the surrounding
extracellular matrix (ECM) modeled as a hydrogel. We derive effective two-dimensional models for the geometrical set-up of a thin hydrogel sheet to study cell-cell and cell-hydrogel mechanical
interactions for a range of external conditions and intrinsic material properties. We show that without any stretching of the hydrogel sheets, cells show the well-known tendency to form long
chains with varying orientations. Our results further show that external stretching of the sheet produces the expected nonlinear strain-softening or stiffening response, with, however, little
qualitative variation of the overall cell dynamics for all the materials considered. The behavior is remarkably different when solvent is entering or leaving from strain softening or stiffening
hydrogels, respectively.
• M. Fradon, J. Kern, S. Rœlly, A. Zass, Diffusion dynamics for an infinite system of two-type spheres and the associated depletion effect, Stochastic Processes and their Applications, 171 (2024),
104319, DOI 10.1016/j.spa.2024.104319 .
We consider a random diffusion dynamics for an infinite system of hard spheres of two different sizes evolving in ℝ^d, its reversible probability measure, and its projection on the subset of the
large spheres. The main feature is the occurrence of an attractive short-range dynamical interaction --- known in the physics literature as a depletion interaction -- between the large spheres,
which is induced by the hidden presence of the small ones. By considering the asymptotic limit for such a system when the density of the particles is high, we also obtain a constructive dynamical
approach to the famous discrete geometry problem of maximisation of the contact number of n identical spheres in ℝ^d. As support material, we propose numerical simulations in the form of movies.
• B. Jahnel, U. Rozikov, Gibbs measures for hardcore-SOS models on Cayley trees, Journal of Statistical Mechanics: Theory and Experiment, (2024), 073202, DOI 10.1088/1742-5468/ad5433 .
We investigate the finite-state p-solid-on-solid model, for p=∞, on Cayley trees of order k ≥ 2 and establish a system of functional equations where each solution corresponds to a (splitting)
Gibbs measure of the model. Our main result is that, for three states, k=2,3 and increasing coupling strength, the number of translation-invariant Gibbs measures behaves as 1→3 →5 →6 →7. This
phase diagram is qualitatively similar to the one observed for three-state p-SOS models with p>0 and, in the case of k=2, we demonstrate that, on the level of the functional equations, the
transition p → ∞ is continuous.
• R.I.A. Patterson, D.R.M. Renger, U. Sharma, Variational structures beyond gradient flows: A macroscopic fluctuation-theory perspective, Journal of Statistical Physics, 191 (2024), pp. 1--60, DOI
10.1007/s10955-024-03233-8 .
Macroscopic equations arising out of stochastic particle systems in detailed balance (called dissipative systems or gradient flows) have a natural variational structure, which can be derived from
the large-deviation rate functional for the density of the particle system. While large deviations can be studied in considerable generality, these variational structures are often restricted to
systems in detailed balance. Using insights from macroscopic fluctuation theory, in this work we aim to generalise this variational connection beyond dissipative systems by augmenting densities
with fluxes, which encode non-dissipative effects. Our main contribution is an abstract framework, which for a given flux-density cost and a quasipotential, provides a decomposition into
dissipative and non-dissipative components and a generalised orthogonality relation between them. We then apply this abstract theory to various stochastic particle systems -- independent copies
of jump processes, zero-range processes, chemical-reaction networks in complex balance and lattice-gas models.
• L. Andreis, W. König, H. Langhammer, R.I.A. Patterson, A large-deviations principle for all the components in a sparse inhomogeneous random graph, Probability Theory and Related Fields, 186
(2023), pp. 521--620, DOI 10.1007/s00440-022-01180-7 .
We study an inhomogeneous sparse random graph, G[N], on [N] = { 1,...,N } as introduced in a seminal paper [BJR07] by Bollobás, Janson and Riordan (2007): vertices have a type (here in a compact
metric space S), and edges between different vertices occur randomly and independently over all vertex pairs, with a probability depending on the two vertex types. In the limit N → ∞ , we
consider the sparse regime, where the average degree is O(1). We prove a large-deviations principle with explicit rate function for the statistics of the collection of all the connected
components, registered according to their vertex type sets, and distinguished according to being microscopic (of finite size) or macroscopic (of size ≈ N). In doing so, we derive explicit
logarithmic asymptotics for the probability that G[N] is connected. We present a full analysis of the rate function including its minimizers. From this analysis we deduce a number of limit laws,
conditional and unconditional, which provide comprehensive information about all the microscopic and macroscopic components of G[N]. In particular, we recover the criterion for the existence of
the phase transition given in [BJR07].
• O. Collin, B. Jahnel, W. König, A micro-macro variational formula for the free energy of a many-body system with unbounded marks, Electronic Journal of Probability, 28 (2023), pp. 118/1--118/58,
DOI 10.1214/23-EJP1014 .
The interacting quantum Bose gas is a random ensemble of many Brownian bridges (cycles) of various lengths with interactions between any pair of legs of the cycles. It is one of the standard
mathematical models in which a proof for the famous Bose--Einstein condensation phase transition is sought for. We introduce a simplified version of the model with an organisation of the
particles in deterministic boxes instead of Brownian cycles as the marks of a reference Poisson point process (for simplicity, in Z ^d, instead of R ^d). We derive an explicit and interpretable
variational formula in the thermodynamic limit for the limiting free energy of the canonical ensemble for any value of the particle density. This formula features all relevant physical quantities
of the model, like the microscopic and the macroscopic particle densities, together with their mutual and self-energies and their entropies. The proof method comprises a two-step large-deviation
approach for marked Poisson point processes and an explicit distinction into small and large marks. In the characteristic formula, each of the microscopic particles and the statistics of the
macroscopic part of the configuration are seen explicitly; the latter receives the interpretation of the condensate. The formula enables us to prove a number of properties of the limiting free
energy as a function of the particle density, like differentiability and explicit upper and lower bounds, and a qualitative picture below and above the critical threshold (if it is finite). This
proves a modified saturation nature of the phase transition. However, we have not yet succeeded in proving the existence of this phase transition.
• CH. Hirsch, B. Jahnel, E. Cali, Connection intervals in multi-scale infrastructure-augmented dynamic networks, Stochastic Models, 39 (2023), pp. 851--877, DOI 10.1080/15326349.2023.2184832 .
We consider a hybrid spatial communication system in which mobile nodes can connect to static sinks in a bounded number of intermediate relaying hops. We describe the distribution of the
connection intervals of a typical mobile node, i.e., the intervals of uninterrupted connection to the family of sinks. This is achieved in the limit of many hops, sparse sinks and growing time
horizons. We identify three regimes illustrating that the limiting distribution depends sensitively on the scaling of the time horizon.
• B. Jahnel, S.K. Jhawar, A.D. Vu, Continuum percolation in a nonstabilizing environment, Electronic Journal of Probability, 28 (2023), pp. 131/1--131/38, DOI 10.1214/23-EJP1029 .
We prove nontrivial phase transitions for continuum percolation in a Boolean model based on a Cox point process with nonstabilizing directing measure. The directing measure, which can be seen as
a stationary random environment for the classical Poisson--Boolean model, is given by a planar rectangular Poisson line process. This Manhattan grid type construction features long-range
dependencies in the environment, leading to absence of a sharp phase transition for the associated Cox--Boolean model. Our proofs rest on discretization arguments and a comparison to percolation
on randomly stretched lattices established in [MR2116736].
• B. Jahnel, J. Köppl, Dynamical Gibbs variational principles for irreversible interacting particle systems with applications to attractor properties, The Annals of Applied Probability, 33 (2023),
pp. 4570--4607, DOI 10.1214/22-AAP1926 .
We consider irreversible translation-invariant interacting particle systems on the d-dimensional cubic lattice with finite local state space, which admit at least one Gibbs measure as a
time-stationary measure. Under some mild degeneracy conditions on the rates and the specification we prove, that zero relative entropy loss of a translation-invariant measure implies, that the
measure is Gibbs w.r.t. the same specification as the time-stationary Gibbs measure. As an application, we obtain the attractor property for irreversible interacting particle systems, which says
that any weak limit point of any trajectory of translation-invariant measures is a Gibbs measure w.r.t. the same specification as the time-stationary measure. This extends previously known
results to fairly general irreversible interacting particle systems.
• B. Jahnel, Ch. Külske, Gibbsianness and non-Gibbsianness for Bernoulli lattice fields under removal of isolated sites, Bernoulli. Official Journal of the Bernoulli Society for Mathematical
Statistics and Probability, 29 (2023), pp. 3013--3032, DOI 10.3150/22-BEJ1572 .
We consider the i.i.d. Bernoulli field μ [p] on Z ^d with occupation density p ∈ [0,1]. To each realization of the set of occupied sites we apply a thinning map that removes all occupied sites
that are isolated in graph distance. We show that, while this map seems non-invasive for large p, as it changes only a small fraction p(1-p)^2d of sites, there is p(d) <1 such that for all p ∈ (p
(d), 1) the resulting measure is a non-Gibbsian measure, i.e., it does not possess a continuous version of its finite-volume conditional probabilities. On the other hand, for small p, the Gibbs
property is preserved.
• N. Djurdjevac Conrad, J. Köppl, A. Djurdjevac, Feedback loops in opinion dynamics of agent-based models with multiplicative noise, Entropy. An International and Interdisciplinary Journal of
Entropy and Information Studies, 24 (2022), pp. e24101352/1--e24101352/23, DOI 10.3390/e24101352 .
We introduce an agent-based model for co-evolving opinion and social dynamics, under the influence of multiplicative noise. In this model, every agent is characterized by a position in a social
space and a continuous opinion state variable. Agents? movements are governed by positions and opinions of other agents and similarly, the opinion dynamics is influenced by agents? spatial
proximity and their opinion similarity. Using numerical simulations and formal analysis, we study this feedback loop between opinion dynamics and mobility of agents in a social space. We
investigate the behavior of this ABM in different regimes and explore the influence of various factors on appearance of emerging phenomena such as group formation and opinion consensus. We study
the empirical distribution and in the limit of infinite number of agents we derive a corresponding reduced model given by a partial differential equation (PDE). Finally, using numerical examples
we show that a resulting PDE model is a good approximation of the original ABM.
• A. Agazzi, L. Andreis, R.I.A. Patterson, D.R.M. Renger, Large deviations for Markov jump processes with uniformly diminishing rates, Stochastic Processes and their Applications, 152 (2022), pp.
533--559, DOI 10.1016/j.spa.2022.06.017 .
We prove a large-deviation principle (LDP) for the sample paths of jump Markov processes in the small noise limit when, possibly, all the jump rates vanish uniformly, but slowly enough, in a
region of the state space. We further show that our assumptions on the decay of the jump rates are optimal. As a direct application of this work we relax the assumptions needed for the
application of LDPs to, e.g., Chemical Reaction Network dynamics, where vanishing reaction rates arise naturally particularly the context of Mass action kinetics.
• N. Engler, B. Jahnel, Ch. Külske, Gibbsianness of locally thinned random fields, Markov Processes and Related Fields, 28 (2022), pp. 185--214, DOI 10.48550/arXiv.2201.02651 .
We consider the locally thinned Bernoulli field on ℤ ^d, which is the lattice version of the Type-I Matérn hardcore process in Euclidean space. It is given as the lattice field of occupation
variables, obtained as image of an i.i.d. Bernoulli lattice field with occupation probability p, under the map which removes all particles with neighbors, while keeping the isolated particles. We
prove that the thinned measure has a Gibbsian representation and provide control on its quasilocal dependence, both in the regime of small p, but also in the regime of large p, where the thinning
transformation changes the Bernoulli measure drastically. Our methods rely on Dobrushin uniqueness criteria, disagreement percolation arguments [46], and cluster expansions
• A.K. Giri, P. Malgaretti, D. Peschka, M. Sega, Resolving the microscopic hydrodynamics at the moving contact line, Physical Review Fluids, 7 (2022), pp. L102001/1--L102001/9, DOI 10.1103/
PhysRevFluids.7.L102001 .
By removing the smearing effect of capillary waves in molecular dynamics simulations we are able to provide a microscopic picture of the region around the moving contact line (MCL) at an
unprecedented resolution. On this basis, we show that the continuum character of the velocity field is unaffected by molecular layering down to below the molecular scale. The solution of the
continuum Stokes problem with MCL and Navier-slip matches very well the molecular dynamics data and is consistent with a slip-length of 42 Å and small contact line dissipation. This is consistent
with observations of the local force balance near the liquid-solid interface.
• Z. Mokhtari, R.I.A. Patterson, F. Höfling, Spontaneous trail formation in populations of auto-chemotactic walkers, New Journal of Physics, 24 (2022), pp. 013012/1--013012/11, DOI 10.1088/
1367-2630/ac43ec .
We study the formation of trails in populations of self-propelled agents that make oriented deposits of pheromones and also sense such deposits to which they then respond with gradual changes of
their direction of motion. Based on extensive off-lattice computer simulations aiming at the scale of insects, e.g., ants, we identify a number of emerging stationary patterns and obtain
qualitatively the non-equilibrium state diagram of the model, spanned by the strength of the agent--pheromone interaction and the number density of the population. In particular, we demonstrate
the spontaneous formation of persistent, macroscopic trails, and highlight some behaviour that is consistent with a dynamic phase transition. This includes a characterisation of the mass of
system-spanning trails as a potential order parameter. We also propose a dynamic model for a few macroscopic observables, including the sub-population size of trail-following agents, which
captures the early phase of trail formation.
• B. Jahnel, A. Tóbiás, E. Cali, Phase transitions for the Boolean model of continuum percolation for Cox point processes, Brazilian Journal of Probability and Statistics, 3 (2022), pp. 20--44, DOI
10.1214/21-BJPS514 .
We consider the Boolean model with random radii based on Cox point processes. Under a condition of stabilization for the random environment, we establish existence and non-existence of
subcritical regimes for the size of the cluster at the origin in terms of volume, diameter and number of points. Further, we prove uniqueness of the infinite cluster for sufficiently connected
• B. Jahnel, A. Tóbiás, SINR percolation for Cox point processes with random powers, Adv. Appl. Math., 54 (2022), pp. 227--253, DOI 10.1017/apr.2021.25 .
Signal-to-interference plus noise ratio (SINR) percolation is an infinite-range dependent variant of continuum percolation modeling connections in a telecommunication network. Unlike in earlier
works, in the present paper the transmitted signal powers of the devices of the network are assumed random, i.i.d. and possibly unbounded. Additionally, we assume that the devices form a
stationary Cox point process, i.e., a Poisson point process with stationary random intensity measure, in two or higher dimensions. We present the following main results. First, under suitable
moment conditions on the signal powers and the intensity measure, there is percolation in the SINR graph given that the device density is high and interferences are sufficiently reduced, but not
vanishing. Second, if the interference cancellation factor γ and the SINR threshold τ satisfy γ ≥ 1/(2τ), then there is no percolation for any intensity parameter. Third, in the case of a Poisson
point process with constant powers, for any intensity parameter that is supercritical for the underlying Gilbert graph, the SINR graph also percolates with some small but positive interference
cancellation factor.
• A. Stephan, EDP-convergence for a linear reaction-diffusion system with fast reversible reaction, Calculus of Variations and Partial Differential Equations, 60 (2021), pp. 226/1--226/35, DOI
10.1007/s00526-021-02089-0 .
We perform a fast-reaction limit for a linear reaction-diffusion system consisting of two diffusion equations coupled by a linear reaction. We understand the linear reaction-diffusion system as a
gradient flow of the free energy in the space of probability measures equipped with a geometric structure, which contains the Wasserstein metric for the diffusion part and cosh-type functions for
the reaction part. The fast-reaction limit is done on the level of the gradient structure by proving EDP-convergence with tilting. The limit gradient system induces a diffusion system with
Lagrange multipliers on the linear slow-manifold. Moreover, the limit gradient system can be equivalently described by a coarse-grained gradient system, which induces a diffusion equation with a
mixed diffusion constant for the coarse-grained slow variable.
• S. Jansen, W. König, B. Schmidt, F. Theil, Distribution of cracks in a chain of atoms at low temperature, Annales Henri Poincare. A Journal of Theoretical and Mathematical Physics, 22 (2021), pp.
4131--4172, DOI 10.1007/s00023-021-01076-7 .
We consider a one-dimensional classical many-body system with interaction potential of Lennard--Jones type in the thermodynamic limit at low temperature 1/β ∈ (0, ∞). The ground state is a
periodic lattice. We show that when the density is strictly smaller than the density of the ground state lattice, the system with N particles fills space by alternating approximately crystalline
domains (clusters) with empty domains (voids) due to cracked bonds. The number of domains is of the order of N exp(-β e [surf] /2) with e [surf] > 0 a surface energy.
• J.-D. Deuschel, T. Orenshtein, N. Perkowski, Additive functionals as rough paths, The Annals of Probability, 49 (2021), pp. 1450--1479, DOI 10.1214/20-AOP1488 .
We consider additive functionals of stationary Markov processes and show that under Kipnis--Varadhan type conditions they converge in rough path topology to a Stratonovich Brownian motion, with a
correction to the Lévy area that can be described in terms of the asymmetry (non-reversibility) of the underlying Markov process. We apply this abstract result to three model problems: First we
study random walks with random conductances under the annealed law. If we consider the Itô rough path, then we see a correction to the iterated integrals even though the underlying Markov process
is reversible. If we consider the Stratonovich rough path, then there is no correction. The second example is a non-reversible Ornstein-Uhlenbeck process, while the last example is a diffusion in
a periodic environment. As a technical step we prove an estimate for the p-variation of stochastic integrals with respect to martingales that can be viewed as an extension of the rough path
Burkholder-Davis-Gundy inequality for local martingale rough paths of [FV08], [CF19] and [FZ18] to the case where only the integrator is a local martingale.
• L. Andreis, W. König, R.I.A. Patterson, A large-deviations principle for all the cluster sizes of a sparse Erdős--Rényi random graph, Random Structures and Algorithms, 59 (2021), pp. 522--553,
DOI 10.1002/rsa.21007 .
A large-deviations principle (LDP) is derived for the state, at fixed time, of the multiplicative coalescent in the large particle number limit. The rate function is explicit and describes each
of the three parts of the state: microscopic, mesoscopic and macroscopic. In particular, it clearly captures the well known gelation phase transition given by the formation of a particle
containing a positive fraction of the system mass at time t=1. Via a standard map of the multiplicative coalescent onto a time-dependent version of the Erdős-Rényi random graph, our results can
also be rephrased as an LDP for the component sizes in that graph. Our proofs rely on estimates and asymptotics for the probability that smaller Erdős-Rényi graphs are connected.
• A. Mielke, M.A. Peletier, A. Stephan, EDP-convergence for nonlinear fast-slow reaction systems with detailed balance, Nonlinearity, 34 (2021), pp. 5762--5798, DOI 10.1088/1361-6544/ac0a8a .
We consider nonlinear reaction systems satisfying mass-action kinetics with slow and fast reactions. It is known that the fast-reaction-rate limit can be described by an ODE with Lagrange
multipliers and a set of nonlinear constraints that ask the fast reactions to be in equilibrium. Our aim is to study the limiting gradient structure which is available if the reaction system
satisfies the detailed-balance condition. The gradient structure on the set of concentration vectors is given in terms of the relative Boltzmann entropy and a cosh-type dissipation potential. We
show that a limiting or effective gradient structure can be rigorously derived via EDP convergence, i.e. convergence in the sense of the Energy-Dissipation Principle for gradient flows. In
general, the effective entropy will no longer be of Boltzmann type and the reactions will no longer satisfy mass-action kinetics.
• A. Hinsen, B. Jahnel, E. Cali, J.-P. Wary, Phase transitions for chase-escape models on Poisson--Gilbert graphs, Electronic Communications in Probability, 25 (2020), pp. 25/1--25/14, DOI 10.1214/
20-ECP306 .
We present results on phase transitions of local and global survival in a two-species model on Gilbert graphs. At initial time there is an infection at the origin that propagates on the Gilbert
graph according to a continuous-time nearest-neighbor interacting particle system. The Gilbert graph consists of susceptible nodes and nodes of a second type, which we call white knights. The
infection can spread on susceptible nodes without restriction. If the infection reaches a white knight, this white knight starts to spread on the set of infected nodes according to the same
mechanism, with a potentially different rate, giving rise to a competition of chase and escape. We show well-definedness of the model, isolate regimes of global survival and extinction of the
infection and present estimates on local survival. The proofs rest on comparisons to the process on trees, percolation arguments and finite-degree approximations of the underlying random graphs.
• S. Jansen, W. König, B. Schmidt, F. Theil, Surface energy and boundary layers for a chain of atoms at low temperature, Archive for Rational Mechanics and Analysis, 239 (2021), pp. 915--980
(published online on 21.12.2020), DOI 10.1007/s00205-020-01587-3 .
We analyze the surface energy and boundary layers for a chain of atoms at low temperature for an interaction potential of Lennard-Jones type. The pressure (stress) is assumed small but positive
and bounded away from zero, while the temperature goes to zero. Our main results are: (1) As the temperature goes to zero and at fixed positive pressure, the Gibbs measures for infinite chains
and semi-infinite chains satisfy path large deviations principles. The rate functions are bulk and surface energy functionals. The minimizer of the surface functional corresponds to zero
temperature boundary layers. (2) The surface correction to the Gibbs free energy converges to the zero temperature surface energy, characterized with the help of the minimum of the surface energy
functional. (3) The bulk Gibbs measure and Gibbs free energy can be approximated by their Gaussian counterparts. (4) Bounds on the decay of correlations are provided, some of them uniform in the
inverse temperature.
• CH. Hirsch, B. Jahnel, A. Tóbiás, Lower large deviations for geometric functionals, Electronic Communications in Probability, 25 (2020), pp. 41/1--41/12, DOI 10.1214/20-ECP322 .
This work develops a methodology for analyzing large-deviation lower tails associated with geometric functionals computed on a homogeneous Poisson point process. The technique applies to
characteristics expressed in terms of stabilizing score functions exhibiting suitable monotonicity properties. We apply our results to clique counts in the random geometric graph, intrinsic
volumes of Poisson--Voronoi cells, as well as power-weighted edge lengths in the random geometric, κ-nearest neighbor and relative neighborhood graph.
• J. Maas, A. Mielke, Modeling of chemical reaction systems with detailed balance using gradient structures, Journal of Statistical Physics, 181 (2020), pp. 2257--2303, DOI 10.1007/
s10955-020-02663-4 .
We consider various modeling levels for spatially homogeneous chemical reaction systems, namely the chemical master equation, the chemical Langevin dynamics, and the reaction-rate equation.
Throughout we restrict our study to the case where the microscopic system satisfies the detailed-balance condition. The latter allows us to enrich the systems with a gradient structure, i.e. the
evolution is given by a gradient-flow equation. We present the arising links between the associated gradient structures that are driven by the relative entropy of the detailed-balance steady
state. The limit of large volumes is studied in the sense of evolutionary Γ-convergence of gradient flows. Moreover, we use the gradient structures to derive hybrid models for coupling different
modeling levels.
• A. Tóbiás, B. Jahnel, Exponential moments for planar tessellations, Journal of Statistical Physics, 179 (2020), pp. 90--109, DOI 10.1007/s10955-020-02521-3 .
In this paper we show existence of all exponential moments for the total edge length in a unit disc for a family of planar tessellations based on Poisson point processes. Apart from classical
such tessellations like the Poisson--Voronoi, Poisson--Delaunay and Poisson line tessellation, we also treat the Johnson--Mehl tessellation, Manhattan grids, nested versions and Palm versions. As
part of our proofs, for some planar tessellations, we also derive existence of exponential moments for the number of cells and the number of edges intersecting the unit disk.
• A. Mielke, A. Stephan, Coarse-graining via EDP-convergence for linear fast-slow reaction systems, Mathematical Models & Methods in Applied Sciences, 30 (2020), pp. 1765--1807, DOI 10.1142/
S0218202520500360 .
We consider linear reaction systems with slow and fast reactions, which can be interpreted as master equations or Kolmogorov forward equations for Markov processes on a finite state space. We
investigate their limit behavior if the fast reaction rates tend to infinity, which leads to a coarse-grained model where the fast reactions create microscopically equilibrated clusters, while
the exchange mass between the clusters occurs on the slow time scale. Assuming detailed balance the reaction system can be written as a gradient flow with respect to the relative entropy.
Focusing on the physically relevant cosh-type gradient structure we show how an effective limit gradient structure can be rigorously derived and that the coarse-grained equation again has a
cosh-type gradient structure. We obtain the strongest version of convergence in the sense of the Energy-Dissipation Principle (EDP), namely EDP-convergence with tilting.
• A. Stephan, H. Stephan, Memory equations as reduced Markov processes, Discrete and Continuous Dynamical Systems, 39 (2019), pp. 2133--2155, DOI 10.3934/dcds.2019089 .
A large class of linear memory differential equations in one dimension, where the evolution depends on the whole history, can be equivalently described as a projection of a Markov process living
in a higher dimensional space. Starting with such a memory equation, we give an explicit construction of the corresponding Markov process. From a physical point of view the Markov process can be
understood as the change of the type of some quasiparticles along one-way loops. Typically, the arising Markov process does not have the detailed balance property. The method leads to a more
realisitc modeling of memory equations. Moreover, it carries over the large number of investigation tools for Markov processes to memory equations, like the calculation of the equilibrium state,
the asymptotic behavior and so on. The method can be used for an approximative solution of some degenerate memory equations like delay differential equations.
• CH. Hirsch, B. Jahnel, Large deviations for the capacity in dynamic spatial relay networks, Markov Processes and Related Fields, 25 (2019), pp. 33--73.
We derive a large deviation principle for the space-time evolution of users in a relay network that are unable to connect due to capacity constraints. The users are distributed according to a
Poisson point process with increasing intensity in a bounded domain, whereas the relays are positioned deterministically with given limiting density. The preceding work on capacity for relay
networks by the authors describes the highly simplified setting where users can only enter but not leave the system. In the present manuscript we study the more realistic situation where users
leave the system after a random transmission time. For this we extend the point process techniques developed in the preceding work thereby showing that they are not limited to settings with
strong monotonicity properties.
• C. Cotar, B. Jahnel, Ch. Külske, Extremal decomposition for random Gibbs measures: From general metastates to metastates on extremal random Gibbs measures, Electronic Communications in
Probability, 23 (2018), pp. 1--12, DOI 10.1214/18-ECP200 .
The concept of metastate measures on the states of a random spin system was introduced to be able to treat the large-volume asymptotics for complex quenched random systems, like spin glasses,
which may exhibit chaotic volume dependence in the strong-coupling regime. We consider the general issue of the extremal decomposition for Gibbsian specifications which depend measurably on a
parameter that may describe a whole random environment in the infinite volume. Given a random Gibbs measure, as a measurable map from the environment space, we prove measurability of its
decomposition measure on pure states at fixed environment, with respect to the environment. As a general corollary we obtain that, for any metastate, there is an associated decomposition
metastate, which is supported on the extremes for almost all environments, and which has the same barycenter.
• G. Botirov, B. Jahnel, Phase transitions for a model with uncountable spin space on the Cayley tree: The general case, Positivity. An International Mathematics Journal Devoted to Theory and
Applications of Positivity, 23 (2019), pp. 291--301 (published online on 17.08.2018), DOI 10.1007/s11117-018-0606-1 .
In this paper we complete the analysis of a statistical mechanics model on Cayley trees of any degree, started in [EsHaRo12, EsRo10, BoEsRo13, JaKuBo14, Bo17]. The potential is of
nearest-neighbor type and the local state space is compact but uncountable. Based on the system parameters we prove existence of a critical value θ[ c ] such that for θ≤θ [ c ] there is a unique
translation-invariant splitting Gibbs measure. For θ [ c ] < θ there is a phase transition with exactly three translation-invariant splitting Gibbs measures. The proof rests on an analysis of
fixed points of an associated non-linear Hammerstein integral operator for the boundary laws.
• W. Wagner, A random walk model for the Schrödinger equation, Mathematics and Computers in Simulation, 143 (2018), pp. 138--148, DOI 10.1016/j.matcom.2016.07.012 .
A random walk model for the spatially discretized time-dependent Schrödinger equation is constructed. The model consists of a class of piecewise deterministic Markov processes. The states of the
processes are characterized by a position and a complex-valued weight. Jumps occur both on the spatial grid and in the space of weights. Between the jumps, the weights change according to
deterministic rules. The main result is that certain functionals of the processes satisfy the Schrödinger equation.
• A. Mielke, R.I.A. Patterson, M.A. Peletier, D.R.M. Renger, Non-equilibrium thermodynamical principles for chemical reactions with mass-action kinetics, SIAM Journal on Applied Mathematics, 77
(2017), pp. 1562--1585, DOI 10.1137/16M1102240 .
We study stochastic interacting particle systems that model chemical reaction networks on the micro scale, converging to the macroscopic Reaction Rate Equation. One abstraction level higher, we
study the ensemble of such particle systems, converging to the corresponding Liouville transport equation. For both systems, we calculate the corresponding large deviations and show that under
the condition of detailed balance, the large deviations induce a non-linear relation between thermodynamic fluxes and free energy driving force.
• R.I.A. Patterson, S. Simonella, W. Wagner, A kinetic equation for the distribution of interaction clusters in rarefied gases, Journal of Statistical Physics, 169 (2017), pp. 126--167.
• M. Erbar, M. Fathi, V. Laschos, A. Schlichting, Gradient flow structure for McKean--Vlasov equations on discrete spaces, Discrete and Continuous Dynamical Systems, 36 (2016), pp. 6799--6833.
In this work, we show that a family of non-linear mean-field equations on discrete spaces, can be viewed as a gradient flow of a natural free energy functional with respect to a certain metric
structure we make explicit. We also prove that this gradient flow structure arises as the limit of the gradient flow structures of a natural sequence of N-particle dynamics, as N goes to infinity
• S. Jansen, W. König, B. Metzger, Large deviations for cluster size distributions in a continuous classical many-body system, The Annals of Applied Probability, 25 (2015), pp. 930--973.
An interesting problem in statistical physics is the condensation of classical particles in droplets or clusters when the pair-interaction is given by a stable Lennard-Jones-type potential. We
study two aspects of this problem. We start by deriving a large deviations principle for the cluster size distribution for any inverse temperature $betain(0,infty)$ and particle density $rhoin
(0,rho_rmcp)$ in the thermodynamic limit. Here $rho_rmcp >0$ is the close packing density. While in general the rate function is an abstract object, our second main result is the
$Gamma$-convergence of the rate function towards an explicit limiting rate function in the low-temperature dilute limit $betatoinfty$, $rho downarrow 0$ such that $-beta^-1logrhoto nu$ for some
$nuin(0,infty)$. The limiting rate function and its minimisers appeared in recent work, where the temperature and the particle density were coupled with the particle number. In the de-coupled
limit considered here, we prove that just one cluster size is dominant, depending on the parameter $nu$. Under additional assumptions on the potential, the $Gamma$-convergence along curves can be
strengthened to uniform bounds, valid in a low-temperature, low-density rectangle.
• M. Erbar, J. Maas, D.R.M. Renger, From large deviations to Wasserstein gradient flows in multiple dimensions, Electronic Communications in Probability, 20 (2015), pp. 1--12.
We study the large deviation rate functional for the empirical distribution of independent Brownian particles with drift. In one dimension, it has been shown by Adams, Dirr, Peletier and Zimmer
[ADPZ11] that this functional is asymptotically equivalent (in the sense of Gamma-convergence) to the Jordan-Kinderlehrer-Otto functional arising in the Wasserstein gradient flow structure of the
Fokker-Planck equation. In higher dimensions, part of this statement (the lower bound) has been recently proved by Duong, Laschos and Renger, but the upper bound remained open, since the proof in
[DLR13] relies on regularity properties of optimal transport maps that are restricted to one dimension. In this note we present a new proof of the upper bound, thereby generalising the result of
[ADPZ11] to arbitrary dimensions.
• M. Muminov, H. Neidhardt, T. Rasulov, On the spectrum of the lattice spin-boson Hamiltonian for any coupling: 1D case, Journal of Mathematical Physics, 56 (2015), pp. 053507/1--053507/24.
A lattice model of radiative decay (so-called spin-boson model) of a two level atom and at most two photons is considered. The location of the essential spectrum is described. For any coupling
constant the finiteness of the number of eigenvalues below the bottom of its essential spectrum is proved. The results are obtained by considering a more general model H for which the lower bound
of its essential spectrum is estimated. Conditions which guarantee the finiteness of the number of eigenvalues of H below the bottom of its essential spectrum are found. It is shown that the
discrete spectrum might be infinite if the parameter functions are chosen in a special form.
• S. Simonella, M. Pulvirenti, On the evolution of the empirical measure for hard-sphere dynamics, Bulletin of the Institute of Mathematics. Academia Sinica. Institute of Mathematics, Academia
Sinica, Taipei, Taiwan. English. English summary., 10 (2015), pp. 171--204.
• A. Mielke, M.A. Peletier, D.R.M. Renger, On the relation between gradient flows and the large-deviation principle, with applications to Markov chains and diffusion, Potential Analysis, 41 (2014),
pp. 1293--1325.
Motivated by the occurence in rate functions of time-dependent large-deviation principles, we study a class of non-negative functions ℒ that induce a flow, given by ℒ(z[t],ż[t])=0. We derive
necessary and sufficient conditions for the unique existence of a generalized gradient structure for the induced flow, as well as explicit formulas for the corresponding driving entropy and
dissipation functional. In particular, we show how these conditions can be given a probabilistic interpretation when ℒ is associated to the large deviations of a microscopic particle system.
Finally, we illustrate the theory for independent Brownian particles with drift, which leads to the entropy-Wasserstein gradient structure, and for independent Markovian particles on a finite
state space, which leads to a previously unknown gradient structure.
• M.H. Duong, V. Laschos, M. Renger, Wasserstein gradient flows from large deviations of many-particle limits, ESAIM. Control, Optimisation and Calculus of Variations, 19 (2013), pp. 1166--1188.
• M.A. Peletier, M. Renger, M. Veneroni, Variational formulation of the Fokker--Planck equation with decay: A particle approach, Communications in Contemporary Mathematics, 15 (2013), pp. 1350017/
• S. Adams, A. Collevecchio, W. König, A variational formula for the free energy of an interacting many-particle system, The Annals of Probability, 39 (2011), pp. 683--728.
We consider $N$ bosons in a box in $R^d$ with volume $N/rho$ under the influence of a mutually repellent pair potential. The particle density $rhoin(0,infty)$ is kept fixed. Our main result is
the identification of the limiting free energy, $f(beta,rho)$, at positive temperature $1/beta$, in terms of an explicit variational formula, for any fixed $rho$ if $beta$ is sufficiently small,
and for any fixed $beta$ if $rho$ is sufficiently small. The thermodynamic equilibrium is described by the symmetrised trace of $rm e^-beta Hcal_N$, where $Hcal_N$ denotes the corresponding
Hamilton operator. The well-known Feynman-Kac formula reformulates this trace in terms of $N$ interacting Brownian bridges. Due to the symmetrisation, the bridges are organised in an ensemble of
cycles of various lengths. The novelty of our approach is a description in terms of a marked Poisson point process whose marks are the cycles. This allows for an asymptotic analysis of the system
via a large-deviations analysis of the stationary empirical field. The resulting variational formula ranges over random shift-invariant marked point fields and optimizes the sum of the
interaction and the relative entropy with respect to the reference process. In our proof of the lower bound for the free energy, we drop all interaction involving lq infinitely longrq cycles, and
their possible presence is signalled by a loss of mass of the lq finitely longrq cycles in the variational formula. In the proof of the upper bound, we only keep the mass on the lq finitely
longrq cycles. We expect that the precise relationship between these two bounds lies at the heart of Bose-Einstein condensation and intend to analyse it further in future.
• M. Aizenman, S. Jansen, P. Jung, Symmetry breaking in quasi-1D Coulomb systems, Annales Henri Poincare. A Journal of Theoretical and Mathematical Physics, 11 (2010), pp. 1453--1485.
Quasi one-dimensional systems are systems of particles in domains which are of infinite extent in one direction and of uniformly bounded size in all other directions, e.g. on a cylinder of
infinite length. The main result proven here is that for such particle systems with Coulomb interactions and neutralizing background, the so-called “jellium”, at any temperature and at any
finite-strip width there is translation symmetry breaking. This extends the previous result on Laughlin states in thin, two-dimens The structural argument which is used here bypasses the question
of whether the translation symmetry breaking is manifest already at the level of the one particle density function. It is akin to that employed by Aizenman and Martin (1980) for a similar
statement concerning symmetry breaking at all temperatures in strictly one-dimensional Coulomb systems. The extension is enabled through bounds which establish tightness of finite-volume charge
• A. Collevecchio, W. König, P. Mörters, N. Sidorova, Phase transitions for dilute particle systems with Lennard--Jones potential, Communications in Mathematical Physics, 299 (2010), pp. 603--630.
Contributions to Collected Editions
• L. Lüchtrath, Ch. Mönch, The directed age-dependent random connection model with arc reciprocity, in: Modelling and Mining Networks, M. Dewar, B. Kamiński, D. Kaszyński, Ł. Kraiński, P. Prałat,
F. Théberge, M. Wrzosek, eds., 14671 of Lecture Notes in Computer Science, Springer, 2024, pp. 97--114, DOI 10.1007/978-3-031-59205-8_7 .
We introduce a directed spatial random graph model aimed at modelling certain aspects of social media networks. We provide two variants of the model: an infinite version and an increasing
sequence of finite graphs that locally converge to the infinite model. Both variants have in common that each vertex is placed into Euclidean space and carries a birth time. Given locations and
birth times of two vertices, an arc is formed from younger to older vertex with a probability depending on both birth times and the spatial distance of the vertices. If such an arc is formed, a
reverse arc is formed with probability depending on the ratio of the endpoints' birth times. Aside from the local limit result connecting the models, we investigate degree distributions, two
different clustering metrics and directed percolation.
• A. Hinsen, B. Jahnel, E. Cali, J.-P. Wary, Malware propagation in urban D2D networks, in: IEEE 18th International Symposium on on Modeling and Optimization in Mobile, ad Hoc, and Wireless
Networks, (WiOpt), Volos, Greece, Institute of Electrical and Electronics Engineers (IEEE), 2020, pp. 1--9.
We introduce and analyze models for the propagation of malware in pure D2D networks given via stationary Cox--Gilbert graphs. Here, the devices form a Poisson point process with random intensity
measure λ, Λ where Λ is stationary and given, for example, by the edge-length measure of a realization of a Poisson--Voronoi tessellation that represents an urban street system. We assume that,
at initial time, a typical device at the center of the network carries a malware and starts to infect neighboring devices after random waiting times. Here we focus on Markovian models, where the
waiting times are exponential random variables, and non-Markovian models, where the waiting times feature strictly positive minimal and finite maximal waiting times. We present numerical results
for the speed of propagation depending on the system parameters. In a second step, we introduce and analyze a counter measure for the malware propagation given by special devices called white
knights, which have the ability, once attacked, to eliminate the malware from infected devices and turn them into white knights. Based on simulations, we isolate parameter regimes in which the
malware survives or is eliminated, both in the Markovian and non-Markovian setting.
• B. Jahnel, W. König, Probabilistic methods for spatial multihop communication systems, in: Topics in Applied Analysis and Optimisation, M. Hintermüller, J.F. Rodrigues, eds., CIM Series in
Mathematical Sciences, Springer Nature Switzerland AG, Cham, 2019, pp. 239--268.
• M. Kantner, U. Bandelow, Th. Koprucki, H.-J. Wünsche, Multi-scale modelling and simulation of single-photon sources on a device level, in: Euro-TMCS II -- Theory, Modelling & Computational
Methods for Semiconductors, 7th -- 9th December 2016, Tyndall National Institute, University College Cork, Ireland, E. O'Reilly, S. Schulz, S. Tomic, eds., Tyndall National Institute, 2016, pp.
Preprints, Reports, Technical Reports
• R. Lasarzik, E. Rocca, R. Rossi, Existence and weak-strong uniqueness for damage systems in viscoelasticity, Preprint no. 3129, WIAS, Berlin, 2024, DOI 10.20347/WIAS.PREPRINT.3129 .
Abstract, PDF (524 kByte)
In this paper we investigate the existence of solutions and their weak-strong uniqueness property for a PDE system modelling damage in viscoelastic materials. In fact, we address two solution
concepts, emphweak and emphstrong solutions. For the former, we obtain a global-in-time existence result, but the highly nonlinear character of the system prevents us from proving their
uniqueness. For the latter, we prove local-in-time existence. Then, we show that the strong solution, as long as it exists, is unique in the class of weak solutions. This emphweak-strong
uniqueness statement is proved by means of a suitable relative energy inequality.
• B. Jahnel, J. Köppl, Y. Steenbeck, A. Zass, The variational principle for a marked Gibbs point process with infinite-range multibody interactions, Preprint no. 3126, WIAS, Berlin, 2024, DOI
10.20347/WIAS.PREPRINT.3126 .
Abstract, PDF (468 kByte)
We prove the Gibbs variational principle for the Asakura?Oosawa model in which particles of random size obey a hardcore constraint of non-overlap and are additionally subject to a
temperature-dependent area interaction. The particle size is unbounded, leading to infinite-range interactions, and the potential cannot be written as a k-body interaction for fixed k. As a
byproduct, we also prove the existence of infinite-volume Gibbs point processes satisfying the DLR equations. The essential control over the influence of boundary conditions can be established
using the geometry of the model and the hard-core constraint.
• L. Lüchtrath, Ch. Mönch, A very short proof of Sidorenko's inequality for counts of homomorphism between graphs, Preprint no. 3120, WIAS, Berlin, 2024, DOI 10.20347/WIAS.PREPRINT.3120 .
Abstract, PDF (148 kByte)
We provide a very elementary proof of a classical extremality result due to Sidorenko (Discrete Math. 131.1-3, 1994), which states that among all graphs G on k vertices, the k-1-edge star
maximises the number of graph homomorphisms of G into any graph H.
• E. Bolthausen, W. König, Ch. Mukherjee, Self-repellent Brownian bridges in an interacting Bose gas, Preprint no. 3110, WIAS, Berlin, 2024, DOI 10.20347/WIAS.PREPRINT.3110 .
Abstract, PDF (478 kByte)
We consider a model of d-dimensional interacting quantum Bose gas, expressed in terms of an ensemble of interacting Brownian bridges in a large box and undergoing the influence of all the
interactions between the legs of each of the Brownian bridges. We study the thermodynamic limit of the system and give an explicit formula for the limiting free energy and a necessary and
sufficient criterion for the occurrence of a condensation phase transition. For d ≥ 5 and sufficiently small interaction, we prove that the condensate phase is not empty. The ideas of proof rely
on the similarity of the interaction to that of the self-repellent random walk, and build on a lace expansion method conducive to treating paths undergoing mutual repellence within each bridge.
• B. Jahnel, L. Lüchtrath, M. Ortgiese, Cluster sizes in subcritical soft Boolean models, Preprint no. 3106, WIAS, Berlin, 2024, DOI 10.20347/WIAS.PREPRINT.3106 .
Abstract, PDF (435 kByte)
We consider the soft Boolean model, a model that interpolates between the Boolean model and long-range percolation, where vertices are given via a stationary Poisson point process. Each vertex
carries an independent Pareto-distributed radius and each pair of vertices is assigned another independent Pareto weight with a potentially different tail exponent. Two vertices are now connected
if they are within distance of the larger radius multiplied by the edge weight. We determine the tail behaviour of the Euclidean diameter and the number of points of a typical maximally connected
component in a subcritical percolation phase. For this, we present a sharp criterion in terms of the tail exponents of the edge-weight and radius distributions that distinguish a regime where the
tail behaviour is controlled only by the edge exponent from a regime in which both exponents are relevant. Our proofs rely on fine path-counting arguments identifying the precise order of decay
of the probability that far-away vertices are connected.
• J. Köppl, N. Lanchier, M. Mercer, Survival and extinction for a contact process with a density-dependent birth rate, Preprint no. 3103, WIAS, Berlin, 2024, DOI 10.20347/WIAS.PREPRINT.3103 .
Abstract, PDF (860 kByte)
To study later spatial evolutionary games based on the multitype contact process, we first focus in this paper on the conditions for survival/extinction in the presence of only one strategy, in
which case our model consists of a variant of the contact process with a density-dependent birth rate. The players are located on the d-dimensional integer lattice, with natural birth rate λ and
natural death rate one. The process also depends on a payoff a[11] = a modeling the effects of the players on each other: while players always die at rate one, the rate at which they give birth
is given by λ times the exponential of a times the fraction of occupied sites in their neighborhood. In particular, the birth rate increases with the local density when a > 0, in which case the
payoff a models mutual cooperation, whereas the birth rate decreases with the local density when a < 0, in which case the payoff a models intraspecific competition. Using standard coupling
arguments to compare the process with the basic contact process (the particular case a = 0 ), we prove that, for all payoffs a , there is a phase transition from extinction to survival in the
direction of λ. Using various block constructions, we also prove that, for all birth rates λ, there is a phase transition in the direction of a. This last result is in sharp contrast with the
behavior of the nonspatial deterministic mean-field model in which the stability of the extinction state only depends on λ . This underlines the importance of space (local interactions) and
stochasticity in our model.
• P.P. Ghosh, B. Jahnel, S.K. Jhawar, Large and moderate deviations in Poisson navigations, Preprint no. 3096, WIAS, Berlin, 2024, DOI 10.20347/WIAS.PREPRINT.3096 .
Abstract, PDF (318 kByte)
We derive large- and moderate-deviation results in random networks given as planar directed navigations on homogeneous Poisson point processes. In this non-Markovian routing scheme, starting from
the origin, at each consecutive step a Poisson point is joined by an edge to its nearest Poisson point to the right within a cone. We establish precise exponential rates of decay for the
probability that the vertical displacement of the random path is unexpectedly large. The proofs rest on controlling the dependencies of the individual steps and the randomness in the horizonal
displacement as well as renewal-process arguments.
• B. Jahnel, J. Köppl, Time-periodic behaviour in one- and two-dimensional interacting particle systems, Preprint no. 3092, WIAS, Berlin, 2024, DOI 10.20347/WIAS.PREPRINT.3092 .
Abstract, PDF (311 kByte)
We provide a class of examples of interacting particle systems on $Z^d$, for $din1,2$, that admit a unique translation-invariant stationary measure, which is not the long-time limit of all
translation-invariant starting measures, due to the existence of time-periodic orbits in the associated measure-valued dynamics. This is the first such example and shows that even in low
dimensions, not every limit point of the measure-valued dynamics needs to be a time-stationary measure.
• B. Jahnel, U. Rozikov, Three-state $p$-SOS models on binary Cayley trees, Preprint no. 3089, WIAS, Berlin, 2024, DOI 10.20347/WIAS.PREPRINT.3089 .
Abstract, PDF (640 kByte)
We consider a version of the solid-on-solid model on the Cayley tree of order two in which vertices carry spins of value 0,1 or 2 and the pairwise interaction of neighboring vertices is given by
their spin difference to the power p>0. We exhibit all translation-invariant splitting Gibbs measures (TISGMs) of the model and demonstrate the existence of up to seven such measures, depending
on the parameters. We further establish general conditions for extremality and non-extremality of TISGMs in the set of all Gibbs measures and use them to examine selected TISGMs for a small and a
large p. Notably, our analysis reveals that extremality properties are similar for large p compared to the case p=1, a case that has been explored already in previous work. However, for the small
p, certain measures that were consistently non-extremal for p=1 do exhibit transitions between extremality and non-extremality.
• CH. Hirsch, B. Jahnel, S.K. Jhawar, P. Juhász, Poisson approximation of fixed-degree nodes in weighted random connection models, Preprint no. 3057, WIAS, Berlin, 2023, DOI 10.20347/
WIAS.PREPRINT.3057 .
Abstract, PDF (474 kByte)
We present a process-level Poisson-approximation result for the degree-$k$ vertices in a high-density weighted random connection model with preferential-attachment kernel in the unit volume. Our
main focus lies on the impact of the left tails of the weight distribution for which we establish general criteria based on their small-weight quantiles. To illustrate that our conditions are
broadly applicable, we verify them for weight distributions with polynomial and stretched exponential left tails. The proofs rest on truncation arguments and a recently established quantitative
Poisson approximation result for functionals of Poisson point processes.
• B. Jahnel, Ch. Külske, A. Zass, Locality properties for discrete and continuum Widom--Rowlinson models in random environments, Preprint no. 3054, WIAS, Berlin, 2023, DOI 10.20347/
WIAS.PREPRINT.3054 .
Abstract, PDF (606 kByte)
We consider the Widom--Rowlinson model in which hard disks of two possible colors are constrained to a hard-core repulsion between particles of different colors, in quenched random environments.
These random environments model spatially dependent preferences for the attach- ment of disks. We investigate the possibility to represent the joint process of environment and infinite-volume
Widom--Rowlinson measure in terms of continuous (quasilocal) Papangelou inten- sities. We show that this is not always possible: In the case of the symmetric Widom-Rowlinson model on a
non-percolating environment, we can explicitly construct a discontinuity coming from the environment. This is a new phenomenon for systems of continuous particles, but it can be understood as a
continuous-space echo of a simpler non-locality phenomenon known to appear for the diluted Ising model (Griffiths singularity random field [ EMSS00]) on the lattice, as we explain in the course
of the proof.
• B. Jahnel, A.D. Vu, A long-range contact process in a random environment, Preprint no. 3047, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3047 .
Abstract, PDF (3735 kByte)
We study survival and extinction of a long-range infection process on a diluted one-dimensional lattice in discrete time. The infection can spread to distant vertices according to a Pareto
distribution, however spreading is also prohibited at random times. We prove a phase transition in the recovery parameter via block arguments. This contributes to a line of research on directed
percolation with long-range correlations in nonstabilizing random environments.
• L. Andreis, T. Iyer, E. Magnanini, Gelation, hydrodynamic limits and uniqueness in cluster coagulation processes, Preprint no. 3039, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3039 .
Abstract, PDF (627 kByte)
We consider the problem of gelation in the cluster coagulation model introduced by Norris [Comm. Math. Phys., 209(2):407-435 (2000)]; this model is general enough to incorporate various
inhomogenieties in the evolution of clusters, for example, their shape, or their location in space. We derive general, sufficient criteria for stochastic gelation in this model, and for
trajectories associated with this process to concentrate among solutions of a generalisation of the Flory equation; thus providing sufficient criteria for the equation to have gelling solutions.
As particular cases, we extend results related to the classical Marcus-Lushnikov coagulation process and Smoluchowski coagulation equation, showing that reasonable 'homogenous' coagulation
processes with exponent γ larger than 1 yield gelation. In another special case, we prove a law of large numbers for the trajectory of the empirical measure of the stochastic cluster coagulation
process, by means of a uniqueness result for the solution of the aforementioned generalised Flory equation. Finally, we use coupling arguments with inhomogeneous random graphs to deduce
sufficient criterion for strong gelation (the emergence of a particle of size O(N)).
• B. Jahnel, J. Köppl, B. Lodewijks, A. Tóbiás, Percolation in lattice k-neighbor graphs, Preprint no. 3028, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3028 .
Abstract, PDF (437 kByte)
We define a random graph obtained via connecting each point of ℤ ^d independently to a fixed number 1 ≤ k ≤ 2d of its nearest neighbors via a directed edge. We call this graph the emphdirected
k-neighbor graph. Two natural associated undirected graphs are the emphundirected and the emphbidirectional k-neighbor graph, where we connect two vertices by an undirected edge whenever there is
a directed edge in the directed k-neighbor graph between them in at least one, respectively precisely two, directions. In these graphs we study the question of percolation, i.e., the existence of
an infinite self-avoiding path. Using different kinds of proof techniques for different classes of cases, we show that for k=1 even the undirected k-neighbor graph never percolates, but the
directed one percolates whenever k≥ d+1, k≥ 3 and d ≥5, or k ≥4 and d=4. We also show that the undirected 2-neighbor graph percolates for d=2, the undirected 3-neighbor graph percolates for d=3,
and we provide some positive and negative percolation results regarding the bidirectional graph as well. A heuristic argument for high dimensions indicates that this class of models is a natural
discrete analogue of the k-nearest-neighbor graphs studied in continuum percolation, and our results support this interpretation.
• W. König, N. Pétrélis, R. Soares Dos Santos, W. van Zuijlen, Weakly self-avoiding walk in a Pareto-distributed random potential, Preprint no. 3023, WIAS, Berlin, 2023, DOI 10.20347/
WIAS.PREPRINT.3023 .
Abstract, PDF (604 kByte)
We investigate a model of continuous-time simple random walk paths in ℤ ^d undergoing two competing interactions: an attractive one towards the large values of a random potential, and a
self-repellent one in the spirit of the well-known weakly self-avoiding random walk. We take the potential to be i.i.d. Pareto-distributed with parameter α > d, and we tune the strength of the
interactions in such a way that they both contribute on the same scale as t → ∞. Our main results are (1) the identification of the logarithmic asymptotics of the partition function of the model
in terms of a random variational formula, and, (2) the identification of the path behaviour that gives the overwhelming contribution to the partition function for α > 2d: the random-walk path
follows an optimal trajectory that visits each of a finite number of random lattice sites for a positive random fraction of time. We prove a law of large numbers for this behaviour, i.e., that
all other path behaviours give strictly less contribution to the partition function.The joint distribution of the variational problem and of the optimal path can be expressed in terms of a
limiting Poisson point process arising by a rescaling of the random potential. The latter convergence is in distribution?and is in the spirit of a standard extreme-value setting for a rescaling
of an i.i.d. potential in large boxes, like in KLMS09.
• B. Jahnel, J. Köppl, On the long-time behaviour of reversible interacting particle systems in one and two dimensions, Preprint no. 3004, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3004 .
Abstract, PDF (287 kByte)
By refining Holley's free energy technique, we show that, under quite general assumptions on the dynamics, the attractor of a (possibly non-translation-invariant) interacting particle system in
one or two spatial dimensions is contained in the set of Gibbs measures if the dynamics admits a reversible Gibbs measure. In particular, this implies that there can be no reversible interacting
particle system that exhibits time-periodic behaviour and that every reversible interacting particle system is ergodic if and only if the reversible Gibbs measure is unique. In the special case
of non-attractive stochastic Ising models this answers a question due to Liggett.
• A. Stephan, H. Stephan, Positivity and polynomial decay of energies for square-field operators, Preprint no. 2901, WIAS, Berlin, 2021, DOI 10.20347/WIAS.PREPRINT.2901 .
Abstract, PDF (328 kByte)
We show that for a general Markov generator the associated square-field (or carré du champs) operator and all their iterations are positive. The proof is based on an interpolation between the
operators involving the generator and their semigroups, and an interplay between positivity and convexity on Banach lattices. Positivity of the square-field operators allows to define a hierarchy
of quadratic and positive energy functionals which decay to zero along solutions of the corresponding evolution equation. Assuming that the Markov generator satisfies an operator-theoretic
normality condition, the sequence of energies is log-convex. In particular, this implies polynomial decay in time for the energy functionals along solutions.
• A. Stephan, Coarse-graining and reconstruction for Markov matrices, Preprint no. 2891, WIAS, Berlin, 2021, DOI 10.20347/WIAS.PREPRINT.2891 .
Abstract, PDF (248 kByte)
We present a coarse-graining (or model order reduction) procedure for stochastic matrices by clustering. The method is consistent with the natural structure of Markov theory, preserving
positivity and mass, and does not rely on any tools from Hilbert space theory. The reconstruction is provided by a generalized Penrose-Moore inverse of the coarse-graining operator incorporating
the inhomogeneous invariant measure of the Markov matrix. As we show, the method provides coarse-graining and reconstruction also on the level of tensor spaces, which is consistent with the
notion of an incidence matrix and quotient graphs, and, moreover, allows to coarse-grain and reconstruct fluxes. Furthermore, we investigate the connection with functional inequalities and
Poincaré-type constants.
• M. Heida, B. Jahnel, A.D. Vu, Stochastic homogenization on irregularly perforated domains, Preprint no. 2880, WIAS, Berlin, 2021, DOI 10.20347/WIAS.PREPRINT.2880 .
Abstract, PDF (668 kByte)
We study stochastic homogenization of a quasilinear parabolic PDE with nonlinear microscopic Robin conditions on a perforated domain. The focus of our work lies on the underlying geometry that
does not allow standard homogenization techniques to be applied directly. Instead we prove homogenization on a regularized geometry and demonstrate afterwards that the form of the homogenized
equation is independent from the regularization. Then we pass to the regularization limit to obtain the anticipated limit equation. Furthermore, we show that Boolean models of Poisson point
processes are covered by our approach.
• W. König, Branching random walks in random environment: A survey, Preprint no. 2779, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2779 .
Abstract, PDF (253 kByte)
We consider branching particle processes on discrete structures like the hypercube in a random fitness landscape (i.e., random branching/killing rates). The main question is about the location
where the main part of the population sits at a late time, if the state space is large. For answering this, we take the expectation with respect to the migration (mutation) and the branching/
killing (selection) mechanisms, for fixed rates. This is intimately connected with the parabolic Anderson model, the heat equation with random potential, a model that is of interest in
mathematical physics because of the observed prominent effect of intermittency (local concentration of the mass of the solution in small islands). We present several advances in the investigation
of this effect, also related to questions inspired from biology.
• J.-D. Deuschel, T. Orenshtein, G.R. Moreno Flores, Aging for the stationary Kardar--Parisi--Zhang equation and related models, Preprint no. 2763, WIAS, Berlin, 2020, DOI 10.20347/
WIAS.PREPRINT.2763 .
Abstract, PDF (368 kByte)
We study the aging property for stationary models in the KPZ universality class. In particular, we show aging for the stationary KPZ fixed point, the Cole-Hopf solution to the stationary KPZ
equation, the height function of the stationary TASEP, last-passage percolation with boundary conditions and stationary directed polymers in the intermediate disorder regime. All of these models
are shown to display a universal aging behavior characterized by the rate of decay of their correlations. As a comparison, we show aging for models in the Edwards-Wilkinson universality class
where a different decay exponent is obtained. A key ingredient to our proofs is a characteristic of space-time stationarity - covariance-to-variance reduction - which allows to deduce the
asymptotic behavior of the correlations of two space-time points by the one of the variances at one point. We formulate several open problems.
• D. Heydecker, R.I.A. Patterson, Bilinear coagulation equations, Preprint no. 2637, WIAS, Berlin, 2019, DOI 10.20347/WIAS.PREPRINT.2637 .
Abstract, PDF (453 kByte)
We consider coagulation equations of Smoluchowski or Flory type where the total merge rate has a bilinear form π(y) · Aπ (x) for a vector of conserved quantities π, generalising the
multiplicative kernel. For these kernels, a gelation transition occurs at a finite time t[g] ∈ (0,∞), which can be given exactly in terms of an eigenvalue problem in finite dimensions. We prove a
hydrodynamic limit for a stochastic coagulant, including a corresponding phase transition for the largest particle, and exploit a coupling to random graphs to extend analysis of the limiting
process beyond the gelation time.
• A. Stephan, Combinatorial considerations on the invariant measure of a stochastic matrix, Preprint no. 2627, WIAS, Berlin, 2019, DOI 10.20347/WIAS.PREPRINT.2627 .
Abstract, PDF (225 kByte)
The invariant measure is a fundamental object in the theory of Markov processes. In finite dimensions a Markov process is defined by transition rates of the corresponding stochastic matrix. The
Markov tree theorem provides an explicit representation of the invariant measure of a stochastic matrix. In this note, we given a simple and purely combinatorial proof of the Markov tree theorem.
In the symmetric case of detailed balance, the statement and the proof simplifies even more.
• M. Mittnenzweig, Hydrodynamic limit and large deviations of reaction-diffusion master equations, Preprint no. 2521, WIAS, Berlin, 2018, DOI 10.20347/WIAS.PREPRINT.2521 .
Abstract, PDF (389 kByte)
We derive the hydrodynamic limit of a reaction-diffusion master equation, that combines an exclusion process with a reversible chemical master equation expression for the reaction rates. The
crucial assumption is that the associated macroscopic reaction network has a detailed balance equilibrium. The hydrodynamic limit is given by a system of reaction-diffusion equations with a
modified mass action law for the reaction rates. We provide the upper bound for large deviations of the empirical measure from the hydrodynamic limit.
• M. Aizenman, S. Jansen, P. Jung, Symmetry breaking in quasi-1D Coulomb systems, Preprint no. 1547, WIAS, Berlin, 2010, DOI 10.20347/WIAS.PREPRINT.1547 .
Abstract, Postscript (1642 kByte), PDF (355 kByte)
Quasi one-dimensional systems are systems of particles in domains which are of infinite extent in one direction and of uniformly bounded size in all other directions, e.g. on a cylinder of
infinite length. The main result proven here is that for such particle systems with Coulomb interactions and neutralizing background, the so-called “jellium”, at any temperature and at any
finite-strip width there is translation symmetry breaking. This extends the previous result on Laughlin states in thin, two-dimens The structural argument which is used here bypasses the question
of whether the translation symmetry breaking is manifest already at the level of the one particle density function. It is akin to that employed by Aizenman and Martin (1980) for a similar
statement concerning symmetry breaking at all temperatures in strictly one-dimensional Coulomb systems. The extension is enabled through bounds which establish tightness of finite-volume charge
Talks, Poster
• J. Köppl, Dynamical Gibbs Variational Principles and applications to attractor properties (online talk), Postgraduate Online Probability Seminar (POPS) (online seminar), Postgraduate Online
Probability Seminar (POPS), online, February 28, 2024.
• J. Köppl, Dynamical Gibbs Variational Principles and applications to attractor properties (online talk), Oberseminar Stochastik, Universität Paderborn, Institut für Mathematik, May 15, 2024.
• J. Köppl, The long-time behaviour of interacting particle systems: a Lyapunov functional approach (online talk), Probability seminar, University of California Los Angeles (UCLA), Department of
Mathematics, Los Angeles, USA, February 15, 2024.
• B. Jahnel, Poisson approximation of fixed-degree nodes in weighted random connection models, Bernoulli-IMS 11th World Congress in Probability and Statistics, August 12 - 16, 2024,
Ruhr-Universität Bochum, August 16, 2024.
• B. Jahnel, Time-periodic behavior in one- and two-dimensional interacting particle systems (online talk), International Scientific Conference on Gibbs Measures and the Theory of Dynamical Systems
(online event), May 20 - 21, 2024, Ministry of Higher Education, Science and Innovations of the Republic of Uzbekistan, Romanovskiy Institut of Mathematics and University of Exact and Social
Sciences, Tashkent, Uzbekistan, May 20, 2024.
• L. Lüchtrath, Cluster sizes in soft Boolean models, Probability and Analysis 2024, April 22 - 26, 2024, Wroclaw University of Science and Technology, Będlewo, Poland, April 22, 2024.
• J. Köppl, Dynamical Gibbs variational principles for irreversible interacting particle systems and applications to attractor properties, Analysis and Probability Seminar Passau, Universität
Passau, Fakultät für Informatik und Mathematik, January 17, 2023.
• J. Köppl, Dynamical Gibbs variational principles for irreversible interacting particle systems with applications to attractor properties, 16th German Probability and Statistics Days (GPSD) 2023,
March 7 - 10, 2023, Universität Duisburg-Essen, March 9, 2023.
• J. Köppl, The long-time behaviour of interacting particle systems: A Lyapunov functional approach, In Search of Model Structures for Non-equilibrium Systems, Münster, April 24 - 28, 2023.
• J. Köppl, The long-time behaviour of interacting particle systems: A Lyapunov functional approach, In search of model structures for non-equilibrium systems, April 24 - 28, 2023, Westfälische
Wilhelms-Universität Münster, Fachbereich Mathematik und Informatik, April 25, 2023.
• J. Köppl, The longe-time behavior of interacting particle systems: A Lyapunov functional approach, Mathematics of Random Systems: Summer School 2023, September 11 - 15, 2023, Kyoto University,
Research Institute for Mathematical Sciences, Kyoto, Japan, November 13, 2023.
• D. Peschka, Moving contact lines for sliding droplets, 93rd Annual Meeting of the International Association of Applied Mathematics and Mechanics (GAMM 2023), Session 11 ``Interfacial Flows'', May
30 - June 2, 2023, Technische Universität Dresden, June 1, 2023.
• A. Zass, Diffusion dynamics for an system of two-type speres and the associated depletion effect, Workshop MathMicS 2023: Mathematics and microscopic theory for random Soft Matter systems,
February 13 - 15, 2023, Heinrich-Heine-Universität Düsseldorf, Institut für Theoretische Physik II - Soft Matter, February 14, 2023.
• A. Zass, The statistical mechanics of the interlacement point process, Second Annual Conference of the SPP2265, March 27 - 30, 2023, Deutsches Zentrum für Luft- und Raumfahrt (DLR), Köln, March
30, 2023.
• B. Jahnel, Stochastische Methoden für Kommunikationsnetzwerke, Seminar der Fakultät Informatik, Hochschule Reutlingen, October 6, 2023.
• B. Jahnel, Stochastische Methoden für Kommunikationsnetzwerke, Orientierungsmodul der Technischen Universität Braunschweig, Institut für Mathematische Stochastik, November 2, 2023.
• B. Jahnel, Stochastische Methoden für Kommunikationsnetzwerke, Orientierungsmodul der Technischen Universität Braunschweig, Institut für Mathematische Stochastik, January 30, 2023.
• B. Jahnel, Subcritical percolation phases for generalized weight-dependent random connection models, 21st INFORMS Applied Probability Society Conference, June 28 - 30, 2023, Centre Prouvé, Nancy,
France, June 29, 2023.
• B. Jahnel, Subcritical percolation phases for generalized weight-dependent random connection models, DMV Annual Meeting 2023, Minisymposium MS 12 ``Random Graphs and Statistical Network
Analysis'', September 25 - 28, 2023, Technische Universität Ilmenau, September 25, 2023.
• B. Jahnel, The statistical mechanics of the interlacement point process, Second Annual Conference of the SPP 2265, March 27 - 30, 2023, Deutsches Zentrum für Luft- und Raumfahrt (DLR), Köln,
March 29, 2023.
• W. König, The statistical mechanics of the interlacement point process, Second Annual Conference of the SPP 2265, March 27 - 30, 2023, Deutsches Zentrum für Luft- und Raumfahrt (DLR), Köln, March
30, 2023.
• L. Lüchtrath, The emergence of a giant component in one-dimensional inhomogeneous networks with long-range effects, 18th Workshop on Algorithms and Models for Web Graphs, May 23 - 26, 2023, The
Fields Institute for Research in Mathematical Sciences, Toronto, Canada, May 25, 2023.
• A. Stephan, Positivity and polynomial decay of energies for square-field operators, Variational and Geometric Structures for Evolution, October 9 - 13, 2023, Centro Internazionale per la Ricerca
Matematica (CIRM), Levico Terme, Italy, October 13, 2023.
• A. Stephan, Fast-slow chemical reaction systems: Gradient systems and EDP-convergence, Oberseminar Dynamics, Technische Universität München, Department of Mathematics, April 17, 2023.
• S. Schindler, Convergence to self-similar profiles for a coupled reaction-diffusion system on the real line, CRC 910: Workshop on Control of Self-Organizing Nonlinear Systems, Wittenberg,
September 26 - 28, 2022.
• S. Schindler, Energy approach for a coupled reaction-diffusion system on the real line (online talk), SFB 910 Symposium ``Pattern formation and coherent structure in dissipative systems'' (Online
Event), Technische Universität Berlin, January 14, 2022.
• S. Schindler, On asymptotic self-similar behavior of solutions to parabolic systems (hybrid talk), SFB910: International Conference on Control of Self-Organizing Nonlinear Systems (Hybrid Event),
November 23 - 26, 2022, Technische Universität Berlin, Potsdam, November 25, 2022.
• A. Stephan, EDP-convergence for a linear reaction-diffusion systems with fast reversible reaction (online talk), SIAM Conference on Analysis of Partial Differential Equations (PD22) (Online
Event), Minisymposium MS11: ``Bridging Gradient Flows, Hypocoercivity and Reaction-Diffusion Systems'', March 14 - 18, 2022, March 14, 2022.
• B. Jahnel, Malware propagation in mobile device-to-device networks (online talk), Joint H2020 AI@EDGE and INSPIRE-5G Project Workshop -- Platforms and Mathematical Optimization for Secure and
Resilient Future Networks (Online Event), Paris, France, November 8 - 9, 2022, November 8, 2022.
• R.I.A. Patterson, Large deviations with vanishing reactant concentrations, Workshop on Chemical Reaction Networks, July 6 - 8, 2022, Politecnico di Torino, Department of Mathematical Sciences
``G. L. Lagrange'', Torino, Italy, July 7, 2022.
• A. Stephan, EDP-convergence for a linear reaction-diffusion system with fast reversible reaction, Mathematical Models for Biological Multiscale Systems (Hybrid Event), September 12 - 14, 2022,
WIAS Berlin, September 12, 2022.
• A. Stephan, EDP-convergence for gradient systems and applications to fast-slow chemical reaction systems, Block Course ``Multiscale Problems and Homogenization'' at Freie Universität Berlin from
Nov. 10 to Dec. 15, 2022, Berlin Mathematical School & Berlin Mathematics Research Center MATH+, November 24, 2022.
• S. Schindler, Self-similar diffusive equilibration for a coupled reaction-diffusion system with mass-action kinetics, SFB910: International Conference on Control of Self-Organizing Nonlinear
Systems (Hybrid Event), August 29 - September 2, 2021, Technische Universität Berlin, Potsdam, September 1, 2021.
• A. Stephan, Gradient systems and EDP-convergence with applications to nonlinear fast-slow reaction systems (online talk), DS21: SIAM Conference on Applications of Dynamical Systems, Minisymposium
19 ``Applications of Stochastic Reaction Networks'' (Online Event), May 23 - 27, 2021, Society for Industrial and Applied Mathematics, May 23, 2021.
• A. Stephan, Gradient systems and mulit-scale reaction networks (online talk), Limits and Control of Stochastic Reaction Networks (Online Event), July 26 - 30, 2021, American Institute of
Mathematics, San Jose, USA, July 29, 2021.
• A. Stephan, Coarse-graining via EDP-convergence for linear fast-slow reaction-diffusion systems (online talk), 91st Annual Meeting of the International Association of Applied Mathematics and
Mechanics (Online Event), Section S14 ``Applied Analysis'', March 15 - 19, 2021, Universität Kassel, March 17, 2021.
• B. Jahnel, First-passage percolation and chase-escape dynamics on random geometric graphs, Stochastic Geometry Days, November 15 - 19, 2021, Dunkerque, France, November 17, 2021.
• B. Jahnel, Gibbsian representation for point processes via hyperedge potentials (online talk), Thematic Einstein Semester on Geometric and Topological Structure of Materials, Summer Semester
2021, Technische Universität Berlin, May 20, 2021.
• B. Jahnel, Phase transitions for the Boolean model for Cox point processes (online talk), DYOGENE Seminar (Online Event), INRIA Paris, France, January 11, 2021.
• B. Jahnel, Phase transitions for the Boolean model for Cox point processes (online talk), Probability Seminar Bath (Online Event), University of Bath, Department of Mathematical Sciences, UK,
October 18, 2021.
• B. Jahnel, Stochastic geometry for epidemiology (online talk), Monday Biostatistics Roundtable, Institute of Biometry and Clinical Epidemiology (Online Event), Campus Charité, June 14, 2021.
• T. Orenshtein, Aging for the O'Conell--Yor model in intermediate disorder (online talk), Joint Israeli Probability Seminar (Online Event), Technion, Haifa, November 17, 2020.
• T. Orenshtein, Aging for the stationary KPZ equation, The 3rd Haifa Probability School. Workshop on Random Geometry and Stochastic Analysis, February 24 - 28, 2020, Technion Israel Institute of
Technology, Haifa, February 24, 2020.
• T. Orenshtein, Aging for the stationary KPZ equation (online talk), Bernoulli-IMS One World Symposium 2020 (Online Event), August 24 - 28, 2020, A virtual one week symposium on Probability and
Mathematical Statistics, August 27, 2020.
• T. Orenshtein, Aging for the stationary KPZ equation (online talk), 13th Annual ERC Berlin--Oxford Young Researchers Meeting on Applied Stochastic Analysis (Online Event), June 8 - 10, 2020, WIAS
Berlin, June 10, 2020.
• T. Orenshtein, Aging in Edwards--Wilkinson and KPZ universality classes (online talk), Probability, Stochastic Analysis and Statistics Seminar (Online Event), University of Pisa, Italy, October
27, 2020.
• A. Stephan, EDP-convergence for nonlinear fast-slow reactions, Workshop ``Variational Methods for Evolution'', September 13 - 19, 2020, Mathematisches Forschungsinstitut Oberwolfach, September
18, 2020.
• A. Stephan, On mathematical coarse-graining for linear reaction systems, 8th BMS Student Conference, February 19 - 21, 2020, Technische Universität Berlin, February 21, 2020.
• A. Stephan, On gradient flows and gradient systems (online talk), CRC 1114 PhD Seminar (Online Event), Freie Universität Berlin, November 11, 2020.
• A. Stephan, On gradient systems and applications to interacting particle systems (online talk), CRC 1114 PhD Seminar (Online Event), Freie Universität Berlin, November 25, 2020.
• A. Stephan, Coarse-graining for gradient systems with applications to reaction systems (online talk), Thematic Einstein Semester on Energy-based Mathematical Methods for Reactive Multiphase
Flows: Student Compact Course ``Variational Methods for Fluids and Solids'' (Online Event), October 12 - 23, 2020, WIAS Berlin, October 15, 2020.
• A. Stephan, EDP-convergence for nonlinear fast-slow reaction systems (online talk), Annual Workshop of the GAMM Activity Group on Analysis of PDEs (Online Event), September 30 - October 2, 2020,
Institute of Science and Technology Austria (IST Austria), Klosterneuburg, October 1, 2020.
• R.I.A. Patterson, Interpreting LDPs without detailed balance, Workshop ``Variational Methods for Evolution'', September 13 - 19, 2020, Mathematisches Forschungsinstitut Oberwolfach, September 15,
• A. Stephan, Rigorous derivation of the effective equation of a linear reaction system with different time scales, 90th Annual Meeting of the International Association of Applied Mathematics and
Mechanics (GAMM 2019), Section S14 ``Applied Analysis'', February 18 - 22, 2019, Universität Wien, Technische Universität Wien, Austria, February 21, 2019.
• B. Jahnel, Continuum percolation in random environment, Workshop on Probability, Analysis and Applications (PAA), September 23 - October 4, 2019, African Institute for Mathematical Sciences ---
Ghana (AIMS Ghana), Accra.
• R.I.A. Patterson, A novel simulation method for stochastic particle systems, Seminar, Department of Chemical Engineering and Biotechnology, University of Cambridge, Faculty of Mathematics, UK,
May 9, 2019.
• R.I.A. Patterson, Flux large deviations, Workshop on Chemical Reaction Networks, July 1 - 3, 2019, Politecnico di Torino, Dipartimento di Scienze Matematiche ``G. L. Lagrange``, Italy, July 2,
• R.I.A. Patterson, Flux large deviations, Seminar, Statistical Laboratory, University of Cambridge, Faculty of Mathematics, UK, May 7, 2019.
• L. Taggi, Critical density in activated random walks, Horowitz Seminar on Probability, Ergodic Theory and Dynamical Systems, Tel Aviv University, School of Mathematical Sciences, Israel, May 20,
• W. Dreyer, Thermodynamics and kinetic theory of non-Newtonian fluids, Technische Universität Darmstadt, Mathematische Modellierung und Analysis, June 13, 2018.
• M. Kantner, Multi-scale modeling and numerical simulation of single-photon emitters, Matheon Workshop--9th Annual Meeting ``Photonic Devices", Zuse Institut, Berlin, March 3, 2016.
• M. Kantner, Multi-scale modelling and simulation of single-photon sources on a device level, Euro--TMCS II Theory, Modelling & Computational Methods for Semiconductors, Tyndall National Institute
and University College Cork, Cork, Ireland, December 9, 2016.
• A. Mielke, On entropic gradient structures for classical and quantum Markov processes with detailed balance, Pure Analysis and PDEs Seminar, Imperial College London, Department of Mathematics,
UK, May 11, 2016.
• A. Mielke, Chemical Master Equation: Coarse graining via gradient structures, Kolloquium des SFB 1114 ``Scaling Cascades in Complex Systems'', Freie Universität Berlin, Fachbereich Mathematik,
Berlin, June 4, 2015.
• A. Mielke, Geometric approaches at and for theoretical and applied mechanics, Phil Holmes Retirement Celebration, October 8 - 9, 2015, Princeton University, Mechanical and Aerospace Engineering,
New York, USA, October 8, 2015.
• A. Mielke, The Chemical Master Equation as a discretization of the Fokker--Planck and Liouville equation for chemical reactions, Colloquium of Collaborative Research Center/Transregio
``Discretization in Geometry and Dynamics'', Technische Universität Berlin, Institut für Mathematik, Berlin, February 10, 2015.
• A. Mielke, The Fokker--Planck and Liouville equations for chemical reactions as large-volume approximations of the Chemical Master Equation, Workshop ``Stochastic Limit Analysis for Reacting
Particle Systems'', December 16 - 18, 2015, WIAS Berlin, Berlin, December 18, 2015.
• R.I.A. Patterson, Approximation errors for Smoluchowski simulations, 10 th IMACS Seminar on Monte Carlo Methods, July 6 - 10, 2015, Johannes Kepler Universität Linz, Austria, July 7, 2015.
• A. Mielke, Generalized gradient structures for reaction-diffusion systems, Applied Mathematics Seminar, Università di Pavia, Dipartimento di Matematica, Italy, June 17, 2014.
• R.I.A. Patterson, Monte Carlo simulation of nano-particle formation, University of Technology Eindhoven, Institute for Complex Molecular Systems, Netherlands, September 5, 2013.
• S. Jansen, Large deviations for interacting many-particle systems in the Saha regime, Berlin-Leipzig Seminar on Analysis and Probability Theory, July 8, 2011, Technische Universität Clausthal,
Institut für Mathematik, July 8, 2011.
• W. König, Eigenvalue order statistics and mass concentration in the parabolic Anderson model, Berlin-Leipzig Seminar on Analysis and Probability Theory, Technische Universität Clausthal, Institut
für Mathematik, July 8, 2011.
• W. König, Phase transitions for dilute particle systems with Lennard--Jones potential, University of Bath, Department of Mathematical Sciences, UK, April 14, 2010.
• W. König, Phase transitions for dilute particle systems with Lennard--Jones potential, Workshop on Mathematics of Phase Transitions: Past, Present, Future, November 12 - 15, 2009, University of
Warwick, Coventry, UK, November 15, 2009.
External Preprints
• D. Heydecker , R.I.A. Patterson, Kac interaction clusters: A bilinear coagulation equation and phase transition, Preprint no. arXiv:1902.07686, Cornell University Library, 2019.
We consider the interaction clusters for Kac's model of a gas with quadratic interaction rates, and show that they behave as coagulating particles with a bilinear coagulation kernel. In the large
particle number limit the distribution of the interaction cluster sizes is shown to follow an equation of Smoluchowski type. Using a coupling to random graphs, we analyse the limiting equation,
showing well-posedness, and a closed form for the time of the gelation phase transition tg when a macroscopic cluster suddenly emerges. We further prove that the second moment of the cluster size
distribution diverges exactly at tg. Our methods apply immediately to coagulating particle systems with other bilinear coagulation kernels. | {"url":"https://www.wias-berlin.de/research/ats/manyparticle/?lang=1","timestamp":"2024-11-02T18:59:11Z","content_type":"text/html","content_length":"192579","record_id":"<urn:uuid:a96edf54-f5db-4b34-8b99-84e34f73a957>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00610.warc.gz"} |
Understanding Mathematical Functions: What Does The Mod Function Do
Mathematical functions are an essential part of understanding and interpreting the relationships between various mathematical quantities. One such function that is commonly used is the mod function,
also known as the modulo operation. This function calculates the remainder of a division between two numbers and is widely used in various mathematical and programming applications.
Key Takeaways
• The mod function, also known as the modulo operation, calculates the remainder of a division between two numbers.
• It is widely used in mathematical and programming applications.
• The mod function simplifies complex calculations and improves efficiency in certain algorithms.
• It has applications in computer programming and is relevant in number theory.
• Misconceptions about the mod function should be addressed to understand its purpose and how it differs from other mathematical operations.
Understanding Mathematical Functions: What does the mod function do
The mod function, short for modulo, is a fundamental mathematical operation that calculates the remainder of a division between two numbers. It is denoted by the symbol "%".
A. Definition of the mod function
The mod function takes two numbers, the dividend and the divisor, and returns the remainder when the dividend is divided by the divisor. In other words, it calculates what is left over after the
division process.
B. Explain what the mod function does
For example, in the expression 7 % 3, the dividend is 7 and the divisor is 3. When 7 is divided by 3, the quotient is 2 with a remainder of 1. Therefore, 7 % 3 equals 1.
Another way to look at it is that the mod function returns the amount that is "left over" or "unused". For instance, in the expression 10 % 4, the quotient is 2 with a remainder of 2. So, 10 % 4
equals 2.
C. Provide examples of how the mod function is used
• One common application of the mod function is in programming, particularly when dealing with loops or conditional statements. It can be used to check if a number is even or odd, or to perform
tasks at regular intervals.
• It is also used in cryptography, where it plays a role in encryption and decryption algorithms.
• In mathematics, the mod function is used in various fields such as number theory, abstract algebra, and computer science.
Understanding Mathematical Functions: What does the mod function do
Mathematical functions play a crucial role in various fields such as computer science, engineering, and finance. One such function that is commonly used is the mod function, which serves the purpose
of finding the remainder. In this chapter, we will delve into how the mod function works and its applications in different contexts.
How the mod function works
The mod function, short for modulo, is used to find the remainder when one number is divided by another. It can be denoted as "a mod b" where 'a' is the dividend and 'b' is the divisor. The result of
'a mod b' is the remainder when 'a' is divided by 'b'.
For example, when 10 is divided by 3, the quotient is 3 and the remainder is 1. This can be expressed as 10 mod 3 = 1. The mod function is particularly useful in scenarios where we need to work with
remainders, such as finding the day of the week or cycling through a sequence of numbers.
Discuss the process of taking the remainder
When using the mod function to compute the remainder, the process involves dividing the first number by the second and taking the leftover amount as the result. This operation can be performed on
integers, floating-point numbers, and even negative numbers.
For instance, if we have -7 mod 3, the result would be 2, as the remainder of -7 divided by 3 is 2. Similarly, 5.5 mod 2 would yield 1.5, signifying the remainder when 5.5 is divided by 2.
Explain how the mod function is applied in different contexts
The mod function is widely used in a variety of applications. In computer programming, it is used to determine if a number is even or odd, as well as to cycle through elements in an array. In
finance, the mod function can be utilized to calculate interest payments or determine recurring patterns in financial data.
Furthermore, the mod function finds extensive use in cryptography, where it is employed to generate secure keys and implement algorithms for secure communication.
Applications of the mod function
The mod function, short for modulus, is a fundamental mathematical operation that finds its applications in various fields. Let’s delve into how this function is used in computer programming and its
significance in number theory.
A. Use in computer programming
The mod function is extensively used in computer programming for a wide range of applications. It is commonly used to determine whether a number is even or odd. By using the mod function with 2 as
the divisor, programmers can easily check if a number is divisible by 2. This is particularly useful in algorithms that involve sorting, searching, or manipulation of data.
Moreover, the mod function is invaluable in handling cyclic behavior in algorithms. For instance, in a program that needs to iterate through a sequence of elements and restart from the beginning once
the end is reached, the mod function comes in handy to wrap around the index.
Additionally, the mod function is often applied in encryption algorithms and data validation processes. It plays a crucial role in generating hash codes, validating checksums, and ensuring the
integrity of transmitted data.
B. Relevance in number theory
The mod function holds significant relevance in number theory, a branch of mathematics that deals with the properties and relationships of numbers. It is particularly useful in the study of
divisibility, congruences, and prime numbers.
1. Divisibility
When a number is divided by another number using the mod function, the remainder obtained provides valuable information about the divisibility of the two numbers. This concept is fundamental in
number theory, where the mod function is used to explore the properties of divisors and multiples.
2. Congruences
In number theory, the mod function is employed to define congruences between integers. Two numbers are said to be congruent modulo n if their difference is divisible by n. This concept forms the
basis of modular arithmetic, which has diverse applications in cryptography, algebra, and computer science.
3. Prime numbers
The mod function is crucial in the identification and verification of prime numbers. Through the application of various theorems and algorithms that utilize the mod function, mathematicians can
efficiently identify prime numbers and study their intricate patterns and properties.
Advantages of using the mod function
When it comes to mathematical functions, the mod function plays a crucial role in simplifying complex calculations and improving efficiency in various algorithms. Its unique ability to handle
remainders and divisibility makes it a valuable tool in a wide range of mathematical and programming applications.
A. Highlight its ability to simplify complex calculations
• Handling remainders
The mod function allows for the efficient handling of remainders in mathematical calculations. This is particularly useful when dealing with large numbers or complex operations, as it provides a
clear and concise way to represent the remainder in a division.
• Modular arithmetic
By using the mod function, complex arithmetic operations can be simplified through modular arithmetic. This can be especially beneficial in cryptography, computer graphics, and number theory,
where the mod function helps to reduce the complexity of calculations and ensure accuracy in the results.
B. Discuss how it can improve efficiency in certain algorithms
• Data organization
In algorithms and programming, the mod function is commonly used to organize and distribute data into different categories or groups. This can significantly improve the efficiency of sorting and
searching algorithms, as well as enhance the performance of data structures and databases.
• Optimizing loops and iterations
By utilizing the mod function, iterations and repetitive calculations in algorithms can be optimized to reduce the number of operations and improve overall performance. This is particularly
beneficial in cases where the efficiency of the algorithm is critical, such as in real-time systems or resource-constrained environments.
Common misconceptions about the mod function
When it comes to mathematical functions, the mod function often tends to be misunderstood. Let's address some of the common misconceptions about this function and clarify its purpose and how it
differs from other mathematical operations.
A. Address misunderstandings about its purpose
One common misconception about the mod function is that it is simply a fancy way to divide numbers. However, the mod function serves a specific purpose in mathematics that goes beyond simple
division. It is important to understand that the mod function returns the remainder of a division operation, rather than the quotient itself.
B. Clarify how it differs from other mathematical operations
Another misunderstanding about the mod function is that it is similar to the remainder operator in programming languages. While they may serve similar purposes, it is essential to clarify that the
mod function operates within the realm of mathematical functions and has its own distinct properties and applications. Unlike other mathematical operations such as addition, subtraction,
multiplication, and division, the mod function specifically deals with remainders and has unique mathematical properties.
Summarizing, the mod function, short for modulus, is a mathematical operation that returns the remainder when one number is divided by another. It is a useful tool for various applications such as
finding patterns in numbers, solving equations, and cryptography. The mod function can be written as "a % b" in programming languages like Python, JavaScript, and C++. It is important to understand
how the mod function works in order to effectively use it in mathematical and programming contexts.
For those interested in delving further into mathematical functions, there are plenty of resources available to explore. From online tutorials and lectures to textbooks and peer-reviewed journals,
the world of mathematics is rich and diverse. Keep exploring, learning, and applying mathematical functions to expand your knowledge and problem-solving skills.
ONLY $99
Immediate Download
MAC & PC Compatible
Free Email Support | {"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-what-does-the-mod-function-do","timestamp":"2024-11-14T18:26:57Z","content_type":"text/html","content_length":"214751","record_id":"<urn:uuid:45c2b949-58da-44ea-9084-9e0f4668b573>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00151.warc.gz"} |
How Many Extra Electrons Are On This Charged Tape?
Photo: Rhett Allain. Electrically charged tape
Maybe you haven’t tried this before — but it’s kind of awesome. If you put two clear sticky tapes together and pull them apart, they will become electrically charged. It produces a nice effect that
even works when the air humidity is higher than normal.
Of course, electric charge is conserved, so when you pull these apart the bottom tape becomes negative and the top tape is positive. Let’s focus on the bottom tape — it’s negative because some of the
electrons from the top tape where transferred. But how many electrons made this transition? Let’s find out.
Data Collection
I’m going to vertically hang two negatively charged tapes. Just like this:
Using video analysis, (Tracker Video Analysis) the length of the two tapes are both about 15 centimeter and each hangs with a deflection of 23 degrees from the vertical. Finally, I need the mass of
the tape. If you take a 4 meter long piece of tape and place it on a scale, it has a mass of 4 grams. That means the tape has a length density of 1 gram per meter. So, 15 centimeters of tape has a
mass of 0.15 grams. | {"url":"https://rjallain.medium.com/how-many-extra-electrons-are-on-this-charged-tape-3809f84a0e11","timestamp":"2024-11-07T10:53:02Z","content_type":"text/html","content_length":"91564","record_id":"<urn:uuid:d530ba7a-8b99-44e3-a2e9-ce30d6f1607e>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00518.warc.gz"} |
American Mathematical Society
Proofs and Conversations
Talia Ringer
My research is on making it easier to write formal, machine-checkable proofs using special tools called proof assistants. So of course, I love these tools, and I want everyone to have a chance to use
them. I am just now noticing that I have been selling my love for these tools to mathematicians the wrong way.
The sales pitch in my field is obvious. My work is primarily in the field of formal verification: using these tools to write machine-checkable proofs about software systems. In my field, we really
care about our software being correctly implemented, down to the tiniest details. Subtle mistakes in software systems can be catastrophic, expensive, and even fatal. The most powerful way to avoid
these mistakes is to formally prove our software correct.
So for a long time, I told mathematicians that these tools are great because they make it possible to have full confidence in one’s results. The response to this was often one of confusion. The
normal way of doing math has worked pretty well for most of history (modulo the occasional surmountable crisis like Russell’s Paradox). Why go formal?
I can now see the right sales pitch to mathematicians: formal proof has the potential to empower collaboration at a scale never before seen in mathematics. For example, mathlib mC20, the large formal
math library implemented in the proof assistant called Lean, has had 141 contributors in the past month at the time of writing. These contributors range from professional mathematicians to hobbyists
and everything in between. If you want to contribute, you can, too. This is the beauty of the collision of the world of proofs with the world of software.
It will take time and patience for this to catch on. Most mathematicians I have spoken to still view the proof assistant as making it harder to prove their results rather than easier. Mathematicians
who use proof assistants still find themselves having to think hard about details that they often take for granted, because nothing is really obvious to a computer. Much of the work being done right
now is on building the infrastructure that will help mathematicians abstract over those details in the future.
In my research field, our formal proof infrastructure is mature enough that many of us find it easier to write a proof in a proof assistant. The proof assistant really assists us. Math will get there
too, and when it does, we will find ourselves in a world where more and more people can participate in the Great Conversation of Mathematics.
Proof assistants
To use a proof assistant, we start by writing definitions directly inside of the proof assistant (or finding relevant definitions someone else has already written). We then state theorems about these
definitions. Finally, we write proofs about these theorems interactively, with the help of the proof assistant.
Suppose we wish to prove that every natural number is even or odd. We will do so using a popular proof assistant called Coq. We can get the definitions of “even” and “odd” from Coq’s standard
Definition Even n ≔ ∃ m, n = 2*m.
Definition Odd n ≔ ∃ m, n = 2*m + 1.
We can use these to state our theorem:
Theorem even_or_odd:
∀ (n : nat), Even n \/ Odd n.
We can then move into the interactive proof mode by simply typing the word Proof. In this interactive proof mode, next to where we are typing, Coq displays our current goal to us, which at this point
is just the theorem we stated. To prove this goal, we send Coq these high-level strategies called tactics. Here, we can use a tactic that does induction:
induction n.
Coq responds by refining the goal into the base case and the inductive case. In the base case:
1 goal
Even 0 \/ Odd 0
our goal is to show that zero is either even or odd. Zero is obviously even, but “obviously” does not really compute. So we provide more detail than we may be used to. Our goal Even 0 Odd 0 is a
disjunction, so first, we tell Coq that we will prove the disjunction by proving its left side (Even 0). Then, we explicitly choose 0 for m (there are some ways around providing this much detail, but
they have many caveats). Then it is obvious. In other words:
- left. exists 0. auto.
Here, left refines the goal to the left side of the disjunction, exists 0 chooses 0 for m, and auto takes care of the “obvious” part.
In the inductive case, our goal is to show that, given some natural number n, if n is either even or odd, then so is its successor (denoted S n):
1 goal
n : nat
IHn : Even n \/ Odd n
Even (S n) \/ Odd (S n)
Note that our inductive hypothesis is given a name, IHn, which we can refer to explicitly. Here, we call the destruct tactic on it, which does case analysis, splitting into the even and odd cases:
- destruct IHn.
In the even case:
1 goal
n : nat
H : Even n
Even (S n) \/ Odd (S n)
we know the successor is odd. But we have to do more work, again, since this is a computer checking the result. So after choosing the right side of the disjunction in the goal (that is, saying the
successor is odd), we use destruct again on H, the fact that n is even. What this does is use the definition of evenness to assert that there is some x for which n = 2*m. We can then use that same
exact x to show that S n is odd by the definition of oddness:
+ right. destruct H. exists x. lia.
where lia invokes a simple linear arithmetic solver to prove the remaining equality in a way that satisfies Coq. The odd case is fairly similar, and then we are done, so that our proof looks like
Theorem even_or_odd:
∀ (n : nat), Even n \/ Odd n.
induction n.
- left. exists 0. auto.
- destruct IHn.
+ right. destruct H. exists x. lia.
+ left. destruct H. exists (S x). lia.
What I call a “proof” here is really a proof script—a sequence of tactics that proves our goal. But Coq does not check this proof script directly. Instead, it translates the whole thing down to this
low-level representation called a proof term. This proof term is a purely logical representation of the proof, without any abstraction, and so it is often quite large. What is important is that Coq
can check this purely logical proof term against the theorem statement automatically, giving us certainty that it proves that theorem.
While this proof is a toy example, Coq and its siblings like Lean and Isabelle/HOL have been used to write proofs about both state-of-the-art mathematics and security-critical software systems.
Choosing a proof assistant
There are many proof assistants to choose from. Lean is likely the most popular in mathematics in the US these days, but I would not let that stop you from exploring other proof assistants. For
example, I use Coq to write proofs about software and about programming languages. I often use a fairly niche proof assistant called Cubical Agda for higher-dimensional reasoning, like to reason
about homotopies or, more generally, how proofs themselves relate to each other.
There are many axes along which proof assistants vary that factor into one’s choice of proof assistant. Some of these axes relate to the proof assistant’s community of users: mathematicians versus
computer scientists, means of interacting, axioms communally agreed upon as OK to assume,Footnote^1 and style of writing proofs. Others relate to infrastructure: libraries, frameworks, automation,
user interfaces, languages, and archives. Still others relate to the guts of the proof assistants: logical foundations and expressiveness, means of achieving trustworthiness, and ways of representing
proofs internally.
Yes, this is a thing, and it is one reason many mathematicians tell me they prefer Lean to Coq, even though I love Coq.
Mathematicians I speak to often claim that the guts of the proof assistant do not matter to them. I think they do matter, they are just abstracted away. In fact, the guts are what allow for
abstraction to begin with. For central to these guts is a design principle that states that there ought to be separation of concerns between the thing that produces the proof and the thing that
checks the proof BB02BW05. The thing that checks the proof should be a small, human-readable logic checker called the kernel. The thing that produces the proof (like the proof script we saw earlier)
is then free to do pretty much anything, so long as in the end it produces something that the kernel can check (the corresponding proof term). This checking happens when we write Qed.
This separation of concerns is what makes the proof assistant trustworthy while empowering users to build and use automation that allows for higher and higher levels of abstraction. Since we can
trust lemmas and theorems once they have been proven, we can also build on previous results, just like we would in math. This makes it possible for communities of users to work in parallel on
different proofs, using one another’s results smoothly. This whole experience then starts to look a lot like software engineering.
Proof engineering
The view of writing machine-checkable proofs through a software engineering lens is called proof engineering RPS19. Every aspect of writing formal proofs, from the community to the infrastructure to
the guts, has parallels in software engineering. This is good because the software engineering community has figured out a lot about how to make things easier, and all of that becomes available to
us. For example, we can use existing systems to track changes to our work-in-progress proofs with our collaborators, and we can adapt design principles from software engineering to help us write
proofs collaboratively.
The benefits of viewing proofs through a software engineering lens become especially potent when it comes to empowering collaboration. One example I like of this has to do with how I interact with
some of my students—by pair proving. I view pair proving as the proof analogue to pair programming. In pair programming, the driver writes code while the navigator helps steer the process and give
feedback. Every so often, programmers switch roles.
When I pair prove with my students, I like to start as the navigator, and occasionally jump in as driver if something is easier to show than to explain. My most useful purpose as navigator is to help
students figure out when to move between bottom-up and top-down reasoning. Bottom-up reasoning asks: given what we know, what is it we can show? Top-down reasoning asks: given what we would like to
show, what do we need to know? Students are often good at both of these individually, but they tend to get stuck in one mode of reasoning when switching to the other would be more effective. As
navigator, though, I can help them figure out when to switch directions. This is one small way proof engineering helps us collaborate.
The Great Conversation
The big promise of proof engineering comes when we look at collaboration between entire communities all around the world. Then, we can see the ways these tools and principles and communities can
empower and enhance the Great Conversation of Mathematics, or even reduce its barriers to entry.
My favorite example of this came from Terry Tao when he was learning how to use Lean. He shared a link to his Lean proofs inside of a GitHub repository. GitHub repositories are a common way that
people store code that makes it easy to contribute. Contribution happens by way of something called a pull request—a collection of changes made locally on one’s computer that are submitted to the
original authors for review and eventual approval. An approved pull request becomes part of the code.
This is exactly what happened. The pull requests trickled in immediately, and I found myself in awe. I realized that, in 2024, anyone in the world can submit pull requests to collaborate with Terry
Tao. He had spoken to me once about how we had entered this era of mathematics where collaborating and bridging fields are valuable skills. Watching Terry’s experiences with Lean, I realized that
proof assistants are a powerful tool in this era of collaboration and bridging fields. Mathematicians need to know.
In the futuristic world we live in, when you submit a pull request to Terry Tao’s GitHub repository, you do not need to worry much about accidentally breaking his proofs—no matter who you are. Thanks
to the separation of concerns between proof production and proof checking, you can just check the revised proofs locally. If you change the top-level theorem he is proving or the axioms he relies on,
maybe then you should start to worry. So long as you leave those in tact, though, you might be able to help him fill in holes or improve the elegance or clarity of his proof. To be sure, you can
check the result and make sure it passes Lean’s proof checker. Then you can submit your pull request and suddenly you are, in some sense, collaborating with Terry Tao. He and you are linked in the
Great Conversation of Mathematics.
Broadening the conversation
An even more exciting possibility stems from broadening the very notion of what the Great Conversation of Mathematics could be—and letting it include computers. Large language models like ChatGPT,
for example, are fundamentally unreliable, but it turns out this lack of reliability does not matter if we use the language model to generate formal proofs of theorems we have already stated, since
the proof assistant’s kernel can check the proof in the end.
Thanks to this certainty, we can start to include computers at many points of the Great Conversation—asking and answering questions, helping with discovery, debugging faulty conjectures, dispatching
proofs of lemmas, finding relevant information, discovering connections and analogies and helping you use them for your goals—all without compromising trust. I hope someday this conversation grows
into a computer-aided community of mathematicians at a scale never before seen, where anyone can participate. Of course, the goal should never be to replace mathematicians—only to empower you all to
explore the world of mathematics more and more, with more tools at your disposal.
Getting started
Want to try these proof assistants? There is a large document full of resources that we recently put together during the National Academies AI for Math Seminar.Footnote^2 The tutorials for formal
proof listed in that document, like the Lean Natural Number Game, are an especially great place to start. The discussion forums listed, like the Lean Zulip, are indispensable to new and seasoned
proof engineers alike. Be patient—it may take some time to get used to being explicit about things that you may take for granted in pen-and-paper mathematics, or finding the right automation to help
you not need to be explicit about those things. And above all, do not be afraid to keep asking questions. People want to help. Enjoy!
Photo of Talia Ringer is courtesy of Talia Ringer. | {"url":"https://www.ams.org/journals/notices/202409/noti3032/noti3032.html?adat=October%202024&trk=3032&pdfissue=202409&pdffile=rnoti-p1179.pdf&cat=none&type=.html","timestamp":"2024-11-09T20:08:22Z","content_type":"text/html","content_length":"105754","record_id":"<urn:uuid:6528f91f-36ed-4f46-aef3-b22458adf307>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00013.warc.gz"} |
Mathematics Money Management - Ralph Vince - PDF Free Download
THE MATHEMATICS OF MONEY MANAGEMENT: RISK ANALYSIS TECHNIQUES FOR TRADERS by Ralph Vince
Published by John Wiley & Sons, Inc. Library of Congress Cataloging-in-Publication Data Vince. Ralph. 1958-The mathematics of money management: risk analysis techniques for traders / by Ralph Vince.
Includes bibliographical references and index. ISBN 0-471-54738-7 1. Investment analysis—Mathematics. 2. Risk management—Mathematics 3. Program trading (Securities) HG4529N56 1992 332.6'01'51-dc20
Preface and Dedication The favorable reception of Portfolio Management Formulas exceeded even the greatest expectation I ever had for the book. I had written it to promote the concept of optimal f
and begin to immerse readers in portfolio theory and its missing relationship with optimal f. Besides finding friends out there, Portfolio Management Formulas was surprisingly met by quite an
appetite for the math concerning money management. Hence this book. I am indebted to Karl Weber, Wendy Grau, and others at John Wiley & Sons who allowed me the necessary latitude this book required.
There are many others with whom I have corresponded in one sort or another, or who in one way or another have contributed to, helped me with, or influenced the material in this book. Among them are
Florence Bobeck, Hugo Rourdssa, Joe Bristor, Simon Davis, Richard Firestone, Fred Gehm (whom I had the good fortune of working with for awhile), Monique Mason, Gordon Nichols, and Mike Pascaul. I
also wish to thank Fran Bartlett of G & H Soho, whose masterful work has once again transformed my little mountain of chaos, my little truckload of kindling, into the finished product that you now
hold in your hands. This list is nowhere near complete as there are many others who, to varying degrees, influenced this book in one form or another. This book has left me utterly drained, and I
intend it to be my last. Considering this, I'd like to dedicate it to the three people who have influenced me the most. To Rejeanne, my mother, for teaching me to appreciate a vivid imagination; to
Larry, my father, for showing me at an early age how to squeeze numbers to make them jump; to Arlene, my wife, part ner, and best friend. This book is for all three of you. Your influences resonate
throughout it. Chagrin Falls, Ohio R. V. March 1992
Index Introduction.............................................................................................. 5 Scope of this
book................................................................................ 5 Some prevalent misconceptions........................................................... 6 Worst-case scenarios
and stategy.........................................................6 Mathematics notation........................................................................... 7 Synthetic constructs in this
text........................................................... 7 Optimal trading quantities and optimal f............................................. 8 Chapter 1-The Empirical
Techniques.......................................................9 Deciding on quantity............................................................................ 9 Basic
concepts...................................................................................... 9 The runs test.......................................................................................
10 Serial correlation................................................................................ 11 Common dependency errors.............................................................. 12
Mathematical Expectation................................................................. 13 To reinvest trading profits or not....................................................... 14 Measuring a
good system for reinvestment the Geometric Mean..... 14 How best to reinvest...........................................................................15 Optimal fixed fractional
trading.........................................................15 Kelly formulas................................................................................... 16 Finding the optimal f by the
Geometric Mean................................... 16 To summarize thus far....................................................................... 17 Geometric Average
Trade.................................................................. 17 Why you must know your optimal f.................................................. 18 The severity of
drawdown................................................................. 18 Modern portfolio theory..................................................................... 19 The Markovitz
model......................................................................... 19 The Geometric Mean portfolio strategy............................................. 21 Daily procedures for using
optimal portfolios................................... 21 Allocations greater than 100%........................................................... 22 How the dispersion of outcomes affects geometric
growth............... 23 The Fundamental Equation of trading............................................... 24 Chapter 2 - Characteristics of Fixed Fractional Trading and Salutary
Techniques..............................................................................................26 Optimal f for small traders just starting out....................................... 26
Threshold to geometric...................................................................... 26 One combined bankroll versus separate bankrolls.............................27 Threat each play as if
infinitely repeated........................................... 28 Efficiency loss in simultaneous wagering or portfolio trading.......... 28 Time required to reach a specified goal and the trouble
with fractional f.......................................................................................................... 29 Comparing trading
systems................................................................30 Too much sensivity to the biggest loss.............................................. 30 Equalizing optimal
f........................................................................... 31 Dollar averaging and share averaging ideas...................................... 32 The Arc Sine Laws and random
walks.............................................. 33 Time spent in a drawdown................................................................. 34 Chapter 3 - Parametric Optimal f on the Normal
Distribution............... 35 The basics of probability distributions............................................... 35 Descriptive measures of
distributions................................................ 35 Moments of a distribution.................................................................. 36 The Normal
Distribution.................................................................... 37 The Central Limit Theorem............................................................... 38 Working with the Normal
Distribution.............................................. 38 Normal Probabilities.......................................................................... 39 Further Derivatives of the
Normal..................................................... 41 The Lognormal Distribution.............................................................. 41 The parametric optimal
f.................................................................... 42 The distribution of trade P&L's..........................................................43 Finding optimal f on the Normal
Distribution................................... 44 The mechanics of the procedure........................................................ 45 Chapter 4 - Parametric Techniques on Other
Distributions................... 49 The Kolmogorov-Smirnov (K-S) Test............................................... 49 Creating our own Characteristic Distribution Function..................... 50
Fitting the Parameters of the distribution...........................................52 Using the Parameters to find optimal f.............................................. 54 Performing "What
Ifs"....................................................................... 56 Equalizing f........................................................................................ 56 Optimal f on
other distributions and fitted curves............................. 56 Scenario planning...............................................................................57 Optimal f on binned
data....................................................................60 Which is the best optimal f?...............................................................60 -3-
Chapter 5 - Introduction to Multiple Simultaneous Positions under the Parametric Approach.............................................................................. 61 Estimating
Volatility.......................................................................... 61 Ruin, Risk and Reality....................................................................... 62 Option pricing
models........................................................................62 A European options pricing model for all distributions.....................65 The single long option and optimal
f................................................. 66 The single short option.......................................................................69 The single position in The Underlying
Instrument............................ 70 Multiple simultaneous positions with a causal relationship...............70 Multiple simultaneous positions with a random relationship............ 72 Chapter
6 - Correlative Relationships and the Derivation of the Efficient Frontier................................................................................................... 73 Definition of The
Problem................................................................. 73 Solutions of Linear Systems using Row-Equivalent Matrices...........76 Interpreting The
Results..................................................................... 77 Chapter 7 - The Geometry of Portfolios................................................. 80 The Capital Market Lines
(CMLs).....................................................80 The Geometric Efficient Frontier.......................................................81 Unconstrained
portfolios.................................................................... 83 How optimal f fits with optimal portfolios........................................ 84 Threshold to The Geometric for
Portfolios........................................ 85 Completing The Loop........................................................................ 85 Chapter 8 - Risk
Management................................................................ 88 Asset Allocation................................................................................. 88 Reallocation: Four
Methods............................................................... 90 Why reallocate?..................................................................................92 Portfolio Insurance – The
Fourth Reallocation Technique................ 92 The Margin Constraint....................................................................... 95 Rotating
Markets................................................................................ 96 To summarize.....................................................................................96
Application to Stock Trading............................................................. 97 A Closing Comment.......................................................................... 97 APPENDIX A
- The Chi-Square Test.................................................... 98 APPENDIX B - Other Common Distributions...................................... 99 The Uniform
Distribution.................................................................. 99 The Bernouli Distribution................................................................ 100 The Binomial
Distribution............................................................... 100 The Geometric Distribution............................................................. 101 The Hypergeometric
Distribution.................................................... 101 The Poisson Distribution..................................................................102 The Exponential
Distribution........................................................... 102 The Chi-Square Distribution............................................................ 103 The Student's
Distribution................................................................103 The Multinomial Distribution.......................................................... 104 The stable Paretian
Distribution.......................................................104 APPENDIX C - Further on Dependency: The Turning Points and Phase Length
Tests......................................................................................... 106
Introduction SCOPE OF THIS BOOK I wrote in the first sentence of the Preface of Portfolio Management Formulas, the forerunner to this book, that it was a book about mathematical tools. This is a book
about machines. Here, we will take tools and build bigger, more elaborate, more powerful tools-machines, where the whole is greater than the sum of the parts. We will try to dissect machines that
would otherwise be black boxes in such a way that we can understand them completely without having to cover all of the related subjects (which would have made this book impossible). For instance, a
discourse on how to build a jet engine can be very detailed without having to teach you chemistry so that you know how jet fuel works. Likewise with this book, which relies quite heavily on many
areas, particularly statistics, and touches on calculus. I am not trying to teach mathematics here, aside from that necessary to understand the text. However, I have tried to write this book so that
if you understand calculus (or statistics) it will make sense and if you do not there will be little, if any, loss of continuity, and you will still be able to utilize and understand (for the most
part) the material covered without feeling lost. Certain mathematical functions are called upon from time to time in statistics. These functions-which include the gamma and incomplete gamma
functions, as well as the beta and incomplete beta functions-are often called functions of mathematical physics and reside just beyond the perimeter of the material in this text. To cover them in the
depth necessary to do the reader justice is beyond the scope, and away from the direction of, this book. This is a book about account management for traders, not mathematical physics, remember? For
those truly interested in knowing the "chemistry of the jet fuel" I suggest Numerical Recipes, which is referred to in the Bibliography. I have tried to cover my material as deeply as possible
considering that you do not have to know calculus or functions of mathematical physics to be a good trader or money manager. It is my opinion that there isn't much correlation between intelligence
and making money in the markets. By this I do not mean that the dumber you are the better I think your chances of success in the markets are. I mean that intelligence alone is but a very small input
to the equation of what makes a good trader. In terms of what input makes a good trader, I think that mental toughness and discipline far outweigh intelligence. Every successful trader I have ever
met or heard about has had at least one experience of a cataclysmic loss. The common denominator, it seems, the characteristic that separates a good trader from the others, is that the good trader
picks up the phone and puts in the order when things are at their bleakest. This requires a lot more from an individual than calculus or statistics can teach a person. In short, I have written this
as a book to be utilized by traders in the real-world marketplace. I am not an academic. My interest is in realworld utility before academic pureness. Furthermore, I have tried to supply the reader
with more basic information than the text requires in hopes that the reader will pursue concepts farther than I have here. One thing I have always been intrigued by is the architecture of music
-music theory. I enjoy reading and learning about it. Yet I am not a musician. To be a musician requires a certain discipline that simply understanding the rudiments of music theory cannot bestow.
Likewise with trading. Money management may be the core of a sound trading program, but simply understanding money management will not make you a successful trader. This is a book about music theory,
not a how-to book about playing an instrument. Likewise, this is not a book about beating the markets, and you won't find a single price chart in this book. Rather it is a book about mathematical
concepts, taking that important step from theory to application, that you can employ. It will not bestow on you the ability to tolerate the emotional pain that trading inevitably has in store for
you, win or lose. This book is not a sequel to Portfolio Management Formulas. Rather, Portfolio Management Formulas laid the foundations for what will be covered here. -5-
Readers will find this book to be more abstruse than its forerunner. Hence, this is not a book for beginners. Many readers of this text will have read Portfolio Management Formulas. For those who
have not, Chapter 1 of this book summarizes, in broad strokes, the basic concepts from Portfolio Management Formulas. Including these basic concepts allows this book to "stand alone" from Portfolio
Management Formulas. Many of the ideas covered in this book are already in practice by professional money managers. However, the ideas that are widespread among professional money managers are not
usually readily available to the investing public. Because money is involved, everyone seems to be very secretive about portfolio techniques. Finding out information in this regard is like trying to
find out information about atom bombs. I am indebted to numerous librarians who helped me through many mazes of professional journals to fill in many of the gaps in putting this book together. This
book does not require that you utilize a mechanical, objective trading system in order to employ the tools to be described herein. In other words, someone who uses Elliott Wave for making trading
decisions, for example, can now employ optimal f. However, the techniques described in this book, like those in Portfolio Management Formulas, require that the sum of your bets be a positive result.
In other words, these techniques will do a lot for you, but they will not perform miracles. Shuffling money cannot turn losses into profits. You must have a winning approach to start with. Most of
the techniques advocated in this text are techniques that are advantageous to you in the long run. Throughout the text you will encounter the term "an asymptotic sense" to mean the eventual outcome
of something performed an infinite number of times, whose probability approaches certainty as the number of trials continues. In other words, something we can be nearly certain of in the long run.
The root of this expression is the mathematical term "asymptote," which is a straight line considered as a limit to a curved line in the sense that the distance between a moving point on the curved
line and the straight line approaches zero as the point moves an infinite distance from the origin. Trading is never an easy game. When people study these concepts, they often get a false feeling of
power. I say false because people tend to get the impression that something very difficult to do is easy when they understand the mechanics of what they must do. As you go through this text, bear in
mind that there is nothing in this text that will make you a better trader, nothing that will improve your timing of entry and exit from a given market, nothing that will improve your trade
selection. These difficult exercises will still be difficult exercises even after you have finished and comprehended this book. Since the publication of Portfolio Management Formulas I have been
asked by some people why I chose to write a book in the first place. The argument usually has something to do with the marketplace being a competitive arena, and writing a book, in their view, is
analogous to educating your adversaries. The markets are vast. Very few people seem to realize how huge today's markets are. True, the markets are a zero sum game (at best), but as a result of their
enormity you, the reader, are not my adversary. Like most traders, I myself am most often my own biggest enemy. This is not only true in my endeavors in and around the markets, but in life in
general. Other traders do not pose anywhere near the threat to me that I myself do. I do not think that I am alone in this. I think most traders, like myself, are their own worst enemies. In the mid
1980s, as the microcomputer was fast becoming the primary tool for traders, there was an abundance of trading programs that entered a position on a stop order, and the placement of these entry stops
was often a function of the current volatility in a given market. These systems worked beautifully for a time. Then, near the end of the decade, these types of systems seemed to collapse. At best,
they were able to carve out only a small fraction of the profits that these systems had just a few years earlier. Most traders of such systems would later abandon them, claiming that if "everyone was
trading them, how could they work anymore?" Most of these systems traded the Treasury Bond futures market. Consider now the size of the cash market underlying this futures market. Arbitrageurs in
these markets will come in when the prices of the cash and futures diverge by an appropriate amount (usually not more than a few ticks), buying the less expensive of the two instruments and selling
the more expensive. As a result, the divergence between the price of cash and futures will dissipate in short order. The only time that the relationship between cash and futures can really get out of
line is when an exogenous shock, such as some sort of news event, drives prices to diverge farther than the arbitrage process ordinarily would allow for. Such disruptions are usually very short-lived
and rather rare. An arbitrageur capitalizes on price discrepancies, one type of which is the relationship of a futures contract to its underlying cash instrument. As a result of this process, the
Treasury Bond futures market is intrinsically tied to the enormous cash Treasury market. The futures market reflects, at least to within a few ticks, what's going on in the gigantic cash market. The
cash market is not, and never has been, dominated by systems traders. Quite the contrary. Returning now to our argument, it is rather inconceivable that the traders in the cash market all started
trading the same types of systems as those who were making money in the futures market at that time! Nor is it any more conceivable that these cash participants decided to all gang up on those who
were profiteering in the futures market, There is no valid reason why these systems should have stopped working, or stopped working as well as they had, simply because many futures traders were
trading them. That argument would also suggest that a large participant in a very thin market be doomed to the same failure as traders of these systems in the bonds were. Likewise, it is silly to
believe that all of the fat will be cut out of the markets just because I write a book on account management concepts. Cutting the fat out of the market requires more than an understanding of money
management concepts. It requires discipline to tolerate and endure emotional pain to a level that 19 out of 20 people cannot bear. This you will not learn in this book or any other. Anyone who claims
to be intrigued by the "intellectual challenge of the markets" is not a trader. The markets are as intellectually challenging as a fistfight. In that light, the best advice I know of is to always
cover your chin and jab on the run. Whether you win or lose, there are significant beatings along the way. But there is really very little to the markets in the way of an intellectual challenge.
Ultimately, trading is an exercise in self-mastery and endurance. This book attempts to detail the strategy of the fistfight. As such, this book is of use only to someone who already possesses the
necessary mental toughness.
SOME PREVALENT MISCONCEPTIONS You will come face to face with many prevalent misconceptions in this text. Among these are: − Potential gain to potential risk is a straight-line function. That is, the
more you risk, the more you stand to gain. − Where you are on the spectrum of risk depends on the type of vehicle you are trading in. − Diversification reduces drawdowns (it can do this, but only to
a very minor extent-much less than most traders realize). − Price behaves in a rational manner. The last of these misconceptions, that price behaves in a rational manner, is probably the least
understood of all, considering how devastating its effects can be. By "rational manner" is meant that when a trade occurs at a certain price, you can be certain that price will proceed in an orderly
fashion to the next tick, whether up or down-that is, if a price is making a move from one point to the next, it will trade at every point in between. Most people are vaguely aware that price does
not behave this way, yet most people develop trading methodologies that assume that price does act in this orderly fashion. But price is a synthetic perceived value, and therefore does not act in
such a rational manner. Price can make very large leaps at times when proceeding from one price to the next, completely bypassing all prices in between. Price is capable of making gigantic leaps, and
far more frequently than most traders believe. To be on the wrong side of such a move can be a devastating experience, completely wiping out a trader. Why bring up this point here? Because the
foundation of any effective gaming strategy (and money management is, in the final analysis, a gaming strategy) is to hope for the best but prepare for the worst.
WORST-CASE SCENARIOS AND STATEGY The "hope for the best" part is pretty easy to handle. Preparing for the worst is quite difficult and something most traders never do. Preparing for the worst,
whether in trading or anything else, is something most of us put off indefinitely. This is particularly easy to do when we consider that worst-case scenarios usually have rather remote probabilities
of occurrence. Yet preparing for the worst-case scenario is something we must do now. If we are to be prepared for the worst, we must do it as the starting point in our money management strategy. You
will see as you proceed through this text that we always build a strategy from a worst-case scenario. We always start with a worst case and incorporate it into a mathematical technique to take
advantage of situations that include the realization of the worst case. Finally, you must consider this next axiom. If you play a game with unlimited liability, you will go broke with a probability
that approaches certainty as the length of the game approaches infinity. Not a very pleasant prospect. The situation can be better understood by saying that if you can only die by being struck by
lightning, eventually you will die by being struck by lightning. Simple. If you trade a vehicle with unlimited liability (such as futures), you will eventually experience a loss of such magnitude as
to lose everything you have. Granted, the probabilities of being struck by lightning are extremely small for you today and extremely small for you for the next fifty years. However, the probability
exists, and if you were to live long enough, eventually this microscopic probability would see realization. Likewise, the probability of experiencing a cataclysmic loss on a position today may be
extremely small (but far greater than being struck by lightning today). Yet if you trade long enough, eventually this probability, too, would be realized. There are three possible courses of action
you can take. One is to trade only vehicles where the liability is limited (such as long options). The second is not to trade for an infinitely long period of time. Most traders will die before they
see the cataclysmic loss manifest itself (or before they get hit by lightning). The probability of an enormous winning trade exists, too, and one of the nice things about winning in trading is that
you don't have to have the gigantic winning trade. Many smaller wins will suffice. Therefore, if you aren't going to trade in limited liability vehicles and you aren't going to die, make up your mind
that you are going to quit trading unlimited liability vehicles altogether if and when your account equity reaches some prespecified goal. If and when you achieve that goal, get out and don't ever
come back. We've been discussing worst-case scenarios and how to avoid, or at least reduce the probabilities of, their occurrence. However, this has not truly prepared us for their occurrence, and we
must prepare for the worst. For now, consider that today you had that cataclysmic loss. Your account has been tapped out. The brokerage firm wants to know what you're going to do about that big fat
debit in your account. You weren't expecting this to happen today. No one who ever experiences this ever does expect it. Take some time and try to imagine how you are going to feel in such a
situation. Next, try to determine what you will do in such an instance. Now write down on a sheet of paper exactly what you will do, who you can call for legal help, and so on. Make it as definitive
as possible. Do it now so that if it happens you'll know what to do without having to think about these matters. Are there arrangements you can make now to protect yourself before this possible
cataclysmic loss? Are you sure you wouldn't rather be trading a vehicle with limited liability? If you're going to trade a vehicle with unlimited liability, at what point on the upside will you stop?
Write down what that level of profit is. Don't just read this and then keep plowing through the book. Close the book and think about these things for awhile. This is the point from which we will
build. The point here has not been to get you thinking in a fatalistic way. That would be counterproductive, because to trade the markets effectively will require a great deal of optimism on your
part to make it through the inevitable prolonged losing streaks. The point here has been to get you to think about the worst-case scenario and to make contingency plans in case such a worst-case
scenario occurs. Now, take that sheet of paper with your contingency plans (and with the amount at which point you will quit trading unlimited liability vehicles altogether written on it) and put it
in the top drawer of your desk. Now, if the worst-case
scenario should develop you know you won't be jumping out of the window. Hope for the best but prepare for the worst. If you haven't done these exercises, then close this book now and keep it closed.
Nothing can help you if you do not have this foundation to build upon.
MATHEMATICS NOTATION Since this book is infected with mathematical equations, I have tried to make the mathematical notation as easy to understand, and as easy to take from the text to the computer
keyboard, as possible. Multiplication will always be denoted with an asterisk (*), and exponentiation will always be denoted with a raised caret (^). Therefore, the square root of a number will be
denoted as ^(l/2). You will never have to encounter the radical sign. Division is expressed with a slash (/) in most cases. Since the radical sign and the means of expressing division with a
horizontal line are also used as a grouping operator instead of parentheses, that confusion will be avoided by using these conventions for division and exponentiation. Parentheses will be the only
grouping operator used, and they may be used to aid in the clarity of an expression even if they are not mathematically necessary. At certain special times, brackets ({ }) may also be used as a
grouping operator. Most of the mathematical functions used are quite straightforward (e.g., the absolute value function and the natural log function). One function that may not be familiar to all
readers, however, is the exponential function, denoted in this text as EXP(). This is more commonly expressed mathematically as the constant e, equal to 2.7182818285, raised to the power of the
function. Thus: EXP(X) = e^X = 2.7182818285^X The main reason I have opted to use the function notation EXP(X) is that most computer languages have this function in one form or another. Since much of
the math in this book will end up transcribed into computer code, I find this notation more straightforward.
SYNTHETIC CONSTRUCTS IN THIS TEXT As you proceed through the text, you will see that there is a certain geometry to this material. However, in order to get to this geometry we will have to create
certain synthetic constructs. For one, we will convert trade profits and losses over to what will be referred to as holding period returns or HPRs for short. An HPR is simply 1 plus what you made or
lost on the trade as a percentage. Therefore, a trade that made a 10% profit would be converted to an HPR of 1+.10 = 1.10. Similarly, a trade that lost 10% would have an HPR of 1+(-.10) = .90. Most
texts, when referring to a holding period return, do not add 1 to the percentage gain or loss. However, throughout this text, whenever we refer to an HPR, it will always be 1 plus the gain or loss as
a percentage. Another synthetic construct we must use is that of a market system. A market system is any given trading approach on any given market (the approach need not be a mechanical trading
system, but often is). For example, say we are using two separate approaches to trading two separate markets, and say that one of our approaches is a simple moving average crossover system. The other
approach takes trades based upon our Elliott Wave interpretation. Further, say we are trading two separate markets, say Treasury Bonds and heating oil. We therefore have a total of four different
market systems. We have the moving average system on bonds, the Elliott Wave trades on bonds, the moving average system on heating oil, and the Elliott Wave trades on heating oil. A market system can
be further differentiated by other factors, one of which is dependency. For example, say that in our moving average system we discern (through methods discussed in this text) that winning trades
beget losing trades and vice versa. We would, therefore, break our moving average system on any given market into two distinct market systems. One of the market systems would take trades only after a
loss (because of the nature of this dependency, this is a more advantageous system), the other market system only after a profit. Referring back to our example of trading this moving average system
in conjunction with Treasury Bonds and heating oil and using the Elliott Wave trades also, we now have six market systems: the moving average system after a loss on bonds, the moving average system
after a win on bonds, the Elliott Wave trades on bonds, the moving average system after a win on heating oil, the moving average system after a loss on heating oil, and the Elliott Wave trades on
heating oil. -7-
Pyramiding (adding on contracts throughout the course of a trade) is viewed in a money management sense as separate, distinct market systems rather than as the original entry. For example, if you are
using a trading technique that pyramids, you should treat the initial entry as one market system. Each add-on, each time you pyramid further, constitutes another market system. Suppose your trading
technique calls for you to add on each time you have a $1,000 profit in a trade. If you catch a really big trade, you will be adding on more and more contracts as the trade progresses through these
$1,000 levels of profit. Each separate add-on should be treated as a separate market system. There is a big benefit in doing this. The benefit is that the techniques discussed in this book will yield
the optimal quantities to have on for a given market system as a function of the level of equity in your account. By treating each add-on as a separate market system, you will be able to use the
techniques discussed in this book to know the optimal amount to add on for your current level of equity. Another very important synthetic construct we will use is the concept of a unit. The HPRs that
you will be calculating for the separate market systems must be calculated on a "1 unit" basis. In other words, if they are futures or options contracts, each trade should be for 1 contract. If it is
stocks you are trading, you must decide how big 1 unit is. It can be 100 shares or it can be 1 share. If you are trading cash markets or foreign exchange (forex), you must decide how big 1 unit is.
By using results based upon trading 1 unit as input to the methods in this book, you will be able to get output results based upon 1 unit. That is, you will know how many units you should have on for
a given trade. It doesn't matter what size you decide 1 unit to be, because it's just an hypothetical construct necessary in order to make the calculations. For each market system you must figure how
big 1 unit is going to be. For example, if you are a forex trader, you may decide that 1 unit will be one million U.S. dollars. If you are a stock trader, you may opt for a size of 100 shares.
Finally, you must determine whether you can trade fractional units or not. For instance, if you are trading commodities and you define 1 unit as being 1 contract, then you cannot trade fractional
units (i.e., a unit size less than 1), because the smallest denomination in which you can trade futures contracts in is 1 unit (you can possibly trade quasifractional units if you also trade
minicontracts). If you are a stock trader and you define 1 unit as 1 share, then you cannot trade the fractional unit. However, if you define 1 unit as 100 shares, then you can trade the fractional
unit, if you're willing to trade the odd lot. If you are trading futures you may decide to have 1 unit be 1 minicontract, and not allow the fractional unit. Now, assuming that 2 minicontracts equal 1
regular contract, if you get an answer from the techniques in this book to trade 9 units, that would mean you should trade 9 minicontracts. Since 9 divided by 2 equals 4.5, you would optimally trade
4 regular contracts and 1 minicontract here. Generally, it is very advantageous from a money management perspective to be able to trade the fractional unit, but this isn't always true. Consider two
stock traders. One defines 1 unit as 1 share and cannot trade the fractional unit; the other defines 1 unit as 100 shares and can trade the fractional unit. Suppose the optimal quantity to trade in
today for the first trader is to trade 61 units (i.e., 61 shares) and for the second trader for the same day it is to trade 0.61 units (again 61 shares). I have been told by others that, in order to
be a better teacher, I must bring the material to a level which the reader can understand. Often these other people's suggestions have to do with creating analogies between the concept I am trying to
convey and something they already are familiar with. Therefore, for the sake of instruction you will find numerous analogies in this text. But I abhor analogies. Whereas analogies may be an effective
tool for instruction as well as arguments, I don't like them because they take something foreign to people and (often quite deceptively) force fit it to a template of logic of something people
already know is true. Here is an example: The square root of 6 is 3 because the square root of 4 is 2 and 2+2 = 4. Therefore, since 3+3 = 6, then the square root of 6 must be 3. Analogies explain,
but they do not solve. Rather, an analogy makes the a priori assumption that something is true, and this "explanation" then masquerades as the proof. You have my apologies in advance for the use of
the analogies in this text. I have opted for them only for the purpose of instruction.
OPTIMAL TRADING QUANTITIES AND OPTIMAL F Modern portfolio theory, perhaps the pinnacle of money management concepts from the stock trading arena, has not been embraced by the rest of the trading
world. Futures traders, whose technical trading ideas are usually adopted by their stock trading cousins, have been reluctant to accept ideas from the stock trading world. As a consequence, modern
portfolio theory has never really been embraced by futures traders. Whereas modern portfolio theory will determine optimal weightings of the components within a portfolio (so as to give the least
variance to a prespecified return or vice versa), it does not address the notion of optimal quantities. That is, for a given market system, there is an optimal amount to trade in for a given level of
account equity so as to maximize geometric growth. This we will refer to as the optimal f. This book proposes that modern portfolio theory can and should be used by traders in any markets, not just
the stock markets. However, we must marry modern portfolio theory (which gives us optimal weights) with the notion of optimal quantity (optimal f) to arrive at a truly optimal portfolio. It is this
truly optimal portfolio that can and should be used by traders in any markets, including the stock markets. In a nonleveraged situation, such as a portfolio of stocks that are not on margin,
weighting and quantity are synonymous, but in a leveraged situation, such as a portfolio of futures market systems, weighting and quantity are different indeed. In this book you will see an idea
first roughly introduced in Portfolio Management Formulas, that optimal quantities are what we seek to know, and that this is a function of optimal weightings. Once we amend modern portfolio theory
to separate the notions of weight and quantity, we can return to the stock trading arena with this now reworked tool. We will see how almost any nonleveraged portfolio of stocks can be improved
dramatically by making it a leveraged portfolio, and marrying the portfolio with the risk-free asset. This will become intuitively obvious to you. The degree of risk (or conservativeness) is then
dictated by the trader as a function of how much or how little leverage the trader wishes to apply to this portfolio. This implies that where a trader is on the spectrum of risk aversion is a
function of the leverage used and not a function of the type of trading vehicle used. In short, this book will teach you about risk management. Very few traders have an inkling as to what constitutes
risk management. It is not simply a matter of eliminating risk altogether. To do so is to eliminate return altogether. It isn't simply a matter of maximizing potential reward to potential risk
either. Rather, risk management is about decisionmaking strategies that seek to maximize the ratio of potential reward to potential risk within a given acceptable level of risk. To learn this, we
must first learn about optimal f, the optimal quantity component of the equation. Then we must learn about combining optimal f with the optimal portfolio weighting. Such a portfolio will maximize
potential reward to potential risk. We will first cover these concepts from an empirical standpoint (as was introduced in Portfolio Management Formulas), then study them from a more powerful
standpoint, the parametric standpoint. In contrast to an empirical approach, which utilizes past data to come up with answers directly, a parametric approach utilizes past data to come up with
parameters. These are certain measurements about something. These parameters are then used in a model to come up with essentially the same answers that were derived from an empirical approach. The
strong point about the parametric approach is that you can alter the values of the parameters to see the effect on the outcome from the model. This is something you cannot do with an empirical
technique. However, empirical techniques have their strong points, too. The empirical techniques are generally more straightforward and less math intensive. Therefore they are easier to use and
comprehend. For this reason, the empirical techniques are covered first. Finally, we will see how to implement the concepts within a userspecified acceptable level of risk, and learn strategies to
maximize this situation further. There is a lot of material to be covered here. I have tried to make this text as concise as possible. Some of the material may not sit well with you, the reader, and
perhaps may raise more questions than it answers. If that is the case, than I have succeeded in one facet of what I have attempted to do. Most books have a single "heart," a central concept that the
entire text flows toward. This book is a little different in that it has many hearts. Thus, some people may find this book difficult -8-
when they go to read it if they are subconsciously searching for a single heart. I make no apologies for this; this does not weaken the logic of the text; rather, it enriches it. This book may take
you more than one reading to discover many of its hearts, or just to be comfortable with it. One of the many hearts of this book is the broader concept of decision making in environments
characterized by geometric consequences. An environment of geometric consequence is an environment where a quantity that you have to work with today is a function of prior outcomes. I think this
covers most environments we live in! Optimal f is the regulator of growth in such environments, and the by-products of optimal f tell us a great deal of information about the growth rate of a given
environment. In this text you will learn how to determine the optimal f and its by-products for any distributional form. This is a statistical tool that is directly applicable to many real-world
environments in business and science. I hope that you will seek to apply the tools for finding the optimal f parametrically in other fields where there are such environments, for numerous different
distributions, not just for trading the markets. For years the trading community has discussed the broad concept of "money management." Yet by and large, money management has been characterized by a
loose collection of rules of thumb, many of which were incorrect. Ultimately, I hope that this book will have provided traders with exactitude under the heading of money management.
50,000/(5,000/.l) = 1
Chapter 1-The Empirical Techniques
This chapter is a condensation of Portfolio Management Formulas. The purpose here is to bring those readers unfamiliar with these empirical techniques up to the same level of understanding as those
who are.
10 8 T W R
DECIDING ON QUANTITY Whenever you enter a trade, you have made two decisions: Not only have you decided whether to enter long or short, you have also decided upon the quantity to trade in. This
decision regarding quantity is always a function of your account equity. If you have a $10,000 account, don't you think you would be leaning into the trade a little if you put on 100 gold contracts?
Likewise, if you have a $10 million account, don't you think you'd be a little light if you only put on one gold contract ? Whether we acknowledge it or not, the decision of what quantity to have on
for a given trade is inseparable from the level of equity in our account. It is a very fortunate fact for us though that an account will grow the fastest when we trade a fraction of the account on
each and every tradein other words, when we trade a quantity relative to the size of our stake. However, the quantity decision is not simply a function of the equity in our account, it is also a
function of a few other things. It is a function of our perceived "worst-case" loss on the next trade. It is a function of the speed with which we wish to make the account grow. It is a function of
dependency to past trades. More variables than these just mentioned may be associated with the quantity decision, yet we try to agglomerate all of these variables, including the account's level of
equity, into a subjective decision regarding quantity: How many contracts or shares should we put on? In this discussion, you will learn how to make the mathematically correct decision regarding
quantity. You will no longer have to make this decision subjectively (and quite possibly erroneously). You will see that there is a steep price to be paid by not having on the correct quantity, and
this price increases as time goes by. Most traders gloss over this decision about quantity. They feel that it is somewhat arbitrary in that it doesn't much matter what quantity they have on. What
matters is that they be right about the direction of the trade. Furthermore, they have the mistaken impression that there is a straight-line relationship between how many contracts they have on and
how much they stand to make or lose in the long run. This is not correct. As we shall see in a moment, the relationship between potential gain and quantity risked is not a straight line. It is
curved. There is a peak to this curve, and it is at this peak that we maximize potential gain per quantity at risk. Furthermore, as you will see throughout this discussion, the decision regarding
quantity for a given trade is as important as the decision to enter long or short in the first place. Contrary to most traders' misconception, whether you are right or wrong on the direction of the
market when you enter a trade does not dominate whether or not you have the right quantity on. Ultimately, we have no control over whether the next trade will be profitable or not. Yet we do have
control over the quantity we have on. Since one does not dominate the other, our resources are better spent concentrating on putting on the tight quantity. On any given trade, you have a perceived
worst-case loss. You may not even be conscious of this, but whenever you enter a trade you have some idea in your mind, even if only subconsciously, of what can happen to this trade in the
worst-case. This worst-case perception, along with the level of equity in your account, shapes your decision about how many contracts to trade. Thus, we can now state that there is a divisor of this
biggest perceived loss, a number between 0 and 1 that you will use in determining how many contracts to trade. For instance, if you have a $50,000 account, if you expect, in the worst case, to lose
$5,000 per contract, and if you have on 5 contracts, your divisor is .5, since: 50,000/(5,000/.5) = 5 In other words, you have on 5 contracts for a $50,000 account, so you have 1 contract for every
$10,000 in equity. You expect in the worst case to lose $5,000 per contract, thus your divisor here is .5. If you had on only 1 contract, your divisor in this case would be .1 since: -9-
6 4 2 0 0.05
0.45 0.55 f values
Figure 1-1 20 sequences of +2, -1. This divisor we will call by its variable name f. Thus, whether consciously or subconsciously, on any given trade you are selecting a value for f when you decide
how many contracts or shares to put on. Refer now to Figure 1-1. This represents a game where you have a 50% chance of winning $2 versus a 50% chance of losing $1 on every play. Notice that here the
optimal f is .25 when the TWR is 10.55 after 40 bets (20 sequences of +2, -1). TWR stands for Terminal Wealth Relative. It represents the return on your stake as a multiple. A TWR of 10.55 means you
would have made 10.55 times your original stake, or 955% profit. Now look at what happens if you bet only 15% away from the optimal .25 f. At an f of .1 or .4 your TWR is 4.66. This is not even half
of what it is at .25, yet you are only 15% away from the optimal and only 40 bets have elapsed! How much are we talking about in terms of dollars? At f = .1, you would be making 1 bet for every $10
in your stake. At f = .4, you would be making I bet for every $2.50 in your stake. Both make the same amount with a TWR of 4.66. At f = .25, you are making 1 bet for every $4 in your stake. Notice
that if you make 1 bet for every $4 in your stake, you will make more than twice as much after 40 bets as you would if you were making 1 bet for every $2.50 in your stake! Clearly it does not pay to
overbet. At 1 bet per every $2.50 in your stake you make the same amount as if you had bet a quarter of that amount, 1 bet for every $10 in your stake! Notice that in a 50/50 game where you win twice
the amount that you lose, at an f of .5 you are only breaking even! That means you are only breaking even if you made 1 bet for every $2 in your stake. At an f greater than .5 you are losing in this
game, and it is simply a matter of time until you are completely tapped out! In other words, if your fin this 50/50, 2:1 game is .25 beyond what is optimal, you will go broke with a probability that
approaches certainty as you continue to play. Our goal, then, is to objectively find the peak of the f curve for a given trading system. In this discussion certain concepts will be illuminated in
terms of gambling illustrations. The main difference between gambling and speculation is that gambling creates risk (and hence many people are opposed to it) whereas speculation is a transference of
an already existing risk (supposedly) from one party to another. The gambling illustrations are used to illustrate the concepts as clearly and simply as possible. The mathematics of money management
and the principles involved in trading and gambling are quite similar. The main difference is that in the math of gambling we are usually dealing with Bernoulli outcomes (only two possible outcomes),
whereas in trading we are dealing with the entire probability distribution that the trade may take.
BASIC CONCEPTS A probability statement is a number between 0 and 1 that specifies how probable an outcome is, with 0 being no probability whatsoever of the event in question occurring and 1 being
that the event in question is certain to occur. An independent trials process (sampling with replacement) is a sequence of outcomes where the probability statement is constant from one event to the
next. A coin toss is an example of just such a process. Each toss has a 50/50 probability regardless of the outcome of the prior toss. Even if the last 5 flips of a coin were heads, the probability
of this flip being heads is unaffected and remains .5.
Naturally, the other type of random process is one in which the outcome of prior events does affect the probability statement, and naturally, the probability statement is not constant from one event
to the next. These types of events are called dependent trials processes (sampling without replacement). Blackjack is an example of just such a process. Once a card is played, the composition of the
deck changes. Suppose a new deck is shuffled and a card removed-say, the ace of diamonds. Prior to removing this card the probability of drawing an ace was 4/52 or .07692307692. Now that an ace has
been drawn from the deck, and not replaced, the probability of drawing an ace on the next draw is 3/51 or .05882352941. Try to think of the difference between independent and dependent trials
processes as simply whether the probability statement is fixed (independent trials) or variable (dependent trials) from one event to the next based on prior outcomes. This is in fact the only
THE RUNS TEST When we do sampling without replacement from a deck of cards, we can determine by inspection that there is dependency. For certain events (such as the profit and loss stream of a
system's trades) where dependency cannot be determined upon inspection, we have the runs test. The runs test will tell us if our system has more (or fewer) streaks of consecutive wins and losses than
a random distribution. The runs test is essentially a matter of obtaining the Z scores for the win and loss streaks of a system's trades. A Z score is how many standard deviations you are away from
the mean of a distribution. Thus, a Z score of 2.00 is 2.00 standard deviations away from the mean (the expectation of a random distribution of streaks of wins and losses). The Z score is simply the
number of standard deviations the data is from the mean of the Normal Probability Distribution. For example, a Z score of 1.00 would mean that the data you arc testing is within 1 standard deviation
from the mean. Incidentally, this is perfectly normal. The Z score is then converted into a confidence limit, sometimes also called a degree of certainty. The area under the curve of the Normal
Probability Function at 1 standard deviation on either side of the mean equals 68% of the total area under the curve. So we take our Z score and convert it to a confidence limit, the relationship
being that the Z score is a number of standard deviations from the mean and the confidence limit is the percentage of area under the curve occupied at so many standard deviations. Confidence Limit
(%) 99.73 99 98 97 96 95.45 95 90
Z Score 3.00 2.58 2.33 2.17 2.05 2.00 1.96 1.64
With a minimum of 30 closed trades we can now compute our Z scores. What we are trying to answer is how many streaks of wins (losses) can we expect from a given system? Are the win (loss) streaks of
the system we are testing in line with what we could expect? If not, is there a high enough confidence limit that we can assume dependency exists between trades -i.e., is the outcome of a trade
dependent on the outcome of previous trades? Here then is the equation for the runs test, the system's Z score: (1.01) Z = (N*(R-.5)-X)/((X*(X-N))/(N-1))^(1/2) where N = The total number of trades in
the sequence. R = The total number of runs in the sequence. X = 2*W*L W = The total number of winning trades in the sequence. L = The total number of losing trades in the sequence. Here is how to
perform this computation: 1. Compile the following data from your run of trades: A. The total number of trades, hereafter called N. B. The total number of winning trades and the total number of
losing trades. Now compute what we will call X. X = 2*Total Number of Wins*Total Number of Losses. - 10 -
C. 2.
The total number of runs in a sequence. We'll call this R. Let's construct an example to follow along with. Assume the following trades:
-3 +2
The net profit is +7. The total number of trades is 12, so N = 12, to keep the example simple. We are not now concerned with how big the wins and losses are, but rather how many wins and losses there
are and how many streaks. Therefore, we can reduce our run of trades to a simple sequence of pluses and minuses. Note that a trade with a P&L of 0 is regarded as a loss. We now have: -
As can be seen, there are 6 profits and 6 losses; therefore, X = 2*6*6 = 72. As can also be seen, there are 8 runs in this sequence; therefore, R = 8. We define a run as anytime you encounter a sign
change when reading the sequence as just shown from left to right (i.e., chronologically). Assume also that you start at 1. 1. You would thus count this sequence as follows: 1
+ 2
+ 4
+ 6
+ 8
2. Solve the expression: N*(R-.5)-X For our example this would be: 12*(8-5)-72 12*7.5-72 90-72 18 3. Solve the expression: (X*(X-N))/(N-1) For our example this would be: (72*(72-12))/(12-1) (72*60)/
11 4320/11 392.727272 4. Take the square root of the answer in number 3. For our example this would be: 392.727272^(l/2) = 19.81734777 5. Divide the answer in number 2 by the answer in number 4. This
is your Z score. For our example this would be: 18/19.81734777 = .9082951063 6. Now convert your Z score to a confidence limit. The distribution of runs is binomially distributed. However, when there
are 30 or more trades involved, we can use the Normal Distribution to very closely approximate the binomial probabilities. Thus, if you are using 30 or more trades, you can simply convert your Z
score to a confidence limit based upon Equation (3.22) for 2-tailed probabilities in the Normal Distribution. The runs test will tell you if your sequence of wins and losses contains more or fewer
streaks (of wins or losses) than would ordinarily be expected in a truly random sequence, one that has no dependence between trials. Since we are at such a relatively low confidence limit in our
example, we can assume that there is no dependence between trials in this particular sequence. If your Z score is negative, simply convert it to positive (take the absolute value) when finding your
confidence limit. A negative Z score implies positive dependency, meaning fewer streaks than the Normal Probability Function would imply and hence that wins beget wins and losses beget losses. A
positive Z score implies negative dependency, meaning more streaks than the Normal Probability Function would imply and hence that wins beget losses and losses beget wins. What would an acceptable
confidence limit be? Statisticians generally recommend selecting a confidence limit at least in the high nineties. Some statisticians recommend a confidence limit in excess of 99% in order to assume
dependency, some recommend a less stringent minimum of 95.45% (2 standard deviations). Rarely, if ever, will you find a system that shows confidence limits in excess of 95.45%. Most frequently the
confidence limits encountered are less than 90%. Even if you find a system with a confidence limit between 90 and 95.45%, this is not exactly a nugget of gold. To assume that there is dependency
involved that can be capitalized upon to make a substantial difference, you really need to exceed 95.45% as a bare minimum.
As long as the dependency is at an acceptable confidence limit, you can alter your behavior accordingly to make better trading decisions, even though you do not understand the underlying cause of the
dependency. If you could know the cause, you could then better estimate when the dependency was in effect and when it was not, as well as when a change in the degree of dependency could be expected.
So far, we have only looked at dependency from the point of view of whether the last trade was a winner or a loser. We are trying to determine if the sequence of wins and losses exhibits dependency
or not. The runs test for dependency automatically takes the percentage of wins and losses into account. However, in performing the runs test on runs of wins and losses, we have accounted for the
sequence of wins and losses but not their size. In order to have true independence, not only must the sequence of the wins and losses be independent, the sizes of the wins and losses within the
sequence must also be independent. It is possible for the wins and losses to be independent, yet their sizes to be dependent (or vice versa). One possible solution is to run the runs test on only the
winning trades, segregating the runs in some way (such as those that are greater than the median win and those that are less), and then look for dependency among the size of the winning trades. Then
do this for the losing trades.
SERIAL CORRELATION There is a different, perhaps better, way to quantify this possible dependency between the size of the wins and losses. The technique to be discussed next looks at the sizes of
wins and losses from an entirely different perspective mathematically than the does runs test, and hence, when used in conjunction with the runs test, measures the relationship of trades with more
depth than the runs test alone could provide. This technique utilizes the linear correlation coefficient, r, sometimes called Pearson's r, to quantify the dependency/independency relationship. Now
look at Figure 1-2. It depicts two sequences that are perfectly correlated with each other. We call this effect positive correlation.
For each period find the difference between each X and the average X and each Y and the average Y. 9. Now calculate the numerator. To do this, for each period multiply the answers from step 2-in
other words, for each period multiply together the differences between that period's X and the average X and between that period's Y and the average Y. 10. Total up all of the answers to step 3 for
all of the periods. This is the numerator. 11. Now find the denominator. To do this, take the answers to step 2 for each period, for both the X differences and the Y differences, and square them
(they will now all be positive numbers). 12. Sum up the squared X differences for all periods into one final total. Do the same with the squared Y differences. 13. Take the square root to the sum of
the squared X differences you just found in step 6. Now do the same with the Y's by taking the square root of the sum of the squared Y differences. 14. Multiply together the two answers you just
found in step 1 - that is, multiply together the square root of the sum of the squared X differences by the square root of the sum of the squared Y differences. This product is your denominator. 15.
Divide the numerator you found in step 4 by the denominator you found in step 8. This is your linear correlation coefficient, r. The value for r will always be between +1.00 and -1.00. A value of 0
indicates no correlation whatsoever. Now look at Figure 1-4. It represents the following sequence of 21 trades: 1, 2, 1, -1, 3, 2, -1, -2, -3, 1, -2, 3, 1, 1, 2, 3, 3, -1, 2, -1, 3 4
-4 Figure 1-4 Individual outcomes of 21 trades.
We can use the linear correlation coefficient in the following manner to see if there is any correlation between the previous trade and the current trade. The idea here is to treat the trade P&L's as
the X values in the formula for r. Superimposed over that we duplicate the same trade P&L's, only this time we skew them by 1 trade and use these as the Y values in the formula for r. In other words,
the Y value is the previous X value. (See Figure 1-5.).
Figure 1-2 Positive correlation (r = +1.00).
0 Figure 1-3 Negative correlation (r = -1 .00). Now look at Figure 1-3. It shows two sequences that are perfectly negatively correlated with each other. When one line is zigging the other is zagging.
We call this effect negative correlation. The formula for finding the linear correlation coefficient, r, between two sequences, X and Y, is as follows (a bar over a variable means the arithmetic mean
of the variable): (1.02) R = (∑a(Xa-X[])*(Ya-Y[]))/((∑a(Xa-X[])^2)^(1/2)*(∑a(YaY[])^2)^(l/2)) Here is how to perform the calculation: 7. Average the X's and the Y's (shown as X[] and Y[]). - 11 -
-4 Figure 1-5 Individual outcomes of 21 trades skewed by 1 trade. A(X) 1 2 1 -1
1.2 0.2 -1.8
0.3 1.3 0.3
0.36 0.26 -0.54
1.44 0.04 3.24
0.09 1.69 0.09
3 2 -1 -2 -3 1 -2 3 1 1 2 3 3 -1 2 -1 3 X[] = .8
-1 3 2 -1 -2 -3 1 -2 3 1 1 2 3 3 -1 2 -1 3 Y[] = .7
2.2 1.2 -1.8 -2.8 -3.8 0.2 -2.8 2.2 0.2 0.2 1.2 2.2 2.2 -1.8 1.2 -1.8 2.2
-1.7 2.3 1.3 -1.7 -2.7 -3.7 0.3 -2.7 2.3 0.3 0.3 1.3 2.3 2.3 -1.7 1.3 -1.7
-3.74 2.76 -2.34 4.76 10.26 -0.74 -0.84 -5.94 0.46 0.06 0.36 2.86 5.06 -4.14 -2.04 -2.34 -3.74
4.84 1.44 3.24 7.84 14.44 0.04 7.84 4.84 0.04 0.04 1.44 4.84 4.84 3.24 1.44 3.24 4.84
2.89 5.29 1.69 2.89 7.29 13.69 0.09 7.29 5.29 0.09 0.09 1.69 5.29 5.29 2.89 1.69 2.89
concepts, the reader is referred to the section on statistical validation of a trading system under "The Binomial Distribution" in Appendix B.
The averages differ because you only average those X's and Y's that have a corresponding X or Y value (i.e., you average only those values that overlap), so the last Y value (3) is not figured in the
Y average nor is the first X value (1) figured in the x average. The numerator is the total of all entries in column E (0.8). To find the denominator, we take the square root of the total in column
F, which is 8.555699, and we take the square root to the total in column G, which is 8.258329, and multiply them together to obtain a denominator of 70.65578. We now divide our numerator of 0.8 by
our denominator of 70.65578 to obtain .011322. This is our linear correlation coefficient, r. The linear correlation coefficient of .011322 in this case is hardly indicative of anything, but it is
pretty much in the range you can expect for most trading systems. High positive correlation (at least .25) generally suggests that big wins are seldom followed by big losses and vice versa. Negative
correlation readings (below -.25 to -.30) imply that big losses tend to be followed by big wins and vice versa. The correlation coefficients can be translated, by a technique known as Fisher's Z
transformation, into a confidence level for a given number of trades. This topic is treated in Appendix C. Negative correlation is just as helpful as positive correlation. For example, if there
appears to be negative correlation and the system has just suffered a large loss, we can expect a large win and would therefore have more contracts on than we ordinarily would. If this trade proves
to be a loss, it will most likely not be a large loss (due to the negative correlation). Finally, in determining dependency you should also consider out-ofsample tests. That is, break your data
segment into two or more parts. If you see dependency in the first part, then see if that dependency also exists in the second part, and so on. This will help eliminate cases where there appears to
be dependency when in fact no dependency exists. Using these two tools (the runs test and the linear correlation coefficient) can help answer many of these questions. However, they can only answer
them if you have a high enough confidence limit and/or a high enough correlation coefficient. Most of the time these tools are of little help, because all too often the universe of futures system
trades is dominated by independency. If you get readings indicating dependency, and you want to take advantage of it in your trading, you must go back and incorporate a rule in your trading logic to
exploit the dependency. In other words, you must go back and change the trading system logic to account for this dependency (i.e., by passing certain trades or breaking up the system into two
different systems, such as one for trades after wins and one for trades after losses). Thus, we can state that if dependency shows up in your trades, you haven't maximized your system. In other
words, dependency, if found, should be exploited (by changing the rules of the system to take advantage of the dependency) until it no longer appears to exist. The first stage in money management is
therefore to exploit, and hence remove, any dependency in trades. For more on dependency than was covered in Portfolio Management Formulas and reiterated here, see Appendix C, "Further on Dependency:
The Turning Points and Phase Length Tests." We have been discussing dependency in the stream of trade profits and losses. You can also look for dependency between an indicator and the subsequent
trade, or between any two variables. For more on these - 12 -
As traders we must generally assume that dependency does not exist in the marketplace for the majority of market systems. That is, when trading a given market system, we will usually be operating in
an environment where the outcome of the next trade is not predicated upon the outcome(s) of prior trade(s). That is not to say that there is never dependency between trades for some market systems
(because for some market systems dependency does exist), only that we should act as though dependency does not exist unless there is very strong evidence to the contrary. Such would be the case if
the Z score and the linear correlation coefficient indicated dependency, and the dependency held up across markets and across optimizable parameter values. If we act as though there is dependency
when the evidence is not overwhelming, we may well just be fooling ourselves and causing more self-inflicted harm than good as a result. Even if a system showed dependency to a 95% confidence limit
for all values of a parameter, it still is hardly a high enough confidence limit to assume that dependency does in fact exist between the trades of a given market or system. A type I error is
committed when we reject an hypothesis that should be accepted. If, however, we accept an hypothesis when it should be rejected, we have committed a type II error. Absent knowledge of whether an
hypothesis is correct or not, we must decide on the penalties associated with a type I and type II error. Sometimes one type of error is more serious than the other, and in such cases we must decide
whether to accept or reject an unproven hypothesis based on the lesser penalty. Suppose you are considering using a certain trading system, yet you're not extremely sure that it will hold up when you
go to trade it real-time. Here, the hypothesis is that the trading system will hold up real-time. You decide to accept the hypothesis and trade the system. If it does not hold up, you will have
committed a type II error, and you will pay the penalty in terms of the losses you have incurred trading the system real-time. On the other hand, if you choose to not trade the system, and it is
profitable, you will have committed a type I error. In this instance, the penalty you pay is in forgone profits. Which is the lesser penalty to pay? Clearly it is the latter, the forgone profits of
not trading the system. Although from this example you can conclude that if you're going to trade a system real-time it had better be profitable, there is an ulterior motive for using this example.
If we assume there is dependency, when in fact there isn't, we will have committed a type 'II error. Again, the penalty we pay will not be in forgone profits, but in actual losses. However, if we
assume there is not dependency when in fact there is, we will have committed a type I error and our penalty will be in forgone profits. Clearly, we are better off paying the penalty of forgone
profits than undergoing actual losses. Therefore, unless there is absolutely overwhelming evidence of dependency, you are much better off assuming that the profits and losses in trading (whether with
a mechanical system or not) are independent of prior outcomes. There seems to be a paradox presented here. First, if there is dependency in the trades, then the system is 'suboptimal. Yet dependency
can never be proven beyond a doubt. Now, if we assume and act as though there is dependency (when in fact there isn't), we have committed a more expensive error than if we assume and act as though
dependency does not exist (when in fact it does). For instance, suppose we have a system with a history of 60 trades, and suppose we see dependency to a confidence level of 95% based on the runs
test. We want our system to be optimal, so we adjust its rules accordingly to exploit this apparent dependency. After we have done so, say we are left with 40 trades, and dependency no longer is
apparent. We are therefore satisfied that the system rules are optimal. These 40 trades will now have a higher optimal f than the entire 60 (more on optimal f later in this chapter). If you go and
trade this system with the new rules to exploit the dependency, and the higher concomitant optimal f, and if the dependency is not present, your performance will be closer to that of the 60 trades,
rather than the superior 40 trades. Thus, the f you have chosen will be too far to the right, resulting in a big price to pay on your part for assuming dependency. If dependency is there, then you
will be closer to the peak of the f curve by assuming that the dependency is there. Had you decided not to assume it when in fact there was dependency, you would
tend to be to the left of the peak of the f curve, and hence your performance would be suboptimal (but a lesser price to pay than being to the right of the peak). In a nutshell, look for dependency.
If it shows to a high enough degree across parameter values and markets for that system, then alter the system rules to capitalize on the dependency. Otherwise, in the absence of overwhelming
statistical evidence of dependency, assume that it does not exist, (thus opting to pay the lesser penalty if in fact dependency does exist).
MATHEMATICAL EXPECTATION By the same token, you are better off not to trade unless there is absolutely overwhelming evidence that the market system you are contemplating trading will be
profitable-that is, unless you fully expect the market system in question to have a positive mathematical expectation when you trade it realtime. Mathematical expectation is the amount you expect to
make or lose, on average, each bet. In gambling parlance this is sometimes known as the player's edge (if positive to the player) or the house's advantage (if negative to the player): (1.03)
Mathematical Expectation = ∑[i = 1,N](Pi*Ai) where P = Probability of winning or losing. A = Amount won or lost. N = Number of possible outcomes. The mathematical expectation is computed by
multiplying each possible gain or loss by the probability of that gain or loss and then summing these products together. Let's look at the mathematical expectation for a game where you have a 50%
chance of winning $2 and a 50% chance of losing $1 under this formula: Mathematical Expectation = (.5*2)+(.5*(-1)) = 1+(-5) = .5 In such an instance, of course, your mathematical expectation is to
win 50 cents per toss on average. Consider betting on one number in roulette, where your mathematical expectation is: ME = ((1/38)*35)+((37/38)*(-1)) = (.02631578947*35)+(.9736842105*(-1)) =
(9210526315)+(-.9736842105) = -.05263157903 Here, if you bet $1 on one number in roulette (American doublezero) you would expect to lose, on average, 5.26 cents per roll. If you bet $5, you would
expect to lose, on average, 26.3 cents per roll. Notice that different amounts bet have different mathematical expectations in terms of amounts, but the expectation as a percentage of the amount bet
is always the same. The player's expectation for a series of bets is the total of the expectations for the individual bets. So if you go play $1 on a number in roulette, then $10 on a number, then $5
on a number, your total expectation is: ME = (-.0526*1)+(-.0526*10)+(-.0526*5) = -.0526-.526 .263 = -.8416 You would therefore expect to lose, on average, 84.16 cents. This principle explains why
systems that try to change the sizes of their bets relative to how many wins or losses have been seen (assuming an independent trials process) are doomed to fail. The summation of negative
expectation bets is always a negative expectation! The most fundamental point that you must understand in terms of money management is that in a negative expectation game, there is no
money-management scheme that will make you a winner. If you continue to bet, regardless of how you manage your money, it is almost certain that you will be a loser, losing your entire stake no matter
how large it was to start. This axiom is not only true of a negative expectation game, it is true of an even-money game as well. Therefore, the only game you have a chance at winning in the long run
is a positive arithmetic expectation game. Then, you can only win if you either always bet the same constant bet size or bet with an f value less than the f value corresponding to the point where the
geometric mean HPR is less than or equal to 1. (We will cover the second part of this, regarding the geometric mean HPR, later on in the text.) - 13 -
This axiom is true only in the absence of an upper absorbing barrier. For example, let's assume a gambler who starts out with a $100 stake who will quit playing if his stake grows to $101. This upper
target of $101 is called an absorbing barrier. Let's suppose our gambler is always betting $1 per play on red in roulette. Thus, he has a slight negative mathematical expectation. The gambler is far
more likely to see his stake grow to $101 and quit than he is to see his stake go to zero and be forced to quit. If, however, he repeats this process over and over, he will find himself in a negative
mathematical expectation. If he intends on playing this game like this only once, then the axiom of going broke with certainty, eventually, does not apply. The difference between a negative
expectation and a positive one is the difference between life and death. It doesn't matter so much how positive or how negative your expectation is; what matters is whether it is positive or
negative. So before money management can even be considered, you must have a positive expectancy game. If you don't, all the money management in the world cannot save you 1. On the other hand, if you
have a positive expectation, you can, through proper money management, turn it into an exponential growth function. It doesn't even matter how marginally positive the expectation is! In other words,
it doesn't so much matter how profitable your trading system is on a 1 contract basis, so long as it is profitable, even if only marginally so. If you have a system that makes $10 per contract per
trade (once commissions and slippage have been deducted), you can use money management to make it be far more profitable than a system that shows a $1,000 average trade (once commissions and slippage
have been deducted). What matters, then, is not how profitable your system has been, but rather how certain is it that the system will show at least a marginal profit in the future. Therefore, the
most important preparation a trader can do is to make as certain as possible that he has a positive mathematical expectation in the future. The key to ensuring that you have a positive mathematical
expectation in the future is to not restrict your system's degrees of freedom. You want to keep your system's degrees of freedom as high as possible to ensure the positive mathematical expectation in
the future. This is accomplished not only by eliminating, or at least minimizing, the number of optimizable parameters, but also by eliminating, or at least minimizing, as many of the system rules as
possible. Every parameter you add, every rule you add, every little adjustment and qualification you add to your system diminishes its degrees of freedom. Ideally, you will have a system that is very
primitive and simple, and that continually grinds out marginal profits over time in almost all the different markets. Again, it is important that you realize that it really doesn't matter how
profitable the system is, so long as it is profitable. The money you will make trading will be made by how effective the money management you employ is. The trading system is simply a vehicle to give
you a positive mathematical expectation on which to use money management. Systems that work (show at least a marginal profit) on only one or a few markets, or have different rules or parameters for
different markets, probably won't work real-time for very long. The problem with most technically oriented traders is that they spend too much time and effort hating the computer crank out run after
run of different rules and parameter values for trading systems. This is the ultimate "woulda, shoulda, coulda" game. It is completely counterproductive. Rather than concentrating your efforts and
computer time toward maximizing your trading system profits, direct the energy toward maximizing the certainty level of a marginal profit.
This rule is applicable to trading one market system only. When you begin trading more than one market system, you step into a strange environment where it is possible to include a market system with
a negative mathematical expectation as one of the markets being traded and actually have a higher net mathematical expectation than the net mathematical expectation of the group before the inclusion
of the negative expectation system! Further, it is possible that the net mathematical expectation for the group with the inclusion of the negative mathematical expectation market system can be higher
than the mathematical expectation of any of the individual market systems! For the time being we will consider only one market system at a time, so we most have a positive mathematical expectation in
order for the money-management techniques to work.
System A
TO REINVEST TRADING PROFITS OR NOT Let's call the following system "System A." In it we have 2 trades: the first making SO%, the second losing 40%. If we do not reinvest our returns, we make 10%. If
we do reinvest, the same sequence of trades loses 10%. System A No Reinvestment Trade No. P&L Cumulative 100 1 50 150 2 -40 110
With Reinvestment P&L Cumulative 100 50 150 -60 90
Now let's look at System B, a gain of 15% and a loss of 5%, which also nets out 10% over 2 trades on a nonreinvestment basis, just like System A. But look at the results of System B with
reinvestment: Unlike system A, it makes money. System B No Reinvestment Trade No. P&L Cumulative 100 1 15 115 2 -5 110
With Reinvestment P&L Cumulative 100 15 115 -5.75 109.25
An important characteristic of trading with reinvestment that must be realized is that reinvesting trading profits can turn a winning system into a losing system but not vice versa! A winning system
is turned into a losing system in trading with reinvestment if the returns are not consistent enough. Changing the order or sequence of trades does not affect the final outcome. This is not only true
on a nonreinvestment basis, but also true on a reinvestment basis (contrary to most people's misconception). System A No Reinvestment Trade No. P&L Cumulative 100 1 40 60 2 50 110 System B No
Reinvestment Trade No. P&L Cumulative 100 1 -5 95 2 15 110
No Reinvestment Trade No. P&L Cumulative 100 1 50 150 2 -40 110 3 1 111 4 1 112 Percentage of Wins 75% Avg. Trade 3 Risk/Rew. 1.3 Std. Dev. 31.88 Avg. Trade/Std. Dev. 0.09
Now let's take System B and add 2 more losers of 1 point each. System B No Reinvestment Trade No. P&L Cumulative 100 1 15 115 2 -5 110 3 -1 109 4 -1 108 Percentage of Wins 25% Avg. Trade 2 Risk/Rew.
2.14 Std. Dev. 7.68 Avg. Trade/Std. Dev. 0.26
With Reinvestment P&L Cumulative 100 -5 95 14.25 109.25
As can obviously be seen, the sequence of trades has no bearing on the final outcome, whether viewed on a reinvestment or a nonreinvestment basis. (One side benefit to trading on a reinvestment basis
is that the drawdowns tend to be buffered. As a system goes into and through a drawdown period, each losing trade is followed by a trade with fewer and fewer contracts.) By inspection it would seem
you are better off trading on a nonreinvestment basis than you are reinvesting because your probability of winning is greater. However, this is not a valid assumption, because in the real world we do
not withdraw all of our profits and make up all of our losses by depositing new cash into an account. Further, the nature of investment or trading is predicated upon the effects of compounding. If we
do away with compounding (as in the nonreinvestment basis), we can plan on doing little better in the future than we can today, no matter how successful our trading is between now and then. It is
compounding that takes the linear function of account growth and makes it a geometric function. If a system is good enough, the profits generated on a reinvestment basis will be far greater than
those generated on a nonreinvestment basis, and that gap will widen as time goes by. If you have a system that can beat the market, it doesn't make any sense to trade it in any other way than to
increase your amount wagered as your stake increases.
MEASURING A GOOD SYSTEM FOR REINVESTMENT THE GEOMETRIC MEAN So far we have seen how a system can be sabotaged by not being consistent enough from trade to trade. Does this mean we should close up and
put our money in the bank? Let's go back to System A, with its first 2 trades. For the sake of illustration we are going to add two winners of 1 point each.
- 14 -
With Reinvestment P&L Cumulative 100 15 115 -5.75 109.25 -1.0925 108.1575 -1.08157 107.0759 25% 1.768981 1.89 7.87 0.22
Now, if consistency is what we're really after, let's look at a bank account, the perfectly consistent vehicle (relative to trading), paying 1 point per period. We'll call this series System C.
System C No Reinvestment Trade No. P&L Cumulative 100 1 1 101 2 1 102 3 1 103 4 1 104 Percentage of Wins 1.00 Avg. Trade 1 Risk/Rew. Infinite Std. Dev. 0.00 Avg. Trade/Std. Dev. Infinite
With Reinvestment P&L Cumulative 100 40 60 30 90
With Reinvestment P&L Cumulative 100 50 150 -60 90 0.9 90.9 0.909 91.809 75% - 2.04775 0.86 39.00 -0.05
With Reinvestment P&L Cumulative 100 1 101 1.01 102.01 1.0201 103.0301 1.030301 104.0604 1 .00 1.015100 Infinite 0.01 89.89
Our aim is to maximize our profits under reinvestment trading. With that as the goal, we can see that our best reinvestment sequence comes from System B. How could we have known that, given only
information regarding nonreinvestment trading? By percentage of winning trades? By total dollars? By average trade? The answer to these questions is "no," because answering "yes" would have us
trading System A (but this is the solution most futures traders opt for). What if we opted for most consistency (i.e., highest ratio average trade/standard deviation or lowest standard deviation)?
How about highest risk/reward or lowest drawdown? These are not the answers either. If they were, we should put our money in the bank and forget about trading. System B has the tight mix of
profitability and consistency. Systems A and C do not. That is why System B performs the best under reinvestment trading. What is the best way to measure this "right mix"? It turns out there is a
formula that will do just that-the geometric mean. This is simply the Nth root of the Terminal Wealth Relative (TWR), where N is the number of periods (trades). The TWR is simply what we've been
computing when we figure what the final cumulative amount is under reinvestment, In other words, the TWRs for the three systems we just saw are: System System A System B System C
TWR .91809 1.070759 1.040604
Since there are 4 trades in each of these, we take the TWRs to the 4th root to obtain the geometric mean: System System A System B System C
Geometric Mean 0. 978861 1.017238 1.009999
(1.04) TWR = ∏[i = 1,N]HPRi (1.05) Geometric Mean = TWR^(1/N)
where N = Total number of trades. HPR = Holding period returns (equal to 1 plus the rate of return -e .g., an HPR of 1.10 means a 10% return over a given period, bet, or trade). TWR = The number of
dollars of value at the end of a run of periods/bets/trades per dollar of initial investment, assuming gains and losses are allowed to compound. Here is another way of expressing these variables:
(1.06) TWR = Final Stake/Starting Stake The geometric mean (G) equals your growth factor per play, or: (1.07) G = (Final Stake/Starting Stake)^(I/Number of Plays) Think of the geometric mean as the
"growth factor per play" of your stake. The system or market with the highest geometric mean is the system or market that makes the most profit trading on a reinvestment of returns basis. A geometric
mean less than one means that the system would have lost money if you were trading it on a reinvestment basis. Investment performance is often measured with respect to the dispersion of returns.
Measures such as the Sharpe ratio, Treynor measure, Jensen measure, Vami, and so on, attempt to relate investment performance to dispersion. The geometric mean here can be considered another of these
types of measures. However, unlike the other measures, the geometric mean measures investment performance relative to dispersion in the same mathematical form as that in which the equity in your
account is affected. Equation (1.04) bears out another point. If you suffer an HPR of 0, you will be completely wiped out, because anything multiplied by zero equals zero. Any big losing trade will
have a very adverse effect on the TWR, since it is a multiplicative rather than additive function. Thus we can state that in trading you are only as smart as your dumbest mistake.
chance of losing as the length of the game is shortened - i.e., as the number of trials approaches 1. If you play a game whereby you have a 49% chance of winning $1 and a 51% of losing $1, you are
best off betting on only 1 trial. The more trials you bet on, the greater the likelihood you will lose, with the probability of losing approaching certainty as the length of the game approaches
infinity. That isn't to say that you are in a positive expectation for the 1 trial, but you have at least minimized the probabilities of being a loser by only playing 1 trial. Return now to a
positive expectation game. We determined at the outset of this discussion that on any given trade, the quantity that a trader puts on can be expressed as a factor, f, between 0 and 1, that represents
the trader's quantity with respect to both the perceived loss on the next trade and the trader's total equity. If you know you have an edge over N bets but you do not know which of those N bets will
be winners (and for how much), and which will be losers (and for how much), you are best off (in the long run) treating each bet exactly the same in terms of what percentage of your total stake is at
risk. This method of always trading a fixed fraction of your stake has shown time and again to be the best staking system. If there is dependency in your trades, where winners beget winners and
losers beget losers, or vice versa, you are still best off betting a fraction of your total stake on each bet, but that fraction is no longer fixed. In such a case, the fraction must reflect the
effect of this dependency (that is, if you have not yet "flushed" the dependency out of your system by creating system rules to exploit it). "Wait," you say. "Aren't staking systems foolish to begin
with? Haven't we seen that they don't overcome the house advantage, they only increase our total action?" This is absolutely true for a situation with a negative mathematical expectation. For a
positive mathematical expectation, it is a different story altogether. In a positive expectancy situation the trader/gambler is faced with the question of how best to exploit the positive
We have spent the course of this discussion laying the groundwork for this section. We have seen that in order to consider betting or trading a given situation or system you must first determine if a
positive mathematical expectation exists. We have seen that what is seemingly a "good bet" on a mathematical expectation basis (i.e., the mathematical expectation is positive) may in fact not be such
a good bet when you consider reinvestment of returns, if you are reinvesting too high a percentage of your winnings relative to the dispersion of outcomes of the system. Reinvesting returns never
raises the mathematical expectation (as a percentage-although it can raise the mathematical expectation in terms of dollars, which it does geometrically, which is why we want to reinvest). If there
is in fact a positive mathematical expectation, however small, the next step is to exploit this positive expectation to its fullest potential. For an independent trials process, this is achieved by
reinvesting a fixed fraction of your total stake. 2 And how do we find this optimal f? Much work has been done in recent decades on this topic in the gambling community, the most famous and accurate
of which is known as the Kelly Betting System. This is actually an application of a mathematical idea developed in early 1956 by John L. Kelly, Jr.3 The Kelly criterion states that we should bet that
fixed fraction of our stake (f) which maximizes the growth function G(f): (1.08) G(f) = P*ln(l+B*f)+(1 -P)*ln(l-f) where f = The optimal fixed fraction. P = The probability of a winning bet or trade.
B = The ratio of amount won on a winning bet to amount lost on a losing bet. ln() = The natural logarithm function.
Thus far we have discussed reinvestment of returns in trading whereby we reinvest 100% of our stake on all occasions. Although we know that in order to maximize a potentially profitable situation we
must use reinvestment, a 100% reinvestment is rarely the wisest thing to do. Take the case of a fair bet (50/50) on a coin toss. Someone is willing to pay you $2 if you win the toss but will charge
you $1 if you lose. Our mathematical expectation is .5. In other words, you would expect to make 50 cents per toss, on average. This is true of the first toss and all subsequent tosses, provided you
do not step up the amount you are wagering. But in an independent trials process this is exactly what you should do. As you win you should commit more and more to each toss. Suppose you begin with an
initial stake of one dollar. Now suppose you win the first toss and are paid two dollars. Since you had your entire stake ($1) riding on the last bet, you bet your entire stake (now $3) on the next
toss as well. However, this next toss is a loser and your entire $3 stake is gone. You have lost your original $1 plus the $2 you had won. If you had won the last toss, it would have paid you $6
since you had three $1 bets on it. The point is that if you are betting 100% of your stake, you'll be wiped out as soon as you encounter a losing wager, an inevitable event. If we were to replay the
previous scenario and you had bet on a nonreinvestment basis (i.e., constant bet size) you would have made $2 on the first bet and lost $1 on the second. You would now be net ahead $1 and have a
total stake of $2. Somewhere between these two scenarios lies the optimal betting approach for a positive expectation. However, we should first discuss the optimal betting strategy for a negative
expectation game. When you know that the game you are playing has a negative mathematical expectation, the best bet is no bet. Remember, there is no money-management strategy that can turn a losing
game into a winner. 'However, if you must bet on a negative expectation game, the next best strategy is the maximum boldness strategy. In other words, you want to bet on as few trials as possible (as
opposed to a positive expectation game, where you want to bet on as many trials as possible). The more trials, the greater the likelihood that the positive expectation will be realized, and hence the
greater the likelihood that betting on the negative expectation side will lose. Therefore, the negative expectation side has a lesser and lesser - 15 -
For a dependent trials process, just as for an independent trials process, the idea of betting a proportion of your total stake also yields the greatest exploitation of a positive mathematical
expectation. However, in a dependent trials process you optimally bet a variable fraction of your total stake, the exact fraction for each individual bet being determined by the probabilities and
payoffs involved for each individual bet. This is analogous to trading a dependent trials process as two separate market systems. 3
Kelly, J. L., Jr., A New Interpretation of Information Rate, Bell System Technical Journal, pp. 917-926, July, 1956.
As it turns out, for an event with two possible outcomes, this optimal f4 can be found quite easily with the Kelly formulas.
KELLY FORMULAS Beginning around the late 1940s, Bell System engineers were working on the problem of data transmission over long-distance lines. The problem facing them was that the lines were
subject to seemingly random, unavoidable "noise" that would interfere with the transmission. Some rather ingenious solutions were proposed by engineers at Bell Labs. Oddly enough, there are great
similarities between this data communications problem and the problem of geometric growth as pertains to gambling money management (as both problems are the product of an environment of favorable
uncertainty). One of the outgrowths of these solutions is the first Kelly formula. The first equation here is: (1.09a) f = 2*P-l or (1.09b) f = P-Q where f = The optimal fixed fraction. P = The
probability of a winning bet or trade. Q = The probability of a loss, (or the complement of P, equal to 1P). Both forms of Equation (1.09) are equivalent. Equation (l.09a) or (1.09b) will yield the
correct answer for optimal f provided the quantities are the same for both wins and losses. As an example, consider the following stream of bets: -1, +1, +1,-1,-1, +1, +1, +1, +1,-1 There are 10
bets, 6 winners, hence: f = (.6*2)-l = 1.2-1 = .2 If the winners and losers were not all the same size, then this formula would not yield the correct answer. Such a case would be our two-toone
coin-toss example, where all of the winners were for 2 units and all of the losers for 1 unit. For this situation the Kelly formula is: (1.10a) f = ((B+1)*P-1)/B where f = The optimal fixed fraction.
P = The probability of a winning bet or trade. B = The ratio of amount won on a winning bet to amount lost on a losing bet. In our two-to-one coin-toss example: f = ((2+ l).5-l)/2 = (3*.5-l)/2 = (1.5
-l)/2 = .5/2 = .25 This formula will yield the correct answer for optimal f provided all wins are always for the same amount and all losses are always for the same amount. If this is not so, then
this formula will not yield the correct answer. The Kelly formulas are applicable only to outcomes that have a Bernoulli distribution. A Bernoulli distribution is a distribution with two possible,
discrete outcomes. Gambling games very often have a Bernoulli distribution. The two outcomes are how much you make when you win, and how much you lose when you lose. Trading, unfortunately, is not
this simple. To apply the Kelly formulas to a non-Bernoulli distribution of outcomes (such as trading) is a mistake. The result will not be the true optimal f. For more on the Bernoulli distribution,
consult Appendix B. Consider the following sequence of bets/trades: +9, +18, +7, +1, +10, -5, -3, -17, -7 Since this is not a Bernoulli distribution (the wins and losses are of different amounts),
the Kelly formula is not applicable. However, let's try it anyway and see what we get. Since 5 of the 9 events are profitable, then P = .555. Now let's take averages of the wins and losses to
calculate B (here is where so many 4
As used throughout the text, f is always lowercase and in roman type. It is not to be confused with the universal constant, F, equal to 4.669201609…, pertaining to bifurcations in chaotic systems. -
16 -
traders go wrong). The average win is 9, and the average loss is 8. Therefore we say that B = 1.125. Plugging in the values we obtain: f = ((1.125+1) .555-1)/1.125 = (2.125*.555-1)/1.125 =
(1.179375-1)/1.125 = .179375/1.125 = .159444444 So we say f = .16. You will see later in this chapter that this is not the optimal f. The optimal f for this sequence of trades is .24. Applying the
Kelly formula when all wins are not for the same amount and/or all losses are not for the same amount is a mistake, for it will not yield the optimal f. Notice that the numerator in this formula
equals the mathematical expectation for an event with two possible outcomes as defined earlier. Therefore, we can say that as long as all wins are for the same amount and all losses are for the same
amount (whether or not the amount that can be won equals the amount that can be lost), the optimal f is: (1.10b) f = Mathematical Expectation/B where f = The optimal fixed fraction. B = The ratio of
amount won on a winning bet to amount lost on a losing bet. The mathematical expectation is defined in Equation (1.03), but since we must have a Bernoulli distribution of outcomes we must make
certain in using Equation (1.10b) that we only have two possible outcomes. Equation (l.l0a) is the most commonly seen of the forms of Equation (1.10) (which are all equivalent). However, the formula
can be reduced to the following simpler form: (1.10c) f = P-Q/B where f = The optimal fixed fraction. P = The probability of a winning bet or trade. Q = The probability of a loss (or the complement
of P, equal to 1-P).
FINDING THE OPTIMAL F BY THE GEOMETRIC MEAN In trading we can count on our wins being for varying amounts and our losses being for varying amounts. Therefore the Kelly formulas could not give us the
correct optimal f. How then can we find our optimal f to know how many contracts to have on and have it be mathematically correct? Here is the solution. To begin with, we must amend our formula for
finding HPRs to incorporate f: (1.11) HPR = 1+f*(-Trade/Biggest Loss) where f = The value we are using for f. -Trade = The profit or loss on a trade (with the sign reversed so that losses are
positive numbers and profits are negative). Biggest Loss = The P&L that resulted in the biggest loss. (This should always be a negative number.) And again, TWR is simply the geometric product of the
HPRs and geometric mean (G) is simply the Nth root of the TWR. (1.12) TWR = ∏[i = 1,N](1+f*(-Tradei/Biggest Loss)) (1.13) G = (∏[i = 1,N](1+f*(-Tradei/Biggest Loss))]^(1/N) where f = The value we are
using for f. -Tradei = The profit or loss on the ith trade (with the sign reversed so that losses are positive numbers and profits are negative). Biggest Loss = The P&L that resulted in the biggest
loss. (This should always be a negative number.) N = The total number of trades. G = The geometric mean of the HPRs. By looping through all values for I between .01 and 1, we can find that value for
f which results in the highest TWR. This is the value for f that would provide us with the maximum return on our money using fixed fraction. We can also state that the optimal f is the f that yields
highest geometric mean. It matters not whether we look for highest TWR or geometric mean, as both are maximized at the same value for f. Doing this with a computer is easy, since both the TWR curve
and the geometric mean curve are smooth with only one peak. You simply loop from f = .01 to f = 1.0 by .01. As soon as you get a TWR that is less than the previous TWR, you know that the f
corresponding to the previous TWR is the optimal f. You can employ many other search algorithms to facilitate this process of finding the optimal f in the range of 0 to 1. One of the fastest ways is
with the parabolic interpolation search procedure detailed in portfolio Management Formulas.
TO SUMMARIZE THUS FAR You have seen that a good system is the one with the highest geometric mean. Yet to find the geometric mean you must know f. You may find this confusing. Here now is a summary
and clarification of the process: Take the trade listing of a given market system. 1. Find the optimal f, either by testing various f values from 0 to 1 or through iteration. The optimal f is that
which yields the highest TWR. 2. Once you have found f, you can take the Nth root of the TWR that corresponds to your f, where N is the total number of trades. This is your geometric mean for this
market system. You can now use this geometric mean to make apples-to-apples comparisons with other market systems, as well as use the f to know how many contracts to trade for that particular market
system. Once the highest f is found, it can readily be turned into a dollar amount by dividing the biggest loss by the negative optimal f. For example, if our biggest loss is $100 and our optimal f
is .25, then -$100/.25 = $400. In other words, we should bet 1 unit for every $400 we have in our stake. If you're having trouble with some of these concepts, try thinking in terms of betting in
units, not dollars (e.g., one $5 chip or one futures contract or one 100-share unit of stock). The number of dollars you allocate to each unit is calculated by figuring your largest loss divided by
the negative optimal f. The optimal f is a result of the balance between a system's profitmaking ability (on a constant 1-unit basis) and its risk (on a constant 1unit basis). Most people think that
the optimal fixed fraction is that percentage of your total stake to bet, This is absolutely false. There is an interim step involved. Optimal f is not in itself the percentage of your total stake to
bet, it is the divisor of your biggest loss. The quotient of this division is what you divide your total stake by to know how many bets to make or contracts to have on. You will also notice that
margin has nothing whatsoever to do with what is the mathematically optimal number of contracts to have on. Margin doesn't matter because the sizes of individual profits and losses are not the
product of the amount of money put up as margin (they would be the same whatever the size of the margin). Rather, the profits and losses are the product of the exposure of 1 unit (1 futures
contract). The amount put up as margin is further made meaningless in a money-management sense, because the size of the loss is not limited to the margin. Most people incorrectly believe that f is a
straight-line function rising up and to the right. They believe this because they think it would mean that the more you are willing to risk the more you stand to make. People reason this way because
they think that a positive mathematical expectancy is just the mirror image of a negative expectancy. They mistakenly believe that if increasing your total action in a negative expectancy game
results in losing faster, then increasing your total action in a positive expectancy game will result in winning faster. This is not true. At some point in a positive expectancy situation, further
increasing your total action works against you. That point is a function of both the system's profitability and its consistency (i.e., its geometric mean), since you are reinvesting the returns back
into the system. It is a mathematical fact that when two people face the same sequence of favorable betting or trading opportunities, if one uses the optimal f and the other uses any different
money-management system, then the ratio of the optimal f bettor's stake to the other person's stake will increase as time goes on, with higher and higher probability. In the long - 17 -
run, the optimal f bettor will have infinitely greater wealth than any other money-management system bettor with a probability approaching 1. Furthermore, if a bettor has the goal of reaching a
specified fortune and is facing a series of favorable betting or trading opportunities, the expected time to reach the fortune will be lower (faster) with optimal f than with any other betting
system. Let's go back and reconsider the following sequence of bets (trades): +9, +18, +7, +1, +10, -5, -3, -17, -7 Recall that we determined earlier in this chapter that the Kelly formula was not
applicable to this sequence, because the wins were not all for the same amount and neither were the losses. We also decided to average the wins and average the losses and take these averages as our
values into the Kelly formula (as many traders mistakenly do). Doing this we arrived at an f value of .16. It was stated that this is an incorrect application of Kelly, that it would not yield the
optimal f. The Kelly formula must be specific to a single bet. You cannot average your wins and losses from trading and obtain the true optimal fusing the Kelly formula. Our highest TWR on this
sequence of bets (trades) is obtained at .24, or betting $1 for every $71 in our stake. That is the optimal geometric growth you can squeeze out of this sequence of bets (trades) trading fixed
fraction. Let's look at the TWRs at different points along 100 loops through this sequence of bets. At 1 loop through (9 bets or trades), the TWR for f = ,16 is 1.085, and for f = .24 it is 1.096.
This means that for 1 pass through this sequence of bets an f = .16 made 99% of what an f = .24 would have made. To continue: Passes Throe 1 10 40 100
Total Bets or Trades 9 90 360 900
TWR for f=.24 1.096 2.494 38.694 9313.312
TWR for f=.16 1.085 2.261 26.132 3490.761
Percentage Difference 1 9.4 32.5 62.5
As can be seen, using an f value that we mistakenly figured from Kelly only made 37.5% as much as did our optimal f of .24 after 900 bets or trades (100 cycles through the series of 9 outcomes). In
other words, our optimal f of .24, which is only .08 different from .16 (50% beyond the optimal) made almost 267% the profit that f = .16 did after 900 bets! Let's go another 11 cycles through this
sequence of trades, so that we now have a total of 999 trades. Now our TWR for f = .16 is 8563.302 (not even what it was for f = .24 at 900 trades) and our TWR for f = .24 is 25,451.045. At 999
trades f = .16 is only 33.6% off = .24, or f = .24 is 297% off = .16! As you see, using the optimal f does not appear to offer much advantage over the short run, but over the long run it becomes more
and more important. The point is, you must give the program time when trading at the optimal f and not expect miracles in the short run. The more time (i.e., bets or trades) that elapses, the greater
the difference between using the optimal f and any other money-management strategy.
GEOMETRIC AVERAGE TRADE At this point the trader may be interested in figuring his or her geometric average trade-that is, what is the average garnered per contract per trade assuming profits are
always reinvested and fractional contracts can be purchased. This is the mathematical expectation when you are trading on a fixed fractional basis. This figure shows you what effect there is by
losers occurring when you have many contracts on and winners occurring when you have fewer contracts on. In effect, this approximates how a system would have fared per contract per trade doing fixed
fraction. (Actually the geometric average trade is your mathematical expectation in dollars per contract per trade. The geometric mean minus 1 is your mathematical expectation per trade-a geometric
mean of 1.025 represents a mathematical expectation of 2.5% per trade, irrespective of size.) Many traders look only at the average trade of a market system to see if it is high enough to justify
trading the system. However, they should be looking at the geometric average trade (GAT) in making their decision. (1.14) GAT = G*(Biggest Loss/-f) where G = Geometric mean-1.
f = Optimal fixed fraction. (and, of course, our biggest loss is always a negative number). For example, suppose a system has a geometric mean of 1.017238, the biggest loss is $8,000, and the optimal
f is .31. Our geometric average trade would be: GAT = (1.017238-1)*(-$8,000/-.31) = .017238*$25,806.45 = $444.85
WHY YOU MUST KNOW YOUR OPTIMAL F The graph in Figure 1-6 further demonstrates the importance of using optimal fin fixed fractional trading. Recall our f curve for a 2:1 cointoss game, which was
illustrated in Figure 1-1. Let's increase the winning payout from 2 units to 5 units as is demonstrated in Figure 1-6. Here your optimal f is .4, or to bet $1 for every $2.50 in you stake. After 20
sequences of +5,-l (40 bets), your $2.50 stake has grown to $127,482, thanks to optimal f. Now look what happens in this extremely favorable situation if you miss the optimal f by 20%. At f values of
.6 and .2 you don't make a tenth as much as you do at .4. This particular situation, a 50/50 bet paying 5 to 1, has a mathematical expectation of (5*.5)+(1*(-.5)) = 2, yet if you bet using an f value
greater than .8 you lose money. 140 120 100 T 80 W R 60 40 20 0 0.05
0.45 0.55 f values
Figure 1-6 20 sequences of +5, -1. Two points must be illuminated here. The first is that whenever we discuss a TWR, we assume that in arriving at that TWR we allowed fractional contracts along the
way. In other words, the TWR assumes that you are able to trade 5.4789 contracts if that is called for at some point. It is because the TWR calculation allows for fractional contracts that the TWR
will always be the same for a given set of trade outcomes regardless of their sequence. You may argue that in real life this is not the case. In real life you cannot trade fractional contracts. Your
argument is correct. However, I am allowing the TWR to be calculated this way because in so doing we represent the average TWR for all possible starting stakes. If you require that all bets be for
integer amounts, then the amount of the starting stake becomes important. However, if you were to average the TWRs from all possible starting stake values using integer bets only, you would arrive at
the same TWR value that we calculate by allowing the fractional bet. Therefore, the TWR value as calculated is more realistic than if we were to constrain it to integer bets only, in that it is
representative of the universe of outcomes of different starting stakes. Furthermore, the greater the equity in the account, the more trading on an integer contract basis will be the same as trading
on a fractional contract basis. The limit here is an account with an infinite amount of capital where the integer bet and fractional bet are for the same amounts exactly. This is interesting in that
generally the closer you can stick to optimal f, the better. That is to say that the greater the capitalization of an account, the greater will be the effect of optimal f. Since optimal f will make
an account grow at the fastest possible rate, we can state that optimal f will make itself work better and better for you at the fastest possible rate. The graphs (Figures 1-1 and 1-6) bear out a few
more interesting points. The first is that at no other fixed fraction will you make more money than you will at optimal f. In other words, it does not pay to bet - 18 -
$1 for every $2 in your stake in the earlier example of a 5:1 game. In such a case you would make more money if you bet $1 for every $2.50 in your stake. It does not pay to risk more than the optimal
f-in fact, you pay a price to do so! Obviously, the greater the capitalization of an account the more accurately you can stick to optimal f, as the dollars per single contract required are a smaller
percentage of the total equity. For example, suppose optimal f for a given market system dictates you trade 1 contract for every $5,000 in an account. If an account starts out with $10,000 in equity,
it will need to gain (or lose) 50% before a quantity adjustment is necessary. Contrast this to a $500,000 account, where there would be a contract adjustment for every 1% change in equity. Clearly
the larger account can better take advantage of the benefits provided by optimal f than can the smaller account. Theoretically, optimal f assumes you can trade in infinitely divisible quantities,
which is not the case in real life, where the smallest quantity you can trade in is a single contract. In the asymptotic sense this does not matter. But in the real-life integer-bet scenario, a good
case could be presented for trading a market system that requires as small a percentage of the account equity as possible, especially for smaller accounts. But there is a tradeoff here as well. Since
we are striving to trade in markets that would require us to trade in greater multiples than other markets, we will be paying greater commissions, execution costs, and slippage. Bear in mind that the
amount required per contract in real life is the greater of the initial margin requirement and the dollar amount per contract dictated by the optimal f. The finer you can cut it (i.e., the more
frequently you can adjust the size of the positions you are trading so as to align yourself with what the optimal f dictates), the better off you are. Most accounts would therefore be better off
trading the smaller markets. Corn may not seem like a very exciting market to you compared to the S&P's. Yet for most people the corn market can get awfully exciting if they have a few hundred
contracts on. Those who trade stocks or forwards (such as forex traders) have a tremendous advantage here. Since you must calculate your optimal f based on the outcomes (the P&Ls) on a 1-contract (1
unit) basis, you must first decide what 1 unit is in stocks or forex. As a stock trader, say you decide that I unit will be 100 shares. You will use the P&L stream generated by trading 100 shares on
each and every trade to determine your optimal f. When you go to trade this particular stock (and let's say your system calls for trading 2.39 contracts or units), you will be able to trade the
fractional part (the .39 part) by putting on 239 shares. Thus, by being able to trade the fractional part of 1 unit, you are able to take more advantage of optimal f. Likewise for forex traders, who
must first decide what 1 contract or unit is. For the forex trader, 1 unit may be one million U.S. dollars or one million Swiss francs.
THE SEVERITY OF DRAWDOWN It is important to note at this point that the drawdown you can expect with fixed fractional trading, as a percentage retracement of your account equity, historically would
have been at least as much as f percent. In other words if f is .55, then your drawdown would have been at least 55% of your equity (leaving you with 45% at one point). This is so because if you are
trading at the optimal f, as soon as your biggest loss was hit, you would experience the drawdown equivalent to f. Again, assuming that f for a system is .55 and assuming that translates into trading
1 contract for every $10,000, this means that your biggest loss was $5,500. As should by now be obvious, when the biggest loss was encountered (again we're speaking historically what would have
happened), you would have lost $5,500 for each contract you had on, and would have had 1 contract on for every $10,000 in the account. At that point, your drawdown is 55% of equity. Moreover, the
drawdown might continue: The next trade or series of trades might draw your account down even more. Therefore, the better a system, the higher the f. The higher the f, generally the higher the
drawdown, since the drawdown (in terms of a percentage) can never be any less than the f as a percentage. There is a paradox involved here in that if a system is good enough to generate an optimal f
that is a high percentage, then the drawdown for such a good system will also be quite high. Whereas optimal fallows you to experience the greatest geometric growth, it also gives you enough rope to
hang yourself with.
Most traders harbor great illusions about the severity of drawdowns. Further, most people have fallacious ideas regarding the ratio of potential gains to dispersion of those gains. We know that if we
are using the optimal f when we are fixed fractional trading, we can expect substantial drawdowns in terms of percentage equity retracements. Optimal f is like plutonium. It gives you a tremendous
amount of power, yet it is dreadfully dangerous. These substantial drawdowns are truly a problem, particularly for notices, in that trading at the optimal f level gives them the chance to experience
a cataclysmic loss sooner than they ordinarily might have. Diversification can greatly buffer the drawdowns. This it does, but the reader is warned not to expect to eliminate drawdown. In fact, the
real benefit of diversification is that it lets you get off many more trials, many more plays, in the same time period, thus increasing your total profit. Diversification, although usually the best
means by which to buffer drawdowns, does not necessarily reduce drawdowns, and in some instances, may actually increase them! Many people have the mistaken impression that drawdown can be completely
eliminated if they diversify effectively enough. To an extent this is true, in that drawdowns can be buffered through effective diversification, but they can never be completely eliminated. Do not be
deluded. No matter how good the systems employed are, no matter how effectively you diversify, you will still encounter substantial drawdowns. The reason is that no matter of how uncorrelated your
market systems are, there comes a period when most or all of the market systems in your portfolio zig in unison against you when they should be zagging. You will have enormous difficulty finding a
portfolio with at least 5 years of historical data to it and all market systems employing the optimal f that has had any less than a 30% drawdown in terms of equity retracement! This is regardless of
how many market systems you employ. If you want to be in this and do it mathematically correctly, you better expect to be nailed for 30% to 95% equity retracements. This takes enormous discipline,
and very few people can emotionally handle this. When you dilute f, although you reduce the drawdowns arithmetically, you also reduce the returns geometrically. Why commit funds to futures trading
that aren't necessary simply to flatten out the equity curve at the expense of your bottom-line profits? You can diversify cheaply somewhere else. Any time a trader deviates from always trading the
same constant contract size, he or she encounters the problem of what quantities to trade in. This is so whether the trader recognizes this problem or not. Constant contract trading is not the
solution, as you can never experience geometric growth trading constant contract. So, like it or not, the question of what quantity to take on the next trade is inevitable for everyone. To simply
select an arbitrary quantity is a costly mistake. Optimal f is factual; it is mathematically correct.
MODERN PORTFOLIO THEORY Recall the paradox of the optimal f and a market system's drawdown. The better a market system is, the higher the value for f. Yet the drawdown (historically) if you are
trading the optimal f can never be lower than f. Generally speaking, then, the better the market system is, the greater the drawdown will be as a percentage of account equity if you are trading
optimal f. That is, if you want to have the greatest geometric growth in an account, then you can count on severe drawdowns along the way. Effective diversification among other market systems is the
most effective way in which this drawdown can be buffered and conquered while still staying close to the peak of the f curve (i.e., without hating to trim back to, say, f/2). When one market system
goes into a drawdown, another one that is being traded in the account will come on strong, thus canceling the draw-down of the other. This also provides for a catalytic effect on the entire account.
The market system that just experienced the drawdown (and now is getting back to performing well) will have no less funds to start with than it did when the drawdown began (thanks to the other market
system canceling out the drawdown). Diversification won't hinder the upside of a system (quite the reverse-the upside is far greater, since after a drawdown you aren't starting back with fewer
contracts), yet it will buffer the downside (but only to a very limited extent). There exists a quantifiable, optimal portfolio mix given a group of market systems and their respective optimal fs.
Although we cannot be certain that the optimal portfolio mix in the past will be optimal in the - 19 -
future, such is more likely than that the optimal system parameters of the past will be optimal or near optimal in the future. Whereas optimal system parameters change quite quickly from one time
period to another, optimal portfolio mixes change very slowly (as do optimal f values). Generally, the correlations between market systems tend to remain constant. This is good news to a trader who
has found the optimal portfolio mix, the optimal diversification among market systems.
THE MARKOVITZ MODEL The basic concepts of modern portfolio theory emanate from a monograph written by Dr. Harry Markowitz.5 Essentially, Markowitz proposed that portfolio management is one of
composition, not individual stock selection as is more commonly practiced. Markowitz argued that diversification is effective only to the extent that the correlation coefficient between the markets
involved is negative. If we have a portfolio composed of one stock, our best diversification is obtained if we choose another stock such that the correlation between the two stock prices is as low as
possible. The net result would be that the portfolio, as a whole (composed of these two stocks with negative correlation), would have less variation in price than either one of the stocks alone.
Markowitz proposed that investors act in a rational manner and, given the choice, would opt for a similar portfolio with the same return as the one they have, but with less risk, or opt for a
portfolio with a higher return than the one they have but with the same risk. Further, for a given level of risk there is an optimal portfolio with the highest yield, and likewise for a given yield
there is an optimal portfolio with the lowest risk. An investor with a portfolio whose yield could be increased with no resultant increase in risk, or an investor with a portfolio whose risk could be
lowered with no resultant decrease in yield, are said to have inefficient portfolios. Figure 1-7 shows all of the available portfolios under a given study. If you hold portfolio C, you would be
better off with portfolio A, where you would have the same return with less risk, or portfolio B, where you would have more return with the same risk.
Reward 1.130 1.125
1.120 1.115 1.110 1.105
A C
1.100 1.095 Risk 1.090 0.290 0.295 0.300 0.305 0.310 0.315 0.320 0.325 0.330 Figure 1-7 Modern portfolio theory. In describing this, Markowitz described what is called the efficient frontier. This is
the set of portfolios that lie on the upper and left sides of the graph. These are portfolios whose yield can no longer be increased without increasing the risk and whose risk cannot be lowered
without lowering the yield. Portfolios lying on the efficient frontier are said to be efficient portfolios. (See Figure 1-8.)
Markowitz, H., Portfolio Selection—Efficient Diversification of Investments. Yale University Press, New Haven, Conn., 1959.
Reward 1.130 1.125 1.120 1.115 1.110 1.105 1.100 1.095 Risk 1.090 0.290 0.295 0.300 0.305 0.310 0.315 0.320 0.325 0.330 Figure 1-8 The efficient frontier Those portfolios lying high and off to the
right and low and to the left are generally not very well diversified among very many issues. Those portfolios lying in the middle of the efficient frontier are usually very well diversified. Which
portfolio a particular investor chooses is a function of the investor's risk aversion-Ms or her willingness to assume risk. In the Markowitz model any portfolio that lies upon the efficient frontier
is said to be a good portfolio choice, but where on the efficient frontier is a matter of personal preference (later on we'll see that there is an exact optimal spot on the efficient frontier for all
investors). The Markowitz model was originally introduced as applying to a portfolio of stocks that the investor would hold long. Therefore, the basic inputs were the expected returns on the stocks
(defined as the expected appreciation in share price plus any dividends), the expected variation in those returns, and the correlations of the different returns among the different stocks. If we were
to transport this concept to futures it would stand to reason (since futures don't pay any dividends) that we measure the expected price gains, variances, and correlations of the different futures.
The question arises, "If we are measuring the correlation of prices, what if we have two systems on the same market that are negatively correlated?" In other words, suppose we have systems A and B.
There is a perfect negative correlation between the two. When A is in a drawdown, B is in a drawup and vice versa. Isn't this really an ideal diversification? What we really want to measure then is
not the correlations of prices of the markets we're using. Rather, we want to measure the correlations of daily equity changes between the different market system. Yet this is still an
apples-and-oranges comparison. Say that two of the market systems we are going to examine the correlations on are both trading the same market, yet one of the systems has an optimal f corresponding
to I contract per every $2,000 in account equity and the other system has an optimal f corresponding to 1 contract per every $10,000 in account equity. To overcome this and incorporate the optimal fs
of the various market systems under consideration, as well as to account for fixed fractional trading, we convert the daily equity changes for a given market system into daily HPRs. The HPR in this
context is how much a particular market made or lost for a given day on a 1-contract basis relative to what the optimal f for that system is. Here is how this can be solved. Say the market system
with an optimal f of $2,000 made $100 on a given day. The HPR then for that market system for that day is 1.05. To find the daily HPR, then: (1.15) Daily HPR = (A/B)+1 where A = Dollars made or lost
that day. B = Optimal fin dollars. We begin by converting the daily dollar gains and losses for the market systems we are looking at into daily HPRs relative to the optimal fin dollars for a given
market system. In so doing, we make quantity irrelevant. In the example just cited, where your daily HPR is 1.05, you made 5% that day on that money. This is 5% regardless of whether you had on 1
contract or 1,000 contracts. Now you are ready to begin comparing different portfolios. The trick here is to compare every possible portfolio combination, from portfolios of 1 market system (for
every market system under consideration) to portfolios of N market systems. - 20 -
As an example, suppose you are looking at market systems A, B, and C. Every combination would be: A B C AB AC BC ABC But you do not stop there. For each combination you must figure each Percentage
allocation as well. To do so you will need to have a minimum Percentage increment. The following example, continued from the portfolio A, B, C example, illustrates this with a minimum portfolio
allocation of 10% (.10): A 100% B 100% C 100% AB 90% 10% 80% 20% 70% 30% 60% 40% 50% 50% 40% 60% 30% 70% 20% 80% 10% 90% AC 90% 10% 80% 20% 70% 30% 60% 40% 50% 50% 40% 60% 30% 70% 20% 80% 10% 90% B C
90% 10% 80% 20% 70% 30% 60% 40% 50% 50% 40% 60% 30% 70% 20% 80% 10% 90% ABC 80% 10% 10% 70% 20% 10% 70% 10% 20% 10% 30% 60% 10% 20% 70% 10% 10% 80% Now for each CPA we go through each day and compute
a net HPR for each day. The net HPR for a given day is the sum of each market system's HPR for that day times its percentage allocation. For example, suppose for systems A, B, and C we are looking at
percentage allocations of 10%, 50%, 40% respectively. Further, suppose that the individual HPRs for those market systems for that day are .9, 1.4, and 1.05 respectively. Then the net HPR for this day
is: Net HPR = (.9*.1)+(1.4*.5)+(1.05*.4) = .09+.7+.42 = 1.21 We must perform now two necessary tabulations. The first is that of the average daily net HPR for each CPA. This comprises the reward or Y
axis of the Markowitz model. The second necessary tabulation is that of the standard deviation of the daily net HPRs for a given CPA-specifically, the population standard deviation. This measure
corresponds to the risk or X axis of the Markowitz model. Modern portfolio theory is often called E-V Theory, corresponding to the other names given the two axes. The vertical axis is often called E,
for expected return, and the horizontal axis V, for variance in expected returns. From these first two tabulations we can find our efficient frontier. We have effectively incorporated various
markets, systems, and f fac-
tors, and we can now see quantitatively what our best CPAs are (i.e., which CPAs lie along the efficient frontier).
THE GEOMETRIC MEAN PORTFOLIO STRATEGY Which particular point on the efficient frontier you decide to be on (i.e., which particular efficient CPA) is a function of your own riskaversion preference, at
least according to the Markowitz model. However, there is an optimal point to be at on the efficient frontier, and finding this point is mathematically solvable. If you choose that CPA which shows
the highest geometric mean of the HPRs, you will arrive at the optimal CPA! We can estimate the geometric mean from the arithmetic mean HPR and the population standard deviation of the HPRs (both of
which are calculations we already have, as they are the X and Y axes for the Markowitz model!). Equations (1.16a) and (l.16b) give us the formula for the estimated geometric mean (EGM). This estimate
is very close (usually within four or five decimal places) to the actual geometric mean, and it is acceptable to use the estimated geometric mean and the actual geometric mean interchangeably.
(1.16a) EGM = (AHPR^2-SD^2)^(1/2) or (l.16b) EGM = (AHPR^2-V)^(1/2) where EGM = The estimated geometric mean. AHPR = The arithmetic average HPR, or the return coordinate of the portfolio. SD = The
standard deviation in HPRs, or the risk coordinate of the portfolio. V = The variance in HPRs, equal to SD^2. Both forms of Equation (1.16) are equivalent. The CPA with the highest geometric mean is
the CPA that will maximize the growth of the portfolio value over the long run; furthermore it will minimize the time required to reach a specified level of equity.
DAILY PROCEDURES FOR USING OPTIMAL PORTFOLIOS At this point, there may be some question as to how you implement this portfolio approach on a day-to-day basis. Again an example will be used to
illustrate. Suppose your optimal CPA calls for you to be in three different market systems. In this case, suppose the percentage allocations are 10%, 50%, and 40%. If you were looking at a $50,000
account, your account would be "subdivided" into three accounts of $5,000, $25,000, and $20,000 for each market system (A, B, and C) respectively. For each market system's subaccount balance you then
figure how many contracts you could trade. Say the f factors dictated the following: Market system A, 1 contract per $5,000 in account equity. Market system B, 1 contract per $2,500 in account
equity. Market system C, l contract per $2,000 in account equity. You would then be trading 1 contract for market system A ($5,000/$5,000), 10 contracts for market system B ($25,000/$2,500), and 10
contracts for market system C ($20,000/$2,000). Each day, as the total equity in the account changes, all subaccounts are recapitalized. What is meant here is, suppose this $50,000 account dropped to
$45,000 the next day. Since we recapitalize the subaccounts each day, we then have $4,500 for market system subaccount A, $22,500 for market system subaccount B, and $18,000 for market system
subaccount C, from which we would trade zero contracts the next day on market system A ($4,500 7 $5,000 = .9, or, since we always floor to the integer, 0), 9 contracts for market system B ($22,500/
$2,500), and 9 contracts for market system C ($18,000/$2,000). You always recapitalize the subaccounts each day regardless of whether there was a profit or a loss. Do not be confused. Subaccount, as
used here, is a mental construct. Another way of doing this that will give us the same answers and that is perhaps easier to understand is to divide a market system's optimal f amount by its
percentage allocation. This gives us a dollar amount that we then divide the entire account equity by to know how many contracts to trade. Since the account equity changes daily, we recapitalize this
daily to the new total account equity. In the example we have cited, - 21 -
market system A, at an f value of 1 contract per $5,000 in account equity and a percentage allocation of 10%, yields 1 contract per $50,000 in total account equity ($5,000/.10). Market system B, at
an f value of 1 contract per $2,500 in account equity and a percentage allocation of 50%, yields 1 contract per $5,000 in total account equity ($2,500/.50). Market system C, at an f value of 1
contract per $2,000 in account equity and a percentage allocation of 40%, yields 1 contract per $5,000 in total account equity ($2,000/.40). Thus, if we had $50,000 in total account equity, we would
trade 1 contract for market system A, 10 contracts for market system B, and 10 contracts for market system C. Tomorrow we would do the same thing. Say our total account equity got up to $59,000. In
this case, dividing $59,000 into $50,000 yields 1.18, which floored to the integer is 1, so we would trade 1 contract for market system A tomorrow. For market system B, we would trade 11 contracts
($59,000/$5,000 = 11.8, which floored to the integer = 11). For market system C we would also trade 11 contracts, since market system C also trades 1 contract for every $5,000 in total account
equity. Suppose we have a trade on from market system C yesterday and we are long 10 contracts. We do not need to go in and add another today to bring us up to 11 contracts. Rather the amounts we are
calculating using the equity as of the most recent close mark-to-market is for new positions only. So for tomorrow, since we have 10 contracts on, if we get stopped out of this trade (or exit it on a
profit target), we will be going 11 contracts on a new trade if one should occur. Determining our optimal portfolio using the daily HPRs means that we should go in and alter our positions on a
day-by-day rather than a trade-by-trade basis, but this really isn't necessary unless you are trading a longer-term system, and then it may not be beneficial to adjust your position size on a
day-byday basis due to increased transaction costs. In a pure sense, you should adjust your positions on a day-by-day basis. In real life, you are usually almost as well off to alter them on a
trade-by-trade basis, with little loss of accuracy. This matter of implementing the correct daily positions is not such a problem. Recall that in finding the optimal portfolio we used the daily HPRs
as input, We should therefore adjust our position size daily (if we could adjust each position at the price it closed at yesterday). In real life this becomes impractical, however, as transaction
costs begin to outweigh the benefits of adjusting our positions daily and may actually cost us more than the benefit of adjusting daily. We are usually better off adjusting only at the end of each
trade. The fact that the portfolio is temporarily out of balance after day 1 of a trade is a lesser price to pay than the cost of adjusting the portfolio daily. On the other hand, if we take a
position that we are going to hold for a year, we may want to adjust such a position daily rather than adjust it more than a year from now when we take another trade. Generally, though, on
longer-term systems such as this we are better off adjusting the position each week, say, rather than each day. The reasoning here again is that the loss in efficiency by having the portfolio
temporarily out of balance is less of a price to pay than the added transaction costs of a daily adjustment. You have to sit down and determine which is the lesser penalty for you to pay, based upon
your trading strategy (i.e., how long you are typically in a trade) as well as the transaction costs involved. How long a time period should you look at when calculating the optimal portfolios? Just
like the question, "How long a time period should you look at to determine the optimal f for a given market system?" there is no definitive answer here. Generally, the more back data you use, the
better should be your result (i.e., that the near optimal portfolios in the future will resemble what your study concluded were the near optimal portfolios). However, correlations do change, albeit
slowly. One of the problems with using too long a time period is that there will be a tendency to use what were yesterday's hot markets. For instance, if you ran this program in 1983 over 5 years of
back data you would most likely have one of the precious metals show very clearly as being a part of the optimal portfolio. However, the precious metals did very poorly for most trading systems for
quite a few years after the 1980-1981 markets. So you see there is a tradeoff between using too much past history and too little in the determination of the optimal portfolio of the future. Finally,
the question arises as to how often you should rerun this entire procedure of finding the optimal portfolio. Ideally you should run this on a continuous basis. However, rarely will the portfolio
composition change. Realistically you should probably run this about every 3 months. Even by running this program every 3 months there is still a
high likelihood that you will arrive at the same optimal portfolio composition, or one very similar to it, that you arrived at before.
ALLOCATIONS GREATER THAN 100% Thus far, we have been restricting the sum of the percentage allocations to 100%. It is quite possible that the sum of the percentage allocations for the portfolio that
would result in the greatest geometric growth would exceed 100%. Consider, for instance, two market systems, A and B, that are identical in every respect, except that there is a negative correlation
(R<0) between them. Assume that the optimal f, in dollars, for each of these market systems is $5,000. Suppose the optimal portfolio (based on highest geomean) proves to be that portfolio that
allocates 50% to each of the two market systems. This would mean that you should trade 1 contract for every $10,000 in equity for market system A and likewise for B. When there is negative
correlation, however, it can be shown that the optimal account growth is actually obtained by trading 1 contract for an amount less than $10,000 in equity for market system A and/or market system B.
In other words, when there is negative correlation, you can have the sum of percentage allocations exceed 100%. Further, it is possible, although not too likely, that the individual percentage
allocations to the market systems may exceed 100% individually. It is interesting to consider what happens when the correlation between two market systems approaches -1.00. When such an event occurs,
the amount to finance trades by for the market systems tends to become infinitesimal. This is so because the portfolio, the net result of the market systems, tends to never suffer a losing day (since
an amount lost by a market system on a given day is offset by the same amount being won by a different market system in the portfolio that day). Therefore, with diversification it is possible to have
the optimal portfolio allocate a smaller f factor in dollars to a given market system than trading that market system alone would. To accommodate this, you can divide the optimal f in dollars for
each market system by the number of market systems you are running. In our example, rather than inputting $5,000 as the optimal f for market system A, we would input $2,500 (dividing $5,000, the
optimal f, by 2, the number of market systems we are going to run), and likewise for market system B. Now when we use this procedure to determine the optimal geomean portfolio as being the one that
allocates 50% to A and 50% to B, it means that we should trade 1 contract for every $5,000 in equity for market system A ($2,500/.5) and likewise for B. You must also make sure to use cash as another
market system. This is non-interest-bearing cash, and it has an HPR of 1.00 for every day. Suppose in our previous example that the optimal growth is obtained at 50% in market system A and 40% in
market system B. In other words, to trade 1 contract for every $5,000 in equity for market system A and 1 contract for every $6,250 for B ($2,500/.4). If we were using cash as another market system,
this would be a possible combination (showing the optimal portfolio as having the remaining 10% in cash). If we were not using cash as another market system, this combination wouldn't be possible. If
your answer obtained by using this procedure does not include the non-interest-bearing cash as one of the output components, then you must raise the factor you are using to divide the optimal fs in
dollars you are using as input. Returning to our example, suppose we used non-interest-bearing cash with the two market systems A and B. Further suppose that our resultant optimal portfolio did not
include at least some percentage allocation to non-interest bearing cash. Instead, suppose that the optimal portfolio turned out to be 60% in market system A and 40% in market system B (or any other
percentage combination, so long as they added up to 100% as a sum for the percentage allocations for the two market systems) and 0% allocated to non-interest-bearing cash. This would mean that even
though we divided our optimal fs in dollars by two, that was not enough, We must instead divide them by a number higher than 2. So we will go back and divide our optimal fs in dollars by 3 or 4 until
we get an optimal portfolio which includes a certain percentage allocation to non-interest-bearing cash. This will be the optimal portfolio. Of course, in real life this does not mean that we must
actually allocate any of our trading capital to non-interest-bearing cash, Rather, the non-interest-bearing cash was used to derive the optimal amount of - 22 -
funds to allocate for 1 contract to each market system, when viewed in light of each market system's relationship to each other market system. Be aware that the percentage allocations of the
portfolio that would have resulted in the greatest geometric growth in the past can be in excess of 100% and usually are. This is accommodated for in this technique by dividing the optimal f in
dollars for each market system by a specific integer (which usually is the number of market systems) and including non-interest-bearing cash (i.e., a market system with an HPR of 1.00 every day) as
another market system. The correlations of the different market systems can have a profound effect on a portfolio. It is important that you realize that a portfolio can be greater than the sum of its
parts (if the correlations of its component parts are low enough). It is also possible that a portfolio may be less than the sum of its parts (if the correlations are too high). Consider again a
coin-toss game, a game where you win $2 on heads and lose $1 on tails. Such a game has a mathematical expectation (arithmetic) of fifty cents. The optimal f is .25, or bet $1 for every $4 in your
stake, and results in a geometric mean of 1.0607. Now consider a second game, one where the amount you can win on a coin toss is $.90 and the amount you can lose is $1.10. Such a game has a negative
mathematical expectation of -$.10, thus, there is no optimal f, and therefore no geometric mean either. Consider what happens when we play both games simultaneously. If the second game had a
correlation coefficient of 1.0 to the first-that is, if we won on both games on heads or both coins always came up either both heads or both tails, then the two possible net outcomes would be that we
win $2.90 on heads or lose $2.10 on tails. Such a game would have a mathematical expectation then of $.40, an optimal f of .14, and a geometric mean of 1.013. Obviously, this is an inferior approach
to just trading the positive mathematical expectation game. Now assume that the games are negatively correlated. That is, when the coin on the game with the positive mathematical expectation comes up
heads, we lose the $1.10 of the negative expectation game and vice versa. Thus, the net of the two games is a win of $.90 if the coins come up heads and a loss of -$.10 if the coins come up tails.
The mathematical expectation is still $.40, yet the optimal f is .44, which yields a geometric mean of 1.67. Recall that the geometric mean is the growth factor on your stake on average per play.
This means that on average in this game we would expect to make more than 10 times as much per play as in the outright positive mathematical expectation game. Yet this result is obtained by taking
that positive mathematical expectation game and combining it with a negative expectation game. The reason for the dramatic difference in results is due to the negative correlation between the two
market systems. Here is an example where the portfolio is greater than the sum of its parts. Yet it is also important to bear in mind that your drawdown, historically, would have been at least as
high as f percent in terms of percentage of equity retraced. In real life, you should expect that in the future it will be higher than this. This means that the combination of the two market systems,
even though they are negatively correlated, would have resulted in at least a 44% equity retracement. This is higher than the outright positive mathematical expectation which resulted in an optimal f
of .25, and therefore a minimum historical drawdown of at least 25% equity retracement. The moral is clear. Diversification, if done properly, is a technique that increases returns. It does not
necessarily reduce worst-case drawdowns. This is absolutely contrary to the popular notion. Diversification will buffer many of the little pullbacks from equity highs, but it does not reduce
worst-case drawdowns. Further, as we have seen with optimal f, drawdowns are far greater than most people imagine. Therefore, even if you are very well diversified, you must still expect substantial
equity retracements. However, let's go back and look at the results if the correlation coefficient between the two games were 0. In such a game, whatever the results of one toss were would have no
bearing on the results of the other toss. Thus, there are four possible outcomes: Game 1 Outcome Win Win Lose Lose
Amount $2.00 $2.00 -$1.00 -$1 .00
Game 2 Outcome Win Lose Win Lose
Amount $.90 -$1.10 $.90 -$1.10
Net Outcome Win Win Lose Lose
Amount $2.90 $.90 -S.10 -$2.10
The mathematical expectation is thus: ME = 2.9*.25+.9*.25-.1*.25-2.1*.25 = .725+.225-.025-.525 = .4 Once again, the mathematical expectation is $.40. The optimal f on this sequence is .26, or 1 bet
for every $8.08 in account equity (since the biggest loss here is -$2.10). Thus, the least the historical drawdown may have been was 26% (about the same as with the outright positive expectation
game). However, here is an example where there is buffering of the equity retracements. If we were simply playing the outright positive expectation game, the third sequence would have hit us for the
maximum drawdown. Since we are combining the two systems, the third sequence is buffered. But that is the only benefit. The resultant geometric mean is 1.025, less than half the rate of growth of
playing just the outright positive expectation game. We placed 4 bets in the same time as we would have placed 2 bets in the outright positive expectation game, but as you can see, still didn't make
as much money: 1.0607^2 = 1.12508449 1.025^ 4 = 1.103812891 Clearly, when you diversify you must use market systems that have as low a correlation in returns to each other as possible and preferably
a negative one. You must realize that your worst-case equity retracement will hardly be helped out by the diversification, although you may be able to buffer many of the other lesser equity
retracements. The most important thing to realize about diversification is that its greatest benefit is in what it can do to improve your geometric mean. The technique for finding the optimal
portfolio by looking at the net daily HPRs eliminates having to look at how many trades each market system accomplished in determining optimal portfolios. Using the technique allows you to look at
the geometric mean alone, without regard to the frequency of trading. Thus, the geometric mean becomes the single statistic of how beneficial a portfolio is. There is no benefit to be obtained by
diversifying into more market systems than that which results in the highest geometric mean. This may mean no diversification at all if a portfolio of one market system results in the highest
geometric mean. It may also mean combining market systems that you would never want to trade by themselves.
HOW THE DISPERSION OF OUTCOMES AFFECTS GEOMETRIC GROWTH Once we acknowledge the fact that whether we want to or not, whether consciously or not, we determine our quantities to trade in as a function
of the level of equity in an account, we can look at HPRs instead of dollar amounts for trades. In so doing, we can give money management specificity and exactitude. We can examine our
money-management strategies, draw rules, and make conclusions. One of the big conclusions, one that will no doubt spawn many others for us, regards the relationship of geometric growth and the
dispersion of outcomes (HPRs). This discussion will use a gambling illustration for the sake of simplicity. Consider two systems, System A, which wins 10% of the time and has a 28 to 1 win/loss
ratio, and System B, which wins 70% of the time and has a 1 to 1 win/loss ratio. Our mathematical expectation, per unit bet, for A is 1.9 and for B is .4. We can therefore say that for every unit bet
System A will return, on average, 4.75 times as much as System B. But let's examine this under fixed fractional trading. We can find our optimal fs here by dividing the mathematical expectations by
the win/loss ratios. This gives us an optimal f of .0678 for A and .4 for B. The geometric means for each system at their optimal f levels are then: A = 1.044176755 B = 1.0857629 System A B
% Wins Win:Loss ME f Geomean 10 28:1 1.9 .0678 1.0441768 70 1:1 .4 .4 1.0857629
As you can see, System B, although less than one quarter the mathematical expectation of A, makes almost twice as much per bet (returning 8.57629% of your entire stake per bet on average when you
reinvest at the optimal f levels) as does A (which returns 4.4176755% of your entire stake per bet on average when you reinvest at the optimal f levels). Now assuming a 50% drawdown on equity will
require a 100% gain to recoup, then 1.044177 to the power of X is equal to 2.0 at approximately X equals 16.5, or more than 16 trades to recoup from a 50% drawdown for System A. Contrast this to
System B, where 1.0857629 to the power of X is equal to 2.0 at approximately X equals 9, or 9 trades for System B to recoup from a 50% drawdown. - 23 -
What's going on here? Is this because System B has a higher percentage of winning trades? The reason B is outperforming A has to do with the dispersion of outcomes and its effect on the growth
function. Most people have the mistaken impression that the growth function, the TWR, is: (1.17) TWR = (1+R)^N where R = The interest rate per period (e.g., 7% = .07). N = The number of periods.
Since 1+R is the same thing as an HPR, we can say that most people have the mistaken impression that the growth function,6 the TWR, is: (1.18) TWR = HPR^N This function is only true when the return
(i.e., the HPR) is constant, which is not the case in trading. The real growth function in trading (or any event where the HPR is not constant) is the multiplicative product of the HPRs. Assume we
are trading coffee, our optimal f is 1 contract for every $21,000 in equity, and we have 2 trades, a loss of $210 and a gain of $210, for HPRs of .99 and 1.01 respectively. In this example our TWR
would be: TWR = 1.01*.99 = .9999 An insight can be gained by using the estimated geometric mean (EGM) for Equation (1.16a): (1.16a) EGM = (AHPR^2-SD^2)^(1/2) or (1.16b) EGM = (AHPR^2-V)^(1/2) Now we
take Equation (1.16a) or (1.16b) to the power of N to estimate the TWR. This will very closely approximate the "multiplicative" growth function, the actual TWR: (1.19a) Estimated TWR = ((AHPR^2-SD^2)
^(1/2))^N or (1.19b) Estimated TWR = ((AHPR^2-V)^(1/2))^N where N = The number of periods. AHPR = The arithmetic mean HPR. SD = The population standard deviation in HPRs. V = The population variance
in HPRs. The two equations in (1.19) are equivalent. The insight gained is that we can see here, mathematically, the tradeoff between an increase in the arithmetic average trade (the HPR) and the
variance in the HPRs, and hence the reason that the 70% 1:1 system did better than the 10% 28:1 system! Our goal should be to maximize the coefficient of this function, to maximize: (1.16b) EGM =
(AHPR^2-V)^(1/2) Expressed literally, our goal is "To maximize the square root of the quantity HPR squared minus the population variance in HPRs." The exponent of the estimated TWR, N, will take care
of itself. That is to say that increasing N is not a problem, as we can increase the number of markets we are following, can trade more short-term types of systems, and so on. However, these
statistical measures of dispersion, variance, and standard deviation (V and SD respectively), are difficult for most nonstatisticians to envision. What many people therefore use in lieu of these
measures is known as the mean absolute deviation (which we'll call M). Essentially, to find M you simply take the average absolute value of the difference of each data point to an average of the data
points. (1.20) M = ∑ABS(Xi-X[])/N In a bell-shaped distribution (as is almost always the case with the distribution of P&L's from a trading system) the mean absolute deviation equals about .8 of the
standard deviation (in a Normal Distribution, it is .7979). Therefore, we can say: 6
Many people mistakenly use the arithmetic average HPR in the equation for HPH^N. As is demonstrated here, this will not give the true TWR after N plays. What you must use is the geometric, rather
than the arithmetic, average HPR^N. This will give you the true TWR. If the standard deviation in HPRs is 0, then the arithmetic average HPR and the geometric average HPR are equivalent, and it
matters not which you use.
(1.21) M = .8*SD and (1.22) SD = 1.25*M We will denote the arithmetic average HPR with the variable A, and the geometric average HPR with the variable G. Using Equation (1.16b), we can express the
estimated geometric mean as: (1.16b) G = (A^2-V)^(1/2) From this equation, we can obtain: (1.23) G^2 = (A^2-V) Now substituting the standard deviation squared for the variance [as in (1.16a)]: (1.24)
G^2 = A^2-SD^2 From this equation we can isolate each variable, as well as isolating zero to obtain the fundamental relationships between the arithmetic mean, geometric mean, and dispersion,
expressed as SD ^ 2 here: (1.25) A^2-C^2-SD^2 = 0 (1.26) G^2 = A^2-SD^2 (1.27) SD^2 = A^2-G^2 (1.28) A^2 = G^2+SD^2 In these equations, the value SD^2 can also be written as V or as (1.25*M)^2. This
brings us to the point now where we can envision exactly what the relationships are. Notice that the last of these equations is the familiar Pythagorean Theorem: The hypotenuse of a right angle
triangle squared equals the sum of the squares of its sides! But here the hypotenuse is A, and we want to maximize one of the legs, G. In maximizing G, any increase in D (the dispersion leg, equal to
SD or V ^ (1/2) or 1.25*M) will require an increase in A to offset. When D equals zero, then A equals G, thus conforming to the misconstrued growth function TWR = (1+R)^N. Actually when D equals
zero, then A equals G per Equation (1.26). So, in terms of their relative effect on G, we can state that an increase in A ^ 2 is equal to a decrease of the same amount in (1.25*M)^2. (1.29) ∆A^2 = -A
((1.25*M)^2) To see this, consider when A goes from 1.1 to 1.2: A SD M G A^2 SD^2 = (1.25*M)^2 1.1 .1 .08 1.095445 1.21 .01 1.2 .4899 .39192 1.095445 1.44 .24 .23 .23
When A = 1.1, we are given an SD of .1. When A = 1.2, to get an equivalent G, SD must equal .4899 per Equation (1.27). Since M = .8*SD, then M = .3919. If we square the values and take the
difference, they are both equal to .23, as predicted by Equation (1.29). Consider the following: A SD M G A^2 SD^2 = (1.25*M)^2 1.1 .25 .2 1.071214 1.21 .0625 1.2 .5408 .4327 1.071214 1.44 .2925 .23
Notice that in the previous example, where we started with lower dispersion values (SD or M), how much proportionally greater an increase was required to yield the same G. Thus we can state that the
more you reduce your dispersion, the better, with each reduction providing greater and greater benefit. It is an exponential function, with a limit at the dispersion equal to zero, where G is then
equal to A. A trader who is trading on a fixed fractional basis wants to maximize G, not necessarily A. In maximizing G, the trader should realize that the standard deviation, SD, affects G in the
same proportion as does A, per the Pythagorean Theorem! Thus, when the trader reduces the standard deviation (SD) of his or her trades, it is equivalent to an equal increase in the arithmetic average
HPR (A), and vice versa!
THE FUNDAMENTAL EQUATION OF TRADING We can glean a lot more here than just how trimming the size of our losses improves our bottom line. We return now to equation (1.19a): (1.19a) Estimated TWR =
((AHPR^2-SD^2)^(1/2))^N We again replace AHPR with A, representing the arithmetic average HPR. Also, since (X^Y)^Z = X^(Y*Z), we can further simplify the exponents in the equation, thus obtaining: -
24 -
(1.19c) Estimated TWR = (A^2-SD^2)^(N/2) This last equation, the simplification for the estimated TWR, we call the fundamental equation for trading, since it describes how the different factors, A,
SD, and N affect our bottom line in trading. A few things are readily apparent. The first of these is that if A is less than or equal to 1, then regardless of the other two variables, SD and N, our
result can be no greater than 1. If A is less than 1, then as N approaches infinity, A approaches zero. This means that if A is less than or equal to 1 (mathematical expectation less than or equal to
zero, since mathematical expectation = A-1), we do not stand a chance at making profits. In fact, if A is less than 1, it is simply a matter of time (i.e., as N increases) until we go broke. Provided
that A is greater than 1, we can see that increasing N increases our total profits. For each increase of 1 trade, the coefficient is further multiplied by its square root. For instance, suppose your
system showed an arithmetic mean of 1.1, and a standard deviation of .25. Thus: Estimated TWR = (1.1^2-.25^2)^(N/2) = (1.21-.0625)^(N/2) = 1.1475^(N/2) Each time we can increase N by 1, we increase
our TWR by a factor equivalent to the square root of the coefficient. In the case of our example, where we have a coefficient of 1.1475, then 1.1475^(1/2) = 1.071214264. Thus every trade increase,
every 1-point increase in N, is the equivalent to multiplying our final stake by 1.071214264. Notice that this figure is the geometric mean. Each time a trade occurs, each time N is increased by 1,
the coefficient is multiplied by the geometric mean. Herein is the real benefit of diversification expressed mathematically in the fundamental equation of trading. Diversification lets you get more N
off in a given period of time. The other important point to note about the fundamental trading equation is that it shows that if you reduce your standard deviation more than you reduce your
arithmetic average HPR, you are better off. It stands to reason, therefore, that cutting your losses short, if possible, benefits you. But the equation demonstrates that at some point you no longer
benefit by cutting your losses short. That point is the point where you would be getting stopped out of too many trades with a small loss that later would have turned profitable, thus reducing your A
to a greater extent than your SD. Along these same lines, reducing big winning trades can help your program if it reduces your SD more than it reduces your A. In many cases, this can be accomplished
by incorporating options into your trading program. Having an option position that goes against your position in the underlying (either by buying long an option or writing an option) can possibly
help. For instance, if you are long a given stock (or commodity), buying a put option (or writing a call option) may reduce your SD on this net position more than it reduces your A. If you are
profitable on the underlying, you will be unprofitable on the option, but profitable overall, only to a lesser extent than had you not had the option position. Hence, you have reduced both your SD
and your A. If you are unprofitable on the underlying, you will have increased your A and decreased your SD. All told, you will tend to have reduced your SD to a greater extent than you have reduced
your A. Of course, transaction costs are a large consideration in such a strategy, and they must always be taken into account. Your program may be too short-term oriented to take advantage of such a
strategy, but it does point out the fact that different strategies, along with different trading rules, should be looked at relative to the fundamental trading equation. In doing so, we gain an
insight into how these factors will affect the bottom line, and what specifically we can work on to improve our method. Suppose, for instance, that our trading program was long-term enough that the
aforementioned strategy of buying a put in conjunction with a long position in the underlying was feasible and resulted in a greater estimated TWR. Such a position, a long position in the underlying
and a long put, is the equivalent to simply being outright long the call. Hence, we are better off simply to be long the call, as it will result
in considerably lower transaction costs'7 than being both long the underlying and long the put option. To demonstrate this, we'll use the extreme example of the stock indexes in 1987. Let's assume
that we can actually buy the underlying OEX index. The system we will use is a simple 20-day channel breakout. Each day we calculate the highest high and lowest low of the last 20 days. Then,
throughout the day if the market comes up and touches the high point, we enter long on a stop. If the system comes down and touches the low point, we go short on a stop. If the daily opens are
through the entry points, we enter on the open. The system is always in the market: Date 870106 870414 870507 870904 871001 871012 871221
Position L S L S L S L
Entry 24107 27654 29228 31347 32067 30281 24294
P&L 0 35.47 -15.74 21.19 -7.2 -17.86 59.87
Cumulative 0 35.47 19.73 40.92 33.72 15.86 75.73
Volatility .1516987 .2082573 .2182117 .1793583 .1 848783 .2076074 .3492674
If we were to determine the optimal f on this stream of trades, we would find its corresponding geometric mean, the growth factor on our stake per play, to be 1.12445. Now we will take the exact same
trades, only, using the Black-Scholes stock option pricing model from Chapter 5, we will convert the entry prices to theoretical option prices. The inputs into the pricing model are the historical
volatility determined on a 20-day basis (the calculation for historical volatility is also given in Chapter 5), a risk-free rate of 6%, and a 260.8875-day year (this is the average number of weekdays
in a year). Further, we will assume that we are buying options with exactly .5 of a year left till expiration (6 months) and that they are at-the-money. In other words, that there is a strike price
corresponding to the exact entry price. Buying long a call when the system goes long the underlying, and buying long a put when the system goes short the underlying, using the parameters of the
option pricing model mentioned, would have resulted in a trade stream as follows: Date 870106 870414 870414 870507 870507 870904 870904 871001 871001 871012 871012 871221 871221
Position Entry P&L L 9.623 0 F 35.47 25.846 L 15.428 0 F 8.792 -6.637 L 17.116 0 F 21.242 4.126 L 14.957 0 F 10.844 -4.113 L 15.797 0 F 9.374 -6.423 L 16.839 0 F 61.013 44.173 L 23 0
Cumulative 0 25.846 25.846 19.21 19.21 23.336 23.336 19.223 19.223 12.8 12.8 56.974 56.974
Underlying 24107 27654 27654 29228 29228 31347 31347 32067 32067 30281 30281 24294 24294
Action LONG CALL LONG PUT LONG CALL LONG PUT LONG CALL LONG PUT LONG CALL
If we were to determine the optimal f on this stream of trades, we would find its corresponding geometric mean, the growth factor on our stake per play, to be 1.2166, which compares to the geometric
mean at the optimal f for the underlying of 1.12445. This is an enormous difference. Since there are a total of 6 trades, we can raise each geometric mean to the power of 6 to determine the TWR on
our stake at the end of the 6 trades. This returns a TWR on the underlying of 2.02 versus a TWR on the options of 3.24. Subtracting 1 from each TWR translates these results to percentage gains on our
starting stake, or a 102% gain trading the underlying and a 224% gain making the same trades in the options. The options are clearly superior in this case, as the fundamental equation of trading
testifies. Trading long the options outright as in this example may not always be superior to being long the underlying instrument. This example is an extreme case, yet it does illuminate the fact
that trading strategies (as 7
There is another benefit here that is not readily apparent hut has enormous merit. That is that we know, in advance, what our worst-case loss is in advance. Considering how sensitive the optimal f
equation is to what the biggest loss in the future is, such a strategy can have us be much closer to the peak of the f curve in the future by allowing US to predetermine what our largest loss can he
with certainty. Second, the problem of a loss of 3 standard deviations or more having a much higher probability of occurrence than the Normal Distribution implies is eliminated. It is the gargantuan
losses in excess of 3 standard deviations that kill most traders. An options strategy such as this can totally eliminate such terminal losses. - 25 -
well as what option series to buy) should be looked at in light of the fundamental equation for trading in order to be judged properly. As you can see, the fundamental trading equation can be
utilized to dictate many changes in our trading. These changes may be in the way of tightening (or loosening) our stops, setting targets, and so on. These changes are the results of inefficiencies in
the way we are carrying out our trading as well as inefficiencies in our trading program or methodology. I hope you will now begin to see that the computer has been terribly misused by most traders.
Optimizing and searching for the systems and parameter values that made the most money over past data is, by and large a futile process. You only need something that will be marginally profitable in
the future. By correct money management you can get an awful lot out of a system that is only marginally profitable. In general, then, the degree of profitability is determined by the money
management you apply to the system more than by the system itself Therefore, you should build your systems (or trading techniques, for those opposed to mechanical systems) around how certain you can
be that they will be profitable (even if only marginally so) in the future. This is accomplished primarily by not restricting a system or technique's degrees of freedom. The second thing you should
do regarding building your system or technique is to bear the fundamental equation of trading in mind It will guide you in the right direction regarding inefficiencies in your system or technique,
and when it is used in conjunction with the principle of not restricting the degrees of freedom, you will have obtained a technique or system on which you can now employ the money-management
techniques. Using these moneymanagement techniques, whether empirical, as detailed in this chapter, or parametric (which we will delve into starting in Chapter 3), will determine the degree of
profitability of your technique or system.
Chapter 2 - Characteristics of Fixed Fractional Trading and Salutary Techniques We have seen that the optimal growth of an account is achieved through optimal f. This is true regardless of the
underlying vehicle. Whether we are trading futures, stocks, or options, or managing a group of traders, we achieve optimal growth at the optimal f, and we reach a specified goal in the shortest time.
We have also seen how to combine various market systems at their optimal f levels into an optimal portfolio from an empirical standpoint. That is, we have seen how to combine optimal f and portfolio
theory, not from a mathematical model standpoint, but from the standpoint of using the past data directly to determine the optimal quantities to trade in for the components of the optimal portfolio.
Certain important characteristics about fixed fractional trading still need to be mentioned. We now cover these characteristics.
OPTIMAL F FOR SMALL TRADERS JUST STARTING OUT How does a very small account, an account that is going to start out trading 1 contract, use the optimal f approach? One suggestion is that such an
account start out by trading 1 contract not for every optimal f amount in dollars (biggest loss/-f), but rather that the drawdown and margin must be considered in the initial phase. The amount of
funds allocated towards the first contract should be the greater of the optimal f amount in dollars or the margin plus the maximum historic drawdown (on a 1-unit basis): (2.01) A = MAX {(Biggest Loss
/-f), (Margin+ABS(Drawdown))} where A = The dollar amount to allocate to the first contract. f = The optimal f (0 to 1). Margin = The initial speculative margin for the given contract. Drawdown = The
historic maximum drawdown. MAX{} = The maximum value of the bracketed values. ABS() = The absolute value function. With this procedure an account can experience the maximum drawdown again and still
have enough funds to cover the initial margin on another trade. Although we cannot expect the worst-case drawdown in the future not to exceed the worst-case drawdown historically, it is rather
unlikely that we will start trading right at the beginning of a new historic drawdown. A trader utilizing this idea will then subtract the amount in Equation (2.01) from his or her equity each day.
With the remainder, he or she will then divide by (Biggest Loss/-f). The answer obtained will be rounded down to the integer, and 1 will be added. The result is how many contracts to trade. An
example may help clarify. Suppose we have a system where the optimal f is .4, the biggest historical loss is -$3,000, the maximum drawdown was -$6,000, and the margin is $2,500. Employing Equation
(2.01) then: A = MAX{( -$3,000/-.4), ($2,500+ABS( -$6,000))} = MAX(($7,500), ($2,500+$6,000)) = MAX($7,500, $8,500) = $8,500 We would thus allocate $8,500 for the first contract. Now suppose we are
dealing with $22,500 in account equity. We therefore subtract this first contract allocation from the equity: $22,500-$8,500 = $14,000 We then divide this amount by the optimal fin dollars: $14,000/
$7,500 = 1.867 Then we take this result down to the integer: INT( 1.867) = 1 and add 1 to the result (the 1 contract represented by the $8,500 we have subtracted from our equity): 1+1 = 2 We
therefore would trade 2 contracts. If we were just trading at the optimal f level of 1 contract for every $7,500 in account equity, we - 26 -
would have traded 3 contracts ($22,500/$7,500). As you can see, this technique can be utilized no matter of how large an account's equity is (yet the larger the equity the closer the two answers will
be). Further, the larger the equity, the less likely it is that we will eventually experience a drawdown that will have us eventually trading only 1 contract. For smaller accounts, or for accounts
just starting out, this is a good idea to employ.
THRESHOLD TO GEOMETRIC Here is another good idea for accounts just starting out, one that may not be possible if you are employing the technique just mentioned. This technique makes use of another
by-product calculation of optimal f called the threshold to geometric. The by-products of the optimal f calculation include calculations, such as the TWR, the geometric mean, and so on, that were
derived in obtaining the optimal f, and that tell us something about the system. The threshold to the geometric is another of these by-product calculations. Essentially, the threshold to geometric
tells us at what point we should switch over to fixed fractional trading, assuming we are starting out constant-contract trading. Refer back to the example of a coin toss where we win $2 if the toss
comes up heads and we lose $1 if the toss comes up tails. We know that our optimal f is .25, or to make 1 bet for every $4 we have in account equity. If we are starting out trading on a
constant-contract basis, we know we will average $.50 per unit per play. However, if we start trading on a fixed fractional basis, we can expect to make the geometric average trade of $.2428 per unit
per play. Assume we start out with an initial stake of $4, and therefore we are making 1 bet per play. Eventually, when we get to $8, the optimal f would have us step up to making 2 bets per play.
However, 2 bets times the geometric average trade of $.2428 is $.4856. Wouldn't we be better off sticking with 1 bet at the equity level of $8, whereby our expectation per play would still be $.50?
The answer is, "Yes." The reason that the optimal f is figured on the basis of contracts that are infinitely divisible, which may not be the case in real life. We can find that point where we should
move up to trading two contracts by the formula for the threshold to the geometric, T: (2.02) T = AAT/GAT*Biggest Loss/-f where T = The threshold to the geometric. AAT = The arithmetic average trade.
GAT s The geometric average trade, f = The optimal f (0 to 1). In our example of the 2-to-l coin toss: T = .50/.2428*-1/-.25 = 8.24 Therefore, we are better off switching up to trading 2 contracts
when our equity gets to $8.24 rather than $8.00. Figure 2-1 shows the threshold to the geometric for a game with a 50% chance of winning $2 and a 50% chance of losing $1.
Threshold in $ 120 100 80 60 40
Optimal f is .25 where threshold is $8.24
0,5 0,10 0,15 0,20 0,25 0,30 0,35 0,40 0,45 0,50 0,55 f values
Figure 2-1 Threshold to the geometric for 2:1 coin toss. Notice that the trough of the threshold to the geometric curve occurs at the optimal f. This means that since the threshold to the geometric
is the optimal level of equity to go to trading 2 units, you go to 2 units at the lowest level of equity, optimally, when incorporating the threshold to the geometric at the optimal f.
Now the question is, "Can we use a similar approach to know when to go from 2 cars to 3 cars?" Also, 'Why can't the unit size be 100 cars starting out, assuming you are starting out with a large
account, rather than simply a small account starting out with 1 car?" To answer the second question first, it is valid to use this technique when starting out with a unit size greater than 1.
However, it is valid only if you do not trim back units on the downside before switching into the geometric mode. The reason is that before you switch into the geometric mode you are assumed to be
trading in a constant-unit size. Assume you start out with a stake of 400 units in our 2-to-l cointoss game. Your optimal fin dollars is to trade 1 contract (make 1 bet) for every $4 in equity.
Therefore, you will start out trading 100 contracts (making 100 bets) on the first trade. Your threshold to the geometric is at $8.24, and therefore you would start trading 101 contracts at an equity
level of $404.24. You can convert your threshold to the geometric, which is computed on the basis of advancing from 1 contract to 2, as: (2.03) Converted T = EQ+T-(Biggest Loss/-f) where EQ = The
starting account equity level. T = The threshold to the geometric for going from 1 car to 2. f = The optimal f (0 to 1). Therefore, since your starting account equity is $400, your T is $8.24, your
biggest loss -$1, and your f is .25: Converted T = 400+8.24-(-1/-.25) = 400+8.24-4 = 404.24 Thus, you would progress to trading 101 contracts (making 101 bets) if and when your account equity reached
$404.24. We will assume you are trading in a constant-contract mode until your account equity reaches $404.24, at which point you will begin the geometric mode. Therefore, until Your account equity
reaches $404.24, you will trade 100 contracts on the next trade regardless of the remaining equity in your account. If, after you cross the geometric threshold (that is, after your account equity
hits S404.24), you suffer a loss and your equity drops below $404.24, you will go back to trading on a constant 100-contract basis if and until you cross the geometric threshold again. This inability
to trim back contracts on the downside when you are below the geometric threshold is the drawback to using this procedure when you are at an equity level of trading more than 2 contacts. If you are
only trading 1 contract, the geometric threshold is a very valid technique for determining at what equity level to start trading 2 contracts (since you cannot trim back any further than 1 contract
should you experience an equity decline). However, it is not a valid technique for advancing from 2 contracts to 3, because the technique is predicated upon the fact that you are currently trading on
a constant-contract basis. That is, if you are trading 2 contracts, unless you are willing not to trim back to 1 contract if you suffer an equity decline, the technique is not valid, and likewise if
you start out trading 100 contracts. You could do just that (not trim back the number of contracts you are presently trading if you experience an equity decline), in which case the threshold to the
geometric, or its converted version in Equation (2.03), would be the valid equity point to add the next contract. The problem with doing this (not trimming back on the downside) is that you will make
less (your TWR will be less) in an asymptotic sense. You will not make as much as if you simply traded the full optimal f. Further, your drawdowns will be greater and your risk of ruin higher.
Therefore, the threshold to the geometric is only beneficial if you are starting out in the lowest denomination of bet size (1 contract) and advancing to 2, and it is only a benefit if the arithmetic
average trade is more than twice the size of the geometric average trade. Furthermore, it is beneficial to use only when you cannot trade fractional units.
ONE COMBINED BANKROLL VERSUS SEPARATE BANKROLLS Some very important points regarding fixed fractional trading must be covered before we discuss the parametric techniques. First, when trading more
than one market system simultaneously, you will generally do better in an asymptotic sense using only one combined bankroll from which to figure your contract sizes, rather than separate bankrolls
for each. - 27 -
It is for this reason that we "recapitalize" the subaccounts on a daily basis as the equity in an account fluctuates. What follows is a run of two similar systems, System A and System B. Both have a
50% chance of winning, and both have a payoff ratio of 2:1. Therefore, the optimal f dictates that we bet $1 for every S4 units in equity. The first run we see shows these two systems with positive
correlation to each other. We start out with $100, splitting it into 2 subaccount units of $50 each. After a trade is registered, it only affects the cumulative column for that system, as each system
has its own separate bankroll. The size of each system's separate bankroll is used to determine bet size on the subsequent play: System A Trade P&L
System B Cumulative Trade P&L 50.00 2 25.00 75.00 2 25.00 -1 -18.75 56.25 -1 -18.75 2 28.13 84 .38 2 28.13 -1 -21.09 63.28 -1 -21.09 2 31.64 94 .92 2 31.64 -1 -23.73 71.19 -1 -23.73 -50.00 Net Profit
21.19140 21.19140 Total net profit of the two banks = $42.38
Cumulative 50.00 75.00 56.25 84.38 63.28 94 .92 71.19 -50.0
Now we will see the same thing, only this time we will operate from a combined bank starting at 100 units. Rather than betting $1 for every $4 in the combined stake for each system, we will bet $1
for every $8 in the combined bank. Each trade for either system affects the combined bank, and it is the combined bank that is used to determine bet size on the subsequent play: System A Trade P&L
System B Trade P&L
Combined Bank 100.00 2 25.00 2 25.00 150.00 -1 -18.75 -1 -18.75 112.50 2 28.13 2 28.13 168.75 -1 -21.09 -1 -21.09 126.56 2 31.64 2 31.64 189.84 -1 -23.73 -1 -23.73 142.38 -100.00 Total net profit of
the combined bank = $42.38
Notice that using either a combined bank or a separate bank in the preceding example shows a profit on the $100 of $42.38. Yet what was shown is the case where there is positive correlation between
the two systems. Now we will look at negative correlation between the same two systems, first with both systems operating from their own separate bankrolls: System A Trade P&L
System B Cumulative Trade P&L 50.00 2 25.00 75.00 -1 -12.50 -1 -18.75 56.25 2 18.75 2 28.13 84.38 -1 -14.06 -1 -21.09 63.28 2 21.09 2 31.64 94.92 -1 -15.82 -1 -23.73 71.19 2 23.73 -50.00 Net Profit
21.19140 Total net profit of the two banks =
Cumulative 50.00 37.50 56.25 42.19 63.28 47.46 71.19 -50.00 21.19140 $42.38
As you can see, when operating from separate bankrolls, both systems net out making the same amount regardless of correlation. However, with the combined bank: System A Trade P&L
System B Trade P&L
Combined Bank 100.00 2 25.00 -1 -12.50 112.50 -1 -14.06 2 28.12 126.56 2 31.64 -1 -15.82 142.38 -1 -17.80 2 35.59 160.18 2 40.05 -1 -20.02 180.20 -1 -22.53 2 45.00 202.73 -100.00 Total net profit of
the combined bank = $102.73
With the combined bank, the results are dramatically improved. When using fixed fractional trading you are best off operating from a single combined bank.
THREAT EACH PLAY AS IF INFINITELY REPEATED The next axiom of fixed fractional trading regards maximizing the current event as though it were to be performed an infinite number of times in the future.
We have determined that for an independent trials process, you should always bet that f which is optimal (and constant) and likewise when there is dependency involved, only with dependency f is not
constant. Suppose we have a system where there is dependency in like begetting like, and suppose that this is one of those rare gems where the confidence limit is at an acceptable level for us, that
we feel we can safely assume that there really is dependency here. For the sake of simplicity we will use a payoff ratio of 2:1. Our system has shown that, historically, if the last play was a win,
then the next play has a 55% chance of being a tin. If the last play was a loss, our system has a 45% chance of the next play being a loss. Thus, if the last play was a win, then from the Kelly
formula, Equation (1.10), for finding the optimal f (since the payoff ratio is Bernoulli distributed): (1.10) f = ((2 +1)*.55-1)/2 = (3*.55-1)/2 = .65/2 = .325 After a losing play, our optimal f is:
f = ((2+ l)*.45-l)/2 = (3*.45- l)/2 = .35/2 = .175 Now dividing our biggest losses (-1) by these negative optimal fs dictates that we make 1 bet for every 3.076923077 units in our stake after a win,
and make 1 bet for every 5.714285714 units in our stake after a loss. In so doing we will maximize the growth over the long run. Notice that we treat each individual play as though it were to be
performed an infinite number of times. Notice in this example that betting after both the wins and the losses still has a positive mathematical expectation individually. What if, after a loss, the
probability of a win was .3? In such a case, the mathematical expectation is negative, hence there is no optimal f and as a result you shouldn't take this play: (1.03) ME = (.3*2)+ (.7*-1) = .6-.7 =
-.1 In such circumstances, you would bet the optimal amount only after a win, and you would not bet after a loss. If there is dependency present, you must segregate the trades of the market system
based upon the dependency and treat the segregated trades as separate market systems. The same principle, namely that asymptotic growth is maximized if each play is considered to be performed an
infinite number of times into the future, also applies to simultaneous wagering (or trading a portfolio). Consider two betting systems, A and B. Both have a 2:1 payoff ratio, and both win 50% of the
time. We will assume that the correlation coefficient between the two systems is 0, but that is not relevant to the point being illuminated here. The optimal fs for both systems (if they were being
traded alone, rather than simultaneously) are .25, or to make 1 bet for every 4 units in equity. The optimal fs for trading both systems simultaneously are .23, or 1 bet for every 4.347826087 units
in account equity.1 System B only trades two-thirds of the time, so some trades will be done when the two systems are not trading simultaneously. This first sequence is demonstrated with a starting
combined bank of 1,000 units, and each bet for each system is performed with an optimal f of 1 bet per every 4.347826087 units: A -1 2 -1
B -230.00 354.20 -217.83
-1 2
-177.10 435.67
Combined Bank 1,000.00 770.00 947.10 1,164.93
The method We are using here to arrive at these optimal bet sizes is described in Chapters 6 and 7. We are, in effect, using 3 market systems, Systems A and B as described here, both with an
arithmetic HPR of 1.125 and a stand and deviation in HPRs of .375, and null cash, with an HPR of 1.0 and a standard deviation of 0. The geometric average is thus maximized at approximately E = .23,
where the weightings for A and B both are .92. Thus, the optimal fs for both A and B are transformed to 4.347826. Using such factors will maximize growth in this game. - 28 -
A 2 -1 2
B 535.87 -391.18 422.48
-1 2
-391.18 422.48
Combined Bank 1,700.80 918.43 1,763.39
Next we see the same exact thing, the only difference being that when A is betting alone (i.e., when B does not have a bet at the same time as A), we make 1 bet for every 4 units in the combined bank
for System A, since that is the optimal f on the single, individual play. On the plays where the bets are simultaneous, we are still betting 1 unit for every 4.347826087 units in account equity for
both A and B. Notice that in so doing we are taking each bet, whether it is individual or simultaneous, and applying that optimal f which would maximize the play as though it were to be performed an
infinite number of times in the future. A -1 2 -1 2 -1 2
B -250.00 345.00 -212.17 567.34 -391.46 422.78
-1 2
-172.50 424.35
-1 2
-391.46 422.78
Combined Bank 1,000.00 750.00 922.50 1,134.67 1,702.01 919.09 1,764.65
As can be seen, there is a slight gain to be obtained by doing this, and the more trades that elapse, the greater the gain. The same principle applies to trading a portfolio where not all components
of the portfolio are in the market all the time. You should trade at the optimal levels for the combination of components (or single component) that results in the optimal growth as though that
combination of components (or single component) were to be traded an infinite number of times in the future.
EFFICIENCY LOSS IN SIMULTANEOUS WAGERING OR PORTFOLIO TRADING Let's again return to our 2:1 coin-toss game. Let's again assume that we are going to play two of these games, which we'll call System A
and System B, simultaneously and that there is zero correlation between the outcomes of the two games. We can determine our optimal fs for such a case as betting 1 unit for every 4.347826 in account
equity when the games are played simultaneously. When starting with a bank of 100 units, notice that we finish with a bank of 156.86 units: System A System B Trade P&L Trade P&L Optimal f is 1 unit
for every 4.347826 in equity: -1 -23.00 -1 -23.00 2 24.84 -1 -12.42 -1 -15.28 2 30.55 2 37.58 2 37.58 System A System B Trade P&L Trade P&L Optimal f is 1 unit for every 8.00 in equity: -1 -12.50 -1
-12.50 2 18.75 2 18.75 -1 -14.06 -1 -14.06 2 21.09 2 21.09
Bank 100.00 54.00 66.42 81.70 156.66 Bank 100.00 75.00 112.50 84.38 126.56
Now let's consider System C. This would be the same as Systems A and B, only we're going to play this game alone, without another game going simultaneously. We're also going to play it for 8 plays-as
opposed to the previous endeavor, where we played 2 games for 4 simultaneous plays. Now our optimal f is to bet 1 unit for every 4 units in equity. What we have is the same 8 outcomes as before, but
a different, better end result: System C Trade P&L Optimal f is 1 unit f or every 4.00 in equity: -1 -25.00 2 37.50 -1 -28.13 2 42.19 2 63.28 2 94.92 -1 -71.19 -1 -53.39
Bank 100.00 75.00 112.50 84.38 126.56 189.84 284.77 213.57 160.18
The end result here is better not because the optimal fs differ slightly (both are at their respective optimal levels), but because there is a small efficiency loss involved with simultaneous
wagering. This inefficiency is the result of not being able to recapitalize your account after every single wager as you could betting only 1 market system. In the si-
multaneous 2-bet case, you can only recapitalize 3 times, whereas in the single B-bet case you recapitalize 7 times. Hence, the efficiency loss in simultaneous wagering (or in trading a portfolio of
market systems). We just witnessed the case where the simultaneous bets were not correlated. Let's look at what happens when we deal with positive (+1.00) correlation: Notice that after 4
simultaneous plays where the correlation between the market systems employed is+1.00, the result is a gain of 126.56 on a starting stake of 100 units. This equates to a TWR of 1.2656, or a geometric
mean, a growth factor per play (even though these are combined plays) of 1.2656^(1/4) = 1.06066. Now refer back to the single-bet case. Notice here that after 4 plays, the outcome is 126.56, again on
a starting stake of 100 units. Thus, the geometric mean of 1.06066. This demonstrates that the rate of growth is the same when trading at the optimal fractions for perfectly correlated markets. As
soon as the correlation coefficient comes down below+1.00, the rate of growth increases. Thus, we can state that when combining market systems, your rate of growth will never be any less than with
the single-bet case, no matter of how high the correlations are, provided that the market system being added has a positive arithmetic mathematical expectation. Recall the first example in this
section, where there were 2 market systems that had a zero correlation coefficient between them. This market system made 156.86 on 100 units after 4 plays, for a geometric mean of (156.86/100)^(1/4)
= 1.119. Let's now look at a case where the correlation coefficients are -1.00. Since there is never a losing play under the following scenario, the optimal amount to bet is an infinitely high amount
(in other words, bet 1 unit for every infinitely small amount of account equity). But, rather than getting that greedy, we'll just make 1 bet for every 4 units in our stake so that we can make the
illustration here: System A System B Trade P&L Trade P&L Bank Optimal f is 1 unit for every 0.00 in equity (shown is 1 for every 4): 100.00 -1 -12.50 2 25.00 112.50 2 28.13 -1 -14.06 126.56 -1 -15.82
2 31.64 142.38 2 35.60 -1 -17.80 160.18
There are two main points to glean from this section. The first is that there is a small efficiency loss with simultaneous betting or portfolio trading, a loss caused by the inability to recapitalize
after every individual play. The second point is that combining market systems, provided they have a positive mathematical expectation, and even if they have perfect positive correlation, never
decreases your total growth per time period. However, as you continue to add more and more market systems, the efficiency loss becomes considerably greater. If you have, say, 10 market systems and
they all suffer a loss simultaneously, that loss could be terminal to the account, since you have not been able to trim back size for each loss as you would have had the trades occurred sequentially.
Therefore, we can say that there is a gain from adding each new market system to the portfolio provided that the market system has a correlation coefficient less than 1 and a positive mathematical
expectation, or a negative expectation but a low enough correlation to the other components in the portfolio to more than compensate for the negative expectation. There is a marginally decreasing
benefit to the geometric mean for each market system added. That is, each new market system benefits the geometric mean to a lesser and lesser degree. Further, as you add each new market system,
there is a greater and greater efficiency loss caused as a result of simultaneous rather than sequential outcomes. At some point, to add another market system will do more harm then good.
TIME REQUIRED TO REACH A SPECIFIED GOAL AND THE TROUBLE WITH FRACTIONAL F Suppose we are given the arithmetic average HPR and the geometric average HPR for a given system. We can determine the
standard deviation in HPRs from the formula for estimated geometric mean: (1.19a) EGM = (AHPR^2-SD^2)^(1/2) where AHPR = The arithmetic mean HPR. - 29 -
SD = The population standard deviation in HPRs. Therefore, we can estimate the standard deviation, SD, as: (2.04) SD^2 = AHPR^2-EGM^2 Returning to our 2:1 coin-toss game, we have a mathematical
expectation of $.50, and an optimal f of betting $1 for every $4 in equity, which yields a geometric mean of 1.06066. We can use Equation (2.05) to determine our arithmetic average HPR: (2.05) AHPR =
l+(ME/f$) where AHPR = The arithmetic average HPR. ME = The arithmetic mathematical expectation in units. f$ = The biggest loss/-f. f = The optimal f (0 to 1). Thus, we would have an arithmetic
average HPR of: AHPR = 1+(.5/( -1/ -.25)) = 1+(.5/4) = 1+.125 = 1.125 Now, since we have our AHPR and our ECM, we can employ equation (2.04) to determine the estimated standard deviation in the HPRs:
(2.04) SD^2 = AHPR^2-EGM^2 = 1.125^2-1.06066^2 = 1.265625-1.124999636 = .140625364 Thus SD^2, which is the variance in HPRs, is .140625364. Taking the Square root of this yields a standard deviation
in these HPRs of .140625364^(1/2) = .3750004853. You should note that this is the estimated standard deviation because it uses the estimated geometric mean as input. It is probably not completely
exact, but it is close enough for our purposes. However, suppose we want to convert these values for the standard deviation (or variance), arithmetic, and geometric mean HPRs to reflect trading at
the fractional f. These conversions are now given: (2.06) FAHPR = (AHPR-1)*FRAC+1 (2.07) FSD = SD*FRAC (2.08) FGHPR = (FAHPR^2-FSD^2)^(1/2) where FRAC = The fraction of optimal f we are solving for.
AHPR = The arithmetic average HPR at the optimal f. SD = The standard deviation in HPRs at the optimal f. FAHPR = The arithmetic average HPR at the fractional f. FSD = The standard deviation in HPRs
at the fractional f FGHPR = The geometric average HPR at the fractional f. For example, suppose we want to see what values we would have for FAHPR, FGHPR, and FSD at half the optimal f (FRAC = .5) in
our 2:1 coin-toss game. Here, we know our AHPR is 1.125 and our SD is .3750004853. Thus: (2.06) FAHPR = (AHPR-1)*FRAC+1 = (1.125- 1)*.5+1 = .125*.5+1 = .0625+1 = 1.0625 (2.07) FSD = SD*FRAC =
,3750004853*.5 = .1875002427 (2.08) FGHPR = (FAHPR^2-FSD^2)^(1/2) = (1.0625^2-.1875002427^2)^(1/2) = (1.12890625-.03515634101)^(1/2) = 1.093749909^(1/2) = 1.04582499 Thus, for an optimal f of .25, or
making 1 bet for every $4 in equity, we have values of 1.125, 1.06066, and .3750004853 for the arithmetic average, geometric average, and standard deviation of HPRs respectively. Now we have solved
for a fractional (.5) f of .125 or making 1 bet for every $8 in our stake, yielding values of 1.0625, 1.04582499, and
.1875002427 for the arithmetic average, geometric average, and standard deviation of HPRs respectively. We can now take a look at what happens when we practice a fractional f strategy. We have
already determined that under fractional f we will make geometrically less money than under optimal f. Further, we have determined that the drawdowns and variance in returns will be less with
fractional f. What about time required to reach a specific goal? We can quantify the expected number of trades required to reach a specific goal. This is not the same thing as the expected time
required to reach a specific goal, but since our measurement is in trades we will use the two notions of time and trades elapsed interchangeably here: (2.09) N = ln(Goal)/ln(Geometric Mean) where N =
The expected number of trades to reach a specific goal. Goal = The goal in terms of a multiple on our starting stake, a TWR. ln() = The natural logarithm function. Returning to our 2:1 coin-toss
example. At optimal f we have a geometric mean of 1.06066, and at half f this is 1.04582499. Now let's calculate the expected number of trades required to double our stake (goal = 2). At full f: N =
ln(2)/ln( 1.06066) = .6931471/.05889134 = 11.76993 Thus, at the full f amount in this 2:1 coin-toss game, we anticipate it will take us 11.76993 plays (trades) to double our stake. Now, at the half f
amount: N = ln(2)/ln(1.04582499) = .6931471/.04480602 = 15.46996 Thus, at the half f amount, we anticipate it will take us 15.46996 trades to double our stake. In other words, trading half f in this
case will take us 31.44% longer to reach our goal. Well, that doesn't sound too bad. By being more patient, allowing 31.44% longer to reach our goal, we eliminate our drawdown by half and our
variance in the trades by half. Half f is a seemingly attractive way to go. The smaller the fraction of optimal f that you use, the smoother the equity curve, and hence the less time you can expect
to be in the worst-case drawdown. Now, let's look at it in another light. Suppose you open two accounts, one to trade the full f and one to trade the half f. After 12 plays, your full f account will
have more than doubled to 2.02728259 (1.06066^12) times your starting stake. After 12 trades your half f account will have grown to 1.712017427 (1.04582499^12) times your starting stake. This half f
account will double at 16 trades to a multiple of 2.048067384 (1.04582499^16) times your starting stake. So, by waiting about one-third longer, you have achieved the same goal as with full optimal f,
only with half the commotion. However, by trade 16 the full f account is now at a multiple of 2.565777865 (1.06066^16) times your starting stake. Full f will continue to pull out and away. By trade
100, your half f account should be at a multiple of 88.28796546 times your starting stake, but the full f will be at a multiple of 361.093016! So anyone who claims that the only thing you sacrifice
with trading at a fractional versus full f is time required to reach a specific goal is completely correct. Yet time is what it's all about. We can put our money in Treasury Bills and they will reach
a specific goal in a certain time with an absolute minimum of drawdown and variance! Time truly is of the essence.
COMPARING TRADING SYSTEMS We have seen that two trading systems can be compared on the basis of their geometric means at their respective optimal fs. Further, we can compare systems based on how high
their optimal fs themselves are, with the higher optimal f being the riskier system. This is because the least the drawdown may have been is at least an f percent equity retracement. So, there are
two basic measures for comparing systems, the geometric means at the optimal fs, with the higher geometric mean being the superior system, and the optimal fs themselves, with the lower optimal f
being the superior system. Thus, rather than having a single, onedimensional measure of system performance, we see that performance must be measured on a two-dimensional plane, one axis being the
geometric mean, the other being the value for f itself. The higher the geometric mean at the optimal f, the better the system, Also, the lower the optimal f, the better the system. - 30 -
Geometric mean does not imply anything regarding drawdown. That is, a higher geometric mean does not mean a higher (or lower) drawdown. The geometric mean only pertains to return. The optimal f is
the measure of minimum expected historical drawdown as a percentage of equity retracement. A higher optimal f does not mean a higher (or lower) return. We can also use these benchmarks to compare a
given system at a fractional f value and another given system at its full optimal f value. Therefore, when looking at systems, you should look at them in terms of how high their geometric means are
and what their optimal fs are. For example, suppose we have System A, which has a 1.05 geometric mean and an optimal f of .8. Also, we have System B, which has a geometric mean of 1.025 and an
optimal f of .4. System A at the half f level will have the same minimum historical worst-case equity retracement (drawdown) of 40%, just as System B's at full f, but System A's geometric mean at
half f will still be higher than System B's at the full f amount. Therefore, System A is superior to System B. "Wait a minute," you say, "I thought the only thing that mattered was that we had a
geometric mean greater than 1, that the system need be only marginally profitable, that we can make all the money we want through money management!" That's still true. However, the rate at which you
will make the money is still a function of the geometric mean at the f level you are employing. The expected variability will be a function of how high the f you are using is. So, although it's true
that you must have a system with a geometric mean at the optimal f that is greater than 1 (i.e., a positive mathematical expectation) and that you can still make virtually an unlimited amount with
such a system after enough trades, the rate of growth (the number of trades required to reach a specific goal) is dependent upon the geometric mean at the f value employed. The variability en route
to that goal is also a function of the f value employed. Yet these considerations, the degree of the geometric mean and the f employed, are secondary to the fact that you must have a positive
mathematical expectation, although they are useful in comparing two systems or techniques that have positive mathematical expectations and an equal confidence of their working in the future.
TOO MUCH SENSIVITY TO THE BIGGEST LOSS A recurring criticism with the entire approach of optimal f is that it is too dependent on the biggest losing trade. This seems to be rather disturbing to many
traders. They argue that the amount of contracts you put on today should not be so much a function of a single bad trade in the past. Numerous different algorithms have been worked up by people to
alleviate this apparent oversensitivity to the largest loss. Many of these algorithms work by adjusting the largest loss upward or downward to make the largest loss be a function of the current
volatility in the market. The relationship seems to be a quadratic one. That is, the absolute value of the largest loss seems to get bigger at a faster rate than the volatility. (Volatility is
usually defined by these practitioners as the average daily range of the last few weeks, or average absolute value of the daily net change of the last few weeks, or any of the other conventional
measures of volatility.) However, this is not a deterministic relationship. That is, just because the volatility is X today does not mean that our largest loss will be X^Y. It simply means that it
usually is somewhere near X^Y. If we could determine in advance what the largest possible loss would be going into today, we could then have a much better handle on our money management.2 Here again
is a case where we must consider the worst-case scenario and build from there. The problem is that we do not know exactly what our largest loss can be going into today. An algorithm that can predict
this is really not very useful to us because of the one time that it fails. 2
This is where using options in a trading strategy is so useful. Either buying a put or call out right in opposition to the underlying position to limit the loss to the strike price of the options, or
simply buying options outright in lieu of the underlying, gives you a floor, an absolute maximum loss. Knowing this is extremely handy from a money-management, particularly an optimal f, standpoint,
Further, if you know what your maximum possible loss is n advance (e.g., a day trade), then you can always determine what the f is in dollars perfectly for any trade by the relation dollars at risk
per unit/optima] f. For example, suppose a day trader knew her optimal 1 was .4. Her stop today, on a I-unit basis, is going to be $900. She will therefore optimally trade 1 unit for every $2,250
($900/.4) in account equity.
Consider for instance the possibility of an exogenous shock occurring in a market overnight. Suppose the volatility were quite low prior to this overnight shock, and the market then went locked-limit
against you for the next few days. Or suppose that there were no price limits, and the market just opened an enormous amount against you the next day. These types of events are as old as commodity
and stock trading itself. They can and do happen, and they are not always telegraphed in advance by increased volatility. Generally then you are better off not to "shrink" your largest historical
loss to reflect a current low-volatility marketplace. Furthermore, there Is the concrete possibility of experiencing a loss larger in the future than what was the historically largest loss. There is
no mandate that the largest loss seen in the past is the largest loss you can experience today.3 This is true regardless of the current volatility coming into today. The problem is that, empirically,
the f that has been optimal in the past is a function of the largest loss of the past. There's no getting around this. However, as you shall see when we get into the parametric techniques, you can
budget for a greater loss in the future. In so doing, you will be prepared if the almost inevitable larger loss comes along. Rather than trying to adjust the largest loss to the current climate of a
given market so that your empirical optimal f reflects the current climate, you will be much better off learning the parametric techniques. The technique that follows is a possible solution to this
problem, and it can be applied whether we are deriving our optimal f empirically or, as we shall learn later, parametrically.
EQUALIZING OPTIMAL F Optimal f will yield the greatest geometric growth on a stream of outcomes. This is a mathematical fact. Consider the hypothetical stream of outcomes: +2, -3, +10, -5 This is a
stream from which we can determine our optimal f as .17, or to bet 1 unit for every $29.41 in equity. Doing so on such a stream will yield the greatest growth on our equity. Consider for a moment
that this stream represents the trade profits and losses on one share of stock. Optimally we should buy one share of stock for every $29.41 that we have in account equity, regardless of what the
current stock price is. But suppose the current stock price is $100 per share. Further, suppose the stock was $20 per share when the first two trades occurred and was $50 per share when the last two
trades occurred. Recall that with optimal f we are using the stream of past trade P&L's as a proxy for the distribution of expected trade P&L's currently. Therefore, we can preprocess the trade P&L
data to reflect this by converting the past trade P&L data to reflect a commensurate percentage gain or loss based upon the current price. For our first two trades, which occurred at a stock price of
$20 per share, the $2 gain corresponds to a 10% gain and the $3 loss corresponds to a 15% loss. For the last two trades, taken at a stock price of $50 per share, the $10 gain corresponds to a 20%
gain and the $5 loss corresponds to a 10% loss. The formulas to convert raw trade P&L's to percentage gains and losses for longs and shorts are as follows: (2.10a) P&L% = Exit Price/Entry Price-1
(for longs) (2.10b) P&L% = Entry Price/Exit Price-1 (for shorts) or we can use the following formula to convert both longs and shorts: (2.10c) P&L% = P&L in Points/Entry Price Thus, for our 4
hypothetical trades, we now have the following stream of percentage gains and losses (assuming all trades are long trades): +.l, -.15, +.2, -.l
We call this new stream of translated P&L's the equalized data, because it is equalized to the price of the underlying instrument when the trade occurred. To account for commissions and slippage, you
must adjust the exit price downward in Equation (2.10a) for an amount commensurate with the amount of the commissions and slippage. Likewise, you should adjust the exit price upward in (2.10b). If
you are using (2.10c), you must deduct the amount of the commissions and slippage (in points again) from the numerator P&L in Points. Next we determine our optimal f on these percentage gains and
losses. The f that is optimal is .09. We must now convert this optimal f of .09 into a dollar amount based upon the current stock price. This is accomplished by the following formula: (2.11) f$ =
Biggest % Loss*Current Price*$ per Point/-f Thus, since our biggest percentage loss was -.15, the current price is $100 per share, and the number of dollars per full point is 1 (since we are only
dealing with buying 1 share), we can determine our f$ as: f$ = -.15*100*1/-.09 = -15/-.09 = 166.67 Thus, we would optimally buy 1 share for every $166.67 in account equity. If we used 100 shares as
our unit size, the only variable affected would have been the number of dollars per full point, which would have been 100. The resulting f$ would have been $16,666.67 in equity for every 100 shares.
Suppose now that the stock went down to $3 per share. Our f$ equation would be exactly the same except for the current price variable which would now be 3. Thus, the amount to finance 1 share by
becomes: f$ = -.15*3*1/-.09 = -.45/-.09 = 5 We optimally would buy 1 share for every $5 we had in account equity. Notice that the optimal f does not change with the current price of the stock. It
remains at .09. However, the f$ changes continuously as the price of the stock changes. This doesn't mean that you must alter a position you are already in on a daily basis, but it does make it more
likely to be beneficial that you do so. As an example, if you are long a given stock and it declines, the dollars that you should allocate to 1 unit (100 shares in this case) of this stock will
decline as well, with the optimal f determined off of equalized data. If your optimal f is determined off of the raw trade P&L data, it will not decline. In both cases, your daily equity is
declining. Using the equalized optimal f makes it more likely that adjusting your position size daily will be beneficial. Equalizing the data for your optimal f necessitates changes in the
by-products.4 We have already seen that both the optimal f and the geometric mean (and hence the TWR) change. The arithmetic average trade changes because now it, too, must be based on the idea that
all trades in the past must be adjusted as if they had occurred from the current price. Thus, in our hypothetical example of outcomes on 1 share of +2, -3,+10, and -5, we have an average trade of $1.
When we take our percentage gains and losses of +.1, -15, +.2, and -.1, we have an average trade (in percent) of +.5. At $100 per share, this translates into an average trade of 100*.05 or $5 per
trade. At $3 per share, the average trade becomes $.15 (3*.05). The geometric average trade changes as well. Recall Equation (1.14) for the geometric average trade: (1.14) GAT = G*(Biggest Loss/-f)
where G = Geometric mean 1. f = Optimal fixed fraction. (and, of course, our biggest loss is always a negative number). This equation is the equivalent of: GAT = (geometric mean-1)*f$
Prudence requires that we USC a largest loss at least as big as the largest loss seen in the past. As the future unfolds and we obtain more and more data, we will derive longer runs of losses. For
instance, if ] flip a coin 100 times I might see it come up tails 12 times for a row at the longest run of tails. If I go and flip it 1,000 times, I most likely will see a longer run of tails. This
same principle is at work when we trade. Not only should we expect longer streaks of losing trades in the future, we should also expect a bigger largest losing trade. - 31 -
Risk-of-ruin equations, although not directly addressed in this text, must also be adjusted to reflect equalized data when being used. Generally, risk-of-ruin equations use the raw trade P&L data as
input. However, when you use equalized data, the new stream of percentage gains and losses must be multiplied by the current price of the underlying instrument and the resulting stream used. Thus, a
stream of percentage gains and losses such as .1, -.15, .2, -.1 translates into a stream of 10, -15, 20, -10 for an underlying at a current price of $100. This new stream should then be used as the
data for the risk-of-ruin equations.
We have already obtained a new geometric mean by equalizing the past data. The f$ variable, which is constant when we do not equalize the past data, now changes continuously, as it is a function of
the current underlying price. Hence our geometric average trade changes continuously as the price of the underlying instrument changes. Our threshold to the geometric also must be changed to reflect
the equalized data. Recall Equation (2.02) for the threshold to the geometric: (2.02) T = AAT/GAT*Biggest Loss/-f where T = The threshold to the geometric. AAT = The arithmetic average trade. GAT =
The geometric average trade. f = The optimal f (0 to 1). This equation can also be rewritten as: T = AAT/GAT*f$ Now, not only do the AAT and GAT variables change continuously as the price of the
underlying changes, so too does the f$ variable. Finally, when putting together a portfolio of market systems we must figure daily HPRs. These too are a function of f$: (2.12) Daily HPR = D$/f$+1
where D$ = The dollar gain or loss on 1 unit from the previous day. This is equal to (Tonight's Close-Last Night's Close)*Dollars per Point. f$ = The current optimal fin dollars, calculated from
Equation (2.11). Here, however, the current price variable is last night's close. For example, suppose a stock tonight closed at $99 per share. Last night it was $102 per share. Our biggest
percentage loss is -15. If our f is .09 then our f$ is: f$ = -.15*102 *1/-.09 = -15.3/-.09 = 170 Since we are dealing with only 1 share, our dollars per point value is $1. We can now determine our
daily HPR for today by Equation (2.12) as: (2.12) Daily HPR = (99-102)*1/170+1 = -3/170+1 = -.01764705882+1 = .9823529412 Return now to what was said at the outset of this discussion. Given a stream
of trade P&L's, the optimal f will make the greatest geometric growth on that stream (provided it has a positive arithmetic mathematical expectation). We use the stream of trade P&L's as a proxy for
the distribution of possible outcomes on the next trade. Along this line of reasoning, it may be advantageous for us to equalize the stream of past trade profits and losses to be what they would be
if they were performed at the current market price. In so doing, we may obtain a more realistic proxy of the distribution of potential trade profits and losses on the next trade. Therefore, we should
figure our optimal f from this adjusted distribution of trade profits and losses. This does not mean that we would have made more by using the optimal f off of the equalized data. We would not have,
as the following demonstration shows: P&L Percentage Underly- f$ ing Price At f = .09, trading the equalized method: +2 -3 +10 -5 P&L
.1 -.15 .2 -.1 Percentage
Number of Shares
20 $33.33 300 20 $33.33 318 50 $83.33 115.752 50 $83.33 129.642 Underly- f$ Number of ing Price Shares At f = .17, trading the nonequalized method: +2 .1 20 $29.41 340.02 -3 -.15 20 $29.41 363.14 +10
.2 50 $29.41 326.1 -5 -.1 50 $29.41 436.98
Cumulative $10,000 $10,600 $9,646 $10,803.52 $10,155.31 Cumulative $10,000 $10,680.04 $9,590.61 $12,851.61 $10,666.71
However, if all of the trades were figured off of the current price (say $100 per share), the equalized optimal f would have made more than the raw optimal f. Which then is the better to use? Should
we equalize our data and determine our optimal f (and its by-products), or should we just run every- 32 -
thing as it is? This is more a matter of your beliefs than it is mathematical fact. It is a matter of what is more pertinent in the item you are trading, percentage changes or absolute changes. Is a
$2 move in a $20 stock the same as a $10 move in a $100 stock? What if we are discussing dollars and deutsche marks? Is a 30-point move at .4500 the same as a .40-point move at .6000? My personal
opinion is that you are probably better off with the equalized data. Often the matter is moot, in that if a stock has moved from $20 per share to $100 per share and we want to determine the optimal
f, we want to use current data. The trades that occurred at $20 per share may not be representative of the way the stock is presently trading, regardless of whether they are equalized or not.
Generally, then, you are better off not using data where the underlying was at a dramatically different price than it presently is, as the characteristics of the way the item trades may have changed
as well. In that sense, the optimal f off of the raw data and the optimal f off of the equalized data will be identical if all trades occurred at the same underlying price. So we can state that if it
does matter a great deal whether you equalize your data or not, then you're probably using too much data anyway. You've gone so far into the past that the trades generated back then probably are not
very representative of the next trade. In short, we can say that it doesn't much matter whether you use equalized data or not, and if it does, there's probably a problem. If there isn't a problem,
and there is a difference between using the equalized data and the raw data, you should opt for the equalized data. This does not mean that the optimal f figured off of the equalized data would have
been optimal in the past. It would not have been. The optimal f figured off of the raw data would have been the optimal in the past. However, in terms of determining the as-yet-unknown answer to the
question of what will be the optimal f (or closer to it tomorrow), the optimal f figured off of the equalized data makes better sense, as the equalized data is a fairer representation of the
distribution of possible outcomes on the next trade. Equations (2.10a) through (2.10c) will give different answers depending upon whether the trade was initiated as a long or a short. For example, if
a stock is bought at 80 and sold at 100, the percentage gain is 25. However, if a stock is sold short at 100 and covered at 80, the gain is only 20%. In both cases, the stock was bought at 80 and
sold at 100, but the sequence-the chronology of these transactions-must be accounted for. As the chronology of transactions affects the distribution of percentage gains and losses, we assume that the
chronology of transactions in the future will be more like the chronology in the past than not. Thus, Equations (2.10a) through (2,10c) will give different answers for longs and shorts. Of course, we
could ignore the chronology of the trades (using 2.10c for longs and using the exit price in the denominator of 2.10c for shorts), but to do so would be to reduce the information content of the
trade's history. Further, the risk involved with a trade is a function of the chronology of the trade, a fact we would be forced to ignore.
DOLLAR AVERAGING AND SHARE AVERAGING IDEAS Here is an old, underused money-management technique that is an ideal tool for dealing with situations where you are absent knowledge. Consider a
hypothetical motorist, Joe Putzivakian, case number 286952343. Every week, he puts $20 of gasoline into his auto, regardless of the price of gasoline that week. He always gets $20 worth, and every
week he uses the $20 worth no matter how much or how little that buys him. When the price for gasoline is higher, it forces him to be more austere in his driving. As a result, Joe Putzivakian will
have gone through life buying more gasoline when it is cheaper, and buying less when it was more expensive. He will have therefore gone through life paying a below average cost per gallon of
gasoline. In other words, if you averaged the cost of a gallon of gasoline for all of the weeks of which Joe was a motorist, the average would have been higher than the average that Joe paid. Now
consider his hypothetical cousin, Cecil Putzivakian, case number 286952344. Whenever he needs gasoline, he just fills up his pickup and complains about the high price of gasoline. As a result, Cecil
has used a consistent amount of gas each week, and has therefore paid the average price for it throughout his motoring lifetime.
Now let's suppose you are looking at a long-term investment program. You decide that you want to put money into a mutual fund to be used for your retirement many years down the road. You believe that
when you retire the mutual fund will be at a much higher value than it is today. That is, you believe that in an asymptotic sense the mutual fund will be an investment that makes money (of course, in
an asymptotic sense, lightning does strike twice). However, you do not know if it is going to go up or down over the next month, or the next year. You are absent knowledge about the nearer-term
performance of the mutual fund. To cope with this, you can dollar average into the mutual fund. Say you want to space your entry into the mutual fund over the course of two years. Further, say you
have $36,000 to invest. Therefore, every month for the next 24 months you will invest $1,500 of this $36,000 into the fund, until after 24 months you will be completely invested. By so doing, you
have obtained a below average cost into the fund. "Average" as it is used here refers to the average price of the fund over the 24month period during which you are investing. It doesn't necessarily
mean that you will get a price that is cheaper than if you put the full $36,000 into it today, nor does it guarantee that at the end of these 24 months of entering the fund you will show a profit on
your $36,000. The amount you have in the fund at that time may be less than the $36,000. What it does mean is that if you simply entered arbitrarily at some point along the next 24 months with your
full $36,000 in one shot, you would probably have ended up buying fewer mutual fund shares, and hence have paid a higher price than if you dollar averaged in. The same is true when you go to exit a
mutual fund, only the exit side works with share averaging rather than dollar averaging. Say it is now time for you to retire and you have a total of 1,000 shares in this mutual fund, You don't know
if this is a good time for you to be getting out or not, so you decide to take 2 years (24 months), to average out of the fund. Here's how you do it. You take the total number of shares you have
(1,000) and divide it by the number of periods you want to get out over (24 months). Therefore, since 1,000/24 = 41.67, you will sell 41.67 shares every month for the next 24 months. In so doing, you
will have ended up selling your shares at a higher price than the average price over the next 24 months. Of course, this is no guarantee that you will have sold them for a higher price than you could
have received for them today, nor does it guarantee that you will have sold your shares at a higher price than what you might get if you were to sell all of your shares 24 months from now. What you
will get is a higher price than the average over the time period that you are averaging out over. That is guaranteed. These same principles can be applied to a trading account. By dollar averaging
money into a trading account as opposed to simply "taking the plunge" at some point during the time period you are averaging over, you will have gotten into the account at a better "average price."
Absent knowledge of what the near-term equity changes in the account will be you are better off, on average, to dollar average into a trading program. Don't just rely on your gut and your nose, use
the measures of dependency discussed in Chapter 1 on the monthly equity changes of a trading program. Try to see if there is dependency in the monthly equity changes. If there is dependency to a high
enough confidence level so you can plunge in at a favorable point, then do so. However, if there isn't a high enough confidence in the dependency of the monthly equity changes, then dollar average
into (and share average out of) a trading program. In so doing, you will be ahead in an asymptotic sense. The same is true for withdrawing money from an account. The way to share average out of a
trading program (when there aren't any shares, like a commodity account) is to decide upon a date to start averaging out, as well as how long a period of time to average out for. On the date when you
are going to start averaging out, divide the equity in the account by 100. This gives you the value of "1 share." Now, divide 100 by the number of periods that you want to average out over. Say you
want to average out of the account weekly over the next 20 weeks. That makes 20 periods. Dividing 100 by 20 gives 5. Therefore, you are going to average out of your account by 5 "shares" per week.
Multiply the value you had figured for 1 share by 5, and that will tell you how much money to withdraw from your trading account this week. Now, going into next week, you must keep track of how many
shares you have left. Since you got out of 5 shares last week, you are left with 95. When the time comes along for withdrawal number 2, divide the equity in your account by 95 and multiply by 5. This
will give you the value of the 5 - 33 -
shares you are "cashing in" this week. You will keep on doing this until you have zero shares left, at which point no equity will be left in your account. By doing this, you have probably obtained a
better average price for getting out of your account than you would have received had you gotten out of the account at some arbitrary point along this 20-week withdrawal period. This principle of
averaging in and out of a trading account is so simple, you have to wonder why no one ever does it. I always ask the accounts that I manage to do this. Yet I have never had anyone, to date, take me
up on it. The reason is simple. The concept, although completely valid, requires discipline and time in order to work-exactly the same ingredients as those required to make the concept of optimal f
work. Just ask Joe Putzivakian. It's one thing to understand the concepts and believe in them. It's another thing to do it.
THE ARC SINE LAWS AND RANDOM WALKS Now we turn the discussion toward drawdowns. First, however, we need to study a little bit of theory in the way of the first and second arc sine laws. These are
principles that pertain to random walks. The stream of trade P&L's that you are dealing with may not be truly random. The degree to which the stream of P&L's you are using differs from being purely
random is the degree to which this discussion will not pertain to your stream of profits and losses. Generally though, most streams of trade profits and losses are nearly random as determined by the
runs test and the linear correlation coefficient (serial correlation). Furthermore, not only do the arc sine laws assume that you know in advance what the amount that you can win or lose is, they
also assume that the amount you can win is equal to the amount you can lose, and that this is always a constant amount. In our discussion, we will assume that the amount that you can win or lose is
$1 on each play. The arc sine laws also assume that you have a 50% chance of winning and a 50% chance of losing. Thus, the arc sine laws assume a game where the mathematical expectation is 0. These
caveats make for a game that is considerably different, and considerably more simple, than trading is. However, the first and second arc sine laws are exact for the game just described. To the degree
that trading differs from the game just described, the arc sine laws do not apply. For the sake of learning the theory, however, we will not let these differences concern us for the moment. Imagine a
truly random sequence such as coin tossing5 where we win 1 unit when we win and we lose 1 unit when we lose. If we were to plot out our equity curve over X tosses, we could refer to a specific point
(X,Y), where X represented the Xth toss and Y our cumulative gain or loss as of that toss. We define positive territory as anytime the equity curve is above the X axis or on the X axis when the
previous point was above the X axis. Likewise, we define negative territory as anytime the equity curve is below the X axis or on the X axis when the previous point was below the X axis. We would
expect the total number of points in positive territory to be close to the total number of points in negative territory. But this is not the case. If you were to toss the coin N times, your
probability (Prob) of spending K of the events in positive territory is: (2.13) Prob~l/(Pi*K^.5*(N-K)^.5) where Pi = 3.141592654. The symbol ~ means that both sides tend to equality in the limit. In
this case, as either K or (N-K) approaches infinity, the two sides of the equation will tend toward equality. Thus, if we were to toss a coin 10 times (N = 10) we would have the following
probabilities of being in positive territory for K of the tosses: K Probability6 5
Although empirical tests show that coin tossing is not a truly random sequence due to slight imperfections in the coin used, we will assume here, and elsewhere in the text when referring to coin
tossing, that we are tossing an ideal coin with exactly a .5 chance of landing heads or tails. 6 Note that since neither K nor N may equal 0 in Equation (2.13) (as you would then be dividing by 0),
we can discern the probabilities corresponding to K = 0 and K = N by summing the probabilities from K = l to K = N-l and subtracting this sum from 1. Dividing this difference by 2 will give us the
probabilities associated with K = 0 and K = N.
.14795 .1061 .0796 .0695 .065 .0637 .065 .0695 .0796 .1061 .14795
You would expect to be in positive territory for 5 of the 10 tosses, yet that is the least likely outcome! In fact, the most likely outcomes are that you will be in positive territory for all of the
tosses or for none of them! This principle is formally detailed in the first arc sine law which states: For a Fixed A (0
Probability .14795 .1061 .0796 .0695 .065 .0637 .065 .0695 .0796 .1061 .14795
the second law, where, rather than looking for an absolute maximum and minimum, we were looking for a maximum above the mathematical expectation and a minimum below it. The minimum below the
mathematical expectation could be greater than the maximum above it if the minimum happened later and the arithmetic mathematical expectation was a rising line (as in trading) rather than a
horizontal line at zero. Thus, we can interpret the spirit of the arc sine laws as applying to trading in the following ways. (However, rather than imagining the important line as being a, horizontal
line at zero, we should imagine a line that slopes upward at the rate of the arithmetic average trade (if we are constant-con-tract trading). If we are Axed fractional trading, the line will be one
that curves upward, getting ever steeper, 'at such a rate that the next point equals the current point times the geometric mean.) We can interpret the first arc sine law as stating that we should
expect to be on one side of the mathematical expectation line for far more trades than we spend on the other side of the mathematical expectation line. Regarding the second arc sine law, we should
expect the maximum deviations from the mathematical expectation line, either above or below it, as being most likely to occur near the beginning or the end of the equity curve graph and least likely
near the center of it. You will notice another characteristic that happens when you are trading at the optimal f levels. This characteristic concerns the length of time you spend between two equity
high points. If you are trading at the optimal f level, whether you are trading just 1 market system or a portfolio of market systems, the time of the longest drawdown7 (not necessarily the worst, or
deepest, drawdown) takes to elapse is usually 35 to 55% of the total time you are looking at. This seems to be true no matter how long or short a time period you are looking at! (Again, time in this
sense is measured in trades.) This is not a hard-and-fast rule. Rather, it is the effect of the spirit of the arc sine laws at work. It is perfectly natural, and should be expected This principle
appears to hold true no matter how long or short a period we are looking at. This means that we can expect to be in the largest drawdown for approximately 35 to 55% of the trades over the life of a
trading program we are employing! This is true whether we are trading 1 market system or an entire portfolio. Therefore, we must learn to expect to be within the maximum drawdown for 35 to 55% of the
life of a program that we wish to trade. Knowing this before the fact allows us to be mentally prepared to trade through it. Whether you are about to manage an account, about to have one managed by
someone else, or about to trade your own account, you should bear in mind the spirit of the arc sine laws and how they work on your equity curve relative to the mathematical expectation line, along
with the 35% to 55% rule. By so doing you will be tuned to reality regarding what to expect as the future unfolds. We have now covered the empirical techniques entirely. Further, we have discussed
many characteristics of fixed fractional trading and have introduced some salutary techniques, which will be used throughout the sequel. We have seen that by trading at the optimal levels of money
management, not only can we expect substantial drawdowns, but the time spent between two equity highs can also be quite substantial. Now we turn our attention to studying the parametric techniques,
the subject of the next chapter.
In a nutshell, the second arc sine law states that the maximum or minimum are most likely to occur near the endpoints of the equity curve and least likely to occur in the center.
TIME SPENT IN A DRAWDOWN Recall the caveats involved with the arc sine laws. That is, the arc sine laws assume a 50% chance of winning, and a 50% chance of losing. Further, they assume that you win
or lose the exact same amounts and that the generating stream is purely random. Trading is considerably more complicated than this. Thus, the arc sine laws don't apply in a pure sense, but they do
apply in spirit. Consider that the arc sine laws worked on an arithmetic mathematical expectation of 0. Thus, with the first law, we can interpret the percentage of time on either side of the zero
line as the percentage of time on either side of the arithmetic mathematical expectation. Likewise with - 34 -
7By longest drawdown here is meant the longest time, in terms of the number of elapsed trades, between one equity peak and the time (or number of elapsed trades) until that peak is equaled or
Chapter 3 - Parametric Optimal f on the Normal Distribution Now that we are finished with our discussion of the empirical techniques as well as the characteristics of fixed fractional trading, we
enter the realm of the parametric techniques. Simply put, these techniques differ from the empirical in that they do not use the past history itself as the data to be operated on Bather, we observe
the past history to develop a mathematical description of that distribution of that data This mathematical description is based upon what has happened in the past as well as what we expect to happen
in the future. In the parametric techniques we operate on these mathematical descriptions rather than on the past history itself The mathematical descriptions used in the parametric techniques are
most often what are referred to as probability distributions. Therefore, if we are to study the parametric techniques, we must study probability distributions (in general) as a foundation We will
then move on to studying a certain type of distribution, the Normal Distribution. Then we will see how to find the optimal f and its byproducts on the Normal Distribution.
THE BASICS OF PROBABILITY DISTRIBUTIONS Imagine if you will that you are at a racetrack and you want to keep a log of the position in which the horses in a race finish. Specifically, you want to
record whether the horse in the pole position came in first, second, and so on for each race of the day. You will only record ten places. If the horse came in worse than in tenth place, you will
record it as a tenth-place finish. If you do this for a number of days, you will have gathered enough data to see the distribution of finishing positions for a horse starting out in the pole
position. Now you take your data and plot it on a graph. The horizontal axis represents where the horse finished, with the far left being the worst finishing position (tenth) and the far right being
a win. The vertical axis will record how many times the pole position horse finished in the position noted on the horizontal axis. You would begin to see a bell-shaped curve develop. Under this
scenario, there are ten possible finishing positions for each race. We say that there are ten bins in this distribution. What if, rather than using ten bins, we used five? The first bin would be for
a first- or second-place finish, the second bin for a third-or fourth-place finish, and so on. What would have been the result? Using fewer bins on the same set of data would have resulted in a
probability distribution with the same profile as one determined on the same data with more bins. That is, they would look pretty much the same graphically. However, using fewer bins does reduce the
information content of a distribution. Likewise, using more bins increases the information content of a distribution. If, rather than recording the finishing position of the pole position horse in
each race, we record the time the horse ran in, rounded to the nearest second, we will get more than ten bins; and thus the information content of the distribution obtained will be greater. If we
recorded the exact finish time, rather than rounding finish times to use the nearest second, we would be creating what is called a continuous distribution. In a continuous distribution, there are no
bins. Think of a continuous distribution as a series of infinitely thin bins (see Figure 3-1). A continuous distribution differs from a discrete distribution, the type we discussed first in that a
discrete distribution is a binned distribution. Although binning does reduce the information content of a distribution, in real life it is often necessary to bin data. Therefore, in real life it is
often necessary to lose some of the information content of a distribution, while keeping the profile of the distribution the same, so that you can process the distribution. Finally, you should know
that it is possible to take a continuous distribution and make it discrete by binning it, but it is not possible to take a discrete distribution and make it continuous.
- 35 -
Figure 3-1 A continuous distribution is a series of infinitely thin bins When we are discussing the profits and losses of trades, we are essentially discussing a continuous distribution. A trade can
take a multitude of values (although we could say that the data is binned to the nearest cent). In order to work with such a distribution, you may find it necessary to bin the data into, for example,
one-hundred-dollar-wide bins. Such a distribution would have a bin for trades that made nothing to $99.99, the next bin would be for trades that made $100 to $199.99, and so on. There is a loss of
information content in binning this way, yet the profile of the distribution of the trade profits and losses remains relatively unchanged.
DESCRIPTIVE MEASURES OF DISTRIBUTIONS Most people are familiar with the average, or more specifically the arithmetic mean. This is simply the sum of the data points in a distribution divided by the
number of data points: (3.01) A = (∑[i = 1,N] Xi)/N where A = The arithmetic mean. Xi = The ith data point. N = The total number of data points in the distribution. The arithmetic mean is the most
common of the types of measures of location, or central tendency of a body of data, a distribution. However, you should be aware that the arithmetic mean is not the only available measure of central
tendency and often it is not the best. The arithmetic mean tends to be a poor measure when a distribution has very broad tails. Suppose you randomly select data points from a distribution and
calculate their mean. If you continue to do this you will find that the arithmetic means thus obtained converge poorly, if at all, when you are dealing with a distribution with very broad tails.
Another important measure of location of a distribution is the median. The median is described as the middle value when data are arranged in an array according to size. The median divides a
probability distribution into two halves such that the area under the curve of one half is equal to the area under the curve of the other half. The median is frequently a better measure of central
tendency than the arithmetic mean. Unlike the arithmetic mean, the median is not distorted by extreme outlier values. Further, the median can be calculated even for open-ended distributions. An
open-ended distribution is a distribution in which all of the values in excess of a certain bin are thrown into one bin. An example of an open-ended distribution is the one we were compiling when we
recorded the finishing position in horse racing for the horse starting out in the pole position. Any finishes worse than tenth place were recorded as a tenth place finish. Thus, we had an open
distribution. The median is extensively used by the U.S. Bureau of the Census. The third measure of central tendency is the mode-the most frequent occurrence. The mode is the peak of the distribution
curve. In some distributions there is no mode and sometimes there is more than one mode. Like the median, the mode can often be regarded as a superior measure of central tendency. The mode is
completely independent of extreme outlier values, and it is more readily obtained than the arithmetic mean or the median. We have seen how the median divides the distribution into two equal areas. In
the same way a distribution can be divided by three quartiles (to give four areas of equal size or probability), or nine deciles (to give ten areas of equal size or probability) or 99 percentiles (to
give 100 areas of equal size or probability). The 50th percentile is the median, and along with the 25th and 75th percentiles give us the quartiles. Fi-
nally, another term you should become familiar with is that of a quantile. A quantile is any of the N-1 variate values that divide the total frequency into N equal parts. We now return to the mean.
We have discussed the arithmetic mean as a measure of central tendency of a distribution. You should be aware that there are other types of means as well. These other means are less common, but they
do have significance in certain applications. First is the geometric mean, which we saw how to calculate in the first chapter. The geometric mean is simply the Nth root of all the data points
multiplied together. (3.02) G = (∏[i = 1,N]Xi)^(1/N) where G = The geometric mean. Xi = The ith data point. N = The total number of data points in the distribution. The geometric mean cannot be used
if any of the variate-values is zero or negative. We can state that the arithmetic mathematical expectation is the arithmetic average outcome of each play (on a constant I-unit basis) minus the bet
size. Likewise, we can state that the geometric mathematical expectation is the geometric average outcome of each play (on a constant I-unit basis) minus the bet size. Another type of mean is the
harmonic mean. This is the reciprocal of the mean of the reciprocals of the data points. (3.03) 1/∏ = 1/N ∑[i = 1,N]1/Xi where H = The harmonic mean. Xi = The ith data point. N = The total number of
data points in the distribution. The final measure of central tendency is the quadratic mean or roof mean square. (3.04) R^2 = l/N∑[i = 1,N]Xi^2 where R = The root mean square. Xi = The ith data
point. N = The total number of data points in the distribution. You should realize that the arithmetic mean (A) is always greater than or equal to the geometric mean (G), and the geometric mean is
always greater than or equal to the harmonic mean (H): (3.05) H<=G<=A where H = The harmonic mean. G = The geometric mean. A = The arithmetic mean.
MOMENTS OF A DISTRIBUTION The central value or location of a distribution is often the first thing you want to know about a group of data, and often the next thing you want to know is the data's
variability or "width" around that central value. We call the measures of a distributions central tendency the first moment of a distribution. The variability of the data points around this central
tendency is called the second moment of a distribution. Hence the second moment measures a distribution's dispersion about the first moment. As with the measure of central tendency, many measures of
dispersion are available. We cover seven of them here, starting with the least common measures and ending with the most common. The range of a distribution is simply the difference between the
largest and smallest values in a distribution. Likewise, the 10-90 percentile range is the difference between the 90th and 10th percentile points. These first two measures of dispersion measure the
spread from one extreme to the other. The remaining five measures of dispersion measure the departure from the central tendency (and hence measure the half-spread). The semi-interquartile range or
quartile deviation equals one half of the distance between the first and third quartiles (the 25th and 75th - 36 -
per-centiles). This is similar to the 10-90 percentile range, except that with this measure the range is commonly divided by 2. The half-width is an even more frequently used measure of dispersion.
Here, we take the height of a distribution at its peak, the mode. If we find the point halfway up this vertical measure and run a horizontal line through it perpendicular to the vertical line, the
horizontal line will touch the distribution at one point to the left and one point to the right. The distance between these two points is called the half-width. Next, the mean absolute deviation or
mean deviation is the arithmetic average of the absolute value of the difference between the data points and the arithmetic average of the data points. In other words, as its name implies, it is the
average distance that a data point is from the mean. Expressed mathematically: (3.06) M = 1/N ∑[i = 1,N] ABS (Xi-A) where M = The mean absolute deviation. N = The total number of data points. Xi =
The ith data point. A = The arithmetic average of the data points. ABS() = The absolute value function. Equation (3.06) gives us what is known as the population mean absolute deviation. You should
know that the mean absolute deviation can also be calculated as what is known as the sample mean absolute deviation. To calculate the sample mean absolute deviation, replace the term 1/N in Equation
(3.06) with 1/(N-1). You use the sample version when you are making judgments about the population based on a sample of that population. The next two measures of dispersion, variance and standard
deviation, are the two most commonly used. Both are used extensively, so we cannot say that one is more common than the other; suffice to say they are both the most common. Like the mean absolute
deviation, they can be calculated two different ways, for a population as well as a sample. The population version is shown, and again it can readily be altered to the sample version by replacing the
term 1/N with 1/(N-1). The variance is the same thing as the mean absolute deviation except that we square each difference between a data point and the average of the data points. As a result, we do
not need to take the absolute value of each difference, since multiplying each difference by itself makes the result positive whether the difference was positive or negative. Further, since each
distance is squared, extreme outliers will have a stronger effect on the variance than they would on the mean absolute deviation. Mathematically expressed: (3.07) V = 1/N ∑[i = 1,N] ((Xi-A)^2) where
V = The variance. N = The total number of data points. Xi = The ith data point. A = The arithmetic average of the data points. Finally, the standard deviation is related to the variance (and hence
the mean absolute deviation) in that the standard deviation is simply the square root of the variance. The third moment of a distribution is called skewness, and it describes the extent of asymmetry
about a distributions mean (Figure 3-2). Whereas the first two moments of a distribution have values that can be considered dimensional (i.e., having the same units as the measured quantities),
skew-ness is defined in such a way as to make it nondimensional. It is a pure number that represents nothing more than the shape of the distribution.
Skew = 0
Figure 3-4 Kurtosis.
Figure 3-2 Skewness A positive value for skewness means that the tails are thicker on the positive side of the distribution, and vice versa. A perfectly symmetrical distribution has a skewness of 0.
Mode Mean
Figure 3-3 Skewness alters location. In a symmetrical distribution the mean, median, and mode are all at the same value. However, when a distribution has a nonzero value for skewness, this changes as
depicted in Figure 3-3. The relationship for a skewed distribution (any distribution with a nonzero skewness) is: (3.08) Mean-Mode = 3*(Mean-Median) As with the first two moments of a distribution,
there are numerous measures for skewness, which most frequently will give different answers. These measures now follow: (3.09) S = (Mean-Mode)/Standard Deviation (3.10) S = (3*(Mean-Median))/Standard
Deviation These last two equations, (3.09) and (3.10), are often referred to as Pearson's first and second coefficients of skewness, respectively. Skewness is also commonly determined as: (3.11) S =
1/N ∑[i = 1,N] (((Xi-A)/D)^3) where S = The skewness. N = The total number of data points. Xi = The ith data point. A = The arithmetic average of the data points. D = The population standard
deviation of the data points.
- 37 -
Finally, the fourth moment of a distribution, kurtosis (see Figure 34) measures the peakedness or flatness of a distribution (relative to the Normal Distribution). Like skewness, it is a
nondimensional quantity. A curve less peaked than the Normal is said to be platykurtic (kurtosis will be negative), and a curve more peaked than the Normal is called leptokurtic (kurtosis will be
positive). When the peak of the curve resembles the Normal Distribution curve, kurtosis equals zero, and we call this type of peak on a distribution mesokurtic. Like the preceding moments, kurtosis
has more than one measure. The two most common are: (3.12) K = Q/P where K = The kurtosis. Q = The semi-interquartile range. P = The 10-90 percentile range. (3.13) K = (1/N (∑[i = 1,N] (((Xi-A)/D)^
4)))-3 where K = The kurtosis. N = The total number of data points. Xi = The ith data point. A = The arithmetic average of the data points. D = The population standard deviation of the data points.
Finally, it should be pointed out there is a lot more "theory" behind the moments of a distribution than is covered here, For a more in-depth discussion you should consult one of the statistics books
mentioned in the Bibliography. The depth of discussion about the moments of a distribution presented here will be more than adequate for our purposes throughout this text. Thus far, we have covered
data distributions in a general sense. Now we will cover the specific distribution called the Normal Distribution.
THE NORMAL DISTRIBUTION Frequently the Normal Distribution is referred to as the Gaussian distribution, or de Moivre's distribution, after those who are believed to have discovered it-Karl Friedrich
Gauss (1777-1855) and, about a century earlier and far more obscurely, Abraham de Moivre (1667-1754). The Normal Distribution is considered to be the most useful distribution in modeling. This is due
to the fact that the Normal Distribution accurately models many phenomena. Generally speaking, we can measure heights, weights, intelligence levels, and so on from a population, and these will very
closely resemble the Normal Distribution. Let's consider what is known as Galton's board (Figure 3-5). This is a vertically mounted board in the shape of an isosceles triangle. The board is studded
with pegs, one on the top row, two on the second, and so on. Each row down has one more peg than the previous row. The pegs are arranged in a triangular fashion such that when a ball is dropped in,
it has a 50/50 probability of going right or left with each peg it encounters. At the base of the board is a series of troughs to record the exit gate of each ball.
tion is distributed according to the Exponential Distribution (Figure 36), then it may be necessary to use an N of 100 or so.
Even the means of samples taken from the exponential will tend to be normally distributed
Figure 3-5 Galton's board. The balls falling through Galton's board and arriving in the troughs will begin to form a Normal Distribution. The "deeper" the board is (i.e., the more rows it has) and
the more balls are dropped through, the more closely the final result will resemble the Normal Distribution. The Normal is useful in its own right, but also because it tends to be the limiting form
of many other types of distributions. For example, if X is distributed binomially, then as N tends toward infinity, X tends to be Normally distributed. Further, the Normal Distribution is also the
limiting form of a number of other useful probability distributions such as the Poisson, the Student's, or the T distribution. In other words, as the data (N) used in these other distributions
increases, these distributions increasingly resemble the Normal Distribution.
THE CENTRAL LIMIT THEOREM One of the most important applications for statistical purposes involving the Normal Distribution has to do with the distribution of averages. The averages of samples of a
given size, taken such that each sampled item is selected independent of the others, will yield a distribution that is close to Normal. This is an extremely powerful fact, for it means that you can
generalize about an actual random process from averages computed using sample data. Thus, we can state that if N random samples are drawn from a population, then the sums (or averages) of the samples
will be approximately Normally distributed, regardless of the distribution of the population from which the samples are drawn The closeness to the Normal Distribution improves as N (the number of
samples) increases. As an example, consider the distribution of numbers from 1 to 100. This is what is known as a uniform distribution: all elements (numbers in this case) occur only once. The number
82 occurs once and only once, as does 19, and so on. Suppose now that we take a sample of five elements and we take the average of these five sampled elements (we can just as well take their sums).
Now, we replace those five elements back into the population, and we take another sample and calculate the sample mean. If we keep on repeating this process, we will see that the sample means are
Normally distributed, even though the population from which they are drawn is uniformly distributed. Furthermore, this is true regardless of how the population is distributed! The Central Limit
Theorem allows us to treat the distribution of sample means as being Normal without having to know the distribution of the population. This is an enormously convenient fact for many areas of study.
If the population itself happens to be Normally distributed, then the distribution of sample means will be exactly (not approximately) Normal. This is true because how quickly the distribution of the
sample means approaches the Normal, as N increases, is a function of how close the population is to Normal. As a general rule of thumb, if a population has a unimodal distribution-any type of
distribution where there is a concentration of frequency around a single mode, and diminishing frequencies on either side of the mode (i.e., it is convex)-or is uniformly distributed, using a value
of 20 for N is considered sufficient, and a value of 10 for N is considered probably sufficient. However, if the popula- 38 -
Figure 3-6 The Exponential Distribution and the Normal. The Central Limit Theorem, this amazingly simple and beautiful fact, validates the importance of the Normal Distribution.
WORKING WITH THE NORMAL DISTRIBUTION In using the Normal Distribution, we most frequently want to find the percentage of area under the curve at a given point along the curve. In the parlance of
calculus this would be called the integral of the function for the curve itself. Likewise, we could call the function for the curve itself the derivative of the function for the area under the curve.
Derivatives are often noted with a prime after the variable for the function. Therefore, if we have a function, N(X), that represents the percentage of area under the curve at a given point, X, we
can say that the derivative of this function, N'(X) (called N prime of X), is the function for the curve itself at point X. We will begin with the formula for the curve itself, N'(X). This function
is represented as: (3.14) N'(X) = 1/(S*(2*3.1415926536)^(1/2))*EXP(-((XU)^2)/(2*S^2)) where U = The mean of the data. S = The standard deviation of the data. X = The observed data point. EXP() = The
exponential function. This formula will give us the Y axis value, or the height of the curve if you Will, at any given X axis value. Often it is easier to refer to a point along the curve with
reference to its X coordinate in terms of how many standard deviations it is away from the mean. Thus, a data point that was one standard deviation away from the mean would be said to be one standard
unit from the mean. Further, it is often easier to subtract the mean from all of the data points, which has the effect of shifting the distribution so that it is centered over zero rather than over
the mean. Therefore, a data point that was one standard deviation to the right of the mean would now have a value of 1 on the X axis. When we make these conversions, subtracting the mean from the
data points, then dividing the difference by the standard deviation of the data points, we are converting the distribution to what is called the standardized normal, which is the Normal Distribution
with mean = 0 and variance = 1. Now, N'(Z) will give us the Y axis value (the height of the curve) for any value of Z: (3.15a) N'(Z) = l/((2*3.1415926536)^(1/2))*EXP(-(Z^2/2)) = .398942*EXP(-(Z^2/2))
where (3.16) Z = (X-U)/S and U = The mean of the data. S = The standard deviation of the data. X = The observed data point. EXP() = The exponential function.
Equation (3.16) gives us the number of standard units that the data point corresponds to-in other words, how many standard deviations away from the mean the data point is. When Equation (3.16) equals
1, it is called the standard normal deviate. A standard deviation or a standard unit is sometimes referred to as a sigma. Thus, when someone speaks of an event being a "five sigma event," they are
referring to an event whose probability of occurrence is the probability of being beyond five standard deviations. 0.5
0.4 0.3 0.2 0.1 0 -3
0 <-- Z -->
Figure 3-7 The Normal Probability density function. Consider Figure 3-7, which shows this equation for the Normal curve. Notice that the height of the standard Normal curve is .39894. From Equation
(3.15a), the height is: (3.15a) N'(Z) = .398942*EXP(-(Z^2/2)) N'(0) = .398942*EXP(-(0^2/2)) N'(0) = .398942 Notice that the curve is continuous-that is, there are no "breaks" in the curve as it runs
from minus infinity on the left to positive infinity on the right. Notice also that the curve is symmetrical, the side to the right of the peak being the mirror image of the side to the left of the
peak. Suppose we had a group of data where the mean of the data was 11 and the standard deviation of the group of data was 20. To see where a data point in that set would be located on the curve, we
could first calculate it as a standard unit. Suppose the data point in question had a value of -9. To calculate how many standard units this is we first must subtract the mean from this data point:
-9 -11 = -20 Next we need to divide the result by the standard deviation: -20/20 = -1 We can therefore say that the number of standard units is -1, when the data point equals -9, and the mean is 11,
and the standard deviation is 20. In other words, we are one standard deviation away from the peak of the curve, the mean, and since this value is negative we know that it means we are one standard
deviation to the left of the peak. To see where this places us on the curve itself (i.e., how high the curve is at one standard deviation left of center, or what the Y axis value of the curve is for
a corresponding X axis value of -1), we need to now plug this into Equation (3.15a): (3.15a) N'(Z) = .398942*EXP(-(Z^2/2)) = .398942*2.7182818285^(-(-1^2/2)) = .398942*2.7182818285^(-1/2) =
.398942*.6065307 = .2419705705 Thus we can say that the height of the curve at X = -1 is .2419705705. The function N'(Z) is also often expressed as: (3.15b) N'(Z) = EXP(-(Z^2/2))/((8*ATN(1))^(1/2) =
EXP(-(Z^2/2))/((8*.7853983)^(1/2) = EXP(-(Z^2/2))/2.506629 where (3.16) Z = (X-U)/S and ATN() = The arctangent function. U = The mean of the data. S = The standard deviation of the data. - 39 -
X = The observed data point. EXP() = The exponential function. Nonstatisticians often find the concept of the standard deviation (or its square, variance) hard to envision. A remedy for this is to
use what is known as the mean absolute deviation and convert it to and from the standard deviation in these equations. The mean absolute deviation is exactly what its name implies. The mean of the
data is subtracted from each data point. The absolute values of each of these differences are then summed, and this sum is divided by the number of data points. What you end up with is the average
distance each data point is away from the mean. The conversion for mean absolute deviation and standard deviation are given now: (3.17) Mean Absolute Deviation = S*((2/3.1415926536)^(1/2)) =
S*.7978845609 where M = The mean absolute deviation. S = The standard deviation. Thus we can say that in the Normal Distribution, the mean absolute deviation equals the standard deviation times
.7979. Likewise: (3.18) S = M*1/.7978845609 = M*1.253314137 where S = The standard deviation. M = The mean absolute deviation. So we can also say that in the Normal Distribution the standard
deviation equals the mean absolute deviation times 1.2533. Since the variance is always the standard deviation squared (and standard deviation is always the square root of variance), we can make the
conversion between variance and mean absolute deviation. (3.19) M = V^(1/2)*((2/3.1415926536)^(1/2)) = V^(l/2)*.7978845609 where M = The mean absolute deviation. V = The variance. (3.20) V =
(M*1.253314137)^2 where V = The variance. M = The mean absolute deviation. Since the standard deviation in the standard normal curve equals 1, we can state that the mean absolute deviation in the
standard normal curve equals .7979. Further, in a bell-shaped curve like the Normal, the semi-interquartile range equals approximately two-thirds of the standard deviation, and therefore the standard
deviation equals about 1.5 times the semi-interquartile range. This is true of most bell-shaped distributions, not just the Normal, as are the conversions given for the mean absolute deviation and
standard deviation.
NORMAL PROBABILITIES We now know how to convert our raw data to standard units and how to form the curve N'(Z) itself (i.e., how to find the height of the curve, or Y coordinate for a given standard
unit) as well as N'(X) (Equation (3.14), the curve itself without first converting to standard units). To really use the Normal Probability Distribution though, we want to know what the probabilities
of a certain outcome happening arc. This is not given by the height of the curve. Rather, the probabilities correspond to the area under the curve. These areas are given by the integral of this N'(Z)
function which we have thus far studied. We will now concern ourselves with N(Z), the integral . to N'(Z), to find the areas under the curve (the probabilities).1 (3.21) N(Z) = 1 -N'(Z)*
((1.330274429*Y ^ 5)(1.821255978*Y^4)+(1.781477937*Y^3)(.356563782*Y^2)+(.31938153*Y)) If Z<0 then N(Z) = 1-N(Z) (3.15a) N'(Z) = .398942*EXP(-(Z^2/2)) where Y = 1/(1+2316419*ABS(Z)) 1
The actual integral to the Normal probability density does not exist in closed form, but it can very closely be approximated by Equation (3.21).
and ABS() = The absolute value function. EXP() = The exponential function. We will always convert our data to standard units when finding probabilities under the curve. That is, we will not describe
an N(X) function, but rather we will use the N(Z) function where: (3.16) Z = (X-U)/S and U = The mean of the data. S = The standard deviation of the data. X = The observed data point. Refer now to
Equation (3.21). Suppose we want to know what the probability is of an event not exceeding +2 standard units (Z = +2). Y = 1/(1+2316419*ABS(+2)) = 1/1.4632838 = .68339443311 (3.15a) N'(Z) =
.398942*EXP(-(+2^2/2)) = .398942*EXP(-2) = .398942*.1353353 = .05399093525 Notice that this tells us the height of the curve at +2 standard units. Plugging these values for Y and N'(Z) into Equation
(3.21) we can obtain the probability of an event not exceeding +2 standard units: N(Z) = 1-N'(Z)*((1.330274429*Y^5)(1.821255978*Y^4)+(1.781477937*Y^3)(.356563782*Y^2)+(.31938153*Y)) = 1-.05399093525*
((1.330274429*.68339443311^5)(1.821255978*.68339443311^4+1.781477937*.68339443311^3)(.356563782*.68339443311^2)+(.31938153*.68339443311)) = 1-.05399093525*((1.330274429*.1490587)
(1.821255978*.2181151+(1.781477937*.3191643)(-356563782*.467028+.31938153*.68339443311)) = 1-.05399093525*(.198288977-.3972434298+.5685841587.16652527+.2182635596) = 1-.05399093525*.4213679955 =
1-.02275005216 = .9772499478 Thus we can say that we can expect 97.72% of the outcomes in a Normally distributed random process to fall shy of +2 standard units. This is depicted in Figure 3-8.
N(Z) & N'(Z) 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 N'(Z) 0.2 0.1 0 -3 -2
0 Z
Figure 3-8 Equation (3.21) showing probability with Z = +2. If we wanted to know what the probabilities were for an event equaling or exceeding a prescribed number of standard units (in this case
+2), we would simply amend Equation (3.21), taking out the 1- in the beginning of the equation and doing away with the -Z provision (i.e., doing away with "If Z < 0 then N(Z) = 1-N(Z)"). Therefore,
the second to last line in the last computation would be changed from = 1-.02275005216 to simply .02275005216
- 40 -
We would therefore say that there is about a 2.275% chance that an event in a Normally distributed random process would equal or exceed +2 standard units. This is shown in Figure 3-9.
N(Z) & N'(Z) 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 N'(Z) 0.2 0.1 0 -3 -2
N(Z) N(Z) without the 1and -Z provision
0 Z
Figure 3-9 Doing away with the 1- and -Z provision in Equation (3.21). Thus far we have looked at areas under the curve (probabilities) where we are only dealing with what are known as "1-tailed"
probabilities. That is to say we have thus far looked to solve such questions as, "What are the probabilities of an event being less (more) than such-andsuch standard units from the mean?" Suppose
now we were to pose the question as, “What are the probabilities of an event being within so many standard units of the mean?" In other words, we wish to find out what the "e-tailed" probabilities
are. 0.5
0.4 0.3 0.2 0.1 0 -3
0 <-- Z -->
Figure 3-10 A two-tailed probability of an event being+or-2 sigma. Consider Figure 3-10. This represents the probabilities of being within 2 standard units of the mean. Unlike Figure 3-8, this
probability computation does not include the extreme left tail area, the area of less than -2 standard units. To calculate the probability of being within Z standard units of the mean, you must first
calculate the I-tailed probability of the absolute value of Z with Equation (3.21). This will be your input to the next Equation, (3.22), which gives us the 2-tailed probabilities (i.e., the
probabilities of being within ABS(Z) standard units of the mean): (3.22) e-tailed probability = 1-((1-N(ABS(Z)))*2) If we are considering what our probabilities of occurrence within 2 standard
deviations are (Z = 2), then from Equation (3.21) we know that N(2) = .9772499478, and using this as input to Equation (3.22): 2-tailed probability = 1-((1-.9772499478)*2) = 1-(.02275005216* 2) =
1-.04550010432 = .9544998957 Thus we can state from this equation that the probability of an event in a Normally distributed random process falling within 2 standard units of the mean is about
= -212.506628274*EXP(-2) = -2/2.506628274*.1353353 = -.1079968336 Therefore, we can state that the instantaneous rate of change in the N'(Z) function when Z = +2 is -.1079968336. This represents rise
/run, so we can say that when Z = +2, the N'(Z) curve is rising -.1079968336 for ever) 1 unit run in Z. This is depicted in Figure 3-13.
0.4 0.3 0.2
0.1 0 -3
0.4 -2
0 <-- Z -->
Figure 3-11 Two-tailed probability of an event being beyond 2 sigma. Just as with Equation (3.21), we can eliminate the leading 1- in Equation (3.22) to obtain (1-N(ABS(Z)))*2, which represents the
probabilities of an event falling outside of ABS(Z) standard units of the mean. This is depicted in Figure 3-11. For the example where Z = 2, we can state that the probabilities of an event in a
Normally distributed random process falling outside of 2 standard units is: 2 tailed probability (outside) = (1-.9772499478)*2 = .02275005216*2 = .04550010432 Finally, we come to the case where we
want to find what the probabilities (areas under the N'(Z) curve) are for two different values of Z. 0.5
0.4 0.3 0.2 0.1 0 -3
0.3 0.2 0.1 0 -3
The tangent to the point Z=2 -2
0 <-- Z -->
Figure 3-13 N"(Z) giving the slope of the line tangent tangent to N'(Z) at Z = +2. For the reader's own reference, further derivatives are now given. These will not be needed throughout the remainder
of this text, but arc provided for the sake of completeness: (3.24) N'"(Z) = (Z^2-1)/2.506628274*EXP(-(Z^2)/2) (3.25) N""(Z) = ((3*Z)-Z^3)/2.506628274*EXP(-(Z^2)/2) (3.26) N'""(Z) = (Z^4-(6*Z^2)+3)/
2.506628274*EXP(-(Z^2)/2) As a final note regarding the Normal Distribution, you should be aware that the distribution is nowhere near as “peaked” as the graphic examples presented in this chapter
imply. The real shape of the Normal Distribution is depicted in Figure 3-14.
4 -2
0 <-- Z -->
Figure 3-12 The area between -1 and +2 standard units. Suppose we want to find the area under the N'(Z) curve between -1 standard unit and +2 standard units. There are a couple of ways to accomplish
this. To begin with, we can compute the probability of not exceeding +2 standard units with Equation (3.21), and from this we can subtract the probability of not exceeding -1 standard units (see
Figure 312). This would give us: .9772499478-.1586552595 = .8185946883 Another way we could have performed this is to take the number 1, representing the entire area under the curve, and then
subtract the sum of the probability of not exceeding -1 standard unit and the probability of exceeding 2 standard units: = 1-(.022750052+.1586552595) = 1 .1814053117 = .8185946883 With the basic
mathematical tools regarding the Normal Distribution thus far covered in this chapter, you can now use your powers of reasoning to figure any probabilities of occurrence for Normally distributed
random variables.
FURTHER DERIVATIVES OF THE NORMAL Sometimes you may want to know the second derivative of the N(Z) function. Since the N(Z) function gives us the area under the curve at Z, and the N'(Z) function
gives us the height of the curve itself at Z, then the N"(Z) function gives us the instantaneous slope of the curve at a given Z: (3.23) N"(Z) = -Z/2.506628274*EXP(-(Z^2/2) where EXP() = The
exponential function. To determine what the slope of the N'(Z) curve is at +2 standard units: N"(Z) = -2/2.506628274*EXP(-(+2^2)/2) - 41 -
1 0 -3
Figure 3-14 The real shape of the Normal Distribution. Notice that here the scales of the two axes are the same, whereas in the other graphic examples they differ so as to exaggerate the shape of the
THE LOGNORMAL DISTRIBUTION Many of the real-world applications in trading require a small but crucial modification to the Normal Distribution. This modification takes the Normal, and changes it to
what is known as the Lognormal Distribution. Consider that the price of any freely traded item has zero as a lower limit.2 Therefore, as the price of an item drops and approaches zero, it should in
theory become progressively more difficult for the item to get lower. For example, consider the price of a hypothetical stock at $10 per share. If the stock were to drop $5, to $5 per share, a 50%
loss, then according to the Normal Distribution it could just as easily drop from $5 to $0. However, under the Lognormal, a similar drop of 50% from a price
of $5 per share to $2.50 per share would be about as probable as a drop from $10 to $5 per share. The Lognormal Distribution, Figure 3-15, works exactly like the Normal Distribution except that with
the Lognormal we are dealing with percentage changes rather than absolute changes.
Figure 3-15 The Normal and Lognormal distributions. Consider now the upside. According to the Lognormal, a move from $10 per share to $20 per share is about as likely as a move from $5 to $10 per
share, as both moves represent a 100% gain. That isn't to say that we won't be using the Normal Distribution. The purpose here is to introduce you to the Lognormal, show you its relationship to the
Normal (the Lognormal uses percentage price changes rather than absolute price changes), and point out that it usually is used when talking about price moves, or anytime that the Normal would apply
but be bounded on the low end at zero.2 To use the Lognormal distribution, you simply convert the data you are working with to natural logarithms.3 Now the converted data will be Normally distributed
if the raw data was Lognormally distributed. For instance, if we are discussing the distribution of price changes as being Lognormal, we can use the Normal distribution on it. First, we must divide
each closing price by the previous closing price. Suppose in this instance we are looking at the distribution of monthly closing prices (we could use any time period-hourly, daily, yearly, or
whatever). Suppose we now see $10, $5, $10, $10, then $20 per share as our first five months closing prices. This would then equate to a loss of 50% going into the second month, a gain of 100% going
into the third month, a gain of 0% going into the fourth month, and another gain of 100% into the fifth month. Respectively then, we have quotients of .5, 2, 1, and 2 for the monthly price changes of
months 2 through 5. These are the same as HPRs from one month to the next in succession. We must now convert to natural logarithms in order to study their distribution under the math for the Normal
Distribution. Thus, the natural log of .5 is -.6931473, of 2 it is .6931471, and of 1 it is 0. We are now able to apply the mathematics pertaining to the Normal distribution to this converted data.
This idea that the lowest an item can trade for is zero is not always entirely true. For instance. during tile stock market crash of 1929 and the ensuing bear market, the shareholders of many failed
banks were held liable to the depositors in those banks. Persons who owned stock in such banks not only lost their full investment, they also realized liability beyond the amount of their investment.
The point here isn't to say that such an event can or cannot happen again. Rather, we cannot always say that zero is the absolute low end of what a freely traded item can be priced at, although it
usually is. 3 The distinction between common and natural logarithms is reiterated here. A common log is a log base 10, while a natural log is a log base e, where e = 2.7182818285. The common log of X
is referred to mathematically as log(X) while the natural log is referred to as ln(X). The distinction gets blurred when we observe BASIC programming code, which often utilizes a function LOG(X) to
return the natural log. This is diametrically opposed to mathematical convention. BASIC does not have a provision For common logs, but the natural log can be converted to the common log by
multiplying the natural log by .4342917. likewise, we CM convert common logs to natural logs by multiplying the common log by 2.3026. - 42 -
THE PARAMETRIC OPTIMAL F Now that we have studied the mathematics of the Normal and Lognormal distributions, we will see how to determine an optimal f based on outcomes that are Normally distributed.
The Kelly formula is an example of a parametric optimal f in that the optimal f returned is a function of two parameters. In the Kelly formula the input parameters are the percentage of winning bets
and the payoff ratio. However, the Kelly formula only gives you the optimal f when the possible outcomes have a Bernoulli distribution. In other words, the Kelly formula will only give the correct
optimal f when there are only two possible outcomes. When the outcomes do not have a Bernoulli distribution, such as Normally distributed outcomes (which we arc about to study), the Kelly formula
will not give you the correct optimal f.4 When they are applicable, parametric techniques are far more powerful than their empirical counterparts. Assume we have a situation that can be described
completely by the Bernoulli distribution. We can derive our optimal f here by way of either the Kelly formula or the empirical technique detailed in Portfolio Management Formulas. Suppose in this
instance we win 60% of the time. Say we are tossing a coin that is biased, that we know that in the long run 60% of the tosses will be heads. We are therefore going to bet that each toss will be
heads, and the payoff is 1:1. The Kelly formula would tell us to bet a fraction of .2 of our stake on the next bet. Further suppose that of the last 20 tosses, 11 were heads and 9 were tails. If we
were to use these last 20 trades as the input into the empirical techniques, the result would be that we should risk .1 of our stake on the next bet. Which is correct, the .2 returned by the
parametric technique (the Kelly formula in this Bernoulli distributed case) or the .1 returned empirically by the last 20 tosses? The correct answer is .2, the answer returned from the parametric
technique. The reason is that the next toss has a 60% probability of being heads, not a 55% probability as the last 20 tosses would indicate. Although we are only discussing a 5% probability
difference, 1 toss in 20, the effect on how much we should bet is dramatic. Generally, the parametric techniques are inherently more accurate in this regard than are their empirical counterparts
(provided we know the distribution of the outcomes). This is the first advantage of the parametric to the empirical. This is also a critical proviso-that we must know what the distribution of
outcomes is in the long run in order to use the parametric techniques. This is the biggest drawback to using the parametric techniques. The second advantage is that the empirical technique requires a
past history of outcomes whereas the parametric does not. Further, this past history needs to be rather extensive. In the example just cited, we can assume that if we had a history of 50 tosses we
would have arrived at an empirical optimal f closer to .2. With a history of 1,000 tosses, it would be even closer according to the law of averages. The fact that the empirical techniques require a
rather lengthy stream of past data has almost restricted them to mechanical trading systems. Someone trading anything other than a mechanical trading system, be it by Elliott Wave or fundamentals,
has almost been shut out from using the optimal f technique. With the parametric techniques this is no longer true. Someone who wishes to blindly follow some market guru, for instance, now has a way
to employ the power of optimal f. Therein lies the third advantage of the parametric technique over the empirical-it can be used by any trader in any market. There is a big assumption here, however,
for someone not employing a mechanical trading system. The assumption is that the future distribution of profits and losses will resemble the distribution in the past (which is what we figure the
optimal f on). This may be less likely than with a mechanical system. This also sheds new light on the expected performance of any technique that is not purely mechanical. Even the best practitioners
of such techniques, be it by fundamentals, Gann, Elliott Wave, and so on, are doomed to fail if they are too far beyond the peak of (to the right of) the f curve. If they are too far to the left of
the peak, they are going to end up with geometrically lower profits than their expertise in their area 4
We are speaking of the Kelly formulas here in a singular sense even though there are, in fact, two different Kelly formulas, one for when the payoff ration is 1:1, and the other for when the payoff
is any ratio. In the examples of Kelly in this discussion we are assuming a payoff of 1:1, hence it doesn't matter which of the two Kelly formulas we are using.
should have made for them. Furthermore, practitioners of techniques that are not purely mechanical must realize that everything said about optimal f and the purely mechanical techniques applies. This
should be considered when contemplating expected drawdowns of such techniques. Remember that the drawdowns Will be substantial, and this fact does not mean that the technique should be abandoned. The
fourth and perhaps the biggest advantage of the parametric over the empirical method of determining optimal f, is that the parametric method allows you to do 'What if' types of modeling. For example,
suppose you are trading a market system that has been running very hot. You want to be prepared for when that market system stops performing so well, as you know it Inevitably will. With the
parametric techniques, you can vary your input parameters to reflect this and thereby put yourself at what the optimal f will be when the market system cools down to the state that the parameters you
Input reflect. The parametric techniques are therefore far more powerful than the empirical ones. So why use the empirical techniques at all? The empirical techniques are more intuitively obvious
than the parametric ones are. Hence, the empirical techniques are what one should learn first before moving on to the parametric. We have now covered the empirical techniques in detail and are
therefore prepared to study the parametric techniques.
THE DISTRIBUTION OF TRADE P&L'S Consider the following sequence of 232 trade profits and losses in points. It doesn't matter what the commodity is or what system generated this stream-it could be any
system on any market. Trade# P&L 1. 0.18 2. -1.11 3. 0.42 4. -0.83 5. 1.42 6. 0.42 1. -0.99 8. 0.87 9. 0.92 10. -0.4 11. -1.48 12. 1.87 13. 1.37 14. -1.48 15. -0.21 16. 1.82 17. 0.15 18. 0.32 19.
-1.18 20. -0.43 21. 0.42 22. 0.57 23. 4.72 24. 12.42 25. 0.15 26. 0.15 27. -1.14 28. 1.12 29. -1.88 30. 0.17 31. 0.57 32. 0.47 33. -1.88 34. 0.17 35. -1.93 36. 0.92 37. 1.45 38. 0.17 39. 1.87 40.
0.52 41. 0.67 Trade# P&L 166. 1.37 167. -1.93 168. 2.12 169. 0.62 170. 0.57 171. 0.42 172. 1.58
Trade# 42. 43. 44. 45. 46. 47. 48. 49. 50. 51. 52. 53. 54. 55. 56. 57. 58. 59. 60. 61. 62. 63. 64. 65. 66. 67. 68. 69. 70. 71. 72. 73. 74. 75. 76. 77. 78. 79. 80. 81. 82. Trade# 183. 184. 185. 186.
187. 188. 189.
P&L -1.58 -0.5 0.17 0.17 -0.65 0.96 -0.88 0.17 -1.53 0.15 -0.93 0.42 2.77 8.52 2.47 -2.08 -1.88 -1.88 1.67 -1.88 3.72 2.87 2.17 1.37 1.62 0.17 0.62 0.92 0.17 1.52 -1.78 0.22 0.92 0.32 0.17 0.57 0.17
1.18 0.17 0.72 -3.33 P&L 0.24 0.57 0.35 1.57 -1.73 -0.83 -1.18
Trade# 83. 84. 85. 86. 87. 88. 89. 90. 91. 92. 93. 94. 95. 96. 97. 98. 99. 100. 101. 102. 103. 104. 105. 106. 107. 108. 109. 110. 111. 112. 113. 114. 115. 116. 117. 118. 119. 120. 121. 122. 123.
Trade# 200. 201. 202. 203. 204. 205. 206.
P&L -4.13 -1.63 -1.23 1.62 0.27 1.97 -1.72 1.47 -1.88 1.72 1.02 0.67 0.67 -1.18 3.22 -4.83 8.42 -1.58 -1.88 1.23 1.72 1.12 -0.97 -1.88 -1.88 1.27 0.16 1.22 -0.99 1.37 0.18 0.18 2.07 1.47 4.87 -1.08
1.27 0.62 -1.03 1.82 0.42 P&L -0.98 0.17 -0.96 0.35 0.52 0.77 1.10
Trade# 124. 125. 126. 127. 128. 130. 131. 132. 133. 134. 135. 136. 137. 138. 139. 140. 141. 142. 143. 144. 145. 146. 147. 148. 149. 150. 151. 152. 153. 154. 155. 156. 157. 158. 159. 160. 161. 162.
163. 164. 165. Trade# 217. 218. 219. 220. 221. 222. 223.
P&L -2.63 -0.73 -1.83 0.32 1.62 1.02 -0.81 -0.74 1.09 -1.13 0.52 0.18 0.18 1.47 -1.07 -0.98 1.07 -0.88 -0.51 0.57 2.07 0.55 0.42 1.42 0.97 0.62 0.32 0.67 0.77 0.67 0.37 0.87 1.32 0.16 0.18 0.52 -2.33
1.07 1.32 1.42 2.72 P&L -1.08 0.25 0.14 0.79 -0.55 0.32 -1.30 - 43 -
Trade# P&L 173. 0.17 174. 0.62 175. 0.77 176. 0.37 177. -1.33 178. -1.18 179. 0.97 180. 0.70 181. 1.64 182. 0.57
Trade# 190. 191. 192. 193. 194. 195. 196. 197. 198. 199.
P&L -0.65 -0.78 -1.28 0.32 1.24 2.05 0.75 0.17 0.67 -0.56
Trade# 207. 208. 209. 210. 211. 212. 213. 214. 215. 216.
P&L -1.88 0.35 0.92 1.55 1.17 0.67 0.82 -0.98 -0.85 0.22
Trade# 224. 225. 226. 227. 228. 229. 230. 231. 232.
P&L 0.37 -0.51 0.34 -1.28 1.80 2.12 0.77 -1.33 1.52
If we wanted to determine an equalized parametric optimal f we would now convert these trade profits and losses to percentage gains and losses [based on Equations (2.10a) through (2.10c)]. Next, we
would convert these percentage profits and losses by multiplying them by the current price of the underlying instrument. For example, P&L #1 is .18. Suppose that the entry price to this trade was
100.50. Thus, the percentage gain on this trade would be .18/100.50 = .001791044776. Now suppose that the current price of this underlying instrument is 112.00. Multiplying .001791044776 by 112.00
translates into an equalized P&L of .2005970149, If we were seeking to do this procedure on an equalized basis, we would perform this operation on all 232 trade profits and losses. Whether or not we
are going to perform our calculations on an equalized basis (in this chapter we will not operate on an equalized basis), we must now calculate the mean (arithmetic) and population standard deviation
of these 232 individual trade profits and losses as .330129 and 1.743232 respectively (again, if we were doing things on an equalized basis, we would need to determine the mean and standard deviation
on the equalized trade P&L's). With these two numbers we can use Equation (3.16) to translate each individual trade profit and loss into standard units. (3.16) Z = (X-U)/S where U = The mean of the
data. S = The standard deviation of the data. X = The observed data point. Thus, to translate trade #1, a profit of .18, to standard units: Z = (.18-.330129)/1.743232 = -.150129/1.743232 =
-.08612106708 Likewise, the next three trades of -1.11, .42, and -.83 translate into -.8261258398, .05155423948, and -.6655046488 standard units respectively. If we are using equalized data, we
simply standardize by subtracting the mean of the data and dividing by the data's standard deviation. Once we have converted all of our individual trade profits and losses over to standard units, we
can bin the now standardized data. Recall that with binning there is a loss of information content about a particular distribution (in this case the distribution of the individual trades) but the
character of the distribution remains unchanged. Suppose we were to now take these 232 individual trades and place them into 10 bins. We are choosing arbitrarily here-we could have chosen 9 bins or
50 bins. In fact, one of the big arguments about binning data is that most frequently there is considerable arbitrariness as to how the bins should be chosen. Whenever we bin something, we must
decide on the ranges of the bins. We will therefore select a range of -2 to +2 sigmas, or standard deviations. This means we will have 10 equally spaced bins between -2 standard units to +2 standard
units. Since there are 4 standard units in total between -2 and +2 standard units and we are dividing this space into 10 equal regions, we have 4/10 = -4 standard units as the size or "width" of each
bin. Therefore, our first bin, the one "farthest to the left," will contain those trades that were within -2 to -1.6 standard units, the next one trades from -1.6 to -1.2, then -1.2 to -.8, and so
on, until our final bin contains those trades that were 1.6 to 2 standard units. Those trades that are less than -2 standard units or greater than +2 standard units will not be binned in this
exercise, and we will ignore them. If we so desired, we could have included them in the extreme bins, placing those data points less than -2 in the -2 to -1.6 bin, and likewise for those data points
greater than 2. Of course, we could have chosen a wider range for binning, but since these trades are beyond the range of our bins, we have chosen not to include them. In other words, we are
eliminating from this exercise those trades with P&L's less than .330129-
(1.743232*2) = -3.156335 or greater than .330129+(1.743232*2) = 3.816593. What we have created now is a distribution of this system's trade P&L's. Our distribution contains 10 data points because we
chose to work with 10
232 actual trades
Normal distribution
Figure 3-16 232 individual trades in 10 bins from -2 to +2 sigma versus the Normal Distribution. bins. Each data point represents the number of trades that fell into that bin. Each trade could not
fall into more than 1 bin, and if the trade was beyond 2 standard units either side of the mean (P&L's<-3.156335 or >3.816593), then it is not represented in this distribution. Figure 3-16 shows this
distribution as we have just calculated it. "Wait a minute," you say. "Shouldn't the distribution of a trading system's P&L's be skewed to the right because we are probably going to have a few large
profits?" This particular distribution of 232 trade P&L's happens to be from a system that very often takes small profits via a target. Many people have the mistaken impression that P&L distributions
are going to be skewed to the right for all trading systems. This is not at all true, as Figure 3-16 attests. Different market systems will have different distributions, and you shouldn't expect them
all to be the same. Also in Figure 3-16, superimposed over the distribution we have just put together, is the Normal Distribution as it would look for 232 trade P&L's if they were Normally
distributed. This was done so that you can compare, graphically, the trade P&L's as we have just calculated them to the Normal. The Normal Distribution here is calculated by first taking the
boundaries of each bin. For the leftmost bin in our example this would be Z = -2 and Z = -1.6. Now we run these Z values through Equation (3.21) to convert these boundaries to a cumulative
probability. In our example, this corresponds to .02275 for Z = -2 and .05479932 for Z = -1.6. Next, we take the absolute value of the difference between these two values, which gives us ABS
(.02275-.05479932) = .03204932 for our example. Last, we multiply this answer by the number of data points, which in this case is 232 because there are 232 total trades (we still must use 232 even
though some have been eliminated because they were beyond the range of our bins). Therefore, we can state that if the data were Normally distributed and placed into 10 bins of equal width between -2
and +2 sigmas, then the leftmost bin would contain .03204932*232 = 7.43544224 elements. If we were to calculate this for each of the 10 bins, we would calculate the Normal curve superimposed in
Figure 3-16.
FINDING OPTIMAL F ON THE NORMAL DISTRIBUTION Now we can construct a technique for finding the optimal f on Normally distributed data. Like the Kelly formula, this will be a parametric technique.
However, this technique is far more powerful than the Kelly formula, because the Kelly formula allows for only two possible outcomes for an event whereas this technique allows for the full spectrum
of the outcomes (provided that the outcomes are Normally distributed). The beauty of Normally distributed outcomes (aside from the fact that they so frequently occur, since they are the limit of many
other distributions) is that they can be described by 2 parameters. The Kelly formulas will give you the optimal f for Bernoulli distributed outcomes by inputting the 2 parameters of the payoff ratio
and the probability of winning. The technique about to be described likewise only needs two pa- 44 -
rameters as input, the average and the standard deviation of the outcomes, to return the optimal f. Recall that the Normal Distribution is a continuous distribution, In order to use this technique we
need to make this distribution be discrete. Further recall that the Normal Distribution is unbounded. That is, the distribution runs from minus infinity on the left to plus infinity on the right.
Therefore, the first two steps that we must take to find the optimal f on Normally distributed data is that we must determine (1) at how many sigmas from the mean of the distribution we truncate the
distribution, and (2) into how many equally spaced data points will we divide the range between the two extremes determined in (1). For instance, we know that 99.73% of all the data points will fall
between plus and minus 3 sigmas of the mean, so we might decide to use 3 sigmas as our parameter for (1). In other words, we are deciding to consider the Normal Distribution only between minus 3
sigmas and plus 3 sigmas of the mean. In so doing, we will encompass 99.73% of all of the activity under the Normal Distribution. Generally we will want to use a value of 3 to 5 sigmas for this
parameter. Regarding step (2), the number of equally spaced data points, we will generally want to use a bare minimum of ten times the number of sigmas we are using in (1). If we select 3 sigmas for
(1), then we should select at least 30 equally spaced data points for (2). This means that we are going to take the horizontal axis of the Normal Distribution, of which we are using the area from
minus 3 sigmas to plus 3 sigmas from the mean, and divide that into 30 equally spaced points. Since there are 6 sigmas between minus 3 sigmas and plus 3 sigmas, and we want to divide this into 30
equally spaced points, we must divide 6 by 30-1, or 29. This gives us .2068965517. So, our first data point will be minus 3, and we will add .2068965517 to each previous point until we reach plus 3,
at which point we will have created 30 equally spaced data points between minus 3 and plus 3. Therefore, our second data point will be -3 +.2068965517 = -2.793103448, our third data point
2.79310344+.2068965517 = -2.586206896, and so on. In so doing, we will have determined the 30 horizontal input coordinates to this system. The more data points you decide on, the better will be the
resolution of the Normal curve. Using ten times the number of sigmas is a rough rule for determining the bare minimum number of data points you should use. Recall that the Normal distribution is a
continuous distribution. However, we must make it discrete in order to find the optimal f on it. The greater the number of equally spaced data points we use, the closer our discrete model will be to
the actual continuous distribution itself, with the limit of the number of equally spaced data points approaching infinity where the discrete model approaches the continuous exactly. Why not use an
extremely large number of data points? The more data points you use in the Normal curve, the more calculations will be required to find the optimal f on it. Even though you will usually be using a
computer to solve for the optimal f, it will still be slower the more data points you use. Further, each data point added resolves the curve further to a lesser degree than the previous data point
did. We will refer to these first two input parameters as the bounding parameters. Now, the third and fourth steps are to determine the arithmetic average trade and the population standard deviation
for the market system we are working on. If you do not have a mechanical system, you can get these numbers from your brokerage statements or you can estimate them. That is the one of the real
benefits of this technique-that you don't need to have a mechanical system, you don't even need brokerage statements or paper trading results to use this technique. The technique can be used by
simply estimating these two inputs, the arithmetic mean average trade (in points or in dollars) and the population standard deviation of trades (in points or in dollars, so long as it's consistent
with what you use for the arithmetic mean trade). Be forewarned, though, that your results will only be as accurate as your estimates. If you are having difficulty estimating your population standard
deviation, then simply try to estimate by how much, on average, a trade will differ from the average trade. By estimating the mean absolute deviation in this way, you can use Equation (3.18) to
convert your estimated mean absolute deviation into an estimated standard deviation: (3.18) S = M*1/.7978845609 = M*1.253314137 where
S = The standard deviation. M = The mean absolute deviation. We will refer to these two parameters, the arithmetic mean average trade and the standard deviation of the trades, as the actual input
parameters. Now we want to take all of the equally spaced data points from step (2) and find their corresponding price values, based on the arithmetic mean and standard deviation. Recall that our
equally spaced data points are expressed in terms of standard units. Now for each of these equally spaced data points we will find the corresponding price as: (3.27) D = U+(S*E) where D = The price
value corresponding to a standard unit value. E = The standard unit value. S = The population standard deviation. U = The arithmetic mean. Once we have determined all of the price values
corresponding to each data point we have truly accomplished a great deal. We have now constructed the distribution that we expect the future data points to tend to. However, this technique allows us
to do a lot more than that. We can incorporate two more parameters that will allow us to perform "What if ' types of scenarios about the future. These parameters, which we will call' the "What if"
parameters, allow us to see the effect of a change in our average trade or a change in the dispersion (standard deviation) of our trades. The first of these parameters, called shrink, affects the
average trade. Shrink is simply a multiplier on our average trade. Recall that when we find the optimal f we also obtain other calculations, which are useful by-products of the optimal f. Such
calculations include the geometric mean, TWR, and geometric average trade. Shrink is the factor by which we will multiply our average trade before we perform the optimal f technique on it. Hence,
shrink lets us see what the optimal f would be if our average trade were affected by shrink as well as how the other byproduct calculations would be affected. For example, suppose you are trading a
system that has been running very hot lately. You know from past experience that the system is likely to stop performing so well in the future. You would like to see what would happen if the average
trade were cut in half. By using a shrink value of .5 (since shrink is a multiplier, the average trade times .5 equals the average trade cut in half) you can perform the optimal f technique to
determine what your optimal f should be if the average trade were to be cut in half. Further, you can see how such changes affect your geometric average trade, and so on. By using a shrink value of
2, you can also see the affect that a doubling of your average trade would have. In other words, the shrink parameter can also be used to increase (unshrink?) your average trade. What's more, it lets
you take an unprofitable system (that is, a system with an average trade less than zero), and, by using a negative value for shrink, see what would happen if that system became profitable. For
example, suppose you have a system that shows an average trade of -$100. If you use a shrink value of -.5, this will give you your optimal f for this distribution as if the average trade were $50,
since -100*-.5 = 50. If we used a shrink factor of -2, we would obtain the distribution centered about an average trade of $200. You must be careful in using these "What if" parameters, for they make
it easy to mismanage performance. Mention was just made of how you can turn a system with a negative arithmetic average trade into a positive one. This can lead to problems if, for instance, in the
future, you still have a negative expectation. The other "What if" parameter is one called stretch. This is not, as its name would imply, the opposite of shrink. Rather, stretch is the multiplier to
be used on the standard deviation. You can use this parameter to determine the effect on f and its by-products by an increase or decrease in the dispersion. Also, unlike shrink, stretch must always
be a positive number, whereas shrink can be positive or negative (so long as the average trade times shrink is positive). If you want to see what will happen if your standard deviation doubles,
simply use a value of 2 for stretch. To - 45 -
see what Would happen if the dispersion quieted down, use a value less than 1. You will notice in using this technique that lowering the stretch toward zero will tend to increase the by-product
calculations, resulting in a more optimistic assessment of the future and vice versa. Shrink works in an opposite fashion, as lowering the shrink towards zero will result in more pessimistic
assessments about the future and vice versa. Once we have determined what values we want to use for stretch and shrink (and for the time being we will use values of 1 for both, which means to leave
the actual parameters unaffected) we can amend Equation (3.27) to: (3.28) D = (U*Shrink)+(S*E*Stretch) where D = The price value corresponding to a standard unit value. E = The standard unit value. S
= The population standard deviation. U = The arithmetic mean. To summarize thus far, the first two steps are to determine the bounding parameters of the number of sigmas either side of the mean we
are going to use, as well as how many equally spaced data points we are going to use within this range. The next two steps are the actual input parameters of the arithmetic average trade and
population standard deviation. We can derive these parameters empirically by looking at the results of a given trading system or by using brokerage statements or paper trading results. We can also
derive these figures by estimation, but remember that the results obtained will only be as accurate as your estimates. The fifth and sixth steps are to determine the factors to use for stretch and
shrink if you are going to perform a "What if type of scenario. If you are not, simply use values of 1 for both stretch and shrink. Once you have completed these six steps, you can now use Equation
(3.28) to perform the seventh step. The seventh step is to convert the equally spaced data points from standard values to an actual amount of either points or dollars (depending on whether you used
points or dollars as input for your arithmetic average trade and population standard deviation). Now the eighth step is to find the associated probability with each of the equally spaced data points.
This probability is determined by using Equation (3.21): (3.21) N(Z) = 1-N'(Z)*((1.330274429*Y^5)(1.821255978*Y^4)+(1.781477937*Y^3)(.356563782*Y^2)+(.31938153*Y)) If Z<0 then N(Z) = 1-N(Z) where Y =
1/(1+.2316419*ABS(Z)) ABS() = The absolute value function. N'(Z) = .398942*EXP(-(Z^2/2)) EXP() = The exponential function. However, we will use Equation (3.21) without its 1-as the first term in the
equation and without the -Z provision (i.e., without the "If Z<0 then N(Z)-1-N(Z)"), since we want to know what the probabilities are for an event equaling or exceeding a prescribed amount of
standard units. So we go along through each of our equally spaced data points. Each point has a standard value, which we will use as the Z parameter in Equation (3.21), and a dollar or point amount.
Now there will be another variable corresponding to each equally spaced data point-the associated probability.
THE MECHANICS OF THE PROCEDURE The procedure will now be demonstrated on the trading example introduced earlier in this chapter. Since our 232 trades are currently in points, we should convert them
to their dollar representations. However, since the market is a not specified, we will assign an arbitrary value of $1,000 per point. Thus, the average trade of .330129 now becomes .330129*$1000, or
an average trade of $330.13. Likewise the population standard deviation of 1.743232 is also multiplied by $1,000 per point to give $1,743.23. Now we construct the matrix. First, we must determine the
range, in sigmas from the mean, that we want our calculations to encompass. For
our example we will choose 3 sigmas, so our range will go from minus 3 sigmas to plus 3 sigmas. Note that you should use the same amount to the left of the mean that you use to the right of the mean.
That is, if you go 3 sigmas to the left (minus 3 sigmas) then you should not go only 2 or 4 sigmas to the right, but rather you should go 3 sigmas to the right as well (i.e., plus 3 sigmas from the
mean). Next we must determine how many equally spaced data points to divide this range into. Choosing 61 as our value gives a data point at every tenth of a standard unit-simple. Thus we can
determine our column of standard values. Now we must determine the arithmetic mean that we are going to use as input. We determine this empirically from the 232 trades as $330.13. Further, we must
determine the population standard deviation, which we also determine empirically from the 232 trades as $1,743.23. Now to determine the column of associated P&L's. That is, we must determine a P&L
amount for each standard value. Before we can determine our associated P&L column, we must decide on values for stretch and shrink. Since we are not going to perform any "What if types of scenarios
at this time, we will choose a value of 1 for both stretch and shrink. Arithmetic mean = 330.13 Population Standard Deviation = 1743.23 Stretch = 1 Shrink = 1 Using Equation (3.28) we can calculate
our associated P&L column. We do this by taking each standard value and using it as E in Equation (3.28) to get the column of associated P&L's: (3.29) D = (U*Shrink)+(S*E*Stretch) where D = The price
value corresponding to a standard unit value. E = The standard unit value. S = The population standard deviation. U = The arithmetic mean. For the -3 standard value, the associated P&L is: D =
(U*Shrink)+(S*E*Stretch) = (330.129*1)+(1743.232*(-3)*1) = 330.129+(-5229.696) = 330.129-5229.696 = 4899.567 Thus, our associated P&L column at a standard value of -3 equals 4899.567. We now want to
construct the associated P&L for the next standard value, which is -2.9, so we simply perform the same Equation, (3.29), again-only this time we use a value of -2.9 for E. Now to determine the
associated probability column. This is calculated using the standard value column as the Z input to Equation (3.21) without the preceding 1-and without the-Z provision (i.e, the "If Z < 0 then N(Z) =
1-N(Z)"). For the standard value of -3 (Z = -3), this is: N(Z) = N'(Z)*(( 1.330274429*Y^5)(1.821255978*Y^4)+(1.781477937*Y^3)(.356563782*Y^2+(.31938153*Y)) If Z<0 then N(Z) = 1-N(Z) where Y = 1/
(1+.2316419*ABS(Z)) ABS() = The absolute value function. N'(Z) = .398942*EXP(-(Z^2/2)) EXP() = The exponential function. Thus: N'(3) = .398942*EXP(-((-3)^2/2)) = .398942*EXP(-(9/2)) = .398942*EXP
(-4.5) = .398942*.011109 = .004431846678 Y = 1/(1+2316419*ABS(-3)) = 1/(1+2316419*3) = 1/(1+6949257) = 1/1.6949257 = .5899963639 N(-3) = .004431846678*((1.330274429*.5899963639^5)
- 46 -
= .004431846678*((1.330274429*.07149022693)(1.821255978*.1211706)+(1.781477937*.2053752)(.356563782*.3480957094)+(.31938153*.5899963639)) = .004431846678*
(.09510162081-.2206826796+.3658713876.1241183226+.1884339414) = .004431846678*.3046059476 = .001349966857 Note that even though Z is negative (Z = -3), we do not adjust N(Z) here by making N(Z) = 1-N
(Z). Since we are not using the-Z provision, we just let the answer be. Now for each value in the standard value column there will be a corresponding entry in the associated P&L column and in the
associated probability column. This is shown in the following table. Once you have these three columns established you are ready to begin the search for the optimal f and its by-products. STD VALUE
-3.0 -2.9 -2.8 -2.7 -2.6 -2.5 -2.4 -2.3 -2.2 -2.1 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4 -1.3 -1.2 -1.1 -1.0 -0.9 -6.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1
1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0
($4,899.57) ($4,725.24) ($4,550.92) ($4,376.60) ($4,202.27) ($4,027.95) ($3,853.63) ($3,679.30) ($3,504.98) ($3,330.66) ($3,156.33) ($2,982.01) ($2,807.69) ($2,633.37) ($2,459.04) ($2,284.72)
($2,110.40) ($1,936.07) ($1,761.75) ($1,587.43) ($1,413.10) ($1,238.78) ($1,064.46) ($890.13) ($715.81) ($541.49) ($367.16) ($192.84) ($18.52) $155.81 $330.13 $504.45 $678.78 $853.10 $1,027.42
$1,201.75 $1,376.07 $1,550.39 $1,724.71 $1,899.04 $2,073.36 $2,247.68 $2,422.01 $2,596.33 $2,770.65 $2,944.98 $3,119.30 $3,293.62 $3,467.95 $3,642.27 $3,816.59 $3,990.92 $4,165.24 $4,339.56 $4,513.89
$4,688.21 $4,862.53 $5,036.86 $5,211.18 $5,385.50 $5,559.83
ASSOCIATED PROBABILITY 0.001350 0.001866 0.002555 0.003467 0.004661 0.006210 0.008198 0.010724 0.013903 0.017864 0.022750 0.028716 0.035930 0.044565 0.054799 0.066807 0.080757 0.096800 0.115070
0.135666 0.158655 0.184060 0.211855 0.241963 0.274253 0.308537 0.344578 0.382088 0.420740 0.460172 0.500000 0.460172 0.420740 0.382088 0.344578 0.308537 0.274253 0.241963 0.211855 0.184060 0.158655
0.135666 0.115070 0.096800 0.080757 0.066807 0.054799 0.044565 0.035930 0.028716 0.022750 0.017864 0.013903 0.010724 0.008198 0.006210 0.004661 0.003467 0.002555 0.001866 0.001350
ASSOCIATED HPR AT f=.01 0.9999864325 0.9999819179 0.9999761557 0.9999688918 0.9999598499 0.9999487404 0.9999352717 0.9999191675 0.9999001875 0.9998781535 0.9998529794 0.9998247051 0.9997935316
0.9997598578 0.9997243139 0.9996877915 0.9996514657 0.9996168071 0.9995855817 0.999559835 0.9995418607 0.9995341524 0.9995393392 0.999560108 0.9995991135 0.9996588827 09997417168 0.9998495968
0.9999840984 1.0001463216 1.0003368389 1.0004736542 1.00058265 1.0006649234 1.0007220715 1.0007561259 1.0007694689 1.0007647383 1.0007447264 1.0007122776 1.0006701921 1.0006211392 .0005675842
.0005117319 .0004554875 1.0004004351 1.0003478328 .0002986228 .0002534528 1.0002127072 1.0001765438 .000144934 .0001177033 .0000945697 .0000751794 1.0000591373 1.0000460328 1.0000354603 1.0000270338
1.0000203976 1.0000152327
By-products atf-.01: TWR = 1.0053555695 Sum of the probabilities = 7.9791232176 Geomean = 1.0006696309 GAT = $328.09 Here is how you go about finding the optimal f. First, you must determine the
search method for f. You can simply loop from 0 to 1 by a predetermined amount (e.g., .01), use an iterative technique, or use the technique of parabolic interpolation described in Portfolio
Management formulas. What you seek to find is what value for f (between 0 and 1) will result in the highest geometric mean. Once you have decided upon a search technique, you must determine what the
worst-case associated P&L is in your table. In our example it is the P&L corresponding to -3 standard units, 4899.57. You will need to use this particular value repeatedly throughout the
calculations. In order to find the geometric mean for a given f value, for each value of f that you are going to process in your search for the optimal, you must convert each associated P&L and
probability to an HPR. Equation (3.30) shows the calculation for the HPR: (3.30) HPR = (1+(L/(W/(-f))))^P where L = The associated P&L. W = The worst-case associated P&L in the table (This will
always be a negative value). f = The tested value for f. P = The associated probability. Working through an example now where we use the value of .01 for the tested value for f, we will find the
associated HPR at the standard value of -3. Here, our worst-case associated P&L is 4899.57, as is our associated P&L. Therefore, our HPR here is: HPR = (1+(-4899.57/-4899.57/(-.01))))^.001349966857 =
(1+(-4899.57/489957))^.001349966857 = (1+(-.01))^.001349966857 = .99^.001349966857 = .9999864325 Now we move down to our next standard value, of -2.9, where we have an associated P&L of -2866.72 and
an associated probability of 0.001865. Our associated HPR here will be: HPR = (-4725.24/(-4899.57/(-.01))))^.001866 = (1+(-4725.24/489957))^001866 = (1+(-4725.24/489957))^.001866 = (1+
(-.009644193266))^.001866 = .990355807^.001866 = .9999819 Once we have calculated an associated HPR for each standard value for a given test value off (.01 in our example table), you are ready to
calculate the TWR. The TWR is simply the product of all of the HPRs for a given f value multiplied together: (3.31) TRW = (∏[i = 1,N]HPRi) where N = The total number of equally spaced data points.
HPRi = The HPR corresponding to the i'th data point, given by Equation (3.30). So for our test value off = .01, the TWR will be: TWR = .9999864325*.9999819179*...*1.0000152327 = 1.0053555695 We can
readily convert a TWR into a geometric mean by taking the TWR to the power of 1 divided by the sum of all of the associated probabilities. (3.32) G = TRW^(1/∑[i = 1,N] Pi) where N = The number of
equally spaced data points. Pi = The associated probability of the ith data point. Note that if we sum the column that lists the 61 associated probabilities it equals 7.979105. Therefore, our
geometric mean at f = .01 is: G = 1.0053555695^(1/7.979105) = 1.0053555695^.1253273393 = 1.00066963 - 47 -
We can also calculate the geometric average trade (GAT). This is the amount you would have made, on average per contract per trade, if you were trading this distribution of outcomes at a specified f
value. (3.33) GAT = (G(f)-1)*(w/(-f)) where G(f) = The geometric mean for a given f value. f = The given f value. W = The worst-case associated P&L. In the case of our example, the f value is .01:
GAT = (1.00066963-1)*(-4899.57/(-.01)) = .00066963*489957 = 328.09 Therefore, we would expect to make, on average per contract per trade, $328.09. Now we go to our next value for f that must be
tested according to our chosen search procedure for the optimal f In the case of our example we are looping from 0 to 1 by .01 for f, so our next test value for f is .02. We will do the same thing
again. We will calculate a new associated HPRs column, and calculate our TWR and geometric mean. The f value that results in the highest geometric mean is that value for f which is the optimal based
on the input parameters we have used. In our example, if we were to continue with our search for the optimal f, we would find the optimal at f = .744 (I am using a step increment of .001 in my search
for the optimal f here.) This results in a geometric mean of 1.0265. Therefore, the corresponding geometric average trade is $174.45. It is important to note that the TWR itself doesn't have any real
meaning as a by-product. Rather, when we are calculating our geometric mean parametrically, as we are here, the TWR is simply an interim step in obtaining that geometric mean. Now, we can figure what
our TWR would be after X trades by taking the geometric mean to the power of X. Therefore, if we want to calculate our TWR for 232 trades at a geometric mean of 1.0265, we would raise 1.0265 to the
power of 232, obtaining 431.79. So we can state that trading at an optimal f of .744, we would expect to make 43,079% ((431.79-1)*100) on our stake after 232 trades. Another by-product we will
calculate is our threshold to geometric Equation (2.02): Threshold to geometric = 330.13/174.45*-4899.57/-.744 = 12,462.32 Notice that the arithmetic average trade of $330.13 is not something that we
have calculated with this technique, rather it is a given as it is one of the input parameters. We can now convert our optimal f into how many contracts to trade by the equations: (3.34) K = E/Q
where K = The number of contracts to trade. E = The current account equity. (3.35) Q = W/( -f) where W = The worst-case associated P&L. f = The optimal f value. Note that this variable, Q, represents
a number that you can divide your account equity by as your equity changes on a day-by-day basis to know how many contracts to trade. Returning now to our example: Q = -4,899.57/-.744 = $6,585.44
Therefore, we will trade 1 contract for every $6,585.44 in account equity. For a $25,000 account this means we would trade: K = 25000/6585.44 = 3.796253553 Since we cannot trade in fractional
contracts, we must round this figure of 3.796253553 down to the nearest integer. We would therefore trade 3 contracts for a $25,000 account. The reason we always round down rather than up is that the
price extracted for being slightly below optimal is less than the price for being slightly beyond it. Notice how sensitive the optimal number of contracts to trade is to the worst loss. This worst
loss is solely a function of how many sigmas
you have decided to go to the left of the mean. This bounding parameter, the range of sigmas, is very important in this calculation. We have chosen three sigmas in our calculation. This means that we
are, in effect, budgeted for a three-Sigma loss. However, a loss greater than three sigmas can really hurt us, depending on how far beyond three sigmas it is. Therefore, you should be very careful
what value you choose for this range bounding parameter. You'll have a lot riding on it. Notice that for the sake of simplicity in illustration, we have not deducted commissions and slippage from
these figures. If you wanted to incorporate commissions and slippage, you should deduct X dollars in commissions and slippage from each of the 232 trades at the outset of this exercise. You would
calculate your arithmetic average trade and population standard deviation from this set of 232 adjusted trades, and then perform the exercise exactly as described. We could now go back and perform a
"What if type of scenario here. Suppose we want to see what will happen if the system begins to perform at only half the profitability it is now (shrink = .5). Further, assume that the market that
the system we are looking at is in gets very volatile, and that as a consequence the dispersion among the trades increases by 60% (stretch = 1.6). By pumping these parameters through this system we
can see what the optimal will be so that we can make adjustments to our trading before these changes become history. In so doing we find that the optimal f now becomes ,262, or to trade 1 contract
for every $31,305.92 in account equity (since the worst-case associated P&L is strongly affected by changes in stretch and shrink). This is quite a change. This means that if these changes in the
market system start to materialize, we are going to have to do some altering in our money management regarding that system. The geometric mean will drop to 1.0027, the geometric average trade will be
cut to $83.02, and the TWR over 232 trades will be 1.869. This is not even close to what it presently would be. All of this is predicated upon a 50% decrease in average trade and a 60% increase in
standard deviation. This quite possibly could happen. It is also quite possible that the future could work out more favorably than the past. We can test this out, too. Suppose we want to see what
will happen if our average profit increases by only 10%. We can check this by inputting a shrink value of 1.1. These “What if” parameters, stretch and shrink, really give us a great deal of power in
our money management. The closer your distribution of trade P&L's is to Normal to begin with, the better the technique will work for you. The problem with almost any money management technique is
that there is a certain amount of "slop" involved. Here, we can define slop as the difference between the Normal Distribution and the distribution we are actually using. The difference between the
two is slop, and the more slop there is, the less effective the technique becomes. To illustrate, recall that using this method we have determined that to trade 1 contract for every $6,585.44 in
account equity is optimal. However, if we were to go over these trades and find our optimal f empirically, we would find that the optimal is to trade 1 contract for every $7,918.04 in account equity.
As you can see, using the Normal Distribution technique here would have us slightly to the right of the f curve, trading slightly more contracts than the empirical would suggest. However, as we shall
see, there is a lot to be said for expecting the future distribution of prices to be Normally distributed. When someone buys or sells an option, the assumption that the future distribution of the log
of price changes in the underlying instrument will be Normal is built into the price of the option. Along this same line of reasoning, someone who is entering a trade in a market and is not using a
mechanical system can be said to be looking at the same possible future distribution. The technique detailed in this chapter was shown using data that was not equalized. We can also use this very
same technique on equalized data by incorporating the following changes: Before the data is standardized, it should be equalized by first converting all of the trade profits and losses to percentage
profits and losses per Equations (2.10a) through (2.10c). Then these percentage profits and losses should be translated into percentages of the current price by simply multiplying them by the current
price. 1. When you go to standardize this data, standardize the now equalized data by using the mean and standard deviation of the equalized data. 2. The rest of the procedure is the same as written
in this chapter in terms of determining the optimal f, geometric mean, and TWR. The - 48 -
geometric average trade, arithmetic average trade, and threshold to the geometric are only valid for the current price of the underlying instrument. When the price of the underlying instrument
changes, the procedure must be done again, going back to step 1 and multiplying the percentage profits and losses by the new underlying price. When you go to redo the procedure with a different
underlying price, you will obtain the same optimal f, geometric mean, and TWR. However, your arithmetic average trade, geometric average trade, and threshold to the geometric will differ, depending
on the new price of the underlying instrument. 3. The number of contracts to trade as given in Equation (3.34) must be changed. The worst-case associated P&L, the W variable in Equation (3.34) [as
subequation (3.35)] will be different as a result of the changes caused in the equalized data by a different current price. In this chapter we have learned how to find the optimal f on a probability
distribution. We have used the Normal Distribution because it shows up so frequently in many naturally occurring processes and because it is easier to work with than many other distributions, since
its cumulative density function, Equation (3.21), exists.5 Yet the Normal is often regarded as a poor model for the distribution of trade profits and losses. What then is a good model for our
purposes? In the next chapter we will address this question and build upon the techniques we have learned in this chapter to work for any type of probability distribution, whether its cumulative
density function is known or not.
Again, the cumulative density function to the Normal Distribution does not really exist, but rather is very closely approximated by Equation (3.21). However, the cumulative density of the Normal can
at least be approximated by an equation, a luxury which not all distributions possess.
Chapter 4 - Parametric Techniques on Other Distributions We have seen in the previous chapter how to find the optimal f and its by-products on the Normal Distribution. The same technique can be
applied to any other distribution where the cumulative density function is known. Many of these more common distributions and their cumulative density functions are covered in Appendix B.
Unfortunately, most distributions of trade P&L's do not fit neatly into the Normal or other common distribution functions. In this chapter we first treat this problem of the undefined nature of the
distribution of trade P&L's and later look at the technique of scenario planning, a natural outgrowth of the notion of optimal f. This technique has many broad applications. This then leads into
finding the optimal f on a binned distribution, which leads us to the next chapter regarding both options and multiple simultaneous positions. Before we attempt to model the real distribution of
trade P&L's, we must have a method for comparing two distributions.
THE KOLMOGOROV-SMIRNOV (K-S) TEST The chi-square test is no doubt the most popular of all methods of comparing two distributions. Since many market-oriented applications other than the ones we
perform in this chapter often use the chi-square test, it is discussed in Appendix A. However, the best test for our purposes may well be the K-S test. This very efficient test is applicable to
unbinned distributions that are a function of a single independent variable (profit per trade in our case). All cumulative density functions have a minimum value of 0 and a maximum value of 1. What
goes on in between differentiates them. The K-S test measures a very simple variable, D, which is defined as the maximum absolute value of the difference between two distributions' cumulative density
functions. To perform the K-S test is relatively simple. N objects (trades in our case) are standardized (by subtracting the mean and dividing by the standard deviation) and sorted in ascending
order. As we go through these sorted and standardized trades, the cumulative probability is however many trades we've gone through divided by N. When we get to our first trade in the sorted sequence,
the trade with the lowest standard value, the cumulative density function (CDF) is equal to 1/N. With each standard value that we pass along the way up to our highest standard value, 1 is added to
the numerator until, at the end of the sequence, our CDF is equal to N/N or 1. For each standard value we can compute the theoretical distribution that we wish to compare to. Thus, we can compare our
actual cumulative density to any theoretical cumulative density. The variable D, the K-S statistic, is equal to the greatest distance between any standard values of our actual cumulative density and
the value of the theoretical distribution's CDF at that standard value. Whichever standard value results in the greatest difference is assigned to the variable D. When comparing our actual CDF at a
given standard value to the theoretical CDF at that standard value, we must also compare the previous standard value's actual CDF to the current standard value's actual CDF. The reason is that the
actual CDF breaks upward instantaneously at the data points, and, if the actual is below the theoretical, the difference between the lines is greater the instant before the actual jumps up.
Cumulative probability 1 0.9 0.8 A 0.7 0.6 B 0.5 0.4 0.3 0.2 Actual 0.1 Theoretical 0 -3 -2 -1 0 1 Standard values
Figure 4-1 The K-S test. - 49 -
To see this, look at Figure 4-1. Notice that at point A the actual line is above the theoretical. Therefore, we want to compare the current actual CDF value to the current theoretical value to find
the greatest difference. Yet at point B, the actual line is below the theoretical. Therefore, we want to compare the previous actual value to the current theoretical value. The rationale is that we
are measuring the greatest distance between the two lines. Since we are measuring at the instant the actual jumps up, we can consider using the previous value for the actual as the current value for
the actual the instant before it jumps. In summary, then, for each standard value, we want to take the absolute value of the difference between the current actual CDF value and the current
theoretical CDF value. We also want to take the absolute value of the difference between the previous actual CDF value and the current theoretical CDF value. By doing this for all standard values,
all points where the actual CDF jumps up by 1/N, and taking the greatest difference, we will have determined the variable D. The lower the value of D, the more the two distributions are alike. We can
readily convert the D value to a significance level by the following formula: (4.01) SIG = ∑[j = 1, ∞] (j%2)*4-2*EXP(-2*j^2*(N^(1/2)*D)^2) where SIG = The significance level for a given D and N. D =
The K-S statistic. N = The number of trades that the K-S statistic is determined over. % = The modulus operator, the remainder from division. As it is used here, J % 2 yields the remainder when J is
divided by 2. EXP() = The exponential function. There is no need to keep summing the values until J gets to infinity. The equation converges (in short order, usually) to a value. Once the convergence
is obtained to a close enough user tolerance, there is no need to continue summing values. To illustrate Equation (4.01) by example. Suppose we had 100 trades that yielded a K-S statistic of .04: J1
= (1%2)*4-2*EXP(-2*1^2*(100^(1/2)*.04)^2) = 1*4-2*EXP(-2*1^2*(10*.04)^2) = 2*EXP(-2*1^2*.4^2) =2*EXP(-2*1*.16) = 2*EXP(-.32) = 2*.726149 = 1.452298 So our first value is 1.452298. Now to this we will
add the next pass through the equation, and as such we must increment J by 1 so that J now equals J2: J2 = (2%2)*4-2*EXP(-2*2^2*(100^(1/2)*.04)^2) = 0*4-2*EXP(-2*2^2*(10*.04)^2) = -2*EXP(-2*2^2*.4^2)
= -2*EXP(-2*4*.16) = -2*EXP(-1.28) = -2*.2780373 = -.5560746 Adding this value of -.5560746 back into our running sum of 1.452298 gives us a new running sum of .8962234. We again increment J by 1, so
it equals J3, and perform the equation. We take the resulting sum and add it to our running total of .8962234. We keep on doing this until we converge to a value within a close enough tolerance. For
our example, this point of convergence will be right around .997, depending upon how many decimal places we want to be accurate to. This answer means that for 100 trades where the greatest value
between the two distributions was .04, we can be 99.7% certain that the actual distribution was generated by the theoretical distribution function. In other words, we can be 99.7% certain that the
theoretical distribution function represents the actual distribution. Incidentally, this is a very good significance level.
CREATING OUR OWN CHARACTERISTIC DISTRIBUTION FUNCTION We have determined that the Normal Probability Distribution is generally not a very good model of the distribution of trade profits and losses.
Further, none of the more common probability distributions are either. Therefore, we must create a function to model the distribution of our trade profits and losses ourselves. The distribution of
the logs of price changes is generally assumed to be of the stable Paretian variety (for a discussion of the stable Paretian distribution, refer to Appendix B). The distribution of trade P&L's can be
regarded as a transformation of the distribution of prices. This transformation occurs as a result of trading techniques such as traders trying to cut their losses and let their profits run. Hence,
the distribution of trade P&L's can also be regarded as of the stable Paretian variety. What we are about to study, however, is not the stable Paretian. The stable Paretian, like all other
distributional functions, models a specific probability phenomenon. The stable Paretian models the distribution of sums of independent, identically distributed random variables. The distributional
function we arc about to study does not model a specific probability phenomenon. Rather, it models other unimodal distributional functions. As such, it can replicate the shape, and therefore the
probability densities, of the stable Paretian as well as any other unimodal distribution. Now we will create this function. To begin with, consider the following equation: (4.02) Y = 1/(X^2+1) This
equation graphs as a general bell-shaped curve, symmetric about the X axis, as is shown in Figure 4-2.
1 0.8 0.6 0.4
0.8 0.6 0.4 0.2 0 -3
Figure 4-3 LOC =-.5 SCALE = 1 SKEW = 0 KURT = 2 Likewise, if we wanted to shift location to the right, we would use a positive value for the LOC variable. Keeping LOC at zero will result in no shift
in location, as depicted in Figure 4-2. The exponent in the denominator affects kurtosis. Thus far, we have seen the distribution with the kurtosis set to a value of 2, but we can control the
kurtosis of the distribution by changing the value of the exponent. This alters our characteristic function, which now appears as: (4.04) Y = 1/((X-LOC)^KURT+1) where Y = The ordinate of the
characteristic function. X = The standard value amount. LOC = A variable representing the location, the first moment of the distribution. KURT = A variable representing kurtosis, the fourth moment of
the distribution. Figures 4-4 and 4-5 demonstrate the effect of the kurtosis variable on our characteristic function. Note that the higher the exponent the more flat topped and thin-tailed the
distribution (platykurtic), and the lower the exponent, the more pointed the peak and thicker the tails of the distribution (leptokurtic).
0.2 0 -3
0.8 -2
Figure 4-2 LOC = 0 SCALE = 1 SKEW = 0 KURT = 2. We will thus build from this general equation. The variable X can be thought of as the number of standard units we are either side of the mean, or Y
axis. We can affect the first moment of this "distribution," the location, by adding a value to represent a change in location to X. Thus, the equation becomes: (4.03) Y = 1/((X-LOC)^2+1) where Y =
The ordinate of the characteristic function. X = The standard value amount. LOC = A variable representing the location, the first moment of the distribution. Thus, if we wanted to alter location by
moving it to the left by 1/2 of a standard unit, we would set LOC to -.5. This would give us the graph depicted in Figure 4-3.
0.6 0.4 0.2 0 -3
Figure 4-4 LOC = 0 SCALE = 1 SKEW =0 KURT = 3.
1 0.8 0.6 0.4 0.2 0 -3
- 50 -
Figure 4-5 LOC = 0 SCALE = 1 SKEW = 0 KURT = 1 So that we do not run into problems with irrational numbers when KURT<1, we will use the absolute value of the coefficient in the denominator. This does
not affect the shape of the curve. Thus, we can rewrite Equation (4.04) as: (4.04) Y = 1/(ABS(X-LOC)^KURT+1) We can put a multiplier on the coefficient in the denominator to allow us to control the
scale, the second moment of the distribution. Thus, our characteristic function has now become: (4.05) Y = 1/(ABS((X-LOC)*SCALE) ^ KURT+1) where Y = The ordinate of the characteristic function. X =
The standard value amount. LOC = A variable representing the location, the first moment of the distribution. SCALE = A variable representing the scale, the second moment of the distribution. KURT = A
variable representing kurtosis, the fourth moment of the distribution. Figures 4-6 and 4-7 demonstrate the effect of the scale parameter. The effect of this parameter can be thought of as moving the
horizontal axis up or down on the distribution. When the axis is moved up (by decreasing scale), the graph is also enlarged. This results in what we have in Figure 4-6. This has the effect of moving
the horizontal axis up and enlarging the distribution curve. The result is as though we were looking at the "cap" of the distribution. Figure 4-7 does just the opposite. As is borne out in the
figure, the effect is that the horizontal axis has been moved down and the distribution curve shrunken.
(4.06) Y = (1/(ABS((X-LOC)*SCALE)^KURT+1))^C where C = The exponent for skewness, calculated as: (4.07) C = (1+(ABS(SKEW)^ABS( 1/(X-LOC))*sign(X)*sign(SKEW)))^.5 Y = The ordinate of the
characteristic function. X = The standard value amount. LOC = A variable representing the location, the first moment of the distribution. SCALE = A variable representing the scale, the second moment
of the distribution. SKEW = A variable representing the skewness, the third moment of the distribution. KURT = A variable representing kurtosis, the fourth moment of the distribution. sign() = The
sign function, equal to 1 or -1. The sign of X is calculated as X/ABS(X) for X not equal to 0. If X is equal to zero, the sign should be regarded as positive. Figures 4-8 and 4-9 demonstrate the
effect of the skewness variable on our distribution.
1 0.8 0.6
0.2 0 -3
Figure 4-8 LOC = 0 SCALE = 1 SKEW = -.5 KURT = 2.
1 0.2 0.8 0 -3
Figure 4-6 LOC = 0 SCALE = .5 SKEW = 0 KURT = 2.
0 -3
Figure 4-9 LOC = 0 SCALE = 1 SKEW = +.5 KURT = 2.
0.4 0.2 0 -3
Figure 4-7 LOC = 0 SCALE = 2 SKEW = 0 KURT = 2. We now have a characteristic function to a distribution whereby we have complete control over three of the first four moments of the distribution.
Presently, the distribution is symmetric about the location. What we now need is to be able to incorporate a variable for skewness, the third moment of the distribution, into this function. To
account for skewness, we must amend our function further. Our characteristic function has now evolved to: - 51 -
A few important notes on the four parameters LOC, SCALE, SKEW, and KURT. With the exception of the variable LOC (which is expressed as the number of standard values to offset the distribution by),
the other three variables are nondimensional - that is, their values are pure numbers which have meaning only in a relative context, characterizing the shape of the distribution and are relevant only
to this distribution. Furthermore, the parameter values are not the same values you would get if you employed any of the standard measuring techniques detailed in "Descriptive Measures of
Distributions" in Chapter 3. For instance, if you determined one of Pearson's coefficients of skewness on a set of data, it would not be the same value that you would use for the variable SKEW in the
adjustable distributions here. The values for the four variables are unique to our distribution and have meaning only in a relative context.
Also of importance is the range that the variables can take. The SCALE. variable must always be positive with no upper bound, and likewise with KURT. In application, though, you will generally use
values between .5 and 3, and in extreme cases between .05 and 5. However, you can use values beyond these extremes, so long as they are greater than zero. The LOC variable can be positive, negative,
or zero. The SKEW parameter must be greater than or equal to -1 and less than or equal to +1. When SKEW equals +1, the entire right side of the distribution (right of the peak) is equal to the peak,
and vice versa when SKEW equals -1. The ranges on the variables are summarized as: (4.08) -infinity0 (4.10) -1<=SKEW<=+1 (4.11) KURT>0 Figures 4-2 through 4-9 demonstrate just how pliable our
distribution is. We can fit these four parameters such that the resultant distribution can fit to just about any other distribution.
FITTING THE PARAMETERS OF THE DISTRIBUTION Just as with the process described in Chapter 3 for finding our optimal f on the Normal Distribution, we must convert our raw trades data over to standard
units. We do this by first subtracting the mean from each trade, then dividing by the population standard deviation. From this point forward, we will be working with the data in standard units rather
than in its raw form. After we have our trades in standard values, we can sort them in ascending order. With our trades data arranged this way, we will be able to perform the K-S test on it. Our
objective now is to find what values for LOC, SCALE, SKEW, and KURT best fit our actual trades distribution. To determine this "best fit" we rely on the K-S test. We estimate the parameter values by
employing the "twentieth-century brute force technique." We run every combination for KURT from 3 to .5 by -.1 (we could just as easily run it from .5 to 3 by .1, as it doesn't matter whether we
ascend or descend through the values). We also run every combination for SCALE from 3 to .5 by -.1, For the time being we leave LOC and SKEW at 0. Thus, we are going to run the following
combinations: LOC 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
SCALE 3 3 3 3 3 3 3 3 3 3 3 3 2.9 2.9 .5 .5
SKEW 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
KURT 3 2.9 2.8 2.7 2.6 2.5 2.4 2.3 2.2 2.1 2 1.9 3 2.9 .6 .5
We perform the K-S test for each combination. The combination that results in the lowest K-S statistic we assume to be our optimal bestfitting Parameter values for SCALE and KURT (for the time
being). To perform the K-S test for each combination, we need both the actual distribution and the theoretical distribution (determined from the parameters for the adjustable distribution that we are
testing). We already have seen how to construct the actual cumulative density as X/N, where N is the total number of trades and X is the ranking (between 1 and N) of a given trade. Now we need to
calculate the CDF, (the function for what percentage of the area of the characteristic function a certain point constitutes) for our theoretical distribution for the given LOC, SCALE, SKEW, and KURT
parameter values we are presently looping through. We have the characteristic function for our adjustable distribution. This is Equation (4.06). To obtain a CDF from a distribution's characteristic
function we must find the integral of the characteristic function. We define the integral, the percentage of area under the characteristic func- 52 -
tion at point X, as N(X). Thus, since Equation (4.06) gives us the first derivative to the integral, we define Equation (4.06) as N'(X). Often you may not be able to derive the integral of a
function, even if you are proficient in calculus. Therefore, rather than determining the integral to Equation (4.06), we are going to rely on a different technique, one that, although a bit more
labor intensive, is hardier than the technique of finding the integral. The respective probabilities can always be estimated for any point on the function's characteristic line by making the
distribution be a series of many bars. Then, for any given bar on the distribution, you can calculate the probability associated at that bar by taking the sum of the areas of all those bars to the
left of your bar, including your bar, and dividing it by the sum of the areas of all the bars in the distribution. The more bars you use, the more accurate your estimated probabilities will be. If
you could use an infinite number of bars, your estimate would be exact. We now discuss the procedure for finding the areas under our adjustable distribution by way of an example. Assume we wish to
find probabilities associated with every .1 increment in standard values from -3 to +3 sigmas of our adjustable distribution. Notice that our table (p. 163) starts at -5 standard units and ends at +5
standard units, the reason being that you should begin and end 2 sigmas beyond the bounding parameters (-3 and +3 sigmas in this case) to get more accurate results. Therefore, we begin our table at
-5 sigmas and end it at +5 sigmas. Notice that X represents the number of standard units that we are away from the mean. This is then followed by the four parameter values. The next column is the N'
(X) column, the height of the curve at point X given these parameter values. N'(X) is calculated as Equation (4.06). We now work with Equation (4.06). Assume that we want to calculate N'(X) for X at
-3, with the values for the parameters of .02, 2.76, 0, and 1.78 for LOC, SCALE, SKEW, and KURT respectively. First, we calculate the exponent of skewness, C in Equation (4.06)-given as Equation
(4.07)-as: x -5.0 -4.9 -4.8 -4.7 -4.6 -4.5 -4.4 -4.3 -4.2 -4.1 -4.0 -3.9 -3.8 -3.7 -3.6 -3.5 -3.4 -3.3 -3.2 -3.1 -3.0 -2.9 -2.8 -2.7 -2.6 -2.5 -2.4 -2.3 -2.2 -2.1 -2.0 -1.9 -1.8 -1.7 -1.6 -1.5 -1.4
-1.3 -1.2 -1.1
LOC SCA LE 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02
2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02
2.76 0.02 2.76
SKE W 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
KURT N'(X)Eq.(4.06) RUNNINGSUM 1.78 0.0092026741 0.0092026741 1.78 0.0095350519 0.018737726 1.78 0.0098865117 0.0286242377 1.78 0.01025857 0.0388828077 1.78 0.0106528988 0.0495357065 1.78
0.0110713449 0.0606070514 1.78 0.0115159524 0.0721230038 1.78 0.0119889887 0.08411I9925 1.78 0.0124929748 0.0966049673 1.78 0.0130307203 0.I096356876 1.78 0.0136053639 0.1232410515 1.78 0.0142204209
0.1374614724 1.78 0.0148798398 0.1523413122 1.78 0.0155880672 0.1679293795 1.78 0.0163501266 0.184279506 1.78 0.0171717099 0.2014512159 1.78 0.0180592883 0.2195105042 1.78 0.0190202443 0.2385307485
1.78 0.0200630301 0.2585937786 1.78 0.0211973606 0.2797911392 1.78 0.0224344468 0.302225586 1.78 0.0237872819 0.3260128679 1.78 0.0252709932 0.3512838612 1.78 0.0269032777 0.3781871389 1.78
0.0287049446 0.4068920835 1.78 0.0307005967 0.4375926802 1.78 0.032919491I 0.4705121713 1.78 0.0353966362 0.5059088075 1.78 0.0381742015 0.544083009 1.78 0.041303344 0.5853863529 1.78 0.0448465999
0.6302329529 1.78 0.0488810452 0.6791139981 1.78 0.0535025185 0.7326165166 1.78 0.0588313292 0.7914478458 1.78 0.0650200649 0.8564679107 1.78 0.0722644105 0.9287323213 1.78 0.080818341 1.0095506622
1.78 0.0910157581 1.1005664203 1.78 0.1033017455 1.2038681658 1.78 0.I182783502 1.322146516
N(X) 0.000388 0.001178 0.001997 0.002847 0.003729 0.004645 0.005598 0.006590 0.007622 0.008699 0.009823 0.010996 0.012224 0.013509 0.014856 0.016270 0.017756 0.019320 0.020969 0.022709 0.024550
0.026499 0.028569 0.030770 0.033115 0.035621 0.038305 0.041186 0.044290 0.047642 0.051276 0.055229 0.059548 0.064287 0.06951I 0.075302 0.081759 0.089007 0.097204 0.106550
x -1.0 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.0 3.1 3.2 3.3 3.4 3.5 3.6
3.7 3.8 3.9 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 5.0
LOC SCA LE 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02
2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02
2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02 2.76 0.02
2.76 0.02 2.76 0.02 2.76
SKE W 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
KURT N'(X)Eq.(4.06) RUNNINGSUM 1.78 0.1367725028 1.4589190187 1.78 0.1599377464 1.6188567651 1.78 0.1894070001 1.8082637653 1.78 0.2275190511 2.0357828164 1.78 0.2776382822 2.3134210986 1.78
0.3445412618 2.6579623604 1.78 0.4346363128 3.0925986732 1.78 0.5550465747 3.6476452479 1.78 0.7084848615 4.3561301093 1.78 0.8772840491 5.2334141584 1.78 1 6.2334141584 1.78 0.9363557429
7.1697699013 1.78 0.776473162 7.9462430634 1.78 0.6127219404 8.5589650037 1.78 0.4788099392 9.0377749429 1.78 0.377388991 9.4151639339 1.78 0.3020623672 9.7172263011 1.78 0.2458941852 9.9631204863
1.78 0.2034532796 10.1665737659 1.78 0.1708567846 10.3374305505 1.78 0.1453993995 10.48282995 1.78 0.1251979811 10.6080279311 1.78 0.1089291462 10.7169570773 1.78 0.0956499316 10.8126070089 1.78
0.0846780659 10.8972850748 1.78 0.0755122067 10.9727972814 1.78 0.0677784099 11.0405756913 1.78 0.0611937787 11.10176947 1.78 0.0555414402 11.1573109102 1.78 0.0506530744 11.2079639847 1.78
0.0463965419 11.2543605266 1.78 0.0426670018 11.2970275284 1.78 0.0393804519 11.3364079803 1.78 0.0364689711 11.3728769515 1.78 0.0338771754 11.4067541269 1.78 0.0315595472 11.4383136741 1.78
0.0294784036 11.4677920777 1.78 0.0276023341 11.4953944118 1.78 0.0259049892 11.5212994011 1.78 0.0243641331 11.5456635342 1.78 0.0229608959 11.5686244301 1.78 0.0216791802 11.5903036102 1.78
0.0205051855 11.6108087957 1.78 0.0194270256 11.6302358213 1.78 0.0184344179 11.6486702392 1.78 0.0175184304 11.6661886696 1.78 0.0166712734 11.682859943 1.78 0.0158861285 11.6987460714 1.78
0.0151570063 11.7139030777 1.78 0.014478628 11.7283817056 1.78 0.0138463263 11.742228032 1.78 0.0132559621 11.7554839941 1.78 0.012703854 11.7681878481 1.78 0.0121867187 11.7803745668 1.78
0.0117016203 11.7920761871 1.78 0.0112459269 11.8033221139 1.78 0.0108172734 11.8141393873 1.78 0.0104135298 11.8245529171 1.78 0.0100327732 11.8345856903 1.78 0.0096732643 11.8442589547 1.78
0.0093334265 11.8535923812
= .02243444681 Thus, at the point X = -3, the N'(X) value is .02243444681. (Notice that we calculate an N'(X) column, which corresponds to every value of X). The next step we must perform, the next
column, is the running sum of the N'(X)'s as we advance up through the X's. This is straight forward enough. Now we calculate the N(X) column, the resultant probabilities associated with each value
of X, for the given parameter values. To do this, we must perform Equation (4.12): (4.12) N(C) = (∑[i = 1,C]N'(Xi)+∑[i = 1,C-1]N'(Xi))/2/ ∑[i = 1,M]N'(Xi) where C = The current X value. M = The total
count of X values. Equation (4.12) says, literally, to add the running sum at the current value of X to the running sum at the previous value of X as we advance up through the X's. Now divide this
sum by 2. Then take the new quotient and divide it by the last value in the column of the running sum of the N'(X)'s (the total of the N'(X) column). This gives us the resultant probabilities for a
given value of X, for given parameter values. Thus, for the value of -3 for X, the running sum of the N'(X)'s at -3 is .302225586, and the previous X, -3.1, has a running sum value of .2797911392.
Summing these two running sums together gives us 5820167252. Dividing this by 2 gives us .2910083626. Then dividing this by the last value in the running sum column, the total of all of the N'(X)'s,
11.8535923812, gives us a quotient of .02455022522. This is the associated probability, N(X), at the standard value of X = -3. Once we have constructed cumulative probabilities for each trade in the
actual distribution and probabilities for each standard value increment in our adjustable distribution, we can perform the K-S test for the parameter values we are currently using. Before we do,
however, we must make adjustments for a couple of other preliminary considerations. In the example of the table of cumulative probabilities shown earlier for our adjustable distribution, we
calculated probabilities at every .1 increment in standard values. This was for the sake of simplicity. In practice, you can obtain a greater degree of accuracy by using a smaller step increment. I
find that using .01 standard values is a good step increment. A word on how to determine your bounding parameters in actual practice-that is, how many sigmas either side of the mean you should go in
determining your probabilities for our adjustable distribution. In our example we were using 3 sigmas either side of the mean, but in reality you must use the absolute value of the farthest point
from the mean. For our 232-trade example, the extreme left (lowest) standard value is -2.96 standard units and the extreme right (highest) is 6.935321 standard units. Since 6.93 is greater than ABS
(-2.96), we must take the 6.935321. Now, we add at least 2 sigmas to this value, for the sake of accuracy, and construct probabilities for a distribution from -8.94 to +8.94 sigmas. Since we want a
good deal of accuracy, we will use a step increment of .01. Therefore, we will figure probabilities for standard values of: -8.94 -8.93 -8.92 -8.91
N(X) 0.117308 0.129824 0.144560 0.162146 0.183455 0.209699 0.242566 0.284312 0.337609 0.404499 0.483685 0.565363 0.637613 0.696211 0.742253 0.778369 0.807029 0.830142 0.849096 0.864885 0.878225
0.889639 0.899515 0.908145 0.915751 0.922508 0.928552 0.933993 0.938917 0.943396 0.947490 0.951246 0.954707 0.957907 0.960874 0.963634 0.966209 0.968617 0.970874 0.972994 0.974990 0.976873 0.978653
0.980337 0.981934 0.983451 0.984893 0.986266 0.987576 0.988826 0.990020 0.991164 0.992259 0.993309 0.994316 0.995284 0.996215 0.997110 0.997973 0.998804 0.999606
+8.94 Now, the last thing we must do before we can actually perform our K-S statistic is to round the actual standard values of the sorted trades to the nearest .01 (since we are using .01 as our
step value on the theoretical distribution). For example, the value 6.935321 will not have a corresponding theoretical probability associated with it, since it is in between the step values 6.93 and
6.94. Since 6.94 is closer to 6.935321, we round 6.935321 to 6.94. Before we can begin the procedure of optimizing our adjustable distribution parameters to the actual distribution by employing the
K-S test, we must round our actual sorted standardized trades to the nearest step increment. In lieu of rounding the standard values of the trades to the nearest Xth decimal place you can use linear
interpolation on your table of cumulative probabilities to derive probabilities corresponding to the actual standard values of the trades. For more on linear interpolation, consult a good statistics
book, such as some of the ones suggested in the bibliography or Commodity Market Money Management by Fred Gehm.
(4.07) C = (1+(ABS(SKEW)^ABS(1/(X-LOC))*sign(X)*sign(SKEW)))^.5 = (1+(ABS(0)^ABS(l/(-3-.02))*-1*-1))^5 = (1+0)^.5 = 1 Thus, substituting 1 for C in Equation (4.06): (4.06) Y= (1/(ABS((X-LOC)*SCALE)^
KUKT+1))^C = (l/(ABS((-3-.02)*2.76)^1.78+1))^1 = (1/((3.02*2.76)^1.78+1))^1 = (1/(8.3352^1.78+1))^1 = (1/(43.57431058+1))^1 = (1/44.57431058)^1 = .02243444681^1 - 53 -
Thus far, we have been optimizing only for the best-fitting KURT and SCALE values. Logically, it would seem that if we standardized our data, as we have, then the LOC parameter should be kept at 0
and the SCALE parameter should be kept at 1. This is not necessarily true, as the true location of the distribution may not be the arithmetic mean, and the true optimal value for scale may not be at
1. The KURT and SCALE values have a very strong relationship to one another. Thus, we first try to isolate the -"neighborhood" of best-fitting parameter values for KURT and SCALE. For our 232 trades
this occurs at SCALE equal to 2.7 and KURT equal to 1.9. Now we progressively try to zero in on the best-fitting parameter values. This is a computer-time-intensive process. We run our next pass
through, cycling the LOC parameter from .1 to -.1 by -.05, the SCALE parameter from 2.6 to 2.8 by .05, the SKEW parameter from .1 to -.1 by -.05, and the KURT parameter from 1.86 to 1.92 by .02. The
results of this cycle through give the optimal (lowest K-S statistic) at LOC = 0, SCALE = 2.8, SKEW = 0, and KURT = 1.86. Thus we perform a third cycle through. This time we run LOC from .04 to -.04
by -.02, SCALE from 2.76 to 2.82 by .02, SKEW from .04 to -.04 by -.02, and KURT from 1.8 to 1.9 by .02. The results of the third cycle through show optimal values at LOC = .02, SCALE = 2.76, SKEW =
0, and KURT = 1.8. Now we have zeroed right in on the optimal neighborhood, the areas where the parameters make for the best fit of our adjustable characteristic function to the actual data. For our
last cycle through we are going to run LOC from 0 to .03 by .01, SCALE from 2.76 to 2.73 by -.01, SKEW from ,01 to -.01 by -.01, and KURT from 1.8 to 1.75 by -.01. The results of this final pass show
optimal parameters for our 232 trades at LOC = .02, SCALE = 2.76, SKEW = 0, and KURT = 1.78.
USING THE PARAMETERS TO FIND OPTIMAL F Now that we have found the best-fitting parameter values, we can find the optimal f on this distribution. We can take the same procedure we used to find the
optimal f on the Normal Distribution discussed in the last chapter. The only difference now is that the associated probabilities for each standard value (X value) are calculated per the procedure
described for Equations (4.06) and (4.12). With the Normal Distribution, we find our associated probabilities column (probabilities corresponding to a certain standard value) by using Equation
(3.21). Here, to find our associated probabilities, we must follow the procedure detailed previously: 1. For a given standard value, X, we figure its corresponding N'(X) by Equation (4.06). 2. For
each standard value, we also have the interim step of keeping a running sum of the N'(X) 's corresponding to each value of X. 3. Now, to find N(X), the resultant probability for a given X, add
together the running sum corresponding to the X value with the running sum corresponding to the previous X value. Divide this sum by 2. Then divide this quotient by the sum total of the N'(X)'s, the
last entry in the column of running sums. This new quotient is the associated 1- tailed probability for a given X. Since we now have a procedure to find the associated probabilities for a given
standard value, X, for a given set of parameter values, we can find our optimal f. The procedure is exactly the same as that detailed for finding the optimal f on the Normal Distribution. The only
difference is that we calculate the associated probabilities column differently. In our 232-trade example, the parameter values that result in the lowest K-S statistic are .02, 2.76, 0, and 1.78 for
LOC, SCALE, SKEW, and KURT respectively. We arrived at these parameter values by using the optimization procedure outlined in this chapter. This resulted in a KS statistic of .0835529 (meaning that
at its worst point, the two distributions were apart by 8.35529%), and a significance level of 7.8384%. Figure 4-10 shows the distribution function for those parameter values that best fit our 232
- 54 -
1.2 1 N(X)
0.8 0.6
0.4 0.2 0 -4899.56 -3156.33 -1413.1
2073.36 3716.59
Figure 4-10 Adjustable distribution fit to the 232 trades. If we take these parameters and find the optimal f on this distribution, bounding the distribution from +3 to -3 sigmas and using 100
equally spaced data points, we arrive at an optimal f value of .206, or 1 contract for every $23,783.17. Compare this to the empirical method, which showed that optimal growth is obtained at 1
contract for every $7,918.04 in account equity. But that is the result we get if we bound the distribution at 3 sigmas either side of the mean. In reality, in the empirical stream of trades, we had a
worst-case loss of 2.96 sigmas and a best-case gain of 6.94 sigmas. Now if we go back and bound our distribution at 2.96 sigmas on the left (negative side) of the mean and 6.94 on the right (and
we'll use 300 equally spaced data points this time), we obtain an optimal f of .954 or 1 contract for every $5,062.71 in account equity. Why does this differ from the empirical optimal f of
$7,918.04? The difference is in the "roughness" of the actual distribution. Recall that the significance level of our best-fitting parameters was only 7.8384%. Let us take our 232-trade distribution
and bin it into 12 bins from -3 to +3 sigmas. -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5
Bin -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0
Number of Trades 2 1 2 24 39 43 69 38 7 2 0 2
Notice that out on the tails of the distribution are gaps, areas or bins where there isn't any empirical data. These areas invariably get smoothed over when we fit our adjustable distribution to the
data, and it is these smoothed-over areas that cause the difference between the parametric and the empirical optimal fs. Why doesn't our distribution fit the observed better, especially in light of
how malleable it is? The reason has to do with the observed distribution having too many pointy of inflection. A parabola can be cupped upward or downward. Yet over the extent of a parabola, the
direction of the cup, whether it points upward or downward, is unchanged. We define a point of inflection as any time the direction of the concavity changes from up to down. Therefore, a parabola has
0 points of inflection, since the direction of the concavity never changes. An object shaped like the letter S lying on its side has one point of inflection, one point where the concavity changes
from up to down.
tioned, we must assume that the optimal f for the next trade is determined parametrically by the generating function, even though this may differ from the empirical optimal f. Obviously, the bounding
parameters have a very important effect on the optimal f. Where should you place the bounding parameters so as to obtain the best results? Look at what happens as we move the upper bound up. The
following table is compiled by bounding the lower end at 3 sigmas, and using 100 equally spaced data points and the optimal parameters to our 232 trades:
Concave Down
Concave Up
Points of inflection
Concave Up
Figure 4-11 Points of inflection on a bell-shaped distribution. Figure 4-11 shows the Normal Distribution. Notice there are two points of inflection in a bell-shaped curve such as the Normal
Distribution. Depending on the value for SCALE, our adjustable distribution can have n zero points of inflection (if SCALE is very low) or two points of inflection. The reason our adjustable
distribution does not fit the actual distribution of trades any better than it does is that the actual distribution has too many Points of inflection. Does this mean that our fitted adjustable
distribution is wrong? Probably not. If we were so inclined, we could create a distribution function that allowed for more than two points of inflection, which would better curve-fit to the actual
observed distribution. If we created a distribution function that allowed for as many points of inflection as we desired, we could fit to the observed distribution perfectly. Our optimal f derived
therefrom would • then be nearly the same as the empirical. However, the more points of inflection we were to add to our distribution function, the less robust it would be (i.e., it would probably be
less representative of the trades in the future). However, we are not trying to fit the parametric f to the observed exactly. We are trying to determine how the observed data is distributed so that
we can determine with a fair degree of accuracy what the optimal fin the future will be if the data is distributed as it were in the past. When we look at the adjustable distribution that has been
fit to our actual trades, the spurious points of inflection are removed. An analogy may clarify this. Suppose we are using Galton's board. We know that asymptotically the distribution of the balls
falling through the board will be Normal. However, we are only going to see 4 balls rolled through the board. Can we expect the outcomes of the 4 balls to be perfectly conformable to the Normal? How
about 5 balls? 50 balls? In an asymptotic sense, we expect the observed distribution to flesh out to the expected as the number of trades increases. Fitting our theoretical distribution to every
point of inflection in the actual will not give us any greater degree of accuracy in the future. As more trades occur, we can expect the observed distribution to converge toward the expected, as we
can expect the extraneous points of inflection to be filled in with trades as the number of trades approaches infinity. If the process generating the trades is accurately modeled by our parameters,
the optimal f derived from the theoretical will be more accurate over the future sequence of trades than the optimal f derived empirically over the past trades. In other words, if our 232 trades are
a proxy of the distribution of the trades in the future, then we can expect the trades in the future to arrive in a distribution more like the theoretical one that we have fit than like the observed
with its extraneous points of inflection and its roughness due to not having an infinite number of trades. In so doing, we can expect the optimal fin the future to be more like the optimal f obtained
from the theoretical distribution than it is like the optimal f obtained empirically over the observed distribution. So, we are better off in this case to use the parametric optimal f rather than the
empirical. The situation is analogous to the 20-coin-toss discussion of the previous chapter. If we expect 60% wins at a 1:1 payoff, the optimal f is correctly .2. However, if we only had empirical
data of the last 20 tosses, 11 of which were wins, our optimal f would show as .1, even though ,2 is what we should optimally bet on the next toss since it has a 60% chance of winning. We must assume
that the parametric optimal f ($5,062.71 in this case) is correct because it is the optimal f on the generating function. As with the coin-toss game just men- 55 -
Upper Bound 3 Sigmas 4 Sigmas 5 Sigmas 6 Sigmas 7 Sigmas 8 Sigmas 100 Sigmas
f .206 .588 .784 .887 .938 .963 .999
f$ $23783.17 $8,332.51 $6,249.42 $5,523.73 $5,223.41 $5,087.81 $4,904.46
Notice that, keeping the lower bound constant, the higher up we move the higher bound, the more the optimal f approaches 1. Thus, the more we move the upper bound up, the more the optimal f in
dollars will approach the lower bound (worst-case expected loss) exactly. In this case, where our lower bound is at -3 sigmas, the more we move the upper bound up, the more the optimal f in dollars
will approach the lower bound as a limit-$330.13-(1743.23*3) = -$4,899.56. Now observe what happens when we keep the upper bound constant (at 3), but move the lower bound lower. Very soon into this
process the arithmetic mathematical expectation turns negative. This happens because more than 50% of the area under the characteristic function is to the left of the zero axis. Consequently, as we
move the lower bounding parameter lower, the optimal f quickly goes to zero. Now consider what happens when we move both bounding parameters out at the same rate. Here we are using the optimal
parameter set of .02, 2.76, 0, and 1.78 on our distribution of 232 trades, and 100 equally spaced data points: Upper and Lower Bound 3 Sigmas 4 Sigmas 5 Sigmas 6 Sigmas 10 Sigmas
f .206 .158 ,126 .104 .053
f$ $23,783.17 $42,040.42 $66,550.75 $97,387.87 $322,625.17
Notice that our optimal f approaches 0 as we move both bounding parameters out to plus and minus infinity. Furthermore, since our worstcase loss gets greater and greater, and gets divided by a
smaller and smaller optimal f, our f$, the amount to finance 1 unit by, approaches infinity as well. The problem of where the best place is to put the bounding parameters is best rephrased as,
"Where, in the extreme case, do we expect the best and worst trades in the future (over the course of which we are going to trade this market system) to occur?" The tails of the distribution itself
actually go to plus and minus infinity. To account for this we would optimally finance each contract by an infinitely high amount (as in our last example, where we moved both bounds outward). If we
were going to trade for an infinitely long time into the future, our optimal f in dollars would be infinite. But we're not going to trade this market system forever. The optimal f in the future over
which we are going to trade this market system is a function of what the best and worst trades in that future are. Recall that if we flip a coin 100 times and record what the longest streak of
consecutive tails is, then flip the coin another 100 times, the longest streak of consecutive tails at the end of 200 flips will more than likely be greater than it was after only the first 100
flips. Similarly, if the worst-case loss seen over our 232-trade history was a 2.96-sigma loss (let's say a 3-sigma loss) then we should expect a loss of greater than 3 sigmas in the future over
which we are going to trade this market system. Therefore, rather than bounding our distribution at what the bounds of the past history of trades were (-2.96 and +6.94 sigmas), we will bound it at -4
and +6.94 sigmas. We should perhaps expect the high-end bound to be violated in the future, much as we expect the low-end bound to be violated. However, we won't make this assumption for a couple of
reasons. The first is that trading systems notoriously do not trade as well into the future, in general, as they have over historical data, even when there are no optimizable parameters involved. It
gets back to the principle that mechanical trading systems seem to suffer from a con-
tinually deteriorating edge. Second, the fact that we pay a lesser penalty for erring in optimal f if we err to the left of the peak of the f curve than if we err to the right of it suggests that we
should err on the conservative side in our prognostications about the future. Therefore, we will determine our parametric optimal f by using the bounding parameters of -4 and +6.94 sigmas and use 300
equally spaced data points. However, in calculating the probabilities at each of the 300 equally spaced data points, it is important that we begin our distribution 2 sigmas before and after our
selected bounding parameters. We therefore determine the associated probabilities by creating bars from -6 to +8.94 sigmas, even though we are only going to use the bars between -4 and +6.94 sigmas.
In so doing, we have enhanced the accuracy of our results. Using our optimal parameters of .02, 2.76, 0, and 1.78 now yields an optimal f of .837, or 1 contract per every $7,936.41. So long as our
selected bounding parameters are not violated, our model of reality is accurate in terms of the bounds selected. That is, so long as we do not see a loss greater than 4 sigmas-$330.13-(1743.23*4) =
-$6,642.79-or a profit greater than 6.94 sigmas$330.13+(1743.23*6.94) = $12,428.15-we have accurately modeled the bounds of the distribution of trades in the future. The possible divergence between
our model and reality is our blind spot. That is, the optimal f derived from our model (with our selected bounding parameters) is the optimal f for our model, not necessarily for reality. If our
selected bounding parameters are violated in the future, our selected optimal f cannot then be the optimal. We would be smart to defend this blind spot with techniques, such as long options, that
limit our liability to a prescribed amount. While we are discussing weaknesses with the method, one final weakness should be pointed out. Once you have obtained your parametric optimal f, you should
be aware that the actual distribution of trade profits and losses is one in which the parameters are constantly changing, albeit slowly. You should frequently run the technique on your trade profits
and losses for each market system you are trading to monitor these dynamics of the distributions.
PERFORMING "WHAT IFS" Once you have obtained your parametric optimal f, you can perform "What If types of scenarios on your distribution function by altering the parameters LOC, SCALE, SKEW, and KURT
of the distribution function to replicate different expected outcomes in the near future (different distributions the future might take) and observe the effects. Just as we can tinker with stretch
and shrink on the Normal distribution, so, too, can we tinker with the parameters LOC, SCALE, SKEW, and KURT of our adjustable distribution. The "What if capabilities of the parametric technique are
the strengths that help to offset the weaknesses of the actual distribution of trade P&L's moving around. The parametric techniques allow us to see the effects of changes in the distribution of
actual trade profits and losses before they occur, and possibly to budget for them. When tinkering with the parameters, a suggestion is in order. When finding the optimal f, rather than tinkering
with the LOC, the location parameter, you are better off tinkering with the arithmetic average trade in dollars that you are using as input. The reason is illustrated in Figure 4-12.
As is
Altering shrink or average trade Altering the location parameter Figure 4-12 Altering location parameters. - 56 -
Notice that in Figure 4-12, changing the location parameter LOC moves the distribution right or left in the "window" of the bounding parameters. But the bounding parameters do not move with the
distribution. Thus, a change in the LOC parameter also affects how many equally spaced data points will be left of the mode and right of the mode of the distribution. By changing the actual
arithmetic mean (or using the shrink variable in the Normal Distribution search for f), the window of the bounding parameters moves also. When you alter the arithmetic average trade as input, or
alter the shrink variable in the Normal Distribution mechanism, you still have the same number of equally spaced data points to the right and left of the mode of the distribution that you had before
the alteration.
EQUALIZING F The technique detailed in this chapter was shown using data that was not equalized. We can also use this very same technique on equalized data. If we want to determine an equalized
parametric optimal f, we would convert the raw trade profits and losses over to percentage gains and losses, based on Equations (2.10a) through (2.10c). Next, we would convert these percentage
profits and losses by multiplying them by the current price of the underlying instrument. For example, P&L number 1 is .18. Suppose the entry price to this trade was 100.50. The percentage gain on
this trade would be .18/100.50 = .001791044776. Now suppose that the current price of this underlying instrument is 112.00. Multiplying .001791044776 by 112.00 translates into an equalized P&L of
.2005970149. If we were seeking to do this procedure on an equalized basis, we would perform this operation on all 232 trade profits and losses. We would then calculate the arithmetic mean and
population standard deviation on the equalized trades and would use Equation (3.16) to standardize the trades. Next, we could find the optimal parameter set for LOC, SCALE, SKEW, and KURT on the
equalized data exactly as was shown in this chapter for nonequalized data. The rest of the procedure is the same in this chapter in terms of determining the optimal f, geometric mean, and TWR. The
by-products of the geometric average trade, arithmetic average trade, and threshold to the geometric are only valid for the current price of the underlying instrument. When the price of the
underlying instrument changes, the procedure must be done again, going back to step one and multiplying the percentage profits and losses by the new underlying price. When you go to redo the
procedure with a different underlying price, you will obtain the same optimal f, geometric mean, and TWR. However, your arithmetic average trade, geometric average trade, and threshold to the
geometric will be different based upon the new price of the underlying instrument. The number of contracts to trade as given in Equation (3.34) must be changed. The worst-case associated P&L, the W
variable, Equation (3.35), will be different in Equation (3.34) as a result of the changes caused in the equalized data by a different current price.
OPTIMAL F ON OTHER DISTRIBUTIONS AND FITTED CURVES At this point you should realize that there are many other ways you can determine your parametric optimal f. We have covered a procedure for finding
the optimal f on Normally distributed data in the previous chapter. Thus we have a procedure that will give us the optimal f for any Normally distributed phenomenon. That same procedure can be used
to find the optimal on data of any distribution, so long as the cumulative density function of the selected distribution is available (these functions arc given for many other common distributions in
Appendix B). When the cumulative density function is not available, the optimal f can be found for any other function by the integration method used in this chapter to approximate the cumulative
densities, the areas under the curve. I have elected in this chapter to model the actual distribution of trades by way of our adjustable distribution. This amounts to little more than finding a
function and its appropriate values, which model the actual density function of the trade P&L's with a maximum of 2 points of inflection. You could use or create many other functions and methods to
do this-such as polynomial interpolation and extrapolation, rational function (quotients of polynomials) interpolation and extrapolation, or using splines to fit a theoretical function to the actual.
Once any theoret-
ical function is found, the associated probabilities can be determined by the same method of integral estimation as was used in finding the associated probabilities of our adjustable distribution or
by using integration techniques of calculus. There is a problem with fitting any of these other functions. Part of the thrust of this book has been to allow users of systems that are not purely
mechanical to have the same account management power that users of purely mechanical systems have. As such, the adjustable distribution route that I took only requires estimates for the parameters.
These parameters pertain to the first four moments of the distribution. It is these moments -location, scale, skewness, and kurtosis-that describe the distribution. Thus, someone trading on some not
purely mechanical basis-e.g., Elliott wave— could estimate the parameters and have access to optimal f and its by-product calculations. A past history of trades is not a prerequisite for estimating
these parameters. If you were to use any of the other fitting techniques mentioned, you wouldn't necessarily need a past history of trades either, but the estimates for the parameters of those
fitting techniques do not necessarily pertain to the moments of the distribution. What they pertain to is a function of the particular function you are using. These other techniques would not
necessarily allow you to see what would happen if kurtosis increased or skewness changed or the scale were altered, and so on. Our adjustable distribution is the logical choice for a theoretical
function to fit to the actual, since the parameters not only measure the moments of the distribution, they give us control over those moments when prognosticating about future changes to the
distribution. Furthermore, estimating the parameters of our adjustable distribution is easier than with fitting any other function which I am aware of.
SCENARIO PLANNING People who forecast for a living (economists, stock market forecasters, weathermen, government agencies, etc.) have a notorious history for incorrect forecasts, but most decisions
anyone must make in life usually require making a forecast about the future. A couple of pitfalls immediately crop up here. To begin with, people generally make assumptions about the future that are
more optimistic than the actual probabilities. Most people feel that they arc far more likely to win the lottery this month than they are to die in an auto accident, even though the probabilities of
the latter are greater. This is not only true on the level of the individual, it is even more pronounced at the level of the group. When people work together, they tend to see a favorable outcome as
the most likely result (everyone else seems to, otherwise they wouldn't be working here), otherwise they would quit the project they are a part of (unless, of course, we have all become automatons
mindlessly slaving away on sinking ships). The second and more harmful pitfall is that people make straightline forecasts into the future. People try to predict the price of a gallon of gas two years
from now, predict what will happen with their jobs, who will be the next president, what the next styles will be, and on and on. Whenever we think of the future, we tend to think in terms of a
single, most likely outcome. As a result, whenever we must make decisions, whether as an individual or a group, we tend to make these decisions based on what we think will be the single most likely
outcome in the future. As a consequence, we are extremely vulnerable to unpleasant surprises. Scenario planning is a partial solution to this problem. A scenario is simply a possible forecast, a
story about one way that the future might unfold. Scenario planning is a collection of scenarios to cover the spectrum of possibilities. Of course, the complete spectrum can never be covered, but the
scenario planner wants to cover as many possibilities as he or she can. By acting in this manner, as opposed to a straight-line forecast of the most likely outcome, the scenario planner can prepare
for the future as it unfolds. Furthermore, scenario planning allows the planner to be prepared for what might otherwise be an unexpected event. Scenario planning is tuned to reality in that it
recognizes that certainty is an illusion. Suppose you are involved in long-run planning for your company. Say you make a particular product. Bather than making a single-mostlikely-outcome,
straight-line forecast, you decide to exercise scenario planning. You Will need to sit down with the other planners and brainstorm for possible scenarios. What if you cannot get enough of the raw
materials to make your product? What if one of your competitors fails? - 57 -
What if a new competitor emerges? What if you have severely underestimated demand for this product? What if a war breaks out on such-andsuch a continent? What if it is a nuclear war? Because each
scenario is only one of several, each scenario can be considered seriously. But what do you do once you have defined these scenarios? To begin with, you must determine what goal you would like to
achieve for each given scenario. Depending upon the scenario, the goal need not be a positive one. For instance, under a bleak scenario your goal may simply be damage control. Once you have defined a
goal for a given scenario, you then need to draw up the contingency plans pertaining to that scenario to achieve the desired goal. For instance, in the rather unlikely bleak scenario where your goal
is damage control, you need to have plans formulated so that you can minimize the damage. Above all else, scenario planning provides the planner with a course of action to take should a certain
scenario develop. It forces you to make plans before the fact; it forces you to be prepared for the unexpected. Scenario planning can do a lot more, however. There is a hand-inglove fit between
scenario planning and optimal f. Optimal fallows us to determine the optimal quantity to allocate to a given set of possible scenarios. We can exist in only one scenario at a time, even though we are
planning for multiple futures (multiple scenarios). Scenario planning puts us in a position where we must make a decision regarding how much of a resource to allocate today given the possible
scenarios of tomorrow. This is the true heart of scenario planning-quantifying it. We can use another parametric method for optimal f to determine how much of a certain resource to allocate given a
certain set of scenarios. This technique will maximize the utility obtained in an asymptotic geometric sense. First, we must define each unique scenario. Second, we must assign a number to the
probability of that scenario's occurrence. Being a probability means that this number is between 0 and 1. Scenarios with a probability of 0 we need not consider any further. Note that these
probabilities are not cumulative. In other words, the probability assigned to a given scenario is unique to that scenario. Suppose we are a decision maker for XYZ Manufacturing Corporation. Two of
the many scenarios we have are as follows. In one scenario XYZ Manufacturing files for bankruptcy, with a probability of .15; in the other scenario XYZ is being put out of business by intense foreign
competition, with a probability of .07. Now, we must ask if the first scenario, filing for bankruptcy, includes filing for bankruptcy due to the second scenario, intense foreign competition. If it
does, then the probabilities in the first scenario have not taken the probabilities of the second scenario into account, and we must amend the probabilities of the first scenario to be .08 (.15-.07).
Note also that just as important as the uniqueness of each probability to each scenario is that the sum of the probabilities of all of the scenarios we are considering must equal 1 exactly, not 1.01
nor .99, but 1. For each scenario we now have assigned a probability of just that scenario occurring. We must also assign an outcome result. This is a numerical value. It can be dollars made or lost
as a result of a scenario manifesting itself, it can be units of utility, medication, or anything. However, our output is going to be in the same units that we put in as input. You must have at least
one scenario with a negative outcome in order to use this technique. This is mandatory. Since we are trying to answer the question "How much of this resource should we allocate today given the
possible scenarios of tomorrow?", if there is not a negative outcome scenario, then we should allocate 100% of this resource. Further, without a negative outcome scenario it is questionable how tuned
to reality this set of scenarios really is. A last prerequisite to using this technique is that the mathematical expectation, the sum of all of the outcome results times their respective
probabilities, must be greater than zero. (1.03) ME = ∑[i = 1,N] (Pi *Ai) where Pi = The probability associated with the ith scenario. Ai = The result of the ith scenario. N = The total number of
scenarios under consideration. If the mathematical expectation equals zero or is negative, the following technique cannot be used. That's not to say that scenario planning itself cannot be used. It
can and should. However, optimal f can only be incorporated with scenario planning when there is a positive
mathematical expectation. When the mathematical expectation is zero or negative, we ought not allocate any of this resource at this time. Lastly, you must try to cover as much of the spectrum of
outcomes as possible. In other words, you really want to account for 99% of the possible outcomes. This may sound nearly impossible, but many scenarios can be made broader so that you don't need
10,000 scenarios to cover 99% of the spectrum. In making your scenarios broader, you must avoid the common pitfall of three scenarios: an optimistic one, a pessimistic one, and a third where things
remain the same. This is too simple, and the answers derived therefrom are often too crude to be of any value. Would you want to find your optimal f for a trading system based on only three trades?
So even though there may be an unknowably large number of scenarios covering the entire spectrum, we can cover what we believe to be about 99% of the spectrum of outcomes. If this makes for an
unmanageably large number of scenarios, we can make the scenarios broader to trim down their number. However, by trimming down their number we lose a certain amount of information. When we trim down
the number of scenarios (by broadening them) down to only three, a common pitfall, we have effectively eliminated so much information that this technique is severely hampered in its effectiveness.
What is a good number of scenarios to have then? As many as you can and still manage them. Here, a computer is a great asset. Assume again that we are decision making for XYZ. We are looking at
marketing a new product of ours in a primitive, remote little country. We are looking at five possible scenarios (in reality you should have many more than this, but we'll use five for the sake of
illustration). These five scenarios portray what we perceive as possible futures for this primitive remote country, their probabilities of occurrence, and the gain or loss of investing there.
Scenario War Trouble Stagnation Peace Prosperity
Probability .1 .2 .2 .45 .05 Sum 1.00
Result -$500,000 -$200,000 0 $500,000 $1 ,000,000
The sum of our probabilities equals 1. We have at least 1 scenario with a negative result, and our mathematical expectation is positive: (.1*-$500,000)+(.2*-$200,000)+.. = $185,000 We can therefore
use the technique on this set of scenarios. Notice first, however, that if we used the single most likely outcome method we would conclude that peace will be the future of this country, and we would
then act as though peace was to occur, as though it were a certainty, only vaguely remaining aware of the other possibilities. Returning to the technique, we must determine the optimal f. The optimal
f is that value for f (between 0 and 1) which maximizes the geometric mean: (4.13) Geometric mean = TWR^(1/∑[i = 1,N] Pi) and (4.14) TWR = ∏[i = 1,N] HPRi and (4.15) HPRi = (1+(Ai/(W/-f))) ^ Pi
therefore (4.16) Geometric mean = (∏[i = 1,N] (1+(Ai/(W/-f))) ^ Pi) ^ (1/∑[i = 1,N] Pi) Finally then, we can compute the real TWR as: (4.17) TWR = Geometric Mean ^ X where N = The number of different
scenarios. TWR = The terminal wealth relative. HPRi = The holding period return of the ith scenario. Ai = The outcome of the ith scenario. Pi = The probability of the ith scenario. W = The worst
outcome of all N scenarios. f = The value for f which we are testing. X = However many times we want to "expand" this scenario out. That is, what we would expect to make if we invested f amount into
these possible scenarios X times.
- 58 -
The TWR returned by Equation (4.14) is just an interim value we must have in order to obtain the geometric mean. Once we have this geometric mean, the real TWR can be obtained by Equation (4.17).
Here is how to perform these equations. To begin with, we must decide on an optimization scheme, a way of searching through the f values to find that f which maximizes our equation. Again, we can do
this with a straight loop with f from .01 to 1, through iteration, or through parabolic interpolation. Next, we must determine what the worst possible result for a scenario is of all of the scenarios
we are looking at, regardless of how small the probabilities of that scenario's occurrence are. In the example of XYZ Corporation this is -$500,000. Now for each possible scenario, we must first
divide the worst possible outcome by negative f. In our XYZ Corporation example, we will assume that we are going to loop through f values from .01 to 1. Therefore we start out with an f value of
.01. Now, if we divide the worst possible outcome of the scenarios under consideration by the negative value for f: -$500,000/-.01 = $50,000,000 Negative values divided by negative values yield
positive results, so our result in this case is positive. As we go through each scenario, we divide the outcome of the scenario by the result just obtained. Since the outcome to the first scenario is
also the worst scenario, a loss of $500,000, we now have: -$500,000/$50,000,000 = -.01 The next step is to add this value to 1. This gives us: l+(-.01) = .99 Lastly, we take this answer to the power
of the probability of its occurrence, which in our example is .1: .99^.1 = .9989954713 Next, we go to the next scenario labeled 'Trouble," where there is a .2 probability of a loss of $200,000. Our
worst-case result is still $500,000. The f value we are working on is still .01, so the value we want to divide this scenario's result by is still $50,000,000: -$200,000/$50,000,000 = -.004 Working
through the rest of the steps to obtain our HPR: 1+(-.004) = .996 .996^.2 = .9991987169 If we continue through the scenarios for this test value of .01 for f, we will find the 3 HPRs corresponding to
the last 3 scenarios: Stagnation 1.0 Peace 1.004467689 Prosperity 1.000990622 Once we have turned each scenario into an HPR for the given f value, we must multiply these HPRs together:
.9989954713*.9991987169*1.0*1.004487689*1.000990622 = 1.00366'7853 This gives us the interim TWR, which in this case is 1.003667853. Our next step is to take this to the power of 1 divided by the sum
of the probabilities. Since the sum of the probabilities is 1, we can state that we must raise the TWR to the power of 1 to give us the geometric mean. Since anything raised to the power of 1 equals
itself, we can say that our geometric mean equals the TWR in this case. We therefore have a geometric mean of 1.003667853. If, however, we relaxed the constraint that each scenario must have a unique
probability, then we could allow the sum of the probabilities of the scenarios to be greater than 1. In such a case, we would have to raise our TWR to the power of 1 divided by this sum of the
probabilities in order to derive the geometric mean. The answer we have just obtained in our example is our geometric mean corresponding to an f value of .01. Now we move on to an f value of .02, and
repeat the whole process until we have found the geometric mean corresponding to an f value of .02. We keep on proceeding until we arrive at that value for f which yields the highest geometric mean.
In our example we find that the highest geometric mean is obtained at an f value of .57, which yields a geometric mean of 1.1106. Dividing our worst possible outcome to a scenario (-$500,000) by the
negative optimal f yields a result of $877,192.35. In other words, if XYZ Corporation wants to commit to marketing this new product in this remote country, they will optimally commit this amount to
this venture at this time. As time goes by and things develop, so do the scenarios, and as their resultant outcomes and •probabilities change, so does this f amount change. The more XYZ Corporation
keeps abreast of these changing
scenarios, and the more accurate the scenarios they develop as input are, the more accurate their decisions will be. Note that if XYZ Corporation cannot commit this $877,192.35 to this undertaking at
this time, then they are too far beyond the peak of the f curve. It is the equivalent to the trader who has too many commodity contracts on with respect to what the optimal f says he or she should
have on. If XYZ Corporation commits more than this amount to this project at this time, the situation would be analogous to a commodity trader with too few contracts on. Furthermore, although the
quantity discussed here is a quantity of money, it could be a quantity of anything and the technique would be just as valid. The approach can be used for any quantitative decision in an environment
of favorable uncertainty. If you create different scenarios for the stock market, the optimal f derived from this methodology will give you the correct percentage to be invested in the stock market
at any given time. For instance, if the f returned is .65, then that means that 65% of your equity should be in the stock market with the remaining 35% in, say, cash. This approach will provide you
with the greatest geometric growth of your capital in the long run. Of course, again, the output is only as accurate as the input you have provided the system with in terms of scenarios, their
probabilities of occurrence, and resultant payoffs and costs. Furthermore, recall that everything said about optimal f applies here, and that also means that the expected drawdowns will approach a
100% equity retracement. If you exercise this scenario planning approach to asset allocation, you can expect close to 100% of the assets allocated to the endeavor in question to be depleted at any
one time in the future. For example, suppose you arc using this technique to determine what percentage of investable funds should be in the stock market and what percentage should be in a risk-free
asset. Assume that the answer is to have 65% invested in the stock market and the remaining 35% in the risk-free asset. You can expect the drawdowns in the future to approach 100% of the amount
allocated to the stock market. In other words, you can expect to see, at some point in the future, almost 100% of your entire 65% allocated to the stock market to be gone. Yet this is how you will
achieve maximum geometric growth. This same process can be used as an alternative parametric technique for determining the optimal f for a given trade. Suppose you are making your trading decisions
based on fundamentals. If you wanted to, you could outline the different scenarios that the trade may take. The more scenarios, and the more accurate the scenarios, the more accurate your results
would be. Say you are looking to buy a municipal bond for income, but you're not planning on holding the bond to maturity. You could outline numerous different scenarios of how the future might
unfold and use these scenarios to determine how much to invest in this particular bond issue. This concept of using scenario planning to determine the optimal f can be used for everything from
military strategies to deciding the optimal level to participate in an underwriting to the optimal down payment on a house. For our purposes, this technique is perhaps the best technique, and
certainly the easiest to employ for someone not using a mechanical means of entering and exiting the markets. Those who trade on fundamentals, weather patterns, Elliott waves, or any other approach
that requires a degree of subjective judgment, can easily discern their optimal fs with this approach. This approach is easier than determining distributional parameter values. The arithmetic average
HPR of a group of scenarios can be computed as: (4.18) AHPR = (∑[i = 1,N](1+(Ai/(W/-f)))*Pi)∑[i = 1,N]Pi where N = the number of scenarios. f = the f value employed. Ai = the outcome (gain or loss)
associated with the ith scenario. Pi = the probability associated with the ith scenario. W = the most negative outcome of all the scenarios. The AHPR will be important later in the text when we will
need to discern the efficient frontier of numerous market systems. We will need to determine the expected return (arithmetic) of a given market system. This expected return is simply AHPR-1. The
technique need not be applied parametrically, as detailed here; it can also be applied empirically. In other words, we can take the trade - 59 -
listing of a given market system and use each of those trades as a scenario that might occur in the future, the profit or loss amount of the trade being the outcome result of the given scenario. Each
scenario (trade) would have an equal probability of occurrence-1/N, where N is the total number of trades (scenarios). This will give us the optimal f empirically. This technique bridges the gap
between the empirical and the parametric. There is not a fine line that delineates the two schools. As you can see, there is a gray area. When we are presented with a decision where there is a
different set of scenarios for each facet of the decision, selecting the scenario whose geometric mean corresponding to its optimal f is greatest will maximize our decision in an asymptotic sense.
Often this flies in the face of conventional decision-making rules such as the Hutwicz rule, maximax, minimax, minimax regret, and greatest mathematical expectation. For example, suppose we must
decide between two possible choices. We could have many possible choices, but for the sake of simplicity we choose two, which we call "white" and "black." If we select the decision labeled "white,"
we determine that it will present the possible future scenarios to us: White Decision Scenario Probability Result A .3 -20 B .4 0 C .3 30 Mathematical expectation = $3.00 Optimal f = .17 Geometric
mean = 1 .0123
It doesn't matter what these scenarios are, they can be anything, and to further illustrate this they will simply be assigned letters, A, B, C in this discussion. Further, it doesn't matter what the
result is, it can be just about anything. The Black decision will present the following scenarios: Black Decision Scenario Probability Result A .3 -10 B .4 5 C .15 6 D .15 20 Mathematical expectation
= $2.90 Optimal f = .31 Geometric mean = 1.0453
Many people would opt for the white decision, since it is the decision with the higher mathematical expectation. With the white decision you can expect, "on average," a $3.00 gain versus black's
$2,90 gain. Yet the black decision is actually the correct decision, because it results in a greater geometric mean. With the black decision, you would expect to make 4.53% (1.0453-1) "on average" as
opposed to white's 1.23% gain. When you Consider the effects of reinvestment, the black decision makes more than three times as much, on average, as does the white decision! "Hold on, pal," you say.
"We're not doing this thing over again, we're doing it only once. We're not reinvesting back into the same future scenarios here. Won't we come out ahead if we always select the highest arithmetic
mathematical expectation for each set of decisions that present themselves this way to us?" The only time we want to be making decisions based on greatest arithmetic mathematical expectation is if we
are planning on not reinvesting the money risked on the decision at hand. Since, in almost every case, the money risked on an event today will be risked again on a different event in the future, and
money made or lost in the past affects what we have available to risk today (i.e., an environment of geometric consequences), we should decide based on geometric mean to maximize the long-run growth
of our money. Even though the scenarios that present themselves tomorrow won't be the same as those of today, by always deciding based on greatest geometric mean we are maximizing our decisions. It
is analogous to a dependent trials process such as a game of blackjack. Each hand the probabilities change, and therefore the optimal fraction to bet changes as well. By always betting what is
optimal for that hand, however, we maximize our long-run growth. Remember that to maximize long-run growth, we must look at the current contest as one that expands infinitely into the future. In
other words, we must look at each individual event as though we were to play it an infinite number of times over if we want to maximize growth over many plays of different contests.
As a generalization, whenever the outcome of an event has an effect on the outcome(s) of subsequent event(s) we are best off to maximize for greatest geometric expectation. In the rare cases where
the outcome of an went has no effect on subsequent events, we are then best off to maximize for greatest arithmetic expectation. Mathematical expectation (arithmetic) does not take the variance
between the outcomes of the different scenarios into account, and therefore can lead to incorrect decisions when reinvestment is considered, or in any environment of geometric consequences. Using
this method in scenario planning gets you quantitatively positioned with respect to the possible scenarios, their outcomes, and the likelihood of their occurrence. The method is inherently more
conservative than positioning yourself per the greatest arithmetic mathematical expectation. equation (3.05) Allowed that the geometric mean is never greater than the arithmetic mean. Likewise, this
method can never have you position yourself (have a greater commitment) than selecting by the greatest arithmetic mathematical expectation would. In the asymptotic sense, the long-run sense, this is
not only a superior method of positioning yourself, as it achieves greatest geometric growth, it is also a more conservative one than positioning yourself per the greatest arithmetic mathematical
expectation, which would invariably put you to the right of the peak of the f curve. Since reinvestment is almost always a fact of life (except on the day before you retire1) - that is, you reuse the
money that you are using today - we must make today's decision under the assumption that the same decision will present itself a thousand times over in order to maximize the results of our decision.
We must make our decisions and position ourselves in order to maximize geometric expectation. Further, since the outcomes of most events do in fact have an effect on the outcomes of subsequent
events, we should make our decisions and position ourselves based on maximum geometric expectation. This tends to lead to decisions and positions that arc not always apparently obvious.
OPTIMAL F ON BINNED DATA Now we come to the case of finding the optimal f and its by-products on binned data. This approach is also something of a hybrid between the parametric and the empirical
techniques. Essentially, the process is almost identical to the process of finding the optimal f on different scenarios, only rather than different payoffs for each bin (scenario), we use the
midpoint of each bin. Therefore, for each bin we have an associated probability figured as the total number of elements (trades) in that bin divided by the total number of elements (trades) in all
the bins. Further, for each bin we have an associated result of an element ending up in that bin. The associated results are calculated as the midpoint of each bin. For example, suppose we have 3
bins of 10 trades. The first bin we will define as those trades where the P&L's were -$1,000 to -$100. Say there are 2 elements in this bin. The next bin, we say, is for those trades which are -$100
to $100. This bin has 5 trades in it. Lastly, the third bin has 3 trades in it and is for those trades that have P&L's of $100 to $1,000. Bin -1,000 - 100 100
Bin -100 100 1,000
Trades Associated Probability 2 .2 5 .5 3 .3
Associated Result -550 0 550
Now it is simply a matter of solving for Equation (4.16), where each bin represents a different scenario. Thus, for the case of our S-bin example here, we find that our optimal f is at .2, or 1
contract for every $2,750 in equity (our worst-case loss being the midpoint of the first bin, or (-$1000+-$100)/2 = -$550). This technique, though valid, is also very rough. To begin with, it assumes
that the biggest loss is the midpoint of the worst bin. This is not 1
There are certain tines when you will want to maximize for greatest arithmetic mathematical expectation instead of geometric, Such a case is when an entity is operating in a "constant-contract" kind
or way and wants to switch over to a "fixed fractional" mode of operating at some favorable point in the future. This favorable point can be determined as the geometric threshold where the arithmetic
average trade that is used as input is calculated as the arithmetic mathematical expectation (the sum of the outcome of each scenario times its probability of occurrence) divided by (he sum of the
probabilities of all of the scenarios. Since the sum of the probabilities of all of the scenarios usually equals 1, we can state that the arithmetic average "trade" is equal to the arithmetic
mathematical expectation. - 60 -
always the case. Often it is helpful to make a single extra bin to hold the worst-case loss. As applied to our 3-bin example, suppose we had a trade that was a loss of $1,000. Such a trade would fall
into the -$1,000 to -$100 bin, and would be recorded as -$550, the midpoint of the bin. Instead we can bin this same data as follows: Bin -1,000 -999 -100 100
Bin -1,000 -100 100 1,000
Trades Associated Probability 1 .1 1 .1 5 .5 3 .3
Associated Result -1,000 -550 0 550
Now, the optimal f is .04, or 1 contract for every $25,000 in equity. Are you beginning to see how rough this technique is? So, although this technique will give us the optimal f for binned data, we
can see that the loss of information involved in binning the data to begin with can make our results so inaccurate as to be useless. If we had more data points and more bins to start with, the
technique would not be rough at all. In fact, if we had infinite data and an infinite number of bins, the technique would be exact. (Another way in which this method could be exact is if the data in
each of the bins equaled the midpoints of their respective bins exactly.) The other problem with this technique is that the average element in a bin is not necessarily the midpoint of the bin. In
fact, the average of the elements in a bin will tend to be closer to the mode of the entire distribution than the midpoint of the bin is. Hence, the dispersion tends to be greater with this technique
than is the real case. There are ways to correct for this, but these corrections themselves can often be incorrect, depending upon the shape of the distribution. Again, this problem would be
alleviated and the results would be exact if we had an infinite number of elements (trades) and an infinite number of bins. If you happen to have a large enough number of trades and a large enough
number of bins, you can use this technique with a fair degree of accuracy if you so desire. You can do "What if" types of simulations by altering the number of elements in the various bins and get a
fair approximation for the effects of such changes.
WHICH IS THE BEST OPTIMAL F? We have now seen that we can find our optimal f from an empirical procedure as well as from a number of different parametric procedures for both binned and unbinned data.
Further, we have seen that we can equalize the data as a means of preprocessing, to find what our optimal f should be if all trades occurred at the present underlying price. At this point you are
probably asking for the real optimal f to please stand up. Which optimal f is really optimal? For starters, the straight (nonequalized) empirical optimal f will give you the optimal f on past data.
Using the empirical optimal f technique detailed in Chapter 1 and in Portfolio Management Formulas will yield the optimal f that would have realized the greatest geometric growth on a past stream of
outcomes. However, we want to discern what the value for this optimal f will be in the future (specifically, over the next trade), considering that we are absent knowledge regarding the outcome of
the next trade. We do not know whether it will be a profit, in which case the optimal f would be 1, or a loss, in which case the optimal f would be 0. Rather, we can only express the outcome of the
next trade as an estimate of the probability distribution of outcomes for the next trade. That being said, our best estimate for traders employing a mechanical system, is most likely to be obtained
by using the parametric technique on our adjustable distribution function as detailed in this chapter on either equalized or nonequalized data. If there is a material difference in using equalized
versus nonequalized data, then there is most likely too much data, or not enough data at the present price level. For non-system traders, the scenario planning approach is the easiest to employ
accurately. In my opinion, these techniques will result in the best estimate of the probability distribution of outcomes on the next trade. You now have a good conception of both the empirical and
parametric techniques, as well as some hybrid techniques for finding the optimal f. In the next chapter, we consider finding the optimal j (parametrically) when more than one position is running
Chapter 5 - Introduction to Multiple Simultaneous Positions under the Parametric Approach Mention has already been made in this text of the idea of using options, either by themselves or in
conjunction with a position in the underlying, to improve returns. Buying a long put in conjunction with a long position in the underlying (or simply buying a call in lieu of both), or sometimes even
writing (setting short) a call in conjunction with a long position in the underlying can increase asymptotic geometric growth. This happens as the result of incorporating the options into the
position, which then often (but not always) reduces dispersion to a greater degree than it reduces arithmetic average return. Per the fundamental equation of trading, this then results in a greater
estimated TWR. Options can be used in a variety of ways, both among themselves and in conjunction with positions in the underlying, to manage risk. In the future, as traders concentrate more and more
on risk management, options will very likely play an ever greater role. Portfolio Management Formulas discussed the relationship of optimal j and options.1 In this chapter we pick up on that
discussion and Carry it further into an introduction of multiple simultaneous positions, especially with regard to options. This chapter gives us another method for finding the optimal f s for
positions that are not entered and exited by using a mechanical system. The parametric techniques discussed thus far could be utilized by someone not trading by means of a mechanical system, but
aside from the scenario planning approach, they still have some rough edges. For example, someone not using a mechanical system who was using the technique described in Chapter 4 would need an
estimate of the kurtosis of his or her trades. This may not be too easy to come by (at least, an accurate estimate of this may not be readily available). Therefore, this chapter is for those who are
using purely nonmechanical means of entering and exiting their trades. Users of these techniques will not need parameter estimates for the distribution of trades. However, they will need parameter
estimates for both the volatility of the underlying instrument and the trader's forecast for the price of the underlying instrument. For a trader not utilizing a mechanical, objective system, these
parameters are for easier to come by than parameter estimates for the distribution of trades that have not yet occurred. This discussion of optimal f and its by-products for those traders not
utilizing a mechanical, objective system comes at a convenient stage in the book, as it is the perfect entree for multiple simultaneous positions. Does this mean that someone who is using a
mechanical means to enter and exit trades cannot engage in multiple simultaneous positions? No. Chapter 6 will show us a method for finding optimal multiple simultaneous positions for traders whether
they are using a mechanical system or not. This chapter introduces the concept of multiple simultaneous positions, but the standpoint is that of someone not using a mechanical system, and possibly
using options as well as the underlying instruments.
ESTIMATING VOLATILITY One important parameter a trader wishing to use the following concepts must input is volatility. We discuss two ways to determine volatility. The first is to use the estimate
that has been determined by the marketplace. This is called implied volatility. The option valuation models introduced in this chapter use volatility as one of their inputs to derive the fair
theoretical price of an option. Implied volatility is determined by assuming that the market price of an option is equivalent to its fair theoretical price. Solving for the volatility value that
yields a fair theoretical price equal to the market price determines the implied volatility. This value for volatility is arrived at by iteration. The second method of estimating volatility is to use
what is known as historical volatility, which is determined by the actual price changes in the underlying instrument. Although volatility as input to the options 1
There were some minor formulative problems with the options material in Portfolio management Formulas, These have since been resolved, and the corrected formulations are presented here. My apologies
for whatever confusion this may have caused.
pricing models is an annualized figure, a much shorter period of time, usually 10 to 20 days, is used when determining historical volatility and the resulting answer is annualized. Here is how to
calculate a 20-day annualized historical volatility. Step 1 Divide tonight's close by the previous market day's close. Step 2 Take the natural log of the quotient obtained in step 1. Thus, for the
March 1991 Japanese yen on the night of 910225 (this is known as YYMMDD format for February 25, 1991), we take the close of 74.82 and divide it by the 910222 close of 75.52: 74.82/75.52 = .9907309322
We then take the natural log of this answer. Since the natural log of .9907309322 is -.009312258, our answer to step 2 is -.009312258. Step 3 After 21 days of back data have elapsed, you will have 20
values for step 2. Now you can start running a 20-day moving average to the answers from step 2. Step 4 You now want to run a 20-day sample variance for the data from step 2. For a 20-day variance
you must first determine the moving average for the last 20 days. This was done in step 3. Then, for each day of the last 20 days, you take the difference between today's moving average, and that
day's answer to step 2. In other words, for each of the last 20 days you will subtract the moving average from that day's answer to step 2. Now, you square this difference (multiply it by itself). In
so doing, you convert all negative answers to positives so that all answers are now positive. Once that is done, you add up all of these positive differences for the last 20 days. Finally, you divide
this sum by 19, and the result is your sample variance for the last 20 days. The following spreadsheet will show how to find the 20-day sample variance for the March 1991 Japanese yen for a single
day, 901226 (December 26, 1990): A Date
B C D E F Col E Close LN 20-Day Col C- Squared Change Average (-.0029)
77.96 76.91 74.93 75.37 74.18 74.72 74.57 75.42 76.44 75.54 75.37 75.9 75.57 75.08 75.11 74.99 74.52 74.06 73.91 73.49 73.5
-9.0136 -0.0261 0.0059 -0.0159 0.0073 -0.0020 0.0113 0.0134 -0.0118 -0.0023 0.0070 -0.0044 -0.0065 0.0004 -0.0016 -0.0063 -0.0062 -0.0020 -0.0057 0.0001 -.0029
-0.0107 -0.0232 0.0088 -0.0130 0.0102 0.0009 0.0142 0.0163 -0.0089 0.0006 0.0099 -0.0015 -0.0036 0.0033 0.0013 -0.0034 -0.0033 0.0009 -0.0028 0.0030
G H Col G Sum of Divided Last20 by 19 Values of Col F
0.000113 0.000537 0.000076 0.000169 0.000103 0.000000 0.000202 0.000266 0.000079 0.000000 0.000098 0.000002 0.000012 0.000010 0.000001 0.000011 0.000010 0.000000 0.000007 0.000009 .001716
As you can see, the 20-day sample variance for 901226 is .00009. You need to do this for every day so that you will have determined the 20-day sample variance for every single day. Step 5 Once you
have determined the 20-day sample variance for every single day, you must convert this into a 20-day sample standard deviation. This is easily accomplished by taking the square root of the variance
for each day. Thus, for 901226, taking the square root of the variance (which was shown to be .00009) gives us a 20-day sample standard deviation of .009486832981. Step 6 Now we must "annualize" the
data. Since we are using daily data, and we'll suppose that there are 252 trading days in the yen per year (approximately), we must multiply the answers from step 5 by the square root of 252, or
15.87450787. Thus, for 901226, the 20-day sample standard deviation is ,009486832981, and multiplying by 15.87450787 gives us an answer of .1505988048. This answer is the historical volatility-in
this case, 15.06%-and can be used as the volatility input to the Black-Scholes option pricing model.
The following spreadsheet shows how to go through the steps to get to this 20-day annualized historical volatility. You will notice that the interim steps in determining variance for a given day,
which were detailed on the previous spreadsheet, are not on this one. This was done in order for you to see the whole process. Therefore, bear in mind that the variance column in this following
spreadsheet is determined for each row exactly as in the previous spreadsheet. A DATE B C CLOSE LN Change 901127 77.96 901128 76.91 -0.0136 901129 74.93 -0.0261 901130 75.37 0.0059 961203 74.18
-0.0159 901204 74.72 0.0073 901205 74.57 -0.0020 901206 75.42 0.0113 901207 76.44 0.0134 901210 75.54 -0.0118 901211 75.37 -0.0023 961212 75.9 0.0070 961213 75.57 -0.0044 901214 75.08 -0.0065 961217
75.11 0.0004 901218 74.99 -0.0016 901219 74.52 -0.0063 901220 74.06 -0.0062 901221 73.91 -0.0020 901224 73.49 -0.0057 901226 73.5 0.0001 901227 73.34 -0.0022 901 228 74.07 0.0099 901231 73.84 -0.0031
D 20- E 20-Day F 20-Day G AnnualDay Av- Variance SD ized*15.8745 erage 1
risk-of-ruin equation can tell us what the probability of ruin is before we start out trading this system. If we were trading this system on a fixed fractional basis, the line would curve upward,
getting steeper and steeper with each elapsed trade. However, the amount we could drop off of this line is always commensurate with how high we are on the line. That is, the probability of ruin does
not diminish as more and more trades elapse. In theory, though, the risk of ruin in fixed fractional trading is zero, because we can trade in infinitely divisible units. In real life this is not
necessarily so. In real life, the risk of ruin in fixed fractional trading is always a little higher than in the same system under constant-contract trading. In reality, there is no limit on how much
you can lose on any given trade. In reality, the equity expectation lines we are talking about can be retraced completely in one trade, regardless of how high they are. Thus, the risk of ruin, if we
are to trade for an infinitely long period of time in an instrument with unlimited liability, regardless of whether we are trading on a constant-contract or a fixed fractional basis, is 1. Ruin is
certain. The only way to defuse this is to be able to put a cap on the maximum loss. This can be accomplished by trading options where the position is initiated at a debit.2
-0.0029 -0.0024 -0.0006 -0.0010
0.0001 0.0001 0.0001 0.0001
0.0095 0.0092 0.0077 0.0076
0.1508 0.1460 0.1222 0.1206
RUIN, RISK AND REALITY Recall the following axiom from the Introduction to this text: if you play a game with unlimited liability, you will go broke with a probability that approaches certainty as
the length of the game approaches infinity. What constitutes a game with unlimited liability? The answer is a distribution of outcomes where the left tail (the adverse outcomes) is unbounded and goes
to minus infinity. Long option positions allow us to bound the adverse tail of the distribution of outcomes. You may take issue with this axiom. It seems irreconcilable that the risk of ruin be less
than 1 (i.e., ruin is not certain), yet I contend that in trading an instrument with unlimited liability on any given trade, ruin is certain. In other words, my contention here is that if you trade
anything other than options and you are looking at trading for an infinite length of time, your real risk of ruin is 1. Ruin is certain under such conditions. This can be reconciled with risk-of-ruin
equations in that equations used for risk of ruin use empirical data as input. That is, the input to risk-ofruin equations comes from a finite sample of trades. My contention of certain ruin for
playing an infinitely long game with unlimited liability on any given trade is derived from a parametric standpoint. The parametric standpoint encompasses the large losing trades, those trades way
out on the left tail of the distribution, which have not yet occurred and are therefore not a part of the finite sample used as input into the risk-ofrum equations. To picture this, assume for a
moment a trading system being performed under constant-contract trading. Each trade taken is taken with only 1 contract. To plot out where we would expect the equity to be X trades into the future,
we simply multiply X by the average trade. Thus, if our system has an average trade of $250, and we want to know where we can expect our equity to be, say, 7 trades into the future, we can determine
this as $250*7 = $1,750. Notice that this line of arithmetic mathematical expectation is a straight-line function. Now, on any given trade, a certain amount can be lost, thus dropping us down
(temporarily) from this expected line. In this hypothetical situation we have a limit to what we can lose on any given trade. Since our line is always higher than the most we can lose on a given
trade, we cannot be ruined on one trade. However, a prolonged losing streak could drop us far enough down from this line that we could not continue to trade, hence we would be "ruined." The
probability of this diminishes as more trades elapse, as the line of expectation gets higher and higher. A
Imagine an underlying instrument (it can be a stock, bond, foreign currency, commodity, or anything else) that can trade up or down by 1 tick on the next trade. If, say, we measure where this stock
will be 100 ticks down the road, and if we do this over and over, we will find that the distribution of outcomes is Normal. This, per Galton's board, is as we would expect it to be. If we then
figured the price of the option based on this principle such that you could not make a profit by buying these options, or by selling them short, we would have arrived at the Binomial Option Pricing
Model (Binomial Model or Binomial). This is sometimes also called the Cox-Ross-Rubenstein model after those who devised it. Such an option price is based on its expected value (its arithmetic
mathematical expectation), since you cannot make a profit by either buying these options repeatedly and holding them to expiration or selling them repeatedly and holding the position till expiration,
losing on some and winning on others but netting out a profit in the end. Thus, the option is said to be fairly priced. We will not cover the specific mathematics of the Binomial Model. Rather, we
till cover the mathematics of the Black-Scholes Stock Option Model and the Black Futures Option Model. You should be aware that, inside from these three models, there are other valid options pricing
models which will not be covered here either, although the concepts discussed in this chapter apply to all options pricing models. Finally, the best reference I know of regarding the mathematics of
options pricing models is Option Volatility and Pricing Strategies by Sheldon Natenberg. Natenberg's book covers the mathematics for many of the options pricing models (including the Binomial Model)
in great detail. The math for the Black-Scholes Stock Option Model and the Black Futures Option Model, which we are about to discuss, comes from Natenberg. These topics take an entire text to
discuss, more space than we have here. Those readers who want to pursue the concepts of optimal f and options are referred to Natenberg for foundational material regarding options. We must cover
pricing models on a level sufficient to work the optimal f techniques about to be discussed on option prices. Therefore, we will now discuss the Black-Scholes Stock Option Pricing Model (hereafter,
Black-Scholes). This model is named after those who devised it, Fischer Black at the University of Chicago and Myron Scholes at M.I.T., and appeared in the May-June 1973 Journal of Political Economy.
Black-Scholes is considered the limiting form of the Binomial Model (hereafter, Binomial). In other words, with the Binomial, you must determine how many up or down ticks you are going to use before
you 2
We will see later in this chapter that underlying instruments are identical to call options with infinite time till expiration. Therefore, if we are long the underlying installment we can assume that
our worst-case loss is the full value of the instrument. In many cases, this can be regarded in a loss of such magnitude as to be synonymous with a cataclysmic loss. However, being short the
underlying instrument is analogous to being short a call option with infinite time remaining of expiration, and liability is truly unlimited in such a situation.
record where the price might end up. The following little diagram shows the idea.
Initial Price
Here, you start out at an initial price, where price can branch off in 2 directions for the next period. The period after that, there are 4 directions that the price might end up. Ultimately, with
the Binomial you must determine in advance how many periods in total you are going to use to figure the fair price of the option on. Black-Scholes is considered the limiting form of the Binomial
because it assumes an infinite number of periods (in theory). That is, Black-Scholes assumes that this little diagram will keep on branching out and to the right infinitely. If you determine an
option's fair price via Black-Scholes, then you will tend toward the same answer with the Binomial as the number of periods used in the Binomial tends toward infinity. (The fact that Black-Scholes is
the limiting form of the Binomial would imply that the Binomial Model appeared first. Oddly enough, the Black-Scholes model appeared first.) The mathematics of Black-Scholes are quite
straightforward. The fair value of a call on a stock option is given as: (5.01) C = U*EXP(-R*T)*N(H)-E*EXP(-R*T)*N(H-V*T^(1/2)) and for a put: (5.02) P = -U*EXP(-R*T)*N(-H)+E*EXP(-R*T)*N(V*T^(l/2)-H)
where C = The fair value of a call option. P = The fair value of a put option. U = The price of the underlying instrument. E = The exercise price of the option. T = Decimal fraction of the year left
to expiration.3 V = The annual volatility in percent. R = The risk-free rate. ln() = The natural logarithm function. N() = The cumulative Normal density function, as given in Equation (3.21). (5.03)
H = ln(U/(E*EXP(-R*T)))/(V*T^(l/2))+(V*T^(l/2))/2 For stocks that pay dividends, you must adjust the variable U to reflect the current price of the underlying minus the present value of the expected
dividends: (5.04) U = U-∑[i = 1,N] Di*EXP(-R*Wi) where Di = The ith expected dividend payout. Wi = The time (decimal fraction of a year) to the ith payout. One of the very nice things about the
Black-Scholes Model is the exact calculation of the delta, the first derivative of the price of the option. This is the option's instantaneous rate of change with respect to a change in U, the price
of the underlying:
(5.05) Call Delta = N(H) (5.06) Put Delta = -N(-H) These deltas become quite important in Chapter 7, when we discuss portfolio insurance. Black went on to make the model applicable to futures
options, which have a stock-type settlement.4 The Black futures option pricing model is the same as the Black-Scholes stock option pricing model except for the variable H: (5.07) H = ln(U/E)/(V*T^(1/
2))+(V*T^(l/2))/2 The only other difference in the futures model is the deltas, which are: (5.08) Call Delta = EXP(-R*T)*N(H) (5.09) Put Delta = -EXP(-R*T)*N(-H) For example, suppose we are looking
at a futures option that has a strike price of 600, a current market price of 575 on the underlying, and an annual volatility of 25%. We will use the commodity options model, a 252-day year, and a
risk-free rate of 0 for simplicity. Further, we will assume that the expiration day of the options is September 15, 1991 (910915), and that the day on which we are observing these options is August
1, 1991 (910801). To begin with, we will calculate the variable T, the decimal fraction of the year left to expiration. First, we must convert both 910801 and 910915 to their Julian day equivalents.
To do this, we must use the following algorithm. 1. Set variable 1 equal to the year (1991), variable 2 equal to the month (8) and variable 3 equal to the day (1). 2. If variable 2 is less than 3
(i.e., the month is January or February) then set variable 1 equal to the year minus 1 and set variable 2 equal to the month plus 13. 3. If variable 2 is greater than 2 (i.e., the month is March or
after) then set variable 2 equal to the month plus 1. 4. Set variable 4 equal to variable 3 plus 1720995 plus the integer of the quantity 365.25 times variable 1 plus the integer of the quantity
30.6001 times variable 2. Mathematically: V4 = V3+1720995+INT(365.25*V1)+INT(30.6001*V2) 5. Set variable 5 equal to the integer of the quantity .01 times variable 1: Mathematically: V5 = INT(.01*V1)
Now to obtain the Julian date as variable 4 plus 2 minus variable 5 plus the integer of the quantity .25 times variable 5. Mathematically: JULIAN DATE = V4+2-V5+INT(.25*V5) So to convert our date of
910801 to Julian: Step 1 V1 = 1991, V2 = 8, V3 = 1 Step 2 Since it is later in the year than January or February, this step does not apply. Step 3 Since it is later in the year than January or
February, this step does apply. Therefore V2 = 8+1 = 9. Step 4 Now we set V4 as: V4 = V3+1720995+INT(365.25*V1)+INT(30.6001*V2) = 1+1720995+INT(365.25*1991)+INT(30.6001*9) = 1+1720995+INT(727212.75)
+INT(275.4009) = 1+1720995+727212+275 = 2448483 Step 5 Now we set V5 as: V5 = INT(.01*V1) = INT(.01*1991) = INT( 19.91) = 19 Step 6 Now we obtain the Julian date as: JULIAN DATE = V4+2-V5+INT(.25*V5)
Most often, only market days are used in calculating the fraction of a year in options. The number of weekdays in a year (Gregorian) can be determined as 365.2425/7*5 = 260.8875 weekdays on average
per year. Due to holidays, the actual number of trading days in a year is usually somewhere between 250 and 252. Therefore, if we are using a 252-trading-day year, and there are 50 trading days left
to expiration, the decimal fraction of the year left to expiration, T, would be 50/252 = .1984126984.
Futures-type settlement requires no initial cash payment, although the required margin must be posted. Additionally, all profits and losses are realized immediately, even if the position is not
liquidated. These points are in direct contrast to stock-type settlement. In stock-type settlement, purchase requires full and immediate payment, and profits (or losses) are not realized until the
position is liquidated.
= 2448483+2-19+INT(.25*19) = 2448483+2-19+INT(4.75) = 2448483+2-19+4 = 2448470 Thus, we can state that the Julian date for August 1, 1991, is 2448470. Now if we convert the expiration date of
September 15, 1991 to Julian, we would obtain a Julian date of 2448515. If we were using a 365 day year (or 365.2425, the Gregorian Calendar length), we could find the time left until expiration by
simply taking the difference between these two Julian dates, subtracting 1 and dividing the sum by 365 (or 365.2425). However, we are not using a 365 day year; rather we are using a 252-day year, as
we are only counting days when the exchange is open (weekdays less holidays). Here is how we account for this. We must examine each day between the two Julian dates to see if it is a weekend. We can
determine what day of the week a given Julian date is by adding 1 to the Julian date, dividing by 7, and taking the remainder (the modulus operation). The remainder will be a value of 0 through 6,
corresponding to Sunday through Saturday. Thus, for August 1, 1991, where the Julian date is 2448470: Day of week = ((2448470+l)/7) % 7 = 2448471/ % 7 = ((2448471/7)-INT(2448471/7))*7 =
(349781.5714-349781)*7 = .5714*7 =4 Since 4 corresponds to Thursday, we can state that August 1, 1991 is a Thursday. We now proceed through each Julian date up to and including the expiration date.
We count up all of the weekdays in between those two dates and find that there are 32 weekdays in between (and including) August 1, 1991 and September 15, 1991. From our final answer we must subtract
1, as we count day one when August 2, 1991 arrives. Therefore, we have 31 weekdays between 910801 and 910915. Now we must subtract holidays, when the exchange is closed. Monday September 2, 1991, is
Labor Day in the United States. Even though we may not live in the United States, the exchange where this particular option is traded on, being in the United States, will be closed on September 2,
and therefore we must subtract 1 from our count of days. Therefore, we determine that we have 30 "tradeable" days before expiration. Now we divide the number of tradeable days before expiration by
the length of what we have determined the year to be. Since we are using a 252 day year, we divide 30 by 252 to obtain .119047619. This is the decimal fraction of the year left to expiration, the
variable T. Next, we must determine the variable H for the pricing model. Since we are using the futures model, we must calculate H as in Equation (5.07): (5.07) H = ln(U/E)/(V*T^(1/2))+(V*T^(l/2))/2
= ln(575/600)/(.25*.119047619^(1/2))+(.25*.119047619 ^ (l/2))/2 = ln(575/600)/(.25*.119047619^.5)+(.25*.119047619^.5)/2 = ln(575/600)/(.25*.3450327796)+(.25*.3450327796)/2 = ln(575/600)
/.0862581949+.0862581949/2 = ln(.9583333)/.0862581949+.0862581949/2 = .04255961442/.0862581949+.0862581949/2 = -.4933979255+.0862581949/2 = -.4933979255+.04312909745 = -.4502688281 In Equation (5.01)
you will notice that we need to use Equation (3.21) on two occasions. The first is where we set the variable Z in Equation (3.01) to the variable H as we have just calculated it; the second is where
we set it to the expression H-V*T^(1/2). We know that V*T^(1/2) is equal to .0862581949 from the last expression, so HV*T^(1/2) equals -.4502688281-.0862581949 = -.536527023. We therefore must use
Equation (3.21) with the input variable Z as -.4502688281 and -.536527023. Prom Equation (3.21), this yields .3262583 and .2957971 respectively (Equation (3.21) has been demonstrated in Chapter 3, so
we need not repeat it here). Notice, however, that we have now
obtained the delta, the instantaneous rate of change of the price of the option with respect to the price of the underlying. The delta is N(H), or the variable H pumped through as Z in Equation
(3.21). Our delta for this option is there-fore .3262583. We now have all of the inputs required to determine the theoretical option price. Plugging our values into Equation (5.01): (5.01) C = U*EXP
(-R*T)*N(H)-E*EXP(-R*T)*N(H-V *T^(1/2)) = 575*EXP(-0*.119047619)*N(-.4502688281)-600*EXP(0*.119047619)*N(-.4502688281-.25*.119047619^(1/2)) = 575*EXP(-0*.119047619)*.3262583-600*EXP(0*.119047619)
*.2957971 = 575*EXP(0)*.3262583-600*EXP(0)*.2957971 = 575*1*.3262583-600*1*.2957971 = 575*.3262583-600*.2957971 = 187.5985225-177.47826 = 10.1202625 Thus, the fair price of the 600 call option that
expires September 15, 1991, with the underlying at 575 on August 1, 1991, with volatility at 25%, and using a 252-day year and the Black futures model with R = 0, is 10.1202625. It is interesting to
note the relationship between options and their underlying instruments by using these pricing models. We know that 0 is the limiting downside price of an option, but on the upside the limiting price
is the price of the underlying instrument itself. The models demonstrate this in that the theoretical fair price of an option approaches its upside limiting value of the value of the underlying, U,
if any or all three of the variables T, R, or V are increased. This would mean, for instance, that if we increased T, the time till expiration of the option, to an infinitely high amount, then the
price of the option would equal that of the underlying instrument. In this regard, we can state that all underlying instruments are really the same as options, only with infinite T. Thus, what
follows in this discussion is not only true of options, it can likewise be said to be true of the underlying as though it were an option with infinite T. Both the Black-Scholes stock option model and
the Black futures model are based on certain assumptions. The developers of these models were aware of these assumptions and so should you be. Nonetheless, despite whatever shortcomings are involved
in the assumptions, these models are still very accurate, and option prices will tend to these models' values. The first of these assumptions is that the option cannot be exercised until the exercise
date. This European style options settlement tends to underprice certain options as compared to the American style, where the options can be exercised at any time. Some of the other assumptions in
this model are that we actually know the future volatility of the underlying instrument and that it will remain constant throughout the life of the option. Not only will this not happen (i.e., the
volatility will change), but the distribution of volatility changes is lognormal, an issue that the models do not address.5 Another issue that the models assume is that the risk-free interest rate
will remain constant throughout the life of an option. This also Is unlikely. Furthermore, short-term rates appear to be lognormally distributed. Since the higher the short-term rates are the higher
the resultant option prices will be, this assumption regarding short-term rates being constant may further undervalue the fair price of the option (the price returned by the models) relative to the
expected value (its true arithmetic mathematical expectation). Finally, another point (perhaps the most important point) that might undervalue the model-generated fair value of the option relative to
the true expected value regards the assumption that the logs of price changes are normally distributed. If rather than having a time frame in which they expired, options had a given number of up and
down ticks before they expired, and could only change by 1 tick at a time, and if each tick was statistically independent of the last tick, we could rightly make this assumption of Normality. The
logs of price changes, however, do not have these clean characteristics. 5
The fact that the distribution of volatility changes is lognormal is not a very widely considered fact. In light of how extremely sensitive option prices are to the volatility of the underlying
instrument, this certainly makes the prospect of buying a long Option (put Or call) more appealing in terms of mathematical expectation.
All of these assumptions made by the pricing models aside, the theoretical fair prices returned by the models are monitored by professionals in the marketplace. Even though many are using models that
differ from these detailed here, most models return similar theoretical fair prices. When actual prices diverge from the models to the extent that an arbitrageur has a profit opportunity, they will
begin to again converge to what the models claim is the theoretical fair price. This fact, that we can predict with a fair degree of accuracy what the price of an option will be given the various
inputs (time to expiration, price of the underlying instrument, etc.) allows us to perform the exercises regarding optimal f and its by-products on options and mixed positions. The reader should bear
in mind that all of these techniques are based on the assumptions just noted about the options pricing models themselves.
A EUROPEAN OPTIONS PRICING MODEL FOR ALL DISTRIBUTIONS We can create our own pricing model devoid of any assumptions regarding the distribution of price changes. First, the term "theoretically fair"
needs to be defined when referring to an options price. This definition is given as the arithmetic mathematical expectation of the option at expiration, expressed in terms ofits present worth,
assuming no directional bias in the underlying. This is our options pricing model in literal terms. The frame of reference employed here is 'What is this option worth to me today as an options buyer?
" In mathematical terms, recall that the mathematical expectation (arithmetic) is defined as Equation (1.03): (1.03) Mathematical expectation = ∑[i = 1,N] (pi*ai) where p = Probability of winning or
losing the ith trial. a = Amount won or lost on the ith trial. N = Number of possible outcomes (trials). The mathematical expectation is computed by multiplying each possible gain or loss by the
probability of that gain or loss and then summing these products. When the sum of the probabilities, the pi terms, is greater than 1, Equation 1,03 must then be divided by the sum of the
probabilities, the pi terms. In a nutshell, our options pricing model will take all those discrete price increments that have a probability greater than or equal to .001 of occurring at expiration
and determine an arithmetic mathematical expectation on them. (5.10) C = ∑(pi*ai)/∑pi where C = The theoretically fair value of an option, or an arithmetic mathematical expectation. pi = The
probability of being at price i on expiration. ai = The intrinsic value associated with the underlying instrument being at price i. In using this model, we first begin at the current price and work
up I tick at a time, summing the values in both the numerator and denominator until the price, i, has a probability, pi, less than .001 (you can use a value less than this, but I find .001 to be a
good value to use; it implies finding a fair value assuming you are going to have 1,000 option trades in your lifetime). Then, starting at that value which is 1 tick below the current price, we work
down 1 tick at a time, summing values for both the numerator and denominator until the price, i, results in a probability, pi, less than .001. Note that the probabilities we are using are 1-tailed
probabilities, where if a probability is greater than .5, we are subtracting the probability from 1. Of interest to note is that the pi terms, the probabilities, can be discerned by whatever
distribution the user feels is applicable, not just the Normal. That is, the user can derive a theoretically fair value of an option for any distributional form! Thus, this model frees us to use the
stable Paretian, Student's t, Poisson, our own adjustable distribution, or any other distribution we feel price conforms to in determining fair options values. We still need to amend the model to
express the arithmetic mathematical expectation at expiration as a present value: (5.11) C = (∑ (pi*ai)*EXP(-R*T))/ ∑ pi
where C = The theoretically fair value of an option, or the present value of the arithmetic mathematical expectation at time T. pi = The probability of being at price i on expiration. ai = The
intrinsic value associated with the underlying instrument being at price i. R = The current risk-free rate. T = Decimal fraction of a year remaining till expiration. Equation (5.11) is the options
pricing model for all distributions, returning the present worth of the arithmetic mathematical expectation of the option at expiration.6 Note that the model can be used for put values as well, the
only difference being in discerning the intrinsic values, the ai terms, at each price increment, i. When dividends are involved, Equation (5.04) should be employed to adjust the current price of the
underlying by. Then this adjusted current price is used in determining the probabilities associated with being at a given price, i, at expiration. An example of using Equation (5.11) is as follows.
Suppose we determine that the Student's t distribution is a good model of the distribution of the log of price changes7 for a hypothetical commodity that we are considering buying options on. Now we
use the K-S test to determine the best-fitting parameter value for the degrees of freedom parameter of the Student's t distribution. We will assume that 5 degrees of freedom provides for the best fit
to the actual data per the K-S test. We will assume that we are discerning the fair price for a call option on 911104 that expires 911220, where the price of the underlying is 100 and the strike
price is 100. We will assume an annualized volatility of 20%, a risk-free rate of 5%, and a 260.8875-day year (the average number of weekdays in a year; we therefore ignore holidays that fall on a
weekday, for example, Thanksgiving in the United States). Further, we will assume that the minimum tick that this hypothetical commodity can trade in is .10. If we perform Equations (5.01) and (5.02)
using (5.07) for the variable II, we obtain fair values of 2.861 for both the 100 call and 100 put. These options prices are thus the fair values according to the Black commodity options model, which
assumes a lognormal distribution of prices. If, however, we use Equation (5.11), we must figure the pi terms. These we obtain from the snippet of BASIC code in Appendix B. Note that the snippet of
code requires a standard value, given the variable name Z, and the degrees of freedom, given the variable name DEGFDM. Before we call this snippet of code we can convert the price, i, to a standard
value by the following formula: (5.12) Z = ln(i/current underlying price)/(V*T^.5) where i = The price associated with the current status of the summation process. V = The annualized volatility as a
standard deviation. T = Decimal fraction of a year remaining till expiration. ln() = The natural logarithm function. Equation (5.12) can be expressed in BASIC as: Z = LOG(I/U)/(V*T^.5) The variable U
represents the current underlying price (adjusted for dividends, if necessary). 6
Notice that Equation (5.11) does not differentiate stock from commodity options. Conventional thinking has it that, embedded in the price of a stock option, is the interest on a pure discount bond
that matures at expiration with a face value equal to the strike price. Commodity options, it is believed, see an interest rate of 0 on this, so it is as if they do not have it. From our frame of
reference-that is, "What is this option worth to me today as an options buyer?"-we disregard this. If both a stock and a commodity have exactly the same expected distribution of outcomes, their
arithmetic mathematical expectations are the same, and the rational investor would opt for buying the less expensive. This situation is analogous to someone considering buying One of two identical
houses where one is priced higher because the seller has paid a higher interest rate on the mortgage. 7 The Student's t distribution is generally a poor model of the distribution of price changes.
However, since the only other parameter, aside from volatility as an annualized standard deviation, which needs to be considered in using the Student's t distribution, is the degrees of freedom, and
since the probabilities associated with the Student's t distribution are easily ascertained by the snippet of Basic code in Appendix B, we will use the Student's t distribution here for the sake of
simplicity and demonstration.
Lastly, once we have obtained a probability from the Student's t distribution BASIC code snippet in Appendix B, the probability returned is a 2-tailed one. We need to make it a 1-tailed probability
and express it as a probability of deviating from the current price (i.e., bound it between 0 and .5). These two procedures are performed by the following two lines of BASIC: CF = 1-((1- CF)/2) IF CF
>.5 then CF = 1-CF Doing this with the option parameters we have specified, and 5 degrees of freedom, yields a fair call option value of 3.842 and a fair put value of 2.562. These values differ
considerably from the more conventional models for a number of reasons. First, the fatter tails of the Student's t distribution with 5 degrees of freedom will make for a higher fair call value.
Generally, the thicker the tails of the distribution used, the greater the call value returned. Had we used 4 degrees of freedom, we would have obtained an even greater fair call value. Second, the
put value and the call value differ substantially, whereas with the more conventional model the put and call value were equivalent. This difference requires some discussion. The fair value of a put
can be determined from a call option with the same strike and expiration (or vice versa) by the put-call parity formula: (5.13) P = C+(E-U)*EXP(-R*T) where P = The fair put value. C = The fair call
value. E = The strike price. U = The current price of the underlying instrument. R = The risk-free rate. T = Decimal fraction of a year remaining till expiration. When Equation (5.13) is not true, an
arbitrage opportunity exists. From (5.13) we can see that the conventional model's prices, being equivalent, would appear to be correct since the expression E-U is 0, and therefore P = c. However,
let's consider the variable U in Equation (5.13) as the expected price of the current underlying instrument at expiration. The expected value of the underlying can be discerned by (5.10) except the
ai term simply equals i. For our example with DEGFDM = 5, the expected value for the underlying instrument = 101.288467. This happens as a result of the fact that the least a commodity can trade for
in this model is 0, whereas there is no upside limit. A move from a price of 100 to a price of 50 is as likely as a move from a price of 100 to 200. Hence, call values will be priced greater than put
values. It comes as no surprise then that the expected value of the underlying instrument at expiration should be greater than its current value. This seems to be consistent with our experience with
inflation. When we replace the U in Equation (5.13), the current price of the underlying instrument, with its expected value at expiration, we can derive our fair put value from (5.13) as: P = 3.842+
(100-101.288467)*EXP(-.05*33/260.8875) = 3.842+1.288467*EXP(-.006324565186) = 3.842+-1.288467*.9936954 = 3.842+-1.280343731 = 2.561656269 This value is consistent with the put value discerned by
using Equation (5.11) for the current value of the arithmetic mathematical expectation of the put at expiration. There's only one problem. If both the put and call options for the same strike and
expiration are fairly priced per (5.11), then an arbitrage opportunity exists. In the real world the U in (5.13) is the current price of the underlying, not the expected value of the underlying, at
expiration. In other words, if the current price is 100 and the December 100 call is 3.842 and the 100 put is 2.561656269, then an arbitrage opportunity exists per (5.13). The absence of put-call
parity would suggest, given our newly derived options prices, that rather than buy the call for 3.842 we instead obtain a* equivalent position by buying the put for 2.562 and buy the underlying. The
problem is resolved if we first calculate the expected value on the underlying, discerned by Equation (5.10) except the ai term simply equals i (for our example with DEGFDM = 5, the expected value
for the underlying instrument equals 101.288467) and subtract the current price of the underlying from this value. This gives us 101.288467-100 =
1.288467. Now if we subtract this value from each ai term, each intrinsic value in (5.11) (and setting any resultant values less than 0 to 0), then Equation (5.11) will yield theoretical values that
are consistent with (5.13). This procedure has the effect of forcing the arithmetic mathematical expectation on the underlying to equal the current price of the underlying. In the case of our example
using the Student's t distribution with 5 degrees of freedom, we obtain a value for both the 100 put and call of 3.218. Thus our answer is consistent with Equation (5.13), and an arbitrage
opportunity no longer exists between these two options and their underlying instrument. Whenever we are using a distribution that results in an arithmetic mathematical expectation at expiration on
the underlying which differs from the current value of the underlying, we must subtract the difference (expectation-current value) from the intrinsic value at expiration of the options and floor
those resultant intrinsic values less than 0 to 0. In so doing, Equation (5.11) will give us, for any distributional form we care to use, the present worth of the arithmetic mathematical expectation
of the option at expiration, given an arithmetic mathematical expectation on the underlying instrument equivalent to its current price (i.e., assuming no directional bias in the underlying
THE SINGLE LONG OPTION AND OPTIMAL F Let us assume here that we are speaking about the simple outright purchase of a call option. Rather than taking a full history of option trades that a given
market system produced and deriving our optimal f therefrom, we are going to take a look at all the possible outcomes of what this particular option might do throughout the term that we hold it. We
are going to weight each outcome by the probability of its occurrence. This probability-weighted outcome will be derived as an HPR relative to the purchase price of the option. Finally, we will look
at the full spectrum of outcomes (i.e., the geometric mean) for each value for f until we obtain the optimal value. In almost all of the good options pricing models the input variables that have the
most effect on the theoretical options price are (a) the time remaining till expiration, (b) the strike price, (c) the underlying price, and (d) the volatility. Different models have different input,
but basically these four have the greatest bearing on the theoretical value returned. Of the four basic inputs, two-the time remaining till expiration and the underlying price-are certain to change.
One, volatility, may change, yet rarely to the extent of the underlying price or the time till expiration, and certainly not as definitely as these two. One, the strike price, is certain not to
change. Therefore, we must look at the theoretical price returned by our model for all of these different values of different underlying prices and different for all of these different values of
different underlying prices and different times left till expiration. The HPR for an option is thus a function not only of the price of the underlying, but also of how much time is left on the
option: (5.14) HPR(T,U) = (1+f*(Z(T,U-Y)/S-1))^P(T,U) where HPR(T,U) = The HPR for a given test value for T and U. f = The tested value for f. S = The current price of the option. Z(T,U-Y) = The
theoretical option price if the underlying were at price U-Y with time T remaining till expiration. This can be discerned by whatever pricing model the user deems appropriate. P(T,U) = The I-tailed
probability of the underlying being at price U by time T remaining till expiration. This can discerned by whatever distributional form the user deems appropriate. Y = The difference between the
arithmetic mathematical expectation of the underlying at time T, given by Equation (5.10), and the current price. This formula will give us the HPR (which is probability-weighted to the probability
of the outcome) of one possible outcome for this option: that the underlying instrument will be at price U by time T. In the preceding equation the variable T represents the decimal part of the year
remaining until option expiration. Therefore, at expiration T = 0. If 1 year is left to expiration, T = 1. The variable Z(T, U-Y) is found via whatever option model you are using. The only other
you need to calculate is the variable P(T, U), the probability of the underlying being at price U with time T left in the life of the option. If we are using the Black-Scholes model or the Black
commodity model, we can calculate P(T, U) as: if U < or = to Q: (5.15a) P(T,U) = N((ln(U/Q))/(V*(L^(1/2)))) if U > Q: (5.15b) P(T,U) = 1-N((ln(U/Q))/(V*(L^(1/2)))) where U = The price in question. Q
= Current price of the underlying instrument. V = The annual volatility of the underlying instrument. L = Decimal fraction of the year elapsed since the option was put on. N() = The Cumulative Normal
Distribution Function. This is given as Equation (3.21). ln() = The natural logarithm function. Having performed these equations, we can derive a probabilityweighted HPR for a particular outcome in
the option. A broad range of outcomes are possible, but fortunately, these outcomes are not continuous. Take the time remaining till expiration. This is not a continuous function. Rather, a discrete
number of days are left till expiration. The same is true for the price of the underlying. If a stock is at a price of, say, 35 and we want to know how many possible price outcomes there are between
the possible prices of 30 and 40, and if the stock is traded in eighths, then we know that there are 81 possible price outcomes between 30 and 40 inclusive. What we must now do is calculate all of
the probability- weighted HPRs on the option for the expiration date or for some other mandated exit date prior to the expiration date. Say we know we will be out of the option no later than a week
from today. In such a case we do not need to calculate HPRs for the expiration day, since that is immaterial to the question of how many of these options to buy, given all of the available
information (time to expiration, time we expect to remain in the trade, price of the underlying instrument, price of the option, and volatility). If we do not have a set time when we will be out of
the trade, then we must use the expiration day as the date on which to calculate probability-weighted HPRs. Once we know how many days to calculate for (and we will assume here that we will calculate
up to the expiration day), we must calculate the probability-weighted HPRs for all possible prices for that market day. Again, this is not as overwhelming as you might think. In the Normal
Probability Distribution, 99.73% of all outcomes will fall within three standard deviations of the mean. The mean here is the current price of the underlying instrument. Therefore, we really only
need to calculate the probability-weighted HPRs for a particular market day, for each discrete price between -3 and +3 standard deviations. This should get us quite accurately close to the correct
answer. Of course if we wanted to we could go out to 4, 5, 6 or more standard deviations, but that would not be much more accurate. Likewise, if we wanted to, we could contract the price window in by
only looking at 2 or 1 standard deviations. There is no gain in accuracy by doing this though. The point is that 3 standard deviations is not set in stone, but should provide for sufficient accuracy.
If we are using the Black-Scholes model or the Black futures option model, we can determine how much 1 standard deviation is above a given underlying price, U: (5.16) Std. Dev. = U*EXP(V*(T^(1/2)))
where U = Current price of the underlying instrument. V = The annual volatility of the underlying instrument. T = Decimal fraction of the year elapsed since the option was put on. EXP() = The
exponential function. Notice that the standard deviation is a function of the time elapsed in the trade (i.e., you must know how much time has elapsed in order to know where the three standard
deviation points are).
Building upon this equation, to determine that point that is X standard deviations above the current underlying price: (5.17a) +X Std. Dev. = U*EXP(X*(V*T^(1/2))) Likewise, X standard deviations
below the current underlying price is found by: (5.17b) -X Std. Dev. = U*EXP(-X*(V*T ^ (1/2))) where U = Current price of the underlying instrument. V = The annual volatility of the underlying
instrument. T = Decimal fraction of the year elapsed since the option was put on. EXP() = The exponential function. X = The number of standard deviations away from the mean you are trying to discern
probabilities on. Remember, you must first determine how old the trade is, as a fraction of a year, before you can determine what price constitutes X standard deviations above or below a given price
U. Here, then, is a summary of the procedure for finding the optimal f for a given option. Step 1 Determine if you will be out of the option by a definite date. If not, then use the expiration date.
Step 2 Counting the first day as day 1, determine how many days you will have been in the trade by the date in number 1. Now convert this number of days into a decimal fraction of a year. Step 3 For
the day in number 1, calculate those points that are within +3 and -3 standard deviations of the current underlying price. Step 4 Convert these ranges of values of prices in step 3 to discrete
values. In other words, using increments of 1 tick, determine all of the possible prices between and including those values in step 3 that bound the range. Step 5 For each of these outcomes now
calculate the Z(T, U-Y)'s and P(T, U)'s for the probability-weighted HPR equation. In other words, for each of these outcomes now calculate the resultant theoretical option price as well as the
probability of the underlying instrument being at that price by the dates in question. Step 6 After you have completed step 5, you now have all of the input required to calculate the
probability-weighted HPRs for all of the outcomes. (5.14) HPR(T,U) = (1+f*(Z(T,U-Y)/S-1))^P(T,U) where f = The tested value for f. S = The current price of the option. Z(T,U-Y) = The theoretical
option price if the underlying were at price U-Y with time T remaining till expiration. This can discerned by whatever pricing model the user deems appropriate. P(T,U) = The 1-tailed probability of
the underlying being at price U by time T remaining till expiration. This can be discerned by whatever distributional from the user deems appropriate. Y = The difference between the arithmetic
mathematical expectation of the underlying at time T, given by (5.10), and the current price. You should note that the distributional form used for the variable P(T, U) need not be the same
distributional form used by the pricing model employed to discern the values for Z(T, U-Y). For example, suppose you are using the Black-Scholes stock option model to discern the values for Z(T,
U-Y). This model assumes a lognormal distribution of price changes. However, you can correctly use another distributional form to determine the corresponding P(T, U). Literally, this translates as
follows: You know that if the underlying goes to price U, the option's price will tend to that value given by Black-Scholes. Yet the probability of the underlying going to price U from here is
greater than the lognormal distribution would indicate. Step 7 Now you can begin the process of finding the optimal f. Again you can do this by iteration, by looping through all of the possible f
values between 0 and 1, by parabolic interpolation, or by any other one-dimensional search algorithm. By plugging the test values for f into the HPRs (and you have an HPR for each of the possible
price increments between +3 and -3 standard deviations on the expiration date or mandated exit date) you can find your geometric mean for a given test value of f. The way you now obtain this
geometric mean is to multiply
all Of these HPRs together and then take the resulting product to the power of 1 divided by the sum of the probabilities: (5.18a) G(f,T) = {∏[U = -3SD,+3SD]HPR(T,U)}^(1/∑[U = -3SD ,+3SD]P(T,U))
Therefore: (5.18b) G(f,T) = {∏[U = -3SD,+3SD](l+f*(Z(T,UY)/S1))^P(T,U)}^(1/∑[U = -3SD,+3SD]P(T,U)) where G(f, T) = The geometric mean HPR for a given test value for f and a given time remaining till
expiration from a mandated exit date. f = The tested value for f. S = The current price of the option. Z(T,U-Y) = The theoretical option price if the underlying were at price U -Y with time T
remaining till expiration. This can be discerned by whatever pricing model the user deems appropriate. P(T,U) = The probability of the underlying being at price U by time T remaining till expiration.
This can be discerned by whatever distributional form the user deems appropriate. Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given by (5.10), and
the current price. The value for f that results in the greatest geometric mean is the value for f that is optimal. We can optimize for the optimal mandated exit date as well. In other words, say we
want to find what the optimal f is for a given option for each day between now and expiration. That is, we run this procedure over and lover, starting with tomorrow as the mandated exit date and
finding the optimal f, then starting the whole process over again with the next day as the mandated exit date. We keep moving the mandated exit date forward until the mandated exit date is the
expiration date. We record the optimal fs and geometric means for each mandated exit date. When we are through with this entire procedure, we can find the mandated exit date that results in the
highest geometric mean. Now we know the date by which we must be out of the option position by in order to have the highest mathematical expectation (i.e., the highest geometric mean). We also know
how many contracts to buy by using the f value that corresponds to the highest- geometric mean. We now have a mathematical technique whereby we can blindly go out and buy an option and (as long as we
are out of it by the mandated exit date that has the highest geometric mean-provided that it is greater than 1.0, of course-and buy the number of contracts indicated by the optimal f corresponding to
that highest geometric mean) be in a positive mathematical expectation. Furthermore, these are geometric positive mathematical expectations. In other words, the geometric mean (minus 1.0) is the
mathematical expectation when you are reinvesting returns. (The true arithmetic positive mathematical expectation would of course be higher than the geometric.) Once you know the optimal f for a
given option, you can readily turn this into how many contracts to buy based on the following equation: (5.19) K = INT(E/(S/f)) where K = The optimal number of option contracts to buy. f = The value
for the optimal f (0 to 1). S = the current price of the option. E = The total account equity. INT() = The integer function. The answer derived from this equation must be "floored to the integer." In
other words, for example, if the answer is to buy 4.53 contracts, you would buy 4 contracts. We can determine the TWR for the option trade. To do so we must know how many times we would perform this
same trade over and over. In other words, if our geometric mean is 1.001 and we want to find the TWR that corresponds to make this same play over and over 100 times, our TWR would be 1.001 ^ 100 =
1.105115698. We would therefore expect to make 10.3115698% on our stake if we were to make this same options play 100 times over. The formula to convert from a geometric mean to a TWR was given as
Equation (4.18): (4.18) TWR = Geometric Mean^X where
TWR = The terminal wealth relative. X = However many times we want to "expand" this play out. That is, what we would expect to make if we invested f amount into these possible scenarios X times.
Further, we can determine our other by-products, such as the geometric mathematical expectation, as the geometric mean minus 1. If we take the biggest loss possible (the cost of the option itself),
divide this by the optimal f, and multiply the result by the geometric mathematical expectation, the result will yield the geometric average trade. As you have seen, when applied to options positions
such as this, the optimal f technique has the added by-product of discerning what the optimal exit date is. We have discussed the options position in its pure form, devoid of any underlying bias we
may have in the direction of the price of the underlying. For a mandated exit date, the points of 3 standard deviations above and below are calculated from the current price. This assumes that we
know nothing of the future direction of the underlying. According to the mathematical pricing models, we should not be able to find positive arithmetic mathematical expectations if we were to hold
these options to expiration. However, as we have seen, through the use of this technique it is possible to find positive geometric mathematical expectations if we put on a certain quantity and exit
the position on a certain date. If you have a bias toward the direction of the underlying, that can also be incorporated. Suppose we are looking at options on a particular underlying instrument,
which is currently priced at 100. Further suppose that our bias, generated by our analysis of this market, suggests a price of 105 by the expiration date, which is 40 market days from now. We expect
the price to rise by 5 points in 40 days. If we assume a straightline basis for this advance, we can state that the price should rise, on average, .125 points per market day. Therefore, for the
mandated exit day of tomorrow, we will figure a value of U of 100.125. For the next mandated exit date, U will be 100.25. Finally, by the time that the mandated exit date is the expiration date, U
will be 105. If the underlying is a stock, you should subtract the dividends from this adjusted U via Equation (5.04). The bias is applied to the process by having a different value for U each day
because of our forecast. Because they affect the outcomes of Equations (5.17a) and (5.17b), these different values for U will dramatically affect our optimal f and by-product calculations. Notice
that because Equations (5.17a) and (5.17b) are affected by the new value for U each day, there is an automatic equalization of the data. Hence, the optimal f's we obtain are based on equalized data.
As you work with this optimal f idea and options, you will notice that each day the numbers change. Suppose you buy an option today at a certain price that has a given mandated exit date. Suppose the
option has a different price after tomorrow. If you run the optimal f procedure again on this new option, it, too, may have a positive mathematical expectation and a different mandated exit date.
What does this mean? The situation is analogous to a horse race where you can still place bets after the race has begun, until the race is finished. The odds change continuously, and you can cash
your ticket at any time, you need not wait until the race is over. Say you bet $2 on a horse before the race begins, based on a positive mathematical expectation that you have for that horse, and the
horse is running next to last by the first turn. You make time stop (because you can do that in hypothetical situations) and now you look at the tote board. Your $2 ticket on this horse is now only
worth S 1.50. You determine the mathematical expectation on your horse again, considering how much of the race is already finished, the current odds on your horse, and where it presently is in the
field. You determine that the current price of that $1.50 ticket on your horse is 10% undervalued. Therefore, since you could cash your 82 ticket that you bought before the race for S 1.50 right now,
taking a loss, and you could also purchase the $1.50 ticket on the horse right now with a positive mathematical expectation, you do nothing. The current situation is thus that you have a positive
mathematical situation, but on the basis of a $l.50 ticket not a $2 ticket. This same analogy holds for our option trade, which is now slightly underwater but has a positive mathematical expectation
on the basis of the new price. You should use the new optimal f on the new price, adjusting your current position if necessary, and go with the new optimal exit date. In so doing, you will have
incorporated the latest price information about the underlying instrument. Often, doing this may have you
take the position all the way into expiration. There are many inevitable losses along the way by following this technique of optimal f on options. Why you should be able to find positive mathematical
expectations in options that are theoretically fairly priced in the first place may seem like a paradox or simply quackery to you. However, there is a very valid reason why this is so: Inefficiencies
are a function of your frame of reference. Let's start by stating that theoretical option prices as returned by the models do not give a positive mathematical expectation (arithmetic) to either the
buyer or seller. In other words, the models are theoretically fair. The missing caveat here is "if held till expiration." It is this missing caveat that allows an option to be fairly priced per the
models, yet have a positive expectation if not held till expiration. Consider that options decay at the rate of the square root of the time remaining till expiration. Thus, the day with the least
expected time premium decay will always be the first day you are in the option. Now consider Equations (5.17a) and (5.17b), the price corresponding to a move of X standard deviations after so much
time has elapsed. Notice that each day the window returned by these formulas expands, but by less and less. The day of the greatest rate of expansion is the first day in the option. Thus, for the
first day in the option, the time premium will shrink the least, and the window of X standard deviations will expand the fastest. The less the time decay, the more likely we are to have a positive
expectation in a long option. Further, the wider the window of X standard deviations, the more likely we are to have a positive expectation, as the downside is fixed with an option but the upside is
not. There is a constant tug-of-war going on between the window of X standard deviations getting wider and wider with each passing day (at a slower and slower rate, though) and time decaying the
premium faster and faster with each passing day. What happens is that the first day sees the most positive mathematical expectation, although it may not be positive. In other words, the mathematical
expectation (arithmetic and geometric) is greatest after you have been in the option 1 day (it's actually greatest the first instant you put on the option and decays gradually thereafter, but we are
looking at this thing at discrete intervals-each day's close). Each day thereafter the expectation gets lower, but at a slower rate. The following table depicts this decay of expectation of a long
option. The table is derived from the option discussed earlier in this chapter. This is the 100 call option where the underlying is at 100, and it expires 911220. The volatility is 20% and it is now
911104. We are using the Black commodity option formula (H discerned as in Equation (5.07) and R = 5%) and a 260.8875-day year. We are using 8 standard deviations to calculate our optimal f's from,
and we are using a minimum tick increment of .1 (which will be explained shortly). Exit Date Tue. 911105 Wed. 911106 Thu. 911107
AHPR 1.000409 1.000001 <1
GHPR 1.000195 1 .000000 <1
f .0806 .0016 0
The AHPR column is the arithmetic average HPR (the calculation of which will be discussed later on in this chapter), and GHPR is the geometric mean HPR. The f column is the optimal f from which the
AHPR and GHPR columns were derived. The arithmetic mathematical expectation, as a percentage, is simply the AHPR minus 1, and the geometric mathematical expectation, as a percentage, is the GHPR
minus 1. Notice that the greatest mathematical expectations occur on the day after we put the option on (although this example has a positive mathematical expectation, not all options will show a
positive mathematical expectation). Each day thereafter the expectations themselves decay. The rate of decay also gets slower and slower each day. After 911106 the mathematical expectations (HPR-1)
go negative. Therefore, if we wanted to trade on this information, we could elect to enter today (911104) and exit on the close tomorrow (911105). The fair option price is 2.861. If we assume it is
traded at a price of $100 per full point, the cost of the option is 2.861*$100 = $286.10. Dividing this price by the optimal f of .0806 tells us to buy one option for every $3,549.63 in equity. If we
wanted to hold the option till the close of 911106, the last day that still has a positive mathematical expectation, we would have to initiate the position today using the f value corresponding to
the optimal for an exit 911106 of .0016. We would therefore enter today (911104) with 1 contract for every $178,812.50 in ac-
count equity ($286.10/ .0016). Notice that to do so has a much lower expectation than if we entered with 1 contract for every 33,549.63 in account equity and exited on the close tomorrow, 911105. The
rate of change between the two functions, time premium decay and the expanding window of X standard deviations, may create a positive mathematical expectation for being long a, given option. This
expectation is at its greatest the first instant in the position and declines at a decreasing rate from there. Thus, an option that is priced fairly to expiration based on the models can be found to
have a positive expectation if exited early on in the premium decay. The next table looks at this same 100 call option again, only this time we look at it using different-sized windows (different
amounts of standard deviations): Number of Standard Deviations 2 3 AHPR 1.000102 1.000379 GHPR 1.000047 1.00018 f .043989 .0781 Cutoff 911105 911105
5 1.000409 1.000195 .0806 911106
8 1.000409 1.000195 .0806 911106
10 1.000409 1.000195 .0806 911106
The AHPR and GHPR pertain to the arithmetic and geometric HPRs at the optimal f values if you exit the trade on the close of 911105 (the most opportune date to exit, because it has the highest AHPR
and GHPR). The f corresponds to the optimal f for 911105. The heading Cutoff pertains to the last date when a positive expectation (i.e., AHPR and GHPR both greater than 1) exists. The interesting
point to note is that the four values AHPR, GHPR, f, and Cutoff all converge to given points as we increase the number of standard deviations toward infinity. Beyond 5 standard deviations the values
hardly change at all. Beyond 8 standard deviations they seem to stop changing. The tradeoff in using more standard deviations is that extra computer time is required. This seems a small price to pay,
but as we get into multiple simultaneous positions in this chapter, you will notice that each additional leg of a multiple simultaneous position increases the time required exponentially. For one leg
we can argue that using 8 standard deviations is ideal. However, for more than one leg simultaneously, we may find it necessary to trim back this number of standard deviations. Furthermore, this 8
standard deviation rule applies only when we assume Normality in the logs of price changes.
THE SINGLE SHORT OPTION Everything stated about the single long option holds true for a single short option position. The only difference is in regard to Equation (5.14): (5.14) HPR(T,U) = (1+f*(Z
(T,U-Y)/S-1))^P(T,U) where HPR(T,U) = The HPR for a given test value for T and U. f = The tested value for f. S = The current price of the option. Z(T,U-Y) = The theoretical option price if the
underlying were at price U with time T remaining till expiration. P(T,U) = The probability of the underlying being at price U by time T remaining till expiration. Y = The difference between the
arithmetic mathematical expectation of the underlying at time T, given by (5.10), and the current price. For a single short option position this equation now becomes: (5.20) HPR(T,U) = (1+f*(1-Z
(T,U-Y)/S))^P(T,U) where HPR(T,U) = The HPR for a given test value for T and U. f = The tested value for f. S = The current price of the option. Z(T,U-Y) = The theoretical option price if the
underlying were at price U with time T remaining till expiration. P(T,U) = The probability of the underlying being at price U by time T remaining till expiration. Y = The difference between the
arithmetic mathematical expectation of the underlying at time T, given by (5.10), and the current price. You will notice that the only difference between Equation (5.14), the equation for a single
long option position, and Equation (5.20), the
equation for a single short option position, is in the expression (Z(T,UY)/S-1), which becomes (1-Z(T,U-Y)/S) for the single short option position. Aside from this change, everything else detailed
about the single long option position holds for the single short option position.
THE SINGLE POSITION IN THE UNDERLYING INSTRUMENT In Chapter 3 we detailed the math of finding the optimal f parametrically. Now we can use the same method as with a single long option, only our
calculation of the HPR is taken from Equation (3.30). (3.30) HPR(U) = (1+(L/(W/(-f))))^P where HPR(U) = The HPR for a given U. L = The associated P&L. W = The worst-case associated P&L in the table
(this will always be a negative value). f = The tested value for f. P = The associated probability. The variable L, the associated P&L, is discerned by taking the price of the underlying at a given
price U, minus the price at which the trade was initiated, S, for a long position. (5.21a) L for a long position = U-S For a short position, the associated P&L is figured just the reverse: (5.21b) L
for a short position = S-U where S = The current price of the underlying instrument. U = The price of the underlying instrument for this given HPR. We could also figure the optimal f for a single
position in the underlying instrument using Equation (5.14). When doing so we must realize that the optimal f returned can be greater than 1. For example, consider an underlying instrument at a price
of 100. We determine that the five following outcomes might occur: Outcome 110 105 100 95 90
Probability .15 .30 .50 .25 .10
P&L 10 5 0 -5 -10
Note that per Equation (5.10), our arithmetic mathematical expectation on the underlying is 100.5769230 77. This means that the variable Y in (5.14) is equal to .576923077 since 100.576923077-100 =
.576923077. If we were to figure the optimal f using the P&L column and the Equation (3.30) method, we derive an f of .19, or 1 unit for every $52.63 in equity. If instead we used Equation (5.14) on
the outcome column, whereby the variable S is therefore equal to 100, and we do not subtract the value of Y, the arithmetic mathematical expectation of the underlying minus its current value from U
in discerning our Z(T, U -Y) variable, we find our optimal fat approximately 1.9. This translates again into 1 unit for every $52.63 in equity as 100/1.9 = 52.63. On the other hand, if we subtract
the value of Y, the arithmetic mathematical expectation on the underlying per Equation (5.10), in the Z(T, U-Y) term of (5.14) we end up with a mathematical expectation on the underlying equal to its
current value, and therefore we do not have an optimal f. This is what we must do, subtract the value of Y in the Z(T, U-Y) term of Equation (5.14) in order to be consistent with the options
calculations as well as the put/call parity formula. If we are using the Equation (3.30) method instead of the Equation (5.14) method, then each value for U in (5.21a) and (5.21b) must have the
arithmetic mathematical expectation of the underlying, Y, subtracted from it. That is, we must subtract the value of Y from each P&L. Doing so again yields a situation where there is not a positive
mathematical expectation, and therefore there is no value for f that is optimal. Literally, this means only that if we blindly go out and take a position in the underlying instrument, we do not get a
positive mathematical expectation (as we do with some options), and therefore there is no f that is optimal in this case. We can have an optimal f only if we have a
positive mathematical expectation. We can have this only if we have a bias in the underlying. Now we have a methodology that can be used to give us the optimal f (and its by-products) for options,
whether long or short, as well as trades in the underlying instrument (from a number of different methods). Note that the methods used in this chapter to discern the optimal fs and by-products for
either options or the underlying instrument are predicated upon not necessarily using a mechanical system to enter your trades. For instance, the empirical method for finding optimal f used an
empirical stream of trade P&L's generated by a mechanical system. In Chapter 3 we learned of a parametric technique to find the optimal f from data that was Normally distributed. This same technique
can be used to find the optimal f from data of any distribution, so long as the distribution in question has a cumulative density function. In Chapter 4 we learned of a method to find the optimal f
parametrically for distributions that do not have a cumulative density function, such as the distribution of trade P&L's (whether a mechanical system is used or not) or the scenario planning
approach. In this chapter we have learned of a method for finding the optimal f when not using a mechanical system. You will notice that all of the calculations thus far assume that you are, in
effect, blindly entering a position at some point in time and exiting at some unknown future point. Usually the method is shown where there isn't a bias in the price of the underlying -that is, the
method is shown devoid of any price forecast in the underlying. We have seen however, that we can incorporate our price forecast into the process simply by changing the value of the underlying used
as input into the Equations (5.17a and 5.17b) each day as the trade progresses. Even a slight bias changes the expectation function dramatically. The optimal exit date may now very well not be the
market day immediately after the entry day. In fact, the optimal exit date may well become the expiration day. In such a case, the option has a positive mathematical expectation even if held all
expiration. Not only is the expectation function altered dramatically by even a slight bias in the price of the underlying, so, too, are the optimal fs, AHPRs, and GHPRs. For instance, the following
table is once again derived from the option discussed earlier in this chapter. This is the 100 call option where the underlying is at 100, and it expires 911220. The volatility is 20% and it is now
911104. We are using the Black commodity option formula (H discerned as in Equation (5.07) and R = 5%) and a 260.8875-day year. We will again use 8 standard deviations to calculate our optimal fs
from (to be consistent with the previous tables showing no bias in the underlying, or bias = 0), and we are using a minimum tick increment of .1. Here, however, we will assume a bias of .01 points
(one tenth of one tick) upward per day in the price of the underlying: Exit Date Tue. 911105 Wed. 911106 Thu. 911107 Fri. 911108
AHPR 1.000744 1.000149 1.000003 <1
GHPR 1.000357 1.000077 1.000003 <1
f .1081663 .0377557 .0040674 0
Notice how simply a tiny .01-point upward bias per day changes the results. Our optimal exit date is still 911105, and our optimal f is .1081663, which translates into 1 contract for every $2,645.00
in account equity (2.861*100/.1081663). Also notice that a positive expectation is obtained in this option all the way until the close of 911107. Had we had a stronger bias than simply .01 point
upward per day, the results would be changed to an even more pronounced degree. The last point that needs to be addressed is the cost of commissions. In the price of the option obtained with Equation
(5.14), the variable Z(T, U-Y) must be adjusted downward to reflect the commissions involved in the transaction (if you are charged commissions on the entry side also, then you must adjust the
variable S in Equation (5.14) upward by the amount of the commissions). We have covered finding the optimal f and its by-products when we are not using a mechanical system. We can now begin to
combine multiple positions.
MULTIPLE SIMULTANEOUS POSITIONS WITH A CAUSAL RELATIONSHIP As we begin our discussion of multiple simultaneous positions, it is important to differentiate between causal relationships and correlative
relationships. In the causal relationship, there is a factual, connective ex-
planation of the correlation between two or more items. That is, a causal relationship is one where there is correlation, and the correlation can be explained or accounted for in some logical,
connective fashion. This is in contrast to a correlative relationship where there is, of course, correlation, but there is no causal, connective, explanation of the correlation. As an example of a
causal relationship, let's look at put options on IBM and call options on IBM. Certainly the correlation between the IBM puts and the IBM calls is -1 (or very close to it), but there is more to the
relationship than simply correlation. We know for a fact that when there is upward pressure on IBM rah that there will be downward pressure on the puts (all else remaining constant, including
volatility). This logical, connective relationship means that there is a causal relationship between IBM calls and IBM puts. When there is correlation but no cause, we simply say that there is a
correlative relationship (as opposed to a causal relationship). Usually, correlative relationships will not have correlation coefficients whose absolute values are close to 1. Usually, the absolute
value of the correlation coefficient will be closer to 0. For example, corn and soybeans tend to move in tandem. Although their correlation coefficients are not exactly equal to 1, there is still a
causal relationship because both markets are affected by things that affect the grains. If we look at IBM calls and Digital Equipment puts (or calls), we cannot say that the relationship is
completely a causal relationship. Surely there is somewhat of a causal relationship, as both of the underlying stocks are members of the computer group, but just because IBM goes up (or down) is not
an absolute mandate that Digital Equipment will also. As you can see, there is not a fine line that differentiates causal and correlative relationships. This "clouding" of causal relationships and
those that are simply correlative will make our work more difficult. For the time being, we will only deal with causal relationships, or what we believe are causal relationships. In the text chapter
we will deal with correlative relationships, which encompass causal relationships as well. You should be aware right now that the techniques mentioned in the next chapter on correlative relationships
arc also applicable to, or can be used in lieu of, the techniques for causal relationships about to be discussed. The reverse is not true. That is, it is erroneous to apply the following techniques
on causal relationships to relationships that are simply correlative. A causal relationship is one where the correlation coefficients between the prices of two items is 1 or -1. To simplify matters,
a causal relationship almost always consists of any two tradeable items (stock, commodity, option, etc.) that have the same underlying instrument. This includes, but is not limited to, options
spreads, straddles, strangles, and combinations, as well as covered writes or any other position where you are using the underlying in conjunction with one or more of its options, or one or more
options on the same underlying instrument, even if you do not have a position in that underlying instrument. In its simplest form, multiple simultaneous positions consisting of only options (no
position in the underlying), when the position is put on at a debit, can be solved for by using Equation (5.14). By solved for I mean that we can determine the optimal f for the entire position and
its by-products (including the optimal exit date). The only differences are that the variable S will now represent the net of the legs of the position at the trade's inception. The variable Z(T, U-Y)
will now represent the net of the legs at price U by time T remaining till expiration. Likewise, multiple simultaneous positions consisting of only options (no position in the underlying), when the
position is put on at a credit, can be solved for by using Equation (5.20). Again, we must alter the variables S and Z(T, U-Y) to reflect the net of the legs of the position. For example, suppose we
are looking to put on a long option straddle, the purchase of a put and a call on the same underlying instrument with the same strike price and expiration date. Further suppose that the optimal f
returned by this technique was 1 contract for every $2,000. This would mean that for every $2,000 in account equity we should buy 1 straddle; for every $2,000 in account equity we should buy 1 of the
puts and 1 of the calls. The optimal f returned by this technique pertains to financing 1 unit of the entire position, no matter how large that position is. This fact will be true for all the
multiple simultaneous techniques discussed throughout this chapter. We can now devise an equation for multiple simultaneous positions involving whether a position in the underlying instrument is
included or
not. We can use this generalized form for multiple simultaneous positions with a causal relationship: (5.22) HPR(T,U) = (1+∑[i = 1,N]Ci(T,U))^P(T,U) where N = The number of legs in the position. HPR
(T,U) = The HPR for a given test value for T and U. Ci(T,U) = The coefficient of the ith kg at a given value for U, at a given time T remaining till expiration: For an option leg put on at a debit or
a long position in the underlying: (5.23a) Ci(T, U) = f*(Z(T, U-Y)/S-l) For an option leg put on at a credit or a short position in the underlying: (5.23b) Ci(T,U) = f*(1-Z(T,U-Y)/S) where f = The
tested value for f. S = The current price of the option or underlying instrument. Z(T,U-Y) = The theoretical option price if the underlying were at price U with time T remaining till expiration. P
(T,U) = The probability of the underlying being at price U by time T remaining till expiration. Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given
by (5.10), and the current price. Equation (5.22) can be used if you are planning on putting these legs all on at once, one for one, and you only need to iterate for the optimal f and optimal exit
date of the entire position (that is what is meant by "multiple simultaneous positions"). For each value of U you will have an HPR given by Equation (5.22). For each value for f you will have a
geometric mean, composed of all of the HPRs per Equation (5.18a): (5.18a) G(f,T) = {∏[U = -8SD,8SD]HPR(T,U)}^(1/∑[U = -8SD,8SD] P(T,U)) where G(f,T) = The geometric mean HPR for a given test value
for f and a given time remaining till expiration from a mandated exit date. Those values off and T (the values of the optimal f and mandated exit date) that result in the highest geometric means, are
the ones that you should use on the net position of the legs. To summarize the entire procedure. We want to find the optimal f for each day, using each market day between now and expiration as the
mandated exit date. For each mandated exit date you will determine those discrete prices between plus and minus X standard deviations (ordinarily we will let X equal 8) from the base price of the
underlying instrument. The base price can be the current price of the underlying instrument or it can be altered to reflect a particular bias you might have regarding that market's direction. You now
need to find the value between 0 and 1 for f that results in the greatest geometric mean HPR, using an HPR for each of the discrete prices between plus and minus X standard deviations of the base
price for that mandated exit date. Therefore, for each mandated exit date you will have an optimal f and a corresponding geometric mean. The mandated exit date that has the greatest geometric mean is
the optimal exit date for the position, and the f corresponding to that geometric mean is the f that is optimal. The "nesting" of the logic of this procedure is as follows: For each mandated exit
date (weekday) between now and expiration For each value off (until the optimal is found) For each market system For each tick between+and-8 std. devs. Determine the HPR Finally, you should note that
in this section we have been attempting, among other things, to discern the optimal exit date, which we have looked upon as a single date at which to close down all of the legs of the position. You
can apply the same procedure to determine the optimal exit date for each leg in the position. This compounds the number of computations geometrically, but it can be accomplished. This would alter the
logic to appear as: For each market system For each mandated exit date (weekday) between now and expiration For each value off (until the optimal is found)
For each market system For each tick between +8 and -8 std. devs. Determine the HPR We have thus covered multiple simultaneous positions with a causal relationship. Now we can move on to a similar
situation where the relationship is random.
MULTIPLE SIMULTANEOUS POSITIONS WITH A RANDOM RELATIONSHIP You should be aware that, as with the causal relationships already discussed, the techniques mentioned in the next chapter on correlative
relationships are also applicable to, or can be used in lieu of, the techniques for random relationships about to be discussed. This is not true the other way around. That is, it is erroneous to
apply the techniques on random relationships that follow in this chapter to relationships that are correlative (unless the correlation coefficients equal 0). A random relationship is one where the
correlation coefficients between the prices of two items is 0. A random relationship exists between any two tradeable items (stock, futures, options, etc.) whose prices are independent of one
another, where the correlation coefficient between the two prices is zero, or is expected to be zero in an asymptotic sense. When there is a correlation coefficient of 0 between every combination 062
legs in a multiple simultaneous position, the HPR for the net position is given as: (5.24) HPR(T,U) = (1+∑[i = 1,N] Ci(T,U))^∏[i = 1,N] Pi(T,U) where N = The number of legs in the position. HPR(T,U)
= The HPR for a given test value for T and U. Ci(T,U) = The coefficient of the ith leg at a given value for U, at a given time remaining till expiration of T: For an option leg put on at a debit or a
long position in the underlying instrument: (5.23a) Ci(T,U) = f*(Z(T,U-Y)/S-1) For an option leg put on at a credit or a short position in the underlying instrument: (5.23b) Ci(T,U) = f*(l-Z(T,U-Y)/
S) where f = The tested value for f. S = The current price of the option. Z(T,U-Y) = The theoretical option price if the underlying were at price U with time T remaining till expiration. Pi(T,U) =
The probability of the ith underlying being at price U by time remaining till expiration of T. Y = The difference between the arithmetic mathematical expectation of the underlying at time T, given by
(5.10), and the current price. We can now figure the geometric mean for random relationship HPRs as: (5.25) C(f,T) = {∏[U1 = -8SD,+ 8SD]...∏[UN = -8SD,+8SD]{(1+∑[i = 1,N]Ci(T,U))^∏[i = 1,N]Pi(T,U)}}^
{1/(∑[U1 = -8SD,+ 8SD]...∑[UN = -8SD,+ 8SD]∏[i = 1,N]Pi(T,U))} where G(f, T) = The geometric mean HPR for a given test value for f and a given time remaining till expiration from a mandated exit
date. Once again, the f and T that result in the greatest geometric mean are optimal. The "nesting" of the logic of this procedure is exactly the same as with the causal relationships: For each
mandated exit date (weekday) between now and expiration For each value off (until the optimal is found) For each market system For each tick between +8 and -8 std. devs. Determine the HPR The only
difference between the procedure for solving for random relationships and that for causal relationships is that the exponent to each HPR in the random relationship is calculated by multiplying
together the probabilities of all of the legs being at the given price of the
particular HPR. Each of these probability sums used as exponents for each HPR are themselves summed so that when all of the HPRs are multiplied together to obtain the interim TWR, it can be raised to
the power of 1 divided by the sum of the exponents used in the HPRs. And again, the outer loop of the logic could be mended to accommodate a search for the optimal exit date for each leg in the
position. Complicated as Equation (5.25) looks, it still does not address the problem of a linear correlation coefficient between the prices of any two components that is not 0. As you can see,
solving for the optimal mixture of components is quite a task! In the next few chapters you will see how to find the right quantities for each leg in a multiple position-using stock, commodities,
options, or any other tradeable item-regardless of the relationship (causal, random, or correlative). The inputs you will need for a given option position in the next chapter are (1) the correlation
coefficient of its average daily HPR on a 1-contract basis to each of the other positions in the portfolio, and (2) its arithmetic average HPR and standard deviation in HPRs. Equations (5.14) and
(5.20) detailed how to find the HPR for long options and short options respectively. Equation (5.18) then showed how to turn this into a geometric mean. Now, we can also discern the arithmetic mean
as: For long options, options put on at a debit: (5.26a) AHPR = {∑[U = -8SD,+ 8SD]((1+f*(Z(T, U-Y)/S1))*P(T,U))}/∑[U1 = -8SD,+ 8SD]P(TU) For short options, options put on at a credit: (5.26b) AHPR =
(∑[U = -8SD,+ 8SD]((1+f*(1-Z(T, UY)/S))*P(T,U))}/∑[U = -8SD,+ 8SD]P(T,U) where AHPR = The arithmetic average HPR. f = The optimal f (0 to 1). S = The current price of the option. Z(T,U-Y) = The
theoretical option price if the underlying were at price U with time T remaining till expiration. P(T, U) = The probability of the underlying being at price U with time T remaining till expiration. Y
= The difference between the arithmetic mathematical expectation of the underlying at time T, given by (5.10), and the current price. Once you have the geometric average HPR and the arithmetic
average HPR, you can readily discern the standard deviation in HPRs: (5.27) SD = (A^2-G^2)^(1/2) where A = The arithmetic average HPR. G = The geometric average HPR. SD = The standard deviation in
HPRs. In this chapter we have leaned of yet another way to calculate optimal f. The technique shown was for nonsystem traders and used the distribution of outcomes on the underlying instrument by a
certain date in the future as input. As a side benefit, this approach allows us to find the optimal f on both options and for multiple simultaneous positions. However, one of the drawbacks of this
technique is that the relationships between all of the positions must be random or causal Does this mean we cannot use the techniques far finding the optimal f, discussed in earlier chapters, on
multifile simultaneous positions or options? No-again, which method you choose is a matter of utility to you. The methods detailed in this chapter have certain drawbacks as well as benefits (such as
the ability to discern optimal exit times). In the next chapter, we will begin to delve into optimal portfolio construction, which will later allow us to perform multiple simultaneous positions using
the techniques detailed earlier. There are many different directions of study we could head off into at this function. However, the goal in this text is to study portfolios of different markets,
portfolios of different market systems, and different tradeable items. This being the case, we will part from the trail of theoretical option prices and head in the direction of optimal portfolio
Chapter 6 - Correlative Relationships and the Derivation of the Efficient Frontier We have now covered finding the optimal quantities to trade for futures, stocks, and options, trading them either
alone or in tandem with another item, when there is either a random or a causal relationship between the prices of the items. That is, we have defined the optimal set when the linear correlation
coefficient between any two elements in the portfolio equals 1, ~1, or 0. Yet the relationships between any two elements in a portfolio, whether we look at the correlation of prices (in a
nonmechanical means of trading) or equity changes (in a mechanical system), are rarely at such convenient values of the linear correlation coefficient. In the last chapter we looked at trading these
items from the standpoint of someone not using a mechanical trading system. Because a mechanical trading system was not employed, we were looking at the correlative relationship of the prices of the
items. This chapter provides a method for determining the efficient frontier of portfolios of market systems when the linear correlation coefficient between any two portfolio components under
consideration is any value between -1 and 1 inclusive. Herein is the technique employed by professionals for determining optimal portfolios of stocks. In the next chapter we will adapt it for use
with any tradeable instrument. In this chapter, an important assumption is made regarding these techniques. The assumption is that the generating distributions (the distribution of returns) have
finite variance. These techniques are effective only to the extent that the input data used, has finite variance.1
DEFINITION OF THE PROBLEM For the moment we are dropping the entire idea of optimal f; it will catch up with us later. It is easier to understand the derivation of the efficient frontier
parametrically if we begin from the assumption that we are discussing a portfolio of stocks. These stocks are in a cash account and are paid for completely. That is, they are not on margin. Under
such a circumstance, we derive the efficient frontier of portfolios. That is, for given stocks we want to find those with the lowest level of expected risk for a given level of expected gain, the
given levels being determined by the particular investor's aversion to risk. Hence, this basic theory of Markowitz (aside from the general reference to it as Modern Portfolio Theory) is often
referred to as E-V theory (Expected return-Variance of return). Note that the inputs are based on returns. That is, the inputs to the derivation of the efficient frontier are the returns we would
expect on a given stock and the variance we would expect of those returns. Generally, returns on stocks can be defined as the dividends expected over a given period of time plus the capital
appreciation (or minus depreciation) over that period of time, expressed as a percentage gain (or loss). Consider four potential investments, three of which are stocks and one a savings account
paying 8.5% per year. Notice that we are defining the length of a holding period, the period we measure returns and their variances, as 1 year in this example: Investment Toxico Incubeast Corp. LA
Garb Savings Account
Expected Return 9.5% 13% 21 % 6.5%
Expected Variance of Return 10% 25% 40% 0%
We can express expected returns as HPR's by adding 1 to them. Also, we can express expected variance of return as expected standard
For more on this, see Fama, Eugene F., "Portfolio Analysis in a Stable Paretian Market," Management Science 11, pp. 404-419, 1965. Fama has demonstrated techniques for finding the efficient frontier
parametrically for stably distributed securities possessing the same characteristic exponent, A, when the returns of the components all depend upon a single underlying market index. Headers should be
aware that other work has been done on determining the efficient frontier when there is infinite variance in the returns of the components in the portfolio. These techniques are not covered here
other than to refer interested readers to pertinent articles. For more on the stable Paretian distribution, see Appendix B. For a discussion of infinite variance, see The Student's Distribution" in
Appendix B.
deviation of return by taking the square root of the variance. In so doing, we transform our table to: Investment Toxico lncubeast Corp. LA Garb Savings Account
Expected Return as an HPR 1.095 1.13 1.21 1.085
Expected Standard Deviation of Return .316227766 .5 .632455532 0
The time horizon involved is irrelevant so long as it is consistent for all components under consideration. That is, when we discuss expected return, it doesn't matter if we mean over the next year,
quarter, 5 years, or day, as long as the expected returns and standard deviations for all of the components under consideration all have the same time frame. (That is, they must will be for the next
year, or they must all be for the next day, and so on.) Expected return is synonymous with potential gains, while variance (or standard deviation) in those expected returns is synonymous with
potential risk. Note that the model is two-dimensional. In other words, we can say that the model can be represented on the upper right quadrant of the Cartesian plane (see Figure 6-1) by placing
expected return along one axis (generally the vertical or Y axis) and expected variance or standard deviation of returns along the other axis (generally the horizontal or X axis). There are other
aspects to potential risk, such as potential risk of (probability of) a catastrophic loss, which E-V theory does not differentiate from variance of returns in regards to defining potential risk.
While this may very well be true, we will not address this concept any further in this chapter so is to discuss E-V theory in its classic sense. However, Markowitz himself nearly stated that a
portfolio derived from E-V theory is optimal only if the utility, the "satisfaction," of the investor is a function of expected return and variance in expected return only. Markowitz indicated that
investor utility may very well encompass moments of the distribution higher than the first two (which are what E-V theory addresses), such as skewness and kurtosis of expected returns.
1.3 LA Garb 1.2
1 -0.1
Savings account
Figure 6-1 The upper-right quadrant of the Cartesian plane. Potential risk is still a far broader and more nebulous thing than what we have tried to define it as. Whether potential risk is simply
variance on a contrived sample, or is represented on a multidimensional hypercube, or incorporates further moments of the distribution, we try to define potential risk to account for our inability to
really put our finger on it. That said, we will go forward defining potential risk as the variance in expected returns. However, we must not delude ourselves into thinking that risk is simply defined
as such. Risk is far broader, and its definition far more elusive. So the first step that an investor wishing to employ E-V theory must make is to quantify his or her beliefs regarding the expected
returns and variance in returns of the securities under consideration for a certain time horizon (holding period) specified by the investor. These parameters can be, arrived at empirically. That is,
the investor can examine the past history of the securities under consideration and calculate the returns and their variances over the specified holding periods. Again the term returns means not only
the dividends in the underlying security, but any gains in the value of the security as well. This is then specified as a percentage. Variance is the statistical variance of the percentage returns. A
user of this approach would often perform a linear regression on the past returns to determine the return (the expected return) in the next holding period. The variance portion of the input would
then be determined by calculating the variance of each past data point from what
would have been predicted for that past data point (and not from the regression line calculated to predict the next expected return). Rather than gathering these figures empirically, the investor can
also simply estimate what he or she believes will be the future returns and variances2 in those returns. Perhaps the best way to arrive at these parameters is to use a combination of the two. The
investor should gather the information empirically, then, if need be, interject his or her beliefs about the future of those expected returns and their variances. The next parameters the investor
must gather in order to use this technique are the linear correlation coefficients of the returns. Again, these figures can be arrived at empirically, by estimation, or by a combination of the two.
In determining the correlation coefficients, it is important to use data points of the same time frame as was used to determine the expected returns and variance in returns. In other words, if you
are using yearly data to determine the expected returns and variance in returns (on a yearly basis), then you should use yearly data in determining the correlation coefficients. If you are using
daily data to determine the expected returns and Variance in returns (on a daily basis), then you should use daily data in determining the correlation coefficients, It is also very important to
realize that we are determining the correlation coefficients of returns (gains in the stock price plus dividends), not of the underlying price of the stocks in question. Consider our example of four
alternative investments-Toxico, Incubeast Corp., LA Garb, and a savings account. We designate these with the symbols T, I, L, and S respectively. Next we construct a grid of the linear correlation
coefficients as follows: T I L
I -.15
L .05 .25
S 0 0 0
From the parameters the investor has input, we can calculate the covariance between any two securities as: (6.01) COVa,b = Ra,b*Sa*Sb where COVa,b = The covariance between the ath security and the
bth one. Ra,b = The linear correlation coefficient between a and b. Sa = The standard deviation of the ath security. Sb = The standard deviation of the bth security. The standard deviations, Sa and
Sb, are obtained by taking the square root of the variances in expected returns for securities a and b. Returning to our example, we can determine the covariance between Toxico (T) and Incubeast (I)
as: COVT,I = -.15*.10^(1/2)*.25^(1/2) = -.15*.316227766*.5 = -.02371708245 Thus, given a covariance and the comprising standard deviations, we can calculate the linear correlation coefficient as:
(6.02) Ra,b = COVa,b/(Sa*Sb) where COVa,b = The covariance between the ath security and the bth one. Ra,b = The linear correlation coefficient between a and b. Sa = The standard deviation of the ath
security. Sb = The standard deviation of the bth security. Notice that the covariance of a security to itself is the variance, since the linear correlation coefficient of a security to itself is 1:
(6.03) COVX,X = 1*SX*SX = 1*SX^2 = SX^2 = VX where COVX,X = The covariance of a security to itself. SX = The standard deviation of a security. VX = The variance of a security. We can now create a
table of covariances for our example of four investment alternatives: T 2
T .1
I -.0237
L .01
S 0
Again estimating variance can be quite tricky. An easier way is to estimate the mean absolute deviation, then multiply this by 1.25 to arrive at the standard deviation. Now multiplying this standard
deviation by itself, squaring it, gives the estimated variance.
I L s
-.0237 .01 0
.25 .079 0
.079 .4 0
We now have compiled the basic parametric information, and we can begin to state the basic problem formally. First, the sum of the weights of the securities comprising the portfolio must be equal to
1, since this is being done in a cash account and each security is paid for in full: (6.04) ∑[i = 1,N]Xi = 1 where N = The number of securities comprising the portfolio. Xi = The percentage weighting
of the ith security. It is important to note that in Equation (6.04) each Xi must be nonnegative. That is, each Xi must be zero or positive. The next equation defining what we are trying to do
regards the expected return of the entire portfolio. This is the E in E-V theory. Essentially what it says is that the expected return of the portfolio is the sum of the returns of its components
times their respective weightings: (6.05) ∑[i = 1,N]Ui*Xi = E where E = The expected return of the portfolio. N = The number of securities comprising the portfolio. Xi = The percentage weighting of
the ith security. Ui = The expected return of the ith security. Finally, we come to the V portion of E-V theory, the variance in expected returns. This is the sum of the variances contributed by each
security in the portfolio plus the sum of all the possible covariances in the portfolio. (6.06a) V = ∑[i = 1,N]∑[j = 1,N] Xi*Xj*COVi,j (6.06b) V = ∑[i = 1,N]∑[j = 1,N]Xi*Xj*Ri,j*Si*Sj (6.06c) V = (∑
[i = 1,N]Xi^2*Si ^ 2)+2*∑[i = 1,N]∑[j = i+1,N]Xi*Xj*COVi,j (6.06d) V = (∑[i = 1,N]Xi^2*Si^2)+2*∑[i = 1,N]∑[j = i+1,N]Xi*Xj*Ri,j*Si*Sj where V = The variance in the expected returns of the portfolio.
N = The number of securities comprising the portfolio. Xi = The percentage weighting of the ith security. Si = The standard deviation of expected returns of the ith security. COVi,j = The covariance
of expected returns between the ith security and the jth security. Ri,j = The linear correlation coefficient of expected returns between the ith security and the jth security. All four forms of
Equation (6.06) are equivalent. The final answer to Equation (6.06) is always expressed as a positive number. We can now consider that our goal is to find those values of Xi, which when summed equal
1, that result in the lowest value of V for a given value of E. When confronted with a problem such as trying to maximize (or minimize) a function, H(X,Y), subject to another condition or constraint,
such as G(X,Y), one approach is to use the method of Lagrange. To do this, we must form the Lagrangian function, F(X,Y,L): (6.07) F(X,Y,L) = H(X,Y)+L*G(X,Y) Note the form of Equation (6.07). It
states that the new function we have created, F(X,Y,L), is equal to the Lagrangian multiplier, L-a slack variable whose value is as yet undetermined-multiplied by the constraint function G(X,Y). This
result is added to the original function H(X,Y), whose extreme we seek to find. Now, the simultaneous solution to the three equations will yield those points (X1,Y1) of relative extreme: FX(X,Y,L) =
0 FY(X,Y,L) = 0 FL(X,Y,L) = 0 For example, suppose we seek to maximize the product of two numbers, given that their sum is 20. We will let the variables X and Y be the two numbers. Therefore, H(X,Y)
= X*Y is the function to be maximized
given the constraining function G(X,Y) = X+Y-20 = 0. We must form the Lagrangian function: F(X,Y,L) = X*Y+L*(X+Y-20) FX(X,Y,L) = Y+L FY(X,Y,L) = X+L FL(X,Y,L) = X+Y-20 Now we set FX(X,Y,L) and FY
(X,Y,L) both equal to zero and solve each for L: Y+L = 0 Y = -L and X+L = 0 X = -L Now setting FL(X,Y,L) = 0 we obtain X+Y-20 = 0. Lastly, we replace X and Y by their equivalent expressions in terms
of L: (-L)+(-L)-20 = 0 2*-L = 20 L = -10 Since Y equals -L, we can state that Y equals 10, and likewise with X. The maximum product is 10*10 = 100. The method of Lagrangian multipliers has been
demonstrated here for two variables and one constraint function. The method can also be applied when there are more than two variables and more than one constraint function. For instance, the
following is the form for finding the extreme when there are three variables and two constraint functions: (6.08) F(X,Y,Z,L1,L2) = H(X,Y,Z)+L1*G1(X,Y,Z)+L2*G2(X,Y,Z) In this case, you would have to
find the simultaneous solution for five equations in five unknowns in order to solve for the points of relative extreme. We will cover how to do that a little later on. We can restate the problem
here as one where we must minimize V, the variance of the entire portfolio, subject to the two constraints that: (6.09) (∑[i = 1,N]Xi*Ui)-E = 0 and (6.10) (∑[i = 1,N]Xi) -1 = 0 where N = The number
of securities comprising the portfolio. E = The expected return of the portfolio. Xi = The percentage weighting of the ith security. Ui = The expected return of the ith security. The minimization of
a restricted multivariable function can be handled by introducing these Lagrangian multipliers and differentiating partially with respect to each variable. Therefore, we express our problem in terms
of a Lagrangian function, which we call T. Let: (6.11) T = V+ L1*((∑[i = 1,N]Xi*Ui) -E)+L2*((∑[i = 1,N]Xi)-1) where V = The variance in the expected returns of the portfolio, from Equation (6.06). N
= The number of securities comprising the portfolio. E = The expected return of the portfolio. Xi = The percentage weighting of the ith security. Ui = The expected return of the ith security. L1 =
The first Lagrangian multiplier. L2 = The second Lagrangian multiplier. The minimum variance (risk) portfolio is found by setting the firstorder partial derivatives of T with respect to all variables
equal to zero. Let us again assume that we are looking at four possible investment alternatives: Toxico, Incubeast Corp., LA Garb, and a savings account. If we take the first-order partial derivative
of T with respect to X 1 we obtain: (6.12) δT/δX1 = 2*X1*COV1,1+2*X2*COV1,2+2*X3*COV1,2+2*X4*COV1,4+L1*U1+L2 Setting this equation equal to zero and dividing both sides by 2 yields:
X1*COV1,1+X2*COV1,2+X3*COV1,3+X4*COV1,4+.5*L1*U1+.5*L2 = 0 Likewise: δT/δX2 = X1*COV2,1+X2 +COV2,2+X3*COV2,3+X4*COV2,4+.5*L1*U2+.5*L2 = 0 δT/δX3 =
X1*COV3,1+X2*COV3,2+X3*COV3,3+X4*COV3,4+.5*L1*U3+.5*L2 = 0
δT/δX4 = X1*COV4,1+X2*COV4,2+X3*COV4,3+X4*COV4,4+.5 *L1*U4+ .5*L2 = 0 And we already have δT/δL1 as Equation (6.09) and δT/δL2 as Equation (6.10). Thus, the problem of minimizing V for a given E can
be expressed in the N-component case as N+2 equations involving N+2 unknowns. For the four-component case, the generalized form is: X1*U1 X1 X1*COV1,1 X1*COV2,1 X1*COV3,1 X1*COV4,1
+X2*U2 +X2 +X2*COV1,2 +X2*COV2,2 +X2*COV3,2 +X2*COV4,2
+X3*U3 +X3 +X3*COV1,3 +X3*COV2,3 +X3*COV3,3 +X3*COV4,3
+X4*U4 +X4 +X4*COV1,4 +X4*COV2,4 +X4*COV3,4 +X4*COV4,4
+.5*L1*U1 +.5*L1*U2 +.5*L1*U3 +.5*L1*U4
=E =1 +.5*L2 =0 +.5*L2 =0 +.5*L2 =0 +.5*L2 =0
where E = The expected return of the portfolio. Xi = The percentage weighting of the ith security. Ui = The expected return of the ith security. COVA,B = The covariance between securities A and B. L1
= The first Lagrangian multiplier. L2 = The second Lagrangian multiplier. This is the generalized form, and you use this basic form for any number of components. For example, if we were working with
the case of three components (i.e., N = 3), the generalized form would be: X1*U1 X1 X1*COV1,1 X1*COV2,1 X1*COV3,l
+X2*U2 +X2 +X2*COV1,2 +X2*COV2,2 +X2*COV3,2
+X3*U3 +X3 +X3*COV1,3 +X3*COV2,3 +X3*COV3,3
+.5*L1*U1 +.5*L1*U2 +.5*L1*U3
+.5*L2 +.5*L2 +.5*L2
=E =1 =0 =0 =0
You need to decide on a level of expected return (E) to solve for, and your solution will be that combination of weightings which yields that E with the least variance. Once you have decided on E,
you now have all of the input variables needed to construct the coefficients matrix. The E on the right-hand side of the first equation is the E you have decided you want to solve for (i.e., it is a
given by you). The first line simply states that the sum of all of the expected returns times their weightings must equal the given E. The second line simply states that the sum of the weights must
equal 1. Shown here is the matrix for a three-security case, but you can use the general form when solving for N securities. However, these first two lines are always the same. The next N lines then
follow the prescribed form. Now, using our expected returns and covariances (from the covariance table we constructed earlier), we plug the coefficients into the generalized form. We thus create a
matrix that represents the coefficients of the generalized form. In our four-component case (N = 4), we thus have 6 rows (N+2): X1 .095 1 .1 -.0237 .01 0
X2 .13 1 -.0237 .25 .079 0
X3 .21 1 .01 .079 .4 0
X4 .085 1 0 0 0 0
.095 .13 .21 .085
Answer =E =1 =0 =0 =0 =0
Note that the expected returns are not expressed in the matrix as HPR's, rather they are expressed in their "raw" decimal state. Notice that we also have 6 columns of coefficients. Adding the answer
portion of each equation onto the right, and separating it from the coefficients with a creates what is known as an augmented matrix, which is constructed by fusing the coefficients matrix and the
answer column, which is also known as the right-hand side vector. Notice that the coefficients in the matrix correspond to our generalized form of the problem: X1*U1 X1 X1*COV1,1 X1*COV2,1 X1*COV3,1
+X2*U2 +X2 +X2*COV1,2 +X2*COV2,2 +X2*COV3,2 +X2*COV4,2
+X3*U3 +X3 +X3*COV1,1 +X3*COV2,3 +X3*COV3,3 +X3*COV4,3
+X4*U4 +X4 +X4*COV1,4 +X4*COV2,4 +X4*COV3,4 +X4*COV4,4
+.5*L1*U1 +.5*L1*U2 +.5*L1*U3 +.5*L1*U4
=E =1 +.5*L2 =0 +.5*L2 =0 +.5*L2 =0 +.5*L2 =0
The matrix is simply a representation of these equations. To solve for the matrix, you must decide upon a level for E that you want to solve for. Once the matrix is solved, the resultant answers will
be the optimal
weightings required to minimize the variance in the portfolio as a whole for our specified level of E. Suppose we wish to solve for E=.14, which represents an expected return of 14%.Plugging .14 into
the matrix for E and putting in zeros for the variables L1 and L2 in the first two rows to complete the matrix gives us a matrix of: X1 .095 1 .1 -.0237 .01 0
X2 .13 1 -.0237 .25 .079 0
X3 .21 1 .01 .079 .4 0
X4 .085 1 0 0 0 0
L1 0 0 .095 .13 .21 .085
L2 0 0 1 1 1 1
Answer =.14 =1 =0 =0 =0 =0
By solving the matrix we will solve the N+2 unknowns in the N+2 equations.
SOLUTIONS OF LINEAR SYSTEMS USING ROW-EQUIVALENT MATRICES A polynomial is an algebraic expression that is the sum of one or more terms. A polynomial with only one term is called a monomial; with two
terms a binomial; with three terms a trinomial. Polynomials with more than three terms are simply called polynomials. The expression 4*A^3+A^2+A+2 is a polynomial having four terms. The terms are
separated by a plus (+) sign. Polynomials come in different degrees. The degree of a polynomial is the value of the highest degree of any of the terms. The degree of a term is the sum of the
exponents on the variables contained in the term. Our example is a third-degree polynomial since the term 4*A^3 is raised to the power of 3, and that is a higher power than any of the other terms in
the polynomial are raised to. If this term read 4*A^3*B^2*C, we would have a sixth-degree polynomial since the sum of the exponents of the variables (3+2+1) equals 6. A first-degree polynomial is
also called a linear equation, and it graphs as a straight line. A second-degree polynomial is called a quadratic, and it graphs as a parabola. Third-, fourth-, and fifth-degree polynomials are also
called cubics, quartics, and quintics, respectively. Beyond that there aren't any special names for higher-degree polynomials. The graphs of polynomials greater than second degree are rather
unpredictable. Polynomials can have any number of terms and can be of any degree. Fortunately, we will be working only with linear equations, first-degree polynomials here. When we have more than one
linear equation that must be solved simultaneously we can use what is called the method of row-equivalent matrices. This technique is also often referred to as the Gauss-Jordan procedure or the
Gaussian elimination method. To perform the technique, we first create the augmented matrix of the problem by combining the coefficients matrix with the right-hand side vector as we have done. Next,
we want to use what are called elementary transformations to obtain what is known as the identity matrix. An elementary transformation is a method of processing a matrix to obtain a different but
equivalent matrix. Elementary transformations are accomplished by what are called row operations. (We will cover row operations in a moment.) An identity matrix is a square coefficients matrix where
all of the elements are zeros except for a diagonal line of ones starting in the upper left comer. For a six-by-six coefficients matrix such as we are using in our example, the identity matrix would
appear as: 1 0 0 0 0 0
This type of matrix, where the number of rows is equal to the number of columns, is called a square matrix. Fortunately, due to the generalized form of our problem of minimizing V for a given E, we
are always dealing with a square coefficients matrix. Once an identity matrix is obtained through row operations, it can be regarded as equivalent to the starting coefficients matrix. The answers
then are read from the right-hand-side vector. That is, in the first row of the identity matrix, the 1 corresponds to the variable X1, so the answer in the fight-hand side vector for the first row is
the answer for
X1. Likewise, the second row of the right-hand side vector contains the answer for X2, since the 1 in the second row corresponds to X2. By using row operations we can make elementary transformations
to our original matrix until we obtain the identity matrix. From the identity matrix, we can discern the answers, the weights X1, ..., XN, for the components in a portfolio. These weights will
produce the portfolio with the minimum variance, V, for a given level of expected return, E.3 Three types of row operations can be performed: 1. Any two rows may be interchanged. 2. Any row may be
multiplied by any nonzero constant. 3. Any row may be multiplied by any nonzero constant and added to the corresponding entries of any other row. Using these three operations, we seek to transform
the coefficients matrix to an identity matrix, which we do in a very prescribed manner. The first step, of course, is to simply start out by creating the augmented matrix. Next, we perform the first
elementary transformation by invoking row operations rule 2. Here we take the value in the first row, first column, which is .095, and we want to convert it to the number 1. To do so, we multiply
each value in the first row by the constant 1/.095. Since any number times 1 divided by that number yields 1, we have obtained a 1 in the first row, first column. We have also multiplied every entry
in the first row by this constant, 1/.095, as specified by row operations rule 2. Thus, we have obtained elementary transformation number 1. Our next step is to invoke row operations rule 3 for all
rows except the one we have just used rule 2 on. Here, for each row, we take the value of that row corresponding to the column we just invoked rule 2 on. In elementary transformation number 2, for
row 2, we will use the value of 1, since that is the value of row 2, column 1, and we just performed rule 2 on column 1. We now make this value negative (or positive if it is already negative). Since
our value is 1, we make it -1. We now multiply by the corresponding entry (i.e., same column) of the row we just performed rule 2 on. Since we just performed rule 2 on row 1, we will multiply this -1
by the value of row 1, column 1, which is 1, thus obtaining -1. Now we add this value back to the value of the cell we are working on, which is 1, and obtain 0. Now on row 2, column 2, we take the
value of that row corresponding to the column we just invoked rule 2 on. Again we will use the value of 1, since that is the value of row 2, column 1, and we just performed rule 2 on column 1. We
again make this value negative (or positive if it is already negative). Since our value is 1, we make it -1. Now multiply by the corresponding entry (i.e., same column) of the row we just performed
rule 2 on. Since we just performed rule 2 on row 1, we will multiply this -1 by the value of row 1, column 2, which is 1.3684, thus obtaining -1.3684. Again, we add this value back to the value of
the cell we are working on, row 2, column 2, which is 1, obtaining 1+(-1.3684) = -.3684. We proceed likewise for the value of every cell in row 2, including the value of the right-hand side vector of
row 2. Then we do the same for all other rows until the column we are concerned with, column 1 here, is all zeros. Notice that we need not invoke row operations rule 3 for the last row, since that
already has a value of zero for column 1. When we are finished, we will have obtained elementary transformation number 2. Now the first column is already that of the identity matrix. Now we proceed
with this pattern, and in elementary transformation 3 we invoke row operations rule 2 to convert the value in the second row, second column to a 1. In elementary transformation number 4, we invoke
row operations rule 3 to convert the remainder of the rows to zeros for the column corresponding to the column we just invoked row operations rule 2 on. We proceed likewise, converting the values
along the diagonals to ones per row operations rule 2, then converting the remaining values in that column to zeros per row operations rule 3 until we have obtained the identity matrix on the left.
The right-hand side vector will then be our. solution set. X1 X2 X3 X4 Starting Augmented Matrix .095 .13 .21 .085 3
Answer Explanation
That is, these weights will produce the portfolio with a minimum V for a given E only to the extent that our inputs of E and V for each component and the linear correlation coefficient of every
possible pair of components are accurate and variance in returns is infinite.
X1 X2 X3 X4 L1 1 1 1 1 0 .1 -.023 .01 0 .095 -.023 .25 .079 0 .13 .01 .079 .4 0 .21 0 0 0 0 .085 Elementary Transformation Number 1 1 1.3684 2.2105 .8947 0 1 1 1 1 0 0.1 -.023 .01 0 .095 -.023 .25
.079 0 .13 .01 .079 .4 0 .21 0 0 0 0 .085 Elementary Transformation Number 2 1 1.3684 2.2105 .8947 0 0 -.368 -1.210 .1052 0 0 -.160 -.211 -.089 .095 0 .2824 .1313 .0212 .13 0 .0653 .3778 -.008 .21 0
0 0 0 .085 Elementary Transformation Number 3 1 1.3684 2.2105 .8947 0 0 1 3.2857 -.285 0 0 -.160 -.211 -.089 .095 0 .2824 .1313 .0212 .13 0 .0653 .3778 -.008 .21 0 0 0 0 .085 Elementary
Transformation Number 4 1 0 -2.285 1.2857 0
L2 0 1 1 1 1
Answer Explanation 1 0 0 0 0
1.47368 row1*(1/.095) 1 0 0 0 0
1.47368 -.4736 -.1473 .03492 -.0147 0
1.47368 1.28571 row2*(1/-.36842) -.1473 .03492 -.0147 0
1.28571 .05904 row3+(.16054*row2) -.3282 Сгрока4+(.282431*row2) -.0987 row5+(-.065315*row2) 0
3.2857 -.285 .3164 -.135 -.796 .1019
0 .095 .13
0 0 .1632 .0097 .21 1 0 0 0 0 .085 1 Elementary Transformation Number 5 1 0 -2.285 1.2857 0 0 0 1 3.2857 -.285 0 0 0 0 1 -.427 .3002 3.1602 0 0 -.796 .1019 .13 1 0 0 .1632 .0097 .21 1 0 0 0 0 .085 1
Elementary Transformation Number 6 1 0 0 .3080 .6862 7.2233 0 1 0 1.1196 -.986 -1.38 0 0 1 -.427 .3002 3.1602 0 0 0 -.238 .3691 3.5174 0 0 0 .0795 .1609 .4839 0 0 0 0 .085 1 Elementary Transformation
Number 7 1 0 0 .3080 .6862 7.2233 0 1 0 1.1196 -.986 -1.38 0 0 1 -.427 .3002 3.1602 0 0 0 1 -1.545 -14.72 0 0 0 .0795 .1609 .4839 0 0 0 0 .085 1 Elementary Transformation Number 8 1 0 0 0 1.1624
11.760 0 1 0 0 .7443 6.1080 0 0 1 0 -.360 -3.139 0 0 0 1 -1.545 -14.72 0 0 0 0 .2839 1.6557 0 0 0 0 .085 1 Elementary Transformation Number 9 1 0 0 0 1.1624 11.761 0 1 0 0 .7445 6.1098 0 0 1 0 -.361
-3.140 0 0 0 1 -1.545 -14.72 0 0 0 0 1 5.8307 0 0 0 0 0.085 1 Elementary Transformation Number 10 1 0 0 0 0 4.9831 0 1 0 0 0 1.7685 0 0 1 0 0 -1.035 0 0 0 1 0 -5.715 0 0 0 0 1 5.8312 0 0 0 0 0 0.5043
row2+(-1*row1) row3+(-.1*row1) row4+(.0237*row1) row5+(-.01*row1)
-.2857 1.28571 .18658 row3*(1/.31643) -.3282 -.0987 0 .14075 .67265 .18658 -.1795 -.1291 0 .14075 .67265 .18658 .75192 -.1291 0 -.0908 -.1692 .50819 .75192 -.1889 0 -.0909 -.1693 .50823 .75192 -.6655
0 0.68280 0.32620 0.26796 -0.2769 -0.6655 0.05657
row1+(2.2857*row3) row2+(-3.28571*row3) row4+(.7966*row3) row5+(-.16328*row3)
row1+(-.30806*row4) row2+(1.119669*row4) row3+(.42772*row4) row5+(-.079551*row4)
row1+(-1.16248*row5) row2+(-.74455*row5) row3+(.3610*row5) row4+(1.5458trow5) row6+(-.085*row5)
X1 X2 X3 X4 L1 L2 Elementary Transformation Number 11 1 0 0 0 0 49826 0 1 0 0 0 1.7682 0 0 1 0 0 -1.035 0 0 0 1 0 -5.715 0 0 0 0 1 5.8312 0 0 0 0 0 1 Elementary Transformation Number 12 1 0 0 0 0 0 0
1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 Matrix Obtained 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0
Answer Explanation 0.68283 0.32622 0.26795 -0.2769 -0.6655 0.11217 row6*(1/.50434) 0.12391 0.12787 0.38407 0.36424 -1.3197 0.11217
row1+(-4.98265*row6) row2+(-1.76821*row6) row3+(1.0352*row6) row4+(5.7158*row6) row5+(-5.83123*row6)
0.12391 =X1 0.12787 =X2 0.38407 =X3 0.36424 =X4 -1.3197 =-2.6394=L1 /.5 0.11217/ =.22434=L2 .5
INTERPRETING THE RESULTS Once we have obtained the identity matrix, we can interpret its meaning. Here, given the inputs of expected returns and expected variance in returns for all of the components
under consideration, and given the linear correlation coefficients of each possible pair of components, for an expected yield of 14% this solution set is optimal. Optimal, as used here, means that
this solution set will yield the lowest variance for a 14% yield. In a moment, we will determine the variance, but first we must interpret the results. The first four values, the values for X 1
through X4, tell us the weights (the percentages of investable funds) that should be allocated to these investments to achieve this optimal portfolio with a 14% expected return. Hence, we should
invest 12.391% in Toxico, 12.787% in Incubeast, 38.407% in LA Garb, and 36.424% in the savings account. If we are looking at investing $50,000 per this portfolio mix: Stock Toxico Incubeast LA Garb
Percentage .12391 .12787 .38407 .36424
(*50,000 = ) Dollars to Invest $6,195.50 $6,393.50 $19,203.50 $18,212.00
Thus, for Incubeast, we would invest $6,393.50. Now assume that Incubeast sells for $20 a share. We would optimally buy 319.675 shares (6393.5/20). However, in the real world we cannot run out and
buy fractional shares, so we would say that optimally we would buy either 319 or 320 shares. Now, the odd lot, the 19 or 20 shares remaining after we purchased the first 300, we would have to pay up
for. Odd lots are usually marked up a small fraction of a point, so we would have to pay extra for those 19 or 20 shares, which in turn would affect the expected return on our Incubeast holdings,
which in turn would affect the optimal portfolio mix We are often better off to just buy the round lot-in this case, 300 shares. As you can see, more slop creeps into the mechanics of this. Whereas
we can identify what the optimal portfolio is down to the fraction of a share, the real-life implementation requires again that we allow for slop. Furthermore, the larger the equity you are
employing, the more closely the real-life implementation of the approach will resemble the theoretical optimal. Suppose, rather than looking at $50,000 to invest, you were running a fund of $5
million. You would be looking to invest 12.787% in Incubeast (if we were only considering these four investment alternatives)? and would therefore be investing 5,000,000*.12787 = $639,350. Therefore,
at $20 a share, you would buy 639,350/20 = 31,967.8 shares. Again, if you restricted it down to the round lot, you would buy 31,900 shares, deviating from the optimal number of shares by about 0.2%.
Contrast this to the case "where you have $50,000 to invest and buy 300 shares versus the optimal of 319.675. There you are deviating from the optimal by about 6.5%. The Lagrangian multipliers have
an interesting interpretation. To begin with, the Lagrangians we are using here must be divided by .5 after the Identity matrix is obtained before we can interpret them. This is
in accordance with the generalized form of our problem. The L1 variable equals -δV/δE. This means that L1 represents the marginal variance in expected returns. In the case of our example, where L 1 =
-2.6394, we can state that V is changing at a rate of -L1, or -(-2.6394), or 2.6394 units for every unit in E instantaneously at E = .14. To interpret the L2 variable requires that the problem first
be restated. Rather than having ∑Xi = 1, we will state that ∑Xi = M, where M equals the dollar amount of funds to be invested. Then L 2 = δV/δM. In other words, L2 represents the marginal risk of
increased or decreased investment. Returning now to what the variance of the entire portfolio is, we can use Equation (6.06) to discern the variance. Although we could use any variation of Equation
(6.06a) through (6.06d), here we will use variation a: (6.06a) V = ∑[i = 1,N]∑[j = 1,N] Xi*Xj*COVi,j Plugging in the values and performing Equation (6.06a) gives: Xi 0.12391* 0.12391* 0.12391*
0.12391* 0.12787* 0.12787* 0.12787* 0.12787* 0.38407* 0.38407* 0.38407* 0.38407* 0.36424* 0.36424* 0.36424* 0.36424*
Xj 0.12391* 0.12787* 0.38407* 0.36424* 0.12391* 0.12787* 0.38407* 0.36424* 0.12391* 0.12787* 0.38407* 0.36424* 0.12391* 0.12787* 0.38407* 0.36424*
COVi,j 0.1 -0.0237 0.01 0 -0.0237 0.25 0.079 0 0.01 0.079 0.4 0 0 0 0 0
0.0015353688 =-0.0003755116 0.0004759011 0 =-0.0003755116 0.0040876842 0.0038797714 =0 =0.0004759011 =0.0038797714 =0.059003906 =0 =0 =0 =0 =0 .0725872809
Thus, we see that at the value of E = .14, the lowest value for V is obtained at V = .0725872809. Now suppose we decided to input a value of E = .18. Again, we begin with the augmented matrix, which
is exactly the same as in the last example of E = .14, only the upper rightmost cell, that is the first cell in the right-hand side vector, is changed to reflect this new E of .18: X1 X2 X3 X4 L1
Starting Augmented Matrix .095 .13 .21 .085 0 1 1 1 1 0 .1 -.023 0.01 0 .095 -.023 .25 .079 0 .13 .01 .079 .4 0 .21 0 0 0 0 .085
L2 Answer 0 0 1 1 1 1
.18 1 0 0 0 0
Through the use of row operations... the identity matrix is obtained: 1 0 0 0 0 0
0.21401=X1 0.22106=X2 0.66334=X3 -.0981=X4 -1.3197/.5=-2.639=L1 0.11217/.5=.22434=L2
We then go about solving the matrix exactly as before, only this time we get a negative answer in the fourth cell down of the right-hand side vector. Meaning, we should allocate a negative
proportion, a disinvestment of 9.81% in the savings account. To account for this, whenever we get a negative answer for any of the Xi's-which means if any of the first N rows of the right-hand side
vector is less than or equal to zero-we must pull that row+2 and that column out of the starting augmented matrix, and solve for the new augmented matrix. If either of the last 2 rows of the
right-hand side vector are less than or equal to zero, we don't need to do this. These last 2 entries in the right-hand side vector always pertain to the Lagrangians, no matter how many or how few
components there are in total in the matrix. The Lagrangians are allowed to be negative. Since the variable returning with the negative answer corresponds to the weighting of the fourth component, we
pull out the fourth column and the sixth row from the starting augmented matrix. We then use row operations to perform elementary transformations until, again, the identity matrix is obtained:
X1 X2 X3 L1 Starting Augmented Matrix .095 .13 .21 0 1 1 1 0 .1 -.023 0.01 .095 -.023 .25 .079 .13 .01 .079 .4 .21
.18 1 0 0 0
Through the use of row operations... the identity matrix is obtained: 1 0 0 0 0
0.1283688 0.1904699 0.6811613 -2.38/.5=-4.76 0.210944/.5=.4219
=X1 =X2 =X3 =L1 =L2
When you must pull out a row and column like this, it is important that you remember what rows correspond to what variables, especially when you have more than one row and column to pull. Again,
using an example to illustrate, suppose we want to solve for E = .1965. The first identity matrix we arrive at will show negative values for the weighting of Toxico, X1, and the savings account, X4.
Therefore, we return to our starting augmented matrix: X1 X2 X3 X4 Starting Augmented Matrix .095 .13 .21 .085 1 1 1 1 .1 -.023 .01 0 -.023 .25 .079 0 .01 .079 .4 0 0 0 0 0
Answer Pertains to
0 0 .095 .13 .21 .085
.1965 1 0 0 0 0
Toxico Incubeast LA Garb Savings L1 L2
Now we pull out row 3 and column 1, the ones that pertain to Toxico, and also pull row 6 and column 4, the ones that pertain to the savings account: X2 X3 X4 L1 L2 Answer Starting Augmented Matrix
.13 .21 .085 0 0 .1965 1 1 1 0 0 1 .25 .079 0 .13 1 0 .079 .4 0 .21 1 0
Pertains to Incubeast LA Garb L1 L2
So we will be working with the following matrix: X2 X3 X4 L1 L2 Answer Starting Augmented Matrix .13 .21 .085 0 0 .1965 1 1 1 0 0 1 .25 .079 0 .13 1 0 .079 .4 0 .21 1 0
Pertains to Incubeast LA Garb L1 L2
Through the use of row operations ... the identity matrix is obtained: 1 0 0 0
.169 .831 -2.97/.5=-5.94 .2779695/.5=.555939
Incubeast LA Garb L1 L2
Another method we can use to solve for the matrix is to use the inverse of the coefficients matrix. An inverse matrix is a matrix that, when multiplied by the original matrix, yields the identity
matrix. This technique will be explained without discussing the details of matrix multiplication. In matrix algebra, a matrix is often denoted with a boldface capita] letter. For example, we can
denote our coefficients matrix as C. The inverse to a matrix is denoted as superscripting -1 to it. The inverse matrix to C then is C-1. To use this method, we need to first discern the inverse
matrix to our coefficients matrix. To do this, rather than start by augmenting the righthand-side vector onto the coefficients matrix, we augment the identity matrix itself onto the coefficients
matrix. For our 4-stock example: Starting Augmented Matrix X1 X2 X3 X4 0.095 0.13 0.21 0.085 1 1 1 1 0.1 -0.023 0.01 0 -0.023 0.25 0.079 0 0.01 0.079 0.4 0 0 0 0 0
L1 0 0 0.095 0.13 0.21 0.085
L2 0 0 1 1 1 1
Identity Matrix 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0
Now we proceed using row operations to transform the coefficients matrix to an identity matrix. In the process, since every row operation performed on the left is also performed on the right, we will
have transformed the identity matrix on the right-hand side into the inverse matrix
C-r, of the coefficients matrix C. In our example, the result of the row operations yields: C 1 00 0 10 0 01 0 00 0 00 0 00
C-1 2.2527 -0.1915 2.3248 -0.1976 6.9829 -0.5935 -11.5603 1.9826 -23.9957 2.0396 2.0396 -0.1734
10.1049 0.9127 -1.1370 -9.8806 2.2526 -0.1915
0.9127 4.1654 -1.5726 -3.5056 2.3248 -0.1976
-1.1370 -1.5726 0.6571 2.0524 6.9829 -0.5935
-9.8806 -3.5056 2.0524 11.3337 -11.5603 1.9826
Now we can take the inverse matrix, C-i, and multiply it by our original right-hand side vector. Recall that our right-hand side vector is: E S 0 0 0 0
Whenever we multiply a matrix by a columnar vector (such as this) we multiply all elements in the first column of the matrix by the first element in the vector, all elements in the second column of
the matrix by the second element in the vector, and so on. If our vector were a row vector, we would multiply all elements in the first row of the matrix by the first element in the vector, all
elements in the second row of the matrix by the second element in the vector, and so on. Since our vector is columnar, and since the last four elements are zeros, we need only multiply the first
column of the inverse matrix by E (the expected return for the portfolio) and the second column of the inverse matrix by S, the sum of the weights. This yields the following set of equations, which
we can plug values for E and S into and obtain the optimal weightings. In our example, this yields: E*2.2527+S*-0.1915 = Optimal weight for first stock E*2.3248+S*-0.1976 = Optimal weight for second
stock E*6.9829+S*-0.5935 = Optimal weight for third stock E*-11.5603+S*1.9826 = Optimal weight for fourth stock E*-23.9957+S*2.0396 = .5 of first Lagrangian E*2.0396+S*-0.1734 = .5 of second
Lagrangian Thus, to solve for an expected return of 14% (E = .14) with the sum of the weights equal to 1: .14*2.2527+1*-0.1915 = .315378-.1915 = .1239 Toxico .14*2.3248+1*-0.1976 = .325472-.1976 =
.1279 Incubeast .14*6.9829+1*-0.5935 = .977606-.5935 = .3841 LA Garb .14*-11.5603+1*1.9826 = -1.618442+1.9826 = .3641 Savings .14*-23.9957+1*2.0396 = -3.359398+2.0396 = -1.319798*2 = -2.6395 L1
.14*2.0396+1 *-0.1734 = .285544-.1734 = .1121144*2 = .2243L2 Once you have obtained the inverse to the coefficients matrix, you can quickly solve for any value of E provided that your answers, the
optimal weights, are all positive. If not, again you must create the coefficients matrix without that item, and obtain a new inverse matrix. Thus far we have looked at investing in stocks from the
long side only How can we consider short sale candidates in our analysis? To begin with, you would be looking to sell short a stock if you expected it would decline. Recall that the term "returns"
means not only the dividends in the underlying security, but any gains in the value of the security as well. This figure is then specified as a percentage. Thus, in determining the returns of a short
position, you would have to estimate what percentage gain you would expect to make on the declining stock, and from that you would then need to subtract the dividend (however many dividends go
ex-date over the holding period you are calculating your E and V on) as a percentage.4 Lastly, any linear correlation coefficients of which the stock you are looking to short is a member must be
multiplied by -1. Therefore, since the linear correlation coefficient between Toxico and Incubeast is -.15, if you were looking to short Toxico, you would multiply this by -1. In such a case you
would use -.15*-1 = .15 as the linear correlation coefficient. If you linear looking to short both of these stocks, the linear correlation coefficient between the two would be -.15*-1*-1 = -.15. In
other words, if you are looking to 4
In this chapter we are assuming that all transactions are performed in a cash account. So, though a short position is required to be performed in a margin account as opposed to a cash account, we
will not calculate interest on the margin.
short both stocks, the linear correlation coefficient between them remains unchanged, as it would if you were looking to go long both stocks. Thus far we have sought to obtain the optimal portfolio,
and its variance V, when we know the expected return, E, that we seek. We can also solve for E when we know V. The simplest way to do this is by iteration using the techniques discussed thus far in
this chapter. There is much more to matrix algebra than is presented in this chapter. There are other matrix algebra techniques to solve systems of linear equations. Often you will encounter
reference to techniques such as Cramer's Rule, the Simplex Method, or the Simplex Tableau. These are techniques similar to the ones described in this chapter, although more involved. There are a
multitude of applications in business and science for matrix algebra, and the topic is considerably involved We have only etched the surface, just enough for what we need to accomplish. For a more
detailed discussion of matrix algebra and its applications in business and science, the reader is referred to Sets, Matrices, and Linear Programming, by Robert L. Childness. The next chapter covers
utilizing the techniques detailed in this chapter for any tradeable instrument, as well as stocks, while incorporating optimal f, as well as a mechanical system.
Chapter 7 - The Geometry of Portfolios We have now covered how to find the optimal fs for a given market system from a number of different standpoints. Also, we have seen how to derive the efficient
frontier. In this chapter we show how to combine the two notions of optimal f and, the efficient frontier to obtain a truly efficient portfolio for which geometric growth is maximized. Furthermore,
we will delve into an analytical study of the geometry of portfolio construction.
THE CAPITAL MARKET LINES (CMLS) In the last chapter we saw how to determine the efficient frontier parametrically. We can improve upon the performance of any given portfolio by combining a certain
percentage of the portfolio with cash. Figure 7-1 shows this relationship graphically.
CML Line
1.04 B
Efficient frontier
1.02 A
0.04 0.06 Standard Deviation
Figure 7-1 Enhancing returns with the risk-free asset. In Figure 7-1, point A represents the return on the risk-free asset. This would usually be the return on 91-day Treasury Bills. Since the risk,
the standard deviation in returns, is regarded as nonexistent, point A is at zero on the horizontal axis. Point B represents the tangent portfolio. It is the only portfolio lying upon the efficient
frontier that would be touched by a line drawn from the risk-free rate of return on the vertical axis and zero on the horizontal axis. Any point along line segment AB will be composed of the
portfolio at point B and the risk-free asset. At point B, all of the assets would be in the portfolio, and at point A all of the assets would be in the riskfree asset. Anywhere in between points A
and B represents having a portion of the assets in both the portfolio and the risk-free asset. Notice that any portfolio along line segment AB dominates any portfolio on the efficient frontier at the
same risk level, since being on the line segment AB has a higher return for the same risk. Thus, an investor who wanted a portfolio less risky than portfolio B would be better off to put a portion of
his or her investable funds in portfolio B and a portion in the risk-free asset, as opposed to owning 100% of a portfolio on the efficient frontier at a point less risky than portfolio B. The line
emanating from point A, the risk-free rate on the vertical axis and zero on the horizontal axis, and emanating to the right, tangent to one point on the efficient frontier, is called the capital
market line (CML). To the right of point B, the CML line represents portfolios where the investor has gone out and borrowed more money to invest further in portfolio B. Notice that an investor who
wanted a portfolio with a greater return than portfolio B would be better off to do this, as being on the CML line right of point B dominates (has higher return than) those portfolios on the
efficient frontier with the same level of risk. Usually, point B will be a very well-diversified portfolio. Most portfolios high up and to the right and low down and to the left on the efficient
frontier nave very few components. Those in the middle of the efficient frontier, where the tangent point to the risk-free rate is, usually are very well diversified. It has traditionally been
assumed that all rational investors will want to get the greatest return for a given risk and take on the lowest risk for a given return. Thus, all investors would want to be somewhere on the CML
line. In other words, all investors would want to own the same
portfolio, only with differing degrees of leverage. This distinction between the investment decision and the financing decision is known as the separation theorem.1 We assume now that the vertical
scale, the E in E-V theory, represents the arithmetic average HPR (AHPR) for the portfolios and the horizontal, or V, scale represents the standard deviation in the HPRs. For a given risk-free rate,
we can determine where this tangent point portfolio on our efficient frontier is, as the coordinates (AHPR, V) that maximize the following function are: (7.01a) Tangent Portfolio = MAX{(AHPR-(1+RFR))
/SD} where MAX{} = The maximum value. AHPR = The arithmetic average HPR. This is the E coordinate of a given portfolio on the efficient frontier. SD = The standard deviation in HPRs. This is the V
coordinate of a given portfolio on the efficient frontier. RFR = The risk-free rate. In Equation (7.0la), the formula inside the braces ({ }) is known as the Sharpe ratio, a measurement of
risk-adjusted returns. Expressed literally, the Sharpe ratio for a portfolio is a measure of the ratio of the expected excess returns to the standard deviation. The portfolio with the highest Sharpe
ratio, therefore, is the portfolio where the CML line is tangent to the efficient frontier for a given RFR. The Sharpe ratio, when multiplied by the square root of the number of periods over which it
was derived, equals the t statistic. From the resulting t statistic it is possible to obtain a confidence level that the AHPR exceeds the RFR by more than chance alone, assuming finite variance in
the returns. The following table shows how to use Equation (7.0la) and demonstrate the entire process discussed thus far. The first two columns represent the coordinates of different portfolios on
the efficient frontier. The coordinates are given in (AHPR, SD) format, which corresponds to the Y and X axes of Figure 7-1. The third column is the answer obtained for Equation (7.01a) assuming a
1.5% risk-free rate (equating to an AHPR of 1.015. We assume that the HPRs here are quarterly HPRs, thus a 1.5% risk-free rate for the quarter equates to roughly a 6% risk-free rate for the year).
Thus, to work out (7.0la) for the third set of coordinates (60013. 1.002): (AHPR-(1+RFR))/SD = (1.002-(1+.015))/.00013 = (1.0021.015)/.00013 = -.013/.00013 = -100 The process is completed for each
point along the efficient frontier. Equation (7.01a) peaks out at .502265, which is at the coordinates (.02986, 1.03). These coordinates are the point where the CML line is tangent to the efficient
frontier, corresponding to point B in Figure 7-1. This tangent point is a certain portfolio along the efficient frontier. The Sharpe ratio is the slope of the CML, with the steepest slope being the
tangent line to the efficient frontier. Efficient Frontier AHPR SD 1.00000 1.00100 1.00200 1.00300 1.00400 l.00500 1.00600 1.00700 l.00600 1.00900 1.01000 1.01100 1.01200 1.01300 1.01400 0.91500
1.01600 1.01700 1.01800 1.01900 1
0.00000 0.00003 0.00013 0.00030 0.00053 0.00063 0.00119 0.00163 0.00212 0.00269 0.00332 0.00402 0.00476 0.00561 0.00650 0.00747 0.00649 0.00959 0.01075 0.01198
CML line Eq.(7.01a) Percentage RFR=.015 0 0.00% -421.902 0.11% -100.000 0.44% -40.1812 1.00% -20.7184 1.78% -12.0543 2.78% -7.53397 4.00% -4.92014 5.45% -3.29611 7.11% -2.23228 9.00% -1.50679 11.11%
-0.99622 13.45% -0.62783 16.00% -0.35663 18.78% -0.15375 21.78% 0 25.00% 0.117718 28.45% 0.208552 32.12% 0.279036 36.01% 0.333916 40.12%
AHPR 1.0150 1.0150 1.0151 1.0152 1.0153 1.0154 1.0156 1.0158 1.0161 1.0164 1.0167 1.0170 1.0174 1.0178 1.0183 1.0188 1.0193 1.0198 1.0204 1.0210
See Tobin, James, "Liquidity preference as Behavior Towards Risk," Review of Economic Studies 25, pp. 65-85, February 1958.
Efficient Frontier AHPR SD 1.02000 0.01327 1.02100 0.01463 1.02200 0.01606 1.02300 0.01755 1.02400 0.01911 1.02500 0.02074 1.02600 0.02243 1.02700 0.02419 1.02800 0.02602 1.02900 0.02791 1.03000
0.02986 1.03100 1.03200 1.03300 1.03400 1.03500 1.03600 1.03700 1.03800 1.03900 1.04000 1.04100 1.04200 1.04300 1.04400 1.04500 1.04600 1.04700 1.04800 1.04900 1.05000
0.03189 0.03398 0.03614 0.03836 0.04065 0.04301 0.04543 0.04792 0.05047 0.05309 0.05578 0.05853 0.06136 0.06424 0.06720 0.07022 0.07330 0.07645 0.07967 0.08296
CML line Eq.(7.01a) Percentage 0.376698 44.45% 0.410012 49.01% 0.435850 53.79% 0.455741 58.79% 0.470073 64.01% 0.482174 69.46% 0.490377 75.12% 0.496064 81.01% 0.499702 87.12% 0.501667 93.46% 0.502265
( 100.02% peak) 0.501742 106.79% 0.500303 113.80% 0.498114 121.02% 0.495313 128.46% 0.492014 136.13% 0.488313 144.02% 0.484287 152.13% 0.480004 160.47% 0.475517 169.03% 0.470873 177.81% 0.466111
186.81% 0.461264 196.03% 0.456357 205.48% 0.451416 215.14% 0.446458 225.04% 0.441499 235.15% 0.436554 245.48% 0.431634 256.04% 0.426747 266.82% 0.421902 277.82%
AHPR 1.0217 1.0224 1.0231 1.0236 1.0246 1.0254 1.0263 1.0272 1.0281 1.0290 1.0300 1.0310 1.0321 1.0332 1.0343 1.0354 1.0366 1.0376 1.0391 1.0404 1.0417 1.0430 1.0444 1.0456 1.0473 1.0466 1.0503
1.0516 1.0534 1.0550 1.0567
The next column over, "percentage, "represents what percentage of your assets must be invested in the tangent portfolio if you are at the CML line for that standard deviation coordinate. In other
words, for the last entry in the table, to be on the CML line at the .08296 standard deviation level, corresponds to having 277.82% of your assets in the tangent portfolio (i.e., being fully invested
and borrowing another $1.7782 for every dollar already invested to invest further). This percentage value is calculated from the standard deviation of the tangent portfolio as: (7.02) P = SX/ST where
SX = The standard deviation coordinate for a particular point on the CML line. ST = The standard deviation coordinate of the tangent portfolio. P = The percentage of your assets that must be invested
in the tangent portfolio to be on the CML line for a given SX. Thus, the CML line at the standard deviation coordinate .08296, the last entry in the table, is divided by the standard deviation
coordinate of the tangent portfolio, .02986, yielding 2.7782, or 277.82%. The last column in the table, the CML line AHPR, is the AHPR of the CML line at the given standard deviation coordinate. This
is figured as: (7.03) ACML = (AT*P)+((1+RFR)*(1-P)) where ACML = The AHPR of the CML line at a given risk coordinate, or a corresponding percentage figured from (7.02). AT = The AHPR at the tangent
point, figured from (7.01a). P = The percentage in the tangent portfolio, figured from (7.02) RFR = The risk-free rate. On occasion you may want to know the standard deviation of a certain point on
the CML line for a given AHPR. This linear relationship can be obtained as: (7.04) SD = P*ST where SD = The standard deviation at a given point on the CML line corresponding to a certain percentage,
P, corresponding to a certain AHPR. P = The percentage in the tangent portfolio, figured from (7.02). ST = The standard deviation coordinate of the tangent portfolio.
THE GEOMETRIC EFFICIENT FRONTIER The problem with Figure 7-1 is that it shows the arithmetic average HPR. When we are reinvesting profits back into the program we must look at the geometric average
HPR for the vertical axis of the efficient frontier. This changes things considerably. The formula to convert a point on the efficient frontier from an arithmetic HPR to a geometric is: (7.05) GHPR =
(AHPR^2-V)^(1/2) where GHPR = The geometric average HPR. AHPR = The arithmetic average HPR. V = The variance coordinate. (This is equal to the standard deviation coordinate squared.)
GHPR Figure 7-2 The efficient frontier with/without reinvestment In Figure 7-2 you can see the efficient frontier corresponding to the arithmetic average HPRs as well as that corresponding to the
geometric average HPRs. You can see what happens to the efficient frontier when reinvestment is involved. By graphing your GHPR line, you can see which portfolio is the geometric optimal (the highest
point on the GHPR line). You could also determine this portfolio by converting the AHPRs and Vs of each portfolio along the AHPR efficient frontier into GHPRs per Equation (7.05) and see which had
the highest GHPR. Again, that would be the geometric optimal. However, given the AHPRs and the Vs of the portfolios lying along the AHPR efficient frontier, we can readily discern which portfolio
would be geometric optimal- the one that solves the following equality: (7.06a) AHPR-1-V = 0 where AHPR = The arithmetic average HPRs. This is the E coordinate of a given portfolio on the efficient
frontier. V = The variance in HPR. This is the V coordinate of a given portfolio on the efficient frontier. This is equal to the standard deviation squared. Equation (7.06a) can also be written as
any one of the following three forms: (7.06b) AHPR-1 = V (7.06c) AHPR-V = 1 (7.06d) AHPR = V+1 A brief note on the geometric optimal portfolio is in order here. Variance in a portfolio is generally
directly and positively correlated to drawdown in that higher variance is generally indicative of a portfolio with higher draw-down. Since the geometric optimal portfolio is that portfolio for which
E and V are equal (with E = AHPR-1), then we can assume that the geometric optimal portfolio will see high drawdowns. In fact, the greater the GHPR of the geometric optimal portfolio-that is, the
more the portfolio makes-the greater will be its drawdown in terms of equity retracements, since the GHPR is directly positively correlated with the AHPR. Here again is a paradox. We want to be at
the geometric optimal portfolio. Yet, the higher the geometric mean of a portfolio, the greater will be the drawdowns in terms of percentage equity retracements generally. Hence, when we perform the
exercise of diversification, we should view it as an exercise to obtain the highest geometric mean rather than the lowest drawdown, as the two tend to pull in oppo-
site directions! The geometrical optimal portfolio is one where a line drawn from (0,0), with slope 1, intersects the AHPR efficient frontier. Figure 7-2 demonstrates the efficient frontiers on a
one-trade basis. That is, its rows what you can expect on a one-trade basis. We can convert the geometric average HPR to a TWR by the equation: (7.07) GTWR = GHPR^N where GTWR = The vertical axis
corresponding to a given GHPR after N trades. GHPR = The geometric average HPR. N = The number of trades we desire to observe. Thus, after 50 trades a GHPR of 1.0154 would be a GTWR of 1.0154 A 50 =
2.15. In other words, after 50 trades we would expect our stake to have grown by a multiple of 2.15. We can likewise project the efficient frontier of the arithmetic average HPRs into ATWRs as:
(7.08) ATWR = 1+N*(AHPR-1) where ATWR = The vertical axis corresponding to a given AHPR after N trades. AHPR = The arithmetic average HPR. N = The number of trades we desire to observe. Thus, after
50 trades, an arithmetic average HPR of 1.03 would have made 1+50*(1.03-1) = 1+50*.03 = 1+1.5 = 2.5 times our starting stake. Note that this shows what happens when we do not reinvest our winnings
back into the trading program. Equation (7.08) is the TWR you can expect when constant-contract trading.
Figure 7-3 The efficient frontier with/without reinvestment Just as Figure 7-2 shows the TWRs, both arithmetic and geometric, for one trade, Figure 7-3 shows them for a few trades later. Notice that
the GTWR line is approaching the ATWR line. At some point for N, the geometric TWR will overtake the arithmetic TWR. Figure 7-4 shows the arithmetic and geometric TWRs after more trades have elapsed.
Notice that the geometric has overtaken the arithmetic. If we were to continue with more and more trades, the geometric TWR would continue to outpace the arithmetic. Eventually, the geometric TWR
becomes infinitely greater than the arithmetic.
The logical question is, “How many trades must elapse until the geometric TWR surpasses the arithmetic?” Recall Equation (2.09a), which tells us the number of trades required to reach a specific
goal: (2.09a) N = ln(Goal)/ln(Geometric Mean) where N = The expected number of trades to reach a specific goal. Goal = The goal in terms of a multiple on our starting stake, a TWR. ln() = The natural
logarithm function. We let the AHPR at the same V as our geometric optimal portfolio be our goal and use the geometric mean of our geometric optimal portfolio in the denominator of (2.09a). We can
now discern how many trades are required to make our geometric optimal portfolio match one trade in the corresponding arithmetic portfolio. Thus: N = ln(l.031)/ln( 1.01542) = .035294/.0153023 =
1.995075 We would thus expect 1.995075, or roughly 2, trades for the optimal GHPR to be as high up as the corresponding (same V) AHPR after one trade. The problem is that the ATWR needs to reflect
the fact that two trades have elapsed. In other words, as the GTWR approaches the ATWR, the ATWR is also moving upward, albeit at a constant rate (compared to the GTWR, which is accelerating). We can
relate this problem to Equations (7.07) and (7.08), the geometric and arithmetic TWRs respectively, and express it mathematically: (7.09) GHPR^N => 1+N*(AHPR-1) Since we know that when N = 1, G will
be less than A, we can rephrase the question to "At how many N will G equal A?" Mathematically this is: (7.10a) GHPR^N = 1+N*(AHPR-1) which can be written as: (7.10b) 1+N*(AHPR-1)-GHPR ^N = 0 or
(7.10c) 1+N*AHPR-N-GHPR^N = 0 or (7.10d) N = (GHPR^N-1)/(AHPR -1) The N that solves (7.10a) through (7.10d) is the N that is required for the geometric HPR to equal the arithmetic. All three
equations are equivalent. The solution must be arrived at by iteration. Taking our geometric optimal portfolio of a GHPR of 1.01542 and a corresponding AHPR of 1.031, if we were to solve for any of
Equations (7.10a) through (7.10d), we would find the solution to these equations at N = 83.49894. That is, at 83.49894 elapsed trades, the geometric TWR will overtake the arithmetic TWR for those
TWRs corresponding to a variance coordinate of the geometric optimal portfolio.
ATWR GHPR SD Figure 7-5 AHPR, GHPR, and their CML lines.
Figure 7-4 The efficient frontier with/without reinvestment.
Just as the AHPR has a CML line, so too does the GHPR. Figure 75 shows both the AHPR and the GHPR with a CML line for both calculated from the same risk-free rate. The CML for the GHPR is calculated
from the CML for the AHPR by the following equation: (7.11) CMLG = (CMLA^2-VT*P)^(1/2) where
CMLG = The E coordinate (vertical) to the CML line to the GHPR for a given V coordinate corresponding to P. CMLA = The E coordinate (vertical) to the CML line to the AHPR for a given V coordinate
corresponding to P. P = The percentage in the tangent portfolio, figured from (7.02). VT = The variance coordinate of the tangent portfolio. You should know that, for any given risk-free rate, the
tangent portfolio and the geometric optimal portfolio are not necessarily (and usually are not) the same. The only time that these portfolios will be the same is when the following equation is
satisfied: (7.12) RFR = GHPROPT-1 where RFR = The risk-free rate. GHPROPT = The geometric average HPR of the geometric optimal portfolio. This is the E coordinate of the portfolio on the efficient
frontier. Only when the GHPR of the geometric optimal portfolio minus 1 is equal to the risk-free rate will the geometric optimal portfolio and the portfolio tangent to the CML line be the same. If
RFR > GHPROPT-1, then the geometric optimal portfolio will be to the left of (have less variance than) the tangent portfolio. If RFR < GHPROPT-1, then the tangent portfolio will be to the left of
(have less variance than) the geometric optimal portfolio. In all cases, though, the tangent portfolio will, of course, never have a higher GHPR than the geometric optimal portfolio. Note also that
the point of tangency for the CML to the GHPR and for the CML to the AHPR is at the same SD coordinate. We could use Equation (7.01a) to find the tangent portfolio of the GHPR line by substituting
the AHPR in (7.01a) with GHPR. The resultant equation is: (7.01b) Tangent Portfolio = MAX{(GHPR-(1+RFR))/SD} where MAX() = The maximum value. GHPR = The geometric average HPRs. This is the E
coordinate of a given portfolio on the efficient frontier. SD = The standard deviation in HPRs. This is the SD coordinate of a given portfolio on the efficient frontier. RFR = The risk-free rate.
UNCONSTRAINED PORTFOLIOS Now we will see how to enhance returns beyond the GCML line by lifting the sum of the weights constraint. Let us return to geometric optimal portfolios. If we look for the
geometric optimal portfolio among our four market systems-Toxico, Incubeast, LA Garb and a savings accountwe find it at E equal to .1688965 and V equal to .1688965, thus conforming with Equations
(7.06a) through (7.06d). The geometric mean of such a portfolio would therefore be 1.094268, and the portfolio's composition would be: Toxico 18.89891% Incubeast 19.50386% LA Garb 58.58387% Savings
Account .03014% In using Equations (7.06a) through (7.06d), you must iterate to the solution. That is, you try a test value for E (halfway between the highest and the lowest AHPRs, -1 is a good
starting point) and solve the matrix for that E. If your variance is higher than E, it means the tested for value of E was too high, and you should lower it for the next attempt. Conversely, if your
variance is less than E, you should raise E for the next pass. You determine the variance for the portfolio by using one of Equations (6.06a) through (6.06d). You keep on repeating the process until
whichever of Equations (7.06a) through (7.06d) you choose to use, is solved. Then you will have arrived at your geometric optimal portfolio. (Note that all of the portfolios discussed thus far,
whether on the AHPR efficient frontier or the GHPR efficient frontier, are determined by constraining the sum of the percentages, the weights, to 100% or 1.00.) Recall Equation (6.10), the equation
used in the starting augmented matrix to find the optimal weights in a portfolio. This equation dictates that the sum of the weights equal 1: (6.10) (∑[i = 1,N]Xi) -1 = 0 where
N = The number of securities comprising the portfolio. Xi = The percentage weighting of the ith security. The equation can also be written as: (∑[i = 1,N]Xi) = l By allowing the left side of this
equation to be greater than 1, we can find the unconstrained optimal portfolio. The easiest way to do this is to add another market system, called non-interest-bearing cash (NIC), into the Starting
augmented matrix. This market system, NIC, will have an arithmetic average daily HPR of 1.0 and a population standard deviation (as well as variance and covariances) in those daily HPRs of 0. What
this means is that each day the HPR for NIC will be 1.0. The correlation coefficients for NIC to any other market system are always 0. Now we set the sum of the weights constraint to some arbitrarily
high number, greater than I. A good initial value is 3 times the number of market systems (without NIC) that you are using. Since we have 4 market systems (when not counting NIC) we should set this
sum of the weights constraint to 4*3 = 12. Note that we are not really lifting the constraint that the sum of the weights be below some number, we are just setting this constraint at an arbitrarily
high value. The difference between this arbitrarily high value and what the sum of the weights actually comes out to be will be the weight assigned to NIC. We are not going to really invest in NIC,
though. It's just a null entry that we are pumping through the matrix to arrive at the unconstrained weights of our market systems. Now, let's take the parameters of our four market systems from
Chapter 6 and add NIC as well: Investment
Expected Return as an HPR Toxico 1.095 Incubeast Corp. 1.13 LA Garb 1.21 Savings Account 1.085 NIC 1.00
Expected Standard Deviation of Return .316227766 .5 .632455532 0 0
The covariances among the market systems, with NIC included, are as follows: T I L S N
T .1 -.0237 .01 0 0
I -.0237 .25 .079 0 0
L .01 .079 .4 0 0
S 0 0 0 0 0
N 0 0 0 0 0
Thus, when we include NIC, we are now dealing with 5 market systems; therefore, the generalized form of the starting augmented matrix is: X1*U1+ X2*U2+ X3*U3+ X4*U4+ X5*U5 = E X1+ X2+ X3+ X4+ X5 = S
X1*COV1,1+X2*COV1,2+X3*COV1,3+X4*COV1,4+X5*COV1,5+.5*L1*U1+. 5*L2 = 0 X1*COV2,1+X2*COV2,2+X3*COV2,3+X4*COV2,4+X5*COV2,5+.5*L1*U2+. 5*L2 = 0
X1*COV3,1+X2*COV3,2+X3*COV3,3+X4*COV3,4+X5*COV3,5+.5*L1*U3+. 5*L2 = 0 X1*COV4,1+X2*COV4,2+X3*COV4,3+X4*COV4,4+X5*COV4,5+.5*L1*U4+. 5*L2 = 0
X1*COV5,1+X2*COV5,2+X3*COV5,3+X4*COV5,4+X5*COV5,5+.5*L1*U5+. 5*L2 = 0 where E = The expected return of the portfolio. S = The sum of the weights constraint. COVA,B = The covariance between securities
A and B. Xi = The percentage weighting of the ith security. Ui = The expected return of the ith security. L1 = The first Lagrangian multiplier. L2 = The second Lagrangian multiplier. Thus, once we
have included NIC, our starting augmented matrix appears as follows: X1 .095 1 .1
X2 .13 1 -.0237
X3 .21 1 .01
X4 .085 1 0
X5 L1 L2 Answer 0 E 0 12 0 .095 1 0
-.0237 .01 0 0
.25 .079 0 0
.079 .4 0 0
.13 .21 .085 0
Note that the answer column of the second row, the sum of the weights constraint, is 12, as we determined it to be by multiplying the number of market systems (not including NIC) by 3. When you are
using NIC, it is important that you include it as the last, the Nth market system of N market systems, in the starting augmented matrix. Now, the object is to obtain the identity matrix by using row
operations to produce elementary transformations, as was detailed in Chapter 6. You can now create an unconstrained AHPR efficient frontier and an unconstrained GHPR efficient frontier. The
unconstrained AHPR efficient frontier represents using leverage but not reinvesting. The GHPR efficient frontier represents using leverage and reinvesting the profits. Ideally, we want to find the
unconstrained geometric optimal portfolio. This is the portfolio that will result in the greatest geometric growth for us. We can use Equations (7.06a) through (7.06d) to solve for which of the
portfolios along the efficient frontier is geometric optimal. In so doing, we find that no matter what value we try to solve E for (the value in the answer column of the first row), we get the same
portfolio-comprised of only the savings account levered up to give us whatever value for E we want. This results in giving us our answer; we get the lowest V (in this case zero) for any given E. What
we must do, then, is take the savings account out of the matrix and start over. This time we will try to solve for only four market systems -Toxico, Incubeast, LA Garb, and NIC-and we set our sum of
the weights constraint to 9. Whenever you have a component in the matrix with zero variance and an AHPR greater than 1, you'll end up with the optimal portfolio as that component levered up to meet
the required E. Now, solving the matrix, we find Equations (7.06a) through (7.06d) satisfied at E equals .2457. Since this is the geometric optimal portfolio, V is also equal to .2457. The resultant
geometric mean is 1.142833. The portfolio is: Toxico 102.5982% Incubeast 49.00558% LA Garb 40.24979% NIC 708.14643% “Wait,” you say. "How can you invest over 100% in certain components?” We will
return to this in a moment. If NIC is not one of the components in the geometric optimal portfolio, then you must make your sum of the weights constraint, S, higher. You must keep on making it higher
until NIC becomes one of the components of the geometric optimal portfolio. Recall that if there are only two components in a portfolio, if the correlation coefficient between them is -1, and if both
have positive mathematical expectation, you will be required to finance an infinite number of contracts. This is so because such a portfolio would never have a losing day. Now, the lower the
correlation coefficients are between the components in the portfolio, the higher the percentage required to be invested in those components is going to be. The difference between the percentages
invested and the sum of the weights constraint, S, must be filled by NIC. If NIC doesn't show up in the percentage allocations for the geometric optimal portfolio, it means that the portfolio is
running into a constraint at S and is therefore not the unconstrained geometric optimal. Since you are not going to be actually investing in NIC, it doesn't matter how high a percentage it commands,
as long as it is listed as part of the geometric optimal portfolio.
HOW OPTIMAL F FITS WITH OPTIMAL PORTFOLIOS In Chapter 6 we saw that we must determine an expected return (as a percentage) and an expected variance in returns for each component in a portfolio.
Generally, the expected returns (and the variances) are determined from the current price of the stock. An optimal percentage (weighting) is then determined for each component. The equity of the
account is then multiplied by a components weighting to determine the number of dollars to allocate to that component, and this dollar allocation is then divided by the current price per share to
determine how many shares to have on. That generally is how portfolio strategies are currently practiced. But it is not optimal. Here lies one of this book's
many hearts. Rather than determining the expected return and variance in expected return from the current price of the component, the expected return and variance in returns should be determined from
the optimal f, in dollars, for the component. In other words, as input you should use the arithmetic average HPR and the variance in the HPRs. Here, the HPRs used should be not of trades, but of a
fixed time length such as days, weeks, months, quarters, or years-as we did in Chapter 1 with Equation (1.15). (1.15) Daily HPR = (A/B)+1 where A = Dollars made or lost that day. B = Optimal fin
dollars. We need not necessarily use days. We can use any time length we like so long as it is the same time length for all components in the portfolio (and the same time length is used for
determining the correlation coefficients between these HPRs of the different components). Say the market system with an optimal f of $2,000 made $100 on a given day. Then the HPR for that market
system for that day is 1.05. If you are figuring your optimal f based on equalized data, you must use Equation (2.12) in order to obtain your daily HPRs: (2.12) Daily HPR = D$/f$+1 where D$ = The
dollar gain or loss on 1 unit from the previous day. This is equal to (Tonight's Close-Last Night's Close)*Dollars per Point f$ = The current optimal fin dollars, calculated from Equation (2.11).
Here, however, the current price variable is last night's close. In other words, once you have determined the optimal fin dollars for 1 unit of a component, you then take the daily equity changes on
a 1unit basis and convert them to HPRs per Equation (1.15)-or, if you are using equalized data, you can use Equation (2.12). When you are combining market systems in a portfolio, all the market
systems should be the same in terms of whether their data, and hence their optimal fs and by-products, has been equalized or not. Then we take the arithmetic average of the HPRs. Subtracting 1 from
the arithmetic average will give us the expected return to use for that component. Taking the variance of the daily (weekly, monthly, etc.) HPRs will give the variance input into the matrix. Lastly,
we determine the correlation coefficients between the daily HPRs for each pair of market systems under consideration. Now here is the critical point. Portfolios whose parameters (expected returns,
variance in expected ret urns, and correlation coefficients of the expected returns) are selected based on the current price of the component will not yield truly optimal portfolios. To discern the
truly optimal portfolio you must derive the input parameters based on trading 1 unit at the optimal f for each component. You cannot be more at the peak of the optimal f curve than optimal f itself:
to base the parameters on the current market price of the component is to base your parameters arbitrarily-and, as a consequence, not necessarily optimally. Now let's return to the question of how
you can invest more than 100% in a certain component. One of the basic premises of this book is that weight and quantity are not the same thing. The weighting that you derive from solving for a
geometric optimal portfolio must be reflected back into the optimal f's of the portfolio's components. The way to do this is to divide the optimal f's for each component by its corresponding weight.
Assume we have the following optimal f's (in dollars): Toxico $2,500 Incubeast $4,750 LA Garb $5,000 (Note that, if you are equalizing your data, and hence obtaining an equalized optimal f and
by-products, then your optimal fs in dollars will change each day based upon the previous day's closing price and Equation[2.11].) We now divide these f's by their respective weightings: Toxico
$2,500/1.025982 = $2,436.69 Incubeast $4,750/.4900558 = $9,692.77 LA Garb $5,000/.4024979 = $12,422.43
Thus, by trading in these new "adjusted" f values, we Witt be at the geometric optimal portfolio. In other words, suppose Toxico represents a certain market system. By trading 1 contract under this
market system for every $2,436.69 in equity (and doing the same with the other market systems at their new adjusted f values) we will be at the geometric optimal unconstrained portfolio. Likewise if
Toxico is a stock, and we regard 100 shares as "1 contract," we will trade 100 shares of Toxico for every l$2,436.69 in account equity. For the moment, disregard margin completely. Later in the next
chapter we will address the potential problem of margin requirements. "Wait a minute," you protest. "If you take an optimal portfolio and change it by using optimal f, you have to prove that it is
still optimal. But if you treat the new values as a different portfolio, it must fall somewhere else on the return coordinate, not necessarily on the efficient frontier. In other words, if you keep
reevaluating f, you cannot stay optimal, can you?" We are not changing the f values. That is, our f values (the number of units put on for so many dollars in equity) are still the same. We are simply
performing a shortcut through the calculations, which makes it appear as though we are "adjusting" our f values. We derive our optimal portfolios based on the expected returns and variance in returns
of trading 1 unit of each of the components, as well as on the correlation coefficients. We thus derive optimal weights (optimal percentages of the account to trade each component with). Thus, if a
market system had an optimal f of $2,000, and in optimal portfolio weight of .5, we would trade 50% of our account at the full optimal f level of $2,000 for this market system. This is exactly the
same is if we said we will trade 100% of our account at the optimal f divided by the optimal weighting ($2,000/.5) of $4000. In other words, we are going to trade the optimal f of $2,000 per unit on
50% of our equity, which in turn is exactly the same as saying we are going to trade the adjusted f of $4,000 on 100% of our equity. The AHPRs and SDs that you input into the matrix are determined
from the optimal f values in dollars. If you are doing this on stocks, you can compute your values for AHPR, SD, and optimal f on a I-share or a 100-share basis (or any other basis you like). You
dictate the size of one unit. In a nonleveraged situation, such as a portfolio of stocks that are not on margin, weighting and quantity are synonymous. Yet in a leveraged situation, such as a
portfolio of futures market systems, weighting and quantity arc different indeed. you can now see the idea first roughly introduced in Portfolio Management Formulas: that optimal quantities are what
we seek to know, and that this is a function of optimal weightings. When we figure the correlation coefficients on the HPRs of two market systems, both with a positive arithmetic mathematical
expectation, we find a slight tendency toward positive correlation. This is because the equity curves (the cumulative running sum of daily equity changes) both tend to rise up and to the right. This
can be bothersome to some people. One solution is to determine a least squares regression line to each equity curve (before equalization, if employed) and then take the difference at each point in
time on the equity curve and its regression line. Next, convert this now detrended equity curve back to simple daily equity changes (noncumulative, i.e., the daily change in the detrended equity
curve). If you are equalizing the data, you would then do it at this point in the sequence of events. Lastly, you figure your correlations on this processed data. This technique is valid so long as
you are using the correlations of daily equity changes and not prices. If you use prices, you may do yourself more harm than good. Very often prices and daily equity changes are linked, as example
would be a long-term moving average crossover system. This detrending technique must always be used with caution. Also, the daily AHPR and standard deviation in HPRs must always be figured off of
non-detrended data. A final problem that happens when you detrend your data occurs with systems that trade infrequently. Imagine two day-trading systems that give one trade per week, both on
different days. The correlation coefficient between them may be only slightly positive. Yet when we detrend their data, we get very high positive correlation. This mistakenly happens because their
regression lines are rising a little each day. Yet on most days the equity change is zero. Therefore, the difference is nega-
tive. The preponderance , slightly negative days with both market systems, then mistakenly results in high positive correlation.
THRESHOLD TO THE GEOMETRIC FOR PORTFOLIOS Now let's address the problem of incorporating the threshold to the geometric with the given optimal portfolio mix. This problem is readily handled simply by
dividing the threshold to the geometric for each component by its weighting in the optimal portfolio. This is done in exactly the same way as the optimal fs of the components are divided by their
respective weightings to obtain a new value representative of the optimal portfolio mix. For example, assume that the threshold to the geometric for Toxico is $5,100. Dividing this by its weighting
in the optimal portfolio mix of 1.025982 gives us a new adjusted threshold to the geometric of: Threshold = $5,100/1.025982 = $4,970.85 Since the weighting for Toxico is greater than 1, both its
optimal f and its threshold to the geometric will be reduced, for they are divided by this weighting. In this case, if we cannot trade the fractional unit with Toxico, and if we are trading only 1
unit of Toxico, we will switch up to 2 units only when our equity gets up to $4,970.85. Recall that our new adjusted f value in the optimal portfolio mix for Toxico is $2,436.69 ($2,500/1.025982).
Since twice this amount equals $4,873.38, we would ordinarily move up to trading two contracts at that point. However, our threshold to the geometric, being greater than twice the f allocation in
dollars, tells us there isn't any benefit to switching to trading 2 units before our equity reaches the threshold to the geometric of $4970.85. Again, if you are equalizing your data, and hence
obtaining an equalized optimal f and by-products, including the threshold to the geometric, then your optimal fs in dollars and your thresholds to the geometric will change each day, based upon the
previous day's closing price and Equation (2.11).
COMPLETING THE LOOP One thing you will readily notice about unconstrained portfolios (portfolios for which the sum of the weights is greater than 1 and NIC shows up as a market system in the
portfolio) is that the portfolio is exactly the same for any given level of E-the only difference being the degree of leverage. This is not true for portfolios lying along the efficient frontier(s)
when the sum of the weights is constrained). In other words, the ratios of the weightings of the different market systems to each other are always the same for any point along the unconstrained
efficient frontiers (AHPR or GHPR). For example, the ratios of the different weightings between the different market systems in the geometric optimal portfolio can be calculated. The ratio of Toxico
to Incubeast is 102.5982% divided by 49.00558%, which equals 2.0936. We can thus determine the ratios of all the components in this portfolio to one another: Toxico/Incubeast = 2.0936 Toxico/LA Garb
= 2.5490 Incubeast/LA Garb = 1.2175 Now, we can go back to the unconstrained portfolio and solve for different values for E. What follows are the weightings for the components of the unconstrained
portfolios that have the lowest variances for the given values of E. You will notice that the ratios of the weightings of the components are exactly the same: E = .1 E = .3 Toxico .4175733 1.252726
Incubeast .1 994545 .5983566 LA Garb .1638171 .49145
Thus, we can state that the unconstrained efficient frontiers are the same portfolio at different levels of leverage. This portfolio, the one that gets levered up and down with E when the sum of the
weights constraint is lifted, is the portfolio that has a value of zero for the second Lagrangian multiplier when the sum of the weights equals 1. Therefore, we can readily determine what our
unconstrained geometric optimal portfolio will be. First, we find the portfolio that has a value of zero for the second Lagrangian multiplier when the sum of the weights is constrained to 1.00. One
way to find this is through iteration. The resultant portfolio will be that portfolio which gets levered up (or down) to satisfy any given E in the unconstrained portfolio. That value
for E which satisfies any of Equations (7.06a) through (7.06d) will be the value for E that yields the unconstrained geometric optimal portfolio. Another equation that we can use to solve for which
portfolio along the unconstrained AHPR efficient frontier is geometric optimal is to use the first Lagrangian multiplier that results in determining a portfolio along any particular point on the
unconstrained AHPR efficient frontier. Recall from Chapter 6 that one of the by-products in determining the composition of a portfolio by the method of row-equivalent matrices is the first Lagrangian
multiplier. The first Lagrangian multiplier represents the instantaneous rate of change in variance with respect to expected return, sign reversed. A first Lagrangian multiplier equal to -2 means
that at that point the variance was changing at that rate (-2) opposite the expected return, sign reversed. This would result in a portfolio that was geometric optimal. (7.06e) L1 = -2 where L1 = The
first Lagrangian multiplier of a given portfolio along the unconstrained AHPR efficient frontier.2 Now it gets interesting as we tie these concepts together. The portfolio that gets levered up and
down the unconstrained efficient frontiers (arithmetic or geometric) is the portfolio tangent to the CML line emanating from an RFR of 0 when the sum of the weights is constrained to 1.00 and NIC is
not employed. Therefore, we can also find the unconstrained geometric optimal portfolio by first finding the tangent portfolio to an RFR equal to 0 where the sum of the weights is constrained to
1.00, then levering this portfolio up to the point where it is the geometric optimal. But how can we determine how much to lever this constrained portfolio up to make it the equivalent of the
unconstrained geometric optimal portfolio? Recall that the tangent portfolio is found by taking the portfolio along the constrained efficient frontier (arithmetic or geometric) that has the highest
Sharpe ratio, which is Equation (7.01). Now we lever this portfolio up, and we multiply the weights of each of its components by a variable named q, which can be approximated by: (7.13) q = (E-RFR)/V
where E = The expected return (arithmetic) of the tangent portfolio. RFR = The risk-free rate at which we assume you can borrow or loan. V = The variance in the tangent portfolio. Equation (7.13)
actually is a very close approximation for the actual optimal q. An example may help illustrate the role of optimal q. Recall that our unconstrained geometric optimal portfolio is as follows:
Component Toxico Incubeast LA Garb
Weight 1.025955 .4900436 .4024874
This portfolio, we found, has an AHPR of 1.245694 and variance of .2456941. Throughout the remainder of this discussion we will assume for simplicity's sake an RFR of 0. (Incidentally, the Sharpe
ratio of this portfolio, (AHPR-(1+RFR))/SD, is .49568.) Now, if we were to input the same returns, variances, and correlation coefficients of these components into the matrix and solve for which
portfolio was tangent to an RFR of 0 when the sum of the weights is constrained to 1.00 and we do not include NIC, we would obtain the following portfolio: Component Toxico Incubeast LA Garb
Weight .5344908 .2552975 .2102117
This particular portfolio has an AHPR of 1.128, a variance of .066683, and a Sharpe ratio of .49568. It is interesting to note that the Sharpe ratio of the tangent portfolio, a portfolio for which
the sum of 2
Thus, we can state that the geometric optimal portfolio is that portfolio which, when the sum of the weights is Constrained to 1, has a second Lagrangian multiplier equal to 0, and when unconstrained
has a first Lagrangian multiplier of -2. Such a portfolio will also have a second Lagrangian multiplier equal to 0 when unconstrained.
the weights is con strained to 1.00 and we do not include NIC, is exactly the same as the Sharpe ratio for our unconstrained geometric optimal portfolio. Subtracting 1 from our AHPRs gives us the
arithmetic average return of the portfolio. Doing so we notice that in order to obtain the same return for the constrained tangent portfolio as for the unconstrained geometric optimal portfolio, we
must multiply the former by 1.9195. .245694/.128 = 1.9195 Now if we multiply each of the weights of the constrained tangent portfolio, the portfolio we obtain is virtually identical to the
unconstrained geometric optimal portfolio: Component Toxico Incubeast LA Garb
Weight .5344908 .2552975 .2102117
* 1.9195 = Weight 1.025955 .4900436 .4035013
The factor 1.9195 was arrived at by dividing the return on the unconstrained geometric optimal portfolio by the return on the constrained tangent portfolio. Usually, though, we will want to find the
unconstrained geometric optimal portfolio knowing only the constrained tangent portfolio. This is where optimal q comes in.3 If we assume an RFR of 0, we can determine the optimal q on our
constrained tangent portfolio as: (7.13) q = (E-RFR)/V = (. 128-0)7.066683 = 1.919529715 A few notes on the RFR. To begin with, we should always assume an RFR of 0 when we are dealing with futures
contracts. Since we are not actually borrowing or lending funds to lever our portfolio up or down, there is effectively an RFR of 0. With stocks, however, it is a different story. The RFR you use
should be determined with this fact in mind. Quite possibly, the leverage you employ does not require you to use an RFR other than 0. You will often be using AHPRs and variances for portfolios that
were determined by using daily HPRs of the components. In such cases, you must adjust the RFR from an annual rate to a daily one. This is quite easy to accomplish. First, you must be certain that
this annual rate is what is called the effective mutual interest rate. Interest rates are typically stated as annual percentages, but frequently these annual percentages are what is referred to as
the nominal annual interest rate. When interest is compounded semiannually, quarterly, monthly, and so on, the interest earned during a year is greater than if compounded annually (the nominal rate
is based on compounding annually). When interest is compounded more frequently than annually, an effective annual interest rate can be determined from the nominal interest rate. It is the effective
annual interest rate that concerns us and that we will use in our calculations. To convert the nominal rate to an effective rate we can use: (7.14) E = (1+R/M)^M-1 where E = The effective annual
interest rate. R = The nominal annual interest rate. M = The number of compounding periods per year. Assume that the nominal annual interest rate is 9%, and suppose that it is compounded monthly.
Therefore, the corresponding effective annual interest rate is: (7.14) E = (1+.09/12)^12-1 = (1+.0075)^12-1 = 1.0075^12-1 = 1.093806898-1 = .093806898 Therefore, our effective annual interest rate is
a little over 9.38%. Now if we figured our HPRs on the basis of weekdays, we can state that there are 365.2425/7*5 = 260.8875 weekdays, on average, in a year. Dividing .093806898 by 260.8875 gives us
a daily RFR of .0003595683887. If we determine that we are actually paying interest to lever our portfolio up, and we want to determine from the constrained tangent portfolio what the unconstrained
geometric optimal portfolio is, we simply input the value for the RFR into the Sharpe ratio, Equation (7.01), and the optimal q, Equation (7.13). Now to close the loop. Suppose you determine that the
RFR for your portfolio is not 0, and you want to find the geometric optimal portfolio without first having to find the constrained portfolio tangent to your applicable RFR. Can you just go straight
to the matrix, set the sum 3
Latane, Henry, and Donald Tuttle, "Criteria for Portfolio Building," journal of Finance 22, September 1967, pp. 362363.
of the weights to some arbitrarily high number, include NIC, and find the unconstrained geometric optimal portfolio when the RFR is greater than 0? Yes, this is easily accomplished by subtracting the
RFR from the expected returns of each of the components, but not from NIC (i.e., the expected return for NIC remains at 0, or an arithmetic average HPR of 1.00). Now, solving the matrix will yield
the unconstrained geometric optimal portfolio when the RFR is greater than 0. Since the unconstrained efficient frontier is the same portfolio at different levels of leverage, you cannot put a CML
line on the unconstrained efficient frontier. You can only put CML lines on the AHPR or GHPR efficient frontiers if they are constrained (i.e., if the sum of the weights equals 1). It is not logical
to put CML lines on the AHPR or GHPR unconstrained efficient frontiers. We have seen numerous ways of arriving at the geometric optimal portfolio. For starters, we can find it empirically, as was
detailed in Portfolio Management Formulas and recapped in Chapter 1 of this text. We have seen how to find it parametrically in this chapter, firm a number of different angles, for any value of the
risk-free rate. Now that we know how to find the geometric optimal portfolio we must learn how to use it in real life. The geometric optimal portfolio will give us the greatest possible geometric
growth In the next chapter we will go into techniques to use this portfolio within given risk constraints.
Chapter 8 - Risk Management We now know haw to find the optimal portfolios by numerous different methods. Further, we now have a thorough understanding of the geometry of portfolios and the
relationship of optimal quantities and optimal weightings. We can now see that the best way to trade any portfolio of any underlying instrument is at the geometric optimal level Doing so on a
reinvestment of returns basis will maximize the ratio of expected gain to expected risk In this chapter we discuss how to use these geometric optimal portfolios within the risk constraints that we
specify. Thus, whatever vehicles we are trading in, we can align ourselves anywhere we desire on the risk spectrum. In so doing, we will obtain the maximum rate of geometric growth for a given level
of risk
ASSET ALLOCATION You should be aware that the optimal portfolio obtained by this parametric technique will always be almost, if not exactly, the same as the portfolio that would be obtained by using
an empirical technique such as the one detailed in the first chapter or in Portfolio Management Formulas. As such, we can expect tremendous drawdowns on the entire portfolio in terms of equity
retracement. Our only guard against this is to dilute the portfolio somewhat. What this amounts to is combining the geometric optimal portfolio with the risk-free asset in some fashion. This we call
asset allocation. The degree of risk and safety for any investment is not a function of the investment itself, but rather a function of asset allocation. Even portfolios of blue-chip stocks, if
traded at their unconstrained geometric optimal portfolio levels, will show tremendous drawdowns. Yet these blue-chip stocks must be traded at these levels to maximize potential geometric gain
relative to dispersion (risk) and also provide for attaining a goal in the least possible time. When viewed from such a perspective, trading blue-chip stocks is as risky as pork bellies, and pork
bellies are no less conservative than blue-chip stocks. The same can be said of a portfolio of commodity trading systems and a portfolio of bonds. The object now is to achieve the desired level of
potential geometric gain to dispersion (risk) by combining the risk-free asset with whatever it is we are trading, be it a portfolio of blue-chip stocks, bonds, or commodity trading systems. When you
trade a portfolio at unconstrained fractional f, you are on the unconstrained GHPR efficient frontier, but to the left of the geometric optimal point-the point that satisfies any of Equations (7.06a)
through (7.06e). Thus, you have less potential gain relative to the dispersion than you would if you were at the geometric optimal point. This is one way you can combine a portfolio with the
risk-free asset. Another way you can practice asset allocation is by splitting your equity into two subaccounts, an active subaccount and an inactive subaccount. These are not two separate accounts,
rather they are a way of splitting a single account in theory. The technique works as follows. First, you must decide upon an initial fractional level. Suppose that, initially, you want to emulate an
account at the half f level. Your initial fractional level is .5 (the initial fractional level must be greater than zero and less than 1). This means you will split your account, with half the equity
in your account going into the inactive subaccount and half going into the active subaccount. Assume you are starting out with a $100,000 account. Initially, $50,000 is in the inactive subaccount and
$50,000 is in the active subaccount. It is the equity in the active subaccount that you use to determine how many contracts to trade. These subaccounts are not real; they are a hypothetical construct
you are creating in order to manage your money more effectively. You always use the full optimal fs with this technique. Any equity changes are reflected in the active portion of the account.
Therefore, each day you must look at the account's total equity (closed equity plus open equity, marking open Positions to the market), and subtract the inactive amount (which will remain constant
from day to day). The difference is your active equity, and it is on this difference that you will calculate how many contracts to trade at the full f levels. Now suppose that the optimal f for
market system A is 'to trade 1 contract for every $2,500 in account equity. You come into the first day with $50,000 in active equity, and therefore you will look to
trade 20 contracts. If you were using the straight half f strategy; you would end up with the same number of contracts on day one. At half f, you would trade 1 contract for every $5,000 in account
equity ($2,500/.5), and you would use the full $100,000 account equity to figure how many contracts to trade. Therefore, under the half f strategy, you would trade 20 contracts on this day as well.
However, as soon as the equity in the accounts changes, the number of contracts you will trade changes as well. Assume now that you make $5,000 this next day, thus pushing the total equity in the
account up to $105,000. Under the half f strategy, you will now be trading 21 contracts. However, with the split-equity technique, you must subtract the now-constant inactive amount of $50,000 from
your total equity of $105,000. This leaves an active equity portion of $55,000, from which you will figure your contract size at the optimal f level of 1 contract for every $2,500 in equity.
Therefore, with the split-equity technique, you will now look to trade 22 contracts. The procedure works the same way on the downside of the equity curve, with the split-equity technique peeling off
contracts at a faster rate than the fractional f strategy does. Suppose you lost $5,000 on the first day of trading, putting the total account equity at $95,000. With the fractional f strategy you
would now look to trade 19 contracts ($95,000/$5,000). However, with the split-equity technique you are now left with $45,000 of active equity, and thus you will look to trade 18 contracts ($45,000/
$2,500). Notice that with the split-equity technique, the exact fraction of optimal f that we are using changes with the equity changes. We specify the fraction we want to start at. In our example we
used an initial fraction of .5. When the equity increases, this fraction of the optimal f increases too, approaching 1 as a limit as the account equity approaches infinity. On the downside, this
fraction approaches 0 as a limit at the level where the total equity in the account equals the inactive portion. The fact that portfolio insurance is built into the split-equity technique is a
tremendous benefit and will be discussed at length later in this chapter. Because the split-equity technique has a fraction for f that moves, we refer to it as a dynamic fractional ѓ strategy, as
opposed to the straight fractional f (static fractional f) strategy. The static fractional f strategy puts you on the CML line somewhere to the left of the optimal portfolio if you are using a
constrained portfolio. Throughout the life of the account, regardless of equity changes, the account will stay at that point on the CML line. If you are using an unconstrained portfolio (as you
rightly should), you will be on the unconstrained efficient frontier (since there are no CML lines with unconstrained portfolios) at some point to the left of the optimal portfolio. As the equity in
the account changes, you stay at the same point on the unconstrained efficient frontier. With the dynamic fractional f technique, you start at these same points for the constrained and unconstrained
portfolios. However, as the account equity increases, the portfolio moves up and to the right, and as the equity decreases, the portfolio moves down and to the left. The limits are at the peak of the
curve to the right where the fraction of f equals 1, and on the left at the point where the fraction off equals 0. With the static f method of asset allocation, the dispersion remains constant, since
the fraction of optimal fused is constant. Unfortunately, this is not true with the dynamic fractional f technique. Here, as the account equity increases, so does the dispersion as the fraction of
optimal f used increases. The upper limit to this dispersion is the dispersion at full f as the account equity approaches infinity. On the downside, the dispersion diminishes rapidly as the fraction
of optimal f used approaches zero as the total account equity approaches the inactive subaccount equity. Here, the lower limit to the dispersion is zero. Using the dynamic fractional f technique is
analogous to trading an account full out at the optimal f levels, where the initial size of account is the active equity portion. So we see that there are two ways to dilute an account down from the
full geometric optimal portfolio, two ways to exercise asset allocation. We can trade a static fractional or a dynamic fractional f. The dynamic fractional will also have dynamic variance, a slight
negative, but it also provides for portfolio insurance (more on this later). Although the two techniques are related, you can also see that they differ. Which is best? Assume we have a system in
which the average daily arithmetic HPR is 1.0265. The standard deviation in these daily HPRs is .1211, so the geometric mean is 1.019. Now, we look at the
numbers for a .2 static fractional f and a .1 static fractional f by using Equations (2.06) through (2.08): (2.06) FAHPR = (AHPR-1)*FRAC+1 (2.07) FSD = SD*FRAC (2.08) FGHPR = (FAHPR^2-FSD^2)^1/2
where FRAC = The fraction of optimal f we are solving for AHPR = The arithmetic average HPR at the optimal f, SD = The standard deviation in HPRs at the optimal f. FAHPR = The arithmetic average HPR
at the fractional f. FSD = The standard deviation in HPRs at the fractional f, FGHPR = The geometric average HPR at the fractional f. The results then are: Full f .2 f .1 f AHPR 1.0265 1.0053 1.00265
SD .1211 .02422 .01211 GHPR 1.01933 1.005 1.002577
Now recall Equation (2.09a), the time expected to reach a specific goal: (2.09a) N = ln(Goal)/1n(Geometric Mean) where N = The expected number of trades to reach a specific goal. Coal = The goal in
terms of a multiple on our starting stake, a TWR. ln() = The natural logarithm function. Now, we compare trading at the -2 static fractional f strategy, with a geometric mean of 1.005, to the .2
dynamic fractional f strategy (20% as initial active equity) with a daily geometric mean of 1.01933. The time (number of days since the geometric means are daily) required to double the static
fractional f is given by Equation (2.09a) as: ln(2)/ln( 1.005) = 138.9751 To double the dynamic fractional f requires setting the goal to 6. This is because if you initially have 20% of the equity at
work, and you start out with a $100,000 account, then you initially have $20,000 at work. The goal is to make the active equity equal $120,000. Since the inactive equity remains at $80,000, you will
then have a total of $200,000 on your account. Thus, to make a $20,000 account grow to $120,000 means you need to achieve a TWR of 6. Therefore, the goal is 6 in order to double a .2 dynamic
fractional f: 1n(6)/ln(1.01933) = 93.58634 Notice that it took 93 days for the dynamic fractional f versus 138 days for the static fractional f. Now look at the .1 fraction. The number of days
expected in order for the static technique to double is: ln(2)/ln( 1.002577) = 269.3404 Compare this to doubling a dynamic fractional f that is initially set to .1 active. You need to achieve a TWR
of 11, so the number of days required for the comparative dynamic fractional f strategy is: ln(11)/ln( 1.01933) = 125.2458 To double the account equity at the .1 level of fractional f takes 269 days
for our static example, as compared to 125 days for the dynamic. The lower the fraction for f, the faster the dynamic will outperform the static technique. Now take a look at tripling the .2
fractional f. The number of days expected by the static technique to triple is: ln(3)/ln( 1.005) = 220.2704 This compares to its dynamic counterpart, which requires: ln(11)/ln( 1.01933) = 125.2458
days To make 400% profit (i.e., a goal or TWR of 5) requires of the .2 static technique: ln(5)/ln( 1.005) = 322.6902 days Which compares to its dynamic counterpart: ln(21)/ln( 1.01933) = 159.0201
days The dynamic technique takes almost half as much time as the static to teach the goal of 400% in this example. However, if you look out in time 322.6902 days to where the static technique
doubled, the dynamic technique would be at a TWR of:
TWR = .8+(1.01933^322.6902)*.2 = .8+482.0659576*.2 = 97.21319 This represents making over 9,600% in the time it took the static to make 100% We can now amend Equation (2.09a) to accommodate both the
static and fractional dynamic f strategies to determine the expected length required to achieve a specific goal as a TWR. To begin with, for the static fractional f, we can create Equation (2.09b):
(2.09b) N = ln(Goal)/ln(A) where N = The expected number of trades to reach a specific goal. Goal = The goal in terms of a multiple on our starting stake, a TWR. A = The adjusted geometric mean. This
is the geometric mean, run through Equation (2.08 to determine the geometric mean for a given static fractional f. ln() = The natural logarithm function. For a dynamic fractional f, we have Equation
(2.09c): (2.09c) N = ln(((Goal-1)/ACTV)+l)/ln(Geometric Mean) where N = The expected number of trades to reach a specific goal. Goal = The goal in terms of a multiple on our starting stake, a TWR.
ACTV = The active equity percentage. Geometric Mean = This is simply the raw geometric mean, there is no adjustment performed on it as there is in (2.09b). ln() = The natural logarithm function. To
illustrate the use of (2.09c), suppose we want to determine how long it will take an account to double (i.e., TWR = 2) at .1 active equity and a geometric mean of 1.01933: (2.09) N = ln(((Goal-1)/
ACTV)+l)/ln(Geometric Mean)-ln(((21)/.l)+l)/ln(1.01933) = ln((1/.1)+1)/ln(1.01933) = ln( 10+l)/ln( 1.01933) = ln(11)/ln( 1.01933) = 2.397895273/.01914554872 = 125.2455758 Thus, if our geometric mean
is determined on a daily basis, we can expect to double in about 125% days. If our geometric mean is determined on a trade-by-trade basis, we can expect to double in about 125% trades. So long as you
are dealing with an N great enough such that (2.09c) is less than (2.09b), then you are benefiting from dynamic fractional f trading.
Ultimately the dynamic makes infinetely more than static fractional f strategy for the same initial level of risk
static Time --> Figure 8-1 Static versus dynamic fractional f. Figure 8-1 demonstrates the relationship between trading at a static versus a dynamic fractional f strategy over time. The more the time
that elapses, the greater the difference between the static fractional f and the dynamic fractional f strategy. Asymptotically, the dynamic fractional f strategy provides infinitely greater wealth
than its static counterpart. In the long run you are better off to practice asset allocation in a dynamic fractional f technique. That is, you determine an initial level, a percentage, to allocate as
active equity. The remainder is inactive equity. The day-to-day equity changes are reflected in the active portion only.
The inactive dollar amount remains constant. Therefore, each day you subtract the constant inactive dollar amount from your total account equity. This difference is the active portion, and it is on
this active portion that you will figure your quantities to trade in based on the optimal f levels. Eventually, if things go well for you, your active portion will dwarf your inactive portion, and
you'll have the same problem of excessive variance and Potential drawdown that you would have had initially at the full optimal f level. We now discuss four ways to treat this "problem." There are no
fine lines delineating these four methods, and it is possible to mix methods to meet your specific needs.
REALLOCATION: FOUR METHODS First, a word about the risk-free asset. Throughout this chapter the risk-free asset has been treated as though it were simply cash, or nearcash equivalents such as
Treasury Bills or money market funds (assuming that there is no risk in any of these). The risk-free asset can also be any asset which the investor believes has no risk, or risk so negligible as to
be nonexistent. This may include long-term government and corporate bonds. These can be coupon bonds or zeros. Holders may even write call options against these risk-free assets to further enhance
their returns. Many trading programs employ zero coupon bonds as the risk-free asset. For every dollar invested in such a program, a dollar's worth of face value zero coupon bonds is bought in the
account. Such a bond, if it were to mature in, say, 5 years, would surely cost less than a dollar. The difference between the dollar face value of the bond and its actual cost is the return the bond
will generate over its remaining life. This difference is then applied toward the trading program. If the program loses all of this money, the bonds will still mature at their full face value. At the
time of the bond maturity, the investor is then paid an amount equal to his initial investment, although he would not have seen any return on that initial investment over the term that the money was
in the program (5 years in the case of this example). Of course, this is predicated upon the managers of the program not losing an amount in excess of the difference between the face value of the
bond and its market cost. This same principle can be applied by any trader. Further, you need not use zero coupon bonds. Any kind of interest-generating vehicle can be used. The point is that the
risk-free asset need not be simply "dead" cash. It can be an actual investment program, designed to provide a real yield, and this yield can be made to offset potential losses in the program. The
main consideration is that the risk-free asset be regarded as risk-free (i.e., treated as though safety of principal were the primary concern). Now on with our discussion of allocating between the
risk-free asset, the "inactive" portion of the account, and the active, trading portion. The first, and perhaps the crudest, way to determine what the active/inactive percentage split will be
initially, and when to reallocate back to this percentage, is the investor utility method. This can also referred to as the gut feel method. Here, we assume that the drawdowns to be seen will be
equal to a complete retracement of active equity. Therefore, if we are willing to see a 50% drawdown, we initially allocate 50% to active equity. Likewise, if we are tilling to see a 10% drawdown, we
initially split the account into 10% active, 90*inactive. Basically, with the investor utility method you are trying to allocate as high a percentage to active equity as you are willing to risk
losing. Now, it is possible that the active portion may be completely wiped out, at which point the trader no longer has any active portion of his account left with which to continue trading. At such
a point, it will be necessary for the trader to decide whether to keep on trading, and if so, what percentage of the remaining funds in the account (the inactive subaccount) to allocate as new active
equity. This new active equity can also be lost, so it is important that the trader bear in mind at the outset of this program that the initial active equity is not the maximum amount that can be
lost. Furthermore, in any trading where there is unlimited liability on a given position (such as a futures trade) the entire account is at risk, and even the trader's assets outside of the account
are at risk! The reader should not be deluded into thinking that he or she is immune from a string of locked limit days, or an enormous opening gap that could take the entire account into a deficit
position, regardless of what the "active" equity portion of the account is.
This approach also makes a distinction between a drawdown in blood and a drawdown in diet cola. For instance, if a trader decides that a 25% equity retracement is the most that the trader would
initially care to sit through, he or she should initially split the account into 75% inactive, 2.5% active. Suppose the trader is starting out with a $100,000 account. Initially, therefore, $25,000
is active and $75,000 is inactive. Now suppose that the account gets up to $200,000. The trader still has $75,000 inactive, but now the active portion is up to $125,000. Since he or she is trading at
the full f amount on this $125,000, it is very possible to lose a good portion, if not all of this amount by going into an historically typical drawdown at this point. Such a drawdown would represent
greater than a 25% equity retracement, even though the amount of the initial starting equity that would be lost would be 25% if the total account value plunged down to the inactive $75,000. An
account that starts out at a lower percentage of active equity will therefore be able to reallocate sooner than an account trading the same market systems starting out at a higher percentage of
active equity. Therefore, not only does the account that starts out at a lower percentage of active equity have a lower potential drawdown on initial margin, but also since the trader can reallocate
sooner he is less likely to get into awkward ratios of active to inactive equity (assuming an equity runup) than if he started out at a higher initial active equity percentage. As a trader, you are
also faced with the question of when to reallocate, whether you are using the crude investor utility method or one of the more sophisticated methods about to be described. You should decide in
advance at what point in your equity, both on the upside and on the downside, you want to reallocate. For instance, you may decide that if you get a 100% return on your initial investment, it would
be a good time to reallocate. Likewise, you should also decide in advance at what point on the downside you will reallocate. Usually this point is the point where there is either no active equity
left or the active equity left doesn't allow for even 1 contract in any of the market systems you are using. You should decide, preferably in advance, whether to continue trading if this downside
limit is hit, and if so, what percentage to reallocate to active equity to start anew. Also, you may decide to reallocate with respect to time, particularly for professionally managed accounts. For
example, you may decide to reallocate every quarter. This could be incorporated with the equity limits of real-location. You may decide that if the active portion is completely wiped out, you will
stop trading altogether until the quarter is over. At the beginning of the next quarter, the account is reallocated with X% as active equity and 100-X% as inactive equity. It is not beneficial to
reallocate too frequently. Ideally, you will never reallocate. Ideally, you will let the fraction of optimal f you are using keep approaching 1 as your account equity grows. In reality, however, you
most likely will reallocate at some point in time. It is to be hoped you will not reallocate so frequently that it becomes a problem. Consider the case of reallocating after every trade or every day.
Such is the case with static fractional f trading. Recall again Equation (2.09a), the time required to reach a specific goal. Let's return to our system, which we are trading with a .2 active portion
and a geometric mean of 1.01933. We will compare this to trading at the static fractional .2 f, where the resultant geometric mean is 1.005. If we start with a $100,000 account and we want to
reallocate at $110,000 total equity, the number of days (since our geometric means here are on a per day basis) required by the static fractional .2 f is: ln(1.1)/ln(1.005) = 19.10956 This compares
to using $20,000 of the $100,000 total equity at the full f amount and trying to get the total account up to $110,000. This would represent a goal of 1.5 times the $20,000: ln(1.5)/ln(1.01933) =
21.17807 At lower goals, the static fractional f strategy grows faster than its corresponding dynamic fractional f counterpart. As time elapses, the dynamic overtakes the static, until eventually the
dynamic is infinitely farther ahead. Figure 8-1 displays this relationship between the static and dynamic fractional fs graphically. If you reallocate too frequently you are only shooting yourself in
the foot, as the technique would then be inferior to its static fractional f counterpart, Therefore, since you are best off in the long run to use the dynamic fractional f approach to asset
allocation, you are also best off to reallocate funds between the active and inactive subaccounts as infre-
quently as possible. Ideally, you will make this division between active and inactive equity only once, at the outset of the program. Generally, the dynamic fractional f will overtake its static
counterpart faster the lower the portion of initial active equity. In other words, a portfolio with an initial active equity of .1 will overcome its static counterpart faster than a portfolio with an
initial active equity allocation of .2 will overtake its static counterpart. At an initial active equity allocation of 100% (1.0), the dynamic never overtakes the static fractional f (rather they
grow at the same rate). Also affecting the rate at which the dynamic fractional f overtakes its static counterpart is the geometric mean of the portfolio itself. The higher the geometric mean, the
sooner the dynamic will overtake the static. At a geometric mean of 1.0, the dynamic never overtakes its static counterpart. A second method for determining initial active equity amounts and
real-location is the scenario planning method. Under this method the amount allocated initially is determined mathematically as a function of the different scenarios, their outcomes, and their
probabilities of occurrence, for the performance of the account. This exercise, too, can be performed at regular intervals. The technique involves the scenario planning method detailed in Chapter 4.
As an example, suppose you are pondering three possible scenarios for the next quarter: Scenario Drawdown No gain Good runup
Probability 50% 25% 25%
Result -100% 0% +300%
The result column pertains to the results on the account's active equity. Thus, there is a 50% chance here of a 100% loss of active equity, a 25% chance of the active equity remaining unchanged, and
a 25% chance of a 360% gain on the active equity. In reality you should consider more than three scenarios, but for simplicity, only three are used here. You input the three different scenarios,
their probabilities of occurrence, and their results in units, where each unit represents a percentage point. The results are determined based on what you see happening for each scenario if you were
trading at the full optimal f amount. Inputting these three scenarios yields an optimal f of .11. Don't confuse this optimal f with the optimal fs of the components of the portfolio you are trading.
They are different. Optimal f here pertains to the optimal f of the scenario planning exercise you just performed, which also told you the optimal amount to allocate as active equity for your given
parameters. Therefore, given these three scenarios, you are best off in an asymptotic sense to allocate 11% to active equity and the remaining 89% to inactive. At the beginning of the next quarter,
you perform this exercise again, and determine your new allocations at that time. Since the amount of funds you have to reallocate for a given quarter is a function of how you have allocated them for
the previous quarter, you are best off to use this optimal f amount, as it will provide you with the greatest geometric growth in the long run. (Again, that's provided that your input-the scenarios,
their probabilities, and the corresponding results-is accurate.) This scenario planning method of asset allocation is also useful if you are trying to incorporate the opinion of more than one
adviser. In our example, rather than pondering three possible scenarios for the next quarter, you might want to incorporate the opinions of three different advisers. The probability column
corresponds to how much faith you have in each different adviser. So in our example, the first scenario, a 50% probability of a 100% loss on active equity, corresponds to a very bearish adviser whose
opinion deserves twice the weight of the other two advisers. Recall the share average method of pulling out of a program, which was examined in Chapter 2. We can incorporate this concept here as a
reallocation method. In so doing, we will be creating a technique that systematically takes profits out of a program advantageously and also takes us out of a losing program. The program calls for
pulling out a regular periodic percentage of the total equity in the account (active equity + inactive equity). Therefore, each month, quarter, or whatever time period you are using, you will pull
out X% of your equity. Remember though, that you want to get enough time in each period to make certain that you are benefiting, at least somewhat, by dynamic fractional f. Any value for N that is
enough to satisfy Equation (8.01) is a value for N that we can use and be certain that we are benefiting from dynamic fractional f: (8.01) FG^N <= G^N*FRAC+1-FRAC where FG = The geometric mean for
the fractional f, found by Equation (2.08). N = The number of periods, with G and FG figured on the basis of 1 period. G = The geometric mean at the optimal f level. FRAC = The active equity
percentage. If we are using an active equity percentage of 20% (i.e., FRAC = .2), then FG must be figured on the basis of a .2 f. Thus, for the case where our geometric mean at full optimal f is
1.01933, and the .2 f (FG) is 1.005, we want a value for N that satisfies the following: 1.005^N <= 1.01933^N*.2+1-.2 We figured our geometric mean for optimal f(G) and therefore also our geometric
mean for the fractional f (FG) on a daily basis, and we want to see if 1 quarter is enough time. Since there are about 63 trading days per quarter, we want to see if an N of 63 is enough time to
benefit by dynamic fractional f. Therefore, we check Equation (8.01) at a value of 63 for N: 1.005^63 <= 1.01933^63*.2+1-.2 1.369184237 <= 3.340663933*.2+1-.2 1.369184237 <= .6681327866+1-.2
1.369184237 <= 1.6681327866-.2 1.369184237 <= 1.4681327866 The equation is satisfied, since the left side is less than or equal to the right side. Thus, we can reallocate on a quarterly basis under
the given values here and be benefiting from using dynamic fractional f. And where do you put this now pulled-out equity? Why, it goes right back into the account as inactive equity. Each period you
will figure the total value of your account, and transfer that amount from active to inactive equity. Thus, there is reallocation. For example, again assume a $100,000 account where $20,000 is
regarded as the active amount. Say you are share averaging out on a quarterly basis, and the quarterly percentage you pull out is 2%. Now assume that at the beginning of the following quarter the
account still stands at $100,000 total equity, of which $20,000 is active equity. You now take out 2% of the total account equity of $100,000 and transfer that amount from active to inactive equity.
Therefore, you transfer $2,000 from active to inactive equity, and your $100,000 account now has $18,000 active equity and $82,000 inactive. We hope that the program will outpace the periodic
percentage withdrawals to the upside. Suppose that in our last example, our $100,000 account goes to $110,000 at the end of the quarter. Now, when we go to reallocate 2%, $2,200, we debit our active
equity amount of $30,000 and credit our inactive amount of $80,000. Thus, we have $27,800 active equity and $82,200 inactive. Since our active equity after the reallocation is still greater than it
was at the beginning of the previous period, we can say that the program has outpaced the reallocation. On the other hand, if the program loses money, or if the program goes nowhere (in which case
you are risking money repeatedly, yet not making any upward progress on your equity), this technique has you eventually end up with the entire account equity as inactive equity. At that point, you
have automatically ceased trading a losing program. Naturally, two questions must now crop up. The first is, "What must this periodic percentage reduction be such that if the account equity were to
stagnate after N periodic deductions from active equity, the program would automatically terminate (i.e., active equity equal to 0)?" The solution is given by Equation (8.02): (8.02) P = 1-INACTIVE^
(1/N) where P = The periodic percentage of the total account equity that should be transferred from active to inactive equity. INACTIVE = The inactive percent of account equity. N = The number of
periods we want the program to terminate in if the equity stagnates.
Thus, if we were to make quarterly transfers of equity from active to inactive, and we were using an initial allocation of 80% as inactive equity, and we wanted the program to terminate in 2,5 years
(10 quartersi.e., N = 10), the quarterly percentage would be: P = 1-.8^(1/10) = 1-.8^.1 = 1-.9779327685 = .0220672315 Thus, we should pull out 2.20672315% of the total equity each quarter, and
transfer that from active to inactive equity. The second question to arise is, "If we are pulling out a certain given percentage, what must the number of periods be in order for the active equity to
equal 0?" In other words, if we know we want to pull out P% each period (again we assume that the periods here are quarters) and if the account equity stagnates, over how many periods, N, must we
make these equity transfers until the active equity equals 0. The solution is given by Equation (8.03): (8.03) N = ln(INACTIVE)/ln(l-P) where P = The periodic percentage of the total account equity
that will be transferred from active to inactive equity. INACTIVE = The inactive percentage of account equity. N = The number of periods it will take for the program to terminate if the equity
stagnates. Again, assume that the initial inactive equity is allocated as 80% and that you are pulling out 2.20672315% per quarter. Therefore, the number of periods, quarters in this case, required
until the program terminates if the equity stagnates is: N = ln(.8)/ln(l-.0220672315) = ln(.8)/ln(.9779327685) = -.223143/.0223143 = 10 For the given values, it would thus take 10 periods for the
program to terminate. Share averaging will get us out of a portfolio over time at an aboveaverage price, just as dollar averaging will get us into a portfolio over time at a below-average cost.
Consider now that most people do just the opposite of this, hence they are getting into and out of a portfolio at prices worse than average. When someone opens an account to trade, they dump all the
trading capital in and just start trading. When they want to add funds, they will almost always invariably add in single blocks of cash, unable to make equal dollar deposits over time. A trader
trying to live off trading profits will generally withdraw enough money from the account on a periodic basis to cover his living expenses, regardless of what percentage of his account this
constitutes. This is exactly what he should not do. Suppose that the trader's living expenses are constant from one month to the next, SO he is withdrawing a constant dollar amount. By doing this he
is accomplishing the exact opposite of share averaging in that he will be withdrawing a larger percentage of his funds when the account balance is lower, and a smaller percentage when the account
balance is higher. In short, he is slowly getting out of the portfolio (or a portion of it) over time at a below-average price. Rather, the trader should withdraw a constant percentage (of total
account equity, active plus inactive) each month. The withdrawn funds can be put into a middle account, a simple demand deposit account. Then from this demand deposit account the trader can withdraw
a constant dollar amount each month to meet his living expenses. If the trader were to bypass this middle account and withdraw a constant dollar amount directly from the trading account, it would
cause the ideas of share averaging and dollar averaging to work against him. Recall from Chapter 2 the observation that when you are trading at the optimal f levels you can expect to be in the
worst-case drawdown 35 to 55% of the time period you are looking at. Generally, this doesn't sit well with most traders. Most traders want or need a much smoother equity curve, either to satisfy the
needs of their living expenses or for other, more emotional, reasons. What trader wouldn't like to make a steady $X per day from trading? This 35 to 55% principle is true on a full optimal f basis,
and therefore is true on a dynamic fractional f basis as well, but is not true on a static fractional f basis. Since the dynamic is asymptotically better than its static fractional f counterpart, we
can expect this 35 to 55% principle to apply to us if we are going to trade our account in
the mathematically optimal fashion-that is, at full optimal f for a given level of initial risk (our initial active equity). The establishment of a buffer demand deposit account allows for the
account to be traded in the mathematically optimal fashion (dynamic optimal 0 while it also allows the share averaging method of reallocation to work (i.e., cash is transferred to the buffer demand
deposit account) and allows for a steady dollar outcome from the buffer demand deposit account, thus meeting the trader's needs. Thus, if a trader needs $X per day to meet his needs, be they living
expenses or otherwise, these can be satisfied without sabotaging the mathematics in the account by establishing and administering a buffer demand deposit account, and share averaging funds on a
periodic basis from the trading program to this buffer account. The trader then makes regular withdrawals of a constant dollar amount from this buffer account. Of course, the regular dollar
withdrawals must be for an amount less than the smallest amount transferred from the trading account to the buffer account. For example, if we are looking at a $500,000 account, we are withdrawing 1%
per month, and we start out with 20% initial active equity, then we know that our smallest withdrawal from the trading account will be .01*500,000*(1-.2) = .01*500,000*.8 = $4,000. Therefore, our
constant dollar withdrawal from the buffer account should be for an amount no greater than $4,000. The buffer account can also be the inactive subaccount. Before we come to the fourth asset
allocation technique, a certain confusion must be cleared up. With optimal fixed fractional trading, you can see that you add more and more contracts when your equity increases, and vice versa when
it decreases. This technique makes the greatest geometric growth of your equity in the long run.
WHY REALLOCATE? Reallocation seems to do just the opposite of what we want to do in that reallocation trims back after a runup in equity or adds more equity to the active portion after a period where
the equity has been run down. Reallocation is a compromise between the theoretical ideal and the reallife implementation. These techniques allow us to make the most of this compromise. Ideally, you
would never reallocate. When your humble little $10,000 account grew to $10 million, it would never go through reallocation. Ideally, you would sit through the drawdown that took your account back
down to $50,000 from the $10 million mark before it shot up to $20 million. Ideally, if your active equity were depleted down to 1 dollar, you would still be able to trade a fractional contract (a
"microcontract"?). In an ideal world, all of these things would be possible. In real life, you are going to reallocate at some point on the upside or the downside. Given that you are going to do
this, you might as well do it in a systematic, beneficial way. In reallocating, or compromising, you "reset" things back to a state you would be at if you were starting the program all over again,
only at a different equity level. Then you let the outcome of the trading dictate where the fraction off used floats to by using a dynamic fractional fin between reallocations. Things can get levered
up awfully fast, even when you start out with an active equity allocation of only 20%. Remember, you are using the full optimal f on this 20%, and if your program does modestly well, you'll be
trading in substantial quantities relative to the total equity in the account in short order.
PORTFOLIO INSURANCE – THE FOURTH REALLOCATION TECHNIQUE Assume for a moment that you are managing a stock fund. Figure 82 depicts a typical portfolio insurance strategy (also known as dynamic
hedging). The floor in this example is the current portfolio value of 100 (dollars per share). The typical portfolio follows the equity market 1 for 1. This is represented by the unbroken line. The
insured portfolio is depicted here by the dotted line. Note that the dotted line is below the unbroken line when the portfolio is at or above its initial value (100). This difference represents the
cost of the portfolio insurance. Otherwise, as the portfolio falls in value, portfolio insurance provides a floor on the value of the portfolio at a desired floor value (in this case the present
value of 100) minus the cost of performing the strategy. In a nutshell, portfolio insurance is akin to buying a put option on the portfolio. Suppose the fund you are managing consists of only 1
stock, which is currently priced at 100. Buying a put option on this stock, with a strike price of 100, at a cost of 10, would replicate the dotted line in Figure 8-2. The worst that could happen now
to your portfolio of 1 stock and a put option on it is that you could exercise the put, which sells your stock at 100, and you lose the value of the put, 10. Thus, the worst that this portfolio can
be worth is 90, no matter how far down the underlying stock goes. In a nutshell, portfolio insurance is akin to buying a put option on the portfolio. Suppose the fund you are managing consists of
only 1 stock, which is currently priced at 100. Buying a put option on this stock, with a strike price of 100, at a cost of 10, would replicate the dotted line in Figure 8-2. The worst that could
happen now to your portfolio of 1 stock and a put option on it is that you could exercise the put, which sells your stock at 100, and you lose the value of the put, 10. Thus, the worst that this
portfolio can be worth is 90, no matter how far down the underlying stock goes. On the upside, your insured portfolio suffers somewhat in that the value of the portfolio is always reduced by the cost
of the put.
Total portfolio value
Insured portfolio
Uninshured portfolio
80 90 100 110 120 Underlying portfolio value
Figure 8-2 Portfolio insurance. Clearly, looking at Figure 8-2 and considering the fundamental equation for trading, the estimated TWR of Equation (1.19c), you can intuitively see that an insured
portfolio is superior to an uninsured portfolio in an asymptotic sense. In other words, if you're only as smart as your dumbest mistake, you have put a floor on that dumbest mistake by portfolio
insurance. Now consider that being long a call option will give you the same profile as being long the underlying and long a put option with the same strike price and expiration date as the call
option. Here, when we speak of the same profile, we mean an equivalent position in terms of the risk/reward characteristics at different values for the underlying. Thus, the dotted line in Figure 8-2
can also represent a portfolio comprised of simply being long the 100 call option at expiration. Here is how dynamic hedging works to provide portfolio insurance. Suppose you buy 100 shares of a
single stock for your fund, at a price of $100 per share. You now replicate the call option by using this underlying stock. You do this by determining an initial floor for the stock. The floor you
choose is, say, 100. You also determine an expiration date for the hypothetical option you are going to create. Say the expiration date you choose is the date on which this quarter ends. Now you
figure the delta for this 100 call option with the chosen expiration date. You can use Equation (5.05) to find the delta of a call option on a stock (you can use the delta for whatever option model
you are using; we're using the Black-Scholes Stock Option Model here). Suppose the delta is .5. This means that you should be 50% invested in the given stock. You would thus have only 50 shares of
stock on rather than the 100 shares you would have on if you were not practicing portfolio insurance. As the value of the stock increases, so will the delta, and likewise the number of shares you
hold. The upside limit is a delta at 1, where you would be 100% invested. In our example, at a delta of 1 you would have on 100 shares. As the stock price decreases, so does the delta, and so does
the size of your position in the stock. The downside limit is at a delta of 0 (where the put delta is-1), at which point you wouldn't have any position in the stock.
Operationally, stock fund managers have used noninvasive methods of dynamic hedging. Such a technique involves not having to trade the cash portfolio. Rather, the portfolio as a whole is adjusted to
what the current delta should be as dictated by the model by using futures, and sometimes put options. One benefit of using futures is low transaction costs. Selling short futures against the
portfolio is equivalent to selling off part of the portfolio and putting it into cash. As the portfolio falls, more futures are sold, and as it rises, these short positions are covered. The loss to
the portfolio as it goes up and the short futures positions are covered is what accounts for the portfolio insurance cost, the cost of the replicated put options. Dynamic hedging, though, has the
benefit of allowing us to closely estimate this cost at the outset. To managers trying to implement such a strategy, it allows the portfolio to remain untouched while the appropriate asset allocation
shifts are performed through futures and/or options trades. This noninvasive technique of using futures and/or options permits the separation of asset allocation and active portfolio management. To
implement portfolio insurance, you must continuously adjust the portfolio to the appropriate delta. This means that, say each day, you must input into the option pricing model the current portfolio
value, time of expiration, interest rate levels, and portfolio volatility to determine the delta of the put option you are trying to replicate. Adding this delta (which is a number between 0 and -1)
to 1 will give you the corresponding call's delta. This is the hedge ratio, the percentage that you should be invested in the fund. You must make sure that you stay as close to this hedge ratio as
possible. Suppose your hedge ratio for the present moment is .46. Say that the size of the fund you are managing is the equivalent to 50 S&P futures contracts. Since you only want to be 46% invested,
you want to be 54% dis-invested. Fifty-four percent of 50 contracts is 27 contracts. Therefore, at the present price level of the fund, at this point in time, for the given interest rate and
volatility levels, the fund should be short 27 S&P contracts along with its long position in cash stocks. Because the delta needs to be recomputed on an ongoing basis, and portfolio adjustments
constantly monitored, the strategy is called a dynamic hedging strategy. One problem with using futures in the strategy is that the futures market does not exactly track the cash market. Further, the
portfolio you are selling futures against may not exactly follow the cash index upon which the futures market is traded. These tracking errors can add to the expense of a portfolio insurance program.
Furthermore, when the option being replicated gets very near to expiration and the portfolio value is near the strike price, the gamma of the replicated option goes up astronomically. Gamma is the
instantaneous rate of change of the delta or hedge ratio. In other words, gamma is the delta of the delta. If the delta is changing very fast (i.e., if the replicated option has a high gamma),
portfolio insurance becomes increasingly more cumbersome to perform. There are numerous ways to work around this problem, some of which are very sophisticated. One of the simplest involves not only
trying to match the delta of the replicated option, but using futures and options together to match both the delta and gamma of the replicated option. Again, this high gamma usually becomes a problem
only when expiration draws near and the portfolio value and the replicated option's strike price are very close. There is a very interesting relationship between optimal f and portfolio insurance.
When you enter a position, you can state that f percent of your funds are invested. For example, consider a gambling game in which your optimal f is .5, your biggest loss is -1, and your bankroll is
$10,000. In such a case, you would bet $1 for every $2 in your stake, since -1, the biggest loss, divided by -.5, the negative optimal f, is 2. Dividing $10,000 by 2 yields $5,000. You would
therefore bet $5,000 on the next bet, which is f percent, 50%, of your bankroll. Had you multiplied our bankroll of $10,000 by f, .5, you would have arrived at the same $5,000 result. Hence, you have
bet f percent of our bankroll. Likewise, if your biggest loss were $250 and everything else remained the same, you would be making 1 bet for every $500 in your bankroll (since -$250/-.5 = $500).
Dividing $10,000 by $500 means that you would make 20 bets. Since the most you can lose on any one bet is $250, you have thus risked f percent, 50% of our stake, in risking $5,000 ($250*20). We can
therefore state that f equals the percentage of our funds at risk, or f equals the hedge ratio. Since f is only applied on the active portion of our portfolio in a dynamic fractional f strategy, the
hedge ratio of the portfolio is:
(8.04a) H = f*A/E where H = The hedge ratio of the portfolio. f = The optimal f (0 to 1). A = The active portion of funds in an account. E = The total equity of the account. Equation (8.04a) gives us
the hedge ratio for a portfolio being traded on a dynamic fractional f strategy. Portfolio insurance is also at work in a static fractional f strategy, only the quotient A/E equals 1, and the value
for f, the optimal f, is multiplied by whatever value we are using for the fraction off. Thus, in a static fractional f strategy the hedge ratio is: (8.04b) H = f*FRAC where H = The hedge ratio of
the portfolio. f = The optimal f (0 to 1). FRAC = The fraction of optimal f that you are using. Since there is usually more than one market system working in an account, we must account for this.
When this is the case, the variable fin Equation (8.04a) or (8.04b) must be calculated as: (8.05) f = ∑[i = 1,N]fi*Wi where f = The f (0 to 1) to be input in Equation (8.04a) or (8.04b). N = The
total number of market systems in the portfolio. Wi = The weighting of the ith component in the portfolio (from the identity matrix). fi = The f factor (0 to 1) of the ith component in the portfolio.
We can state that in trading an account on a dynamic fractional f basis we are performing portfolio insurance. Here, the floor is equal to the initial inactive equity plus the cost of performing the
insurance. However, it is often simpler to refer to the floor of a dynamic fractional f strategy as simply the initial inactive equity of an account. We can state that Equation (8.04a) or (8.04b)
equals the delta of the call option of the terms used in portfolio insurance. Further, we find that this delta changes much the way a call option that is deep out-of-themoney and very far from
expiration changes. Thus, by using a constant inactive dollar amount, trading an account on a dynamic fractional f strategy is equivalent to owning a put option on the portfolio that is deep
in-the-money and very far out in time. Equivalently, we can state that trading a dynamic fractional f strategy is the same as owning a call option on the portfolio that doesn't expire for a very long
time and is very far out-of-the-money, rather than the portfolio itself. This quality, this relationship to portfolio insurance, is true for any dynamic fractional f strategy, whether we are using
share averaging, scenario planning, or investor utility. It is also possible to use portfolio insurance as a reallocation technique to "steer" performance somewhat. This steering may be analogous to
trying to steer a tanker with a rowboat oar, but this is a valid reallocation technique. The method involves setting parameters for the program initially. First you must determine a floor value. Once
this has been chosen, you must decide upon an expiration date, a volatility level, and other input parameters for the particular option model you intend to use. These inputs will give you the options
delta at any given point in time. Once the delta is known, you can determine what your active equity should be. Since the delta for the account, the variable H in Equation (8.04a), must equal the
delta for the call option being replicated, D, we can replace H in Equation (8.04a) with D: D = f*A/E Therefore: (8.06) D/f = A/E if D < f (otherwise A/E = 1) where D = The hedge ratio of the call
option being replicated. f = The f (0 to 1) from Equation (8.05). A = The active portion of funds in an account. E = The total equity of the account. Since A/E is equal to the percentage of active
equity, we can state that the percentage of the total account equity funds that we should have
in active equity is equal to the delta on the call option divided by the f determined in Equation (8.05). However, you will note that if D is greater than f, then it is suggesting that you allocate
greater than 100% of an account's equity as active. Since this is not possible, there is an upper limit of 100% of the account's equity that can be used as active equity. You can use Equation (5.05)
to find the delta of a call option on a stock, or Equation (5.08) to find the delta of a call option on a future. The problem with implementing portfolio insurance as a reallocation technique, as
detailed here, is that reallocation is taking place constantly. This detracts from the fact that a dynamic fractional f strategy will asymptotically dominate a static fractional f strategy. As a
result, trying to steer performance by way of portfolio insurance as a dynamic fractional f reallocation strategy probably isn't such a good idea. However, any time you use dynamic fractional f, you
are employing portfolio insurance. We now cover an example of portfolio insurance. Recall our geometric optimal portfolio of Toxico, Incubeast, and LA Garb. We found the geometric optimal portfolio
to exist at V = .2457. We must now convert this portfolio variance into the volatility input for the option pricing model. Recall that this input is described as the annualized standard deviation.
Equation (8.07) allows us to convert between the portfolio variance and the volatility estimate for an option on the portfolio: (8.07) OV = (V^.5)*ACTV*YEARDAYS^.5 where OV = The option volatility
input for an option on the portfolio. V = The variance on the portfolio. ACTV = The current active equity portion of the account. YEARDAYS = The number of market days in a year. If we assume a year
of 251 market days and an active equity percentage of 100% (1.00) for the sake of simplicity: OV = (.2457^.5)*1*251^.5 = .4956813493*15.84297952 = 7.853069464 This corresponds to a volatility of over
785%! Remember, this is the annualized volatility on the portfolio being traded at the optimal f level with 100% of the account designated as active equity. As a result, we are going to get very high
volatility readings. Since we are going to demonstrate portfolio insurance as a reallocation technique, we must use 1.00 as the value for ACTV. Equation (5.05) will give us the delta on a particular
call option as: (5.05) Call Delta = N(H) The H term in (5.05) is given by (5.03) as: (5.03) H = ln(U/(E*EXP(-R*T)))/(V*T^(1/2))+(V*T^(l/2))/2 U = The price of the underlying instrument. E = The
exercise price of the option. T = Decimal fraction of the year to expiration. V = The annual volatility in percent. R = The risk-free rate. ln() = The natural logarithm function. N() = The cumulative
Normal density function, as given in Equation (3.21). Notice that we are using the stock option pricing model here. We now use our answer for OV as the volatility input, V, in Equation (5.03). If we
assume the risk-free rate, R, to be 6% and the decimal fraction of the year left till expiration, T, to be .25, Equation (5.03) yields: H = ln(100/(100*EXP(.06*.25)))/(7.853069464*.25^.5)+
(7.853069464*.25^.5)/ 2 = ln(100/(100*EXP(-.015)))/(7.853069464*.5)+(7.853069464*.5)/2 = ln(100/(100*.9851119396))/(7.853069464*.5)+(7.853069464*.5)/2 = ln( 100/98.51119396) ∕ 3.926534732+3.926534732
/2 = ln( 1.015113065) ∕ 3.926534732+1.963267366 = .015 13.926534732+1.963267366 = .00382+1.963267366 = 1.967087528 This answer represents the H portion of (5.05). We must now run this through
Equation (3.21) as the Z variable to obtain the actual call delta:
(3.21) N(Z) = 1-N'(Z)*((1.330274429*Y^5)(1.821255978*Y^4)+(1.781477937*Y^3)(.356563782*Y^2)+(.31938153*Y)) where Y = 1/(1+.2316419*ABS(Z)) N'(Z) = .398942*EXP(-(Z^2/2)) Thus: Y = 1/ (1+.2316419*ABS
(1.967087528)) = 1/(1+ .4556598925) = 1/1.4556598925 = .6869736574 Now solving for the term N'( 1.967087528) N'(1.967087528) = .398942*EXP(-(1.967087528 ^ 2/2)) = .398942*EXP(-(3.869433343/2)) =
.398942*EXP(-1.934716672) = .398942*.1444651941 = .05763323346 Now, plugging the values for Y and N' (1.967087528) into (3.21) to obtain the actual call delta as given by Equation (5.05): N(Z) =
1-.05763323346*((1.330274429*.6869736574^5)(1.821255978*.6869736574^4)+(1.781477937*.6869736574^3)(.356563782*.6869736574^2)+(.31938153*.6869736574)) = 1-.05763323346*((1.330274429*.1530031)
(1.821255978*.2227205)+(1.781477937*.3242054)(.356563782*.4719328)+(.31938153*.6869736)) = 1-.05763323346*(.2035361115-.405631042+-5775647672.168274144+.2194066794) = 1-.05763323346*.4266023721 =
1-.02458647411 = .9754135259 Thus, we have a delta of .9754135259 on our hypothetical call option for a portfolio trading at a price of 100%, with a strike price of 100%, with .25 of a year left to
expiration, a risk-free rate of 6%, and a volatility on this portfolio of 785.3069464%. Now recall that the sum of the weights on this geometric optimal portfolio consisting of Toxico, Incubeast, and
LA Garb, per Equation (8.05), is 1.9185357. Thus, per Equation (8.06), we would reallocate to 50.84156244% (.9754135259/1.9185357) active equity if we were using portfolio insurance to reallocate.
"What is the cost of this insurance?" That depends upon the volatility that will actually be seen over the life of the replicated option. For instance, if the equity in the account were not to
fluctuate at all over the life of the replicated option (volatility equal to 0), the replicated option, the insurance, would cost us nothing. This is a great benefit to portfolio insurance versus
outright buying a put option (assuming one was available on our portfolio). We pay the actual theoretical price of the option for the volatility actually encountered, not the volatility perceived by
the marketplace before the fact, as would be the case with actually buying the put option. Further, actually buying the put option (again assuming one was available) entails a bid-ask spread that is
circumvented by replicating the option.
THE MARGIN CONSTRAINT Here is a problem that continuously crops up when we take any of the fixed fractional trading techniques out of its theoretical context and apply it in the real world. We have
seen that anytime an additional market system is added to the portfolio, so long as the linear correlation coefficient of daily equity changes between that market system and another market system in
the portfolio is less than +1, the portfolio is improved. That is to say that the geometric mean of daily HPRs is increased. Thus, it stands to reason that you would want to have as many market
systems as possible in a portfolio. Naturally, at some point, margin considerations become a problem. Even if you are trading only 1 market system, margin considerations can often be a problem.
Consider that the optimal fin dollars is very often less than the initial margin requirements for a given market. Now, depending on what fraction of f you are using at the moment, whether
you are using a static or dynamic fractional f strategy, you will encounter a margin call if the fraction is too high. When you trade a portfolio of market systems, the problem of a margin call
becomes even more likely. With an unconstrained portfolio, the sum of the weights is often considerably greater than 1. When you trade only 1 market system, the weight is, de facto, 1. If the sum of
the weights of a market system you are trading is, say, 3, then the likelihood of a margin call is 3 times as great as it would be if you were trading just 1 market. What is needed is a way to
reconcile how to create an optimal portfolio within the bounds of the margin requirements on the components in the portfolio. This can very easily be found. The way to accomplish this is to find what
fraction off you can use as an upper limit. This upper limit, U, is given by Equation (8.08) as: (8.08) U = ∑[i = 1,N]fi$/((∑[i = 1,N] margini$)*N) where U = The upside fraction of Ј At this
particular fraction off you are trading the optimal portfolio as aggressively as possible without incurring an initial margin call. fi$ = The optimal fs in dollars for the ith market system. margini$
= The initial margin requirement of the ith market system. N = The total number of market systems in the portfolio. If U is greater than 1, then use 1 as the answer for U. For instance, suppose we
have a portfolio with the three market systems as follows, with the following optimal fs in dollars for the three market systems and the following initial margin requirements. (Note: the f$ are the
optimal fs in dollars for each market system in the portfolio. This represents the market system's individual optimal f$ divided by its weighting in the portfolio): Market System A B C Sums
f$ $2,500 $2,000 $3,000 $7,500
Initial Margin $2,000 $2,000 $2,000 $6,000
Now, per Equation (8.08) we use the sum of the f$ column in the numerator, which is $7,500, and divide by the sum of the initial margin requirements, $6,000, times the number of markets, N, which is
3: U = $7,500/($6,000*3) = 7500/18,000 = .4167 Therefore, we can determine that, as an upside limit, our fraction off cannot exceed 41.67% in this case (that is, if we are employing a dynamic
fractional f strategy). Therefore, we must reallocate when our active equity divided by our total equity in the account equals or exceeds .4167. If, however, you are still employing a static
fractional f strategy (despite my protestations), then the highest you should set that fraction to is .4167. This will put you on the unconstrained geometric efficient frontier, to the left of the
optimal portfolio, but as far to the right as possible without encountering a margin call. To see this, suppose we have a $100,000 account. We set our fractional f values to a .4167 fraction of
optimal. Therefore for each market system: Market System A B C
f$ $2,500 $2,000 $3,000
1.4167 = New f$ $6,000 $4,600 $7,200
For a $100,000 account, we will trade 16 contracts of market system A (100,000/6,000), 20 contracts of market system B (100,000/4,800), and 13 contracts of market system C (100,000/7,200). The
resulting margin requirement for such a portfolio is: 16*$2,000 = $32,000 20*2,000 = 40,000 13*2,000 = 26,000 Initial margin requirement $96,000 Notice that using this formula (8.08) yields the
highest fraction for f (without incurring an initial margin call) that gives you the same ratios of the different market systems to one another. Hence, Equation (8.08) returns the unconstrained
optimal portfolio at its least diluted state without incurring an initial margin call. Notice in the previously cited example that if you are trading a fractional f strategy, the value returned from
Equation (8.08) is the maximum fraction for f you can get to without incurring an initial margin call. Again consider a $100,000 account. Assume that at one time, when
you opened this account, it had $70,000 in it. Further assume that of that initial $70,000 you allocated $58,330 as inactive equity. Thus, you initially started out at a roughly 83:17 percentage
split between inactive and active equity. You have traded the active portion at the full optimal f values. Now your account stands at $100,000. You still have $58,330 as inactive equity, therefore
your active equity is $41,670, which is .4167 of your total equity. This should now be the maximum fraction you can use, the maximum ratio of active to total equity, without incurring a margin call.
Recall that you are trading at the full f levels. Therefore, you will trade 16 contracts of market system A (41,670/2,500), 20 contracts of market system B (41,670/2,000), and 13 contracts of market
system C (41,670/3,000). The resultant margin requirement for such a portfolio is: 16*$2,000 = $32,000 20*2,000 = 40,000 13* 2,000 = 26,000 Initial margin requirement $96,000 Again we can see that
this is pushing it as much as possible without incurring a margin call, since we have $100,000 total equity in the account. Recall from Chapter 2 the fact that adding more and more market systems
results in higher and higher geometric means for the portfolio as a whole. However, there is a tradeoff in that each market system adds marginally less benefit to the geometric mean, but marginally
more detriment in the way of efficiency loss due to simultaneous rather than sequential outcomes. Therefore, you do not want to trade an infinite number of market systems. What's more, theoretically
optimal portfolios run into the real-life application problem of margin constraints. In other words, you are better off to trade 3 market systems at the full optimal f levels than to trade 300 market
systems at dramatically reduced levels as a result of Equation (8.08). Usually, you will find that the optimal number of market systems to trade in, particularly when you have many orders to place
and the potential for mistakes, is but a handful. If one or more market systems in the portfolio have optimal weightings greater than 1, a potential problem emerges. For example, assume a market
system with an optimal f of .8 and a biggest loss of $4,000. Therefore, f$ is $5,000. Let's suppose the optimal weighting for this component of the portfolio is 1.25. Therefore you will trade one
unit of this component for every $4,000 ($5,000/1.25) in account equity. As you can see, as soon as the component sees its largest loss, all of the active equity in the account will be wiped out
(unless profits are sufficient in the other market systems to salvage some active equity). This problem tends to crop up for systems that trade infrequently. For example, recall that if we could have
two market systems with perfect negative correlation and a positive expectation, we would optimally have on an infinite number of contracts. When one of the components lost, the other would win an
equal or greater amount. Thus, we would always have a net profit on each play. However, these market systems are always having a simultaneous play. The situation being discussed is analogous to this
hypothetical situation when one of these components is not active on a certain play. Now there's only one market system active on a given play, and that market system has on an infinite number of
contracts. A loss is catastrophic. The solution is to divide 1 by the highest weighting of any of the components in the portfolio and use the answer as the upper limit on active equity if the answer
is less than the answer to Equation (8.08). This ensures that if a loss is encountered in the future of the same magnitude as the largest loss over which f was derived, it will not wipe out the
account. For example, suppose the highest weighting of any component in our portfolio is 1.25. Then if Equation (8.08) does not give us an answer less than .8 (1/1.25), we will use .8 as our upper
limit on our active equity percentage. This is unlikely to be a problem if you start with a low active equity percentage. However, a more aggressive trader may encounter this problem. An alternative
solution is to set additional constraints in the portfolio matrix (such as constraints on the maximum weighting for each market system being set to 1, as well as constraints pertaining to margin).
These additional linear programming constraints may be slightly beneficial to the aggressive trader, but the matrix solutions can be involved. Interested readers are again referred to Childress.
ROTATING MARKETS Many traders use systems or techniques that have them monitoring many markets all the time, filtering for what they feel are the best mar-
kets for the systems at the moment. For example, some traders may prefer to monitor the volatility in all of the futures markets and trade only those markets whose volatility exceeds a certain
amount. Sometimes they will be in many markets, sometimes they won't be in any. Further, the markets that they are in are constantly changing. This changing composition seems to be particularly a
problem for stock fund managers. How can we manage such a thing and still be at the optimal portfolio? The solution is really quite simple. Anytime a market is added or deleted from the portfolio,
the new unconstrained geometric optimal portfolio is calculated as detailed in this chapter. Any adjustments to existing positions in terms of the quantity that should be on in light of the newly
added or deleted market system ought to be made as well. In a nutshell, it is alright to have a constantly changing portfolio in terms of components. The goal for the manager of such a portfolio,
however, is to have the portfolio always be the unconstrained geometric optimal of the components involved and to keep the inactive equity amount constant. In so doing, a constantly changing
portfolio composition can be managed in a manner that is asymptotically optimal. There is a potential problem with this type of trading from a portfolio standpoint. An example may help illustrate.
Imagine two highly correlated markets, such as gold and silver. Now imagine that your system trades so infrequently that you have never had a position in both of these markets on the same day. When
you determine the correlation coefficients of the daily equity changes, it is quite possible that the correlation coefficient you will show between gold and silver is 0. However, if in the future you
have a trade in both markets simultaneously, you can expect them to have a high positive correlation. To solve this problem, it is helpful to edit your correlation coefficients with an eye toward
this type of situation. In short, don't be afraid to edit the correlation coefficients upward. However, be wary of moving them lower. Suppose you show the correlation coefficient between Bonds and
Soybeans as 0, but you feel it should be lower, say -.25. You really should not adjust correlation coefficients lower, as lower correlation coefficients tend to have you increase position size. In
short, if you're going to err in the correlation coefficients, err by moving them upward rather than downward. Moving them upward will tend to move the portfolio to the left of the peak of the
portfolio's f curve, while moving correlation coefficients lower will tend to move you to the right of the portfolio's f curve. Often people try to filter trades in a manner as to have them in a
particular market during certain times and out at others in an attempt to lower drawdown. If the filtering technique works, if it lowers drawdown on a one-unit basis, then the f that is optimal for
the filtered trades will be higher (and f$ lower) than for the entire series of trades before filtering. If the trader applies the optimal f over the entire prefiltered series to the postfiltered
series, she will find herself at a fractional f on the postfiltered series and hence cannot be obtaining a geometric optimal portfolio. On the other hand, if the trader applies the optimal f on the
postfiltered series, she can obtain the geometric optimal portfolio, but she is right back to the problem of impending large drawdowns at optimal f. She seems to have defeated the purpose of her
filter. This illustrates the fallacy of filters from a money-management standpoint. Filters might work (reduce drawdown on a one-unit basis) only because they cause the trader to be at a fraction of
the optimal f. Why filter at all? We could state that we benefit by filtering if our answer to the fundamental equation of trading on postfiltered trades at the prefiltered optimal f is greater than
the answer to the fundamental equation of trading on prefiltered trades at the prefiltered optimal f. It is important to note when making such a comparison that the postfiltered trades are less in
number (have lower N) than the prefiltered trades.
TO SUMMARIZE We have seen that trading on a fixed fractional basis makes the most money in an asymptotic sense. It maximizes the ratio of potential gain to potential loss. Once we have an optimal f
value we can convert our daily equity changes on a 1-unit basis to an HPR, we can determine the arithmetic average HPR and standard deviation in those HPRs, and we can calculate the correlation
coefficient of the HPRs between any two market systems, We can then use these parameters as inputs in determining the optimal weightings for an optimal portfolio. (Since we are using leveraged
vehicles, weighting and quantity are not synonymous, as they would be if there was no leverage involved.) These weightings then are
reflected back into the f values, the amount we should finance each contract by, as the f values are divided by their respective weightings. This gives us new f values, which result in the greatest
geometric growth with respect to the intercorrelations of the other market systems and their weightings. The greatest geometric growth is obtained by using that set of weightings whose sum is
unconstrained and whose arithmetic average HPR minus its standard deviation in HPRs squared (its variance) equals 1 [Equation (7.06c)]. Rather than being diluted (which only puts you farther left on
the unconstrained efficient frontier), as is the case with a static fractional f strategy, this portfolio is traded full out with only a fraction of the funds in the account. Such a technique is
called a dynamic fractional f strategy. The remaining funds, the inactive equity, are left untouched by the activity that goes on in these active funds. Since this active portion is being traded at
the optimal levels, fluctuations in this active equity will be swift. As a result, at some point on the upside or downside in the equity fluctuations, or at some point in time, you will likely find
it necessary, even if only from an emotional standpoint, to reallocate funds between the active and inactive portions. Four methods of doing so have been explained, although other, possibly better,
methods may exist: 1. Investor Utility. 2. Scenario Planning. 3. Share Averaging. 4. Portfolio Insurance. The fourth method, portfolio insurance or dynamic hedging, is inherent in any dynamic
fractional f strategy, but it can also be utilized as a reallocation method. We have further seen that to take the unconstrained geometric optimal portfolio and apply it in real time will most likely
encounter a problem in terms of the initial margin requirements. This problem can be alleviated by determining an upper level limit for the ratio of active equity to total account equity.
APPLICATION TO STOCK TRADING The techniques that have been described in this book apply not only to futures traders, but to traders in any market. Even someone trading a portfolio of only blue chip
stocks is not immune from the principles and the consequences discussed in this book. You have seen that such a portfolio of blue chip stocks has an optimal level of leverage where the ratio of
potential gains to potential losses in equity are maximized. At such a level, the drawdowns to be expected arc also quite severe, and therefore the portfolio ought to be diluted, preferably by way of
a dynamic fractional f strategy. The entire procedure can be performed exactly as though the stock being traded were a commodity market system. For instance, suppose Toxico were trading at $40 per
share. The cost of 100 shares of Toxico would be $4,000. This 100-share block of Toxico can be treated as 1 contract of the Toxico market system. Thus, if we were operating in a cash account, we
could replace the margini$ variable in Equation (8.08) with the value of 100 shares of Toxico ($4,000 in this example). In so doing, we can determine the upper limit on the fraction of f to use such
that we never have to even perform the procedure in a margin account. When you are doing this type of exercise, remember that you are replicating a leveraged situation, but there isn't really any
borrowing or lending going on. Therefore, you should use an RFR of 0 in any calculations (such as the Sharpe ratio) that require an RFR. On the other hand, if we perform the procedure in a margin
account, and if initial margin levels are, say, 50%, then we would use a value of $2,000 for the margini$ variable for Toxico in (8.08). Traditionally, stock fund managers have used portfolios where
the sum of the weights is constrained to 1. Then they opt for that portfolio composition which gives the lowest variance for a given level of arithmetic return. The resultant portfolio composition is
expressed in the form of the weights, or percentages of the trading account, to apply to each component of the portfolio. By lifting this sum of the weights constraint and opting for the single
portfolio that is geometric optimal, we get the optimal leveraged portfolio. Here, the weights and quantities are completely different. We now divide the optimal amount to finance I unit of each
component by
its respective weighting; the result is the optimal leverage for each component in the portfolio. Now, we can dilute this portfolio down by marrying it to the risk-free asset. We can dilute the
portfolio to the point where there really isn't any leverage involved. That is, we are leveraging the active equity portion of the portfolio but the active equity portion is actually borrowing its
own money, interest-free, from the inactive equity portion. The result is a portfolio and a method of adding to and trimming back from positions as the equity in the account changes that will result
in the greatest geometric growth. As such a method maximizes the potential geometric growth to the potential loss and allows for the maximum loss acceptable to be essentially specified at the outset,
it can also be argued to be a superior means of managing a stock portfolio. The current generally accepted procedure for determining the efficient frontier will not really yield the efficient
frontier, much less the portfolio that is geometric optimal (the geometric optimal portfolio always lies on the efficient frontier). This can be derived only by incorporating the optimal f. Further,
the generally accepted procedure yields a portfolio that gets traded on a static f basis rather than on a dynamic basis, the latter being asymptotically infinitely more powerful.
A CLOSING COMMENT This is a very exciting time to be in this field, New concepts have been emerging nearly continuously since the mid 1950s. We have witnessed an avalanche of great ideas from the
academic community building upon the E-V model. Among the ideas presented has been the E-S model. With the E-S model the measure of risk is semivariance in lieu of variance.1 Semivariance is defined
as the variation beneath some target level of return, which could be the expected return, zero return, or any other fixed level of return. When this target level of return equals the expected return
and the distribution of returns is symmetrical (without skew), the E-S efficient frontier is the same as the E-V efficient frontier. Other portfolio models have been presented using other measures
for risk than variance in returns. Still other portfolio models have been presented using moments of the distribution of returns beyond the first two moments. Of particular interest in this regard
have been the stochastic dominance approaches, which encompass the entire distribution of returns and hence can be considered the limiting case of multidimensional portfolio analysis as the number of
moments incorporated approaches infinity.2 This approach may be particularly useful when the variance in returns is infinite or undefined. Again, I am not a so-called academic. This is neither a
boast nor an apology. I am no more an academic than I am a ventriloquist or a TV wrestler. Academics want a model to explain how the markets work. As a nonacademic, I don't care how they work. For
example, many people in the academic community argue that the efficient market hypothesis is flawed because there is no such thing as a rational investor. They argue that people do not behave
rationally, and therefore conventional portfolio models, such as E-V theory (and its offshoots) and the Capital Asset Pricing model, are poor models of how the markets operate. While I agree that
people certainly do not behave rationally, it does not mean that we shouldn't behave rationally or that we cannot benefit by behaving rationally. When variance in returns is finite, we can certainly
benefit by being on the efficient frontier. There has been much debate in recent years over the usefulness of current portfolio models in light of the fact that the distribution of the logs of price
changes appear to be stable Paretian with infinite (or undefined) variance. Yet many studies demonstrate that the markets in recent years have seen a move toward Normality (therefore finite variance)
and independence, which the portfolio models being criticized assume.3 Fur1
Markowitz, Harry, Portfolio Selection: Efficient Diversification of Investments. New York: John Wiley, 1959. 2 See Quirk, J, P., and R. Saposnik, "Admissibility and Measurable Utility Functions,"
Review of Economic Studies, 29(79):140-146, February 1962. Also see Reilly, Frank K, Investment Analysis and Portfolio Management. Hinsdale, IL: The Dryden Press, 1979. 3 See Helms, Billy P., and
Terrence F. Martell, "An Examination of the Distribution of Commodity Price Changes," Working Paper Series. New York: Columbia University Center for the Study of Futures Markets, CFSM-76, April 1984.
Also see Hudson, Michael A., Raymond M. Leuthold, and Cboroton F. Sarassorro, "Commodity Futures Price Changes: Distribution, Market Efficiency, and Pricing Commodity Options," Working Paper Series,
New York: Columbia University Center for the Study of Futures Markets, CFSM-127, June 1986.
ther, the portfolio models use the distribution of returns as input, not the distribution of the logs of price changes. Whereas the distribution of returns is a transformed distribution of the logs
of price changes (transformed by techniques such as cutting losses short and letting profits run), they are not necessarily the same distribution, and the distribution of returns may not be a member
of the stable Paretian (which is why we modeled the distribution of trade P&L's in Chapter 4 with our adjustable distribution). Furthermore, there are derivative products such as options that have
finite semivariance (if long) or finite variance altogether. For example, a vertical option spread put on at a debit guarantees finite variance in returns. I'm not defending against the attacks on
the current portfolio models. Rather, I am playing devil's advocate here. The current portfolio models can be employed provided we are aware of their shortcomings. We no doubt need better portfolio
models. It is not my contention that the current portfolio models are adequate. Rather, it is my contention that the input to the portfolio models, current and future for whatever portfolio models we
use, should be based on trading one unit at the optimal level-or what we believe will be the optimal level for that item in the future, as though we were trading only that item. For example, if we
are employing E-V theory, the Markowitz model, the inputs are the expected return, variance in returns, and correlation of returns to other market systems. These inputs must be determined from
trading one unit on each market system at the optimal f level. Portfolio models other than E-V may require different input parameters. These parameters must be discerned based on trading one unit of
the market systems at their optimal f levels. Portfolio models are but one facet of money management, but they are a facet where debate is certain to rage for quite some time. This book could not be
definitive in that regard, as newer, better models are yet to be formulated. We most likely will never have a model we all agree upon as being adequate. That should make for a healthy and stimulating
APPENDIX A - The Chi-Square Test There exist a number of statistical tests designed to determine if two samples come from the same population. Essentially, we want to know if two distributions are
different. Perhaps the most well known of these tests is the chi-square test, devised by Karl Pearson around 1900. It is perhaps the most popular of all statistical tests used to determine whether
two distributions are different. The chi-square statistic, X2, is computed as: (A.01) X2 =>[i = 1,N](Oi-Ei)^2/Ei where N = The total number of bins. Oi = The number of events observed in the ith bin.
Ei = The number of events expected in the ith bin. A large value for the chisquare statistic indicates that it is unlikely that the two distributions are the same (i.e., the two samples are not drawn
from the same population). Likewise, the smaller the value for the chi-square statistic, the more likely it is that the two distributions are the same (i.e., the two samples were drawn from the same
population). Note that the observed values, the Oi's, will always be integers. However, the expected values, the Ei's, can be nonintegers. Equation (A.01) gives the &i-square statistic when both the
expected and observed values are integers. When the expected values, the Ei's, are permitted to be nonintegers, we must use a different equation, known as Yates' correction, to find the chi-square
statistic: (A.02) X2 = ∑[i = 1,N] (ABS(OiEi)-.5)^2/Ei where N = The total number of bins. Oi = The number of events observed in the ith bin. Ei = The number of events expected in the ith bin. ABS()
-The absolute value function. If we are comparing the number of events observed in a bin to what the Normal Distribution dictates should be in that bin, we must employ Yates' correction. That is
because the number of
events expected,1 the Ei's, are nonintegers. We now work through an example of the chi-square statistic for the data corresponding to Figure 3-16. This is the 232 trades, converted to standard units,
placed in 10 bins from -2 to +2 sigma, and plotted versus what the data would be if it were Normally distributed. Note that we must use Yates' correction: Bin# 1 2 3 4 5 6 7 8 9 10
Observed Expected 7.435423 17 13.98273 25 22.45426 27 30.79172 38 36.05795 61 36.078 37 30.7917 12 22.45426 4 13.98273 2 7.435423
((ABS(O-E)-.5)^2 4.738029 .4531787 .1863813 .3518931 .05767105 16.56843 1.058229 4.41285 6.430941 3.275994 X2=37.5336
We can convert a chi-square statistic such as 37.5336 to a significance level. In the sense we are using here, a significance level is a number between 0, representing that the two distributions are
different, and 1, meaning that the two distributions are the same. We can never be 100% certain that two distributions are the same (or different), but we can determine how alike or different two
distributions are to a certain significance level. There are two ways in which we can find the significance level. This first and by far the simplest way is by using tables. The second way to convert
a chi-square statistic to a significance level is to perform the math yourself (which is how the tables were drawn up in the first place). However, the math requires the use of incomplete gamma
functions, which, as was mentioned in the Introduction, will not be treated in this text. Interested readers are referred to the Bibliography, in particular to Numerical Recipes. However, most
readers who would want to know how to calculate a significance level from a given chi-square statistic would want to know this because tables are rather awkward to use from a programming standpoint.
Therefore, what follows is a snippet of BASIC language code to convert from a given chisquare statistic to a significance level. 1000 REM INPUT NOBINS%, THE NUMBER OF BINS AND 1
As detailed in Chapter 3, this is determined by the Normal Distribution per Equation (3.21) for each boundary of the bin, taking the absolute value of the differences, and multiplying by the total
number of events.
CHISQ, THE CHI-SQUARE STATISTIC 1010 REM OUTPUT IS CONF, THE CONFIDENCE LEVEL FOR A GIVEN NOBINS% AND CHISQ 1020 PRINT "CHI SQUARE STATISTIC AT"NOBINS%3"DEGREES FREEDOM IS"CHISQ 1030 REM HERE WE
CONVERT FROM A GIVEN CHISQR TO A SIGNIFICANCE LEVEL, CONF 1040 XI = 0:X2 = 0:X3# = 0:X4 = 0:X5 = 0:X6 = 0:CONF = 0 1050 IF CHISQ < 31 OR (NOBINS%-3) > 2 THEN X6 = (NOBINS%-3)/2-1 :X1 = 1 ELSE CONF =
1 :GOTO 1110 1060 FOR X2 = 1 TO ((NOBINS%-3)/2-.5):X1 = XI*X6:X6 = X6-1: NEXT 1070 IF (NOBINS%-3) MOD 2 <> 0 THEN X1 = X 1*1.77245374942627# 1080 X7 = 1:X4 = 1:X3# = ((CHISQ/2)*((NOBINS%3)/2))*2/(EXP
(CHISQ/2) * XI*(NOBINS%-3)):X5 = NOBINS% -3+2 1090 X4 = X4*CHISQ/X5:X7 = X7+X4:X5 = X5+2:IF X4> 0 THEN 1090 1100 CONF = 1-X3#*X7 1110 PRINT "FOR A SIGNIFICANCE LEVEL OF ";USING".#########";CONF
Whether you determine your significance levels via a table or calculate them yourself, you will need two parameters to determine a significance level. The first of these parameters is, of course, the
chi-square statistic itself. The second is the number of degrees of freedom Generally, the number of degrees of freedom is equal to the number of bins minus 1 minus the number of population
parameters that have to be estimated for the sample statistics. Since there are ten bins in our example and we must use the arithmetic mean and standard deviation of the sample to construct the
Normal curve, we must therefore subtract 3 degrees of freedom. Hence, we have 7 degrees of freedom. The significance level of a chi-square statistic of 37.5336 at 7 degrees of freedom is .000002419,
Since this significance level is so much closer to zero than one, we can safely assume that our 232 trades from
Chapter 3 are not Normally distributed. What follows is a small table for converting between chisquare values and degrees of freedom to significance levels. More elaborate tables may be found in many
of the statistics books mentioned in the Bibliography: VALUES OF X2 Degrees of Significance Level Freedom .20 .10 .05 .01 1 1.6 2.7 3.8 6.6 2 3.2 4.6 6.0 9.2 3 4.6 6.3 7.8 11.3 4 6.0 7.8 9.5 13.3 5
7.3 9.2 11.1 15.1 10 13.4 16.0 18.3 23.2 20 25.0 28.4 31.4 37.6
You should be aware that the chi-square test can do a lot more than is presented here. For instance, you can use the chi-square test on a 2 x 2 contingency table (actually on any N x M contingency
table). If you are interested in learning more about the chisquare test on such a table, consult one of the statistics books mentioned in the Bibliography. Finally, there is the problem of the
arbitrary way we have chosen our bins as regards both their number and their range. Recall that binning data involves a certain loss of information about that data, but generally the profile of the
distribution remains relatively the same. If we choose to work with only 3 bins, or if we choose to work with 30, we will likely get somewhat different results. It is often a helpful exercise to bin
your data in several different ways when conducting statistical tests that rely on binned data. In so doing, you can be rather certain that the results obtained were not due solely to the arbitrary
nature of how you chose your bins. In a purely statistical sense, in order for our number of degrees of freedom to be valid, it is necessary that the number of elements in each of the expected bins,
the Ei's, be at least five. When there is a bin with less than five expected elements in it, theoretically the number of bins should be reduced until all of the bins have at least five expected
elements in them. Often, when only the lowest and/or highest bin has less than 5 expected elements in it, the adjustment can be made by making these groups "all less than" and "all greater than"
APPENDIX B Other Common Distributions
parametrically to other fields where there are such environments. For this reason this appendix has been included.
This appendix covers many of the other common distributions aside from the Normal. This text has shown how to find the optimal f and its by-products on any distribution. We have seen in Chapter 3 how
to find the optimal f and its by-products on the Normal distribution. We can use the same technique to find the optimal f on any other distribution where the cumulative density function is known. It
matters not whether the distribution is continuous or discrete. When the distribution is discrete, the equally spaced data points are simply the discrete points along the cumulative density curve
itself. When the distribution is continuous, we must contrive these equally spaced data points as we did with the Normal Distribution in Chapter 3. Further, it matters not whether the tails of the
distribution go out to plus and minus infinity or are bounded at some finite number. When the tails go to plus and minus infinity we must determine the bounding parameters (i.e., how far to the left
extreme and right extreme we are going to operate on the distribution). The farther out we go, the more accurate our results. If the distribution is bounded on its tails at some finite point already,
then these points become the bounding parameters. Finally, in Chapter 4 we learned a technique to find the optimal f and its by-products for the area under any curve (not necessarily just our
adjustable distribution) when we do not know the cumulative density function, so we can find the optimal f and it's by products for any process regardless of the distribution. The hardest part is
determining what the distribution in question is for a particular process, what the cumulative density function is for that process, and what parameter value(s) are best for our application. One of
the many hearts of this book is the broader concept of decision making in environments characterized by geometric consequences. Optimal f is the regulator of growth in such environments, and the
by-products of optimal f tell us a great deal about the growth rate of a given environment. You may seek to apply the tools for finding the optimal f
THE UNIFORM DISTRIBUTION The Uniform Distribution, sometimes referred to as the Rectangular Distribution from its shape, occurs when all items in a population have equal frequency. A good example is
the 10 digits 0 through 9. If we were to randomly select one of these digits, each possible selection has an equal chance of occurrence. Thus, the Uniform Distribution is used to model truly random
events. A particular type of uniform distribution where A = 0 and B = 1 is called the Standard Uniform Distribution, and it is used extensively in generating random numbers. The Uniform Distribution
is a continuous distribution. The probability density function, N'(X), is described as: (B.01) N'(X) = 1/(B-A) for A<= X<= B else N'(X) = 0 where B = The rightmost limit of the interval AB. A = The
leftmost limit of the interval AB. The cumulative density of the Uniform is given by: (B.02) N(X) = 0 for XB where B = The rightmost limit of the interval AB. A = The leftmost limit of the interval
1 0.8 0.6 0.4 0.2 0
Figure B-1 Probability density functions for the Uniform Distribution (A = 2, B = 7).
Figure B-2 Cumulative probability functions for the Uniform Distribution (A = 2, B = 7). Figures B-1 and B-2 illustrate the probability density and cumulative probability (i.e., cdf) respectively of
the Uniform Distribution. Other qualities of the Uniform Distribution are: (B.03) Mean = (A+B)/2 (B.04) Variance = (B-A)^2/12 where B = The rightmost limit of the interval AB. A = The leftmost limit
of the interval AB.
THE BERNOULI DISTRIBUTION Another simple, common distribution is the Bernoulli Distribution. This is the distribution when the random variable can have only two possible values. Examples of this are
heads and tails, defective and nondefective articles, success or failure, hit or miss, and so on. Hence, we say that the Bernoulli Distribution is a discrete distribution (as opposed to being a
continuous distribution). The distribution is completely described by one parameter, P, which is the probability of the first event occurring. The variance in the Bernoulli is: (B.05) Variance = P*Q
where (B.06) Q = P-1
Figure B-3 Probability density functions for the Bernoulli Distribution (P = .5).
1 0.8 0.6
0.2 0
(B.08a) X! = X*(X-l)*(X-2)*...*1 which can be also written as: (B.08b) X! = ∏[J = 0,X-1]X-J Further, by convention: (B.08c) 0! = 1 The cumulative density function for the Binomial is: (B.09) N(X) = ∑
[J = 0,X] (N!/(J!*(N-J)!))*(P^J)*(Q^(N -J)) where N = The number of trials. X = The number of successes. P = The probability of a success on a single trial. Q = 1-P. 1
Figure B-4 Cumulative probability functions for the Bernoulli Distribution (P = .5). Figures B-3 and B-4 illustrate the probability density and cumulative probability (i.e., cdf) respectively of the
Bernoulli Distribution.
THE BINOMIAL DISTRIBUTION The Binomial Distribution arises naturally when sampling from a Bernoulli Distribution. The probability density function, N'(X), of the Binomial (the probability of X
successes in N trials or X defects in N items or X heads in N coin tosses, etc.) is: (B.07) N'(X) = (N!/(X!*(NX)!))*(P^X)*(Q^(N-X)) where N = The number of trials. X = The number of successes. P =
The probability of a success on a single trial. Q = 1-P. It should be noted here that the exclamation point after a variable denotes the factorial function:
Figure B-5 Probability density functions for the Binomial Distribution (N = 5, P = .5).
1 0.8 0.6 0.4 0.2 0
Figure B-6 Cumulative probability functions for the Binomial Distribution (N = 5, P = .5). Figures B-5 and B-6 illustrate the probability density and cumulative probability (i.e., cdf) respectively
of the Binomial Distribution. The Binomial is also a discrete distribution. Other properties of the Binomial Distribution are: (B.10) Mean = N*P
(B.11) Variance = N*P*Q where N = The number of trials. P = The probability of a success on a single trial. Q = 1-P. As N becomes large, the Binomial tends to the Normal Distribution, with the Normal
being the limiting form of the Binomial. Generally, if N*P and N*Q are both greater than 5, you could use the Normal in lieu of the Binomial as an approximation. The Binomial Distribution is often
used to Statistically validate a gambling system. An example will illustrate. Suppose we have a gambling system that has won 51% Of the time. We want to determine what the winning percentage would be
if it performs in the future at a level of 3 standard deviations worse. Thus, the variable of interest here, X, is equal to .51, the probability of a winning trade. The variable of interest need not
always be for the probability of a win. It can be the probability of an event being in one of two mutually exclusive groups. We can now perform the first necessary equation in the test: (B.12) L =
P-Z*((P*(1-P))/(N1))^.5 where 3 4 5 L = The lower boundary for P to be at Z standard deviations. P = The variable of interest representing the probability of being in one of two mutually exclusive
groups. Z = The selected number of standard deviations. N = The total number of events in the sample. Suppose our sample consisted of 100 plays. Thus: L = .51-3*((.51*(1-.51))/(1001))^.5 = .51-3*
((.51*.49)/99)^.5 = .51-3*(.2499/99)^.5 = .51-3*.0025242424^.5 = .51-3*.05024183938 = .51-.1507255181 = .3592744819 3 4 5 Based on our history of 100 plays which generated a 51% win rate, we can
state that it would take a 3-sigma event for the population of plays (the future if we play an infinite number of times into the future) to have less than 35.92744819 percent winners. What kind of a
confidence level does this represent? That is a function of N, the total number of plays in the sample. We can determine the confidence level of achieving 35 or 36 wins in 100 tosses by Equation
(B.09). However, (B.09) is clumsy to work with as N gets large because of all
of the factorial functions in (B.09). Fortunately, the Normal distribution, Equation (3.21) for 1-tailed probabilities, can be used as a very close approximation for the Binomial probabilities. In
the case of our example, using Equation (3.21), 3 standard deviations translates into a 99.865% confidence. Thus, if we were to play this gambling system over an infinite number of times, we could be
99.865% sure that the percentage of wins would be greater than or equal to 35.92744819%. This technique can also be used for statistical validation of trading systems. However, this method is only
valid when the following assumptions are true. First, the N events (trades) are all independent and randomly selected. This can easily be verified for any trading system. Second, the N events
(trades) can all be classified into two mutually exclusive groups (wins and losses, trades greater than or less than the median trade, etc.). This assumption, too, can easily be satisfied. The third
assumption is that the probability of an event being classified into one of the two mutually exclusive groups is constant from one event to the next. This is not necessarily true in trading, and the
technique becomes inaccurate to the degree that this assumption is false, Be that as it may, the technique still can have value for traders. Not only can it be used to determine the confidence level
for a certain method being profitable, the technique can also be used to determine the confidence level for a given market indicator. For instance, if you have an indicator that will forecast the
direction of the next day's close, you then have two mutually exclusive groups: correct forecasts, and incorrect forecasts. You can now express the reliability of your indicator to a certain
confidence level. This technique can also be used to discern how many trials are necessary for a system to be profitable to a given confidence level. For example, suppose we have a gambling system
that wins 51% of the time on a game that pays 1 to 1. We want to know how many trials we must observe to be certain to a given confidence level that the system will be profitable in an asymptotic
sense. Thus we can restate the problem as, "If the system wins 51% of the time, how many trials must I witness, and have it show a 51% win rate, to know that it will be prof-
itable to a given confidence level?" Since the payoff is 1:1, the system must win in excess of 50% of the time to be considered profitable. Let's say we want the given confidence level to again be
99.865, or 3 standard deviations (although we are using 3 standard deviations in this discussion, we aren't restricted to that amount; we can use any number of standard deviations that we want). How
many trials must we now witness to be 99.865% confident that at least 51% of the trials will be winners? If .51-X = .5, then X = .01, Therefore, the right factors of Equation (B.12), Z*((P*(1P))/
(N-1))^.5, must equal .01. Since Z = 3 in this case, and .01/3 = .0033, then: ((P*(1-P))/(N-1))^.5 = .0033 We know that P equals .51, thus: ((.51*(1-.51))/(N-1))^.5 = .0033 Squaring both sides gives
us: ((.51*(l-.51))/(N-1)) = .00001111 To continue: (.51*.49)/(N-1) = .00001111 .2499/(N-1) = .00001111 .2499/.00001111 = N-1 .2499/.00001111+1 = N 22,491+1 = N N = 22,492 Thus, we need to witness a
51% win rate over 22,492 trials to be 99.865% certain that we will see at least 51% wins.
THE GEOMETRIC DISTRIBUTION Like the Binomial, the Geometric Distribution, also a discrete distribution, occurs as a result of N independent Bernoulli trials. The Geometric Distribution measures the
number of trials before the first success (or failure). The probability density function, N'(X), is: (B.13) N'(X) = Q ^ (X- 1)*P where P = The probability of success for a given trial. Q = The
probability of failure for a given trial. In other words, N'(X) here measures the number of trials until the first success. The cumulative density function for the Geometric is therefore: (B.14) N(X)
= ∑[J = 1,X] Q^(J1)*P where P = The probability of success for a given trial.
Q = The probability of failure for a given trial.
1 0.8
took until a 5 appeared, plotting these results would yield the Geometric Distribution function formulated in (B.13).
Another type of discrete distribution related to the preceding distributions is termed the Hypergeometric Distribution. Recall that in the Binomial Distribution 0.4 it is assumed that each draw in
succession from the population has the same probabilities. That 0.2 is, suppose we have a deck of 52 cards. 26 of these cards are black and 26 are red. If we draw a card 0 1 2 3 4 5 and6 record 7
whether 8 it9is black 10 or red, we then put the card back Figure B-7 Probability density into the deck for the next draw. functions for the Geometric DisThis "sampling with replacement" tribution (P
= .6). is what the Binomial Distribution assumes. Now for the next draw, 1 there is still a .5 (26/52) probability of the next card being black (or red). 0.8 The Hypergeometric Distribution assumes
almost the same thing, except there is no replace0.6 ment after sampling. Suppose we draw the first card and it is red, and we do not replace it back into 0.4 the deck. Now, the probability of the
next draw being red is reduced to 25/51 or .4901960784. In the 0.2 Hypergeometric Distribution there is dependency, in that the probabilities of the next event are 0 1 2 3 4 5 dependent 6 7 on the 8
outcome(s) 9 10 of the prior event(s). Contrast this to Figure B-8 Cumulative probabilithe Binomial Distribution, where ty functions for the Geometric an event is independent of the Distribution (P =
.6). outcome(s) of the prior event(s). The basic functions N'(X) and Figures B-7 and B-8 illustrate N(X) of the Hypergeometric are the probability density and cumuthe same as those for the
Binomilative probability (i.e., cdf) real, (B.07) and (B.09) respectively, spectively of the Geometric Disexcept that with the Hypergeotribution. Other properties of the metric the variable P, the
probaGeometric are: bility of success on a single trial, (B.15) Mean = 1/P (B.16) Varichanges from one trial to the next. ance = Q/P^2 It is interesting to note the rewhere lationship between the
HypergeoP = The probability of sucmetric and Binomial Distribucess for a given trial. tions. As N becomes larger, the Q = The probability of failure differences between the computed for a given
trial. probabilities of the HypergeometSuppose we are discussing ric and the Binomial draw closer tossing a single die. If we are talkto each other. Thus we can state ing about having the outcome of
that as N approaches infinity, the 5, how many times will we have Hypergeometric approaches the to toss the die, on average, to Binomial as a limit. achieve this outcome? The mean If you want to use
the Binoof the Geometric Distribution tells mial probabilities as an approxius this. If we know the probability mation of the Hypergeometric, as of throwing a 5 is 1/6 (.1667) then the Binomial is
far easier to comthe mean is 1/.1667 = 6. Thus we pute, how big must the population would expect, on average, to toss be? It is not easy to state with any a die six times in order to get a 5.
certainty, since the desired accuIf we kept repeating this process racy of the result will determine and recorded how many tosses it whether the approximation is suc-
cessful or not. Generally, though, a population to sample size of 100 to 1 is usually sufficient to permit approximating the Hypergeometric with the Binomial.
THE POISSON DISTRIBUTION The Poisson Distribution is another important discrete distribution. This distribution is used to model arrival distributions and other seemingly random events that occur
repeatedly yet haphazardly. These events can occur at points in time or at points along a wire or line (one dimension), along a plane (two dimensions), or in any N-dimensional construct. Figure B-9
shows the arrival of events (the X's) along a line, or in time. The Poisson Distribution was originally developed to model incoming telephone calls to a switchboard. Other typical situations that can
be modeled by the Poisson are the breakdown of a piece of equipment, the completion of a repair job by a steadily working repairman, a typing error, the growth of a colony of bacteria on a Petri
plate, a defect in a long ribbon or chain, and so on. The main difference between the Poisson and the Binomial distributions is that the Binomial is not appropriate for events that can occur more
than once within a given time frame. Such an example might be the probability of an automobile accident over the next 6 months. In the Binomial we would be working with two distinct cases: Either an
accident occurs, with probability P, or it does not, with probability Q (i.e., 1-P). However, in the Poisson Distribution we can also account for the fact that more than one accident can occur in
this time period. The probability density function of the Poisson, N'(X), is given by: (B.17) N'(X) = (L^X*EXP(L))/X! where L = The parameter of the distribution. EXP() = The exponential function.
Note that X must take discrete values. Suppose that calls to a switchboard average four calls per minute (L = 4). The probability of three calls (X = 3) arriving in the next minute are:
N'(3) = (4^3*EXP(-4))/3! = (64*EXP(-4))/(3*2) = (64*.01831564)/6 = 1.17220096/6 = .1953668267 So we can say there is about a 19.5% chance of getting 3 calls in the next minute. Note that this is not
cumulative-that is, this is not the probability of getting 3 calls or fewer, it is the probability of getting exactly 3 calls. If we wanted to know the probability of getting 3 calls or fewer we
would have had to use the N(3) formula [which is given in (B.20)]. Other properties of the Poisson Distribution are: (B.18) Mean = L (B.10) Variance =L where L = The parameter of the distribution. In
the Poisson Distribution, both the mean and the variance equal the parameter L. Therefore, in our example case we can say that the mean is 4 calls and the variance is 4 calls (or, the standard
deviation is 2 calls-the square root of the variance, 4). When this parameter, L, is small, the distribution is shaped like a reversed J, and when L is large, the distribution is not dissimilar to
the Binomial. Actually, the Poisson is the limiting form of the Binomial as N approaches infinity and P approaches 0. Figures B-10 through B-13 show the Poisson Distribution with parameter values of
.5 and 4.5.
EXP() = The exponential function.
0.8 0.6 0.4 0.2 0
Figure B-11 Cumulative probability functions for the Poisson Distribution (L = .5).
1 0.8 0.6 0.4 0.2 0
Figure B-12 Probability density functions for the Poisson Distribution (L = 4.5).
0.8 0.8
0.6 0.6
0.4 0.4
0.2 0.2
Figure B-10 Probability density functions for the Poisson Distribution (L = .5).
0 1 2 3 4 3Figure B-13 4 Cumulative 5 6 probability functions for the Poisson Distribution (L = 4.5). The cumulative density function of the Poisson, N(X), is given by: (B.20) N(X) = ∑[J = 0,X] (L^
J*EXP(-L))/J! where L = The parameter of the distribution.
Related to the Poisson Distribution is a continuous distribution with a wide utility called the Exponential Distribution, sometimes also referred to as the Negative Exponential Distribution. This
distribution is used to model interarrival times in queuing systems, service times on equipment, and sudden, unexpected failures such as equipment failures due to defects, 3manufacturing 4 5 light
6bulbs burning out, the time that it takes for a radioactive particle to decay, and so on. (There is a very interesting relationship between the Exponential and the Poisson distributions. The arrival
of calls to a queuing system follows a Poisson Distribution, with arrival rate L. The interarrival distribution (the time between the arrivals) is Exponential with parameter 1/L.) The probability
density function N'(X) for the Exponential Distribution is given as: (B.21) N'(X) = A*EXP(-A*X) where A = The single parametric input, equal to 1/L in the Poisson Distribution. A must be greater than
0. 5 EXP() 6 7= The 8 exponential 9 10 function. The integral of (B.21), N(X), the cumulative density function for the Exponential Distribution is given as: (B.22) N(X) = 1-EXP(-A*X) where A = The
single parametric input, equal to 1/L in the Poisson Distribution. A must be greater than 0. EXP() = The exponential function. Figures B-14 and B-15 show the functions of the Exponential
Distribution. Note that once you know A, the distribution is completely determined.
1 0.8 0.6 0.4 0.2 0
Figure B-14 Probability density functions for the Exponential Distribution (A = 1).
1 0.8 0.6 0.4 0.2 0
Figure B-15 Cumulative probability functions for the Exponential Distribution (A = 1). The mean and variance of the Exponential Distribution are: (B.23) Mean = 1/A (B.24) Variance = 1/A^2 Again A is
the single parametric input, equal to 1/L in the Poisson Distribution, and must be greater than 0. Another interesting quality about the Exponential Distribution is that it has what is known as the
"forgetfulness property." In terms of a telephone switchboard, this property states that the probability of a call in a given time interval is not affected by the fact that no calls may have taken
place in the preceding interval(s).
THE CHI-SQUARE DISTRIBUTION A distribution that is used extensively in goodness-of-fit testing is the Chi-Square Distribution (pronounced ki square, from the Greek letter X (chi) and hence often
represented as the X2 distribution). Appendix A shows how to perform the chi-square test to
determine how alike or unalike two different distributions are. Assume that K is a standard normal random variable (i.e., it has mean 0 and variance 1). If we say that K equals the square root of J
(J = K^2), then we know that K will be a continuous random variable. However, we know that K will not be less than zero, so its density function will differ from the Normal. The Chi-Square
Distribution gives us the density function of K: (B.27) N'(K) = (K ^ ((V/2)1)*EXP(-V/2))/(2 ^ 3(V/2)*GAM(V/2)) 4 5 6 where K = The chi-square variable X2. V = The number of degrees of freedom, which
is the single input parameter. EXP() = The exponential function. GAM() = The standard gamma function. A few notes on the gamma function are in order. This function has the following properties: 5.
GAM(0) = 1 6. GAM( 1/2) = The square root of pi, or 1.772453851 7. GAM(N) = (N-1)*GAM(N1); therefore, if N is an integer, GAM(N) = (N-1)! Notice in Equation (B.25) 3 4 5 6 that the only input
parameter is V, the number of degrees of freedom. Suppose that rather than just taking one independent random variable squared (K^2), we take M independent random variables squared, and take their
sum: JM = K1^2+K2^2 ... KM^2 Now JM is said to have the Chi-Square Distribution with M degrees of freedom. It is the number of degrees of freedom that determines the shape of a particular Chi-Square
Distribution. When there is one degree of freedom, the distribution is severely asymmetric and resembles the Exponential Distribution (with A = 1). At two degrees of freedom the distribution begins
to look like a straight line going down and to the right, with just a slight concavity to it. At three degrees of freedom, a convexity starts taking shape and we begin to have a unimodal-shaped
distribution. As the number of degrees of freedom increases, the density function gradually becomes more and more symmetric. As the number of degrees of freedom becomes very large, the Chi-Square
Distribution begins to resemble the Normal Distribution per The Central Limit Theorem.
THE STUDENT'S DISTRIBUTION The Student's Distribution, sometimes called the t Distribution or Student's t, is another important distribution used in hypothesis testing that is related to the Normal
Distribution. When you are working with less than 30 samples of a near-Normally distributed population, the Normal Distribution can no longer be accurately used. Instead, you must use the Student's
Distribution. This is a symmetrical distribution with one parametric input, again the degrees of freedom. The degrees of freedom usually equals the number of elements in a sample minus one (N-1). The
shape of this distribution closely resembles the Normal except that the tails are thicker and the peak of the distribution is lower. As the number of degrees of freedom approaches infinity, this
distribution approaches the Normal in that the tails lower and the peak increases to resemble the Normal Distribution. When there is one degree of freedom, the tails are at their thickest and the
peak at its smallest. At this point, the distribution is called Cauchy. It is interesting that if there is only one degree of freedom, then the mean of this distribution is said not to exist. If
there is more than one degree of freedom, then the mean does exist and is equal to zero, since the distribution is symmetrical about zero. The variance of the Student's Distribution is infinite if
there are fewer than three degrees of freedom. The concept of infinite variance is really quite simple. Suppose we measure the variance in daily closing prices for a particular stock for the last
month. We record that value. Now we measure the variance in daily closing prices for that stock for the next year and record that value. Generally, it will be greater than our first value, of simply
last month's variance. Now let's go back over the last 5 years and measure the variance in daily closing prices. Again, the variance has gotten larger. The farther back we gothat is, the more data we
incorporate into our measurement of variance-the greater the variance becomes. Thus, the variance increases without bound as the size of the sample increases. This is infinite variance. The
distribution of the log of daily price changes appears to have infinite variance, and thus the Student's Distribution is sometimes used to model
the log of price changes. (That is, if C0 is today's close and C1 yesterday's close, then ln(C0/C1) will give us a value symmetrical about 0. The distribution of these values is sometimes modeled by
the Student's distribution). If there are three or more degrees of freedom, then the variance is finite and is equal to: (B.26) Variance = V/ (V-2) for V>2 (B.27) Mean = 0 for V>1 where V = The
degrees of freedom. Suppose we have two independent random variables. The first of these, Z, is standard normal (mean of 0 and variance of 1). The second of these, which we call J, is Chi-Square
distributed with V degrees of freedom. We can now say that the variable T, equal to Z/(J/V), is distributed according to the Student's Distribution. We can also say that the variable T will follow
the Student's Distribution with N-1 degrees of freedom if: T = N^(1/2)*((X-U)/S) where X = A sample mean. S = A sample standard deviation, N = The size of a sample. U = The population mean. The
probability density function for the Student's Distribution, N'(X), is given as: (B.28) N'(X) = (GAM((V+1)/2)/(((V*P)^(1/2))* GAM(V/2)))*((1+((X^2)/V))^((V+1)/2)) where P = pi, or 3.1415926536. V =
The degrees of freedom. GAM() = The standard gamma function. The mathematics of the Student's Distribution are related to the incomplete beta function. Since we aren't going to plunge into functions
of mathematical physics such as the incomplete beta function, we will leave the Student's Distribution at this point. Before we do, however, you still need to know how to calculate probabilities
associated with the Student's Distribution for a given number of standard units (Z score) and degrees of freedom. You can use published tables to find these values. Yet, if you're as averse to tables
as I am, you can simply use the following snippet of BASIC code to discern the probabilities. You'll note that as the degrees of freedom variable,
DEGFDM, approaches infinity, the values returned, the probabilities, converge to the Normal as given by Equation (3.22): 1000 REM 2 TAIL PROBABILITIES ASSOCIATED WITH THE STUDENT'S T DISTRIBUTION
1010 REM INPUT ZSCORE AND DEGFDM, OUTPUTS CF 1020 ST = ABS(ZSCORE):R8 = ATN(ST/SQR(DEGFDM)):RC8 = COS(R8):X8 = 1:R28 = RC8*RC8:RS8 = SIN(R8) 1030 IF DEGFDM MOD 2 = 0 THEN 1080 1040 IF DEGFDM = 1 THEN
Y8 = R8:GOTO 1070 1050 Y8 = RC8:FOR Z8 = 3 TO (DEGFDM-2) STEP 2:X8 = X8*R28*(Z8-1)/Z8:Y8 = Y8+X8*RC8:NEXT 1060 Y8 = R8+RS8*Y8 1070 CF = Y8*.6366197723657157#:GOT0 1100 1080 Y8 = 1 :FOR Z8 = 2 TO
(DEGFDM-2) STEP 2:X8 = X8* R28 * (Z8-1)/Z8:Y8 = Y8+X8:NEXT 1090 CF = Y8*RS8 1100 PRINT CF Next we come to another distribution, related to the ChiSquare Distribution, that also has important uses in
statistics. The F Distribution, sometimes referred to as Snedecor's Distribution or Snedecor's F, is useful in hypothesis testing. Let A and B be independent chi-square random variables with degrees
of freedom of M and N respectively. Now the random variable: F = (A/M)/(B/N) Can be said to have the F Distribution with M and N degrees of freedom. The density function, N'(X), of the F Distribution
is given as: (B.29) N'(X) = (GAM((M+N)/2)*((M/N)^(M/2)) )/(GAM(M/2)*GAM(N/2)*((1+M /N)^((M+N)/2))) where M = The number of degrees of freedom of the first parameter. N = The number of degrees of
freedom of the second parameter. GAM() = The standard gamma function.
THE MULTINOMIAL DISTRIBUTION The Multinomial Distribution is related to the Binomial, and likewise is a discrete distribution. Unlike the Binomial, which assumes two possible outcomes for an event,
the Multinomial assumes that there are M different
outcomes for each trial. The probability density function, N'(X), is given as: (B.30) N'(X) = (N!/(∏[i = 1,M] Ni!))*∏[i = 1,M] Pi^Ni where N = The total number of trials. Ni = The number of times the
ith trial occurs. Pi = The probability that outcome number i will be the result of any one trial. The summation of all Pi's equals 1. M = The number of possible outcomes on each trial. For example,
consider a single die where there are 6 possible outcomes on any given roll (M = 6). What is the probability of rolling a 1 once, a 2 twice, and a 3 three times out of 10 rolls of a fair die? The
probabilities of rolling a 1, a 2 or a 3 are each 1/6. We must consider a fourth alternative to keep the sum of the probabilities equal to 1, and that is the probability of not rolling a 1, 2, or 3,
which is 3/6. Therefore, P1 = P2 = P3 = 1/6, and P4 = 3/6. Also, N1 = 1, N2 = 2, N3 = 3, and N4 = 10 • 3-2-1 = 4. Therefore, Equation (B.30) can be worked through as: N'(X) = (10!/(1!*2!*3!*4!))*(1/
6)^1*(1/6) ^2*(1/6)^3*(3/6) 4 = (3628800/(1*2*6*24))*.1667*.02 78*.00463*.0625 = (3628800/288)*.000001341 = 12600*.000001341 = .0168966 Note that this is the probability of rolling exactly a 1 once,
a 2 twice, and a 3 three times, not the cumulative density. This is a type of distribution that uses more than one random variable, hence its cumulative density cannot be drawn out nicely and neatly
in two dimensions as you could with the other distributions discussed thus far. We will not be working with other distributions that have more than one random variable, but you should be aware that
such distributions and their functions do exist.
THE STABLE PARETIAN DISTRIBUTION The stable Paretian Distribution is actually an entire class of distributions, sometimes referred to as "Pareto-Levy" distributions. The probability density function
N'(U) is given as: (B.31) ln(N'(U)) = i*D*UV*abs(U)^A*Z where
U = The variable of the stable distribution. A = The kurtosis parameter of the distribution. B = The skewness parameter of the distribution. D = The location parameter of the distribution. V = This
is also called the scale parameter, i = The imaginary unit, -1^(1/2) Z = 1 -i*B* (U/ASS(U))*tan(A*3.141592653 6/2) when A >< 1 and 1+i*B*(U∕ASS(U))*2/3.1415926 536*log(ABS(U)) when A = 1. ABS() = The
absolute value function. tan() = The tangent function. ln() = The natural logarithm function. The limits on the parameters of Equation (B.31) are: (B.32) 0
The cumulative density functions for the stable Paretian are not known to exist in closed form. For this reason, evaluation of the parameters of this distribution is complex, and work with this
distribution is made more difficult. It is interesting to note that the stable Paretian parameters A, B, C, and D correspond to the fourth, third, second, and first moments of the distribution
respectively. This gives the stable Paretian the power to model many types of real-life distributions-in particular, those where the tails of the distribution are thicker than they would be in the
Normal, or those with infinite variance (i.e., when A is less than 2). For these reasons, the stable Paretian is an extremely powerful distribution with applications in economics and the social
sciences, where data distributions often have those characteristics (fatter tails and infinite variance) that the stable Paretian addresses. This infinite variance characteristic makes the Central
Limit Theorem inapplicable to data that is distributed per the stable Paretian distribution when A is less than 2. This is a very important fact if you plan on using the Central Limit Theorem. One of
the major characteristics of the stable Paretian is that it is invariant under addition. This means that the sum of independent stable variables with characteristic exponent A will be stable, with
approximately the same characteristic exponent. Thus we have the Generalized Central Limit Theorem, which is essentially the Central Limit Theorem, except that the limiting form of the distribution
is the stable Paretian rather than the Normal, and the theorem applies even when the data has infinite variance (i.e., A < 2), which is when the Central Limit Theorem does not apply. For example, the
heights of people have finite variance. Thus we could model the heights of people with the Normal Distribution. The distribution of people's incomes, however, does not have finite variance and is
therefore modeled by the stable Paretian distribution rather than the Normal Distribution. It is because of this Generalized Central Limit Theorem that the stable Paretian Distribution is believed by
many to be representative of the distribution of price changes.1 1
Do not confuse the stable Paretian Distribution with our adjustable distribution discussed in Chapter 4. The
There are many more probability distributions that we could still cover (Negative Binomial Distribution, Gamma Distribution, Beta Distribution, etc.); however, they become increasingly more obscure
as we continue from here. The distributions we have covered thus far are, by and large, the main common probability distributions. Efforts have been made to catalogue the many known probability
distributions. Undeniably, one of the better efforts in this regard has been done by Karl Pearson, but perhaps the most comprehensive work done on cataloguing the many known probability distributions
has been presented by Frank Haight.2 Haight's "Index" covers almost all of the known distributions on which information was published prior to January, 1958. Haight lists most of the mathematical
functions associated with most of the distributions. More important, references to books and articles are given so that a user of the index can find what publications to consult for more in-depth
matter on the particular distribution of interest. Haight's index categorizes distributions into ten basic types: 1. Normal 2. Type III 3. Binomial 4. Discrete 5. Distributions on (A, B) 6.
Distributions on (0, infinity) 7. Distributions on (-infinity, infinity) 8. Miscellaneous Univariate 9. Miscellaneous Bivariate 10. Miscellaneous Multivariate Of the distributions we have covered in
this Appendix, the Chi-Square and Exponential (Negative Exponential) are categorized by Haight as Type III. The Binomial, Geometric, and Bernoulli are categorized as Binomial. The Poisson and
Hypergeometric are categorized as Discrete. The Rectangular is under Distributions on (A, B), the F Distribution as well as the Pareto are under Distributions on (0, infinity), the Student's
stable Paretian is a real distribution because it models a probability phenomenon. Our adjustable distribution does not. Rather, it models other (Zdimensional) probability distributions, such as the
stable Paretian. 2 Haight, F. A., "Index to the Distributions of Mathematical Statistics," Journal of Research of the National Bureau of Standards-B. Mathematics and Mathematical Physics 65 B No. 1,
pp. 23-60, Januaiy-March 1961.
is regarded as a Distribution on (-infinity, infinity), and the Multinomial as a Miscellaneous Multivariate. It should also be noted that not all distributions fit cleanly into one of these ten
categories, as some distributions can actually be considered subclasses of others. For instance, the Student's distribution is catalogued as a Distribution on (-infinity, infinity), yet the Normal
can be considered a subclass of the Student's, and the Normal is given its own category entirely. As you can see, there really isn't any "clean" way to categorize distributions. However, Haight's
index is quite thorough. Readers interested in learning more about the different types of distributions should consult Haight as a starting point.
APPENDIX C - Further on Dependency: The Turning Points and Phase Length Tests There exist statistical tests of dependence other than those mentioned in Portfolio Management Formulas and reiterated in
Chapter 1. The turning points test is an altogether different test for dependency. Going through the stream of trades, a turning point is counted if a trade is for a greater P&L value than both the
trade before it and the trade after it. A trade can also be counted as a turning point if it is for a lesser P&L value than both the trade before it and the trade after it. Notice that we are using
the individual trades, not the equity curve (the cumulative values of the trades). The number of turning points is totaled up for the entire stream of trades. Note that we must start with the second
trade and end with the next to last trade, as we need a trade on either side of the trade we are considering as a turning point. Consider now three values (1, 2, 3) in a random series, whereby each
of the six possible orderings are equally likely: 1, 2, 3 2, 3,1 1, 3, 2 3, 1,2 2, 1,3 3, 2, 1 Of these six, four will result in a turning point. Thus, for a random stream of trades, the expected
number of turning points is given as: (C.01) Expected number of turning points = 2/3*(N-2) where N = The total number of trades. We can derive the variance in the number of turning points of a random
series as: (C.02) Variance = (16*N-29)/90 The standard deviation is the square root of the variance. Taking the difference between the actual number of turning points counted in the stream of trades
and the expected number and then dividing the difference by the standard deviation will give us a Z score, which is then expressed as a confidence limit. The confidence limit is discerned from
Equation (3.22) for 2-tailed Normal probabilities. Thus, if our stream of trades is very far away (very many standard deviations from the expected number), it is unlikely that our stream of trades is
random; rather, dependency is present. If dependency appears to a high confidence limit (at least 95%) with the turning points test, you can determine from inspection whether like begets like (if
there are fewer actual turning points than expected) or whether like begets unlike (if there are more actual turning points than-expected). Another test for dependence is the phase length test. This
is a statistical test similar to the turning points test. Rather than counting up the number of turning points between (but not including) trade 1 and the last trade, the phase length test looks at
how many trades have elapsed between turning points. A "phase" is the number of trades that elapse between a turning point high and a turning point low, or a turning point low and a turning point
high. It doesn't matter which occurs first, the high turning point or the low turning point. Thus, if trade number 4 is a turning point (high or low) and trade number 5 is a turning point (high or
low, so long as it's the opposite of what the last turning point was), then the phase length is 1, since the difference between 5 and 4 is 1. With the phase length test you add up the number of
phases of length 1, 2, and 3 or more. Therefore, you will have 3 categories: 1, 2, and 3+. Thus, phase lengths of 4 or 5, and so on, are all totaled under the group of 3+. It doesn't matter if a
phase goes from a high turning point to a low turning point or from a low turning point to a high turning point; the only thing that matters is how many trades the phase is comprised of. To figure
the phase length, simply take the trade number of the latter phase (what number it is in sequence from 1 to N, where N is the total number of trades) and subtract the trade number of the prior phase.
For each of the three categories you will have the total number of complete phases that occurred between (but not including) the first and the last trades. Each of these three categories also has an
expected number of trades for that category. The expected number of trades-of phase length D is: (C.03) E(D) = 2*(N-D-2)*(D^2*3*D+1)/(D+3)! where D = The length of the phase. E(D) = The expected
number of counts. N = The total number of trades.
Once you have calculated the expected number of counts for the three categories of phase length (1, 2, and 3+), you can perform the chisquare test. According to Kendall and colleagues,1 you should
use 2.5 degrees of freedom here in determining the significance levels, as the lengths of the phases are not independent. Remember that the phase length test doesn't tell you about the dependence
(like begetting like, etc.), but rather whether or not there is dependence or randomness. Lastly, this discussion of dependence addresses converting a correlation coefficient to a confidence limit.
The technique employs what is known as fisher's Z transformation, which converts/a correlation coefficient, r, to a Normally distributed variable: (C.04) F = .5*ln((1+r)/(l-r)) where F = The
transformed variable, now Normally distributed. r = The correlation coefficient of the sample. ln() = The natural logarithm function. The distribution of these transformed variables will have a
variance of: (C.05) V = 1/(N-3) where V = The variance of the transformed variables. N = The number of elements in the sample. The mean of the distribution of these transformed variables is discerned
by Equation (C.04), only instead of being the correlation coefficient of the sample, r is the correlation coefficient of the population. Thus, since our population has a correlation coefficient of 0
(which we assume, since we are testing deviation from randomness) then Equation (C.04) gives us a value of 0 for the mean of the population. Now we can determine how many standard deviations the
adjusted variable is from the mean by dividing the adjusted variable by the square root of the variance, Equation (C.05). The result is the Z score associated with a given correlation coefficient and
sample size. For example, suppose we had a correlation coefficient of .25, and this was discerned over 100 trades. Thus, we can find our Z score as Equation (C.04) divided by the square root of
Equation (C.05), or: (C.06) Z = (.5*ln((1+r)/(1-r)))/(l/(N-3))^.5 Which, for our example is: Z = (.5*ln((l+.25)/(l-.25)))/(l/(100-3))^.5 = (.5*ln(1.25/.75))/(l/97)^.5 = (.5*ln(1.6667))/.010309^.5 =
(.5*.51085)/.1015346165 = .25541275/.1015346165 = 2.515523856 Now we can translate this into a confidence limit by using Equation (3.22) for a Normal Distribution e-tailed confidence limit. For our
example this works out to a confidence limit in excess of 98.8%. If we had had 30 trades or less, we would have had to discern our confidence limit by using the Student's Distribution with N-1
degrees of freedom.
Kendall, M. G., A. Stuart, and J. K. Ord. The Advanced Theory of Statistics, Vol. III. New York: Hafner Publishing, 1983. | {"url":"https://docer.tips/mathematics-money-management-ralph-vince.html","timestamp":"2024-11-10T12:21:31Z","content_type":"text/html","content_length":"815605","record_id":"<urn:uuid:e816fc20-aaac-4119-9798-7acf0be72e2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00456.warc.gz"} |
Find Constant Of Proportionality (Equations) Worksheets [PDF] (7.RP.A.2.B): 7th Grade Math
Teaching Solving Constant of Proportionality Easily
• Firstly, note down the values of the variables. For example x = 3 and y = 24.
• Then, you may use the constant of proportionality formula. The mathematical formula of constant of proportionality is y = kx or y = k/x.
• The resultant value is the constant of proportionality for the given equation.
Let us look at the given example mentioned below to understand more about the constant of proportionality.
Q. Q. It is given that a variable y varies proportionally with x. Find the constant of proportionality if y = 25 and x = 5.
Step 1: Note down the quantities of the two variables,
Y = 25
X = 5
Step 2: Find the constant of proportionality using the formula.
Y = kx.
25 = k * 5
K = 25 / 5 = 5
Hence, the constant of proportionality is 5.
Why should you use a find constant of proportionality (equations)worksheet for your students?
• Students can now easily determine the constant of proportionality in an equation using these worksheets.
• These worksheets can help your students to know more about proportions and its different types.
Download find constant of proportionality (equations) worksheets PDF
You can download and print these super fun find constant of proportionality (equations) worksheets here for your students.
You can also check out our Combining like terms Worksheet, Constant of Proportionality Worksheets & Solve Proportions Worksheets which will help students understand complex concepts and score well on
their exams. | {"url":"https://www.bytelearn.com/math-grade-7/worksheet/find-constant-of-proportionality-equations","timestamp":"2024-11-10T05:29:58Z","content_type":"text/html","content_length":"239680","record_id":"<urn:uuid:3ff5fb43-b73c-4b4d-8c4b-3e3e34d69d1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00567.warc.gz"} |
Find a root of a univariate real function within an interval.
x = fzero(@func,interval)
x = fzero(@func,interval,options)
[x,fval,info,output] = fzero(...)
The function whose root is to be found.
A two-element vector containing the root.
Type: double
Dimension: vector
A struct containing option settings.
See optimset for details.
The location of the approximate root.
The function value at the approximate root.
The convergence status flag.
info = 1
Function value converged to within tolX.
info = 0
Reached maximum number of iterations or function calls.
A struct containing iteration details. The members are as follows:
The number of iterations.
The number of function evaluations.
The function values of the interval end points at each iteration.
The interval end points at each iteration.
function y = func(x)
y = (x-3)^4 - 16;
options = optimset('TolX', 1.0e-8);
interval = [1, 6];
[x,fval] = fzero(@func, interval, options)
x = 4.99999999
fval = -4.18902e-07
Modify the previous example to pass an extra parameter to the objective function using a function handle.
function y = func(x, offset)
y = (x-3)^4 + offset;
handle = @(x) func(x, -8);
[x,fval] = fzero(handle, interval, options)
x = 1.31820717
fval = 3.55271e-15
fzero implements ACM Transactions on Mathematical Software algorithm 748, which does not require derivatives.
Options for convergence tolerance controls are specified with optimset.
To pass additional parameters to a function argument, use an anonymous function.
options and defaults are as follows:
• MaxIter: 400
• MaxFunEvals: 1,000,000
• TolX: 1.0e-7 | {"url":"https://help.altair.com/compose/help/en_us/topics/reference/oml_language/Optimization/fzero.htm","timestamp":"2024-11-03T03:09:37Z","content_type":"application/xhtml+xml","content_length":"69744","record_id":"<urn:uuid:576a75bd-f8b1-422d-a669-98ac2e1001ac>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00660.warc.gz"} |
Canara Bank PO exam 2018 Question paper and key
Solved previous year question paper and answer key of Canara Bank PO examination held on 4th March 2018. The paper had
• Reasoning Ability
• Quantitative Aptitude and
• English Language
Some sample questions from Canara bank Probationary Officers exam 2018
1. A and B started a business. B's Investment was 2.5 limes that of A. The respective ratio between the time period for which A invested and that for which B Invested was 3 : 1 . If the total
investment made by A and B together was Rs. 28.000 and the annual profit earned was Rs. 2500 less than A s investment, what was the difference between A's share and B’s share In the annual profit?
(1) Rs. 200Â Â Â
(2) Rs. 800
(3) Rs. 500Â Â Â
(4) Rs. 400
(5) Rs. 650
2.  In T hours, a car covers 36 km. less than the distance covered by a bus in the same time. The speed of the car Is 12 km/h less than the speed of the bus. Had the speed of the car been 52 km/h.
what would have been the distance covered by It In T- 1/2 hours?
(1)260 km  Â
(2) 120km
(3) 390 km  Â
(4) 130 km
(5) 140 km
3.   A’s age 7 years ago was 10 years more than half his present age. The respective ratio between B’s age 4 years ago and A’s age that time was 2 : 5. If C’s age 12 years hence will be
thrice of B's age 2 years ago. what is C's present age? (in years)
(1)38Â Â Â
(3) 42Â Â Â
(4) 37
4.   S, is a series of 5 consecutive positive multiples of 4. whose sum is 100. S2 is another series of 4 consecutive even numbers. whose 2nd lowest number is 6 less than the highest number of
S,. What is the average of S2 ?
(1)21Â Â Â
(3) 23Â Â Â
(4) 25
Download the complete question paper and key of Canara Bank PO exam 2018 from the links below
Canara Bank PO Exam 2018 solved question paper.pdf
(Size: 3.61 MB / Downloads: 1,450)
Canara Bank PO Exam 2018 Answer Key.pdf
(Size: 79.02 KB / Downloads: 787)
06-25-2018, 07:39 PM
(06-25-2018, 01:17 AM)muneeshwar Wrote: Solved previous year question paper and answer key of Canara Bank PO examination held on 4th March 2018. The paper had
□ Reasoning Ability
□ Quantitative Aptitude and
□ English Language
Some sample questions from Canara bank Probationary Officers exam 2018
1. A and B started a business. B's Investment was 2.5 limes that of A. The respective ratio between the time period for which A invested and that for which B Invested was 3 : 1 . If the total
investment made by A and B together was Rs. 28.000 and the annual profit earned was Rs. 2500 less than A s investment, what was the difference between A's share and B’s share In the annual
(1) Rs. 200Â Â Â
(2) Rs. 800
(3) Rs. 500Â Â Â
(4) Rs. 400
(5) Rs. 650
2.  In T hours, a car covers 36 km. less than the distance covered by a bus in the same time. The speed of the car Is 12 km/h less than the speed of the bus. Had the speed of the car been 52
km/h. what would have been the distance covered by It In T- 1/2 hours?
(1)260 km  Â
(2) 120km
(3) 390 km  Â
(4) 130 km
(5) 140 km
3.   A’s age 7 years ago was 10 years more than half his present age. The respective ratio between B’s age 4 years ago and A’s age that time was 2 : 5. If C’s age 12 years hence will
be thrice of B's age 2 years ago. what is C's present age? (in years)
(1)38Â Â Â
(3) 42Â Â Â
(4) 37
4.   S, is a series of 5 consecutive positive multiples of 4. whose sum is 100. S2 is another series of 4 consecutive even numbers. whose 2nd lowest number is 6 less than the highest number
of S,. What is the average of S2 ?
(1)21Â Â Â
(3) 23Â Â Â
(4) 25
Download the complete question paper and key of Canara Bank PO exam 2018 from the links below
06-25-2018, 07:40 PM
Thank u bro share latest question paper like this in PD magazine
07-19-2018, 09:26 AM
When this exam held?
09-06-2018, 02:28 PM
Thank you for your favour
08-31-2019, 01:15 PM
The largest collection of Officer Grade question papers of Banking and Insurance companies
150 Question papers of Bank PO Examinations
09-03-2019, 01:07 PM
The largest collection of Officer Grade question papers of Banking and Insurance companies
150 Question papers of Bank PO Examinations | {"url":"https://www.educationobserver.com/forum/showthread.php?tid=21848","timestamp":"2024-11-13T10:48:58Z","content_type":"application/xhtml+xml","content_length":"52503","record_id":"<urn:uuid:6a23d3da-1a8d-4609-b233-a05c89f93f51>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00324.warc.gz"} |
GOLD Elliott Wave Technical Analysis – 30th September, 2014
by Lara | Sep 30, 2014 | Gold | 4 comments
Summary: This correction is still incomplete. An adjustment to the hourly charts for recent movement sees all three possibilities of a flat, triangle and combination still open. I expect Gold to
continue to be range bound until Thursday. The breakout when it comes should be downwards.
Click on charts to enlarge.
Main Wave Count
On the weekly chart extend the triangle trend lines of primary wave 4 outwards. The point in time at which they cross over may be the point in time at which primary wave 5 ends. This does not always
work, but it works often enough to look out for. It is a rough guideline only and not definitive. A trend line placed from the end of primary wave 4 to the target of primary wave 5 at this point in
time shows primary wave 5 would take a total 26 weeks to reach that point, and that is what I will expect. Primary wave 5 has begun its 12th week.
At 956.97 primary wave 5 would reach equality in length with primary wave 1. Primary wave 3 is $12.54 short of 1.618 the length of primary wave 1, and equality between primary waves 5 and 1 would
give a perfect Elliott relationship for this downwards movement.
However, when triangles take their time and move close to the apex of the triangle, as primary wave 4 has (looking at this on a weekly chart is clearer) the movement following the triangle is often
shorter and weaker than expected. If the target at 956.97 is wrong it may be too low. In the first instance I expect it is extremely likely that primary wave 5 will move at least below the end of
primary wave 3 at 1,180.40 to avoid a truncation. When intermediate waves (1) through to (4) within primary wave 5 are complete I will recalculate the target at intermediate degree because this would
have a higher accuracy. I cannot do that yet; I can only calculate it at primary degree.
Minor wave 3 is $9.65 longer than 1.618 the length of minor wave 1. This variation is less than 10% the length of minor wave 3 and so I would consider it an acceptable Fibonacci ratio. Just.
Movement comfortably below 1,180.84 would provide further confidence in this main wave count as at that stage an alternate idea which sees primary wave 4 as continuing would be invalidated.
I have drawn a Fibonacci retracement the length of minor wave 3. Minor wave 4 has so far reached up to the 0.236 at 1,234.34. If minor wave 4 continues as a triangle this will be its maximum depth.
If it continues as a flat correction then it may yet move higher to the 0.382 at 1,250.78 Fibonacci ratio. If it continues as a combination then about the 0.236 Fibonacci ratio will be close to its
maximum depth.
Draw a channel about intermediate wave (1): draw the first trend line from the lows labeled minor waves 1 to 3, then place a copy on the high labeled minor wave 2. If it completes as a flat
correction minor wave 4 may find resistance and may end about the upper edge of this blue channel. If it overshoots the channel it should find some resistance at the upper blue trend line before
breaking above it.
There is a nice morning doji star on this daily chart which supports this wave count. A morning doji star is a bottom reversal pattern, indicating the prior bear trend of minor wave 3 should change
to a new trend. This new trend may be either upwards or sideways, and the wave count expects it is sideways.
There are still three structural possibilities for this fourth wave correction: a flat, a triangle or a combination. If fourth wave correction continues for another two days / sessions it may end in
a total Fibonacci eight days, this Thursday. This expectation is the same for all three hourly wave counts.
All three hourly wave counts below are viable and all three scenarios must be considered. At this stage I do not favour either of the three hourly wave counts. All expect Gold to remain range bound
in a consolidation phase probably for two more days at least.
Hourly Wave Count – Triangle
Minor wave 4 may still be unfolding as a triangle.
Within a triangle one of the subwaves is often a more complicated and time consuming double zigzag. This may have been minute wave b.
Minute wave b is a 113% correction of minute wave a. The triangle would be a running contracting or barrier triangle.
Within both a contracting and barrier triangle minute wave c may not move beyond the end of minute wave a above 1,236.69. Minute wave c would be very likely to end at the upper pink trend line drawn
from the end of minute wave a to the end of minuette wave (x) within minute wave b as it is very common for the subwaves within triangles to touch the triangle trend lines.
Within a contracting triangle minute wave d may not move beyond the end of minute wave b below 1,204.61.
Within a barrier triangle minute wave d may end about the same level as minute wave b at 1,204.61. In practice this means minute wave d may end slightly below 1,204.61 as long as the b-d trend line
remains essentially flat. This lower invalidation point is not black and white. This is the only Elliott wave rule which has any grey area.
Minute wave e may not move beyond the end of minute wave c and is most likely to end short of the a-c trend line.
Only one of the five subwaves of a triangle may be a more complicated double. This means all the remaining triangle subwaves of c, d and e must be simple A-B-C corrective structures, and they are
most likely to be single zigzags. This triangle could end in just two more days, or it could take a little longer.
Hourly Wave Count – Combination
At this stage a combination is also entirely possible, and with the duration of minuette wave (b) this idea is now taking on a more typical look.
It is my experience over the years that when one expects a triangle is unfolding it is always necessary to consider a combination alongside it. Often what you think is a triangle completing turns out
to be a combination, as the triangle invalidates itself just before the structure ends.
The first structure in the double combination was a zigzag labeled minute wave w. The double is joined by a three, a zigzag, in the opposite direction labeled minute wave x. The second structure in
the double combination is a flat correction labeled minute wave y.
Within the flat correction minuette wave (b) is a 109% correction of minuette wave (a) and so the flat is an expanded flat. Expanded flats most commonly have C waves which are 1.618 the length of
their A waves. At 1,244 minuette wave (c) would reach 1.618 the length of minuette wave (a).
Within minuette wave (c) at 1,230 subminuette wave iii would reach 1.618 the length of subminuette wave iv.
Subminuette wave ii may not move beyond the start of subminuette wave i below 1,204.61.
The purpose of double combinations is to move price sideways and take up time. The second structure in the double normally ends close to the end of the first structure. At 1,244 minute wave y would
end reasonably close to 1,236.69.
Hourly Wave Count – Flat
A flat correction for minor wave 4 is still possible, with a time consuming double zigzag for minute wave b within it.
Minute wave b is a 113% correction of minute wave a so this is an expanded flat correction. At 1,251 minute wave c would reach 1.618 the length of minute wave a and minor wave 4 would reach up to the
0.382 Fibonacci ratio of minor wave 3.
Minute wave c could complete within two more days or sessions, or it may require a little longer. Within minute wave c at 1,230 minuette wave (iii) would reach 1.618 the length of minuette wave (i).
Within minute wave c minuette wave (ii) may not move beyond the start of minuette wave (i) below 1,204.61.
This analysis is published about 06:30 p.m. EST.
4 Comments
Elliott Wave Gold on October 3, 2014 at 6:31 am
I think the next analysis elaborated on this.
Thursday was an expectation, but it has not been met. Its still in a consolidation phase.
Sid on October 1, 2014 at 5:19 pm
Range bound till Thursday: then breakout will b downside. ? means tomorrow is the last day for gold to go up? 1250 was the tgt? price can go up on Friday as well or we have to short on Thursday
high?? plz elaborate thanks
Chapstick_jr on October 1, 2014 at 1:39 am
I hope you didn’t jinx yourself by posting 3 wave counts with the same invalidation point. You don’t do it often, but I remember the last time you did it a few months back…all 3 were invalidated.
Elliott Wave Gold on October 1, 2014 at 7:59 pm
so far so good….
and TBH I didn’t notice that I’d done that until after I’d prepared all three charts….
but that last wave down with the triangle in the middle is such a clear three, it seemed clear it had to go up after that. | {"url":"https://elliottwavegold.com/2014/09/gold-elliott-wave-technical-analysis-30th-september-2014/","timestamp":"2024-11-06T02:57:15Z","content_type":"text/html","content_length":"48894","record_id":"<urn:uuid:dd0ce3b6-5289-48c6-9825-ee63ec7dbe6f>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00146.warc.gz"} |
Modifying space and time
One of the main points of my book,
How Einstein Ruined Physics
, is that the essence of special relativity is that the electromagnetic covariance is deduced from the spacetime geometry. That is what gives relativity its central importance in physics.
I also argue that Einstein had no role in either discovering or popularizing this crucial idea. He did not even understand it until after many other physicists did.
Briefly, here is the history of special relativity. Maxwell
discovered the first relativistic theory
, following the work of Gauss, Faraday, and others, and coined the word "relativity". Michelson did the crucial experiment on the relativity of motion, following a suggestion of Maxwell. Lorentz
built on Maxwell's theory, and
discovered the transformations
that reconciled the theory with Michelson's experiments. Poincare perfected Lorentz's work, and discovered the 4-dimensional spacetime geometry and electromagnetic covariance in 1905. Minkowski
extended and elaborated Poincare's ideas, emphasizing the geometry, and published the 1908 paper that got everybody excited about relativity.
Einstein played no part in any of this. Historians say that he
paid no attention to the relativity experiments
that inspired Lorentz, Poincare, and Minkowski, that the main point of his paper was that he postulated what Lorentz had proved, and that the physics community was not impressed by his 1905 paper at
the time.
More importantly, Einstein's 1905 paper and subsequent papers lack the crucial concepts of spacetime geometry and electromagnetic covariance. I explain this in my book, and refute scholars who say
I neglected to address a comment from a 1905 Einstein private letter where he seems to say that he had a spacetime theory. Noted science writer James Gleick
about Einstein's 1905 papers:
The name echoes through the language: It doesn't take an Einstein. A poor man's Einstein. He's no Einstein. In this busy century, dominated like no other by science—and exalting, among the human
virtues, braininess, IQ, the ideal of pure intelligence -— he stands alone as our emblem of intellectual power. We talk as though humanity could be divided into two groups: Albert Einstein and
everybody else.
... Einstein said in a letter to a friend, it "modifies the theory of space and time." Ah, yes. Relativity.
The quote is from a letter to Einstein's friend, Conrad Habicht.
is a little more context:
Such movement of suspended bodies has actually Been observed by biologists who call it Brownian molecular movement. The fourth work is based on the concepts of electrodynamics of moving bodies
and modifies the theory of space and time; ... [quoted in Einstein: the life and times, by Ronald W. Clark, p.87]
He says that his paper is based on electrodynamics. Compare that to what
said in 1908:
The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and
time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.
But Einstein misses all of these points. He fails to say that he has a spacetime theory, that it is a consequence of experiments, that it makes space and time inseparable, and that it is a radical
new idea.
Einstein's 1905 paper does give formulas for modifying space and time, but they are the same formulas previously given for the FitzGerald contraction and the Larmor dilation of Lorentz local time.
Einstein later acknowledged that he got Lorentz transformations out of the Lorentz 1895 paper, and may have also read subsequent papers on the subject by Lorentz and Poincare.
Some say that Einstein followed experiments, but Clark's biography documents on p.128-130 that Einstein said contradictory things about the Michelson-Morley experiment. Sometimes he said that it was
important, and other times he said that it was unimportant or that he never even heard of it. Most Einstein historians now say that he ignored experiments like Michelson-Morley, and that they praise
him for using postulates instead. I think that Einstein ignored experiments because he was just reciting Lorentz's theory, and he knew that the experimental evidence would be the same as that for
Lorentz's theory. Einstein did not even claim (until years later) that he was
doing anything different from Lorentz
. Lorentz's approach was very different because he was creating a new theory to explain the experiments.
It sounds as if Einstein was getting close when he says his paper "modifies the theory of space and time", but he is basing it on electrodynamics just as Lorentz did 10 years earlier. Lorentz also
modifies space and time with his
Lorentz transformations
. The core of Lorentz's theory was that the experiments could be explained by coupling motion to changes in space and time. It is silly to assume that Einstein meant something different from Lorentz
unless Einstein actually said that he meant something different from Lorentz. He did not, and he was happy to see other physicists call it the "Lorentz-Einstein theory".
If Einstein had said, "I show that a new non-Euclidean geometry of space and time can be used to explain the electrodynamics of moving bodies", then I would have to agree that Einstein understood the
essence of special relativity, and that he could be called a co-discoverer of the theory. But he did not. Poincare and Minkowski said it, and everyone else got it from them, and not Einstein.
Einstein did not even understand or accept it until after the mainstream European physicists did after 1908.
2 comments:
1. Roger,
I don't think non-Euclidean geometry was initially used in Einstein's relativity at all, and it certainly doesn't do anything but add useless complexity to the concept. Einstein was schooled by
'experts' in new maths to dress up his theory with obscurring complexity to avoid scrutiny outside of the select few who had mastered the new math. The smaller your pool of critics, the easier it
is to fool them. This age old technique has been quite effective in cloaking errors and fudges for hundreds if not thousands of years, and is the primary reason modern physics can't precisely
define many of the terms it wants to use so freely. If you can't define your subject or distinguish between a Concept like a mathematical point (which exists only mathematically or as an idea),
and an Object like an iron atom (which exists physically, whether you are looking at it or not) you really can't do actual physics which describes with useful theory what is going on in the
actual world. New and increasingly complex maths are invented every day, and all of them depend upon the same logical errors, poor definitions, and lack of any theoretical rigor. Simple gut
check; if the math is all that is carrying your forces, in truth you really don't have a working theory, you have a non explanatory heuristic model, and your model is either wrong or incomplete.
By the way, I do enjoy reading your blog, I like that you can challenge unquestioning hero worship of physicists like Einstein. Just don't forget that the cult of personality worship which tends
to hide all error did not begin or end with Albert.
2. That's right, non-Euclidean geometry was not initially used in Einstein's relativity, but was used in Poincare's relativity in 1905 and Minkowski's relativity in 1908. Einstein's relativity did
not catch on.
Yes, physics has other hero worship. But Einstein is by far the biggest false hero. | {"url":"http://blog.darkbuzz.com/2013/12/modifying-space-and-time.html","timestamp":"2024-11-07T16:30:20Z","content_type":"text/html","content_length":"115874","record_id":"<urn:uuid:5a2d37bc-d2b2-420a-940d-c8815aded785>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00171.warc.gz"} |
Ultimate Guide to Linear Discriminant Analysis (LDA) - Dataaspirant
Ultimate Guide to Linear Discriminant Analysis (LDA)
Linear Discriminant Analysis (LDA) isn't just a tool for dimensionality reduction or classification. It has its roots in the world of guinness and beer! Sir Ronald A.
Fisher, the father of LDA, originally developed it in the context of distinguishing between two species of iris flowers. Later, it was used to classify the origin of ancient vases and even to
differentiate between various types of brewed beverages.
So next time you're sipping on your favorite drink, you can ponder how data science might have once played a role in its categorization!
Dive deeper into LDA in this blog and discover how it's more than just math – it's a bridge between nature, history, and modern technology.
Whether you're a beginner exploring the world of machine learning or an aspiring data scientist, this guide provides a comprehensive introduction to Linear Discriminant Analysis.
Ultimate Guide to Linear Discriminant Analysis (LDA)
LDA is widely used for dimensionality reduction and classification tasks, offering a robust framework for extracting meaningful features and maximizing class separability. By leveraging the
statistical properties of data, LDA reveals hidden patterns, enhancing our understanding and prediction capabilities.
This guide will take you through the fundamental concepts, techniques, and applications of Linear Discriminant Analysis. Starting with the core principles and assumptions, we'll cover the
step-by-step process, including data preprocessing, feature extraction, and the mathematical formulation of LDA.
Additionally, we'll explore performance evaluation metrics to assess the effectiveness of LDA in data classification. This knowledge will empower you to optimize your models and make informed
Introduction to Linear Discriminant Analysis
Linear Discriminant Analysis (LDA) is a powerful technique in the field of machine learning and data analysis. It provides a structured approach to data classification, enabling us to extract
valuable insights and make accurate predictions.
What is Linear Discriminant Analysis?
Linear Discriminant Analysis, also known as Fisher's Linear Discriminant, is a statistical method used for dimensionality reduction and classification tasks. It aims to find a linear combination of
features that maximally separates different classes in the data.
By focusing on discriminative information, LDA helps us identify the most relevant features that contribute to class separation, improving the accuracy of classification models.
Why is Linear Discriminant Analysis important for data classification?
Data classification plays a fundamental role in various domains, from image recognition and natural language processing to fraud detection and sentiment analysis. Linear Discriminant Analysis offers
a systematic approach to enhance data classification accuracy by reducing the dimensionality of the data while preserving class discrimination.
This allows us to handle high-dimensional datasets effectively and make informed decisions based on the extracted features.
Key benefits and applications of Linear Discriminant Analysis
Improved classification accuracy: By focusing on the most discriminative features, LDA helps improve the accuracy of classification models. It maximizes the separability between classes, reducing the
risk of misclassification and enhancing overall predictive performance.
Dimensionality reduction: LDA transforms high-dimensional data into a lower-dimensional space while preserving class discrimination. This not only simplifies the data representation but also reduces
computational complexity, making it easier to analyze and interpret the results.
Feature selection and interpretation: LDA identifies the most relevant features for classification, providing valuable insights into the underlying data structure. This feature selection capability
helps in understanding the factors that contribute significantly to differentiating classes, leading to more interpretable models.
Robustness to multicollinearity: Linear Discriminant Analysis is less sensitive to multicollinearity, a common issue where predictor variables are highly correlated. Unlike some other classification
algorithms, LDA can handle multicollinearity without compromising performance, making it a reliable choice for complex datasets.
Wide-ranging applications: Linear Discriminant Analysis finds applications across diverse domains. It has been successfully employed in image recognition, text classification, sentiment analysis,
medical diagnosis, and many other areas where accurate data classification is crucial.
Foundations of Linear Discriminant Analysis
Before diving into the practical aspects of LDA, it is important to establish a solid understanding of linear discriminant analysis foundations.
Core Concepts and LDA Assumptions
LDA operates under the assumption that the data follows a multivariate normal distribution, and each class has its own distribution with distinct mean vectors and a shared covariance matrix.
The key concept in LDA is to find a linear combination of features that maximizes the separation between classes while minimizing the within-class scatter. This is achieved by calculating
discriminant functions, which are linear combinations of the input features that provide the best separation between classes.
Understanding the Discriminant Function
The discriminant function is the heart of linear discriminant analysis. Represents a mathematical expression used to transform the input components into a value representing the probability of
belonging to a particular class. The discriminant function is calculated based on the estimated class means, the covariance matrix, and the prior probabilities for each class.
The goal of the discriminant function is to map the input data into a lower dimensional space where class separation is maximized. By comparing the discriminant function values for different classes,
we can assign each data point to the most possible class label.
Difference between LDA and PCA (Principal Component Analysis)
Befor we are going to learn the difference between them in more technical way, let’s understand the difference with an crayons example
Imagine you have a big box of crayons and you want to organize them.
PCA (Principal Component Analysis) is like trying to line up all your crayons by how similar their colors are. You want to see which colors are most different from each other and which ones look kind
of the same. So, with PCA, you might end up with a line of crayons that goes from lightest to darkest, but you're not really worried if they are from the same pack or different packs.
LDA (Linear Discriminant Analysis) is a bit different. Let's say some of the crayons have stickers on them, like star stickers or heart stickers. With LDA, you're trying to put the crayons in a line
where crayons with the same stickers are close together, and crayons with different stickers are far apart. So, you're focusing more on the stickers than just the colors.
In short:
• PCA is like organizing your crayons by color.
• LDA is like organizing your crayons by the stickers on them.
Now let’s understand in more technical way.
PCA focuses on finding the directions (principal components) that capture the most variation in the data, regardless of class designation. It tries to represent the data in a new coordinate system
where the parts are unrelated.
On the other hand, LDA tries to find a linear combination of features that maximizes the separation between classes while minimizing the within-class variance. Removes class labels and focuses on
maximum class separation.
How to Prepare Data for Linear Discriminant Analysis
Data preprocessing is a crucial step in the data analysis pipeline before applying Linear Discriminant Analysis (LDA). It involves handling missing values, addressing outliers, and performing feature
scaling and normalization.
These steps ensure that the data is in an appropriate format and that the LDA algorithm can effectively extract discriminative information from the features. Let's dive deeper into each aspect of
data preparation for LDA.
Handling Missing Values
Missing values can arise due to various reasons, such as data collection errors or incomplete records. Dealing with missing values is important to avoid biased or inaccurate results. There are
several approaches to handle missing values, including:
• Removal: If the number of instances with missing values is relatively small compared to the overall dataset, you can choose to remove those instances. However, caution should be exercised as
removing too many instances can lead to a loss of valuable information.
• Imputation: Imputation involves filling in missing values with estimated values based on other observed data. Simple imputation methods include replacing missing values with the mean, median, or
mode of the respective feature. More advanced techniques, such as k-nearest neighbors or regression-based imputation, can also be employed to infer missing values based on the relationships
between variables.
Handling Outliers:
Outliers are data points that deviate significantly from the majority of the dataset. They can arise due to data entry errors, measurement issues, or represent genuine extreme observations. Outliers
can potentially affect the LDA results, as they can distort the estimation of class means and covariance matrices. Here are some approaches to handling outliers:
• Removal: If outliers are the result of data collection errors or measurement issues, it may be appropriate to remove them from the dataset. However, it is crucial to carefully evaluate the impact
of removing outliers and consider the potential loss of valuable information.
• Robust statistics: Robust statistical techniques, such as median absolute deviation or the Winsorization method, can be used to estimate robust measures of central tendency and dispersion. These
methods are less influenced by extreme values and provide more reliable estimators.
Feature scaling and normalization
Feature scaling is important to ensure that features have a similar scale, as LDA is sensitive to the relative size of features. Here are common techniques for scaling and normalizing functions:
• Standardization: Standardization, also known as z-score normalization, transforms the data to have a mean of 0 and a standard deviation of 1. It takes the mean of each function and divides it by
its standard deviation. This technique ensures that segments have zero mean and equal variance. Min and Max
• Scaling: Min and Max Scaling changes the values of each function to a certain range, usually between 0 and 1. It takes the minimum value and divides it by the range (maximum value minus minimum
value). Minimum and maximum scaling preserves the relative relationships between data points.
How to use LDA for Feature Extraction and Dimensionality Reduction
Linear Discriminant Analysis (LDA) not only serves as a classification technique but also offers powerful feature extraction and dimensionality reduction capabilities. By leveraging the statistical
properties of the data, LDA can identify the most discriminative features and project the data onto a lower-dimensional space that preserves class separability.
Let's delve deeper into the intuition behind feature extraction, the process of reducing dimensionality using LDA, and how to interpret the results of the LDA transformation.
Intuition behind feature extraction
Feature extraction aims to transform the original set of features into a new set of features that capture the most discriminative information for classification. LDA achieves this by finding linear
combinations of the original features that maximize the separability between classes.
The intuition is to project the data onto a lower-dimensional space where the distances between classes are maximized while minimizing the scatter within each class.
Reducing dimensionality using LDA
LDA accomplishes dimensionality reduction by projecting the original high-dimensional feature space onto a lower-dimensional space. The number of dimensions in the reduced space is determined by the
number of unique classes in the dataset (number of classes minus one).
The reduced space is designed in a way that maximizes the separation between classes, making it easier to classify new instances.
The LDA transformation involves two main steps:
1. Computation of Class Means and Variance Matrices: LDA computes the mean vectors for each class and computes variance matrices that capture within-class and between-class variance. These matrices
provide valuable insight into the distribution and variability of the data.
2. Solving the eigenvalue problem: LDA solves the eigenvalue problem to find the linear discriminants, also known as eigenvectors, which represent the directions in the feature space where the data
show the most differences between classes. These eigenvectors are associated with the largest eigenvalues and indicate the best direction of projection.
Interpreting LDA transformation results
The results of the LDA transformation can be interpreted in several ways:
• Separation of classes: The goal of the LDA transformation is to maximize the separation between classes. A larger distance between classes in the reduced space provides better separation and
indicates that the LDA transform successfully captures the discriminative information.
• Discrimination values: The LDA transformation provides discrimination values for each instance, indicating closeness to each class. Instances with higher discriminant function values for a
particular class are more likely to belong to that class.
• Feature Importance: LDA provides insight into the importance of individual features for individual classes. The higher the absolute value of the coefficients of the linear discriminant functions,
the greater the influence of the corresponding part of the classification process.
Mathematical Formulation of Linear Discriminant Analysis
Linear discriminant analysis (LDA) is a statistical technique that aims to find a linear combination of features that maximizes the separation between classes. Using the statistical properties of the
data, LDA can efficiently identify the most discriminating directions in the feature space.
In this section, we take a closer look at the mathematical formulation of LDA, including the equations involved in computing the mean classes, the covariance matrices, and the eigenvectors and
eigenvalues used in the LDA transformation.
Linear Discriminant Analysis equations
Linear Discriminant Analysis (LDA) involves several equations that play a crucial role in the calculation and transformation of the data. These equations help us compute class means, covariance
matrices, and the eigenvectors and eigenvalues used in LDA
Calculating class means and covariance matrices
To begin with, LDA involves computing the class means and covariance matrices. The class mean vector for each class represents the average feature values of instances belonging to that class. For a
dataset with C classes and N instances, the mean vector for class c, denoted as μc, is calculated as the sum of the feature vectors divided by the number of instances in that class:
μc = (1/Nc) * ∑xi
Here, Nc represents the number of instances in class c, and xi represents the feature vector of instance i.
Next, LDA involves calculating the within-class scatter matrix (Sw), which captures the spread or variance of the data within each class. The within-class scatter matrix is obtained by summing up the
covariance matrices for each class. The covariance matrix for class c, denoted as Sc, is computed as:
Sc = ∑(xi - μc)(xi - μc)ᵀ
In this equation, xi represents the feature vector of instance i belonging to class c, and μc is the mean vector of class c. By summing up the covariance matrices for all classes, we obtain the
within-class scatter matrix Sw.
The between-class scatter matrix (Sb) quantifies the separation between classes and is computed by considering the differences between the class means. The between-class scatter matrix is defined as:
Sb = ∑(μc - μ)(μc - μ)ᵀ
Here, μ represents the overall mean vector calculated as the average of all class means:
μ = (1/C) * ∑μc
Eigenvectors and eigenvalues in LDA
Once the class means and covariance matrices are computed, the next step in LDA involves finding the eigenvectors and eigenvalues of the matrix (Sw^(-1)) * Sb. These eigenvectors represent the
directions in the feature space along which the data exhibits the most separation between classes.
To obtain the eigenvectors, we solve the eigenvalue problem:
(Sw^(-1)) * Sb * w = λ * w
In this equation, w represents the eigenvector, and λ represents the corresponding eigenvalue. The eigenvectors derived from this eigenvalue problem represent the optimal directions of projection
that maximize class separability.
The eigenvalues associated with each eigenvector indicate the importance or discriminative power of that eigenvector. Higher eigenvalues suggest greater separation between classes along the
corresponding eigenvector.
Implementing Linear Discriminant Analysis: Step-by-Step Guide
Linear discriminant analysis (LDA) is a statistical technique that aims to find a linear combination of features that maximizes the separation between classes. Using the statistical properties of the
data, LDA can efficiently identify the most discriminating directions in the feature space.
In this section, we take a closer look at the mathematical formulation of LDA, including the equations involved in computing the mean classes, covariance matrices, and eigenvectors and eigenvalues
used in the LDA transformation.
Data splitting for training and testing
Before implementing LDA, it is important to split our dataset into training and testing subsets. This allows us to train the LDA model on a portion of the data and evaluate its performance on unseen
data. We can use the train_test_split function from scikit-learn to achieve this. Here's an example:
Linear Discriminant Analysis Implementation with Python
To implement LDA, we can utilize the LinearDiscriminantAnalysis class from scikit-learn. This class provides the necessary functionality for dimensionality reduction and classification using LDA.
Here's an example:
Training and fitting the LDA model
Next, we can train and fit the LDA model using the training data. This involves learning the discriminant information from the data to find the optimal projection vectors. Here's an example:
Making predictions and evaluating performance
After training the LDA model, we can evaluate its performance on the testing data. This helps us assess how well the model generalizes to unseen instances. Here's an example:
Evaluating the Performance of Linear Discriminant Analysis
Once you have trained and tested your Linear Discriminant Analysis (LDA) model, it's essential to evaluate its performance. This section discusses several metrics and techniques commonly used for
evaluating the performance of a classification model, including LDA.
Metrics for model evaluation
When assessing the performance of a classification model, you can consider various metrics, depending on your specific requirements. Some commonly used metrics include:
• Classification Accuracy: It measures the proportion of correctly classified instances out of the total number of instances.
• Confusion Matrix: A table that provides a detailed breakdown of the model's predicted and actual class labels, enabling the calculation of various metrics.
• Precision: It calculates the ratio of correctly predicted positive instances to the total predicted positive instances, indicating the model's ability to avoid false positives.
• Recall: Also known as sensitivity or true positive rate, it calculates the ratio of correctly predicted positive instances to the total actual positive instances, indicating the model's ability
to identify all positive instances.
• F1 Score: It combines precision and recall into a single metric, providing a balanced measure of the model's performance.
Using Linear Discriminant Analysis On Iris classification Dataset
What is Iris Flower classification
The Iris dataset is a well-known dataset in machine learning, consisting of measurements of four features (sepal length, sepal width, petal length, and petal width) from three different species of
iris flowers (setosa, versicolor, and virginica).
In this case study, we will explore how Linear Discriminant Analysis (LDA) can be used to classify the iris flowers based on their features.
Step 1: Data Exploration and Preprocessing:
We start by loading the Iris dataset and examining its features and target classes. We then preprocess the data by standardizing the feature matrix using StandardScaler() to ensure that all features
have zero mean and unit variance.
Step 2: Data Splitting:
Next, we split the preprocessed data into training and testing subsets using train_test_split() from scikit-learn. This allows us to train the LDA model on a portion of the data and evaluate its
performance on unseen data.
Step 3: Linear Discriminant Analysis:
We create an instance of LinearDiscriminantAnalysis() as lda and fit the LDA model using the training data. This step involves learning the discriminant information from the data and finding the
optimal projection vectors.
Step 4: Data Visualization:
To visualize the LDA-transformed data, we plot a scatter plot where each class is represented by a different color for the original data as well as the data after LDA. This helps us visualize the
separation of the iris flowers in the LDA space.
Step 5: Classification and Evaluation:
We transform the testing data using transform() to obtain the LDA-transformed features. Then, we use predict() to classify the LDA-transformed testing data. Finally, we calculate the accuracy of the
LDA model by comparing the predicted classes with the true classes.
• Accuracy: 1.0
• Precision: 1.0
• Recall: 1.0
• F1 Score: 1.0
• Confusion Matrix: [[10, 0, 0], [0, 9, 0], [0, 0, 11]]
FAQ on Linear Discriminant Analysis (LDA)
1. What is LDA?
LDA, or Linear Discriminant Analysis, is a statistical method used to find the "direction" that maximizes the separation between multiple classes in a dataset.
2. Why is LDA used?
LDA is primarily used for dimensionality reduction and classification tasks, especially when you want to separate and classify data into distinct groups or classes.
3. How is LDA different from PCA?
While both LDA and PCA are used for dimensionality reduction, PCA focuses on explaining variance in data, irrespective of classes. In contrast, LDA aims to maximize the separation between different
4. Can LDA be used for regression problems?
No, LDA is specifically designed for classification problems. For regression tasks, other techniques, such as linear regression, should be used.
5. Does LDA assume anything about the data?
Yes, LDA assumes that the features (variables) in your dataset are normally distributed and have the same covariance matrix for all classes.
6. How many components can LDA extract?
The number of components LDA can extract is one less than the number of classes in the data. So, for a dataset with three classes, LDA can extract up to two components.
7. Is LDA sensitive to feature scaling?
Yes. Just like many other machine learning algorithms, LDA can be sensitive to feature scaling. It's often a good practice to scale your data before applying LDA.
8. Can LDA handle non-linear data?
LDA, by its very nature, is linear. If the classes in the data have a non-linear boundary, other techniques, such as kernel methods or neural networks, might be more suitable.
9. What are the advantages of LDA?
A key benefit of LDA is its unsupervised nature, which means it doesn't require pre-labeled categories or topics for your documents. LDA can independently identify potential topics in the data and
determine the likelihood of each document pertaining to those topics.
In this beginner's guide to Linear Discriminant Analysis (LDA), we have covered the foundations, implementation, and evaluation of LDA for data classification and dimensionality reduction. LDA is a
powerful technique that improves classification accuracy and facilitates data interpretation. We discussed the importance of LDA, its key benefits, and real-world applications.
We explored the core concepts, assumptions, and data preparation techniques for LDA. Additionally, we provided an overview of the mathematical formulation of LDA and presented a step-by-step guide to
implementing LDA in Python. By understanding LDA, beginners can effectively apply it for data analysis tasks and further enhance their knowledge in machine learning.
Recommended Courses
Follow us:
I hope you like this post. If you have any questions ? or want me to write an article on a specific topic? then feel free to comment below. | {"url":"https://dataaspirant.com/linear-discriminant-analysis/","timestamp":"2024-11-03T00:08:15Z","content_type":"application/xhtml+xml","content_length":"282666","record_id":"<urn:uuid:117282d2-562c-4cd2-a88a-bce517dad5c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00433.warc.gz"} |
Bagaria, Joan da Silva, Samuel G. 2024-07-08T11:23:53Z 2024-07-08T11:23:53Z 2023-01-01 0166-8641 http://hdl.handle.net/2445/214429 744329 We present more applications of the recently introduced
-strongly compact cardinals in the context of either consistency or reflection results in General Topology, focusing on issues related to normality. In particular, we show that such large cardinal
notion provides a new upper bound for the consistency strength of the statement “All normal Moore spaces are metrizable” (NMSC). The proof uses random forcing, as in the original consistency proof of
NMSC due to Nykos-Kunen-Solovay (see Fleissner [10]). We establish a compactness theorem for normality (i.e., reflection of non-normality) in the realm of first countable spaces, using the least
-strongly compact cardinal, as well as two more similar compactness results on related topological properties. We finish the paper by combining the techniques of reflection and forcing to show that
our new upper bound for the consistency strength of NMSC can be also obtained via Cohen forcing, using some arguments from Dow-Tall-Weiss. eng http://creativecommons.org/licenses/by-nc-nd/3.0/es/
info:eu-repo/semantics/openAccess cc-by-nc-nd (c) Joan Bagaria et al., 2023 $\omega_1$-strongly compact cardinals and normality info:eu-repo/semantics/article info:eu-repo/semantics/publishedVersion | {"url":"https://hispana.mcu.es/eu/registros/download.do?tipoRegistro=MTD&id=55234478&formato=xml_raw¶m_formato_metadatos=mods","timestamp":"2024-11-08T23:53:54Z","content_type":"application/xml","content_length":"3280","record_id":"<urn:uuid:3537f7ed-8972-433e-a62a-a4a90631fad0>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00447.warc.gz"} |
Estimating Multiplication And Division Worksheets - Divisonworksheets.com
Estimating Multiplication And Division Worksheets
Estimating Multiplication And Division Worksheets – You can help your child practice and review their skills in division by using division worksheets. Worksheets are available in a broad variety, and
you can make your own. These worksheets are amazing because they are available for download at no cost and modify the exact layout you desire you want them to be. They’re great for first-graders,
kindergarteners, and even second-graders.
Two persons can produce enormous numbers
When dividing large numbers, a child should practice using worksheets. There are often only two, three or four divisors listed on worksheets. This won’t cause your child stress about forgetting how
to divide the large number, or failing to remember their times tables. For your youngster to develop their mathematical skills, you can download worksheets online , or print them from your computer.
Use worksheets for multidigit division to help children practice and improve their understanding of the subject. It’s an important mathematical ability that’s required for complicated calculations,
as well as other tasks in everyday life. The worksheets can be interactive and include tasks and questions that are focused on division of multidigit integers.
The task of dividing huge numbers can be quite difficult for students. These worksheets are often based on a common algorithm that provides step-by-step instructions. This may cause students to not
have the knowledge required. One way to teach long division is to employ bases 10 blocks. Long division ought to be easy for students after they have understood the steps.
Utilize a range of worksheets or questions to practice division of large amounts. These worksheets incorporate fractional calculations with decimals. Additionally, you can find worksheets for
hundredsths, which are particularly helpful for learning how to divide large amounts of money.
Sort the numbers to create small groups.
It may be difficult to assign a number small groups. It may seem great on paper but many participants of small groups dislike this method. It is a natural reflection of how the body develops and can
help to facilitate the Kingdom’s continuous growth. It encourages others to reach to the forgotten and search for new leadership to guide the way.
This is a fantastic method for brainstorming. It is possible to form groups of people who share similar experience and traits. You can think of creative solutions in this way. After you’ve created
your groups, it’s time to introduce yourself and your group members. It’s a good activity that stimulates creativity and innovation.
It is used to divide huge numbers into smaller units. It is useful when you want to make the same quantity of things for multiple groups. One example is an entire class which could be divided into
five groups. These groups can then be added together to create 30 pupils.
You must keep in mind that there are two different types of numbers that you can use to divide numbers: the divisor and the Quotient. What you get when you divide one by another is “ten/five,” while
dividing two by two gives the same result.
It’s a good idea to use the power of ten to calculate big numbers.
Splitting massive numbers into powers can aid in comparing them. Decimals are a common aspect of grocery shopping. They are easily found on receipts as well as on price tags and food labels. Even
petrol pumps utilize them to display the price per gallon, as well as the amount of gas that is dispensed through a funnel.
There are two methods to divide large numbers into powers of 10. The first is by shifting the decimal to the left and multiplying by 10-1. The second method makes use of the associative feature of
powers of ten. Once you’ve learned the associative property of powers of 10, you are able to divide an enormous number into smaller powers that are equal to 10.
The first technique is based on mental computation. If you divide 2.5 by the power of ten, you’ll see an underlying pattern. As the power of ten increases, the decimal position will shift towards the
left. This principle is simple to comprehend and can be applied to any issue, no matter how difficult.
The other option is mentally splitting large numbers into powers ten. It is easy to express massive numbers by using the scientific notation. When using scientific notation, large numbers must be
written using positive exponents. It is possible to change the decimal position five spaces on one side and convert 450,000 to 4.5. You can divide a large number into smaller powers than 10 or split
it into smaller power of 10.
Gallery of Estimating Multiplication And Division Worksheets
Free Printable Estimating Division Worksheets PDF Number Dyslexia
Estimation Worksheets For 3rd Grade Multiplication Using Estimation
Estimation Worksheet 2
Leave a Comment | {"url":"https://www.divisonworksheets.com/estimating-multiplication-and-division-worksheets/","timestamp":"2024-11-07T07:10:51Z","content_type":"text/html","content_length":"61966","record_id":"<urn:uuid:672860be-ed72-4591-9c04-724d26baff40>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00807.warc.gz"} |
An elephant has a mass of 156,000 kg and a volume of 6 kL. What is the elephant's density? | Socratic
An elephant has a mass of 156,000 kg and a volume of 6 kL. What is the elephant's density?
1 Answer
The numbers are not realistic! But see below....
Density is simply mass per unit volume.
Typical units are $g . c {m}^{-} 3$ or $k g . {m}^{-} 3$.
1 cubic metre is equivalent to 1000 litres, so units of "thousands of litres" (kl) and units of cubic metres are numerically identical.
So the mass is 156,000 kg, and the volume is 6 ${m}^{3}$
The density is therefore 156,000/6 = 26,000 $k g . {m}^{-} 3$
In reality, this is an unrealistic figure though. The density of water is 1000 $k g . {m}^{-} 3$ - a human being typically has density a bit lower, which is why we can swim in water. If an elephant
had a density 26 times that of water it would sink like a stone whenever it went into a river! As a comparison, solid lead has a density of around 11,500 $k g . {m}^{-} 3$.
So whoever set you this question needs to use some slightly more realistic numbers, but you can see the methodology anyway.
Impact of this question
10392 views around the world | {"url":"https://socratic.org/questions/an-elephant-has-a-mass-of-156-000-kg-and-a-volume-of-6-kl-what-is-the-elephant-s","timestamp":"2024-11-03T06:04:33Z","content_type":"text/html","content_length":"34483","record_id":"<urn:uuid:0bd7647c-321a-4bc3-88f5-6619247b984d>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00463.warc.gz"} |
Program for Merge Sort in C
In this tutorial, you will get program for merge sort in C.
Merge sort runs in O (n log n) running time. It is a very efficient sorting data structure algorithm with near optimal number of comparisons. Recursive algorithm used for merge sort comes under the
category of divide and conquer technique. An array of n elements is split around its center producing two smaller arrays. After these two arrays are sorted independently, they can be merged to
produce the final sorted array. The process of splitting and merging can be carried recursively till there is only one element in the array. An array with 1 element is always sorted.
An example of merge sort in C is given below. First divide the list into the smallest unit (1 element), then compare each element with the adjacent list to sort and merge the two adjacent lists.
Finally all the elements are sorted and merged.
Program for Merge Sort in C
void mergesort(int a[],int i,int j);
void merge(int a[],int i1,int j1,int i2,int j2);
int main()
int a[30],n,i;
printf("Enter no of elements:");
printf("Enter array elements:");
printf("\nSorted array is :");
printf("%d ",a[i]);
return 0;
void mergesort(int a[],int i,int j)
int mid;
mergesort(a,i,mid); //left recursion
mergesort(a,mid+1,j); //right recursion
merge(a,i,mid,mid+1,j); //merging of two sorted sub-arrays
void merge(int a[],int i1,int j1,int i2,int j2)
int temp[50]; //array used for merging
int i,j,k;
i=i1; //beginning of the first list
j=i2; //beginning of the second list
while(i<=j1 && j<=j2) //while elements in both lists
while(i<=j1) //copy remaining elements of the first list
while(j<=j2) //copy remaining elements of the second list
//Transfer elements from temp[] back to a[]
If you have any doubts regarding above program for merge sort in C then comment below.
56 thoughts on “Program for Merge Sort in C”
what is the compiler used here?
I have used gcc compiler and codeblocks ide.
when left recursion is performed then recursively function is called and when i<j condition false then program execution stopped then when right recursion is performed.
I hope you may know C follows top to bottom approach when the statement finished execution immediately the compiler executes the next statement.
So after completing the left recursion it comes back and starts the next execution of next statement.
Hey sorry, but still i didn’t get about the right recursive function. There will be three iteration of left recursion and after that when value of p,q,r becomes 0,1,0 then it will perform
the next statement which is right recursion.
How you have taken array for temp to array of a that for loop i didnt understand
using keyboard
very good question
temp[k] is the array where all sorted elements are stored. using the for loop we display the temp array ..hope you got it
why are you using temp[50] when your original array is of 30 elements only. i am having this problem in my code that it does segmentation fault if i define temp array of same size as of my
original array. but my program runs fine when i define large temp array.
//////////what’s wrong with this code can anybody please tell me…ivalid indirection is what erroe it i showing…/////
void main()
int a[20],i,n;
printf(“enter the length of the array\n”);
printf(“enter the elements in the array\n”);
printf("the sorted array is\n");
int merge_sort(int a[], int p, int r)
int q;
return 0;
merge(int a, int p, int q,int r)
int k,i,j,c,b,L[20],R[20];
return 0;
Your first two lines are incomplete.
Also, there won’t be return 0 at the end if you are defining main function as void.
U just put in header
First u need do complete the declaration of header files.. like #include .
And after that u must need too declare your function definitions after those header files.
starting two lines are incorrect and return zero shouldn’t be required here.if u really want to see that the program is corect or not then execute in turbo C++….dont do show off
First two lines are incomplete.
i am vreri vreri intreshtead programing
why we copy remaining elements of the first list and second list?
For eg out of 2 arrays , left side array got sorted which means all remaining elements of right array are greater than greatest element of left array so print remaining elements.
Den can u tell me you v have to use K in merging?
I have the same question in my mind as questioned by raj mishra .
when left recursion is performed then recursively function is called and when i<j condition false then program execution stopped then when right recursion is performed ?
bro for every fuctn call a activation record in the form of stack is created in the memory and at same the program counter value for next statement is saved …when this I<j condition becomes
false pop operation will be performed along with the completion of next statement that time rightside recusion will carry out in same manner
hy how r sir how to combine quick sort ,heapsort, mergesort in switch statement plz send me the source
code for it:
hello sir… how u calculated the time taken for execution in the above program?
It is inbuilt compiler feature. I am using codeblocks.
Plz explain it by using its algorithm
could you please do a c++ code for the same sorting algorithm.
THANKS BROTHER
U MADE IT LOOK SIMPLE…
Yes Yes Yes.I do have a doubt.
mergesort(a,i,mid); //left recursion
mergesort(a,mid+1,j); //right recursion
As per my understanding left recursion will execute until the condition is false and then the control moves over to the right recursion.
My question is at mergesort(a,mid+1,j),the new values of mid ,low and high will be from main function? OR from the mergesort(a,i,mid) ?
And the same question applies here too :
mergesort(a,mid+1,j); //right recursion
merge(a,i,mid,mid+1,j); //merging of two sorted sub-arrays
please calculate the time complexity
It simplifies a begginers approach. Great work.
Please provide logic calculations for this program
why not dynamically allocate?
Thank you it is vrey usefull.
This is great! Thank you so much 🙂
I made some minor modifications when declaring the arrays though:
1st modification:
| int a[30],n,i;
| printf(“Enter no of elements:”);
| scanf(“%d”,&n);
| printf(“Enter array elements:”);
| for(i=0;i<n;i++)
| scanf("%d",&a[i]);
my version:
| int n;
| printf("Enter no of elements:");
| int a[n], i;
| scanf("%d",&n);
| printf("Enter array elements:");
| for(i=0;i<n;i++)
| scanf("%d",&a[i]);
2nd modification:
void merge(int a[],int i1,int j1,int i2,int j2)
int temp[50]; //array used for merging
my version:
void merge(int a[],int i1,int j1,int i2,int j2)
int temp[j2-i1+1]; //array used for merging
As I understand, this would declare just enough elements for each array.
1st modification I put some lines in slightly akward order! Should have looked like this
my version:
| int n;
| printf(“Enter no of elements:”);
| scanf(“%d”,&n);
| int a[n], i;
| printf(“Enter array elements:”);
| for(i=0;i<n;i++)
| scanf("%d",&a[i]);
excellent work but if u will elaborate the concept of how recursion works here , it would be very helpful
can u explain how recursion is working internally in this program,recursion concepts are not clear of mine,it would be very nice of u
I was wondering if you could write the code with c++……..
thanks a lot…
Can anyone explain or expand this loop
Sir my code doesn’t work after entering the array element ……..means it doesn’t give the sorted list …..
pls can i get the code for merge sort in c++?
instead of i and j in the last for loop use the variable k alone.
Is it Merge Sort time complexit?
Void main( ) missing
very helpfull
Can u please explain the program clearly
With the use of variables used and so on!
Please upload algorithm as well…
mast program
i love u
Implement a Merge Sort algorithm to sort a given set of elements and determine the time required to sort the elements. The elements can be read from a file or can be generated using the random
number generator.
i want the solution of this question
Plz mail me the solution
void mergesort(int a[],int i,int j);
void merge(int a[],int i1,int j1,int i2,int j2);
int main()
int a[30],n,i;
printf(“Enter no of elements:”);
printf(“Enter array elements:”);
printf("\nSorted array is :");
printf("%d ",a[i]);
return 0;
void mergesort(int a[],int i,int j)
int mid;
void merge(int a[],int i1,int j1,int i2,int j2)
int temp[50];
int i,j,k;
while(i<=j1 && j<=j2)
Leave a Comment | {"url":"https://www.thecrazyprogrammer.com/2014/03/c-program-for-implementation-of-merge-sort.html","timestamp":"2024-11-02T09:38:09Z","content_type":"text/html","content_length":"241290","record_id":"<urn:uuid:5880a563-ec93-4a6d-a7d4-b265b27faf02>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00155.warc.gz"} |
Flow shop for dual CPUs in dynamic voltage scaling
We study the following flow shop scheduling problem on two processors. We are given n jobs with a common deadline D, where each job j has workload p[i,j] on processor i and a set of processors which
can vary their speed dynamically. Job j can be executed on the second processor if the execution of job j is completed on the first processor. Our objective is to find a feasible schedule such that
all jobs are completed by the common deadline D with minimized energy consumption. For this model, we present a linear program for the discrete speed case, where the processor can only run at
specific speeds in S={s[1],s[2],⋯,s[q]} and the job execution order is fixed. We also provide a m^α−1-approximation algorithm for the arbitrary order case and for continuous speed model where m is
the number of processors and α is a parameter of the processor.
We then introduce a new variant of flow shop scheduling problem called sense-and-aggregate model motivated by data aggregation in wireless sensor networks where the base station needs to receive data
from sensors and then compute a single aggregate result. In this model, the first processor will receive unit size data from sensors and the second processor is responsible for calculating the
aggregate result. The second processor can decide when to aggregate and the workload that needs to be done to aggregate x data will be f(x) and another unit size data will be generated as the result
of the partial aggregation which will then be used in the next round aggregation. Our objective is to find a schedule such that all data are received and aggregated by the deadline with minimum
energy consumption. We present an O(n^5) dynamic programming algorithm when f(x)=x and a greedy algorithm when f(x)=x−1.
Finally, we investigate the performance of the flowshop problem when the order of jobs is fixed by comparing it to the approximation algorithm with an arbitrary order. We show experimentally that the
approximation ratio is close to 1 when there are few machines and when there are more jobs.
• Flowshop
• Scheduling
• Speed scaling
Dive into the research topics of 'Flow shop for dual CPUs in dynamic voltage scaling'. Together they form a unique fingerprint. | {"url":"https://scholars.ln.edu.hk/en/publications/flow-shop-for-dual-cpus-in-dynamic-voltage-scaling","timestamp":"2024-11-12T07:12:33Z","content_type":"text/html","content_length":"60912","record_id":"<urn:uuid:ff07da9d-f789-45a6-98e3-ca0aed5c135f>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00498.warc.gz"} |
1 Introduction
The spin–spin coupling analysis of 1D NMR spectra describes how nuclei are scalarly coupled within molecules. Scalar couplings are important NMR parameters that provide constraints for the building
of 3-D molecular structures [1]. The high magnetic fields at which NMR is performed nowadays reduce the probability of strongly coupled homonuclear spin systems, so that first-order analysis is most
often sufficient for measuring coupling constants. Each multiplet can therefore be analyzed independently of the others within the same spin system. The automatic extraction of coupling constants is
a problem to which solutions have already been proposed ([2–11], and references cited therein). Human interpretation of a multiplet structure relies on the recognition of elementary peak cluster
shapes [12]. Automated methods for first order multiplet analysis mimic this process [11].
A method based on time-domain analysis has already been described by our group [13,14]. Its basic principle was later extended to increase its performance. This communication describes the process
that is presently implemented in the AUJ (automatic J couplings) computer software (www.univ-reims.fr/LSD/JmnSoft/Auj).
AUJ is implemented as a GIFA [15] macro command that performs some pre-processing and calls a binary program whose source file is written in C language. The latter uses a library that contains the
truely active part of the AUJ algorithm and that is designed to be invoked from any type of NMR processing software. The library uses the time-domain data as input and produces a set of coupling
constants as well as the corresponding reconstructed time-domain signal.
2 Process outline
Analysis starts with data point extraction of the user-selected multiplet. The multiplet is then converted to complex time-domain data by inverse Fourier transformation. Alternatively, a column of a
2-D J-resolved spectrum can also be simply extracted by the user and back-converted to real time-domain data. The AUJ algorithm is able to handle both input types. Complex data is supposed to
originate from a non-centered non-symmetrical multiplet and therefore must be transformed to real data (step 1). The ‘log-abs’ algorithm [13,14] is then used (step 2) to produce a 1-D J-spectrum that
is then optimized (step 3) and analyzed to produce a first set of coupling constants and multiplicities (step 4). At this stage, J values can be accurate but multiplicities are usually not. The
latter are optimized (step 5) and the resulting values used to refine the final J values (step 6). Finally, the centered multiplet (when input data is complex), the reconstructed time-domain data,
and the J values are exported back to the calling GIFA macro command and made visible through its graphical user interface, so that an algorithm failure is immediately visible. The ratio of the
spectral noise level with the root-mean-square deviation between original and reconstructed multiplets is also a good performance indicator. The analysis process depends on empirically adjusted
parameters whose default values, provided in the following paragraphs, can easily be adjusted by means of a single control panel. In practice, the proposed defaut parameter values fit well with most
of the situations that were tested by the authors.
2.1 Step 1: Multiplet centering and symmetrization
Time-domain data is first zero-filled (16 times by default) to obtain a high resolution multiplet by Fourier transformation. Then, a pivot frequency is searched so that frequency reversal around the
pivot point leads to a spectrum that looks, at best, identical to the original one. Reversal and comparison are not directly performed on the high-resolution spectrum but on a binary-valued version
of it that is built as follows. The real part of the spectrum is first divided by its Euclidian norm. Each value in the normalized spectrum is replaced by 1 if its value is greater than a given
threshold (0.03 by default) and, if not, replaced by 0. The pivot position is selected so that the number of ones is maximum in the point-by-point product of the binary spectrum by its reversed
version. The optimum pivot frequency value is used to frequency shift the original time-domain signal by multiplying it by the adequate linear phase ramp function. Frequency symmetrization is
achieved de facto by setting the imaginary part of the resulting time-domain data to zero. Centering failure may occur for strongly unsymmetrical multiplets and threshold adjustment may be necessary.
2.2 Step 2: ‘log-abs’ analysis
A first-order multiplet is described in the time domain by:
$s–(tj)=A exp(–tj/T2*)∏i=1N[cos (πJ–itj)]n–i$ (2)
in which $tj (1≤j≤M)$ is the signal measurement time, ɛ the noise, $S–$ the signal model in the absence of noise, A the signal amplitude, $T2*$ the apparent transverse relaxation time, $J–i (1≤i≤N)$
the i^th possible coupling constant value, and $n–i$ the associated multiplicity. If $n–i=0$, then $J–i$ is not a coupling constant of the multiplet, while $n–i=1$ means that $J–i$ gives rise to a
doublet, $n–i=2$ means that $J–i$ gives rise to a triplet, and so on.
The model described by equations (1) and (2) is slightly different from the usual one, because it imposes N (80 by default) predefined J values:
$J–i=J–min+iN(J–max–J–min)$ (3)
so that J ∊]$J–min$, $J–max$]. However, in this model, A, $T2*$ and all $n–i$ can be found through the ‘log-abs’ transformation. The equation:
$log |s–(tj)|=log |A|·1–1T2*·tj+∑i=1Nn–i·log |cos (πJ–itj)|$ (4)
indicates that the unknowns can be found by a linear fit of s(t) with a set of N+2 basis functions: 1, t, $log |cos (πJ–it)|$.
Considering that
the data points s(t[j]) having the smallest absolute value are those whose logarithm is, at most, affected by noise. Only M′ (N≤M′≤M) t[j] values are kept for the ‘log-abs’ analysis, those for
which the |s(t[j])| values are the highest. The M′/M ratio is defined by the user (1/2 by default). In order to avoid t[j] and $J–i$ combinations for which $cos(πJ–itj)$ equals zero, $J–min$ and
$J–max$ are chosen so that $πJ–min$ and $πJ–min$ (respectively 2 and 60 s^–1 by default) are rational numbers. The function $n–(J–)$ is called the 1-D multiplet J-spectrum.
2.3 Step 3: Optimization of the 1-D J-spectrum
The crude 1-D J-spectum does not exploit all available data and is strongly affected by noise. Its refinement is possible through the minimization of the least squares residue R:
$R=∑j=1M (|s(tj)|–|s–(tj)|)2$ (6)
considered as a fonction of A, $T2*$ and all $n–i$. The absolute values in the expression of R are necessary to be able to calculate non-integer powers of $cos(πJ–itj)$. Minimization of R is achieved
through an iterative conjugate gradient algorithm. The $n–i$ values found at step 2 are not directly used as a minimization starting point. Those that are less than a given threshold (0.05 by
default) are replaced by zeros. The threshold value can be increased if the multiplet is not ideal, for example, if it presents a non-Lorentzian lineshape or a strong noise level.
2.4 Step 4: Analysis of the refined 1-D J-spectrum
Each series of contiguous $n–i$ values (i[min]≤i≤i[max]) so that all $n–i$ are greater than a threshold (0.05 by default) is viewed as a peak in the 1-D J-spectrum. The position of the mass
center of the k^th peak (1≤k≤K^*) is considered as the J[k] coupling constant value and the non-integer peak integral, $nk*$, rounded to the closest integer value, as the associated n[k]
multiplicity [14]:
$Jk=1nk*∑i=iminimaxn–iJ–i$ (8)
2.5 Step 5: Multiplicity and $T2*$ optimization
This step is a grid search of the best integer multiplicities and $T2*$ values so that the linear fit residue of s(t) with:
$s*(t)=exp (t/T2*)∏k=1K*[cos (πJkt)]nk$ (9)
is minimum. The n[k] values from step 4 are simply ignored (they were useful to find all J[k]) and systematically replaced by values drawn from the [0, n[max]] interval (n[max]=4 by default), while
$T2*$ values are drawn from a predefined set ({0.1 s, 0.2 s, 0.4 s, 0.7 s, 1.1 s} by default). It often happens that a particular n[k] is at best equal to zero. The corresponding J[k] value is
removed from the set of J values and $K*$ decreased accordingly.
2.6 Step 6: J[k] optimization
The low resolution in the 1-D J-spectrum may lead to believe that two different J values are identical. Therefore, the multiplet is considered as being produced by the effect of K independent
couplings with:
Each J[k] value is perturbed by addition of a small deviation drawn from a random number generator (±0.05 Hz by default). The J[k] and $T2*$ obtained in step 5 are used as the starting point for a
conjugate gradient residue minimization of the linear fit of s(t) with s*(t). The user is left to decide whether two very close final J values are the same or not. This is a difficult decision when
no standard deviation values have been evaluated. As already mentioned in [16], error evaluation is strongly dependent on the noise autocorrelation properties and therefore is beyond the scope of
this Communication.
3 Results
The proton NMR spectrum of sucrose in DMSO-d[6] was recorded at 500 MHz. The quadruplet-like signal at δ=3.81 ppm (Fig. 1, left) is obviously more complex than a regular quadruplet. This particular
multiplet would clearly be a difficult problem to solve for any method based on peak list analysis.
Fig. 1
The multiplet at δ=3.81 ppm (TMS as reference) in the ^1H NMR spectrum of sucrose dissolved in perdeuterated DMSO (left). The multiplet that is reconstructed from the AUJ analysis, with J=
5.37 Hz, 7.04 Hz, 8.14 Hz and $T2*$=0.1 s (right).
The 1-D J-spectrum is analyzed as J=1.47 Hz (0.55), 2.13 Hz (0.54), 5.57 Hz (1.32), 7.20 Hz (0.72), and 8.25 Hz (0.80), where numbers in parenthesis are the $n–$ non-integer multiplicity values.
Multiplicity refinement eliminates the two lowest J values and proposes the remaining ones to correspond to doublets. This behavior is very common, as the not strictly Lorentzian peak shape may be
seen by the algorithm as originating from small, non-resolved coupling constants. The final refinement of the J values produces three close but different coupling constants: J=5.37 Hz, 7.04 Hz, and
8.14 Hz, giving rise to a reconstructed multiplet that is very similar to the original one (Fig. 1, right).
A more difficult problem was given to AUJ, the analysis of a multiplet that is simulated on the basis of results presented in [11] for the methine proton of 3-bromo-2-methyl-1-propanol (Fig. 2,
left). The multiplet is a quadruplet of quintet with J=5.43 Hz and 6.76 Hz, respectively. Some computer generated noise is added to the spectrum and the line width is chosen so that peak clusters
are poorly resolved. The 1-D J-spectrum is interpreted as J=1.10 Hz (0.77), 5.46 Hz (1.24), and 6.69 Hz (1.80). Again, the noise introduces unwanted small coupling constants in the 1-D J-spectrum.
The multiplicity optimization step finds the correct result and the final refinement produces J[1]=5.39 Hz, J[2]=5.40 Hz, J[3]=5.41 Hz, J[4]=J[5]=J[6]=6.78 Hz, J[7]=6,79 Hz, a result
that compares well to what is expected.
Fig. 2
Experiment on simulated data, using parameters published in [11] for the methine proton of 3-bromo-2 methyl-1-propanol. Simulated spectrum, using J[1]=J[2]=J[3]=5.43 Hz, J[4]=J[5]=J[6]=J
[7]=6.76 Hz and $T2*$=0.25 s. Some computer-generated Gaussian noise was added (left). The multiplet that was reconstructed from the AUJ analysis, with J[1]=5.39 Hz, J[2]=5.40 Hz, J[3]=
5.41 Hz, J[4]=J[5]=J[6]=6.78 Hz, J[7]=6,79 Hz, and T =0.254 s (right).
4 Conclusion
This Communication shows that the AUJ algorithm provides a pertinent way to analyze complex multiplets. The modeling of time-domain data ensures a reliable result on poorly resolved signals, even
though non-ideal lineshapes and high noise levels may lead the user to modify the default algorithm parameters. However, it should always been remembered that the best first order analysis algorithm
ever written cannot provide a safe and useful result if carried out on a part of a strongly coupled spin system. Further development of AUJ will deal with parameter selection improvement, the
interfacing of the algorithm with commercial NMR data processing software, as well as its extension to slices of 2-D NMR spectra, different from J-resolved ones, in order to analyze nearly or fully
superimposed multiplets.
We thank Dr. Karen Plé for linguistic improvement. | {"url":"https://comptes-rendus.academie-sciences.fr/chimie/articles/en/10.1016/j.crci.2005.05.012/","timestamp":"2024-11-02T15:57:38Z","content_type":"text/html","content_length":"86801","record_id":"<urn:uuid:c8d93cd0-36b1-48eb-863f-1bff736e3c7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00458.warc.gz"} |
How can I calculate an average stock in my dataview?Home - Board Community
How can I calculate an average stock in my dataview?
can someone help me calculating an average stock in my dataview? I have a monthly stock information and I want to divide it by number of months <> 0. I tried with a refer to but I can not calculate
with the sum. If I try to divide 'a' by 'b' board does not use the total sum, for example 4 (first row), board divides by 1:
Thank you!
Hendrik Hellwig
• Hi,
you can easily solve this, using nexel:
=AVERAGEEX( [@a;*;*;Range.Horizontal] )
Or maybe the "Yearly Moving Average" function might also be helpful
• Okay, great!
is there a possibility to add a column and calculate the average over both?
• Hi,
should it be a weighted Average? so for your case, shout it be (1.189+431+399+386+362)/5 or should it be (1.189+(431+399+386+362)/4)/2?
Do you use an by Column entity or a Detail by ?
• Hi,
the first example (1.189+431+399+386+362)/5. I use a Detail by.
• Hi,
=([@a;*;*]+SUM( [@b;*;*;Range.Horizontal] ))/([@a;*;*;CountEX.Horizontal]+[@b;*;*;CountEX.Horizontal])
should work (havn't tried it with detail by)
• Hi,
I tried =(SUM([@a;*;*])+SUM( [@b;*;*;Range.Horizontal] ))/([@a;*;*;CountEX.Horizontal]+[@b;*;*;CountEX.Horizontal]).
The SUM-Funktion works but the CountEX.Horizontal only works at column b not a.
At column b there is the 'Detail by', at column a I use a 'Refer to'.
• Hi,
did it worked with the formula I posted? Cause the first sum in yours can't work, cause no area is defined.
• Hi,
you are right. I used the formula
=(SUM([@a;*;*;Range.Horizontal])+SUM( [@b;*;*;Range.Horizontal] ))/([@a;*;*;CountEX.Horizontal]+[@b;*;*;CountEX.Horizontal]),
because your formula
=([@a;*;*]+SUM( [@b;*;*;Range.Horizontal] ))/([@a;*;*;CountEX.Horizontal]+[@b;*;*;CountEX.Horizontal])
does not work.
Then I had this problem: The SUM-Funktion works but the CountEX.Horizontal only works at column b not a.
I tried a little bit and now I choose this function and it works:
=(SUM([@a;*;*;Range.Horizontal])+SUM( [@b;*;*;Range.Horizontal] ))/[@a;*;*;CountEX.Left]
• Hi
I do not understand why the formula is not working:
The blue one is working. Here the formula is =[@i;*;*]. It is the Block 'ø-Bestand Menge 2017'.
Can you help?
• Hi Hendrik,
it's quite hard to answer your question, without knowing details about the Layout, cause it can have several reasons (different Block, Detail by, so the other block has a different number of
maybe you can post a screenshot of the alyout configution and add the Block Number (a, b, c...) to the Block heading, so its easier to distinguish the different blocks.
Cause without knowing, if this a different blocks it is very hard to help you
• Hi
is there a possibility to link two formulas like
[@a;*;*] and [@b;*;*;Count.Left]
to [@a;*;[@b;*;*;Count.Left]]?
• Hi Hendrik,
I never have tested it, but what do you want to do? Cause with your syntax you would point to the cell with the Code [@b;*;*;Count.Left] (Absolute Reference Mode), and you dont know if an Enitty
with this code eixst
• Hi,
I had the same kind of question but I'm not using Nexel. Instead you can use a "counter" cube, that can be quickly filled by a dataflow such as :
a: stock cube
b: counter
dataflow: b=a^0
(which is 1 where there is a value and 0 when no value).
then in your layout you can use the counter cube as your divider instead of trying to find the right total in the layout.
• Hi Björn Reuber,
how would you do this for charts. AFAIK they don't support formulas.
BR, Ray
• Hi,
you can create the Layout including a nexel formula in a dataview and then copy this Layout to another object (like a chart)
• Hi Hendrik Hellwig,
depending on your BOARD version (>= 10.0) you could also try the analytics functions:
• 2K Forums
• 331 Resources
• Academy
• 340 Partner Hub
• 94 Support | {"url":"https://community.board.com/discussion/7475/how-can-i-calculate-an-average-stock-in-my-dataview","timestamp":"2024-11-12T15:37:28Z","content_type":"text/html","content_length":"363720","record_id":"<urn:uuid:bd14f9eb-3a39-419e-9606-57b6c151ea7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00746.warc.gz"} |
A three solution Theorem for Pucci's extremal operator
Dr. Mohan Kumar Mallick, TIFR-CAM
Speaker Dr. Mohan Kumar Mallick, TIFR-CAM
When Feb 13, 2020
from 04:00 PM to 05:00 PM
Where LH 006
Add event to calendar vCal
Abstract: In this talk we will discuss the existence of three positive solutions to the following boundary value problem:
\(-\mathcal{M}_{\lambda,\Lambda}^+(D^2u)\) = \(f(u)\) \(\rm{in} \) \(\Omega,\\\)
\(u\) = \(0\) \(\rm{on}\) \(\partial\Omega,\)
where \(\Omega\) is a bounded smooth domain in \(\mathbb{R}^n\) and \(f:[0,\infty]\to[0,\infty]\) is a \(C^{\alpha}\) function with \(f(0)\geq 0\). \(\mathcal{M}_{\lambda,\Lambda}^+\) is the Pucci's
maximal operator. The idea of the proof relies on the construction of two pairs of sub-super solutions \((\psi_1,\phi_1)\) and \((\psi_2,\phi_2)\) where \(\psi_1\leq \psi_2\leq \phi_1\), \(\psi_1\leq
\phi_2\leq \phi_1\) with \(\psi_2\nless\phi_2 \), and \(\psi_2, \phi_2\) are strict sub and supersolutions, then we establish the existence of three solutions \(u_1, u_2\) and \(u_3\)for the above
boundary value problem such that \(u_1\in[\psi_1,\phi_2\), \(u_2\in[\psi_2,\phi_1]\) and \(u_3\in [\psi_1,\phi_1]\setminus[\psi_1,\phi_2]\cap[\psi_2,\phi_1]\). | {"url":"https://www.math.tifrbng.res.in/events/a-three-solution-theorem-for-puccis-extremal-operator","timestamp":"2024-11-06T02:28:56Z","content_type":"application/xhtml+xml","content_length":"33110","record_id":"<urn:uuid:419b4dd4-878b-4047-8a52-dd3c870e8740>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00124.warc.gz"} |
SPLADE for Sparse Vector Search Explained | Pinecone
Google, Netflix, Amazon, and many more big tech companies all have one thing in common. They power their search and recommendation systems with “vector search”.
Before modern vector search, we had the “traditional” bag of words (BOW) methods. That is, we take a set of " documents" to be retrieved (like web pages on Google). Each document is transformed into
a set (bag) of words, and use this to populate a sparse “frequency vector”. Popular algorithms for this include TF-IDF and BM25.
These sparse vectors are hugely popular in information retrieval thanks to their efficiency, interpretability, and exact term matching. Yet, they’re far from perfect.
Our nature as human beings does not align with sparse vector search. When searching for information, we rarely know the exact terms that will be contained in the documents we’re looking for.
Dense embedding models offer some help in this direction. By using dense models, we can search based on “semantic meaning” rather than term matching. However, these models could be better.
We need vast amounts of data to fine-tune dense embedding models; without this, they lack the performance of sparse methods. This is problematic for niche domains where data is hard to find and
domain-specific terminology is important.
In the past, there have been a range of bandaid solutions for dealing with this; ranging from complex and (still not perfect) two-stage retrieval systems, to query and document expansion or rewrite
methods (as we will explore later). However, none of these came close to being truly robust solutions.
Fortunately, plenty of progress has been made in making the most of both worlds. A merger of sparse and dense retrieval is now possible through hybrid search, and learnable sparse embeddings help
minimize the traditional drawbacks of sparse retrieval.
This article will cover the latest in learnable sparse embeddings with SPLADE — the Sparse Lexical and Expansion model [1].
Sparse and Dense
In information retrieval, vector embeddings represent documents and queries in a numerical vector format. This format allows us to search a vector database and identify similar vectors.
Sparse and dense vectors are two different forms of this representation, each with pros and cons.
Sparse vectors consist of many zero values with very few non-zero values.
Sparse vectors like TF-IDF or BM25 have high dimensionality and contain very few non-zero values (hence, they are called “sparse”). There are decades of research behind sparse vectors. Resulting in
compact data structures and many efficient retireval algorithms designed specifically for these vectors.
Dense vectors are lower-dimensional but information-rich, with non-zero values in most-or-all dimensions. These are typically built using neural network models like transformers and, through this,
can represent more abstract information like the semantic meaning behind some text.
Generally speaking, the pros and cons of both methods can be outlined as follows:
Pros Cons
+ Typically faster retrieval • Performance cannot be improved significantly over baseline
+ Good baseline performance • Performance cannot be improved significantly over baseline
+ Don’t need model fine-tuning • Suffers from vocabulary mismatch problem
+ Exact matching of terms
Pros Cons
+ Can outperform sparse with fine-tuning • Requires training data, difficult to do in low-resource scenarios
+ Search with human-like abstract concepts • Does not generalize well, particularly for niche terminology
+ Multi-modality (text, images, audio, etc.) and cross-modal search (e.g., text-to-image) • Requires more compute and memory than sparse
• No exact match
• Not easily interpretable
Ideally, we want the merge the best of both, but that’s hard to do.
Two-Stage Retrieval
A typical approach to handling this is implementing a two-stage retrieval and ranking system. In this scenario, we use two distinct stages to retrieve and rank relevant documents for a given query.
In the first stage, the system uses a sparse retrieval method to retrieve a large set of candidate documents. These are then passed to the second stage, where we use a dense model to rerank the
results based on their relevance to the query.
Two-stage retrieval system with a sparse retriever and dense reranker.
There are benefits to this, (1) we apply the sparse model to the full set of documents to retrieve, which is more efficient. Then (2) we rerank the now smaller set of documents with the slower dense
model, which can be more accurate. From this, we can return much more relevant results to users. Another benefit is that this reranking stage is detached from the retrieval system, this can be useful
when the retrieval system is multi-purpose.
However, it isn’t perfect. Two stages of retrieval and reranking can be slower than a single-stage system using approximate search algorithms. Having two stages is more complex and therefore brings
more engineering challenges. Finally, the performance relies on the first-stage retriever returning relevant results; if nothing useful is returned, the reranking cannot help.
Improving Single-Stage Systems
Because of the two-stage retrieval drawbacks, much work has been put into improving single-stage retrieval systems.
A single stage retrieval system. Note that the retriever may be sparse, dense, or even both.
A part of that is the research into more robust and learnable sparse embedding models — and one of the most performant models in this space is SPLADE.
The idea behind the Sparse Lexical and Expansion models is that a pretrained language model like BERT can identify connections between words/sub-words (called word-pieces or “terms” in this article)
and use that knowledge to enhance our sparse vector embedding.
This works in two ways, it allows us to weigh the relevance of different terms (something like the will carry less relevance than a less common word like orangutan). And it enables term expansion:
the inclusion of alternative but relevant terms beyond those found in the original sequence.
Term expansion allows us to identify relevant but different terms and use them in the sparse vector retrieval step.
The most significant advantage of SPLADE is not necessarily that it can do term expansion but instead that it can learn term expansions. Traditional methods required rule-based term expansion which
is time-consuming and fundamentally limited. Whereas SPLADE can use the best language models to learn term expansions and even tweak them based on the sentence context.
Despite having a query and document with many relevant terms, because they are not “exact matches” they are not identified.
Term expansion is crucial in minimizing the vocabulary mismatch problem — the typical lack of term overlap between queries and relevant documents.
With term expansion on our query we will a much larger overlap because we’re now able to identify similar words.
It’s expected that relevant documents can contain little-to-no term overlap because of the complexity of language and the multitude of ways we can describe something.
SPLADE Embeddings
How SPLADE builds its sparse embeddings is simple to understand. We start with a transformer model like BERT using a Masked-Language Modeling (MLM) head.
MLM is the typical pretraining method utilized by many transformers. We can start with an off-the-shelf pretrained BERT model.
As mentioned, we will use BERT with an MLM head. If you’re familiar with BERT and MLM, then great — if not, let’s break it down.
BERT is a popular transformer model. Like all transformers, its core functionality is to create information-rich token embeddings. What exactly does that mean?
We start with some text like "Orangutans are native to the rainforests of Indonesia and Malaysia". We would begin by tokenizing the text into BERT-specific sub-word tokens:
text = (
"Orangutans are native to the rainforests of "
"Indonesia and Malaysia"
# create the tokens that will be input into the model
tokens = tokenizer(text, return_tensors="pt")
{'input_ids': tensor([[ 101, 2030, 5654, 13210, 3619, 2024, 3128, 2000, 1996, 18951,
2015, 1997, 6239, 1998, 6027, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
# we transform the input_ids to human-readable tokens
Token IDs are mapped to learned token embeddings within the embedding matrix.
These tokens are matched up to an “embedding matrix” that acts as the first layer in the BERT model. In this embedding matrix, we find learned “vector embeddings” that act as a “numerical
representation” of these word/sub-word tokens.
The vectors of the embedding matrix each represent a token within a meaningful vector space.
From here, the token representations of our original text go through several “encoder blocks”. These blocks encode more and more contextual information into each vector embedding based on the
surrounding context from the rest of the text.
After this, we arrive at our transformer’s “output”, the information-rich vector embeddings. Each embedding represents the earlier token but with added information gathered from the other token
vector embeddings also extracted from the original sentence.
Processing the initial token embedding through several attention encoder blocks allows more contextual information to be encoded, producing information-rich embeddings.
This process is the core of BERT and every other transformer model. However, the power of transformers is the considerable number of things for which these information-rich vectors can be used.
Typically, we add a task-specific “head” to a transformer to transform these vectors into something else, like predictions or sparse vectors.
Masked Language Modeling Head
The MLM head is one of many heads commonly used with BERT models. Unlike most heads, an MLM head is used during the initial pretraining of BERT.
This works by taking an input sentence; again, let’s use "Orangutans are native to the rainforests of Indonesia and Malaysia". We tokenize the text and then replace random tokens with a [MASK] token.
Any word or sub-word token can be masked using the [MASK] token.
This masked token sequence is passed as input to BERT. At the other end, we give the original sentence to the MLM head. BERT and the MLM head are then optimized for predicting the original word/
sub-word token that had been replaced by a [MASK] token.
The MLM head produces a probability distribution from each output logit. The probabilities act as predictions of [MASK] representing each token from the vocab.
For this to work, the MLM head contains 30522 output values for each token position. These 30522 values represent the BERT vocabulary and act as a probability distribution over the vocab. The highest
activation represents the token prediction for that particular token position.
MLM and Sparse Vectors
These 30522 probability distributions act as an indicator of which words/tokens from the vocab are most important. The MLM head outputs these distributions for every token input to the model.
The MLM head gives us a probability distribution for each token, whether or not they have been masked. These distributions are aggregated to give the importance estimation.
SPLADE takes all these distributions and aggregates them into a single distribution called the importance estimation . This importance estimation is the sparse vector produced by SPLADE. We can
combine all these probability distributions into a single distribution that tells us the relevance of every token in the vocab to our input sentence.
: Every token in the input set of tokens .
: Every predicted weight for all tokens in the vocab , for each token .
This allows us to identify relevant tokens that do not exist within the input sentence. For example, if we mask the word rainforest, we may return high predictions for the words jungle, land, and
forest. These words and their associated probabilities would then be represented in the SPLADE-built sparse vector.
This learned query/document expansion to include other relevant terms is a crucial advantage of SPLADE over traditional sparse methods. Helping us minimize the vocabulary mismatch problem based on
learned relationships and term context.
Term expansion in the query can lead to much greater overlap between queries and relevant documents, helping us minimize the vocabulary mismatch problem.
As many transformer models are pretrained with MLM, there are a large number of models that have trained MLM head weights that can be used for later SPLADE fine-tuning.
Where SPLADE Works Less Well
SPLADE is an excellent approach to minimizing the vocabulary mismatch problem commonly found in sparse vector methods. However, there are some drawbacks that we need to consider.
Compared to other sparse methods, retrieval with SPLADE is slow. There are three primary reasons for this:
1. The number of non-zero values in SPLADE query and document vectors is typically greater than in traditional sparse vectors, and sparse retrieval systems are not optimized for this.
2. The distribution of non-zero values deviates from the traditional distribution expected by the sparse retrieval systems, again causing slowdowns.
3. SPLADE vectors are not natively supported by most sparse retrieval systems. Meaning we must perform multiple pre and post-processing steps, weight discretization, etc.
Fortunately, there are solutions to all of these problems. For (1), the authors of SPLADE addressed this in a later version of the model that minimizes the number of query vector non-zero values [2].
Reducing the number of query vector non-zero values was made possible through two steps. First, by first improving the performance of the SPLADE document encodings via a max pooling modification to
the original pooling strategy:
Second, by limiting term expansion to the document encodings only. Thanks to the improved document encoding performance, dropping query expansions still leaves us with better performance than the
original SPLADE model.
Both (2) and (3) are solved using the Pinecone vector database. (2) is solved by Pinecone’s retrieval engine being designed from the ground up to be agnostic to data distribution. Pinecone allows
real-valued sparse vectors — meaning SPLADE vectors are supported by default.
SPLADE Implementation
We have two options for implementing SPLADE; directly with Hugging Face transformers and PyTorch, or with more abstraction using the official SPLADE library. We will demonstrate both, starting with
the Hugging Face and PyTorch implementation to understand how it works.
Hugging Face and PyTorch
To begin, we install all prerequisites:
!pip install -U transformers torch
Then we initialize the BERT tokenizer and BERT model with masked-language modeling (MLM) head. We load the fine-tuned SPLADE model weights from naver/splade-cocondenser-ensembledistil.
from transformers import AutoModelForMaskedLM, AutoTokenizer
model_id = 'naver/splade-cocondenser-ensembledistil'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
From here, we can create an input document text, tokenize it, and process it through the model to produce the MLM head output logits.
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens)
MaskedLMOutput(loss=None, logits=tensor([[[ -6.9833, -8.2131, -8.1693, ..., -8.1552, -7.8168, -5.8152],
[-13.6888, -11.7828, -12.5595, ..., -12.4415, -11.5789, -12.0632],
[ -8.7075, -8.7019, -9.0092, ..., -9.1933, -8.4834, -6.8165],
[ -5.1051, -7.7245, -7.0402, ..., -7.5713, -6.9855, -5.0462],
[-23.5020, -18.8779, -17.7931, ..., -18.2811, -17.2806, -19.4826],
[-21.6329, -17.7142, -16.6525, ..., -17.1870, -16.1865, -17.9581]]],
grad_fn=<ViewBackward0>), hidden_states=None, attentions=None)
torch.Size([1, 91, 30522])
This leaves us with 91 probability distributions, each of dimensionality 30522. To transform this into the SPLADE sparse vector, we do the following:
import torch
vec = torch.max(
1 + torch.relu(output.logits)
) * tokens.attention_mask.unsqueeze(-1),
tensor([0., 0., 0., ..., 0., 0., 0.], grad_fn=<SqueezeBackward0>)
Because our vector is sparse, we can transform it into a much more compact dictionary format, keeping only the non-zero positions and weights.
# extract non-zero positions
cols = vec.nonzero().squeeze().cpu().tolist()
# extract the non-zero values
weights = vec[cols].cpu().tolist()
# use to create a dictionary of token ID to weight
sparse_dict = dict(zip(cols, weights))
{1000: 0.6246446967124939,
1039: 0.45678916573524475,
1052: 0.3088974058628082,
1997: 0.15812619030475616,
1999: 0.07194626331329346,
2003: 0.6496524810791016,
2024: 0.9411943554878235,
29215: 0.3594200909137726,
29278: 2.276832342147827}
This is the final format of our sparse vector, but it’s not very interpretable. What we can do is translate the token ID keys to human-readable plaintext tokens. We do that like so:
# extract the ID position to text token mappings
idx2token = {
idx: token for token, idx in tokenizer.get_vocab().items()
# map token IDs to human-readable tokens
sparse_dict_tokens = {
idx2token[idx]: round(weight, 2) for idx, weight in zip(cols, weights)
# sort so we can see most relevant tokens first
sparse_dict_tokens = {
k: v for k, v in sorted(
key=lambda item: item[1],
{'pc': 3.02,
'lace': 2.95,
'programmed': 2.36,
'##for': 2.28,
'madagascar': 2.26,
'death': 1.96,
'##d': 1.95,
'lattice': 1.81,
'carter': 0.0,
'reg': 0.0}
Now we can see the most highly scored tokens from the sparse vector, including important field-specific terms like programmed, cell, lattice, regulated, and so on.
Naver Labs SPLADE
Another higher-level alternative is using the SPLADE library itself. We install it with pip install git+https://github.com/naver/splade.git and initialize the same model and vector building steps as
above, using:
from splade.models.transformer_rep import Splade
sparse_model_id = 'naver/splade-cocondenser-ensembledistil'
sparse_model = Splade(sparse_model_id, agg='max')
We must still tokenize the input using a Hugging Face tokenizer to give us tokens, then we create the sparse vectors with:
with torch.no_grad():
sparse_emb = naver_model(
These embeddings can be processed into a smaller sparse vector dictionary using the same code above. The resultant data is the same as we built with the Hugging Face and PyTorch method.
Comparing Vectors
Let’s look at how to actually compare our sparse vectors. We’ll define three short texts.
texts = [
"Programmed cell death (PCD) is the regulated death of cells within an organism",
"How is the scheduled death of cells within a living thing regulated?",
"Photosynthesis is the process of storing light energy as chemical energy in cells"
As before, we encode everything with the tokenizer, build output logits with the model, and transform the token-level vectors into single sparse vectors.
tokens = tokenizer(
texts, return_tensors='pt',
padding=True, truncation=True
output = model(**tokens)
# aggregate the token-level vecs and transform to sparse
vecs = torch.max(
torch.log(1 + torch.relu(output.logits)) * tokens.attention_mask.unsqueeze(-1), dim=1
We now have three 30522-dimensional sparse vectors. To compare them, we can use cosine or dot-product similarity. Using cosine similarity, we do the following:
import numpy as np
sim = np.zeros((vecs.shape[0], vecs.shape[0]))
for i, vec in enumerate(vecs):
sim[i,:] = np.dot(vec, vecs.T) / (
np.linalg.norm(vec) * np.linalg.norm(vecs, axis=1)
array([[1. , 0.54609376, 0.20535842],
[0.54609376, 0.99999988, 0.20411882],
[0.2053584 , 0.20411879, 1. ]])
Leaving us with:
Similarity heatmap using calculated values from sim above. Sentences 1 and 2 share the highest similarity (with the exception of the diagonals which are just a comparison of each sentence to itself).
The two similar sentences naturally score higher than the third irrelevant sentence.
That’s it for this introduction to learned sparse embeddings with SPLADE. Through SPLADE, we can represent text with efficient sparse vector embeddings. Helping us deal with the vocabulary mismatch
problem while enabling exact matching.
We’ve also seen where SPLADE falls short when used in traditional retrieval systems. Fortunately, we covered how improvements through SPLADEv2 and distribution agnostic retrieval systems like
Pinecone can help us sidestep those shortfalls.
There is still plenty more to be done. More research and recent efforts demonstrate the benefit of mixing both dense and sparse representations using hybrid search indexes. In this, and many other
advances, we can see vector search becoming ever more accurate and accessible.
[1] T. Formal, B. Piwowarski, S. Clinchant, SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking (2021), SIGIR 21
[2] T. Formal, C. Lassance, B. Piwowarski, S. Clinchant, SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval (2021) | {"url":"https://www.pinecone.io/learn/splade/","timestamp":"2024-11-06T12:46:00Z","content_type":"text/html","content_length":"434478","record_id":"<urn:uuid:43cd10f0-5df4-4474-b6d1-11fd7ece1916>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00752.warc.gz"} |
Numerical Methods for Second-Order ODE
Most ordinary differential equations arising in real-world applications
cannot be solved exactly. These ode can be analyized qualitatively.
However, qualitative analysis may not be able to give accurate answers.
A numerical method can be used to get an accurate approximate solution
to a differential equation. There are many programs and packages for
solving differential equations. With today's computer, an accurate
solution can be obtained rapidly. In this section we focus on Euler's
method, a basic numerical method for solving initial value problems.
Consider the differential equation:
The first step is to convert the above second-order ode into two
first-order ode. This is a standard operation. Let v(t)=y'(t).
Then v'(t)=y''(t). We then get two differential equations. The
first is easy
The second is obtained by rewriting the original ode. Using the fact
that y''=v' and y'=v,
The initial conditions are y(0)=1 and y'(0)=v(0)=2.
We are now ready to approximate the two first-order ode by Euler's
method. A derivation of Euler's method is given the numerical methods
section for first-order ode. We first discretize the time interval.
Let 0=t_0. Let t_1<t_2<t_3< ... be other points. We approximate
the solution at these gridpoints. For simplicity we will assume that
the points are equispaced. In general they need not be. Let us define
z_k to be the approximation to y(t) at t_k and let w_k be the
approximation to v(t) at t_k. For our example we get the following
recursion formulas for z_k and w_k:
The initial conditions are z_0=y(0)=1 and w_0=v(0)=2. Here Dt is the
spacing between gridpoints.
Suppose our goal is to compute the solution to the model differential
equation at time t=1. The exact solution, obtained using an advanced
algorithm, is 4.1278. The following table summarizes the results obtained
using Euler's method. Note that the error decreases as the number
of gridpoints N increases. Here the spacing between points is 1/N.
Consider the second-order ode:
Suppose the goal is solve the problem on the interval [t_0,T]. The
following list summarizes the steps in solving the problem by Euler'
• Convert the second-order ode into two first-order ode. Let
v=y'. Then the two odes are
• Discretize the interval [t_0,T]. Pick a bunch of points
t_0<t_1<t_2<...<t_N=T. The points need not be equispaced.
• Let z_k denote the approximation to y(t_k) and let w_k denote the
approximation to v(t_k).
• Use the formulas
to compute the approximate solution for k=0,1,2,...
Software for Solving Differential Equations Numerically
• Netlib: This is a repository for all sorts of mathematical software.
Most of the programs are in C or Fortran. Look under ode or odepack.
• Maple, Mathematica, Matlab: These are packages for doing numerical
and symbolic computations. They have routines for solving ode
[ODE Home] [1st-Order Home] [2nd-Order Home] [Laplace Transform Home] [Notation] [References]
Copyright © 1996 Department of Mathematics, Oregon State University
If you have questions or comments, don't hestitate to contact us. | {"url":"http://sites.science.oregonstate.edu/math/home/programs/undergrad/CalculusQuestStudyGuides/ode/second/so_num/so_num.html","timestamp":"2024-11-05T22:01:32Z","content_type":"text/html","content_length":"6120","record_id":"<urn:uuid:108d2d23-443c-49f1-9dca-ab21cb6c61bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00516.warc.gz"} |
Smooth approximations
We consider CSPs whose template can be defined in an infinite “ground structure” with a high degree of symmetry (in the form of homogeneity) and a certain finite presentation (finite boundedness).
For example, the ground structure can be the order of the rationals, leading to Temporal CSPs, or the random graph, which gives rise to so-called Graph-SAT problems. Such CSPs are always in NP,
include all finite-domain CSPs as well as many additional natural problems, and there is an open dichotomy conjecture extending the one for finite templates.
The general algebraic approach to finite domain CSPs via polymorphisms also works in this setting, and the most sensible approach to the conjecture is to try to reduce it to the finite case. To this
end, one associates certain finite algebras to a CSP template, and hopes to extract from them sufficient information about the complexity of the CSP. In most successful confirmations of instances of
the conjecture so far, this was done somewhat non-systematically, leading to long proofs of ad hoc arguments. The novel Theory of Smooth Approximations intends to provide a uniform way of relating
the associated finite algebras with the structure of the CSP template and consequently its computational complexity. Applying this method, all previous results in the literature can be reproven much
more smoothly; moreover, the method allows, for the first time, for a systematic investigation of local consistency methods in this setting.
This is joint work with Antoine Mottet. | {"url":"https://csp-seminar.org/talks/michael-pinsker/","timestamp":"2024-11-02T17:29:20Z","content_type":"text/html","content_length":"6890","record_id":"<urn:uuid:59172fa0-0222-43c1-9c64-33ad7c0d9843>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00620.warc.gz"} |
Why It Matters: Systems of Equations and Inequalities
Why learn to solve systems of equations and inequalities?
When you play in a river you are surrounded by fluids, including water and air. At first it might seem strange to think of air as a fluid, but a fluid is defined as a substance that flows. Wind,
therefore, is a great example of air that flows. Other examples of flows include traffic patterns and electrical currents. Flows can be turbulent like what you may experience in airplanes.
Early in the 19th Century, Claude-Louis Navier in France and George Gabriel Stokes in England both derived an equation that can explain and predict the flow of fluids. The Navier-Stokes equations are
a system of equations used to describe the velocity of a fluid as it moves through three-dimensional space over a specific interval of time.
Interestingly, our understanding of solutions to the Navier-Stokes equations remains minimal. Surprisingly, given the equations’ wide range of practical uses, it has not yet been proven that
solutions always exist in three dimensions. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics and has offered a US $1,000,000 prize for a
solution or a counter-example.
In this section, we will learn how to graph systems of equations in two dimensions and find whether solutions exist. We will also see how systems of equations can be used to solve problems where we
have two unknown variables. | {"url":"https://courses.lumenlearning.com/beginalgebra/chapter/introduction-4/","timestamp":"2024-11-05T01:14:37Z","content_type":"text/html","content_length":"45958","record_id":"<urn:uuid:25b096c8-7ace-4d82-b6e2-0e66f908e324>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00455.warc.gz"} |
Nedialko Bradinoff
I am a PhD student at the department of mathematics at KTH working in the focus group 'Random Matrices, Stochastic Models and Analysis' under the supervision of Professor Maurice Duits.
I am a teaching assistant for the courses 'Probability Theory SF2940' and 'Differential equations SF1633'. I was previously PAD (PhD representative) for Mathematics in the SCI PhD Student Council. | {"url":"https://www.kth.se/profile/nedialko","timestamp":"2024-11-08T14:45:54Z","content_type":"text/html","content_length":"44815","record_id":"<urn:uuid:2d76c516-5d0c-40fe-ab01-726d3678fd8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00580.warc.gz"} |
O((logn)<sup>2</sup>) time online approximation schemes for bin packing and subset sum problems
Given a set S = {b [1],⋯, b [n] } of integers and an integer s, the subset sum problem is to decide if there is a subset S′ of S such that the sum of elements in S′ is exactly equal to s. We present
an online approximation scheme for this problem. It updates in O(logn) time and gives a (1+ε)-approximation solution in time. The online approximation for target s is to find a subset of the items
that have been received. The bin packing problem is to find the minimum number of bins of size one to pack a list of items a [1],⋯, a [n] of size in [0,1]. Let function bp(L) be the minimum number of
bins to pack all items in the list L. We present an online approximate algorithm for the function bp(L) in the bin packing problem, where L is the list of the items that have been received. It
updates in O(logn) updating time and gives a (1+ε)-approximation solution app(L) for bp(L) in time to satisfy app(L)≤(1+ε)bp(L)+1.
Publication series
Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume 6213 LNCS
ISSN (Print) 0302-9743
ISSN (Electronic) 1611-3349
Other 4th International Frontiers of Algorithmics Workshop, FAW 2010
Country/Territory China
City Wuhan
Period 8/11/10 → 8/13/10
ASJC Scopus subject areas
• Theoretical Computer Science
• General Computer Science
Dive into the research topics of 'O((logn)^2) time online approximation schemes for bin packing and subset sum problems'. Together they form a unique fingerprint. | {"url":"https://utsouthwestern.elsevierpure.com/en/publications/olognsup2sup-time-online-approximation-schemes-for-bin-packing-an","timestamp":"2024-11-09T12:52:11Z","content_type":"text/html","content_length":"50988","record_id":"<urn:uuid:505f3865-1f53-4147-bd2d-3b736fd66b7f>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00667.warc.gz"} |