Geometric Mean Calculator - Many Calculators
Geometric Mean Calculator
Need to quickly calculate the geometric mean? Our calculator will help you! Enter your data, and we will calculate the result for you. There's no need to waste time on manual calculations - the
geometric mean calculator is easy to use and will quickly give you a result that is accurate to the number of decimal places you choose.
About the Geometric Mean Calculator
The geometric mean calculator is easy to use and allows you to quickly calculate the geometric mean of several numbers. To use the calculator, all you have to do is enter your data in the text
box, separating them with semicolons. You can also choose how precise you want the result to be by specifying the number of decimal places. Any change in the text box or precision will automatically
recalculate and display the result. Use our calculator today and quickly calculate the geometric mean.
What is the geometric mean?
The geometric mean is a measure of the average used when the quantities being combined multiply together rather than add. In other words, the geometric mean is the nth root of the product of n numbers. It is often used to average data that differ widely in magnitude or that are proportionally (multiplicatively) related, such as growth rates. It is used in many fields such as finance, statistics, life sciences, and engineering. Note, however, that the geometric mean is not well suited for purely additive data and cannot be calculated for negative values.
Definition: The geometric mean of two or more numbers x[1], x[2], ..., x[n] is equal to the nth root of the product of these numbers:
GM = (x[1] × x[2] × ... × x[n])^(1/n)
An example
To calculate the geometric mean of several numbers, simply multiply them together and then take the nth root of the resulting product, where n is how many numbers you multiplied.
For example, to calculate the geometric mean of the numbers 4, 9, and 16, do the following:
• multiply all the numbers together: 4 * 9 * 16 = 576
• take the nth root of the product (here n = 3, so the cube root): ∛576 ≈ 8.32
In this case, the geometric mean is approximately 8.32.
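As a quick illustration (a sketch of my own, not the calculator's actual code), the same computation in JavaScript:

// Geometric mean: nth root of the product of n positive numbers
function geometricMean(values) {
  const product = values.reduce((acc, v) => acc * v, 1)
  return Math.pow(product, 1 / values.length)
}
console.log(geometricMean([4, 9, 16])) // ≈ 8.32

For long lists it is numerically safer to average the logarithms instead: Math.exp(values.map(Math.log).reduce((a, b) => a + b, 0) / values.length).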
Applications of the geometric mean in various fields
The geometric mean is widely used in many fields, including finance, statistics, life sciences, and engineering.
Finance: The geometric mean is often used in finance to calculate the rate of return on investments. For example, if we have data on the return on 3 different investments over the past 3 years, we
can calculate the geometric mean to find the average annual return on those investments.
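For instance (an illustrative example, not taken from the calculator itself): if an investment returns +10%, −5%, and +20% over three years, the average annual return is the geometric mean of the growth factors, (1.10 × 0.95 × 1.20)^(1/3) − 1 ≈ 0.078, i.e. about 7.8% per year.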
Statistics: The geometric mean is often used in statistics to average quantities that compound multiplicatively. For example, if we have population growth rates for different countries, we can calculate the geometric mean to find the average population growth rate for those countries.
Life sciences: The geometric mean is often used in the life sciences to calculate the average of many physical quantities, such as chemical concentrations or body mass. For example, if we have data
on the concentration of different chemicals in different samples, we can calculate the geometric mean to get the average concentration for those samples.
Technology: Geometric mean is often used in technology to calculate an average for many parameters, such as device power or screen brightness. For example, if we have power data for different
devices, we can calculate the geometric mean to get the average power for those devices.
The geometric mean is widely used in many fields such as finance, statistics, life sciences, and engineering. It can be used to average data that differ widely in magnitude or that combine multiplicatively. It is a valuable tool to consider when analyzing data, although its limitations, such as its inability to handle negative values and the extra computational effort it requires, should be kept in mind.
Advantages and disadvantages of the use of the geometric mean
The geometric mean is a valuable mathematical tool used in many fields. However, it has several limitations that must be considered when using it.
• the geometric mean reflects the true distribution of the data better than the arithmetic mean when the values differ widely in magnitude;
• the geometric mean is less sensitive to outliers, meaning that one or more extreme values will not dominate the final result;
• the geometric mean is often used in statistics to average data that compound multiplicatively;
• the geometric mean cannot handle negative values. Unlike the arithmetic mean, the geometric mean is undefined when any of the values is negative;
• the geometric mean is more difficult to calculate than the arithmetic mean because it requires multiplying all the numbers together and extracting the nth root;
• the geometric mean is not well suited for purely additive data. In such cases, the arithmetic mean, the harmonic mean, or the quadratic mean would be a better choice;
The geometric mean is a valuable mathematical tool that has many advantages, such as better representation of the true distribution of widely spread data and robustness to outliers. It is also often used in statistics to average data that compound multiplicatively. However, one should be aware of its limitations, such as its inability to handle negative values and the extra computational effort it requires. For purely additive data, other measures of the mean, such as the arithmetic, harmonic, or quadratic mean, may be a better choice.
Higher order numerical schemes for affinely controlled nonlinear systems
Title data
Grüne, Lars ; Kloeden, Peter E.:
Higher order numerical schemes for affinely controlled nonlinear systems.
In: Numerische Mathematik. Vol. 89 (2001), pp. 669-690.
ISSN 0029-599X
DOI: https://doi.org/10.1007/s002110000279
Abstract in another language
A systematic method for the derivation of high order schemes for affinely controlled nonlinear systems is developed. Using an adaptation of the stochastic Taylor expansion for control systems we
construct Taylor schemes of arbitrary high order and indicate how derivative free Runge-Kutta type schemes can be obtained. Furthermore an approximation technique for the multiple control integrals
appearing in the schemes is proposed.
UCAT Practice Test Quantitative - Practice Test Geeks
UCAT Practice Test Quantitative
What fraction of the grid is not blacked out?
Correct answer: 2/3
We can count the total number of squares/cells in the grid and then deduct the number of blacked-out cells.
The total number of cells is 6.
Number of cells not covered in black: 3 whole cells + 2 half-cells = 4
So the fraction not blacked out is 4/6 = 2/3.
A university has studied all 12 species of birds that are indigenous to a Pacific Ocean island group. The experiment required 72 hours of monitoring of the birds' resting heart rates, measured via trackers. This graph depicts the relationship between a bird's resting heart rate and its wingspan in metres. What is the median length of a bird's wingspan?
Correct answer: 0.55 m
Find the 'middle' number by counting the number of birds in the chart.
To get the median position, add one to the number of birds and divide by 2.
Since there are 12 birds, the median position is (12 + 1) / 2 = 6.5.
As a result, the median is located halfway between the 6th and 7th values.
The sixth is at 0.50 m and the seventh is at 0.60 m; 0.55 m is halfway between these.
The number of English counties visited by pupils in two classes at a primary school is listed above. Each class had a total of 30 students. What was the highest number of counties visited by a Class 1 pupil?
Correct answer: 8
Determine the point at which the cumulative frequency equals the total class size:
Because there are 30 students in the class, the cumulative frequency for Class 1 reaches 30 at 8 counties – option B.
A listed company's stock price is calculated over a single day.
The price is expressed as a number of pence per share.
A broker will pay an 'offer' price of 2p per share more than the price shown to buy shares, plus an additional £5 for stock transactions of 5000 or fewer shares.
There is a 0.5 percent charge above this level.
A broker will be paid a 'bid' price of 2p less than the price quoted to sell shares.
Any profit made from buying and selling stocks is subject to a 2.5 percent deduction.
The price at which all of a company's shares could be sold is termed its value.
The corporation has a total of 500,000 shares.
A dividend is a payment made to shareholders every quarter based on how many equities they own.
What is the largest change in the company's value over the course of one hour?
Correct answer: £10,000
Determine when the most significant price change occurs:
Between 10:00 and 11:00 a.m., this occurs.
The change is from 176p to 174p – a change of 2p.
Determine how this affects the company's worth.
The corporation has a total of 500,000 shares, so:
£0.02 x 500,000 = £10,000 – option C.
This is Mark's weekly schedule, which he has followed for the past six weeks while working from home. He has always skipped work on Saturdays and Sundays. He gets up at 7 a.m. and leaves the house at
8 a.m. What was his average speed during his 10-kilometer run, to the nearest 0.1 m/s?
Correct answer: 2.8 m/s
Calculate the average speed by converting the units to metres and seconds.
1 hour = 60 minutes = 60 x 60 seconds = 3600 s
10 km = 10,000 m
So in metres per second:
10000 / 3600 = 2.777… = 2.8 m/s
The table's figures are all in kilos.
1 ml water = 1g
What percentage of the total mass of coffee beans consumed in the country was consumed in cafés in 2013-14?
Correct answer: 67.80%
Calculate the percentage of coffee beans used in cafés as a percentage of total coffee beans used.
Beans in cafés: 242,000
Beans in total: 242000 + 50300 + 28341 + 36324 = 356965
242000/356965 = 0.6779… = 67.8%
The average rent per calendar month (PCM) in a popular South London neighborhood is depicted in this graph. There are 12 calendar months in a year.
In 2015, Simon rents a one-bedroom flat in this neighborhood for 75% of the average cost. Between 2015-16 and 2016-17, his rent increased twice as fast (in percentage terms) as the price of the average property. What is the difference between his new rent and the market average for 2016-17?
Correct answer: He pays £125 less
Calculate his rent over the years.
He paid 75% of the market average (£1000 PCM at the time).
0.75 x 1000 = 750
The average price went up from 1000 to 1250
Use the following formula to calculate the percentage increase:
New Value / Old Value = Multiplier
1250/1000 = 1.25 = 25%
His rent jumped by double the rate, or by 50%.
For a percentage increase, use the following formula:
New value = Old value x multiplier
50% increase = multiplier of 1.5
New rent = 750 x 1.5
New rent = £1125
Calculate the difference from the market rent.
According to the graph, the new average rent is £1250.
£1250 – £1125 = £125 so he pays £125 below the market rate.
What is the most common store-to-distribution-center distance?
Correct answer: 21 to 30 miles
The majority of the stores (26) are located between 21 and 30 miles from the center.
This is the revenue of an Australian medium-to-large business. All figures are in AUD$.
The corporation wants to figure out how much money it makes on a monthly basis, taking into account all three revenue streams. What is the average monthly total revenue for the three months listed in
the table, rounded up to the closest $250?
Correct answer: $239,250
Calculate the monthly average by adding the incomes:
(120 + 145 + 130 + 15 + 17.5 + 19 + 80 + 90 + 101) x 1000 = $717,500
To calculate the monthly average, divide this by three.
$717,500/3 = $239,166… = $239,250 to the nearest $250.
The percentage of nickel in two coins is shown in the table.
What is the difference between the weight of copper present in Coin B and the weight of copper present in Coin A if both coins are made entirely of nickel and copper?
Correct answer: 0.675 g
The percentage of copper in Coin A is (100 – 25) = 75%.
The amount of copper in Coin A, which weighs 6.5 g, is 6.5 × 75/100 = 4.875 g.
The percentage of copper in Coin B is 100 – 16 = 84%.
The amount of copper in Coin B, which weighs 5 g, is 5 × 84/100 = 4.2 g.
The difference in copper content between Coin B and Coin A is 4.875 - 4.2 = 0.675 g.
The temperature of Venus is in the middle of the temperature range between Earth and Mercury. What is Mercury's current temperature?
Correct answer: 940 °C
Venus (480 °C) lies midway between Earth (20 °C) and Mercury, so Mercury's temperature is 480 + (480 – 20) = 940 °C.
The osmotic pressure of CaCl2 and urea solutions of the same concentration at the same temperature are respectively 0.605 atm and 0.245 atm, calculate van’t Hoff factor for CaCl2. - Chemistry
Exercise | Q 3.4 | Page 46
Answer the following.
The osmotic pressure of CaCl2 and urea solutions of the same concentration at the same temperature are respectively 0.605 atm and 0.245 atm, calculate van’t Hoff factor for CaCl2.
Given: Osmotic pressure of CaCl2 solution = 0.605 atm
Osmotic pressure of urea solution = 0.245 atm
To find: The value of van’t Hoff factor
Formulae: π = MRT, π = iMRT
Calculation: For urea solution
π = MRT
0.245 atm = MRT ....(i)
For CaCl[2] solution
π = iMRT
0.605 atm = iMRT ....(ii)
From equations (i) and (ii),
i = 0.605/0.245 = 2.469…
∴ i = 2.47
The value of the van't Hoff factor is 2.47.
Probability Seminar
• When: Thursdays at 2:30 pm
• Where: 901 Van Vleck Hall
• Organizers: Hanbaek Lyu, Tatyana Shcherbyna, David Clancy
• To join the probability seminar mailing list: email probsem+subscribe@g-groups.wisc.edu.
• To subscribe seminar lunch announcements: email lunchwithprobsemspeaker+subscribe@g-groups.wisc.edu
Fall 2024
Thursdays at 2:30 PM either in 901 Van Vleck Hall or on Zoom
We usually end for questions at 3:20 PM.
September 5, 2024:
No seminar
September 12, 2024: Hongchang Ji (UW-Madison)
Spectral edge of non-Hermitian random matrices
We report recent progress on spectra of so-called deformed i.i.d. matrices. They are square non-Hermitian random matrices of the form $A+X$ where $X$ has centered i.i.d. entries and $A$ is a
deterministic bias, and $A$ and $X$ are on the same scale so that their contributions to the spectrum of $A+X$ are comparable. Under this setting, we present two recent results concerning universal
patterns arising in eigenvalue statistics of $A+X$ around its boundary, on macroscopic and microscopic scales. The first result shows that the macroscopic eigenvalue density of $A+X$ typically has a
jump discontinuity around the boundary of its support, which is a distinctive feature of $X$ by the \emph{circular law}. The second result is edge universality for deformed non-Hermitian matrices; it
shows that the local eigenvalue statistics of $A+X$ around a typical (jump) boundary point is universal, i.e., matches with those of a Ginibre matrix $X$ with i.i.d. standard Gaussian entries.
Based on joint works with A. Campbell, G. Cipolloni, and L. Erd\H{o}s.
September 19, 2024: Miklos Racz (Northwestern)
The largest common subtree of uniform attachment trees
Consider two independent uniform attachment trees with n nodes each -- how large is their largest common subtree? Our main result gives a lower bound of n^{0.83}. We also give some upper bounds and
bounds for general random tree growth models. This is based on joint work with Johannes Bäumler, Bas Lodewijks, James Martin, Emil Powierski, and Anirudh Sridhar.
September 26, 2024: Dmitry Krachun (Princeton)
A glimpse of universality in critical planar lattice models
Abstract: Many models of statistical mechanics are defined on a lattice, yet they describe behaviour of objects in our seemingly isotropic world. It is then natural to ask why, in the small mesh size
limit, the directions of the lattice disappear. Physicists' answer to this question is partially given by the Universality hypothesis, which roughly speaking states that critical properties of a
physical system do not depend on the lattice or fine properties of short-range interactions but only depend on the spatial dimension and the symmetry of the possible spins. Justifying the reasoning
behind the universality hypothesis mathematically seems virtually impossible and so other ideas are needed for a rigorous derivation of universality even in the simplest of setups.
In this talk I will explain some ideas behind the recent result which proves rotational invariance of the FK-percolation model. In doing so, we will see how rotational invariance is related to
universality among a certain one-dimensional family of planar lattices and how the latter can be proved using exact integrability of the six-vertex model using Bethe ansatz.
Based on joint works with Hugo Duminil-Copin, Karol Kozlowski, Ioan Manolescu, Mendes Oulamara, and Tatiana Tikhonovskaia.
October 3, 2024: Joshua Cape (UW-Madison)
A new random matrix: motivation, properties, and applications
In this talk, we introduce and study a new random matrix whose entries are dependent and discrete valued. This random matrix is motivated by problems in multivariate analysis and nonparametric
statistics. We establish its asymptotic properties and provide comparisons to existing results for independent entry random matrix models. We then apply our results to two problems: (i) community
detection, and (ii) principal submatrix localization. Based on joint work with Jonquil Z. Liao.
October 10, 2024: Midwest Probability Colloquium
October 17, 2024: Kihoon Seong (Cornell)
Gaussian fluctuations of focusing Φ^4 measure around the soliton manifold
I will explain the central limit theorem for the focusing Φ^4 measure in the infinite volume limit. The focusing Φ^4 measure, an invariant Gibbs measure for the nonlinear Schrödinger equation, was
first studied by Lebowitz, Rose, and Speer (1988), and later extended by Bourgain (1994), Brydges and Slade (1996), and Carlen, Fröhlich, and Lebowitz (2016).
Rider previously showed that this measure is strongly concentrated around a family of minimizers of the associated Hamiltonian, known as the soliton manifold. In this talk, I will discuss the
fluctuations around this soliton manifold. Specifically, we show that the scaled field under the focusing Φ^4 measure converges to white noise in the infinite volume limit, thus identifying the
next-order fluctuations, as predicted by Rider.
This talk is based on joint work with Philippe Sosoe (Cornell).
October 24, 2024: Jacob Richey (Alfred Renyi Institute)
Stochastic abelian particle systems and self-organized criticality
Abstract: Activated random walk (ARW) is an 'abelian' particle system that conjecturally exhibits complex behaviors which were first described by physicists in the 1990s, namely self organized
criticality and hyperuniformity. I will discuss recent results for ARW and the stochastic sandpile (a related model) on Z and other graphs, plus many open questions.
October 31, 2024: David Clancy (UW-Madison)
Likelihood landscape on a known phylogeny
Abstract: Over time, ancestral populations evolve to become separate species. We can represent this history as a tree with edge lengths where the leaves are the modern-day species. If we know the
precise topology of the tree (i.e. the precise evolutionary relationship between all the species), then we can imagine traits (their presence or absence) being passed down according to a symmetric
2-state continuous-time Markov chain. The branch length becomes the probability a parent species has a trait while the child species does not. This length is unknown, but researchers have observed
they can get pretty good estimates using maximum likelihood estimation and only the leaf data despite the fact that the number of critical points for the log-likelihood grows exponentially fast in
the size of the tree. In this talk, I will discuss why this MLE approach works by showing that the population log-likelihood is strictly concave and smooth in a neighborhood around the true branch
length parameters and the size.
This talk is based on joint work with Hanbaek Lyu, Sebastien Roch and Allan Sly.
November 7, 2024: Zoe Huang (UNC Chapel Hill)
November 14, 2024: Deb Nabarun (University of Chicago)
November 21, 2024: Reza Gheissari (Northwestern)
November 28, 2024: Thanksgiving
No seminar
December 5, 2024: Erik Bates (NC State)
Spring 2024
Thursdays at 2:30 PM either in 901 Van Vleck Hall or on Zoom
We usually end for questions at 3:20 PM.
January 25, 2024: Tatyana Shcherbina (UW-Madison)
Characteristic polynomials of sparse non-Hermitian random matrices
We consider the asymptotic local behavior of the second correlation functions of the characteristic polynomials of sparse non-Hermitian random matrices $X_n$ whose entries have the form $x_{jk}=d_{jk}w_{jk}$ with iid complex standard Gaussian $w_{jk}$ and normalized iid Bernoulli$(p)$ $d_{jk}$. If $p\to\infty$, the local asymptotic behavior of the second correlation function of characteristic
polynomials near $z_0\in \mathbb{C}$ coincides with those for Ginibre ensemble of non-Hermitian matrices with iid Gaussian entries: it converges to a determinant of the Ginibre kernel in the bulk $|
z_0|<1$, and it is factorized if $|z_0|>1$. It appears, however, that for the finite $p>0$, the behavior is different and it exhibits the transition between three different regimes depending on
values $p$ and $|z_0|^2$. This is the joint work with Ie. Afanasiev.
Optimal rigidity and maximum of the characteristic polynomial of Wigner matrices
We consider two related questions about the extremal statistics of Wigner matrices (random symmetric matrices with independent entries). First, how much can their eigenvalues fluctuate? It is known
that the eigenvalues of such matrices display repulsive interactions, which confine them near deterministic locations. We provide optimal estimates for this “rigidity” phenomenon. Second, what is the
behavior of the maximum of the characteristic polynomial? This is motivated by a conjecture of Fyodorov–Hiary–Keating on the maxima of logarithmically correlated fields, and we will present the first
results on this question for Wigner matrices. This talk is based on joint work with Paul Bourgade and Ofer Zeitouni.
Stochastic dynamics and the Polchinski equation
I will discuss a general framework to obtain large scale information in statistical mechanics and field theory models. The basic, well known idea is to build a dynamics that samples from the model
and control its long time behaviour. There are many ways to build such a dynamics, the Langevin dynamics being a typical example. In this talk I will introduce another, the Polchinski dynamics, based
on renormalisation group ideas. The dynamics is parametrised by a parameter representing a certain notion of scale in the model under consideration. The Polchinski dynamics has a number of
interesting properties that make it well suited to study large-dimensional models. It is also known under the name stochastic localisation. I will mention a number of recent applications of this
dynamics, in particular to prove functional inequalities via a generalisation of Bakry and Emery's convexity-based argument. The talk is based on joint work with Roland Bauerschmidt and Thierry
Bodineau and the recent review paper https://arxiv.org/abs/2307.07619 .
A matrix model for conditioned Stochastic Airy
There are three basic flavors of local limit theorems in random matrix theory, connected to the spectral bulk and the so-called soft and hard edges. There also abound a collection of more exotic
limits which arise in models that posses degenerate (or “non-regular”) points in their equilibrium measure. What is more, there is typically a natural double scaling about these non-regular points,
producing limit laws that transition between the more familiar basic flavors. Here I will describe a general beta matrix model for which the appropriate double scaling limit is the Stochastic Airy
Operator, conditioned on having no eigenvalues below a fixed level. I know of no other random matrix double scaling fully characterized outside of beta = 2. This is work in progress with J. Ramirez
(University of Costa Rica).
February 22, 2024: No talk this week
February 29, 2024: Zongrui Yang (Columbia)
Stationary measures for integrable models with two open boundaries
We present two methods to study the stationary measures of integrable systems with two open boundaries. The first method is based on Askey-Wilson signed measures, which is illustrated for the open
asymmetric simple exclusion process and the six-vertex model on a strip. The second method is based on two-layer Gibbs measures and is illustrated for the geometric last-passage percolation and
log-gamma polymer on a strip. This talk is based on joint works with Yizao Wang, Jacek Wesolowski, Guillaume Barraquand and Ivan Corwin.
March 7, 2024: Atilla Yilmaz (Temple)
Stochastic homogenization of nonconvex Hamilton-Jacobi equations
After giving a self-contained introduction to the qualitative homogenization of Hamilton-Jacobi (HJ) equations in stationary ergodic media in spatial dimension d ≥ 1, I will focus on the case where
the Hamiltonian is nonconvex, and highlight some interesting differences between: (i) periodic vs. truly random media; (ii) d = 1 vs. d ≥ 2; and (iii) inviscid vs. viscous HJ equations.
March 14, 2024: Eric Foxall (UBC Okanagan)
Some uses of ordered representations in finite-population exchangeable ancestry models (ArXiv: https://arxiv.org/abs/2104.00193)
For a population model that encodes parent-child relations, an ordered representation is a partial or complete labelling of individuals, in order of their descendants’ long-term success in some
sense, with respect to which the ancestral structure is more tractable. The two most common types are the lookdown and the spinal decomposition(s), used respectively to study exchangeable models and
Markov branching processes. We study the lookdown for an exchangeable model with a fixed, arbitrary sequence of natural numbers, describing population size over time. We give a simple and intuitive
construction of the lookdown via the complementary notions of forward and backward neutrality. We discuss its connection to the spinal decomposition in the setting of Galton-Watson trees. We then use
the lookdown to give sufficient conditions on the population sequence for the existence of a unique infinite line of descent. For a related but slightly weaker property, takeover, the necessary and
sufficient conditions are more easily expressed: infinite time passes on the coalescent time scale. The latter property is also related to the following question of identifiability: under what
conditions can some or all of the lookdown labelling be determined by the unlabelled lineages? A reasonably good answer can be obtained by comparing extinction times and relative sizes of lineages.
March 21, 2024: Semon Rezchikov (Princeton)
Renormalization, Diffusion Models, and Optimal Transport
To this end, we will explain how Polchinski’s formulation of the renormalization group of a statistical field theory can be seen as a gradient flow equation for a relative entropy functional. We will
review some related work applying this idea to problems in mathematical physics; subsequently, we will explain how this idea can be used to design adaptive bridge sampling schemes for lattice field
theories based on diffusion models which learn the RG flow of the theory. Based on joint work with Jordan Cotler.
March 28, 2024: Spring Break
Percolation Exponent, Conformal Radius for SLE, and Liouville Structure Constant
In recent years, a technique has been developed to compute the conformal radii of random domains defined by SLE curves, which is based on the coupling between SLE and Liouville quantum gravity (LQG).
Compared to prior methods that compute SLE related quantities via its coupling with LQG, the crucial new input is the exact solvability of structure constants in Liouville conformal field theory. It
appears that various percolation exponents can be expressed in terms of conformal radii that can be computed this way. This includes known exponents such as the one-arm and polychromatic
two-arm exponents, as well as the backbone exponents, which is unknown previously. In this talk we will review this method using the derivation of the backbone exponent as an example, based on a
joint work with Nolin, Qian, and Sun.
April 11, 2024: Bjoern Bringmann (Princeton)
Global well-posedness of the stochastic Abelian-Higgs equations in two dimensions.
There has been much recent progress on the local solution theory for geometric singular SPDEs. However, the global theory is still largely open. In this talk, we discuss the global well-posedness of
the stochastic Abelian-Higgs model in two dimension, which is a geometric singular SPDE arising from gauge theory. The proof is based on a new covariant approach, which consists of two parts: First,
we introduce covariant stochastic objects, which are controlled using covariant heat kernel estimates. Second, we control nonlinear remainders using a covariant monotonicity formula, which is
inspired by earlier work of Hamilton.
April 18, 2024: Christopher Janjigian (Purdue)
Infinite geodesics and Busemann functions in inhomogeneous exponential last passage percolation
This talk will discuss some recent progress on understanding the structure of semi-infinite geodesics and their associated Busemann functions in the inhomogeneous exactly solvable exponential
last-passage percolation model. In contrast to the homogeneous model, this generalization admits linear segments of the limit shape and an associated richer structure of semi-infinite geodesic
behaviors. Depending on certain choices of the inhomogeneity parameters, we show that the model exhibits new behaviors of semi-infinite geodesics, which include wandering semi-infinite geodesics with
no asymptotic direction, isolated asymptotic directions of semi-infinite geodesics, and non-trivial intervals of directions with no semi-infinite geodesics.
Based on joint work-in-progress with Elnur Emrah (Bristol) and Timo Seppäläinen (Madison)
April 25, 2024: Colin McSwiggen (NYU)
Large deviations and multivariable special functions
This talk introduces techniques for using the large deviations of interacting particle systems to study the large-N asymptotics of generalized Bessel functions. These functions arise from a versatile
approach to special functions known as Dunkl theory, and they include as special cases most of the spherical integrals that have captured the attention of random matrix theorists for more than two
decades. I will give a brief introduction to Dunkl theory and then present a result on the large-N limits of generalized Bessel functions, which unifies several results on spherical integrals in the
random matrix theory literature. These limits follow from a large deviations principle for radial Dunkl processes, which are generalizations of Dyson Brownian motion. If time allows, I will discuss
some further results on large deviations of radial Heckman-Opdam processes and/or applications to asymptotic representation theory. Joint work with Jiaoyang Huang.
May 2, 2024: Anya Katsevich (MIT)
The Laplace approximation in high-dimensional Bayesian inference
Computing integrals against a high-dimensional posterior is the major computational bottleneck in Bayesian inference. A popular technique to reduce this computational burden is to use the Laplace
approximation, a Gaussian distribution, in place of the true posterior. Despite its widespread use, the Laplace approximation's accuracy in high dimensions is not well understood. The body of
existing results does not form a cohesive theory, leaving open important questions e.g. on the dimension dependence of the approximation rate. We address many of these questions through the unified
framework of a new, leading order asymptotic decomposition of high-dimensional Laplace integrals. In particular, we (1) determine the tight dimension dependence of the approximation error, leading to
the tightest known Bernstein von Mises result on the asymptotic normality of the posterior, and (2) derive a simple correction to this Gaussian distribution to obtain a higher-order accurate
approximation to the posterior. | {"url":"https://wiki.math.wisc.edu/index.php/Probability_Seminar","timestamp":"2024-11-04T15:17:24Z","content_type":"text/html","content_length":"40646","record_id":"<urn:uuid:abda9ed8-6852-47c8-a8e0-aaedfc559151>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00043.warc.gz"} |
Discrete-Time Signals and Systems: Complex Number and Complex Function Review
Adam Panagos / Engineer / Lecturer
Complex Number and Complex Function Review
This series of videos reviews basic definitions and operations associated with complex numbers and complex functions. Understanding how to work with these quantities is essential as we'll be working
with complex numbers and transformations regularly throughout the course.
Basic Definitions
This series of videos reviews basic definitions and operations associated with complex numbers and complex functions. This first video defines the rectangular form and polar form of a complex number, and also visualizes complex numbers in the complex plane.
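For reference (standard identities, not quoted from the video itself): a complex number written in rectangular form as z = x + jy corresponds to the polar form z = r·e^(jθ), where r = |z| = √(x² + y²) is the magnitude and θ = atan2(y, x) is the phase.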
Complex Number Operations
This second video defines a variety of operations that can be performed on complex numbers such as: addition, multiplication, magnitude, and phase computations. It also derives expression for both
the magnitude and phase of a ratio of complex numbers.
Complex Number Operations Example
The previous video in this series provide general definitions of several complex number operations. This video performs specific example computations of complex number addition, conjugation,
multiplication, magnitude, and phase.
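To make these operations concrete outside of Matlab, here is a minimal JavaScript sketch (my own illustration, not part of the course materials; the helper names are arbitrary):

// Complex numbers in rectangular form as { re, im } objects
const add = (a, b) => ({ re: a.re + b.re, im: a.im + b.im })
const mul = (a, b) => ({ re: a.re * b.re - a.im * b.im, im: a.re * b.im + a.im * b.re })
const conj = (a) => ({ re: a.re, im: -a.im })
const mag = (a) => Math.hypot(a.re, a.im) // magnitude |a|
const phase = (a) => Math.atan2(a.im, a.re) // phase in radians

const z1 = { re: 3, im: 4 }
const z2 = { re: 1, im: -2 }
console.log(add(z1, z2)) // { re: 4, im: 2 }
console.log(mul(z1, z2)) // { re: 11, im: -2 }
console.log(mag(z1)) // 5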
Complex Numbers and Operations in Matlab
The previous video in this series worked a variety of example computations with complex numbers. This video performs the same computations yet they are all performed in Matlab.
Complex and Complex-Valued Functions
In this final video we formally define complex and complex-valued functions. We examine the Fourier Transform X(f) of the continuous-time signal x(t). X(f) is a complex-valued function and its
magnitude and phase are computed and plotted.
Collapse of Conflicts into Impossible Obligations
We saw above that Kant's Law, when represented as OBp → ◊p, is a theorem of KTd. If we interpret possibility here as practical possibility, then as the indebtedness example above suggests, it is far
from evident that it is in fact true. However, a stronger claim than that of Kant's Law is that something cannot be obligatory unless it is at least logically possible. In SDL, this might be
expressed by the rule:
If ⊢ ~p then ⊢ ~OBp.
This is derivable in SDL, since if ⊢ ~p, then ⊢ OB~p by OB-NEC, and then by OB-NC, we get ⊢ ~OBp. Claiming that Romeo is obligated to square the circle because he solemnly promised Juliet to do so is
less convincing as an objection than the earlier financial indebtedness case. So SDL is somewhat better insulated from this sort of objection, and, as we noted earlier, we are confining ourselves
here to theories that endorse OB-OD (i.e., ⊢ ~OB⊥).^[1]
However, this points to another puzzle for SDL. The rule above is equivalent to ⊢ OB-OD in any system with OB-RE, and in fact, in the context of SDL, these are both equivalent to OB-NC. That is, we
could replace the latter axiom with either of the former rules for a system equivalent to SDL. In particular, in any system with K and RM, (OBp & OB~p) ↔ OB⊥ is a theorem.^[2] But it seems odd that
there is no distinction between a contradiction being obligatory, and having two distinct conflicting obligations. It seems that one can have a conflict of obligations without it being obligatory
that some logically impossible state of affairs obtains. A distinction seems to be lost here. Separating OB-NC from OB-D is now quite routine in conflict-allowing deontic logics.
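To spell out the equivalence just claimed (a sketch using the rules already mentioned): since ~p is logically equivalent to p → ⊥, OB-RE gives OB~p ↔ OB(p → ⊥), and K then yields OBp → OB⊥; hence (OBp & OB~p) → OB⊥. Conversely, ⊥ logically implies both p and ~p, so RM gives OB⊥ → OBp and OB⊥ → OB~p.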
Some early discussions and attempted solutions to the last two problems can be found in Chellas 1980 and Schotch and Jennings 1981, both of whom use non-normal modal logics for deontic logic.^[3]
Brown 1996a uses a similar approach to Chellas' for modeling conflicting obligations, but with the addition of an ordering relation on obligations to model the relative stringency of obligations,
thus moving in the direction of a model addressing Plato's Dilemma as well.
If You Give A Seed A Fertilizer
Part 1
I read the entire exposition and example explanation once.
And I'm a bit confused.
Then I saw my puzzle input.
The numbers are massive: in the billions!
So, I likely won't be working with filled number ranges, only boundaries.
Still, I gotta read this again.
A second playback
My understanding:
• Each journey is from seed to location: because each seed needs to be planted
• Each map step performs a numeric conversion from one category to another: source to destination
• Seemingly backwards, the first number is the lowest destination number, and the second is the lowest source number
• Lastly is the range of values from the lowest of each number
Inspecting the first mapping:
In words:
soil range start: 50, seed range start: 98, range length: 2
soil range start: 52, seed range start: 50, range length: 48
Another way to think of it:
50 <= 98 (+1)
52 <= 50 (+47)
Ok, this is making more sense.
The important edge case
Any source numbers that aren't mapped correspond to the same destination number.
That should be a fun condition to account for in my algorithm...eventually.
The abridged sample list makes more sense now
It is rendered in the opposite order of the input:
instead of
So, if I render it in the order of the input:
soil <= seed
I now understand the four confirmative seed number to soil number mappings.
Following Seed 79 through its mapping journey
I see how Seed 79 maps to Soil 81: 52 50 48.
Since no soil-to-fertilizer mapping ranges include 81, I see how Soil 81 maps too Fertilizer 81.
Same for Water, so Fertilizer 81 maps to Water 81.
Water's source range for Light includes 81, so Water 81 maps to Light 74.
Light's source range for Temperature includes 74, so Light maps to Temperature 78.
Temperature has no source number mapping to Humidity, so Temperature 78 maps to Humidity 78.
Finally, Humidity's source range for Location includes 78, so Humidity 78 maps to Location 82.
Was that the right answer?
Yup! Nice.
I think I finally get all this!
Now, how the heck am I gonna programmatically find the lowest Location number?
Piece-by-piece and trial-and-error. That's how!
Writing my program a little at a time
The input contains groups of text separated by double line breaks:
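Something like this one-liner does the splitting (a reconstruction; rawInput is a stand-in name for the raw puzzle text):

const input = rawInput.split('\n\n')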
That will give me eight groups: the first is the seeds; the other seven are the mappings:
[ 'seeds...', 'mapping 1...', 'mapping 2...', ... ]
I need a list of the seed numbers - and I actually want to remove the seed number list from the processed input:
const seeds = [...input.shift().matchAll(/\d+/g)].map(el => +el[0])
Then, I intend to do the following:
For each seed, accumulate a number that will become the lowest location number
Check each mapping group for a source number and range
Carry forward any mapped number to the next group
At the end, compare the resulting number to the accumulated number, and swap only if the new number is lower
Sounds simple, but it will require a lot of code to extract numbers, determine ranges, iterate through each group, compare numbers and update numbers.
Converting each mapping group into lists of numbers:
const mappings = input.map(group => {
  group = group.split('\n')
  group.shift()
  return group.map(line => {
    return [...line.matchAll(/\d+/g)].map(el => +el[0])
  })
})
The shift() removes the first line of each group describing the mapping. I don't need that, since I can process each list in order.
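For the example input, the first mapping group then comes out as a plain array of number triples: [[50, 98, 2], [52, 50, 48]].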
Work in progress: determining which mapping - if any - matches the current number:
const part1 = seeds.reduce((lowest, seed) => {
  mappings.reduce((num, group) => {
    let flags = group.map(set => num >= set[1] && num <= (set[1] + (set[2] - 1))).indexOf(true)
    if (flags === -1) {
      return num
    } else {
      // To-do: perform conversion using appropriate mapping range
    }
  }, seed)
  // To-do: compare this number with accumulated number
  return lowest
}, Infinity)
Work in progress: determining the location numbers:
const part1 = seeds.reduce((lowest, seed) => {
  const location = mappings.reduce((num, group) => {
    let match = group.map(set => num >= set[1] && num <= (set[1] + (set[2] - 1))).indexOf(true)
    if (match === -1) {
      return num
    } else {
      if (group[match][1] > group[match][0]) {
        return num - (group[match][1] - group[match][0])
      } else {
        return num + (group[match][0] - group[match][1])
      }
    }
  }, seed)
  return lowest
}, Infinity)
Running this on the example input shows me all four correct location numbers!
I'm seemingly on the right track!
With the last return statement updated, my algorithm generates the correct answer on the puzzle input!
const part1 = seeds.reduce((lowest, seed) => {
  const location = mappings.reduce((num, group) => {
    let match = group.map(set => num >= set[1] && num <= (set[1] + (set[2] - 1))).indexOf(true)
    if (match === -1) {
      return num
    } else {
      if (group[match][1] > group[match][0]) {
        return num - (group[match][1] - group[match][0])
      } else {
        return num + (group[match][0] - group[match][1])
      }
    }
  }, seed)
  return location >= lowest ? lowest : location
}, Infinity)
Will it generate an answer - let alone the correct one - on my puzzle input???
I did it!!!
Part 2
This feels a lot harder...
...even though it seems like there must be some hidden fact about the numbers in a range that can prevent having to check each of them.
Since, of course, there are trillions of them.
How can I attempt to reveal this truth, if there is one?
I'm not gonna crack this code any time soon
I tried the brute-force approach on a single range of seeds.
My program ran for a minute without spitting out a result before I stopped it.
Clearly that isn't a feasible solution.
Sadly, I don't see any way to short-circuit this problem other than checking every seed...which would take an eternity and is clearly not the intended solve path.
So, I admit defeat and throw in the towel.
Part 1 was a blast to solve.
Part 2 requires more advanced computer science skills than I have.
Onward to Day 6!
cover photo is of The Bad Seed, a popular children's book hero
Similar search terms for Geometry:
• Can you help me with solid geometry and geometry?
Yes, I can help you with solid geometry and geometry. I can explain concepts, provide examples, and help you solve problems related to these topics. Whether you need assistance with understanding
the properties of 3D shapes or the principles of geometric theorems, I am here to support you in your learning. Feel free to ask me any specific questions you have, and I will do my best to
assist you.
• What is geometry?
Geometry is a branch of mathematics that deals with the study of shapes, sizes, relative positions of figures, and properties of space. It involves the study of points, lines, angles, surfaces,
and solids. Geometry is used to solve problems related to measurements, design, and spatial reasoning. It plays a crucial role in various fields such as architecture, engineering, art, and
• Can you review geometry?
Yes, I can review geometry. Geometry is a branch of mathematics that deals with shapes, sizes, and properties of space. It includes concepts such as points, lines, angles, shapes, and dimensions.
By studying geometry, we can understand the relationships between different geometric figures and solve problems related to them.
• What is Geometry 36?
Geometry 36 is a branch of mathematics that focuses on the study of shapes, sizes, and properties of space. It involves the study of points, lines, angles, surfaces, and solids, as well as their
relationships and measurements. Geometry 36 is essential in various fields such as architecture, engineering, physics, and computer graphics, as it helps in understanding spatial relationships
and solving real-world problems.
• What does geometry mean?
Geometry is a branch of mathematics that deals with the study of shapes, sizes, and properties of space. It involves the study of points, lines, angles, surfaces, and solids, and how they relate
to each other. Geometry is used to solve problems related to measurements, spatial relationships, and visualization of objects in both two and three dimensions. It is an essential part of
mathematics and has applications in various fields such as architecture, engineering, art, and physics.
• What is Geometry 23?
Geometry 23 is a specific course or textbook chapter that covers advanced topics in geometry. It may include concepts such as three-dimensional shapes, geometric proofs, trigonometry, and more
complex geometric theorems. Students studying Geometry 23 are likely to have a strong foundation in basic geometry and are ready to explore more challenging geometric principles.
• What is Geometry 22?
Geometry 22 is a branch of mathematics that focuses on the study of shapes, sizes, and properties of space. It involves the study of points, lines, angles, surfaces, and solids, as well as the
relationships between them. Geometry 22 also deals with concepts such as symmetry, congruence, similarity, and transformations. It is an important field of study that has practical applications
in various fields such as architecture, engineering, and physics.
• What is bond geometry?
Bond geometry refers to the spatial arrangement of atoms in a molecule, specifically focusing on the angles between the atoms involved in a chemical bond. The geometry of a molecule is determined
by the arrangement of electron pairs around the central atom, which can be influenced by factors such as the number of bonding and non-bonding electron pairs. The most common bond geometries
include linear, trigonal planar, tetrahedral, trigonal bipyramidal, and octahedral, each with specific bond angles that contribute to the overall shape of the molecule. Understanding bond
geometry is crucial in predicting the physical and chemical properties of a molecule, as well as its reactivity with other molecules.
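For reference, the idealized bond angles for these geometries are: linear 180°, trigonal planar 120°, tetrahedral 109.5°, trigonal bipyramidal 90° and 120°, and octahedral 90°.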
• What is analytical geometry?
Analytical geometry, also known as coordinate geometry, is a branch of mathematics that combines algebra and geometry. It involves studying geometric shapes using a coordinate system, where
points are located using numerical coordinates. By using equations and formulas, analytical geometry allows us to describe and analyze geometric figures such as lines, circles, and parabolas in a
precise and systematic way. This branch of mathematics is essential in various fields such as physics, engineering, and computer science for solving complex problems involving shapes and spatial relationships.
• Can you explain geometry?
Geometry is a branch of mathematics that deals with the study of shapes, sizes, and properties of space. It involves concepts such as points, lines, angles, surfaces, and solids, and explores
their relationships and measurements. Geometry helps us understand the world around us by providing a framework to analyze and describe the physical aspects of objects and their spatial
relationships. By studying geometry, we can solve problems related to measurement, design, construction, and visualization.
• What is Geometry 26?
Geometry 26 is a branch of mathematics that focuses on the properties and relationships of shapes, sizes, and dimensions in space. It involves studying concepts such as points, lines, angles,
surfaces, and solids, and how they interact with each other. Geometry 26 also includes topics like symmetry, congruence, similarity, and transformations. Overall, Geometry 26 plays a crucial role
in various fields such as architecture, engineering, physics, and computer science.
• Who can do geometry?
Anyone with the interest and willingness to learn can do geometry. It is a branch of mathematics that involves the study of shapes, sizes, and properties of space. Whether you are a student, a
professional, or someone who enjoys problem-solving, geometry can be learned and applied by anyone. With dedication and practice, anyone can develop the skills to understand and work with
geometric concepts.
[1] 2411.04134
In this paper, we conduct a comprehensive investigation into the weak cosmic censorship conjecture (WCCC) for Reissner-Nordstr\"om (R-N) AdS black holes that are influenced by Perfect Fluid Dark
Matter (PFDM). Our study is framed within the context of Conformal Field Theory (CFT) thermodynamics. We delve into the principles of energy flux and mass-energy equivalence to explore the interplay
between the weak gravity conjecture (WGC) and the WCCC. Our analysis begins by examining the interaction between incoming and outgoing energy fluxes, which induces changes in the black hole's
properties. By applying the first law of thermodynamics, we assess the validity of the second law in these dynamic scenarios. We also consider equilibrium conditions that involve both absorption and
superradiance processes. Utilizing the framework of black hole thermodynamics within CFT, we demonstrate that the WCCC is upheld if the black hole is in or near an extremal state, particularly when
it is subjected to radiation and particle absorption. This finding is significant as it reinforces the robustness of the WCCC under these specific conditions. Furthermore, we uncover additional
insights by employing mass-energy equivalence principles and conducting second-order approximations near the extremality state. Specifically, we find that when a black hole radiates and its central
charge surpasses the scaled electric charge, the emitted superradiant particles adhere to the WGC. This adherence results in the black hole moving away from its extremal state, thereby maintaining
the WCCC.
[2] 2411.04172
Self-dual Yang-Mills and Einstein gravity in Euclidean AdS$_4$ are useful toy models because they can be described by simple scalar Lagrangians exhibiting a new manifestation of the colour/kinematics
duality, as recently shown by two of the authors. In this paper, we clarify how the self-dual sectors fit into the full theories. In particular, we explicitly construct the light-cone action for
Yang-Mills theory and Einstein gravity in AdS$_4$ in terms of positive and negative helicity fields, where we are able to pinpoint the self-dual sector as expected. We then show that the boundary
correlators of these theories take a remarkably simple form in terms of Feynman diagrams in half of flat space, acted on by certain differential operators. We also analyse their soft limits and show
that they exhibit Weinberg-like soft factors, where the soft pole which appears in scattering amplitudes is replaced by a derivative with respect to the energy.
[3] 2411.04183
Field theories exhibit dramatic changes in the structure of their operator algebras in the limit where the number of local degrees of freedom ($N$) becomes infinite. An important example of this is
that the algebras associated to local subregions may not be additively generated in the limit. We investigate examples and explore the consequences of this ``superadditivity'' phenomenon in large $N$
field theories and holographic systems. In holographic examples we find cases in which superadditive algebras can probe the black hole interior, while the additive algebra cannot. We also discuss how
superadditivity explains the success of quantum error correction models of holography. Finally we demonstrate how superadditivity is intimately related to the ability of holographic field theories to
perform quantum tasks that would naively be impossible. We argue that the connected wedge theorems (CWTs) of May, Penington, Sorce, and Yoshida, which characterize holographic protocols for quantum
tasks, can be re-phrased in terms of superadditive algebras and use this re-phrasing to conjecture a generalization of the CWTs that is an equivalence statement.
[4] 2411.04190
We discover a surprising connection between Carrollian symmetries and hydrodynamics in the shallow water approximation. Carrollian symmetries arise in the speed of light going to zero limit of
relativistic Poincar\'e symmetries. Using a recent gauge theoretic description of shallow water wave equations we find that the actions corresponding to two different waves, viz. the so called flat
band solution and the Poincar\'e waves map exactly to the actions of the electric and magnetic sectors of Carrollian electrodynamics.
[5] 2411.04194
We propose a nonperturbative construction of Hopf algebras that represent categories of line operators in topological quantum field theory, in terms of semi-extended operators (spark algebras) on
pairs of transverse topological boundary conditions. The construction is a direct implementation of Tannakian formalism in QFT. Focusing on d=3 dimensional theories, we find topological definitions
of R-matrices, ribbon twists, and the Drinfeld double construction for generalized quantum groups. We illustrate our construction in finite-group gauge theory, and apply it to obtain new results for
B-twisted 3d $\mathcal{N}=4$ gauge theories, a.k.a. equivariant Rozansky-Witten theory, or supergroup BF theory (including ordinary BF theory with compact gauge group). We reformulate our
construction mathematically in terms of abelian and dg tensor categories, and discuss connections with Koszul duality.
[6] 2411.04199
This paper investigates marginal and dipole TsT transformations of a seed type IIB supergravity solution dual to a supersymmetry-preserving deformation of the Klebanov-Witten 4d SCFT. To explore key
properties of the deformed theories, we holographically analyze various observables, including Wilson loops, 't Hooft loops, Entanglement Entropy, and holographic central charge flow. Moreover, we
focus on detecting which of these observables are affected by the dynamics of the Kaluza-Klein (KK) modes resulting from the circle compactification.
[7] 2411.04207
In this work, we investigate the assumptions regarding spacetime backgrounds underlying the classical double copy. We argue (contrary to the norm) that single-copy fields naturally constructed on the
original curved background metric are only interpretable on a flat metric when such a well-defined limit exists, for which Kerr--Schild coordinates offer a natural choice. As an explicit example
where such a distinction matters, we initiate an exploration of single-copies for the G\"odel universe. This metric lacks a (geodesic) Kerr--Schild representation yet is Petrov type-D, meaning the
technology of the ``Weyl double copy" may be utilized. The Weyl derived single copy has many desirable features, including matching the defining properties of the spacetime, and being sourced by the
mixed Ricci tensor just as Kerr--Schild single copies are. To compare, we propose a sourced flat-space single-copy interpretation for the G\"odel metric by leveraging its symmetries, and find that
this proposal lacks the defining properties of the spacetime, and is not consistent with the flat limit of our curved-space single copy. Notably, this inconsistency does not occur in Kerr--Schild
metrics. Our curved-space single copy also leads to the same electromagnetic analogue of the G\"odel universe found separately through tidal force analogies, opening a new avenue of exploration
between the double copy and gravitoelectromagnetism programs.
[8] 2411.04208
We explicitly compute the effective action from Open Superstring Field Theory in the hybrid formalism to quartic order in the $\alpha'\rightarrow 0$ limit, and show that it reproduces ten-dimensional
Super Yang-Mills in terms of four-dimensional superfields. We also show that in this limit the gauge transformations coincide with SYM to all orders, which means that the effective action should
reproduce SYM to all orders.
[9] 2411.04245
The non-linear $\Sigma$-Model minimally coupled with Maxwell theory in $3+1$ dimensions possesses a topologically non-trivial sector characterized by ``lasagna''-like configurations. We demonstrate
that, when a specific quantization condition is met, the associated second-order field equations admit a first-order Bogomol'nyi-Prasad-Sommerfield system. This discloses a hidden $(1+1)$-dimensional
supersymmetry with $\mathcal{N}=2$ supercharges. We examine the supersymmetric imprint on the time-dependent regime, with particular emphasis on the transition from integrability to chaos.
[10] 2411.04251
We investigate the time evolution generated by the two-sided chord Hamiltonian in the double-scaled SYK model, which produces a probability distribution over operators in the double-scaled algebra.
Via the bulk-to-boundary map, this distribution translates into dynamic profiles of bulk states within the chord Hilbert space. We derive analytic expressions for these states, valid across a wide
parameter range and at all time scales. Additionally, we show how distinct semi-classical behaviors emerge by localizing within specific regions of the energy spectrum in the semi-classical limit. We
reformulate the doubled Hilbert space formalism as an isometric map between the one-particle sector of the chord Hilbert space and the doubled zero-particle sector. Using this map, we obtain analytic
results for correlation functions and examine the dynamical properties of operator Krylov complexity for chords, establishing an equivalence between the chord number generating function and the
crossed four-point correlation function. We also consider finite-temperature effects, showing how operator spreading slows as temperature decreases. In the semi-classical limit, we apply a saddle
point analysis and include the one-loop determinant to derive the normalized time-ordered four-point correlation function. The leading correction mirrors the \(1/N\) connected contribution observed
in the large-\(p\) SYK model at infinite temperature. Finally, we analyze the time evolution of operator Krylov complexity for a matter chord in the triple-scaled regime, linking it to the
renormalized two-sided length in JT gravity with matter.
[11] 2411.04325
A scenario to understand the asymptotic properties of confinement between quark probes, based on a mixed ensemble of percolating center vortices and chains, was initially proposed by one of us in a
non-Abelian setting. More recently, the same physics was reobtained by means of a Schr\"odinger wavefunctional peaked at Abelian-projected configurations, which deals with center-vortex lines and
pointlike monopoles in real space. In this work, we will reassess both settings in the unified language provided by the Weingarten lattice representation for the sum over surfaces. In particular, in
the phase where surfaces are stabilized by contact interactions and percolate, lattice gauge fields emerge. This generalizes the Goldstone modes in an Abelian loop condensate to the case where
non-Abelian degrees of freedom are present. In this language, the different natural matching properties of elementary center-vortex worldsurfaces and monopole worldlines can be easily characterized.
In the lattice, both the Abelian and non-Abelian settings implement the original idea that the mixed ensemble conciliates $N$-ality with the formation of a confining flux tube. Common features,
differences in the continuum, and perspectives will also be addressed.
[12] 2411.04343
In this work, we investigate the dynamics of a scalar field in the nonintegrable $\displaystyle \phi ^{4}$ model, restricted to the half-line. Here we consider singular solutions that interpolate the
Dirichlet boundary condition $\phi(x=0,t)=H$ and their scattering with the regular kink solution. The simulations reveal a rich variety of phenomena in the field dynamics, such as the formation of a
kink-antikink pair, the generation of oscillons by the boundary perturbations, and the interaction between these objects and the boundary, which causes the emergence of boundary-induced resonant
scatterings (for example, oscillon-boundary bound states and kink generation by oscillon-boundary collision) organized into complex fractal structures. Linear perturbation analysis was applied to
interpret some aspects of the scattering process. The power spectral density of the scalar field at a fixed point exhibits several frequency peaks. Most of them can be explained, offering some interesting
insights into the interaction between the scattering products and the boundary.
[13] 2411.04344
A uniform construction of non-supersymmetric 0-, 4-, 6- and 7-branes in heterotic string theory was announced and outlined in our letter \cite{Kaidi:2023tqo}. In this full paper, we provide details
on their properties. Among other things, we discuss the charges carried by the branes, their topological and dynamical stability, the exact worldsheet descriptions of their near-horizon regions, and
the relationship of the branes to the mathematical notion of topological modular forms.
[14] 2411.04378
We study AdS form factors, given by the Mellin representation for CFT correlators of local operators in the presence of extended defects. We propose a formula for taking (and expanding around) the
flat-space limit. This formula relates the flat-space form factors for particles scattering off an extended object to the high-energy limit of the Mellin amplitude, via a Borel transform. We check
the validity of our proposal in a number of examples. As an application, we study the two-point function of local operators in the presence of a 't Hooft loop in 4d $\mathcal{N}=4$ SYM, and compute
the first few orders of stringy corrections to the AdS form factor of gravitons scattering off a D1 brane.
[15] 2411.04437
We study timelike supersymmetric solutions of a $D=3, N=4$ gauged supergravity using Killing spinor bilinears method and prove that AdS$_3$ is the only solution within this class. We then consider
the ungauged version of this model. It is found that for this type of solutions, the ungauged theory effectively truncates to a supergravity coupled to a sigma model with a 2-dimensional hyperbolic
target space $\mathbb{H}^2$, and all solutions can be expressed in terms of two arbitrary holomorphic functions. The spacetime metric is a warped product of the time direction with a 2-dimensional
space, and the warp factor is given in terms of the K\"ahler potential of $\mathbb{H}^2$. We show that when the holomorphic function that determines the sigma model scalar fields is not constant, the
metric on the sigma model target manifold becomes part of the spacetime metric. We then look at some special choices for these holomorphic functions for which the spacetime metric and the Killing
spinors are only radial dependent. We also derive supersymmetric null solutions of the ungauged model which are pp-waves on the Minkowski spacetime.
[16] 2411.04492
In this work, we relate the growth rate of Krylov complexity in the boundary to the radial momentum of an infalling particle in AdS geometry. We show that in a general AdS black hole background, our
proposal captures the universal behaviors of Krylov complexity at both initial and late times. Hence it can be generally considered as an approximate dual of the Krylov complexity at least in diverse
dimensions. Remarkably, for BTZ black holes, our holographic Krylov complexity perfectly matches with that of CFT$_2$ at finite temperatures.
[17] 2411.04734
Recently, it was shown by Danielson-Satishchandran-Wald (DSW) that for a massive or charged body in a spatially separated quantum superposition state, the presence of a black hole inevitably
decoheres the superposition through the emission of soft photons or gravitons. In this work, we study the DSW decoherence effect for a static charged body in Reissner-Nordstr\"om black
holes. By calculating the decohering rate for this case, it is shown that the superposition is decohered by the low frequency photons that propagate through the black hole horizon. For the extremal
Reissner-Nordstr\"om black hole, the decoherence of quantum superposition is completely suppressed due to the black hole Meissner effect.
[18] 2411.04754
Membrane configurations in the Banks-Fischler-Shenker-Susskind matrix model are unstable due to the existence of flat directions in the potential and the decay process can be seen as a realization of
chaotic scattering. In this note, we compute the lifetime of a membrane in a reduced model. The resulting lifetime exhibits scaling laws with respect to energy, coupling constant and a cut-off scale.
We numerically evaluate the scaling exponents, which cannot be fixed by the dimensional analysis. Finally, some applications of the results are discussed.
[19] 2411.04774
In this paper, we present a second realization of the Weyl double copy (WDC) in four-dimensional algebraic type D spacetime. We show that any type D vacuum solution admits an algebraically general
Maxwell scalar on the curved background that squares to give the Weyl scalar. The WDC relation defines a scalar field that satisfies the Klein-Gordon equation sourced by the Weyl scalar on the curved
background. We then extend the type D WDC to five dimensions.
[20] 2411.04806
We investigate ultra slow-roll inflation in a black hole background finding a correspondence between scalar solutions of ultra slow-roll inflation and quasi-normal modes of the cosmological black
hole spacetime. Transitions from slow-roll to ultra slow-roll can enhance the peak of the primordial power spectrum increasing the likelihood of primordial black hole formation. By following such a
transition in a black hole background, we observe a decay of the slow-roll attractor solution into the quasi-normal modes of the system. With a large black hole, the ringing modes dominate, which
could have implications for the background of cosmological scalar perturbations and peak enhancement.
[21] 2411.04849
We study twisted M-theory in a general conifold background, and describe it in terms of a 5d non-commutative Chern-Simons-matter theory. In an equivalent description as twisted type IIA string
theory, the matter degrees of freedom arise from topological strings stretched between stacks of D6-branes. In order to study the 5d Chern-Simons-matter theory with a boundary, we first construct and
investigate the properties of a 4d non-commutative gauged chiral WZW model. We prove the gauge invariant coupling of this 4d theory to the bulk 5d Chern-Simons theory defined on $\mathbb{R}_+ \times
\mathbb{C}^2 $, and further generalize our results to the 5d Chern-Simons-matter theory. We also investigate the toroidal current algebra of the 4d chiral WZW model that arises from radial
quantization along one of the complex planes. Finally, we show that a gauged non-commutative chiral 4d WZW model arises from the partition function for quantum 5d non-commutative Chern-Simons theory
with boundaries in the BV-BFV formalism, and further generalize this 5d-4d correspondence to the 5d non-commutative Chern-Simons-matter theory for the case of adjoint matter.
[22] 2411.04857
In this letter, we consider effective field theories for light fields transforming under the fundamental or adjoint representation of a continuous group. We demonstrate that in the presence of
gravity, crossing symmetry combined with two subtraction sum rules leads to stringent constraints on the spectrum of its ultraviolet (UV) completion. Such constraints come in the form of necessary
conditions on the symmetry group irreps of the UV states. This is in sharp contrast with non-gravitational theories where anything goes. Beautifully, the graviton pole is the anchor of our argument,
not an obstruction. Using numerical methods, we also demonstrate that the massless spin-2 must be a singlet under said symmetry group.
[23] 2411.04883
Inspired by an ontic view of the wavefunction in quantum mechanics and motivated by the universal interaction of gravity, we discuss a possible gravity implication in the state collapse mechanism.
Concretely, we investigate the stability of the spatial superposition of a massive quantum state under the gravity effect. In this context, we argue that the stability of the spatially superposed
state depends on its gravitational self-energy originating from the effective mass density distribution through the spatially localized eigenstates. We reveal that the gravitational self-interaction
between the different spacetime curvatures created by the eigenstate effective masses leads to the reduction of the superposed state to one of the possible localized states. Among others, we discuss
such a gravity-driven state reduction. Then, we approach the corresponding collapse time and the induced effective electric current in the case of a charged state, as well as the possible detection
[24] 2411.04886
We introduce supersymmetric extensions of the Hom-Lie deformation of the Virasoro algebra, as realized in the GL(1,1) quantum superspace, for Bloch electron systems under Zeeman effects. The
construction is achieved by defining generators through magnetic translations and spin matrix bases, specifically for the N=1 and N=2 supersymmetric deformed algebras. This approach reveals a
structural parallel between the deformed algebra in quantum superspace and its manifestation in Bloch electron systems.
[25] 2411.04902
We show that the Ward identities of a Carrollian CFT stress tensor at null infinity reproduce the leading and subleading soft graviton theorems for massless scattering in the bulk. We deduce the
expressions of the stress tensor components in terms of the bulk radiative modes, and these components turn out to be local at $\mathscr{I}$ in terms of the twistor potentials. This analysis makes
the correspondence between the large-time limit of Carrollian amplitudes and the soft limit of momentum space amplitudes manifest. We then construct Carrollian CFT currents from the ascendants of the
hard graviton operator, which satisfy the $Lw_{1+\infty}$ algebra. We show that the large-time limit of their Ward identities implies an infinite tower of projected soft graviton theorems in the
bulk, while their finite-time OPEs encode the collinear limit of scattering amplitudes.
[26] 2411.04926
We compute the surface charges associated to $p-$form gauge fields in arbitrary spacetime dimension for large values of the radial coordinate. In the critical dimension where radiation and Coulomb
falloff coincide we find asymptotic charges involving asymptotic parameters, i.e. parameters with a component of order zero in the radial coordinate. However, in different dimensions we still find
nontrivial asymptotic charges, now involving parameters that are not asymptotic times the radiation-order fields. For $p=1$ and $D>4$, our charges thus differ from those presented in the literature.
We then show that under Hodge duality electric charges for $p-$forms are mapped to magnetic charges for the dual $q-$forms, with $q = D-p-2$. For charges involving fields with radiation falloffs the
duality relates charges that are finite and nonvanishing. For the case of Coulomb falloffs, above or below the critical dimension, Hodge duality exchanges overleading charges in one theory with
subleading ones in its dual counterpart.
[27] 2411.04932
We argue that scale-separated AdS$_2$ vacua with at least two preserved supercharges cannot arise from flux compactifications in a regime of computational control. We deduce this by showing that the
AdS$_2$ scale is parametrically of the same order as the tension of a fundamental BPS domain wall, which provides an upper bound on the UV cutoff. Since the latter does not need to be associated to
any geometric scale, the argument excludes scale separation in a broader sense than what is commonly considered. Our claim is exemplified by a bottom-up 2D supergravity analysis as well as top-down
models from Type II flux compactifications.
[28] 2411.04966
We explore the relationship between linear and non-linear causality in theories of dissipative relativistic fluid dynamics. While for some fluid-dynamical theories, a linearized causality analysis
can be used to determine whether the full non-linear theory is causal, for others it cannot. As an illustration, we study relativistic viscous magnetohydrodynamics supplemented by a neutral-particle
current, with resistive corrections to the conservation of magnetic flux. The dissipative theory has 10 transport coefficients, including anisotropic viscosities, electric resistivities, and
neutral-particle conductivities. We show how causality properties of this magnetohydrodynamic theory, in the most general fluid frame, may be understood from the linearized analysis.
[29] 2411.04978
The `quantum gravity in the lab' paradigm suggests that quantum computers might shed light on quantum gravity by simulating the CFT side of the AdS/CFT correspondence and mapping the results to the
AdS side. This relies on the assumption that the duality map (the `dictionary') is efficient to compute. In this work, we show that the complexity of the AdS/CFT dictionary is surprisingly subtle:
there might be cases in which one can efficiently apply operators to the CFT state (a task we call 'operator reconstruction') without being able to extract basic properties of the dual bulk state
such as its geometry (which we call 'geometry reconstruction'). Geometry reconstruction corresponds to the setting where we want to extract properties of a completely unknown bulk dual from a
simulated CFT boundary state. We demonstrate that geometry reconstruction may be generically hard due to the connection between geometry and entanglement in holography. In particular we construct
ensembles of states whose entanglement approximately obey the Ryu-Takayanagi formula for arbitrary geometries, but which are nevertheless computationally indistinguishable. This suggests that even
for states with the special entanglement structure of holographic CFT states, geometry reconstruction might be hard. This result should be compared with existing evidence that operator reconstruction
is generically easy in AdS/CFT. A useful analogy for the difference between these two tasks is quantum fully homomorphic encryption (FHE): this encrypts quantum states in such a way that no efficient
adversary can learn properties of the state, but operators can be applied efficiently to the encrypted state. We show that quantum FHE can separate the complexity of geometry reconstruction vs
operator reconstruction, which raises the question whether FHE could be a useful lens through which to view AdS/CFT.
[30] 2411.04162
We introduce a string-based parametrization for nucleon quark and gluon generalized parton distributions (GPDs) that is valid for all skewness. Our approach leverages conformal moments, representing
them as the sum of spin-j nucleon A-form factor and skewness-dependent spin-j nucleon D-form factor, derived from t-channel string exchange in AdS spaces consistent with Lorentz invariance and
unitarity. This model-independent framework, satisfying the polynomiality condition due to Lorentz invariance, uses Mellin moments from empirical data to estimate these form factors. With just five
Regge slope parameters, our method accurately produces various nucleon quark GPD types and symmetric nucleon gluon GPDs through pertinent Mellin-Barnes integrals. Our isovector nucleon quark GPD is
in agreement with existing lattice data, promising to improve the empirical extraction and global analysis of nucleon GPDs in exclusive processes, by avoiding the deconvolution problem at any
skewness, for the first time.
[31] 2411.04173
We derive universal formulae for integrating out heavy degrees of freedom in scalar field theories up to one-loop level in terms of covariant quantities associated with the geometry of the field
manifold. The universal matching results can be readily applied to phenomenologically interesting extensions of the Standard Model, as we demonstrate using a singlet scalar example. We also discuss
the role of field redefinitions in effective field theory matching and simplifications resulting from going to a field basis where interactions are encoded in a nontrivial metric on the field manifold.
[32] 2411.04182
We investigate the interplay between self-duality and spatially modulated symmetry of generalized $N$-state clock models, which include the transverse-field Ising model and ordinary $N$-state clock
models as special cases. The spatially modulated symmetry of the model becomes trivial when the model's parameters satisfy a specific number-theoretic relation. We find that the duality is
non-invertible when the spatially modulated symmetry remains nontrivial, and show that this non-invertibility is resolved by introducing a generalized $\mathbb{Z}_N$ toric code, which manifests
ultraviolet/infrared mixing, as the bulk topological order. In this framework, the boundary duality transformation corresponds to the boundary action of a bulk symmetry transformation, with the
endpoint of the bulk symmetry defect realizing the boundary duality defect. Our results illuminate not only a holographic perspective on dualities but also a relationship between spatially modulated
symmetry and ultraviolet/infrared mixing in one higher dimension.
[33] 2411.04186
Cosmic (super)strings offer promising ways to test ideas about the early universe and physics at high energies. While in field theory constructions their tension is usually assumed to be constant (or
at most slowly-varying), this is often not the case in the context of String Theory. Indeed, the tensions of both fundamental and field theory strings within a string compactification depend on the
expectation values of the moduli, which in turn can vary with time. We discuss how the evolution of a cosmic string network changes with a time-dependent tension, both for long-strings and closed
loops, by providing an appropriate generalisation of the Velocity One Scale (VOS) model and its implications. The resulting phenomenology is very rich, exhibiting novel features such as growing
loops, percolation and a radiation-like behaviour of the long string network. We conclude with a few remarks on the impact for gravitational wave emission.
[34] 2411.04209
Machine learning (ML) has emerged as a powerful tool in mathematical research in recent years. This paper applies ML techniques to the study of quivers--a type of directed multigraph with significant
relevance in algebra, combinatorics, computer science, and mathematical physics. Specifically, we focus on the challenging problem of determining the mutation-acyclicity of a quiver on 4 vertices, a
property that is pivotal since mutation-acyclicity is often a necessary condition for theorems involving path algebras and cluster algebras. Although this classification is known for quivers with at
most 3 vertices, little is known about quivers on more than 3 vertices. We give a computer-assisted proof that mutation-acyclicity is decidable for quivers on 4 vertices with
edge weight at most 2. By leveraging neural networks (NNs) and support vector machines (SVMs), we then accurately classify more general 4-vertex quivers as mutation-acyclic or non-mutation-acyclic.
Our results demonstrate that ML models can efficiently detect mutation-acyclicity, providing a promising computational approach to this combinatorial problem, from which the trained SVM equation
provides a starting point to guide future theoretical development.
[35] 2411.04221
We assess the variance of supernova(SN)-like explosions associated with the core collapse of rotating massive stars into a black hole-accretion disc system under changes in the progenitor structure.
Our model of the central engine evolves the black hole and the disc through the transfer of matter and angular momentum and includes the contribution of the disc wind. We perform two-dimensional,
non-relativistic, hydrodynamics simulations using the open-source hydrodynamic code Athena++, for which we develop a method to calculate self-gravity for axially symmetric density distributions. For
a fixed model of the wind injection, we explore the explosion characteristics for progenitors with zero-age main-sequence masses from 9 to 40 $M_\odot$ and different degrees of rotation. Our outcomes
reveal a wide range of explosion energies with $E_\mathrm{expl}$ spanning from $\sim 0.3\times10^{51}$~erg to $> 8\times 10^{51}$~erg and ejecta mass $M_\mathrm{ej}$ from $\sim 0.6$ to $> 10\,M_\odot$.
Our results are in agreement with some range of the observational data of stripped-envelope and high-energy SNe such as broad-lined type Ic SNe, but we measure a stronger correlation between
$E_\mathrm{expl}$ and $M_\mathrm{ej}$. We also provide an estimate of the $^{56}$Ni mass produced in our models which goes from $\sim0.04\;M_\odot$ to $\sim 1.3\;M_\odot$. The $^{56}$Ni mass shows a
correlation with the mass and the angular velocity of the progenitor: more massive and faster rotating progenitors tend to produce a higher amount of $^{56}$Ni. Finally, we present a criterion that
allows the selection of a potential collapsar progenitor from the observed explosion energy.
[36] 2411.04233
We investigate cosmological vacuum amplification of gravitational waves in dynamical Chern-Simons gravity. We develop a comprehensive framework to compute graviton production induced by the parity
violating Pontryagin coupling and study its imprint on the stochastic gravitational wave background energy power spectrum. We explore gravitational vacuum amplification in four concrete scenarios for
the evolution of the Chern-Simons pseudoscalar. We show that a parity-violating contribution dominates over an initially flat spectrum when the velocity of the pseudoscalar quickly interpolates
between two asymptotically constant values or when it is nonvanishing and constant through a finite period of time. This is also the case when we parametrize the pseudoscalar evolution by a perfect
fluid with radiation- and dust-like equations of state for large enough values of its energy density. The resulting spectra are compared with the sensitivity curves of current and future
gravitational wave observational searches.
[37] 2411.04244
The highly energetic particle medium formed in the ultrarelativistic heavy ion collision displays a notable difference in temperatures between its central and peripheral regions. This temperature
gradient can generate an electric field within the medium, a phenomenon referred to as the Seebeck effect. We have estimated the Seebeck coefficient for a dense quark-gluon plasma medium by using the
relativistic Boltzmann transport equation in the recently developed novel relaxation time approximation (RTA) model within the kinetic theory framework. This study explores the Seebeck coefficient of
individual quark flavors as well as the entire partonic medium, with the emphasis on its dependence on the temperature and the chemical potential. Our observation indicates that, for given current
quark masses, the magnitude of the Seebeck coefficient for each quark flavor as well as for the partonic medium decreases as the temperature rises and increases as the chemical potential increases.
Furthermore, we have investigated the Seebeck effect by considering the partonic interactions described in perturbative thermal QCD within the quasiparticle model. In addition, we have presented a
comparison between our findings and the results of the standard RTA model.
[38] 2411.04294
Chiral matter exhibits unique electromagnetic responses due to the macroscopic manifestation of the chiral anomaly as anomalous transport currents. Here, we study the modification of electromagnetic
radiation in isotropic chiral matter characterized by an axion coupling that varies linearly over time $\theta(t) = b_0 t$. Using Carroll-Field-Jackiw electrodynamics, we derive the causal Green's
function to investigate the stability and radiation properties of the system. Even though the plane-wave modes of isotropic chiral matter exhibit imaginary frequencies for long wavelengths, which
might suggest instability in the system, we show that their contribution is confined to the near-field region. Also we find no exponentially growing fields at arbitrarily large times, so that
stability is preserved. Under these conditions the radiation yields a positive energy flux, although this is not an inherent property of the general definition. In the case of a fast-moving charge,
we confirm the existence of vacuum Cherenkov radiation and show that, for refractive indices $n > 1$, the Cherenkov cone can split into two concentric cones with opposite circular polarizations. This
split, governed by the speed of the particle $v$, $n$ and $b_0$, resembles the optical spin-Hall effect and offers potential applications for creating circularly polarized terahertz (THz) light
sources. Our Green's function approach provides a general method for analyzing radiation in chiral matter, from Weyl semimetals to quark-gluon plasmas, and can be extended to systems such as
oscillating dipoles and accelerated charges.
[39] 2411.04360
In mixed quantum states, the notion of symmetry is divided into two types: strong and weak symmetry. While spontaneous symmetry breaking (SSB) for a weak symmetry is detected by two-point correlation
functions, SSB for a strong symmetry is characterized by the Renyi-2 correlators. In this work, we present a way to construct various SSB phases for strong symmetries, starting from the ground state
phase diagram of lattice gauge theory models. In addition to introducing a new type of mixed-state topological phases, we provide models of the criticalities between them, including those with
gapless symmetry-protected topological order. We clarify that the ground states of lattice gauge theories are purified states of the corresponding mixed SSB states. Our construction can be applied to
any finite gauge theory and offers a framework to study quantum operations between mixed quantum phases.
[40] 2411.04633
We investigate the response function of an inertial Unruh-deWitt detector in an impulsive plane wave spacetime. Through symmetry considerations applied to the Wightman function, we demonstrate that
the response function remains invariant for any inertial detector, even for those experiencing a discontinuous lightcone coordinate shift after interacting with the shockwave. This implies that the
vacuum state in an impulsive plane wave spacetime is preserved under the associated spacetime symmetries. Additionally, we confirm that the quantum imprint of the shockwave, as discussed in [J. High
Energ. Phys. 2021, 54 (2021)], is not an artifact and exhibits a distinct characteristic form. We identify this form by defining a "renormalized" response function for an eternally inertial detector,
with Minkowski spacetime as a reference.
[41] 2411.04674
In this work, we start by examining a spherically symmetric black hole within the framework of non-commutative geometry and apply a modified Newman-Janis method to obtain a new rotating solution. We
then investigate its consequences, focusing on the horizon structure, ergospheres, and the black hole's angular velocity. Following this, a detailed thermodynamic analysis is performed, covering
surface gravity, Hawking temperature, entropy, and heat capacity. We also study geodesic motion, with particular emphasis on null geodesics and their associated radial accelerations. Additionally,
the photon sphere and the resulting black hole shadows are explored. Finally, we compute the quasinormal modes for scalar perturbations using the 6th-order WKB approximation.
[42] 2411.04730
The no-cloning principle has played a foundational role in quantum information and cryptography. Following a long-standing tradition of studying quantum mechanical phenomena through the lens of
interactive games, Broadbent and Lord (TQC 2020) formalized cloning games in order to quantitatively capture no-cloning in the context of unclonable encryption schemes. The conceptual contribution of
this paper is the new, natural, notion of Haar cloning games together with two applications. In the area of black-hole physics, our game reveals that, in an idealized model of a black hole which
features Haar random (or pseudorandom) scrambling dynamics, the information from infalling entangled qubits can only be recovered from either the interior or the exterior of the black hole -- but
never from both places at the same time. In the area of quantum cryptography, our game helps us construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, thereby,
for the first time, bridging the gap between "MicroCrypt" and unclonable cryptography. The technical contribution of this work is a tight analysis of Haar cloning games which requires us to overcome
many long-standing barriers in our understanding of cloning games. Answering these questions provably requires us to go beyond existing methods (Tomamichel, Fehr, Kaniewski and Wehner, New Journal of
Physics 2013). In particular, we show a new technique for analyzing cloning games with respect to binary phase states through the lens of binary subtypes, and combine it with novel bounds on the
operator norms of block-wise tensor products of matrices.
[43] 2411.04749
We investigate the Berry phase arising from axion-gauge-boson and axion-fermion interactions. The effective Hamiltonians in these two systems are shown to share the same form, enabling a unified
description of the Berry phase. This approach offers a new perspective on certain axion experiments, including photon birefringence and storage-ring experiments. Additionally, we conceptually propose
a novel photon-ring experiment for axion detection. Furthermore, we demonstrate that measuring the axion-induced Berry phase provides a unique way for probing the global structure of the Standard
Model (SM) gauge group and axion-related generalized symmetries.
[44] 2411.04780
This document summarizes the discussions which took place during the PITT-PACC Workshop entitled "Non-Standard Cosmological Epochs and Expansion Histories," held in Pittsburgh, Pennsylvania, Sept.
5-7, 2024. Much like the non-standard cosmological epochs that were the subject of these discussions, the format of this workshop was also non-standard. Rather than consisting of a series of talks
from participants, with each person presenting their own work, this workshop was instead organized around free-form discussion blocks, with each centered on a different overall theme and guided by a
different set of Discussion Leaders. This document is not intended to serve as a comprehensive review of these topics, but rather as an informal record of the discussions that took place during the
workshop, in the hope that the content and free-flowing spirit of these discussions may inspire new ideas and research directions.
[45] 2411.04838
The notion of duality -- that a given physical system can have two different mathematical descriptions -- is a key idea in modern theoretical physics. Establishing a duality in lattice statistical
mechanics models requires the construction of a dual Hamiltonian and a map from the original to the dual observables. By using simple neural networks to parameterize these maps and introducing a loss
function that penalises the difference between correlation functions in original and dual models, we formulate the process of duality discovery as an optimization problem. We numerically solve this
problem and show that our framework can rediscover the celebrated Kramers-Wannier duality for the 2d Ising model, reconstructing the known mapping of temperatures. We also discuss an alternative
approach which uses known features of the mapping of topological lines to reduce the problem to optimizing the couplings in a dual Hamiltonian, and explore next-to-nearest neighbour deformations of
the 2d Ising duality. We discuss future directions and prospects for discovering new dualities within this framework.
[46] 2411.04851
We consider a Yang-Mills type gauge theory of gravity based on the conformal group SO(4,2) coupled to a conformally invariant real scalar field. The goal is to generate fundamental dimensional
constants via spontaneous breakdown of the conformal symmetry. In the absence of other matter couplings the resulting theory resembles Weyl-Einstein gravity, with the Newton constant given by the
square of the (constant) vacuum expectation value of the scalar, the cosmological constant determined by the quartic coupling constant of the scalar field and the Weyl to Einstein transition scale
determined by the Yang-Mills coupling constant. The emergent theory in the long-wave-length limit is Einstein gravity with cosmological constant. As an illustrative example we present an exact
spherically symmetric cosmological solution with perfect fluid energy-momentum tensor that reduces to $\Lambda$FRW in the long-wavelength limit.
[47] 2411.04854
We numerically study axion-U(1) inflation, focusing on the regime where the coupling between axions and gauge fields results in significant backreaction from the amplified gauge fields during
inflation. These amplified gauge fields not only generate high-frequency gravitational waves (GWs) but also induce spatial inhomogeneities in the axion field, which can lead to the formation of
primordial black holes (PBHs). Both GWs and PBHs serve as key probes for constraining the coupling strength between the axion and gauge fields. We find that, when backreaction is important during
inflation, the constraints on the coupling strength due to GW overproduction are relaxed compared to previous studies, in which backreaction matters only after inflation. For PBH formation,
understanding the probability density function (PDF) of axion field fluctuations is crucial. While earlier analytical studies assumed that these fluctuations followed a $\chi^2$-distribution, our
results suggest that the PDF tends toward a Gaussian distribution in cases where gauge field backreaction is important, regardless of whether that happens during or after inflation. We also calculate the spectrum of
the produced magnetic fields in this model and find that their strength is compatible with the observed lower limits.
[48] 2411.04900
We discuss modular domain walls and gravitational waves in a class of supersymmetric models where quark and lepton flavour symmetry emerges from modular symmetry. In such models a single modulus
field $\tau$ is often assumed to be stabilised at or near certain fixed point values such as $\tau = {\rm i}$ and $\tau = \omega$ (the cube root of unity), in its fundamental domain. We show that, in
the global supersymmetry limit of certain classes of potentials, the vacua at these fixed points may be degenerate, leading to the formation of modular domain walls in the early Universe. Taking
supergravity effects into account, in the background of a fixed dilaton field $S$, the degeneracy may be lifted, leading to a bias term in the potential allowing the domain walls to collapse. We
study the resulting gravitational wave spectra arising from the dynamics of such modular domain walls, and assess their observability by current and future experiments, as a window into modular
flavour symmetry.
[49] 2411.05004
We explore the states of matter arising from the spontaneous symmetry breaking (SSB) of $\mathbb{Z}_2$ non-onsite symmetries. In one spatial dimension, we construct a frustration-free lattice model
exhibiting SSB of a non-onsite symmetry, which features the coexistence of two ground states with distinct symmetry-protected topological (SPT) orders. We analytically prove the two-fold ground-state
degeneracy and the existence of a finite energy gap. Fixing the symmetry sector yields a long-range entangled ground state that features long-range correlations among non-invertible charged
operators. We also present a constant-depth measurement-feedback protocol to prepare such a state with a constant success probability in the thermodynamic limit, which may be of independent interest.
Under a symmetric deformation, the SSB persists up to a critical point, beyond which a gapless phase characterized by a conformal field theory emerges. In two spatial dimensions, the SSB of 1-form
non-onsite symmetries leads to a long-range entangled state (SPT soup) - a condensate of 1d SPT along any closed loops. On a torus, there are four such locally indistinguishable states that exhibit
algebraic correlations between local operators, which we derived via a mapping to the critical $O(2)$ loop model. This provides an intriguing example of `topological quantum criticality'. Our work
reveals the exotic features of SSB of non-onsite symmetries, which may lie beyond the framework of topological holography (SymTFT). | {"url":"https://academ.us/list/hep-th/","timestamp":"2024-11-09T09:29:09Z","content_type":"text/html","content_length":"78607","record_id":"<urn:uuid:4b6f1584-c22e-43bd-9943-7f7afe757002>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00768.warc.gz"} |
Isometric projection is a method for visually representing three-dimensional objects in two dimensions in technical and engineering drawings. It is an axonometric projection in which the three
coordinate axes appear equally foreshortened and the angle between any two of them is 120 degrees.
Some 3D shapes using the isometric drawing method. The black dimensions are the true lengths as found in an orthographic projection. The red dimensions are used when drawing with the isometric
drawing method. The same 3D shapes drawn in isometric projection would appear smaller; an isometric projection will show the object's sides foreshortened to approximately 80% of their true length.
Classification of Isometric projection and some 3D projections
The term "isometric" comes from the Greek for "equal measure", reflecting that the scale along each axis of the projection is the same (unlike some other forms of graphical projection).
An isometric view of an object can be obtained by choosing the viewing direction such that the angles between the projections of the x, y, and z axes are all the same, or 120°. For example, with a
cube, this is done by first looking straight towards one face. Next, the cube is rotated ±45° about the vertical axis, followed by a rotation of approximately 35.264° (precisely arcsin 1⁄√3 or arctan
1⁄√2, which is related to the Magic angle) about the horizontal axis. Note that with the cube (see image) the perimeter of the resulting 2D drawing is a perfect regular hexagon: all the black lines
have equal length and all the cube's faces are the same area. Isometric graph paper can be placed under a normal piece of drawing paper to help achieve the effect without calculation.
In a similar way, an isometric view can be obtained in a 3D scene. Starting with the camera aligned parallel to the floor and aligned to the coordinate axes, it is first rotated horizontally (around
the vertical axis) by ±45°, then 35.264° around the horizontal axis.
Another way isometric projection can be visualized is by considering a view within a cubical room starting in an upper corner and looking towards the opposite, lower corner. The x-axis extends
diagonally down and right, the y-axis extends diagonally down and left, and the z-axis is straight up. Depth is also shown by height on the image. Lines drawn along the axes are at 120° to one another.
In all these cases, as with all axonometric and orthographic projections, such a camera would need an object-space telecentric lens, in order that projected lengths not change with distance from the camera.
The term "isometric" is often mistakenly used to refer to axonometric projections, generally. There are, however, actually three types of axonometric projections: isometric, dimetric and oblique.
Rotation angles
From the two angles needed for an isometric projection, the value of the second may seem counterintuitive and deserves some further explanation. Let's first imagine a cube with sides of length 2, and
its center at the axis origin, which means all its faces intersect the axes at a distance of 1 from the origin. We can calculate the length of the line from its center to the middle of any edge as √2
using Pythagoras' theorem. By rotating the cube by 45° on the x-axis, the point (1, 1, 1) will therefore become (1, 0, √2) as depicted in the diagram. The second rotation aims to bring the same
point on the positive z-axis and so needs to perform a rotation of value equal to the arctangent of 1⁄√2 which is approximately 35.264°.
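This derivation can be checked numerically in a few lines of NumPy (an illustrative sketch of ours, not from the article; the rotation sense is chosen so that the corner lands at (1, 0, √2)):

import numpy as np

# Corner of a cube of side 2 centred at the origin.
p = np.array([1.0, 1.0, 1.0])

# First rotation: 45° about the x-axis takes (1, 1, 1) to (1, 0, √2).
t = np.radians(45)
rot_x = np.array([[1, 0, 0],
                  [0, np.cos(t), -np.sin(t)],
                  [0, np.sin(t),  np.cos(t)]])
q = rot_x @ p
print(q)          # -> [1.0, 0.0, 1.41421356]

# Second rotation: arctan(1/√2) ≈ 35.264° about the y-axis brings the
# corner onto the positive z-axis, at distance √3 from the origin.
a = np.arctan(1 / np.sqrt(2))
rot_y = np.array([[np.cos(a), 0, -np.sin(a)],
                  [0,         1,  0],
                  [np.sin(a), 0,  np.cos(a)]])
print(rot_y @ q)  # -> [0.0, 0.0, 1.73205081]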
There are eight different orientations to obtain an isometric view, depending into which octant the viewer looks. The isometric transform from a point a[x,y,z] in 3D space to a point b[x,y] in 2D
space looking into the first octant can be written mathematically with rotation matrices as:

$$\begin{bmatrix} \mathbf{c}_x \\ \mathbf{c}_y \\ \mathbf{c}_z \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} \mathbf{a}_x \\ \mathbf{a}_y \\ \mathbf{a}_z \end{bmatrix} = \frac{1}{\sqrt{6}} \begin{bmatrix} \sqrt{3} & 0 & -\sqrt{3} \\ 1 & 2 & 1 \\ \sqrt{2} & -\sqrt{2} & \sqrt{2} \end{bmatrix} \begin{bmatrix} \mathbf{a}_x \\ \mathbf{a}_y \\ \mathbf{a}_z \end{bmatrix}$$
where α = arcsin(tan 30°) ≈ 35.264° and β = 45°. As explained above, this is a rotation around the vertical (here y) axis by β, followed by a rotation around the horizontal (here x) axis by α. This
is then followed by an orthographic projection to the xy-plane:

$$\begin{bmatrix} \mathbf{b}_x \\ \mathbf{b}_y \\ 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \mathbf{c}_x \\ \mathbf{c}_y \\ \mathbf{c}_z \end{bmatrix}$$
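Putting the two rotations and the projection together (again our illustrative NumPy sketch; the function name is invented, and the matrices mirror those printed above):

import numpy as np

ALPHA = np.arcsin(np.tan(np.radians(30)))  # ≈ 35.264°
BETA = np.radians(45)

def isometric_project(a):
    # Rotate by β about the vertical (y) axis, then by α about the
    # horizontal (x) axis, then drop the z component.
    rot_x = np.array([[1, 0, 0],
                      [0,  np.cos(ALPHA), np.sin(ALPHA)],
                      [0, -np.sin(ALPHA), np.cos(ALPHA)]])
    rot_y = np.array([[np.cos(BETA), 0, -np.sin(BETA)],
                      [0, 1, 0],
                      [np.sin(BETA), 0, np.cos(BETA)]])
    c = rot_x @ rot_y @ np.asarray(a, dtype=float)
    return c[0], c[1]  # orthographic projection onto the xy-plane

# The cube corner (1, 1, 1) projects onto the vertical axis of the drawing:
print(isometric_project([1, 1, 1]))  # -> (0.0, 1.6329...), i.e. (0, 4/√6)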
The other 7 possibilities are obtained by either rotating to the opposite sides or not, and then inverting the view direction or not.^[1]
History and limitations
First formalized by Professor William Farish (1759–1837), the concept of isometry had existed in a rough empirical form for centuries.^[3]^[4] From the middle of the 19th century, isometry became an
"invaluable tool for engineers, and soon thereafter axonometry and isometry were incorporated in the curriculum of architectural training courses in Europe and the U.S."^[5] According to Jan Krikke
(2000)^[6] however, "axonometry originated in China. Its function in Chinese art was similar to linear perspective in European art. Axonometry, and the pictorial grammar that goes with it, has taken
on a new significance with the advent of visual computing".^[6]
As with all types of parallel projection, objects drawn with isometric projection do not appear larger or smaller as they extend closer to or away from the viewer. While advantageous for
architectural drawings where measurements need to be taken directly, the result is a perceived distortion, as unlike perspective projection, it is not how human vision or photography normally work.
It also can easily result in situations where depth and altitude are difficult to gauge, as is shown in the illustration to the right or above. This can appear to create paradoxical or impossible
shapes, such as the Penrose stairs.
Usage in video games and pixel art
Isometric video game graphics are graphics employed in video games and pixel art that utilize a parallel projection, but which angle the viewpoint to reveal facets of the environment that would
otherwise not be visible from a top-down perspective or side view, thereby producing a three-dimensional effect. Despite the name, isometric computer graphics are not necessarily truly
isometric—i.e., the x, y, and z axes are not necessarily oriented 120° to each other. Instead, a variety of angles are used, with dimetric projection and a 2:1 pixel ratio being the most common. The
terms "3⁄4 perspective", "3⁄4 view", "2.5D", and "pseudo 3D" are also sometimes used, although these terms can bear slightly different meanings in other contexts.
Once common, isometric projection became less so with the advent of more powerful 3D graphics systems, and as video games began to focus more on action and individual characters.^[7] However, video
games utilizing isometric projection—especially computer role-playing games—have seen a resurgence in recent years within the indie gaming scene.^[7]^[8]
| {"url":"https://www.knowpia.com/knowpedia/Isometric_projection","timestamp":"2024-11-14T04:04:08Z","content_type":"text/html","content_length":"124996","record_id":"<urn:uuid:60be9c71-f8e2-487f-9bc9-0d4ac5b74c94>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00430.warc.gz"}
Bayesian Treatment of Incomplete Discrete Data applied to Mutual Information and Feature Selection
Authors: Marcus Hutter and Marco Zaffalon (2003)
Comments: 11 pages
Subj-class: Artificial Intelligence; Learning
ACM-class: G.3; G.1.2
Reference: Proceedings of the 26th German Conference on Artificial Intelligence (KI-2003) pages 396-406
Report-no: IDSIA-15-03 and cs.LG/0306126
Paper: LaTeX - PostScript - PDF - Html/Gif
Slides: PowerPoint - PDF
Keywords: Incomplete data, Bayesian statistics, expectation maximization, global optimization, Mutual Information, Cross Entropy, Dirichlet distribution, Second order distribution, Credible
intervals, expectation and variance of mutual information, missing data, Robust feature selection, Filter approach, naive Bayes classifier.
Abstract: Given the joint chances of a pair of random variables one can compute quantities of interest, like the mutual information. The Bayesian treatment of unknown chances involves
computing, from a second order prior distribution and the data likelihood, a posterior distribution of the chances. A common treatment of incomplete data is to assume ignorability and
determine the chances by the expectation maximization (EM) algorithm. The two different methods above are well established but typically separated. This paper joins the two approaches in the
case of Dirichlet priors, and derives efficient approximations for the mean, mode and the (co)variance of the chances and the mutual information. Furthermore, we prove the unimodality of the
posterior distribution, whence the important property of convergence of EM to the global maximum in the chosen framework. These results are applied to the problem of selecting features for
incremental learning and naive Bayes classification. A fast filter based on the distribution of mutual information is shown to outperform the traditional filter based on empirical mutual
information on a number of incomplete real data sets.
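For complete data, the simplest version of this idea can be sketched in a few lines (a simplified illustration of ours, not the paper's derivation: it plugs the posterior-mean chances of a symmetric Dirichlet posterior into the mutual information formula, whereas the paper derives the mean, mode and (co)variance of the mutual information itself and treats missing data via EM):

import numpy as np

def posterior_mean_mi(counts, alpha=1.0):
    # counts: r x c contingency table of complete-data counts.
    # Posterior mean of the joint chances under a Dirichlet(alpha,...,alpha) prior.
    n = np.asarray(counts, dtype=float)
    r, c = n.shape
    pi = (n + alpha) / (n.sum() + alpha * r * c)
    px = pi.sum(axis=1, keepdims=True)   # marginal of the row variable
    py = pi.sum(axis=0, keepdims=True)   # marginal of the column variable
    return float(np.sum(pi * np.log(pi / (px * py))))

# Example: a 2x2 table with mild dependence.
print(posterior_mean_mi([[30, 10], [10, 30]]))  # ≈ 0.12 nats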
BibTeX Entry
author = "Marcus Hutter and Marco Zaffalon",
title = "Bayesian Treatment of Incomplete Discrete Data applied
to Mutual Information and Feature Selection",
year = "2003",
pages = "396--406",
series = "Lecture Notes in Computer Science",
volume = "2821",
booktitle = "Proceedings of the 26th German Conference on Artificial Intelligence (KI-2003)",
editor = "A. G{\"u}nter, R. Kruse and B. Neumann",
publisher = "Springer",
address = "Heidelberg",
http = "http://www.hutter1.net/ai/mimiss.htm",
url = "http://arxiv.org/abs/cs.LG/0306126",
ftp = "ftp://ftp.idsia.ch/pub/techrep/IDSIA-15-03.ps.gz",
keywords = "Incomplete data, Bayesian statistics, expectation maximization,
global optimization, Mutual Information, Cross Entropy, Dirichlet
distribution, Second order distribution, Credible intervals,
expectation and variance of mutual information, missing data,
Robust feature selection, Filter approach, naive Bayes classifier.",
abstract = "Given the joint chances of a pair of random variables one can
compute quantities of interest, like the mutual information. The
Bayesian treatment of unknown chances involves computing, from a
second order prior distribution and the data likelihood, a
posterior distribution of the chances. A common treatment of
incomplete data is to assume ignorability and determine the
chances by the expectation maximization (EM) algorithm. The two
different methods above are well established but typically
separated. This paper joins the two approaches in the case of
Dirichlet priors, and derives efficient approximations for the
mean, mode and the (co)variance of the chances and the mutual
information. Furthermore, we prove the unimodality of the
posterior distribution, whence the important property of
convergence of EM to the global maximum in the chosen framework.
These results are applied to the problem of selecting features for
incremental learning and naive Bayes classification. A fast filter
based on the distribution of mutual information is shown to
outperform the traditional filter based on empirical mutual
information on a number of incomplete real data sets.",
}
| {"url":"http://hutter1.net/ai/mimiss.htm","timestamp":"2024-11-11T06:09:41Z","content_type":"text/html","content_length":"10499","record_id":"<urn:uuid:de718ca6-39d7-4573-8829-63333c0f3fcc>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00444.warc.gz"}
Write an equation or formula – What is Microsoft Office 2016?
Looking for:
force display mode for inline equations (word mac) – Microsoft Community
See the ribbon for more Structures and Convert options. If your equation was written in a previous version of Word, see Change an equation that was written in a previous version of Word. Choose Design to see tools for adding various elements to your equation. You can add or change the following elements to your equation. To see all the symbols, click the More button. To see other sets of symbols, click the arrow in the upper right corner of the gallery.
The Structures group provides structures you can insert. Just choose a structure to insert it and then replace the placeholders, the small dotted-line boxes, with your own values.
The Professional option displays the equation in a professional format optimized for display. The Linear option displays the equation as source text, which can be used to make changes to the equation
if needed. The Linear option will display the equation in either UnicodeMath format or LaTeX format, which can be set in the Conversions chunk.
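For example (an illustration, not from the original article): the linear input \sqrt{a^2+b^2} in LaTeX format, or √(a^2+b^2) in UnicodeMath format, builds up to the same professional-format radical once converted.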
It is possible to convert all equations in a document to the Professional or Linear formats, or a single equation only, if the math zone is selected or the cursor is in the equation. On touch- and pen-enabled devices you can write equations using a stylus or your finger. To write equations with ink: use a stylus or your finger to write a math equation by hand.
If you’re not using a touch device, use your mouse to write out the equation. You can select portions of the equation and edit them as you go, and use the preview box to make sure Word is correctly interpreting your handwriting. When you’re satisfied, click Insert to convert the ink equation to an equation in your document.
Select the equation you need. Use your finger, stylus, or mouse to write your equation.
Add an equation to the equation gallery: select the equation you want to add, choose the down arrow and select Save as New Equation, type a name for the equation in the Create New Building Block dialog, select Equations in the gallery list, and choose OK. To change or edit an equation that was previously written, select the equation to see Equation Tools in the ribbon.
Microsoft word 2016 numbering equations free download
Word for Android and Word Mobile support writing and editing math equations. Write your math equations in linear format, for example a^2+b^2=c^2, and Word will convert them to a professional format. Use the Formula dialog box to create your formula. You can type in the Formula box, select a number format from the Number Format list, and paste in functions using the Paste Function list.
Equation Editor in Microsoft Word | {"url":"https://principa.org/2022/09/30/write-an-equation-or-formula-what-is-microsoft/","timestamp":"2024-11-02T09:20:56Z","content_type":"text/html","content_length":"84039","record_id":"<urn:uuid:6880409b-eae6-4e45-a8f8-0aa3102f2a7f>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00231.warc.gz"} |
A. Applied Mechanics Applied Mechanics, Bio-mechanics, Computational mechanics, Experimental mechanics, Electro-mechanics, Fluid mechanics, Micro-mechanics, Nano-mechanics, Solid mechanics,
Thermo-mechanics, Mechanics of shocks, Material science, Composite materials, Constitutive modeling of materials, Mechanical properties of materials, The mechanical behavior of advanced materials,
Thermodynamics, Thermodynamics of materials and in flowing fluids, Flow and fracture, Internal flow, Heat transport in fluid flows, Aerodynamics, Aeroelasticity, Wave propagation, Stress analysis,
Boundary layers, Dynamics, Elasticity, Hydraulics, Plasticity, Vibration. B. Computational Fluid Dynamics Acoustics, Chemical reactions and combustion, Convergence acceleration procedures,
Distributed computing, Discretisation methods and schemes, Free-surfaces, Fluid-solid interaction, Grid generation and adaptation techniques, Heat transfer, Mesh-free methods, Two-phase flows,
Turbulence, Unsteady flows, Software used for Computational Fluid Dynamics: Fluent, Software used for Computational Fluid Dynamics: Gambit, Modeling & simulation used for Computational Fluid Dynamics | {"url":"https://adrjournalshouse.com/index.php/journal-mechanics-fluid-dynamics/user/register?source=%2Findex.php%2Fjournal-mechanics-fluid-dynamics%2Farticle%2Fview%2F1680%2F1646","timestamp":"2024-11-13T12:46:44Z","content_type":"text/html","content_length":"24938","record_id":"<urn:uuid:531039fb-7e78-40f4-a23f-7e85c6b708b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00738.warc.gz"} |
riemann hypothesis solved 2019
The Riemann hypothesis is a consequence of the Generalized Riemann hypothesis, but we consider them separately, introducing a full proof of the Riemann hypothesis. Proof: we assume that T > 1.3 × 10^21. It also includes large amplitude behaviour. These yield two independent 'helicity' states per 4-momentum.
Four mathematicians, Michael Griffin of Brigham Young University, Ken Ono of Emory University (now at University of Virginia), Larry Rolen of Vanderbilt University and Don Zagierof the Max Planck
Institute, have proven a significant result that is thought to be on the roadmap to a proof of the most celebrated of unsolved mathematical conjecture, namely the Riemann hypothesis. Riemann found
that the key to understanding their distribution lay within another set of numbers, the zeroes of a function called the Riemann zeta function that has both real and imaginary inputs.
Let the Riemann zeta function be represented by the meromorphic function:
What Griffin, Ono, Rolen and Zagier have shown is that for $d \geq 1$, the associated Jensen polynomials $J_\gamma^{d,n}$ are hyperbolic for all sufficiently large $n$. solution of the problem is
proposed. Unsolved Problem in Mathematics; Joseph Henry Press, 412 pages. “Mathematics and physics divided and
became more specialized of necessity because the explosion of scientific knowledge made it almost impossible to master even one of them,” Gonek says.
But that’s not why it fascinates mathematical physicist Andre’ LeClair, for whom this is perhaps the most important open question in mathematics.
[Figure: the result for transmission probability through the Eckart barrier as a function of momentum]
In this work, we seek to present the connections between the equations constructed from Euler's quadratic equation by Enoch (2015) that gave its eigenvalues as the zeros of the Riemann zeta equation. Ono thought that the problem was essentially intractable and did not expect Zagier to make much headway with it, but Zagier was enthused by the challenge and soon had
sketched a solution. A proof – or disproof – of this 150-year-old hypothesis would have major implications in number theory. Hilbert answered, ‘Has the Riemann hypothesis been proven?’” As you keep
counting upwards, prime numbers rapidly become less frequent. But the result is definitely encouraging. "His hypothesis is a mouthful, but Riemann's motivation was simple," Ono says.
Enoch and L. O. Salaudeen (2013). Mathematics, computing and modern science. LeClair has been working on the hypothesis over the past 5 years. "The beauty of our proof is its simplicity," Ono says.
Euler subsequently proved that $$\zeta(s) \; = \; \prod_{p \; {\rm prime}} \frac{1}{1 - p^{-s}} \; = \; \frac{1}{1 - 2^{-s}} \cdot \frac{1}{1 - 3^{-s}} \cdot \frac{1}{1 - 5^{-s}} \cdot \frac{1}{1 - 7^{-s}} \cdots,$$ which clearly indicates an intimate relationship between the zeta function and prime numbers.
The proof of the Riemann Hypothesis is presented in three different ways in this paper.
The probability of picking a number at random and having it be prime is zero. Instead, the advantage
of their proof is its simplicity (the paper is only eight pages long!). A new representation of the integral component of ζ(z) is derived, and the connecting link between the Riemann Zeta Function and the work of A. Selberg (1956), for which he won the Wolf Prize in mathematics, on harmonic analysis and discontinuous groups in weakly symmetric Riemannian spaces with applications to Dirichlet series, is shown to be a generator of the imaginary components of the non-trivial zeros of the Riemann zeta function.
Although the primes may appear random, they are deterministic. On The Turning Point, Critical Line and the Zeros of Riemann Zeta Function, An introduction to the theory of the Riemann zeta-function.
He illustrates the hypothesis’ importance to number theory with a well-known anecdote: “Someone asked a famous mathematician if he went to sleep and woke up 100 years from now, what would his first
question be? In dimension 4(2j+1)^e, with 2j integer, 2j+1 prime and e a positive integer, the use of direct products of matrices of type V_a makes it possible to generate mutually
unbiased bases. A proof – or disproof – of this 150-year-old hypothesis would have major implications in number theory. “Even if we can’t fully prove our main assumption, we’ve shed light on where to
This recipe yields an (apparently new) compact formula for the vectors spanning the various mutually unbiased bases. The classical electromagnetic field is described by a pair of 3-vector fields, E and B, (or a
single anti-symmetric second rank tensor) which are functions of space and time.
Standing since 1859, it relates to how prime numbers work, and connects to many other branches of math.
The Derivation of The Riemann Zeta Function from Euler's Quadratic Equation And The Proof of The Riemann Hypothesis, THE RIEMANN HYPOTHESIS AND THE EULER'S QUADRATIC EQUATION, From The Zeros of the
Riemann Zeta Function to Its Analytical Continuation Formula, A new representation of the riemann zeta function via its functional equation, A general representation of the zeros of the Riemann zeta
Function via Fourier series Expansion. From this one assumption we show that a lot of things would follow. The relationship among the turning point, the critical strip and the zeros of Riemann Zeta
function is Investigated and established in this paper.
the computational time is reasonable and the algorithm used is relatively simple in comparison to other ones proposed for
STILL ELUSIVE Researchers may have edged closer to a proof of the Riemann hypothesis — a statement about the Riemann zeta function. Similarly, $\zeta(4) = \pi^4/90$, $\zeta(6) = \pi^6/945$, and similar results hold for all positive even integer arguments. "The result established here may be viewed as offering further evidence toward the Riemann Hypothesis, and in any
case, it is a beautiful stand-alone theorem," says Kannan Soundararajan, a mathematician at Stanford University and an expert on the Riemann Hypothesis. “We make one assumption, a
fully plausible conjecture.
In 2019, as with all other such years, mathematicians who bother to read such proofs realize that they are not proper proofs.
The Hilbert–Pólya Conjecture and the Berry–Keating Conjecture, as they apply to quantum mechanics, are investigated and shown to be true through the representation of the obtained matrices as H_cl = XP, where X and P are position (one of the Pauli spin matrices) and momentum matrices respectively, for ζ(z) and ε(t).
Some numbers have the special property that they cannot be expressed as the product of two smaller numbers, e.g., 2, 3, 5, 7, etc. "And at the same time, its proofs are easy to follow and understand.
This in effect explains why the Maxwell equations have the form they do.
| {"url":"https://defektybudov.vstecb.cz/site/archive.php?edc0e2=riemann-hypothesis-solved-2019","timestamp":"2024-11-13T01:44:03Z","content_type":"text/html","content_length":"45561","record_id":"<urn:uuid:58ebf661-32c5-4ff7-b3fd-154bc4fbca86>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00674.warc.gz"} |
Tim Sullivan
In two weeks Hanne Kekkonen (University of Warwick) will give a talk on “Large noise in variational regularisation”.
Time and Place. Monday 12 June 2017, 11:00–12:00, ZIB Seminar Room 2006, Zuse Institute Berlin, Takustraße 7, 14195 Berlin
Abstract. We consider variational regularisation methods for inverse problems with large noise, which is in general unbounded in the image space of the forward operator. We introduce a Banach space
setting that allows to define a reasonable notion of solutions for more general noise in a larger space provided one has sufficient mapping properties of the forward operator. As an example we study
the particularly important cases of one- and p-homogeneous regularisation functionals. As a natural further step we study stochastic noise models and in particular white noise, for which we derive
error estimates in terms of the expectation of the Bregman distance. As an example we study total variation prior. This is joint work with Martin Burger and Tapio Helin.
Published on Wednesday 31 May 2017 at 16:00 UTC #event #uq-talk #inverse-problems #kekkonen | {"url":"http://www.tjsullivan.org.uk/tag/kekkonen/","timestamp":"2024-11-11T21:20:42Z","content_type":"text/html","content_length":"14813","record_id":"<urn:uuid:d7e5cacb-af22-4fc3-a622-7c703e9f56a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00797.warc.gz"} |
Am I doing wrong with the boundary conditions in NDSolveValue?
I am trying to solve 2D PDEs with NDSolveValue[] (Mathematica 10 windows & linux both), but I always get some errors with the derivative boundary conditions. For example:
uif = NDSolveValue[D[f[x, y], {x, 2}] + D[f[x, y], {y, 2}] == 1 && Derivative[1, 0][f][1, y] == 1 && f[0, y] == 1, f, {x, 0, 1}, {y, 0, 1}]
solves a simple Poisson equation on a square (0,1)x(0,1), with \partial f / \partial x =1 on x=1. However, the solution uif always gives wrong answer:
Plot[Derivative[1, 0][uif][x, 0.5], {x, 0, 1}]
So the derivative of the solution at x=1 is the negative of the condition specified.
If I specify Derivative[1, 0][f][0, y] == 1, (same condition at x=0), the answer would be correct. Same thing would happen for all my test PDEs.
Is Mathematica 10 assuming that the boundary condition specified is affected by the normal vector on the boundary? Or how should I specify the boundary conditions appropriately?
Thank you,
The same command does not evaluate in Mathematica 9 and gives an error message instead:
NDSolveValue::ivone: Boundary values may only be specified for one independent variable. Initial values may only be specified at one value of the other independent variable. >>
Also, if you invert the sign of your first condition like this:
uif = NDSolveValue[D[f[x, y], {x, 2}] + D[f[x, y], {y, 2}] == 1 && Derivative[1, 0][f][1, y] == -1 && f[0, y] == 1, f, {x, 0, 1}, {y, 0, 1}]
you get a positive 1 derivative just as you wanted.
Plot3D[uif[x, y], {x, 0, 1}, {y, 0, 1}]
I know that this does not solve your problem. I am not really sure what MMA10 does differently from MMA9 here.
Cheers, Marco | {"url":"https://community.wolfram.com/groups/-/m/t/298305","timestamp":"2024-11-14T14:21:29Z","content_type":"text/html","content_length":"95706","record_id":"<urn:uuid:a6aa1c2e-33c4-4874-8c98-419639ba9f36>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00748.warc.gz"} |
How to Rate Coil Springs Without a Spring Rater Tool
Mathematical Spring Rating Formula
Not many people have access to a spring rating tool. You can come a lot closer than you would think just using some dial calipers and a measuring tape to measure the spring rate of your springs. Here
is how to do it:
SPRING RATE = GD^4 / (8NM^3)
G = Torsional Modulus for Steel, 11.25 x 10^6
D = Wire Diameter in Inches
N = Number of Active Coils
M = Mean Coil Diameter in Inches. Mean Diameter Is:
If using I.D. = Inside Diameter plus one wire diameter
If using O.D. = Outside Diameter minus one wire diameter
8 = A Constant for all Coil Springs
The “G” Factor is always the same for all coil springs made from
steel (11.25 x 10^6 can also be written as 11,250,000).
EXAMPLE: 10 active coils, a mean coil diameter of 5.00 inches, and a wire size of .625
Numerator: 11,250,000 x .625 x .625 x .625 x .625 = 1,716,614 (Constant x Wire Dia.^4)
Denominator: 8 x 10 x 5.0 x 5.0 x 5.0 = 10,000 (8 x Active Coils x Mean Dia.^3)
Spring Rate = 1,716,614 / 10,000
Spring Rate = 171.66 lbs. per inch
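The formula drops straight into code. Here is a small Python sketch (the function name and defaults are mine, not from the article) that reproduces the worked example:

def spring_rate(wire_dia, mean_dia, active_coils, G=11.25e6):
    # Coil spring rate in lbs/inch: G * D^4 / (8 * N * M^3)
    return G * wire_dia**4 / (8 * active_coils * mean_dia**3)

print(round(spring_rate(0.625, 5.0, 10), 2))   # 171.66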
Count total number of coils, subtract a coil for each coil that touches, these are dead coils. Ground flat ends are a dead coil. Start count with cut-off end facing you directly above would be one
and so on. Not all coil rings are even coiled. You can have .125, .25, .5 or .75 of a coil (Example 10.25 Coils). | {"url":"https://roadraceengineering.com/blog/2003/07/29/how-to-rate-coil-springs-without-a-rater/","timestamp":"2024-11-13T11:59:44Z","content_type":"application/xhtml+xml","content_length":"40627","record_id":"<urn:uuid:a6133b48-15f5-4a3b-ae88-7d75fc59f89c>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00188.warc.gz"} |
PPT - Understanding the States of Matter and Properties of Gases
Understanding the States of Matter and Properties of Gases
Matter can be classified into gas, liquid, and solid based on particle arrangement, shape, and motion. Gases in pharmacy are crucial for various applications like anesthesia and aerosols. Properties
of gases include compressibility, pressure exertion, diffusion, and expansion. Gas laws such as Boyle's, Charles's, Amonton's, and Avogadro's laws help explain gas behavior under different
Presentation Transcript
2. Matter can be classified according to its state into: 1. Gas 2. Liquid 3. Solid
Particle arrangement: solid - particles close together in a regular arrangement; liquid - closely packed together in an irregular arrangement; gas - arranged totally irregularly.
Shape: solid - fixed shape and volume; liquid - no fixed shape but fixed volume; gas - no fixed shape or volume.
Motion of particles: solid - no free motion, particles vibrate in their positions; liquid - move around past each other; gas - move randomly.
Ability to compress: solid - no compression; liquid - little; gas - easy.
3. Gases and volatile substances are encountered in pharmacy mainly as anaesthetic gases, volatile drugs and aerosol propellants. This part deals with the properties of gases and vapours, including
the way in which the vapourpressure above solutions varies with the composition of the solution and the temperature. The factors governing the solubility of gases in liquids are reviewed and
related to the solubility of anaesthetic gases in the complex solvent systems comprising blood and tissues.
4. Properties of gases: 1. Gases can be compressed into a smaller volume. 2. Gases exert pressure on their surroundings and always form homogeneous mixtures with other gases. 3. Temperature affects either the volume or the pressure or both. 4. Gases diffuse (move throughout any available space). 5. Gases expand without limits. 6. Gases are measured in: pressure, volume, temperature and number of moles.
5. Boyle's law, Charles's law, Amonton's law, Avogadro's law, the general gas law
6. At constant temperature, the volume of a definite mass of a gas is inversely proportional to the pressure. This means that the product of the pressure and volume of a given mass is constant at
constant temperature. PV = K (at constant n and T) Where; P is the pressure, V is the volume and K is a constant that depends on the number of mole of the gas and the temperature. If a definite
mass of gas has a volume =V1and pressure = P1and either the volume or pressure is changed to V2or P2at constant temperature then P1V1= P2V2
7. A sample of oxygen occupies 10 L under a pressure of 790 torr (105 kPa). At what pressure would it occupy 13.4 L if the temperature did not change? Solution: P1= 790 torr, V1= 10 L, and V2= 13.4
L, P2= ? P1V1= P2V2 790 x 10 = P2x 13.4 P2= 7900 / 13.4 P2= 590 torr
8. At constant pressure, the volume occupied by a definite mass of a gas is directly proportional to its absolute temperature. Mathematically, Charles`s law can be written as follow: V = KT (at
constant n and P) Where; P is the pressure, V is the volume and K is a constant that depends on the number of mole of the gas and the pressure. If a definite mass of gas has a volume =V1and
temperature = T1, and the volume is changed to V2 at constant pressure, then V1/T1 = V2/T2, i.e. V2/V1 = T2/T1. The equation is valid only when the temperature is in Kelvin (K = C + 273). However, volume can be expressed in any volume units such as liters, milliliters or cubic feet.
9. A sample of nitrogen occupies 117 mL at 100 C. At what temperature in C would it occupy 234 mL if the pressure did not change? Solution: T1 = 100 C, V1 = 117 mL, and V2 = 234 mL, T2 = ? T1 = 100 C = 100 + 273 = 373 K. V1/T1 = V2/T2, so T2 = V2 x T1 / V1 = 234 x 373 / 117 = 746 K. T2 = 746 - 273 = 473 C.
10. Combining Boyle's law and Charles's law in a single expression gives the combined gas law. Boyle's law: P1V1 = P2V2. Charles's law: V1/T1 = V2/T2. Combined gas law for a constant amount of gas: P1V1/T1 = P2V2/T2. When any five of these variables are known, the sixth variable can be calculated. Units: volume can be expressed in any units, pressure also can be expressed in any units, but temperature must be in Kelvin (absolute temperature).
11. The pressure and temperature can affect the volume (and therefore the density) of a gas. It is convenient to choose some standard temperature and pressure as a reference points. By international
agreement, the standard temperature is exactly 0 C and the pressure is one atmosphere (=760 torr) or (760 mm Hg).
12. A sample of neon occupies 105 L at 27 C under a pressure of 985 torr. What volume would it occupy at standard temperature and pressure (STP)? V1 = 105 L, T1 = 27 C = 27 + 273 = 300 K, P1 = 985 torr. STP means P2 = 760 torr and T2 = 0 C = 0 + 273 = 273 K.
V2 = P1 x V1 x T2 / (P2 x T1) = (985 x 105 x 273) / (760 x 300) = 124 L
Practice: a volume of a gas occupies 12 L at 240 C under a pressure of 80 kPa. At what temperature would the gas occupy 15 L if the pressure was increased to 107 kPa? (answer: T = 858 K or 585 C)
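A quick numeric check of both calculations in Python (the variable names are illustrative; this snippet is not part of the original slides):

# Neon example: 105 L at 300 K and 985 torr, taken to STP (760 torr, 273 K)
p1, v1, t1 = 985.0, 105.0, 300.0
p2, t2 = 760.0, 273.0
print(round(p1 * v1 * t2 / (p2 * t1)))     # 124 (liters)

# Practice problem: 12 L at 513 K and 80 kPa -> 15 L at 107 kPa
p1, v1, t1 = 80.0, 12.0, 513.0
p2, v2 = 107.0, 15.0
print(round(t1 * (p2 * v2) / (p1 * v1)))   # 858 (kelvin) = 585 C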
13. At constant pressure and temperature, equal volumes of all gases contain the same number of molecules. Or At constant pressure and temperature, the volume (v1) occupied by gas sample is directly
proportional to the number of moles (n1) of gas: V ∝ n, or V = kn, or V/n = k at constant P and T. For two samples of gases at the same temperature and pressure, the relationship between volumes and numbers of moles can be expressed as follows: V1/n1 = V2/n2 at constant P and T. Standard molar volume is the volume occupied by one mole of gas at standard temperature and pressure (STP). The standard molar volume of an ideal gas is 22.414 liters per mole.
14. According to the previous equations, a gas can be described in terms of its pressure, temperature, volume and number of moles. If any three of these variables are known, the fourth can be calculated. An ideal gas is one that obeys exactly these gas laws. Many real gases show slight deviations from ideality, but at normal temperature and pressure the deviations are usually small enough to be ignored.
- Boyle's law: V ∝ 1/P (at constant T and n)
- Charles's law: V ∝ T (at constant P and n)
- Avogadro's law: V ∝ n (at constant P and T)
Combining the three (no restriction): V ∝ nT/P, so V = R(nT/P), which gives the ideal gas equation or ideal gas law: PV = nRT
15. PV = nRT. The constant R is called the universal gas constant and its value depends on the units of P, V and T. The numerical value of R can be calculated as follows: one mole of an ideal gas occupies 22.414 liters at 1.000 atmosphere and 273.15 K (STP). PV = nRT, so R = PV/nT = (1 x 22.414) / (1 x 273.15) = 0.082057 L atm / mol K. Usually the universal gas constant is taken as 0.0821 L atm / mol K. Equivalent values:
8.314 x 10^3 L Pa / mol K
8.314 x 10^3 g m^2 / s^2 mol K
8.314 J / K mol
16. Example 1: What pressure, in atm, is exerted by 54.0 g of Xe gas in a 1.0 L flask at 20 C? (atomic number = 54 and atomic mass = 131.3 for Xe). Solution: V = 1.0 L, T = 20 + 273 = 293 K. Number of moles n = grams / M.Wt = 54 / 131.3 = 0.411 mol. PV = nRT, so P = nRT/V = (0.411 x 0.0821 x 293) / 1 = 9.89 atm.
Example 2: What is the volume of a gas balloon filled with 4.0 moles of He when the atmospheric pressure is 748 torr and the temperature is 30 C? P = 748 torr = 748/760 = 0.984 atm, n = 4.0 mol, T = 30 + 273 = 303 K. V = nRT/P = (4.0 x 0.0821 x 303) / 0.984 = 101 L.
17. Example: A 0.109 g sample of a pure gaseous compound occupies 112 mL at 100 C and 750 torr. What is the molecular weight of the compound? Solution: the number of moles is calculated using the ideal gas law. V = 112 mL = 0.112 L, P = 750 torr = 750/760 = 0.987 atm, T = 100 + 273 = 373 K. n = PV/RT = (0.987 x 0.112) / (0.0821 x 373) = 0.00361 mol. Since n = weight (in grams) / M.Wt, M.Wt = 0.109 gram / 0.00361 moles = 30.2 g/mol.
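The same two-step calculation is easy to verify numerically (a Python sketch, names illustrative and not from the slides):

R = 0.0821                        # L atm / (mol K)
p = 750 / 760                     # atm
v, t, mass = 0.112, 373.0, 0.109  # L, K, g
n = p * v / (R * t)
print(round(n, 5))                # 0.00361 mol
print(round(mass / n, 1))         # 30.2 g/mol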
18. An ideal gas always obeys the gas law (PV = nRT). A real gas deviates from (does not obey) the gas law (PV ≠ nRT). Gases tend to behave ideally at high temperature and low pressure. Real gases deviate from ideal behaviour when T is very low and P is very high. At high pressure, the distances between gas molecules are small, and intermolecular forces develop between the molecules; therefore, the less the gas resembles an ideal gas (gases tend to liquefy at high pressure). At low temperature, the gas molecules move slowly and close to each other, so low temperature means low energy available to break intermolecular forces; therefore, colder temperatures make gases act less ideally (gases tend to liquefy at low temperature). Real gases deviate from the kinetic theory of gases in two points: 1- real gases possess attractive forces between molecules; 2- every molecule in a real gas has a real volume.
19. The kinetic theory of gases assumes that the gas molecules behave as perfectly elastic spheres having negligible volume with no intermolecular attraction or repulsion. In some types of aerosol
(compressed gas aerosols) an inert gas under pressure is used to expel the product as a solid stream, a mist or a foam. The pressure of gas in an aerosol container of this type is readily
calculated using the gas laws, as in Example 2.1.
20. Most gases obey the ideal gas law to a good approximation when near room temperature and at a moderate pressure. At higher pressures one might need a better description. The van der Waals equation of state is one such correction:
(P + an^2/V^2)(V - nb) = nRT
The symbols a and b represent constant parameters that have different values for different gases. According to the van der Waals equation, the pressure of a real gas will be lower than that of the ideal gas because attraction to neighboring molecules tends to decrease the impact of the gas molecules on the wall of the container.
21. The measured pressure of 1.000 mol of a gas with a volume of 1.000 L at a temperature of 503 K is 30 atm. A. Does the gas behave as an ideal gas under these conditions or not? B. Calculate the pressure of the gas using the van der Waals equation (a = 17.0 L^2 atm/mol^2 and b = 0.136 L/mol).
Answer: Ideal gas law: PV = nRT, so P = nRT/V = (1 x 0.0821 x 503) / 1 = 41.3 atm. This value is higher than the measured value, so the gas does not obey the ideal gas law.
van der Waals equation: P = nRT/(V - nb) - an^2/V^2 = (1 x 0.0821 x 503) / (1.0 - 1 x 0.136) - (17.0 x 1^2) / (1.0)^2 = 47.8 - 17.0 = 30.8 atm.
The measured value is close to that obtained from the van der Waals equation.
22. Liquids have a definite volume because the intermolecular forces of attraction between molecules are just strong enough to confine the molecules in a definite space. A liquid has no definite shape and acquires the shape of the container because the intermolecular forces are weaker than in a solid. All liquids have intermolecular forces called van der Waals forces, in addition to other forces in certain solvents. A liquid is compressible because the distance between the molecules is larger in liquids than in solids. Each liquid has the following properties: 1- vapor pressure, 2- boiling point, 3- freezing point, 4- surface tension | {"url":"https://www.slideorbit.com/slide/understanding-the-states-of-matter-and-properties-of-gases/154431","timestamp":"2024-11-04T20:15:04Z","content_type":"text/html","content_length":"85248","record_id":"<urn:uuid:aa7a9858-bd20-4c05-a749-e2220d3dd1e5>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00538.warc.gz"} |
Price Elasticity of Demand Calculator
It can be a challenge to explain the price elasticity of demand without an example. Imagine you’re a furniture salesman. You sell 20 lounge suites every month for $1,500. One day, you get a bright idea. What if I cut the price by $200, selling them for $1,300?
From such a decision, you might get more customers. More customers equal an increase in revenue. Let’s see if it would be enough to offset the discount.
This scenario highlights the price elasticity of demand. The change would create a new customer behavior with sales of that discounted sofa. If price elasticity is high, the decrease will cause a massive demand for the product. The salesman’s profits will increase as a result. If the elasticity is low, there might be a slight demand, but not enough to offset revenue loss from the discount.
Price elasticity of demand doesn’t relate to packaging and marketing. This calculator can’t tell you the profitability of selling a gallon of apple juice for $1 or two gallons for $1.50. You would need a price and quantity calculator for such an equation.
You use a midpoint formula to determine the elasticity of demand. It’s as follows:
PED = [ (Q₁ - Q₀) / (Q₁ + Q₀) ] / [ (P₁ - P₀) / (P₁ + P₀) ]
P₀ = initial product price
P₁ = final product price
Q₀ = initial demand
Q₁ = the demand post-price change
PED = price elasticity of demand
In most situations, the price elasticity of demand is negative: price and demand are inversely related, so the higher the price, the lower the demand, and vice versa.
The midpoint formula and calculator can find any values in the equation (P₀, P₁, Q₀ or Q₁). Input your variables into the price elasticity of demand calculator to get an answer.
Enjoy this example of price elasticity of demand below.
1. Identify the original price.
Our lounge suite is $1,500.
2. Identify the initial demand.
We sold 20 lounge suites per month.
3. Choose a new price.
We took $200 off, bringing it down to $1,300.
4. Calculate the quantity you sold on the new price.
We sold 30.
5. Use a midpoint formula to calculate the elasticity of demand.
PED = [ (Q₁ - Q₀) / (Q₁ + Q₀) ] / [ (P₁ - P₀) / (P₁ + P₀) ]
PED = [ (30 - 20) / (30 + 20) ] / [ (1,300 – 1,500) / (1,300 + 1,500) ]
PED = [ 10 / 50 ] / [ -200 / 2,800 ]
PED = (10 x 2,800) / (-200 x 50)
PED = 28,000 / -10,000 = -2.8
6. The price elasticity of demand is -2.8. You can work this problem out by hand or with the calculator.
There is also an equation for calculating revenue of initial and final states. It is as follows:
R = P x Q
The revenue increase uses this formula:
ΔR = R₁ - R₀ = P₁ x Q₁ - P₀ x Q₀
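Both formulas are one-liners in code. A minimal Python sketch (the function and variable names are mine, not from the article) reproducing the lounge-suite example:

def ped(p0, p1, q0, q1):
    # Midpoint price elasticity of demand
    return ((q1 - q0) / (q1 + q0)) / ((p1 - p0) / (p1 + p0))

p0, p1, q0, q1 = 1500, 1300, 20, 30
print(round(ped(p0, p1, q0, q1), 2))   # -2.8
print(p1 * q1 - p0 * q0)               # revenue increase: 9000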
When you see a negative revenue increase, it means the revenue is dropping. Price elasticity demand relates to a revenue increase. A few rules apply in the calculation process.
1. PED is perfectly inelastic (PED = 0) = a price change doesn’t affect demand (e.g., survival goods).
2. PED is inelastic (at -1 < PED < 0) = a price drop causes a demand increase, but revenue decreases.
3. PED is unitary elastic (at PED = -1) = price decrease is proportionate to demand increase. The revenue doesn’t change.
4. PED is elastic (at -∞ < PED < -1) = price decrease causes massive demand and revenue increase.
5. PED is perfectly elastic (PED = -∞) = any price increase causes demand to drop to zero (e.g., $1 being worth $1). | {"url":"https://thefreecalculator.com/finance/price-elasticity-demand","timestamp":"2024-11-03T16:11:07Z","content_type":"text/html","content_length":"106864","record_id":"<urn:uuid:e2c1e6c2-7b64-4427-a01a-272d9ecf686f>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00889.warc.gz"} |
Quantitative Data Analysis Assignments
This report focuses on the data analysis of various factors and effects of United Kingdom. The researcher analysed the data with the help of SPSS-20 software. The analysis would help to find out and
highlight equality and diversity in attitudes and behaviours among the people of the countries of the UK. The report includes descriptive statistics, inferential statistics, a paired two-sample t-test, an independent-sample t-test, and parametric and non-parametric analyses. | {"url":"https://desklib.com/document/quantitative-data-analysis-assignments/","timestamp":"2024-11-12T00:17:55Z","content_type":"text/html","content_length":"997596","record_id":"<urn:uuid:27e0e155-d1c1-483d-82a5-3b4251c8fbc8>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00057.warc.gz"} |
Congestion Games with Polytopal Strategy Spaces / 165
Hau Chan, Albert Xin Jiang
Congestion games are a well-studied class of games that has been used to model real-world systems such as Internet routing. In many congestion games, each player's number of strategies can be
exponential in the natural description of the game. Most existing algorithms for game theoretic computation, from computing expected utilities and best responses to finding Nash equilibrium and other
solution concepts, all involve enumeration of pure strategies. As a result, such algorithms would take exponential time on these congestion games. In this work, we study congestion games in which
each player's strategy space can be described compactly using a set of linear constraints. For instance, network congestion games naturally fall into this subclass as each player's strategy can be
described by a set of flow constraints. We show that we can represent any mixed strategy compactly using marginals which specify the probability of using each resource. As a consequence, the expected
utilities and the best responses can be computed in polynomial time. We reduce the problem of computing a best/worst symmetric approximate mixed-strategy Nash equilibrium in symmetric congestion
games to a constraint optimization problem on a graph formed by the resources and the strategy constraints. As a result, we present a fully polynomial time approximation scheme (FPTAS) for this
problem when the graph has bounded tree width. | {"url":"https://www.ijcai.org/Abstract/16/031","timestamp":"2024-11-02T08:51:20Z","content_type":"text/html","content_length":"10458","record_id":"<urn:uuid:05f47625-773e-43f7-a26a-ea4bb3a881ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00695.warc.gz"} |
Making gradients in Processing using OpenGL
I’ve been working on a display project using Processing, and wanted to add some linear gradients and fuzzed edges to make the display look a bit nicer. The example linear and radial gradient
functions work ok, but are somewhat processor intensive, so I wanted to find a better way to achieve the same effect.
Someone on the processing forum suggested using opengl shapes to achieve this. The idea is that you can make a gradient by drawing a polygon (or triangle) shape, and specifying a different color for
each vertex of the shape. This causes the gpu to do all the heavy lifting of actually drawing the gradient. Here is an example of how to draw a simple linear gradient using this method:
import processing.opengl.*;

void setup() {
  size(550, 400, OPENGL);
  noStroke();  // assumed in the original sketch: outlines off so only the gradient fill shows
}

void draw() {
  color leftColor = color(255,0,255);
  color rightColor = color(255,255,255);
  beginShape(QUADS);
  fill(leftColor);       // per-vertex fill colors are what create the gradient
  vertex(0, 0);
  fill(rightColor);
  vertex(width, 0);
  vertex(width, height);
  fill(leftColor);
  vertex(0, height);
  endShape();
}
I played around with this for a while, and eventually achieved the effects I wanted: linear gradients near the edges of the window to (hopefully) subtly draw attention to the center of the screen,
and a function to draw fuzzy rectangles by repeating the gradient technique around the edges of a regular rectangle. Here’s the source code, which draws the aforementioned background, as well as a
fuzzy rectangle that changes size and fuzziness as you move the mouse cursor around:
import processing.opengl.*;

void setup() {
  size(550, 400, OPENGL);
  noStroke();  // assumed: outlines off so the gradients read cleanly
}
// Draw a rectangle which can have differently colored edges
// @param x X coordinate of the top-left corner of the rectangle (pixels)
// @param y Y coordinate of the top-left corner of the rectangle (pixels)
// @param widt Width of the rectangle (pixels)
// @param heigh Height of the rectangle (pixels)
// @param tlcolor Color of the top-left rectangle corner
// @param trcolor Color of the top-right rectangle corner
// @param brcolor Color of the bottom-right rectangle corner
// @param blcolor Color of the bottom-left rectangle corner
void makeRectangle(int x, int y, int widh, int heigh,
color tlcolor, color trcolor,
color brcolor, color blcolor) {
  beginShape(QUADS);
  fill(tlcolor);   // a different fill per corner makes the gradient
  vertex(x, y);
  fill(trcolor);
  vertex(x+widh, y);
  fill(brcolor);
  vertex(x+widh, y+heigh);
  fill(blcolor);
  vertex(x, y+heigh);
  endShape();
}
// Draw a gradient corner by making triangles
// TODO: do this directly somehow; a shader?
// @param x X coordinate of the center of the semicircle (pixels)
// @param y Y coordinate of the center of the semicircle (pixels)
// @param rad Radius of the semicircle (pixels)
// @param divisions Number of triangle divisions to make (more=smoother)
// @param quadrant Which quadrant to draw in
// @param insideColor Color to use for the center of the semicircle
// @param outsideColor Color to use for the outside of the semicircle
void makeGradientCorner(int x, int y, int rad,
int divisions, int quadrant,
color insideColor, color outsideColor) {
  beginShape(TRIANGLES);
  for(float angle = quadrant*PI/2;
      angle < (quadrant + 1)*PI/2 - .001;
      angle += PI/divisions/2) {
    fill(insideColor);    // center of the triangle fan
    vertex(x, y);
    fill(outsideColor);   // outer edge of the quarter circle fades out
    vertex(x+cos(angle)*rad, y-sin(angle)*rad);
    vertex(x+cos(angle+PI/divisions/2)*rad, y-sin(angle+PI/divisions/2)*rad);
  }
  endShape();
}
// Draw a fuzzy rectangle at the specified position
// @param x X coordinate of the top-left corner of the rectangle (pixels)
// @param y Y coordinate of the top-left corner of the rectangle (pixels)
// @param widt Width of the rectangle (pixels)
// @param heigh Height of the rectangle (pixels)
// @param radius Radius of the fuzzing (pixels)
// @param fgcolor color of the rectangle
void drawFuzzyRectangle(int x, int y, int widt, int heigh,
int rad, color fgcolor) {
// Handle the case where the radius is too big, by clipping it to 1/2 the max height or width.
int max_rad = int(min(widt/2, heigh/2));
rad = min(rad, max_rad);
  // Uncomment this to see how the gradients are being drawn
  // stroke(0);   // assumed debug line; the original commented-out call was lost
  color bgcolor = color(255,255,255,0);   // fully transparent "outside" color
makeRectangle(x+rad, y+rad, widt-2*rad, heigh-2*rad, fgcolor, fgcolor, fgcolor, fgcolor);
makeRectangle(x+rad, y, widt-2*rad, rad, bgcolor, bgcolor, fgcolor, fgcolor);
makeRectangle(x, y+rad, rad, heigh-2*rad, bgcolor, fgcolor, fgcolor, bgcolor);
makeRectangle(x+rad, y+rad+heigh-2*rad, widt-2*rad, rad, fgcolor, fgcolor, bgcolor, bgcolor);
makeRectangle(x+widt-rad, y+rad, rad, heigh-2*rad, fgcolor, bgcolor, bgcolor, fgcolor);
makeGradientCorner(x+widt-rad, y+rad, rad, 8, 0, fgcolor, bgcolor);
makeGradientCorner(x+rad, y+rad, rad, 8, 1, fgcolor, bgcolor);
makeGradientCorner(x+rad, y+heigh-rad, rad, 8, 2, fgcolor, bgcolor);
  makeGradientCorner(x+widt-rad, y+heigh-rad, rad, 8, 3, fgcolor, bgcolor);
}
void draw() {
  background(255);   // assumed: clear each frame so the translucent fuzz doesn't accumulate

  // Fill the background with a gradient
  color edgeColor = color(220);
  color centerColor = color(255);
  makeRectangle(0, 0, width/4, height, edgeColor, centerColor, centerColor, edgeColor);
  makeRectangle(width*3/4, 0, width/4, height, centerColor, edgeColor, edgeColor, centerColor);

  // Draw a fuzzy rectangle
  color rectColor = color(80,80,140);
  int widt = 100 + int(map(mouseY,0,width,0,200));
  int heigh = 80 + int(map(mouseY,0,width,0,200));
  int rad = int(map(mouseX,0,width,0,50));
  drawFuzzyRectangle(mouseX, mouseY, widt, heigh, rad, rectColor);
}
One Response to Making gradients in Processing using OpenGL
1. excellent. this ended up being very helpful. “_aug08a” (with a few tweaks) was just what I was looking for. thank you.
| {"url":"https://www.cibomahto.com/2012/08/making-gradients-in-processing-using-opengl/comment-page-1/","timestamp":"2024-11-13T07:41:51Z","content_type":"text/html","content_length":"42790","record_id":"<urn:uuid:2418ffd0-0952-41cb-858b-a36dc2337bef>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00611.warc.gz"} |
Preorder Traversal in a Binary Tree - CSVeda
Preorder Traversal in a binary tree specifies that the root node of a tree/subtree is visited before its left and right child nodes. It can be summarized as Root-Left-Right. This traversal of binary trees can be done by recursive and non-recursive methods.
Recursive Preorder Traversal
In recursive traversal the address of current node is stored in a pointer CURR. CURR pointer is initialized to the Root of the binary tree.
In all the subsequent recursive calls the CURR->LEFT or CURR->RIGHT nodes are passed as the last argument. The node passed as last argument becomes the root for the subtree that is currently being
The traversal procedure is repeatedly called for left and right child nodes till CURR (pointer storing address of current node) becomes NULL.
Algorithm REC_TRAVERSAL(DATA, LEFT, RIGHT, ROOT)
1. [ Initialize Pointer] CURR=ROOT
2. IF CURR <>NULL
a. Process node CURR
b. REC_TRAVERSAL (DATA, LEFT, RIGHT, CURR->LEFT)
c. REC_TRAVERSAL (DATA, LEFT, RIGHT, CURR->RIGHT)
3. Exit
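As a concrete illustration, here is the recursive procedure in Python (the Node class and names below are illustrative, not from the original article):

class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def preorder(node):
    if node is not None:
        print(node.data, end=' ')   # process the root first
        preorder(node.left)         # then the entire left subtree
        preorder(node.right)        # then the entire right subtree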
Non-Recursive Traversal
In non-recursive Traversal an intermediary storage data structure Stack is used to maintain the nodes that are still to be traversed.
Traversal begins from the root node. The current node is processed and right child node of the current node is preserved in the stack to be processed while backtracking from leaf node. Once a leaf
node is reached, backtracking is used to pick up the unvisited node and performing the similar traversal operation.
Algorithm NONREC_TRAVERSAL (DATA, LEFT, RIGHT, ROOT)
1. [Initialize variables and stack] TOS = 1, STK[TOS] = #, CURR = ROOT
2. [Follow the leftmost branch of the tree while saving right child nodes in stack and
popping node from stack for backtracking]
Repeat steps 3, 4 and 5 while CURR<> NULL
3. Print DATA[CURR]
4. [If a Right child node exists for the current node, store it in the stack]
   If CURR->RIGHT <> NULL
     TOS = TOS + 1
     STK[TOS] = CURR->RIGHT
5. [If a Left child node exists for the current node, make the Left child the
   current node; else pop the topmost node from the stack to backtrack]
   If CURR->LEFT <> NULL
     CURR = CURR->LEFT
   Else
     CURR = STK[TOS]
     TOS = TOS - 1
     If CURR = #   [the sentinel: stack exhausted, traversal complete]
       CURR = NULL
6. END
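The same traversal without recursion, keeping pending right children on an explicit stack (a Python sketch using the Node class from the recursive example; an empty list plays the role of the # sentinel):

def preorder_iterative(root):
    stack, curr = [], root
    while curr is not None:
        print(curr.data, end=' ')            # process the current node
        if curr.right is not None:
            stack.append(curr.right)         # save the right child for backtracking
        if curr.left is not None:
            curr = curr.left                 # follow the leftmost branch
        else:
            curr = stack.pop() if stack else None   # backtrack, or stop

# Expression tree for (a+b)*(c-d): preorder prints the prefix form
tree = Node('*', Node('+', Node('a'), Node('b')),
                 Node('-', Node('c'), Node('d')))
preorder_iterative(tree)                     # output: * + a b - c d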
Preorder Traversal of Arithmetic Expression Tree
One immensely useful application of Preorder Traversal is converting an arithmetic expression stored in a binary tree into prefix notation. Prefix notation is also called Polish Notation wherein a
Binary Operator precedes the operands that it operates on.
Consider the following expression: A + B
Its Prefix Notation or Polish Notation is +AB.
When an arithmetic binary expression is represented in an expression tree, Traversal of such an expression tree in Preorder will result in its corresponding Prefix expression.
The following image shows an expression tree made from the binary expression (a+b)*(c-d)
Its traversal in preorder generates this output: * + a b - c d
| {"url":"https://csveda.com/preorder-traversal-in-a-binary-tree/","timestamp":"2024-11-09T00:22:17Z","content_type":"text/html","content_length":"63463","record_id":"<urn:uuid:70a8e13a-4761-4acf-94e2-36302340554d>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00840.warc.gz"} |
Download Equations Questions and Answers - Form 1 Topical Mathematics.
Why download?
• To read offline at any time.
• To Print at your convinience
• Share Easily with Friends/Students
How to pay
1. Go to Lipa na MPesa, Buy Goods and Services option
2. Enter Till Number 738874
3. Enter the exact amount 50/-. Don't pay more or less, the system will reject
4. Enter your MPesa pin and send
5. The mpesa hakikisha will show payment to Lemfundo Technologies
6. You will receive an SMS from M-Pesa with a confirmation code eg PI98O3P8RQ
7. After you receive the confimation sms from MPesa, enter the phone number you used to pay and mpesa confirmation code below. (The one below is an example, enter the one mpesa has sent to your
8. Click on the submit button
9. You will be able to instantly download the file you have paid for.
10. Experiencing difficulties? Call/Whatsapp +254 703 165 909, or email us to info@easyelimu.com | {"url":"https://www.easyelimu.com/component/donation/?view=donation&layout=singledownload&tmpl=component&catid=125&fileid=1129&filename=Equations%20Questions%20and%20Answers%20-%20Form%201%20Topical%20Mathematics&utm_source=youmayalsolike&utm_medium=mainsite","timestamp":"2024-11-05T17:13:52Z","content_type":"text/html","content_length":"20464","record_id":"<urn:uuid:52b0c93b-4510-463b-b01f-a09ed41a38fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00874.warc.gz"} |
Add, Subtract, and Multiply Complex Numbers
Learning Outcomes
• Add and subtract complex numbers
• Multiply complex numbers
Just as with real numbers, we can perform arithmetic operations on complex numbers. To add or subtract complex numbers, we combine the real parts and combine the imaginary parts.
recall doing operations on algebraic expressions
Performing arithmetic on complex numbers is very similar to adding, subtracting, and multiplying algebraic variable expressions. Recall that doing so involves combining like terms, carefully
subtracting, and using the distributive property.
Complex numbers of the form [latex]a+bi[/latex] each contain a real part [latex]a[/latex] and an imaginary part [latex]bi[/latex]. Real parts are like terms with other real parts; likewise, imaginary parts are like terms with other imaginary parts.
A General Note: Addition and Subtraction of Complex Numbers
Adding complex numbers:
[latex]\left(a+bi\right)+\left(c+di\right)=\left(a+c\right)+\left(b+d\right)i[/latex]
Subtracting complex numbers:
[latex]\left(a+bi\right)-\left(c+di\right)=\left(a-c\right)+\left(b-d\right)i[/latex]
How To: Given two complex numbers, find the sum or difference.
1. Identify the real and imaginary parts of each number.
2. Add or subtract the real parts.
3. Add or subtract the imaginary parts.
Example: Adding Complex Numbers
Add [latex]3 - 4i[/latex] and [latex]2+5i[/latex].
[latex]\left(3-4i\right)+\left(2+5i\right)=\left(3+2\right)+\left(-4+5\right)i=5+i[/latex]
Try It
Subtract [latex]2+5i[/latex] from [latex]3 - 4i[/latex].
[latex]\left(3-4i\right)-\left(2+5i\right)=\left(3-2\right)+\left(-4-5\right)i=1-9i[/latex]
Multiplying Complex Numbers
Multiplying complex numbers is much like multiplying binomials. The major difference is that we work with the real and imaginary parts separately.
Multiplying a Complex Number by a Real Number
Let’s begin by multiplying a complex number by a real number. We distribute the real number just as we would with a binomial. So, for example,
[latex]3\left(6+2i\right)=\left(3\cdot 6\right)+\left(3\cdot 2i\right)=18+6i[/latex]
How To: Given a complex number and a real number, multiply to find the product.
1. Use the distributive property.
2. Simplify.
Example: Multiplying a Complex Number by a Real Number
Find the product [latex]4\left(2+5i\right)[/latex].
[latex]4\left(2+5i\right)=\left(4\cdot 2\right)+\left(4\cdot 5i\right)=8+20i[/latex]
Try It
Find the product [latex]-4\left(2+6i\right)[/latex].
[latex]-4\left(2+6i\right)=-8-24i[/latex]
Multiplying Complex Numbers Together
Now, let’s multiply two complex numbers. We can use either the distributive property or the FOIL method. Recall that FOIL is an acronym for multiplying First, Outer, Inner, and Last terms together.
Using either the distributive property or the FOIL method, we get
[latex]\left(a+bi\right)\left(c+di\right)=ac+adi+bci+bd{i}^{2}[/latex]
Because [latex]{i}^{2}=-1[/latex], we have
[latex]\left(a+bi\right)\left(c+di\right)=ac+adi+bci-bd[/latex]
To simplify, we combine the real parts, and we combine the imaginary parts.
[latex]\left(a+bi\right)\left(c+di\right)=\left(ac-bd\right)+\left(ad+bc\right)i[/latex]
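For a quick check of such sums and products, note that Python's built-in complex type (written with j in place of i) follows exactly these rules; this snippet is illustrative, not part of the lesson:

z1, z2 = 4 + 3j, 2 - 5j
print(z1 + z2)    # (6-2j)
print(z1 * z2)    # (23-14j)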
How To: Given two complex numbers, multiply to find the product.
1. Use the distributive property or the FOIL method.
2. Simplify.
Example: Multiplying a Complex Number by a Complex Number
Multiply [latex]\left(4+3i\right)\left(2 - 5i\right)[/latex].
[latex]\left(4+3i\right)\left(2-5i\right)=8-20i+6i-15{i}^{2}=8-14i+15=23-14i[/latex]
Try It
Multiply [latex]\left(3 - 4i\right)\left(2+3i\right)[/latex].
[latex]\left(3-4i\right)\left(2+3i\right)=6+9i-8i-12{i}^{2}=6+i+12=18+i[/latex] | {"url":"https://courses.lumenlearning.com/waymakercollegealgebracorequisite/chapter/operations-on-complex-numbers/","timestamp":"2024-11-04T16:56:40Z","content_type":"text/html","content_length":"56269","record_id":"<urn:uuid:a3ce1438-fc3c-4982-9442-4ef424e8e216>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00180.warc.gz"} |
E. Collins-Woodfin - Research
Published papers and preprints
• Using RMT to understand phase transitions in spherical spin glass
Random Matrices & Scaling Limits, Institut Mittag-Leffler, 2024
• High-dimensional dynamics of SGD for generalized linear models
IDEAL Workshop: Statistical Inference / Learning Dynamics, Northwestern University, 2024
• Bipartite spherical spin glass at critical temperature
Integrable Probability, Classical and Quantum Integrability, online seminar of CNRS, 2024
• Bipartite spherical spin glass at critical temperature
Probability Seminar, Harvard University, 2024
• Bipartite spherical spin glass at critical temperature
Probability Seminar, Fields Institute, 2024
• High-dimensional limit of streaming SGD for GLMs
Mathematics of Machine Learning, CMS Winter Meeting, 2023
• Bipartite spherical spin glass at critical temperature
Probability Seminar, University of Minnesota and Lehigh, 2023
• Bipartite spherical spin glass at critical temperature
Probability Seminar, University of Waterloo, 2023
• High-dimensional limit of streaming SGD for GLMs
Applied Math Seminar, Centre de Recherches Mathematiques, 2023
• Phase transitions in spherical spin glasses
AMS Eastern Sectional, University of Buffalo, 2023
• Phase transitions in spherical spin glasses
Great Lakes Mathematical Physics, Oberlin College, 2023
• Edge CLT for log determinant of Laguerre beta ensembles
Summer school for random matrix theory and applications, Ohio State University, 2023
• Phase transitions in spherical spin glasses
Probability Seminar, Queen's University, 2023
• Overlaps of a Spherical Spin Glass Model with External Field
AWM Research Symposium, University of Minnesota, 2022
• Overlaps of a Spherical Spin Glass Model with External Field
Introductory and Connections Workshop, MSRI, 2021
• Free Energy & Overlaps of Spherical Spin Glass with External Field
AWM Graduate Student Poster Session, JMM 2021
• Overlaps of a Spherical Spin Glass Model with External Field
Bernoulli Society & IMS One World Symposium, 2020
Kalamazoo College, Math Colloquium, 2020
• Using Random Matrix Theory to Analyze Spin Glasses
NY Graduate Math Conference, Syracuse University, 2020
• Small Norm Matrices and Tensors
Nebraska Conference for Undergraduate Women in Math, 2010
• Matrices and Tensors of Small Norm
Ohio State University, Young Mathematicians Conference, 2009
Seminar Talks at home institution
• Free energy fluctuations of spherical spin glasses near the critical temperature threshold
Statistics seminar, McGill U., 2024
• Detection thresholds in spiked matrix models
Algorithmic spin glass learning seminar, McGill University, 2024
• High-dimensional limit of streaming SGD for GLMs
Optimization and ML seminar, McGill U., 2023
• Spin Glasses and Random Matrix Theory
Probability Research Retreat, McGill U., 2022
• Tridiagonal Random Matrices
Student Analysis Seminar, U. Michigan, 2021
• Spin Glasses Across Disciplines
SIAM Mini-Symposium, U. Michigan, 2021
• Spherical Spin Glass Model with External Field
Integrable Systems Seminar, U. Michigan, 2020
• Rademacher Complexity and Spin Glass
Machine Learning Graduate Seminar, U. Michigan, 2020
• Using Random Matrix Theory to Analyze Spin Glasses
Student Analysis Seminar, U. Michigan, 2020
Budapest Semesters in Math, Research Colloquium, 2015
• Munshi's Theorem: Zero Places and Infinite Spaces
Carleton College, Math & Stat Comps Gala, 2011
Workshops and Summer Schools
• Random Matrices and Scaling Limits, Institut Mittag-Leffler, 2024
• Frontiers of Statistical Mechanics & Theoretical Computer Science, Banff International Research Station, 2024
• Summer School on Random Matrices & Applications, Ohio State University, 2023
• Random Matrix Theory Summer School, University of Michigan, 2022
• Universality & Integrability in Random Matrix Theory, MSRI (hybrid), 2021
• Open Online Probability School, virtual, 2020
• Randomness in Physics & Mathematics, Bielefeld University, 2019
• Integrable Probability Summer School, University of Virginia, 2019
• Geometric Analysis for Women & Math, Institute for Advanced Study, 2019
• Random Walks in Random Environments, Texas A&M University, 2018
• Probability Summer Analysis Program, Northwestern University, 2018
• Random Matrix Theory Summer School, University of Michigan, 2018 | {"url":"https://sites.google.com/view/e-collins-woodfin/research","timestamp":"2024-11-02T21:17:14Z","content_type":"text/html","content_length":"137315","record_id":"<urn:uuid:e1ecf101-20f0-48ec-baa5-a88ea7ce1f63>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00632.warc.gz"} |
Exam-Style Questions.
Problems adapted from questions set for previous Mathematics exams.
(a) Use the red graphs to solve the simultaneous equations:
$$4x-7y=-47$$ $$3y=-5x$$
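A quick algebraic check of the graphical answer: the second equation gives $y=-\frac{5x}{3}$; substituting into the first,

$$4x+\frac{35x}{3}=-47 \;\Rightarrow\; \frac{47x}{3}=-47 \;\Rightarrow\; x=-3,\; y=5$$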
(b) Use the blue graph to find estimates for the solutions of the quadratic equation:
Estimate the solutions of the following simultaneous equations using their graphs as drawn on the grid below.
$$3x-5y=17$$ $$y=5-\frac{7x}{5}$$
If you would like space on the right of the question to write out the solution, try this Thinning Feature. It will collapse the text into the left half of your screen, but large diagrams will remain visible.
The exam-style questions appearing on this site are based on those set in previous examinations (or sample assessment papers for future examinations) by the major examination boards. The wording,
diagrams and figures used in these questions have been changed from the originals so that students can have fresh, relevant problem solving practice even if they have previously worked through the
related exam paper.
The solutions to the questions on this website are only available to those who have a Transum Subscription.
Exam-Style Questions Main Page
To search the entire Transum website use the search box in the grey area below.
Do you have any comments about these exam-style questions? It is always useful to receive feedback, and it helps make this free resource even more useful for those learning Mathematics anywhere in the world. Click here to enter your comments.
The Taub Faculty of Computer Science
The Taub Faculty of Computer Science Events and Talks
Asaf Nussbaum (Weizmann Institute of Science)
Sunday, 03.08.2008, 11:00
We study the limits of efficient computation in the context of constructing random-looking graph distributions that can be used to emulate huge random graphs over N=2^n vertices. We suggest and
analyze a new criterion of faithful emulation, inspired by the famous 0/1-law of random graphs. This 0/1-law states that any graph property T holds with probability converging either to 0 or to 1,
whenever T is expressible by a single formula on graphs.
We consider preserving even depth-D properties, namely, properties expressible via an entire sequence of formulas {\psi_N} where the quantifier depth of \psi_N is bounded by D=D(N). We first ask what is the maximal depth D_1 for which random graphs' 0/1-laws can be generalized for depth D_1 properties. We then ask what is the maximal depth D_2 for which random graphs can be efficiently emulated by capturing similar 0/1-laws. Both D_1 and D_2 are shown to be precisely log(N)/log(1/p), where p denotes the graphs' density. These hardness results require no computational assumptions.
Relevant background on random graphs and pseudorandomness will be provided.
Joint work with Moni Naor and Eran Tromer.
8Bomb is now playable!
Project Page
Today I finished a first playable version of 8Bomb! I built upon the self-hosted SCRIPT-8 engine from yesterday by adding some performance improvements to the canvas API calls (which I will try to push upstream after SCRIPT-8 is back up and running). I also added keyboard input support based on the code from the original repository. With those out of the way, I was able to get back to making progress. These changes were much the same as yesterday, so I decided not to document much.
I made a number of changes to the game code:
1. I changed the terrain rendering to be completely pixel based, tearing out the marching squares code
2. I added a concept of panels so the terrain can be extended automatically
3. I added a score counter
4. I added bombs which flicker and explode after a short while causing the game to restart if the player is too close.
Terrain Improvements
After digging through the rendering code in SCRIPT-8, I found out that sprites are rendered square by square, similar to how the setPixel and fillRectangle functions work. Since everything runs in JavaScript, I figured that doing that rendering by hand would be just as efficient as letting SCRIPT-8 do it for me.
So I modified the terrain data structure to have 300 by 128 entries:
function initTerrain() {
  let terrain = [];
  for (let y = 0; y < 300; y++) {
    let row = [];
    for (let x = 0; x < 128; x++) {
      // Terrain cells are booleans: true = solid ground, false = open air.
      // (Assumed reconstruction: the published listing dropped the pushes;
      // y > 90 is the initial ground line.)
      if (y > 90) {
        row.push(true);
      } else {
        row.push(false);
      }
    }
    terrain.push(row);
  }
  return terrain;
}
And I updated the drawTerrain function to set pixels directly, picking the color based on whether there is an open pixel above or below the current pixel:
function drawTerrain({ terrain, cameraY }) {
  let top = Math.max(0, Math.floor(cameraY));
  let bottom = Math.min(Math.floor(cameraY + 128), terrain.length);
  for (let y = top; y < bottom; y++) {
    for (let x = 0; x < terrain[y].length - 1; x++) {
      if (terrainAt(x, y, terrain)) {
        if (!terrainAt(x, y - 1, terrain)) {
          setPixel(x, y, 0);
        } else if (!terrainAt(x, y + 1, terrain)) {
          setPixel(x, y, 2);
        } else {
          setPixel(x, y, 1);
        }
      }
    }
  }
}
This adds a little bit of depth to the terrain which goes a long way.
I then refactored further to allow for automatic extension of the terrain. I could have just gone with adding pixel rows to the terrain array, but I decided to go with a panel-based approach, which will allow me to add more information such as style to the panels later on. I moved the terrain init code to a createPanel function, and added an updateTerrain function which automatically appends panels to a terrain map each time the camera moves far enough.
function createPanel() {
  let panel = [];
  for (let y = 0; y < 100; y++) {
    let row = [];
    for (let x = 0; x < 128; x++) {
      // Assumed reconstruction: appended panels start fully solid,
      // since the player digs downward into fresh terrain.
      row.push(true);
    }
    panel.push(row);
  }
  return panel;
}
export function updateTerrain(cameraY) {
  let panelTop = Math.floor(cameraY / panelHeight) - 1;
  let panelBottom = panelTop + 5;
  for (let i = panelTop; i < panelBottom; i++) {
    if (!terrain[i]) {
      terrain[i] = createPanel();
      if (lowestPanel < i) lowestPanel = i;
    }
  }
}
The terrainAt and cutTerrain functions were similarly modified to first index into the terrain map, and then index into the correct entry in the associated panel. I also added a setTerrainAt function which handles setting a terrain boolean if it is contained in the terrain map.
export function terrainAt(x, y) {
  if (y < terrainStart) return false;
  let panelNumber = Math.floor(y / panelHeight);
  let panel = terrain[panelNumber];
  let panelY = y - (panelNumber * panelHeight);
  if (panel && panelNumber != lowestPanel) {
    return panel[panelY][x];
  }
  return false;
}
export function setTerrainAt(x, y, value) {
  let panelNumber = Math.floor(y / panelHeight);
  let panel = terrain[panelNumber];
  if (panel) {
    let panelY = y - (panelNumber * panelHeight);
    let row = panel[panelY];
    if (x >= 0 && x < row.length) {
      row[x] = value;
    }
  }
}
export function cutTerrain(x, y, r) {
  for (let cx = Math.floor(x - r); cx <= x + r; cx++) {
    for (let cy = Math.floor(y - r); cy <= y + r; cy++) {
      let dx = cx - x;
      let dy = cy - y;
      let cr = Math.floor(Math.sqrt(dx * dx + dy * dy));
      if (cr > r) continue;
      setTerrainAt(cx, cy, false);
    }
  }
}
Luckily, since drawTerrain already used the terrainAt abstraction, it did not need to be modified. These changes made the terrain much smoother, and actually improved performance since only the pixels on the screen were drawn.
The bombs were pretty simple. I split the functionality into three sections: Spawning, Timer Updates, and Explosions.
Spawning was pretty simple. Every frame I generate a random number between 0 and 100 and check if that number is less than the current score divided by 400 plus 0.5. This means that at the start of
the game, there is a 0.5% chance that a bomb will spawn, and as the player moves down the screen, that chance will increase at a linear rate.
export function spawnBombs({ player, bombs, score }) {
  if (Math.random() * 100 <= score / 400 + 0.5) {
    bombs.push(createPhysicsObject(Math.random() * 112 + 8, player.position.y - 300, 2));
  }
}
Then each frame I also update the fuzes on each bomb.
// Update Bomb Timers
// (Assumed bookkeeping: bombsToExplode feeds the explosion pass below,
// remainingBombs keeps the live bombs; the published listing dropped these.)
let remainingBombs = [];
let bombsToExplode = [];
for (const bomb of state.bombs) {
  if (bomb.timeLeft != undefined) {
    // Reset Bomb Sprite
    bomb.sprite = 2;
    // Decrement timer
    bomb.timeLeft -= 1;
    if (bomb.timeLeft <= 0) {
      // Shrink the next timer length (x0.75) so the flicker speeds up
      bomb.fuze = bomb.fuze * 0.75;
      if (bomb.fuze < 1 && bomb.grounded) {
        // Fuze finished. Explode
        bombsToExplode.push(bomb);
        continue;
      } else {
        // Not enough iterations yet. Flicker again
        bomb.timeLeft = bomb.fuze;
        bomb.sprite = 3;
      }
    }
  } else if (bomb.grounded) {
    // Start fuze since the bomb has hit the ground
    bomb.timeLeft = fuzeTime;
    bomb.fuze = fuzeTime;
  }
  // Preserve this bomb
  remainingBombs.push(bomb);
}
// Preserve all remaining bombs
state.bombs = remainingBombs;
The basic idea with the fuze is twofold: first, flicker the bomb at a steadily increasing speed so that the player can tell when a bomb is about to blow, and second, only start the fuze (or blow up the bomb) if the bomb is touching the ground.

I achieve the first by keeping track of two timers, the fuze and the timeLeft. The fuze is used to keep track of how long until the next flash, and the timeLeft is used to keep track of when to actually flash. On each flicker, the fuze is multiplied by 0.75, and when the fuze drops below 1, the bomb is queued for explosion.
// Blow up bombs
let physicsObjects = getPhysicsObjects(state);
for (const bomb of bombsToExplode) {
  cutTerrain(bomb.position.x, bomb.position.y, bombRadius, state.terrain);
  for (const object of physicsObjects) {
    // Find the distance to the object
    let dx = object.position.x - bomb.position.x;
    let dy = object.position.y - bomb.position.y;
    let length = Math.sqrt(dx * dx + dy * dy);
    // If the object is the player, and the length is less than 3/4 of the
    // bomb radius, the player has lost.
    if (object == state.player && length < bombRadius * 0.75) location.reload();
    // Otherwise knockback the object by the distance * knockBack / length^2
    let lengthSquared = length * length;
    object.position.x += dx * knockBack / lengthSquared;
    object.position.y += dy * knockBack / lengthSquared;
  }
}
The comments make things pretty self-explanatory, but the gist is to loop over every physics object and push it away from the exploding bomb depending on how far the object is from the explosion. To wrap things up, I check if the player is too close to the explosion, and restart the game as a losing condition.
As shown in the above screen cap, I also added a simple score number at the bottom of the screen which indicates how far the player got.
export function drawScore({ cameraY, score }) {
  let scoreText = Math.max(score - 68, 0).toString();
  if (scoreText.length > 5) scoreText = scoreText.substring(0, 5);
  print(5, 120, scoreText, 6);
}
Pretty easy, but works well. I take the substring to truncate the extra decimal places. Eventually I will need to make this smarter, but it works for now.
The game is very playable at this point and, if I do say so myself, pretty fun! There is still a lot of polishing to do though! I haven't decided what all I will implement next, but I think I might do some final polish steps on this simplified version so that I can have a truly SCRIPT-8 game, and then pull it out into my own engine to loosen the constraints on colors and libraries. Who knows, maybe I'll build a multiplayer version :P
Till tomorrow,
F-Distribution Critical Value Formulas
Cumulative distribution function (CDF) for the F-distribution:
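In standard notation (consistent with the variable definitions below):

$$F(x;\,d_1,d_2)=I_{\frac{d_1 x}{d_1 x+d_2}}\!\left(\frac{d_1}{2},\,\frac{d_2}{2}\right)$$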
Variable definitions:
d[1] and d[2] = the degrees of freedom
I = regularized lower incomplete beta function
Lower incomplete beta function:
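In standard notation:

$$B(x;\,a,b)=\int_0^x t^{\,a-1}(1-t)^{\,b-1}\,dt$$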
Regularized lower incomplete beta function:
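This is the lower incomplete beta function divided by the complete beta function, as the variable definitions below state:

$$I_x(a,b)=\frac{B(x;\,a,b)}{B(a,b)}$$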
Variable definitions:
the numerator = lower incomplete beta function
the denominator = beta function
Convert 90 Kiloliters per Second (kl/s) to Tablespoons per Second (Tbs/s)
This is our conversion tool for converting kiloliters per second to tablespoons per second.
To use the tool, simply enter a number in any of the inputs and the converted value will automatically appear in the opposite box.
How to convert Kiloliters per Second (kl/s) to Tablespoons per Second (Tbs/s)
Converting Kiloliters per Second (kl/s) to Tablespoons per Second (Tbs/s) is simple. Why is it simple? Because it only requires one basic operation: multiplication. The same is true for many types of unit conversion (there are some exceptions, such as temperature). To convert Kiloliters per Second (kl/s) to Tablespoons per Second (Tbs/s), you just need to know that 1 kl/s is equal to Tbs/s. With that knowledge, you can solve any other similar conversion problem by multiplying the number of Kiloliters per Second (kl/s) by . For example, 3 kl/s multiplied by is equal to Tbs/s.
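As a worked sketch of the arithmetic (assuming US customary tablespoons of about 14.7868 ml each — the page does not state which tablespoon definition it uses):

$$1\ \text{kl/s}=\frac{1{,}000{,}000\ \text{ml/s}}{14.7868\ \text{ml/Tbs}}\approx 67{,}628\ \text{Tbs/s}$$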
Best conversion unit for 90 Kiloliters per Second (kl/s)
We define the "best" unit to convert a number as the unit that is the lowest without going lower than 1. For 90 kiloliters per second, the best unit to convert to is .
NIPS 2016
Mon Dec 5th through Sun the 11th, 2016 at Centre Convencions Internacional Barcelona
Reviewer 1
The paper presents a new exploration technique for reinforcement learning methods. The approach is based on computing the information gain for the posterior distribution of a learned dynamics model. The dynamics are modeled by a (possibly deep) neural network. Actions get higher rewards if the posterior distribution over the parameters of the learned dynamics model is likely to change (in a KL sense, which is equivalent to the information gain). As the posterior distribution cannot be represented exactly, the authors use variational Bayes to approximate the posterior by a fully factorized Gaussian distribution. The paper includes efficient update equations for the variational distribution, which have to be computed for each experienced state-action sample. The authors evaluate their exploration strategy with different reinforcement learning algorithms on a couple of challenging continuous control problems.
Qualitative Assessment
Positive points:
- I like the idea of using the information gain for exploration. This paper is an important step for scaling such exploration to complex systems.
- The paper is technically very strong, presenting a clever idea to drive exploration and efficient algorithms to implement this idea.
- Exploration in continuous action control problems is an unsolved problem and the algorithm seems to be very effective.
- The evaluations are convincing, including several easy but also some challenging control tasks. The exploration strategy is tested with different RL algorithms.

Negative points:
- Not many... maybe clarity could be slightly improved, but in general, the paper reads well.

Minor comments:
- Equation 7 would be easier to understand if the authors would indicate the dependency of \phi_{t+1} on s_t and a_t. Also do that in the algorithm box.
- Why does the system dynamics in line 3 of the algorithm box depend on the history and not on the state s_t?
- I do not understand how to compute p(s_t|\theta) in 13. Should it not be p(s_t|\theta, s_{t-1}, a_{t-1})?
- There are typos in the in-text equation before Eq. 17. The last l should be in the brackets and the first nabla operator is missing the l.
- Even if it is in the supplement, it would be good to briefly describe what models are used for the learned system dynamics.
- Citation [11] is not published in ICML; it's only available on arXiv.
Confidence in this Review
3-Expert (read the paper in detail, know the area, quite certain of my opinion)
Reviewer 2
This paper looks at the notion of curiosity in deep reinforcement learning. It returns to the idea of equating exploratory behavior with maximizing information. Key contributions are the formulation
of a variational perspective on information gain, an algorithm based on Blundell et al.'s Bayesian neural networks and the exposition of the relationship between their approach and Schmidhuber's
compression improvement. Results on simple domains are given.
Qualitative Assessment
The paper shows a pleasant breadth of understanding of the literature. It provides a number of insights into curiosity for RL with neural networks. I think it could be improved by focusing on the development of the variational approach and the immediately resulting algorithm. As is, there are a number of asides that detract from the main contribution. My main concern is that the proposed algorithm seems relatively brittle. In the case of Eqn 17, computing the Hessian might only be a good idea in the diagonal case. Dividing by the median suggests an underlying instability.

Questions:
* The median trick bothers me. Suppose the model & KL have converged. Then at best the intrinsic reward is 1 everywhere and this does not change the value function. In the worst case the KL is close to 0 and you end up with a high variance in your intrinsic rewards. Why isn't this an issue here?
* Eqn 13: updating the posterior at every step is different from updating the posterior given all data from the prior. Do you think there are issues with the resulting "posterior chaining"?
* How good are the learned transition models?
* Can you explain line 230: "very low eta values should reduce VIME to traditional Gaussian control noise"?
* Why do you propose two intrinsic rewards on line 5 of algorithm 1? I'd like to see a clear position.

Suggestions:
* Eqn 2: P(. | s_t, a_t)
* Line 116: For another connection, you may want to look at Lopes et al. (2012), "Exploration in Model-based Reinforcement Learning by Empirically Estimating Learning Progress"
* Line 119: You should specify which description length you mean... the statement is possibly imprecise/incorrect as is
* Line 128: in expectation propagation (which I know from Bishop (2006)) the KLs end up getting reversed, too... is there a relationship?
* Eqn 13: log p(s_t | theta) should be p(s_t | s_t-1, a_t-1, theta), no?
* It would be good to give empirical evidence showing why the median is needed
* Graphs: unreadable
* It might be good to cite more than just [20] as a reference on Bayesian networks. Again, Bishop (2006) (Section 5.7) provides a nice list.
Confidence in this Review
3-Expert (read the paper in detail, know the area, quite certain of my opinion)
Reviewer 3
The paper describes a curiosity-driven RL technique for continuous state and action spaces. It selects actions to maximize the information gain. Since Bayesian learning is intractable most of the time, a variational Bayes approximation is described. The approach is applied to domains where the transition dynamics are represented by Bayesian neural networks.
Qualitative Assessment
The paper is well written. The ideas are clearly described. The approach advances the state of the art in curiosity-driven RL by using Bayesian neural networks and showing how to maximize information gain in that context. This is good work. I have one high-level comment regarding the reason for focusing on curiosity-driven RL. Why maximize information gain instead of expected rewards? Bayesian RL induces a distribution over rewards. If we maximize expected rewards, exploration naturally occurs in Bayesian RL and the exploration/exploitation tradeoff is optimally balanced. Curiosity-driven RL based on information gain makes sense when there are no rewards. However, if there are rewards, adding an information gain might yield unnecessary exploration. For instance, suppose that an action yields a reward of exactly 10, while a second action yields uncertain rewards of at most 9. Curiosity-driven RL would explore the second action in order to resolve the uncertainty even though this action will never be optimal. Bayesian RL that optimizes rewards only will explore the environment systematically, but will explore only what is necessary.
Confidence in this Review
3-Expert (read the paper in detail, know the area, quite certain of my opinion)
Reviewer 4
This paper proposes Variational Information Maximizing Exploration for RL, which is heavily based on Schmidhuber's work on curiosity-driven learning and his more recent work on utilizing Kolmogorov complexity.
Qualitative Assessment
Experiments: The experiments are nice, but unfortunately not a fair comparison. It would be more useful to compare how your information-theoretic reward compares to a simple maximization of the entropy, which can also function as an exploration term, as in e.g. "Information-Theoretic Neuro-Correlates Boost Evolution of Cognitive Systems" by Jory Schossau, Christoph Adami and Arend Hintze, and "Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: a critical analysis" by Keyan Zahedi, Georg Martius and Nihat Ay.

Minor remarks:
* Equation 2: This equation results from Eq. (1) in which H(\theta|\xi_t, a_t) is compared with H(\theta|S_t+1, \xi_t, a_t). It seems that a_t is missing in p(\theta|\xi_t); if not, please mention why it can be omitted.
* Equation 4: Bayes' rule is formulated as p(a|b) = p(b|a) p(a) / p(b) (which I am sure the authors know). Unfortunately, I cannot see how Eq. 4 fits into this formulation. It seems that there are terms omitted. I would see it if, e.g., p(\theta|\xi_t,s_t+1) p(s_t+1|\xi_t, a_t;\theta).
Confidence in this Review
3-Expert (read the paper in detail, know the area, quite certain of my opinion)
Reviewer 5
This paper proposes a new method to compute a curiosity-based intrinsic reward for reinforcement learning to promote exploration. The learning agent maintains the dynamics model explicitly, and it is used to compute the KL divergence between the approximated distribution of the dynamics with the current parameters and that with the old parameters. The computed KL divergence is used as an intrinsic reward, and the augmented reward values can be used with model-free policy search methods such as TRPO, REINFORCE and ERWR. The authors also show the relation between the proposed idea and the compression improvement developed by Schmidhuber and his colleagues. The proposed method is evaluated on several control tasks, some of which have high-dimensional state-action spaces.
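In symbols (a reconstruction from the description above; the exact notation, with trade-off weight $\eta$ and variational parameters $\phi_t$, is inferred from the reviewers' references to the paper and should be read as assumed rather than quoted):

$$r'(s_t,a_t,s_{t+1}) = r(s_t,a_t) + \eta\, D_{\mathrm{KL}}\!\left[\,q(\theta;\phi_{t+1})\,\Vert\,q(\theta;\phi_t)\,\right]$$

where $q(\theta;\phi)$ is the fully factorized Gaussian approximation to the posterior over the dynamics-model parameters $\theta$.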
Qualitative Assessment
The agent's dynamics model is estimated explicitly by Bayesian neural networks (BNNs) and used to compute the intrinsic reward. My major concern is that the estimated model is not used to find an optimal policy. In other words, model-free reinforcement learning methods such as TRPO, REINFORCE and ERWR are used. Since the model is estimated, model-based reinforcement learning seems more appropriate for this setting. Please discuss this point in more detail. The second concern is the performance of the cart-pole swing-up task. Figure 2 shows that REINFORCE and ERWR obtained sub-optimal policies as compared with the result of TRPO. In this task, there was no significant difference between the reinforcement learning agent with and without VIME exploration. Please discuss in more detail why the proposed method did not improve the learning performance in the cart-pole swing-up task. Is it related to the property of the dynamics itself? Lines 170-174 claim that the KL divergence is divided by the median of previous KL divergences. What happens if you directly use the original KL divergence as intrinsic reward? Does it make the learning process unstable? In addition, how many previous trajectories are used to compute the median?
Confidence in this Review
3-Expert (read the paper in detail, know the area, quite certain of my opinion)
Reviewer 6
Authors introduce a method, VIME, for trying to optimally trade off exploration and exploitation for reinforcement learning tasks. The idea is to add an auxiliary cost that maximizes the information gain about the model of the environment, encouraging exploration into regions with larger uncertainty. This requires use of a model that has a notion of uncertainty, for which the authors use a Bayesian neural network in the style of Blundell et al. Experiments demonstrate the utility of the method, especially in instances which require some amount of exploration before rewards are encountered.
Qualitative Assessment
I enjoyed the paper. Demonstrations and comparisons seem great, and the improvements are hard to ignore. The method is novel and appears to be generally useful. The figure legends are difficult to read and should be made larger, especially in Figures 1 and 2.
Confidence in this Review
2-Confident (read it all; understood it all reasonably well)
Equations devs & users
According to the docs, when I declare a function with Equations, I should get an induction principle f_ind to do inductive proofs about that function. However, such a principle does not seem to be
generated for my function. What could be going on here?
My definition looks like this:
Equations type_interp (ve : val_or_expr) (A : type) (delta : tyvar_interp) (W : world) :
Prop by wf (type_interp_measure ve A) := { ... }.
This is not mutually recursive with anything else, just a plain recursive definition, albeit with a custom measure.
There is a type_interp_elim, but that doesn't seem to be strong enough for inductive reasoning. There's also type_interp_graph_ind but that's not useful, it's about type_interp_graph which seems to
be some internal thing.
I can do induction over the measure of course, but it would be much more convenient to have a custom induction principle for my function, given that I have already proven that the measure decreases
and it's silly that I have to prove this again for the induction.
looking at type_interp_elim again it provides some of the recursive cases but not enough...
but also the docs say
"Equations also automatically generates a mutually-inductive relation corresponding to the graph of the programs, whose first inductive is named f_ind. It automatically shows that the functions respects their graphs (lemma f_ind_fun) and derives from this proof an elimination principle named f_elim."
so I would expect f_ind to exist, not just f_fun
for instance I have these two cases
(** forall case *)
type_interp (inj_val v) (forall: A) delta W =>
exists e, v = TLamV e /\
forall tau : sem_type, type_interp (inj_expr e) A (tau .: delta) W;
(** exists case *)
type_interp (inj_val v) (exists: A) delta W =>
exists v', v = PackV v' /\
exists tau : sem_type, type_interp (inj_val v') A (tau .: delta) W;
in _elim, for exists: I get
→ (∀ v (A : {bind type}) delta W,
(∀ v' tau,
P (inj_val v') A (tau .: delta) W
(type_interp (inj_val v') A (tau .: delta) W))
→ P (inj_val v) (exists: A)%ty delta W
(∃ v', v = (pack v')%V ∧ (∃ tau, type_interp (inj_val v') A (tau .: delta) W)))
which looks good, I get P for some smaller stuff and have to show it for the bigger stuff. for for forall: I just get
→ (∀ v (A : {bind type}) delta W,
P (inj_val v) (forall: A)%ty delta W
(∃ e, v = (Lam: e)%V ∧ (∀ tau, type_interp (inj_expr e) A (tau .: delta) W)))
which leads to an unprovable goal
for now i am using this rune
remember (ve, A) as x.
revert ve A Heqx.
induction (lt_wf_0_projected (B := val_or_expr*type)
(fun '(ve, A) => type_interp_measure ve A) x) as [x _ IH].
where lt_wf_0_projected is an std++ lemma, not sure if a similar lemma exists in the stdlib
(** Given a measure/size [f : B -> nat], you can do induction on the size of
[b : B] using [induction (lt_wf_0_projected f b)]. *)
Lemma lt_wf_0_projected {B} (f : B -> nat) : well_founded (fun x y => f x < f y).
Interesting, if it misses the recursive call in the forall case then it's a bug
The eliminator is indeed the _elim constant
okay so the docs are wrong then?
they say "Equations also automatically generates a mutually-inductive relation corre-
sponding to the graph of the programs, whose first inductive is named f_ind."
but only "f_elim" exists
filed https://github.com/mattam82/Coq-Equations/issues/575 for the docs issue
and filed https://github.com/mattam82/Coq-Equations/issues/576 for the missing recursive call
Many to Many Relationships in DAX Explained
Level: Advanced, but explained in detail so everyone can understand
There is a lot to learn in DAX if you want to be a ninja. A couple of the more complex and important things to learn and understand are filter propagation and context transition. It is not so much
that you need to be a rocket scientist to understand these concepts, it is more that they are not naturally intuitive. You simply have to learn how filter propagation and context transition work.
What’s more, if you have an Excel background, there are some fundamental differences between the way Power Pivot works vs regular Excel, and you have to learn these things too. All these things are
learnable – but indeed you will need to learn. You need an understanding of filter propagation and context transition to understand how to solve the Many to Many problem below – but don’t worry – I
will explain it in detail in this post.
The Many to Many Problem – Bill of Materials
The problem I am going to cover today is the DAX Many to Many problem. All relationships between tables in DAX are of the type 1 to Many* – there is no support for Many to Many relationships in DAX
(*Note: in Power BI there is also a 1 to 1 relationship type).
Now for a lot of people this won’t make any difference as there is no need for a Many to Many relationship. But sometimes there is a DAX data model that has 2 or more tables that have multiple
records in each table, and these tables have a logical business relationship to each other. An example will make it easier to understand.
In the example below, each product exists in the Product table (orange) only once, but each product can be sold many times, and hence it can appear in the Sales table (blue) many times. This is a
standard 1 to many relationship that DAX was built to handle really well.
Some of the products in the Sales table are actually “multi product” products. eg Product ABC is a box of fruit that contains 3 different products (2 x Apples, 3 x Bananas and 1 x Cantaloupe – 6
items in total) . If you want to see the individual products that were sold, you would need to create a many to many relationship between the Sales table and the Bill of Materials table (green).
The product ID in the sales table has many instances of each product ID (1 for each sale), and the Bill of Materials table also has many instances of the Product ID (1 for each sub product). This is
allowed in a traditional relational database tool like Access or SQL Server but it is not allowed in Power Pivot.
Using ONLY 1 to Many Relationships to Solve the Problem
Given that you simply must use 1 to Many relationships in DAX*, the only workable setup of the relationships between the tables is shown below. There is a 1 to many relationship from the Products
table to the Sales table. There is also a 1 to many relationship from the Products table to the Bill of Materials (BOM) table. But there is no relationship between the Calendar table and the BOM
table because the BOM table doesn’t record the date of the sale; it only records the quantity of each sub product.
Now for Some Simple Measures
I have created 3 measures here.
Total Sales Qty = SUM(Sales[Qty])
Total BOM Qty = SUM(BOM[Qty])
Total BOM Sales = SUMX(Products,[Total Sales Qty] * [Total BOM Qty])
When I put these 3 measures in a pivot table with Calendar Date in Rows, this is what I get (shown below). See the problem? The measure [Total BOM Qty] is giving the wrong answer – I get the same
value for every row in the Pivot Table. What is going on here?
To understand the problem in DAX, you need to have a very clear understanding of automatic filter propagation between relationships.
The Many to Many Problem Explained
Below is the Power Pivot data model again. I always layout my tables using the Collie Layout Methodology – that is, the lookup tables are at placed at the top and the data tables are placed at the
bottom. This has no effect on the behaviour of the model but it makes it much easier to visualise how filter propagation works. Lookup tables always have 1 and only 1 row for every object in the
table, and there must be a unique identifying key (date in the case of the Calendar table and Product ID in the case of the Products table). Data tables also must have the same key (date and Product
ID – otherwise you can’t join the tables) but data tables are allowed to have as many duplicates as needed (many sales are made on the same date, and the same product is sold many times).
Filter propagation automatically flows from the 1 side of the relationship to the many side of the relationship but it does not automatically flow in the other direction. When you use the Collie
Layout Methodology like I have here, we say that filters always automatically flow downhill – they can’t automatically flow uphill. So in the image above,
• Any filter on the Calendar table will automatically flow through the relationship to the Sales table (shown as 1 to 2 in the image above).
• Any filter on the Products table will automatically flow through the relationship to the Sales table (3 to 4) and it will also flow to the BOM table (5 to 6).
• But very importantly, filters will not automatically flow from the Sales table to the Products table (7 to 8), nor uphill through (9 to 10) nor (11 to 12). The implication is that when you set
up a pivot table like the one shown earlier, the Calendar table will filter the Sales table, and hence the Total Sales Qty will be correct. But there is no automatic way for the Calendar table
to filter the BOM table because filters don’t automatically flow up hill. Hence the BOM table is completely unfiltered by the Calendar table. The result of Total BOM Qty will therefore always be
the quantity of the entire BOM table – completely unfiltered by the Calendar table (it is of course filtered by the Product table).
This is an Easy Problem to Solve in Power BI Desktop
In Power BI Desktop this is an easy problem to solve – I will explain how now before going back and solving it for Excel 2010/2013. There is a feature in Power BI Desktop called Bi-Directional Cross
Filtering that will allow you to change the natural “down hill only” filtering of the relationship between tables so that it flows in both directions. As you can see in the data model below (from
Power BI Desktop), I have swapped the filtering direction of relationship 2 (Products to Sales) to be bi-directional (these arrows indicate the direction of filter propagation in Power BI Desktop –
which is very helpful indeed. We can thank Rob Collie for lobbying for this UI improvement, and Microsoft for listening).
When you make this change, the Products table will be automatically filtered based on the entries in the Sales table – reverse “up hill” filtering. Once the Products table is filtered by the Sales
table (due to the bi-directional cross filtering behaviour), then the new filter on the Products table will automatically propagate downhill through relationship 3 shown above. As a result of the
end to end flow of cross filtering:
1. The Calendar table filters the Sales table,
2. The Sales table filters the Products table,
3. The Products table filters the BOM table
The net result is that the Calendar table is now filtering the BOM table even though there is no direct relationship between these 2 tables.
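As an aside, you don't have to change the model to get this behaviour – it can also be switched on inside a single measure. Here is a sketch (this assumes a DAX engine that has the CROSSFILTER function, i.e. Power BI or Excel 2016+; the column names come from the model above):

Total BOM Qty CF =
CALCULATE (
    SUM ( BOM[Qty] ),
    CROSSFILTER ( Sales[Product ID], Products[Product ID], BOTH )
)

This turns on bi-directional filtering between Sales and Products only while the measure is being evaluated, leaving the relationship in the model set to single direction.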
When I create a Matrix in Power BI Desktop (a matrix is similar to a Pivot Table in Excel), I get the correct answers (shown below).
But there is another problem. Note that Total Sales x Total BOM Qty doesn’t automatically equal Total BOM Sales at this level (eg on 8th Jan). I need to bring the BOM ID column into my Matrix so I
can see exactly which BOM items sold each day. When I do this I get a similar problem to before. See in the Matrix below that the BOM Qty is correct for each BOM ID, but the Sales Quantity is the
same for each of the BOM IDs in the Matrix – this is not correct.
This is an almost identical problem to the first one. Let's look at the data model again (below). The Matrix above has the BOM ID column on Rows. This column comes from the BOM table (shown as 1 below), and because it is on Rows in the matrix, it is normal to expect that this will filter the measure [Total Sales Qty]. But remember, filters automatically flow downhill, not uphill. So the BOM ID column is not filtering the Products table (2), and hence the Products table is not filtering the Sales table based on the BOM ID. The net result is that you get the same Total Sales Quantity regardless of the BOM ID, because the BOM ID is not filtering the Sales table. The simple answer to this problem (in Power BI Desktop) is to change the cross-filtering behaviour of the relationship (4) from single to bi-directional – just like before.
Once you make this change, you will get a fully working report that makes sense to anyone reading it.
How to use DAX to force the Calendar Table to filter the BOM Table
OK, now that you understand how to solve this with bi-directional cross filtering, hopefully you will realise what needs to be done to solve the problem in Excel 2010/2013. Here is the data model again (shown below). I need to force the Sales table to filter the Products table (shown as 1 below) and I also need to force the BOM table to filter the Products table (shown as 2 below). If I can force the 2 data tables to filter the common Products table, then the Products table will do its job and pass those filters through to the other data tables automatically, hence solving the problem. Stated another way: I want the Sales table to filter the Products table, and then the Products table will filter the BOM table automatically; I also want the BOM table to filter the Products table, and then the Products table will automatically filter the Sales table. In short, I am trying to get the 2 data tables to filter the common lookup table, so the common lookup table will then pass the filters on to the other data table.
If I were to write these formulas using “Pseudo DAX”, the 2 formulas would read like this:

Total Sales Qty =
CALCULATE (
    SUM ( Sales[Qty] ),
    'Filter the Products table based on the rows in the BOM table first'
)

Total BOM Qty =
CALCULATE (
    SUM ( BOM[Qty] ),
    'Filter the Products table based on the rows in the Sales table first,
     after applying filters from the Calendar table'
)
So now all I need to do is find a suitable filter to replace the “Filter the Products table… ” portion of each formula and I will achieve the outcome. There are many ways to do this, but first I am
going to show you a method using the FILTER function, and then I will show (and explain) another method using Black Magic from The Italians.
Total Sales Qty
Let’s start with this formula.
Total Sales Qty =
CALCULATE (
    SUM ( Sales[Qty] ),
    'Filter the Products table based on the rows in the BOM table first'
)
How can I write a filter statement to put inside CALCULATE that will filter the Products table based on the values in the BOM? Let me show the formula and then explain what it does.
= FILTER (
    Products,
    CALCULATE ( COUNTROWS ( BOM ) ) > 0
)
The FILTER function is an iterator and hence it has a Row Context. The above FILTER formula iterates over the Products table and returns a filtered table of all rows in the Products table that pass
the given test. At each iteration (i.e. each product) in the Products table, the CALCULATE function forces context transition (turns the row context into a filter context) and hence the BOM table is
filtered for the current row in the Products table iteration.
Then the FILTER formula asks the question “Now that it’s clear that we are only talking about this one single product for this single step of the iteration process and we have filtered the BOM table
to reflect this, are there currently any rows visible in the BOM table?”. If the answer is yes, then FILTER keeps that product, if the answer is no, then FILTER discards the product. FILTER then goes
to the second product in the Products table, then CALCULATE again forces context transition for this second iteration and the BOM table is filtered so that only rows of this specific second product
are visible in the BOM table, and then FILTER completes the COUNTROWS check again. This process goes on for every product in the Product table (all those in the current filter context anyway) and
then FILTER is left with a new Filtered Table of Products that contains only products that also exist in the BOM table in the current filter context.
What if you leave out the CALCULATE?
It is worth pointing out here that the following filter formula will not work.
FILTER(Products, COUNTROWS(BOM) > 0)
The problem with this second formula is that there is no CALCULATE wrapped around COUNTROWS(BOM). FILTER is an iterator and has a Row Context. But a Row Context does not automatically create a
Filter Context. So when the FILTER function steps through its iteration process and gets to the first Product, there is no Filter Context and hence the BOM table is not filtered by the new iteration
process. COUNTROWS(BOM) will therefore be the total number of rows in the original table in the original filter context, every product will therefore always pass the test (or always fail – depending
on the initial filter context) and there will be no change to the new Filtered Products table. The net result is the new Filtered Products table is actually identical to the original Products table
– no change in filtering at all. The formula simply doesn’t work.
Bringing the Correct Formulas Together
So putting the correct filter formula inside the CALCULATE from earlier, I end up with this formula.
Total Sales Qty =
CALCULATE (
    SUM ( Sales[Qty] ),
    FILTER ( Products, CALCULATE ( COUNTROWS ( BOM ) ) > 0 )
)
Total BOM Qty
Now I can just apply the same pattern to the other formula, switching out the table names.
Total BOM Qty =
CALCULATE (
    SUM ( BOM[Qty] ),
    FILTER ( Products, CALCULATE ( COUNTROWS ( Sales ) ) > 0 )
)
And here is the working Pivot Table, same as in Power BI Desktop earlier. The BOM table is filtering the Sales table, and the Sales table is filtering the BOM table – Many to Many using DAX formulas
in action!
Now for the Italian Black Magic
There is another way you can write these formulas that is simpler to write and easy to read – unfortunately it is difficult to understand how it works – Marco Russo calls it Black Magic. Here are
the 2 formulas.
Total Sales Qty =
CALCULATE(SUM(Sales[Qty]), BOM)
Total BOM Qty =
CALCULATE(SUM(BOM[Qty]), Sales)
When you compare these Black Magic formulas against the Pseudo DAX formulas I wrote earlier, you will see that I am using the BOM table as the filter expression in the first formula, and the Sales table as the filter expression in the second formula. This doesn't make any sense at first sight. If filters always propagate from the one side of the relationship to the many side of the relationship, how can these formulas possibly work? This can be explained with "Expanded Tables".
Expanded Tables
Power Pivot is built on top of some more traditional database technologies and hence what happens inside Power Pivot can be converted (or thought of) in more traditional database patterns and
structures behind the scenes. In SQL terms, the relationships between the Sales table (shown as 1 below), the Calendar table (2) and the Products table (3) are:
Sales Left Outer Join Calendar ( 1 to 2)
Sales Left Outer Join Products ( 1 to 3)
If I had these tables in a relational database, I could materialise the Sales table into an Expanded Table that contains all the original Sales columns, plus the Calendar table columns and the Products table columns. To do this I could write the following SQL query:
Select *
from Sales
Left Join Calendar on Sales.Date = Calendar.Date
Left Join Products on Sales.[Product ID] = Products.[Product ID]
The above query will return the following Expanded Table with columns of data coming from 3 different tables.
Technically speaking when I place the Sales table as a filter argument inside the following formula…
Total BOM Qty = CALCULATE(SUM(BOM[Qty]), Sales)
…I am actually placing the Expanded Table – the Sales table plus all the relevant records from the other tables on the one side of the relationships.
The Expanded table will still be filtered by the current filter context. So if there is a filter on the Calendar table (say for 3rd Jan), then the Calendar table will filter the Sales table AND the
Expanded Sales Table. If I re-run the SQL code with a filter on Calendar[Date] = ‘3 Jan 2016’ I get this new Expanded Table. The Calendar table is filtering the Sales table, and the Sales table is
filtering the Products table.
So when the Sales table is used as the filter portion of the CALCULATE function, you can only “see” the Sales table, but it is actually the entire Expanded Sales Table (including the Calendar and Products tables and any filters from all of these 3 tables) that is doing the filtering, not just the single Sales table. Filters from all 3 tables are therefore effectively filtering the BOM table, and that is why it works.
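If you prefer to make that implicit filter explicit, here is a sketch of an equivalent formulation (this assumes a DAX engine with TREATAS, available in Power BI and Excel 2016+, so it is not an option for Excel 2010/2013):

Total BOM Qty Explicit =
CALCULATE (
    SUM ( BOM[Qty] ),
    TREATAS ( VALUES ( Sales[Product ID] ), Products[Product ID] )
)

This takes the product IDs still visible in the (already Calendar-filtered) Sales table and applies them as a filter on Products[Product ID], which then propagates downhill to the BOM table – the same chain of filtering described above.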
Here are copies of my workbook files if you want to take a look.
Where to learn more
I have learnt most of my advanced DAX knowledge from The Italians (Marco Russo and Alberto Ferrari). There is an excellent video available here where Alberto explains Many to Many relationships. I
also recommend the book “The Definitive Guide to DAX” for anyone that wants to develop a deep mastery of the DAX language.
For people that are earlier in the learning stages of DAX, you really must master filter propagation and context transition before you can move forward. If you haven’t mastered these techniques, I
recommend you invest some time going back over the basics and make sure you have a solid understanding of the pre-requisites. The fastest, cheapest and most effective way you can do this is to read
my book “Supercharge Power BI”.
37 thoughts on “Many to Many Relationships in DAX Explained”
Hi Matt. Thank you for the wonderful post. I have one question. If the model is typical, i.e. Calendar 1-* Sales *-1 Products with single-direction filter propagation (i.e. Calendar can't filter Products), and with this scenario I create a table/matrix and put Date from Calendar and Product ID from Products in it, it should not allow the filtering and should show all dates regardless of whether the product is selling. But when "Don't summarize" is selected, it somehow allows filtering from Calendar to the Products table and shows exactly the corresponding sales dates. If I select any aggregation like Sum or Count (or a measure based on these), it shows all dates (correct, since the filter is not allowed in this direction). I am curious to find out what the reason for that is, and is it expected behaviour?
Thanks in advance!
I have observed this, too. I have asked Microsoft for an explanation, but I haven’t heard back. It seems there is some optimisation when you add columns from 2 dim tables into a visual
without a measure. If you try the same in Power Pivot for Excel, it will give a full outer join of both columns. Power BI desktop used to do this, too, and as far as I am concerned it is the
way it should work. It appears that the developers have added some code in the compile engine to do an inner join instead (bridging through the sales table in this case). I have observed if
you add an implicit measure, this behaviour persists, but if you add an explicit measure, it does not. You can actually see the difference in the query from the performance analyser pane.
Thanks a lot, Matt. I thought that I am missing something about how Power BI is working. Using implicit measure like count is not propagating, just the Don’t summarize.
Matt this article has changed my life completely. Very grateful
Would a third option be to use CROSSFILTER?
Total Sales Qty =
CALCULATE (
    SUM ( Sales[Qty] ),
    CROSSFILTER ( BOM[Product ID], Products[Product ID], BOTH )
)
If so, is there a preferred method from a performance standpoint? Marco states in the linked post that the "black magic" is faster than the FILTER method. Is applying a bidirectional relationship via CROSSFILTER any better or worse than the expanded table method?
In the example, if the BOM for a given product could be adjusted each month and had a relationship with the Calendar table, but we still wanted to understand the BOM items sold each day, would there be a preferred method to minimize ambiguity? (Thinking about this: https://www.sqlbi.com/articles/bidirectional-relationships-and-ambiguity-in-dax/)
Definitely, yes. This is a pretty old article and probably needs to be updated. I wasn't even using Power BI back in 2016. I don't recall if CROSSFILTER was a function back then in Excel – my guess is that it wasn't, but I could be wrong. I assume CROSSFILTER would be as performant as black magic, as the developers have a choice in how they deploy such solutions, and I assume they have probably just built syntax sugar over black magic. This would make sense given that's how the engine works natively. To test, you could look at the query plan and results using DAX Studio.
Thanks for the suggestion and opportunity to learn a bit about DAX Studio. It looks like CROSSFILTER is 4 to 5 times faster than the Expanded Table method in my model when looking at the
Server Timings (2 queries for CROSSFILTER vs. 7 queries for Expanded Table). Will spend some more time with DAX Studio going forward.
This is really helpful, thank you so much.
The only issue I have is that when I do this in my own workbook, it doesn’t seem to calculate at the subtotal level. I’m comparing the measures and relationships side-by-side and there’s no
difference between both workbooks, but for some reason the subtotals only calculate when I copy my data into your xlsx file, not when I create my own. Is there any reason this would be the case?
Many thanks,
Sorry, no idea. If you post a link to your workbook (eg OneDrive, Dropbox) I will take a look.
Hi Matt.
Really grateful for your reply, thank you.
I’m having a much more frustrating issue outside of the totals thing now actually!
Below is the link to the One Drive file, which should hopefully give clarity to the issue described below.
My issue is around adding current stockholding levels to the report you described how to product, as my stockholding table (SOH) contains multiple rows per item (both BOM ID and Product
ID as we hold stock of finished goods and components), with each row representing a different batch. I can relate this stock table (SOH) to the products table for a many to one
relationship but then this won’t show the stock levels for the product IDs and BOM IDs in the pivot table report, as it doesn’t filter upwards to the BOM table, which is what is providing
the filter context.
So the problem comes with trying to relate the BOM Table to the SOH table as this is a many to many relationship. I tried to follow the same logic as with the other measures but had no
luck as I think the logic is slightly different.
The measure [SOH Available] correctly calculates the stock total for the product ID, but doesn’t filter down to the BOM ID.
SOH Available = CALCULATE(SUM('SOH'[Available]), BOM)
I added a second stock table called SOHTWO which only has one row per product and was therefore able to relate it to the BOM table meaning that the component stock levels sum correctly as
shown in the measure [SOH Available 2].
SOH Available 2 = CALCULATE(SUM(SOHTWO[Available Balance]), BOM)
I could therefore use a combination of two measures, but it would be much better to use the SOH table as the stock table as this provides additional required layers such as the BBE date
for each batch of the product.
The column ‘desired output’ shows what I am trying to achieve with the measure. You will see there are duplicated values for the BOM IDs because one BOM ID might be in multiple Product
IDs, but that is okay.
I hope this makes sense after looking at the One Drive file.
I would be very grateful for any advice you can provide.
Many thanks,
I don't normally provide support here, but in this case you have provided me with a clear explanation. You have done all the work. Your 2 measures individually give the answers you need. You just need to write a measure that selectively shows one or the other depending on what level you are at in your Pivot. Try this:
=VAR LineLevel = calculate(SUM(SOHTWO[Available Balance]),BOM)
VAR Total = CALCULATE(SUM('SOH'[Available]), BOM)
VAR FilterLevel = ISFILTERED(BOM[BOM ID]) //returns true if you are looking at a single BOM item
RETURN if(FilterLevel,LineLevel,Total)
Hi Matt,
thanks for your response!
That definitely works for that particular measure, but what I was trying to achieve was just using exclusively the SOH table (the one with multiple rows per product) for stock,
instead of a combination of both. The reason for this is that each row contains a different date when the product will pass its BBE, meaning it has to be written off. If I use the
SOHTWO table with just one row per product, I will lose the visibility of the different batches.
I was going to use a measure like the below to show this:
Out of Date Stock
=CALCULATE(SUM('SOH'[Available]), FILTER(ALL('Calendar'), 'Calendar'[Date] = MAX('SOH'[BBE W/C])))
This works on a pivot table where the Product ID providing the filter context comes from the products table, but doesn’t work on a pivot table where the filter context comes from
the BOM table, as shown in
I appreciate you don’t usually offer support on this blog so not expecting a response, but thank you for your support on this! Hopefully I find a way around the issue. I look
forward to reading your future posts!
I’m not really sure what you are trying to do. The new link doesn’t have a SOHTWO table you refer to in your comment. The SOH table has no relationships in the model. I’m
still willing to take a look, but you will need to help me understand what you are after. I suggest you build another pivot showing the result and expected result you are
In Power BI, it seems the crossfilter is not working on the measure [Total BOM Qty] when I put [Product ID] in together with [BOM ID]. It does not filter out the products with no sales for the [Total BOM Qty] measure.
The same happens when using the CROSSFILTER function in DAX.
However, it works well with the “black magic”, and the Calculate filter above.
What could be the explanation for that? And I notice that the [Total Sales Qty] measure is OK in both cases.
Can you please send me your sample file showing the behaviour.
here it is.
Hi Matt,
Thanks for this awesome article.
But I found myself confused about the pseudo dax under section: ‘How to use DAX to force the Calendar Table to filter the BOM Table’.
For the [Total Sales Qty], why should the Products table be filtered by the BOM table and not other tables? Also, for the [Total BOM Qty], why the Sales table?
How do you translate this problem into that pseudo DAX?
Hi Spence. The sales table is already filtered by the calendar table and the product table. This happens automatically due to automatic filter propagation. Therefore there is only 1 table in
this data model that is not filtering the Sales table, and that is the BOM table. If you can get the BOM table to filter the Product table, then this filter will automatically propagate to
the sales table (from the Product table). So the task is to force the BOM table to filter the Product table. If you can do this, then everything else will just work.
Likewise, the BOM table is already filtered by the Product table, but it is not filtered by the Sales table. If you want to slice the BOM table by Sales, you need to get the sales table to
filter the product table – then the product table will automatically filter the BOM table for you. There is no need to worry about the Calendar table as this table already filters the Sales
table. So as long as you can get the sales table to filter the Product table, everything else will just work. Hope that makes sense.
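As a loose analogy (and not how the DAX engine literally executes anything), the propagation idea can be mimicked with ordinary semi-joins. In this hypothetical Python sketch, "table B filters table A" simply means keeping the rows of A whose keys appear in the filtered B; all table contents are invented:

```python
# Hypothetical illustration of DAX-style filter propagation using pandas.
# "BOM filters Product" = semi-join Product against the filtered BOM rows;
# the filter then flows on to Sales automatically through ProductID.
import pandas as pd

products = pd.DataFrame({"ProductID": ["ABC", "K", "P"]})
sales = pd.DataFrame({"ProductID": ["ABC", "K"], "Qty": [2, 2]})
bom = pd.DataFrame({"ProductID": ["ABC", "ABC", "K", "P"],
                    "BomID": ["A", "B", "K", "P"], "Qty": [1, 1, 1, 2]})

# Filter context: a single BOM item is selected
bom_filtered = bom[bom["BomID"] == "A"]

# BOM -> Product: keep only products present in the filtered BOM rows
visible_products = products[products["ProductID"].isin(bom_filtered["ProductID"])]

# Product -> Sales: propagation down the one-to-many side happens "for free"
visible_sales = sales[sales["ProductID"].isin(visible_products["ProductID"])]
print(visible_sales)  # only sales of products that contain BOM item "A"
```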
Got it!
Thanks very much for your explanation.
Awesome article, thanks Matt!
How do you solve the case when you have a sub-assembly within the assembly?
For example:
Add to tables:
14-01-2016, ABPK, 2
ABPK, BoxToBox, Multi
ABPK, ABC, 1
ABPK, P, 2
I don't see any issue connecting a sub-assembly table to the BOM table. This would be a 1 (BOM) to many (Sub Assembly) relationship. Any filters on the BOM table would propagate to the Sub Assembly table. Alternatively, you could merge the data in the BOM and Sub Assembly tables into the single BOM table, e.g., if the BOM has 2 products, 1 a single item and the second a sub-assembly containing 3 items, you could just load the BOM with the 4 single items.
Dear Matt,
Thank you for the informative article about a topic I have been researching for ages without finding any satisfying answer, until now.
I downloaded your workbook, changed your Products table to mine, adapted my BOM to yours and put it into the BOM table, and changed the Sales table contents to mine. All worked perfectly. I will definitely use it in my calculations.
What I could not achieve is showing my BOM in multiple levels. How can I make Level1, Level2, and Level3 and apply BOMQty and SalesQty to these levels in one measure? I first thought of using a parent-child relationship, but as one sub-assembly is used in more than one assembly, it failed.
Can you please guide me on how I can show my BOM in multiple levels and calculate BOMQty and SalesQty for each of my levels separately, together with their subtotals corresponding to the individual levels?
This is an illustration of the table I want to make:

XYZ 2

PRODUCT ID  BOM ID  Qty
XYZ         ABC     2
XYZ         D       1
ABC         A       1
ABC         B       1
B           C       8
B           H       5
D           EF      4
EF          E       2
EF          F       6

                                         BOM Qty                    Sales Qty
Product ID  Level1  Level2  Level3       Level1  Level2  Level3     Level1  Level2  Level3
XYZ         ABC                          2                          4
                    A                            1                          2
                    B                            1                          2
                            C                            8                          16
                            H                            5                          10
            D                            1                          2
                    EF                           4                          8
                            E                            2                          4
                            F                            6                          12
TOTAL Level3                                             21                         42
TOTAL Level2                                     6                          12
TOTAL Level1                             3                          6
I am happy to try to help you, but can you please do a couple of things to make it easier for me to help? Can you please take my sample workbook and extend it to show the scenario you have? Then please post a question at http://powerpivotforum.com.au and provide the detail and also the outcome you are expecting. I monitor the forum and will take a look when I get a few minutes.
Dear Matt,
This was an awesome article. It taught me how to handle relationships well in Power BI Desktop.
I would also be interested in CitizenBH's question.
If I add the mentioned extra rows into the source tables, I get this result in the Table visualization:

Date         BomID  Total_Sales_Qty  Total_BOM_Qty  Total_BOM_Sales
14 Jan 2016  ABC    2                1              2
             K      2                1              2
             P      2                2              4
My problem is that ABC is not disassembled into its components (according to the BOM table: A, B, and C).
Therefore I cannot see how many "A", "B", and "C" are needed to create enough "ABC" to build 2 pieces of "ABPK".
The expected result in this case would be:

Date         BomID  Total_Sales_Qty  Total_BOM_Qty  Total_BOM_Sales
14 Jan 2016  ABC    2                1              2
             K      2                1              2
             P      2                2              4
             A      2                2              4
             B      2                3              6
             C      2                1              2
"ABPK" is the finished product, "ABC" is the child part, and "A", "B", and "C" are the materials in this case.
Is it possible to solve this issue please?
Thank you in advance
Very good read. But I have a question. Does Bi-Directional Cross Filtering exist in Excel 2016? (I can't find it in Power Pivot in Excel 2016.) I can only see it in Power BI Desktop.
Min Li, apparently you are correct – it is not there. I don’t have Excel 2016 but my understanding was that it was included. I just confirmed it was left out of the final version due to
backward compatibility issues with Excel 2013. Thanks for alerting me to this.
Can you please help with a solution for the below file?
You should ask your question on a forum such as community.powerbi.com
Very nicely explained – I first learnt about the magic behind the logic of a DAX cross filter here: http://mdxdax.blogspot.in/2011/03/logic-behind-magic-of-dax-cross-table.html
Also, an important thing to learn about CALCULATE is that the filter parameter is always a table.
The best way to visualize this is to think of the filtering as happening via the Advanced Filter method of Excel's Range object (which expects a range as the criteria).
So even when you say CALCULATE([mSales], PRODUCTS[PRODUCTS_TYP]="Single"), behind the scenes it is passing a table of products filtered for type "Single" as the "criteria" range for Sales, and Sales is filtered for that criteria.
Once you understand that a table is passed as the filter criteria, the shock of
CALCULATE ( SUM ( BOM[Qty] ), Sales ) reduces!
Also, a good practice is not to create the measures on individual tables but on a separate table called M having a column called MEASURES.
This way it becomes easy to maintain, and you also don't have to re-create the measures if you decide to change the data source of the Fact/Dim tables or for some reason delete the Fact/Dim tables.
Referring to the place where measures should be kept: I remember an article from Rob Collie saying that best practice is to store measures with the tables they relate to. You are giving me a smart hint for the case of deleting or changing the structure.
Knowing that it's forcing a left outer join for the purposes of the calculation makes it all click. Thank you! I've seen this formula before but never seen an explanation of how it was working, and without the proper context, remembering how to use it was just not happening.
Superb, insightful explanation of a difficult concept. High quality blogging, Matt!!
Great time and effort put in here to explain a tricky concept. I didn't know about the new feature in 2016 and wondered how I could find out about stuff like this more readily. The black magic formula is great, and if you spend too much time trying to understand it, your head begins to hurt!
A beautiful article about a not-so-easy-to-grasp problem. The episode has been masterfully set out, with great simplification, description, and clarification of all problem areas. I particularly loved the explanation of the Black Magic formulas with the help of the Expanded Tables concept.
Truly enjoyed reading and learning. Thank you Matt.
Percentile definition
A percentile is a value on a scale of one hundred that indicates the percent of a distribution that is equal to or below it. For example, on the map of daily streamflow conditions a river discharge
at the 90th percentile is equal to or greater than 90 percent of the discharge values recorded during all years that measurements have been made. In these plots, the percentiles are based on historic
daily values calculated for each month. In general,
a percentile greater than 75 is considered above normal
a percentile between 25 and 75 is considered normal
a percentile less than 25 is considered below normal
In some hydrological studies, particularly those related to floods, a variation of the percentile known as the "percent exceedance" is used. It is simply obtained by subtracting the percentile scale
value from 100 percent. For example, a discharge at the 75th percentile is the same as a discharge at the 25th percent exceedance (100 - 75 = 25).
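As a quick illustration of the definition and of percent exceedance, here is a minimal Python sketch; the historic discharge values and today's reading are invented for the example:

```python
# Classify a discharge reading against historic daily values.
# All numbers below are made up for illustration.
import numpy as np

historic = np.array([120, 95, 180, 210, 160, 140, 300, 175, 130, 220])
today = 200.0

percentile = 100.0 * np.mean(historic <= today)  # percent at or below today
exceedance = 100.0 - percentile                  # percent exceedance

if percentile > 75:
    label = "above normal"
elif percentile >= 25:
    label = "normal"
else:
    label = "below normal"

print(f"{percentile:.0f}th percentile ({exceedance:.0f}% exceedance): {label}")
```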
Advanced Analytical Models: Over 800 Models and 300 Applications from the Basel II Accord to Wall Street and Beyond
If you’re seeking solutions to advanced and even esoteric problems, Advanced Analytical Models goes beyond theoretical discussions of modeling by facilitating a thorough understanding of concepts and
their real-world applications—including the use of embedded functions and algorithms. This reliable resource will equip you with all the tools you need to quantitatively assess risk in a range of
areas, whether you are a risk manager, business decision-maker, or investor.
Advanced Analytical Models
Founded in 1807, John Wiley & Sons is the oldest independent publishing company in the United States. With offices in North America, Europe, Australia, and Asia, Wiley is globally committed to
developing and marketing print and electronic products and services for our customers’ professional and personal knowledge and understanding. The Wiley Finance series contains books written
specifically for finance and investment professionals as well as sophisticated individual investors and their financial advisors. Book topics range from portfolio management to e-commerce, risk
management, financial engineering, valuation, and financial instrument analysis, as well as much more. For a list of available titles, please visit our Web site at www.WileyFinance.com.
Advanced Analytical Models: Over 800 Models and 300 Applications from the Basel II Accord to Wall Street and Beyond
John Wiley & Sons, Inc.
Copyright © 2008 by Johnathan Mun. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form
or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the
prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978)
750-8400, fax (978) 646-8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River
Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions. Limit of Liability/Disclaimer of Warranty: While the publisher and author have used
their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied
warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained
herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other
commercial damages, including but not limited to special, incidental, consequential, or other damages. Designations used by companies to distinguish their products are often claimed as trademarks. In
all instances where John Wiley & Sons, Inc., is aware of a claim, the product names appear in initial capital or all capital letters. Readers, however, should contact the appropriate companies for
more complete information regarding trademarks and registration. For general information on our other products and services or for technical support, please contact our Customer Care Department
within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that
appears in print may not be available in electronic formats. For more information about Wiley products, visit our Web site at www.wiley.com. Library of Congress Cataloging-in-Publication Data: Mun,
Johnathan. Advanced analytical models : over 800 models and 300 applications from the Basel II Accord to Wall Street and beyond / Johnathan Mun. p. cm. — (Wiley finance series) Includes index. ISBN
978-0-470-17921-5 (cloth/dvd) 1. Finance—Mathematical models. 2. Risk assessment—Mathematical models. 3. Mathematical models. 4. Computer simulation. I. Title. HG106.M86 2008 003 .3—dc22 2007039385
Printed in the United States of America. 10 9 8 7 6 5 4 3 2 1
Dedicated to my wife, Penny. Without your encouragement, advice, and support, this modeling book would never have taken off. “Delight yourself in the Lord and He will give you the desires of your
heart.” —Psalms 37:4 (NIV)
Software Applications
PART 1
Modeling Toolkit and Risk Simulator Applications

Introduction to the Modeling Toolkit Software
Introduction to Risk Simulator
Running a Monte Carlo Simulation
Using Forecast Charts and Confidence Intervals
Correlations and Precision Control
Tornado and Sensitivity Tools in Simulation
Sensitivity Analysis
Distributional Fitting: Single Variable and Multiple Variables
Bootstrap Simulation
Hypothesis Testing
Data Extraction, Saving Simulation Results, and Generating Reports
Regression and Forecasting Diagnostic Tool
Statistical Analysis Tool
Distributional Analysis Tool
Portfolio Optimization
Optimization with Discrete Integer Variables
Forecasting

1. Analytics—Central Limit Theorem
2. Analytics—Central Limit Theorem—Winning Lottery Numbers
3. Analytics—Flaw of Averages
4. Analytics—Mathematical Integration Approximation Model
5. Analytics—Projectile Motion
6. Analytics—Regression Diagnostics
7. Analytics—Ships in the Night
8. Analytics—Statistical Analysis
9. Analytics—Weighting of Ratios
10. Credit Analysis—Credit Premium
11. Credit Analysis—Credit Default Swaps and Credit Spread Options
12. Credit Analysis—Credit Risk Analysis and Effects on Prices
13. Credit Analysis—External Debt Ratings and Spread
14. Credit Analysis—Internal Credit Risk Rating Model
15. Credit Analysis—Profit Cost Analysis of New Credit
16. Debt Analysis—Asset-Equity Parity Model
17. Debt Analysis—Cox Model on Price and Yield of Risky Debt with Mean-Reverting Rates
18. Debt Analysis—Debt Repayment and Amortization
19. Debt Analysis—Debt Sensitivity Models
20. Debt Analysis—Merton Price of Risky Debt with Stochastic Asset and Interest
21. Debt Analysis—Vasicek Debt Option Valuation
22. Debt Analysis—Vasicek Price and Yield of Risky Debt
23. Decision Analysis—Decision Tree Basics
24. Decision Analysis—Decision Tree with EVPI, Minimax, and Bayes' Theorem
25. Decision Analysis—Economic Order Quantity and Inventory Reorder Point
26. Decision Analysis—Economic Order Quantity and Optimal Manufacturing
27. Decision Analysis—Expected Utility Analysis
28. Decision Analysis—Inventory Control
29. Decision Analysis—Queuing Models
30. Exotic Options—Accruals on Basket of Assets
31. Exotic Options—American, Bermudan, and European Options with Sensitivities
32. Exotic Options—American Call Option on Foreign Exchange
33. Exotic Options—American Call Options on Index Futures
34. Exotic Options—American Call Option with Dividends
35. Exotic Options—Asian Lookback Options Using Arithmetic Averages
36. Exotic Options—Asian Lookback Options Using Geometric Averages
37. Exotic Options—Asset or Nothing Options
38. Exotic Options—Barrier Options
39. Exotic Options—Binary Digital Options
40. Exotic Options—Cash or Nothing Options
41. Exotic Options—Chooser Option (Simple Chooser)
42. Exotic Options—Chooser Option (Complex Chooser)
43. Exotic Options—Commodity Options
44. Exotic Options—Currency (Foreign Exchange)
45. Exotic Options—Double Barrier Options
46. Exotic Options—European Call Option with Dividends
47. Exotic Options—Exchange Assets Option
48. Exotic Options—Extreme Spreads Option
49. Exotic Options—Foreign Equity–Linked Foreign Exchange Options in Domestic Currency
50. Exotic Options—Foreign Equity Struck in Domestic Currency
51. Exotic Options—Foreign Equity with Fixed Exchange Rate
52. Exotic Options—Foreign Takeover Options
53. Exotic Options—Forward Start Options
54. Exotic Options—Futures and Forward Options
55. Exotic Options—Gap Options
56. Exotic Options—Graduated Barrier Options
57. Exotic Options—Index Options
58. Exotic Options—Inverse Gamma Out-of-the-Money Options
59. Exotic Options—Jump-Diffusion Options
60. Exotic Options—Leptokurtic and Skewed Options
61. Exotic Options—Lookback with Fixed Strike (Partial Time)
62. Exotic Options—Lookback with Fixed Strike
63. Exotic Options—Lookback with Floating Strike (Partial Time)
64. Exotic Options—Lookback with Floating Strike
65. Exotic Options—Min and Max of Two Assets
66. Exotic Options—Options on Options
67. Exotic Options—Option Collar
68. Exotic Options—Perpetual Options
69. Exotic Options—Range Accruals (Fairway Options)
70. Exotic Options—Simple Chooser
71. Exotic Options—Spread on Futures
72. Exotic Options—Supershare Options
73. Exotic Options—Time Switch Options
74. Exotic Options—Trading-Day Corrections
75. Exotic Options—Two-Asset Barrier Options
76. Exotic Options—Two Asset Cash or Nothing
77. Exotic Options—Two Correlated Assets Option
78. Exotic Options—Uneven Dividend Payments Option
79. Exotic Options—Writer Extendible Option
80. Forecasting—Data Diagnostics
81. Forecasting—Econometric, Correlations, and Multiple Regression Modeling
82. Forecasting—Exponential J-Growth Curves
83. Forecasting—Forecasting Manual Computations
84. Forecasting—Linear Interpolation and Nonlinear Spline Extrapolation
85. Forecasting—Logistic S-Growth Curves
86. Forecasting—Markov Chains and Market Share
87. Forecasting—Multiple Regression
88. Forecasting—Nonlinear Extrapolation and Forecasting
89. Forecasting—Stochastic Processes, Brownian Motion, Forecast Distribution at Horizon, Jump-Diffusion, and Mean-Reversion
90. Forecasting—Time-Series ARIMA
91. Forecasting—Time-Series Analysis
92. Industry Applications—Biotech Manufacturing Strategy
93. Industry Applications—Biotech Inlicensing Drug Deal Structuring
94. Industry Applications—Biotech Investment Valuation
95. Industry Application—Banking: Integrated Risk Management, Probability of Default, Economic Capital, Value at Risk, and Optimal Bank Portfolios
96. Industry Application—Electric/Utility: Optimal Power Contract Portfolios
97. Industry Application—IT—Information Security Intrusion Risk Management
98. Industry Applications—Insurance ALM Model
99. Operational Risk—Queuing Models at Bank Branches
100. Optimization—Continuous Portfolio Allocation
101. Optimization—Discrete Project Selection
102. Optimization—Inventory Optimization
103. Optimization—Investment Portfolio Allocation
104. Optimization—Investment Capital Allocation I (Basic Model)
105. Optimization—Investment Capital Allocation II (Advanced Model)
106. Optimization—Military Portfolio and Efficient Frontier
107. Optimization—Optimal Pricing with Elasticity
108. Optimization—Optimization of a Harvest Model
109. Optimization—Optimizing Ordinary Least Squares
110. Optimization—Stochastic Portfolio Allocation
111. Options Analysis—Binary Digital Instruments
112. Options Analysis—Inverse Floater Bond
113. Options Analysis—Options-Trading Strategies
114. Options Analysis—Options-Adjusted Spreads Lattice
115. Options Analysis—Options on Debt
116. Options Analysis—Five Plain Vanilla Options
117. Probability of Default—Bond Yields and Spreads (Market Comparable)
118. Probability of Default—Empirical Model
119. Probability of Default—External Options Model (Public Company)
120. Probability of Default—Merton Internal Options Model (Private Company)
121. Probability of Default—Merton Market Options Model (Industry Comparable)
122. Project Management—Cost Estimation Model
123. Project Management—Critical Path Analysis (CPM PERT GANTT)
124. Project Management—Project Timing
125. Real Estate—Commercial Real Estate ROI
126. Risk Analysis—Integrated Risk Analysis
127. Risk Analysis—Interest Rate Risk
128. Risk Analysis—Portfolio Risk Return Profiles
129. Risk Hedging—Delta-Gamma Hedging
130. Risk Hedging—Delta Hedging
131. Risk Hedging—Effects of Fixed versus Floating Rates
132. Risk Hedging—Foreign Exchange Cash Flow Model
133. Risk Hedging—Hedging Foreign Exchange
134. Sensitivity—Greeks
135. Sensitivity—Tornado and Sensitivity Charts Linear
136. Sensitivity—Tornado and Sensitivity Nonlinear
137. Simulation—Basic Simulation Model
138. Simulation—Best Surgical Team
139. Simulation—Correlated Simulation
140. Simulation—Correlation Effects on Risk
141. Simulation—Data Fitting
142. Simulation—Debt Repayment and Amortization
143. Simulation—Demand Curve and Elasticity Estimation
144. Simulation—Discounted Cash Flow, Return on Investment, and Volatility Estimates
145. Simulation—Infectious Diseases
146. Simulation—Recruitment Budget (Negative Binomial and Multidimensional Simulation)
147. Simulation—Retirement Funding with VBA Macros
148. Simulation—Roulette Wheel
149. Simulation—Time Value of Money
150. Six Sigma—Obtaining Statistical Probabilities, Basic Hypothesis Tests, Confidence Intervals, and Bootstrapping Statistics
151. Six Sigma—One- and Two-Sample Hypothesis Tests Using t-Tests, Z-Tests, F-Tests, ANOVA, and Nonparametric Tests (Friedman, Kruskal Wallis, Lilliefors, and Runs Tests)
152. Six Sigma—Sample Size Determination and Design of Experiments
153. Six Sigma—Statistical and Unit Capability Measures, Specification Levels, and Control Charts
154. Valuation—Buy versus Lease
155. Valuation—Banking: Classified Loan Borrowing Base
156. Valuation—Banking: Break-Even Inventory with Seasonal Lending Trial Balance Analysis
157. Valuation—Banking: Firm in Financial Distress
158. Valuation—Banking: Pricing Loan Fees Model
159. Valuation—Valuation Model
160. Value at Risk—Optimized and Simulated Portfolio VaR
161. Value at Risk—Options Delta Portfolio VaR
162. Value at Risk—Portfolio Operational and Credit Risk VaR Capital Adequacy
163. Value at Risk—Right-Tail Capital Requirements
164. Value at Risk—Static Covariance Method
165. Volatility—Implied Volatility
166. Volatility—Volatility Computations (Log Returns, Log Assets, Implied Volatility, Management Assumptions, EWMA, GARCH)
167. Yield Curve—CIR Model
168. Yield Curve—Curve Interpolation BIM Model
169. Yield Curve—Curve Interpolation NS Model
170. Yield Curve—Forward Rates from Spot Rates
171. Yield Curve—Term Structure of Volatility
172. Yield Curve—U.S. Treasury Risk-Free Rates and Cubic Spline Curves
173. Yield Curve—Vasicek Model
PART 2
Real Options SLS Applications
174. Introduction to the SLS Software
     Single Asset and Single Phased Module
     Multiple Asset or Multiple Phased SLS Module
     Multinomial SLS Module
     SLS Excel Solution Module
     SLS Excel Functions Module
     Lattice Maker Module
175. Employee Stock Options—Simple American Call Option
176. Employee Stock Options—Simple Bermudan Call Option with Vesting
177. Employee Stock Options—Simple European Call Option
178. Employee Stock Options—Suboptimal Exercise
179. Employee Stock Options—Vesting, Blackout, Suboptimal, Forfeiture
180. Exotic Options—American and European Lower Barrier Options
181. Exotic Options—American and European Upper Barrier Options
182. Exotic Options—American and European Double Barrier Options and Exotic Barriers
183. Exotic Options—Basic American, European, and Bermudan Call Options
184. Exotic Options—Basic American, European, and Bermudan Put Options
185. Real Options—American, European, Bermudan, and Customized Abandonment Options
186. Real Options—American, European, Bermudan, and Customized Contraction Options
187. Real Options—American, European, Bermudan, and Customized Expansion Options
188. Real Options—Contraction, Expansion, and Abandonment Options
189. Real Options—Dual Variable Rainbow Option Using Pentanomial Lattices
190. Real Options—Exotic Chooser Options
191. Real Options—Exotic Complex Floating American and European Chooser
192. Real Options—Jump-Diffusion Option Using Quadranomial Lattices
193. Real Options—Mean-Reverting Calls and Puts Using Trinomial Lattices
194. Real Options—Multiple Assets Competing Options
195. Real Options—Path-Dependent, Path-Independent, Mutually Exclusive, Non–Mutually Exclusive, and Complex Combinatorial Nested Options
196. Real Options—Sequential Compound Options
197. Real Options—Simultaneous Compound Options
198. Real Options—Simple Calls and Puts Using Trinomial Lattices
PART 3
Real Options Strategic Case Studies—Framing the Options

199. Real Options Strategic Cases—High-Tech Manufacturing: Build or Buy Decision with Real Options
200. Real Options Strategic Cases—Oil and Gas: Farm-Outs, Options to Defer, and Value of Information
201. Real Options Strategic Cases—Pharmaceutical Development: Value of Perfect Information and Optimal Trigger Values
202. Real Options Strategic Cases—Option to Switch Inputs
203. Valuation—Convertible Warrants with a Vesting Period and Put Protection
APPENDIX A List of Models
APPENDIX B List of Functions
APPENDIX C Understanding and Choosing the Right Probability Distributions
APPENDIX D Financial Statement Analysis
APPENDIX E Exotic Options Formulae
APPENDIX F Measures of Risk
APPENDIX G Mathematical Structures of Stochastic Processes
Glossary of Input Variables and Parameters in the Modeling Toolkit Software
About the DVD
About the Author
Advanced Analytical Models is a large collection of advanced models with a multitude of industry and domain applications. The book is based on years of academic research and practical consulting
experience, coupled with domain expert contributions. The Modeling Toolkit software that holds all the models, Risk Simulator software, and Real Options SLS software were all developed by the author,
with over 1,000 functions, tools, and model templates in these software applications. The trial versions are included in the accompanying DVD. The applications covered are vast. Included are Basel II
banking risk requirements (credit risk, market risk, credit spreads, default risk, value at risk, etc.) and financial analysis (exotic options and valuation), risk analysis (stochastic forecasting,
risk-based Monte Carlo simulation, optimization), real options analysis (strategic options and decision analysis), Six Sigma and quality initiatives, management science and statistical applications,
and everything in between, such as applied statistics, manufacturing, decision analysis, operations research, optimization, forecasting, and econometrics. This book is targeted at practitioners who
require the algorithms, examples, models, and insights in solving more advanced and even esoteric problems. This book does not only talk about modeling or illustrate basic concepts and examples; it
comes complete with a DVD filled with sample modeling videos, case studies, and software applications to help you get started immediately. In other words, this book dispenses with all the theoretical
discussions and mathematical models that are extremely hard to decipher and apply in the real business world. Instead, these theoretical models have been coded up into user-friendly and powerful
software, and this book shows the reader how to start applying advanced modeling techniques almost immediately. The trial software applications allow you to access the approximately 300 model
templates and 800 functions and tools, understand the concepts, and use embedded functions and algorithms in their own models. In addition, you can run risk-based Monte Carlo simulations and advanced
forecasting methods, and perform optimization on a myriad of situations, as well as structure and solve customized real options and financial options problems. Each model template that comes in the
Modeling Toolkit software is described in this book. Descriptions are provided in as much detail as the applications warrant. Some of the more fundamental concepts in risk analysis and real options
are covered in the author's other books. It is suggested that these books, Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Stochastic Forecasting, and Portfolio Optimization
(2006) and Real Options Analysis, Second Edition (2005),
both published by John Wiley & Sons, be used as references for some of the models in this book. Those modeling issues that are, in the author’s opinion, critical, whether they are basic issues or
more advanced analytical ones, are presented in detail. As software applications change continually, it is recommended that you check the author’s web site (www.realoptionsvaluation.com) frequently
for any analytical updates, software upgrades, and revised or new models.
ACKNOWLEDGMENTS

A special thank you to the contributors, including Mark Benyovszky, Morton Glantz, Uriel Kusiatin, and Victor Wong.

DR. JOHNATHAN MUN
[email protected]
California, USA
Software Applications
This book covers the following software applications:

Modeling Toolkit
Over 800 functions, models, and tools and over 300 Excel and SLS templates covering the following applications:
Business analytics and statistics (CDF, ICDF, PDF, data analysis, integration)
Credit and Debt Analysis (credit default swap, credit spread options, credit rating, debt options and pricing)
Decision Analysis (decision tree, Minimax, utility functions)
Exotic Options (over 100 types of financial and exotic options)
Forecasting (ARIMA, econometrics, EWMA, GARCH, nonlinear extrapolation, spline, time-series)
Industry Applications (banking, biotech, insurance, IT, real estate, utility)
Operations Research and Portfolio Optimization (continuous, discrete, integer, static, dynamic, and stochastic)
Options Analysis (BDT interest lattices, debt options, options trading strategies)
Portfolio Models (investment allocations, optimization, risk and return profiles)
Probability of Default and Banking Credit Risk (private, public and retail debt, credit derivatives and swaps)
Real Options Analysis (over 100 types: abandon, barrier, contract, customized, dual asset, expand, multi-asset, multi-phased, pentanomials, quadranomials, sequential, switch, and trinomials)
Risk Hedging (delta and delta-gamma hedges, foreign exchange and interest rate risk)
Risk Simulation (correlated simulation, data fitting, Monte Carlo simulation, risk-simulation)
Six Sigma (capability measures, control charts, hypothesis tests, measurement systems, precision, sample size)
Statistical Tools (ANOVA, Two-Way ANOVA, nonparametric hypotheses tests, parametric tests, principal components, variance-covariance)
Valuation (APT, buy versus lease, CAPM, caps and floors, convertibles, financial ratios, valuation models)
Value at Risk (static covariance and simulation-based VaR)
Volatility (EWMA, GARCH, implied volatility, Log Returns, Real Options Volatility, probability to volatility)
Yield Curve (BIS, Cox, Merton, NS, spline, Vasicek)
Risk Simulator
Over 25 statistical distributions covering the following applications:

Applied Business Statistics (descriptive statistics, CDF/ICDF/PDF probabilities, stochastic parameter calibration)
Bootstrap Nonparametric Simulation and Hypothesis Testing (testing empirical and theoretical moments)
Correlated Simulations (simulation copulas and Monte Carlo)
Data Analysis and Regression Diagnostics (heteroskedasticity, multicollinearity, nonlinearity, outliers)
Forecasting (ARIMA, Auto-ARIMA, J-S curves, GARCH, Markov chains, multivariate regressions, stochastic processes)
Optimization (static, dynamic, stochastic)
Sensitivity Analysis (correlated sensitivity, scenario, spider, tornado)

Real Options SLS
Customizable Binomial, Trinomial, Quadranomial, and Pentanomial Lattices
Lattice Makers (lattices with Monte Carlo simulation)
Super fast super lattice algorithms (running thousands of lattice steps in seconds)
Covering the following applications:

Exotic Options Models (barriers, benchmarked, multiple assets, portfolio options)
Financial Options Models (3D dual asset exchange, single and double barriers)
Real Options Models (abandon, barrier, contract, expand, sequential compound, switching)
Specialized Options (mean-reverting, jump-diffusion, and dual asset rainbows)

Employee Stock Options Valuation Toolkit
Applied by the U.S. Financial Accounting Standards Board for FAS 123R 2004
Binomial and closed-form models
Covers:

Blackout Periods
Changing Volatility
Forfeiture Rates
Suboptimal Exercise
Multiple Vesting
Modeling Toolkit and Risk Simulator Applications
This book covers about 300 different analytical model templates that apply up to 800 modeling functions and tools from a variety of software applications. Trial versions of these software applications
are included in the book’s DVD or can be downloaded directly from the Web at www.realoptionsvaluation.com. Part I of the book deals with models using the Modeling Toolkit and Risk Simulator software
applications. Part II deals with real options and financial option models using the Real Options SLS software. Readers who are currently expert users of the Modeling Toolkit software and Risk
Simulator software may skip this section and dive directly into the models.
INTRODUCTION TO THE MODELING TOOLKIT SOFTWARE

The Modeling Toolkit software incorporates about 800 different advanced analytical models, functions, and tools, applicable in a variety of industries
and applications. Appendix 1 lists the models available in the software as of this book’s publication date. To install this software for a trial period of 30 days, insert the DVD that comes with the
book or visit www.realoptionsvaluation.com and click on Downloads. Look for the Modeling Toolkit software. This software works on Windows XP or Vista and requires Excel XP, 2003, or 2007 to run. At the end of the installation process, you will be prompted for a license key. Please use this trial license:

Name: 30 Day Trial
Key: 4C55-0BA2-420E-CA84

To start the software, click on Start |
Programs | Real Options Valuation | Modeling Toolkit | Modeling Toolkit. This action will start Excel. Inside Excel, you will notice a new menu item called Modeling Toolkit. This menu is
self-explanatory, as the models are categorized by application domain, and each model is described in more detail in this book. Please note that this software uses Excel macros. If you receive an
error message on macros, it is because your system is set to a high security level. You need to fix this by starting Excel XP or 2003 and clicking on Tools | Macros | Security | Medium and restarting
the software. If you are using Excel 2007, you can simply click on Enable Macros when prompted (or reset your security settings when in Excel 2007 by clicking on the Office button located at
the top left of the screen and selecting Excel Options | Trust Center | Trust Center Settings | Macro Settings | Enable All Macros). Note that the trial version will expire in 30 days. To obtain a
full corporate license, please contact the author's firm, Real Options Valuation, Inc., at admin@realoptionsvaluation.com or visit the company's web site (www.realoptionsvaluation.com). Notice that
after the software expiration date, some of the models that depend on Risk Simulator or Real Options SLS software still will function, until their respective expiration dates. In addition, after the
expiration date, these worksheets still will be visible, but the analytical results and functions will return null values. Finally, software versions continually change and improve, and the best
recommendation is to visit the company’s web site for any updated or newer software versions or details on installation and licensing. The Appendixes provide a more detailed list of all the
functions, tools, and models and the Glossary details the required variable inputs in this software.
INTRODUCTION TO RISK SIMULATOR

This section provides the novice risk analyst with an introduction to the Risk Simulator software for performing Monte Carlo simulation; a trial version of the
software is included in the book’s DVD. Please refer to About the DVD at the end of this book for details on obtaining this extended trial license. This section starts off by illustrating what Risk
Simulator does and what steps are taken in a Monte Carlo simulation as well as some of the more basic elements in a simulation analysis. It continues with how to interpret the results from a
simulation and ends with a discussion of correlating variables in a simulation as well as applying precision and error control. Many more advanced techniques such as ARIMA forecasts and optimization
are also discussed. Software versions with new enhancements are released continually. Please review the software’s user manual and the software download site (www.realoptionsvaluation.com) for more
up-to-date details on using the latest version of the software. See Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Stochastic Forecasting, and Portfolio Optimization (Hoboken,
NJ: John Wiley & Sons, 2007), also by the author, for more technical details on using Risk Simulator. Risk Simulator is a Monte Carlo simulation, forecasting, and optimization software. It is written
in Microsoft .NET C# and functions with Excel as an add-in. This software is compatible and often used with the Real Options SLS software shown in Part II of this book, also developed by the author.
Stand-alone software applications in C++ are also available for implementation into other existing proprietary software or databases. The different functions or modules in both software applications
are briefly described next. The Appendixes provide a more detailed list of all the functions, tools, and models.
The Simulation Module allows you to:

Run simulations in your existing Excel-based models
Generate and extract simulation forecasts (distributions of results)
Perform distributional fitting (automatically finding the best-fitting statistical distribution)
Compute correlations (maintaining relationships among multiple simulated random variables)
Identify sensitivities (creating tornado and sensitivity charts)
Test statistical hypotheses (finding statistical differences and similarities between pairs of forecasts)
Run bootstrap simulation (testing the robustness of result statistics)
Run custom and nonparametric simulations (simulations using historical data without specifying any distributions or their parameters, for forecasting without data or applying expert opinion forecasts)

The Forecasting Module can be used to generate:

Automatic time-series forecasts (with and without seasonality and trend)
Automatic ARIMA (automatically generate the best-fitting ARIMA forecasts)
Basic Econometrics (modified multivariate regression forecasts)
Box-Jenkins ARIMA (econometric forecasts)
GARCH Models (forecasting and modeling volatility)
J-Curves (exponential growth forecasts)
Markov Chains (market share and dynamics forecasts)
Multivariate regressions (modeling linear and nonlinear relationships among variables)
Nonlinear extrapolations (curve fitting)
S-Curves (logistic growth forecasts)
Spline Curves (interpolating and extrapolating missing nonlinear values)
Stochastic processes forecasts (random walks, mean-reversions, jump-diffusions, and mixed processes)

The Optimization Module is used for running:

Linear and nonlinear optimization
Static optimization (without simulation), dynamic optimization (with simulation), and stochastic optimization (with simulation, run multiple times)
Discrete, continuous, and integer decision variables

Analytical Tools:

Correlated simulations
Data diagnostics (autocorrelation, correlation, distributive lags, heteroskedasticity, micronumerosity, multicollinearity, nonlinearity, nonstationarity, normality, outliers, partial autocorrelation, and others)
Data extraction
Data fitting
Data import and export
Distribution analysis (PDF, CDF, ICDF)
Distribution designer (creating customized distributions and Delphi simulation)
Hypothesis tests and bootstrap simulation
Sensitivity and dynamic scenario analysis
Statistical analysis (descriptive statistics, distributional fitting, hypothesis tests, nonlinear extrapolation, normality, stochastic parameter estimation, time-series forecasts, trending, and others)
Tornado and spider charts

The Real Options Super Lattice Solver (SLS) is another stand-alone software that complements Risk Simulator, used for solving simple to complex real options problems. See Part II of this book for details on this software's applications.
To install the software, insert the accompanying DVD, click on the Install Risk Simulator link, and follow the onscreen instructions. You will need to be online to download the latest version of the
software. The software requires Windows XP/Vista, administrative privileges, and Microsoft .NET Framework 1.1 and 2.0 installed on the computer. Most new computers come with Microsoft .NET Framework
1.1 already preinstalled. However, if an error message pertaining to requiring .NET Framework occurs during the installation of Risk Simulator, exit the installation. Then install the relevant .NET
Framework software, also included in the DVD (found in the DOT NET Framework folder). Complete the .NET installation, restart the computer, and then reinstall the Risk Simulator software. Version 1.1
of the .NET Framework is required even if your system has version 2.0/3.0, as they work independently of each other. You may also download this software on the Download page of
www.realoptionsvaluation.com. See the About the DVD section at the end of this book for details on obtaining an extended trial license. Once installation is complete, start Microsoft Excel. If the
installation was successful, you should see an additional Risk Simulator item on the menu bar in Excel and a new icon bar, as shown in Figure I.1. Figure I.2 shows the icon toolbar in more detail.
Please note that Risk Simulator supports multiple languages (e.g., English, Chinese, Japanese, and Spanish) and you can switch among languages by going to Risk Simulator | Languages. You are now
ready to start using the software for a trial period. You can obtain permanent or academic licenses from www.realoptionsvaluation.com. If you are using Windows Vista, make sure to disable User Access
Control before installing the software license. To do so: Click on Start | Control Panel | Classic View
FIGURE I.1 Risk Simulator menu and icon toolbar
(on the left panel) | User Accounts | Turn User Account Control On or Off and uncheck the option, Use User Account Control (UAC), and restart the computer. When restarting the computer, you will get
a message that UAC is turned off. You can turn this message off by going to the Control Panel | Security Center | Change the Way Security Center Alerts Me | Don’t Notify Me and Don’t Display the
Icon. The sections that follow provide step-by-step instructions for using the software. As the software is continually updated and improved, the examples in this book might be slightly different
from the latest version downloaded from the Internet.
RUNNING A MONTE CARLO SIMULATION

Typically, to run a simulation in your existing Excel model, you must perform these five steps:

1. Start a new or open an existing simulation profile.
2. Define input assumptions in the relevant cells.
3. Define output forecasts in the relevant cells.
4. Run the simulation.
5. Interpret the results.
If desired, and for practice, open the example file called Basic Simulation Model and follow along with the examples on creating a simulation (a generic code sketch of these five steps also follows below). The example file can be found on the menu at Risk Simulator | Example Models.
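Outside the tool, the same five-step flow can be sketched in a few lines of ordinary code. The snippet below is a generic stand-in, not Risk Simulator itself: the Uniform(0.9, 1.1) assumption matches the example used later for cell G9, but the normal parameters and the income formula are invented for illustration:

```python
# A minimal, generic Monte Carlo sketch of the five steps above (NumPy only).
import numpy as np

rng = np.random.default_rng()  # step 1: a "profile" (settings plus an RNG)
TRIALS = 1_000

# Step 2: input assumptions (distributions in place of fixed cell values)
revenue = rng.normal(loc=100.0, scale=10.0, size=TRIALS)  # e.g., cell G8
cost_factor = rng.uniform(0.9, 1.1, size=TRIALS)          # e.g., cell G9

# Step 3: the output forecast (the formula cell, recomputed every trial)
income = revenue - 50.0 * cost_factor                     # e.g., cell G10

# Step 4 ran vectorized above; step 5: interpret the distribution of results
print(f"mean income        : {income.mean():.2f}")
print(f"5th/95th percentile: {np.percentile(income, [5, 95])}")
```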
1. Starting a New Simulation Profile

To start a new simulation, you must first create a simulation profile. A simulation profile contains a complete set of instructions on how you would like
to run a simulation; it contains all the assumptions, forecasts, simulation run preferences, and so forth. Having profiles facilitates creating multiple scenarios of simulations; that is, using the
same exact model, several profiles can be created, each with its own specific simulation assumptions, forecasts, properties, and requirements. The same analyst can create different test scenarios
using different distributional assumptions and inputs or multiple users can test their own assumptions and inputs on the same model. Instead of having to make duplicates of the model, the same model
can be used and different simulations can be run through this model profiling process.
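To make the profile idea concrete, here is a toy sketch (not Risk Simulator's API) in which each profile is just a bundle of settings applied to the same model function; the model, parameter values, and profile names are all hypothetical:

```python
# Several "profiles" run the same model under different assumptions.
import numpy as np

def model(rng, trials, rev_mean, rev_sd):
    revenue = rng.normal(rev_mean, rev_sd, trials)
    cost = rng.uniform(0.9, 1.1, trials) * 50.0
    return revenue - cost

profiles = {
    "Base Case": {"trials": 1000, "seed": 1, "rev_mean": 100.0, "rev_sd": 10.0},
    "High Risk": {"trials": 1000, "seed": 1, "rev_mean": 100.0, "rev_sd": 25.0},
}

for name, p in profiles.items():
    rng = np.random.default_rng(p["seed"])
    income = model(rng, p["trials"], p["rev_mean"], p["rev_sd"])
    print(f"{name}: mean={income.mean():.2f}, sd={income.std(ddof=1):.2f}")
```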
FIGURE I.2 Risk Simulator icon toolbar
FIGURE I.3 New simulation profile
Start a new simulation profile by performing these three steps:

1. Start Excel and create a new or open an existing model. You can use the Basic Simulation Model example to follow along: Risk Simulator | Examples | Basic Simulation Model.
2. Click on Risk Simulator | New Simulation Profile.
3. Enter a title for your simulation, including all other pertinent information (Figure I.3).

The
elements in the new simulation profile dialog, shown in Figure I.3, include:
Title. Specifying a simulation profile name or title allows you to create multiple simulation profiles in a single Excel model. By so doing, you can save different simulation scenario profiles within the same model without having to delete existing assumptions and change them each time a new simulation scenario is required.

Number of trials. Enter the number of simulation trials required. Running 1,000 trials means that 1,000 different iterations of outcomes based on the input assumptions will be generated. You can change this number as desired, but the input has to be a positive integer. The default number of runs is 1,000 trials.

Pause on simulation error. If checked, the simulation stops every time an error is encountered in the Excel model; that is, if your model encounters a computational error (e.g., some input values generated in a simulation trial may yield a divide-by-zero error in a spreadsheet cell), the simulation stops. This feature is important in helping you audit your model to make sure there are no computational errors in your Excel model. However, if you are sure the model works, there is no need to check this preference.

Turn on correlations. If checked, correlations between paired input assumptions will be computed. Otherwise, correlations will all be set to zero, and a simulation is run assuming no cross-correlations between input assumptions. Applying correlations will yield more accurate results if correlations do indeed exist and will tend to yield a lower forecast confidence if negative correlations exist.
FIGURE I.4 Change active simulation
Specify random number sequence. By definition, a simulation yields slightly different results every time it is run by virtue of the random-number generation routine in Monte Carlo simulation. This is
a theoretical fact in all random-number generators. However, when making presentations, sometimes you may require the same results. (For example, during a live presentation you may like to show the
same results that are in some pregenerated printed reports from a previous simulation run; when you are sharing models with others, you also may want the same results to be obtained every time.) If
that is the case, check this preference and enter in an initial seed number. The seed number can be any positive integer. Using the same initial seed value, the same number of trials, and the same
input assumptions always will yield the same sequence of random numbers, guaranteeing the same final set of results.
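The seed behavior described above is easy to demonstrate with any seeded random-number generator. This minimal sketch uses NumPy rather than Risk Simulator, with an arbitrary normal assumption; a fixed seed reproduces the result exactly:

```python
# Same seed + same trial count + same assumptions = identical results.
import numpy as np

def run(seed, trials=1000):
    rng = np.random.default_rng(seed)
    return rng.normal(100.0, 10.0, trials).mean()

assert run(seed=123) == run(seed=123)  # identical runs
print(run(seed=123), run(seed=456))    # different seeds give different results
```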
Note that once a new simulation profile has been created, you can come back later and modify your selections. In order to do this, make sure that the current active profile is the profile you wish to
modify; otherwise, click on Risk Simulator | Change Simulation Profile, select the profile you wish to change, and click OK. (Figure I.4 shows an example where there are multiple profiles and how to
activate, duplicate or delete a selected profile.) Then click on Risk Simulator | Edit Simulation Profile and make the required changes.
2. Defining Input Assumptions

The next step is to set input assumptions in your model. Note that assumptions can be assigned only to cells without any equations or functions (i.e., typed-in numerical values that are inputs in a model), whereas output forecasts can be assigned only to cells with equations and functions (i.e., outputs of a model). Recall that assumptions
and forecasts cannot be set unless a simulation profile already exists. Follow these three steps to set new input assumptions in your model:

1. Select the cell on which you wish to set an assumption (e.g., cell G8 in the Basic Simulation Model example).
2. Click on Risk Simulator | Set Input Assumption or click the Set Assumption icon in the Risk Simulator icon toolbar.
3. Select the relevant distribution you want, enter the relevant distribution parameters, and hit OK to insert the input assumption into your model (Figure I.5).

FIGURE I.5 Setting an input assumption

Several key areas are worthy of mention in the Assumption
Properties. Figure I.6 shows the different areas.
Assumption Name. This optional area allows you to enter unique names for the assumptions to help track what each of the assumptions represents. Good modeling practice is to use short but precise assumption names.

Distribution Gallery. This area shows all of the different distributions available in the software. To change the views, right-click anywhere in the gallery and select large icons, small icons, or list. More than two dozen distributions are available.

Input Parameters. Depending on the distribution selected, the required relevant parameters are shown. You may either enter the parameters directly or link them to specific cells in your worksheet. Click on the Link icon to link an input parameter to a worksheet cell. Hard-coding or typing the parameters is useful when the assumption parameters are assumed not to change. Linking to worksheet cells is useful when the input parameters themselves need to be visible on the worksheets or can be changed, as in a dynamic simulation (where the input parameters themselves are linked to assumptions in the worksheets, creating a multidimensional simulation, or simulation of simulations).
FIGURE I.6 Assumption properties
Data Boundary. Typically, the average analyst does not use distributional or data boundaries, but they exist for truncating the distributional assumptions. For instance, if a normal distribution is selected, the theoretical boundaries are between negative infinity and positive infinity. However, in practice, the simulated variable exists only within some smaller range. This range can be entered to truncate the distribution appropriately.

Correlations. Pairwise correlations can be assigned to input assumptions here (see the sketch after this list for the intuition). If correlations are required, remember to check the Turn on Correlations preference by clicking on Risk Simulator | Edit Simulation Profile. See the discussion on correlations later in this chapter for more details about assigning correlations and the effects correlations will have on a model.

Short Descriptions. Short descriptions exist for each of the distributions in the gallery. The short descriptions explain when a certain distribution is used as well as the input parameter requirements. See the section "Understanding Probability Distributions" in the appendix of Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Stochastic Forecasting, and Portfolio Optimization (Hoboken, NJ: John Wiley & Sons, 2006), also by the author, for details about each distribution type available in the software.
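For intuition on what assigning a pairwise correlation does to the sampled inputs, here is a minimal sketch that draws two correlated assumptions from a joint normal distribution. The 0.8 correlation, the means, and the standard deviations are illustrative assumptions, and Risk Simulator's own correlation method may differ in detail:

```python
# Two correlated input assumptions drawn jointly, so the forecast
# inherits their co-movement. All parameter values are made up.
import numpy as np

rng = np.random.default_rng(42)
trials, rho = 10_000, 0.8
sd_rev, sd_fac = 10.0, 0.05

mean = [100.0, 1.0]
cov = [[sd_rev**2, rho * sd_rev * sd_fac],
       [rho * sd_rev * sd_fac, sd_fac**2]]
revenue, factor = rng.multivariate_normal(mean, cov, size=trials).T

print(np.corrcoef(revenue, factor)[0, 1])  # close to 0.8
income = revenue * factor                  # forecast reflects the correlation
```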
Note: If you are following along with the example, continue by setting another assumption on cell G9. This time use the Uniform distribution with a minimum value of 0.9 and a maximum value of 1.1.
Then proceed to defining the output forecasts in the next step.
3. Defining Output Forecasts

The next step is to define output forecasts in the model. Forecasts can be defined only on output cells with equations or functions. Use these three steps to define the forecasts:

1. Select the cell on which you wish to set a forecast (e.g., cell G10 in the Basic Simulation Model example).
2. Click on Risk Simulator | Set Output Forecast or click on the Set Forecast icon on the Risk Simulator icon toolbar.
3. Enter the relevant information and click OK.

Figure I.7 illustrates the set forecast properties, which include:
Forecast Name. Specify the name of the forecast cell. This is important because when you have a large model with multiple forecast cells, naming the forecast cells individually allows you to access the right results quickly. Do not underestimate the importance of this simple step. Good modeling practice is to use short but precise forecast names.

Forecast Precision. Instead of relying on a guesstimate of how many trials to run in your simulation, you can set up precision and error controls (a minimal sketch of this idea follows the list below). When an error-precision combination has been achieved in the simulation, the simulation will pause and inform you of the precision achieved. Thus the number of simulation trials is an automated process; you do not have to guess the required number
of trials to simulate. Review the section on error and precision control for more specific details.

Show Forecast Window. This property allows you to show or not show a particular forecast window. The default is to always show a forecast chart.

FIGURE I.7 Set output forecast
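The precision-control idea, simulating until a target error is reached instead of guessing a trial count, can be sketched in a few lines. The batch size, the 95% confidence level, the target half-width, and the stand-in model below are all assumptions for illustration:

```python
# Keep adding batches of trials until the 95% confidence half-width
# of the mean falls below a chosen error target.
import numpy as np

rng = np.random.default_rng(7)
target_error, batch = 0.5, 1_000
samples = []

while True:
    samples.extend(rng.normal(100.0, 10.0, batch))       # more trials
    x = np.asarray(samples)
    half_width = 1.96 * x.std(ddof=1) / np.sqrt(x.size)  # 95% CI half-width
    if half_width < target_error:
        break

print(f"stopped after {x.size} trials; mean = {x.mean():.2f} ± {half_width:.2f}")
```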
4. Run Simulation

If everything looks right, simply click on Risk Simulator | Run Simulation or click on the Run icon on the Risk Simulator toolbar, and the simulation will proceed. You may also reset a simulation after it has run in order to rerun it (Risk Simulator | Reset Simulation or the Reset icon on the toolbar), or pause it during a run. Also, the step function (Risk Simulator | Step Simulation or the Step icon on the toolbar) allows you to simulate a single trial, one at a time, which is useful for educating others on simulation (i.e., you can show that at each trial, all the values in the assumption cells are replaced and the entire model is recalculated each time).
5. Interpreting the Forecast Results The final step in Monte Carlo simulation is to interpret the resulting forecast charts. Figures I.8 to I.15 show the forecast chart and the statistics generated
after running the simulation. Typically, these sections on the forecast window are important in interpreting the results of a simulation:
Forecast Chart. The forecast chart shown in Figure I.8 is a probability histogram that shows the frequency counts of values occurring and the total number of trials simulated. The vertical bars show
the frequency of a particular x value occurring out of the total number of trials, while the cumulative frequency (smooth line) shows the total probabilities of all values at and below x occurring in
the forecast. Forecast Statistics. The forecast statistics shown in Figure I.9 summarize the distribution of the forecast values in terms of the four moments of a distribution.
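As a minimal illustration of what these four moments summarize, the sketch below computes them from a vector of simulated forecast values. The data and distribution parameters here are made-up stand-ins for extracted forecast values, not Risk Simulator output.

```python
# Hypothetical sketch: the four moments of a simulated forecast distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
forecast = rng.normal(loc=1.0, scale=0.15, size=1000)  # stand-in forecast values

print("Mean (1st moment):           ", np.mean(forecast))
print("Std. deviation (2nd moment): ", np.std(forecast, ddof=1))
print("Skewness (3rd moment):       ", stats.skew(forecast))
print("Excess kurtosis (4th moment):", stats.kurtosis(forecast))  # ~0 for a normal
```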
FIGURE I.8 Forecast chart
You can toggle between the histogram and statistics tabs by pressing the space bar. Preferences. The preferences tab in the forecast chart (Figure I.10) allows you to change the look and feel of the charts. For instance, if Always Show Window On Top is selected, the forecast charts will always be visible regardless of what other software is running on your computer. Semitransparent When Inactive is a powerful option for comparing or overlaying multiple forecast charts at once (e.g., enable this option on several forecast charts and drag them on top of one another to see the similarities or differences visually).
FIGURE I.9 Forecast statistics
FIGURE I.10 Forecast chart preferences
Histogram Resolution allows you to change the number of bins of the histogram, anywhere from 5 bins to 100 bins. Also, the Update Data Interval section allows you to control how fast the simulation runs versus how often the forecast chart is updated. That is, if you wish to see the forecast chart updated at almost every trial, this will slow down the simulation because more memory is being allocated to updating the chart versus running the simulation.
FIGURE I.11 Forecast chart options
This is merely a user preference; it in no way changes the results of the simulation, only the speed of completing it. The Copy Chart button
allows you to copy the active forecast chart for pasting into other software applications (e.g., PowerPoint or Word) and the Close All and Minimize All buttons allow you to control all opened
forecast charts at once. Options. This forecast chart option (Figure I.11) allows you to show all the forecast data or to filter in or out values that fall within some specified interval or within
some standard deviation that you choose. Also, you can set the precision level here for this specific forecast to show the error levels in the statistics view. See the section on precision and error
control for more details.
USING FORECAST CHARTS AND CONFIDENCE INTERVALS
In forecast charts, you can determine the probability of occurrence, called a confidence interval; that is, given two values, what are the chances that the outcome will fall between these two values? Figure I.12 illustrates that there is a 90% probability that the final outcome (in this case, the level of income) will be between $0.2781 and $1.3068.
The two-tailed confidence interval can be obtained by first selecting Two-Tail as the type, entering the desired certainty value (e.g., 90), and hitting Tab on the keyboard. The two computed values
corresponding to the certainty value will then be displayed. In this example, there is a 5% probability that income will be below $0.2781 and another 5% probability that income will be above $1.3068;
that is, the two-tailed confidence interval is a symmetrical interval centered on the median or 50th percentile value. Thus, both tails will have the same probability. Alternatively, a one-tail
probability can be computed. Figure I.13 shows a Left-Tail selection at 95% confidence (i.e., choose Left-Tail as the type, enter 95 as the certainty level, and hit Tab on the keyboard).
FIGURE I.12 Forecast chart two-tailed confidence interval
FIGURE I.13 Forecast chart one-tailed confidence interval
This means that there is a 95% probability that the income will be below $1.3068 (i.e.,
95% on the left tail of $1.3068) or a 5% probability that income will be above $1.3068, corresponding perfectly with the results seen in Figure I.12. In addition to evaluating the confidence interval
(i.e., given a probability level and finding the relevant income values), you can determine the probability of a given income value (Figure I.14). For instance, what is the probability that income
will be less than or equal to $1?
FIGURE I.14 Forecast chart left-tail probability evaluation
FIGURE I.15 Forecast chart right-tail probability evaluation
To do this, select the Left-Tail probability type, enter 1 into the value input box, and hit Tab. The
corresponding certainty will be computed. (In this case, there is a 64.80% probability income will be at or below $1.) For the sake of completeness, you can select the Right-Tail probability type,
enter the value 1 in the value input box, and hit Tab (Figure I.15). The resulting probability indicates the right-tail probability past the value 1; that is, the probability of income exceeding $1.
In this case, we see that there is a 35.20% probability of income at or exceeding $1. Note that the forecast window is resizable by clicking on and dragging the bottom right corner of the window.
Finally, before rerunning a simulation, it is always advisable to reset the current simulation by selecting Risk Simulator | Reset Simulation.
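The percentile arithmetic behind these confidence evaluations is straightforward to reproduce outside the software. The sketch below uses made-up simulated income values rather than the actual model's forecast, and shows how the Two-Tail, Left-Tail, and value-to-probability readings map to simple percentile and counting operations.

```python
# Sketch of the confidence-interval readings on raw simulated values.
import numpy as np

rng = np.random.default_rng(7)
income = rng.normal(loc=0.79, scale=0.31, size=10_000)  # made-up stand-in forecast

# Two-Tail at 90% certainty: 5% in each tail
lo, hi = np.percentile(income, [5, 95])
print(f"90% two-tailed interval: [{lo:.4f}, {hi:.4f}]")

# Left-Tail at 95%: the value below which 95% of outcomes fall
print(f"Left-tail 95% value: {np.percentile(income, 95):.4f}")

# Probability that income is at or below a given value (e.g., $1)
print(f"P(income <= 1): {np.mean(income <= 1.0):.2%}")
```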
CORRELATIONS AND PRECISION CONTROL
The correlation coefficient is a measure of the strength and direction of the relationship between two variables and can take on any value between –1.0 and +1.0;
that is, the correlation coefficient can be decomposed into its direction or sign (positive or negative relationship between two variables) and the magnitude or strength of the relationship (the
higher the absolute value of the correlation coefficient, the stronger the relationship). The correlation coefficient can be computed in several ways. The first approach is to manually compute the
correlation coefficient r of a pair of variables x and y using:

$$ r_{x,y} = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n \sum x_i^2 - \left(\sum x_i\right)^2}\,\sqrt{n \sum y_i^2 - \left(\sum y_i\right)^2}} $$
The second approach is to use Excel's CORREL function. For instance, if the 10 data points for x and y are listed in cells A1:B10, then the Excel function to use is CORREL(A1:A10, B1:B10). The third
approach is to run Risk Simulator’s Multi-Variable Distributional Fitting Tool, and the resulting correlation matrix will be computed and displayed. It is important to note that correlation does not
imply causation. Two completely unrelated random variables might display some correlation, but this does not imply any causation between the two (e.g., sunspot activity and events in the stock market
are correlated, but there is no causation between the two). There are two general types of correlations: parametric and nonparametric correlations. Pearson’s correlation coefficient is the most
common correlation measure and usually is referred to simply as the correlation coefficient. However, Pearson’s correlation is a parametric measure, which means that it requires both correlated
variables to have an underlying normal distribution and that the relationship between the variables is linear. When these conditions are violated, which is often the case in Monte Carlo simulations,
the nonparametric counterparts become more important. Spearman’s rank correlation and Kendall’s tau are the two nonparametric alternatives. The Spearman correlation is used most commonly and is most
appropriate when applied in the context of Monte Carlo simulation: there is no dependence on normal distributions or linearity, meaning that correlations between variables with different distributions can be applied. To compute the Spearman correlation, first rank all the x and y values and then apply the Pearson correlation computation. Risk Simulator uses the more robust nonparametric Spearman's rank correlation. However, to simplify the simulation process and to be consistent with Excel's correlation function, the required user input is the Pearson's correlation coefficient; Risk Simulator then applies its own algorithms to convert it into Spearman's rank correlation, thereby simplifying the process.
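The relationship between the two measures is easy to verify: ranking the data and then applying the Pearson computation reproduces Spearman's rank correlation. The sketch below demonstrates this identity with made-up data (scipy's pearsonr plays the role of Excel's CORREL here); it illustrates the general statistical relationship, not Risk Simulator's internal conversion algorithm.

```python
# Sketch: Spearman's rank correlation equals Pearson's correlation on the ranks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(size=100)              # deliberately non-normal
y = x**2 + rng.normal(size=100)          # monotone but nonlinear relationship

pearson_r, _ = stats.pearsonr(x, y)
spearman_r, _ = stats.spearmanr(x, y)

# Rank the data first, then apply Pearson's computation to the ranks
rank_r, _ = stats.pearsonr(stats.rankdata(x), stats.rankdata(y))
print(pearson_r, spearman_r, rank_r)     # rank_r matches spearman_r
```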
Applying Correlations in Risk Simulator
Correlations can be applied in Risk Simulator in several ways:
1. When defining assumptions, simply enter the correlations into the correlation grid in the set input assumption dialog shown in Figure I.6.
2. With existing data, run the Multi-Variable Distributional Fitting Tool to perform distributional fitting and to obtain the correlation matrix between pairwise variables. If a simulation profile exists, the fitted assumptions will automatically contain the relevant correlation values.
3. With a direct-input correlation matrix, click on Risk Simulator | Edit Correlations after multiple assumptions have been set, to view and edit the correlation matrix used in the simulation.
Note that the correlation matrix must be positive definite; that is, the correlation must be mathematically valid. For instance, suppose you are trying to correlate three variables: grades of
graduate students in a particular year, the number of beers they
consume a week, and the number of hours they study a week. You would assume that these correlation relationships exist:
Grades and Beer: negative (the more they drink, the lower the grades; no-shows at exams)
Grades and Study: positive (the more they study, the higher the grades)
Beer and Study: negative (the more they drink, the less they study; drunk and partying all the time)
However, if you input a negative correlation between Grades and Study, and the correlation coefficients have high magnitudes, the correlation matrix will be nonpositive definite: it defies logic, correlation requirements, and matrix mathematics. (Smaller coefficients, however, can sometimes still work, even with the bad logic.) When a nonpositive definite or bad correlation
matrix is entered, Risk Simulator automatically informs you of the error and offers to adjust these correlations to something that is semipositive definite while still maintaining the overall
structure of the correlation relationship (the same signs as well as the same relative strengths).
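A quick way to see why such a matrix is invalid is to inspect its eigenvalues: a correlation matrix is valid only if none of them is negative. The sketch below checks the beer-and-grades example with an illogically negative sign on every pair and then applies eigenvalue clipping, a common generic repair technique; the actual adjustment algorithm Risk Simulator uses is not documented here and may differ.

```python
# Sketch: validating a correlation matrix and nudging a bad one toward validity.
# Eigenvalue clipping is a generic repair, not necessarily Risk Simulator's method.
import numpy as np

corr = np.array([[ 1.0, -0.9, -0.9],    # Grades vs. Beer, Grades vs. Study
                 [-0.9,  1.0, -0.9],    # Beer vs. Study
                 [-0.9, -0.9,  1.0]])   # the illogical Grades/Study sign

eigvals = np.linalg.eigvalsh(corr)
print("eigenvalues:", np.round(eigvals, 3))   # a negative eigenvalue => invalid

if eigvals.min() < 0:
    vals, vecs = np.linalg.eigh(corr)
    vals = np.clip(vals, 1e-8, None)          # force a nonnegative spectrum
    fixed = vecs @ np.diag(vals) @ vecs.T
    d = np.sqrt(np.diag(fixed))
    fixed = fixed / np.outer(d, d)            # rescale the diagonal back to 1.0
    print(np.round(fixed, 3))                 # signs preserved, magnitudes shrunk
```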
Effects of Correlations in Monte Carlo Simulation
Although the computations required to correlate variables in a simulation are complex, the resulting effects are fairly clear. Figure I.16 shows a simple correlation model (Correlation Risk Effects Model in the example folder). The calculation for revenue is simply price multiplied by quantity. The same model is replicated for no correlations,
positive correlation (+0.9), and negative correlation (–0.9) between price and quantity. The resulting statistics are shown in Figure I.17. Notice that the standard deviation of the model without
correlations is 0.1450, compared to 0.1886 for the positive correlation model and 0.0717 for the negative correlation model. That is, for simple models with positive relationships (e.g., additions
and multiplications), negative correlations tend to reduce the average spread of the distribution and create a tighter and more concentrated forecast distribution as compared to positive correlations
with larger average spreads. However, the mean remains relatively stable. This implies that correlations do little to change the expected value of projects but can reduce or increase a project’s
risk.
FIGURE I.16 Simple correlation model
FIGURE I.17 Correlation results
FIGURE I.18 Correlations recovered
Recall from financial theory that negatively correlated variables, projects, or assets, when combined in a portfolio, tend to create a diversification effect where the overall risk is reduced. Therefore, we see a smaller standard
deviation for the negatively correlated model. In a positively related model (e.g., A + B = C or A × B = C), a negative correlation reduces the risk (standard deviation and all other second moments
of the distribution) of the result (C) whereas a positive correlation between the inputs (A and B) will increase the overall risk. The opposite is true for a negatively related model (e.g., A – B = C
or A/B = C), where a positive correlation between the inputs will reduce the risk and a negative correlation increases the risk. In more complex models, as is often the case in real-life situations,
the effects will be unknown a priori and can be determined only after a simulation is run. Figure I.18 illustrates the results after running a simulation, extracting the raw data of the assumptions,
and computing the correlations between the variables. The figure shows that the input assumptions are recovered in the simulation; that is, you enter +0.9 and –0.9 correlations and the resulting
simulated values have the same correlations. Clearly there will be minor differences from one simulation run to another, but when enough trials are run, the resulting recovered correlations approach
those that were inputted.
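The diversification effect is simple to reproduce. The sketch below draws correlated normal inputs for price and quantity (with made-up means and spreads, not the example model's actual assumptions) and compares the standard deviation of revenue under no, positive, and negative correlation.

```python
# Sketch of the correlation effect on revenue = price * quantity.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def revenue_std(rho: float) -> float:
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    price = 2.0 + 0.2 * z[:, 0]          # hypothetical correlated inputs
    quantity = 1.0 + 0.1 * z[:, 1]
    return np.std(price * quantity, ddof=1)

for rho in (0.0, 0.9, -0.9):
    print(f"rho={rho:+.1f}  std(revenue)={revenue_std(rho):.4f}")
# Negative correlation yields the tightest revenue distribution, positive
# correlation the widest, while the means stay nearly identical.
```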
TORNADO AND SENSITIVITY TOOLS IN SIMULATION
One of the powerful simulation tools in Risk Simulator is tornado analysis, which captures the static impacts of each variable on the outcome of the model;
that is, the tool automatically perturbs each variable in the model a preset amount, captures the fluctuation on the model’s forecast or final result, and lists the resulting perturbations ranked
from the most significant to the least. Figures I.19 through I.24 illustrate the application of a tornado analysis. For instance, Figure I.19 is a sample discounted cash flow model where the input assumptions in the model are shown.
FIGURE I.19 Sample discounted cash flow model
The question is: What are the critical success drivers that affect the model's output the most? That is, what really drives the net present value of $96.63, or which input variable impacts this value the most? The tornado chart tool can be obtained through Risk Simulator | Tools | Tornado Analysis. To follow along with the first
example, open the Tornado and Sensitivity Charts (Linear) file in the examples folder. Figure I.20 shows this sample model, where cell G6 containing the net present value is chosen as the target
result to be analyzed. The target cell's precedents in the model are used in creating the tornado chart. Precedents are all the input variables that affect the outcome of the model.
FIGURE I.20 Running a tornado analysis
For instance, if the model consists of A = B + C, where C = D + E, then B, D, and E are the precedents for A (C is not a precedent because it is only an intermediate calculated value). Figure I.20 shows the testing range
of each precedent variable used to estimate the target result. If the precedent variables are simple inputs, then the testing range will be a simple perturbation based on the range chosen (e.g., the
default is ±10%). Each precedent variable can be perturbed at different percentages if required. A wider range is important as it is better able to test extreme values rather than smaller
perturbations around the expected values. In certain circumstances, extreme values may have a larger, smaller, or unbalanced impact (e.g., nonlinearities may occur where increasing or decreasing
economies of scale and scope creep in for larger or smaller values of a variable) and only a wider range will capture this nonlinear impact.
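Conceptually, the perturbation loop behind a tornado chart is just a one-at-a-time plus-or-minus 10% sweep. The sketch below runs such a sweep on a toy NPV function with made-up inputs; it mirrors the idea of the tool, not its exact report.

```python
# Sketch of a manual tornado analysis: perturb each precedent one at a time,
# record the output swing, and rank from largest to smallest impact.
base = {"price": 10.0, "quantity": 50.0, "tax_rate": 0.40, "investment": 1800.0}

def npv(p):
    # Toy model: after-tax operating margin over ten periods, less investment
    return p["price"] * p["quantity"] * (1 - p["tax_rate"]) * 10 - p["investment"]

rows = []
for name, value in base.items():
    up, down = dict(base), dict(base)
    up[name] = value * 1.10                  # +10% perturbation
    down[name] = value * 0.90                # -10% perturbation
    hi, lo = npv(up), npv(down)
    rows.append((abs(hi - lo), name, lo, hi))

for swing, name, lo, hi in sorted(rows, reverse=True):   # largest impact first
    print(f"{name:<11} downside={lo:9.2f} upside={hi:9.2f} swing={swing:8.2f}")
```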
PROCEDURE
Use these three steps to create a tornado analysis:
1. Select the single output cell (i.e., a cell with a function or equation) in an Excel model (e.g., cell G6 is selected in our example).
2. Select Risk Simulator | Tools | Tornado Analysis.
3. Review the precedents and rename them as appropriate (renaming the precedents to shorter names allows a more visually pleasing tornado and spider chart) and click OK. Alternatively, click on Use
Cell Address to apply cell locations as the variable names.
Results Interpretation
Figure I.21 shows the resulting tornado analysis report, which indicates that capital investment has the largest impact on net present value (NPV), followed by tax rate, average sale price and quantity demanded of the product lines, and so forth. The report contains four distinct elements:
1. A statistical summary listing the procedure performed.
2. A sensitivity
table (Figure I.22) shows the starting NPV base value of $96.63 and how each input is changed (e.g., Investment is changed from $1,800 to $1,980 on the upside with a +10% swing and from $1,800 to
$1,620 on the downside with a –10% swing). The resulting upside and downside values on NPV are –$83.37 and $276.63, with a total change of $360, making it the variable with the highest impact on NPV.
The precedent variables are ranked from the highest impact to the lowest.
3. The spider chart (Figure I.23) illustrates these effects graphically. The y-axis is the NPV target value while the x-axis depicts the percentage change of each precedent value. The central point is the base case value of $96.63 at 0% change from the base value of each precedent. Positively sloped lines
indicate a positive relationship or effect; negatively sloped lines indicate a negative relationship (e.g., investment is negatively sloped, which means that the higher the investment level, the
lower the NPV). The absolute value of the slope indicates the magnitude of the effect computed as the percentage change in the result given a percentage change in the precedent. A steep line
indicates a higher impact on the NPV y-axis given a change in the precedent x-axis.
4. The tornado chart (Figure I.24) illustrates the results in another graphical manner, where the highest-impacting precedent is listed first. The x-axis is the NPV value, with the center of the chart being the base case condition. Green (lighter) bars in the chart indicate a positive effect; red (darker) bars indicate a negative effect. Therefore, for investments, the red (darker) bar on the right side indicates that higher investment levels lower the NPV; in other words, capital investment and NPV are
negatively correlated. The opposite is true for price and quantity of products A to C (their green or lighter bars are on the right side of the chart).
Notes
Remember that tornado analysis is a static sensitivity analysis applied on each input variable in the model; that is, each variable is perturbed individually, and the resulting effects are
tabulated. This makes tornado analysis a key component to execute before running a simulation. Capturing and identifying the most important impact drivers in the model is one of the very first steps
in risk analysis.
FIGURE I.21 Tornado analysis report
The next step is to identify which of these important impact drivers are uncertain. These uncertain impact drivers are the critical success drivers of a project; the results of the model depend on these critical
success drivers. These variables are the ones that should be simulated. Do not waste time simulating variables that are not uncertain or that have little impact on the results. Tornado charts assist in identifying these critical success drivers quickly and easily. Following this example, price and quantity should probably be simulated, assuming that the required investment and effective tax rate are both known in advance and unchanging.
FIGURE I.22 Sensitivity table
Although the tornado chart is easier to read, the spider chart is important to determine if there are any nonlinearities in the model. For instance, Figure I.25 shows another spider chart where
nonlinearities are fairly evident (the lines on the graph are not straight but curved). The example model used is Tornado and Sensitivity Charts (Nonlinear), which applies the Black-Scholes option
pricing model. Such nonlinearities cannot be ascertained easily from a tornado chart and may be important information in the model, or may provide decision makers with important insight into the model's dynamics.
FIGURE I.23 Spider chart
FIGURE I.24 Tornado chart
For instance, in this Black-Scholes model, the fact that stock price and strike price are nonlinearly related to the option value is important to
know. This characteristic implies that option value will not increase or decrease proportionally to the changes in stock or strike price and that there might be some interactions between these two
prices as well as other variables. As another example, an engineering model depicting nonlinearities might indicate that a particular part or component, when subjected to a high enough force or
tension, will break. Clearly, it is important to understand such nonlinearities.
FIGURE I.25 Nonlinear spider chart
SENSITIVITY ANALYSIS
A related feature is sensitivity analysis. While tornado analysis (tornado charts and spider charts) applies static perturbations before a simulation run, sensitivity analysis
applies dynamic perturbations created after the simulation run. Tornado and spider charts are the results of static perturbations, meaning that each precedent or assumption variable is perturbed a
preset amount one at a time, and the fluctuations in the results are tabulated. In contrast, sensitivity charts are the results of dynamic perturbations in the sense that multiple assumptions are
perturbed simultaneously and their interactions in the model and correlations among variables are captured in the fluctuations of the results. Tornado charts therefore identify which variables drive
the results the most and hence are suitable for simulation; sensitivity charts identify the impact on the results when multiple interacting variables are simulated together in the model. This effect
is clearly illustrated in Figure I.26. Notice that the ranking of critical success drivers is similar to the tornado chart in the previous examples. However, if correlations are added between the
assumptions, Figure I.27 shows a very different picture. Notice, for instance, price erosion had little impact on NPV, but when some of the input assumptions are correlated, the interaction that
exists between these correlated variables makes price erosion have more impact. Note that tornado analysis cannot capture these correlated dynamic relationships. Only after a simulation is run will
such relationships become evident in a sensitivity analysis. A tornado chart’s presimulation critical success factors therefore sometimes will be different from a sensitivity chart’s postsimulation
critical success factors. The postsimulation critical success factors should be the ones that are of interest, as these more readily capture the interactions of the model precedents.
FIGURE I.26 Sensitivity chart without correlations
FIGURE I.27 Sensitivity chart with correlations
PROCEDURE
Use these three steps to create a sensitivity analysis:
1. Open or create a model, define assumptions and forecasts, and run the simulation. The example here uses the Tornado and Sensitivity Charts (Linear) file.
2. Select Risk Simulator | Tools | Sensitivity Analysis.
3. Select the forecast of choice to analyze and click OK (Figure I.28).
FIGURE I.28 Running sensitivity analysis
Note that sensitivity analysis cannot be run unless assumptions and forecasts have been defined and a simulation has been run.
Results Interpretation
The results of the sensitivity analysis comprise a report and two key charts. The first is a nonlinear rank correlation chart (Figure I.29) that ranks the assumption-forecast correlation pairs from highest to lowest. These correlations are nonlinear and nonparametric, making them free of any distributional requirements (i.e., an assumption with a Weibull distribution can be compared to another with a beta distribution). The results from this chart are fairly similar to those of the tornado analysis seen previously (of course without the capital investment value, which
we decided was a known value and hence was not simulated), with one special exception. Tax rate was relegated to a much lower position in the sensitivity analysis chart (Figure I.29) as compared to
the tornado chart (Figure I.24). This is because by itself, tax rate will have a significant impact. Once the other variables are interacting in the model, however, it appears that tax rate has less
of a dominant effect.
FIGURE I.29 Rank correlation chart
This is because tax rate has a smaller distribution, as historical tax rates tend not to fluctuate much, and also because tax rate is a straight percentage of income before taxes, whereas other
precedent variables have a larger effect on NPV. This example proves that it is important to perform sensitivity analysis after a simulation run to ascertain if there are any interactions in the
model and if the effects of certain variables still hold. The second chart (Figure I.30) illustrates the percent variation explained; that is, of the fluctuations in the forecast, how much of the
variation can be explained by each of the assumptions after accounting for all the interactions among variables? Notice that the sum of all variations explained is usually close to 100% (sometimes other elements impact the model but cannot be captured here directly), and if correlations exist, the sum may sometimes exceed 100% (because the interaction effects are cumulative).
FIGURE I.30 Contribution to variance chart
Notes
Tornado analysis is performed before a simulation run while sensitivity analysis is performed after a simulation run. Spider charts in tornado analysis can consider nonlinearities while rank correlation charts in sensitivity analysis can account for nonlinear and distribution-free conditions.
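To make the distinction concrete, the sketch below computes a postsimulation sensitivity measure directly from raw simulated values: the nonparametric rank correlation of each assumption with the forecast, plus a normalized squared-correlation share as a rough "% variation explained". The model and distributions are made up, and the normalization is a simplification of whatever Risk Simulator computes internally.

```python
# Sketch of postsimulation sensitivity via rank correlations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 5_000
price = rng.normal(10, 1, n)
quantity = rng.normal(50, 10, n)
tax = rng.normal(0.40, 0.01, n)            # deliberately tight distribution
forecast = price * quantity * (1 - tax)    # all assumptions vary simultaneously

assumptions = {"price": price, "quantity": quantity, "tax_rate": tax}
rho = {k: stats.spearmanr(v, forecast)[0] for k, v in assumptions.items()}
total = sum(r * r for r in rho.values())   # normalize squared correlations
for k in sorted(rho, key=lambda k: abs(rho[k]), reverse=True):
    print(f"{k:<9} rank corr={rho[k]:+.3f}  % variation={(rho[k]**2 / total):.1%}")
```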
DISTRIBUTIONAL FITTING: SINGLE VARIABLE AND MULTIPLE VARIABLES
Another powerful simulation tool is distributional fitting; that is, which distribution does an analyst or engineer use for a particular input variable in a model? What are the relevant distributional parameters? If no historical data exist, then the analyst must make assumptions about the variables in question. One approach is to use
the Delphi method, where a group of experts is tasked with estimating the behavior of each variable. For instance, a group of mechanical engineers can be tasked with evaluating the extreme
possibilities of the diameter of a spring coil through rigorous experimentation or guesstimates. These values can be used as the variable’s input parameters (e.g., uniform distribution with extreme
values between 0.5 and 1.2). When testing is not possible (e.g., market share and revenue growth rate), management still can make estimates of potential outcomes and provide the best-case,
most-likely case, and worst-case scenarios, whereupon a triangular or custom distribution can be created. However, if reliable historical data are available, distributional fitting can be
accomplished. Assuming that historical patterns hold and that history tends to repeat itself, historical data can be used to find the best-fitting distribution and its relevant parameters to
better define the variables to be simulated. Clearly, adjustments to the forecast value can be made (e.g., structural shifts and adjustments) as required, to reflect future expectations. Figures I.31
through I.33 illustrate a distributional-fitting example. The next discussion uses the Data Fitting file in the examples folder.
PROCEDURE
Use these five steps to perform distributional fitting:
1. Open a spreadsheet with existing data for fitting (e.g., use the Data Fitting example file from the Risk Simulator | Example Models menu).
2. Select the data you wish to fit, not including the variable name. (Data should be in a single column with multiple rows.)
3. Select Risk Simulator | Tools | Distributional Fitting (Single-Variable). Decide if you wish to fit to continuous or discrete distributions.
FIGURE I.31 Single-variable distributional fitting
4. Select the specific distributions you wish to fit to, or keep the default where all distributions are selected, and click OK (Figure I.31).
5. Review the results of the fit, choose the relevant distribution you want, and click OK (Figure I.32).
Results Interpretation
The null hypothesis (Ho) being tested is that the fitted distribution is the same distribution as the population from which the sample data to be fitted came. Thus, if the computed p-value is lower than a critical alpha level (typically 0.10 or 0.05), then the distribution is the wrong one. Conversely, the higher the p-value, the better the distribution fits the data. Roughly, you can think of the p-value as a percentage explained; that is, if the p-value is 1.00 (Figure I.32), then setting a normal distribution with a mean of 100.67 and a standard deviation of 10.40 explains close to 100% of the variation in the data, indicating an especially good fit. The data were generated by a 1,000-trial simulation in Risk Simulator based on a normal distribution with a mean of 100 and a standard deviation of 10. Even though only 1,000 trials were simulated, the resulting distribution is fairly close to the specified distributional parameters, in this case with close to 100% precision.
FIGURE I.32 Distributional fitting result
Both the results (Figure I.32) and the report (Figure I.33) show the test statistic, p-value, theoretical statistics (based on the selected distribution), empirical statistics (based on the raw
data), the original data (to maintain a record of the data used), and the assumption complete with the relevant distributional parameters (i.e., if you selected the option to automatically generate the assumption and if a simulation profile already exists). The results also rank all the selected distributions by how well they fit the data.
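The core of single-variable fitting can be sketched with maximum-likelihood fits and a goodness-of-fit test. The example below fits a few candidate distributions to synthetic data resembling the book's example and ranks them by Kolmogorov-Smirnov p-value. Note that KS p-values are optimistic when the parameters are estimated from the same data, and Risk Simulator's own fitting routine is more elaborate.

```python
# Sketch of single-variable distributional fitting with scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.normal(loc=100, scale=10, size=1000)   # synthetic example data

candidates = {"norm": stats.norm, "lognorm": stats.lognorm, "gamma": stats.gamma}
results = []
for name, dist in candidates.items():
    params = dist.fit(data)                        # maximum-likelihood fit
    d_stat, p_value = stats.kstest(data, name, args=params)
    results.append((p_value, name))

# Rank the fits, best first (p-values are approximate with fitted parameters)
for p_value, name in sorted(results, reverse=True):
    print(f"{name:<8} KS p-value = {p_value:.4f}")
```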
BOOTSTRAP SIMULATION
Bootstrap simulation is a simple technique that estimates the reliability or accuracy of forecast statistics or other sample raw data. Bootstrap simulation can be used to answer many confidence- and precision-based questions in simulation.
FIGURE I.33 Distributional fitting report
For instance, suppose an identical model (with identical assumptions and forecasts but without any random seeds) is run by 100 different people; the results will clearly be slightly different. The question is, if
we collected all the statistics from these 100 people, how will the mean be distributed, or the median, or the skewness, or excess kurtosis? Suppose one person has a mean value of, say, 1.50 while
another has 1.52. Are these two values statistically significantly different from one another, or are they statistically similar and the slight difference is due entirely to random chance? What about
1.53? So, how far is far enough to say that the values are statistically different? In addition, if a model’s resulting skewness is –0.19, is this forecast distribution negatively skewed or is it
statistically close enough to zero to state that this distribution is symmetrical and not skewed? Thus, if we bootstrapped this forecast 100 times (i.e., ran a 1,000-trial simulation 100 times
and collect the 100 skewness coefficients), the skewness distribution would indicate how far zero is away from –0.19. If the 90% confidence on the bootstrapped skewness distribution contains the
value zero, then we can state that on a 90% confidence level, this distribution is symmetrical and not skewed, and the value –0.19 is statistically close enough to zero. Otherwise, if zero falls
outside of this 90% confidence area, then this distribution is negatively skewed. The same analysis can be applied to excess kurtosis and other statistics. Essentially, bootstrap simulation is a
hypothesis-testing tool. Classical methods used in the past relied on mathematical formulas to describe the accuracy of sample statistics. These methods assume that the distribution of a sample
statistic approaches a normal distribution, making the calculation of the statistic's standard error or confidence interval relatively easy.
FIGURE I.34 Nonparametric bootstrap simulation
However, when a statistic's sampling distribution is not normally distributed or not easily found, these classical methods are difficult to use. In contrast, bootstrapping analyzes sample statistics empirically by sampling the data repeatedly and
creating distributions of the different statistics from each sampling. The classical methods of hypothesis testing are available in Risk Simulator and are explained in the next section. Classical
methods provide higher power in their tests but rely on normality assumptions and can be used only to test the mean and variance of a distribution, as compared to bootstrap simulation, which provides
lower power but is nonparametric and distribution-free, and can be used to test any distributional statistic.
PROCEDURE
1. Run a simulation with assumptions and forecasts.
2. Select Risk Simulator | Tools | Nonparametric Bootstrap.
3. Select only one forecast to bootstrap, select the statistic(s) to bootstrap, enter the number of bootstrap trials, and click OK (Figure I.34).
Results Interpretation
Figure I.35 illustrates some sample bootstrap results. The example file used was Hypothesis Testing and Bootstrap Simulation. For instance, the 90% confidence interval for the skewness statistic is between –0.0189 and 0.0952, such that the value 0 falls
within this confidence, indicating that on a 90% confidence, the skewness of this forecast is not statistically significantly different from zero, or that this distribution can be considered as
symmetrical and not skewed. Conversely, if the value 0 falls outside of this confidence, then the opposite is true: The distribution is skewed (positively skewed if the forecast statistic is
positive, and negatively skewed if the forecast statistic is negative).
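The resampling logic itself fits in a few lines. The sketch below bootstraps the skewness of a made-up forecast vector and checks whether zero falls inside the 90% bootstrap confidence interval, mirroring the symmetry test described above.

```python
# Sketch of a nonparametric bootstrap of the skewness statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
forecast = rng.normal(size=1000)          # stand-in for one simulation's forecast

boot_skew = np.array([
    stats.skew(rng.choice(forecast, size=forecast.size, replace=True))
    for _ in range(5000)                  # resample with replacement 5,000 times
])
lo, hi = np.percentile(boot_skew, [5, 95])
print(f"90% bootstrap CI for skewness: [{lo:.4f}, {hi:.4f}]")
print("symmetric at 90% confidence:", lo <= 0.0 <= hi)
```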
Notes
The term bootstrap comes from the saying “to pull oneself up by one's own bootstraps” and is applicable because this method uses the distribution of the statistics themselves to analyze their accuracy.
FIGURE I.35 Bootstrap simulation results (Continued)
FIGURE I.35 (Continued)
Nonparametric simulation is simply randomly picking golf balls from a large basket, with replacement, where each golf ball is based on a historical
data point. Suppose there are 365 golf balls in the basket (representing 365 historical data points). Imagine if you will that the value of each golf ball picked at random is written on a large
whiteboard. The results of the 365 balls picked with replacement are written in the first column of the board with 365 rows of numbers. Relevant statistics (e.g., mean, median, mode, standard
deviation, etc.) are calculated on these 365 rows. The process is then repeated, say, 5,000 times. The whiteboard will now be filled with 365 rows and 5,000 columns. Hence, 5,000 sets of statistics
(i.e., there will be 5,000 means, 5,000 medians, 5,000 modes, 5,000 standard deviations, etc.) are tabulated and their distributions shown.
The relevant statistics of the statistics are then tabulated, and from these results you can ascertain how confident the simulated statistics are. Finally, bootstrap results are important because, according to the Law of Large Numbers and the Central Limit Theorem in statistics, the mean of the sample means is an unbiased estimator that approaches the true population mean as the sample size increases.

HYPOTHESIS TESTING
A hypothesis test is performed when testing the means and variances of two distributions to determine if they are statistically identical or statistically different from one
another (i.e., to see if the differences between the means and variances of two different forecasts that occur are based on random chance or if they are, in fact, statistically significantly
different from one another). This analysis is related to bootstrap simulation with several differences. Classical hypothesis testing uses mathematical models and is based on theoretical
distributions. This means that the precision and power of the test is higher than bootstrap simulation’s empirically based method of simulating a simulation and letting the data tell the story.
However, the classical hypothesis test is applicable only for testing means and variances of two distributions (and by extension, standard deviations) to see if they are statistically identical or
different. In contrast, nonparametric bootstrap simulation can be used to test for any distributional statistics, making it more useful; the drawback is its lower testing power. Risk Simulator
provides both techniques from which to choose.
PROCEDURE
1. Run a simulation.
2. Select Risk Simulator | Tools | Hypothesis Testing.
3. Select the two forecasts to test, select the type of hypothesis test you wish to run, and click OK (Figure I.36).

Results Interpretation
A two-tailed hypothesis test is performed on the null hypothesis (Ho) such that the population means of the two variables are statistically identical to one another. The
alternative hypothesis (Ha) is that the population means are statistically different from one another. If the calculated p-values are less than or equal to 0.01, 0.05, or 0.10 alpha test
levels, it means that the null hypothesis is rejected, which implies that the forecast means are statistically significantly different at the 1%, 5%, and 10% significance levels. If the null
hypothesis is not rejected when the p-values are high, the means of the two forecast distributions are statistically similar to one another. The same analysis is performed on variances of two
forecasts at a time using the pairwise F-test. If the p-values are small, then the variances (and standard deviations) are statistically different from one another; otherwise, for large p-values, the
variances are statistically identical to one another. The example file used was Hypothesis Testing and Bootstrap Simulation.
FIGURE I.36 Hypothesis testing
Notes
The two-variable t-test with unequal variances (the population variance of forecast 1 is expected to be different from the population variance of forecast 2) is appropriate when the forecast
distributions are from different populations (e.g., data collected from two different geographical locations, or two different operating business units, etc.). The two-variable t-test with equal
variances (the population variance of forecast 1 is expected to be equal to the population variance of forecast 2) is appropriate when the forecast distributions are from similar populations (e.g.,
data collected from two different engine designs with similar specifications, etc.). The paired dependent two-variable t-test is appropriate when the forecast distributions are from exactly the same
population and subjects (e.g., data collected from the same group of patients before an experimental drug was used and after the drug was applied, etc.).
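All three t-test variants, and a pairwise F-test on the variances, are standard computations. The sketch below applies them to two made-up forecast samples; the two-tailed F-test p-value is assembled directly from the F distribution, since scipy does not provide it as a single call.

```python
# Sketch of the classical hypothesis tests on two forecasts' raw values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(21)
f1 = rng.normal(1.50, 0.30, 1000)   # hypothetical forecast 1
f2 = rng.normal(1.52, 0.35, 1000)   # hypothetical forecast 2

print(stats.ttest_ind(f1, f2, equal_var=False))  # unequal-variance t-test
print(stats.ttest_ind(f1, f2, equal_var=True))   # equal-variance t-test
print(stats.ttest_rel(f1, f2))                   # paired t-test (same subjects)

# Pairwise F-test on variances: F = s1^2 / s2^2 with (n1-1, n2-1) dof
F = np.var(f1, ddof=1) / np.var(f2, ddof=1)
p = 2 * min(stats.f.cdf(F, f1.size - 1, f2.size - 1),
            stats.f.sf(F, f1.size - 1, f2.size - 1))   # two-tailed p-value
print("F =", F, "p =", p)
```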
DATA EXTRACTION, SAVING SIMULATION RESULTS, AND GENERATING REPORTS
Raw data of a simulation can be extracted very easily using Risk Simulator's Data Extraction routine. Both assumptions and forecasts
can be extracted, but a simulation must be run first. The extracted data can then be used for a variety of other analyses, and the data can be extracted to different formats—for use in spreadsheets,
databases, and other software products.
PROCEDURE
1. Open or create a model, define assumptions and forecasts, and run the simulation.
2. Select Risk Simulator | Tools | Data Extraction.
3. Select the assumptions and/or forecasts you wish to extract the data from and click OK.
The simulated data can be extracted to an Excel worksheet, a flat text file (for easy import into other software applications), or as *.risksim files (which can
be reopened as Risk Simulator forecast charts at a later date). Finally, you can create a simulation report of all the assumptions and forecasts in the model by going to Risk Simulator | Create
Report. A sample report is shown in Figure I.37.
REGRESSION AND FORECASTING DIAGNOSTIC TOOL
This advanced analytical tool in Risk Simulator is used to determine the econometric properties of your data. The diagnostics include checking the data for
heteroskedasticity, nonlinearity, outliers, specification errors, micronumerosity, stationarity and stochastic properties, normality and sphericity of the errors, and multicollinearity. Each test is
described in more detail in their respective reports in the model.
FIGURE I.37 Sample simulation report
FIGURE I.38 Running the data diagnostic tool
Open the example model (Risk Simulator | Examples | Regression Diagnostics) and go to the Time-Series Data worksheet and select the data including the variable names (cells C5:H55). Click on Risk
Simulator | Tools | Diagnostic Tool. Check the data and select the Dependent Variable Y from the drop-down menu. Click OK when finished (Figure I.38).
A common violation in forecasting and regression analysis is heteroskedasticity; that is, the variance of the errors increases over time (see Figure I.39 for test results using the diagnostic tool).
Visually, the width of the vertical data fluctuations increases or fans out over time, and typically, the coefficient of determination (R-squared coefficient) drops significantly when
heteroskedasticity exists. If the variance of the dependent variable is not constant, then the error’s variance will not be constant. Unless the heteroskedasticity of the dependent variable is
pronounced, its effect will not be severe: the least-squares estimates will still be unbiased, and the estimates of the slope and intercept will either be normally distributed if the errors are
normally distributed, or at least normally distributed asymptotically (as the number of data points becomes large) if the errors are not normally distributed. The estimate for the variance of the
slope and overall variance will be inaccurate, but the inaccuracy is not likely to be substantial if the independent-variable values are symmetric about their mean.
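As one concrete way to test for this, the sketch below generates data whose error variance grows with the regressor and applies the Breusch-Pagan test from statsmodels. This is a standard heteroskedasticity test; the diagnostic tool's own test statistic may be computed differently.

```python
# Sketch of a heteroskedasticity check on regression residuals.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(8)
x = np.linspace(1, 10, 200)
y = 2 + 3 * x + rng.normal(scale=0.5 * x)   # error variance fans out with x

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
print(f"Breusch-Pagan LM p-value: {lm_pvalue:.4f}")  # small => heteroskedastic
```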
FIGURE I.39 Results from tests of outliers, heteroskedasticity, micronumerosity, and nonlinearity
If the number of data points is small (micronumerosity), it may be difficult to detect assumption violations. With small sample sizes, assumption violations such as non-normality or
heteroskedasticity of variances are difficult to detect even when they are present. With a small number of data points, linear regression offers less protection against violation of assumptions. With
few data points, it may be hard to determine how well the fitted line matches the data, or whether a nonlinear function would be more appropriate. Even if none of the test assumptions are violated, a
linear regression on a small number of data points may not have sufficient power to detect a significant difference between the slope and zero, even if the slope is nonzero. The power depends on the
residual error, the observed variation in the independent variable, the selected significance alpha level of the test, and the number of data points. Power decreases as the residual variance
increases, decreases as the significance level is decreased (i.e., as the test is made more stringent), increases as the variation in observed independent variable increases, and increases as the
number of data points increases. Values may not be identically distributed because of the presence of outliers. Outliers are anomalous values in the data. Outliers may have a strong influence over
the fitted slope and intercept, giving a poor fit to the bulk of the data points. Outliers tend to increase the estimate of residual variance, lowering the chance of rejecting the null hypothesis
(i.e., creating higher prediction errors). They may be due to recording errors, which may be correctable, or they may be due to the dependent-variable values not all being sampled from the same
population. Apparent outliers may also be due to the dependent-variable values being from the same, but non-normal, population. However, a point may be an unusual value in either an independent or a
dependent variable without necessarily being an outlier in the scatter plot. In regression analysis, the fitted line can be highly sensitive to outliers. In other words, least squares regression is
not resistant to outliers; thus, neither is the fitted-slope estimate. A point vertically removed from the other points can cause the fitted line to pass close to it, instead of following the general
linear trend of the rest of the data, especially if the point is relatively far horizontally from the center of the data. However, great care should be taken when deciding if the outliers should be
removed. Although in most cases when outliers are removed the regression results look better, a priori justification must first exist. For instance, if one is regressing the performance of a
particular firm’s stock returns, outliers caused by downturns in the stock market should be included; these are not truly outliers as they are inevitabilities in the business cycle. Forgoing these
outliers and using the regression equation to forecast one’s retirement fund based on the firm’s stocks will yield incorrect results at best. In contrast, suppose the outliers are caused by a single
nonrecurring business condition (e.g., merger and acquisition) and such business structural changes are not forecast to recur; then these outliers should be removed and the data cleansed prior to
running a regression analysis. The analysis here only identifies outliers and it is up to the user to determine if they should remain or be excluded. Sometimes, a nonlinear relationship between the
dependent and independent variables is more appropriate than a linear relationship. In such cases, running a linear regression will not be optimal. If the linear model is not the correct form, then
the slope and intercept estimates and the fitted values from the linear regression will be biased, and the fitted slope and intercept estimates will not be meaningful. Over a restricted range of
independent or dependent variables, nonlinear models may be
well approximated by linear models (this is in fact the basis of linear interpolation), but for accurate prediction a model appropriate to the data should be selected. A nonlinear transformation
should first be applied to the data before running a regression. One simple approach is to take the natural logarithm of the independent variable (other approaches include taking the square root or
raising the independent variable to the second or third power) and run a regression or forecast using the nonlinearly transformed data. Another typical issue when forecasting time-series data is
whether the independent-variable values are truly independent of each other or are dependent. Dependent variable values collected over a time series may be autocorrelated. For serially correlated
dependent-variable values, the estimates of the slope and intercept will be unbiased, but the estimates of their forecast and variances will not be reliable, and hence the validity of certain
statistical goodness-of-fit tests will be flawed. For instance, interest rates, inflation rates, sales, revenues, and many other time-series data are typically autocorrelated, where the value in the
current period is related to the value in a previous period, and so forth (clearly, the inflation rate in March is related to February’s level, which in turn is related to January’s level, and so
forth). Ignoring such blatant relationships will yield biased and less accurate forecasts. In such events, an autocorrelated regression model or an ARIMA model may be better suited (Risk Simulator |
Forecasting | ARIMA). Finally, the autocorrelation functions of a series that is nonstationary tend to decay slowly (see Nonstationary report in the model). If autocorrelation AC(1) is nonzero, it
means that the series is first-order serially correlated. If AC(k) dies off more or less geometrically with increasing lag, it implies that the series follows a low-order autoregressive process. If
AC(k) drops to zero after a small number of lags, it implies that the series follows a low-order moving-average process. Partial correlation PAC(k) measures the correlation of values that are k
periods apart after removing the correlation from the intervening lags. If the pattern of autocorrelation can be captured by an autoregression of order less than k, then the partial autocorrelation
at lag k will be close to zero. Ljung-Box Q-statistics and their p-values at lag k have the null hypothesis that there is no autocorrelation up to order k. The dotted lines in the plots of the
autocorrelations are the approximate two standard error bounds. If the autocorrelation is within these bounds, it is not significantly different from zero at the 5% significance level.
Autocorrelation measures the relationship to the past of the dependent Y variable to itself. Distributive lags, in contrast, are time-lag relationships between the dependent Y variable and different
independent X variables. For instance, the movement and direction of mortgage rates tend to follow the federal funds rate but at a time lag (typically one to three months). Sometimes, time lags
follow cycles and seasonality (e.g., ice cream sales tend to peak during the summer months and are hence related to the previous summer’s sales, 12 months in the past). The distributive lag analysis
(Figure I.40) shows how the dependent variable is related to each of the independent variables at various time lags, when all lags are considered simultaneously, to determine which time lags are
statistically significant and should be considered. Another requirement in running a regression model is the assumption of normality and sphericity of the error term. If the assumption of normality
is violated or outliers are present, then the linear regression goodness-of-fit test may not be the most powerful or informative test available, and this could mean the difference between either
detecting a linear fit or not.
FIGURE I.40 Autocorrelation and distributive lag results
FIGURE I.41 Test for normality of errors
If the errors are not independent and not normally distributed, this may indicate that the data are autocorrelated or suffer from nonlinearities or other more destructive errors. Independence of the errors can also be detected in the
heteroskedasticity tests (Figure I.41). The normality test on the errors performed is a nonparametric test, which makes no assumptions about the specific shape of the population from which the sample
is drawn, allowing for smaller sample data sets to be analyzed. This test evaluates the null hypothesis of whether the sample errors were drawn from a normally distributed population, versus an
alternate hypothesis that the data sample is not normally distributed. If the calculated D-Statistic is greater than or equal to the D-Critical values at various significance values, then reject the
null hypothesis and accept the alternate hypothesis (the errors are not normally distributed). Otherwise, if the D-Statistic is less than the D-Critical value, do not reject the null hypothesis (the
errors are normally distributed). This test relies on two cumulative frequencies: one derived from the sample data set, and the second from a theoretical distribution based on the mean and standard
deviation of the sample data. Sometimes, certain types of time-series data cannot be modeled using any other methods except for a stochastic process, because the underlying events are stochastic in
nature. For instance, you cannot adequately model and forecast stock prices, interest rates, the price of oil, and other commodity prices using a simple regression model, because these variables are
highly uncertain and volatile, and they do not follow a predefined static rule of behavior; in other words, the process is not stationary. Stationarity is checked here using the runs test, while
another visual clue is found in the autocorrelation report (the ACF tends to decay slowly). A stochastic process is a sequence of events or paths generated by probabilistic laws. That is, random
events can occur over time but are governed by specific statistical and probabilistic rules. The main stochastic processes include random walk or Brownian motion, mean-reversion, and jump-diffusion.
These processes can be used to forecast a multitude of variables that seemingly follow random trends but are restricted by probabilistic laws. The process-generating equation is known in advance, but
the actual results generated are unknown (Figure I.42). The random walk or Brownian motion process can be used to forecast stock prices, prices of commodities, and other stochastic time-series data
given a drift or growth rate and volatility around the drift path.
FIGURE I.42 Stochastic process parameter estimation
The mean-reversion process can be used to reduce the fluctuations of the random walk process by allowing the path to target a long-term value, making it useful for forecasting time-series variables that have a long-term rate such as
interest rates and inflation rates (these are long-term target rates by regulatory authorities or the market). The jump-diffusion process is useful for forecasting time-series data when the variable
can occasionally exhibit random jumps, such as oil prices or the price of electricity (discrete exogenous event shocks can make prices jump up or down). These processes can also be mixed and matched
as required. Multicollinearity exists when there is a linear relationship between the independent variables. When this occurs, the regression equation cannot be estimated at all. In near collinearity
situations, the estimated regression equation will be biased and provide inaccurate results. This situation is especially true when a step-wise regression approach is used, where the statistically
significant independent variables will be thrown out of the regression mix earlier than expected, resulting in a regression equation that is neither efficient nor accurate. One quick test of the
presence of multicollinearity in a multiple regression equation is that the R-squared value is relatively high while the t-statistics are relatively low. Another quick test is to create a correlation
matrix between the independent variables. A high cross-correlation indicates a potential for multicollinearity. The rule of thumb is that a correlation with an absolute value greater than 0.75 is
indicative of severe multicollinearity. Another test for multicollinearity is the use of the Variance Inflation Factor (VIF), obtained by regressing each independent variable to all the other
independent variables, obtaining the R-squared value and calculating the VIF. A VIF exceeding 2.0 can be considered as severe multicollinearity. A VIF exceeding 10.0 indicates destructive
multicollinearity (Figure I.43). The Correlation Matrix lists the Pearson’s product moment correlations (commonly referred to as the Pearson’s R) between variable pairs. The correlation coefficient
ranges between –1.0 and +1.0 inclusive. The sign indicates the direction of association between the variables, while the coefficient indicates the magnitude or strength of association. The Pearson’s
R measures only a linear relationship, and is less effective in measuring nonlinear relationships.
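The VIF recipe described above (regress each independent variable on the others, take the R-squared, and compute 1/(1 − R²)) can be sketched directly. The data below are made up, with one variable built to be nearly collinear with another.

```python
# Sketch of the Variance Inflation Factor (VIF) computation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(17)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)                    # independent of the others
X = np.column_stack([x1, x2, x3])

for j in range(X.shape[1]):
    others = sm.add_constant(np.delete(X, j, axis=1))
    r2 = sm.OLS(X[:, j], others).fit().rsquared
    print(f"x{j + 1}: VIF = {1.0 / (1.0 - r2):.2f}")   # near 10+ is destructive
```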
FIGURE I.43 Multicollinearity errors
To test whether the correlations are significant, a two-tailed hypothesis test is performed and the resulting p-values are listed. P-values less than 0.10, 0.05, and 0.01 are highlighted to indicate
statistical significance. In other words, a p-value for a correlation pair that is less than a given significance value is statistically significantly different from zero, indicating that there is a
significant linear relationship between the two variables. The Pearson’s product moment correlation coefficient (R) between two variables (x and y) is related to the covariance (cov) measure, where \( R_{x,y} = \frac{\mathrm{cov}_{x,y}}{s_x s_y} \). The benefit of dividing the covariance by the product of the two variables’ standard deviations (s) is that the resulting correlation coefficient is bounded between –1.0 and +1.0 inclusive. This makes the correlation a good relative measure for comparisons among different variables (particularly those with different units and magnitudes). The Spearman rank-based nonparametric
correlation is also included in the analysis. The Spearman’s R is related to the Pearson’s R in that the data is first ranked and then correlated. The rank correlations provide a better estimate of
the relationship between two variables when one or both of them is nonlinear. It must be stressed that a significant correlation does not imply causation. Associations between variables in no way
imply that the change of one variable causes another variable to change. When two variables are moving independently of each other but in a related path, they may be correlated but their relationship
might be spurious (e.g., a correlation between sunspots and the stock market might be strong, but one can surmise that there is no causality and that this relationship is purely spurious).
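A minimal sketch of these correlation measures, using scipy on fabricated data with a monotonic but nonlinear relationship (where Spearman's rank correlation should outperform Pearson's R):

# Minimal sketch of the correlation report: Pearson's R, Spearman's rank
# correlation, and their two-tailed p-values, on fabricated sample data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = np.exp(x) + 0.5 * rng.normal(size=50)   # nonlinear, monotonic relationship

pearson_r, pearson_p = stats.pearsonr(x, y)
spearman_r, spearman_p = stats.spearmanr(x, y)
print(f"Pearson  R = {pearson_r:.4f} (p = {pearson_p:.4f})")
print(f"Spearman R = {spearman_r:.4f} (p = {spearman_p:.4f})")
# Spearman's R is typically higher here because the relationship is
# monotonic but nonlinear, which rank correlation captures better.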
STATISTICAL ANALYSIS TOOL Another very powerful tool in Risk Simulator is the statistical analysis tool, which determines the statistical properties of the data. The diagnostics run include checking
the data for various statistical properties, from basic descriptive statistics to testing for and calibrating the stochastic properties of the data.
PROCEDURE
• Open the example model (Risk Simulator | Example Models | Statistical Analysis), go to the Data worksheet, and select the data, including the variable names (cells C5:E55).
• Click on Risk Simulator | Tools | Statistical Analysis (Figure I.44). Check the data type, that is, whether the data selected are from a single variable or from multiple variables arranged in columns. In our example, we assume that the data areas selected are from multiple variables. Click OK when finished.
• Choose the statistical tests you wish to perform. The suggestion (and the default) is to choose all the tests. Click OK when finished (Figure I.45).
• Spend some time going through the reports generated to get a better understanding of the statistical tests performed (sample reports are shown in Figures I.46 to I.49).
FIGURE I.44 Running the statistical analysis tool
FIGURE I.45 Statistical tests
FIGURE I.46 Sample statistical analysis tool report
DISTRIBUTIONAL ANALYSIS TOOL This is a statistical probability tool in Risk Simulator that is rather useful in a variety of settings. It can be used to compute the probability density function (PDF), also called the probability mass function (PMF) for discrete distributions (we will use these terms interchangeably): given some distribution and its parameters, we can determine the probability of occurrence of some outcome x. In addition, the cumulative distribution function (CDF) can be computed, which is the sum of the PDF values up to and including this x value. Finally, the inverse cumulative distribution function (ICDF) is used to compute the value x given the cumulative probability of occurrence. This tool is accessible via Risk Simulator | Tools |
Distributional Analysis. As an example, Figure I.50 shows the computation of a binomial distribution (i.e., a distribution with two outcomes, such as the tossing of a coin, where the outcome is
either heads or tails, with some prescribed probability of heads and tails). Suppose we toss a coin two times and set the outcome heads as a success; we use the binomial distribution with Trials = 2
(tossing the coin twice) and Probability = 0.50 (the probability of success, of getting heads). Selecting the PDF and setting the range of
FIGURE I.47 Sample statistical analysis tool report (hypothesis testing of one variable)
values of x from 0 to 2 with a step size of 1 (this means we are requesting the values 0, 1, and 2 for x), the resulting probabilities are provided in table and graphical form, as well as the theoretical four moments of the distribution. As the outcomes of the coin toss are heads-heads, tails-tails, heads-tails, and tails-heads, the probability of getting exactly no heads is 25%, one head is 50%, and two heads is 25%. Similarly, we can obtain the exact probabilities of tossing the coin, say, 20 times, as seen in Figure I.51. The results are presented in both table and graphical formats. Figure I.52 shows the same binomial distribution, but now the CDF is computed. The CDF is simply the sum of the PDF values up to the point x. For instance, in Figure I.51, we see that the probabilities of 0, 1, and 2 heads are 0.000001, 0.000019, and 0.000181, whose sum is 0.000201, which is the value of the CDF at x = 2 in Figure I.52. Whereas the PDF computes the probability of getting exactly two heads, the CDF computes the probability of getting no more than two heads (i.e., the sum of the probabilities of 0, 1, and 2 heads). Taking the complement (1 – 0.000201 = 0.999799, or 99.9799%) provides
the probability of getting three heads or more. Using this distributional analysis tool, even more advanced distributions can be analyzed, such as the gamma, beta, negative binomial, and many others in Risk Simulator.
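The same binomial computations can be verified outside the tool; here is a minimal Python sketch using scipy (the coin-tossing parameters are those of the examples above):

# Minimal sketch of the distributional analysis computations for the
# binomial examples above, using scipy instead of Risk Simulator.
from scipy import stats

# Two tosses, P(heads) = 0.5: PDF at x = 0, 1, 2
two_tosses = stats.binom(n=2, p=0.5)
print([two_tosses.pmf(k) for k in range(3)])     # [0.25, 0.5, 0.25]

# Twenty tosses: CDF at x = 2 and its complement
twenty = stats.binom(n=20, p=0.5)
print(f"P(X <= 2) = {twenty.cdf(2):.6f}")        # ~0.000201
print(f"P(X >= 3) = {1 - twenty.cdf(2):.6f}")    # ~0.999799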
FIGURE I.48 Sample statistical analysis tool report (normality test)
As a further example of the tool’s use with a continuous distribution and the ICDF functionality, Figure I.53 shows the standard normal distribution (a normal distribution with a mean of zero
and standard deviation of one), where we apply the ICDF to find the value of x that corresponds to the cumulative probability of 97.50% (CDF). That is, a one-tail CDF of 97.50% is equivalent to a
two-tail 95% confidence interval (there is a 2.50% probability in the right tail and 2.50% in the left tail, leaving 95% in the center or confidence interval area, which is equivalent to a 97.50%
area for one tail). The result is the familiar Z-Score of 1.96. Therefore, using this distributional analysis tool, and the standardized scores for other distributions, the exact and cumulative
probabilities of other distributions can all be obtained quickly and easily.
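A minimal sketch of the ICDF computation for the standard normal case above, using scipy:

# Minimal sketch of the ICDF computation: the x value of a standard normal
# distribution at a one-tail cumulative probability of 97.50%.
from scipy import stats

z = stats.norm.ppf(0.975)        # inverse CDF (ICDF), mean = 0, sd = 1
print(f"Z-score: {z:.4f}")       # 1.9600, the familiar two-tail 95% bound
print(f"Check CDF: {stats.norm.cdf(z):.4f}")  # 0.9750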
PORTFOLIO OPTIMIZATION In today’s competitive global economy, companies are faced with many difficult decisions. These decisions include allocating financial resources, building or expanding
facilities, managing inventories, and determining product-mix strategies. Such
FIGURE I.49 Sample statistical analysis tool report (stochastic parameter estimation)
decisions might involve thousands or millions of potential alternatives. Considering and evaluating each of them would be impractical or even impossible. A model can provide valuable assistance in
incorporating relevant variables when analyzing decisions and finding the best solutions for making decisions. Models capture the most important features of a problem and present them in a form that
is easy to interpret. Models often provide insights that intuition alone cannot. An optimization model has three major elements: decision variables, constraints, and an objective. In short, the
optimization methodology finds the best combination or permutation of decision variables (e.g., which products to sell and which projects to execute) in every conceivable way such that the objective
is maximized (e.g., revenues and net income) or minimized (e.g., risk and costs) while still satisfying the constraints (e.g., budget and resources). Obtaining optimal values generally requires that
you search in an iterative or ad hoc fashion. This search involves running one iteration for an initial set of values, analyzing the results, changing one or more values, rerunning the model, and
repeating the process until you find a satisfactory solution. This process can be very tedious and time-consuming even for small models, and often it is not clear how to adjust the values from one
iteration to the next.
FIGURE I.50 Distributional analysis tool (binomial distribution with 2 trials and a 0.5 probability of success)
A more rigorous method systematically enumerates all possible alternatives. This approach guarantees optimal solutions if the model is correctly specified. Suppose that an optimization model depends
on only two decision variables. If each variable has 10 possible values, trying each combination requires 100 iterations (10² alternatives). If each iteration is very short (e.g., 2 seconds), then the entire process could be done in approximately three minutes of computer time. However, with six decision variables instead of two, trying all combinations requires 1,000,000 iterations (10⁶ alternatives). It is easily possible for complete enumeration to take weeks, months, or even years to carry out.
The Traveling Financial Planner A very simple example is in order. Table I.1 illustrates the traveling financial planner problem. Suppose the traveling financial planner has to make three sales trips
to New York, Chicago, and Seattle. Further suppose that the order of arrival at each city is irrelevant. All that is important in this simple example is to find the lowest total cost possible to
cover all three cities. Table I.1 also lists the flight costs from these different cities. The problem here is cost minimization, suitable for optimization. One basic approach to solving this problem
is through an ad hoc or brute force method. That is, manually list all six possible permutations, as seen in Table I.2. Clearly the cheapest itinerary goes from the East Coast to the West Coast: from New York to Chicago and finally on to Seattle.
FIGURE I.51 Distributional analysis tool (binomial distribution with 20 trials)
Here, the problem is simple and can be calculated manually, as there are three cities and hence six possible itineraries. However, add two more cities and the
total number of possible itineraries jumps to 120. Performing an ad hoc calculation will be fairly intimidating and time-consuming. On a larger scale, suppose there are 100 cities on the
salesperson’s list; the possible itineraries will number as many as 9.3 × 10¹⁵⁷. The problem would take many years to calculate manually, which is where optimization software steps in, automating the
search for the optimal itinerary. The example illustrated up to now is a deterministic optimization problem; that is, the airline ticket prices are known ahead of time and are assumed to be constant.
Now suppose the ticket prices are not constant but are uncertain, following some distribution (e.g., a ticket from Chicago to Seattle averages $325, but is never cheaper than $300 and usually doesn’t
exceed $500). The same uncertainty applies to tickets for the other cities. The problem now becomes an optimization under uncertainty. Ad hoc and brute force approaches simply do not work. Software such as Risk Simulator can take over this optimization problem and automate the entire process seamlessly.
FIGURE I.52 Distributional analysis tool (binomial distribution’s CDF with 20 trials)
The next section discusses the terms required in an optimization under uncertainty.
The Lingo of Optimization Before embarking on solving an optimization problem, it is vital to understand the terminology of optimization—the terms used to describe certain attributes of the
optimization process. These words include decision variables, constraints, and objectives. Decision variables are quantities over which you have control—for example, the amount of a product to make,
the number of dollars to allocate among different investments, or which projects to select from among a limited set. As an example, portfolio optimization analysis includes a go or no-go decision on
particular projects.
FIGURE I.53 Distributional analysis tool (normal distribution’s ICDF and Z-score)
In addition, the dollar or percentage budget allocation across multiple projects also can be structured as decision variables. Constraints describe relationships among decision variables that restrict their values. For example, a constraint might ensure that the total amount of money allocated among various investments cannot exceed a specified amount (or that at most one project from a certain group can be selected); other examples include budget constraints, timing restrictions, minimum returns, and risk tolerance levels.
TABLE I.1
Traveling Financial Planner
Seattle to Chicago     $325
Chicago to Seattle     $225
New York to Seattle    $350
Seattle to New York    $375
Chicago to New York    $325
New York to Chicago    $325
TABLE I.2
Multiple Combinations of the Traveling Financial Planner Problem
Seattle–Chicago–New York    $325 + $325 = $650
Seattle–New York–Chicago    $375 + $325 = $700
Chicago–Seattle–New York    $225 + $375 = $600
Chicago–New York–Seattle    $325 + $350 = $675
New York–Seattle–Chicago    $350 + $325 = $675
New York–Chicago–Seattle    $325 + $225 = $550
Three cities means 3! = 3 × 2 × 1 = 6 itinerary permutations. Five cities means 5! = 5 × 4 × 3 × 2 × 1 = 120 permutations. One hundred cities means 100! = 100 × 99 × · · · × 1 ≈ 9.3 × 10¹⁵⁷ permutations.
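For readers who want to verify Table I.2 programmatically, here is a minimal brute-force sketch in Python using the one-way fares from Table I.1:

# Minimal sketch of the brute-force enumeration in Table I.2, using the
# one-way fares from Table I.1.
from itertools import permutations

fares = {
    ("Seattle", "Chicago"): 325, ("Chicago", "Seattle"): 225,
    ("New York", "Seattle"): 350, ("Seattle", "New York"): 375,
    ("Chicago", "New York"): 325, ("New York", "Chicago"): 325,
}

def itinerary_cost(route):
    return sum(fares[(a, b)] for a, b in zip(route, route[1:]))

routes = sorted(permutations(["Seattle", "Chicago", "New York"]),
                key=itinerary_cost)
for route in routes:
    print(" -> ".join(route), f"${itinerary_cost(route)}")
# Cheapest: New York -> Chicago -> Seattle at $550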
Objectives give a mathematical representation
of the model’s desired outcome, such as maximizing profit or minimizing cost, in terms of the decision variables. In financial analysis, for example, the objective may be to maximize returns while
minimizing risks (maximizing the Sharpe ratio or returns-to-risk ratio). The solution to an optimization model provides a set of values for the decision variables that optimizes (maximizes or
minimizes) the associated objective. If the real business conditions were simple and if the future were predictable, all data in an optimization model would be constant, making the model
deterministic. In many cases, however, a deterministic optimization model cannot capture all the relevant intricacies of a practical decision-making environment. When a model’s data are uncertain and
can only be described probabilistically, the objective will have some probability distribution for any chosen set of decision variables. You can find this probability distribution by simulating the
model using Risk Simulator. An optimization model under uncertainty has several additional elements, including assumptions and forecasts. Assumptions capture the uncertainty of model data using
probability distributions, whereas forecasts are the frequency distributions of possible results for the model. Forecast statistics are summary values of a forecast distribution, such as the mean,
standard deviation, and variance. The optimization process is driven by maximizing or minimizing the objective. Each optimization model has one objective, a variable that
mathematically represents the model’s objective in terms of the assumption and decision variables. Optimization’s job is to find the optimal (minimum or maximum) value of the objective by selecting
and improving different values for the decision variables. When model data are uncertain and can only be described using probability distributions, the objective itself will have some probability
distribution for any set of decision variables. Many algorithms exist to run optimization, and many different procedures exist when optimization is coupled with Monte Carlo simulation. In Risk
Simulator, there are three distinct optimization procedures and optimization types as well as different decision variable types. For instance, Risk Simulator can handle continuous decision variables
(e.g., 1.2535, 0.2215, and so forth) as well as integer decision variables (e.g., 1, 2, 3, 4, and so forth), binary decision variables (1 and 0 for go and no-go decisions), and mixed decision variables
(both integers and continuous variables).
On top of that, Risk Simulator can handle linear optimization (i.e., when both the objective and constraints are all linear equations and functions) as well as nonlinear optimizations (i.e., when the
objective and constraints are a mixture of linear as well as nonlinear functions and equations). As far as the optimization process is concerned, Risk Simulator can be used to run a static
optimization; that is, an optimization that is run on a static model, where no simulations are run. In other words, all the inputs in the model are static and unchanging. This optimization type is
applicable when the model is assumed to be known and no uncertainties exist. Also, a static optimization can be first run to determine the optimal portfolio and its corresponding optimal allocation
of decision variables before more advanced optimization procedures are applied. For instance, before running a stochastic optimization problem, a static optimization is first run to determine if
there exist solutions to the optimization problem before a more protracted analysis is performed. Next, dynamic optimization is applied when Monte Carlo simulation is used together with optimization.
Another name for such a procedure is simulation-optimization. That is, a simulation is first run; the results of the simulation are then applied in the Excel model, and then an optimization is applied
to the simulated values. In other words, a simulation is run for N trials, and then an optimization process is run for M iterations until the optimal results are obtained or an infeasible set is
found. That is, using Risk Simulator’s optimization module, you can choose which forecast and assumption statistics to use and replace in the model after the simulation is run. Then, these forecast
statistics can be applied in the optimization process. This approach is useful when you have a large model with many interacting assumptions and forecasts, and when some of the forecast statistics
are required in the optimization. For example, if the standard deviation of an assumption or forecast is required in the optimization model (e.g., computing the Sharpe ratio in asset allocation and
optimization problems where we have mean divided by standard deviation of the portfolio), then this approach should be used. The stochastic optimization process, in contrast, is similar to the
dynamic optimization procedure with the exception that the entire dynamic optimization process is repeated T times. That is, a simulation with N trials is run, and then an optimization is run with M
iterations to obtain the optimal results. Then the process is replicated T times. The results will be a forecast chart of each decision variable with T values. In other words, a simulation is run and
the forecast or assumption statistics are used in the optimization model to find the optimal allocation of decision variables. Then, another simulation is run, generating different forecast
statistics, and these new updated values are then optimized, and so forth. Hence, each of the final decision variables will have its own forecast chart, indicating the range of the optimal decision
variables. For instance, instead of obtaining single-point estimates in the dynamic optimization procedure, you can now obtain a distribution of the decision variables, hence, a range of optimal
values for each decision variable, also known as a stochastic optimization. Finally, an efficient frontier optimization procedure applies the concepts of marginal increments and shadow pricing in
optimization. That is, what would happen to the results of the optimization if one of the constraints were relaxed slightly? Say, for instance, the budget constraint is set at $1 million. What would
happen to the portfolio’s outcome and optimal decisions if the constraint were now $1.5 million or $2 million, and so forth? This is the concept of the Markowitz efficient
frontiers in investment finance, where if the portfolio standard deviation is allowed to increase slightly, what additional returns will the portfolio generate? This process is similar to the dynamic
optimization process with the exception that one of the constraints is allowed to change, and with each change, the simulation and optimization process is run. This process is best applied manually
using Risk Simulator. That is, run a dynamic or stochastic optimization, then rerun another optimization with a constraint, and repeat that procedure several times. This manual process is important,
as by changing the constraint, the analyst can determine if the results are similar or different, and hence, whether it is worthy of any additional analysis, or can determine how far a marginal
increase in the constraint should be to obtain a significant change in the objective and decision variables. One item is worthy of consideration. There exist other software products that supposedly
perform stochastic optimization but in fact they do not. For instance, after a simulation is run, then one iteration of the optimization process is generated, and then another simulation is run, then
the second optimization iteration is generated and so forth; this is simply a waste of time and resources. That is, in optimization, the model is put through a rigorous set of algorithms, where
multiple iterations (ranging from several to thousands of iterations) are required to obtain the optimal results. Hence, generating one iteration at a time is a waste of time and resources. The same
portfolio can be solved using Risk Simulator in under a minute as compared to multiple hours using such a backward approach. Also, such a simulation-optimization approach will typically yield bad
results, and is not a stochastic optimization approach. Be extremely careful of such methodologies when applying optimization to your models. The following are two example optimization problems. One
uses continuous decision variables, while the other uses discrete integer decision variables. In either model, you can apply discrete optimization, dynamic optimization, stochastic optimization, or
even the efficient frontiers with shadow pricing. Any of these approaches can be used for these two examples. Therefore, for simplicity, only the model setup will be illustrated and it is up to the
user to decide which optimization process to run. Also, the continuous model uses the nonlinear optimization approach (this is because the portfolio risk computed is a nonlinear function, and the
objective is a nonlinear function of portfolio returns divided by portfolio risks), while the second example of an integer optimization is an example of a linear optimization model (its objective and
all of its constraints are linear). Therefore, these two examples encapsulate all of the procedures aforementioned. Example: Optimization with Continuous Decision Variables Figure I.54 illustrates
the sample continuous optimization model. The example here uses the Continuous Optimization file accessed through Risk Simulator | Examples. In this example, there are 10 distinct asset classes
(e.g., different types of mutual funds, stocks, or assets) where the idea is to most efficiently and effectively allocate the portfolio holdings such that the best bang for the buck is obtained. That
is, to generate the best portfolio returns possible given the risks inherent in each asset class. To truly understand the concept of optimization, we have to delve more deeply into this sample model to see how the optimization process can best be applied. The model shows the 10 asset classes, and each asset class has its own set of annualized returns and annualized
volatilities. These return and risk measures are
FIGURE I.54 Continuous optimization model
annualized values such that they can be consistently compared across different asset classes. Returns are computed using the geometric average of the relative returns, while the risks are computed
using the logarithmic relative stock returns approach. See the chapters on Volatility Models for details on computing the annualized volatility and annualized returns on a stock or asset class. The
Allocation Weights in column E hold the decision variables, which need to be tweaked and tested such that the total weight is constrained at 100% (cell E17). Typically, to
start the optimization, we will set these cells to a uniform value, where in this case, cells E6 to E15 are set at 10% each. In addition, each decision variable may have specific restrictions in its
allowed range. In this example, the lower and upper allocations allowed are 5% and 35%, as seen in columns F and G. This means that each asset class may have its own allocation boundaries. Next,
column H shows the return to risk ratio, which is simply the return percentage divided by the risk percentage, where the higher this value, the higher the bang for the buck. The remaining model shows
the individual asset class rankings by returns, risk, return to risk ratio, and allocation. In other words, these rankings show at a glance which asset class has the lowest risk, the highest return,
and so forth. The portfolio’s total return in cell C17 is SUMPRODUCT(C6:C15, E6:E15); that is, the sum of the allocation weights multiplied by the annualized returns for each asset class. As an
example, with a portfolio of four assets we have \( R_P = \omega_A R_A + \omega_B R_B + \omega_C R_C + \omega_D R_D \), where \( R_P \) is the return on the portfolio, \( R_{A,B,C,D} \) are the individual returns on the projects, and \( \omega_{A,B,C,D} \) are the respective weights or capital allocations across each project. In addition, the portfolio’s diversified risk in cell D17 is computed as
\[ \sigma_P = \sqrt{\sum_{i=1}^{n} \omega_i^2 \sigma_i^2 + \sum_{i=1}^{n} \sum_{j=1}^{m} 2\,\omega_i \omega_j \rho_{i,j} \sigma_i \sigma_j} \]
Here, \( \rho_{i,j} \) are the respective cross-correlations between the asset classes—hence, if the cross-correlations are negative, there are risk diversification effects, and the portfolio risk decreases. However, to simplify the computations here, we assume
zero correlations among the asset classes through this portfolio risk computation, but assume the correlations when applying simulation on the returns as will be seen later. Therefore, instead of
applying static correlations among these different asset returns, we apply the correlations in the simulation assumptions themselves, creating a more dynamic relationship among the simulated return
values. Finally, the return to risk ratio, or Sharpe ratio, is computed for the portfolio. This value is seen in cell C18 and represents the objective to be maximized in this optimization exercise. To summarize, we have the following specifications in this example model:
Objective: Maximize Return to Risk Ratio (C18)
Decision Variables: Allocation Weights (E6:E15)
Restrictions on Decision Variables: Minimum and Maximum Required (F6:G15)
Constraints: Total Allocation Weights Sum to 100% (E17)
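Outside of Risk Simulator, a static version of this optimization can be sketched as follows. The ten return and risk figures are hypothetical stand-ins for the values in the example file (not the actual model inputs), and cross-correlations are assumed to be zero, as in the portfolio risk computation above.

# Minimal sketch of the static continuous optimization: maximize the
# portfolio Sharpe (return to risk) ratio subject to 5%-35% bounds on each
# weight and weights summing to 100%. Returns/risks are hypothetical
# stand-ins for the ten asset classes; correlations are assumed zero.
import numpy as np
from scipy.optimize import minimize

returns = np.array([0.10, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19])
risks   = np.array([0.12, 0.14, 0.15, 0.16, 0.10, 0.18, 0.20, 0.22, 0.24, 0.30])

def negative_sharpe(w):
    port_return = w @ returns
    port_risk = np.sqrt(np.sum(w**2 * risks**2))  # zero-correlation case
    return -port_return / port_risk

n = len(returns)
result = minimize(
    negative_sharpe,
    x0=np.full(n, 1.0 / n),                          # start at 10% each
    bounds=[(0.05, 0.35)] * n,                       # per-asset limits
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
print(np.round(result.x, 4), f"Sharpe: {-result.fun:.4f}")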
The model has been preset to run the optimization (simply click on Risk Simulator | Optimization | Run Optimization) or alternatively, the following shows how to recreate the optimization model.
• Open the example file and start a new profile by clicking on Risk Simulator | New Profile and giving it a name.
• The first step in optimization is to set the decision variables. Select cell E6 and set the first decision variable (Risk Simulator | Optimization | Set Decision), click on the link icon to select the name cell (B6), and set the lower and upper bound values at cells F6 and G6. Then, using Risk Simulator copy, copy this cell E6 decision variable and paste it to the remaining cells E7 to E15. Make sure to use Risk Simulator copy and paste, rather than Excel’s copy and paste.
• The second step in optimization is to set the constraint. There is only one constraint here: the total allocation in the portfolio must sum to 100%. Click on Risk Simulator | Optimization | Constraints. . . and select ADD to add a new constraint. Then select cell E17 and make it equal (=) to 100%. Click OK when done.
• The final step in optimization is to set the objective function and start the optimization: select the objective cell C18, select Risk Simulator | Optimization | Set Objective, then Risk Simulator | Optimization | Run Optimization, and choose the optimization of choice (Static Optimization, Dynamic Optimization, or Stochastic Optimization). To get started, select Static Optimization. Check that the objective cell is set to C18 and select Maximize. You can now review the decision variables and constraints if required, or click OK to run the static optimization.
• Once the optimization is complete, you may select Revert to return to the original values of the decision variables and the objective, or select Replace to apply the optimized decision variables. Typically, Replace is chosen after the optimization is done.
Figure I.55 shows the screen shots of these procedural steps. You can add simulation assumptions on the model’s returns and risk (columns C and D) and apply the dynamic optimization and stochastic
optimization for additional practice. Results Interpretation The optimization’s final results are shown in Figure I.56, where the optimal allocation of assets for the portfolio is seen in cells
E6:E15. That is, given the restrictions of each asset fluctuating between 5% and 35%, and where the sum of the allocation must equal 100%, the allocation that maximizes the return to risk ratio is
seen in Figure I.56. A few important points have to be noted when reviewing the results and optimization procedures performed thus far:
The correct way to run the optimization is to maximize the bang for the buck, or returns to risk (Sharpe) ratio, as we have done. If instead we maximized the total portfolio returns, the optimal allocation result is trivial and does not require optimization to obtain. That is, simply allocate 5% (the minimum allowed) to the lowest-returning eight assets, 35% (the maximum allowed) to the highest-returning asset, and the remaining 25% to the second-best returning asset. Optimization is not required.
FIGURE I.55 Running Continuous Optimization in Risk Simulator
FIGURE I.56 Continuous optimization results
TABLE I.3
Optimization Results

Objective                         Portfolio Returns    Portfolio Risk    Portfolio Returns to Risk Ratio
Maximize Returns to Risk Ratio    12.69%               4.52%             2.8091
Maximize Returns                  13.97%               6.77%             2.0636
Minimize Risk                     12.38%               4.46%             2.7754
However, when allocating the portfolio this way, the risk is a lot higher compared to maximizing the returns to risk ratio, although the portfolio returns by themselves are higher. In contrast, one can minimize the total portfolio risk, but the returns will now be less.
Table I.3 illustrates the results from the three different objectives being optimized. From the table, the best approach is to maximize the returns to risk ratio; that is, for the same amount of
risk, this allocation provides the highest amount of return. Conversely, for the same amount of return, this allocation provides the lowest amount of risk possible. This approach of bang for the buck
or returns to risk ratio is the cornerstone of the Markowitz efficient frontier in modern portfolio theory. That is, if we constrain the total portfolio risk level and successively increase it, we will obtain several efficient portfolio allocations for different risk characteristics. Thus, different efficient portfolio allocations can be obtained for different individuals with different risk preferences.
OPTIMIZATION WITH DISCRETE INTEGER VARIABLES Sometimes the decision variables are not continuous but discrete integers (e.g., 1, 2, 3) or binary (e.g., 0 and 1). That is, we can use such optimization
as on-off switches or go/no-go decisions. Figure I.57 illustrates a project selection model where there are 12 projects listed. The example here uses the Discrete Optimization file found either on
the start menu at Start | Real Options Valuation | Risk Simulator | Examples or accessed directly through Risk Simulator | Example Models. Each project, like before, has its own returns (ENPV and NPV
for expanded net present value and net present value—the ENPV is simply the NPV plus any strategic real options values), costs of implementation, risks, and so forth. If required, this model can be
modified to include required full-time equivalents (FTEs) and other resources for various functions, and additional constraints can be set on these additional resources. The inputs into this model are
typically linked from other spreadsheet models. For instance, each project will have its own discounted cash flow or returns on investment model. The application here is to maximize the portfolio’s
Sharpe ratio subject to some budget allocation. Many other versions of this model can be created—for instance, maximizing the portfolio returns, minimizing the risks, or adding additional constraints
where the total number of projects chosen cannot exceed 6. All of these items can be run using this existing model.
FIGURE I.57 Discrete integer optimization model
The example model has been preset and is ready to run (Risk Simulator | Change Profile, select the optimization profile, and then click on Risk Simulator | Optimization | Run Optimization), or follow the procedure below to recreate the optimization model from scratch.
• Open the example file and start a new profile by clicking on Risk Simulator | New Profile and giving it a name.
• The first step in optimization is to set up the decision variables. Set the first decision variable by selecting cell J4, select Risk Simulator | Optimization | Set Decision, click on the link icon to select the name cell (B4), and select the Binary variable. Then, using Risk Simulator copy, copy this cell J4 decision variable and paste it to the remaining cells J5 to J15. This is the best method if you have only several decision variables and you can name each decision variable with a unique name for identification later. Make sure to use the Risk Simulator copy and paste, rather than Excel’s copy and paste functions.
• The second step in optimization is to set the constraints. There are two constraints here: the total budget allocation in the portfolio must be less than $5,000, and the total number of projects must not exceed 6. Click on Risk Simulator | Optimization | Constraints. . . and select ADD to add a new constraint. Then select cell D17 and make it less than or equal (≤) to 5000. Repeat by setting cell J17 ≤ 6.
• The final step in optimization is to set the objective function and start the optimization: select cell C19, select Risk Simulator | Optimization | Set Objective, then run the optimization using Risk Simulator | Optimization | Run Optimization, choosing the optimization of choice (Static Optimization, Dynamic Optimization, or Stochastic Optimization). To get started, select Static Optimization. Check that the objective cell is either the Sharpe ratio or the portfolio returns to risk ratio and select Maximize. You can now review the decision variables and constraints if required, or click OK to run the static optimization.
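As a cross-check on the optimizer, a problem of this size (2¹² = 4,096 combinations) can also be brute-forced. The sketch below enumerates all go/no-go combinations under the budget and project-count constraints; the ENPV, cost, and risk figures are fabricated stand-ins for the example model’s inputs, and the portfolio risk is aggregated as the square root of the sum of squared risks (assuming zero correlations), mirroring the continuous example.

# Minimal sketch of the binary project-selection problem: enumerate all
# 2^12 go/no-go combinations, keep those within the budget and
# project-count limits, and pick the one with the best portfolio Sharpe
# ratio. The ENPV, cost, and risk figures are fabricated stand-ins.
from itertools import product
import math

enpv = [458, 1954, 1599, 2251, 849, 758, 2845, 1235, 1945, 2250, 549, 525]
cost = [325, 1123, 1010, 2290, 590, 380, 1500, 450, 1380, 800, 220, 150]
risk = [54, 583, 369, 809, 77, 94, 500, 357, 500, 600, 90, 45]

best, best_sharpe = None, -math.inf
for picks in product([0, 1], repeat=12):
    total_cost = sum(c for c, p in zip(cost, picks) if p)
    if total_cost > 5000 or sum(picks) > 6:
        continue  # violates the budget or project-count constraint
    port_enpv = sum(e for e, p in zip(enpv, picks) if p)
    port_risk = math.sqrt(sum(r**2 for r, p in zip(risk, picks) if p))
    if port_risk > 0 and port_enpv / port_risk > best_sharpe:
        best, best_sharpe = picks, port_enpv / port_risk

print(best, f"Sharpe: {best_sharpe:.4f}")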
Figure I.58 shows the screen shots of these procedural steps. You can add simulation assumptions on the model’s ENPV and risk (columns C and D) and apply the dynamic optimization and stochastic
optimization for additional practice. Results Interpretation Figure I.59 shows a sample optimal selection of projects that maximizes the Sharpe ratio. In contrast, one can always maximize total
revenues, but as before, this is a trivial process and simply involves choosing the highest returning project and going down the list until you run out of money or exceed the budget constraint. Doing
so will yield theoretically undesirable projects as the highest-yielding projects typically hold higher risks. Now, if desired, you can replicate the optimization using a stochastic or dynamic
optimization by adding in assumptions in the ENPV and Risk values. For additional hands-on examples of optimization in action, see the various chapters on optimization models.
FORECASTING Forecasting is the act of predicting the future, whether it is based on historical data or on speculation about the future when no history exists. When historical data exist, a quantitative or statistical approach is best; if no historical data exist, then a qualitative or judgmental approach is usually the only recourse.
Different Types of Forecasting Techniques Generally, forecasting can be divided into quantitative and qualitative techniques. Qualitative forecasting is used when little to no reliable historical,
contemporaneous, or comparable data exists. Several qualitative methods exist, such as the Delphi or expert opinion approach (a consensus-building forecast by field experts, marketing experts, or
internal staff members); management assumptions (target growth rates set by senior management); as well as market research or external data or polling and surveys (data obtained through third-party
sources, industry and sector indexes, or from active market research). These estimates can be either single-point estimates (an average consensus) or a set of forecast values (a distribution of
forecasts). The latter can be entered into Risk Simulator as a custom distribution, and the resulting forecasts can be simulated. That is, a nonparametric simulation can be run using the estimated
data points themselves as the distribution.
FIGURE I.58 Running discrete integer optimization
FIGURE I.59 Optimal selection of projects that maximizes the Sharpe ratio
On the quantitative side of forecasting, the available data or the data that need to be forecasted can be divided into
time-series (values that have a time element to them, such as revenues of different years, inflation rates, interest rates, market share, and so forth, or failure rates); cross-sectional (values that
are time-independent, such as the grade point average of sophomore students across the nation in a particular year, given each student’s levels of SAT scores, IQ, and number of alcoholic beverages
consumed per week); or mixed panel (mixture between time-series and panel data, such as predicting sales over the next 10 years given budgeted marketing expenses and market share projections; this
means that the sales data is time-series, but exogenous variables such as marketing expenses and market share exist to help to model the forecast predictions). The Risk Simulator software provides
the user several forecasting methodologies:
• Auto-ARIMA
• Basic Econometrics
• Box-Jenkins ARIMA
• Custom Distributions
• J-S Curves
• GARCH
• Markov Chains
• Maximum Likelihood
• Multivariate Regression
• Nonlinear Extrapolation
• Spline Curves
• Stochastic Process Forecasting
• Time-Series Analysis
In general, to create forecasts, several quick steps are required:
• Start Excel and enter or open your existing historical data.
• Select the data, click on Risk Simulator, and select Forecasting.
• Select the relevant forecasting application (Auto-ARIMA, Basic Econometrics, Box-Jenkins ARIMA, J-S Curves, GARCH, Markov Chains, Maximum Likelihood, Multivariate Regression, Nonlinear Extrapolation, Spline Curves, Stochastic Process Forecasting, or Time-Series Analysis), and enter the relevant inputs.
The following provides a quick review of each methodology and several quick getting-started examples in using the software. More detailed descriptions and example models of each of these techniques
are found throughout this book. Auto-ARIMA Autoregressive integrated moving average (ARIMA) is an advanced econometric modeling technique. ARIMA looks at historical time-series data and performs back-fitting optimization routines that account for historical autocorrelation (the relationship of one value to another in time) and correct for the nonstationary characteristics of the data, and the resulting predictive model learns over time by correcting its forecasting errors. Advanced knowledge of econometrics is typically required to build good predictive models
using this approach. The Auto-ARIMA module automates some of the traditional ARIMA modeling by automatically testing multiple permutations of model specifications and returns the best-fitting model.
Running the Auto-ARIMA is similar to running regular ARIMA forecasts, the difference being that the P, D, Q inputs are no longer required and different combinations of these inputs are automatically run and
compared. See Chapter 90, “Forecasting—Time-Series ARIMA,” for more technical details on running and interpreting an ARIMA model. This approach can only be used to forecast time-series data and can
include other independent variables in its forecasts. Basic Econometrics Econometrics refers to a branch of business analytics, modeling, and forecasting techniques for modeling the behavior or
forecasting certain business or economic variables. Running the Basic Econometrics models is similar to running regular regression analysis, except that the dependent and independent variables are
allowed to be modified before a regression is run. See Chapter 87, “Forecasting—Multiple Regression,” for details on running regression models. This approach can be used to model the relationship or
forecast time-series, cross-sectional, as well as mixed data sets. Box-Jenkins ARIMA A summary of this methodology is provided earlier, in the Auto-ARIMA section. See Chapter 90,
“Forecasting—Time-Series ARIMA,” for details on running an ARIMA model. This approach can only be used to forecast time-series data and can include other independent variables in its forecasts.
Custom Distributions Using Risk Simulator, expert opinions can be collected and a customized distribution can be generated. This forecasting technique comes in handy when the data set is small or the
goodness-of-fit is bad when applied to a distributional fitting routine. See Chapter 132, “Risk Hedging—Foreign Exchange Cash Flow Model,” for details on creating a custom distribution (in the
chapter, a custom distribution is created to forecast foreign exchange rates). This approach can be used to forecast time-series, cross-sectional, or mixed data sets. J-S Curves The J-curve or
exponential growth curve is where the growth of the next period depends on the current period’s level and the increase is exponential. This means that over time, the values will increase
significantly from one period to another. This model is typically used in forecasting biological growth and chemical reactions over time. The S-curve or logistic growth curve starts off like a
J-curve, with exponential growth rates. Over time, the environment becomes saturated (e.g., market saturation, competition, overcrowding); the growth slows; and the forecast value eventually ends up
at a saturation or maximum level. This model is typically used in forecasting market share or sales growth of a new product from market introduction until maturity and decline. See Chapter 82,
“Forecasting—Exponential J-Growth Curves,” for details on running the J-curve model, and Chapter 85, “Forecasting—Logistic S-Growth Curves,” for running the S-curve model. These approaches are used
to forecast time-series data. GARCH The generalized autoregressive conditional heteroskedasticity (GARCH) model is used to model historical and forecast future volatility levels of a marketable
security (e.g., stock prices, commodity prices, oil prices, and so forth). The data set has to be a time-series of raw price levels. GARCH will first convert the prices into relative returns and then
run an internal optimization to fit the historical data to a mean-reverting volatility term structure, while assuming that the volatility is heteroskedastic in nature (changes over time according to
some econometric characteristics). See Chapter 166, “Volatility—Volatility Computations (Log Returns, Log Assets, Implied Volatility, Management Assumptions, EWMA, GARCH),” for details on the GARCH
model. This approach is used for forecasting the time-series of volatility of a marketable security. A lot of data must be available, and the data points must all be positive. Markov Chains A Markov chain exists when the probability of a future state depends only on the current state; when linked together, these states form a chain that reverts to a long-run steady-state level. This approach is typically used to forecast the market share of two competitors. The required inputs are the probability that a customer
in the first store (the first state) will return to the same store in the next period versus the probability of switching to a competitor’s store in the next period. See Chapter 86,
“Forecasting—Markov Chains and Market Share,” for details. This method is used to forecast a time-series of probabilistic states and the long-run steady-state condition.
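A minimal sketch of such a two-store Markov chain, with hypothetical switching probabilities, iterated to its long-run steady state:

# Minimal sketch of a two-store Markov chain: hypothetical switching
# probabilities, iterated to the long-run steady state.
import numpy as np

# Row i, column j: probability that a customer in store i this period is
# in store j next period (hypothetical illustration values).
P = np.array([[0.90, 0.10],
              [0.20, 0.80]])

state = np.array([1.0, 0.0])      # everyone starts in store A
for _ in range(200):              # iterate until convergence
    state = state @ P
print(np.round(state, 4))         # steady-state shares, ~[0.6667, 0.3333]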
Maximum Likelihood Maximum likelihood estimation (MLE) is used to forecast the probability of something occurring given some independent variables. For instance, MLE is used to predict if a credit
line or debt will default given the obligor’s characteristics (30 years old, single, salary of $100,000 per year, and has a total credit card debt of $10,000); or the probability a patient will have
lung cancer if the person is a male between the ages of 50 and 60 who smokes five packs of cigarettes per month, and so forth. See Chapter 118, “Probability of Default—Empirical Model,” for details on running this MLE model. The data set is typically cross-sectional, and the dependent variable has to be binary (with values of 0 or 1). Multivariate Regression Multivariate regression is used to
model the relationship structure and characteristics of a certain dependent variable as it depends on other independent exogenous variables. Using the modeled relationship, we can forecast the future
values of the dependent variable. The accuracy and goodness of fit for this model can also be determined. Linear and nonlinear models can be fitted in regression analysis. See Chapter 87,
“Forecasting—Multiple Regression,” for details on running regression models. This methodology can be used to model and forecast time-series data, cross-sectional data, or mixed data. Nonlinear
Extrapolation The underlying structure of the data to be forecasted is assumed to be nonlinear over time. For instance, a data set such as 1, 4, 9, 16, 25 is considered to be nonlinear (these data
points are from a squared function). See Chapter 88, “Forecasting—Nonlinear Extrapolation and Forecasting,” for details on nonlinear extrapolation forecasts. This methodology is typically applied to
forecast time-series data. Sometimes, cross-sectional data can be applied if there is a nonlinear relationship between data points arranged from small to large values. Spline Curves Sometimes there
are missing values in a time-series data set. For instance, interest rates for years 1 to 3 may exist, followed by years 5 to 8, and then year 10. Spline curves can be used to interpolate the missing
years’ interest rate values based on the data that exist. Spline curves can also be used to forecast or extrapolate values of future time periods beyond the time period of available data. The data
can be linear or nonlinear.
March 18, 2008
Char Count=
See Chapter 172, “Yield Curve—U.S. Treasury Risk-Free Rates and Cubic Spline Curves,” for details. This methodology is used to back-fit and forecast time-series data only.
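A minimal sketch of this interpolation idea using scipy’s cubic spline (the interest rates are hypothetical illustration values):

# Minimal sketch of spline interpolation of missing interest rates: years
# 1-3, 5-8, and 10 are known (hypothetical rates); years 4 and 9 are
# interpolated from a cubic spline fitted to the known points.
import numpy as np
from scipy.interpolate import CubicSpline

known_years = np.array([1, 2, 3, 5, 6, 7, 8, 10])
known_rates = np.array([4.0, 4.3, 4.5, 4.9, 5.0, 5.1, 5.15, 5.2])  # in %

spline = CubicSpline(known_years, known_rates)
for year in (4, 9):
    print(f"Year {year}: {spline(year):.3f}%")   # interpolated values
print(f"Year 12 (extrapolated): {spline(12):.3f}%")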
Stochastic Process Forecasting Sometimes variables cannot be readily predicted using traditional means; these variables are said to be stochastic. Nonetheless, most financial, economic, and naturally occurring phenomena (e.g., the motion of molecules through the air) follow a known mathematical law or relationship. Although the resulting values are uncertain, the underlying mathematical structure is known and can be simulated using Monte Carlo risk simulation. See Chapter 89, “Forecasting—Stochastic Processes,” for details on stochastic process forecasting, where we
forecast using random walk, Brownian motion, mean-reverting, and jump-diffusion processes. Time-Series Analysis In well-behaved time-series data (typical examples include sales revenues and cost structures of large corporations), the values tend to have up to three elements: a base value, trend, and seasonality. Time-series analysis takes these historical data, decomposes them into these three elements, and recomposes them into future forecasts. In other words, this forecasting method, like some of the others described, first performs a back-fitting (backcast) of historical data before it provides estimates of future values (forecast). See Chapter 91, “Forecasting—Time-Series Analysis,” for details on time-series decomposition models. This methodology is applicable only to
time-series data.
1. Analytics—Central Limit Theorem
File Name: Analytics—Central Limit Theorem
Location: Modeling Toolkit | Analytics | Central Limit Theorem
Brief Description: Illustrating the concept of the Central Limit Theorem and the Law of Large Numbers using Risk Simulator’s set-assumptions functionality, where many distributions, at the limit, are shown to approach normality
Requirements: Modeling Toolkit, Risk Simulator
This example shows how the Central Limit Theorem works by using Risk Simulator, without the application of any mathematical derivations. Specifically, we look at how the normal distribution
sometimes can be used to approximate other distributions and how some distributions can be made to be highly flexible, as in the case of the beta distribution. The Central Limit Theorem contains a
set of weak-convergence results in probability theory. Intuitively, they all express the fact that the sum of many independent and identically distributed random variables tends to be distributed according to a particular attractor distribution. The most important and famous result is the Central Limit Theorem itself, which states that if the sum of the variables has a finite variance, then
it will be approximately normally distributed. As many real processes yield distributions with finite variance, this theorem explains the ubiquity of the normal distribution. Also, the distribution
of an average tends to be normal, even when the distribution from which the average is computed is decidedly not normal.
DISCRETE UNIFORM DISTRIBUTION In this model, we look at various distributions and see that over a large sample size and various parameters, they approach normality. We start off with a highly
unlikely candidate, the discrete uniform distribution. The discrete uniform distribution is also known as the equally likely outcomes distribution, where the distribution has a set of N elements and each element has the same probability (Figure 1.1). This distribution is related to the uniform distribution, but its elements are discrete instead of continuous. The input requirement is that minimum < maximum and both values must be integers. An example would be tossing a single six-sided die: the probability of hitting 1, 2, 3, 4, 5, or 6 is exactly the same, 1/6. So, how can a
distribution like this be converted into a normal distribution? The idea lies in the combination of multiple distributions. Suppose you now take a pair of dice and toss them. You would have 36 possible outcomes.
FIGURE 1.1 Tossing a single die and the discrete uniform distribution (values between 1 and 6)
FIGURE 1.2 Tossing two dice (36 possible outcomes)
That is, the first die can be 1 and the second die can be 1, or perhaps 1–2, or 1–3, and so forth, until 6–6, with 36 outcomes, as in Figure 1.2. Now, summing up the two dice, you get an interesting set of results (Figure 1.3). If you then plot out these sums, you get an approximation of a normal distribution (Figure 1.4). In fact, if you throw 12 dice together, add up their values, and repeat the process many times, you get an extremely close discrete normal distribution.
FIGURE 1.3 Summation of two dice
FIGURE 1.4 Approximation to a normal distribution
If you add 12 continuous uniform distributions, where the results can, say, take on any continuous value between 1 and 6, you obtain a perfectly normal distribution.
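The dice experiment is easy to replicate by simulation; the following minimal sketch shows how the moments of the summed dice approach those of a normal distribution as more dice are added:

# Minimal sketch of the dice experiment: simulate sums of 1, 2, and 12
# dice many times and compare the sample moments with a normal shape.
import numpy as np

rng = np.random.default_rng(7)
for n_dice in (1, 2, 12):
    sums = rng.integers(1, 7, size=(100_000, n_dice)).sum(axis=1)
    skew = ((sums - sums.mean())**3).mean() / sums.std()**3
    print(f"{n_dice:>2} dice: mean = {sums.mean():6.2f}, "
          f"sd = {sums.std():5.2f}, skewness ~ {skew:+.3f}")
# As more dice are summed, skewness stays ~0 and the histogram of the sums
# looks increasingly like the bell curve in Figure 1.4.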
POISSON, BINOMIAL, AND HYPERGEOMETRIC DISTRIBUTIONS Continuing with the examples, we show that for higher values of the distributional parameters (where many trials exist), these three distributions
also tend to normality. For instance, in the Other Discrete worksheet in the model, notice that as the number of trials (N) in a binomial distribution increases, the distribution tends to normal.
Even with a small probability (P) value, as the number of trials N increases, normality again reigns (Figure 1.5). In fact, once N × P exceeds about 30, you can use the normal distribution to approximate the binomial distribution. This is important because at high N values it is very difficult to compute the exact binomial distribution value, and the normal distribution is a lot easier to use. We can test this approximation by using the Distributional Analysis tool (Start | Programs | Real Options Valuation | Risk Simulator | Distribution Analysis). As an example, we test a binomial distribution with N = 5000 and P = 0.50. We then compute the mean of the distribution, \( NP = 5000(0.5) = 2500 \), and the standard deviation of the binomial distribution, \( \sqrt{NP(1-P)} = \sqrt{5000(0.5)(1-0.5)} = 35.3553 \). We then enter these values into the normal distribution and look at the cumulative distribution function (CDF) over some range. Sure enough, the probabilities we obtain are close although
not precisely the same (Figure 1.6). The normal distribution does in fact approximate the binomial distribution when N × P is large (compare the results in Figures 1.6 and 1.7). The examples also examine the hypergeometric and Poisson distributions. A similar phenomenon occurs: when the input parameters are large, these distributions revert to the normal approximation.
FIGURE 1.5 Different faces of a binomial distribution
FIGURE 1.6 Normal approximation of the binomial
In fact, the normal distribution can also be used to approximate the Poisson and hypergeometric distributions. Clearly there will be slight differences in value, as the
normal is a continuous distribution whereas the binomial, Poisson, and hypergeometric are discrete distributions. Therefore, slight variations will obviously exist.
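A minimal sketch of this approximation check, comparing the binomial CDF with its normal approximation at a few points:

# Minimal sketch of the normal approximation check in Figures 1.6 and 1.7:
# binomial(N = 5000, P = 0.5) versus normal(mean = 2500, sd = 35.3553).
from scipy import stats

binom = stats.binom(n=5000, p=0.5)
norm = stats.norm(loc=2500, scale=(5000 * 0.5 * 0.5) ** 0.5)

for x in (2450, 2500, 2550):
    print(f"P(X <= {x}): binomial = {binom.cdf(x):.6f}, "
          f"normal = {norm.cdf(x):.6f}")
# The CDF values are close but not identical, since the binomial is
# discrete and the normal is continuous.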
BETA DISTRIBUTION Finally, the Beta worksheet illustrates an interesting distribution, the beta distribution. Beta is a highly flexible and malleable distribution and can be made to approximate
multiple distributions. If the two input parameters, alpha and beta, are equal, the distribution is symmetrical. If either parameter is 1 while the other parameter is greater than 1, the distribution
is triangular or J-shaped. If alpha is less than beta, the distribution is said to be positively skewed (most of the values are near the minimum value). If alpha is greater than beta, the
distribution is negatively skewed (most of the values are near the maximum value).
FIGURE 1.7 Binomial approximation of the normal
2. Analytics—Central Limit Theorem—Winning Lottery Numbers
File Name: Analytics—Central Limit Theorem (Winning Lottery Numbers)
Location: Modeling Toolkit | Analytics | Central Limit Theorem—Winning Lottery Numbers
Brief Description: Applying distributional fitting to past winning lottery numbers to illustrate the Central Limit Theorem and the Law of Large Numbers
Requirements: Modeling Toolkit, Risk Simulator
This fun model is used to illustrate the behavior of seemingly random events. For the best results, first review the Central Limit Theorem model in Chapter 1 before
going over this example model. As in the Central Limit Theorem model, tossing a single six-sided die will yield a discrete uniform distribution with equal probabilities (1/6) for each side of the
die. In contrast, when a pair of dice is tossed, there are 36 permutations of outcomes, and the sum of each of the 36 outcomes actually follows a discrete normal distribution. When more dice are
tossed, the resulting sums are normally distributed. The same concept applies here. Suppose that in a lottery there are 6 numbers you have to choose. First, you choose 5 numbers ranging from 1 to 47,
without repetition, and then, you choose the sixth special number, from 1 to 27. You need to hit all 6 numbers to win the lottery jackpot. Clearly, assuming that the lottery balls selected at random
are truly random and fair (i.e., the State Lottery Commission actually does a good job), then over many trials and many lottery games, the distribution of each value selected follows a discrete
uniform distribution. That is, the probability of the number 1 being chosen is 1/47, and the probability of the number 2 being chosen is also 1/47, and so forth. However, if all of the 5 balls that
are randomly chosen between 1 and 47 are summed, an interesting phenomenon occurs. The Historical worksheet in the model shows the actual biweekly historical lottery winning numbers for the past 6
years. Summing up the 5 values and performing a distributional fitting routine using Risk Simulator reveals that the distribution is indeed normal. The probability of hitting the jackpot is clearly very small. In fact, it can be computed using a combinatorial equation. The probability of selecting the 5 exact numbers out of 47 is 1 out of 1,533,939 chances. That is, we have:

C(n, x) = n! / [x!(n − x)!] = 47! / [5!(47 − 5)!] = 47! / (5! × 42!) = (47 × 46 × 45 × 44 × 43) / (5 × 4 × 3 × 2 × 1) = 1,533,939

where C represents the number of possible combinations, n is the total number of balls in the population, and x represents the total number of balls chosen at a time.
FIGURE 2.1 Lottery winnings and payoffs
FIGURE 2.2 Lottery numbers and the discrete uniform distribution

The chance of choosing the sixth special number is 1 out of 27; hence, the probability is 1/27. Therefore, the number of combinations for the correct 5 numbers plus the special number is 1,533,939 × 27 = 41,416,353. So, the odds are 1 in 41,416,353, or a 0.000002415% probability of hitting the jackpot. In fact, from the State Lottery
Commission, we see that the published odds are exactly as computed (Figure 2.1). Also as expected, performing a data-fitting routine to the raw numbers (the first 5 values between 1 and 47), we have
a discrete uniform distribution (see Figure 2.2 or the Report 2 worksheet in the model). However, when we perform a data fitting on the sum of the 5 values, we obtain an interesting result. The
distribution is, as expected, normal (see Figure 2.3 or the Report worksheet in the model).
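The published odds are also easy to reproduce in code; this Python sketch (math.comb requires Python 3.8 or later) mirrors the combinatorial computation above:

    # Reproduce the lottery odds from the combinatorial equation.
    import math

    five_number_combos = math.comb(47, 5)      # 1,533,939
    jackpot_combos = five_number_combos * 27   # 41,416,353 including the special ball
    print(jackpot_combos, f"{1 / jackpot_combos:.7%}")  # ~0.0000024%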
FIGURE 2.3 Sum of the lottery numbers is normally distributed
FIGURE 2.4 Simulation of lottery numbers

In fact, running the simulation for 10,000 trials using the fitted normal assumption, we obtain the forecast results seen in Figure 2.4. The theoretical
distribution predicts that 90% of the time, the sum of the first five winning lottery numbers will be between 71 and 167 (rounded), and 50% of the time, they will be between 99 and 138 (rounded), as
seen in Figures 2.4 and 2.5. We then looked at the raw historical winning numbers and computed the percentage
FIGURE 2.5 Confidence interval of lottery results
FIGURE 2.6 Actual empirical statistics of lottery drawings
of winning sequences that fall within this range (actual statistics are shown in Figure 2.6). Sure enough, the empirical statistics of actual winning numbers are very close to the theoretically
predicted values. This means that if you had followed the statistics and picked the first 5 numbers such that their sum falls between 71 and 167, your odds would have improved significantly!
This is of course only an academic example and does not guarantee any results. So, don’t go rushing out to buy any lottery tickets!
3. Analytics—Flaw of Averages
File Name: Analytics—Flaw of Averages
Location: Modeling Toolkit | Analytics | Flaw of Averages
Brief Description: Illustrating the concept of the Flaw of Averages (where using the simple average sometimes yields incorrect answers) through the introduction of harmonic averages, geometric averages, medians, and skew
Requirements: Modeling Toolkit
This model does not require any simulations or sophisticated modeling. It is simply an illustration of various ways to look at the first moment of a distribution (measuring the central tendency and location of a distribution); that is, the mean or average value of a distribution or data points. The model shows how a simple arithmetic average can be wrong in certain cases and how harmonic averages, geometric averages, and medians are sometimes more appropriate.
FLAW OF AVERAGES: GEOMETRIC AVERAGE Suppose you purchased a stock at some time period (call it time zero) for $100. Then, after one period (e.g., a day, a month, a year), the stock price goes up to
$200 (period one), at which point you should sell and cash in the profits, but you do not, and hold it for another period. Further suppose that at the end of period two, the stock price drops back down to
$100, and then you decide to sell. Assuming there are no transaction costs or hidden fees for the sake of simplicity, what is your average return for these two periods?

Period    Stock Price
0         $100
1         $200
2         $100

First, let's compute it the incorrect way, using arithmetic averages:

Absolute Return from Period 0 to Period 1:    100%
Absolute Return from Period 1 to Period 2:    –50%
Average Return for both periods:               25%
That is, the return for the first holding period is (New – Old)/Old or ($200− $100)/$100 = 100%, which makes sense as you started with $100 and it then became $200, or returned 100%. Next, the second
holding period return is ($100 − $200)/$200 = –50%, which also makes sense as you started with $200 and ended up with $100, or lost half the value. So, the arithmetic average of 100% and –50% is
(100% + [–50%])/2 = 25%. Well, clearly you did not make 25% in returns. You started with $100 and ended up with $100. How can you have a 25% average return? So, this simple arithmetic mean approach
is incorrect. The correct methodology is to use geometric average returns, applying something called relative returns:

Period    Stock Price    Relative Return
0         $100           —
1         $200           2.00
2         $100           0.50
An absolute return is simply the relative return less one. For instance, going from $10 to $11 implies an absolute return of ($11 – $10)/$10 = 10%. However, using relative returns, we have $11/$10 =
1.10. If you take 1 off this value, you obtain the absolute returns. Also, 1.1 means a 10% return and 0.9 means a –10% return, and so forth. The preceding table shows the computations of the two
relative returns for the two periods. We then compute the geometric average, where we have:

Geometric Average = [(X1/X0) × (X2/X1) × … × (XN/XN−1)]^(1/N) − 1

That is, we take the Nth root, where N is the total number of periods, of the product of the relative returns, and subtract one. We then obtain a geometric average of 0.00%.
Alternatively, we can use Excel's equation "=POWER(2.00*0.50,1/2)–1" to obtain 0%. Note that the POWER function in Excel takes X to some power Y in "POWER(X,Y)"; taking the square root (N is 2 periods in this case, not including period 0) is the same as taking it to the power of 1/2. This 0% average return for the two periods makes a lot more sense. Be careful when you see large stock or fund returns, as some may actually be computed using arithmetic averages. When the data form a time series with large fluctuations, be careful when computing the series' average; the geometric average might be more appropriate. Note: For simplicity, you can also use Excel's GEOMEAN function on the relative returns and deduct one from it: =GEOMEAN(2,0.5)–1. Take the GEOMEAN of the relative returns, not of the raw stock prices themselves.
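The same computation in Python, mirroring the Excel formulas above (a minimal sketch; math.prod requires Python 3.8 or later):

    # Geometric average return from relative returns.
    import math

    prices = [100, 200, 100]
    relative = [prices[i + 1] / prices[i] for i in range(len(prices) - 1)]  # [2.0, 0.5]

    n = len(relative)  # number of periods, excluding period 0
    geo_avg = math.prod(relative) ** (1 / n) - 1
    print(f"Geometric average return: {geo_avg:.2%}")  # 0.00%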
FLAW OF AVERAGES: HARMONIC AVERAGE Say there are three friends, Larry, Curly, and Moe, who happen to be cycling enthusiasts, apart from being movie stars and close friends. Further, suppose each one
has a different level of physical fitness and they ride their bikes at a constant speed of 10 miles per hour (mph), 20 mph, and 30 mph respectively.

Biker    Constant Miles/Hour
Larry    10
Curly    20
Moe      30
The question is, how long will it take, on average, for all three cyclists to complete a 10-mile course? Well, let’s first solve this problem the incorrect way, in order to understand why it is so
easy to commit the Flaw of Averages. First, computing it the wrong way, we obtain the average speed of all three bikers; that is, (10 + 20 + 30)/3 = 20 mph. So, it would take 10 miles/20 miles per
hour = 0.5 hours to complete the trek on average.

Biker       Constant Miles/Hour
Larry       10
Curly       20
Moe         30
Average:    20 miles/hour
Distance:   10 miles
Time:       0.5 hours to complete the 10-mile trek
Had we done this, we would have committed a serious mistake. The average time is not 0.5 hours using the simple arithmetic average. Let us prove why this is the case. First let’s show the time it
takes for each biker to complete 10 miles. Then we simply take the average of these times.
Biker       Constant Miles/Hour    Time to Complete 10 Miles
Larry       10                     1.00 hours
Curly       20                     0.50 hours
Moe         30                     0.33 hours
Average:                           0.6111 hours
So, the true average is actually 0.6111 hours, or 36.67 minutes, not 30 minutes or 0.5 hours. How do we compute the true average? The answer lies in the computation of the harmonic average, defined as N / SUM(1/Xi), where N is the total number of elements (in this case, 3) and the Xi are the values of the individual elements. That is, we have these computations:

Biker    Constant Miles/Hour
Larry    10
Curly    20
Moe      30

N             3.0000
SUM(1/X)      0.1833
Harmonic      16.3636
Arithmetic    20.0000
Therefore, the harmonic average speed of 16.3636 mph means that a 10-mile trek would take 10/16.3636 = 0.6111 hours (36.67 minutes). Using a simple arithmetic average yields wrong results when you have rates and ratios that depend on time.
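The cyclists' example takes only a few lines to verify in Python (an illustrative sketch):

    # Harmonic average speed for the three cyclists.
    speeds = [10, 20, 30]                      # mph for Larry, Curly, and Moe
    n = len(speeds)
    harmonic = n / sum(1 / s for s in speeds)  # 16.3636 mph
    time_hours = 10 / harmonic                 # 0.6111 hours for the 10-mile trek
    print(f"{harmonic:.4f} mph -> {time_hours:.4f} hours ({time_hours * 60:.2f} minutes)")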
FLAW OF AVERAGES: SKEWED AVERAGE Assume that you are in a room with 10 colleagues and you are tasked with figuring out the average salary of the group. You start to ask around the room to obtain 10
salary data points, and then quantify the group's average:

Person     Salary
1          $75,000
2          $120,000
3          $95,000
4          $69,800
5          $72,000
6          $75,000
7          $108,000
8          $115,000
9          $135,000
10         $100,000
Average    $96,480
This average is, of course, the arithmetic average or the sum of all the individual salaries divided by the number of people present. Suddenly, a senior executive enters the room and participates in
the little exercise. His salary, with all the executive bonuses and perks, came to $20 million last year. What happens to your new computed average?
Person     Salary
1          $75,000
2          $120,000
3          $95,000
4          $69,800
5          $72,000
6          $75,000
7          $108,000
8          $115,000
9          $135,000
10         $100,000
11         $20,000,000
Average    $1,905,891
The average now becomes $1.9 million. This value is clearly not representative of the central tendency and the “true average” of the distribution. Looking at the raw data, to say that the average
salary of the group is $96,480 per person makes more sense than $1.9 million per person. What happened? The issue was that an outlier existed. The $20 million is an outlier in the distribution,
skewing the distribution to the right. When there is such an obvious skew, the median would be a better measure as the median is less susceptible to outliers than the simple arithmetic average.
Median for 10 people:    $97,500
Median for 11 people:    $100,000
Thus, $100,000 is a much better representative of the group’s “true average.” Other approaches exist to find the “true” or “truncated” mean. They include performing a single variable statistical
hypothesis t-test on the sample raw data or simply removing the outliers. However, be careful when dealing with outliers; sometimes outliers are very important data points. For instance, extreme
stock price movements actually may yield significant information. These extreme price movements may not be outliers but, in fact, are part of doing business, as extreme situations exist (i.e., the
distribution is leptokurtic, with a high kurtosis) and should be modeled if the true risk profile is to be constructed. Another approach to spot an outlier is to compare the mean with the median. If
they are very close, the distribution is probably symmetrically distributed. If the mean and median are far apart, the distribution is skewed. And a skewed mean is typically a bad approximation of
the true mean of the distribution. Care should be
taken when you spot a high positive or negative skew. You can use Excel's SKEW function to compute the skew:

Skew for 10 people:    0.28
Skew for 11 people:    3.32
As expected, the skew is high for the 11-person group as there is an outlier and the difference between the mean and median is significant.
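To reproduce these statistics outside of Excel, the following Python sketch computes the mean, median, and Excel-style sample skew for both groups (the skew formula with the n/((n − 1)(n − 2)) correction matches Excel's SKEW function):

    # Mean, median, and skew with and without the executive's outlier salary.
    import statistics

    salaries = [75000, 120000, 95000, 69800, 72000, 75000,
                108000, 115000, 135000, 100000]

    def excel_skew(data):
        n, mean = len(data), statistics.fmean(data)
        s = statistics.stdev(data)  # sample standard deviation
        return (n / ((n - 1) * (n - 2))) * sum(((x - mean) / s) ** 3 for x in data)

    for group in (salaries, salaries + [20_000_000]):
        print(len(group), round(statistics.fmean(group)),
              statistics.median(group), round(excel_skew(group), 2))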
4. Analytics—Mathematical Integration Approximation Model
File Name: Analytics—Mathematical Integration Approximation Model
Location: Modeling Toolkit | Analytics | Mathematical Integration Approximation Model
Brief Description: Applying simulation to estimate the area under a curve without the use of any calculus-based mathematical integration
Requirements: Modeling Toolkit, Risk Simulator
THEORETICAL BACKGROUND There are several ways to compute the area under a curve. The best approach is mathematical integration of a function or equation. For instance, if you have the equation f(x) = x³, then the area under the curve between 1 and 10 is found through the integral:

∫ from 1 to 10 of x³ dx = [x⁴/4] evaluated from 1 to 10 = 2500 − 0.25 = 2499.75
Similarly, any function f (x) can be solved and found this way. However, for complex functions, applying mathematical integration might be somewhat cumbersome. This is where simulation comes in. To
illustrate, how would you solve a seemingly simple problem like the next one?

∫ from A to B of 1 / (x⁴ − sin(1 − x⁴)) dx
Well, dust off those old advanced calculus books and you will find that the closed-form antiderivative of 1/(x⁴ − sin(1 − x⁴)) is a long, unwieldy expression combining logarithmic, sine, and arctangent terms.
The point is, sometimes simple-looking functions get really complicated. Using Monte Carlo simulation, we can approximate the value under the curve. Note that this approach yields approximations
only, not exact values. Let’s see how this approach works . . . The area under a curve can be seen as the shaded area (A.T.R.U.B) in Figure 4.1, between the x-axis values of A and B, and the y = f
(x) curve. Looking closely at the graph, one can actually imagine two boxes. Specifically, if the area of interest is the shaded region or A.T.R.U.B, then we can draw two imaginary boxes, A.T.U.B and
T.Q.R.U. Computing the area of the first box is simple, where we have a simple rectangle. Computing the second box is trickier, as part of the area in the box is below the curve and part of it is
above the curve. In order to obtain the area under the curve that is within the T.Q.R.U box, we run a simulation with a uniform distribution between the values A and B and compute the corresponding
values on the y-axis using the f(x) function, while at the same time we simulate a uniform distribution between f(A) and f(B) on the y-axis. Then we find the proportion of trials in which the simulated value on the y-axis falls at or below the curve, that is, at or below the f(x) value. We multiply this proportion by the area of the box to approximate the area under the curve within the box. Summing this value with the smaller box of A.T.U.B provides the entire area under the curve.
FIGURE 4.1 Graphical representation of a mathematical integration
MODEL BACKGROUND The analysis in the Model worksheet illustrates an approximation of a simple equation; namely, we have this equation to value:

∫ from 1 to 10 of x³ dx

Solving this integration, we obtain the value under the curve of:

∫ from 1 to 10 of x³ dx = [x⁴/4] evaluated from 1 to 10 = 2499.75
Now we attempt to solve this using simulation through the model shown in Figure 4.2.
PROCEDURE
1. We first enter the minimum and maximum values on the x-axis. In this case, they are 1 and 10 (cells C11 and D11 in Figure 4.2). This represents the range on the x-axis we are interested in.
2. Next, compute the corresponding y-axis values (cells C12 and D12). For instance, in this example we have y = f(x) = x³, which means that for x = 1, we have y = 1³ = 1, and for x = 10, we have y = 10³ = 1000.
3. Set two uniform distribution assumptions between the minimum and maximum values, one for x and one for y (cells E11 and E12).
4. Compute the PDF equation; in this example, it is y = f(x) = x³ in cell E13, linking the x value in the equation to the simulated x value.
5. Create a dummy 0,1 variable and set it as "IF(SimulatedY <= ComputedY, 1, 0)", flagging whether the simulated y value falls at or below the curve. A sketch of the full procedure in code follows.
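The following Python sketch implements the same two-box logic end to end. It assumes f is increasing on [A, B], as it is in this example, and uses plain random sampling in place of Risk Simulator assumptions:

    # Approximate the area under f(x) = x^3 between 1 and 10 (exact: 2499.75).
    import random

    def mc_area(f, a, b, trials=1_000_000):
        fa, fb = f(a), f(b)               # y-range of the boxes (f increasing)
        lower_box = (b - a) * fa          # rectangle A.T.U.B, entirely under the curve
        upper_box = (b - a) * (fb - fa)   # rectangle T.Q.R.U, partly under the curve
        hits = 0
        for _ in range(trials):
            x = random.uniform(a, b)      # uniform assumption on the x-axis
            y = random.uniform(fa, fb)    # uniform assumption on the y-axis
            if y <= f(x):                 # the dummy 0,1 variable from step 5
                hits += 1
        return lower_box + upper_box * hits / trials

    print(mc_area(lambda x: x ** 3, 1, 10))  # ~2499.75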
If Alpha is greater than 1, the call option starts (Alpha – 1) out of the money and puts start (Alpha – 1) in the money.
FIGURE 53.1 Forward start options
54. Exotic Options—Futures and Forward Options
File Name: Exotic Options—Futures Options
Location: Modeling Toolkit | Exotic Options | Futures and Forward Options
Brief Description: Applying the same generalities as the Black-Scholes model but the underlying asset is a futures or forward contract, not a stock
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FuturesForwardsCallOption, B2FuturesForwardsPutOption
The Futures option (Figure 54.1) is similar to a regular option, but the underlying asset is a futures or forward contract. Be careful here; the analysis cannot be solved using a Generalized
Black-Scholes-Merton model. In many cases, options are traded on futures. A put is the option to sell a futures contract, and a call is the option to buy a futures contract. For both, the option
strike price is the specified futures price at which the future is traded if the option is exercised. A futures contract is a standardized contract, typically traded on a futures exchange, to buy or
sell a certain underlying instrument at a certain date in the future, at a prespecified price. The future date is called the delivery date or final settlement date. The preset price is called the
futures price. The price of the underlying asset
FIGURE 54.1 Futures options
on the delivery date is called the settlement price. The settlement price normally converges toward the futures price on the delivery date. A futures contract gives the holder the obligation to buy
or sell, which differs from an options contract, which gives the holder the right but not the obligation to buy or sell. In other words, the owner of an options contract may exercise the contract. If
it is an American-style option, it can be exercised on or before the expiration date; a European option can be exercised only at expiration. Thus, a futures contract is more like a European option.
Both parties of a futures contract must fulfill the contract on the settlement date. The seller delivers the commodity to the buyer, or, if it is a cash-settled future, cash is transferred from the
futures trader who sustained a loss to the one who made a profit. To exit the commitment prior to the settlement date, the holder of a futures position has to offset the position either by selling a
long position or by buying back a short position, effectively closing out the futures position and its contract obligations.
55. Exotic Options—Gap Options
File Name: Exotic Options—Gap Options
Location: Modeling Toolkit | Exotic Options | Gap Options
Brief Description: Valuing gap options, where there are two strike prices with respect to one underlying asset and where the first strike acts like a barrier, with the second strike price coming into play when that barrier is breached
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2GapCallOption, B2GapPutOption
Gap options are similar to Barrier options and Two Asset Correlated options in the sense that the call option is knocked in when the underlying asset exceeds the reference Strike Price 1, making the
option payoff the asset price less Strike Price 2 for the underlying. Similarly, the put option is knocked in only if the underlying asset is less than the reference Strike Price 1, providing a
payoff of Strike Price 2 less the underlying asset. Please see Figure 55.1.
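As a numerical sketch (not the Modeling Toolkit implementation itself), the standard textbook closed form for a gap call can be coded in a few lines. Here Strike Price 1 enters only through d1 and d2, Strike Price 2 enters the payoff term, b is the cost of carry (b = r for a non-dividend-paying stock), and all input values are hypothetical:

    # Textbook gap call: pays S - X2 at expiration if S > X1.
    from math import exp, log, sqrt
    from statistics import NormalDist

    def gap_call(S, X1, X2, T, r, b, sigma):
        N = NormalDist().cdf
        d1 = (log(S / X1) + (b + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * exp((b - r) * T) * N(d1) - X2 * exp(-r * T) * N(d2)

    print(gap_call(S=50, X1=50, X2=57, T=0.5, r=0.09, b=0.09, sigma=0.20))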
FIGURE 55.1 Gap options
56. Exotic Options—Graduated Barrier Options
File Name: Exotic Options—Graduated Barriers
Location: Modeling Toolkit | Exotic Options | Graduated Barriers
Brief Description: Modeling Graduated Barrier models, which are similar to barrier options with flexible and graduated payoffs, depending on how far above or below a barrier the asset ends up at maturity
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2GraduatedBarrierDownandInCall, B2GraduatedBarrierDownandOutCall, B2GraduatedBarrierUpandInPut, B2GraduatedBarrierUpandOutPut
Graduated or Soft Barrier options are similar to standard Barrier options except that the barriers are no longer static values but a graduated range between the lower and upper barriers. The option
is knocked in or out of the money proportionally. Both upper and lower barriers should be either above (for up and in or up and out options) or below (for down and in or down and out options) the
starting stock price or asset value. For instance, in the down and in call option, the instruments become knocked in or live at expiration if and only if the asset or stock value breaches the lower
barrier (asset value goes below the barriers). If the option to be valued is a down and in call,
FIGURE 56.1 Graduated barrier options

then both the upper barrier and the lower barrier should be lower than the starting stock price or asset value, providing a collar of graduated prices. For instance, if the upper and lower barriers are $90 and $80, and if the asset price ends up being $89, a down and out option will be knocked out 10% of its value. Standard barrier options are more difficult to delta hedge when the asset values and barriers are close to each other. Graduated barrier options are more appropriate for delta hedges, providing less delta risk and gamma risk. Please see Figure 56.1.
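The proportional knockout in the $90/$80 example reduces to one line of arithmetic (a sketch):

    # Fraction of a down and out option knocked out in a graduated barrier.
    upper, lower, asset = 90.0, 80.0, 89.0
    knocked_out = max(0.0, min(1.0, (upper - asset) / (upper - lower)))
    print(f"Knocked out {knocked_out:.0%} of the option value")  # 10%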
57. Exotic Options—Index Options
File Name: Exotic Options—Index Options
Location: Modeling Toolkit | Exotic Options | Index Options
Brief Description: Understanding index options, which are similar to regular plain vanilla options and can be solved using the Black-Scholes model; the only difference is that the underlying asset is an index rather than a stock
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2StockIndexCallOption, B2StockIndexPutOption, B2GeneralizedBlackScholesCall, B2GeneralizedBlackScholesPut
The Index option (Figure 57.1) is similar to a regular option, but the underlying asset is a reference stock index, such as the Standard & Poor’s 500. The analysis can be solved using a Generalized
Black-Scholes-Merton Model as well.
FIGURE 57.1 Index options
58. Exotic Options—Inverse Gamma Out-of-the-Money Options
File Name: Exotic Options—Inverse Gamma Out-of-the-Money Options
Location: Modeling Toolkit | Exotic Options | Inverse Gamma Out-of-the-Money Options
Brief Description: Analyzing options using an inverse gamma distribution rather than the typical normal-lognormal assumptions; this type of analytical option model is important for extreme in- or out-of-the-money options
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2InverseGammaCallOption, B2InverseGammaPutOption
This model computes the value of European call and put options using an Inverse Gamma distribution, as opposed to the standard normal distribution. This distribution accounts for the peaked distributions of asset returns and provides better estimates for deep out-of-the-money options. The traditional Generalized Black-Scholes-Merton model is also provided as a benchmark. Please see Figure 58.1.
FIGURE 58.1 Inverse gamma option
59. Exotic Options—Jump-Diffusion Options
File Name: Exotic Options—Jump Diffusion
Location: Modeling Toolkit | Exotic Options | Jump Diffusion
Brief Description: Assuming the underlying asset in an option follows a Poisson jump-diffusion process instead of a random-walk Brownian motion, which is applicable for underlying assets such as oil and gas commodities and the price of electricity
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2MertonJumpDiffusionCall, B2MertonJumpDiffusionPut
A jump-diffusion option is similar to a regular option except that instead of assuming that the underlying asset follows a lognormal Brownian motion process, the process here follows a Poisson
jump-diffusion process. That is, stock or asset prices follow jumps, which occur several times per year (observed from history). Cumulatively, these jumps explain a certain percentage of the total
volatility of the asset. Please see Figure 59.1.
FIGURE 59.1 Jump-Diffusion options
60. Exotic Options—Leptokurtic and Skewed Options
File Name: Exotic Options—Leptokurtic and Skewed Options
Location: Modeling Toolkit | Exotic Options | Leptokurtic and Skewed Options
Brief Description: Computing options where the underlying assets are assumed to have returns that are skewed and leptokurtic, that is, returns with fat tails that lean toward one end of the distribution rather than being symmetrical
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2AltDistributionCallOption, B2AltDistributionPutOption
This model computes European call and put options using the binomial lattice approach when the underlying distribution of stock returns is not normally distributed or symmetrical and has additional kurtosis and skew. Be careful when using this model to account for a high or low skew and kurtosis: certain combinations of these two coefficients actually yield unsolvable results. The Black-Scholes results are also included to benchmark the effects of high kurtosis and positively or negatively skewed distributions against the normal distribution assumption on asset returns. Please see Figure 60.1.
FIGURE 60.1 Leptokurtic options
61. Exotic Options—Lookback with Fixed Strike (Partial Time)
File Name: Exotic Options—Lookback with Fixed Strike Partial Time
Location: Modeling Toolkit | Exotic Options | Lookback Fixed Strike Partial Time
Brief Description: Computing the payoff on the option as the difference between the highest or lowest attained asset price and a predetermined strike price
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FixedStrikePartialLookbackCall, B2FixedStrikePartialLookbackPut
In a Fixed Strike Option with Lookback feature (Partial Time), the strike price is predetermined, while at expiration, the payoff on the call option is the maximum observed asset price less the strike price during the time between the start of the lookback period and the maturity of the option (see Figure 61.1). Conversely, the put pays the strike price less the minimum observed asset price during the time between the start of the lookback period and the maturity of the option.
FIGURE 61.1 Lookback options with fixed strike (partial lookback time)
62. Exotic Options—Lookback with Fixed Strike
File Name: Exotic Options—Lookback with Fixed Strike
Location: Modeling Toolkit | Exotic Options | Lookback Fixed Strike
Brief Description: Computing the value of an option where the strike price is fixed and the payoff at expiration is based on the underlying asset's maximum and minimum values during the option's lifetime
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FixedStrikeLookbackCall, B2FixedStrikeLookbackPut
In a Fixed Strike Option with Lookback feature (Figure 62.1), the strike price is predetermined, while at expiration, the payoff on the call option is the maximum observed asset price less the strike price during the lifetime of the option. Conversely, the put pays the strike price less the minimum observed asset price during the lifetime of the option.
FIGURE 62.1 Lookback options with fixed strike
63. Exotic Options—Lookback with Floating Strike (Partial)
File Name: Exotic Options—Lookback with Floating Strike Partial Time
Location: Modeling Toolkit | Exotic Options | Lookback Floating Strike Partial Time
Brief Description: Computing the value of an option where the strike price is floating rather than fixed, and the payoff at expiration uses the underlying asset's minimum or maximum value from the lookback inception time to maturity as the purchase or sale price
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FloatingStrikePartialLookbackCallonMin, B2FloatingStrikePartialLookbackPutonMax
In a Floating Strike Option with Lookback feature (Partial Time), the strike price is floating; at expiration, the payoff on the call option is being able to purchase the underlying asset at the
minimum observed price from inception to the end of the lookback time (see Figure 63.1). Conversely, the put will allow the option holder to sell at the maximum observed asset price from inception to
the end of the lookback time.
FIGURE 63.1 Lookback options with floating strike (partial lookback)
64. Exotic Options—Lookback with Floating Strike
File Name: Exotic Options—Lookback with Floating Strike
Location: Modeling Toolkit | Exotic Options | Lookback Floating Strike
Brief Description: Computing the value of an option where the strike price is floating rather than fixed, and the payoff at expiration uses the underlying asset's minimum or maximum value during the option's lifetime as the purchase or sale price
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2FloatingStrikeLookbackCallonMin, B2FloatingStrikeLookbackPutonMax
In a Floating Strike Option with Lookback feature (Figure 64.1), the strike price is floating; at expiration, the payoff on the call option is being able to purchase the underlying asset at the
minimum observed price during the life of the option. Conversely, the put will allow the option holder to sell at the maximum observed asset price during the life of the option.
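A quick way to internalize these payoffs is to simulate a single price path and read off the lookback payoffs. The Python sketch below uses a geometric Brownian motion path with hypothetical parameters; it illustrates the payoff definitions only, not the closed-form Modeling Toolkit functions:

    # Floating strike lookback payoffs on one simulated GBM path.
    import math
    import random

    S0, r, sigma, T, steps = 100.0, 0.05, 0.30, 1.0, 252
    dt = T / steps

    path = [S0]
    for _ in range(steps):
        z = random.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((r - sigma ** 2 / 2) * dt + sigma * math.sqrt(dt) * z))

    call_payoff = path[-1] - min(path)  # buy at the minimum observed price
    put_payoff = max(path) - path[-1]   # sell at the maximum observed price
    print(round(call_payoff, 2), round(put_payoff, 2))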
FIGURE 64.1 Lookback options with floating strike
65. Exotic Options—Min and Max of Two Assets
File Name: Exotic Options—Min and Max of Two Assets
Location: Modeling Toolkit | Exotic Options | Min and Max of Two Assets
Brief Description: Computing the value of an option on two correlated underlying assets with different volatilities, where the minimum or maximum of the two assets' values at expiration determines the payoff
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2CallOptionOnTheMin, B2CallOptionOnTheMax, B2PutOptionOnTheMin, B2PutOptionOnTheMax
Options on Minimum or Maximum are used when there are two assets with different volatilities. (See Figure 65.1.) Either the maximum or the minimum value at expiration of both assets is used in option
exercise. For instance, a call option on the minimum implies that the payoff at expiration is such that the minimum price between Asset 1 and Asset 2 is used against the strike price of the option.
FIGURE 65.1 Options on the minimum and maximum of two assets
66. Exotic Options—Options on Options
File Name: Exotic Options—Options on Options
Location: Modeling Toolkit | Exotic Options | Options on Options
Brief Description: Computing the value of an option on another option, or a compound option, where the option provides the holder the right to buy or sell a subsequent option at the expiration of the first option
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2CompoundOptionsCallonCall, B2CompoundOptionsCallonPut, B2CompoundOptionsPutonCall, B2CompoundOptionsPutonPut
Options on Options, sometimes known as Compound Options, allow the holder to call or buy versus put or sell an option in the future (Figure 66.1). For instance, a put on call option means that the
holder has the right to sell a call option in some future period for a specified strike price (strike price for the option on option). The time for this right to sell is called the maturity of the
option on option. The maturity
FIGURE 66.1 Compound options on options

of the underlying means the maturity of the option to be bought or sold in the future, starting from now.
67. Exotic Options—Option Collar
File Name: Exotic Options—Option Collar
Location: Modeling Toolkit | Exotic Options | Options Collar
Brief Description: Computing the call-put collar strategy; that is, short a call and long a put at different strike prices such that the hedge is costless and effective
Requirements: Modeling Toolkit
The call and put collar strategy requires that one stock be purchased, one call be sold, and one put be purchased (Figure 67.1). The idea is that the proceeds from the call sold are sufficient to cover the cost of the put bought. Therefore, given a specific set of stock price, option maturity, risk-free rate, volatility, and dividend inputs, you can impute the required strike price of the call to sell if you know which put to purchase (and its strike price), or the required strike price of the put to buy if you know which call to sell (and its strike price).
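The imputation can be sketched as a simple bisection search on the call strike until the Black-Scholes call premium matches the premium of the put purchased. All parameters below are hypothetical, and the dividend yield q is set to zero for simplicity:

    # Find the call strike that makes the collar costless.
    from math import exp, log, sqrt
    from statistics import NormalDist

    N = NormalDist().cdf

    def black_scholes(S, X, T, r, sigma, q=0.0, call=True):
        d1 = (log(S / X) + (r - q + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        if call:
            return S * exp(-q * T) * N(d1) - X * exp(-r * T) * N(d2)
        return X * exp(-r * T) * N(-d2) - S * exp(-q * T) * N(-d1)

    S, T, r, sigma = 100.0, 1.0, 0.05, 0.25
    put_premium = black_scholes(S, 90.0, T, r, sigma, call=False)  # long put struck at 90

    lo, hi = S, 2 * S  # call premium falls as the strike rises, so bisect
    for _ in range(60):
        mid = (lo + hi) / 2
        if black_scholes(S, mid, T, r, sigma, call=True) > put_premium:
            lo = mid
        else:
            hi = mid
    print(f"Sell the call struck near {lo:.2f} to offset the put cost")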
FIGURE 67.1 Creating a call and put collar strategy
68. Exotic Options—Perpetual Options
File Name: Exotic Options—Perpetual Options
Location: Modeling Toolkit | Exotic Options | Perpetual Options
Brief Description: Computing the value of an American option that has a perpetual life where the underlying is a dividend-paying asset
Requirements: Modeling Toolkit
Modeling Toolkit Functions Used: B2PerpetualCallOption, B2PerpetualPutOption
The perpetual call and put options are American options with continuous dividends that can be exercised at any time and have an infinite life. Clearly a perpetual European option (exercisable only at termination) has zero value, hence only American options are viable perpetual options. American closed-form approximations with 100-year maturities are also provided in the model to benchmark the results. Please see Figure 68.1.
FIGURE 68.1 Perpetual American options
69. Exotic Options—Range Accruals (Fairway Options)
File Name: Real Options Models—Range Accruals
Location: Modeling Toolkit | Real Options Models | Range Accruals
Brief Description: Computing the value of Fairway options or Range Accrual options, where the option pays a specified return if the underlying asset stays within a range but pays something else if it moves outside the range at any time during its maturity
Requirements: Modeling Toolkit and Real Options SLS
A Range Accrual option is also called a Fairway option. Here the option pays a certain return if the asset value stays within a certain range (between the upper and lower barriers) but pays a
different amount or return if the asset value falls outside this range during any time before and up to maturity. The name Fairway option sometimes is used, as the option is similar to the game of
golf where if the ball stays within the fairway (a narrow path), it is in play, and if it goes outside, a penalty might be imposed (in this case, a lower return). Such options and instruments can be
solved using the Real Options SLS software as seen in Figure 69.1, using the Custom Option approach, where we enter the terminal equation as: If(Asset >= LowerBarrier & Asset <= UpperBarrier, …, …).

Expanding into this market will require three times the capacity required for the primary high-price/low-volume market. In addition, the company faces significant technical risks as it moves
through lengthy and expensive Food and Drug Administration–mandated clinical trials; the company has the option to abandon further investments at any time should business conditions (such as
additional competitors) become unfavorable. Given these many uncertainties and opportunities, which alternative should the company choose?
FIGURE 92.1 Biotech manufacturing model summary results
Monte Carlo simulation is used to simulate commercial uncertainty as well as the probabilities of successfully launching a drug currently in development and for determining the manufacturing
facility’s capacity. A binomial approach is then used to value the inherent flexibility (strategic real options) for each alternative. For the specific details, refer to the model (Figure 92.1).
93. Industry Applications—Biotech Inlicensing Drug Deal Structuring
File Name: Biotech—Inlicensing Drug Deal Structuring
Location: Modeling Toolkit | Industry Applications | Biotech—Inlicensing Drug Deal Structuring
Brief Description: Illustrating how to identify and negotiate the ideal inlicensing terms for a compound in the biotech industry
Requirements: Modeling Toolkit, Risk Simulator
Special Contribution: This model was contributed by Uriel Kusiatin, a senior vice president of SKS Consulting (www.sksconsulting.us), a turnaround and strategy advisory firm that works with small-cap
companies and their investors in the telecommunications, high-tech, and life sciences industries. Uriel has applied real options, Monte Carlo simulation, and optimization techniques to research and
development portfolio decisions, licensing opportunities, and major capital investments. Uriel regularly presents at industry conferences, and has guest lectured at the Wharton School and MIT’s Sloan
School of Management on the applications of real options analysis in the life sciences industry. He also coteaches a course on real options analysis to students at the Executive Masters in Technology
Management program cosponsored by the School of Engineering and Applied Sciences and the Wharton School of the University of Pennsylvania. Mr. Kusiatin holds an MBA from the Wharton School and a
B.Sc. in industrial engineering from the Engineering Academy of Denmark.
This model is used to identify optimal deal terms for a company preparing to negotiate the inlicensing of a compound that has successfully completed Phase I clinical trials. The company needs to
identify the combination of up-front payments, milestone payments, research and development funding, and royalty fees it should agree to that maximizes net present value (NPV) while reducing risk.
Monte Carlo simulation is used to simulate commercial uncertainty as well as the probabilities of successfully launching the drug currently in development. Stochastic optimization is used to optimize
deal NPV given budgetary constraints and risk tolerance thresholds in an environment of commercial and technical uncertainty. A binomial approach is then used to value the built-in strategic real
options of the contingency-based deal terms. For more details, please refer to the model (Figure 93.1).
FIGURE 93.1 Inlicensing model summary
94. Industry Applications—Biotech Investment Valuation
File Name: Biotech—Biotech Investment Valuation
Location: Modeling Toolkit | Industry Applications | Biotech—Investment Valuation
Brief Description: Illustrating how to evaluate the decision to invest in the development of a drug given commercial uncertainties
Requirements: Modeling Toolkit, Risk Simulator
Special Contribution:
This model was contributed by Uriel Kusiatin, a senior vice president of SKS Consulting (www.sksconsulting.us), a turnaround and strategy advisory firm that works with small-cap companies and their
investors in the telecommunications, high-tech, and life sciences industries. Uriel has applied real options, Monte Carlo simulation, and optimization techniques to research and development portfolio
decisions, licensing opportunities, and major capital investments. Uriel regularly presents at industry conferences, and has guest lectured at the Wharton School and MIT’s Sloan School of Management
on the applications of real options analysis in the life sciences industry. He also coteaches a course on real options analysis to students at the Executive Masters in Technology Management program
cosponsored by the School of Engineering and Applied Sciences and the Wharton School of the University of Pennsylvania. Mr. Kusiatin holds an MBA from the Wharton School and a B.Sc. in industrial
engineering from the Engineering Academy of Denmark.
This model is used to evaluate a decision to invest in the development of a drug given commercial uncertainties and technical risks. Monte Carlo simulation is used to simulate commercial uncertainty
as well as the probabilities of successfully completing clinical trial phases for a drug currently in development. For specific details, please refer to the model (Figure 94.1).
FIGURE 94.1 Staged-gate investment process in drug development
95. Industry Application—Banking: Integrated Risk Management, Probability of Default, Economic Capital, Value at Risk, and Optimal Bank Portfolios
File Names: Multiple files (see chapter for details on example files used)
Location: Various places in the Modeling Toolkit
Brief Description: Illustrating multiple models in computing a bank's economic capital, value at risk, loss given default, and probability of default
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions: B2ProbabilityDefaultMertonImputedAssetValue, B2ProbabilityDefaultMertonImputedAssetVolatility, B2ProbabilityDefaultMertonII, B2ProbabilityDefaultMertonDefaultDistance, B2ProbabilityDefaultMertonRecoveryRate, B2ProbabilityDefaultMertonMVDebt
With the new Basel II Accord, internationally active banks are now allowed to compute their own risk capital requirements using the internal ratings–based (IRB) approach. Not only is adequate risk
capital analysis important as a compliance obligation, it also gives banks the ability to optimize their capital by computing and allocating risks, measuring performance, executing
strategic decisions, increasing competitiveness, and enhancing profitability. This chapter discusses the various approaches required to implement an IRB method and the step-by-step models and
methodologies in implementing and valuing economic capital, value at risk, probability of default, and loss given default, the key ingredients required in an IRB approach, through the use of advanced
analytics such as Monte Carlo and historical risk simulation, portfolio optimization, stochastic forecasting, and options analysis. It shows the use of Risk Simulator and the Modeling Toolkit
software in computing and calibrating these critical input parameters. Instead of dwelling on theory or revamping what has already been written many times, this chapter focuses solely on the
practical modeling applications of the key ingredients to the Basel II Accord. Specifically, these topics are addressed:
- Probability of Default (structural and empirical models for commercial versus retail banking)
- Loss Given Default and Expected Losses
- Economic Capital and Portfolio Value at Risk (structural and risk-based simulation)
- Portfolio Optimization
- Hurdle Rates and Required Rates of Return
Please note that several other white papers exist and are available by request (send an e-mail request to [email protected]). They discuss such topics as:
- White Paper: portfolio optimization, project selection, and optimal investment allocation
- White Paper: credit analysis
- White Paper: interest rate risk, foreign exchange risk, volatility estimation, and risk hedging
- White Paper: exotic options and credit derivatives
To follow along the analyses in this chapter, we assume that the reader already has Risk Simulator, Real Options SLS, and Modeling Toolkit (Basel II Toolkit) installed and is somewhat familiar with
the basic functions of each software. If not, please refer to www.realoptionsvaluation.com (click on Download) and watch the Getting Started videos, read some of the Getting Started case studies, or
install the latest trial versions of these software programs. Alternatively, refer to Part I of this book to obtain a primer on using these software programs. Each topic discussed starts with some
basic introduction to the methodologies that are appropriate, followed by some practical hands-on modeling approaches and examples.
PROBABILITY OF DEFAULT Probability of default measures the degree of likelihood that the borrower of a loan or debt (the obligor) will be unable to make the necessary scheduled repayments on the
debt, thereby defaulting on the debt. Should the obligor be unable to pay, the debt is in default, and the lenders of the debt have legal avenues to attempt a recovery of the debt or at least partial
repayment of the entire debt. The higher the default probability a lender estimates a borrower to have, the higher the interest rate the lender will charge the borrower as compensation for bearing
the higher default risk. Probability of default models are categorized as structural or empirical. Structural models look at a borrower’s ability to pay based on market data, such as equity prices,
market and book values of asset and liabilities, as well as the volatility of these variables. Hence these structural models are used predominantly to estimate the probability of default of companies
and countries, most applicable within the areas of commercial and industrial banking. In contrast, empirical models or credit scoring models are used to quantitatively determine the probability that
a loan or loan holder will default, where the loan holder is an individual, by looking at historical portfolios of loans held and assessing individual characteristics (e.g., age, educational level,
debt to income ratio, etc.). This second approach is more applicable to the retail banking sector.
Structural Models of Probability of Default Probability of default models are models that assess the likelihood of default by an obligor. They differ from regular credit scoring models in several
ways. First
of all, credit scoring models usually are applied to smaller credits—individuals or small businesses—whereas default models are applied to larger credits—corporations or countries. Credit scoring
models are largely statistical, regressing instances of default against various risk indicators, such as an obligor’s income, home renter or owner status, years at a job, educational level, debt to
income ratio, and so forth (discussed later in this chapter). Structural default models, in contrast, directly model the default process and typically are calibrated to market variables, such as the
obligor’s stock price, asset value, debt book value, or the credit spread on its bonds. Default models find many applications within financial institutions. They are used to support credit analysis
and to determine the probability that a firm will default, to value counterparty credit risk limits, or to apply financial engineering techniques in developing credit derivatives or other credit
instruments. The first model illustrated in this chapter is used to solve the probability of default of a publicly traded company with equity and debt holdings and accounting for its volatilities in
the market (Figure 95.1). This model is currently used by KMV and Moody’s to perform credit risk analysis. This approach assumes that the book value of asset and asset volatility are unknown and
solved in the model; that the company is relatively stable; and that the growth rate of the company’s assets is stable over time (e.g., not in start-up mode). The model uses several simultaneous
equations in options valuation coupled with optimization to obtain the implied underlying asset’s market value and volatility of the asset in order to compute the probability of default and distance
to default for the firm.
Illustrative Example: Structural Probability of Default Models on Public Firms It is assumed that at this point, the reader is well versed in running simulations and optimizations in Risk Simulator.
The example model used is the Probability of Default—External Options model and can be accessed through Modeling Toolkit | Prob of Default | External Options Model (Public Company). To run this model
(Figure 95.1), enter in the required inputs:
- Market value of equity (obtained from market data on the firm's capitalization, i.e., stock price times number of shares outstanding)
- Market equity volatility (computed in the Volatility or LPVA worksheets in the model)
- Book value of debt and liabilities (the firm's book value of all debt and liabilities)
- Risk-free rate (the prevailing country's risk-free interest rate for the same maturity)
- Anticipated growth rate of the company (the expected annualized cumulative growth rate of the firm's assets, which can be estimated using historical data over a long period, making this approach more applicable to mature companies than to start-ups)
- Debt maturity (the debt maturity to be analyzed, or enter 1 for the annual default probability)
The comparable option parameters are shown in cells G18 to G23. All these comparable inputs are computed except for Asset Value (the market value of asset) and the Volatility of Asset. You will need
to input some rough estimates as a starting point so that the analysis can be run. The rule of thumb is to set the volatility of the
FIGURE 95.1 Default probability model setup
asset in G22 to be one-fifth to half of the volatility of equity computed in G10 and the market value of asset (G19) to be approximately the sum of the market value of equity and book value of
liabilities and debt (G9 and G11). Then an optimization needs to be run in Risk Simulator in order to obtain the desired outputs. To do this, set Asset Value and Volatility of Asset as the decision
variables (make them continuous variables with a lower limit of 1% for volatility and $1 for asset, as both these inputs can only take on positive values). Set cell G29 as the objective to minimize
as this is the absolute error value. Finally, the constraint is such that cell H33, the implied volatility in the default model, is set to exactly equal the numerical value of the equity volatility
in cell G10. Run a static optimization using Risk Simulator. If the model has a solution, the absolute error value in cell G29 will revert to zero (Figure 95.2). From here, the probability of default
(measured in percent) and the distance to default (measured in standard deviations) are computed in cells G39 and G41. Then the relevant credit spread required can be determined using the Credit
Analysis—Credit Premium model or some other credit spread tables (such as using the Internal Credit Risk Rating model). The results indicate that the company has a probability of default at 0.56%
with 2.54 standard deviations to default, indicating good creditworthiness (Figure 95.2). A simpler approach is to use the Modeling Toolkit functions instead of manually running the optimization.
These functions have intelligent optimization routines embedded in them. For instance, the B2ProbabilityDefaultMertonImputedAssetValue and B2ProbabilityDefaultMertonImputedAssetVolatility
functions perform multiple internal optimization routines of simultaneous stochastic equations to obtain their respective results, which are then used as an input into the
B2ProbabilityDefaultMertonII function to compute the probability of default. See the model for more specific details.
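Once the asset value and asset volatility have been imputed, the distance to default and default probability follow from the standard Merton relationships. The Python sketch below uses illustrative inputs; it is not a reimplementation of the B2 functions, which also perform the imputation step internally:

    # Merton-style distance to default and probability of default.
    from math import log, sqrt
    from statistics import NormalDist

    def merton_pd(asset_value, debt, growth, asset_vol, T=1.0):
        dd = (log(asset_value / debt) + (growth - asset_vol ** 2 / 2) * T) / (asset_vol * sqrt(T))
        return dd, NormalDist().cdf(-dd)  # distance to default, probability of default

    dd, pd = merton_pd(asset_value=12.0, debt=10.0, growth=0.05, asset_vol=0.10)
    print(f"Distance to default: {dd:.2f} sigmas, PD: {pd:.2%}")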
Illustrative Example: Structural Probability of Default Models on Private Firms Several other structural models exist for computing the probability of default of a firm. Specific models are used
depending on the need and availability of data. In the previous example, the firm is a publicly traded firm, with stock prices and equity volatility that can be readily obtained from the market. In
the next example, we assume that the firm is privately held, meaning that there would be no market equity data available. This example essentially computes the probability of default or the point of
default for the company when its liabilities exceed its assets, given the asset’s growth rates and volatility over time (Figure 95.3). Before using this model, first review the model on external
publicly traded company. Similar methodological parallels exist between these two models, and this example builds on the knowledge and expertise of the previous example. In Figure 95.3, the example
firm with an asset value of $12M and a debt book value of $10M with significant growth rates of its internal assets and low volatility returns a 0.67% probability of default. Instead of relying on
the valuation of the firm, external market benchmarks can be used, if such data are available. In Figure 95.4, we see that additional input assumptions are required, such as the market fluctuation
(market returns and volatility) and relationship (correlation between
FIGURE 95.2 Default probability of a publicly traded entity
FIGURE 95.3 Default probability of a privately held entity

the market benchmark and the company's assets). The model used is the Probability of Default—Merton Market Options model accessible from
Modeling Toolkit | Prob of Default | Merton Market Options Model (Industry Comparable).
Empirical Models of Probability of Default As mentioned, empirical models of probability of default are used to compute an individual’s default probability, applicable within the retail banking
arena, where empirical or actual historical or comparable data exist on past credit defaults. The data set in Figure 95.5 represents a sample of several thousand previous loans, credit, or debt
issues. The data show whether each loan had defaulted or not (0 for no default, and 1 for default) as well as the specifics of each loan applicant’s age, education level (1 to 3 indicating high
school, university, or graduate professional education), years with current employer, and so forth. The idea is to model these empirical data to see which variables affect the default behavior of
individuals, using Risk Simulator’s Maximum Likelihood Model. The resulting model will help the
FIGURE 95.4 Default probability of a privately held entity calibrated to market fluctuations
FIGURE 95.5 Empirical analysis of probability of default

bank or credit issuer compute the expected probability of default of an individual credit holder having specific characteristics.
Illustrative Example: Applying Empirical Models of Probability of Default

The example file is Probability of Default—Empirical and can be accessed through Modeling Toolkit | Prob of Default | Empirical (Individuals). To run the analysis, select the data (include the headers) and make sure that the data have the same length for all variables, without any missing or invalid points. Then, using Risk Simulator, click on Risk Simulator | Forecasting | Maximum Likelihood Models. A sample set of results is provided in the MLE worksheet, complete with detailed instructions on how to compute the expected probability of default of an individual. The Maximum Likelihood Estimates (MLE) approach on a binary multivariate logistic analysis is used to model dependent variables and determine the expected probability of success of belonging to a certain group. For instance, given a set of independent variables (e.g., age, income, and education level of credit card or mortgage loan holders), we can model the probability of default using MLE. A typical regression model is invalid because the errors are heteroskedastic and nonnormal, and the resulting probability estimates will sometimes be above 1 or below 0. MLE analysis handles these problems using an iterative optimization routine. The computed results show the coefficients of the estimated MLE intercept and slopes.
FIGURE 95.6 MLE results

The estimated coefficients are logarithmic odds ratios and cannot be interpreted directly as probabilities; a quick transformation is first required. To estimate the probability of success of belonging to a certain group (e.g., predicting whether a debt holder will default given the amount of debt held), simply compute the estimated Y value using the MLE coefficients. Figure 95.6 illustrates that an individual with 8 years at a current employer and current address, a low 3% debt-to-income ratio, and $2,000 in credit card debt has a log odds ratio of −3.1549. The probability is obtained by applying the inverse logit transformation:

P = exp(estimated Y) / (1 + exp(estimated Y)) = exp(−3.1549) / (1 + exp(−3.1549)) = 0.0409

So, such a person has a 4.09% chance of defaulting on the new debt. Using this probability of default, you can then use the Credit Analysis—Credit Premium model to determine the additional credit spread to charge this person, given this default level and the customized cash flows anticipated from this debt holder.
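To make the mechanics concrete, here is a minimal sketch in Python (not the Modeling Toolkit's implementation) of the same two steps: fitting a binary logistic model by maximum likelihood and converting a log odds ratio to a probability of default with the inverse logit transform. The data and coefficients are synthetic placeholders.

# Minimal sketch of a binary logistic MLE fit and the inverse logit transform;
# synthetic placeholder data, not the workbook's actual loan data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
years_employed = rng.integers(0, 20, n)          # years with current employer
debt_income = rng.uniform(1.0, 30.0, n)          # debt-to-income ratio, in percent

# Simulate defaults from an assumed "true" logistic relationship.
true_logit = -1.0 - 0.15 * years_employed + 0.12 * debt_income
defaulted = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([years_employed, debt_income]))
fit = sm.Logit(defaulted, X).fit(disp=0)         # MLE of intercept and slopes
print(fit.params)                                 # coefficients are log odds ratios

# Converting a log odds ratio (estimated Y) to a probability of default:
y_hat = -3.1549                                   # the chapter's example individual
prob_default = np.exp(y_hat) / (1.0 + np.exp(y_hat))
print(round(prob_default, 4))                     # 0.0409, i.e., a 4.09% chance

The fitted coefficients play the role of the MLE intercept and slopes in Figure 95.6; scoring a new applicant means computing the linear predictor from those coefficients and applying the same inverse logit transform.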
LOSS GIVEN DEFAULT

As shown previously, probability of default is a key parameter for computing the credit risk of a portfolio. In fact, the Basel II Accord requires that the probability of default, as well as other key parameters such as loss given default (LGD) and exposure at default (EAD), be reported. The reason is that a bank's expected loss is equivalent to:

Expected Losses = (Probability of Default) × (Loss Given Default) × (Exposure at Default)

or simply:

EL = PD × LGD × EAD
PD and LGD are both percentages, whereas EAD is a value. Having shown how to compute PD, we now turn to estimating LGD. There are several methods. The first is a simple empirical approach where we set LGD = 1 − Recovery Rate. That is, whatever is not recovered at default is the loss at default, computed as the charge-off (net of recovery) divided by the outstanding balance:

LGD = 1 − Recovery Rate

or

LGD = Charge-offs (Net of Recovery) / Outstanding Balance at Default

Therefore, if market data or historical information are available, LGD can be segmented by various market conditions, types of obligor, and other pertinent segmentations, and can then be readily read off a chart. A second approach to estimating LGD is more attractive: if the bank has the information available, it can run econometric models to create the best-fitting model under an ordinary least squares (OLS) approach. With this approach, a single model can be determined and calibrated, and the same model can be applied under various conditions, with no data mining required. However, in most econometric models, a normal transformation has to be performed first. Suppose the bank has some historical LGD data (Figure 95.7); the best-fitting distribution can be found using Risk Simulator (select the historical data, then click on Risk Simulator | Tools | Distributional Fitting (Single Variable) to perform the fitting routine). The result is a beta distribution for the thousands of LGD values. Then, using the Distribution Analysis tool in Risk Simulator, obtain the theoretical mean and standard deviation of the fitted distribution (Figure 95.8), and transform the LGD variable using the B2NormalTransform function in the Modeling Toolkit software. For instance, the value 49.69% will be transformed and normalized to 28.54% (a sketch of this kind of transformation appears after the list below). Using this newly transformed data set, you can run nonlinear econometric models to determine LGD. For instance, a partial list of independent variables that might be significant for a bank, in terms of determining and forecasting the LGD value, might include:
• Debt to capital ratio
• Profit margin
• Revenue
• Current assets to current liabilities
• Risk rating at default and one year before default
• Industry
• Authorized balance at default
• Collateral value
• Facility type
• Tightness of covenant
• Seniority of debt
• Operating income to sales ratio (and other efficiency ratios)
• Total assets, total net worth, total liabilities
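The internals of B2NormalTransform are not documented here, so the Python sketch below shows one common normal-score transform as an assumption of what such a step can look like: fit a beta distribution to the LGD history, map each value through the fitted CDF, and then through the inverse normal CDF rescaled to the fitted mean and standard deviation. The data are synthetic, and the actual toolkit function may use a different formula.

# One possible normal transformation for LGD data (an assumption; the actual
# B2NormalTransform internals may differ): beta CDF -> normal quantile.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lgd = rng.beta(2.0, 5.0, size=5000)           # stand-in for historical LGD values

# Distributional-fitting step: best-fitting beta on [0, 1].
a, b, loc, scale = stats.beta.fit(lgd, floc=0, fscale=1)
mean, var = stats.beta.stats(a, b, moments="mv")
std = float(np.sqrt(var))

# Normal-score transform, rescaled to the fitted distribution's mean and std.
u = np.clip(stats.beta.cdf(lgd, a, b), 1e-9, 1 - 1e-9)
lgd_normal = stats.norm.ppf(u, loc=float(mean), scale=std)
print(f"fitted beta mean {float(mean):.4f}, std {std:.4f}")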
FIGURE 95.7 Fitting historical LGD data
ECONOMIC CAPITAL AND VALUE AT RISK

Economic capital is critical to a bank because it links a bank's earnings and returns to the risks that are specific to a business line or business opportunity. In addition, these economic capital measurements can be aggregated into a portfolio of holdings. Value at Risk (VaR) is used to understand how the entire organization is affected by the various risks of each holding, as aggregated into a portfolio, after accounting for the cross-correlations among holdings. VaR measures the maximum possible loss at some predefined probability level (e.g., 99.90%) over some holding period or time horizon (e.g., 10 days). Senior management at the bank usually selects the probability or confidence interval, which reflects the board's risk appetite; stated another way, the probability level can be defined as the bank's desired probability of surviving per year. The holding period usually is chosen to coincide with the time it takes to liquidate a loss position. VaR can be computed several ways. Two main families of approaches exist: structural closed-form models and Monte Carlo risk simulation. We will showcase both methods, starting with the structural models.
FIGURE 95.8 Distributional Analysis tool
The second and much more powerful approach is Monte Carlo risk simulation. Instead of simply correlating individual business lines or assets, entire probability distributions can be correlated using mathematical copulas and simulation algorithms in Risk Simulator. In addition, tens to hundreds of thousands of scenarios can be generated through simulation, providing a very powerful stress-testing mechanism for computing VaR. Distributional-fitting methods are applied to reduce the thousands of data points to their appropriate probability distributions, making the modeling much easier to handle.
Illustrative Example: Structural VaR Models

The first VaR example uses the model Value at Risk—Static Covariance Method, accessible through Modeling Toolkit | Value at Risk | Static Covariance Method. This model computes the portfolio's VaR at a given percentile for a specific holding period, after accounting for the cross-correlation effects among the assets (Figure 95.9). The daily volatility is the annualized volatility divided by the square root of the number of trading days per year. Typically, positively correlated assets carry a higher VaR than zero-correlation asset mixes, whereas negative correlations reduce the total risk of the portfolio through the diversification effect (Figures 95.9 and 95.10).
FIGURE 95.9 Computing VaR using the structural covariance model

The approach used is a portfolio VaR with correlated inputs, where the portfolio has multiple asset holdings with different amounts and volatilities, and each asset is correlated with the others. The covariance or correlation structural model is used to compute the VaR given a holding period or horizon and a percentile value (typically 10 days at 99% confidence). Of course, the example illustrates only a few assets, business lines, or credit lines for simplicity's sake. Nonetheless, using the VaR function (B2VaRCorrelationMethod) in the Modeling Toolkit, many more lines, assets, or businesses can be modeled.
FIGURE 95.10 Effects of different correlations
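For reference, the static covariance calculation itself is compact. Below is a minimal Python sketch of a correlated variance-covariance VaR; the position amounts, volatilities, correlations, and the 10-day/99% settings are illustrative stand-ins for the model's inputs, not values from Figure 95.9.

# A minimal sketch of the variance-covariance (static covariance) VaR
# computation; all amounts, volatilities, and correlations are illustrative.
import numpy as np
from scipy import stats

amounts = np.array([10e6, 5e6, 8e6])            # position sizes in dollars
annual_vol = np.array([0.25, 0.15, 0.30])       # annualized volatilities
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])

days, horizon, confidence = 252, 10, 0.99
daily_vol = annual_vol / np.sqrt(days)          # daily vol from annualized vol

# Portfolio dollar volatility over the holding period, from the covariance matrix.
cov = np.outer(daily_vol, daily_vol) * corr
port_sigma = np.sqrt(amounts @ cov @ amounts) * np.sqrt(horizon)

var = stats.norm.ppf(confidence) * port_sigma   # 10-day 99% VaR
print(f"10-day 99% VaR: ${var:,.0f}")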
Illustrative Example: VaR Using Monte Carlo Simulation

The model used is Value at Risk—Portfolio Operational and Credit Risk VaR Capital Adequacy, accessible through Modeling Toolkit | Value at Risk | Portfolio Operational and Credit Risk VaR Capital Adequacy. This model shows how operational risk and credit risk parameters are fitted to statistical distributions, and how the resulting distributions are modeled in a portfolio of liabilities to determine the VaR (at the 99.50th percentile) for the capital requirement under Basel II. It is assumed that the historical data on the operational risk impacts (Historical Data worksheet) are obtained through econometric modeling of the key risk indicators. The Distributional Fitting Report worksheet is the result of running a distributional fitting routine in Risk Simulator to obtain the appropriate distribution for the operational risk parameter. Using the resulting distributional parameters, we model each liability's capital requirements within an entire portfolio. Correlations can also be entered, if required, between pairs of liabilities or business units. The resulting Monte Carlo simulation results show the VaR capital requirements. Note that an appropriate empirically based VaR cannot be obtained unless distributional fitting and risk-based simulations are first run; the VaR is obtained only by running simulations. To perform the distributional fitting and simulation, follow these six steps:

1. In the Historical Data worksheet (Figure 95.11), select the data area (cells C5:L104) and click on Risk Simulator | Tools | Distributional Fitting (Single Variable).
2. Browse through the fitted distributions, select the best-fitting distribution (in this case, the exponential distribution in Figure 95.12), and click OK.
3. You may now set the assumptions on the Operational Risk Factors with the exponential distribution (fitted results show Lambda = 1) in the Credit Risk worksheet. Note that the assumptions have already been set for you in advance. You may set the assumption by going to cell F27, clicking on Risk Simulator | Set Input Assumption, selecting the Exponential distribution, entering 1 for the Lambda value, and clicking OK. Continue this process for the remaining cells in column F, or simply perform a Risk Simulator Copy and Risk Simulator Paste on the remaining cells.
   A. Note that because the cells in column F have assumptions set, you will first have to clear them if you wish to reset and copy/paste parameters. You can do so by first selecting cells F28:F126 and clicking on the Remove Parameter icon, or by selecting Risk Simulator | Remove Parameter.
   B. Then select cell F27, click on the Risk Simulator Copy icon or select Risk Simulator | Copy Parameter, then select cells F28:F126 and click on the Risk Simulator Paste icon or select Risk Simulator | Paste Parameter.
4. Next, set additional assumptions, such as the probability of default, using the Bernoulli distribution (column H) and the Loss Given Default (column J). Repeat the procedure in step 3 if you wish to reset the assumptions.
FIGURE 95.11 Sample historical bank loans
FIGURE 95.12 Data-fitting results

5. Run the simulation by clicking on the Run icon or clicking on Risk Simulator | Run Simulation.
6. Once the simulation is done running, obtain the Value at Risk by going to the forecast chart, selecting Left-Tail, typing in 99.50, and hitting Tab on the keyboard to enter the confidence value; this yields a VaR of $25,959 (Figure 95.13).
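The cell-level formulas of the workbook are not reproduced here, but a minimal Python sketch of the same simulation logic might look as follows: Bernoulli defaults, loss-given-default assumptions, and an exponential operational factor (fitted Lambda = 1) are drawn per trial, and the 99.50th percentile of total simulated losses stands in for the VaR-based capital figure. All parameter values, and the way the operational factor scales the losses, are illustrative assumptions.

# Sketch of a portfolio credit/operational risk VaR simulation; all parameter
# values are illustrative, and the operational factor's role is an assumption.
import numpy as np

rng = np.random.default_rng(42)
n_liab, n_trials = 100, 20_000

exposure = rng.uniform(50_000, 500_000, n_liab)   # exposure at default per liability
prob_def = rng.uniform(0.01, 0.05, n_liab)        # probability of default per liability
lgd = rng.uniform(0.3, 0.6, n_liab)               # loss given default per liability

defaults = rng.binomial(1, prob_def, size=(n_trials, n_liab))   # Bernoulli defaults
op_factor = rng.exponential(1.0, size=(n_trials, n_liab))       # fitted Lambda = 1

losses = (defaults * lgd * exposure * op_factor).sum(axis=1)    # total loss per trial
var_995 = np.percentile(losses, 99.5)             # 99.50th percentile capital figure
print(f"99.50% VaR: ${var_995:,.0f}")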
Another example of VaR computation is shown next, using the model Value at Risk—Right Tail Capital Requirements, available through Modeling Toolkit | Value at Risk | Right Tail Capital Requirements.
FIGURE 95.13 Simulated forecast results and the 99.50% value at risk

This model shows the capital requirements per Basel II (99.95th percentile capital adequacy based on a specific holding period's VaR). Without running risk-based historical and Monte Carlo simulation in Risk Simulator, the required capital is $37.01M (Figure 95.14), compared to only $14.00M when a correlated simulation is used (Figure 95.15). The difference is due to the cross-correlations between assets and business lines, which can be modeled only using Risk Simulator. The lower VaR is preferred, as the bank is then required to hold less capital and can reinvest the difference in various profitable ventures, thereby generating higher profits.
FIGURE 95.14 Right-tail VaR model
FIGURE 95.15 Simulated results of the portfolio VaR
To run the model, follow these three steps:

1. Click on Risk Simulator | Run Simulation. If you had other models open, make sure you first click on Risk Simulator | Change Simulation Profile and select the Tail VaR profile before starting.
2. When the simulation is complete, select Left-Tail in the forecast chart, enter 99.95 in the Certainty box, and hit Tab on the keyboard to obtain the $14.00M Value at Risk for this correlated simulation.
3. Note that the assumptions have already been set for you in advance in the model, in cells C6:C15. However, you may set them again by going to cell C6, clicking on Risk Simulator | Set Input Assumption, selecting your distribution of choice (or using the default normal distribution, or performing a distributional fitting on historical data), and clicking OK. Continue this process for the remaining cells in column C. You may also decide to first remove the parameters of these cells in column C and set your own distributions. Further, correlations can be set manually when assumptions are set (Figure 95.16) or by going to Risk Simulator | Edit Correlations (Figure 95.17) after all the assumptions are set.

If risk simulation were not run, the VaR or economic capital required would have been $37M, as opposed to only $14M. All cross-correlations between business lines have been modeled, as are stress and scenario tests, and many thousands of possible iterations are run. Individual risks are then aggregated into a cumulative portfolio-level VaR.
FIGURE 95.16 Setting correlations one at a time
FIGURE 95.17 Setting correlations using the correlation matrix routine
EFFICIENT PORTFOLIO ALLOCATION AND ECONOMIC CAPITAL VaR

As a side note, performing portfolio optimization can actually reduce a portfolio's VaR. We start by introducing the concept of stochastic portfolio optimization through an illustrative hands-on example. Then, using this portfolio optimization technique, we apply it to four business lines or assets to compute the VaR of an unoptimized versus an optimized portfolio of assets and see the difference in computed VaR. You will note that, at the end, the optimized portfolio bears less risk and has a lower required economic capital.
Illustrative Example: Stochastic Portfolio Optimization

The optimization model used to illustrate the concepts of stochastic portfolio optimization is Optimization—Stochastic Portfolio Allocation, accessible via Modeling Toolkit | Optimization | Stochastic Portfolio Allocation. This model shows four asset classes with different risk and return characteristics. The idea is to find the best portfolio allocation such that the portfolio's bang for the buck, or return-to-risk ratio, is maximized. That is, optimization is used to allocate 100% of an individual's investment among several different asset classes (e.g., different types of mutual funds or investment styles: growth, value, aggressive growth, income, global, index, contrarian, momentum, and so forth). This model is different from others in that it contains several simulation assumptions (risk and return values for each asset), as seen in Figure 95.18. That is, a simulation is run, then optimization is executed, and the entire process is repeated multiple times to obtain distributions of each decision variable. The entire analysis can be automated using stochastic optimization.
FIGURE 95.18 Asset allocation model ready for stochastic optimization
In order to run an optimization, several key specifications on the model have to be identified first:

• Objective: Maximize Return to Risk Ratio (C12)
• Decision Variables: Allocation weights (E6:E9)
• Restrictions on Decision Variables: Minimum and maximum required (F6:G9)
• Constraints: Portfolio total allocation weights 100% (E11 is set to 100%)
• Simulation Assumptions: Return and risk values (C6:D9)

The model shows the various asset classes. Each asset class has its own set of annualized returns and annualized volatilities. These return and risk measures are annualized so that they can be compared consistently across different asset classes. Returns are computed using the geometric average of the relative returns, while the risks are computed using the logarithmic relative stock returns approach. The allocation weights in column E hold the decision variables, which are the variables that need to be tweaked and tested such that the total weight is constrained at 100% (cell E11). Typically, to start the optimization, we set these cells to a uniform value; in this case, cells E6 to E9 are set at 25% each. In addition, each decision variable may have specific restrictions in its allowed range. In this example, the lower and upper allocations allowed are 10% and 40%, as seen in columns F and G. This setting means that each asset class can have its own allocation boundaries. Column H shows the return to risk ratio, which is simply the return percentage divided by the risk percentage; the higher this value, the higher the bang for the buck. The remaining sections of the model show the individual asset class rankings by returns, risk, return to risk ratio, and allocation. In other words, these rankings show at a glance which asset class has the lowest risk or the highest return, and so forth.
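For readers who prefer to see the optimization problem stated in code, here is a minimal sketch using scipy; the return and risk values are assumed placeholders, and the simple independent-asset ratio below is a stand-in for however cell C12 actually combines the portfolio return and risk.

# Sketch of the allocation problem: maximize return-to-risk, weights sum to
# 100%, each weight bounded between 10% and 40%. All inputs are assumptions.
import numpy as np
from scipy.optimize import minimize

returns = np.array([0.103, 0.112, 0.089, 0.127])   # annualized returns (assumed)
risks = np.array([0.121, 0.168, 0.095, 0.202])     # annualized volatilities (assumed)

def neg_return_to_risk(w):
    # Treat asset classes as independent; the worksheet's ratio may differ.
    port_ret = w @ returns
    port_risk = np.sqrt((w ** 2) @ (risks ** 2))
    return -port_ret / port_risk

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
bounds = [(0.10, 0.40)] * 4
w0 = np.full(4, 0.25)                               # start from uniform 25% weights

res = minimize(neg_return_to_risk, w0, bounds=bounds, constraints=cons)
print(res.x.round(4), -res.fun)                     # optimal weights and ratio

Wrapping this optimizer in a loop that redraws the return and risk inputs from their distributions, and collecting the resulting weights, is essentially the stochastic optimization procedure described below.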
RUNNING AN OPTIMIZATION

To run this model, simply click on Risk Simulator | Optimization | Run Optimization. Alternatively, and for practice, you can set up the model using these seven steps:

1. Start a new profile (Risk Simulator | New Profile).
2. For stochastic optimization, set distributional assumptions on the risk and returns for each asset class. That is, select cell C6, set an assumption (Risk Simulator | Set Input Assumption), and make your own assumption as required. Repeat for cells C7 to D9.
3. Select cell E6 and define the decision variable (Risk Simulator | Optimization | Decision Variables, or click on the Define Decision icon); make it a Continuous Variable, and then link the decision variable's name and minimum/maximum required values to the relevant cells (B6, F6, G6).
4. Then use Risk Simulator Copy on cell E6, select cells E7 to E9, and use Risk Simulator Paste (Risk Simulator | Copy Parameter and Risk Simulator | Paste Parameter, or use the Risk Simulator copy and paste icons). Make sure you do not use Excel's copy and paste.
5. Next, set up the optimization's constraints by selecting Risk Simulator | Optimization | Constraints, selecting ADD, selecting cell E11, and making it equal 100% (the total allocation; do not forget the % sign).
6. Select cell C12, the objective to be maximized, and make it the objective: Risk Simulator | Optimization | Set Objective, or click on the O icon.
7. Run the optimization by going to Risk Simulator | Optimization | Run Optimization. Review the different tabs to make sure that all the required inputs in steps 2 and 3 are correct. Select Stochastic Optimization and let it run for 500 trials repeated 20 times.
You may also try the other optimization routines:

• Static Optimization is run on a static model, where no simulations are run. This optimization type is applicable when the model is assumed to be known and no uncertainties exist. A static optimization can also be run first, to determine the optimal portfolio and its corresponding optimal allocation of decision variables, before more advanced optimization procedures are applied. For instance, before running a stochastic optimization problem, a static optimization is run to determine whether solutions to the optimization problem exist before a more protracted analysis is performed.
• Dynamic Optimization is applied when Monte Carlo simulation is used together with optimization. Another name for such a procedure is simulation-optimization. In other words, a simulation is run for N trials, and then an optimization process is run for M iterations until the optimal results are obtained or an infeasible set is found. That is, using Risk Simulator's Optimization module, you can choose which forecast and assumption statistics to use and replace in the model after the simulation is run. Then these forecast statistics can be applied in the optimization process. This approach is useful when you have a large model with many interacting assumptions and forecasts, and when some of the forecast statistics are required in the optimization.
• Stochastic Optimization is similar to the dynamic optimization procedure except that the entire dynamic optimization process is repeated T times. The results are a forecast chart of each decision variable with T values. In other words, a simulation is run and the forecast or assumption statistics are used in the optimization model to find the optimal allocation of decision variables. Then another simulation is run, generating different forecast statistics, and these new updated values are then optimized, and so forth. Hence, each of the final decision variables will have its own forecast chart, indicating the range of the optimal decision variables. For instance, instead of obtaining single-point estimates as in the dynamic optimization procedure, you can now obtain a distribution of the decision variables and, hence, a range of optimal values for each decision variable, also known as a stochastic optimization.
FIGURE 95.19 Simulated results from the stochastic optimization

Stochastic optimization is performed by first running a simulation, then running the optimization, and then repeating the whole analysis multiple times. The result is a distribution of each decision variable rather than a single-point estimate (Figure 95.19). This means that instead of saying you should invest 30.57% in Asset 1, the optimal decision is to invest between 30.10% and 30.99%, as long as the total portfolio sums to 100%. This way, the optimization results provide management or decision makers a range of flexibility in the optimal decisions.
Refer to Chapter 11 of Modeling Risk, 2nd Edition, by Dr. Johnathan Mun (Hoboken, NJ: John Wiley & Sons, 2006), for more detailed explanations about this model, the different optimization techniques, and an interpretation of the results. Chapter 11's appendix also details how the risk and return values are computed.
Illustrative Example: Portfolio Optimization and Portfolio VaR

Now that we understand the concepts of optimized portfolios, let us see what the effects are on computed economic capital through the use of a correlated portfolio VaR. This model uses Monte Carlo simulation and optimization routines in Risk Simulator to minimize the VaR of a portfolio of assets (Figure 95.20). The file used is Value at Risk—Optimized and Simulated Portfolio VaR, accessible via Modeling Toolkit | Value at Risk | Optimized and Simulated Portfolio VaR. In this example, we intentionally used only four asset classes to illustrate the effects of an optimized portfolio.
FIGURE 95.20 Computing VaR with simulation
FIGURE 95.21 Nonoptimized VaR
In real life, we can extend this to cover a multitude of asset classes and business lines. Here we illustrate the use of a left-tail VaR, as opposed to a right-tail VaR, but the concepts are similar. First, simulation is used to determine the 90% left-tail VaR. The 90% left-tail probability means that there is a 10% chance that losses will exceed this VaR for a specified holding period. With an equal allocation of 25% across the four asset classes, the VaR is determined using simulation (Figure 95.21).
The annualized returns are uncertain and hence are simulated. The VaR is then read off the forecast chart. Next, optimization is run to find the best portfolio, subject to the 100% allocation across the four projects, that will maximize the portfolio's bang for the buck (return-to-risk ratio). The resulting optimized portfolio is then simulated once again, and the new VaR is obtained (Figure 95.22). The VaR of this optimized portfolio is significantly lower than that of the unoptimized portfolio.
FIGURE 95.22 Optimal portfolio’s VaR through optimization and simulation returns a much lower capital requirement
HURDLE RATES AND DISCOUNT RATES

Another related item in the discussion of risk in the context of the Basel II Accord is the issue of hurdle rates, the required rate of return on investment that is sufficient to justify the amount of risk undertaken in the portfolio. There is a nice theoretical connection between uncertainty and volatility whereby the discount rate of a specific risk portfolio can be obtained. In a financial model, the old axiom of high risk, high return is seen through the use of a discount rate: the higher the risk of a project, the higher the discount rate should be, to risk-adjust the riskier project so that all projects are comparable. There are two methods of computing the hurdle rate. The first is an internal model, where the VaR of the portfolio is computed first; this economic capital is then compared to the market risk premium. That is, we have:

Hurdle Rate = (Market Return − Risk-Free Return) / Risk Capital

That is, assuming that a similar set of comparable investments can be obtained in the market, based on tradable assets, the market return is observed. Using the bank's internal cash flow models, all future cash flows can be discounted at the risk-free rate in order to determine the risk-free return. Finally, the difference is divided by the VaR risk capital to determine the required hurdle rate. This concept is very similar to the capital asset pricing model (CAPM), which is often used to compute the appropriate discount rate for a discounted cash flow model. Weighted average cost of capital, multifactor asset pricing models, and arbitrage pricing models are other alternatives, but they are based on similar principles. The second approach is to use the CAPM itself to determine the hurdle rate.
96. Industry Application—Electric/Utility Optimal Power Contract Portfolios
File Name: Electric Utility—Electricity Contract Risk
Location: Modeling Toolkit | Industry Application | Electric Utility—Electricity Contract Risk
Brief Description: Modeling electric utility contracts under uncertainty and performing portfolio optimization to obtain the efficient portfolio allocation
Requirements: Modeling Toolkit, Risk Simulator
Special Credits: This case study was contributed by Elio Cuneo Hervieux, CRM, an electrical civil engineer with an MBA in finance. He is the energy supply contracts manager at an energy generation company in northern Chile, an area in which important mining companies are located. Besides being responsible for the correct application of the contractual agreements with clients, he specializes in the analysis and definition of risk metrics for each contract and for the company's portfolio. He can be contacted at [email protected].
Electricity is generated through different production methods, either hydropower (reservoirs and rivers) or thermal methods (where a great variety of technologies exists, depending on the type of fuel used). A common characteristic of the units that produce electricity at competitive prices is that they are very capital intensive. In addition, the inputs used to generate energy may present important variations in their prices, as is the case with thermal power stations. Another potential source of volatility that should be considered is hydrology, specifically the availability of water for hydropower generation. In Chilean electricity supply contracts, two items make up the electricity billing. The first is associated with power levels, and the second with energy levels. Billings associated with power levels are related to the peak of the client's demand, expressed in US$/kW-month (U.S. dollars per kilowatt, per month of usage). The amount is also related to the investments in generation made by the energy producer, developed at the client's request, or an alternative value is used in accordance with the unit price of power traded in the respective markets; the latter case corresponds to the electricity market in Chile. Billings associated with energy levels are related to the type of fuels used for the energy generation, or to a future projection of spot market prices, or a mixture of both.
Since billings to the client consider these two key variables, to obtain the prospective margins and the associated profitability it is necessary to assign different weights to each variable; in practice, there is no consensus in this respect. From the point of view of obtaining the margin for the electricity sale, the margin can be divided into two components with different risk characteristics: the margin for power and the margin for energy.

1. Margin for power. Once the power rate for the client is fixed, the respective margin for this variable is determined by the level of the client's maximum demand with respect to the cost levels at which power is traded in the electricity market. If the client maintains a stable level of maximum demand, the margin will be maintained.
2. Margin for energy. This margin corresponds to the difference between the income associated with energy and the costs incurred during energy production. For example, for an energy generator using only one type of fuel, the energy rate would be upgraded according to the price variations experienced by that input. If the producer maintains a generation matrix with units that use diverse types of fuels (a diversified fuel matrix), the energy rate would be upgraded using a polynomial function that contemplates the variation in input prices according to the percentages in which each input participates in the generation portfolio. In terms of risk, this last case presents a particularity: the polynomial function for upgrading the energy rate is expected to represent the way production costs move, so that the upgraded rate provides a proper hedge. In analyzing the risk of the polynomial function for upgrading the energy rate for a hypothetical company, we assume the use of typical input prices as well as standard technical characteristics for thermal electricity generation stations.
EFFICIENT FRONTIER OF GENERATION

According to the theory of efficient investment portfolios, when a portfolio of different assets with diverse profitability and risk levels exists, the most efficient investment portfolio is obtained when the selected combination of assets (given by the percentage of investment allocated to each asset) is located at some point along the efficient frontier of all feasible portfolio combinations. In the case of electricity generation, the same output can be generated from different inputs. The risks associated with the generation costs can be analyzed using the efficient frontier framework, where each input represents an asset, the production cost represents profitability, and the risks are associated with the volatility of each input's price. The efficient frontier of generation (EFG) should be representative of the operation of the power stations for a certain period of time; typically, 12 months is appropriate, because it is necessary to consider the periods when the units are out of service for programmed maintenance or forced outages (FOR) and when it is necessary to buy backup energy from third parties. Usually this energy has risk characteristics different from those of the energy generated by a unit that is forcibly stopped.
TABLE 96.1 Technical Aspects

Fuel Type      Net MW   Heat Rate        Variable Cost   FOR   Maintenance
Natural Gas    —        7.50 MMBTU/MWh   US$ 2.00/MWh    7%    35 days/yr.
Coal           —        0.41 Ton/MWh     US$ 2.00/MWh    8%    40 days/yr.

TABLE 96.2 Economic Variables

Fuel Type      Volatility   Fuel Price        Variable Cost   Cost (US$/MWh)
Natural Gas    20%          US$ 2.00/MMBTU    US$ 2.00/MWh    17.00
Coal           30%          US$ 60.00/Ton     US$ 2.00/MWh    26.60
Spot Market    60%          —                 —               30.00
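As a quick arithmetic check, the per-MWh generation costs in Table 96.2 follow directly from the Table 96.1 inputs (cost = heat rate × fuel price + variable cost):

# Checking the Table 96.2 per-MWh costs from the Table 96.1 technical inputs.
ng_cost = 7.50 * 2.00 + 2.00     # MMBTU/MWh x US$/MMBTU + US$/MWh = US$17.00/MWh
coal_cost = 0.41 * 60.00 + 2.00  # Ton/MWh x US$/Ton + US$/MWh = US$26.60/MWh
print(ng_cost, coal_cost)        # 17.0 and 26.6 match Table 96.2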
If we add the fact that input prices have their own volatility, besides the changes that the industry is exposed to in the local market, it is clear that the EFG will have dynamic rather than stationary characteristics. As an example of obtaining the EFG, consider a company that has generating units with the characteristics indicated in Tables 96.1 and 96.2. The EFG for the generation assets is obtained in the example model and summarized in Figure 96.1. From Figure 96.1, it is interesting to highlight these points:

• Under normal operating conditions, with the two power stations in service, the EFG moves between points A and B. In any electricity supply contract, the upgraded energy rate should lie above the line that unites points A and B.
FIGURE 96.1 Typical generation portfolio curve with a mix of natural gas, coal, and marginal cost (CMg)
• The EFG sustains a change, represented by the curve that unites points A and C and the similar one that unites points B and C, when the units must undergo maintenance (the generation cost of the unit that goes out of service is replaced by the purchase of the same energy block in the spot market, or of a similar block through contracts with third parties).
• The EFG curves clearly indicate the change in risk position that the hypothetical company sustains when units are out of service, compared to the controlled risk condition when the units operate normally.

For the hypothetical company, and considering the previously stated technical and economic variables, the operational points of each curve are:

Frontier     Average Cost      Risk
NG - Coal    US$ 20.89/MWh     18.63%
Coal - CMg   US$ 22.27/MWh     27.08%
NG - CMg     US$ 28.62/MWh     45.99%
The EFG curves shown in Figure 96.1 represent a static operating situation for the units, which allows one to identify the aspects that must be taken into account when fixing the energy rate upgrade scheme against the risk levels the company may face. The relevant case is the risk the hypothetical company faces when one of the units is in programmed maintenance; this risk can notably affect the annual margins for energy. Since the EFG of the generation assets represents a static operating situation of the power stations, it is necessary to run stochastic simulations that stress test the indexation polynomial considered for the energy rate, in order to detect possible scenarios in which the originally estimated margin is not reached. If such scenarios exist, there is a certain probability of obtaining lower margins than those originally estimated.
ILLUSTRATIVE EXAMPLE

To illustrate, suppose there are two outlines for upgrading energy rates for a hypothetical client whose demand reaches the 370 MW level, with a load factor of about 90% per month, consistent with the typical electricity demand of a mining company. To cover this client's demand, there are two possible rate-upgrade outlines, each associated with the kind of fuel used for electricity production and with the generation assets of the hypothetical company. The analysis compares the two rate-upgrade outlines, determining the impact of each outline on the prospective annual margins as well as its risks. The goal of the analysis is to generate a recommendation regarding which outline the electricity-producing company should choose.
Outline 1

This first upgrade outline is typical of electricity supply based on thermal stations and considers, as the basis for the energy rate upgrade, the variation in the prices of the inputs with which the electricity is generated. The percentages correspond to the participation of the different inputs used by the electricity producer in the generation process (here, natural gas and coal).

Block 1: 59.45% (220 MW) of the client's energy consumption, with rates based on the use of natural gas, determined as:

EG(m) = EG0 × (PGm / PG0)

Block 2: 40.55% (150 MW) of the client's energy consumption, with rates based on the use of coal, determined as:

EC(m) = EC0 × (PCm / PC0)

Outline 2

Besides considering the variation in input prices, this outline deals with the effects of generation units being unavailable due to programmed maintenance or forced outages. In practical terms, it considers the EFG associated with the generation assets of the hypothetical company.

Block 1: 49.60% (184 MW) of the client's energy consumption, with rates based on the use of natural gas, determined as:

EG(m) = EG0 × (PGm / PG0)

Block 2: 32.85% (122 MW) of the client's energy consumption, with rates based on the use of coal, determined as:

EC(m) = EC0 × (PCm / PC0)

Block 3: 17.55% (64 MW) of the client's energy consumption, with rates based on energy purchased in the spot market, determined as:

ECMg(m) = ECMg0 × (CMgm / CMg0)

where we define the variables as:

EG0 = Base value of the energy rate for Block 1 of the client's consumption, considering natural gas generation, in US$/MWh
EC0 = Base value of the energy rate for Block 2 of the client's consumption, considering coal generation, in US$/MWh
ECMg0 = Base value of the energy rate for Block 3 of the client's consumption, considering purchases in the spot market, in US$/MWh
PG0 = Base value of natural gas, in US$/MMBTU
PC0 = Base value of coal, base 6000 kcal, in US$/Ton
CMg0 = Base value of the spot market, in US$/MWh
EG(m) = Upgraded value in period m of the energy rate of Block 1
EC(m) = Upgraded value in period m of the energy rate of Block 2
ECMg(m) = Upgraded value in period m of the energy rate of Block 3
PGm = Natural gas price valid for period m, in US$/MMBTU
PCm = Coal price valid for period m, in US$/Ton
CMgm = Spot market price valid for period m, in US$/MWh

The associated percentages of each input were determined by considering the effects of the programmed maintenance days of each unit as well as the forced outage percentages associated with each unit. Table 96.3 summarizes the resolution of the considered percentages.

TABLE 96.3 Resolution of the considered percentages

Source         FOR   Maint Days   MW-days   Share
Natural Gas    7%    35/yr.       66,979    49.60%
Coal           8%    40/yr.       44,370    32.85%
Spot Market    —     —            23,701    17.55%

For the case study, the numerical values listed in Table 96.4 are used for the different variables that make up the energy rate upgrade polynomial. Risk Simulator was used to generate and run the stochastic risk simulations of the impact of the indexation outlines on the portfolio energy margin. The parameters of the different variables that represent risks are set as probability distributions, as shown in Table 96.5.
TABLE 96.4 Polynomial upgrade variable values

Variable   Outline No. 1   Outline No. 2
EG0        20.40           20.40
EC0        31.92           31.92
ECMg0      —               36.00
PG0        2.00            2.00
PC0        60.00           60.00
CMg0       —               30.00
TABLE 96.5 Distributional assumptions for simulation

Item       Distribution   Minimum   Most Likely   Maximum
NG         Triangular     1.8       2.0           3.0
FOB Coal   Triangular     30        45            60
Freight    Triangular     10        22.5          35
Bunkers    Triangular     240       302           360
Diesel     Triangular     400       521           650
MW SEP     Triangular     2,828     2,900         2,973
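As a rough illustration of the simulation step, the Python sketch below draws the natural gas and coal prices from the triangular assumptions in Table 96.5, applies the Outline 1 indexation formulas, and nets off a simplified production cost built from Table 96.1. This is a deliberately simplified margin model under stated assumptions, not the workbook's full billing logic.

# Simplified sketch of the rate-upgrade simulation; the margin model here is
# an illustrative assumption, not the workbook's detailed billing logic.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
pg = rng.triangular(1.8, 2.0, 3.0, n)      # natural gas price, US$/MMBTU
pc = rng.triangular(30.0, 45.0, 60.0, n)   # FOB coal price, US$/Ton

# Upgraded rates (Outline 1 blocks) and simplified production costs per MWh.
eg = 20.40 * pg / 2.00                     # EG(m) = EG0 x PGm / PG0
ec = 31.92 * pc / 60.00                    # EC(m) = EC0 x PCm / PC0
cost_ng = 7.50 * pg + 2.00
cost_coal = 0.41 * pc + 2.00

margin = 0.5945 * (eg - cost_ng) + 0.4055 * (ec - cost_coal)
print(np.percentile(margin, [5, 50, 95]).round(2))  # margin percentiles, US$/MWh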
The results for the portfolio energy margins, expressed in US$/MWh, under each energy rate upgrade outline are summarized next. Table 96.6 shows the statistics so readers can visualize the differences in risk for the hypothetical company when comparing one upgrade outline against the other. The statistics obtained from the simulations of each upgrade outline provide information about the different risk characteristics that the hypothetical company may face in the energy sale.

In terms of mean values, Outline 2 offers a better prospective portfolio margin than Outline 1. In terms of risk, Outline 2 is also more attractive for the hypothetical company (lower standard deviation, coefficient of variation, and range). Outline 2 shows improved statistics because the risks associated with periodic unit maintenance and nonprogrammed outages are attenuated when the energy rate upgrade includes a percentage of energy bought in the spot market, reflecting the actual operating conditions of the power stations.
TABLE 96.6 Risk simulation forecast statistics and results

Statistic             Outline 1   Outline 2
Mean                  4.038       6.289
Median                4.324       6.264
Standard Deviation    1.515       1.458
Variance              2.295       2.125
Average Deviation     1.110       1.112
Coef. of Variation    0.375       0.232
Maximum               7.485       11.284
Minimum               −6.051      −0.718
Range                 13.537      12.002
Skewness              −1.446      −0.312
Kurtosis              3.620       1.408
25% Percentile        3.424       5.429
75% Percentile        5.027       7.178
FIGURE 96.2 Gross Margin, percentile curve, Outline No. 1 and No. 2
An alternative way to evaluate the energy rate upgrade outlines is to consider the percentiles associated with the margins obtained after running the stochastic simulations. For example, if the hypothetical company determines that the margin of interest on an annual basis is US$4/MWh, which is the more attractive strategy? Figure 96.2 illustrates the two strategies graphically. If a minimum annual margin of US$4/MWh is required in commercial terms, upgrade Outline No. 2 is more convenient, because the results indicate a probability of 95% or more that the required margin is exceeded, whereas Outline No. 1 has an associated probability of only 60%. This comparison is one method by which contracts of this type can be analyzed. If a portfolio of N commercial agreements exists, then, based on the methods discussed in this chapter, it is possible to determine the associated probabilities that the margins originally estimated at contract signing can be achieved.
CONCLUSION

Based on our analysis, we can conclude:

• The efficient frontier of generation (EFG) is the only way to obtain analytically valid and correct results when determining the most efficient energy rate upgrade outline for the hypothetical company.
• A valid analysis of the risk impact on electric power sale margins, in a world of highly volatile input markets, can be performed only with simulation and optimization techniques.
• Even though the examples are based on the analysis of the company's portfolio, analyses should include studies of the risks associated with each contract, in order to recognize the contribution of each contract to the total risk of the company's energy margins and the need for corrective actions.
• It is advisable to develop these analyses on a permanent basis for risk administration, analyzing existing contracts as well as future client contracts. The effects of existing contracts on the portfolio can be determined using a similar approach.
• Even though the developed example compares two rate upgrade outlines over a twelve-month projection, similar analyses can be developed to visualize the impact on the company's net present value, considering a longer horizon with other risk factors (for example, a variable forced outage rate, maintenance days different from those originally estimated, changes in the input markets, or unavailability of input supply, as in the case of Chile).
97. Industry Application—IT—Information Security Intrusion Risk Management

File Name: Industry Application—IT Risk Management Investment Decision
Location: Modeling Toolkit | Industry Application | IT Risk Management Investment Decision
Brief Description: The case study and model illustrated in this chapter look at an information systems security attack profile and provide decision analysis and support on the required optimal investment
Requirements: Modeling Toolkit, Risk Simulator
Special Credits: This model was contributed by Mark A. Benyovszky, managing director of Zero Delta Center for Enterprise Alignment. Zero Delta CfEA is a research and development organization that specializes in helping companies align their strategic and tactical efforts. Mr. Benyovszky may be reached at [email protected] or +1.312.235.2390. Additional information about Zero Delta can be found at www.zerodelta.org.
Organizations of all sizes rely upon technology to support a wide range of business processes that span the spectrum from "back-office" finance and accounting, to "mid-office" manufacturing, distribution, and other operational support functions, to "front-office" sales, marketing, and customer support functions. As a general rule of thumb, larger organizations have more complex system environments and significantly greater volumes of data, along with a wider range of types of information. Looking across industries, there are different degrees of sensitivity of both the systems and the information employed. For example, financial and
insurance companies store critical and very sensitive information (financial transactions and personal medical histories) about their customers, and an energy company engaged in gas transmission and distribution relies upon critical technology systems that control the flow of gas through complex pipeline networks. Regardless of the specific industry an organization is involved in or the size of the company, the underlying technology systems and the data and information they consume and produce are significant business assets. Like any asset, they must be protected, and in order to protect these assets, we must understand what their individual and collective risk profiles look like. Technology systems are interconnected across private, semiprivate, and public networks. Every second (perhaps you prefer nanoseconds, picoseconds, or attoseconds, depending upon your "geekiness factor") of every day, information moves across these networks; most of the time the information moves about intentionally, but on other occasions it does not. We can think of this information and these systems in the context of an information security asset portfolio. It is important for us to quantify the value of each class of system or set of information, which will help us understand, according to a scale of sensitivity, which assets require greater protection. Higher-value assets are likely to be greater targets for attack (based on the basic risk/reward equation). We can then apply various methods against the portfolio to determine the composite risk level of the portfolio (a high-level view), the risk profiles of categories of assets, and individual asset risk profiles (a detailed view). This approach enables us to gain a better grasp of our information and technology asset portfolio and provides us with the ability to determine how much to invest to protect each class of assets. While the specific approaches and processes required to perform this initial portfolio structuring are beyond the scope of this case study, determining the probabilities of events occurring against these assets, and what the resultant outcomes are likely to be, is at the center of our discussion. This case study assumes that the structuring process has already been completed. Specifically, there are five steps to undergo:

Step 1: Create Environment Details
Step 2: Create Attack Models
Step 3: Create Attack Scenarios
Step 4: Determine Financial Impact
Step 5: Arrive at Investment Decision

Now, let us get on with the heart of our discussion. Monte Carlo simulation provides us with an effective way to estimate the losses associated with a given attack. Monte Carlo simulation addresses the "flaw of averages" problem that plagues many single-point estimates or estimates based upon standard averages. For the sake of this discussion, we will explore how we applied this approach to a large gas transmission and distribution company. The company (which we will refer to as Acme T&D) is one of the largest natural gas transmission and distribution companies in the world. Acme T&D has an extensive gas pipeline network that
supplies natural gas to wholesalers and retailers in some of the largest markets throughout North America. Energy companies fit into a unique category of organizations that use technology at the core of their business operations. Acme T&D relies upon extensive industrial control systems known in the industry as SCADA (Supervisory Control and Data Acquisition) and PCM (Process Control Monitoring) systems. These systems are composed of a number of devices distributed throughout the gas pipeline network; these components are used to control the flow of gas through the network. They supply critical information, such as gas flow rate, gas temperature, and pressure at various points throughout the network, to a system operator, who then makes decisions about what to do to keep the pipeline running at an operationally and economically efficient level, always supplying gas where and when it is needed in a dynamic environment that changes on a consistent basis. These systems are critical not only to the operations of Acme T&D but also to the greater infrastructure of the United States. If the transmission and distribution of natural gas is interrupted for a significant period of time, it can have "downstream" effects that could be economically devastating (the suspended operations of manufacturing companies that rely upon natural gas) or personally devastating (lack of gas to run a furnace in the cold of winter). Clearly, these SCADA systems would be categorized as business-critical assets, with the highest priority placed on them vis-à-vis their protection.
STEP 1: CREATE ENVIRONMENT DETAILS

When we consider the extent to which an attack will cause damage, we must identify the factors that drive the top end of our model. These factors will be different for each company (with similarities among companies within the same industry). For Acme T&D, our greatest concerns from an operational perspective are the count and types of networks in the environment and employee productivity (we take into account separately how operations are impacted when a threat affects a SCADA network). The reason for using employee productivity as a factor is that when networks are down or systems are unreachable (for whatever reason), employees are directly impacted; we use this factor here because of its universal relevance across industry domains.
ACME T&D Network Counts:
  Enterprise Network Count
  SCADA Network Count
  PCN Network Count
  Total Networks
As an aside, and as previously alluded to, the factors that drive the model will change based upon industry characteristics. For example, a financial institution may
wish to use the economic losses associated with stolen credit card data as a primary factor to drive the model, in addition to employee productivity losses, and so forth. Acme T&D has approximately 10,000 employees. We must determine the payroll expenses (fully burdened) per hour. We are intentionally simplifying this model; it is not likely that all 10,000 employees are working at once (e.g., some percentage of employees may be on a shift rotation schedule). A sample computation is shown next:

Total Employee Cost/Hour = (Employee Count × Salary) / 2,000

where 2,000 is the number of hours worked per employee each calendar year (2,080 less 80 hours for holidays), and the Salary input is the fully burdened average salary across all employees.
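With an assumed fully burdened average salary, the computation is a one-liner; the salary figure below is a placeholder, not Acme T&D's actual payroll data.

# Total payroll cost per hour across all employees; the salary is an assumed
# placeholder value.
employee_count = 10_000
avg_salary = 85_000                  # assumed fully burdened average salary, US$
hours_per_year = 2_000               # 2,080 less 80 holiday hours
cost_per_hour = employee_count * avg_salary / hours_per_year
print(f"${cost_per_hour:,.0f} per hour")   # $425,000 of payroll per hour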
The model is based upon various types of attack. We determine the probability that each attack will occur and the extent to which it will cause damage (economic and operational) to the organization. We then create a separate model (our attack portfolio), which allows us to simulate multiple attacks occurring against different networks in the environment and the resultant impacts in aggregate. We classify attacks based upon two variables: the frequency and the impact of the attack. An attack profiled as Class I is considered an average attack. An average attack could be a low-impact worm, a Trojan horse, or a virus that may affect various network systems and employee computers. Acme T&D has a variety of tools deployed in the network to mitigate these types of attacks; however, as stated earlier, no tool is 100% effective. This is where the value of Monte Carlo simulation is realized.

             Minimum   Most Likely   Maximum
             0.7       0.8           1.0
Now we construct the remaining elements of the model. We use standard (and fairly conservative) estimates for the probability of occurrence of an attack. Table 97.1 illustrates how the top end of the model comes together: we place the attack types across the columns of the model, and we create the network structure and impact structure components.
STEP 2: CREATE ATTACK MODELS

We must first create a common attack model and categorize the different types of attacks that exist. The classes of attacks are based upon the severity level of the attack (from average to extreme). We also indicate the extent of damage that an attack produces and the recovery details associated with each class of attack. This classification structure provides us with a basic framework we can leverage throughout the analysis exercise. We have five classes of attacks structured in our model; the descriptors are qualitative in nature (Table 97.1).
TABLE 97.1 Qualitative assessments of attack classes

Class I (Average)
  Type of Attack: Benign worm, Trojan horse, virus, or equivalent.
  Extent of Damage: Limited. Most damage occurs at the host level.
  Recovery Approach: Mostly automated, but may require some human intervention.

Class II (Slightly Above Average)
  Type of Attack: Worm, Trojan horse, virus, or equivalent designed to create some damage or consume resources.
  Extent of Damage: Limited. Damage can occur at the host and network level.
  Recovery Approach: Human intervention is required. Humans use tools that require interaction and expertise.

Class III (Moderately Above Average)
  Type of Attack: Worm, Trojan horse, or equivalent designed to create significant damage and consume resources.
  Extent of Damage: Noticeable damage at host and network levels. Automated tools have limited effect in combating the attacker.
  Recovery Approach: Significant human intervention is required. Personnel require physical access to host machines and network environments.

Class IV (Significantly Above Average)
  Type of Attack: Concentrated attack by a hacker using a variety of tools and techniques to compromise systems.
  Extent of Damage: Significant damage to important/sensitive data. May also include damage to host machines as Trojans and other tools are used to circumvent detection and mitigation techniques.
  Recovery Approach: Extensive human intervention is required. Data and systems recovery necessary. Multiple techniques and methods necessary to fully recover.

Class V (Extreme Case)
  Type of Attack: Concentrated attack by a hacker or groups of hackers who are trying to compromise information/systems and have malicious intent.
  Extent of Damage: Critical damage to important/sensitive information. Irreversible damage to systems/hardware.
  Recovery Approach: Extensive human intervention is required. External "experts" required to assess and recover the environment.
We create current state and future state models for the classes of attacks. This is performed for comparison purposes and is an important aspect to the overall analysis. The current state model is
based upon the technology and approaches that are currently in use (our preexisting investments) to detect, mitigate, and recover from each respective type of attack. The future-state model is based
upon a set of new technologies (our future investments) that can be deployed in the environment to enhance the security of the environment, mitigate a wider range of attacks, and more rapidly recover
from various types of attacks.
These types of attacks will be consistent across our current and future state models. There are a number of variables that are a part of our attack models. They include:

% of Network Impacted
% of Employees Impacted
Productivity Loss (hours/employee)
Costs to Recover Employees
Hours to Recover Employees

Note that the models are populated with static values that are single point estimates and
averages. For example, a Class I Attack in the current state attack model has a 10% Network Impacted value and a 5-hour Productivity Loss value. How can we be absolutely certain that a Class I Attack
will always impact 10% of the networks and result in a productivity loss of 5 hours per employee (along with the other variables included in the model)? We cannot be certain, at least not with a
reliable degree of confidence. As such, any analysis based upon single point estimates or averages is flawed. Monte Carlo simulation allows us to refine our assumptions and provides us with a
mechanism to perturb these variables in a dynamic fashion. While we have solved the problem of dealing with averages, we are faced with a new challenge—what are the appropriate ranges to use to
perturb the values and how should these perturbations behave throughout the simulation? To gather these values we leveraged the Delphi method. Following the Delphi method approach, we interviewed a
number of technical experts in the environment who had knowledge of prior attacks and the extent to which tools were used to mitigate them. The expert panel provided the details necessary to
determine how the model variables might behave and provided appropriate upper and lower boundary values. Figure 97.1 illustrates how we have adapted the % of Network Impacted value for a Class I
attack. The original value was based upon an average of 10%. Upon closer inspection and after some discussion, our panel of experts determined that such an attack is unlikely to impact less than 10%
of the network and may in fact impact a greater percentage of the network before it is identified and subsequently terminated successfully before further damage can occur. Using Monte Carlo
simulation, we create an assumption for this value and select a normal distribution. We truncate the left side (or minimum value) of the distribution to take into account the 10% “floor” and provide
some movement towards the right side (maximum or ceiling value) of the distribution. We set the mean to 10% and standard deviation to 5%. The resultant distribution indicates a minimum value of 10%,
a mean of 10% (our average), and a maximum value of approximately 25%. We have introduced into our model a very powerful feature. Our model better reflects reality by taking into account the
uncertainty associated with this value. We use this same approach for the other values and select and adjust the distributions accordingly. To further illustrate this point, Figure 97.2 is taken from
the Class V attack column. A Class V attack is considered an extreme event. The probability of occurrence is very low, and the damage caused is expected to be extreme or catastrophic in nature.

FIGURE 97.1 Truncated percent of Network Impacted simulation assumption

FIGURE 97.2 Percent of Network Impacted simulation assumption of a Class V attack

FIGURE 97.3 Forecast

An analogous event would be a volcanic eruption or an earthquake (which may trigger a tsunami, for example, if either event occurs in a susceptible place in the South
Pacific) that occurs once every hundred years. The Gumbel Maximum Distribution is ideally suited for this type of catastrophic event. This distribution model is positively skewed and is designed to
produce a higher probability of lower numbers and a lower probability of extreme values. We set the Alpha value to 70 and the Beta to 10. This results in a mean of 75.7722 and a standard deviation of
12.8255. It is important to note the third and fourth moments of this distribution. The skewness coefficient is 1.1395 (indicating the positively skewed nature of the distribution) and the kurtosis coefficient is 2.400 (indicating the extent to which extreme events should occur in the distribution). This distribution model better reflects reality vis-à-vis extreme attacks. We can see in Figure 97.3 that there are higher probabilities to the left of the mean than to the right. However, the model has taken into account the extreme outcomes to the far right of the median. The original
analysis, which was based upon standard averages, indicated that for this scenario the total financial losses are $21,741,176. If we follow our “1 in 3” approach, we find that the number is adjusted downward to $19,074,729, or by a little over 12%. As you explore the model in more detail you will note the use of various distributions for each class of attack. We adjust these figures for each
scenario to take into account the greater variability of more advanced and staged attacks. We know that as attacks gain more sophistication there are more ‘unknowns’ with respect to how far reaching
an attack will be or to what extent it will cause damage. Hence, the mean and standard deviation parameters can be adjusted to take into account this variability.
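For readers who want to reproduce these assumptions outside of Risk Simulator, here is a minimal Python sketch. The left truncation is approximated by resampling below the floor (one of several ways to mimic what the software does), and the Gumbel maximum draw uses the standard inverse-transform formula; the parameters are the ones quoted above.

```python
import math
import random

def truncated_normal(mean, sd, floor, n):
    """Draw from normal(mean, sd), resampling anything below the floor."""
    draws = []
    while len(draws) < n:
        x = random.gauss(mean, sd)
        if x >= floor:
            draws.append(x)
    return draws

def gumbel_max(alpha, beta, n):
    """Gumbel maximum via inverse transform: x = alpha - beta*ln(-ln(u))."""
    return [alpha - beta * math.log(-math.log(random.random())) for _ in range(n)]

# Class I "% of Network Impacted": normal, mean 10%, sd 5%, floored at 10%.
class1 = truncated_normal(0.10, 0.05, floor=0.10, n=10_000)
print(f"Class I max: {max(class1):.2%}")        # typically ~25-30%

# Class V: Gumbel maximum, alpha = 70, beta = 10
# (mean ~75.77, sd ~12.83, skewness ~1.14, matching the moments quoted).
class5 = gumbel_max(70, 10, 10_000)
print(f"Class V mean: {sum(class5) / len(class5):.2f}")  # ~75.77
```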
MODEL RESULTS

Impact to Operational Productivity

We have determined that the average fully burdened salary per employee is $80,000. For Scenario I, we estimate that each impacted employee loses 5 hours of productivity, and it costs Acme T&D $39.22 per employee per hour of lost productivity. For the attack profile we modeled in Scenario I, where 10% of the networks and 10% of employees are impacted, the result is a total productivity loss of $196,078.43 (Table 97.2).
Recovery Costs

Attacks generally result in some form of damage (more often than not the damage is nonphysical in nature). It is often necessary to deploy technical personnel to remediate the impacted environments and systems. There are two dimensions to this remediation: network remediation (resetting/reconfiguring network routers, switches, firewalls, etc.) and “client” remediation (“ghosting” client machines, patching software, reinstalling/reconfiguring software, etc.). Our model takes into account the number of resources and the time necessary to recover the networks, and the same for recovering employees. For Scenario I the costs are $50,000 and $4,800, respectively.
Total Impact

We now sum all of the separate loss components of the model:

Loss (Productivity) + Loss (Network Recovery) + Loss (Employee Recovery)

For Scenario I, we have total losses of $147,647.
TABLE 97.2 Modeling results from Scenario I

Lost Revenues—Impact to Operational Productivity
Assumption—Avg. Salary/Employee (fully burdened): $80,000
Assumption—Total Time to Fully Recover/Employee (hours): 5
Productivity Cost/Hour: $39.22

Costs to Recover Employees
Assumption (hours to recover): 1
Total Costs to Recover Employees: $4,800

Costs to Recover Networks
Assumption—Hours to Recover: 12
Resources per Network: 5
Cost per Hour: $50
Total Costs to Recover Networks: $50,000

Total Impact
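The arithmetic behind Table 97.2 can be reproduced with a short sketch. The inputs are the ones listed in the table; since the live model perturbs these values during simulation, the point totals here simply illustrate how the loss components are summed.

```python
# Scenario I point estimates, as listed in Table 97.2.
employees         = 10_000
pct_employees_hit = 0.10        # 10% of employees impacted
hours_lost        = 5           # productivity hours lost per impacted employee
cost_per_hour     = 39.22       # productivity cost per employee-hour

loss_productivity = employees * pct_employees_hit * hours_lost * cost_per_hour
loss_networks     = 50_000      # total costs to recover networks
loss_employees    = 4_800       # total costs to recover employees

# Loss (Productivity) + Loss (Network Recovery) + Loss (Employee Recovery)
total_impact = loss_productivity + loss_networks + loss_employees
print(f"Productivity loss: ${loss_productivity:,.2f}")  # ~$196,100
print(f"Total impact:      ${total_impact:,.2f}")
```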
In the model, there are four additional scenarios. For each scenario we adjust the assumptions to better fit the attack profiles. The percentage of networks down and employees impacted increases for each scenario.

Exposing the Flaw of Averages

(The accompanying table compares the Total Impact (original) against the Total Impact (revised), with the Variance (%), for the Class I through Class V attacks.)
The next step of our modeling efforts involves creating a “portfolio of attacks.” This step will provide us with the answer to the question “how much should Acme T&D invest in security solutions to
mitigate the risks associated with the attacks profiled?”
STEP 3: CREATE ATTACK SCENARIOS Now that we have determined the estimated costs associated with different types of attacks we are ready to move on to creating the attack scenarios. The attack
scenarios will provide us with the total losses realized during a specified period of time. We have created six attack scenarios. The attack scenarios consider the occurrence of different types of
attacks over a five year period. By creating different scenarios, we can consider different “foreseeable futures.” This approach allows an organization to determine how it wishes to “view the world”
from a risk planning and risk mitigation standpoint. The degree to which an organization will tolerate risk varies greatly. Some organizations are more tolerant of risk and will invest less in
mitigating technologies and approaches, while other organizations that are more risk averse will invest substantially more in order to reduce their risk profile. One can think of this type of
investment as an insurance policy—juggling “premium” with “payout” or from a strategic real options perspective of risk mitigation. The scenarios provide us with a landscape view—from lowest to
highest possible losses. We will explore two different approaches to determining the probability of attacks occurring across a specified timeline. The first approach involves the use of the Delphi
method. We interview a number of subject matter and technical experts who are asked to produce five different likely scenarios for various attack profiles. We provide some guidance and suggest to
each expert that the scenarios should range from a “most likely scenario” to a “least likely scenario.” This team of experts is then brought together to discuss the ranges across the spectrum. After
various conversations and some debate, the team collectively determines to reduce the total scenarios (25 = 5 experts × 5 scenarios) to the “final 5.” These scenarios are reflected as Scenarios I
through V on the Attack Scenarios spreadsheet. Figure 97.4 illustrates a Scenario I attack profile. On our defined scale of “least likely to most likely” this scenario is most likely to be realized.
The experts provided the count of each type of attack that occurs within our 5-year period and further determined the years in which they will occur.
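The scenario bookkeeping itself is simple; the hypothetical sketch below shows the structure (attack counts per year, current versus future state totals, and the resulting variance and risk adjustment). The per-class impact dollar values are placeholders, not the model's figures, while the Scenario I attack schedule matches the one described in the text.

```python
# Placeholder per-attack impact values; the real ones come from the attack models.
current_impact = {"I": 150_000, "II": 450_000}
future_impact  = {"I":  40_000, "II": 120_000}

# Scenario I: three Class I attacks in years 1, 3, and 4; one Class II in year 5.
scenario = {1: ["I"], 2: [], 3: ["I"], 4: ["I"], 5: ["II"]}

current = sum(current_impact[a] for yr in scenario.values() for a in yr)
future  = sum(future_impact[a]  for yr in scenario.values() for a in yr)

variance_pct    = (current - future) / current * 100   # % reduction in losses
risk_adjustment = current - future                     # carried into the DCF step
print(f"Variance: {variance_pct:.2f}%   Risk adjustment: ${risk_adjustment:,.0f}")
```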
FIGURE 97.4 Scenario I attack profile of a future state
We have carried over our financial impact information from our previous exercise. For each class of attack we have current state and future state impact costs. The first section of the model includes
the classes of attacks. For Scenario I, we have determined that three Class I attacks will occur in years 1, 3, and 4. In addition we have determined that one Class II attack will occur in year 5.
The second section of the model includes the impact values from the attack models. We include in this model both the current state and future state impact values. These values are computed for each
year and are summed in the totals column. The variance value indicates the percentage reduction from current to future state loss values. By investing in the proposed technologies we can reduce the total losses for this scenario by 73.34%. The risk adjustment value is the difference between the current state impact and future state impact values. This value is carried over to the next step of our
analysis. We use this same model to create the other attack scenarios. Figure 97.5 illustrates the Scenario IV attack profile. This scenario represents the opposite end of the spectrum. In this
scenario the company is successful in preventing all classes of attacks until year 5, when a Class V attack occurs. This is the infamous “hacker with malicious intent” scenario, wherein a hacker sets out with a concentrated effort to circumvent intrusion management technologies and a specific desire to cause significant harm to the organization.

FIGURE 97.5 Scenario IV attack profile of a future state

For Acme T&D this scenario could perhaps
reflect the sentiments of a terrorist who has a desire to gain access to the critical gas pipeline systems in order to cause a catastrophic failure to the pipeline network. One could argue that such
an approach to determining these probabilities lacks “scientific rigor” or can be significantly biased—either intentionally or unintentionally. Consider the technical expert who firmly believes that
his skills are second to none with respect to effectively deploying and managing an armory of intrusion management technology. He may be biased to create scenarios that reflect the conservative end
of the spectrum, significantly coloring the reality of the environment or threat landscape. If an organization were to pin its decision on this approach, a crafty hacker with superior skills to
this individual may easily circumvent these technologies and successfully realize his attack objectives and goals. Conversely, consider the “doomsday” character who is constantly pondering the worst
case scenario and has a strong voice in the room. He or she may be overly aggressive with the attack scenarios, creating unrealistic models that result in “doom and gloom.” How can one test for these
biases? Is there a way to independently determine the probabilities and likelihoods of events? Indeed there is a way and it is again found in Monte Carlo simulation.
Scenario VI represents our independent attack scenario. You may consider this the “control model.” This is our unbiased “expert” who is neutral to all biases. The probabilities of occurrence are
factually driven and leverage a distribution model that is focused on the discrete binary nature of these events—an event either happens or it doesn’t. The Poisson distribution provides us with the
ability to address the unique aspects of occurrence probabilities. Figure 97.6 illustrates how we can leverage the Poisson distribution for the Class I Attack. These events are discrete in
nature—they either occur or don’t occur. For a Class I Attack we set the Lambda value to 1.5984. This creates a distribution model that ranges from 0 to 6. Note the probability scale on the left side of the model. We can see that this Lambda value results in a 20% nonoccurrence outcome. In other words, 80% of the time a Class I Attack will occur at least once (occurring exactly once approximately 33% of the time) and may occur up to 6 times within our time interval, albeit with a very small probability. Compare this to a Poisson distribution model for a Class V extreme attack. We set the Lambda value to 0.0012 to reflect this. It results in a distribution model where this event will not occur 99.88% of the time; there is only about a 0.12% chance that the event will occur in any given trial. You may
wonder why, if Monte Carlo simulation can be used reliably to arrive at probabilities of occurrence, we choose to use two different methods for determining probabilities. There are three primary
reasons for doing so; they include:
To reduce the “fear of the black box” phenomenon. People who are not familiar with analytical techniques or the details associated with statistical methods have a tendency to treat analysis and the resultant outputs as “black box” generated values. There is a natural tendency to distrust the unknown. By leveraging both statistical methods and expert opinion interviews, the layman observing the analysis and output can rationalize in his or her mind how the results were generated and how they were validated or refuted. It also provides an avenue for the layman to provide input (vis-à-vis his or her own opinions) into the equation.

To spur additional dialogue and debate. The interview process inherently spurs additional dialogue among the expert panel. My experience has been that the greater the divergence in opinions, the more debate occurs, which results in more robust and more refined models. The process may require more work, but, more often than not, the value of the outcome is greater than the additional effort.

As a “litmus test” of expert opinions. If we relied solely on the input of expert opinions without thinking through and modeling out the statistical side of the equation, we may fall victim to tunnel vision.
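Returning to the Poisson assumption in Scenario VI, the occurrence counts are easy to reproduce without any add-in. The sketch below uses Knuth's sampling algorithm with the two Lambda values from the text and confirms the quoted occurrence rates.

```python
import math
import random

def poisson_draw(lam):
    """Sample a Poisson(lam) count using Knuth's algorithm."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

trials = 100_000
class1 = [poisson_draw(1.5984) for _ in range(trials)]  # Class I attacks
class5 = [poisson_draw(0.0012) for _ in range(trials)]  # Class V attacks

# Class I: P(0) = exp(-1.5984) ~ 20%, so at least one attack ~80% of the time.
print("P(Class I >= 1):", sum(c >= 1 for c in class1) / trials)   # ~0.80
# Class V: P(0) = exp(-0.0012) ~ 99.88%, so an attack is very rare.
print("P(Class V >= 1):", sum(c >= 1 for c in class5) / trials)   # ~0.0012
```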
While it’s beyond the scope of this case study, these models could be further enhanced by creating forecasts for different types of attacks and determining the probabilities of becoming a victim for
each attack. These enhancements could be realized by using historical data (what is published in the public domain along with an organization’s own data). For the purposes of simplicity, we leveraged
the Delphi method to create the various attack scenarios. The attack scenario total impact values range from $1,547,895 to $23,791,472, which represents quite a significant range. How do we determine
how much to invest to mitigate the risks associated with the attacks?
FIGURE 97.6 Poisson distribution assumption
STEP 4: DETERMINE FINANCIAL IMPACT

We are now ready to explore different investment scenarios to offset the risks of attacks. We now have more reliable estimates for the various classes of attacks and can run this financial information through a classical Net Present Value (NPV) and Discounted Cash Flow (DCF) analysis. Our NPV/DCF analysis also will have six different scenarios that follow the same scenario structure as those previously defined. We follow this same approach through the entire analysis; it allows us to see multiple sides of the problem and arrive at a more reliable outcome. We will return to our original investment estimate (as provided by the client) of $2,000,000, which was previously arrived at through a variety of network and systems analyses. This amount reflects the investment necessary to upgrade and enhance the intrusion management systems currently distributed throughout the environment. At a high level, this investment will result in:

The replacement of Intrusion Detection Systems (IDS) with Intrusion Prevention Systems (IPS).
An increased deployment of IPS devices at additional points throughout the network—from network perimeter to network core.
The deployment of Network Behavior Analysis (NBA) solutions at various points throughout the network, along with the data collection and analysis engines necessary to detect anomalies and suspicious network activities.
The logical question is, “Does a $2,000,000 investment adequately address the risks associated with the attacks and their likelihood of occurrence in this environment?” Add to this, “Is it too much
or too little?” If you recall from the previous steps, we created two different aspects of our models: current state and future state views. The basic premise of our argument is that no technology or
set of technologies can provide 100% protection from all types of attacks. However, we can intelligently place technology throughout the environment to mitigate these attacks, and these technologies will have varying degrees of success with respect to eliminating an attack altogether, significantly reducing the damage produced, or reducing the amount of time necessary to recover from an
attack. What is important to us then is the reduction of losses. The investment decision is how much we should invest to reduce our losses. This is the basis behind our current state and future state
views. We now move on to create our DCF and NPV analysis scenarios. Figure 97.7 illustrates Scenario I. We create a 5-year time horizon and determine the timing and intensity of capital investments
(the intrusion management technology solutions). The Risk Adjustment value is the difference of the Current State Impact less the Future State impact for each year in this scenario (as modeled
previously during the attack scenario step). We compute our net cash flows for each year, sum up the values and then apply our NPV and Internal Rate of Return (IRR) calculations (note: we use the
MIRR function in Excel to better adapt to negative values).

FIGURE 97.7 DCF and NPV analysis

We also have unknowns associated with this model. We don’t know a few critical inputs precisely, and we must account for these uncertainties. The following lists the additional inputs required to run the NPV analysis:

DCF/NPV Input Parameters
Discount Rate: 10%
Finance Rate: 5%
Reinvestment Rate: 7%
Equipment Annual Maintenance: 15.00%

We define these parameters as follows:

Discount Rate. The standard discount rate on the cash flows.
Finance Rate. The cost of capital or financing rate used to acquire the desired assets.
Reinvestment Rate. The return on the free cash flows that are reinvested.
Equipment Annual Maintenance. The annual maintenance and service fees associated with keeping the various technology solutions current (software upgrades, signature updates).

We apply Monte Carlo-based distributions to each value. For example, we may have varying degrees of success negotiating annual maintenance fees on the various equipment we decide to purchase. For this value we use a normal distribution with the mean set to 15% (the industry average for a company of Acme T&D’s size) and the standard deviation set to 1.5% (0.015), which gives us a range of between just less than 12% and slightly more than 18% (both of which are realistic outer limits). Next, Figure 97.8 represents what the
expert team believed to be the “planning case”—in other words, the team agreed that they should plan their efforts and investments based upon this scenario. This is also the scenario we use for our
“unbiased expert.”
FIGURE 97.8 DCF and NPV analysis on Scenario VI
Based upon this case we are expecting a total of eight attacks during a five-year period. Our current state model suggests that we would incur losses of $11,650,567 in this scenario; our future state model suggests losses of $4,600,118. As mentioned above, our DCF/NPV analysis is concerned with the net difference, which in this case is $6,990,449. The model takes into account when these losses occur and when the difference is realized. We then compute the NPV and IRR values. Using the $2,000,000 assumption as our capital investment in intrusion management technologies, we can see that this scenario results in a positive NPV of $2,228,925.15, which corresponds to a 35.32% IRR on our investment. Clearly, this model supports a $2,000,000 investment. This model in isolation would suggest that we could nearly double the initial investment and still have a positive NPV and IRR (the threshold to a negative NPV is a year-1 expense of $3,666,000, following standard computations for all other variables in the model).
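A minimal sketch of the cash-flow mechanics follows. The $2,000,000 investment, the 10%/5%/7% rates, the 15% maintenance assumption, and the total risk adjustment come from the text; the year-by-year split of that risk adjustment is a placeholder. The MIRR is computed the way Excel's MIRR function does it: negative flows discounted at the finance rate, positive flows compounded at the reinvestment rate.

```python
investment   = 2_000_000
discount     = 0.10   # discount rate
finance_rate = 0.05   # financing rate for negative flows
reinvest     = 0.07   # reinvestment rate for positive flows
maintenance  = 0.15   # annual equipment maintenance (mean assumption)

# Yearly risk adjustments: placeholder split of the $6,990,449 total.
risk_adjust = [1_500_000, 1_300_000, 1_400_000, 1_350_000, 1_440_449]
cash_flows  = [-investment] + [ra - investment * maintenance for ra in risk_adjust]

npv = sum(cf / (1 + discount) ** t for t, cf in enumerate(cash_flows))

n      = len(cash_flows) - 1
pv_neg = sum(cf / (1 + finance_rate) ** t for t, cf in enumerate(cash_flows) if cf < 0)
fv_pos = sum(cf * (1 + reinvest) ** (n - t) for t, cf in enumerate(cash_flows) if cf > 0)
mirr   = (fv_pos / -pv_neg) ** (1 / n) - 1

print(f"NPV: ${npv:,.0f}   MIRR: {mirr:.2%}")
```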
STEP 5: ARRIVE AT INVESTMENT DECISION

We are now near the end of our analysis. We now have a solid understanding of what our current and future risks are vis-à-vis the losses we are likely to incur across a variety of attack scenarios. We know that a $2,000,000 investment is within the range of reason. However, we also know that we could invest more and as a result further reduce our risk of losses. Alternatively, we could invest less and rely upon the relatively low probability of being a target of a severe or catastrophic event. We are at a crossroads. There is no absolute right or wrong decision for any organization. The decision makers in your organization must choose based upon all of the available facts, expert opinion, and the organization’s culture. Consider that the analysis is relatively conservative in nature. Consider that the most conservative and least biased model (the model generated by our “unbiased expert”) suggests that 80% of the time losses will be greater than $1,857,474 (current
state) and, if we implement our proposed future state technology plan, these losses will be reduced to $267,792, resulting in a total loss reduction of $1,589,742. Follow this same mode of thinking and take the better side of a “betting man” bet: 51% of the time, losses will be greater than $2,570,762 (current state) and $401,688 (future state), yielding $2,169,074 in loss reductions. Figure 97.9 illustrates an example set of risk tolerance and required investment levels, and the resulting simulation forecast distributions shown in Figure 97.10 further illustrate the probabilistic levels of these risk tolerances.

FIGURE 97.9 Risk tolerance levels
FIGURE 97.10 Simulation forecast risk tolerance levels
98. Industry Applications—Insurance ALM Model

File Name: Insurance—Asset Liability Management (ALM)
Location: Modeling Toolkit | Industry Applications | Insurance ALM Model
Brief Description: Illustrating how to perform a basic asset liability model by optimizing asset allocation for a portfolio of insurance policies while minimizing the volatility of the insurance surplus
Requirements: Modeling Toolkit, Risk Simulator
Special Credits: This model was contributed by Victor Wong, managing director of Real Actuarial Consulting Limited. He is a fellow of the Society of Actuaries (FSA), a fellow of the Canadian Institute of Actuaries (FCIA), a Chartered Financial Analyst (CFA) charterholder, and certified in Risk Management (CRM).
This is a simplified Asset Liability Management (ALM) model for a portfolio of insurance endowment products. The key risk factor that requires modeling is the interest yield curve, which in turn
affects the asset and liability cash flows and market values. The objective of the model is to minimize the surplus volatility. The decision variables are the asset allocation into various asset
classes. This and the next two chapters discuss the concepts of ALM and pension benefits as they pertain to insurance analysis.
ASSET LIABILITY MANAGEMENT

ALM is a financial technique that can help companies manage the mismatch of asset and liability and/or cash flow risks. The mismatch risks are due to different
underlying factors that cause the assets and liabilities to move in different directions with different magnitudes. Asset liability risk is a leveraged form of risk. The capital of most financial
institutions is small relative to the firm’s assets or liabilities, so small percentage changes in assets or liabilities can translate into large percentage changes in capital. Typically, companies
such as banks, insurance companies, and pension funds (or their corporate sponsors) adopt such techniques to help them better manage their mismatched asset/liability risks (more particularly the
interest rate risks) and to ensure that their capital will not be depleted in changing demographic and economic environments. Techniques for assessing asset liability risk include gap analysis and
duration analysis. These analyses facilitated techniques of gap management and duration
matching of assets and liabilities. Both approaches worked well if assets and liabilities comprised fixed cash flows. However, the increasing use of options, such as embedded prepayment risks in
mortgages or callable debt, posed problems that these traditional analyses could not address. Thus, Monte Carlo simulation techniques are more appropriate to address the increasingly complex
financial markets. Today, financial institutions also make use of over-the-counter (OTC) derivatives to structure hedging strategies and securitization techniques to remove assets and liabilities
from their balance sheet, therefore eliminating asset liability risk and freeing up capital for new business. The scope of ALM activities has broadened to other nonfinancial industries. Today,
companies need to address interest rate exposures, commodity price risks, liquidity risk, and foreign exchange risk.
EMBEDDED OPTIONS IN FINANCIAL INSTRUMENTS

Traditionally, ALM was used as a tool to protect the capital/surplus from movements of assets/liabilities against a certain risk (e.g., a parallel shift in
yield curve). In theory, ALM enables the financial institution to remove certain volatility risks. For banks and insurers, ALM can potentially lower regulatory capital requirements, as less capital
is needed to protect against unforeseen risks. For pension sponsors, ALM also can reduce the plan’s funding requirements and accounting costs by locking into a certain level of return. Cash Flow
Matching (or Immunization) is one of the ALM methods in which both asset and liability cash flows are matched exactly such that any movement in the yield curve would be irrelevant for the entity.
However, most financial instruments today rarely have fixed cash flows. Thus, cash flow matching would require frequent portfolio rebalancing, which is prohibitively expensive. Due to the
shortcomings of cash flow matching, duration matching was used to manage the mismatch risks (Figure 98.1). Typical duration matching is to find an optimal asset allocation portfolio in which the
asset duration matches the liability duration. The asset and liability duration is defined as the amount of change in the market value of assets/liabilities when the yield curve shifts by 100 basis
points. The obvious shortcomings of duration matching are that the yield curve rarely shifts in a parallel fashion and that the linear approximation (asset and liability duration) works well only for
small changes to the yield curve. Today’s financial assets and liabilities have embedded options that significantly affect the timing of cash flows, sensitivity to change in market rates, and total
return. Examples of embedded options in various financial institutions include:
Insurance policies. Guaranteed rates, cash surrender values, policy loans, dividends/bonuses.
Banks. Prepayment option to borrowers, overdraft, early withdrawal.
Pension plans. Early retirement, cash-out option, DC conversion.
Assets. Callable options, prepayment options, abandon option (credit/bankruptcy).
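To make duration matching concrete, the sketch below measures duration as the percentage change in market value for a 100-basis-point parallel shift, under the simplifying assumptions of a flat curve and fixed annual cash flows (exactly the setting in which the technique works well). The cash flows themselves are placeholders.

```python
def market_value(cash_flows, shift=0.0, base_rate=0.04):
    """PV of fixed annual cash flows under a flat curve shifted by `shift`."""
    r = base_rate + shift
    return sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cash_flows))

def effective_duration(cash_flows, bp=0.01):
    """% change in market value per 100 bp parallel shift (central difference)."""
    mv0 = market_value(cash_flows)
    return (market_value(cash_flows, -bp) - market_value(cash_flows, +bp)) / (2 * mv0 * bp)

assets      = [120, 120, 120, 1_120]            # placeholder bond-like flows
liabilities = [100, 100, 100, 100, 1_100]       # placeholder policy flows

print("Asset duration:    ", round(effective_duration(assets), 2))
print("Liability duration:", round(effective_duration(liabilities), 2))
```

An allocation whose weighted asset duration equals the liability duration immunizes the surplus against small parallel shifts but, as noted above, not against curve twists or embedded options.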
FIGURE 98.1 Insurance ALM duration matching
Figure 98.1 illustrates the effects of compound embedded options and the sensitivity of a life insurer’s liabilities to the change in interest rates. Other variations of traditional ALM models
include convexity matching (second derivative) and dynamic matching (frequent rebalancing). These variations attempt to increase the precision of the changes in market values of the assets and
liabilities compensating for the effects of the embedded options. Traditional ALM using cash flow/duration matching is not an effective method to protect capital as models do not recognize the risks
of embedded options. Furthermore, the trading costs of rebalancing the portfolios are prohibitively expensive. The simulation approach on assets/liability models is a better way to protect capital by
capturing the impact of embedded options in many possible variations and finding the optimal portfolio that can minimize the volatility of the surpluses. More advanced approaches would consider the
downside risk only. Most financial institutions would like to guard against the risk of reducing capital/surpluses. An increase in capital/surpluses is actually a good thing. As a result, a slightly
higher volatility in the entity’s capital may be acceptable as long as the potentially higher yields can outweigh the level of downside risk undertaken.
IMPLEMENTING ALM

Six steps are taken to implement an effective ALM:

1. Set ALM objectives. First of all, the bank, insurer, or pension fund sponsor needs to decide on its ALM objectives. The
objectives may be affected by the organization’s desires, goals, and positioning in relation to its stakeholders, regulators, competition, and external rating agencies. Would it be simply minimizing
the volatility of surpluses? Would a higher yield be more desirable, and if so, what is the maximum level of risk that can be undertaken?
FIGURE 98.2 Optimization in ALM
2. Determine risk factors and cash flow structure. The ALM manager/analyst needs to capture the various risks the entity carries and take into account the complex interactions between the asset and liability cash flows. Risk factors may include market, interest rate, and credit risks, as well as contractual obligations that behave like options.
3. Consider available hedging solutions. While diversification can reduce nonsystematic risks, financial institutions often carry firm-specific risks that cannot be diversified easily. The organization needs to evaluate the appropriateness of various hedging solutions, including types of assets, use of hedging instruments (derivatives, interest rate options, credit options, pricing, reinsurance), and use of capital market instruments (securitization, acquisition, and sale of business lines).
4. Model the risk factors. Modeling the underlying risk factors may not be trivial. If the ALM manager’s concern is the interest rate risks, then modeling the yield curve would be critical.
5. Set decision variables. For different financial institutions, the decision variables will differ. For instance, an insurer needs to set decisions on asset allocation to each qualified investment, the level of dividends/bonuses paid to policyholders, the amount of new business undertaken, pricing, and so on.
6. Set constraints. Typically, financial institutions are heavily regulated (reserve and capital requirements). More recently, accounting requirements (or profits) also have become increasingly important. These constraints need to be modeled to ensure that the final solution can meet the regulatory and accounting requirements.

Figure 98.2 shows a typical ALM modeling flowchart for an insurance company that aims to minimize its
interest rate risks.
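As a toy version of the flowchart's objective, the sketch below simulates parallel yield-curve shifts and measures the surplus volatility of one candidate allocation. All durations, market values, and the shift volatility are illustrative assumptions; an optimizer would search over the weights to minimize the reported volatility.

```python
import random
import statistics

ASSET_DURATIONS = {"cash": 0.1, "bond_5y": 4.5, "bond_10y": 8.5}  # assumptions
LIABILITY_DURATION = 7.0                                          # assumption
ASSETS, LIABILITIES = 1_000.0, 950.0      # market values; surplus = 50

def surplus_after_shift(weights, shift):
    """First-order (duration-only) revaluation after a parallel shift."""
    asset_dur  = sum(w * ASSET_DURATIONS[k] for k, w in weights.items())
    new_assets = ASSETS * (1 - asset_dur * shift)
    new_liabs  = LIABILITIES * (1 - LIABILITY_DURATION * shift)
    return new_assets - new_liabs

weights   = {"cash": 0.2, "bond_5y": 0.3, "bond_10y": 0.5}    # trial allocation
shifts    = [random.gauss(0.0, 0.01) for _ in range(10_000)]  # ~100 bp sd
surpluses = [surplus_after_shift(weights, s) for s in shifts]
print("Surplus volatility:", round(statistics.stdev(surpluses), 2))
```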
TABLE 98.1 New versus old approaches in ALM risk management

Risk Minimization (Old Way):
All risks are undesirable.
Minimize volatility to capital.
Limit asset allocation to match liability duration, thus reducing asset return potentials.

Real Options Approach (New Way):
Downside risks are undesirable; missing upside potential is also undesirable; there may be value in waiting.
Formulate strategies to hedge capital depletion while retaining upside potentials.
Value, analyze, and rank/optimize alternative “protection” strategies against return/risk objectives from an organization’s perspective; trial portfolios/proof of concept; redesign of products with embedded options.
REAL OPTIONS APPLICATIONS TO ALM

Real options have useful applications in the evaluation of various hedging strategies in an ALM context. Traditional analysis focuses on minimizing surplus volatility
as a fixed strategy with periodic rebalancing (Table 98.1). However, today’s business decisions are much more dynamic with increasingly complex hedging instruments and strategies available to
management. Each business strategy has its associated costs, risks, and benefits. Business strategies that can be implemented when needed can enhance business flexibility to guard against undesirable
events, therefore enhancing their overall value to the organization (Table 98.2). Real options can determine the risk-adjusted strategic value to the organization that can be ranked and optimized
according to business objectives and available resources.
TABLE 98.2 Asset versus liability strategies

Asset-Based Strategies:
Asset allocation.
Structured products (derivatives, swaps, credit options, interest rate options, etc.).
Alpha and beta approach to asset allocation.

Liability-Based Strategies:
Pricing margin/reserves.
Acquisition/sale of business lines.
Redesigning/reengineering products.
Timing.

SUMMARY

Traditional ALM models are no longer sufficient in protecting the company’s limited capital due to the presence of embedded options in both assets and liabilities. The simulation approach can provide a more realistic picture by taking into account the
risks of embedded options and other risk factors simultaneously. More advanced models today focus on downside risks with moderate risk taking to enhance yields. Optimization algorithms can then be
used to maximize return/risk objectives within set constraints and decision variables. Formulating real options strategies can enhance corporate value by incorporating future risks and management
flexibility in decision analysis. This book highlights the potential applications of real options in an ALM context. However, a comprehensive analysis of this topic is beyond our scope.
99. Operational Risk—Queuing Models at Bank Branches

File Name: Banking—Queuing Models at Bank Branches
Location: Modeling Toolkit | Banking Models | Queuing Models
Brief Description: Illustrating how to set up a queuing model, run a Monte Carlo simulation on a queuing model, and interpret the results of a queuing model
Requirements: Modeling Toolkit, Risk Simulator
Modeling Toolkit Functions Used: B2QueuingSCProbNoCustomer, B2QueuingSCAveCustomersWaiting, B2QueuingSCAveCustomersinSystem, B2QueuingSCAveTimeWaiting, B2QueuingSCAveTimeinSystem, B2QueuingSCProbHaveToWait, B2QueuingSCAProbNoCustomer, B2QueuingSCAAveCustomersWaiting, B2QueuingSCAAveCustomersinSystem, B2QueuingSCAAveTimeWaiting, B2QueuingSCAAveTimeinSystem, B2QueuingSCAProbHaveToWait, B2QueuingMCProbNoCustomer, B2QueuingMCAveCustomersWaiting, B2QueuingMCAveCustomersinSystem, B2QueuingMCAveTimeWaiting, B2QueuingMCAveTimeinSystem, B2QueuingMCProbHaveToWait, B2QueuingMGKProbBusy
MODEL BACKGROUND

Think of how queuing models work: consider a customer service call center, a bank teller’s waiting line, or the waiting line at an ATM. The queue is the line of people waiting to get served. Typically, the arrival rate of patrons follows a Poisson distribution on a per-period basis (per hour, per day, etc.). The number of checkout counters open is the number of channels in a queuing model. The rate at which servers are able to serve patrons typically follows an exponential distribution. The questions that a queuing model answers are how many servers or channels there
should be if we do not want patrons to wait more than X minutes, or, if we have Y servers, what the probability is that a patron arriving will have to wait and what the average wait time is. These
types of models are extremely powerful when coupled with simulation, where the arrival rates and service times are variable and simulated. Imagine applications from staffing call centers, customer
service lines, and checkout counters to how many hospital beds should exist in a hospital per type of diagnosis-related group, and the like. These models are based on operations research queuing models. The single-channel queuing model and the multiple-channel queuing model assume a Poisson distribution of arrival rates and an exponential distribution of service times, with the only difference between them being the number of channels. Both the M/G/1 single-channel arbitrary model and the M/G/K blocked queuing model assume the same Poisson distribution on arrival rates but do not rely on the exponential distribution for service times. The two main differences between these two general-distribution (G) models are that the M/G/K uses multiple channels as compared to the single-channel M/G/1, and that the M/G/1 model assumes the possibility of waiting in line while the M/G/K model assumes customers will be turned away if the channels are loaded when they arrive.
RUNNING A MONTE CARLO SIMULATION

In all of these models, the results are closed form. Hence, only the input assumptions (arrival rates and service rates) are uncertain and should be simulated. The forecast results should be any of the outputs of interest. Please see Figures 99.1 and 99.2.
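For reference, the closed-form single-channel results are simple enough to compute directly. The sketch below implements the standard M/M/1 formulas and wraps them in a crude simulation in which the arrival and service rates are themselves uncertain; the rates and their spreads are illustrative assumptions, not the model's inputs.

```python
import random

def mm1_metrics(lam, mu):
    """Standard M/M/1 results; requires lam < mu for a stable queue."""
    rho = lam / mu
    return {
        "P(no customers)":      1 - rho,
        "Avg number waiting":   rho ** 2 / (1 - rho),
        "Avg number in system": rho / (1 - rho),
        "Avg time waiting":     lam / (mu * (mu - lam)),
        "Avg time in system":   1 / (mu - lam),
        "P(have to wait)":      rho,
    }

wait_times = []
for _ in range(10_000):
    lam = random.gauss(8, 1)    # arrivals per hour (assumption)
    mu  = random.gauss(10, 1)   # customers served per hour (assumption)
    if 0 < lam < mu:            # discard unstable/infeasible draws
        wait_times.append(mm1_metrics(lam, mu)["Avg time waiting"])

print("Mean time waiting (hours):", sum(wait_times) / len(wait_times))
```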
FIGURE 99.1 MG1: Single-channel arbitrary queuing model
FIGURE 99.2 Multiple-channel queuing model
100. Optimization—Continuous Portfolio Allocation

File Name: Optimization—Continuous Portfolio Allocation
Location: Modeling Toolkit | Optimization | Continuous Portfolio Allocation
Brief Description: Illustrating how to run an optimization on continuous decision variables, and how to view and interpret optimization results
Requirements: Modeling Toolkit, Risk Simulator

This model shows 10 asset classes with different risk and return characteristics. The idea here is to find the best portfolio allocation such that the portfolio’s bang for the buck, or returns-to-risk ratio, is maximized—that is, to allocate 100% of an individual’s investment portfolio among several different asset classes (e.g., different types of mutual funds or investment styles: growth, value, aggressive growth, income, global, index, contrarian, momentum, etc.). In order to run an optimization, several key specifications of the model must first be identified:

Objective: Maximize the Return to Risk ratio (C18)
Decision Variables: Allocation weights (E6:E15)
Restrictions on Decision Variables: Minimum and maximum required (F6:G15)
Constraints: Portfolio total allocation weights sum to 100% (E17 is set to 100%)
FIGURE 100.1 Asset allocation optimization model

The model shows the 10 asset classes. Each asset class has its own set of annualized returns and risks, measured by annualized volatilities (Figure 100.1). These return and risk measures are annualized values such that they can be consistently compared across different asset classes. Returns are computed using the geometric average of the relative returns, while the risks are computed using the annualized standard deviation of the logarithmic relative historical stock returns. See the chapters on volatility models for
detailed calculations. The allocation weights in column E hold the decision variables, which are the variables that need to be tweaked and tested such that the total weight is constrained at 100%
(cell E17). Typically, to start the optimization, we will set these cells to a uniform value; in this case, cells E6 to E15 are set at 10% each. In addition, each decision variable may have specific
restrictions in its allowed range. In this example, the lower and upper allocations allowed are 5% and 35%, as seen in columns F and G. This setting means that each asset class may have its own
allocation boundaries. Next, column H shows the return to risk ratio for each asset class, which is simply the return percentage divided by the risk percentage, where the higher this value, the
higher the bang for the buck. The remaining sections of the model show the individual asset class rankings by returns, risk, return to risk ratio, and allocation. In other words, these rankings show
at a glance which asset class has the lowest risk or the highest return, and so forth.
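Outside of Risk Simulator, the same continuous allocation problem can be sketched with a standard constrained optimizer. The returns and risks below are random placeholders, and the risk aggregation is a simple weighted proxy that ignores correlations, so this illustrates the setup (objective, 5%-35% bounds, 100% total) rather than reproducing the model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
returns = rng.uniform(0.05, 0.15, 10)   # placeholder annualized returns
risks   = rng.uniform(0.08, 0.25, 10)   # placeholder annualized volatilities

def neg_return_to_risk(w):
    # Negative "bang for the buck" (the minimizer maximizes by negation).
    return -(w @ returns) / (w @ risks)

bounds      = [(0.05, 0.35)] * 10                             # per-class limits
constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1}]  # weights sum to 100%
w0          = np.full(10, 0.10)                               # start at 10% each

res = minimize(neg_return_to_risk, w0, method="SLSQP",
               bounds=bounds, constraints=constraints)
print("Weights:", np.round(res.x, 3), " Return-to-risk:", round(-res.fun, 4))
```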
RUNNING AN OPTIMIZATION

To run this model, simply click on Risk Simulator | Optimization | Run Optimization. Alternatively, for practice, you can try to set up the model again by doing the following (the steps are illustrated in Figure 100.2):

1. Start a new profile (Risk Simulator | New Profile) and give it a name.
2. Select cell E6 and define the decision variable (Risk Simulator | Optimization | Set Decision, or click on the Set Decision D icon), make it a Continuous Variable, and then link the decision variable’s name and minimum/maximum required to the relevant cells (B6, F6, G6).
3. Use the Risk Simulator Copy on cell E6, select cells E7 to E15, and use Risk Simulator’s Paste (Risk Simulator | Copy Parameter and Risk Simulator | Paste Parameter, or use the copy and paste icons). To rerun the optimization, type in 10% for all decision variables. Make sure you do not use the regular Excel copy | paste.
4. Set up the optimization’s constraints by selecting Risk Simulator | Optimization | Constraints, selecting ADD, selecting cell E17, and making it (==) equal 100% (for total allocation; remember to insert the % sign).
5. Select cell C18 as the objective to be maximized (Risk Simulator | Optimization | Set Objective).
6. Select Risk Simulator | Optimization | Run Optimization. Review the different tabs to make sure that all the required inputs in steps 2 to 4 are correct.
7. You may now select the optimization method of choice and click OK to
run the optimization.

FIGURE 100.2 Optimization model setup

a. Static Optimization is an optimization that is run on a static model, where no simulations are run. This optimization type is applicable when the model
is assumed to be known and no uncertainties exist. Also, a static optimization can be run first to determine the optimal portfolio and its corresponding optimal allocation of decision variables
before applying more advanced optimization procedures. For instance, before running a stochastic optimization problem, first run a static optimization to determine if there exist solutions to the
optimization problem before performing a more protracted analysis.
b. Dynamic Optimization is applied when Monte Carlo simulation is used together with optimization. Another name for such a procedure is simulation-optimization. In other words, a simulation is run for N trials, and then an optimization process is run for M iterations until the optimal results are obtained
or an infeasible set is found. That is, using Risk Simulator’s Optimization module, you can choose which forecast and assumption statistics to use and replace in the model after the simulation is
run. Then you can apply these forecast statistics in the optimization process. This approach is useful when you have a large model with many interacting assumptions and forecasts, and when some of
the forecast statistics are required in the optimization.
c. Stochastic Optimization is similar to the dynamic optimization procedure, except that the entire dynamic optimization process is repeated T
times. The results will be a forecast chart of each decision variable with T values. In other words, a simulation is run and the forecast or assumption statistics are used in the optimization model
to find the optimal allocation of decision variables. Then another simulation is run, generating different forecast statistics, and these new updated values are optimized, and so forth. Hence, each
of the final decision variables will have its own forecast chart, indicating the range of the optimal decision variables. For instance, instead of obtaining single-point estimates as in the dynamic optimization procedure, you can now obtain a distribution of the decision variables and, hence, a range of optimal values for each decision variable, also known as a stochastic optimization. Note: If
you are to run either a dynamic or stochastic optimization routine, make sure that you first define the assumptions in the model. That is, make sure that some of the cells in C6:D15 are assumptions.
The model setup is illustrated in Figure 100.2.
RESULTS INTERPRETATION

Briefly, the optimization results show the percentage allocation for each asset class (or projects or business lines, etc.) that would maximize the portfolio’s bang for the buck
(i.e., the allocation that would provide the highest returns subject to the least amount of risk). In other words, for the same amount of risk, what is the highest amount of returns that can be
generated, or for the same amount of returns, what is the least amount of risk that can be obtained? See Figure 100.3. This is the concept of the Markowitz efficient portfolio analysis. For a
comparable example, see Chapter 106, Military Portfolio and Efficient Frontier.
FIGURE 100.3 Optimization results
101. Optimization—Discrete Project Selection

File Name: Optimization—Discrete Project Selection
Location: Modeling Toolkit | Optimization | Discrete Project Selection
Brief Description: Illustrating how to run an optimization on discrete integer decision variables in project selection in order to choose the best projects in a portfolio, given a large variety of project options, subject to risk, return, budget, and other constraints
Requirements: Modeling Toolkit, Risk Simulator

This model shows 12 different projects with different risk and return characteristics. The idea here is to find the best portfolio allocation such that the portfolio’s total strategic returns are maximized. That is, the model is used to find the best project mix in the portfolio that maximizes the total returns after considering the risks and returns of each project, subject to the constraints on the number of projects and the budget. Figure 101.1 illustrates the model.

Objective: Maximize Total Portfolio Returns (C17) or the Sharpe returns-to-risk ratio (C19)
Decision Variables: Allocation or Go/No-Go decision (I4:I15)
Restrictions on Decision Variables: Binary decision variables (0 or 1)
Constraints: Total Cost (D17) is less than $5,000, with no more than 6 projects selected (I17)
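Because the decision space is only 2^12 = 4,096 combinations, the selection problem can even be brute-forced for intuition; the sketch below does exactly that with placeholder project data, enforcing the same budget and project-count constraints.

```python
from itertools import product

# Placeholder (strategic return, cost) pairs for the 12 candidate projects.
projects = [(120, 500), (200, 900), (90, 400), (310, 1_200), (150, 700),
            (80, 300), (260, 1_000), (110, 450), (175, 800), (95, 350),
            (220, 950), (130, 600)]

BUDGET, MAX_PROJECTS = 5_000, 6
best_value, best_pick = -1, None

for pick in product((0, 1), repeat=len(projects)):   # every go/no-go vector
    cost  = sum(b * c for b, (_, c) in zip(pick, projects))
    value = sum(b * r for b, (r, _) in zip(pick, projects))
    if cost <= BUDGET and sum(pick) <= MAX_PROJECTS and value > best_value:
        best_value, best_pick = value, pick

print("Selected projects:", [i + 1 for i, b in enumerate(best_pick) if b])
print("Total strategic return:", best_value)
```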
FIGURE 101.1 Discrete project selection model

RUNNING AN OPTIMIZATION

To run this preset model, simply run the optimization (Risk Simulator | Optimization | Run Optimization) or, for practice, set up the model yourself:

1. Start a new profile (Risk Simulator | New Profile) and give it a name.
2. In this example, all the allocations are required to be binary (0 or 1) values, so first select cell I4 in the Integer Optimization worksheet, define it as a decision variable (Risk Simulator | Optimization | Set Decision, or click on the Set Decision icon), and make it a Binary Variable. This setting automatically sets the minimum to 0 and the maximum to 1, so the variable can only take on a value of 0 or 1. Then use the Risk Simulator Copy on cell I4, select cells I5 to I15, and use Risk Simulator’s Paste (Risk Simulator |
Copy Parameter and Risk Simulator | Paste Parameter, or use the Risk Simulator copy and paste icons, NOT the Excel copy/paste).
3. Next, set up the optimization’s constraints by selecting Risk Simulator | Optimization | Constraints and selecting ADD. Then link to cell D17 and make it <= 5000 (the total budget constraint).

List of Functions

380. B2ForwardStartCallOption
Starts proportionally in or out of the money in the future. Alpha < 1: call starts (1 – A)% in the money, put starts (1 – A)% out of the money. Alpha > 1: call (A – 1)% out of the money, put (A – 1)% in the money.

381. B2ForwardStartPutOption
Starts proportionally in or out of the money in the future. Alpha < 1: call starts (1 – A)% in the money, put starts (1 – A)% out of the money. Alpha > 1: call (A – 1)% out of the money, put (A – 1)% in the money.
382. B2FuturesForwardsCallOption
Similar to a regular option, but the underlying asset is a futures or forward contract. A call option is the option to buy a futures contract, with the specified futures strike price at which the futures is traded if the option is exercised.

383. B2FuturesForwardsPutOption
Similar to a regular option, but the underlying asset is a futures or forward contract. A put option is the option to sell a futures contract, with the specified futures strike price at which the futures is traded if the option is exercised.

384. B2FuturesSpreadCall
The payoff of a spread option is the difference between the two futures’ values at expiration. The spread is Futures 1 – Futures 2, and the call payoff is Spread – Strike.

385. B2FuturesSpreadPut
The payoff of a spread option is the difference between the two futures’ values at expiration. The spread is Futures 1 – Futures 2, and the put payoff is Strike – Spread.

386. B2GARCH
Computes the forward-looking volatility forecast using the generalized autoregressive conditional heteroskedasticity (p, q) model, where future volatilities are forecast based on historical price levels and information.

387. B2GapCallOption
The call option is knocked in if the asset exceeds the reference Strike 1, and the option payoff is the asset price less Strike 2 for the underlying.

388. B2GapPutOption
The put option is knocked in only if the underlying asset is less than the reference Strike 1, providing a payoff of Strike 2 less the underlying asset value.

389. B2GeneralizedBlackScholesCall
Returns the Black-Scholes model with a continuous dividend yield call option.

390. B2GeneralizedBlackScholesCallCashDividends
Modification of the Generalized Black-Scholes model to solve European call options, assuming a series of dividend cash flows that may be even or uneven. A series of dividend payments and times is required.

391. B2GeneralizedBlackScholesPut
Returns the Black-Scholes model with a continuous dividend yield put option.

392. B2GeneralizedBlackScholesPutCashDividends
Modification of the Generalized Black-Scholes model to solve European put options, assuming a series of dividend cash flows that may be even or uneven. A series of dividend payments and times is required.

393. B2GraduatedBarrierDownandInCall
Barriers are graduated ranges between lower and upper values. The option is knocked in the money proportionally, depending on how low the asset value is in the range.
394. B2GraduatedBarrierDownandOutCall Barriers are graduated ranges between lower and upper values. The option is knocked out of the money proportionally depending on how low the asset value is in
the range. 395. B2GraduatedBarrierUpandInPut Barriers are graduated ranges between lower and upper values. The option is knocked in the money proportionally depending on how high the asset value is
in the range. 396. B2GraduatedBarrierUpandOutPut Barriers are graduated ranges between lower and upper values. The option is knocked out of the money proportionally depending on how high the asset
value is in the range. 397. B2ImpliedVolatilityBestCase Computes the implied volatility given an expected value of an asset, along with an alternative best-case scenario value and its corresponding
percentile (must be above 50%). 398. B2ImpliedVolatilityCall Computes the implied volatility in a European call option given all the inputs parameters and the option value. 399.
B2ImpliedVolatilityPut Computes the implied volatility in a European put option given all the inputs parameters and the option value. 400. B2ImpliedVolatilityWorstCase Computes the implied volatility
given an expected value of an asset, along with an alternative worst-case scenario value and its corresponding percentile (must be below 50%). 401. B2InterestAnnualtoPeriodic Computes the periodic
compounding rate based on the annualized compounding interest rate per year. 402. B2InterestCaplet Computes the interest rate caplet (sum all the caplets into the total value of the interest rate
cap) and acts like an interest rate call option. 403. B2InterestContinuousToDiscrete Returns the corresponding discrete compounding interest rate, given the continuous compounding rate. 404.
B2InterestContinuousToPeriodic Computes the periodic compounding interest rate based on a continuous compounding rate. 405. B2InterestDiscreteToContinuous Returns the corresponding continuous
compounding interest rate, given the discrete compounding rate. 406. B2InterestFloorlet Computes the interest rate floorlet (sum all the floorlets into the total value of the interest rate floor) and
acts like an interest rate put option.
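Items 401, 403 through 405, and 407 through 408 convert between compounding conventions. As a hedged sketch of the standard conversions these functions appear to perform (names, signatures, and the annual-to-periodic reading are my assumptions, not the add-in's):

from math import exp, log

def discrete_to_continuous(r_discrete, m):
    """Continuous rate equivalent to a rate compounded m times per year."""
    return m * log(1 + r_discrete / m)

def continuous_to_discrete(r_continuous, m):
    """Rate compounded m times per year equivalent to a continuous rate."""
    return m * (exp(r_continuous / m) - 1)

def annual_to_periodic(r_annual, m):
    """Per-period rate implied by an annually compounded rate (one plausible
    reading of B2InterestAnnualtoPeriodic; the add-in's convention may differ)."""
    return (1 + r_annual) ** (1.0 / m) - 1

print(round(discrete_to_continuous(0.06, 12), 6))  # ~0.059851
print(round(continuous_to_discrete(0.06, 12), 6))  # ~0.060150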
407. B2InterestPeriodictoAnnual Computes the annualized compounding interest rate per year based on a periodic compounding rate. 408. B2InterestPeriodictoContinuous Computes the continuous
compounding rate based on the periodic compounding interest rate. 409. B2InverseGammaCallOption Computes the European call option assuming an inverse Gamma distribution, rather than a normal
distribution, and is important for deep out-of-the-money options. 410. B2InverseGammaPutOption Computes the European put option assuming an inverse Gamma distribution, rather than a normal
distribution, and is important for deep out-of-the-money options. 411. B2IRRContinuous Returns the continuously discounted Internal Rate of Return for a cash flow series with its respective cash flow
times in years. 412. B2IRRDiscrete Returns the discretely discounted Internal Rate of Return for a cash flow series with its respective cash flow times in years. 413. B2LinearInterpolation
Interpolates and fills in the missing values of a time series. 414. B2MarketPriceRisk Computes the market price of risk used in a variety of options analyses, using market return, risk-free return,
volatility of the market, and correlation between the market and the asset. 415. B2MathGammaLog Returns the result from a Log Gamma function. 416. B2MathIncompleteBeta Returns the result from an
Incomplete Beta function. 417. B2MathIncompleteGammaP Returns the result from an Incomplete Gamma P function. 418. B2MathIncompleteGammaQ Returns the result from an Incomplete Gamma Q function. 419.
B2MatrixMultiplyAxB Multiplies two compatible matrices, such as M × N and N × M, to create an M × M matrix. Copy and paste function to the entire matrix area and use Ctrl+Shift+Enter to obtain the
matrix. 420. B2MatrixMultiplyAxTransposeB Multiplies the first matrix with the transpose of the second matrix (multiplies M × N with M × N matrix by transposing the second matrix to N × M, generating
an M × M matrix). Copy and paste function to the entire matrix area and use Ctrl+Shift+Enter to obtain the matrix.
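Items 419 and 420 above, and the transpose variants just below, mirror standard array algebra returned as Excel array formulas (hence the Ctrl+Shift+Enter note). Outside Excel the same shapes can be checked with a few NumPy lines; this is an illustrative sketch, not the add-in's code:

import numpy as np

A = np.arange(6).reshape(2, 3)   # an M x N matrix (M=2, N=3)
B = np.arange(6).reshape(2, 3)   # another M x N matrix

print((A @ B.T).shape)   # (2, 2): A times transpose(B), cf. B2MatrixMultiplyAxTransposeB
print((A.T @ B).shape)   # (3, 3): transpose(A) times B, cf. B2MatrixMultiplyTransposeAxB
print(A.T.shape)         # (3, 2): plain transpose, cf. B2MatrixTranspose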
421. B2MatrixMultiplyTransposeAxB Multiplies the transpose of the first matrix with the second matrix (multiplies M × N with M × N matrix by transposing the first matrix to N × M, generating an N × N
matrix). Copy and paste function to the entire matrix area and use Ctrl+Shift+Enter to obtain the matrix. 422. B2MatrixTranspose Transposes a matrix from M × N to N × M. Copy and paste function to
the entire matrix area and use Ctrl+Shift+Enter to obtain the matrix. 423. B2MertonJumpDiffusionCall Call value of an underlying whose asset returns are assumed to follow a Poisson Jump Diffusion
process; that is, prices jump several times a year, and cumulatively these jumps explain a percentage of the total asset volatility. 424. B2MertonJumpDiffusionPut Put value of an underlying whose
asset returns are assumed to follow a Poisson Jump Diffusion process; that is, prices jump several times a year, and cumulatively these jumps explain a percentage of the total asset volatility. 425.
B2NormalTransform Converts values into a normalized distribution. 426. B2NPVContinuous Returns the Net Present Value of a cash flow series given the time and discount rate, using continuous
discounting. 427. B2NPVDiscrete Returns the Net Present Value of a cash flow series given the time and discount rate, using discrete discounting. 428. B2OptionStrategyLongBearCreditSpread Returns the
matrix [stock price, buy put, sell put, profit] of a long bearish credit spread (buying a higher strike put with a high price and selling a lower strike put with a low price). 429.
B2OptionStrategyLongBullCreditSpread Returns the matrix [stock price, buy put, sell put, profit] of a bullish credit spread (buying a lower strike put at a low price and selling a higher strike put
at a high price). 430. B2OptionStrategyLongBearDebitSpread Returns the matrix [stock price, buy call, sell call, profit] of a long bearish debit spread (buying a higher strike call with a low price
and selling a lower strike call with a high price). 431. B2OptionStrategyLongBullDebitSpread Returns the matrix [stock price, buy call, sell call, profit] of a bullish debit spread (buying a lower
strike call at a high price and selling a further out-of-the-money higher strike call at a low price). 432. B2OptionStrategyLongCoveredCall Returns the matrix [stock price, buy stock, sell call,
profit] of a long covered call position (buying the stock and selling a call of the same asset). 433. B2OptionStrategyLongProtectivePut Returns the matrix [stock price, buy stock, buy put, profit] of
a long protective put position (buying the stock and buying a put of the same asset).
434. B2OptionStrategyLongStraddle Returns the matrix [stock price, buy call, buy put, profit] of a long straddle position (buying an equal number of puts and calls with identical strike price and
expiration) to profit from high volatility. 435. B2OptionStrategyLongStrangle Returns the matrix [stock price, buy call, buy put, profit] of a long strangle (buying a higher strike call at a low
price and buying a lower strike put at a low price—close expirations) to profit from high volatility. 436. B2OptionStrategyWriteCoveredCall Returns the matrix [stock price, sell stock, buy call,
profit] of writing a covered call (selling the stock and buying a call of the same asset). 437. B2OptionStrategyWriteProtectivePut Returns the matrix [stock price, sell stock, sell put, profit] of
writing a protective put position (selling the stock and selling a put of the same asset). 438. B2OptionStrategyWriteStraddle Returns the matrix [stock price, sell call, sell put, profit] of writing
a straddle position (selling an equal number of puts and calls with identical strike price and expiration) to profit from low volatility. 439. B2OptionStrategyWriteStrangle Returns the matrix [stock
price, sell call, sell put, profit] of writing a strangle (sell a higher strike call at a low price and sell a lower strike put at a low price—close expirations) to profit from low volatility. 440.
B2Payback Computes the payback period given some initial investment and subsequent cash flows. 441. B2PerpetualCallOption Computes the American perpetual call option. Note that it returns an error if
dividend is 0% (this is because the American option reverts to European and a perpetual European has no value). 442. B2PerpetualPutOption Computes the American perpetual put option. Note that it
returns an error if dividend is 0% (this is because the American option reverts to European and a perpetual European has no value). 443. B2PortfolioReturns Computes the portfolio weighted average
expected returns given individual asset returns and allocations. 444. B2PortfolioRisk Computes the portfolio risk given individual asset allocations and variance-covariance matrix. 445.
B2PortfolioVariance Computes the portfolio variance given individual asset allocations and variance-covariance matrix. Take the square root of the result to obtain the portfolio risk. 446.
B2ProbabilityDefaultAdjustedBondYield Computes the required risk-adjusted yield (premium spread plus risk-free rate) to charge given the cumulative probability of default.
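Items 443 through 445 are the textbook portfolio moments. A minimal NumPy sketch (illustrative; the variable names and data are invented):

import numpy as np

w = np.array([0.5, 0.3, 0.2])            # asset allocations, summing to 1
mu = np.array([0.08, 0.10, 0.12])        # individual expected returns
cov = np.array([[0.040, 0.010, 0.000],   # variance-covariance matrix
                [0.010, 0.090, 0.020],
                [0.000, 0.020, 0.160]])

port_return = w @ mu                 # cf. B2PortfolioReturns (weighted average return)
port_variance = w @ cov @ w          # cf. B2PortfolioVariance (w' * Sigma * w)
port_risk = np.sqrt(port_variance)   # cf. B2PortfolioRisk (square root of the variance)
print(port_return, port_variance, port_risk)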
447. B2ProbabilityDefaultAverageDefaults Credit Risk Plus’ average number of credit defaults per period using total portfolio credit exposures, average cumulative probability of default, and
percentile Value at Risk for the portfolio. 448. B2ProbabilityDefaultCorrelation Computes the correlations of default probabilities given the probabilities of default of each asset and the
correlation between their equity prices. The result is typically much smaller than the equity correlation. 449. B2ProbabilityDefaultCumulativeBondYieldApproach Computes the cumulative probability of
default from Year 0 to Maturity using a comparable zero bond yield versus a zero risk-free yield and accounting for a recovery rate. 450. B2ProbabilityDefaultCumulativeSpreadApproach Computes the
cumulative probability of default from Year 0 to Maturity using a comparable risky debt’s spread (premium) versus the risk-free rate and accounting for a recovery rate. 451.
B2ProbabilityDefaultHazardRate Computes the hazard rate for a specific year (in survival analysis) using a comparable zero bond yield versus a zero risk-free yield and accounting for a recovery rate.
452. B2ProbabilityDefaultMertonDefaultDistance Distance to Default (does not require market returns and correlations but requires the internal growth rates). 453. B2ProbabilityDefaultMertonI
Probability of Default (without regard to Equity Value or Equity Volatility, but requires asset, debt, and market values). 454. B2ProbabilityDefaultMertonII Probability of Default (does not require
market returns and correlations but requires the internal asset value and asset volatility). 455. B2ProbabilityDefaultMertonImputedAssetValue Returns the imputed market value of asset given external
equity value, equity volatility, and other option inputs. Used in the Merton probability of default model. 456. B2ProbabilityDefaultMertonImputedAssetVolatility Returns the imputed volatility of
asset given external equity value, equity volatility, and other option inputs. Used in the Merton probability of default model. 457. B2ProbabilityDefaultMertonMVDebt Computes the market value of debt
(for risky debt) in the Merton-based simultaneous options model. 458. B2ProbabilityDefaultMertonRecoveryRate Computes the rate of recovery in percent for risky debt in the Merton-based simultaneous
options model. 459. B2ProbabilityDefaultPercentileDefaults Credit Risk Plus method to compute the percentile given some estimated average number of defaults per period.
460. B2PropertyDepreciation Value of the periodic depreciation allowed on a commercial real estate project, given the percent of price going to improvement and the allowed recovery period. 461.
B2PropertyEquityRequired Value of the required equity down payment on a commercial real estate project, given the valuation of the project. 462. B2PropertyLoanAmount Value of the required mortgage
amount on a commercial real estate project, given the value of the project and the required loan-to-value ratio (the percentage of the project's value that the loan represents). 463.
B2PropertyValuation Value of a commercial real estate property assuming Gross Rent, Vacancy, Operating Expenses, and the Cap Rate at Purchase Date (Net Operating Income/Sale Price). 464.
B2PutCallParityCalltoPut Computes the European put option value given the value of a corresponding European call option with identical input assumptions. 465. B2PutCallParityCalltoPutCurrencyOptions
Computes the European currency put option value given the value of a corresponding European currency call option on futures and forwards with identical input assumptions. 466.
B2PutCallParityCalltoPutFutures Computes the value of a European put option on futures and forwards given the value of a corresponding European call option on futures and forwards with identical
input assumptions. 467. B2PutCallParityPuttoCall Computes the European call option value given the value of a corresponding European put option with identical input assumptions. 468.
B2PutCallParityPuttoCallCurrencyOptions Computes the value of a European currency call option given the value of a corresponding European currency put option on futures and forwards with identical
input assumptions. 469. B2PutCallParityPuttoCallFutures Computes the value of a European call option on futures and forwards given the value of a corresponding European put option on futures and
forwards with identical input assumptions. 470. B2PutDelta Returns the option valuation sensitivity Delta (a put option value’s sensitivity to changes in the asset value). 471. B2PutGamma Returns the
option valuation sensitivity Gamma (a put option value’s sensitivity to changes in the Delta value). 472. B2PutOptionOnTheMax The maximum values at expiration of both assets are used in option exercise, where the put option payoff at expiration is the strike price against the maximum price between Asset 1 and Asset 2. 473. B2PutOptionOnTheMin The minimum values at expiration of both assets are used in option exercise, where the put option payoff at expiration is the strike price against the minimum price between Asset 1 and Asset 2. 474. B2PutRho Returns the option valuation sensitivity Rho (a put option value’s sensitivity to changes in the interest rate). 475. B2PutTheta Returns the option valuation sensitivity Theta (a put option value’s sensitivity to changes in the maturity). 476. B2PutVega Returns the option valuation sensitivity Vega (a put option value’s sensitivity to changes in the volatility). 477. B2QueuingMCAveCustomersinSystem Average number of customers in the system, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times. 478. B2QueuingMCAveCustomersWaiting Average number of customers in the waiting line, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times. 479. B2QueuingMCAveTimeinSystem Average time a customer spends in the system, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times. 480. B2QueuingMCAveTimeWaiting Average time a customer spends in the waiting line, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times. 481. B2QueuingMCProbHaveToWait Probability an arriving customer has to wait, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times. 482. B2QueuingMCProbNoCustomer Probability that no customers are in the system, using a multiple-channel queuing model assuming a Poisson arrival rate with Exponential distribution of service times. 483. B2QueuingMGKAveCustomersinSystem Average number of customers in the system, using a multiple-channel queuing model assuming a Poisson arrival rate with unknown distribution of service times. 484. B2QueuingMGKCostPerPeriod Total cost per time period, using a multiple-channel queuing model assuming a Poisson arrival rate with unknown distribution of service times.
485. B2QueuingMGKProbBusy Probability a channel will be busy, using a multiple-channel queuing model assuming a Poisson arrival rate with unknown distribution of service times. 486.
B2QueuingSCAAveCustomersinSystem Average number of customers in the system, using an MG1 single-channel arbitrary queuing model assuming a Poisson arrival rate with unknown distribution of service
times. 487. B2QueuingSCAAveCustomersWaiting Average number of customers in the waiting line, using an MG1 single-channel arbitrary queuing model assuming a Poisson arrival rate with unknown
distribution of service times. 488. B2QueuingSCAAveTimeinSystem Average time a customer spends in the system, using an MG1 single-channel arbitrary queuing model assuming a Poisson arrival rate with
unknown distribution of service times. 489. B2QueuingSCAAveTimeWaiting Average time a customer spends in the waiting line, using an MG1 single-channel arbitrary queuing model assuming a Poisson
arrival rate with unknown distribution of service times. 490. B2QueuingSCAProbHaveToWait Probability an arriving customer has to wait, using an MG1 single-channel arbitrary queuing model assuming a
Poisson arrival rate with unknown distribution of service times. 491. B2QueuingSCAProbNoCustomer Probability that no customers are in the system, using an MG1 single-channel arbitrary queuing model
assuming a Poisson arrival rate with unknown distribution of service times. 492. B2QueuingSCAveCustomersinSystem Average number of customers in the system, using a single-channel queuing model. 493.
B2QueuingSCAveCustomersWaiting Returns the average number of customers in the waiting line, using a single-channel queuing model. 494. B2QueuingSCAveTimeinSystem Average time a customer spends in the
system, using a single-channel queuing model. 495. B2QueuingSCAveTimeWaiting Average time a customer spends in the waiting line, using a single-channel queuing model. 496. B2QueuingSCProbHaveToWait
Probability an arriving customer has to wait, using a single-channel queuing model. 497. B2QueuingSCProbNoCustomer Returns the probability that no customers are in the system, using a single-channel
queuing model.
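Items 492 through 497 are the classic single-channel (M/M/1) results, which are compact enough to state in full. A sketch under the usual assumptions (Poisson arrivals at rate lam, exponential service at rate mu, lam < mu); the function name is mine:

def mm1_metrics(lam, mu):
    """Steady-state M/M/1 queue metrics (standard formulas)."""
    assert lam < mu, "utilization must be below 1"
    rho = lam / mu
    return {
        "avg_in_system":  lam / (mu - lam),            # cf. B2QueuingSCAveCustomersinSystem
        "avg_waiting":    lam**2 / (mu * (mu - lam)),  # cf. B2QueuingSCAveCustomersWaiting
        "time_in_system": 1 / (mu - lam),              # cf. B2QueuingSCAveTimeinSystem
        "time_waiting":   lam / (mu * (mu - lam)),     # cf. B2QueuingSCAveTimeWaiting
        "prob_wait":      rho,                         # cf. B2QueuingSCProbHaveToWait
        "prob_empty":     1 - rho,                     # cf. B2QueuingSCProbNoCustomer
    }

print(mm1_metrics(4, 6))  # e.g., 4 arrivals and 6 services per hour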
498. B2RatiosBasicEarningPower Computes the basic earning power (BEP) by accounting for earnings before interest and taxes (EBIT) and the amount of total assets employed. 499. B2RatiosBetaLevered
Computes the levered beta from an unlevered beta level after accounting for the tax rate, total debt, and equity values. 500. B2RatiosBetaUnlevered Computes the unlevered beta from a levered beta
level after accounting for the tax rate, total debt, and equity values. 501. B2RatiosBookValuePerShare Computes the book value per share (BV) by accounting for the total common equity amount and
number of shares outstanding. 502. B2RatiosCapitalCharge Computes the capital charge value (typically used to compute the economic profit of a project). 503. B2RatiosCAPM Computes the capital asset
pricing model’s required rate of return in percent, given some benchmark market return, beta risk coefficient, and risk-free rate. 504. B2RatiosCashFlowtoEquityLeveredFirm Cash flow to equity for a
levered firm (accounting for operating expenses, taxes, depreciation, amortization, capital expenditures, change in working capital, preferred dividends, principal repaid, and new debt issues). 505.
B2RatiosCashFlowtoEquityUnleveredFirm Cash flow to equity for an unlevered firm (accounting for operating expenses, taxes, depreciation, amortization, capital expenditures, and change in working capital). 506. B2RatiosCashFlowtoFirm Cash flow to the firm (accounting for earnings before interest and taxes [EBIT], tax rate, depreciation, capital expenditures, and change in working capital).
507. B2RatiosCashFlowtoFirm2 Cash flow to the firm (accounting for net operating profit after taxes [NOPAT], depreciation, capital expenditures, and change in working capital). 508.
B2RatiosContinuingValue1 Computes the continuing value based on a constant growth rate of free cash flows to perpetuity using a Gordon Growth Model. 509. B2RatiosContinuingValue2 Computes the
continuing value based on a constant growth rate of free cash flows to perpetuity using net operating profit after taxes (NOPAT), return on invested capital (ROIC), growth rate, and current free cash
flow. 510. B2RatiosCostEquity Computes the cost of equity (as used in a CAPM model) using the dividend rate, growth rate of dividends, and current equity price. 511. B2RatiosCurrentRatio Computes the
current ratio by accounting for the individual asset and liabilities.
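Items 499, 500, and 503 correspond to standard relationships: the Hamada levering/unlevering formula and the CAPM line. A hedged sketch, assuming these are indeed the Hamada and CAPM forms (the add-in may differ in details):

def beta_levered(beta_unlev, tax_rate, debt, equity):
    """Hamada relevering: beta_L = beta_U * (1 + (1 - t) * D / E)."""
    return beta_unlev * (1 + (1 - tax_rate) * debt / equity)

def beta_unlevered(beta_lev, tax_rate, debt, equity):
    """Inverse of the relevering formula."""
    return beta_lev / (1 + (1 - tax_rate) * debt / equity)

def capm_required_return(risk_free, beta, market_return):
    """CAPM: r = rf + beta * (rm - rf), cf. B2RatiosCAPM."""
    return risk_free + beta * (market_return - risk_free)

b = beta_levered(0.90, 0.35, debt=40, equity=60)      # ~1.29
print(round(capm_required_return(0.04, b, 0.10), 4))  # ~0.1174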
512. B2RatiosDaysSalesOutstanding Computes the days sales outstanding by looking at the accounts receivable value, total annual sales, and number of days per year. 513. B2RatiosDebtAssetRatio
Computes the debt-to-asset ratio by accounting for the total debt and total asset values. 514. B2RatiosDebtEquityRatio Computes the debt-to-equity ratio by accounting for the total debt and total
common equity levels. 515. B2RatiosDebtRatio1 Computes the debt ratio by accounting for the total debt and total asset values. 516. B2RatiosDebtRatio2 Computes the debt ratio by accounting for the
total equity and total asset values. 517. B2RatiosDividendsPerShare Computes the dividends per share (DPS) by accounting for the dividend payment amount and number of shares outstanding. 518.
B2RatiosEarningsPerShare Computes the earnings per share (EPS) by accounting for the net income amount and number of shares outstanding. 519. B2RatiosEconomicProfit1 Computes the economic profit
using invested capital, return on invested capital (ROIC), and weighted average cost of capital (WACC). 520. B2RatiosEconomicProfit2 Computes the economic profit using net operating profit after
taxes (NOPAT), return on invested capital (ROIC), and weighted average cost of capital (WACC). 521. B2RatiosEconomicProfit3 Computes the economic profit using net operating profit after taxes (NOPAT)
and capital charge. 522. B2RatiosEconomicValueAdded Computes the economic value added using earnings before interest and taxes (EBIT), total capital employed, tax rate, and weighted average cost of
capital (WACC). 523. B2RatiosEquityMultiplier Computes the equity multiplier (the ratio of total assets to total equity). 524. B2RatiosFixedAssetTurnover Computes the fixed asset turnover by
accounting for the annual sales levels and net fixed assets. 525. B2RatiosInventoryTurnover Computes the inventory turnover using sales and inventory levels. 526. B2RatiosMarketBookRatio1 Computes
the market to book value (BV) per share by accounting for the share price and the book value per share.
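Items 519 through 521 state the same economic-profit identity three ways: EP = invested capital x (ROIC - WACC) = NOPAT - capital charge. A small sketch of the arithmetic with invented values:

invested_capital = 1_000.0
roic, wacc = 0.12, 0.09
nopat = invested_capital * roic            # 120.0
capital_charge = invested_capital * wacc   # 90.0, cf. B2RatiosCapitalCharge

ep1 = invested_capital * (roic - wacc)     # cf. B2RatiosEconomicProfit1 -> 30.0
ep3 = nopat - capital_charge               # cf. B2RatiosEconomicProfit3 -> 30.0
assert abs(ep1 - ep3) < 1e-9
print(ep1)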
527. B2RatiosMarketBookRatio2 Computes the market to book value per share by accounting for the share price, total common equity value, and number of shares outstanding. 528. B2RatiosMarketValueAdded
Computes the market value added by accounting for the stock price, total common equity, and number of shares outstanding. 529. B2RatiosNominalCashFlow Computes the nominal cash flow amount assuming
some inflation rate, real cash flow, and the number of years in the future. 530. B2RatiosNominalDiscountRate Computes the nominal discount rate assuming some inflation rate and real discount rate.
531. B2RatiosPERatio1 Computes the price-to-earnings (P/E) ratio using stock price and earnings per share (EPS). 532. B2RatiosPERatio2 Computes the price-to-earnings (P/E) ratio using stock price,
net income, and number of shares outstanding. 533. B2RatiosPERatio3 Computes the price-to-earnings (P/E) ratio using growth rates, rate of return, and discount rate. 534. B2RatiosProfitMargin
Computes the profit margin by taking the ratio of net income to annual sales. 535. B2RatiosQuickRatio Computes the quick ratio by accounting for the individual assets and liabilities. 536.
B2RatiosRealCashFlow Computes the real cash flow amount assuming some inflation rate, nominal cash flow (Nominal CF), and the number of years in the future. 537. B2RatiosRealDiscountRate Computes the
real discount rate assuming some inflation rate and nominal discount rate. 538. B2RatiosReturnonAsset1 Computes the return on assets using net income amount and total assets employed. 539.
B2RatiosReturnonAsset2 Computes the return on assets using net profit margin percentage and total asset turnover ratio. 540. B2RatiosReturnonEquity1 Computes return on equity using net income and
total common equity values. 541. B2RatiosReturnonEquity2 Computes return on equity using return on assets (ROA), total assets, and total equity values. 542. B2RatiosReturnonEquity3 Computes return on
equity using net income, total sales, total assets, and total common equity values.
543. B2RatiosReturnonEquity4 Computes return on equity using net profit margin, total asset turnover, and equity multiplier values. 544. B2RatiosROIC Computes the return on invested capital
(typically used for computing economic profit) accounting for change in working capital; property, plant, and equipment (PPE); and other assets. 545. B2RatiosShareholderEquity Computes the common
shareholder’s equity after accounting for total assets, total liabilities, and preferred stocks. 546. B2RatiosTimesInterestEarned Computes the times interest earned ratio by accounting for earnings
before interest and taxes (EBIT) and the amount of interest payment. 547. B2RatiosTotalAssetTurnover Computes the total asset turnover by accounting for the annual sales levels and total assets. 548.
B2RatiosWACC1 Computes the weighted average cost of capital (WACC) using market values of debt, preferred equity, and common equity, as well as their respective costs. 549. B2RatiosWACC2 Computes the
weighted average cost of capital (WACC) using market values of debt and common equity, as well as their respective costs. 550. B2ROBinomialAmericanAbandonContract Returns the American
option to abandon and contract using a binomial lattice model. 551. B2ROBinomialAmericanAbandonContractExpand Returns the American option to abandon, contract, and expand using a binomial lattice
model. 552. B2ROBinomialAmericanAbandonExpand Returns the American option to abandon and expand using a binomial lattice model. 553. B2ROBinomialAmericanAbandonment Returns the American option to
abandon using a binomial lattice model. 554. B2ROBinomialAmericanCall Returns the American call option with dividends using a binomial lattice model. 555. B2ROBinomialAmericanChangingRiskFree Returns
the American call option with dividends and assuming the risk-free rate changes over time, using a binomial lattice model. 556. B2ROBinomialAmericanChangingVolatility Returns the American call option
with dividends and assuming the volatility changes over time, using a binomial lattice model. Use a small number of steps or it will take a long time to compute! 557. B2ROBinomialAmericanContractExpand
Returns the American option to contract and expand using a binomial lattice model.
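Items 550 through 561 all rest on the same backward-induction binomial lattice. The following is a minimal sketch of the standard Cox-Ross-Rubinstein method for the plain American call with a continuous dividend yield (item 554); it illustrates the technique, not the add-in's exact lattice, and the names are mine:

from math import exp, sqrt

def crr_american_call(S, X, T, r, q, sigma, steps):
    """American call on a dividend-paying asset via a CRR binomial lattice."""
    dt = T / steps
    u = exp(sigma * sqrt(dt))
    d = 1 / u
    p = (exp((r - q) * dt) - d) / (u - d)   # risk-neutral up probability
    disc = exp(-r * dt)
    # terminal payoffs at the last step
    values = [max(S * u**j * d**(steps - j) - X, 0.0) for j in range(steps + 1)]
    # roll back through the lattice with an early-exercise check at each node
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = S * u**j * d**(i - j) - X
            values[j] = max(cont, exercise)
    return values[0]

print(round(crr_american_call(100, 100, 1.0, 0.05, 0.03, 0.25, 200), 4))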
558. B2ROBinomialAmericanContraction Returns the American option to contract using a binomial lattice model. 559. B2ROBinomialAmericanCustomCall Returns the American custom call option with changing
inputs, vesting periods, and suboptimal exercise multiple using a binomial lattice model. 560. B2ROBinomialAmericanExpansion Returns the American option to expand using a binomial lattice model. 561.
B2ROBinomialAmericanPut Returns the American put option with dividends using a binomial lattice model. 562. B2ROBinomialBermudanAbandonContract Returns the Bermudan option to abandon and contract
using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed. 563. B2ROBinomialBermudanAbandonContractExpand Returns the Bermudan option to
abandon, contract, and expand, using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed. 564. B2ROBinomialBermudanAbandonExpand Returns the
Bermudan option to abandon and expand using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed. 565. B2ROBinomialBermudanAbandonment Returns
the Bermudan option to abandon using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed. 566. B2ROBinomialBermudanCall Returns the Bermudan
call option with dividends, where there is a vesting/blackout period during which the option cannot be executed. 567. B2ROBinomialBermudanContractExpand Returns the Bermudan option to contract and
expand, using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed. 568. B2ROBinomialBermudanContraction Returns the Bermudan option to
contract using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed. 569. B2ROBinomialBermudanExpansion Returns the Bermudan option to expand
using a binomial lattice model, where there is a vesting/blackout period during which the option cannot be executed. 570. B2ROBinomialBermudanPut Returns the Bermudan put option with dividends, where
there is a vesting/blackout period during which the option cannot be executed. 571. B2ROBinomialEuropeanAbandonContract Returns the European option to abandon and contract, using a binomial lattice
model, where the option can be executed only at expiration.
572. B2ROBinomialEuropeanAbandonContractExpand Returns the European option to abandon, contract, and expand, using a binomial lattice model, where the option can be executed only at expiration. 573.
B2ROBinomialEuropeanAbandonExpand Returns the European option to abandon and expand, using a binomial lattice model, where the option can be executed only at expiration. 574.
B2ROBinomialEuropeanAbandonment Returns the European option to abandon using a binomial lattice model, where the option can be executed only at expiration. 575. B2ROBinomialEuropeanCall Returns the
European call option with dividends, where the option can be executed only at expiration. 576. B2ROBinomialEuropeanContractExpand Returns the European option to contract and expand, using a binomial
lattice model, where the option can be executed only at expiration. 577. B2ROBinomialEuropeanContraction Returns the European option to contract using a binomial lattice model, where the option can
be executed only at expiration. 578. B2ROBinomialEuropeanExpansion Returns the European option to expand using a binomial lattice model, where the option can be executed only at expiration. 579.
B2ROBinomialEuropeanPut Returns the European put option with dividends, where the option can be executed only at expiration. 580. B2ROJumpDiffusionCall Returns the closed-form model for a European
call option whose underlying asset follows a Poisson Jump Diffusion process. 581. B2ROJumpDiffusionPut Returns the closed-form model for a European put option whose underlying asset follows a Poisson
Jump Diffusion process. 582. B2ROMeanRevertingCall Returns the closed-form model for a European call option whose underlying asset follows a mean-reversion process. 583. B2ROMeanRevertingPut Returns
the closed-form model for a European put option whose underlying asset follows a mean-reversion process. 584. B2ROPentanomialAmericanCall Returns the Rainbow American call option with two underlying
assets (these are typically price and quantity, and are multiplied together to form a new combinatorial pentanomial lattice). 585. B2ROPentanomialAmericanPut Returns the Rainbow American put option
with two underlying assets (these are typically price and quantity, and are multiplied together to form a new combinatorial pentanomial lattice).
586. B2ROPentanomialEuropeanCall Returns the Rainbow European call option with two underlying assets (these are typically price and quantity, and are multiplied together to form a new combinatorial
pentanomial lattice). 587. B2ROPentanomialEuropeanPut Returns the Rainbow European put option with two underlying assets (these are typically price and quantity, and are multiplied together to form a
new combinatorial pentanomial lattice). 588. B2ROQuadranomialJumpDiffusionAmericanCall Returns the American call option whose underlying asset follows a Poisson Jump Diffusion process, using a
combinatorial quadranomial lattice. 589. B2ROQuadranomialJumpDiffusionAmericanPut Returns the American put option whose underlying asset follows a Poisson Jump Diffusion process, using a
combinatorial quadranomial lattice. 590. B2ROQuadranomialJumpDiffusionEuropeanCall Returns the European call option whose underlying asset follows a Poisson Jump Diffusion process, using a
combinatorial quadranomial lattice. 591. B2ROQuadranomialJumpDiffusionEuropeanPut Returns the European put option whose underlying asset follows a Poisson Jump Diffusion process, using a
combinatorial quadranomial lattice. 592. B2ROStateAmericanCall Returns the American call option using a state jump function, where the up and down states can be asymmetrical, solved in a lattice
model. 593. B2ROStateAmericanPut Returns the American put option using a state jump function, where the up and down states can be asymmetrical, solved in a lattice model. 594. B2ROStateBermudanCall
Returns the Bermudan call option using a state jump function, where the up and down states can be asymmetrical, solved in a lattice model, and where the option cannot be exercised during certain
vesting/blackout periods. 595. B2ROStateBermudanPut Returns the Bermudan put option using a state jump function, where the up and down states can be asymmetrical, solved in a lattice model, and where
the option cannot be exercised during certain vesting/blackout periods. 596. B2ROStateEuropeanCall Returns the European call option using a state jump function, where the up and down states can be
asymmetrical, solved in a lattice model, and where the option can be exercised only at maturity. 597. B2ROStateEuropeanPut Returns the European put option using a state jump function, where the up
and down states can be asymmetrical, solved in a lattice model, and where the option can be exercised only at maturity. 598. B2ROTrinomialAmericanCall Returns the American call option with dividend,
solved using a trinomial lattice.
599. B2ROTrinomialAmericanMeanRevertingCall Returns the American call option with dividend, assuming the underlying asset is mean-reverting, and solved using a trinomial lattice. 600.
B2ROTrinomialAmericanMeanRevertingPut Returns the American put option with dividend, assuming the underlying asset is mean-reverting, and solved using a trinomial lattice. 601.
B2ROTrinomialAmericanPut Returns the American put option with dividend, solved using a trinomial lattice. 602. B2ROTrinomialBermudanCall Returns the Bermudan call option with dividend, solved using a
trinomial lattice, where during certain vesting/blackout periods the option cannot be exercised. 603. B2ROTrinomialBermudanPut Returns the Bermudan put option with dividend, solved using a trinomial
lattice, where during certain vesting/blackout periods the option cannot be exercised. 604. B2ROTrinomialEuropeanCall Returns the European call option with dividend, solved using a trinomial lattice,
where the option can be exercised only at maturity. 605. B2ROTrinomialEuropeanMeanRevertingCall Returns the European call option with dividend, solved using a trinomial lattice, assuming the
underlying asset is mean-reverting, and where the option can be exercised only at maturity. 606. B2ROTrinomialEuropeanMeanRevertingPut Returns the European put option with dividend, solved using a
trinomial lattice, assuming the underlying asset is mean-reverting, and where the option can be exercised only at maturity. 607. B2ROTrinomialEuropeanPut Returns the European put option with
dividend, solved using a trinomial lattice, where the option can be exercised only at maturity. 608. B2SCurveValue Computes the S-Curve extrapolation’s next forecast value based on previous value,
growth rate, and maximum capacity levels. 609. B2SCurveValueSaturation Computes the S-Curve extrapolation’s saturation level based on previous value, growth rate, and maximum capacity levels. 610.
B2SemiStandardDeviationPopulation Computes the semi-standard deviation of the population; that is, only the values below the mean are used to compute an adjusted population standard deviation, a more
appropriate measure of downside risk. 611. B2SemiStandardDeviationSample Computes the semi-standard deviation of the sample; that is, only the values below the mean are used to compute an adjusted
sample standard deviation, a more appropriate measure of downside risk.
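Items 610 and 611 use only below-mean observations. One common convention divides the below-mean squared deviations by the count of those observations (or that count minus one for the sample version); the add-in's exact convention may differ, so this is a sketch of the idea:

from math import sqrt

def semi_stdev(data, population=True):
    """Downside semi-standard deviation: dispersion of only the values
    below the overall mean (one common convention; may differ from the add-in's)."""
    mean = sum(data) / len(data)
    below = [x for x in data if x < mean]
    n = len(below) if population else len(below) - 1
    return sqrt(sum((x - mean) ** 2 for x in below) / n)

returns = [0.12, -0.05, 0.07, -0.11, 0.03, 0.09]
print(semi_stdev(returns))                    # population version
print(semi_stdev(returns, population=False))  # sample version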
612. B2SharpeRatio Computes the Sharpe Ratio (returns-to-risk ratio) based on a series of stock prices of an asset and a market benchmark series of prices. 613. B2SimulateBernoulli Returns simulated
random numbers from the Bernoulli distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 614. B2SimulateBeta Returns simulated random
numbers from the Beta distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 615. B2SimulateBinomial Returns simulated random numbers
from the Binomial distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 616. B2SimulateChiSquare Returns simulated random numbers from
the Chi-Square distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 617. B2SimulatedEuropeanCall Returns the Monte Carlo simulated
European call option (only European options can be approximated well with simulation). This function is volatile. 618. B2SimulatedEuropeanPut Returns the Monte Carlo simulated European put option
(only European options can be approximated well with simulation). This function is volatile. 619. B2SimulateDiscreteUniform Returns simulated random numbers from the Discrete Uniform distribution.
Type in RAND() as the random input parameter to generate volatile random values from this distribution. 620. B2SimulateExponential Returns simulated random numbers from the Exponential distribution.
Type in RAND() as the random input parameter to generate volatile random values from this distribution. 621. B2SimulateFDist Returns simulated random numbers from the F distribution. Type in RAND()
as the random input parameter to generate volatile random values from this distribution. 622. B2SimulateGamma Returns simulated random numbers from the Gamma distribution. Type in RAND() as the
random input parameter to generate volatile random values from this distribution. 623. B2SimulateGeometric Returns simulated random numbers from the Geometric distribution. Type in RAND() as the
random input parameter to generate volatile random values from this distribution.
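Judging by the RAND() instruction, the B2Simulate* family (items 613 through 636) follows the inverse-transform pattern: a Uniform(0,1) draw is pushed through the distribution's inverse CDF. A sketch of that pattern for two distributions (illustrative, not the add-in's code):

import random
from math import log
from statistics import NormalDist

def simulate_normal(u, mean=0.0, stdev=1.0):
    """Inverse-transform draw: feed a Uniform(0,1) value (Excel's RAND())
    through the normal inverse CDF."""
    return NormalDist(mean, stdev).inv_cdf(u)

def simulate_exponential(u, rate):
    """Exponential inverse CDF: -ln(1 - u) / rate."""
    return -log(1.0 - u) / rate

u = random.random()            # stand-in for RAND()
print(simulate_normal(u, 10, 2))
print(simulate_exponential(u, 0.5))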
624. B2SimulateGumbelMax Returns simulated random numbers from the Gumbel Max distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
625. B2SimulateGumbelMin Returns simulated random numbers from the Gumbel Min distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
626. B2SimulateLogistic Returns simulated random numbers from the Logistic distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 627.
B2SimulateLognormal Returns simulated random numbers from the Lognormal distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 628.
B2SimulateNormal Returns simulated random numbers from the Normal distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 629.
B2SimulatePareto Returns simulated random numbers from the Pareto distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 630.
B2SimulatePoisson Returns simulated random numbers from the Poisson distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 631.
B2SimulateRayleigh Returns simulated random numbers from the Rayleigh distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 632.
B2SimulateStandardNormal Returns simulated random numbers from the Standard Normal distribution. Type in RAND() as the random input parameter to generate volatile random values from this
distribution. 633. B2SimulateTDist Returns simulated random numbers from the Student’s T distribution. Type in RAND() as the random input parameter to generate volatile random values from this
distribution. 634. B2SimulateTriangular Returns simulated random numbers from the Triangular distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution.
635. B2SimulateUniform Returns simulated random numbers from the Uniform distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 636.
B2SimulateWeibull Returns simulated random numbers from the Weibull distribution. Type in RAND() as the random input parameter to generate volatile random values from this distribution. 637.
B2SixSigmaControlCChartCL Computes the center line in a control c-chart. C-charts are applicable when only the number of defects is important. 638. B2SixSigmaControlCChartDown1Sigma Computes the
lower 1 sigma limit in a control c-chart. C-charts are applicable when only the number of defects is important. 639. B2SixSigmaControlCChartDown2Sigma Computes the lower 2 sigma limit in a control
c-chart. C-charts are applicable when only the number of defects is important. 640. B2SixSigmaControlCChartLCL Computes the lower control limit in a control c-chart. C-charts are applicable when only
the number of defects is important. 641. B2SixSigmaControlCChartUCL Computes the upper control limit in a control c-chart. C-charts are applicable when only the number of defects is important. 642.
B2SixSigmaControlCChartUp1Sigma Computes the upper 1 sigma limit in a control c-chart. C-charts are applicable when only the number of defects is important. 643. B2SixSigmaControlCChartUp2Sigma
Computes the upper 2 sigma limit in a control c-chart. C-charts are applicable when only the number of defects is important. 644. B2SixSigmaControlNPChartCL Computes the center line in a control
np-chart. NP-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes is constant. 645. B2SixSigmaControlNPChartDown1Sigma
Computes the lower 1 sigma limit in a control np-chart. NP-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes is
constant. 646. B2SixSigmaControlNPChartDown2Sigma Computes the lower 2 sigma limit in a control np-chart. NP-charts are applicable when proportions of defects are important, and where in each
experimental subgroup the number of sample sizes is constant. 647. B2SixSigmaControlNPChartLCL Computes the lower control limit in a control np-chart. NP-charts are applicable when proportions of
defects are important, and where in each experimental subgroup the number of sample sizes is constant.
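Items 637 through 647 above, and the remaining np-chart limits below, are the usual attribute-chart formulas: for a c-chart the limits are c_bar plus or minus 3*sqrt(c_bar); for an np-chart with constant subgroup size n they are n*p_bar plus or minus 3*sqrt(n*p_bar*(1-p_bar)); replacing 3 with 1 or 2 gives the inner sigma limits that the Up/Down functions return. A sketch of these standard formulas, which I assume match the add-in:

from math import sqrt

def c_chart_limits(c_bar):
    """c-chart: center line c_bar, limits at +/- 3*sqrt(c_bar), floored at 0."""
    s = sqrt(c_bar)
    return max(c_bar - 3 * s, 0.0), c_bar, c_bar + 3 * s  # LCL, CL, UCL

def np_chart_limits(n, p_bar):
    """np-chart (constant subgroup size n): center line n*p_bar,
    limits at +/- 3*sqrt(n*p_bar*(1 - p_bar)), floored at 0."""
    center = n * p_bar
    s = sqrt(n * p_bar * (1 - p_bar))
    return max(center - 3 * s, 0.0), center, center + 3 * s

print(c_chart_limits(9.0))        # (0.0, 9.0, 18.0)
print(np_chart_limits(50, 0.10))  # (0.0, 5.0, ~11.36)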
648. B2SixSigmaControlNPChartUCL Computes the upper control limit in a control np-chart. NP-charts are applicable when proportions of defects are important, and where in each experimental subgroup
the number of sample sizes is constant. 649. B2SixSigmaControlNPChartUp1Sigma Computes the upper 1 sigma limit in a control np-chart. NP-charts are applicable when proportions of defects are
important, and where in each experimental subgroup the number of sample sizes is constant. 650. B2SixSigmaControlNPChartUp2Sigma Computes the upper 2 sigma limit in a control np-chart. NP-charts are
applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes is constant. 651. B2SixSigmaControlPChartCL Computes the center line in a
control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes might be different. 652.
B2SixSigmaControlPChartDown1Sigma Computes the lower 1 sigma limit in a control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the
number of sample sizes might be different. 653. B2SixSigmaControlPChartDown2Sigma Computes the lower 2 sigma limit in a control p-chart. P-charts are applicable when proportions of defects are
important, and where in each experimental subgroup the number of sample sizes might be different. 654. B2SixSigmaControlPChartLCL Computes the lower control limit in a control p-chart. P-charts are
applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes might be different. 655. B2SixSigmaControlPChartUCL Computes the upper control
limit in a control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the number of sample sizes might be different. 656.
B2SixSigmaControlPChartUp1Sigma Computes the upper 1 sigma limit in a control p-chart. P-charts are applicable when proportions of defects are important, and where in each experimental subgroup the
number of sample sizes might be different. 657. B2SixSigmaControlPChartUp2Sigma Computes the upper 2 sigma limit in a control p-chart. P-charts are applicable when proportions of defects are
important, and where in each experimental subgroup the number of sample sizes might be different. 658. B2SixSigmaControlRChartCL Computes the center line in a control R-chart. R-charts are used when
the number of defects is important; in each subgroup experiment multiple measurements are taken, and the range of the measurements is the variable plotted.
659. B2SixSigmaControlRChartLCL Computes the lower control limit in a control R-chart. R-charts are used when the number of defects is important; in each subgroup experiment multiple measurements are
taken, and the range of the measurements is the variable plotted. 660. B2SixSigmaControlRChartUCL Computes the upper control limit in a control R-chart. R-charts are used when the number of defects
is important; in each subgroup experiment multiple measurements are taken, and the range of the measurements is the variable plotted. 661. B2SixSigmaControlUChartCL Computes the center line in a
control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 662.
B2SixSigmaControlUChartDown1Sigma Computes the lower 1 sigma limit in a control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the
number of sample sizes is the same. 663. B2SixSigmaControlUChartDown2Sigma Computes the lower 2 sigma limit in a control u-chart. U-charts are applicable when the number of defects is important, and
where in each experimental subgroup the number of sample sizes is the same. 664. B2SixSigmaControlUChartLCL Computes the lower control limit in a control u-chart. U-charts are applicable when the
number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 665. B2SixSigmaControlUChartUCL Computes the upper control limit in a control u-chart.
U-charts are applicable when the number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 666. B2SixSigmaControlUChartUp1Sigma Computes the
upper 1 sigma limit in a control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the number of sample sizes is the same. 667.
B2SixSigmaControlUChartUp2Sigma Computes the upper 2 sigma limit in a control u-chart. U-charts are applicable when the number of defects is important, and where in each experimental subgroup the
number of sample sizes is the same. 668. B2SixSigmaControlXChartCL Computes the center line in a control X-chart. X-charts are used when the number of defects is important; in each subgroup
experiment multiple measurements are taken, and the average of the measurements is the variable plotted. 669. B2SixSigmaControlXChartLCL Computes the lower control limit in a control X-chart.
X-charts are used when the number of defects is important; in each subgroup experiment multiple measurements are taken, and the average of the measurements is the variable plotted. 670. B2SixSigmaControlXChartUCL Computes the upper control limit in a control X-chart. X-charts are used when the number of defects is important; in each subgroup experiment multiple measurements are taken, and the average of the measurements is the variable plotted. 671. B2SixSigmaControlXMRChartCL Computes the center line in a control XmR-chart. XmR-charts are used when the number of defects is important; there is only a single measurement for each sample, and a time series of moving ranges is the variable plotted. 672. B2SixSigmaControlXMRChartLCL Computes the lower control limit in a control XmR-chart. XmR-charts are used when the number of defects is important; there is only a single measurement for each sample, and a time series of moving ranges is the variable plotted. 673. B2SixSigmaControlXMRChartUCL Computes the upper control limit in a control XmR-chart. XmR-charts are used when the number of defects is important; there is only a single measurement for each sample, and a time series of moving ranges is the variable plotted. 674. B2SixSigmaDeltaPrecision Computes the error precision given specific levels of Type I and Type II errors, as well as the sample size and variance. 675. B2SixSigmaSampleSize Computes the required minimum sample size given Type I and Type II errors, as well as the required precision of the mean and the error tolerances. 676. B2SixSigmaSampleSizeDPU Computes the required minimum sample size given Type I and Type II errors, as well as the required precision of the defects per unit and the error tolerances. 677. B2SixSigmaSampleSizeProportion Computes the required minimum sample size given Type I and Type II errors, as well as the required precision of the proportion of defects and the error tolerances.
678. B2SixSigmaSampleSizeStdev Computes the required minimum sample size given Type I and Type II errors, as well as the required precision of the standard deviation and the error tolerances. 679.
B2SixSigmaSampleSizeZeroCorrelTest Computes the required minimum sample size to test whether a correlation is statistically significant at an alpha of 0.05 and beta of 0.10. 680. B2SixSigmaStatCP
Computes the potential process capability index Cp given the actual mean and sigma of the process, including the upper and lower specification limits. 681. B2SixSigmaStatCPK Computes the process
capability index Cpk given the actual mean and sigma of the process, including the upper and lower specification limits.
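Items 680 and 681 are the standard capability indices: Cp = (USL - LSL) / (6*sigma) and Cpk = min(USL - mean, mean - LSL) / (3*sigma). A sketch of the arithmetic:

def cp(usl, lsl, sigma):
    """Potential capability: Cp = (USL - LSL) / (6*sigma)."""
    return (usl - lsl) / (6 * sigma)

def cpk(mean, sigma, usl, lsl):
    """Actual capability: Cpk = min(USL - mean, mean - LSL) / (3*sigma)."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

print(cp(10.6, 9.4, 0.15))         # 1.333...
print(cpk(10.1, 0.15, 10.6, 9.4))  # min(0.5, 0.7)/0.45 = 1.111...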
682. B2SixSigmaStatDPMO Computes the defects per million opportunities (DPMO) given the actual mean and sigma of the process, including the upper and lower specification limits. 683.
B2SixSigmaStatDPU Computes the proportion of defects per unit (DPU) given the actual mean and sigma of the process, including the upper and lower specification limits. 684. B2SixSigmaStatProcessSigma
Computes the process sigma level given the actual mean and sigma of the process, including the upper and lower specification limits. 685. B2SixSigmaStatYield Computes the nondefective parts or the
yield of the process, given the actual mean and sigma of the process, including the upper and lower specification limits. 686. B2SixSigmaUnitCPK Computes the process capability index Cpk given the
actual counts of defective parts and the total opportunities in the population. 687. B2SixSigmaUnitDPMO Computes the defects per million opportunities (DPMO) given the actual counts of defective
parts and the total opportunities in the population. 688. B2SixSigmaUnitDPU Computes the proportion of defects per unit (DPU) given the actual counts of defective parts and the total opportunities in
the population. 689. B2SixSigmaUnitProcessSigma Computes the process sigma level given the actual counts of defective parts and the total opportunities in the population. 690. B2SixSigmaUnitYield
Computes the nondefective parts or the yield of the process given the actual counts of defective parts and the total opportunities in the population. 691. B2StandardNormalBivariateCDF Given the two
Z-scores and correlation, returns the value of the bivariate standard normal (means of zero, variances of 1) cumulative distribution function. 692. B2StandardNormalCDF Given the Z-score, returns the
value of the standard normal (mean of zero, variance of 1) cumulative distribution function. 693. B2StandardNormalInverseCDF Computes the inverse cumulative distribution function of a standard normal
distribution (mean of zero, variance of 1). 694. B2StandardNormalPDF Given the Z-score, returns the value of the standard normal (mean of zero, variance of 1) probability density function. 695.
B2StockIndexCallOption Similar to a regular call option but the underlying asset is a reference stock index such as the Standard & Poor’s 500. The analysis can be solved using a Generalized
Black-Scholes-Merton model as well.
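Items 682 through 685 above follow from the normal tail areas outside the specification limits. A sketch under that normality assumption (function names are mine):

from statistics import NormalDist

def dpmo(mean, sigma, usl, lsl):
    """Defects per million opportunities for a normal process
    against two-sided specification limits."""
    N = NormalDist(mean, sigma)
    defect_rate = N.cdf(lsl) + (1 - N.cdf(usl))
    return 1_000_000 * defect_rate

def process_yield(mean, sigma, usl, lsl):
    """Nondefective fraction: 1 - defect rate."""
    return 1 - dpmo(mean, sigma, usl, lsl) / 1_000_000

print(round(dpmo(10.0, 0.15, 10.6, 9.4), 1))  # ~63.3 DPMO for +/-4 sigma limits
print(process_yield(10.0, 0.15, 10.6, 9.4))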
696. B2StockIndexPutOption Similar to a regular put option but the underlying asset is a reference stock index such as the Standard & Poor’s 500. The analysis can be solved using a Generalized
Black-Scholes-Merton model as well. 697. B2SuperShareOptions The option has value only if the stock or asset price is between the upper and lower barriers, and at expiration provides a payoff
equivalent to the stock or asset price divided by the lower strike price (S/X Lower). 698. B2SwaptionEuropeanPayer European Call Interest Swaption, where the holder has the right to enter into a swap to pay fixed and receive floating interest payments. 699. B2SwaptionEuropeanReceiver European Put Interest Swaption, where the holder has the right to enter into a swap to receive fixed and pay
floating interest payments. 700. B2TakeoverFXOption At a successful takeover (foreign firm value in foreign currency is less than the foreign currency units), the option holder can purchase the foreign
units at a predetermined strike price (in exchange rates of the domestic to foreign currency). 701. B2TimeSwitchOptionCall Holder gets AccumAmount × TimeSteps each time asset > strike for a call.
TimeSteps is the frequency at which the asset price is checked as to whether the strike is breached (e.g., for 252 trading days, set DT as 1/252). 702. B2TimeSwitchOptionPut Holder gets AccumAmount ×
TimeSteps each time asset < strike for a put. TimeSteps is the frequency at which the asset price is checked as to whether the strike is breached (e.g., for 252 trading days, set DT as 1/252). 703.
B2TradingDayAdjustedCall Call option corrected for varying volatilities (higher on trading days than on nontrading days). Trading Days Ratio is the number of trading days left until maturity divided
by total trading days per year (between 250 and 252). 704. B2TradingDayAdjustedPut Put option corrected for varying volatilities (higher on trading days than on nontrading days). Trading Days Ratio
is the number of trading days left until maturity divided by total trading days per year (between 250 and 252). 705. B2TrinomialImpliedArrowDebreuLattice Computes the complete set of implied
Arrow-Debreu prices in an implied trinomial lattice using actual observed data. Copy and paste the function and use Ctrl+Shift+Enter to obtain the matrix. 706. B2TrinomialImpliedArrowDebreuValue
Computes the single value of implied Arrow-Debreu price (for a specific step/column and up-down event/row) in an implied trinomial lattice using actual observed data. 707.
B2TrinomialImpliedCallOptionValue Computes the European call option using an implied trinomial lattice approach, taking into account actual observed inputs.
708. B2TrinomialImpliedDownProbabilityLattice Computes the complete set of implied DOWN probabilities in an implied trinomial lattice using actual observed data. Copy and paste the function and use Ctrl+Shift+Enter to obtain the matrix.
709. B2TrinomialImpliedDownProbabilityValue Computes the single value of the implied DOWN probability (for a specific step/column and up-down event/row) in an implied trinomial lattice using actual observed data.
710. B2TrinomialImpliedLocalVolatilityLattice Computes the complete set of implied local volatilities in an implied trinomial lattice using actual observed data. Copy and paste the function and use Ctrl+Shift+Enter to obtain the matrix.
711. B2TrinomialImpliedLocalVolatilityValue Computes the single value of implied localized volatility (for a specific step/column and up-down event/row) in an implied trinomial lattice using actual observed data.
712. B2TrinomialImpliedUpProbabilityLattice Computes the complete set of implied UP probabilities in an implied trinomial lattice using actual observed data. Copy and paste the function and use Ctrl+Shift+Enter to obtain the matrix.
713. B2TrinomialImpliedUpProbabilityValue Computes the single value of the implied UP probability (for a specific step/column and up-down event/row) in an implied trinomial lattice using actual observed data.
714. B2TrinomialImpliedPutOptionValue Computes the European put option using an implied trinomial lattice approach, taking into account actual observed inputs.
715. B2TwoAssetBarrierDownandInCall Valuable or knocked in the money only if the lower barrier is breached (reference Asset 2 goes below the barrier), and the payout is in the option on Asset 1 less the strike price.
716. B2TwoAssetBarrierDownandInPut Valuable or knocked in the money only if the lower barrier is breached (reference Asset 2 goes below the barrier), and the payout is in the option on the strike price less the Asset 1 value.
717. B2TwoAssetBarrierDownandOutCall Valuable or stays in the money only if the lower barrier is not breached (reference Asset 2 does not go below the barrier), and the payout is in the option on Asset 1 less the strike price.
718. B2TwoAssetBarrierDownandOutPut Valuable or stays in the money only if the lower barrier is not breached (reference Asset 2 does not go below the barrier), and the payout is in the option on the strike price less the Asset 1 value.
719. B2TwoAssetBarrierUpandInCall Valuable or knocked in the money only if the upper barrier is breached (reference Asset 2 goes above the barrier), and the payout is in the option on Asset 1 less the strike price.
720. B2TwoAssetBarrierUpandInPut Valuable or knocked in the money only if the upper barrier is breached (reference Asset 2 goes above the barrier), and the payout is in the option on the strike price less the Asset 1 value.
721. B2TwoAssetBarrierUpandOutCall Valuable or stays in the money only if the upper barrier is not breached (reference Asset 2 does not go above the barrier), and the payout is in the option on Asset 1 less the strike price.
722. B2TwoAssetBarrierUpandOutPut Valuable or stays in the money only if the upper barrier is not breached (reference Asset 2 does not go above the barrier), and the payout is in the option on the strike price less the Asset 1 value.
723. B2TwoAssetCashOrNothingCall Pays cash at expiration as long as both assets are in the money. For call options, both asset values must be above their respective strike prices.
724. B2TwoAssetCashOrNothingDownUp Cash will be paid only if at expiration the first asset is below the first strike and the second asset is above the second strike.
725. B2TwoAssetCashOrNothingPut Pays cash at expiration as long as both assets are in the money. For put options, both assets must be below their respective strike prices.
726. B2TwoAssetCashOrNothingUpDown Cash will be paid only if the first asset is above the first strike price and the second asset is below the second strike price at maturity.
727. B2TwoAssetCorrelationCall Asset 1 is the benchmark asset, whereby if at expiration Asset 1's value exceeds Strike 1's value, then the call option is knocked in the money, and the payoff on the option is Asset 2 − Strike 2; otherwise the option becomes worthless.
728. B2TwoAssetCorrelationPut Asset 1 is the benchmark asset, whereby if at expiration Asset 1's value is below Strike 1's value, then the put option is knocked in the money, and the payoff on the option is Strike 2 − Asset 2; otherwise the option becomes worthless.
729. B2VaRCorrelationMethod Computes the Value at Risk using the Variance-Covariance and Correlation method, accounting for a specific VaR percentile and holding period.
730. B2VaROptions Computes the Value at Risk of a portfolio of correlated options.
731. B2Volatility Returns the annualized volatility of time-series cash flows. Enter the number of periods in a cycle to annualize the volatility (1 = annual, 4 = quarterly, 12 = monthly data).
732. B2VolatilityImpliedforDefaultRisk Used only when computing the implied volatility required for optimizing an option model to compute the probability of default.
733. B2WarrantsDilutedValue Returns the value of a warrant (like an option) that is convertible to stock while accounting for dilution effects based on the number of shares and warrants outstanding.
734. B2WriterExtendibleCallOption The call option is extended beyond the initial maturity to an extended date with a new extended strike if at maturity the option is out of the money, providing a safety net of time for the option holder.
735. B2WriterExtendiblePutOption The put option is extended beyond the initial maturity to an extended date with a new extended strike if at maturity the option is out of the money, providing a safety net of time for the option holder.
736. B2YieldCurveBIM Returns the yield curve at various points in time using the Bliss model.
737. B2YieldCurveNS Returns the yield curve at various points in time using the Nelson-Siegel approach.
738. B2ZEOB Returns the Economic Order Batch or the optimal quantity to be manufactured in each production batch.
739. B2ZEOBBatch Returns the Economic Order Batch analysis' optimal number of batches to be manufactured per year.
740. B2ZEOBHoldingCost Returns the Economic Order Batch analysis' cost of holding excess units per year if manufactured at the optimal level.
741. B2ZEOBProductionCost Returns the Economic Order Batch analysis' total cost of setting up production per year if manufactured at the optimal level.
742. B2ZEOBTotalCost Returns the Economic Order Batch analysis' total cost of production and holding costs per year if manufactured at the optimal level.
743. B2ZEOQ Economic Order Quantity's order size for each order.
744. B2ZEOQExcess Economic Order Quantity's excess safety stock level.
745. B2ZEOQOrders Economic Order Quantity's number of orders per year.
746. B2ZEOQProbability Economic Order Quantity's probability of being out of stock.
747. B2ZEOQReorderPoint Economic Order Quantity's reorder point.

The following lists the statistical and analytical tools in the Modeling Toolkit:

748. Statistical Tool: Chi-Square Goodness of Fit Test
749. Statistical Tool: Chi-Square Independence Test
750. Statistical Tool: Chi-Square Population Variance Test
751. Statistical Tool: Dependent Means (T)
752. Statistical Tool: Friedman's Test
753. Statistical Tool: Independent and Equal Variances (T)
754. Statistical Tool: Independent and Unequal Variances (T)
755. Statistical Tool: Independent Means (Z)
756. Statistical Tool: Independent Proportions (Z)
757. Statistical Tool: Independent Variances (F)
758. Statistical Tool: Kruskal-Wallis Test
759. Statistical Tool: Lilliefors Test
760. Statistical Tool: Principal Component Analysis
761. Statistical Tool: Randomized Block Multiple Treatments
762. Statistical Tool: Runs Test
763. Statistical Tool: Single Factor Multiple Treatments
764. Statistical Tool: Testing Means (T)
765. Statistical Tool: Testing Means (Z)
766. Statistical Tool: Testing Proportions (Z)
767. Statistical Tool: Two-Way ANOVA
768. Statistical Tool: Variance-Covariance Matrix
769. Statistical Tool: Wilcoxon Signed-Rank Test (One Variable)
770. Statistical Tool: Wilcoxon Signed-Rank Test (Two Variables)
771. Valuation Tool: Lattice Maker for
772. Valuation Tool: Lattice Maker for Yield

The following lists Risk Simulator tools/applications that are used in the Modeling Toolkit:

773. Monte Carlo Simulation Using 25 Statistical Distributions
774. Monte Carlo Simulation: Simulations with Correlations
775. Monte Carlo Simulation: Simulations with Precision Control
776. Monte Carlo Simulation: Simulations with Truncation
777. Stochastic Forecasting: Basic Econometrics
778. Stochastic Forecasting: Box-Jenkins ARIMA and Auto ARIMA
779. Stochastic Forecasting: Cubic Spline
780. Stochastic Forecasting: GARCH
781. Stochastic Forecasting: J and S Curves
782. Stochastic Forecasting: Markov Chains
783. Stochastic Forecasting: Maximum Likelihood
784. Stochastic Forecasting: Nonlinear Extrapolation
785. Stochastic Forecasting: Regression Analysis
786. Stochastic Forecasting: Stochastic Processes
787. Stochastic Forecasting: Time-Series Analysis
788. Portfolio Optimization: Discrete Binary Decision Variables
789. Portfolio Optimization: Discrete and Continuous Decision Variables
790. Portfolio Optimization: Discrete Decision Variables
791. Portfolio Optimization: Static Optimization
792. Portfolio Optimization: Dynamic Optimization
793. Portfolio Optimization: Stochastic Optimization
794. Simulation Tools: Bootstrap Simulation
795. Simulation Tools: Custom Historical Simulation
796. Simulation Tools: Data Diagnostics
797. Simulation Tools: Distributional Analysis
798. Simulation Tools: Multiple Correlated Data Fitting
799. Simulation Tools: Scenario Analysis
800. Simulation Tools: Sensitivity Analysis
801. Simulation Tools: Single Data Fitting
802. Simulation Tools: Statistical Analysis
803. Simulation Tools: Tornado Analysis

The following lists Real Options SLS tools/applications that are used in the Modeling Toolkit:

804. Audit Sheet Functions
805. Changing Volatility and Risk-Free Rates Model
806. Lattice Maker
807. SLS Single Asset and Single Phase: American Options
808. SLS Single Asset and Single Phase: Bermudan Options
809. SLS Single Asset and Single Phase: Customized Options
810. SLS Single Asset and Single Phase: European Options
811. SLS Multiple Asset and Multiple Phases
812. SLS Multinomial Lattices: Pentanomials
813. SLS Multinomial Lattices: Quadranomials
814. SLS Multinomial Lattices: Trinomials
815. SLS Multinomial Lattices: Trinomials Mean-Reversion
Understanding and Choosing the Right Probability Distributions
Plotting data is one method for selecting a probability distribution. The following steps provide another process for selecting the probability distributions that best describe the uncertain variables in your spreadsheets. To select the correct probability distribution, use the following steps:
1. Look at the variable in question. List everything you know about the conditions surrounding this variable. You might be able to gather valuable information about the uncertain variable from historical data. If historical data are not available, use your own judgment, based on experience, listing everything you know about the uncertain variable.
2. Review the descriptions of the probability distributions.
3. Select the distribution that characterizes this variable. A distribution characterizes a variable when the conditions of the distribution match those of the variable.

Alternatively, if you have historical, comparable, contemporaneous, or forecast data, you can use Risk Simulator's distributional fitting modules to find the best statistical fit for your existing data. This fitting process applies advanced statistical techniques to find the best-fitting distribution and its relevant parameters.
PROBABILITY DENSITY FUNCTIONS, CUMULATIVE DISTRIBUTION FUNCTIONS, AND PROBABILITY MASS FUNCTIONS

In mathematics and Monte Carlo simulation, a probability density function (PDF) represents a continuous probability distribution in terms of integrals. If a probability distribution has a density of f(x), then intuitively the infinitesimal interval [x, x + dx] has a probability of f(x) dx. The PDF therefore can be seen as a smoothed version of a probability histogram; that is, given an empirically large sample of a continuous random variable, a histogram built using very narrow ranges
will resemble the random variable's PDF. The probability of the interval [a, b] is given by

$P(a \le X \le b) = \int_a^b f(x)\,dx$

and the total integral of the function f over its entire support must be 1.0. It is a common mistake to think of f(a) as the probability of a. This is incorrect; in fact, f(a) can sometimes be larger than 1. Consider a uniform distribution between 0.0 and 0.5: the random variable x within this distribution has f(x) = 2, which is greater than 1. The probability in reality is the quantity f(x) dx discussed previously, where dx is an infinitesimal amount. The cumulative distribution function (CDF) is denoted as F(x) = P(X ≤ x), indicating the probability that X takes on a value less than or equal to x.
Every CDF is monotonically increasing, is continuous from the right, and at the limits has the following properties:

$\lim_{x \to -\infty} F(x) = 0 \quad \text{and} \quad \lim_{x \to +\infty} F(x) = 1$

Further, the CDF is related to the PDF by

$F(b) - F(a) = P(a \le X \le b) = \int_a^b f(x)\,dx$
where the PDF function f is the derivative of the CDF function F. In probability theory, a probability mass function (PMF) gives the probability that a discrete random variable is exactly equal to some value. The PMF differs from the PDF in that the values of the latter, defined only for continuous random variables, are not probabilities; rather, its integral over a set of possible values of the random variable is a probability. A random variable is discrete if its probability distribution is discrete and can be characterized by a PMF. Therefore, X is a discrete random variable if

$\sum_u P(X = u) = 1$

as u runs through all possible values of the random variable X.
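These definitions are easy to check numerically. The following is a minimal illustrative sketch using SciPy (not part of the Modeling Toolkit): the uniform distribution on [0, 0.5] has density f(x) = 2 everywhere on its support, confirming that a density value is not a probability, while its total integral, its CDF limits, and the PMF of a discrete variable behave exactly as described above.

```python
# Minimal sketch (illustrative only): PDF, CDF, and PMF behavior.
from scipy import stats
from scipy.integrate import quad

# Uniform on [0, 0.5]: the density is 2 > 1 on its support,
# so f(a) is clearly not a probability.
u = stats.uniform(loc=0.0, scale=0.5)
print(u.pdf(0.25))            # 2.0

# The total integral of the density is still 1.0.
area, _ = quad(u.pdf, 0.0, 0.5)
print(round(area, 6))         # 1.0

# CDF limits: F(x) -> 0 and F(x) -> 1 at the extremes.
print(u.cdf(-10), u.cdf(10))  # 0.0 1.0

# P(a <= X <= b) = F(b) - F(a) = integral of f from a to b.
a, b = 0.1, 0.3
print(u.cdf(b) - u.cdf(a))    # 0.4

# For a discrete variable, the PMF values sum to 1.
die = stats.randint(1, 7)     # fair six-sided die
print(sum(die.pmf(k) for k in range(1, 7)))  # 1.0
```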
DISCRETE DISTRIBUTIONS

Following is a detailed listing of the different types of probability distributions that can be used in Monte Carlo simulation. This listing is included in the appendix for the reader's reference.
Bernoulli or Yes/No Distribution

The Bernoulli distribution is a discrete distribution with two outcomes (e.g., heads or tails, success or failure, 0 or 1). The Bernoulli distribution is the binomial distribution with one trial and can be used to simulate Yes/No or Success/Failure conditions. This distribution is the fundamental building block of other, more complex distributions. For instance:

• Binomial distribution: a Bernoulli distribution with a higher number of n total trials; it computes the probability of x successes within this total number of trials.
• Geometric distribution: a Bernoulli distribution with a higher number of trials; it computes the number of failures required before the first success occurs.
• Negative binomial distribution: a Bernoulli distribution with a higher number of trials; it computes the number of failures before the xth success occurs.

The mathematical constructs for the Bernoulli distribution are as follows:

$P(x) = \begin{cases} 1 - p & \text{for } x = 0 \\ p & \text{for } x = 1 \end{cases} \quad \text{or} \quad P(x) = p^x (1-p)^{1-x}$

Mean = $p$
Standard Deviation = $\sqrt{p(1-p)}$
Skewness = $\dfrac{1-2p}{\sqrt{p(1-p)}}$
Excess Kurtosis = $\dfrac{6p^2 - 6p + 1}{p(1-p)}$

The probability of success (p) is the only distributional parameter. Also, it is important to note that there is only one trial in the Bernoulli distribution, and the resulting simulated value is either 0 or 1.

Input requirements: Probability of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999).
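The moment formulas above are easy to sanity-check by simulation. A minimal sketch assuming NumPy is available (illustrative code, not the Modeling Toolkit's B2 functions):

```python
# Sketch: check the Bernoulli moment formulas by simulation.
import numpy as np

p = 0.3
rng = np.random.default_rng(42)
x = rng.binomial(1, p, size=1_000_000)  # Bernoulli = binomial with n = 1

print(x.mean(), p)                             # ~0.3 vs 0.3
print(x.std(), np.sqrt(p * (1 - p)))           # ~0.458 vs 0.458

# Closed-form skewness and excess kurtosis from the text:
print((1 - 2 * p) / np.sqrt(p * (1 - p)))      # ~0.873
print((6 * p**2 - 6 * p + 1) / (p * (1 - p)))  # ~-1.238
```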
Binomial Distribution

The binomial distribution describes the number of times a particular event occurs in a fixed number of trials, such as the number of heads in 10 flips of a coin or the number of defective items out of 50 items chosen.

The three conditions underlying the binomial distribution are:

1. For each trial, only two mutually exclusive outcomes are possible.
2. The trials are independent; what happens in the first trial does not affect the next trial.
3. The probability of an event occurring remains the same from trial to trial.

The mathematical constructs for the binomial distribution are as follows:

$P(x) = \dfrac{n!}{x!(n-x)!}\,p^x (1-p)^{n-x}$ for $n > 0$; $x = 0, 1, 2, \ldots, n$; and $0 < p < 1$

Mean = $np$
Standard Deviation = $\sqrt{np(1-p)}$
Skewness = $\dfrac{1-2p}{\sqrt{np(1-p)}}$
Excess Kurtosis = $\dfrac{6p^2 - 6p + 1}{np(1-p)}$

The probability of success (p) and the integer number of total trials (n) are the distributional parameters. The number of successful trials is denoted x. It is important to note that a probability of success (p) of 0 or 1 is a trivial condition that does not require any simulation and, hence, is not allowed in the software.

Input requirements: Probability of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999). Number of trials ≥ 1 (positive integers) and ≤ 1,000 (for larger trials, use the normal distribution with the relevant computed binomial mean and standard deviation as the normal distribution's parameters).
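The suggestion to switch to a normal distribution for very large n can be illustrated as follows. This is a sketch using SciPy; the 1,000-trial cutoff is the software's input requirement, not a statistical law, and the n and p values here are arbitrary examples.

```python
# Sketch: for large n, a normal with the binomial's mean and standard
# deviation is a close substitute, as the input requirements suggest.
from scipy import stats
import numpy as np

n, p = 5000, 0.3                      # beyond the software's 1,000-trial cap
mean = n * p
sd = np.sqrt(n * p * (1 - p))

binom = stats.binom(n, p)
norm = stats.norm(mean, sd)

# Compare tail probabilities P(X <= k) at a few points.
for k in [1450, 1500, 1550]:
    print(k, binom.cdf(k), norm.cdf(k + 0.5))  # with continuity correction
```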
Discrete Uniform

The discrete uniform distribution is also known as the equally likely outcomes distribution, where the distribution has a set of N elements and each element has the same probability. This distribution is related to the uniform distribution, but its elements are discrete rather than continuous.

The mathematical constructs for the discrete uniform distribution are as follows:

$P(x) = \dfrac{1}{N}$ (ranked value)
Mean = $\dfrac{N+1}{2}$ (ranked value)
Standard Deviation = $\sqrt{\dfrac{(N-1)(N+1)}{12}}$ (ranked value)
Skewness = 0 (that is, the distribution is perfectly symmetrical)
Excess Kurtosis = $\dfrac{-6(N^2+1)}{5(N-1)(N+1)}$ (ranked value)

Input requirements: Minimum < Maximum, and both must be integers (negative integers and zero are allowed).
Geometric Distribution

The geometric distribution describes the number of trials until the first successful occurrence, such as the number of times you need to spin a roulette wheel before you win.

The three conditions underlying the geometric distribution are:

1. The number of trials is not fixed.
2. The trials continue until the first success.
3. The probability of success is the same from trial to trial.

The mathematical constructs for the geometric distribution are as follows:

$P(x) = p(1-p)^{x-1}$ for $0 < p < 1$ and $x = 1, 2, \ldots, n$

Mean = $\dfrac{1}{p} - 1$
Standard Deviation = $\sqrt{\dfrac{1-p}{p^2}}$
Skewness = $\dfrac{2-p}{\sqrt{1-p}}$
Excess Kurtosis = $\dfrac{p^2 - 6p + 6}{1-p}$

The probability of success (p) is the only distributional parameter. The number of successful trials simulated is denoted x, which can only take on positive integers.

Input requirements: Probability of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999). It is important to note that a probability of success (p) of 0 or 1 is a trivial condition that does not require any simulation and, hence, is not allowed in the software.
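Note that the mean listed above, 1/p − 1, corresponds to counting the failures before the first success, whereas the PMF as written (with support x = 1, 2, ...) counts total trials and has mean 1/p; different tools pick different conventions. A short sketch with SciPy makes the distinction concrete:

```python
# Sketch: two common conventions for the geometric distribution.
from scipy import stats

p = 0.25

# Convention A: number of trials until the first success (support 1, 2, ...),
# matching the PMF p(1-p)^(x-1) as printed; mean = 1/p.
trials = stats.geom(p)
print(trials.mean())              # 4.0

# Convention B: number of failures before the first success (support 0, 1, ...);
# mean = 1/p - 1, which is the mean listed in the text.
failures = stats.geom(p, loc=-1)  # shift the support down by one
print(failures.mean())            # 3.0
```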
Hypergeometric Distribution

The hypergeometric distribution is similar to the binomial distribution in that both describe the number of times a particular event occurs in a fixed number of trials. The difference is that binomial distribution trials are independent, whereas hypergeometric distribution trials change the probability for each subsequent trial and are called trials without replacement. For example, suppose a box of manufactured parts is known to contain some defective parts. You choose a part from the box, find it is defective, and remove the part from the box. If you choose another part from the box, the probability that it is defective is somewhat lower than for the first part because you have removed a defective part. If you had replaced the defective part, the probabilities would have remained the same, and the process would have satisfied the conditions for a binomial distribution.
The three conditions underlying the hypergeometric distribution are:

1. The total number of items or elements (the population size) is a fixed number, a finite population. The population size must be less than or equal to 1,750.
2. The sample size (the number of trials) represents a portion of the population.
3. The known initial probability of success in the population changes after each trial.

The mathematical constructs for the hypergeometric distribution are as follows:

$P(x) = \dfrac{\dfrac{(N_x)!}{x!(N_x - x)!} \cdot \dfrac{(N - N_x)!}{(n - x)!(N - N_x - n + x)!}}{\dfrac{N!}{n!(N - n)!}}$ for $x = \max(n - (N - N_x), 0), \ldots, \min(n, N_x)$

Mean = $\dfrac{N_x n}{N}$
Standard Deviation = $\sqrt{\dfrac{(N - N_x)N_x n(N - n)}{N^2(N - 1)}}$
Skewness = $\dfrac{(N - 2N_x)(N - 2n)}{N - 2}\sqrt{\dfrac{N - 1}{(N - N_x)N_x n(N - n)}}$
Excess Kurtosis = $\dfrac{V(N, N_x, n)}{(N - N_x)N_x n(N - 3)(N - 2)(n - N)}$, where

$V(N, N_x, n) = (N - N_x)^3 - (N - N_x)^5 + 3(N - N_x)^2 N_x - 6(N - N_x)^3 N_x + (N - N_x)^4 N_x + 3(N - N_x)N_x^2 - 12(N - N_x)^2 N_x^2 + 8(N - N_x)^3 N_x^2 + N_x^3 - 6(N - N_x)N_x^3 + 8(N - N_x)^2 N_x^3 + (N - N_x)N_x^4 - N_x^5 - 6(N - N_x)^3 N_x + 6(N - N_x)^4 N_x + 18(N - N_x)^2 N_x n - 6(N - N_x)^3 N_x n + 18(N - N_x)N_x^2 n - 24(N - N_x)^2 N_x^2 n - 6(N - N_x)^3 n - 6(N - N_x)N_x^3 n + 6N_x^4 n + 6(N - N_x)^2 n^2 - 6(N - N_x)^3 n^2 - 24(N - N_x)N_x n^2 + 12(N - N_x)^2 N_x n^2 + 6N_x^2 n^2 + 12(N - N_x)N_x^2 n^2 - 6N_x^3 n^2$

The number of items in the population (N), the number of trials sampled (n), and the number of items in the population that have the successful trait (N_x) are the distributional parameters. The number of successful trials is denoted x.

Input requirements:
Population ≥ 2 and integer
Trials > 0 and integer
Successes > 0 and integer
Population > Successes
Trials < Population
Population < 1,750
Negative Binomial Distribution

The negative binomial distribution is useful for modeling the distribution of the number of trials until the rth successful occurrence, such as the number of sales calls you need to make to close a total of 10 orders. It is essentially a superdistribution of the geometric distribution. This distribution shows the probabilities of each number of trials in excess of r needed to produce the required number of successes r.

The three conditions underlying the negative binomial distribution are:

1. The number of trials is not fixed.
2. The trials continue until the rth success.
3. The probability of success is the same from trial to trial.

The mathematical constructs for the negative binomial distribution are as follows:

$P(x) = \dfrac{(x + r - 1)!}{(r - 1)!\,x!}\,p^r (1-p)^x$ for $x = r, r+1, \ldots$; and $0 < p < 1$

Mean = $\dfrac{r(1-p)}{p}$
Standard Deviation = $\sqrt{\dfrac{r(1-p)}{p^2}}$
Skewness = $\dfrac{2-p}{\sqrt{r(1-p)}}$
Excess Kurtosis = $\dfrac{p^2 - 6p + 6}{r(1-p)}$

The probability of success (p) and the required successes (r) are the distributional parameters.

Input requirements: Successes required must be positive integers > 0 and < 8,000. Probability of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999). It is important to note that a probability of success (p) of 0 or 1 is a trivial condition that does not require any simulation and, hence, is not allowed in the software.
Poisson Distribution

The Poisson distribution describes the number of times an event occurs in a given interval, such as the number of telephone calls per minute or the number of errors per page in a document.

The three conditions underlying the Poisson distribution are:

1. The number of possible occurrences in any interval is unlimited.
2. The occurrences are independent. The number of occurrences in one interval does not affect the number of occurrences in other intervals.
3. The average number of occurrences must remain the same from interval to interval.
The mathematical constructs for the Poisson distribution are as follows:

$P(x) = \dfrac{e^{-\lambda}\lambda^x}{x!}$ for $x$ and $\lambda > 0$

Mean = $\lambda$
Standard Deviation = $\sqrt{\lambda}$
Skewness = $\dfrac{1}{\sqrt{\lambda}}$
Excess Kurtosis = $\dfrac{1}{\lambda}$

Rate (λ) is the only distributional parameter.

Input requirements: Rate > 0 and ≤ 1,000 (that is, 0.0001 ≤ rate ≤ 1,000).
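A quick check of these moments, as an illustrative SciPy sketch (the rate value is arbitrary):

```python
# Sketch: Poisson moments match the closed forms above.
from scipy import stats
import numpy as np

lam = 4.0
mean, var, skew, ekurt = stats.poisson.stats(lam, moments='mvsk')

print(mean, lam)                    # 4.0
print(np.sqrt(var), np.sqrt(lam))   # 2.0
print(skew, 1 / np.sqrt(lam))       # 0.5
print(ekurt, 1 / lam)               # 0.25 (SciPy reports excess kurtosis)
```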
CONTINUOUS DISTRIBUTIONS

Beta Distribution

The beta distribution is very flexible and is commonly used to represent variability over a fixed range. One of the more important applications of the beta distribution is its use as a conjugate distribution for the parameter of a Bernoulli distribution. In this application, the beta distribution is used to represent the uncertainty in the probability of occurrence of an event. It is also used to describe empirical data and predict the random behavior of percentages and fractions, as the range of outcomes is typically between 0 and 1. The value of the beta distribution lies in the wide variety of shapes it can assume when you vary the two parameters, alpha and beta. If the parameters are equal, the distribution is symmetrical. If either parameter is 1 and the other parameter is greater than 1, the distribution is J-shaped. If alpha is less than beta, the distribution is said to be positively skewed (most of the values are near the minimum value). If alpha is greater than beta, the distribution is negatively skewed (most of the values are near the maximum value).

The mathematical constructs for the beta distribution are as follows:

$f(x) = \dfrac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\,x^{\alpha-1}(1-x)^{\beta-1}$ for $\alpha > 0$; $\beta > 0$; $x > 0$

Mean = $\dfrac{\alpha}{\alpha + \beta}$
Standard Deviation = $\sqrt{\dfrac{\alpha\beta}{(\alpha + \beta)^2(1 + \alpha + \beta)}}$
Skewness = $\dfrac{2(\beta - \alpha)\sqrt{1 + \alpha + \beta}}{(2 + \alpha + \beta)\sqrt{\alpha\beta}}$
Excess Kurtosis = $\dfrac{3(\alpha + \beta + 1)\left[\alpha\beta(\alpha + \beta - 6) + 2(\alpha + \beta)^2\right]}{\alpha\beta(\alpha + \beta + 2)(\alpha + \beta + 3)} - 3$

Alpha (α) and beta (β) are the two distributional shape parameters, and Γ is the gamma function.

The two conditions underlying the beta distribution are:

1. The uncertain variable is a random value between 0 and a positive value.
2. The shape of the distribution can be specified using two positive values.

Input requirements: Alpha and beta > 0 and can be any positive value.
Cauchy Distribution, or Lorentzian or Breit–Wigner Distribution

The Cauchy distribution, also called the Lorentzian or Breit–Wigner distribution, is a continuous distribution describing resonance behavior. It also describes the distribution of horizontal distances at which a line segment tilted at a random angle cuts the x-axis.

The mathematical constructs for the Cauchy or Lorentzian distribution are as follows:

$f(x) = \dfrac{1}{\pi}\,\dfrac{\gamma/2}{(x - m)^2 + \gamma^2/4}$

The Cauchy distribution is a special case in that it does not have any theoretical moments (mean, standard deviation, skewness, and kurtosis), as they are all undefined. Mode location (m) and scale (γ) are the only two parameters of this distribution. The location parameter specifies the peak or mode of the distribution, while the scale parameter specifies the half-width at half-maximum of the distribution. The Cauchy distribution is also the Student's t distribution with exactly 1 degree of freedom, and it can be constructed by taking the ratio of two independent standard normal distributions (normal distributions with a mean of zero and a variance of one).

Input requirements:
Location can be any value.
Scale > 0 and can be any positive value.
Chi-Square Distribution

The chi-square distribution is a probability distribution used predominantly in hypothesis testing and is related to the gamma distribution and the standard normal distribution. For instance, the sum of squares of k independent standard normal distributions is distributed as a chi-square (χ²) with k degrees of freedom:

$Z_1^2 + Z_2^2 + \cdots + Z_k^2 \overset{d}{\sim} \chi_k^2$

The mathematical constructs for the chi-square distribution are as follows:

$f(x) = \dfrac{2^{-k/2}}{\Gamma(k/2)}\,x^{k/2 - 1} e^{-x/2}$ for all $x > 0$

Mean = $k$
Standard Deviation = $\sqrt{2k}$
Skewness = $2\sqrt{\dfrac{2}{k}}$
Excess Kurtosis = $\dfrac{12}{k}$

The gamma function is written as Γ. Degrees of freedom, k, is the only distributional parameter. The chi-square distribution can also be modeled using a gamma distribution by setting the shape parameter as k/2 and the scale as 2S², where S is the scale.

Input requirements: Degrees of freedom > 1 and must be an integer < 1,000.
Exponential Distribution

The exponential distribution is widely used to describe events recurring at random points in time, such as the time between failures of electronic equipment or the time between arrivals at a service booth. It is related to the Poisson distribution, which describes the number of occurrences of an event in a given interval of time. An important characteristic of the exponential distribution is its "memoryless" property, which means that the future lifetime of a given object has the same distribution regardless of how long it has already existed. In other words, time has no effect on future outcomes.

The mathematical constructs for the exponential distribution are as follows:

$f(x) = \lambda e^{-\lambda x}$ for $x \ge 0$; $\lambda > 0$

Mean = $\dfrac{1}{\lambda}$
Standard Deviation = $\dfrac{1}{\lambda}$
Skewness = 2 (this value applies to all success rate λ inputs)
Excess Kurtosis = 6 (this value applies to all success rate λ inputs)
March 18, 2008
Char Count=
Understanding and Choosing the Right Probability Distributions
Success rate (λ) is the only distributional parameter. The number of successful trials is denoted x.

The condition underlying the exponential distribution is:

1. The exponential distribution describes the amount of time between occurrences.

Input requirements: Rate > 0 and ≤ 300.
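The memoryless property can be verified directly, since P(X > s + t | X > s) = P(X > t) for the exponential distribution. A minimal sketch using SciPy (rate and horizons are arbitrary examples):

```python
# Sketch: the exponential distribution is memoryless,
# P(X > s + t | X > s) = P(X > t).
from scipy import stats

lam = 0.5
expo = stats.expon(scale=1 / lam)   # SciPy parameterizes by scale = 1/rate

s, t = 2.0, 3.0
conditional = expo.sf(s + t) / expo.sf(s)  # sf(x) = P(X > x)
unconditional = expo.sf(t)
print(conditional, unconditional)   # both ~0.2231
```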
Extreme Value Distribution or Gumbel Distribution

The extreme value distribution (Type 1) is commonly used to describe the largest value of a response over a period of time, for example, in flood flows, rainfall, and earthquakes. Other applications include the breaking strengths of materials, construction design, and aircraft loads and tolerances. The extreme value distribution is also known as the Gumbel distribution.

The mathematical constructs for the extreme value distribution are as follows:

$f(x) = \dfrac{1}{\beta}\,z e^{-z}$ where $z = e^{-\frac{x - m}{\beta}}$ for $\beta > 0$ and any value of $x$ and $m$

Mean = $m + 0.577215\beta$
Standard Deviation = $\sqrt{\dfrac{\pi^2\beta^2}{6}}$
Skewness = $\dfrac{12\sqrt{6}\,(1.2020569)}{\pi^3} = 1.13955$ (this applies to all values of mode and scale)
Excess Kurtosis = 5.4 (this applies to all values of mode and scale)

Mode (m) and scale (β) are the distributional parameters. There are two standard parameters for the extreme value distribution: mode and scale. The mode parameter is the most likely value for the variable (the highest point on the probability distribution). The scale parameter is a number greater than 0. The larger the scale parameter, the greater the variance.

Input requirements:
Mode can be any value.
Scale > 0.
F Distribution or Fisher–Snedecor Distribution

The F distribution, also known as the Fisher–Snedecor distribution, is another continuous distribution used most frequently for hypothesis testing. Specifically, it is used to test the statistical difference between two variances in analysis-of-variance tests and likelihood ratio tests. The F distribution with the numerator degree of
freedom n and denominator degree of freedom m is related to the chi-square distribution in that

$\dfrac{\chi_n^2 / n}{\chi_m^2 / m} \overset{d}{\sim} F_{n,m}$ or $f(x) = \dfrac{\Gamma\left(\frac{n+m}{2}\right)\left(\frac{n}{m}\right)^{n/2} x^{n/2 - 1}}{\Gamma\left(\frac{n}{2}\right)\Gamma\left(\frac{m}{2}\right)\left[\frac{n}{m}x + 1\right]^{(n+m)/2}}$

Mean = $\dfrac{m}{m - 2}$
Standard Deviation = $\sqrt{\dfrac{2m^2(m + n - 2)}{n(m - 2)^2(m - 4)}}$ for all $m > 4$
Skewness = $\dfrac{2(m + 2n - 2)}{m - 6}\sqrt{\dfrac{2(m - 4)}{n(m + n - 2)}}$
Excess Kurtosis = $\dfrac{12(-16 + 20m - 8m^2 + m^3 + 44n - 32mn + 5m^2 n - 22n^2 + 5mn^2)}{n(m - 6)(m - 8)(n + m - 2)}$

The numerator degree of freedom n and the denominator degree of freedom m are the only distributional parameters.

Input requirements: Degrees of freedom for both the numerator and the denominator must be integers > 0.
Gamma Distribution (Erlang Distribution)

The gamma distribution applies to a wide range of physical quantities and is related to other distributions: lognormal, exponential, Pascal, Erlang, Poisson, and chi-square. It is used in meteorological processes to represent pollutant concentrations and precipitation quantities. The gamma distribution is also used to measure the time between the occurrences of events when the event process is not completely random. Other applications of the gamma distribution include inventory control, economic theory, and insurance risk theory.

The gamma distribution is most often used as the distribution of the amount of time until the rth occurrence of an event in a Poisson process. When used in this fashion, the three conditions underlying the gamma distribution are:

1. The number of possible occurrences in any unit of measurement is not limited to a fixed number.
2. The occurrences are independent. The number of occurrences in one unit of measurement does not affect the number of occurrences in other units.
3. The average number of occurrences must remain the same from unit to unit.

The mathematical constructs for the gamma distribution are as follows:

$f(x) = \dfrac{\left(\frac{x}{\beta}\right)^{\alpha - 1} e^{-\frac{x}{\beta}}}{\Gamma(\alpha)\beta}$ with any value of $\alpha > 0$ and $\beta > 0$

Mean = $\alpha\beta$
Standard Deviation = $\sqrt{\alpha\beta^2}$
Skewness = $\dfrac{2}{\sqrt{\alpha}}$
Excess Kurtosis = $\dfrac{6}{\alpha}$

Shape parameter alpha (α) and scale parameter beta (β) are the distributional parameters, and Γ is the gamma function.

When the alpha parameter is a positive integer, the gamma distribution is called the Erlang distribution, used to predict waiting times in queuing systems, where the Erlang distribution is the sum of independent and identically distributed random variables, each having a memoryless exponential distribution. Setting n as the number of these random variables, the mathematical construct of the Erlang distribution is:

$f(x) = \dfrac{x^{n-1} e^{-x}}{(n - 1)!}$ for all $x > 0$ and all positive integers of $n$

Input requirements:
Scale beta > 0 and can be any positive value.
Shape alpha ≥ 0.05 and can be any positive value.
Location can be any value.
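The Erlang construction (a sum of n independent memoryless exponentials) is easy to confirm by simulation. A sketch assuming NumPy, with arbitrary example values:

```python
# Sketch: an Erlang variable is the sum of n i.i.d. exponential variables,
# i.e., a gamma distribution with integer shape n.
import numpy as np

rng = np.random.default_rng(7)
n = 5                                      # integer shape -> Erlang
sums = rng.exponential(scale=1.0, size=(1_000_000, n)).sum(axis=1)
direct = rng.gamma(shape=n, scale=1.0, size=1_000_000)

# Both should have mean n and standard deviation sqrt(n).
print(sums.mean(), direct.mean(), n)            # ~5.0
print(sums.std(), direct.std(), np.sqrt(n))     # ~2.236
```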
Logistic Distribution

The logistic distribution is commonly used to describe growth, that is, the size of a population expressed as a function of a time variable. It can also be used to describe chemical reactions and the course of growth for a population or individual.

The mathematical constructs for the logistic distribution are as follows:

$f(x) = \dfrac{e^{\frac{\mu - x}{\alpha}}}{\alpha\left[1 + e^{\frac{\mu - x}{\alpha}}\right]^2}$ for any value of $\alpha$ and $\mu$

Mean = $\mu$
Standard Deviation = $\sqrt{\dfrac{\pi^2\alpha^2}{3}}$
Skewness = 0 (this applies to all mean and scale inputs)
Excess Kurtosis = 1.2 (this applies to all mean and scale inputs)

Mean (µ) and scale (α) are the distributional parameters. There are two standard parameters for the logistic distribution: mean and scale. The mean parameter is the average value, which for this distribution is the same as the mode because the distribution is symmetrical. The scale parameter is a number greater than 0. The larger the scale parameter, the greater the variance.
March 18, 2008
Char Count=
Input requirements:
Scale > 0 and can be any positive value.
Mean can be any value.
Lognormal Distribution

The lognormal distribution is widely used in situations where values are positively skewed, for example, in financial analysis for security valuation or in real estate for property valuation, and where values cannot fall below zero. Stock prices are usually positively skewed rather than normally (symmetrically) distributed. Stock prices exhibit this trend because they cannot fall below the lower limit of zero but might increase to any price without limit. Similarly, real estate prices illustrate positive skewness and are lognormally distributed, as property values cannot become negative.

The three conditions underlying the lognormal distribution are:

1. The uncertain variable can increase without limits but cannot fall below zero.
2. The uncertain variable is positively skewed, with most of the values near the lower limit.
3. The natural logarithm of the uncertain variable yields a normal distribution.

Generally, if the coefficient of variability is greater than 30 percent, use a lognormal distribution. Otherwise, use the normal distribution.

The mathematical constructs for the lognormal distribution are as follows:

$f(x) = \dfrac{1}{x\sqrt{2\pi}\ln(\sigma)}\,e^{-\frac{[\ln(x) - \ln(\mu)]^2}{2[\ln(\sigma)]^2}}$ for $x > 0$; $\mu > 0$ and $\sigma > 0$

Mean = $\exp\left(\mu + \dfrac{\sigma^2}{2}\right)$
Standard Deviation = $\sqrt{\exp(\sigma^2 + 2\mu)\left[\exp(\sigma^2) - 1\right]}$
Skewness = $\sqrt{\exp(\sigma^2) - 1}\;\left(2 + \exp(\sigma^2)\right)$
Excess Kurtosis = $\exp(4\sigma^2) + 2\exp(3\sigma^2) + 3\exp(2\sigma^2) - 6$

Mean (µ) and standard deviation (σ) are the distributional parameters.

Input requirements: Mean and standard deviation both > 0 and can be any positive value.

Lognormal Parameter Sets

By default, the lognormal distribution uses the arithmetic mean and standard deviation. For applications where historical data are available, it is more appropriate to use either the logarithmic mean and standard deviation, or the geometric mean and standard deviation.
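When only the arithmetic mean m and standard deviation s are known, the corresponding logarithmic parameters follow directly from the moment formulas above. A sketch of the standard conversion (illustrative helper, not a Modeling Toolkit function; the figures are hypothetical):

```python
# Sketch: convert an arithmetic mean/standard deviation into the
# logarithmic (log-space) parameters of a lognormal distribution.
import numpy as np

def lognormal_log_params(m, s):
    """Return (mu_log, sigma_log) so that exp(N(mu_log, sigma_log^2))
    has arithmetic mean m and arithmetic standard deviation s."""
    v = np.log(1.0 + (s / m) ** 2)
    return np.log(m) - 0.5 * v, np.sqrt(v)

mu_log, sigma_log = lognormal_log_params(100.0, 25.0)

# Round-trip check against the moment formulas in the text.
mean_back = np.exp(mu_log + sigma_log**2 / 2)
sd_back = np.sqrt((np.exp(sigma_log**2) - 1) * np.exp(2 * mu_log + sigma_log**2))
print(mean_back, sd_back)   # ~100.0, ~25.0
```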
Normal Distribution

The normal distribution is the most important distribution in probability theory because it describes many natural phenomena, such as people's IQs or heights. Decision makers can use the normal distribution to describe uncertain variables such as the inflation rate or the future price of gasoline.

The three conditions underlying the normal distribution are:

1. Some value of the uncertain variable is the most likely (the mean of the distribution).
2. The uncertain variable could as likely be above the mean as below it (symmetrical about the mean).
3. The uncertain variable is more likely to be in the vicinity of the mean than farther away.

The mathematical constructs for the normal distribution are as follows:

$f(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(x - \mu)^2}{2\sigma^2}}$ for all values of $x$ and $\mu$, while $\sigma > 0$

Mean = $\mu$
Standard Deviation = $\sigma$
Skewness = 0 (this applies to all inputs of mean and standard deviation)
Excess Kurtosis = 0 (this applies to all inputs of mean and standard deviation)

Mean (µ) and standard deviation (σ) are the distributional parameters.

Input requirements:
Standard deviation > 0 and can be any positive value.
Mean can be any value.
Pareto Distribution

The Pareto distribution is widely used for the investigation of distributions associated with such empirical phenomena as city population sizes, the occurrence of natural resources, the size of companies, personal incomes, stock price fluctuations, and error clustering in communication circuits.

The mathematical constructs for the Pareto distribution are as follows:

$f(x) = \dfrac{\beta L^\beta}{x^{(1+\beta)}}$ for $x > L$

Mean = $\dfrac{\beta L}{\beta - 1}$
Standard Deviation = $\sqrt{\dfrac{\beta L^2}{(\beta - 1)^2(\beta - 2)}}$
Skewness = $\sqrt{\dfrac{\beta - 2}{\beta}}\;\dfrac{2(\beta + 1)}{\beta - 3}$
Excess Kurtosis = $\dfrac{6(\beta^3 + \beta^2 - 6\beta - 2)}{\beta(\beta - 3)(\beta - 4)}$

Location (L) and shape (β) are the distributional parameters. There are two standard parameters for the Pareto distribution: location and shape. The location parameter is the lower bound for the variable. After you select the location parameter, you can estimate the shape parameter. The shape parameter is a number greater than 0, usually greater than 1. The larger the shape parameter, the smaller the variance and the thicker the right tail of the distribution.

Input requirements:
Location > 0 and can be any positive value.
Shape ≥ 0.05.
Student's t Distribution

The Student's t distribution is the most widely used distribution in hypothesis testing. This distribution is used to estimate the mean of a normally distributed population when the sample size is small and to test the statistical significance of the difference between two sample means or confidence intervals for small sample sizes.

The mathematical constructs for the t distribution are as follows:

$f(t) = \dfrac{\Gamma[(r + 1)/2]}{\sqrt{r\pi}\,\Gamma[r/2]}\,(1 + t^2/r)^{-(r+1)/2}$

where $t = \dfrac{x - \bar{x}}{s}$ and Γ is the gamma function.

Mean = 0 (this applies to all degrees of freedom r, except when the distribution is shifted to another nonzero central location)
Standard Deviation = $\sqrt{\dfrac{r}{r - 2}}$
Skewness = 0 (this applies to all degrees of freedom r)
Excess Kurtosis = $\dfrac{6}{r - 4}$ for all $r > 4$

Degrees of freedom, r, is the only distributional parameter. The t distribution is related to the F distribution as follows: the square of a value of t with r degrees of freedom is distributed as F with 1 and r degrees of freedom. The overall shape of the probability density function of the t distribution resembles the bell shape of a normally distributed variable with mean 0 and variance 1, except that it is a bit lower and wider and is leptokurtic (fat tails at the ends and a peaked center). As the number of degrees of freedom grows (say,
above 30), the t distribution approaches the normal distribution with mean 0 and variance 1.

Input requirements: Degrees of freedom ≥ 1 and must be an integer.
Triangular Distribution

The triangular distribution describes a situation where you know the minimum, maximum, and most likely values to occur. For example, you could describe the number of cars sold per week when past sales show the minimum, maximum, and usual number of cars sold.

The three conditions underlying the triangular distribution are:

1. The minimum number of items is fixed.
2. The maximum number of items is fixed.
3. The most likely number of items falls between the minimum and maximum values, forming a triangular-shaped distribution, which shows that values near the minimum and maximum are less likely to occur than those near the most likely value.

The mathematical constructs for the triangular distribution are as follows:

$f(x) = \begin{cases} \dfrac{2(x - \text{Min})}{(\text{Max} - \text{Min})(\text{Likely} - \text{Min})} & \text{for Min} < x < \text{Likely} \\[2ex] \dfrac{2(\text{Max} - x)}{(\text{Max} - \text{Min})(\text{Max} - \text{Likely})} & \text{for Likely} < x < \text{Max} \end{cases}$

Mean = $\dfrac{1}{3}(\text{Min} + \text{Likely} + \text{Max})$
Standard Deviation = $\sqrt{\dfrac{1}{18}\left(\text{Min}^2 + \text{Likely}^2 + \text{Max}^2 - \text{Min}\,\text{Max} - \text{Min}\,\text{Likely} - \text{Max}\,\text{Likely}\right)}$
Skewness = $\dfrac{\sqrt{2}\,(\text{Min} + \text{Max} - 2\,\text{Likely})(2\,\text{Min} - \text{Max} - \text{Likely})(\text{Min} - 2\,\text{Max} + \text{Likely})}{5\left(\text{Min}^2 + \text{Max}^2 + \text{Likely}^2 - \text{Min}\,\text{Max} - \text{Min}\,\text{Likely} - \text{Max}\,\text{Likely}\right)^{3/2}}$
Excess Kurtosis = −0.6 (this applies to all inputs of Min, Max, and Likely)

Minimum value (Min), most likely value (Likely), and maximum value (Max) are the distributional parameters.

Input requirements: Min ≤ Likely ≤ Max, where each can take any value; however, Min < Max.
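The density above can be sampled by inverting its CDF, which is a common way simulation tools draw triangular variates. A sketch assuming NumPy (NumPy also provides rng.triangular directly; the parameter values are hypothetical):

```python
# Sketch: draw triangular samples by inverting the CDF implied by the
# piecewise density above.
import numpy as np

def triangular_sample(rng, mn, likely, mx, size):
    u = rng.random(size)
    cut = (likely - mn) / (mx - mn)          # F(Likely)
    left = mn + np.sqrt(u * (mx - mn) * (likely - mn))
    right = mx - np.sqrt((1 - u) * (mx - mn) * (mx - likely))
    return np.where(u < cut, left, right)

rng = np.random.default_rng(1)
x = triangular_sample(rng, 10.0, 12.0, 20.0, 1_000_000)
print(x.mean(), (10 + 12 + 20) / 3)          # ~14.0 vs 14.0
```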
Uniform Distribution

With the uniform distribution, all values fall between the minimum and maximum and occur with equal likelihood.

The three conditions underlying the uniform distribution are:

1. The minimum value is fixed.
2. The maximum value is fixed.
3. All values between the minimum and maximum occur with equal likelihood.

The mathematical constructs for the uniform distribution are as follows:

$f(x) = \dfrac{1}{\text{Max} - \text{Min}}$ for all values such that $\text{Min} < \text{Max}$

Mean = $\dfrac{\text{Min} + \text{Max}}{2}$
Standard Deviation = $\sqrt{\dfrac{(\text{Max} - \text{Min})^2}{12}}$
Skewness = 0 (this applies to all inputs of Min and Max)
Excess Kurtosis = −1.2 (this applies to all inputs of Min and Max)

Maximum value (Max) and minimum value (Min) are the distributional parameters.

Input requirements: Min < Max, and both can take any value.
Weibull Distribution (Rayleigh Distribution)

The Weibull distribution describes data resulting from life and fatigue tests. It is commonly used to describe failure times in reliability studies as well as the breaking strengths of materials in reliability and quality control tests. Weibull distributions are also used to represent various physical quantities, such as wind speed. The Weibull distribution is a family of distributions that can assume the properties of several other distributions. For example, depending on the shape parameter you define, the Weibull distribution can be used to model the exponential and Rayleigh distributions, among others. The Weibull distribution is very flexible. When the Weibull shape parameter is equal to 1.0, the Weibull distribution is identical to the exponential distribution. The Weibull location parameter lets you set up an exponential distribution to start at a location other than 0.0. When the shape parameter is less than 1.0, the Weibull distribution becomes a steeply declining curve. A manufacturer might find this effect useful in describing part failures during a burn-in period.
The mathematical constructs for the Weibull distribution are as follows:

$f(x) = \dfrac{\alpha}{\beta}\left[\dfrac{x}{\beta}\right]^{\alpha - 1} e^{-\left(\frac{x}{\beta}\right)^{\alpha}}$

Mean = $\beta\,\Gamma(1 + \alpha^{-1})$
Standard Deviation = $\sqrt{\beta^2\left[\Gamma(1 + 2\alpha^{-1}) - \Gamma^2(1 + \alpha^{-1})\right]}$
Skewness = $\dfrac{2\Gamma^3(1 + \alpha^{-1}) - 3\Gamma(1 + \alpha^{-1})\Gamma(1 + 2\alpha^{-1}) + \Gamma(1 + 3\alpha^{-1})}{\left[\Gamma(1 + 2\alpha^{-1}) - \Gamma^2(1 + \alpha^{-1})\right]^{3/2}}$
Excess Kurtosis = $\dfrac{-6\Gamma^4(1 + \alpha^{-1}) + 12\Gamma^2(1 + \alpha^{-1})\Gamma(1 + 2\alpha^{-1}) - 3\Gamma^2(1 + 2\alpha^{-1}) - 4\Gamma(1 + \alpha^{-1})\Gamma(1 + 3\alpha^{-1}) + \Gamma(1 + 4\alpha^{-1})}{\left[\Gamma(1 + 2\alpha^{-1}) - \Gamma^2(1 + \alpha^{-1})\right]^2}$

Location (L), shape (α), and scale (β) are the distributional parameters, and Γ is the gamma function.

Input requirements:
Scale > 0 and can be any positive value.
Shape ≥ 0.05.
Location can take on any value.
Financial Statement Analysis
This appendix provides some basic financial statement analysis concepts used in the financial modeling chapters throughout the book. The focus of this appendix is on calculating the free cash flows used under different scenarios, including making the appropriate adjustments under levered and unlevered operating conditions. Although many versions of free cash flows exist, the calculations here are examples of more generic free cash flows applicable under most circumstances. An adjustment for inflation and the calculation of terminal cash flows are also presented here. Finally, a market multiple approach that uses price-to-earnings ratios is briefly discussed.
FREE CASH FLOW CALCULATIONS

Below is a list of some generic financial statement definitions used to generate free cash flows based on GAAP (generally accepted accounting principles):

Gross Profits = Revenues − Cost of Goods Sold.
Earnings Before Interest and Taxes = Gross Profits − Selling Expenses − General and Administrative Costs − Depreciation − Amortization.
Earnings Before Taxes = Earnings Before Interest and Taxes − Interest.
Net Income = Earnings Before Taxes − Taxes.
Free Cash Flow to Equity = Net Income + Depreciation + Amortization − Capital Expenditures ± Change in Net Working Capital − Principal Repayments + New Debt Proceeds − Preferred Dividends − Interest (1 − Tax Rate).
Free Cash Flow to the Firm = EBIT (1 − Tax Rate) + Depreciation + Amortization − Capital Expenditures ± Change in Net Working Capital = Free Cash Flow to Equity + Principal Repayments − New Debt Proceeds + Preferred Dividends + Interest (1 − Tax Rate).
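These definitions translate directly into a short calculation. The following is a minimal sketch with hypothetical names and figures (not Modeling Toolkit functions); the ± on the change in net working capital is resolved here by subtracting an increase in NWC.

```python
# Sketch: generic free cash flow to the firm (FCFF), following the
# GAAP-based definitions listed above. All figures are hypothetical.
def fcff(ebit, tax_rate, depreciation, amortization, capex, change_in_nwc):
    """FCFF = EBIT(1 - t) + D + A - CapEx - increase in NWC."""
    return (ebit * (1 - tax_rate) + depreciation + amortization
            - capex - change_in_nwc)

revenues, cogs = 1_000_000, 550_000
gross_profits = revenues - cogs                 # Revenues - COGS
selling, ga = 120_000, 80_000
depreciation, amortization = 50_000, 10_000
ebit = gross_profits - selling - ga - depreciation - amortization

print(fcff(ebit, tax_rate=0.35, depreciation=depreciation,
           amortization=amortization, capex=90_000, change_in_nwc=20_000))
# 190,000 * 0.65 + 60,000 - 90,000 - 20,000 = 73,500
```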
FREE CASH FLOW TO A FIRM

An alternative version of the free cash flow for an unlevered firm can be defined as:

Free Cash Flow = Earnings Before Interest and Taxes (1 − Effective Tax Rate) + Depreciation + Amortization − Capital Expenditures ± Change in Net Working Capital.
LEVERED FREE CASH FLOW

For a levered firm, the free cash flow becomes:

Free Cash Flow = Net Income + α [Depreciation + Amortization] ± α [Change in Net Working Capital] − α [Capital Expenditures] − Principal Repayments + New Debt Proceeds − Preferred Debt Dividends

where α is the equity-to-total-capital ratio and (1 − α) is the debt ratio.
INFLATION ADJUSTMENT

The following adjustments convert free cash flows and discount rates from nominal to real terms:

$\text{Real }CF = \dfrac{\text{Nominal }CF}{1 + E[\pi]}$

$\text{Real }\rho = \dfrac{1 + \text{Nominal }\rho}{1 + E[\pi]} - 1$

where
CF is the cash flow series;
π is the inflation rate;
E[π] is the expected inflation rate; and
ρ is the discount rate.
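A quick sketch of the two adjustments (hypothetical inputs), which also shows that the real rate is not simply the nominal rate minus expected inflation:

```python
# Sketch: deflate a nominal cash flow and a nominal discount rate
# to real terms, per the formulas above.
def real_cash_flow(nominal_cf, expected_inflation):
    return nominal_cf / (1 + expected_inflation)

def real_rate(nominal_rate, expected_inflation):
    return (1 + nominal_rate) / (1 + expected_inflation) - 1

print(real_cash_flow(105.0, 0.03))   # ~101.94
print(real_rate(0.12, 0.03))         # ~0.0874, not simply 12% - 3% = 9%
```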
TERMINAL VALUE

The following are commonly accepted ways of getting terminal free cash flows under zero growth, constant growth, and supernormal growth assumptions:

Zero Growth Perpetuity:

$\sum_{t=1}^{\infty} \dfrac{FCF_t}{[1 + WACC]^t} = \dfrac{FCF_T}{WACC}$

Constant Growth:

$\sum_{t=1}^{\infty} \dfrac{FCF_{t-1}(1 + g_t)}{[1 + WACC]^t} = \dfrac{FCF_{T-1}(1 + g_T)}{WACC - g_T} = \dfrac{FCF_T}{WACC - g_T}$
Punctuated Supernormal Growth:

$\sum_{t=1}^{N} \dfrac{FCF_t}{[1 + WACC]^t} + \dfrac{FCF_N(1 + g_N)}{[WACC - g_N][1 + WACC]^N}$

$WACC = \omega_e k_e + \omega_d k_d(1 - \tau) + \omega_{pe} k_{pe}$

where
FCF is the free cash flow series;
WACC is the weighted average cost of capital;
g is the growth rate of free cash flows;
t is the individual time period;
T is the terminal time at which a forecast is available;
N is the time when a punctuated growth rate occurs;
ω is the respective weight on each capital component;
k_e is the cost of common equity;
k_d is the cost of debt;
k_pe is the cost of preferred equity; and
τ is the effective tax rate.
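A minimal sketch tying the WACC definition to the constant-growth terminal value (all inputs are hypothetical):

```python
# Sketch: a simple WACC and a Gordon-growth terminal value,
# per the formulas above.
def wacc(we, ke, wd, kd, wpe, kpe, tax):
    return we * ke + wd * kd * (1 - tax) + wpe * kpe

def terminal_value(fcf_next, discount_rate, growth):
    """Terminal value FCF_T / (WACC - g); requires g < WACC."""
    if growth >= discount_rate:
        raise ValueError("growth must be below the discount rate")
    return fcf_next / (discount_rate - growth)

r = wacc(we=0.6, ke=0.14, wd=0.3, kd=0.08, wpe=0.1, kpe=0.10, tax=0.35)
print(r)                                                    # 0.1096
print(terminal_value(fcf_next=120.0, discount_rate=r, growth=0.03))  # ~1508
```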
PRICE-TO-EARNINGS MULTIPLES APPROACH

Related concepts in valuation are the uses of market multiples. An example is the price-to-earnings multiple, which is a simple derivation of the constant growth model shown above, broken down into dividends per share (DPS) and earnings per share (EPS) components. The derivation starts with the constant growth model:

$P_0 = \dfrac{DPS_1}{k_e - g_n} = \dfrac{DPS_0(1 + g_n)}{k_e - g_n}$

We then use the fact that the dividends per share next period (DPS₁) are the earnings per share this period multiplied by the payout ratio (PR), defined as the ratio of dividends per share to earnings per share (assumed constant), multiplied by one plus the growth rate (1 + g) of earnings:

$DPS_1 = EPS_0 [PR](1 + g_n)$

Similarly, the earnings per share next period are the earnings per share this period multiplied by one plus the growth rate:

$EPS_1 = EPS_0(1 + g_n)$
Substituting the earnings per share expression for the dividends per share in the constant growth model, we get the pricing relationship:

$P_0 = \dfrac{EPS_0 [PR](1 + g_n)}{k_e - g_n}$

Because we are using price-to-earnings ratios, we can divide the pricing relationship by earnings per share to obtain an approximation of the price-to-earnings ratio (PE):

$\dfrac{P_0}{EPS_1} = \dfrac{[PR]}{k_e - g_n} \approx PE_1$

Assuming that the PE and EPS ratios are fairly stable over time, we can estimate the current pricing structure by forecasting the next period's EPS:

$P_0 = EPS_1 [PE_1]$

Issues with using PE ratios include the fact that they change across different markets. If a firm serves multiple markets, it is difficult to find an adequate weighted average PE ratio. PE ratios may not be stable through time and are most certainly not stable across firms. If more efficient firms are grouped with less efficiently run firms, the average PE ratio may be skewed. In addition, market overreaction and speculation, particularly among high-growth firms, can inflate PE ratios. Furthermore, not all firms are publicly held, some firms may not have a PE ratio, and if the valuation of individual projects is required, PE ratios may not be adequate because it is difficult to isolate a specific investment's profitability and its corresponding PE ratio. Similar approaches include using other proxy multiples, such as Business Enterprise Value to Earnings, Price to Book, and Price to Sales, with similar methods and applications.
DISCOUNTING CONVENTIONS

In using discounted cash flow analysis, several conventions require consideration: continuous versus discrete discounting, midyear versus end-of-year convention, and beginning- versus end-of-period discounting.
Continuous versus Discrete Periodic Discounting

The discounting convention is important when performing a discounted cash flow analysis. Using the same compounding-period principle, future cash flows can be discounted using the effective annualized discount rate. For instance, suppose an annualized discount rate of 30 percent is used on a $100 cash flow. Depending on the compounding periodicity, the calculated present value and future value differ (see Table D.1). To illustrate this point further, a $100 deposit in a 30 percent interest-bearing account will yield $130 at the end of one year if the interest compounds once a year.
March 17, 2008
Char Count=
Financial Statement Analysis
TABLE D.1  Continuous versus Periodic Discrete Discounting

Compounding    Periods per Year    Interest Factor    Future Value    Present Value
Annual         1                   30.00%             $130.00         $76.92
Quarterly      4                   33.55%             $133.55         $74.88
Monthly        12                  34.49%             $134.49         $74.36
Daily          365                 34.97%             $134.97         $74.09
Continuous     ∞                   34.99%             $134.99         $74.08
However, if interest is compounded quarterly, the deposit value increases to $133.55 due to the additional interest-on-interest compounding effects. For instance:

Value at the end of the first quarter = $100.00(1 + 0.30/4)¹ = $107.50
Value at the end of the second quarter = $107.50(1 + 0.30/4)¹ = $115.56
Value at the end of the third quarter = $115.56(1 + 0.30/4)¹ = $124.23
Value at the end of the fourth quarter = $124.23(1 + 0.30/4)¹ = $133.55

That is, the annualized discount rate for different compounding periods is its effective annualized rate, calculated as

$\left(1 + \dfrac{discount}{periods}\right)^{periods} - 1$

For the quarterly compounding interest rate, the effective annualized rate is

$\left(1 + \dfrac{30.00\%}{4}\right)^4 - 1 = 33.55\%$

Applying this rate for the year, we have $100(1 + 0.3355) = $133.55. This analysis can be extended to monthly, daily, or any other periodicities. In addition, if the interest rate is assumed to be continuously compounding, the continuous effective annualized rate should be used, where

$\lim_{periods \to \infty}\left(1 + \dfrac{discount}{periods}\right)^{periods} - 1 = e^{discount} - 1$

For instance, the 30 percent interest rate compounded continuously yields e^0.3 − 1 = 34.99%. Notice that as the number of compounding periods increases, the effective interest rate increases until it approaches the limit of continuous compounding. Annual, quarterly, monthly, and daily compounding are termed discrete periodic compounding, as compared to the continuous compounding approach using the exponential function. In summary, the higher the number of compounding periods, the higher the future value and the lower the present value of a cash flow payment. When applied to discounted cash flow analysis, if the discount rate calculated using a weighted average cost of capital is continuously compounding (e.g., interest payments and cost of capital are continuously compounding), then the net present value calculated may be overoptimistic if discounted discretely.
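Table D.1 can be reproduced in a few lines. A minimal sketch:

```python
# Sketch: effective annualized rates, future values, and present values
# for a 30% nominal rate on a $100 cash flow, as in Table D.1.
import math

nominal, cash = 0.30, 100.0
for label, m in [("Annual", 1), ("Quarterly", 4), ("Monthly", 12), ("Daily", 365)]:
    eff = (1 + nominal / m) ** m - 1
    print(f"{label:10s} {eff:7.2%}  FV={cash * (1 + eff):7.2f}  "
          f"PV={cash / (1 + eff):6.2f}")

eff = math.exp(nominal) - 1          # continuous compounding limit
print(f"{'Continuous':10s} {eff:7.2%}  FV={cash * (1 + eff):7.2f}  "
      f"PV={cash / (1 + eff):6.2f}")
```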
FIGURE D.1 Full-year versus midyear discounting. The time line shows WACC = 20%, an investment of −$1,000 at Year 0, and free cash flows of FCF₁ = $500, FCF₂ = $600, FCF₃ = $700, FCF₄ = $800, and FCF₅ = $900 at the ends of Years 1 through 5:

$NPV = -\$1{,}000 + \dfrac{\$500}{(1 + 0.2)^1} + \dfrac{\$600}{(1 + 0.2)^2} + \dfrac{\$700}{(1 + 0.2)^3} + \dfrac{\$800}{(1 + 0.2)^4} + \dfrac{\$900}{(1 + 0.2)^5} = \$985$
Full-Year versus Midyear Convention

In the conventional discounted cash flow approach, cash flows occurring in the future are discounted back to the present value and summed to obtain the net present value of a project. These cash flows are usually attached to a particular period in the future, measured usually in years, quarters, or months. The time line in Figure D.1 illustrates a sample series of cash flows over the next five years, with an assumed 20 percent discount rate. Because the cash flows are attached to an annual time line, they are usually assumed to occur at the end of each year. That is, $500 will be recognized at the end of the first full year, $600 at the end of the second year, and so forth. This is termed the full-year discounting convention. However, under usual business conditions, cash flows tend to accrue throughout the entire year and do not arrive in a single lump sum at the end of the year. Instead, the midyear convention may be applied. That is, the $500 cash flow accrues over the entire first year and should be discounted at 0.5 years rather than 1.0 years. Using this midpoint supposes that the $500 cash flow comes in equally over the entire year:

$NPV = -\$1{,}000 + \dfrac{\$500}{(1 + 0.2)^{0.5}} + \dfrac{\$600}{(1 + 0.2)^{1.5}} + \dfrac{\$700}{(1 + 0.2)^{2.5}} + \dfrac{\$800}{(1 + 0.2)^{3.5}} + \dfrac{\$900}{(1 + 0.2)^{4.5}} = \$1{,}175$
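Both conventions are one line each in code. A minimal sketch reproducing the two NPVs (values match the text up to rounding):

```python
# Sketch: full-year versus midyear NPVs from Figure D.1.
cash_flows = [500, 600, 700, 800, 900]
wacc = 0.20

full_year = -1000 + sum(cf / (1 + wacc) ** (t + 1)
                        for t, cf in enumerate(cash_flows))
midyear = -1000 + sum(cf / (1 + wacc) ** (t + 0.5)
                      for t, cf in enumerate(cash_flows))

print(round(full_year))   # ~986 (the text rounds to $985)
print(round(midyear))     # ~1175
```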
End-of-Period versus Beginning-of-Period Discounting

Another key issue in discounting involves the use of end-of-period versus beginning-of-period discounting. Suppose the cash flow series are generated on a time line such as in Figure D.2.

FIGURE D.2 End-of-period versus beginning-of-period discounting. The time line shows WACC = 20%, an investment of −$1,000 in Year 2002, and free cash flows of FCF₁ = $500, FCF₂ = $600, and FCF₃ = $700 in Years 2003 through 2005.
Further suppose that the valuation date is January 1, 2002. The $500 cash flow can occur either at the beginning of the first year (January 1, 2003) or at the end of the first year (December 31, 2003). The former requires discounting over one year and the latter over two years. If the cash flows are assumed to roll in equally over the year (that is, from January 1, 2002, to January 1, 2003), the discounting should be for only 0.5 years. In contrast, suppose that the valuation date is December 31, 2002, and the cash flow series occurs on January 1, 2003, or December 31, 2003. The former requires no discounting, while the latter requires one year of discounting using the end-of-year discounting convention. Under the midyear convention, the cash flow occurring on December 31, 2003, should be discounted at 0.5 years.
Exotic Options Formulae
BLACK AND SCHOLES OPTION MODEL—EUROPEAN VERSION

This is the famous Nobel Prize-winning Black-Scholes model without any dividend payments. It is the European version, where an option can be exercised only at expiration and not before. Although it is simple enough to use, care should be taken in estimating its input variable assumptions, especially that of volatility, which is usually difficult to estimate. However, the Black-Scholes model is useful in generating ballpark estimates of the true real options value, especially for more generic-type calls and puts. For more complex real options analysis, different types of exotic options are required.
Definitions of Variables
S   present value of future cash flows ($)
X   implementation cost ($)
r   risk-free rate (%)
T   time to expiration (years)
σ   volatility (%)
Φ   cumulative standard-normal distribution

Computation

$$\text{Call} = S\,\Phi\!\left(\frac{\ln(S/X) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}\right) - X e^{-rT}\,\Phi\!\left(\frac{\ln(S/X) + (r - \sigma^2/2)T}{\sigma\sqrt{T}}\right)$$

$$\text{Put} = X e^{-rT}\,\Phi\!\left(-\frac{\ln(S/X) + (r - \sigma^2/2)T}{\sigma\sqrt{T}}\right) - S\,\Phi\!\left(-\frac{\ln(S/X) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}\right)$$
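As a quick plausibility check, the model can be coded in a few lines of Python. This is a minimal sketch; the q parameter anticipates the dividend-paying version in the next section (set q = 0 for the formulas above), and the sample inputs in the last line are assumed for illustration only.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Cumulative standard-normal distribution via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes(S, X, r, T, sigma, q=0.0):
    """European call and put (Black-Scholes with continuous dividend q)."""
    d1 = (log(S / X) + (r - q + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * exp(-q * T) * norm_cdf(d1) - X * exp(-r * T) * norm_cdf(d2)
    put = X * exp(-r * T) * norm_cdf(-d2) - S * exp(-q * T) * norm_cdf(-d1)
    return call, put

print(black_scholes(S=100, X=100, r=0.05, T=1.0, sigma=0.25))
```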
BLACK AND SCHOLES WITH DRIFT (DIVIDEND)—EUROPEAN VERSION This is a modification of the Black-Scholes model and assumes a fixed dividend payment rate of q in percent. This can be construed as the
opportunity cost of holding the option rather than holding the underlying asset.
Definitions of Variables
S   present value of future cash flows ($)
X   implementation cost ($)
r   risk-free rate (%)
T   time to expiration (years)
σ   volatility (%)
Φ   cumulative standard-normal distribution
q   continuous dividend payout or opportunity cost (%)
Computation

$$\text{Call} = S e^{-qT}\,\Phi\!\left(\frac{\ln(S/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right) - X e^{-rT}\,\Phi\!\left(\frac{\ln(S/X) + (r - q - \sigma^2/2)T}{\sigma\sqrt{T}}\right)$$

$$\text{Put} = X e^{-rT}\,\Phi\!\left(-\frac{\ln(S/X) + (r - q - \sigma^2/2)T}{\sigma\sqrt{T}}\right) - S e^{-qT}\,\Phi\!\left(-\frac{\ln(S/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right)$$
BLACK AND SCHOLES WITH FUTURE PAYMENTS—EUROPEAN VERSION Here, cash flow streams may be uneven over time, and we should allow for different discount rates (risk-free rate should be used) for all
future times, perhaps allowing for the flexibility of the forward risk-free yield curve.
Definitions of Variables
S    present value of future cash flows ($)
X    implementation cost ($)
r    risk-free rate (%)
T    time to expiration (years)
σ    volatility (%)
Φ    cumulative standard-normal distribution
q    continuous dividend payout or opportunity cost (%)
CFᵢ  cash flow at time i
$$S^* = S - CF_1 e^{-r t_1} - CF_2 e^{-r t_2} - \cdots - CF_n e^{-r t_n} = S - \sum_i CF_i\, e^{-r t_i}$$

$$\text{Call} = S^* e^{-qT}\,\Phi\!\left(\frac{\ln(S^*/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right) - X e^{-rT}\,\Phi\!\left(\frac{\ln(S^*/X) + (r - q - \sigma^2/2)T}{\sigma\sqrt{T}}\right)$$

$$\text{Put} = X e^{-rT}\,\Phi\!\left(-\frac{\ln(S^*/X) + (r - q - \sigma^2/2)T}{\sigma\sqrt{T}}\right) - S^* e^{-qT}\,\Phi\!\left(-\frac{\ln(S^*/X) + (r - q + \sigma^2/2)T}{\sigma\sqrt{T}}\right)$$
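A minimal sketch of the S* adjustment, reusing the black_scholes function from the earlier sketch; the (t_i, CF_i) tuple layout for interim_flows is an assumed convention.

```python
from math import exp

def bs_with_future_payments(S, interim_flows, X, r, T, sigma, q=0.0):
    """Subtract the PV of interim cash flows (t_i, CF_i) from S, then
    price with the dividend-adjusted Black-Scholes sketched earlier."""
    S_star = S - sum(cf * exp(-r * t) for t, cf in interim_flows)
    return black_scholes(S_star, X, r, T, sigma, q)
```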
CHOOSER OPTIONS (BASIC CHOOSER) This is the payoff for a simple chooser option, which is valid only when t₁ < T₂. In addition, it is assumed that the holder has the right to choose either a call or a put with the same strike price at time t₁ and with the same expiration date T₂. For different values of strike prices at different times, we need a complex variable chooser option.
Definitions of Variables
S    present value of future cash flows ($)
X    implementation cost ($)
r    risk-free rate (%)
t₁   time to choose between a call or put (years)
T₂   time to expiration (years)
σ    volatility (%)
Φ    cumulative standard-normal distribution
q    continuous dividend payments (%)
Computation

$$\text{Option Value} = S e^{-qT_2}\,\Phi(d) - X e^{-rT_2}\,\Phi\!\left(d - \sigma\sqrt{T_2}\right) - S e^{-qT_2}\,\Phi(-y) + X e^{-rT_2}\,\Phi\!\left(-y + \sigma\sqrt{t_1}\right)$$

where

$$d = \frac{\ln(S/X) + (r - q + \sigma^2/2)T_2}{\sigma\sqrt{T_2}}, \qquad -y = \frac{-\ln(S/X) + (q - r)T_2 - t_1\sigma^2/2}{\sigma\sqrt{t_1}}$$
COMPLEX CHOOSER The holder of the option has the right to choose between a call and a put at different times (T_C and T_P) with different strike levels (X_C and X_P) for the call and put. Note that some of these equations cannot be readily solved using Excel spreadsheets. Instead, due to the recursive methods used to solve certain bivariate distributions and critical values, the use of programming scripts is required.
Definitions of Variables
S    present value of future cash flows ($)
X    implementation cost ($)
r    risk-free rate (%)
T    time to expiration (years) for call (T_C) and put (T_P)
σ    volatility (%)
Φ    cumulative standard-normal distribution
Ω    cumulative bivariate-normal distribution
q    continuous dividend payout (%)
I    critical value solved recursively
Z    intermediate variables (Z₁ and Z₂)
Computation First, solve recursively for the critical value I as follows:

$$\begin{aligned}
0 ={}& I e^{-q(T_C - t)}\,\Phi\!\left(\frac{\ln(I/X_C) + (r - q + \sigma^2/2)(T_C - t)}{\sigma\sqrt{T_C - t}}\right) \\
&- X_C e^{-r(T_C - t)}\,\Phi\!\left(\frac{\ln(I/X_C) + (r - q + \sigma^2/2)(T_C - t)}{\sigma\sqrt{T_C - t}} - \sigma\sqrt{T_C - t}\right) \\
&+ I e^{-q(T_P - t)}\,\Phi\!\left(\frac{-\ln(I/X_P) + (q - r - \sigma^2/2)(T_P - t)}{\sigma\sqrt{T_P - t}}\right) \\
&- X_P e^{-r(T_P - t)}\,\Phi\!\left(\frac{-\ln(I/X_P) + (q - r - \sigma^2/2)(T_P - t)}{\sigma\sqrt{T_P - t}} + \sigma\sqrt{T_P - t}\right)
\end{aligned}$$
Then, using the I value, calculate

$$d_1 = \frac{\ln(S/I) + (r - q + \sigma^2/2)t}{\sigma\sqrt{t}}, \qquad d_2 = d_1 - \sigma\sqrt{t}$$

$$y_1 = \frac{\ln(S/X_C) + (r - q + \sigma^2/2)T_C}{\sigma\sqrt{T_C}}, \qquad y_2 = \frac{\ln(S/X_P) + (r - q + \sigma^2/2)T_P}{\sigma\sqrt{T_P}}$$

$$\rho_1 = \sqrt{t/T_C} \quad \text{and} \quad \rho_2 = \sqrt{t/T_P}$$

$$\text{Option Value} = S e^{-qT_C}\,\Omega(d_1;\, y_1;\, \rho_1) - X_C e^{-rT_C}\,\Omega\!\left(d_2;\, y_1 - \sigma\sqrt{T_C};\, \rho_1\right) - S e^{-qT_P}\,\Omega(-d_1;\, -y_2;\, \rho_2) + X_P e^{-rT_P}\,\Omega\!\left(-d_2;\, -y_2 + \sigma\sqrt{T_P};\, \rho_2\right)$$
COMPOUND OPTIONS ON OPTIONS The value of a compound option is based on the value of another option. That is, the underlying variable for the compound option is another option. Again, solving this
model requires programming capabilities.
Definitions of Variables
S    present value of future cash flows ($)
r    risk-free rate (%)
σ    volatility (%)
Φ    cumulative standard-normal distribution
Ω    cumulative bivariate-normal distribution
q    continuous dividend payout (%)
I    critical value solved recursively
X₁   strike for the underlying ($)
X₂   strike for the option on the option ($)
t₁   expiration date for the option on the option (years)
T₂   expiration for the underlying option (years)
Computation First, solve for the critical value I using

$$X_2 = I e^{-q(T_2 - t_1)}\,\Phi\!\left(\frac{\ln(I/X_1) + (r - q + \sigma^2/2)(T_2 - t_1)}{\sigma\sqrt{T_2 - t_1}}\right) - X_1 e^{-r(T_2 - t_1)}\,\Phi\!\left(\frac{\ln(I/X_1) + (r - q - \sigma^2/2)(T_2 - t_1)}{\sigma\sqrt{T_2 - t_1}}\right)$$

Solve recursively for the value of I above and then input it into

$$\begin{aligned}
\text{Call on call} ={}& S e^{-qT_2}\,\Omega\!\left(\frac{\ln(S/X_1) + (r - q + \sigma^2/2)T_2}{\sigma\sqrt{T_2}};\ \frac{\ln(S/I) + (r - q + \sigma^2/2)t_1}{\sigma\sqrt{t_1}};\ \sqrt{t_1/T_2}\right) \\
&- X_1 e^{-rT_2}\,\Omega\!\left(\frac{\ln(S/X_1) + (r - q + \sigma^2/2)T_2}{\sigma\sqrt{T_2}} - \sigma\sqrt{T_2};\ \frac{\ln(S/I) + (r - q + \sigma^2/2)t_1}{\sigma\sqrt{t_1}} - \sigma\sqrt{t_1};\ \sqrt{t_1/T_2}\right) \\
&- X_2 e^{-r t_1}\,\Phi\!\left(\frac{\ln(S/I) + (r - q + \sigma^2/2)t_1}{\sigma\sqrt{t_1}} - \sigma\sqrt{t_1}\right)
\end{aligned}$$
EXCHANGE ASSET FOR ASSET OPTION The exchange-asset-for-asset option is well suited to a merger and acquisition situation in which a firm exchanges one stock for another firm's stock as the means of payment.
Definitions of Variables
S    present value of future cash flows ($) for Asset 1 (S₁) and Asset 2 (S₂)
X    implementation cost ($)
Q    quantity of Asset 1 to be exchanged for quantity of Asset 2
r    risk-free rate (%)
T    time to expiration (years)
σ    volatility (%) of Asset 1 (σ₁) and Asset 2 (σ₂)
σ*   portfolio volatility after accounting for the assets' correlation ρ
Φ    cumulative standard-normal distribution
q₁   continuous dividend payout (%) for Asset 1
q₂   continuous dividend payout (%) for Asset 2
Computation With the portfolio volatility

$$\sigma^* = \sqrt{\sigma_1^2 + \sigma_2^2 - 2\rho\,\sigma_1\sigma_2}$$

the option value is

$$\text{Option} = Q_1 S_1 e^{-q_1 T}\,\Phi(d) - Q_2 S_2 e^{-q_2 T}\,\Phi\!\left(d - \sigma^*\sqrt{T}\right), \qquad d = \frac{\ln(Q_1 S_1 / Q_2 S_2) + \left(q_2 - q_1 + \sigma^{*2}/2\right)T}{\sigma^*\sqrt{T}}$$
FIXED STRIKE LOOK-BACK OPTION The strike price is fixed in advance, and at expiration, the call option pays out the maximum of the difference between the highest observed price in the option's lifetime and the strike X, and 0; that is, Call = Max[S_MAX − X, 0]. A put at expiration pays out the maximum of the difference between the fixed strike X and the minimum observed price, and 0; that is, Put = Max[X − S_MIN, 0].
Definitions of Variables
S    present value of future cash flows ($)
X    implementation cost ($)
r    risk-free rate (%)
T    time to expiration (years)
σ    volatility (%)
Φ    cumulative standard-normal distribution
q    continuous dividend payout (%)
Computation Under the fixed strike look-back call option, when we have X > S_MAX, the call option is

$$\begin{aligned}
\text{Call} ={}& S e^{-qT}\,\Phi(d_1) - X e^{-rT}\,\Phi\!\left(d_1 - \sigma\sqrt{T}\right) \\
&+ S e^{-rT}\,\frac{\sigma^2}{2(r-q)}\left[-\left(\frac{S}{X}\right)^{-\frac{2(r-q)}{\sigma^2}}\Phi\!\left(d_1 - \frac{2(r-q)}{\sigma}\sqrt{T}\right) + e^{(r-q)T}\,\Phi(d_1)\right]
\end{aligned}$$

where d₁ = [ln(S/X) + (r − q + σ²/2)T]/(σ√T). However, when X < S_MAX, the call option is

$$\begin{aligned}
\text{Call} ={}& e^{-rT}(S_{\mathrm{MAX}} - X) + S e^{-qT}\,\Phi(b_1) - S_{\mathrm{MAX}}\, e^{-rT}\,\Phi\!\left(b_1 - \sigma\sqrt{T}\right) \\
&+ S e^{-rT}\,\frac{\sigma^2}{2(r-q)}\left[-\left(\frac{S}{S_{\mathrm{MAX}}}\right)^{-\frac{2(r-q)}{\sigma^2}}\Phi\!\left(b_1 - \frac{2(r-q)}{\sigma}\sqrt{T}\right) + e^{(r-q)T}\,\Phi(b_1)\right]
\end{aligned}$$

where b₁ = [ln(S/S_MAX) + (r − q + σ²/2)T]/(σ√T).
FLOATING STRIKE LOOK-BACK OPTIONS Floating strike look-back options give the call holder the option to buy the underlying security at the lowest observable price and the put holder the option to sell at the highest observable price. That is, we have Call = Max(S − S_MIN, 0) and Put = Max(S_MAX − S, 0).
Definitions of Variables
S    present value of future cash flows ($)
X    implementation cost ($)
r    risk-free rate (%)
T    time to expiration (years)
σ    volatility (%)
Φ    cumulative standard-normal distribution
q    continuous dividend payout (%)
$$\begin{aligned}
\text{Call} ={}& S e^{-qT}\,\Phi(a_1) - S_{\mathrm{MIN}}\, e^{-rT}\,\Phi\!\left(a_1 - \sigma\sqrt{T}\right) \\
&+ S e^{-rT}\,\frac{\sigma^2}{2(r-q)}\left[\left(\frac{S}{S_{\mathrm{MIN}}}\right)^{-\frac{2(r-q)}{\sigma^2}}\Phi\!\left(-a_1 + \frac{2(r-q)}{\sigma}\sqrt{T}\right) - e^{(r-q)T}\,\Phi(-a_1)\right]
\end{aligned}$$

where a₁ = [ln(S/S_MIN) + (r − q + σ²/2)T]/(σ√T), and

$$\begin{aligned}
\text{Put} ={}& S_{\mathrm{MAX}}\, e^{-rT}\,\Phi\!\left(-b_1 + \sigma\sqrt{T}\right) - S e^{-qT}\,\Phi(-b_1) \\
&+ S e^{-rT}\,\frac{\sigma^2}{2(r-q)}\left[-\left(\frac{S}{S_{\mathrm{MAX}}}\right)^{-\frac{2(r-q)}{\sigma^2}}\Phi\!\left(b_1 - \frac{2(r-q)}{\sigma}\sqrt{T}\right) + e^{(r-q)T}\,\Phi(b_1)\right]
\end{aligned}$$

where b₁ = [ln(S/S_MAX) + (r − q + σ²/2)T]/(σ√T).
FORWARD START OPTIONS

Definitions of Variables
S    present value of future cash flows ($)
X    implementation cost ($)
r    risk-free rate (%)
t₁   time when the forward start option begins (years)
T₂   time to expiration of the forward start option (years)
σ    volatility (%)
Φ    cumulative standard-normal distribution
q    continuous dividend payout (%)
$$\text{Call} = S e^{-q t_1}\left[e^{-q(T_2 - t_1)}\,\Phi(d_1) - \alpha\, e^{-r(T_2 - t_1)}\,\Phi\!\left(d_1 - \sigma\sqrt{T_2 - t_1}\right)\right]$$

$$\text{Put} = S e^{-q t_1}\left[\alpha\, e^{-r(T_2 - t_1)}\,\Phi\!\left(-d_1 + \sigma\sqrt{T_2 - t_1}\right) - e^{-q(T_2 - t_1)}\,\Phi(-d_1)\right]$$

where d₁ = [ln(1/α) + (r − q + σ²/2)(T₂ − t₁)]/(σ√(T₂ − t₁))
where α is the multiplier constant. Note: If the option starts at X percent out-of-the-money, α will be (1 + X). If it starts at-the-money, α will be 1.0, and (1 − X) if in-the-money.
GENERALIZED BLACK-SCHOLES MODEL

Definitions of Variables
S    present value of future cash flows ($)
X    implementation cost ($)
r    risk-free rate (%)
T    time to expiration (years)
σ    volatility (%)
Φ    cumulative standard-normal distribution
b    carrying cost (%)
q    continuous dividend payout (%)
$$\text{Call} = S e^{(b-r)T}\,\Phi\!\left(\frac{\ln(S/X) + (b + \sigma^2/2)T}{\sigma\sqrt{T}}\right) - X e^{-rT}\,\Phi\!\left(\frac{\ln(S/X) + (b - \sigma^2/2)T}{\sigma\sqrt{T}}\right)$$

$$\text{Put} = X e^{-rT}\,\Phi\!\left(-\frac{\ln(S/X) + (b - \sigma^2/2)T}{\sigma\sqrt{T}}\right) - S e^{(b-r)T}\,\Phi\!\left(-\frac{\ln(S/X) + (b + \sigma^2/2)T}{\sigma\sqrt{T}}\right)$$
Notes:
b = 0        Futures options model
b = r − q    Black-Scholes with dividend payment
b = r        Simple Black-Scholes formula
b = r − r*   Foreign currency options model
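Because the carrying cost b subsumes the earlier variants, a single function can price all four cases. This is a minimal, self-contained Python sketch of the generalized model above.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Cumulative standard-normal distribution via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def generalized_black_scholes(S, X, r, T, sigma, b):
    """Generalized Black-Scholes with carrying cost b:
    b = r       -> simple Black-Scholes
    b = r - q   -> Black-Scholes with continuous dividend q
    b = 0       -> options on futures
    b = r - r*  -> foreign currency options"""
    d1 = (log(S / X) + (b + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * exp((b - r) * T) * norm_cdf(d1) - X * exp(-r * T) * norm_cdf(d2)
    put = X * exp(-r * T) * norm_cdf(-d2) - S * exp((b - r) * T) * norm_cdf(-d1)
    return call, put
```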
OPTIONS ON FUTURES The underlying security is a forward or futures contract with initial price F. Here, the forward or futures contract's initial price F replaces S in the model, and its present value is taken through the e^(−rT) discount factor.
Definitions of Variables
X    implementation cost ($)
F    futures single-point cash flows ($)
r    risk-free rate (%)
T    time to expiration (years)
σ    volatility (%)
Φ    cumulative standard-normal distribution
q    continuous dividend payout (%)
Computation

$$\text{Call} = F e^{-rT}\,\Phi\!\left(\frac{\ln(F/X) + (\sigma^2/2)T}{\sigma\sqrt{T}}\right) - X e^{-rT}\,\Phi\!\left(\frac{\ln(F/X) - (\sigma^2/2)T}{\sigma\sqrt{T}}\right)$$

$$\text{Put} = X e^{-rT}\,\Phi\!\left(-\frac{\ln(F/X) - (\sigma^2/2)T}{\sigma\sqrt{T}}\right) - F e^{-rT}\,\Phi\!\left(-\frac{\ln(F/X) + (\sigma^2/2)T}{\sigma\sqrt{T}}\right)$$
SPREAD OPTION The payoff on a spread option depends on the spread between the two futures contracts less the implementation cost.
Definitions of Variables
X    implementation cost ($)
r    risk-free rate (%)
T    time to expiration (years)
σ    volatility (%)
Φ    cumulative standard-normal distribution
F₁   price for futures contract 1
F₂   price for futures contract 2
ρ    correlation between the two futures contracts
Computation First, calculate the portfolio volatility:

$$\sigma = \sqrt{\sigma_1^2 + \left(\sigma_2\,\frac{F_2}{F_2 + X}\right)^2 - 2\rho\,\sigma_1\sigma_2\,\frac{F_2}{F_2 + X}}$$

Then, obtain the call and put option values:

$$\text{Call} = (F_2 + X)\,e^{-rT}\left[\frac{F_1}{F_2 + X}\,\Phi(d_1) - \Phi\!\left(d_1 - \sigma\sqrt{T}\right)\right]$$

$$\text{Put} = (F_2 + X)\,e^{-rT}\left[\Phi\!\left(-d_1 + \sigma\sqrt{T}\right) - \frac{F_1}{F_2 + X}\,\Phi(-d_1)\right]$$

where d₁ = [ln(F₁/(F₂ + X)) + (σ²/2)T]/(σ√T)
DISCRETE TIME SWITCH OPTIONS The discrete time switch option holder receives an accumulated amount of A·Δt at maturity T for each time interval Δt during which the corresponding asset price S at time iΔt has exceeded the strike price X. The put option provides a similar payoff every time the asset price is below the strike price.
Definitions of Variables
S    present value of future cash flows ($)
X    implementation cost ($)
r    risk-free rate (%)
T    time to expiration (years)
σ    volatility (%)
Φ    cumulative standard-normal distribution
b    carrying cost (%), usually the risk-free rate less any continuous dividend payout rate
$$\text{Call} = A e^{-rT}\,\Delta t \sum_{i=1}^{n} \Phi\!\left(\frac{\ln(S/X) + (b - \sigma^2/2)\,i\Delta t}{\sigma\sqrt{i\Delta t}}\right)$$

$$\text{Put} = A e^{-rT}\,\Delta t \sum_{i=1}^{n} \Phi\!\left(\frac{-\ln(S/X) - (b - \sigma^2/2)\,i\Delta t}{\sigma\sqrt{i\Delta t}}\right)$$
TWO-CORRELATED-ASSETS OPTION The payoff on an option depends on whether the other correlated option is in-the-money. This is the continuous counterpart to a correlated quadranomial model.
Definitions of Variables
S    present value of future cash flows ($)
X    implementation cost ($)
r    risk-free rate (%)
T    time to expiration (years)
σ    volatility (%)
Ω    cumulative bivariate-normal distribution function
ρ    correlation (%) between the two assets
q₁   continuous dividend payout for the first asset (%)
q₂   continuous dividend payout for the second asset (%)
Computation

$$\text{Call} = S_2 e^{-q_2 T}\,\Omega\!\left(y_2 + \sigma_2\sqrt{T};\ y_1 + \rho\,\sigma_2\sqrt{T};\ \rho\right) - X_2 e^{-rT}\,\Omega\!\left(y_2;\ y_1;\ \rho\right)$$

$$\text{Put} = X_2 e^{-rT}\,\Omega\!\left(-y_2;\ -y_1;\ \rho\right) - S_2 e^{-q_2 T}\,\Omega\!\left(-y_2 - \sigma_2\sqrt{T};\ -y_1 - \rho\,\sigma_2\sqrt{T};\ \rho\right)$$

where

$$y_1 = \frac{\ln(S_1/X_1) + \left(r - q_1 - \sigma_1^2/2\right)T}{\sigma_1\sqrt{T}}, \qquad y_2 = \frac{\ln(S_2/X_2) + \left(r - q_2 - \sigma_2^2/2\right)T}{\sigma_2\sqrt{T}}$$
Measures of Risk
TAMING THE BEAST Risky ventures are the norm in the daily business world. The mere mention of names such as George Soros, John Meriwether, Paul Reichmann, and Nicholas Leeson, or firms such as Long Term Capital Management, Metallgesellschaft, Barings Bank, Bankers Trust, Daiwa Bank, Sumitomo Corporation, Merrill Lynch, and Citibank brings a shrug of disbelief and fear. These names are some of
the biggest in the world of business and finance. Their claim to fame is not simply being the best and brightest individuals or being the largest and most respected firms, but for bearing the stigma
of being involved in highly risky ventures that turned sour almost overnight. George Soros was and still is one of the most respected names in high finance; he is known globally for his brilliance
and exploits. Paul Reichmann was a reputable and brilliant real estate and property tycoon. Between the two of them, nothing was impossible, but when they ventured into investments in Mexican real
estate, the wild fluctuations of the peso in the foreign exchange market were nothing short of a disaster. During late 1994 and early 1995, the peso hit an all-time low and their ventures went from
bad to worse, but the one thing that they did not expect was that the situation would become a lot worse before it was all over and billions would be lost as a consequence. Long Term Capital
Management was headed by Meriwether, one of the rising stars on Wall Street, with a slew of superstars on its management team, including two Nobel laureates in finance and economics (Robert Merton
and Myron Scholes). The firm was also backed by giant investment banks. A firm that seemed indestructible blew up with billions of dollars in the red, shaking the international investment community
with repercussions throughout Wall Street as individual investors started to lose faith in large hedge funds and wealth-management firms, forcing the eventual massive bailout organized by the Federal
Reserve. Barings was one of the oldest banks in England. It was so respected that even Queen Elizabeth II herself held a private account with it. This multibillion-dollar institution was brought down
single-handedly by Nicholas Leeson, an employee halfway around the world. Leeson was a young and brilliant investment banker who headed up Barings’ Singapore branch. His illegally doctored track
record showed significant investment profits, which gave him more leeway and trust from the home office over time. He was able to cover his losses through fancy accounting and by taking significant
amounts of risk. His speculations in the Japanese yen went south
and he took Barings down with him, and the top echelon in London never knew what hit them. Had any of the managers in the boardroom at their respective headquarters bothered to look at the risk
profile of their investments, they would surely have made a very different decision much earlier on, preventing what became major embarrassments in the global investment community. If the projected
returns are adjusted for risks, that is, finding what levels of risks are required to attain such seemingly extravagant returns, it would be sensible not to proceed. Risks occur in everyday life that
do not require investments in the multimillions. For instance, when would one purchase a house in a fluctuating housing market? When would it be more profitable to lock in a fixed-rate mortgage
rather than keep a floating variable rate? What are the chances that there will be insufficient funds at retirement? What about the potential personal property losses when a hurricane hits? How much
accident insurance is considered sufficient? How much is a lottery ticket actually worth? Risk permeates all aspects of life and one can never avoid taking or facing risks. What we can do is
understand risks better through a systematic assessment of their impacts and repercussions. This assessment framework must also be capable of measuring, monitoring, and managing risks; otherwise,
simply noting that risks exist and moving on is not optimal. This book provides the tools and framework necessary to tackle risks head-on. Only with the added insights gained through a rigorous
assessment of risk can one actively manage and monitor risk.
Risks permeate every aspect of business, but we do not have to be passive participants. What we can do is develop a framework to better understand risks through a systematic assessment of their
impacts and repercussions. This framework also must be capable of measuring, monitoring, and managing risks.
THE BASICS OF RISK Risk can be defined simply as any uncertainty that affects a system in an unknown fashion, whereby the ramifications are also unknown but bring with them great fluctuation in value and outcome. In every instance, for risk to be evident, the following generalities must exist:
• Uncertainties and risks have a time horizon.
• Uncertainties exist in the future and will evolve over time.
• Uncertainties become risks if they affect the outcomes and scenarios of the system.
• These changing scenarios' effects on the system can be measured.
• The measurement has to be set against a benchmark.
Risk is never instantaneous. It has a time horizon. For instance, a firm engaged in a risky research and development venture will face significant amounts of risk but only until the product is fully
developed or has proven itself in the market. These risks are caused by uncertainties in the technology of the product under research, uncertainties about the potential market, uncertainties about
the level of competitive threats and substitutes, and so forth. These uncertainties will change over the course of the company’s research and marketing activities—some uncertainties will increase
while others will most likely decrease through the passage of time, actions, and events. However, only the uncertainties that affect the product directly will have any bearing on the risks of the
product being unsuccessful. That is, only uncertainties that change the possible scenario outcomes will make the product risky (e.g., market and economic conditions). Finally, risk exists if it can
be measured and compared against a benchmark. If no benchmark exists, then perhaps the conditions just described are the norm for research and development activities, and thus the negative results
are to be expected. These benchmarks have to be measurable and tangible, for example, gross profits, success rates, market share, time to implementation, and so forth.
Risk is any uncertainty that affects a system in an unknown fashion and its ramifications are unknown, but it brings great fluctuation in value and outcome. Risk has a time horizon, meaning that
uncertainty evolves over time, which affects measurable future outcomes and scenarios with respect to a benchmark.
THE NATURE OF RISK AND RETURN Nobel laureate Harry Markowitz’s groundbreaking research into the nature of risk and return has revolutionized the world of finance. His seminal work, which is now known
all over the world as the Markowitz Efficient Frontier, looks at the nature of risk and return. Markowitz did not look at risk as the enemy but as a condition that should be embraced and balanced out
through its expected returns. The concept of risk and return was then refined through later works by William Sharpe and others, who stated that a heightened risk necessitates a higher return, as
elegantly expressed through the capital asset pricing model (CAPM), where the required rate of return on a marketable risky equity is equivalent to the return on an equivalent riskless asset plus a
beta systematic and undiversifiable risk measure multiplied by the market risk’s return premium. In essence, a higher-risk asset requires a higher return. In Markowitz’s model, one could strike a
balance between risk and return. Depending on the risk appetite of an investor, the optimal or best-case returns can be obtained through the efficient frontier. Should the investor require a higher
level of returns, he or she would have to face a higher level of risk. Markowitz’s work carried over to finding combinations of individual projects or assets in a portfolio that would provide the
best bang for the buck, striking an elegant balance between risk and return. In order to better understand this balance, also known as risk adjustment
in modern risk analysis language, risks must first be measured and understood. The following section illustrates how risk can be measured.
THE STATISTICS OF RISK The study of statistics refers to the collection, presentation, analysis, and utilization of numerical data to infer and make decisions in the face of uncertainty, where the
actual population data is unknown. There are two branches in the study of statistics: descriptive statistics, where data is summarized and described, and inferential statistics, where the population
is generalized through a small random sample, such that the sample becomes useful for making predictions or decisions when the population characteristics are unknown. A sample can be defined as a
subset of the population being measured, whereas the population can be defined as all possible observations of interest of a variable. For instance, if one is interested in the voting practices of
all U.S. registered voters, the entire pool of a hundred million registered voters is considered the population, whereas a small survey of one thousand registered voters taken from several small
towns across the nation is the sample. The calculated characteristics of the sample (e.g., mean, median, standard deviation) are termed statistics, while parameters imply that the entire population
has been surveyed and the results tabulated. Thus, in decision making, the statistic is of vital importance, seeing that sometimes the entire population is yet unknown (e.g., who are all your
customers, what is the total market share, etc.) or it is very difficult to obtain all relevant information on the population seeing that it would be too time- or resource-consuming. In inferential
statistics, the usual steps undertaken include:
• Designing the experiment—this phase includes designing the ways to collect all possible and relevant data.
• Collection of sample data—data is gathered and tabulated.
• Analysis of data—statistical analysis is performed.
• Estimation or prediction—inferences are made based on the statistics obtained.
• Hypothesis testing—decisions are tested against the data to see the outcomes.
• Goodness-of-fit—actual data is compared to historical data to see how accurate, valid, and reliable the inference is.
• Decision making—decisions are made based on the outcome of the inference.
Measuring the Center of the Distribution— The First Moment The first moment of a distribution measures the expected rate of return on a particular project. It measures the location of the project’s
scenarios and possible outcomes on average. The common statistics for the first moment include the mean (average), median (center of a distribution), and mode (most commonly occurring value). Figure
F.1 illustrates the first moment—where, in this case, the first moment of this distribution is measured by the mean (µ) or average value.
FIGURE F.1 First moment (two distributions with σ₁ = σ₂ and µ₁ ≠ µ₂; skew = 0, kurtosis = 0)
Measuring the Spread of the Distribution— The Second Moment The second moment measures the spread of a distribution, which is a measure of risk. The spread or width of a distribution measures the
variability of a variable, that is, the potential that the variable can fall into different regions of the distribution—in other words, the potential scenarios of outcomes. Figure F.2 illustrates two
distributions with identical first moments (identical means) but very different second moments or risks. The visualization becomes clearer in Figure F.3. As an example, suppose there are two stocks
and the first stock’s movements (illustrated by the dotted line) with the smaller fluctuation is compared against the second stock’s movements (illustrated by the darker line) with a much higher
price fluctuation. Clearly an investor would view the stock with the wilder fluctuation as riskier because the outcomes of the more risky stock are relatively more unknown than the less risky stock.
The vertical axis in Figure F.3 measures the stock prices; thus, the more risky stock has a wider range of potential outcomes. This range is translated into a distribution’s width (the horizontal
axis) in Figure F.2, where the wider distribution represents the riskier asset. Hence, width or spread of a distribution measures a variable’s risks. Notice that in Figure F.2, both distributions
have identical first moments or central tendencies, but clearly the distributions are very different. This difference in the distributional width is measurable.

FIGURE F.2 Second moment (two distributions with identical means µ₁ = µ₂ but different spreads σ₁ ≠ σ₂; skew = 0, kurtosis = 0)

FIGURE F.3 Stock price fluctuations

Mathematically and statistically, the width or risk of a variable can be measured through several different statistics, including the range, standard deviation (σ), variance, coefficient of variation, volatility, and percentiles.
Measuring the Skew of the Distribution— The Third Moment The third moment measures a distribution’s skewness, that is, how the distribution is pulled to one side or the other. Figure F.4 illustrates
a negative or left skew (the tail of the distribution points to the left) and Figure F.5 illustrates a positive or right skew (the tail of the distribution points to the right). The mean is always
skewed toward the tail of the distribution while the median remains constant. Another way of seeing this is that the mean moves, but the standard deviation, variance, or width may still remain
constant. If the third moment is not considered, then looking only at the expected returns (e.g., mean or median) and risk (standard deviation), a positively skewed project might be incorrectly
chosen! For example, if the horizontal axis represents the net revenues of a project, then clearly a left or negatively skewed distribution might be preferred as there is a higher probability of
greater returns (Figure F.4) as compared to a higher probability for lower-level returns (Figure F.5). Thus, in a skewed distribution, the median is a better measure of returns, as the medians for
both Figures F.4 and F.5 are identical, risks are identical, and, hence, a project with a negatively skewed distribution of net profits is a better choice. Failure to account for a project's distributional skewness may mean that the incorrect project may be chosen (e.g., two projects may have identical first and second moments, that is, they both have identical returns and risk profiles, but their distributional skews may be very different).

FIGURE F.4 Third moment (left skew): σ₁ = σ₂, skew < 0, kurtosis = 0, µ₁ ≠ µ₂

FIGURE F.5 Third moment (right skew): σ₁ = σ₂, skew > 0, kurtosis = 0, µ₁ ≠ µ₂
Measuring the Catastrophic Tail Events of the Distribution—The Fourth Moment The fourth moment, or kurtosis, measures the peakedness of a distribution. Figure F.6 illustrates this effect. The
background (denoted by the dotted line) is a normal distribution with an excess kurtosis of 0. The new distribution has a higher kurtosis; thus the area under the curve is thicker at the tails with
less area in the central body. This condition has major impacts on risk analysis as for the two distributions in Figure F.6; the first three moments (mean, standard deviation, and skewness) can be
identical, but the fourth moment (kurtosis) is different. This condition means that, although the returns and risks are identical, the probabilities of extreme and catastrophic events (potential
large losses or large gains) occurring are higher for a high-kurtosis distribution (e.g., stock market returns are leptokurtic, or have high kurtosis). Ignoring a project's return's kurtosis may be detrimental.

FIGURE F.6 Fourth moment: σ₁ = σ₂, µ₁ = µ₂, skew = 0, kurtosis > 0

Note that sometimes a normal kurtosis is denoted as 3.0, but in this book we use the measure of excess kurtosis, henceforth simply known as kurtosis. In other words, a kurtosis of 3.5
is also known as an excess kurtosis of 0.5, indicating that the distribution has 0.5 additional kurtosis above the normal distribution. The use of excess kurtosis is more prevalent in academic
literature and is, hence, used here. Finally, the normalization of kurtosis to a base of 0 makes for easier interpretation of the statistic (e.g., a positive kurtosis indicates fatter-tailed
distributions while negative kurtosis indicates thinner-tailed distributions).
Most distributions can be defined up to four moments. The first moment describes the distribution’s location or central tendency (expected returns), the second moment describes its width or spread
(risks), the third moment its directional skew (most probable events), and the fourth moment its peakedness or thickness in the tails (catastrophic losses or gains). All four moments should be
calculated and interpreted to provide a more comprehensive view of the project under analysis.
THE MEASUREMENTS OF RISK There are multiple ways to measure risk in projects. This section summarizes some of the more common measures of risk and lists their potential benefits and pitfalls. The
measures include:
• Probability of Occurrence. This approach is simplistic and yet effective. As an example, there is a 10 percent probability that a project will not break even (it will return a negative net present value indicating losses) within the next 5 years. Further, suppose two similar projects have identical implementation costs and expected returns. Based on a single-point estimate, management should be indifferent between them. However, if risk analysis such as Monte Carlo simulation is performed, the first project might reveal a 70 percent probability of losses compared to only a 5 percent probability of losses on the second project. Clearly, the second project is better when risks are analyzed.
• Standard Deviation and Variance. Standard deviation is a measure of the average of each data point's deviation from the mean. This is the most popular measure of risk, where a higher standard deviation implies a wider distributional width and, thus, carries a higher risk. The drawback of this measure is that both the upside and downside variations are included in the computation of the standard deviation. Some analysts define risks as the potential losses or downside; thus, standard deviation and variance will penalize upswings as well as downsides.
• Semi-Standard Deviation. The semi-standard deviation only measures the standard deviation of the downside risks and ignores the upside fluctuations. Modifications of the semi-standard deviation include calculating only the values below the mean, or values below a threshold (e.g., negative profits or negative cash flows). This provides a better picture of downside risk but is more difficult to estimate.
• Volatility. The concept of volatility is widely used in the applications of real options and can be defined briefly as a measure of uncertainty and risks. Volatility can be estimated using multiple methods, including simulation of the uncertain variables impacting a particular project and estimating the standard deviation of the resulting asset's logarithmic returns over time. This concept is more difficult to define and estimate but is more powerful than most other risk measures in that this single value incorporates all sources of uncertainty rolled into one value.
• Beta. Beta is another common measure of risk in the investment finance arena. Beta can be defined simply as the undiversifiable, systematic risk of a financial asset. This concept is made famous through the CAPM, where a higher beta means a higher risk, which in turn requires a higher expected return on the asset.
• Coefficient of Variation. The coefficient of variation is simply defined as the ratio of standard deviation to the mean, which means that the risks are common-sized. For example, the distribution of a group of students' heights (measured in meters) can be compared to the distribution of the students' weights (measured in kilograms). This measure of risk or dispersion is applicable when the variables' estimates, measures, magnitudes, or units differ.
• Value at Risk. Value at Risk (VaR) was made famous by J. P. Morgan in the mid-1990s through the introduction of its RiskMetrics approach, and has thus far been sanctioned by several bank governing bodies around the world. Briefly, it measures the amount of capital reserves at risk given a particular holding period at a particular probability of loss. This measurement can be modified to risk applications by stating, for example, the amount of potential losses a certain percent of the time during the period of the economic life of the project—clearly, a project with a smaller VaR is better.
• Worst-Case Scenario and Regret. Another simple measure is the value of the worst-case scenario given catastrophic losses. Another definition is regret. That is, if a decision is made to pursue a particular project, but if the project becomes unprofitable and suffers a loss, the level of regret is simply the difference between the actual losses compared to doing nothing at all.
• Risk-Adjusted Return on Capital. Risk-adjusted return on capital (RAROC) takes the ratio of the difference between the fiftieth percentile (median) return and the fifth percentile return on a project to its standard deviation. This approach is used mostly by banks to estimate returns subject to their risks by measuring only the potential downside effects and ignoring the positive upswings.
The following details the computations of some of these risk measures and is worthy of review before proceeding through the book.
COMPUTING RISK This section illustrates how some of the more common measures of risk are computed. Each risk measurement has its own computations and uses. For example, certain risk measures are
applicable only to time-series data (e.g., volatility), others are applicable to both cross-sectional and time-series data (e.g., variance, standard deviation, and covariance), while still others require a consistent holding period (e.g., Value at Risk) or a market comparable or benchmark (e.g., beta coefficient).
Probability of Occurrence This approach is simplistic yet effective. The probability of success or failure can be determined several ways. The first is through management expectations and
assumptions, also known as expert opinion, based on historical occurrences or experience of the expert. Another approach is simply to gather available historical or comparable data, industry
averages, academic research, or other third-party sources, indicating the historical probabilities of success or failure (e.g., pharmaceutical R&D’s probability of technical success based on various
drug indications can be obtained from external research consulting groups). Finally, Monte Carlo simulation can be run on a model with multiple interacting input assumptions and the output of
interest (e.g., net present value, gross margin, tolerance ratios, and development success rates) can be captured as a simulation forecast and the relevant probabilities can be obtained, such as the
probability of breaking even, probability of failure, probability of making a profit, and so forth. See Chapter 5 for step-by-step instructions on running and interpreting simulations.
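As a sketch of the simulation route, the probability of loss can be read directly off a Monte Carlo sample. The revenue and cost distributions below are illustrative assumptions, not values from the text.

```python
import random

def prob_of_loss(n_trials=100_000):
    """Monte Carlo sketch: probability that a project's net result is negative."""
    losses = 0
    for _ in range(n_trials):
        revenue = random.gauss(1_200, 300)  # assumed revenue distribution
        cost = random.gauss(1_000, 150)     # assumed cost distribution
        if revenue - cost < 0:
            losses += 1
    return losses / n_trials

print(f"P(loss) ≈ {prob_of_loss():.1%}")
```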
Standard Deviation and Variance Standard deviation is a measure of the average of each data point’s deviation from the mean. A higher standard deviation or variance implies a wider distributional
width and, thus, a higher risk. The standard deviation can be measured in terms of the population or sample, and for illustration purposes, is shown in the following list, where we define xi as the
individual data points, µ as the population mean, N as the population size, and n as the sample size: Population standard deviation:
$$\sigma = \sqrt{\frac{\sum_{i=1}^{N}(x_i - \mu)^2}{N}}$$

and population variance is simply the square of the standard deviation, σ². Alternatively, use Excel's STDEVP and VARP functions for the population standard deviation and variance, respectively.
Sample standard deviation:

$$s = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}}$$
and sample variance is similarly the square of the standard deviation, or s². Alternatively, use Excel's STDEV and VAR functions for the sample standard deviation and variance, respectively. Figure F.7 shows the step-by-step computations.

X         X − Mean    Square of (X − Mean)
−10.50    −9.07        82.2908
 12.25     13.68      187.1033
−11.50    −10.07      101.4337
 13.25     14.68      215.4605
−14.65    −13.22      174.8062
 15.65     17.08      291.6776
−14.50    −13.07      170.8622
Sum: −10.00    Mean: −1.43

Population Standard Deviation and Variance
Sum of Square (X − Mean): 1,223.6343
Variance = Sum of Square (X − Mean)/N: 174.8049
Using Excel's VARP function: 174.8049
Standard Deviation = Square Root of (Sum of Square (X − Mean)/N): 13.2214
Using Excel's STDEVP function: 13.2214

Sample Standard Deviation and Variance
Sum of Square (X − Mean): 1,223.6343
Variance = Sum of Square (X − Mean)/(N − 1): 203.9390
Using Excel's VAR function: 203.9390
Standard Deviation = Square Root of (Sum of Square (X − Mean)/(N − 1)): 14.2807
Using Excel's STDEV function: 14.2807

FIGURE F.7 Standard deviation and variance computation

The drawbacks of this measure are that both the upside and downside variations are included in the
computation of the standard deviation, and its dependence on the units (e.g., values of x in thousands of dollars versus millions of dollars are not comparable). Some analysts define risks as the
potential losses or downside; thus, standard deviation and variance penalize upswings as well as downsides. An alternative is the semi-standard deviation.
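Figure F.7's computation can be replicated in a few lines of Python; this is a sketch using the same seven data points.

```python
data = [-10.50, 12.25, -11.50, 13.25, -14.65, 15.65, -14.50]

n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations

pop_std = (ss / n) ** 0.5          # Excel STDEVP equivalent
samp_std = (ss / (n - 1)) ** 0.5   # Excel STDEV equivalent
print(pop_std, samp_std)           # ≈ 13.2214 and 14.2807, as in Figure F.7
```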
Semi-Standard Deviation The semi-standard deviation only measures the standard deviation of the downside risks and ignores the upside fluctuations. Modifications of the semi-standard deviation
include calculating only the values below the mean, or values below a threshold (e.g., negative profits or negative cash flows). This approach provides a better picture of downside risk but is more
difficult to estimate. Figure F.8 shows how a sample
X         X − Mean    Square of (X − Mean)
−10.50     2.29        5.2327
 12.25     Ignore     (Ignore the positive values)
−11.50     1.29        1.6577
 13.25     Ignore     (Ignore the positive values)
−14.65    −1.86        3.4689
 15.65     Ignore     (Ignore the positive values)
−14.50    −1.71        2.9327
Sum: −51.1500    Mean: −12.7875

Population Standard Deviation and Variance
Sum of Square (X − Mean): 13.2919
Variance = Sum of Square (X − Mean)/N: 3.3230
Using Excel's VARP function: 3.3230
Standard Deviation = Square Root of (Sum of Square (X − Mean)/N): 1.8229
Using Excel's STDEVP function: 1.8229

Sample Standard Deviation and Variance
Sum of Square (X − Mean): 13.2919
Variance = Sum of Square (X − Mean)/(N − 1): 4.4306
Using Excel's VAR function: 4.4306
Standard Deviation = Square Root of (Sum of Square (X − Mean)/(N − 1)): 2.1049
Using Excel's STDEV function: 2.1049

FIGURE F.8 Semi-standard deviation and semi-variance computation
semi-standard deviation and semi-variance are computed. Note that the computation must be performed manually.
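Since there is no built-in spreadsheet function for it, here is a minimal Python sketch replicating Figure F.8, taking the negative values as the downside set (other thresholds, such as values below the mean, would work the same way).

```python
data = [-10.50, 12.25, -11.50, 13.25, -14.65, 15.65, -14.50]

downside = [x for x in data if x < 0]   # keep only the downside values
m = sum(downside) / len(downside)       # mean of the downside values
ss = sum((x - m) ** 2 for x in downside)

semi_std_pop = (ss / len(downside)) ** 0.5          # ≈ 1.8229
semi_std_samp = (ss / (len(downside) - 1)) ** 0.5   # ≈ 2.1049
print(semi_std_pop, semi_std_samp)
```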
Volatility The concept of volatility is widely used in the applications of real options and can be defined briefly as a measure of uncertainty and risks. Volatility can be estimated using multiple
methods, including simulation of the uncertain variables impacting a particular project and estimating the standard deviation of the resulting asset’s logarithmic returns over time. This concept is
more difficult to define and estimate but is more powerful than most other risk measures in that this single value incorporates all sources of uncertainty rolled into one value. Figure F.9
illustrates the computation of an annualized volatility. Volatility is typically computed for time-series data only (i.e., data that follows a time series such as stock price, price of oil, interest
rates, and so forth). The first step is to determine the relative returns from period to period, take their natural logarithms (ln), and then compute the sample standard deviation of these logged
values. The result is the periodic volatility. Then, annualize the volatility by multiplying this periodic volatility by the square root of the number of periods in a year (e.g., 1 if annual data, 4
if quarterly data, and 12 if
Period   Price    Relative Returns   LN(Relative Returns)   Square of (LN Relative Returns − Average)
0        10.50    —                  —                      —
1        12.25    1.17               0.1542                 0.0101
2        11.50    0.94               −0.0632                0.0137
3        13.25    1.15               0.1417                 0.0077
4        14.65    1.11               0.1004                 0.0022
5        15.65    1.07               0.0660                 0.0001
6        14.50    0.93               −0.0763                0.0169
Sum                                  0.3228
Average                              0.0538

Sample Standard Deviation and Variance
Sum of Square (LN Relative Returns − Average): 0.0507
Volatility = Square Root of (Sum of Square (LN Relative Returns − Average)/(N − 1)): 10.07%
Using Excel's STDEV function on LN(Relative Returns): 10.07%
Annualized Volatility (Periodic Volatility × Square Root of Periods in a Year): 34.89%

FIGURE F.9 Volatility computation
monthly data are used). See Chapter 166 on Volatility Computations for details on obtaining volatility risk measures from various approaches (e.g., GARCH, volatility to probability, logarithmic
returns, implied volatility, and others).
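The same logarithmic-returns computation in Python, replicating Figure F.9 (monthly data assumed, hence the √12 annualization factor):

```python
from math import log, sqrt

prices = [10.50, 12.25, 11.50, 13.25, 14.65, 15.65, 14.50]
log_returns = [log(b / a) for a, b in zip(prices, prices[1:])]

n = len(log_returns)
avg = sum(log_returns) / n
periodic_vol = sqrt(sum((r - avg) ** 2 for r in log_returns) / (n - 1))

annualized_vol = periodic_vol * sqrt(12)  # 12 periods per year for monthly data
print(f"{periodic_vol:.2%}, {annualized_vol:.2%}")  # ≈ 10.07%, 34.89%
```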
Beta Beta is another common measure of risk in the investment finance arena. Beta can be defined simply as the undiversifiable, systematic risk of a financial asset. This concept is made famous
through the CAPM, where a higher beta means a higher risk, which in turn requires a higher expected return on the asset. The beta coefficient measures the relative movements of one asset value to a
comparable benchmark or market portfolio; that is, we define the beta coefficient as:

$$\beta = \frac{\mathrm{Cov}(x, m)}{\mathrm{Var}(m)} = \frac{\rho_{x,m}\,\sigma_x\,\sigma_m}{\sigma_m^2}$$

where Cov(x, m) is the population covariance between the asset x and the market or comparable benchmark m, and Var(m) is the population variance of m; both can be computed in Excel using the COVAR and VARP functions. The computed beta will be for the population. In contrast, the sample beta coefficient is computed using the correlation coefficient between x and m (ρ_{x,m}) and the sample standard deviations of x and m, substituting s_x and s_m for the population standard deviations σ_x and σ_m.
A beta of 1.0 implies that the relative movements or risk of x is identical to the relative movements of the benchmark (see Example 1 in Figure F.10 where the asset x is simply one unit less than the
market asset m, but they both fluctuate at the same levels). Similarly, a beta of 0.5 implies that the relative movements or risk of x is half of the relative movements of the benchmark (see Example
2 in Figure F.10 where the asset x is simply half the market's fluctuations m). Therefore, beta is a powerful measure but requires a comparable benchmark against which to measure its fluctuations.
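A sketch of Figure F.10's Example 2 in Python, computing the population beta from first principles (the equivalents of Excel's COVAR and VARP):

```python
x = [10.50, 12.25, 11.50, 13.25, 14.65, 15.65, 14.50]  # asset
m = [21.00, 24.50, 23.00, 26.50, 29.30, 31.30, 29.00]  # market benchmark (2x)

n = len(x)
mx, mm = sum(x) / n, sum(m) / n
cov = sum((a - mx) * (b - mm) for a, b in zip(x, m)) / n  # population COVAR
var_m = sum((b - mm) ** 2 for b in m) / n                 # population VARP

beta = cov / var_m
print(beta)  # 0.5: the asset moves half as much as the benchmark
```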
Coefficient of Variation The coefficient of variation (CV) is simply defined as the ratio of standard deviation to the mean, which means that the risks are common-sized. For example, a distribution
of a group of students’ heights (measured in meters) can be compared to the distribution of the students’ weights (measured in kilograms). This measure of risk or dispersion is applicable when the
variables’ estimates, measures, magnitudes, or units differ. For example, in the computations in Figure F.7, the CV for the population is –9.25 or –9.99 for the sample. The CV is useful as a measure
of risk per unit of return, or when inverted, can be used as a measure of bang for the buck or returns per unit of risk. Thus, in portfolio optimization, one would be interested in minimizing the CV
or maximizing the inverse of the CV.
Value at Risk Value at Risk (VaR) measures the amount of capital reserves at risk given a particular holding period at a particular probability of loss. This measurement can be modified to risk
applications by stating, for example, the amount of potential losses a certain percent of the time during the period of the economic life of the project—clearly, a project with a smaller VaR is
better. VaR has a holding time period requirement, typically one year or one month. It also has a percentile requirement, for example, a 99.9 percent one-tail confidence. There are also modifications
for daily risk measures such as DEaR or Daily Earnings at Risk. The VaR or DEaR can be determined very easily using Risk Simulator; that is, create your risk model, run a simulation, look at the
forecast chart, and enter in 99.9 percent as the right-tail probability of the distribution or 0.01 percent as the left-tail probability of the distribution, then read the VaR or DEaR directly off
the forecast chart.
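Because the text reads VaR off a simulated forecast chart, a bare-bones Monte Carlo version can be sketched in Python; the normal P&L model and its parameters are purely illustrative assumptions.

```python
import random

# Simulate one-year P&L outcomes and read the loss at the 0.1% left tail.
random.seed(42)
pnl = sorted(random.gauss(100, 250) for _ in range(100_000))

confidence = 0.999
var_999 = -pnl[int((1 - confidence) * len(pnl))]  # loss at the 0.1% quantile
print(f"99.9% one-year VaR ≈ {var_999:,.0f}")
```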
Worst-Case Scenario and Regret Another simple measure is the value of the worst-case scenario given catastrophic losses. An additional definition is regret; that is, if a decision is made to pursue a
particular project, but if the project becomes unprofitable and suffers a loss, the level of regret is simply the difference between the actual losses compared to doing nothing at all. This analysis
is very similar to the VaR but is not time dependent. For instance, a financial return on investment model can be created and a simulation is run. The 5 percent worst-case scenario can be read
directly from the forecast chart in Risk Simulator.
Example 1: Similar fluctuations with the market
Asset X:             10.50, 12.25, 11.50, 13.25, 14.65, 15.65, 14.50
Market Comparable M: 11.50, 13.25, 12.50, 14.25, 15.65, 16.65, 15.50

Population Beta
Covariance population using Excel's COVAR: 2.9827
Variance of M using Excel's VARP: 2.9827
Population Beta (Covariance population (X, M)/Variance (M)): 1.0000

Sample Beta
Correlation between X and M using Excel's CORREL: 1.0000
Standard deviation of X using Excel's STDEV: 1.8654
Standard deviation of M using Excel's STDEV: 1.8654
Beta Coefficient (Correlation X and M × Stdev X × Stdev M)/(Stdev M × Stdev M): 1.0000

Example 2: Half the fluctuations of the market
Asset X:             10.50, 12.25, 11.50, 13.25, 14.65, 15.65, 14.50
Market Comparable M: 21.00, 24.50, 23.00, 26.50, 29.30, 31.30, 29.00

Population Beta
Covariance population using Excel's COVAR: 5.9653
Variance of M using Excel's VARP: 11.9306
Population Beta (Covariance population (X, M)/Variance (M)): 0.5000

Sample Beta
Correlation between X and M using Excel's CORREL: 1.0000
Standard deviation of X using Excel's STDEV: 1.8654
Standard deviation of M using Excel's STDEV: 3.7308
Beta Coefficient (Correlation X and M × Stdev X × Stdev M)/(Stdev M × Stdev M): 0.5000

FIGURE F.10 Beta coefficient computation
Risk-Adjusted Return on Capital Risk-adjusted return on capital (RAROC) takes the ratio of the difference between the fiftieth percentile (P₅₀, the median) return and the fifth percentile (P₅) return on a project to its standard deviation σ, written as:

$$\mathrm{RAROC} = \frac{P_{50} - P_{5}}{\sigma}$$
This approach is used mostly by banks to estimate returns subject to their risks by measuring only the potential downside effects and truncating the distribution to the worst-case 5 percent of the
time, ignoring the positive upswings, while at the same time common sizing to the risk measure of standard deviation. Thus, RAROC can be seen as a measure that combines standard deviation, CV,
semi-standard deviation, and worst-case scenario analysis. This measure is useful when applied with Monte Carlo simulation, where the percentiles and standard deviation measurements required can be
obtained through the forecast chart’s statistics view in Risk Simulator.
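A simulation-based sketch of the RAROC ratio; the normal return model below is an assumed stand-in for a Risk Simulator forecast (for a normal distribution the ratio works out to roughly 1.64).

```python
import random

random.seed(7)
returns = sorted(random.gauss(0.12, 0.30) for _ in range(100_000))

n = len(returns)
p50, p5 = returns[n // 2], returns[n // 20]  # 50th and 5th percentiles
mean = sum(returns) / n
sigma = (sum((r - mean) ** 2 for r in returns) / (n - 1)) ** 0.5

print((p50 - p5) / sigma)  # RAROC = (P50 − P5)/σ
```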
Mathematical Structures of Stochastic Processes
Throughout the book, we discuss using stochastic processes for establishing simulation structures, for risk-neutralizing revenue and cost, for forecasting and obtaining an evolution of pricing
structures, as well as for modeling and valuing exotic options. This appendix sheds some light on the underpinnings of a stochastic process and what it means. A stochastic process is nothing but a
mathematically defined equation that can create a series of outcomes over time, outcomes that are not deterministic in nature. That is, it is an equation or a process that does not follow any simple
discernible rule such as: price will increase X percent every year, or revenues will increase by this factor of X plus Y percent. A stochastic process is by definition nondeterministic and not
static, and one can plug different numbers into a stochastic process equation and obtain different results every time. Instead of dealing with a single reality, a stochastic process models the
randomness and indeterminacy of the future outcomes. The current initial value is known but the future paths are unknown; however, certain paths are more probable than others. Therefore, to model a
stochastic process, probability distributions and simulation techniques are required. For instance, the path of a stock price is stochastic in nature, and one cannot reliably predict the stock price
path with any certainty. However, the price evolution over time is enveloped in a process that generates these prices. The process is fixed and predetermined, but the outcomes are not. Hence, by
stochastic simulation, we create multiple pathways of prices, obtain a statistical sampling of these simulations, and make inferences on the potential pathways that the actual price may undertake
given the nature and parameters of the stochastic process used to generate the time series. The interesting thing about stochastic process simulation is that historical data are not necessarily
required; that is, the model does not have to fit any sets of historical data. To run a stochastic process forecast, either compute the expected returns and the volatility of the historical data,
estimate them using comparable external data, or make assumptions about these values based on expert judgment and expectations. Three basic stochastic processes are discussed in this appendix,
including the geometric Brownian motion, which is the most common and prevalently used process due to its simplicity and wide-ranging applications. The mean-reversion and jumpdiffusion processes are
also discussed. Regardless of the process used, the idea is to simulate multiple paths or evolutions thousands of times (see Figure G.1), where certain paths are more prevalent than others. This
means that at the time period
FIGURE G.1 Simulating multiple paths in a stochastic process
of interest (i.e., the specific forecast year, month, day, or other period), there will be thousands of values; these values are plotted in a histogram and its statistical properties are determined.
SUMMARY MATHEMATICAL CHARACTERISTICS OF BROWNIAN MOTION (RANDOM WALK) Assume a process X, where X = [X_t : t ≥ 0] if and only if X_t is continuous, the starting point is X₀ = 0, X is normally distributed with mean zero and variance one (X ∈ N(0, 1)), and each increment in time is independent of each previous increment and is itself normally distributed with mean zero and variance t, such that X_{t+a} − X_t ∈ N(0, t). Then the process dX = αX dt + σX dZ follows a geometric Brownian motion, where α is a drift parameter, σ is the volatility measure, and dZ = ε_t√δt is the Wiener process, such that ln[dX/X] ∈ N(µ, σ); that is, X and dX are lognormally distributed. Given an initial value X₀, the expected value of the process X at any time t is E[X(t)] = X₀e^(αt) and the variance of the process X at time t is V[X(t)] = X₀²e^(2αt)(e^(σ²t) − 1). In the continuous case where there is a drift parameter α, the expected value becomes

$$E\!\left[\int_0^\infty X(t)\,e^{-rt}\,dt\right] = \int_0^\infty X_0\, e^{-(r-\alpha)t}\,dt = \frac{X_0}{r - \alpha}$$

Stated in another more applicable format, the Brownian motion or random walk process takes the form

$$\frac{\delta X}{X} = \alpha(\delta t) + \sigma\varepsilon\sqrt{\delta t}$$

for regular options simulation when multiple time steps are simulated, or a more generic version takes the form

$$\frac{\delta X}{X} = \left(\alpha - \sigma^2/2\right)\delta t + \sigma\varepsilon\sqrt{\delta t}$$

for a geometric process with fewer time steps.
For an exponential version, we simply take the exponentials, and as an example, we have

$$\frac{\delta X}{X} = \exp\!\left[\alpha(\delta t) + \sigma\varepsilon\sqrt{\delta t}\right]$$

where we define
X    the variable's value
δX   the change in the variable's value from one step to the next
α    the annualized growth or drift rate
σ    the annualized volatility
ε    a random variable from a normal N(0, 1) distribution

Figure G.2 illustrates a sample forecast path of a random walk Brownian motion process. Notice that in this example the drift or growth rate is a positive 5 percent, which means the evolution trends upward most of the time, with fluctuations around this trend driven by the positive annualized volatility; the higher the volatility, the wilder the fluctuation around the trend. To estimate the parameters from a set of time-series data, set the drift rate α to the average of the natural logarithms of the relative returns ln(X_t/X_{t−1}), while σ is the standard deviation of all ln(X_t/X_{t−1}) values.
FIGURE G.2 Sample price path of a random walk Brownian motion process
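A minimal Python sketch of the geometric version above; the drift, volatility, and step size in the usage line are assumed illustrative values.

```python
import random
from math import exp, sqrt

def gbm_path(x0, alpha, sigma, dt, steps):
    """Simulate one geometric Brownian motion path:
    dX/X = (alpha - sigma^2/2) dt + sigma * eps * sqrt(dt)."""
    path = [x0]
    for _ in range(steps):
        eps = random.gauss(0, 1)
        path.append(path[-1] * exp((alpha - sigma**2 / 2) * dt
                                   + sigma * eps * sqrt(dt)))
    return path

# 5% annual drift, 25% volatility, monthly steps over 5 years (assumed values)
print(gbm_path(x0=100, alpha=0.05, sigma=0.25, dt=1/12, steps=60)[-1])
```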
SUMMARY MATHEMATICAL CHARACTERISTICS OF MEAN-REVERSION PROCESSES If a stochastic process has a long-run attractor such as a long-run production cost or long-run steady-state inflationary price level, then a mean-reversion process is more likely. The process reverts to a long-run average X̄ such that the expected value is E[X_t] = X̄ + (X₀ − X̄)e^(−ηt) and the variance is V[X_t − X̄] = (σ²/2η)(1 − e^(−2ηt)). A special circumstance that becomes useful is the limiting case when the time change becomes instantaneous, dt → 0: we then have X_t − X_{t−1} = X̄(1 − e^(−η)) + X_{t−1}(e^(−η) − 1) + ε_t, which is a first-order autoregressive process, and η can be tested econometrically in a unit root context. Stated in another more applicable format, the following describes the mathematical structure of a mean-reverting process with drift:

$$\frac{\delta X}{X} = \eta\left(\bar{X}e^{\alpha(\delta t)} - X\right)\delta t + \alpha(\delta t) + \sigma\varepsilon\sqrt{\delta t}$$

In order to obtain the rate of reversion and long-term rate from historical data points, run a regression such that Y_t − Y_{t−1} = β₀ + β₁Y_{t−1} + ε, and we find η = −ln[1 + β₁] and X̄ = −β₀/β₁, where we further define η as the rate of reversion to the mean, X̄ as the long-term value the process reverts to, Y as the historical data series, β₀ as the intercept coefficient in a regression analysis, and β₁ as the slope coefficient in a regression analysis. Figure G.3 illustrates a sample evolution path
FIGURE G.3 Sample price path of a mean-reverting process
Notice that the fluctuations are not as wild as in the random walk process but are more tempered, fluctuating around the long-term mean value ($120 in this example).
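The regression calibration above is easy to reproduce. In this illustrative Python sketch (the data series is made up), an ordinary least squares fit of Yt − Yt−1 against Yt−1 gives β0 and β1, from which η = −ln[1 + β1] and X̄ = −β0/β1 follow.

```python
import numpy as np

# Made-up historical series hovering around a long-run level near 120
Y = np.array([118.0, 119.5, 120.8, 119.9, 121.2, 120.4, 119.8, 120.6])

dY = np.diff(Y)                      # Y_t - Y_{t-1}
lagged = Y[:-1]                      # Y_{t-1}
b1, b0 = np.polyfit(lagged, dY, 1)   # OLS slope and intercept

eta = -np.log(1 + b1)                # rate of reversion to the mean
x_bar = -b0 / b1                     # long-term level the process reverts to
print(eta, x_bar)                    # roughly 2.1 and 120.4 for this series
```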
SUMMARY MATHEMATICAL CHARACTERISTICS OF JUMP-DIFFUSION PROCESSES
Start-up ventures, research and development initiatives, and oil and electricity prices usually follow a jump-diffusion process.
Business operations may be status quo for a few months or years, and then a product or initiative becomes highly successful and takes off; or when there is a terrorist attack or war breaks out, oil
prices jump immediately. An initial public offering of equities, oil price jumps, and the price of electricity are textbook examples of this. Assuming that the probability of the jumps follows a
Poisson distribution, we have a process dX = f(X, t)dt + g(X, t)dq, where the functions f and g are known and where the probability process is dq = 0 with probability 1 − λ(dt), and dq = µ with probability λ(dt).
A jump-diffusion process is similar to a random walk process except there is a probability of a jump at any point in time. The occurrences of such jumps are completely random, but their probability
and magnitude are governed by the process itself. In fact, these three processes can be combined into a mixed process. An example of a mixed mean-reverting, jump-diffusion, random-walk stochastic
process is δX/X = η(X̄e^(α(δt)) − X)δt + α(δt) + σε√(δt) + θF(λ)(δt), where we further define
• θ as the jump size of S
• F(λ) as the inverse of the Poisson cumulative distribution function
• λ as the jump rate of S
Figure G.4 illustrates a sample path evolution of a jump-diffusion process. Notice that there are sharp edges or jumps in value from one period to the next. The higher the jump rate
and jump size, the sharper these jumps. The jump size can be found by computing the ratio of the postjump to the prejump levels, and the jump rate can be imputed from past historical data. The other
parameters are found the same way as in the other processes. For computational details and examples, see Chapter 89: Forecasting—Stochastic Processes, Brownian Motion, Forecast Distribution at
Horizon, Jump Diffusion, and Mean-Reversion or use the Modeling Toolkit’s examples, Forecasting: Brownian Motion Stochastic Process, Forecasting: Jump-Diffusion Stochastic Process, Forecasting:
Mean-Reverting Stochastic Process, and Forecasting: Stochastic Processes.
FIGURE G.4 Sample price path of a jump-diffusion process
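As a rough illustration of these dynamics (again not Modeling Toolkit code), the Python sketch below bolts a Poisson-driven jump onto the Brownian motion step: in each interval, a proportional jump of assumed size θ arrives with probability λδt.

```python
import numpy as np

rng = np.random.default_rng(7)

X0, alpha, sigma = 100.0, 0.05, 0.20   # assumed start, drift, volatility
lam, theta = 2.0, 0.10                 # assumed: ~2 jumps per year, +10% each
T, steps = 1.0, 252
dt = T / steps

X = X0
for _ in range(steps):
    eps = rng.standard_normal()
    # Diffusion step, as in the random walk process
    X *= np.exp((alpha - sigma**2 / 2) * dt + sigma * eps * np.sqrt(dt))
    # Jump step: a Poisson arrival occurs in this interval with prob lam*dt
    if rng.random() < lam * dt:
        X *= 1 + theta                 # apply the proportional jump size
print(X)
```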
Glossary of Input Variables and Parameters in the Modeling Toolkit Software
Each of the inputs used in the Modeling Toolkit functions is listed here. Typically, most inputs are single-point estimates; that is, a single value such as 10.50, with the exception of the input
variables listed with “Series” in parentheses.
A This is the first input variable that determines the shape of the beta and gamma functions, and is required to compute the Incomplete Beta and Incomplete Gamma values. The Incomplete Beta function
is a generalization of the beta function that replaces the definite integral of the beta function with an indefinite integral, and is a mathematical expression used to compute a variety of
probability distributions such as the gamma and beta distributions. The same can be said about the Incomplete Gamma function. This input is used exclusively in the B2MathIncompleteBeta,
B2MathIncompleteGammaP, and B2MathIncompleteGammaQ functions, and the parameter is a positive value. Above Below This input variable is used in the partial floating lookback options where the strike
price is floating at the Above Below ratio, which has to be a positive value, and is greater than or equal to 1 for a call, and less than or equal to 1 for a put. Accruals This is the amount in notes
accruals, a subsection of current liabilities in the balance sheet. This variable is typically zero or a positive dollar or currency amount. Additional Cost This is the amount in additional operating
cost used in the B2CreditAcceptanceCost function to determine if a specific credit should be accepted or rejected. This variable is typically a positive dollar or currency amount, and the amount can
be zero or positive. Alpha Alpha is used in several places and has various definitions. In the first instance, alpha is the shape parameter in several distributions such as the beta, gamma, Gumbel,
logistic, and Weibull distributions. It is also used in the Forward Call Option where if Alpha < 1, then a call option starts (1 – Alpha)% in the money (a put option will be the same amount out of
the money), or if Alpha > 1, then the call starts (Alpha – 1)% out of the money (a put option will be the same amount in the money). Finally, alpha is also used as the alpha error level, or Type I
error, also known as the significance level in a hypothesis test. It measures the probability of not having the true population mean included in the confidence interval of the sample. That is, it
computes the probability of rejecting a true hypothesis. 1 – Alpha
is of course the confidence interval, or the probability that the true population mean resides in the sample confidence interval, and is used in several Six Sigma models. Regardless of use, this
parameter has to be a positive value. Amortization This is the amount in amortization in the financial income statement of a firm, and is used to compute the cash flow to equity for both a levered
and unlevered firm. This amount is typically zero or positive. Amounts (Series) This is a series of numbers (typically listed in a single column with multiple rows) indicating the dollar or currency
amounts invested in a specific asset class, used to compute the total portfolio’s Value at Risk and used only in the B2VaRCorrelationMethod function. These parameters have to be positive values and
arranged in a column with multiple rows. Arithmetic Mean This is the simple average used in the lognormal distribution. We differentiate this from the geometric or harmonic means, as this arithmetic
mean or simple average is the one used as an input parameter in the lognormal distribution. This parameter has to be a positive value, as the lognormal distribution takes on only positive values.
Arithmetic Standard Deviation This is a simple population standard deviation that is used in the lognormal distribution. You can use Excel’s STDEVP to compute this value from a series of data points.
This parameter has to be a positive value. Arrival Rate This is the rate of arrival on average to a queue in a specific time period (e.g., the average number of people arriving at a restaurant per
day or per hour), and typically follows a Poisson distribution. This parameter has to be a positive value. Asset 1 and Asset 2 These are the first and second assets in a two-asset exotic option or
exchange of asset options. Typically, the first asset (Asset 1) is the payoff asset, whereas the second asset (Asset 2) is some sort of benchmark asset. This is not to be confused with PVAsset, which
is the present value of the asset used in a real options analysis. These parameters must be positive values. Asset Allocation (Series) These are a series of percentage allocations of assets in a
portfolio and must sum to 100%, and this series is used to compute a portfolio’s total risk and return levels. These parameters are arranged in a single column with multiple rows and can take on zero
or positive values, but the sum of these values must equal 100%. Asset Turnover This is the total asset turnover financial ratio, or equivalent to annual total sales divided by total assets, used to
compute return on equity or return on asset ratios. It has to be a positive value. Asset Volatility This is the internal asset volatility (not to be confused with regular volatility in an options
model where we compute it using external equity values) used in determining probabilities of default and distance to default on risky debt (e.g., Merton models); it has to be a positive value. This
value can only be determined through optimization either using Risk Simulator to solve for a multiple simultaneous equation function or using the B2ProbabilityDefaultMertonImputedAssetVolatility function.
Average Lead This is the average lead time in days required in order to receive an order that is placed. This parameter is typically a positive value, and is used in the economic order quantity
models. Average Measurement (Series) This is a series of the average measurements per sample subgroup in a Six Sigma environment to determine the upper and lower control limits for a control chart
(e.g., in an experiment, 5 measurements are taken of a production output, and the experiment is repeated 10 different times with 5 samples taken each time, and the 10 averages of the 5 samples are
computed). These values are typically zero or positive, and are arranged in a single column with multiple rows. Average Price This is the average of historically observed stock prices during a
specific lookback period, used to determine the value of Asian options. This parameter has to be positive. B This is the second input variable for the scale of the beta or gamma functions, and is
required to compute the Incomplete Beta and Incomplete Gamma values. The Incomplete Beta function is a generalization of the Beta function that replaces the definite integral of the beta function
with an indefinite integral, and is a mathematical expression used to compute a variety of probability distributions such as the gamma and beta distributions. The same can be said about the
Incomplete Gamma function. This input is used exclusively in B2MathIncompleteBeta, B2MathIncompleteGammaP, and B2MathIncompleteGammaQ functions, and the parameter is a positive value. Barrier This is
the stock price barrier (it can be an upper or lower barrier) for certain exotic barrier and binary options where if the barrier is breached within the lifetime of the option, the option either comes
into the money or goes out of the money, or an asset or cash is exchanged. This parameter is a positive value. Base This is the power value for determining and calibrating the width of the credit
tables. Typically, it ranges between 1 and 4 and has to be a positive value. Baseline DPU This is the average number of defects per unit in a Six Sigma process, and is used to determine the number of
trials required to obtain a specific error boundary and significance level based on this average DPU. This parameter has to be a positive value. Batch Cost This is the total dollar or currency value
of the cost to manufacture a batch of products each time the production line is run. This parameter is a positive value. Benchmark Prices (Series) This is a series of benchmark prices or levels
arranged in a single column with multiple rows, such as the market Standard & Poor’s 500, to be used as a benchmark against another equity price level in order to determine the Sharpe ratio. Best
Case This is the best-case scenario value or dollar/currency, used in concert with the Expected Value and Percentile value, to determine the volatility of the process or project. This value is
typically positive and has to exceed the expected value.
Beta This parameter is used in several places and denotes different things. When used in the beta, gamma, Gumbel, logistic, and Weibull distributions, it is used to denote the scale of the
distribution. When used in the capital asset pricing model (CAPM), it is used to denote the beta relative risk (covariance between a stock’s returns and market returns divided by the variance of the
market returns). Finally, beta is also used as the beta error or Type II error, measuring the probability of accepting a false hypothesis, or the probability of not being able to detect the standard
deviation’s changes. 1 – Beta is the power of the test, and this parameter is used in statistical sampling and sample size determination in the Six Sigma models. Regardless, this parameter has to be a positive value.
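As an illustration of the CAPM beta defined above, the Python sketch below divides the covariance of (made-up) stock and market returns by the variance of the market returns.

```python
import numpy as np

# Made-up periodic return series for a stock and a market index
stock = np.array([0.02, -0.01, 0.03, 0.015, -0.005])
market = np.array([0.015, -0.008, 0.02, 0.012, -0.002])

# Beta = Cov(stock, market) / Var(market), using sample (n-1) normalization
beta = np.cov(stock, market, ddof=1)[0, 1] / np.var(market, ddof=1)
print(beta)
```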
Beta 0, 1, and 2 These are mathematical parameters in a yield curve construction when applying the Bliss and Nelson-Siegel models for forecasting interest rates. The exact values of
these parameters need to be calibrated with optimization, but are either zero or positive values. Beta Levered This is the relative risk beta level of a company that is levered or has debt, and can
be used to determine the equivalent level of an unlevered company’s beta. This parameter has to be a positive value. Beta Unlevered This is the relative risk beta level of a company that is unlevered
or has zero debt, and can be used to determine the equivalent level of a levered company’s beta with debt. This parameter has to be a positive value. Bond Maturity This is the maturity of a bond, measured in years, and has to be a positive value.
Bond Price This is the market price of the bond in dollars or other currency units, and has to be a positive value. Bond Yield This is the bond’s yield to maturity—that is, the internal rate of
return on the bond when held to maturity—and has to be a positive value. These could be applied to corporate bonds or Treasury zero coupon bonds. Buy Cap Rate This is the capitalization rate computed
by (net operating income/sale price) at the time of purchase of a property, and is typically a positive value, used in the valuation of real estate properties. BV Asset This is the book value of
assets in a company, including all short-term and long-term assets. BV Debt and BV Liabilities This is the book value of debt or all liabilities in a company, including all short-term and long-term
debt or liabilities, and has to be a positive value. BV Per Share This is the book value price of a share of stock, typically recorded at the initial public offering price available through the
company’s balance sheet, and has to be a positive value. Calendar Ratio This ratio is a positive value and is used in pricing an option with a Trading Day Correction, which looks at a typical option
and corrects it for the varying volatilities. Specifically, volatility tends to be higher on trading days than on nontrading days. The Trading Days Ratio is simply the number of trading days
left until maturity divided by the total number of trading days per year (typically between 250 and 252), and the Calendar Days Ratio is the number of calendar days left until maturity divided by the
total number of days per year (365). Callable Price This is the amount that, when a bond is called, the bondholder will be paid, and is typically higher than the par value of the bond. This parameter
requires a positive value. Callable Step This is the step number on a binomial lattice representing the time period when a bond can be called, and this parameter is a positive integer. For instance,
in a 10-year bond when the bond is callable starting on the fifth anniversary, the callable step is 50 in a 100-step lattice model. Call Maturity This is the maturity of the call option in years, and
is used in the complex chooser option (i.e., the exotic option where the holder can decide to make it a call or a put, and each option has its own maturity and strike values), and must be a positive
value. Call Strike This is the strike price of the call option in dollars or currency, and is used in the complex chooser option (i.e., the exotic option where the holder can decide to make it a call
or a put, and each option has its own maturity and strike values), and must be a positive value. Sometimes, this variable has different suffixes (e.g., Call Strike Sell Low, Call Strike Buy High, and
so forth, whenever there might be more than one call option in the portfolio of option strategies, and these suffixes represent whether this particular call is bought or sold, and whether the strike
price is higher or lower than the other call option). Call Value This is the value of a call option, and is used in the put-call parity model, whereby the value of a corresponding put can be
determined given the price of the call with similar option parameters, and this parameter has to be a positive value. Sometimes, this variable has different suffixes (e.g., Call Value Sell Low, Call
Value Buy High, and so forth, whenever there might be more than one call option in the portfolio of option strategies, and these suffixes represent whether this particular call is bought or sold, and whether the premium paid for the option or the option’s value is higher or lower than the other call option).
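As an illustration of the put-call parity relationship mentioned here, the sketch below assumes the standard European form with a continuous dividend yield q, P = C − S·e^(−qT) + X·e^(−rT); this is the textbook identity, not necessarily the exact form of the corresponding B2 function.

```python
import math

def put_from_call(call, S, X, r, q, T):
    """European put value implied by put-call parity (assumed standard form)."""
    return call - S * math.exp(-q * T) + X * math.exp(-r * T)

# Example: C = 10.45, S = X = 100, r = 5%, no dividends, 1-year maturity
print(put_from_call(10.45, 100, 100, 0.05, 0.0, 1.0))   # about 5.57
```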
Cap This is the interest rate cap (ceiling) in an interest cap derivative, and has to be
a positive value. The valuation of the cap is done through computing the value of each of its caplets and summing them up for the price of the derivative. Capacity This is the maximum capacity level,
and is used in forecasting using the S-curve model (where the capacity is the maximum demand or load the market or environment can hold), as well as in the economic order quantity (batch production)
model; it has to be a positive value. Capital Charge This is the amount of invested capital multiplied by the weighted average cost of capital or hurdle rate or required rate of return. This value is
used to compute the economic profit of a project, and is a positive value. Capital Expenditures This is used to compute the cash flow to the firm and the cash flow to equity for a firm. Capital
expenditures are deducted from the net cash flow to a firm as an expenditure, and this input parameter can be zero or a positive value.
Cash This variable is used in several places. The first and most prominent is the amount of money that is paid when a binary or barrier option comes into the money, and it is also used to denote
the amount of cash available in a current asset on a balance sheet. This parameter is zero or positive. Cash Dividend This is the dividend rate or dividend yield, in percent, and is typically either
zero or positive. This parameter is not to be confused with Cash Dividends series, which is a dollar or currency unit amount, and which can also be zero or positive. This variable is used many times
in exotic and real options models. Cash Dividends (Series) This is a series of cash dividends in dollars or currency units, which come as lump sum payments of dividends on the underlying stock of an
option and can be zero or positive values. This input variable is used in the Generalized Black-Scholes model with cash dividends, and the timing of these cash dividends (Dividend Times) is also
listed as a series in a single column with multiple rows. Cash Flows (Series) This is a series of cash flows used for a variety of models, including the computation of volatility (using the
logarithmic cash flow returns approach) and bond models (bond pricing, convexity, and duration computations), and each cash flow value must be a positive number, arranged in a column with multiple
rows. Channels This is the number of channels available in a queuing model—for instance, the number of customer service or point of sale cash registers available in a McDonald’s fast-food restaurant,
where patrons can obtain service. This parameter is a positive integer. Channels Busy This is the number of channels that are currently busy and serving customers at any given moment. This parameter
can be zero or a positive integer. Choose Time or Chooser Time This is the time available for the holder of a complex chooser option whereby the option holder can choose to make the option a call or
a put, with different maturities and strike prices. This parameter is a positive value. Column The column number in a lattice; for instance, if there is a 20-step lattice for 10 years, then the
column number for the third year is the sixth step in the lattice and the column is set to 6, corresponding to the step in the lattice. Columnwise This variable is used in the changing risk-free and
changing volatility option model, where the default is 1, indicating that the data (risk-free rates and volatilities) are arranged in a column. This parameter is either a 1 (values are listed in a
column) or a 0 (values are listed in a row). Common Equity This is the total common equity listed in the balance sheet of a company, and is used in financial ratios analysis to determine the return
on equity as well as other profitability and efficiency measures, and this parameter is a positive value. This value is different than total equity, which also includes other forms such as preferred
equity. Compounding This is the number of compounding periods per year for the European Swaptions (payer and receiver) and requires a positive integer (e.g., set it as 365 for daily compounding, 12
for monthly compounding, and so forth).
Contract Factor This is the contraction factor used in a real option to contract, and this value is computed as the after-contracting net present value divided by the existing base-case net present
value (stated another way, this value is 1 – X where X is the fraction that is forgone if contraction occurs, or the portion that is shared with an alliance or joint venture partner or outsourcing
outfit), and the parameter has to be between 0 and 1, noninclusive. Conversion Date This is the number of days in the future where the convertible bond can be converted into an equivalent value of
equity. Corporate Bond Yield This is the yield of a risky debt or a risky corporate bond in percent, and is used to compute the implied probability of default of a risky debt given a comparable zero
coupon risk-free bond with similar maturity. This input has to be a positive value. Correlation This variable is used in multiple places, including exotic options with multiple underlying assets
(e.g., exchange of assets, two-asset options, foreign exchange, and futures or commodity options) and the bivariate normal distribution where we combine two correlated normal distributions.
Correlations (Series) This is an n × n correlation matrix and is used to value the portfolio Value at Risk where the individual components of the portfolio are correlated with one another. Cost, Cost
1, and Cost 2 This is a dollar or currency amount corresponding to the cost to execute a particular project or option, and has to be a positive value. This variable is used most frequently in real
options models. When there are multiple costs (Cost 1 and Cost 2), this implies several underlying assets and their respective costs or strike prices. Cost of Debt This is the cost of debt before tax
in percent, used to compute the weighted average cost of capital for a project or firm, and is typically a zero or positive value. Cost of Equity This is the cost of equity before tax in percent,
used to compute the weighted average cost of capital for a project or firm, and is typically a zero or positive value. Cost of Funds This is the cost of obtaining additional funds, in percent, used
in determining credit acceptance levels, and this parameter can be zero or a positive value. Cost of Losing a Unit This is the monetary dollar or currency amount lost or forgone if one unit of sales
is lost when there is an insufficient number of channels in the queuing models to determine the optimal number of channels to have available, and can be zero or a positive value. Cost of Order This
is a dollar or currency amount of the cost of placing an order for additional inventory, used in the economic order quantity models to determine the optimal quantity of inventory to order and to have
on hand. Cost of Preferred Equity This is the before-tax cost of preferred equity in percent, used to compute the cost of funds using the weighted average cost of capital model, and is either zero or
a positive value.
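Taken together, these cost-of-capital inputs feed the standard weighted average cost of capital computation. The Python sketch below illustrates the generic textbook formula, with an assumed after-tax adjustment on the cost of debt; it is not a specific B2 function.

```python
def wacc(cost_debt, cost_equity, cost_preferred, tax_rate,
         debt, equity, preferred):
    """Weighted average cost of capital over the three funding sources."""
    total = debt + equity + preferred
    return (cost_debt * (1 - tax_rate) * debt / total   # after-tax debt cost
            + cost_equity * equity / total
            + cost_preferred * preferred / total)

# Example: 7% debt, 12% equity, 9% preferred, 30% tax, 40/50/10 capital mix
print(wacc(0.07, 0.12, 0.09, 0.30, 40, 50, 10))         # about 0.0886
```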
Cost to Add Channel This is the monetary dollar or currency amount required to add another channel in the queuing models, to determine the optimal number of channels to have available, and is a
positive value. Coupon and Coupons (Series) This is the coupon payment in dollars or currency of a debt or callable debt, and is used in the options adjusted spread model to determine the required
spreads for a risky and callable bond. For Coupons, it is a time series of cash coupon payments at specific times. Coupon Rate This is the coupon payment per year, represented in percent, and is used
in various debt-based options and credit options where the underlying is a coupon-paying bond or debt, and this value can be zero or positive. Covariances (Series) This is the n × n
variance-covariance matrix required to compute the portfolio returns and risk levels given each individual asset’s allocation (see Asset Allocation), and these values can be negative, zero, or
positive values. The Variance-Covariance Matrix tool in the Modeling Toolkit can be used to compute this matrix given the raw data of each asset’s historical values. Credit Exposures This is the
number of credit or debt lines that exists in a portfolio, and has to be a positive integer. Credit Spread This is the percentage spread difference between a risky debt or security and the risk-free
rate with comparable maturity, and is typically a positive value. Cum Amount This is a dollar or currency amount, used in a Time Switch option, where the holder receives the Accumulated (Cum) Amount
× Time Steps each time the asset price exceeds the strike price for a call option (or falls below the strike price for a put option). Currency Units This input parameter is a positive value and is
used in a Foreign Takeover option with a foreign exchange element, which means that if a successful takeover ensues (if the value of the foreign firm denominated in foreign currency is less than the
foreign currency units required), then the option holder has the right to purchase the number of foreign currency units at the predetermined strike price (denominated in exchange rates of the
domestic currency to the foreign currency) at the expiration date of the option. Current Asset This is the sum of cash, accounts receivable, and inventories on a balance sheet—that is, the short-term
liquid assets—and has to be a positive value. Current Price This is the price level of a variable at the current time. This known value has to be positive, and is used for forecasting future price
levels. Current Yield This is the current spot interest rate or yield, used to price risky debt with callable and embedded option features, and has to be a positive value. Custom Risk-Free (Series)
This is a series of risk-free rates with the relevant times of occurrence—that is, where there are two columns with multiple rows and the first column is the time in years (positive values) and the
second column lists the risk-free rates (each value has to be a positive percentage), and both columns have multiple
rows. This variable is used in the custom option models where risk-free rates and volatilities are allowed to change over time. Custom Volatility (Series) This is a series of annualized volatilities
with the relevant times of occurrence—that is, where there are two columns with multiple rows and the first column is the time in years (positive values) and the second column lists the volatilities
(each value has to be a positive percentage), and both columns have multiple rows. This variable is used in the custom option models where risk-free rates and volatilities are allowed to change over
time. CY Reversion This is the rate of mean reversion of the convenience yield of a futures and commodities contract, and has to be zero or a positive value. The convenience yield is simply the rate
differential between a nonarbitrage futures and spot price and a real-life fair market value of the futures price, and can be computed using the B2ConvenienceYield function. With the raw data or
computed convenience yields, the mean reversion rate can be calibrated using Risk Simulator’s statistical analysis tool. CY Volatility This is the annualized volatility of the convenience yield of a
futures and commodities contract, and has to be a positive value. The convenience yield is simply the rate differential between a nonarbitrage futures and spot price and a real-life fair market value
of the futures price, and can be computed using the B2ConvenienceYield function. The volatility can be computed using various approaches as discussed in the Volatility definition. Daily Volatilities
(Series) This is a series of daily volatilities of various asset classes (arranged in a column with multiple rows), used in computing the portfolio Value at Risk, where each volatility is typically
small but has to be a positive value. This can also be computed using annualized volatilities and dividing them by the square root of the number of trading days per year. Days Per Year This is the number
of days per year to compute days sales outstanding, and is typically set to 365 or 360. The parameter has to be a positive integer. Debt Maturity The maturity period measured in years for the debt,
typically this is the maturity of a corporate bond, and is a positive value, used in the asset-equity parity models, to determine the market value of assets and market value of debt, based on the
book value of debt and book value of assets as well as the equity volatility. Defaults This is the number of credit or debt defaults within some specified period, and can be zero or a positive
integer. Default Probability This is the probability of default, set between 0% and 100%, to compute the credit risk shortfall value, and can be computed using the Merton probability of default
models, as well as other probability of default models in the Modeling Toolkit. Defective Units (Series) This is a series of the numbers of defective units in Six Sigma models, used to compute the upper
and lower control limits for quality control charts; the numbers are typically zero or positive integers, arranged in a column with multiple rows.
Defects This is a single value indicative of the number of defects in a process for Six Sigma quality control, to determine items such as process capability (Cpk), defects per million opportunities
(DPMO) and defects per unit (DPU). This parameter is either zero or a positive integer. Delta Delta is a precision measure used in Six Sigma models. Specifically, the Delta Precision is the accuracy
or precision with which the standard deviation may be estimated. For instance, a 0.10% Delta with 5% Alpha for 2 tails means that the estimated mean is plus or minus 0.10%, at a 90% (1 – 2 × Alpha)
confidence level. Deltas (Series) This is a series of delta measures, where the delta is defined here as a sensitivity measure of an option. Specifically, it is the instantaneous change of the option
value with an instantaneous change in the stock price. You can use the B2CallDelta function to compute this input, which typically consists of positive values arranged in a column with multiple rows.
Demand This is the level of demand for a particular manufactured product, used to determine the optimal economic order quantity or the optimal level of inventory to have on hand, and has to be a
positive integer. Depreciation This is the level of depreciation, measured in dollars or currency levels, as a noncash expense add-back to obtain the cash flows available to equity and cash flows
available to the firm. DF This is the degrees of freedom input used in the chi-square and t-distributions. The higher this value, the more closely these distributions approach the normal or Gaussian
distribution. This input parameter is a positive integer, and is typically larger than 1. You can use Risk Simulator’s distributional fitting tool to fit your existing data to obtain the best
estimate of DF. Alternatively, the distributional analysis tool can also be used to see the effects of higher and lower DF values. DF Denominator This is the degrees of freedom of the denominator
used in the F-distribution. This input parameter is a positive integer, and is typically larger than 1. You can use Risk Simulator’s distributional fitting tool to fit your existing data to obtain
the best estimate of DF. Alternatively, the distributional analysis tool can also be used to see the effects of higher and lower DF values. DF Numerator This is the degrees of freedom of the
numerator used in the F-distribution. This input parameter is a positive integer, and is typically larger than 1. You can use Risk Simulator’s distributional fitting tool to fit your existing data to
obtain the best estimate of DF. Alternatively, the distributional analysis tool can also be used to see the effects of higher and lower DF values. Discount Rate This is the discount rate used to
determine the price-to-earnings multiple by first using this input to value the future stock price. This parameter is a positive value, and in the case of the PE Ratio model it needs to be higher
than the growth rate. Sometimes the weighted average cost of capital is used in its place for simplicity. Dividend, Dividend Rate, Dividend 1 and 2 This is the dividend rate or dividend yield, in
percent, and is typically either zero or positive. This parameter is not to be confused with Cash Dividend, which is a dollar or currency unit amount and can also be zero or positive. This variable
is used many times in exotic and real
options models. Dividend 1 and Dividend 2 are simply the dividend yields on the two underlying assets in a two-asset option. Dividend Times (Series) This is a series of times in years when the cash
dividends in dollars or currency are paid on the underlying stock of an option, and can be zero or positive values. This input variable is used in the Generalized Black-Scholes model with cash
dividends, and the timing of these cash dividends is listed as a series in a single column with multiple rows. Domestic RF This is the domestic risk-free rate used in foreign or takeover options that
requires the inputs of a domestic and foreign risk-free rate, which in this case has to be a positive value. Down This is the down step size used in an asymmetrical state option pricing model, and
needs to be a value between 0 and 1. This value should be carefully calibrated to the option’s maturity and the number of lattice steps, to denote the down step size per lattice step. DSO This is
days sales outstanding, or the average accounts receivables divided by the average sales per day, to be used to compute the profitability of issuing new credit to a corporation. This input variable
can be computed using the B2RatiosDaysSalesOutstanding function, and the parameter has to be a positive value. DT This is the time between steps; that is, suppose a bond or an option has a maturity
of 10 years and a 100-step lattice is used. DT is 0.1, or 0.1 years will elapse with every lattice step taken. This parameter has to be a positive value, and is used in the B2BDT lattice functions.
Duration This variable is typically computed using some B2BondDuration function, but as an input it represents the conversion factor used in converting a spread or interest rate differential into a
dollar currency amount, and is used in several debt-based options. This input has to be a positive value, and in some cases is set to 1 in order to determine the debt-based option’s value in
percentage terms. EBIT Earnings before interest and taxes (EBIT) is used in several financial ratios analysis models. EBIT is also sometimes called operating income, and can be a negative or positive
value. Ending Plot This variable is used in the options trading strategies (e.g., straddles, strangles, bull spreads, and so forth), representing the last value to plot for the terminal stock price
(the x-axis on an option payoff chart); it has to be higher than the Starting Plot value, and is a positive input. EPS Earnings per share (EPS) is net income divided by the number of shares
outstanding; EPS is used in several financial ratios analysis models, and can take on either negative or positive values. Equity Correlation This is the correlation coefficient between two equity
stock prices (not returns), and can be between –1 and +1 (inclusive), including 0. Equity Multiplier Equity multiplier is the ratio of total assets to the total equity of the company, indicating the
amount of increase in the ability of the existing equity to generate the available total assets, and has to be a positive value.
Equity Price or Share Price This is the same as stock price per share, and has to be a positive value.
Equity Value or Total Equity This is the same as total equity in a firm, computed by the number of shares outstanding times the market share price, and can be either zero or a positive value. Equity
Volatility This is the volatility of stock prices, not to be confused with the volatility of internal assets. The term Volatility is used interchangeably with Equity Volatility, but this term is used
in models that require both equity volatility and some other volatility (e.g., asset volatility or foreign exchange rate volatility), and this value is typically positive. Exchange Rate This is the
foreign exchange rate from one currency to another, and is the spot rate for domestic currency to foreign currency; it has to be a positive value. Exercise Multiple This is the suboptimal exercise
multiple ratio, computed as the historical average stock price at which an option with similar type and class, held by a similar group of people, was executed, divided by the strike price of the
option. This multiple has to be greater than 1. This input variable is used in valuing employee stock options with suboptimal exercise behaviors. Expand Factor This is the expansion factor for real
options models of options to expand, and has to be a positive value greater than 1.0, computed using the total expanded net present value (base case plus the expanded case) divided by the base case
net present value. Expected Value This is the expected value or mean value of a project’s net present value, used to determine the rough estimate of an annualized implied volatility of a project
using the management approach (volatility to probability approach), and is typically a positive value. Face Value This is the face value of a bond, in dollars or currency, and has to be a positive
value. This face value is the redeemable value at the maturity of the bond (typically, this value is $1,000 or $10,000). First Period This input variable is used in a spread option, where the
maturity of a spread option is divided into two periods (from time zero to this first period, and from the first period to maturity) and the spread option pays the difference between the maximum
values of these two periods. This input parameter has to be greater than zero and less than the maturity of the spread option. First Variable This is the first variable used in a pentanomial lattice
model to value exotic or real options problems. In the pentanomial lattice, two binomial lattices (a binomial lattice models two outcomes, up or down, evolved through the entire lattice) are combined
into a single rainbow lattice with two underlying variables multiplied together, creating five possible outcomes (UP1 and UP2, UP1 and DOWN2, Unchanged 1 and Unchanged 2, DOWN1 and UP2, and DOWN1 and DOWN2). This input parameter has to be a positive value. Fixed FX Rate This input variable is used in valuing Quanto options, also known as foreign equity options, that are traded on exchanges around the world. The
options are denominated in a currency other than that of the underlying asset. The option has an expanding or contracting coverage of the foreign exchange value of the underlying asset, based on the
fixed exchange rate (domestic currency to foreign currency), and has to be a positive value. Floor This is the interest rate floor and is an interest derivative; it has to be a positive value. The
valuation of the floor is done through computing the value of each of its floorlets and summing them up to determine the price of the derivative. Foreign Exchange Volatility or Forex Volatility This
is the annualized volatility of foreign exchange rates, typically computed using the annualized logarithmic relative returns (use the B2Volatility function to compute this volatility based on
historical exchange rates), and has to be a positive value. Foreign Rate or Foreign RF This is the foreign risk-free rate, used in foreign exchange or foreign equity options and valuation models, and
has to be a positive value. Foreign Value This is the value of a foreign firm denominated in foreign currency, used in valuing a takeover option, and this value has to be a positive number. Forward
CY Correlation This variable is sometimes truncated to “ForCYCorrel.” It is the linear correlation between forward rates and convenience yields, and is used in valuing commodity options. Correlations
have to be between –1 and +1 (typically noninclusive). Forward Days This is the positive integer representing the number of days into the future where there is a corresponding forward rate that is
applicable. Forward Price This is the prearranged price of a contract set today for delivery in the future, and is sometimes also used interchangeably in terms of the future price of an asset or
commodity that may not be prearranged but is known with certainty or is the expected price in the future. Forward Rate This is the forward rate in a commodity option, and has to be a positive value.
Forward Reversion Rate or For-Reversion This input variable is used in valuing commodity options. It computes the values of commodity-based European call and put options, where the convenience yield
and forward rates are assumed to be meanreverting and each has its own volatilities and cross-correlations, creating a complex multifactor model with interrelationships among the variables. The
forward reversion rate is the rate of mean reversion of the forward rate, and is typically a small positive value; it can be determined and calibrated using Risk Simulator’s statistical analysis
tool. Forward Time This is the time in the future when a forward start option begins to become active, and this input parameter has to be a positive value greater than zero and less than the maturity
of the option. Forward Volatility or For-Volatility This input variable is used in valuing commodity options. It computes the values of commodity-based European call and put options, where the
convenience yield and forward rates are assumed to be
mean-reverting and each has its own volatilities and cross-correlations, creating a complex multifactor model with interrelationships among the variables. The forward volatility is the annualized
volatility of forward rates and prices, and has to be a positive value, typically computed using the annualized logarithmic relative returns of historical forward prices (use the B2Volatility
function to compute this volatility based on historical prices). It has to be a positive value. Free Cash Flow This is the free cash flow available to the firm, and can be computed as the net income
generated by the firm with all the modifications of noncash expense add-backs as well as capital expenditure reductions, or can be computed using the three B2RatiosCashFlow models. Future Price This
is the price in the future of any variable that is either known in advance or forecasted. This value is not the price of a futures contract, and is typically a positive value. Future Returns This is
the returns of any variable that is either known in advance or forecasted. This value is not the returns on a futures contract, and can be positive or negative in value. Futures, Futures Price, and
Futures 1 or Futures 2 This is the price of the futures contract (if there are two futures contracts, there will be a numerical value, as in the futures spread options computations), and has to be a
positive value. Futures Maturity This is the maturity of the futures contract, measured in years, and has to be a positive value. Granularities This input parameter has to be a positive integer value
and is used in the computation of finite differences in obtaining the value of an option. Great care has to be taken to calibrate this input, using alternate closed-form solutions. Gross Rent This is
the dollar or currency amount of annualized gross rent, and can be zero or a positive value; it is used in property valuation models. Growth Rate This positive percentage value is used in various
locations and signifies the annualized average growth of some variable. In the financial ratios analysis, this would be the growth rate of dividends (and this value must be less than the discount
rate used in the model). In contrast, this parameter is the annualized growth rate of assets for the Merton probability of default models, and this variable is used as the growth of a population or
market in the S-curve forecast computation on curve saturation rates. Holding Cost This is the zero or positive dollar or currency cost of holding on to an additional unit of inventory, used in the
economic order quantity models to determine the optimal level of inventories to hold. Horizon This is a positive value representing some time period denominated in years, and is used in forecasting
future values of some variable. Horizon Days This is a positive integer value representing the number of holding days to compute a Value at Risk for, which typically is between 1 and 10 days, and
calibrated to how long it will take on average for the bank or company to liquidate its assets to cover any extreme and catastrophic losses or to move out of a loss portfolio.
Inflation This is the annualized rate of inflation, measured as a percentage, and is typically a positive value, although zero and negative values may occur but are rare. Interest Lattice This refers
to the lattice that is developed for the underlying interest rates modeled for a yield curve and its spot volatilities over time, and is used in pricing interest-sensitive derivatives. Interest Paid
This is the dollar or currency amount of interest paid per year, and is either zero or a positive value. Interest Rate This is the percentage interest paid per year, and is typically zero or a
positive value. Interest Rates (Series) This is a series of annualized interest rates or discount rates in percent, in a column with multiple rows, used in computing a project’s net present value or
the price of a bond (given a corresponding series of cash flows). Interest Volatility This is the annualized volatility of interest rates, in percent, and has to be a positive value. See the
definition of Volatility in this Glossary for details on some of the techniques used in computing volatility. Inventory This is the amount of inventories in dollars or currency, and can be determined
from a company’s balance sheet; it is typically a positive number but can sometimes take on a zero value. Invested Capital This is the dollar or currency amount of invested capital, and is typically
a positive value, used to compute capital charge and economic capital of a project or firm. Investment This is the initial lump sum investment dollar or currency amount, used to compute the internal
rate of return (IRR) of a project, and this value is a positive number (although it is used as a negative value in the model, enter the value as positive). Jump Rate This variable is used in a Jump
Diffusion option, which is similar to a regular option with the exception that instead of assuming that the underlying asset follows a lognormal Brownian Motion process, the process here follows a
Poisson Jump Diffusion process, and is used in the B2ROJumpDiffusion models. That is, stock or asset prices follow jumps, and these jumps occur several times per year (observed from history).
Cumulatively, these jumps explain a certain percentage of the total volatility of the asset. The jump rate can be determined using historical data or using Risk Simulator’s statistical analysis tool
to calibrate the jump rate. Jump Size Similar to the Jump Rate, the Jump Size is used to determine the size of a jump in a Jump Diffusion option model. Typically, this value is greater than 1, to
indicate how much the jump is from the previous period, and is used in the B2ROJumpDiffusion models. Jumps Per Year An alternative input to the Jump Size is the number of jumps per year, as it is
easier to calibrate the total number of jumps per year based on expectations or historical data; this input is a positive integer used in the B2MertonJumpDiffusion models.
Known X and Known Y Values These are the historical or comparable data available and observable, needed to use the cubic spline model (both to interpolate missing values and to extrapolate and forecast beyond the sample data set), which is usually applied in yield curve and interest rate term structure construction.
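As an illustration of this kind of spline-based term structure work, the sketch below applies SciPy’s generic cubic spline (not the Modeling Toolkit’s own cubic spline model) to made-up maturities and yields.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Made-up known X (maturities in years) and known Y (yields) values
known_x = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0])
known_y = np.array([0.045, 0.046, 0.047, 0.046, 0.044, 0.045, 0.048])

curve = CubicSpline(known_x, known_y)
print(curve(3.0))    # interpolate a missing 3-year yield
print(curve(40.0))   # the spline also extrapolates beyond the sample data
```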
Kurtosis This is the fourth moment of a distribution, measuring the distribution’s
peakedness and extreme values. An excess kurtosis of 0 is a normal distribution with “normal” peaks and extreme values, and this parameter can take on positive, zero, or negative values. Lambda,
Lambda 1, and Lambda 2 Lambda is the mean or average value used in a Poisson (an event occurring on average during a specified time period or area) and an exponential (the average rate of occurrence)
distribution, and is also used in calibrating the yield curve models. Regardless of the use, lambda has to be a positive value. Last Return This input is used in the exponentially weighted moving
average (EWMA) volatility forecast, representing the last period’s return; it can be periodic or annualized, and can take on positive or negative values. If entering a periodic return, make sure to
set the Periodicity input in the EWMA function to 1 to obtain a periodic volatility forecast, or the correct periodicity value to obtain the annualized volatility forecast. Conversely, if entering an
annualized return, set periodicity to be equal to 1 to obtain the annualized volatility forecast. Last Volatility This input is used in the EWMA volatility forecast, representing the last period’s
volatility; it can be periodic or annualized, and can take on only positive values. If entering a periodic volatility, make sure to set the Periodicity input in the EWMA function to 1 to obtain a
periodic volatility forecast, or the correct periodicity value to obtain the annualized volatility forecast. Conversely, if entering an annualized volatility, set periodicity to be equal to 1 to
obtain the annualized volatility forecast.
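The two EWMA inputs above combine in a RiskMetrics-style update. The Python sketch below assumes the common form σt² = λ·σt−1² + (1 − λ)·rt−1², with an assumed smoothing constant λ of 0.94; the actual EWMA function’s signature and parameterization may differ.

```python
import math

def ewma_volatility(last_return, last_volatility, lam=0.94, periodicity=252):
    """Next-period EWMA volatility forecast (assumed RiskMetrics form).

    Pass periodicity=1 for a periodic forecast from periodic inputs, or the
    number of periods per year to annualize, per the glossary notes above.
    """
    variance = lam * last_volatility**2 + (1 - lam) * last_return**2
    return math.sqrt(variance * periodicity)

# Example: last daily return of -1.2%, last daily volatility of 1.5%
print(ewma_volatility(-0.012, 0.015))                  # annualized forecast
print(ewma_volatility(-0.012, 0.015, periodicity=1))   # periodic forecast
```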
Likely This is the most likely or mode value in a triangular distribution, and can take on any value, but has to be greater than or equal to the minimum and less than or equal to the maximum value inputs in the distribution. Loan Value Ratio This is a positive percentage ratio of the amount of loan required to purchase a real estate investment to the
value of the real estate. Location This is the location parameter in the Pareto distribution, also used as the starting point or minimum of the distribution, and is sometimes also called the Beta
parameter in the Pareto distribution; it can only take on a positive value. Long Term Level This is the long-term level to which the underlying variable will revert in the long run; it is used in
mean-reverting option models, where the underlying variable is stochastically changing but reverts to some long-term mean rate, which has to be a positive value. Long Term Rate This is similar to the
long-term level, but the parameter here is a percent interest rate, a long-term rate to which the underlying interest rate process reverts over time. Lookback Length This input variable is used in a
floating strike partial lookback option, where at expiration the payoff on the call option is being able to purchase
the underlying asset at the minimum observed price from inception to the end of the lookback time. Conversely, the put will allow the option holder to sell at the maximum observed asset price from
inception to the end of the lookback time. Lookback Start This input variable is used in fixed strike lookback options, where the strike price is predetermined, such that at expiration, the payoff on
the call option is the difference between the maximum observed asset price less the strike price during the time between the Lookback Start period to the maturity of the option. Conversely, the put
will pay the maximum difference between the lowest observed asset price less the strike price during the time between the starting period of the lookback to the maturity of the option. Lost Sales
Cost This is the dollar or currency amount of a lost sale, typically zero or a positive value, and is used in the economic order quantity models to determine the optimal levels of inventory to hold
or levels of production to have. Lower Barrier This is the lower barrier stock price in a double barrier or graduated barrier option, where this barrier is typically lower than the existing stock
price and lower than the upper barrier level; it must be a positive value. Lower Delta This is the instantaneous options delta (a Greek sensitivity measure that can be computed using the B2CallDelta
or B2PutDelta functions) of the percentage change in option value given the instantaneous change in stock prices for the lower barrier stock price level. This value is typically set at zero or a
positive value. Lower Strike This is the lower strike price (a positive value) in a Supershare option, which is traded or embedded in supershare funds and is related to a Down and Out, Up and Out
double barrier option, where the option has value only if the stock or asset price is between the upper and lower barriers; at expiration, it provides a payoff equivalent to the stock or asset price
divided by the lower strike price. Lower Value This input variable is used in the B2DT lattices for computing option adjusted spreads in debt with convertible or callable options, and represents the
value that is one cell adjacent to the right and directly below the current value in a lattice. All values in a lattice and this input must be positive. LSL This is the lower specification level of a
Six Sigma measured process—that is, the prespecified value that is the lowest obtainable or a value that the process should not be less than. Marginal Cost This is the additional dollar or currency
cost to the bank or creditgranting institution of approving one extra credit application, and is used to determine if a credit should be approved; this parameter is typically a positive value.
Marginal Profit This is the additional dollar or currency profit to the bank or credit-granting institution of approving one extra credit application, and is used to determine if a credit should be
approved; this parameter is typically a positive value. Market Price Risk This input variable is used in mean-reverting option models as well as in the CIR, Merton, and Vasicek models of risky debt,
where the underlying interest rate process is also assumed to be mean-reverting. The market price of risk is also synonymous with the Sharpe ratio, or bang for the buck—that is, the expected returns
of a risky asset less the risk-free rate, all divided by the standard deviation of the excess returns.
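As a quick illustration of this definition, the sketch below computes the Sharpe ratio from made-up annual asset returns and an assumed risk-free rate.

```python
import numpy as np

# Made-up annual returns of a risky asset and an assumed risk-free rate
asset_returns = np.array([0.08, 0.12, -0.03, 0.10, 0.05])
risk_free = 0.04

excess = asset_returns - risk_free            # returns less the risk-free rate
sharpe = excess.mean() / excess.std(ddof=1)   # divided by std of excess returns
print(sharpe)
```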
Market Return This is the positive percentage of the annualized expected rate of return on the market, where a typical index such as the Standard & Poor’s 500 is used as a proxy for the market.
Market Volatility This input variable is the annualized volatility of a market index, used to model the probability of default for both public and private companies using an index, a group of
comparables, or the market, assuming that the company’s asset and debt book values are known, as well as the asset’s annualized volatility. Based on this volatility and the correlation of the
company’s assets to the market, we can determine the probability of default. Matrix A and Matrix B (Series) This is simply an n × m matrix where n and m can be any positive integer, and is used for
matrix math and matrix manipulations. Maturity This is the period until a certain contract, project, or option matures, measured in years, and has to be a positive value. Maturity Bought This input
variable is the maturity, measured in years (a positive value), of a call option that is bought in a Delta-Gamma hedge that provides a hedge against larger changes in the underlying stock or asset
value. This is done by buying some equity shares and a call option, which are funded by borrowing some amount of money and selling a call option at a different strike price. The net amount is a zero
sum game, making this hedge costless. Maturity Extend This is the extended maturity, in years, of a writer extendible option, and has to be a positive value. Maturity Sold This
input variable is the maturity, measured in years, of a call option that is sold in a Delta-Gamma hedge that provides a hedge against larger changes in the underlying stock or asset value. This is
done by buying some equity shares and a call option, which are funded by borrowing some amount of money and selling a call option at a different strike price. The net amount is a zero sum game,
making this hedge costless. Maximum or Max This is the maximum value of a distribution (e.g., in a discrete uniform, triangular, or uniform distribution), indicating the highest attainable value, and
can be either positive or negative, as well as integer (used in discrete uniform, triangular, or uniform distributions) or continuous (used in triangular and uniform distributions). Mean This is
the arithmetic mean used in distributions (e.g., logistic, lognormal, and normal distributions) as well as the average levels in a Six Sigma process. This value can be positive (e.g., logistic and
lognormal distributions) or negative (e.g., normal distribution), and is typically positive when applied in Six Sigma. Mean Reverting Rate This is the rate of reversion of an underlying variable
(typically interest rates, inflation rates, or some other commodity prices) to a long-run level. This parameter is either zero or positive, and the higher the value, the faster the variable’s value
reverts to the long-run mean. Use Risk Simulator’s statistical analysis tool to determine this rate based on historical data. Measurement Range (Series) In each sampling group in a Six Sigma process,
several measurements are taken, and the range (maximum value less the minimum
value) is determined. This experiment is replicated multiple times through various sampling groups. The measurement range is hence a series of values (one value for each statistical sampling or
experiment subgroup) arranged in a column with multiple rows, where each row represents a group. The range has to be a positive value and is typically a positive integer, and the results are used to
determine the central line, as well as upper and lower control limits for quality control charts in Six Sigma. Minimum or Min This is the minimum value of a distribution (e.g., in a discrete uniform,
triangular, or uniform distribution), indicating the lowest attainable value, and can be either positive or negative, as well as integer (used in discrete uniform, triangular, or uniform
distributions) or continuous (used in triangular and uniform distributions). MV Debt This is the market value of risky debt, and can be priced using the Asset-Equity Parity models using book values of
debt and equity, and applying the equity volatility in the market. Typically, this value is different from the book value of debt, depending on the market volatility and internal asset values, but is
always zero or a positive value. MV Equity This is the total market value of equity, computed by multiplying the number of outstanding shares by the market price of a share of the company’s stock,
and is a positive value. MV Preferred Equity This is the total market value of preferred equity, computed by multiplying the number of outstanding shares by the market price of a share of the
company’s preferred stock, and is a positive value. Net Fixed Asset This is the total net fixed assets (gross fixed long-term assets less any accumulated depreciation levels), and is a positive
value, obtained from a company’s balance sheet. Net Income This is the net income after taxes, in dollar or currency amounts, and can be either positive or negative. New Debt Issue This is the amount
of new debt issued to raise additional capital, and is either zero or positive. Nominal CF This is the nominal cash flow amounts, including inflation, and can be negative or positive. Nominal cash
flow is the real cash flow levels plus inflation adjustments. Nominal Rate This is the quoted or nominal interest rate, which is equivalent to the real rate of interest plus the inflation rate, and
as such is typically higher than either the real interest rate or the inflation rate, and must be a positive value. Nonpayment Probability This is the probability that a debt holder will be unable to
make a payment and will default on a single payment. Sometimes the probability of default can be used, but in most cases the single nonpayment probability is higher than the complete default probability.
NOPAT Net operating profits after taxes (NOPAT) is typically computed as net revenues less any operating expenses and less applicable taxes, making this value
typically higher than net income, which accounts for other items such as depreciation and interest payments. This parameter can be positive or negative. Notes or Notes Payable The amount in dollars
or currency for notes payable, a form of short-term current liability, is typically zero or a positive value. Notional This is a positive dollar amount indicating the underlying contractual amount
(e.g., in a swap). Observed Max This is the observed maximum stock price in the past for a lookback Asian option, and this parameter has to be a positive amount and larger than the observed minimum
value. Observed Min This is the observed minimum stock price in the past for a lookback Asian option, and this parameter has to be a positive amount and smaller than the observed maximum value. Old
Value This is the previous period’s value or old value, used in computing the S-curve forecast, and must be a positive value. Operating Expenses The dollar or currency amount of total operating
expenses (other than direct expenses or cost of goods sold, but including items like sales and general administrative expenses) has to be a positive value. Option Maturity This is the maturity of an
option measured in years, and has to be a positive value; the longer the maturity, holding everything else constant, the higher the value of the option. Option Strike This is the contractual strike
price of an option measured in dollars or currency levels, and has to be a positive value. Holding everything else constant, a higher strike price means a lower call option value and a higher put
option value. Option Value This is the value of an option, and has to be either zero or a positive value. The option value is never negative, and can be computed through a variety of methods
including closed-form models (e.g., Black-Scholes and American approximation models); lattices (binomial, trinomial, quadranomial, and pentanomial lattices); simulation; and analytical techniques
(variance reduction, finite differences, and iterative processes). Other Assets The value of any short-term indirect or intangible assets is usually a zero or positive value. Payables The amount in
dollars or currency values for accounts payable, a form of short-term current liability, is typically zero or a positive value. Payment Probability This is used to compute the cost of rejecting a
good credit by accounting for the chances that payment will be received each time when it is due, and is a positive percentage value between 0% and 100%. Percentile This parameter has to be a
positive value between 0% and 100%, and is used in Value at Risk computations and implied volatility computations. In VaR analysis, this value is typically 95%, 99%, or 99.9%, whereas it has to be
lower than 50% for the worst-case scenario volatility model and higher than 50% for the best-case scenario volatility model.
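To make the percentile-to-VaR mapping above concrete, here is a minimal simulation sketch (made-up distribution parameters; the toolkit's own VaR functions may differ):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical simulated one-year portfolio profit-and-loss outcomes.
pnl = rng.normal(loc=0.0, scale=40_000, size=100_000)

# A 95% VaR is the loss exceeded only 5% of the time, i.e., the
# negative of the 5th percentile of the P&L distribution.
var_95 = -np.percentile(pnl, 5)
print(f"95% VaR: {var_95:,.0f}")
```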
Periodicity Periodicity in the context of barrier options means how often during the life of the option the asset or stock value will be monitored to see if it breaches a barrier. As an example,
entering 1 means annual monitoring, 12 implies monthly monitoring, 52 for weekly, 252 for daily trading, 365 for daily calendar, and 1,000,000 for continuous monitoring. In the application of GARCH
volatility forecasts, if weekly stock price data is used, enter 52 for periodicity (250 for number of trading days per year if daily data is used, and 12 for monthly data). Regardless of the
application, this parameter is a positive integer. Periodic Rate This is the interest rate per period, and is used to compute the implied rate of return on an annuity; this value has to be a positive
percent. Periods This refers to a positive integer value representing the number of payment periods in an annuity, and is used to compute the equivalent annuity payment based on the periodic rate.
Population This is used in the hypergeometric discrete distribution, indicating the population size. Clearly this positive integer value has to be larger than the population successes and is at least
2. The total number of items or elements or the population size is a fixed number, a finite population; the population size must be less than or equal to 1,750; the sample size (the number of trials)
represents a portion of the population; and the known initial probability of success in the population changes after each trial. Population Success or Pop Success This is used in the hypergeometric
discrete distribution, indicating the number of successes of a trait in a population. Clearly this positive integer value has to be smaller than the population size. The hypergeometric distribution
is a distribution where the actual trials change the probability for each subsequent trial and are called trials without replacement. For example, suppose a box of manufactured parts is known to
contain some defective parts. You choose a part from the box, find it is defective, and remove the part from the box. If you choose another part from the box, the probability that it is defective is
somewhat lower than for the first part because you have removed a defective part. If you had replaced the defective part, the probabilities would have remained the same, and the process would have
satisfied the conditions for a binomial distribution. The total number of items or elements (the population size) is a fixed number, a finite population; the population size must be less than or
equal to 1,750, the sample size (the number of trials) represents a portion of the population, and the known initial probability of success in the population changes after each trial. PPE This is the
dollar or currency value of plant, property, and equipment values, and is either zero or positive. Preferred Dividend This is the dollar or currency amount of total dividends paid to preferred stocks
(dividends per share multiplied by the number of outstanding shares), and is a positive value. Preferred Stock This is the price of a preferred stock per share multiplied by the number of preferred
shares outstanding, and has to be a positive value. Previous Value This is the value of some variable in the previous period, used in forecasting time-series data. This has to be a positive value.
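Tying back to the Periodicity entry above: when volatility is estimated from per-period returns, a common convention (shown here as a generic sketch, not necessarily the B2GARCH internals) annualizes by the square root of the periodicity:

```python
import numpy as np

# Hypothetical weekly closing prices; periodicity = 52 for weekly data.
prices = np.array([100.0, 101.5, 99.8, 102.3, 103.1, 101.9, 104.2])
periodicity = 52

log_returns = np.diff(np.log(prices))
weekly_vol = log_returns.std(ddof=1)
annual_vol = weekly_vol * np.sqrt(periodicity)  # sqrt-of-time scaling
print(f"annualized volatility: {annual_vol:.2%}")
```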
Price and CY Correlation This is the correlation between bond price returns and convenience yields, used in the computation of commodity options, and can take on any value between –1 and +1,
inclusive. Price and Forward Correlation This is the correlation between bond price returns and future price returns, used in the computation of commodity options, and can take on any value between
–1 and +1, inclusive. Price Improvement This is the percentage of a real estate property’s price that went to improvements, and is used to compute the depreciation on the property. Price
Lattice This is the price lattice of an interest-based derivative (e.g., bond option) where the underlying is the term structure of interest rates with its own volatilities. Principal Repaid This is
the dollar or currency amount indicating the value of principal of debt repaid, and is used to compute the adjusted cash flow to equity of a levered firm. Probability This is a probability value
between 0% and 100%, and is used in the inverse cumulative distribution function (ICDF) of any distribution, which, given a probability level and the relevant distributional parameters, returns the X
value of the distribution. For instance, in tossing a coin two times, using the binomial distribution (trials is set to 2 and the probability of success, in this case, obtaining heads in the coin
toss, is set to 50%), the ICDF of a 25% probability parameter will return an X value of 0. That is, the probability of getting no heads (X of zero) is exactly 25%. Profit Margin This is the
percentage of net income to total sales, and is typically a positive value, although zero and negative values are possible. Proportion This is the proportion of defects in a Six Sigma model to
determine the requisite sample size to obtain in order to reach the desired Type I and Type II errors, and this value is between 0 and 1, inclusive. Put Maturity This is the maturity of the put
option, measured in years, and this parameter is a positive value. Put Strike This is the contractual strike price for the put option, and has to be a positive value. Sometimes this variable has
different suffixes (e.g., Put Strike Sell Low, Put Strike Buy High, and so forth, whenever there might be more than one put option in the portfolio of option strategies, and these suffixes represent
whether this particular put is bought or sold, and whether the strike price is higher or lower than the other put option). Put Value This is the fair market value of the put option, and sometimes the
theoretical price of a put option is used in its place when market information is unavailable. This parameter requires a positive input. Sometimes this variable has different suffixes (e.g., Put
Value Sell Low, Put Value Buy High, and so forth, whenever there might be more than one put option in the portfolio of option strategies, and these suffixes represent whether this particular put is
bought or sold, and whether the premium paid for this put option or the option value is higher or lower than the other put option).
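The bought/sold suffixes above combine additively at expiration; a minimal terminal-payoff sketch (hypothetical strikes, with premiums and discounting ignored):

```python
import numpy as np

def long_put_payoff(s_t, strike):
    # Terminal payoff of one long put: max(K - S_T, 0).
    return np.maximum(strike - s_t, 0.0)

s_t = np.linspace(50, 150, 5)   # grid of terminal stock prices
put_strike_buy_high = 110.0     # put bought at the higher strike
put_strike_sell_low = 90.0      # put sold at the lower strike

# Long the high-strike put, short the low-strike put.
spread = (long_put_payoff(s_t, put_strike_buy_high)
          - long_put_payoff(s_t, put_strike_sell_low))
for s, p in zip(s_t, spread):
    print(f"S_T = {s:6.1f}  payoff = {p:5.1f}")
```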
PV Asset or Present Value of the Asset This is the ubiquitous input in all real options models, and is the sum of the present values of all net benefits from a real options project or its underlying
asset. Sometimes the net present value is used as a proxy, but typically the implementation cost is separated from the PV Asset value, such that PV Asset less any implementation cost, if executed
immediately, equals the net present value of the project. The PV Asset input has to be a positive value. Quantities (Series) This is a series of positive integers indicating the number of a specific
class of options in a portfolio in order to compute the Value at Risk of a portfolio of options, and these values are typically arranged in a column with multiple rows. Quantity 1 and Quantity 2
These are positive integers indicating the amount of the first asset that is exchanged for the second asset in an asset exchange option with two correlated underlying assets. Random This value
replaces the Probability value when used to obtain the inverse cumulative distribution function (ICDF) of a probability distribution for the purposes of running a simulation. This variable is between
0 and 1, inclusive, and is from a continuous uniform distribution. By choosing a random value between 0 and 1 with equal probability of any continuous value between these two numbers, we obtain a
probability value between 0% and 100%, and when mapped against the ICDF of a specific distribution, it will return the relevant X value from that distribution. Then, when repeated multiple times, it
will yield a simulation of multiple trials or outcomes from that specific distribution. You can use Excel’s RAND() function for this input. Rate of Return This is the annualized percentage required
rate of return on equity, used to compute the price to earnings ratio. Real Cash Flow This is the real cash flow level after adjusting and deducting inflation rates. Specifically, the real cash flow
plus inflation is the nominal cash flow. Real Rate This is the real rate of return or real interest rate after inflation adjustments; in other words, the real rate of return plus the inflation rate
is the nominal rate of return. Receivables The dollar or currency amount of accounts receivable, a short-term or current asset from the balance sheet, is usually a positive value or zero. Recovery
Period This is the recovery period in determining the depreciation of real estate investments, in number of years. Recovery Rate This is the rate of recovery to determine the credit risk shortfall—
that is, the percentage of credit that defaults and the proportion that is recoverable. Remaining Time This is the amount of time remaining in years in an Asian option model. Return on Asset This is
the return on a project or an asset, computed by taking net income after taxes and dividing it by total assets; this parameter value can be positive or negative.
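The coin-toss ICDF figures in the Probability entry above, and the inverse-transform simulation described under Random, can both be checked with a short script (scipy used here as a stand-in for the toolkit's own routines):

```python
import numpy as np
from scipy.stats import binom

# Two tosses, 50% heads: the ICDF of a 25% probability returns X = 0,
# because P(X = 0) is exactly 0.25.
print(binom.ppf(0.25, 2, 0.5))  # -> 0.0

# Inverse transform: uniform draws mapped through the ICDF become
# draws from the binomial distribution itself.
rng = np.random.default_rng(42)
draws = binom.ppf(rng.uniform(size=10_000), 2, 0.5).astype(int)
print(np.bincount(draws) / draws.size)  # approximately [0.25, 0.50, 0.25]
```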
Returns (Series) These are the percentage returns on various assets in a portfolio, arranged in a column with multiple rows; they can be both negative and positive, and are used to compute the
portfolio’s weighted average returns. Revenues
This is the dollar or currency amount of net revenues per year.
Risk-Free Rate and Risk-Free 0 This is the annualized risk-free rate of government securities comparable in maturity to the underlying asset under analysis (e.g., the risk-free rate with the same
maturity as the option), and has to be positive. Risk-Free 0 is the default variable for a changing risk-free rate option model, where if the risk-free series is left blank, this single rate is used
throughout the maturity of the option. ROIC This is the return on invested capital (ROIC), and can be computed using the B2RatiosROIC function, using net operating profit after taxes, working
capital, and assets used. This value can be negative or positive. Row
This is the row number in a lattice, and starts from 0 at the top or first row.
Sales This is the annual total sales of the company in dollar or currency values and is a positive number. Sales Growth is a related variable that looks at the difference of sales between two periods
in percentage, versus Sales Increase, which is the difference in sales but denominated in currency amounts. Salvage This is the positive salvage value in dollars or currency value when an option is
abandoned; the holder of the abandonment option will receive this amount. Sample Size This is the positive integer value of sample size in each subgroup used in the computation of a Six Sigma quality
control chart and computation of control limits. Savings The positive dollar or currency value of savings when the option to contract is executed—that is, the amount of money saved. Second Variable
This is the second underlying variable used in a pentanomial lattice, where the underlying asset lattice is the product of the first and second variables; this input parameter has to be positive.
Service Rate This parameter measures the average rate of service per period (typically per day or per hour)—that is, on average, how many people will be serviced in a queue in a period (e.g., per
hour or per day). This value has to be positive. Shape This is the second input assumption in the Pareto distribution, determining the shape of the distribution, and is a positive value. Share Price
or Equity Price This is the current share or stock price per share at the time of valuation, used in a variety of options models, and has to be a positive dollar or currency value. Shares
This is the number of outstanding shares of a stock, and is a positive integer.
Sigma This is the standard deviation measure of variation within a process, and is used in Six Sigma quality control models. This parameter has to be a positive value.
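The Sigma and Sample Size inputs feed the usual three-sigma control-limit arithmetic for subgroup means; a generic x-bar chart sketch with made-up process numbers (the toolkit's control-chart functions may differ in detail):

```python
import math

process_mean = 10.0  # center line: the process average
sigma = 0.6          # process standard deviation (must be positive)
sample_size = 5      # observations per subgroup

# Subgroup means vary with sigma / sqrt(n), so the limits sit
# three of those standard errors on either side of the center line.
half_width = 3 * sigma / math.sqrt(sample_size)
print(f"CL  = {process_mean}")
print(f"UCL = {process_mean + half_width:.3f}")
print(f"LCL = {process_mean - half_width:.3f}")
```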
Sigma Service Rate This is the standard deviation measure of variation in the service rate, used in Six Sigma process and quality control models. This value has to be a positive
value. Single Interest This is the interest rate used in computing a bond’s convexity and duration models, the second- and first-level sensitivities, respectively. This input parameter has to be a
positive value. Single Period This is the period in years or months that is used to interpolate the missing value within a range of values, applied in the B2LinearInterpolation model (used together
with the Time Periods series and corresponding Values series). Skewness This is the third moment or measure of skew in a distribution. This input parameter is used in an Alternate Distribution option
model, where the underlying distribution of the asset returns is assumed to be skewed and has some kurtosis. This value can be either positive or negative. S Max This is the observed maximum stock
price in the past in an extreme spread option, where such options have their maturities divided into two segments, starting from time zero to the First Time Period (first segment) and from the First
Time Period to Maturity (second segment). An extreme spread call option pays the difference between the maximum asset value from the second segment and the maximum value of the first segment.
Conversely, the put pays the difference between the minimum of the second segment’s asset value and the minimum of the first segment’s asset value. A reverse call pays the minimum from the first
segment less the minimum of the second segment, whereas a reverse put pays the maximum of the first segment less the maximum of the second segment. This variable is the observed maximum stock value
in the observable past. S Min This is the observed minimum stock price in the past in an extreme spread option, similar to the S Max variable as described previously. Spot FX Rate This is the input
in a currency option, which is the current or spot exchange rate, computed by the ratio of the domestic currency to the foreign currency; it has to be a positive value. Spot Price The spot price is
the same as the existing or current stock price, and is a positive value. We use this definition to differentiate between the spot and average or future price levels, and this parameter has to be
positive. Spot Rate, Spot Rate 1, and Spot Rate 2 This is the input in an exotic currency forward option, which is the current or spot interest rate, and has to be a positive value. Spot Volatility
This is the commodity option’s spot price return’s annualized volatility, as measured by the zero bond price level, and this value has to be positive. Spread Certain types of debt come with an
option-embedded provision; for instance, a bond might be callable if the market price exceeds a certain value (when prevailing interest rates drop, making it more profitable for the issuing company
to call the debt and reissue new bonds at the lower rate) or prepayment allowance of mortgages or lines of credit and debt. This input is the option adjusted spread (i.e.,
the additional premium that should be charged on the option provision). This value is computed using an optimization or internal search algorithm. Standard Deviation The standard deviation or sigma
is the second moment of a distribution, and can be defined as the average dispersion of all values about the central mean; it is an input into the normal distribution. The higher the sigma level, the
wider the spread and the higher the risk or uncertainty. When applying it as a normal distribution’s parameter, it is the standard deviation of the population and has to be a positive value (there is
no point in using a normal distribution with a sigma of zero, which is nothing but a single point estimate, where all points in the distribution fall exactly at the mean, generating a vertical line).
Standard Deviation of Demand This is the measure of the variability of demand as used in the determination of economic order quantity, and this value is either zero or positive. Standard Deviation of
Lead Time This is the measure of the variability of lead time it takes to obtain the inventory or product after it is ordered, as used in the determination of economic order quantity, and this value
is either zero or positive. Starting Plot This variable is used in the options trading strategies (e.g., straddles, strangles, bull spreads, and so forth), representing the first value to plot for
the terminal stock price (the x-axis on an option payoff chart); it has to be lower than the Ending Plot value, and is a positive input. Steps This is a positive integer value (at least 5, and typically between 100 and 1,000) denoting the total number of steps in a lattice, where the higher the number of steps, the higher the level of precision but the longer the computational time. Stock This is
the current stock price per share at the time of valuation, used in a variety of options models, and has to be a positive dollar or currency value. Stock Index This is the stock index level, and must
be a positive value, measured at the time of valuation; it is used in index options computations. Stock Prices (Series) This is a list of stock prices over time in a series as used in the GARCH
volatility model (B2GARCH) or computation of the Sharpe ratio (B2SharpeRatio), listed in chronological order (e.g., Jan, Feb, Mar, and so forth) in a single column with multiple rows, versus stock
prices at valuation dates for various options in a portfolio, when used to compute the portfolio’s Value at Risk (B2VarOptions). Stock Volatility This is the same as Equity Volatility or simply
Volatility described in this Glossary (and used interchangeably), but this definition is used when multiple volatilities are required in the model, in order to reduce any confusion. Strike, Strike 1,
and Strike 2 The strike price in an option is the contractually prespecified price in advance at which the underlying asset (typically a stock) can be bought (call) or sold (put). Holding everything
else constant, a higher (lower) strike price means a lower (higher) call option value and a higher (lower) put option value. This input parameter has to be a positive value, and in some rare cases it
can be set to very close to zero for a costless strike option. Strike 1 and Strike 2 are used when
referring to exotic option inputs with two underlying assets (e.g., exchange options or a 3D binomial model). Strike Bought This is the positive dollar or currency strike price of an option (usually
a call) purchased in a Delta-Gamma hedge that provides a hedge against larger changes in the underlying stock or asset value. This is done by buying some equity shares and a call option, which are
funded by borrowing some amount of money and selling a call option at a different strike price. Strike Extend This is the positive value of the new strike price in a writer extendible option, which
is an insurance policy in case the option becomes worthless at maturity. Specifically, the call or put option can be automatically extended beyond the initial maturity date to an extended date with a
new extended strike price, assuming that at maturity the option is out of the money and worthless. This extendibility provides a safety net of time for the holder of the option. Strike FX Rate This
is the positive dollar or currency value of the contractual strike price denominated in exchange rates (domestic currency to foreign currency) for a foreign exchange option. Strike Rate This is the
positive percentage value of the contractual strike price in a swaption (option to swap) or a futures option. Strike Sold This is the positive dollar or currency strike price of an option (usually a
call) sold in a Delta-Gamma hedge that provides a hedge against larger changes in the underlying stock or asset value. This is done by buying some equity shares and a call option, which are funded by
borrowing some amount of money and selling a call option at a different strike price. Successes This is the number of successes in the negative binomial distribution, which is useful for modeling the
distribution of the number of additional trials required beyond the required number of successes. For instance, in order to close a total of 10 sales opportunities, how many extra
sales calls would you need to make above 10 calls, given some probability of success in each call? The x-axis of the distribution shows the number of additional calls required or the number of failed
calls. The number of trials is not fixed; the trials continue until the required number of successes, and the probability of success is the same from trial to trial. The successes input parameter has
to be a positive integer less than 8,000. Success Probability This is a probability percent, between 0% and 100%, inclusive, for the probability of an event occurring, and is used in various discrete
probability distributions such as the binomial distribution. Tails This is the number of tails in a distribution for hypothesis testing as applied in Six Sigma models to determine the adequate sample
size for specific Type I and Type II errors. This parameter can only be either 1 or 2. Tax Rate This is the corporate tax rate in percent and has to be a positive value. Tenure This is the maturity of a swaption (option to swap).
This Category This is the category index number (a positive integer—1, 2, 3, and so forth), to compute the relative width of the credit rating table.
Time, Time 1, and Time 2 The Time variable is in years (positive value) to indicate the specific time period to forecast the interest rate level using various yield curve models, whereas Time 1 and
Time 2 are the years for different spot rates, in order to impute the forward rate between these two periods. Time Interval or DT This is the positive time step input used in a time switch option,
where the holder of the option receives the Accumulated Amount × Time Steps each time the asset price exceeds the strike price for a call option (or falls below the strike price for a put option).
The time step is how often the asset price is checked as to whether the strike threshold has been breached (typically, for a one-year option with 252 trading days, set DT as 1/252). Time Periods
(Series) This is a series of positive time periods in years, arranged in a column with multiple rows, concurrent with another column of values, so that any missing values within the range of the time
periods can be interpolated using the B2LinearInterpolation and B2CubicSpline models. The time periods do not have to be linearly and sequentially increasing. Timing (Series) This is a series of
positive time periods in years, arranged in a column with multiple rows, concurrent with another column of cash flows, so that the present value or price of the bond or some other present value
computations can be done. Typically, the timing in years is linearly increasing. Total Asset This is the total assets in a company, including all short-term and long-term assets, and can be determined
from the company’s balance sheets. Typically, this parameter is a positive value, and is used in financial ratios analysis. Total Capital This is the total dollar or currency amount of capital
invested in order to compute the economic value added in a project. Total Category This is a positive integer value in determining the number of credit rating categories required (e.g., AAA, AA, A,
and so forth). Typically, this value is between 3 and 12. Total Debt This is the total debt in a company, including all short-term and long-term debt, and can be determined from the company’s balance
sheets. Typically, this parameter is zero or a positive value, and is used in financial ratios analysis. Total Equity or Equity Value This is the total common equity in a company, and can be
determined from the company’s balance sheets. Typically, this parameter is zero or a positive value. Total Liability This is the total liabilities in a company, including all short-term and long-term
liabilities, and can be determined from the company’s balance sheets. Typically, this parameter is zero or a positive value, and is used in financial ratios analysis. Trading Ratio This is the number
of trading days left until maturity divided by the number of trading days in a year (typically around 250 days), and is used to compute the plain-vanilla option value after adjusting for the number
of trading days left; it is typically a positive value. Trials This value is used in several places. For a probability distribution, it denotes the number of trials or events (e.g., in a binomial
distribution where a coin is tossed
10 times, the number of trials in this case is 10) or denotes the number of simulation trials and iterations to complete in order to compute the value of an option using the simulation approach.
Regardless, this parameter has to be a positive integer. Units This is the positive integer value denoting the number of units sampled in a Six Sigma quality control study, to determine the number of
defects and proportion of defects. Units Fulfilled This zero or positive integer input variable is used in the Time Switch option model, where in such an option, the holder receives the Accumulated
Amount × Time Steps each time the asset price exceeds the strike price for a call option within the maturity period (or falls below the strike price for a put option). Sometimes the option has
already accumulated past amounts (or as agreed to in the option as a minimum guaranteed payment) as measured by the number of time units fulfilled (which is typically set at zero). Unlevered Cost of
Equity This is the cost of equity in an unlevered firm with no debt, and has to be a positive value, used to compute the weighted average cost of capital for a company. Up This is the up step size
used in an asymmetrical state option pricing model, and needs to be a value greater than 1. This value should be carefully calibrated to the option’s maturity and the number of lattice steps, to
denote the up step size per lattice step. Upper Barrier This is the upper barrier stock price in a double barrier or graduated barrier option, where this barrier is typically higher than the existing
stock price and higher than the lower barrier level; it must be a positive value. Upper Delta This is the instantaneous options delta (a Greek sensitivity measure that can be computed using the
B2CallDelta or B2PutDelta functions) of the percentage change in option value given the instantaneous change in stock prices, for the upper barrier stock price level. This value is typically set at
zero or a positive value. Upper Strike This is the upper strike price (a positive value) in a Supershare option, which is traded or embedded in supershare funds, and is related to a Down and Out, Up
and Out double barrier option, where the option has value only if the stock or asset price is between the upper and lower barriers, and at expiration provides a payoff equivalent to the stock or
asset price divided by the lower strike price. Upper Value This input variable is used in the B2DT lattices for computing option adjusted spreads in debt with convertible or callable options, and
represents the value that is one cell adjacent to the right and directly above the current value in a lattice. All values in a lattice and this input must be positive. USL This is the upper
specification level of a Six Sigma measured process—that is, the prespecified value that is the highest obtainable value or a value that the process should not exceed. Vacancy Factor and Collection
Factor This is the percentage (between 0% and 100%) where the ratio of vacancies or noncollectable rent occurs as a percentage of 100% occupancy, and is used in the valuation of real estate.
Values (Series) This is a series of values or numbers, either negative or positive values, arranged in a column with multiple rows, to be used in concert with the Time Period variable, where any
missing values can be interpolated and internally fitted to a linear model. As an example, suppose the following series of time periods and values exist (Time 1 = 10, Time 2 = 20, Time 5 = 50); we
can then use the B2LinearInterpolation and B2CubicSpline models to determine the missing value(s). Vesting Year This is the number of years or partial years in which the option is still in the
vesting period and cannot be executed. This vesting year period can range from zero to the maturity of the option (the former being a no-vesting American option, whereas the latter reverts to a
European option), and if the value is somewhere in between, it becomes a Bermudan option with blackout and vesting periods. Volatilities (Series) This is a series of annualized volatilities (see the
definition of Volatility for more details) arranged in a row with multiple columns going across, for use in the valuation of risky debt and callable bonds or bond spreads. Each value in the series
must be positive. Volatility This is the annualized volatility of equity or stock prices; it has to be a positive value, and can be computed in various ways—for example, exponentially weighted moving
average (EWMA), generalized autoregressive conditional heteroskedasticity (GARCH), logarithmic relative returns, and so forth. Review the volatility examples and models in the Modeling Toolkit to
obtain details on these methodologies. Volatility 0, 1, 2 These volatility variables are computed exactly as discussed in the Volatility definition, but the difference is that for Volatility 0, this
is the default volatility used in a customized option model with changing volatilities (that is, if the changing volatilities input is left empty, this Volatility 0 will be used as the single
repeated volatility in the model), whereas Volatility 1 and 2 are the volatilities for the first underlying asset and the second underlying asset in a multiple asset option model. These values have
to be positive values. Volatility FX or Volatility Foreign Exchange Rate This is the annualized volatility of foreign exchange rates (see the Volatility definition for the various methods applicable
in valuing this parameter), and this value has to be positive. Volatility Ratio This variable is used in the Merton Jump Diffusion models, where this ratio is the percentage of volatility that can be
explained by the jumps, and is typically a positive value not exceeding 1. WACC The weighted average cost of capital (WACC) is the average cost of capital from common equity, debt (after tax), and
preferred equity, all weighted by the amount obtained from each source. It has to be a positive value, and when used in perpetual firm continuity values with growth rates, WACC has to be greater than
the growth rate parameter. Warrants This is the positive integer number indicative of the total number of warrants issued by the company. Working Capital This is also known as the net working capital
of a company and can be determined using the company’s balance sheet, and is typically a positive dollar or currency value (while zero is a rare but possible occurrence).
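Using the example figures from the Values (Series) entry above (Time 1 = 10, Time 2 = 20, Time 5 = 50), the linear fill-in that B2LinearInterpolation performs can be mimicked with numpy's interp as a stand-in:

```python
import numpy as np

known_times = [1, 2, 5]
known_values = [10, 20, 50]

# Linearly interpolate the missing periods 3 and 4.
for t in (3, 4):
    print(t, np.interp(t, known_times, known_values))  # -> 30.0 and 40.0
```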
Worst Case This is the worst-case scenario’s dollar or currency value of a project or asset within a one-year time frame, and is used in the implied volatility (volatility to probability) estimation.
When used together with the Best Case and Expected Value input parameters, this worst case value has to be less than these two latter inputs. X This is the ubiquitous random variable X, and is used
in multiple locations. When used in probability distributions, it denotes the X value on the x-axis of the probability distribution or the specific outcome of a distribution (e.g., in tossing a coin
10 times, where the probability of getting heads is 50%, we can compute the exact probability of getting exactly four heads, and in this case, X = 4). X is typically a positive value (continuous
values in continuous distributions, and discrete positive values, including zero, for discrete probability distributions). Z1 and Z2 These are the standard normal z-scores used in a bivariate normal
distribution. These values can be either negative or positive. Zero Bond Price This is the price of a zero coupon bond, used in the valuation of callable and risky debt and for pricing commodity
options, and this parameter has to be a positive value. Zero Yields This is the yield of a zero coupon bond, used in the valuation of callable and risky debt, and this parameter has to be a positive value.
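Finally, to make the WACC definition above concrete, a minimal sketch with hypothetical capital weights (the standard textbook formula; the toolkit's own function may differ in detail):

```python
# Hypothetical market values of each capital source.
equity, debt, preferred = 600.0, 300.0, 100.0
cost_equity, cost_debt, cost_preferred = 0.12, 0.07, 0.09
tax_rate = 0.30

total = equity + debt + preferred
wacc = ((equity / total) * cost_equity
        + (debt / total) * cost_debt * (1 - tax_rate)  # debt is after tax
        + (preferred / total) * cost_preferred)
print(f"WACC = {wacc:.2%}")  # 9.57% with these figures
```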
About the DVD
This DVD contains:
1. Risk Simulator trial version (30-day trial license and 10-day pre-installed license).
2. Real Options SLS trial version (30-day license information provided below).
3. Employee Stock Options Valuation Toolkit (3-day license information provided below).
4. Modeling Toolkit trial version (30-day license information provided below).
5. Sample getting-started modeling videos viewable from your personal computer.
6. Sample case studies and models.
7. Several brochures and manuals on our Certified in Risk Management (CRM) designation program, Training DVD set, live training seminars, software, and other pertinent information.
Also, do visit our web site at www.realoptionsvaluation.com to obtain the latest free Excel models, training schedules, case studies, and so forth. Please follow the instructions below to run the trial software programs.
MODELING VIDEOS
If you have an older version of Windows Media Player, you might only hear voices without any video when running some of the enclosed sample video clips. To fix this problem, simply
install the “Video CODEC.exe” located in the DVD’s “Required Software” folder. Restart the videos and they should work.
RISK SIMULATOR (INTERNATIONAL VERSION: ENGLISH, CHINESE, JAPANESE, SPANISH)
FULL & TRIAL VERSION DOWNLOAD: You can download the Risk Simulator (RS) software from the web site to obtain the latest
version or use the version that is on your DVD and run the file Software Install—Risk Simulator.exe. This is a full version of the software but will expire in 10 days, during which you can purchase a
license to permanently unlock the software. If you are using Windows Vista and Excel 2007, please see the Windows Vista note below before installing the software or the software license. In addition,
there is an extended 30-day trial license in the DVD. Please see instructions below for details on how to license the software. To obtain the extended trial period, install RS and then start Excel,
click on Risk Simulator, License, Install License and point to the location of the extension license file in the DVD folder “Instructions and Licenses,” restart Excel and you are now
temporarily licensed. If you are running Windows Vista, please see the note below before installing the license. To permanently unlock the software, install RS and then purchase a license from the
web site and e-mail us your Hardware ID (after installing the software, start Excel, click on Risk Simulator, License, and e-mail
admin@realoptionsvaluation.com
the 11- to 20-digit Hardware ID located on the bottom left of the splash screen). We will then e-mail you a permanent license file. Save this file to your hard drive, start Excel, click on Risk
Simulator, License, Install License and point to the location of this license file, restart Excel and you are now permanently licensed. If you are running Windows Vista, please see the note below
before installing the license.
System Requirements, FAQ, and Additional Resources
Requires Windows Vista/XP; Excel XP/2003/2007; 512 MB RAM; 100 MB Hard Drive; administrative rights; and .NET Framework 1.1 and 2.0. Download Microsoft .NET Framework 1.1 from the web site or, from the DVD, run the file Dot NET 1.1.exe (required to install Risk Simulator if your system does not already have it).
Please view the FAQ on the web site if you have any questions about system requirements or problems installing the software. Windows Vista This software works with Vista and Excel 2007. Please follow
the instructions below for installing the software as well as for installing the license in Vista. You need to first turn off User Account Control (UAC) before installing the software or license file for Risk Simulator: (i) In Windows Vista, click on Start | Control Panel | Classic View | User Accounts | Turn User Account Control On or Off; and uncheck Use User Account Control; and then reboot the
computer. You can now install the software per the instructions above, or install the permanent license that you have purchased by starting Excel and clicking on Risk Simulator | License | Install
License, and browse to the license file you received when you purchased the software. (ii) When restarting the computer, you will get a message that UAC is turned off. You can turn this message off
by going to the Control Panel, Security Center; Change the Way Security Center Alerts Me; Don’t Notify Me and Don’t Display the Icon. Either that or you can turn UAC back on after completing the
license installation.
REAL OPTIONS SLS (WITH SUPER SPEED) (INTERNATIONAL VERSION: ENGLISH, CHINESE, JAPANESE, SPANISH)
The FULL/TRIAL version of Real Options SLS is now available. Download the software from the web site (or browse the DVD and run Real Options .exe). Make sure you install .NET Framework 2.0 before installing this software (see below).
Trial Version Info
You can test this software on a 30-day trial
basis, upon which you can purchase the software at www.realoptionsvaluation.com.
Full Version Info
If you are purchasing or have already purchased the software, simply download and install the software. You will need TWO license keys for the software. For the first, when prompted after the 30-day trial period when you start the SLS Functions, e-mail us the Hardware Fingerprint at admin@realoptionsvaluation.com and we will send you your first permanent unlock code. Next, for the second license, you have 30 days to click on Install License on the main software screen and e-mail us the Hardware ID. Once you receive the license file, click on Buy or Install License, then click on Activate and browse to the license file we sent you. Please note that the Hardware ID and Hardware Fingerprint are different.
System Requirements, FAQ, and Additional Resources
Requires Windows Vista/XP; Excel XP/2003/2007; 512 MB RAM; 50 MB Hard Drive; administrative rights; and .NET Framework 2.0. Download Microsoft .NET Framework 2.0 (required for Real Options SLS if your
system does not already have it). Version 2.0 should be installed together with 1.1 (they work side by side). This file is also available in the DVD’s required software folder.
Please view the FAQ if you have any questions about system requirements or problems installing the software. Installation Instructions for Real Options SLS can be found on the web site.
EMPLOYEE STOCK OPTIONS VALUATION TOOLKIT
FULL & TRIAL VERSION DOWNLOAD: Download the Employee Stock Options Toolkit from the web site. A license key is required to install. Once you have purchased the full license, all you have to do is e-mail us your Fingerprint ID (at installation or at the end of your trial period when you start the software, you will be asked for a key and the fingerprint
is provided to you then). We suggest writing down this Fingerprint ID. Send the fingerprint to
admin@realoptionsvaluation.com
and we will send you the permanent license key within a few hours of your purchase. After 3 days, simply start the software and you will be asked for a permanent license key. Enter the key we
e-mailed you to permanently unlock the software. Use the following trial license key when prompted for your free 3-day trial: Name: Temporary Key License: 656B-994F-F942-952F
System Requirements, FAQ, and Additional Resources Requires Windows Vista/XP; Excel XP/2003/2007; 256 MB RAM; 10 MB Hard Drive; and administrative rights. Please view the FAQ if you have any
questions about system requirements or problems installing the software.
MODELING TOOLKIT
The Modeling Toolkit has more than 800 models and functions, as well as more than 300 Excel and SLS modeling templates, for use within the Excel environment as well as the standalone SLS. Please see the MANUALS folder on the DVD for a detailed list of the models and functions. This software is introduced in detail in Advanced Analytical Models (Dr. Johnathan Mun, Wiley, 2008).
FULL & TRIAL VERSION DOWNLOAD: Download the Modeling Toolkit from the web site or install the software from the DVD. A license key is required to install. Once you have purchased the full license, all you have to do is e-mail us your Fingerprint ID (at installation or at the end of your trial period when you start the software, you will be asked for a key and the fingerprint is provided to you then). We
suggest writing down this Fingerprint ID. Send the fingerprint to
admin@realoptionsvaluation.com
and we will send you the permanent license key within a few hours of your purchase. After 30 days, simply start the software and you will be asked for a permanent license key. Enter the key we
e-mailed you to permanently unlock the software. Use the following trial license key when prompted for your free 30-day trial: Name: 30 Day Trial Key: 4C55-0BA2-420E-CA84
System Requirements, FAQ, and Additional Resources Windows Vista/XP, Excel XP/2003/2007, 512 MB RAM, 75 MB Hard Drive, and administrative rights. Please view the FAQ if you have any questions about
system requirements or problems installing the software.
CUSTOMER CARE
If you have trouble with the DVD, please call the Wiley Product Technical Support phone number at (800) 762-2974. Outside the United States, call 1 (317) 572-3994. You can also contact Wiley Product Technical Support at http://support.wiley.com. John Wiley & Sons will provide technical support only for installation and other general quality control items. For technical support on the applications themselves, consult the program’s vendor at admin@realoptionsvaluation.com or visit www.realoptionsvaluation.com. To place additional orders or to request information
about other Wiley products, please call (877) 762-2974.
About the Author
Dr. Johnathan C. Mun is the founder and CEO of Real Options Valuation, Inc. (ROV), a consulting, training, and software development firm specializing in strategic real options, financial valuation,
Monte Carlo simulation, stochastic forecasting, optimization, Basel II, FAS123, and risk analysis located in northern Silicon Valley, California. ROV has partners around the world, including Beijing,
Chicago, Hong Kong, Mexico City, New York, Port Harcourt of Nigeria, Shanghai, Singapore, Zurich, and other locations. ROV also has a local office in Shanghai. He is also the chairman of the
International Institute of Professional Education and Research (IIPER), an accredited global organization providing the Certified in Risk Management (CRM) designation, among others, staffed by
professors from named universities from around the world. He is the creator of the Real Options SLS Super Lattice Solver software, Risk Simulator software, Modeling Toolkit software, and Employee
Stock Options Valuation software showcased in this book, as well as the risk analysis Training DVD. He holds public seminars on risk analysis and CRM programs. He has authored nine other books
published by John Wiley & Sons, including Banker’s Handbook on Credit Risk (2008); Modeling Risk: Applying Monte Carlo Simulation, Real Options, Optimization, and Forecasting (2006); Real Options
Analysis: Tools and Techniques, First and Second Editions (2003 and 2005); Real Options Analysis Course: Business Cases (2003); Applied Risk Analysis: Moving Beyond Uncertainty (2003); and Valuing
Employee Stock Options (2004). His books and software are being taught and used by faculty and students at top universities around the world, including Beijing University, Bern Institute in Germany,
Chung-Ang University in South Korea, Georgetown University, ITESM in Mexico, Massachusetts Institute of Technology, U.S. Naval Postgraduate School, New York University, Stockholm University in
Sweden, University of the Andes in Chile, University of Chile, University of Pennsylvania Wharton School, University of York in the United Kingdom, and Edinburgh University in Scotland, among others.
Dr. Mun is also currently a finance and economics professor and has taught courses in financial management, investments, real options, economics, and statistics at the undergraduate and the graduate
MBA and Ph.D. levels. He teaches and has taught at universities all over the world, from the U.S. Naval Postgraduate School (Monterey, California) and University of Applied Sciences (Switzerland and
Germany) as full professor, to Golden Gate University, San Francisco State University, and St. Mary’s College (California), and has chaired many graduate research MBA thesis and Ph.D. dissertation
committees. He also teaches weeklong risk analysis, real options analysis, and risk analysis for managers’ public courses where participants can obtain the CRM designation upon completion. He is a
senior fellow at the Magellan Center and sits on the board of standards at the American Academy
of Financial Management. He was formerly the Vice President of Analytics at Decisioneering, Inc., where he headed the development of options and financial analytics software products, analytical
consulting, training, and technical support, and where he was the creator of the Real Options Analysis Toolkit software, the older and much less powerful predecessor of the Real Options Super Lattice
software introduced in this book. Prior to joining Decisioneering, he was a Consulting Manager and Financial Economist in the Valuation Services and Global Financial Services practice of KPMG
Consulting and a Manager with the Economic Consulting Services practice at KPMG LLP. He has extensive experience in econometric modeling, financial analysis, real options, economic analysis, and
statistics. During his tenure at Real Options Valuation, Inc., Decisioneering, and KPMG Consulting, he taught and consulted on a variety of real options, risk analysis, financial forecasting, project
management, and financial valuation issues for over 100 multinational firms (former and current clients include 3M, Airbus, Boeing, BP, Chevron Texaco, Financial Accounting Standards Board, Fujitsu,
GE, Microsoft, Motorola, Pfizer, Timken, U.S. Department of Defense, U.S. Navy, Veritas, and many others). His experience prior to joining KPMG included being department head of financial planning
and analysis at Viking Inc. of FedEx, performing financial forecasting, economic analysis, and market research. Prior to that, he did financial planning and freelance financial consulting work. Dr.
Mun received his Ph.D. in Finance and Economics from Lehigh University, where his research and academic interests were in the areas of investment finance, econometric modeling, financial options,
corporate finance, and microeconomic theory. He also has an MBA in business administration, an MS in management science, and a BS in Biology and Physics. He is Certified in Financial Risk Management,
Certified in Financial Consulting, and Certified in Risk Management. He is a member of the American Mensa, Phi Beta Kappa Honor Society, and Golden Key Honor Society as well as several other
professional organizations, including the Eastern and Southern Finance Associations, American Economic Association, and Global Association of Risk Professionals. In addition, he has written many
academic articles published in the Journal of the Advances in Quantitative Accounting and Finance, the Global Finance Journal, the International Financial Review, the Journal of Financial Analysis,
the Journal of Applied Financial Economics, the Journal of International Financial Markets, Institutions and Money, the Financial Engineering News, and the Journal of the Society of Petroleum
Engineers.
Abandonment options, 739–748, 763–767 Absolute returns, 89 Accrual(s): on basket of assets, 178–179 instruments, 405–406 range, 226–227 Acquisitions, 801–809, 821. See also Buy decisions AIC (Akaike
Information Criterion), 277 Allocation, portfolio: banking and, 313–314 continuous, 356–361 investment, 372 stochastic, 400–404 ALM (asset liability management), 349–354 Alpha errors, 624 American
closed-form approximations, 225, 424, 697–699, 753, 761–762 American options: abandonment, 739–740, 742–743 call options, 180–187, 424–425, 697–699, 726, 733–735, 737, 738 chooser options and, 764,
770–771 contraction, 750–751 on debt, 422–423 dividends and, 186–187, 734–738 double and exotic barrier, 731–733 dual variable rainbow options and, 767–769 employee stock options and, 715–717
exchange asset options and, 203 exotic options and, 178, 731–733 expansion, 756–758 on foreign exchange, 182–183 futures contracts and, 212 on index futures, 184–185 jump-diffusion options and, 774
lower barrier, 725–727 mean-reversion options and, 777, 779 multiple assets competing options and, 781 perpetual options and, 225 plain vanilla options and, 424–431 put options, 422, 424, 697, 733,
736–738, 748 range accruals and, 226 in Real Options SLS, 697–699, 715–717 with sensitivities, 180–181 simultaneous compound options and, 791 upper barrier, 728–730 Amortizations, 141–144, 534–537
Analytics: Central Limit Theorem, 79–88 Flaw of Averages, 88–93 lottery numbers, winning, 84–88 Mathematical Integration Approximation Model, 93–96 projectile motion, 96–99 regression diagnostics,
100–109 Ships in the Night, 109–111 statistical analysis, 111–122 weighting of ratios, 123–124 ANOVA (analysis of variance), 610–611, 618–623 Approximation models, 93–96, 180 Approximations, 93–96,
186, 225. See also American closed-form approximations ARIMA (autoregressive integrated moving average), 74–75, 276–282 Arithmetic averages, 188–189 Asian lookback options, 188–190 Assets: accruals
on basket of, 178–179 allocation optimization model, 356–361 asset-equity parity model, 137–138 asset liability management (ALM), 349–354 asset or nothing options, 190–191 benchmark, 235–236 debt
analysis and, 147–148 exchange assets options, 203 market value of, 137–138 multiple, 779–781 reference, 236 two-asset options, 212, 222–223, 233–236 volatility of, 137–138, 222 Assumptions:
defining, 9–12 management, 666, 681 optimization and, 62 violations of, 46, 101, 240 Audit worksheets, 700, 702 Auto-ARIMA, 75, 278–281 Autocorrelations. See also Correlations diagnostic tools and,
47–49, 51–52 forecasting and, 241, 243, 244, 277 regression diagnostics and, 103–105 statistical analysis and, 113, 120–121 Autoregressive integrated moving average (ARIMA), 75, 276–282 Average
options, 188–189
Averages: analytics and, 88–93 arithmetic, 188–189 autoregressive integrated moving, 75, 276–282 exponentially weighted moving, 664–665 geometric, 88–90, 189–190 harmonic, 90–91 skewed, 91–93
Backcasting, 133 Backward induction, 153, 156, 158 Bandwidth requirements, 329 Banking, 293–320, 634–643 break-even inventory and, 637–639 classified loan borrowing base and, 634–636 default
probabilities in, 294–301 economic capital and value at risk and, 303–312 firm in financial distress and, 640–641 hurdle and discount rates and, 318–320 loss given default and, 301–303 optimization
and, 315–319 portfolio allocation and, 313–315 pricing loan fees model and, 642–643 queuing models and, 354–356 Barrier options: binary digital instruments and, 405 double, 200–201, 230, 731–733
exotic, 191–192, 732 gap options and, 212 graduated, 213–214 lower, 725–727, 818 two-asset, 233–234 upper, 728–730 Basel II Accord, 293, 301, 652, 653, 657 Baseline growth, 283 Basic Econometrics, 75
Basic simulation model, 510–516 Bayesian analysis (Bayes’ Theorem), 166–168 Bearish spread positions, 417 Benchmark assets, 235–236 Benchmarks, 697–700 Benyovszky, Mark A., 329 Bermudan options:
abandonment, 744–746 chooser options and, 765 contraction, 751–754 dividends and, 186, 734–738 employee stock options and, 716–718 exotic options and, 178 expansion, 760 plain vanilla options and,
429–431 with sensitivities, 180–181 Bernoulli distributions, 97 Beta distributions, 83, 446, 518 Beta errors, 624 BIM (Bliss interpolation model), 674–675 Binary decision variables, 62 Binary digital
instruments, 405–406 Binary digital options, 193–194
Binomial distributions, 54–56, 81–84, 548–552, 571–573 Binomial lattices: closed-form model versus, 821–826 employee stock options and, 719 exotic options and, 180–182, 186, 217 modified,
407–413, 420–423 multinomial lattices versus, 705, 708, 709, 768, 795–798 plain vanilla options and, 424–431 Biotech, 287–292 Bivariate linear regression, 395–400 Black-Derman-Toy methodology, 407
Blackout Steps, 428–429, 701–703, 717, 723–724 Black-Scholes closed-form model, 697–699, 719, 821–823 Black-Scholes option pricing model. See also Generalized Black-Scholes-Merton model exotic
options and, 180, 217, 237, 737–738 plain vanilla options and, 424 tornado and sensitivity charts and, 27–28 Bliss interpolation model (BIM), 674–675 Bond(s): call options, 149–150 debt analysis and,
138–140 inverse floater, 407–413 value of, 129–130, 151–152 yields and spreads, 432–434 Bootstrap simulations, 35–40, 522–525, 587–589 Borrowing base loans, 634–636 Box-Jenkins ARIMA, 75, 277–278
Break-even inventory, 637–639 Brownian motion, 49, 51, 106, 216, 245 Bull spread positions, 417 Buy decisions. See also Acquisitions build versus, 801–809 lease versus, 631–633 Calendar-days ratios,
232 Call and put collar strategy, 224–225 Call options. See also American options; Bermudan options; European options bonds, 149–150 definition of, 211 dividends and, 734–735 mean-reversion and,
777–779 plain vanilla, 424–431 simple, 795–797 Capability measures (Cpk), 627–631 Capital: asset liability management and, 349–354 economic, 303–315 value at risk and, 653–661 CAPM (capital asset
pricing model), 320 Case studies. See Real Options Strategic Case Studies Cash flow. See also Discounted cash flow model matching, 350 model, 481–486, 667 returns, 664–668, 681
Cash or nothing options, 195 Causation versus correlation, 19, 52, 247 CDF (cumulative distribution function), 54–56, 81 Central Limit Theorem, 40, 79–88, 624 Changing volatility, 709, 712
Chi-square tests, 608–611 Chooser options, 196–197, 228, 763–767, 770–773 CIR (Cox-Ingersoll-Ross) model, 138–140, 673–674 Classified loan borrowing base, 634–636 Closed-form models. See American
closed-form approximations; Black-Scholes closed-form model Coefficient of variation, 327, 374, 482 Combinatorics, 625–626 Commercial real estate, 456–459 Commodity options, 198 Complex chooser
options, 197, 228, 771–773 Complex combinatorial nested options, 781–782, 823 Compound options, 223–224. See also Sequential compound options; Simultaneous compound options Confidence intervals,
16–18, 87, 580–583 Constraints, 61–64, 368, 382, 470–472 Continuous decision variables, 62, 64–70, 400–404 Continuous portfolio allocation, 356–361 Contraction options, 748–755, 763–767, 803–804,
806, 808 Control charts, quality, 628–631 Convertible warrants, 821–826 Convexity, 145–146, 472–473 Correlations. See also Autocorrelations causation versus, 19, 52, 247 correlated simulations,
525–529 decision analysis and, 158, 168 effects model, 528–530 forecasting and, 246–253 pairwise, 525–531 parametric, 19 precision control and, 18–22 rainbow options and, 768–769 regression
diagnostics and, 108 sensitivity analysis and, 29–33, 505, 508–509 serial, 276–277 value at risk and, 304–305, 309–312, 657–663 volatility and, 682–684 Cost estimation model, 443–446 Covariance,
474–475, 661–663 Covered call positions, 416–417 Cox-Ingersoll-Ross (CIR) model, 138–140, 673–674 Cpk (Process Capability Index), 627 CPM (critical path method), 446–453 Credit analysis. See also
Debt analysis; Loans credit default swaps and credit spread options, 127–128 credit premium, 125–126
credit risk analysis, 129–130, 133–134, 295, 437, 653–656 external debt ratings and spread, 131–132 internal credit risk rating model, 133–134 profit cost analysis of new credit, 135–136 Credit
default swaps, 131–132 Credit premiums, 125–126 Credit risk analysis, 129–130, 133–134, 295, 437, 653–656 Credit Risk Plus method, 434 Credit scoring models, 295, 299–301, 432–436 Credit spreads,
125–132, 434 Credit Suisse Financial Products, 434 Critical path analysis (CPM PERT GANTT), 446–453 Cubic spline extrapolation, 258, 262–263, 684–689 Cumulative distribution function (CDF), 54–56, 81
Cuneo Hervieux, Elio, 321 Currency options, 199–200, 205–207, 487–488. See also Foreign exchange: options Custom distributions, 76 Customized options: abandonment, 745–747 chooser options and, 764,
766, 767 contraction, 752, 755 expansion, 760–761 Custom variables, 703 Data. See also Historical data extraction, 42, 515 fitting, 531–534 Debt: options on, 422–423 repayment of, 141–144, 534–537
value of, 129–132, 147–148, 151–152 Debt analysis. See also Credit analysis asset-equity parity model, 137–138 Cox model, 138–140 debt repayment and amortization, 141–144 debt sensitivity models,
145–146 Merton model, 147–148 Vasicek models, 149–152 Debt sensitivity models, 145–146 Decision analysis: Bayes’ Theorem and, 166–168 buy versus build, 801–809 buy versus lease, 631–633 decision
trees, 153–168 economic order quantity and, 169–172 expected utility analysis and, 172–173 expected value of perfect information and, 164 inventory and, 169–172, 174–175 Minimax and, 165, 168
optimization models and, 56–70 queuing models and, 176–178 Decision trees, 153–168
Decision variables. See also Variables continuous, 62, 64–70, 400–404 integers, 62 mixed, 62 optimization and, 60–63 Default probabilities: banking and, 294–301 bond yield and spreads and,
432–434 credit analysis and, 125–126, 133, 135 empirical models of, 294, 299–301, 432–436 external options model (public company), 437–440 loss given default, 301–303 Merton models and, 441–442
structural models of, 294–300, 432–434, 441 Defective proportion units (DPU), 627 Deferment options, 810–813 Delphi method, 33, 334, 339, 342, 380 Delta, call, 492–493 Delta-gamma hedges, 474–478
Delta hedges, 214, 478 Delta options portfolio, 651–652 Delta precision, 624 Demand curves, 538–541 Descriptive statistics, 112–114 Design of experiments, 625–626 Deterministic, 59, 62, 274, 373–374,
642 Diagnostic tools, forecasting and regression, 42, 44–52, 100–109, 239–247, 270 Digital instruments, 405–406 Discounted cash flow model: abandonment options and, 739 contraction options and, 748
expansion options and, 756 net present value and, 344–346 sensitivity analysis and, 496–497 simulations and, 23–24, 542–545 valuations and, 640 volatility and, 664 Discount rates, 318–320 Discrete
integer variables, 70–73, 364–368 Discrete project selection, 362–366 Discrete uniform distribution, 79–81, 86 Distributional analysis, 54–56, 58–61, 521–523 Distributional fitting: data and, 531–534
multiple variable, 682–686 simulations and, 33–35, 36 statistical analysis and, 113–116 value at risk and, 304, 306, 653–655 Distributions: Bernoulli, 97 Beta, 83, 446, 518 binomial, 54–56, 81–84,
548–552, 571–573 custom, 76 discrete uniform, 79–81, 86 exponential, 354–356 Gumbel maximum, 337 hypergeometric, 83
normal, 79–86, 574–577 outcomes, 79–81 Poisson, 83, 176–177, 342, 354–356, 573–576 skewed, 217 skewness, 36–38 triangular, 446 Distributive lag analysis, 47–48, 104–105, 243–244 Dividends: call
options and, 734–735 exotic options and, 186–187, 201–202 expansion options and, 756–757, 759 put options and, 736–738 uneven payment options, 237 Domestic currency, 205–206, 207 Double barrier
options, 200–201, 230, 731–733 DPU (defective proportion units), 627 Dual variable rainbow options, 767–769 Duration, 145–146, 349–351, 470–471 Dynamic optimization: continuous portfolio allocation
and, 359–360 discrete project selection and, 364 industry applications and, 316 military portfolio and, 377, 380 simulations and, 63–64 stochastic portfolio allocation and, 403–404 Dynamic versus
static perturbations, 25–26, 29–33 Econometrics, 75, 248–253 Economic capital, 303–314 Economic order quantity, 169–172 Efficient frontier: of generation, 322–328 integrated risk analysis and,
470–472 military portfolio and, 380–385 optimization procedure, 63–64, 71, 360, 365 Elasticity, 386–390, 538–541 Electric/utility, 321–329 Embedded options, 350–351, 420 Employee stock options,
715–724 American call options and, 715–716 Bermudan call options with vesting and, 716–718 blackouts and, 723–724 European call options and, 719–720 forfeiture and, 723–724 suboptimal exercise and,
720–722, 723–724 valuation toolkit, 717–718, 721–722 vesting and, 716–718, 723–724 Epidemics, 546–548 Equity, 137–138 Errors. See also Normality of errors alpha/beta, 624 estimates of, 284, 286 mean
absolute percent, 284, 286 mean-squared, 284 sphericity of, 104, 243, 286 ESO (Employee Stock Options) Valuation Toolkit, 717–718, 721–722
European options: abandonment, 743–745 chooser options and, 764, 770–773 contraction, 751–753 on debt, 422–423 dividends and, 186, 201–202, 734–738 double and exotic barrier, 731–733 dual
variable rainbow options and, 767–769 employee stock options and, 719–720 exchange asset options and, 203 exotic options and, 178, 198, 217, 236 expansion, 756–760 futures contracts and, 212 inverse
gamma options and, 215 jump-diffusion options and, 774 lower barrier, 725–727 mean-reversion options and, 777 multiple assets competing options and, 781 plain vanilla options and, 424–431 range
accruals and, 226 in Real Options SLS, 697–699 with sensitivities, 180–181 upper barrier, 728–730 EVII (expected value of imperfect information), 161, 163–164 EVPI (expected value of perfect
information), 164 EWMA (exponentially weighted moving average) models, 664, 665, 668 Excel, 1–2, 4, 6, 8, 19, 24, 42, 63, 67, 71, 74, 90, 93, 96, 163, 315, 344, 358, 364, 382, 402, 410, 414, 424,
426–427, 450, 455, 463, 465, 515, 526, 558, 562, 625, 695–697, 700, 709–714 Exchange assets options, 203 Exchange rates. See Foreign exchange Exotic options: accruals on basket of assets, 178–179
Asian lookback options, 188–190 asset or nothing options, 190–191 barrier options, 191–192, 200–201, 213–214, 725–733 basic call options, 734–735 basic put options, 736–738 binary digital options,
193–194 cash or nothing options, 195 chooser options, 196–197, 228, 763–767, 770–773 commodity options, 198 currency options, 199–200 with dividends, 186–187, 201–202, 237 exchange assets options,
203 foreign equity options, 207–208 foreign exchange options, 182–183, 199–200, 205–206 foreign takeover options, 209 forward start options, 210 futures options, 184–185, 211–212, 229 gap options,
212–213 index options, 184–185, 214–215 inverse gamma options, 215–216
jump-diffusion options, 216–217 leptokurtic and skewed options, 217–218 lookback options, 188–190, 218–222 option collar, 224–225 options on options, 223–224 perpetual options, 225 range
accruals, 226–227 with sensitivities, 180–181 supershare options, 230 time switch options, 231 trading-day corrections, 232 two asset options, 222–223, 233–236 writer extendible options, 238
Expansion options, 756–767, 804–806 Expected utility analysis, 172–173 Expected values, 156, 161, 163–164 Experiments, design of, 625–626 Exponential distribution, 354–356 Exponential growth,
254–256, 264–266 Exponentially weighted moving average (EWMA) models, 664–665 External debt ratings, 127–132 External options model (public company), 437–440 Extrapolation, 77, 118, 258–263, 271–272,
684–689 Extreme spreads options, 204–205 Fairway options, 226–227, 405–406 Farm-outs, 810–813 Federal Reserve Bank, 139 Financial distress, 640–641 Financial instruments, embedded options in, 350–351
Financial statements, valuation and, 637–639, 644 Fixed income investments, 472–473 Fixed strike options, 218–220 Fixed versus floating rates, 479–480 Flaw of Averages, 88–93 Floating exchange rate,
487 Floating strike options, 220–222 Floating versus fixed rates, 479–480 Forecast(s): charts, 14–18, 511–513 correlations and, 246–253 defining, 12–13 diagnostic tools, 42, 44–52, 100–109, 239–247,
270 econometric, 248–253 exponential J-growth curves and, 254–256 integrated risk analysis and, 460–462 interpretation of results, 13–16 linear interpolation and, 258–263 logistic S-growth curves
and, 264–266 manual computations, 257–258 market share and, 267–268 Markov chains and, 267–268 module, functions of, 3 multiple regression, 248–253, 269–270
Forecast(s) (Continued) nonlinear extrapolation and, 258–263, 271–272 optimization and, 62 statistics, 14 stochastic processes and, 77–78, 245–246, 273–275 techniques, 73–78 time-series analysis
and, 257–258, 283–286 time-series ARIMA and, 276–282 Foreign equity options, 205–208 Foreign exchange: cash flow model, 481–486 hedging exposure, 487–491 options, 182–183, 199–200, 205–206 Foreign
takeover options, 209 Forfeiture, 719–720 Forward rates, 674 Forward start options, 210 Free cash flow, 345, 463, 465, 784 Friedman’s test, 610–612 F-tests, two-variable, 606–607 Futures, 184–185,
211–212, 229 Gamma, call, 492–493. See also Delta-gamma hedges; Inverse gamma options GANTT chart analysis, 446–453 Gap analysis, 349–350 Gap options, 212–213 GARCH (generalized autoregressive
conditional heteroskedasticity), 76, 665, 668–672, 673 Garman-Kohlhagen model, 182, 199 Generalized Black-Scholes-Merton model, 180, 184, 214–215, 232, 738. See also Black-Scholes option pricing
model Geometric averages, 88–90, 189–190 Glantz, Morton, 634, 640, 642 Goodness of fit tests, 608–609 Graduated barrier options, 213–214 Greeks, 491–495 Growth: baseline, 283 exponential, 254–256,
264–266 Gumbel Maximum Distribution, 337 Harmonic averages, 90–91 Harvest model, 390–394 Hedges. See Risk hedges Heteroskedasticity, 44, 46, 100–101, 239–240, 242. See also GARCH High-tech
manufacturing, 801–809 Histogram (tab), 511 Historical data: distributional fitting and, 33 elasticity and, 386–390, 538 forecasting and, 276, 283 success rates and, 517–518 value at risk and,
653–654 volatility and, 668
Hull-White models, 149 Hurdle rates, 319–320 Hypergeometric distributions, 83 Hypotheses tests: advanced techniques, 590–623 ANOVA, 619–623 bootstrap simulation as, 36–37 chi-square tests,
608–611 classical, 40–42 confidence intervals and, 580–583 design of experiments and, 623–626 in empirical simulation, 580–587 Friedman’s test, 610–612 Kruskal-Wallis test, 612–614 Lilliefors test,
613–615 nonparametric methodologies, 607–618 one-variable, 591–597 runs test, 616–617 sample size determination and, 623–626 statistical analysis and, 113, 116 in theoretical situations, 576, 578–582
two-variable, 597–607 types of, 591–594 Wilcoxon signed-rank test, 616, 618 ICDF (inverse cumulative distribution function), 54–56, 579 Implied volatility, 663 Index futures, 184–185 Index options,
214–215 Industry applications: banking, 293–320 biotech, 287–292 electric/utility, 321–329 information security intrusion risk management, 329–348 insurance asset liability management model, 349–354
inventory, 169–172, 174–175, 366–371, 637–638 manufacturing, 170–172, 287–288, 801–809 oil and gas industry, 810–813 pensions, 354 Industry comparables, 442 Infectious diseases, 546–548 Inflation,
556, 673 Information, value of, 810–817 Information security intrusion risk management, 329–348 attack models, 332–339 attack scenarios, 339–342 environmental details, 331–332 financial impact,
344–346 investment decisions, 346–348 Inlicensing drug deal structuring, 289–290 Input assumptions. See Assumptions Installation, software, 4–5 Insurance asset liability management model, 349–354
Integers decision variables, 62
Integrated risk analysis: forecasting and, 460–462 Monte Carlo simulation and, 463–464 optimization and, 465, 468–472 Real Options analysis and, 463, 465–468 Intellectual property, 742,
745–747, 771, 804 Interest payments, 141–144, 534–537 Interest rates: debt analysis and, 145 floating versus fixed, 479–480 inverse floater bonds and, 407–413 mean-reverting, 138–140, 147–152, 673,
690 premiums on, 125–126, 131–132 risk and, 472–473 term structure of, 673, 674–675, 684, 690–691 volatility of, 139, 422–423, 679–680 Intermediate Node Equations, 425–430, 698–699, 703 Internal
credit risk rating model, 133–134 Internal optimization model, 169 Internal rate of return, 146, 344, 463, 542, 562 Internal ratings-based approach, 293 Interpolation, 258–263, 674–677, 684–685
Inventory, 169–172, 174–175, 366–371, 637–638 Inverse cumulative distribution function (ICDF), 54–56, 579 Inverse floater bonds, 407–413 Inverse gamma options, 215–216 Investments: fixed income,
472–473 information security and, 346–348 portfolio allocation, 372 return on, 123–124, 456–459, 542–545 simulations and, 542–545 staged-gate, 292, 802, 815, 818 J Curves, 254–256 J-S Curves, 76. See
also S-growth curves Jump-diffusion, 106, 216–217, 245, 774–776 Kendall’s tau, 19 KMV, 295, 437 Kruskal-Wallis test, 612–614 Kurtosis, 217 Kusiatin, Uriel, 287, 291 Languages, 696 Lattice Maker
module, 714–715 Lattices. See also Binomial lattices; Real Options Super Lattice Solver (SLS) multinomial, 695, 705, 707–709, 768, 795–797 options-adjusted spreads, 420–421 pentanomial, 709, 767–769
quadranomial, 709, 774–776 trinomial, 705, 708–709, 777–779, 795–797 Law of Large Numbers, 40, 624 Lease versus buy valuation, 631–633 Left-tailed hypotheses tests, 591, 594 Leptokurtic options,
217–218 Liabilities, 349–354
Lilliefors test, 613–615 Linear interpolation, 258–263 Linear optimization, 63 Linear regression, 103–104, 240–241, 394–399 Linear trend detection, 119 Loans, 634–639, 642–643. See also Credit
analysis Logarithmic cash flow returns approach, 664–666, 681 Logarithmic present value returns, 665, 666–669, 681 Logistic S-growth curves, 76, 264–266 Lookback options, 188–190, 218–222 Loss given
default, 301–303 Lottery numbers, winning, 84–88 Lower barrier options, 725–727, 818 MAD (mean absolute deviation), 284 Management assumptions and guesses, 666, 681 Manufacturing, 170–172, 287–288,
801–809 MAPE (mean absolute percent error), 284, 286 Market research, 159–164, 167, 802–803 Market share, 267–268 Market-traded instruments, 432 Market uncertainties, 810, 814 Market values: of
assets, 137–138 of debt, 147–148 of interest rate risk, 139 Markov chains, 77, 267–268 Markowitz efficient frontier optimization procedure, 63–64, 70, 362, 367 Mathematical Integration Approximation
Model, 93–96 Maturity, 76, 125, 127, 129, 130, 141, 145–146, 149, 178, 180, 183, 186, 190, 191, 193, 196, 197, 200–203, 204, 213, 218, 223–224, 226, 232, 234, 238, 264–265, 295, 405, 407, 414, 417,
420, 424, 426, 430, 439, 487, 535, 652, 673, 679–680, 690, 697, 699, 705, 715, 716, 719, 732, 737–739, 742, 743, 748, 756, 763, 771, 782, 821–823 Matrix: regret, 165 variance-covariance, 474–475 (see
also Covariance) Maximin analysis, 164–165, 168, 222–223 Maximum likelihood estimation (MLE), 76–77, 436 McKinsey discounted cash flow model, 640 Mean: hypotheses tests and, 583, 594, 596–598, 599,
600, 602 median versus, 92 Mean absolute deviation (MAD), 284 Mean absolute percent error (MAPE), 284, 286 Mean-reversion: forecasting/regression diagnostics and, 106, 245 of interest rates, 138–140,
147–152, 673, 690 options, 777–779 trinomial lattices and, 795
Mean-squared error (MSE), 284 Median, 92 Media streaming, 329 Meeting, probability of, 109–111 Merton models. See also Generalized Black-Scholes-Merton model of debt analysis, 147–148 internal
options (private company), 441 market options (industry comparable), 442 MG1 single arbitrary queuing model, 177, 355 M/G/k blocked queuing model, 177, 355 Micronumerosity, 46, 101, 240, 242 Military
portfolio, 379–385 Minimax analysis, 165, 168, 222–223 Mixed decision variables, 62 MLE (maximum likelihood estimation), 77, 436 Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis,
Stochastic Forecasting, and Portfolio Optimization (Mun), 2, 12, 248, 269, 274, 315, 360, 365, 511 Modeling toolkit software, 1–2 Modified binomial lattices, 407–413, 420–421 Money, time value of,
562–570 Monitoring periodicities, 191–192, 200, 233–234 Monte Carlo simulations: asset liability management and, 350 banking and, 304, 306–312, 316 basic simulation model and, 510–513 biotech
industry and, 288, 289, 291 continuous portfolio allocation and, 359–360 correlations and, 20–22, 529–530 decision analysis and, 156–158, 161–163, 168, 171, 178 discrete project selection and, 364
information security and, 331, 333, 342, 346 integrated risk analysis and, 463–464 investment decisions and capital budgeting and, 542 optimization and, 62–63 queuing models and, 357 retirement
funding and, 557–558 Risk Simulator and, 2–8 running, 6–16 surgical success rates and, 518–520 valuation model and, 644 value at risk and, 306–312, 647–650, 653 Moody’s, 133, 295, 437 Mortgages,
141–144 MSE (mean-squared error), 284 Multicollinearity, 51, 106, 108, 245–246 Multidimensional simulations, 552–555 Multinomial Lattice Solver, 695, 705, 707, 768, 795–797 Multiple Asset or Multiple
Phased module, 695, 704–705, 706, 707 Multiple assets competing options, 779–781 Multiple-phased complex sequential compound options, 787–789, 790 Multiple-phased sequential compound options,
786–787, 788
Multiple-phased simultaneous compound options, 793–794 Multiple regression modeling, 248–253, 269–270 Multiple variable distributional fitting, 682–684, 685, 686 Multivariate regression, 77
Mutual exclusivity of options, 781–782 Negative binomial distributions, 548–552 Nelson-Siegel (NS) interpolation model, 676–677 Nested combinatorial options, 781–782, 823 Net present value and
discounted cash flow analysis, 344–346 Nonlinear extrapolation, 77, 118, 258–263, 271–272 Nonlinearity, 46–47, 241–242 Nonlinear optimization, 63 Nonlinear rank correlation charts, 503 Nonlinear
tornado and sensitivity charts, 503–509 Nonparametric bootstrap simulations, 587–589 Nonparametric correlations, 19 Nonparametric hypotheses tests, 607–618 Nonstationarity, 120, 122 Normal
distributions, 79–86, 574–576, 577 Normality of errors: forecasting/regression diagnostics and, 47, 49, 104–106, 243, 245 statistical analysis and, 113, 116–117, 119 NS (Nelson-Siegel) interpolation
model, 676–677 Objectives, 62–63 Oil and gas industry, 810–811 One-variable tests: T-tests, 591–594, 595 Wilcoxon signed-rank test, 616–618 Z-tests, 594, 596–597 Operational risk, 356–358, 653–656
Opportunity costs, 165 Optimal pricing with elasticity, 386–390 Optimal trigger values, 741, 759, 814–817 Optimization. See also Dynamic optimization; Static optimization asset allocation
optimization model, 358–363 banking and, 315–319 with continuous decision variables, 64–70 continuous portfolio allocation, 356–361 with discrete integer variables, 70–73 discrete project selection,
362–366 examples of, 58–62 harvest model, 390–394 integrated risk analysis and, 465, 468–472 internal optimization model, 169 inventory optimization, 366–371 investment portfolio allocation, 372
Markowitz efficient frontier procedure, 63–64, 70, 362, 367 methods of, 63–64 military portfolio and efficient frontier, 380–385 module, functions of, 3
optimal pricing with elasticity, 386–390 ordinary least squares, 394–399 portfolio, 56–70 simulation-optimization, 63, 315, 359, 360, 364 stochastic, 63–64, 313–315, 400, 404 stochastic
portfolio allocation, 400–404 terminology of, 60–66 value at risk and, 317–319, 647–650 Options. See also American options; Barrier options; Bermudan options; Call options; Customized options;
Employee stock options; European options; Exotic options; Put options; Real Options entries abandonment, 739–748, 763–767 Asian lookback, 188–190 asset or nothing, 190–191 average, 188–189 barrier,
158, 168, 191–192, 200–201, 213–214, 233–234, 725–727, 728–730, 731–733, 818 binary digital, 193–194 cash or nothing, 195 chooser, 196–197, 228, 763–767, 770–773 collar, 224–225 commodity, 198
compound, 223–224 contraction, 748–755, 763–767, 803–804, 806, 808 contract versus futures contract, 212 credit spread, 131–132 currency, 199–200, 205–207, 487–488 on debt, 422–423 deferment, 810–813
delta portfolio, 651–652 dual variable rainbow, 767–769 embedded, 350–351, 420 exchange assets, 203 expansion, 756–767, 804, 806 extreme spreads, 204–205 fairway, 226–227, 405–406 fixed strike,
218–220 floating strike, 220–222 foreign equity, 205–208 foreign exchange, 182–183, 199–200, 205–206 foreign takeover, 209 forward start, 210 futures, 184–185, 211–212, 229 gap, 212–213 index,
214–215 inverse gamma, 215–216 jump-diffusion, 106, 216–217, 245, 774–776 leptokurtic, 217–218 lookback, 188–190, 218–222 mean-reversion, 777–779 multiple assets competing, 779–781 mutual exclusivity
of, 781–782 nested combinatorial, 781–782, 823 path-dependent/path-independent, 781–782
payoff values and, 414–416 perpetual, 225 plain vanilla, 424–431 quanto, 208 with sensitivity, 180–181 sequential compound, 781–791, 815, 817 simultaneous compound, 791–794 skewed, 217–218
supershare, 230 switching, 817–820 time switch, 231 two-asset, 212, 222–223, 233–236 uneven dividend payments, 237 writer extendible, 238 Options-adjusted spreads lattices, 420–421 Options analysis:
binary digital instruments, 405–406 on debt, 422–423 inverse floater bond, 407–413 options-adjusted spreads lattices, 420–421 plain vanilla options, 424–431 trading strategies, 413–419 Ordinary least
squares, 394–400 Outcomes distribution, 79–81 Outcomes probabilities, 153, 159–161 Outliers, 46, 92, 103, 240–242 Output forecasts. See Forecast(s) Pairwise correlations, 525–530 Parametric
correlations, 19 Path-dependent/path-independent options, 781–782 Payoff values: decision analysis and, 153, 157, 159–161, 168, 172–173 exotic options and, 218–222 options and, 414–416 PDF
(probability density functions), 54–56, 573 Pearson’s correlation coefficient, 19 Pearson’s product moment correlations, 51–52, 108–109, 246 Pentanomial lattices, 709, 767–769 Periodicities,
monitoring, 191–192, 200, 233–234 Perpetual options, 225 PERT (program evaluation review technique), 446–453 Perturbations: dynamic versus static, 25, 29–32 sensitivity analysis and, 496–498, 504–505
Pharmaceutical development, 814–817 Plain vanilla options, 424–431 PMF (probability mass functions), 54–56, 673 Poisson distributions: Central Limit Theorem and, 83 industry applications and, 342
queuing models and, 176–177, 354–356 Six Sigma quality control and, 573–574, 575, 576 Poisson jump-diffusion process, 216 Population variance, 610
Portfolio allocation: banking and, 313–314 continuous, 356–361 investment, 372 stochastic, 399–404 Portfolio efficient frontier, 382, 385–386, 470–472 Portfolio optimization. See Optimization
Portfolio risk return profiles, 474–476 Precedents, 496–498, 504–505 Precision control, 18–22 Preferences, run, 515 Prices. See also Black-Scholes option pricing model capital asset pricing model,
320 credit risk analysis and, 129–130 debt analysis and, 138–140, 147–148, 151–152 elasticity and, 386–390 of loan fees model, 642–643 quantity and, relationship between, 539–540 strike, 218–224
Private companies, 441 Probability. See also Default probabilities of meeting, 109–111 outcomes, 153, 159–161 statistical, 571–576 steady state, 268 to volatility, 666, 668 Probability density
functions (PDF), 54–56, 573 Probability mass functions (PMF), 54–56, 573 Process Capability Index (Cpk), 627 Profiles: portfolio risk return, 474–476 risk, 163–164 simulation, 6, 8–9, 515–516 Profit
cost analysis, 135–136 Program evaluation review technique (PERT), 446–453 Projectile motion, 96–99 Project management: cost estimation model, 443–446 critical path analysis, 446–453 project timing,
453–455 Proportions, 596–597, 602–606 Protective put positions, 417, 821–826 Public companies, 437–440 Purchase. See Acquisitions; Buy decisions Put options: American, dividends and, 736–738 call and
put collar strategy, 224–225 debt analysis and, 149–150 definition of, 211 mean-reversion and, 777–779 plain vanilla, 424–431 protective, 417, 821–826 put on call options, 223 simple, 795–799
Quadranomial lattices, 709, 774–776 Qualitative forecasting, 72 Quality control. See Six Sigma quality control
Quantitative forecasting, 74 Quanto options, 208 Queuing models, 176–178, 354–356 Rainbow options, 763–765 Random walk, 49, 51, 106, 216, 245 Range accruals, 226–227 Rank correlation chart,
31–32, 503 Ratios: calendar-days, 232 return to risk, 129–130, 356–361, 400–401, 470–472 Sharpe, 62, 63, 66, 73–74 trading-days, 232 weighting of, 123–124 Real estate, commercial, 456–459 Real
Options analysis, 158, 168, 353, 459, 463, 465–468 Real Options Analysis: Tools and Techniques, 2nd Edition (Mun), 425, 562, 681, 696–698 Real Options Analysis Toolkit, 697 Real Options Strategic
Case Studies: build or buy decision, 801–809 deferment options, 810 farm-outs, 810–813 optimal trigger values, 814–817 switching options, 817–820 value of information, 810–813, 814–817 warrant
valuation, 821–826 Real Options Super Lattice Solver (SLS): abandonment options and, 739–748 American options and, 697–699, 715–717 Bermudan options and, 716–718 chooser options and, 763–767, 770–773
contraction options and, 748–755 dual variable rainbow options and, 767–769 European options and, 697–699 exotic options and, 178–182, 186, 226, 770–773 expansion options and, 756–762 forecast module
of, 3 integrated risk analysis and, 463, 465 introduction to, 694–715 jump-diffusion options and, 774–776 Lattice Maker module of, 714–715 mean-reversion options and, 777–779 Multinomial Lattice
Solver and, 695, 705, 708–709, 768, 795–798 Multiple Asset or Multiple Phased module of, 695, 704–705, 706, 707 multiple assets competing options and, 779–781 optimization module of, 3 plain vanilla
call and put options and, 424–431 Risk Simulator and, 4 sequential compound options and, 781–791 simple call and put options and, 795–798 simulation module of, 2–3 simultaneous compound options and,
791–794 Single Asset and Single Phased module of, 695, 697–704
SLS Excel Functions module of, 696, 712–714 SLS Excel Solution module of, 695, 709–711 Recruitment budget, 548–555 Reference assets, 236 Regression: bivariate, 394–400 diagnostic tool, 42,
44–52, 100–109, 239–247, 270 multiple, modeling, 248–253, 269–270 multivariate, 77 Regret matrix, 165 Relative returns, 89–90 Retirement funding, 556–559 Return(s): on investments (ROI), 123–124,
456–459, 542–545 logarithmic cash flow, 664–666, 681 logarithmic present value, 665, 666–667, 681 relative and absolute, 89–90 to risk ratio, 129–130, 356–361, 400–401, 470–472 (see also Sharpe
ratio) risk return profiles, 474–476 Rho, call, 492–494 Right-tail capital requirements, 657–661 Right-tailed hypotheses tests, 591 Risk: analysis, 460–473, 474–476, 803–809, 810 asset-liability,
349–350 capital analysis, 293 debt analysis and, 138–140, 147–148, 151–152 information security intrusion risk management, 329–348 operational, 354–356, 653–656 preferences, 172 profile, 163–164
return profiles, 474–476 returns to risk ratio, 129–130, 356–361, 400–401, 470–472 (see also Sharpe ratio) tolerance levels, 347–348 Risk-free rate volatility, 676–685, 708 Risk hedges: delta-gamma
hedges, 477–478 delta hedges, 214, 478 fixed versus floating rates, 479–480 foreign exchange cash flow model, 477–482 foreign exchange exposure, 487–491 Risk-neutral, 690, 775, 795 Risk Simulator,
introduction to, 2–7 RMSE (root mean-squared error), 284 ROI (return on investment), 123–124, 456–459, 542–545 Roulette wheel, 560–561 Runs test, 616–617 Salvage values, 735–744 Sample size
determination, 623–626 Scenario analysis, commercial real estate, 458–459 Scholes. See Black-Scholes closed-form model; Black-Scholes option pricing model; Generalized Black-Scholes-Merton model
SC (Schwarz Criterion), 277 Seasonality, 272, 283 Seasonal lending trial balance analysis, 637–639 Security, information intrusion risk management, 329–348 Seed values, 513–514 Sensitivity:
analysis, 25, 29–33, 505 charts, 496, 501–505, 508–509 debt sensitivity models, 145–146 Greeks, 491–495 options with, 180–181 tables, 25, 27 tornado analysis and, 25, 496–501, 503–507 Sequential
compound options, 781–791, 815, 818 Serial correlations, 276–277 S-growth curves, 76, 264–266 Sharpe ratio, 62, 63, 66, 74 Ships in the Night, 109–111 Sigma sample, 624 Simple call and put options,
795–797 Simple chooser options, 196, 228 Simple put options, 791–793 Simulation-optimization, 63, 315, 359–360, 364 Simulations: basic simulation model, 510–516 correlation and, 525–530 data fitting,
531–534 debt repayment and amortization, 534–537 demand curve and elasticity estimation, 538–541 infectious diseases, 546–548 investment decisions and capital budgeting, 542–545 module, 2–3
multidimensional, 552–555 profile, 6, 8–9, 515–516 recruitment budget, 548–556 reports, 42–43 retirement funding with VBA macros, 556–559 roulette wheel, 560–561 surgical success rates, 517–525 time
value of money, 562–570 Simultaneous compound options, 791–794 Single Asset and Single Phased module, 695, 697–704 Six Sigma quality control: capability measures, 627–631 hypotheses tests (advanced
techniques), 590–623 hypotheses tests in empirical simulations, 583–587 hypotheses tests in theoretical situations, 576, 578–582 nonparametric bootstrap simulations, 587–589 sample size determination
and design of experiments, 623–626 statistical probabilities and, 571–576 Skewed averages, 91–93 Skewed distributions, 217
Skewed options, 217–218 Skewness distributions, 36–38 SLS Excel Functions module, 696, 712–714 SLS Excel Solution module, 695, 709–712 Software requirements, 3–6 Spearman’s rank correlation
(Spearman’s R), 19, 22, 52, 108–109, 247 Specification levels, 627–628 Sphericity of errors, 104, 243 Spider charts, 25, 26–28, 33, 496–501, 504–506 Spline extrapolation, 77–78, 258–263 Spot curves,
472–473 Spot rates, 678 Spot yields, 684 Spreads: bearish/bull, 417 credit, 125–132, 434 extreme spreads options, 204–205 on futures options, 229 Staged-gate investment process, 292, 802, 815, 817
Standard deviations, 529, 580–583, 624 Standard & Poor’s 500, 214, 236 Static covariance method, 661–663 Static versus dynamic perturbations, 25–26, 29–32 Stationarity, 106, 120, 122 Statistical
analysis tools, 52–57, 111–122 Statistical capability measures (Cpk), 627–628 Statistical confidence intervals, 580–583 Statistical probabilities, 571–576 Static optimization: continuous portfolio
allocation, 358–359 discrete project selection and, 364 military portfolio and, 382 simulations and, 63–64 stochastic portfolio allocation and, 401–402 Statistics: descriptive, 112–113, 114 forecast,
14 tab, 511, 515 Theil’s U, 286 Steady state probability, 268 Stochastic optimization: banking and, 315 continuous portfolio allocation and, 360 description of, 63–64 discrete project selection and,
364 stochastic portfolio allocation and, 400, 404 Stochastic portfolio allocation, 400–404 Stochastic processes: forecasting/regression diagnostics and, 49–50, 78, 106–107, 245–246, 273–275
statistical analysis and, 113, 120, 122 Straddle positions, 417 Strangle positions, 417 Strategy trees, 802, 803, 811, 815, 818 Strike prices, 218–224
Suboptimal exercise, 720–724 Success rates, surgical, 517–525 Super Lattice Solver (SLS). See Real Options Super Lattice Solver (SLS) Supershare options, 230 Surgical success rates, 517–525
Switching options, 817–820 Terminal Node Equations, 425–430, 698–699, 702–703 Theil’s U statistic, 286 Theta, call, 492, 494 Time horizon, 303, 344 Time-series analysis, 78, 113, 119–122, 257–258,
283–286 Time-series ARIMA, 276–282 Time-series data: analytics and, 90–91 diagnostic tools and, 47, 49–52 extrapolation and interpolation of, 684–689 forecasting and, 241, 245, 258–263, 276–282
regression diagnostics and, 103–104, 106 volatility and, 668 Time switch options, 231 Time value of money, 562–570 Tornado analysis, 22–33, 458, 496, 503–506 Trading-days ratio, 232 Trading
strategies, 413–419 Traveling financial planner, 58–62 Trend analysis, 113, 118, 119, 283 Trial balances, 637–638 Trials, 8, 9, 12–13, 22, 34, 37, 54, 58, 59, 60, 63, 81, 85, 87, 267, 315, 316, 374,
402, 514, 515, 518, 521, 523, 525, 548–549, 552, 558, 560, 571–572, 573, 588 Trial version of software, 1–2 Triangular distribution, 446 Trigger values, 741, 759, 814–817 Trinomial lattices, 705,
708–709, 774–779, 795–797 Truncation, 12 T-tests, 591–594, 595, 597–601 Two asset options, 212, 222–223, 233–236 Two-phased sequential compound options, 783–786, 815, 817 Two-phased simultaneous
compound options, 791, 793 Two-tailed hypotheses tests, 591 Two-variable tests: F-tests, 606–607 T-tests, 597–601 Wilcoxon signed rank tests, 616, 618 Z-tests, 602–606 Uncertainty. See also Monte
Carlo simulations debt ratings and spread under, 131–132 industry applications and, 334 market, 810, 814 optimization under, 59, 62 private, 810
Underlying asset, 127, 139, 178, 182, 184, 188, 189, 190, 191, 193, 200, 205, 206, 208, 211–212, 214, 216, 217, 219, 220, 221, 222, 226, 229, 233, 235–236, 237, 295, 405, 417, 422, 424, 439,
477–478, 491–493, 695, 697, 704, 705, 708, 709, 719, 731, 737, 739, 767–769, 774–775, 777–778, 779–781, 786, 795 Uneven dividend payments options, 237 Unit capability measures, 627–628 Upper barrier
options, 728–730 U.S. Treasury securities, 680–689 Utility analysis, 172–173, 321–329 Valuation. See also Payoff values of break-even inventory, 637–639 of buy versus lease, 631–633 of classified
loan borrowing base, 634–636 of convertible warrants, 821–826 of debt, 129–132, 147–148, 151–152 ESO Valuation Toolkit, 717–718, 721–722 expected values and, 156, 161, 163–164 of firm in financial
distress, 640–641 of information, 810–817 market values and, 137–139, 147–148 optimal trigger values and, 741, 759, 814–817 of pricing loan fees model, 642–643 salvage, 739–748 seed, 513–514 of time
value of money, 562–570 valuation model, 644–646 Valuation lattice, 704–705, 709, 712, 781, 782 Value at Risk, 303–312 economic capital and, 313–314 foreign exchange exposure and, 487, 489 Monte
Carlo simulations and, 306–312, 647–650, 653 operational and credit risk and, 653–656 optimization and, 316, 647–650 options delta portfolio and, 651–652 right-tail capital requirements and, 657–661
static covariance method and, 661–663 structural models of, 304–306 Valuing Employee Stock Options (Mun), 717 Variables. See also Decision variables; One-variable tests; Two-variable tests custom,
703 discrete integer, 70–73, 362–366 distributional fitting and, 33–35, 682–684, 685, 686 dual, rainbow options, 767–769 Variance-covariance matrix, 474–475. See also Covariance Variance Inflation
Factor (VIF), 51, 108, 246–247 Variance(s): analysis of (ANOVA), 610, 611, 618–623 charts, 32
hypotheses tests and, 584, 598–601, 606, 607 population, 610 Variation, percent explained, 503 Vasicek models (Oldrich Vasicek), 149–152, 690–691 VBA (Visual Basic for Applications), 556–560 Vega,
call, 492, 494–495 Vesting, 716–718, 723–724, 821–826 VIF (Variance Inflation Factor), 51, 108, 246–247 Violations of assumptions, 46, 101, 240 Visual Basic for Applications (VBA), 556–560
Volatility: of assets, 137–138, 222 barrier options and, 726 computations, 664–672 EWMA, 664, 665, 668 GARCH, 664, 665, 668–672, 681 implied, 663 of interest rates, 139, 422–423, 679, 680 inverse
floater bonds and, 407, 410 logarithmic cash flow returns approach, 664–666, 681 logarithmic present value returns approach, 665, 666–669, 681, 759 management assumption approach, 664–666, 681 to
probability, 666, 668 Real Options SLS and, 712 risk-free rate, 680–689, 712 sensitivity analysis and, 505 simulations and, 542–545 value at risk and, 661 Warrants, valuation of, 821–826 Weighted
least squares, 436 Weighting of ratios, 123–124 Wilcoxon signed-rank test, 616, 618 Wong, Victor, 349 Writer extendible options, 238 Xi, call, 492–495 Yield curves: asset liability management and,
349–354 Cox-Ingersoll-Ross model, 673–674 curve interpolation, 674–677 debt analysis and, 138–140, 146, 151–152 forward rates from spot rates, 678 term structure of volatility, 679–680 U.S. Treasury
risk-free rates and cubic spline curves, 680–689 Vasicek model, 690–691 Z-scores, 580 Z-tests, 436, 594, 596, 597, 602, 606
For more information about the DVD, see the About the DVD section on page 995. CUSTOMER NOTE: IF THIS BOOK IS ACCOMPANIED BY SOFTWARE, PLEASE READ THE FOLLOWING BEFORE OPENING THE PACKAGE This
software contains files to help you utilize the models described in the accompanying book. By opening the package, you are agreeing to be bound by the following agreement: This software product is
protected by copyright and all rights are reserved by the author, John Wiley & Sons, Inc., or their licensors. You are licensed to use this software on a single computer. Copying the software to
another medium or format for use on a single computer does not violate the U.S. Copyright Law. Copying the software for any other purpose is a violation of the U.S. Copyright Law. This software
product is sold as is without warranty of any kind, either express or implied, including but not limited to the implied warranty of merchantability and fitness for a particular purpose. Neither Wiley
nor its dealers or distributors assumes any liability for any alleged or actual damages arising from the use of or the inability to use this software. (Some states do not allow the exclusion of
implied warranties, so the exclusion may not apply to you.) | {"url":"https://vdoc.pub/documents/advanced-analytical-models-over-800-models-and-300-applications-from-the-basel-ii-accord-to-wall-street-and-beyond-79gb9l8nnhr0","timestamp":"2024-11-12T05:55:43Z","content_type":"text/html","content_length":"785097","record_id":"<urn:uuid:a9c1ae67-3168-47e4-9dca-c53c0cc01fb8>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00757.warc.gz"} |
CoCreate Modeling: Solving the subset sum problem
In a recent discussion in the German user forum, a customer was looking for ways to solve a variation of the subset sum problem (which is a special case of the knapsack problem). This was needed to find the right combination of tool parts to manufacture a screw. Those tool parts are available in a wide variety of diameters, and the task at hand is to find a set of these tools which, when combined, can be used to manufacture a screw of a given size. Abstracting from the manufacturing problem, the problem can be stated like this: Given n items of lengths l[1], l[2], ..., l[n] and a total length of L, find all subsets of items which add up to L. (For example, for item lengths 3, 5, 8, and 9 and L = 17, the only solutions are {8, 9} and {3, 5, 9}.) Even though the description sounds fairly simple, the problem is surprisingly tough. In fact, it is NP-complete, which is computer science gobbledigook for "d*mn tough, man!". In practice, it means that for small values of n, it's easy to find an algorithm which tests all permutations and lists all subsets in a reasonable amount of time. However, as the size of the array of item lengths grows, the computation time may grow exponentially, and chances are you'll never see the day when it ends (cf. Deep Thought). The following simple Lisp algorithm is of that nature: It performs reasonably well for small arrays, but will quickly wear your patience for larger arrays. To see how the algorithm behaves, I created a version of the code which only counts the number of found solutions; then I ran it, compiled and interpreted:
Take those results with arbitrary amounts of salt; the code (see below) initializes the test array with random numbers and will therefore always vary a little. But the results are reliable enough to
show that you don't really want to use this algorithm for large arrays...
(defvar solutions 0 "Number of solutions found so far")
(defvar numbers nil "Array of item lengths to choose from")
(defvar flags nil "flags[i] is t if numbers[i] is part of the current subset")
(defun found-solution()
  "Called whenever the algorithm has found a solution"
  (let ((total 0))
    (format t " ")
    (dotimes (i (length numbers))
      (when (aref flags i)
        (incf total (aref numbers i))
        (format t "~A " (aref numbers i))))
    (format t " => ~A~%" total)
    (incf solutions)))
(defun find-solutions(k target-sum callback)
"Core backtracking algorithm"
(when (zerop target-sum)
(funcall callback)
(return-from find-solutions))
(unless (= k (length numbers))
(let ((nk (aref numbers k)))
(when (>= target-sum nk)
;; try subtracting numbers[k] from target-sum
(setf (aref flags k) t)
(find-solutions (+ 1 k) (- target-sum nk) callback)
(setf (aref flags k) nil)))
;; then recurse without subtracting
(find-solutions (+ 1 k) target-sum callback)))
(defun find-subset-sum(target-sum)
"Set up and run backtracking algorithm based on 'numbers' array"
(setf flags (make-array (list (length numbers))))
(setf solutions 0)
(find-solutions 0 target-sum #'found-solution)
(format t "Found ~A different solutions.~%" solutions))
(defun subset-sum-test(size)
"Test subset sum algorithm using random numbers"
(let* ((total 0) target-sum)
;; init numbers array with random values up to 1000
(setf numbers (make-array (list size)))
(dotimes (i size)
(setf (aref numbers i) (random 1000))
(incf total (aref numbers i)))
(setf target-sum (floor (/ total 2))) ;; use half the total as target sum
(format t "Now listing all subsets which sum up to ~A:~%" target-sum)
(find-subset-sum target-sum)))
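To try the code, load it into a Lisp listener and call the test function, for example like this (output varies from run to run because the input data is random):

;; build an array of 10 random lengths and list all subsets
;; which add up to half of the total:
(subset-sum-test 10)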
The core backtracking algorithm is in find-solutions. It will recursively exhaust all subsets. When it finds a subset which adds up to target-sum, it will call the callback function - this function can either simply increase a solution counter, report the current solution to the user, or store it somewhere for later retrieval. In the test example above, the callback function is found-solution, which increments a solution counter and prints the current solution. To test the code, run subset-sum-test, providing an array size. This function will create a numbers array of that size and initialize it with random values; it will also pick a random target sum. In a real application, you would replace subset-sum-test with a function which gets the array data from somewhere (for example, from a tools database as in the customer's case), and lets the user pick a target sum.
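To illustrate the "store it somewhere for later retrieval" variant, here is a minimal sketch of such a callback (not part of the original listing; collect-subset and collected-solutions are made-up names):

(defvar collected-solutions nil
  "List of recorded solutions; each solution is a list of item lengths")

(defun collect-subset ()
  "Callback which records the current subset for later retrieval"
  (let ((subset nil))
    (dotimes (i (length numbers))
      (when (aref flags i)
        (push (aref numbers i) subset)))
    (push subset collected-solutions)))

;; usage sketch: (find-solutions 0 target-sum #'collect-subset)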
-- ClausBrod - 01 Mar 2006

The aforementioned customer would actually have preferred a solution in CoCreate
Drafting's macro language. However, this macro language isn't really a full-blown programming language (even though it is perfectly adequate for almost all customization purposes). For instance, its
macros cannot return values, the language doesn't have an array type, and the macro expansion stack (i.e. the depth of the macro call tree) has a fixed limit - which pretty much rules out non-trivial
amounts of recursion. While I was considering my options, I also fooled around with VBA, which resulted in the code presented below. I'm not at all proficient in VBA, so I'm sure the implementation
is lacking, but anyway - maybe someone out there finds it useful nonetheless.
Dim solutions As Long
Dim flags() As Boolean
Dim numbers() As Long
Sub findSolutions(k As Long, targetSum As Long)
If targetSum = 0 Then
' we found a solution
solutions = solutions + 1
Exit Sub
End If
If k <= UBound(numbers) Then
If (targetSum >= numbers(k)) Then
flags(k) = True
' try first by subtracting numbers[k] from targetSum
Call findSolutions(k + 1, targetSum - numbers(k))
flags(k) = False
End If
' now try without subtracting
Call findSolutions(k + 1, targetSum)
End If
End Sub
Sub subsetsum()
Dim targetSum As Long
Dim i As Long
Dim arraySize As Long
arraySize = 25
ReDim numbers(0 To arraySize - 1)
ReDim flags(0 To arraySize - 1)
' initialize numbers array with random entries
For i = 0 To arraySize - 1
numbers(i) = Int(1000 * Rnd + 1)
flags(i) = False
targetSum = targetSum + numbers(i)
Next i
targetSum = Int(targetSum / 2)
solutions = 0
Call findSolutions(0, targetSum)
MsgBox "Found " + Str(solutions) + " solutions."
End Sub
Let's see - we recurse one level for each entry in the array, so with a maximum array size of 30 (the customer said he was considering a table of 20-30 values), the recursion depth should never
exceed 30. That's not a lot, in fact, so would this still exceed the macro stack thresholds in CoCreate Drafting? Oh, and by the way, the recursive function doesn't even try to return values, so the
lack of return values in the macro language isn't a real obstacle in this case! I couldn't resist and just had to try to translate the algorithm into CoCreate Drafting macro language:
{ Description: Subset sum algorithm }
{ Author: Claus Brod }
{ Language: CoCreate Drafting macros }
{ (C) Copyright 2006 Claus Brod, all rights reserved }
DEFINE Found_solution
  LOCAL I
  LOCAL Solution
  LOCAL Total
  LOCAL Nk
  { display current solution }
  LET Subset_sum_solutions (Subset_sum_solutions+1)
  LET I 1
  LET Solution ''
  LET Total 0
  WHILE (I <= Subset_sum_arraysize)
    IF (READ_LTAB 'Flags' I 1)
      LET Nk (READ_LTAB 'Numbers' I 1)
      LET Total (Total + Nk)
      LET Solution (Solution + ' ' + STR(Nk))
    END_IF
    LET I (I+1)
  END_WHILE
  DISPLAY_NO_WAIT (Solution + ' sum up to ' + STR(Total))
END_DEFINE
DEFINE Find_solutions
  PARAMETER K
  PARAMETER Target_sum
  LOCAL Nk
  IF (Target_sum = 0)
    { we found a solution, display it }
    Found_solution
  ELSE_IF (K <= Subset_sum_arraysize)
    LET Nk (READ_LTAB 'Numbers' K 1)
    { The following optimization only works if we can assume a sorted array }
    IF ((Nk * (Subset_sum_arraysize-K+1)) >= Target_sum)
      IF (Target_sum >= Nk)
        { try first by subtracting Numbers[k] from Target }
        WRITE_LTAB 'Flags' K 1 1
        Find_solutions (K+1) (Target_sum-Nk)
        WRITE_LTAB 'Flags' K 1 0
      END_IF
      { now try without subtracting }
      Find_solutions (K+1) Target_sum
    END_IF
  END_IF
END_DEFINE
DEFINE Subset_sum
  PARAMETER Subset_sum_arraysize
  LOCAL Target
  LOCAL Random
  LOCAL I
  LOCAL Subset_sum_solutions
  LOCAL Start_time
  { Allocate Numbers and Flags arrays }
  CREATE_LTAB Subset_sum_arraysize 1 'Numbers'
  CREATE_LTAB Subset_sum_arraysize 1 'Flags'
  LET Target 0
  LET I 1
  WHILE (I <= Subset_sum_arraysize)
    LET Random (INT(1000 * RND + 1))
    LET Target (Target + Random)
    WRITE_LTAB 'Numbers' I 1 Random
    WRITE_LTAB 'Flags' I 1 0
    LET I (I+1)
  END_WHILE
  LET Target (INT (Target/2))
  DISPLAY ('Array size is ' + STR(Subset_sum_arraysize) + ', target sum is ' + STR(Target))
  { Sorting in reverse order speeds up the recursion }
  SORT_LTAB 'Numbers' REVERSE_SORT 1 CONFIRM
  LET Start_time (TIME)
  LET Subset_sum_solutions 0
  Find_solutions 1 Target
  DISPLAY ('Found ' + STR(Subset_sum_solutions) + ' solutions in ' + STR(TIME-Start_time) + ' seconds.')
END_DEFINE
Because CoCreate Drafting's macro language doesn't have arrays, they have to be emulated using logical tables, or LTABs. The solutions which are found are displayed in CoCreate Drafting's prompt line, which is certainly not the most ideal place, but it's sufficient to verify the algorithm is actually doing anything ;-) What you see above is a tuned version of the macro code I came up with initially. The "literal" translation from the original Lisp version took ages to complete even for small arrays; for
instance, searching for solutions in an array of 20 numbers took 95 seconds (using OSDD 2005). However, there are two simple optimizations which can be applied to the algorithm. First, the input
array can be sorted in reverse order, i.e. largest numbers first. This makes it more likely that we can prune the recursion tree early. This optimization itself improved runtimes by only 5% or so,
but more importantly, it paved the way for another optimization. Since we know that the numbers in the array are monotonically decreasing, we can now predict in many cases that there is no chance of
possibly reaching the target sum anyway, and therefore abort the recursion early. Example for an array of size 20:
• We are in the midst of the recursion, say, at recursion level 15. The array entry at index 15 contains the value 42.
• Let us further assume that Target_sum has already been reduced to 500 earlier in the recursion, i.e. the remaining entries in the array somehow have to sum up to 500 to meet the required subset sum.
• Because the values in the array are monotonically decreasing, we know that the entries 15 through 20 have a value of 42 at most. Assuming that all remaining entries are 42, we get a maximum
remaining sum of 42*6=252.
• This means we can prune the recursion tree at this point because there's no way that this recursion line will ever find a solution.
This second optimization improved runtimes tremendously; with an array size of 20, we're now down to 15 seconds (originally 95 seconds); however, it still takes more than 3000 seconds to find all
solutions in an array of size 30. By the way, all performance measurements mentioned here were taken on the same system, a laptop with a 1.7 GHz Celeron CPU. In real life, the
array will in fact contain floating-point values rather than integers. The algorithm doesn't change, but whenever you work with floating-point values, it's good to follow a few basic guidelines. In the case of the above macro code, instead of a comparison like
IF (Target = 0)
, you'd probably want to write something like
IF (ABS(Target) < Epsilon)
where Epsilon is a small value chosen to meet a user-defined tolerance (for example 0.001). -- 16 Mar 2006
This is taking me to places I didn't anticipate.
Over at codecomments.com, they are discussing solutions to the subset sum problem in Haskell, Prolog and Caml, if anyone is interested. (I sure was, and even read a tutorial on Haskell to help me understand what these guys are talking about.) -- 01 Apr 2006
I started to learn some Ruby, so here's a naïve implementation of the algorithm in yet another language (see also this blog entry):
$solutions = 0
$numbers = []
$flags = []

def find_solutions(k, target_sum)
  if target_sum == 0
    # found a solution!
    (0...$numbers.length).each { |i| if ($flags[i]) then print $numbers[i], " "; end }
    print "\n"
    $solutions = $solutions + 1
  elsif k < $numbers.length
    if target_sum >= $numbers[k]
      $flags[k] = true
      find_solutions k+1, target_sum-$numbers[k]
      $flags[k] = false
    end
    find_solutions k+1, target_sum
  end
end

def find_subset_sum(target_sum)
  print "\nNow listing all subsets which sum up to ", target_sum, ":\n"
  $solutions = 0
  (0...$numbers.length).each { |i| $flags[i] = false }
  find_solutions 0, target_sum
  print "Found ", $solutions, " different solutions.\n"
end

def subset_sum_test(size)
  total = 0
  target_sum = 0
  (0...size).each { |i| $numbers[i] = rand(1000); total += $numbers[i]; print $numbers[i], " " }
  target_sum = total/2
  find_subset_sum target_sum
end

subset_sum_test 25
-- 17 Apr 2006
The other day, I experimented with Python and thought I'd start with a quasi-verbatim translation of the subset sum code. Here is the result. Apologies for the non-idiomatic and naïve implementation.
import random
import array
import sys

numbers = array.array('i')
flags = array.array('c')
solutions = 0

def find_solutions(k, target_sum):
    global solutions
    if target_sum == 0:
        print " Solution:",
        for i in range(0, len(numbers)):
            if flags[i] != 0:
                print numbers[i],
        print
        solutions = solutions + 1
    elif k < len(numbers):
        if (numbers[k] * (len(numbers)-k+1)) >= target_sum:
            if target_sum >= numbers[k]:
                flags[k] = 1
                find_solutions(k+1, target_sum - numbers[k])
                flags[k] = 0
            find_solutions(k+1, target_sum)

def find_subset_sum(target_sum):
    global solutions
    global flags
    print "Subsets which sum up to %s:" % target_sum
    solutions = 0
    flags = [0] * len(numbers)
    find_solutions(0, target_sum)
    print "Found", solutions, "different solutions"

def subset_sum_test(size):
    global numbers
    total = 0
    print "Random values:\n ",
    for i in range(0, size):
        numbers.append(random.randint(0, 1000))
        total = total + numbers[i]
        print numbers[i],
    print
    numbers = sorted(numbers, reverse = True)
    target_sum = total/2
    find_subset_sum(target_sum)

subset_sum_test(15 if len(sys.argv) < 2 else int(sys.argv[1]))
See also A Subset Of Python. -- 30 Jun 2013
And here is an implementation in C#:
using System;

namespace SubsetSum
{
    class SubsetSum
    {
        private int[] numbers;
        private bool[] flags;

        private int findSolutions(int k, int targetSum, int solutions=0)
        {
            if (targetSum == 0) {
                Console.Write("  Solution: ");
                for (int i=0; i<numbers.Length; i++) {
                    if (flags[i]) {
                        Console.Write("{0} ", numbers[i]);
                    }
                }
                Console.WriteLine();
                solutions++;
            } else {
                if (k < numbers.Length) {
                    if ((numbers[k] * (numbers.Length - k + 1)) >= targetSum) {
                        if (targetSum >= numbers[k]) {
                            flags[k] = true;
                            solutions = findSolutions(k + 1, targetSum - numbers[k], solutions);
                            flags[k] = false;
                        }
                        solutions = findSolutions(k + 1, targetSum, solutions);
                    }
                }
            }
            return solutions;
        }

        public void solve() {
            Array.Sort(numbers, (x, y) => y - x); // sort in reverse order
            Array.Clear(flags, 0, flags.Length);
            int total = 0;
            Array.ForEach(numbers, (int n) => total += n);
            int solutions = findSolutions(0, total / 2);
            Console.WriteLine("Found {0} different solutions.", solutions);
        }

        SubsetSum(int size) {
            numbers = new int[size];
            Random r = new Random();
            for (int i = 0; i < size; i++) {
                numbers[i] = r.Next(1000);
                Console.Write("{0} ", numbers[i]);
            }
            Console.WriteLine();
            flags = new bool[size];
        }

        public static void Main(string[] args)
        {
            int size = args.Length > 0 ? int.Parse(args[0]) : 15;
            new SubsetSum(size).solve();
        }
    }
}
A naïve implementation in Delphi:
program subsetsum;

{$APPTYPE CONSOLE}
{$R *.res}

uses
  System.SysUtils,
  System.Generics.Collections,
  System.Generics.Defaults;

type
  TSubsetSum = class
  private
    FNumbers: TArray<Integer>;
    FFlags: TArray<Boolean>;
    function FindSolutions( aK: Integer; aTargetSum: Integer; aSolutions: Integer = 0 ): Integer;
  public
    procedure Solve( );
    constructor Create( aSize: Integer );
  end;

var
  vSize: Integer;
  vSubsetSum: TSubsetSum;

constructor TSubsetSum.Create( aSize: Integer );
var
  i: Integer;
begin
  SetLength( FNumbers, aSize );
  SetLength( FFlags, aSize );
  for i := 0 to aSize - 1 do
  begin
    FNumbers[i] := Random( 1000 );
    Write( FNumbers[i].ToString + ' ' );
  end;
end;

function TSubsetSum.FindSolutions( aK, aTargetSum, aSolutions: Integer ): Integer;
begin
  if ( aTargetSum = 0 ) then
  begin
    write( ' Solution: ' );
    for var i := 0 to Length( FNumbers ) - 1 do
      if FFlags[i] then
        write( FNumbers[i].ToString + ' ' );
    writeln;
    inc( aSolutions );
  end
  else if ( aK < Length( FNumbers ) ) then
  begin
    if ( ( FNumbers[aK] * ( Length( FNumbers ) - aK + 1 ) ) >= aTargetSum ) then
    begin
      if ( aTargetSum >= FNumbers[aK] ) then
      begin
        FFlags[aK] := True;
        aSolutions := FindSolutions( aK + 1, aTargetSum - FNumbers[aK], aSolutions );
        FFlags[aK] := False;
      end;
      aSolutions := FindSolutions( aK + 1, aTargetSum, aSolutions );
    end;
  end;
  Result := aSolutions;
end;

procedure TSubsetSum.Solve;
var
  vTotal: Integer;
  vSolutions: Integer;
begin
  { sort in reverse order }
  TArray.Sort<Integer>( FNumbers, TComparer<Integer>.Construct(
    function( const Left, Right: Integer ): Integer
    begin
      Result := Right - Left;
    end ) );
  vTotal := 0;
  for var vNumber: Integer in FNumbers do
    vTotal := vTotal + vNumber;
  vSolutions := FindSolutions( 0, vTotal div 2 );
  writeln( Format( 'Found %d different solutions.', [vSolutions] ) );
end;

begin
  vSize := 15;
  if ( ParamCount >= 1 ) then
    vSize := ParamStr( 1 ).ToInteger;
  vSubsetSum := TSubsetSum.Create( vSize );
  vSubsetSum.Solve( );
end.
-- 19 Jan 2021
Skewed Distribution: Definition, Examples - Statistics How To
Selected articles for topic: normal distribution right tail
Skewed Distribution: Definition, Examples - Statistics How To
A left-skewed distribution has a long left tail. Left-skewed distributions are also called negatively-skewed distributions. That's because there is a long tail in the negative direction on the number
line. The mean is also to the left of the peak.
A right-skewed distribution has a long right tail. Right-skewed distributions are also called positive-skew distributions. That's because there is a long tail in the positive direction on the number line. The mean is also to the right of the peak.
Website: http://www.statisticshowto.com
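As a quick, hands-on illustration of the definitions above, the snippet below (a hypothetical example, assuming NumPy and SciPy are available) draws a sample with a long right tail and checks that its skewness is positive and that its mean sits to the right of its median:

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=10_000)  # long right tail

print(skew(sample))                         # positive => right-skewed
print(np.mean(sample) > np.median(sample))  # True: mean pulled toward the tail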
What is skewness? - Definition from WhatIs.com
Skewness is asymmetry in a statistical distribution, in which the curve appears distorted or skewed either to the left or to the right. Skewness can be quantified to define the extent to which a
distribution differs from a normal distribution.
In a normal distribution, the graph appears as a classical, symmetrical "bell-shaped curve." The mean, or average, and...
Website: http://whatis.techtarget.com
Normal Distributions, Standard Deviations, Modality ...
Facebook: https://www.facebook.com/laura.killam
Normal distributions, Modality, Skewness and Kurtosis: Understanding the concepts
The normal distribution is a theoretical concept of how large samples of ratio or interval level data will look once plotted. Since many variables tend to have approximately normal distributions it
is one of the most important concepts in statistics. The normal curve...
Website: http://www.youtube.com
Normal distribution - Wikipedia
Normal distribution
The red curve is the standard normal distribution.
Notation: N(μ, σ²)
Date: 2017-04-03 01:08:18
Website: https://en.wikipedia.org
Normal Test Plot - SkyMark Corp
Normal Test Plot
Normal Test Plots (also called Normal Probability Plots or Normal Quartile Plots) are used to investigate whether process data exhibit the standard normal "bell curve" or Gaussian distribution.
First, the x-axis is transformed so that a cumulative normal density function will plot in a straight line. Then, using the mean and standard deviation (sigma) which are...
Website: http://www.skymark.com
Normal Distribution - Six Sigma Material
The normal distribution is generally credited to Pierre-Simon de Laplace. Karl Gauss is generally given credit for recognition of the normal curve of errors. This curve is also referred to as the Gaussian Distribution.
Manufacturing processes and natural occurrences frequently create this type of distribution, a unimodal bell curve. The distribution is spread symmetrically around the central...
Website: http://www.six-sigma-material.com
Find The Right Fit With Probability Distributions ...
The distribution is an attempt to chart uncertainty. In this case, an outcome of 50 is the most likely but only will happen about 4% of the time; an outcome of 40 is one standard deviation below the
mean and it will occur just under 2.5% of the time.
The other distinction is between the probability density function and the cumulative distribution function.
The PDF is the probability that our...
Date: 2017-04-03 11:23:21
Website: http://www.investopedia.com
June 2017 CFA Level 1: CFA Exam Preparation (study notes ...
My Note:
If a distribution is symmetrical, each side of the distribution is a mirror image of the other. For a symmetrical, bell-shaped distribution (known as the normal distribution), the mean, median, and
mode of the distribution are equal. The normal distribution can be completely described by its mean and variance.
A distribution is skewed if one of its tails is longer than the other...
Website: http://analystnotes.com
Comparing groups for statistical differences: how to ...
Choosing the right statistical test. What do we need to know before we start the statistical analysis?
Question 4: What type(s) of data may we obtain during an experiment?
Answer 4: Basic data collected from an experiment could be either quantitative (numerical) data or qualitative (categorical) data, both of them having some subtypes (4).
The quantitative (numerical) data could be:
1. Discrete...
Date: 2017-04-03 09:23:46
Website: http://www.biochemia-medica.com
How to check for normal distribution using Excel for ...
You have the right idea. This can be done systematically, comprehensively, and with relatively simple calculations. A graph of the results is called a normal probability plot (or sometimes a P-P
plot). From it you can see much more detail than appears in other graphical representations, especially histograms, and with a little practice you can even learn to determine ways to re-express...
Website: http://stats.stackexchange.com
Skewness and Kurtosis - UAH
A distribution is positively skewed, negatively skewed, or unskewed depending on whether \(\skw(X)\) is positive, negative, or 0.
In the unimodal case, if the distribution is positively skewed then the probability density function has a long tail to the right, and if the distribution is negatively skewed then the probability
density function has a long tail to the left. A symmetric distribution is unskewed.
Suppose that the distribution of \(X\) is...
Date: 2016-10-04 22:08:48
Website: http://www.math.uah.edu
Skewed Data - mathsisfun.com
Skewed Data
Data can be "skewed", meaning it tends to have a long tail on one side or the other:
Negative Skew?
Why is it called negative skew? Because the long "tail" is on the negative side of the...
Date: 2017-02-04 03:04:59
Website: mathsisfun.com
Excel for Business Statistics - Personal Web Space Basics
The Business Statistics Online Course
This site provides illustrative experience in the use of Excel for data summary, presentation, and for other basic statistical analysis. I believe the popular use of Excel is on the areas where Excel
really can excel. This includes organizing data, i.e. basic data management, tabulation and graphics. For real statistical analysis one must learn...
Date: 2013-01-07 17:19:32
Website: http://home.ubalt.edu
Minimax | Brilliant Math & Science Wiki
In game theory, minimax is a decision rule used to minimize the worst-case potential loss; in other words, a player considers all of the best opponent responses to his strategies, and selects the
strategy such that the opponent's best strategy gives a payoff as large as possible.
The name "minimax" comes from minimizing the loss involved when the opponent selects the strategy that gives maximum loss, and is useful in analyzing the first player's decisions both when the
players move sequentially and when the players move simultaneously. In the latter case, minimax may give a Nash equilibrium of the game if some additional conditions hold.
Minimax is also useful in combinatorial games, in which every position is assigned a payoff. The simplest example is assigning a "1" to a winning position and "-1" to a losing one, but as this is
difficult to calculate for all but the simplest games, intermediate evaluations (specifically chosen for the game in question) are generally necessary. In this context, the goal of the first player
is to maximize the evaluation of the position, and the goal of the second player is to minimize the evaluation of the position, so the minimax rule applies. This, in essence, is how computers
approach games like chess and Go, though various computational improvements are possible to the "naive" implementation of minimax.
Suppose player \(i\) chooses strategy \(s_i\), and the remaining players choose the strategy profile \(s_{-i}\). If \(u_i(s)\) denotes the utility function for player \(i\) on strategy profile \(s\), the minimax of a game is defined as
\[\overline{u_i} = \min_{s_{-i}}\max_{s_i}u_i(s_i, s_{-i})\]
Intuitively speaking, the minimax (for player \(i\)) is one of two equivalent formulations:
• The minimax is the smallest value that the other players can force player \(i\) to receive, without knowing player \(i\)'s strategy
• The minimax is the largest value player \(i\) can guarantee when he is told the strategies of all other players.
Similarly, the maximin is defined as
\[\underline{u_i} = \max_{s_i}\min_{s_{-i}}u_i(s_i, s_{-i})\]
which can be intuitively understood as either of:
• The maximin is the largest value player \(i\) can guarantee when he does not know the strategies of any other player
• The maximin is the smallest value the other players can force player \(i\) to receive, while knowing player \(i\)'s strategy
For example, consider a payoff matrix in which the rows represent the first player's choices and the columns represent the second player's choices, and where both players have a choice among three strategies. In such a payoff matrix, from the first player's perspective:
□ The maximin is the largest of the smallest values in each row
□ The minimax is the smallest of the largest values in each column
so the maximin is the largest of -2, 1, and -1 (i.e. 1), and the minimax is the smallest of 2, 2, and 1 (i.e. 1).
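These two quantities are easy to compute directly. The sketch below uses a hypothetical 3×3 payoff matrix whose row minima are -2, 1, -1 and whose column maxima are 2, 2, 1, matching the values discussed above (the concrete entries are illustrative assumptions, not the original table):

# Payoff matrix for the row player; rows are player 1's strategies,
# columns are player 2's strategies.
payoff = [
    [ 2, -2,  1],
    [ 1,  2,  1],
    [-1,  0,  1],
]

maximin = max(min(row) for row in payoff)        # largest of the row minima
minimax = min(max(col) for col in zip(*payoff))  # smallest of the column maxima
print(maximin, minimax)  # 1 1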
It is extremely important to note that
\[\underline{u_i} \leq \overline{u_i}\] i.e. the maximin is always at most the minimax.
This can be intuitively understood by noting that in minimax, player \(i\) effectively gets to choose his strategy after learning everyone else's, which can only increase his payoff.
In the above example, the maximin and the minimax are in fact equal. In such a case (which does not always happen!), the minimax strategy for both players gives a Nash equilibrium of the game.
This is especially important in zero-sum games, in which the minimax always gives a Nash equilibrium of the game, as the minimax and maximin are necessarily equal.
The minimax theorem establishes conditions on when the minimax and maximin of a function are equal. More precisely, the minimax theorem gives conditions on when \(\min_{x}\max_{y} f(x,y) = \max_{y}\min_{x} f(x,y)\):
Minimax theorem:
Let \(X,Y\) be two compact convex sets, and \(f: X \times Y \rightarrow \mathbb{R}\) be a continuous function on pairs \((x,y), x \in X, y \in Y\). If \(f\) is convex-concave, i.e.
□ \(f(x,y):X \rightarrow \mathbb{R}\) is convex for all fixed \(y\), and
□ \(f(x,y):Y \rightarrow \mathbb{R}\) is concave for all fixed \(x\),
then \[\min_{x \in X}\max_{y \in Y} f(x,y) = \max_{y \in Y}\min_{x \in X} f(x,y).\]
The application of the minimax theorem to zero-sum games is especially important, as it becomes equivalent to
For a zero-sum game with finitely many strategies, there exists a payoff \(P\) and a mixed strategy for each player such that
□ Player 1 can guarantee a payoff of at least \(P\), even given player 2's strategy
□ Player 2 can guarantee a payoff of at least \(-P\), even given player 1's strategy
which is equivalent to establishing a Nash equilibrium.
It is important to note that the minimax strategy may be mixed; in general,
It is not necessarily the case that the pure minimax strategy for each player leads to a Nash equilibrium.
For example, consider the payoff matrix
The minimax choice for the first player is strategy 2, and the minimax choice for the second player is also strategy 2. But both players choosing strategy 2 does not lead to a Nash equilibrium;
either player would choose to change their strategy given knowledge of the other's. In fact, the mixed minimax strategies of:
• Player 1 chooses strategy 1 with probability \(\frac{1}{6}\) and strategy 2 with probability \(\frac{5}{6}\)
• Player 2 chooses strategy 1 with probability \(\frac{1}{3}\) and strategy 2 with probability \(\frac{2}{3}\)
is stable and represents a Nash equilibrium.
In combinatorial games such as chess and Go, the minimax algorithm gives a method of selecting the next optimal move. Firstly, an evaluation function \(f:\mathbb{P} \rightarrow \mathbb{R}\) from the
set of positions to real numbers is required, representing the payoff to the first player. For example, a chess position with evaluation +1.5 is significantly in favor of the first player, while a
position with evaluation \(-\infty\) is a chess position in which White is checkmated. Once such a function is known, each player can apply the minimax principle to the tree of possible moves, thus
selecting their next move by truncating the tree at some sufficiently deep point.
More specifically, given a tree of possible moves in which the leaves have been evaluated using the function \(f\), a player recursively assigns to each node an evaluation based on the following:
• If the node is at even depth, meaning that the first player is on move, the evaluation of the node is the maximum of the evaluations of its children.
• If the node is at odd depth, meaning that the second player is on move, the evaluation of the node is the minimum of the evaluations of its children.
For example, in the below tree the evaluations of the leaves are calculated first (with 99 and -99 representing a won/lost game respectively); this fills in the bottom row of the tree. At depth 2,
the first player is on move, so he should select the move that maximizes the evaluation. This means that each evaluation in the depth 2 row is the maximum of the numbers in its subtree. At depth 1,
the second player is on move, so he should select the move that minimizes the evaluation. This means that each evaluation in the depth 1 row is the minimum of the numbers in its subtree. Finally, at
depth 0 the first player is on move, so he should select the move that maximizes the evaluation, giving an overall evaluation of 4.
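This recursion is short enough to write out in full. The sketch below (a minimal Python version, run on a small hypothetical game tree rather than the one in the figure) alternates max and min by depth exactly as described:

def minimax(node, maximizing):
    # A leaf carries its evaluation directly; an internal node is a list of children.
    if not isinstance(node, list):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth 0 (max) -> depth 1 (min) -> depth 2 (max), with 99/-99 as won/lost games.
tree = [[3, [5, -99]], [[99, 4], 2]]
print(minimax(tree, True))  # 3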
Of course, games like chess and Go are vastly more complicated, as dozens of moves are possible at any given point (rather than the 1-3 in the above example). Thus it is infeasible to completely
solve these games using a minimax algorithm, meaning that the evaluation function is used at a sufficiently deep point in the tree (for example, most modern chess engines apply a depth of somewhere
between 16 and 18) and minimax is used to fill in the rest of this relatively small tree.
how to find gcd of 2 numbers
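A standard way to find the GCD of two numbers is Euclid's algorithm; a minimal sketch in Python follows (Python's standard library also provides math.gcd):

def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
    # when the remainder reaches zero, the other value is the GCD.
    while b != 0:
        a, b = b, a % b
    return abs(a)

print(gcd(48, 36))  # 12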
Short-Term Precipitation Forecasting Rolling Update Correction Technology Based on Optimal Fusion Correction
1. Introduction
Forecasting the development trend of severe weather systems is the greatest difficulty in short-term nowcasting. Model precipitation forecasts have become an important support for short-term precipitation forecasting operations, but models usually suffer from the "spin-up" problem (Long et al., 2014), which affects the reliability of the first 1 - 6 h of the forecast. The Radar Quantitative Precipitation Forecast (QPF) has small errors within the first 2 - 3 h, but for rapidly developing or weakening storm systems the forecast error grows quickly with time. The fusion method considers both the accuracy of the radar QPF in predicting the location of the precipitation system and the ability of the numerical model forecast to reflect changes in the precipitation system, avoiding the limitations of either single method (Dong, 2018; Zhang et al., 2017; Tanessong et al., 2017).
Because radar QPF and numerical precipitation forecasts have complementary strengths (Chen et al., 2004), fusion technology, which uses measured rainfall data and high spatio-temporal resolution radar QPF to correct the model precipitation forecast, can greatly improve the accuracy of the model precipitation forecast and effectively compensate for the current shortage of such products. Higher forecast accuracy in turn lets these products play a greater role in weather forecasting, especially short-term nowcasting, helping to improve meteorological disaster prevention and mitigation and further enhancing the pertinence and practicality of flood-control weather services (Sakijege et al., 2014; Kissi et al., 2015; Huang et al., 2011).
2. Data Selection and Data Preprocessing
This paper selects the rainfall data of the dense automatic-station network of Fujian Province, the SWAN radar reflectivity-factor mosaic data (Fujian radar mosaic network, radar model CINRAD/SA), and the QPF data obtained by extrapolating the reflectivity-factor mosaic and applying the Z-I relationship, provided by the Severe Weather Automatic Nowcast System of the China Meteorological Administration (SWAN; Hu et al., 2011). The product output has a time resolution of 6 min and a horizontal spatial resolution of 1 km × 1 km, and the extrapolated forecast extends 1 h ahead; the hourly WRF model precipitation product has a horizontal spatial resolution of 3 km × 3 km.
Quality control of the automatic-station rainfall data (Awadallah & Awadallah, 2013): a "fast quality control" is applied to the minute-level automatic-station data, detecting anomalously large minute-level rainfall values and eliminating singular points via radar-data cross-checks or QPE correction. Time matching of the radar QPF with the model precipitation forecast, and time synchronization of the radars within the province: by compositing the 6-minute radar QPF precipitation, it can be matched to the hourly HRRR regional numerical model precipitation, and the time synchronization of the six radars in the province can be kept within 2 minutes. Spatial matching of the radar QPF with the model precipitation prediction: objective analysis methods such as Cressman (Goerss & Jeffries, 1994) are used to downscale the radar QPF.
3. Fusion Method Introduction
3.1. Precipitation Forecast Error Test Evaluation
Using historical rainfall data and statistical analysis, the SWAN QPF, the optical-flow extrapolated precipitation forecast and the HRRR regional numerical model precipitation were each subjected to point-to-point precipitation TS scoring at nowcasting lead times of 1 hour, 3 hours and 6 hours, together with statistical analysis and evaluation of the CSI index errors.
3.1.1. QPF Product Inspection Method
Pairing-grid selection for the automatic-station rainfall data and the precipitation forecast: first, the automatic-station rainfall data and the QPF data are paired and checked for latitude-longitude consistency.
Specific inspection methods:
1) The closest-point method: take the grid point closest to the rainfall station as the center and expand to the surrounding 3 × 3 = 9 grid points; take the grid-point QPF value closest to the station's observed precipitation, and evaluate the error between the two;
2) The 9-point averaging method: take the grid point closest to the rainfall station as the center and expand to the surrounding 3 × 3 = 9 grid points; take the 9-point average QPF value and the station's observed precipitation, and evaluate the error between the two;
3) The nearest-point QPF, the 9-point-average QPF, the corresponding hourly rainfall data, the rain-station name and the station number are written to a text file, one file per hour.
According to the 2015 test results for the SWAN radar precipitation forecast QPF, the results differ somewhat among weather systems, system evolution speeds and echo movement speeds. For large, stable, long-duration precipitation with steady echo motion, the radar QPF error is smaller and generally an underestimate. For convective systems with unevenly distributed local development, the QPF error relative to the observed precipitation is comparatively large. For precipitation systems with stable position and little change in echo intensity, the QPF error is not large. Overall, the QPF products' evaluation factor E(S) is concentrated between 0.5 and 1.4 and the error rate between 0.2 and 0.65.
The formulas are:
1) Evaluation factor:
$E(S)=\frac{\sum_{i=1}^{N} Q_{Ri,t}}{\sum_{i=1}^{N} Q_{Gi,t}}$ (1)
2) Error rate:
$F(S)=\frac{\sum_{i=1}^{N} |Q_{Ri,t}-Q_{Gi,t}|}{\sum_{i=1}^{N} Q_{Gi,t}}$ (2)
where N is the number of precipitation stations participating in the assessment throughout the region, and $Q_{Ri,t}$ and $Q_{Gi,t}$ are the hourly cumulative amounts given by the radar QPF and measured by the rain gauge, respectively, at the t-th hour for the i-th rainfall station.
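A direct translation of these two indices into code might look as follows (a sketch assuming the paired hourly radar-QPF and gauge totals are held in NumPy arrays; the sample values are hypothetical):

import numpy as np

def evaluation_factor(qpf, gauge):
    # E(S): total forecast rainfall divided by total observed rainfall.
    return np.sum(qpf) / np.sum(gauge)

def error_rate(qpf, gauge):
    # F(S): total absolute error normalized by total observed rainfall.
    return np.sum(np.abs(qpf - gauge)) / np.sum(gauge)

qpf   = np.array([1.2, 0.0, 3.5, 0.8])  # hourly radar QPF at N stations (mm)
gauge = np.array([1.0, 0.2, 4.0, 0.5])  # corresponding rain-gauge totals (mm)
print(evaluation_factor(qpf, gauge), error_rate(qpf, gauge))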
3.2. Comparative Analysis of Optical Flow Method Extrapolation Forecast and Practical Application of SWAN Forecast Products
Data from two convective processes were selected; the radar combined reflectivity factor was extrapolated for 0 - 2 hours by the optical flow method and compared with SWAN products (i.e., the cross-correlation method).
3.2.1. Optical Flow Method Combined with Semi-Lagrangian Tracking Extrapolation Algorithm (Hereinafter Referred to as Optical Flow Method) Minute-Level Extrapolation Precipitation Prediction Method
The optical flow method is used to track the motion of the echo. For rapidly changing convective precipitation weather, the optical flow method can capture the motion vectors, so it has clear forecasting advantages. Although the cross-correlation method works reasonably well in relatively flat stratiform-cloud precipitation systems, its tracking failures increase significantly in strong convective weather, which affects the final forecast result; the cross-correlation method is the one used in SWAN operational systems. The optical flow method can make up for the defects of the traditional cross-correlation method, improve the extrapolation of convective echoes, improve the prediction of radar echo movement, and improve the performance of convective nowcasting systems.
The semi-Lagrangian scheme extrapolates the echo while preserving its rotation, which improves the echo prediction. Because of the large vorticity in typhoons and severe weather, echo motion trajectories have a pronounced rotational component. To overcome the shortcoming that linear extrapolation ignores rotation, the semi-Lagrangian advection scheme is used to extrapolate the radar echo.
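To make the idea concrete, a minimal backward semi-Lagrangian advection step can be sketched as below (an illustrative simplification, assuming a per-pixel motion field from optical flow and bilinear interpolation; the operational scheme described here is more elaborate):

import numpy as np
from scipy.ndimage import map_coordinates

def advect(field, u, v, dt):
    # Backward semi-Lagrangian step: for each grid point, look upstream
    # along the motion vector and interpolate the field value found there.
    ny, nx = field.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    coords = np.stack([y - v * dt, x - u * dt])
    return map_coordinates(field, coords, order=1, mode="constant", cval=0.0)

def extrapolate(field, u, v, steps, dt=1.0):
    # Taking several small steps lets the trajectory follow a rotating
    # motion field instead of a single straight line.
    for _ in range(steps):
        field = advect(field, u, v, dt)
    return field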
3.2.2. Comparative Analysis of Practical Applications
1) Case study No. 1
The strong convective weather process in Fujian on April 11, 2017 was a fairly typical convective weather process. On the 11th, a zone of strong convective weather stretched from southern Hunan to central Fujian; scattered convection appeared over southern Guangdong, and strong convective weather also occurred along the Fujian-Guangdong border. At 15:00 (Beijing time), the echoes entered Fujian from Jiangxi, with a maximum intensity of 40 - 45 dBz. The strong echo region above 30 dBz exhibited a bow-shaped distribution (figure omitted), with a northerly wind intrusion on the back side and a southwesterly wind on the south side, forming an overall bow-echo structure.
Using the optical flow method combined with the semi-Lagrangian tracking extrapolation algorithm, the shape and position of the strong convection obtained by the extrapolation forecast (Figure 1(c)) are close to those observed by radar (Figure 1(a)). The system predicted well both the echoes in central and southern Fujian and the strong echoes in western Fujian. Comparing Figure 1(a) and Figure 1(c), however, the southern segment of the echo moved somewhat slowly in the forecast. Comparing forecast and observation, the optical flow method essentially predicted the position of the maximum-reflectivity features. The strong centers in the observed echoes are mainly distributed in the western, middle and southern sections: the western section is dominated by a large echo area with a maximum of about 40 dBz, while the middle and southern sections are arc-shaped echoes with a maximum of about 50 dBz. The strong echo centers in the optical-flow forecast are likewise mainly in the western, middle and southern sections; the strongest echo in the southern section lies farther south than observed, while the middle section is comparable in position but its 50 dBz maximum is larger than the observed echo. Comparing the 1-hour forecast of this process with the echoes one hour later shows that the predicted strong echo position is consistent with the observed echo after 1 hour and the shape is also close to reality, whereas the SWAN product (Figure 1(b)) places the echoes similarly but with a clearly weaker and more northerly middle segment. The optical flow method is therefore also of considerable reference value for one-hour extrapolation forecasts of large-scale convective systems.
Figure 1. 19:00, April 11, 2017 (Beijing time). (a) Live echo; (b) SWAN 1-hour extrapolated echo forecast issued at 18:00; (c) Optical flow method 1-hour extrapolated echo forecast issued at 18:00.
2) Case study No. 2
The convective weather process in Fujian on April 24, 2017 was of generally moderate intensity: there were local convection processes across the province, weaker than on the 11th, and the main body of the storm was over Zhejiang. In our province, echoes below 40 dBz dominated, and the shape and position of the strong convection obtained by the optical-flow extrapolation forecast (Figure 2(c)) are close to those observed by radar (Figure 2(a)). The SWAN product (Figure 2(b)) has a weaker intensity, and the area above 35 dBz is under-forecast.
Figure 2. 22:00, April 24, 2017 (Beijing time). (a) Live echo; (b) SWAN 1-hour extrapolated echo forecast issued at 21:00; (c) Optical flow method 1-hour extrapolated echo forecast issued at 21:00.
It can be seen that, for this kind of convection, the optical flow method is also more effective than the SWAN method.
By combining the optical flow method with semi-Lagrangian extrapolation to forecast the radar echo, the comparison shows that this scheme is superior to the SWAN product and has a clear advantage in intensity prediction. At the same time, optical-flow extrapolation, like cross-correlation, has its limitations: predicting short-term local initiation and dissipation remains difficult, and a model dynamical mechanism needs to be introduced into the prediction scheme to better handle echo growth and decay. At present, the optical flow method is gradually being adopted for extrapolated precipitation forecasting.
3.3. Grid Fusion Technology
3.3.1. Grid Fusion Technology
Combining the advantages of two fusion technologies, the dynamic weighting method and the trend evolution superposition method (Yang et al., 2010), a grid fusion technology is established.
1) Dynamic weight fusion method: the relative weights of the radar QPF and the numerical model output prediction values need to be adjusted with time. At short lead times the radar QPF prediction takes a larger weight, but as the forecast time extends the radar QPF error increases, so the numerical forecast results are given a relatively large weight at longer lead times, to prolong the timeliness and accuracy of the forecast.
2) Trend evolution superposition method: calculate the quantitative strengthening-or-weakening trend of the convective system predicted by the numerical model at the extrapolation time of each pixel, and superimpose it on the radar QPF, so that the forecast result better reflects the development trend of the convective system. In a selected region, find the development trend of the numerical-model forecast rainfall at the extrapolation time corresponding to each grid point, quantify it, and superimpose this quantified trend onto the radar QPF forecast result. Using the numerical model run started at the actual time t, with predictions valid at times $(t+\Delta t)$ and $(t+\Delta t-1)$, the development trend of the numerical model at the extrapolation time of each pixel is
$R_t(t+\Delta t) = R_m(t+\Delta t) - R_m(t+\Delta t-1)$
where $R_t(t+\Delta t)$ is the development trend of the numerical model at each grid point, $R_m(t+\Delta t)$ is the rainfall predicted by the numerical model for time $(t+\Delta t)$, and $R_m(t+\Delta t-1)$ is the rainfall predicted for time $(t+\Delta t-1)$. The predicted trend is then superimposed on the interpolated radar QPF prediction to obtain the fusion value
$R(t+\Delta t) = R_r(t+\Delta t) + R_t(t+\Delta t)$
where $R(t+\Delta t)$ is the fused rainfall at time $(t+\Delta t)$ and $R_r(t+\Delta t)$ is the radar QPF rainfall.
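Expressed grid-point-wise (a sketch assuming the three fields are NumPy arrays on the same 1 km grid), the superposition is a one-liner; clipping at zero is an added safeguard of ours, not something stated above:

def fuse_trend(radar_qpf, model_now, model_prev):
    trend = model_now - model_prev   # Rt(t+dt) = Rm(t+dt) - Rm(t+dt-1)
    fused = radar_qpf + trend        # R(t+dt) = Rr(t+dt) + Rt(t+dt)
    return fused.clip(min=0.0)       # rainfall cannot be negative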
3.3.2. Establishing a Grid Fusion Solution
Based on the statistical evaluation of the precipitation-forecast errors described above (Table 1), the lead-time nodes at which the radar quantitative precipitation forecast error is smaller than that of the model precipitation forecast were determined. Using the Doppler radar three-dimensional mosaic program with base data from the six CINRAD/SA radars at Jianyang, Longyan, Xiamen, Changle, Sanming and Quanzhou, together with ground automatic-station rainfall data and radar QPF precipitation products, the grid fusion technology is applied; the comprehensive error-analysis results and error-correction coefficients determine the weight function. A three-source fusion correction scheme combining measured rainfall, the HRRR model precipitation forecast and the radar QPF is thus established, outputting 1-hour accumulated precipitation forecast products with a time resolution of 6 minutes and a spatial resolution of 1 km × 1 km.
3.4. Establish an Optimal Correction Plan
Using the radar-data optical flow method for minute-scale extrapolation, in concert with the rapid-assimilation mesoscale numerical model HRRR of Fujian Province (whose product serves as the environmental field), makes full use of the advantages of both; the dynamic weight fusion method is therefore proposed to integrate the radar data with the numerical model product for strong convective weather short-term nowcasting. The relative weights of the radar-extrapolated QPF value and the numerical model's output forecast value need to be adjusted with time. At short forecast lead times, the radar extrapolation forecast takes a relatively large weight; however, as the lead time extends, the error of the radar extrapolation forecast increases, so when the forecast period is long the numerical forecast results are given a relatively large weight, to prolong the timeliness and accuracy of the forecast. The weight of the numerical model forecast is calculated using a sinusoidal weight function.
Table 1. Monthly TS score and report rate for April-June 2017.
Finally, based on the weights obtained, the radar extrapolation is combined with the numerical model prediction. Let the radar weight be $W_r(t)$ and the model weight be $W_m(t)$, where t is time. The fused rainfall is then
$R(t) = W_r(t) R_r(t) + W_m(t) R_m(t)$
where $R(t)$ is the amount of rainfall after the fusion, $R_r(t)$ is the rainfall obtained by the radar extrapolation forecast, and $R_m(t)$ is the rainfall predicted by the numerical model.
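A minimal sketch of this dynamic weighting is given below. The exact sinusoidal weight function is not specified in the text, so the form used here is an assumption for illustration:

import numpy as np

def fuse_dynamic(radar_qpf, model_qpf, t, horizon):
    # Assumed sinusoidal weight: the model weight rises smoothly from 0
    # at t = 0 to 1 at t = horizon, and the radar weight decays accordingly.
    w_model = np.sin(0.5 * np.pi * min(t / horizon, 1.0))
    w_radar = 1.0 - w_model
    return w_radar * radar_qpf + w_model * model_qpf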
Combining the various error-analysis results, the error-correction coefficients determine the scheme; based on the correction effect, the fusion parameters are continuously adjusted and the preferred scheme is adopted, so that the correction scheme is optimal. In this way, precipitation rolling-update forecast products based on measured dense data and short-term nowcasts are obtained. The system outputs objective gridded precipitation forecast products and rainfall-area forecast products (contours, images, text).
4. Precipitation Rolling Update Revised Forecast Product Inspection and Evaluation
Real-time rolling inspection and evaluation of the precipitation rolling-update correction products, which are based on measured dense data and short-term nowcasts, shows that the rolling-update revised forecast product can serve as an objective and optimal 1 - 3 hour precipitation forecast product.
4.1. Rolling Update Revised Forecast Product Inspection and Evaluation
A comparison was made between the radar extrapolation forecast issued at 14:00 on 15 June 2017 (Figure 3(b)), the HRRR numerical model forecast (Figure 3(c)), the observed 1-hour precipitation (Figure 3(a)) and the fused forecast rainfall (Figure 3(d)).
Figure 3. 15:00, June 15, 2017 (Beijing time). (a) Live 1 hour precipitation; (b) Optical flow prediction 1 hour QPF; (c) HRRR forecast 1 hour QPF; (d) Convergence forecast 1 hour QPF.
It can be seen from the comparison that the radar extrapolation forecast is basically accurate in the precipitation area in the southern part of the province, but the estimation of
rainfall is not very accurate, and the changes of the central rain area and the rainfall amount are not predicted. The rainfall estimated in the central part of the province is generally too large, and the forecast position of the convective-cell center deviates from the observation. As Figure 3(d) shows, combining the radar extrapolation with the numerical model corrects the rain area of the radar extrapolation forecast and also corrects the rainfall amount, reflecting the tendency of the convective system to weaken.
Comparing the radar extrapolation forecast for 16:00 - 17:00 on 15 June 2017 (Figure 4(b)), the HRRR numerical model forecast (Figure 4(c)), the observed 1-hour precipitation (Figure 4(a)) and the fused forecast rainfall (Figure 4(d)), it can be seen that the observed one-hour precipitation area lies in the Longyan and Zhangzhou region. HRRR forecasts totals above 50 mm, but displaced to the east. The fusion forecast is more consistent with the radar extrapolation, although its 50 mm area is too small; for the precipitation around Ganzhou, however, the fusion forecast is closer to reality.
Figure 4. 17:00, June 15, 2017 (Beijing time). (a) Live 1 hour precipitation; (b) Optical flow prediction 1 hour QPF; (c) HRRR forecast 1 hour QPF; (d) Convergence forecast 1 hour QPF.
Comparing the radar extrapolation forecast issued at 17:00 on 15 June 2017 (Figure 5(b)), the HRRR numerical model forecast (Figure 5(c)), the observed 1-hour precipitation (Figure 5(a)) and the fused forecast rainfall (Figure 5(d)), it can be seen that the observed one-hour precipitation area again lies in the Longyan and Zhangzhou region. HRRR forecasts totals above 50 mm, but over an area that is clearly too large and with magnitudes that are too high. The fusion forecast is more consistent with the radar extrapolation, and its 10 mm contour is closer to the observation; the fusion forecast reflects HRRR's heavy rain, while the extrapolation component reduces the over-large extent of the heavy precipitation. Compared with the radar extrapolation and the numerical model prediction alone, the fused forecast results are closer to the actual situation: the numerical model's general over-forecast of rain intensity is mitigated, and the displacement of the rain-intensity center is corrected.
Figure 5. 19:00, June 15, 2017 (Beijing time). (a) Live 1 hour precipitation; (b) Optical flow prediction 1 hour QPF; (c) HRRR forecast 1 hour QPF; (d) Convergence forecast 1 hour QPF.
4.2. Hourly TS Score
Figure 6 below gives the TS at various magnitudes for the period 14 - 15 June 2017. It can be seen that the fusion forecast is significantly better than the HRRR and extrapolation forecasts at all magnitudes below 7 mm, indicating that the fusion forecast effectively reduces the forecast bias for weak precipitation. Between 7 and 30 mm the fusion forecast also maintains the better of the extrapolation and model levels, combining the advantages of both; above 30 mm, however, none of the members performs well, indicating that the fusion forecast is also limited by predictability: if a process can be captured by neither the extrapolation nor the model, the fusion cannot predict it either.
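For reference, the TS (equivalently CSI) used in this kind of verification can be computed per rain threshold as below (a sketch; arrays of paired forecast and observed hourly totals are assumed):

import numpy as np

def threat_score(forecast, observed, threshold):
    # TS = hits / (hits + misses + false alarms) at a given rain threshold.
    f = forecast >= threshold
    o = observed >= threshold
    hits         = np.sum(f & o)
    misses       = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan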
4.3. HRRR Model Forecasting, Optical Flow Extrapolation Forecasting and Fusion Product Predictable Test Evaluation
The curves of precipitation CSI against time for the fusion product (blue line), the WRF model prediction (brown line) and the optical-flow extrapolation forecast (light blue line) (Figure 7) show that the optical-flow radar extrapolation forecast has the advantage within 0 - 2 hours, that the fusion product outperforms both the WRF model prediction and the radar extrapolation forecast within 0 - 6 hours, and that beyond 6 hours the advantages of the high-resolution numerical prediction model become more and more apparent.
5. Conclusion
For nowcasting within about 0 - 2 h, the fusion prediction performs better than the model prediction. The optical-flow radar extrapolation forecast has the advantage within 0 - 2 hours, and the fusion product performs better than both the WRF model prediction and the radar extrapolation forecast within 0 - 6 hours; beyond 6 hours, the advantages of high-resolution prediction models become more and more apparent.
Figure 6. (a) 0.1 - 0.9 mm; (b) 0.1 - 0.9 mm; (c) 1.5 - 6.9 mm; (d) 7.0 - 14.9 mm; (e) 15 - 29.9 mm extrapolation-TS, HRRR-TS and fusion-TS hourly distribution.
Figure 7. Convergence product (blue line), WRF forecast (brown line) and optical flow extrapolation forecast (light blue line) precipitation CSI versus time curve.
6. Discussion
1) Using the optical flow method combined with the semi-Lagrangian tracking extrapolation algorithm (abbreviated as the optical flow method) for minute-level extrapolated precipitation forecasting: in echo extrapolation, the COTREC cross-correlation method is replaced by the optical flow method. The optical flow method is of considerable reference value even for one-hour extrapolation forecasts of large-scale convective systems; it improves the extrapolation of convective echoes and the prediction of radar echo movement. Combined with the semi-Lagrangian scheme, the extrapolation preserves the rotation of the echo and the echo prediction effect is improved.
2) A grid fusion technology is established: it combines the advantages of two fusion technologies, the dynamic weighting method and the trend evolution superposition method.
3) Establish an optimal correction plan: based on the real-time effect rolling test to evaluate the correction effect, continuously adjust the fusion parameters, and adopt the preferred scheme to
make the correction scheme optimal. Furthermore, the forecasting products of precipitation rolling update based on measured dense data and short-term forecast are obtained. The combination of radar
extrapolation and numerical model corrections corrected the rain area of the radar extrapolation forecast, and the rainfall was also corrected, reflecting the tendency of the convective system to weaken.
4) The high-resolution precipitation rolling-update revised products improve the availability of model precipitation forecasts and help to improve the accuracy and refinement of short-term monitoring and early warning; the fusion technology extends the useful extrapolation time.
This paper is sponsored by the Public Welfare Industry (Meteorological) Research Project (GYHY201506022), the Fujian Provincial Meteorological Bureau Open-end Fund Project (2016K01), and the Fujian
Provincial Natural Science Foundation Project (2016J01182), China, and China Meteorological Administration Forecaster Special Project (CMAYBY2019-55).
Degree Course in
Academic Year 2020/2021
- 1° Year
Learning Objectives
- Knowledge and understanding: starting from a problem, the student will learn to investigate it using mathematical software.
- Applying knowledge and understanding: given its highly applied structure, the class will develop the ability to apply the acquired competences.
- Making judgements: the "answers" obtained with the software need to be critically analyzed. In many cases the raw data or the graphical representations need to be validated and understood.
- Communication skills: one of the main goals of the class is to organize material, also in the form of a small written essay.
- Learning skills: the students will work in groups, also in case the lectures are given online. In this way the workload will feel less tiring, and good students will learn how to explain facts while the weaker students will have the opportunity to improve their performance.
Course Structure
The course will be given in a computer lab. The activities are of a laboratory type, with a first description of the topic followed by work in front of a computer, with the possibility of working in groups and discussing with other students and the teacher. In case the lectures are given online, the students will work on their own PCs.
Should teaching be carried out in mixed mode or remotely, it may be necessary to introduce changes with respect to the previous statements, in line with the programme planned and outlined in the syllabus.
A part of the class will be dedicated to learning how to write mathematical manuscripts with LaTeX. The other part will be dedicated to using the computer to investigate mathematical problems and obtain answers in graphical form with two environments: Mathematica and Matlab. Such graphical representations can then be used to write a pleasant mathematical document.
Learning assessment may also be carried out online, should the conditions require it.
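As a flavour of the LaTeX part of the course, a minimal manuscript might look like the sketch below (a generic example, not an official course template):

\documentclass{article}
\usepackage{amsmath}

\begin{document}

\section{A first experiment}
The roots of $ax^2 + bx + c = 0$ are
\begin{equation}
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\end{equation}

\end{document}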
Textbook Information
1. T. Oetiker, H. Partl, I. Hyna, E. Schlegl (1999). The not so short introduction to LaTeX. Retrieved from https://tobi.oetiker.ch/lshort/lshort.pdf
2. G. Naldi, L. Pareschi (2002). Matlab: concetti e progetti. Apogeo.
3. Notes of the teacher
The Maximum Binary Tree Problem
We introduce and investigate the approximability of the maximum binary tree problem (MBT) in directed and undirected graphs. The goal in MBT is to find a maximum-sized binary tree in a given graph.
MBT is a natural variant of the well-studied longest path problem, since both can be viewed as finding a maximum-sized tree of bounded degree in a given graph. The connection to longest path
motivates the study of MBT in directed acyclic graphs (DAGs), since the longest path problem is solvable efficiently in DAGs. In contrast, we show that MBT in DAGs is hard: it has no efficient exp(-O(log n / log log n))-approximation under the exponential time hypothesis, where n is the number of vertices in the input graph. In undirected graphs, we show that MBT has no efficient exp(-O(log^{0.63} n))-approximation under the exponential time hypothesis. Our inapproximability results rely on self-improving reductions and structural properties of binary trees. We also show constant-factor inapproximability assuming P ≠ NP. In addition to inapproximability results, we present algorithmic results along two different flavors: (1) We design a randomized algorithm to verify if a given directed graph on n vertices contains a binary tree of size k in 2^k · poly(n) time. (2) Motivated by the longest heapable subsequence problem, introduced by Byers, Heeringa, Mitzenmacher, and Zervas,
ANALCO 2011, which is equivalent to MBT in permutation DAGs, we design efficient algorithms for MBT in bipartite permutation graphs.
• Fixed-parameter tractability
• Inapproximability
• Maximum binary tree
How to Apply Heap Leaching Process to Extract Gold?
Heap leaching process is one of the direct and efficient methods for processing gold ores with low grade. It is widely welcomed by concentrators because of its advantages of gold extraction on the
spot, simple process, easy operation and high gold recovery rate. How is the heap leaching process applied to extract gold? You can find the detailed gold heap leaching process below. The heap leaching process for gold extraction mainly includes: two-stage one-closed-circuit crushing and screening, ore stacking, dosing and spraying, activated carbon adsorption, and gold-loaded carbon desorption and electrolysis. The specific process flow is as follows:
Step 1: Two-stage one-closed-circuit crushing process
In order to improve the heap leaching efficiency of gold mines, a two-stage one-closed-circuit crushing and screening process is often used in the gold ore pretreatment stage. Its main process is as follows:
The raw gold ore stored in the raw ore bin enters the jaw crusher through the trough feeder for coarse crushing. The crushed products enter the vibrating screen through the belt conveyor for
processing. The ore that meets the particle size requirements enters the next heap leaching process, and the oversize materials need to be re-entered into the cone crusher for fine crushing. The ore
discharged from the cone crusher and from the jaw crusher enter the screening machine for screening at the same time. Such a two-stage one-closed-circuit crushing and screening process can not only
improve the heap leaching efficiency of gold ore, but also separate gold ore concentrate and gangue minerals as much as possible, and improve the recovery rate of gold ore.
Step 2: Stacking process of gold ore
Heap building is one of the important factors affecting the heap leaching efficiency of gold mines. When building piles, belt conveyors and bulldozers are used to avoid ore segregation as much as
possible. In addition, when choosing a site for gold mine heap leaching, choosing a gully area or a rough slope area can save costs. The height of the stack is a factor that affects the permeability of the heap to the leaching reagent. Therefore, when stacking, its height should not exceed 5 m. It should be noted that, in order to avoid the loss of pregnant solution, a leak-proof floor mat should be laid at the bottom.
Step 3: Dosing and spraying process of gold ore heap
In order to spray the reagent on the ore pile evenly, the pipes of the spraying device should be arranged on the top of the pile. During the spraying process, intermittent spraying can be used. The spray liquid can be composed of cyanide (such as sodium cyanide), alkaline solution (sodium hydroxide), lime water, etc. After these reagents are added to the pharmaceutical agitation tank, they are
evenly stirred and then sprayed. After a period of spraying, the pregnant solution flows into the pregnant solution tank for further treatment.
Step 4: Activated carbon adsorption process
The percolated gold pregnant solution is pumped into the stripping column through the pregnant solution pump for activated carbon adsorption. After the adsorption is completed, the gold-loaded carbon
enters the carbon storage tank for the next step of desorption. The remaining products of adsorption flow into the lean liquid pool, and are then pumped into the pile by the lean liquid pump for
re-spraying. This circulation system can improve the recovery rate of gold concentrate.
Step 5: Gold-loaded carbon desorption and electrolysis process
The gold-loaded carbon in the carbon storage tank enters the stripping column for desorption. The desorption products are filtered by a filter and flow into an electrolytic cell for electrolytic
treatment. The gold mud obtained by electrolysis can be smelted. The pregnant solution part of electrolysis and the analytical overflow part are sent to the desorption liquid tank for desorption, and
then enter the circulation system. In addition, the desorbed underflow part is washed in the pickling tank and then sent back to the carbon storage tank for the removal of the adsorption column.
The above is the complete gold mine heap leaching process. In the process of heap leaching gold extraction, there are many factors that affect its efficiency, such as ore particle size, heap building
effect, pH value, oxygen, spray intensity and cyanide concentration. Therefore, various considerations must be taken into account in the design of the gold heap leaching process. Only in this way can the ore leaching
rate be high and the leaching speed be fast. Xinhai has served large, medium and small gold ore dressing plants. We have rich experience in gold ore heap leaching process design. What’s more, we can
provide high-quality and high-efficiency gold ore dressing equipment. Welcome to consult. | {"url":"https://m.xinhaimining.com/newo/how-to-apply-heap-leaching-process-to-extract-gold.html","timestamp":"2024-11-11T05:09:48Z","content_type":"text/html","content_length":"28860","record_id":"<urn:uuid:f83b0db9-790f-47b9-a208-cd287f7c7393>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00732.warc.gz"} |
On this page, you’ll find some of the open-source software that I've developed. This is not a complete list, so check back soon for more.
bayesplay R package
Bayesplay is an R package for computing Bayes factors for simple models (for example, differences in means). Models are constructed by specifying the relevant likelihood and priors. These can then be combined to compute a Bayes factor.
It's possible to specify the following likelihoods:
• Normal
• Scaled and shifted t-distribution
• Binomial
• and several non-central t-distributions for making inferences about t-statistics, one-sample Cohen’s d, and independent samples Cohen’s d.
The following priors can be used:
• Normal distribution
• Uniform distribution
• Scaled and shifted t distribution
• Cauchy distributions
• And the Beta distribution
bayesplay can be installed directly from CRAN as follows:
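install.packages("bayesplay")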
The documentation is available at bayesplay.github.io/bayesplay/
Note that it's also possible to use bayesplay from inside JASP. It is available under the menu for General Bayesian Tests.
bayesplay web app
The Bayesplay web app is a web-based interface for the Bayesplay package. It allows users to construct models and compute Bayes factors without having to install any software. It is possible to use
the web app as a standalone tool, but it is also possible to use the web app to generate R code for the Bayesplay R package.
The web app is available at bayesplay.colling.net.nz. | {"url":"https://research.colling.net.nz/software","timestamp":"2024-11-03T10:02:58Z","content_type":"text/html","content_length":"7097","record_id":"<urn:uuid:1f7b53e1-57e3-4998-8800-721d12063efc>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00610.warc.gz"} |
Simon Donaldson
(born 20 August 1957)
Simon Donaldson's secondary school education was at Sevenoaks School in Kent which he attended from 1970 to 1975. He then entered Pembroke College, Cambridge where he studied until 1980, receiving
his B.A. in 1979. One of his tutors at Cambridge described him as a very good student but certainly not the top student in his year. Apparently he would always come to his tutorials carrying a violin
case. In 1980 Donaldson began postgraduate work at Worcester College, Oxford, first under Nigel Hitchin's supervision and later under Atiyah's supervision. Atiyah writes in [2]:- In 1982, when he was
a second-year graduate student, Simon Donaldson proved a result that stunned the mathematical world. This result was published by Donaldson in a paper Self-dual connections and the topology of smooth
4-manifolds which appeared in the Bulletin of the American Mathematical Society in 1983. Atiyah continues his description of Donaldson's work [2]:- Together with the important work of Michael
Freedman ..., Donaldson's result implied that there are "exotic" 4-spaces, i.e. 4-dimensional differentiable manifolds which are topologically but not differentiably equivalent to the standard
Euclidean 4-space R4. What makes this result so surprising is that n = 4 is the only value for which such exotic n-spaces exist. These exotic 4-spaces have the remarkable property that (unlike R4)
they contain compact sets which cannot be contained inside any differentiably embedded 3-sphere! After being awarded his doctorate from Oxford in 1983, Donaldson was appointed a Junior Research Fellow at All Souls College, Oxford. He spent the academic year 1983-84 at the Institute for Advanced Study in Princeton. After returning to Oxford he was appointed Wallis Professor of Mathematics in
1985, a position he continues to hold. Donaldson has received many honours for his work. He received the Junior Whitehead Prize from the London Mathematical Society in 1985. In the following year he
was elected a Fellow of the Royal Society and, also in 1986, he received a Fields Medal at the International Congress at Berkeley. In 1991 Donaldson received the Sir William Hopkins Prize from the
Cambridge Philosophical Society. Then, the following year, he received the Royal Medal from the Royal Society. He also received the Crafoord Prize from the Royal Swedish Academy of Sciences in 1994:-
... for his fundamental investigations in four-dimensional geometry through application of instantons, in particular his discovery of new differential invariants ... Atiyah describes the contribution
which led to Donaldson's award of a Fields Medal in [2]. He sums up Donaldson's contribution:- When Donaldson produced his first few results on 4-manifolds, the ideas were so new and foreign to
geometers and topologists that they merely gazed in bewildered admiration. Slowly the message has gotten across and now Donaldson's ideas are beginning to be used by others in a variety of ways. ...
Donaldson has opened up an entirely new area; unexpected and mysterious phenomena about the geometry of 4-dimensions have been discovered. Moreover the methods are new and extremely subtle, using
difficult nonlinear partial differential equations. On the other hand, this theory is firmly in the mainstream of mathematics, having intimate links with the past, incorporating ideas from
theoretical physics, and tying in beautifully with algebraic geometry. The article [3] is very interesting and provides both a collection of reminiscences by Donaldson on how he came to make his
major discoveries while a graduate student at Oxford and also a survey of areas which he has worked on in recent years. Donaldson writes in [3] that nearly all his work has all come under the
headings:- (1) Differential geometry of holomorphic vector bundles. (2) Applications of gauge theory to 4-manifold topology. and he relates his contribution to that of many others in the field.
Donaldson's work in summed up by R Stern in [6]:- In 1982 Simon Donaldson began a rich geometrical journey that is leading us to an exciting conclusion to this century. He has created an entirely new
and exciting area of research through which much of mathematics passes and which continues to yield mysterious and unexpected phenomena about the topology and geometry of smooth 4-manifolds. Article
by: J J O'Connor and E F Robertson
Source: http://www-history.mcs.st-andrews.ac.uk/Biographies/Donaldson.html
| {"url":"https://math.ru/history/people/Donaldson","timestamp":"2024-11-11T09:59:49Z","content_type":"text/html","content_length":"14420","record_id":"<urn:uuid:ce1a1c89-84d4-4fb1-8362-750198011daf>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00493.warc.gz"} |
Cybersecurity Books | The Security Buddy
Cyber security has become an absolute necessity. No device should be connected to the Internet without securing it properly. Attackers often use malware and other cyber-attacks to infect a computer
or steal sensitive data from users. It is crucial that we secure ourselves.
Here is a list of cyber security books written for students and professionals who want to gain an in-depth knowledge of cyber security.
Author: Amrita Mitra
The book “The Design And Implementation Of DSA And ECDSA Using Python” discusses the design of the DSA and ECDSA algorithms. The book also discusses the underlying mathematics behind the DSA and
ECDSA algorithms. It then discusses the implementation of the DSA and ECDSA algorithms using Python without using any Python library dedicated to them. More …
Author: Amrita Mitra
The book “The Design And Implementation Of The Diffie-Hellman Key Exchange Algorithm Using Python” explains the design of the Diffie-Hellman Key Exchange algorithm and the Elliptic Curve
Diffie-Hellman (ECDH) algorithm. It also explains the fundamental concepts of mathematics required to understand the said algorithms. It also explains how to implement the algorithms using Python
without using any Python library dedicated to them. More …
Author: Amrita Mitra
The book explains the design and implementation of RSA using Python. It first explains the underlying mathematics, without which understanding the design of RSA is difficult. It then explains the
RSA key generation, encryption, decryption, and digital signature algorithms and discusses the implementation of these algorithms from scratch using Python. The book then explains how to use
Python libraries for RSA encryption, decryption, and digital signatures. It also discusses various security concerns of RSA and how to address them. More …
Author: Amrita Mitra
The book Web Application Vulnerabilities And Prevention is good for students or professionals who want to learn about web application security. The cyber security book explains different types of
web application vulnerabilities and how these vulnerabilities make a web application vulnerable to cyber-attacks. It also explains various preventive measures that can be taken to prevent
attackers from exploiting each of these vulnerabilities. The cyber security book is written in a simple and easy-to-understand manner, and it explains various web application attacks with several
diagrams and illustrations so that the book can benefit all the readers. More …
Author: Amrita Mitra
The cryptography book “Cryptography And Public Key Infrastructure” explains how symmetric key encryption, public-key encryption, cryptographic hashing, digital signature, and Public Key
Infrastructure work. It also explains how different cryptographic algorithms like DES, AES, IDEA, A5/1, RC4, MD5, SHA, HMAC, etc. work and how different cryptographic algorithms are used in
various secure network protocols like TLS, SSH, DNSSEC, SFTP, FTPS, etc. The cryptography book also explains cryptanalysis and various cryptographic attacks. This book is good for students or
professionals who want an in-depth knowledge of cryptography. No prior experience in the cryptography field is required to read this book. More …
Author: Amrita Mitra
The phishing book “Phishing: Detection, Analysis And Prevention” is good for readers who want to learn about preventing phishing and social engineering. The book explains what phishing is, the
different types of phishing scams, and what techniques are commonly used by attackers in a phishing scam. The phishing book also analyses several phishing messages and shows the readers what
techniques are used in those phishing messages to deceive a victim. It also explains how we can prevent various phishing scams. This cyber security book is also written in simple language and is
easy to understand so that all readers can benefit from it. More …
Author: Amrita Mitra
A Guide To Cyber Security is good for beginners who want to learn how various malware and cyber-attacks work, why attackers make such attacks, and how we can effectively prevent them. The cyber
security book covers topics like different types of encryption, email security, phishing, different types of malware and how they work, ransomware, phishing, the security of mobile phones, the
security of IoT, Blockchain, etc. This cyber security book is also written in simple language, and an easy-to-understand manner and the book explains all the topics with lots of diagrams and
illustrations so that it can benefit all the readers. More …
About the Author
Ms. Amrita Mitra is an author and security researcher. Her areas of interest are cyber security, Artificial Intelligence, and mathematics. She is also an entrepreneur who spreads knowledge and
awareness about cyber security and Artificial Intelligence through the website The Security Buddy. | {"url":"https://www.thesecuritybuddy.com/cyber-security-books/","timestamp":"2024-11-07T18:28:01Z","content_type":"text/html","content_length":"261011","record_id":"<urn:uuid:d70025c7-c4cb-4817-82d9-42fc4775d998>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00794.warc.gz"} |
Newton's second law of motion
Newton's Second Law of Motion: The Relationship Between Force, Mass, and Acceleration
Newton's second law of motion explains how force, mass, and acceleration are connected. It can be summed up in two ways:
Formal Way: The acceleration of an object depends on the net force acting on it and its mass. The direction of the acceleration is the same as the direction of the net force.
Equation: F = ma (where F is the net force, m is mass, and a is acceleration)
Key Concepts:
Net Force: This is the total force acting on an object. If there's only one force, the net force is just that force. If there are multiple forces, you add them up to find the net force.
Mass: Mass is how much stuff is in an object. A heavier object (with more mass) is harder to move and needs more force to accelerate compared to a lighter object.
Acceleration: Acceleration is how quickly an object speeds up, slows down, or changes direction. The bigger the force acting on an object, the more it will accelerate. The direction of acceleration
is the same as the direction of the force.
In Simple Terms:
This law means that how fast an object speeds up or slows down depends on two things: the strength of the force acting on it and the object's mass.
• If you push a toy car (which has a small mass) with a strong force, it will speed up quickly.
• If you push a heavy wagon (which has a large mass) with the same force, it won't speed up as quickly.
So, Newton's second law tells us that a stronger force or a lighter object will result in faster acceleration!
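For example (a quick worked calculation with made-up but realistic numbers): pushing a 2 kg toy car with a net force of 10 N gives an acceleration of a = F/m = 10 N ÷ 2 kg = 5 m/s². Pushing a 20 kg wagon with the same 10 N force gives only a = 10 ÷ 20 = 0.5 m/s², ten times less.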
| {"url":"https://hamroquiz.com/post/newton-second-law-of-motion","timestamp":"2024-11-07T19:19:58Z","content_type":"text/html","content_length":"14592","record_id":"<urn:uuid:fc71451d-3a66-4307-a1bd-95a2cbb3ad04>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00417.warc.gz"} |
Statistical Computing Seminars Regression with Stata
The aim of this seminar is to help you increase your skills in using regression analysis with Stata. The seminar does not teach regression, per se, but focuses on how to perform regression analyses
using Stata. It is assumed that you have had at least a one quarter/semester course in regression (linear models) or a general statistical methods course that covers simple and multiple regression
and have access to a regression textbook that explains the theoretical background of the materials covered in this seminar. These materials also assume you are familiar with using Stata, for example
that you have taken the Introduction to Stata class or have equivalent knowledge of Stata. Below are materials used for teaching this seminar.
• The Stata program on which the seminar is based (Note that the links in the program have changed; please follow the hyperlink convention below).
• All data files used in the book are available as Stata (.dta) files. The files can be downloaded from within Stata. The general form of the command looks as follows:
use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/filename
where filename is replaced by the name of the file. Once you have loaded the data file into your computer’s memory, you should save it to the local hard disk so that it will load faster when you use the file in the future (see the example below the file list). The files used in this seminar include
• acadindx crime elemapi elemapi2 hsb2
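For example, to load the elemapi2 file from this list and save a local copy (a hypothetical session that simply follows the command pattern shown above):
use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/elemapi2
save elemapi2, replace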
• The seminar is based on chapters 1, 2, 3 and 4 of the Regression with Stata Web Book. | {"url":"https://stats.oarc.ucla.edu/stata/seminars/stata-regression/","timestamp":"2024-11-09T14:05:23Z","content_type":"text/html","content_length":"37248","record_id":"<urn:uuid:8aba2f5d-5529-45eb-8bd4-22658d8a192f>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00275.warc.gz"} |
Hey! Pass this quiz: Comprehensive Guide to Algebra 2: Solving Equations and Graphing Functions
Created from Youtube video: https://www.youtube.com/watch?v=i6sbjtJjJ-A
Concepts covered: Algebra 2, linear equations, quadratic equations, graphing inequalities, factoring
This video provides a comprehensive overview of basic Algebra 2 concepts, including solving linear equations, graphing inequalities, and factoring quadratic equations. It also covers methods for
solving systems of equations and graphing quadratic functions in both vertex and standard forms.
Basic Concepts in Algebra 2: Solving Linear Equations
Concepts covered: linear equations, solving for x, variables, distribution, simplification
This chapter covers basic concepts in Algebra 2, focusing on solving linear equations. It provides step-by-step examples of solving equations with variables on both sides and distributing terms to
simplify and solve for x.
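For instance, applying those steps to the equation in Question 3 below: to solve 5x - 4 = 11, add 4 to both sides to get 5x = 15, then divide both sides by 5 to get x = 3.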
Question 1
Adding the same number to both sides preserves equality.
Question 2
What is the result of distributing -2 in -2(3x+1)?
Question 3
What is the first step to solve 5x-4=11?
Question 4
CASE STUDY: An engineer is trying to balance a chemical equation and needs to solve 4x + 7 = 15x - 2.
All of the following are correct steps except...
Question 5
CASE STUDY: A financial analyst is solving the equation 3x - 5 = 2x + 8 to find the break-even point.
Select three correct steps out of the following...
Solving Fractional Equations
Concepts covered: fractions, common denominator, cross-multiplication, simplifying equations, solving for x
This chapter explains how to solve equations involving fractions by using common denominators and cross-multiplication. It provides step-by-step examples to illustrate the process of simplifying and
solving for x in different types of fractional equations.
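For instance, applying those steps to the equation in Question 10 below: to solve 3/4x + 2 = 5, subtract 2 from both sides to get 3/4x = 3, then multiply both sides by 4/3 to get x = 4.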
Question 6
Cross multiplication is used when both sides are fractions.
Question 7
What is 6 times 1/2 in the equation?
Question 8
What is the common denominator of 2 and 3?
Question 9
CASE STUDY: A company has 2/3 of its budget allocated to marketing and 1/4 to research. They need to determine the total budget if the marketing budget is $3000.
All of the following are correct applications of solving for the total budget except...
Question 10
CASE STUDY: A student is solving the equation 3/4x + 2 = 5. They need to isolate x.
Select three correct steps to solve for x out of the following...
Solving and Graphing Inequalities
Concepts covered: inequalities, number line, interval notation, open circle, closed circle
This chapter explains how to solve inequalities, graph the solutions on a number line, and represent them using interval notation. It covers various examples, detailing the steps to manipulate
inequalities and the rules for using open and closed circles on a number line.
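For instance, applying those steps to the inequality in Question 14 below: to solve -2x + 4 ≤ 8, subtract 4 from both sides to get -2x ≤ 4, then divide both sides by -2 and reverse the inequality sign to get x ≥ -2. On a number line this is a closed circle at -2 shaded to the right, written as [-2, ∞) in interval notation.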
Question 11
Dividing by a negative number reverses inequality signs.
Question 12
How do you graph x > 3?
Question 13
What happens when dividing by a negative number?
Question 14
CASE STUDY: An engineer is solving the inequality -2x + 4 ≤ 8. They need to represent the solution on a number line.
All of the following are correct applications except...
Question 15
CASE STUDY: A researcher is working with the inequality x/2 - 1 ≥ 0. They need to graph the solution and use interval notation.
Select three correct representations of the solution.
Solving and Graphing Inequalities
Concepts covered: inequalities, number line, interval notation, compound inequality, graphing
This chapter explains how to solve and graph two inequalities on a number line, and how to represent the solutions using interval notation. It also covers solving a compound inequality simultaneously
and plotting the solution on a number line.
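For instance, applying those steps to the compound inequality in Question 20 below: 3x + 5 ≤ 20 gives x ≤ 5, and 2x - 3 > 1 gives x > 2, so the simultaneous solution is 2 < x ≤ 5, written as (2, 5] in interval notation.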
Question 16
x > 3 is represented with a closed circle on a number line.
Question 17
How do you graph x > 3 on a number line?
Question 18
What is the solution for 2x + 5 ≤ -1?
Question 19
CASE STUDY: A company needs to determine the range of acceptable production rates. The inequality is 6x + 8 < 20 and 2x + 3 > 7. Solve for x and represent the solution in interval notation.
All of the following are correct applications except?
Question 20
CASE STUDY: A financial analyst is setting investment limits. The inequality is 3x + 5 ≤ 20 and 2x - 3 > 1. Solve for x and represent the solution in interval notation.
Select three correct solutions out of the following.
Solving Absolute Value Equations
Concepts covered: absolute value, equations, solving, non-negative, properties
This chapter explains how to solve equations involving absolute values by breaking them into two separate equations. It also emphasizes the properties of absolute value expressions, highlighting that
they always yield non-negative results.
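For instance, applying this to the equation in Question 22 below: |2x - 3| = 6 splits into 2x - 3 = 6, giving x = 9/2, and 2x - 3 = -6, giving x = -3/2.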
Question 21
To solve |2x - 3| = 6, set 2x - 3 to 6 and -6.
Question 22
Solve for x: |2x - 3| = 6.
Question 23
Solve for x: |x| = 4.
Question 24
CASE STUDY: A student is given the equation |3x - 2| = 7/5. They correctly isolate the absolute value expression and split it into two equations: 3x - 2 = 7/5 and 3x - 2 = -7/5.
All of the following are correct steps except:
Question 25
CASE STUDY: A student is working on the equation |x| = 4. They know they need to write two separate equations to solve for x.
Select three correct values of x:
| {"url":"https://app.kwizie.ai/en/public/quiz/f5f9b00e-033c-454f-b1e4-b696705c035e","timestamp":"2024-11-07T19:44:46Z","content_type":"text/html","content_length":"48619","record_id":"<urn:uuid:16eee3be-0ec4-405f-bfd4-70746e7a0685>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00833.warc.gz"} |
[Seminar 2021.05.06] Arithmetic properties of weakly holomorphic modular functions of arbitrary level
Date: 6 May (Thr) 14:30 ~ 15:30
Place: Zoom (ID: 854 1988 1532)
Speaker: Soon-Yi Kang (Kangwon National University)
Title: Arithmetic properties of weakly holomorphic modular functions of arbitrary level
The canonical basis of the space of modular functions on the modular group of genus zero forms a Hecke system. From this fact, many important properties of modular functions were derived. Recently, we have proved that the Niebur–Poincaré basis of the space of harmonic Maass functions also forms a Hecke system. In this talk, we show its applications, including divisibility of Fourier coefficients
of modular functions of arbitrary level, higher genus replicability, and values of modular functions on divisors of modular forms.
This is a joint work with Daeyeol Jeon and Chang Heon Kim.
cf. Spring semester number theory seminar webpage: https://sites.google.com/view/snunt/seminars
3rd Grade Math Games Printable Free - 3rd Grade Math Worksheets
3rd Grade Math Worksheet Free – As the saying goes “A journey that spans a thousand miles starts with a single step.” This is a great quote for learning math in 3rd grade. In the third grade,
students are acquiring more advanced math concepts. The importance of 3rd grade math: third-grade mathematics marks the transition … Read more
3rd Grade Math Worksheet Games
3rd Grade Math Worksheet Games – According to the old saying, “A journey of a thousand miles begins with just a single step.” This is an adage that perfectly describes the process of learning
mathematics in 3rd grade. In the 3rd grade, students are learning new math concepts that are more sophisticated. The importance of … Read more | {"url":"https://www.3stgrademathworksheets.com/tag/3rd-grade-math-games-printable-free/","timestamp":"2024-11-13T01:17:08Z","content_type":"text/html","content_length":"54144","record_id":"<urn:uuid:91ef6a4a-96ce-4d99-b12b-3bc812c6f670>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00872.warc.gz"} |
Rayleigh principle for linear Hamiltonian systems without controllability∗
In this paper we consider linear Hamiltonian differential systems without the controllability (or normality) assumption. We prove the Rayleigh principle for these systems with Dirichlet boundary
conditions, which provides a variational characterization of the finite eigenvalues of the associated self-adjoint eigenvalue problem. This result generalizes the traditional Rayleigh principle to
possibly abnormal linear Hamiltonian systems. The main tools are the extended Picone formula, which is proven here for this general setting, results on piecewise constant kernels for conjoined bases
of the Hamiltonian system, and the oscillation theorem relating the number of proper focal points of conjoined bases with the number of finite eigenvalues. As applications we obtain the expansion
theorem in the space of admissible functions without controllability and a result on coercivity of the corresponding quadratic functional.
Kratz, Werner, and Hilscher, Roman Šimon. "Rayleigh principle for linear Hamiltonian systems without controllability∗." ESAIM: Control, Optimisation and Calculus of Variations 18.2 (2012): 501-519.
| {"url":"https://eudml.org/doc/276370","timestamp":"2024-11-02T11:14:52Z","content_type":"application/xhtml+xml","content_length":"45749","record_id":"<urn:uuid:191d21e2-7c2c-422d-8eb9-a1b39efc56bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00748.warc.gz"} |
Get the last working day of the previous quarter from a given date
Hi All,
I am working on requirements where I have to calculate logic always based on the last day of the previous quarter. I have a date field named Week. My problem is that the Week field contains all dates, and I only want to apply the logic based on the last day of the previous quarter. Can someone tell me how to do that?
I don't want to filter dates. I want to apply some logic that gives me the dates I want.
Any help on this is greatly appreciated
Operating system used: Windows
Best Answers
• Thanks so much for your reply.
In my dataset's date field, I don't have any non-working-day dates, so fortunately I don't need to calculate this. What I require is to select the last day of the quarter. For that I did the steps below:
1st step: I used a formula to truncate my Date to the quarter
2nd step: I used a group-by to get the max of my date field
3rd step: I used a formula again to match date against date_max; for a given quarter, when my logic is true I get yes, if not no, and based on that I will do my calculations.
I don't know if this is the correct approach or not, but I got what I want.
Again thanks for the reply. Much appreciated
Turribeach Dataiku DSS Core Designer, Neuron, Dataiku DSS Adv Designer, Registered, Neuron 2023 Posts: 2,074 Neuron
You asked to "Get the last working day of the previous quarter from a given date" and that's exactly what I have provided you. Now you say you don't need to worry about working days and that you want the last day of the current quarter, not the previous one. That to me is a significant change in the requirement of what you asked.
In any case, glad you found what you need, but it will be much more helpful in the future if you specify clearly what you want, because otherwise it makes helping you a lot harder.
Turribeach Dataiku DSS Core Designer, Neuron, Dataiku DSS Adv Designer, Registered, Neuron 2023 Posts: 2,074 Neuron
Calculating the last working day of the previous quarter is relatively easy to do in Pandas (Python Code Recipe), assuming you don't need to handle national holidays. I wasn't sure if you wanted a Code Recipe or a Visual Recipe solution, but I felt tempted to try to do it in a Visual Recipe (Prepare Recipe) since, according to the documentation, there are no built-in formula functions to deal with quarters. So here is a formula that you can add as a step in a Prepare recipe (Add Step, search for Formula and add the following to Expression), which will give you the last weekday of the previous quarter:
if(datePart(inc(inc(asDate(concat(datePart(inc(trunc(now(), 'days'), if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 0, 3, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 1, 1, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 2, 2, 0))) * -1, 'months'), 'year'), '-', datePart(inc(trunc(now(), 'days'), if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 0, 3, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 1, 1, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 2, 2, 0))) * -1, 'months'), 'month'), '-1'), 'yyyy-MM-dd'), 1, 'months'), -1, 'days'), 'dayofweek') < 6, inc(inc(asDate(concat(datePart(inc(trunc(now(), 'days'), if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 0, 3, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 1, 1, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 2, 2, 0))) * -1, 'months'), 'year'), '-', datePart(inc(trunc(now(), 'days'), if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 0, 3, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 1, 1, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 2, 2, 0))) * -1, 'months'), 'month'), '-1'), 'yyyy-MM-dd'), 1, 'months'), -1, 'days'), inc(inc(inc(asDate(concat(datePart(inc(trunc(now(), 'days'), if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 0, 3, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 1, 1, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 2, 2, 0))) * -1, 'months'), 'year'), '-', datePart(inc(trunc(now(), 'days'), if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 0, 3, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 1, 1, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 2, 2, 0))) * -1, 'months'), 'month'), '-1'), 'yyyy-MM-dd'), 1, 'months'), -1, 'days'), mod(datePart(inc(inc(asDate(concat(datePart(inc(trunc(now(), 'days'), if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 0, 3, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 1, 1, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 2, 2, 0))) * -1, 'months'), 'year'), '-', datePart(inc(trunc(now(), 'days'), if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 0, 3, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 1, 1, if(mod(datePart(trunc(now(), 'days'), 'month'), 3) == 2, 2, 0))) * -1, 'months'), 'month'), '-1'), 'yyyy-MM-dd'), 1, 'months'), -1, 'days'), 'dayofweek'), 5) * -1
, 'days'))
Now clearly you are going to struggle to understand what's going on there, so I am going to explain. In my sample I am using the now() function as my source date, but you can replace that with whatever column name you want (such as your Week column) using a text editor, as long as it is a date data type. How does this long formula work?
1. We start by calculating mod(month, 3), as this tells us how many months we need to go back to find the previous quarter. If we are in month 11, for instance, mod(11, 3) returns 2, which means that 11 - 2 gets us month 9, the correct last month of the previous quarter
2. We then use the mod result to jump that number of months back. We take only the year and month parts from this date, as we are trying to get the last day of the quarter, not any random day. We concatenate the year and month parts with 1 as the day, so we land on the first day of that month
3. Now we convert back from string to date, and we have a date field again, in the last month of the previous quarter but on its first day
4. To calculate the last day of the month in an accurate way, we simply add 1 month and then subtract 1 day, which gives us the last calendar day of the quarter
5. The final step is to calculate what day of the week we are on and go back 1 or 2 days (if falling on Saturday or Sunday) when it is not a weekday. We first check if the day of the week is below 6, which means it's Mon-Fri and no change is needed. If it is 6 or 7, we take the mod of that by 5, which gives 1 for Saturday and 2 for Sunday, exactly the number of days we need to go back to find a weekday
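For reference, here is a minimal Pandas sketch of the same logic (no holiday handling; the function name is just for illustration):

import pandas as pd

def last_weekday_of_previous_quarter(d):
    d = pd.Timestamp(d)
    # First day of the current quarter
    q_start = pd.Timestamp(d.year, 3 * ((d.month - 1) // 3) + 1, 1)
    # Last calendar day of the previous quarter
    last_day = q_start - pd.Timedelta(days=1)
    # Step back to Friday if it falls on a weekend (Mon=0 ... Sun=6)
    if last_day.dayofweek > 4:
        last_day -= pd.Timedelta(days=last_day.dayofweek - 4)
    return last_day

print(last_weekday_of_previous_quarter("2022-11-15"))  # 2022-09-30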
With regards to the unreadability of the formula, you can vote for the following product idea, which I just raised, to improve the formula preprocessor to support code formatting:
• I agree. Sorry, it was when I was working through those steps that I realized I don't have any non-working dates in my dataset.
Once again, sorry for the confusion. | {"url":"https://community.dataiku.com/discussion/38708/get-the-last-working-day-of-the-previous-quarter-from-a-given-date","timestamp":"2024-11-15T04:10:00Z","content_type":"text/html","content_length":"423512","record_id":"<urn:uuid:6f37ef5a-9a84-4938-b408-e0055a204b37>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00871.warc.gz"} |
Seminar Series: Topics in Special Functions and Number Theory
Dear all,
The next talk is by Seamus Albion of the University of Vienna. The announcement is as follows.
Talk Announcement:
Title: An elliptic $A_n$ Selberg integral
Speaker: Seamus Albion (Vienna, Austria)
When: Thursday, Oct 26, 2023 - 4:00 PM (IST) (12:30 PM CEST)
Where: Zoom: Write to the organisers to get the link
Selberg's multivariate extension of the beta integral appears
all over mathematics: in random matrix theory, analytic number theory, multivariate orthogonal polynomials and conformal field theory. The goal of my talk will be to explain a recent unification of
two important generalisations of the Selberg integral, namely the Selberg integral associated with the root system of type A_n due to Warnaar and the elliptic Selberg integral conjectured by van
Diejen and Spiridonov and proved by Rains. The key tool in our approach is the elliptic interpolation kernel, also due to Rains. This is based on joint work with Eric Rains and Ole Warnaar.
Dear all,
The next talk is by David Wahiche of the University of Tours, France. The title and abstract is below.
Talk Announcement:
Title: From Macdonald identities to Nekrasov--Okounkov type formulas
Speaker: David Wahiche (Universite' de Tours, France)
When: Thursday, Oct 12, 2023, 4:00 PM- 5:00 PM IST
Where: Zoom: Ask the organisers for a link
Live Link: https://youtube.co/live/9BNeHB2umCM?feature=share
Between 2006 and 2008, using various methods coming from representation theory (Westbury), gauge theory (Nekrasov--Okounkov) and combinatorics (Han), several authors proved the so-called
Nekrasov–Okounkov formula which involves hook lengths of integer partitions.
This formula not only covers the generating series for P, but more generally gives a connection between powers of the Dedekind η function and integer partitions. Among the generalizations of the
Nekrasov--Okounkov formula, a (q, t)-extension was proved by Rains and Warnaar, by using refined skew Cauchy-type identities for Macdonald polynomials. The same result was also obtained independently
by Carlsson–Rodriguez-Villegas by means of vertex operators and the plethystic exponential. As mentioned in both of these papers, the special case q=t of their formula correspond to a q version of
the Nekrasov--Okounkov formula, which was already obtained by Dehaye and Han (2011) and Iqbal et al. (2012).
Motivated by the work of Han et al. around the generalizations of the Nekrasov--Okounkov formula, one way of deriving the Nekrasov--Okounkov formula is by using the Macdonald identities for infinite affine root systems (Macdonald 1972), which can be thought of as an extension of the classical Weyl denominator formula.
In this talk, I will try to explain how some reformulations of the Macdonald identities (Macdonald 1972, Stanton 1989, Rosengren and Schlosser 2006) can be decomposed in the basis of characters for each of the 7 infinite affine root systems by the Littlewood decomposition. This echoes a representation-theoretic interpretation of the Macdonald identities (see the book of Carter, for instance). Regarding an ongoing project with Cédric Lecouvey, I will mention some partial results we have obtained.
Finally, I will briefly explain how to go from these reformulations of the Macdonald identities to q-Nekrasov--Okounkov type formulas. | {"url":"https://www.sfnt.org/2023/10/","timestamp":"2024-11-14T20:38:56Z","content_type":"application/xhtml+xml","content_length":"84213","record_id":"<urn:uuid:e8106c79-8f7f-4a8d-b3c7-827e318f91a2>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00885.warc.gz"} |
Calculate $E_{cell}$ for a battery based on the following two half-reactions and conditions?
$\text{Cu}(s) \to \text{Cu}^{2+}(0.010\ \text{M}) + 2e^-$
$\text{MnO}_4^-(2.0\ \text{M}) + 4\text{H}^+(1.0\ \text{M}) + 3e^- \to \text{MnO}_2(s) + 2\text{H}_2\text{O}(l)$
I actually have everything... except $E_{cell}^\circ$, because I was not given voltages.
1 Answer
You should be given them, or you can look them up. I found them to be the values listed below.
So, I get $\text{1.40 V}$.
The values are:
$3 \times \left[\text{Cu}(s) \to \text{Cu}^{2+}(aq) + 2e^-\right]$, $E_{red}^\circ = 0.34\ \text{V}$
$2 \times \left[\text{MnO}_4^-(aq) + 4\text{H}^+(aq) + 3e^- \to \text{MnO}_2(s) + 2\text{H}_2\text{O}(l)\right]$, $E_{red}^\circ = 1.67\ \text{V}$
$3\text{Cu}(s) + 2\text{MnO}_4^-(aq) + 8\text{H}^+(aq) \to 3\text{Cu}^{2+}(aq) + 2\text{MnO}_2(s) + 4\text{H}_2\text{O}(l)$
Hence, we can calculate $E_{cell}^\circ$. Again, there are two equivalent ways to do this:
$E_{cell}^\circ = \overbrace{E_{red}^\circ}^{\text{reduction}} + \overbrace{E_{ox}^\circ}^{\text{oxidation}} = \underbrace{E_{\text{cathode}}^\circ}_{\text{as reduction}} - \underbrace{E_{\text{anode}}^\circ}_{\text{as reduction}}$
$E_{cell}^\circ = 1.67\ \text{V} + (-0.34\ \text{V}) = 1.67\ \text{V} - 0.34\ \text{V} = +1.33\ \text{V}$
As a result, one can then calculate $E_{cell}$ at these nonstandard concentrations from the Nernst equation:
$E_{cell} = E_{cell}^\circ - \frac{RT}{nF} \ln Q$
• $R$ and $T$ are the universal gas constant, $8.314\ \text{J/(mol·K)}$ (equivalently $\text{V·C/(mol·K)}$), and the temperature in $\text{K}$.
• $n$ is the total moles of electrons transferred per mole of reaction; just take its value to be the number of electrons cancelled out from the balanced reaction.
• $F = 96485\ \text{C/mol e}^-$ is Faraday's constant.
• $Q$ is the reaction quotient, i.e. the not-yet-equilibrium constant.
$Q$ is known (remembering that pure liquids and solids are given "concentrations" of $1$, and that we assign a standard concentration of $c^\circ = 1\ \text{M}$ to aqueous species):
$Q = \dfrac{([\text{Cu}^{2+}]/c^\circ)^3}{([\text{MnO}_4^-]/c^\circ)^2\,([\text{H}^+]/c^\circ)^8} = \dfrac{(0.010)^3}{(2.0)^2 (1.0)^8} = 2.5 \times 10^{-7}$
From here, assuming you mean $T = 298.15\ \text{K}$, and knowing that $n = 6\ \text{mol e}^-$ per mole of reaction,
$E_{cell} = 1.33\ \text{V} - \dfrac{(8.314\ \text{J/(mol·K)})(298.15\ \text{K})}{(6\ \text{mol e}^-)(96485\ \text{C/mol e}^-)} \ln(2.5 \times 10^{-7})$
$= 1.33\ \text{V} - \dfrac{0.0257\ \text{V}}{6} \ln(2.5 \times 10^{-7}) = 1.33\ \text{V} - \dfrac{0.0592\ \text{V}}{6} \log(2.5 \times 10^{-7})$
$= 1.39_5\ \text{V} \approx 1.40\ \text{V}$
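As a quick numerical check, the same arithmetic in a few lines of Python (not part of the original answer):

import math
E0 = 1.67 - 0.34                    # standard cell potential, V
Q = 0.010**3 / (2.0**2 * 1.0**8)    # reaction quotient
E = E0 - (8.314 * 298.15) / (6 * 96485) * math.log(Q)
print(round(E, 3))                  # 1.395, i.e. about 1.40 V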
| {"url":"https://socratic.org/questions/use-half-cell-potential-to-calculate-keq-for-the-following-reaction","timestamp":"2024-11-13T14:45:43Z","content_type":"text/html","content_length":"40507","record_id":"<urn:uuid:a1db148f-6b20-4f0f-951c-61d9a02f1596>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00711.warc.gz"} |
[EM] Spatial models -- Polytopes vs Sampling
Kristofer Munsterhjelm km_elmet at t-online.de
Fri Feb 4 13:38:17 PST 2022
On 04.02.2022 20:24, Daniel Carrera wrote:
> On Fri, Feb 4, 2022 at 5:05 AM Colin Champion
> <colin.champion at routemaster.app> wrote:
>> I haven't followed this discussion - sorry if I'm missing something. I
>> quite like Jameson Quinn's model of an infinite number of dimensions of
>> progressively diminishing importance. On the other hand, if 'n
>> dimensions' is understood as meaning n dimensions of equal importance,
>> then it seems to me intuitively unattractive. As a first
>> approximation I
>> might describe politics on a left/right axis; as a second I might
>> distinguish between economic and social liberalism but expect them
>> to be
>> correlated (leading to a cigar-shaped 2D Gaussian) etc. (This doesn't
>> help Daniel who wants an upper limit.)
> That's my intuition as well --- dimensions of decreasing importance. If
> you wanted to make a cigar-shaped Gaussian, do you have any idea of how
> elongated it should be? Like... should each dimension have half the
> variance of the one before? Something else?
I found a (very rough draft of a) paper that argues that political
positions are hierarchical down to infinity, i.e. that if you ask very
specific questions, different voters will have different opinions about
these, but that they can be grouped into larger categories that do make
sense in lower dimensional space.
If that's true, then there should be some kind of effective limit to the
number of dimensions given by the degree that voters care to coordinate
and know the issues, and how strong a low-pass filter (so to speak) the
political mechanism provides. PR would allow for more distinctions than single-member districts.
In such a kind of model, I would imagine that any sort of single-member
district would have a definite attenuating effect on variety, but that
Condorcet would provide better reproduction for the unavoidable level of
quantization than would say, IRV or FPTP; i.e. that it's better at
finding consensus candidates.
If you want to directly represent an area of opinion space that 10% of
the voters care about in every district, then you would pretty much need
PR. IRV would either magnify or squash this 10% support based on its
"strongest wing of strongest wing" recursive logic, whereas Condorcet
would pull the consensus candidate somewhat in its direction.
So SMD would give you some degree of bundling, PR with thresholds give
you a lesser degree, and PR without thresholds an even lesser degree.
I'd imagine sortition would (on average) produce a very low degree of
bundling, since the representative sample should be accurate of the
people down to the variance provided by the size of the assembly itself.
It would also bypass whatever natural limit arises from the nature of
campaigning and party organization.
I don't know what this natural level would be, though, for something
like Condorcet. I guess it would depend on to what degree dimensions
follow geographical distinctions: if six districts have voters who care
exclusively about economic issues, and four districts care exclusively
about unitary state vs devolution issues, then that's going to give a
different composition of issue space than if the unitary/devolved voters
are represented by 40% in every district and the economic voters by 60%
also in every district.
(If the model is accurate, that is. Perhaps a properly designed poll
could determine whether it is; such a poll would ask various questions
within subcategories of subcategories of main categories and see if the
results are consistent with the hierarchical model.)
> Let me also respond to Kristofer's comment about not taking the current
> bundling of issues as a given. One could argue that the pragmatic
> approach is to ask how a change in the electoral system would affect the
> fortunes of minor parties that already exist, or how it might encourage
> a new party to form. In the former case, you only need to model the
> sub-space spanned by parties that already exist, and in the latter you
> only need 1 more dimension than that.
Yes. But if we're not careful, that's the kind of reasoning that leads
to IRV-type reform. Suppose that a method is stable for k parties and
then something bad happens at k+1. Once we're stuck at k parties, we may
ask for a reform that brings the well-behaved regime up to k+1. But then
we'll have the same problem at k+1.
It would be better, I think, to take into account scenarios all the way
up to n parties for some large n, so that there's plenty of room to grow
-- if we have that luxury as mechanism designers, of course.
The metaphor isn't completely fair to Condorcet because Condorcet passes
IIA whenever there is a honest CW and too few strategic voters to
disturb the presence of that CW. That could theoretically happen at any
dimension level. A Condorcet method can be stable at any dimension (best
case) and be unstable at any above one (worst case), so it's much harder
to tell how many parties it can support.
> Quinn's model is on his vse page:
> http://electionscience.github.io/vse-sim/VSE/
> I've read this page before and I dismissed it because I couldn't figure
> out what the model actually was. Maybe it's obvious and I just don't see
> it, but what is the actual formula for the VSE?
The concept of VSE is simply this:
Suppose every voter assigns an absolute utility to each candidate. Then
each candidate provides the electorate as a whole with some amount of
total utility, were he elected.
Now suppose there's a magic method that reads the voters' minds and
always elects the best candidate. That method has 100% VSE by
definition. A method that picks candidates at random is defined to have
0% VSE. The VSE percentage of a method is its utility performance mapped
on this scale; most methods are above 0%, but it's possible for a truly
bad method to be less than 0%.
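One way to write that as a formula (my summary of the definition above, not necessarily Jameson's exact notation):
VSE = (E[U_method] - E[U_random]) / (E[U_ideal] - E[U_random])
where U is the total voter utility of the elected winner and the expectations are taken over simulated elections.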
More info on wikipedia:
Jameson summarizes his results here:
Other people have also done VSE calculations. Here's John Huang's:
International Journal of Industrial Engineering & Production Research
A hybrid GRASP algorithm for minimizing total weighted resource tardiness penalty costs in scheduling of project networks
Project Control | Research
In this paper, we consider scheduling of project networks under minimization of total weighted resource tardiness penalty costs. In this problem, we
assume constrained resources are renewable and limited to very costly machines and tools which are also used in other projects and are not accessible in all periods of time of a project. In other
words, there is a dictated ready date as well as a due date for each resource such that no resource can be available before its ready date but the resources are allowed to be used after their due
dates by paying penalty cost depending on the resource type. We also assume, there is only one unit of each resource type available and no activity needs more than it for execution. The goal is to
find a schedule with minimal total weighted resource tardiness penalty costs. For this purpose, we present a hybrid metaheuristic procedure based on the greedy randomized adaptive search algorithm
and path-relinking algorithm. We develop reactive and non-reactive versions of the algorithm. Also, we use different bias probability functions to make our solution procedure more efficient. The
computational experiments show the reactive version of the algorithm outperforms the non-reactive version. Moreover, the bias probability functions defined based on the duration and precedence
relation characteristics give better results than other bias probability functions.
Keywords: project scheduling, weighted resource tardiness, GRASP, path-relinking. Pages 231-243. http://ijiepr.iust.ac.ir/browse.php?a_code=A-10-372-1&slc_lang=en&sid=1
M. Ranjbar (m_ranjbar@um.ac.ir), Ferdowsi University of Mashhad | {"url":"https://www.iust.ac.ir/ijieen/article-313.xml","timestamp":"2024-11-01T23:08:12Z","content_type":"application/xml","content_length":"4934","record_id":"<urn:uuid:a6dffb00-cd36-482f-99c4-a470aa7f39d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00678.warc.gz"}
Statistics/Numerical Methods/Random Number Generation - Wikibooks, open books for an open world
Definition and Usage
Random Numbers → "A sequence of integers or group of numbers which show absolutely no relationship to each other anywhere in the sequence. At any point, all integers have an equal chance of occurring, and they occur in an unpredictable fashion."
Many statistical methods rely on random numbers. The use of random numbers, however, has expanded beyond random sampling or random assignment of treatments to experimental units. More common uses now
are in simulation studies of physical processes, of analytically intractable mathematical expressions, or of a population resampling from a given sample from that population. These three general
areas of application are sometimes called simulation, Monte Carlo, and resampling.
Uniform Random Number Generation
Given an infinite sequence $R_0, R_1, \dots, R_n, \dots$ most people's notion of random would include that the numbers be uniformly distributed in the interval (0, 1). We denote this distribution by U(0, 1). I will present in this paper the two ways of generating uniform random numbers: linear congruential generators and Tausworthe generators.
Linear Congruential Generators
D. H. Lehmer in 1948 proposed the linear congruential generator as a source of random numbers. In this generator, each single number determines its successor. The form of the generator is
$X_{i+1} = (aX_i + c) \bmod m$, with $0 \leq X_i < m$.
$m$ is called the modulus. $X_0$, $a$, and $c$ are known as the seed, the multiplier, and the increment, respectively.
For example, consider m = 31, a = 7, c = 0 and begin with $X_0 = 19$. The next integers in the sequence are
9, 1, 7, 18, 2, 14, 5, 4, 28, 10, 8, 25, 20, 16, 19,
so, of course, at this point the sequence begins to repeat.
If we had taken a = 3 instead of a = 7, we would have got:
26, 16, 17, 20, 29, 25, 13, 8, 24, 10, 30, 28, 22, 4, 12, 5, 15,
14, 11, 2, 6, 18, 23, 7, 21, 1, 3, 9, 27, 19
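A minimal sketch of such a generator in Python, reproducing the m = 31, a = 7 example above:

from itertools import islice

def lcg(m, a, c, seed):
    """Yield the linear congruential sequence X_{i+1} = (a*X_i + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

print(list(islice(lcg(31, 7, 0, 19), 15)))
# [9, 1, 7, 18, 2, 14, 5, 4, 28, 10, 8, 25, 20, 16, 19] -- period 15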
From the simple example above we can guess that modulus, multiplier, and increment play a role in the period length of the linear congruential generator.
The Period → The period is the smallest positive integer $\lambda$ for which $X_{\lambda} = X_0$. The period can be no greater than m. Therefore, m is chosen to be equal or nearly equal to the largest representable integer in the computer to get a long period.
A full period generator is one in which the period is m, and it is obtained iff:
1. c is relatively prime to m; 2. (a-1) is a multiple of q, for each prime factor q of m; 3. (a-1) is a multiple of 4, if m is.
The Increment →
If c > 0, we can achieve a full period as follows:
1- m = $2^b$ → faster computer arithmetic,
2- Set (a-1) as a multiple of 4,
3- c should be odd-valued,
4- Set b as high as possible. For example b=31 in a 32-bit computer.
If c = 0, a full period is not possible. A maximum period, then, can be achieved as follows:
1-A maximum period generator, with λ = m − 1, is one in which a is a primitive element modulo m, if m is prime.
i. $a \bmod m \neq 0$
ii. $a^{(m-1)/q} \bmod m \neq 1$, for each prime factor q of (m−1)
2-Given the preceding comments, m is often set to the largest prime number less than $2^b$. The most commonly used modulus is the Mersenne prime $2^{31}-1$.
maximum period → m = prime, a = primitive element modulo m
Structure in the Generated Numbers
The idea behind a random number generator is that the output should appear truly random. That means that the numbers should appear to be distributionally independent of each other; that is, the serial correlations should be small. How bad the structure of a sequence is (that is, how much this situation causes the output to appear nonrandom) depends on the structure of the lattice.
Consider the output of the generator with m = 31 and a = 3 that begins with $x_0 = 19$. Plot the successive pairs
(27, 19), (19, 26), (26, 16)...
As can be seen easily from Figure 1.2, the successive pairs of random numbers lie on only three lines. The generated numbers do not seem random; rather, they seem correlated. From a visual perspective we can conclude that a generator whose points fall on a small number of lines does not cover the space well and has a bad lattice structure.
MacLaren and Marsaglia (1965) suggest that the output stream of a linear congruential random number generator be shuffled by using another generator to permute subsequences from the original generator. In this way the period can be increased, and the shuffling of the output can also break up the bad lattice structure.
Bays-Durham Shuffling of Uniform Deviates
Bays and Durham suggest using a single generator to fill a table of length k and then using the same stream to select a number from the table and replenish the table. After initializing a table T to contain $x_1, x_2, \dots, x_k$, set i = k+1 and generate $x_i$ to use as an index into the table. Then update the table with $x_{i+1}$.
For example, with the generator used in Figure 1.3, which yielded the sequence
27, 19, 26, 16, 17, 20, 29, 25, 13, 8, 24, 10, 30, 28, 22, 4, 12, 5, 15,
14, 11, 2, 6, 18, 23, 7, 21, 1, 3, 9,
we select k = 8, and initialize the table as
27, 19, 26, 16, 17, 20, 29, 25.
We then use the next number, 13, as the first value in the output stream, and also to form a random index into the table. If we form the index as 13 mod 8 + 1, we get the sixth tabular value, 20, as
the second number in the output stream. We generate the next number in the original stream, 8, and put it in the table, so we now have the table
27, 19, 26, 16, 17, 8, 29, 25.
Now we use 20 as the index to the table and get the fifth tabular value, 17, as the third number in the output stream. By continuing in this manner to yield 10,000 deviates, and plotting the
successive pairs, we get Figure 1.6.
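A minimal Python sketch of the scheme, fed by the a = 3 generator above; the 0-based index j = x mod k plays the role of the text's (x mod k) + 1:

def lcg(m, a, c, seed):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def bays_durham(stream, k=8):
    """Shuffle an integer stream through a table of length k (Bays-Durham)."""
    table = [next(stream) for _ in range(k)]
    x = next(stream)            # first output value, also the first index
    while True:
        yield x
        j = x % k               # 0-based form of the text's (x mod k) + 1
        x, table[j] = table[j], next(stream)

s = bays_durham(lcg(31, 3, 0, 9), k=8)   # this generator yields 27, 19, 26, ...
print([next(s) for _ in range(3)])       # [13, 20, 17], as in the text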
Tausworthe Generators
Tausworthe (1965) introduced a generator based on a sequence of 0's and 1's generated by a recurrence of the form
${\displaystyle b_{i}\equiv (a_{p}b_{i-p}+a_{p-1}b_{i-p+1}+...+a_{1}b_{i-1}){\textit {mod}}2}$
where all variables take on values of either 0 or 1.
For computational efficiency, most of the a's in the equation are set to zero. Then we get
${\displaystyle b_{i}\equiv (b_{i-p}+b_{i-p+q}){\textit {mod}}2}$
After this recurrence has been evaluated a sufficient number of times, say l, the l-tuple of b's is interpreted as a number in base 2. This is referred to as an l-wise decimation of the sequence of b's.
As an example take a primitive trinomial modulo 2,
$x^4 + x + 1$,
and begin with the bit sequence
1, 0, 1, 0.
For this polynomial, p = 4 and q = 3 in the recurrence. Operating the generator, we obtain
1, 1, 0, 0,1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0,
at which point the sequence repeats. A 4-wise decimation yields the numbers
12, 8, 15, 5, …
As with the linear congruential generators, different values of the a's and even the starting values of the b's can yield either good or bad generators.
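A minimal Python sketch of the trimmed recurrence and the l-wise decimation, reproducing the p = 4, q = 3 example (XOR implements addition mod 2):

def tausworthe_bits(p, q, seed_bits):
    """Yield bits from b_i = (b_{i-p} + b_{i-p+q}) mod 2 (requires q < p)."""
    bits = list(seed_bits)      # the p seed bits
    while True:
        b = bits[-p] ^ bits[-p + q]
        bits.append(b)          # history kept for clarity; only the last p bits matter
        yield b

def decimate(bitstream, l):
    """l-wise decimation: read successive l-bit groups as base-2 integers."""
    while True:
        group = [next(bitstream) for _ in range(l)]
        yield int("".join(map(str, group)), 2)

g = decimate(tausworthe_bits(4, 3, [1, 0, 1, 0]), 4)
print([next(g) for _ in range(4)])   # [12, 8, 15, 5], as in the text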
Goodness-of-Fit Tests
In this part, I will introduce the basic goodness-of-fit tests, with which we can evaluate the sequences of random numbers created using the above-mentioned methods.
Chi-Squared Goodness-of-Fit Tests
${\displaystyle \chi ^{2}=\sum _{i=1}^{k}{\frac {(o_{i}-e_{i})^{2}}{e_{i}}}}$
→ The null hypothesis: A random variable has a uniform (0,1) distribution
→ Count the number of observation in each of the ten intervals
(0, 0.1], (0.1, 0.2], ..., (0.9, 1.0]
→ Compare those counts with the expected numbers.
→ If the observed numbers are significantly different from the expected numbers, we have reason to reject the null hypothesis.
The Kolmogorov-Smirnov Test
This test compares the empirical cumulative distribution function (c.d.f) ${\displaystyle {\hat {F}}(.)}$ with a theoretical c.d.f F(.).
→ H0: F(x) = x, 0 ≤ x < 1
→ Rank the observations so that $R^{(1)} \leq R^{(2)} \leq \dots \leq R^{(n)}$
→ The empirical c.d.f. is $\hat{F}(x) = \frac{i}{n}$ for $R^{(i)} \leq x < R^{(i+1)}$,
→ where $R^{(0)} \equiv 0$ and $R^{(n+1)} \equiv 1$.
→ The test statistic measures the size of the largest difference between these two:
$D_n = \max_{0 \leq x < 1} \left| \hat{F}(x) - F(x) \right|$
Linear Dependence Test
→ One way to test for dependencies between numbers in a sequence is to restrict such examination to linear dependence between observations which are separated by k numbers.
→ Given a realization of n random numbers ${\displaystyle R_{0},R_{1},\ldots ,R_{n}}$ the sample covariance of lag k is
${\displaystyle C_{k}=(n-k)^{-1}\sum _{i=1}^{n-k}(R_{i}-{\frac {1}{2}})(R_{i+k}-{\frac {1}{2}}).}$
→ Under randomness ${\displaystyle E[C_{k}]=0}$ . | {"url":"https://en.m.wikibooks.org/wiki/Statistics/Numerical_Methods/Random_Number_Generation","timestamp":"2024-11-06T08:24:44Z","content_type":"text/html","content_length":"81929","record_id":"<urn:uuid:de0bba74-09d3-4fa5-91b5-bfd26a1fc029>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00588.warc.gz"} |
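Putting the three tests above together, here is a minimal Python sketch; NumPy and SciPy are assumed, and the Lehmer generator parameters (a = 16807, m = 2^31 − 1) are only an example source to test:

import numpy as np
from scipy import stats

# Sample to test: a Lehmer generator mapped into (0, 1)
m, a, x = 2**31 - 1, 16807, 12345
sample = []
for _ in range(10000):
    x = (a * x) % m
    sample.append(x / m)
r = np.array(sample)

# Chi-squared test: observed counts in ten equal bins vs the uniform expectation
observed, _ = np.histogram(r, bins=10, range=(0.0, 1.0))
chi2, p_chi2 = stats.chisquare(observed)

# Kolmogorov-Smirnov test against the U(0, 1) c.d.f. F(x) = x
d_n, p_ks = stats.kstest(r, "uniform")

# Lag-k sample covariance C_k; approximately zero under randomness
def lag_cov(r, k):
    return np.sum((r[:-k] - 0.5) * (r[k:] - 0.5)) / (len(r) - k)

print(p_chi2, p_ks, lag_cov(r, 1))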
[solids4Foam] How to calculate drag coeff when using solids4Foam
Dear Ali,
First, you're right. The solver is having a problem with the omega field; that's why the bounding term came in. This problem can be solved if I use a fixed-value velocity at the inlet instead of being required to use a time-series velocity.
Second, following your suggestions, I reduced the tolerance for the FSI loop within each time step from 1e-6 to 1e-3, and the solver now takes only a few quick iterations to converge, which is much better. But the deviation between the interfaces is getting larger, and I hope it has reached its maximum value.
Dear Philip,
If you mean the deviation gap between the interfaces, the time-series velocity profile seems to be a related problem as well. I don't have that much of a deviation gap when I use a fixed-value velocity at the inlet.
I feel that the solver doesn't work well with some time-series velocity BCs.
Regarding the method for transferring information between the interfaces, the direct-map approach doesn't work well even if I use a very fine conformal mesh created by the ICEM CFD block-structured approach. Also, RBF works much better than GGI.
In sum, as I mentioned before, the procedure seems much better now, but the deviation gap is widening over time, and I think this is because the FSI tolerance (1e-3) is too loose. A tighter FSI tolerance (1e-6), however, demands many iterations.
The other confusing point is that the current time step is 1e-5 with a Courant number of 0.0034, but if I make it 2-3 times bigger, the solver simply crashes.
Furthermore, the inlet patch is far from the FSI interface at the beginning, and it should take some time for the flow field to reach it. But the interface starts to move shortly after the start time.
Time = 0.00718
Setting traction on solid patch
Interpolating from fluid to solid using GGI/AMI interpolation
Total force (fluid) = (-2.33377e-08 -3.01451e-07 1.21318e-05)
Total force (solid) = (2.15632e-08 3.02419e-07 -1.21189e-05)
Evolving solid solver
Corr 0, relative residual = 1
Corr 242, relative residual = 0
PCG: Solving for D, Initial residual = 0.00284558, Final residual = 9.3859e-10, No outer iterations = 242
Max relative residual = 1, Relative residual = 0, enforceLinear = false
Interpolating from solid to fluid using GGI/AMI interpolation
Interpolating from solid to fluid using GGI/AMI interpolation
Current fsi relative residual norm: 1
Alternative fsi residual: 0.00234895
Time = 0.00718, iteration: 1
Modes before clean-up : 2, modes after clean-up : 0
Current fsi under-relaxation factor: 0.05
Maximal accumulated displacement of interface points: 4.10358e-05
GAMG: Solving for cellMotionUx, Initial residual = 0.840076, Final residual = 2.39959e-07, No Iterations 7
GAMG: Solving for cellMotionUy, Initial residual = 0.846326, Final residual = 8.91448e-07, No Iterations 7
GAMG: Solving for cellMotionUz, Initial residual = 0.888646, Final residual = 9.87733e-07, No Iterations 6
GAMG: Solving for cellMotionUx, Initial residual = 0.320706, Final residual = 7.29239e-07, No Iterations 8
GAMG: Solving for cellMotionUy, Initial residual = 0.483341, Final residual = 9.35381e-07, No Iterations 9
GAMG: Solving for cellMotionUz, Initial residual = 0.329802, Final residual = 5.32541e-07, No Iterations 8
Evolving fluid model: pimpleFluid
Courant Number mean: 1.1378e-06 max: 0.0038489 velocity magnitude: 0.100049
PIMPLE: iteration 1
DILUPBiCG: Solving for Ux, Initial residual = 0.000822067, Final residual = 2.49849e-10, No Iterations 2
DILUPBiCG: Solving for Uy, Initial residual = 0.000776971, Final residual = 3.60458e-09, No Iterations 2
DILUPBiCG: Solving for Uz, Initial residual = 0.000224024, Final residual = 1.25943e-09, No Iterations 2
DILUPBiCG: Solving for omega, Initial residual = 1.05632e-05, Final residual = 3.48389e-09, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 0.000398424, Final residual = 2.46952e-07, No Iterations 1
bounding k, min: -8.52414e-10 max: 0.0152085 average: 0.00947244
PIMPLE: iteration 2
DILUPBiCG: Solving for Ux, Initial residual = 1.07711e-05, Final residual = 4.34146e-09, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 1.41384e-05, Final residual = 8.04891e-09, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 3.48485e-07, Final residual = 3.49716e-10, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 1.06643e-06, Final residual = 4.72058e-10, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 4.08304e-05, Final residual = 2.89133e-08, No Iterations 1
bounding k, min: -3.43264e-11 max: 0.0152085 average: 0.00947211
PIMPLE: iteration 3
DILUPBiCG: Solving for Ux, Initial residual = 4.595e-08, Final residual = 3.15598e-11, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 5.08656e-08, Final residual = 4.0114e-11, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 3.0592e-09, Final residual = 3.30954e-12, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 1.0566e-07, Final residual = 5.76603e-11, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 4.18618e-06, Final residual = 3.45449e-09, No Iterations 1
bounding k, min: -8.51751e-10 max: 0.0152085 average: 0.00947208
Setting traction on solid patch
Interpolating from fluid to solid using GGI/AMI interpolation
Total force (fluid) = (9.58789e-09 2.0536e-07 2.23259e-06)
Total force (solid) = (-9.05041e-09 -2.07311e-07 -2.22094e-06)
Evolving solid solver
Corr 0, relative residual = 0.00108072
Corr 177, relative residual = 0
PCG: Solving for D, Initial residual = 4.66104e-06, Final residual = 9.87664e-10, No outer iterations = 177
Max relative residual = 0.00108072, Relative residual = 0, enforceLinear = false
Interpolating from solid to fluid using GGI/AMI interpolation
Interpolating from solid to fluid using GGI/AMI interpolation
Current fsi relative residual norm: 0.946871
Alternative fsi residual: 0.00222415
Time = 0.00718, iteration: 2
Current fsi under-relaxation factor: 0.05
Maximal accumulated displacement of interface points: 3.88777e-05
GAMG: Solving for cellMotionUx, Initial residual = 0.0600702, Final residual = 3.16851e-07, No Iterations 7
GAMG: Solving for cellMotionUy, Initial residual = 0.0923394, Final residual = 7.79415e-07, No Iterations 8
GAMG: Solving for cellMotionUz, Initial residual = 0.0666684, Final residual = 4.59348e-07, No Iterations 7
GAMG: Solving for cellMotionUx, Initial residual = 0.0131503, Final residual = 4.48109e-07, No Iterations 6
GAMG: Solving for cellMotionUy, Initial residual = 0.0288351, Final residual = 7.12569e-07, No Iterations 7
GAMG: Solving for cellMotionUz, Initial residual = 0.0169817, Final residual = 7.17113e-07, No Iterations 6
Evolving fluid model: pimpleFluid
Courant Number mean: 9.25785e-07 max: 0.0038489 velocity magnitude: 0.100049
PIMPLE: iteration 1
DILUPBiCG: Solving for Ux, Initial residual = 8.36956e-06, Final residual = 2.43038e-09, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 7.197e-06, Final residual = 3.83537e-09, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 9.13025e-08, Final residual = 7.72385e-11, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 3.10613e-07, Final residual = 1.17103e-10, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 4.74007e-07, Final residual = 3.76998e-10, No Iterations 1
bounding k, min: -2.57545e-12 max: 0.0152085 average: 0.00947207
PIMPLE: iteration 2
DILUPBiCG: Solving for Ux, Initial residual = 1.29491e-06, Final residual = 6.6442e-10, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 1.76408e-06, Final residual = 9.90313e-10, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 3.96689e-08, Final residual = 4.08151e-11, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 3.33405e-08, Final residual = 1.82678e-11, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 4.8677e-08, Final residual = 5.0997e-11, No Iterations 1
bounding k, min: -8.52066e-10 max: 0.0152085 average: 0.00947207
PIMPLE: iteration 3
DILUPBiCG: Solving for Ux, Initial residual = 2.60354e-09, Final residual = 3.21151e-12, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 2.9883e-09, Final residual = 3.31592e-12, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 7.88369e-11, Final residual = 7.39832e-14, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 3.23573e-09, Final residual = 2.22854e-12, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 5.07204e-09, Final residual = 6.06171e-12, No Iterations 1
bounding k, min: -5.67284e-12 max: 0.0152085 average: 0.00947207
Setting traction on solid patch
Interpolating from fluid to solid using GGI/AMI interpolation
Total force (fluid) = (6.91648e-09 1.89368e-07 2.83362e-06)
Total force (solid) = (-6.47144e-09 -1.91187e-07 -2.82192e-06)
Evolving solid solver
Corr 0, relative residual = 6.13304e-05
Corr 83, relative residual = 0
PCG: Solving for D, Initial residual = 2.70202e-07, Final residual = 9.67325e-10, No outer iterations = 83
Max relative residual = 6.13304e-05, Relative residual = 0, enforceLinear = false
Interpolating from solid to fluid using GGI/AMI interpolation
Interpolating from solid to fluid using GGI/AMI interpolation
Current fsi relative residual norm: 0.899692
Alternative fsi residual: 0.00211333
Time = 0.00718, iteration: 3
Current fsi under-relaxation factor: 0.05
Maximal accumulated displacement of interface points: 3.69388e-05
GAMG: Solving for cellMotionUx, Initial residual = 0.0275252, Final residual = 7.98234e-07, No Iterations 5
GAMG: Solving for cellMotionUy, Initial residual = 0.0292615, Final residual = 7.50964e-07, No Iterations 6
GAMG: Solving for cellMotionUz, Initial residual = 0.0303498, Final residual = 4.56572e-07, No Iterations 6
GAMG: Solving for cellMotionUx, Initial residual = 0.0027185, Final residual = 4.0568e-07, No Iterations 5
GAMG: Solving for cellMotionUy, Initial residual = 0.00533021, Final residual = 4.11815e-07, No Iterations 6
GAMG: Solving for cellMotionUz, Initial residual = 0.00351797, Final residual = 5.2661e-07, No Iterations 5
Evolving fluid model: pimpleFluid
Courant Number mean: 7.32008e-07 max: 0.0038489 velocity magnitude: 0.100049
PIMPLE: iteration 1
DILUPBiCG: Solving for Ux, Initial residual = 8.14705e-06, Final residual = 2.29806e-09, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 6.91261e-06, Final residual = 3.6221e-09, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 8.75404e-08, Final residual = 7.03357e-11, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 2.81286e-07, Final residual = 1.2101e-10, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 6.49217e-08, Final residual = 4.47831e-11, No Iterations 1
bounding k, min: -7.58949e-10 max: 0.0152085 average: 0.00947207
PIMPLE: iteration 2
DILUPBiCG: Solving for Ux, Initial residual = 1.367e-06, Final residual = 5.53742e-10, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 1.57508e-06, Final residual = 9.00186e-10, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 3.50923e-08, Final residual = 3.10754e-11, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 2.99862e-08, Final residual = 1.82564e-11, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 6.73838e-09, Final residual = 5.65891e-12, No Iterations 1
bounding k, min: -8.51795e-10 max: 0.0152085 average: 0.00947207
PIMPLE: iteration 3
DILUPBiCG: Solving for Ux, Initial residual = 2.16595e-09, Final residual = 2.18187e-12, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 2.64342e-09, Final residual = 3.20919e-12, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 5.99663e-11, Final residual = 1.36567e-13, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 2.9135e-09, Final residual = 2.17967e-12, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 6.94569e-10, Final residual = 6.58861e-13, No Iterations 1
bounding k, min: -7.58995e-10 max: 0.0152085 average: 0.00947207
Setting traction on solid patch
Interpolating from fluid to solid using GGI/AMI interpolation
Total force (fluid) = (3.96849e-09 1.69376e-07 3.41162e-06)
Total force (solid) = (-3.61189e-09 -1.71066e-07 -3.3999e-06)
Evolving solid solver
Corr 0, relative residual = 6.18627e-05
Corr 86, relative residual = 0
PCG: Solving for D, Initial residual = 2.65482e-07, Final residual = 9.24092e-10, No outer iterations = 86
Max relative residual = 6.18627e-05, Relative residual = 0, enforceLinear = false
Interpolating from solid to fluid using GGI/AMI interpolation
Interpolating from solid to fluid using GGI/AMI interpolation
Current fsi relative residual norm: 0.854873
Alternative fsi residual: 0.00200806
Time = 0.00718, iteration: 4
Maximal accumulated displacement of interface points: 0.000703515
GAMG: Solving for cellMotionUx, Initial residual = 0.899139, Final residual = 3.14282e-07, No Iterations 7
GAMG: Solving for cellMotionUy, Initial residual = 0.899561, Final residual = 7.76862e-07, No Iterations 7
GAMG: Solving for cellMotionUz, Initial residual = 0.908687, Final residual = 2.81354e-07, No Iterations 7
GAMG: Solving for cellMotionUx, Initial residual = 0.0274633, Final residual = 8.09574e-07, No Iterations 6
GAMG: Solving for cellMotionUy, Initial residual = 0.0409861, Final residual = 6.12213e-07, No Iterations 7
GAMG: Solving for cellMotionUz, Initial residual = 0.026483, Final residual = 4.1743e-07, No Iterations 6
Evolving fluid model: pimpleFluid
Courant Number mean: 3.48986e-06 max: 0.0038489 velocity magnitude: 0.100049
PIMPLE: iteration 1
DILUPBiCG: Solving for Ux, Initial residual = 0.000164904, Final residual = 4.44584e-08, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 0.000137998, Final residual = 6.90704e-08, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 1.68687e-06, Final residual = 1.2523e-09, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 2.7701e-06, Final residual = 8.76416e-10, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 7.9778e-07, Final residual = 5.9646e-10, No Iterations 1
bounding k, min: -5.29395e-12 max: 0.0152085 average: 0.00947207
PIMPLE: iteration 2
DILUPBiCG: Solving for Ux, Initial residual = 4.63699e-05, Final residual = 1.05237e-08, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 3.83718e-05, Final residual = 1.58858e-08, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 5.43873e-07, Final residual = 2.87303e-10, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 2.97631e-07, Final residual = 1.42784e-10, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 8.22545e-08, Final residual = 6.35442e-11, No Iterations 1
bounding k, min: -8.51511e-10 max: 0.0152085 average: 0.00947207
PIMPLE: iteration 3
DILUPBiCG: Solving for Ux, Initial residual = 3.5925e-08, Final residual = 2.57546e-11, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 4.12395e-08, Final residual = 3.52182e-11, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 6.36586e-10, Final residual = 4.37736e-12, No Iterations 1
DILUPBiCG: Solving for omega, Initial residual = 2.89809e-08, Final residual = 1.54184e-11, No Iterations 1
DILUPBiCG: Solving for k, Initial residual = 8.56864e-09, Final residual = 7.18223e-12, No Iterations 1
bounding k, min: -6.90639e-13 max: 0.0152085 average: 0.00947207
Setting traction on solid patch
Interpolating from fluid to solid using GGI/AMI interpolation
Total force (fluid) = (-2.53231e-08 -2.77636e-07 1.28999e-05)
Total force (solid) = (2.35382e-08 2.78443e-07 -1.28879e-05)
Evolving solid solver
Corr 0, relative residual = 0.00101833
Corr 175, relative residual = 0
PCG: Solving for D, Initial residual = 4.27465e-06, Final residual = 9.68254e-10, No outer iterations = 175
Max relative residual = 0.00101833, Relative residual = 0, enforceLinear = false
Interpolating from solid to fluid using GGI/AMI interpolation
Interpolating from solid to fluid using GGI/AMI interpolation
Current fsi relative residual norm: 0.000862285
Alternative fsi residual: 2.02547e-06
ExecutionTime = 10730.1 s ClockTime = 10731 s
forces output:
forces(pressure, viscous)((0 0 0) (-2.39268e-09 -2.61921e-08 1.21742e-06))
moment(pressure, viscous)((0 0 0) (-1.20252e-07 5.99465e-07 1.26537e-08)) | {"url":"https://www.cfd-online.com/Forums/openfoam-cc-toolkits-fluid-structure-interaction/218913-how-calculate-drag-coeff-when-using-solids4foam.html","timestamp":"2024-11-05T03:52:57Z","content_type":"application/xhtml+xml","content_length":"194662","record_id":"<urn:uuid:3b7e78a4-ea78-4fe9-8620-c844e634febc>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00738.warc.gz"} |
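On the thread's title question: given a "forces output" line like the one above, a drag coefficient can be formed as Cd = Fd / (0.5 * rho * U^2 * A). A minimal post-processing sketch in Python; the density, the reference area, and the choice of z as the drag direction are assumptions to be replaced with the actual case setup:

rho = 1000.0          # fluid density [kg/m^3] -- placeholder, use your case's value
U = 0.1               # freestream speed [m/s]; the log reports magnitude ~0.100049
A = 1.0e-3            # reference area [m^2] -- placeholder

pressure = (0.0, 0.0, 0.0)                            # from the forces line
viscous = (-2.39268e-09, -2.61921e-08, 1.21742e-06)   # from the forces line

Fd = pressure[2] + viscous[2]       # drag = force along the flow (assumed z here)
Cd = Fd / (0.5 * rho * U**2 * A)
print(Cd)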
Math Colloquia - Creation of concepts for prediction models and quantitative trading
Modern mathematics, with its axiomatic systems, was developed in an attempt to create a complete reasoning system.
This was one of the most exciting mathematical experiments.
However, even after the failure of that experiment, mathematical research is still directed by the vague ideal of completeness.
Tight definitions that guarantee logical soundness were good for a small toy world, but they could not model complex human knowledge. Perfect prediction of the future requires a perfect system. By changing the direction from perfection to specific goals, we can build a rich world of mathematical systems that can predict the future for a given goal. Creating mathematical concepts need not be a complex task, but it is one of the most creative tasks, requiring deep insight into both mathematics and the real world.
Sums of Semiprime, Z, and D L-Ideals in a Class of F-Rings
In this paper it is shown that there is a large class of f-rings in which the sum of any two semiprime ℓ-ideals is semiprime. This result is used to give a class of commutative f-rings with identity element in which the sum of any two z-ideals which are ℓ-ideals is a z-ideal and the sum of any two d-ideals is a d-ideal.
Original Publication Citation
Larson, S. Sums of Semiprime, Z, and D L-Ideals in a Class of F-Rings, Proceedings of the American Mathematical Society. vol. 109 (1990) pp. 895-901.
Publisher Statement
First published in Proceedings of the American Mathematical Society in 1990, published by the American Mathematical Society
Digital Commons @ LMU & LLS Citation
Larson, Suzanne, "Sums of Semiprime, Z, and D L-Ideals in a Class of F-Rings" (1990). Mathematics, Statistics and Data Science Faculty Works. 9. | {"url":"https://digitalcommons.lmu.edu/math_fac/9/","timestamp":"2024-11-08T12:36:21Z","content_type":"text/html","content_length":"32211","record_id":"<urn:uuid:05224e9d-c85c-4158-8a4d-4c7638076974>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00187.warc.gz"} |
Metric isomorphism
From Encyclopedia of Mathematics
of two measure spaces $(X_1, \mu_1)$ and $(X_2, \mu_2)$
A bijective mapping $\phi \colon X_1 \to X_2$ such that $\phi$ and $\phi^{-1}$ are measurable and measure-preserving, i.e. $\mu_1(\phi^{-1}(A)) = \mu_2(A)$ for every measurable set $A \subseteq X_2$.
In correspondence with the usual tendency in measure theory to ignore sets of measure zero, there is (and is primarily used) a "modulo 0" version of all these ideas. For example, let
For a number of objects given in
Associated with Lebesgue space, then the converse is true: Every isomorphism of the Boolean measure
[1] V.A. Rokhlin, "On the fundamental notions of measure theory" Mat. Sb., 25 : 1 (1949) pp. 107–150 (In Russian)
See also Ergodic theory for additional references. As a rule, the adjective "metric" is no longer used, and one simply speaks of an isomorphism of measure spaces, a homomorphism of measure spaces, and so on.
[a1] P.R. Halmos, "Measure theory", Van Nostrand (1950)
How to Cite This Entry:
Metric isomorphism. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Metric_isomorphism&oldid=18067
This article was adapted from an original article by D.V. Anosov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article | {"url":"https://encyclopediaofmath.org/index.php?title=Metric_isomorphism&oldid=18067","timestamp":"2024-11-12T08:45:35Z","content_type":"text/html","content_length":"21595","record_id":"<urn:uuid:dbdcea04-3629-4931-a7b4-333ef8c47931>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00428.warc.gz"} |
Multiplication Tables 1 12 Worksheets
Math, particularly multiplication, forms the cornerstone of countless academic disciplines and real-world applications. Yet for many learners, mastering multiplication can pose a challenge. To address this hurdle, educators and parents have embraced a powerful tool: Multiplication Tables 1 12 Worksheets.
Introduction to Multiplication Tables 1 12 Worksheets
Multiplication Tables 1 12 Worksheets
Description: Multiplication Table 1 12. When you are just getting started learning the multiplication tables, these simple printable pages are great tools. There are printable tables for individual sets of math facts, as well as complete reference multiplication tables for all the facts 1-12. There are table variations with and without answers.
Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns. We emphasize mental multiplication exercises to improve numeracy skills. Choose your grade topic: Grade 2 multiplication worksheets, Grade 3 multiplication worksheets, Grade 4 mental multiplication worksheets.
Value of Multiplication Practice
Understanding multiplication is pivotal, laying a solid foundation for advanced mathematical concepts. Multiplication Tables 1 12 Worksheets offer structured and targeted practice, fostering a deeper comprehension of this essential arithmetic operation.
Development of Multiplication Tables 1 12 Worksheets
Times Tables 1 12 Printable Worksheets Have Students Multiply The Number In The Center By The
Basic multiplication printables for teaching basic facts through 12x12. Print games, quizzes, mystery picture worksheets, flashcards, and more. If you're teaching basic facts between 0 and 10 only, you may want to jump to our Multiplication 0-10 page. Games: Multiplication Puzzle Match 0-12, FREE.
These multiplication times table worksheets are colorful and a great resource for teaching kids their multiplication times tables. A complete set of free printable multiplication times tables for 1 to 12. These multiplication times table worksheets are appropriate for Kindergarten, 1st Grade, 2nd Grade, 3rd Grade, 4th Grade, and 5th Grade.
From standard pen-and-paper exercises to digital interactive formats, Multiplication Tables 1 12 Worksheets have evolved, catering to diverse learning styles and preferences.
Types of Multiplication Tables 1 12 Worksheets
Basic Multiplication Sheets: basic exercises focusing on multiplication tables, helping learners build a solid arithmetic base.
Word Problem Worksheets
Real-life scenarios incorporated into problems, boosting critical thinking and application skills.
Timed Multiplication Drills: tests designed to improve speed and accuracy, aiding quick mental math.
Benefits of Using Multiplication Tables 1 12 Worksheets
10 Best Printable Multiplication Tables 0 12 PDF For Free At Printablee
Multiplication Table: When you are just getting started learning the multiplication tables, these simple printable pages are great tools. There are printable tables for individual sets of math facts, as well as complete reference multiplication tables for all the facts 1-12.
On this page you have a large selection of 2-digit by 1-digit multiplication worksheets to choose from (example: 32x5). Multiplication, 3 Digits Times 1 Digit: on these PDF files, students can find the products of 3-digit numbers and 1-digit numbers (example: 371x3). Multiplication, 4 Digits Times 1 Digit.
Improved Mathematical Skills
Consistent practice sharpens multiplication proficiency, boosting overall math abilities.
Enhanced Problem-Solving Abilities
Word problems in worksheets develop logical thinking and strategy application.
Self-Paced Learning Advantages
Worksheets accommodate individual learning speeds, promoting a comfortable and flexible learning environment.
How to Create Engaging Multiplication Tables 1 12 Worksheets
Incorporating Visuals and Colors: vivid visuals and colors capture attention, making worksheets visually appealing and engaging.
Including Real-Life Scenarios
Connecting multiplication to everyday situations adds relevance and practicality to exercises.
Tailoring Worksheets to Different Skill Levels
Customizing worksheets based on varying proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games
Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Apps
Online platforms offer diverse and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Various Learning Styles
Visual Learners
Visual aids and diagrams aid comprehension for learners inclined toward visual learning.
Auditory Learners
Spoken multiplication problems or mnemonics cater to students who grasp concepts through auditory methods.
Kinesthetic Learners
Hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Implementation in Learning
Consistency in Practice
Regular practice strengthens multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety
A mix of repetitive exercises and varied problem formats keeps up interest and understanding.
Providing Constructive Feedback
Feedback helps identify areas for improvement, encouraging ongoing progress.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Hurdles
Tedious drills can lead to disinterest; creative strategies can reignite motivation.
Overcoming Fear of Math
Negative perceptions around math can impede progress; creating a positive learning atmosphere is essential.
Impact of Multiplication Tables 1 12 Worksheets on Academic Performance
Studies and Research Findings
Research shows a positive correlation between consistent worksheet use and improved math performance.
Multiplication Tables 1 12 Worksheets emerge as versatile tools, cultivating mathematical proficiency in learners while accommodating diverse learning styles. From foundational drills to interactive online resources, these worksheets not only enhance multiplication skills but also promote critical thinking and problem-solving abilities.
Multiplication Worksheets K5 Learning
Multiplying by 12 worksheets K5 Learning
Worksheets > Math drills > Multiplication facts > Multiplying by 12. Multiplication facts with 12s: students multiply 12 times numbers between 1 and 12. The first worksheet is a table of all multiplication facts 1-12 with twelve as a factor. 12 times table: Worksheet 1 (49 questions), Worksheet 2, Worksheet 3 (100 questions).
FAQs (Frequently Asked Questions).
Are Multiplication Tables 1 12 Worksheets suitable for all age groups?
Yes, worksheets can be tailored to different ages and skill levels, making them adaptable for various learners.
How often should students practice using Multiplication Tables 1 12 Worksheets?
Consistent practice is key. Regular sessions, ideally a few times a week, can yield significant improvement.
Can worksheets alone improve math skills?
Worksheets are a valuable tool but should be supplemented with varied learning methods for comprehensive skill development.
Are there online platforms offering free Multiplication Tables 1 12 Worksheets?
Yes, many educational websites offer free access to a wide range of Multiplication Tables 1 12 Worksheets.
How can parents support their children's multiplication practice at home?
Encouraging consistent practice, providing assistance, and creating a positive learning environment are helpful steps.
KSEEB Solutions for Class 8 Maths Chapter 2 Bijoktigalu Ex 2.1
Students can Download Maths Chapter 2 Bijoktigalu Ex 2.1 Questions and Answers, Notes Pdf, KSEEB Solutions for Class 8 Maths in Kannada helps you to revise the complete Karnataka State Board Syllabus
and score more marks in your examinations.
KSEEB Solutions for Class 8 Maths Chapter 2 Bijoktigalu Ex 2.1 | {"url":"https://kseebsolutions.guru/kseeb-solutions-for-class-8-maths-chapter-2-ex-2-1-in-kannada/","timestamp":"2024-11-10T04:48:29Z","content_type":"text/html","content_length":"61983","record_id":"<urn:uuid:124bdea6-cb05-4779-940d-7082b91d38c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00187.warc.gz"} |
Dashed Line Segmentation in D3.js
As everybody knows by now, D3 is a pretty nifty tool for creating interactive data visualizations: its simple yet powerful data binding API affords extraordinary flexibility for creating almost every
kind of chart imaginable. The downside with this flexibility is, however, that solving some common visualization problems might take a bit more work than with your average high-level charting library.
One of these problems is implementing dashed lines in a line chart - a common way to indicate uncertainty in your data. D3, at least at the time of writing, does not provide an out-of-the-box feature
for dashing lines. Fear not though, since in this article we’ll walk through a simple solution to dash your lines for days on end. For more details, check out the code at JSFiddle.
The Algorithm
In brief, we'll figure out the dashed segments, calculate their lengths along the path element, and use these lengths to derive the stroke-dasharray property.
The first thing we need is a list of the dashed segments, namely their starting and ending indices. Utilizing lodash.reduce gives us a neat & functional implementation:
function getDashedRanges(data) {
  const hasOpenRange = (arr) => _.last(arr) && !('end' in _.last(arr))
  const lastIndex = data.length - 1
  return _.reduce(data, (res, d, i) => {
    const isRangeStart = !hasOpenRange(res) && isDashed(d)
    if (isRangeStart) res.push({ start: Math.max(0, i - 1) })
    const isRangeEnd = hasOpenRange(res) && (!isDashed(d) || i === lastIndex)
    if (isRangeEnd) res[res.length - 1].end = i
    return res
  }, [])
}
isDashed simply checks the certainty property, which I’ve added to each data item (faintly mimicking the Google Charts API):
function isDashed(d) {
  return !d.certainty
}
You’re free to come up with your own method of indicating uncertainty: return d.value !== 4 if you don’t trust the number 4, for example.
Next, we’ll need a list of path lengths at each data point. We’re using the getTotalLength and getPointAtLength methods from the SVG specs to define a new function, getPathLengthAtX. getPathLengthAtX
basically approximates the length along the path at a given x coordinate:
function getPathLengthAtX(path, x) {
  const EPSILON = 1
  let point
  let target
  let start = 0
  let end = path.getTotalLength()
  // Mad binary search, yo
  while (true) {
    target = Math.floor((start + end) / 2)
    point = path.getPointAtLength(target)
    // Close enough to the requested x coordinate
    if (Math.abs(point.x - x) <= EPSILON) break
    // Search interval has collapsed without an exact hit; stop with the best estimate
    if ((target >= end || target <= start) && point.x !== x) break
    if (point.x > x) {
      end = target
    } else if (point.x < x) {
      start = target
    } else {
      break
    }
  }
  return target
}
Then we’ll just put our state-of-the-art method to work, mapping it over the data:
const lengths = data.map(d => getPathLengthAtX(path, scales.x(d.date)))
Finally, we'll take the dashed segments and path lengths, and use them to build the stroke-dasharray property:
function buildDashArray(dashedRanges, lengths) {
  return _.reduce(dashedRanges, (res, { start, end }, i) => {
    const prevEnd = i === 0 ? 0 : dashedRanges[i - 1].end
    const normalSegment = lengths[start] - lengths[prevEnd]
    const dashedSegment = getDashedSegment(lengths[end] - lengths[start])
    return res.concat([normalSegment, dashedSegment])
  }, [])
}
For each non-dashed segment, append the length of the segment; for dashed segments, append dash & blank lengths totaling up to the segment length:
function getDashedSegment(length) {
  const totalDashLen = DASH_LENGTH + DASH_SEPARATOR_LENGTH
  const dashCount = Math.floor(length / totalDashLen)
  return _.range(dashCount)
    .map(() => DASH_SEPARATOR_LENGTH + ',' + DASH_LENGTH)
    .concat(length - dashCount * totalDashLen)
}
And there you go, dashed line segments! It took a bit of work, but we got there in the end. At this point one might ask, why not just build your path out of multiple line elements, i.e. adding
separate elements for the dashed regions? That’s because the aforementioned method breaks interpolation. A more sophisticated hacker might use clip-paths to solve the dilemma; both solutions,
however, force you to split your line into multiple elements, which is, in most cases, undesirable (clip-paths are the way to go if you need something more advanced than dashed or blank segments).
Check out the code at JSFiddle. | {"url":"https://wiredcraft.com/blog/dashed-line-segmentation-d3js/","timestamp":"2024-11-02T11:32:40Z","content_type":"text/html","content_length":"517567","record_id":"<urn:uuid:b01628a1-c936-47ef-b57f-4721573551ad>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00599.warc.gz"} |
Introduction to Graphs Class 8 Extra Questions Maths Chapter 15
Extra Questions for Class 8 Maths Chapter 15 Introduction to Graphs
Question 1.
Write the coordinates of each point shown in the graph.
Question 2.
From the given figure, choose the letters indicate the location of the points.
(i) (3, 1)
(ii) (0, 5)
(iii) (3, 0)
(iv) (1, 2)
(v) (2, 3)
(vi) (8, 12)
(vii) (6, 10)
(viii) (0, 9)
(i) D(3, 1), (ii) E(0, 5), (iii) F(3, 0), (iv) G(1, 2), (v) H(2, 3), (vi) B(8, 12), (vii) C(6, 10), (viii) A(0, 9)
Question 3.
Draw the graph of the following table. Is it a linear graph?
Yes, it is a linear graph.
Question 4.
The given graphs show the progress of two different cyclists during a ride. For each graph, describe the rider’s progress over the period of time. (NCERT Exemplar)
(a) As time passes, the speed of cyclist I decreases steadily.
(b) Speed of cyclist II increases for a short time period, and then increases very slowly.
Question 5.
Match the coordinates given in Column A with the items mentioned in Column B. (NCERT Exemplar)
(i) – (d), (ii) – (f), (iii) – (e), (iv) – (a), (v) – (b), (vi) – (c)
Question 6.
The given graph shows the flight of an aeroplane.
(i) What are the scales taken on the x-axis and y-axis?
(ii) Up to what height does the aeroplane rise?
(iii) What was the speed of the aeroplane while rising?
(iv) How long was the plane in level flight?
(v) How long did the whole flight take?
(i) Scale on x-axis, 1 cm = 10 minutes
Scale on y-axis, 1 cm = 100 metres
(ii) The aeroplane rose up to 1000 metres.
(iii) The speed of the aeroplane while rising was 100 m per minute.
(iv) The time taken by the aeroplane to be in level flight is 40 + 30 = 70 minutes
(v) Total flight time is 130 minutes.
Question 7.
A bank gives 10% interest on deposits by ladies. Draw a graph showing the relation between the amount deposited and the simple interest earned, and state the following from the graph:
(ii) The investment one has to make to get an annual interest of ₹ 70.
Required Graph is as under:
(i) ₹ 25 is earned as annual interest for an investment of ₹ 250.
(ii) ₹ 700 is to be invested to get an annual interest of ₹ 70. | {"url":"https://www.learncbse.in/introduction-to-graphs-ncert-extra-questions-for-class-8-maths-ch-15/","timestamp":"2024-11-10T11:53:52Z","content_type":"text/html","content_length":"148418","record_id":"<urn:uuid:d34775f8-9176-438f-bdab-1a840c495a7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00106.warc.gz"}
How to Calculate Payback Period in Excel.
The payback period refers to the amount of time it takes to recover the cost of an investment or simply the length of time an investment reaches a break-even point.
Formula to calculate payback period.
The formula used to calculate the payback period is:
Payback Period = Initial Investment / Annual Cash Inflow
This formula is used where you have a constant cash inflow.
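The same formula as a minimal Python sketch, using the article's numbers:

def payback_period(initial_investment, annual_cash_inflow):
    """Payback period for a constant annual cash inflow."""
    return initial_investment / annual_cash_inflow

print(payback_period(60000, 20000))   # 3.0 years, matching the example below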
Suppose the initial investment amount of a project is $60,000. Calculate the payback period if the cash inflow is $20,000 per year for 5 years.
We begin by transferring the data to an Excel spreadsheet.
Then divide B1 by 20,000 to get the payback period.
Therefore, your payback period is 3 years. | {"url":"https://www.learntocalculate.com/payback-period-in-excel/","timestamp":"2024-11-08T18:38:02Z","content_type":"text/html","content_length":"57493","record_id":"<urn:uuid:e150d8be-ea0a-4e14-99e8-b9a7b4d1d0b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00632.warc.gz"} |
seminars - Local Shimura varieties and their cohomology I, II, III
※ Zoom Meeting ID: 858 2614 5315 / Passcode: 104462
※ This is a lecture series in three sessions, held on 5/3, 5/10, and 5/17.
Local Shimura varieties are non-archimedean analytic spaces analogous to Shimura varieties, whose cohomology is expected to realize (in a precise sense) both the local Langlands correspondence and
the local Jacquet-Langlands correspondence. In these lectures, I'll review the theory of local Shimura varieties, and explain what can be proven about their cohomology using current technology. Some
of this material is joint work with Tasho Kaletha and Jared Weinstein. | {"url":"https://www.math.snu.ac.kr/board/index.php?mid=seminars&page=52&sort_index=date&order_type=asc&l=en&document_srl=815535","timestamp":"2024-11-08T04:58:17Z","content_type":"text/html","content_length":"48433","record_id":"<urn:uuid:c59606ea-e7ad-4cad-a434-45da133268c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00307.warc.gz"} |
A Line Passes Through The Point (-4, -5) And Has A Slope Of -5/4. Write An Equation In Slope-Intercept Form
Step-by-step explanation:
Slope-intercept form is y = mx + b. Here m = -5/4, and the line passes through (-4, -5): -5 = (-5/4)(-4) + b = 5 + b, so b = -10. The equation is y = -(5/4)x - 10.
Answer: 5.2 + p
Step-by-step explanation:
Let p be the price, in dollars, per pound of cherries.
Given: Jackson buys a watermelon for $7.65 and 5 pounds of cherries.
Cost of 5 pounds of cherries = 5p
Cost of fruits to Jackson = 7.65+5p (in dollars)
Tim buys a pineapple for $2.45 and 4 pounds of cherries.
Cost of 4 pounds of cherries = 4p
Cost of fruits to Tim = 2.45+4p (in dollars)
Amount spent by Jackson more than Tim = (Cost of fruits to Jackson)-(Cost of fruits to Tim)
= 7.65 + 5p - (2.45 + 4p)
= 7.65 + 5p - 2.45 - 4p
= 5.2 + p
Hence, the required expression in terms of p is "5.2 + p"
Step-by-step explanation: | {"url":"https://diemso.unix.edu.vn/question/a-line-passes-through-the-point-4-5-and-has-a-slope-of-54-wr-frkg","timestamp":"2024-11-14T23:57:41Z","content_type":"text/html","content_length":"68190","record_id":"<urn:uuid:e3e11103-4d2a-4f07-abc1-0adb24259736>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00661.warc.gz"} |
Calculating the Torque of a 90 lb Flywheel (2024)
Thread starter: RustyScienceGuy
In summary: What is the flywheel doing? Thanks for your question. When you add weight to a flywheel, it increases the moment of inertia (I) of the flywheel. This means the flywheel stores more energy at a given speed, and resists changes in speed more strongly, than a lighter flywheel. The flywheel also balances the crank torque fluctuations, which makes for a smoother and more consistent pedal.
I am trying to understand how to calculate the torque on a flywheel. If I have a 90 pound flywheel that is 20" in diameter, how do I find the torque?
Thanks for your help.
...ie, torque comes from/causes acceleration/deceleration. So a flywheel at constant speed doesn't have any associated torque.
We'll need to know more about what you want to do.
The angular moment of inertia for a solid disk is m x r^2 / 2, so in this case it's 90 lb (mass) x (10/12 ft)^2 / 2 = 31.25 lb ft^2
1 lb mass = 1 slug / 32.174, so the moment of inertia is
I = 0.97 slug ft^2
Assuming no friction, torque would be associated with a rate of acceleration or deceleration.
Assume that the flywheel accelerates at a rate of 1000 rpm per second; the rate of acceleration is:
(1000 revolutions / minute per sec) x (minute / 60 sec) x (2 x pi radians / revolution) = 104.72 radians / sec^2 = alpha
Conversion factors:
1 slug = 1 lb sec^2 / ft
Back to the example:
Torque = alpha x I = (104.72 / sec^2) x 0.97 slug ft^2 = (104.72 / sec^2) x 0.97 (1 lb sec^2 / ft) ft^2 = 101.7 ft lb
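The same arithmetic as a small Python sketch (the variable names are ours; the numbers follow the example above):

import math

m_lb = 90.0                      # flywheel mass [lb]
r_ft = 10.0 / 12.0               # radius [ft]
I_lbft2 = m_lb * r_ft**2 / 2     # solid-disk inertia: 31.25 lb ft^2
I_slug = I_lbft2 / 32.174        # ~0.97 slug ft^2

alpha = 1000.0 / 60.0 * 2 * math.pi   # 1000 rpm/s in rad/s^2 (~104.72)
torque = I_slug * alpha               # ~101.7 ft lb
print(I_slug, alpha, torque)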
Torque is calculated in absolute units, in this case foot-pounds. You have to take into account the angular velocity, and the radius and mass of the flywheel. But here's the interesting thing: if you have a flywheel that is two feet in diameter and apply a force of 1 lb wt at the rim to turn it, the torque experienced at the axle is 1 foot-pound, the radius being 1 ft. (In absolute units, 1 poundal is about 1/32 of a lb wt.) You can see that the greater the diameter of the flywheel, the larger the mechanical advantage. You can see why Archimedes was able to say give me a lever that is long enough and I will move the Earth, or words to that effect. django
I need help as i am not good at physics, despite formula given by you gentlemen.
I have made a flywheel for a bicycle. What is the torque for 6 kg steel Flywheel?
The diameter is 3 1/2" ins by thickness 2 1/2" ins. The weight is 6 kg or 212 ounces ( 13.2 lb
Flywheel rpm : 410 rpm ( Pedalling average at 60 rpm )
Yo are a gentleman and a scholar thanks I rally appreciate your help.
Again, as per the previous posts, a flywheel requires no torque at a constant speed like you have mentioned. What exactly do you want to know when you ask for the torque? Are you wanting to know what
the flywheel will provide or what it takes to get it to speed?
Sorry for not stating clearly. I have a flywheel installed on a bike and it runs well. The system provide energy to the pedal when pedaling and it's balance out. Yes, what the flywheel provide?
Thanks so much Sir!
What are the angular velocity and moment of inertia for a flywheel weighing 6 kg (212 oz) with a diameter of 3 1/2 in at 479 rpm?
The crank power is 0.6 hp (450 W) at a rotating speed of 100 rpm. The flywheel is a solid steel disc of 6 kg with a diameter of 3 1/2 in (it rotates at 547 rpm, if this is important). Can this flywheel provide more of its stored energy than the crank's when released?
I don't mind anyone correcting me for not understanding flywheel principles; I don't have an academic education, coming from a poor family. I self-learn things that fascinate me. Thank you!
Jeff went through a sample of all of those calculations in post #4. We like to try to get our users to learn instead of just giving answers - give the calculations a shot and see how you do.
One thing though:
When released, can this flywheel provide more stored energy than the crank supplies?
Conservation of energy always applies. Assuming the flywheel is perfect, the energy you can get out of it would be exactly equal to the energy you put into it.
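To put a rough number on that point, here is a small sketch for the bicycle flywheel discussed above (assuming a solid steel disk of 6 kg, 3.5 in diameter, spinning at 547 rpm, the figures quoted earlier in the thread):

```python
import math

# Kinetic energy stored in a spinning solid disk: E = 1/2 * I * omega^2
mass_kg = 6.0
radius_m = (3.5 / 2.0) * 0.0254      # 1.75 in converted to metres
rpm = 547.0

I = 0.5 * mass_kg * radius_m**2       # solid disk, kg m^2
omega = rpm * 2.0 * math.pi / 60.0    # rad/s
energy = 0.5 * I * omega**2

print(f"I = {I:.4f} kg m^2, stored energy = {energy:.1f} J")
```

That comes out to roughly 10 J, which a crank delivering around 450 W replaces in a few hundredths of a second; the flywheel smooths the pedal stroke, but it cannot give back more than it was given.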
Thank you, Sir, and to all at this site. I am encouraged.
edneo said:
Thank you, Sir, and to all at this site. I am encouraged.
I managed to work it out, as you encouraged, following Jeff's example. Thanks so much; I learn something when I know the terminology.
In designing the flywheel to balance the crank torque fluctuations produced by pedaling, I see that there is a transfer of energy to the low-torque part of the pedal cycle. Thus the pedal stroke is smooth and consistent. In all, I wanted to know why, with the extra weight added to the already heavy bicycle, I don't feel the load at the pedal and the back wheel when pedaling; instead it is light and easy throughout the pedal cycle. So I thought to work out the torque transfer from the flywheel. Perhaps from a physics perspective I can see it.
Hi! edneo,
Torque is calculated as follows: torque = moment of inertia x angular acceleration. It should be simple enough to substitute whatever figures apply in your case into the formula and work out whatever you need to know.
I have to design a flywheel to generate 30 ft lb torque. How do i go about it?
yusufk said:
I have to design a flywheel to generate 30 ft lb torque. How do i go about it?
Welcome to PF... first you'll need more information about the performance requirements of the flywheel. Hopefully what you read in this thread will help you understand why that is: torque is only one
component of the performance of the flywheel. You also need to know how long you want it to be able to generate that torque and probably at what minimum rpm (giving you energy and power).
We need the flywheel to run an alternator. The torque requirement of the alternator is 30 ft lb, and the rpm range will be 300 down to 0. The alternator produces 71 V and 2.1 A at 300 rpm, decreasing uniformly with rpm. The flywheel should be able to drive the alternator for at least 15 seconds.
Thank you
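As a sanity check on those requirements (not a substitute for working it through yourself), the required moment of inertia follows directly from the figures, assuming the 30 ft lb must be sustained while the speed falls linearly from 300 rpm to rest over 15 seconds:

```python
import math

# Required flywheel inertia: tau = I * alpha, so I = tau / alpha
torque_ft_lb = 30.0
rpm_start = 300.0
run_time_s = 15.0

alpha = (rpm_start * 2.0 * math.pi / 60.0) / run_time_s  # rad/s^2 deceleration
I_required = torque_ft_lb / alpha

print(f"alpha = {alpha:.2f} rad/s^2, required I = {I_required:.1f} slug ft^2")
```

This gives about 14 slug ft^2, which is in the same ballpark as the 16.3 slug-ft^2 rating mentioned later in the thread.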
Hello Russ,
thanks for your post. Can you suggest any book or link that I can refer to that will help me design the flywheel?
Jeff did a walk-through of the calculation in post #4. You should be able to just plug your numbers in and get the answer. Could you give it a try...? We don't like just handing out answers here.
Hello Russ,
I used the calculation by Jeff and came up with a flywheel of mass 206 lb and 16 in diameter.
Hello Russ,
can you guide me on how to find the thickness of the flywheel?
I am trying to understand how to calculate the torque on a flywheel. If I have a 90 pound flywheel that is 20" in diameter, how do I find the torque?
Thanks for your help. waseemraja
I want to plot a graph of torque at the flywheel vs. piston position for a Formula One engine. I have all the engine specifications, so I started by plotting the velocity and acceleration diagrams using Excel because I may need them, but now I am totally stuck on the torque diagram. Would you guys help me with this problem? Thank you.
I need to make a flywheel which has a rating of 16.3 slug-ft squared. Can I get some input on fabrication methods?
FAQ: Calculating the Torque of a 90 lb Flywheel
1. What is torque?
Torque is a measure of the force that causes an object to rotate around an axis. It is calculated by multiplying the force applied to an object by the distance from the axis of rotation to the point
where the force is applied.
2. How do I calculate the torque of a 90 lb flywheel?
A flywheel's torque is not fixed by its weight alone: torque equals moment of inertia times angular acceleration (torque = I x alpha). For a 90 lb solid-disk flywheel, compute I = m x r^2 / 2 from the weight and radius, convert pound-mass to slugs, and multiply by the angular acceleration in rad/s^2. Post #4 above works through exactly this example.
3. Why is torque important for a flywheel?
Torque is important for a flywheel because it governs how quickly the flywheel speeds up or slows down. The greater the torque, the faster the flywheel's rotation speed changes. This is important for machines that use flywheels for energy storage or to maintain a constant speed.
4. What units are used to measure torque?
Torque is typically measured in pound-feet (lb-ft) or Newton-meters (Nm). Pound-feet is used in the United States, while Newton-meters is used in most other countries.
5. Can torque be increased or decreased?
Yes, torque can be increased or decreased by changing the amount of force applied to an object or by changing the distance between the force and the axis of rotation. For a flywheel undergoing a given angular acceleration, increasing the weight or the radius will increase the torque, while decreasing either will decrease it.
Similar threads
Help with flywheel / clutch calculations
How to transfer angular momentum between two flywheels?
Rod Bearing Load Estimate for 90 ft-lbs Torque Starter Motor
Friction between motor and flywheel
Understanding Net Internal Torque and Kinetic Friction in a System
Motorcycle: Flywheel mass and rear wheel torque
Commercially available flywheels
Reaction Torque from an Ice Auger
Kinetic Vehicle [flywheel powered] mass and energy
Maximizing Energy Efficiency with Flywheel Energy Storage
• Forums
• Engineering
• General Engineering | {"url":"https://wanemgmt.com/article/calculating-the-torque-of-a-90-lb-flywheel","timestamp":"2024-11-11T20:01:40Z","content_type":"text/html","content_length":"101754","record_id":"<urn:uuid:b20966d7-ca00-4fcd-b4eb-6bf0e78e2db6>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00081.warc.gz"} |
Math Example: Language of Math--Numerical Expressions--Addition--Example 13
Numerical Expressions
This example illustrates the conversion of the phrase "Negative five added to thirteen" into a numerical expression. The image demonstrates how this verbal statement translates to the expression 13 +
(-5), emphasizing the order of numbers when using the phrase "added to."
Understanding the nuances of phrases like "added to" is crucial for students to correctly interpret and translate verbal statements into numerical expressions. This skill is particularly important
when dealing with combinations of positive and negative numbers.
Exposure to various examples helps students recognize patterns in how different phrases affect the order of numbers in an expression. This reinforces their understanding of addition with negative
numbers and prepares them for more complex mathematical concepts.
Teacher Script: In this example, we see the phrase "added to" with a negative number. Notice how this affects the order of the numbers in our expression. Why do you think 13 comes before -5? How
might the meaning change if we said "thirteen added to negative five" instead?
For a complete collection of math examples related to Numerical Expressions click on this link: Math Examples: Numerical Expressions: Addition Collection. | {"url":"https://www.media4math.com/library/math-example-language-math-numerical-expressions-addition-example-13","timestamp":"2024-11-06T18:59:08Z","content_type":"text/html","content_length":"51096","record_id":"<urn:uuid:964b129d-1404-42c2-9b1f-8f6af2fda1c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00668.warc.gz"} |
Intro to Functions
WHAT IS A FUNCTION?
A relation shows a relationship between two quantities. A function is a special type of relation that, for each input, gives exactly one output.
To better understand functions…
LET’S BREAK IT DOWN!
Use an equation to represent a function for carnival tokens.
Emily wants to buy tokens for the carnival. The token machine says one dollar buys three tokens. A function is a special type of relation where one input gives one output. The machine works like a function, with an input of dollars and an output of tokens. You can represent the function mathematically using the equation y = 3x, where x represents the number of dollars and y represents the number of tokens. If you want to know how many tokens she gets for $20, substitute 20 into the function for x and calculate the output number of tokens, y. For y = 3x, y = 3 × 20 = 60. So Emily gets 60 tokens if she puts $20 into the machine. Try this yourself: How many tokens does Emily get for $30?
Identify functions in lists and graphs.
You can use an equation to generate a set of ordered pairs for a function. For example, the equation y = 3x generates the ordered pairs {(1, 3), (2, 6), (3, 9), (4, 12), (5, 15), ...}. Notice that no x-values are repeated within the set, which tells us that it is a function. You can graph those points and connect them to show the graph of the function. In a function, each input (x-value) can only produce one output (y-value). If you draw a vertical line on the graph, then this vertical line only crosses the function once at any given x-value. The Vertical Line Test is a method to determine if a graphed relation is also a function. To conduct the test, imagine scanning a vertical line from left to right on the graph of a relation. If at any point the vertical line crosses the relation twice, the relation fails the test, and it is not a function. Try this yourself: Use a graphing calculator to graph a relation. Use the vertical line test to see if the relation is a function.
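The same test can be phrased directly on a set of ordered pairs: a relation is a function exactly when no x-value appears with two different y-values. A small Python sketch (the function name is just illustrative):

```python
def is_function(pairs):
    """Return True if no input (x-value) maps to two different outputs."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False   # same input, two outputs: fails the test
        seen[x] = y
    return True

# The ordered pairs generated by y = 3x form a function...
print(is_function([(1, 3), (2, 6), (3, 9), (4, 12), (5, 15)]))  # True
# ...but a relation that pairs x = 2 with both 6 and 7 does not.
print(is_function([(1, 3), (2, 6), (2, 7)]))                    # False
```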
Linear functions are relations with a rate and a constant.
Kenzie makes $7 a week in allowance plus $2.50 per chore. You can represent the relation between the number of chores Kenzie does (x) and the number of dollars he earns in a week (y) with the equation y = 2.5x + 7. Each number of chores Kenzie does produces exactly one amount of allowance, so this relation is also a function. This function has a rate ($2.50/chore) and a constant ($7), which makes it a linear function. Try this yourself: Determine the amount of allowance Kenzie makes if he does 1 to 10 chores in a week, and then represent this function in a table of values, a set of ordered pairs, and a graph.
Use function notation to represent the relation between Fahrenheit and Celsius.
You want to make tanghulu, a treat made of fruit in a candy coating. To make the candy, you need to heat the sugar to 149°C, but your thermometer is only marked in Fahrenheit. Fortunately, the function f(x) = 1.8x + 32 converts a temperature in Celsius to a temperature in Fahrenheit. The function is written in function notation, and f(x) is read as "f of x", meaning a function of x. Function notation is helpful because it allows you to see the ordered pair directly from the function notation. For example, to calculate the temperature you need, first substitute 149 everywhere there is an "x", giving f(149) = 1.8(149) + 32. Then evaluate: f(149) = 300.2. In function notation you can directly see the x-value that produces each output or y-value: f(x) = y. You can easily write the ordered pair (149, 300.2). Now you know that you need to heat the sugar to 300.2°F when you make your tanghulu. Try this yourself: A fudge recipe requires you to cook the sugar to the soft-ball stage of 113°C; what is this temperature in degrees Fahrenheit?
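The same evaluation is easy to check mechanically; a minimal sketch of the conversion function, including the fudge try-it question:

```python
def f(x):
    """f(x) = 1.8x + 32: degrees Celsius -> degrees Fahrenheit."""
    return 1.8 * x + 32

print(f"{f(149):.1f}")  # 300.2, the tanghulu candy temperature
print(f"{f(113):.1f}")  # 235.4, the soft-ball stage for fudge
```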
Function
A relation that takes an input and gives one output. We can represent functions with a table of values, ordered pairs, a graph, an equation, function notation, or words.
Input
A value that we substitute into a function that is transformed by the function into the output.
Output
The answer given by substituting the input into the function.
Vertical line test
A method for determining if the graph of a relation is also a function. If a vertical line scanned from left to right on the graph only intersects the relation once at any time, the relation is also a function. If at any x-value the vertical line crosses the relation more than once, the relation is not a function.
Relation
Any rule that connects one variable to another. We can represent a relation with a table of values, a set of ordered pairs, a graph, an equation, or words.
Linear function
A function that, when graphed, forms a straight line.
A function is a relation where each input produces only one output.
Function notation makes it easier to see the input and output in a function. Function notation says f(input)=output, which makes it easy to write the ordered pair (input, output), and easy to graph,
since the input is on the x-axis and the output is on the y-axis.
A linear function has a graph that is a straight line. The function increases to the right if it has a positive slope (rate) and decreases if it has a negative slope (rate).
A function is non-linear if its graph is not a straight line. Some examples of non-linear functions are quadratic and cubic functions.
The vertical line test is a method for determining if a relation is a function using its graph on a coordinate plane. Imagine drawing a vertical line on the coordinate plane and scanning it from left
to right. If at any point the vertical line would cross the relation in two or more places, the relation fails the vertical line test, so it is not a function.
| {"url":"https://www.generationgenius.com/intro-to-functions/","timestamp":"2024-11-06T05:41:56Z","content_type":"text/html","content_length":"384409","record_id":"<urn:uuid:5fc3a556-0b15-421f-a6d9-245ddb825bff>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00651.warc.gz"}
Perfect Sine Waves At Particular Frequencies
Perfect Sine Waves At Particular Frequencies example essay topic
1,400 words
Samit Pabuwal, Jeff Toplak. EE 315 Function Generator, 12/12/2001.
1. Abstract
This project requires us to design and build a circuit that generates specified functions on the oscilloscope, as given in the project description. Also, we were required to design a means of selecting different frequencies and different amplitudes. This was especially difficult because the selection had to be made directly
project description. Also, we were required to design a means of selecting different frequencies and different amplitudes. This was especially difficult because the selection had to be made directly
from the breadboard. For the final part of the lab, we were to draw the graphs we obtained to show how the various functions were generated digitally.
The results we got were very sound. We were able to generate all of the minimum requirements (a square wave and ramp wave). Also, we were able to generate the waves that would give us extra credit (a
triangle wave and approximations to a sine wave). We also discovered a way to easily vary frequency and amplitude.
2. Introduction
This experiment uses four operational amplifiers (op amps) to deliver these waveforms in the 6 Hz to 7000 Hz range. The sine wave is a pseudo sine wave produced by a very simple wave-shaping circuit. A digital counter can be used along with a DAC to generate analog voltage functions of time, such as a square wave and a ramp wave. In this lab, we were to design and build a circuit that generates an analog square wave and a saw-tooth (ramp) wave of voltage, using a counter and DAC.
3. Design Process
While working on this project, we thought of alternatives that would increase efficiency. One alternative we considered was to switch to a different project. This came up when we were having trouble with the implementation of the circuit. However, we decided to stick with this project, because we had already put much thought and work into it. Also, we decided to use flip-flops to simplify the circuit.
4. Implementation
CIRCUIT DESCRIPTION
Square, sine and triangle waves are produced using an LM348 and passive components. The LM348 is a quad operational amplifier IC package; that is, it contains four separate op amps all in one IC.
They are marked A, B, C & D in the schematic diagram.
Square Wave. One op amp (LM348: D) is used. The voltage level at pin 13 is set by the resistor divider pair R1 and R2. The input to pin 12 depends on two things: firstly, the potential of pin 14, and secondly, the voltage output of op amp C at pin 8. When the input at pin 13 is higher than the input at pin 12, the output goes low. If it is lower, the output goes high. Switching back and forth between the two states produces a square wave. The time constant (R4 + R5)C2 determines the frequency.
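As a rough illustration of how that time constant sets the frequency, here is a back-of-envelope Python sketch. The component values are hypothetical placeholders (the essay gives no parts list), and the switching thresholds are taken at about two-thirds of the square-wave swing, as the triangle-wave discussion below explains:

```python
# Relaxation-oscillator frequency estimate: the integrator ramps at
# Vsq / (R * C) volts per second between the two comparator thresholds,
# and one full period is an up ramp plus a down ramp.
R = 10e3          # (R4 + R5) in ohms -- assumed value
C = 100e-9        # C2 in farads -- assumed value
v_square = 4.0    # square-wave amplitude about the midpoint, volts -- assumed
v_threshold = (2.0 / 3.0) * v_square   # switching points, ~2/3 of the swing

ramp_rate = v_square / (R * C)               # V/s at the integrator output
period = 2 * (2 * v_threshold) / ramp_rate   # peak-to-peak swing, both ways
print(f"estimated frequency ~ {1.0 / period:.0f} Hz")
```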
You can also consider that op amp D is set up as a bi-directional threshold detector with positive feedback provided by R 3. R 3 also gives hysteresis.
The output provides a bias, which tends to keep it in its existing state before allowing switching to take place. The inverting input is set up at about half the op amp output swing voltage by
resistors R 1 andR 2. Accordingly the signal required from op amp C to cause switching is offset from this midpoint voltage by R 11/ (R 11+R 3), which is approximately 2/3 the voltage from midpoint
to swing limit, and is symmetrical above and below the switching point. Op amp C is set up as an integrator. It performs the mathematical operation of integration with respect to time.
For a constant input the output is a constant multiplied by the elapsed time, that is, the output is a ramp. Since the input signal goes to the inverting input, a high input will produce a ramp down
and a low input will produce a ramp up. The input signal is a square wave symmetrical about the midpoint potential. The current this potential produces through R 4 and R 5 is constant so the up and
down ramps are of equal gradient and the resultant triangular wave is symmetrical.
Any increase in the trim pot R 5 reduces the current and the integration constant which lowers the gradient of the ramp. The switching levels have not changed so the frequency reduces while the
amplitude remains constant. In a similar way the current depends on the value of the integration capacitor. Accordingly, the integration constant, and hence the frequency, vary with the value of the capacitor (higher value, lower frequency, since the capacitor takes longer to charge). If C2, for example, is increased to, say, 680 nF, then the minimum frequency will be less than 1 Hz. The output triangle wave
does not require amplification but it does require buffering so that loading does not affect the waveform generator circuit. It is buffered here with op amp A connected as a unity gain buffer.
Unity gain is achieved by directly coupling the output back to the inverting input.
Sine Wave. A pseudo or imitation sine wave is produced by a wave-shaping circuit.
A diode is a non-linear device. As the potential difference across it increases the current rises in the characteristic way published in all textbooks. This circuit 'joins together' this
characteristic curve to produce an approximation to a sine wave. Two diodes have been joined together as a series pair in order to provide higher amplitude than would be obtained using only a single
diode. The shape of the pseudo sine wave could be improved at any particular frequency by filtering, but filtering will cause distortion at lower frequencies and loss of amplitude at higher
frequencies. You can have perfect sine waves at particular frequencies by switching inappropriate filters at those frequencies.
The sine wave is sensitive to loading and must be buffered. It is also low in amplitude and needs amplification. R9 and R10 set the gain of op amp B by forming a voltage divider between the source
and the output. If the wave shaper voltage is 1 volt higher than the reference (at the non-inverting input) the op amp reduces the output voltage until the inverting input voltage set by the divider
is equal to the non-inverting voltage.
The ratio of the values of R10 to R9 gives the gain. The gain here is about 2.5.
5. Testing
6. Evaluation
Overall, we were very successful in completing the objectives set forth by the project
description. We worked very well together to finish this project, and the following write-up.
Initially, we encountered a problem with developing the square wave. However, this problem was solved with the proper logic techniques. We were surprised to find that adding a flip-flop changed the
signal to a sine wave.
7. Summary
This project, in particular, is well designed. We learned much about flip-flops, frequency control and counters. We would recommend this project to any incoming students taking this class. If we were to do a redesign, we would draw the schematic before the actual construction of the circuit. This is very efficient and saves much-needed time, as well as improving organization.
ASSEMBLY INSTRUCTIONS
Identify all the components supplied in the kit against the components listing.
Make sure you get the 4 diodes and the integrated circuit (IC) around the correct way. Match the bar on the diodes with the bar shown on the PCB overlay.
WHAT TO DO IF IT DOES NOT WORK
Poor soldering
is the most likely reason that the circuit does not work. Check all solder joints carefully under a good light. Next check that the four diodes and the IC are in their correct orientation on the PCB.
Is the battery flat?
A cathode ray oscilloscope (CRO) is the ideal test instrument to check the operation of the Kit. | {"url":"https://essaypride.com/ex/perfect-sine-waves-at-particular-frequencies-e778e","timestamp":"2024-11-12T05:38:55Z","content_type":"text/html","content_length":"35013","record_id":"<urn:uuid:a7da497d-7c31-4415-9d1d-ec107a7743fc>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00125.warc.gz"} |
Readings in modern applied math
Random Kitchen Sinks
I’ve been away from this blog for years now(!); this is a post that’s been sitting in the pipeline. More to come.
In their 2007 NIPS paper, “Random Features for Large-Scale Kernel Machines“, Rahimi and Recht propose using random feature maps to facilitate dealing with nonlinear kernel methods over large training
sets. The problem is that if \(n\), the number of training examples, is large, then the optimization problem involves dealing with an \(n \times n\) kernel matrix. Also, the resulting classifier
(or regression equation, or what have you) requires the computation of \(k(x, x_i)\) for all the training points \(x_i\).
One way to mitigate these concerns is to use sampling methods like the Nystrom extension. The Nystrom extension and its ilk work by sampling then interpolating to approximate the kernel. Rahimi and
Recht’s idea is to use the fact that PSD kernels can be written in terms of feature maps, \(k(x,y) = \langle \phi(x), \phi(y) \rangle\). This implies that one could approximate the kernel \(k(x,y) \
approx z(x)^T z(y)\) if you chose \(z(\cdot)\) appropriately. Note that for general kernels, the feature map \(\phi\) may be infinite-dimensional, so if \(z\) could be chosen to be a finite
dimensional mapping, we’d arguably have a simplification of the kernel representation. More practically interesting: although we’d still be dealing with a large \(n \times n\) kernel matrix, we’d be
using a linear kernel, for which much more efficient optimization methods exist. Furthermore, applying the trained classifier (or regression equation, or what have you) would require only the
computation of \(w^T x\), where \(w\) is a vector in the span of \(z(x_i)\). Thus, finding an appropriate low-dimensional approximate feature map \(z\) would increase the efficiency with which kernel
methods are both trained and applied.
Rahimi and Recht’s key idea was to take \(z(x)\) to be a sum of unbiased estimators of \(\phi(x)\). Then concentration of measure ensures that uniformly over a compact space \(\Omega\), we have \(x,y
\in\Omega\) implies
k(x,y) = \langle \phi(x), \phi(y) \rangle \approx z(x)^T z(y).
In particular, they proposed sampling from the Fourier transform of shift invariant kernels to form such estimators. In more detail, if
k(x,y) = k(x-y) = \int_{\mathbb{R}^d} \exp(i \omega^T (x-y)) \rho(\omega) \, d\omega,
then one can approximate \(k\) with a finite sum:
k(x,y) \approx \frac{1}{K} \sum_{j=1}^{K} \exp(i \omega_j^T x) \, \overline{\exp(i \omega_j^T y)} = \frac{1}{K} \sum_{j=1}^{K} \exp(i \omega_j^T (x-y))
when \(\omega_j\) are sampled according to \(\rho.\)
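As a concrete instantiation (mine, not the paper’s code): for the Gaussian kernel \(k(x,y) = \exp(-\|x-y\|^2/2)\), the distribution \(\rho\) is again Gaussian, and the standard real-valued variant \(z(x) = \sqrt{2/D}\,\cos(Wx + b)\) with random phases gives an unbiased estimate of the kernel. A NumPy sketch (dimensions and seed are arbitrary):

```python
import numpy as np

# Random Fourier features for the RBF kernel k(x, y) = exp(-||x - y||^2 / 2),
# whose Fourier transform rho is a standard Gaussian.
rng = np.random.default_rng(0)
d, D = 5, 2000                      # input dimension, number of random features

W = rng.standard_normal((D, d))     # omega_j ~ rho = N(0, I)
b = rng.uniform(0, 2 * np.pi, D)    # random phases

def z(x):
    # z(x) @ z(y) is an unbiased estimate of k(x, y)
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)
approx = z(x) @ z(y)
print(exact, approx)                # the two values should be close
```

With a couple of thousand features the two printed values should agree closely, and a linear method trained on \(z(x)\) then stands in for the kernel machine.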
This is a neat idea, but one is left with the question: now we know how to approximate the nonlinear kernel matrix with a linear kernel matrix, but what can we say about how this approximation
affects our downstream application? Recht and Rahimi’s second paper, “Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning“, answers this question for
Lipschitz multiplicative loss functions of the form
\ell(c, c^\prime) = \ell(cc^\prime), \quad\text{with}\quad \|\ell(x) – \ell(y)\| \leq L \|x – y\|
and hypothesis space of the form
\left\{ f(x) = \int_{\Omega} \alpha(\omega) \phi(x; \omega) d\omega \,\big|\, |\alpha(\omega)| \leq Cp(\omega) \right\},
where the atoms \(\phi(x; \omega)\) are uniformly bounded w.r.t both arguments, and \(p\) is a distribution over \(\Omega.\)
The idea is that since the contribution of each atom is bounded (by \(C \sup_{x,\omega} |\phi(x; \omega)|\)) the approximation error in replacing the function in the true hypothesis space that
minimizes the loss by the function in the finite-dimensional approximation space
\left\{ f(x) = \sum_{k=1}^K \alpha_k \phi(x; \omega_k) \,\big|\, |\alpha_k| \leq \frac{C}{K} \right\}
that minimizes the loss is small with high probability with respect to the \(\omega_k\), which are selected i.i.d. according to \(p\). Concisely, the Galerkin method works when the basis elements are
selected randomly from an appropriate distribution. | {"url":"http://www.thousandfold.net/readingblog/?p=36","timestamp":"2024-11-05T21:48:39Z","content_type":"text/html","content_length":"31666","record_id":"<urn:uuid:a21c7cff-7d51-4141-8e95-8ada84562a5f>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00089.warc.gz"} |
Solving an equation
Yahoo users found us today by typing in these math terms:
│absolute value function on ti-83 │decimal to binomial convertor │dividing exponents calculator online │
│7th grade variables and expressions quizzes │Wild Crazy Halloween Contact Lenses │boolean algebra simplification │
│MATHS PROJECT SET LANGUAGE FOR NINTH STANDARD │multiply radical expressions │quadratic inequalities word problems │
│leaf area calculate by matlab │strategies for TAKS 9th grade math objectives │hyperbola tutorial │
│real life applications using the quadratic formula │transform metres in squres │examples of trivia in math │
│7th grade polynomial equation worksheets │different GCF of a numbers │"square root game,solution " │
│mcdougal littell modern world history notes │Maths for Dummies │yr 8 maths answers │
│cube roots algebra │Virtual Mall │(square root of x minus square root of y)/(square root of x plus square root of y) │
│integers worksheet │Work from Home Jobs │formula for squareroot of number │
│freebasic math │how to college fractional word problems │online factoring │
│free downlod indian history objective type question │writing exponents and square roots │Travel Kauai │
│Mcdougal Littell The World of chemistry computer test bank │Partial Factoring to find roots │ti-89 solve for nth term of taylor │
│algebraic formula to do percentages │6th grade algebra problems │math trivias │
│college algebra practice problems │how to find cube roots on TI-30X IIS calculator │properties of matrices to the exponent expression │
│WWW Computers │graphing algebraic equations using excel │math test GCSE │
│roots of in adding,subracting,multiplying&dividing │formula for ratio │online algebra problem solver │
│third grade math downloads │"Year 7 algebra " complex numbers │What is the hardest math question in the world? │
│algebra combining like terms practice problems │FREE algebra solver download │download aptitude test │
│solved aptitude questions │algebraic expression exercise beginner │fun math printouts for kids │
│arithmatic/square footage │graph linear eqation with one variable │dodecagon dimension calculator │
│Star Travel │algebra questions │permutation examples with answers │
│how to enter logs ti89 │free math answers problem solver │Free Parabola lessons and notes │
│Waycross Real Estate │square root algebra │physics free workbook │
│basic partial sums │Wife Marriage │applications of trigonometry in our daily life │
│Least Common Denominator with variables │california practice exam 8th grade download free │ks3 math questions │
│math Quadratics parabola grade 11 │dividing polynomials │quadratic formula used everyday │
│dividing root fractions │trig calculator │Free Math Problem Solver │
│algebra factoring calculator │sciencetific notations │graphing linear equations in excel │
│Stock Plays │simplify the cube root │matrices ti 84 program │
│aptitude test books for cat,gre free download │how to solve exponents │ONLINE COLLEGE ALGEBRA CALCULATOR │
│fundamental of trigonometry 9th edition answers │sales aptitude test + free download │finding the y interceptor in algebra │
│free prealgebra math practice tests │addition of algebraic expressions free │quad root with ti-83 │
│Evaluating an Algebraic Expression Program Graphing a Sine Function Program TI-84 │pre algebra help │evaluate expression worksheet │
│gcse maths foil expand double brackets │common algebra mistakes │solving by elimination │
│Software Utilities │TI-89 trig equation solver │basic math cheat sheets │
│Solving Inequalities │answer my algebra problem │solving equations - activities for Algebra I│
│common denominator 5, 9, and 27 │Heath algebra 1 an integrated approach copyright 1998 teacher edition │balancing equations maths │
│US Travel │find equivalent form of radicals │how to solve simple algebra equations with ti89 │
│7th grade pre-algebra cupertino │free 7th grade tutoring worksheets │Intermediate algebra problem solvers │
│algebra test │Holt, Rinehart and Winston GA Test Prep Grade 8 │complex numbers practice sheets │
│QUADRITIC EQUATION 7th grade │simplify algebra │algebra: create Equation │
│trinomial solutions │free online aptitude test for 11th students │free Algebra I practice games │
│c aptitude questions │ninth grade tutorial worksheets │grade 10 dividing radicals help │
│advanced typing tuter soft ware │graph 3y=3x-9 │trignometry for class 10th .ppt │
│ti 84 plus inequality solver │online factorization program │online test for problem on ages │
│Work │learn how to solve complex numbers │college algebra multiplication of unlike signs │
│how to add fraction using least common denominator │free math sheets for seventh graders │seventh grade honors math problems to do │
│students │free algebra math paper │Printable Math Sheets First Grade │
│rational equations calculator │a level hard quadratics │cardano ti-83 │
│Wisconsin Reverse Mortgage │domain of quadratic equations with cubes │free math answers radical operation │
│Free help on how to do +equivalant fractions │simplify cubed roots │does a variable represent any number │
│maths for dummys │free online ti 83 calculator │dividing and multipling decimals worksheet │
│free maths aptitude questions and answers │algebra worksheets for level 1 primary children │math calculator simplify radicals │
│aptitude test for 6 th grader │calculator math problem 6th grade │how to calculate base numbers in a ti 83 │
│hours algebra equation │operations with functions radicals │class VIII-math learning │
│linear equalities calculator │Summerlin Real Estate │hard equations │
│Aptitude questions on probabaility │free worksheets factor gcf polynomials │free printable worksheet using math formulas│
│what calculator can be used to factor polynomials │vector equation Maple │sixth grade algebra practice │
│worksheets on compound inequalities │VoIP WiFi │equation for percentages │
│grade 9 math exam practice sheets │clep guide pdf │problems permutations and combinations (chapter 5) │
│Stuttgart ISP │adding or timesing games │highest common factor worksheets │
│geometrical in real life │GED Word Problems Free Worksheets │Maths Simplification problems for Bank jobs │
│A homework sheet with math sums on it │free algebrator upgrade │simplifying radical functions │
│converting seconds of latitude into metres │Practicing Mathematics books download │algebra formulas │
│fluids statics exam test paper │pre-algebra book │Sets and Subsets in Algebra.ppt │
│maths quizzes on algebra │mix numbers │calculating common factor │
│Student Loan Servicing Center │prealgerbra │how to convert decimal square roots │
│8th grade science worksheets │State the rule in adding subtracting polynomials │equation to convert decimal system to binary system │
│adding fractions test │lowest common denominator solver calculator │Free pre algebra math worksheets │
│help with solving systems with equations with only one solution │factoring polynomials with four or more terms │fractions and cube roots │
│Wisconsin Mortgages │non-linear differential equations in matlab │free mathematics for high school worksheet │
│use rational exponents to simplify │how to do cubed on a ti-83 │factorising quadratic equations with 2 unkown values │
│NC EOC Algebra 1 review PowerPoint │discrete mathmatics │Free Fifth Grade Worksheets │
│grade 10 geometric algabra help │importance of algebra in your everyday life │quadratic formula program ti-84 │
│learn algebra free online │using the calculator to solve integration │free 9th grade geometry printables │
│least common multiple expressions │Rational Expression Calculators │Yacht Cruises │
│math 10 apps formula sheet │Singles Dining │Converting formulas │
│math problems - simplifying quotients │excel algebra graphing │scale factor │
│everyday examples of quadratic equations │Wavefront Laser Eye Surgery │Small Business Opportunity │
│Writers Health Care │simultaneous equation solver │does anyone remember the ged math problems │
│Star London Hotels │square root graph │converting decimal to fraction with TI-82 │
│summing numbers using loops in java │java algebra games │polynom coefficients from x and y in C++ │
│fluids statics exam testpaper │algebra chart using fractions in simplest form │first grade math lesson inequalities │
│Lowest Common Multiple for kids │solving for difference quotient │polynominal │
│how to simplify a root within a root │ks3 sums │solving 4th grade equation │
│good pre algebra books for practice │Is there a basic difference between solving a system of equations by the algebraic method and the graphical method? Why (i.e., what’s the key difference between the two methods. │program to calculate lcm │
│algebra story problem formulas │conjugate of cubic roots │year 7 maths equations │
│Should I take Intermediate Algebra before College Algebra since I have been away from school a long time? │introducing algebra powerpoint │Objective math books │
│balancing equations helper │mixed numbers as decimals │decimal to a mixed fraction │
│steps in getting square root manually │solving for two unknowns using TI-89 │select an odd integer, square it and │
│ │ │subtract 1 │
│math year 10 cheat sheet │Used Computers │coordinate with algebra lesson plans │
│MASTER Algebra II online │"too old to learn math │Ratio Formula │
│prentice hall pre algebra book answers │"online simplification" boolean │3rd grade worksheets printable │
│free calculator for algebra │math trivia with ansers │TI 84 complex base conversion │
│car depreciation formula algebra │excel solving quadratic equations │mathematical questions simultaneous equations │
│73530341853416 │printouts for school │free ebook download accounting │
│Talking Books │proportions worksheet 6th │printable math exercises for grade 1 │
│online free science papers │solved aptitude test papers │Tenants Insurance │
│percentage formuals │trivia questions for 8th graders │ti-84 polynomial solver download free │
│adding and subtracting integers worksheets │easy equations worksheets │MY EIGHTH GRADER IS STRUGGLING WITH MATH │
│Texas TI-84 Pascal Triangle │ROM wizard TI calculator emulator │math discret tutorial │
│physic problem solver │multiplying and dividing rational expressions calculator │simultaneous equations 3 unknown lesson ppt │
│simplifying radicals │ti 83 calculating y value on graph │substitution worksheet maths │
│math cheats for sixth graders │Unique Christmas Gifts │aptitude sample test papers │
│change mixed numbers to decimals │Past Exam papers - grade 9 │implicit differentiation online calculator │
│math+1 metre+straight+line+90degree+turn │What calculator can I use to factor polynomials │basic english quiz questions and answer for kids │
│solve any algebra problem │accounting books+download │scale factor teaching 8th grade math │
│Temp Employment Agencies Charlotte │grade 9 polynomial questions │solve algebra steps │
│hardest mathematical equation │example math trivia │Teacher Material │
│david c. lay e-book solutions manual free download │Wedding Favors │examples of trinomials with 1 variables │
│number sequences solver │Who Invented │"simultaneous equation solver" │
│Ski Holiday Insurance │why casio calculator do not convert decimal to fraction │prentice hall prealgebra powerpoint │
│easyb way to learn the rules of algebra │permutation and combination tricks │solve nonlinear differential equation │
│similar terms in algebraic expressions │how to solve systems of linear equations by the addition method │define subtracting fraction │
│free factoring binomials calculator │worksheets about numeric pyramids │teaching strategies to divide whole numbers illuminations │
│glencoe math course 2+online textbook │ks2 online free exams maths │how solve word problems on measurement fractions with solutions and answers. │
│Vision Correction Castle Rock │graphing linear equations worksheets │free high school teacher algebra textbooks │
│Algebrator │nonlinear simultaneous equation algorithm java │free geometry worksheet for grader's sixth │
Yahoo visitors found us yesterday by using these math terms:
Rudin principles mathematical analysis solution manual download, Free Fraction Reciprocal worksheets, difference quotient calculator, formula for decimal to fraction.
Online equation solver, fifth grade math prentice hall, Sun Equipment, algebra 2 worksheets.
Cost accounting books download, factoring online, free use of Ti-84 Plus.
Sample aptitude test papers, percentage + gcse worksheet, Subtracting Polynomials free worksheet.
Rational expression equation calculator, real trigonometric problems in singapore, cube root worksheet.
Inverse log ti 89, 3 equations 3 unknowns, how to solve simple aptitude, sat pre algebra terms, special product and factoring.
7th grade math free tutorial, rules in adding,subtracting,multiplying and dividing real numbers, algebra activities for fifth graders, simultaneous equations quadratic three unknowns, square root
decimals quadratic equation, question helper Perfect world download.
Worksheets to study for the compass test, Sumincom Mini PC, multiple, factor maths games, Algebra2 workbook answer, free grade 9 printable math sheets, State Education.
Online integration caculator, Accounting books for learning free download, How do I multiply fractions w/different denominators?, real world mathimatical modeling, math practise, writing math symbols
on matlab graphs.
Finding exponents, free online maths questions for class 6, math poem rational, example sentinel value java.
Free sixth grade math lessons practice, convert to radical form, mixed number into a decimal, algebra worksheets variables, contribution of mathematicians in the field of cube roots, formatting ti=83
Algebraic Factoring Tool, TI-89 +FINDING RATIO, solve polynomial ti-83 in program, general differential heat equation, gmat math facts worksheets, Apptitude mathematical related exam model paper mcq
questions, solving system for equations for 9th grade.
HOMEWORK SHEETS, how to enter a log with base TI-83 calculator, Simplify radicals by using division online, hw+solution+ a graduate course in probability, free nyc first grade math assessment.
Square root property algebra, free downloadable sample algebra tests, alebra help, order of operations fifth grade math, free pre-algebra pdf, free introdutory to algebra online games.
1st grade math sheets, Starting a Franchise, algebra 2- quiz sample, north carolina 6th grade woorksheets, trigonometry problems on volume for tenth grade, free 8th grade math problems.
Cost accounting for dummies, difference between lowest common multiple and highest common factor, Stores Houston TX, Simplify radicals by using division online calculator, concepts involved in
algebraic equations, download aptitude questions, Williamson Lawyers.
Everyday math tutorial grade six, example to make a gragh about school, percentage formulas, college algebra tutor.
Intermediate algebra help, Algebra Dummies Free, quadratic equations + applications, rules for multiplying radical expressions, simplified radical form, basic algebra equation balancing.
Printable free 8th grade math, cost accounting, online tutorial, free, "simplifying" "fraction" "absolute value" "denominator", programming code to solve a polynomial equation, ti rom, factoring
Printable maths test level c, math help in finding slopes, Online study 9th grade, Hardest equations and geometry for grade 9, non linear equation system matlab code.
Development of quadratic equation, Vegetarian Atkins Diet, linear algebra done right solutiona manual.
Log bases on ti 83, work an algebra problem, exact trig addition examples, Kumon online answer books, Free Algebra solver, free online test on fundamental concepts grade 8, worksheet for sixth
Statistics year 8 test, real life quadratic functions, Order of operations picture, daily algebra.
Free online three dimensional calculator, trigonometry in daily life, free math calculater software, Algebra 1 Ohio Edition Glencoe McGraw.
TI 89 calculator for chemistry equations, log, factorising problem solver, classifications of algebraic expressions, homework and solution Dummit Foote, "prealgebra lesson plans", Timi Trial.
Examples of cramer's rule, fraction in the linear equation and solve system algebraically, algebra worksheets like terms.
Solve square root manually, Multiplying Matrices, answers to McDougal Littell worksheets.
Free ks3 math sat papers, United Health Care Insurance Company, how to solve problems based on probability, solve radicals, UK Cheap Business Web Hosting.
Graphing inequality online calculator, Technical Consulting, printable mix maths test 3rd online for free, grade 11 trigonometry sample questions, ti 84 plus app complex.
Math problem solver or convertor, high school algerbra, what are the roots in adding,subracting,multiplying&dividing?, answers to intergrated algebra1 regents, binomial formula ti-83.
Basic Math formulas cheat codes, how to solve an algebra equation for kids, 7th grade polynomial equation worksheets foil, printable algebra assignments for 6th grade.
Small Business Book Keeping Software, free answers to accounting homework mcgraw hill, college algebra trivia, multiply rational expressions calculator, nj pre-algebra readiness test, what is a
factor of 4 maths kids.
Transportation Stocks, algebra1 cheat, basic maths solutions for ninth class.
Integration of two variable using matlab, aptitude question and answer, x2-Test TI-84.
When do we use greatest common factor in real life, Talmudic Law, easy learn prime factors, Teaching Material, 7th grade free math tutorial and download.
Dividing exponents calculator, Multiplication Property to simplify radical expression Calculator, clep test for college algebra in dallas texas.
Topics in algebra herstein download, Training Education, college algebra practice guide for clep, factoring by group solver.
Algebra equation in Matlab, "simplified radical form", algebra intermediate completing the square, elementary algebra polynomials worksheet, how do you solve multiplication & division of rational
expressions, Algebra formulas with variables, sample 6th grade math lesson plan.
Math Trivia Questions, Small Business Health Insurance Coverage, mode best "mathematical formula" charting statistics, convert dec - fraction formula, Small Loans, intermediate algebra fifth edition
Real world matimatical modeling using partial differential equation, ral life example of a quadratic equation, procedure for converting from decimal notation to fraction notation.
WWW Prepaidlegal com, "multiple equation" pocket, free kids adding and subtracting math games, algebra midpoint graphs to copy, solving square roots with exponents, radical expression calculator.
Multiple factor math games, Work at Home Computer Based Business, online integral solver.
Aptitude questions asked in software companies, t1-83 absolute value button, investigatory problem, electrician algebra lessons, Vancouver Low Price Car Rental, cube root with ti-83, Visalia Mortgage
Cost accounts freebooks, printouts for 6th grade, teaching fractions first grade print outs, square root worksheet, combining three chemical equations using hess's law, lineal square metres convert.
Canada Grade 9 Math, free online math programs, high school textbook ontario, free calculator simplifying linear equations, teaching identify like terms, world's hardest math equation.
Advance pre-algebra recommendation for 8th grade workbooks, 8th grade worksheets, grade 11 math exam cheat sheet, how to do log base 10 on a calculator, solve fraction on ti83, graphing cheat.
Galois KS3, Powerpoint Presentation in College Algebra, simple algebra substitution test, 9th grade math made easy, download physics gcse practice papers plus answers, algebra inequality, power
point, ks3.
Where can i take a free math algebra test, complex numbers practice sheet, Modern Chemistry cheat keys, Week End Business, algebra solutions for 9th.
Solve by elimination method, aptitude online book, base on a ti-83, cayley-hamilton example, worksheets KS2 pythagoras theorem.
Cases special products on algebra, math book for 6th graders elementary, free 6th grade worksheets online, kids maths aptitude, math help cramer's rule.
How to do logarithm TI 83 calculator, Stature Software, math homework solver free download.
Free pre algebra formula, online 9th grade math book, samples of entrance exams for fifth graders, 9th grade pre alg.
Online algebra problems, 8th grade inverse operations worksheets, Alberta Grade 8 Practice Exam.
Algebra 2 solver calculation, differential equation second order types, Stocks and Mutual Fund, printable 6th grade math, college algebra.
Alegebra I help, Wallace Attorney, algebraic fractions help, maths-how to solve ratio, free math worksheet 8 grade, free learning algebra.
Free Algebra games, positive and negative fractions worksheet, Year 8 algebra exercises and solutions, simplify algebraic equations, are there free sites to teach you honors pre-algebra.
Symmetry worksheet printable, boolean algebra calculator inverse, class tenth math quiz- India.
Maths exam papers for adults, Simplifying Rational Expressions Step by Step, solve summations online, free ks3 test papers to do online, worksheets on addition on time, simpifying radical
Solving 3 system of equation excel, clep question bank, 5th grade fraction lesson plans simplify, trigonometry word problems worksheets, hardest add math question, maths tests for year 8, math
formulas for figuring coordinates.
Free module 3 biology online past papers, dividing algebraic terms, step by step on how to solve math problems, women evil formular.
Free classic books for 8th graders, radical expressions, equations, and functions, Work at Home Info, what is discriminant in college algebra, exercices on modern algebra, fx115MS manual complex
Free books of statistics, conversion table TI 89, graphs, circles, hyperbola, powerpoints on factoring, download free aptitude questions.
Silver Wedding Gifts, nth term fraction quadratic, how to solve quadratic equations from foundations of maths second edition, allgebra math sheets.
JACOBS ALGEBRA REVIEW, first grade math sheets, addition and subtraction of rational expressions calculator.
How Many Teaspoons Is 5ml
You’re wondering how many teaspoons is 5ml, right?
Cooking can be an exciting and creative way to express yourself! Understanding the measurements of ingredients used in cooking can be overwhelming – especially when you’re trying to convert between
milliliters (ml) and teaspoons.
Whether you are a professional chef, an amateur cook, or just beginning your journey into learning how to cook, this article will give you all the information you need on understanding and converting between ml and teaspoons.
With clear explanations, helpful tips, and useful examples, by the end of this blog post, you’ll understand exactly how many teaspoons is 5ml!
What Are Teaspoons in Metric System?
To understand how many teaspoons is 5ml, it’s important to know how teaspoons are measured in the metric system.
A teaspoon is a unit of volume measurement that is widely used for measuring small amounts of food, medicine, and other ingredients. It’s also important to note that there are different types of
teaspoons available on the market with different sizes and capacities.
What Is A Milliliter in Metric System?
In the metric system, a milliliter (ml) is a unit of measure for liquid volume. One milliliter is equal to one-thousandth of a liter, or 0.001 liters. It's also worth noting that one milliliter is equivalent to about 0.0042 cups in the US customary system.
Spooning Up the Wrong Dose
Read on to learn how many teaspoons is 5ml!
How Many Teaspoons Is 5ml?
Now that you know how teaspoons and milliliters are measured, you can calculate how many teaspoons is 5ml!
The answer to this question is quite simple: one teaspoon. This number is based on the regular size of a teaspoon on the market, which is 5ml.
It’s important to note that the size of teaspoons can vary, so if your teaspoon is larger or smaller than this number, then you may need to adjust how many teaspoons is 5 ml accordingly.
How Many Teaspoons Is 5 Ml of Thin Liquids?
When it comes to how many teaspoons is 5ml of thin liquids, the answer is still one teaspoon. However, some thin liquids such as water and oil may need to be measured differently due to their consistency. In this case, you will need to use a different method for measuring how many teaspoons are in 5ml of these types of liquids.
How Many Teaspoons Is 5ml of Thick Liquids?
In case you are measuring thicker liquids like honey or syrup, the answer to how many tsp is 5ml is, by volume, still one teaspoon. However, since the viscosity of these liquids is higher than that of water or oil, some of the liquid clings to the spoon, so fill and level it carefully to measure accurately.
How Many Tsp Is 5ml of Dry Ingredients?
In addition to how many teaspoons is 5ml, measuring how many tsp is 5ml of dry ingredients may also be necessary.
For dry ingredients such as sugar, flour, and spices, how many teaspoons are in 5ml? The answer is that one teaspoon will usually suffice. However, make sure the spoon is level: with a dense ingredient, a heaped teaspoon holds noticeably more than 5ml.
How Many Teaspoons Is 5 Milliliters of Medicine?
It’s also important to consider how many teaspoons is 5 mil of medicine, which can vary depending on the type of medication you are taking.
For liquids such as cough syrup or liquid pain reliever, how many tsp is 5 ml? Generally speaking, one teaspoon will be enough for this measurement. However, it’s always best to consult with a doctor
before taking any kind of medication to ensure you take the right dosage.
Milliliters to Teaspoons Conversion Chart
If you’re still having trouble understanding how many teaspoons is 5ml, then take a look at the following conversion table. This should help make it easier to convert between milliliters and
Milliliters Teaspoons
1ml 0.2 tsp
2ml 0.4 tsp
3ml 0.6 tsp
4ml 0.8 tsp
5ml 1 tsp
Tips on Converting Teaspoons to Milliliters
When it comes to how many teaspoons is 5 ml, it's important to know how to accurately measure the amount of liquid or dry ingredient you need. Here are some tips for converting teaspoons to milliliters:
• Use measuring spoons and cups with markings in milliliters. This ensures that you’re using accurate measurements when transferring ingredients from one container to another.
• Make sure that your teaspoon is level before measuring a teaspoon of an ingredient so as not to over-measure or under-measure the ingredient.
• If you don’t have access to measuring utensils with ml markings, then use a calculator or online converter tool to make the conversion easier.
Now that you know how many teaspoons is 5ml, it’s time to start measuring accurately! Whether you’re cooking or taking medicine, understanding how to convert between metric units of measure can go a
long way in helping you get accurate results.
With this knowledge from Irving Diner, there’s no need to worry about over-measuring or under-measuring ingredients anymore!
FAQs of Converting Tsp to Ml
What should I use to calculate how many teaspoons is 5ml?
You can use measuring utensils that are marked with both teaspoons and milliliters.
Can I do the calculation of how many teaspoons is in 5 ml by calculator?
Yes, you can also use a calculator or online converter tool to make the conversion easier.
How many teaspoons are 5ml of soup?
For liquids like soup, how many teaspoons are 5 ml? The answer is one teaspoon. However, it's important to take the soup's viscosity into account when measuring: thicker soups cling to the spoon, so level it carefully to measure accurately.
Is there any formula to calculate tsp in 5ml?
Yes: since one metric teaspoon is 5ml, divide the number of milliliters by 5 (or multiply by 0.2) to get teaspoons. A conversion chart or calculator makes this even easier.
How many sizes does a teaspoon have?
Measuring spoons commonly come in several sizes, such as 1/8 tsp, 1/4 tsp, 1/2 tsp, and 1 tsp.
Why should I calculate how many teaspoons is 5ml?
Accurately measuring how many teaspoons is in 5 ml of liquid or dry ingredient ensures that you get the desired result when cooking or taking medicine.
How many tsp is 5 ml of chocolate syrup?
The answer is one teaspoon. However, if the syrup is thicker than usual, then you may need to adjust how many teaspoons are needed accordingly.
What can I measure in both milliliters and tsp?
You can measure both liquids and dry ingredients in both milliliters and teaspoons.
How can I convert teaspoon to ml?
You can use a conversion chart or calculator to help convert teaspoons to milliliters. It’s also helpful to have measuring utensils that are marked with both teaspoons and milliliters.
Can I count how many teaspoons is 5 ml by hand?
Yes, you can count how many teaspoons is 5 ml by hand, but it’s important to ensure that your teaspoon is level before measuring.
What should I do if my measurement of how many teaspoons is 5ml fails?
If your measurement of how many teaspoons is 5 ml fails, then go back and recheck the amount you used.
How much do 5 mils of milk weigh?
5 milliliters of milk weighs roughly 5 grams, since milk is only slightly denser than water. The exact figure can vary a little depending on the type of milk.
What happens if my calculation of how many teaspoons is 5 mils fails?
If your calculation of how many teaspoons is 5 ml fails, your recipe may not turn out as expected. It's important to double-check your measurements and use accurate measuring utensils for the best results.
Is there any way to get the exact measurement?
Yes, you can use measuring utensils that are marked with both teaspoons and milliliters to ensure accuracy when transferring ingredients from one container to another. You can also use a calculator
or online converter tool to make the conversion easier.
How can I check my measurement of how many teaspoons in 5ml without a calculator?
You can check how many teaspoons is 5ml without a calculator by using measuring utensils that are marked with both teaspoons and milliliters.
{"url":"https://lassho.edu.vn/how-many-teaspoons-is-5ml/","timestamp":"2024-11-12T14:05:59Z","content_type":"text/html","content_length":"61162","record_id":"<urn:uuid:6582dc07-5f4a-4f0e-b1f2-1c5947e8874a>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00051.warc.gz"}
Monte Carlo Simulation
Estimation is part of project management.
The most important estimates for the project manager are related to time and cost.
And remember your High School Economics class - the Value of something cannot be determined alone; you need to know the Cost to acquire that Value and the Time that Value will be available for its use. Showing up late and over budget diminishes the value produced by the project.
Since it is easier to estimate small tasks, these estimates are often calculated and performed as point estimates, for example, a task will take 3 days. Or perhaps as an estimate with two-point
ranges. A task will take between 2 and 5 days.
When there are a number of tasks, each with possible dependencies on other tasks there is a problem. You simply can't add up the duration and the ranges of those durations. Since each task may or may
not be dependent on other tasks, and when one task is being modeled on the low end of its range, another task may be being modeled on its upper range of possible values.
When we hear the term Monte Carlo we think of the gambling center in the country of Monaco. The scientific study of probability concerns itself with the occurrence of random events and the
characterization of those random happenings. Gambling casinos rely on probability to ensure, over the long run, that they are profitable.
For this to happen, the odds or chance of the casino winning has to be in its favor. This is where probability comes into play because the theory of probability provides a mathematical way to set the
rules for each one of its games to make sure the odds are in its favor. As a simulation technique, Monte Carlo simulation relies on probability. [10] [18]
Monte Carlo simulation, also known as the Monte Carlo method, originated in the 1940s at Los Alamos National Laboratory. Physicists Stanislaw
Ulam, Enrico Fermi, John von Neumann, and Nicholas Metropolis had to perform repeated simulations of their atomic physics models to understand
how these models would behave given a large number of uncertain input variable values. As random samples of the input variables were chosen for each simulation run, a statistical description of the
model output emerged that provided evidence as to how the real-world system would behave. That real-world system was the first atomic bomb. [13]
It is the repeated random sampling model of the input variables over many simulation runs that defines Monte Carlo simulation. The result is an artificial world (model) that is meant to closely
resemble the real world in all relevant aspects. [8]
So before we proceed, let's look at a definition of Monte Carlo Simulation in the project domain, so we don't have to decipher someone else's definition designed to obscure the actual one.
A Monte Carlo Simulation is “a problem-solving technique used to approximate the probability of certain outcomes by running multiple trial runs, called simulations, using random variables.” ...
It then simulates the completion of remaining work and produces a histogram showing the distribution of possible delivery dates.
The Monte Carlo Method
Monte Carlo Simulation has four steps, no matter the domain or the problem:
1. Define the distribution of possible inputs for each input random variable.
2. Generate the inputs randomly for those distributions.
3. Perform the deterministic computation using that set of inputs.
4. Aggregate the results in the individual computation into a final result.
This is the simple but powerful process of Monte Carlo Simulation that is universally applicable from development projects to nuclear physics (my original domain) to molecular plant biological
processes (our son's domain), to financial planning (our broker's domain), to the modeling of human behaviors of special needs clients (our daughter's domain).
This approach allows you to model systems in the future using past data OR using models of what the future might be if the system hasn't been done before.
The notion that Monte Carlo Simulation cannot be applied to a single project is simply wrong.
Let's look further.
What Is Monte Carlo Simulation of Projects
Monte Carlo Simulation started with Buffon's Needle Problem which says...
Let a needle of length L be thrown at random onto a horizontal plane ruled with parallel straight lines spaced by a distance d from each other, with d > L. What is the probability p that the
needle will intersect one of those lines?
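For d > L the answer works out to p = 2L/(πd), which makes the needle a nice first Monte Carlo exercise: simulate random throws, count the crossings, and compare with the closed form. A minimal Python sketch (taking L = 1 and d = 2, so the theoretical value is 1/π ≈ 0.3183):

import math, random

def buffon(L=1.0, d=2.0, trials=1_000_000):
    hits = 0
    for _ in range(trials):
        y = random.uniform(0, d / 2)            # needle center to nearest line
        theta = random.uniform(0, math.pi / 2)  # acute angle with the lines
        if y <= (L / 2) * math.sin(theta):      # the needle crosses the line
            hits += 1
    return hits / trials

print(buffon())                   # simulated probability
print(2 * 1.0 / (math.pi * 2.0))  # theoretical 2L/(pi*d)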
Monte Carlo simulation on projects examines all paths through the network of activities, or all possible states of the project, for the duration, cost, and risk that create impacts on duration and cost.
It provides an accurate estimate (within the confidence intervals) of the overall duration of the project schedule for that work and of the impact of risk on that cost and schedule.
As well, it provides a sensitivity analysis for all the interacting tasks.
Let's look at a notional project where the tasks are interconnected and dependent, with predecessor and successor relationships to each other, like the project below.
Each work activity in a discrete model will have an estimated duration - a scalar number, usually measured in days. Since all projects operate in the presence of uncertainty, this deterministic
duration is not likely to have much credibility in actual use. For traditional projects, a Monte Carlo Simulation creates a list of durations from the Probability Distribution of a specific duration
for a specific task.
This probability distribution can be built from past data for similar work, like the PDFs shown above, which have different shapes depending on the type of work. Or it can be pre-defined for a shape
and an upper and lower range for that shape. In the simplest approach, when we know little about the past performance of the work, a Triangle distribution provides a confidence that isn't too optimistic on the lower bound or too pessimistic on the upper bound. Using this past data is called Reference Class Forecasting [14].
When there is no past data available, a second approach can be used. This involves assigning ranked ranges around the most likely value of the variable. Here's an example for a spacecraft, with ranked ranges around the most likely value using a Triangle distribution.
In this case, the Business as Usual range is -5% on the low side and +10% on the high side around the most likely value for the duration, cost, or some technical performance parameter. Business as usual with some new technical processes gets a bit wider. Flight software is always an issue where we work, so those ranges are wider still. Putting all the components together into a working system is fraught with uncertainties, so a wider range is used. Getting the software qualified has about the same variability as getting it certified, so the same range is used. But the big problem comes when the spacecraft goes to the Thermal Vacuum chamber. That is modeled as -5% to +175%.
These assignments are made initially during the proposal, then updated monthly for the reality of the project's performance. The proposals I work on usually require an 80% confidence of on or before for schedule and at or below for cost. Monte Carlo Simulation tools are the heart of this work.
This is a Closed Loop Control System for managing the performance of a Software Intensive System of Systems, all developed using Scrum.
The triangular distribution can be used when we have no idea what the distribution is but have some idea of the minimum value, the maximum value, and the most likely value of the variable. The Triangle distribution is a good place to start since it approximates the log-normal distribution found in many naturally occurring processes. The uncertainty in work effort on projects is such a process, derived from the aleatory uncertainty of humans performing technical work.
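To make the four steps listed earlier concrete for a schedule, here is a minimal Python sketch: each task gets a (low, most likely, high) duration sampled from a Triangle distribution, the deterministic computation is a simple sum, and the runs are aggregated into a P80. The task values are illustrative only, and a real schedule follows the network's predecessor logic rather than a straight sum:

import random

tasks = [(8, 10, 15), (4, 5, 9), (18, 20, 35)]  # (low, most likely, high) in days, made up

def one_run():
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

runs = sorted(one_run() for _ in range(10_000))
p80 = runs[int(0.80 * len(runs))]
print(f"80% confidence of finishing on or before {p80:.1f} days")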
When you run the Monte Carlo Simulation tool (Risky Project in this case), you get a chart that looks like this. This chart is the result of the MCS for a complete project, that is for the end of the
project. A similar chart can be produced for any specific task in the project with the same results.
The chart shows the probability distribution of the completion dates for the project, when the durations for all the work on the project are defined as most likely value, the upper and lower limits
of those durations, and the PDF of the curve that the Monte Carlo tool is going to produce samples from - this is usually a Triangle curve for me, since it gives credible values with the least
amount of work.
Let's Look At Some Myths of Monte Carlo Simulation
Here are some common myths, misunderstandings, and willful ignorance of the use of probability and statistics, which are the basis of Monte Carlo Simulation, when making informed decisions about how to manage a software project.
Statistics don't apply to single events. Stats don't make sense in a single event
In the presence of uncertainty, a single event in the future - like the delivery date or the cost to deliver that outcome, Monte Carlo Simulation is THE tool to be used to develop a confidence and
accuracy model of that future event. All that is needed is to know...
• The Most Likely value that event could take on. If you have No idea what that most likely value might be, there are several ways to come up with that answer, starting with wideband Delphi. Here's an example: How To Estimate Almost Any Software Deliverable in 90 Seconds.
• Then use a simple Monte Carlo tool that you can find on the web for Excel. RiskAmp is one I like.
• Then you'll be able to show the probability of occurrence over the variable's range of possible values and debunk that statement, since Statistics DOES apply to a single event when you
can answer the question
What's the probability of completing this work on or before the due date, given I know something about what the work entails, what the dependencies of the work are, and what the ranges of the random variables that drive the work are?
If you don't know the answers to these broad statistical questions, your project is doomed before you start, assuming you actually have a due date, a not to exceed budget, and a minimum set of
capabilities for that time or budget. If you don't have those, we'd call that a de minimis project - meaning no one cares when you show up, how much it costs, or what you'll deliver. Nice work if
you can get it.
Here's an example from an actual project - the Wright Brothers Army contract for the Wright Flyer
Here's their schedule (this is the MSFT Project version, but we had access to the Smithsonian archives and recreated this schedule from their notebooks).
From the notebooks, they made estimates for the reducible and irreducible uncertainties to be assured they could meet the contractual dates
Using an estimating technique of their own, we've recreated a Monte Carlo Simulation of the cost and schedule targets in the contract. Here's the cost model, given the schedule, and the cost loaded
activities, with a single upper/lower range of -10%/+20% (we were lazy). With that you get a confidence (an 80% probability, referred to as a P80) that the work will come in under $20,000 under some assumptions, shown in the picture below.
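A hedged reconstruction of that kind of cost roll-up in Python, applying the same lazy -10%/+20% Triangle range to each activity's most likely cost. The activity costs here are placeholders for illustration, not the notebook figures:

import random

most_likely = [4_000, 6_500, 3_500, 3_000]  # placeholder activity costs, dollars

def total_cost():
    return sum(random.triangular(0.90 * c, 1.20 * c, c) for c in most_likely)

runs = sorted(total_cost() for _ in range(10_000))
print(f"P80 cost: ${runs[int(0.80 * len(runs))]:,.0f}")
print(f"P(at or below $20,000): {sum(r <= 20_000 for r in runs) / len(runs):.0%}")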
Orville and Wilbur needed to show up on time as well as on budget. They had to deliver the Flyer on or before September 30, 1908. So they needed a schedule with enough margin to protect that date.
The Monte Carlo Simulation of the MSFT Project schedule, taken from the work in the notebooks, shows they understood the notion of schedule margin. They also understood the principles of Systems
Engineering. Here's a paper (you need an INCOSE membership) on "The Concepts of Systems Engineering as Practiced by the Wright Brothers."
So when you hear statistics can't be applied to a single project, it is simply NOT true.
Here's some more background: Overview of the Wright Brothers Innovation Process, which debunks the myth that innovators exist only in the modern world.
Monte Carlo and Agile Development
There are a number of Monte Carlo Simulation tools for agile software development when you don't have an Integrated Master Schedule, with planned durations, and ranges of values
• Start with Troy's book below and download the Excel simulator [12]. There is a download section at the website
• Jira has a number of plugins for Monte Carlo Simulation. My favorite is Agile Monte Carlo, but there are others in the Jira Marketplace. If you set up Jira properly and capture the estimating data - from Product Backlog, Release Plans, and Tee Shirt Sizes to Hours planned for development and Hours actually performed - this tool will provide you an Estimate to Complete using Monte Carlo. Now if you don't use Jira properly - well, then you get what you deserve.
• VersionOne has a portfolio item Monte Carlo Simulation dashboard. Again, you've got to use the tool properly to get any value out of this dashboard.
There are several issues with applying Monte Carlo Simulation to Agile projects.
• First, there is no schedule in the sense of a Gantt chart, with tasks arranged in the sequence needed. There is work, contained in Sprints, but that is not the same.
• So what is it that the Monte Carlo Simulator is simulating?
Here's a classic estimating fallacy:
My proposal is don't estimate. Stipulate. If I say that Feature A will be available in 1 month. I'll make sure that I have a working version of it in 1 week. So that 1-month from now, I'll have
something at least. Then slice appropriately.
How does this person know they can get the work done in one week, one month, 6 months? Any uncertainties involved? With no model of the work, no model of the productivity needed to produce the outcomes, and no model of the uncertainties and their impact, that statement is simply nonsense.
”80% confidence of on-or-before” is a meaningless term for a single project. It means “If we carry out this exact project a statistically significant number of times (>20 say) then 80% of those
will be within this date.” But we will carry it out exactly once. Ever. Statistics matters.
... any Monte Carlo simulation has parameters that are guesses, with probability distributions that are more guesses.
Yes, statistics matters. Monte Carlo Simulation can be applied to work that has not yet been performed if you have some sense of the most likely effort and the range of possible efforts. If you don't
have either, why are you spending your customer's money, unless it is to generate those values?
If you're guessing, I'd suggest that those paying you hired the wrong person to spend their money. There are numerous databases with reference class data for most of the software problems on the planet. You may have to pay to get access, but don't guess; learn how to make informed decisions with good estimating processes.
Monte Carlo Simulation is a good approach to estimating in the presence of uncertainty.
From the same author, here's another
Monte Carlo predicts a probability distribution for a number of future trials. We are using it to estimate the result of a single trial.
That is not how Monte Carlo Simulation is used. MCS provides a probability distribution of the occurrence of all the possible outcomes from the model. In project work, this model is a network of
activities, with durations and upper and lower limits on those durations. Then MCS can tell you what outcomes have what probability of occurrence.
Like the chart above produced by Risky Project, there is a 52% chance the cost will be less than 390.15 thousand dollars, and a 51% chance the duration will be less than 5 days. The Risky Project tool predicts these values from a large number of samples of the work that drives the outcome. The notion of a single trial is NOT how Monte Carlo Simulation works, nor could it work that way.
Another example of misunderstanding how the tool works, either because of lack of knowledge and experience or willful ignorance
"Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it. When we enquire into any subject, the first thing we have to do is to know what books
have treated of it. This leads us to look at catalogues, and at the backs of books in libraries."
— Samuel Johnson (Boswell's Life of Johnson)
[1] "Examining the Value of Monte Carlo Simulation for Project Time Management," Goran Avlijas. Management: Journal of Sustainable Business and Management Solutions in Emerging Economies
[2] "Introduction To Monte Carlo Simulation," Robert L. Harrison, AIP Conference Proceedings, January 5, 2010, 1204, pp. 17-21.
[3] "Adding Probability to Your 'Swiss Army Knife'", John Goodpasture, Proceedings of the 3oth Annual Project Management Institute, 1999 Seminars & Symposiums, October 1999.
[4] "Monte Carlo for Newbies," Simon Leger, QuantLabs
[5] "Monte Carlo Methods for Absolute Beginners," Christophe Andrieu, Advanced Lectures on Machine Learning, 2003
[6] "The Monte Carlo Method," Nicholas Metropolis and Stan Ulam, Journal of the American Statistical Association, Vol. 44, No. 247, September 1949, pp. 335 - 341
[7] "Fuzzy Monte Carlo Simulation and Risk Assessment in Construction," N. Sadeghi, A. R. Fayek, and W. Pedrycz, Computer-Aided Civil and Infrastructure Engineering, 25 (2010) 238–252
[8] "The Principles of Monte Carlo Simulation," University of Alberta
• Lecture One: Overview
• Lecture Two: Probability Distributions
• Lecture Three: Statistical Models and Stationarity
• Lecture Four: Monte Carlo Simulation
• Lecture Five: Dependence and Multivariable Distributions
• Lecture Six: Problem Formulations, Implementation Details, and Validation
• Lecture Seven: Transfer of Uncertainty
• Lecture Eight: Decision Making
[9] Essentials of Monte Carlo Simulation: Statistical Methods for Building Simulation Models 2013th Edition, Nick T. Thomopoulos, Springer, 2013
[10] Modeling and Simulation Fundamentals Theoretical Underpinnings and Practical Domains, edited by John A. Sokolowski and Catherine M. Banks, John Wiley & Sons, 2010.
[11] Monte Carlo Methods, Second, Revised and Enlarged Edition, Malvin H. Kalos and Paula A. Whitlock, Wiley-Black Well, 2008
[12] Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects using Monte Carlo Simulation, Troy Magennis, www.focusedobjectives.com
[13] "Stan Ulam, John Von Neumann, and the Monte Carlo Method," Roger Eckhardt, Los Alamos Science, Special Issue, 1987, pp. 131-143.
[14] "Reference Class Forecasting: Resolving Its Challange to Statistical Modeling," Robert F. Bordley, American Statistical Association November 2014, Vol. 68, No. 4.
[15] "Introduction to Monte Carlo, Astro 542," Princeton University, Shirley Ho.
[16] "Monte Carol Methods," Dirk P. Kroese, The University of Queensland and Reuven Y. Rubinstein, Technion, Israel Institute of Technology
[17] Evidence-Based Software Engineering: Based on the Publicly Available Data, Derek M. Jones
[18] "Calibration and Validation of the SAGE Software Cost/Schedule Estimating Systems to United States Aur Force Databases," David B. <arzo, Captin, USAF, AFIT/GCA/LAS/97S-6
[19] "Estimating Total Program Cost of a Long-Term, High-Technology, High-Risk Project with Task Duration and Cost That May Increase Over Time," Maj Roger T. Grose and Dr. Robert A. Koyak, Military
Operations Research, V11, N4, 2006.
[20] "Parametric Quality Metrics for Evolutionary Software Development Models," Walker Royce, TRW Space and Defense Sector, Redondo Beach California.
[21] "Empirical Cost Estimating Tool," Dr. Johnathan Mun and Dr. Thomas Housel, Naval Postgraduate School, 17 October 2016.
[22] "Agile Estimation with Monte Carlo Simulation," Juanjuan Zang,
{"url":"https://herdingcats.typepad.com/my_weblog/2018/09/monte-carlo-simulation.html","timestamp":"2024-11-06T08:00:06Z","content_type":"application/xhtml+xml","content_length":"101713","record_id":"<urn:uuid:be541e82-cc41-4d3f-ab58-1b3fd0657ac0>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00639.warc.gz"}
Moment & Couple
M = r x F
M : Moment (Nm)
F : Force (N)
r : Arm (m)
The moment of force is a vector quantity that represents the magnitude of force applied to a rotational system at a distance from the axis of rotation. The SI unit for moment is the Newton metre (Nm).
A (force) moment is a measure of the amount of rotational effect due to the applied force. The effect depends both on the force F and the force arm r. The moment M is a vector quantity, with its axis orthogonal to the plane spanned by F and r.
M = d x F
A special case of a moment is a couple, where d is the distance between the two lines of action. A couple consists of two parallel forces that are equal in magnitude, opposite in sign, and do not share a line of action. It does not produce any translation, only rotation. The resultant force of a couple is zero. But the result of a couple is not zero; it is a pure moment.
A couple is used to describe the rotational effect of two equal forces that do not share a line of action. A couple (or torque) is much used in mechanical engineering. For example, a combustion
engine delivers torque at its shaft, which can be used to drive equipment. As stated, a couple is a pure moment and as such can only be destroyed by a couple with a counter effect. Otherwise, the
total sum of the forces will not be zero.
We examine the tail rotor of a helicopter. This rotor is also called the anti torque rotor, as its purpose is to counteract the reaction torque on the fuselage as a result of the applied torque to
the main rotor by the engine and transmission. When this reaction torque is not cancelled, the fuselage will rotate. The anti torque rotor functions by introducing a moment, which consists of a
thrust vector F which works over the arm 'l' with origin O. This origin O lies on the main rotor shaft. We assume a right angle between the arm 'l' and the tail rotor thrust vector F. The helicopter
engine produces a torque of 500 Nm during a hover. The length of 'l' is 5 metres. What force F is needed to prevent the fuselage from spinning?
When we assume a right angle between the arm 'r' and the tail rotor thrust vector F, the vector product of F and r resolves to the product of their magnitudes. The moment M must counteract the
torque of 500Nm; in other words, the sum of the torque and the moment must be zero. This leads to the equation 500 + 5F = 0 -> F = -100N. The minus sign indicates the direction needed to cancel out
the effect of the applied engine torque.
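The same balance in a few lines of Python, just to make the arithmetic explicit:

engine_torque = 500.0  # Nm, reaction torque on the fuselage
arm = 5.0              # m, tail rotor arm measured from the main rotor shaft

# sum of moments must be zero: engine_torque + arm * F = 0
F = -engine_torque / arm
print(F)  # -100.0 N; the minus sign gives the direction that cancels the torque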
Will this result in equilibrium in a helicopter? As the sum of moments is zero, the helicopter will not rotate. However, for an object to be in equilibrium, both the sum of moments and the sum of
forces in all directions must be zero. We now look at the sum of forces in the plane of rotation, as these are the only relevant forces in this example. These forces are the couple of 500 Nm and the
moment of -500Nm. We know that the couple (torque) will not introduce any translational force (by definition).
The sum of the couple is therefore 0. We now look at the anti torque moment of -500 Nm, which uses a force of 100N. As the torque forces are cancelled out, this is the only component left.
Conclusion: the sum of all forces is 100 N (500 - 500 + 100).
This force will move the helicopter sidewards! This result should be expected, because a couple (torque) can only be cancelled by another couple, which is not the case with the tail rotor produced moment.
Some manufacturers design their helicopters with the main rotor shaft tilted a number of degrees, in order to counteract the translational force with main rotor thrust. Another solution is to offset
the cyclic blade pitch to produce a similar cancelling out thrust vector from the main rotors.
{"url":"http://helistart.com/MomentCouple.aspx","timestamp":"2024-11-12T23:51:19Z","content_type":"application/xhtml+xml","content_length":"169697","record_id":"<urn:uuid:fb9327d5-17b1-4223-9b0b-bbe00c72f25f>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00363.warc.gz"}
AC Losses in Magnetic Components
What are AC Losses and why are they important? This paper discusses the physical effects and which issues must be observed by design engineers. Furthermore, an approach to determine AC losses with
unmatched accuracy is being introduced.
In 1892 Charles P. Steinmetz, working for General Electric presented his now classic paper “On the Law of Hysteresis” which contained his formula for hysteresis loss, today commonly referred to as
the General Steinmetz Equation (GSE):
P[v] = k * f^α * B^β, where k, α, and β are empirically fitted material constants.
Here we see AC core loss grows as a power of both frequency (f) and flux density (B).
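As a quick numerical illustration, here is the GSE in Python. The constants k, α, and β are material dependent and come from the core datasheet or curve fitting; the values below are placeholders only, and the units of the result depend on how k was fitted:

def steinmetz_loss(f, B, k=0.08, alpha=1.6, beta=2.5):
    # core loss density per the GSE: Pv = k * f^alpha * B^beta
    return k * f**alpha * B**beta

# doubling flux density at a fixed frequency raises the loss far more than 2x
print(steinmetz_loss(100e3, 0.10))
print(steinmetz_loss(100e3, 0.20))  # about 5.7x higher with beta = 2.5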
Knowledge of losses is important because the size of inductors and transformers is constrained by their ability to dissipate the heat generated by the losses. The smaller the device, the smaller its
surface area and the lower the losses must be. Today’s world of miniaturization comes with many thermal challenges for all components.
What are losses?
When we speak of losses, there are two types, DC and AC. These types refer to the type of currents involved, whether it is steady state (direct current) or alternating current. Switch mode power
supplies have both types. We sometimes simplify things to the equivalent heating current (I[rms]) which is the geometric sum of I[DC] and I[AC] currents.
I[rms] = sqrt(I[DC]^2 + I[AC]^2)
DC losses are straight forward, P[DC] = R[DC] * I^2. AC losses are much more complex because the currents do not distribute evenly (skin and proximity effect) and depend on material properties like
permeability and permittivity plus frequency, wave shape and phase shift.
What are core losses?
The root of hysteresis core losses is domain wall movement. When a soft magnetic material is influenced by an external magnetic field (current in a coil or nearby permanent magnet) the domains which
we imagine to be like tiny magnets within the material align themselves with the field. This takes energy and time. When the direction of current changes, the domains reverse direction. When the
outside influence is removed, the domains will spring back, but not completely. The energy in the spring back is returned to the system but the rest is expended as work done against the friction of
other domains and is converted to heat. Higher frequency means shifting the domains more often requiring more energy, exponentially more. Domain movement is also proportional to flux swing. Greater
flux swing equals more movement equals more energy required, not all of which is returned. The area within the BH curve represents the energy loss through one cycle (Figure 1).
Eddy current losses stem from the fact that the core itself is a turn like a winding. Generally, its resistivity is higher than the winding conductor and in the case of ferrite, considerably more.
Eddy currents are therefore influenced by the resistivity of the core material, changes resulting from temperature, and the applied voltages on the windings. The higher the voltage, the greater the
induced voltage regardless of duration (pulse width) and the higher the loss (Figure 2).
Figure 2. Two waveforms with equal volt-seconds but the right waveform has twice the average losses of the left waveform. Calculated from P = V^2*DC/R where DC = duty cycle, R = (core) resistance
The third contributor to AC loss is DC bias. At first this seems unintuitive because a static current does not cause any domain wall movement besides the initial change. However, when an AC waveform
is DC biased (as ripple on a DC voltage), the minor hysteresis curve shifts to a different position along the full BH curve. Depending on the position, this can change its shape, hence the area
enclosed by the loop, usually resulting in higher hysteresis losses. This can result in an unanticipated hot inductor. Traditional methods for calculating AC core loss do not account for this, adding
further to its surprise appearance.
What is winding loss?
DC winding loss comes from the DC resistance of the conductor used for the winding. This is simply the measured DC resistance multiplied by the DC current portion of the current waveform squared, P =
I^2*R. The AC winding loss consists of skin and predominantly proximity losses from the AC portion of the current waveform (Figure 3).
Figure 3. A waveform can be reduced to its dc and ac components, which are different depending on wave shape. Geometrically summed they are the rms value. Peak and ripple are only momentarily values
Skin effect is the result of high frequency currents only penetrating a small distance into a conductor, therefore not using the full cross-sectional area of the conductor and effectively increasing
the resistance. Skin effect as commonly defined is only valid for a single conductor in free space far from other conductors. This is not the case in inductors or transformers where there are usually
many turns, and layers of wire tightly wound together.
Proximity effect describes the influence of adjacent magnetic fields on the currents in a conductor. There are two effects. Adjacent wires with currents flowing in the same direction will attract
each other (the fields between cancel) leaving the facing surface areas with little current. Adjacent wires with opposing currents will repel each other (fields add) and the facing surfaces will have
a greater concentration of current with the opposite sides having considerably less.
What about temperature?
Inductors use various core materials including pressed powdered metals, fired ceramic ferrites, laminations and amorphous ribbons. Each material exhibits different variation over temperature. The
properties of permeability, saturation, resistivity, and core losses are of primary interest. As frequency increases permittivity becomes important as well as the size of the core. At high
frequencies cores exhibit a skin effect where the flux does not penetrate the core evenly.
How to measure AC losses?
Würth Elektronik uses the MADMIX system from MinDCet NV. This is essentially a highly programmable and highly instrumented buck converter designed to reproduce real world working conditions for
inductors and measure how they perform. The system operates over frequencies from 10 kHz to 10 MHz, duty cycles from 5 to 95%, voltages from 0.5 to 70 V, load currents up to 48 A and can make
measurements over a wide temperature range of -45 to 225°C. Fully calibrated and verified by calorimetric measurements, it is the most accurate equipment available for measuring inductor performance.
Because of its extreme versatility and programmability, inductors can be characterized over a wide range of operating conditions, yielding comprehensive data sets. When set up to emulate continuous conduction mode, the effects of DC bias are automatically included. Power is measured going into and coming out of the test platform converter. Adjustments for ringing and switching losses are made in
the software. Prior to loss measurement, the DC resistance is measured and later the DC losses are separated from the total losses (Figure 4).
Figure 4. Schematic of DC-DC converter for loss determination and resulting waveforms
Deriving the losses
Ideally all the energy going into an inductor comes back out, but reality is different because of losses. The main losses are from the DC resistance of the windings, the AC losses of the winding and
the AC core losses. If the input and output power can be accurately measured, the difference is the total loss. This is easily done by integrating over the voltage and current waveforms for duration
of a period (one cycle).
P = (1/T) * ∫ v(t) * i(t) dt, integrated over one period T at both the input and the output
From this can be subtracted the DC losses since the DC output current and dc resistance of the coil are easily measured. The remainder is the total AC losses, which are more difficult to separate
into individual components.
P[AC,total] = (P[in] - P[out]) - R[DC] * I[DC]^2
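The same bookkeeping in code. This is a sketch of the arithmetic, not the MADMIX implementation; the arrays are assumed to hold one full cycle of sampled waveform data:

import numpy as np

def total_ac_loss(t, v_in, i_in, v_out, i_out, R_dc):
    T = t[-1] - t[0]
    p_in  = np.trapz(v_in  * i_in,  t) / T  # average input power over the cycle
    p_out = np.trapz(v_out * i_out, t) / T  # average output power over the cycle
    i_dc  = np.trapz(i_out, t) / T          # dc component of the output current
    return (p_in - p_out) - R_dc * i_dc**2  # total loss minus the dc winding loss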
The core losses can be determined from the B-H curves allowing separation of winding losses, but it is not possible to separate skin and proximity effects losses. Furthermore, it is not necessary
from a user perspective because the main concern is to know how hot the inductor will get from the total losses under its operating conditions. This depends on the size of the part and how it is
mounted with regard to cooling.
This is the purpose of knowing the total losses of an inductor. The losses are dissipated as heat that must be removed by conduction to the pcb and by convection to the air. Each inductor in
REDEXPERT has a ‘Rated current’ based on the DC current that causes a 40°C temperature rise when mounted on a standardized pcb and operated in still air. This means the total losses (AC + DC) must be
less than the loss generated by the inductor’s rated current (R[DC] * I[rated]^2) provided you have an equivalent copper surface area to that of the test board. REDEXPERT gives the temperature rise
of an inductor based on different ambient temperatures and DC currents, but you still must make the transition to your application based on the total losses and your specific layout.
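Under those assumptions, the check itself is one line; a sketch only, since your own layout still governs the real thermal outcome:

def fits_thermal_budget(P_ac, P_dc, R_dc, I_rated):
    # total dissipation must stay inside the 40°C-rise budget implied by the rated current
    return (P_ac + P_dc) <= R_dc * I_rated**2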
Würth Elektronik eiSos has reduced this empirical data to equations where the AC losses are more than a function of frequency and flux density alone, as in the Steinmetz model. The WE model separates the
losses into the DC losses and the total AC losses. No attempt is made to separate the AC core losses from the AC winding losses. In the end, they both contribute to the losses, which cause
temperature rise.
Over the years Würth Elektronik eiSos has taken thousands and thousands of measurements of inductors in its portfolio. This rich set of data has been reduced to equations that give the AC losses for
the inductors based on the operating frequency, duty cycle, DC bias, ripple current, etc. It is important to realize this is not a design formula because the measurements used to generate the
formulae are all component specific, considering the construction methods and materials. This is why it is so precise. The Würth Elektronik eiSos online selection tool, REDEXPERT (Figure 5), provides a
very convenient way to access this information and compare multiple inductors instantly.
Figure 5. Tables and charts in REDEXPERT allow quick and accurate comparison of inductors
A nice feature of REDEXPERT is that it will save your work automatically. Simply click on the share icon in the top menu bar and a unique URL will be presented. Save it to your design book, email it
to yourself or to a colleague to share and the exact display you see will be reproduced when you need it (Figure 6).
Figure 6. Detailed information on losses and temperature rise are only a click away. (URL https://we-online.com/re/5gYnYCqG)
{"url":"https://www.powersystemsdesign.com/articles/ac-losses-in-magnetic-components/28/18823","timestamp":"2024-11-04T23:44:37Z","content_type":"text/html","content_length":"122883","record_id":"<urn:uuid:ab633227-ad60-4884-8bca-6ce9b3f98908>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00410.warc.gz"}
Is 18 prime or composite? | HIX Tutor
Is 18 prime or composite?
Answer 1
Prime numbers are numbers that only have one factor other than 1, that being the number itself. The number 18 can be factored into 2 * 3 * 3, or 2 * 3². This is known as the number's prime
factorization. The number 17, on the other hand, cannot be broken down any further into smaller prime factors, and so 17 is prime.
While it isn't always easy to see if a number is prime just at a glance, it can be easy to tell if some numbers are composite. One trick is to see if the number is even (and greater than 2). If it
is, then it must have 2 as a factor, and that means it is composite. The number 18 is even (and greater than 2), and so it is composite.
Similar tricks exist to tell if numbers are multiples of 3, 4, 5, etc. To tell if a number is a multiple of 3, simply add the digits together to get a new number. (In the case of 18, we get 1 + 8 =
9.) If this new number is bigger than 9, add the digits again, until you get a sum that's less than 10. If the final sum is a multiple of 3 (i.e. 3, 6, or 9), then the original number is a multiple
of 3. For 18, we got 1 + 8 = 9, and since 9 is a multiple of 3, 18 is too.
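Those tricks translate directly into code. A short Python sketch applying the digit-sum test, plus a brute-force primality check for comparison:

def multiple_of_3(n):
    # repeatedly sum the digits; 3, 6, or 9 at the end means n is a multiple of 3
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n in (3, 6, 9)

def is_prime(n):
    if n < 2:
        return False
    return all(n % k for k in range(2, int(n**0.5) + 1))

print(multiple_of_3(18))           # True: 1 + 8 = 9
print(is_prime(18), is_prime(17))  # False True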
Answer 2
18 is composite because it has factors other than 1 and itself. It can be expressed as the product of 2 and 9, or 3 and 6. Therefore, it is not a prime number.
{"url":"https://tutor.hix.ai/question/is-18-prime-or-composite-8f9afa4505","timestamp":"2024-11-13T09:08:35Z","content_type":"text/html","content_length":"581964","record_id":"<urn:uuid:c005af7a-6aa7-4dbf-bae1-221d58023127>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00349.warc.gz"}
Finding roots of an exponential equation
• Thread starter Akash47
• Start date
In summary, the roots of an exponential function can be found by setting the function equal to zero and solving for the variable. Not all exponential equations have real roots: a pure exponential term with a positive base is never zero, so any real roots of a product like this one must come from the other factors. There are special techniques for working with exponential equations, such as using logarithms, graphing, or iterative methods.
Homework Statement
Given that f(x)=(3^x).(e^x).(x^2-4) for real x, find the roots of the function (set f(x)=0).
Relevant Equations
No equations required.
I know that both 3^x and e^x can't be 0 for real x. Then x^2-4=0 is the only choice and we get x=2,-2. Am I right? Or should I add something?
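A quick numerical sanity check of that reasoning, for example in Python (the function name is just illustrative):

import math

def f(x):
    return 3**x * math.exp(x) * (x**2 - 4)

for x in (-2, 0, 2):
    print(x, f(x))  # zero at x = -2 and x = 2; f(0) = -4, nonzero between the roots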
FAQ: Finding roots of an exponential equation
1. What is an exponential equation?
An exponential equation is an equation in which a variable appears in an exponent. It has the general form y = ab^x, where a and b are constants and x is the variable.
2. How do you find the roots of an exponential equation?
To find the roots of an exponential equation, you need to set the equation equal to zero and solve for the variable. This can be done by taking the logarithm of both sides of the equation or by using
algebraic methods.
3. Can an exponential equation have more than one root?
Yes, an expression involving exponentials can have multiple roots when it also contains other factors. In the function above, the polynomial factor x^2 - 4 contributes two roots, x = 2 and x = -2, while the exponential factors contribute none, since 3^x and e^x are never zero for real x.
4. How do I know if a root of an exponential equation is real or complex?
For real x, an exponential term like 3^x or e^x is always positive, so any real roots must come from the other factors. If those factors are polynomials, standard tools apply; for a quadratic factor you can check the discriminant to see whether its roots are real or complex.
5. Can an exponential equation have irrational roots?
Yes, an exponential equation can have irrational roots. This means the roots cannot be expressed as a fraction of two integers, and they are usually written using logarithms, square roots, or other irrational numbers. For example, the equation e^x = 2 has an irrational solution at x = 0.693147... (the natural logarithm of 2). | {"url":"https://www.physicsforums.com/threads/finding-roots-of-an-exponential-equation.979884/","timestamp":"2024-11-02T02:40:50Z","content_type":"text/html","content_length":"83428","record_id":"<urn:uuid:d55271a7-b544-47bf-b201-8383a9bb5f7e>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00402.warc.gz"}
One-way analysis of variance
p = anova1(y) performs one-way ANOVA for the sample data y and returns the p-value. anova1 treats each column of y as a separate group. The function tests the hypothesis that the samples in the
columns of y are drawn from populations with the same mean against the alternative hypothesis that the population means are not all the same. The function also displays the box plot for each group in
y and the standard ANOVA table (tbl).
p = anova1(y,group) performs one-way ANOVA for the sample data y, grouped by group.
p = anova1(y,group,displayopt) enables the ANOVA table and box plot displays when displayopt is 'on' (default) and suppresses the displays when displayopt is 'off'.
[p,tbl] = anova1(___) returns the ANOVA table (including column and row labels) in the cell array tbl using any of the input argument combinations in the previous syntaxes. To copy a text version of
the ANOVA table to the clipboard, select Edit > Copy Text from the ANOVA table figure.
[p,tbl,stats] = anova1(___) returns a structure, stats, which you can use to perform a multiple comparison test. A multiple comparison test enables you to determine which pairs of group means are
significantly different. To perform this test, use multcompare, providing the stats structure as an input argument.
One-Way ANOVA
Create sample data matrix y with columns that are constants, plus random normal disturbances with mean 0 and standard deviation 1.
y = meshgrid(1:5);
rng default; % For reproducibility
y = y + normrnd(0,1,5,5)
y = 5×5
1.5377 0.6923 1.6501 3.7950 5.6715
2.8339 1.5664 6.0349 3.8759 3.7925
-1.2588 2.3426 3.7254 5.4897 5.7172
1.8622 5.5784 2.9369 5.4090 6.6302
1.3188 4.7694 3.7147 5.4172 5.4889
Perform one-way ANOVA.

p = anova1(y)

p = 0.0023
The ANOVA table shows the between-groups variation (Columns) and within-groups variation (Error). SS is the sum of squares, and df is the degrees of freedom. The total degrees of freedom is total
number of observations minus one, which is 25 - 1 = 24. The between-groups degrees of freedom is number of groups minus one, which is 5 - 1 = 4. The within-groups degrees of freedom is total degrees
of freedom minus the between groups degrees of freedom, which is 24 - 4 = 20.
MS is the mean squared error, which is SS/df for each source of variation. The F-statistic is the ratio of the mean squared errors (13.4309/2.2204). The p-value is the probability that the test
statistic can take a value greater than the value of the computed test statistic, i.e., P(F > 6.05). The small p-value of 0.0023 indicates that differences between column means are significant.
Compare Beam Strength Using One-Way ANOVA
Input the sample data.
strength = [82 86 79 83 84 85 86 87 74 82 ...
78 75 76 77 79 79 77 78 82 79];
alloy = {'st','st','st','st','st','st','st','st',...
The data are from a study of the strength of structural beams in Hogg (1987). The vector strength measures deflections of beams in thousandths of an inch under 3000 pounds of force. The vector alloy
identifies each beam as steel ('st'), alloy 1 ('al1'), or alloy 2 ('al2'). Although alloy is sorted in this example, grouping variables do not need to be sorted.
Test the null hypothesis that the steel beams are equal in strength to the beams made of the two more expensive alloys. Turn the figure display off and return the ANOVA results in a cell array.
[p,tbl] = anova1(strength,alloy,'off')
tbl=4×6 cell array
{'Source'} {'SS' } {'df'} {'MS' } {'F' } {'Prob>F' }
{'Groups'} {[184.8000]} {[ 2]} {[ 92.4000]} {[ 15.4000]} {[1.5264e-04]}
{'Error' } {[102.0000]} {[17]} {[ 6.0000]} {0x0 double} {0x0 double }
{'Total' } {[286.8000]} {[19]} {0x0 double} {0x0 double} {0x0 double }
The total degrees of freedom is total number of observations minus one, which is $20-1=19$. The between-groups degrees of freedom is number of groups minus one, which is $3-1=2$. The within-groups
degrees of freedom is total degrees of freedom minus the between groups degrees of freedom, which is $19-2=17$.
MS is the mean squared error, which is SS/df for each source of variation. The F-statistic is the ratio of the mean squared errors. The p-value is the probability that the test statistic can take a
value greater than or equal to the value of the test statistic. The p-value of 1.5264e-04 suggests rejection of the null hypothesis.
You can retrieve the values in the ANOVA table by indexing into the cell array. Save the F-statistic value and the p-value in the new variables Fstat and pvalue.
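Given the cell-array layout shown above, where the Groups row is row 2 and the F and Prob>F entries are in columns 5 and 6, one way to do this is:
Fstat = tbl{2,5}
pvalue = tbl{2,6}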
Multiple Comparisons for One-Way ANOVA
Input the sample data.
strength = [82 86 79 83 84 85 86 87 74 82 ...
78 75 76 77 79 79 77 78 82 79];
alloy = {'st','st','st','st','st','st','st','st',...
    'al1','al1','al1','al1','al1','al1',...
    'al2','al2','al2','al2','al2','al2'};
The data are from a study of the strength of structural beams in Hogg (1987). The vector strength measures deflections of beams in thousandths of an inch under 3000 pounds of force. The vector alloy
identifies each beam as steel (st), alloy 1 (al1), or alloy 2 (al2). Although alloy is sorted in this example, grouping variables do not need to be sorted.
Perform one-way ANOVA using anova1. Return the structure stats, which contains the statistics multcompare needs for performing Multiple Comparisons.
[~,~,stats] = anova1(strength,alloy);
The small p-value of 0.0002 suggests that the strength of the beams is not the same.
Perform a multiple comparison of the mean strength of the beams.
[c,~,~,gnames] = multcompare(stats);
In the figure, the blue bar represents the comparison interval for mean material strength for steel. The red bars represent the comparison intervals for the mean material strength for alloy 1 and
alloy 2. Neither of the red bars overlaps with the blue bar, which indicates that the mean material strength for steel is significantly different from that of alloy 1 and alloy 2. You can confirm the
significant difference by clicking the bars that represent alloy 1 and 2.
Display the multiple comparison results and the corresponding group names in a table.
tbl = array2table(c,"VariableNames", ...
["Group A","Group B","Lower Limit","A-B","Upper Limit","P-value"]);
tbl.("Group A") = gnames(tbl.("Group A"));
tbl.("Group B") = gnames(tbl.("Group B"))
tbl=3×6 table
Group A Group B Lower Limit A-B Upper Limit P-value
_______ _______ ___________ ___ ___________ __________
{'st' } {'al1'} 3.6064 7 10.394 0.00016831
{'st' } {'al2'} 1.6064 5 8.3936 0.0040464
{'al1'} {'al2'} -5.628 -2 1.628 0.35601
The first two columns show the pair of groups that are compared. The fourth column shows the difference between the estimated group means. The third and fifth columns show the lower and upper limits
for the 95% confidence intervals of the true difference of means. The sixth column shows the p-value for a hypothesis that the true difference of means for the corresponding groups is equal to zero.
The first two rows show that both comparisons involving the first group (steel) have confidence intervals that do not include zero. Because the corresponding p-values (1.6831e-04 and 0.0040,
respectively) are small, those differences are significant.
The third row shows that the differences in strength between the two alloys is not significant. A 95% confidence interval for the difference is [-5.6,1.6], so you cannot reject the hypothesis that
the true difference is zero. The corresponding p-value of 0.3560 in the sixth column confirms this result.
Input Arguments
y — sample data
vector | matrix
Sample data, specified as a vector or matrix.
• If y is a vector, you must specify the group input argument. Each element in group represents a group name of the corresponding element in y. The anova1 function treats the y values corresponding
to the same value of group as part of the same group. Use this design when groups have different numbers of elements (unbalanced ANOVA).
• If y is a matrix and you do not specify group, then anova1 treats each column of y as a separate group. In this design, the function evaluates whether the population means of the columns are
equal. Use this design when each group has the same number of elements (balanced ANOVA).
• If y is a matrix and you specify group, then each element in group represents a group name for the corresponding column in y. The anova1 function treats the columns that have the same group name
as part of the same group.
anova1 ignores any NaN values in y. Also, if group contains empty or NaN values, anova1 ignores the corresponding observations in y. The anova1 function performs balanced ANOVA if each group has the
same number of observations after the function disregards empty or NaN values. Otherwise, anova1 performs unbalanced ANOVA.
Data Types: single | double
group — Grouping variable
numeric vector | logical vector | categorical vector | character array | string array | cell array of character vectors
Grouping variable containing group names, specified as a numeric vector, logical vector, categorical vector, character array, string array, or cell array of character vectors.
• If y is a vector, then each element in group represents a group name of the corresponding element in y. The anova1 function treats the y values corresponding to the same value of group as part of
the same group.
In this case, group must have the same length as y, namely N, where N is the total number of observations.
• If y is a matrix, then each element in group represents a group name for the corresponding column in y. The anova1 function treats the columns of y that have the same group name as part of the
same group.
If you do not want to specify group names for the matrix sample data y, enter an empty array ([]) or omit this argument. In this case, anova1 treats each column of y as a separate group.
If group contains empty or NaN values, anova1 ignores the corresponding observations in y.
For more information on grouping variables, see Grouping Variables.
Example: 'group',[1,2,1,3,1,...,3,1] when y is a vector with observations categorized into groups 1, 2, and 3
Example: 'group',{'white','red','white','black','red'} when y is a matrix with five columns categorized into groups red, white, and black
Data Types: single | double | logical | categorical | char | string | cell
displayopt — Indicator to display ANOVA table and box plot
'on' (default) | 'off'
Indicator to display the ANOVA table and box plot, specified as 'on' or 'off'. When displayopt is 'off', anova1 returns only the output arguments; it does not display the standard ANOVA table and
box plot.
Example: p = anova1(x,group,'off')
Output Arguments
p — p-value for the F-test
scalar value
p-value for the F-test, returned as a scalar value. The p-value is the probability that the F-statistic can take a value larger than the computed test-statistic value. anova1 tests the null hypothesis
that all group means are equal to each other against the alternative hypothesis that at least one group mean is different from the others. The function derives the p-value from the cdf of the F-distribution.
A p-value that is smaller than the significance level indicates that at least one of the sample means is significantly different from the others. Common significance levels are 0.05 or 0.01.
tbl — ANOVA table
cell array
ANOVA table, returned as a cell array. tbl has six columns.
Column Definition
source The source of the variability.
SS The sum of squares due to each source.
df The degrees of freedom associated with each source. Suppose N is the total number of observations and k is the number of groups. Then, N – k is the within-groups degrees of freedom (Error), k
– 1 is the between-groups degrees of freedom (Columns), and N – 1 is the total degrees of freedom. N – 1 = (N – k) + (k – 1)
MS The mean squares for each source, which is the ratio SS/df.
F F-statistic, which is the ratio of the mean squares.
Prob>F The p-value, which is the probability that the F-statistic can take a value larger than the computed test-statistic value. anova1 derives this probability from the cdf of F-distribution.
The rows of the ANOVA table show the variability in the data that is divided by the source.
Row Definition
Groups Variability due to the differences among the group means (variability between groups)
Error Variability due to the differences between the data in each group and the group mean (variability within groups)
Total Total variability
stats — Statistics for multiple comparison tests
Statistics for multiple comparison tests, returned as a structure with the fields described in this table.
Field name Definition
gnames Names of the groups
n Number of observations in each group
source Source of the stats output
means Estimated values of the means
df Error (within-groups) degrees of freedom (N – k, where N is the total number of observations and k is the number of groups)
s Square root of the mean squared error
More About
Box Plot
anova1 returns a box plot of the observations for each group in y. Box plots provide a visual comparison of the group location parameters.
On each box, the central mark is the median (2nd quantile, q[2]) and the edges of the box are the 25th and 75th percentiles (1st and 3rd quantiles, q[1] and q[3], respectively). The whiskers extend
to the most extreme data points that are not considered outliers. The outliers are plotted individually using the '+' symbol. The extremes of the whiskers correspond to q[3] + 1.5 × (q[3] – q[1]) and
q[1] – 1.5 × (q[3] – q[1]).
Box plots include notches for the comparison of the median values. Two medians are significantly different at the 5% significance level if their intervals, represented by notches, do not overlap.
This test is different from the F-test that ANOVA performs; however, large differences in the center lines of the boxes correspond to a large F-statistic value and correspondingly a small p-value.
The extremes of the notches correspond to q[2] – 1.57(q[3] – q[1])/sqrt(n) and q[2] + 1.57(q[3] – q[1])/sqrt(n), where n is the number of observations without any NaN values. In some cases, notches
can extend outside the boxes.
For more information about box plots, see 'Whisker' and 'Notch' of boxplot.
Alternative Functionality
Instead of using anova1, you can create an anova object by using the anova function. The anova function provides these advantages:
• The anova function allows you to specify the ANOVA model type, sum of squares type, and factors to treat as categorical. anova also supports table predictor and response input arguments.
• In addition to the outputs returned by anova1, the properties of the anova object contain the following:
□ ANOVA model formula
□ Fitted ANOVA model coefficients
□ Residuals
□ Factors and response data
• The anova object functions allow you to conduct further analysis after fitting the anova object. For example, you can create an interactive plot of multiple comparisons of means for the ANOVA,
get the mean response estimates for each value of a factor, and calculate the variance component estimates.
[1] Hogg, R. V., and J. Ledolter. Engineering Statistics. New York: MacMillan, 1987.
Version History
Introduced before R2006a | {"url":"https://in.mathworks.com/help/stats/anova1.html","timestamp":"2024-11-09T01:15:41Z","content_type":"text/html","content_length":"124068","record_id":"<urn:uuid:805c839f-3c5b-4af0-b021-7da1eeac2487>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00338.warc.gz"} |
Is chalcopyrite a fools gold?
The specific gravity is between 4.1 and 4.3, with a hardness of about 3.5-4 on the Mohs scale.
What is the density of chalcopyrite
4.1-4.3 g/cm3
What is the density of chalcocite
Chalcocite, Cu2S, is sometimes dark and smoky in appearance, soft (Mohs hardness about 3), with imperfect cleavage, and dense (5.5-5.6 g/cm3, i.e., t/m3). It is a p-type semiconductor (Shuey, 1975).
Is chalcopyrite a fools gold
Fool's gold can refer to any one of three minerals. The mineral most commonly mistaken for gold is pyrite. Chalcopyrite can also look like gold, and weathered mica can also show a gold-like glint.
What is the specific gravity of pyrite and chalcopyrite
Chalcopyrite: Chalcopyrite is brittle and scratches easily with a knife. Pyrite: The specific gravity of pyrite is between 4.9 and 5.2. Chalcopyrite: The specific gravity of chalcopyrite
is between 4.1 and 4.3. Pyrite: The crystal system of pyrite is usually isometric.
What is the density of chalcopyrite
The average density of chalcopyrite is 4.19 g/cm3, and its hardness is about 3.5. Chalcopyrite is common in many localities, including Grant Co., New Mexico, and the Rossie lead mines, St.
Lawrence Co., New York.
What is chalcopyrite crystal
The name chalcopyrite comes from the Greek words chalkos, meaning copper, and pyrites, meaning striking fire. Chalcopyrite has a hardness of 3.5 to 4 and is commonly found in Mexico, Peru and
Australia. This gemstone has a unique character and hue that makes it an indispensable part of any crystal collection!
When equal volumes of two substances are mixed the specific gravity of the mixture is 4; when equal weights are mixed the specific gravity is 3; if the specific gravity of one of the substances is 2,
what is the specific gravity of the other?
When equal volumes of two substances are mixed, the specific gravity of the mixture is 4 (the arithmetic mean of the two specific gravities). When equal masses are mixed, the specific gravity of the
mixture is 3 (their harmonic mean). Together these conditions give d1 = 6 and d2 = 2, so if one substance has specific gravity 2, the other has specific gravity 6.
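A quick check of the arithmetic: mixing equal volumes averages the densities, so (d1 + d2)/2 = 4 gives d1 + d2 = 8; mixing equal masses gives the harmonic mean, so 2*d1*d2/(d1 + d2) = 3 gives d1*d2 = 12. The only pair satisfying both is d1 = 6, d2 = 2 (indeed 2*6*2/8 = 3).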
What is the difference between specific gravity and bulk specific gravity
The term "specific gravity" in the context of soil refers only to the solid material, while bulk density accounts for the total pore volume of the aggregate. Bulk density is the ratio of soil weight
to total soil volume (solid volume plus void or pore volume). It is therefore lower than the specific gravity.
What is the difference between specific gravity and apparent specific gravity
True specific gravity is based on the volume of the solid material only, excluding pores. Apparent specific gravity is based on the volume of the solid material plus the volume of
the sealed (impermeable) pores.
Is specific gravity density times gravity
No: density multiplied by gravitational acceleration gives the specific weight, not the specific gravity. Multiply the density by the gravitational acceleration (9.81 m/s2) to obtain the specific
weight. In this example, the specific weight is 840 x 9.81 = 8240.4 N/m3. Measure or look up the volume of the substance separately. | {"url":"https://www.vanessabenedict.com/chalcopyrite-specific-gravity/","timestamp":"2024-11-02T05:33:07Z","content_type":"text/html","content_length":"71773","record_id":"<urn:uuid:f84323b2-76c5-4ad3-a774-a8e39c4b1760>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00048.warc.gz"}
R’s Favorite: Nearest Neighbors - ProgrammingR
In this tutorial, I will talk about the awesome k nearest neighbor and its implementation in R. The k-nearest neighbour algorithm, abbreviated k-nn, is surprisingly simple and easy to implement, yet
a very powerful method for solving classification and regression problems in data science. Here, in this tutorial, I will only talk about the working of knn in r as a classifier but you can easily
modify it to implement a predictor for regression problems.
How K-nn Works
In machine learning algorithms, the knn algorithm belongs to the class of “lazy” learners as opposed to “eager” learners. The terminology is quite suitable to describe this classifier as it does
nothing during the training phase and applies its predictive abilities only at the test phase. So you need to store all the training points along with their labels in a smart way and use them to
classify points during the test phase. In general, k-nn works with numeric variables rather than categorical variables.
You can think of k-nn’s working as similar to a social network or a society, where people who are closest to us (in terms of relationships at work or home) influence us the most. To see how the knn
function works, I have constructed a very simple example with just two classes. See the figure below, where there are points in 2D space and each point belongs to either the green class or the blue class.
Figure 1: Training points
Figure 2: Training points (green+blue) and the test point (red triangle)
In the Figure 2, I have a test point, represented by a red triangle, whose class is unknown and our classifier has to decide its label. Think of the red triangle as a boat. The fisherman inside the
boat throws a magic net around it and keeps growing the net till it catches k training points (instead of fish). The label of the majority training points is the predicted label. In Figure 2, if k is
set to 1, then a green training point is caught and the prediction is green. If k=3, a bigger net is required and the prediction is blue, as out of the three caught training points, two are blue and
one is green. The variable k, is therefore, a user defined parameter, and it specifies how many closest neighbors to look at for deciding the class of a test point. Normally, for a two class problem
we set k to an odd number (e.g., 1,3,5,…), to avoid ties between classes.
k-nn is as simple as that. All you need to figure out is a way to know which are the closest neighbors. You can use any distance measure you like. The most popular is euclidian distance but there is
nothing stopping us from using other distance measures like Manhattan distance or a more general Minkowski distance. You can even use similarity measures like Jaccard index or cosine similarity etc.,
depending on the nature of the data you have at hand. In this tutorial, I'll show you the working of k-nn, implemented using Euclidean distance. The distance between two points a and b is given by:
d(a, b) = sqrt( (a1 - b1)^2 + (a2 - b2)^2 + ... + (an - bn)^2 )
Here a and b are n-dimensional points, and ai and bi represent the ith feature of a and b respectively.
K-nn In R
Lets do a straightforward and easy implementation in R. As long as we have a distance function in R we can implement k-nn in a data set or data frame. Given below is my implementation of Euclidean
distance in R.
Euclidean distance in R
The R function called “euclideanDistance” takes a point and a data matrix as parameters and computes the Euclidean distance between the point and each point of the data matrix. Of course, for this to
work correctly the dimensions of all data points should be the same.
#function to return Euclidean distance between a point and points stored in matrix dat
#both pt and dat should have the same dimensions
#pt is then 1xn and dat is mxn
#return would be a vector of m values indicating distance of pt with each row of dat
euclideanDistance <- function(pt,dat)
{
  #get the difference of each pt's value with dat
  #as R treats data column by column we take the transpose of dat and then transpose again
  #to make sure a vector (pt) is subtracted row by row from dat
  diff = t(pt-t(dat))
  #get square of difference
  diff = diff*diff
  #add all squares of differences
  dist = apply(diff,1,sum)
  #take the square root
  dist = sqrt(dist)
  return(dist)
}
In the code above “pt” is a single vector of values and “dat” is a matrix. For example suppose:
pt = (1,2)
dat has two points: ((3,2),(5,5))
The Euclidean distance is then computed as:
Distance between (1,2) and (3,2) = sqrt((1-3)(1-3)+(2-2)(2-2)) = sqrt(4) = 2
Distance between (1,2) and (5,5) = sqrt((1-5)(1-5)+(2-5)(2-5)) = sqrt(25) = 5
The function, hence, returns the vector (2,5) indicating the distance of pt with each row of dat.
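As a quick check, the worked example can be run directly (the values are the illustrative pt and dat from above):
pt = c(1,2)
dat = rbind(c(3,2),c(5,5))
euclideanDistance(pt,dat)
# [1] 2 5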
R is awesome when it comes to computing with vectors and matrices. Note, we did not use any loops for implementing the above. The code is pretty straightforward except maybe the line that takes the
difference of values of the point pt with all points of the matrix dat:
diff = t(pt-t(dat))
You may wonder why I have used the transpose function t() here. To take the difference of the point with all points of the matrix dat, a simple command such as "diff=pt-dat" won't work. To subtract a
matrix from a vector, R makes duplicate copies of pt as (1,2,1,2) to match the size of dat. It also stores dat as a vector in column major order, making its vector representation: (3,5,2,5)
If we do a simple difference as:
wrongDiff = pt-dat
Then wrongDiff has (1-3,2-5,1-2,2-5), giving us the wrong result (we need (1-3,2-2) for the first row and (1-5,2-5) for the second). To make the correction, we subtract pt from dat in row major order
by taking the transpose of dat (hence the command pt-t(dat)). Transposing back again gives the original matrix dimensions:
diff = t(pt-t(dat))
Mode in R
The next module we need is a function that computes the mode of a vector. The mode is a statistical function that returns the most frequently occurring values in a list. We need the mode to decide
the class of a point, when given the k closest training points. As shown in the earlier example, for 3-nn if the nearest points are (green, blue, blue) then mode would return blue. Unfortunately, R
does not have a built in function to compute the mode of a list of values so let’s write our own code to do it. The code below computes the mode:
#get the mode of a vector v
modeValue <- function(v)
{
  if (length(v)==1)
  {
    return(v)
  }
  #construct the frequency table
  tbl = table(v)
  #get the most frequently occurring value
  myMode = names(tbl)[which.max(tbl)]
  return(myMode)
}
The “table” function returns frequencies of each value in v. Try it at your end as given in the screen shot below, where the table command returns a nice summary of values in v:
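The screenshot is not reproduced here; a minimal sketch of what it showed, with an illustrative vector v:
v = c('blue','green','blue')
table(v)
# v
#  blue green
#     2     1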
In the next line:
myMode = names(tbl)[which.max(tbl)]
which.max(tbl) returns the index of the maximum frequency, and names() returns the corresponding value.
K-nn: Putting It All Together
Now we are finally ready to implement k-nn. See the code below:
#will apply knn algorithm to find the label of the testPoints, when given
#trainData and corresponding labels
#default value of k parameter is 1 and you can pass any distance function
#as a parameter as long as the function has pt and data as parameters
testknn <- function(trainData,trainLabels,testPoints,k=1,distFunc=euclideanDistance)
{
  totalTest = nrow(testPoints)
  predictedLabels = rep(0,totalTest)
  for (i in 1:totalTest)
  {
    distance = distFunc(pt=testPoints[i,],dat=trainData)
    nearestPts = sort(distance,index.return=TRUE)
    nearestLabels = trainLabels[nearestPts$ix][1:k]
    predictedLabels[i] = modeValue(nearestLabels)
  }
  return(predictedLabels)
}
This code is pretty simple. The parameters for testknn function are the training data points, their corresponding labels, value of k (set to 1 by default) and a distance function, which by default is
Euclidean distance. Use the distance function as a function parameter so that later you can write other distance functions and pass them as parameters, without re-writing the whole code for k-nn.
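For instance, a Manhattan distance that plugs into the same distFunc slot could look like this (a sketch following the same pt/dat conventions as euclideanDistance above):
manhattanDistance <- function(pt,dat)
{
  #sum of absolute differences between pt and each row of dat
  diff = abs(t(pt-t(dat)))
  return(apply(diff,1,sum))
}
You would then call, for example, testknn(trainData,trainLabels,testPoints,k=3,distFunc=manhattanDistance).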
Testknn function takes each point in the test set, computes its Euclidean distance with each training point, sorts the distances and finds the predicted labels using our implemented mode function.
That’s it, we are done writing k-nn code. K-nn being a lazy algorithm puts us programmers at an advantage as there is nothing to do in the training phase (although for huge datasets it’s not true.
You may have to apply some smart tricks to extract summaries of data). Now that we have this nice R code lets have some fun with k-nn and see how it works.
Coding For Decision Boundaries In R
When working with any classifier, we are interested in seeing what kind of decision boundaries it makes. For example, logistic regression makes linear decision boundaries and there are some classifiers
that make nonlinear boundaries (for example neural networks). It’s very easy to visualize the decision boundary for any classifier by taking points in the 2D space. We can treat the points in the
entire space as test points, classify them and then plot them in different colors according to their classification to see which regions belong to which class. For example, if we take the training
points as given in Figure 1, then the rest of the points around it form our test set. Classifying them as blue or green will give us an idea of which points belong to the blue class and which of them
belong to the green class.
To visualize the decision boundaries in R, first we need a small function “plotData” that plots different points having the same label in the same color. At the start if newPlot is set to TRUE then a
new graphics object is created, otherwise the data is plotted on an older graph. You’ll see its usefulness later. The “unique” function gives us all the unique labels without duplicates and “points”
function is called for each label in a for loop. Note the default value of style in the “points” function is set to 1, which are open circles.
#plot data points with labels in different colors
#if you have more than 4 labels then send the colors array as parameter with more colors
plotData <- function(dat,labels,colors=c('lawngreen','lightblue','red','yellow'),newPlot=TRUE,style=1)
{
  l = unique(labels)
  l = sort(l)
  #initialize the plot if specified in parameters
  if (newPlot)
  {
    plot(dat, col="white", xlab="x1", ylab="x2")
  }
  count = 0
  for (i in l)
  {
    ind = labels==i
    count = count+1
    points(dat[ind,1], dat[ind,2], col=colors[count], pch=style)
  }
}
Now let’s code to view the decision boundary of k-nn.
#function to make the decision boundary of k-nn
decisionBoundaryKnn <- function(trainData,trainLabels,k=1)
{
  #generate test points in the entire boundary of the plane
  #subtract 2 from min and add 2 to max to extend the boundary
  x1 <- seq(min(trainData)-2,max(trainData)+2,by=.3)
  l = length(x1)
  x2 = x1
  #this will make the x-coordinate
  x1 = rep(x1,each=l)
  #this will make the y-coordinate
  x2 = rep(x2,times=l)
  #construct the final test matrix by placing x1 and x2 together
  xtest = cbind(x1,x2)
  #classify the test points and plot them with predicted labels
  predictedLabels = testknn(trainData,trainLabels,testPoints=xtest,k=k)
  plotData(xtest,predictedLabels,newPlot=TRUE,style=1)
  #plot the training points in a different style (filled circles)
  plotData(trainData,trainLabels,newPlot=FALSE,style=16)
}
The first part of the code generates points “xtest” in the entire space, calls the k-nn to classify these points and then calls plotData to plot those points. In the end it also plots the training
points but in a different color so that we can see the difference between training and test points. Also, the training points are plotted with style 16, which specifies filled circles. Lets see this
function in action next.
Visualizing Decision Boundaries Of K-nn
Lets look at our simple example given in Figure 1. You can download the sample points file knnTrainPoints.txt. First load the training points into memory by typing the following (the read call below
assumes a plain whitespace-separated text file with the label in the last column; adjust it to the file's actual format):
x = data.matrix(read.table('knnTrainPoints.txt'))
labels = x[,ncol(x)]
x = x[,-ncol(x)]
The graphs in the next sections show the decision boundary, which you can reproduce using the code shown in the box of each plot. The solid circles, which are also the darker points are the training
points. The lighter colors are the test points that we generated in the entire space.
Variable Number Of Training Points And Fixed K=1
To see how the decision boundary changes with a change in the number of training points, let’s run the code for 2,4, 10 and all the points. For two training points, we have a linear decision boundary
just as expected. The line is equidistant from the two training points. If we are unlucky our test point would fall on that line and we would have a point of confusion as the point could belong to
either of the two classes. In this case, just pick a class randomly. This situation would occur rarely in real life applications but it is not impossible.
When two more training points are added, see how the boundary shifts with each point exerting its influence on the test points. Adding more and more points makes more complex boundaries for k=1.
Figure 3: Decision boundaries for variable number of training points and k=1
Figure 4: Decision boundaries for variable k
Variable k, Fixed Number Of Training Points
Now lets see what effect the parameter k has on deciding the classification of the test points. The correct technical word for k is a hyper parameter (although you can get away by simply saying
parameter). For k=1, only the closest point is important and so each training point has its own region of influence. For higher values of k, many training points together decide the boundary. I
deliberately selected 13 green training points and 12 blue ones, making a total of 25 training points. Note what happens when we take k=25 and you can figure out why all the test points are now
classified as green. You will see that for very large values of k, k-nn would start favoring the green class for obvious reasons: the green points form the majority (13 of 25) of the training set. Determining which value of k to use is still a challenge and
many researchers working in the area of model selection are trying to answer this question.
Applying K-nn For OCR
The application, I have chosen, for applying k-nn, is the recognition of handwritten digits. There is a very nice dataset for OCR at UCI machine learning repository and we thank them for their
support. Here is a reference to the repository:
Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
You can download the OCR dataset from the machine learning repository from this link.
The dataset was contributed by Alpaydin and Kaynak and it has pre-processed handwritten images of the digits 0-9. In particular, we'll work with the training set optdigits.tra and the test file
optdigits.tes.
I am pasting here 3 example images from this dataset, and you can generate them too using the code I am providing.
Figure 5: Example images of handwritten digits from UCI machine learning repository
The function “knnOcr” returns the accuracy rate of the classifier. The default value for k is again 1. This function reads the training and test files, separates their labels and calls the testknn
function that we wrote earlier. If you want to view the images of the dataset you can uncomment the lines in the code below.
knnOcr <- function(k=1)
{
  #read the training and test files and separate their labels
  trainData <- read.csv('digits/optdigits.tra',header = FALSE)
  trainData = data.matrix(trainData)
  trainLabels = trainData[,ncol(trainData)]
  trainData = trainData[,-ncol(trainData)]
  testData <- read.csv('digits/optdigits.tes',header = FALSE)
  testData = data.matrix(testData)
  testLabels = testData[,ncol(testData)]
  testData = testData[,-ncol(testData)]
  #uncomment the following lines if you want to display an image
  #at any index set by indexToDisplay
  #(the image is flipped so that it displays properly)
  #start uncommenting now
  #indexToDisplay = 1
  #digit = trainData[indexToDisplay,]
  #dim(digit) = c(8,8) #reshape to 8x8 image
  #im <- apply(digit,1,rev)
  #image(t(im), col = grey.colors(16)) #one plausible way to display it
  #stop uncommenting now
  predictions = testknn(trainData,trainLabels,testPoints=testData,k)
  #compute accuracy of the classifier
  totalCorrect = sum(predictions == testLabels)
  accuracy = totalCorrect/length(predictions)
  return(accuracy)
}
Run the following command:
knnOcr()
This would return the number 0.9799666, which shows that the digits were recognized with a 98% accuracy rate. That's simply marvelous! You can run the same function for different values of k and see how
it goes. This link has information on the accuracy of k-nn classifier for different values of k for this dataset. Do run it yourself and verify whether your percentage accuracy matches theirs.
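For example, one way to sweep several values of k in one go (assuming knnOcr as defined above):
sapply(c(1,3,5,7), knnOcr)
Each entry of the returned vector is the accuracy for the corresponding k.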
Further Readings
I hope you enjoyed this tutorial as much as I enjoyed writing and messing about with the code. You can look at any standard text book on machine learning or pattern recognition to learn more about
k-nn. One of my favorites is “Pattern Recognition and Machine Learning” by Christopher Bishop.
About The Author
Mehreen Saeed is an academic and an independent researcher. I did my PhD in AI in 1999 from University of Bristol, worked in the industry for two years and then joined the academia. My last thirteen
years were spent in teaching, learning and researching at FAST NUCES. My LinkedIn profile. | {"url":"https://www.programmingr.com/knn/","timestamp":"2024-11-14T01:15:32Z","content_type":"text/html","content_length":"218919","record_id":"<urn:uuid:bbffed87-813d-4917-9aad-9693068ab004>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00164.warc.gz"} |
Understanding Span in Linear Algebra
Linear algebra is a crucial branch of mathematics that deals with vectors, vector spaces, and linear transformations. One of the foundational concepts in linear algebra is the idea of span.
Understanding span in linear algebra can help you grasp more advanced topics, and it's essential for solving a variety of problems in both mathematics and real-world applications. This article aims
to break down the concept of span clearly and engagingly.
What is Span?
At its core, the span of a set of vectors refers to all possible linear combinations of those vectors. In simpler terms, if you have a group of vectors, the span includes every vector you can create
by adding them and scaling them (multiplying them by numbers).
Key Definitions
• Vector: A quantity with magnitude and direction represented in a coordinate system.
• Linear Combination: An expression formed from a set of vectors, each multiplied by a scalar (a real number) and added together.
• Scalar: A single number used to scale a vector.
Why is Span Important?
Understanding the concept of the span is essential for several reasons:
• Basis for Vector Spaces: The span of a set of vectors can help define the entire vector space. If you can express every vector in that space as a linear combination of your set, you’ve identified
a basis.
• Dimensionality: The number of vectors in the span can indicate the dimension of the space you’re working with. For example, two vectors in 3D space can span a plane, while three vectors can span
the entire space if not coplanar.
• Real-World Applications: The concept of span is used in various fields, including computer graphics, engineering, and data science, making it a practical skill to master.
How to Determine the Span
Finding the span of a set of vectors involves a few steps. Here's how you can do it:
1. Identify the Vectors: Start with the vectors you want to analyze.
2. Form Linear Combinations: Create linear combinations of these vectors using scalar multipliers.
3. Visualize: Visualize the vectors in a coordinate system to understand how they relate.
4. Check for Independence: Determine if the vectors are linearly independent (no vector can be expressed as a combination of the others). This significantly affects the span.
Consider the vectors v1 = (1, 0) and v2 = (0, 1). The span of these vectors includes all points in the 2D plane, since you can form any point
(x, y) by taking:
x * v1 + y * v2
This means that the span of {v1, v2} is the entire 2D space.
Visualizing Span
Visual representation can significantly aid your understanding of span in linear algebra. Let’s break it down:
• 2D Space: If you plot two vectors that are not parallel, they will span a plane. The area covered by all linear combinations of these two vectors is the entire plane they define.
• 3D Space: Three non-coplanar vectors in 3D space will span the whole volume of the space. If any two vectors lie in the same plane, the span is limited to that plane.
Table: Span Examples
Vectors                              Span               Dimension
{(1, 0)}                             x-axis             1D
{(0, 1)}                             y-axis             1D
{(1, 0), (0, 1)}                     Entire 2D plane    2D
{(1, 0, 0), (0, 1, 0), (0, 0, 1)}    Entire 3D space    3D
Linear Independence and Span
The concept of linear independence plays a significant role in span.
• Linearly Independent Vectors: A set of vectors is considered linearly independent if none can be written as a linear combination of the others. In this case, the dimension of the span equals the
number of vectors.
• Linearly Dependent Vectors: If at least one vector in the set can be expressed as a combination of others, the vectors are linearly dependent. This means the dimension of the span is less than the
total number of vectors.
Identifying Linear Independence
To check if a set of vectors is linearly independent, you can:
• Set Up an Equation: Write an equation setting a linear combination of the vectors, with unknown scalar coefficients, equal to zero.
• Row Reduction: Use Gaussian elimination or row reduction on the matrix formed by the vectors to see if the only solution to the equation is the trivial solution (all scalars equal to zero), as in the numerical sketch below.
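As a quick numerical illustration of the rank check (a sketch in Python with NumPy; the vectors are illustrative, not from the article):
import numpy as np

# rows are the vectors being tested
vectors = np.array([[1, 0], [0, 1], [1, 1]])

# the set is linearly independent exactly when the matrix rank
# equals the number of vectors
rank = np.linalg.matrix_rank(vectors)
print(rank, rank == len(vectors))  # prints: 2 False, since (1,1) = (1,0) + (0,1)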
Applications of Span in Real Life
The concept of span in linear algebra has numerous applications across various fields:
• Computer Graphics: In graphics, span helps define the area where an object can be rendered.
• Engineering: Engineers use span when analyzing forces in different directions to ensure structural integrity.
• Machine Learning: In data science, span assists in understanding data sets and feature spaces for better model training.
Understanding the span in linear algebra is essential for grasping the broader concepts of vector spaces and linear transformations. Knowing how to determine span and recognizing the importance of
linear independence allows you to solve complex problems in mathematics and real-world scenarios. Whether you’re a student, a professional, or just someone curious about math, mastering this concept
will empower you in your mathematical journey.
Key Takeaways
• Span refers to all possible linear combinations of a set of vectors.
• The dimensionality of the span indicates the space covered by the vectors.
• Linear independence is crucial in determining the effectiveness of a set of vectors in spanning a space.
• The span applications extend far beyond theoretical math, influencing various practical fields.
You’ve taken a solid step toward understanding linear algebra by exploring these ideas. Keep practicing, and soon, you’ll find yourself confidently using these concepts!
| {"url":"https://allaboutworlds.com/span-linear-algebra/","timestamp":"2024-11-07T15:23:51Z","content_type":"text/html","content_length":"169407","record_id":"<urn:uuid:101fd49f-4070-4eff-96be-d2e39abc98f0>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00555.warc.gz"}
If H(X) Is The Inverse Of F(X), What Is The Value Of H(F(X))? - Saverudata.info
In mathematics, the inverse of a function is the function that reverses the effect of the original function. If H(X) is the inverse of F(X), then H(F(X)) can be calculated by reversing
the steps of the original function, which returns the original input X. This article will discuss what H(X) is and explain the value of H(F(X)).
What is H(X)?
H(X) is the inverse of a function, F(X). An inverse function is a mathematical operation that reverses the effect of another function. For example, if F(X) is a function that multiplies a number by
two, then H(X) is the inverse of F(X), which would divide a number by two.
Value of H(F(X))
The value of H(F(X)) is the result of reversing the steps of the original function F(X), which is the original input X. For example, if F(X) is the function that multiplies a number by two, then
H(F(X)) takes the doubled number and divides it by two, giving back the number you started with.
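A concrete check with illustrative numbers: if F(X) = 2X, then H(X) = X/2, and taking X = 3 gives F(3) = 6 and H(F(3)) = H(6) = 3, recovering the original input.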
In conclusion, H(X) is the inverse of F(X) and the value of H(F(X)) is X, the original input. Knowing H(X) and its value is important in solving mathematical equations and can provide valuable
insight into the behavior of a function. | {"url":"https://saverudata.me/if-hx-is-the-inverse-of-fx-what-is-the-value-of-hfx/","timestamp":"2024-11-02T03:14:13Z","content_type":"text/html","content_length":"221357","record_id":"<urn:uuid:23d609c0-9706-4383-bfad-f46994ebe46d>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00400.warc.gz"}
Target loop shape for control system tuning
Use TuningGoal.LoopShape to specify a target gain profile (gain as a function of frequency) of an open-loop response. TuningGoal.LoopShape constrains the open-loop, point-to-point response (L) at a
specified location in your control system. Use this tuning goal for control system tuning with tuning commands, such as systune or looptune.
When you tune a control system, the target open-loop gain profile is converted into constraints on the inverse sensitivity function inv(S) = (I + L) and the complementary sensitivity function T = 1–S
. These constraints are illustrated for a representative tuned system in the following figure.
Where L is much greater than 1, a minimum gain constraint on inv(S) (green shaded region) is equivalent to a minimum gain constraint on L. Similarly, where L is much smaller than 1, a maximum gain
constraint on T (red shaded region) is equivalent to a maximum gain constraint on L. The gap between these two constraints is twice the CrossTol parameter, which specifies the frequency band where
the loop gain can cross 0 dB.
For multi-input, multi-output (MIMO) control systems, values in the gain profile greater than 1 are interpreted as minimum performance requirements. Such values are lower bounds on the smallest
singular value of the open-loop response. Gain profile values less than one are interpreted as minimum roll-off requirements, which are upper bounds on the largest singular value of the open-loop
response. For more information about singular values, see sigma.
Use TuningGoal.LoopShape when the loop shape near crossover is simple or well understood (such as integral action). To specify only high gain or low gain constraints in certain frequency bands, use
TuningGoal.MinLoopGain and TuningGoal.MaxLoopGain. When you do so, the software determines the best loop shape near crossover.
Req = TuningGoal.LoopShape(location,loopgain) creates a tuning goal for shaping the open-loop response measured at the specified location. The magnitude of the single-input, single-output (SISO)
transfer function loopgain specifies the target open-loop gain profile. You can specify the target gain profile (maximum gain across the I/O pair) as a smooth transfer function or sketch a piecewise
error profile using an frd model.
Req = TuningGoal.LoopShape(location,loopgain,crosstol) specifies a tolerance on the location of the crossover frequency. crosstol expresses the tolerance in decades. For example, crosstol = 0.5
allows gain crossovers within half a decade on either side of the target crossover frequency specified by loopgain. When you omit crosstol, the tuning goal uses a default value of 0.1 decades. You
can increase crosstol when tuning MIMO control systems. Doing so allows more widely varying crossover frequencies for different loops in the system.
Req = TuningGoal.LoopShape(location,wc) specifies just the target gain crossover frequency. This syntax is equivalent to specifying a pure integrator loop shape, loopgain = wc/s.
Req = TuningGoal.LoopShape(location,wcrange) specifies a range for the target gain crossover frequency. The range is a vector of the form wcrange = [wc1,wc2]. This syntax is equivalent to using the
geometric mean sqrt(wc1*wc2) as wc and setting crosstol to the half-width of wcrange in decades. Using a range instead of a single wc value increases the ability of the tuning algorithm to enforce
the target loop shape for all loops in a MIMO control system.
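For concreteness, a sketch of that equivalence (the location name 'X' is illustrative):
wc = sqrt(50*100);           % geometric mean, about 70.7 rad/s
crosstol = log10(100/50)/2;  % half-width of the range in decades, about 0.15
Req = TuningGoal.LoopShape('X',wc,crosstol);  % same as TuningGoal.LoopShape('X',[50,100])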
Input Arguments
location — Location where open-loop response shape to be constrained is measured
character vector | cell array of character vectors
Location where the open-loop response shape to be constrained is measured, specified as a character vector or cell array of character vectors that identify one or more locations in the control system
to tune. What locations are available depends on what kind of system you are tuning:
• If you are tuning a Simulink^® model of a control system, you can use any linear analysis point marked in the model, or any linear analysis point in an slTuner (Simulink Control Design) interface
associated with the Simulink model. Use addPoint (Simulink Control Design) to add analysis points to the slTuner interface. For example, if the slTuner interface contains an analysis point u, you
can use 'u' to refer to that point when creating tuning goals. Use getPoints (Simulink Control Design) to get the list of analysis points available in an slTuner interface to your model.
• If you are tuning a generalized state-space (genss) model of a control system, you can use any AnalysisPoint location in the control system model. For example, the following code creates a PI
loop with an analysis point at the plant input 'u'.
AP = AnalysisPoint('u');
G = tf(1,[1 2]);
C = tunablePID('C','pi');
T = feedback(G*AP*C,1);
When creating tuning goals, you can use 'u' to refer to the analysis point at the plant input. Use getPoints to get the list of analysis points available in a genss model.
The loop shape requirement applies to the point-to-point open-loop transfer function at the specified location. That transfer function is the open-loop response obtained by injecting signals at the
location and measuring the return signals at the same point.
If location specifies multiple locations, then the loop-shape requirement applies to the MIMO open-loop transfer function.
loopgain — Target open-loop gain profile as function of frequency
tf model object | zpk model object | ss model object | frd model object | makeweight model object
Target open-loop gain profile as a function of frequency.
You can specify loopgain as a smooth SISO transfer function (tf, zpk, or ss model). Alternatively, you can sketch a piecewise gain profile using a frd model or the makeweight (Robust Control Toolbox)
function. When you do so, the software automatically maps your specified gain profile to a zpk model whose magnitude approximates the desired gain profile. Use viewGoal(Req) to plot the magnitude of
that zpk model.
For multi-input, multi-output (MIMO) control systems, values in the gain profile greater than 1 are interpreted as minimum performance requirements. These values are lower bounds on the smallest
singular value of L. Gain profile values less than one are interpreted as minimum roll-off requirements, which are upper bounds on the largest singular value of L. For more information about singular
values, see sigma.
If you are tuning in discrete time (that is, using a genss model or slTuner interface with nonzero Ts), you can specify loopgain as a discrete-time model with the same Ts. If you specify loopgain in
continuous time, the tuning software discretizes it. Specifying the loop shape in discrete time gives you more control over the loop shape near the Nyquist frequency.
crosstol — Tolerance in location of crossover frequency
0.1 (default) | scalar
Tolerance in the location of crossover frequency, in decades. specified as a scalar value. For example, crosstol = 0.5 allows gain crossovers within half a decade on either side of the target
crossover frequency specified by loopgain. Increasing crosstol increases the ability of the tuning algorithm to enforce the target loop shape for all loops in a MIMO control system.
wc — Target crossover frequency
positive scalar
Target crossover frequency, specified as a positive scalar value. Express wc in units of rad/TimeUnit, where TimeUnit is the TimeUnit property of the control system model you are tuning.
wcrange — Range for target crossover frequency
vector of two elements
Range for target crossover frequency, specified as a vector of the form [wc1,wc2]. Express wc in units of rad/TimeUnit, where TimeUnit is the TimeUnit property of the control system model you are
LoopGain — Target loop shape as function of frequency
SISO zpk model object
Target loop shape as a function of frequency, specified as a SISO zpk model.
The software automatically maps the input argument loopgain onto a zpk model. The magnitude of this zpk model approximates the desired gain profile. Use viewGoal(Req) to plot the magnitude of the zpk
model LoopGain.
CrossTol — Tolerance on gain crossover frequency
0.1 (default) | positive scalar
Tolerance on gain crossover frequency, in decades.
The initial value of CrossTol is set by the crosstol input when you create the tuning goal.
LoopScaling — Toggle for automatically scaling loop signals
'on' (default) | 'off'
Toggle for automatically scaling loop signals, specified as 'on' or 'off'.
In multi-loop or MIMO control systems, the feedback channels are automatically rescaled to equalize the off-diagonal terms in the open-loop transfer function (loop interaction terms). Set LoopScaling
to 'off' to disable such scaling and shape the unscaled open-loop response.
Location — Location at which open-loop response shape to be constrained is measured
cell array of character vectors
Location at which the open-loop response shape to be constrained is measured, specified as a cell array of character vectors that identify one or more analysis points in the control system to tune.
For example, if Location = {'u'}, the tuning goal evaluates the open-loop response measured at an analysis point 'u'. If Location = {'u1','u2'}, the tuning goal evaluates the MIMO open-loop response
measured at analysis points 'u1' and 'u2'.
The initial value of the Location property is set by the location input argument when you create the tuning goal.
Loop Shape and Crossover Tolerance
Create a target gain profile requirement for the following control system. Specify integral action, gain crossover at 1, and a roll-off requirement of 40 dB/decade.
The requirement should apply to the open-loop response measured at the AnalysisPoint block X. Specify a crossover tolerance of 0.5 decades.
LS = frd([100 1 0.0001],[0.01 1 100]);
Req = TuningGoal.LoopShape('X',LS,0.5);
The software converts LS into a smooth function of frequency that approximates the piecewise-specified requirement. Display the requirement using viewGoal.
The green and red regions indicate the bounds for the inverse sensitivity, inv(S) = 1-G*C, and the complementary sensitivity, T = 1-S, respectively. The gap between these regions at 0 dB gain
reflects the specified crossover tolerance, which is half a decade to either side of the target loop crossover.
When you use viewGoal(Req,CL) to validate a tuned closed-loop model of this control system, CL, the tuned values of S and T are also plotted.
Specify Different Loop Shapes for Multiple Loops
Create separate loop shape requirements for the inner and outer loops of the following control system.
For the inner loop, specify a loop shape with integral action, gain crossover at 1, and a roll-off requirement of 40 dB/decade. Additionally, specify that this loop shape requirement should be
enforced with the outer loop open.
LS2 = frd([100 1 0.0001],[0.01 1 100]);
Req2 = TuningGoal.LoopShape('X2',LS2);
Req2.Openings = 'X1';
Specifying 'X2' for the location indicates that Req2 applies to the point-to point, open-loop transfer function at the location X2. Setting Req2.Openings indicates that the loop is opened at the
analysis point X1 when Req2 is enforced.
By default, Req2 imposes a stability requirement on the inner loop as well as the loop shape requirement. In some control systems, however, inner-loop stability might not be required, or might be
impossible to achieve. In that case, remove the stability requirement from Req2 as follows.
For the outer loop, specify a loop shape with integral action, gain crossover at 0.1, and a roll-off requirement of 20 dB/decade.
LS1 = frd([10 1 0.01],[0.01 0.1 10]);
Req1 = TuningGoal.LoopShape('X1',LS1);
Specifying 'X1' for the location indicates that Req1 applies to the point-to point, open-loop transfer function at the location X1. You do not have to set Req1.Openings because this loop shape is
enforced with the inner loop closed.
You might want to tune the control system with both loop shaping requirements Req1 and Req2. To do so, use both requirements as inputs to the tuning command. For example, suppose CL0 is a tunable
genss model of the closed-loop control system. In that case, use [CL,fSoft] = systune(CL0,[Req1,Req2]) to tune the control system to both requirements.
Loop Shape for Tuning Simulink Model
Create a loop-shape requirement for the feedback loop on 'q' in the Simulink model rct_airframe2. Specify that the loop-shape requirement is enforced with the 'az' loop open.
Open the model.
Create a loop shape requirement that enforces integral action with a crossover at 2 rad/s for the 'q' loop. This corresponds to a target loop shape of 2/s.
s = tf('s');
shape = 2/s;
Req = TuningGoal.LoopShape('q',shape);
Specify the location at which to open an additional loop when enforcing the requirement.
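Following the pattern used for Req2 in the earlier example, this is:
Req.Openings = 'az';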
To use this requirement to tune the Simulink model, create an slTuner interface to the model. Identify the block to tune in the interface.
ST0 = slTuner('rct_airframe2','MIMO Controller');
Designate both az and q as analysis points in the slTuner interface.
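Using addPoint on the interface (a sketch of the call):
addPoint(ST0,{'az','q'});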
This command makes q available as an analysis location. It also allows the tuning requirement to be enforced with the loop open at az.
You can now tune the model using Req and any other tuning requirements. For example:
[ST,fSoft] = systune(ST0,Req);
Final: Soft = 0.845, Hard = -Inf, Iterations = 51
Loop Shape Requirement with Crossover Range
Create a tuning requirement specifying that the open-loop response of loop identified by 'X' cross unity gain between 50 and 100 rad/s.
Req = TuningGoal.LoopShape('X',[50,100]);
Examine the resulting requirement to see the target loop shape.
The plot shows that the requirement specifies an integral loop shape, with crossover around 70 rad/s, the geometrical mean of the range [50,100]. The gap at 0 dB between the minimum low-frequency
gain (green region) and the maximum high-frequency gain (red region) reflects the allowed crossover range [50,100].
• This tuning goal imposes an implicit stability constraint on the closed-loop sensitivity function measured at Location, evaluated with loops opened at the points identified in Openings. The
dynamics affected by this implicit constraint are the stabilized dynamics for this tuning goal. The MinDecay and MaxRadius options of systuneOptions control the bounds on these implicitly
constrained dynamics. If the optimization fails to meet the default bounds, or if the default bounds conflict with other requirements, use systuneOptions to change these defaults.
When you tune a control system using a TuningGoal, the software converts the tuning goal into a normalized scalar value f(x), where x is the vector of free (tunable) parameters in the control system.
The software then adjusts the parameter values to minimize f(x) or to drive f(x) below 1 if the tuning goal is a hard constraint.
For TuningGoal.LoopShape, f(x) is given by:
$f(x) = \left\| \begin{array}{c} W_S S \\ W_T T \end{array} \right\|_{\infty}.$
Here, S = D^-1 [I - L(s,x)]^-1 D is the scaled sensitivity function at the specified location, where L(s,x) is the open-loop response being shaped. D is an automatically computed loop scaling factor.
(If the LoopScaling property is set to 'off', then D = I.) T = S - I is the complementary sensitivity function.
W[S] and W[T] are frequency weighting functions derived from the specified loop shape. The gains of these functions roughly match LoopGain and 1/LoopGain, for values ranging from –20 dB to 60 dB. For
numerical reasons, the weighting functions level off outside this range, unless the specified loop gain profile changes slope for gains above 60 dB or below –60 dB. Because poles of W[S] or W[T]
close to s = 0 or s = Inf might lead to poor numeric conditioning of the systune optimization problem, it is not recommended to specify loop shapes with very low-frequency or very high-frequency
To obtain W[S] and W[T], use:
[WS,WT] = getWeights(Req,Ts)
where Req is the tuning goal, and Ts is the sample time at which you are tuning (Ts = 0 for continuous time). For more information about the effects of the weighting functions on numeric stability,
see Visualize Tuning Goals.
Version History
Introduced in R2016a
R2016a: Functionality moved from Robust Control Toolbox
Prior to R2016a, this functionality required a Robust Control Toolbox™ license. | {"url":"https://la.mathworks.com/help/control/ref/tuninggoal.loopshape.html","timestamp":"2024-11-10T15:22:58Z","content_type":"text/html","content_length":"143217","record_id":"<urn:uuid:bb301a49-b661-4379-abb7-bd7f695cecff>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00777.warc.gz"} |
seminars - Green's functions and Well-posedness of Compressible Navier-Stokes equation
A class of decompositions of Green's functions for the compressible Navier-Stokes equations
linearized around a constant state is introduced. The singular structures of the Green's functions
are developed as essential devices for using the nonlinearity directly to convert the
second-order quasi-linear PDE into a system of zeroth-order integral equations with regular
integral kernels. This system of integral equations admits a wider class of functions, such as BV solutions.
We have shown global existence and well-posedness of the compressible Navier-Stokes
equations for an isentropic gas with gas constant $\gamma \in (0,e)$ in Lagrangian
coordinates, for the class of BV functions that are pointwise $L^\infty$ around a constant state; the
underlying pointwise structure of the solutions is also constructed. It is shown, moreover, that for this class
of BV solutions the solution is at most a piecewise $C^2$ solution even though the initial data
is piecewise C^infty. | {"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&page=62&sort_index=speaker&order_type=desc&l=en&document_srl=787894","timestamp":"2024-11-10T11:43:59Z","content_type":"text/html","content_length":"46127","record_id":"<urn:uuid:0d98ab4b-23bb-4bad-927f-a920abbc8edf>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00038.warc.gz"} |
subroutine spptri (UPLO, N, AP, INFO)
Function/Subroutine Documentation
subroutine spptri ( character UPLO,
                    integer N,
                    real, dimension( * ) AP,
                    integer INFO )
SPPTRI computes the inverse of a real symmetric positive definite
matrix A using the Cholesky factorization A = U**T*U or A = L*L**T
computed by SPPTRF.
Parameters:
[in] UPLO
        UPLO is CHARACTER*1
        = 'U': Upper triangular factor is stored in AP;
        = 'L': Lower triangular factor is stored in AP.
[in] N
        N is INTEGER
        The order of the matrix A. N >= 0.
[in,out] AP
        AP is REAL array, dimension (N*(N+1)/2)
        On entry, the triangular factor U or L from the Cholesky
        factorization A = U**T*U or A = L*L**T, packed columnwise as
        a linear array. The j-th column of U or L is stored in the
        array AP as follows:
        if UPLO = 'U', AP(i + (j-1)*j/2) = U(i,j) for 1<=i<=j;
        if UPLO = 'L', AP(i + (j-1)*(2n-j)/2) = L(i,j) for j<=i<=n.
        On exit, the upper or lower triangle of the (symmetric)
        inverse of A, overwriting the input factor U or L.
[out] INFO
        INFO is INTEGER
        = 0: successful exit
        < 0: if INFO = -i, the i-th argument had an illegal value
        > 0: if INFO = i, the (i,i) element of the factor U or L is
             zero, and the inverse could not be computed.
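As a usage sketch (not part of the reference page), the following program factors a small packed symmetric positive definite matrix with SPPTRF and then inverts it in place with SPPTRI; the 2-by-2 example matrix is illustrative only:
      PROGRAM PPDEMO
*     Invert A = [4 2; 2 3], stored in upper packed form:
*     AP = (A(1,1), A(1,2), A(2,2)).
      REAL AP(3)
      INTEGER INFO
      AP(1) = 4.0
      AP(2) = 2.0
      AP(3) = 3.0
      CALL SPPTRF('U', 2, AP, INFO)
      IF (INFO.EQ.0) CALL SPPTRI('U', 2, AP, INFO)
*     On success, AP now holds the packed upper triangle of inv(A).
      PRINT *, 'INFO =', INFO, ' AP =', AP
      END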
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
November 2011
Definition at line 94 of file spptri.f. | {"url":"https://netlib.org/lapack/explore-html-3.4.2/d2/dcc/spptri_8f.html","timestamp":"2024-11-09T09:26:29Z","content_type":"application/xhtml+xml","content_length":"10949","record_id":"<urn:uuid:1e61a400-31ee-4a30-8d02-6032bd257291>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00138.warc.gz"} |
Theoretical Economics 16 (2021), 317–357
A general analysis of boundedly rational learning in social networks
Manuel Mueller-Frank, Claudia Neri
We analyze boundedly rational learning in social networks within binary action environments. We establish how learning outcomes depend on the environment (i.e., informational structure, utility
function), the axioms imposed on the updating behavior, and the network structure. In particular, we provide a normative foundation for Quasi-Bayesian updating, where a Quasi-Bayesian agent treats
others' actions as if they were based only on their private signal. Quasi-Bayesian updating induces learning (i.e., convergence to the optimal action for every agent in every connected network) only
in highly asymmetric environments. In all other environments learning fails in networks with a diameter larger than four. Finally, we consider a richer class of updating behavior that allows for
non-stationarity and differential treatment of neighbors' actions depending on their position in the network. We show that within this class there exist updating systems which induce learning for
most networks.
Keywords: Social networks, naive inference, information aggregation, bounded rationality, agreement
JEL classification: D83, D85
Full Text: | {"url":"https://econtheory.org/ojs/index.php/te/article/viewArticle/20210317/0","timestamp":"2024-11-11T00:30:54Z","content_type":"text/html","content_length":"4499","record_id":"<urn:uuid:85862e2f-6143-47fd-b130-10c7d5bf64b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00255.warc.gz"} |
How many Cans in a 2 liter
Ever wondered how many cans you can squeeze into a 2-liter bottle? It’s like trying to solve a puzzle – the mystery of fitting round pegs into square holes. But fear not, we’re here to crack this
fizzy conundrum wide open! Picture this: you’ve got those soda cans lined up, eyeing that empty 2-liter bottle with curiosity. Are you ready for the surprise number of aluminum soldiers that will
march right into that bottle? Let’s dive in and unravel this mind-boggling question together!
So, buckle up and get ready to uncover the magic behind cramming those cans into a single container. The answer might just leave you astonished! Stay tuned as we spill all the secrets on how many
cans can call a 2-liter bottle home.
Understanding Volume
Liquid Measurements
Understanding the relationship between different units is crucial. A common question that arises is, “How many cans are in a 2-liter bottle?” To answer this, we need to consider the standard size of
a can. Typically, a soda can holds 12 fluid ounces. Knowing this, we can calculate how many cans are in a 2-liter bottle.
Converting liters to ounces allows us to make this comparison easily. Since there are approximately 33.8 fluid ounces in a liter, a 2-liter bottle contains around 67.6 fluid ounces of liquid. Dividing
the total number of ounces by the size of one can (12 ounces) shows that a 2-liter bottle holds the equivalent of about 5.6 cans.
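A quick sketch of that arithmetic in Python, using the constants from the paragraph above:
OZ_PER_LITER = 33.8            # approximate fluid ounces per liter
CAN_OZ = 12                    # standard US soda can
bottle_oz = 2 * OZ_PER_LITER   # 67.6 fl oz in a 2-liter bottle
print(bottle_oz / CAN_OZ)      # ~5.63 cans per bottle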
Conversion Basics
Understanding conversion basics is essential when dealing with volume measurements like liters and fluid ounces. Converting from liters to smaller units like cups or milliliters requires knowing the
equivalent values for each unit. For instance, one liter equals approximately 4.2 cups, or exactly 1,000 milliliters.
Knowing these conversions helps when you need to switch between different units quickly and accurately without confusion or errors. If you’re trying to figure out how much liquid fits into various
containers or how many smaller units make up larger ones, mastering basic conversion principles simplifies these tasks significantly.
The Standard Can
Capacity Overview
If you're wondering how many cans are in a 2-liter bottle: a standard can typically holds around 12 fluid ounces, while a 2-liter bottle contains approximately 67.6 fluid ounces of liquid. By
dividing the total volume of the bottle by the capacity of one can, you would find that there are about five and a half cans in a 2-liter bottle.
When considering parties or gatherings where beverages are served from large containers like bottles or pitchers, understanding how many cans in a 2 liter could help estimate how much drink is needed
per person. For instance, if each guest drinks two cans during an event and you have ten guests attending, knowing that there are around five and a half cans in each 2-liter bottle would guide your
beverage purchasing decisions accurately.
The dimensions of standard soda or beer cans play an essential role when determining how many cans in a 2 liter container. A typical can measures approximately 4.83 inches high with a diameter of
around 2.13 inches at its widest point. On the other hand, for most brands, the height of a standard two-liter soda bottle is roughly between twelve to thirteen inches with diameters ranging from
three to four inches.
Understanding these measurements allows individuals to calculate not only how many cans fit into different-sized containers but also how they should arrange items within their storage spaces
effectively based on size constraints.
While discussing how many cans fit into various containers like two-liter bottles provides valuable insights for planning events or organizing storage spaces efficiently; it’s crucial to note that
variations exist among different types of beverage containers and can sizes globally. For example:
• In some countries, canned beverages might come in smaller sizes such as eight ounces instead of twelve.
• Specialty beers may be sold in larger pint-sized (16-ounce) aluminum cans compared to traditional twelve-ounce soda or beer cans found commonly.
Calculating Conversions
Liters to Cans
Converting liters to cans involves understanding the volume capacity of each container. A standard 2-liter bottle holds around 67.6 fluid ounces, or about eight and a half 8-ounce cups of liquid. A
typical can, on the other hand, has a volume of 12 fluid ounces. To determine how many cans are in a 2-liter bottle, you need to divide the total volume of the bottle by the volume of one can.
Calculating this conversion is quite straightforward. By dividing 67.6 (volume of a 2-liter bottle) by 12 (volume of one can), you get an approximate answer that reveals there are about 5.63 cans in
a 2-liter bottle.
Math Involved
The math equation used for this calculation is simple division: dividing the total volume in liters by the individual volume per can gives you an estimate on how many cans fit into that larger
container size. For instance, if we consider filling up containers with water at home – say you have ten liters and each glass holds two liters; using division will show that five glasses would be
needed to hold all ten liters.
To put it simply:
• Divide total liters by individual container size
• The result shows how many smaller containers fit into one larger container
• This method works for various conversions involving different units such as gallons and pints too
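The same division generalizes to any pair of containers; here is a tiny Python helper (the volumes are example values only):
def units_per_container(container_volume, unit_volume):
    """How many smaller units fit, by volume, in a larger container."""
    return container_volume / unit_volume

print(units_per_container(10, 2))     # ten liters into 2-liter glasses -> 5.0
print(units_per_container(67.6, 12))  # a 2-liter bottle into 12-oz cans -> ~5.63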
Beverage Industry Standards
Common Sizes
In the beverage industry, a common question that arises is “how many cans in a 2-liter bottle?” A standard 2-liter bottle can hold approximately 67.6 fluid ounces, which is equivalent to about 8.5
cups of liquid. When considering the conversion from a 2-liter bottle to cans, it’s important to note that the size of a standard soda can is typically 12 fluid ounces.
When converting the contents of a 2-liter bottle into cans, you would need around 5.63 cans to match the volume held by one 2-liter bottle. This calculation is based on dividing the total fluid
ounces in a 2-liter bottle by the size of one can (67.6 divided by 12). Therefore, if you were transferring soda from a large container like a 2-liter bottle into individual servings in cans for an
event or party, you could estimate needing roughly six cans for each full 2-liter bottle.
Packaging Norms
Packaging norms and standards are crucial in ensuring consistency and convenience for consumers across different brands and products within the beverage industry. By adhering to these norms,
manufacturers maintain uniformity in packaging sizes such as bottles and cans, making it easier for customers to understand product quantities and make informed purchasing decisions.
One benefit of following packaging norms is that it helps establish familiarity among consumers regarding product sizes across various brands. For example, when someone buys a standard-sized soft
drink at any grocery store or convenience shop, they expect certain packaging volumes like seeing sodas sold commonly in either 12-ounce cans or liter bottles, creating predictability and ease of
selection for shoppers.
Practical Applications
Event Planning
When organizing an event, knowing how many cans in a 2-liter bottle can be crucial. For instance, if you’re hosting a party and planning to serve soda from 2-liter bottles, understanding the
conversion helps estimate how many individual servings can be prepared. This knowledge ensures you have enough beverages for all your guests without overbuying or running out prematurely.
Calculating the number of cans in a 2-liter bottle is simple. Since one liter equals approximately 33.8 fluid ounces and a standard soda can is typically 12 ounces, a 2-liter bottle would contain
around 67.6 fluid ounces. By dividing this by the size of a standard can (12 ounces), you’ll find that there are roughly 5-6 cans worth of soda in each 2-liter bottle.
Stocking Up
When stocking up on supplies for your household or business, understanding how many cans in a 2-liter container can aid in efficient purchasing decisions. For example, when buying soft drinks or
other beverages for storage purposes, recognizing the equivalent number of individual cans per larger container helps optimize storage space while ensuring you have an ample supply on hand.
Environmental Impact
Recycling Facts
Recycling is crucial for reducing waste and preserving the environment. Aluminum cans are one of the most recyclable items, with nearly 75% of all aluminum ever produced still in use today. Recycling
just one bottle can save enough energy to power a laptop for over 25 hours.
Recycling cans and bottles not only conserves resources but also helps reduce greenhouse gas emissions. For instance, recycling aluminum cans saves up to 95% of the energy needed to make new cans
from raw materials. Similarly, recycling a single plastic bottle can conserve enough energy to light a 60-watt bulb for six hours.
• Pros:
□ Reduces waste
□ Conserves energy
□ Lowers greenhouse gas emissions
• Cons:
□ Contamination issues
□ Limited recycling infrastructure
Sustainability Efforts
Companies are increasingly focusing on sustainability efforts by using recycled materials in their packaging. For example, some beverage companies utilize recycled plastic from 2-liter bottles to
create new bottles or other products like clothing or furniture. This circular approach reduces the need for virgin materials and minimizes environmental impact.
Moreover, initiatives such as deposit return systems encourage consumers to return empty containers for recycling in exchange for refunds or discounts on future purchases. These programs incentivize
individuals to participate actively in the recycling process and contribute towards creating a more sustainable future.
1. Collect empty cans and bottles.
2. Sort them based on material type (e.g., aluminum vs plastic).
3. Take them to designated recycling centers.
4. Participate in local deposit return programs if available.
Cost Implications
Buying in Bulk
When considering how many cans in a 2 liter can affect your expenses, buying in bulk is a smart choice. Purchasing soda in larger quantities often results in lower costs per unit. For instance, if
you buy a pack of twelve 12 oz cans or one 2-liter bottle, the price difference might surprise you.
Buying soda in bulk not only saves money but also reduces packaging waste. Instead of multiple individual containers, you have one large container to recycle or reuse. This can contribute positively
to the environment by minimizing plastic and aluminum waste.
• Pros:
□ Cost savings
□ Reduced packaging waste
• Cons:
□ More storage space required
□ Potential for soda going flat before consumption
Price Comparisons
Comparing prices between purchasing individual cans versus a 2-liter bottle is essential when analyzing the cost implications. Retailers often offer discounts on larger sizes due to economies of
scale and reduced packaging costs associated with bigger containers. By doing some quick math at the store, you can determine which option provides better value for your money.
For example, if a single can costs $1 and a 2-liter bottle is priced at $3, it’s evident that opting for the larger size offers more quantity at a lower price per ounce. However, always consider
factors like shelf life and how quickly you consume soda before making your decision.
• Key Information:
□ Compare prices per unit
□ Factor in storage capacity
Consumer Choices
Brand Preferences
Consumers often consider brand preferences when purchasing beverages. Some people are loyal to specific brands, trusting their quality and taste consistency. For example, a soda enthusiast might
prefer Coca-Cola over Pepsi due to personal preference or brand loyalty.
On the other hand, some consumers are open to trying different brands based on promotions or recommendations from friends and family. They may switch between brands depending on availability or
pricing. This flexibility allows them to explore various options in the market.
• Pros:
□ Loyalty to trusted brands
□ Consistency in taste and quality
• Cons:
□ Limited exploration of new products
□ Potential higher costs for premium brands
Brand preferences can also be influenced by marketing strategies such as advertising campaigns, endorsements by celebrities, or social media presence. Companies invest heavily in promoting their
products to attract and retain customers.
Flavor Varieties
Beverage companies offer a wide range of options to cater to diverse consumer preferences. For instance, popular soda flavors include cola, lemon-lime, orange, root beer, and cherry among others.
Each flavor appeals differently based on individual tastes.
Moreover, companies often introduce limited-edition flavors or seasonal variations to create excitement among consumers and boost sales. These unique offerings provide an opportunity for customers to
try something new and share their experiences with others.
• Key Information:
□ Diverse flavor options available
□ Seasonal variations add variety
• Examples:
□ Coca-Cola’s holiday-themed flavors
□ Mountain Dew’s experimental flavor releases
Future Trends
Packaging Innovations
Packaging innovations in the beverage industry have led to advancements like smaller can sizes that offer convenience and portability. These innovative packaging solutions cater to changing consumer
preferences, providing options beyond traditional bottle sizes. For instance, consumers may wonder how many cans are equivalent to a 2-liter bottle of soda when opting for smaller portions.
One significant trend is the introduction of multipack options containing several small cans or bottles that add up to the volume of a 2-liter container. This allows consumers to enjoy their favorite
drinks without committing to a large quantity all at once. Manufacturers are focusing on sustainable packaging materials such as aluminum cans, which are easily recyclable.
Consumer Demand Shifts
Consumer demand shifts towards more customizable and convenient packaging formats have influenced product offerings in the beverage market. Brands now provide various sizing options based on consumer
needs and preferences. For example, some individuals prefer single-serve cans over larger containers due to portion control or on-the-go consumption.
Moreover, with an increasing emphasis on health and wellness trends, consumers are seeking reduced portion sizes as part of their lifestyle choices. This shift has prompted companies to diversify
their product lines by introducing different pack sizes ranging from mini-cans to larger bottles while ensuring nutritional information is readily available for informed decision-making.
Final Remarks
You’ve now uncovered the secrets behind the number of cans in a 2-liter bottle, delving into volume conversions, industry standards, and even the environmental and cost impacts. Understanding these
aspects can empower you to make informed consumer choices that align with your values and needs. As you navigate future trends in the beverage industry, keep in mind the implications of your choices
on both the environment and your wallet.
Take charge of your decisions by considering not just the convenience but also the broader effects of your consumption habits. Your choices matter, influencing not only what’s in your fridge but also
the world around you. Stay curious, stay informed, and continue exploring the impact of seemingly small choices on a larger scale.
Frequently Asked Questions
How does understanding volume relate to the number of cans in a 2-liter container?
Understanding volume helps grasp how much space liquid occupies. In this case, knowing that a standard can is around 355ml allows us to calculate approximately five and a half cans fitting into a
2-liter container.
Why are beverage industry standards important when determining the number of cans in a 2-liter bottle?
Beverage industry standards ensure consistency in packaging sizes. By adhering to these norms, we can reliably estimate that roughly five and a half standard-sized cans' worth of liquid fits in a 2-liter bottle.
What practical applications involve knowing how many cans fit into a 2-liter container?
This knowledge is handy for inventory management or event planning where bulk beverages are involved. Understanding the conversion from liters to individual units like cans aids in efficient resource
allocation and cost estimation.
How does environmental impact tie into the discussion of the number of cans in a 2-liter bottle?
Reducing packaging waste is crucial for environmental sustainability. Knowing how many smaller cans can replace larger bottles encourages eco-friendly choices by opting for less material usage and
potentially more recyclable containers.
What cost implications exist regarding the quantity of canned drinks inside a 2-liter bottle?
Considering pricing differences between individual canned drinks and multi-packs or large bottles, calculating how many single servings are equivalent to one larger container helps consumers make
informed decisions based on value for money.
Leave a Comment | {"url":"https://vendingproservice.com/how-many-cans-in-a-2-liter/","timestamp":"2024-11-10T04:51:22Z","content_type":"text/html","content_length":"131623","record_id":"<urn:uuid:02cc142e-3dac-4e16-a359-46207aadb975>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00546.warc.gz"} |
My First Year in Python: What Have I Learned?
Today, or yesterday, I celebrate a year of work in Python. Probably, nowadays and in the near future, it will be my main working tool. Every working day starts with running PyCharm and coding in
What have I learned in the first year?
1) New mathematical models.
Within one year, I have learned and developed solutions using the following algorithms/models:
• Empirical Mode Decomposition;
• Neural Network: Dense and LSTM;
• Boosts: XGBoost, LightGBM, CatBoost;
• Prophet.
Some models are well developed, like Neural Network; some are still in their first draft versions, like EMD and Prophet. My boostings, now, are in the state of semi-products.
2) Kaggle.
I opened up Kaggle. I had registered there five years earlier but had not opened it again until fall 2019. In late October, I found the electric power, water, and steam consumption forecast
competition for 1,500 buildings all over the world. I made my first and very immature solution and got an idea of the problem. But what impressed me most was the winners' solutions, which they openly
and kindly shared with the rest of the world. Going through their solutions, I have learned modern approaches and techniques in solving a problem of such a scale. Kaggle is unique; it lets us study
best practice, discuss ideas and thoughts and, most importantly, share Python modeling code.
3) High demand for Python data scientists.
I find myself in higher demand as a power analyst every time I mention Python. What I have learned: everyone wants mathematics in Python. Very easily, I organized my first Python workshop after 10 months of
practice. Currently, together with the Education Center, we are planning a new one with the same subject “The Power Price and Consumption Forecast Using Neural Network in Python”. This time, it will
last for two days. It is worth noting that 90% of participants in the Kaggle power consumption forecast competition used Python in their solutions.
What is the core difference between Python and MATLAB, R?
1) Level of “development.”
I have been working in MATLAB for 12 years, R for six months, and Python for one year, and I once made an application in Java (IDE Eclipse). From this experience, I have concluded the following:
• MATLAB is a pure data science tool. You take an input, calculate the result, and put the result in your scientific paper. That’s it. The main advantage of MATLAB is the ability to rerun
calculations from the middle: you calculate for hours, then save workspace in a single .mat file. You can proceed from where you stopped at a later date. MATLAB is the only tool mentioned that
effortlessly allows such a trick. I do not expect a great future for MATLAB because its license costs a lot and algorithms are closed for users.
• R is the first step from pure science to software development. In R, we use libraries and, still, R contains some sort of workspace (analogous to MATLAB). Jumping from MATLAB to R is easy
  because the coding logic is close. What I do not like about R is the debugger — the worst debugger I've ever worked with, cumbersome and opaque. I hope RStudio has since improved that
  important part of their application.
• Python is the next step from math to software development. On one hand, this is a high-level object-oriented language, but it also allows simple and clear math coding à la R and MATLAB. You are
  still able to import data from a text file in two lines (see the sketch after this list), which never works in a strict object-oriented language like Java. This coding-logic flexibility allows the
  smooth integration of Python mathematics into IT infrastructure. On the other hand, Python contains countless libraries with different types of mathematical models.
• Java is the complete opposite of MATLAB and comprises a pure cross-platform software development. I do not believe a lot of math models are being implemented in Java today.
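For example, the two-line text-file import mentioned above might look like this (the file name is hypothetical):
import pandas as pd
data = pd.read_csv("measurements.txt")   # one import, one load — that's it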
2) Complete openness and investment from Google, Microsoft, Facebook, Yandex, etc.
You should know that libraries in Python are being developed and openly published by IT monsters. A few examples I have been working with are LightGBM (Microsoft), CatBoost (Yandex), and Prophet (Facebook) — all mentioned above.
I guess that the rapid development of mathematics for both big- and small-data problems is the result of the openness of mathematical models. Monsters like Microsoft and Google understand clearly that
the most efficient way to solve the mathematical problems that people and industries face today is to collaborate with the entire world: share ideas, codes, and best practices openly. | {"url":"https://www.mbureau.energy/blog/my-first-year-python-what-have-i-learned","timestamp":"2024-11-05T04:20:27Z","content_type":"text/html","content_length":"31267","record_id":"<urn:uuid:a8d089d7-a750-451d-a775-5aa6952785c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00820.warc.gz"} |
Astronomy Formulas
PERIODICITY FORMULAS:
Sidereal Orbit (365.25636042 + 1.1 x 10^-7 TE) days
Tropical Year (365.24219878 - 6.14 x 10^-6 TE) days
Eclipse Year (346.620031 + 3.2 x 10^-5 TE) days
Anomalistic Year (365.25964134 + 3.04 x 10^-6 TE) days
Sidereal Lunar Orbit (27.3216610 - 2.0 x 10^-7 T) days
Lunar Mean Daily Sidereal Motion (13.1763582975 - 1.0224 x 10^-8 T)°
Lunar Synodical Period (29.5305992 - 2.0 x 10^-7 T) days
Centennial General Precession in Longitude (1.396291666... + 0.0006180555...T)°
Given TE = Julian centuries from day 0.5, 1900 ET
Given T = Tropical centuries from 1900.0 N
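As an illustration, the periodicity formulas above can be evaluated directly in code. A short Python sketch, with TE computed from JD 2,415,020.0 per the time formulas below:
def temporal_epoch(jd):
    """Julian centuries (TE) from JD 2,415,020.0, i.e. 0.5 January 1900 ET."""
    return (jd - 2415020.0) / 36525.0

def tropical_year_days(te):
    return 365.24219878 - 6.14e-6 * te

te = temporal_epoch(2451545.0)          # J2000.0 -> TE = 1.0
print(te, tropical_year_days(te))       # 1.0, ~365.24219264 days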
Download the Epoch Calc v2012 Excel spreadsheet
to calculate and view updated formulas. You just enter the date.
Since 1997, when this page was published, many astronomy formulas have been refined.
Sidereal Orbit is a revolution relative to a fixed celestial position.
Sidereal Noon is the instant of transit of mean equinox relative to a fixed meridian position.
Epoch (FE) of Sidereal Time is the instant 12 hours, 0 days, 1900 years A.D., with hours in mean sidereal time.
Ephemeris Time is the actual count of solar days from a fixed meridian.
Tropical Year is the period from equinox to equinox.
Eclipse Year is the period between successive crossings of the node shared by the earth's and the lunar orbit planes.
Temporal Unit is 36,525 mean solar days since Jan. 0.5, 1900, UT.
Greenwich Mean Sidereal Time at 0.0 hours UT = 12 hours + aFMS
Universal Time has replaced Mean Solar Time due to a recognition of the non-uniform rotation rate of the earth.
Lunar Synodic Period (S9) is the period of time from one full or new moon to another, that is, the time between consecutive alignments of the sun, earth, and moon on a plane perpendicular to the plane of solar revolution.
Julian Date (JD) The Julian Date is the Julian day number for the preceding noon plus the fraction of the day since that instant. A Julian Date begins at 12h 0m 0s and is composed of 86400 seconds.
It is recommended that JD be specified as SI seconds in Terrestrial Time (TT) where the length of day is 86,400 SI seconds.
Julian Day Number (JDN) is the solar day number assigned in a continuous count of days beginning with zero assigned to Greenwich mean noon on 1 January 4713 BC (Julian proleptic calendar -4712).
Precession (PR) is the retrograde rotation of the earth's axis relative to fixed celestial reference.
Annual Parallax is the viewpoint difference due to the change in the earth's position relative to the sun. For the nearest star the angle is about 0.000222°.
Annual Aberration is the angular shift in apparent position resulting from the orbital velocity of the earth, i.e., from viewing aboard a moving platform.
Diurnal Parallax is the viewpoint difference due to the rotation of the earth. The amount varies with the latitude of the observer.
Diurnal Aberration is the result of observing from a spinning position on the surface of the earth. The velocity of the observer causes an apparent shift, up to a maximum correction of about 0.0008333° at the equator.
Atmospheric Refraction is the bending of light rays by the earth's atmosphere.
aFMS Fictitious Mean Solar position
DMS Day, Mean Sidereal
d, h, m, s, day, hour, minute, second
ES Ephemeris Second
ET Ephemeris Time
FE Fundamental Epoch
GT Greenwich Mean Sidereal Time
JC Julian Century
JD Julian Day
L° Longitude of the Mean Sun
R° Period of Sidereal Rotation
T Tropical Centuries from 1900.0 N
TE Temporal Epoch
TU Temporal Solar Based Unit
UT Universal Time
TIME FORMULAS:
aFMS 0.776919398148 d + 8640184.628 s TU + 0.0929 s TU^2
DMS 86,400.s
DMS / P° 0.999999902907 - 5.9 x 10^-11 TE
DMS / P° (1.000000097093 + 5.9 x 10^-11 TE) ^-1
ES Tropical Year 1900 / 31,556,925.9747
FE 12h 0d 1900 A.D. (hours in mean sidereal time)
FE geometric mean solar longitude : mean equinox @ 279.6966777...°
GT 12h + aFMS
GT 0.279057325232 d + 8640184.8138 s T'U + 0.0929 s T'U^2
JC 36,525 days ephemeris time
L° 279.69667777...° + 36,001.2914583° TE + 0.0003025° TE^2
Mean Solar Day : Mean Sidereal Time = 1.002737909265 + 5.89 x 10^-11 TU
Mean Solar Day : Mean Sidereal Time = (0.997269566414 - 5.86 x 10^-11 TU)^-1
TE One JC from 0.d5, 1900 (JD 2,415,020.)
TU 36,525 mean solar days from 12h, Jan. 0, 1900 UT.
T'U 36,525 mean solar days from 12h, Jan. 0, 2000 UT1.
Top of Page | {"url":"http://jqjacobs.net/astro/astrofor.html","timestamp":"2024-11-11T13:27:00Z","content_type":"text/html","content_length":"20581","record_id":"<urn:uuid:4ce8e597-2e56-498e-9d84-9d690a802ed0>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00879.warc.gz"} |
tangent
1. in immediate physical contact; touching.
Synonyms: meeting
1. touching at a single point, as a tangent in relation to a curve or surface.
2. in contact along a single line or element, as a plane with a cylinder.
1. Geometry. a line or a plane that touches a curve or a surface at a point so that it is closer to the curve in the vicinity of the point than any other line or plane drawn through the point.
1. (in a right triangle) the ratio of the side opposite a given angle to the side adjacent to the angle.
2. Also called tan. (of an angle) a trigonometric function equal to the ratio of the ordinate of the end point of the arc to the abscissa of this end point, the origin being at the center of the circle on
   which the arc lies and the initial point of the arc being on the x-axis. Abbreviations: tan, tg, tgn
3. (originally) a straight line perpendicular to the radius of a circle at one end of an arc and extending from this point to the produced radius which cuts off the arc at its other end.
4. the upright metal blade, fastened on the inner end of a clavichord key, that rises and strikes the string when the outer end of the key is depressed.
1. a geometric line, curve, plane, or curved surface that touches another curve or surface at one point but does not intersect it
2. (of an angle) a trigonometric function that in a right-angled triangle is the ratio of the length of the opposite side to that of the adjacent side; the ratio of sine to cosine tan
3. the straight part on a survey line between curves
4. music a part of the action of a clavichord consisting of a small piece of metal that strikes the string to produce a note
5. on a tangent or at a tangent
on a completely different or divergent course, esp of thought
1. of or involving a tangent
2. touching at a single point
1. A line, curve, or surface touching but not intersecting another.
2. The ratio of the length of the side opposite an acute angle in a right triangle to the side adjacent to the angle. The tangent of an angle is equal to the sine of the angle divided by the cosine
of the angle.
3. The ratio of the ordinate to the abscissa of the endpoint of an arc of a unit circle centered at the origin of a Cartesian coordinate system, the arc being of length x and measured
counterclockwise from the point (1, 0) if x is positive or clockwise if x is negative.
4. A function of a number x, equal to the tangent of an angle whose measure in radians is equal to x.
Other Words From
• quasi-tangent adjective
Word History and Origins
Origin of tangent^1
First recorded in 1585–90; from Latin tangent-, stem of tangēns “touching” (present participle of tangere “to touch”), in phrase līnea tangēns “touching line”
Word History and Origins
Origin of tangent^1
C16: from Latin phrase līnea tangēns the touching line, from tangere to touch
Idioms and Phrases
1. off on / at a tangent, digressing suddenly from one course of action or thought and turning to another:
The speaker flew off on a tangent.
More idioms and phrases containing tangent
see on a tangent .
Example Sentences
The option to create new threads from a text channel is one of the newer features added to Discord in recent months, and it can be really helpful when conversations keep going off on tangents.
The third-party cookie still stands at a crucial intersection between digital marketing, SEO, paid media, web design, and several business tangents.
Between where there were two possible directions and zero possible directions, there was one — where the line was tangent to the circle.
At no time were you allowed to venture into the wall, though you were permitted to be tangent to the wall or on a grid point along a wall.
To remove as much area as possible, each of those cuts would have been tangent to the circle while making a 45 degree angle with the sides of the square.
Then, the next morning, he did that radio show and went on his notorious tangent about Chuck Lorre.
Not every tangent Jacobson follows is particularly illuminating, as he is the first to admit.
At last the climbing trail dipped into a level tangent just below the top of the mountain.
Mr. Benjamin had been in the neighborhood, but, hearing that the enemy were in Madison, had gone off at a tangent.
The mere thought of the hospital sent her mind flying off at a tangent.
The last wipe should be a quick stroke coming off of joint on a tangent.
Two guns were for a time in the hands of the Boers, who, I believe, removed the tangent sights.
Definitions and idiom definitions from Dictionary.com Unabridged, based on the Random House Unabridged Dictionary, © Random House, Inc. 2023
Idioms from The American Heritage® Idioms Dictionary copyright © 2002, 2001, 1995 by Houghton Mifflin Harcourt Publishing Company. Published by Houghton Mifflin Harcourt Publishing Company. | {"url":"https://www.dictionary.com/browse/tangent?s=t","timestamp":"2024-11-08T22:19:23Z","content_type":"text/html","content_length":"238561","record_id":"<urn:uuid:38580d4e-f3b7-4f1a-b162-925c8dc16cbe>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00150.warc.gz"} |
840 divided by 14 - long division 840/14 in simplest form
Divide 10 by 2. Write the remainder after subtracting the bottom number from the top number. Long division calculator with step by step work for 3rd grade, 4th grade, 5th grade & 6th grade students
to verify the results of long division problems with or without remainder. The long division rules are explained in 12 steps for a case in which the dividend is a 3-digit number, while the divisor
is a 2-digit one. 1st step: establish the dividend (the number to be divided) and the divisor (the number y we often refer to in sentences like: divide the dividend x by the divisor y).
and click "Calculate" to make the calculation. Divide Two Numbers - powered by WebMath.
7x12=84. 8x12=96. 9x12=108. 10x12=120. 11x12=132. 12x12=144.
Next, we will subtract the result from the previous step from the fourth digit of the dividend (238 - 0 = 238) and write that answer below. What is 840 divided by 12?
Instead of saying 72 divided by 12 equals 6, you could just use the division symbol, which is a slash, as we did above. Also note that all answers in our division calculations are rounded to three decimals if necessary.
12th step: Subtract the result of the 11th step from the number above it; this is the step where you get the remainder and the quotient. In case you want to perform this calculation quicker than by hand, you can use our long division calculator.
Instead of saying 96 divided by 12 equals 8, you could just use the division symbol, which is a slash, as we did above. Also note that all answers in our division calculations are rounded to three decimals if necessary.
Simple and best practice solution for the 8400 = 12x equation. Check how easy it is, and learn it for the future. Our solution is simple and easy to understand, so don't hesitate to use it as a solution
of your homework. 840 / 70 = 12 840 / 84 = 10 840 / 105 = 8 840 / 120 = 7 840 / 140 = 6 840 / 168 = 5 840 / 210 = 4 840 / 280 = 3 840 / 420 = 2 840 / 840 = 1 What is 841 divisible by?
If you enter 72 divided by 12 into a calculator, you will get: 6. The answer to 72 divided by 12 can also be written as a mixed fraction as follows: 6 0/12. Note that the numerator in the fraction above is the remainder and the denominator is the
divisor. How to calculate 72 divided by 13 using long division. Simple and best practice solution for 8400=12x equation.
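In code, each of these checks is a one-liner; a Python sketch:
q, r = divmod(840, 12)   # long-division quotient and remainder -> 70, 0
print(q, r)
print(round(72 / 12, 3))              # 6.0
print(8400 / 12)                      # solves 8400 = 12x -> x = 700.0
print([d for d in range(1, 841) if 840 % d == 0])   # all divisors of 840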
840 divided by 4 = 210. Step 12: Next, we will subtract the result from the previous step from the fourth digit of the dividend (238 - 0 = 238) and write that answer below. What is 840 divided by
12? 70. What is 840 divided by 9? 93.333.
In case you want to perform this calculation quicker than by hand you can use our long division calculator. 09 Apr, 2015. Instead of saying 96 divided by 12 equals 8, you could just use the division
symbol, which is a slash, as we did above. Also note that all answers in our division calculations are rounded to three decimals if necessary.
Let's look at the math. To find out what 8,000 divided by 12 is, it can be helpful to take away … Solve your math problems using our free math solver with step-by-step solutions.
Fill in your two fractions below (the numerator above the scoreline and the denominator below the scoreline) and choose if you want to add, subtract, multiply or divide fractions and click
"Calculate" to make the calculation. Divide Two Numbers - powered by WebMath. This page will show you a complete "long division" solution for the division of two numbers. Similarly, if a number is
being divided by 9, add each of the digits to each other until you are left with one number (e.g., 1164 becomes 12 which in turn becomes 3), which is the remainder. Lastly, you can multiply the
decimal of the quotient by the divisor to get the remainder. Divided By What Equals Calculator.
If you enter 84 divided by 12 into a calculator, you will get: 7. The answer to 84 divided by 12 can also be written as a mixed fraction as follows: 7 0/12. Note that the numerator in
the fraction above is the remainder and the denominator is the divisor. | {"url":"https://investerarpengaratie.netlify.app/92007/14048","timestamp":"2024-11-04T17:33:13Z","content_type":"text/html","content_length":"15986","record_id":"<urn:uuid:cfe563ae-e20c-4ec0-a6a0-ef4b080ac6d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00408.warc.gz"} |
Introduction to readsdr
Model: SIR
The SIR model is an epidemiological model that computes the theoretical number of people infected with a contagious illness in a closed population over time. The SIR models the flows of people
between three states:
Susceptible (S)
number of individuals who are not infected but could become infected.
Infected (I)
number of individuals who have the disease and can transmit it to the susceptibles.
Recovered (R)
number of individuals who have recovered from the disease and are immune from getting it again.
The model assumes a time scale short enough that births and deaths can be neglected.
The SIR model is used where individuals infect each other directly. Contact between people is also modeled to be random.
The rate that people become infected is proportional to the number of people who are infected, and the number of people who are susceptible. If there are lots of people infected, the chances of a
susceptible coming into contact with someone who is infected is high. Likewise, if there are very few people who are susceptible, the chances of a susceptible coming into contact with an infected is
lower (since most of the contact would be between either infected or recovered). In mathematical notation, the model is described by the following equations:
\[\frac{dS}{dt} = -\beta S I\]
\[\frac{dI}{dt} = \beta S I - \frac{I}{rd}\]
\[\frac{dR}{dt} = \frac{I}{rd}\]
\[\beta = \frac{e}{n}\]
\[e = c \times i\]
\(n\) denotes the population size; \(\beta\) the per capita rate at which two specific individuals come into effective contact per unit time; \(e\) the effective contacts per infected individual; \(c
\) the contacts per person per unit time; and \(i\) the infectivity.
There exists several ways to implement this model. On one side of the spectrum, we find specialised software such as Vensim and Stella that offer friendly graphical user interfaces to seamlessly
design complex models, emphasising on a systems perspective. Figure 1 presents the implementation of the SIR model in Stella.
On the other side, we can implement the model in flexible and powerful statistical environments such as R that offer innumerable tools for numerical analysis and data visualisation. Specifically,
models can be implemented with the deSolve library (see Duggan for more details). This alternative requires the user to type all model equations in computational order. For large models, this task
can be cumbersome and may lead to incorrect implementations.
In order to bridge these two approaches, the package readsdr fills the gap by automating the translation from Vensim and Stella models to R. Such a process is achieved by the function read_xmile.
filepath <- system.file("models/", "SIR.stmx", package = "readsdr")
mdl <- read_xmile(filepath, graph = TRUE)
read_xmile returns a list of three elements. The element description contains the simulation parameters and the model variables as lists.
description <- mdl$description
model_summary <- data.frame(n_stocks = length(description$levels),
n_variables = length(description$variables),
n_consts = length(description$constants))
#> n_stocks n_variables n_consts
#> 1 3 4 4
To simulate a model with deSolve, we use the function ode. This routine takes as arguments, a vector with the model’s stocks, the simulation time, the equations wrapped in a function, the model’s
constants and the integration method. Indeed, the second element from read_xmile is a list with all these inputs but the integration method. If this is the only element of interest, readsdr provides
xmile_to_deSolve. ode returns a matrix which then is converted to a data frame for convenience. readsdr offers sd_simulate to simplify this process.
deSolve_components <- mdl$deSolve_components
all.equal(deSolve_components, xmile_to_deSolve(filepath))
#> [1] TRUE
simtime <- seq(deSolve_components$sim_params$start,
               deSolve_components$sim_params$stop,
               by = deSolve_components$sim_params$dt) # stop/dt field names assumed from sim_params
output_deSolve <- ode(y = deSolve_components$stocks,
times = simtime,
func = deSolve_components$func,
parms = deSolve_components$consts,
method = "euler")
result_df <- data.frame(output_deSolve)
#> time Susceptible Infected Recovered RR e beta_var IR population
#> 1 1.000 990.0000 10.00000 0.000000 5.000000 2 0.002 19.80000 1000
#> 2 1.125 987.5250 11.85000 0.625000 5.925000 2 0.002 23.40434 1000
#> 3 1.250 984.5995 14.03492 1.365625 7.017459 2 0.002 27.63754 1000
#> 4 1.375 981.1448 16.61243 2.242807 8.306214 2 0.002 32.59839 1000
#> 5 1.500 977.0700 19.64895 3.281084 9.824476 2 0.002 38.39680 1000
#> 6 1.625 972.2704 23.22049 4.509144 11.610246 2 0.002 45.15319 1000
#> i c recoveryDelay
#> 1 0.25 8 2
#> 2 0.25 8 2
#> 3 0.25 8 2
#> 4 0.25 8 2
#> 5 0.25 8 2
#> 6 0.25 8 2
result_df2 <- sd_simulate(deSolve_components)
identical(result_df, result_df2)
#> [1] TRUE
With the results in a data frame, we can manipulate the data structure for further analysis and data visualisation. In this case, we simply produce a plot with the behaviour over time using ggplot2,
a library part of the tidyverse. See Duggan (2018) for more details.
tidy_result_df <- result_df %>%
select(time, Susceptible, Infected, Recovered) %>%
pivot_longer(-time, names_to = "Variable")
ggplot(tidy_result_df, aes(x = time, y = value)) +
geom_line(aes(group = Variable, colour = Variable)) +
theme_classic() +
theme(legend.position = "bottom")
Finally, the third element from read_xmile is a list of two data frames. These structures describe the model as a graph in terms of vertices and edges, and serve as input to the functions in igraph,
a library collection for creating and manipulating graphs and analyzing networks. | {"url":"http://rsync.jp.gentoo.org/pub/CRAN/web/packages/readsdr/vignettes/Introduction_to_readsdr.html","timestamp":"2024-11-14T13:57:19Z","content_type":"text/html","content_length":"178994","record_id":"<urn:uuid:19db0d2e-2f5c-4337-9c9f-7e52ac68dbb6>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00256.warc.gz"} |
Trading Fees | Copy of IVX
Fixed Fees
A fixed fee structure is implemented for traders based on the total open interest, whether positions are opened or closed:
Taker fixed fee: 0.03% of the notional
Maker fixed fee: 0.07% of the notional
Fixed fees may vary between assets to incentivize the Diem AMM to accept higher-risk assets. Additionally, option fees are capped at 35% of the option price, as follows:
$\text{Fixed fees} = \min\big(0.35 \times \text{initial premium price},\ \text{fixed fee} \times \text{total open interest}\big)$
Dynamic Fees
To protect the AMM from over-exposure to market volatility and directional risk, a dynamic fee system is applied during the buying and selling process: it evaluates the risk each position
introduces for liquidity providers and charges higher fees accordingly.
The main goal of the dynamic fees is to help manually rebalance delta and vega toward neutral exposure.
Vega Fee:
Vega measures the option's sensitivity to changes in implied volatility. The following Greek calculates how much the option's price is expected to change for a 1% change in the implied volatility of
the asset traded.
When buying an option, the trader is decreasing the AMM's net vega exposure
When selling an option, the trader is increasing the AMM's net vega exposure
By that:
$v_f = \begin{cases} \text{VEGA\_MAKER\_FACTOR} & \text{if } |v_1| < |v_0| \\ \text{VEGA\_TAKER\_FACTOR} & \text{if } |v_1| \geq |v_0| \end{cases}$
$\text{Vega}_{\text{fee}} = \big|\,|v_1| - |v_0|\,\big| \times v_f$
Where $|v_0|$ is the AMM Vega prior to the trade being executed and $|v_1|$ is the AMM Vega after the trade is live
Suppose the AMM net Vega $v_0 = 3.2$ and a trader wants to trade an option with a Vega of $v = 0.02$
If the user wants to sell 1 option contract, they force the AMM to buy 1 contract from them, so the AMM needs to go long 1 contract, and the new AMM net Vega is $v_1 = v_0 + v = 3.22$
Since $|v_1| > |v_0|$, the AMM will treat the position opened as a taker trade and the trader needs to pay $0.02 \times v_f$ where $v_f = \text{VEGA\_TAKER\_FACTOR}$
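A minimal Python sketch of this rule — the factor values below are placeholders, not Diem's actual parameters:
VEGA_MAKER_FACTOR = 0.0   # placeholder value
VEGA_TAKER_FACTOR = 1.0   # placeholder value

def vega_fee(v0, trade_vega):
    """v0: AMM net vega before the trade; trade_vega: vega the trade adds."""
    v1 = v0 + trade_vega
    factor = VEGA_MAKER_FACTOR if abs(v1) < abs(v0) else VEGA_TAKER_FACTOR
    return abs(abs(v1) - abs(v0)) * factor

print(vega_fee(3.2, 0.02))  # taker case from the example: 0.02 * factor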
Delta Fee:
Delta measures the option's price sensitivity to changes in the price of the underlying asset. The following Greek calculates how much the option's price is expected to change for a $1 change in the
price of the asset traded.
When buying a Call Option, the trader is decreasing the AMM's net delta exposure as the AMM acts as the seller of the call option
When buying a Put Option, the trader is increasing the AMM's net delta exposure as the AMM acts as the seller of the put option
When selling a call option, the trader is increasing the AMM's net delta exposure as the AMM acts as the buyer of the call option
When selling a put option, the trader is decreasing the AMM's net delta exposure as the AMM acts as the buyer of the put option
By that:
$\Delta_f = \begin{cases} \text{DELTA\_MAKER\_FACTOR} & \text{if } |\Delta_1| < |\Delta_0| \\ \text{DELTA\_TAKER\_FACTOR} & \text{if } |\Delta_1| \geq |\Delta_0| \end{cases}$
$\Delta_{\text{fee}} = \big|\,|\Delta_1| - |\Delta_0|\,\big| \times \Delta_f$
Where $|\Delta_0|$ is the AMM Delta before the trade being executed and $|\Delta_1|$ is the AMM Delta after the trade is live
Suppose the AMM net Delta is $\Delta_0 = 3.1$ and a trader wants to open a put option position with a Delta of $\Delta = -0.5$; the AMM is thereby exposed to $-\Delta$ and needs to increase its Delta
exposure by 0.5.
The AMM total Delta exposure will be: $\Delta_1 = \Delta_0 - \Delta = 3.6$
Since $|\Delta_1| > |\Delta_0|$, the trader is considered a taker and is charged $0.5 \times \Delta_f$ where $\Delta_f = \text{DELTA\_TAKER\_FACTOR}$
Charging dynamic fees based on maker/taker status adds a layer of stability for the AMM and aligns incentives between liquidity providers and traders. Normally the vega and delta maker
factors are close to zero to incentivize makers
factor is close to zero to incentivize makers to trade into the system and balance the AMM into net zero exposure. | {"url":"https://docs.ivx.fi/diem-amm/trading-fees","timestamp":"2024-11-13T22:19:46Z","content_type":"text/html","content_length":"390188","record_id":"<urn:uuid:8577b44b-c14f-4a3c-a5ca-002722048524>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00637.warc.gz"} |
How to Use Absolute Function With Percentage In Oracle?
To use the absolute function with a percentage in Oracle, simply pass the percentage value as an argument to the ABS function. The function returns the absolute value of the percentage, ignoring its
sign, and you can then use that result in your calculations or comparisons as needed. ABS is a mathematical function in Oracle that returns the absolute value of a number.
How to calculate percentage of a value in Oracle?
To calculate the percentage of a value in Oracle, you can use the following formula:
Percentage = (Value / Total) * 100
Here's an example using a sample table called "sales_data" with columns "product_id" and "sales_amount" where you want to calculate the percentage of sales for a specific product:
SELECT product_id, (sales_amount / SUM(sales_amount) OVER ()) * 100 AS percentage FROM sales_data;
This query will calculate the percentage of sales for each product_id in the "sales_data" table.
How to use ABS function to get absolute value in Oracle?
To use the ABS function in Oracle to get the absolute value of a number, you would simply pass the number as an argument to the ABS function. Here is an example:
SELECT ABS(-5) FROM dual;
This query would return the absolute value of -5, which is 5.
What is the behavior of ABS function with percentage in Oracle for negative values?
In Oracle, the ABS function returns the absolute value of a number. When using the ABS function with percentage values, it will still return the absolute value of the percentage, regardless of
whether the number is positive or negative.
For example, if you have a percentage value of -50%, the ABS function will return 50%. Similarly, if you have a percentage value of 75%, the ABS function will return 75%. It essentially removes the
negative sign from the number while keeping the value the same.
Overall, the ABS function behaves the same way with percentage values as it does with regular numbers in Oracle.
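For instance, a quick illustration of this behavior, using the values from the example above:
SELECT ABS(-50) AS abs_negative_pct,
       ABS(75)  AS abs_positive_pct
FROM dual;
-- returns 50 and 75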
How to use ABS function with percentage in Oracle for complex calculations?
To use the ABS function with percentages in Oracle for complex calculations, you can follow these steps:
1. Calculate the percentage as a decimal value by dividing the percentage by 100. For example, if you want to calculate the absolute value of 25% of a number, you would calculate it as 0.25.
2. Use the ABS function to calculate the absolute value of the percentage. For example, if you have a number stored in a column called "value" and you want to calculate the absolute value of 25% of
that number, you can use the following SQL query:
SELECT ABS(value * 0.25) AS absolute_value_percentage
FROM table_name;
This query multiplies the value in the "value" column by 0.25 to get 25% of the number, and then calculates the absolute value of that percentage using the ABS function.
3. You can also include conditions, joins, and other complex calculations in your query to perform more complex calculations with percentages using the ABS function. Just make sure to correctly
   define the logic and use the ABS function at the appropriate place in your query.
By following these steps, you can use the ABS function with percentages in Oracle for complex calculations. | {"url":"https://finblog.mooo.com/blog/how-to-use-absolute-function-with-percentage-in","timestamp":"2024-11-08T12:29:37Z","content_type":"text/html","content_length":"137299","record_id":"<urn:uuid:212a89f3-626c-4f7e-a990-56780f797dcf>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00267.warc.gz"} |